Commit 306a2aa9 authored by Petr Pudlak

Merge branch 'stable-2.12' into stable-2.13

* stable-2.12
  Update design doc with solution for Issue 1094
  Prevent multiple communication nics for one instance
  Remove outdated reference to ganeti-masterd
  Update ganeti-luxid man page
  Add a man page for ganeti-wconfd
  Make htools tolerate missing "dtotal" and "dfree" on luxi
  Get QuickCheck 2.7 compatibility
  TestCommon: Fix QuickCheck import warnings
  Full QuickCheck 2.7 compatibility
  Add a CPP macro for checking the version of QuickCheck
  QuickCheck 2.7 compatibility

* stable-2.11
  Downgrade log-message for rereading job
  Downgrade log-level for successful requests

Conflicts:
	test/hs/Test/Ganeti/TestCommon.hs
Resolution:
	test/hs/Test/Ganeti/TestCommon.hs: keep additions from both versions
Signed-off-by: Petr Pudlak <pudlak@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
parents fce1b94e 9fa84388
......@@ -1282,6 +1282,13 @@ hs-pkg-versions:
-DMONAD_CONTROL_MAJOR=\1 -DMONAD_CONTROL_MINOR=\2 -DMONAD_CONTROL_REV=\3/'\
-e 's/^\s*//' \
>> $@
ghc-pkg list --simple-output QuickCheck \
| sed -r -e '$$!d' \
-e 's/^QuickCheck-([0-9]+(\.[0-9]+)*)/\1 0 0 0/' \
-e 's/\./ /g' -e 's/([0-9]+) *([0-9]+) *([0-9]+) .*/\
-DQUICKCHECK_MAJOR=\1 -DQUICKCHECK_MINOR=\2 -DQUICKCHECK_REV=\3/'\
-e 's/^\s*//' \
>> $@
HS_MAKEFILE_GHC_SRCS = $(HS_SRC_PROGS:%=%.hs)
if WANT_HSTESTS
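The ``QUICKCHECK_MAJOR``/``QUICKCHECK_MINOR``/``QUICKCHECK_REV`` macros
generated above let Haskell sources compiled with CPP branch on the installed
QuickCheck version. A minimal, hypothetical shim built on them (the real
adaptation lives in the test sources, not in this form) could look like:

  {-# LANGUAGE CPP #-}

  import Test.QuickCheck

  #if QUICKCHECK_MAJOR == 2 && QUICKCHECK_MINOR < 7
  -- Older QuickCheck has no 'counterexample'; alias the old name so
  -- that test code can use 'counterexample' uniformly everywhere.
  counterexample :: Testable prop => String -> prop -> Property
  counterexample = printTestCase
  #endif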
......@@ -1564,6 +1571,7 @@ man_MANS = \
man/ganeti-extstorage-interface.7 \
man/ganeti-rapi.8 \
man/ganeti-watcher.8 \
man/ganeti-wconfd.8 \
man/ganeti.7 \
man/gnt-backup.8 \
man/gnt-cluster.8 \
......
......@@ -388,10 +388,12 @@ in the design.
- Instead of using the same certificate for all nodes as both server
and client certificate, we generate a common server certificate (and
the corresponding private key) for all nodes and a different client
certificate (and the corresponding private key) for each node. All
those certificates will be self-signed for now. The client
certificates will use the node UUID as serial number to ensure
uniqueness within the cluster.
certificate (and the corresponding private key) for each node. The
server certificate will be self-signed. The client certificate will
be signed by the server certificate. The client certificates will
use the node UUID as serial number to ensure uniqueness within the
cluster. They will use the host's hostname as the certificate
common name (CN).
- In addition, we store a mapping of
(node UUID, client certificate digest) in the cluster's configuration
and ssconf for hosts that are master or master candidate.
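For illustration, the uniqueness property this mapping must satisfy can be
sketched as follows (``NodeUuid``, ``CertDigest`` and ``digestsUnique`` are
hypothetical names, not Ganeti code)::

  import Data.List (nub)
  import qualified Data.Map as M

  type NodeUuid   = String  -- illustrative aliases, not Ganeti types
  type CertDigest = String

  -- No two nodes may present the same client certificate digest,
  -- i.e. no node may use another node's certificate.
  digestsUnique :: M.Map NodeUuid CertDigest -> Bool
  digestsUnique m = let ds = M.elems m
                    in length ds == length (nub ds)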
......@@ -450,9 +452,21 @@ Drawbacks of this design:
- Even though this proposal is an improvement over the previous
situation in Ganeti, it still does not use the full power of SSL. For
further improvements, see Section "Related and future work".
- Signing the client certificates with the server certificate will
increase the complexity of renew-crypto, as a renewal of the
server certificate requires the renewal (and signing) of all client
certificates as well.
Alternative proposals:
- The initial version of this document described a setup where the
client certificates were also self-signed. This led to a serious
problem (Issue 1094), which would only have been solvable by
distributing all client certificates to all nodes and loading them
as trusted CAs. As this would have resulted in having to restart
noded on all nodes every time a node is added, removed, demoted
or promoted, this was not feasible and we switched to client
certificates which are signed by the server certificate.
- Instead of generating a client certificate per node, one could think
of just generating two different client certificates, one for normal
nodes and one for master candidates. Noded could then just check if
......@@ -535,6 +549,8 @@ Cluster verify will be extended by the following checks:
- Whether no node tries to use the certificate of another node. In
particular, it is important to check that no normal node tries to
use the certificate of a master candidate.
- Whether there are still self-signed client certificates in use (from
a pre 2.12.4 Ganeti version).
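The last of these checks amounts to detecting self-signed certificates.
A minimal sketch, assuming a simplified certificate record (``Cert`` is a
stand-in, not a Ganeti type)::

  -- A certificate is self-signed when its issuer equals its subject.
  data Cert = Cert { certIssuer :: String, certSubject :: String }

  isSelfSigned :: Cert -> Bool
  isSelfSigned c = certIssuer c == certSubject c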
Crypto renewal
......@@ -554,6 +570,18 @@ due to inconsistent updating after a demotion or offlining), the user can use
this option to renew the client certificates and update the candidate
certificate map.
Note that renewing the server certificate requires all client certificates
to be renewed and signed by the new server certificate, because
otherwise their signatures cannot be verified by the server, which by
then only holds the new server certificate.
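The required ordering can be summarised with a small runnable sketch; every
name below is an illustrative stub, not a Ganeti function::

  newtype ServerCert = ServerCert String

  renewServerCert :: IO ServerCert
  renewServerCert = return (ServerCert "new self-signed server cert")

  clusterNodes :: IO [String]
  clusterNodes = return ["node1.example.com", "node2.example.com"]

  renewClientCert :: ServerCert -> String -> IO ()
  renewClientCert _ n = putStrLn ("re-signing client certificate of " ++ n)

  -- Server certificate first, then every client certificate, so that
  -- all client signatures verify against the new server certificate.
  main :: IO ()
  main = do
    server <- renewServerCert
    nodes  <- clusterNodes
    mapM_ (renewClientCert server) nodes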
Because a different design was in place in Ganeti 2.12.4 and earlier
versions, we have to ensure that renew-crypto works when coming from pre-2.12
versions as well as from 2.12.1-4. Users that were hit by Issue 1094 will be
encouraged to run renew-crypto at least once after switching to 2.12.5.
Those who have not encountered the bug will still be gently nagged by
gnt-cluster verify.
Further considerations
----------------------
......@@ -614,26 +642,19 @@ As a trade-off with respect to complexity and implementation effort, we did not
implement them yet (as of version 2.11) but describe them here for
future reference.
- All SSL certificates that Ganeti uses so far are self-signed. It would
increase the security if they were signed by a common CA. There is
already a design doc for a Ganeti CA which was suggested in a
different context (related to import/export). This would also be a
benefit for the RPC calls. See design doc :doc:`design-impexp2` for
more information. Implementing a CA is rather complex, because it
would mean also to support renewing the CA certificate and providing
and supporting infrastructure to revoke compromised certificates.
- The server certificate is currently self-signed and the client certificates
are signed by the server certificate. It would increase the security if they
were signed by a common CA. There is already a design doc for a Ganeti CA
which was suggested in a different context (related to import/export).
This would also be a benefit for the RPC calls. See design doc
:doc:`design-impexp2` for more information. Implementing a CA is rather
complex, because it would also mean supporting renewal of the CA
certificate and providing and maintaining infrastructure to revoke
compromised certificates.
- An extension of the previous suggestion would be to even enable the
system administrator to use an external CA. Especially in bigger
setups, where an SSL infrastructure already exists, it would be useful
if Ganeti could simply be integrated with it, rather than forcing the
user to use the Ganeti CA.
- A lighter version of using a CA would be to use the server certificate
to sign the client certificate instead of using self-signed
certificates for both. The problem here is that this would make
renewing the server certificate rather complicated, because all client
certificates would need to be re-signed and redistributed as well,
which leads to interesting chicken-and-egg problems when this is done
via RPC calls.
- Ganeti RPC calls are currently done without checking if the hostname
of the node matches the common name of the certificate. This
might be a desirable feature, but would increase the effort when a
......
......@@ -587,14 +587,14 @@ test on that, by default 500 of those big instances are generated for each
property. In many cases, it would be sufficient to only generate those 500
instances once and test all properties on those. To do this, create a property
that uses ``conjoin`` to combine several properties into one. Use
``printTestCase`` to add expressive error messages. For example::
``counterexample`` to add expressive error messages. For example::
prop_myMegaProp :: MyBigType -> Property
prop_myMegaProp b =
conjoin
[ printTestCase
[ counterexample
("Something failed horribly here: " ++ show b) (subProperty1 b)
, printTestCase
, counterexample
("Something else failed horribly here: " ++ show b)
(subProperty2 b)
, -- more properties here ...
......
......@@ -442,15 +442,28 @@ class LUInstanceCreate(LogicalUnit):
raise errors.OpPrereqError("Cannot do IP address check without a name"
" check", errors.ECODE_INVAL)
# instance name verification
if self.op.name_check:
self.hostname = _CheckHostnameSane(self, self.op.instance_name)
self.op.instance_name = self.hostname.name
# used in CheckPrereq for ip ping check
self.check_ip = self.hostname.ip
else:
self.check_ip = None
# add NIC for instance communication
if self.op.instance_communication:
nic_name = _ComputeInstanceCommunicationNIC(self.op.instance_name)
self.op.nics.append({constants.INIC_NAME: nic_name,
constants.INIC_MAC: constants.VALUE_GENERATE,
constants.INIC_IP: constants.NIC_IP_POOL,
constants.INIC_NETWORK:
self.cfg.GetInstanceCommunicationNetwork()})
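# The for/else below prevents a second communication NIC from being
# added: the else branch runs only when the loop finished without a
# break, i.e. when no existing NIC already carries that name.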
for nic in self.op.nics:
if nic.get(constants.INIC_NAME, None) == nic_name:
break
else:
self.op.nics.append({constants.INIC_NAME: nic_name,
constants.INIC_MAC: constants.VALUE_GENERATE,
constants.INIC_IP: constants.NIC_IP_POOL,
constants.INIC_NETWORK:
self.cfg.GetInstanceCommunicationNetwork()})
# timeouts for unsafe OS installs
if self.op.helper_startup_timeout is None:
......@@ -470,15 +483,6 @@ class LUInstanceCreate(LogicalUnit):
self._CheckDiskArguments()
assert self.op.disk_template is not None
# instance name verification
if self.op.name_check:
self.hostname = _CheckHostnameSane(self, self.op.instance_name)
self.op.instance_name = self.hostname.name
# used in CheckPrereq for ip ping check
self.check_ip = self.hostname.ip
else:
self.check_ip = None
# file storage checks
if (self.op.file_driver and
not self.op.file_driver in constants.FILE_DRIVER):
......
......@@ -21,7 +21,7 @@ commands), **gnt-debug**\(8) (debug commands).
Ganeti daemons: **ganeti-watcher**\(8) (automatic instance restarter),
**ganeti-cleaner**\(8) (job queue cleaner), **ganeti-noded**\(8) (node
daemon), **ganeti-masterd**\(8) (master daemon), **ganeti-rapi**\(8)
daemon), **ganeti-rapi**\(8)
(remote API daemon).
Ganeti htools: **htools**\(1) (generic binary), **hbal**\(1) (cluster
......
......@@ -16,7 +16,9 @@ DESCRIPTION
-----------
**ganeti-luxid** is a daemon used to answer queries related to the
configuration and the current live state of a Ganeti cluster.
configuration and the current live state of a Ganeti cluster. Additionally,
it is the authoritative daemon for the Ganeti job queue. Jobs can be
submitted via this daemon, and it schedules and starts them.
For testing purposes, you can give the ``-f`` option and the
program won't detach from the running terminal.
......@@ -28,9 +30,7 @@ passing in the ``--syslog`` option.
The **ganeti-luxid** daemon listens on a Unix socket
(``@LOCALSTATEDIR@/run/ganeti/socket/ganeti-query``) on which it exports
a ``Luxi`` endpoint, serving query operations only. Commands and tools
use this socket if the build-time option for split queries has been
enabled.
a ``Luxi`` endpoint supporting the full set of commands.
The daemon will refuse to start if the user and group do not match the
one defined at build time; this behaviour can be overridden by the
......@@ -43,13 +43,8 @@ allow failover in a two-node cluster, this can be overridden by the
option has to be given as well.
ROLE
~~~~
The role of the query daemon is to answer queries about the (live)
cluster state without going through the master daemon. Only queries
which don't require locks can be handled by the query daemon, which
might lead to slightly outdated results in some cases.
Only queries which don't require locks can be handled by the luxi daemon,
which might lead to slightly outdated results in some cases.
The config is reloaded from disk automatically when it changes, with a
rate limit of once per second.
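As a rough illustration (not ganeti-luxid's actual implementation), a
once-per-second rate-limited reload loop could look like::

  import Control.Concurrent (threadDelay)
  import Control.Monad (forever, when)
  import Data.IORef (newIORef, readIORef, writeIORef)

  -- Poll a change indicator at most once per second; reload only
  -- when it has moved since the last check.
  watchConfig :: Eq v => IO v -> IO () -> IO ()
  watchConfig getStamp reload = do
    seen <- newIORef =<< getStamp
    forever $ do
      threadDelay 1000000        -- rate limit: one check per second
      now  <- getStamp
      prev <- readIORef seen
      when (now /= prev) $ reload >> writeIORef seen now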
......
ganeti-wconfd(8) Ganeti | Version @GANETI_VERSION@
==================================================
Name
----
ganeti-wconfd - Ganeti configuration writing daemon
Synopsis
--------
**ganeti-wconfd** [-f] [-d] [--syslog] [--no-user-checks]
[--no-voting --yes-do-it] [--force-node]
DESCRIPTION
-----------
**ganeti-wconfd** is the daemon that has authoritative knowledge
about the configuration and is the only entity that can accept
changes to it. All jobs that need to modify the configuration will
do so by sending appropriate requests to this daemon.
For testing purposes, you can give the ``-f`` option and the
program won't detach from the running terminal.
Debug-level messages can be activated by giving the ``-d`` option.
Logging to syslog, rather than its own log file, can be enabled by
passing in the ``--syslog`` option.
The **ganeti-wconfd** daemon listens on a Unix socket
(``@LOCALSTATEDIR@/run/ganeti/socket/ganeti-query``) on which it accepts all
requests in an internal protocol format, used by Ganeti jobs.
The daemon will refuse to start if the user and group do not match the
one defined at build time; this behaviour can be overridden by the
``--no-user-checks`` option.
The daemon will refuse to start if it cannot verify that the majority
of cluster nodes believes that it is running on the master node. To
allow failover in a two-node cluster, this can be overridden by the
``--no-voting`` option. As this is dangerous, the ``--yes-do-it``
option has to be given as well. Also, if the option ``--force-node``
is given, the daemon will agree to run on a non-master node; it should not
be necessary to give this option manually, but
``gnt-cluster masterfailover`` will use it internally to start
the daemon in order to update the master-node information in the
configuration.
......@@ -238,8 +238,12 @@ parseNode ktg [ name, mtotal, mnode, mfree, dtotal, dfree
xmtotal <- lvconvert 0.0 "mtotal" mtotal
xmnode <- lvconvert 0 "mnode" mnode
xmfree <- lvconvert 0 "mfree" mfree
xdtotal <- lvconvert 0.0 "dtotal" dtotal
xdfree <- lvconvert 0 "dfree" dfree
let xdtotal = genericResult (const 0.0) id
$ lvconvert 0.0 "dtotal" dtotal
xdfree = genericResult (const 0) id
$ lvconvert 0 "dfree" dfree
-- "dtotal" and "dfree" might be missing, e.g., if sharedfile
-- is the only supported disk template
xctotal <- lvconvert 0.0 "ctotal" ctotal
xcnos <- lvconvert 0 "cnos" cnos
let node = flip Node.setCpuSpeed xcpu_speed .
......
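For reference, the defaulting pattern used above for "dtotal" and "dfree"
can be reproduced stand-alone; ``Result`` and ``genericResult`` below are
simplified stand-ins for their Ganeti.BasicTypes counterparts:

  -- Simplified stand-ins for Ganeti.BasicTypes.
  data Result a = Bad String | Ok a

  genericResult :: (String -> c) -> (a -> c) -> Result a -> c
  genericResult f _ (Bad msg) = f msg
  genericResult _ g (Ok v)    = g v

  -- Default a possibly-missing "dtotal"/"dfree" value to zero
  -- instead of failing the whole node parse.
  orZero :: Result Double -> Double
  orZero = genericResult (const 0.0) id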
......@@ -168,7 +168,7 @@ readJobStatus jWS@(JobWithStat {jStat=fstat, jJob=job}) = do
return Nothing
Just fstat' -> do
let jids = show $ fromJobId jid
logInfo $ "Rereading job " ++ jids
logDebug $ "Rereading job " ++ jids
readResult <- loadJobFromDisk qdir True jid
case readResult of
Bad s -> do
......
......@@ -400,7 +400,7 @@ logMsg handler req (Bad err) =
logMsg handler req (Ok result) = do
-- only log the first 2,000 chars of the result
logDebug $ "Result (truncated): " ++ take 2000 (J.encode result)
logInfo $ "Successfully handled " ++ hInputLogShort handler req
logDebug $ "Successfully handled " ++ hInputLogShort handler req
-- | Prepares an outgoing message.
prepareMsg
......
......@@ -146,9 +146,9 @@ prop_monad_laws :: Int -> Result Int
-> Property
prop_monad_laws a m (Fun _ k) (Fun _ h) =
conjoin
[ printTestCase "return a >>= k == k a" ((return a >>= k) ==? k a)
, printTestCase "m >>= return == m" ((m >>= return) ==? m)
, printTestCase "m >>= (\\x -> k x >>= h) == (m >>= k) >>= h)"
[ counterexample "return a >>= k == k a" ((return a >>= k) ==? k a)
, counterexample "m >>= return == m" ((m >>= return) ==? m)
, counterexample "m >>= (\\x -> k x >>= h) == (m >>= k) >>= h)"
((m >>= (\x -> k x >>= h)) ==? ((m >>= k) >>= h))
]
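The ``==?`` operator used throughout these properties is Ganeti's
annotated-equality helper from Test.Ganeti.TestCommon; it can be sketched,
roughly, on top of ``counterexample``:

  import Test.QuickCheck (Property, counterexample)

  infix 3 ==?
  -- Equality that reports both operands when the property fails.
  (==?) :: (Show a, Eq a) => a -> a -> Property
  x ==? y = counterexample
              ("Expected equality, but " ++ show x ++ " /= " ++ show y)
              (x == y)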
......@@ -159,11 +159,11 @@ prop_monad_laws a m (Fun _ k) (Fun _ h) =
-- > v >> mzero = mzero
prop_monadplus_mzero :: Result Int -> Fun Int (Result Int) -> Property
prop_monadplus_mzero v (Fun _ f) =
printTestCase "mzero >>= f = mzero" ((mzero >>= f) ==? mzero) .&&.
counterexample "mzero >>= f = mzero" ((mzero >>= f) ==? mzero) .&&.
-- FIXME: since we have "many" mzeros, we can't test for equality,
-- just that we got back a 'Bad' value; it is unclear whether this
-- means our MonadPlus instance is unsound...
printTestCase "v >> mzero = mzero" (isBad (v >> mzero))
counterexample "v >> mzero = mzero" (isBad (v >> mzero))
testSuite "BasicTypes"
[ 'prop_functor_id
......
......@@ -70,10 +70,10 @@ prop_req_sign key (NonNegative timestamp) (Positive bad_delta)
bad_timestamp = timestamp + if pm then bad_delta' else (-bad_delta')
ts_ok = Confd.Utils.parseRequest key signed good_timestamp
ts_bad = Confd.Utils.parseRequest key signed bad_timestamp
in printTestCase "Failed to parse good message"
in counterexample "Failed to parse good message"
(ts_ok ==? BasicTypes.Ok (encoded, crq)) .&&.
printTestCase ("Managed to deserialise message with bad\
\ timestamp, got " ++ show ts_bad)
counterexample ("Managed to deserialise message with bad\
\ timestamp, got " ++ show ts_bad)
(ts_bad ==? BasicTypes.Bad "Too old/too new timestamp or clock skew")
-- | Tests that a ConfdReply can be properly encoded, signed and parsed using
......@@ -105,7 +105,7 @@ prop_bad_key salt crq =
forAll (vector 20 `suchThat` (/= key_sign)) $ \key_verify ->
let signed = Confd.Utils.signMessage key_sign salt (J.encode crq)
encoded = J.encode signed
in printTestCase ("Accepted message signed with different key" ++ encoded) $
in counterexample ("Accepted message signed with different key" ++ encoded) $
(Confd.Utils.parseSignedMessage key_verify encoded
:: BasicTypes.Result (String, String, Confd.ConfdRequest)) ==?
BasicTypes.Bad "HMAC verification failed"
......
......@@ -93,8 +93,8 @@ prop_Load_Instance name mem dsk vcpus status
sbal, pnode, pnode, tags]
in case inst of
Bad msg -> failTest $ "Failed to load instance: " ++ msg
Ok (_, i) -> printTestCase "Mismatch in some field while\
\ loading the instance" $
Ok (_, i) -> counterexample "Mismatch in some field while\
\ loading the instance" $
Instance.name i == name &&
Instance.vcpus i == vcpus &&
Instance.mem i == mem &&
......@@ -111,7 +111,7 @@ prop_Load_InstanceFail ktn fields =
length fields < 10 || length fields > 12 ==>
case Text.loadInst nl fields of
Ok _ -> failTest "Managed to load instance from invalid data"
Bad msg -> printTestCase ("Unrecognised error message: " ++ msg) $
Bad msg -> counterexample ("Unrecognised error message: " ++ msg) $
"Invalid/incomplete instance data: '" `isPrefixOf` msg
where nl = Map.fromList ktn
......@@ -218,7 +218,7 @@ prop_CreateSerialise =
[] []
of
Bad msg -> failTest $ "Failed to allocate: " ++ msg
Ok (_, _, _, [], _) -> printTestCase
Ok (_, _, _, [], _) -> counterexample
"Failed to allocate: no allocations" False
Ok (_, nl', il', _, _) ->
let cdata = Loader.ClusterData defGroupList nl' il' ctags
......
......@@ -163,9 +163,9 @@ prop_Alloc_sane inst =
Just (xnl, xi, _, cv) ->
let il' = Container.add (Instance.idx xi) xi il
tbl = Cluster.Table xnl il' cv []
in printTestCase "Cluster can be balanced after allocation"
in counterexample "Cluster can be balanced after allocation"
(not (canBalance tbl True True False)) .&&.
printTestCase "Solution score differs from actual node list"
counterexample "Solution score differs from actual node list"
(abs (Cluster.compCV xnl - cv) < 1e-12)
-- | Checks that on a 2-5 node cluster, we can allocate a random
......@@ -194,7 +194,7 @@ prop_CanTieredAlloc =
all_nodes fn = sum $ map fn (Container.elems nl)
all_res fn = sum $ map fn [ai_alloc, ai_pool, ai_unav]
in conjoin
[ printTestCase "No instances allocated" $ not (null ixes)
[ counterexample "No instances allocated" $ not (null ixes)
, IntMap.size il' ==? length ixes
, length ixes ==? length cstats
, all_res Types.allocInfoVCpus ==? all_nodes Node.hiCpu
......@@ -261,7 +261,7 @@ check_EvacMode grp inst result =
v -> failmsg ("invalid solution: " ++ show v) False
]
where failmsg :: String -> Bool -> Property
failmsg msg = printTestCase ("Failed to evacuate: " ++ msg)
failmsg msg = counterexample ("Failed to evacuate: " ++ msg)
idx = Instance.idx inst
-- | Checks that on a 4-8 node cluster, once we allocate an instance,
......@@ -327,7 +327,7 @@ prop_AllocBalance =
let ynl = Container.add (Node.idx hnode) hnode xnl
cv = Cluster.compCV ynl
tbl = Cluster.Table ynl il' cv []
in printTestCase "Failed to rebalance" $
in counterexample "Failed to rebalance" $
canBalance tbl True True False
-- | Checks consistency.
......@@ -391,9 +391,9 @@ prop_AllocPolicy =
let rqn = Instance.requiredNodes $ Instance.diskTemplate inst
node' = Node.setPolicy ipol node
nl = makeSmallCluster node' count
in printTestCase "Allocation check:"
in counterexample "Allocation check:"
(isNothing (canAllocOn (makeSmallCluster node count) rqn inst)) .&&.
printTestCase "Policy failure check:" (isJust $ canAllocOn nl rqn inst)
counterexample "Policy failure check:" (isJust $ canAllocOn nl rqn inst)
testSuite "HTools/Cluster"
[ 'prop_Score_Zero
......
......@@ -88,7 +88,7 @@ prop_findByName =
in conjoin
[ Container.findByName nl' (Node.name target) ==? Just target
, Container.findByName nl' (Node.alias target) ==? Just target
, printTestCase "Found non-existing name"
, counterexample "Found non-existing name"
(isNothing (Container.findByName nl' othername))
]
......
......@@ -324,7 +324,7 @@ prop_rMem inst =
in case (node_add_ab, node_add_nb, node_del_ab, node_del_nb) of
(Ok a_ab, Ok a_nb,
Ok d_ab, Ok d_nb) ->
printTestCase "Consistency checks failed" $
counterexample "Consistency checks failed" $
Node.rMem a_ab > orig_rmem &&
Node.rMem a_ab - orig_rmem == Instance.mem inst_ab &&
Node.rMem a_nb == orig_rmem &&
......
......@@ -83,12 +83,12 @@ instance Arbitrary Types.ISpec where
cpu_c <- arbitrary::Gen (NonNegative Int)
nic_c <- arbitrary::Gen (NonNegative Int)
su <- arbitrary::Gen (NonNegative Int)
return Types.ISpec { Types.iSpecMemorySize = fromIntegral mem_s
, Types.iSpecCpuCount = fromIntegral cpu_c
, Types.iSpecDiskSize = fromIntegral dsk_s
, Types.iSpecDiskCount = fromIntegral dsk_c
, Types.iSpecNicCount = fromIntegral nic_c
, Types.iSpecSpindleUse = fromIntegral su
return Types.ISpec { Types.iSpecMemorySize = fromEnum mem_s
, Types.iSpecCpuCount = fromEnum cpu_c
, Types.iSpecDiskSize = fromEnum dsk_s
, Types.iSpecDiskCount = fromEnum dsk_c
, Types.iSpecNicCount = fromEnum nic_c
, Types.iSpecSpindleUse = fromEnum su
}
-- | Generates an ispec bigger than the given one.
......@@ -100,12 +100,12 @@ genBiggerISpec imin = do
cpu_c <- choose (Types.iSpecCpuCount imin, maxBound)
nic_c <- choose (Types.iSpecNicCount imin, maxBound)
su <- choose (Types.iSpecSpindleUse imin, maxBound)
return Types.ISpec { Types.iSpecMemorySize = fromIntegral mem_s
, Types.iSpecCpuCount = fromIntegral cpu_c
, Types.iSpecDiskSize = fromIntegral dsk_s
, Types.iSpecDiskCount = fromIntegral dsk_c
, Types.iSpecNicCount = fromIntegral nic_c
, Types.iSpecSpindleUse = fromIntegral su
return Types.ISpec { Types.iSpecMemorySize = fromEnum mem_s
, Types.iSpecCpuCount = fromEnum cpu_c
, Types.iSpecDiskSize = fromEnum dsk_s
, Types.iSpecDiskCount = fromEnum dsk_c
, Types.iSpecNicCount = fromEnum nic_c
, Types.iSpecSpindleUse = fromEnum su
}
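The switch from ``fromIntegral`` to ``fromEnum`` reflects that QuickCheck 2.7
no longer provides the numeric instances of modifier newtypes such as
``NonNegative``, while the derived ``Enum`` instance remains (this rationale
is inferred, not stated in the commit); the pattern in isolation:

  import Test.QuickCheck (Gen, NonNegative (..), arbitrary)

  -- 'fromIntegral' needs an Integral instance, which NonNegative no
  -- longer has under QuickCheck 2.7; its Enum instance still allows
  -- 'fromEnum'. Unwrapping with 'getNonNegative' would work as well.
  genNonNegInt :: Gen Int
  genNonNegInt = do
    n <- arbitrary :: Gen (NonNegative Int)
    return (fromEnum n)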
genMinMaxISpecs :: Gen Types.MinMaxISpecs
......@@ -164,7 +164,7 @@ prop_IPolicy_serialisation = testSerialisation
prop_opToResult :: Types.OpResult Int -> Property
prop_opToResult op =
case op of
Bad _ -> printTestCase ("expected bad but got " ++ show r) $ isBad r
Bad _ -> counterexample ("expected bad but got " ++ show r) $ isBad r
Ok v -> case r of
Bad msg -> failTest ("expected Ok but got Bad " ++ msg)
Ok v' -> v ==? v'
......
......@@ -139,7 +139,7 @@ isAlmostEqual (LCList c1) (LCList c2) =
(length c1 ==? length c2) .&&.
conjoin (zipWith isAlmostEqual c1 c2)
isAlmostEqual (LCString s1) (LCString s2) = s1 ==? s2
isAlmostEqual (LCDouble d1) (LCDouble d2) = printTestCase msg $ rel <= 1e-12
isAlmostEqual (LCDouble d1) (LCDouble d2) = counterexample msg $ rel <= 1e-12
where rel = relativeError d1 d2
msg = "Relative error " ++ show rel ++ " not smaller than 1e-12\n" ++
"expected: " ++ show d2 ++ "\n but got: " ++ show d1
......@@ -166,7 +166,7 @@ prop_config :: LispConfig -> Property
prop_config conf =
case A.parseOnly lispConfigParser . pack . serializeConf $ conf of
Left msg -> failTest $ "Parsing failed: " ++ msg
Right obtained -> printTestCase "Failing almost equal check" $
Right obtained -> counterexample "Failing almost equal check" $
isAlmostEqual obtained conf
-- | Test whether a randomly generated UptimeInfo text line can be parsed.
......
......@@ -75,10 +75,11 @@ case_JobPriorityDef = do
prop_JobPriority :: Property
prop_JobPriority =
forAll (listOf1 (genQueuedOpCode `suchThat`
(not . opStatusFinalized . qoStatus))) $ \ops -> do
(not . opStatusFinalized . qoStatus)))
$ \ops -> property $ do
jid0 <- makeJobId 0
let job = QueuedJob jid0 ops justNoTs justNoTs justNoTs Nothing Nothing
calcJobPriority job ==? minimum (map qoPriority ops)
return $ calcJobPriority job ==? minimum (map qoPriority ops) :: Gen Property
-- | Tests default job status.
case_JobStatusDef :: Assertion
......@@ -102,15 +103,15 @@ prop_JobStatus =
-- computes status for a job with an added opcode after
st_post_op pop = calcJobStatus (job1 { qjOps = qjOps job1 ++ [pop] })
in conjoin
[ printTestCase "pre-success doesn't change status"
[ counterexample "pre-success doesn't change status"
(st_pre_op op_succ ==? st1)
, printTestCase "post-success doesn't change status"
, counterexample "post-success doesn't change status"
(st_post_op op_succ ==? st1)
, printTestCase "pre-error is error"
, counterexample "pre-error is error"
(st_pre_op op_err ==? JOB_STATUS_ERROR)
, printTestCase "pre-canceling is canceling"
, counterexample "pre-canceling is canceling"
(st_pre_op op_cnl ==? JOB_STATUS_CANCELING)
, printTestCase "pre-canceled is canceled"
, counterexample "pre-canceled is canceled"
(st_pre_op op_cnd ==? JOB_STATUS_CANCELED)
]
......@@ -170,10 +171,10 @@ prop_ListJobIDs = monadicIO $ do
full_dir <- extractJobIDs $ getJobIDs [tempdir]
invalid_dir <- getJobIDs [tempdir </> "no-such-dir"]
return (empty_dir, sortJobIDs full_dir, invalid_dir)
stop $ conjoin [ printTestCase "empty directory" $ e ==? []
, printTestCase "directory with valid names" $
stop $ conjoin [ counterexample "empty directory" $ e ==? []
, counterexample "directory with valid names" $
f ==? sortJobIDs jobs
, printTestCase "invalid directory" $ isBad g
, counterexample "invalid directory" $ isBad g
]
-- | Tests loading jobs from disk.
......@@ -210,7 +211,7 @@ prop_LoadJobs = monadicIO $ do
, current ==? Ganeti.BasicTypes.Ok (job, False)
, archived ==? Ganeti.BasicTypes.Ok (job, True)
, missing_current ==? noSuchJob
, printTestCase "broken job" (isBad broken)
, counterexample "broken job" (isBad broken)
]
-- | Tests computing job directories. Creates random directories,
......@@ -253,15 +254,15 @@ prop_InputOpCode meta i =
-- | Tests 'extractOpSummary'.
prop_extractOpSummary :: MetaOpCode -> Int -> Property
prop_extractOpSummary meta i =
conjoin [ printTestCase "valid opcode" $
conjoin [ counterexample "valid opcode" $
extractOpSummary (ValidOpCode meta) ==? summary
, printTestCase "invalid opcode, correct object" $
, counterexample "invalid opcode, correct object" $
extractOpSummary (InvalidOpCode jsobj) ==? summary
, printTestCase "invalid opcode, empty object" $
, counterexample "invalid opcode, empty object" $
extractOpSummary (InvalidOpCode emptyo) ==? invalid
, printTestCase "invalid opcode, object with invalid OP_ID" $
, counterexample "invalid opcode, object with invalid OP_ID" $
extractOpSummary (InvalidOpCode invobj) ==? invalid
, printTestCase "invalid opcode, not jsobject" $
, counterexample "invalid opcode, not jsobject" $
extractOpSummary (InvalidOpCode jsinval) ==? invalid
]
where summary = opSummary (metaOpCode meta)
......
......@@ -91,8 +91,9 @@ prop_arrayMaybeFromObj t xs k =
prop_arrayMaybeFromObjFail :: String -> String -> Property
prop_arrayMaybeFromObjFail t k =
case JSON.tryArrayMaybeFromObj t [] k of
BasicTypes.Ok r -> fail $
"Unexpected result, got: " ++ show (r::[Maybe Int])
BasicTypes.Ok r -> property
(fail $ "Unexpected result, got: " ++ show (r::[Maybe Int])
:: Gen Property)
BasicTypes.Bad e -> conjoin [ Data.List.isInfixOf t e ==? True
, Data.List.isInfixOf k e ==? True
]
......
......@@ -145,7 +145,7 @@ prop_LocksDisjoint =
forAll (arbitrary `suchThat` (/= a)) $ \b ->
let aExclusive = M.keysSet . M.filter (== OwnExclusive) $ listLocks a state
bAll = M.keysSet $ listLocks b state
in printTestCase
in counterexample
(show a ++ "'s exclusive lock" ++ " is not respected by " ++ show b)
(S.null $ S.intersection aExclusive bAll)
......@@ -156,7 +156,7 @@ prop_LockslistComplete =
forAll (arbitrary :: Gen TestOwner) $ \a ->
forAll ((arbitrary :: Gen (LockAllocation TestLock TestOwner))
`suchThat` (not . M.null . listLocks a)) $ \state ->
printTestCase "All owned locks must be mentioned in the all-locks list" $
counterexample "All owned locks must be mentioned in the all-locks list" $
let allLocks = listAllLocks state in
all (`elem` allLocks) (M.keys $ listLocks a state)
......@@ -165,8 +165,8 @@ prop_LockslistComplete =
prop_LocksAllOwnersSubsetLockslist :: Property
prop_LocksAllOwnersSubsetLockslist =
forAll (arbitrary :: Gen (LockAllocation TestLock TestOwner)) $ \state ->
printTestCase "The list of all active locks must contain all locks mentioned\
\ in the locks state" $
counterexample "The list of all active locks must contain all locks mentioned\
\ in the locks state" $
S.isSubsetOf (S.fromList . map fst $ listAllLocksOwners state)
(S.fromList $ listAllLocks state)
......@@ -177,7 +177,7 @@ prop_LocksAllOwnersComplete =
forAll (arbitrary :: Gen TestOwner) $ \a ->
forAll ((arbitrary :: Gen (LockAllocation TestLock TestOwner))
`suchThat` (not . M.null . listLocks a)) $ \state ->
printTestCase "Owned locks must be mentioned in list of all locks' state" $
counterexample "Owned locks must be mentioned in list of all locks' state" $
let allLocksState = listAllLocksOwners state
in flip all (M.toList $ listLocks a state) $ \(lock, ownership) ->
elem (a, ownership) . fromMaybe [] $ lookup lock allLocksState
......@@ -188,8 +188,8 @@ prop_LocksAllOwnersSound :: Property
prop_LocksAllOwnersSound =
forAll ((arbitrary :: Gen (LockAllocation TestLock TestOwner))