Commit e375fb61 authored by Iustin Pop's avatar Iustin Pop

Fix a type issue and bad logic in cluster verification

Commit 2e04d454 introduced the new offline state for the instance
state, but being a big monolithic patch it sneaked in something that
doesn't make sense.

The checks for extra instances (either wrongly up or just unknown) are
done purely on a name basis, not on objects, so the types there are
wrong. Furthermore, they have no relation to the admin state of the
instance, so we just drop the entire if block. We keep the increment
of the offline instance count, but move it to a different loop over
the instances.
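The type issue described above can be illustrated with a minimal, hypothetical sketch (the names below are illustrative, not Ganeti's actual code): when a loop yields instance *names* rather than instance objects, attribute access such as `inst.admin_state` fails at runtime.

```python
# Hypothetical sketch of the type mismatch: the loop variable is an
# instance *name* (a str), not an instance object, so attribute
# access like .admin_state raises AttributeError.
ADMINST_OFFLINE = "offline"  # illustrative constant, not Ganeti's

non_primary_inst = {"inst1.example.com"}  # a set of names, not objects

errors = []
for inst in non_primary_inst:
    try:
        if inst.admin_state == ADMINST_OFFLINE:  # wrong: str has no admin_state
            pass
    except AttributeError as err:
        errors.append(str(err))

print(errors)
```

Running this records one `AttributeError` per name, which is why the name-based check could never have worked as written.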
Signed-off-by: Iustin Pop <>
Reviewed-by: Michael Hanselmann <>
parent 4b5a9365
@@ -3152,6 +3152,8 @@ class LUClusterVerifyGroup(LogicalUnit, _VerifyErrors):
     for instance in self.my_inst_names:
       inst_config = self.my_inst_info[instance]
+      if inst_config.admin_state == constants.ADMINST_OFFLINE:
+        i_offline += 1
       for nname in inst_config.all_nodes:
         if nname not in node_image:
@@ -3291,12 +3293,6 @@ class LUClusterVerifyGroup(LogicalUnit, _VerifyErrors):
       non_primary_inst = set(nimg.instances).difference(nimg.pinst)
       for inst in non_primary_inst:
-        # FIXME: investigate best way to handle offline insts
-        if inst.admin_state == constants.ADMINST_OFFLINE:
-          if verbose:
-            feedback_fn("* Skipping offline instance %s" %
-          i_offline += 1
         test = inst in self.all_inst_info
         _ErrorIf(test, constants.CV_EINSTANCEWRONGNODE, inst,
                  "instance should not run on node %s",
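The net effect of the two hunks can be sketched as follows (a simplified, assumed data model mirroring the first hunk, not Ganeti's actual classes): the offline count is taken in the existing per-instance loop, where real config objects are available.

```python
# Simplified sketch: count offline instances while looping over the
# instance names, looking up the config object for each name.
# ADMINST_OFFLINE, InstConfig, and the dict layout are assumptions
# made for illustration only.
ADMINST_OFFLINE = "offline"

class InstConfig:
    def __init__(self, admin_state):
        self.admin_state = admin_state

my_inst_names = ["inst1", "inst2", "inst3"]
my_inst_info = {
    "inst1": InstConfig(ADMINST_OFFLINE),
    "inst2": InstConfig("up"),
    "inst3": InstConfig(ADMINST_OFFLINE),
}

i_offline = 0
for instance in my_inst_names:
    inst_config = my_inst_info[instance]
    if inst_config.admin_state == ADMINST_OFFLINE:
        i_offline += 1

print(i_offline)  # 2
```

Because the lookup goes through the per-name config mapping, the admin-state check operates on objects here, avoiding the string/object confusion the dropped block had.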