Commit a3de2ae7 authored by Iustin Pop

cluster verify and instance disks on offline nodes



Currently, cluster-verify says:

- ERROR: instance instance14: couldn't retrieve status for disk/0 on node3: node offline
- ERROR: instance instance14: instance lives on offline node(s) node3
- ERROR: instance instance15: couldn't retrieve status for disk/0 on node3: node offline
- ERROR: instance instance15: instance lives on offline node(s) node3

The per-disk errors are redundant: the "lives on offline node(s)" message is all
we need to understand the cluster situation.

The patch suppresses the redundant per-disk errors for offline (or ghost) nodes,
and also corrects a very old non-idiomatic conditional.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Stephen Shirley <diamond@google.com>
parent f7661f6b
@@ -1554,7 +1554,7 @@ class LUClusterVerify(LogicalUnit):
                  node_current)
       for node, n_img in node_image.items():
-        if (not node == node_current):
+        if node != node_current:
           test = instance in n_img.instances
           _ErrorIf(test, self.EINSTANCEWRONGNODE, instance,
                    "instance should not run on node %s", node)
@@ -1564,7 +1564,11 @@ class LUClusterVerify(LogicalUnit):
                   for idx, (success, status) in enumerate(disks)]
       for nname, success, bdev_status, idx in diskdata:
-        _ErrorIf(instanceconfig.admin_up and not success,
+        # the 'ghost node' construction in Exec() ensures that we have a
+        # node here
+        snode = node_image[nname]
+        bad_snode = snode.ghost or snode.offline
+        _ErrorIf(instanceconfig.admin_up and not success and not bad_snode,
                  self.EINSTANCEFAULTYDISK, instance,
                  "couldn't retrieve status for disk/%s on %s: %s",
                  idx, nname, bdev_status)
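The effect of the patched condition can be sketched in isolation. This is a minimal, self-contained approximation of the disk-status loop, not Ganeti's actual `LUClusterVerify` code: `NodeImage`, `disk_errors`, and the sample data are hypothetical stand-ins. It shows that a faulty-disk error is now reported only when the node holding the disk is neither a ghost nor offline.

```python
from collections import namedtuple

# Hypothetical stand-in for Ganeti's per-node image object; the real one
# carries many more fields.
NodeImage = namedtuple("NodeImage", ["ghost", "offline"])

def disk_errors(admin_up, diskdata, node_image):
    """Collect faulty-disk messages, skipping ghost/offline nodes.

    diskdata is a list of (node_name, success, bdev_status, disk_index)
    tuples, mirroring the loop in the diff above.
    """
    errors = []
    for nname, success, bdev_status, idx in diskdata:
        snode = node_image[nname]
        bad_snode = snode.ghost or snode.offline  # the patched check
        if admin_up and not success and not bad_snode:
            errors.append("couldn't retrieve status for disk/%s on %s: %s"
                          % (idx, nname, bdev_status))
    return errors

# node3 is offline, so its disk failure is no longer reported; node1 is
# healthy as a node, so its genuine disk failure still is.
node_image = {"node3": NodeImage(ghost=False, offline=True),
              "node1": NodeImage(ghost=False, offline=False)}
diskdata = [("node3", False, "node offline", 0),
            ("node1", False, "degraded", 0)]
print(disk_errors(True, diskdata, node_image))
```

With the sample data above, only the node1 message survives; before the patch, the offline node3 would have produced a second, redundant error alongside the "lives on offline node(s)" report.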