Commit af797be5 authored by Michael Hanselmann

_DeclareLocksForMigration: Fix non-DRBD locking issue


When non-DRBD disks are used for an instance,
“lu.needed_locks[locking.LEVEL_NODE]” is set to “locking.ALL_SET” (which
is None). The assertion will then fail as None evaluates to False.
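The failure mode can be sketched in a few lines of plain Python (names simplified; `ALL_SET` here stands in for `locking.ALL_SET`, which Ganeti defines as `None` to mean "acquire all locks at this level"):

```python
# In Ganeti, locking.ALL_SET is None: a sentinel meaning "lock everything".
ALL_SET = None

needed_locks = {"node": ALL_SET}

# Old assertion: fails for ALL_SET, because None is falsy in Python.
try:
    assert needed_locks["node"]
    old_ok = True
except AssertionError:
    old_ok = False

# Fixed assertion: explicitly accepts the ALL_SET sentinel as well as
# any non-empty lock set.
new_ok = bool(needed_locks["node"] or needed_locks["node"] is ALL_SET)

print(old_ok, new_ok)  # False True
```

The fix keeps the intent of the assertion (node locks must have been declared) while treating the `ALL_SET` sentinel as a valid declaration rather than a missing one.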

Reported by Constantinos Venetsanopoulos.

Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
parent d4e4b2fd
@@ -7874,6 +7874,8 @@ def _DeclareLocksForMigration(lu, level):
 
   instance = lu.cfg.GetInstanceInfo(lu.op.instance_name)
 
+  # Node locks are already declared here rather than at LEVEL_NODE as we need
+  # the instance object anyway to declare the node allocation lock.
   if instance.disk_template in constants.DTS_EXT_MIRROR:
     if lu.op.target_node is None:
       lu.needed_locks[locking.LEVEL_NODE] = locking.ALL_SET
@@ -7887,7 +7889,8 @@ def _DeclareLocksForMigration(lu, level):
 
   elif level == locking.LEVEL_NODE:
     # Node locks are declared together with the node allocation lock
-    assert lu.needed_locks[locking.LEVEL_NODE]
+    assert (lu.needed_locks[locking.LEVEL_NODE] or
+            lu.needed_locks[locking.LEVEL_NODE] is locking.ALL_SET)
 
   elif level == locking.LEVEL_NODE_RES:
     # Copy node locks