Commit 4ecb94d5 authored by Iustin Pop's avatar Iustin Pop

Fix potential data-loss bug in disk wipe routines

For the 2.4 release, we only add the missing RPC calls. However, this
needs to be fixed properly, by preventing usage of mis-configured
disks.

Also add a bit more logging so that it's directly clear on which node
the wipe is being done.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
parent f5182ecb
@@ -6629,6 +6629,10 @@ def _WipeDisks(lu, instance):
   """
   node = instance.primary_node
+
+  for device in instance.disks:
+    lu.cfg.SetDiskID(device, node)
+
   logging.info("Pause sync of instance %s disks", instance.name)
   result = lu.rpc.call_blockdev_pause_resume_sync(node, instance.disks, True)
@@ -6640,7 +6644,8 @@ def _WipeDisks(lu, instance):
   try:
     for idx, device in enumerate(instance.disks):
       lu.LogInfo("* Wiping disk %d", idx)
-      logging.info("Wiping disk %d for instance %s", idx, instance.name)
+      logging.info("Wiping disk %d for instance %s, node %s",
+                   idx, instance.name, node)
       # The wipe size is MIN_WIPE_CHUNK_PERCENT % of the instance disk but
       # MAX_WIPE_CHUNK at max
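The fix above adds a `SetDiskID` pass over all instance disks before the pause-sync RPC is issued, so the remote node receives fully configured disk objects. The sketch below is not Ganeti code: the `Disk`, `Config`, and `Rpc` classes are illustrative stubs (only the method names `SetDiskID` and `call_blockdev_pause_resume_sync` are taken from the diff), meant to show why issuing a blockdev RPC against a disk whose node-specific ID was never set is the data-loss hazard the commit message describes.

```python
# Hedged sketch, not Ganeti internals: minimal stand-ins demonstrating
# the ordering the fix enforces. All classes here are hypothetical.

class Disk:
    def __init__(self, logical_id):
        self.logical_id = logical_id
        self.physical_id = None  # unset until SetDiskID runs


class Config:
    def SetDiskID(self, disk, node):
        # In Ganeti this fills in the node-specific device information;
        # here we just derive a placeholder from the logical ID.
        disk.physical_id = (node, disk.logical_id)


class Rpc:
    def call_blockdev_pause_resume_sync(self, node, disks, pause):
        # A disk whose physical_id was never set is mis-configured: the
        # remote node could act on the wrong device (potential data loss).
        for d in disks:
            if d.physical_id is None:
                raise RuntimeError("disk used without SetDiskID")
        return "paused" if pause else "resumed"


def wipe_prologue(cfg, rpc, node, disks):
    # The pattern from the diff: configure every disk for this node
    # *before* the first blockdev RPC touches them.
    for disk in disks:
        cfg.SetDiskID(disk, node)
    return rpc.call_blockdev_pause_resume_sync(node, disks, True)


disks = [Disk("vg/lv0"), Disk("vg/lv1")]
print(wipe_prologue(Config(), Rpc(), "node1.example.com", disks))  # → paused
```

Skipping the `SetDiskID` loop makes the stub RPC raise, which mirrors the failure mode the commit guards against: the call would otherwise operate on an unconfigured disk.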