Release all node locks during disk replace
This patch extends commit 7ea7bcf6 by releasing all node locks in disk
replace for the early release mode. The rationale behind this is:

- LUCreateInstance already releases all node locks while waiting for
  disk synchronization, and does an instance startup later
- WaitForSync only runs (for disk template 'drbd') 'lvs' and reads
  /proc/drbd on the primary node, which should be (modulo bugs in LVM)
  safe for parallel runs

In any case, the worst I could foresee is a node having N lvs commands
run in parallel on it, while being a primary for disk storage. Based on
create instance doing this safely, and the fact that burnin with more
than two instances per node is safe, I think this can be applied.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
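For context on the second rationale point: the claim is that, for the 'drbd' template, the sync-wait only performs read-only checks on the primary node, namely running 'lvs' and reading /proc/drbd. The sketch below illustrates what such read-only polling looks like and why running several of them in parallel is harmless; it is not Ganeti's actual WaitForSync code, and the helper names (_lv_attributes, _drbd_sync_percent, wait_for_sync), the volume-group parameter, and the /proc/drbd parsing details are assumptions made purely for illustration.

    # Illustrative sketch only -- not Ganeti's implementation.
    import re
    import subprocess
    import time


    def _lv_attributes(vg_name):
        """Return the attribute string of each LV in a volume group via 'lvs'.

        'lvs' only reports LVM metadata; it does not modify state, so
        concurrent invocations on the same node are safe.
        """
        output = subprocess.check_output(
            ["lvs", "--noheadings", "-o", "lv_name,lv_attr", vg_name],
            universal_newlines=True)
        attrs = {}
        for line in output.splitlines():
            fields = line.split()
            if len(fields) == 2:
                attrs[fields[0]] = fields[1]
        return attrs


    def _drbd_sync_percent(minor):
        """Return the resync percentage for a DRBD minor, or None if in sync.

        While a resync is running, /proc/drbd shows a "sync'ed: NN.N%" line
        below the device's status line. Reading /proc/drbd is a pure read.
        """
        in_device = False
        with open("/proc/drbd") as proc:
            for line in proc:
                if re.match(r"^\s*%d:" % minor, line):
                    in_device = True
                    continue
                if in_device:
                    match = re.search(r"sync'ed:\s*([0-9.]+)%", line)
                    if match:
                        return float(match.group(1))
                    if re.match(r"^\s*\d+:", line):
                        # Reached the next device's section: no resync line.
                        break
        return None


    def wait_for_sync(minor, vg_name, poll_interval=60):
        """Poll the primary node until the DRBD device has finished resyncing."""
        while True:
            percent = _drbd_sync_percent(minor)
            if percent is None:
                return
            # Both checks are read-only, so N jobs doing this in parallel on
            # one primary node only cost some extra 'lvs' processes.
            _lv_attributes(vg_name)
            time.sleep(poll_interval)

Because neither check takes or mutates any node-level state, holding the node locks while waiting adds no safety, which is what makes the early lock release in this patch reasonable.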