  1. Feb 15, 2010
    • Release all node locks during disk replace · d5cd389c
      Iustin Pop authored
      
      This patch extends commit 7ea7bcf6 by releasing all node locks during
      disk replace in early-release mode. The rationale behind this is:
      
      - LUCreateInstance already releases all node locks while waiting for
        disk synchronization, and does an instance startup later
      - WaitForSync only runs (for disk template 'drbd') 'lvs' and reads
        /proc/drbd on the primary node, which should be (modulo bugs in LVM)
        safe to run in parallel
      
      In any case, the worst I could foresee is a node having N lvs commands
      running in parallel on it while being a primary for disk storage. Given
      that instance creation already does this safely, and that burnin with
      more than two instances per node is safe, I think this can be applied.
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
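
      As a rough illustration of the early-release flow described above, the
      following is a minimal, self-contained Python sketch; the class and
      helper names (ReplaceDisksSketch, _release_locks, _wait_for_sync) and
      the LEVEL_NODE constant are illustrative placeholders, not Ganeti's
      actual implementation.

        # Placeholder names throughout; this is not Ganeti's real code.
        LEVEL_NODE = "node"  # assumed lock-level identifier

        class ReplaceDisksSketch(object):
            def __init__(self, early_release):
                self.early_release = early_release

            def _release_locks(self, level):
                # In early-release mode, all locks at this level are dropped
                # before the long-running resync.
                print("releasing all %s locks" % level)

            def _wait_for_sync(self):
                # For the 'drbd' template this only runs 'lvs' and reads
                # /proc/drbd on the primary node, which the message argues is
                # safe to do in parallel with other jobs.
                print("waiting for disk sync on the primary node")

            def replace(self):
                if self.early_release:
                    # Mirror LUCreateInstance, which already releases all node
                    # locks while waiting for disk synchronization.
                    self._release_locks(LEVEL_NODE)
                self._wait_for_sync()

        ReplaceDisksSketch(early_release=True).replace()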
    • Unify a few re.compile calls in DRBD · 9122e60a
      Iustin Pop authored
      
      These are both cleanups and, in the case of _MassageProcData, a switch
      from a weaker RE to a stronger one: we now require "cs:" in the line,
      whereas previously any line starting with \d+: was accepted.
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
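
      As a standalone illustration of the stricter matching described above
      (requiring "cs:" rather than accepting any line starting with \d+:),
      here is a hedged Python sketch; the pattern and the parse_proc_drbd
      helper are illustrative and do not reproduce Ganeti's actual
      _MassageProcData.

        import re

        # Illustrative pattern: accept only lines carrying a "cs:" field,
        # instead of any line that merely starts with "<digits>:".
        _STATUS_LINE_RE = re.compile(r"^\s*(\d+):\s*cs:(\S+)")

        def parse_proc_drbd(text):
            """Map DRBD minor number -> connection state for matching lines."""
            states = {}
            for line in text.splitlines():
                m = _STATUS_LINE_RE.match(line)
                if m:
                    states[int(m.group(1))] = m.group(2)
            return states

        sample = (
            "version: 8.3.2 (api:88/proto:86-90)\n"
            " 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----\n"
            " 1: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r----\n"
        )
        print(parse_proc_drbd(sample))  # {0: 'Connected', 1: 'SyncSource'}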
    • Auto-enable early release for offline old nodes · 9af0fa6a
      Iustin Pop authored
      
      If the old node is offline, we won't be able to talk to it to remove
      the storage; in most cases the node is powered off or unreachable.
      
      In this case it makes no sense to delay the storage release, so we
      automatically enable early_release mode, gaining parallelism during
      node evacuation.
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
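
      A small hedged sketch of the auto-enable decision described above; the
      effective_early_release function and its parameter names are
      illustrative, not the attributes Ganeti itself uses.

        def effective_early_release(requested, old_node_offline):
            """Return the early_release value to use for the operation.

            If the old node is offline we cannot contact it to remove its
            storage anyway, so delaying the release gains nothing.
            """
            return True if old_node_offline else requested

        # An offline old node forces early release even when not requested.
        print(effective_early_release(requested=False, old_node_offline=True))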
  2. Feb 09, 2010
    • Add an early release lock/storage for disk replace · 7ea7bcf6
      Iustin Pop authored
      
      This patch adds an early_release parameter to the OpReplaceDisks and
      OpEvacuateNode opcodes, allowing earlier release of storage and, more
      importantly, of internal Ganeti locks.
      
      The behaviour of early release is that any locks and storage on all
      secondary nodes are released early. This is valid for change secondary
      (where we remove the storage on the old secondary and release the locks
      on the old and new secondary) and for replace on secondary (where we
      remove the old storage and release the lock on the secondary node).
      
      Using this, on a three-node setup:
      
      - instance1 on nodes A:B
      - instance2 on nodes C:B
      
      It is possible to run replace-disks -s (on secondary) in parallel for
      instances 1 and 2.
      
      Replace on primary will remove the storage, but not the locks, as we use
      the primary node later in the LU to check consistency.
      
      It is debatable whether to also remove the locks on the primary node,
      thus making replace-disks hold zero locks during the sync. While this
      would allow greatly enhanced parallelism, let's first see how the
      removal of secondary locks works.
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Guido Trotter <ultrotter@google.com>
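
      As an illustrative Python sketch of an opcode carrying the early_release
      flag and of the parallel scenario above: the OpReplaceDisksSketch class,
      its constructor and the mode strings are simplified placeholders, not
      Ganeti's actual OpReplaceDisks definition.

        class OpReplaceDisksSketch(object):
            """Simplified stand-in for an opcode carrying early_release."""

            def __init__(self, instance_name, mode, remote_node=None,
                         early_release=False):
                self.instance_name = instance_name
                self.mode = mode                    # e.g. "replace_on_secondary"
                self.remote_node = remote_node      # new secondary, for change secondary
                self.early_release = early_release  # release secondary locks/storage early

        # Three-node setup from the message: instance1 on A:B, instance2 on C:B.
        # With early_release=True, both replace-on-secondary jobs can proceed in
        # parallel once the shared secondary's locks are dropped.
        op1 = OpReplaceDisksSketch("instance1", "replace_on_secondary",
                                   early_release=True)
        op2 = OpReplaceDisksSketch("instance2", "replace_on_secondary",
                                   early_release=True)
        print(op1.early_release and op2.early_release)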