1. 03 Aug, 2011 1 commit
  2. 02 Aug, 2011 2 commits
  3. 28 Jul, 2011 2 commits
  4. 27 Jul, 2011 1 commit
  5. 26 Jul, 2011 4 commits
  6. 25 Jul, 2011 1 commit
  7. 22 Jul, 2011 3 commits
  8. 21 Jul, 2011 4 commits
  9. 08 Jul, 2011 1 commit
  10. 05 Jul, 2011 2 commits
  11. 01 Jul, 2011 1 commit
  12. 28 Jun, 2011 2 commits
    • Fix bug in recreate-disks for DRBD instances · b768099e
      Iustin Pop authored
      
      
      The new functionality in 2.4.2 for recreate-disks to change nodes is
      broken for DRBD instances: it simply changes the nodes without caring
      for the DRBD minors mapping, which will lead to conflicts in non-empty
      clusters.
      
      This patch changes the Exec() method of this LU significantly, to both fix
      the DRBD minor usage and make sure that we don't have partial
      modification to the instance objects:
      
      - the first half of the method makes all the checks and computes the
        needed configuration changes
      - the second half then performs the configuration changes and
        recreates the disks
      
      This way, instances will either be fully modified or not at all;
      whether the disks are successfully recreated is another point, but at
      least we'll have the configuration sane.
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
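      A minimal sketch of the two-phase structure described in this commit, with
      hypothetical names (recreate_disks, compute_new_disk_config,
      create_disks_on_nodes and the cfg argument are stand-ins, not the actual
      cmdlib code); the point is only the check-then-apply shape:

      def recreate_disks(instance, cfg, target_nodes, feedback_fn):
          # Phase 1: checks and computation only -- nothing is modified yet,
          # so any failure here leaves the instance object untouched.
          new_disks = [compute_new_disk_config(disk, target_nodes)  # hypothetical
                       for disk in instance.disks]

          # Phase 2: apply the precomputed changes as a whole, then recreate
          # the disks on the target nodes.
          instance.disks = new_disks
          cfg.Update(instance, feedback_fn)
          create_disks_on_nodes(instance, feedback_fn)  # hypothetical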
    • Fix a lint warning · 78ff9e8f
      Iustin Pop authored
      Patch db8e5f1c removed the use of feedback_fn, hence pylint warns now.
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: René Nussbaumer <rn@google.com>
  13. 27 Jun, 2011 1 commit
    • Fix bug in drbd8 replace disks on current nodes · db8e5f1c
      Iustin Pop authored
      
      
      Currently the drbd8 replace-disks on the same node (i.e. -p or -s) has
      a bug in that it does modify the instance disk temporarily before
      changing it back to the same value. However, we don't need to, and
      shouldn't do that: what this operation does is simply change the LVM
      configuration on the node, but otherwise the instance disks keep the
      same configuration as before.
      
      In the current code, this change back-and-forth is fine *unless* we
      fail during attaching the new LVs to DRBD; in which case, we're left
      with a half-modified disk, which is entirely wrong.
      
      So we change the code in two ways:
      
      - use temporary copies of the disk children in the old_lvs var
      - stop updating disk.children
      
      Which means that the instance should not be modified anymore (except
      maybe for SetDiskID, which is a legacy and unfortunate decision that
      will have to be cleaned up sometime).
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
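      A minimal sketch of the change described in this commit, again with
      hypothetical helper names: old_lvs holds temporary copies and
      disk.children is never reassigned, so a failure while attaching the new
      LVs cannot leave a half-modified instance behind.

      import copy

      def replace_disks_on_node(instance, create_new_lvs, attach_to_drbd,
                                remove_old_lvs):
          # create_new_lvs, attach_to_drbd and remove_old_lvs are assumed
          # callables, not the actual cmdlib helpers.
          for disk in instance.disks:
              # Work on temporary copies; disk.children itself stays untouched.
              old_lvs = [copy.deepcopy(lv) for lv in disk.children]
              new_lvs = create_new_lvs(disk)

              # If this step fails, the instance object still describes the
              # old, consistent configuration.
              attach_to_drbd(disk, old_lvs, new_lvs)
              remove_old_lvs(old_lvs)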
  14. 23 Jun, 2011 1 commit
  15. 17 Jun, 2011 5 commits
  16. 15 Jun, 2011 1 commit
  17. 08 Jun, 2011 2 commits
    • Fix locking issues in LUClusterVerifyGroup · fceb01fe
      Michael Hanselmann authored
      
      
      - Use functions in ConfigWriter instead of custom loops
      - Calculate nodes only once instance locks are acquired, removing one
        potential race condition
      - Don't retrieve lists of all node/instance information without locks
      - Additionally move the end of the node time check window to right after
        the first RPC call; the second call isn't involved in checking the
        node time at all
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: René Nussbaumer <rn@google.com>
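      The time-window point in the last bullet can be made concrete with a
      small, self-contained sketch (the RPC callable and the skew constant are
      assumptions, not Ganeti's actual names):

      import time

      MAX_ALLOWED_SKEW = 150.0  # seconds; illustrative value only

      def check_node_time(call_node_time):
          """call_node_time() is assumed to return the node's UNIX time via RPC."""
          window_start = time.time()
          node_time = call_node_time()  # the only call relevant to the check
          window_end = time.time()      # close the window right here, not
                                        # after later, unrelated RPC calls
          in_window = (window_start - MAX_ALLOWED_SKEW
                       <= node_time
                       <= window_end + MAX_ALLOWED_SKEW)
          return in_window, node_time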
    • cmdlib: Acquire BGL for LUClusterVerifyConfig · c5312a10
      Michael Hanselmann authored
      
      
      LUClusterVerifyConfig verifies a number of configuration settings. For
      doing so, it needs a consistent list of nodes, groups and instances. So
      far no locks were acquired at all (except for the BGL in shared mode).
      This is a race condition (e.g. if a node group is added in parallel) and
      can be fixed by acquiring the BGL in exclusive mode. Since this LU
      verifies the cluster-wide configuration, doing so instead of acquiring
      individual locks is justified.
      
      Includes one typo fix and one docstring update.
      Signed-off-by: Michael Hanselmann <hansmi@google.com>
      Reviewed-by: René Nussbaumer <rn@google.com>
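      A rough sketch, using assumed attribute and constant names rather than
      the real cmdlib/locking API, of the difference that matters here: a
      shared BGL still allows a parallel job to add a node group between the
      individual reads, while an exclusive BGL gives one consistent snapshot
      of nodes, groups and instances.

      LEVEL_CLUSTER, BGL = "cluster", "BGL"  # placeholder lock identifiers

      class ClusterVerifyConfigSketch:
          """Illustrative shape only, not the real LU."""

          def ExpandNames(self):
              # Exclusive (non-shared) acquisition of the cluster-wide lock:
              # no other job can add or modify groups while Exec() runs.
              self.needed_locks = {LEVEL_CLUSTER: BGL}
              self.share_locks = {LEVEL_CLUSTER: 0}  # 0 = exclusive, 1 = shared

          def Exec(self, feedback_fn):
              # Under the exclusive BGL these three reads describe one
              # consistent configuration; under a shared lock a parallel
              # group addition could land between them.
              nodes = self.cfg.GetNodeList()
              groups = self.cfg.GetNodeGroupList()
              instances = self.cfg.GetInstanceList()
              return verify_config(nodes, groups, instances)  # assumed helper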
  18. 07 Jun, 2011 2 commits
  19. 01 Jun, 2011 2 commits
  20. 31 May, 2011 2 commits