1. 22 Jun, 2010 1 commit
  2. 25 May, 2010 1 commit
  3. 10 May, 2010 1 commit
  4. 21 Apr, 2010 1 commit
  5. 16 Apr, 2010 2 commits
  6. 12 Apr, 2010 3 commits
  7. 08 Apr, 2010 1 commit
  8. 23 Mar, 2010 2 commits
  9. 17 Mar, 2010 3 commits
  10. 16 Mar, 2010 1 commit
  11. 15 Mar, 2010 2 commits
  12. 12 Mar, 2010 2 commits
  13. 11 Mar, 2010 3 commits
  14. 09 Mar, 2010 2 commits
  15. 22 Feb, 2010 1 commit
  16. 11 Feb, 2010 3 commits
  17. 09 Feb, 2010 1 commit
    • Add an early release lock/storage for disk replace · 7ea7bcf6
      Iustin Pop authored
      
      
      This patch adds an early_release parameter to the OpReplaceDisks and
      OpEvacuateNode opcodes, allowing earlier release of storage and, more
      importantly, of internal Ganeti locks.
      
      With early release, any locks and storage on all secondary nodes are
      released early. This applies to change secondary (where we remove the
      storage on the old secondary and release the locks on both the old and
      the new secondary) and to replace on secondary (where we remove the old
      storage and release the lock on the secondary node).
      
      Using this, on a three-node setup:
      
      - instance1 on nodes A:B
      - instance2 on nodes C:B
      
      it is possible to run replace-disks -s (replace on secondary) for
      instances 1 and 2 in parallel.
      
      Replace on primary will remove the storage, but not the locks, as we use
      the primary node later in the LU to check consistency.
      
      It is debatable whether to also release the locks on the primary node,
      which would let replace-disks hold zero locks during the sync. While
      this would allow greatly enhanced parallelism, let's first see how the
      removal of secondary locks works out.
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Guido Trotter <ultrotter@google.com>
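      The parallelism argument above can be illustrated with a minimal Python
      sketch. This is purely illustrative, not Ganeti code: the function name,
      lock names, and the reduction of replace-disks to a single lock set are
      all assumptions made for the example.

      ```python
      # Hypothetical sketch (not Ganeti code): why releasing the shared
      # secondary-node lock early lets two replace-disks operations that
      # share a secondary node run their long DRBD syncs in parallel.

      def replace_disks(node_locks, secondary, early_release):
          """Return the set of node locks still held during the sync phase.

          With early_release, the secondary-node lock is dropped right
          after the old storage is removed, before the sync completes.
          """
          if early_release:
              return node_locks - {secondary}
          return node_locks

      # instance1 on nodes A:B, instance2 on nodes C:B; B is the shared
      # secondary node for both instances.
      held1 = replace_disks({"A", "B"}, secondary="B", early_release=True)
      held2 = replace_disks({"C", "B"}, secondary="B", early_release=True)

      # With early release the two sync phases hold disjoint lock sets,
      # so the operations no longer serialize on node B.
      parallel_ok = held1.isdisjoint(held2)
      ```

      Without early_release both operations would hold B's lock for the whole
      sync, forcing them to run one after the other.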
  18. 25 Jan, 2010 1 commit
  19. 20 Jan, 2010 1 commit
  20. 15 Jan, 2010 1 commit
  21. 13 Jan, 2010 1 commit
  22. 05 Jan, 2010 1 commit
    • Introduce a Luxi call for GetTags · 7699c3af
      Iustin Pop authored
      
      
      This changes the CLI scripts from submitting jobs to fetch the tags to
      using direct queries, which (since the tags query is a cheap one)
      should be much faster.
      
      The tags queries are already done without locks (in the generic query
      paths for instances/nodes/cluster), so this shouldn't break tag queries
      via gnt-* list-tags.
      
      On a small cluster, the runtime of gnt-cluster/gnt-instance list-tags
      more than halves; on a big cluster (with many master candidates) I
      expect it to be more than 5 times faster. The speed of fetching the
      tags is not the main gain, though: it is eliminating an entire job
      where a simple query is enough.
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: René Nussbaumer <rn@google.com>
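      The job-versus-query trade-off described above can be sketched in a few
      lines of Python. The class and method names here are invented for
      illustration; they are not the actual Luxi client API, which talks to
      the master daemon over a UNIX socket.

      ```python
      # Hypothetical sketch (names assumed, not the real Luxi API): fetching
      # tags via a submitted job versus a cheap synchronous query.

      class FakeLuxiClient:
          """Stand-in for a Luxi client, holding tags in memory."""

          def __init__(self, tags):
              self._tags = set(tags)
              self.jobs_submitted = 0

          def submit_job_get_tags(self):
              # Old path: create a job, wait for the master to schedule
              # and execute it, then poll for and archive the result.
              self.jobs_submitted += 1
              return sorted(self._tags)

          def query_tags(self):
              # New path: a cheap, lockless synchronous query; no job is
              # created, executed, or archived.
              return sorted(self._tags)

      cl = FakeLuxiClient({"env:prod", "owner:ops"})
      # Both paths return the same tags, but only the job path adds
      # queue overhead.
      assert cl.query_tags() == cl.submit_job_get_tags()
      assert cl.jobs_submitted == 1
      ```

      The returned tags are identical either way; the win is that the query
      path skips the whole job-queue round trip.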
  23. 04 Jan, 2010 2 commits
  24. 28 Dec, 2009 1 commit
  25. 16 Dec, 2009 1 commit
  26. 25 Nov, 2009 1 commit