  May 31, 2010
      KVM: Migration bandwidth and downtime control · e43d4f9f
      Apollon Oikonomopoulos authored
      
      Introduce two new hypervisor options, migration_bandwidth and migration_downtime,
      and implement KVM migration bandwidth and downtime control.
      
      migration_bandwidth controls KVM's maximum bandwidth during migration, in
      MiB/s. The default is 32 MiB/s, the same as KVM's internal default. This is
      a global (cluster-wide) hypervisor option.
      
      migration_downtime sets the amount of time (in ms) a KVM instance is allowed
      to freeze while copying memory pages during migration. This is useful when
      migrating busy guests, where KVM's internal default of 30 ms is too low for
      the page-copying algorithm to converge (pages are dirtied faster than they
      can be transferred within that window). This is a per-instance option, with
      a default of 30 ms, matching KVM's internal default.
      
      Signed-off-by: Apollon Oikonomopoulos <apollon@noc.grnet.gr>
      Signed-off-by: Balazs Lecz <leczb@google.com>
      Reviewed-by: Balazs Lecz <leczb@google.com>
      e43d4f9f
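
      The two options map onto QEMU's monitor interface. The following is a
      minimal sketch, not Ganeti's actual implementation, of how a management
      layer might apply them through the QEMU human monitor before starting a
      migration; the socket path and helper names are illustrative assumptions,
      while migrate_set_speed and migrate_set_downtime are the standard monitor
      commands for these two settings.

          # Sketch only: apply migration_bandwidth / migration_downtime via the
          # QEMU human monitor. Socket path and helper names are assumptions.
          import socket

          def _monitor_command(monitor_path, command):
              """Send one command to a QEMU human-monitor UNIX socket."""
              sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
              sock.connect(monitor_path)
              try:
                  sock.recv(4096)  # discard the monitor banner
                  sock.sendall(("%s\n" % command).encode("ascii"))
                  return sock.recv(4096)
              finally:
                  sock.close()

          def set_migration_parameters(monitor_path, bandwidth_mib, downtime_ms):
              """Apply migration_bandwidth (MiB/s) and migration_downtime (ms)."""
              # migrate_set_speed accepts a value with a size suffix, e.g. "32m"
              _monitor_command(monitor_path, "migrate_set_speed %dm" % bandwidth_mib)
              # migrate_set_downtime takes seconds, so convert from milliseconds
              _monitor_command(monitor_path,
                               "migrate_set_downtime %.3f" % (downtime_ms / 1000.0))

          # e.g. set_migration_parameters("/path/to/instance1.monitor", 32, 30)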
  Feb 09, 2010
      Add an early release lock/storage for disk replace · 7ea7bcf6
      Iustin Pop authored
      
      This patch adds an early_release parameter to the OpReplaceDisks and
      OpEvacuateNode opcodes, allowing earlier release of storage and, more
      importantly, of internal Ganeti locks.
      
      With early release, any locks and storage on the secondary nodes are
      released as soon as they are no longer needed. This applies both to a
      secondary change (where we remove the storage on the old secondary and
      release the locks on the old and new secondary nodes) and to a replace
      on the secondary (where we remove the old storage and release the lock
      on the secondary node).
      
      Using this, on a three-node setup with:
      
      - instance1 on nodes A:B
      - instance2 on nodes C:B
      
      it is possible to run replace-disks -s (replace on the secondary) for
      instances 1 and 2 in parallel.
      
      Replace on primary will remove the storage, but not the locks, as we use
      the primary node later in the LU to check consistency.
      
      It is debatable whether to also release the locks on the primary node,
      which would let replace-disks hold no locks at all during the sync. While
      this would allow much greater parallelism, let's first see how releasing
      the secondary locks works.
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Guido Trotter <ultrotter@google.com>
      7ea7bcf6
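
      As an illustration of the new parameter, here is a minimal sketch of
      building the opcodes for the two instances above. OpReplaceDisks and
      early_release are named in this patch; the remaining field names and the
      mode constant are assumptions about this Ganeti version's opcode
      definitions, shown only to make the intent concrete.

          # Sketch only: one secondary-replacement opcode per instance, with
          # early lock/storage release enabled.
          from ganeti import constants, opcodes

          def make_replace_secondary_op(instance_name):
              """Build a replace-disks-on-secondary opcode with early release."""
              return opcodes.OpReplaceDisks(
                  instance_name=instance_name,
                  mode=constants.REPLACE_DISK_SEC,  # the "replace-disks -s" mode
                  disks=[],                         # an empty list means all disks
                  early_release=True,               # drop secondary locks/storage early
              )

          # Submitted as two separate jobs, the second one no longer has to wait
          # for the first: node B's lock is dropped as soon as instance1's old
          # secondary storage is removed, so both syncs can run concurrently.
          ops = [make_replace_secondary_op(name)
                 for name in ("instance1", "instance2")]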