1. 03 Oct, 2011 2 commits
  2. 30 Sep, 2011 13 commits
  3. 29 Sep, 2011 10 commits
    • Add memory transfer progress info to migration · 61643226
      Andrea Spadaccini authored
      
      
      * hypervisor/hv_kvm.py
        - parse the memory transfer status
      
      * cmdlib.py
        - represent memory transfer info, if available
      Signed-off-by: Andrea Spadaccini <spadaccio@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
    • Make migration RPC non-blocking · 6a1434d7
      Andrea Spadaccini authored
      
      
      To add status reporting for the KVM migration, the instance_migrate RPC
      must be non-blocking. Moreover, there must be a way to represent the
      migration status and a way to fetch it.
      
      * constants.py:
        - add constants representing the migration statuses
      
      * objects.py:
        - add the MigrationStatus object
      
      * hypervisor/hv_base.py
        - change the FinalizeMigration method name to FinalizeMigrationDst
        - add the FinalizeMigrationSource method
        - add the GetMigrationStatus method
      
      * hypervisor/hv_kvm.py
        - change the implementation of MigrateInstance to be non-blocking
          (i.e. do not poll the status of the migration)
        - implement the new methods defined in BaseHypervisor
      
      * backend.py, server/noded.py, rpc.py
        - add methods to call the new hypervisor methods
        - fix documentation of the existing methods to reflect the changes
      
      * cmdlib.py
        - adapt the logic of TLMigrateInstance._ExecMigration to reflect
          the changes
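      The shape of the change can be sketched as follows. This is a hypothetical
      Python illustration, not the actual Ganeti code: the method names
      (MigrateInstance, GetMigrationStatus) come from the commit message, but the
      constant names, the dictionary layout of the status, and the DemoHypervisor
      class are all made up for the sketch. The point is that the polling loop
      moves out of the hypervisor and into the caller once MigrateInstance
      returns immediately:

      ```python
      import time

      # Assumed constant names, modeled on the constants this commit adds
      # to constants.py; the real names and values may differ.
      MIGRATION_STATUS_ACTIVE = "active"
      MIGRATION_STATUS_COMPLETED = "completed"
      MIGRATION_STATUS_FAILED = "failed"

      class DemoHypervisor:
          """Stand-in for hv_kvm.KVMHypervisor; illustration only."""
          def __init__(self):
              self._polls = 0

          def MigrateInstance(self, instance, target):
              # Non-blocking: start the transfer and return immediately.
              pass

          def GetMigrationStatus(self, instance):
              # Pretend the transfer finishes after three polls.
              self._polls += 1
              done = self._polls >= 3
              return {
                  "status": (MIGRATION_STATUS_COMPLETED if done
                             else MIGRATION_STATUS_ACTIVE),
                  "transferred_ram": 64 * self._polls,  # MiB, made up
              }

      def run_migration(hv, instance, target, poll_interval=0.0):
          """The polling loop, as it would live in the caller (cmdlib),
          not inside the hypervisor's MigrateInstance."""
          hv.MigrateInstance(instance, target)
          while True:
              info = hv.GetMigrationStatus(instance)
              if info["status"] != MIGRATION_STATUS_ACTIVE:
                  return info
              time.sleep(poll_interval)
      ```

      With this split, the caller can report per-poll progress (the memory
      transfer info of the previous commit) and finalize both sides once the
      status leaves the active state.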
      Signed-off-by: Andrea Spadaccini <spadaccio@google.com>
      Reviewed-by: Michael Hanselmann <hansmi@google.com>
    • Andrea Spadaccini · f8326fca
    • Add an allocation limit to hspace · b8a2c0ab
      Iustin Pop authored
      
      
      This is very useful for testing/benchmarking.
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Agata Murawska <agatamurawska@google.com>
    • Small simplification in tryAlloc · 1bf6d813
      Iustin Pop authored
      
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Agata Murawska <agatamurawska@google.com>
    • Change how node pairs are generated/used · b0631f10
      Iustin Pop authored
      
      
      Currently, the node pairs used for allocation are a simple [(primary,
      secondary)] list of tuples, as this is how they were used before the
      previous patch. However, for that patch, we use them separately per
      primary node, and we have to unpack this list right after generation.
      
      Therefore it makes sense to generate the list directly in the correct
      form, and remove the split from tryAlloc. This should be at least as
      fast as the previous patch, and possibly faster.
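      The old and new shapes of the pair list can be sketched in a few lines.
      This is a Python analogue, not the Haskell original; the function names
      are made up for the sketch:

      ```python
      def flat_pairs(nodes):
          # The old shape: one flat [(primary, secondary)] list that
          # tryAlloc then had to split per primary node.
          return [(p, s) for p in nodes for s in nodes if s != p]

      def pairs_per_primary(nodes):
          # The new shape: candidates generated already grouped per
          # primary, so no unpacking is needed after generation.
          return {p: [s for s in nodes if s != p] for p in nodes}
      ```

      Both contain the same n*(n-1) candidate pairs; only the grouping the
      consumer needs is built in from the start.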
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Agata Murawska <agatamurawska@google.com>
    • Parallelise instance allocation/capacity computation · f828f4aa
      Iustin Pop authored
      This patch finally enables parallelisation in instance placement.
      
      My original try for enabling this didn't work well, but it took a
      while (and liberal use of threadscope) to understand why. The attempt
      was to simply `parMap rwhnf` over allocateOnPair. However, this is not
      good: for a 100-node cluster, it will create roughly 100*100 sparks,
      which is far too many; each individual spark is too small, and there
      are too many sparks. Furthermore, the combining of the
      allocateOnPair results was done single-threaded, losing even more
      parallelism. So we had O(n²) sparks to run in parallel, each spark of
      size O(1), and we combine single-threadedly a list of O(n²) length.
      
      The new algorithm does a two-stage process: we group the list of valid
      pairs per primary node, relying on the fact that usually the secondary
      nodes are somewhat balanced (it's definitely true for 'blank' cluster
      computations). We then run in parallel over all primary nodes, doing
      both the individual allocateOnPair calls *and* the concatAllocs
      summarisation. This leaves only the summing of the primary group
      results together for the main execution thread. The new numbers are:
      O(n) sparks, each of size O(n), and we combine single-threadedly a
      list of O(n) length.
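      The two-stage granularity idea is not Haskell-specific. Below is a
      hedged Python sketch (the original uses GHC sparks via `parMap`): the
      `score` function is a made-up stand-in for allocateOnPair, and the
      per-task `max` plays the role of doing the concatAllocs reduction
      inside each coarse task, so the main thread only combines O(n)
      partial results instead of O(n^2) individual scores:

      ```python
      from concurrent.futures import ThreadPoolExecutor

      def score(p, s):
          # Stand-in for allocateOnPair; any pure scoring function works.
          return (sum(map(ord, p)) * 31 + sum(map(ord, s))) % 1000

      def best_for_primary(p, secondaries):
          # One coarse task: score all O(n) pairs for this primary *and*
          # reduce them locally (the in-task summarisation step).
          return max((score(p, s), p, s) for s in secondaries)

      def best_allocation(nodes, workers=2):
          # O(n) tasks of size O(n), instead of O(n^2) tiny tasks.
          groups = [(p, [s for s in nodes if s != p]) for p in nodes]
          with ThreadPoolExecutor(max_workers=workers) as ex:
              partials = ex.map(lambda g: best_for_primary(*g), groups)
              # The main thread combines only O(n) partial results.
              return max(partials)
      ```

      The result is identical to the sequential maximum over all pairs;
      only the work partitioning changes.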
      
      This translates directly into a reasonable speedup (relative numbers
      for allocation of 3 instances on a 120-node cluster):
      
      - original code (non-threaded): 1.00 (baseline)
      - first attempt (2 threads):    0.81 (20% slowdown)
      - new code (non-threaded):      1.00 (no slowdown)
      - new code (threaded/1 thread): 1.00
      - new code (2 threads):         1.65 (65% faster)
      
      We don't get a 2x speedup, because the GC time increases. Fortunately
      the code should scale well to more cores, so on many-core machines we
      should get a nice overall speedup. On a different machine with 4
      cores, we get 3.29x.
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Agata Murawska <agatamurawska@google.com>
    • Abstract comparison of AllocElements · d7339c99
      Iustin Pop authored
      
      
      This is moved out of concatAllocs, as it will be needed elsewhere in
      the future.
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Agata Murawska <agatamurawska@google.com>
    • Change type of Cluster.AllocSolution · 129734d3
      Iustin Pop authored
      Originally, this data type was used both by instance allocation (1
      result), and by instance relocation (many results, one per
      instance). As such, the field 'asSolutions' was a list, and the
      various code paths checked whether the length of the list matches the
      current mode. This is very ugly, as we can't guarantee this matching
      via the type system; hence the FIXME in the code.
      
      However, commit 6804faa0 removed the instance evacuation code, and thus
      we now always use just one allocation solution. Hence we can change
      the data type to a simple Maybe type, and get rid of many 'otherwise
      barf out' conditions.
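      The same "zero or one, enforced by the type" idea reads naturally in
      Python's typing as well. This is a hypothetical analogue, not the
      Haskell original: the field and type names are invented for the
      sketch, and Optional plays the role of Maybe:

      ```python
      from dataclasses import dataclass
      from typing import Optional, Tuple

      # Made-up stand-in for an allocation result, e.g. (primary, secondary).
      AllocElement = Tuple[str, str]

      @dataclass
      class AllocSolution:
          # Was conceptually a list whose length every caller had to
          # check against the current mode; Optional encodes "zero or
          # one solution" directly in the type.
          as_solution: Optional[AllocElement] = None

      def describe(sol: AllocSolution) -> str:
          if sol.as_solution is None:
              return "no allocation found"
          primary, secondary = sol.as_solution
          return "allocate on %s/%s" % (primary, secondary)
      ```

      Callers now pattern-match on exactly two cases instead of guarding
      against impossible list lengths.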
      Signed-off-by: Iustin Pop <iustin@google.com>
      Reviewed-by: Agata Murawska <agatamurawska@google.com>
  4. 28 Sep, 2011 7 commits
  5. 27 Sep, 2011 5 commits
  6. 26 Sep, 2011 3 commits