04 Jul, 2013 4 commits
      Also remove prop_IterateAlloc_sane from test list · 51b12695
      Klaus Aehlig authored
In f4d1bb75 that test was removed, but it was forgotten to also remove it
from the list of tests to be executed. Fix that.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
      Fix documentation for prop_Alloc_sane · 09d8b0fc
      Klaus Aehlig authored
As discussed in the last commit, placing a new instance on the cluster
can lead to a cluster that can be improved by moving previously added
instances. For an empty cluster, however, there are no previous
instances. So add this to the test description to make it obvious why
this test checks a valid property.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
      Remove IterateAllocSane test · f4d1bb75
      Klaus Aehlig authored
The test is testing for a property that just isn't true. Iterated
allocation greedily places one instance at a time, taking the locally
most balanced solution. The test then checks whether the resulting
global allocation can be improved.
      To see that this assumption does not hold, consider placing 3
      identical instances on 3 nodes. The most balanced allocation of all 3
      instances would be that each node is primary and secondary for one
      instance. Now let's see what iterative allocation does.
Up to symmetry, the placement of the first instance is unique. Any
placement of the second instance that keeps the way to the global
optimum open would be one node being primary and secondary for one
instance each, one node being only secondary for one instance, and one
node being only primary for one instance. An alternative allocation
would be to place the instances on two different nodes as primaries
and to use the third node as shared secondary.
      For cpu (2 nodes with 1 cpu, 1 with none), free memory (on two nodes
      all minus 1 unit, on one node all), and disk (2 nodes with 1 disk, 1
      with 2) there is no difference between these allocations. For reserved
      memory, there's a difference in the values. In the first case, on two nodes,
      there's one unit reserved and nothing on the third. In the second
      case, on two nodes, there is nothing reserved, while on the third
      node, there still is only one unit reserved, as the two instances have
different primaries. Nevertheless, the standard deviations of
0,0,1 and 1,1,0 are both sqrt(2/9); so, in everything that contributes
to the balancedness metric, these allocations are equal. Therefore, it is
locally correct to choose the wrong allocation.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
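The equal-spread argument in the commit message above can be checked numerically. A minimal sketch using Python's statistics module, with illustrative reserved-memory unit values per node for the two candidate allocations (the node labels and values are assumptions for the sake of the example):

```python
import math
import statistics

# Reserved memory per node in the two allocations discussed above.
# Allocation keeping the global optimum reachable: two nodes each
# reserve one unit for an instance whose primary is elsewhere.
optimum_open = [1, 1, 0]
# Shared-secondary allocation: only the common secondary node reserves
# memory, and only one unit, since the two instances have different
# primaries.
shared_secondary = [0, 0, 1]

# pstdev is the population standard deviation (divide by n, not n-1).
sd_open = statistics.pstdev(optimum_open)
sd_shared = statistics.pstdev(shared_secondary)

# Both spreads coincide (sqrt(2)/3), so a metric based on standard
# deviations cannot distinguish the two allocations on this dimension.
print(sd_open, sd_shared, math.isclose(sd_open, sd_shared))
```

Since the two values are equal, the balancedness metric gives the greedy allocator no signal to prefer the allocation that keeps the global optimum reachable, which is exactly why the iterated-allocation property fails.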
      Release version 2.7.0 · 788529f2
      Guido Trotter authored
We are not aware of anything blocking for 2.7, and it has been in
release candidate state long enough. Any future problems can be
addressed as bugfixes.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>