Commit ca084577 authored by Oleg Ponomarev, committed by Klaus Aehlig

Add tests for avoid-disk-moves=*factor* option

In the provided test, a failover-and-replace disk move would normally take
place for inst2 in order to avoid running all instances on the same node
and to satisfy the desired locations.

* In the first test the avoid-disk-moves *factor* is small and the
  optimization is performed by a single failover-and-replace move.

* In the second test the avoid-disk-moves *factor* is large enough that
  the single failover-and-replace move is split into two separate steps.
  That's because the gain in cluster score for the failover-and-replace is
  not *factor* times bigger than the gain for a simple failover move (see
  the sketch after this list).
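A minimal sketch of the decision rule described above, assuming the option
compares score gains of alternative moves; preferDiskMove and the numbers
below are illustrative only, not the actual hbal code:

-- Hypothetical rule: a solution involving a disk move is preferred over
-- the best disk-move-free alternative only if its gain in cluster score
-- is more than `factor` times larger.
preferDiskMove :: Double  -- ^ avoid-disk-moves factor
               -> Double  -- ^ score gain of the disk-moving solution
               -> Double  -- ^ score gain of the disk-move-free solution
               -> Bool
preferDiskMove factor diskGain plainGain = diskGain > factor * plainGain

main :: IO ()
main = do
  -- Illustrative gains only: with a small factor (1.2) the combined
  -- failover-and-replace still wins; with a large factor (5) it does not,
  -- so only the plain failover is taken in the first step.
  print (preferDiskMove 1.2 0.30 0.20)  -- True
  print (preferDiskMove 5.0 0.30 0.20)  -- False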
Signed-off-by: Oleg Ponomarev <onponomarev@gmail.com>
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
parent da9ee63e
@@ -1716,6 +1716,7 @@ TEST_FILES = \
test/data/htools/hail-node-evac.json \
test/data/htools/hail-reloc-drbd.json \
test/data/htools/hail-reloc-drbd-crowded.json \
test/data/htools/hbal-avoid-disk-moves.data \
test/data/htools/hbal-cpu-speed.data \
test/data/htools/hbal-desiredlocation-1.data \
test/data/htools/hbal-desiredlocation-2.data \
test/data/htools/hbal-avoid-disk-moves.data (new file)
group-01|fake-uuid-01|preferred||
node-01|16384|0|14336|409600|306600|16|N|fake-uuid-01|1|power:a
node-02|16384|0|16384|409600|357800|16|N|fake-uuid-01|1|power:b
node-03|16384|0|16384|409600|357800|16|N|fake-uuid-01|1|power:a
node-04|16384|0|16384|409600|409600|16|N|fake-uuid-01|1|power:b
inst1|1024|51200|1|running|Y|node-01|node-02|drbd|power:a|1
inst2|1024|51200|1|running|Y|node-01|node-03|drbd|power:a|1
htools:nlocation:power
htools:desiredlocation:power
@@ -92,3 +92,11 @@
node-02 0
node-03 1/
>>>= 0
./test/hs/hbal -t $TESTDATA_DIR/hbal-avoid-disk-moves.data --avoid-disk-moves=1.2
>>>/Solution length=1/
>>>= 0
./test/hs/hbal -t $TESTDATA_DIR/hbal-avoid-disk-moves.data --avoid-disk-moves=5
>>>/Solution length=2/
>>>= 0