- Oct 25, 2010
Iustin Pop authored
This simplifies the maintenance of the man pages and standardises on pandoc as the single rst-to-* converter.
- Oct 21, 2010
Iustin Pop authored
This is a work in progress, will be modified along with the progress of Ganeti 2.3.
- Oct 07, 2010
Iustin Pop authored
Iustin Pop authored
- Oct 06, 2010
Iustin Pop authored
Currently, the key metrics/tiered spec computations show the virtual CPU count. However, since we have a maximum vCPU/pCPU ratio, we can also show the “normalized” CPU count, i.e. the equivalent physical CPU count corresponding to the virtual ones.
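The idea can be sketched as follows (a Python illustration with invented names; the actual htools code is Haskell):

```python
def normalized_cpu(virtual_cpus, vcpu_ratio):
    """Equivalent physical CPU count for a given virtual CPU count,
    assuming a maximum vCPU/pCPU ratio (hypothetical helper)."""
    if vcpu_ratio <= 0:
        raise ValueError("vCPU/pCPU ratio must be positive")
    return virtual_cpus / vcpu_ratio

# e.g. 64 vCPUs at a 4:1 ratio correspond to 16 physical CPUs
```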
Iustin Pop authored
- Sep 15, 2010
Iustin Pop authored
Currently, hbal aborts immediately when requested (^C, SIGINT, etc.). This is not nice, since the already-started jobs then need to be tracked manually. This patch adds a signal handler for SIGINT and SIGTERM: the first request simply records the shutdown (hbal will then exit once all jobs in the current jobset finish), while a second request causes an immediate exit.
- Sep 03, 2010
Iustin Pop authored
Iustin Pop authored
Also adds them in hbal.
Iustin Pop authored
Recent hbal seems to run many steps for small improvements (< 1e-3), so we should stop early in this case. We add a new option (-g) that sets the minimum gain during balancing. This check only becomes active when the cluster score is below a threshold (--min-gain-limit), so as not to stop rebalances too early.
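The stopping rule could look roughly like this (illustrative Python; the default values here are assumptions, not hbal's real defaults):

```python
def keep_balancing(cur_score, gain, min_gain, min_gain_limit):
    """Decide whether another balancing round is worthwhile (sketch).

    The minimum-gain check only kicks in once the cluster score has
    dropped below min_gain_limit; above it, any improvement is accepted.
    """
    if cur_score >= min_gain_limit:
        return gain > 0
    return gain >= min_gain

# With a high score, tiny gains are still accepted; with a low score,
# tiny gains stop the balancing early.
```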
- Sep 02, 2010
Iustin Pop authored
This will make the automated builds flag any problems.
Iustin Pop authored
These are just variations of the standard debug, but are provided for simpler code, since laziness would otherwise cause debug statements not to be evaluated.
Iustin Pop authored
The addition of a new secondary on a node performs two memory tests:
- in strict mode, reject if we would run into an N+1 failure
- reject if the new instance's memory is greater than the free memory (not the available memory) on the node

The last check is designed to ensure that, irrespective of the other secondary instances on this node, we are able to failover/migrate the newly-added instance. However, we should allow this if the instance comes from an offline node, which doesn't offer anything (not even disk replication). Therefore this patch makes this check conditional on strict mode.
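The resulting logic, roughly (a Python sketch with invented names; the real check lives in the Haskell Node code):

```python
def can_add_secondary(free_mem, inst_mem, causes_n1_failure, strict):
    """Whether a node can accept a new secondary instance (sketch).

    Both the N+1 check and the free-memory check are only enforced in
    strict mode, so instances evacuated from an offline node can still
    be placed even when a later failover could not be guaranteed.
    """
    if strict and causes_n1_failure:
        return False
    if strict and inst_mem > free_mem:
        return False
    return True
```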
Iustin Pop authored
- Aug 30, 2010
Iustin Pop authored
Iustin Pop authored
Otherwise the saved cluster state and the in-memory one are wrong.
Iustin Pop authored
This also uncovered a few issues with the allocation model (instances not being marked up, etc.). Compared to hbal, hspace will generate either one or two files (for both the standard and the tiered allocation mode), depending on the input parameters.
Iustin Pop authored
The Cluster.iterateAlloc and tieredAlloc functions are changed to also return the updated instance list, since it is needed to have a “full” cluster view.
Iustin Pop authored
Also move the LUXI execution (-X) to the end, after all the output messages are printed. There is no point in making the user wait a long while for the messages, especially as they are not up-to-date stats after the job execution, just an estimate of what the state will be.
Iustin Pop authored
This is currently hardcoded in an internal function in hscan.hs, and we move it to Text.hs for later use.
- Aug 25, 2010
Iustin Pop authored
This option will in the future be used to serialize the cluster state in hbal and hspace after the rebalance/allocation steps.
Iustin Pop authored
This checks that the Node text serialization and deserialization operations are idempotent when combined, i.e. a serialize/deserialize round-trip yields the original node.
Iustin Pop authored
Currently, the hostnames are almost fully arbitrary chars, which breaks the assumption that nodes/instances will be normal DNS hostnames. This patch adds some custom generators for these hostnames, that will allow better testing of text loader serialization/deserialization.
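The shape of such a generator, transliterated to Python (the original uses QuickCheck's Gen combinators in Haskell; all names here are illustrative):

```python
import random
import string

# DNS-like labels: lowercase letters and digits only
NAME_CHARS = string.ascii_lowercase + string.digits

def gen_label(rng, max_len=8):
    """Generate one DNS-like hostname label."""
    length = rng.randint(1, max_len)
    return "".join(rng.choice(NAME_CHARS) for _ in range(length))

def gen_fqdn(rng, max_labels=3):
    """Generate a DNS-like dotted name, e.g. 'node1.example'."""
    labels = rng.randint(1, max_labels)
    return ".".join(gen_label(rng) for _ in range(labels))
```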
- Aug 24, 2010
Iustin Pop authored
Currently these are in hscan, and cannot be reused easily.
- Jul 29, 2010
Iustin Pop authored
Again, thanks to lintian.
- Jul 27, 2010
Iustin Pop authored
Currently we show the instance index, but this makes no sense outside the current running program. Instead, we show the instance name.
Iustin Pop authored
Iustin Pop authored
This looks better for text-only viewing…
- Jul 23, 2010
Iustin Pop authored
If some clusters failed during RAPI collection, exit with exit code 2 so that tests can detect this failure.
Iustin Pop authored
- Jul 22, 2010
Iustin Pop authored
Iustin Pop authored
Iustin Pop authored
The (recently-enabled) live test coverage stats found some low-hanging fruit in the tests we do…
- Jul 21, 2010
Iustin Pop authored
… which fixes the issue noted in the previous commit (almost a brown paper bag change).
Iustin Pop authored
While this doesn't work correctly yet (hpc sum seems to only take common modules, not the sum of modules?), it prepares for gathering coverage data during live-test (as an alternative to unittest coverage data).
Iustin Pop authored
This is needed so that in the coverage report we list all modules, even the ones we don't test at all, such that we get the complete results.
Iustin Pop authored
Currently, this metric tracks the nodes failing the N+1 check. While this helps (in some cases) to evacuate such nodes, it's not a good metric, since it rarely changes during a step (only when the last instance moves away). Therefore we replace it with the count of instances living on such nodes, which is much better because:
- moving an instance away while the node is still failing N+1 will still be reflected in the score as an improvement
- moving the last instance causing an N+1 failure will result in a large decrease of this score, thus giving the right bonus for clearing this status
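The new metric, in pseudo-form (a Python sketch over a hypothetical node structure, not the actual Haskell code):

```python
def n1_instance_metric(nodes):
    """Count instances living on N+1-failing nodes (the new metric),
    instead of counting the failing nodes themselves (the old metric)."""
    return sum(len(node["instances"]) for node in nodes if node["n1_fail"])
```

With this, every instance moved off a failing node improves the score, not just the last one.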
Iustin Pop authored
Currently all metrics have the same weight (we just sum them together). However, for the hard constraints (N+1 failures, offline nodes, etc.) we should handle the metrics differently based on their meaning. For example, an instance living on a primary offline node is worse than an instance having its secondary node offline, which in turn is worse than an instance having its secondary node failing N+1. To express this case in our code, we introduce a table of weights for the metrics, with which we can influence their relative importance.
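The weighted combination could look like this (a Python sketch; the weight values below are invented for illustration and are not the ones used by hbal):

```python
# Hypothetical per-metric weights: harder constraints weigh more.
METRIC_WEIGHTS = {
    "offline_pri": 4.0,  # instance with its primary on an offline node
    "offline_sec": 2.0,  # instance with its secondary on an offline node
    "n1_fail": 1.0,      # instance on an N+1-failing node
}

def cluster_score(metrics):
    """Weighted sum of the metric values, instead of a plain sum."""
    return sum(METRIC_WEIGHTS[name] * value
               for name, value in metrics.items())
```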
Iustin Pop authored
This patch switches the applyMove function to the extended versions of Node.addPri and addSec, and passes the override flag based on the state of the node that we're moving away from.
Iustin Pop authored
In case an instance lives on an offline node, it doesn't make sense to refuse to move it because that would create an N+1 failure; failing N+1 is still much better than not running at all. Similarly, if the secondary node of an instance is offline, meaning the instance has no redundancy at all, we are worse off than with a secondary that fails N+1: such a secondary could not accept the instance as primary, but it still provides redundancy for it. To allow this, we rename Node.addPri to addPriEx and introduce an extra parameter (addPri is a partial application of addPriEx and keeps the same signature). Node.addSec gets the same treatment.
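The renaming pattern, sketched in Python (functools.partial stands in for Haskell partial application; the function names follow the commit, but the body and data layout are invented for illustration):

```python
from functools import partial

def add_pri_ex(node, instance, override_n1=False):
    """Extended version: optionally ignore an N+1 failure, e.g. when
    the instance is being moved off an offline node (illustrative body)."""
    if node["would_fail_n1"] and not override_n1:
        raise ValueError("would cause N+1 failure")
    node["instances"].append(instance)
    return node

# addPri keeps the old behaviour/signature as a partial application
add_pri = partial(add_pri_ex, override_n1=False)
```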