- Jul 20, 2009
-
-
Michael Hanselmann authored
File "../test/ganeti.hooks_unittest.py", line 239, in setUp self.lu = FakeLU(FakeProc(), self.op, self.context, None) File "…/ganeti/cmdlib.py", line 92, in __init__ self.LogStep = processor.LogStep AttributeError: FakeProc instance has no attribute 'LogStep' Signed-off-by:
Michael Hanselmann <hansmi@google.com> Reviewed-by:
Iustin Pop <iustin@google.com>
-
Michael Hanselmann authored
This class will be used for a new opcode to evacuate nodes.
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
Michael Hanselmann authored
Before, IAllocator would access them using “self.lu.cfg” and “self.lu.rpc”. It shouldn't know about the internals of the LU.
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
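
A minimal sketch of the decoupling described above, assuming a simplified constructor (the real IAllocator takes more arguments); the point is that the config and RPC runner are passed in explicitly instead of being read from the LU:

    class IAllocator(object):
      # Sketch only: the allocator keeps its own references instead of
      # reaching into the LU via self.lu.cfg / self.lu.rpc.
      def __init__(self, cfg, rpc_runner, mode, name):
        self.cfg = cfg
        self.rpc = rpc_runner
        self.mode = mode
        self.name = name

      def _ComputeClusterData(self):
        # Uses only what was passed in, never LU internals.
        return self.cfg.GetClusterInfo(), self.cfg.GetNodeList()

    # Hypothetical call site inside an LU:
    #   ial = IAllocator(self.cfg, self.rpc, mode=..., name=...)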
-
Iustin Pop authored
The merge of commit 360b0dc2 into branch-2.1 broke the import of backend, since it uses hypervisor.GetHypervisor(), which returns an instance of the hypervisor. Some of the hypervisors create directories at init time, so importing backend failed through this chain when it is not done on a (proper) ganeti node, such as during unittest runs.

This patch adds a GetHypervisorClass() function to the hypervisor module, which returns the class and not an instance of the hypervisor, and uses it in _BuildUploadFiles(). The existing GetHypervisor() is then changed to use this function.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
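
A hedged sketch of the split between the two helpers; the mapping name and the stand-in hypervisor class are illustrative, not the actual module contents:

    class FakeHypervisor(object):
      """Stand-in hypervisor; a real one may create directories in __init__,
      which is exactly the side effect that broke importing backend."""

    _HYPERVISOR_MAP = {
      "fake": FakeHypervisor,
    }

    def GetHypervisorClass(ht_kind):
      """Return the hypervisor class without instantiating it (no side effects)."""
      return _HYPERVISOR_MAP[ht_kind]

    def GetHypervisor(ht_kind):
      """Return an instance; only this step runs the side-effecting __init__."""
      return GetHypervisorClass(ht_kind)()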
-
- Jul 19, 2009
-
-
Iustin Pop authored
Conflicts:
  lib/backend.py: non-trivial conflict but easy to solve
-
Iustin Pop authored
The list of upload files is currently built at every UploadFile() call. This patch moves it to a separate variable which is initialized only once. This won't make much difference, but I regard it as a cleanup.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
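
A sketch of the cleanup under the assumption that the cached list lives in a module-level variable (the variable name and the file entries are illustrative):

    def _BuildUploadFiles():
      """Compute the list of files UploadFile() may write (illustrative entries)."""
      return frozenset([
        "/etc/ganeti/example-one",
        "/etc/ganeti/example-two",
      ])

    # Built a single time at import, instead of on every UploadFile() call.
    _ALLOWED_UPLOAD_FILES = _BuildUploadFiles()

    def UploadFile(file_name, data):
      if file_name not in _ALLOWED_UPLOAD_FILES:
        raise RuntimeError("Refusing to upload %s" % file_name)
      # ... write data to file_name ...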
-
Iustin Pop authored
Conflicts:
  lib/cli.py: trivial extra empty line
-
Iustin Pop authored
Commit 55efe6da "Convert instance reinstall to multi instance model" actually broke instance reinstall for single-instance cases. This one-liner fixes it.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
(cherry picked from commit b6e243ab)
-
Iustin Pop authored
It seems epydoc needs fully-qualified references and doesn't deal with relative ones (not even within the current module) if there are any ambiguities. There are other epydoc warnings, in the rapi docstrings, but those are left as-is since they are removed in 2.1.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
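
For illustration, the kind of docstring change this implies (the helper function and its docstring are made up; L{...} is epydoc cross-reference markup):

    # Before (ambiguous, may trigger an epydoc warning):
    #   @see: L{RunCmd}
    # After (fully qualified, resolves cleanly):
    #   @see: L{ganeti.utils.RunCmd}
    def _ExampleHelper():
      """Illustration only.

      @see: L{ganeti.utils.RunCmd}

      """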
-
Iustin Pop authored
Currently, an unclean master daemon shutdown overwrites all of a job's opcode statuses and results with error/None. This is incorrect, since any already finished opcodes should have their status and result preserved, and only not-yet-processed opcodes should be marked as ‘error’. Cancelling jobs between opcodes does the same (but this is not currently allowed by the code, so it's less important than unclean shutdown).

This patch adds a new _QueuedJob function that only overwrites the status and result of non-finalized opcodes, which is then used in job queue init and in the cancel-job functions. The patch also adds some comments and a new set of constants in constants.py highlighting the finalized vs. non-finalized opcode statuses.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
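
A hedged sketch of the finalized/non-finalized split; the constant and method names follow the description above and are assumptions, not necessarily the real identifiers:

    OP_STATUS_QUEUED = "queued"
    OP_STATUS_CANCELED = "canceled"
    OP_STATUS_SUCCESS = "success"
    OP_STATUS_ERROR = "error"

    # Statuses after which an opcode's status/result must not be touched again.
    OPS_FINALIZED = frozenset([OP_STATUS_CANCELED, OP_STATUS_SUCCESS,
                               OP_STATUS_ERROR])

    class _QueuedOpCode(object):
      def __init__(self):
        self.status = OP_STATUS_QUEUED
        self.result = None

    class _QueuedJob(object):
      def __init__(self, ops):
        self.ops = ops

      def MarkUnfinishedOps(self, status, result):
        """Overwrite status/result only for opcodes that are not finalized."""
        for op in self.ops:
          if op.status not in OPS_FINALIZED:
            op.status = status
            op.result = result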
-
Iustin Pop authored
Currently gnt-debug submits jobs individually, but in 2.1 JobExecutor uses the optimized SubmitManyJobs luxi call and as such should be used whenever multiple jobs need to be submitted. This patch converts “gnt-debug submit-job” to use it and also removes an extra empty line in the JobExecutor class.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
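
A hedged usage sketch of the JobExecutor pattern; the method names (QueueJob, GetResults) and the opcode fields reflect my reading of that era of the code and should be treated as assumptions:

    from ganeti import cli, opcodes

    def SubmitDelayJobs(seconds, count):
      jex = cli.JobExecutor()
      for i in range(count):
        op = opcodes.OpTestDelay(duration=seconds, on_master=True, on_nodes=[])
        jex.QueueJob("delay-%d" % i, op)
      # With SubmitManyJobs underneath, all queued jobs reach masterd in a
      # single luxi request.
      return jex.GetResults()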
-
Iustin Pop authored
This patch converts ‘gnt-instance reinstall’ from the single-instance to the multi-instance model; since this is dangerous, passing “--force --force-multiple” is required to skip the confirmation.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
(cherry picked from commit 55efe6da)
-
Iustin Pop authored
This small patch changes the batch create functionality to use the job executor instead of single-job submits.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
(cherry picked from commit d4dd4b74)
-
Iustin Pop authored
This patch changes the generic "multiple job executor" to use the many-jobs submit model, which automatically makes all its users use the new model. This makes, for example, startup/shutdown of a full cluster much more logical (all the submitted job IDs are visible quickly, and then waiting for them proceeds normally).
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
(cherry picked from commit 23b4b983)
-
Iustin Pop authored
As a workaround for the job submit timeouts that we have, this patch adds a new luxi call for multi-job submit; the advantage is that all the jobs are added to the queue first, and only afterwards can the workers start processing them. This is definitely faster than per-job submit, where the submission of new jobs competes with the workers processing jobs.

On a pure no-op OpDelay opcode (not on master, not on nodes), we have:
- 100 jobs:
  - individual: submit time ~21s, processing time ~21s
  - multiple: submit time 7-9s, processing time ~22s
- 250 jobs:
  - individual: submit time ~56s, processing time ~57s (run 2: ~54s, ~55s)
  - multiple: submit time ~20s, processing time ~51s (run 2: ~17s, ~52s)

This shows that we indeed gain on the client side, and maybe even on the total processing time for a high number of jobs. For just 10 or so I expect the difference to be just noise.

This will probably require increasing the timeout a little when submitting too many jobs - 250 jobs at ~20 seconds is close to the current rw timeout of 60s.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
(cherry picked from commit 2971c913)
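
A hedged sketch of the client side of the new call; the luxi method name comes from the description above, and the opcode/field names are assumptions:

    from ganeti import luxi, opcodes

    def SubmitNoOpJobs(count):
      cl = luxi.Client()
      jobs = [[opcodes.OpTestDelay(duration=0, on_master=False, on_nodes=[])]
              for _ in range(count)]
      # One request carrying all the jobs: everything is queued first, so
      # submission no longer competes with the workers processing jobs.
      return cl.SubmitManyJobs(jobs)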
-
Iustin Pop authored
If a job with more than one opcode is being processed and the master daemon crashes between two opcodes, we have the first N opcodes marked successful and the rest marked as queued. This means that the overall job status is queued, and thus on master daemon restart it will be resent for completion. However, the RunTask() function in jqueue.py doesn't deal with partially-completed jobs. This patch makes it simply skip such opcodes. An alternative option would be to not mark partially-completed jobs as QUEUED but instead RUNNING, which would result in aborting the job at restart time.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
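
A minimal sketch of the skip behaviour described above; the surrounding job-queue machinery is elided and the status strings are illustrative:

    FINISHED_STATUSES = frozenset(["success", "error", "canceled"])

    def ProcessJob(job, execute_opcode):
      for op in job.ops:
        if op.status in FINISHED_STATUSES:
          # Completed before the unclean restart: keep its status and
          # result, move on to the next opcode.
          continue
        op.status = "running"
        op.result = execute_opcode(op.input)
        op.status = "success"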
-
Iustin Pop authored
In case the job fails, we try to set the job's run_op_idx to -1. However, this is the wrong variable, which wasn't detected until the __slots__ addition. The correct variable is run_op_index.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
- Jul 17, 2009
-
-
Iustin Pop authored
Adding __slots__ to _QueuedOpCode decreases the memory usage of these objects by roughly four times. The effect is smaller for _QueuedJob objects.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
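
A sketch of the change, assuming a reduced set of fields; __slots__ removes the per-instance __dict__, which is where the memory saving comes from:

    class _QueuedOpCode(object):
      __slots__ = ["input", "status", "result", "log"]

      def __init__(self, op):
        self.input = op
        self.status = "queued"
        self.result = None
        self.log = []

    # A useful side effect (see the run_op_idx fix above): assigning to an
    # attribute not listed in __slots__ now raises AttributeError immediately.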
-
Michael Hanselmann authored
* commit 'origin/next':
  ganeti.initd: Pass $*_ARGS to programs when restarting them
-
Michael Hanselmann authored
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
Iustin Pop authored
* next:
  Optimize OpCode loading
  Yet another fallout from the pylint fixes
-
Iustin Pop authored
This patch converts the opcode loading to a pre-built map (computed at import time) instead of iterating over the globals dict on each call. Microbenchmarks show that this should be around three times faster, and burnin still passes.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
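
A hedged sketch of the pre-built map; the map and lookup names are assumptions, and real code would also filter out abstract opcode classes:

    class OpCode(object):
      OP_ID = "OP_ABSTRACT"

    class OpTestDelay(OpCode):
      OP_ID = "OP_TEST_DELAY"

    # Built once, when the module is imported.
    OP_MAPPING = dict((v.OP_ID, v) for v in globals().values()
                      if isinstance(v, type) and issubclass(v, OpCode))

    def LookupOpCode(op_id):
      try:
        return OP_MAPPING[op_id]
      except KeyError:
        raise ValueError("Unknown opcode %s" % op_id)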
-
Iustin Pop authored
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Olivier Tharan <olive@google.com>
-
Guido Trotter authored
* next:
  Fix another issue with hypervisor_name change
  Update NEWS and version for 2.0.2 release
  Improve the description of node flags in man page
  Add enabled hypervisors to TestConfigRunner
  Add a few more checks to verify config
  Make sure enabled_hypervisors list is valid
  Change default stripe count to 1
  Use full-stripe size in LVM growth
  Remove ConfigWriter.InitConfig
  RAPI: implement instance reinstall

Conflicts:
  test/ganeti.config_unittest.py: 529d13a4 contained a small fix which was also present in 066f465d
-
Guido Trotter authored
* master:
  Update NEWS and version for 2.0.2 release
  Improve the description of node flags in man page
  Change default stripe count to 1
  Use full-stripe size in LVM growth
  RAPI: implement instance reinstall
-
Iustin Pop authored
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Iustin Pop authored
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
- Jul 16, 2009
-
-
Raiford Storey authored
[iustin@google.com: slightly reworded the explanation for offline and changed the commit message]
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
Guido Trotter authored
This parameter is now mandatory for the cluster config to work.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Guido Trotter authored
- Check that the enabled hypervisors list is valid
- Check that the master node is a valid node
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
Guido Trotter authored
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
Guido Trotter authored
Currently we have both a default_hypervisor and an enabled_hypervisors list. The former is only settable at cluster init time, while the latter can be changed with cluster modify. This becomes cumbersome in a few ways: at cluster init time, for example, if we pass in a list of enabled hypervisors which doesn't include the "default" xen-pvm one, we're also forced to pass a default hypervisor, or an error will be reported. It is also currently possible to disable the default hypervisor in cluster-modify (with unknown results).

In order to avoid this we get rid of the field altogether and define the "first" enabled hypervisor as the default one. This makes it easy to change which one is the default, while maintaining coherency. At configuration upgrade we make sure that the old default is first in the list, so that 2.0 cluster defaults are preserved.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
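
A hedged sketch of the two pieces described: treating the first enabled hypervisor as the default, and reordering the list at configuration upgrade; the class and function names are illustrative:

    class Cluster(object):
      def __init__(self, enabled_hypervisors):
        self.enabled_hypervisors = enabled_hypervisors

      @property
      def default_hypervisor(self):
        # The "first" enabled hypervisor is the default one.
        return self.enabled_hypervisors[0]

    def UpgradeFrom20(cluster, old_default):
      """Make sure the old 2.0 default ends up first in the list."""
      if old_default in cluster.enabled_hypervisors:
        cluster.enabled_hypervisors.remove(old_default)
      cluster.enabled_hypervisors.insert(0, old_default)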
-
Guido Trotter authored
This reflects a discussion we had, according to which the full "parameters" implementation is too heavyweight for 2.1; we should have a partial version for now and decide again later.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
Iustin Pop authored
In order not to change the default during a stable series, we modify configure.ac to default to one stripe, in effect keeping the status quo (well, minus the LVM Attach() changes).
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Michael Hanselmann authored
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
Guido Trotter authored
InitConfig currently creates the cluster config_data, puts it into a dict, passes that to SimpleConfigWriter to load it from a dict (which just reuses the dict value) and then saves it. The SimpleConfigWriter is then returned, but ignored. With this patch we just write out the config_data at InitConfig time, and can thus remove SimpleConfigWriter altogether. The now-unused SimpleConfigReader.FromDict is also gone.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
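
A hedged sketch of InitConfig writing the configuration straight to disk; the helper names (objects.ConfigData, serializer.Dump, utils.WriteFile) follow Ganeti convention, but the exact signature here is an assumption:

    from ganeti import objects, serializer, utils

    def InitConfig(version, cluster_config, master_node_config, cfg_file):
      config_data = objects.ConfigData(version=version,
                                       cluster=cluster_config,
                                       nodes={master_node_config.name:
                                              master_node_config},
                                       instances={},
                                       serial_no=1)
      # Write the serialized config directly, no SimpleConfigWriter round-trip.
      utils.WriteFile(cfg_file, data=serializer.Dump(config_data.ToDict()))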
-
Guido Trotter authored
InitConfig returns a SimpleConfigWriter to InitCluster, which then passes it on to ssh.WriteKnownHostsFile, which extracts a couple of values from it. One line later the full ConfigWriter is initialized. By initializing it one line earlier we can pass the full writer to ssh.WriteKnownHostsFile, and thus no longer need to care about the SimpleConfigWriter returned by InitConfig.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
Guido Trotter authored
I got overexcited and forgot we have to remain compatible with Python 2.4. With this patch we move from sha256 to sha1 for HMAC-authenticated serialized messages, and we handle both newer and older Python by importing the right module for each.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
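
A sketch of the compatibility import described above (the wrapper function is illustrative):

    try:
      from hashlib import sha1            # Python >= 2.5
    except ImportError:
      import sha
      sha1 = sha.new                      # Python 2.4 fallback

    import hmac

    def _SignMessage(key, payload):
      """Return the hex HMAC-SHA1 signature of an already-serialized payload."""
      return hmac.new(key, payload, sha1).hexdigest()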
-
Iustin Pop authored
LVM has issues when growing striped volumes, so it's best to specify the growth in exact multiples of the full stripe size (as precisely as possible). For this we need a couple of changes:
- in LVM Attach(), we additionally query the VG extent size and the LV stripe count; since this makes lvs return a (possibly) multi-line output, we now split it into lines and only take the last one
- in LVM Grow(), we round up the increase to multiples of the full stripe size

The patch also sets the correct target size in DRBD growth.
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Olivier Tharan <olive@google.com>
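
A sketch of the rounding step in Grow(); the helper name and MiB units are illustrative, and the stripe count and extent size are the values queried at Attach() time:

    def _RoundUpToFullStripe(amount_mb, stripe_count, extent_size_mb):
      """Round a growth request up to a multiple of the full stripe size."""
      full_stripe = stripe_count * extent_size_mb
      rest = amount_mb % full_stripe
      if rest:
        amount_mb += full_stripe - rest
      return amount_mb

    # Example: growing by 1000 MiB on a 3-way striped LV with 4 MiB extents
    # is rounded up to 1008 MiB (a multiple of the 12 MiB full stripe).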
-
- Jul 14, 2009
-
-
Guido Trotter authored
It's been replaced by a simpler bootstrap.InitConfig function, which does the same job, and is currently unused.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-