- Feb 26, 2010
- Balazs Lecz authored
  Signed-off-by: Balazs Lecz <leczb@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Guido Trotter authored
  These hooks are run on all nodes, after the "base" daemons are started.
  Signed-off-by: Guido Trotter <ultrotter@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Feb 22, 2010
- Guido Trotter authored
  This function is a generic pythonic version of runparts. We currently use it in the backend HooksRunner, but we'll use it for running different directories as well.
  Signed-off-by: Guido Trotter <ultrotter@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
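  For illustration, a run-parts-style helper along these lines could look roughly like the sketch below. This is not Ganeti's actual implementation; the function name and the return format are assumptions.

  ```python
  import os
  import subprocess

  def run_parts(directory, env=None):
      """Run all executable files in a directory in sorted order.

      Minimal sketch: returns a list of (relative_name, exit_code) tuples.
      """
      results = []
      for name in sorted(os.listdir(directory)):
          path = os.path.join(directory, name)
          if not (os.path.isfile(path) and os.access(path, os.X_OK)):
              continue  # skip non-executable entries, like run-parts does
          proc = subprocess.Popen([path], env=env, close_fds=True)
          results.append((name, proc.wait()))
      return results
  ```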
- Guido Trotter authored
  And save lots of lines of code in the process.
  Signed-off-by: Guido Trotter <ultrotter@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Guido Trotter authored
  This allows running a command with only the passed-in environment, rather than just updating the default one with it. Now with unit testing.
  Signed-off-by: Guido Trotter <ultrotter@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
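  The distinction is essentially whether the child inherits the parent's environment. A minimal standard-library sketch of the idea follows; the parameter name reset_env mirrors the description above but is an assumption, not Ganeti's API.

  ```python
  import os
  import subprocess

  def run_cmd(args, env=None, reset_env=False):
      """Run a command, optionally with *only* the given environment.

      Sketch: with reset_env=False the passed variables are merged into a
      copy of os.environ; with reset_env=True they replace it entirely.
      """
      if reset_env:
          cmd_env = dict(env or {})
      else:
          cmd_env = os.environ.copy()
          cmd_env.update(env or {})
      return subprocess.call(args, env=cmd_env)

  # e.g. run_cmd(["env"], env={"FOO": "bar"}, reset_env=True) prints only FOO
  ```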
- René Nussbaumer authored
  Signed-off-by: René Nussbaumer <rn@google.com>
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Michael Hanselmann authored
  Jobs submitted via the standard command line utilities didn't give any indication that anything was happening while they were waiting in the job queue (e.g. due to other jobs using all worker threads) or acquiring locks. This could be very confusing for people not familiar with Ganeti's architecture. Now they'll show a message after the first WaitForJobChanges timeout.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  If too many clients try to connect to the master at the same time, some of them might fail if the master doesn't accept the connections fast enough.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Iustin Pop authored
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  We add this as a new opcode since we don't want to alter the behaviour of the current opcodes/LUs.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  This is a new mode that requests a solution for the evacuation of multiple nodes. The external script will be fed a list of names, and is expected to return a list of [instance, new_node(s)] lists, detailing the evacuation path of each instance.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
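  Roughly, the data exchanged with the allocator script would have the shapes described above; the sketch below is purely illustrative and the key and mode names are assumptions, not the exact IAllocator protocol.

  ```python
  # Input handed to the IAllocator script: the nodes to evacuate.
  request = {
      "type": "multi-evacuate",   # mode name assumed for illustration
      "evac_nodes": ["node1.example.com", "node2.example.com"],
  }

  # Expected reply: for each instance, the node(s) it should move to.
  expected_result = [
      ["instance1.example.com", ["node3.example.com"]],
      ["instance2.example.com", ["node4.example.com"]],
  ]
  ```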
- Iustin Pop authored
  This patch switches the default result key from 'nodes' to 'result'. The old name is still accepted for backwards compatibility, and should be removed in later versions.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  Currently the 'name' parameter in the constructor is required (as a non-keyword argument). Since the upcoming node evacuation IAllocator mode doesn't have 'name' as a valid argument, we move it into the per-request keys, leaving the constructor's required arguments more abstract.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  This moves the setting of the request member on the in_data, of the request type, and of the branching based on request type out of the individual functions and directly into the constructor. Since the values we're using externally are identical to the constants.py values, we also use those directly.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Feb 19, 2010
- Michael Hanselmann authored
  Until now this was only done for the master node, though the problem originally fixed in 8f215968 also occurs for other node daemons.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Feb 18, 2010
- Michael Hanselmann authored
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  This function could be useful in other places and this way we can easily unittest it.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  On fork, the tempfile module's pseudo-random generator is not reset. If several processes (e.g. two children, or parent and child) try to create a temporary file, they'll conflict. This function can be used to reset the name generator, which contains the pseudo-random generator. A unittest is included; it is in a separate script because it changes a variable in the tempfile module to speed up the test.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
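  A minimal sketch of the general idea follows; it relies on private tempfile attributes (_once_lock, _name_sequence) that may change between Python versions, and is not necessarily how Ganeti implements it.

  ```python
  import tempfile

  def reset_tempfile_name_generator():
      """Drop tempfile's cached name generator so it is rebuilt on next use.

      Sketch only: touches the module-private tempfile._name_sequence under
      tempfile._once_lock so the next mkstemp() builds a fresh generator
      (and thus fresh random state) in the current process.
      """
      tempfile._once_lock.acquire()
      try:
          tempfile._name_sequence = None  # recreated lazily with fresh state
      finally:
          tempfile._once_lock.release()

  # Intended use: call this in the child process right after os.fork().
  ```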
- Iustin Pop authored
  In case we add a node with “--no-ssh-key-check”, this should override any default yes/ask values in the system-wide (or per-user) ssh key check settings. Currently this only works in batch mode, whereas in non-batch mode we only override a 'no'. The patch fixes SshRunner such that in non-batch mode we enforce the value of StrictHostKeyChecking in all cases. Bug found and initial investigation by Theo Van Dinter.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
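  The key point is always passing an explicit -oStrictHostKeyChecking value so OpenSSH's configured default cannot win. A hypothetical helper (names and the option set are illustrative, not SshRunner's actual code) might look like this:

  ```python
  def build_ssh_options(host_key_check, batch):
      """Build OpenSSH options that make the host key policy explicit.

      Illustrative sketch; argument names are assumptions.
      """
      opts = ["-oEscapeChar=none"]
      if batch:
          # In batch mode ssh never asks interactively.
          opts.append("-oBatchMode=yes")
      # Always pass an explicit value so a system-wide or per-user
      # "ask"/"yes" default cannot override the caller's choice.
      opts.append("-oStrictHostKeyChecking=%s" %
                  ("yes" if host_key_check else "no"))
      return opts

  # build_ssh_options(False, False) -> [..., "-oStrictHostKeyChecking=no"]
  ```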
- Feb 17, 2010
- Iustin Pop authored
  This should have been done in the _ExpandNodeName patch.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  There's no such thing as OpProgrammerError (I found this as I wrote it in code elsewhere and pylint complained).
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Guido Trotter <ultrotter@google.com>
- Iustin Pop authored
  Currently we have lots of duplication of the error checking (and proper exception raising) around node/instance name expansion; LUCreateInstance is the only place where we have abstracted this. This patch creates two functions (ExpandNodeName and ExpandInstanceName) that will either raise the proper exception or return the expanded name, which allows a lot of duplicate code to be cleaned up.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
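  Conceptually the helpers wrap "expand or raise" in one place. The sketch below is illustrative only: the shared helper name is hypothetical, OpPrereqError is redefined locally for self-containment, and the config methods are assumed to return None for unknown names.

  ```python
  class OpPrereqError(Exception):
      """Stand-in for Ganeti's prerequisite error (sketch only)."""

  def _ExpandItemName(expand_fn, name, kind):
      """Expand a short name via expand_fn, or raise a uniform error."""
      full_name = expand_fn(name)
      if full_name is None:
          raise OpPrereqError("%s '%s' not known" % (kind, name))
      return full_name

  def ExpandNodeName(cfg, name):
      return _ExpandItemName(cfg.ExpandNodeName, name, "Node")

  def ExpandInstanceName(cfg, name):
      return _ExpandItemName(cfg.ExpandInstanceName, name, "Instance")
  ```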
- Feb 15, 2010
- Iustin Pop authored
  This patch extends commit 7ea7bcf6 by releasing all node locks in disk replace for the early release mode. The rationale behind this is:
  - LUCreateInstance already releases all node locks while waiting for disk synchronization, and does an instance startup later
  - WaitForSync only runs (for disk template 'drbd') 'lvs' and reads /proc/drbd on the primary node, which should be (modulo bugs in LVM) safe to run in parallel
  In any case, the worst I could foresee is a node having N lvs commands run in parallel on it, while being a primary for disk storage. Based on create instance doing this safely, and the fact that burnin with more than two instances per node is safe, I think this can be applied.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  These are both cleanups and, in the case of _MassageProcData, a switch from a weaker RE to a stronger one (we now require cs: in the line, whereas previously any line starting with \d+: was accepted).
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
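  To illustrate the stricter matching (the exact pattern in Ganeti may differ), a device line from /proc/drbd must carry a connection state field, not merely start with a minor number:

  ```python
  import re

  # A device line must contain a "cs:" (connection state) field, not merely
  # start with "<minor>:".  Pattern is illustrative, not Ganeti's own.
  _VALID_LINE_RE = re.compile(r"^\s*([0-9]+):\s.*\bcs:")

  def is_device_line(line):
      return bool(_VALID_LINE_RE.match(line))

  # Example /proc/drbd device line that matches:
  #  0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----
  ```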
- Iustin Pop authored
  In case the old node is offline, we won't be able to talk to it to remove the storage, and in most cases the node is powered off/unreachable. In this case it makes no sense to delay the storage release, so we automatically enable early_release mode, gaining parallelism during node evacuation.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Feb 11, 2010
- Iustin Pop authored
  This should fix issue 68: some hooks should be run on more nodes than they currently are. GrowDisk now runs on both nodes, remove runs the post hook on the instance's nodes, and failover and migrate run the post hook on the source node too. Thanks to Maxence for the initial investigation and patch.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  Per issue 71, migrate and failover need special variables for keeping the nodes consistent during instance migrations.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Iustin Pop authored
  This patch changes SubmitOpCode and SubmitOrSend so that we have a single function that does the generic mapping of CLI options to opcode attributes. Once all scripts pass the opts argument to SubmitOpCode, this will allow passing the debug parameter or the dry-run one to the LUs.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
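  A sketch of what such a single mapping point might look like; the function, option, and attribute names here are assumptions for illustration, not the actual Ganeti ones.

  ```python
  def set_generic_opcode_opts(opcode, opts):
      """Copy generic CLI options onto an opcode before submission (sketch)."""
      if opts is None:
          return opcode
      # Only forward options the caller actually defined.
      if getattr(opts, "dry_run", None) is not None:
          opcode.dry_run = opts.dry_run
      if getattr(opts, "debug", None) is not None:
          opcode.debug_level = opts.debug
      return opcode
  ```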
- Iustin Pop authored
  This changes it from a boolean to an integer count (for a future differentiation based on the actual debug level). All uses of the code only test its boolean status, so it still works as an integer value.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
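  Counting occurrences instead of treating the option as a flag is a one-line change with optparse; the sketch below assumes the option in question is the usual -d/--debug switch.

  ```python
  import optparse

  parser = optparse.OptionParser()
  # "count" increments the value each time the option appears.
  parser.add_option("-d", "--debug", action="count", default=0,
                    help="increase debug level (may be given multiple times)")

  opts, _ = parser.parse_args(["-d", "-d"])
  assert opts.debug == 2      # an integer now ...
  assert bool(opts.debug)     # ... but still truthy where it was a flag
  ```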
- Iustin Pop authored
  Also automatically fix opcodes which have this missing in the LU init routine.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Feb 10, 2010
- Michael Hanselmann authored
  While commit 413b7472 fixed the issue of poll(2) returning too soon, it didn't work when the poll(2) call should've been blocking. This is now fixed and verified.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Iustin Pop authored
  Commit 154b9580 changed (correctly) the __slots__ usage, but this broke dumpers/loaders since we relied directly on the class's own __slots__ field. To compensate, we introduce a simple function for computing the slots across all parent classes (if any), and use this instead of __slots__ directly. Note: the _all_slots() function is duplicated between objects.py and opcodes.py, but the only other option is to introduce a lang.py for such very basic language items.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
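  The helper is conceptually simple: walk the MRO and collect each class's __slots__. A minimal sketch (the real function in objects.py/opcodes.py may differ in signature and details):

  ```python
  def _all_slots(obj):
      """Return all __slots__ entries declared by obj's class and its parents."""
      slots = []
      for cls in type(obj).__mro__:
          slots.extend(getattr(cls, "__slots__", []))
      return slots

  class Base(object):
      __slots__ = ["a"]

  class Child(Base):
      __slots__ = ["b"]

  assert _all_slots(Child()) == ["b", "a"]
  ```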
- Michael Hanselmann authored
  Iustin Pop noticed unusually high CPU usage with 2.1's master daemon, even with very simple opcodes like OP_TEST_DELAY. As it turns out, we inadvertently passed seconds as milliseconds to a call to poll(2). Due to the way the loop around the call works it didn't break completely, but the poll(2) call returning too early caused higher CPU usage.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
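  As a reminder of the pitfall, select.poll() takes its timeout in milliseconds, unlike most other stdlib timeouts, which use seconds. A small standalone demonstration (not Ganeti code):

  ```python
  import select
  import socket

  reader, writer = socket.socketpair()
  writer.send(b"x")                     # make poll() return immediately

  poller = select.poll()
  poller.register(reader, select.POLLIN)

  timeout_seconds = 2.5
  # Buggy: poller.poll(timeout_seconds) would wait only ~2.5 ms and the
  # surrounding retry loop would spin, burning CPU.
  events = poller.poll(int(timeout_seconds * 1000))  # correct: seconds -> ms
  assert events, "expected the readable socket to be reported"
  ```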
- Feb 09, 2010
- Iustin Pop authored
  This patch adds an early_release parameter to the OpReplaceDisks and OpEvacuateNode opcodes, allowing earlier release of storage and, more importantly, of internal Ganeti locks. With early release, any locks and storage on all secondary nodes are released early. This is valid for change secondary (where we remove the storage on the old secondary, and release the locks on the old and new secondary) and replace on secondary (where we remove the old storage and release the lock on the secondary node). Using this, on a three-node setup with:
  - instance1 on nodes A:B
  - instance2 on nodes C:B
  it is possible to run a replace-disks -s (on secondary) in parallel for instances 1 and 2. Replace on primary will remove the storage, but not the locks, as we use the primary node later in the LU to check consistency. It is debatable whether to also remove the locks on the primary node, and thus have replace-disks hold zero locks during the sync. While this would allow greatly enhanced parallelism, let's first see how the removal of secondary locks works out.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Guido Trotter <ultrotter@google.com>
- Feb 08, 2010
- Michael Hanselmann authored
  When evacuating nodes, the iallocator was run for all instances without taking planned changes into consideration. This patch delays part of CheckPrereq and the running of the iallocator for node evacuation.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Feb 03, 2010
- Iustin Pop authored
  This doesn't implement the full functionality (we still need to add the debug level to the opcodes), but at least it won't require changing the RPC calls during the 2.1 series.
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Michael Hanselmann authored
  My previous patch, commit 785d142e, fixed the case where a node is marked offline. With this patch it will also handle other failures correctly, e.g.:
  * Hooks Results
    - ERROR: node node2.example.com: Communication failure in hooks execution: Connection failed (111: Connection refused)
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>