- Nov 18, 2010
Iustin Pop authored
Currently, reinstallation of a DRBD instance with the secondary node offline does:

node1# gnt-instance reinstall -f instance1
Waiting for job 139053 for instance1...
Thu Nov 18 01:36:09 2010 - WARNING: Could not prepare block device disk/0 on node node3 (is_primary=False, pass=1): Node is marked offline
Thu Nov 18 01:36:09 2010 - WARNING: Could not shutdown block device disk/0 on node node3: Node is marked offline
Job 139053 for instance1 has failed: Failure: command execution error: Disk consistency error

Since this fails anyway, let's check the secondary nodes, thus preventing any modifications to the instance (e.g. OS type change):

node1# gnt-instance reinstall -f instance1
Waiting for job 139058 for instance1...
Job 139058 for instance1 has failed: Failure: prerequisites not met for this operation: error type: wrong_state, error details: Instance secondary node offline, cannot reinstall: node3

The patch needs modifications to the _CheckNodeOnline function in order to display meaningful messages ("Can't use offline node" would be very confusing for an instance reinstall, since we didn't select a node manually).

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
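A minimal sketch of how _CheckNodeOnline might accept a caller-supplied message as described; the msg parameter, its default, and the body are assumptions, not the actual patch:

from ganeti import errors

def _CheckNodeOnline(lu, node, msg=None):
  """Ensure a given node is online, with a context-aware error message.

  The msg argument (assumed here) lets callers such as instance reinstall
  report something more meaningful than the generic "Can't use offline node".
  """
  if msg is None:
    msg = "Can't use offline node"
  if lu.cfg.GetNodeInfo(node).offline:
    raise errors.OpPrereqError("%s: %s" % (msg, node), errors.ECODE_STATE)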
-
Iustin Pop authored
I was using the feedback_fn function incorrectly (it doesn't automatically expand the arguments).

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
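For illustration, a sketch of the difference, assuming feedback_fn takes one pre-formatted string rather than printf-style arguments (the message and call site are made up):

def _ReportDiskWipe(feedback_fn, disk_name):
  # Wrong: feedback_fn does not expand extra arguments the way logging does
  #   feedback_fn("Wiping disk %s", disk_name)
  # Right: format the message before handing it over
  feedback_fn("Wiping disk %s" % disk_name)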
-
- Nov 17, 2010
Iustin Pop authored
Since the contents of the dict are validated via ForceDictType, we can simply require that it is a dict here. The previous check was wrong, as it was copied from the HV checks (which also don't verify the leaf dict type).

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
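A sketch of the described pattern, with illustrative names; utils.ForceDictType is the real Ganeti helper that validates the dict contents:

from ganeti import errors
from ganeti import utils

def _CheckParamsDict(params, key_types):
  """Require a dict, then let ForceDictType validate the contents."""
  if not isinstance(params, dict):
    raise errors.OpPrereqError("Parameters must be a dictionary",
                               errors.ECODE_INVAL)
  utils.ForceDictType(params, key_types)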
-
- Nov 11, 2010
Iustin Pop authored
And fix an error message.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
David Knowles authored
Note: It appears this has been around since the initial checkin of TemporaryReservationManager. I have no idea what this could break, so someone else may want to test this more thoroughly.

Signed-off-by: David Knowles <dknowles@google.com>
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
- Nov 03, 2010
Michael Hanselmann authored
Tests have shown that the changes in commit b8d26c6e don't work as wanted. If any disk wasn't found on the node, all disks located on the same node would show as faulty. The cause was incorrect exception handling on the node. This patch changes the RPC call to return a per-disk success/error status, avoiding the problem.

Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Luca Bigliardi <shammash@google.com>
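A sketch of the per-disk result shape this describes, with a node-side loop that catches failures per disk; the function name and pair format are assumptions:

def _GetMirrorStatusMulti(disks, get_status):
  """Return a (success, payload) pair per disk instead of one global result.

  get_status is the node-side lookup; a failure is recorded for that disk
  alone instead of failing the whole RPC call.
  """
  result = []
  for disk in disks:
    try:
      result.append((True, get_status(disk)))
    except Exception as err:
      result.append((False, str(err)))
  return result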
-
Michael Hanselmann authored
Some of them were forgotten.

Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
-
- Nov 01, 2010
Guido Trotter authored
We can now change a node's secondary IP.

Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Guido Trotter authored
This is already disabled for the same type of request a couple of lines above. The new code was introduced in e986f20c but didn't have the disables.

Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Guido Trotter authored
There is no "private" ip in Ganeti, we only have primary and secondary ones. Whether they are public or private is a per-installation detail. Signed-off-by:
Guido Trotter <ultrotter@google.com> Reviewed-by:
Michael Hanselmann <hansmi@google.com>
-
Guido Trotter authored
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Guido Trotter authored
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Guido Trotter authored
The "I always wanted to do this" commit. Signed-off-by:
Guido Trotter <ultrotter@google.com> Reviewed-by:
Michael Hanselmann <hansmi@google.com>
-
Guido Trotter authored
Changing the volume group is a lot less frequent than acting on a node group. As such, we drop the "-g" shortcut and require the long option to be passed. In 2.3 the commands which used to accept the volume group as "-g" won't have any node group option, so no confusion will arise. Later on we may pass "-g" as the initial node group name to gnt-cluster init, although that's not strictly necessary, as modifying it later is always possible.

Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Michael Hanselmann authored
This fixes a bug where the ssconf_instance_list file was not updated after an instance rename.

Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
-
- Oct 29, 2010
Michael Hanselmann authored
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
-
Michael Hanselmann authored
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
-
Michael Hanselmann authored
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
-
Michael Hanselmann authored
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
-
Michael Hanselmann authored
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
-
Michael Hanselmann authored
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
-
- Oct 28, 2010
Michael Hanselmann authored
A new constant, LUXI_VERSION, is used to verify the peer's version. The version is optional, so old(er) clients and servers talking to peers not supporting it won't break.

Example with mismatching library:

$ gnt-instance list
Unhandled Ganeti error: LUXI version mismatch, server 2020000, request 1010000

Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
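A minimal sketch of such a handshake; LUXI_VERSION and the error text come from the commit, while the function and the optional-field handling are assumptions:

from ganeti import errors

LUXI_VERSION = 2020000  # value taken from the example output above

def _CheckVersion(peer_version):
  """Reject mismatched peers, but accept peers that sent no version."""
  if peer_version is not None and peer_version != LUXI_VERSION:
    raise errors.LuxiError("LUXI version mismatch, server %s, request %s" %
                           (LUXI_VERSION, peer_version))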
-
Michael Hanselmann authored
This allows LUXI errors to be encoded and serialized.

Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
Michael Hanselmann authored
To remove the instance after an export, it needs to be stopped. This can be achieved using the parameter “shutdown”, or by explicitly shutting down the instance before exporting. The latter would still require the “shutdown” parameter to be set. To make it more intuitive, this requirement is changed with this patch. Instances already stopped are accepted for automatic removal.

Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
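A sketch of the relaxed check, assuming the LU sees the instance's run state as admin_up (the attribute and option names are assumptions here):

from ganeti import errors

def _CheckRemovePrereq(op, instance):
  """Allow removal after export if the instance is already stopped."""
  if op.remove_instance and not op.shutdown and instance.admin_up:
    raise errors.OpPrereqError("Can not remove instance without shutting it"
                               " down before", errors.ECODE_STATE)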
-
Guido Trotter authored
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Guido Trotter authored
The nodes and instances parameters to the constructor are mandatory anyway, as a value of None will fail when creating the LockSet. Rather than fixing this by adding code, since we never used the default values, let's remove them and require that the parameters be passed. This also fixes the only places where we initialized GanetiLockManager with keyed parameters and without arguments.

Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
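A before/after sketch of the constructor; the lock levels and LockSet come from ganeti.locking, while the body shown is an assumption:

from ganeti import locking

class GanetiLockManager(object):
  # Before: def __init__(self, nodes=None, instances=None)
  # After: both parameters are mandatory, since LockSet(None) would fail anyway
  def __init__(self, nodes, instances):
    self.__keyring = {
      locking.LEVEL_NODE: locking.LockSet(nodes),
      locking.LEVEL_INSTANCE: locking.LockSet(instances),
    }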
-
Iustin Pop authored
This is just a basic check, plus a warning. In the future, we might do more checks, or prevent simple onlining (without readd) if --force is not passed.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Iustin Pop authored
We will need the new role in CheckPrereq, so move its computation there and save the new role to self.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Iustin Pop authored
This small patch modifies LUCreateInstance, LUReplaceDisks and LUMoveInstance to not use non-vm_capable nodes.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
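One plausible shape for that guard, sketched as a helper the affected LUs could call (the helper name and message are assumptions):

from ganeti import errors

def _CheckNodeVmCapable(lu, node):
  """Refuse nodes that cannot host instances."""
  if not lu.cfg.GetNodeInfo(node).vm_capable:
    raise errors.OpPrereqError("Can't use non-vm_capable node %s" % node,
                               errors.ECODE_STATE)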
-
Iustin Pop authored
Also changes the error code for the other CheckNode* helpers to ECODE_STATE, not ECODE_INVAL: ECODE_INVAL is for requests that are invalid (e.g. creating a DRBD instance with one node), whereas ECODE_STATE denotes requests that are not satisfiable due to cluster/node/instance state.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
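To illustrate the distinction with the error codes from ganeti.errors (the checks themselves are made-up examples):

from ganeti import errors

# ECODE_INVAL: the request itself is malformed, regardless of cluster state
def check_request(disk_template, nodes):
  if disk_template == "drbd" and len(nodes) < 2:
    raise errors.OpPrereqError("DRBD instances need two nodes",
                               errors.ECODE_INVAL)

# ECODE_STATE: the request is valid, but the current state cannot satisfy it
def check_state(node_info):
  if node_info.offline:
    raise errors.OpPrereqError("Node %s is offline" % node_info.name,
                               errors.ECODE_STATE)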
-
Iustin Pop authored
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Iustin Pop authored
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Iustin Pop authored
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Iustin Pop authored
This is used in two places already, and will be needed in a third, so let's abstract it.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Iustin Pop authored
The method to make vm_capable integrate easily into cluster verify is as follows:

- we add a new NV_VMNODES that represents *non*-vm-capable nodes
- the LU populates this list (it's expected that non-vm_capable nodes are few compared to vm_capable nodes)
- backend skips the checks that are related to VM hosting (see the sketch after this entry)
- in the LU, we reorder the VM-related checks so that they occur after the non-VM (generic) tests, and we only execute them conditionally

Additionally, we add some support to the instance checks to detect instances living on bad nodes.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
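A sketch of the backend side of that flow, assuming the node compares its own name against the NV_VMNODES list it receives; the surrounding verify structure is heavily simplified:

from ganeti import constants

def _VerifyNodeSketch(what, my_name, results):
  """Skip VM-hosting checks on non-vm_capable nodes."""
  vm_capable = my_name not in what.get(constants.NV_VMNODES, [])
  if vm_capable:
    # hypervisor, instance-list and bridge checks would run here
    results["vm-checks"] = "executed"
  else:
    results["vm-checks"] = "skipped (non-vm_capable node)"
  return results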
-
Iustin Pop authored
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Iustin Pop authored
And also do some cleanup: we only run the role-change actions if the node has actually changed roles.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Michael Hanselmann <hansmi@google.com>
-
Iustin Pop authored
This can be used to compute a node's instances easily; we also add a small function to get all non-vm_capable nodes.

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
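A sketch of what such helpers could look like on top of the config (the helper names and the all_nodes attribute are assumptions based on the commit text):

def _GetNodeInstances(cfg, node_name):
  """Return the instances that have node_name as primary or secondary."""
  return [inst.name for inst in cfg.GetAllInstancesInfo().values()
          if node_name in inst.all_nodes]

def _GetNonVmCapableNodes(cfg):
  """Return the names of all non-vm_capable nodes."""
  return [node.name for node in cfg.GetAllNodesInfo().values()
          if not node.vm_capable]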
-
Michael Hanselmann authored
This will show a warning if, for example, one side of a DRBD disk becomes unavailable. The data is collected separately from the other verification data.

Example output:

* Verifying instance status
  - ERROR: instance inst1: disk/0 on node2 is faulty

Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-
- Oct 27, 2010
Michael Hanselmann authored
Signed-off-by: Michael Hanselmann <hansmi@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
-