01 Dec, 2011 (4 commits)
- Andrea Spadaccini authored
  Signed-off-by: Andrea Spadaccini <spadaccio@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Andrea Spadaccini authored
  Signed-off-by: Andrea Spadaccini <spadaccio@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Andrea Spadaccini authored
  objects.py:
    * add disk parameters to Disk, Cluster, NodeGroup.
  constants.py:
    * add dictionaries that will hold types and default values for disk parameters (for now, empty).
  test/ganeti.constants_unittest.py:
    * add unit tests for consistency in disk parameters default values.
  rest of files:
    * add to gnt-cluster and gnt-group the options to manipulate disk parameters.
  Signed-off-by: Andrea Spadaccini <spadaccio@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  A quick QA run successfully finished with these changes.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Andrea Spadaccini <spadaccio@google.com>
30 Nov, 2011 (1 commit)
- Michael Hanselmann authored
  Keeping the node state up to date will require information from multiple VGs and hypervisors. Instead of requiring multiple calls, this change allows a single call to return all needed information. Existing users are changed.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
24 Nov, 2011 (3 commits)
- Michael Hanselmann authored
  Note: This bug only manifests itself in Ganeti 2.5, but since the problematic code also exists in 2.4, I decided to fix it there.
  If a node was assigned to a new group using “gnt-group assign-nodes”, the node object's group would be changed, but not the duplicate member list in the group object. The latter is an optimization to require fewer locks for other operations. The per-group member list is only kept in memory and not written to disk.
  Ganeti 2.5 starts to make use of the data kept in the per-group member list and consequently fails when it is out of date. The following commands can be used to reproduce the issue in 2.5 (in 2.4 the issue was confirmed using additional logging):
    $ gnt-group add foo
    $ gnt-group assign-nodes foo $(gnt-node list --no-header -o name)
    $ gnt-cluster verify  # Fails with KeyError
  This patch moves the code modifying node and group objects into “config.ConfigWriter” to do the complete operation under the config lock, and also to avoid making use of side-effects of modifying objects without calling “ConfigWriter.Update”. A unittest is included.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
  (cherry picked from commit 218f4c3d)
- Michael Hanselmann authored
  Note: This bug only manifests itself in Ganeti 2.5, but since the problematic code also exists in 2.4, I decided to fix it there.
  If a node was assigned to a new group using “gnt-group assign-nodes”, the node object's group would be changed, but not the duplicate member list in the group object. The latter is an optimization to require fewer locks for other operations. The per-group member list is only kept in memory and not written to disk.
  Ganeti 2.5 starts to make use of the data kept in the per-group member list and consequently fails when it is out of date. The following commands can be used to reproduce the issue in 2.5 (in 2.4 the issue was confirmed using additional logging):
    $ gnt-group add foo
    $ gnt-group assign-nodes foo $(gnt-node list --no-header -o name)
    $ gnt-cluster verify  # Fails with KeyError
  This patch moves the code modifying node and group objects into “config.ConfigWriter” to do the complete operation under the config lock, and also to avoid making use of side-effects of modifying objects without calling “ConfigWriter.Update”. A unittest is included.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  Commit c50452c3 added an exception when all instances should be evacuated off a node, but did so in a way which made pylint complain about unreachable code.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
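  For illustration, a minimal sketch of the kind of pattern pylint flags here; the function and names are hypothetical, not the actual Ganeti code:

      # Hypothetical sketch: any statement after an unconditional raise is
      # dead code and triggers pylint's "unreachable code" (W0101) warning.
      def _EvacuateAllInstances(node):
          raise NotImplementedError("Whole-node evacuation is not supported")
          return []  # pylint flags this line as unreachable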
23 Nov, 2011 (4 commits)
- Michael Hanselmann authored
  There is a design issue in the iallocator interface which prevents us from doing this.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: René Nussbaumer <rn@google.com>
- Michael Hanselmann authored
  Until now the iallocator constants for node evacuation (IALLOCATOR_NEVAC_*) were also used for the opcode. However, it turned out this was due to a misunderstanding and is incorrect. This patch adds new constants (with the same values) and changes the affected places. Fortunately the RAPI client already used good names, so no changes are necessary.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
- Michael Hanselmann authored
  When evacuating a node, only an assertion without informative text was used to check whether the necessary node locks had been acquired. On top of that, the list of nodes was evaluated without holding a node group lock, so this was changed as well. Also update some exception messages to include “retry the operation”.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  ConfigWriter.GetAllInstancesInfo returns a dictionary, not a list. Removing a node would fail with “too many values to unpack”.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Guido Trotter <ultrotter@google.com>
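  The failure mode is a common Python pitfall; a minimal standalone sketch with hypothetical data (not Ganeti's actual code):

      # Iterating a dict yields only its keys, so unpacking each key
      # (a string) into two names raises "too many values to unpack".
      all_instances = {"inst1.example.com": "node1",
                       "inst2.example.com": "node2"}

      try:
          for name, node in all_instances:  # buggy: treats dict as pairs
              pass
      except ValueError as err:
          print(err)  # too many values to unpack

      for name, node in all_instances.items():  # fixed
          print(name, node)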
22 Nov, 2011 (4 commits)
- Guido Trotter authored
  Queries are already compatible (be/memory is an alias for be/maxmem) and import/exports work. This patch fixes it for cluster init, modify and instance add/start/modify.
  Signed-off-by: Guido Trotter <ultrotter@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Guido Trotter authored
  Since for now we can only start instances at their maximum memory, we modify all checks to use that value. Once there is better support for using a value in between, some of these checks will have to move to minimum memory.
  Signed-off-by: Guido Trotter <ultrotter@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Guido Trotter authored
  Import uses the old "memory" parameter to populate the two new ones, if they're not already overridden. FinalizeExport exports minmem and maxmem, but also "memory" (as maxmem), to allow importing to older Ganeti clusters.
  Signed-off-by: Guido Trotter <ultrotter@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
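  A hedged sketch of the compatibility scheme described above; the parameter names match the commit message, but the helper functions are illustrative only:

      BE_MEMORY, BE_MINMEM, BE_MAXMEM = "memory", "minmem", "maxmem"

      def import_beparams(params):
          # Populate minmem/maxmem from a legacy "memory" value,
          # unless they are already overridden.
          params = dict(params)
          if BE_MEMORY in params:
              legacy = params.pop(BE_MEMORY)
              params.setdefault(BE_MINMEM, legacy)
              params.setdefault(BE_MAXMEM, legacy)
          return params

      def export_beparams(params):
          # Export minmem/maxmem, plus "memory" (as maxmem) so that
          # older clusters can still import the instance.
          return {BE_MINMEM: params[BE_MINMEM],
                  BE_MAXMEM: params[BE_MAXMEM],
                  BE_MEMORY: params[BE_MAXMEM]}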
- Guido Trotter authored
  Also pass the "memory" value for backward compatibility, for now.
  Signed-off-by: Guido Trotter <ultrotter@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
21 Nov, 2011 (2 commits)
- Andrea Spadaccini authored
  In the last merge I erroneously discarded the changes introduced by commit 2a6de57a "Check the results of master IP RPCs". This commit reintroduces them.
  Signed-off-by: Andrea Spadaccini <spadaccio@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Michael Hanselmann authored
  Patch tested and confirmed to work by Andrea Spadaccini <spadaccio@google.com>.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Andrea Spadaccini <spadaccio@google.com>
17 Nov, 2011 (3 commits)
- Iustin Pop authored
  Signed-off-by: Iustin Pop <iustin@google.com>
  Reviewed-by: Michael Hanselmann <hansmi@google.com>
- Michael Hanselmann authored
  After the iallocator has run, we can release any unused node locks. Since they must be held in exclusive mode, this should improve parallelization during instance creation.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  … instead of a variable which needs to be incremented for every step.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
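  The title of this commit is truncated in this log, so the exact mechanism is not recoverable here; as a hedged illustration of the idea only, Python's enumerate yields step numbers without a manually incremented counter:

      steps = ["create disks", "wipe disks", "start instance"]

      # Manual counter, as described above: the increment is easy to forget.
      step = 1
      for name in steps:
          print("Step %d/%d: %s" % (step, len(steps), name))
          step += 1

      # Counter-free alternative: enumerate produces the step number.
      for step, name in enumerate(steps, start=1):
          print("Step %d/%d: %s" % (step, len(steps), name))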
16 Nov, 2011 (3 commits)
- Agata Murawska authored
  Signed-off-by: Agata Murawska <agatamurawska@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Agata Murawska authored
  Signed-off-by: Agata Murawska <agatamurawska@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Agata Murawska authored
  Signed-off-by: Agata Murawska <agatamurawska@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
15 Nov, 2011 (14 commits)
- Michael Hanselmann authored
  The locking library doesn't like it when “release()” is called on a lockset or lock which isn't held by the current thread. Instead of modifying the library, which could have other side-effects, this rather simple change avoids errors when an LU simply tries to release all locks, even when it doesn't own any at a certain level.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
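  A minimal sketch of the guard pattern described above, assuming a lock class that tracks its owner; the names are illustrative, not the real Ganeti locking library:

      import threading

      class OwnedLock(object):
          """Lock that refuses release() from a non-owning thread."""
          def __init__(self):
              self._lock = threading.Lock()
              self._owner = None

          def acquire(self):
              self._lock.acquire()
              self._owner = threading.current_thread()

          def is_owned(self):
              return self._owner is threading.current_thread()

          def release(self):
              if not self.is_owned():
                  raise AssertionError("release() by non-owning thread")
              self._owner = None
              self._lock.release()

      def release_all(locks):
          # The simple fix: only release locks actually held by this thread.
          for lock in locks:
              if lock.is_owned():
                  lock.release()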
- Michael Hanselmann authored
  Also acquire instance and resource locks in shared mode (see comment).
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  If early-release is not used, the resource lock is kept while waiting for disks to sync.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  Important for when disks are converted. Release locks once they're not needed anymore. Make liberal use of assertions.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  Removing an instance affects available disk space and memory.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  Recreating disks conflicts with other disk operations; therefore the node resource lock must be acquired.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  While this doesn't really touch the node, it conflicts with, e.g., growing a disk, so the resource lock must be acquired.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  Also add one more feedback line. Downgrade the instance lock to shared mode while we're only waiting for disks to sync. The node lock is released when no longer needed.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  The node resource lock is released once the disks are in sync (that is, after wiping).
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  Nothing is being written to.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  Nothing is written to.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  No writes are being done.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Michael Hanselmann authored
  Just in case we ever add locks for querying nodes. Currently _NodeQuery's DeclareLocks is a no-op function.
  Signed-off-by: Michael Hanselmann <hansmi@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
14 Nov, 2011 (2 commits)
- Andrea Spadaccini authored
  Change the master IP address RPC call chain to accept the use_external_master_ip_script parameter. This introduces an unused parameter in backend.ActivateMasterIp and backend.DeactivateMasterIp, which will be used in the next commit.
  Signed-off-by: Andrea Spadaccini <spadaccio@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>
- Andrea Spadaccini authored
  Update cluster-verify to check the integrity of the default master IP address setup script and the presence and executability of the external one (if currently in use by the cluster).
  Signed-off-by: Andrea Spadaccini <spadaccio@google.com>
  Reviewed-by: Iustin Pop <iustin@google.com>