- 28 May, 2013 7 commits
-
Bernardo Dal Seno authored
Otherwise LVM may use a smaller number of PVs (spindles) to accommodate the default number of stripes.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
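The fix amounts to passing an explicit stripe count so LVM spreads the volume over all the requested PVs. A minimal sketch of the idea (the helper name and argument layout are assumptions, not Ganeti's actual code):

```python
def build_lvcreate_cmd(vg_name, lv_name, size_mb, pv_names):
    # Hypothetical sketch: request one stripe per PV so LVM allocates the LV
    # across all the given PVs instead of using its default stripe count,
    # which may touch fewer PVs (spindles) than requested.
    cmd = ["lvcreate", "-L", "%dm" % size_mb, "-n", lv_name,
           "--stripes", str(len(pv_names)), vg_name]
    return cmd + pv_names

cmd = build_lvcreate_cmd("xenvg", "disk0", 1024, ["/dev/sda1", "/dev/sdb1"])
```

With two PVs the command carries `--stripes 2`, so LVM cannot silently collapse the allocation onto a single spindle.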
-
Bernardo Dal Seno authored
If they are not specified on the command line, an error is reported. Also, disk creation would fail without them. QA has been updated.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Bernardo Dal Seno authored
When exclusive storage is active, any wrong or missing spindle information in the disks is updated too.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Bernardo Dal Seno authored
This RPC replaces the existing one, which only returned the disk size.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Bernardo Dal Seno authored
Two new methods are added to get the number of spindles: one returns it alone, the other together with the size. For devices that don't support spindles, None is returned.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
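The shape of this change could look roughly like the following sketch. The class and method names here are assumptions for illustration, not necessarily Ganeti's actual API:

```python
class BlockDev(object):
    # Hypothetical base-class sketch: the default implementations cover
    # devices that cannot report spindles.
    def GetActualSize(self):
        raise NotImplementedError

    def GetActualSpindles(self):
        # Devices that don't support spindles report None.
        return None

    def GetActualDimensions(self):
        # Size and spindles together, so callers need only one query.
        return (self.GetActualSize(), self.GetActualSpindles())


class LogicalVolume(BlockDev):
    def __init__(self, size, pv_names):
        self._size = size
        self._pvs = pv_names

    def GetActualSize(self):
        return self._size

    def GetActualSpindles(self):
        # For an LV, the spindle count is the number of PVs backing it.
        return len(self._pvs)


class FileStorage(BlockDev):
    # File-based storage has no meaningful spindle count, so it inherits
    # the None-returning default.
    def __init__(self, size):
        self._size = size

    def GetActualSize(self):
        return self._size
```

A file-backed device thus yields `(size, None)` from the combined method, while an LV reports the number of PVs it occupies.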
-
Bernardo Dal Seno authored
When an LV gets attached, the list of the PVs used by the LV is built. This will be used to count spindles for exclusive_storage, but it could also be useful to optimize disk growing and snapshotting.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
Bernardo Dal Seno authored
The parsing of "lvs" output is moved into private methods. The code is slightly more readable and testable. The split into two methods is useful for the following patches. Unit tests for the new functions are provided.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
-
- 24 May, 2013 4 commits
-
Bernardo Dal Seno authored
The bug was introduced in commit 345d395d.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
-
Guido Trotter authored
* stable-2.8:
  Bump up version for 2.7.0~rc2 release
  Create overall design document for 2.8
  Add NEWS entry for SO_PEERCRED fix
  Workaround missing SO_PEERCRED
  Add debugging clause to _ExpandCheckDisks error
  Reduce pylint maximum file length to 4500
  Mention hail network incompatibility in manpages
  Remove obsolete Debian-related documentation
  Update NEWS for 2.7.0 rc2
  Improve installation documentation
  Add Harep man page
  Stash Xen config file after a failed startup
  Fix owner of the OS log dir
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Bernardo Dal Seno <bdalseno@google.com>
-
Guido Trotter authored
* stable-2.7:
  Bump up version for 2.7.0~rc2 release
  Add NEWS entry for SO_PEERCRED fix
  Workaround missing SO_PEERCRED
  Add debugging clause to _ExpandCheckDisks error
  Mention hail network incompatibility in manpages
  Remove obsolete Debian-related documentation
  Update NEWS for 2.7.0 rc2
  Improve installation documentation
  Fix owner of the OS log dir
Conflicts:
  lib/cmdlib.py: port to cmdlib/instance_storage.py
  lib/tools/ensure_dirs.py: trivial
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
-
Guido Trotter authored
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Klaus Aehlig <aehlig@google.com>
-
- 23 May, 2013 13 commits
-
Bernardo Dal Seno authored
"gnt-instance add" and "gnt-instance recreate-disks" are tested with the number of spindles specified, when supported. Also, QA for "gnt-instance recreate-disks" now covers the case where disks are resized.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Michele Tartara authored
Also, clean up the list of draft designs.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Bernardo Dal Seno authored
Spindles didn't exist in 2.8, so they must be removed on downgrade.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Bernardo Dal Seno authored
For each disk, the number of requested spindles (if present) is shown.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Bernardo Dal Seno authored
The requested number of spindles is used to allocate PVs when creating new LVs.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Bernardo Dal Seno authored
Document all the arguments and return values of bdev.Create() and bdev.BlockDev.Create().
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Bernardo Dal Seno authored
The field is filled with the value provided on the command line.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Bernardo Dal Seno authored
Masterd checks that specifications for new disks don't include spindles when exclusive storage is disabled.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Bernardo Dal Seno authored
The option is parsed but ignored for the moment.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Bernardo Dal Seno authored
Escaping and initial capitals were not uniform.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Bernardo Dal Seno authored
The configuration is still the same as in 2.8 (the reference stable version for this branch), so downgrade shouldn't do anything. Unit tests are also updated, with a new 2.8 configuration file. The configuration file used for the upgrade+downgrade test was tailored to the 2.7 downgrade, and it's not needed any more.
Signed-off-by: Bernardo Dal Seno <bdalseno@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Guido Trotter authored
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Michele Tartara authored
Ganeti is currently not able to detect a legitimate shutdown request performed by a user from inside a Xen domain. This patch provides a design document for a mechanism able to cope with such events.
Signed-off-by: Michele Tartara <mtartara@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
- 22 May, 2013 16 commits
-
Klaus Aehlig authored
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Klaus Aehlig authored
hroller now also supports the options --skip-non-redundant and --ignore-non-redundant, and this should be documented in the man page as well. While there, also use the same order in the options section as in the synopsis, and in the synopsis group the options into:
- those that modify the set of nodes to be scheduled, and
- those that modify the constraints to be taken into account.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Guido Trotter authored
This bug happens in a few new distributions, so we work around it by defining the constant ourselves if it's missing.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
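The workaround presumably boils down to a guarded fallback definition, along these lines (the numeric value 17 is SO_PEERCRED on Linux for most architectures; other platforms may differ):

```python
import socket

# Sketch of the workaround: some Python builds on newer distributions do
# not expose socket.SO_PEERCRED, so fall back to defining the constant
# ourselves. 17 is the Linux value on common architectures.
try:
    SO_PEERCRED = socket.SO_PEERCRED
except AttributeError:
    SO_PEERCRED = 17
```

Code querying peer credentials can then use `sock.getsockopt(socket.SOL_SOCKET, SO_PEERCRED, ...)` regardless of whether the interpreter shipped the constant.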
-
Guido Trotter authored
This has been reported by users, so we should have the extra debugging available.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
-
Thomas Thrainer authored
The longest Python files we still have are around 4200 lines long. In order to prevent future growth, limit the maximum file length (checked by pylint) to 4500 lines.
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
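Assuming the check uses pylint's standard module-length option, the relevant configuration fragment would look something like:

```ini
; Sketch of the pylintrc change; pylint's format checker enforces this
; limit per module.
[FORMAT]
max-module-lines=4500
```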
-
Klaus Aehlig authored
The cluster now consists of 3 nodes, with DRBD instances between nodes 1 and 2, and between nodes 2 and 3. Additionally, nodes 1 and 3 each hold a non-redundant instance, but node 2 cannot host two additional instances. So:
- if we take non-redundant instances into account (the new default behavior), the nodes have to be rebooted individually,
- if we ignore non-redundant instances, nodes 1 and 3 can be rebooted simultaneously, and
- if we skip nodes with non-redundant instances, only a single node remains (of course, forming a single reboot group).
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Guido Trotter authored
We can't fix this in the 2.7 version, so it should be documented.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Thomas Thrainer <thomasth@google.com>
-
Thomas Thrainer authored
This part of the documentation refers to Grub instead of Grub2, but Grub2 has been the standard boot loader since Squeeze. As this part only (wrongly) repeats the preceding documentation, it's removed completely.
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Guido Trotter authored
Move "local" entries to the bottom, and leave global 2.7 entries at the top, as for the other releases.
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
-
Klaus Aehlig authored
The example cluster consists of 6 nodes, each hosting 2 instances and having capacity for 3. So, while the DRBD-induced graph consists of only isolated nodes, no more than two nodes can be rebooted at the same time.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Klaus Aehlig authored
Add an option to hroller restoring the old behavior of not taking any non-redundant instances into account when forming reboot groups.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Klaus Aehlig authored
Non-redundant instances need to be moved to a different node before maintenance of their node. Even though they can be moved to any node, there must be enough capacity to host the instances of the reboot group being evacuated. This is achieved by greedily moving the non-redundant instances to other nodes until we run out of capacity. In this way, we refine the groups obtained by coloring the DRBD-induced graph.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
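The greedy refinement described above can be sketched as follows. This is a hypothetical illustration of the idea, not hroller's actual code (hroller itself is written in Haskell); the function name and data layout are assumptions:

```python
def refine_reboot_groups(groups, nr_load, spare):
    """Greedily split reboot groups so non-redundant instances always fit.

    Sketch: 'groups' are the reboot groups obtained by coloring the
    DRBD-induced graph, nr_load[n] is the number of non-redundant
    instances on node n, and spare[n] is how many extra instances node n
    can host.
    """
    refined = []
    for group in groups:
        remaining = list(group)
        while remaining:
            subgroup, kept = [], []
            for node in remaining:
                # Capacity on nodes that stay up while this subgroup reboots.
                outside = sum(cap for n, cap in spare.items()
                              if n not in subgroup and n != node)
                needed = sum(nr_load[m] for m in subgroup) + nr_load[node]
                if needed <= outside:
                    # Greedily keep the node; its instances still fit outside.
                    subgroup.append(node)
                else:
                    # Out of capacity: defer this node to a later subgroup.
                    kept.append(node)
            if not subgroup:
                # Fall back to rebooting one node at a time to make progress.
                subgroup, kept = [remaining[0]], remaining[1:]
            refined.append(subgroup)
            remaining = kept
    return refined
```

When spare capacity is tight, a group like `["node1", "node3"]` gets split so each node's non-redundant instances can be evacuated before its reboot.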
-
Klaus Aehlig authored
So far, hroller ignores the fact that non-redundant instances exist. One option to deal with non-redundant instances is to not schedule their nodes for reboot. This is supported by adding the option --skip-non-redundant.
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Klaus Aehlig authored
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Thomas Thrainer authored
Based on user feedback, the installation documentation is clarified and extended.
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
(cherry picked from commit 3913eaa7)
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
-
Thomas Thrainer authored
Based on user feedback, the installation documentation is clarified and extended.
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
-