Commit ebab8f54 authored by Thomas Thrainer

Merge branch 'stable-2.9'

* stable-2.9:
  Version bump for 2.9.0 rc3
  Add NEWS entry for 2.9.0 rc3
  Remove incorrect comment
  cfg auto update: match ipolicy with enabled disk templates
  Remove obsolete configure option for shared file storage
* stable-2.8:
  Improve harep documentation

(all trivial merges)
Signed-off-by: Thomas Thrainer <>
Reviewed-by: Helga Velroyen <>
parents 70b634e6 71ae80d2
@@ -30,10 +30,10 @@ New features
procedure between two Ganeti versions that are both 2.10 or higher.
-Version 2.9.0 rc2
+Version 2.9.0 rc3
-*(Released Wed, 9 Oct 2013)*
+*(Released Tue, 15 Oct 2013)*
Incompatible/important changes
@@ -99,9 +99,20 @@ Haskell
- ``hslogger`` is now always required, even if confd is not enabled.
-Since 2.9.0 rc1
+Since 2.9.0 rc2
- in implicit configuration upgrade, match ipolicy with enabled disk templates
- improved harep documentation (inherited from stable-2.8)
Version 2.9.0 rc2
*(Released Wed, 9 Oct 2013)*
The second release candidate in the 2.9 series. Since 2.9.0 rc1:
- Fix bug in cfgupgrade that led to failure when upgrading from 2.8 with
at least one DRBD instance.
- Fix bug in cfgupgrade that led to an invalid 2.8 configuration after
@@ -157,15 +157,6 @@ AC_ARG_WITH([kvm-kernel],
AC_SUBST(KVM_KERNEL, $kvm_kernel)
# --with-shared-file-storage-dir=...
[directory to store files for shared file-based backend]
[ (default is /srv/ganeti/shared-file-storage)]
AC_SUBST(SHARED_FILE_STORAGE_DIR, $shared_file_storage_dir)
# --with-kvm-path=...
@@ -443,10 +443,13 @@ class ConfigData(ConfigObject):
    for instance in self.instances.values():
      instance.UpgradeConfig()
    if self.nodegroups is None:
      self.nodegroups = {}
    for nodegroup in self.nodegroups.values():
      nodegroup.UpgradeConfig()
      InstancePolicy.UpgradeDiskTemplates(
        nodegroup.ipolicy, self.cluster.enabled_disk_templates)
    if self.cluster.drbd_usermode_helper is None:
      if self.cluster.IsDiskTemplateEnabled(constants.DT_DRBD8):
        self.cluster.drbd_usermode_helper = constants.DEFAULT_DRBD_HELPER
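The conditional defaulting of the DRBD helper can be exercised in a standalone sketch; the ``"drbd"`` template name and ``"/bin/true"`` helper path stand in for Ganeti's constants and plain dicts stand in for the cluster object:

```python
# Sketch of the upgrade step above: the DRBD usermode helper is only
# defaulted when the DRBD disk template is actually enabled.
# "drbd" and "/bin/true" stand in for Ganeti's constants here.
DT_DRBD8 = "drbd"
DEFAULT_DRBD_HELPER = "/bin/true"

def upgrade_drbd_helper(cluster):
    """Fill in a default DRBD helper only if DRBD is enabled."""
    if cluster.get("drbd_usermode_helper") is None:
        if DT_DRBD8 in cluster.get("enabled_disk_templates", []):
            cluster["drbd_usermode_helper"] = DEFAULT_DRBD_HELPER
    return cluster

print(upgrade_drbd_helper({"enabled_disk_templates": ["drbd"]}))
# {'enabled_disk_templates': ['drbd'], 'drbd_usermode_helper': '/bin/true'}
print(upgrade_drbd_helper({"enabled_disk_templates": ["plain"]}))
# {'enabled_disk_templates': ['plain']}
```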
@@ -454,15 +457,12 @@ class ConfigData(ConfigObject):
      self.networks = {}
    for network in self.networks.values():
      network.UpgradeConfig()
  def _UpgradeEnabledDiskTemplates(self):
    """Upgrade the cluster's enabled disk templates by inspecting the
    currently enabled and/or used disk templates.

    """
    # enabled_disk_templates in the cluster config were introduced in 2.8.
    # Remove this code once upgrading from earlier versions is deprecated.
    if not self.cluster.enabled_disk_templates:
      template_set = \
        set([inst.disk_template for inst in self.instances.values()])
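The core of this derivation can be sketched standalone, using plain dicts in place of ``objects.Instance`` (instance names and template names are made up for illustration):

```python
# Sketch of _UpgradeEnabledDiskTemplates' core step: when a pre-2.8 config
# carries no enabled_disk_templates, derive them from the templates the
# existing instances actually use.
def derive_enabled_disk_templates(instances):
    """Return the set of disk templates used by any instance."""
    return set(inst["disk_template"] for inst in instances.values())

instances = {
    "web1": {"disk_template": "drbd"},
    "db1": {"disk_template": "plain"},
    "db2": {"disk_template": "plain"},
}
print(sorted(derive_enabled_disk_templates(instances)))
# ['drbd', 'plain']
```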
@@ -479,6 +479,8 @@ class ConfigData(ConfigObject):
    InstancePolicy.UpgradeDiskTemplates(
      self.cluster.ipolicy, self.cluster.enabled_disk_templates)
class NIC(ConfigObject):
@@ -913,6 +915,15 @@ class InstancePolicy(ConfigObject):
used as a placeholder for a few functions.
  @classmethod
  def UpgradeDiskTemplates(cls, ipolicy, enabled_disk_templates):
    """Upgrades the ipolicy configuration."""
    if constants.IPOLICY_DTS in ipolicy:
      if not set(ipolicy[constants.IPOLICY_DTS]).issubset(
          set(enabled_disk_templates)):
        ipolicy[constants.IPOLICY_DTS] = list(
          set(ipolicy[constants.IPOLICY_DTS]) & set(enabled_disk_templates))
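The subset check and intersection can be tried standalone; ``"disk-templates"`` stands in for ``constants.IPOLICY_DTS``, the template names are examples, and ``sorted`` replaces the original's unordered ``list(set & set)`` for a deterministic result:

```python
# Sketch of InstancePolicy.UpgradeDiskTemplates: restrict the ipolicy's
# allowed disk templates to those still enabled on the cluster.
IPOLICY_DTS = "disk-templates"  # stand-in for constants.IPOLICY_DTS

def upgrade_disk_templates(ipolicy, enabled_disk_templates):
    """Drop disk templates from the ipolicy that are not enabled."""
    if IPOLICY_DTS in ipolicy:
        allowed = set(ipolicy[IPOLICY_DTS])
        if not allowed.issubset(set(enabled_disk_templates)):
            ipolicy[IPOLICY_DTS] = sorted(allowed & set(enabled_disk_templates))

ipolicy = {IPOLICY_DTS: ["diskless", "drbd", "ext"]}
upgrade_disk_templates(ipolicy, ["drbd", "plain"])
print(ipolicy[IPOLICY_DTS])
# ['drbd']
```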
  @classmethod
  def CheckParameterSyntax(cls, ipolicy, check_std):
    """Check the instance policy for validity.
@@ -16,10 +16,56 @@ SYNOPSIS
-harep is the Ganeti auto-repair tool. It is able to detect that an instance is
+Harep is the Ganeti auto-repair tool. It is able to detect that an instance is
broken and to generate a sequence of jobs that will fix it, in accordance with
the policies set by the administrator.
Harep is able to recognize what state an instance is in (healthy, suspended,
needs repair, repair disallowed, pending repair, repair failed) and to lead it
through a sequence of steps that will bring the instance back to the healthy
state. Therefore, harep is mainly meant to be run regularly and frequently
using a cron job, so that it can actually follow the instance along the whole
process. At every run, harep will update the tags it adds to instances that
describe their repair status, and will submit jobs that actually perform the
required repair operations.
By default, harep only reports on the health status of instances, but doesn't
perform any repair action, as such actions might be potentially dangerous.
Therefore, harep will only touch instances that it has been explicitly
authorized to work on.
The tags enabling harep can be associated with single instances, with a node
group, or with the whole cluster, thereby affecting all the instances they
contain. The possible tags share the common structure::
where ``<type>`` can have the following values:
* ``fix-storage``: allow disk replacement or fixing the backend without
  affecting the instance itself (e.g. a broken DRBD secondary)
* ``migrate``: allow instance migration
* ``failover``: allow instance reboot on the secondary
* ``reinstall``: allow disks to be recreated and the instance to be reinstalled
Each element in the list of tags includes all the authorizations of the
previous one, with ``fix-storage`` being the least powerful and ``reinstall``
being the most powerful.
In case multiple autorepair tags act on the same instance, only one can
actually be active. The conflict is solved according to the following rules:

#. if multiple tags are in the same object, the least destructive takes
   precedence;
#. if the tags are across objects, the nearest tag wins.
A cluster has instances I1 and I2, where I1 has the ``failover`` tag, and
the cluster itself has both ``fix-storage`` and ``reinstall``.
The I1 instance will be allowed to ``failover``, the I2 instance only to
``fix-storage``.
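The two conflict rules can be sketched in Python; the function name and the tag-list representation are assumptions for illustration, as real harep reads these authorizations from Ganeti tags:

```python
# Sketch of harep's conflict resolution (illustrative, not harep's code).
# Destructiveness increases left to right.
ORDER = ["fix-storage", "migrate", "failover", "reinstall"]

def effective_repair_level(instance_tags, group_tags, cluster_tags):
    """Return the single active repair authorization, or None."""
    # Rule 2: the nearest tags win -- instance over node group over cluster.
    for tags in (instance_tags, group_tags, cluster_tags):
        if tags:
            # Rule 1: within one object, the least destructive applies.
            return min(tags, key=ORDER.index)
    return None

# The example above: I1 carries ``failover``; the cluster carries both
# ``fix-storage`` and ``reinstall``.
print(effective_repair_level(["failover"], [], ["fix-storage", "reinstall"]))
# failover
print(effective_repair_level([], [], ["fix-storage", "reinstall"]))
# fix-storage
```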
@@ -232,6 +232,33 @@ class TestClusterObject(unittest.TestCase):
cluster = objects.Cluster(ipolicy={"unknown_key": None})
self.assertRaises(errors.ConfigurationError, cluster.UpgradeConfig)
  def testUpgradeEnabledDiskTemplates(self):
    cfg = objects.ConfigData()
    cfg.cluster = objects.Cluster()
    cfg.cluster.volume_group_name = "myvg"
    instance1 = objects.Instance()
    instance1.disk_template = constants.DT_DISKLESS
    instance2 = objects.Instance()
    instance2.disk_template = constants.DT_RBD
    cfg.instances = { "myinstance1": instance1, "myinstance2": instance2 }
    nodegroup = objects.NodeGroup()
    nodegroup.ipolicy = {}
    nodegroup.ipolicy[constants.IPOLICY_DTS] = [instance1.disk_template, \
    cfg.cluster.ipolicy = {}
    cfg.cluster.ipolicy[constants.IPOLICY_DTS] = \
      [constants.DT_EXT, constants.DT_DISKLESS]
    cfg.nodegroups = { "mynodegroup": nodegroup }
    expected_disk_templates = [constants.DT_DRBD8,
class TestClusterObjectTcpUdpPortPool(unittest.TestCase):
  def testNewCluster(self):