Commit 951accad authored by Iustin Pop

Merge branch 'devel-2.6'



* devel-2.6:
  Make stable-2.6 compatible with newer pep8
  Fix computation of disk sizes in _ComputeDiskSize
  Add verification of RPC results in _WipeDisks
  Add test for checking that all gnt-* subcommands run OK
  Fix double use of PRIORITY_OPT in gnt-node migrate
  Add new Makefile target to rebuild the whole dist
  rapi client: accept arbitrary shutdown arguments
  Handle offline nodes for "instance down" checks
  Add missing rst files to Makefile.am
  Release version 2.6.0 (final)
  Fix 'explicitely' common typo
  Fix issue in LUClusterVerifyGroup with multi-group clusters
  Add QA test for node group modification of ndparams
  Fix node group modification of node parameters
  Fix RST formatting in NEWS file
  Update NEWS and bump version for release 2.5.2
  Fix boot=on flag for CDROMs
  KVM: only pass boot flag once
  Ensure a stable content of the bash completion file

Conflicts (all trivial):
        Makefile.am  (design drafts on both sides, pep8 changes)
        autotools/build-bash-completion (copyright years)
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
parents 5d0389dd b5df6331
......@@ -103,7 +103,7 @@ dependencies.
distributions need to apply the patches on their own.
Ganeti will use the option if it's detected by the ``configure``
script; auto-detection can be disabled by explicitely passing
script; auto-detection can be disabled by explicitly passing
``--enable-socat-compress`` (use the option to disable compression) or
``--disable-socat-compress`` (don't use the option).
......
......@@ -336,6 +336,7 @@ docrst = \
doc/design-node-state-cache.rst \
doc/design-virtual-clusters.rst \
doc/design-bulk-create.rst \
doc/design-query-splitting.rst \
doc/devnotes.rst \
doc/glossary.rst \
doc/hooks.rst \
......@@ -346,6 +347,7 @@ docrst = \
doc/locking.rst \
doc/move-instance.rst \
doc/news.rst \
doc/ovfconverter.rst \
doc/rapi.rst \
doc/security.rst \
doc/upgrade.rst \
......@@ -1406,6 +1408,19 @@ distcheck-hook:
distcheck-release dist-release: export BUILD_RELEASE = 1
distcheck-release: distcheck
distrebuildcheck: dist
set -e; \
builddir=$$(mktemp -d $(abs_srcdir)/distrebuildcheck.XXXXXXX); \
trap "echo Removing $$builddir; cd $(abs_srcdir); rm -rf $$builddir" EXIT; \
cd $$builddir; \
tar xzf $(abs_srcdir)/$(distdir).tar.gz; \
cd $(distdir); \
./configure; \
$(MAKE) maintainer-clean; \
cp $(abs_srcdir)/vcs-version .; \
./configure; \
$(MAKE) $(AM_MAKEFLAGS)
dist-release: dist
set -e; \
for i in $(DIST_ARCHIVES); do \
......
......@@ -2,10 +2,20 @@ News
====
Version 2.6.0 rc4
-----------------
Version 2.6.0
-------------
*(Released Fri, 27 Jul 2012)*
.. attention:: The ``LUXI`` protocol has been made more consistent
regarding its handling of command arguments. This, however, leads to
incompatibility issues with previous versions. Please ensure that you
restart Ganeti daemons soon after the upgrade, otherwise most
``LUXI`` calls (job submission, setting/resetting the drain flag,
pausing/resuming the watcher, cancelling and archiving jobs, querying
the cluster configuration) will fail.
*(Released Thu, 19 Jul 2012)*
New features
~~~~~~~~~~~~
......@@ -305,10 +315,20 @@ changed to allow the same clock skew as permitted by the cluster
verification. This will remove some rare but hard to diagnose errors in
import-export.
The ``LUXI`` protocol has been made more consistent regarding its
handling of command arguments. This, however, leads to incompatibility
issues with previous versions. Please ensure that you restart Ganeti
daemons after the upgrade, otherwise job submission will fail.
Version 2.6.0 rc4
-----------------
*(Released Thu, 19 Jul 2012)*
Very few changes from rc4 to the final release, only bugfixes:
- integrated fixes from release 2.5.2 (fix general boot flag for KVM
instance, fix CDROM booting for KVM instances)
- fixed node group modification of node parameters
- fixed issue in LUClusterVerifyGroup with multi-group clusters
- fixed generation of bash completion to ensure a stable ordering
- fixed a few typos
Version 2.6.0 rc3
......@@ -413,6 +433,26 @@ Plus integrated fixes from the 2.5 branch:
- KVM live migration when using a custom keymap
Version 2.5.2
-------------
*(Released Tue, 24 Jul 2012)*
A small bugfix release, with no new features:
- fixed bash-isms in kvm-ifup, for compatibility with systems which use a
different default shell (e.g. Debian, Ubuntu)
- fixed KVM startup and live migration with a custom keymap (fixes Issue
243 and Debian bug #650664)
- fixed compatibility with KVM versions that don't support multiple boot
devices (fixes Issue 230 and Debian bug #624256)
Additionally, a few fixes were done to the build system (fixed parallel
build failures) and to the unittests (fixed race condition in test for
FileID functions, and the default enable/disable mode for QA test is now
customisable).
Version 2.5.1
-------------
......
#!/usr/bin/python
#
# Copyright (C) 2009, 2012 Google Inc.
# Copyright (C) 2009, 2010, 2011, 2012 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
......@@ -319,7 +319,7 @@ class CompletionWriter:
wrote_opt = False
for (suggest, allnames) in values.iteritems():
for (suggest, allnames) in values.items():
longnames = [i for i in allnames if i.startswith("--")]
if wrote_opt:
......@@ -551,14 +551,16 @@ def WriteCompletion(sw, scriptname, funcname,
# Group commands by arguments and options
grouped_cmds = {}
for cmd, (_, argdef, optdef, _, _) in commands.iteritems():
for cmd, (_, argdef, optdef, _, _) in commands.items():
if not (argdef or optdef):
continue
grouped_cmds.setdefault((tuple(argdef), tuple(optdef)), set()).add(cmd)
# We're doing options and arguments to commands
sw.Write("""case "${COMP_WORDS[1]}" in""")
for ((argdef, optdef), cmds) in grouped_cmds.items():
sort_grouped = sorted(grouped_cmds.items(),
key=lambda (_, y): sorted(y)[0])
for ((argdef, optdef), cmds) in sort_grouped:
assert argdef or optdef
sw.Write("%s)", "|".join(map(utils.ShellQuote, sorted(cmds))))
sw.IncIndent()
......@@ -610,7 +612,7 @@ def GetCommands(filename, module):
aliases = getattr(module, "aliases", {})
if aliases:
commands = commands.copy()
for name, target in aliases.iteritems():
for name, target in aliases.items():
commands[name] = commands[target]
return commands
......
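The build-bash-completion change above sorts the grouped commands before emitting them, so the generated completion file has identical content on every run (dict iteration order was not deterministic). A minimal Python sketch of the same idea — the names and command table here are illustrative, not Ganeti's actual data:

```python
def group_commands(commands):
    """Group command names by their (argdef, optdef) signature."""
    grouped = {}
    for cmd, (argdef, optdef) in commands.items():
        grouped.setdefault((tuple(argdef), tuple(optdef)), set()).add(cmd)
    return grouped

def stable_groups(grouped):
    """Return groups in a deterministic order: sort each group's
    command set, then sort groups by their first command name."""
    return sorted(((key, sorted(cmds)) for key, cmds in grouped.items()),
                  key=lambda item: item[1][0])

commands = {
    "add": (["name"], ["--force"]),
    "create": (["name"], ["--force"]),   # same signature as "add"
    "list": ([], ["--output"]),
}
groups = stable_groups(group_commands(commands))
# groups is now the same list regardless of dict iteration order
```

Sorting both the group members and the groups themselves is what makes repeated builds byte-for-byte identical.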
......@@ -2,7 +2,7 @@
m4_define([gnt_version_major], [2])
m4_define([gnt_version_minor], [6])
m4_define([gnt_version_revision], [0])
m4_define([gnt_version_suffix], [~rc4])
m4_define([gnt_version_suffix], [])
m4_define([gnt_version_full],
m4_format([%d.%d.%d%s],
gnt_version_major, gnt_version_minor,
......
......@@ -64,7 +64,7 @@ An opcode runs only once all its dependency requirements have been
fulfilled.
Any job referring to a cancelled job is also cancelled unless it
explicitely lists :pyeval:`constants.JOB_STATUS_CANCELED` as a requested
explicitly lists :pyeval:`constants.JOB_STATUS_CANCELED` as a requested
status.
In case a referenced job can not be found in the normal queue or the
......
......@@ -392,7 +392,7 @@ Other discussed solutions
Another solution discussed was to add an additional column for each
non-static field containing the status. Clients interested in the status
could explicitely query for it.
could explicitly query for it.
.. vim: set textwidth=72 :
.. Local Variables:
......
......@@ -309,7 +309,7 @@ oMachineReadable = Option "" ["machine-readable"]
flag <- parseYesNo True f
return $ opts { optMachineReadable = flag }) "CHOICE")
"enable machine readable output (pass either 'yes' or 'no' to\
\ explicitely control the flag, or without an argument defaults to\
\ explicitly control the flag, or without an argument defaults to\
\ yes"
oMaxCpu :: OptType
......
......@@ -124,7 +124,7 @@ def _BuildOpcodeParams(op_id, include, exclude, alias):
key=compat.fst)
for (rapi_name, name, default, test, doc) in params_with_alias:
# Hide common parameters if not explicitely included
# Hide common parameters if not explicitly included
if (name in COMMON_PARAM_NAMES and
(not include or name not in include)):
continue
......
......@@ -1001,7 +1001,7 @@ commands = {
MigrateNode, ARGS_ONE_NODE,
[FORCE_OPT, NONLIVE_OPT, MIGRATION_MODE_OPT, DST_NODE_OPT,
IALLOCATOR_OPT, PRIORITY_OPT, IGNORE_IPOLICY_OPT,
NORUNTIME_CHGS_OPT, SUBMIT_OPT, PRIORITY_OPT],
NORUNTIME_CHGS_OPT, SUBMIT_OPT],
"[-f] <node>",
"Migrate all the primary instance on a node away from it"
" (only for instances of type drbd)"),
......
......@@ -1111,13 +1111,16 @@ def _CheckInstanceState(lu, instance, req_states, msg=None):
 
if constants.ADMINST_UP not in req_states:
pnode = instance.primary_node
ins_l = lu.rpc.call_instance_list([pnode], [instance.hypervisor])[pnode]
ins_l.Raise("Can't contact node %s for instance information" % pnode,
prereq=True, ecode=errors.ECODE_ENVIRON)
if instance.name in ins_l.payload:
raise errors.OpPrereqError("Instance %s is running, %s" %
(instance.name, msg), errors.ECODE_STATE)
if not lu.cfg.GetNodeInfo(pnode).offline:
ins_l = lu.rpc.call_instance_list([pnode], [instance.hypervisor])[pnode]
ins_l.Raise("Can't contact node %s for instance information" % pnode,
prereq=True, ecode=errors.ECODE_ENVIRON)
if instance.name in ins_l.payload:
raise errors.OpPrereqError("Instance %s is running, %s" %
(instance.name, msg), errors.ECODE_STATE)
else:
lu.LogWarning("Primary node offline, ignoring check that instance"
" is down")
 
 
def _ComputeMinMaxSpec(name, qualifier, ipolicy, value):
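The _CheckInstanceState fix above skips the "is the instance running?" RPC when the primary node is offline, since that call could never succeed, and logs a warning instead. A standalone sketch of the pattern — the node/instance objects and the `list_running` callback are simplified stand-ins, not Ganeti's classes:

```python
from types import SimpleNamespace

def check_instance_down(node, instance, list_running, log_warning):
    """Raise if `instance` is running on `node`; when the node is
    marked offline, skip the (impossible) RPC and just warn."""
    if node.offline:
        log_warning("Primary node offline, ignoring check that"
                    " instance is down")
        return
    if instance.name in list_running(node):
        raise RuntimeError("Instance %s is running" % instance.name)

warnings = []
inst = SimpleNamespace(name="inst1")
# Offline node: nothing is contacted, one warning is logged.
check_instance_down(SimpleNamespace(offline=True), inst,
                    lambda n: ["inst1"], warnings.append)
```

The key point is that the offline check happens before any remote call, so an unreachable node no longer turns a prerequisite check into an RPC failure.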
......@@ -3218,10 +3221,12 @@ class LUClusterVerifyGroup(LogicalUnit, _VerifyErrors):
if master_node not in self.my_node_info:
additional_nodes.append(master_node)
vf_node_info.append(self.all_node_info[master_node])
# Add the first vm_capable node we find which is not included
# Add the first vm_capable node we find which is not included,
# excluding the master node (which we already have)
for node in absent_nodes:
nodeinfo = self.all_node_info[node]
if nodeinfo.vm_capable and not nodeinfo.offline:
if (nodeinfo.vm_capable and not nodeinfo.offline and
node != master_node):
additional_nodes.append(node)
vf_node_info.append(self.all_node_info[node])
break
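The LUClusterVerifyGroup fix above adds `node != master_node` to the candidate filter, because the master was already appended just before and adding it twice broke verification on multi-group clusters. The selection logic, extracted as a sketch with stand-in node objects:

```python
from types import SimpleNamespace

def pick_extra_node(absent_nodes, all_node_info, master_node):
    """Return the first vm_capable, online node that is not the
    master (the master is always added separately)."""
    for node in absent_nodes:
        info = all_node_info[node]
        if info.vm_capable and not info.offline and node != master_node:
            return node
    return None
```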
......@@ -9022,6 +9027,7 @@ def _WipeDisks(lu, instance):
result = lu.rpc.call_blockdev_pause_resume_sync(node,
(instance.disks, instance),
True)
result.Raise("Failed RPC to node %s for pausing the disk syncing" % node)
 
for idx, success in enumerate(result.payload):
if not success:
......@@ -9069,12 +9075,17 @@ def _WipeDisks(lu, instance):
(instance.disks, instance),
False)
 
for idx, success in enumerate(result.payload):
if not success:
lu.LogWarning("Resume sync of disk %d failed, please have a"
" look at the status and troubleshoot the issue", idx)
logging.warn("resume-sync of instance %s for disks %d failed",
instance.name, idx)
if result.fail_msg:
lu.LogWarning("RPC call to %s for resuming disk syncing failed,"
" please have a look at the status and troubleshoot"
" the issue: %s", node, result.fail_msg)
else:
for idx, success in enumerate(result.payload):
if not success:
lu.LogWarning("Resume sync of disk %d failed, please have a"
" look at the status and troubleshoot the issue", idx)
logging.warn("resume-sync of instance %s for disks %d failed",
instance.name, idx)
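Both _WipeDisks changes apply the same rule: an RPC result's transport-level failure (`fail_msg`, or `Raise`) must be checked before its per-disk payload is trusted — otherwise iterating the payload of a failed call reports nonsense. A hedged sketch, where `RpcResult` is a minimal stand-in for Ganeti's result object:

```python
class RpcResult:
    def __init__(self, payload=None, fail_msg=None):
        self.payload = payload      # per-disk success flags
        self.fail_msg = fail_msg    # set when the RPC itself failed

def failed_disks(result):
    """Return indices of disks whose resume-sync failed, or None when
    the RPC never produced a usable payload."""
    if result.fail_msg:
        return None                 # whole call failed; payload is void
    return [idx for idx, ok in enumerate(result.payload) if not ok]
```

Returning a distinct value for "the call itself failed" mirrors the new `if result.fail_msg:` branch above, which warns about the node rather than about individual disks.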
 
 
def _CreateDisks(lu, instance, to_skip=None, target_node=None):
......@@ -9214,7 +9225,7 @@ def _ComputeDiskSizePerVG(disk_template, disks):
 
 
def _ComputeDiskSize(disk_template, disks):
"""Compute disk size requirements in the volume group
"""Compute disk size requirements according to disk template
 
"""
# Required free disk space as a function of disk and swap space
......@@ -9224,10 +9235,10 @@ def _ComputeDiskSize(disk_template, disks):
# 128 MB are added for drbd metadata for each disk
constants.DT_DRBD8:
sum(d[constants.IDISK_SIZE] + DRBD_META_SIZE for d in disks),
constants.DT_FILE: None,
constants.DT_SHARED_FILE: 0,
constants.DT_FILE: sum(d[constants.IDISK_SIZE] for d in disks),
constants.DT_SHARED_FILE: sum(d[constants.IDISK_SIZE] for d in disks),
constants.DT_BLOCK: 0,
constants.DT_RBD: 0,
constants.DT_RBD: sum(d[constants.IDISK_SIZE] for d in disks),
}
 
if disk_template not in req_size_dict:
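The _ComputeDiskSize fix above makes the file-based and RBD templates report their real space requirements instead of `None`/`0`. An illustrative reimplementation — template names are plain strings here, the `plain` entry is an assumption (it is outside the visible hunk), and only the sizing rule is taken from the diff:

```python
DRBD_META_SIZE = 128  # MiB of DRBD metadata added per disk

def compute_disk_size(disk_template, disk_sizes):
    """Total space (MiB) required for `disk_sizes` under a template."""
    total = sum(disk_sizes)
    req = {
        "plain": total,                       # assumed, not in the hunk
        "drbd8": sum(s + DRBD_META_SIZE for s in disk_sizes),
        # file-based and RBD now count actual usage instead of None/0:
        "file": total,
        "sharedfile": total,
        "rbd": total,
        "blockdev": 0,   # pre-existing block devices need no new space
    }
    if disk_template not in req:
        raise ValueError("Unknown disk template %r" % disk_template)
    return req[disk_template]
```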
......@@ -14106,7 +14117,7 @@ class LUGroupSetParams(LogicalUnit):
 
if self.op.ndparams:
new_ndparams = _GetUpdatedParams(self.group.ndparams, self.op.ndparams)
utils.ForceDictType(self.op.ndparams, constants.NDS_PARAMETER_TYPES)
utils.ForceDictType(new_ndparams, constants.NDS_PARAMETER_TYPES)
self.new_ndparams = new_ndparams
 
if self.op.diskparams:
......
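The LUGroupSetParams fix above type-checks the merged parameter dict (`new_ndparams`) instead of only the submitted delta (`self.op.ndparams`), so values inherited from the existing group are validated too. A sketch of why that matters — `get_updated_params` and `force_dict_type` are simplified stand-ins for Ganeti's `_GetUpdatedParams` and `utils.ForceDictType` (no handling of the special "default" reset value):

```python
def get_updated_params(old, update):
    """Merge `update` into a copy of `old` (simplified)."""
    new = dict(old)
    new.update(update)
    return new

def force_dict_type(params, types):
    """Minimal type check over a parameter dict."""
    for key, val in params.items():
        if not isinstance(val, types[key]):
            raise TypeError("%s must be %s" % (key, types[key].__name__))

NDS_TYPES = {"spindle_count": int, "oob_program": str}

old = {"spindle_count": 1, "oob_program": ""}
new = get_updated_params(old, {"spindle_count": 10})
# The fix: validate the merged result, not just the delta.
force_dict_type(new, NDS_TYPES)
```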
......@@ -48,7 +48,7 @@ confirming what you already got.
# pylint: disable=E0203
# E0203: Access to member %r before its definition, since we use
# objects.py which doesn't explicitely initialise its members
# objects.py which doesn't explicitly initialise its members
import time
import random
......
......@@ -1143,7 +1143,7 @@ class ConfigWriter:
if target is None:
if len(self._config_data.nodegroups) != 1:
raise errors.OpPrereqError("More than one node group exists. Target"
" group must be specified explicitely.")
" group must be specified explicitly.")
else:
return self._config_data.nodegroups.keys()[0]
if target in self._config_data.nodegroups:
......
......@@ -974,10 +974,14 @@ class KVMHypervisor(hv_base.BaseHypervisor):
kvm_cmd.extend(["-no-reboot"])
hvp = instance.hvparams
boot_disk = hvp[constants.HV_BOOT_ORDER] == constants.HT_BO_DISK
boot_cdrom = hvp[constants.HV_BOOT_ORDER] == constants.HT_BO_CDROM
boot_floppy = hvp[constants.HV_BOOT_ORDER] == constants.HT_BO_FLOPPY
boot_network = hvp[constants.HV_BOOT_ORDER] == constants.HT_BO_NETWORK
kernel_path = hvp[constants.HV_KERNEL_PATH]
if kernel_path:
boot_disk = boot_cdrom = boot_floppy = boot_network = False
else:
boot_disk = hvp[constants.HV_BOOT_ORDER] == constants.HT_BO_DISK
boot_cdrom = hvp[constants.HV_BOOT_ORDER] == constants.HT_BO_CDROM
boot_floppy = hvp[constants.HV_BOOT_ORDER] == constants.HT_BO_FLOPPY
boot_network = hvp[constants.HV_BOOT_ORDER] == constants.HT_BO_NETWORK
self.ValidateParameters(hvp)
......@@ -992,6 +996,10 @@ class KVMHypervisor(hv_base.BaseHypervisor):
if boot_network:
kvm_cmd.extend(["-boot", "n"])
# whether this is an older KVM version that uses the boot=on flag
# on devices
needs_boot_flag = (v_major, v_min) < (0, 14)
disk_type = hvp[constants.HV_DISK_TYPE]
if disk_type == constants.HT_DISK_PARAVIRTUAL:
if_val = ",if=virtio"
......@@ -1019,7 +1027,7 @@ class KVMHypervisor(hv_base.BaseHypervisor):
if boot_disk:
kvm_cmd.extend(["-boot", "c"])
boot_disk = False
if (v_major, v_min) < (0, 14) and disk_type != constants.HT_DISK_IDE:
if needs_boot_flag and disk_type != constants.HT_DISK_IDE:
boot_val = ",boot=on"
drive_val = "file=%s,format=raw%s%s%s" % (dev_path, if_val, boot_val,
......@@ -1034,19 +1042,22 @@ class KVMHypervisor(hv_base.BaseHypervisor):
iso_image = hvp[constants.HV_CDROM_IMAGE_PATH]
if iso_image:
options = ",format=raw,media=cdrom"
# set cdrom 'if' type
if boot_cdrom:
kvm_cmd.extend(["-boot", "d"])
if cdrom_disk_type != constants.HT_DISK_IDE:
options = "%s,boot=on,if=%s" % (options, constants.HT_DISK_IDE)
else:
options = "%s,boot=on" % options
actual_cdrom_type = constants.HT_DISK_IDE
elif cdrom_disk_type == constants.HT_DISK_PARAVIRTUAL:
actual_cdrom_type = "virtio"
else:
if cdrom_disk_type == constants.HT_DISK_PARAVIRTUAL:
if_val = ",if=virtio"
else:
if_val = ",if=%s" % cdrom_disk_type
options = "%s%s" % (options, if_val)
drive_val = "file=%s%s" % (iso_image, options)
actual_cdrom_type = cdrom_disk_type
if_val = ",if=%s" % actual_cdrom_type
# set boot flag, if needed
boot_val = ""
if boot_cdrom:
kvm_cmd.extend(["-boot", "d"])
if needs_boot_flag:
boot_val = ",boot=on"
# and finally build the entire '-drive' value
drive_val = "file=%s%s%s%s" % (iso_image, options, if_val, boot_val)
kvm_cmd.extend(["-drive", drive_val])
iso_image2 = hvp[constants.HV_KVM_CDROM2_IMAGE_PATH]
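The CDROM refactor above first settles on the actual interface type, then appends `boot=on` at most once, and only for KVM versions older than 0.14 (the previous code could emit the flag twice and broke CDROM booting). A sketch of the resulting logic — device types are plain strings here rather than Ganeti's `constants`, and the separate `-boot d` handling is omitted:

```python
def cdrom_drive_val(iso_image, cdrom_disk_type, boot_cdrom, kvm_version):
    """Build the -drive value for a CDROM image."""
    options = ",format=raw,media=cdrom"
    if boot_cdrom:
        actual_type = "ide"            # booting: force an IDE cdrom
    elif cdrom_disk_type == "paravirtual":
        actual_type = "virtio"
    else:
        actual_type = cdrom_disk_type
    if_val = ",if=%s" % actual_type
    # older KVM (< 0.14) needs boot=on on the device; add it once only
    boot_val = ""
    if boot_cdrom and kvm_version < (0, 14):
        boot_val = ",boot=on"
    return "file=%s%s%s%s" % (iso_image, options, if_val, boot_val)
```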
......@@ -1056,8 +1067,7 @@ class KVMHypervisor(hv_base.BaseHypervisor):
if_val = ",if=virtio"
else:
if_val = ",if=%s" % cdrom_disk_type
options = "%s%s" % (options, if_val)
drive_val = "file=%s%s" % (iso_image2, options)
drive_val = "file=%s%s%s" % (iso_image2, options, if_val)
kvm_cmd.extend(["-drive", drive_val])
floppy_image = hvp[constants.HV_KVM_FLOPPY_IMAGE_PATH]
......@@ -1071,7 +1081,6 @@ class KVMHypervisor(hv_base.BaseHypervisor):
drive_val = "file=%s%s" % (floppy_image, options)
kvm_cmd.extend(["-drive", drive_val])
kernel_path = hvp[constants.HV_KERNEL_PATH]
if kernel_path:
kvm_cmd.extend(["-kernel", kernel_path])
initrd_path = hvp[constants.HV_INITRD_PATH]
......@@ -1515,9 +1524,9 @@ class KVMHypervisor(hv_base.BaseHypervisor):
self.BalloonInstanceMemory(instance, start_memory)
if start_kvm_paused:
# To control CPU pinning, ballooning, and vnc/spice passwords the VM was
# started in a frozen state. If freezing was not explicitely requested
# resume the vm status.
# To control CPU pinning, ballooning, and vnc/spice passwords
# the VM was started in a frozen state. If freezing was not
# explicitly requested resume the vm status.
self._CallMonitorCommand(instance.name, self._CONT_CMD)
def StartInstance(self, instance, block_devices, startup_paused):
......
......@@ -262,9 +262,9 @@ class CommandBuilder(object):
dd_cmd.write(" && ")
dd_cmd.write("{ ")
# Setting LC_ALL since we want to parse the output and explicitely
# redirecting stdin, as the background process (dd) would have /dev/null as
# stdin otherwise
# Setting LC_ALL since we want to parse the output and explicitly
# redirecting stdin, as the background process (dd) would have
# /dev/null as stdin otherwise
dd_cmd.write("LC_ALL=C dd bs=%s <&0 2>&%d & pid=${!};" %
(BUFSIZE, self._dd_stderr_fd))
# Send PID to daemon
......
......@@ -29,7 +29,7 @@ pass to and from external parties.
# pylint: disable=E0203,W0201,R0902
# E0203: Access to member %r before its definition, since we use
# objects.py which doesn't explicitely initialise its members
# objects.py which doesn't explicitly initialise its members
# W0201: Attribute '%s' defined outside __init__
......
......@@ -923,7 +923,8 @@ class GanetiRapiClient(object): # pylint: disable=R0904
("/%s/instances/%s/reboot" %
(GANETI_RAPI_VERSION, instance)), query, None)
def ShutdownInstance(self, instance, dry_run=False, no_remember=False):
def ShutdownInstance(self, instance, dry_run=False, no_remember=False,
**kwargs):
"""Shuts down an instance.
@type instance: str
......@@ -937,12 +938,14 @@ class GanetiRapiClient(object): # pylint: disable=R0904
"""
query = []
body = kwargs
_AppendDryRunIf(query, dry_run)
_AppendIf(query, no_remember, ("no-remember", 1))
return self._SendRequest(HTTP_PUT,
("/%s/instances/%s/shutdown" %
(GANETI_RAPI_VERSION, instance)), query, None)
(GANETI_RAPI_VERSION, instance)), query, body)
def StartupInstance(self, instance, dry_run=False, no_remember=False):
"""Starts up an instance.
......
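The ShutdownInstance change above forwards arbitrary keyword arguments as the PUT body, so a caller can pass server-side shutdown parameters (for example a timeout) without the client needing an explicit argument for each one. A toy client showing the pattern — `MiniRapiClient` and its `_send` method are stand-ins, not the real `GanetiRapiClient`:

```python
class MiniRapiClient:
    """Toy client: **kwargs become the request body."""
    def __init__(self):
        self.sent = []   # recorded (method, path, query, body) tuples

    def _send(self, method, path, query, body):
        self.sent.append((method, path, query, body))

    def shutdown_instance(self, instance, dry_run=False, **kwargs):
        query = []
        if dry_run:
            query.append(("dry-run", 1))
        # any extra keyword arguments go straight into the body
        self._send("PUT", "/2/instances/%s/shutdown" % instance,
                   query, kwargs)

client = MiniRapiClient()
client.shutdown_instance("inst1", timeout=120)
```

The trade-off of `**kwargs` is that the client no longer validates parameter names; unknown keys are rejected (or ignored) server-side instead.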
......@@ -190,9 +190,11 @@ boot\_order
as 'dc'.
For KVM the boot order is either "floppy", "cdrom", "disk" or
"network". Please note that older versions of KVM couldn't
netboot from virtio interfaces. This has been fixed in more recent
versions and is confirmed to work at least with qemu-kvm 0.11.1.
"network". Please note that older versions of KVM couldn't netboot
from virtio interfaces. This has been fixed in more recent versions
and is confirmed to work at least with qemu-kvm 0.11.1. Also note
that if you have set the ``kernel_path`` option, that will be used
for booting, and this setting will be silently ignored.
blockdev\_prefix
Valid for the Xen HVM and PVM hypervisors.
......@@ -424,9 +426,10 @@ kernel\_path
Valid for the Xen PVM and KVM hypervisors.
This option specifies the path (on the node) to the kernel to boot
the instance with. Xen PVM instances always require this, while
for KVM if this option is empty, it will cause the machine to load
the kernel from its disks.
the instance with. Xen PVM instances always require this, while for
KVM if this option is empty, it will cause the machine to load the
kernel from its disks (and the boot will be done accordingly to
``boot_order``).
kernel\_args
Valid for the Xen PVM and KVM hypervisors.
......
......@@ -96,6 +96,7 @@
"cluster-repair-disk-sizes": true,
"haskell-confd": true,
"htools": true,
"group-list": true,
"group-rwops": true,
......
#
#
# Copyright (C) 2010, 2011 Google Inc.
# Copyright (C) 2010, 2011, 2012 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
......@@ -103,6 +103,12 @@ def TestGroupModify():
"min=%s,max=%s,std=0" % (min_v, max_v), group1], fail=True)
AssertCommand(["gnt-group", "modify", "--specs-mem-size",
"min=%s,max=%s" % (min_v, max_v), group1])
AssertCommand(["gnt-group", "modify",
"--node-parameters", "spindle_count=10", group1])
if qa_config.TestEnabled("htools"):
AssertCommand(["hbal", "-L", "-G", group1])
AssertCommand(["gnt-group", "modify",
"--node-parameters", "spindle_count=default", group1])
finally:
AssertCommand(["gnt-group", "remove", group1])
......