Commit f3ac6f36 authored by Klaus Aehlig

Merge branch 'stable-2.10' into master

* stable-2.10
  Version bump for 2.10.0~rc1
  Update NEWS for 2.10.0 rc1 release
  Fix pylint 0.26.0/Python 2.7 warning
  Update INSTALL and devnotes for 2.10 release
* stable-2.9
  Bump revision for 2.9.2
  Update NEWS for 2.9.2 release
  Pass hvparams to GetInstanceInfo
  Adapt parameters that moved to instance variables
  Avoid lines longer than 80 chars
  SingleNotifyPipeCondition: don't share pollers
  KVM: use custom KVM path if set for version checking
* stable-2.8
  Version bump for 2.8.3
  Update NEWS for 2.8.3 release
  Support resetting arbitrary params of ext disks
  Allow modification of arbitrary params for ext
  Do not clear disk.params in UpgradeConfig()
  SetDiskID() before accepting an instance
  Lock group(s) when creating instances
  Fix job error message after unclean master shutdown
  Add default file_driver if missing
  Update tests
  Xen handle domain shutdown
  Fix evacuation out of drained node
  Refactor reading live data in htools
  master-ip-setup: Ping multiple times with a shorter interval
  Add a packet number limit to "fping" in master-ip-setup
  Fix a bug in InstanceSetParams concerning names
  build_chroot: hard-code the version of blaze-builder
  Fix error printing
  Allow link local IPv6 gateways
  Fix NODE/NODE_RES locking in LUInstanceCreate
  eta-reduce isIpV6
  Ganeti.Rpc: use brackets for ipv6 addresses
  Update NEWS file with socket permission fix info
  Fix socket permissions after master-failover

Conflicts:
	NEWS
	devel/build_chroot
	lib/cmdlib/instance.py
	lib/hypervisor/hv_xen.py
	lib/jqueue.py
	src/Ganeti/Luxi.hs
	tools/cfgupgrade
Resolution:
	- tools/cfgupgrade: ignore downgrade changes from 2.10
	- NEWS: take both changes
	- devel/build_chroot: both changes differed only in indentation;
	    use indentation from master
	- lib/hypervisor/hv_xen.py: manually apply fd201010 and 70d8491f to
	    the conflicting hunks from stable-2.10
	- lib/jqueue.py: manually apply 9cbcb1be to the conflicting hunk from
	    master
Semantic conflicts:
	- configure.ac: undo revision bump to ~rc1
	- lib/query.py: manually merge the two independently added
	    functions _GetInstAllNicVlans
Signed-off-by: Klaus Aehlig <aehlig@google.com>
Reviewed-by: Helga Velroyen <helgav@google.com>
parents 9ba38706 a5c50971
......@@ -49,6 +49,7 @@ Before installing, please verify that you have the following programs:
<http://code.google.com/p/ipaddr-py/>`_
- `Bitarray Python library <http://pypi.python.org/pypi/bitarray/>`_
- `GNU Make <http://www.gnu.org/software/make/>`_
- `GNU M4 <http://www.gnu.org/software/m4/>`_
These programs are supplied as part of most Linux distributions, so
usually they can be installed via the standard package manager. Also
......@@ -56,7 +57,7 @@ many of them will already be installed on a standard machine. On
Debian/Ubuntu, you can use this command line to install all required
packages, except for RBD, DRBD and Xen::
$ apt-get install lvm2 ssh bridge-utils iproute iputils-arping make \
$ apt-get install lvm2 ssh bridge-utils iproute iputils-arping make m4 \
ndisc6 python python-openssl openssl \
python-pyparsing python-simplejson python-bitarray \
python-pyinotify python-pycurl python-ipaddr socat fping
......@@ -75,14 +76,14 @@ If bitarray is missing it can be installed from easy-install::
Note that this does not install optional packages::
$ apt-get install python-paramiko python-affinity qemu-img
$ apt-get install python-paramiko python-affinity qemu-utils
If some of the python packages are not available in your system,
you can try installing them using ``easy_install`` command.
For example::
$ apt-get install python-setuptools python-dev
$ cd / && sudo easy_install \
$ cd / && easy_install \
affinity \
bitarray \
ipaddr
......@@ -205,6 +206,7 @@ can use either apt::
$ apt-get install libghc-crypto-dev libghc-text-dev \
libghc-hinotify-dev libghc-regex-pcre-dev \
libpcre3-dev \
libghc-attoparsec-dev libghc-vector-dev \
libghc-snap-server-dev
......@@ -219,7 +221,7 @@ to install them.
In case you still use ghc-6.12, note that ``cabal`` would automatically try to
install newer versions of some of the libraries snap-server depends on, that
cannot be compiled with ghc-6.12, so you have to install snap-server on its
own, esplicitly forcing the installation of compatible versions::
own, explicitly forcing the installation of compatible versions::
$ cabal install MonadCatchIO-transformers==0.2.2.0 mtl==2.0.1.0 \
hashable==1.1.2.0 case-insensitive==0.3 parsec==3.0.1 \
......
......@@ -54,10 +54,10 @@ For Haskell:
at least version 1.0.0.0
Version 2.10.0 alpha1
---------------------
Version 2.10.0 rc1
------------------
*(unreleased)*
*(Released Tue, 17 Dec 2013)*
Incompatible/important changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
......@@ -121,6 +121,96 @@ Python
- The version requirements for ``python-mock`` have increased to at least
version 1.0.1. It is still used for testing only.
Since 2.10.0 beta1
~~~~~~~~~~~~~~~~~~
- All known issues in 2.10.0 beta1 have been resolved (see changes from
the 2.8 branch).
- Improve handling of KVM runtime files from earlier Ganeti versions
- Documentation fixes
Inherited from the 2.9 branch:
- use custom KVM path if set for version checking
- SingleNotifyPipeCondition: don't share pollers
Inherited from the 2.8 branch:
- Fixed Luxi daemon socket permissions after master-failover
- Improve IP version detection code by directly checking for colons rather than
passing the family from the cluster object
- Fix NODE/NODE_RES locking in LUInstanceCreate by not acquiring NODE_RES locks
opportunistically anymore (Issue 622)
- Allow link local IPv6 gateways (Issue 624)
- Fix error printing (Issue 616)
- Fix a bug in InstanceSetParams concerning names: in case no name is passed in
disk modifications, keep the old one. If name=none then set disk name to
None.
- Update build_chroot script to work with the latest hackage packages
- Add a packet number limit to "fping" in master-ip-setup (Issue 630)
- Fix evacuation out of drained node (Issue 615)
- Add default file_driver if missing (Issue 571)
- Fix job error message after unclean master shutdown (Issue 618)
- Lock group(s) when creating instances (Issue 621)
- SetDiskID() before accepting an instance (Issue 633)
- Allow the ext template disks to receive arbitrary parameters, both at creation
time and while being modified
- Xen handle domain shutdown (future proofing cherry-pick)
- Refactor reading live data in htools (future proofing cherry-pick)
Version 2.10.0 beta1
--------------------
*(Released Wed, 27 Nov 2013)*
This was the first beta release of the 2.10 series. All important changes
are listed in the latest 2.10 entry.
Known issues
~~~~~~~~~~~~
The following issues are known to be present in the beta and will be fixed
before rc1.
- Issue 477: Wrong permissions for confd LUXI socket
- Issue 621: Instance related opcodes do not acquire network/group locks
- Issue 622: Assertion Error: Node locks differ from node resource locks
- Issue 623: IPv6 Masterd <-> Luxid communication error
Version 2.9.2
-------------
*(Released Fri, 13 Dec 2013)*
- use custom KVM path if set for version checking
- SingleNotifyPipeCondition: don't share pollers
Inherited from the 2.8 branch:
- Fixed Luxi daemon socket permissions after master-failover
- Improve IP version detection code by directly checking for colons rather than
passing the family from the cluster object
- Fix NODE/NODE_RES locking in LUInstanceCreate by not acquiring NODE_RES locks
opportunistically anymore (Issue 622)
- Allow link local IPv6 gateways (Issue 624)
- Fix error printing (Issue 616)
- Fix a bug in InstanceSetParams concerning names: in case no name is passed in
disk modifications, keep the old one. If name=none then set disk name to
None.
- Update build_chroot script to work with the latest hackage packages
- Add a packet number limit to "fping" in master-ip-setup (Issue 630)
- Fix evacuation out of drained node (Issue 615)
- Add default file_driver if missing (Issue 571)
- Fix job error message after unclean master shutdown (Issue 618)
- Lock group(s) when creating instances (Issue 621)
- SetDiskID() before accepting an instance (Issue 633)
- Allow the ext template disks to receive arbitrary parameters, both at creation
time and while being modified
- Xen handle domain shutdown (future proofing cherry-pick)
- Refactor reading live data in htools (future proofing cherry-pick)
Version 2.9.1
-------------
......@@ -257,6 +347,34 @@ This was the first beta release of the 2.9 series. All important changes
are listed in the latest 2.9 entry.
Version 2.8.3
-------------
*(Released Thu, 12 Dec 2013)*
- Fixed Luxi daemon socket permissions after master-failover
- Improve IP version detection code by directly checking for colons rather than
passing the family from the cluster object
- Fix NODE/NODE_RES locking in LUInstanceCreate by not acquiring NODE_RES locks
opportunistically anymore (Issue 622)
- Allow link local IPv6 gateways (Issue 624)
- Fix error printing (Issue 616)
- Fix a bug in InstanceSetParams concerning names: in case no name is passed in
disk modifications, keep the old one. If name=none then set disk name to
None.
- Update build_chroot script to work with the latest hackage packages
- Add a packet number limit to "fping" in master-ip-setup (Issue 630)
- Fix evacuation out of drained node (Issue 615)
- Add default file_driver if missing (Issue 571)
- Fix job error message after unclean master shutdown (Issue 618)
- Lock group(s) when creating instances (Issue 621)
- SetDiskID() before accepting an instance (Issue 633)
- Allow the ext template disks to receive arbitrary parameters, both at creation
time and while being modified
- Xen handle domain shutdown (future proofing cherry-pick)
- Refactor reading live data in htools (future proofing cherry-pick)
Version 2.8.2
-------------
......
......@@ -235,6 +235,17 @@ case $DIST_RELEASE in
python-bitarray python-ipaddr python-yaml qemu-utils python-coverage pep8 \
shelltestrunner python-dev pylint openssh-client vim git git-email
# We need version 0.9.4 of pyinotify because the packaged version, 0.9.3, is
# incompatible with the packaged version of python-epydoc 3.0.1.
# Reason: a logger class in pyinotify calculates its superclasses at
# runtime, which clashes with python-epydoc's static analysis phase.
#
# Problem introduced in:
# https://github.com/seb-m/pyinotify/commit/2c7e8f8959d2f8528e0d90847df360
# and "fixed" in:
# https://github.com/seb-m/pyinotify/commit/98c5f41a6e2e90827a63ff1b878596
in_chroot -- \
easy_install pyinotify==0.9.4
in_chroot -- \
......
......@@ -127,7 +127,8 @@ file
configure for high performance. Note that for security reasons the
file storage directory must be listed under
``/etc/ganeti/file-storage-paths``, and that file is not copied
automatically to all nodes by Ganeti.
automatically to all nodes by Ganeti. The format of that file is a
newline-separated list of directories.
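For illustration, a minimal ``/etc/ganeti/file-storage-paths`` might look
as follows (the directory names are examples, not Ganeti defaults):

```text
/srv/ganeti/file-storage
/var/lib/ganeti/file-storage
```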
sharedfile
The instance will use plain files as backend, but Ganeti assumes that
......
......@@ -88,21 +88,24 @@ give IP pool management capabilities. A network's pool is defined by two
bitfields, each as long as the network size:
``reservations``
This field holds all IP addresses reserved by Ganeti instances, as
well as cluster IP addresses (node addresses + cluster master)
This field holds all IP addresses reserved by Ganeti instances.
``external reservations``
This field holds all IP addresses that are manually reserved by the
administrator, because some other equipment is using them outside the
scope of Ganeti.
administrator (external gateway, IPs of external servers, etc.) or
automatically by Ganeti (the network/broadcast addresses, and
cluster IPs (node addresses + cluster master)). These IPs are excluded
from the IP pool and cannot be assigned automatically by Ganeti to
instances (via ip=pool).
The bitfields are implemented using the python-bitarray package for
space efficiency and their binary value stored base64-encoded for JSON
compatibility. This approach gives relatively compact representations
even for large IPv4 networks (e.g. /20).
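As a rough sketch of this representation (not Ganeti's actual code, which
uses the python-bitarray package), a reservation bitfield can be packed
into bytes and base64-encoded for storage in JSON:

```python
import base64

def encode_bitfield(bits):
    """Pack a list of booleans MSB-first into bytes, then base64-encode
    the result so it can be embedded in a JSON document."""
    packed = bytearray((len(bits) + 7) // 8)
    for i, bit in enumerate(bits):
        if bit:
            packed[i // 8] |= 0x80 >> (i % 8)
    return base64.b64encode(bytes(packed)).decode("ascii")

# A /28 network has 16 addresses, so its bitfield is 16 bits (2 bytes).
reservations = [False] * 16
reservations[5] = True  # e.g. an instance holds the sixth address
encoded = encode_bitfield(reservations)
```

A /20 IPv4 network (4096 addresses) packs into 512 bytes, which base64
expands to under 700 characters, hence the compact representation noted
above.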
Ganeti-owned IP addresses (node + master IPs) are reserved automatically
if the cluster's data network itself is placed under pool management.
Cluster IP addresses (node + master IPs) are reserved automatically
as external if the cluster's data network itself is placed under
pool management.
Helper ConfigWriter methods provide free IP address generation and
reservation, using a TemporaryReservationManager.
......@@ -129,10 +132,14 @@ node-specific underlying infrastructure.
We also introduce a new ``ip`` address value, ``constants.NIC_IP_POOL``,
that specifies that a given NIC's IP address should be obtained using
the IP address pool of the specified network. This value is only valid
the first available IP address inside the pool of the specified network
(the first bit clear in reservations OR external_reservations). This value is only valid
for NICs belonging to a network. A NIC's IP address can also be
specified manually, as long as it is contained in the network the NIC
is connected to.
is connected to. If this IP is externally reserved, Ganeti will raise an
error, which the user can explicitly request to override. In that case the
IP is still reserved and cannot be assigned to another instance.
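The pool lookup described above can be sketched as follows; this is a
simplified model using integer bitmasks with illustrative names, whereas
Ganeti's real AddressPool class is bitarray-based:

```python
def first_free_index(reservations, external, size):
    """Return the index of the first address that is clear in the union
    of the two reservation bitfields, or None if the pool is exhausted."""
    taken = reservations | external
    for idx in range(size):
        if not taken & (1 << idx):
            return idx
    return None

# Bit i set means address (network base + i) is taken.
instance_res = 0b00000111   # first three addresses used by instances
external_res = 0b00011000   # next two reserved by the administrator
```

With these inputs, addresses 0 through 4 are taken, so ``ip=pool`` would
hand out the address at index 5.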
Hooks
......
......@@ -48,10 +48,9 @@ To generate unittest coverage reports (``make coverage``), `coverage
Installation of all dependencies listed here::
$ apt-get install python-setuptools automake git fakeroot
$ apt-get install pandoc python-epydoc graphviz
$ apt-get install pandoc python-epydoc graphviz python-sphinx
$ apt-get install python-yaml
$ cd / && sudo easy_install \
sphinx \
$ cd / && easy_install \
logilab-astng==0.24.1 \
logilab-common==0.58.3 \
pylint==0.26.0 \
......@@ -91,7 +90,7 @@ required ones from the quick install document) via::
libghc-test-framework-dev \
libghc-test-framework-quickcheck2-dev \
libghc-test-framework-hunit-dev \
libghc-temporary-dev \
libghc-temporary-dev shelltestrunner \
hscolour hlint
Or alternatively via ``cabal``::
......
......@@ -98,6 +98,7 @@ __all__ = [
"GLOBAL_GLUSTER_FILEDIR_OPT",
"GLOBAL_SHARED_FILEDIR_OPT",
"HOTPLUG_OPT",
"HOTPLUG_IF_POSSIBLE_OPT",
"HVLIST_OPT",
"HVOPTS_OPT",
"HYPERVISOR_OPT",
......@@ -1668,6 +1669,12 @@ HOTPLUG_OPT = cli_option("--hotplug", dest="hotplug",
action="store_true", default=False,
help="Hotplug supported devices (NICs and Disks)")
HOTPLUG_IF_POSSIBLE_OPT = cli_option("--hotplug-if-possible",
dest="hotplug_if_possible",
action="store_true", default=False,
help="Hotplug devices in case"
" hotplug is supported")
#: Options provided by all commands
COMMON_OPTS = [DEBUG_OPT, REASON_OPT]
......
......@@ -1362,6 +1362,7 @@ def SetInstanceParams(opts, args):
nics=nics,
disks=disks,
hotplug=opts.hotplug,
hotplug_if_possible=opts.hotplug_if_possible,
disk_template=opts.disk_template,
remote_node=opts.node,
pnode=opts.new_primary_node,
......@@ -1569,7 +1570,8 @@ commands = {
[DISK_TEMPLATE_OPT, SINGLE_NODE_OPT, OS_OPT, FORCE_VARIANT_OPT,
OSPARAMS_OPT, DRY_RUN_OPT, PRIORITY_OPT, NWSYNC_OPT, OFFLINE_INST_OPT,
ONLINE_INST_OPT, IGNORE_IPOLICY_OPT, RUNTIME_MEM_OPT,
NOCONFLICTSCHECK_OPT, NEW_PRIMARY_OPT, HOTPLUG_OPT],
NOCONFLICTSCHECK_OPT, NEW_PRIMARY_OPT, HOTPLUG_OPT,
HOTPLUG_IF_POSSIBLE_OPT],
"<instance>", "Alters the parameters of an instance"),
"shutdown": (
GenericManyOps("shutdown", _ShutdownInstance), [ArgInstance()],
......
......@@ -567,7 +567,6 @@ class LUInstanceCreate(LogicalUnit):
if self.op.opportunistic_locking:
self.opportunistic_locks[locking.LEVEL_NODE] = True
self.opportunistic_locks[locking.LEVEL_NODE_RES] = True
else:
(self.op.pnode_uuid, self.op.pnode) = \
ExpandNodeUuidAndName(self.cfg, self.op.pnode_uuid, self.op.pnode)
......@@ -606,6 +605,30 @@ class LUInstanceCreate(LogicalUnit):
self.needed_locks[locking.LEVEL_NODE_RES] = \
CopyLockList(self.needed_locks[locking.LEVEL_NODE])
# Optimistically acquire shared group locks (we're reading the
# configuration). We can't just call GetInstanceNodeGroups, because the
# instance doesn't exist yet. Therefore we lock all node groups of all
# nodes we have.
if self.needed_locks[locking.LEVEL_NODE] == locking.ALL_SET:
# In the case we lock all nodes for opportunistic allocation, we have no
# choice but to lock all groups, because they're allocated before nodes.
# This is sad, but true. At least we release all those we don't need in
# CheckPrereq later.
self.needed_locks[locking.LEVEL_NODEGROUP] = locking.ALL_SET
else:
self.needed_locks[locking.LEVEL_NODEGROUP] = \
list(self.cfg.GetNodeGroupsFromNodes(
self.needed_locks[locking.LEVEL_NODE]))
self.share_locks[locking.LEVEL_NODEGROUP] = 1
def DeclareLocks(self, level):
if level == locking.LEVEL_NODE_RES and \
self.opportunistic_locks[locking.LEVEL_NODE]:
# Even when using opportunistic locking, we require the same set of
# NODE_RES locks as we got NODE locks
self.needed_locks[locking.LEVEL_NODE_RES] = \
self.owned_locks(locking.LEVEL_NODE)
def _RunAllocator(self):
"""Run the allocator based on input opcode.
......@@ -883,6 +906,21 @@ class LUInstanceCreate(LogicalUnit):
"""Check prerequisites.
"""
# Check that the optimistically acquired groups are correct wrt the
# acquired nodes
owned_groups = frozenset(self.owned_locks(locking.LEVEL_NODEGROUP))
owned_nodes = frozenset(self.owned_locks(locking.LEVEL_NODE))
cur_groups = list(self.cfg.GetNodeGroupsFromNodes(owned_nodes))
if not owned_groups.issuperset(cur_groups):
raise errors.OpPrereqError("New instance %s's node groups changed since"
" locks were acquired, current groups are"
" are '%s', owning groups '%s'; retry the"
" operation" %
(self.op.instance_name,
utils.CommaJoin(cur_groups),
utils.CommaJoin(owned_groups)),
errors.ECODE_STATE)
self._CalculateFileStorageDir()
if self.op.mode == constants.INSTANCE_IMPORT:
......@@ -995,6 +1033,9 @@ class LUInstanceCreate(LogicalUnit):
ReleaseLocks(self, locking.LEVEL_NODE, keep=keep_locks)
ReleaseLocks(self, locking.LEVEL_NODE_RES, keep=keep_locks)
ReleaseLocks(self, locking.LEVEL_NODE_ALLOC)
# Release all unneeded group locks
ReleaseLocks(self, locking.LEVEL_NODEGROUP,
keep=self.cfg.GetNodeGroupsFromNodes(keep_locks))
assert (self.owned_locks(locking.LEVEL_NODE) ==
self.owned_locks(locking.LEVEL_NODE_RES)), \
......@@ -1045,7 +1086,8 @@ class LUInstanceCreate(LogicalUnit):
self.LogInfo("Chose IP %s from network %s", nic.ip, nobj.name)
else:
try:
self.cfg.ReserveIp(net_uuid, nic.ip, self.proc.GetECId())
self.cfg.ReserveIp(net_uuid, nic.ip, self.proc.GetECId(),
check=self.op.conflicts_check)
except errors.ReservationError:
raise errors.OpPrereqError("IP address %s already in use"
" or does not belong to network %s" %
......@@ -1948,7 +1990,6 @@ class LUInstanceMultiAlloc(NoHooksLU):
if self.op.opportunistic_locking:
self.opportunistic_locks[locking.LEVEL_NODE] = True
self.opportunistic_locks[locking.LEVEL_NODE_RES] = True
else:
nodeslist = []
for inst in self.op.instances:
......@@ -1965,6 +2006,14 @@ class LUInstanceMultiAlloc(NoHooksLU):
# prevent accidental modification)
self.needed_locks[locking.LEVEL_NODE_RES] = list(nodeslist)
def DeclareLocks(self, level):
if level == locking.LEVEL_NODE_RES and \
self.opportunistic_locks[locking.LEVEL_NODE]:
# Even when using opportunistic locking, we require the same set of
# NODE_RES locks as we got NODE locks
self.needed_locks[locking.LEVEL_NODE_RES] = \
self.owned_locks(locking.LEVEL_NODE)
def CheckPrereq(self):
"""Check prerequisite.
......@@ -2323,8 +2372,7 @@ class LUInstanceSetParams(LogicalUnit):
else:
raise errors.ProgrammerError("Unhandled operation '%s'" % op)
@staticmethod
def _VerifyDiskModification(op, params, excl_stor):
def _VerifyDiskModification(self, op, params, excl_stor):
"""Verifies a disk modification.
"""
......@@ -2351,10 +2399,12 @@ class LUInstanceSetParams(LogicalUnit):
if constants.IDISK_SIZE in params:
raise errors.OpPrereqError("Disk size change not possible, use"
" grow-disk", errors.ECODE_INVAL)
if len(params) > 2:
raise errors.OpPrereqError("Disk modification doesn't support"
" additional arbitrary parameters",
errors.ECODE_INVAL)
# Disk modification supports changing only the disk name and mode.
# Changing arbitrary parameters is allowed only for the ext disk template.
if self.instance.disk_template != constants.DT_EXT:
utils.ForceDictType(params, constants.MODIFIABLE_IDISK_PARAMS_TYPES)
name = params.get(constants.IDISK_NAME, None)
if name is not None and name.lower() == constants.VALUE_NONE:
params[constants.IDISK_NAME] = None
......@@ -2630,7 +2680,8 @@ class LUInstanceSetParams(LogicalUnit):
# Reserve new IP if in the new network if any
elif new_net_uuid:
try:
self.cfg.ReserveIp(new_net_uuid, new_ip, self.proc.GetECId())
self.cfg.ReserveIp(new_net_uuid, new_ip, self.proc.GetECId(),
check=self.op.conflicts_check)
self.LogInfo("Reserving IP %s in network %s",
new_ip, new_net_obj.name)
except errors.ReservationError:
......@@ -2849,10 +2900,19 @@ class LUInstanceSetParams(LogicalUnit):
# dictionary with instance information after the modification
ispec = {}
if self.op.hotplug:
if self.op.hotplug or self.op.hotplug_if_possible:
result = self.rpc.call_hotplug_supported(self.instance.primary_node,
self.instance)
result.Raise("Hotplug is not supported.")
if result.fail_msg:
if self.op.hotplug:
result.Raise("Hotplug is not possible: %s" % result.fail_msg,
prereq=True)
else:
self.LogWarning(result.fail_msg)
self.op.hotplug = False
self.LogInfo("Modification will take place without hotplugging.")
else:
self.op.hotplug = True
# Prepare NIC modifications
self.nicmod = _PrepareContainerMods(self.op.nics, _InstNicModPrivate)
......@@ -3321,20 +3381,32 @@ class LUInstanceSetParams(LogicalUnit):
if not self.instance.disks_active:
ShutdownInstanceDisks(self, self.instance, disks=[disk])
@staticmethod
def _ModifyDisk(idx, disk, params, _):
def _ModifyDisk(self, idx, disk, params, _):
"""Modifies a disk.
"""
changes = []
mode = params.get(constants.IDISK_MODE, None)
if mode:
disk.mode = mode
if constants.IDISK_MODE in params:
disk.mode = params.get(constants.IDISK_MODE)
changes.append(("disk.mode/%d" % idx, disk.mode))
name = params.get(constants.IDISK_NAME, None)
disk.name = name
changes.append(("disk.name/%d" % idx, disk.name))
if constants.IDISK_NAME in params:
disk.name = params.get(constants.IDISK_NAME)
changes.append(("disk.name/%d" % idx, disk.name))
# Modify arbitrary params in case instance template is ext
for key, value in params.iteritems():
if (key not in constants.MODIFIABLE_IDISK_PARAMS and
self.instance.disk_template == constants.DT_EXT):
# stolen from GetUpdatedParams: default means reset/delete
if value.lower() == constants.VALUE_DEFAULT:
try:
del disk.params[key]
except KeyError:
pass
else:
disk.params[key] = value
changes.append(("disk.params:%s/%d" % (key, idx), value))
return changes
......
......@@ -167,7 +167,7 @@ class LUNetworkAdd(LogicalUnit):
for ip in [node.primary_ip, node.secondary_ip]:
try:
if pool.Contains(ip):
pool.Reserve(ip)
pool.Reserve(ip, external=True)
self.LogInfo("Reserved IP address of node '%s' (%s)",
node.name, ip)
except errors.AddressPoolError, err:
......@@ -177,7 +177,7 @@ class LUNetworkAdd(LogicalUnit):
master_ip = self.cfg.GetClusterInfo().master_ip
try:
if pool.Contains(master_ip):
pool.Reserve(master_ip)
pool.Reserve(master_ip, external=True)
self.LogInfo("Reserved cluster master IP address (%s)", master_ip)
except errors.AddressPoolError, err:
self.LogWarning("Cannot reserve cluster master IP address (%s): %s",
......@@ -363,10 +363,7 @@ class LUNetworkSetParams(LogicalUnit):
if self.op.add_reserved_ips:
for ip in self.op.add_reserved_ips:
try:
if self.pool.IsReserved(ip):
self.LogWarning("IP address %s is already reserved", ip)
else:
self.pool.Reserve(ip, external=True)
self.pool.Reserve(ip, external=True)
except errors.AddressPoolError, err:
self.LogWarning("Cannot reserve IP address %s: %s", ip, err)
......@@ -376,10 +373,7 @@ class LUNetworkSetParams(LogicalUnit):
self.LogWarning("Cannot unreserve Gateway's IP")
continue
try:
if not self.pool.IsReserved(ip):
self.LogWarning("IP address %s is already unreserved", ip)
else:
self.pool.Release(ip, external=True)
self.pool.Release(ip, external=True)
except errors.AddressPoolError, err:
self.LogWarning("Cannot release IP address %s: %s", ip, err)
......
......@@ -395,7 +395,7 @@ class ConfigWriter(object):
_, address, _ = self._temporary_ips.Generate([], gen_one, ec_id)
return address
def _UnlockedReserveIp(self, net_uuid, address, ec_id):
def _UnlockedReserveIp(self, net_uuid, address, ec_id, check=True):
"""Reserve a given IPv4 address for use by an instance.
"""
......@@ -403,22 +403,25 @@ class ConfigWriter(object):
pool = network.AddressPool(nobj)
try:
isreserved = pool.IsReserved(address)
isextreserved = pool.IsReserved(address, external=True)
except errors.AddressPoolError:
raise errors.ReservationError("IP address not in network")
if isreserved:
raise errors.ReservationError("IP address already in use")
if check and isextreserved:
raise errors.ReservationError("IP is externally reserved")
return self._temporary_ips.Reserve(ec_id,
(constants.RESERVE_ACTION,
address, net_uuid))
@locking.ssynchronized(_config_lock, shared=1)
def ReserveIp(self, net_uuid, address, ec_id):
def ReserveIp(self, net_uuid, address, ec_id, check=True):
"""Reserve a given IPv4 address for use by an instance.
"""
if net_uuid:
return self._UnlockedReserveIp(net_uuid, address, ec_id)
return self._UnlockedReserveIp(net_uuid, address, ec_id, check)
@locking.ssynchronized(_config_lock, shared=1)
def ReserveLV(self, lv_name, ec_id):
......
......@@ -188,22 +188,45 @@ def _GetExistingDeviceInfo(dev_type, device, runtime):
return found[0]
def _AnalyzeSerializedRuntime(serialized_runtime):
"""Return runtime entries for a serialized runtime file
def _UpgradeSerializedRuntime(serialized_runtime):
"""Upgrade runtime data
Remove any deprecated fields or change the format of the data.
The runtime files are not upgraded when Ganeti is upgraded, so the required
modifications have to be performed here.
@type serialized_runtime: string
@param serialized_runtime: raw text data read from actual runtime file
@return: (cmd, nics, hvparams, bdevs)
@rtype: list
@return: (cmd, nic dicts, hvparams, bdev dicts)
@rtype: tuple
"""
loaded_runtime = serializer.Load(serialized_runtime)
if len(loaded_runtime) == 3:
serialized_disks = []
kvm_cmd, serialized_nics, hvparams = loaded_runtime
kvm_cmd, serialized_nics, hvparams = loaded_runtime[:3]
if len(loaded_runtime) >= 4:
serialized_disks = loaded_runtime[3]
else:
kvm_cmd, serialized_nics, hvparams, serialized_disks = loaded_runtime
serialized_disks = []