Commit c486fb6c authored by Thomas Thrainer

Merge branch 'stable-2.9' into stable-2.10

* stable-2.9
  Bump revision for 2.9.2
  Update NEWS for 2.9.2 release
  Pass hvparams to GetInstanceInfo
  Adapt parameters that moved to instance variables
  Avoid lines longer than 80 chars
  SingleNotifyPipeCondition: don't share pollers
  KVM: use custom KVM path if set for version checking
* stable-2.8
  Version bump for 2.8.3
  Update NEWS for 2.8.3 release
  Support reseting arbitrary params of ext disks
  Allow modification of arbitrary params for ext
  Do not clear disk.params in UpgradeConfig()
  SetDiskID() before accepting an instance
  Lock group(s) when creating instances
  Fix job error message after unclean master shutdown
  Add default file_driver if missing
  Update tests
  Xen handle domain shutdown
  Fix evacuation out of drained node
  Refactor reading live data in htools
  master-up-setup: Ping multiple times with a shorter interval
  Add a packet number limit to "fping" in master-ip-setup
  Fix a bug in InstanceSetParams concerning names
  build_chroot: hard-code the version of blaze-builder
  Fix error printing
  Allow link local IPv6 gateways
  Fix NODE/NODE_RES locking in LUInstanceCreate
  eta-reduce isIpV6
  Ganeti.Rpc: use brackets for ipv6 addresses
  Update NEWS file with socket permission fix info
  Fix socket permissions after master-failover

Conflicts:
	NEWS
	configure.ac
	devel/build_chroot
	lib/constants.py
	src/Ganeti/Rpc.hs

Resolution:
    NEWS: take both additions
    configure.ac: ignore version bump
    constants.py: move constants to Constants.hs
    instance_migration.py: Remove call to SetDiskID(...), it has been removed in 2.10
    instance_unittest.py: Adapt test to new logic in LU
    Rest: trivial
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
parents e34f46e6 89c63fbe
@@ -81,6 +81,39 @@ before rc1.
 - Issue 623: IPv6 Masterd <-> Luxid communication error
 
+
+Version 2.9.2
+-------------
+
+*(Released Fri, 13 Dec 2013)*
+
+- use custom KVM path if set for version checking
+- SingleNotifyPipeCondition: don't share pollers
+
+Inherited from the 2.8 branch:
+
+- Fixed Luxi daemon socket permissions after master-failover
+- Improve IP version detection code directly checking for colons rather than
+  passing the family from the cluster object
+- Fix NODE/NODE_RES locking in LUInstanceCreate by not acquiring NODE_RES locks
+  opportunistically anymore (Issue 622)
+- Allow link local IPv6 gateways (Issue 624)
+- Fix error printing (Issue 616)
+- Fix a bug in InstanceSetParams concerning names: in case no name is passed in
+  disk modifications, keep the old one. If name=none then set disk name to
+  None.
+- Update build_chroot script to work with the latest hackage packages
+- Add a packet number limit to "fping" in master-ip-setup (Issue 630)
+- Fix evacuation out of drained node (Issue 615)
+- Add default file_driver if missing (Issue 571)
+- Fix job error message after unclean master shutdown (Issue 618)
+- Lock group(s) when creating instances (Issue 621)
+- SetDiskID() before accepting an instance (Issue 633)
+- Allow the ext template disks to receive arbitrary parameters, both at creation
+  time and while being modified
+- Xen handle domain shutdown (future proofing cherry-pick)
+- Refactor reading live data in htools (future proofing cherry-pick)
+
 Version 2.9.1
 -------------
@@ -216,6 +249,34 @@ This was the first beta release of the 2.9 series. All important changes
 are listed in the latest 2.9 entry.
 
+
+Version 2.8.3
+-------------
+
+*(Released Thu, 12 Dec 2013)*
+
+- Fixed Luxi daemon socket permissions after master-failover
+- Improve IP version detection code directly checking for colons rather than
+  passing the family from the cluster object
+- Fix NODE/NODE_RES locking in LUInstanceCreate by not acquiring NODE_RES locks
+  opportunistically anymore (Issue 622)
+- Allow link local IPv6 gateways (Issue 624)
+- Fix error printing (Issue 616)
+- Fix a bug in InstanceSetParams concerning names: in case no name is passed in
+  disk modifications, keep the old one. If name=none then set disk name to
+  None.
+- Update build_chroot script to work with the latest hackage packages
+- Add a packet number limit to "fping" in master-ip-setup (Issue 630)
+- Fix evacuation out of drained node (Issue 615)
+- Add default file_driver if missing (Issue 571)
+- Fix job error message after unclean master shutdown (Issue 618)
+- Lock group(s) when creating instances (Issue 621)
+- SetDiskID() before accepting an instance (Issue 633)
+- Allow the ext template disks to receive arbitrary parameters, both at creation
+  time and while being modified
+- Xen handle domain shutdown (future proofing cherry-pick)
+- Refactor reading live data in htools (future proofing cherry-pick)
+
 Version 2.8.2
 -------------
......
@@ -170,6 +170,7 @@ case $DIST_RELEASE in
     in_chroot -- \
       cabal install --global \
+        blaze-builder==0.3.1.1 \
         network==2.3 \
         regex-pcre==0.94.4 \
         hinotify==0.3.2 \
......
@@ -461,7 +461,7 @@ class LUInstanceCreate(LogicalUnit):
     if (not self.op.file_driver and
         self.op.disk_template in [constants.DT_FILE,
                                   constants.DT_SHARED_FILE]):
-      self.op.file_driver = constants.FD_LOOP
+      self.op.file_driver = constants.FD_DEFAULT
 
     ### Node/iallocator related checks
     CheckIAllocatorOrNode(self, "iallocator", "pnode")
@@ -568,7 +568,6 @@ class LUInstanceCreate(LogicalUnit):
     if self.op.opportunistic_locking:
       self.opportunistic_locks[locking.LEVEL_NODE] = True
-      self.opportunistic_locks[locking.LEVEL_NODE_RES] = True
     else:
       (self.op.pnode_uuid, self.op.pnode) = \
         ExpandNodeUuidAndName(self.cfg, self.op.pnode_uuid, self.op.pnode)
@@ -607,6 +606,30 @@ class LUInstanceCreate(LogicalUnit):
       self.needed_locks[locking.LEVEL_NODE_RES] = \
         CopyLockList(self.needed_locks[locking.LEVEL_NODE])
 
+    # Optimistically acquire shared group locks (we're reading the
+    # configuration). We can't just call GetInstanceNodeGroups, because the
+    # instance doesn't exist yet. Therefore we lock all node groups of all
+    # nodes we have.
+    if self.needed_locks[locking.LEVEL_NODE] == locking.ALL_SET:
+      # In the case we lock all nodes for opportunistic allocation, we have no
+      # choice than to lock all groups, because they're allocated before nodes.
+      # This is sad, but true. At least we release all those we don't need in
+      # CheckPrereq later.
+      self.needed_locks[locking.LEVEL_NODEGROUP] = locking.ALL_SET
+    else:
+      self.needed_locks[locking.LEVEL_NODEGROUP] = \
+        list(self.cfg.GetNodeGroupsFromNodes(
+          self.needed_locks[locking.LEVEL_NODE]))
+    self.share_locks[locking.LEVEL_NODEGROUP] = 1
+
+  def DeclareLocks(self, level):
+    if level == locking.LEVEL_NODE_RES and \
+       self.opportunistic_locks[locking.LEVEL_NODE]:
+      # Even when using opportunistic locking, we require the same set of
+      # NODE_RES locks as we got NODE locks
+      self.needed_locks[locking.LEVEL_NODE_RES] = \
+        self.owned_locks(locking.LEVEL_NODE)
+
   def _RunAllocator(self):
     """Run the allocator based on input opcode.
@@ -873,6 +896,21 @@ class LUInstanceCreate(LogicalUnit):
     """Check prerequisites.
 
     """
+    # Check that the optimistically acquired groups are correct wrt the
+    # acquired nodes
+    owned_groups = frozenset(self.owned_locks(locking.LEVEL_NODEGROUP))
+    owned_nodes = frozenset(self.owned_locks(locking.LEVEL_NODE))
+    cur_groups = list(self.cfg.GetNodeGroupsFromNodes(owned_nodes))
+    if not owned_groups.issuperset(cur_groups):
+      raise errors.OpPrereqError("New instance %s's node groups changed since"
+                                 " locks were acquired, current groups"
+                                 " are '%s', owning groups '%s'; retry the"
+                                 " operation" %
+                                 (self.op.instance_name,
+                                  utils.CommaJoin(cur_groups),
+                                  utils.CommaJoin(owned_groups)),
+                                 errors.ECODE_STATE)
+
     self._CalculateFileStorageDir()
 
     if self.op.mode == constants.INSTANCE_IMPORT:
@@ -985,6 +1023,9 @@ class LUInstanceCreate(LogicalUnit):
     ReleaseLocks(self, locking.LEVEL_NODE, keep=keep_locks)
     ReleaseLocks(self, locking.LEVEL_NODE_RES, keep=keep_locks)
     ReleaseLocks(self, locking.LEVEL_NODE_ALLOC)
+    # Release all unneeded group locks
+    ReleaseLocks(self, locking.LEVEL_NODEGROUP,
+                 keep=self.cfg.GetNodeGroupsFromNodes(keep_locks))
 
     assert (self.owned_locks(locking.LEVEL_NODE) ==
             self.owned_locks(locking.LEVEL_NODE_RES)), \
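
The LUInstanceCreate hunks above implement an optimistic lock-then-verify pattern: group locks are computed from the node set before locking, so CheckPrereq must re-check that the node-to-group mapping did not change in between, and ask the caller to retry if it did. A minimal standalone sketch of that re-check (names are illustrative, not the Ganeti API):

```python
def verify_group_locks(owned_groups, owned_nodes, node_to_group):
    """Re-check, after locks are held, that every owned node still maps to
    a group whose lock we own; on a mismatch the caller should retry."""
    cur_groups = {node_to_group[node] for node in owned_nodes}
    if not set(owned_groups) >= cur_groups:
        raise RuntimeError(
            "node groups changed since locks were acquired, current groups"
            " %s, owning groups %s; retry the operation"
            % (sorted(cur_groups), sorted(owned_groups)))
    return cur_groups
```

Retrying on failure is safe because the check is read-only: no configuration has been modified yet when it fires.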
@@ -1939,7 +1980,6 @@ class LUInstanceMultiAlloc(NoHooksLU):
     if self.op.opportunistic_locking:
       self.opportunistic_locks[locking.LEVEL_NODE] = True
-      self.opportunistic_locks[locking.LEVEL_NODE_RES] = True
     else:
       nodeslist = []
       for inst in self.op.instances:
@@ -1956,6 +1996,14 @@ class LUInstanceMultiAlloc(NoHooksLU):
       # prevent accidential modification)
       self.needed_locks[locking.LEVEL_NODE_RES] = list(nodeslist)
 
+  def DeclareLocks(self, level):
+    if level == locking.LEVEL_NODE_RES and \
+       self.opportunistic_locks[locking.LEVEL_NODE]:
+      # Even when using opportunistic locking, we require the same set of
+      # NODE_RES locks as we got NODE locks
+      self.needed_locks[locking.LEVEL_NODE_RES] = \
+        self.owned_locks(locking.LEVEL_NODE)
+
   def CheckPrereq(self):
     """Check prerequisite.
@@ -2314,8 +2362,7 @@ class LUInstanceSetParams(LogicalUnit):
     else:
       raise errors.ProgrammerError("Unhandled operation '%s'" % op)
 
-  @staticmethod
-  def _VerifyDiskModification(op, params, excl_stor):
+  def _VerifyDiskModification(self, op, params, excl_stor):
     """Verifies a disk modification.
 
     """
@@ -2342,10 +2389,12 @@ class LUInstanceSetParams(LogicalUnit):
       if constants.IDISK_SIZE in params:
         raise errors.OpPrereqError("Disk size change not possible, use"
                                    " grow-disk", errors.ECODE_INVAL)
-      if len(params) > 2:
-        raise errors.OpPrereqError("Disk modification doesn't support"
-                                   " additional arbitrary parameters",
-                                   errors.ECODE_INVAL)
+
+      # Disk modification supports changing only the disk name and mode.
+      # Changing arbitrary parameters is allowed only for the ext disk template.
+      if self.instance.disk_template != constants.DT_EXT:
+        utils.ForceDictType(params, constants.MODIFIABLE_IDISK_PARAMS_TYPES)
 
       name = params.get(constants.IDISK_NAME, None)
       if name is not None and name.lower() == constants.VALUE_NONE:
         params[constants.IDISK_NAME] = None
@@ -3322,20 +3371,32 @@ class LUInstanceSetParams(LogicalUnit):
     if not self.instance.disks_active:
       ShutdownInstanceDisks(self, self.instance, disks=[disk])
 
-  @staticmethod
-  def _ModifyDisk(idx, disk, params, _):
+  def _ModifyDisk(self, idx, disk, params, _):
     """Modifies a disk.
 
     """
     changes = []
-    mode = params.get(constants.IDISK_MODE, None)
-    if mode:
-      disk.mode = mode
+    if constants.IDISK_MODE in params:
+      disk.mode = params.get(constants.IDISK_MODE)
       changes.append(("disk.mode/%d" % idx, disk.mode))
-    name = params.get(constants.IDISK_NAME, None)
-    disk.name = name
-    changes.append(("disk.name/%d" % idx, disk.name))
+
+    if constants.IDISK_NAME in params:
+      disk.name = params.get(constants.IDISK_NAME)
+      changes.append(("disk.name/%d" % idx, disk.name))
+
+    # Modify arbitrary params in case instance template is ext
+    for key, value in params.iteritems():
+      if (key not in constants.MODIFIABLE_IDISK_PARAMS and
+          self.instance.disk_template == constants.DT_EXT):
+        # stolen from GetUpdatedParams: default means reset/delete
+        if value.lower() == constants.VALUE_DEFAULT:
+          try:
+            del disk.params[key]
+          except KeyError:
+            pass
+        else:
+          disk.params[key] = value
+        changes.append(("disk.params:%s/%d" % (key, idx), value))
+
     return changes
......
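
The _ModifyDisk change above gives ext-template disks open-ended key=value parameters, where the special value "default" deletes a key instead of setting it (mirroring GetUpdatedParams). A self-contained sketch of that merge rule on plain dicts (helper name is hypothetical):

```python
VALUE_DEFAULT = "default"  # stand-in for constants.VALUE_DEFAULT

def apply_disk_params(disk_params, updates):
    """Apply arbitrary parameter updates in place; the special value
    "default" resets (deletes) the key rather than storing it."""
    changes = []
    for key, value in updates.items():
        if value.lower() == VALUE_DEFAULT:
            disk_params.pop(key, None)  # delete; ignore already-missing keys
        else:
            disk_params[key] = value
        changes.append((key, value))
    return changes
```

The change list records the "default" value itself, so the job log shows that a reset was requested, not what the old value was.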
@@ -2209,7 +2209,7 @@ class KVMHypervisor(hv_base.BaseHypervisor):
     result = utils.RunCmd([kvm_path] + optlist)
     if result.failed and not can_fail:
       raise errors.HypervisorError("Unable to get KVM %s output" %
-                                   " ".join(cls._KVMOPTS_CMDS[option]))
+                                   " ".join(optlist))
 
     return result.output
 
   @classmethod
@@ -2461,10 +2461,10 @@ class KVMHypervisor(hv_base.BaseHypervisor):
     """
     result = self.GetLinuxNodeInfo()
-    # FIXME: this is the global kvm version, but the actual version can be
-    # customized as an hv parameter. we should use the nodegroup's default kvm
-    # path parameter here.
-    _, v_major, v_min, v_rev = self._GetKVMVersion(constants.KVM_PATH)
+    kvmpath = constants.KVM_PATH
+    if hvparams is not None:
+      kvmpath = hvparams.get(constants.HV_KVM_PATH, constants.KVM_PATH)
+    _, v_major, v_min, v_rev = self._GetKVMVersion(kvmpath)
     result[constants.HV_NODEINFO_KEY_VERSION] = (v_major, v_min, v_rev)
     return result
@@ -2518,11 +2518,11 @@ class KVMHypervisor(hv_base.BaseHypervisor):
     """
     msgs = []
-    # FIXME: this is the global kvm binary, but the actual path can be
-    # customized as an hv parameter; we should use the nodegroup's
-    # default kvm path parameter here.
-    if not os.path.exists(constants.KVM_PATH):
-      msgs.append("The KVM binary ('%s') does not exist" % constants.KVM_PATH)
+    kvmpath = constants.KVM_PATH
+    if hvparams is not None:
+      kvmpath = hvparams.get(constants.HV_KVM_PATH, constants.KVM_PATH)
+    if not os.path.exists(kvmpath):
+      msgs.append("The KVM binary ('%s') does not exist" % kvmpath)
     if not os.path.exists(constants.SOCAT_PATH):
       msgs.append("The socat binary ('%s') does not exist" %
                   constants.SOCAT_PATH)
......
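
Both KVM hunks above repeat the same fallback: prefer a custom KVM path from the hvparams dict when one is set, otherwise use the compiled-in default. A sketch of that lookup in isolation (the key name and default path are illustrative stand-ins for Ganeti's constants):

```python
KVM_PATH_DEFAULT = "/usr/bin/kvm"  # stand-in for constants.KVM_PATH

def resolve_kvm_path(hvparams, default=KVM_PATH_DEFAULT):
    """Pick the KVM binary to use: honour a per-cluster/instance override
    in hvparams when present, fall back to the built-in default otherwise."""
    if hvparams is None:
        return default
    return hvparams.get("kvm_path", default)  # key name is illustrative
```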
@@ -167,6 +167,15 @@ def _GetInstanceList(fn, include_node, _timeout=5):
   return _ParseInstanceList(lines, include_node)
 
 
+def _IsInstanceRunning(instance_info):
+  return instance_info == "r-----" \
+      or instance_info == "-b----"
+
+
+def _IsInstanceShutdown(instance_info):
+  return instance_info == "---s--"
+
+
 def _ParseNodeInfo(info):
   """Return information about the node.
@@ -613,24 +622,65 @@ class XenHypervisor(hv_base.BaseHypervisor):
       return self._StopInstance(name, force, instance.hvparams)
 
+  def _ShutdownInstance(self, name, hvparams):
+    """Shutdown an instance if the instance is running.
+
+    @type name: string
+    @param name: name of the instance to stop
+    @type hvparams: dict of string
+    @param hvparams: hypervisor parameters of the instance
+
+    The '-w' flag waits for shutdown to complete which avoids the need
+    to poll in the case where we want to destroy the domain
+    immediately after shutdown.
+
+    """
+    instance_info = self.GetInstanceInfo(name, hvparams=hvparams)
+
+    if instance_info is None or _IsInstanceShutdown(instance_info[4]):
+      logging.info("Failed to shutdown instance %s, not running", name)
+      return None
+
+    return self._RunXen(["shutdown", "-w", name], hvparams)
+
+  def _DestroyInstance(self, name, hvparams):
+    """Destroy an instance if the instance exists.
+
+    @type name: string
+    @param name: name of the instance to destroy
+    @type hvparams: dict of string
+    @param hvparams: hypervisor parameters of the instance
+
+    """
+    instance_info = self.GetInstanceInfo(name, hvparams=hvparams)
+
+    if instance_info is None:
+      logging.info("Failed to destroy instance %s, does not exist", name)
+      return None
+
+    return self._RunXen(["destroy", name], hvparams)
+
   def _StopInstance(self, name, force, hvparams):
     """Stop an instance.
 
     @type name: string
-    @param name: name of the instance to be shutdown
+    @param name: name of the instance to destroy
     @type force: boolean
-    @param force: flag specifying whether shutdown should be forced
+    @param force: whether to do a "hard" stop (destroy)
     @type hvparams: dict of string
     @param hvparams: hypervisor parameters of the instance
 
     """
     if force:
-      action = "destroy"
+      result = self._DestroyInstance(name, hvparams)
     else:
-      action = "shutdown"
+      self._ShutdownInstance(name, hvparams)
+      result = self._DestroyInstance(name, hvparams)
 
-    result = self._RunXen([action, name], hvparams)
-    if result.failed:
+    if result is not None and result.failed and \
+      self.GetInstanceInfo(name, hvparams=hvparams) is not None:
       raise errors.HypervisorError("Failed to stop instance %s: %s, %s" %
                                    (name, result.fail_reason, result.output))
......
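
The new Xen helpers classify the six-character state column of `xm list`/`xl list` output (`r` running, `b` blocked, `s` shut down), and the stop path becomes shutdown-then-destroy, skipping the graceful step for domains that are already down. A standalone sketch of that flow (the callables stand in for running the actual Xen commands):

```python
def is_instance_running(state):
    # "r-----": actively running on a CPU; "-b----": blocked (idle) but alive
    return state in ("r-----", "-b----")

def is_instance_shutdown(state):
    # "---s--": the domain has shut itself down
    return state == "---s--"

def stop_domain(get_state, shutdown, destroy):
    """Graceful stop: skip the shutdown step when the domain is already
    gone or shut down, then always destroy to reap it."""
    state = get_state()
    if state is not None and not is_instance_shutdown(state):
        shutdown()   # e.g. "xl shutdown -w <name>"; -w waits for completion
    destroy()        # e.g. "xl destroy <name>"
```

Destroying after a waited shutdown is cheap, and it guarantees the domain is reaped even when the guest ignores the shutdown request.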
@@ -1757,8 +1757,9 @@ class JobQueue(object):
           job.MarkUnfinishedOps(constants.OP_STATUS_QUEUED, None)
           restartjobs.append(job)
         else:
+          to_encode = errors.OpExecError("Unclean master daemon shutdown")
           job.MarkUnfinishedOps(constants.OP_STATUS_ERROR,
-                                "Unclean master daemon shutdown")
+                                _EncodeOpError(to_encode))
           job.Finalize()
 
         self.UpdateJobUnlocked(job)
......
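
The job-queue fix above stores an encoded exception instead of a bare string, so clients reading the job file can reconstruct a proper error instead of choking on unexpected plain text. A minimal sketch of such an encode/format pair (this is not Ganeti's actual _EncodeOpError, just the general idea):

```python
def encode_error(err):
    """Serialize an exception as (class name, argument list) so it can be
    stored in the job file as plain data and rebuilt by clients."""
    return (err.__class__.__name__, list(err.args))

def format_encoded_error(data):
    """Render an encoded error for display."""
    name, args = data
    return "%s: %s" % (name, ", ".join(str(a) for a in args))
```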
@@ -97,20 +97,16 @@ class _SingleNotifyPipeConditionWaiter(object):
   """
   __slots__ = [
     "_fd",
-    "_poller",
   ]
 
-  def __init__(self, poller, fd):
+  def __init__(self, fd):
     """Constructor for _SingleNotifyPipeConditionWaiter
 
-    @type poller: select.poll
-    @param poller: Poller object
     @type fd: int
     @param fd: File descriptor to wait for
 
     """
     object.__init__(self)
-    self._poller = poller
     self._fd = fd
 
   def __call__(self, timeout):
@@ -121,6 +117,8 @@ class _SingleNotifyPipeConditionWaiter(object):
     """
     running_timeout = utils.RunningTimeout(timeout, True)
+    poller = select.poll()
+    poller.register(self._fd, select.POLLHUP)
 
     while True:
       remaining_time = running_timeout.Remaining()
@@ -133,7 +131,7 @@ class _SingleNotifyPipeConditionWaiter(object):
         remaining_time *= 1000
 
       try:
-        result = self._poller.poll(remaining_time)
+        result = poller.poll(remaining_time)
       except EnvironmentError, err:
         if err.errno != errno.EINTR:
           raise
@@ -222,7 +220,6 @@ class SingleNotifyPipeCondition(_BaseCondition):
   """
   __slots__ = [
-    "_poller",
     "_read_fd",
     "_write_fd",
     "_nwaiters",
@@ -240,7 +237,6 @@ class SingleNotifyPipeCondition(_BaseCondition):
     self._notified = False
     self._read_fd = None
     self._write_fd = None
-    self._poller = None
 
   def _check_unnotified(self):
     """Throws an exception if already notified.
@@ -260,7 +256,6 @@ class SingleNotifyPipeCondition(_BaseCondition):
     if self._write_fd is not None:
       os.close(self._write_fd)
       self._write_fd = None
-    self._poller = None
 
   def wait(self, timeout):
     """Wait for a notification.
@@ -274,12 +269,10 @@ class SingleNotifyPipeCondition(_BaseCondition):
     self._nwaiters += 1
     try:
-      if self._poller is None:
+      if self._read_fd is None:
         (self._read_fd, self._write_fd) = os.pipe()
-        self._poller = select.poll()
-        self._poller.register(self._read_fd, select.POLLHUP)
 
-      wait_fn = self._waiter_class(self._poller, self._read_fd)
+      wait_fn = self._waiter_class(self._read_fd)
       state = self._release_save()
       try:
         # Wait for notification
......
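
The locking change above stops caching a single select.poll object on the condition: a poll object shared between concurrent waiters is not safe, so each waiter now builds its own poller inside the call. A sketch of the per-call pattern on POSIX (the POLLHUP trick is the same one the condition uses — closing the write end of the notification pipe wakes every reader):

```python
import os
import select

def wait_for_hup(fd, timeout_ms):
    """Poll fd for POLLHUP with a poller built per call, so no select.poll
    object is ever shared between concurrent waiters."""
    poller = select.poll()
    poller.register(fd, select.POLLHUP)
    return poller.poll(timeout_ms)

# Closing the write end of a pipe delivers POLLHUP on the read end:
read_fd, write_fd = os.pipe()
os.close(write_fd)
events = wait_for_hup(read_fd, 100)
os.close(read_fd)
```

select.poll is only available on POSIX systems, which is all Ganeti targets anyway.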
@@ -674,7 +674,7 @@ class IAllocator(object):
       assert ninfo.name in node_results, "Missing basic data for node %s" % \
              ninfo.name
 
-      if not (ninfo.offline or ninfo.drained):
+      if not ninfo.offline:
         nresult.Raise("Can't get data for node %s" % ninfo.name)
         node_iinfo[nuuid].Raise("Can't get node instance info from node %s" %
                                 ninfo.name)
......
@@ -158,7 +158,7 @@ class AddressPool(object):
       assert self.gateway in self.network
 
     if self.network6 and self.gateway6:
-      assert self.gateway6 in self.network6
+      assert self.gateway6 in self.network6 or self.gateway6.is_link_local
 
     return True
......
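
The relaxed assertion above accepts an IPv6 gateway that is link-local (fe80::/10) even though it lies outside the pool's network, since a link-local address is reachable on any segment. Ganeti 2.x is Python 2 and uses its own network objects; the sketch below shows the same check with the Python 3 stdlib ipaddress module purely for illustration:

```python
import ipaddress

def gateway6_valid(network6, gateway6):
    """A v6 gateway is acceptable if it lies inside the pool's network or
    is a link-local address (fe80::/10), which is valid on any segment."""
    gateway = ipaddress.ip_address(gateway6)
    return gateway in ipaddress.ip_network(network6) or gateway.is_link_local
```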