Commit c486fb6c authored by Thomas Thrainer

Merge branch 'stable-2.9' into stable-2.10

* stable-2.9
  Bump revision for 2.9.2
  Update NEWS for 2.9.2 release
  Pass hvparams to GetInstanceInfo
  Adapt parameters that moved to instance variables
  Avoid lines longer than 80 chars
  SingleNotifyPipeCondition: don't share pollers
  KVM: use custom KVM path if set for version checking
* stable-2.8
  Version bump for 2.8.3
  Update NEWS for 2.8.3 release
  Support resetting arbitrary params of ext disks
  Allow modification of arbitrary params for ext
  Do not clear disk.params in UpgradeConfig()
  SetDiskID() before accepting an instance
  Lock group(s) when creating instances
  Fix job error message after unclean master shutdown
  Add default file_driver if missing
  Update tests
  Xen handle domain shutdown
  Fix evacuation out of drained node
  Refactor reading live data in htools
  master-up-setup: Ping multiple times with a shorter interval
  Add a packet number limit to "fping" in master-ip-setup
  Fix a bug in InstanceSetParams concerning names
  build_chroot: hard-code the version of blaze-builder
  Fix error printing
  Allow link local IPv6 gateways
  Fix NODE/NODE_RES locking in LUInstanceCreate
  eta-reduce isIpV6
  Ganeti.Rpc: use brackets for ipv6 addresses
  Update NEWS file with socket permission fix info
  Fix socket permissions after master-failover

Conflicts:
	NEWS
	configure.ac
	devel/build_chroot
	lib/constants.py
	src/Ganeti/Rpc.hs

Resolution:
    NEWS: take both additions
    configure.ac: ignore version bump
    constants.py: move constants to Constants.hs
    instance_migration.py: Remove call to SetDiskID(...); it was removed in 2.10
    instance_unittest.py: Adapt test to new logic in LU
    Rest: trivial
Signed-off-by: Thomas Thrainer <thomasth@google.com>
Reviewed-by: Michele Tartara <mtartara@google.com>
parents e34f46e6 89c63fbe
......@@ -81,6 +81,39 @@ before rc1.
- Issue 623: IPv6 Masterd <-> Luxid communication error
Version 2.9.2
-------------
*(Released Fri, 13 Dec 2013)*
- use custom KVM path if set for version checking
- SingleNotifyPipeCondition: don't share pollers
Inherited from the 2.8 branch:
- Fixed Luxi daemon socket permissions after master-failover
- Improve IP version detection code by directly checking for colons rather than
  passing the family from the cluster object
- Fix NODE/NODE_RES locking in LUInstanceCreate by not acquiring NODE_RES locks
opportunistically anymore (Issue 622)
- Allow link-local IPv6 gateways (Issue 624)
- Fix error printing (Issue 616)
- Fix a bug in InstanceSetParams concerning names: if no name is passed in a
  disk modification, keep the old one; if name=none is passed, set the disk
  name to None.
- Update build_chroot script to work with the latest hackage packages
- Add a packet number limit to "fping" in master-ip-setup (Issue 630)
- Fix evacuation out of drained node (Issue 615)
- Add default file_driver if missing (Issue 571)
- Fix job error message after unclean master shutdown (Issue 618)
- Lock group(s) when creating instances (Issue 621)
- SetDiskID() before accepting an instance (Issue 633)
- Allow the ext template disks to receive arbitrary parameters, both at creation
time and while being modified
- Xen handle domain shutdown (future-proofing cherry-pick)
- Refactor reading live data in htools (future-proofing cherry-pick)
Version 2.9.1
-------------
......@@ -216,6 +249,34 @@ This was the first beta release of the 2.9 series. All important changes
are listed in the latest 2.9 entry.
Version 2.8.3
-------------
*(Released Thu, 12 Dec 2013)*
- Fixed Luxi daemon socket permissions after master-failover
- Improve IP version detection code by directly checking for colons rather than
  passing the family from the cluster object
- Fix NODE/NODE_RES locking in LUInstanceCreate by not acquiring NODE_RES locks
opportunistically anymore (Issue 622)
- Allow link-local IPv6 gateways (Issue 624)
- Fix error printing (Issue 616)
- Fix a bug in InstanceSetParams concerning names: if no name is passed in a
  disk modification, keep the old one; if name=none is passed, set the disk
  name to None.
- Update build_chroot script to work with the latest hackage packages
- Add a packet number limit to "fping" in master-ip-setup (Issue 630)
- Fix evacuation out of drained node (Issue 615)
- Add default file_driver if missing (Issue 571)
- Fix job error message after unclean master shutdown (Issue 618)
- Lock group(s) when creating instances (Issue 621)
- SetDiskID() before accepting an instance (Issue 633)
- Allow the ext template disks to receive arbitrary parameters, both at creation
time and while being modified
- Xen handle domain shutdown (future-proofing cherry-pick)
- Refactor reading live data in htools (future-proofing cherry-pick)
Version 2.8.2
-------------
......
......@@ -170,6 +170,7 @@ case $DIST_RELEASE in
in_chroot -- \
cabal install --global \
blaze-builder==0.3.1.1 \
network==2.3 \
regex-pcre==0.94.4 \
hinotify==0.3.2 \
......
......@@ -461,7 +461,7 @@ class LUInstanceCreate(LogicalUnit):
if (not self.op.file_driver and
self.op.disk_template in [constants.DT_FILE,
constants.DT_SHARED_FILE]):
-      self.op.file_driver = constants.FD_LOOP
+      self.op.file_driver = constants.FD_DEFAULT
### Node/iallocator related checks
CheckIAllocatorOrNode(self, "iallocator", "pnode")
......@@ -568,7 +568,6 @@ class LUInstanceCreate(LogicalUnit):
if self.op.opportunistic_locking:
self.opportunistic_locks[locking.LEVEL_NODE] = True
-        self.opportunistic_locks[locking.LEVEL_NODE_RES] = True
else:
(self.op.pnode_uuid, self.op.pnode) = \
ExpandNodeUuidAndName(self.cfg, self.op.pnode_uuid, self.op.pnode)
......@@ -607,6 +606,30 @@ class LUInstanceCreate(LogicalUnit):
self.needed_locks[locking.LEVEL_NODE_RES] = \
CopyLockList(self.needed_locks[locking.LEVEL_NODE])
# Optimistically acquire shared group locks (we're reading the
# configuration). We can't just call GetInstanceNodeGroups, because the
# instance doesn't exist yet. Therefore we lock all node groups of all
# nodes we have.
if self.needed_locks[locking.LEVEL_NODE] == locking.ALL_SET:
      # In the case we lock all nodes for opportunistic allocation, we have no
      # choice but to lock all groups, because they're allocated before nodes.
      # This is sad, but true. At least we release all those we don't need in
      # CheckPrereq later.
self.needed_locks[locking.LEVEL_NODEGROUP] = locking.ALL_SET
else:
self.needed_locks[locking.LEVEL_NODEGROUP] = \
list(self.cfg.GetNodeGroupsFromNodes(
self.needed_locks[locking.LEVEL_NODE]))
self.share_locks[locking.LEVEL_NODEGROUP] = 1
def DeclareLocks(self, level):
if level == locking.LEVEL_NODE_RES and \
self.opportunistic_locks[locking.LEVEL_NODE]:
      # Even when using opportunistic locking, we require the same set of
      # NODE_RES locks as the NODE locks we got
self.needed_locks[locking.LEVEL_NODE_RES] = \
self.owned_locks(locking.LEVEL_NODE)
def _RunAllocator(self):
"""Run the allocator based on input opcode.
......@@ -873,6 +896,21 @@ class LUInstanceCreate(LogicalUnit):
"""Check prerequisites.
"""
# Check that the optimistically acquired groups are correct wrt the
# acquired nodes
owned_groups = frozenset(self.owned_locks(locking.LEVEL_NODEGROUP))
owned_nodes = frozenset(self.owned_locks(locking.LEVEL_NODE))
cur_groups = list(self.cfg.GetNodeGroupsFromNodes(owned_nodes))
if not owned_groups.issuperset(cur_groups):
      raise errors.OpPrereqError("New instance %s's node groups changed since"
                                 " locks were acquired, current groups are"
                                 " '%s', owning groups '%s'; retry the"
                                 " operation" %
(self.op.instance_name,
utils.CommaJoin(cur_groups),
utils.CommaJoin(owned_groups)),
errors.ECODE_STATE)
self._CalculateFileStorageDir()
if self.op.mode == constants.INSTANCE_IMPORT:
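
Taken together, the group-locking hunks above implement an optimistic
acquire-then-verify pattern: group locks are declared from a (possibly stale)
configuration read, and CheckPrereq re-derives the groups from the node locks
actually held, raising a retryable error if they drifted. A minimal standalone
sketch of the pattern, under the assumption that get_groups_of_nodes,
lock_shared and lock_exclusive are illustrative stand-ins, not Ganeti APIs:

    class RetryNeeded(Exception):
      """Raised when the world changed between declaring and holding locks."""

    def get_groups_of_nodes(config, nodes):
      # Stand-in for cfg.GetNodeGroupsFromNodes: each node's group right now.
      return set(config[node] for node in nodes)

    def create_with_group_locks(config, nodes, lock_shared, lock_exclusive):
      # 1. Declare group locks from a possibly stale configuration read.
      declared = get_groups_of_nodes(config, nodes)
      lock_shared(declared)
      # 2. Acquire the node locks themselves.
      lock_exclusive(nodes)
      # 3. Re-derive the groups; nodes may have been moved in the meantime.
      current = get_groups_of_nodes(config, nodes)
      if not declared.issuperset(current):
        # Same contract as the OpPrereqError above: the caller retries.
        raise RetryNeeded("node groups changed while acquiring locks")
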
......@@ -985,6 +1023,9 @@ class LUInstanceCreate(LogicalUnit):
ReleaseLocks(self, locking.LEVEL_NODE, keep=keep_locks)
ReleaseLocks(self, locking.LEVEL_NODE_RES, keep=keep_locks)
ReleaseLocks(self, locking.LEVEL_NODE_ALLOC)
# Release all unneeded group locks
ReleaseLocks(self, locking.LEVEL_NODEGROUP,
keep=self.cfg.GetNodeGroupsFromNodes(keep_locks))
assert (self.owned_locks(locking.LEVEL_NODE) ==
self.owned_locks(locking.LEVEL_NODE_RES)), \
......@@ -1939,7 +1980,6 @@ class LUInstanceMultiAlloc(NoHooksLU):
if self.op.opportunistic_locking:
self.opportunistic_locks[locking.LEVEL_NODE] = True
-        self.opportunistic_locks[locking.LEVEL_NODE_RES] = True
else:
nodeslist = []
for inst in self.op.instances:
......@@ -1956,6 +1996,14 @@ class LUInstanceMultiAlloc(NoHooksLU):
    # prevent accidental modification)
self.needed_locks[locking.LEVEL_NODE_RES] = list(nodeslist)
def DeclareLocks(self, level):
if level == locking.LEVEL_NODE_RES and \
self.opportunistic_locks[locking.LEVEL_NODE]:
      # Even when using opportunistic locking, we require the same set of
      # NODE_RES locks as the NODE locks we got
self.needed_locks[locking.LEVEL_NODE_RES] = \
self.owned_locks(locking.LEVEL_NODE)
def CheckPrereq(self):
"""Check prerequisite.
......@@ -2314,8 +2362,7 @@ class LUInstanceSetParams(LogicalUnit):
else:
raise errors.ProgrammerError("Unhandled operation '%s'" % op)
-  @staticmethod
-  def _VerifyDiskModification(op, params, excl_stor):
+  def _VerifyDiskModification(self, op, params, excl_stor):
"""Verifies a disk modification.
"""
......@@ -2342,10 +2389,12 @@ class LUInstanceSetParams(LogicalUnit):
if constants.IDISK_SIZE in params:
raise errors.OpPrereqError("Disk size change not possible, use"
" grow-disk", errors.ECODE_INVAL)
-      if len(params) > 2:
-        raise errors.OpPrereqError("Disk modification doesn't support"
-                                   " additional arbitrary parameters",
-                                   errors.ECODE_INVAL)
+      # Disk modification supports changing only the disk name and mode.
+      # Changing arbitrary parameters is allowed only for the ext disk
+      # template.
+      if self.instance.disk_template != constants.DT_EXT:
+        utils.ForceDictType(params, constants.MODIFIABLE_IDISK_PARAMS_TYPES)
name = params.get(constants.IDISK_NAME, None)
if name is not None and name.lower() == constants.VALUE_NONE:
params[constants.IDISK_NAME] = None
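
The net effect of this hunk: disk modifications on non-ext templates are
type-checked against the new MODIFIABLE_IDISK_PARAMS_TYPES whitelist (name and
mode only), while ext disks may carry arbitrary keys. A rough illustration of
the whitelist semantics with stand-in constants (the real ForceDictType also
validates value types):

    MODIFIABLE_IDISK_PARAMS = frozenset(["name", "mode"])

    def verify_disk_modification(params, disk_template):
      # Mirrors the check above: only ext accepts arbitrary keys.
      if disk_template != "ext":
        unknown = set(params) - MODIFIABLE_IDISK_PARAMS
        if unknown:
          raise ValueError("unsupported disk parameters: %s"
                           % ", ".join(sorted(unknown)))

    verify_disk_modification({"name": "d0", "mode": "ro"}, "plain")  # passes
    verify_disk_modification({"stripes": "4"}, "ext")                # passes
    # verify_disk_modification({"stripes": "4"}, "plain") raises ValueError
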
......@@ -3322,20 +3371,32 @@ class LUInstanceSetParams(LogicalUnit):
if not self.instance.disks_active:
ShutdownInstanceDisks(self, self.instance, disks=[disk])
-  @staticmethod
-  def _ModifyDisk(idx, disk, params, _):
+  def _ModifyDisk(self, idx, disk, params, _):
"""Modifies a disk.
"""
changes = []
-    mode = params.get(constants.IDISK_MODE, None)
-    if mode:
-      disk.mode = mode
+    if constants.IDISK_MODE in params:
+      disk.mode = params.get(constants.IDISK_MODE)
       changes.append(("disk.mode/%d" % idx, disk.mode))

-    name = params.get(constants.IDISK_NAME, None)
-    disk.name = name
-    changes.append(("disk.name/%d" % idx, disk.name))
+    if constants.IDISK_NAME in params:
+      disk.name = params.get(constants.IDISK_NAME)
+      changes.append(("disk.name/%d" % idx, disk.name))
# Modify arbitrary params in case instance template is ext
for key, value in params.iteritems():
if (key not in constants.MODIFIABLE_IDISK_PARAMS and
self.instance.disk_template == constants.DT_EXT):
# stolen from GetUpdatedParams: default means reset/delete
if value.lower() == constants.VALUE_DEFAULT:
try:
del disk.params[key]
except KeyError:
pass
else:
disk.params[key] = value
changes.append(("disk.params:%s/%d" % (key, idx), value))
return changes
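
For ext disks, the loop above gives the literal value "default" reset
semantics, as in GetUpdatedParams: the key is deleted rather than stored. A
small sketch of just that behaviour (plain Python, no Ganeti imports):

    VALUE_DEFAULT = "default"

    def apply_ext_disk_params(disk_params, updates):
      # "default" deletes a key (reset); any other value sets it.
      for key, value in updates.items():
        if value.lower() == VALUE_DEFAULT:
          disk_params.pop(key, None)  # deleting an absent key is a no-op
        else:
          disk_params[key] = value
      return disk_params

    print(apply_ext_disk_params({"stripes": "4"}, {"stripes": "default"}))
    # -> {}
    print(apply_ext_disk_params({}, {"redundancy": "2"}))
    # -> {'redundancy': '2'}
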
......
......@@ -2209,7 +2209,7 @@ class KVMHypervisor(hv_base.BaseHypervisor):
result = utils.RunCmd([kvm_path] + optlist)
if result.failed and not can_fail:
      raise errors.HypervisorError("Unable to get KVM %s output" %
-                                   " ".join(cls._KVMOPTS_CMDS[option]))
+                                   " ".join(optlist))
return result.output
@classmethod
......@@ -2461,10 +2461,10 @@ class KVMHypervisor(hv_base.BaseHypervisor):
"""
result = self.GetLinuxNodeInfo()
-    # FIXME: this is the global kvm version, but the actual version can be
-    # customized as an hv parameter. we should use the nodegroup's default kvm
-    # path parameter here.
-    _, v_major, v_min, v_rev = self._GetKVMVersion(constants.KVM_PATH)
+    kvmpath = constants.KVM_PATH
+    if hvparams is not None:
+      kvmpath = hvparams.get(constants.HV_KVM_PATH, constants.KVM_PATH)
+    _, v_major, v_min, v_rev = self._GetKVMVersion(kvmpath)
result[constants.HV_NODEINFO_KEY_VERSION] = (v_major, v_min, v_rev)
return result
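
Both KVM hunks resolve the binary path with the same two-step fallback: the hv
parameter when one was passed down, the compiled-in default otherwise. As a
sketch, with literal stand-ins for constants.KVM_PATH and
constants.HV_KVM_PATH:

    KVM_PATH = "/usr/bin/kvm"   # stand-in for constants.KVM_PATH
    HV_KVM_PATH = "kvm_path"    # stand-in for constants.HV_KVM_PATH

    def resolve_kvm_path(hvparams):
      # hvparams can be None when no parameters were handed down.
      if hvparams is not None:
        return hvparams.get(HV_KVM_PATH, KVM_PATH)
      return KVM_PATH

    assert resolve_kvm_path(None) == "/usr/bin/kvm"
    assert resolve_kvm_path({"kvm_path": "/opt/kvm"}) == "/opt/kvm"
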
......@@ -2518,11 +2518,11 @@ class KVMHypervisor(hv_base.BaseHypervisor):
"""
msgs = []
-    # FIXME: this is the global kvm binary, but the actual path can be
-    # customized as an hv parameter; we should use the nodegroup's
-    # default kvm path parameter here.
-    if not os.path.exists(constants.KVM_PATH):
-      msgs.append("The KVM binary ('%s') does not exist" % constants.KVM_PATH)
+    kvmpath = constants.KVM_PATH
+    if hvparams is not None:
+      kvmpath = hvparams.get(constants.HV_KVM_PATH, constants.KVM_PATH)
+    if not os.path.exists(kvmpath):
+      msgs.append("The KVM binary ('%s') does not exist" % kvmpath)
if not os.path.exists(constants.SOCAT_PATH):
msgs.append("The socat binary ('%s') does not exist" %
constants.SOCAT_PATH)
......
......@@ -167,6 +167,15 @@ def _GetInstanceList(fn, include_node, _timeout=5):
return _ParseInstanceList(lines, include_node)
def _IsInstanceRunning(instance_info):
return instance_info == "r-----" \
or instance_info == "-b----"
def _IsInstanceShutdown(instance_info):
return instance_info == "---s--"
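
The two helpers test positions in the six-character state column reported by
xm/xl list, where each position is a flag (r=running, b=blocked, p=paused,
s=shutdown, c=crashed, d=dying) and a dash means the flag is unset. A
commented restatement:

    # State flags from "xl list", e.g. "r-----" or "---s--".
    # Positions: r(unning) b(locked) p(aused) s(hutdown) c(rashed) d(ying)
    def is_instance_running(state):
      # Actively running, or idle/blocked waiting on I/O or a timer.
      return state in ("r-----", "-b----")

    def is_instance_shutdown(state):
      # Only the shutdown flag set: the domain shut down cleanly.
      return state == "---s--"
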
def _ParseNodeInfo(info):
"""Return information about the node.
......@@ -613,24 +622,65 @@ class XenHypervisor(hv_base.BaseHypervisor):
return self._StopInstance(name, force, instance.hvparams)
  def _ShutdownInstance(self, name, hvparams):
    """Shut down an instance if it is running.
@type name: string
@param name: name of the instance to stop
@type hvparams: dict of string
@param hvparams: hypervisor parameters of the instance
    The '-w' flag waits for shutdown to complete, which avoids the need
    to poll in the case where we want to destroy the domain
    immediately after shutdown.
"""
instance_info = self.GetInstanceInfo(name, hvparams=hvparams)
if instance_info is None or _IsInstanceShutdown(instance_info[4]):
logging.info("Failed to shutdown instance %s, not running", name)
return None
return self._RunXen(["shutdown", "-w", name], hvparams)
  def _DestroyInstance(self, name, hvparams):
    """Destroy an instance if the instance exists.
@type name: string
@param name: name of the instance to destroy
@type hvparams: dict of string
@param hvparams: hypervisor parameters of the instance
"""
instance_info = self.GetInstanceInfo(name, hvparams=hvparams)
if instance_info is None:
logging.info("Failed to destroy instance %s, does not exist", name)
return None
return self._RunXen(["destroy", name], hvparams)
def _StopInstance(self, name, force, hvparams):
"""Stop an instance.
@type name: string
-    @param name: name of the instance to be shutdown
+    @param name: name of the instance to destroy
    @type force: boolean
-    @param force: flag specifying whether shutdown should be forced
+    @param force: whether to do a "hard" stop (destroy)
@type hvparams: dict of string
@param hvparams: hypervisor parameters of the instance
"""
    if force:
-      action = "destroy"
+      result = self._DestroyInstance(name, hvparams)
    else:
-      action = "shutdown"
+      self._ShutdownInstance(name, hvparams)
+      result = self._DestroyInstance(name, hvparams)

-    result = self._RunXen([action, name], hvparams)
-    if result.failed:
+    if result is not None and result.failed and \
+        self.GetInstanceInfo(name, hvparams=hvparams) is not None:
raise errors.HypervisorError("Failed to stop instance %s: %s, %s" %
(name, result.fail_reason, result.output))
......
......@@ -1757,8 +1757,9 @@ class JobQueue(object):
job.MarkUnfinishedOps(constants.OP_STATUS_QUEUED, None)
restartjobs.append(job)
else:
+          to_encode = errors.OpExecError("Unclean master daemon shutdown")
          job.MarkUnfinishedOps(constants.OP_STATUS_ERROR,
-                                "Unclean master daemon shutdown")
+                                _EncodeOpError(to_encode))
job.Finalize()
self.UpdateJobUnlocked(job)
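
The jqueue fix matters because MarkUnfinishedOps stores its second argument as
the opcode result, and clients expect an encoded error object there, not a
bare string. A hedged sketch of the round trip this enables; the helpers are
illustrative, not the exact Ganeti serialization:

    def encode_op_error(err):
      # Serialize class name plus arguments so the client can rebuild a
      # typed exception instead of receiving an opaque string.
      return [err.__class__.__name__, list(err.args)]

    def decode_op_error(encoded):
      name, args = encoded
      return RuntimeError("%s: %s" % (name, ", ".join(map(str, args))))

    wire = encode_op_error(RuntimeError("Unclean master daemon shutdown"))
    print(decode_op_error(wire))
    # -> RuntimeError: Unclean master daemon shutdown
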
......
......@@ -97,20 +97,16 @@ class _SingleNotifyPipeConditionWaiter(object):
"""
  __slots__ = [
    "_fd",
-    "_poller",
    ]

-  def __init__(self, poller, fd):
+  def __init__(self, fd):
    """Constructor for _SingleNotifyPipeConditionWaiter

-    @type poller: select.poll
-    @param poller: Poller object
    @type fd: int
    @param fd: File descriptor to wait for

    """
    object.__init__(self)
-    self._poller = poller
    self._fd = fd
def __call__(self, timeout):
......@@ -121,6 +117,8 @@ class _SingleNotifyPipeConditionWaiter(object):
"""
running_timeout = utils.RunningTimeout(timeout, True)
poller = select.poll()
poller.register(self._fd, select.POLLHUP)
while True:
remaining_time = running_timeout.Remaining()
......@@ -133,7 +131,7 @@ class _SingleNotifyPipeConditionWaiter(object):
remaining_time *= 1000
try:
-        result = self._poller.poll(remaining_time)
+        result = poller.poll(remaining_time)
except EnvironmentError, err:
if err.errno != errno.EINTR:
raise
......@@ -222,7 +220,6 @@ class SingleNotifyPipeCondition(_BaseCondition):
"""
  __slots__ = [
-    "_poller",
    "_read_fd",
"_write_fd",
"_nwaiters",
......@@ -240,7 +237,6 @@ class SingleNotifyPipeCondition(_BaseCondition):
self._notified = False
self._read_fd = None
self._write_fd = None
-    self._poller = None
def _check_unnotified(self):
"""Throws an exception if already notified.
......@@ -260,7 +256,6 @@ class SingleNotifyPipeCondition(_BaseCondition):
if self._write_fd is not None:
os.close(self._write_fd)
self._write_fd = None
-    self._poller = None
def wait(self, timeout):
"""Wait for a notification.
......@@ -274,12 +269,10 @@ class SingleNotifyPipeCondition(_BaseCondition):
self._nwaiters += 1
try:
-      if self._poller is None:
+      if self._read_fd is None:
        (self._read_fd, self._write_fd) = os.pipe()
-        self._poller = select.poll()
-        self._poller.register(self._read_fd, select.POLLHUP)

-      wait_fn = self._waiter_class(self._poller, self._read_fd)
+      wait_fn = self._waiter_class(self._read_fd)
state = self._release_save()
try:
# Wait for notification
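
The underlying bug: a single select.poll object was shared by every waiter on
the condition, and poll objects are not safe for concurrent polling from
multiple threads. After the fix only the pipe's read descriptor is shared, and
each wait builds its own poller. A condensed, standard-library-only
demonstration of the per-wait pattern:

    import os
    import select

    class PipeWaiter(object):
      """Shares only the fd; every wait creates a private poll object."""

      def __init__(self, fd):
        self._fd = fd

      def __call__(self, timeout_ms):
        poller = select.poll()  # never reused across threads
        poller.register(self._fd, select.POLLHUP)
        return poller.poll(timeout_ms)

    # Notification is signalled by closing the write end of the pipe,
    # which raises a POLLHUP event on the read end.
    read_fd, write_fd = os.pipe()
    wait_fn = PipeWaiter(read_fd)
    os.close(write_fd)
    print(wait_fn(100))  # one (read_fd, POLLHUP) event
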
......
......@@ -674,7 +674,7 @@ class IAllocator(object):
assert ninfo.name in node_results, "Missing basic data for node %s" % \
ninfo.name
-    if not (ninfo.offline or ninfo.drained):
+    if not ninfo.offline:
nresult.Raise("Can't get data for node %s" % ninfo.name)
node_iinfo[nuuid].Raise("Can't get node instance info from node %s" %
ninfo.name)
......
......@@ -158,7 +158,7 @@ class AddressPool(object):
assert self.gateway in self.network
if self.network6 and self.gateway6:
-      assert self.gateway6 in self.network6
+      assert self.gateway6 in self.network6 or self.gateway6.is_link_local
return True
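
The relaxed assertion accepts an IPv6 gateway on the link-local fe80::/10
range, outside the managed prefix. An analogous check with the standard
ipaddress module (Ganeti uses its own network objects, so this is only an
analogy):

    import ipaddress

    net6 = ipaddress.ip_network(u"2001:db8:100::/64")
    for gw in (u"2001:db8:100::1", u"fe80::1"):
      gw6 = ipaddress.ip_address(gw)
      # Mirrors the relaxed assert: inside the prefix OR link-local.
      assert gw6 in net6 or gw6.is_link_local, "unusable gateway %s" % gw6
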
......
......@@ -829,7 +829,12 @@ class Disk(ConfigObject):
child.UpgradeConfig()
-    # FIXME: Make this configurable in Ganeti 2.7
-    self.params = {}
+    # Params should be an empty dict that gets filled any time needed.
+    # In the case of the ext template we allow arbitrary params that
+    # should not be overridden during a config reload/upgrade.
+    if not self.params or not isinstance(self.params, dict):
+      self.params = {}
# add here config upgrade for this disk
# map of legacy device types (mapping differing LD constants to new
......
......@@ -950,6 +950,9 @@ fdBlktap = Types.fileDriverToRaw FileBlktap
fdLoop :: String
fdLoop = Types.fileDriverToRaw FileLoop
fdDefault :: String
fdDefault = fdLoop
fileDriver :: FrozenSet String
fileDriver =
ConstantUtils.mkSet $
......@@ -2312,6 +2315,15 @@ idiskParamsTypes =
idiskParams :: FrozenSet String
idiskParams = ConstantUtils.mkSet (Map.keys idiskParamsTypes)
modifiableIdiskParamsTypes :: Map String VType
modifiableIdiskParamsTypes =
Map.fromList [(idiskMode, VTypeString),
(idiskName, VTypeString)]
modifiableIdiskParams :: FrozenSet String
modifiableIdiskParams =
ConstantUtils.mkSet (Map.keys modifiableIdiskParamsTypes)
-- * inic* constants are used in opcodes, to create/change nics
inicBridge :: String
......@@ -4313,6 +4325,9 @@ uuidRegex = "^[a-f0-9]{8}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{4}-[a-f0-9]{12}$"
-- * Luxi constants
luxiSocketPerms :: Int
luxiSocketPerms = 0o660
luxiKeyMethod :: String
luxiKeyMethod = "method"
......
......@@ -136,7 +136,7 @@ parseNode ktg n a = do
gidx <- lookupGroup ktg n guuid
ndparams <- extract "ndparams" >>= asJSObject
excl_stor <- tryFromObj desc (fromJSObject ndparams) "exclusive_storage"
-  let live = not offline && not drained && vm_capable'
+  let live = not offline && vm_capable'
lvextract def = eitherLive live def . extract
sptotal <- if excl_stor
then lvextract 0 "total_spindles"
......@@ -150,7 +150,7 @@ parseNode ktg n a = do
ctotal <- lvextract 0.0 "total_cpus"
cnos <- lvextract 0 "reserved_cpus"
let node = Node.create n mtotal mnode mfree dtotal dfree ctotal cnos
-                 (not live) sptotal spfree gidx excl_stor
+                 (not live || drained) sptotal spfree gidx excl_stor
return (n, node)
-- | Parses a group as found in the cluster group list.
......
......@@ -215,7 +215,7 @@ parseNode ktg [ name, mtotal, mnode, mfree, dtotal, dfree
xgdx <- convert "group.uuid" g_uuid >>= lookupGroup ktg xname
xtags <- convert "tags" tags
xexcl_stor <- convert "exclusive_storage" excl_stor
-  let live = not xoffline && not xdrained && xvm_capable
+  let live = not xoffline && xvm_capable
lvconvert def n d = eitherLive live def $ convert n d
xsptotal <- if xexcl_stor
then lvconvert 0 "sptotal" sptotal
......@@ -230,7 +230,8 @@ parseNode ktg [ name, mtotal, mnode, mfree, dtotal, dfree
xcnos <- lvconvert 0 "cnos" cnos
let node = flip Node.setNodeTags xtags $
Node.create xname xmtotal xmnode xmfree xdtotal xdfree
-                   xctotal xcnos (not live) xsptotal xspfree xgdx xexcl_stor
+                   xctotal xcnos (not live || xdrained) xsptotal xspfree
+                   xgdx xexcl_stor
return (xname, node)
parseNode _ v = fail ("Invalid node query result: " ++ show v)
......
......@@ -159,7 +159,7 @@ parseNode ktg a = do
excl_stor <- tryFromObj desc (fromJSObject ndparams) "exclusive_storage"
guuid <- annotateResult desc $ maybeFromObj a "group.uuid"
guuid' <- lookupGroup ktg name (fromMaybe defaultGroupID guuid)
-  let live = not offline && not drained && vm_cap'
+  let live = not offline && vm_cap'
lvextract def = eitherLive live def . extract
sptotal <- if excl_stor
then lvextract 0 "sptotal"
......@@ -175,7 +175,7 @@ parseNode ktg a = do
tags <- extract "tags"
let node = flip Node.setNodeTags tags $
Node.create name mtotal mnode mfree dtotal dfree ctotal cnos
-               (not live) sptotal spfree guuid' excl_stor
+               (not live || drained) sptotal spfree guuid' excl_stor
return (name, node)
-- | Construct a group from a JSON object.
......
......@@ -67,6 +67,7 @@ import Text.JSON.Types
import System.Directory (removeFile)
import System.IO (hClose, hFlush, hWaitForInput, Handle, IOMode(..))
import System.IO.Error (isEOFError)
import System.Posix.Files
import System.Timeout
import qualified Network.Socket as S
......@@ -233,8 +234,9 @@ getServer :: Bool -> FilePath -> IO S.Socket
getServer setOwner path = do
s <- S.socket S.AF_UNIX S.Stream S.defaultProtocol
S.bindSocket s (S.SockAddrUnix path)
-  when setOwner . setOwnerAndGroupFromNames path GanetiLuxid $
-    ExtraGroup DaemonsGroup
+  when setOwner $ do
+    setOwnerAndGroupFromNames path GanetiLuxid $ ExtraGroup DaemonsGroup
+    setFileMode path $ fromIntegral luxiSocketPerms
S.listen s 5 -- 5 is the max backlog
return s
......
......@@ -145,12 +145,19 @@ data HttpClientRequest = HttpClientRequest
, requestOpts :: [CurlOption] -- ^ The various curl options
}
-- | Check if a string-represented address is IPv6
isIpV6 :: String -> Bool
isIpV6 = (':' `elem`)
-- | Prepare url for the HTTP request.
prepareUrl :: (RpcCall a) => Node -> a -> String
prepareUrl node call =
let node_ip = nodePrimaryIp node
node_address = if isIpV6 node_ip
then "[" ++ node_ip ++ "]"
else node_ip
port = C.defaultNodedPort
-      path_prefix = "https://" ++ node_ip ++ ":" ++ show port
+      path_prefix = "https://" ++ node_address ++ ":" ++ show port
in path_prefix ++ "/" ++ rpcCallName call
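
The brackets in prepareUrl are required because a colon is both the IPv6 group
separator and the URL port delimiter: "https://2001:db8::10:1811/" is
ambiguous, while "https://[2001:db8::10]:1811/" is not (RFC 3986). The same
guard in Python, for illustration, assuming 1811, noded's default port:

    def prepare_url(primary_ip, port, call_name):
      # Wrap IPv6 literals in brackets so the port colon stays unambiguous.
      host = "[%s]" % primary_ip if ":" in primary_ip else primary_ip
      return "https://%s:%d/%s" % (host, port, call_name)

    print(prepare_url("192.0.2.10", 1811, "version"))
    # https://192.0.2.10:1811/version
    print(prepare_url("2001:db8::10", 1811, "version"))
    # https://[2001:db8::10]:1811/version
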
-- | Create HTTP request for a given node provided it is online,
......
......@@ -2243,8 +2243,8 @@ class TestLUInstanceSetParams(CmdlibTestCase):
constants.IDISK_MODE: "invalid",
constants.IDISK_NAME: "new_name"
}]])
-    self.ExecOpCodeExpectOpPrereqError(
-      op, "Disk modification doesn't support additional arbitrary parameters")
+    self.ExecOpCodeEx