Commit 6aac5aef authored by Iustin Pop

Merge remote branch 'origin/devel-2.4'



* origin/devel-2.4:
  Fix errors in hooks documentation
  Clarify a bit the noded man page
  Note --no-remember in NEWS
  Switch QA over to using instance stop --no-remember
  Implement no_remember at RAPI level
  Implement no_remember at CLI level
  Introduce instance start/stop no_remember attribute
  Bump version for the 2.4.2 release
  Fix a bug in LUInstanceMove
  Abstract ignore_consistency opcode parameter
  Preload the string-escape code in noded
  Fix error in iallocator documentation reg. disk mode
  Try to prevent instance memory changes N+1 failures
  Update NEWS file for the 2.4.2 release

Conflicts:
        NEWS                (trivial)
        doc/iallocator.rst  (kept our version)
        lib/cli.py          (trivial)
        lib/opcodes.py      (removed duplicated work, both branches
                             introduced the same new variable
                              PIgnoreConsistency :)
        lib/rapi/client.py  (trivial)
        lib/rapi/rlib2.py   (almost trivial)
        qa/ganeti-qa.py     (below trivial)
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
parents 235407ba 8ac5c5d7
@@ -20,6 +20,93 @@ Version 2.5.0 beta1
documentation <install>`
Version 2.4.3
-------------
*(unreleased)*
- Added a new parameter to instance stop/start called ``--no-remember``
  that prevents the state change from being recorded in the configuration
Version 2.4.2
-------------
*(Released Thu, 12 May 2011)*
Many bug-fixes and a few new small features:
- Fixed a bug related to log opening failures
- Fixed a bug in instance listing with orphan instances
- Fixed a bug which prevented resetting the cluster-level node parameter
``oob_program`` to the default
- Many fixes related to the ``cluster-merge`` tool
- Fixed a race condition in the lock monitor, which caused failures
during (at least) creation of many instances in parallel
- Improved output for ``gnt-job info``
- Removed the quiet flag on some ssh calls, which was preventing the
  debugging of failures
- Improved the N+1 failure messages in cluster verify by actually
showing the memory values (needed and available)
- Increased lock attempt timeouts so that when executing long operations
(e.g. DRBD replace-disks) other jobs do not enter 'blocking acquire'
too early and thus prevent the use of the 'fair' mechanism
- Changed instance query data (``gnt-instance info``) to not acquire
  locks unless needed, thus allowing its use on a locked instance if
  only static information is asked for
- Improved behaviour with filesystems that do not support rename on an
opened file
- Fixed the behaviour of the ``prealloc_wipe_disks`` cluster parameter,
  which unnecessarily kept locks on all nodes during the wipe
- Fixed ``gnt-watcher`` handling of errors during hooks execution
- Fixed bug in ``prealloc_wipe_disks`` with small disk sizes (less than
10GiB) which caused the wipe to fail right at the end in some cases
- Fixed master IP activation when doing master failover with no-voting
- Fixed bug in ``gnt-node add --readd`` which allowed the re-adding of
the master node itself
- Fixed potential data loss under disk-full conditions, where Ganeti
  wouldn't correctly check the return code and would consider
  partially-written files 'correct'
- Fixed bug related to multiple VGs and DRBD disk replacing
- Added new disk parameter ``metavg`` that allows placement of the meta
device for DRBD in a different volume group
- Fixed error handling in the node daemon when the system libc doesn't
have major number 6 (i.e. if ``libc.so.6`` is not the actual libc)
- Fixed lock release during replace-disks, which kept cluster-wide locks
when doing disk replaces with an iallocator script
- Added check for missing bridges in cluster verify
- Handle EPIPE errors while writing to the terminal better, so that
piping the output to e.g. ``less`` doesn't cause a backtrace
- Fixed a rare case where a ^C during Luxi calls could be interpreted
  as a server error instead of simply terminating
- Fixed a race condition in LUGroupAssignNodes (``gnt-group
assign-nodes``)
- Added a few more parameters to the KVM hypervisor, allowing a second
CDROM, custom disk type for CDROMs and a floppy image
- Removed a redundant message in instance rename when the name is
  already given as a FQDN
- Added option to ``gnt-instance recreate-disks`` to allow creating the
disks on new nodes, allowing recreation when the original instance
nodes are completely gone
- Added option when converting disk templates to DRBD to skip waiting
for the resync, in order to make the instance available sooner
- Added two new variables to the OS scripts environment (containing the
instance's nodes)
- Made ``root_path`` an optional parameter for the xen-pvm hypervisor,
  to allow use of ``pvgrub`` as bootloader
- Changed the instance memory modifications to only check out-of-memory
conditions on memory increases, and turned the secondary node warnings
into errors (they can still be overridden via ``--force``)
- Fixed the handling of a corner case when the Python installation gets
corrupted (e.g. a bad disk) while ganeti-noded is running and we try
to execute a command that doesn't exist
- Fixed a bug in ``gnt-instance move`` (LUInstanceMove) when the primary
node of the instance returned failures during instance shutdown; this
adds the option ``--ignore-consistency`` to gnt-instance move
And as usual, various improvements to the error messages, documentation
and man pages.
Version 2.4.1
-------------
......
# Configure script for Ganeti
m4_define([gnt_version_major], [2])
m4_define([gnt_version_minor], [4])
m4_define([gnt_version_revision], [1])
m4_define([gnt_version_revision], [2])
m4_define([gnt_version_suffix], [])
m4_define([gnt_version_full],
m4_format([%d.%d.%d%s],
......
@@ -114,7 +114,7 @@ Operation list
Node operations
~~~~~~~~~~~~~~~
OP_ADD_NODE
OP_NODE_ADD
+++++++++++
Adds a node to the cluster.
@@ -125,7 +125,7 @@ Adds a node to the cluster.
:post-execution: all nodes plus the new node
OP_REMOVE_NODE
OP_NODE_REMOVE
++++++++++++++
Removes a node from the cluster. On the removed node the hooks are
@@ -171,7 +171,7 @@ Relocate secondary instances from a node.
Node group operations
~~~~~~~~~~~~~~~~~~~~~
OP_ADD_GROUP
OP_GROUP_ADD
++++++++++++
Adds a node group to the cluster.
@@ -191,7 +191,7 @@ Changes a node group's parameters.
:pre-execution: master node
:post-execution: master node
OP_REMOVE_GROUP
OP_GROUP_REMOVE
+++++++++++++++
Removes a node group from the cluster. Since the node group must be
@@ -203,7 +203,7 @@ not exist, and the hook is only executed in the master node.
:pre-execution: master node
:post-execution: master node
OP_RENAME_GROUP
OP_GROUP_RENAME
+++++++++++++++
Renames a node group.
@@ -228,8 +228,8 @@ The INSTANCE_NICn_* and INSTANCE_DISKn_* variables represent the
properties of the *n* -th NIC and disk, and are zero-indexed.
OP_INSTANCE_ADD
+++++++++++++++
OP_INSTANCE_CREATE
++++++++++++++++++
Creates a new instance.
@@ -397,7 +397,7 @@ Replace the disks of an instance.
Cluster operations
~~~~~~~~~~~~~~~~~~
OP_POST_INIT_CLUSTER
OP_CLUSTER_POST_INIT
++++++++++++++++++++
This hook is called via a special "empty" LU right after cluster
@@ -408,7 +408,7 @@ initialization.
:pre-execution: none
:post-execution: master node
OP_DESTROY_CLUSTER
OP_CLUSTER_DESTROY
++++++++++++++++++
The post phase of this hook is called during the execution of destroy
......
@@ -130,6 +130,7 @@ __all__ = [
"NOSTART_OPT",
"NOSSH_KEYCHECK_OPT",
"NOVOTING_OPT",
"NO_REMEMBER_OPT",
"NWSYNC_OPT",
"ON_PRIMARY_OPT",
"ON_SECONDARY_OPT",
@@ -1197,6 +1198,12 @@ FORCE_FILTER_OPT = cli_option("-F", "--filter", dest="force_filter",
help=("Whether command argument should be treated"
" as filter"))
NO_REMEMBER_OPT = cli_option("--no-remember",
dest="no_remember",
action="store_true", default=False,
help="Perform but do not record the change"
" in the configuration")
#: Options provided by all commands
COMMON_OPTS = [DEBUG_OPT]
......
@@ -661,7 +661,8 @@ def _StartupInstance(name, opts):
"""
op = opcodes.OpInstanceStartup(instance_name=name,
force=opts.force,
ignore_offline_nodes=opts.ignore_offline)
ignore_offline_nodes=opts.ignore_offline,
no_remember=opts.no_remember)
# do not add these parameters to the opcode unless they're defined
if opts.hvparams:
op.hvparams = opts.hvparams
@@ -700,7 +701,8 @@ def _ShutdownInstance(name, opts):
"""
return opcodes.OpInstanceShutdown(instance_name=name,
timeout=opts.timeout,
ignore_offline_nodes=opts.ignore_offline)
ignore_offline_nodes=opts.ignore_offline,
no_remember=opts.no_remember)
def ReplaceDisks(opts, args):
@@ -868,7 +870,8 @@ def MoveInstance(opts, args):
op = opcodes.OpInstanceMove(instance_name=instance_name,
target_node=opts.node,
shutdown_timeout=opts.shutdown_timeout)
shutdown_timeout=opts.shutdown_timeout,
ignore_consistency=opts.ignore_consistency)
SubmitOrSend(op, opts, cl=cl)
return 0
@@ -1384,7 +1387,7 @@ commands = {
'move': (
MoveInstance, ARGS_ONE_INSTANCE,
[FORCE_OPT, SUBMIT_OPT, SINGLE_NODE_OPT, SHUTDOWN_TIMEOUT_OPT,
DRY_RUN_OPT, PRIORITY_OPT],
DRY_RUN_OPT, PRIORITY_OPT, IGNORE_CONSIST_OPT],
"[-f] <instance>", "Move instance to an arbitrary node"
" (only for instances of type file and lv)"),
'info': (
@@ -1442,14 +1445,15 @@ commands = {
[m_node_opt, m_pri_node_opt, m_sec_node_opt, m_clust_opt,
m_node_tags_opt, m_pri_node_tags_opt, m_sec_node_tags_opt,
m_inst_tags_opt, m_inst_opt, m_force_multi, TIMEOUT_OPT, SUBMIT_OPT,
DRY_RUN_OPT, PRIORITY_OPT, IGNORE_OFFLINE_OPT],
DRY_RUN_OPT, PRIORITY_OPT, IGNORE_OFFLINE_OPT, NO_REMEMBER_OPT],
"<instance>", "Stops an instance"),
'startup': (
GenericManyOps("startup", _StartupInstance), [ArgInstance()],
[FORCE_OPT, m_force_multi, m_node_opt, m_pri_node_opt, m_sec_node_opt,
m_node_tags_opt, m_pri_node_tags_opt, m_sec_node_tags_opt,
m_inst_tags_opt, m_clust_opt, m_inst_opt, SUBMIT_OPT, HVOPTS_OPT,
BACKEND_OPT, DRY_RUN_OPT, PRIORITY_OPT, IGNORE_OFFLINE_OPT],
BACKEND_OPT, DRY_RUN_OPT, PRIORITY_OPT, IGNORE_OFFLINE_OPT,
NO_REMEMBER_OPT],
"<instance>", "Starts an instance"),
'reboot': (
GenericManyOps("reboot", _RebootInstance), [ArgInstance()],
......
@@ -5459,7 +5459,8 @@ class LUInstanceStartup(LogicalUnit):
instance = self.instance
force = self.op.force
self.cfg.MarkInstanceUp(instance.name)
if not self.op.no_remember:
self.cfg.MarkInstanceUp(instance.name)
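    # With no_remember the configuration keeps saying the instance is down,
    # so the watcher will not restart it if it later crashes.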
if self.primary_offline:
assert self.op.ignore_offline_nodes
@@ -5624,7 +5625,8 @@ class LUInstanceShutdown(LogicalUnit):
node_current = instance.primary_node
timeout = self.op.timeout
self.cfg.MarkInstanceDown(instance.name)
if not self.op.no_remember:
self.cfg.MarkInstanceDown(instance.name)
if self.primary_offline:
assert self.op.ignore_offline_nodes
@@ -10084,6 +10086,7 @@ class LUInstanceSetParams(LogicalUnit):
self.be_inst = i_bedict # the new dict (without defaults)
else:
self.be_new = self.be_inst = {}
be_old = cluster.FillBE(instance)
# osparams processing
if self.op.osparams:
@@ -10095,7 +10098,8 @@ class LUInstanceSetParams(LogicalUnit):
self.warn = []
if constants.BE_MEMORY in self.op.beparams and not self.op.force:
if (constants.BE_MEMORY in self.op.beparams and not self.op.force and
be_new[constants.BE_MEMORY] > be_old[constants.BE_MEMORY]):
mem_check_list = [pnode]
if be_new[constants.BE_AUTO_BALANCE]:
# either we changed auto_balance to yes or it was from before
@@ -10136,16 +10140,17 @@ class LUInstanceSetParams(LogicalUnit):
for node, nres in nodeinfo.items():
if node not in instance.secondary_nodes:
continue
msg = nres.fail_msg
if msg:
self.warn.append("Can't get info from secondary node %s: %s" %
(node, msg))
elif not isinstance(nres.payload.get('memory_free', None), int):
self.warn.append("Secondary node %s didn't return free"
" memory information" % node)
nres.Raise("Can't get info from secondary node %s" % node,
prereq=True, ecode=errors.ECODE_STATE)
if not isinstance(nres.payload.get('memory_free', None), int):
raise errors.OpPrereqError("Secondary node %s didn't return free"
" memory information" % node,
errors.ECODE_STATE)
elif be_new[constants.BE_MEMORY] > nres.payload['memory_free']:
self.warn.append("Not enough memory to failover instance to"
" secondary node %s" % node)
raise errors.OpPrereqError("This change will prevent the instance"
" from failover to its secondary node"
" %s, due to not enough memory" % node,
errors.ECODE_STATE)
# NIC processing
self.nic_pnew = {}
......
@@ -114,6 +114,10 @@ _PQueryWhat = ("what", ht.NoDefault, ht.TElemOf(constants.QR_VIA_OP),
_PIpCheckDoc = "Whether to ensure instance's IP address is inactive"
#: Do not remember instance state changes
_PNoRemember = ("no_remember", False, ht.TBool,
"Do not remember the state change")
#: OP_ID conversion regular expression
_OPID_RE = re.compile("([a-z])([A-Z])")
@@ -988,6 +992,7 @@ class OpInstanceStartup(OpCode):
("hvparams", ht.EmptyDict, ht.TDict,
"Temporary hypervisor parameters, hypervisor-dependent"),
("beparams", ht.EmptyDict, ht.TDict, "Temporary backend parameters"),
_PNoRemember,
]
@@ -999,6 +1004,7 @@ class OpInstanceShutdown(OpCode):
_PIgnoreOfflineNodes,
("timeout", constants.DEFAULT_SHUTDOWN_TIMEOUT, ht.TPositiveInt,
"How long to wait for instance to shut down"),
_PNoRemember,
]
@@ -1087,6 +1093,7 @@ class OpInstanceMove(OpCode):
_PInstanceName,
_PShutdownTimeout,
("target_node", ht.NoDefault, ht.TNonEmptyString, "Target node"),
_PIgnoreConsistency,
]
......
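As an illustration of the new parameters (the instance name and timeout below
are made-up values, not part of the change)::

    # Sketch: build a shutdown opcode that skips recording the state change.
    from ganeti import opcodes

    op = opcodes.OpInstanceShutdown(instance_name="instance1.example.com",
                                    timeout=60,
                                    no_remember=True)
    # The opcode is then submitted as usual, e.g. with SubmitOrSend(op, opts)
    # from a CLI script.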
@@ -863,13 +863,15 @@ class GanetiRapiClient(object): # pylint: disable-msg=R0904
("/%s/instances/%s/reboot" %
(GANETI_RAPI_VERSION, instance)), query, None)
def ShutdownInstance(self, instance, dry_run=False):
def ShutdownInstance(self, instance, dry_run=False, no_remember=False):
"""Shuts down an instance.
@type instance: str
@param instance: the instance to shut down
@type dry_run: bool
@param dry_run: whether to perform a dry run
@type no_remember: bool
@param no_remember: if true, will not record the state change
@rtype: string
@return: job id
@@ -877,18 +879,22 @@ class GanetiRapiClient(object): # pylint: disable-msg=R0904
query = []
if dry_run:
query.append(("dry-run", 1))
if no_remember:
query.append(("no-remember", 1))
return self._SendRequest(HTTP_PUT,
("/%s/instances/%s/shutdown" %
(GANETI_RAPI_VERSION, instance)), query, None)
def StartupInstance(self, instance, dry_run=False):
def StartupInstance(self, instance, dry_run=False, no_remember=False):
"""Starts up an instance.
@type instance: str
@param instance: the instance to start up
@type dry_run: bool
@param dry_run: whether to perform a dry run
@type no_remember: bool
@param no_remember: if true, will not record the state change
@rtype: string
@return: job id
@@ -896,6 +902,8 @@ class GanetiRapiClient(object): # pylint: disable-msg=R0904
query = []
if dry_run:
query.append(("dry-run", 1))
if no_remember:
query.append(("no-remember", 1))
return self._SendRequest(HTTP_PUT,
("/%s/instances/%s/startup" %
......
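A minimal usage sketch for the extended client methods (the host name and
credentials are assumptions for illustration only)::

    from ganeti.rapi import client

    cl = client.GanetiRapiClient("cluster.example.com",
                                 username="admin", password="secret")
    # Stop the instance but leave it marked "up" in the configuration,
    # so a re-enabled watcher would restart it.
    job_id = cl.ShutdownInstance("instance1.example.com", no_remember=True)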
@@ -862,14 +862,16 @@ class R_2_instances_name_startup(baserlib.R_Generic):
"""
instance_name = self.items[0]
force_startup = bool(self._checkIntVariable('force'))
no_remember = bool(self._checkIntVariable('no_remember'))
op = opcodes.OpInstanceStartup(instance_name=instance_name,
force=force_startup,
dry_run=bool(self.dryRun()))
dry_run=bool(self.dryRun()),
no_remember=no_remember)
return baserlib.SubmitJob([op])
def _ParseShutdownInstanceRequest(name, data, dry_run):
def _ParseShutdownInstanceRequest(name, data, dry_run, no_remember):
"""Parses a request for an instance shutdown.
@rtype: L{opcodes.OpInstanceShutdown}
@@ -879,6 +881,7 @@ def _ParseShutdownInstanceRequest(name, data, dry_run):
return baserlib.FillOpcode(opcodes.OpInstanceShutdown, data, {
"instance_name": name,
"dry_run": dry_run,
"no_remember": no_remember,
})
@@ -896,8 +899,9 @@ class R_2_instances_name_shutdown(baserlib.R_Generic):
"""
baserlib.CheckType(self.request_body, dict, "Body contents")
no_remember = bool(self._checkIntVariable('no_remember'))
op = _ParseShutdownInstanceRequest(self.items[0], self.request_body,
bool(self.dryRun()))
bool(self.dryRun()), no_remember)
return baserlib.SubmitJob([op])
......
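On the wire, ``no_remember`` is an integer query argument handled like
``dry-run``; an illustrative request would be
``PUT /2/instances/instance1.example.com/shutdown?no_remember=1``.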
@@ -33,6 +33,7 @@ import os
import sys
import logging
import signal
import codecs
from optparse import OptionParser
@@ -974,6 +975,13 @@ def CheckNoded(_, args):
print >> sys.stderr, ("Usage: %s [-f] [-d] [-p port] [-b ADDRESS]" %
sys.argv[0])
sys.exit(constants.EXIT_FAILURE)
try:
codecs.lookup("string-escape")
except LookupError:
print >> sys.stderr, ("Can't load the string-escape code which is part"
" of the Python installation. Is your installation"
" complete/correct? Aborting.")
sys.exit(constants.EXIT_FAILURE)
def PrepNoded(options, _):
......
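The early lookup is more than a sanity check: a successful
``codecs.lookup()`` caches the codec in the process-wide registry, so later
encodes keep working even if the on-disk Python installation breaks while the
daemon runs (Python 2 only; this codec no longer exists in Python 3). A
minimal illustration::

    import codecs

    # One successful lookup caches the codec for the process lifetime...
    codecs.lookup("string-escape")
    # ...so this encode no longer needs to read anything from disk.
    print "a\tb".encode("string_escape")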
@@ -31,8 +31,8 @@ Logging to syslog, rather than its own log file, can be enabled by
passing in the ``--syslog`` option.
The **ganeti-noded** daemon listens to port 1811 TCP, on all
interfaces, by default. This can be overridden by an entry the
services database (usually /etc/services) or by passing the ``-p``
interfaces, by default. The port can be overridden by an entry in the
services database (usually ``/etc/services``) or by passing the ``-p``
option. The ``-b`` option can be used to specify the address to bind
to (defaults to ``0.0.0.0``).
......
@@ -843,7 +843,7 @@ STARTUP
| **startup**
| [--force] [--ignore-offline]
| [--force-multiple]
| [--force-multiple] [--no-remember]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [{-H|--hypervisor-parameters} ``key=value...``]
@@ -901,6 +901,12 @@ mark the instance as started even if the primary is not available.
The ``--force-multiple`` option will skip the interactive confirmation
in case more than one instance will be affected.
The ``--no-remember`` option will perform the startup but not change
the state of the instance in the configuration file (if it was stopped
before, Ganeti will still think it needs to be stopped). This can be
used for testing, or for a one-shot start where you don't want the
watcher to restart the instance if it crashes.
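For example (the instance name is illustrative)::

    # gnt-instance startup --no-remember instance1.example.com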
The ``-H (--hypervisor-parameters)`` and ``-B (--backend-parameters)``
options specify temporary hypervisor and backend parameters that can
be used to start an instance with modified parameters. They can be
@@ -933,7 +939,7 @@ SHUTDOWN
| **shutdown**
| [--timeout=*N*]
| [--force-multiple] [--ignore-offline]
| [--force-multiple] [--ignore-offline] [--no-remember]
| [--instance \| --node \| --primary \| --secondary \| --all \|
| --tags \| --node-tags \| --pri-node-tags \| --sec-node-tags]
| [--submit]
@@ -962,6 +968,15 @@ can be examined via **gnt-job info**.
force the instance to be marked as stopped. This option should be used
with care as it can lead to an inconsistent cluster state.
The ``--no-remember`` option will perform the shutdown but not change
the state of the instance in the configuration file (if it was running
before, Ganeti will still think it needs to be running). This can be
useful for a cluster-wide shutdown, where some instances are marked as
up and some as down, and you don't want to change the running state:
you just need to disable the watcher, shut down all instances with
``--no-remember``, and when the watcher is activated again it will
restore the correct runtime state for all instances.
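The sequence sketched above could look like this (the pause duration is
illustrative)::

    # gnt-cluster watcher pause 1h
    # gnt-instance shutdown --all --no-remember
    # gnt-cluster watcher continue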
Example::
# gnt-instance shutdown instance1.example.com
@@ -1315,7 +1330,8 @@ Example (and expected output)::
MOVE
^^^^
**move** [-f] [-n *node*] [--shutdown-timeout=*N*] [--submit]
**move** [-f] [--ignore-consistency]
[-n *node*] [--shutdown-timeout=*N*] [--submit]
{*instance*}
Move will move the instance to an arbitrary node in the cluster. This
@@ -1330,6 +1346,10 @@ before forcing the shutdown (e.g. ``xm destroy`` in XEN, killing the
kvm process for KVM, etc.). By default two minutes are given to each
instance to stop.
The ``--ignore-consistency`` option will make Ganeti ignore any errors
when trying to shut down the instance on its node; this is useful if
the hypervisor is broken and you want to recover the data.
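An illustrative invocation (node and instance names are made up)::

    # gnt-instance move -n node2.example.com --ignore-consistency instance1.example.com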
The ``--submit`` option is used to send the job to the master daemon
but not wait for its completion. The job ID will be shown so that it
can be examined via **gnt-job info**.
......
@@ -347,16 +347,16 @@ def RunExportImportTests(instance, pnode, snode):
qa_config.ReleaseInstance(newinst)
def RunDaemonTests(instance, pnode):
def RunDaemonTests(instance):
"""Test the ganeti-watcher script.
"""
RunTest(qa_daemon.TestPauseWatcher)
RunTestIf("instance-automatic-restart",
qa_daemon.TestInstanceAutomaticRestart, pnode, instance)
qa_daemon.TestInstanceAutomaticRestart, instance)
RunTestIf("instance-consecutive-failures",
qa_daemon.TestInstanceConsecutiveFailures, pnode, instance)
qa_daemon.TestInstanceConsecutiveFailures, instance)
RunTest(qa_daemon.TestResumeWatcher)
@@ -440,7 +440,7 @@ def RunQa():
RunGroupListTests()
RunTestIf("cluster-epo", qa_cluster.TestClusterEpo)
RunExportImportTests(instance, pnode, None)
RunDaemonTests(instance, pnode)
RunDaemonTests(instance)
RunRepairDiskSizes()
RunTest(qa_instance.TestInstanceRemove, instance)
del instance
......
#
#
# Copyright (C) 2007, 2008, 2009, 2010 Google Inc.
# Copyright (C) 2007, 2008, 2009, 2010, 2011 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -35,35 +35,30 @@ import qa_error
from qa_utils import AssertMatch, AssertCommand, StartSSH, GetCommandOutput
def _InstanceRunning(node, name):
def _InstanceRunning(name):
"""Checks whether an instance is running.
@param node: node the instance runs on
@param name: full name of the Xen instance
@param name: full name of the instance
"""
cmd = utils.ShellQuoteArgs(['xm', 'list', name]) + ' >/dev/null'
ret = StartSSH(node['primary'], cmd).wait()
master = qa_config.GetMasterNode()
cmd = (utils.ShellQuoteArgs(["gnt-instance", "list", "-o", "status", name]) +
' | grep running')
ret = StartSSH(master["primary"], cmd).wait()
return ret == 0
def _XmShutdownInstance(node, name):
"""Shuts down instance using "xm" and waits for completion.
def _ShutdownInstance(name):
"""Shuts down instance without recording state and waits for completion.
@param node: node the instance runs on
@param name: full name of Xen instance
@param name: full name of the instance
"""
AssertCommand(["xm", "shutdown", name], node=node)
AssertCommand(["gnt-instance", "shutdown", "--no-remember", name])
# Wait up to a minute
end = time.time() + 60
while time.time() <= end:
if not _InstanceRunning(node, name):
break
time.sleep(5)
else:
raise qa_error.Error("xm shutdown failed")
if _InstanceRunning(name):
raise qa_error.Error("instance shutdown failed")
def _ResetWatcherDaemon():
@@ -108,25 +103,25 @@ def TestResumeWatcher():
AssertMatch(output, r"^.*\bis not paused\b.*")
def TestInstanceAutomaticRestart(node, instance):
def TestInstanceAutomaticRestart(instance):
"""Test automatic restart of instance by ganeti-watcher.
"""
inst_name = qa_utils.ResolveInstanceName(instance["name"])
_ResetWatcherDaemon()
_XmShutdownInstance(node, inst_name)
_ShutdownInstance(inst_name)
_RunWatcherDaemon()
time.sleep(5)
if not _InstanceRunning(node, inst_name):