Commit 8f9069e5 authored by Iustin Pop

Merge branch 'devel-2.2'

* devel-2.2:
  setup-ssh: fix updating of authorized_keys
  setup-ssh: Also use keys from the ssh-agent
  setup-ssh: try to use key auth first
  setup-ssh: redo the logging levels
  setup-ssh: only read the ssh port once
  setup-ssh: fix the logging error message
  Use Sphinx' :rfc: extension to refer to RFCs
  Document non-standard usage of JSON in RAPI
  Fix small spelling mistake
  Explicitly add dry-run to some commands
  Stop adding the dry-run option by default
  Fix a few commands behaviour with dry-run
  jqueue: Remove lock status field
  QA: Run simple job queue test
  Don't ignore secondary node silently
  etags: force Python as a language
Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: Guido Trotter <ultrotter@google.com>
parents 981732fb 634a9a35
......@@ -685,7 +685,7 @@ TAGS: $(BUILT_SOURCES)
find . -path './lib/*.py' -o -path './scripts/gnt-*' -o \
-path './daemons/ganeti-*' -o -path './tools/*' -o \
-path './qa/*.py' | \
etags -
etags -l python -
.PHONY: coverage
coverage: $(BUILT_SOURCES) $(python_tests)
......
......@@ -88,7 +88,7 @@ Version 2.2.0 beta 0
- Added per-request RPC timeout
- RAPI now requires a Content-Type header for requests with a body (e.g.
``PUT`` or ``POST``) which must be set to ``application/json`` (see
RFC2616 (HTTP/1.1), section 7.2.1)
:rfc:`2616` (HTTP/1.1), section 7.2.1)
- ``ganeti-watcher`` attempts to restart ``ganeti-rapi`` if RAPI is not
reachable
- Implemented initial support for running Ganeti daemons as separate
......
......@@ -411,7 +411,7 @@ between clusters before instances can be moved. If the third party does
not know the secret, it can't forge the certificates or redirect the
data. Unless disabled by a new cluster parameter, verifying the HMAC
signatures must be mandatory. The HMAC signature for X509 certificates
will be prepended to the certificate similar to an RFC822 header and
will be prepended to the certificate similar to an :rfc:`822` header and
only covers the certificate (from ``-----BEGIN CERTIFICATE-----`` to
``-----END CERTIFICATE-----``). The header name will be
``X-Ganeti-Signature`` and its value will have the format
......
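As a rough sketch of the signing scheme described above: the HMAC covers only
the PEM block and is carried in an ``X-Ganeti-Signature`` header prepended to
the certificate. The exact header value format is truncated in this hunk, so
the plain hex digest, the SHA-1 choice and the ``SignCertificate`` helper
below are illustrative assumptions, not the design's final form::

    import hmac
    import hashlib

    def SignCertificate(cert_pem, cluster_secret):
      # cert_pem is assumed to be only the certificate itself, from
      # -----BEGIN CERTIFICATE----- to -----END CERTIFICATE-----,
      # as required by the design.
      digest = hmac.new(cluster_secret, cert_pem, hashlib.sha1).hexdigest()
      # Prepend the signature as an RFC 822-style header line.
      return "X-Ganeti-Signature: %s\n%s" % (digest, cert_pem)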
......@@ -53,7 +53,7 @@ Example::
.. [#pwhash] Using the MD5 hash of username, realm and password is
described in RFC2617_ ("HTTP Authentication"), sections 3.2.2.2 and
described in :rfc:`2617` ("HTTP Authentication"), sections 3.2.2.2 and
3.3. The reason for using it over another algorithm is forward
compatibility. If ``ganeti-rapi`` were to implement HTTP Digest
authentication in the future, the same hash could be used.
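A minimal sketch of the hash described in that footnote: the stored value is
the MD5 of ``username:realm:password`` as per :rfc:`2617`, section 3.2.2.2.
The realm string used here ("Ganeti Remote API") is assumed for illustration
only::

    import hashlib

    def HashRapiPassword(username, realm, password):
      # RFC 2617: HA1 = MD5(username ":" realm ":" password)
      return hashlib.md5("%s:%s:%s" % (username, realm, password)).hexdigest()

    print HashRapiPassword("jack", "Ganeti Remote API", "abc123")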
......@@ -65,19 +65,51 @@ Protocol
--------
The protocol used is JSON_ over HTTP designed after the REST_ principle.
HTTP Basic authentication as per RFC2617_ is supported.
HTTP Basic authentication as per :rfc:`2617` is supported.
.. _JSON: http://www.json.org/
.. _REST: http://en.wikipedia.org/wiki/Representational_State_Transfer
.. _RFC2617: http://tools.ietf.org/rfc/rfc2617.txt
A note on JSON as used by RAPI
++++++++++++++++++++++++++++++
JSON_ as used by Ganeti RAPI does not conform to the specification in
:rfc:`4627`. Section 2 defines a JSON text to be either an object
(``{"key": "value", …}``) or an array (``[1, 2, 3, …]``). In violation
of this RAPI uses plain strings (``"master-candidate"``, ``"1234"``) for
some requests or responses. Changing this now would likely break
existing clients and cause a lot of trouble.
.. highlight:: ruby
Unlike Python's `JSON encoder and decoder
<http://docs.python.org/library/json.html>`_, other programming
languages or libraries may only provide a strict implementation, not
allowing plain values. For those, responses can usually be wrapped in an
array whose first element is then used, e.g. the response ``"1234"``
becomes ``["1234"]``. This works equally well for more complex values.
Example in Ruby::
require "json"
# Insert code to get response here
response = "\"1234\""
decoded = JSON.parse("[#{response}]").first
Short of modifying the encoder to allow encoding to a less strict
format, requests will have to be formatted by hand. Newer RAPI requests
already use a dictionary as their input data and shouldn't cause any
problems.
PUT or POST?
------------
According to RFC2616 the main difference between PUT and POST is that
POST can create new resources but PUT can only create the resource the
URI was pointing to on the PUT request.
According to :rfc:`2616` the main difference between PUT and POST is
that POST can create new resources but PUT can only create the resource
the URI was pointing to on the PUT request.
Unfortunately, due to historic reasons, the Ganeti RAPI library is not
consistent with this usage, so just use the methods as documented below
......
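Putting the documentation changes above together, a hedged client-side sketch:
HTTP Basic authentication, the now-mandatory ``Content-Type: application/json``
header for requests with a body, and the array-wrapping trick for strict JSON
decoders. The host, port, credentials, resource path and request body below
are placeholders, not a real RAPI endpoint::

    import base64
    import httplib
    import json

    conn = httplib.HTTPSConnection("cluster.example.com", 5080)
    headers = {
      "Authorization": "Basic %s" % base64.b64encode("jack:abc123"),
      # Required for PUT/POST bodies (see the NEWS entry in this merge)
      "Content-Type": "application/json",
      }
    conn.request("PUT", "/2/hypothetical/resource",
                 json.dumps({"example": "body"}), headers)
    response = conn.getresponse().read()
    # Some responses are plain values such as "1234"; wrap them in an array
    # so even a strict decoder accepts them, then take the first element.
    value = json.loads("[%s]" % response)[0]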
......@@ -93,7 +93,7 @@ Remote API
----------
Starting with Ganeti 2.0, Remote API traffic is encrypted using SSL/TLS
by default. It supports Basic authentication as per RFC2617.
by default. It supports Basic authentication as per :rfc:`2617`.
Paths for certificate, private key and CA files required for SSL/TLS
will be set at source configure time. Symlinks or command line
......
......@@ -62,6 +62,7 @@ __all__ = [
"DISK_OPT",
"DISK_TEMPLATE_OPT",
"DRAINED_OPT",
"DRY_RUN_OPT",
"DRBD_HELPER_OPT",
"EARLY_RELEASE_OPT",
"ENABLED_HV_OPT",
......@@ -576,11 +577,11 @@ SYNC_OPT = cli_option("--sync", dest="do_locking",
help=("Grab locks while doing the queries"
" in order to ensure more consistent results"))
_DRY_RUN_OPT = cli_option("--dry-run", default=False,
action="store_true",
help=("Do not execute the operation, just run the"
" check steps and verify it it could be"
" executed"))
DRY_RUN_OPT = cli_option("--dry-run", default=False,
action="store_true",
help=("Do not execute the operation, just run the"
" check steps and verify it it could be"
" executed"))
VERBOSE_OPT = cli_option("-v", "--verbose", default=False,
action="store_true",
......@@ -1096,7 +1097,7 @@ def _ParseArgs(argv, commands, aliases):
cmd = aliases[cmd]
func, args_def, parser_opts, usage, description = commands[cmd]
parser = OptionParser(option_list=parser_opts + [_DRY_RUN_OPT, DEBUG_OPT],
parser = OptionParser(option_list=parser_opts + [DEBUG_OPT],
description=description,
formatter=TitledHelpFormatter(),
usage="%%prog %s %s" % (cmd, usage))
......@@ -1599,7 +1600,8 @@ def SetGenericOpcodeOpts(opcode_list, options):
if not options:
return
for op in opcode_list:
op.dry_run = options.dry_run
if hasattr(options, "dry_run"):
op.dry_run = options.dry_run
op.debug_level = options.debug
......
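The ``hasattr`` guard above matters because ``--dry-run`` is no longer
registered on every parser: commands that do not list ``DRY_RUN_OPT`` produce
an options object without a ``dry_run`` attribute, so the old unconditional
assignment would raise AttributeError. A simplified standalone sketch, not
the real cli.py flow::

    import optparse

    # A command that does not include DRY_RUN_OPT in its option list
    parser = optparse.OptionParser()
    parser.add_option("-d", "--debug", action="count", default=0, dest="debug")
    (options, _) = parser.parse_args(["--debug"])

    class FakeOpCode(object):
      dry_run = None
      debug_level = None

    op = FakeOpCode()
    if hasattr(options, "dry_run"):  # guard introduced by this change
      op.dry_run = options.dry_run
    op.debug_level = options.debug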
......@@ -6601,6 +6601,16 @@ class LUCreateInstance(LogicalUnit):
### Node/iallocator related checks
_CheckIAllocatorOrNode(self, "iallocator", "pnode")
if self.op.pnode is not None:
if self.op.disk_template in constants.DTS_NET_MIRROR:
if self.op.snode is None:
raise errors.OpPrereqError("The networked disk templates need"
" a mirror node", errors.ECODE_INVAL)
elif self.op.snode:
self.LogWarning("Secondary node will be ignored on non-mirrored disk"
" template")
self.op.snode = None
self._cds = _GetClusterDomainSecret()
if self.op.mode == constants.INSTANCE_IMPORT:
......@@ -7147,9 +7157,6 @@ class LUCreateInstance(LogicalUnit):
# mirror node verification
if self.op.disk_template in constants.DTS_NET_MIRROR:
if self.op.snode is None:
raise errors.OpPrereqError("The networked disk templates need"
" a mirror node", errors.ECODE_INVAL)
if self.op.snode == pnode.name:
raise errors.OpPrereqError("The secondary node cannot be the"
" primary node.", errors.ECODE_INVAL)
......
......@@ -167,13 +167,11 @@ class _QueuedJob(object):
@ivar received_timestamp: the timestamp for when the job was received
@ivar start_timestmap: the timestamp for start of execution
@ivar end_timestamp: the timestamp for end of execution
@ivar lock_status: In-memory locking information for debugging
"""
# pylint: disable-msg=W0212
__slots__ = ["queue", "id", "ops", "log_serial",
"received_timestamp", "start_timestamp", "end_timestamp",
"lock_status", "change",
"__weakref__"]
def __init__(self, queue, job_id, ops):
......@@ -199,9 +197,6 @@ class _QueuedJob(object):
self.start_timestamp = None
self.end_timestamp = None
# In-memory attributes
self.lock_status = None
def __repr__(self):
status = ["%s.%s" % (self.__class__.__module__, self.__class__.__name__),
"id=%s" % self.id,
......@@ -228,9 +223,6 @@ class _QueuedJob(object):
obj.start_timestamp = state.get("start_timestamp", None)
obj.end_timestamp = state.get("end_timestamp", None)
# In-memory attributes
obj.lock_status = None
obj.ops = []
obj.log_serial = 0
for op_state in state["ops"]:
......@@ -368,8 +360,6 @@ class _QueuedJob(object):
row.append(self.start_timestamp)
elif fname == "end_ts":
row.append(self.end_timestamp)
elif fname == "lock_status":
row.append(self.lock_status)
elif fname == "summary":
row.append([op.input.Summary() for op in self.ops])
else:
......@@ -439,16 +429,15 @@ class _OpExecCallbacks(mcpu.OpExecCbBase):
Processor.ExecOpCode) set to OP_STATUS_WAITLOCK.
"""
assert self._op in self._job.ops
assert self._op.status in (constants.OP_STATUS_WAITLOCK,
constants.OP_STATUS_CANCELING)
# All locks are acquired by now
self._job.lock_status = None
# Cancel here if we were asked to
self._CheckCancel()
logging.debug("Opcode is now running")
self._op.status = constants.OP_STATUS_RUNNING
self._op.exec_timestamp = TimeStampNow()
......@@ -490,9 +479,6 @@ class _OpExecCallbacks(mcpu.OpExecCbBase):
assert self._op.status in (constants.OP_STATUS_WAITLOCK,
constants.OP_STATUS_CANCELING)
# Not getting the queue lock because this is a single assignment
self._job.lock_status = msg
# Cancel here if we were asked to
self._CheckCancel()
......@@ -755,7 +741,6 @@ class _JobQueueWorker(workerpool.BaseWorker):
op.result = result
op.end_timestamp = TimeStampNow()
if idx == count - 1:
job.lock_status = None
job.end_timestamp = TimeStampNow()
# Consistency check
......@@ -797,7 +782,6 @@ class _JobQueueWorker(workerpool.BaseWorker):
errors.GetEncodedError(i.result)
for i in job.ops[idx:])
finally:
job.lock_status = None
job.end_timestamp = TimeStampNow()
queue.UpdateJobUnlocked(job)
finally:
......@@ -809,7 +793,6 @@ class _JobQueueWorker(workerpool.BaseWorker):
try:
job.MarkUnfinishedOps(constants.OP_STATUS_CANCELED,
"Job canceled by request")
job.lock_status = None
job.end_timestamp = TimeStampNow()
queue.UpdateJobUnlocked(job)
finally:
......
......@@ -255,7 +255,7 @@ class SshRunner:
if node.startswith(remotehostname + "."):
msg = "hostname not FQDN"
else:
msg = "hostname mistmatch"
msg = "hostname mismatch"
return False, ("%s: expected %s but got %s" %
(msg, node, remotehostname))
......
......@@ -85,6 +85,7 @@ def SetupCluster(rapi_user, rapi_secret):
if qa_config.TestEnabled('create-cluster'):
RunTest(qa_cluster.TestClusterInit, rapi_user, rapi_secret)
RunTest(qa_node.TestNodeAddAll)
RunTest(qa_cluster.TestJobqueue)
else:
# consider the nodes are already there
qa_node.MarkNodeAddedAll()
......
......@@ -139,6 +139,16 @@ def TestClusterVerify():
AssertEqual(StartSSH(master['primary'],
utils.ShellQuoteArgs(cmd)).wait(), 0)
def TestJobqueue():
"""gnt-debug test-jobqueue"""
master = qa_config.GetMasterNode()
cmd = ["gnt-debug", "test-jobqueue"]
AssertEqual(StartSSH(master["primary"],
utils.ShellQuoteArgs(cmd)).wait(), 0)
def TestClusterReservedLvs():
"""gnt-cluster reserved lvs"""
master = qa_config.GetMasterNode()
......
#!/usr/bin/python
#
# Copyright (C) 2006, 2007 Google Inc.
# Copyright (C) 2006, 2007, 2010 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
......@@ -136,6 +136,7 @@ import_opts = [
SRC_DIR_OPT,
SRC_NODE_OPT,
SUBMIT_OPT,
DRY_RUN_OPT,
]
......@@ -147,7 +148,7 @@ commands = {
'export': (
ExportInstance, ARGS_ONE_INSTANCE,
[FORCE_OPT, SINGLE_NODE_OPT, NOSHUTDOWN_OPT, SHUTDOWN_TIMEOUT_OPT,
REMOVE_INSTANCE_OPT, IGNORE_REMOVE_FAILURES_OPT],
REMOVE_INSTANCE_OPT, IGNORE_REMOVE_FAILURES_OPT, DRY_RUN_OPT],
"-n <target_node> [opts...] <name>",
"Exports an instance to an image"),
'import': (
......@@ -155,7 +156,7 @@ commands = {
"[...] -t disk-type -n node[:secondary-node] <name>",
"Imports an instance from an exported image"),
'remove': (
RemoveExport, [ArgUnknown(min=1, max=1)], [],
RemoveExport, [ArgUnknown(min=1, max=1)], [DRY_RUN_OPT],
"<name>", "Remove exports of named instance from the filesystem."),
}
......
......@@ -186,7 +186,8 @@ def RenameCluster(opts, args):
op = opcodes.OpRenameCluster(name=new_name)
result = SubmitOpCode(op, opts=opts, cl=cl)
ToStdout("Cluster renamed from '%s' to '%s'", cluster_name, result)
if result:
ToStdout("Cluster renamed from '%s' to '%s'", cluster_name, result)
return 0
......@@ -863,22 +864,23 @@ commands = {
"", "Destroy cluster"),
'rename': (
RenameCluster, [ArgHost(min=1, max=1)],
[FORCE_OPT],
[FORCE_OPT, DRY_RUN_OPT],
"<new_name>",
"Renames the cluster"),
'redist-conf': (
RedistributeConfig, ARGS_NONE, [SUBMIT_OPT],
RedistributeConfig, ARGS_NONE, [SUBMIT_OPT, DRY_RUN_OPT],
"", "Forces a push of the configuration file and ssconf files"
" to the nodes in the cluster"),
'verify': (
VerifyCluster, ARGS_NONE,
[VERBOSE_OPT, DEBUG_SIMERR_OPT, ERROR_CODES_OPT, NONPLUS1_OPT],
[VERBOSE_OPT, DEBUG_SIMERR_OPT, ERROR_CODES_OPT, NONPLUS1_OPT,
DRY_RUN_OPT],
"", "Does a check on the cluster configuration"),
'verify-disks': (
VerifyDisks, ARGS_NONE, [],
"", "Does a check on the cluster disk status"),
'repair-disk-sizes': (
RepairDiskSizes, ARGS_MANY_INSTANCES, [],
RepairDiskSizes, ARGS_MANY_INSTANCES, [DRY_RUN_OPT],
"", "Updates mismatches in recorded disk sizes"),
'master-failover': (
MasterFailover, ARGS_NONE, [NOVOTING_OPT],
......@@ -930,7 +932,8 @@ commands = {
[BACKEND_OPT, CP_SIZE_OPT, ENABLED_HV_OPT, HVLIST_OPT,
NIC_PARAMS_OPT, NOLVM_STORAGE_OPT, VG_NAME_OPT, MAINTAIN_NODE_HEALTH_OPT,
UIDPOOL_OPT, ADD_UIDS_OPT, REMOVE_UIDS_OPT, DRBD_HELPER_OPT,
NODRBD_STORAGE_OPT, DEFAULT_IALLOCATOR_OPT, RESERVED_LVS_OPT],
NODRBD_STORAGE_OPT, DEFAULT_IALLOCATOR_OPT, RESERVED_LVS_OPT,
DRY_RUN_OPT],
"[opts...]",
"Alters the parameters of the cluster"),
"renew-crypto": (
......
#!/usr/bin/python
#
# Copyright (C) 2006, 2007 Google Inc.
# Copyright (C) 2006, 2007, 2010 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
......@@ -407,6 +407,7 @@ commands = {
action="append", help="Select nodes to sleep on"),
cli_option("-r", "--repeat", type="int", default="0", dest="repeat",
help="Number of times to repeat the sleep"),
DRY_RUN_OPT,
],
"[opts...] <duration>", "Executes a TestDelay OpCode"),
'submit-job': (
......@@ -420,6 +421,7 @@ commands = {
action="store_true", help="Show timing stats"),
cli_option("--each", default=False, action="store_true",
help="Submit each job separately"),
DRY_RUN_OPT,
],
"<op_list_file...>", "Submits jobs built from json files"
" containing a list of serialized opcodes"),
......@@ -447,6 +449,7 @@ commands = {
help="Select number of VCPUs for the instance"),
cli_option("--tags", default=None,
help="Comma separated list of tags"),
DRY_RUN_OPT,
],
"{opts...} <instance>", "Executes a TestAllocator OpCode"),
"test-jobqueue": (
......
......@@ -639,7 +639,8 @@ def RenameInstance(opts, args):
name_check=opts.name_check)
result = SubmitOrSend(op, opts)
ToStdout("Instance '%s' renamed to '%s'", args[0], result)
if result:
ToStdout("Instance '%s' renamed to '%s'", args[0], result)
return 0
......@@ -1401,6 +1402,7 @@ add_opts = [
NO_INSTALL_OPT,
OS_SIZE_OPT,
SUBMIT_OPT,
DRY_RUN_OPT,
]
commands = {
......@@ -1409,7 +1411,7 @@ commands = {
"[...] -t disk-type -n node[:secondary-node] -o os-type <name>",
"Creates and adds a new instance to the cluster"),
'batch-create': (
BatchCreate, [ArgFile(min=1, max=1)], [],
BatchCreate, [ArgFile(min=1, max=1)], [DRY_RUN_OPT],
"<instances.json>",
"Create a bunch of instances based on specs in the file."),
'console': (
......@@ -1418,17 +1420,19 @@ commands = {
"[--show-cmd] <instance>", "Opens a console on the specified instance"),
'failover': (
FailoverInstance, ARGS_ONE_INSTANCE,
[FORCE_OPT, IGNORE_CONSIST_OPT, SUBMIT_OPT, SHUTDOWN_TIMEOUT_OPT],
[FORCE_OPT, IGNORE_CONSIST_OPT, SUBMIT_OPT, SHUTDOWN_TIMEOUT_OPT,
DRY_RUN_OPT],
"[-f] <instance>", "Stops the instance and starts it on the backup node,"
" using the remote mirror (only for instances of type drbd)"),
'migrate': (
MigrateInstance, ARGS_ONE_INSTANCE,
[FORCE_OPT, NONLIVE_OPT, MIGRATION_MODE_OPT, CLEANUP_OPT],
[FORCE_OPT, NONLIVE_OPT, MIGRATION_MODE_OPT, CLEANUP_OPT, DRY_RUN_OPT],
"[-f] <instance>", "Migrate instance to its secondary node"
" (only for instances of type drbd)"),
'move': (
MoveInstance, ARGS_ONE_INSTANCE,
[FORCE_OPT, SUBMIT_OPT, SINGLE_NODE_OPT, SHUTDOWN_TIMEOUT_OPT],
[FORCE_OPT, SUBMIT_OPT, SINGLE_NODE_OPT, SHUTDOWN_TIMEOUT_OPT,
DRY_RUN_OPT],
"[-f] <instance>", "Move instance to an arbitrary node"
" (only for instances of type file and lv)"),
'info': (
......@@ -1458,63 +1462,67 @@ commands = {
[FORCE_OPT, OS_OPT, FORCE_VARIANT_OPT, m_force_multi, m_node_opt,
m_pri_node_opt, m_sec_node_opt, m_clust_opt, m_inst_opt, m_node_tags_opt,
m_pri_node_tags_opt, m_sec_node_tags_opt, m_inst_tags_opt, SELECT_OS_OPT,
SUBMIT_OPT],
SUBMIT_OPT, DRY_RUN_OPT],
"[-f] <instance>", "Reinstall a stopped instance"),
'remove': (
RemoveInstance, ARGS_ONE_INSTANCE,
[FORCE_OPT, SHUTDOWN_TIMEOUT_OPT, IGNORE_FAILURES_OPT, SUBMIT_OPT],
[FORCE_OPT, SHUTDOWN_TIMEOUT_OPT, IGNORE_FAILURES_OPT, SUBMIT_OPT,
DRY_RUN_OPT],
"[-f] <instance>", "Shuts down the instance and removes it"),
'rename': (
RenameInstance,
[ArgInstance(min=1, max=1), ArgHost(min=1, max=1)],
[NOIPCHECK_OPT, NONAMECHECK_OPT, SUBMIT_OPT],
[NOIPCHECK_OPT, NONAMECHECK_OPT, SUBMIT_OPT, DRY_RUN_OPT],
"<instance> <new_name>", "Rename the instance"),
'replace-disks': (
ReplaceDisks, ARGS_ONE_INSTANCE,
[AUTO_REPLACE_OPT, DISKIDX_OPT, IALLOCATOR_OPT, EARLY_RELEASE_OPT,
NEW_SECONDARY_OPT, ON_PRIMARY_OPT, ON_SECONDARY_OPT, SUBMIT_OPT],
NEW_SECONDARY_OPT, ON_PRIMARY_OPT, ON_SECONDARY_OPT, SUBMIT_OPT,
DRY_RUN_OPT],
"[-s|-p|-n NODE|-I NAME] <instance>",
"Replaces all disks for the instance"),
'modify': (
SetInstanceParams, ARGS_ONE_INSTANCE,
[BACKEND_OPT, DISK_OPT, FORCE_OPT, HVOPTS_OPT, NET_OPT, SUBMIT_OPT,
DISK_TEMPLATE_OPT, SINGLE_NODE_OPT, OS_OPT, FORCE_VARIANT_OPT,
OSPARAMS_OPT],
OSPARAMS_OPT, DRY_RUN_OPT],
"<instance>", "Alters the parameters of an instance"),
'shutdown': (
GenericManyOps("shutdown", _ShutdownInstance), [ArgInstance()],
[m_node_opt, m_pri_node_opt, m_sec_node_opt, m_clust_opt,
m_node_tags_opt, m_pri_node_tags_opt, m_sec_node_tags_opt,
m_inst_tags_opt, m_inst_opt, m_force_multi, TIMEOUT_OPT, SUBMIT_OPT],
m_inst_tags_opt, m_inst_opt, m_force_multi, TIMEOUT_OPT, SUBMIT_OPT,
DRY_RUN_OPT],
"<instance>", "Stops an instance"),
'startup': (
GenericManyOps("startup", _StartupInstance), [ArgInstance()],
[FORCE_OPT, m_force_multi, m_node_opt, m_pri_node_opt, m_sec_node_opt,
m_node_tags_opt, m_pri_node_tags_opt, m_sec_node_tags_opt,
m_inst_tags_opt, m_clust_opt, m_inst_opt, SUBMIT_OPT, HVOPTS_OPT,
BACKEND_OPT],
BACKEND_OPT, DRY_RUN_OPT],
"<instance>", "Starts an instance"),
'reboot': (
GenericManyOps("reboot", _RebootInstance), [ArgInstance()],
[m_force_multi, REBOOT_TYPE_OPT, IGNORE_SECONDARIES_OPT, m_node_opt,
m_pri_node_opt, m_sec_node_opt, m_clust_opt, m_inst_opt, SUBMIT_OPT,
m_node_tags_opt, m_pri_node_tags_opt, m_sec_node_tags_opt,
m_inst_tags_opt, SHUTDOWN_TIMEOUT_OPT],
m_inst_tags_opt, SHUTDOWN_TIMEOUT_OPT, DRY_RUN_OPT],
"<instance>", "Reboots an instance"),
'activate-disks': (
ActivateDisks, ARGS_ONE_INSTANCE, [SUBMIT_OPT, IGNORE_SIZE_OPT],
ActivateDisks, ARGS_ONE_INSTANCE,
[SUBMIT_OPT, IGNORE_SIZE_OPT],
"<instance>", "Activate an instance's disks"),
'deactivate-disks': (
DeactivateDisks, ARGS_ONE_INSTANCE, [SUBMIT_OPT],
DeactivateDisks, ARGS_ONE_INSTANCE, [SUBMIT_OPT, DRY_RUN_OPT],
"<instance>", "Deactivate an instance's disks"),
'recreate-disks': (
RecreateDisks, ARGS_ONE_INSTANCE, [SUBMIT_OPT, DISKIDX_OPT],
RecreateDisks, ARGS_ONE_INSTANCE, [SUBMIT_OPT, DISKIDX_OPT, DRY_RUN_OPT],
"<instance>", "Recreate an instance's disks"),
'grow-disk': (
GrowDisk,
[ArgInstance(min=1, max=1), ArgUnknown(min=1, max=1),
ArgUnknown(min=1, max=1)],
[SUBMIT_OPT, NWSYNC_OPT],
[SUBMIT_OPT, NWSYNC_OPT, DRY_RUN_OPT],
"<instance> <disk> <size>", "Grow an instance's disk"),
'list-tags': (
ListTags, ARGS_ONE_INSTANCE, [],
......
......@@ -479,7 +479,8 @@ def PowercycleNode(opts, args):
op = opcodes.OpPowercycleNode(node_name=node, force=opts.force)
result = SubmitOpCode(op, opts=opts)
ToStderr(result)
if result:
ToStderr(result)
return 0
......@@ -709,14 +710,14 @@ commands = {
'modify': (
SetNodeParams, ARGS_ONE_NODE,
[FORCE_OPT, SUBMIT_OPT, MC_OPT, DRAINED_OPT, OFFLINE_OPT,
AUTO_PROMOTE_OPT],
AUTO_PROMOTE_OPT, DRY_RUN_OPT],
"<node_name>", "Alters the parameters of a node"),
'powercycle': (
PowercycleNode, ARGS_ONE_NODE,
[FORCE_OPT, CONFIRM_OPT],
[FORCE_OPT, CONFIRM_OPT, DRY_RUN_OPT],
"<node_name>", "Tries to forcefully powercycle a node"),
'remove': (
RemoveNode, ARGS_ONE_NODE, [],
RemoveNode, ARGS_ONE_NODE, [DRY_RUN_OPT],
"<node_name>", "Removes a node from the cluster"),
'volumes': (
ListVolumes, [ArgNode()],
......@@ -733,14 +734,14 @@ commands = {
[ArgNode(min=1, max=1),
ArgChoice(min=1, max=1, choices=_MODIFIABLE_STORAGE_TYPES),
ArgFile(min=1, max=1)],
[ALLOCATABLE_OPT],
[ALLOCATABLE_OPT, DRY_RUN_OPT],
"<node_name> <storage_type> <name>", "Modify storage volume on a node"),
'repair-storage': (
RepairStorage,
[ArgNode(min=1, max=1),
ArgChoice(min=1, max=1, choices=_REPAIRABLE_STORAGE_TYPES),
ArgFile(min=1, max=1)],
[IGNORE_CONSIST_OPT],
[IGNORE_CONSIST_OPT, DRY_RUN_OPT],
"<node_name> <storage_type> <name>",
"Repairs a storage volume on a node"),
'list-tags': (
......
#!/usr/bin/python
#
# Copyright (C) 2006, 2007 Google Inc.
# Copyright (C) 2006, 2007, 2010 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
......@@ -254,7 +254,7 @@ def ModifyOS(opts, args):
candidate_pool_size=None,
os_hvp=os_hvp,
osparams=osp)
SubmitOpCode(op)
SubmitOpCode(op, opts=opts)
return 0
......@@ -269,7 +269,7 @@ commands = {
ShowOSInfo, [ArgOs()], [], "", "Show detailed information about "
"operating systems"),
'modify': (
ModifyOS, ARGS_ONE_OS, [HVLIST_OPT, OSPARAMS_OPT], "",
ModifyOS, ARGS_ONE_OS, [HVLIST_OPT, OSPARAMS_OPT, DRY_RUN_OPT], "",
"Modify the OS parameters"),
}
......
......@@ -129,7 +129,7 @@ def SetupSSH(transport):
except IOError:
# Sadly paramiko doesn't provide errno or similiar
# so we can just assume that the path already exists
logging.info("Path %s seems already to exist on remote node. Ignore.",
logging.info("Path %s seems already to exist on remote node. Ignoring.",
auth_path)
for name, (data, perm) in filemap.iteritems():
......@@ -137,6 +137,14 @@ def SetupSSH(transport):
authorized_keys = sftp.open(auth_keys, "a+")
try:
# Due to the way SFTPFile and BufferedFile are implemented,
# opening in a+ mode and then issuing a read(), readline() or
# iterating over the file (which uses read() internally) will see
# an empty file, since the paramiko internal file position and the
# OS-level file-position are desynchronized; therefore, we issue
# an explicit seek to resynchronize these; writes should (note
# should) still go to the right place
authorized_keys.seek(0, 0)
# We don't have to close, as the close happened already in AddAuthorizedKey
utils.AddAuthorizedKey(authorized_keys, filemap[pub_key][0])
finally:
......@@ -169,6 +177,14 @@ def ParseOptions():
" <node...>"), prog=program)
parser.add_option(cli.DEBUG_OPT)
parser.add_option(cli.VERBOSE_OPT)
default_key = ssh.GetUserFiles(constants.GANETI_RUNAS)[0]
parser.add_option(optparse.Option("-f", dest="private_key",
default=default_key,
help="The private key to (try to) use for"
"authentication "))
parser.add_option(optparse.Option("--key-type", dest="key_type",
choices=("rsa", "dsa"), default="dsa",
help="The private key type (rsa or dsa)"))
(options, args) = parser.parse_args()
......@@ -192,22 +208,92 @@ def SetupLogging(options):
stderr_handler = logging.StreamHandler()