Commit 93384b8c authored by Guido Trotter

Merge branch 'devel-2.4'



* devel-2.4:
  Use floppy disk and a second CDROM on KVM
  Document the selection of instance kernels
  Make root_path an optional hypervisor parameter
  Some man page updates
  Add 2 new variables to the OS scripts environment
  Add --no-wait-for-sync when converting to drbd
  Recreate instance disks: allow changing nodes
  Rename instance: only show new name when different
  Fix race condition in LUGroupAssignNodes
  Re-wrap and fix formatting issues in gnt-instance.rst
  Documentation for the new parameters for KVM
  cmdlib: Fix typo, s/nick/NIC/
  A small optimisation in cluster verify
  A few docstring fixes
  luxi: do not handle KeyboardInterrupt
  Handle EPIPE errors while writing to the terminal
  Cluster verify: check for missing bridges

Conflicts:
	lib/cmdlib.py
          - manually merge the 2.4 fix
	lib/opcodes.py
          - add new field from 2.4, but also describe it
	man/gnt-cluster.rst
	man/gnt-instance.rst
	man/gnt-node.rst
          - merge new attributes with general 2.4 manpage fixes
Signed-off-by: Guido Trotter <ultrotter@google.com>
Reviewed-by: Iustin Pop <iustin@google.com>
parents 9626f028 fba7f911
@@ -371,6 +371,64 @@ non-managed configuration that the instance had, the transition should
be seamless for the instance. For more than one disk, just pass another
disk parameter (e.g. ``--disk 1:adopt=...``).

Instance kernel selection
+++++++++++++++++++++++++

The kernel that instances use to boot up can come either from the node
or from the instances themselves, depending on the setup.
Xen-PVM
~~~~~~~
With Xen PVM, there are three options.
First, you can use a kernel from the node, by setting the hypervisor
parameters as follows (see the example after this list):
- ``kernel_path`` to a valid file on the node (and appropriately
``initrd_path``)
- ``kernel_args`` optionally set to a valid Linux setting (e.g. ``ro``)
- ``root_path`` to a valid setting (e.g. ``/dev/xvda1``)
- ``bootloader_path`` and ``bootloader_args`` to empty
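A minimal sketch of such a change (instance name, kernel version and
device path are illustrative only)::

  gnt-instance modify -H kernel_path=/boot/vmlinuz-2.6-xenU,initrd_path=/boot/initrd-2.6-xenU,kernel_args=ro,root_path=/dev/xvda1 instance1.example.com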
Alternatively, you can delegate kernel management to the instances, and
use either ``pvgrub`` or the deprecated ``pygrub``. For this, you must
install the kernels and initrds in the instance, and create a valid GRUB
v1 configuration file.
For ``pvgrub`` (new in version 2.4.2), you need to set (example after
the list):
- ``kernel_path`` to point to the ``pvgrub`` loader present on the node
(e.g. ``/usr/lib/xen/boot/pv-grub-x86_32.gz``)
- ``kernel_args`` to the path to the grub config file, relative to the
instance (e.g. ``(hd0,0)/grub/menu.lst``)
- ``root_path`` **must** be empty
- ``bootloader_path`` and ``bootloader_args`` to empty
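A hypothetical invocation (the loader path depends on your Xen
packages; ``kernel_args``, the grub config path described above, is
omitted since its value contains a comma, which the comma-separated
``-H`` list may not parse cleanly)::

  gnt-instance modify -H kernel_path=/usr/lib/xen/boot/pv-grub-x86_32.gz,root_path= instance1.example.com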
While ``pygrub`` is deprecated, here is how you can configure it
(one-line example below):
- ``bootloader_path`` to the pygrub binary (e.g. ``/usr/bin/pygrub``)
- the other settings are not important
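For example (the binary location may vary by distribution)::

  gnt-instance modify -H bootloader_path=/usr/bin/pygrub instance1.example.com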
More information can be found in the Xen wiki pages for `pvgrub
<http://wiki.xensource.com/xenwiki/PvGrub>`_ and `pygrub
<http://wiki.xensource.com/xenwiki/PyGrub>`_.
KVM
~~~
With KVM, the kernel can likewise be loaded either way.

For loading the kernel from the node, you need to set (examples below):
- ``kernel_path`` to a valid value
- ``initrd_path`` optionally set if you use an initrd
- ``kernel_args`` optionally set to a valid value (e.g. ``ro``)
If instead you want the instance to boot from its own disk (and execute
its bootloader), simply set the ``kernel_path`` parameter to an empty
string; all the other parameters will then be ignored.
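Hypothetical examples of both setups (instance name and kernel paths
are illustrative)::

  # load the kernel and initrd from the node
  gnt-instance modify -H kernel_path=/boot/vmlinuz-2.6-kvmU,initrd_path=/boot/initrd-2.6-kvmU instance1.example.com
  # boot from the instance's own disk and bootloader instead
  gnt-instance modify -H kernel_path= instance1.example.com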
Instance HA features
--------------------
......
@@ -644,6 +644,10 @@ def VerifyNode(what, cluster_name):
   if constants.NV_OSLIST in what and vm_capable:
     result[constants.NV_OSLIST] = DiagnoseOS()
 
+  if constants.NV_BRIDGES in what and vm_capable:
+    result[constants.NV_BRIDGES] = [bridge
+                                    for bridge in what[constants.NV_BRIDGES]
+                                    if not utils.BridgeExists(bridge)]
+
   return result
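For reference, ``utils.BridgeExists`` amounts to a sysfs lookup; the
shell equivalent of the check is roughly (bridge name illustrative)::

  # a Linux bridge device exposes a `bridge` subdirectory in sysfs
  test -d /sys/class/net/xen-br0/bridge && echo "xen-br0 is a bridge"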
@@ -2183,12 +2187,14 @@ def OSEnvironment(instance, inst_os, debug=0):
"""
result = OSCoreEnv(instance.os, inst_os, instance.osparams, debug=debug)
for attr in ["name", "os", "uuid", "ctime", "mtime"]:
for attr in ["name", "os", "uuid", "ctime", "mtime", "primary_node"]:
result["INSTANCE_%s" % attr.upper()] = str(getattr(instance, attr))
result['HYPERVISOR'] = instance.hypervisor
result['DISK_COUNT'] = '%d' % len(instance.disks)
result['NIC_COUNT'] = '%d' % len(instance.nics)
result['INSTANCE_SECONDARY_NODES'] = \
('%s' % " ".join(instance.secondary_nodes))
# Disks
for idx, disk in enumerate(instance.disks):
......
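The change surfaces two new variables to the OS scripts,
``INSTANCE_PRIMARY_NODE`` and ``INSTANCE_SECONDARY_NODES``; a minimal
sketch of a script consuming them (purely illustrative)::

  #!/bin/sh
  # both variables are exported by OSEnvironment above;
  # INSTANCE_SECONDARY_NODES is a space-separated list, possibly empty
  echo "primary node: ${INSTANCE_PRIMARY_NODE}" >&2
  for node in ${INSTANCE_SECONDARY_NODES}; do
    echo "secondary node: ${node}" >&2
  done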
@@ -27,6 +27,7 @@ import textwrap
 import os.path
 import time
 import logging
+import errno
 
 from cStringIO import StringIO
 
 from ganeti import utils
@@ -1984,6 +1985,12 @@ def GenericMain(commands, override=None, aliases=None):
ToStderr("Aborted. Note that if the operation created any jobs, they"
" might have been submitted and"
" will continue to run in the background.")
except IOError, err:
if err.errno == errno.EPIPE:
# our terminal went away, we'll exit
sys.exit(constants.EXIT_FAILURE)
else:
raise
return result
@@ -2871,13 +2878,20 @@ def _ToStream(stream, txt, *args):
@param txt: the message
"""
-  if args:
-    args = tuple(args)
-    stream.write(txt % args)
-  else:
-    stream.write(txt)
-  stream.write('\n')
-  stream.flush()
+  try:
+    if args:
+      args = tuple(args)
+      stream.write(txt % args)
+    else:
+      stream.write(txt)
+    stream.write('\n')
+    stream.flush()
+  except IOError, err:
+    if err.errno == errno.EPIPE:
+      # our terminal went away, we'll exit
+      sys.exit(constants.EXIT_FAILURE)
+    else:
+      raise
def ToStdout(txt, *args):
......
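A quick way to trigger the path handled above is to pipe output into a
command that exits early, closing the pipe (hypothetical session)::

  # `head` exits after one line; further writes raise EPIPE, which now
  # causes a clean exit instead of an unhandled IOError traceback
  gnt-instance list | head -n 1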
@@ -607,8 +607,17 @@ def RecreateDisks(opts, args):
   else:
     opts.disks = []
 
+  if opts.node:
+    pnode, snode = SplitNodeOption(opts.node)
+    nodes = [pnode]
+    if snode is not None:
+      nodes.append(snode)
+  else:
+    nodes = []
+
   op = opcodes.OpInstanceRecreateDisks(instance_name=instance_name,
-                                       disks=opts.disks)
+                                       disks=opts.disks,
+                                       nodes=nodes)
   SubmitOrSend(op, opts)
return 0
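With ``-n`` now accepted, a hypothetical invocation moving a DRBD
instance's disks to a fresh node pair (all names illustrative) would
be::

  gnt-instance recreate-disks -n node3.example.com:node4.example.com instance1.example.com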
@@ -1278,7 +1287,8 @@ def SetInstanceParams(opts, args):
                                    os_name=opts.os,
                                    osparams=opts.osparams,
                                    force_variant=opts.force_variant,
-                                   force=opts.force)
+                                   force=opts.force,
+                                   wait_for_sync=opts.wait_for_sync)
 
   # even if here we process the result, we allow submit only
   result = SubmitOrSend(op, opts)
@@ -1425,7 +1435,7 @@ commands = {
SetInstanceParams, ARGS_ONE_INSTANCE,
[BACKEND_OPT, DISK_OPT, FORCE_OPT, HVOPTS_OPT, NET_OPT, SUBMIT_OPT,
DISK_TEMPLATE_OPT, SINGLE_NODE_OPT, OS_OPT, FORCE_VARIANT_OPT,
-     OSPARAMS_OPT, DRY_RUN_OPT, PRIORITY_OPT],
+     OSPARAMS_OPT, DRY_RUN_OPT, PRIORITY_OPT, NWSYNC_OPT],
"<instance>", "Alters the parameters of an instance"),
'shutdown': (
GenericManyOps("shutdown", _ShutdownInstance), [ArgInstance()],
@@ -1458,7 +1468,7 @@ commands = {
"[-f] <instance>", "Deactivate an instance's disks"),
'recreate-disks': (
RecreateDisks, ARGS_ONE_INSTANCE,
-    [SUBMIT_OPT, DISKIDX_OPT, DRY_RUN_OPT, PRIORITY_OPT],
+    [SUBMIT_OPT, DISKIDX_OPT, NODE_PLACEMENT_OPT, DRY_RUN_OPT, PRIORITY_OPT],
"<instance>", "Recreate an instance's disks"),
'grow-disk': (
GrowDisk,
......
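Together with the opcode change further down, this enables conversions
such as (names illustrative)::

  # convert to DRBD without blocking until the initial resync finishes
  gnt-instance modify -t drbd -n node2.example.com --no-wait-for-sync instance1.example.com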
@@ -1531,7 +1531,7 @@ class LUClusterVerify(LogicalUnit):
ntime_diff)
def _VerifyNodeLVM(self, ninfo, nresult, vg_name):
"""Check the node time.
"""Check the node LVM results.
@type ninfo: L{objects.Node}
@param ninfo: the node to check
@@ -1567,8 +1567,31 @@ class LUClusterVerify(LogicalUnit):
_ErrorIf(test, self.ENODELVM, node, "Invalid character ':' in PV"
" '%s' of VG '%s'", pvname, owner_vg)
+  def _VerifyNodeBridges(self, ninfo, nresult, bridges):
+    """Check the node bridges.
+
+    @type ninfo: L{objects.Node}
+    @param ninfo: the node to check
+    @param nresult: the remote results for the node
+    @param bridges: the expected list of bridges
+
+    """
+    if not bridges:
+      return
+
+    node = ninfo.name
+    _ErrorIf = self._ErrorIf # pylint: disable-msg=C0103
+
+    missing = nresult.get(constants.NV_BRIDGES, None)
+    test = not isinstance(missing, list)
+    _ErrorIf(test, self.ENODENET, node,
+             "did not return valid bridge information")
+    if not test:
+      _ErrorIf(bool(missing), self.ENODENET, node, "missing bridges: %s" %
+               utils.CommaJoin(sorted(missing)))
def _VerifyNodeNetwork(self, ninfo, nresult):
"""Check the node time.
"""Check the node network connectivity results.
@type ninfo: L{objects.Node}
@param ninfo: the node to check
@@ -2239,12 +2262,11 @@ class LUClusterVerify(LogicalUnit):
drbd_helper = self.cfg.GetDRBDHelper()
hypervisors = self.cfg.GetClusterInfo().enabled_hypervisors
cluster = self.cfg.GetClusterInfo()
-    nodelist = utils.NiceSort(self.cfg.GetNodeList())
-    nodeinfo = [self.cfg.GetNodeInfo(nname) for nname in nodelist]
-    nodeinfo_byname = dict(zip(nodelist, nodeinfo))
-    instancelist = utils.NiceSort(self.cfg.GetInstanceList())
-    instanceinfo = dict((iname, self.cfg.GetInstanceInfo(iname))
-                        for iname in instancelist)
+    nodeinfo_byname = self.cfg.GetAllNodesInfo()
+    nodelist = utils.NiceSort(nodeinfo_byname.keys())
+    nodeinfo = [nodeinfo_byname[nname] for nname in nodelist]
+    instanceinfo = self.cfg.GetAllInstancesInfo()
+    instancelist = utils.NiceSort(instanceinfo.keys())
groupinfo = self.cfg.GetAllNodeGroupsInfo()
i_non_redundant = [] # Non redundant instances
i_non_a_balanced = [] # Non auto-balanced instances
@@ -2312,6 +2334,21 @@ class LUClusterVerify(LogicalUnit):
if drbd_helper:
node_verify_param[constants.NV_DRBDHELPER] = drbd_helper
+    # bridge checks
+    # FIXME: this needs to be changed per node-group, not cluster-wide
+    bridges = set()
+    default_nicpp = cluster.nicparams[constants.PP_DEFAULT]
+    if default_nicpp[constants.NIC_MODE] == constants.NIC_MODE_BRIDGED:
+      bridges.add(default_nicpp[constants.NIC_LINK])
+    for instance in instanceinfo.values():
+      for nic in instance.nics:
+        full_nic = cluster.SimpleFillNIC(nic.nicparams)
+        if full_nic[constants.NIC_MODE] == constants.NIC_MODE_BRIDGED:
+          bridges.add(full_nic[constants.NIC_LINK])
+    if bridges:
+      node_verify_param[constants.NV_BRIDGES] = list(bridges)
# Build our expected cluster state
node_image = dict((node.name, self.NodeImage(offline=node.offline,
name=node.name,
@@ -2422,6 +2459,7 @@ class LUClusterVerify(LogicalUnit):
       if refos_img is None:
         refos_img = nimg
       self._VerifyNodeOS(node_i, nimg, refos_img)
+      self._VerifyNodeBridges(node_i, nresult, bridges)
feedback_fn("* Verifying instance status")
for instance in instancelist:
@@ -2974,8 +3012,8 @@ class LUClusterSetParams(LogicalUnit):
# if we're moving instances to routed, check that they have an ip
target_mode = params_filled[constants.NIC_MODE]
if target_mode == constants.NIC_MODE_ROUTED and not nic.ip:
nic_errors.append("Instance %s, nic/%d: routed nick with no ip" %
(instance.name, nic_idx))
nic_errors.append("Instance %s, nic/%d: routed NIC with no ip"
" address" % (instance.name, nic_idx))
if nic_errors:
raise errors.OpPrereqError("Cannot apply the change, errors:\n%s" %
"\n".join(nic_errors))
@@ -5698,8 +5736,25 @@ class LUInstanceRecreateDisks(LogicalUnit):
HTYPE = constants.HTYPE_INSTANCE
REQ_BGL = False
  def CheckArguments(self):
    # normalise the disk list
    self.op.disks = sorted(frozenset(self.op.disks))

  def ExpandNames(self):
    self._ExpandAndLockInstance()
    self.recalculate_locks[locking.LEVEL_NODE] = constants.LOCKS_APPEND
    if self.op.nodes:
      self.op.nodes = [_ExpandNodeName(self.cfg, n) for n in self.op.nodes]
      self.needed_locks[locking.LEVEL_NODE] = list(self.op.nodes)
    else:
      self.needed_locks[locking.LEVEL_NODE] = []

  def DeclareLocks(self, level):
    if level == locking.LEVEL_NODE:
      # if we replace the nodes, we only need to lock the old primary,
      # otherwise we need to lock all nodes for disk re-creation
      primary_only = bool(self.op.nodes)
      self._LockInstancesNodes(primary_only=primary_only)
def BuildHooksEnv(self):
"""Build hooks env.
@@ -5725,12 +5780,31 @@ class LUInstanceRecreateDisks(LogicalUnit):
instance = self.cfg.GetInstanceInfo(self.op.instance_name)
assert instance is not None, \
"Cannot retrieve locked instance %s" % self.op.instance_name
-    _CheckNodeOnline(self, instance.primary_node)
+    if self.op.nodes:
+      if len(self.op.nodes) != len(instance.all_nodes):
+        raise errors.OpPrereqError("Instance %s currently has %d nodes, but"
+                                   " %d replacement nodes were specified" %
+                                   (instance.name, len(instance.all_nodes),
+                                    len(self.op.nodes)),
+                                   errors.ECODE_INVAL)
+      assert instance.disk_template != constants.DT_DRBD8 or \
+          len(self.op.nodes) == 2
+      assert instance.disk_template != constants.DT_PLAIN or \
+          len(self.op.nodes) == 1
+      primary_node = self.op.nodes[0]
+    else:
+      primary_node = instance.primary_node
+    _CheckNodeOnline(self, primary_node)
 
     if instance.disk_template == constants.DT_DISKLESS:
       raise errors.OpPrereqError("Instance '%s' has no disks" %
                                  self.op.instance_name, errors.ECODE_INVAL)
 
-    _CheckInstanceDown(self, instance, "cannot recreate disks")
+    # if we replace nodes *and* the old primary is offline, we don't
+    # check
+    assert instance.primary_node in self.needed_locks[locking.LEVEL_NODE]
+    old_pnode = self.cfg.GetNodeInfo(instance.primary_node)
+    if not (self.op.nodes and old_pnode.offline):
+      _CheckInstanceDown(self, instance, "cannot recreate disks")
if not self.op.disks:
self.op.disks = range(len(instance.disks))
@@ -5739,18 +5813,39 @@ class LUInstanceRecreateDisks(LogicalUnit):
       if idx >= len(instance.disks):
         raise errors.OpPrereqError("Invalid disk index '%s'" % idx,
                                    errors.ECODE_INVAL)
 
+    if self.op.disks != range(len(instance.disks)) and self.op.nodes:
+      raise errors.OpPrereqError("Can't recreate disks partially and"
+                                 " change the nodes at the same time",
+                                 errors.ECODE_INVAL)
+
     self.instance = instance
def Exec(self, feedback_fn):
"""Recreate the disks.
"""
# change primary node, if needed
if self.op.nodes:
self.instance.primary_node = self.op.nodes[0]
self.LogWarning("Changing the instance's nodes, you will have to"
" remove any disks left on the older nodes manually")
to_skip = []
for idx, _ in enumerate(self.instance.disks):
for idx, disk in enumerate(self.instance.disks):
if idx not in self.op.disks: # disk idx has not been passed in
to_skip.append(idx)
continue
# update secondaries for disks, if needed
if self.op.nodes:
if disk.dev_type == constants.LD_DRBD8:
# need to update the nodes
assert len(self.op.nodes) == 2
logical_id = list(disk.logical_id)
logical_id[0] = self.op.nodes[0]
logical_id[1] = self.op.nodes[1]
disk.logical_id = tuple(logical_id)
if self.op.nodes:
self.cfg.Update(self.instance, feedback_fn)
_CreateDisks(self, self.instance, to_skip=to_skip)
@@ -5805,8 +5900,9 @@ class LUInstanceRename(LogicalUnit):
new_name = self.op.new_name
if self.op.name_check:
       hostname = netutils.GetHostname(name=new_name)
-      self.LogInfo("Resolved given name '%s' to '%s'", new_name,
-                   hostname.name)
+      if hostname != new_name:
+        self.LogInfo("Resolved given name '%s' to '%s'", new_name,
+                     hostname.name)
if not utils.MatchNameComponent(self.op.new_name, [hostname.name]):
raise errors.OpPrereqError(("Resolved hostname '%s' does not look the"
" same as given hostname '%s'") %
@@ -10196,7 +10292,8 @@ class LUInstanceSetParams(LogicalUnit):
     self.cfg.Update(instance, feedback_fn)
 
     # disks are created, waiting for sync
-    disk_abort = not _WaitForSync(self, instance)
+    disk_abort = not _WaitForSync(self, instance,
+                                  oneshot=not self.op.wait_for_sync)
     if disk_abort:
       raise errors.OpExecError("There are some degraded disks for"
                                " this instance, please cleanup manually")
@@ -10886,20 +10983,40 @@ class LUGroupAssignNodes(NoHooksLU):
     # We want to lock all the affected nodes and groups. We have readily
     # available the list of nodes, and the *destination* group. To gather the
-    # list of "source" groups, we need to fetch node information.
-    self.node_data = self.cfg.GetAllNodesInfo()
-    affected_groups = set(self.node_data[node].group for node in self.op.nodes)
-    affected_groups.add(self.group_uuid)
+    # list of "source" groups, we need to fetch node information later on.
 
     self.needed_locks = {
-      locking.LEVEL_NODEGROUP: list(affected_groups),
+      locking.LEVEL_NODEGROUP: set([self.group_uuid]),
       locking.LEVEL_NODE: self.op.nodes,
       }
+  def DeclareLocks(self, level):
+    if level == locking.LEVEL_NODEGROUP:
+      assert len(self.needed_locks[locking.LEVEL_NODEGROUP]) == 1
+
+      # Try to get all affected nodes' groups without having the group or node
+      # lock yet. Needs verification later in the code flow.
+      groups = self.cfg.GetNodeGroupsFromNodes(self.op.nodes)
+
+      self.needed_locks[locking.LEVEL_NODEGROUP].update(groups)
def CheckPrereq(self):
"""Check prerequisites.
"""
    assert self.needed_locks[locking.LEVEL_NODEGROUP]
    assert (frozenset(self.acquired_locks[locking.LEVEL_NODE]) ==
            frozenset(self.op.nodes))

    expected_locks = (set([self.group_uuid]) |
                      self.cfg.GetNodeGroupsFromNodes(self.op.nodes))
    actual_locks = self.acquired_locks[locking.LEVEL_NODEGROUP]
    if actual_locks != expected_locks:
      raise errors.OpExecError("Nodes changed groups since locks were acquired,"
                               " current groups are '%s', used to be '%s'" %
                               (utils.CommaJoin(expected_locks),
                                utils.CommaJoin(actual_locks)))

    self.node_data = self.cfg.GetAllNodesInfo()
    self.group = self.cfg.GetNodeGroup(self.group_uuid)
    instance_data = self.cfg.GetAllInstancesInfo()
@@ -10935,6 +11052,9 @@ class LUGroupAssignNodes(NoHooksLU):
     for node in self.op.nodes:
       self.node_data[node].group = self.group_uuid
 
+    # FIXME: Depends on side-effects of modifying the result of
+    # C{cfg.GetAllNodesInfo}
     self.cfg.Update(self.group, feedback_fn) # Saves all modified nodes.
@staticmethod
......
@@ -1429,6 +1429,17 @@ class ConfigWriter:
                          for node in self._UnlockedGetNodeList()])
     return my_dict
 
+  @locking.ssynchronized(_config_lock, shared=1)
+  def GetNodeGroupsFromNodes(self, nodes):
+    """Returns groups for a list of nodes.
+
+    @type nodes: list of string
+    @param nodes: List of node names
+    @rtype: frozenset
+
+    """
+    return frozenset(self._UnlockedGetNodeInfo(name).group for name in nodes)
def _UnlockedGetMasterCandidateStats(self, exceptions=None):
"""Get the number of current and maximum desired and possible candidates.
......
@@ -663,7 +663,7 @@ HVS_PARAMETER_TYPES = {
   HV_KERNEL_PATH: VTYPE_STRING,
   HV_KERNEL_ARGS: VTYPE_STRING,
   HV_INITRD_PATH: VTYPE_STRING,
-  HV_ROOT_PATH: VTYPE_STRING,
+  HV_ROOT_PATH: VTYPE_MAYBE_STRING,
   HV_SERIAL_CONSOLE: VTYPE_BOOL,
   HV_USB_MOUSE: VTYPE_STRING,
   HV_DEVICE_MODEL: VTYPE_STRING,
@@ -901,6 +901,7 @@ NV_VERSION = "version"
NV_VGLIST = "vglist"
NV_VMNODES = "vmnodes"
NV_OOB_PATHS = "oob-paths"
NV_BRIDGES = "bridges"
# Instance status
INSTST_RUNNING = "running"
......
@@ -459,7 +459,7 @@ class XenPvmHypervisor(XenHypervisor):
     constants.HV_BOOTLOADER_ARGS: hv_base.NO_CHECK,
     constants.HV_KERNEL_PATH: hv_base.REQ_FILE_CHECK,
     constants.HV_INITRD_PATH: hv_base.OPT_FILE_CHECK,
-    constants.HV_ROOT_PATH: hv_base.REQUIRED_CHECK,
+    constants.HV_ROOT_PATH: hv_base.NO_CHECK,
     constants.HV_KERNEL_ARGS: hv_base.NO_CHECK,
     constants.HV_MIGRATION_PORT: hv_base.NET_PORT_CHECK,
     constants.HV_MIGRATION_MODE: hv_base.MIGRATION_MODE_CHECK,
@@ -521,7 +521,8 @@ class XenPvmHypervisor(XenHypervisor):
config.write("vif = [%s]\n" % ",".join(vif_data))
config.write("disk = [%s]\n" % ",".join(disk_data))
config.write("root = '%s'\n" % hvp[constants.HV_ROOT_PATH])
if hvp[constants.HV_ROOT_PATH]:
config.write("root = '%s'\n" % hvp[constants.HV_ROOT_PATH])
config.write("on_poweroff = 'destroy'\n")
config.write("on_reboot = 'restart'\n")
config.write("on_crash = 'restart'\n")
......
#
#
-# Copyright (C) 2006, 2007 Google Inc.
+# Copyright (C) 2006, 2007, 2011 Google Inc.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
@@ -297,6 +297,8 @@ def ParseResponse(msg):
   # Parse the result
   try:
     data = serializer.LoadJson(msg)
+  except KeyboardInterrupt:
+    raise
   except Exception, err:
     raise ProtocolError("Error while deserializing response: %s" % str(err))
......
@@ -1123,6 +1123,8 @@ class OpInstanceRecreateDisks(OpCode):
     _PInstanceName,
     ("disks", ht.EmptyList, ht.TListOf(ht.TPositiveInt),
      "List of disk indexes"),
+    ("nodes", ht.EmptyList, ht.TListOf(ht.TNonEmptyString),
+     "New instance nodes, if relocation is desired"),
     ]
@@ -1173,6 +1175,8 @@ class OpInstanceSetParams(OpCode):
("os_name", None, ht.TMaybeString,
"Change instance's OS name. Does not reinstall the instance."),
("osparams", None, ht.TMaybeDict, "Per-instance OS parameters"),
("wait_for_sync", True, ht.TBool,
"Whether to wait for the disk to synchronize, when changing template"),
]
......
@@ -137,25 +137,26 @@ INIT
~~~~
| **init**
-| [-s *secondary\_ip*]
+| [{-s|--secondary-ip} *secondary\_ip*]
| [--vg-name *vg-name*]
| [--master-netdev *interface-name*]
-| [-m *mac-prefix*]
+| [{-m|--mac-prefix} *mac-prefix*]
| [--no-lvm-storage]
| [--no-etc-hosts]
| [--no-ssh-init]
| [--file-storage-dir *dir*]
| [--enabled-hypervisors *hypervisors*]
| [-t *hypervisor name*]
-| [--hypervisor-parameters *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
-| [--backend-parameters *be-param*=*value* [,*be-param*=*value*...]]
-| [--nic-parameters *nic-param*=*value* [,*nic-param*=*value*...]]
+| [{-H|--hypervisor-parameters} *hypervisor*:*hv-param*=*value*[,*hv-param*=*value*...]]
+| [{-B|--backend-parameters} *be-param*=*value* [,*be-param*=*value*...]]
+| [{-N|--nic-parameters} *nic-param*=*value* [,*nic-param*=*value*...]]
| [--maintain-node-health {yes \| no}]
| [--uid-pool *user-id pool definition*]
-| [-I *default instance allocator*]
+| [{-I|--default-iallocator} *default instance allocator*]
| [--primary-ip-version *version*]
| [--prealloc-wipe-disks {yes \| no}]
| [--node-parameters *ndparams*]
+| [{-C|--candidate-pool-size} *candidate\_pool\_size*]
| {*clustername*}
This command is only run once initially on the first node of the
@@ -170,13 +171,13 @@ address reserved exclusively for this purpose, i.e. not already in
use.
The cluster can run in two modes: single-home or dual-homed. In the
-first case, all traffic (both public traffic, inter-node traffic
-and data replication traffic) goes over the same interface. In the
+first case, all traffic (both public traffic, inter-node traffic and
+data replication traffic) goes over the same interface. In the
dual-homed case, the data replication traffic goes over the second
-network. The ``-s`` option here marks the cluster as dual-homed and
-its parameter represents this node's address on the second network.
-If you initialise the cluster with ``-s``, all nodes added must
-have a secondary IP as well.
+network. The ``-s (--secondary-ip)`` option here marks the cluster as
+dual-homed and its parameter represents this node's address on the
+second network. If you initialise the cluster with ``-s``, all nodes
+added must have a secondary IP as well.
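A hypothetical dual-homed initialisation (address and name are
illustrative)::

  gnt-cluster init -s 192.0.2.10 cluster1.example.com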
Note that for Ganeti it doesn't matter if the secondary network is
actually a separate physical network, or is done using tunneling,
@@ -196,10 +197,10 @@ interface on which the master will activate its IP address. It's
important that all nodes have this interface because you'll need it
for a master failover.
-The ``-m`` option will let you specify a three byte prefix under
-which the virtual MAC addresses of your instances will be
-generated. The prefix must be specified in the format XX:XX:XX and
-the default is aa:00:00.
+The ``-m (--mac-prefix)`` option will let you specify a three byte
+prefix under which the virtual MAC addresses of your instances will be
+generated. The prefix must be specified in the format ``XX:XX:XX`` and
+the default is ``aa:00:00``.
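For example (prefix and cluster name illustrative)::

  gnt-cluster init -m 00:16:3e cluster1.example.com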