Commit 1b7f2c85 authored by Iustin Pop

Add RST version of gnt-instance man page

Signed-off-by: Iustin Pop <iustin@google.com>
Reviewed-by: René Nussbaumer <rn@google.com>
gnt-instance(8) Ganeti | Version @GANETI_VERSION@
=================================================
Name
----
gnt-instance - Ganeti instance administration
Synopsis
--------
**gnt-instance** {command} [arguments...]
DESCRIPTION
-----------
The **gnt-instance** command is used for instance administration in
the Ganeti system.
COMMANDS
--------
Creation/removal/querying
~~~~~~~~~~~~~~~~~~~~~~~~~
ADD
^^^
| **add**
| {-t {diskless | file \| plain \| drbd}}
| {--disk=*N*: {size=*VAL* \| adopt=*LV*},mode=*ro\|rw* \| -s *SIZE*}
| [--no-ip-check] [--no-name-check] [--no-start] [--no-install]
| [--net=*N* [:options...] \| --no-nics]
| [-B *BEPARAMS*]
| [-H *HYPERVISOR* [: option=*value*... ]]
| [--file-storage-dir *dir\_path*] [--file-driver {loop \| blktap}]
| {-n *node[:secondary-node]* \| --iallocator *name*}
| {-o *os-type*}
| [--submit]
| {*instance*}
Creates a new instance on the specified host. The *instance* argument
must be in DNS, but depending on the bridge/routing setup, need not be
in the same network as the nodes in the cluster.
The ``disk`` option specifies the parameters for the disks of the
instance. The numbering of disks starts at zero, and at least one disk
needs to be passed. For each disk, either the size or the adoption
source needs to be given, and optionally the access mode (read-only or
the default of read-write) can also be specified. The size is
interpreted (when no unit is given) in mebibytes. You can also use
one of the suffixes *m*, *g* or *t* to specify the units used; these
suffixes map to mebibytes, gibibytes and tebibytes.
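The suffix rules above can be sketched as follows (a minimal illustration under the stated rules, not Ganeti's actual parser; ``to_mebibytes`` is a hypothetical name)::

```python
# Map the documented size suffixes to their value in mebibytes.
SUFFIXES = {"m": 1, "g": 1024, "t": 1024 * 1024}

def to_mebibytes(value: str) -> int:
    """Parse a size like '20G' or '512' into mebibytes."""
    value = value.strip().lower()
    if value and value[-1] in SUFFIXES:
        return int(value[:-1]) * SUFFIXES[value[-1]]
    return int(value)  # no suffix: the value is already in mebibytes

print(to_mebibytes("20G"))  # 20480
print(to_mebibytes("512"))  # 512
```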
When using the ``adopt`` key in the disk definition, Ganeti will
reuse those volumes (instead of creating new ones) as the
instance's disks. Ganeti will rename these volumes to the standard
format, and (without installing the OS) will use them as-is for the
instance. This allows migrating instances from non-managed mode
(e.g. plain KVM with LVM) to being managed via Ganeti. Note that
this works only for the \`plain' disk template (see below for
template details).
Alternatively, a single-disk instance can be created via the ``-s``
option which takes a single argument, the size of the disk. This is
similar to the Ganeti 1.2 version (but will only create one disk).
The minimum disk specification is therefore ``--disk 0:size=20G`` (or
``-s 20G`` when using the ``-s`` option), and a three-disk instance
can be specified as ``--disk 0:size=20G --disk 1:size=4G --disk
2:size=100G``.
The ``--no-ip-check`` option skips the checks verifying that the
instance's IP is not already alive (i.e. reachable from the master
node).
The ``--no-name-check`` skips the check for the instance name via
the resolver (e.g. in DNS or /etc/hosts, depending on your setup).
Since the name check is used to compute the IP address, if you pass
this option you must also pass the ``--no-ip-check`` option.
If you don't want the instance to automatically start after
creation, this is possible via the ``--no-start`` option. This will
leave the instance down until a subsequent **gnt-instance start**
command.
The NICs of the instances can be specified via the ``--net``
option. By default, one NIC is created for the instance, with a
random MAC, and set up according to the cluster-level nic
parameters. Each NIC can take these parameters (all optional):
mac
either a value or 'generate' to generate a new unique MAC
ip
specifies the IP address assigned to the instance from the Ganeti
side (this is not necessarily what the instance will use, but what
the node expects the instance to use)
mode
specifies the connection mode for this nic: routed or bridged.
link
in bridged mode specifies the bridge to attach this NIC to, in
routed mode it's intended to differentiate between different
routing tables/instance groups (but the meaning is dependent on the
network script, see gnt-cluster(8) for more details)
Of these, "mode" and "link" are nic parameters, and inherit their
defaults at cluster level.
Alternatively, if no network is desired for the instance, you can
prevent the default of one NIC with the ``--no-nics`` option.
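The ``--net=N[:options...]`` syntax described above can be illustrated with a small parser (a hypothetical sketch of the documented format, not Ganeti's own code)::

```python
def parse_net_option(spec: str) -> tuple:
    """Split a '--net' value like '0:mode=bridged,link=br0'
    into the NIC index and a dict of its parameters."""
    idx, _, rest = spec.partition(":")
    params = {}
    if rest:
        for item in rest.split(","):
            key, _, val = item.partition("=")
            params[key] = val
    return int(idx), params

print(parse_net_option("0:mode=bridged,link=br0"))
# (0, {'mode': 'bridged', 'link': 'br0'})
```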
The ``-o`` options specifies the operating system to be installed.
The available operating systems can be listed with **gnt-os list**.
Passing ``--no-install`` will however skip the OS installation,
allowing a manual import if so desired. Note that the
no-installation mode will automatically disable the start-up of the
instance (without an OS, it most likely won't be able to start-up
successfully).
The ``-B`` option specifies the backend parameters for the
instance. If no such parameters are specified, the values are
inherited from the cluster. Possible parameters are:
memory
the memory size of the instance; as usual, suffixes can be used to
denote the unit, otherwise the value is taken in mebibytes
vcpus
the number of VCPUs to assign to the instance (if this value makes
sense for the hypervisor)
auto\_balance
whether the instance is considered in the N+1 cluster checks
(enough redundancy in the cluster to survive a node failure)
The ``-H`` option specifies the hypervisor to use for the instance
(must be one of the enabled hypervisors on the cluster) and
optionally custom parameters for this instance. If no other
options are used (i.e. the invocation is just -H *NAME*) the
instance will inherit the cluster options. The defaults below show
the cluster defaults at cluster creation time.
The possible hypervisor options are as follows:
boot\_order
Valid for the Xen HVM and KVM hypervisors.
A string value denoting the boot order. This has different meaning
for the Xen HVM hypervisor and for the KVM one.
For Xen HVM, the boot order is a string of letters listing the boot
devices, with valid device letters being:
a
floppy drive
c
hard disk
d
CDROM drive
n
network boot (PXE)
The default is not to set an HVM boot order, which is interpreted
as 'dc'.
For KVM the boot order is either "cdrom", "disk" or "network".
Please note that older versions of KVM couldn't netboot from virtio
interfaces. This has been fixed in more recent versions and is
confirmed to work at least with qemu-kvm 0.11.1.
cdrom\_image\_path
Valid for the Xen HVM and KVM hypervisors.
The path to a CDROM image to attach to the instance.
nic\_type
Valid for the Xen HVM and KVM hypervisors.
This parameter determines the way the network cards are presented
to the instance. The possible options are:
rtl8139 (default for Xen HVM) (HVM & KVM)
ne2k\_isa (HVM & KVM)
ne2k\_pci (HVM & KVM)
i82551 (KVM)
i82557b (KVM)
i82559er (KVM)
pcnet (KVM)
e1000 (KVM)
paravirtual (default for KVM) (HVM & KVM)
disk\_type
Valid for the Xen HVM and KVM hypervisors.
This parameter determines the way the disks are presented to the
instance. The possible options are:
ioemu (default for HVM & KVM) (HVM & KVM)
ide (HVM & KVM)
scsi (KVM)
sd (KVM)
mtd (KVM)
pflash (KVM)
vnc\_bind\_address
Valid for the Xen HVM and KVM hypervisors.
Specifies the address that the VNC listener for this instance
should bind to. Valid values are IPv4 addresses. Use the address
0.0.0.0 to bind to all available interfaces (this is the default)
or specify the address of one of the interfaces on the node to
restrict listening to that interface.
vnc\_tls
Valid for the KVM hypervisor.
A boolean option that controls whether the VNC connection is
secured with TLS.
vnc\_x509\_path
Valid for the KVM hypervisor.
If ``vnc_tls`` is enabled, this option specifies the path to the
x509 certificate to use.
vnc\_x509\_verify
Valid for the KVM hypervisor.
acpi
Valid for the Xen HVM and KVM hypervisors.
A boolean option that specifies if the hypervisor should enable
ACPI support for this instance. By default, ACPI is disabled.
pae
Valid for the Xen HVM and KVM hypervisors.
A boolean option that specifies if the hypervisor should enable
PAE support for this instance. The default is false, disabling PAE
support.
use\_localtime
Valid for the Xen HVM and KVM hypervisors.
A boolean option that specifies if the instance should be started
with its clock set to the localtime of the machine (when true) or
to UTC (when false). The default is false, which is useful for
Linux/Unix machines; for Windows OSes, it is recommended to enable
this parameter.
kernel\_path
Valid for the Xen PVM and KVM hypervisors.
This option specifies the path (on the node) to the kernel to boot
the instance with. Xen PVM instances always require this, while for
KVM if this option is empty, it will cause the machine to load the
kernel from its disks.
kernel\_args
Valid for the Xen PVM and KVM hypervisors.
This option specifies extra arguments to the kernel that will be
loaded. This is always used for Xen PVM, while for KVM it is only
used if the ``kernel_path`` option is also specified.
The default setting for this value is simply ``"ro"``, which mounts
the root disk (initially) in read-only mode. For example, setting
this to single will cause the instance to start in single-user
mode.
initrd\_path
Valid for the Xen PVM and KVM hypervisors.
This option specifies the path (on the node) to the initrd to boot
the instance with. Xen PVM instances can always use this, while for
KVM this option is only used if the ``kernel_path`` option is also
specified. You can pass here either an absolute filename (the
path to the initrd) if you want to use an initrd, or use the format
no\_initrd\_path for no initrd.
root\_path
Valid for the Xen PVM and KVM hypervisors.
This option specifies the name of the root device. This is always
needed for Xen PVM, while for KVM it is only used if the
``kernel_path`` option is also specified.
serial\_console
Valid for the KVM hypervisor.
This boolean option specifies whether to emulate a serial console
for the instance.
disk\_cache
Valid for the KVM hypervisor.
The disk cache mode. It can be either ``default``, to not pass any
cache option to KVM, or one of the KVM cache modes: none (for direct
I/O), writethrough (to use the host cache but report completion to
the guest only when the host has committed the changes to disk) or
writeback (to use the host cache and report completion as soon as
the data is in the host cache). Note that there are special
considerations for the cache mode depending on version of KVM used
and disk type (always raw file under Ganeti), please refer to the
KVM documentation for more details.
security\_model
Valid for the KVM hypervisor.
The security model for kvm. Currently one of "none", "user" or
"pool". Under "none", the default, nothing is done and instances
are run as the Ganeti daemon user (normally root).
Under "user" kvm will drop privileges and become the user specified
by the security\_domain parameter.
Under "pool" a global cluster pool of users will be used, making
sure no two instances share the same user on the same node (this
mode is not implemented yet).
security\_domain
Valid for the KVM hypervisor.
Under security model "user" the username to run the instance under.
It must be a valid username existing on the host.
Cannot be set under security model "none" or "pool".
kvm\_flag
Valid for the KVM hypervisor.
If "enabled" the -enable-kvm flag is passed to kvm. If "disabled"
-disable-kvm is passed. If unset no flag is passed, and the default
running mode for your kvm binary will be used.
mem\_path
Valid for the KVM hypervisor.
This option passes the -mem-path argument to kvm with the path (on
the node) to the mount point of the hugetlbfs file system, along
with the -mem-prealloc argument too.
use\_chroot
Valid for the KVM hypervisor.
This boolean option determines whether to run the KVM instance in a
chroot directory.
If it is set to ``true``, an empty directory is created before
starting the instance and its path is passed via the -chroot flag
to kvm. The directory is removed when the instance is stopped.
It is set to ``false`` by default.
migration\_downtime
Valid for the KVM hypervisor.
The maximum amount of time (in ms) a KVM instance is allowed to be
frozen during a live migration, in order to copy dirty memory
pages. Default value is 30ms, but you may need to increase this
value for busy instances.
This option is only effective with kvm versions >= 87 and qemu-kvm
versions >= 0.11.0.
cpu\_mask
Valid for the LXC hypervisor.
The processes belonging to the given instance are only scheduled on
the specified CPUs.
The parameter format is a comma-separated list of CPU IDs or CPU ID
ranges. The ranges are defined by a lower and higher boundary,
separated by a dash. The boundaries are inclusive.
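The ``cpu_mask`` format described above can be sketched with a small parser (a hypothetical helper following the documented rules, not Ganeti's implementation)::

```python
def parse_cpu_mask(mask: str) -> list:
    """Expand a mask like '0-2,4' into a sorted list of CPU IDs."""
    cpus = set()
    for part in mask.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            # the boundaries are inclusive, per the description above
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return sorted(cpus)

print(parse_cpu_mask("0-2,4"))  # [0, 1, 2, 4]
```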
usb\_mouse
Valid for the KVM hypervisor.
This option specifies the usb mouse type to be used. It can be
"mouse" or "tablet". When using VNC it's recommended to set it to
"tablet".
The ``--iallocator`` option specifies the instance allocator plugin
to use. If you pass in this option the allocator will select nodes
for this instance automatically, so you don't need to pass them
with the ``-n`` option. For more information please refer to the
instance allocator documentation.
The ``-t`` option specifies the disk layout type for the instance.
The available choices are:
diskless
This creates an instance with no disks. It's useful for testing only
(or other special cases).
file
Disk devices will be regular files.
plain
Disk devices will be logical volumes.
drbd
Disk devices will be drbd (version 8.x) on top of lvm volumes.
The optional second value of the ``-n`` (node) option is used for the drbd
template type and specifies the remote node.
If you do not want gnt-instance to wait for the disk mirror to be
synced, use the ``--no-wait-for-sync`` option.
The ``--file-storage-dir`` specifies the relative path under the
cluster-wide file storage directory to store file-based disks. It
is useful for having different subdirectories for different
instances. The full path of the directory where the disk files are
stored will consist of cluster-wide file storage directory +
optional subdirectory + instance name. Example:
/srv/ganeti/file-storage/mysubdir/instance1.example.com. This
option is only relevant for instances using the file storage
backend.
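The path composition described above can be sketched as follows (illustrative only; the base directory shown is the example from the text, and ``file_disk_dir`` is a hypothetical name)::

```python
import os.path

def file_disk_dir(base: str, subdir: str, instance: str) -> str:
    """Compose the directory holding an instance's file-based disks:
    cluster-wide storage dir + optional subdirectory + instance name."""
    return os.path.join(base, subdir, instance)

print(file_disk_dir("/srv/ganeti/file-storage", "mysubdir",
                    "instance1.example.com"))
# /srv/ganeti/file-storage/mysubdir/instance1.example.com
```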
The ``--file-driver`` specifies the driver to use for file-based
disks. Note that currently these drivers work with the xen
hypervisor only. This option is only relevant for instances using
the file storage backend. The available choices are:
loop
Kernel loopback driver. This driver uses loopback devices to access
the filesystem within the file. However, running I/O intensive
applications in your instance using the loop driver might result in
slowdowns. Furthermore, if you use the loopback driver consider
increasing the maximum amount of loopback devices (on most systems
it's 8) using the max\_loop param.
blktap
The blktap driver (for Xen hypervisors). In order to be able to use
the blktap driver you should check if the 'blktapctrl' user space
disk agent is running (usually automatically started via xend).
This user-level disk I/O interface has the advantage of better
performance. Especially if you use a network file system (e.g. NFS)
to store your instances this is the recommended choice.
The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.
Example::

    # gnt-instance add -t file --disk 0:size=30g -B memory=512 -o debian-etch \
      -n node1.example.com --file-storage-dir=mysubdir instance1.example.com

    # gnt-instance add -t plain --disk 0:size=30g -B memory=512 -o debian-etch \
      -n node1.example.com instance1.example.com

    # gnt-instance add -t drbd --disk 0:size=30g -B memory=512 -o debian-etch \
      -n node1.example.com:node2.example.com instance2.example.com
BATCH-CREATE
^^^^^^^^^^^^
**batch-create** {instances\_file.json}
This command (similar to the Ganeti 1.2 **batcher** tool) submits
multiple instance creation jobs based on a definition file. The
instance configurations do not encompass all the possible options
for the **add** command, but only a subset.
The instance file should be a well-formed JSON file, containing a
dictionary with instance name and instance parameters. The accepted
parameters are:
disk\_size
The size of the disks of the instance.
disk\_template
The disk template to use for the instance, the same as in the
**add** command.
backend
A dictionary of backend parameters.
hypervisor
A dictionary with a single key (the hypervisor name), and as value
the hypervisor options. If not passed, the default hypervisor and
hypervisor options will be inherited.
mac, ip, mode, link
Specifications for the one NIC that will be created for the
instance. 'bridge' is also accepted as a backwards-compatible
key.
nics
List of nics that will be created for the instance. Each entry
should be a dict, with mac, ip, mode and link as possible keys.
Please don't provide the "mac, ip, mode, link" parent keys if you
use this method for specifying nics.
primary\_node, secondary\_node
The primary and optionally the secondary node to use for the
instance (in case an iallocator script is not used).
iallocator
Instead of specifying the nodes, an iallocator script can be used
to automatically compute them.
start
whether to start the instance
ip\_check
Skip the check for already-in-use instance; see the description in
the **add** command for details.
name\_check
Skip the name check for instances; see the description in the
**add** command for details.
file\_storage\_dir, file\_driver
Configuration for the file disk type, see the **add** command for
details.
A simple definition for one instance can be (with most of the
parameters taken from the cluster defaults)::

    {
      "instance3": {
        "template": "drbd",
        "os": "debootstrap",
        "disk_size": ["25G"],
        "iallocator": "dumb"
      },
      "instance5": {
        "template": "drbd",
        "os": "debootstrap",
        "disk_size": ["25G"],
        "iallocator": "dumb",
        "hypervisor": "xen-hvm",
        "hvparams": {"acpi": true},
        "backend": {"memory": 512}
      }
    }
The command will display the job id for each submitted instance, as
follows::

    # gnt-instance batch-create instances.json
    instance3: 11224
    instance5: 11225
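A definition file of this shape can be sanity-checked before submission; the sketch below is a hypothetical helper (not part of Ganeti) whose accepted-key list is the illustrative subset taken from the parameter descriptions above::

```python
import json

# Keys from the batch-create parameter list above (illustrative subset;
# "template" is included because the example definition uses it).
KNOWN_KEYS = {"template", "os", "disk_size", "disk_template", "backend",
              "hypervisor", "hvparams", "mac", "ip", "mode", "link", "nics",
              "primary_node", "secondary_node", "iallocator", "start",
              "ip_check", "name_check", "file_storage_dir", "file_driver"}

def check_definitions(text: str) -> list:
    """Return a warning per unknown key in a batch-create JSON document."""
    warnings = []
    for name, params in json.loads(text).items():
        for key in params:
            if key not in KNOWN_KEYS:
                warnings.append(f"{name}: unknown key {key!r}")
    return warnings

doc = '{"instance3": {"template": "drbd", "disk_size": ["25G"]}}'
print(check_definitions(doc))  # []
```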
REMOVE
^^^^^^
**remove** [--ignore-failures] [--shutdown-timeout=*N*] [--submit]
{*instance*}
Remove an instance. This will remove all data from the instance and
there is *no way back*. If you are not sure whether you will use an
instance again, use **shutdown** first and leave it in the shutdown
state for a while.
The ``--ignore-failures`` option will cause the removal to proceed
even in the presence of errors during the removal of the instance
(e.g. during the shutdown or the disk removal). If this option is
not given, the command will stop at the first error.
The ``--shutdown-timeout`` is used to specify how much time to wait
before forcing the shutdown (e.g. ``xm destroy`` in Xen, killing the
kvm process for KVM, etc.). By default two minutes are given to each
instance to stop.
The ``--submit`` option is used to send the job to the master
daemon but not wait for its completion. The job ID will be shown so
that it can be examined via **gnt-job info**.
Example::

    # gnt-instance remove instance1.example.com
LIST
^^^^
| **list**
| [--no-headers] [--separator=*SEPARATOR*] [--units=*UNITS*]
| [-o *[+]FIELD,...*] [--roman] [instance...]
Shows the currently configured instances with memory usage, disk
usage, the node they are running on, and their run status.
The ``--no-headers`` option will skip the initial header line. The
``--separator`` option takes an argument which denotes what will be
used between the output fields. Both these options are to help
scripting.
The units used to display the numeric values in the output vary,
depending on the options given. By default, the values will be
formatted in the most appropriate unit. If the ``--separator``
option is given, then the values are shown in mebibytes to allow
parsing by scripts. In both cases, the ``--units`` option can be
used to enforce a given output unit.
The ``--roman`` option allows latin people to better understand the
cluster instances' status.
The ``-o`` option takes a comma-separated list of output fields.
The available fields and their meaning are:
name
the instance name
os
the OS of the instance
pnode
the primary node of the instance
snodes
comma-separated list of secondary nodes for the instance; usually
this will be just one node
admin\_state
the desired state of the instance (either "yes" or "no" denoting
the instance should run or not)
disk\_template
the disk template of the instance
oper\_state
the actual state of the instance; can be one of the values
"running", "stopped", "(node down)"
status