gnt-instance(8) Ganeti | Version @GANETI_VERSION@
NAME
gnt-instance - Ganeti instance administration
SYNOPSIS
gnt-instance {command} [arguments...]
DESCRIPTION
The gnt-instance command is used for instance administration in the Ganeti system.
COMMANDS
Creation/removal/querying
ADD
Creates a new instance on the specified host. The instance argument must be in DNS, but depending on the bridge/routing setup, need not be in the same network as the nodes in the cluster.
The --disk
option specifies the parameters for the disks of the
instance. The numbering of disks starts at zero, and at least one disk
needs to be passed. For each disk, either the size or the adoption
source needs to be given. The size is interpreted (when no unit is
given) in mebibytes. You can also use one of the suffixes m, g or
t to specify the exact units used; these suffixes map to
mebibytes, gibibytes and tebibytes. Each disk can also take these
parameters (all optional):
- mode
- The access mode. Either ro (read-only) or the default rw (read-write).
- name
- this option specifies a name for the disk, which can be used as a disk identifier. An instance cannot have two disks with the same name.
- vg
- The LVM volume group. This works only for LVM and DRBD devices.
- metavg
- This option specifies a different VG for the metadata device. This works only for DRBD devices.
When creating ExtStorage disks, arbitrary parameters can also be passed
to the ExtStorage provider. Those parameters are passed as additional
comma-separated options. Therefore, an ExtStorage disk provided by
provider pvdr1
with parameters param1
, param2
would be
passed as --disk 0:size=10G,provider=pvdr1,param1=val1,param2=val2
.
When using the adopt
key in the disk definition, Ganeti will
reuse those volumes (instead of creating new ones) as the
instance's disks. Ganeti will rename these volumes to the standard
format, and (without installing the OS) will use them as-is for the
instance. This allows migrating instances from non-managed mode
(e.g. plain KVM with LVM) to being managed via Ganeti. Please note that
this works only for the `plain' disk template (see below for
template details).
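As a concrete sketch of this adoption workflow (the volume, node, and instance names below are hypothetical examples):

```shell
# Adopt the existing logical volume "old-kvm-disk" as disk 0 of a new
# plain-template instance; Ganeti renames the volume and uses its
# contents as-is, without reinstalling the OS.
gnt-instance add -t plain --disk 0:adopt=old-kvm-disk \
  -o debootstrap+default -n node1.example.com instance1.example.com
```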
Alternatively, a single-disk instance can be created via the -s
option which takes a single argument, the size of the disk. This is
similar to the Ganeti 1.2 version (but will only create one disk).
The minimum disk specification is therefore --disk 0:size=20G
(or
-s 20G
when using the -s
option), and a three-disk instance
can be specified as --disk 0:size=20G --disk 1:size=4G --disk
2:size=100G
.
The minimum information needed to specify an ExtStorage disk is the
size
and the provider
. For example:
--disk 0:size=20G,provider=pvdr1
.
The --no-ip-check
option skips the checks that are done to see if the
instance's IP is not already alive (i.e. reachable from the master
node).
The --no-name-check
option skips the check for the instance name via
the resolver (e.g. in DNS or /etc/hosts, depending on your setup).
Since the name check is used to compute the IP address, if you pass
this option you must also pass the --no-ip-check
option.
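For example, to create an instance whose name is not yet resolvable, both checks can be skipped together (the OS definition and hostnames below are illustrative):

```shell
# --no-name-check requires --no-ip-check, since the IP is computed from the name
gnt-instance add -t plain --disk 0:size=10G -o debootstrap+default \
  --no-name-check --no-ip-check -n node1.example.com instance1.example.com
```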
If you don't want the instance to automatically start after
creation, this is possible via the --no-start
option. This will
leave the instance down until a subsequent gnt-instance start
command.
The NICs of the instances can be specified via the --net
option. By default, one NIC is created for the instance, with a
random MAC, and set up according to the cluster level NIC
parameters. Each NIC can take these parameters (all optional):
- mac
- either a value or 'generate' to generate a new unique MAC
- ip
- specifies the IP address assigned to the instance from the Ganeti side (this is not necessarily what the instance will use, but what the node expects the instance to use)
- mode
- specifies the connection mode for this NIC: routed, bridged or openvswitch.
- link
- in bridged or openvswitch mode specifies the interface to attach this NIC to, in routed mode it's intended to differentiate between different routing tables/instance groups (but the meaning is dependent on the network script, see gnt-cluster(8) for more details). Note that openvswitch support is also hypervisor dependent.
- network
- derives the mode and the link from the settings of the network which is identified by its name. If the network option is chosen, link and mode must not be specified. Note that the mode and link depend on the network-to-nodegroup connection, thus allowing different nodegroups to be connected to the same network in different ways.
- name
- this option specifies a name for the NIC, which can be used as a NIC identifier. An instance cannot have two NICs with the same name.
Of these "mode" and "link" are NIC parameters, and inherit their
default at cluster level. Alternatively, if no network is desired for
the instance, you can prevent the default of one NIC with the
--no-nics
option.
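Putting several NIC parameters together, a creation command might look like this (the addresses, bridge, and network names are illustrative):

```shell
# NIC 0: bridged on br0 with a fixed IP known to Ganeti
# NIC 1: mode and link derived from the named network "backbone"
gnt-instance add -t plain --disk 0:size=10G -o debootstrap+default \
  --net 0:ip=192.0.2.10,mode=bridged,link=br0 \
  --net 1:network=backbone,name=backup-nic \
  -n node1.example.com instance1.example.com
```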
The -o (--os-type)
option specifies the operating system to be
installed. The available operating systems can be listed with
gnt-os list. Passing --no-install
will however skip the OS
installation, allowing a manual import if so desired. Note that the
no-installation mode will automatically disable the start-up of the
instance (without an OS, it most likely won't be able to start-up
successfully).
The -B (--backend-parameters)
option specifies the backend
parameters for the instance. If no such parameters are specified, the
values are inherited from the cluster. Possible parameters are:
- maxmem
- the maximum memory size of the instance; as usual, suffixes can be used to denote the unit, otherwise the value is taken in mebibytes
- minmem
- the minimum memory size of the instance; as usual, suffixes can be used to denote the unit, otherwise the value is taken in mebibytes
- vcpus
- the number of VCPUs to assign to the instance (if this value makes sense for the hypervisor)
- auto_balance
- whether the instance is considered in the N+1 cluster checks (enough redundancy in the cluster to survive a node failure)
- always_failover
- True or False, whether the instance must always be failed over (shut down and rebooted) or whether it may be migrated (briefly suspended)
Note that before 2.6 Ganeti had a memory
parameter, which was the
only value of memory an instance could have. With the
maxmem
/minmem
change Ganeti guarantees that at least the minimum
memory is always available for an instance, but allows more memory to be
used (up to the maximum memory) should it be free.
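For example, the minmem/maxmem split can be set at creation time like this (all sizes and names are illustrative):

```shell
# Guarantee 512M, allow up to 1G when memory is free, and assign 2 VCPUs
gnt-instance add -t plain --disk 0:size=10G -o debootstrap+default \
  -B minmem=512M,maxmem=1G,vcpus=2 \
  -n node1.example.com instance1.example.com
```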
The -H (--hypervisor-parameters)
option specifies the hypervisor
to use for the instance (must be one of the enabled hypervisors on the
cluster) and optionally custom parameters for this instance. If no
other options are used (i.e. the invocation is just -H NAME) the
instance will inherit the cluster options. The defaults below show the
cluster defaults at cluster creation time.
The possible hypervisor options are as follows:
- boot_order
-
Valid for the Xen HVM and KVM hypervisors.
A string value denoting the boot order. This has different meaning for the Xen HVM hypervisor and for the KVM one.
For Xen HVM, the boot order is a string of letters listing the boot devices, with valid device letters being:
- a
- floppy drive
- c
- hard disk
- d
- CDROM drive
- n
- network boot (PXE)
The default is not to set an HVM boot order, which is interpreted as 'dc'.
For KVM the boot order is either "floppy", "cdrom", "disk" or "network". Please note that older versions of KVM couldn't netboot from virtio interfaces. This has been fixed in more recent versions and is confirmed to work at least with qemu-kvm 0.11.1. Also note that if you have set the kernel_path option, that will be used for booting, and this setting will be silently ignored.
- blockdev_prefix
-
Valid for the Xen HVM and PVM hypervisors.
Relevant to non-pvops guest kernels, in which the disk device names are given by the host. Allows one to specify 'xvd', which helps run Red Hat based installers, driven by anaconda.
- floppy_image_path
-
Valid for the KVM hypervisor.
The path to a floppy disk image to attach to the instance. This is useful to install Windows operating systems on Virt/IO disks because you can specify here the floppy for the drivers at installation time.
- cdrom_image_path
-
Valid for the Xen HVM and KVM hypervisors.
The path to a CDROM image to attach to the instance.
- cdrom2_image_path
-
Valid for the KVM hypervisor.
The path to a second CDROM image to attach to the instance. NOTE: This image can't be used to boot the system. To do that you have to use the 'cdrom_image_path' option.
- nic_type
-
Valid for the Xen HVM and KVM hypervisors.
This parameter determines the way the network cards are presented to the instance. The possible options are:
- rtl8139 (default for Xen HVM) (HVM & KVM)
- ne2k_isa (HVM & KVM)
- ne2k_pci (HVM & KVM)
- i82551 (KVM)
- i82557b (KVM)
- i82559er (KVM)
- pcnet (KVM)
- e1000 (KVM)
- paravirtual (default for KVM) (HVM & KVM)
- disk_type
-
Valid for the Xen HVM and KVM hypervisors.
This parameter determines the way the disks are presented to the instance. The possible options are:
- ioemu [default] (HVM & KVM)
- ide (HVM & KVM)
- scsi (KVM)
- sd (KVM)
- mtd (KVM)
- pflash (KVM)
- cdrom_disk_type
-
Valid for the KVM hypervisor.
This parameter determines the way the cdrom disks are presented to the instance. The default behavior is to use the same value as the disk_type parameter. The possible options are:
- paravirtual
- ide
- scsi
- sd
- mtd
- pflash
- vnc_bind_address
-
Valid for the Xen HVM and KVM hypervisors.
Specifies the address that the VNC listener for this instance should bind to. Valid values are IPv4 addresses. Use the address 0.0.0.0 to bind to all available interfaces (this is the default) or specify the address of one of the interfaces on the node to restrict listening to that interface.
- vnc_tls
-
Valid for the KVM hypervisor.
A boolean option that controls whether the VNC connection is secured with TLS.
- vnc_x509_path
-
Valid for the KVM hypervisor.
If vnc_tls is enabled, this option specifies the path to the x509 certificate to use.
- vnc_x509_verify
-
Valid for the KVM hypervisor.
- spice_bind
-
Valid for the KVM hypervisor.
Specifies the address or interface on which the SPICE server will listen. Valid values are:
- IPv4 addresses, including 0.0.0.0 and 127.0.0.1
- IPv6 addresses, including :: and ::1
- names of network interfaces
If a network interface is specified, the SPICE server will be bound to one of the addresses of that interface.
- spice_ip_version
-
Valid for the KVM hypervisor.
Specifies which version of the IP protocol should be used by the SPICE server.
It is mainly intended to be used for specifying what kind of IP addresses should be used if a network interface with both IPv4 and IPv6 addresses is specified via the spice_bind parameter. In this case, if the spice_ip_version parameter is not used, the default IP version of the cluster will be used.
- spice_password_file
-
Valid for the KVM hypervisor.
Specifies a file containing the password that must be used when connecting via the SPICE protocol. If the option is not specified, passwordless connections are allowed.
- spice_image_compression
-
Valid for the KVM hypervisor.
Configures the SPICE lossless image compression. Valid values are:
- auto_glz
- auto_lz
- quic
- glz
- lz
- off
- spice_jpeg_wan_compression
-
Valid for the KVM hypervisor.
Configures how SPICE should use the jpeg algorithm for lossy image compression on slow links. Valid values are:
- auto
- never
- always
- spice_zlib_glz_wan_compression
-
Valid for the KVM hypervisor.
Configures how SPICE should use the zlib-glz algorithm for lossy image compression on slow links. Valid values are:
- auto
- never
- always
- spice_streaming_video
-
Valid for the KVM hypervisor.
Configures how SPICE should detect video streams. Valid values are:
- off
- all
- filter
- spice_playback_compression
-
Valid for the KVM hypervisor.
Configures whether SPICE should compress audio streams or not.
- spice_use_tls
-
Valid for the KVM hypervisor.
Specifies that the SPICE server must use TLS to encrypt all the traffic with the client.
- spice_tls_ciphers
-
Valid for the KVM hypervisor.
Specifies a list of comma-separated ciphers that SPICE should use for TLS connections. For the format, see man cipher(1).
- spice_use_vdagent
-
Valid for the KVM hypervisor.
Enables or disables passing mouse events via SPICE vdagent.
- cpu_type
-
Valid for the KVM hypervisor.
This parameter determines the emulated cpu for the instance. If this parameter is empty (which is the default configuration), it will not be passed to KVM.
Be aware of setting this parameter to "host" if you have nodes with different CPUs from each other. Live migration may stop working in this situation.
For more information please refer to the KVM manual.
- acpi
-
Valid for the Xen HVM and KVM hypervisors.
A boolean option that specifies if the hypervisor should enable ACPI support for this instance. By default, ACPI is disabled.
- pae
-
Valid for the Xen HVM and KVM hypervisors.
A boolean option that specifies if the hypervisor should enable PAE support for this instance. The default is false, disabling PAE support.
- use_localtime
-
Valid for the Xen HVM and KVM hypervisors.
A boolean option that specifies if the instance should be started with its clock set to the local time of the machine (when true) or to UTC (when false). The default is false, which is useful for Linux/Unix machines; for Windows OSes, it is recommended to enable this parameter.
- kernel_path
-
Valid for the Xen PVM and KVM hypervisors.
This option specifies the path (on the node) to the kernel to boot the instance with. Xen PVM instances always require this, while for KVM if this option is empty, it will cause the machine to load the kernel from its disks (and the boot will be done according to boot_order).
- kernel_args
-
Valid for the Xen PVM and KVM hypervisors.
This option specifies extra arguments to the kernel that will be loaded. This is always used for Xen PVM, while for KVM it is only used if the kernel_path option is also specified.
The default setting for this value is simply "ro", which mounts the root disk (initially) read-only. For example, setting this to single will cause the instance to start in single-user mode.
- initrd_path
-
Valid for the Xen PVM and KVM hypervisors.
This option specifies the path (on the node) to the initrd to boot the instance with. Xen PVM instances can always use this, while for KVM this option is only used if the kernel_path option is also specified. You can pass here either an absolute filename (the path to the initrd) if you want to use an initrd, or use the format no_initrd_path for no initrd.
- root_path
-
Valid for the Xen PVM and KVM hypervisors.
This option specifies the name of the root device. This is always needed for Xen PVM, while for KVM it is only used if the kernel_path option is also specified.
Please note that if this setting is an empty string and the hypervisor is Xen, it will not be written to the Xen configuration file.
- serial_console
-
Valid for the KVM hypervisor.
This boolean option specifies whether to emulate a serial console for the instance. Note that some versions of KVM have a bug that will make an instance hang when configured to use the serial console unless a connection is made to it within about 2 seconds of the instance's startup. In such cases it's recommended to disable this option, which is enabled by default.
- serial_speed
-
Valid for the KVM hypervisor.
This integer option specifies the speed of the serial console. Common values are 9600, 19200, 38400, 57600 and 115200: choose the one which works on your system. (The default is 38400 for historical reasons, but newer versions of kvm/qemu work with 115200)
- disk_cache
-
Valid for the KVM hypervisor.
The disk cache mode. It can be either default to not pass any cache option to KVM, or one of the KVM cache modes: none (for direct I/O), writethrough (to use the host cache but report completion to the guest only when the host has committed the changes to disk) or writeback (to use the host cache and report completion as soon as the data is in the host cache). Note that there are special considerations for the cache mode depending on version of KVM used and disk type (always raw file under Ganeti), please refer to the KVM documentation for more details.
- security_model
-
Valid for the KVM hypervisor.
The security model for kvm. Currently one of none, user or pool. Under none, the default, nothing is done and instances are run as the Ganeti daemon user (normally root).
Under user kvm will drop privileges and become the user specified by the security_domain parameter.
Under pool a global cluster pool of users will be used, making sure no two instances share the same user on the same node. (this mode is not implemented yet)
- security_domain
-
Valid for the KVM hypervisor.
Under security model user the username to run the instance under. It must be a valid username existing on the host.
Cannot be set under security model none or pool.
- kvm_flag
-
Valid for the KVM hypervisor.
If enabled the -enable-kvm flag is passed to kvm. If disabled -disable-kvm is passed. If unset no flag is passed, and the default running mode for your kvm binary will be used.
- mem_path
-
Valid for the KVM hypervisor.
This option passes the -mem-path argument to kvm with the path (on the node) to the mount point of the hugetlbfs file system, along with the -mem-prealloc argument too.
- use_chroot
-
Valid for the KVM hypervisor.
This boolean option determines whether to run the KVM instance in a chroot directory.
If it is set to true, an empty directory is created before starting the instance and its path is passed via the -chroot flag to kvm. The directory is removed when the instance is stopped.
It is set to false by default.
- migration_downtime
-
Valid for the KVM hypervisor.
The maximum amount of time (in ms) a KVM instance is allowed to be frozen during a live migration, in order to copy dirty memory pages. Default value is 30ms, but you may need to increase this value for busy instances.
This option is only effective with kvm versions >= 87 and qemu-kvm versions >= 0.11.0.
- cpu_mask
-
Valid for the Xen, KVM and LXC hypervisors.
The processes belonging to the given instance are only scheduled on the specified CPUs.
The format of the mask can be given in three forms. First, the word "all", which signifies the common case where all VCPUs can live on any CPU, based on the hypervisor's decisions.
Second, a comma-separated list of CPU IDs or CPU ID ranges. The ranges are defined by a lower and higher boundary, separated by a dash, and the boundaries are inclusive. In this form, all VCPUs of the instance will be mapped on the selected list of CPUs. Example:
0-2,5, mapping all VCPUs (no matter how many) onto physical CPUs 0, 1, 2 and 5.
The last form is used for explicit control of VCPU-CPU pinnings. In this form, the list of VCPU mappings is given as a colon (:) separated list, whose elements are the possible values for the second or first form above. In this form, the number of elements in the colon-separated list _must_ equal the number of VCPUs of the instance.
Example:
# Map the entire instance to CPUs 0-2
gnt-instance modify -H cpu_mask=0-2 my-inst
# Map vCPU 0 to physical CPU 1 and vCPU 1 to CPU 3 (assuming 2 vCPUs)
gnt-instance modify -H cpu_mask=1:3 my-inst
# Pin vCPU 0 to CPUs 1 or 2, and vCPU 1 to any CPU
gnt-instance modify -H cpu_mask=1-2:all my-inst
# Pin vCPU 0 to any CPU, vCPU 1 to CPUs 1, 3, 4 or 5, and CPU 2 to
# CPU 0 (backslashes for escaping the comma)
gnt-instance modify -H cpu_mask=all:1\\,3-5:0 my-inst
# Pin entire VM to CPU 0
gnt-instance modify -H cpu_mask=0 my-inst
# Turn off CPU pinning (default setting)
gnt-instance modify -H cpu_mask=all my-inst
- cpu_cap
-
Valid for the Xen hypervisor.
Set the maximum amount of cpu usage by the VM. The value is a percentage between 0 and (100 * number of VCPUs). Default cap is 0: unlimited.
- cpu_weight
-
Valid for the Xen hypervisor.
Set the cpu time ratio to be allocated to the VM. Valid values are between 1 and 65535. Default weight is 256.
- usb_mouse
-
Valid for the KVM hypervisor.
This option specifies the usb mouse type to be used. It can be "mouse" or "tablet". When using VNC it's recommended to set it to "tablet".
- keymap
-
Valid for the KVM hypervisor.
This option specifies the keyboard mapping to be used. It is only needed when using the VNC console. For example: "fr" or "en-gb".
- reboot_behavior
-
Valid for Xen PVM, Xen HVM and KVM hypervisors.
Normally if an instance reboots, the hypervisor will restart it. If this option is set to exit, the hypervisor will treat a reboot as a shutdown instead.
It is set to reboot by default.
- cpu_cores
-
Valid for the KVM hypervisor.
Number of emulated CPU cores.
- cpu_threads
-
Valid for the KVM hypervisor.
Number of emulated CPU threads.
- cpu_sockets
-
Valid for the KVM hypervisor.
Number of emulated CPU sockets.
- soundhw
-
Valid for the KVM hypervisor.
Comma separated list of emulated sound cards, or "all" to enable all the available ones.
- usb_devices
-
Valid for the KVM hypervisor.
Comma separated list of usb devices. These can be emulated devices or passthrough ones, and each one gets passed to kvm with its own -usbdevice option. See the qemu(1) manpage for the syntax of the possible components.
- vga
-
Valid for the KVM hypervisor.
Emulated vga mode, passed to the kvm -vga option.
- kvm_extra
-
Valid for the KVM hypervisor.
Any other option to the KVM hypervisor, useful for tweaking anything that Ganeti doesn't support.
- machine_version
-
Valid for the KVM hypervisor.
Use in case an instance must be booted with an exact type of machine version (due to e.g. outdated drivers). In case it's not set, the default version supported by your version of kvm is used.
- kvm_path
-
Valid for the KVM hypervisor.
Path to the userspace KVM (or qemu) program.
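Several of the hypervisor parameters above can be combined in a single invocation; a sketch (all values below are illustrative):

```shell
# KVM instance booting from CDROM, VNC restricted to localhost, and the
# clock set to localtime (recommended for Windows guests)
gnt-instance add -t plain --disk 0:size=20G -o debootstrap+default \
  -H kvm:boot_order=cdrom,vnc_bind_address=127.0.0.1,use_localtime=true \
  -n node1.example.com instance1.example.com
```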
The -O (--os-parameters)
option allows customisation of the OS
parameters. The actual parameter names and values depend on the OS
being used, but the syntax is the same key=value. For example, setting
a hypothetical dhcp
parameter to yes can be achieved by:
gnt-instance add -O dhcp=yes ...
The -I (--iallocator)
option specifies the instance allocator plugin
to use (.
means the default allocator). If you pass in this option
the allocator will select nodes for this instance automatically, so you
don't need to pass them with the -n
option. For more information
please refer to the instance allocator documentation.
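For example, letting the default allocator choose the nodes instead of naming them explicitly (disk size and names are illustrative):

```shell
# "." selects the cluster's default instance allocator; no -n is needed
gnt-instance add -t drbd --disk 0:size=10G -o debootstrap+default \
  -I . instance1.example.com
```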
The -t (--disk-template)
option specifies the disk layout type
for the instance. The available choices are:
- diskless
- This creates an instance with no disks. It's useful for testing only (or other special cases).
- file
- Disk devices will be regular files.
- sharedfile
- Disk devices will be regular files on a shared directory.
- plain
- Disk devices will be logical volumes.
- drbd
- Disk devices will be drbd (version 8.x) on top of lvm volumes.
- rbd
- Disk devices will be rbd volumes residing inside a RADOS cluster.
- blockdev
- Disk devices will be adopted pre-existing block devices.
- ext
- Disk devices will be provided by external shared storage, through the ExtStorage Interface using ExtStorage providers.
The optional second value of the -n (--node) option is used for the drbd
template type and specifies the remote node.
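For the drbd template, primary and secondary nodes are therefore given together (hostnames are illustrative):

```shell
# node1 becomes the primary, node2 holds the DRBD mirror
gnt-instance add -t drbd --disk 0:size=10G -o debootstrap+default \
  -n node1.example.com:node2.example.com instance1.example.com
```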
If you do not want gnt-instance to wait for the disk mirror to be
synced, use the --no-wait-for-sync
option.
The --file-storage-dir
option specifies the relative path under the
cluster-wide file storage directory to store file-based disks. It is
useful for having different subdirectories for different
instances. The full path of the directory where the disk files are
stored will consist of cluster-wide file storage directory + optional
subdirectory + instance name. Example:
@RPL_FILE_STORAGE_DIR@/mysubdir/instance1.example.com
. This
option is only relevant for instances using the file storage backend.
The --file-driver
option specifies the driver to use for file-based
disks. Note that currently these drivers work with the xen hypervisor
only. This option is only relevant for instances using the file
storage backend. The available choices are:
- loop
- Kernel loopback driver. This driver uses loopback devices to access the filesystem within the file. However, running I/O intensive applications in your instance using the loop driver might result in slowdowns. Furthermore, if you use the loopback driver consider increasing the maximum amount of loopback devices (on most systems it's 8) using the max_loop param.
- blktap
- The blktap driver (for Xen hypervisors). In order to be able to use the blktap driver you should check if the 'blktapctrl' user space disk agent is running (usually automatically started via xend). This user-level disk I/O interface has the advantage of better performance. Especially if you use a network file system (e.g. NFS) to store your instances this is the recommended choice.
If --ignore-ipolicy
is given, any instance policy violations occurring
during this operation are ignored.
See ganeti(7) for a description of --submit
and other common
options.