<!doctype refentry PUBLIC "-//OASIS//DTD DocBook V4.1//EN" [
<!-- Fill in your name for FIRSTNAME and SURNAME. -->
<!-- Please adjust the date whenever revising the manpage. -->
<!ENTITY dhdate "<date>June 08, 2010</date>">
<!-- SECTION should be 1-8, maybe w/ subsection other parameters are
allowed: see man(7), man(1). -->
<!ENTITY dhsection "<manvolnum>8</manvolnum>">
<!ENTITY dhucpackage "<refentrytitle>gnt-cluster</refentrytitle>">
<!ENTITY dhpackage "gnt-cluster">
<!ENTITY debian "<productname>Debian</productname>">
<!ENTITY gnu "<acronym>GNU</acronym>">
<!ENTITY gpl "&gnu; <acronym>GPL</acronym>">
<!ENTITY footer SYSTEM "footer.sgml">
]>
<refentry>
<refentryinfo>
<copyright>
<year>2006</year>
<year>2007</year>
<year>2008</year>
<year>2009</year>
<holder>Google Inc.</holder>
</copyright>
&dhdate;
</refentryinfo>
<refmeta>
&dhucpackage;
&dhsection;
<refmiscinfo>Ganeti 2.2</refmiscinfo>
</refmeta>
<refnamediv>
<refname>&dhpackage;</refname>
<refpurpose>Ganeti administration, cluster-wide</refpurpose>
</refnamediv>
<refsynopsisdiv>
<cmdsynopsis>
<command>&dhpackage; </command>
<arg choice="req">command</arg>
<arg>arguments...</arg>
</cmdsynopsis>
</refsynopsisdiv>
<refsect1>
<title>DESCRIPTION</title>
<para>
The <command>&dhpackage;</command> command is used for cluster-wide
administration in the Ganeti system.
</para>
</refsect1>
<refsect1>
<title>COMMANDS</title>
<refsect2>
<title>ADD-TAGS</title>
<cmdsynopsis>
<command>add-tags</command>
<arg choice="opt">--from <replaceable>file</replaceable></arg>
<arg choice="req"
rep="repeat"><replaceable>tag</replaceable></arg>
</cmdsynopsis>
<para>
Add tags to the cluster. If any of the tags contains invalid
characters, the entire operation will abort.
</para>
<para>
If the <option>--from</option> option is given, the list of
tags will be extended with the contents of that file (each
line becomes a tag). In this case, there is no need to pass
tags on the command line (if you do, both sources will be
used). A file name of - will be interpreted as stdin.
</para>
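      <para>
        For example, to add two tags in one invocation (the tag names here
        are purely illustrative):
        <screen>
# gnt-cluster add-tags owner:admin environment:production
        </screen>
      </para>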
</refsect2>
<refsect2>
<title>COMMAND</title>
<cmdsynopsis>
<command>command</command>
<arg>-n <replaceable>node</replaceable></arg>
<arg choice="req"><replaceable>command</replaceable></arg>
</cmdsynopsis>
<para>
        Executes a command on the nodes of the cluster. If the option
        <option>-n</option> is not given, the command will be executed
        on all nodes; otherwise it will be executed only on the
        node(s) specified. Use the option multiple times for running
it on multiple nodes, like:
<screen>
# gnt-cluster command -n node1.example.com -n node2.example.com date
</screen>
</para>
<para>
The command is executed serially on the selected nodes. If the
master node is present in the list, the command will be
executed last on the master. Regarding the other nodes, the
execution order is somewhat alphabetic, so that
node2.example.com will be earlier than node10.example.com but
after node1.example.com.
</para>
<para>
So given the node names node1, node2, node3, node10, node11,
with node3 being the master, the order will be: node1, node2,
node10, node11, node3.
</para>
<para>
The command is constructed by concatenating all other command
line arguments. For example, to list the contents of the
<filename class="directory">/etc</filename> directory on all
nodes, run:
<screen>
# gnt-cluster command ls -l /etc
</screen>
and the command which will be executed will be
<computeroutput>"ls -l /etc"</computeroutput>
</para>
</refsect2>
<refsect2>
<title>COPYFILE</title>
<cmdsynopsis>
<command>copyfile</command>
<arg>--use-replication-network</arg>
<arg>-n <replaceable>node</replaceable></arg>
<arg choice="req"><replaceable>file</replaceable></arg>
</cmdsynopsis>
<para>
Copies a file to all or to some nodes. The argument specifies
the source file (on the current system), the
<option>-n</option> argument specifies the target node, or
nodes if the option is given multiple times. If
<option>-n</option> is not given at all, the file will be
copied to all nodes.
Passing the <option>--use-replication-network</option> option
will cause the copy to be done over the replication network
(only matters if the primary/secondary IPs are different).
Example:
<screen>
# gnt-cluster copyfile -n node1.example.com -n node2.example.com /tmp/test
</screen>
This will copy the file <filename>/tmp/test</filename> from
the current node to the two named nodes.
</para>
</refsect2>
<refsect2>
<title>DESTROY</title>
<cmdsynopsis>
<command>destroy</command>
<arg choice="req">--yes-do-it</arg>
</cmdsynopsis>
<para>
Remove all configuration files related to the cluster, so that
a <command>gnt-cluster init</command> can be done again
afterwards.
</para>
<para>
Since this is a dangerous command, you are required to pass
        the argument <option>--yes-do-it</option>.
</para>
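      <para>
        For example:
        <screen>
# gnt-cluster destroy --yes-do-it
        </screen>
      </para>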
</refsect2>
<refsect2>
<title>GETMASTER</title>
<cmdsynopsis>
<command>getmaster</command>
</cmdsynopsis>
<para>
Displays the current master node.
</para>
</refsect2>
<refsect2>
<title>INFO</title>
<cmdsynopsis>
<command>info</command>
<arg>--roman</arg>
</cmdsynopsis>
<para>
Shows runtime cluster information: cluster name, architecture
(32 or 64 bit), master node, node list and instance list.
</para>
<para>
        Passing the <option>--roman</option> option, <command>gnt-cluster
        info</command> will try to print its integer fields in a
        Latin-friendly way. This allows further diffusion of Ganeti among
        ancient cultures.
</para>
</refsect2>
<refsect2>
<title>INIT</title>
<cmdsynopsis>
<command>init</command>
<sbr>
<arg>-s <replaceable>secondary_ip</replaceable></arg>
<sbr>
<arg>-g <replaceable>vg-name</replaceable></arg>
<sbr>
<arg>--master-netdev <replaceable>interface-name</replaceable></arg>
<sbr>
<arg>-m <replaceable>mac-prefix</replaceable></arg>
<sbr>
<arg>--no-lvm-storage</arg>
<sbr>
<arg>--no-etc-hosts</arg>
<sbr>
<arg>--no-ssh-init</arg>
<sbr>
<arg>--file-storage-dir <replaceable>dir</replaceable></arg>
<sbr>
<arg>--enabled-hypervisors <replaceable>hypervisors</replaceable></arg>
<sbr>
<arg>-t <replaceable>hypervisor name</replaceable></arg>
<sbr>
<arg>--hypervisor-parameters <replaceable>hypervisor</replaceable>:<replaceable>hv-param</replaceable>=<replaceable>value</replaceable><arg rep="repeat" choice="opt">,<replaceable>hv-param</replaceable>=<replaceable>value</replaceable></arg></arg>
<sbr>
<arg>--backend-parameters <replaceable>be-param</replaceable>=<replaceable>value</replaceable><arg rep="repeat" choice="opt">,<replaceable>be-param</replaceable>=<replaceable>value</replaceable></arg></arg>
<sbr>
<arg>--nic-parameters <replaceable>nic-param</replaceable>=<replaceable>value</replaceable><arg rep="repeat" choice="opt">,<replaceable>nic-param</replaceable>=<replaceable>value</replaceable></arg></arg>
<sbr>
<arg>--maintain-node-health <group choice="req"><arg>yes</arg><arg>no</arg></group></arg>
<sbr>
<arg>--uid-pool <replaceable>user-id pool definition</replaceable></arg>
<sbr>
<arg>-I <replaceable>default instance allocator</replaceable></arg>
<sbr>
<arg>--primary-ip-version <replaceable>version</replaceable></arg>
<sbr>
<arg choice="req"><replaceable>clustername</replaceable></arg>
</cmdsynopsis>
<para>
        This command is run only once initially on the first node of
        the cluster. It will initialize the cluster configuration and
        set up ssh-keys and more.
</para>
<para>
Note that the <replaceable>clustername</replaceable> is not
any random name. It has to be resolvable to an IP address
using DNS, and it is best if you give the fully-qualified
domain name. This hostname must resolve to an IP address
reserved exclusively for this purpose.
</para>
<para>
        The cluster can run in two modes: single-homed or
        dual-homed. In the first case, all traffic (public traffic,
        inter-node traffic and data replication traffic) goes
over the same interface. In the dual-homed case, the data
replication traffic goes over the second network. The
<option>-s</option> option here marks the cluster as
dual-homed and its parameter represents this node's address on
the second network. If you initialise the cluster with
<option>-s</option>, all nodes added must have a secondary IP
as well.
</para>
<para>
Note that for Ganeti it doesn't matter if the secondary
network is actually a separate physical network, or is done
using tunneling, etc. For performance reasons, it's
recommended to use a separate network, of course.
</para>
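      <para>
        For example, to initialise a dual-homed cluster (the address and
        cluster name below are only illustrative):
        <screen>
# gnt-cluster init -s 192.0.2.1 cluster.example.com
        </screen>
      </para>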
<para>
        The <option>-g</option> option will let you specify a volume group
        different than "xenvg" for Ganeti to use when creating instance disks.
        This volume group must have the same name on all nodes. Once the
        cluster is initialized this can be altered by using the
        <command>modify</command> command. If you don't want to use lvm
        storage at all, use the <option>--no-lvm-storage</option> option.
</para>
<para>
The <option>--master-netdev</option> option is useful for specifying a
different interface on which the master will activate its IP address.
It's important that all nodes have this interface because you'll need
it for a master failover.
</para>
<para>
The <option>-m</option> option will let you specify a three byte prefix
under which the virtual MAC addresses of your instances will be
generated. The prefix must be specified in the format XX:XX:XX and the
default is aa:00:00.
</para>
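      <para>
        For example, to use a different MAC prefix (the prefix and cluster
        name are illustrative):
        <screen>
# gnt-cluster init -m aa:01:02 cluster.example.com
        </screen>
      </para>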
<para>
The <option>--no-lvm-storage</option> option allows you to initialize
        the cluster without lvm support. This means that only instances using
        files as their storage backend can be created. Once the cluster
is initialized you can change this setup with the
<command>modify</command> command.
</para>
<para>
The <option>--no-etc-hosts</option> option allows you to initialize the
cluster without modifying the <filename>/etc/hosts</filename> file.
</para>
<para>
The <option>--no-ssh-init</option> option allows you to initialize the
cluster without creating or distributing SSH key pairs.
</para>
<para>
        The <option>--file-storage-dir</option> option allows you to
        set the directory to use for storing the instance disk
        files when using file storage as backend for instance disks.
</para>
<para>
The <option>--enabled-hypervisors</option> option allows you
to set the list of hypervisors that will be enabled for
this cluster. Instance hypervisors can only be chosen from
the list of enabled hypervisors, and the first entry of this list
will be used by default. Currently, the following hypervisors are
available:
</para>
<para>
<variablelist>
<varlistentry>
<term>xen-pvm</term>
<listitem>
<para>
Xen PVM hypervisor
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>xen-hvm</term>
<listitem>
<para>
Xen HVM hypervisor
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>kvm</term>
<listitem>
<para>
Linux KVM hypervisor
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>chroot</term>
<listitem>
<para>
a simple chroot manager that starts chroot based on a
script at the root of the filesystem holding the
                  chroot
                </para>
              </listitem>
            </varlistentry>
            <varlistentry>
<term>fake</term>
<listitem>
<para>
fake hypervisor for development/testing
</para>
</listitem>
</varlistentry>
</variablelist>
</para>
<para>
Either a single hypervisor name or a comma-separated list of
hypervisor names can be specified. If this option is not
specified, only the xen-pvm hypervisor is enabled by default.
</para>
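      <para>
        For example, to enable both KVM and Xen PVM on a new cluster, with
        KVM used by default since it is listed first (the cluster name is
        illustrative):
        <screen>
# gnt-cluster init --enabled-hypervisors kvm,xen-pvm cluster.example.com
        </screen>
      </para>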
<para>
The <option>--hypervisor-parameters</option> option allows you
to set default hypervisor specific parameters for the
cluster. The format of this option is the name of the
hypervisor, followed by a colon and a comma-separated list of
        key=value pairs. The keys available for each hypervisor are
        detailed in the <citerefentry>
        <refentrytitle>gnt-instance</refentrytitle>
        <manvolnum>8</manvolnum> </citerefentry> man page, in the
        <command>add</command> command, plus the following parameters
        which are only configurable globally (at cluster level):
<variablelist>
<varlistentry>
<term>migration_port</term>
<listitem>
<simpara>Valid for the Xen PVM and KVM hypervisors.</simpara>
<para>
                This option specifies the TCP port to use for
live-migration. For Xen, the same port should be
configured on all nodes in
the <filename>/etc/xen/xend-config.sxp</filename>
file, under the
key <quote>xend-relocation-port</quote>.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>migration_bandwidth</term>
<listitem>
<simpara>Valid for the KVM hypervisor.</simpara>
<para>
This option specifies the maximum bandwidth that KVM will
use for instance live migrations. The value is in MiB/s.
</para>
<simpara>This option is only effective with kvm versions >= 78
and qemu-kvm versions >= 0.10.0.
</simpara>
</listitem>
</varlistentry>
</variablelist>
</para>
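      <para>
        For example, to enable KVM and set its migration port at cluster
        initialization (the port number and cluster name are just examples):
        <screen>
# gnt-cluster init --enabled-hypervisors kvm \
  --hypervisor-parameters kvm:migration_port=8102 cluster.example.com
        </screen>
      </para>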
<para>
The <option>--backend-parameters</option> option allows you to set
the default backend parameters for the cluster. The parameter
format is a comma-separated list of key=value pairs with the
following supported keys:
</para>
<para>
<variablelist>
<varlistentry>
<term>vcpus</term>
<listitem>
<para>
Number of VCPUs to set for an instance by default, must
              be an integer, will be set to 1 if not specified.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>memory</term>
<listitem>
<para>
Amount of memory to allocate for an instance by default,
can be either an integer or an integer followed by a
unit (M for mebibytes and G for gibibytes are
supported), will be set to 128M if not specified.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>auto_balance</term>
<listitem>
<para>
Value of the auto_balance flag for instances to use by
default, will be set to true if not specified.
</para>
</listitem>
</varlistentry>
</variablelist>
</para>
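      <para>
        For example, to change the default instance sizing (the values and
        cluster name are illustrative):
        <screen>
# gnt-cluster init --backend-parameters vcpus=2,memory=512M,auto_balance=true cluster.example.com
        </screen>
      </para>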
<para>
The <option>--nic-parameters</option> option allows you to set
the default nic parameters for the cluster. The parameter
format is a comma-separated list of key=value pairs with the
following supported keys:
<variablelist>
<varlistentry>
<term>mode</term>
<listitem>
<para>
The default nic mode, 'routed' or 'bridged'.
</para>
</listitem>
</varlistentry>
<varlistentry>
<term>link</term>
<listitem>
<para>
              In bridged mode the default NIC bridge. In routed mode it
              represents a hypervisor-vif-script dependent value to allow
different instance groups. For example under the KVM default
network script it is interpreted as a routing table number or
name.
</para>
</listitem>
</varlistentry>
</variablelist>
</para>
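      <para>
        For example, to use bridged networking over a bridge named
        <literal>br0</literal> (the bridge and cluster names are
        illustrative):
        <screen>
# gnt-cluster init --nic-parameters mode=bridged,link=br0 cluster.example.com
        </screen>
      </para>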
<para>
        The option <option>--maintain-node-health</option> allows you to
        enable/disable automatic maintenance actions on
nodes. Currently these include automatic shutdown of instances
and deactivation of DRBD devices on offline nodes; in the
future it might be extended to automatic removal of unknown
LVM volumes, etc.
</para>
<para>
The <option>--uid-pool</option> option initializes the user-id pool.
The <replaceable>user-id pool definition</replaceable> can contain a
list of user-ids and/or a list of user-id ranges. The parameter format
is a comma-separated list of numeric user-ids or user-id ranges.
The ranges are defined by a lower and higher boundary, separated
by a dash. The boundaries are inclusive.
If the <option>--uid-pool</option> option is not supplied, the
user-id pool is initialized to an empty list. An empty list means that
the user-id pool feature is disabled.
</para>
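      <para>
        For example, a pool consisting of one range plus a single user-id
        (the numbers and cluster name are illustrative) would be specified
        as:
        <screen>
# gnt-cluster init --uid-pool 2000-2099,2200 cluster.example.com
        </screen>
      </para>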
<para>
        The <option>-I</option> (<option>--default-iallocator</option>) option specifies the
default instance allocator. The instance allocator will be used for
operations like instance creation, instance and node migration, etc.
when no manual override is specified. If this option is not specified,
the default instance allocator will be blank, which means that relevant
operations will require the administrator to manually specify either an
instance allocator, or a set of nodes.
The default iallocator can be changed later using the
<command>modify</command> command.
</para>
<para>
The <option>--primary-ip-version</option> option specifies the
IP version used for the primary address. Possible values are 4 and
6 for IPv4 and IPv6, respectively. This option is used when resolving
node names and the cluster name.
</para>
</refsect2>
<refsect2>
<title>LIST-TAGS</title>
<cmdsynopsis>
<command>list-tags</command>
</cmdsynopsis>
<para>List the tags of the cluster.</para>
</refsect2>
<refsect2>
<title>MASTER-FAILOVER</title>
<cmdsynopsis>
<command>master-failover</command>
<arg>--no-voting</arg>
</cmdsynopsis>
<para>
Failover the master role to the current node.
</para>
<para>
The <option>--no-voting</option> option skips the remote node agreement
checks. This is dangerous, but necessary in some cases (for example
failing over the master role in a 2 node cluster with the original
        master down). If the original master then comes up, it won't be able to
        start its master daemon because it won't have enough votes, but neither
        will the new master, if the master daemon ever needs a restart. You
can pass <option>--no-voting</option> to
<command>ganeti-masterd</command> on the new master to solve this
problem, and run <command>gnt-cluster redist-conf</command> to make
sure the cluster is consistent again.
</para>
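      <para>
        A minimal example of a forced failover on a two-node cluster with the
        original master down, to be run on the node taking over the master
        role:
        <screen>
# gnt-cluster master-failover --no-voting
        </screen>
      </para>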
</refsect2>
<refsect2>
<title>MASTER-PING</title>
<cmdsynopsis>
<command>master-ping</command>
</cmdsynopsis>
<para>
Checks if the master daemon is alive.
</para>
<para>
If the master daemon is alive and can respond to a basic query
(the equivalent of <command>gnt-cluster info</command>), then
the exit code of the command will be 0. If the master daemon
is not alive (either due to a crash or because this is not the
master node), the exit code will be 1.
</para>
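      <para>
        Since the result is reported via the exit code, the command is
        typically used in shell tests, for example:
        <screen>
# gnt-cluster master-ping &amp;&amp; echo "master daemon is alive"
        </screen>
      </para>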
</refsect2>
<refsect2>
<title>MODIFY</title>
<cmdsynopsis>
<command>modify</command>
<sbr>
<arg choice="opt">-g <replaceable>vg-name</replaceable></arg>
<sbr>
<arg choice="opt">--no-lvm-storage</arg>
<sbr>
<arg choice="opt">--enabled-hypervisors
<replaceable>hypervisors</replaceable></arg>
<sbr>
<arg choice="opt">--hypervisor-parameters <replaceable>hypervisor</replaceable>:<replaceable>hv-param</replaceable>=<replaceable>value</replaceable><arg rep="repeat" choice="opt">,<replaceable>hv-param</replaceable>=<replaceable>value</replaceable></arg></arg>
<sbr>
<arg choice="opt">--backend-parameters <replaceable>be-param</replaceable>=<replaceable>value</replaceable><arg rep="repeat" choice="opt">,<replaceable>be-param</replaceable>=<replaceable>value</replaceable></arg></arg>
<sbr>
<arg choice="opt">--nic-parameters <replaceable>nic-param</replaceable>=<replaceable>value</replaceable><arg rep="repeat" choice="opt">,<replaceable>nic-param</replaceable>=<replaceable>value</replaceable></arg></arg>
<sbr>
<arg choice="opt">--uid-pool <replaceable>user-id pool definition</replaceable></arg>
<sbr>
<arg choice="opt">--add-uids <replaceable>user-id pool definition</replaceable></arg>
<sbr>
<arg choice="opt">--remove-uids <replaceable>user-id pool definition</replaceable></arg>
<sbr>
<arg choice="opt">-C <replaceable>candidate_pool_size</replaceable></arg>
<sbr>
<arg>--maintain-node-health <group choice="req"><arg>yes</arg><arg>no</arg></group></arg>
<sbr>
<arg choice="opt">-I <replaceable>default instance allocator</replaceable></arg>
<sbr>
<arg>--reserved-lvs=<replaceable>NAMES</replaceable></arg>
</cmdsynopsis>
<para>
Modify the options for the cluster.
</para>
<para>
        The <option>-g</option>, <option>--no-lvm-storage</option>,
<option>--enabled-hypervisors</option>,
<option>--hypervisor-parameters</option>,
<option>--backend-parameters</option>,
<option>--nic-parameters</option>,
<option>--maintain-node-health</option>,
<option>--uid-pool</option> and
<option>-I</option> options are
described in the <command>init</command> command.
</para>
<para>
The <option>-C</option> option specifies the
<varname>candidate_pool_size</varname> cluster parameter. This
is the number of nodes that the master will try to keep as
<literal>master_candidates</literal>. For more details about
this role and other node roles, see the <citerefentry>
<refentrytitle>ganeti</refentrytitle><manvolnum>7</manvolnum>
        </citerefentry> man page. If you increase the size, the master will
automatically promote as many nodes as required and possible
to reach the intended number.
</para>
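      <para>
        For example, to raise the candidate pool size to five nodes:
        <screen>
# gnt-cluster modify -C 5
        </screen>
      </para>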
<para>
The <option>--add-uids</option> and <option>--remove-uids</option>
options can be used to modify the user-id pool by adding/removing
a list of user-ids or user-id ranges.
</para>
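      <para>
        For example (with illustrative user-id values):
        <screen>
# gnt-cluster modify --add-uids 2100-2199
# gnt-cluster modify --remove-uids 2000
        </screen>
      </para>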
<para>
The option <option>--reserved-lvs</option> specifies a list
        (comma-separated) of logical volume names (regular
expressions) that will be ignored by the cluster verify
operation. This is useful if the volume group used for Ganeti
is shared with the system for other uses. Note that it's not
recommended to create and mark as ignored logical volume names
which match Ganeti's own name format (starting with UUID and
then <literal>.diskN</literal>), as this option only skips the
verification, but not the actual use of the names given.
</para>
<para>
To remove all reserved logical volumes, pass in an empty
argument to the option, as in <option>--reserved-lvs=</option>
or <option>--reserved-lvs ''</option>.
</para>
</refsect2>
<refsect2>
<title>QUEUE</title>
<cmdsynopsis>
<command>queue</command>
<arg choice="opt">drain</arg>
<arg choice="opt">undrain</arg>
<arg choice="opt">info</arg>
</cmdsynopsis>
<para>
Change job queue properties.
</para>
<para>
The <option>drain</option> option sets the drain flag on the
job queue. No new jobs will be accepted, but jobs already in
the queue will be processed.
</para>
<para>
        The <option>undrain</option> option will unset the drain flag on the
job queue. New jobs will be accepted.
</para>
<para>
The <option>info</option> option shows the properties of the
job queue.
</para>
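      <para>
        For example, to stop accepting new jobs before maintenance, inspect
        the queue state and re-enable it afterwards:
        <screen>
# gnt-cluster queue drain
# gnt-cluster queue info
# gnt-cluster queue undrain
        </screen>
      </para>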
</refsect2>
<refsect2>
<title>WATCHER</title>
<cmdsynopsis>
<command>watcher</command>
<group choice="req">
<arg>pause <replaceable>duration</replaceable></arg>
<arg>continue</arg>
<arg>info</arg>
</group>
</cmdsynopsis>
<para>
Make the watcher pause or let it continue.
</para>
<para>
The <option>pause</option> option causes the watcher to pause for
<replaceable>duration</replaceable> seconds.
</para>
<para>
The <option>continue</option> option will let the watcher continue.
</para>
<para>
The <option>info</option> option shows whether the watcher is currently
paused.
</para>
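      <para>
        For example, to pause the watcher for one hour during maintenance,
        check its state, and resume it:
        <screen>
# gnt-cluster watcher pause 3600
# gnt-cluster watcher info
# gnt-cluster watcher continue
        </screen>
      </para>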
</refsect2>
<refsect2>
      <title>REDIST-CONF</title>
<cmdsynopsis>
<command>redist-conf</command>
<arg>--submit</arg>
</cmdsynopsis>
<para>
This command forces a full push of configuration files from
the master node to the other nodes in the cluster. This is
normally not needed, but can be run if the
<command>verify</command> complains about configuration
mismatches.
</para>
<para>
The <option>--submit</option> option is used to send the job
to the master daemon but not wait for its completion. The job
ID will be shown so that it can be examined via
<command>gnt-job info</command>.
</para>
</refsect2>
<refsect2>
<title>REMOVE-TAGS</title>
<cmdsynopsis>
<command>remove-tags</command>
<arg choice="opt">--from <replaceable>file</replaceable></arg>
<arg choice="req"
rep="repeat"><replaceable>tag</replaceable></arg>
</cmdsynopsis>
<para>
        Remove tags from the cluster. If any of the tags does not
        exist on the cluster, the entire operation will abort.
</para>
<para>
If the <option>--from</option> option is given, the list of
tags will be extended with the contents of that file (each
        line becomes a tag). In this case, there is no need to pass
tags on the command line (if you do, both sources will be
used). A file name of - will be interpreted as stdin.
</para>
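      <para>
        For example, to remove one tag given on the command line and a set
        of tags read from a file (the tag name and file name are
        illustrative):
        <screen>
# gnt-cluster remove-tags owner:admin
# gnt-cluster remove-tags --from /tmp/tags-to-remove
        </screen>
      </para>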
</refsect2>
<refsect2>
<title>RENAME</title>
<cmdsynopsis>
<command>rename</command>
<arg>-f</arg>
<arg choice="req"><replaceable>name</replaceable></arg>
</cmdsynopsis>
<para>
Renames the cluster and in the process updates the master IP
address to the one the new name resolves to. At least one of
either the name or the IP address must be different, otherwise
the operation will be aborted.
</para>
<para>
Note that since this command can be dangerous (especially when
run over SSH), the command will require confirmation unless
run with the <option>-f</option> option.
</para>
</refsect2>
<refsect2>
<title>RENEW-CRYPTO</title>
<cmdsynopsis>
<command>renew-crypto</command>
<arg>-f</arg>
<sbr>
<arg choice="opt">--new-cluster-certificate</arg>
<arg choice="opt">--new-confd-hmac-key</arg>
<sbr>
<arg choice="opt">--new-rapi-certificate</arg>
<arg choice="opt">--rapi-certificate <replaceable>rapi-cert</replaceable></arg>
<sbr>
<arg choice="opt">--new-cluster-domain-secret</arg>
<arg choice="opt">--cluster-domain-secret <replaceable>filename</replaceable></arg>
</cmdsynopsis>
<para>
This command will stop all
Ganeti daemons in the cluster and start them again once the new
certificates and keys are replicated. The options
<option>--new-cluster-certificate</option> and
        <option>--new-confd-hmac-key</option> can be used to regenerate,
        respectively, the cluster-internal SSL certificate and the HMAC key used by
<citerefentry>
<refentrytitle>ganeti-confd</refentrytitle><manvolnum>8</manvolnum>
</citerefentry>.
</para>
<para>
To generate a new self-signed RAPI certificate (used by <citerefentry>
<refentrytitle>ganeti-rapi</refentrytitle><manvolnum>8</manvolnum>
</citerefentry>) specify <option>--new-rapi-certificate</option>. If
you want to use your own certificate, e.g. one signed by a certificate
authority (CA), pass its filename to
<option>--rapi-certificate</option>.
</para>
<para>
<option>--new-cluster-domain-secret</option> generates a new, random
cluster domain secret. <option>--cluster-domain-secret</option> reads
the secret from a file. The cluster domain secret is used to sign
information exchanged between separate clusters via a third party.
</para>
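      <para>
        For example, to replace the RAPI certificate with one signed by your
        own CA (the file name is illustrative):
        <screen>
# gnt-cluster renew-crypto --rapi-certificate /etc/ganeti/my-rapi.pem
        </screen>
      </para>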
</refsect2>
<refsect2>
<title>REPAIR-DISK-SIZES</title>
<cmdsynopsis>
<command>repair-disk-sizes</command>
<arg rep="repeat">instance</arg>
</cmdsynopsis>
<para>
This command checks that the recorded size of the given
instance's disks matches the actual size and updates any
mismatches found. This is needed if the Ganeti configuration
is no longer consistent with reality, as it will impact some
disk operations. If no arguments are given, all instances will
be checked.
</para>
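      <para>
        For example, to check and fix the recorded disk sizes of a single
        instance (the instance name is illustrative):
        <screen>
# gnt-cluster repair-disk-sizes instance1.example.com
        </screen>
      </para>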
<para>
Note that only active disks can be checked by this command; in
case a disk cannot be activated it's advised to use
<command>gnt-instance activate-disks --ignore-size ...</command> to
force activation without regard to the
current size.
</para>
<para>
        When all the disk sizes are consistent, the command will
return no output. Otherwise it will log details about the
inconsistencies in the configuration.
</para>
</refsect2>
<refsect2>
<title>SEARCH-TAGS</title>
<cmdsynopsis>
<command>search-tags</command>
<arg choice="req"><replaceable>pattern</replaceable></arg>
</cmdsynopsis>
<para>
Searches the tags on all objects in the cluster (the cluster
itself, the nodes and the instances) for a given pattern. The
pattern is interpreted as a regular expression and a search
will be done on it (i.e. the given pattern is not anchored to
        the beginning of the string; if you want that, prefix the
pattern with <literal>^</literal>).
</para>
<para>
        If no tags match the pattern, the exit code of the
command will be one. If there is at least one match, the exit
code will be zero. Each match is listed on one line, the
object and the tag separated by a space. The cluster will be
listed as <filename>/cluster</filename>, a node will be listed
as
<filename>/nodes/<replaceable>name</replaceable></filename>,
and an instance as
<filename>/instances/<replaceable>name</replaceable></filename>.
Example:
</para>
<screen>
# gnt-cluster search-tags time
/cluster ctime:2007-09-01
/nodes/node1.example.com mtime:2007-10-04
</screen>
</refsect2>
<refsect2>
<title>VERIFY</title>
<cmdsynopsis>
<command>verify</command>
<arg choice="opt">--no-nplus1-mem</arg>
</cmdsynopsis>
<para>
Verify correctness of cluster configuration. This is safe with
respect to running instances, and incurs no downtime of the
instances.
</para>
<para>
        If the <option>--no-nplus1-mem</option> option is given, Ganeti won't
        check whether, if it loses a node, it can restart all the instances on
        their secondaries (and report an error otherwise).
</para>
</refsect2>
<refsect2>
<title>VERIFY-DISKS</title>
<cmdsynopsis>
<command>verify-disks</command>
</cmdsynopsis>
<para>
The command checks which instances have degraded DRBD disks
and activates the disks of those instances.
</para>
<para>
This command is run from the <command>ganeti-watcher</command>
tool, which also has a different, complementary algorithm for
doing this check. Together, these two should ensure that DRBD
disks are kept consistent.
</para>
</refsect2>
<refsect2>
<title>VERSION</title>
<cmdsynopsis>
<command>version</command>
</cmdsynopsis>
<para>
Show the cluster version.
</para>
</refsect2>
</refsect1>
&footer;
</refentry>
<!-- Keep this comment at the end of the file
Local variables:
mode: sgml
sgml-omittag:t
sgml-shorttag:t
sgml-minimize-attributes:nil
sgml-always-quote-attributes:t
sgml-indent-step:2
sgml-indent-data:t
sgml-parent-document:nil
sgml-default-dtd-file:nil
sgml-exposed-tags:nil
sgml-local-catalogs:nil
sgml-local-ecat-files:nil
End:
-->