Commit ec377077 authored by Iustin Pop

Docbook-related changes on admin.sgml

This changes a lot of docbook-related stuff and addresses a few consistency
issues.

Reviewed-by: vylavera
parent 0bac47cf
@@ -35,50 +35,67 @@
     </para>
     <sect2>
-      <title>Ganeti Terminology</title>
+      <title>Ganeti terminology</title>
       <para>This section provides a small introduction to Ganeti terminology,
       which might be useful to read the rest of the document.
-      <variablelist>
-        <varlistentry>
-          <term>Cluster</term>
-          <listitem><para>A set of machines (nodes) that cooperate to offer a
-          coherent highly available virtualization service.</para></listitem>
-        </varlistentry>
-
-        <varlistentry>
-          <term>Node</term>
-          <listitem><para>A physical machine which is member of a cluster.
-          Nodes are the basic cluster infrastructure, and are not fault
-          tolerant.</para></listitem>
-        </varlistentry>
-
-        <varlistentry>
-          <term>Master Node</term>
-          <listitem><para>The node which controls the Cluster, from which all
-          Ganeti commands must be given.</para></listitem>
-        </varlistentry>
-
-        <varlistentry>
-          <term>Instance</term>
-          <listitem><para>A virtual machine which runs on a cluster. It can be
-          a fault tolerant highly available entity.</para></listitem>
-        </varlistentry>
-
-        <varlistentry>
-          <term>Pool</term>
-          <listitem><para>A pool is a set of clusters sharing the same
-          network.</para></listitem>
-        </varlistentry>
-
-        <varlistentry>
-          <term>Meta-Cluster</term>
-          <listitem><para>Anything that concerns more than one
-          cluster.</para></listitem>
-        </varlistentry>
-
-      </variablelist>
+      <glosslist>
+        <glossentry>
+          <glossterm>Cluster</glossterm>
+          <glossdef>
+            <simpara>
+              A set of machines (nodes) that cooperate to offer a
+              coherent highly available virtualization service.
+            </simpara>
+          </glossdef>
+        </glossentry>
+        <glossentry>
+          <glossterm>Node</glossterm>
+          <glossdef>
+            <simpara>
+              A physical machine which is a member of a cluster.
+              Nodes are the basic cluster infrastructure, and are
+              not fault tolerant.
+            </simpara>
+          </glossdef>
+        </glossentry>
+        <glossentry>
+          <glossterm>Master node</glossterm>
+          <glossdef>
+            <simpara>
+              The node which controls the Cluster, from which all
+              Ganeti commands must be given.
+            </simpara>
+          </glossdef>
+        </glossentry>
+        <glossentry>
+          <glossterm>Instance</glossterm>
+          <glossdef>
+            <simpara>
+              A virtual machine which runs on a cluster. It can be a
+              fault tolerant highly available entity.
+            </simpara>
+          </glossdef>
+        </glossentry>
+        <glossentry>
+          <glossterm>Pool</glossterm>
+          <glossdef>
+            <simpara>
+              A pool is a set of clusters sharing the same network.
+            </simpara>
+          </glossdef>
+        </glossentry>
+        <glossentry>
+          <glossterm>Meta-Cluster</glossterm>
+          <glossdef>
+            <simpara>
+              Anything that concerns more than one cluster.
+            </simpara>
+          </glossdef>
+        </glossentry>
+      </glosslist>
     </para>
   </sect2>
@@ -86,9 +103,11 @@
   <sect2>
     <title>Prerequisites</title>
-    <para>You need to have your Ganeti cluster installed and configured
-    before you try any of the commands in this document. Please follow the
-    "installing tutorial" for instructions on how to do that.
+    <para>
+      You need to have your Ganeti cluster installed and configured
+      before you try any of the commands in this document. Please
+      follow the <emphasis>Ganeti installation tutorial</emphasis>
+      for instructions on how to do that.
     </para>
   </sect2>
@@ -100,39 +119,43 @@
   <sect2>
     <title>Adding/Removing an instance</title>
-    <para>Adding a new virtual instance to your Ganeti cluster is really
-    easy. The command is:
-    <programlisting>
-gnt-instance add -n TARGET_NODE -o OS_TYPE -t DISK_TEMPLATE INSTANCE_NAME
-    </programlisting>
-    The instance name must exist in dns and of course map to an address in
-    the same subnet as the cluster itself. Options you can give to this
-    command include:
+    <para>
+      Adding a new virtual instance to your Ganeti cluster is really
+      easy. The command is:
+
+      <synopsis>gnt-instance add -n <replaceable>TARGET_NODE</replaceable> -o <replaceable>OS_TYPE</replaceable> -t <replaceable>DISK_TEMPLATE</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>
+
+      The instance name must be resolvable (e.g. exist in DNS) and
+      of course map to an address in the same subnet as the cluster
+      itself. Options you can give to this command include:
       <itemizedlist>
         <listitem>
-          <simpara>The disk size (-s)</simpara>
+          <simpara>The disk size (<option>-s</option>)</simpara>
         </listitem>
         <listitem>
-          <simpara>The swap size (--swap-size)</simpara>
+          <simpara>The swap size (<option>--swap-size</option>)</simpara>
        </listitem>
        <listitem>
-          <simpara>The memory size (-m)</simpara>
+          <simpara>The memory size (<option>-m</option>)</simpara>
        </listitem>
        <listitem>
-          <simpara>The number of virtual CPUs (-p)</simpara>
+          <simpara>The number of virtual CPUs (<option>-p</option>)</simpara>
        </listitem>
        <listitem>
-          <simpara>The instance ip address (-i) (use -i auto to make Ganeti
-          record the address from dns)</simpara>
+          <simpara>The instance IP address (<option>-i</option>) (use
+          the value <literal>auto</literal> to make Ganeti record the
+          address from DNS)</simpara>
        </listitem>
        <listitem>
-          <simpara>The bridge to connect the instance to (-b), if you don't
-          want to use the default one</simpara>
+          <simpara>The bridge to connect the instance to
+          (<option>-b</option>), if you don't want to use the default
+          one</simpara>
        </listitem>
      </itemizedlist>
    </para>
-    <para>There are four types of disk template you can choose from:
+    <para>There are four types of disk template you can choose from:</para>
     <variablelist>
       <varlistentry>
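
For illustration, a concrete invocation of the command above, using only the
options just listed, might look like this (the node, OS and instance names are
hypothetical, and the exact size syntax accepted depends on your Ganeti
version):

    gnt-instance add -n node1.example.com -o debian-etch -t plain \
      -s 6g --swap-size 1g -m 512 -i auto instance1.example.com

Here "plain" is assumed to be one of the four disk templates described below,
and "-i auto" makes Ganeti record the instance's address from DNS, as noted
above.
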
@@ -156,60 +179,71 @@ gnt-instance add -n TARGET_NODE -o OS_TYPE -t DISK_TEMPLATE INSTANCE_NAME
       <varlistentry>
         <term>remote_raid1</term>
-        <listitem><para>A mirror is set between the local node and a remote
-        one, which must be specified with the --secondary-node option. Use
-        this option to obtain a highly available instance that can be failed
-        over to a remote node should the primary one fail.
-        </para></listitem>
+        <listitem>
+          <simpara><emphasis role="strong">Note:</emphasis> This is
+          only valid for multi-node clusters.</simpara>
+          <simpara>
+            A mirror is set between the local node and a remote
+            one, which must be specified with the --secondary-node
+            option. Use this option to obtain a highly available
+            instance that can be failed over to a remote node
+            should the primary one fail.
+          </simpara>
+        </listitem>
       </varlistentry>
     </variablelist>
-    For example if you want to create an highly available instance use the
-    remote_raid1 disk template:
-    <programlisting>
-gnt-instance add -n TARGET_NODE -o OS_TYPE -t remote_raid1 \
-  --secondary-node=SECONDARY_NODE INSTANCE_NAME
-    </programlisting>
-    To know which operating systems your cluster supports you can use:
-    <programlisting>
-gnt-os list
-    </programlisting>
+    <para>
+      For example, if you want to create a highly available instance
+      use the remote_raid1 disk template:
+
+      <synopsis>gnt-instance add -n <replaceable>TARGET_NODE</replaceable> -o <replaceable>OS_TYPE</replaceable> -t remote_raid1 \
+  --secondary-node=<replaceable>SECONDARY_NODE</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>
+    </para>
+
+    <para>
+      To know which operating systems your cluster supports you can use:
+
+      <synopsis>gnt-os list</synopsis>
     </para>
     <para>
-      Removing an instance is even easier than creating one. This operation is
-      non-reversible and destroys all the contents of your instance. Use with
-      care:
-      <programlisting>
-gnt-instance remove INSTANCE_NAME
-      </programlisting>
+      Removing an instance is even easier than creating one. This
+      operation is non-reversible and destroys all the contents of
+      your instance. Use with care:
+
+      <synopsis>gnt-instance remove <replaceable>INSTANCE_NAME</replaceable></synopsis>
     </para>
   </sect2>
   <sect2>
     <title>Starting/Stopping an instance</title>
-    <para>Instances are automatically started at instance creation time. To
-    manually start one which is currently stopped you can run:
-    <programlisting>
-gnt-instance startup INSTANCE_NAME
-    </programlisting>
-    While the command to stop one is:
-    <programlisting>
-gnt-instance shutdown INSTANCE_NAME
-    </programlisting>
-    The command to see all the instances configured and their status is:
-    <programlisting>
-gnt-instance list
-    </programlisting>
+    <para>
+      Instances are automatically started at instance creation
+      time. To manually start one which is currently stopped you can
+      run:
+
+      <synopsis>gnt-instance startup <replaceable>INSTANCE_NAME</replaceable></synopsis>
+
+      While the command to stop one is:
+
+      <synopsis>gnt-instance shutdown <replaceable>INSTANCE_NAME</replaceable></synopsis>
+
+      The command to see all the instances configured and their
+      status is:
+
+      <synopsis>gnt-instance list</synopsis>
     </para>
-    <para>Do not use the xen commands to stop instances. If you run for
-    example xm shutdown or xm destroy on an instance Ganeti will
-    automatically restart it (via the
-    <citerefentry><refentrytitle>ganeti-watcher</refentrytitle>
-    <manvolnum>8</manvolnum></citerefentry>)
+    <para>
+      Do not use the Xen commands to stop instances. If you run, for
+      example, xm shutdown or xm destroy on an instance, Ganeti will
+      automatically restart it (via
+      <citerefentry><refentrytitle>ganeti-watcher</refentrytitle>
+      <manvolnum>8</manvolnum></citerefentry>).
    </para>
  </sect2>
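
As a worked example of the commands above (all names here are hypothetical;
"debian-etch" stands for whatever OS definitions gnt-os list reports on your
cluster):

    gnt-instance add -n node1.example.com -o debian-etch -t remote_raid1 \
      --secondary-node=node2.example.com instance1.example.com
    gnt-instance shutdown instance1.example.com
    gnt-instance startup instance1.example.com
    gnt-instance list
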
@@ -217,27 +251,33 @@
   <sect2>
     <title>Exporting/Importing an instance</title>
-    <para>You can create a snapshot of an instance disk and Ganeti
-    configuration, which then you can backup, or import into another cluster.
-    The way to export an instance is:
-    <programlisting>
-gnt-backup export -n TARGET_NODE INSTANCE_NAME
-    </programlisting>
-    The target node can be any node in the cluster with enough space under
-    /srv/ganeti to hold the instance image. Use the --noshutdown option to
-    snapshot an instance without rebooting it. Any previous snapshot of the
-    same instance existing cluster-wide under /srv/ganeti will be removed by
-    this operation: if you want to keep them move them out of the Ganeti
-    exports directory.
+    <para>
+      You can create a snapshot of an instance disk and Ganeti
+      configuration, which you can then back up, or import into
+      another cluster. The way to export an instance is:
+
+      <synopsis>gnt-backup export -n <replaceable>TARGET_NODE</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>
+
+      The target node can be any node in the cluster with enough
+      space under <filename class="directory">/srv/ganeti</filename>
+      to hold the instance image. Use the
+      <option>--noshutdown</option> option to snapshot an instance
+      without rebooting it. Any previous snapshot of the same
+      instance existing cluster-wide under <filename
+      class="directory">/srv/ganeti</filename> will be removed by
+      this operation: if you want to keep them, move them out of the
+      Ganeti exports directory.
    </para>
-    <para>Importing an instance is as easy as creating a new one. The command
-    is:
-    <programlisting>
-gnt-backup import -n TRGT_NODE -t DISK_TMPL --src-node=NODE --src-dir=DIR INST_NAME
-    </programlisting>
-    Most of the options available for gnt-instance add are supported here
-    too.
+    <para>
+      Importing an instance is similar to creating a new one. The
+      command is:
+
+      <synopsis>gnt-backup import -n <replaceable>TARGET_NODE</replaceable> -t <replaceable>DISK_TEMPLATE</replaceable> --src-node=<replaceable>NODE</replaceable> --src-dir=<replaceable>DIR</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>
+
+      Most of the options available for the command
+      <emphasis>gnt-instance add</emphasis> are supported here too.
    </para>
  </sect2>
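
A hypothetical export/import round trip, following the synopses above (the
node and instance names are invented, and the exact location of the exports
directory under /srv/ganeti is an assumption; check where gnt-backup export
places the snapshot on your target node):

    gnt-backup export -n node2.example.com instance1.example.com
    gnt-backup import -n node1.example.com -t remote_raid1 \
      --src-node=node2.example.com --src-dir=/srv/ganeti/export \
      instance2.example.com
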
@@ -247,59 +287,74 @@ gnt-backup import -n TRGT_NODE -t DISK_TMPL --src-node=NODE --src-dir=DIR INST_NAME
 <sect1>
   <title>High availability features</title>
+  <note>
+    <simpara>This section only applies to multi-node clusters.</simpara>
+  </note>
   <sect2>
     <title>Failing over an instance</title>
-    <para>If an instance is built in highly available mode you can at any
-    time fail it over to its secondary node, even if the primary has somehow
-    failed and it's not up anymore. Doing it is really easy, on the master
-    node you can just run:
-    <programlisting>
-gnt-instance failover INSTANCE_NAME
-    </programlisting>
-    That's it. After the command completes the secondary node is now the
-    primary, and vice versa.
+    <para>
+      If an instance is built in highly available mode, you can at
+      any time fail it over to its secondary node, even if the
+      primary has somehow failed and it's not up anymore. Doing it
+      is really easy; on the master node you can just run:
+
+      <synopsis>gnt-instance failover <replaceable>INSTANCE_NAME</replaceable></synopsis>
+
+      That's it. After the command completes, the secondary node is
+      now the primary, and vice versa.
    </para>
  </sect2>
  <sect2>
    <title>Replacing an instance's disks</title>
-    <para>So what if instead the secondary node for an instance has failed,
-    or you plan to remove a node from your cluster, and you failed over all
-    its instances, but it's still secondary for some? The solution here is to
-    replace the instance disks, changing the secondary node:
-    <programlisting>
-gnt-instance replace-disks -n NEW_SECONDARY INSTANCE_NAME
-    </programlisting>
-    This process is a bit longer, but involves no instance downtime, and at
-    the end of it the instance has changed its secondary node, to which it
-    can if necessary be failed over.
+    <para>
+      So what if instead the secondary node for an instance has
+      failed, or you plan to remove a node from your cluster and
+      you failed over all its instances, but it's still secondary
+      for some? The solution here is to replace the instance disks,
+      changing the secondary node:
+
+      <synopsis>gnt-instance replace-disks -n <replaceable>NEW_SECONDARY</replaceable> <replaceable>INSTANCE_NAME</replaceable></synopsis>
+
+      This process is a bit longer, but involves no instance
+      downtime, and at the end of it the instance has changed its
+      secondary node, to which it can, if necessary, be failed over.
    </para>
  </sect2>
  <sect2>
    <title>Failing over the master node</title>
-    <para>This is all good as long as the Ganeti Master Node is up. Should it
-    go down, or should you wish to decommission it, just run on any other node
-    the command:
-    <programlisting>
-gnt-cluster masterfailover
-    </programlisting>
-    and the node you ran it on is now the new master.
+    <para>
+      This is all good as long as the Ganeti master node is
+      up. Should it go down, or should you wish to decommission it,
+      just run on any other node the command:
+
+      <synopsis>gnt-cluster masterfailover</synopsis>
+
+      and the node you ran it on is now the new master.
    </para>
  </sect2>
  <sect2>
    <title>Adding/Removing nodes</title>
-    <para>And of course, now that you know how to move instances around, it's
-    easy to free up a node, and then you can remove it from the cluster:
-    <programlisting>
-gnt-node remove NODE_NAME
-    </programlisting>
-    and maybe add a new one:
-    <programlisting>
-gnt-node add [--secondary-ip=ADDRESS] NODE_NAME
-    </programlisting>
+    <para>
+      And of course, now that you know how to move instances
+      around, it's easy to free up a node and then remove it from
+      the cluster:
+
+      <synopsis>
+        gnt-node remove <replaceable>NODE_NAME</replaceable>
+      </synopsis>
+
+      and maybe add a new one:
+
+      <synopsis>
+        gnt-node add <optional><option>--secondary-ip=<replaceable>ADDRESS</replaceable></option></optional> <replaceable>NODE_NAME</replaceable>
+      </synopsis>
    </para>
  </sect2>
 </sect1>
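
Putting the high availability commands above together, decommissioning a
hypothetical node2.example.com could look like this (node and instance names
are invented; 192.0.2.13 is a documentation address):

    gnt-instance failover instance1.example.com
    gnt-instance replace-disks -n node3.example.com instance1.example.com
    gnt-node remove node2.example.com
    gnt-node add --secondary-ip=192.0.2.13 node4.example.com
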
@@ -307,46 +362,53 @@ gnt-node add [--secondary-ip=ADDRESS] NODE_NAME
 <sect1>
   <title>Debugging Features</title>
-  <para>At some point you might need to do some debugging operations on your
-  cluster or on your instances. This section will help you with the most used
-  debugging functionalities.
+  <para>
+    At some point you might need to do some debugging operations on
+    your cluster or on your instances. This section will help you
+    with the most commonly used debugging features.
  </para>
  <sect2>
    <title>Accessing an instance's disks</title>
-    <para>From an instance's primary node you have access to its disks. Never
-    ever mount the underlying logical volume manually on a fault tolerant
-    instance, though or you risk breaking replication. The correct way to
-    access them is to run the command:
-    <programlisting>
-gnt-instance activate-disks INSTANCE_NAME
-    </programlisting>
-    And then access the device that gets created. Of course after you've
-    finished you can deactivate them with the deactivate-disks command, which
-    works in the same way.
+    <para>
+      From an instance's primary node you have access to its
+      disks. Never ever mount the underlying logical volume manually
+      on a fault tolerant instance, or you risk breaking
+      replication. The correct way to access them is to run the
+      command:
+
+      <synopsis>gnt-instance activate-disks <replaceable>INSTANCE_NAME</replaceable></synopsis>
+
+      And then access the device that gets created. After you've
+      finished, you can deactivate them with the deactivate-disks
+      command, which works in the same way.
    </para>
  </sect2>
  <sect2>
    <title>Accessing an instance's console</title>
-    <para>The command to access a running instance's console is:
-    <programlisting>
-gnt-instance console INSTANCE_NAME
-    </programlisting>
-    Use the console normally and then type ^] when done, to exit.
+    <para>
+      The command to access a running instance's console is:
+
+      <synopsis>gnt-instance console <replaceable>INSTANCE_NAME</replaceable></synopsis>
+
+      Use the console normally and then type
+      <userinput>^]</userinput> when done, to exit.
    </para>
  </sect2>
  <sect2>
    <title>Instance Operating System Debugging</title>
-    <para>Should you have any problems with operating systems support the
-    command to ran to see a complete status for all your nodes is:
-    <programlisting>
-gnt-os diagnose
-    </programlisting>
+    <para>
+      Should you have any problems with operating system support,
+      the command to run to see a complete status for all your
+      nodes is:
+
+      <synopsis>gnt-os diagnose</synopsis>
    </para>
  </sect2>
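
For example, to inspect a fault tolerant instance's disks safely and then open
its console (the instance name is hypothetical; remember not to mount the
volume manually):

    gnt-instance activate-disks instance1.example.com
    # ... examine the device that gets created ...
    gnt-instance deactivate-disks instance1.example.com
    gnt-instance console instance1.example.com
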
@@ -354,16 +416,22 @@ gnt-os diagnose
   <sect2>
     <title>Cluster-wide debugging</title>
-    <para>The gnt-cluster command offers several options to run tests or
-    execute cluster-wide operations. For example:
-    <programlisting>
+    <para>
+      The gnt-cluster command offers several options to run tests or
+      execute cluster-wide operations. For example:
+<screen>
 gnt-cluster command
 gnt-cluster copyfile
 gnt-cluster verify
 gnt-cluster getmaster
 gnt-cluster version
-    </programlisting>
-    See the respective help to know more about their usage.
+</screen>
+      See the man page <citerefentry>
+      <refentrytitle>gnt-cluster</refentrytitle>
+      <manvolnum>8</manvolnum> </citerefentry> to know more about
+      their usage.
    </para>
  </sect2>
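
A quick cluster health pass might then be (the arguments to the command and
copyfile subcommands are assumptions based on their names; see the
gnt-cluster(8) man page for the actual syntax):

    gnt-cluster verify
    gnt-cluster getmaster
    gnt-cluster command uptime        # assumed: run a shell command on all nodes
    gnt-cluster copyfile /etc/hosts   # assumed: copy a file to all nodes
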
...