Commit bd028152 authored by Iustin Pop

Documentation: cleanup of local/remote_raid1

Since we have removed support for local and remote raid1, update the man
pages and guides to reflect the new situation.

Reviewed-by: imsnah
parent 447b2066
@@ -12,7 +12,7 @@ Software Requirements
 Before installing, please verify that you have the following programs:
 - Xen virtualization (version 3.0.x or 3.1)
   http://xen.xensource.com/
-- DRBD (kernel module and userspace utils), version 0.7.x or 8.0.7+
+- DRBD (kernel module and userspace utils), version 8.0.7+
   http://www.drbd.org/
 - LVM2
   http://sourceware.org/lvm2/
@@ -26,8 +26,6 @@ Before installing, please verify that you have the following programs:
   http://developer.osdl.org/dev/iproute2
 - arping (part of iputils package)
   ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz
-- mdadm (Linux Software Raid tools) (needed only with drbd 0.7.x)
-  http://www.kernel.org/pub/linux/utils/raid/mdadm/
 - Python 2.4
   http://www.python.org
 - Python Twisted library (the core is enough)
......
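Since the requirement above now reads DRBD 8.0.7 or newer, a quick way to confirm what a node is actually running is to look at the kernel module's proc interface. This is only a sanity-check sketch; the exact version and api/proto numbers shown are illustrative:

  # print the version of the loaded DRBD kernel module (must be 8.0.7 or newer)
  cat /proc/drbd
  # the first line should look something like: version: 8.0.7 (api:86/proto:86)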
@@ -5,7 +5,7 @@ Ganeti was developed to run on internal, trusted systems. As such, the
 security model is all-or-nothing.
 All the Ganeti code runs as root, because all the operations that Ganeti
-is doing require privileges: creating logical volumes, md arrays,
+is doing require privileges: creating logical volumes, drbd devices,
 starting instances, etc. Running as root does not mean setuid, but that
 you need to be root to run the cluster commands.
@@ -39,7 +39,7 @@ determined by the weakest node.
 Note that only the ssh key will allow other machines to run random
 commands on this node; the RPC method will run only:
 - well defined commands to create, remove, activate logical volumes,
-  DRBD disks, md arrays, start/stop instances, etc;
+  drbd devices, start/stop instances, etc;
 - run ssh commands on other nodes in the cluster, again well-defined
 - scripts under the /etc/ganeti/hooks directory
......
@@ -104,8 +104,6 @@
 <arg choice="req">-t<group>
 <arg>diskless</arg>
 <arg>plain</arg>
-<arg>local_raid1</arg>
-<arg>remote_raid1</arg>
 <arg>drbd</arg>
 </group></arg>
 <sbr>
@@ -172,36 +170,12 @@
 <para>Disk devices will be logical volumes.</para>
 </listitem>
 </varlistentry>
-<varlistentry>
-<term>local_raid1</term>
-<listitem>
-<para>
-Disk devices will be md raid1 arrays over two local
-logical volumes.
-</para>
-</listitem>
-</varlistentry>
-<varlistentry>
-<term>remote_raid1</term>
-<listitem>
-<para>
-Disk devices will be md raid1 arrays with one
-component (so it's not actually raid1): a drbd (0.7.x)
-device between the instance's primary node and the
-node given by the second value of the
-<option>--node</option> option.
-</para>
-</listitem>
-</varlistentry>
 <varlistentry>
 <term>drbd</term>
 <listitem>
 <para>
 Disk devices will be drbd (version 8.x) on top of lvm
-volumes. They are equivalent in functionality to
-<replaceable>remote_raid1</replaceable>, but are
-recommended for new instances (if you have drbd 8.x
-installed).
+volumes.
 </para>
 </listitem>
 </varlistentry>
@@ -209,8 +183,8 @@
 </para>
 <para>
-The optional second value of the <option>--node</option> is used for
-the remote raid template type and specifies the remote node.
+The optional second value of the <option>--node</option> is
+used for the drbd disk template and specifies the remote node.
 </para>
 <para>
......
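For reference, the drbd disk template documented in the hunks above is selected with -t, and the primary and secondary nodes are given as the two values of the node option. The invocation below is only a sketch of the 1.2-era syntax; the OS name, disk size and instance name are placeholders and the -o/-s spellings should be checked against the installed man page:

  # create an instance whose disks are drbd8 devices mirrored between
  # node1 (primary) and node2 (secondary); -o and -s values are illustrative
  gnt-instance add -t drbd -n node1.example.com:node2.example.com \
    -o debian-etch -s 10g instance1.example.com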
@@ -270,7 +270,7 @@
 <para>
 The optional second value of the <option>--node</option> is used for
-the remote raid template type and specifies the remote node.
+the drbd template type and specifies the remote node.
 </para>
 <para>
@@ -327,7 +327,7 @@
 interface has the advantage of better performance. Especially
 if you use a network file system (e.g. NFS) to store your instances
 this is the recommended choice.
 </para>
 <para>
 Example:
@@ -569,7 +569,7 @@
 Show detailed information about the (given) instances. This
 is different from <command>list</command> as it shows
 detailed data about the instance's disks (especially useful
-for remote raid templates).
+for drbd disk template).
 </para>
 </refsect3>
@@ -905,12 +905,6 @@
 <refsect3>
 <title>REPLACE-DISKS</title>
-<cmdsynopsis>
-<command>replace-disks</command>
-<arg choice="opt">--new-secondary <replaceable>NODE</replaceable></arg>
-<arg choice="req"><replaceable>instance</replaceable></arg>
-</cmdsynopsis>
 <cmdsynopsis>
 <command>replace-disks</command>
 <arg choice="opt">-s</arg>
@@ -929,24 +923,13 @@
 <para>
 This command is a generalized form for adding and replacing
-disks.
-</para>
-<para>
-The first form is usable with the
-<literal>remote_raid1</literal> disk template. This will
-replace the disks on both the primary and secondary node,
-and optionally will change the secondary node to a new one
-if you pass the <option>--new-secondary</option> option.
+disks. It is currently only valid for the mirrored (DRBD)
+disk template.
 </para>
 <para>
-The second and third forms are usable with the
-<literal>drbd</literal> disk template. The second form will
-do a secondary replacement, but as opposed to the
-<literal>remote_raid1</literal> will not replace the disks
-on the primary, therefore it will execute faster. The third
-form will replace the disks on either the primary
+The first form will do a secondary node change, while the
+second form will replace the disks on either the primary
 (<option>-p</option>) or the secondary (<option>-s</option>)
 node of the instance only, without changing the node.
 </para>
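The -p/-s form referred to above maps onto invocations like the following. This is only a sketch; the secondary-node-change form is not spelled out because its synopsis falls outside the hunk shown here, and the instance name is a placeholder:

  # rebuild the DRBD mirror's disks on the secondary node, keeping the same node
  gnt-instance replace-disks -s instance1.example.com
  # same operation, but replacing the disks on the primary node
  gnt-instance replace-disks -p instance1.example.com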
@@ -965,16 +948,16 @@
 successful, the command will show the location and name of
 the block devices:
 <screen>
-node1.example.com:sda:/dev/md0
-node1.example.com:sdb:/dev/md1
+node1.example.com:sda:/dev/drbd0
+node1.example.com:sdb:/dev/drbd1
 </screen>
 In this example, <emphasis>node1.example.com</emphasis> is
 the name of the node on which the devices have been
 activated. The <emphasis>sda</emphasis> and
 <emphasis>sdb</emphasis> are the names of the block devices
-inside the instance. <emphasis>/dev/md0</emphasis> and
-<emphasis>/dev/md1</emphasis> are the names of the block
+inside the instance. <emphasis>/dev/drbd0</emphasis> and
+<emphasis>/dev/drbd1</emphasis> are the names of the block
 devices as visible on the node.
 </para>
@@ -993,11 +976,11 @@ node1.example.com:sdb:/dev/md1
 </cmdsynopsis>
 <para>
 De-activates the block devices of the given instance. Note
-that if you run this command for a remote raid instance
-type, while it is running, it will not be able to shutdown
-the block devices on the primary node, but it will shutdown
-the block devices on the secondary nodes, thus breaking the
-replication.
+that if you run this command for an instance with a drbd
+disk template, while it is running, it will not be able to
+shutdown the block devices on the primary node, but it will
+shutdown the block devices on the secondary nodes, thus
+breaking the replication.
 </para>
 </refsect3>
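The sample output quoted in the previous hunk is what disk activation prints; a minimal usage sketch of the two commands these sections document, with the instance name as a placeholder:

  # make the instance's DRBD devices available on its primary node
  gnt-instance activate-disks instance1.example.com
  # release the devices again; avoid doing this while the instance is running
  gnt-instance deactivate-disks instance1.example.com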
@@ -1019,8 +1002,8 @@ node1.example.com:sdb:/dev/md1
 <para>
 Failover will fail the instance over its secondary
-node. This works only for instances having a remote raid
-disk layout.
+node. This works only for instances having a drbd disk
+template.
 </para>
 <para>
......
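The failover behaviour described in the last gnt-instance hunk is driven by a single command; a minimal sketch, assuming the instance uses the drbd disk template and that the instance name is a placeholder:

  # stop the instance on its primary node and start it on its secondary
  gnt-instance failover instance1.example.com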
@@ -146,7 +146,7 @@
 This command will change the secondary node from the source
 node to the destination node for all instances having the
 source node as secondary. It works only for instances having
-a remote_raid1 or drbd disk layout.
+a drbd disk template.
 </para>
 <para>
@@ -170,7 +170,7 @@
 <para>
 This command will fail over all instances having the given
 node as primary to their secondary nodes. This works only for
-instances having a remote raid disk layout.
+instances having a drbd disk template.
 </para>
 <para>
......
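For completeness, the two gnt-node operations touched above are typically invoked as below. Treat this as a sketch of the 1.2-era syntax: the node names are placeholders and the evacuate argument order should be confirmed against the installed man page.

  # move all secondaries from node2 to node3 (instances must use the drbd template)
  gnt-node evacuate node2.example.com node3.example.com
  # fail over every instance that has node1 as its primary to its secondary node
  gnt-node failover node1.example.com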