diff --git a/Makefile.am b/Makefile.am index e3ea4e3424d0c239a61c2ae71410bf6130fc2436..8a50b9c0239f8ce201de1da94b40beb41e31d591 100644 --- a/Makefile.am +++ b/Makefile.am @@ -109,7 +109,6 @@ http_PYTHON = \ docsgml = \ - doc/install.sgml \ doc/rapi.sgml docrst = \ @@ -117,6 +116,7 @@ docrst = \ doc/design-2.0.rst \ doc/hooks.rst \ doc/iallocator.rst \ + doc/install.rst \ doc/security.rst docdot = \ diff --git a/doc/install.rst b/doc/install.rst new file mode 100644 index 0000000000000000000000000000000000000000..3f676a344c0b68b2555105a80553f1902fd3300b --- /dev/null +++ b/doc/install.rst @@ -0,0 +1,607 @@ +Ganeti installation tutorial +============================ + +Documents Ganeti version 2.0. + +.. contents:: + +Introduction +------------ + +Ganeti is a cluster virtualization management system based on Xen or +KVM. This document explains how to bootstrap a Ganeti node (Xen +*dom0*), create a running cluster and install virtual instances (Xen +*domU*). You need to repeat most of the steps in this document for +every node you want to install, but of course we recommend creating +some semi-automatic procedure if you plan to deploy Ganeti on a +medium/large scale. + +A basic Ganeti terminology glossary is provided in the introductory +section of the *Ganeti administrator's guide*. Please refer to that +document if you are uncertain about the terms we are using. + +Ganeti has been developed for Linux and is distribution-agnostic. +This documentation will use Debian Lenny as an example system but the +examples can easily be translated to any other distribution. You are +expected to be familiar with your distribution, its package management +system, and Xen or KVM before trying to use Ganeti. + +This document is divided into two main sections: + +- Installation of the base system and base components + +- Configuration of the environment for Ganeti + +Each of these is divided into sub-sections. While a full Ganeti system +will need all of the steps specified, some are not strictly required +for every environment. Which ones they are, and why, is specified in +the corresponding sections. + +Installing the base system and base components +---------------------------------------------- + +Hardware requirements ++++++++++++++++++++++ + +Any system supported by your Linux distribution is fine. 64-bit +systems are better as they can support more memory. + +Any disk drive recognized by Linux (``IDE``/``SCSI``/``SATA``/etc.) +is supported in Ganeti. Note that no shared storage (e.g. ``SAN``) is +needed to get high-availability features (but of course, one can be +used to store the images). It is highly recommended to use more than +one disk drive to improve speed. But Ganeti also works with one disk +per machine. + +Installing the base system +++++++++++++++++++++++++++ + +**Mandatory** on all nodes. + +It is advised to start with a clean, minimal install of the operating +system. The only requirement you need to be aware of at this stage is +to partition your disk(s) leaving enough space for a big (**minimum** +20GiB) LVM volume group which will then host your instance +filesystems, if you want to use all Ganeti features. The volume group +name Ganeti 2.0 uses (by default) is ``xenvg``. + +You can also use file-based storage only, without LVM, but this setup +is not detailed in this document. + + +While you can use an existing system, please note that the Ganeti +installation is intrusive in terms of changes to the system +configuration, and it's best to use a newly-installed system without +important data on it.
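+ +As a quick sanity check after the base installation, you can verify +that the space you reserved for the volume group is really unused and +large enough. This is only a sketch: ``/dev/sda3`` is an assumed spare +partition, chosen here merely to match the LVM examples used later in +this document:: + +  # should print nothing if the partition is not mounted +  mount | grep -w /dev/sda3 +  # size of the partition in GiB; it should be at least 20 +  echo $(( $(blockdev --getsize64 /dev/sda3) / 1024 / 1024 / 1024 ))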
+ +Also, for best results, it's advised that the nodes have as much as +possible the same hardware and software configuration. This will make +administration much easier. + +Hostname issues +~~~~~~~~~~~~~~~ + +Note that Ganeti requires the hostnames of the systems (i.e. what the +``hostname`` command outputs) to be fully-qualified names, not short +names. In other words, you should use *node1.example.com* as a hostname +and not just *node1*. + +.. admonition:: Debian + + Debian Lenny and Etch configure the hostname differently than you + need it for Ganeti. For example, this is what Etch puts in + ``/etc/hosts`` in certain situations:: + + 127.0.0.1 localhost + 127.0.1.1 node1.example.com node1 + + but for Ganeti you need to have:: + + 127.0.0.1 localhost + 192.168.1.1 node1.example.com node1 + + replacing ``192.168.1.1`` with your node's address. Also, the file + ``/etc/hostname`` which configures the hostname of the system + should contain ``node1.example.com`` and not just ``node1`` (you + need to run the command ``/etc/init.d/hostname.sh start`` after + changing the file). + +Installing Xen +++++++++++++++ + +**Mandatory** on all nodes. + +While Ganeti is developed with the ability to modularly run on +different virtualization environments in mind, the only two currently +usable on a live system are Xen and KVM. Supported +Xen versions are: 3.0.3, 3.0.4 and 3.1. + +Please follow your distribution's recommended way to install and set +up Xen, or install Xen from the upstream source, if you wish, +following their manual. For KVM, make sure you have a KVM-enabled +kernel and the KVM tools. + +After installing either hypervisor, you need to reboot into your new +system. On some distributions this might involve configuring GRUB +appropriately, whereas others will configure it automatically when you +install the respective kernels. + +.. admonition:: Debian + + Under Lenny or Etch you can install the relevant + ``xen-linux-system`` package, which will pull in both the + hypervisor and the relevant kernel. Also, if you are installing a + 32-bit Lenny/Etch, you should install the ``libc6-xen`` package + (run ``apt-get install libc6-xen``). + +Xen settings +~~~~~~~~~~~~ + +It's recommended that dom0 is restricted to a low amount of memory +(512MiB or 1GiB is reasonable) and that memory ballooning is disabled +in the file ``/etc/xen/xend-config.sxp`` by setting +the value ``dom0-min-mem`` to 0, +like this:: + + (dom0-min-mem 0) + +For optimum performance when running both CPU and I/O intensive +instances, it's also recommended that the dom0 is restricted to one +CPU only, for example by booting with the kernel parameter ``nosmp``. + +It is recommended that you disable Xen's automatic save of virtual +machines at system shutdown and subsequent restore of them at reboot. +To achieve this, make sure the variable ``XENDOMAINS_SAVE`` in the file +``/etc/default/xendomains`` is set to an empty value. + +.. admonition:: Debian + + Besides the ballooning change which you need to set in + ``/etc/xen/xend-config.sxp``, you need to set the memory and nosmp + parameters in the file ``/boot/grub/menu.lst``.
You need to modify + the variable ``xenhopt`` to add ``dom0_mem=1024M`` like this:: + + ## Xen hypervisor options to use with the default Xen boot option + # xenhopt=dom0_mem=1024M + + and the ``xenkopt`` needs to include the ``nosmp`` option like + this:: + + ## Xen Linux kernel options to use with the default Xen boot option + # xenkopt=nosmp + + Any existing parameters can be left in place: it's ok to have + ``xenkopt=console=tty0 nosmp``, for example. After modifying the + files, you need to run:: + + /sbin/update-grub + +If you also want to run HVM instances with Ganeti and want VNC access +to the console of your instances, set the following two entries in +``/etc/xen/xend-config.sxp``:: + + (vnc-listen '0.0.0.0') (vncpasswd '') + +You need to restart the Xen daemon for these settings to take effect:: + + /etc/init.d/xend restart + +Selecting the instance kernel +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +After you have installed Xen, you need to tell Ganeti exactly what +kernel to use for the instances it will create. This is done by +creating a symlink from your actual kernel to +``/boot/vmlinuz-2.6-xenU``, and one from your initrd +to ``/boot/initrd-2.6-xenU``. Note that if you don't +use an initrd for the domU kernel, you don't need +to create the initrd symlink. + +.. admonition:: Debian + + After installation of the ``xen-linux-system`` package, you need to + run (replace the exact version number with the one you have):: + + cd /boot + ln -s vmlinuz-2.6.26-1-xen-amd64 vmlinuz-2.6-xenU + ln -s initrd.img-2.6.26-1-xen-amd64 initrd-2.6-xenU + +Installing DRBD ++++++++++++++++ + +Recommended on all nodes: DRBD_ is required if you want to use the +high availability (HA) features of Ganeti, but optional if you don't +require HA or only run Ganeti on single-node clusters. You can upgrade +a non-HA cluster to an HA one later, but you might need to export and +re-import all your instances to take advantage of the new features. + +.. _DRBD: http://www.drbd.org/ + +Supported DRBD versions: 8.0.x. It's recommended to have at least +version 8.0.12. + +Now the bad news: unless your distribution already provides it, +installing DRBD might involve recompiling your kernel or otherwise +fiddling with it. Hopefully at least the Xen-ified kernel source to +start from will be provided. + +The good news is that you don't need to configure DRBD at all. Ganeti +will do it for you for every instance you set up. If you have the +DRBD utils installed and the module in your kernel, you're fine. Please +check that your system is configured to load the module at every boot, +and that it passes the following option to the module: +``minor_count=255``. This will allow you to use up to 128 instances +per node (for most clusters 128 should be enough, though). + +.. admonition:: Debian + + On Debian, you can just install (build) the DRBD 8.0.x module with + the following commands (make sure you are running the Xen kernel):: + + apt-get install drbd8-source drbd8-utils + m-a update + m-a a-i drbd8 + echo drbd minor_count=128 >> /etc/modules + depmod -a + modprobe drbd minor_count=128 + + It is also recommended that you comment out the default resources + in the ``/etc/drbd.conf`` file, so that the init script doesn't try + to configure any drbd devices. You can do this by prefixing all + *resource* lines in the file with the keyword *skip*, like this:: + + skip resource r0 { + ... + } + + skip resource "r1" { + ...
+ } + +Other required software ++++++++++++++++++++++++ + +Besides Xen and DRBD, you will need to install the following (on all +nodes): + +- LVM version 2, `<http://sourceware.org/lvm2/>`_ + +- OpenSSL, `<http://www.openssl.org/>`_ + +- OpenSSH, `<http://www.openssh.com/portable.html>`_ + +- bridge utilities, `<http://bridge.sourceforge.net/>`_ + +- iproute2, `<http://developer.osdl.org/dev/iproute2>`_ + +- arping (part of iputils package), + `<ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz>`_ + +- Python version 2.4 or 2.5, `<http://www.python.org>`_ + +- Python OpenSSL bindings, `<http://pyopenssl.sourceforge.net/>`_ + + +- simplejson Python module, `<http://www.undefined.org/python/#simplejson>`_ + +- pyparsing Python module, `<http://pyparsing.wikispaces.com/>`_ + +These programs are supplied as part of most Linux distributions, so +usually they can be installed via apt or similar methods. Also, many of +them will already be installed on a standard machine. + + +.. admonition:: Debian + + You can use this command line to install all needed packages:: + + # apt-get install lvm2 ssh bridge-utils iproute iputils-arping \ + python python-pyopenssl openssl python-pyparsing python-simplejson + +Setting up the environment for Ganeti +------------------------------------- + +Configuring the network ++++++++++++++++++++++++ + +**Mandatory** on all nodes. + +Ganeti relies on Xen running in "bridge mode", which means the +instances' network interfaces will be attached to a software bridge +running in dom0. Xen by default creates such a bridge at startup, but +your distribution might have a different way to do things. + +Beware that the default name Ganeti uses is ``xen-br0`` (which was +used in Xen 2.0) while Xen 3.0 uses ``xenbr0`` by default. The default +bridge your Ganeti cluster will use for new instances can be specified +at cluster initialization time. + +.. admonition:: Debian + + The recommended way to configure the Xen bridge is to edit your + ``/etc/network/interfaces`` file and substitute your normal + Ethernet stanza with the following snippet:: + + auto xen-br0 + iface xen-br0 inet static + address YOUR_IP_ADDRESS + netmask YOUR_NETMASK + network YOUR_NETWORK + broadcast YOUR_BROADCAST_ADDRESS + gateway YOUR_GATEWAY + bridge_ports eth0 + bridge_stp off + bridge_fd 0 + +The following commands need to be executed on the local console:: + + ifdown eth0 + ifup xen-br0 + +To check if the bridge is set up, use the ``ip`` and ``brctl show`` +commands:: + + # ip a show xen-br0 + 9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue + link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff + inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0 + inet6 fe80::220:fcff:fe1e:d55d/64 scope link + valid_lft forever preferred_lft forever + + # brctl show xen-br0 + bridge name bridge id STP enabled interfaces + xen-br0 8000.0020fc1ed55d no eth0 + +Configuring LVM ++++++++++++++++ + +**Mandatory** on all nodes. + +The volume group is required to be at least 20GiB. + +If you haven't configured your LVM volume group at install time, you +need to do it before trying to initialize the Ganeti cluster.
This is +done by formatting the devices/partitions you want to use for it and +then adding them to the relevant volume group:: + + pvcreate /dev/sda3 + vgcreate xenvg /dev/sda3 + +or:: + + pvcreate /dev/sdb1 + pvcreate /dev/sdc1 + vgcreate xenvg /dev/sdb1 /dev/sdc1 + +If you want to add a device later, you can do so with the *vgextend* +command:: + + pvcreate /dev/sdd1 + vgextend xenvg /dev/sdd1 + +Optional: it is recommended to configure LVM not to scan the DRBD +devices for physical volumes. This can be accomplished by editing +``/etc/lvm/lvm.conf`` and adding the +``/dev/drbd[0-9]+`` regular expression to the +``filter`` variable, like this:: + + filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ] + +Installing Ganeti ++++++++++++++++++ + +**Mandatory** on all nodes. + +It's now time to install the Ganeti software itself. Download the +source from the project page at `<http://code.google.com/p/ganeti/>`_, +and install it (replace 2.0.0 with the latest version):: + + tar xvzf ganeti-2.0.0.tar.gz + cd ganeti-2.0.0 + ./configure --localstatedir=/var --sysconfdir=/etc + make + make install + mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export + +You also need to copy the file +``doc/examples/ganeti.initd`` from the source archive +to ``/etc/init.d/ganeti`` and register it with your +distribution's startup scripts, for example in Debian:: + + update-rc.d ganeti defaults 20 80 + +In order to automatically restart failed instances, you need to set up +a cron job to run the *ganeti-watcher* command. A sample cron file is +provided in the source at ``doc/examples/ganeti.cron`` and you can +copy that (altering the path if necessary) to ``/etc/cron.d/ganeti``. + +Installing the Operating System support packages +++++++++++++++++++++++++++++++++++++++++++++++++ + +**Mandatory** on all nodes. + +To be able to install instances you need to have an Operating System +installation script. An example OS that works under Debian and can +install Debian and Ubuntu instance OSes is provided on the project web +site. Download it from the project page and follow the instructions +in the ``README`` file. Here is the installation procedure (replace +0.7 with the latest version that is compatible with your Ganeti +version):: + + cd /usr/local/src/ + wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.7.tar.gz + tar xzf ganeti-instance-debootstrap-0.7.tar.gz + cd ganeti-instance-debootstrap-0.7 + ./configure + make + make install + +In order to use this OS definition, you need to have internet access +from your nodes and have the *debootstrap*, *dump* and *restore* +commands installed on all nodes. Also, if the OS is configured to +partition the instance's disk in +``/etc/default/ganeti-instance-debootstrap``, you will need *kpartx* +installed. + +.. admonition:: Debian + + Use this command on all nodes to install the required packages:: + + apt-get install debootstrap dump kpartx + +Alternatively, you can create your own OS definitions. See the manpage +*ganeti-os-interface*. + +Initializing the cluster +++++++++++++++++++++++++ + +**Mandatory** on one node per cluster. + +The last step is to initialize the cluster. After you've repeated the +above process on all of your nodes, choose one as the master, and +execute:: + + gnt-cluster init <CLUSTERNAME> + +The *CLUSTERNAME* is a hostname, which must be resolvable (e.g. it +must exist in DNS or in ``/etc/hosts``) by all the nodes in the +cluster. You must choose a name different from any of the nodes' names +for a multi-node cluster.
In general the best choice is to have a +unique name for a cluster, even if it consists of only one machine, as +you will be able to expand it later without any problems. Please note +that the hostname used for this must resolve to an IP address reserved +**exclusively** for this purpose, and cannot be the name of the first +(master) node. + +If the bridge name you are using is not ``xen-br0``, use the *-b +<BRIDGENAME>* option to specify the bridge name. In this case, you +should also use the *--master-netdev <BRIDGENAME>* option with the +same BRIDGENAME argument. + +You can use a different name than ``xenvg`` for the volume group (but +note that the name must be identical on all nodes). In this case you +need to specify it by passing the *-g <VGNAME>* option to +``gnt-cluster init``. + +To set up the cluster as an HVM cluster, use the +``--enabled-hypervisors=xen-hvm`` option to enable the HVM hypervisor +(you can also add ``,xen-pvm`` to enable the PVM one). You will +also need to create the VNC cluster password file +``/etc/ganeti/vnc-cluster-password`` which contains one line with the +default VNC password for the cluster. + +To set up the cluster for KVM-only usage (KVM and Xen cannot be mixed), +pass ``--enabled-hypervisors=kvm`` to the init command. + +You can also invoke the command with the ``--help`` option in order to +see all the possibilities. + +Joining the nodes to the cluster +++++++++++++++++++++++++++++++++ + +**Mandatory** for all the other nodes. + +After you have initialized your cluster you need to join the other +nodes to it. You can do so by executing the following command on the +master node:: + + gnt-node add <NODENAME> + +Separate replication network +++++++++++++++++++++++++++++ + +**Optional** + +Ganeti uses DRBD to mirror the disks of the virtual instances between +nodes. To use a dedicated network interface for this (in order to +improve performance or to enhance security) you need to configure an +additional interface for each node. Use the *-s* option with +``gnt-cluster init`` and ``gnt-node add`` to specify the IP address of +this secondary interface to use for each node. Note that if you +specified this option at cluster setup time, you must afterwards use +it for every node add operation. + +Testing the setup ++++++++++++++++++ + +Execute the ``gnt-node list`` command to see all nodes in the +cluster:: + + # gnt-node list + Node DTotal DFree MTotal MNode MFree Pinst Sinst + node1.example.com 197404 197404 2047 1896 125 0 0 + +Setting up and managing virtual instances +----------------------------------------- + +Setting up virtual instances +++++++++++++++++++++++++++++ + +This step shows how to set up a virtual instance with either +non-mirrored disks (``plain``) or with network mirrored disks +(``drbd``). All commands need to be executed on the Ganeti master +node (the one on which ``gnt-cluster init`` was run). Verify that the +OS scripts are present on all cluster nodes with ``gnt-os list``. + + +To create a virtual instance, you need a hostname which is resolvable +(DNS or ``/etc/hosts`` on all nodes). The following command will +create a non-mirrored instance for you:: + + gnt-instance add -t plain -s 1G -n node1 -o debootstrap instance1.example.com + * creating instance disks... + adding instance instance1.example.com to cluster config + - INFO: Waiting for instance instance1.example.com to sync disks. + - INFO: Instance instance1.example.com's disks are in sync.
+ creating os for instance instance1.example.com on node node1.example.com + * running the instance OS create scripts... + * starting instance... + +The above instance will have no network interface enabled. You can +access it over the virtual console with ``gnt-instance console +instance1``. There is no password for root. As this is a Debian instance, +you can modify the ``/etc/network/interfaces`` file to set up the +network interface (eth0 is the name of the interface provided to the +instance). + +To create a network mirrored instance, change the argument to the *-t* +option from ``plain`` to ``drbd`` and specify the node on which the +mirror should reside with the second value of the *--node* option, +like this (note that the command output includes timestamps which have +been removed for clarity):: + + # gnt-instance add -t drbd -s 1G -n node1:node2 -o debootstrap instance2 + * creating instance disks... + adding instance instance2.example.com to cluster config + - INFO: Waiting for instance instance2.example.com to sync disks. + - INFO: - device disk/0: 35.50% done, 11 estimated seconds remaining + - INFO: - device disk/0: 100.00% done, 0 estimated seconds remaining + - INFO: Instance instance2.example.com's disks are in sync. + creating os for instance instance2.example.com on node node1.example.com + * running the instance OS create scripts... + * starting instance... + +Managing virtual instances +++++++++++++++++++++++++++ + +All commands need to be executed on the Ganeti master node. + +To access the console of an instance, run:: + + gnt-instance console INSTANCENAME + +To shut down an instance, run:: + + gnt-instance shutdown INSTANCENAME + +To start up an instance, run:: + + gnt-instance startup INSTANCENAME + +To fail over an instance to its secondary node (only possible with +``drbd`` disk templates), run:: + + gnt-instance failover INSTANCENAME + +For more instance and cluster administration details, see the +*Ganeti administrator's guide*. diff --git a/doc/install.sgml b/doc/install.sgml deleted file mode 100644 index d1aaf1bc910da87febf35e85cf0955f433089aa4..0000000000000000000000000000000000000000 --- a/doc/install.sgml +++ /dev/null @@ -1,920 +0,0 @@ -<!DOCTYPE article PUBLIC "-//OASIS//DTD DocBook V4.2//EN" [ -]> - <article class="specification"> - <articleinfo> - <title>Ganeti installation tutorial</title> - </articleinfo> - <para>Documents Ganeti version 2.0</para> - - <sect1> - <title>Introduction</title> - - <para> - Ganeti is a cluster virtualization management system based on - Xen or KVM. This document explains how to bootstrap a Ganeti - node (Xen <literal>dom0</literal>), create a running cluster and - install virtual instance (Xen <literal>domU</literal>). You - need to repeat most of the steps in this document for every node - you want to install, but of course we recommend creating some - semi-automatic procedure if you plan to deploy Ganeti on a - medium/large scale. - </para> - - <para> - A basic Ganeti terminology glossary is provided in the - introductory section of the <emphasis>Ganeti administrator's - guide</emphasis>. Please refer to that document if you are - uncertain about the terms we are using. - </para> - - <para> - Ganeti has been developed for Linux and is - distribution-agnostic. This documentation will use Debian Lenny - as an example system but the examples can easily be translated - to any other distribution. You are expected to be familiar with - your distribution, its package management system, and Xen or KVM - before trying to use Ganeti.
- </para> - - <para>This document is divided into two main sections: - - <itemizedlist> - <listitem> - <simpara>Installation of the base system and base - components</simpara> - </listitem> - <listitem> - <simpara>Configuration of the environment for - Ganeti</simpara> - </listitem> - </itemizedlist> - - Each of these is divided into sub-sections. While a full Ganeti system - will need all of the steps specified, some are not strictly required for - every environment. Which ones they are, and why, is specified in the - corresponding sections. - </para> - - </sect1> - - <sect1> - <title>Installing the base system and base components</title> - - <sect2> - <title>Hardware requirements</title> - - <para> - Any system supported by your Linux distribution is fine. 64-bit - systems are better as they can support more memory. - </para> - - <para> - Any disk drive recognized by Linux - (<literal>IDE</literal>/<literal>SCSI</literal>/<literal>SATA</literal>/etc.) - is supported in Ganeti. Note that no shared storage (e.g. - <literal>SAN</literal>) is needed to get high-availability features. It - is highly recommended to use more than one disk drive to improve speed. - But Ganeti also works with one disk per machine. - </para> - - <sect2> - <title>Installing the base system</title> - - <para> - <emphasis role="strong">Mandatory</emphasis> on all nodes. - </para> - - <para> - It is advised to start with a clean, minimal install of the - operating system. The only requirement you need to be aware of - at this stage is to partition leaving enough space for a big - (<emphasis role="strong">minimum - <constant>20GiB</constant></emphasis>) LVM volume group which - will then host your instance filesystems, if you want to use - all Ganeti features. The volume group name Ganeti 2.0 uses (by - default) is <emphasis>xenvg</emphasis>. - </para> - - <para> - You can also use file-based storage only, without LVM, but - this is not detailed in this document. - </para> - - <para> - While you can use an existing system, please note that the - Ganeti installation is intrusive in terms of changes to the - system configuration, and it's best to use a newly-installed - system without important data on it. - </para> - - <para> - Also, for best results, it's advised that the nodes have as - much as possible the same hardware and software - configuration. This will make administration much easier. - </para> - - <sect3> - <title>Hostname issues</title> - <para> - Note that Ganeti requires the hostnames of the systems - (i.e. what the <computeroutput>hostname</computeroutput> - command outputs to be a fully-qualified name, not a short - name. In other words, you should use - <literal>node1.example.com</literal> as a hostname and not - just <literal>node1</literal>. - </para> - - <formalpara> - <title>Debian</title> - <para> - Note that Debian Lenny configures the hostname differently - than you need it for Ganeti. For example, this is what - Etch puts in <filename>/etc/hosts</filename> in certain - situations: -<screen> -127.0.0.1 localhost -127.0.1.1 node1.example.com node1 -</screen> - - but for Ganeti you need to have: -<screen> -127.0.0.1 localhost -192.168.1.1 node1.example.com node1 -</screen> - replacing <literal>192.168.1.1</literal> with your node's - address. 
Also, the file <filename>/etc/hostname</filename> - which configures the hostname of the system should contain - <literal>node1.example.com</literal> and not just - <literal>node1</literal> (you need to run the command - <computeroutput>/etc/init.d/hostname.sh - start</computeroutput> after changing the file). - </para> - </formalpara> - </sect3> - - </sect2> - - <sect2> - <title>Installing Xen</title> - - <para> - <emphasis role="strong">Mandatory</emphasis> on all nodes. - </para> - - <para> - While Ganeti is developed with the ability to modularly run on - different virtualization environments in mind the only two - currently useable on a live system are <ulink - url="http://xen.xensource.com/">Xen</ulink> and KVM. Supported - versions are: <simplelist type="inline"> - <member><literal>3.0.3</literal></member> - <member><literal>3.0.4</literal></member> - <member><literal>3.1</literal></member> </simplelist>. - </para> - - <para> - Please follow your distribution's recommended way to install - and set up Xen, or install Xen from the upstream source, if - you wish, following their manual. For KVM, make sure you have - a KVM-enabled kernel and the KVM tools. - </para> - - <para> - After installing either hypervisor, you need to reboot into - your new system. On some distributions this might involve - configuring GRUB appropriately, whereas others will configure - it automatically when you install the respective kernels. - </para> - - <formalpara><title>Debian</title> - <para> - Under Debian Lenny or Etch you can install the relevant - <literal>xen-linux-system</literal> package, which will pull - in both the hypervisor and the relevant kernel. Also, if you - are installing a 32-bit Lenny/Etch, you should install the - <computeroutput>libc6-xen</computeroutput> package (run - <computeroutput>apt-get install libc6-xen</computeroutput>). - </para> - </formalpara> - - <sect3> - <title>Xen settings</title> - - <para> - It's recommended that dom0 is restricted to a low amount of - memory (<constant>512MiB</constant> or - <constant>1GiB</constant> is reasonable) and that memory - ballooning is disabled in the file - <filename>/etc/xen/xend-config.sxp</filename> by setting the - value <literal>dom0-min-mem</literal> to - <constant>0</constant>, like this: - <computeroutput>(dom0-min-mem 0)</computeroutput> - </para> - - <para> - For optimum performance when running both CPU and I/O - intensive instances, it's also recommended that the dom0 is - restricted to one CPU only, for example by booting with the - kernel parameter <literal>nosmp</literal>. - </para> - - <para> - It is recommended that you disable xen's automatic save of virtual - machines at system shutdown and subsequent restore of them at reboot. - To obtain this make sure the variable - <literal>XENDOMAINS_SAVE</literal> in the file - <literal>/etc/default/xendomains</literal> is set to an empty value. - </para> - - <formalpara> - <title>Debian</title> - <para> - Besides the ballooning change which you need to set in - <filename>/etc/xen/xend-config.sxp</filename>, you need to - set the memory and nosmp parameters in the file - <filename>/boot/grub/menu.lst</filename>. 
You need to - modify the variable <literal>xenhopt</literal> to add - <userinput>dom0_mem=1024M</userinput> like this: -<screen> -## Xen hypervisor options to use with the default Xen boot option -# xenhopt=dom0_mem=1024M -</screen> - and the <literal>xenkopt</literal> needs to include the - <userinput>nosmp</userinput> option like this: -<screen> -## Xen Linux kernel options to use with the default Xen boot option -# xenkopt=nosmp -</screen> - - Any existing parameters can be left in place: it's ok to - have <computeroutput>xenkopt=console=tty0 - nosmp</computeroutput>, for example. After modifying the - files, you need to run: -<screen> -/sbin/update-grub -</screen> - </para> - </formalpara> - <para> - If you want to run HVM instances too with Ganeti and want - VNC access to the console of your instances, set the - following two entries in - <filename>/etc/xen/xend-config.sxp</filename>: -<screen> -(vnc-listen '0.0.0.0') -(vncpasswd '') -</screen> - You need to restart the Xen daemon for these settings to - take effect: -<screen> -/etc/init.d/xend restart -</screen> - </para> - - </sect3> - - <sect3> - <title>Selecting the instance kernel</title> - - <para> - After you have installed Xen, you need to tell Ganeti - exactly what kernel to use for the instances it will - create. This is done by creating a - <emphasis>symlink</emphasis> from your actual kernel to - <filename>/boot/vmlinuz-2.6-xenU</filename>, and one from - your initrd to - <filename>/boot/initrd-2.6-xenU</filename>. Note that if you - don't use an initrd for the <literal>domU</literal> kernel, - you don't need to create the initrd symlink. - </para> - - <formalpara> - <title>Debian</title> - <para> - After installation of the - <literal>xen-linux-system</literal> package, you need to - run (replace the exact version number with the one you - have): - <screen> -cd /boot -ln -s vmlinuz-2.6.18-5-xen-686 vmlinuz-2.6-xenU -ln -s initrd.img-2.6.18-5-xen-686 initrd-2.6-xenU - </screen> - </para> - </formalpara> - </sect3> - - </sect2> - - <sect2> - <title>Installing DRBD</title> - - <para> - Recommended on all nodes: <ulink - url="http://www.drbd.org/">DRBD</ulink> is required if you - want to use the high availability (HA) features of Ganeti, but - optional if you don't require HA or only run Ganeti on - single-node clusters. You can upgrade a non-HA cluster to an - HA one later, but you might need to export and re-import all - your instances to take advantage of the new features. - </para> - - <para> - Supported DRBD versions: <literal>8.0.x</literal>. - It's recommended to have at least version <literal>8.0.12</literal>. - </para> - - <para> - Now the bad news: unless your distribution already provides it - installing DRBD might involve recompiling your kernel or - anyway fiddling with it. Hopefully at least the Xen-ified - kernel source to start from will be provided. - </para> - - <para> - The good news is that you don't need to configure DRBD at all. - Ganeti will do it for you for every instance you set up. If - you have the DRBD utils installed and the module in your - kernel you're fine. Please check that your system is - configured to load the module at every boot, and that it - passes the following option to the module - <computeroutput>minor_count=255</computeroutput>. This will - allow you to use up to 128 instances per node (for most clusters - <constant>128 </constant> should be enough, though). 
- </para> - - <formalpara><title>Debian</title> - <para> - You can just install (build) the DRBD 8.0.x module with the - following commands (make sure you are running the Xen - kernel): - </para> - </formalpara> - - <screen> -apt-get install drbd8-source drbd8-utils -m-a update -m-a a-i drbd8 -echo drbd minor_count=128 >> /etc/modules -depmod -a -modprobe drbd minor_count=128 - </screen> - - <para> - It is also recommended that you comment out the default - resources in the <filename>/etc/drbd.conf</filename> file, so - that the init script doesn't try to configure any drbd - devices. You can do this by prefixing all - <literal>resource</literal> lines in the file with the keyword - <literal>skip</literal>, like this: - </para> - - <screen> -skip resource r0 { -... -} - -skip resource "r1" { -... -} - </screen> - - </sect2> - - <sect2> - <title>Other required software</title> - - <para>Besides Xen and DRBD, you will need to install the - following (on all nodes):</para> - - <itemizedlist> - <listitem> - <simpara><ulink url="http://sourceware.org/lvm2/">LVM - version 2</ulink></simpara> - </listitem> - <listitem> - <simpara><ulink - url="http://www.openssl.org/">OpenSSL</ulink></simpara> - </listitem> - <listitem> - <simpara><ulink - url="http://www.openssh.com/portable.html">OpenSSH</ulink></simpara> - </listitem> - <listitem> - <simpara><ulink url="http://bridge.sourceforge.net/">Bridge - utilities</ulink></simpara> - </listitem> - <listitem> - <simpara><ulink - url="http://developer.osdl.org/dev/iproute2">iproute2</ulink></simpara> - </listitem> - <listitem> - <simpara><ulink - url="ftp://ftp.inr.ac.ru/ip-routing/iputils-current.tar.gz">arping</ulink> - (part of iputils package)</simpara> - </listitem> - <listitem> - <simpara><ulink url="http://www.python.org">Python 2.4</ulink></simpara> - </listitem> - <listitem> - <simpara><ulink - url="http://pyopenssl.sourceforge.net/">Python OpenSSL - bindings</ulink></simpara> - </listitem> - <listitem> - <simpara><ulink - url="http://www.undefined.org/python/#simplejson">simplejson Python - module</ulink></simpara> - </listitem> - <listitem> - <simpara><ulink - url="http://pyparsing.wikispaces.com/">pyparsing Python - module</ulink></simpara> - </listitem> - </itemizedlist> - - <para> - These programs are supplied as part of most Linux - distributions, so usually they can be installed via apt or - similar methods. Also many of them will already be installed - on a standard machine. - </para> - - - <formalpara><title>Debian</title> - - <para>You can use this command line to install all of them:</para> - - </formalpara> - <screen> -# apt-get install lvm2 ssh bridge-utils iproute iputils-arping \ - python python-pyopenssl openssl python-pyparsing python-simplejson - </screen> - - </sect2> - - </sect1> - - - <sect1> - <title>Setting up the environment for Ganeti</title> - - <sect2> - <title>Configuring the network</title> - - <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para> - - <para> - Ganeti relies on Xen running in "bridge mode", which means the - instances network interfaces will be attached to a software bridge - running in dom0. Xen by default creates such a bridge at startup, but - your distribution might have a different way to do things. - </para> - - <para> - Beware that the default name Ganeti uses is - <hardware>xen-br0</hardware> (which was used in Xen 2.0) - while Xen 3.0 uses <hardware>xenbr0</hardware> by - default. 
The default bridge your Ganeti cluster will use for new - instances can be specified at cluster initialization time. - </para> - - <formalpara><title>Debian</title> - <para> - The recommended Debian way to configure the Xen bridge is to - edit your <filename>/etc/network/interfaces</filename> file - and substitute your normal Ethernet stanza with the - following snippet: - - <screen> -auto xen-br0 -iface xen-br0 inet static - address <replaceable>YOUR_IP_ADDRESS</replaceable> - netmask <replaceable>YOUR_NETMASK</replaceable> - network <replaceable>YOUR_NETWORK</replaceable> - broadcast <replaceable>YOUR_BROADCAST_ADDRESS</replaceable> - gateway <replaceable>YOUR_GATEWAY</replaceable> - bridge_ports eth0 - bridge_stp off - bridge_fd 0 - </screen> - </para> - </formalpara> - - <para> -The following commands need to be executed on the local console - </para> - <screen> -ifdown eth0 -ifup xen-br0 - </screen> - - <para> - To check if the bridge is setup, use <command>ip</command> - and <command>brctl show</command>: - <para> - - <screen> -# ip a show xen-br0 -9: xen-br0: <BROADCAST,MULTICAST,UP,10000> mtu 1500 qdisc noqueue - link/ether 00:20:fc:1e:d5:5d brd ff:ff:ff:ff:ff:ff - inet 10.1.1.200/24 brd 10.1.1.255 scope global xen-br0 - inet6 fe80::220:fcff:fe1e:d55d/64 scope link - valid_lft forever preferred_lft forever - -# brctl show xen-br0 -bridge name bridge id STP enabled interfaces -xen-br0 8000.0020fc1ed55d no eth0 - </screen> - - - </sect2> - - <sect2> - <title>Configuring LVM</title> - - - <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para> - - <note> - <simpara>The volume group is required to be at least - <constant>20GiB</constant>.</simpara> - </note> - <para> - If you haven't configured your LVM volume group at install - time you need to do it before trying to initialize the Ganeti - cluster. This is done by formatting the devices/partitions you - want to use for it and then adding them to the relevant volume - group: - - <screen> -pvcreate /dev/sda3 -vgcreate xenvg /dev/sda3 - </screen> -or - <screen> -pvcreate /dev/sdb1 -pvcreate /dev/sdc1 -vgcreate xenvg /dev/sdb1 /dev/sdc1 - </screen> - </para> - - <para> - If you want to add a device later you can do so with the - <citerefentry><refentrytitle>vgextend</refentrytitle> - <manvolnum>8</manvolnum></citerefentry> command: - </para> - - <screen> -pvcreate /dev/sdd1 -vgextend xenvg /dev/sdd1 - </screen> - - <formalpara> - <title>Optional</title> - <para> - It is recommended to configure LVM not to scan the DRBD - devices for physical volumes. This can be accomplished by - editing <filename>/etc/lvm/lvm.conf</filename> and adding - the <literal>/dev/drbd[0-9]+</literal> regular expression to - the <literal>filter</literal> variable, like this: -<screen> - filter = [ "r|/dev/cdrom|", "r|/dev/drbd[0-9]+|" ] -</screen> - </para> - </formalpara> - - </sect2> - - <sect2> - <title>Installing Ganeti</title> - - <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para> - - <para> - It's now time to install the Ganeti software itself. Download - the source from <ulink - url="http://code.google.com/p/ganeti/"></ulink>. 
- </para> - - <screen> -tar xvzf ganeti-@GANETI_VERSION@.tar.gz -cd ganeti-@GANETI_VERSION@ -./configure --localstatedir=/var --sysconfdir=/etc -make -make install -mkdir /srv/ganeti/ /srv/ganeti/os /srv/ganeti/export - </screen> - - <para> - You also need to copy the file - <filename>doc/examples/ganeti.initd</filename> - from the source archive to - <filename>/etc/init.d/ganeti</filename> and register it with - your distribution's startup scripts, for example in Debian: - </para> - <screen>update-rc.d ganeti defaults 20 80</screen> - - <para> - In order to automatically restart failed instances, you need - to setup a cron job run the - <computeroutput>ganeti-watcher</computeroutput> program. A - sample cron file is provided in the source at - <filename>doc/examples/ganeti.cron</filename> and you can - copy that (eventually altering the path) to - <filename>/etc/cron.d/ganeti</filename> - </para> - - </sect2> - - <sect2> - <title>Installing the Operating System support packages</title> - - <para><emphasis role="strong">Mandatory</emphasis> on all nodes.</para> - - <para> - To be able to install instances you need to have an Operating - System installation script. An example OS that works under - Debian and can install Debian and Ubuntu instace OSes is - provided on the project web site. Download it from <ulink - url="http://code.google.com/p/ganeti/"></ulink> and follow the - instructions in the <filename>README</filename> file. Here is - the installation procedure (replace <constant>0.7</constant> - with the latest version that is compatible with your ganeti - version): - </para> - - <screen> -cd /usr/local/src/ -wget http://ganeti.googlecode.com/files/ganeti-instance-debootstrap-0.7.tar.gz -tar xzf ganeti-instance-debootstrap-0.7.tar.gz -cd ganeti-instance-debootstrap-0.7 -./configure -make -make install - </screen> - - <para> - In order to use this OS definition, you need to have internet - access from your nodes and have the <citerefentry> - <refentrytitle>debootstrap</refentrytitle> - <manvolnum>8</manvolnum></citerefentry>, <citerefentry> - <refentrytitle>dump</refentrytitle><manvolnum>8</manvolnum> - </citerefentry> and <citerefentry> - <refentrytitle>restore</refentrytitle> - <manvolnum>8</manvolnum> </citerefentry> commands installed on - all nodes. Also, if the OS is configured to partition the - instance's disk in - <filename>/etc/default/ganeti-instance-debootstrap</filename>, - you will need <command>kpartx</command> installed. - </para> - <formalpara> - <title>Debian</title> - <para> - Use this command on all nodes to install the required - packages: - - <screen>apt-get install debootstrap dump kpartx</screen> - </para> - </formalpara> - - <para> - Alternatively, you can create your own OS definitions. See the - manpage - <citerefentry> - <refentrytitle>ganeti-os-interface</refentrytitle> - <manvolnum>8</manvolnum> - </citerefentry>. - </para> - - </sect2> - - <sect2> - <title>Initializing the cluster</title> - - <para><emphasis role="strong">Mandatory:</emphasis> only on one - node per cluster.</para> - - - <para> - The last step is to initialize the cluster. After you've - repeated the above process on all of your nodes, choose one as - the master, and execute: - </para> - - <screen> -gnt-cluster init <replaceable>CLUSTERNAME</replaceable> - </screen> - - <para> - The <replaceable>CLUSTERNAME</replaceable> is a hostname, - which must be resolvable (e.g. it must exist in DNS or in - <filename>/etc/hosts</filename>) by all the nodes in the - cluster. 
You must choose a name different from any of the - nodes names for a multi-node cluster. In general the best - choice is to have a unique name for a cluster, even if it - consists of only one machine, as you will be able to expand it - later without any problems. Please note that the hostname used - for this must resolve to an IP address reserved <emphasis - role="strong">exclusively</emphasis> for this purpose. - </para> - - <para> - If the bridge name you are using is not - <literal>xen-br0</literal>, use the <option>-b - <replaceable>BRIDGENAME</replaceable></option> option to - specify the bridge name. In this case, you should also use the - <option>--master-netdev - <replaceable>BRIDGENAME</replaceable></option> option with the - same <replaceable>BRIDGENAME</replaceable> argument. - </para> - - <para> - You can use a different name than <literal>xenvg</literal> for - the volume group (but note that the name must be identical on - all nodes). In this case you need to specify it by passing the - <option>-g <replaceable>VGNAME</replaceable></option> option - to <computeroutput>gnt-cluster init</computeroutput>. - </para> - - <para> - To set up the cluster as an HVM cluster, use the - <option>--enabled-hypervisors=xen-hvm</option> option to - enable the HVM hypervisor (you can also add - <userinput>,xen-pvm</userinput> to enable the PVM one - too). You will also need to create the VNC cluster password - file <filename>/etc/ganeti/vnc-cluster-password</filename> - which contains one line with the default VNC password for the - cluster. - </para> - - <para> - To setup the cluster for KVM-only usage (KVM and Xen cannot be - mixed), pass <option>--enabled-hypervisors=kvm</option> to the - init command. - </para> - - <para> - You can also invoke the command with the - <option>--help</option> option in order to see all the - possibilities. - </para> - - </sect2> - - <sect2> - <title>Joining the nodes to the cluster</title> - - <para> - <emphasis role="strong">Mandatory:</emphasis> for all the - other nodes. - </para> - - <para> - After you have initialized your cluster you need to join the - other nodes to it. You can do so by executing the following - command on the master node: - </para> - <screen> -gnt-node add <replaceable>NODENAME</replaceable> - </screen> - </sect2> - - <sect2> - <title>Separate replication network</title> - - <para><emphasis role="strong">Optional</emphasis></para> - <para> - Ganeti uses DRBD to mirror the disk of the virtual instances - between nodes. To use a dedicated network interface for this - (in order to improve performance or to enhance security) you - need to configure an additional interface for each node. Use - the <option>-s</option> option with - <computeroutput>gnt-cluster init</computeroutput> and - <computeroutput>gnt-node add</computeroutput> to specify the - IP address of this secondary interface to use for each - node. Note that if you specified this option at cluster setup - time, you must afterwards use it for every node add operation. 
- </para> - </sect2> - - <sect2> - <title>Testing the setup</title> - - <para> - Execute the <computeroutput>gnt-node list</computeroutput> - command to see all nodes in the cluster: - <screen> -# gnt-node list -Node DTotal DFree MTotal MNode MFree Pinst Sinst -node1.example.com 197404 197404 2047 1896 125 0 0 - </screen> - </para> - </sect2> - - <sect1> - <title>Setting up and managing virtual instances</title> - <sect2> - <title>Setting up virtual instances</title> - <para> - This step shows how to setup a virtual instance with either - non-mirrored disks (<computeroutput>plain</computeroutput>) or - with network mirrored disks - (<computeroutput>drbd</computeroutput>). All - commands need to be executed on the Ganeti master node (the - one on which <computeroutput>gnt-cluster init</computeroutput> - was run). Verify that the OS scripts are present on all - cluster nodes with <computeroutput>gnt-os - list</computeroutput>. - </para> - <para> - To create a virtual instance, you need a hostname which is - resolvable (DNS or <filename>/etc/hosts</filename> on all - nodes). The following command will create a non-mirrored - instance for you: - </para> - <screen> -gnt-instance add --node=node1 -o debootstrap -t plain inst1.example.com -* creating instance disks... -adding instance inst1.example.com to cluster config -Waiting for instance inst1.example.com to sync disks. -Instance inst1.example.com's disks are in sync. -creating os for instance inst1.example.com on node node1.example.com -* running the instance OS create scripts... - </screen> - - <para> - The above instance will have no network interface enabled. - You can access it over the virtual console with - <computeroutput>gnt-instance console - <literal>inst1</literal></computeroutput>. There is no - password for root. As this is a Debian instance, you can - modify the <filename>/etc/network/interfaces</filename> file - to setup the network interface (<literal>eth0</literal> is the - name of the interface provided to the instance). - </para> - - <para> - To create a network mirrored instance, change the argument to - the <option>-t</option> option from <literal>plain</literal> - to <literal>drbd</literal> and specify the node on - which the mirror should reside with the second value of the - <option>--node</option> option, like this: - </para> - - <screen> -# gnt-instance add -t drbd -n node1:node2 -o debootstrap instance2 -* creating instance disks... -adding instance instance2 to cluster config -Waiting for instance instance1 to sync disks. -- device sdb: 3.50% done, 304 estimated seconds remaining -- device sdb: 21.70% done, 270 estimated seconds remaining -- device sdb: 39.80% done, 247 estimated seconds remaining -- device sdb: 58.10% done, 121 estimated seconds remaining -- device sdb: 76.30% done, 72 estimated seconds remaining -- device sdb: 94.80% done, 18 estimated seconds remaining -Instance instance2's disks are in sync. -creating os for instance instance2 on node node1.example.com -* running the instance OS create scripts... -* starting instance... - </screen> - - </sect2> - - <sect2> - <title>Managing virtual instances</title> - <para> - All commands need to be executed on the Ganeti master node - </para> - - <para> - To access the console of an instance, use - <computeroutput>gnt-instance console - <replaceable>INSTANCENAME</replaceable></computeroutput>. - </para> - - <para> - To shutdown an instance, use <computeroutput>gnt-instance - shutdown - <replaceable>INSTANCENAME</replaceable></computeroutput>. 
To - startup an instance, use <computeroutput>gnt-instance startup - <replaceable>INSTANCENAME</replaceable></computeroutput>. - </para> - - <para> - To failover an instance to its secondary node (only possible - with <literal>drbd</literal> disk templates), use - <computeroutput>gnt-instance failover - <replaceable>INSTANCENAME</replaceable></computeroutput>. - </para> - - <para> - For more instance and cluster administration details, see the - <emphasis>Ganeti administrator's guide</emphasis>. - </para> - - </sect2> - - </sect1> - - </article>