Commit ec5939e6 authored by Marios Kogias's avatar Marios Kogias

added the improved quick install admin guide

parent 731db9e8
......@@ -37,18 +37,10 @@ snf-cyclades-app component (scheduled to be fixed in the next version).
For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.
.. note:: It is important that the two machines are under the same domain name.
If they are not, you can do this by editing the file ``/etc/hosts``
on both machines, and add the following lines:
.. code-block:: console
4.3.2.1 node1.example.com
4.3.2.2 node2.example.com
are "node1.example.com" and "node2.example.com" and their public IPs are "4.3.2.1" and
"4.3.2.2" respectively. It is important that the two machines are under the same domain name.
In case you choose to follow a private installation, you will need to
set up a private DNS server, using dnsmasq for example. See node1 below for more details.
General Prerequisites
=====================
......@@ -93,15 +85,18 @@ system clocks (e.g. by running ntpd).
Node1
-----
General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* apache (http server)
* gunicorn (WSGI http server)
* postgresql (database)
* rabbitmq (message queue)
* ntp (NTP daemon)
* gevent
* apache (http server)
* public certificate
* gunicorn (WSGI http server)
* postgresql (database)
* rabbitmq (message queue)
* ntp (NTP daemon)
* gevent
* dns server
You can install apache2, postgresql and ntp by running:
......@@ -230,6 +225,54 @@ Create the file ``/etc/gunicorn.d/synnefo`` containing the following:
# /etc/init.d/gunicorn stop
Certificate Creation
~~~~~~~~~~~~~~~~~~~~~
Node1 will host Cyclades. Cyclades should communicate with the other Synnefo tools over a trusted connection.
In order for the connection to be trusted, the keys provided to apache below should be signed with a certificate.
This certificate should be added to all nodes. In case you don't have signed keys, you can create a self-signed
certificate and sign your keys with it. To do so, run on node1:
.. code-block:: console
# aptitude install openvpn
# mkdir /etc/openvpn/easy-rsa
# cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa
# cd /etc/openvpn/easy-rsa/2.0
# vim vars
In ``vars`` you can set your own parameters, such as ``KEY_COUNTRY``.
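For example, the relevant lines in ``vars`` might look like the following (the values shown are hypothetical; substitute your own organization's details):

.. code-block:: console

export KEY_COUNTRY="GR"
export KEY_PROVINCE="Attica"
export KEY_CITY="Athens"
export KEY_ORG="example.com"
export KEY_EMAIL="admin@example.com"

These values are embedded in the certificates you generate below.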
.. code-block:: console
# . ./vars
# ./clean-all
Now you can create the certificate
.. code-block:: console
# ./build-ca
The previous will create a ``ca.crt`` file. Copy this file under
``/usr/local/share/ca-certificates/`` directory and run:
.. code-block:: console
# update-ca-certificates
to update the records. You will have to do the following on node2 as well.
Now you can create the keys and sign them with the certificate
.. code-block:: console
# ./build-key-server node1.example.com
This will create a ``.crt`` and a ``.key`` file (under the ``keys/`` subdirectory). Copy these to
``/etc/ssl/certs/`` and ``/etc/ssl/private/`` respectively and
use them in the apache2 configuration file below instead of the defaults.
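Assuming the default easy-rsa output layout (generated files land under ``keys/`` in the current directory) and the hostname used above, the copy step could look like this; adjust the filenames if yours differ:

.. code-block:: console

# cp keys/node1.example.com.crt /etc/ssl/certs/
# cp keys/node1.example.com.key /etc/ssl/private/
# chmod 600 /etc/ssl/private/node1.example.com.key

Restricting permissions on the private key is good practice, since anyone who can read it can impersonate the server.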
Apache2 setup
~~~~~~~~~~~~~
......@@ -247,6 +290,7 @@ following:
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
</VirtualHost>
Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:
......@@ -304,6 +348,7 @@ Now enable sites and modules by running:
# /etc/init.d/apache2 stop
.. _rabbitmq-setup:
Message Queue setup
......@@ -335,6 +380,32 @@ directory inside it:
# chown www-data:www-data data
# chmod g+ws data
DNS server setup
~~~~~~~~~~~~~~~~
If your machines are not under the same domain name, you have to set up a DNS server.
In order to set up a DNS server using dnsmasq, do the following:
.. code-block:: console
# apt-get install dnsmasq
Then edit your ``/etc/hosts`` file as follows:
.. code-block:: console
4.3.2.1 node1.example.com
4.3.2.2 node2.example.com
Finally, edit the ``/etc/dnsmasq.conf`` file and specify the ``listen-address`` and
the ``interface`` you would like dnsmasq to listen on.
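For instance, a minimal ``/etc/dnsmasq.conf`` for this setup could contain the following (the interface name ``eth0`` is an assumption; use the interface that carries the 4.3.2.1 address on your machine):

.. code-block:: console

listen-address=4.3.2.1
interface=eth0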
Also add the following to your ``/etc/resolv.conf`` file:
.. code-block:: console
nameserver 4.3.2.1
You are now ready with all general prerequisites concerning node1. Let's go to
node2.
......@@ -349,6 +420,8 @@ General Synnefo dependencies
* postgresql (database)
* ntp (NTP daemon)
* gevent
* certificates
* dns setup
You can install the above by running:
......@@ -487,11 +560,37 @@ As in node1, enable sites and modules by running:
# /etc/init.d/apache2 stop
Acquire certificate
~~~~~~~~~~~~~~~~~~~
Copy the certificate you created before on node1 (``ca.crt``) under the directory
``/usr/local/share/ca-certificates/``
and run:
.. code-block:: console
# update-ca-certificates
to update the records.
DNS Setup
~~~~~~~~~
Add the following line to the ``/etc/resolv.conf`` file:
.. code-block:: console
nameserver 4.3.2.1
to inform the node about the new dns server.
We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.
Installation of Astakos on node1
================================
......@@ -628,11 +727,13 @@ More specifically astakos sends emails in the following cases
Astakos uses the Django internal email delivering mechanism to send email
notifications. A simple configuration, using an external smtp server to
deliver messages, is shown below.
deliver messages, is shown below. Alter the following example to match your
SMTP server's characteristics. Note that a working SMTP server is required for a
proper installation:
.. code-block:: python
# /etc/synnefo/10-snf-common-admins.conf
# /etc/synnefo/00-snf-common-admins.conf
EMAIL_HOST = "mysmtp.server.synnefo.org"
EMAIL_HOST_USER = "<smtpuser>"
EMAIL_HOST_PASSWORD = "<smtppassword>"
......@@ -810,6 +911,8 @@ offered by the services.
# copy the file to astakos-host
astakos-host$ snf-manage service-import --json pithos.json
Notice that in this installation Astakos and Cyclades are on node1 and Pithos is on node2.
Setting Default Base Quota for Resources
----------------------------------------
......@@ -1116,30 +1219,37 @@ For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.
We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti.
not familiar with Ganeti.
Unfortunatelly, the current stable version of the stock Ganeti (v2.6.2) doesn't
Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET provided packages until stable 2.7 is out. To do so:
GRNET provided packages until stable 2.7 is out. These packages will also install
the proper version of Ganeti. To do so:
.. code-block:: console
# apt-get install snf-ganeti ganeti-htools
# rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true
You should have:
Ganeti will make use of drbd. To enable this and make the configuration permanent
you have to do the following:
.. code-block:: console
# rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true
# echo 'drbd minor_count=255 usermode_helper=/bin/true' >> /etc/modules
Ganeti >= 2.6.2+ippool11+hotplug5+extstorage3+rdbfix1+kvmfix2-1
We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have same
dsa/rsa keys and authorised_keys for password-less root ssh between each other.
If not then skip passing --no-ssh-init but be aware that it will replace
/root/.ssh/* related files and you might lose access to master node. Also,
make sure there is an lvm volume group named ``ganeti`` that will host your
VMs' disks. Finally, setup a bridge interface on the host machines (e.g: br0).
say it's ``ganeti.node1.example.com``). This IP is needed to communicate with
the Ganeti cluster. Make sure node1 and node2 have the same dsa/rsa keys and ``authorized_keys``
for password-less root ssh between each other. If not, skip passing ``--no-ssh-init``, but be
aware that it will replace ``/root/.ssh/*`` related files and you might lose access to the master node.
Also, Ganeti will need a volume to host your VMs' disks. So, make sure there is an lvm volume
group named ``ganeti``. Finally, set up a bridge interface on the host machines (e.g. br0). This
will be needed for the network configuration afterwards.
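As a sketch, assuming a spare disk ``/dev/sdb`` is available for the volume group (substitute your own block device) and the bridge is named ``br0``, the preparation on each node could look like:

.. code-block:: console

# pvcreate /dev/sdb
# vgcreate ganeti /dev/sdb
# brctl addbr br0
# ip link set br0 up

``brctl`` comes from the ``bridge-utils`` package; install it first if it is missing.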
Then run on node1:
.. code-block:: console
......@@ -1181,10 +1291,12 @@ to handle image files stored on Pithos. It also needs `python-psycopg2` to be
able to access the Pithos database. This is why, we also install them on *all*
VM-capable Ganeti nodes.
.. warning:: snf-image uses ``curl`` for handling URLs. This means that it will
not work out of the box if you try to use URLs served by servers which do
not have a valid certificate. To circumvent this you should edit the file
``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"``.
.. warning::
snf-image uses ``curl`` for handling URLs. This means that it will
not work out of the box if you try to use URLs served by servers which do
not have a valid certificate. In case you haven't followed the guide's
directions about the certificates, in order to circumvent this you should edit the file
``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"`` on every node.
After `snf-image` has been installed successfully, create the helper VM by
running on *both* nodes:
......@@ -1303,7 +1415,7 @@ In the above command:
* ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
* ``img_id``: If you want to deploy an Image stored on Pithos (our case), this
should have the format ``pithos://<UUID>/<container>/<filename>``:
* ``username``: ``user@example.com`` (defined during Astakos sign up)
* ``UUID``: the username found in Cyclades Web UI under API access
* ``container``: ``pithos`` (default, if the Web UI was used)
* ``filename``: the name of file (visible also from the Web UI)
* ``img_properties``: taken from the metadata file. Used only the two mandatory
......@@ -1494,7 +1606,7 @@ managed from our previously defined network. Run on the GANETI-MASTER (node1):
--net 0:ip=pool,network=test-net-public \
testvm2
If the above returns successfully, connect to the new VM and run:
If the above returns successfully, connect to the new VM through VNC as before and run:
.. code-block:: console
......@@ -1766,7 +1878,7 @@ Edit ``/etc/synnefo/20-snf-cyclades-app-cloudbar.conf``:
CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
CLOUDBAR_MENU_URL = 'https://account.node1.example.com/astakos/ui/get_menu'
CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'
``CLOUDBAR_LOCATION`` tells the client where to find the Astakos common
cloudbar. The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are
......@@ -2027,6 +2139,7 @@ skipped.
node2 # snf-manage reconcile-resources-pithos --fix
node1 # snf-manage reconcile-resources-cyclades --fix
If all the above return successfully, then you have finished with the Cyclades
installation and setup.
......@@ -2171,7 +2284,7 @@ it to Cyclades, by running:
.. code-block:: console
$ kamaki image register "Debian Base" \
pithos://u53r-un1qu3-1d/images/debian_base-6.0-7-x86_64.diskdump \
pithos://u53r-un1qu3-1d/images/debian_base-6.0-11-x86_64.diskdump \
--public \
--disk-format=diskdump \
--property OSFAMILY=linux --property ROOT_PARTITION=1 \
......@@ -2180,7 +2293,7 @@ it to Cyclades, by running:
--property sortorder=1 --property USERS=root --property OS=debian
This command registers the Pithos file
``pithos://u53r-un1qu3-1d/images/debian_base-6.0-7-x86_64.diskdump`` as an
``pithos://u53r-un1qu3-1d/images/debian_base-6.0-11-x86_64.diskdump`` as an
Image in Cyclades. This Image will be public (``--public``), so all users will
be able to spawn VMs from it and is of type ``diskdump``. The first two
properties (``OSFAMILY`` and ``ROOT_PARTITION``) are mandatory. All the rest
......