.. _quick-install-admin-guide:

Administrator's Quick Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's quick installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

    * Identity Management (Astakos)
    * Object Storage Service (Pithos+)
    * Compute Service (Cyclades)
    * Image Registry Service (Plankton)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos+), follow the
guide and simply stop after the "Testing of Pithos+" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order shown in the list above. Cyclades and
Plankton will be installed in a single step (at the end), because at the moment
they are contained in the same software component. Furthermore, we will install
all services on the first physical node, except Pithos+, which will be installed
on the second, due to a conflict between the snf-pithos-app and snf-cyclades-app
components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and their IPs are "4.3.2.1" and
"4.3.2.2" respectively.

.. note:: It is important that the two machines are under the same domain name.
    If they are not, you can fix this by editing the file ``/etc/hosts``
    on both machines and adding the following lines:

    .. code-block:: console

        4.3.2.1     node1.example.com
        4.3.2.2     node2.example.com


General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2;
they are related to all the services (Astakos, Pithos+, Cyclades, Plankton).

To be able to download all synnefo components, you need to add the following
lines to your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze main``
| ``deb-src http://apt.dev.grnet.gr squeeze main``
| ``deb http://apt.dev.grnet.gr squeeze-backports main``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Also add the following line to ``/etc/apt/sources.list`` to enable the
``squeeze-backports`` repository, which may provide more recent versions of
certain packages. The repository is deactivated by default and must be
specified explicitly in ``apt-get`` operations:

| ``deb http://backports.debian.org/debian-backports squeeze-backports main``

You also need a shared directory visible by both nodes. Pithos+ will save all
data inside this directory. By 'all data', we mean files, images, and
Pithos-specific mapping data. If you plan to upload more than one basic image,
this directory should have at least 50GB of free space. During this guide, we
will assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2 (be sure to set the ``no_root_squash`` flag). Node2 has
this directory mounted under ``/srv/pithos``, too.
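
A minimal sketch of such an NFS setup follows; the packages and commands are
standard Debian tools, but treat the exact export options and mount invocation
as assumptions and adapt them to your environment:

.. code-block:: console

    root@node1:~ # apt-get install nfs-kernel-server
    root@node1:~ # mkdir -p /srv/pithos
    root@node1:~ # echo '/srv/pithos 4.3.2.2(rw,sync,no_root_squash)' >> /etc/exports
    root@node1:~ # exportfs -ra

    root@node2:~ # apt-get install nfs-common
    root@node2:~ # mkdir -p /srv/pithos
    root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos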

Before starting the synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described in the service's
section.

Finally, it is required for Cyclades and Ganeti nodes to have synchronized
system clocks (e.g. by running ntpd).
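
Once ``ntp`` is installed (it is included in the package lists below), a quick,
optional way to confirm that a node's clock is actually being synchronized is
to query its NTP peers:

.. code-block:: console

   # ntpq -p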

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * rabbitmq (message queue)
    * ntp (NTP daemon)
    * gevent

You can install apache2, postgresql and ntp by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official Debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the Debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

To install RabbitMQ>=2.8.4, use the RabbitMQ APT repository by adding the
following line to ``/etc/apt/sources.list``:

.. code-block:: console

    deb http://www.rabbitmq.com/debian testing main

Add the RabbitMQ public key to the trusted key list:

.. code-block:: console

  # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
  # apt-key add rabbitmq-signing-key-public.asc

Finally, to install the package run:

.. code-block:: console

  # apt-get update
  # apt-get install rabbitmq-server

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps`` that will host the tables of
all Django apps. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

    root@node1:~ # su - postgres
    postgres@node1:~ $ psql
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos`` needed by the pithos+ backend and
grant the ``synnefo`` user all privileges on the database. This database could
be created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen to all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

    listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

    host		all	all	4.3.2.1/32	md5
    host		all	all	4.3.2.2/32	md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--worker-class=gevent',
       '--workers=8',
       '--log-level=debug',
     ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node1.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node1.example.com

        Alias /static "/usr/share/synnefo/static"

        #  SetEnv no-gzip
        #  SetEnv dont-vary

        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically,
during the Cyclades setup.
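
Optionally, you can verify that the broker is running and that the ``synnefo``
user exists, using standard ``rabbitmqctl`` queries:

.. code-block:: console

   # rabbitmqctl status
   # rabbitmqctl list_users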

Pithos+ data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible by both nodes. We create and setup the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * ntp (NTP daemon)
    * gevent

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official Debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the Debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get familiar
with the software, you may choose to run different databases on different nodes,
for performance/scalability/redundancy reasons, but such setups are outside the
scope of this guide.
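
If you want to verify at this point that node2 can actually reach the databases
on node1 (an optional check, not part of the original steps), you can try
connecting with the ``psql`` client, using the password chosen earlier:

.. code-block:: console

    root@node2:~ # psql -h node1.example.com -U synnefo -d snf_apps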

Gunicorn setup
~~~~~~~~~~~~~~

Create the file ``/etc/gunicorn.d/synnefo`` containing the following
(nearly identical to node1's file; note the different ``--workers`` value and
the extra ``--timeout`` argument):

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
      'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--worker-class=gevent',
       '--workers=4',
       '--log-level=debug',
       '--timeout=43200'
     ),
    }

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node2.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node2.example.com

        Alias /static "/usr/share/synnefo/static"

        SetEnv no-gzip
        SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.


Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you have
made the additions needed to your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app snf-quotaholder-app snf-pithos-backend

After successful installation of snf-astakos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). By default Debian
installs "Recommended" packages, but if you have changed your configuration and
the package was not installed automatically, you should install it manually by
running:

.. code-block:: console

   # apt-get install snf-webproject

The reason snf-webproject is "Recommended" and not a hard dependency is to give
the experienced administrator the ability to install Synnefo in a custom-made
`Django <https://www.djangoproject.com/>`_ project. This corner case
concerns only very advanced users that know what they are doing and want to
experiment with synnefo.


.. _conf-astakos:

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'postgresql_psycopg2',
         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'

For astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf`` :

.. code-block:: console

    ASTAKOS_DEFAULT_ADMIN_EMAIL = None

    ASTAKOS_COOKIE_DOMAIN = '.example.com'

    ASTAKOS_BASEURL = 'https://node1.example.com'

``ASTAKOS_COOKIE_DOMAIN`` should be the base domain of our deployment (shared
by all services). ``ASTAKOS_BASEURL`` is the astakos home page URL.

``ASTAKOS_DEFAULT_ADMIN_EMAIL`` refers to the administrator's email.
Every time a new account is created, a notification is sent to this email.
For this we need access to a running mail server, so we have disabled
it for now by setting its value to None. For more information on this,
read the relevant :ref:`section <mail-server>`.

.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
    If you would like to enable it, you have to edit the following options:

    .. code-block:: console

        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_USE_SSL = True
        ASTAKOS_RECAPTCHA_ENABLED = True

    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
    go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'

    CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1 which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

.. note:: Because Cyclades and Astakos are running on the same machine
    in our example, we have to deactivate the CSRF verification. We can do so
    by adding the following to ``/etc/synnefo/99-local.conf``:

    .. code-block:: console

        MIDDLEWARE_CLASSES.remove('django.middleware.csrf.CsrfViewMiddleware')
        TEMPLATE_CONTEXT_PROCESSORS.remove('django.core.context_processors.csrf')

Since version 0.13 you need to configure some basic settings for the new *Quota*
feature.

Specifically:

Edit ``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

    QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
    QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
    ASTAKOS_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
    ASTAKOS_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'

Enable Pooling
--------------

This section can be bypassed, but we strongly recommend that you apply the
following changes, since they result in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.

To use it, first monkey-patch psycopg2. For Django, add the following before
the ``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

Since we are running with greenlets, we should modify psycopg2 behavior, so it
works properly in a greenlet context:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.

All the above will result in a ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

    # Monkey-patch psycopg2
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

    # If running with greenlets
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'postgresql_psycopg2',
         'OPTIONS': {'synnefo_poolsize': 8},

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

    # snf-manage syncdb

In this example we don't need to create a Django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migration needed
for astakos:

.. code-block:: console

    # snf-manage migrate im

Then, we load the pre-defined user groups:

.. code-block:: console

732
    # snf-manage loaddata groups

.. _services-reg:

Services Registration
---------------------

When the database is ready, we configure the elements of the Astakos cloudbar,
to point to our future services:

.. code-block:: console

    # snf-manage service-add "~okeanos home" https://node1.example.com/im/ home-icon.png
    # snf-manage service-add "cyclades" https://node1.example.com/ui/
    # snf-manage service-add "pithos+" https://node2.example.com/ui/

Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

    root@node1:~ # /etc/init.d/gunicorn restart
    root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/im``

If this redirects you to ``https://node1.example.com/im/`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in your data in the sign-up form. Then click "SUBMIT". You should now
see a green box on the top, which informs you that you made a successful request
and the request has been sent to the administrators. So far so good, let's
assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

    root@node1:~ # snf-manage user-list

This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``. It should also have an
"active" status with the value of ``0`` (inactive). Now run:

.. code-block:: console

    root@node1:~ # snf-manage user-update --set-active 1

This modifies the active value to ``1`` and actually activates the user.
When running in production, the activation is done automatically, with the
different types of moderation that Astakos supports. You can see the moderation
methods (by invitation, whitelists, matching regexp, etc.) in the
Astakos-specific documentation. In production, you can also manually activate a
user by sending him/her an activation email. See how to do this in the
:ref:`User activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/im/`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos+ now.


Installation of Pithos+ on node2
================================

To install pithos+, grab the packages from our repository (make sure you have
made the additions needed to your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app snf-pithos-backend

After successful installation of snf-pithos-app, make sure that snf-webproject
has also been installed (marked as a "Recommended" package). Refer to the
"Installation of Astakos on node1" section, if you don't remember why this
should happen. Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for pithos+ and will be accessible by clicking "pithos+" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.


.. _conf-pithos:

Configuration of Pithos+
========================

Conf Files
----------

After pithos+ is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did in node1
after installation of astakos. Here, you will not have to change anything that
has to do with snf-common or snf-webproject. Everything is set at node1. You
only need to change settings that have to do with pithos+. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:

.. code-block:: console

   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'

   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_AUTHENTICATION_URL = 'https://node1.example.com/im/authenticate'
   PITHOS_AUTHENTICATION_USERS = None

   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w=='
   PITHOS_USER_CATALOG_URL = 'https://node1.example.com/user_catalogs'
   PITHOS_USER_FEEDBACK_URL = 'https://node1.example.com/feedback'
   PITHOS_USER_LOGIN_URL = 'https://node1.example.com/login'

   PITHOS_QUOTAHOLDER_URL = 'https://node1.example.com/quotaholder/v'
   PITHOS_QUOTAHOLDER_TOKEN = 'aExampleTokenJbFm12w'
   PITHOS_USE_QUOTAHOLDER = True

   # Set to False if astakos & pithos are on the same host
   #PITHOS_PROXY_USER_SERVICES = True


The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the pithos+ app where to
find the pithos+ backend database. Above, we tell pithos+ that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All these settings were set up during node1's "Database
setup" section.

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the pithos+ app where to find
the pithos+ backend data. Above, we tell pithos+ to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory during node1's "Pithos+ data directory setup" section.

The ``PITHOS_AUTHENTICATION_URL`` option tells the pithos+ app at which URI
the astakos authentication API is available. If not set, pithos+ tries to
authenticate using the ``PITHOS_AUTHENTICATION_USERS`` user pool.

The ``PITHOS_SERVICE_TOKEN`` should be the Pithos+ token returned by running on
the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage service-list

The token has been generated automatically during the :ref:`Pithos+ service
registration <services-reg>`.

Then we need to set up the web UI and connect it to astakos. To do so, edit
``/etc/synnefo/20-snf-pithos-webclient-settings.conf``:

.. code-block:: console

    PITHOS_UI_LOGIN_URL = "https://node1.example.com/im/login?next="
    PITHOS_UI_FEEDBACK_URL = "https://node2.example.com/feedback"

The ``PITHOS_UI_LOGIN_URL`` option tells the client where to redirect you, if
you are not logged in. The ``PITHOS_UI_FEEDBACK_URL`` option points at the
pithos+ feedback form. Astakos already provides a generic feedback form for all
services, so we use this one.

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
pithos+ web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
    PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE = '3'
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/im/get_services'
    CLOUDBAR_MENU_URL = 'https://node1.example.com/im/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` points to an already registered
Astakos service. You can see all :ref:`registered services <services-reg>` by
running on the Astakos node (node1):

.. code-block:: console

   # snf-manage service-list

The value of ``PITHOS_UI_CLOUDBAR_ACTIVE_SERVICE`` should be the pithos
service's ``id`` as shown by the above command, in our case ``3``.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
pithos+ web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

Pooling and Greenlets
---------------------

Pithos is pooling-ready without the need of further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos and pithos
backend objects for access to the Pithos DB.

However, as in Astakos, since we are running with Greenlets, it is also
recommended to modify psycopg2 behavior so it works properly in a greenlet
context. This means adding the following lines at the top of your
``/etc/synnefo/10-snf-webproject-database.conf`` file:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync``, as
mentioned above, depending on your setup) argument in your
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
like this:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--workers=4',
       '--worker-class=gevent',
       '--log-level=debug',
       '--timeout=43200'
     ),
    }

Stamp Database Revision
-----------------------

Pithos uses the alembic_ database migrations tool.

.. _alembic: http://alembic.readthedocs.org

After a successful installation, we should stamp it at the most recent
revision, so that future migrations know where to start upgrading in
the migration history.

First, find the most recent revision in the migration history:

.. code-block:: console

    root@node2:~ # pithos-migrate history
    2a309a9a3438 -> 27381099d477 (head), alter public add column url
    165ba3fbfe53 -> 2a309a9a3438, fix statistics negative population
    3dd56e750a3 -> 165ba3fbfe53, update account in paths
    230f8ce9c90f -> 3dd56e750a3, Fix latest_version
    8320b1c62d9 -> 230f8ce9c90f, alter nodes add column latest version
    None -> 8320b1c62d9, create index nodes.parent

Finally, we stamp it with the one found in the previous step:

.. code-block:: console

    root@node2:~ # pithos-migrate stamp 27381099d477

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

    root@node2:~ # /etc/init.d/gunicorn restart
    root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos+ setup. Let's test it now.


Testing of Pithos+
==================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/im``

Login, and you will see your profile page. Now, click the "pithos+" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to:

``https://node2.example.com/ui/``

and you will see the blue interface of the Pithos+ application. Click the
orange "Upload" button and upload your first file. If the file gets uploaded
successfully, then this is your first sign of a successful Pithos+ installation.
Go ahead and experiment with the interface to make sure everything works
correctly.

You can also use the Pithos+ clients to sync data from your Windows PC or Mac.

If you don't stumble on any problems, then you have successfully installed
Pithos+, which you can use as a standalone File Storage Service.

If you would like to do more, such as:

    * Spawning VMs
    * Spawning VMs from Images stored on Pithos+
    * Uploading your custom Images to Pithos+
    * Spawning VMs from those custom Images
    * Registering existing Pithos+ files as Images
    * Connecting VMs to the Internet
    * Creating Private Networks
    * Adding VMs to Private Networks

please continue with the rest of the guide.


Cyclades (and Plankton) Prerequisites
=====================================

Before proceeding with the Cyclades (and Plankton) installation, make sure you
have successfully set up Astakos and Pithos+ first, because Cyclades depends
on them. If you don't have a working Astakos and Pithos+ installation yet,
please return to the :ref:`top <quick-install-admin-guide>` of this guide.

Besides Astakos and Pithos+, you will also need a number of additional working
prerequisites, before you start the Cyclades installation.

Ganeti
------

`Ganeti <http://code.google.com/p/ganeti/>`_ handles the low level VM management
for Cyclades, so Cyclades requires a working Ganeti installation at the backend.
Please refer to the
`ganeti documentation <http://docs.ganeti.org/ganeti/2.5/html>`_ for all the
gory details. A successful Ganeti installation concludes with a working
:ref:`GANETI-MASTER <GANETI_NODES>` and a number of :ref:`GANETI-NODEs
<GANETI_NODES>`.

The above Ganeti cluster can run on different physical machines than node1 and
node2 and can scale independently, according to your needs.

For the purpose of this guide, we will assume that the :ref:`GANETI-MASTER
<GANETI_NODES>` runs on node1 and is VM-capable. Also, node2 is a
:ref:`GANETI-NODE <GANETI_NODES>` and is Master-capable and VM-capable too.

We highly recommend that you read the official Ganeti documentation, if you are
not familiar with Ganeti.

Unfortunately, the current stable version of the stock Ganeti (v2.6.2) doesn't
support IP pool management. This feature will be available in Ganeti >= 2.7.
Synnefo depends on the IP pool functionality of Ganeti, so you have to use
GRNET-provided packages until stable 2.7 is out. To do so:

.. code-block:: console

   # apt-get install snf-ganeti ganeti-htools
   # rmmod -f drbd && modprobe drbd minor_count=255 usermode_helper=/bin/true

You should have:

Ganeti >= 2.6.2+ippool11+hotplug5+extstorage3+rdbfix1+kvmfix2-1
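
To confirm which version actually got installed, you can query dpkg (an
optional check, not part of the original steps):

.. code-block:: console

   # dpkg -l snf-ganeti ganeti-htools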

We assume that Ganeti will use the KVM hypervisor. After installing Ganeti on
both nodes, choose a domain name that resolves to a valid floating IP (let's
say it's ``ganeti.node1.example.com``). Make sure node1 and node2 have the same
dsa/rsa keys and ``authorized_keys`` for password-less root ssh between each
other. If not, skip passing ``--no-ssh-init`` below, but be aware that it will
replace your ``/root/.ssh/*`` related files and you might lose access to the
master node. Also, make sure there is an lvm volume group named ``ganeti`` that
will host your VMs' disks. Finally, set up a bridge interface on the host
machines (e.g. ``br0``).
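
These host-side preparations can be sketched roughly as follows. This is only
an illustration: the first three commands generate one root keypair on node1,
authorize it locally and copy the same keypair and ``authorized_keys`` to
node2; the last commands create the ``ganeti`` volume group on each node, where
``/dev/sdb1`` is an assumption standing for whatever free disk or partition you
dedicate to it. The bridge setup itself is deployment-specific (see the "Public
Network Setup" section later in this guide).

.. code-block:: console

    root@node1:~ # ssh-keygen -t rsa
    root@node1:~ # cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
    root@node1:~ # scp /root/.ssh/id_rsa /root/.ssh/id_rsa.pub \
                       /root/.ssh/authorized_keys root@node2.example.com:/root/.ssh/

    root@node1:~ # pvcreate /dev/sdb1
    root@node1:~ # vgcreate ganeti /dev/sdb1
    root@node2:~ # pvcreate /dev/sdb1
    root@node2:~ # vgcreate ganeti /dev/sdb1

With the hosts prepared, run on node1: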

.. code-block:: console

    root@node1:~ # gnt-cluster init --enabled-hypervisors=kvm --no-ssh-init \
                    --no-etc-hosts --vg-name=ganeti --nic-parameters link=br0 \
                    --master-netdev eth0 ganeti.node1.example.com
    root@node1:~ # gnt-cluster modify --default-iallocator hail
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:kernel_path=
    root@node1:~ # gnt-cluster modify --hypervisor-parameters kvm:vnc_bind_address=0.0.0.0

    root@node1:~ # gnt-node add --no-ssh-key-check --master-capable=yes \
                    --vm-capable=yes node2.example.com
    root@node1:~ # gnt-cluster modify --disk-parameters=drbd:metavg=ganeti
    root@node1:~ # gnt-group modify --disk-parameters=drbd:metavg=ganeti default

For any problems you may stumble upon while installing Ganeti, please refer to
the `official documentation <http://docs.ganeti.org/ganeti/2.5/html>`_.
Installation of Ganeti is out of the scope of this guide.

.. _cyclades-install-snfimage:

snf-image
---------

Installation
~~~~~~~~~~~~
For :ref:`Cyclades <cyclades>` to be able to launch VMs from specified Images,
you need the :ref:`snf-image <snf-image>` OS Definition installed on *all*
VM-capable Ganeti nodes. This means we need :ref:`snf-image <snf-image>` on
node1 and node2. You can do this by running on *both* nodes:

.. code-block:: console

   # apt-get install snf-image snf-pithos-backend python-psycopg2

snf-image also needs the `snf-pithos-backend <snf-pithos-backend>`, to be able
to handle image files stored on Pithos+. It also needs `python-psycopg2` to be
able to access the Pithos+ database. This is why we also install them on *all*
VM-capable Ganeti nodes.

.. warning:: snf-image uses ``curl`` for handling URLs. This means that it will
    not  work out of the box if you try to use URLs served by servers which do
    not have a valid certificate. To circumvent this you should edit the file
    ``/etc/default/snf-image``. Change ``#CURL="curl"`` to ``CURL="curl -k"``.

After `snf-image` has been installed successfully, create the helper VM by
running on *both* nodes:

.. code-block:: console

   # snf-image-update-helper

This will create all the needed files under ``/var/lib/snf-image/helper/`` for
snf-image to run successfully, and it may take a few minutes depending on your
Internet connection.

Configuration
~~~~~~~~~~~~~
snf-image supports native access to Images stored on Pithos+. This means that
it can talk directly to the Pithos+ backend, without the need of providing a
public URL. More details are described in the next section. For now, the only
thing we need to do is configure snf-image to access our Pithos+ backend.

To do this, we need to set the corresponding variables in
``/etc/default/snf-image``, to reflect our Pithos+ setup:

.. code-block:: console

    PITHOS_DB="postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos"

    PITHOS_DATA="/srv/pithos/data"

If you have installed your Ganeti cluster on different nodes than node1 and
node2, make sure that ``/srv/pithos/data`` is visible to all of them.

If you would like to use Images that are also/only stored locally, you need to
save them under ``IMAGE_DIR``; however, this guide targets Images stored only on
Pithos+.

Testing
~~~~~~~
You can test that snf-image is successfully installed by running on the
:ref:`GANETI-MASTER <GANETI_NODES>` (in our case node1):

.. code-block:: console

   # gnt-os diagnose

This should return ``valid`` for snf-image.

If you are interested in learning more about snf-image's internals (and even
using it alongside Ganeti without Synnefo), please see
`here <https://code.grnet.gr/projects/snf-image/wiki>`_ for information
concerning installation instructions, documentation on the design and
implementation, and supported Image formats.

.. _snf-image-images:

Actual Images for snf-image
---------------------------

Now that snf-image is installed successfully we need to provide it with some
Images. :ref:`snf-image <snf-image>` supports Images stored in ``extdump``,
``ntfsdump`` or ``diskdump`` format. We recommend the use of the ``diskdump``
format. For more information about snf-image Image formats see `here
<https://code.grnet.gr/projects/snf-image/wiki/Image_Format>`_.

:ref:`snf-image <snf-image>` also supports three (3) different locations for the
above Images to be stored:

    * Under a local folder (usually an NFS mount, configurable as ``IMAGE_DIR``
      in :file:`/etc/default/snf-image`)
    * On a remote host (accessible via public URL e.g: http://... or ftp://...)
    * On Pithos+ (accessible natively, not only by its public URL)

For the purpose of this guide, we will use the Debian Squeeze Base Image found
on the official `snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_. The image is
of type ``diskdump``. We will store it in our new Pithos+ installation.

To do so, do the following:

a) Download the Image from the official snf-image page.

b) Upload the Image to your Pithos+ installation, either using the Pithos+ Web
   UI or the command line client `kamaki
   <http://docs.dev.grnet.gr/kamaki/latest/index.html>`_.

Once the Image is uploaded successfully, download the Image's metadata file
from the official snf-image page. You will need it, for spawning a VM from
Ganeti, in the next section.

Of course, you can repeat the procedure to upload more Images, available from
the `official snf-image page
<https://code.grnet.gr/projects/snf-image/wiki#Sample-Images>`_.

.. _ganeti-with-pithos-images:

Spawning a VM from a Pithos+ Image, using Ganeti
------------------------------------------------

Now, it is time to test our installation so far. So, we have Astakos and
Pithos+ installed, we have a working Ganeti installation, the snf-image
definition installed on all VM-capable nodes and a Debian Squeeze Image on
Pithos+. Make sure you also have the `metadata file
<https://pithos.okeanos.grnet.gr/public/gwqcv>`_ for this image.

Run on the :ref:`GANETI-MASTER's <GANETI_NODES>` (node1) command line:

.. code-block:: console

   # gnt-instance add -o snf-image+default --os-parameters \
                      img_passwd=my_vm_example_passw0rd,img_format=diskdump,img_id="pithos://UUID/pithos/debian_base-6.0-7-x86_64.diskdump",img_properties='{"OSFAMILY":"linux"\,"ROOT_PARTITION":"1"}' \
                      -t plain --disk 0:size=2G --no-name-check --no-ip-check \
                      testvm1

In the above command:

 * ``img_passwd``: the arbitrary root password of your new instance
 * ``img_format``: set to ``diskdump`` to reflect the type of the uploaded Image
 * ``img_id``: If you want to deploy an Image stored on Pithos+ (our case), this
               should have the format ``pithos://<UUID>/<container>/<filename>``:
               * ``UUID``: the user's unique identifier on Astakos (newer versions
                 require the UUID here instead of the username, e.g.
                 ``user@example.com``, defined during Astakos sign up)
               * ``container``: ``pithos`` (default, if the Web UI was used)
               * ``filename``: the name of file (visible also from the Web UI)
 * ``img_properties``: taken from the metadata file. Used only the two mandatory
                       properties ``OSFAMILY`` and ``ROOT_PARTITION``. `Learn more
                       <https://code.grnet.gr/projects/snf-image/wiki/Image_Format#Image-Properties>`_

If the ``gnt-instance add`` command returns successfully, then run:

.. code-block:: console

   # gnt-instance info testvm1 | grep "console connection"

to find out where to connect using VNC. If you can connect successfully and can
login to your new instance using the root password ``my_vm_example_passw0rd``,
then everything works as expected and you have your new Debian Base VM up and
running.

If ``gnt-instance add`` fails, make sure that snf-image is correctly configured
to access the Pithos+ database and the Pithos+ backend data (newer versions
require UUID instead of a username). Another issue you may encounter is that in
relatively slow setups, you may need to raise the default ``HELPER_*_TIMEOUTS``
in ``/etc/default/snf-image``. Also, make sure you gave the correct ``img_id``
and ``img_properties``. If ``gnt-instance add`` succeeds but you cannot connect,
again find out what went wrong. Do *NOT* proceed to the next steps unless you
are sure everything works till this point.

If everything works, you have successfully connected Ganeti with Pithos+. Let's
move on to networking now.

.. warning::

    You can bypass the networking sections and go straight to
    :ref:`Cyclades Ganeti tools <cyclades-gtools>`, if you do not want to set up
    the Cyclades Network Service, but only the Cyclades Compute Service
    (recommended for now).

Networking Setup Overview
-------------------------

This part is deployment-specific and must be customized based on the specific
needs of the system administrator. However, to do so, the administrator needs
to understand how each level handles Virtual Networks, to be able to set up the
backend appropriately before installing Cyclades. Please read the
:ref:`Network <networks>` section before proceeding.

Since synnefo 0.11, all network actions are managed with the snf-manage
network-* commands. This needs the underlying setup (Ganeti, nfdhcpd,
snf-network, bridges, vlans) to be already configured correctly. The only
actions needed at this point are:

a) Have Ganeti with IP pool management support installed.

b) Install :ref:`snf-network <snf-network>`, which provides a synnefo specific kvm-ifup script, etc.

c) Install :ref:`nfdhcpd <nfdhcpd>`, which serves DHCP requests of the VMs.

In order to test that everything is set up correctly before installing Cyclades,
we will perform some testing actions in this section; the actual setup will be
done afterwards with snf-manage commands.

.. _snf-network:

snf-network
~~~~~~~~~~~

snf-network includes the `kvm-vif-bridge` script, which is invoked every time
a tap (a VM's NIC) is created. Based on environment variables passed by
Ganeti, it issues various commands depending on the network type the NIC is
connected to, and sets up a corresponding dhcp lease.

Install snf-network on all Ganeti nodes:

.. code-block:: console

   # apt-get install snf-network

Then, in :file:`/etc/default/snf-network` set:

.. code-block:: console

   MAC_MASK=ff:ff:f0:00:00:00

.. _nfdhcpd:

nfdhcpd
~~~~~~~

Each NIC's IP is chosen by Ganeti (with IP pool management support). The
`kvm-vif-bridge` script sets up dhcp leases, and when the VM boots and
makes a dhcp request, iptables will mangle the packet and `nfdhcpd` will
create a dhcp response. To install, run:

.. code-block:: console

   # apt-get install nfqueue-bindings-python=0.3+physindev-1
   # apt-get install nfdhcpd

Edit ``/etc/nfdhcpd/nfdhcpd.conf`` to reflect your network configuration. At
least, set the ``dhcp_queue`` variable to ``42`` and the ``nameservers``
variable to your DNS IP/s. Those IPs will be passed as the DNS IP/s of your new
VMs. Once you are finished, restart the server on all nodes:

.. code-block:: console

   # /etc/init.d/nfdhcpd restart

If you are using ``ferm``, then you need to run the following:

.. code-block:: console

   # echo "@include 'nfdhcpd.ferm';" >> /etc/ferm/ferm.conf
   # /etc/init.d/ferm restart

or make sure to run after boot:

.. code-block:: console

   # iptables -t mangle -A PREROUTING -p udp -m udp --dport 67 -j NFQUEUE --queue-num 42

and if you have IPv6 enabled:

.. code-block:: console

   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 133 -j NFQUEUE --queue-num 43
   # ip6tables -t mangle -A PREROUTING -p ipv6-icmp -m icmp6 --icmpv6-type 135 -j NFQUEUE --queue-num 44

You can check which clients are currently served by nfdhcpd by running:

.. code-block:: console

   # kill -SIGUSR1 `cat /var/run/nfdhcpd/nfdhcpd.pid`

When you run the above, then check ``/var/log/nfdhcpd/nfdhcpd.log``.

Public Network Setup
--------------------

To achieve basic networking, the simplest way is to have a common bridge (e.g.
``br0``, on the same collision domain with the router) to which all VMs will
connect. Packets will be "forwarded" to the router and then to the Internet.
If you want a more advanced setup (IP-less routing and proxy-arp), please refer
to the :ref:`Network <networks>` section.

Physical Host Setup
~~~~~~~~~~~~~~~~~~~

Assuming ``eth0`` on both hosts is the public interface (directly connected
to the router), run on every node:

.. code-block:: console

   # apt-get install vlan