.. _quick-install-admin-guide:

Administrator's Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimum configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

    * Identity Management (Astakos)
    * Object Storage Service (Pithos)
    * Compute Service (Cyclades)
    * Image Service (part of Cyclades)
    * Network Service (part of Cyclades)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you only want to install the Object Storage Service (Pithos), follow the
guide and simply stop after the "Testing of Pithos" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the above list. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades). Furthermore, we
will install all services on the first physical node, except Pithos, which will
be installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and that their public IPs are
"4.3.2.1" and "4.3.2.2" respectively. It is important that the two machines
are under the same domain name. In case you choose to follow a private
installation, you will need to set up a private DNS server, using dnsmasq for
example. See the node1 section below for more details.

General Prerequisites
=====================

These are the general synnefo prerequisites that you need on node1 and node2,
and they are related to all the services (Astakos, Pithos, Cyclades).

To be able to download all synnefo components you need to add the following
lines in your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze/``
| ``deb-src http://apt.dev.grnet.gr squeeze/``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:

| ``deb http://backports.debian.org/debian-backports squeeze-backports main``

You also need a shared directory visible by both nodes. Pithos will save all
data inside this directory. By 'all data', we mean files, images, and pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. During this guide, we will
assume that node1 acts as an NFS server and serves the directory ``/srv/pithos``
to node2 (be sure to set the ``no_root_squash`` flag). Node2 has this directory
mounted under ``/srv/pithos``, too.
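
If you have not set up the NFS share yet, the following is a minimal sketch of
one way to do it, assuming the ``nfs-kernel-server`` package is installed on
node1 and ``nfs-common`` on node2 (the export options shown are examples;
adjust them to your network):

.. code-block:: console

   root@node1:~ # echo "/srv/pithos 4.3.2.2(rw,sync,no_root_squash,no_subtree_check)" >> /etc/exports
   root@node1:~ # exportfs -ra
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos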

Before starting the synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described in the
corresponding service's section.

Finally, it is required for Cyclades and Ganeti nodes to have synchronized
system clocks (e.g. by running ntpd).

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * public certificate
    * gunicorn (WSGI http server)
    * postgresql (database)
    * rabbitmq (message queue)
    * ntp (NTP daemon)
    * gevent
    * dns server

You can install apache2, postgresql and ntp by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

To install RabbitMQ>=2.8.4, use the RabbitMQ APT repository by adding the
following line to ``/etc/apt/sources.list``:

.. code-block:: console

    deb http://www.rabbitmq.com/debian testing main

Add the RabbitMQ public key to the trusted key list:

.. code-block:: console

  # wget http://www.rabbitmq.com/rabbitmq-signing-key-public.asc
  # apt-key add rabbitmq-signing-key-public.asc

Finally, to install the package run:

.. code-block:: console

  # apt-get update
  # apt-get install rabbitmq-server

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps`` that will host all django
apps related tables. We also create the user ``synnefo`` and grant it all
privileges on the database. We do this by running:

.. code-block:: console

    root@node1:~ # su - postgres
    postgres@node1:~ $ psql
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos``, needed by the Pithos backend, and
grant the ``synnefo`` user all privileges on it. This database could be
created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

    listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

    host    all    all    4.3.2.1/32    md5
    host    all    all    4.3.2.2/32    md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart
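
To verify that the database is now reachable over the network, you can try to
connect from node2 using the ``psql`` client (a quick sanity check; you will be
prompted for the password set above):

.. code-block:: console

   root@node2:~ # psql -h 4.3.2.1 -U synnefo -d snf_apps -c "SELECT 1;"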

Gunicorn setup
~~~~~~~~~~~~~~

Rename the file ``/etc/gunicorn.d/synnefo.example`` to
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file:

.. code-block:: console

    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Certificate Creation
~~~~~~~~~~~~~~~~~~~~~

Node1 will host Cyclades. Cyclades should communicate with the other snf tools
over a trusted connection. In order for the connection to be trusted, the keys
provided to apache below should be signed with a certificate. This certificate
should be added to all nodes. In case you don't have signed keys, you can
create a self-signed certificate and sign your keys with it. To do so, on
node1 run:

.. code-block:: console

		# aptitude install openvpn
		# mkdir /etc/openvpn/easy-rsa
		# cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa
		# cd /etc/openvpn/easy-rsa/2.0
		# vim vars

In ``vars`` you can set your own parameters, such as ``KEY_COUNTRY``.

.. code-block:: console

	# . ./vars
	# ./clean-all

Now you can create the certificate:

.. code-block:: console

        # ./build-ca

The previous command will create a ``ca.crt`` file. Copy this file to the
``/usr/local/share/ca-certificates/`` directory and run:

.. code-block:: console

		# update-ca-certificates

to update the records. You will have to do the same on node2 as well, as
described later on.

Now you can create the keys and sign them with the certificate:

.. code-block:: console

        # ./build-key-server node1.example.com

This will create a ``.pem`` and a ``.key`` file in your current folder. Copy
these into ``/etc/ssl/certs/`` and ``/etc/ssl/private/`` respectively, and
use them in the apache2 configuration file below instead of the defaults.
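
Optionally, you can check that a generated certificate is indeed signed by your
new CA with openssl. The paths below are assumptions, based on easy-rsa's
default ``keys`` output directory and the common name used above:

.. code-block:: console

        # openssl verify -CAfile keys/ca.crt keys/node1.example.com.crt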

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node1.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node1.example.com

        Alias /static "/usr/share/synnefo/static"

        #  SetEnv no-gzip
        #  SetEnv dont-vary

        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. note:: This isn't really needed, but it's a good security practice to disable
    directory listing in apache::

        # a2dismod autoindex


.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically,
during the Cyclades setup.
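
You can quickly check that the user and its permissions are in place (the
exact output format may differ between RabbitMQ versions):

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_user_permissions synnefo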

Pithos data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible by both nodes. We create and setup the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data

DNS server setup
~~~~~~~~~~~~~~~~

If your machines are not under the same domain name, you have to set up a DNS
server. In order to set up a DNS server using dnsmasq, do the following:

.. code-block:: console

   # apt-get install dnsmasq

Then edit your ``/etc/hosts`` file as follows:

.. code-block:: console

		4.3.2.1     node1.example.com
		4.3.2.2     node2.example.com

Finally, edit the ``/etc/dnsmasq.conf`` file and specify the
``listen-address`` and the ``interface`` you would like to listen on.
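
For example, a minimal ``/etc/dnsmasq.conf`` could contain the following lines
(``eth0`` is an assumption; use node1's actual interface):

.. code-block:: console

    listen-address=4.3.2.1
    interface=eth0

After editing the file, restart the service:

.. code-block:: console

    # /etc/init.d/dnsmasq restart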

Also add the following line in your ``/etc/resolv.conf`` file:

.. code-block:: console

		nameserver 4.3.2.1

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * ntp (NTP daemon)
    * gevent
    * certificates
    * dns setup

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get
familiar with the software you may choose to run different databases on
different nodes, for performance/scalability/redundancy reasons, but such
setups are beyond the scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Rename the file ``/etc/gunicorn.d/synnefo.example`` to
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file
(as was done on node1):

.. code-block:: console

    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo

.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node2.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node2.example.com

        Alias /static "/usr/share/synnefo/static"

        SetEnv no-gzip
        SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. note:: This isn't really needed, but it's a good security practice to disable
    directory listing in apache::

        # a2dismod autoindex

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop


Acquire certificate
~~~~~~~~~~~~~~~~~~~

Copy the certificate you created earlier on node1 (``ca.crt``) into the
directory ``/usr/local/share/ca-certificates/`` and run:

.. code-block:: console

		# update-ca-certificates

to update the records.


DNS Setup
~~~~~~~~~

Add the following line in the ``/etc/resolv.conf`` file:

.. code-block:: console

		nameserver 4.3.2.1

to inform the node about the new DNS server.
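
You can verify that name resolution now works through the new server (assuming
dnsmasq is already running on node1):

.. code-block:: console

    # getent hosts node1.example.com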

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.

Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure you have
made the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app snf-pithos-backend

.. _conf-astakos:

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the software
as you wish, to fit your needs. Many options are available, to empower the
administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'django.db.backends.postgresql_psycopg2',

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django specific setting which is used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
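
If you need a way to generate such a random string, the following one-liner is
one option (a sketch; it assumes a Python 3 interpreter is available as
``python3``; on Squeeze the interpreter may be called ``python`` instead):

.. code-block:: console

    # python3 -c 'import random, string; sr = random.SystemRandom(); print("".join(sr.choice(string.ascii_letters + string.digits) for _ in range(50)))'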

For astakos specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

    ASTAKOS_COOKIE_DOMAIN = '.example.com'

    ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'

The ``ASTAKOS_COOKIE_DOMAIN`` should be the base url of our domain (for all
services). ``ASTAKOS_BASE_URL`` is the astakos top-level URL. Appending an
extra path (``/astakos`` here) is recommended in order to distinguish
components, if more than one are installed on the same machine.

.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
    If you would like to enable it, you have to edit the following options:

    .. code-block:: console

        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_USE_SSL = True
        ASTAKOS_RECAPTCHA_ENABLED = True

    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
    go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf``:

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'

    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1, which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth Authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

.. _email-configuration:

Email delivery configuration
----------------------------

Many of the ``astakos`` operations require the server to notify service users
and administrators via email. For example, right after the signup process the
service sends an email to the registered email address containing an email
verification url. After the user verifies the email address, astakos once
again needs to notify administrators with a notice that a new account has just
been verified.

More specifically, astakos sends emails in the following cases:

- An email containing a verification link after each signup process.
- An email to the people listed in the ``ADMINS`` setting after each email
  verification, if the ``ASTAKOS_MODERATION`` setting is ``True``. The email
  notifies administrators that an additional action is required in order to
  activate the user.
- A welcome email to the user email and an admin notification to ``ADMINS``
  right after each account activation.
- Feedback messages submitted from the astakos contact view and the astakos
  feedback API endpoint are sent to contacts listed in the ``HELPDESK``
  setting.
- Project application request notifications to people included in the
  ``HELPDESK`` and ``MANAGERS`` settings.
- Notifications after each project member action (join request, membership
  accepted/declined etc.) to project members or project owners.

Astakos uses the Django internal email delivering mechanism to send email
notifications. A simple configuration, using an external smtp server to
deliver messages, is shown below. Alter the following example to meet your
smtp server characteristics. Notice that the smtp server is needed for a
proper installation.

.. code-block:: python

    # /etc/synnefo/00-snf-common-admins.conf
    EMAIL_HOST = "mysmtp.server.synnefo.org"
    EMAIL_HOST_USER = "<smtpuser>"
    EMAIL_HOST_PASSWORD = "<smtppassword>"

    # this gets appended in all email subjects
    EMAIL_SUBJECT_PREFIX = "[example.synnefo.org] "

    # Address to use for outgoing emails
    DEFAULT_FROM_EMAIL = "server@example.synnefo.org"

    # Email where users can contact for support. This is used in html/email
    # templates.
    CONTACT_EMAIL = "server@example.synnefo.org"

    # The email address that error messages come from
    SERVER_EMAIL = "server-errors@example.synnefo.org"

Notice that since email settings might be required by applications other than
astakos, they are defined in a different configuration file than the one
previously used to set astakos specific settings.

Refer to
`Django documentation <https://docs.djangoproject.com/en/1.4/topics/email/>`_
for additional information on available email settings.

As mentioned in the previous section, the recipients list differs based on the
operation that triggers an email notification. Specifically, for emails whose
recipients include contacts from your service team (administrators, managers,
helpdesk etc.) synnefo provides the following settings, located in
``10-snf-common-admins.conf``:

.. code-block:: python

    ADMINS = (('Admin name', 'admin@example.synnefo.org'),
              ('Admin2 name', 'admin2@example.synnefo.org'))
    MANAGERS = (('Manager name', 'manager@example.synnefo.org'),)
    HELPDESK = (('Helpdesk user name', 'helpdesk@example.synnefo.org'),)

Alternatively, it may be convenient to send e-mails to a file instead of an
actual smtp server, using the file backend. Do so by creating a configuration
file ``/etc/synnefo/99-local.conf`` including the following:

.. code-block:: python

    EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
    EMAIL_FILE_PATH = '/tmp/app-messages'


Enable Pooling
--------------

This section can be bypassed, but we strongly recommend you apply the following,
since they result in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.

To use, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

Since we are running with greenlets, we should modify psycopg2 behavior, so it
works properly in a greenlet context:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.

All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

    # Monkey-patch psycopg2
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

    # If running with greenlets
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
         'OPTIONS': {'synnefo_poolsize': 8},

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

    # snf-manage syncdb

In this example we don't need to create a django superuser, so we answer
``[no]`` to the question. After a successful sync, we run the migrations
needed for astakos:

.. code-block:: console

    # snf-manage migrate im
    # snf-manage migrate quotaholder_app

Then, we load the pre-defined user groups:

.. code-block:: console

    # snf-manage loaddata groups

.. _services-reg:

Services Registration
---------------------

When the database is ready, we need to register the services. The following
command will ask you to register the standard Synnefo components (astakos,
cyclades, and pithos) along with the services they provide. Note that you
have to register at least astakos in order to have a usable authentication
system. For each component, you will be asked to provide two URLs: its base
URL and its UI URL.

The former is the location where the component resides; it should equal
the ``<component_name>_BASE_URL`` as specified in the respective component
settings. For example, the base URL for astakos would be
``https://node1.example.com/astakos``.

The latter is the URL that appears in the Cloudbar and leads to the
component UI. If you want to follow the default setup, set
the UI URL to ``<base_url>/ui/`` where ``base_url`` the component's base
URL as explained before. (You can later change the UI URL with
``snf-manage component-modify <component_name> --url new_ui_url``.)
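
The default relationship between the two URLs can be expressed as a one-line helper (illustrative only; the function name is ours, not part of Synnefo):

.. code-block:: python

    def default_ui_url(base_url):
        # The default UI URL is the component's base URL with "/ui/" appended.
        return base_url.rstrip('/') + '/ui/'

    print(default_ui_url('https://node1.example.com/astakos'))
    # -> https://node1.example.com/astakos/ui/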

The command will also register automatically the resource definitions
offered by the services.

.. code-block:: console

    # snf-component-register

.. note::

   This command is equivalent to running the following series of commands;
   it registers the three components in astakos and then in each host it
   exports the respective service definitions, copies the exported json file
   to the astakos host, where it finally imports it:

    .. code-block:: console

       astakos-host$ snf-manage component-add astakos --base-url astakos_base_url --ui-url astakos_ui_url
       astakos-host$ snf-manage component-add cyclades --base-url cyclades_base_url --ui-url cyclades_ui_url
       astakos-host$ snf-manage component-add pithos --base-url pithos_base_url --ui-url pithos_ui_url
       astakos-host$ snf-manage service-export-astakos > astakos.json
       astakos-host$ snf-manage service-import --json astakos.json
       cyclades-host$ snf-manage service-export-cyclades > cyclades.json
       # copy the file to astakos-host
       astakos-host$ snf-manage service-import --json cyclades.json
       pithos-host$ snf-manage service-export-pithos > pithos.json
       # copy the file to astakos-host
       astakos-host$ snf-manage service-import --json pithos.json

Notice that in this installation astakos and cyclades run on node1, while
pithos runs on node2.

Setting Default Base Quota for Resources
----------------------------------------

We now have to specify the limit on resources that each user can employ
(exempting resources offered by projects).

.. code-block:: console

    # snf-manage resource-modify --default-quota-interactive


Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

    root@node1:~ # /etc/init.d/gunicorn restart
    root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/astakos``

If this redirects you to ``https://node1.example.com/astakos/ui/`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill all your data at the sign up form. Then click "SUBMIT". You should now
see a green box on the top, which informs you that you made a successful request
and the request has been sent to the administrators. So far so good, let's
assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

    root@node1:~ # snf-manage user-list

This command should show you a list with only one user; the one we just created.
This user should have an id of ``1`` and the flags "active" and
"verified" set to False. Now run:

.. code-block:: console

    root@node1:~ # snf-manage user-modify 1 --verify --accept

This verifies the user email and activates the user.
When running in production, the activation is done automatically, using one of
the different types of moderation that Astakos supports. You can see the
moderation methods (by invitation, whitelists, matching regexp, etc.) in the
Astakos-specific documentation. In production, you can also manually activate
a user, by sending him/her an activation email. See how to do this at the
:ref:`User activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/astakos/ui/`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos now.


Installation of Pithos on node2
===============================

To install Pithos, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app snf-pithos-backend

Now, install the pithos web interface:

.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for Pithos and will be accessible by clicking "pithos" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.


.. _conf-pithos:

Configuration of Pithos
=======================

Conf Files
----------

After Pithos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did in node1
after installation of astakos. Here, you will not have to change anything that
has to do with snf-common or snf-webproject. Everything is set at node1. You
only need to change settings that have to do with Pithos. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:

.. code-block:: console

   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'

   PITHOS_BASE_URL = 'https://node2.example.com/pithos'
   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w'


The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the Pithos app where to
find the Pithos backend database. Above we tell Pithos that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.
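
The connection string follows the standard SQLAlchemy URL format, so its parts can be inspected with Python's standard library if you want to double-check them (a quick sanity check, not a required installation step):

.. code-block:: python

    from urllib.parse import urlsplit

    conn = ('postgresql://synnefo:example_passw0rd'
            '@node1.example.com:5432/snf_pithos')
    parts = urlsplit(conn)
    # username/password come before the "@", host/port after it,
    # and the database name is the path component.
    print(parts.username, parts.hostname, parts.port, parts.path.lstrip('/'))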

The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the Pithos app where to find
the Pithos backend data. Above we tell Pithos to store its data under
``/srv/pithos/data``, which is visible by both nodes. We have already set up this
directory at node1's "Pithos data directory setup" section.

The ``ASTAKOS_AUTH_URL`` option informs the Pithos app where Astakos is.
The Astakos service is used for user management (authentication, quotas, etc.).

The ``PITHOS_BASE_URL`` setting must point to the top-level Pithos URL.

The ``PITHOS_SERVICE_TOKEN`` is the token used for authentication with astakos.
It can be retrieved by running on the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage component-list

The token has been generated automatically during the :ref:`Pithos service
registration <services-reg>`.

The ``PITHOS_UPDATE_MD5`` option by default disables the computation of
object checksums. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.
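
The checksum in question is a plain MD5 digest of the object's contents, as in the following sketch (illustrative only, not Pithos code):

.. code-block:: python

    import hashlib

    # When checksum computation is enabled, a digest like this is
    # computed for every uploaded object, at some CPU cost.
    data = b"example object contents"
    checksum = hashlib.md5(data).hexdigest()
    print(checksum)  # 32 hex characters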

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
Pithos web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
Pithos web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

Pooling and Greenlets
---------------------

Pithos is pooling-ready without the need for further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos and pithos
backend objects for access to the Pithos DB.
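
Conceptually, the backend-object pool works like any object pool: callers borrow an object, use it, and return it instead of constructing a new one per request. A generic sketch (not Synnefo's actual implementation):

.. code-block:: python

    import queue

    class Pool:
        """Minimal object pool: pre-creates objects and hands them out."""

        def __init__(self, factory, size):
            self._q = queue.Queue()
            for _ in range(size):
                self._q.put(factory())

        def acquire(self):
            # Blocks if all pooled objects are currently in use.
            return self._q.get()

        def release(self, obj):
            self._q.put(obj)

    pool = Pool(factory=lambda: object(), size=2)
    backend = pool.acquire()
    # ... use the backend object to serve a request ...
    pool.release(backend)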

However, as in Astakos, since we are running with Greenlets, it is also
recommended to modify psycopg2 behavior so it works properly in a greenlet
context. This means adding the following lines at the top of your
``/etc/synnefo/10-snf-webproject-database.conf`` file:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
mentioned above, depending on your setup) argument on your
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
like this:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--workers=4',
       '--worker-class=gevent',
       '--log-level=debug',
       '--timeout=43200'
     ),
    }

Stamp Database Revision
-----------------------

Pithos uses the alembic_ database migrations tool.

.. _alembic: http://alembic.readthedocs.org

After a successful installation, we should stamp it at the most recent
revision, so that future migrations know where to start upgrading in
the migration history.

.. code-block:: console

    root@node2:~ # pithos-migrate stamp head

Servers Initialization
----------------------

After configuration is done, we initialize the servers on node2:

.. code-block:: console

    root@node2:~ # /etc/init.d/gunicorn restart
    root@node2:~ # /etc/init.d/apache2 restart

You have now finished the Pithos setup. Let's test it now.


Testing of Pithos
=================

Open your browser and go to the Astakos homepage:

``http://node1.example.com/astakos``

Log in, and you will see your profile page. Now, click the "pithos" link on the
top black cloudbar. If everything was set up correctly, this will redirect you
to: