.. _quick-install-admin-guide:

Administrator's Installation Guide
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This is the Administrator's installation guide.

It describes how to install the whole synnefo stack on two (2) physical nodes,
with minimal configuration. It installs synnefo from Debian packages, and
assumes the nodes run Debian Squeeze. After successful installation, you will
have the following services running:

    * Identity Management (Astakos)
    * Object Storage Service (Pithos)
    * Compute Service (Cyclades)
    * Image Service (part of Cyclades)
    * Network Service (part of Cyclades)

and a single unified Web UI to manage them all.

The Volume Storage Service (Archipelago) and the Billing Service (Aquarium) are
not released yet.

If you just want to install the Object Storage Service (Pithos), follow the
guide and just stop after the "Testing of Pithos" section.


Installation of Synnefo / Introduction
======================================

We will install the services in the order of the above list. The last three
services will be installed in a single step (at the end), because at the moment
they are contained in the same software component (Cyclades). Furthermore, we
will install all services on the first physical node, except Pithos, which will
be installed on the second, due to a conflict between the snf-pithos-app and
snf-cyclades-app components (scheduled to be fixed in the next version).

For the rest of the documentation we will refer to the first physical node as
"node1" and the second as "node2". We will also assume that their domain names
are "node1.example.com" and "node2.example.com" and that their public IPs are
"4.3.2.1" and "4.3.2.2" respectively. It is important that the two machines are
under the same domain name. In case you choose to follow a private
installation, you will need to set up a private DNS server, using dnsmasq for
example. See the node1 section below for more details.

General Prerequisites
=====================

These are the general synnefo prerequisites that you need on both node1 and
node2, and that are related to all the services (Astakos, Pithos, Cyclades).

To be able to download all synnefo components you need to add the following
lines in your ``/etc/apt/sources.list`` file:

| ``deb http://apt.dev.grnet.gr squeeze/``
| ``deb-src http://apt.dev.grnet.gr squeeze/``

and import the repo's GPG key:

| ``curl https://dev.grnet.gr/files/apt-grnetdev.pub | apt-key add -``

Also add the following line to enable the ``squeeze-backports`` repository,
which may provide more recent versions of certain packages. The repository
is deactivated by default and must be specified explicitly in ``apt-get``
operations:

| ``deb http://backports.debian.org/debian-backports squeeze-backports main``

You also need a shared directory visible by both nodes. Pithos will save all
data inside this directory. By "all data", we mean files, images, and Pithos
specific mapping data. If you plan to upload more than one basic image, this
directory should have at least 50GB of free space. During this guide, we will
assume that node1 acts as an NFS server and serves the directory
``/srv/pithos`` to node2 (be sure to set the ``no_root_squash`` flag). Node2
has this directory mounted under ``/srv/pithos``, too.
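
As a sketch, the NFS export on node1 and the corresponding mount on node2
could look like the following (the export options besides ``no_root_squash``
are an assumption; adjust them to your environment):

.. code-block:: console

   root@node1:~ # echo '/srv/pithos node2.example.com(rw,sync,no_root_squash)' >> /etc/exports
   root@node1:~ # exportfs -ra
   root@node2:~ # mount -t nfs node1.example.com:/srv/pithos /srv/pithos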

Before starting the synnefo installation, you will need basic third party
software to be installed and configured on the physical nodes. We will describe
each node's general prerequisites separately. Any additional configuration,
specific to a synnefo service for each node, will be described in the service's
section.

Finally, it is required for Cyclades and Ganeti nodes to have synchronized
system clocks (e.g. by running ntpd).
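
You can check that ntpd is actually synchronizing against its peers, for
example with:

.. code-block:: console

   # ntpq -p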

Node1
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * public certificate
    * gunicorn (WSGI http server)
    * postgresql (database)
    * rabbitmq (message queue)
    * ntp (NTP daemon)
    * gevent
    * dns server

You can install apache2, postgresql, ntp and rabbitmq by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp rabbitmq-server

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

On node1, we will create our databases, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

On node1, we create a database called ``snf_apps`` that will host all tables
related to the Django apps. We also create the user ``synnefo`` and grant it
all privileges on the database. We do this by running:

.. code-block:: console

    root@node1:~ # su - postgres
    postgres@node1:~ $ psql
    postgres=# CREATE DATABASE snf_apps WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# CREATE USER synnefo WITH PASSWORD 'example_passw0rd';
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_apps TO synnefo;

We also create the database ``snf_pithos``, needed by the Pithos backend, and
grant the ``synnefo`` user all privileges on it. This database could be
created on node2 instead, but we do it on node1 for simplicity. We will
create all needed databases on node1 and then node2 will connect to them.

.. code-block:: console

    postgres=# CREATE DATABASE snf_pithos WITH ENCODING 'UTF8' LC_COLLATE='C' LC_CTYPE='C' TEMPLATE=template0;
    postgres=# GRANT ALL PRIVILEGES ON DATABASE snf_pithos TO synnefo;

Configure the database to listen on all network interfaces. You can do this by
editing the file ``/etc/postgresql/8.4/main/postgresql.conf`` and changing
``listen_addresses`` to ``'*'`` :

.. code-block:: console

    listen_addresses = '*'

Furthermore, edit ``/etc/postgresql/8.4/main/pg_hba.conf`` to allow node1 and
node2 to connect to the database. Add the following lines under ``#IPv4 local
connections:`` :

.. code-block:: console

    host		all	all	4.3.2.1/32	md5
    host		all	all	4.3.2.2/32	md5

Make sure to substitute "4.3.2.1" and "4.3.2.2" with node1's and node2's
actual IPs. Now, restart the server to apply the changes:

.. code-block:: console

   # /etc/init.d/postgresql restart
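
After the restart, you can optionally check that the ``synnefo`` user can
reach the database over the network (assuming the postgresql client is
installed on node2; you will be prompted for the password):

.. code-block:: console

   root@node2:~ # psql -h 4.3.2.1 -U synnefo -d snf_apps -c "SELECT 1;"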

Gunicorn setup
~~~~~~~~~~~~~~

Rename the file ``/etc/gunicorn.d/synnefo.example`` to
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file:

.. code-block:: console

    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo


.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Certificate Creation
~~~~~~~~~~~~~~~~~~~~~

Node1 will host Cyclades. Cyclades should communicate with the other snf tools
over a trusted connection. In order for the connection to be trusted, the keys
provided to apache below should be signed with a certificate. This certificate
should be added to all nodes. In case you don't have signed keys, you can
create a self-signed certificate and sign your keys with it. To do so, on
node1 run:

.. code-block:: console

		# aptitude install openvpn
		# mkdir /etc/openvpn/easy-rsa
		# cp -ai /usr/share/doc/openvpn/examples/easy-rsa/2.0/ /etc/openvpn/easy-rsa
		# cd /etc/openvpn/easy-rsa/2.0
		# vim vars

In ``vars`` you can set your own parameters, such as ``KEY_COUNTRY``.

.. code-block:: console

	# . ./vars
	# ./clean-all

Now you can create the certificate:

.. code-block:: console

		# ./build-ca

The previous command will create a ``ca.crt`` file. Copy this file under the
``/usr/local/share/ca-certificates/`` directory and run:

.. code-block:: console

		# update-ca-certificates

to update the records. You will have to do the following on node2 as well.

Now you can create the keys and sign them with the certificate:

.. code-block:: console

		# ./build-key-server node1.example.com

This will create a ``.pem`` and a ``.key`` file in your current folder. Copy
these to ``/etc/ssl/certs/`` and ``/etc/ssl/private/`` respectively, and
use them in the apache2 configuration file below instead of the defaults.
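
Assuming the generated files are named after the host (a hypothetical naming;
use whatever file names the previous step actually produced), the SSL
directives in the apache2 configuration below would then become:

.. code-block:: console

        SSLCertificateFile    /etc/ssl/certs/node1.example.com.pem
        SSLCertificateKeyFile /etc/ssl/private/node1.example.com.key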

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node1.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``/etc/apache2/sites-available/synnefo-ssl`` containing the
following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node1.example.com

        Alias /static "/usr/share/synnefo/static"

        #  SetEnv no-gzip
        #  SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

Now enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. note:: This isn't really needed, but it's a good security practice to disable
    directory listing in apache::

        # a2dismod autoindex


.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop

.. _rabbitmq-setup:

Message Queue setup
~~~~~~~~~~~~~~~~~~~

The message queue will run on node1, so we need to create the appropriate
rabbitmq user. The user is named ``synnefo`` and gets full privileges on all
exchanges:

.. code-block:: console

   # rabbitmqctl add_user synnefo "example_rabbitmq_passw0rd"
   # rabbitmqctl set_permissions synnefo ".*" ".*" ".*"

We do not need to initialize the exchanges. This will be done automatically,
during the Cyclades setup.
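
You can verify the user and its permissions with the following commands (the
exact output format varies between rabbitmq versions):

.. code-block:: console

   # rabbitmqctl list_users
   # rabbitmqctl list_permissions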

Pithos data directory setup
~~~~~~~~~~~~~~~~~~~~~~~~~~~

As mentioned in the General Prerequisites section, there is a directory called
``/srv/pithos`` visible by both nodes. We create and set up the ``data``
directory inside it:

.. code-block:: console

   # cd /srv/pithos
   # mkdir data
   # chown www-data:www-data data
   # chmod g+ws data
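
You can verify the result with ``ls``; the directory should be owned by
``www-data`` and be group-writable and setgid (``drwxrwsr-x``):

.. code-block:: console

   # ls -ld /srv/pithos/data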

DNS server setup
~~~~~~~~~~~~~~~~

If your machines are not under the same domain name, you have to set up a DNS
server. In order to set up a DNS server using dnsmasq, do the following:

.. code-block:: console

				# apt-get install dnsmasq

Then edit your ``/etc/hosts`` file as follows:

.. code-block:: console

		4.3.2.1     node1.example.com
		4.3.2.2     node2.example.com

Finally edit the ``/etc/dnsmasq.conf`` file and specify the ``listen-address`` and
the ``interface`` you would like to listen to.
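
For example, a minimal ``/etc/dnsmasq.conf`` could contain the following
(``eth0`` is an assumption; use the interface that actually faces your nodes):

.. code-block:: console

    listen-address=4.3.2.1
    interface=eth0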

Also add the following to your ``/etc/resolv.conf`` file:

.. code-block:: console

		nameserver 4.3.2.1

You are now ready with all general prerequisites concerning node1. Let's go to
node2.

Node2
-----

General Synnefo dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    * apache (http server)
    * gunicorn (WSGI http server)
    * postgresql (database)
    * ntp (NTP daemon)
    * gevent
    * certificates
    * dns setup

You can install the above by running:

.. code-block:: console

   # apt-get install apache2 postgresql ntp

Make sure to install gunicorn >= v0.12.2. You can do this by installing from
the official debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install gunicorn

Also, make sure to install gevent >= 0.13.6. Again from the debian backports:

.. code-block:: console

   # apt-get -t squeeze-backports install python-gevent

Node2 will connect to the databases on node1, so you will also need the
python-psycopg2 package:

.. code-block:: console

   # apt-get install python-psycopg2

Database setup
~~~~~~~~~~~~~~

All databases have been created and set up on node1, so we do not need to take
any action here. From node2, we will just connect to them. When you get
familiar with the software, you may choose to run different databases on
different nodes, for performance/scalability/redundancy reasons, but such
setups are beyond the scope of this guide.

Gunicorn setup
~~~~~~~~~~~~~~

Rename the file ``/etc/gunicorn.d/synnefo.example`` to
``/etc/gunicorn.d/synnefo``, to make it a valid gunicorn configuration file
(as we did on node1):

.. code-block:: console

    # mv /etc/gunicorn.d/synnefo.example /etc/gunicorn.d/synnefo


.. warning:: Do NOT start the server yet, because it won't find the
    ``synnefo.settings`` module. Also, in case you are using ``/etc/hosts``
    instead of a DNS to get the hostnames, change ``--worker-class=gevent`` to
    ``--worker-class=sync``. We will start the server after successful
    installation of astakos. If the server is running::

       # /etc/init.d/gunicorn stop

Apache2 setup
~~~~~~~~~~~~~

Create the file ``/etc/apache2/sites-available/synnefo`` containing the
following:

.. code-block:: console

    <VirtualHost *:80>
        ServerName node2.example.com

        RewriteEngine On
        RewriteCond %{THE_REQUEST} ^.*(\\r|\\n|%0A|%0D).* [NC]
        RewriteRule ^(.*)$ - [F,L]
        RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI}
    </VirtualHost>

Create the file ``synnefo-ssl`` under ``/etc/apache2/sites-available/``
containing the following:

.. code-block:: console

    <IfModule mod_ssl.c>
    <VirtualHost _default_:443>
        ServerName node2.example.com

        Alias /static "/usr/share/synnefo/static"

        SetEnv no-gzip
        SetEnv dont-vary
        AllowEncodedSlashes On

        RequestHeader set X-Forwarded-Protocol "https"

        <Proxy * >
            Order allow,deny
            Allow from all
        </Proxy>

        SetEnv                proxy-sendchunked
        SSLProxyEngine        off
        ProxyErrorOverride    off

        ProxyPass        /static !
        ProxyPass        / http://localhost:8080/ retry=0
        ProxyPassReverse / http://localhost:8080/

        SSLEngine on
        SSLCertificateFile    /etc/ssl/certs/ssl-cert-snakeoil.pem
        SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
    </VirtualHost>
    </IfModule>

As in node1, enable sites and modules by running:

.. code-block:: console

   # a2enmod ssl
   # a2enmod rewrite
   # a2dissite default
   # a2ensite synnefo
   # a2ensite synnefo-ssl
   # a2enmod headers
   # a2enmod proxy_http

.. note:: This isn't really needed, but it's a good security practice to disable
    directory listing in apache::

        # a2dismod autoindex

.. warning:: Do NOT start/restart the server yet. If the server is running::

       # /etc/init.d/apache2 stop


Acquire certificate
~~~~~~~~~~~~~~~~~~~

Copy the certificate you created earlier on node1 (``ca.crt``) under the
``/usr/local/share/ca-certificates/`` directory and run:

.. code-block:: console

		# update-ca-certificates

to update the records.


DNS Setup
~~~~~~~~~

Add the following line to the ``/etc/resolv.conf`` file:

.. code-block:: console

		nameserver 4.3.2.1

to inform the node about the new dns server.

We are now ready with all general prerequisites for node2. Now that we have
finished with all general prerequisites for both nodes, we can start installing
the services. First, let's install Astakos on node1.

Installation of Astakos on node1
================================

To install astakos, grab the package from our repository (make sure  you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-astakos-app snf-pithos-backend

.. _conf-astakos:

Configuration of Astakos
========================

Conf Files
----------

After astakos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it. The files contain
commented configuration options, which are the default options. While installing
new snf-* components, new configuration files will appear inside the directory.
In this guide (and for all services), we will edit only the minimum necessary
configuration options, to reflect our setup. Everything else will remain as is.

After getting familiar with synnefo, you will be able to customize the
software as you wish, to fit your needs. Many options are available, to
empower the administrator with extensively customizable setups.

For the snf-webproject component (installed as an astakos dependency), we
need the following:

Edit ``/etc/synnefo/10-snf-webproject-database.conf``. You will need to
uncomment and edit the ``DATABASES`` block to reflect our database:

.. code-block:: console

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'django.db.backends.postgresql_psycopg2',

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Edit ``/etc/synnefo/10-snf-webproject-deploy.conf``. Uncomment and edit
``SECRET_KEY``. This is a Django-specific setting, used to provide a
seed in secret-key hashing algorithms. Set this to a random string of your
choice and keep it private:

.. code-block:: console

    SECRET_KEY = 'sy6)mw6a7x%n)-example_secret_key#zzk4jo6f2=uqu!1o%)'
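
One possible way to generate such a random string is the following sketch
(any sufficiently long, unpredictable string will do just as well):

.. code-block:: python

    import string
    from random import SystemRandom

    # Pick 50 characters from a mix of letters, digits and punctuation,
    # using the OS entropy source (SystemRandom).
    chars = string.ascii_letters + string.digits + '#$%&()*+-=?@^_'
    secret = ''.join(SystemRandom().choice(chars) for _ in range(50))
    print(secret)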

For astakos-specific configuration, edit the following options in
``/etc/synnefo/20-snf-astakos-app-settings.conf``:

.. code-block:: console

    ASTAKOS_COOKIE_DOMAIN = '.example.com'

    ASTAKOS_BASE_URL = 'https://node1.example.com/astakos'

The ``ASTAKOS_COOKIE_DOMAIN`` should be the base URL of our domain (for all
services). ``ASTAKOS_BASE_URL`` is the astakos top-level URL. Appending an
extra path (``/astakos`` here) is recommended in order to distinguish
components, if more than one is installed on the same machine.

.. note:: For the purpose of this guide, we don't enable recaptcha authentication.
    If you would like to enable it, you have to edit the following options:

    .. code-block:: console

        ASTAKOS_RECAPTCHA_PUBLIC_KEY = 'example_recaptcha_public_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_PRIVATE_KEY = 'example_recaptcha_private_key!@#$%^&*('
        ASTAKOS_RECAPTCHA_USE_SSL = True
        ASTAKOS_RECAPTCHA_ENABLED = True

    For the ``ASTAKOS_RECAPTCHA_PUBLIC_KEY`` and ``ASTAKOS_RECAPTCHA_PRIVATE_KEY``
    go to https://www.google.com/recaptcha/admin/create and create your own pair.

Then edit ``/etc/synnefo/20-snf-astakos-app-cloudbar.conf`` :

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'

    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'

    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

Those settings have to do with the black cloudbar endpoints and will be
described in more detail later on in this guide. For now, just edit the domain
to point at node1 which is where we have installed Astakos.

If you are an advanced user and want to use the Shibboleth authentication
method, read the relevant :ref:`section <shibboleth-auth>`.

.. _email-configuration:

Email delivery configuration
----------------------------

Many of the ``astakos`` operations require the server to notify service users
and administrators via email. For example, right after the signup process the
service sends an email to the registered email address containing an email
verification URL. After the user verifies the email address, astakos once
again needs to notify administrators with a notice that a new account has just
been verified.

More specifically, astakos sends emails in the following cases:

- An email containing a verification link after each signup process.
- An email to the people listed in the ``ADMINS`` setting after each email
  verification, if the ``ASTAKOS_MODERATION`` setting is ``True``. The email
  notifies administrators that an additional action is required in order to
  activate the user.
- A welcome email to the user's email address and an admin notification to
  ``ADMINS`` right after each account activation.
- Feedback messages submitted from the astakos contact view and the astakos
  feedback API endpoint are sent to the contacts listed in the ``HELPDESK``
  setting.
- Project application request notifications to people included in the
  ``HELPDESK`` and ``MANAGERS`` settings.
- Notifications after each project member action (join request, membership
  accepted/declined etc.) to project members or project owners.

Astakos uses the Django internal email delivery mechanism to send email
notifications. A simple configuration, using an external SMTP server to
deliver messages, is shown below. Alter the following example to match your
SMTP server's characteristics. Notice that the SMTP server is needed for a
proper installation.

.. code-block:: python

    # /etc/synnefo/00-snf-common-admins.conf
    EMAIL_HOST = "mysmtp.server.synnefo.org"
    EMAIL_HOST_USER = "<smtpuser>"
    EMAIL_HOST_PASSWORD = "<smtppassword>"

    # this gets appended in all email subjects
    EMAIL_SUBJECT_PREFIX = "[example.synnefo.org] "

    # Address to use for outgoing emails
    DEFAULT_FROM_EMAIL = "server@example.synnefo.org"

    # Email where users can contact for support. This is used in html/email
    # templates.
    CONTACT_EMAIL = "server@example.synnefo.org"

    # The email address that error messages come from
    SERVER_EMAIL = "server-errors@example.synnefo.org"

Notice that, since email settings might be required by applications other than
astakos, they are defined in a different configuration file than the one
previously used to set astakos-specific settings.

Refer to the
`Django documentation <https://docs.djangoproject.com/en/1.4/topics/email/>`_
for additional information on available email settings.

As mentioned in the previous section, the recipients list differs based on the
operation that triggers the email notification. Specifically, for emails whose
recipients include contacts from your service team (administrators, managers,
helpdesk etc.), synnefo provides the following settings, located in
``00-snf-common-admins.conf``:

.. code-block:: python

    ADMINS = (('Admin name', 'admin@example.synnefo.org'),
              ('Admin2 name', 'admin2@example.synnefo.org'))
    MANAGERS = (('Manager name', 'manager@example.synnefo.org'),)
    HELPDESK = (('Helpdesk user name', 'helpdesk@example.synnefo.org'),)

Alternatively, it may be convenient to send e-mails to a file, instead of an
actual smtp server, using the file backend. Do so by creating a configuration
file ``/etc/synnefo/99-local.conf`` including the following:

.. code-block:: python

    EMAIL_BACKEND = 'django.core.mail.backends.filebased.EmailBackend'
    EMAIL_FILE_PATH = '/tmp/app-messages'


Enable Pooling
--------------

This section can be skipped, but we strongly recommend you apply the
following, since it results in a significant performance boost.

Synnefo includes a pooling DBAPI driver for PostgreSQL, as a thin wrapper
around Psycopg2. This allows independent Django requests to reuse pooled DB
connections, with significant performance gains.
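
The idea behind such a pool can be sketched in plain Python. The class below
is a generic illustration of connection reuse, not the actual synnefo driver:

.. code-block:: python

    import queue

    class ConnectionPool:
        """Reuse up to `size` connections instead of opening new ones."""

        def __init__(self, factory, size):
            self._factory = factory      # callable that opens a new connection
            self._pool = queue.Queue(maxsize=size)

        def get(self):
            # Reuse a pooled connection if one is available,
            # otherwise open a fresh one.
            try:
                return self._pool.get_nowait()
            except queue.Empty:
                return self._factory()

        def put(self, conn):
            # Return the connection to the pool; drop it if the pool is full.
            try:
                self._pool.put_nowait(conn)
            except queue.Full:
                pass

    opened = []
    pool = ConnectionPool(lambda: opened.append(object()) or opened[-1], size=8)
    c1 = pool.get()
    pool.put(c1)
    c2 = pool.get()
    assert c1 is c2          # the second request reused the pooled connection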

To use, first monkey-patch psycopg2. For Django, run this before the
``DATABASES`` setting in ``/etc/synnefo/10-snf-webproject-database.conf``:

.. code-block:: console

    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

Since we are running with greenlets, we should modify psycopg2 behavior, so it
works properly in a greenlet context:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Use the Psycopg2 driver as usual. For Django, this means using
``django.db.backends.postgresql_psycopg2`` without any modifications. To enable
connection pooling, pass a nonzero ``synnefo_poolsize`` option to the DBAPI
driver, through ``DATABASES.OPTIONS`` in Django.

All the above will result in an ``/etc/synnefo/10-snf-webproject-database.conf``
file that looks like this:

.. code-block:: console

    # Monkey-patch psycopg2
    from synnefo.lib.db.pooled_psycopg2 import monkey_patch_psycopg2
    monkey_patch_psycopg2()

    # If running with greenlets
    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

    DATABASES = {
     'default': {
         # 'postgresql_psycopg2', 'postgresql','mysql', 'sqlite3' or 'oracle'
         'ENGINE': 'django.db.backends.postgresql_psycopg2',
         'OPTIONS': {'synnefo_poolsize': 8},

         # ATTENTION: This *must* be the absolute path if using sqlite3.
         # See: http://docs.djangoproject.com/en/dev/ref/settings/#name
         'NAME': 'snf_apps',
         'USER': 'synnefo',                      # Not used with sqlite3.
         'PASSWORD': 'example_passw0rd',         # Not used with sqlite3.
         # Set to empty string for localhost. Not used with sqlite3.
         'HOST': '4.3.2.1',
         # Set to empty string for default. Not used with sqlite3.
         'PORT': '5432',
     }
    }

Database Initialization
-----------------------

After configuration is done, we initialize the database by running:

.. code-block:: console

    # snf-manage syncdb

In this example we don't need to create a Django superuser, so we select
``[no]`` at the question. After a successful sync, we run the migrations
needed for astakos:

.. code-block:: console

    # snf-manage migrate im
    # snf-manage migrate quotaholder_app

Then, we load the pre-defined user groups:

.. code-block:: console

    # snf-manage loaddata groups

.. _services-reg:

Services Registration
---------------------

When the database is ready, we need to register the services. The following
command will ask you to register the standard Synnefo components (astakos,
cyclades, and pithos) along with the services they provide. Note that you
have to register at least astakos in order to have a usable authentication
system. For each component, you will be asked to provide two URLs: its base
URL and its UI URL.

The former is the location where the component resides; it should equal
the ``<component_name>_BASE_URL`` as specified in the respective component
settings. For example, the base URL for astakos would be
``https://node1.example.com/astakos``.

The latter is the URL that appears in the Cloudbar and leads to the
component UI. If you want to follow the default setup, set
the UI URL to ``<base_url>/ui/``, where ``base_url`` is the component's base
URL as explained before. (You can later change the UI URL with
``snf-manage component-modify <component_name> --url new_ui_url``.)

The command will also automatically register the resource definitions
offered by the services.

.. code-block:: console

    # snf-component-register

.. note::

   This command is equivalent to running the following series of commands;
   it registers the three components in astakos and then in each host it
   exports the respective service definitions, copies the exported json file
   to the astakos host, where it finally imports it:

    .. code-block:: console

       astakos-host$ snf-manage component-add astakos --base-url astakos_base_url --ui-url astakos_ui_url
       astakos-host$ snf-manage component-add cyclades --base-url cyclades_base_url --ui-url cyclades_ui_url
       astakos-host$ snf-manage component-add pithos --base-url pithos_base_url --ui-url pithos_ui_url
       astakos-host$ snf-manage service-export-astakos > astakos.json
       astakos-host$ snf-manage service-import --json astakos.json
       cyclades-host$ snf-manage service-export-cyclades > cyclades.json
       # copy the file to astakos-host
       astakos-host$ snf-manage service-import --json cyclades.json
       pithos-host$ snf-manage service-export-pithos > pithos.json
       # copy the file to astakos-host
       astakos-host$ snf-manage service-import --json pithos.json

Notice that in this installation astakos and cyclades run on node1, while
pithos runs on node2.

Setting Default Base Quota for Resources
----------------------------------------

We now have to specify the limit on resources that each user can employ
(exempting resources offered by projects). When specifying storage or
memory size limits, consider adding an appropriate size suffix to the
numeric value, e.g. 10240 MB, 10 GB etc.

.. code-block:: console

    # snf-manage resource-modify --default-quota-interactive
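
To see how such suffixes scale the numeric value, here is a rough sketch of a
size parser. This is a hypothetical helper for illustration only, not
synnefo's actual parser, and it assumes binary (powers-of-1024) units:

```python
# Illustrative size-suffix parser (hypothetical; assumes binary units).
_UNITS = {'B': 1, 'KB': 1024, 'MB': 1024 ** 2, 'GB': 1024 ** 3, 'TB': 1024 ** 4}

def parse_size(text):
    """Turn a value like '10 GB' or '10240 MB' into a number of bytes."""
    number, unit = text.split()
    return int(number) * _UNITS[unit.upper()]

# Both spellings from the guide denote the same amount:
print(parse_size('10240 MB') == parse_size('10 GB'))  # True
```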

.. _pithos_view_registration:

Register pithos view as an OAuth 2.0 client
-------------------------------------------

Starting from synnefo version 0.15, the pithos view, in order to get access to
the data of a protected pithos resource, has to be granted authorization for
the specific resource by astakos.

During the authorization grant procedure, it has to authenticate itself with
astakos, since the latter has to prevent serving requests by
unknown/unauthorized clients.

To register the pithos view as an OAuth 2.0 client in astakos, we have to run
the following command::

    snf-manage oauth2-client-add pithos-view --secret=<secret> --is-trusted --url https://node2.example.com/pithos/ui/view
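
Any sufficiently unpredictable string can serve as the ``<secret>`` value;
synnefo does not mandate a particular generator. One convenient option (a
suggestion, not a requirement) is Python's standard ``secrets`` module:

```python
import secrets

# 32 random bytes, hex-encoded: a 64-character secret suitable for
# pasting into the --secret argument above.
secret = secrets.token_hex(32)
print(len(secret))  # 64
```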

Servers Initialization
----------------------

Finally, we initialize the servers on node1:

.. code-block:: console

    root@node1:~ # /etc/init.d/gunicorn restart
    root@node1:~ # /etc/init.d/apache2 restart

We have now finished the Astakos setup. Let's test it now.


Testing of Astakos
==================

Open your favorite browser and go to:

``http://node1.example.com/astakos``
If this redirects you to ``https://node1.example.com/astakos/ui/`` and you can see
the "welcome" door of Astakos, then you have successfully set up Astakos.

Let's create our first user. At the homepage click the "CREATE ACCOUNT" button
and fill in your data in the sign-up form. Then click "SUBMIT". You should now
see a green box at the top, informing you that you made a successful request
and that the request has been sent to the administrators. So far so good; let's
assume that you created the user with username ``user@example.com``.

Now we need to activate that user. Return to a command prompt at node1 and run:

.. code-block:: console

    root@node1:~ # snf-manage user-list

This command should show you a list with only one user; the one we just created.
This user should have an id with a value of ``1``, and the flags "active" and
"verified" set to False. Now run:

.. code-block:: console

    root@node1:~ # snf-manage user-modify 1 --verify --accept

This verifies the user's email and activates the user.
When running in production, activation is done automatically, using the
different moderation methods that Astakos supports. You can see these methods
(by invitation, whitelists, matching regexp, etc.) in the Astakos specific
documentation. In production, you can also manually activate a user, by sending
him/her an activation email. See how to do this at the :ref:`User
activation <user_activation>` section.

Now let's go back to the homepage. Open ``http://node1.example.com/astakos/ui/`` with
your browser again. Try to sign in using your new credentials. If the astakos
menu appears and you can see your profile, then you have successfully set up
Astakos.

Let's continue to install Pithos now.


Installation of Pithos on node2
===============================
To install Pithos, grab the packages from our repository (make sure you made
the additions needed in your ``/etc/apt/sources.list`` file, as described
previously), by running:

.. code-block:: console

   # apt-get install snf-pithos-app snf-pithos-backend
Now, install the pithos web interface:
.. code-block:: console

   # apt-get install snf-pithos-webclient

This package provides the standalone pithos web client. The web client is the
web UI for Pithos and will be accessible by clicking "pithos" on the Astakos
interface's cloudbar, at the top of the Astakos homepage.
.. _conf-pithos:

Configuration of Pithos
=======================

Conf Files
----------

After Pithos is successfully installed, you will find the directory
``/etc/synnefo`` and some configuration files inside it, as you did on node1
after the installation of astakos. Here, you will not have to change anything
that has to do with snf-common or snf-webproject. Everything is set on node1.
You only need to change settings that have to do with Pithos. Specifically:

Edit ``/etc/synnefo/20-snf-pithos-app-settings.conf``. There you need to set
these options:

.. code-block:: console

   ASTAKOS_AUTH_URL = 'https://node1.example.com/astakos/identity/v2.0'

   PITHOS_BASE_URL = 'https://node2.example.com/pithos'
   PITHOS_BACKEND_DB_CONNECTION = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
   PITHOS_BACKEND_BLOCK_PATH = '/srv/pithos/data'

   PITHOS_SERVICE_TOKEN = 'pithos_service_token22w'

The ``PITHOS_BACKEND_DB_CONNECTION`` option tells the Pithos app where to
find the Pithos backend database. Above, we tell Pithos that its database is
``snf_pithos`` at node1 and to connect as user ``synnefo`` with password
``example_passw0rd``. All those settings were set up during node1's "Database
setup" section.
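
If you are unsure which part of such a DSN is which, you can pull it apart
with generic standard-library URL parsing (this is not synnefo code, just a
sanity check of the string's structure):

```python
from urllib.parse import urlsplit

dsn = 'postgresql://synnefo:example_passw0rd@node1.example.com:5432/snf_pithos'
parts = urlsplit(dsn)

print(parts.username)           # synnefo
print(parts.password)           # example_passw0rd
print(parts.hostname)           # node1.example.com
print(parts.port)               # 5432
print(parts.path.lstrip('/'))   # snf_pithos  (the database name)
```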
The ``PITHOS_BACKEND_BLOCK_PATH`` option tells the Pithos app where to find
the Pithos backend data. Above, we tell Pithos to store its data under
``/srv/pithos/data``, which is visible to both nodes. We have already set up
this directory in node1's "Pithos data directory setup" section.
The ``ASTAKOS_AUTH_URL`` option informs the Pithos app where Astakos is.
The Astakos service is used for user management (authentication, quotas, etc.).

The ``PITHOS_BASE_URL`` setting must point to the top-level Pithos URL.

The ``PITHOS_SERVICE_TOKEN`` is the token used for authentication with astakos.
It can be retrieved by running on the Astakos node (node1 in our case):

.. code-block:: console

   # snf-manage component-list

The token has been generated automatically during the :ref:`Pithos service
registration <services-reg>`.

The ``PITHOS_UPDATE_MD5`` option disables the computation of object checksums
by default. This results in improved performance during object uploading.
However, if compatibility with the OpenStack Object Storage API is important,
then it should be changed to ``True``.
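
For context, the checksum in question is a plain MD5 digest of the object's
content (the OpenStack API exposes such a digest as the object's ETag), which
is what makes it costly for large uploads. A generic ``hashlib`` illustration,
not Pithos code:

```python
import hashlib

# The kind of content checksum that PITHOS_UPDATE_MD5 = True would keep
# up to date for every uploaded object.
data = b'hello'
print(hashlib.md5(data).hexdigest())  # 5d41402abc4b2a76b9719d911017c592
```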

Then edit ``/etc/synnefo/20-snf-pithos-webclient-cloudbar.conf``, to connect the
Pithos web UI with the astakos web UI (through the top cloudbar):

.. code-block:: console

    CLOUDBAR_LOCATION = 'https://node1.example.com/static/im/cloudbar/'
    CLOUDBAR_SERVICES_URL = 'https://node1.example.com/astakos/ui/get_services'
    CLOUDBAR_MENU_URL = 'https://node1.example.com/astakos/ui/get_menu'

The ``CLOUDBAR_LOCATION`` tells the client where to find the astakos common
cloudbar.

The ``CLOUDBAR_SERVICES_URL`` and ``CLOUDBAR_MENU_URL`` options are used by the
Pithos web client to get from astakos all the information needed to fill its
own cloudbar. So we put our astakos deployment URLs there.

The ``PITHOS_OAUTH2_CLIENT_CREDENTIALS`` setting is used by the pithos view
in order to authenticate itself with astakos during the authorization grant
procedure, and it should contain the credentials issued for the pithos view
in `the pithos view registration step`__.

__ pithos_view_registration_

Pooling and Greenlets
---------------------

Pithos is pooling-ready without the need for further configuration, because it
doesn't use a Django DB. It pools HTTP connections to Astakos, and pithos
backend objects for access to the Pithos DB.

However, as in Astakos, since we are running with Greenlets, it is also
recommended to modify psycopg2 behavior so it works properly in a greenlet
context. This means adding the following lines at the top of your
``/etc/synnefo/10-snf-webproject-database.conf`` file:

.. code-block:: console

    from synnefo.lib.db.psyco_gevent import make_psycopg_green
    make_psycopg_green()

Furthermore, add the ``--worker-class=gevent`` (or ``--worker-class=sync`` as
mentioned above, depending on your setup) argument on your
``/etc/gunicorn.d/synnefo`` configuration file. The file should look something
like this:

.. code-block:: console

    CONFIG = {
     'mode': 'django',
     'environment': {
       'DJANGO_SETTINGS_MODULE': 'synnefo.settings',
     },
     'working_dir': '/etc/synnefo',
     'user': 'www-data',
     'group': 'www-data',
     'args': (
       '--bind=127.0.0.1:8080',
       '--workers=4',
       '--worker-class=gevent',
       '--log-level=debug',
       '--timeout=43200'
     ),
    }
Stamp Database Revision
-----------------------

Pithos uses the alembic_ database migrations tool.

.. _alembic: http://alembic.readthedocs.org

After a successful installation, we should stamp it at the most recent
revision, so that future migrations know where to start upgrading in
the migration history.