Commit 29e9698d authored by Paschalis Korosoglou's avatar Paschalis Korosoglou

Docs rearrangement

parent 8c6b9a37
../central_service/app/api-doc/docs
../../central_service/app/api-doc/docs/API_lambda_applications.md
../../central_service/app/api-doc/docs/API_lambda_instances.md
../../central_service/app/api-doc/docs/API_users.md
## Central λ API
The UML class diagram of the central λ API is shown in the figure below.
![Screenshot](../images/central-api-uml.png)
../../central_service/app/api-doc/docs/index.md
## Usage
Although it is not intended to be operated this way, the Fokia library can be used directly to bootstrap and configure a λ instance. To install the Fokia library locally, a user needs `pip` already installed and available.
The user must have (or create) a `~/.kamakirc` configuration file. Here is an example configuration:
```ini
[global]
default_cloud = lambda
; ca_certs = /path/to/certs
[cloud "lambda"]
url = https://accounts.okeanos.grnet.gr/identity/v2.0
token = your-okeanos-token
```
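Before running anything, it can help to verify that the configuration file parses and contains the expected cloud entry. The following sketch does this with only the standard library; the `read_lambda_cloud` helper is hypothetical (it is not part of Fokia or kamaki) and merely mirrors the section names in the example above.

```python
import configparser

def read_lambda_cloud(path):
    """Return (url, token) for the cloud named by default_cloud.

    Assumes the kamaki-style layout shown above: a [global] section
    with a default_cloud option, and a [cloud "<name>"] section.
    """
    cfg = configparser.ConfigParser()
    cfg.read(path)
    cloud = cfg.get("global", "default_cloud")   # e.g. "lambda"
    section = 'cloud "%s"' % cloud               # kamaki-style section name
    return cfg.get(section, "url"), cfg.get(section, "token")
```

Note that `configparser` treats `[cloud "lambda"]` as a single section named `cloud "lambda"`, so no custom parsing is needed for a quick sanity check.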
After checking out the [code base](https://github.com/grnet/okeanos-LoD), change into the `core/` directory and install the Fokia prerequisites with the following command:
```bash
$ sudo pip install -r requirements.txt
```
Then install the Fokia library using the following command:
```bash
$ sudo python setup.py install
```
To bootstrap a λ instance, use the `lambda_instance_manager.py` executable inside the `core/fokia/` folder. The available options are shown in the listing below:
```sh
$ python lambda_instance_manager.py -h
usage: lambda_instance_manager.py [-h] [--master-name MASTER_NAME]
                                  [--slaves SLAVES]
                                  [--vcpus_master VCPUS_MASTER]
                                  [--vcpus_slave VCPUS_SLAVE]
                                  [--ram_master RAM_MASTER]
                                  [--ram_slave RAM_SLAVE]
                                  [--disk_master DISK_MASTER]
                                  [--disk_slave DISK_SLAVE]
                                  [--project-name PROJECT_NAME]

optional arguments:
  --master-name MASTER_NAME
                        Name of Flink master VM [default: lambda-master]
  --slaves SLAVES       Number of Flink slaves [default: 1]
  --vcpus_master VCPUS_MASTER
                        Number of CPUs on Flink master [default: 4]
  --vcpus_slave VCPUS_SLAVE
                        Number of CPUs on Flink slave(s) [default: 4]
  --ram_master RAM_MASTER
                        Size of RAM on Flink master (in MB) [default: 4096MB]
  --ram_slave RAM_SLAVE
                        Size of RAM on Flink slave(s) (in MB) [default: 4096MB]
  --disk_master DISK_MASTER
                        Size of disk on Flink master (in GB) [default: 40GB]
  --disk_slave DISK_SLAVE
                        Size of disk on Flink slave(s) (in GB) [default: 40GB]
```
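For readers who want to script around this interface, the documented options map naturally onto an `argparse` parser. The sketch below is an illustration of the interface listed above, not the actual `lambda_instance_manager.py` source; the `build_parser` helper name is an assumption.

```python
import argparse

def build_parser():
    """Illustrative parser mirroring the documented CLI options."""
    parser = argparse.ArgumentParser(prog="lambda_instance_manager.py")
    parser.add_argument("--master-name", default="lambda-master",
                        help="Name of Flink master VM")
    parser.add_argument("--slaves", type=int, default=1,
                        help="Number of Flink slaves")
    parser.add_argument("--vcpus_master", type=int, default=4,
                        help="Number of CPUs on Flink master")
    parser.add_argument("--vcpus_slave", type=int, default=4,
                        help="Number of CPUs on Flink slave(s)")
    parser.add_argument("--ram_master", type=int, default=4096,
                        help="Size of RAM on Flink master (in MB)")
    parser.add_argument("--ram_slave", type=int, default=4096,
                        help="Size of RAM on Flink slave(s) (in MB)")
    parser.add_argument("--disk_master", type=int, default=40,
                        help="Size of disk on Flink master (in GB)")
    parser.add_argument("--disk_slave", type=int, default=40,
                        help="Size of disk on Flink slave(s) (in GB)")
    parser.add_argument("--project-name",
                        help="~okeanos project to charge resources to")
    return parser
```

Parsing `--slaves 2 --ram_slave 8192`, for example, leaves all other options at their documented defaults.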
## Overview
The Fokia library is used internally by the λ service to bootstrap and orchestrate ~okeanos resources (VMs, virtual networks, disks, etc.). Internally, Fokia calls the ~okeanos API through the kamaki library and configures the provisioned resources through the Ansible API.
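The two-phase flow described above (provision via kamaki, then configure via Ansible) can be sketched in a few lines. The class and method names below are illustrative stand-ins, not Fokia's actual API; the real library wires these phases to kamaki and Ansible calls.

```python
class LambdaInstanceSketch:
    """Illustrative two-phase bootstrap: provision, then configure."""

    def __init__(self, provisioner, configurer):
        self.provisioner = provisioner  # stands in for kamaki-based VM creation
        self.configurer = configurer    # stands in for Ansible-based setup

    def create(self, slaves=1):
        # Phase 1: provision a master VM plus the requested slave VMs.
        vms = [self.provisioner("lambda-master")]
        vms += [self.provisioner("lambda-node%d" % i)
                for i in range(1, slaves + 1)]
        # Phase 2: configure HDFS, YARN, Flink and Kafka on all VMs.
        return self.configurer(vms)
```

Separating provisioning from configuration this way lets the same configuration step run against any set of already-provisioned VMs.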
## UML Class diagram
The Fokia UML class diagram is shown below:
![Screenshot](../images/fokia-uml.png)
The Fokia library is installed on each λ service VM.
## Fokia Flowchart
The following flowchart displays the procedures followed by Fokia when creating a new λ instance.
![Screenshot](../images/fokia-flowchart.png)
# λ on Demand
These pages contain technical documentation for the Lambda on Demand service. The service targets ~okeanos users who want to deploy a fully capable λ instance (cluster) on top of ~okeanos resources.
## Description of λ (lambda) instance
Each λ instance provided through the service is comprised of the following building blocks:
- A YARN-based Hadoop infrastructure with HDFS
- A Flink-based cluster running on top of YARN
- A Kafka brokering service (incl. ZooKeeper for service node discovery)

A λ instance comprised of `n` nodes (VMs) will contain `n` HDFS nodes, one Flink master and `n-1` Flink slaves, and `n` Kafka brokers, as shown in the figure below.
![Screenshot](images/lambda-composition.png)
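The composition rule above can be written down directly. The helper below is purely illustrative (its name and the returned keys are assumptions), but it encodes exactly the stated mapping from `n` VMs to per-service node counts.

```python
def instance_composition(n):
    """Per-service node counts for a λ instance of n VMs (n >= 2)."""
    if n < 2:
        # One master plus n-1 slaves implies at least two VMs.
        raise ValueError("a λ instance needs a master and at least one slave")
    return {
        "hdfs_nodes": n,       # every VM runs an HDFS node
        "flink_masters": 1,    # exactly one Flink master
        "flink_slaves": n - 1, # remaining VMs are Flink slaves
        "kafka_brokers": n,    # every VM runs a Kafka broker
    }
```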
## How the service works
To use the service, an ~okeanos user first needs to spin up a service virtual machine on ~okeanos, selecting the 'Lambda (λ) Service VM' image as shown in the screenshot below.
![Screenshot](images/images.png)
## Provisioning a λ instance
After spinning up a λ Service VM, the user can provision λ instances on demand either through the service API or through the service web frontend, as shown in the screenshot below.
![Screenshot](images/high-level-architecture.png)
The web frontend uses the API interface in its backend. More details on the API interface are available [here](lambda-api/index.md).
The API interface, in turn, uses the Fokia library in its backend. More details on the Fokia library are available [here](fokia/description.md).
../webapp/api-doc/docs
../../webapp/api-doc/docs/ApplicationDelete.md
../../webapp/api-doc/docs/ApplicationDeploy.md
../../webapp/api-doc/docs/ApplicationDetails.md
../../webapp/api-doc/docs/ApplicationStart.md
../../webapp/api-doc/docs/ApplicationStop.md
../../webapp/api-doc/docs/ApplicationUpload.md
../../webapp/api-doc/docs/ApplicationWithdraw.md
../../webapp/api-doc/docs/ApplicationsCount.md
../../webapp/api-doc/docs/ApplicationsList.md
../../webapp/api-doc/docs/ApplicationsListDeployed.md