Commit b782f338 authored by Themis Zamani's avatar Themis Zamani

Merge pull request #38 from efikalti/fixes

[LAM-88] As a ~okeanos user, I want to be able to destroy a λ instance I own
parents 48da9c40 bd935851
@@ -7,7 +7,7 @@ The libraries contained in the core package are responsible for creating a clust
### provisioner
The library is responsible for creating/deleting a VM cluster, using the Kamaki python API. It reads the authentication info from the .kamakirc, and accepts the cluster specs as arguments.
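As an illustration, here is a minimal sketch of driving the provisioner directly from Python. The method names mirror the calls made by `cluster_creator.py` further down; the cloud name comes from the example `.kamakirc` below, while the sizing values and project name are placeholders, not the script's defaults:

```python
from fokia.provisioner import Provisioner

# Authentication info is read from .kamakirc; 'lambda' is the cloud name
# used in the example configuration below.
provisioner = Provisioner(cloud_name='lambda')

# Create a cluster with one master and one slave. The keyword arguments
# mirror those passed by cluster_creator.py; the values are placeholders.
provisioner.create_lambda_cluster('lambda-master',
                                  slaves=1,
                                  vcpus_master=2, vcpus_slave=2,
                                  ram_master=2048, ram_slave=2048,
                                  disk_master=20, disk_slave=20,
                                  ip_request=1, network_request=1,
                                  project_name='your-okeanos-project')

# Inspect the cluster that was created, or delete it again.
details = provisioner.get_cluster_details()
provisioner.delete_lambda_cluster(details)
```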
### ansible_manager
@@ -20,12 +20,22 @@ The library is responsible for managing the ansible, that will run on the cluste
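For orientation, a minimal sketch of how the manager is driven, mirroring the calls made in `cluster_creator.py` further down. `provisioner_response` stands for the provisioner's cluster-details dictionary after the script has added each node's internal IP and the private key; the playbook path is the one used by the script, relative to `core/fokia`:

```python
from fokia.ansible_manager import Manager

# provisioner_response: the provisioner's cluster-details dictionary,
# augmented with each node's internal IP and the private key
# (see cluster_creator.py below for how it is built).
manager = Manager(provisioner_response)
manager.create_inventory()   # build the ansible inventory from the dictionary
manager.run_playbook(playbook_file='../../ansible/playbooks/cluster-install.yml')
manager.cleanup()            # clean up after the playbook run
```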
### cluster_creator
The script is responsible for creating/deleting the entire lambda instance.
Run the script as `cluster_creator.py --action=create` to create a lambda cluster.
Run the script as `cluster_creator.py --action=delete --cluster_id=<id>` to delete a lambda cluster.
Depending on the selected action, different arguments must be provided.
If the action is CREATE:
* It sets the provisioner arguments (cluster specs), then calls the provisioner to create the cluster.
* It then takes the provisioner's output dictionary and adds some more values to it, obtained from the provisioner after the cluster creation.
* It calls the ansible_manager to create the inventory, using the dictionary as input.
* Finally, it uses the created manager object (containing the inventory and constants) to run the required playbooks in the correct order, creating the lambda instance.
If the action is DELETE:
* It reads the cluster id from the arguments.
* It queries the database for the cluster information associated with this id (a sketch of such a lookup follows this list).
* It calls the delete_lambda_cluster method of the provisioner with the information retrieved from the database.
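The database lookup itself is still a TODO in `get_cluster_details` (see the script further down). The sketch below only illustrates the intended contract, returning `{'nodes': [master_id, node1_id, ...], 'vpn': vpn_id}` or `None`; the SQLite schema it assumes (`cluster` and `cluster_node` tables) is purely hypothetical:

```python
import sqlite3

def get_cluster_details(cluster_id, db_path='clusters.db'):
    """Return {'nodes': [master_id, node1_id, ...], 'vpn': vpn_id} for the
    given cluster id, or None if no such cluster exists.

    Hypothetical schema: cluster(id, vpn_id) and
    cluster_node(cluster_id, server_id), with the master stored first.
    """
    conn = sqlite3.connect(db_path)
    try:
        row = conn.execute('SELECT vpn_id FROM cluster WHERE id = ?',
                           (cluster_id,)).fetchone()
        if row is None:
            return None
        nodes = [r[0] for r in conn.execute(
            'SELECT server_id FROM cluster_node WHERE cluster_id = ? ORDER BY rowid',
            (cluster_id,))]
        return {'nodes': nodes, 'vpn': row[0]}
    finally:
        conn.close()
```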
## Prerequisites
* kamaki 0.13.4 or later
@@ -46,7 +56,7 @@ default_cloud = lambda
url = https://accounts.okeanos.grnet.gr/identity/v2.0
token = your-okeanos-token
```
Note that you may retrieve your ~okeanos API token, after logging into the service, by visiting [this page][api_link].
- Install required packages. Within the `core` directory execute `sudo pip install -r requirements.txt`.
- Install the package using `sudo python setup.py install`.
@@ -54,7 +64,7 @@ Note that you may retrieve your ~okeanos API token, after logging into the servi
## Usage
To create a lambda instance, one must run `python cluster_creator.py` from within the `core/fokia` directory. To change the default settings (one master instance and one slave instance) one has to edit the `cluster_creator.py` script prior to executing it.
@@ -67,4 +77,4 @@ To test the library we use `tox`. In order to run the tests:
This will automatically create the required testing environments and run the tests.
[api_link]: https://accounts.okeanos.grnet.gr/ui/api_access
@@ -5,6 +5,20 @@ import inspect
from fokia.provisioner import Provisioner
from fokia.ansible_manager import Manager
def get_cluster_details(cluster_id):
    """
    :param cluster_id: id of the cluster
    :returns: the details of the cluster, retrieved from the database, or
              None if no cluster with this id exists.
    """
    # TODO:
    # 1. Create a query on the cluster table requesting the cluster info
    #    with this id.
    # 2. Parse the answer into a dictionary of the form:
    #    {'nodes': [master_id, node1_id, node2_id, ...], 'vpn': vpn_id}
    # 3. Return the dictionary, or None if the query returned no rows.
    return None


if __name__ == "__main__":
    start_time = time.time()
    script_path = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))
@@ -24,46 +38,53 @@ if __name__ == "__main__":
    parser.add_argument('--ip_request', type=int, dest='ip_request', default=1)
    parser.add_argument('--network_request', type=int, dest='network_request', default=1)
    parser.add_argument('--image_name', type=str, dest='image_name', default='debian')
    parser.add_argument('--cluster_size', type=int, dest='cluster_size', default=2)
    parser.add_argument('--action', type=str, dest='action', default='create')
    parser.add_argument('--cluster_id', type=int, dest='cluster_id', default=0)
    args = parser.parse_args()
    provisioner = Provisioner(cloud_name=args.cloud)

    if args.action == 'create':
        provisioner.create_lambda_cluster('lambda-master', slaves=args.slaves,
                                          vcpus_master=args.vcpus_master,
                                          vcpus_slave=args.vcpus_slave,
                                          ram_master=args.ram_master,
                                          ram_slave=args.ram_slave,
                                          disk_master=args.disk_master,
                                          disk_slave=args.disk_slave,
                                          ip_request=args.ip_request,
                                          network_request=args.network_request,
                                          project_name=args.project_name)

        provisioner_response = provisioner.get_cluster_details()
        master_id = provisioner_response['nodes']['master']['id']
        master_ip = provisioner.get_server_private_ip(master_id)
        provisioner_response['nodes']['master']['internal_ip'] = master_ip
        slave_ids = [slave['id'] for slave in provisioner_response['nodes']['slaves']]
        for i, slave in enumerate(provisioner_response['nodes']['slaves']):
            slave_ip = provisioner.get_server_private_ip(slave['id'])
            provisioner_response['nodes']['slaves'][i]['internal_ip'] = slave_ip
        provisioner_response['pk'] = provisioner.get_private_key()
        print 'response =', provisioner_response
        provisioner_time = time.time()

        manager = Manager(provisioner_response)
        manager.create_inventory()
        # manager.run_playbook(playbook_file=script_path + "/../../ansible/playbooks/test/testinventory.yml", tags=['hosts'])
        # manager.run_playbook(playbook_file=script_path + "/../../ansible/playbooks/test/testproxy.yml", tags=['install'])
        manager.run_playbook(playbook_file=script_path + "/../../ansible/playbooks/cluster-install.yml")
        manager.cleanup()

        provisioner_duration = provisioner_time - start_time
        ansible_duration = time.time() - provisioner_time

        print 'VM provisioning took', round(provisioner_duration), 'seconds'
        print 'Ansible playbooks took', round(ansible_duration), 'seconds'
    elif args.action == 'delete':
        details = get_cluster_details(args.cluster_id)
        if details is not None:
            provisioner.delete_lambda_cluster(details)