Commit c33ef20c authored by Georgios Ouzounis

Corrections to Ansible README.md file.

parent ddad6d08
...@@ -54,6 +54,7 @@ Contains Ansible playbook for the deployment of Apache Flink. The playbook is sp
- Create softlink for Apache Flink (creates /usr/local/flink softlink).
- Configure Apache Flink (copies pre-created Apache Flink configuration files into /usr/local/flink/conf).
- Start Apache Flink (starts an Apache Yarn session with 2 TaskManagers and 512 MB of RAM each).
Apache Flink needs to be installed only on the master node. Information about the architecture of the cluster (number of slaves, etc.) is obtained through Apache Yarn. A sketch of what these tasks might look like is shown below.
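
The playbook itself is not part of this diff, so the following is only a minimal sketch of how the last three tasks could be expressed with standard Ansible modules (file, copy, shell). The paths, the Flink version and the yarn-session.sh flags are assumptions based on the task descriptions above, not the repository's actual code.

```
# Hypothetical sketch only: module choices, paths and flags are assumed from the
# task descriptions, not taken from the repository's actual playbook.
- name: Create softlink for Apache Flink
  file:
    src: /root/flink-0.10.0        # assumed extraction directory
    dest: /usr/local/flink
    state: link

- name: Configure Apache Flink
  copy:
    src: conf/                     # assumed location of the pre-created configuration files
    dest: /usr/local/flink/conf/

- name: Start Apache Flink
  # Starts a YARN session with 2 TaskManagers and 512 MB of RAM each,
  # matching the task description above.
  shell: /usr/local/flink/bin/yarn-session.sh -n 2 -tm 512 -d
```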
### How to deploy
...@@ -63,7 +64,7 @@ $ansible-playbook -v playbooks/apache-flink/flink-install.yml
```
## Apache Kafka deployment
Contains an Ansible playbook for the deployment of Apache Kafka. The playbook is split into eleven (11) tasks:
- Download Apache Kafka (downloads Apache Kafka into /root).
...@@ -77,6 +78,7 @@ Contains Ansible playbook for the deployment of Apache kafka. The playbook is sp
- Create Apache Kafka input topic (creates an Apache Kafka topic, named "input", to store input data).
- Create Apache Kafka batch output topic (creates an Apache Kafka topic, named "batch-output", to store the output data of the batch job).
- Create Apache Kafka stream output topic (creates an Apache Kafka topic, named "stream-output", to store the output data of the stream job; see the sketch after this list).
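
The playbook defines the three topics as separate tasks; the sketch below condenses them into a single looped task only to show the general shape. The Kafka install path, the ZooKeeper address and the partition/replication settings are assumptions, not values taken from the repository.

```
# Hypothetical sketch only: install path, ZooKeeper address and topic settings are assumed.
- name: Create Apache Kafka topics
  shell: >
    /usr/local/kafka/bin/kafka-topics.sh --create
    --zookeeper localhost:2181
    --replication-factor 1 --partitions 1
    --topic {{ item }}
  with_items:
    - input
    - batch-output
    - stream-output
```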
Currently, the playbooks are run from an external node and deploy both the master and the slave nodes. In a future version, they will run from the master node to deploy the slave nodes. An example inventory layout for the current setup is sketched below.
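
Running from an external node against both groups implies an inventory that distinguishes the master from the slaves. The sketch below uses the YAML inventory format supported by newer Ansible releases, with assumed group names, host names and addresses; the repository's actual inventory may be laid out differently (for example as an INI hosts file).

```
# Hypothetical inventory sketch: group names, host names and addresses are assumptions.
all:
  children:
    master:
      hosts:
        flink-master:
          ansible_host: 192.168.1.10
    slaves:
      hosts:
        flink-slave-1:
          ansible_host: 192.168.1.11
        flink-slave-2:
          ansible_host: 192.168.1.12
```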
### How to deploy
...