DEV Community

Sulagna Nandi

Setting up Docker Swarm in AWS

Docker and Docker Swarm

Docker is an open-source containerization platform. Docker containers are lightweight software packages that include all the tools, libraries, and dependencies needed to run an application.
Before the concepts of containerization and virtualization, there was often a gap between the application development team and the production team, caused by differences between the system and dependencies of the environment in which the application was developed and the environment in which it was tested. Virtualization and containerization were introduced to bridge this gap: they let developers run their applications in different operating systems sharing the same hardware, and then hand the entire system, dependencies included, to the production team.

Docker Swarm is a mechanism for running and managing containers across multiple Docker hosts as a single cluster. (Images here refer to the templates from which containers are built.) A swarm has multiple nodes of two kinds: Master (manager) nodes and Worker nodes. On each of these nodes the Docker daemon is initialized and interacts with the Docker API. Docker Swarm supports load balancing and scaling through simple commands, and all services are run from the Master node.

In this post I have demonstrated all the steps required to set up a Docker Swarm environment. For higher availability, I have deployed each Docker node into a different VPC.

Step 1: Create your VPCs

(a) We create three VPCs, with the aim of deploying one instance in each of them [one Master node and two Worker nodes], with CIDR blocks as needed. We can also have more than one Master node and additional Worker nodes.
[Image: VPCs]

(b) Now, we need to create a subnet in each VPC.
[Image: Subnets]

(c) Since the nodes need to communicate with each other, we now create peering connections between the VPCs. Search for the Peering Connections service and create a peering connection between each pair of VPCs (for three VPCs A, B, C: create connections A-->B, B-->C, and C-->A).
[Image: Peering]

After creation, the peering request must be accepted. Select the request, go to Actions, and choose Accept Request.
[Image: Accept request]
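Since the console screenshots are not reproduced here, the same peering setup can be sketched with the AWS CLI. The VPC and pcx- IDs below are placeholders, not values from this walkthrough:

```shell
# Request a peering connection from VPC A to VPC B
# (vpc-aaaa1111 and vpc-bbbb2222 are placeholder IDs).
aws ec2 create-vpc-peering-connection \
    --vpc-id vpc-aaaa1111 \
    --peer-vpc-id vpc-bbbb2222

# Accept the request (use the pcx- ID returned by the command above).
aws ec2 accept-vpc-peering-connection \
    --vpc-peering-connection-id pcx-12345678
```

Repeat the pair of commands for each of the three VPC pairs (A-->B, B-->C, C-->A).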

(d) Now, define the route tables. Create a route table for each VPC, define the proper routes, and associate it with the appropriate subnet. Also, create an Internet Gateway for each VPC and add it to the route table in order to have internet access. Finally, the route table for each VPC should have routes defined as follows:

[Image: Routes]
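The route-table entries can likewise be sketched with the AWS CLI (route table, CIDR, and gateway IDs are placeholders):

```shell
# Route to a peered VPC's CIDR block through the peering connection.
aws ec2 create-route \
    --route-table-id rtb-11111111 \
    --destination-cidr-block 10.1.0.0/16 \
    --vpc-peering-connection-id pcx-12345678

# Default route to the Internet Gateway, so the VPC has internet access.
aws ec2 create-route \
    --route-table-id rtb-11111111 \
    --destination-cidr-block 0.0.0.0/0 \
    --gateway-id igw-12345678
```

Each VPC's route table needs one peering route per peer VPC, plus the internet-gateway route.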

Step 2: Create your EC2 instances

Deploy an instance in each VPC. Since I have taken three nodes (one Master node and two Worker nodes), I have launched three instances. Select an appropriate instance type. In the security group rules, remember to add a Custom TCP rule for port 2377, the Swarm cluster-management port. Here is a sample of the security rules used:

[Image: TCP rules]

Note: The security rules should allow only the IPs that are actually needed; the rules above are just for the demo.
Now, connect to each instance.
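Besides 2377/tcp, Docker's documentation lists two more port ranges that swarm nodes use; a sketch of opening all of them with the AWS CLI follows (the security-group ID and the 10.0.0.0/8 source range are placeholders; restrict the source to your actual node IPs):

```shell
# Open the Docker Swarm ports on each node's security group.
aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
    --protocol tcp --port 2377 --cidr 10.0.0.0/8   # cluster management
aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
    --protocol tcp --port 7946 --cidr 10.0.0.0/8   # node communication
aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
    --protocol udp --port 7946 --cidr 10.0.0.0/8   # node communication
aws ec2 authorize-security-group-ingress --group-id sg-12345678 \
    --protocol udp --port 4789 --cidr 10.0.0.0/8   # overlay network traffic
```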

Step 3: Configure the Docker Master and Worker Nodes

(a) First of all, install Docker on each node using the command `yum install docker* -y`.
(b) Now, start and enable Docker on each node using `systemctl start docker` followed by `systemctl enable docker`.
(c) Next, choose a node and assign it as the Master node, where all the services will run. On it, run the command `docker swarm init`. A join command containing a token will be shown; copy it and run it on the other nodes, which will then join the swarm as Worker nodes.
[Image: Swarm]
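The init/join exchange looks roughly like this (the private IP and token are placeholders, not values from this walkthrough):

```shell
# On the chosen Master node: initialise the swarm, advertising the
# address the Worker nodes can reach over the VPC peering.
docker swarm init --advertise-addr 10.0.1.10

# docker swarm init prints a ready-made join command; run it on each
# Worker node (token and address below are placeholders):
docker swarm join --token SWMTKN-1-xxxx 10.0.1.10:2377
```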

You can check the joining status of the nodes by running the command `docker node ls` on the Master node.

(d) Create a Dockerfile that will download a sample website.
To create the Dockerfile, run `vi Dockerfile`. This Dockerfile must be created on every node.
This is a sample Dockerfile:
[Image: Dockerfile]
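Since the screenshot of the Dockerfile is not reproduced here, here is a minimal sketch of what such a Dockerfile could contain, assuming a CentOS base image and Apache httpd serving a placeholder page (the original file may differ):

```shell
# Write a sample Dockerfile that serves a simple website on port 80.
cat > Dockerfile <<'EOF'
FROM centos:7
RUN yum install -y httpd
RUN echo "Hello from Docker Swarm" > /var/www/html/index.html
EXPOSE 80
CMD ["/usr/sbin/httpd", "-D", "FOREGROUND"]
EOF

# Then build it, as described in the next step:
# docker image build -t appimg:1 .
```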

Build the image with the command `docker image build -t appimg:1 .`
You can check the image using the command `docker image ls`.

Step 4: Run the services and create the containers

(a) Now, you have to run the service from the Master node. This can be done using the command `docker service create --name <service_name> -p 80:80 appimg:1` (using the image name and tag you built earlier).

Check the service ID using the command `docker service ls`. Note this service ID.

(b) Next, create the containers using the command `docker service scale <service_id>=4` (no spaces around the equals sign; you can give any number of replicas). The containers are scheduled across the Master and Worker nodes. Docker Swarm also provides load-balancing features, thus providing maximum availability.
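Putting the service commands together, a sketch of the full sequence on the Master node (the service name `appsvc` is a placeholder; `appimg:1` is the image built in Step 3):

```shell
# Create the service, publishing port 80 on every node.
docker service create --name appsvc -p 80:80 appimg:1

docker service ls                 # note the service ID or name
docker service scale appsvc=4     # scale to 4 replicas; no spaces around '='
docker service ps appsvc          # see which node each replica landed on
```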

Finally, your Docker swarm is configured and ready. To view the contents, we can map our EC2 instances to an Application Load Balancer, attach it to a domain name via Route 53, and then hit the domain name. To do so, refer here.
