Introduction to Container Orchestration
Assume we have a containerized Python application and want to deploy it to production. Before we can do that, there are a few prerequisites to handle first:
- Service discovery (i.e. connections between containers).
- Auto scaling.
- High availability and fault tolerance.
With traditional servers (i.e. virtual machines), all of these prerequisites were handled by a bunch of tools (e.g. VMware). But how do we achieve the same thing with containers?
Here's the value behind container orchestration tools. Container orchestration is the process of deploying and managing containers; it helps to automate the deployment, scaling, and management of containerized applications.
Containers provide a lightweight and portable way to package and deploy applications, but managing them at scale can be challenging. Container orchestration tools help to simplify this process by providing a platform for deploying and managing containers across multiple hosts.
Orchestration tools typically provide the following features:
- Service discovery: containers need to communicate with each other, and orchestration tools provide a way to automatically discover other containers in the cluster.
- Load balancing: orchestration tools can automatically distribute incoming traffic across multiple containers, ensuring that the workload is evenly distributed.
- Scaling: orchestration tools can automatically scale the number of containers up or down based on the demand for the application.
- Health monitoring: orchestration tools can monitor the health of containers and automatically restart them if they fail.
- Desired state: you declare the desired state of your application, and the orchestration tool continuously compares it with the actual state (the real state of the running applications) and works to make the actual state match the desired one, as sketched in the example below.
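As a quick illustration of declaring desired state, here is a minimal sketch using the Docker Swarm CLI (the tool we'll set up later in this article); the service name web, the published port 8080, and the nginx image are only examples:
# declare a desired state of 3 replicas of an nginx container
docker service create --name web --replicas 3 -p 8080:80 nginx
# the orchestrator keeps comparing actual vs. desired state,
# so if a container dies it gets rescheduled automatically
docker service ls
# change the desired state; the cluster converges to 5 replicas
docker service scale web=5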
Some popular container orchestration tools include Docker Swarm, Kubernetes, and Mesos. Each tool has its own strengths and weaknesses, and the choice of tool will depend on your specific requirements.
Container ecosystem layers:
(Image: container ecosystem layers, from the IBM Docker Essentials course.)
In this series we will be using Docker Swarm. Docker Swarm is a powerful and easy-to-use tool for managing containers at scale, and it has become a popular choice for organizations looking to deploy containerized applications in production environments.
Docker Swarm Overview
Docker Swarm is a container orchestration tool that allows you to create, deploy, scale, and manage a cluster of Docker hosts using a declarative configuration file (Docker Compose) or the Docker CLI.
As defined by the Docker docs: "The cluster management and orchestration features embedded in Docker Engine are built using swarmkit. SwarmKit is a separate project which implements Docker's orchestration layer and is used directly within Docker." In other words, Docker Swarm is the Docker-native container orchestration platform that uses SwarmKit as its core library, which means you don't need any extra installation.
Docker Swarm consists of two main components: manager nodes and worker nodes. The manager node is responsible for managing the entire cluster and scheduling tasks, while the worker nodes are the hosts that run the containers. That means we need at least a couple of servers to initiate a Swarm cluster, and that is exactly what delivers high availability for your applications.
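Node roles are not fixed, by the way. As a small sketch (the node name node2 is just an example), you can later promote a worker to a manager, or demote a manager back to a worker, to adjust the cluster's fault tolerance:
# run on a manager node: promote a worker so it can also manage the cluster
docker node promote node2
# demote it back to a plain worker when it is no longer needed as a manager
docker node demote node2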
Swarm provides very powerful, self-managing features and is easy to use; press Feature highlights for more.
Kindly consider assigning a static IP to each node, and also consider opening the required protocols and ports between the hosts.
The following ports must be available. On some systems, these ports are open by default; an example of opening them with a firewall follows the list.
- Port 2377 TCP for communication with and between manager nodes
- Port 7946 TCP/UDP for overlay network node discovery
- Port 4789 UDP (configurable) for overlay network traffic
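If your hosts run a firewall, you need to open these ports yourself. Here is a minimal sketch assuming Ubuntu hosts with ufw (Play with Docker nodes don't need this; it only applies to your own servers):
# run on every node in the cluster
sudo ufw allow 2377/tcp   # cluster management traffic between managers
sudo ufw allow 7946/tcp   # overlay network node discovery
sudo ufw allow 7946/udp   # overlay network node discovery
sudo ufw allow 4789/udp   # overlay network (VXLAN) traffic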
Set up the Environment
To initiate the Swarm cluster we need a couple of servers, which is hard to achieve for learning purposes. In this lab we are going to use Play with Docker, which is provided by Docker Inc and gives you the ability to spin up nodes that have Docker preinstalled and ready to use.
Enough talking, Down we go!!
Set up the environment:
- Press to open Play with Docker.
- Sign in with your Docker account.
- Press Start to create your workspace.
- This will open a workspace like the one below. Press Add new instance to initiate a server.
- Initiate Swarm Mode:
# get the NIC name (network interface card)
# in our case it's eth0
ip a s
# initiate docker swarm
docker swarm init --advertise-addr eth0
- This will print out the following:
Swarm initialized: current node (yz3vrc2w1hwnrwkr5dfsctxkj) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-4xj2egkkxq8ofqkeg0s3zblrdpzcpqokgjyl5zpc1pja100641-3eqtbf6doialoa2spbr1o4dp0 192.168.0.28:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
The output is self-explanatory: it tells you exactly how to add more workers or managers to the swarm.
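By the way, you don't need to save this output anywhere. If you lose the join command, you can print it again at any time from a manager node using standard Docker CLI commands:
# print the join command (including the token) for workers
docker swarm join-token worker
# print the join command for additional managers
docker swarm join-token manager
# rotate the worker token if you suspect it has leaked
docker swarm join-token --rotate worker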
- Now, let's create two more instances and join them to our Swarm cluster.
- Press Node2, then copy and paste the join command provided by the Swarm initiation:
docker swarm join --token SWMTKN-1-4xj2egkkxq8ofqkeg0s3zblrdpzcpqokgjyl5zpc1pja100641-3eqtbf6doialoa2spbr1o4dp0 192.168.0.28:2377
- Do the same on Node3
- Here we go, let's list the cluster nodes:
- Press node1, which is the manager node, and type the below:
docker node ls
Here we have three nodes with Ready status and Active availability, and one of them acts as the leader (manager).
The asterisk next to node1 means it is the node you are currently running the command from.
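A couple of handy commands for poking at the cluster; these are standard Docker CLI subcommands, run from the manager node:
# show detailed, human-readable info about the node you are on
docker node inspect self --pretty
# list only the manager nodes
docker node ls --filter role=manager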
Key takeaways
Container orchestration tools help to automate the deployment, scaling, high availability, and management of containerized applications.
Orchestration tools typically provide service discovery, load balancing, scaling, health monitoring, and desired-state management.
Docker Swarm is a powerful and easy-to-use tool for managing containers.
Initiating Swarm mode does not need any extra installation.
Docker Swarm consists of two main components: manager nodes and worker nodes. The manager node is responsible for managing the entire cluster and scheduling tasks, while the worker nodes are the hosts that run the containers.
Use this command to initiate Swarm mode:
docker swarm init --advertise-addr eth0
- Use this command on the manager node to list the nodes:
docker node ls
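If you want to tear the lab down afterwards, the following sketch uses the standard Docker CLI commands for leaving a swarm (run the first on each worker, the second on the manager):
# on each worker node: leave the swarm
docker swarm leave
# on the last manager node: force-leave and dissolve the cluster
docker swarm leave --force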
That's it, very straightforward and very fast.
I hope this article inspired you, and I would appreciate your feedback. Thank you.