An essential part of preparing for the Docker Certified Associate (DCA) exam is to familiarize yourself with Container Orchestration. Container Orchestration requires a set of tools and scripts that you can use to host, configure, and manage containers in a production environment.
Deploying in Docker typically involves running various applications on different hosts. Container orchestration will help you set up a large number of application instances using a single command. Container orchestration tools also help scale your application’s instances up or down in response to fluctuations in demand. With container orchestration tools, you can also provide advanced networking between various containers.
Docker Swarm is hugely popular and easy to set up, yet has a few drawbacks when it comes to autoscaling and customizations.
Mesos is challenging to use and is recommended only for advanced cloud developers.
Kubernetes is a popular container orchestration solution that offers plenty of customization options and unmatched auto-scaling capabilities.
Docker Swarm helps you run applications on the Docker Engine seamlessly across multiple nodes that reside in the same cluster. With Docker Swarm, you can always monitor the state, health, and performance of your containers and the hosts that run your applications.
As you prepare for the DCA exam, some of the topics of Docker Swarm that you’ll need an in-depth understanding of include:
- Swarm Architecture
- Setting up a 2-node cluster in Swarm
- Creating a demo swarm cluster setup
- Basic Swarm Operations
- Swarm High Availability and the Importance of Quorum
- Swarm in High Availability Mode
- Auto-lock and a classroom demo
- Swarm Services
- Rolling Updates, Rollbacks, and Scaling
- Swarm Service Types
- Placement in Swarm
- Service in Swarm- Basic Operations
- Service in Swarm- placements, global, parallelism, and replicated
- Docker Config Objects
- The Docker Overlay Network
- Macvlan Networks
- Swarm Service Discovery
- Docker Stack
As you study for your exam, you should develop a high level of familiarity with the structure and architecture of Docker Swarm.
Docker Swarm lets you integrate different Docker machines onto a single cluster. This helps with your application’s load balancing and also improves its availability. A Docker Cluster is made up of different instances called Nodes. Nodes can be categorized into two types: Manager and Worker nodes.
A Manager Node receives instructions from a user and turns them into service tasks, which are then assigned to one or more worker nodes. Manager nodes also help maintain the desired state of the cluster to which they belong. Managers can be configured to run production workloads too, when needed.
On the other hand, a Worker Node receives instructions from the manager nodes, and uses these instructions to deploy and run the necessary containers.
- Swarm is easy to set up and maintain since all the features of Docker Swarm are embedded in the Docker Engine.
- Docker Swarm deploys applications in a Declarative format.
- The Swarm manager automatically scales and distributes application instances across worker nodes depending on demand.
- Rolling updates reconfigure your application's instances one at a time for easier change management.
- Docker Swarm performs desired state reconciliation for self-healing applications.
- SSL/TLS certificates secure communication between nodes via authentication and encryption.
- An external load balancer can be used to distribute requests between nodes.
This lesson demonstrates how you can create a Docker Swarm cluster with two worker nodes and one manager.
The prerequisites for this session include:
- Machines (Nodes) deployed and designated as Manager, Worker-1, and Worker-2.
- The machines should have the Docker Engine installed.
- Each node should be assigned a static IP address.
- Ports TCP 2377, TCP/UDP 7946, and UDP 4789 should be open.
To initialize Docker Swarm, run the following command on the manager node:
$ docker swarm init
Swarm initialized: current node (whds9866c56gtgq3uf5jmfsip) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-19nlqoifkry03y8l5242zl6e2te2k9dvzebf5b70ihhpn7r4qh-aqtxt2sd0sh0hj2f8ceupj53g 172.17.0.27:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
This command initializes Swarm on the selected node, which is now a manager. The command also returns a script you will use to add a worker to this swarm, as indicated on the Command Line Interface.
To retrieve the join command for adding more workers to this Swarm, run:
$ docker swarm join-token worker
To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-19nlqoifkry03y8l5242zl6e2te2k9dvzebf5b70ihhpn7r4qh-aqtxt2sd0sh0hj2f8ceupj53g 172.17.0.27:2377
To display a list of your nodes with their names and statuses, run the following command in the CLI:
$ docker node ls
You will learn some of the common Swarm operations that involve promoting, draining, and deleting nodes.
To promote a node to manager, you will run the command:
$ docker node promote worker1
Node worker1 promoted to a manager in the swarm.
To demote a manager node to Worker, you will run the command:
$ docker node demote worker1
Manager worker1 demoted in the swarm.
When you want to perform upgrades and maintenance on your cluster, you might need to drain each node independently, one at a time.
To drain your node, use the command:
$ docker node update --availability drain worker1
worker1
Once you are done patching or maintaining your node, run the update command again with availability set to active to bring it back up:
$ docker node update --availability active worker1
worker1
To delete a node from a cluster, drain it so that its workload is redistributed to other nodes, then run the following command on that node:
$ docker swarm leave
Node left the swarm.
Docker Swarm uses the Raft consensus algorithm to maintain distributed consensus when more than one manager node runs in a cluster. Having multiple managers in a cluster provides fault tolerance.
Each manager waits a random amount of time before initiating an election. The first manager to act requests the other managers in the cluster to make it the Leader. If a majority of the managers respond positively, it assumes the leader role, sending notifications and updating a shared database on the state of the cluster.
This database is available to all managers in the cluster. Before the leader makes any changes to the cluster, it sends instructions to the other managers, who must reach a quorum and agree before the changes take effect. If the leader loses connectivity, the remaining managers initiate a process to elect a new leader.
- Each cluster should have an odd number of managers so that a majority can still be formed if the network is segmented.
- Every decision must be agreed upon by a quorum of the managers present. The quorum for a cluster with N managers is floor(N/2) + 1.
- The number of manager failures a cluster can withstand is its Fault Tolerance, calculated as floor((N-1)/2).
- Distribute manager nodes equally over different data centres/availability zones so the cluster can withstand sitewide disruptions.
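To make the quorum and fault-tolerance arithmetic concrete, here is a small, illustrative shell calculation. The quorum is the majority, floor(N/2) + 1, and the fault tolerance is the number of manager failures a cluster of N managers survives while still holding a quorum:

```shell
# Quorum = floor(N/2) + 1; fault tolerance = floor((N-1)/2).
# Shell arithmetic is integer division, so the floors are implicit.
for N in 1 3 5 7; do
  quorum=$(( N / 2 + 1 ))
  tolerance=$(( (N - 1) / 2 ))
  echo "managers=$N quorum=$quorum tolerance=$tolerance"
done
```

For example, a 5-manager cluster needs 3 managers to agree on every decision and keeps working after losing 2 of them. Note also that a cluster of 4 managers tolerates no more failures than a cluster of 3, which is why odd manager counts are recommended.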
If more managers fail than the fault tolerance allows, you can no longer perform management operations on your cluster. The worker nodes will, however, continue to run normally, with all services and configuration settings still active.
To resolve a failed cluster, you could attempt to bring the failed managers back online. If this fails, you can recreate the cluster from a surviving manager using the --force-new-cluster flag, which produces a healthy, single-manager cluster. The command is:
$ docker swarm init --force-new-cluster
Once this cluster has been created, you can promote other workers into manager nodes using the promote command.
As you begin to deploy your clusters, you’ll need a way to run multiple instances of your application across several worker nodes to help with automation and load balancing. The Docker Service allows you to launch containers in a coordinated manner across several nodes.
To create 3 replicas of your application as a Docker service, run the command:
$ docker service create --replicas=3 app1
When you deploy your applications, the API server accepts the request and creates a service, which the orchestrator then divides into tasks. The allocator assigns each task an IP address, and the dispatcher assigns these tasks to individual workers. The scheduler then manages how the workers execute those tasks.
Here are a few common service tasks and their Docker Commands.
Create an overlay network
$ docker network create --driver overlay my-overlay-network
Create a subnet
$ docker network create --driver overlay --subnet 10.15.0.0/16 my-overlay-network
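As a side note, the /16 prefix above determines the size of the pool from which container addresses are allocated. A quick, purely illustrative shell calculation:

```shell
# A /N prefix leaves 32 - N host bits, i.e. 2^(32-N) addresses.
prefix=16
addresses=$(( 2 ** (32 - prefix) ))
echo "A /$prefix subnet provides $addresses addresses"   # 65536 for /16
```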
Make a network attachable to external containers
$ docker network create --driver overlay --attachable my-overlay-network
Enable IPsec encryption
$ docker network create --driver overlay --opt encrypted my-overlay-network
Attach a service to a network
$ docker service create --network my-overlay-network my-web-service
Delete a newly created network
$ docker network rm my-overlay-network
Delete all unused networks
$ docker network prune
TCP 2377: Cluster Management Communications
TCP/UDP 7946: Container Network Discovery / Communication Among Nodes
UDP 4789: Overlay Network Traffic
To publish port 80 on the host, pointing to port 5000 on the container, use the command:
$ docker service create -p 80:5000 my-web-server
Or, using the long-form syntax:
$ docker service create --publish published=80,target=5000 my-web-server
To include UDP:
$ docker service create -p 80:5000/udp my-web-server
$ docker service create --publish published=80,target=5000,protocol=udp my-web-server
Containers and services in a node can communicate with each other directly using their names. To make sure that these containers can ‘see’ each other, you should create an overlay network and place the communicating services on it. For instance:
Create an overlay network:
$ docker network create --driver=overlay app-network
Then create an API server within this network:
$ docker service create --name=api-server --replicas=2 --network=app-network api-server
Create the web service task:
$ docker service create --name=web --network=app-network web
The services can now reach each other using their service names; for example, the web service can reach the API server at the name api-server.
In Docker, a stack is a group of interrelated services that together form the functionality of an application. All of your application’s configuration settings and changes are stored in a configuration file known as a stack file (a Docker Compose file). Docker Compose lets you write stack files in YAML, which makes your application easier to manage, distribute, and scale. The stack file also lets you define health checks for your containers and set a grace period during which health check failures are not counted.
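As an illustration, a minimal stack file with a health check might look like the sketch below. The service name, image, and probe command are assumptions made for the example; start_period is the grace period during which failed checks are ignored. The heredoc writes the file, and deploying it (shown as a comment) requires a running swarm:

```shell
# Write an example stack file (service and image names are assumptions).
cat > docker-compose.yml <<'EOF'
version: "3.8"
services:
  web:
    image: nginx:alpine
    deploy:
      replicas: 3
    healthcheck:
      test: ["CMD-SHELL", "wget -q -O /dev/null http://localhost/ || exit 1"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 40s   # grace period before failures count
EOF

# Deploy it with (requires an initialized swarm):
#   docker stack deploy --compose-file docker-compose.yml demo-stack
```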
To deploy a stack from a compose file, run the command:
$ docker stack deploy --compose-file docker-compose.yml <stack-name>
Other Docker Stack commands include:
Create a stack
$ docker stack deploy
List active stacks
$ docker stack ls
List services created by a stack
$ docker stack services
List tasks running in a stack
$ docker stack ps
Delete a stack
$ docker stack rm
To understand how container orchestration tools manage storage, it is important to know how Docker manages storage in containers. Getting to know storage in Docker will also go a long way toward helping you manage storage with Kubernetes. Storage in Docker is managed by two mechanisms: storage drivers and volume drivers.
Docker uses storage drivers to enable its layered architecture. These are attached to containers by default and store data under the default path /var/lib/docker, in subfolders such as aufs, containers, image, and volumes.
Popular storage drivers include AUFS, ZFS, Btrfs, Device Mapper, Overlay, and Overlay2.
Volume Drivers help create persistent volumes in Docker. By default, volumes are assigned a local driver that stores data on the host's volume directory.
There are third-party volume driver plugins that help with storage on various public cloud platforms. These include: Azure File Storage, Convoy, DigitalOcean BlockStorage, Flocker, gce-docker, GlusterFS, NetApp, RexRay, Portworx, VMware vSphere Storage, among others. Docker automatically assigns the appropriate volume driver depending on the operating system and application needs.
To create a volume on Amazon Elastic Block Store (EBS), run the command:
$ docker run -it \
    --name app1 \
    --volume-driver rexray/ebs \
    --mount src=ebs-vol,target=/var/lib/app1 \
    app1
This command creates a persistent volume storage on Amazon EBS at app1’s default file location.
Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back.
Quick Tip - Questions below may include a mix of DOMC and MCQ types.
- [A] docker stack deploy kubeapp
- [B] docker stack ls kubeapp
- [C] docker stack services kubeapp
- [D] docker stack rm kubeapp
- [A] docker promote node worker2
- [B] docker node promote worker2
- [C] docker swarm node promote worker2
- [D] docker swarm promote node worker2
- [A] docker stack deploy
- [B] docker stack ls
- [C] docker stack services
- [D] docker stack ps
- [A] 3
- [B] 5
- [C] 7
- [D] No limit
- [A] docker stack
- [B] services
- [C] pods
- [D] None of the above
Once you have understood Swarm architecture, set up a cluster, and ensured high availability, you will have developed enough familiarity to tackle real-world projects. Swarm services help you automate the interaction between nodes and load-balance your distributed containers. By following this guide, you will understand the importance of container orchestration and how Docker Swarm offers a simple, no-fuss framework that helps keep your containers healthy.
To test where you stand in your Docker certification journey, take the DCA Readiness Test at dca.kodekloud.com. On KodeKloud, you also get a learning path with recommendations, sample questions, and tips for clearing the DCA exam.