KodeKloud

Posted on • Originally published at kodekloud.com

Docker Certified Associate Exam Series (Part -2): Container Orchestration

Introduction to Container Orchestration

An essential part of preparing for the Docker Certified Associate (DCA) exam is to familiarize yourself with Container Orchestration. Container orchestration refers to the set of tools and scripts used to host, configure, and manage containers in a production environment.

Deploying in Docker typically involves running various applications on different hosts. Container orchestration will help you set up a large number of application instances using a single command. Container orchestration tools also help scale your application’s instances up or down in response to fluctuations in demand. With container orchestration tools, you can also provide advanced networking between various containers.

Three of the most popular container orchestration tools are Docker Swarm, Kubernetes, and Mesos.

  • Docker Swarm is hugely popular and easy to set up, yet has a few drawbacks when it comes to autoscaling and customizations.

  • Mesos is challenging to use and is only recommended for advanced cloud developers.

  • Kubernetes is a popular container orchestration solution that offers plenty of customization options and unmatched auto-scaling capabilities.

For this part of the study guide series, we shall cover Docker Swarm.

Docker Swarm

Docker Swarm helps you run applications on the Docker Engine seamlessly across multiple nodes joined into a single cluster. With Docker Swarm, you can always monitor the state, health, and performance of your containers and of the hosts that run your applications.

As you prepare for the DCA exam, some of the topics of Docker Swarm that you’ll need an in-depth understanding of include:

  • Swarm Architecture
  • Setting up a 2-node cluster in Swarm
  • Creating a demo swarm cluster setup
  • Basic Swarm Operations
  • Swarm High Availability and the Importance of Quorum
  • Swarm in High Availability Mode
  • Auto-lock and a classroom demo
  • Swarm Services
  • Rolling Updates, Rollbacks, and Scaling
  • Swarm Service Types
  • Placement in Swarm
  • Service in Swarm: Basic Operations
  • Service in Swarm: Placements, Global, Parallelism, and Replicated
  • Docker Config Objects
  • The Docker Overlay Network
  • Macvlan Networks
  • Swarm Service Discovery
  • Docker Stack

Let’s explore some of these areas in detail:

Swarm Architecture

As you study for your exam, you should develop a high level of familiarity with the structure and architecture of Docker Swarm.

Docker Swarm lets you join multiple Docker machines into a single cluster. This helps with your application's load balancing and also improves its availability. A Docker cluster is made up of individual instances called nodes, which come in two types: manager nodes and worker nodes.

A Manager Node receives instructions from the user and turns them into service tasks, which are then assigned to one or more worker nodes. Manager nodes also maintain the desired state of the cluster to which they belong. Managers can be configured to run production workloads too, when needed.

On the other hand, a Worker Node receives instructions from the manager nodes, and uses these instructions to deploy and run the necessary containers.

Some features of Docker Swarm Architecture include:

  • Swarm is easy to set up and maintain since all the features of Docker Swarm are embedded in the Docker Engine.
  • Docker Swarm deploys applications in a Declarative format.
  • The Swarm manager distributes application instances across worker nodes according to the declared state.
  • Rolling updates reconfigure your application instances one at a time for easier change management.
  • Docker Swarm performs desired state reconciliation for self-healing applications.
  • SSL/TLS certificates secure communication between nodes via authentication and encryption.
  • An external load balancer can be used to distribute requests between nodes.
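As a quick sketch of how a rolling update is triggered in practice (the service name and image tag here are hypothetical):

```shell
# Replace the service's image one replica at a time,
# waiting 10 seconds between replacements:
$ docker service update \
    --image my-web-server:2.0 \
    --update-parallelism 1 \
    --update-delay 10s \
    my-web-service
```

If the new version misbehaves, docker service rollback my-web-service reverts the service to its previous specification.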

Setting up a 2-Node Swarm Cluster

This lesson demonstrates how to create a Docker Swarm cluster with two worker nodes and one manager.
The prerequisites for this session include:

  • Machines (Nodes) deployed and designated as Manager, Worker-1, and Worker-2.
  • The machines should have the Docker Engine installed.
  • Each node should be assigned a static IP address.
  • Ports TCP 2377, TCP/UDP 7946, and UDP 4789 should be open.

To initialize Docker Swarm on your manager node, use the following command while the manager is active:

$ docker swarm init

Swarm initialized: current node (whds9866c56gtgq3uf5jmfsip) is now a manager.

To add a worker to this swarm, run the following command:

    docker swarm join --token SWMTKN-1-19nlqoifkry03y8l5242zl6e2te2k9dvzebf5b70ihhpn7r4qh-aqtxt2sd0sh0hj2f8ceupj53g 172.17.0.27:2377

To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.

This command initializes Swarm on the selected node, which becomes a manager. It also prints the join command you will use to add workers to this swarm, as shown on the command line.

To print the worker join command again (for instance, when adding the second worker), run the following on the manager:

$ docker swarm join-token worker

To add a worker to this swarm, run the following command:

docker swarm join --token SWMTKN-1-19nlqoifkry03y8l5242zl6e2te2k9dvzebf5b70ihhpn7r4qh-aqtxt2sd0sh0hj2f8ceupj53g 172.17.0.27:2377

To display a list of your nodes with their names and status, run the following command on a manager:

$ docker node ls

Swarm Operations

This section covers some of the common Swarm operations: promoting, demoting, draining, and removing nodes.

To promote a node to manager, you will run the command:

$ docker node promote worker1
Node worker1 promoted to a manager in the swarm.

To demote a manager node to Worker, you will run the command:

$ docker node demote worker1
Manager worker1 demoted in the swarm.

When you want to perform upgrades or maintenance on your cluster, you may need to drain nodes one at a time.

To drain your node, use the command:

$ docker node update --availability drain worker1
worker1

This command brings down the containers on worker1 and runs replacement instances on the other workers until worker1 is brought back up.

Once you are done patching or maintaining your node, run the update command with the availability set to active to bring it back into service:

$ docker node update --availability active worker1
worker1

To remove a node from a cluster, first drain it so that its workload is redistributed to other nodes, then run the following command on that node:

$ docker swarm leave
Node left the swarm.

Swarm High Availability: Quorum

Docker Swarm uses the Raft consensus algorithm to maintain a consistent cluster state when more than one manager node runs in a cluster. Having multiple managers in a cluster helps with fault tolerance.

In Raft, each manager waits for a randomized election timeout. The first manager to time out becomes a candidate and asks the other managers to vote for it as Leader. If a majority respond positively, the candidate assumes the leader role, sending out notifications and updating a shared database that records the state of the cluster.
This database is available to all managers in the cluster. Before the leader makes any change to the cluster, it shares the proposed change with the other managers; the change takes effect only once a Quorum of managers agree. If the leader loses connectivity, the remaining managers elect a new leader.

The best practices for high availability in Swarm include:

  • Each cluster should have an odd number of managers, so a majority can survive a network segmentation.
  • Every decision must be agreed upon by a Quorum of the managers present. The quorum for a cluster with N managers is (N/2) + 1, using integer division.
  • The number of manager failures a cluster can withstand is its Fault Tolerance, calculated as (N - 1)/2, again rounded down.


  • Distribute manager nodes across different data centers/availability zones so the cluster can withstand site-wide disruptions.
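The quorum and fault-tolerance figures use integer (floor) division: quorum is N/2 + 1 and fault tolerance is (N - 1)/2, both rounded down. A minimal shell sketch to check the arithmetic:

```shell
# Quorum and fault tolerance for an N-manager cluster,
# using the shell's integer (floor) division:
quorum() { echo $(( $1 / 2 + 1 )); }
fault_tolerance() { echo $(( ($1 - 1) / 2 )); }

quorum 3            # → 2
quorum 7            # → 4
fault_tolerance 3   # → 1
fault_tolerance 7   # → 3
```

So a 3-manager cluster tolerates one manager failure, a 5-manager cluster two, and a 7-manager cluster three.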

If more managers fail than the fault tolerance allows, you can no longer perform management operations on your cluster. The worker nodes will, however, continue to run normally with all services and configuration settings still active.

To resolve a failed cluster, first attempt to bring the failed managers back online. If this fails, you can re-initialize from a surviving manager with the --force-new-cluster flag, which creates a healthy, single-manager cluster. The command is:

$ docker swarm init --force-new-cluster

Once this cluster has been created, you can promote other workers into manager nodes using the promote command.

Swarm Services

As you begin to deploy your clusters, you'll need a way to run multiple instances of your application across several worker nodes to help with automation and load balancing. Docker services let you launch containers in a coordinated manner across several nodes.

To create 3 replicas of your application using the Docker service, run the command:

$ docker service create --replicas=3 app1

When you deploy a service, the API server records it; the orchestrator divides it into tasks; the allocator assigns each task an IP address; the dispatcher assigns tasks to individual workers; and the scheduler manages how the workers run their tasks.
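Once a service is running, its replica count can be changed on the fly; a sketch (the service name is illustrative):

```shell
# Scale the service up to 6 replicas...
$ docker service scale app1=6

# ...or, equivalently, via service update:
$ docker service update --replicas=6 app1
```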

Here are a few common service tasks and their Docker Commands.

Create an overlay network

$ docker network create --driver overlay my-overlay-network

Specify a subnet for an overlay network

$ docker network create --driver overlay --subnet 10.15.0.0/16 my-overlay-network 

Make a network attachable to external containers

$ docker network create --driver overlay --attachable my-overlay-network

Enable IPSec encryption

$ docker network create --driver overlay --opt encrypted my-overlay-network

Attach a service to a network

$ docker service create --network my-overlay-network my-web-service

Delete a network

$ docker network rm my-overlay-network

Delete all unused networks

$ docker network prune


Here are some network ports and their purposes.

TCP 2377: cluster management communications
TCP/UDP 7946: container network discovery/communication among nodes
UDP 4789: overlay network traffic

To publish a service on host port 80 pointing to container port 5000, use the command:

$ docker service create -p 80:5000 my-web-server

or

$ docker service create --publish published=80,target=5000 my-web-server

To include UDP:

$ docker service create -p 80:5000/udp my-web-server

or

$ docker service create --publish published=80,target=5000,protocol=udp my-web-server

Swarm Service Discovery

Containers and services in a swarm can communicate with each other directly using their names. To make sure these services can 'see' each other, create an overlay network and place both services on it. For instance:

Create an overlay network:

$ docker network create --driver=overlay app-network

Then create an API server within this network:

$ docker service create --name=api-server --replicas=2 --network=app-network api-server

Create the web service task:

$ docker service create --name=web --network=app-network web

The services can now reach each other using their service names; for example, the web service can reach the API service at the name api-server.
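To verify discovery, you can open a shell inside any task of the web service and resolve the other service by name; a sketch (the container ID is a placeholder, and the ping utility must exist in the image):

```shell
# Swarm's built-in DNS resolves the service name to its virtual IP:
$ docker exec -it <web-container-id> ping api-server
```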

Docker Stack

In Docker, a stack is a group of interrelated services that together form the functionality of an application. All of your application's configuration settings are stored in a Docker Compose file, and the group of services deployed from it is called a Docker Stack. Docker Compose lets you write stack files in YAML, which makes your application easier to manage, distribute, and scale. The stack file also lets you define health checks for your containers and set the grace period during which health-check failures are not counted.
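A minimal stack file illustrating these ideas might look as follows. The service name, image, and timings are illustrative; start_period is the grace period during which failed health checks are not counted against the container:

```yaml
version: "3.7"
services:
  web:
    image: my-web-server:1.0
    deploy:
      replicas: 3
      update_config:
        parallelism: 1
        delay: 10s
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 40s
```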

To deploy a stack from a compose file, run the following command (here myapp is a stack name of your choosing):

$ docker stack deploy --compose-file docker-compose.yml myapp

Other Docker Stack commands include:

Task & Command

Create a Stack

$ docker stack deploy

List active stacks

$ docker stack ls

List services created by a stack

$ docker stack services

List tasks running in a stack

$ docker stack ps

Delete a Stack

$ docker stack rm

Docker Storage

To understand how container orchestration tools manage storage, it is important to know how Docker manages storage in containers. Understanding storage in Docker will also go a long way toward helping you manage storage with Kubernetes. Storage in Docker is handled by two mechanisms: storage drivers and volume drivers.

Docker uses storage drivers to enable its layered architecture. A storage driver is attached to each container by default and stores the container's filesystem under the default path /var/lib/docker, in subfolders such as aufs, containers, image, and volumes.

Popular storage drivers include AUFS, ZFS, BTRFS, Device Mapper, Overlay, and Overlay2.

Volume Driver Plugins in Docker

Volume Drivers help create persistent volumes in Docker. By default, volumes are assigned a local driver that stores data on the host's volume directory.
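With the default local driver, creating and mounting a named volume looks like this (the volume name, mount target, and image are illustrative):

```shell
# Create a named volume backed by the local driver:
$ docker volume create data_volume

# Mount it into a container; data written to /var/lib/mysql persists
# on the host under /var/lib/docker/volumes/data_volume/_data:
$ docker run -d --mount src=data_volume,target=/var/lib/mysql mysql
```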

There are third-party volume driver plugins that help with storage on various public cloud platforms. These include Azure File Storage, Convoy, DigitalOcean Block Storage, Flocker, gce-docker, GlusterFS, NetApp, RexRay, Portworx, and VMware vSphere Storage, among others. You choose the volume driver that matches your operating system and application needs.

To create a volume on Amazon Elastic Block Store (EBS) using the rexray/ebs volume driver plugin, run the command:

$ docker run -it \
    --name app1 \
    --volume-driver rexray/ebs \
    --mount src=ebs-vol,target=/var/lib/app1 \
    app1

This command provisions persistent volume storage on Amazon EBS and mounts it at /var/lib/app1 inside the app1 container.

Sample Questions:

Here is a quick quiz to help you assess your knowledge. Leave your answers in the comments below and tag us back.

Quick Tip - Questions below may include a mix of DOMC and MCQ types.

Which command can be used to remove a kubeapp stack?

  • [A] docker stack deploy kubeapp
  • [B] docker stack ls kubeapp
  • [C] docker stack services kubeapp
  • [D] docker stack rm kubeapp

Which command can be used to promote worker2 to a manager node? Select the right answer.

  • [A] docker promote node worker2
  • [B] docker node promote worker2
  • [C] docker swarm node promote worker2
  • [D] docker swarm promote node worker2

What is the command to list the stacks in the Docker host?

  • [A] docker stack deploy
  • [B] docker stack ls
  • [C] docker stack services
  • [D] docker stack ps

What is the maximum number of managers possible in a swarm cluster?

  • [A] 3
  • [B] 5
  • [C] 7
  • [D] No limit

... are one or more instances of a single application that run across the Swarm cluster.

  • [A] docker stack
  • [B] services
  • [C] pods
  • [D] None of the above

Conclusion

Once you have understood Swarm architecture, set up a cluster, and ensured high availability, you will have developed enough familiarity to tackle real-world projects. Swarm services help you automate the interaction between nodes and balance the load across distributed containers. By following this guide, you will understand the importance of container orchestration and how Docker Swarm offers a simple, no-fuss framework for keeping your containers healthy.

To test where you stand in your Docker certification journey, take the DCA Readiness Test at dca.kodekloud.com. On KodeKloud, you also get a learning path with recommendations, sample questions, and tips for clearing the DCA exam.
