Introduction
Docker Compose is a tool for defining and running multi-container Docker applications. It allows you to define the services that make up your application and how they interact with each other. Docker Compose uses a YAML file to configure the application services, networks, and volumes.
Here are some of the key elements we can find in a Docker Compose YAML file (a minimal skeleton follows the list):
- version: Specifies the version of the Docker Compose file format being used.
- services: Defines the various services that make up the application. Each service is given a name and specifies its image, ports, environment variables, and any other necessary configuration options.
- networks: Specifies the networks that the application's services will use to communicate with each other.
- volumes: Defines the volumes that will be mounted to the containers in the application.
- configs: Specifies configuration files that will be injected into the containers as Docker Configs.
- secrets: Specifies secrets that will be injected into the containers as Docker Secrets.
- deploy: Specifies options for deploying the application as a stack, including replicas, update policies, and placement constraints.
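Putting these together, a minimal skeleton might look like the following (a sketch only, with hypothetical service, network, and volume names; configs, secrets, and deploy are mainly relevant when deploying as a Swarm stack):
version: '3.7'
services:
  app:
    image: nginx:latest          # service definition: image, ports, environment, ...
    ports:
      - "8080:80"
    networks:
      - app_net
    volumes:
      - app_data:/usr/share/nginx/html
networks:
  app_net: {}
volumes:
  app_data: {}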
💡 To read more about Docker Compose, you can look at the official documentation at https://docs.docker.com/compose/
Some important terminology
✨ Infrastructure as Code (IaC)
- The practice of deploying and managing IT infrastructure through code rather than manual, human-driven processes.
- Because the infrastructure is described in source code, it can be versioned and reviewed, improving management quality.
- Reduces the time infrastructure operators spend, since multiple systems can be provisioned in parallel.
- Saves cost through the efficiency gained by reducing that work time.
- Lowers the error rate, because only the operations defined in the code are performed.
✨ Container Scaling Out (Scale Out)
- Scale out refers to increasing the number of servers running a service as load grows, so that the load is distributed across them.
- Horizontal scaling of containers is one of the core techniques of microservices: only the containers responsible for a specific service are scaled out, which avoids unnecessary expansion of the whole system.
- Vertical scaling (Scale Up): expanding a server vertically by adding the resources it is short of, such as CPU and RAM.
- This is the traditional scaling method for monolithic systems: because the whole system cannot simply be run on additional servers, hardware is added to the existing one, which has inherent limits.
✨ Service Dependency and Discovery
- The services running in the containers of a project typically depend on one another (e.g., WEB-DB-Kafka).
- In a cloud environment each service runs on an instance, and instance details such as IP and port can easily change depending on the situation.
- Services that depend on each other are highly sensitive to such changes, so service discovery must be configured so that updated information is picked up quickly.
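Docker Compose gives us a basic form of this out of the box: containers on the same Compose network can resolve each other by service name through Docker's embedded DNS, so dependent services can use a stable name instead of an IP. The WordPress example later in this post relies on exactly that; the relevant fragment looks like this:
services:
  wordpress_db:
    image: mysql:5.7
  wordpress:
    environment:
      # the service name "wordpress_db" acts as a stable hostname,
      # no matter which IP the container is assigned
      WORDPRESS_DB_HOST: wordpress_db:3306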
Docker Compose WordPress
For starters, I will be creating an empty directory
mkdir /Compose
cd /Compose
Creating a new .yml file
vi docker-compose.yml
# Compose File Format Version
version: '3.7'

services:
  wordpress_db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: wordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
    networks:
      - wordpress_net
    volumes:
      - wordpress_data:/var/lib/mysql

  wordpress:
    depends_on:
      - wordpress_db
    image: wordpress:latest
    restart: always
    ports:
      - "80:80"
    environment:
      WORDPRESS_DB_HOST: wordpress_db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
    networks:
      - wordpress_net
    volumes:
      - wordpress_web:/var/www/html

volumes:
  wordpress_data: {}
  wordpress_web: {}

networks:
  wordpress_net: {}
Checking the docker-compose version
docker-compose -v
Docker Compose version v2.2.3
If we use the following,
docker-compose config
It validates the file and prints the resolved configuration of the docker-compose file in the current directory
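A couple of variations that can be handy here (these flags should be available in recent docker-compose releases):
docker-compose config             # validate and print the fully resolved configuration
docker-compose config --services  # list only the service names
docker-compose config -q          # quiet mode: validate only, exit non-zero on errors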
Now bringing up the services defined in the YAML file
docker-compose up -d
⠿ Network compose_wordpress_net Created 0.4s
⠿ Volume "compose_wordpress_data" Created 0.0s
⠿ Volume "compose_wordpress_web" Created 0.0s
⠿ Container compose-wordpress_db-1 Created 0.7s
⠿ Container compose-wordpress-1 Created 0.0s
👉 The -d option runs the containers in the background (detached mode)
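Because the containers are detached, their logs are not streamed to the terminal; they can still be followed with:
docker-compose logs -f            # follow the logs of all services in this compose file
docker-compose logs -f wordpress  # or only the wordpress service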
After the compose run is done, we can check that the Docker volumes and network were created as defined in the compose file
docker volume ls
local compose_wordpress_data
local compose_wordpress_web
docker network ls
755a8af21193 compose_wordpress_net bridge local
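Optionally, we can inspect that network to confirm both containers are attached to it; being on the same network is what lets the wordpress service reach the database simply by the hostname wordpress_db:
docker network inspect compose_wordpress_net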
We can also check the compose status
docker-compose ls
docker-compose ps
Now if we open the website using the IP address of our host system, we are greeted by the WordPress installation page
💡 When deleting composed containers, we can use
docker-compose down
however this won't delete the Docker volumes. We will need to include the -v option to delete the volumes as well
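For example (keep in mind that -v also removes the named volumes wordpress_data and wordpress_web, so the database contents are lost):
docker-compose down      # stop and remove the containers and the network
docker-compose down -v   # additionally remove the named volumes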
Docker Compose building Nginx
I will now demonstrate using Docker Compose to build an image from a Dockerfile
Creating a new directory
mkdir /Compose/build
cd /Compose/build
Creating an index.html
echo "Hello My Nginx" > index.html
Creating the Dockerfile
vi Dockerfile
FROM nginx:latest
LABEL maintainer "Author <Author@localhost.com>"
ADD index.html /usr/share/nginx/html
Creating the docker-compose YAML file
vi docker-compose.yml
version: '3.7'
services:
  web:
    image: myweb/nginx:v1
    build: .
    restart: always
Now using docker compose to build our Dockerfile
docker-compose -p myweb up -d --build
[+] Running 2/2
⠿ Network myweb_default Created 0.1s
⠿ Container myweb-web-1 Started 0.2s
👉 Here the "-p" option specifies a project name, and the "--build" option forces the image to be built from the Dockerfile instead of reusing or pulling an existing image. If "--build" is omitted, Compose will reuse an existing image (or try to pull one) when it is available, so passing it is recommended when working on builds to make sure the latest Dockerfile changes are used.
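The same result can also be reached in two explicit steps, which can make build errors easier to troubleshoot (a sketch, using the same project name):
docker-compose -p myweb build --no-cache   # rebuild the image from the Dockerfile
docker-compose -p myweb up -d              # then start (or recreate) the container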
We can confirm the process
docker-compose -p myweb ps -a
NAME COMMAND SERVICE STATUS PORTS
myweb-web-1 "/docker-entrypoint.…" web running 80/tcp
Another thing we can try is,
docker-compose -p myweb up -d --scale web=3
[+] Running 3/3
⠿ Container myweb-web-3 Started 0.9s
⠿ Container myweb-web-2 Started 0.9s
⠿ Container myweb-web-1 Started 0.9s
Checking the compose process again
docker-compose -p myweb ps
NAME COMMAND SERVICE STATUS PORTS
myweb-web-1 "/docker-entrypoint.…" web running 80/tcp
myweb-web-2 "/docker-entrypoint.…" web running 80/tcp
myweb-web-3 "/docker-entrypoint.…" web running 80/tcp
👉 With the "--scale" option, it's possible to horizontally scale specific service containers; explicitly declaring a scale of 1 reduces them back to a single container. However, this option cannot be used when a fixed host port is published, as multiple containers cannot be exposed to the outside on the same port number. To serve multiple instances of the same web container, a proxy server must be used.
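For example, scaling back down to a single container is just another up with a different scale value:
docker-compose -p myweb up -d --scale web=1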
Docker Compose HAProxy with Nginx LB
Starting with an empty directory
mkdir /Compose/prod
cd /Compose/prod
Creating an index.html
echo "Hello My Nginx" > index.html
Creating a Dockerfile for Nginx and HAProxy
vi Dockerfile_nginx
FROM nginx:latest
LABEL maintainer "Author <Author@localhost.com>"
ADD index.html /usr/share/nginx/html
WORKDIR /usr/share/nginx/html
vi Dockerfile_haproxy
FROM haproxy:2.3
LABEL maintainer "Author <Author@localhost.com>"
ADD haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
Configuring the haproxy.cfg file
vi haproxy.cfg
global
    log /dev/log local0
    log /dev/log local1 notice
    chroot /var/lib/haproxy
    stats timeout 30s
    user haproxy
    group haproxy
    daemon

defaults
    log global
    mode http
    option httplog
    option dontlognull
    option dontlog-normal
    option http-server-close
    maxconn 3000
    timeout connect 10s
    timeout http-request 10s
    timeout http-keep-alive 10s
    timeout client 1m
    timeout server 1m
    timeout queue 1m

listen stats
    bind *:9000
    stats enable
    stats realm Haproxy\ Stats\ Page
    stats uri /
    stats auth admin:haproxy1

frontend proxy
    bind *:80
    default_backend WEB_SRV_list

backend WEB_SRV_list
    balance roundrobin
    option httpchk HEAD /
    server prod-web-1 prod-web-1:80 check inter 3000 fall 5 rise 3
    server prod-web-2 prod-web-2:80 check inter 3000 fall 5 rise 3
    server prod-web-3 prod-web-3:80 check inter 3000 fall 5 rise 3
👉 In the backend section of the configuration file, each web server's status is checked with an HTTP HEAD request every 3 seconds. If the check fails 5 times in a row, the server is removed from the load-balancing group; once it becomes available again and passes 3 consecutive checks, it is added back.
Finally creating the docker compose YAML file
vi docker-compose.yml
version: '3.7'

services:
  proxy:
    depends_on:
      - web
    image: prod/haproxy:v1
    build:
      context: ./
      dockerfile: ./Dockerfile_haproxy
    restart: always
    ports:
      - "80:80"
      - "9000:9000"
    networks:
      - myweb_net

  web:
    image: prod/nginx:v1
    build:
      context: ./
      dockerfile: ./Dockerfile_nginx
    restart: always
    deploy:
      mode: replicated
      replicas: 3
    networks:
      - myweb_net

networks:
  myweb_net: {}
Using docker compose to build our Dockerfiles
docker-compose up -d --build
Successfully built 5deb0e3b5821
Successfully tagged prod/haproxy:v1
[+] Running 5/5
⠿ Network prod_myweb_net Created 0.2s
⠿ Container prod-web-3 Started 0.7s
⠿ Container prod-web-1 Started 0.7s
⠿ Container prod-web-2 Started 0.6s
⠿ Container prod-proxy-1 Started 1.5s
Once the job is done, we can check the processes
docker-compose ps
NAME COMMAND SERVICE STATUS PORTS
prod-proxy-1 "docker-entrypoint.s…" proxy running 0.0.0.0:80->80/tcp, 0.0.0.0:9000->9000/tcp
prod-web-1 "/docker-entrypoint.…" web running 80/tcp
prod-web-2 "/docker-entrypoint.…" web running 80/tcp
prod-web-3 "/docker-entrypoint.…" web running 80/tcp
To test whether load balancing is working between these 3 web servers, we can add a different entry to each server's index page
docker exec -it prod-web-1 /bin/bash
root@c6f1c535ce6c:/usr/share/nginx/html# echo "prod-web-1 Server Main Page" >> index.html
root@c6f1c535ce6c:/usr/share/nginx/html# cat index.html
Hello My Nginx
prod-web-1 Server Main Page
docker exec -it prod-web-2 /bin/bash
root@e86233007a76:/usr/share/nginx/html# echo "prod-web-2 Server Main Page" >> index.html
root@e86233007a76:/usr/share/nginx/html# cat index.html
Hello My Nginx
prod-web-2 Server Main Page
docker exec -it prod-web-3 /bin/bash
root@11aa89e08619:/usr/share/nginx/html# echo "prod-web-3 Server Main Page" >> index.html
root@11aa89e08619:/usr/share/nginx/html# cat index.html
Hello My Nginx
prod-web-3 Server Main Page
Testing our setup
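One simple way to test from the command line is to request the page repeatedly through the proxy (assuming we are on the Docker host; otherwise replace localhost with the host's IP address):
for i in 1 2 3 4 5 6; do curl -s http://localhost/; echo; done
The second line of each response should cycle through prod-web-1, prod-web-2, and prod-web-3.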
👉 We can confirm that round robin load balancing is working
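To see the health checks from the backend section in action, we can take one web container down and watch HAProxy remove it from rotation, then bring it back:
docker stop prod-web-2    # after about 15s (5 failed checks, 3s apart) it leaves the pool
# repeating the curl loop now only returns prod-web-1 and prod-web-3
docker start prod-web-2   # after 3 successful checks it rejoins the round robin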
Conclusion
In conclusion, Docker Compose is a powerful tool that simplifies the process of managing and deploying complex multi-container applications. Through hands-on experience building an Nginx server with HAProxy and a WordPress application, we have seen how Docker Compose streamlines the management of container orchestration, network configuration, and service scaling.