Chris Noring for Microsoft Azure

Originally published at softchris.github.io

Learn Docker — from the beginning, part V Docker Compose, variables, volumes, networks and databases

Follow me on Twitter, happy to take your suggestions on topics or improvements /Chris

This article is part of a series:

  • Docker - from the beginning, Part I, covers why Docker, how it works, and basic concepts like images, containers and the usage of a Dockerfile. It also introduces basic Docker commands for managing these concepts.
  • Docker - from the beginning, Part II, is about Volumes: what they are, how they can work for you and primarily how they will create an amazing development environment
  • Docker - from the beginning, Part III, is about understanding how to work with Databases in a containerized environment, and in doing so we need to learn about linking and networks
  • Docker - from the beginning, Part IV, introduces Docker Compose; we learn how to manage a lot of containers and why Docker Compose is better to use than plain Docker commands
  • Docker - from the beginning, Part V, we are here

We will keep working on our project introduced in Part IV and in doing so we will showcase more Docker Compose features and essentially build out our project to cover everything you might possibly need.

In this part, we will cover:

  • Environment variables , we have covered these in previous parts, so this is mostly about how to set them in Docker Compose
  • Volumes , same thing with volumes; they have been covered in previous articles, though we will mention their use and how to work with them in Docker Compose
  • Networks and Databases , finally we cover Databases and Networks; this part is a bit tricky but hopefully we manage to explain it thoroughly

If at any point you feel confused, here is the repo this article is based on:

https://github.com/softchris/docker-compose-experiments

Resources

Using Docker and containerization is about breaking apart a monolith into microservices. Throughout this series, we will learn to master Docker and all its commands. Sooner or later you will want to take your containers to a production environment. That environment is usually the Cloud. When you feel you've got enough Docker experience have a look at these links to see how Docker can be used in the Cloud as well:

  • Containers in the Cloud Great overview page that shows what else there is to know about containers in the Cloud
  • Deploying your containers in the Cloud Tutorial that shows how easy it is to leverage your existing Docker skill and get your services running in the Cloud
  • Creating a container registry Your Docker images can be in Docker Hub but also in a Container Registry in the Cloud. Wouldn't it be great to store your images somewhere and actually be able to create a service from that Registry in a matter of minutes?

Environment variables

One of the things I’ve shown you in previous articles is how we can specify environment variables. Variables can be set in the Dockerfile, but we can also set them on the command line and thereby in Docker Compose, specifically in docker-compose.yaml:

// docker-compose.yaml

version: '3'
services:
  product-service:
    build:
      context: ./product-service
    ports:
      - "8000:3000"
    environment:
      - test=testvalue
  inventory-service:
    build:
      context: ./inventory-service
    ports:
      - "8001:3000"

Above we create an environment variable by defining environment followed by - test=testvalue, which means we create the variable test with the value testvalue.

We can easily test that this works by reading from process.env.test in our app.js file for the product-service directory.
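To illustrate, here is a minimal sketch of what reading that variable could look like in app.js; the readEnv helper name is mine, not from the repo:

```javascript
// Hypothetical helper: read an environment variable with a fallback,
// mirroring how app.js could read the `test` variable set in docker-compose.yaml.
function readEnv(name, fallback) {
  const value = process.env[name];
  return value !== undefined ? value : fallback;
}

// Inside the container this prints "testvalue"; outside, the fallback.
console.log('test =', readEnv('test', '(not set)'));
```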

Another way to test this is to run Docker Compose and query which environment variables are available, like so:

As you can see above, we first run docker-compose ps to get the containers that are part of this Docker Compose session, and then we run docker exec [container name] env to list the environment variables. A third option is to run docker exec -it [container name] bash, enter the container, and use bash to echo out the variable value. There are quite a few ways to manage environment variables with Docker Compose, so have a read in the official docs to see what else you can do.

Volumes

We’ve covered volumes in an earlier part of this series and we found them to be a great way to:

  • create a persistent space , ideal for log files or output from a database that we want to remain once we tear down and rerun our containers
  • turn our development environment into a Volume , which means we can start up a container, change our code and see those changes reflected without having to rebuild or tear down our container, a real time saver

Create a persistent space

Let’s see how we can deal with Volumes in Docker Compose:

// docker-compose.yml

version: '3.3'
services:
  product-service:
    build:
      context: ./product-service
    ports:
      - "8000:3000"
    environment:
      - test=testvalue
  inventory-service:
    build:
      context: ./inventory-service
    ports:
      - "8001:3000"
    volumes:
      - my-volume:/var/lib/data

volumes:
  my-volume:

Above we create a volume via the volumes command at the end of the file, and on the row below it we give it the name my-volume. Furthermore, in the inventory-service portion of our file we refer to the just-created volume and map it to /var/lib/data, a directory in the container that will be persisted through teardowns. Let’s check that it is correctly mapped:

As can be seen above, we first enter the container with docker exec and then navigate to our mapped directory; it is there, great :).

Let’s create a file in the data directory so we can prove that our volume mapping really works:

echo persist > persist.log

The above command creates a file persist.log with the content persist. Nothing fancy, but it does create a file that we can look for after tearing down and restarting our container.

Now we can exit the container. Next, let’s recap on some useful Volume commands:

docker volume ls

The above lists all the currently mounted volumes. We can see that our created Volume, compose-experiments_my-volume, is there.

We can dive into more details with:

docker volume inspect compose-experiments_my-volume

Ok, so it’s giving us some details about our volume such as Mountpoint, which is where files will be persisted when we write to our volume mapping directory in the container.

Let’s now bring down our containers with:

docker-compose down

This means that the Volume should still be there so let’s bring them all up with:

docker-compose up -d

Let’s enter the container next and see if our persist.log file is there:

Oh yeah, it works.

Turn your current directory into a Volume

Ok, for this we need to add a new volume and we need to point out a directory on our computer and a place in the container that should be in sync. Your docker-compose.yaml file should look like the following:

// docker-compose.yaml

version: '3.3'
services:
  product-service:
    build:
      context: ./product-service
    ports:
      - "8000:3000"
    environment:
      - test=testvalue
    volumes:
      - type: bind
        source: ./product-service
        target: /app
  inventory-service:
    build:
      context: ./inventory-service
    ports:
      - "8001:3000"
    volumes:
      - my-volume:/var/lib/data

volumes:
  my-volume:

The new addition is in product-service. We can see that we specify a volumes command with one entry. Let’s break down that entry:

  • type: bind , this creates a so-called bind mount, a type of volume better suited to syncing files between your local directory and your container
  • source , this is simply where your files are; we point at ./product-service, which means that as soon as we change a file under that directory, Docker will pick up on the change
  • target , this is the directory in the container; source and target are now in sync, so a change in source will also happen in target

Networks and databases

Ok then, this is the last part we aim to cover in this article. Let’s start with databases. All major vendors have a Docker image, like SQL Server, Postgres, MySQL and so on. This means we don’t need a build step to get them up and running, but we do need to set things like environment variables and of course open up ports so we can interact with them. Let’s have a look at how we can add a MySQL database to our solution, that is, to our docker-compose.yaml file.

Adding a database

Adding a database to docker-compose.yaml is about adding an already premade image. Lucky for us MySQL already provides a ready-made one. To add it we just need to add another entry under services: like so:

// docker-compose.yaml

product-db:
  image: mysql
  environment:
    - MYSQL_ROOT_PASSWORD=complexpassword
  ports:
    - 8002:3306

Let’s break it down:

  • product-db is the name of our new service entry; we choose this name
  • image is a new command we use instead of build ; we use it when the image is already built, which is true for most databases
  • environment , most databases need a certain number of variables set to allow connections, like username, password and potentially the name of the database; this varies per type of database. In this case we set MYSQL_ROOT_PASSWORD to instruct the MySQL instance what the password for the root user is. We should also consider creating a number of users with varying access levels
  • ports , this exposes the ports that will be open and thereby our way in for talking to the database. By typing 8002:3306 we say that the container's port 3306 should be mapped to port 8002 on the host
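The distinction between the two ports matters: from the host we use the mapped port 8002, while another container on the same Docker network would use the service name and the container port 3306. A small illustrative sketch (the function name is mine):

```javascript
// Illustrative only: choose connection settings depending on where we connect from.
// From the host machine: localhost + the mapped port (8002).
// From a container on the same Docker network: the service name + container port (3306).
function dbTarget(fromContainer) {
  return fromContainer
    ? { host: 'product-db', port: 3306 }
    : { host: '127.0.0.1', port: 8002 };
}

// How the mysql CLI on the host would reach the database:
console.log(dbTarget(false));
```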

Let’s see if we can get the database and the rest of our services up and running:

docker-compose up -d

Let’s verify with:

docker-compose ps OR docker ps

Looks good, our database service experiments_product-db_1 seems to be up and running on port 8002. Let’s see if we can connect to the database next. The below command will connect us to the database, fingers crossed ;)

mysql -uroot -pcomplexpassword -h 0.0.0.0 -P 8002

and the winner is…

Great, we did it. Next up let’s see if we can update one of our services to connect to the database.

Connecting to the database

There are three major ways we could be connecting to the database:

  • using docker client, we’ve tried this one already with mysql -uroot -pcomplexpassword -h 0.0.0.0 -P 8002
  • enter our container, we do this using docker exec -it [name of container] bash and then we type mysql inside of the container
  • connecting through our app, this is what we will look at next using the NPM library mysql

We will focus on the third choice, connecting to a database through our app. The database and the app will exist in different containers. So how do we get them to connect? The answer is:

  • same network , for two containers to talk to each other they need to be in the same network
  • database needs to be ready , it takes a while to start up a database, and for your app to be able to talk to it you need to ensure the database has started up correctly; this was fun/interesting/painful until I figured it out, so don’t worry, I’ve got you, we will succeed :)
  • create a connection object , ensure we set up the connection object correctly in app.js for product-service

Let’s start with the first item here. How do we get the database and the container into the same network? Easy, we create a network and we place each container in that network. Let’s show this in docker-compose.yaml:

// excerpt from docker-compose.yaml

networks:
  products:

We need to assign this network to each service, like so:

// excerpt from docker-compose.yaml

services:
  some-service:
    networks:  
      - products

Now, for the second bullet point, how do we know that the database has finished initializing? Well, we do have a property called depends_on; with it we can specify that one container should wait for another container to start up first. That means we can specify it like so:

// excerpt from docker-compose.yaml

services:
  some-service:
    depends_on:
      - db
  db:
    image: mysql

Great, so that solves it, right? Nope, nope, nope, hold your horses:

In version 2 of the Docker Compose file format there used to be an alternative where we could check a service’s health; if the health was good we could proceed to spin up our container. It looked like so:

depends_on:
  db:
    condition: service_healthy

This meant that we could wait for a database to initialize fully. This was not to last though; in version 3 this option is gone. Here is the doc page that explains why: control startup and shutdown order. The gist of it is that now it’s up to us to find out when our database is done and ready to connect to. Docker suggests several scripts for this, such as wait-for-it, dockerize and wait-for.

All these scripts have one thing in common: the idea is to probe a specific host and port, and when it replies, run our app. So what do we need to do to make that work? Well, let’s pick one of these scripts, namely wait-for-it, and list what we need to do:

  • copy the script into your service container
  • give the script execution rights
  • instruct the Dockerfile to run the script, with the database host and port as arguments, and then run the service once the script OKs it

Let’s start with copying the script from GitHub into our product-service directory so it now looks like this:

/product-service
  wait-for-it.sh
  Dockerfile
  app.js
  package.json

Now let’s open up the Dockerfile and add the following:

// Dockerfile

FROM node:latest

WORKDIR /app

ENV PORT=3000

COPY . .

RUN npm install

EXPOSE $PORT

COPY wait-for-it.sh /wait-for-it.sh

RUN chmod +x /wait-for-it.sh

Above we are copying the wait-for-it.sh file to our container and on the line below we are giving it execution rights. Worth noting is how we also remove the ENTRYPOINT from our Dockerfile, we will instead instruct the container to start from the docker-compose.yaml file. Let’s have a look at said file next:

// excerpt from docker-compose.yaml

services:
  product-service:
    command: ["/wait-for-it.sh", "db:3306", "--", "npm", "start"]
  db:
    # definition of db service below

Above we tell it to run the wait-for-it.sh file with db:3306 as an argument; once it gets a satisfactory response we go on to run npm start, which starts up our service. Note that we use the service name db and the container port 3306, not the host-mapped port 8002: containers on the same network talk to each other directly on the container's own port. That sounds nice, will it work?

For full disclosure let’s show our full docker-compose.yaml file:

version: '3.3'
services:
  product-service:
    depends_on:
      - "db"
    build:
      context: ./product-service
    command: ["/wait-for-it.sh", "db:3306", "--", "npm", "start"]
    ports:
      - "8000:3000"
    environment:
      - test=testvalue
      - DATABASE_PASSWORD=complexpassword
      - DATABASE_HOST=db
    volumes:
      - type: bind
        source: ./product-service
        target: /app
    networks:
      - products
  db:
    build: ./product-db
    restart: always
    environment:
      - "MYSQL_ROOT_PASSWORD=complexpassword"
      - "MYSQL_DATABASE=Products"
    ports:
      - "8002:3306"
    networks:
      - products
  inventory-service:
    build:
      context: ./inventory-service
    ports:
      - "8001:3000"
    volumes:
      - my-volume:/var/lib/data

volumes:
  my-volume:

networks:
  products:

Ok, so to recap: we placed product-service and db in the network products, we downloaded the script wait-for-it.sh, and we told it to run before we spin up the app, probing the database's host and port, which responds as soon as the database is ready for action. That means we have one step left: we need to adjust the app.js file of product-service, so let’s open that file up:

// app.js

const express = require('express')
const mysql = require('mysql');
const app = express()
const port = process.env.PORT || 3000;
const test = process.env.test;

let attempts = 0;
const seconds = 1000;

function connect() {
  attempts++;

  console.log('password', process.env.DATABASE_PASSWORD);
  console.log('host', process.env.DATABASE_HOST);
  console.log(`attempting to connect to DB time: ${attempts}`);

  const con = mysql.createConnection({
    host: process.env.DATABASE_HOST,
    user: "root",
    password: process.env.DATABASE_PASSWORD,
    database: 'Products'
  });

  con.connect(function (err) {
    if (err) {
      console.log("Error", err);
      setTimeout(connect, 30 * seconds);
    } else {
      console.log('CONNECTED!');
    }
  });

  // note: `con`, not `conn` — a typo here crashes the app on startup
  con.on('error', function (err) {
    if (err) {
      console.log('shit happened :)');
      connect()
    }
  });
}
connect();

app.get('/', (req, res) => res.send(`Hello product service, changed ${test}`))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

Above we define a connect() method that creates a connection by invoking createConnection() with an object as an argument. That argument needs to specify host, user, password and database, which seems perfectly reasonable. We also add a bit of logic to the connect() method: we invoke setTimeout(), meaning it will attempt another connection after 30 seconds. Because we use wait-for-it.sh that retry isn’t strictly needed, but we shouldn't rely on the script alone; application code should also be able to establish a connection on its own. Finally, we call con.on('error') because we can lose a connection, and we should be good citizens and ensure we get that connection back.
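The retry pattern inside connect() — try, and on failure schedule another attempt — can be isolated into a small helper. This is a sketch of the idea, not code from the repo:

```javascript
// Generic sketch of the retry logic used in connect(): call `probe` until it
// returns true, waiting `delayMs` between attempts, up to `maxAttempts`.
function retry(probe, delayMs, maxAttempts) {
  return new Promise((resolve, reject) => {
    let attempts = 0;
    const tick = () => {
      attempts++;
      if (probe()) return resolve(attempts);      // e.g. the DB accepted a connection
      if (attempts >= maxAttempts) return reject(new Error('gave up'));
      setTimeout(tick, delayMs);                  // like setTimeout(connect, 30 * seconds)
    };
    tick();
  });
}
```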

Anyway, we’ve done everything in our power. Because we’ve introduced changes to the Dockerfile, let’s rebuild everything with docker-compose build and then bring everything up with:

docker-compose up

and….

There it is, Houston WE HAVE A CONNECTION, or as my friend Barney likes to put it:

Setting up the database — fill it with structure and data

Ok, maybe you were wondering about the way we built the db service? That part of docker-compose.yaml looked like this:

// docker-compose.yaml

db:
  build: ./product-db
  restart: always
  environment:
    - "MYSQL_ROOT_PASSWORD=complexpassword"
    - "MYSQL_DATABASE=Products"
  ports:
    - "8002:3306"
  networks:
    - products

I would like you to look at build especially. We mentioned at the beginning of this article that we can pull down ready-made images of databases. That statement is still true, but by creating our own Dockerfile we can not only specify the database we want, we can also run commands like creating our database structure and inserting seed data. Let’s have a closer look at the directory product-db:

/product-db
  Dockerfile
  init.sql

Ok, we have a Dockerfile, let’s look at that first:

// Dockerfile

FROM mysql:5.6

ADD init.sql /docker-entrypoint-initdb.d

We specify that init.sql should be copied into the directory /docker-entrypoint-initdb.d. The MySQL image executes any .sql (and .sh) files it finds in that directory the first time the container starts, which is what makes this work. Great, what about the content of init.sql?

// init.sql

CREATE DATABASE IF NOT EXISTS Products;

# create tables here
# add seed data inserts here

As you can see it doesn’t contain much for the moment, but we can definitely expand it with table definitions and seed data, which is important.
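For example, init.sql could be expanded along these lines; the table and seed rows below are made up purely for illustration:

```sql
CREATE DATABASE IF NOT EXISTS Products;
USE Products;

-- hypothetical table structure
CREATE TABLE IF NOT EXISTS products (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  quantity INT NOT NULL DEFAULT 0
);

-- hypothetical seed data
INSERT INTO products (name, quantity) VALUES
  ('tomato', 10),
  ('cucumber', 5);
```

Because the file runs only on the first start of the container, changing it later requires removing the container and its data before the new version takes effect.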

Summary

We have now come full circle in this series; we have explained everything from the beginning: the basic concepts, the core commands, how to deal with volumes and databases, and how to be even more effective with Docker Compose. This series will of course continue and go deeper into Docker, but this should hopefully take you a good bit of the way. Thanks for reading this far.


Top comments (32)

eds-satish

Thank you so much for sharing your valuable knowledge. This is such a gem!!!

Chris Noring

glad to hear it

Thiago Oliveira

Hi, tks for these articles. They are very helpful.

Running the example from your repo (github.com/softchris/docker-compos...) I had a problem when I tried to start the containers.

When I run the code the way it is, I got the following error in the console (I increased the default timeout. Thought maybe it would be the problem):

wait-for-it.sh: timeout occurred after waiting 60 seconds for db:8000

When I update the compose file to use the container db port instead of the one mapped in the host, it works:

wait-for-it.sh: db:3306 is available after 7 seconds

I think, it kinda makes sense because one container will be talking to the other one without using the host (maybe?)

Sorry if you explained/fixed this and I missed the piece.

tks a lot.

Anjum Rashid

I ran into same timeout issue as Thiago on Windows host and solved it by updating the internal DB port as instructed.

However, then I stumbled upon following issue on the product-service container:

sh: nodemon: command not found

Few hours of head scratching and unhealthy debugging later, the culprit turned out to be bind mount wiping out the node_modules directory. Got a workaround with this solution.

Better late than never!

Chris Noring

thank you for letting me know. I'm sure someone else will scratch their head here and see your comment

Chris Noring

hi Thiago.
Thanks for writing this. I'm trying to understand what kind of OS you are on, linux, mac, windows? Just trying to rule out if it is OS dependent or not?

Thiago Oliveira

Hi Cris

I'm using Ubuntu 18.04.2 LTS and Docker (Client and Host) 18.03.1-ce

Thank you.

opensas

WOW! Excellent guide indeed.

I got a bit lost in the last step. What does the "ADD init.sql /docker-entrypoint-initdb.d" really do (just copying the file?) and how come mysql knows it has to run it? Or is it something programmed in the mysql:5.6 image? and how does it auths to the db?

The other question I wanted to ask, is how would you combine all this with a git repo in order to trigger automated tests and, if everything goes ok, automatic deploy?

Chris Noring

Thanks for that :)

From hub.docker.com/_/mysql
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d

if I understand your question correctly that's a whole chapter in the official Docker documentation, docs.docker.com/docker-hub/builds/

Israel Muñoz

THIS IS AMAZING!

Thank you SO MUCH!

Now I have an urgent need to refactor mi monolithic API into microservices with Docker 😄

Chris Noring

That was the effect I was hoping for :) Good to hear you are finding it useful :)

harkinj

Brilliant series of articles. I have recommended it to all my colleagues. Thanks very much.

Chris Noring

happy to hear that it helps someone. Thank you :)

harkinj

On a broader subject. Do u think in the future will devs have to worry about setting up docker or k8s etc or rather just leverage a PaaS such as cloud foundry? I like dev but not ops :)

Chris Noring

That's a good question. Cloud is becoming more of a default. Tools will become better. It will be easier to do these things. There will be more services with a one click and you are in the cloud or just knobs and levers to pull to scale your app up and down. With that said there will always be a need to build these tools, question is it that falls into the laps of normal devs... DevOps is a very strong movement at the moment and understanding dev + devops imo makes you more into an architect

harkinj

Thanks for the info and your time. I believe the whole ops side of devops will die off in 3-5 years as the tooling, PaaS (e.g cloud foundry) etc make it easier to any ops required and we will just e back to dev again :) lets wait and see. There might be a topic for a blog in our mini discussion :)

Avijit Das Noyon

From:

conn.on('error', function(err) {
  if(err) {
    console.log('shit happened :)');
    connect()
  }
})

To:

con.on('error', function(err) {
  if(err) {
    console.log('shit happened :)');
    connect()
  }
})

one 'n' needs to be excluded

Armon Raphiel

Thank you for posting this series.

My biggest question: How do you deploy a docker-compose setup?

It seems like I only see people using compose for development.

  • Can we use it to deploy to a server?
  • Should we be using docker-compose at all for production servers?
  • What is the easiest way to get a bunch of containers running on a cloud service like Azure
Chris Noring

Here's an example of that: https://docs.microsoft.com/en-us/azure/app-service/containers/quickstart-multi-container

narayanncube

attempting to connect to DB time: 1
Example app listening on port 3000!
Error { Error: connect ECONNREFUSED 192.168.0.3:3306
at TCPConnectWrap.afterConnect as oncomplete
--------------------
at Protocol._enqueue (/app/node_modules/mysql/lib/protocol/Protocol.js:144:48)
at Protocol.handshake (/app/node_modules/mysql/lib/protocol/Protocol.js:51:23)
at Connection.connect (/app/node_modules/mysql/lib/Connection.js:119:18)
at connect (/app/app.js:27:7)
at Object. (/app/app.js:45:1)
at Module._compile (internal/modules/cjs/loader.js:816:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:827:10)
at Module.load (internal/modules/cjs/loader.js:685:32)
at Function.Module._load (internal/modules/cjs/loader.js:620:12)
at Function.Module.runMain (internal/modules/cjs/loader.js:877:12)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '192.168.0.3',
port: 3306,
fatal: true }

I get this error

Goran Paunović

Great series of articles. It really helps to write everything down as you learn. You will learn it better and you can help others to learn from your mistakes. I hope you will continue to learn us as you learn more.

Lionel

Hey Chris, thank your for the whole serie!!

Just a little detail to avoid errors for new readers, in the Turn your current directory into a Volume section, instead of version: 3 in the docker-compose.yaml, needs to be at least version 3.2 as you have in the repository file.

Again, thanks for the pretty well explained serie of posts!

Chris Noring

hi Lionel. Thanks for reaching out. Really appreciate your comment. I'll make sure to update