Chris Noring for Microsoft Azure

Posted on • Originally published at softchris.github.io

Learn Docker - from the beginning, part I images and containers

Follow me on Twitter, happy to take your suggestions on topics or improvements /Chris

Subscribe to my channel

This article is part of a series:

  • Docker — from the beginning, part I, we are here.
  • Docker — from the beginning, Part II, this is about Volumes and how we can use volumes to persist data but also how we can turn our development environment into a Volume and make our development experience considerably better
  • Docker — from the beginning, Part III, this is about how to deal with Databases, putting them into containers and how to make containers talk to other containers using legacy linking but also the new standard through networks
  • Docker — from the beginning, Part IV, this is how we manage more than one service using Docker Compose ( this is 1/2 part on Docker Compose)
  • Docker - from the beginning, Part V, this part is the second and concluding part on Docker Compose where we cover Volumes, Environment Variables and working with Databases and Networks

Now there are a ton of articles out there on Docker, but I struggle with the fact that none of them is really thorough and explains what goes on, or rather that’s my impression, feel free to disagree :). I should say I’m writing a lot of these articles for myself and my own understanding, and to have fun in the process :). I also hope they can be useful for you as well.

So I decided to dig relatively deep so that you all hopefully might benefit. TL;DR: this is the first part in a series of articles on Docker; this part explains the basics and the reasons I think you should use Docker.

This article really is Docker from the beginning: I assume no prior knowledge, I assume nothing. Enjoy :)


In this article, we will attempt to cover the following topics

  • Why Docker and what is it, this is probably the most important part of the article, why Docker, why not some other technology or status quo? I will attempt to explain what Docker is and what it consists of.
  • Docker in action, we will dockerize an application to showcase that we understand and can use the core concepts that make up Docker.
  • Improving our setup, we should ensure our solution does not rely on static values. We can ensure this by creating and setting environment variables whose values we can read from inside of our application.
  • Managing our container, it’s fairly easy to get a container up and running, but let’s look at how to manage it; after all, we don’t want the container to be up and running forever. Even if a container is a very lightweight thing, it adds up, and it can block ports that you want to use for other things.

Remember that this is the first part of a series, and that other topics such as Volumes, Linking, Microservices and Orchestration will be covered in future parts.

Resources

Using Docker and containerization is about breaking apart a monolith into microservices. Throughout this series, we will learn to master Docker and all its commands. Sooner or later you will want to take your containers to a production environment. That environment is usually the Cloud. When you feel you've got enough Docker experience have a look at these links to see how Docker can be used in the Cloud as well:

  • Sign up for a free Azure account, to use containers in the Cloud, like a private registry, you will need a free Azure account
  • Containers in the Cloud, a great overview page that shows what else there is to know about containers in the Cloud
  • Deploying your containers in the Cloud, a tutorial that shows how easy it is to leverage your existing Docker skills and get your services running in the Cloud
  • Creating a container registry, your Docker images can be in Docker Hub but also in a Container Registry in the Cloud. Wouldn't it be great to store your images somewhere and be able to create a service from that Registry in a matter of minutes?

Why Docker and what is it

Docker helps you create a reproducible environment. You can specify the exact version of different libraries, different environment variables and their values among other things. Most importantly you can run your application in isolation inside of that environment.

The big question is: why would we want that?

  • onboarding, every time you onboard a new developer on a project they need to set up a lot of things, like installing SDKs, development tools and databases, adding permissions and so on. This is a process that can take from one day up to two weeks
  • environments look the same, using Docker you can create DEV, STAGING and PRODUCTION environments that all look the same. That is really great, because before Docker/containerization you could have environments that were similar but with small differences, and when you discovered a bug you could spend a lot of time chasing down its root cause. Sometimes the bug was in the source code itself, but sometimes it was due to some difference in the environment, and that usually took a long time to determine
  • works on my machine, this point is much like the one above, but because Docker creates these isolated containers, where you specify exactly what they should contain, you can also ship these containers to the customers and they will operate in exactly the same way as they did on your development machine(s)

What is it

Ok, so we’ve mentioned some great reasons above why you should investigate Docker but let's dive more into what Docker is. We’ve established that it lets us specify an environment like the OS, how to find and run the apps and the variables you need, but what else is there to know about Docker?

Docker creates stand-alone packages called containers that contain everything that is needed for you to run your application. Each container gets its own share of CPU, memory and network resources, and, rather than bundling a full operating system the way a virtual machine does, it shares the host's kernel, which is what makes it so lightweight. The first thing that comes to mind when I describe the above is a virtual machine, but Docker differs in how it shares and dedicates resources. Docker uses a so-called layered file system, which enables containers to share common parts, and the end result is that containers are far less of a resource hog on the host system than a virtual machine.

In short, Docker containers contain everything you need to run an application, including the source code you wrote. Containers are also isolated, secure and lightweight units on your system. This makes it easy to create multiple microservices that are written in different programming languages, that use different versions of the same library, and even different versions of the same OS.

If you are curious about how exactly Docker does this, I urge you to have a look at the following links on the layered file system and the library runc, and also this great Wikipedia overview of Docker.

Docker in action

Ok, so we covered what Docker is and some of its benefits. We also understood that the thing that eventually runs our application is called a container. But how do we get there? Well, we start out with a description file called a Dockerfile. In this Dockerfile, we specify everything we need in terms of OS, environment variables and how to get our application in there.

Now we will jump in at the deep end. We will build an app and Dockerize it, so we will have our app running inside of a container, isolated from the outside world but reachable on ports that we explicitly open up.

We will take the following steps:

  • create an application, we will create a Node.js Express application, which will act as a REST API.
  • create a Dockerfile, a text file that tells Docker how to build our application
  • build an image, the pre-step to having our application up and running is to first create a so-called Docker image
  • create a container, this is the final step in which we will see our app up and running, we will create a container from a Docker image

Creating our app

We will now create an Express Node.js project and it will consist of the following files:

  • app.js, this is the file that spins up our REST API
  • package.json, this is the manifest file for the project, here we will see all the dependencies like express, but we will also declare a start script so we can easily start our application
  • Dockerfile, this is a file we will create to tell Docker how to Dockerize our application

To generate our package.json we just place ourselves in the project's directory and type:

npm init -y

This will produce the package.json file with a bunch of default values.

Then we should add the dependency we are about to use, namely the library express. We install it by typing:

npm install express --save
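
For reference, once express is installed and we have added the start script mentioned earlier, our package.json might look something like the sketch below (the name is just an example, and the exact version numbers and defaults npm generates will differ on your machine):

{
  "name": "my-app",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "start": "node app.js"
  },
  "dependencies": {
    "express": "^4.17.1"
  }
}

With that start script in place, npm start does the same thing as typing node app.js.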

Let’s add some code

Now that we have done all the prework of generating a package.json file and installing dependencies, it's time to add the code needed for our application to run. We add the following code to app.js:

// app.js
const express = require('express')

const app = express()

const port = 3000

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

We can try and run this application by typing:

node app.js

Going to http://localhost:3000 in a web browser, we should now see:

[Screenshot: the browser displaying the text Hello World!]

Ok so that works, good :)

One little comment though: take note of the fact that we hardcode the port to 3000 here; we will need that value to match when we later create our Dockerfile.

Creating a Dockerfile

So the next step is creating our Dockerfile. This file acts as a manifest but also as a build instruction file: how to get our app up and running. Ok, so what is needed to get the app up and running? We need to:

  • copy all app files into the docker container
  • install dependencies like express
  • open up a port in the container that can be accessed from the outside
  • instruct the container how to start our app

In a more complex application, we might need to do things like setting environment variables, setting credentials for a database or running a database seed to populate it, and so on. For now, we only need the things we specified in our bullet list above. So let's try to express that in our Dockerfile:

# Dockerfile

FROM node:latest

WORKDIR /app

COPY . .

RUN npm install

EXPOSE 3000

ENTRYPOINT ["node", "app.js"]

Let’s break the above commands down:

  • FROM, this is us selecting an OS image from Docker Hub. Docker Hub is a global repository that contains images that we can pull down locally. In our case we are choosing node, an official Linux-based image that comes with Node.js installed. We also specify that we want the latest version of it, by using the tag :latest (see the note on pinning right after this list)
  • WORKDIR, this simply means we set a working directory. This is a way to set up for what is to happen later, in the next command below
  • COPY, here we copy the files from the directory we are standing in into the directory specified by our WORKDIR command
  • RUN, this runs a command in the terminal, in our case we are installing all the libraries we need to build our Node.js express application
  • EXPOSE, this means we are opening up a port, it is through this port that we communicate with our container
  • ENTRYPOINT, this is where we state how to start up our application. The commands need to be specified as an array, so the array ["node", "app.js"] will be translated to node app.js in the terminal
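
A quick note on that :latest tag: for reproducible builds, it is common to pin the base image to a specific version instead, so a new Node.js release doesn't silently change what you build on. The version below is just an example:

# Pin the base image to a specific Node.js major version instead of :latest
FROM node:18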

Quick overview

Ok, so now we have created all the files we need for our project and it should look like this:

-| app.js // our express app
-| Dockerfile // our instruction file that Docker will read from
-| node_modules/ // directory created when we run npm install
-| package.json // npm init created this
-| package-lock.json // created when we installed libraries from NPM

Building an image

There are two steps that need to be taken to have our application up and running inside of a container, those are:

  • creating an image, with the help of the Dockerfile and the command docker build we will create an image
  • starting the container, now that we have an image from the action we took above, we need to create a container from it

First things first, let’s create our image with the following command:

docker build -t chrisnoring/node:latest .

The above instruction creates an image. The -t flag lets us name and tag the image, here chrisnoring/node:latest. The . at the end is important, as it tells Docker where your Dockerfile is located; in this case, it is the directory you are standing in. If you don't have the OS image that we ask for in the FROM command, it will first be pulled down from Docker Hub, and then your specific image is built.

Your terminal should look something like this:

[Screenshot: terminal output of docker build, showing node:latest being pulled from Docker Hub and each Dockerfile step executing]

What we see above is how the OS image node:latest is pulled down from Docker Hub and then each of our commands is executed, like WORKDIR, RUN and so on. Worth noting is how it says removing intermediate container after each step. This is Docker being smart and caching the file layer each command produces, so later builds go faster. In the end, we see successfully built, which is our cue that everything was constructed successfully. Let's have a look at our image with:

docker images

[Screenshot: docker images output listing the newly built chrisnoring/node image]

We have an image, success :)
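
Before moving on, a side note on the layer caching we just saw: because Docker caches each layer, a common pattern (not needed for this small tutorial) is to copy package.json and run npm install before copying the rest of the source. That way, editing app.js doesn't invalidate the cached dependency layer, and rebuilds skip the slow npm install step. A sketch of that variant of our Dockerfile:

# Dockerfile (cache-friendly variant)

FROM node:latest

WORKDIR /app

# Copy only the manifest first, so the npm install layer stays cached
# until package.json itself changes
COPY package*.json ./

RUN npm install

# Now copy the rest of the source; edits here won't re-run npm install
COPY . .

EXPOSE 3000

ENTRYPOINT ["node", "app.js"]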

Creating a container

The next step is to take our image and construct a container from it. A container is the isolated piece that runs our app inside of it. We create a container using docker run. The full command looks like this:

docker run chrisnoring/node

That’s not really good enough though, as we need to map the internal port of the app to an external one on the host machine. Remember, this is an app that we want to reach through our browser. We do the mapping by using the flag -p, like so:

-p [external port]:[internal port]

The full command now looks like this:

docker run -p 8000:3000 chrisnoring/node

Ok, running this command means we should be able to visit our container by going to http://localhost:8000; remember, 8000 is our external port, which maps to the internal port 3000. Let's see, let's open up a browser:

[Screenshot: the browser showing Hello World! served from http://localhost:8000]

There we have it folks, a working container :D
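
If you prefer staying in the terminal, you can verify the same thing with curl instead of a browser:

# Ask the app for its root route; it should answer with our greeting
curl http://localhost:8000
Hello World!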


Improving our setup with Environment Variables

Ok, so we’ve learned how to build our Docker image and how to run a container, and thereby our app inside of it. However, we could handle the PORT part a bit more nicely. Right now we need to keep track of the port we start the express server with inside of app.js, to make sure it matches what we write in the Dockerfile. It shouldn’t have to be that way; it’s static and error-prone.

To fix it we could introduce an environment variable. This means that we need to do two things:

  • add an environment variable to the Dockerfile
  • read from the environment variable in app.js

Add an environment variable

For this we need to use the command ENV, like so:

ENV PORT=3000

Let’s add that to our Dockerfile so it now looks like so:

FROM node:latest

WORKDIR /app

COPY . .

ENV PORT=3000

RUN npm install

EXPOSE 3000

ENTRYPOINT ["node", "app.js"]

Let’s do one more change, namely updating EXPOSE to use our variable, so we get rid of static values and rely on variables instead, like so:

FROM node:latest

WORKDIR /app

COPY . .

ENV PORT=3000

RUN npm install

EXPOSE $PORT

ENTRYPOINT ["node", "app.js"]

Note above how we changed our EXPOSE command to $PORT; any variable we use needs to be prefixed with a $ character:

EXPOSE $PORT

Read the environment variable value in app.js

We can read values from environment variables in Node.js like so:

process.env.PORT

So let’s update our app.js code to this:

// app.js
const express = require('express')

const app = express()

const port = process.env.PORT

app.get('/', (req, res) => res.send('Hello World!'))

app.listen(port, () => console.log(`Example app listening on port ${port}!`))

NOTE, when we make a change to our app.js or our Dockerfile we need to rebuild our image. That means we need to run the docker build command again, and prior to that we need to have torn down our container with docker stop and docker rm. More on that in the upcoming sections.
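
A nice side effect of reading the port from an environment variable is that we can override it when starting the container, without touching the Dockerfile. Docker's -e flag sets an environment variable at run time; a small sketch:

# Override PORT at run time; the app now listens on 4000 inside the container,
# so we map the host port to 4000 instead of 3000
docker run -p 8000:4000 -e PORT=4000 chrisnoring/node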

Managing our container

Ok, so you have just started your container with docker run and you notice that you can’t shut it off in the terminal. Panic sets in ;) At this point you can go to another terminal window and do the following:

docker ps

This will list all running containers; you will be able to see the container's name as well as its id. It should look something like this:

[Screenshot: docker ps output showing the running container, including its CONTAINER ID and NAMES columns]

As you see above, we have the CONTAINER ID and NAMES columns; either value will work to stop our container, because that is what we need to do, like so:

docker stop f40

We opt for using the CONTAINER ID and only its first three characters; we don't need more, as long as they uniquely identify the container. This will effectively stop our container.
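
As an aside, if you'd rather not deal with generated ids at all, docker run lets you name the container yourself with the --name flag, and that name works anywhere an id does. The name my-node-app below is just an example:

# Give the container an explicit name at creation time...
docker run -p 8000:3000 --name my-node-app chrisnoring/node

# ...and stop it by name instead of by id
docker stop my-node-app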

Daemon mode

We could do like we did above and open a separate terminal tab, but running the container in daemon mode is a better option. This means that we run the container in the background, and all output from it will not be visible. To make this happen we simply add the flag -d (short for detached). Let's try that out:
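
Applied to our earlier run command, that looks like this:

# Run the container in the background (detached/daemon mode)
docker run -d -p 8000:3000 chrisnoring/node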

[Screenshot: docker run -d printing the new container's id and returning to the prompt]

What we get now is just the container id back; that's all we're ever going to see. Now it's easier for us to stop the container if we want, by typing docker stop 268, where 268 is the first three characters of the above id.

Interactive mode

Interactive mode is an interesting one. It allows us to step into a running container and list files, add/remove files, or do just about anything we could do in, for example, bash. For this, we need the command docker exec, like so:

[Screenshot: docker exec -it 268 bash opening a shell inside the container, followed by an ls of its contents]

Above we run the command:

docker exec -it 268 bash

NOTE, the container needs to be up and running. If you've stopped it previously you should start it with docker start 268. Replace 268 with whatever id your container got when you typed docker run.

268 is the first three characters of our container's id, -it means interactive mode, and our argument bash at the end means we will run a bash shell.

We also run the command ls once we get the bash shell up and running; that means we can easily list what's in the container to verify we built it correctly, and it's a good way to debug as well.

If we just want to run a one-off command in the container, like a node command for example, we can type:

docker exec 268 node app.js

That will run the command node app.js inside the container.

Docker kill vs Docker stop

So far, we have been using docker stop as a way to stop the container. There is another way of stopping the container, namely docker kill, so what is the difference?

  • docker stop, this sends the signal SIGTERM, followed by SIGKILL after a grace period (see the note right after this list). In short, this is a way to bring down the container more gracefully, meaning it gets to release resources and save state
  • docker kill, this sends SIGKILL right away. This means resource release or state saving might not work as intended. In development it doesn't really matter which one of the two commands is used, but in a production scenario it is probably wiser to rely on docker stop
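
The grace period for docker stop defaults to 10 seconds; if your app needs longer to shut down cleanly, it can be adjusted with the --time (-t) flag:

# Give the container 30 seconds to exit on SIGTERM before SIGKILL is sent
docker stop -t 30 268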

Cleaning up

During the course of development you will end up creating tons of containers, so make sure you clean up by typing:

docker rm id-of-container
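
Once a lot of stopped containers have piled up, a common pattern is to remove them all in one go by feeding every container id to docker rm (running containers are not removed; docker rm just prints an error for them):

# -a lists all containers, -q prints only their ids
docker rm $(docker ps -a -q)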

Summary

Ok, so we have explained Docker from the beginning. We've covered the motivations for using it and the basic concepts. Furthermore, we've looked into how to dockerize an app, and in doing so covered some useful Docker commands. There is so much more to know about Docker, like how to work with databases and volumes, how to link containers and why, and how to spin up and manage multiple containers, also known as orchestration.

But this is a series of articles; we have to stop somewhere, or this article will become very long. Stay tuned for the next part, where we will talk about volumes and databases.

Acknowledgments

Thank you, Dan Wahlin, for your amazing course on Docker; a lot of things about Docker clicked for me because of your course.

Follow me on Twitter, I'm happy to answer your queries and questions, and to take suggestions for topics.
