
Robert Cooper

Posted on • Originally published at robertcooper.me

Docker guide

The purpose of this guide is to explain the most important Docker concepts so that you can work effectively with Docker for application development.

What is Docker?

Docker is a tool that helps run applications in isolated environments.

Applications need to be able to run in isolated environments so that they behave as expected across different machines/servers. This matters for developing an application on a local machine and having it run as expected in a deployed environment. It also matters for developers, who should be able to run an application as expected on any computer to get up and running quickly and collaborate with others no matter how their computers are configured.

The reason why running applications on different machines is so difficult is that all the right versions of an application's dependencies must be installed for it to run as expected. For example, if trying to run an API that is built with Node.js, and it was developed and tested on a machine using Node.js 12.8, it might not necessarily run the same on a machine that has Node.js version 10.18 installed. The same goes for any other dependency used in applications, such as python, ruby, php, typescript, nginx, apache, mysql, postgres, etc. Docker makes it possible to build containerized applications that hold the right versions of all dependencies and can run as intended on different machines.

Docker has 4 types of "objects" it uses to be able to create these isolated environments: images, containers, volumes, and networks. The two objects users will most often be directly working with are Docker images and Docker containers.

Docker objects: images, containers, volumes, and networks

Docker images contain the code that is needed to run an application and Docker containers are the isolated environments that are running the code from Docker images. A Docker image can be thought of as the blueprints and code used by a Docker container. Docker images can be saved to image registries for others to download the images. The most popular Docker image registry is Docker Hub. You can think of image registries as the NPM equivalent for Docker images.

Read more about Docker images and containers in the Docker images and Docker containers sections of the guide, respectively.

Docker volumes are used to persist data that is generated by running Docker containers. Saving data to a Docker volume permits an application to keep the same data even if the Docker container has been removed, recreated, or restarted. Docker volumes are also useful if two different containers should have access to the same data, since both containers can point to the same volume.

Docker networks are used to make sure containers are isolated from each other and are also used to allow containers to communicate with each other.

Read more about Docker volumes and networks in the Docker volumes and Docker networks sections of the guide, respectively.


What about virtual machines?

Virtual machines (VMs) often come up in the same conversation as Docker since they are both used to create isolated environments.

With Docker, applications run in isolated environments called containers, and each one of these containers shares the operating system kernel of the machine it runs on. On the other hand, applications that run on virtual machines each run in their own operating system and don't share an underlying kernel. Virtual machines operate with the help of a hypervisor, which is responsible for running and managing the guest operating systems.


Type 1 Hypervisors (bare metal): The hypervisor sits directly on the hardware and each guest OS sits on the hypervisor, e.g. VMware ESXi, Oracle VM Server, Microsoft Hyper-V


Type 2 Hypervisors (hosted): The hypervisor sits on top of the host OS, e.g. Oracle VirtualBox, Parallels, VMware Workstation

Side note: Linux kernel is fundamentally used

To run Linux Docker containers, a Linux kernel needs to be present on the machine. Docker handles this for you if you install the Docker Desktop app for Mac OS or Windows, which ships with LinuxKit: a lightweight Linux kernel that runs on Mac OS or Windows operating systems and is used by the Docker engine. The lightweight Linux kernel sits on a hypervisor that ships natively with Windows (reference) and Mac OS (reference).

Docker hardware and software breakdown on Linux, Mac OS, and Windows OS

The advantages of using Docker over virtual machines overwhelmingly tip the scales in favor of Docker. Compared to virtual machines, Docker containers can get up and running within seconds rather than minutes, are lightweight (on the order of MBs rather than GBs), are easily configurable, and use few resources. Perhaps the only reason to use a virtual machine over Docker is if a high level of isolation is desired because of security concerns associated with Docker containers sharing the host operating system's kernel.


Docker engine

The Docker engine needs to be installed and running on any machine that wants to do anything related to Docker. Starting the engine means that a long-running background process (the Docker daemon which also corresponds with the dockerd command) gets started to create a server that can be interacted with via a REST API. The Docker CLI is the way most people interact with the REST API to manage Docker objects, but 3rd party apps are also able to interact directly with the REST API or via Docker engine SDKs.
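As a quick illustration, the REST API can be queried directly over the Docker daemon's Unix socket with curl (the API version in the URL path is an assumption and may vary with the installed engine version):

curl --unix-socket /var/run/docker.sock http://localhost/v1.40/containers/json  # returns the same data as `docker ps`, as JSON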



Installing Docker

The easiest way to install all Docker related dependencies is to install Docker Desktop. Docker Desktop comes with several Docker related tools including the Docker Engine, Docker CLI, and Docker Compose CLI (read about Docker Compose here).

Docker Desktop is available for download on Mac and Windows from the Docker website.

Once installed, ensure that Docker Desktop is running. If Docker Desktop is running, then the Docker engine is running, and the Docker CLI commands mentioned in this guide can be executed.


Indication in the Mac OS menu bar that Docker Desktop is running.

For Linux users, there is no Docker Desktop, so each component (Docker Engine, Docker CLI, and Docker Compose) must be installed individually.

On Linux, the Docker daemon must be started using the following command:

sudo systemctl start docker

The above command should work since systemctl ships with most Linux distributions, but if not, use sudo service docker start. There is also a way to automatically have Linux start Docker on system boot.


Dockerfile

A Dockerfile is a file that holds the instructions on how to build an image. The file generally begins by specifying a base image that will be used as the basis of the Docker image to be built. For example, if building a Python based API, a base image made up of a Linux OS with Python installed on it can be used. After specifying a base image, other instructions specify further details on how the Docker image should be built:

  • Environment variables to set in a container
  • Ports exposed by the image
  • Which files should be copied into the image
  • Which dependencies should be installed
  • Command to be executed when starting a container (e.g. yarn start if starting a Node.js API)
  • And more...

As an example, an image for a Node.js API might have a Dockerfile similar to the following:

# Base image that has Node version 12.16.1 installed on a Linux OS.
# This is an alpine Linux image that is smaller than the non-alpine Linux equivalent images.
FROM node:12.16.1-alpine3.11

# Installs some dependencies required for the Node.js API
RUN apk add --no-cache make g++

# Indicates that the API exposes port 3000
EXPOSE 3000

# Specifies the working directory. All the paths referenced after this point will be relative to this directory.
WORKDIR /usr/src/app

# Copies all local source files into the image's working directory
COPY . .

# Installs NPM dependencies using yarn
RUN yarn install

# Command to start the API which will get executed when a Docker container using this image is started.
CMD [ "yarn", "start" ]

Frequently used instructions are described in the table below. The full list of instructions can be found in the Dockerfile reference documentation.

  • FROM: Defines a base image
  • RUN: Executes a command in a new image layer
  • CMD: The command to be executed when running a container
  • EXPOSE: Documents which ports are exposed (not used for anything other than documentation)
  • ENV: Sets environment variables
  • COPY: Copies files/directories into the image
  • ADD: A more feature-rich version of the COPY instruction. COPY is preferred over ADD.
  • ENTRYPOINT: Defines a container's executable. See the difference between ENTRYPOINT and the CMD instruction here.
  • VOLUME: Defines which directory in an image should be treated as a volume. The volume will be given a random name, which can be found using the docker inspect command.
  • WORKDIR: Defines the working directory for subsequent instructions in the Dockerfile
  • ARG: Defines variables that can be passed with the docker build --build-arg option and used within the Dockerfile

If some files should be prevented from being copied into the Docker image, a .dockerignore file can be added at the same level as the Dockerfile to specify which files should not be copied in. Any file matched by the .dockerignore file will be skipped by the COPY and ADD instructions in the Dockerfile. This is useful to keep files that contain sensitive information (e.g. .env files that hold API keys) or large unnecessary files out of a Docker image. Make sure to know the correct syntax to use when specifying files to be ignored in the .dockerignore file.
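As a small illustrative example, a .dockerignore file for a Node.js project might look like the following (the exact entries depend on the project):

# Dependencies get installed inside the image, so don't copy them in
node_modules
# Keep files holding secrets, like API keys, out of the image
.env
# Version control metadata isn't needed in the image
.git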

Pro tip: Use Alpine Linux base images

Many base images have an Alpine Linux equivalent which is much smaller than the non-alpine equivalent. Building smaller Docker images is advantageous because you can save money on the data transfer/storage costs associated with Docker images. For example, Amazon's container registry service, ECR, charges users per GB of Docker images pulled from their registry. Smaller Docker images also mean pulling images from registries is quicker, which can have noticeable effects on CI times and local development wait times.

Learn more about the advantages of Alpine Linux images by reading "The 3 Biggest Wins When Using Alpine as a Base Docker Image".

Pro tip: Multi-stage builds

Since there are many advantages to keeping Docker image sizes to a minimum, removing any unnecessary compile-time dependencies from Docker images will help cut down on the overall size. The best way to avoid including unnecessary files in Docker images is by using multi-stage builds in a Dockerfile. Multi-stage builds allow "intermediate" images to be built where any necessary compile-time dependencies can be installed and then only the necessary files generated from these intermediate images can be copied into an optimized runtime image. Read this article by Greg Schier for a detailed explanation of how multi-stage builds work.
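To give a feel for the syntax, here is a minimal multi-stage Dockerfile sketch for a Node.js app; the yarn build script, the dist output directory, and the entry point path are assumptions about the project:

# Stage 1: a "builder" image that has all the compile-time dependencies
FROM node:12.16.1-alpine3.11 AS builder
WORKDIR /usr/src/app
COPY . .
RUN yarn install && yarn build

# Stage 2: the runtime image, which copies in only the built output
FROM node:12.16.1-alpine3.11
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/dist ./dist
COPY --from=builder /usr/src/app/package.json ./package.json
RUN yarn install --production
# Entry point path is illustrative
CMD [ "node", "dist/index.js" ]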


Docker images

Docker images contain the code and instructions used by Docker containers to know how to set up an application's environment.

Building and tagging images

Docker images are created using the docker build command. When building Docker images, they are given a tag specified by the --tag option. Tagging images is a way to give Docker images names/versions to know which image to pull from image repositories as well as to know which image should be used when running a container.

Here's an example of building and tagging an image.

docker build --tag my-app:1.0 .

Breaking down the above command:

  • docker build specifies that a Docker image is being created
  • --tag my-app:1.0 specifies that the image should be assigned a repository name of my-app and a tag of 1.0
  • . indicates that the Docker image should be built from a Dockerfile that is found within the current directory

The docker images command can be used to verify that the image has been created by listing out all the Docker images available:

docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
my-app                 1.0                 964baf4833d3        8 minutes ago       322MB

An image can also be given only a repository name with the --tag option, in which case a default tag of latest will be used.

docker build --tag my-app .
docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
my-app                 latest              964baf4833d3        8 minutes ago       322MB

In addition to the --tag option of the docker build command, Docker images can be tagged using the docker tag command. Since the same image can have multiple tags, the docker tag command allows a new tag to be applied to an image that has already been tagged via docker build --tag. Often the docker tag command can be avoided by tagging an image correctly with docker build --tag in the first place, but if an image ever needs an additional or different tag, the docker tag command is the way to do it.

Here's an example:

docker tag my-app:1.0 robertcooper/my-app:1.0

The above command applies the repository name robertcooper/my-app and tag 1.0 to the local image that has the repository name my-app and tag 1.0.

Listing out the images will yield the following results:

docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
robertcooper/my-app    1.0                 964baf4833d3        46 minutes ago      322MB
my-app                 1.0                 964baf4833d3        46 minutes ago      322MB

Notice how both images share the same image ID and only differ by the repository name. The repository name associated with an image is what determines to which Docker image registry/repository the image will be pushed when running the docker push command. Docker image repositories and how to push and pull images is discussed in the Image registries and pulling/pushing images section.

Pro tip: Beware of mutable Docker tag behavior

Docker image tags are mutable, which means that if an image is pushed to an image repository with a specific tag (e.g. 1.0.0), that tagged image can be overwritten with a different image that contains different code. This might be surprising for those more familiar with immutable dependency registries like npm. Mutable image tags have their benefits (such as making sure previously published versions of an image include security fixes), but they also mean that a user might not be pulling the exact image they expect, which could result in non-deterministic deployments. To make sure the exact same image is pulled every time, an image's digest can be specified, which is a "content-addressable identifier" that points to an image. If referencing images by their digest, instead of docker pull node:8.10.0, the command would look like docker pull node@sha256:06ebd9b1879057e24c1e87db508ba9fd0dd7f766bbf55665652d31487ca194eb.

"Attack of the mutant tags! Or why tag mutability is a real security threat" explains the pros/cons of a mutable tag system in the context of Docker and how to reference image digests.

"Overcoming Dockerโ€™s Mutable Image Tags" explains how to reference image digests and how Renovate can be used to detect if a new image digest has been published for a specific image tag to alert users when there is an updated image available.


Listing images

Use the docker images (or docker image ls) command to list the Docker images available on the Docker host.

docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
robertcooper/my-app    1.0                 964baf4833d3        46 minutes ago      322MB
my-app                 1.0                 964baf4833d3        46 minutes ago      322MB


Image registries and pulling/pushing images

Docker images are remotely saved in Docker image registries, with the default Docker image registry being Docker Hub. Docker image registries allow for Docker images to be stored and pulled down (i.e. downloaded) onto a Docker host to be used by a Docker container.

To pull an image from Docker Hub, use the docker pull command:

docker pull nginx:1.18.0

The above command will download the nginx Docker image tagged 1.18.0 from Docker Hub. After running the docker pull command, the image should show up when listing out the images using docker images.

docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
nginx                  1.18.0              20ade3ba43bf        6 days ago          133MB

If a tag is not explicitly specified when pulling an image, the image tagged with latest will be pulled.

docker pull nginx
docker images
REPOSITORY             TAG                 IMAGE ID            CREATED             SIZE
nginx                  latest              992e3b7be046        6 days ago          133MB

The above example uses the nginx image, which is an official Docker Hub image. Official Docker Hub images are images that are formally approved by Docker Hub and the images are regularly tested for security vulnerabilities. Read more about official Docker images here.

Anyone can create their own account and repository on Docker Hub and push images to the repository. Pushing images to Docker Hub means that the images are saved at Docker Hub in the repository specified using the docker push command. The docker push command takes the form of:

docker push <hub-user>/<repo-name>:<tag>

Example usage of this would look like the following:

docker push robertcooper/my-app:1.0

In the above example, this pushes a local image with a repository name of robertcooper/my-app and tag of 1.0 to Docker Hub.

To have the rights to push an image to an image registry, a user must first log in using docker login:

docker login --username robertcooper

The above command will attempt to authenticate to the robertcooper Docker Hub account. Executing the command will prompt a user for the password, and if successful, it will now be possible to push to Docker Hub repositories associated with the robertcooper Docker Hub account. Login with docker login is also required to pull images from private repositories.

Docker Hub is not the only image registry available, there are also other popular image registry offerings such as Amazon ECR, Google Container Registry, and Azure Container Registry.

Each Docker registry has its own pricing. Docker Hub allows for unlimited public repositories and 1 private repository on their free plan, but for more private repositories an upgrade to their Pro plan is required, which is $60 a year or $7 a month. Amazon ECR, on the other hand, charges $0.10 per GB/month for the storage space that the images take and additionally charges between $0.05 and $0.09 per GB of images that are pulled (i.e. data transfer out is charged, but data transfer in is free).

If pulling images from a repository that is not on Docker Hub, the repository name in the docker push and docker pull commands will need to be prefixed by the hostname corresponding to an image repository. docker push and docker pull commands that don't specify a hostname will default to Docker Hub as the image registry. For example, if pulling an image hosted on Amazon ECR, it might look similar to the following command:

docker pull 183746294028.dkr.ecr.us-east-1.amazonaws.com/robertcooper/my-app:1.0

In the above example, 183746294028.dkr.ecr.us-east-1.amazonaws.com is the hostname for the image repository, robertcooper/my-app is the name of the repository, and 1.0 is the tag.

Remember that authentication using docker login is required to push images, no matter which registry is used.


Removing images

To remove a Docker image, use the docker rmi or docker image rm command.

docker rmi my-app:1.0

Images that are in use by containers must have those containers removed before the image can be removed, or else the --force option can be passed to the docker rmi command.

docker rmi --force my-app:1.0

Here are two useful commands to clear out all images at once:

docker rmi $(docker images -a -q)  # remove all images
docker rmi $(docker images -a -q) -f  # same as above, but forces the images associated with running containers to also be removed


Saving and loading images

There may be situations where saving a Docker image to a file, and then loading the image onto a Docker host from that file, could be useful. For example, a CI check that builds a Docker image might use that same image in another CI check. Instead of pushing the image to an image repository and pulling it down in the other CI check, it may be beneficial to save the image to a file in persistent storage on the CI server and then load the image in the other CI check that uses the same built image.

To save a Docker image, use the docker save command:

docker save --output my-app.tar my-app:1.0

The above command will save the Docker image my-app:1.0 to a tarball file named my-app.tar.

The newly saved tarball file can be loaded into a Docker host as a Docker image using the docker load command:

docker load --input my-app.tar
Loaded image: my-app:1.0


Docker containers

Docker containers are isolated environments used to run applications. Docker containers are described by their corresponding Docker images.

Running containers

Docker containers are created by running an environment specified by a Docker image. Docker containers are started using the docker run command.

docker run my-app:1.0

The above command will create a container using the my-app:1.0 image. Executing the docker run command will boot up the container and also execute the command specified by the CMD instruction that was specified in the image's Dockerfile.

To verify that the docker container is running, execute docker ps to list out the running containers.

docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                               NAMES
ec488b832e0a        my-app:1.0             "docker-entrypoint.s…"   13 minutes ago      Up 13 minutes                                           practical_buck

Notice that the output of the above command shows there is one container running with a container ID of ec488b832e0a and the name of practical_buck. Both the container ID and container name are randomly generated and can be used in other Docker CLI commands that require the container name/ID. It's also possible to assign a more appropriate and memorable name to a container by passing a value to the --name option of the docker run command.

docker run --name my-app my-app:1.0

Now the name my-app is assigned to the running Docker container.

docker ps
CONTAINER ID        IMAGE                  COMMAND                  CREATED             STATUS              PORTS                               NAMES
6cf7fd48703e        my-app:1.0             "docker-entrypoint.s…"   13 minutes ago      Up 13 minutes                                           my-app


Detached mode and viewing logs

By default, running a docker container using docker run will attach the container's process output to the console in which the docker run command was executed. This means all the logs for the container appear in the console in real-time. However, it's also possible to run a container in detached mode by using the -d option, which would allow the console to be free to execute further commands while the docker container is running.

docker run -d my-app:1.0  # Runs a docker container in detached mode

If a container is running in detached mode, the logs can be viewed using the docker logs command.

docker logs ec488b832e0a 

The above command gets the logs for a container with an ID of ec488b832e0a. The container's name could also be used for the docker logs command to get the same result.

docker logs my-app


Exposing ports

If a running container exposes a port (e.g. port 3000), then a mapping of ports between the Docker host and the Docker container is required to access the application outside of Docker (e.g. to view a web app running in Docker in a Chrome browser at http://localhost:3000).

docker run -p 3000:3000 my-app:1.0

The above command will run a container that will be accessible at http://localhost:3000 on the host. Visiting http://localhost:3000 in the browser should display the running app (assuming the app is a web application).

Often the port numbers between the Docker host and Docker container end up being the same value, but it's also possible to map to a different port on the host machine, which can be desirable if a port on the host is already occupied.

docker run -p 8000:3000 my-app:1.0

The above command will run a container accessible at http://localhost:8000.

Stopping and removing containers

To stop a running Docker container, use the docker stop (or docker container stop) command.

docker stop my-app

docker stop will send a SIGTERM signal to the container's main process to try to gracefully terminate it. If after 10 seconds the container hasn't stopped, a SIGKILL signal will be sent, which forcefully terminates the running process.
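The 10-second grace period can be adjusted with the -t (--time) option of docker stop:

docker stop -t 30 my-app  # wait up to 30 seconds for a graceful exit before sending SIGKILL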

There is also the docker kill (or docker container kill) command which will immediately send a SIGKILL signal to the container for a forceful termination of the container's process.

docker kill my-app

It is possible to restart a stopped container using the docker start (or docker container start) command.

docker start my-app

To remove containers no longer in use, use the docker rm command.

docker rm my-app

By default, the docker rm command will only remove stopped containers. To remove containers that are running, use the -f (--force) option.

docker rm -f my-app

Using the -f option will send a SIGKILL signal to the running container to stop it and then the container will be removed.


Listing containers

To view a list of running containers, use the docker ps (or docker container ls) command. By default, the containers shown with docker ps are the containers that are running. To include stopped containers, pass the -a (--all) option to the command.

docker ps
docker ps -a  # includes stopped containers


Execute commands in a running container

To run a command inside a running container, use the docker exec command.

docker exec my-app ls
Dockerfile
README.md
node_modules
package.json
pages
public
styles
yarn.lock

A useful command to run with docker exec is one that connects to the container's shell:

docker exec -it my-app sh

Running the above command will connect the command line to the Docker container's shell, where it will be possible to execute further commands, such as cd, ls, cat, echo, etc. The -it option is required to have the command line shell connected to the Docker container's shell. For more details on what the -it option does, read this article.

Side note: bash != sh

In the above command, sh is used instead of bash since bash might not be installed on the Docker image. For example, Alpine Linux Docker images don't have bash installed on them by default, so bash would either need to be installed via an instruction in the Dockerfile (see this Stack Overflow answer to do so) or sh could be used since it comes installed on Alpine Linux. sh should be available on most/all Linux distributions, whereas bash might not be natively installed. bash is a superset of sh (in the same way that TypeScript is a superset of JavaScript), which means that bash has some additional functionality over sh, but for typical usage in the context of docker exec -it my-app sh, there shouldn't be a need for the extra features provided by bash.

For more details read this great resource on Stack Overflow which answers the question: "Difference between sh and bash".

The docker run command can also be used to run a command in a Docker container. The difference between docker run and docker exec is that docker exec needs to be used on a container that is already running, whereas the docker run command will create a new container to run the specified command.

docker run my-app:1.0 ls

The above command will start a container with the my-app:1.0 image and execute the ls command. Listing out the Docker containers (docker ps --all) before and after executing a docker run command will show that a new container was created.

Remember that the docker run command, as explained at the beginning of the section on Docker containers, is also used to create a Docker container and will run any commands that are specified in the container's associated image. This is probably how the docker run command is most often used, however, it can also be used to run one-off commands as explained in the aforementioned paragraphs.


Get details of a container

To get details about a docker container, the docker inspect command can be used.

docker inspect my-app

The output is quite long for the docker inspect command since it includes a ton of information related to the Docker container, such as its full ID, when the container was created, the command the container ran when it started, the network settings, and a whole lot more.

The docker inspect command is not exclusive to Docker containers, but can be used for other Docker objects such as Docker images.

docker inspect my-app:1.0  # inspect a Docker image
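Since the full output is so long, the --format option (which takes a Go template) is handy for extracting a single field from the docker inspect output. For example, to print only a container's IP address:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' my-app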

To get live data on the resource consumption usage of a Docker container, such as CPU, memory, and I/O, use the docker stats command.

docker stats my-app
CONTAINER ID        NAME                CPU %               MEM USAGE / LIMIT     MEM %               NET I/O             BLOCK I/O           PIDS
4eef7af61c6f        my-app              0.03%               46.73MiB / 1.944GiB   2.35%               906B / 0B           0B / 0B             18

Side note: Resource limits on the Docker host

The total CPU and Memory used to calculate the CPU % and MEM % values don't necessarily correspond with the total CPU and memory available on the physical machine since the Docker host can be configured to have a fraction of the CPUs/RAM available on the machine. CPU, memory, and disk space limits can be easily configured with Docker Desktop to make sure that the Docker host doesn't have access to too many resources on the machine.


Docker Desktop configuration of resource limits on the Docker host.
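In addition to the host-wide limits configurable in Docker Desktop, limits can also be placed on an individual container when starting it:

docker run -d --name my-app --memory 512m --cpus 1.5 my-app:1.0  # caps the container at 512 MB of RAM and 1.5 CPUs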


Docker volumes

Docker volumes are used to persist data in Docker containers. That means that a container can be stopped and restarted and the application's data/state can be maintained. Docker volumes can also allow multiple different containers to share data by having all the containers point to the same volume.

Docker volumes are saved in the Docker storage directory on the host filesystem. The Docker storage directory is /var/lib/docker/volumes on Linux, but it shouldn't matter to the user where Docker saves these volumes since users will be relying on the Docker CLI to interact with volumes.

A volume can be mounted to a container by passing the -v (--volume) option to the docker run command when starting a container.

docker run -d --name my-app -v my-volume:/usr/src/app my-app:1.0

The above command will mount a volume named my-volume at the /usr/src/app directory in the Docker container.

List out all the available Docker volumes using docker volume ls.

docker volume ls
DRIVER              VOLUME NAME
local               my-volume

Volumes can also be created using the docker volume create command.

docker volume create my-volume

There shouldn't be many use cases where the docker volume create command is necessary since volumes are likely to be created/specified with the docker run -v command or with Docker Compose.

A volume name doesn't need to be explicitly passed when creating a volume as Docker will generate a random name for the volume.

docker run -d --name my-app -v /usr/src/app my-app:1.0

To find out the name that was assigned to the Docker volume created by the above command, run the docker inspect command to get details on the container.

docker inspect my-app

The output will contain a section called "Mounts" with the information about the volume.

...
  "Mounts": [
    {
        "Type": "volume",
        "Name": "7ce1df6e218cda9c64917c51c1035e7291791043f9722db2fe38156cb9dc98f3",
        "Source": "/var/lib/docker/volumes/7ce1df6e218cda9c64917c51c1035e7291791043f9722db2fe38156cb9dc98f3/_data",
        "Destination": "/usr/src/app",
        "Driver": "local",
        "Mode": "",
        "RW": true,
        "Propagation": ""
    }
  ],
...

In the above output, 7ce1df6e218cda9c64917c51c1035e7291791043f9722db2fe38156cb9dc98f3 is the name of the volume.

Usually, the name of the volume should be explicitly defined to make sure the same volume is used every time a container is started.

Docker also provides the option to use what are called bind mounts, which behave very similarly to Docker volumes. A bind mount is a directory found on the host machine that will be mounted into a Docker container. The main difference between a bind mount and a Docker volume is that Docker volumes are managed by the Docker engine, so the docker volume command can be used to interact with the volumes, and the volumes are all saved within Docker's storage directory.

As an example, a directory on a user's desktop could be used as the bind mount for a container.

docker run -d --name my-app -v ~/Desktop/my-app:/usr/src/app my-app:1.0

Bind mounts won't show up when running the docker volume ls command since bind mounts are not "managed" by Docker.

Pro tip: Live reloading

Docker volumes can be leveraged to get live reloading capabilities when developing applications with Docker. This Free Code Camp tutorial explains how to set up a Docker app to allow for live reloading of a Node.js application that is written in TypeScript. The tutorial explains how a Docker volume is used to mirror files between a local file system and a Docker container. The tutorial also explains how to override the default command specified in a Dockerfile via the Docker CLI to have the Docker container run a watcher process that watches for TypeScript files to be changed to transpile them into JavaScript, which will, in turn, trigger the application to live reload.
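The general shape of that setup is a bind mount of the project directory combined with a command override. A rough sketch, where the yarn dev watcher script is an assumption about the project:

docker run -p 3000:3000 -v $(pwd):/usr/src/app my-app:1.0 yarn dev  # host file edits are mirrored into the container, where the watcher triggers a reload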


Docker networks

Docker networks are used to keep containers well isolated from one another, and they are also used to allow containers to communicate with each other.

Docker networks have different drivers that provide different kinds of networks. The default driver is the "bridge" driver, and when a container is started without specifying a network, it runs on the default bridge network, which is named "bridge". However, Docker recommends using a user-defined bridge network when different containers need to communicate with one another.

docker run -d --name my-app my-app:1.0

Running a container using the above command would connect the container to the default bridge network. This can be verified by using the docker inspect command.

docker inspect my-app
...
"NetworkSettings": {
    "Bridge": "",
    "SandboxID": "33bf53f6badba3f19e4bcdb18e55d198a48672f23187be8934e20f31e6aad18f",
    "HairpinMode": false,
    "LinkLocalIPv6Address": "",
    "LinkLocalIPv6PrefixLen": 0,
    "Ports": {},
    "SandboxKey": "/var/run/docker/netns/33bf53f6badb",
    "SecondaryIPAddresses": null,
    "SecondaryIPv6Addresses": null,
    "EndpointID": "",
    "Gateway": "",
    "GlobalIPv6Address": "",
    "GlobalIPv6PrefixLen": 0,
    "IPAddress": "",
    "IPPrefixLen": 0,
    "IPv6Gateway": "",
    "MacAddress": "",
    "Networks": {
        "bridge": {
            "IPAMConfig": null,
            "Links": null,
            "Aliases": null,
            "NetworkID": "94c91e903283f9686d2d6f16bb10e28e95b004d81f81415d04f0cf710af006f9",
            "EndpointID": "",
            "Gateway": "",
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "MacAddress": "",
            "DriverOpts": null
        }
    }
}
...

To create a user-defined network, use the docker network create command.

docker network create my-network

Connect to the user-defined network by using the --network option with docker run:

docker run -d --name my-app --network my-network my-app:1.0

A network can also be connected to an already running container using docker network connect:

docker network connect my-network my-app
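A key benefit of user-defined bridge networks is automatic DNS resolution between containers: containers attached to the same network can reach each other by container name. A rough sketch (the my-api:1.0 image is hypothetical):

docker network create my-network
docker run -d --name db --network my-network -e POSTGRES_PASSWORD=password postgres
docker run -d --name api --network my-network my-api:1.0  # the api container can now reach the database at the hostname "db"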

So far, only the bridge network driver has been mentioned, since that is all most people need when working with Docker. However, there are several other network drivers that come with Docker and can be used for container networking:

  • host: The host driver gives a container the Docker host's own network. It may be desirable to have a container use the host network if, for example, the number of ports available while using bridge networks is too limited.
  • none: Containers that don't need to be connected to a network can use no network. This means that input and output would be done through STDIN and STDOUT or through files mirrored in a Docker volume.
  • overlay: Used to connect Docker containers that are on different Docker hosts. This is most often used when running Docker in swarm mode.
  • macvlan: Used to assign a MAC address to a container.

To view the list of Docker networks on the Docker host, run the docker network ls command.

docker network ls
NETWORK ID          NAME                         DRIVER              SCOPE
e538e11182fd        my-network                   bridge              local
409d0b996c9d        bridge                       bridge              local
39a73b1d3dc1        host                         host                local
8616053c1b28        none                         null                local

The above output shows that there are 4 networks on the current Docker host:

  • The user-defined bridge network named "my-network"
  • The default bridge network named "bridge"
  • The host network named "host"
  • The network named "none" which is used for containers that should not be connected to a network


Removing unused images, containers, and volumes

A computer's hard drive can quickly become bloated with old and unused Docker objects (i.e. images, containers, and volumes) that accumulate as a user works with Docker.

Here's a list of some of the useful commands to clear out old images, containers, and volumes:

docker system prune  # removes stopped containers, dangling images, and unused networks; add the --volumes flag to also remove unused volumes
docker system prune -a  # same as above, but removes all unused images rather than only dangling ones

docker volume prune  # removes volumes that are not connected to containers (aka "dangling" volumes)

docker rmi $(docker images -a -q)  # removes all images that are not associated with existing containers
docker image prune -a # same as the above command
docker rmi $(docker images -a -q) -f  # same as above, but forces the images associated with existing containers (running or stopped) to also be removed

docker rm $(docker ps -a -q)  # removes all containers
docker rm $(docker ps -a -q) -f  # same as above, but forces running containers to also be removed


Docker Compose

Docker Compose is a tool that simplifies communication with the Docker engine. Every interaction with the Docker engine can be done via the regular Docker CLI, but Docker Compose allows containers to be defined in docker-compose.yml files, with the docker-compose CLI commands used to interact with the Docker engine. This allows you to move container configuration details out of regular docker CLI commands and into Docker Compose files.

Before being able to run any commands with docker-compose, it is required to specify the services in a docker-compose.yml file. A service describes what image should be used for a running container and any other configuration associated with a container, such as environment variables, ports to be exposed, volumes to be used, etc.

Side note: Why "service"?

As mentioned, the blocks that describe Docker containers in the docker-compose.yml file are called services. The term "service" is often used to refer to the parts that make up some larger application. The whole microservices architecture is based on this idea.

Here's an example docker-compose.yml file that describes three services: a web-app, an API, and a database.

version: "3.8"
services:
  web-app:
    container_name: web-app
    build: ./web-app
    ports:
      - "3000:80"
  api:
    container_name: api
    build: ./api
    ports:
      - "5000:5000"
  db:
    container_name: db
    image: postgres
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: password
    volumes:
      - db-volume:/var/lib/postgresql/data
volumes:
  db-volume:

Breaking down the above docker-compose.yml, we can see that we are first defining the version of Docker Compose that should be used (3.8 being the latest version at the time of this writing).

Next, there are 3 services that are described: web-app, api, and db. Each service has a container_name defined, which will correspond with the name of the Docker container when running the services with Docker Compose. To start a service, the docker-compose up command can be used. If specifying the name of a service, it will start that specific service. If no service is specified, all services will be started.

docker-compose up api  # Will only start the api service
docker-compose up  # Will start all the services

The services can be run in detached mode to keep control of the command line.

docker-compose up -d api

It's possible to verify the containers are running using the docker ps command.

docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                    NAMES
4c33deed7688        my-app_web-app      "/docker-entrypoint.…"   3 days ago          Up 7 hours          0.0.0.0:3000->80/tcp     web-app
7c3fc92ad1c4        my-app_api          "docker-entrypoint.s…"   4 days ago          Up 7 hours          0.0.0.0:5000->5000/tcp   api
acc4dcd27a2b        postgres            "docker-entrypoint.s…"   4 days ago          Up 7 hours          0.0.0.0:5432->5432/tcp   db

The web-app and api services have a build option which is pointing to the location of the Dockerfile that corresponds with each service. The location of the Dockerfile is required for Docker Compose to know how to build images associated with those services.

To build the images with Docker Compose, use the docker-compose build command.

docker-compose build api

The above command will build an image for the api service with a default name following the format of [folder name]_[service name]. So if the folder in which the docker-compose.yml file is found is named my-app, then the build image will be named my-app_api. The image will also be given a tag of latest.

Building images with docker-compose build should not be necessary for most situations since running the docker-compose up command will automatically build any missing images. However, it may be desirable to re-build an image if the image tagged as latest is old and needs to be updated with new code.
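One way to force that rebuild is to pass the --build flag to docker-compose up:

docker-compose up --build api  # rebuilds the api service's image before starting its container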

Notice that in the above docker-compose.yml file, the db service doesn't have a build option. That's because the database service is using the official PostgreSQL image from Docker Hub, so there is no need to build an image.

Each service specifies the ports it would like to expose to the Docker host using the ports option, where the first number is the port on the Docker host and the second number is the port in the Docker container. Taking the web app as an example, the Docker host port 3000 is mapped to port 80 of the Docker container, since the web app service runs an NGINX server that serves HTML on port 80.

Also, notice that an environment variable is being set for the db service. The POSTGRES_PASSWORD environment variable is set to a value of "password" (mandatory disclaimer that using "password" for your password is not secure). Environment variables specified using the environment option in Docker Compose are made available in the associated Docker container, but not at build time. Therefore, these environment variables are not accessible in the Dockerfile. To have variables accessible at build time in the Dockerfile, use the args sub-option of the build option in the docker-compose.yml file.
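As a sketch of how that looks (NODE_VERSION is an illustrative variable name), the docker-compose.yml declares the build argument:

services:
  api:
    build:
      context: ./api
      args:
        NODE_VERSION: 12.16.1

and the Dockerfile receives it with an ARG instruction, which can even be used in the FROM line:

ARG NODE_VERSION
FROM node:${NODE_VERSION}-alpine3.11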

The db service also makes use of a named volume. The named volume of db-volume is specified at the bottom of the docker-compose.yml file and the mapping of the volume to the directory in the Docker container is done with the volumes option on the db service. Notice that the docker-compose.yml file is mapping the db-volume volume to the /var/lib/postgresql/data directory in the Docker container which allows for the database data to be persisted when restarting the db service.

Here's a list of frequently used Docker Compose options in the docker-compose.yml file:

  • build: Configuration for how to build an image
  • build.args: Used to specify build-time variables
  • command: Overrides the default CMD defined by an image's Dockerfile
  • container_name: The name that should be used for the service's container
  • depends_on: Used to determine the order in which containers should be started
  • env_file: Adds environment variables from a file
  • environment: Specifies environment variables
  • image: Specifies the image to be used by the container
  • networks: Specifies the networks used by the container
  • ports: Port mapping between host and container
  • volumes: Specifies a named volume or bind mount mapping between the host and container

View all possible Docker Compose options in the official documentation.

Here's a list of frequently used docker-compose CLI commands:

  • docker-compose build: Builds or rebuilds a service's image
  • docker-compose up: Creates and starts service containers
  • docker-compose down: Stops containers and removes containers, images, and networks associated with a service
  • docker-compose exec: Executes a command in a container
  • docker-compose images: Lists the images associated with services
  • docker-compose kill: Kills containers
  • docker-compose logs: Views the logs of a service
  • docker-compose ps: Lists all service containers and their state
  • docker-compose pull: Pulls the image for a service
  • docker-compose push: Pushes the image for a service
  • docker-compose rm: Removes stopped containers
  • docker-compose run: Runs a single command in a service
  • docker-compose start: Starts a service
  • docker-compose stop: Stops a service

View all the docker-compose commands with docker-compose help or in the official documentation.

It's possible to get command-line completion of docker-compose commands by following this guide. It should help to quickly type out commands and avoid typos.


Managing environment variables

Working with environment variables in Docker deserves its own section since it is not obvious how they work. When talking about environment variables, we are talking about variables that are globally accessible through any shell process in the Docker container and can be listed using the env or printenv shell commands.

To verify environment variables are set in the shell of the Docker container, the most convenient way is to run docker-compose exec container-name sh to open an interactive shell where all the environment variables can be listed using env or printenv.

The first way environment variables can be set is in the Dockerfile using the ENV instruction.

ENV <key>=<value> ...
ENV TEST="value"

Not only will the variables set with the ENV instruction be available as environment variables in the Docker container, but they will also be available to use in other instructions in the Dockerfile by prefixing the name of the variable with $.
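For example, a variable defined with ENV can be referenced by later instructions in the same Dockerfile:

ENV APP_HOME=/usr/src/app
# Subsequent instructions now run relative to /usr/src/app
WORKDIR $APP_HOME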

The next way to set environment variables is via the docker run command using the -e (--env) option.

docker run -d -e TEST="value" my-app:1.0

Environment variables can also be set with Docker Compose, and there is a great guide on all the different ways to do so. To summarize the linked guide, environment variables in Docker Compose can be set for each service in the docker-compose.yml file using the environment option.

services:
  web-app:
    environment:
      - TEST="value"

The docker-compose CLI will also automatically load environment variables found in a .env file located in the project directory. However, if the environment variable file is named something other than .env, then the file name should be specified using either the --env-file CLI option or the env_file option in the docker-compose.yml file.

docker-compose --env-file .env.dev up
or using the env_file option in the docker-compose.yml file:
services:
  web-app:
    env_file: .env.dev

If using the docker-compose run command, pass environment variables using the -e option.

docker-compose run -e TEST="value" web-app echo "Hello world"


Deploying and running Docker applications

There is a bunch of tooling that has been built to make it more manageable to deploy dockerized applications onto servers and make those applications accessible to the public. The choice of tooling depends on the scale of an app, the amount of control desired, the amount of complexity that is acceptable to manage, and the budget. This guide will list 3 possible approaches for deploying dockerized applications.

Dokku

Ideal for: Side projects and early-stage startups

For a low-cost solution where scalability is not a concern and a bit of manual server configuration is no issue, then Dokku is a great solution. The way Dokku works is that it is installed on a cloud server (e.g. on Digital Ocean or Linode) and can then be configured to run Docker applications. The basic steps are the following:

  • Install Dokku on a server (Digital Ocean has Dokku pre-installed on some servers)
  • SSH into the server and pull down the application's image(s)
  • Create an "app" within Dokku which will use an image (create multiple apps to run multiple containers on a single server)
  • Configure Dokku with domain names, SSL, databases, etc.
  • Run the application(s) with Dokku
  • Update the applications by pulling down new images and running a Dokku command (see this code for an example of this being done in the CI with GitHub actions)

To get a more step-by-step outline of how to do this, see the Dokku documentation or this article on setting up Dokku with Digital Ocean.

Dokku is great if budget is a concern and a high level of control is desired, however, this is not as simple a solution as using Heroku or the Digital Ocean App Platform.


Heroku and Digital Ocean App Platform

Ideal for: Small to medium-sized startups/companies

Heroku and the Digital Ocean App Platform are solutions that handle a lot of the complexity of setting up and deploying applications on servers while also automatically scaling applications on the fly. Both platforms allow for deploying applications in Docker containers and make it easy to set up domains, SSL, and databases. Since Heroku and the Digital Ocean App Platform handle some of the configuration themselves, there may be limitations for more advanced server setups.


Kubernetes

Ideal for: medium to large-sized startups/companies operating at large scale

Perhaps the most popular name in the field of deploying dockerized applications is Kubernetes, which can be referred to as a "container orchestration system" responsible for deploying, scaling, and managing dockerized applications. Kubernetes can automatically scale apps by adding more servers/virtual machines and distributing traffic among those servers. The 3 big cloud providers have offerings that integrate with Kubernetes: Amazon EKS, Microsoft AKS, and Google GKE.

The above offerings are called "managed Kubernetes" services because each cloud provider manages certain parts of the Kubernetes concerns, such as creating a Kubernetes cluster on AWS, Microsoft, or Google specific infrastructure.

Kubernetes is great for large scale and complex applications, but small scale applications should avoid Kubernetes if possible, since it comes with a lot of configuration and complexity. Unless a user has a good understanding of servers and deployment-related concerns, Kubernetes has a steep learning curve and can take up a lot of time that could be better spent on application development. There is also a lot of tooling surrounding Kubernetes itself (e.g. Helm and kubectl) that takes time to learn and get comfortable with.

For more details on whether Kubernetes should be used, I recommend reading the article titled "“Let's use Kubernetes!” Now you have 8 problems". Don't start addressing scaling issues until scale actually becomes a problem. When it does, here's a good article on how to scale apps starting from 1 user to 100k users.


Avoiding the Docker command line

Understandably, some people aren't too fond of working in the command line. Well, luckily, there are some great options for using Graphical User Interfaces (GUIs) along with Docker.

Docker Desktop

Docker Desktop ships with a GUI that interacts with the Docker engine to perform some Docker related tasks, such as stopping containers, removing containers, and opening a shell inside a container.


Visual Studio Code

VS Code, one of the most popular text editors (especially amongst JavaScript devs), has a great Docker extension that enables many Docker related tasks to be done without having to leave the editor.

In addition to being able to view Docker images, containers, networks, and volumes, IntelliSense autocomplete functionality is available in Dockerfiles and Docker Compose configuration files. Also, most common Docker commands are available via the command palette.

Docker VS Code extension


Dockstation

Dockstation is a full-fledged desktop application dedicated to serving as a Docker GUI. It provides a nice way to visualize Docker related information along with performing most Docker related tasks.

Dockstation project view
Dockstation stats view

Debugging dockerized apps

It's common for developers to run some of their application's services locally outside of Docker so that the services can be run through their editor's debugger for a better debugging experience. However, editors have started to add the ability to attach a debugger to a process inside a running Docker container, so there should no longer be a need to run services outside of Docker to take advantage of an editor's debugging capabilities.

Visual Studio Code

To debug containerized apps in Visual Studio code, the Docker extension is required. The Docker extension currently only supports debugging Node.js, Python, and .NET Core applications inside Docker containers.


JetBrains IDEs

There are many JetBrains IDEs and most of them have ways to run dockerized applications with debug capabilities.


Navigating the Docker documentation

There are a lot of parts to the Docker documentation website, with getting started guides, release notes, manuals on how to use Docker products, and more. Therefore it might not be obvious where to look for relevant documentation about the Docker-related topics covered in this article.

I'm here to tell you that you will likely want to be looking at the "Reference documentation". More specifically, the Dockerfile reference, the Compose file reference, and the docker and docker-compose CLI references.

