David MM 👨🏻‍💻
Docker basics for beginners

In my latest post, I talked about Vagrant and how it can help us create Virtual Machines in minutes. But what if we could do it even faster, better, and in a more customizable way? Let's learn how to develop, deploy, and run applications easily with Docker!

Introduction

If you Google Docker, you will find that Docker is a software platform that uses OS-level virtualization to create self-contained containers.

Luckily, I will explain what that means in plain English.

You may have created multiple Virtual Machines with Oracle VirtualBox or Vagrant before. Docker is something like that (but better; more on that later).

With Docker, we select an image (think of Docker images as recipes) and download it. Then we create an instance of that image, called a container, which is pretty similar to a Virtual Machine.

Image:

A package or template used to create one or more containers

Container:

An instance of an image, isolated from other containers, with its own environment.
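
To make the distinction concrete, here is a quick sketch using the public nginx image (any image would do):

docker pull nginx   # download the image (the recipe)
docker run nginx    # create and start a container (one instance of the image)
docker run nginx    # run it again and you get a second, independent container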

Now let's see how images are defined. This is the code for a Docker image (a Dockerfile):

FROM ubuntu:23.04
RUN apt-get update
RUN apt-get install -y curl nginx

Remember what I said about a Docker image being a recipe? In this image (or recipe), Docker takes Ubuntu 23.04, updates the OS, and then installs curl and nginx.

Granted, this is a very short Docker image, but it helps visualize what Docker is about.

Now, with this image, we can create a container (imagine a Virtual Machine) and it will behave like an already-updated Ubuntu machine with curl and nginx installed.
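
If we save those three lines in a file called Dockerfile, a minimal sketch of building and trying the image could look like this (the tag my-ubuntu is my own choice):

docker build -t my-ubuntu .      # build the image from the Dockerfile in the current folder
docker run -it my-ubuntu bash    # start a container and open a shell inside it
curl --version                   # inside the container: curl is already installed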

And all the developers in our company can use the same image to have the same programs, packages, and versions installed. No more "Bu...but it works on my computer!"; now every computer has the same specifications.

Docker vs Virtual Machines

But... if Docker creates a container that is VM-like, why don't we just use Virtual Machines?

I could explain at a low level how Docker containers are better than Virtual Machines, maybe even snatch some cool infographics from another site like this one, and point out that Docker uses the same kernel for every container, making it lightweight and fast; a container spins up in just a few seconds:

(Infographic: containers share the host's kernel, while each Virtual Machine runs its own full OS.)

But there is one big benefit of using Docker and Docker containers:

Imagine you want to develop a Node.js 21.1 application: you create your own Docker image, where you take an Ubuntu image, update it, install all the Node.js-related stuff, and then distribute the image to the development team.

In a normal setting, you would have to upload the Node.js application, deploy it on your server, and make sure that the server has all the dependencies and that its Node.js version is compatible with yours.

And you don't want to bet on that.

With Docker, we can create a Docker image, upload it to a Docker-compatible server and that's it.

The Docker server doesn't care about which Linux you use, what packages are installed, or what language your app is written in: it only needs to run the image. That's it.

Let me emphasize this: We don't care what the server has installed. We upload and run the Docker image. That's all we have to do.
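
As a sketch, such an image could look like the Dockerfile below. Note one assumption: instead of starting from plain Ubuntu and installing Node.js by hand, I start from the official node image, which saves those steps; the entry file server.js and port 3000 are also assumptions about the app:

FROM node:21.1
WORKDIR /app

# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

EXPOSE 3000
CMD ["node", "server.js"]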

Installation

You can install Docker Desktop, a GUI app for Docker, but we rugged and tough developers use proper terminal stuff, so you will install Docker Engine, the terminal version of Docker.

Jokes aside, you can install whichever you want, Docker Desktop or Docker Engine; just make sure you follow the instructions for your OS. For example, for Debian-based distros such as Ubuntu:

Uninstall previous Docker versions

sudo apt-get purge docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-ce-rootless-extras
sudo rm -rf /var/lib/docker
sudo rm -rf /var/lib/containerd

Install Docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh ./get-docker.sh

Check that Docker is installed
sudo docker version

Let's run a test. Run the following command in your terminal:

sudo docker run docker/whalesay cowsay boo

(Output: an ASCII-art whale saying "boo".)

Important: By default, every Docker command needs sudo. The standard fix is to add your user to the docker group and then log out and back in (or run newgrp docker) so the change takes effect. A quicker but less secure workaround is sudo chmod 666 /var/run/docker.sock, which gives every local user read/write access to the Docker socket.
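
For reference, the group-based setup looks like this:

sudo groupadd docker             # create the docker group if it doesn't exist yet
sudo usermod -aG docker $USER    # add your user to the group
newgrp docker                    # refresh group membership in the current shell
docker run hello-world           # should now work without sudo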

Basic commands

We have Docker up and running. Let's see a few basic commands: your bread and butter, if you will. (A short worked session follows the list.)

List all images
docker images

Download an image (if needed) and run a container from it
docker run <IMAGE_NAME>

Download a specific version
docker run <IMAGE_NAME>:<VERSION>

Execute a container in the background
docker run -d <IMAGE_NAME>

Bring a background container to the foreground
docker attach <CONTAINER_ID>

Execute a command
docker run ubuntu cat /etc/*release*
docker run ubuntu sleep 15

Download an image to run it later
docker pull <IMAGE_NAME>

Execute a command inside a running container
docker exec <CONTAINER_ID> <COMMAND>

Connect to the container's bash
docker run -it <IMAGE_NAME> bash

List all running containers
docker ps

List ALL containers, running or not
docker ps -a

Run a container linked to another container (--link is a legacy feature; Docker networks are the modern replacement):
docker run -p <HOST_PORT>:<CONTAINER_PORT> --link <CONTAINER_NAME>:<ALIAS> <IMAGE_NAME>
docker run -p 5000:80 --link redis:redis voting-app

Get details from an image or container in JSON format
docker inspect <NAME_OR_ID>

Get logs from a container running in the background
docker logs <NAME_OR_ID>

Get all the layers from an image
docker history <IMAGE_NAME>

Stop a container
docker stop <CONTAINER_NAME_OR_ID>

Permanently remove a container
docker rm <CONTAINER_NAME_OR_ID>

Permanently remove an image that isn't being used by any container
docker rmi <IMAGE_NAME>

Build an image from a Dockerfile
docker build . -t <NAME>

Pass environment variables
docker run -e <VARIABLE>=<VALUE> <IMAGE_NAME>
docker run -e APP_COLOR=blue simple-webapp-color
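
Here are several of these commands working together in a short session (the container name my-nginx is my own choice):

docker run -d --name my-nginx -p 8080:80 nginx   # run nginx in the background, mapping local port 8080
docker ps                                        # confirm the container is running
docker logs my-nginx                             # read its output
docker exec my-nginx nginx -v                    # run a command inside it
docker stop my-nginx                             # stop the container
docker rm my-nginx                               # remove it permanently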

An example: Jenkins container

Let's use a real-life example: Using a Jenkins container.

In future posts, I will talk more in depth about Jenkins and what it does, but in short, Jenkins is a great DevOps CI/CD tool. Let's download Jenkins and run it on our computer:

docker run jenkins/jenkins # This downloads and runs jenkins
docker ps # Get the container ID and port
docker inspect <CONTAINER_ID> # Get the container IP

Open a browser in your VM using the container's IP and port:
docker run -p 8080:8080 jenkins/jenkins # Map the port

Open a browser on your Host machine using the VM's IP and the mapped port 8080.

Here, we are downloading a Docker image in our Ubuntu Virtual Machine and running it. We can view Jenkins inside the VM by opening a browser and using the Docker container's IP and port, but by mapping the port we can open Jenkins on our Host computer.

The structure is:

Host with Windows -> Linux VM -> Docker container running in Linux

Now, Linux is running a lightweight Docker container we can access from our Windows machine. Isn't that great?
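
For example, from the Windows host (the VM IP below is hypothetical; use your own):

curl http://192.168.56.10:8080   # hypothetical VM IP; Jenkins answers on the mapped port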

Data persistence

We stop the Jenkins container and the next day we run it again to keep working. But we have lost everything. What happened?

Docker containers alone don't persist data.

The container uses its own folders (/var/jenkins_home for Jenkins, /var/lib/mysql for MySQL, etc.), but when you stop the container and run the image again, you create a brand-new container from scratch. What can we do about it?

We can achieve data persistence by linking a folder on the OS running Docker to the container's folder.

mkdir my_jenkins_data
docker run -p 8080:8080 -v /home/<USERNAME>/my_jenkins_data:/var/jenkins_home jenkins/jenkins

Here, we created a folder called my_jenkins_data and linked it to /var/jenkins_home, the folder where Jenkins stores its state.

So, if we run the command again, we create a new container that picks up the stored information, as if we were resuming the old container.
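
You can check the mounted folder from the host to confirm that Jenkins' state now lives there:

ls ~/my_jenkins_data   # Jenkins config, plugins, and jobs survive the container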

Data persistence with volumes

We can simplify this process. Instead of typing a long path to our folder, we can let Docker manage volumes, which it creates under /var/lib/docker/volumes/.

Create a volume
docker volume create test_volume

This creates a volume in /var/lib/docker/volumes/test_volume. Now we can mount it:
docker run -v test_volume:/var/lib/mysql mysql

We can also use the modern --mount syntax, which is longer but more explicit (here, type=bind mounts a host folder, as in the Jenkins example; type=volume would mount a named volume):
docker run --mount type=bind,source=/data/mysql,target=/var/lib/mysql mysql
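
A couple of helper commands for working with volumes:

docker volume ls                    # list all Docker-managed volumes
docker volume inspect test_volume   # show details, including its path on disk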

Final thoughts

As we just saw, Docker is great for several reasons:

  1. Isolation: Docker allows applications to be isolated from the underlying system, ensuring consistency in different environments.

  2. Efficiency: It optimizes resource utilization by using containerization, allowing for more efficient use of system resources.

  3. Portability: Docker containers can run on any machine that has Docker installed, making it easy to deploy applications across different environments.

  4. Scalability: With Docker, it's easy to scale applications by increasing or decreasing the number of containers as per the demand.

  5. Consistency: Docker ensures that the development, testing, and production environments are consistent, reducing the "it works on my machine" problem.

  6. Ecosystem: Docker has a rich ecosystem with a wide range of tools and services that complement containerization, making it a versatile platform for application deployment and management.

  7. Deployment: Docker makes it easier and safer to deploy. Instead of managing packages and their versions, we upload our Docker image to a server.

Resources

Original post

Vagrant tutorial for beginners

Docker Desktop

Docker Engine

The Role of Docker in DevOps

Docker Hub
