
Deep Dive into Docker

What is Docker

Docker is an open-source software platform to create, deploy and manage virtualized application containers on a common operating system (OS), with an ecosystem of allied tools. Docker container technology debuted in 2013; Docker Inc. was formed to support a commercial edition of container management software and be the principal sponsor of an open-source version. Mirantis acquired the Docker Enterprise business in November 2019.

How Docker works

Docker packages, provisions, and runs containers. Container technology is available through the operating system: A container packages the application service or function with all the libraries, configuration files, dependencies, and other necessary parts and parameters to operate. Each container shares the services of one underlying operating system. Docker images contain all the dependencies needed to execute code inside a container, so containers that move between Docker environments with the same OS work with no changes.

Docker uses resource isolation in the OS kernel to run multiple containers on the same OS. This is different than virtual machines (VMs), which encapsulate an entire OS with executable code on top of an abstracted layer of physical hardware resources.

Docker was created for the Linux platform but has since been extended to offer greater support for non-Linux operating systems, including Microsoft Windows and Apple macOS. Versions of Docker for Amazon Web Services (AWS) and Microsoft Azure are also available.

Why do we need Docker?

Well, Docker is a platform for building, running, and shipping applications in a consistent manner. So, if your application works on your development machine, it can run and function the same way on other machines. If you have been developing software for a while, you’ve probably come across the situation where your application works on your development machine but doesn’t work somewhere else. Can you think of three reasons why this happens? Well, this can happen if one or more files are not included as part of your deployment, so your application is not completely deployed; it’s missing something. This can also happen if the target machine is running a different version of some software that your application needs. Let’s say your application needs Node version 14, but the target machine is running Node version 9. This can also happen if configuration settings like environment variables are different across these machines. And this is where Docker comes to the rescue.

What are Containers in Docker?

With Docker, we can easily package up our application with everything it needs and run it anywhere on any machine with Docker. So, if your application needs a given version of Node and MongoDB, all of these will be included in your application’s package. Now you can take this package and run it on any machine that runs Docker. So, if it works on your development machine, it’s going to work on your test and production machines. And there’s more: if someone joins your team, they don’t have to spend half a day or so setting up a new machine to run your application. They don’t have to install and configure all these dependencies. They simply tell Docker to bring up your application, and Docker will automatically download and run these dependencies inside an isolated environment called a container. This is the beauty of Docker.

This isolated environment allows multiple applications to use different versions of some software side by side. So, one application may use Node version 14, another may use Node version 9, and both can run side by side on the same machine without interfering with each other. This is how Docker allows us to consistently run an application on different machines. Now, there is one more benefit here: when we’re done with an application and don’t want to work on it anymore, we can remove the application and all its dependencies in one go with Docker.
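
As a quick illustration, once you have Docker installed, the two commands below each start a throwaway container from an official Node image and print its version (a hedged sketch, assuming these tags are still available on Docker Hub):

docker run --rm node:14-alpine node --version   # prints a v14.x version from the first container
docker run --rm node:9-alpine node --version    # prints a v9.x version from the second container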

As we work on different projects, our development machine gets cluttered with so many libraries and tools used by different applications, and after a while we don’t know whether we can remove one or more of these tools, because we’re always afraid we would break some application. With Docker, we don’t have to worry about this, because each application runs with its dependencies inside an isolated environment. We can safely remove an application with all its dependencies to clean up our machine. Isn’t that great? So, in a nutshell, Docker helps us consistently build, run, and ship our applications, and that’s why a lot of employers are looking for people with Docker skills these days. So if you’re pursuing a job as a software or DevOps engineer, I highly encourage you to learn Docker and learn it well.
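
For example, tearing down an application and everything it pulled in can be a couple of commands (the names my-app and my-app-image below are hypothetical):

docker rm -f my-app        # remove the application's container (force-stops it if still running)
docker rmi my-app-image    # remove the image, and with it the bundled dependencies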

Difference between Containers and Virtual Machines?

So, in the last paragraph, I briefly talked about containers. A container is an isolated environment for running an application. Now, one of the questions that often comes up is: how are containers different from virtual machines, or VMs? Do you know the differences? Well, a virtual machine, as the name implies, is an abstraction of a machine, or physical hardware. So, we can run several virtual machines on a real physical machine. For example, we can have a Mac, and on this Mac we can run two virtual machines, one running Windows and the other running Linux. How do we do that? Using a tool called a hypervisor. I know, it’s one of those computer science names. In simple terms, a hypervisor is software we use to create and manage virtual machines. There are many hypervisors available out there, like VirtualBox and VMware, which are cross-platform (they run on Windows, macOS, and Linux), and Hyper-V, which is only for Windows. So, with a hypervisor, we can manage virtual machines. Now, what is the benefit of virtual machines? Well, for us software developers, we can run an application in isolation inside a virtual machine. So, on the same physical machine, we can have two different virtual machines, each running a completely different application, and each application has the exact dependencies it needs.

So, application one may use Node version 14 and MongoDB version 4, and another application may use Node version 9 and MongoDB version 3. All of these are running on the same machine but in different isolated environments. That’s one of the benefits of virtual machines. However, there are several problems with this model. Each virtual machine needs a full copy of an operating system that must be licensed, patched, and monitored, and that’s why virtual machines are slow to start: the entire operating system must be loaded, just like starting your computer. Another problem is that virtual machines are resource-intensive, because each one takes a slice of the actual physical hardware resources, like CPU, memory, and disk space. So, if you have eight gigabytes of memory, that memory must be divided between the virtual machines. Of course, we can decide how much memory to allocate to each one, but at the end of the day there is a limit to the number of VMs we can run on a machine, usually a handful, otherwise we’ll run out of hardware resources.

Now let’s talk about containers. Containers give us the same kind of isolation, so we can run multiple applications in isolation, but they are more lightweight. They don’t need a full operating system; in fact, all containers on a single machine share the operating system of the host. That means we need to license, patch, and monitor only a single operating system. Also, because the operating system has already started on the host, a container can start up quickly, usually in a second, sometimes less. Containers also don’t need a fixed slice of the hardware resources on the host, so we don’t have to give them a specific number of CPU cores or a slice of memory or disk space. So, on a single host, we can run tens or even hundreds of containers side by side. These are the differences between containers and virtual machines.
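
If you already have Docker installed, you can get a feel for container startup speed yourself. As a rough sketch, timing a throwaway Alpine container usually shows well under a second on most machines (the very first run also downloads the image, so time it a second time for a fair number):

time docker run --rm alpine echo "hello from a container"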

Docker Architecture

Let’s talk about the architecture of Docker, so you understand how it works. Docker uses a client-server architecture: it has a client component that talks to a server component using a RESTful API. The server, also called the Docker Engine, sits in the background and takes care of building and running Docker containers. Technically, a container is just a process, like other processes running on your computer, but it’s a special kind of process, which we’re going to talk about soon. Now, as I told you, unlike virtual machines, containers don’t contain a full-blown operating system. Instead, all containers on a host share the operating system of the host; more accurately, they share the kernel of the host.

What’s the kernel? The kernel is the core of an operating system; it’s like the engine of a car. It’s the part that manages all applications as well as hardware resources like memory and CPU. Every operating system has its own kernel, and these kernels have different APIs. That’s why we cannot run a Windows application on Linux: under the hood, the application needs to talk to the kernel of the underlying operating system. So, on a Linux machine, we can only run Linux containers, because these containers need Linux. On a Windows machine, however, we can run both Windows and Linux containers, because Windows 10 now ships with a custom-built Linux kernel in addition to the Windows kernel that has always been there; it’s not a replacement. With this Linux kernel, we can run Linux applications natively on Windows, so on Windows we can run both Linux and Windows containers: Windows containers share the Windows kernel, and Linux containers share the Linux kernel. Now, what about macOS? Well, macOS has its own kernel, which is different from the Linux and Windows kernels, and it does not have native support for containerized applications. So Docker on Mac uses a lightweight Linux virtual machine to run Linux containers. Alright, enough about the architecture. Next we’re going to install Docker, and that’s where the fun begins.
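
Once Docker is installed (which we do next), a quick way to see this kernel sharing on a Linux host is that a container reports the host's kernel version, not its own:

uname -r                          # the kernel version on the host
docker run --rm alpine uname -r   # the same version, reported from inside a container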

Now the fun part begins, where we get our hands dirty with Docker

Let’s install the latest version of Docker. If you have an existing version of Docker on your machine, I highly encourage you to upgrade to the latest version, because your version might be old and not compatible with the version I’m using in this post.

At the time of writing, I am using version 20.10.13.

Go to the Docker docs to get the latest version of Docker. Docker Desktop is available for Windows, Mac, and Linux; it is a combination of the Docker Engine plus a bunch of other tools.

Let’s look at the instructions for Windows. You can download the latest version from Docker Hub, and make sure to read the system requirements. One thing that is really important is enabling the Hyper-V and Containers Windows features; just go to Windows settings, where you can turn those features on or off.
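
Once installed, you can verify everything works from a terminal:

docker version          # should print both Client and Server (Engine) versions
docker run hello-world  # pulls a tiny test image and runs it end to end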

Docker Daily-Use Commands

  1. docker version

This command is used to get the currently installed version of docker.

  2. docker pull

Usage: docker pull <image name>

This command is used to pull images from the Docker repository (hub.docker.com).

  3. docker run

Usage: docker run -it -d <image name>

This command is used to create a container from an image.

  • -d: To start a container in detached mode, you use -d=true or just -d option. By design, containers started in detached mode exit when the root process used to run the container exits, unless you also specify the --rm option.

  • -it: For interactive processes (like a shell), you must use -i -t together in order to allocate a tty for the container process. -i -t is often written -it as you’ll see in later examples. Specifying -t is forbidden when the client is receiving its standard input from a pipe.

  4. docker ps

This command is used to list the running containers.

  5. docker ps -a

This command is used to show all the running and exited containers.

  6. docker exec

Usage: docker exec -it <container id> bash

This command is used to access a running container.

  7. docker stop

Usage: docker stop <container id>

This command stops a running container.

  8. docker kill

Usage: docker kill <container id>

This command kills the container by stopping its execution immediately. The difference between ‘docker kill’ and ‘docker stop’ is that ‘docker stop’ gives the container time to shut down gracefully; in situations where a container is taking too long to stop, one can opt to kill it.

  9. docker commit

Usage: docker commit <container id> <new image name>

This command creates a new image on the local system from an edited container.

  10. docker login

This command is used to log in to the Docker Hub repository.

  11. docker push

Usage: docker push <username/image name>

This command is used to push an image to the Docker Hub repository.

  12. docker images

This command lists all the locally stored docker images.

  13. docker rm

Usage: docker rm <container id>

This command is used to delete a stopped container.

  14. docker rmi

Usage: docker rmi <image id>

This command is used to delete an image from local storage.

  15. docker build

Usage: docker build <path to Dockerfile>

This command is used to build an image from a specified Dockerfile (a short worked session combining several of these commands follows below).
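
To tie these together, here is a minimal sketch of a typical session using the official nginx image (the container name web is just an example):

docker pull nginx                            # download the nginx image from Docker Hub
docker run -d --name web -p 8080:80 nginx    # start it detached, mapping host port 8080 to container port 80
docker ps                                    # confirm the container is running
docker exec -it web bash                     # open a shell inside the container (type exit to leave)
docker stop web                              # stop it gracefully
docker rm web                                # remove the stopped container
docker rmi nginx                             # remove the image from local storage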

Creating Our First Docker Application

Let's say we have a Node application and want to deploy it to our staging or production server. First, we make sure we have the Docker configuration file included in the root directory of the application.

  1. Create a Dockerfile in your application

Create a file named Dockerfile at the root of your application and include the code below to tell Docker what to do when running in the production or staging environment:

# Start from a small Node.js base image
FROM node:alpine
# Copy the application source into the image's /app directory
COPY . /app
# Set the working directory for the commands that follow
WORKDIR /app
# Run the app when the container starts
CMD node app.js

Above is a sample Dockerfile that packages a Node app for a staging or production server.

  2. Installing Docker on the Staging or Production Server

Docker Desktop installers for Mac, Windows, and Linux are available in the Docker docs.

  3. Running Docker

After Docker is installed on the staging or production server, click the whale icon to start Docker.

  4. Deploying Your Application

Copy the application to the staging or production server and do the following:

  • Navigate to the project directory in the terminal and create a Docker image.

Run the following command in the terminal; it will create a Docker image of the application and download all the dependencies needed for the application to run successfully:

docker build -t <name to give to your image> .

  • Convert the Docker image of the application into a running container.

Run the following command in the terminal; it will create a running container with all the needed dependencies and start the application:

docker run -p 9090:80 <name of your image>

Here, 9090 is the host port we want to access our application on, and 80 is the port the container exposes; Docker maps host port 9090 to container port 80.
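
Once the container is up, you can check the mapping from the host; assuming the app serves HTTP on container port 80, this should return the application's response:

curl http://localhost:9090   # the request hits host port 9090 and is forwarded to container port 80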

Below are some useful Docker commands

Stopping a running container

docker stop <id-of-container>

Starting a container which is not running

docker start <id-of-container>

Removing an image from docker

docker rmi <id-of-image>

Removing a container from docker

docker rm <id-of-container>

For the moment, that’s enough to understand what Docker is and how we can use it. In my next blog we will write a full React and Node app and run it on Docker by using

docker-compose.yml

With docker-compose.yml (combined with volume mounts that map our source code into the containers), we don’t need to rebuild and restart our containers again and again to publish new changes; when we save a file, the change is reflected inside the running containers.
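
As a teaser, here is a minimal sketch of what such a file might look like; the service name, ports, and volume paths are illustrative assumptions, not the final app we will build:

services:
  web:
    build: .            # build the image from the Dockerfile in this directory
    ports:
      - "9090:80"       # map host port 9090 to container port 80
    volumes:
      - .:/app          # mount the source code so saved changes show up in the container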

This post was originally posted on Medium.com by me. 😃
