This is the second post in my series called "Demystifying Docker". While reading the first post isn't a necessity, I do recommend checking it out, as it explains "what" Docker is. In this post, we're going to get our hands dirty and actually start using Docker.
I'm going to structure this article in the order in which you'd generally use the commands mentioned. But before we start with that, there are two key concepts you must be aware of: images and containers.
Images And Containers
The simplest way to explain these two terms is this one line,
"Images are blueprints for Containers".
A Docker image is built from your code and sets up all the dependencies required to run it. Images are the "movable" part in the "isolated movable environments" I talked about in my previous post. These images are then further used to start containers.
So to put it simply, containers are nothing but running instances of images. From here it isn't hard to conclude that you can create multiple containers from the same image.
Now you may wonder where these images come from. Well, images can either be downloaded from registries like Docker Hub or they can be tailor-made for your particular application using a Dockerfile.
Dockerfile
Now that you have a fair idea of what images are, let me show you how to build your own. For this, you'll need to write a Dockerfile. Let's say you have a simple server (in our case a Node.js app) and you want to dockerize it. In the root folder of your app, add a file named `Dockerfile` (no extension). Now paste the following code in it:
```dockerfile
FROM node
COPY . .
RUN npm install
EXPOSE 80
CMD ["node", "server.js"]
```
Now let's analyze this Dockerfile which will help you understand the basic way of writing a Dockerfile. Not all Dockerfiles would look like this but after understanding this one you'll have a solid foundation and will be able to build upon that for your particular use cases. Let's start:
You must remember that this Dockerfile contains the information required to build our image. We will use this image to run containers, but that comes at a later stage. Now, this app of ours requires Node.js and npm in order to run.
Theoretically, we could write the code for installing Node.js and npm in this Dockerfile, but there is a much easier way: basing our image on the official node image.
The official node image takes care of all our needs and gives us an environment where `node` and `npm` are available for us to use. The `FROM node` instruction does exactly that. The node image we base ours on comes from Docker Hub, so when you build the image for your application, the first step will be pulling this node image from Docker Hub.
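As a side note, in practice you'd often pin the base image to a specific version tag so that your builds stay reproducible (the tag `18` below is just an example; pick whichever version matches your app):

```dockerfile
# Pin the base image to a specific Node.js version
# instead of the "latest" you get with a bare "FROM node"
FROM node:18
```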
After pulling this image, we use the `COPY . .` instruction to copy our code from the local machine into the container's file system. The first `.` refers to the directory where the Dockerfile is present (which is the location of the code we want to copy) and the second `.` is the destination in the container's file system.
A few things to note here:
- The container and the local machine have a separate filesystem and our code needs to be present on the container file system for the container to be able to run our app. Hence we copy it there.
- Notice how I'm talking about the container file system here. Strictly speaking, `COPY` bakes the files into the image at build time, but an image in and of itself never runs anything. It is always a container (based on an image) that actually runs, and each container gets its own copy of the image's files, which is why I keep referring to the container's file system.
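A common refinement of the `COPY . .` step — not something the minimal Dockerfile above needs, but worth knowing — is to set a working directory inside the image so your code doesn't land in the root of the file system (the path `/app` is just a conventional choice, not a requirement):

```dockerfile
# Create and switch to a dedicated directory inside the image
WORKDIR /app

# Copy from the build context (where the Dockerfile lives) into /app
COPY . .
```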
After copying the code, we want to install the dependencies, which for Node.js apps is done with the `npm install` command. So we simply use the `RUN` instruction followed by the actual command to install all the dependencies.
Now let's assume that our Node.js app listens on port 80. But this port 80 is inside our Docker container and is not reachable from outside it, so you simply would not be able to go to `localhost:80` and access the app. The `EXPOSE 80` instruction documents that the container listens on port 80; as we'll see shortly, we still need one more step when starting the container to actually access it from outside.
And finally, after setting all this up, we want to run the command `node server.js` to start the server. The way to do this is with the `CMD ["node", "server.js"]` syntax. Now you might be wondering: both `npm install` and `node server.js` are commands we would run in our terminal, so why are they specified so differently in the Dockerfile?
Well, this is because `RUN` is executed as part of the image-building step, whereas `CMD` specifies what we want to run in our container. To put it in even simpler terms: we don't want to run `node server.js` whenever we build our image from the Dockerfile; instead, we want to run `node server.js` when we start the container. This is where the two keywords `RUN` and `CMD` differ in functionality. It is very important to understand that building an image is not the same as running a container. Images are built with one command, and containers are run from the built image with a different command, as I will show you next.
Actually Using Docker
Now that we have the Dockerfile for our app ready let's go about building our image and starting our first container.
Open a terminal in the directory where your Dockerfile is present and run:
```shell
docker build .
```
This is the command that builds your image. Once it finishes, you should see a long output, and at the end of it there will be a line like `Successfully built abcd1234`, where instead of `abcd1234` you'll see the actual image ID. This ID is what you will use to spin up a container from this particular image. Before doing that, let me come back to ports and exposing them.
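As a small aside, instead of copying the image ID around, you can give the image a human-readable name at build time with the `-t` flag (the name `my-node-app` below is just an example):

```shell
# Build the image and tag it as "my-node-app"
docker build -t my-node-app .

# Later, refer to the image by its tag instead of its ID
docker run -p 3000:80 my-node-app
```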
As I stated above, since our Node.js server listens on port 80 inside the container, we need to make this port accessible outside the container in order to use our app. That is why we used the `EXPOSE 80` instruction in our Dockerfile. But doing just this is not enough: we also need to specify, when starting the container, which port on our local system we want to connect to port 80 of the container. To make this a bit clearer, let us look at the command we run to start our container:
```shell
docker run -p 3000:80 abcd1234
```
Now if you send requests to `localhost:3000` from your browser, or using some other tool, you should get the response that your Node.js server sends. Had we simply used `docker run abcd1234`, we would not have been able to interact with our server, since we did not specify which port we are going to use to access port 80 of the container. The `-p` flag does that for us.
We could have chosen to open any port of our machine to connect with port 80 of the container. For example, if we wanted to use port 4000 for the Node.js app, we would run `docker run -p 4000:80 abcd1234`.
Finally, I would like to end this by showing you how you can list your containers and stop running containers.
The `docker ps` command will show you all the running containers. After running `docker ps` you can grab the name or ID of a container and run
```shell
docker stop container_name_or_id
```
to stop that particular container. If you want to see all the containers you have ever run, and not just the currently running ones, you can use this command:
```shell
docker ps -a
```
This was it from my side, and I hope that you're now a bit more comfortable with actually "using" Docker. If you have any doubts, feel free to reach out to me and I'll try my best to answer your queries. There is a lot to Docker, and this article by no means covers everything about building images and running containers, but it definitely gives you a much-needed solid start. I will be writing such articles on other "core" Docker concepts too, so if you liked this one, keep an eye out for the next ones.
Thanks for reading! :)
Reach out to me on Twitter to share your feedback or for any queries. I'd be more than happy to help!