Docker For Frontend Developers

Aks • Posted on DEV Community

This is a short and simple guide to Docker, useful for frontend developers.

Why should you use Docker?

Long ago, when a business needed a new application, the ops team would go out and buy a server without knowing the performance requirements of the new app. This involved a lot of guesswork and wasted capital and resources that could have been used for other apps.

Enter virtual machines (VMs), which let us run multiple apps on the same server. But there is a drawback: every VM needs an entire OS to run. Every OS needs CPU, RAM, etc., plus patching and licensing, which increases cost and reduces resiliency.

Google started using the container model a long time ago to address the shortcomings of the VM model. In the container model, multiple containers on the same host share the host's OS kernel and resources, freeing up CPU and RAM that can be used elsewhere.

But how does it help us developers?

It ensures that the working environment is the same for all developers and all servers, i.e. production, staging and testing.

Anyone can set up the project in seconds; no need to mess with config, install libraries, set up dependencies, etc.

In simple terms, Docker is a platform that enables us to develop, deploy, and run applications in containers.

Let’s take a step back: what does a container system look like physically, and how is it different from a VM?

1.1 Difference between VM and Docker

As you can see, the host and its resources are shared by containers, but not by virtual machines.

With that out of the way, let’s dive.

How to use Docker?

For that we need to familiarise ourselves with certain terminology.

1.2 Visualisation of Docker images and Docker containers

Docker image: an executable package that contains a cut-down operating system and all the libraries and configuration needed to run the application. It is made up of multiple layers stacked on top of each other and represented as a single object. A Docker image is created using a Dockerfile; we will get to that in a bit.

Docker container: a running instance of a Docker image. There can be many containers running from the same image.
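
To make the distinction concrete, here is a minimal sketch (using the hello-world image we will build later in this article) showing one image backing several containers; the names web-1 and web-2 are just examples:

    # One image...
    docker images

    # ...can back many independent containers
    docker container run -d -p 4000:8081 --name web-1 hello-world
    docker container run -d -p 4001:8081 --name web-2 hello-world

    # Both containers show up here, created from the same image
    docker ps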

Containerise a Simple Node.js App

We will try to containerise a very simple Node.js app and create an image:

Your Node.js App

Let’s start by creating a folder called my-node-app:

mkdir my-node-app  
cd my-node-app

Let’s create a simple Node server in index.js and add the following code there:

// Load express module with `require` directive
var express = require('express')
var app = express()

// Define request response for the root URL (/)
app.get('/', function (req, res) {
  res.send('Hello World!')
})

// Launch listening server on port 8081
app.listen(8081, function () {
  console.log('app listening on port 8081!')
})

Save this file inside your my-node-app folder.

Now create a package.json file and add the following code:

{
  "name": "helloworld",
  "version": "1.0.0",
  "description": "Dockerized node.js app",
  "main": "index.js",
  "author": "",
  "license": "ISC",
  "dependencies": {
    "express": "^4.16.4"
  }
}

At this point you don’t need Express or npm installed on your host, because, remember, the Dockerfile handles setting up all the dependencies, libraries and configuration.

Dockerfile

Let’s create a Dockerfile and save it inside our my-node-app folder. This file has no extension and is named Dockerfile. Let’s go ahead and add the following code to it:

    # Dockerfile  
    FROM node:8  
    WORKDIR /app  
    COPY package.json /app  
    RUN npm install  
    COPY . /app  
    EXPOSE 8081  
    CMD node index.js

Now, what are we doing here?

FROM node:8 - pulls the Node.js Docker image from Docker Hub, which can be found here: https://hub.docker.com/_/node/

WORKDIR /app - sets the working directory for our code in the image; it is used by all subsequent commands such as COPY, RUN and CMD.

COPY package.json /app - copies our package.json from the host's my-node-app folder into the /app folder in the image.

RUN npm install - runs this command inside the image to install the dependencies (node_modules) for our app.

COPY . /app - tells Docker to copy our files from the my-node-app folder into /app in the image.

EXPOSE 8081 - exposes a port on the container. Why this port? Because our server in index.js listens on 8081. By default no container port is published to the host; we will publish this one when we run the container.

CMD node index.js - the command that runs when a container is started from this image; it launches our server.
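
One optional addition, not part of the Dockerfile above, is a .dockerignore file next to it, so that COPY . /app does not drag the host's node_modules or other clutter into the image. A minimal sketch:

    # .dockerignore (assumed addition, not in the original tutorial)
    node_modules
    npm-debug.log
    .git

With this in place, only the files the image actually needs are sent to the Docker daemon as build context.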

Build Docker Image

Showtime. Open a terminal, go to your my-node-app folder and type the following command:

    # Build an image: docker build -t <image-name> <relative-path-to-your-dockerfile>
    docker build -t hello-world .

This command creates a hello-world image on our host.

-t is used to give a name to our image, which is hello-world here.

. is the relative path to the Dockerfile; since we are in the my-node-app folder, we use a dot to represent the path to the Dockerfile.

You will see output on your command line, something like this:

    Sending build context to Docker daemon  4.096kB  
    Step 1/7 : FROM node:8  
     ---> 4f01e5319662  
    Step 2/7 : WORKDIR /app  
     ---> Using cache  
     ---> 5c173b2c7b76  
    Step 3/7 : COPY package.json /app  
     ---> Using cache  
     ---> ceb27a57f18e  
    Step 4/7 : RUN npm install  
     ---> Using cache  
     ---> c1baaf16812a  
    Step 5/7 : COPY . /app  
     ---> 4a770927e8e8  
    Step 6/7 : EXPOSE 8081  
     ---> Running in 2b3f11daff5e  
    Removing intermediate container 2b3f11daff5e  
     ---> 81a7ce14340a  
    Step 7/7 : CMD node index.js  
     ---> Running in 3791dd7f5149  
    Removing intermediate container 3791dd7f5149  
     ---> c80301fa07b2  
    Successfully built c80301fa07b2  
    Successfully tagged hello-world:latest

As you can see, it ran the steps in our Dockerfile and output a Docker image. The first time you try it, it will take a few minutes; from the next time on it will use the cache and build much faster, with output like the one shown above. Now, try the following command in your terminal to see whether your image is there:

    # Get a list of images on your host 
    docker images

It should list the images on your host, something like this:

    REPOSITORY    TAG      IMAGE ID      CREATED         SIZE  
    hello-world   latest   c80301fa07b2  22 minutes ago  896MB
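As an aside (not part of the original steps), you can give the image an explicit version instead of relying on the default latest tag:

    # Tag the build with an explicit version
    docker build -t hello-world:1.0 .

    # An existing image can also be given additional tags
    docker tag hello-world:1.0 hello-world:latest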

Run Docker Container

With our image created, we can spin up a container from it.

    # Basic usage: docker container run <image-name>
    docker container run -p 4000:8081 hello-world

This command is used to create and run a docker container.

-p 4000:8081 - the publish flag: it maps host port 4000 to container port 8081, which we exposed through the EXPOSE command in the Dockerfile. Now all requests made to host port 4000 will be forwarded to container port 8081.

hello-world - the name we gave our image earlier when we ran the docker build command.

You will receive some output like this :

    app listening on port 8081!
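If you prefer not to keep the container attached to your terminal, a common variation (not used in the rest of this article) is to run it detached and give it a name; my-hello-world here is just an example name:

    # Run in the background and name the container
    docker container run -d -p 4000:8081 --name my-hello-world hello-world

    # Follow the logs of the detached container
    docker logs -f my-hello-world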

If you want to enter your container and attach a bash terminal to it, you can run:

    # Enter the container
    docker exec -ti <container id> /bin/bash

To check whether the container is running, open another terminal and type:

    docker ps

You should see your running container, like this:

    CONTAINER ID     IMAGE         COMMAND                   CREATED
    <container id>   hello-world   "/bin/sh -c 'node in…"    11 seconds ago

    STATUS           PORTS                    NAMES
    Up 11 seconds    0.0.0.0:4000->8081/tcp   some-random-name

It means our container with id <container id>, created from the hello-world image, is up and running and listening on port 8081.
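
When you are done, the usual cleanup (not covered further in this article) is to stop and then remove the container:

    # Stop the running container, then remove it
    docker stop <container id>
    docker rm <container id>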

Now our small Node.js app is completely containerised. You can open http://localhost:4000/ in your browser and you should see something like this:

1.3 Containerised Node.js App

Voilà, you have containerised your first app.

Top comments (51)

derek

Oh, also you can get more bang for your buck...

if you use

FROM node:8-alpine

or

FROM gcr.io/distroless/nodejs

Because... size matters 😜, and so does security 🔒

Image                      Size
node:8                     681MB
gcr.io/distroless/nodejs   76MB
node:8-alpine              69.7MB
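
For reference, swapping the base image in the article's Dockerfile is a one-line change; here is a minimal sketch (assuming the same app layout as in the post) on the Alpine base:

    # Same Dockerfile as in the article, only the base image changes
    FROM node:8-alpine
    WORKDIR /app
    COPY package.json /app
    RUN npm install
    COPY . /app
    EXPOSE 8081
    CMD node index.js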
Aks

You are right Derek, slimming your images is important, but I think that is a vast agenda in itself: multi-stage builds, trimming your code, among other things :)

derek

Indeed. But hypothetically, if you could only do one thing, utilizing alpine or distroless is low-hanging fruit with a huge ROI.

Because even if you do a multi-stage build without it, you won't trim much in comparison.

Image                            Size
node:8                           681MB
node:8 with multi-stage build    678MB

👆🏽 is a basic Hello World Express app

SHARAD SHINDE

Docker Slim: Hold my beer 😅

Take a look at docker-slim - dockersl.im/

derek

🍺 held! Very cool

Confirmed! They've successfully implemented the middle-out compression 😆

slidenerd

Does it have node-gyp?

Vishnu Haridas

Earlier when I started on Docker tutorials online, I couldn't understand how different OSs make it possible to run an isolated environment without having a VM.

Only later did I understand that Docker is designed for Linux and uses kernel-level isolation features built into the Linux OS. When installing Docker on Windows or Mac, you are running Docker inside a Linux OS that runs in a VM on that Windows/Mac computer.

I would really like every Docker tutorial to include a line clearly saying: "A container is all about filesystem/resource isolation, which is a system-level feature built into Linux, and Docker is a tool that abstracts this feature."

Juan Ramos

So, if we want to use Docker in Windows or Mac, we will end up using VM technology at the end of the day. Is that correct?

Vishnu Haridas

Correct. From Wikipedia: "Docker on macOS uses a Linux virtual machine to run the containers. It is also possible to run those on Windows using Hyper-V or docker-machine."

Windows has two types of containers — Windows Server Containers and Hyper-V Isolation — in which Hyper-V is a VM. More details are here - docs.microsoft.com/en-us/virtualiz...

I understand that production servers are running on Linux machines.

--

Note: Microsoft will soon start shipping a full Linux kernel within Windows. This may change how Docker runs in Windows computers. Let's wait and see.

Dimitrios Lytras

Why copy package.json separately?

Raimbek

I'll try to answer with my bad English :)

It's a very good question. Docker creates a layer for each COPY/ADD command (and some others; see the documentation). At build time Docker checks for changes, and if a change is detected in a layer, all the layers after it are rebuilt.

For example, assume we have a Dockerfile like this:

 WORKDIR /app  
 COPY . /app  
 RUN npm install  

If we change the source code frequently, Docker will execute npm install for every change. That's very inefficient, because for each little change you reinstall all the Node.js packages (and if you're not using a volume or cache, it will take a loooong time).

In this case:

 WORKDIR /app  
 COPY package.json /app
 RUN npm install
 COPY . /app  

npm install is executed only when package.json changes (a package was added or removed, and so on).

Dimitrios Lytras

Thank you for the detailed response! Your English is perfect btw.

Aks

right on :)

Gautam Krishna R

Great explanation. I too had the same question.

anantvir

The best explanation so far. Much better than 99% of the youtubers/tutorials online ! Thanks Raimbek ! This is awesome

SomangIm

Thank you! I was looking for this answer too.

David Dennis

This is used to download the Node packages/dependencies your application needs. By adding the node_modules folder to your .dockerignore file, you only need the package.json file, which gives a clear description of all the packages your Node app needs in order to run.

Vincent Schoener

Still the same article without a proper solution for working with IDE features such as autocomplete.

node_modules is installed in the container, but you also need to install it locally and try not to mess with it when you mount your source, if you want the best of both worlds. (Currently working on it and writing an article describing the need and the issue all developers meet.)

Otherwise, nice article :)

Anton Kosyakov

You should check out Gitpod. It builds your image together with the project within it, deploys it in the cloud and provides a VS Code-like browser IDE with autocomplete and so on. Also, VS Code has released remote extensions which are deployed in containers, although I'm not sure how they get files from the host to the container OS; if they mount them, then you will hit the same issues.

Dan

If I understand you right, VS Code just implemented an extension so you can use VS Code in the container environment. It needs the Insiders build. They just announced this yesterday.

Aks

In order to solve this problem, just develop locally in your container, no?

Vincent Schoener

That's not possible; how can I use an IDE that way and be productive? Using Vim or another text editor is not the solution at all :)

Dan

Thank you very much for this write up! It's very easy to understand and I have been curious about docker because I see the term often.

In your example you have set up a Node server. As of right now I don't need to do that because I'm focused on frontend things. Do you see any reason to use Docker for frontend development (i.e. simple webpages that don't require backend services)?

Also, one thing that is unclear to me about Docker containers is where the files live (for example, if I make hello.txt in the container). Are those files in the Dockerfile folder? Can I access them if I'm not using the Docker container? In a Docker container, if I run cd ~, where does that take me? A virtual or real home?

Aks

Hey Dan,

Thank you for your feedback :)

While Docker is not a necessity for frontend development, it is advisable. For example, even for simple webpages you need a server that serves them to consumers, and that entire setup should be dockerized (in my opinion) so that other devs do not have to put effort into setting up a local server. It just makes development easier. The next part is deployment: the config for that is a little more complex, but in simple terms Docker allows you to scale as and when needed based on traffic.

For your second question: say you have a my-app folder; then your folder structure is going to look like this:

Docker folder structure

Then in your Dockerfile you can write:

    # Dockerfile
    FROM <some base image that you want>
    COPY hello.txt <wherever you want the file to live in the container; can be a folder or not>
    EXPOSE 8081

Also, for your last question: inside the container, the paths you navigate point to virtual paths inside the container.
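
For a concrete (hypothetical) version of that generic sketch, a static page could be served with an nginx base image; this is an illustration, not part of the original reply:

    # Hypothetical Dockerfile for serving a static frontend with nginx
    FROM nginx:alpine
    # Copy the static files into nginx's default web root
    COPY index.html /usr/share/nginx/html/
    EXPOSE 80

Build and run it the same way as in the article, e.g. docker build -t my-static-site . followed by docker run -p 8080:80 my-static-site.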

Dan

Wow thank you! I understand now about the COPY part in the docker file. I had to go back up and re-read your explanation. Makes sense now.

I'm certainly interested in trying these things out - but at the moment I think it will introduce a great many other things that I need to learn and I think I will wait for a bit. But your post certainly helped clear some things up in my head about what docker containers are all about.

So please correct me if I'm wrong, but I'm curious about the following:

  • locally you have a container set up for development. You have this docker file and etc.
  • locally anyone on your team can then easily set up the same environment because they can use the same docker file
  • server side you also have the same docker file but that container is constantly running to serve a website or whatever

I assume that you would also use version control for all of this, including the docker file. I also assume that the container running on the server has some kind of way to update files from version control when a commit is done on a certain branch. (I know that hooks exist for this sort of thing with GitHub but I don't have experience with this yet.)

If you have the time, I'd be interested to read more about this sort of stuff in another dev.to post. Diagrams are helpful for sure! There's a lot of moving parts for all of this technology and it's a challenge to see how things fit together.

Aks

Hey Dan,

Yes, using the Dockerfile you can build and share your image. There is a registry with versioning for Docker images, hub.docker.com/, where you can store and share your images.

I did not understand your last question, could you please elaborate?

Wilson Tovar

Thaaaaaaaaaank you! I was looking for a tutorial like this. <3

Vijay Koushik, S. 👨🏽‍💻

Thanks for the post 👍🏽. Not only did I learn what Docker is, I also learned how to use it with Node.js 🙂

Lenmor Ld

I do more frontend at work, and I just needed a quick intro to the concepts so I can grasp what the DevOps guys are talking about. 😅
Didn't want to sit through a 2-hour tutorial, so this is exactly what I needed.
Thanks for this short-and-sweet demo!

Leslie

This is awesome; I absolutely love this platform because of information like this. I'm going to complicate things a bit and try it with ASP.NET.

Joe Zack

Microsoft has really nice base images that will do this. It's a bit tricky because the SDK you need to compile is separate from the runtime you need to run it, so they do a two-pass build.

It's not bad, but I would definitely start with their examples: docs.microsoft.com/en-us/aspnet/co...
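
A minimal sketch of that two-pass idea (the image tags and the MyApp.dll name below are assumptions; follow the linked Microsoft docs for the current ones):

    # Stage 1: compile with the SDK image
    FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
    WORKDIR /src
    COPY . .
    RUN dotnet publish -c Release -o /app/publish

    # Stage 2: run on the smaller runtime-only image
    FROM mcr.microsoft.com/dotnet/aspnet:8.0
    WORKDIR /app
    COPY --from=build /app/publish .
    ENTRYPOINT ["dotnet", "MyApp.dll"]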

Leslie

Thank you, be blessed 🙏

mithiridi23

Very nice article

biju ramachandran

Hello Akanksha:

Great tutorial - now I have a better understanding of Docker. But when I run the following command, I get: /bin/sh: 1: mode: not found

ubuntu@ubuntu-VirtualBox:~/my-node-app$ sudo docker container run -p 4000:8081 hello-world
/bin/sh: 1: mode: not found

Thanks and much appreciated.
/Biju

Aks

Hi Biju,

It is hard to say what could be causing this error. Maybe you need to source /bin/sh. I may be wrong; I don't have a lot of experience with Ubuntu. Sorry :(
