Overview
Docker is a platform for containerizing software projects. By containerizing, I mean packing the source code, packages, and dependencies, each at its required version, into one container and running it as a process.
The difference between Docker and VMs is that Docker runs containers on the host OS kernel, with no guest operating systems. Under VMs, on the other hand, the server hardware is virtualized and each VM has its own guest OS. This makes Docker lightweight: it consumes fewer resources and spins up apps faster.
How to Dockerize an application?
Applications get dockerized by first writing a blueprint, or snapshot, of what should be inside the container. This blueprint is called a Docker image, and it is specified in a Dockerfile. We then run this image to create a container from that snapshot.
Making the Dockerfile
This file usually sits in the root directory of the application. Each line in it represents an instruction and adds a layer to how the image is built.
Let's take a simple Express API running on Node as an example.
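As a concrete starting point, a minimal version of such an API might look like this (the file name app.js, the route, and port 3000 are assumptions for illustration, not taken from the original project):
// app.js - a minimal Express server for the example API
const express = require('express');

const app = express();

app.get('/', (req, res) => {
  res.json({ message: 'Hello from inside the container' });
});

// Listen on all interfaces so the server is reachable from outside the container
app.listen(3000, '0.0.0.0', () => console.log('API listening on port 3000'));
Assuming package.json also has a start script such as "start": "node app.js", the Dockerfile below can launch the app through npm.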
1_ Specifying the parent image in the Dockerfile
FROM node:17-alpine
This is the first layer of the Dockerfile. It pulls this Node image and installs it in the container. The part after the colon is called a tag; it specifies which version of Node needs to be pulled (here version 17 on the lightweight Alpine variant).
2_ Specifying the directory we want to work in inside the container.
FROM node:17-alpine
WORKDIR /app
3_ Copying package.json and installing all the dependencies in the container.
FROM node:17-alpine
WORKDIR /app
COPY package.json .
RUN npm install
4_ Copying the whole source code into the container.
FROM node:17-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
The first dot next to COPY means copy the whole current directory. The second dot is where to paste it in the container. It is a dot rather than /app because, after specifying WORKDIR /app, all paths in the container are relative to /app. Copying package.json and running npm install before copying the rest of the source also lets Docker cache the dependency layer, so npm install only reruns when package.json changes.
5_ Finally, running the app.
FROM node:17-alpine
WORKDIR /app
COPY package.json .
RUN npm install
COPY . .
CMD ["npm", "start"]
Building the image:
docker build -t [IMAGE_NAME] .
Running the image inside a container:
docker run --name [CONTAINER_NAME] -p [HOST_PORT]:[CONTAINER_PORT] [IMAGE_NAME]
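With concrete values, the two commands might look like this for the example API (my-express-api and api_container are just placeholder names):
docker build -t my-express-api .
docker run --name api_container -p 3000:3000 my-express-api
The -p flag maps port 3000 on the host to port 3000 inside the container, so the API is reachable at localhost:3000.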
An alternative to the above commands is a single compose file that looks like this:
- Create docker-compose.yaml next to the project directory, so that the api folder (which contains the Dockerfile) sits beside it.
version: '3.8'
services:
  api:
    build: ./api
    container_name: api_container
    ports:
      - '3000:3000'
    volumes:
      - ./api:/app
      - /app/node_modules
Run docker-compose up
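Two related commands come in handy with the compose file:
- docker-compose up --build
- Rebuild the image before starting the containers (useful after changing the Dockerfile or package.json).
- docker-compose down
- Stop and remove the containers that docker-compose up created.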
What are volumes?
Images are read-only, which means that in a development environment it does not make sense to rebuild the image every time the source code changes. This is where volumes become very handy: they map files from the local code onto the container's filesystem, so changes show up in the running container without building a new image.
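This is exactly what the volumes section of the compose file above does. Without Compose, the same mapping can be sketched with docker run directly (my-express-api is the placeholder image name used earlier, and the command assumes it is run from the folder that contains api):
docker run --name api_container -p 3000:3000 -v "$(pwd)/api:/app" -v /app/node_modules my-express-api
The first -v bind-mounts the local source into /app; the second keeps the node_modules installed inside the image from being hidden by that mount, mirroring the anonymous volume in the compose file.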
Some useful docker commands
- docker ps
- List all running containers.
- docker ps -a
- List all containers, including stopped ones.
- docker images
- List all images.
- docker build -t [IMAGE_NAME] .
- Build the image from the Dockerfile in the current directory.
- docker run --name [CONTAINER_NAME] -p [HOST_PORT]:[CONTAINER_PORT] [IMAGE_NAME]
- Run the image inside a container.