Hey Devs,
As you all know, I am an advocate for serverless and container architectures. Here is my profile if you want to get to know me more: https://www.kevinodongo.com/. The reason I advocate for these two approaches is that, for startups, cost is a major factor.
The reason I prefer these two architectures is that they are quite cost-effective: you can control your cost depending on your application architecture. The debate between fans of the two is quite big. On my end, I can say that for a large-scale application, container architecture can be quite versatile and will give you better control of the application. Serverless will get you up and running quickly: AWS Amplify and Firebase/Firestore will get your application up and running in a couple of hours. Don't get me wrong, serverless can also scale and handle large-scale applications.
Let me get back to today's tutorial. We will break down Docker in a simple way. This tutorial can help anyone who wants to get started with Docker.
Imagine you are building an application with the following stack:
- Vue for the front end
- Node and Express for the backend
- Socket.io
- A TURN server
- WebRTC
- Redis for caching
- MongoDB as your database
Ideally, your application will need to scale to meet the needs of your users. The best approach for this stack is to decouple your application and let each service run independently.
Here is where Docker comes in. While building with containers, the one RULE you should stick to is that each container should do one thing and do it well. Docker containers allow us to decouple our application.
In the above diagram, we have a simple architecture. Let me explain what is going on. Users access our application through the web app, which is the Vue application. Once a new session begins, our Node worker checks Redis to see whether this is a new user or an existing one. In Redis we only store the user ID, while in MongoDB we save all the details of the user. If the user does not exist, we create their details in the database. Our TURN server works independently but communicates with the Node worker.
We will deploy each section separately in individual containers. This allows each container to carry out only the single task it is designed to do.
So how do we manage all these containers? This is where Kubernetes, AWS ECS, AWS Fargate, AWS EKS, and many other tools out there assist in managing containers.
#### Brief Explanation
For someone learning how Docker works, here is a brief explanation of how to go about it. When you begin learning, you will realize you can define everything in a single command using the Docker CLI. This can be daunting for a new learner: will I have to learn all of that? For example:
```
docker run -dp 3000:3000 \
  -w /app -v "$(pwd):/app" \
  node:12-alpine \
  sh -c "yarn install && yarn run dev"
```
There is a simpler way of doing everything using two files: a Dockerfile and a docker-compose.yml. These two files will simplify everything that you are trying to achieve.
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7x6h69vvjjh2yult49lg.png)
A Dockerfile is simply a text-based script of instructions that is used to create a container image.
Docker Compose lets you define and run all your containers together, across testing, development, and production.
Before we get to the other components of Docker, let us discuss these two files, because you will use them frequently.
#### Dockerfile
Assume you want to start building the backend of the above stack. We are talking about the Node worker, Redis, the TURN server, and MongoDB. To begin, you will need a Mongo database running, a Redis server running, and a TURN server running. All of these can be achieved either by pulling the images from Docker Hub or AWS ECR and running containers from them, or by defining everything in a Docker Compose file.
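For example, here is a minimal sketch of pulling and running Redis and MongoDB straight from their official Docker Hub images (the container names and default ports here are just assumptions for illustration):
```
# pull the official images from Docker Hub
docker pull redis
docker pull mongo

# run each service in its own container, exposing the default ports
docker run -d --name redis-server -p 6379:6379 redis
docker run -d --name mongo-server -p 27017:27017 mongo
```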
#### The structure of a Dockerfile
- Define your environment. If you are working on a Node application you need a Node base image, etc.
- Create a directory that will hold all your application files.
- Install all dependencies. To install the dependencies we first need to copy the package.json file.
- Run npm install.
- Copy all files into the directory you created above.
- Start your application.
*Here is a sample of a Dockerfile for development:*
```dockerfile
# install node
FROM node:alpine
# make the 'app' folder the current working directory
WORKDIR /usr/app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# serve application in development
CMD [ "npm", "start" ]
```
To build your image, make sure you are in the root folder where your Dockerfile resides and run the following commands.
```
docker build .          # build the image
docker images           # get the image id
docker run <image id>   # start a container from the image
```
With the above, you will have your application running in a container.
NOTE
Just understand the structure of a Dockerfile and what each section defines.
#### Docker Compose
Imagine we have many containers to deploy, for example MongoDB, Redis, a TURN server, and the Vue app. Using the above route of docker build and docker run for each one would be quite tedious.
Docker-compose simplifies everything.
```yaml
version: "3.8"
services:
  redis-server:
    container_name: redis-server
    image: redis
    restart: always
  turn-server:
    container_name: turn-server
    image: instrumentisto/coturn
    restart: always
  mongo-server:
    container_name: mongo-server
    image: mongo
    restart: always
  node_backend:
    container_name: node_backend
    build:
      context: .
      dockerfile: Dockerfile.dev
    restart: always
    depends_on:
      - mongo-server
    environment:
      - MONGO_DB_URI=mongodb://mongo-server/<db name>
      - REDIS_DB_URI=redis-server
    ports:
      - 3000:3000
    volumes:
      - ./:/node_backend
```
Once we run the command below, all our containers will be running under one network and will be able to communicate with each other.
```
docker-compose up
```
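A couple of commonly used variations of the same command (standard Docker Compose flags):
```
docker-compose up -d --build   # rebuild the images and run everything in the background
docker-compose ps              # list the services that are running
```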
This will do all the steps we were doing manually one by one. With all containers running, you can focus on developing your application. Once you are done, tear down all your containers.
```
docker-compose down   # shut down your environment
docker system prune   # clean up your environment
```
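If you also want to remove the volumes your containers created (for example the MongoDB data), `docker-compose down` accepts a standard `-v` flag:
```
docker-compose down -v   # shut down and also remove the associated volumes
```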
#### Logging
Run the following command to see the logs for a container.
```
docker logs -f <container-id>
```
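If you are using Docker Compose, you can also follow the logs of a single service by name (using the node_backend service from the compose file above):
```
docker-compose logs -f node_backend
```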
To access a shell inside a running container:
```
docker exec -it <container name> sh
```
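For example, assuming the redis-server container from the compose file above is running, you can open its CLI and check that it responds:
```
docker exec -it redis-server redis-cli ping   # should print PONG
```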
Here are some of the common commands you should know while working with images and containers.
| Command | Description |
| ------------- | ------------- |
| `docker run <image>` | Create and start a container from an image |
| `docker ps` | List running containers |
| `docker build .` | Build an image |
| `docker rm -f <container name/id>` | Force-remove a container |
| `docker system prune` | Clean up your environment |
| `docker run -dp 8080:8080 <image>` | Start a container with port mapping |
| `docker exec -it <container id/name> <command>` | Run a command inside a container |
| `docker build -t <tagname> .` | Tag an image while building |
| `docker scan <image name>` | Scan an image for vulnerabilities |
| `docker image history --no-trunc getting-started` | Show the full layer history of an image |
| `docker stop <container name/id>` | Stop a container |
| `docker kill <container name/id>` | Kill a container |
| `docker-compose up` | Start all services defined in docker-compose.yml |
| `docker-compose down` | Stop and remove those services |
I believe the two files I have discussed above will simplify your path to understanding Docker containers. You can read more about Docker here:
https://docs.docker.com/get-docker/
#### How do we go to production?
In general, before we go to production we need to choose which tool we will use to manage our containers.
Once you are satisfied with your application structure and all the tests have passed, build your image.
We can use either Docker Hub or AWS ECR to store our images; for private images you will be charged on both. Once you have pushed your image, you can deploy the containers using Kubernetes, AWS EKS, or AWS ECS.
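As a minimal sketch of that step, assuming a Docker Hub account (replace <username> and the tag with your own):
```
docker build -t <username>/node_backend:1.0 .   # build and tag the image
docker login                                    # authenticate against Docker Hub
docker push <username>/node_backend:1.0         # push the image to the registry
```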
The beauty of this architecture is that each container will scale independently depending on its load. This means the TURN server might be scaling more than the Redis server.
# CONCLUSION
The easiest way to work with Docker on your machine is through the Docker extension in VS Code.
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/5s7rihazk4ts7jo8nkcm.png)
The image below illustrates a complete Docker environment.
*image from docker*
![Alt Text](https://dev-to-uploads.s3.amazonaws.com/i/hr9w5dzdq21nfh9zd03u.png)
Thank you, I hope this will be helpful to someone.