What is Docker
Docker is a platform that lets us package and run applications inside lightweight, isolated environments called containers. These containers include all the code and dependencies needed to run our app consistently across different machines. Unlike traditional virtual machines, Docker containers share the host operating system's kernel, making them efficient and portable. Docker simplifies version control and dependency management, and ensures consistent environments for development and production. It's a powerful tool for streamlining software deployment and scaling. 🚀
- Each app runs in its own isolated environment with its own Linux-based operating system layer.
- Packaged with all of its configuration.
- One command to install the app.
Application deployment
Before Docker:
- The server had to be configured for every application.
- There could be dependency conflicts between applications.
- Misunderstandings between developers and operations.
- Deployment instructions lived in a text document.
After containers:
- No environment configuration is needed on the server, except the Docker runtime.
- Developers and operations work together to package the application into a container.
What is a container
- A container is built from layers of images.
- At the bottom is usually a Linux base image (e.g. alpine:3.10), chosen because of its small size.
- The application image sits on top.
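We can see these layers for ourselves. A quick sketch, assuming the redis image (used later in this post) is already on the machine:

```shell
# list the layers that make up the redis image,
# base-image layers at the bottom, application layers on top
docker history redis
```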
Let's install postgres:9.6
1 . Go to Docker Hub.
2 . Search for postgres.
3 . Choose your desired version; I chose 9.6.
4 . On Windows, open WSL and run docker run postgres:9.6
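The whole installation fits in one command. A minimal sketch; note that recent postgres images refuse to start unless a superuser password is set, and the password value here is only an example:

```shell
# pull postgres:9.6 if it is not present locally, then start it detached,
# publishing the default Postgres port to the host
docker run -d -e POSTGRES_PASSWORD=example -p 5432:5432 postgres:9.6
```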
Difference between Image and Container
- An image is the actual package or software that we want to run. It can be moved around.
- A container is a running instance of an image. When we run an image, Docker first creates a container from it and then starts the application inside that container.
Difference between Docker and Virtual machine
- A Docker image is much smaller, often a couple of megabytes, while a VM image can be a couple of gigabytes.
- A Docker container starts and runs faster, while a VM is much slower.
- A Linux-based container will not run natively on Windows, because the OS kernel is not the same. That is why on Windows we need a tool such as Docker Toolbox (or, nowadays, Docker Desktop).
Basic Docker commands
- docker pull
- docker run
- docker start
- docker stop
- docker ps
- docker exec -it
- docker logs
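As a quick reference, here is what each of these commands does (`<id>` stands for a container ID or name):

```shell
docker pull redis          # download an image from a registry (Docker Hub by default)
docker run redis           # create a new container from an image and start it
docker start <id>          # restart an existing, stopped container
docker stop <id>           # stop a running container
docker ps                  # list running containers
docker exec -it <id> sh    # open an interactive shell inside a running container
docker logs <id>           # print a container's output
```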
Installing the redis image
1 . Go to Docker Hub.
2 . Search for redis.
3 . Open WSL and run the command docker pull redis
4 . Now we have to put the redis image into a container to run it. For that, just give the command docker run redis
To check the running containers, open a new terminal and give,
docker ps
To stop the attached container, give
ctrl + c
docker run
creates a new container every time we use it. Here are some key differences between the two commands:
Attached vs Detached Mode: the main difference is that docker run attaches to the container, while docker run -d runs the container in detached mode.
Output: with docker run, you will see the output of the container in your terminal. With docker run -d, you will not.
Container Management: with docker run, you can manage the container directly from your terminal. With docker run -d, you need other commands like docker ps and docker stop to manage the container.
In summary, docker run is used to start a container and attach to it, while docker run -d starts a container in detached mode, allowing it to run in the background.
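Side by side, the two forms look like this (redis is just an example image):

```shell
docker run redis      # attached: output streams to this terminal, Ctrl+C stops it
docker run -d redis   # detached: prints the new container's ID and returns immediately
```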
Attached mode
Here the container is attached to the terminal
Detached mode
9 . Now in detached mode, to stop the container for some reason, we can use docker stop <id>
10 . To start it again, use docker start <id>
11 . Suppose we stop the container for today, come back tomorrow, and want to start from where we left off. For that we can use docker ps -a
which shows all containers, both running and stopped.
12 . Now if we want to run two redis containers of different versions, we have to use docker run redis:<version>
where <version> is the image version we want to run in our container. docker run
pulls the image (if it is not available locally) and runs the container.
So docker run does two commands in one: a docker pull followed by starting the container.
Here we can see that both redis versions are listening on port 6379/tcp, which is the container port.
But we cannot expose multiple applications through the same port on localhost. To solve this, we have to bind each container to a different host port.
13 . To bind the ports, first we have to stop all the running redis containers.
Then run,
docker run -p 6000:6379 -d redis
for the first one.
For the second one, run it the same way on host port 6001; remember, 6379 is the container port.
We can see two different redis applications running on ports 6000 and 6001.
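Put together, the two port bindings look like this (the second version tag is only an example; any two different redis tags work):

```shell
docker run -d -p 6000:6379 redis       # first instance: host port 6000 -> container port 6379
docker run -d -p 6001:6379 redis:7.0   # second instance: host port 6001 -> container port 6379
docker ps                              # shows both containers with their port mappings
```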
docker logs & docker exec -it
To check why a container is not working properly, use docker logs:
docker logs <id>
- or
docker logs <name>
But it is hard to remember auto-generated container names. To solve this problem, we can name the container ourselves.
3 . So we currently have two containers running.
First we have to stop one using,
docker stop <id>
Now give the command ,
docker run -d -p6001:6379 --name redis-older redis:7.0
- Then we can use docker logs with the given name
docker logs redis-older
Docker exec -it
We can easily open a terminal inside a running container using,
docker exec -it
We have a container named redis-old
now give,
docker exec -it redis-old /bin/bash
note: we can use the container id also instead of name(redis-old)
Then we go to the home directory and run the env command to check the environment variables
To exit we simply give
exit
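A sketch of such a session, using the container name from above:

```shell
docker exec -it redis-old /bin/bash   # open a bash shell inside the container
# now inside the container:
cd ~        # go to the home directory
env         # list the environment variables
exit        # leave the container shell and return to the host
```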
Developing with containers
- Install mongo and mongo-express by,
docker pull mongo
docker pull mongo-express
We are going to create a new network
To create a new network, give:
docker network create mongo-network
to check the network list:
docker network ls
Run mongo containers
The default container port for mongo is 27017
docker run -d -p 27017:27017 \
-e MONGO_INITDB_ROOT_USERNAME=admin \
-e MONGO_INITDB_ROOT_PASSWORD=password \
--name mongodb \
--net mongo-network \
mongo
id: 74bd46402e9cebdf13c2265a01b072807dd7d97bb83ae569193e35c191f67f60
3 . In the same way, run mongo-express,
docker run -d -p 8081:8081 \
-e ME_CONFIG_MONGODB_ADMINUSERNAME=admin \
-e ME_CONFIG_MONGODB_ADMINPASSWORD=password \
-e ME_CONFIG_MONGODB_SERVER=mongodb \
--net mongo-network \
--name mongo-express \
mongo-express
id: 6847ec9032c3673c6c61fc0f89e1b2baa5cbb44bea27eeedf0d67a1e384ec109
4 . Now we can easily check whether mongo-express is connected by running docker logs <id>
5 . Now go to the browser and open localhost:8081
and give username: admin,
password: pass
And you will see the home page,
Docker Compose
Previously we used run commands to start our containers and created a network so they could communicate. But what if we need to run 10 containers at a time and network them together? It would be very difficult to type a separate command for each container. To solve this issue we can use a Docker Compose file, which contains all those commands in a structured way.
1 . version: '3' // version of the compose file format
2 . services: // all the containers go here
3 . mongodb: // container name
4 . image: mongo // which image we are using, here mongo
5 . ports:
- 27017:27017 // host:container port mapping
6 . environment: // environment variables
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=password
7 . mongo-express: // second container name
8 . image: mongo-express // image
9 . ports: - 8081:8081 // port number
- environment: .... // environment variables
One thing seems to be missing: the Docker network. In a compose file we don't actually need it; Docker Compose itself takes care of creating a common network for the services.
Let's create a docker compose file
In a docker compose file (a YAML file) the indentation is important. We have a demo project with the following compose file,
version: '3'
services:
mongodb:
image: mongo
ports:
- 27017:27017
environment:
- MONGO_INITDB_ROOT_USERNAME=admin
- MONGO_INITDB_ROOT_PASSWORD=password
mongo-express:
image: mongo-express
ports:
- 8081:8081
environment:
- ME_CONFIG_MONGODB_ADMINUSERNAME=admin
- ME_CONFIG_MONGODB_ADMINPASSWORD=password
- ME_CONFIG_MONGODB_SERVER=mongodb
Save it as mongo.yaml
Now the question is: how do we run the compose file?
First, go to the directory where we saved the compose file, using the terminal.
Now give, docker-compose -f mongo.yaml up
Here, the -f flag
specifies which compose file to use.
Here we can see that a new network has been created, along with two new containers.
- Previously we had to stop containers one by one with docker stop, but with docker compose we can use
docker-compose -f mongo.yaml down
which shuts down all the containers in the compose file, and also removes the network that docker compose created.
after shutdown,
NOTE: bringing the compose file down and up again recreates the containers from scratch, so we lose all the data that lived inside the previous containers.
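The difference is easy to see with the compose subcommands (a sketch; stop keeps the containers, down removes them):

```shell
docker-compose -f mongo.yaml stop    # stops the containers but keeps them and their data
docker-compose -f mongo.yaml start   # starts the same containers again, data intact
docker-compose -f mongo.yaml down    # removes the containers and the network; container data is lost
```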
Dockerfile - Building our own Docker Image
1 . FROM node // base image, gives us node preinstalled
2 . ENV // environment variables
3 . RUN // executes any Linux command inside the image; here a directory is created with mkdir
4 . COPY // copies files from the HOST machine into the image
5 . CMD // the entrypoint command
To create our own image we need two things: 1. a name, 2. a tag.
First, go to the folder that contains the Dockerfile and run the command -> docker build -t my-app:1.0 .
Dockerfile:
FROM node:22.3-alpine3.19
ENV MONGO_DB_USERNAME=admin \
MONGO_DB_PWD=password
RUN mkdir -p /home/app
COPY ./app /home/app
# set default dir so that next commands executes in /home/app dir
WORKDIR /home/app
# will execute npm install in /home/app because of WORKDIR
RUN npm install
# no need for /home/app/server.js because of WORKDIR
CMD ["node", "server.js"]
To check, give docker images,
We can run the image using,
docker run my-app:1.0
and see the logs with, docker logs <id>
To open a shell inside it: docker exec -it <id> /bin/sh (the alpine base image ships sh, not bash)
Private Docker Registry
1 . Create an account on aws.amazon.com
2 . Search for the ECR service
3 . Create a repository
4 . Name the repo my-app
5 . Now we are going to push the docker image that we created.
7 . Install the AWS CLI, using the CLI installer.
Verify the installation using, aws --version
8 . Now we need to run aws configure, but before that we need to create an IAM user.
In the IAM console, create a user, click on the user, and create an access key,
Now we can configure the AWS CLI using, aws configure
and enter the access key ID and secret access key there.
- Now we are going to create an ECR repository, which is a private registry.
- In Services, search for ECR, go to the private section, and click on Create repository.
Now give a name to your repo; I used my-app, and kept the other settings as they were.
10 . Now click on the created repo, my-app,
then View push commands,
- Copy the first command and paste it into PowerShell,
If you get a permissions error,
then go to the IAM user,
- then Add permissions,
Create inline policy,
- select JSON,
- and replace the existing code with this,
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ECRLogin",
"Effect": "Allow",
"Action": "ecr:GetAuthorizationToken",
"Resource": "*"
}
]
}
Now the error should be solved. Give the login command again,
and it will show you,
Login Succeeded
Image Naming in Docker Registry
Normally on Docker Hub we use an image's short name, but for AWS ECR we have to write the full registry name.
So we are going to use this,
docker tag my-app:1.0 339712908553.dkr.ecr.eu-north-1.amazonaws.com/my-app:1.0
By giving this we create a second reference to the same image under a different name, because of the tag,
Now we are going to use the 4th push command, replacing latest with 1.0.
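Together, the retag-and-push step looks like this (the registry URL is the one from the repo above):

```shell
# give the local image its full ECR name, then push it to the private repo
docker tag my-app:1.0 339712908553.dkr.ecr.eu-north-1.amazonaws.com/my-app:1.0
docker push 339712908553.dkr.ecr.eu-north-1.amazonaws.com/my-app:1.0
```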
[Some images are taken from the TechWorld with Nana YouTube channel]