Welcome to the first in a series of tutorials on getting started with Docker. Docker is a tool that allows developers and system administrators to create, install, and run container-based software. This is often referred to as containerization. Putting apps in containers has a number of advantages:
Containers are portable. You can build a container locally and run it in any Docker environment (other machines, servers, the cloud, etc.).
Containers are lightweight because they share the host kernel (the host operating system), yet they are still capable of running even the most demanding applications.
Containers can be stacked, and services can be scaled up or down on the fly.
When discussing containerization, virtual machines are often used as an analogy. Take a look at the diagram below to see the biggest difference:
The Docker engine always runs on top of the host operating system. Containers hold the binaries, libraries, and the application itself. Containers do not include a guest operating system, which is what keeps them lightweight.
This guide will teach you about Docker and how to get started with this popular container platform. Before we begin using Docker in practice, let us first define some of the most relevant concepts and terminology.
A Docker image contains all of the components needed to run an application as a container. This includes the following:
code, runtime, libraries, environment variables, and configuration files.
The image can then be deployed and executed as a container in any Docker environment.
A Docker container is a runtime instance of an image. From a single image, you can launch multiple containers (all running the same application) on different Docker platforms.
On the host machine, a container runs as an isolated process. Since the container runs without the need to boot up a guest operating system, it is lightweight and uses fewer resources (such as memory) to function.
First and foremost, ensure that Docker is installed on your machine. For the purposes of this guide, we'll assume that Docker Community Edition (CE) is already installed. This version is suitable for developers who want to get started with Docker and experiment with container-based applications, making it an excellent option for our use case. Docker Community Edition is available for all major operating systems, including macOS, Windows, and Linux. Detailed instructions for installing Docker CE on your system can be found at https://docs.docker.com/install/.
You can also create a free account at https://hub.docker.com and use it to sign in to the Docker Desktop application. Finally, ensure that the Docker Desktop application is running.
Once Docker is installed and running on your machine, we can begin by entering the following command into the terminal:
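The command the text goes on to describe is presumably the standard hello-world sanity check:

```shell
# Run the official hello-world test image; Docker pulls it
# from Docker Hub if it is not already present locally.
docker run hello-world
```

If the installation is working, this prints a short confirmation message from inside the container.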
Note: the following tutorial is performed on a Windows host PC.
If the output says the hello-world image has already been pulled (i.e. it already exists on your local machine) or the test completes successfully, Docker has been installed correctly, and you can be confident that you are ready to work on future projects.
Now that Docker is up and running, we can choose an image on which to base our first Docker container. To choose from a list of pre-existing Docker images, go to hub.docker.com:
Make sure you're logged in with your Docker Hub account before entering a search term that matches the name of the application for which you'd like to locate an existing Docker image.
On the image's information tab on Docker Hub, you can see a rundown of the image's versions (tags) as well as links to the related Dockerfiles.
A Dockerfile is a text document that contains all of the commands you would normally run manually to build a Docker image. Docker builds images automatically by reading the instructions from a Dockerfile.
Later, we'll go through the steps of creating a Dockerfile.
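To give a flavor of what a Dockerfile looks like, here is a minimal hypothetical example for a Node.js application (the base image, port, and file names are assumptions for illustration, not from the tutorial):

```dockerfile
# Start from an official Node.js base image
FROM node:18-alpine

# Set the working directory inside the image
WORKDIR /app

# Copy the dependency manifests and install dependencies first,
# so this layer is cached when only source code changes
COPY package*.json ./
RUN npm install

# Copy the application source code into the image
COPY . .

# Document the port the app listens on and define the start command
EXPOSE 3000
CMD ["node", "server.js"]
```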
Basically, docker pull <repository-name> will pull the desired image, provided the image exists on Docker Hub.
Use this when you want to pull a specific container image from a repository.
For example, when you want to pull the "nginx" image hosted on Docker Hub, you can use the command below.
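As a sketch, pulling the official nginx image from Docker Hub looks like this:

```shell
# Download the latest nginx image from Docker Hub
docker pull nginx

# Or pull a specific tagged version instead of "latest"
docker pull nginx:1.25
```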
The docker run command shown above starts a container from an image and performs the required operation. Commonly used options include:
-d runs the container in detached mode (in the background).
-t, when used with docker build or docker tag, assigns a tag (a human-readable version label) to an image; note that with docker run, -t instead allocates a pseudo-terminal.
-p publishes a container port to a port on the host (e.g. -p 8080:80 maps host port 8080 to container port 80).
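Putting those options together, a sketch of running nginx in the background (the host port 8080 is an arbitrary choice):

```shell
# Start an nginx container in the background (-d) and map
# host port 8080 to port 80 inside the container (-p)
docker run -d -p 8080:80 nginx
```

Once the container is running, the server is reachable from the host at http://localhost:8080.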
When you instead run the container in interactive (foreground) mode, the HTTP request logs are printed directly to the command line, so each incoming request is visible in your command prompt.
Tags are the best way to track the changes you have made to a built container image; they help you identify the latest version of the code that has been released through the repository.
After running the command, note the local port that was mapped:
The above is an example of running the command.
The above is the sample output visible in the browser.
docker push is the command that uploads a local repository or tagged image to the Docker Hub account you are logged in with. Every time you make a change, the updated container image becomes available through the hub, and anybody can pull and work with the repository you have released.
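A sketch of tagging and pushing an image (my-username and my-app are placeholders for your own Docker Hub username and repository name):

```shell
# Tag a local image with your Docker Hub username,
# repository name, and a version label
docker tag my-app my-username/my-app:1.0

# Log in to Docker Hub, then upload the tagged image
docker login
docker push my-username/my-app:1.0
```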
To run a command inside a running container, use docker exec.
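For example, a sketch of opening a shell inside a running container (the container ID is a placeholder; find the real one with docker ps):

```shell
# Open an interactive shell inside a running container
# (-i keeps stdin open, -t allocates a terminal)
docker exec -it <container-id> sh
```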
When you want to stop the container, type in:
docker stop [container-id]
The list above shows the set of containers that are up and running. To stop one of them, find its container ID in the list; here I chose the first container in the list and used its ID to stop it.
After running the command above, the desired container stops.
The output above shows the container list after executing the docker stop command.
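The stop workflow above can be sketched as follows (the container ID shown is a placeholder):

```shell
# List running containers and note the CONTAINER ID column
docker ps

# Stop the container by ID (a unique prefix of the ID also works)
docker stop 3f2a1b9c4d5e

# Verify that it no longer appears among running containers
docker ps
```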
With that covered, we will now move on to Docker services.
You've now learned how to use Docker images and containers to run applications in a container environment. However, in most real-world settings, a "service" is made up of multiple components. A web application, for example, requires a web server, which serves content to the browser. A database server is also needed, and the web application connects to the database to retrieve data.
Now, if you feel that accumulated images are taking up too much disk space on your system, you can use the command
docker image prune -a
to remove all images that are not used by any container, including images you pulled or created using docker build.
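A sketch of checking disk usage before cleaning up:

```shell
# Show how much space images, containers, and volumes use
docker system df

# Remove all images not used by any container
# (-a includes images with no associated container,
#  -f skips the confirmation prompt)
docker image prune -a -f
```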
You can orchestrate such "services" in Docker by combining multiple containers through Docker Compose. Using this method, we can describe one service that runs the MongoDB database and another service that runs the web application. Docker makes it very simple to describe and run such services; all we need to do is create a
docker-compose.yml file that declares the services.
But first, let's start from scratch with a real-world example: In the following steps, we will create a Node.js server program that exposes a REST API for handling Todo objects. The Node.js server connects to a MongoDB database, which is responsible for data persistence, by using the Mongoose library.
To run this app on Docker, we create a
docker-compose.yml file in which we specify two services: one for running the Node.js server and another for running the MongoDB database. We'll also learn how to link one Docker service to another since the Node.js server program has to connect to the MongoDB database.
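As a preview, a minimal docker-compose.yml for this setup might look roughly like the following (the service names, ports, and environment variable are assumptions for illustration; the actual file is built step by step later):

```yaml
version: "3"
services:
  # The Node.js REST API server, built from a Dockerfile in this directory
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      # The hostname "mongo" resolves to the database service below
      - MONGO_URL=mongodb://mongo:27017/todos
    depends_on:
      - mongo

  # The MongoDB database, using the official image from Docker Hub
  mongo:
    image: mongo
    ports:
      - "27017:27017"
```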
For orchestration at larger scale there is Docker Swarm, which will get its own series, and we will also be doing a series on Kubernetes, so stay tuned, folks.