Docker is a software development tool and a virtualisation technology that makes it easy to develop, deploy, and manage applications by using containers. Container refers to a lightweight, stand-alone, executable package of a piece of software that contains all the libraries, configuration files, dependencies, and other necessary parts to operate the application.
In other words, applications run the same way regardless of where they are and what machine they are running on, because the container provides a consistent environment throughout the application's software development lifecycle. Since containers are isolated, they provide a degree of security, allowing multiple containers to run simultaneously on a given host. Containers are also lightweight because they do not carry the extra load of a hypervisor, the software layer (such as VMware or VirtualBox) that runs guest operating systems; instead, containers run directly within the host machine's kernel.
Learning new technologies: To get started with a new tool without spending time on installation and configuration, Docker offers an isolated and disposable environment. Many projects maintain Docker images with their applications already installed and configured.
Basic use cases: Pulling images from Docker Hub is also a good solution if your application is basic or standard enough to work with a default Docker image. For example, when developing and hosting a website with the MERN stack, official node and mongo images are already available on Docker Hub and are well supported. If the default configuration in these images is acceptable for your needs, pulling them can save a lot of time that would otherwise be spent setting up your environment and installing the necessary tools.
App isolation: If you want to run multiple applications on one server, keeping the components of each application in separate containers will prevent problems with dependency management.
Developer teams: "It works on my machine!" As developers, we know that one of the trickiest problems in software development is dealing with environment disparity across different machines and platforms. Docker allows you to run containers locally, eliminating disparity between your development and production environments, and everything in between. There is no need to install software packages locally: everything you need for your development environment can simply run on the Docker engine as containers. Regardless of the language or the tool, you can easily containerise your environment locally.
Limited system resources: Instances of containerised apps use far less memory than virtual machines, start up and shut down more quickly, and can be packed far more densely on their host hardware. All of this amounts to less spending on IT.
The cost savings will vary depending on what apps are in play and how resource-intensive they may be, but containers invariably work out as more efficient than VMs. It’s also possible to save on costs of software licenses, because you need many fewer operating system instances to run the same workloads.
Docker containers and virtual machines are both ways of deploying applications inside environments that are isolated from the underlying hardware. The chief difference is the level of isolation.
With a virtual machine, everything running inside the VM is independent of the host operating system. The virtual machine platform starts a process (called the virtual machine monitor, or VMM) to manage the virtualisation for a specific VM, and the host system allocates some of its hardware resources to the VM. What is fundamentally different with a virtual machine, however, is that at start time it boots a new, dedicated kernel for this VM environment and starts an (often rather large) set of operating system processes. This makes a VM much larger than a typical container that contains only the application.
In contrast, with a container runtime like Docker, your application is sandboxed by the isolation features a container provides, but still shares the same kernel as other containers on the same host. As a result, processes running inside containers are visible from the host system (given enough privileges to list all processes). Having multiple containers share the same kernel allows the end user to bin-pack lots and lots of containers on the same machine with near-instant start time. And because containers do not need to embed a full OS, they are very lightweight, commonly around 5-100 MB.
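The shared-kernel point can be observed directly from the host. A quick sketch (assumes Docker is installed and the daemon is running; the container name kernel-demo is arbitrary):

```shell
# Start a throwaway container whose only process sleeps for 5 minutes
docker run -d --name kernel-demo alpine sleep 300

# Because containers share the host kernel, the container's sleep process
# shows up in the host's process table (may require sudo on some setups)
ps aux | grep "[s]leep 300"

# Clean up
docker rm -f kernel-demo
```

By contrast, a process inside a VM would never appear in the host's process table, because it runs on a separate guest kernel.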
A container is a runnable instance of an image. You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
By default, a container is relatively well isolated from other containers and its host machine. You can control how isolated a container’s network, storage, or other underlying subsystems are from other containers or from the host machine.
A container is defined by its image as well as any configuration options you provide to it when you create or start it. When a container is removed, any changes to its state that are not stored in persistent storage disappear.
An image is a read-only template with instructions for creating a Docker container. Often, an image is based on another image, with some additional customisation. For example, you may build an image which is based on the ubuntu image, but installs the Apache web server and your application on top of it, as well as the configuration details needed to make your application run.
You might create your own images or you might only use those created by others and published in a registry. To build your own image, you create a Dockerfile with a simple syntax for defining the steps needed to create the image and run it. Each instruction in a Dockerfile creates a layer in the image. When you change the Dockerfile and rebuild the image, only those layers which have changed are rebuilt. This is part of what makes images so lightweight, small, and fast, when compared to other virtualisation technologies.
Docker Hub is a cloud-based repository service provided by Docker in which users create, test, store and distribute container images. Through Docker Hub, a user can access public, open-source image repositories, as well as use space to create their own private repositories, automated build functions, webhooks and workgroups.
Assume we have a simple Node.js application with a server.js file, which listens on port 3040 and prints 'Hello World!' when you hit 'localhost:3040/'
File Structure as follows:
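A typical layout for this example would be along these lines (assumed; your project may contain more files):

```
myApp/
├── Dockerfile
├── package.json
└── server.js
```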
# Initialise a new build stage and set the base image for subsequent instructions
FROM node:14-alpine
# Define the working directory for our application; it will be the default directory for the following steps
WORKDIR /app
# Copy package.json into the working directory
COPY package.json .
# Install dependencies via npm
RUN npm install
# Copy the entire source tree from local into the working directory
COPY . .
# Expose the application on port 3040
EXPOSE 3040
# Command to run our application
CMD ["node", "server.js"]
To play with the above Dockerfile:
Build image =>
docker build -t myapp:V1 .
Create container and map it to port 3000 =>
docker run -p 3000:3040 --name myContainer myapp:V1
List running containers =>
docker ps
Stop container =>
docker stop myContainer
To start container again =>
docker start myContainer
Note: We might wonder about the difference between RUN and CMD. The RUN instruction is executed while we build an image, whereas the CMD instruction specifies the command to run when a container is started from the image.
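To make the distinction concrete, compare the two lines from the Dockerfile above:

```dockerfile
# RUN executes at build time; its result is baked into an image layer
RUN npm install

# CMD only records the default command; it executes when a container starts
CMD ["node", "server.js"]
```

A useful rule of thumb: RUN changes the image, CMD changes what a container does with it.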
The following are some of the Docker CLI commands that we use on a daily basis.
Let ImageId be myapp:V1 and ContainerName be myContainer.
docker build [OPTIONS] PATH | URL | -
docker build -t myapp:V1 .
This command helps us to build an image of an application using the written Dockerfile. Note that the repository part of the name must be lowercase, hence myapp rather than myApp.
We can name the image ourselves; to do that, use
docker build -t name:tag .
docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
docker tag myapp:V1 mynewapp:V1
This allows us to rename/tag an already created image without rebuilding it.
docker images
Lists all images that are available locally.
docker image inspect Name|ImageId
docker image inspect myapp:V1
This provides detailed information of an image in JSON format by default.
docker rmi [OPTIONS] IMAGE [IMAGE...]
docker rmi myapp:V1
This command allows us to remove one or more images.
We can remove an image only if it is not used by any container, including stopped containers. In order to remove the image, we need to remove those containers first.
docker system prune [OPTIONS]
Remove all unused containers, networks, images (both dangling and unreferenced), and optionally, volumes.
docker run [OPTIONS] IMAGE [COMMAND] [ARG...]
docker run -p 3000:3040 -it --rm --name myContainer myapp:V1
This command first creates a writeable container layer over the specified image, and then starts it using the specified command. By default this command runs in attached mode. Every time you run this command it will create a new container with the given image.
By default this command searches for the image locally; if it is not found, Docker pulls it from the configured registry (Docker Hub by default).
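For example, running an image you have never built or pulled will fetch it automatically (assumes Docker is installed and you have network access to Docker Hub):

```shell
# hello-world is not present locally, so Docker pulls it from Docker Hub first,
# runs it, and (thanks to --rm) removes the container afterwards
docker run --rm hello-world
```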
In order to name our container we can use
docker run --name <container-name> Name|ImageId.
In order to run an application in interactive mode (reading input from the console, etc.), use the -it option.
The --rm option tells Docker to remove the container once it is stopped.
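Putting these options together, a common pattern for a disposable interactive session is (the image tag is just an example):

```shell
# Start an interactive shell in an ubuntu container; the container
# is removed automatically when you exit the shell
docker run -it --rm ubuntu:20.04 bash
```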
docker stop [OPTIONS] CONTAINER [CONTAINER...]
docker stop myContainer
This command helps us to stop one or more running containers.
docker start [OPTIONS] CONTAINER [CONTAINER...]
docker start myContainer
This command helps us to start one or more stopped containers. By default this runs in detached mode.
docker restart [OPTIONS] CONTAINER [CONTAINER...]
docker restart myContainer
Restart one or more containers.
docker rename CONTAINER NEW_NAME
docker rename myContainer myNewContainer
Allows us to rename an already created container.
docker ps [OPTIONS]
This lists all running containers by default.
To list all containers, including stopped ones, use
docker ps -a
docker rm [OPTIONS] CONTAINER [CONTAINER...]
docker rm myContainer
This command allows us to remove one or more containers.
docker logs [OPTIONS] CONTAINER
docker logs myContainer
Fetch the logs of a container.
docker cp [OPTIONS] CONTAINER:SRC_PATH DEST_PATH|-
Copy files/folders between a container and the local filesystem. This can be used in scenarios like when we want to pull log file from container to local file system for debugging.
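For instance, to pull a log file from a container to the local filesystem for debugging (the path /app/app.log is a hypothetical example; use whatever path your application actually writes to):

```shell
# Copy a log file from the running container into the current directory
docker cp myContainer:/app/app.log ./app.log
```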
docker login [OPTIONS] [SERVER]
docker login localhost:8080
Log in to a Docker registry.
docker logout [SERVER]
docker logout localhost:8080
Log out from a Docker registry.
docker push [OPTIONS] NAME[:TAG]
docker push myapp:V1
Use docker push to share your images with the Docker Hub registry or with a self-hosted one.
docker pull [OPTIONS] NAME[:TAG|@DIGEST]
docker pull ubuntu:20.04
Docker Hub contains many pre-built images that you can pull and try without needing to define and configure your own.
This command allows us to download a particular image, or a set of images (i.e., a repository).
Hope you got a good understanding of Docker basics. To learn how to manage data in Docker, check out Part 2.
Thanks for reading!!