In this blog, I will explain the steps required to run Docker in Docker using two different methods.
In Jenkins, all the commands in the stages of your pipeline are executed on the agent that you specify. This agent can be a Docker container. So, if one of your commands, for example, in the Build stage, is a Docker command (for example, for building an image), then you have the case that you need to run a Docker command within a Docker container.
Furthermore, Jenkins itself can be run as a Docker container. If you use a Docker agent, you would start this Docker container from within the Jenkins Docker container. If you also have Docker commands in your Jenkins pipeline, then you would have three levels of nested “Dockers”.
With the socket-mounting approach described below, however, all these Dockers share one and the same Docker daemon, and all the difficulties that multiple daemons (in this case three) on the same system would otherwise cause are bypassed.
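To make the Jenkins scenario concrete, here is a sketch of how Jenkins itself might be started as a container that reuses the host's Docker daemon. The jenkins/jenkins:lts image name, port mappings, and volume name are illustrative assumptions, not taken from the original text:

```shell
# Sketch: run Jenkins in a container that shares the host's Docker daemon.
# Image name, ports, and volume name are assumptions for illustration.
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts
```

With the socket mounted this way, any docker command that a pipeline runs inside the Jenkins container is executed by the host's daemon, so no nested daemons are created.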
There are two ways to achieve Docker in Docker:
1) Run Docker by mounting docker.sock (the DooD method, short for Docker-outside-of-Docker)
2) Use the dind image tag (the DinD method, short for Docker-in-Docker)
/var/run/docker.sock is the default Unix socket of the Docker daemon. Unix sockets enable communication between processes on the same host, and the Docker daemon listens on this socket by default. If you are on the host where the Docker daemon is running, you can use /var/run/docker.sock to manage containers.
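As a quick check that this socket really is the daemon's API endpoint, you can talk to it directly with curl. This sketch assumes curl is installed and a Docker daemon is running on the host; /version is an endpoint of the Docker Engine API:

```shell
# Query the Docker daemon directly over its Unix socket.
# Requires a running Docker daemon and a curl build with --unix-socket support.
curl --unix-socket /var/run/docker.sock http://localhost/version
```

The response is the same JSON that backs `docker version`, which is exactly why mounting this socket into a container is enough to control the host's Docker.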
This is the recommended way to use Docker inside a Docker container. The two Docker instances are not independent of each other, but this approach bypasses many low-level technical problems.
With this approach, a container, with Docker installed, does not run its own Docker daemon but connects to the Docker daemon of the host system. That means, you will have a Docker CLI in the container, as well as on the host system, but they both connect to one and the same Docker daemon. At any time, there is only one Docker daemon running in your machine, the one running on the host system.
To achieve this, you can start a Docker container that has Docker installed. For example, you can use the official docker image, which ships with the Docker CLI preinstalled, and start it with the default Unix socket docker.sock mounted as a volume:
docker run -ti -v /var/run/docker.sock:/var/run/docker.sock docker
Then, inside the container, run a Docker command such as:
docker pull ubuntu
Observe the output: it is exactly the same as when you run these commands on the host system.
It looks like the Docker installation of the container that you just started, and that you maybe would expect to be fresh and untouched, already has some images cached and some containers running. This is because we wired up the Docker CLI in the container to talk to the Docker daemon that is already running on the host system.
This means, if you pull an image inside the container, this image will also be visible on the host system (and vice versa). And if you run a container inside the container, this container will actually be a “sibling” to all the containers running on the host machine (including the container in which you are running Docker).
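One way to see this "sibling" behaviour for yourself is the following sketch: start a helper container from inside the socket-mounted container, then list containers on the host. The container name sibling-test and the alpine image are illustrative assumptions:

```shell
# On the host: start a container with the host's Docker socket mounted.
docker run -ti -v /var/run/docker.sock:/var/run/docker.sock docker sh

# Inside that container: start another container.
docker run -d --name sibling-test alpine sleep 300

# Back on the host, in a second terminal: the new container shows up
# next to all other host containers, i.e. as a sibling, not a child.
docker ps --filter name=sibling-test
```

Because both CLIs talk to the single host daemon, there is no parent/child relationship between the containers at all.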
See the below image.
Unlike the socket approach, this method actually creates a child container inside a container. If you really want to, you can use "real" Docker in Docker, that is, nested Docker instances that are completely encapsulated from each other. You can do this with the dind (Docker in Docker) tag of the docker image, as follows:
docker run --privileged -d --name dind-test docker:dind
docker exec -it dind-test /bin/sh
Here I pulled the Ubuntu image, as can be seen in the image below.
Here I launched one more container inside the running container.
Here I can run Ubuntu-specific commands.
Here I updated the Ubuntu package index.
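Since the screenshots are not reproduced here, the commands behind those steps can be sketched as follows. These run inside the shell opened by `docker exec -it dind-test /bin/sh`; the exact output will differ on your machine:

```shell
# Inside the dind-test container's shell:
docker pull ubuntu                 # pull the Ubuntu image into the inner daemon
docker run -it ubuntu /bin/bash    # launch one more container inside the running container

# Inside that Ubuntu container, run Ubuntu-specific commands, e.g.:
apt-get update                     # update the Ubuntu package index
```

Note that images pulled here live in the inner daemon only; `docker images` on the host will not show them, which is the encapsulation the dind method provides.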
Both methods of running Docker in Docker have security implications: mounting docker.sock gives the container complete control over the host's Docker daemon, and the dind method requires starting the container with the --privileged flag.