It would be really awesome if we could run each job in a Gitlab pipeline in its own containerized environment. Gitlab Runner enables exactly that: specify an `image` in the targeted jobs and use the Docker executor.
But... what about running these jobs with a Gitlab Runner that is itself running inside a container? And then building, tagging, and pushing Docker images as jobs too?
Oh, yes! We can.
Why?
Because isolating each job in its own container is sooooooo good. It gives us a clean workspace on every run, and we can choose the environment for each job simply by picking the Docker image we want.
And having Gitlab Runner run as a container is pretty cool, right? We can easily start and upgrade it.
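For instance, upgrading the Runner is just a matter of pulling a newer image and recreating the container (assuming the docker-compose setup shown later in this post, where the service is named `runner`):

```shell
# Pull the latest gitlab/gitlab-runner:alpine image
docker-compose pull runner

# Recreate the container from the new image; config is kept
# because /etc/gitlab-runner is a mapped volume
docker-compose up -d runner
```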
How about the building-and-pushing-inside-a-container part, you may ask? That part... I just want to try making every job use a container. That's all. 🤪 If you can think of any reason, please help me in the comments.
Steps
Here are the steps we will be doing today:
- Prepare How To Start the Runner
- Register the Runner
- Start It!
Prepare How To Start the Runner
I usually use docker-compose to run Docker containers declaratively:
```yaml
version: "3"
services:
  runner:
    image: gitlab/gitlab-runner:alpine
    restart: always
    volumes:
      - /Users/zarewoft/gitlab-runner/config:/etc/gitlab-runner
      - /var/run/docker.sock:/var/run/docker.sock
```
First, I map my `config` directory so the Runner can consume the `config.toml` that will be created later.
Secondly, `docker.sock` is mapped to accommodate the Docker executor. This makes the executor talk to the host's Docker daemon, so the containers it creates for jobs will be siblings of the Runner, not children.
Register the Runner
The Runner needs to be registered with the Gitlab server. Here, we will start a short-lived Gitlab Runner container to do just that. It will register with the Gitlab server and create a `config.toml` to use in the future.
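A minimal register command might look like this (the registration token is a placeholder; the full command we will end up with appears further below):

```shell
docker run --rm -v /Users/zarewoft/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner:alpine register -n \
    --url https://gitlab.com/ \
    --registration-token REGISTRATION_TOKEN \
    --executor docker \
    --description "My Docker Runner" \
    --docker-image "docker:stable"
```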
When registering, I mount my `config` directory to persistently store `config.toml`, in the same spot my earlier docker-compose file consumes it from. I also provide `--docker-image`, which tells the executor what the default image is.
Cache Image Layers
Imagine defining a job in a Gitlab pipeline. The job builds and pushes a Docker image containing our app, most likely using an image similar to `docker:stable`. In the process, it creates many Docker image layers that could be reused in future runs; sadly, they are destroyed along with the job's container.
No. We shall not sacrifice minutes in each pipeline we run!!!
It would be nice if we can persist those layers somewhere 🤔.
Oh, or we can just make them use the host's Docker daemon!
Here comes `--docker-volumes`. This option tells the executor to map the host's `docker.sock` into each container it creates for a job, so every job talks to the host's Docker daemon and shares its layer cache.
By adding this, our final register command looks like this:
```shell
docker run --rm -v /Users/zarewoft/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner:alpine register -n \
    --url https://gitlab.com/ \
    --registration-token REGISTRATION_TOKEN \
    --executor docker \
    --description "My Docker Runner" \
    --docker-image "docker:stable" \
    --docker-volumes /var/run/docker.sock:/var/run/docker.sock
```
Start It!
And then, just bring the docker-compose service up:

```shell
docker-compose up -d
```
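With the Runner registered and running, a pipeline job that builds and pushes an image might look like this sketch (the image name, registry, and stage are placeholders, not from my actual setup):

```yaml
# .gitlab-ci.yml (hypothetical example)
build-image:
  stage: build
  image: docker:stable
  script:
    # These docker commands run against the host's daemon
    # thanks to the mapped docker.sock
    - docker build -t registry.gitlab.com/mygroup/myapp:latest .
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" registry.gitlab.com
    - docker push registry.gitlab.com/mygroup/myapp:latest
```

Because `docker.sock` is shared, the layers built here stay in the host daemon's cache and can be reused by later pipeline runs.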
Hope you find this informative. If there are any mistakes, please let me know in the comments. Thanks! 😊