Benoit COUETIL 💫 for Zenika

🦊 GitLab CI Optimization: 15+ Tips for Faster Pipelines


Are you looking to boost your GitLab pipelines' speed and efficiency? If saving time and increasing productivity is on your agenda, then you're in the right place. This article offers 15+ practical tips to optimize your GitLab CI/CD pipelines and accelerate your development process.

We'll delve into various aspects of pipeline optimization, focusing on improvements that apply broadly. While you can fine-tune your pipelines further based on specific tools like NPM, PNPM, Yarn, Maven, or Gradle, those details are beyond the scope of this article.

Assuming you've already covered the fundamentals and best practices as discussed in our previous article, GitLab CI: 10+ best practices to avoid widespread anti-patterns, let's explore how you can further optimize your GitLab CI pipelines.

CI YAML optimizations

1. Parallelize large jobs

When you have extensive test suites or tasks to execute, parallelization can significantly reduce pipeline duration. GitLab allows you to parallelize jobs using the parallel keyword. You can split your work into multiple jobs that run concurrently, improving overall pipeline speed.

For example, you can use predefined variables to split test suites like this:

```yaml
test:
  parallel: 3
  script:
    - bundle
    - bundle exec rspec_booster --job $CI_NODE_INDEX/$CI_NODE_TOTAL
```

You will find another example with Yarn in this article.
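Beyond splitting a single test suite, the `parallel:matrix` variant spawns one job per variable combination, which fits mono-repos well. A minimal sketch, assuming hypothetical module names and test script:

```yaml
test:
  parallel:
    matrix:
      - MODULE: [back, front, bff]
  script:
    - ./run-tests.sh $MODULE  # hypothetical script, one job per MODULE value
```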

2. Use small Linux distributions

Optimize your pipeline performance by selecting small Linux distributions for your Docker images. Alpine Linux, for instance, is a lightweight option that results in smaller image sizes compared to standard distributions like Ubuntu or Debian.

Here are some Linux distribution types suitable for CI:

  • Standard Docker Image: These are based on full distributions like Ubuntu or Debian, including various pre-installed packages. They tend to be larger.

  • Alpine Docker Image: Alpine Linux is lightweight and designed to be small and secure. Images based on Alpine are typically much smaller and faster.

  • Slim Docker Image: These are stripped-down variants of standard distributions, typically Debian, with non-essential packages removed to stay small and efficient.

There are also distroless Docker images, which sacrifice dynamic possibilities for size and security. But by the very nature of CI jobs, their fit is limited: CI usually needs dynamic tools, like a package manager.

For CI performance, choose Alpine Docker images when available, or Slim Docker images otherwise.
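In practice the choice is often just a tag suffix; most official images publish these variants (sizes below are approximate orders of magnitude, not exact figures):

```yaml
# node:20         -> ~1 GB   (Debian, full)
# node:20-slim    -> ~250 MB (Debian, stripped down)
# node:20-alpine  -> ~130 MB (Alpine)
test:
  image: node:20-alpine
  script:
    - node --version
```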

3. Configure caching, split cache, and set policy

GitLab's optimal caching strategy, between local and shared cache, depends on your runner architecture, among the ones described in GitLab Runners topologies: pros and cons. Overall, when there are multiple runner servers, we should configure the runner shared cache when aiming for pipeline speed.

One thing to consider when defining a cache is splitting it into multiple caches when suitable, taking advantage of the fact that the cache keyword can take a list of caches. You will find a detailed example with Yarn in this article.

Something complementary, especially with multiple caches and/or gigabyte-sized caches, is the cache policy. By default, the cache is downloaded and unzipped at the start of the job, then zipped and uploaded at the end (or stored if the cache is local). This is the pull-push policy. But you can set the policy to push in jobs that only produce the cache, and to pull in jobs that only consume it, saving precious time on unneeded cache phases.
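A minimal sketch of this producer/consumer split, assuming a Node project that caches node_modules/:

```yaml
install:
  stage: build
  script:
    - npm ci
  cache:
    key:
      files: [package-lock.json]
    paths: [node_modules/]
    policy: push  # producer: upload only, skip the download phase

test:
  stage: test
  script:
    - npm test
  cache:
    key:
      files: [package-lock.json]
    paths: [node_modules/]
    policy: pull  # consumer: download only, skip the upload phase
```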


4. Download only needed artifacts

By default, GitLab downloads all artifacts from every job and every previous stage at the start of a job. This can create a significant overhead, especially in complex pipelines.

Imagine a mono-repo pipeline with 5 modules and 4 jobs for each module in different stages: package, test, image build, deploy. Let's assume that the modules are independent, and that the package jobs for each module are the only ones producing artifacts, consumed by the test and image build jobs.

We then have, for each module, 2 jobs that each need one artifact download, that is 10 strictly useful downloads. By default, there would be... 75 downloads! (5 artifacts produced × 5 jobs per stage × 3 later stages).

So we'd better restrict artifact downloads with the dependencies keyword:

```yaml
package-a:
  stage: package
  artifacts:
    paths: [a/target/]

test-a:
  stage: test
  dependencies: [package-a]

build-image-a:
  stage: image-build
  dependencies: [package-a]

deploy-a:
  stage: deploy
  dependencies: []
```

5. Use tuned rules

By default, GitLab runs jobs for every Git event, such as a commit being pushed or a new branch being created. While this is suitable for simple use cases, you may want to avoid running unnecessary jobs on certain pipelines.

You can optimize this by defining rules for when a pipeline should be triggered using the workflow:rules keyword. To decide when a specific job within a pipeline should be triggered, you can use the rules keyword. Reducing the number of unnecessary jobs in your pipelines can alleviate the load on your runners.

For instance, you can avoid running jobs in the following situations:

  • For unmodified modules in mono-repos, consider using merge request pipelines with rules:changes.
  • Skip tests on pipelines where the Docker image has already been pushed and tested.
  • Avoid running security tests on feature branches.
  • Selectively run unit testing or end-to-end testing based on your workflow requirements.
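For the mono-repo case above, a sketch combining merge request pipelines with rules:changes (the a/ path and script are hypothetical):

```yaml
test-a:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
      changes:
        - a/**/*  # run only when module a is modified
  script:
    - ./test-module.sh a  # hypothetical test script
```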

6. Define stages wisely and adjust with needs

Imagine we have a pipeline deploying 3 modules in separate jobs in a single deploy stage, plus one deployment preparation task. We would be tempted to add a pre-deploy stage for that single job, like below:

Stages without optimizations

This is inefficient, because the pre-deploy job is alone in its stage, does not need the Docker images to be built, and can be time-consuming, being related to infrastructure.

An easy optimization is to merge some stages:

Stage removed without needs

That would be a recommended and simple way.

Another one would be to take advantage of the needs keyword. It allows us to bypass the stage ordering constraint between jobs. We can then put the pre-deploy job in the same stage as the other deployment jobs, defining chain constraints between them:

  • pre-deploy needs back-test, front-test, bff-test (this way it will run concurrently to Docker build jobs)
  • back-deploy, front-deploy, bff-deploy needs pre-deploy
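These constraints can be sketched like this, with job names taken from the example:

```yaml
pre-deploy:
  stage: deploy
  needs: [back-test, front-test, bff-test]  # runs concurrently with image builds

back-deploy:
  stage: deploy
  needs: [pre-deploy]

front-deploy:
  stage: deploy
  needs: [pre-deploy]

bff-deploy:
  stage: deploy
  needs: [pre-deploy]
```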

This will produce this pipeline, with represented dependencies :

Stage removed with needs

This way we obtain the exact same workflow as the stages merge explained earlier.

We have to be careful with needs: the real workflow can become hard to read at runtime.

7. Configure interruptible pipelines

By default, GitLab allows multiple pipelines to run concurrently for the same branch, even if the associated commit(s) have become obsolete. This can result in unnecessary resource consumption on your runners.

You can reduce this strain on your runners by automatically stopping jobs for obsolete pipelines. GitLab provides the interruptible keyword, which can be set to true to achieve this. When this attribute is enabled, only the most relevant pipelines continue to execute, reducing the load on your runners.

You can set it by default for every job:

```yaml
default:
  interruptible: true
```

Worth noting: it stops the obsolete pipeline only if both pipelines were launched automatically. If one of them was launched manually, GitLab will not interfere.

8. Automatically rerun failed jobs

Occasionally, jobs may fail due to rare real-time issues. These failures can block the entire pipeline and require manual intervention to rerun the failed job.

To prevent such inconveniences, you can configure automatic job retries using the retry keyword in your CI/CD configuration. By specifying the number of times a job should automatically retry in case of failure, you ensure that your pipeline can recover from transient issues without manual intervention.

Here's an example of how to configure job retries:

```yaml
test:
  script:
    - ./run-tests.sh  # hypothetical command subject to transient failures
  retry: 2
```


Project configuration optimizations

In addition to optimizing CI/CD YAML configurations and runner settings, project-specific configurations can also play a crucial role in streamlining GitLab pipelines. Here, we'll explore some strategies to enhance your project settings, which can further improve pipeline performance.

Now, let's delve into specific optimizations you can make at the project (or group) level.

9. Disable separate cache for protected branches

By default, GitLab separates caches between protected branches and non-protected branches. While this approach offers enhanced security, it may not always be relevant for performance optimization.

In many cases, there's no need to maintain separate caches, and it's more efficient to share the cache across all branches. You can deactivate this option and use the same cache for all branches. This can significantly improve caching and reduce redundancy in your pipeline.

10. Avoid Docker images rebuild with fast-forward merge

When it comes to handling merge requests, GitLab provides three methods: merge commit, merge commit with semi-linear history, and fast-forward merge. The choice of merge method can have an impact on your pipeline performance.

Fast-forward merge, in particular, is an efficient way to incorporate changes from a source branch into a target branch when there are no conflicts. It results in a linear history, and the commit at the tip of a feature branch becomes the commit at the tip of the main branch, since there are no merge commits.

We can take advantage of this to avoid rebuilding Docker images already built for this commit.
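A minimal sketch of such a guard, assuming images are tagged with $CI_COMMIT_SHA (the exact commands depend on your registry and build tool):

```yaml
build-image:
  script:
    - |
      # skip the build if an image already exists for this exact commit
      if docker manifest inspect $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA > /dev/null 2>&1; then
        echo "Image already built for $CI_COMMIT_SHA, skipping build"
      else
        docker build --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA .
        docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
      fi
```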

11. Configure push rules to avoid pipelines for misconfigured branches

Paraphrasing the famous quote by Jeff Atwood: The best pipeline is no unnecessary pipeline at all. Configuring push rules in GitLab can serve as gate-keeping mechanisms to prevent pipelines from running on branches with certain misconfigurations. The goal here is to free your runners from unnecessary work by avoiding pipelines on branches that are not properly set up.

You can implement various rules to restrict pipeline creation, such as:

  • Reject unverified users, inconsistent user names, and non-GitLab users.
  • Define regular expressions for commit messages, branch names, and commit author email.
  • Exclude specific file extensions from triggering pipelines.

These rules help ensure that your runners are used efficiently and that pipelines are only created when they are truly needed.

Runner configuration optimizations

Optimizing your GitLab CI/CD pipeline isn't limited to CI/CD configuration files and project-level settings. Runner configuration plays a vital role in ensuring efficient pipeline execution. Here, we'll explore strategies to fine-tune your GitLab runners for improved performance.

Some of these are accessible through GitLab CI YAML configuration, others need administrator access to runners.

12. Cache Docker builds

Building and pushing Docker images can significantly contribute to your pipeline's total duration. To optimize this aspect, you can consider various strategies, but first, make sure you're familiar with GitLab's documentation on using Docker build.

One common approach for building Docker images is to use a Docker-in-docker CI service. While this method is prevalent, it has some downsides: it takes time to spin up, requires privileged mode on the runner, and doesn't offer image layer caching between jobs.

A recommended and efficient alternative is to utilize Docker layer caching, which significantly improves build times. The essential principle here is to pull a similar image before starting the build process. This way, Docker layers are cached, and subsequent jobs can reuse them, saving valuable time. Here's an example configuration:

```yaml
build:
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest
```

This technique saves time, but it adds layer downloads for each job. This is still far from ideal.

A more efficient way is to use Docker alternatives, such as Kaniko, Buildah, or img.

This is faster, because they do not need to spin up a service (and no privileged mode is needed). But they still need to download layers to become efficient.

The fastest solution is to bind-mount the Docker socket of the underlying server (where the job is running). No service is needed, and the cache is already there. However, the more dynamically provisioned your servers are, the less likely you are to hit this precious cache. So this is perfect for Docker runners, but less so for AWS and Kubernetes, described in GitLab Runners topologies: pros and cons.

This solution is controversial because of the security hole it opens toward the underlying VM. Designing a runner architecture with this in mind is crucial. But it is still our preferred choice for starting projects.
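For reference, the bind-mount is configured at runner level; a sketch for a Docker executor, using the usual default paths:

```toml
# /etc/gitlab-runner/config.toml
[[runners]]
  executor = "docker"
  [runners.docker]
    # expose the host Docker daemon to jobs; a deliberate security trade-off
    volumes = ["/var/run/docker.sock:/var/run/docker.sock"]
```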

13. Cache jobs images

In GitLab, Docker images for jobs are often downloaded by default, which can consume both time and network bandwidth. To enhance performance, consider configuring your jobs to use the "if-not-present" Docker image pull policy at runner level. This setting instructs the runner to download the image only if it's not already present on the runner. It's especially useful if you have control over your runner.

You can also set this policy for jobs using the image:pull_policy keyword in your CI/CD configuration. Make sure your GitLab runner admin hasn't disabled this feature. By using this policy, you can save time and prevent unnecessary image downloads during pipeline execution.
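When the runner allows it, the job-level form looks like this (the image name is just an example):

```yaml
test:
  image:
    name: node:20.1.0-alpine  # pin an exact tag when using this policy
    pull_policy: if-not-present
```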

Do not use latest image tags with this configuration, or you will get unexpected differences between runners. In fact, do not use latest at all for public images, with or without this configuration, or you will face unexpected errors when breaking changes occur.

To make this configuration work efficiently on an autoscaling Kubernetes cluster, you will have to add Kubernetes tools that improve container image availability and pull speed through caching.

14. Optimize caches and artifacts compression

Cache and artifacts compression plays a crucial role in the overall performance of your GitLab pipelines. GitLab offers runner feature flags that let you choose the FastZip compression tool, which is more efficient than the default one. Additionally, you can fine-tune the compression level to further improve performance:

```yaml
variables:
  FF_USE_FASTZIP: "true"
  # These can be specified per job or per pipeline (slowest, slow, default, fast, and fastest)
  ARTIFACT_COMPRESSION_LEVEL: "fastest"
  CACHE_COMPRESSION_LEVEL: "fastest"
```

You can safely use these variables in every project. The fastest compression level can be the default; it has proven to be the best optimization for pipeline speed.

15. Size runners correctly and tune your maximum parallelized jobs

Optimizing your GitLab CI/CD pipeline performance involves ensuring that your runners are correctly sized and that the maximum number of parallelized jobs is properly tuned. This is a crucial aspect of managing your GitLab infrastructure efficiently.

If you have the choice, read our article GitLab Runners topologies: pros and cons and choose a topology according to your constraints.

When you're determining the appropriate runner setup for your project, consider both the number of runners you need and their specifications. Conducting local tests while monitoring CPU, RAM, and disk I/O usage can help you make informed decisions.

The maximum number of parallelized jobs, defined with the concurrent runner parameter, should align with the number of CPU cores available on your runner. For instance, if your pipeline jobs typically use two CPU cores, you might initially set concurrent to 8 for a runner equipped with 16 CPU cores, or slightly more, since jobs rarely saturate their cores the whole time. However, it's important to experiment with different configurations and fine-tune this value to best match your specific project's workload and infrastructure.
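In runner configuration terms, this is the global concurrent setting; limit does the same per registered runner entry (the value 8 follows the 16-core example above):

```toml
# /etc/gitlab-runner/config.toml
concurrent = 8  # at most 8 jobs run at once on this runner host

[[runners]]
  limit = 8  # optional per-runner cap
```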

By accurately sizing your runners and adjusting the parallel job limits, you can significantly boost the performance and efficiency of your GitLab CI/CD pipelines.


In your quest for faster GitLab CI/CD pipelines, these performance optimizations can make a substantial difference. Saving valuable minutes on pipeline execution not only enhances your productivity but also reduces waiting times and accelerates your software development process.

While these optimizations can significantly enhance pipeline speed, it's essential to strike a balance between performance improvements and pipeline readability and maintainability. Complexity should be kept to a minimum to ensure that your CI/CD process remains manageable and transparent.

If you have any thoughts or suggestions, please feel free to share them in the comments section πŸ€“


Illustrations generated locally by Automatic1111 using RevAnimated model with ManyPipesAI LoRA


This article was enhanced with the assistance of an AI language model to ensure clarity and accuracy in the content, as English is not my native language.
