From 41 Minutes to 8 Minutes: How I Made Our CI/CD Pipeline 5x Faster

Adnan Latif

Posted on • Originally published at Medium
Introduction

In software development, time is everything. CI/CD pipelines exist to speed delivery up, yet ironically the pipeline itself can become the bottleneck. That is exactly what happened when our Jenkins pipeline grew to an unmanageable 41 minutes per build.

Determined to eliminate this inefficiency, I analyzed, optimized, and transformed our pipeline from a whopping 41 minutes down to 8 minutes — a 5x improvement! In this article, I’ll walk you through the issues I encountered, the solutions I implemented, and the strategies you can use to supercharge your own pipeline.

The Problem

Our CI/CD pipeline handled the following tasks for both backend and frontend:

  • Code Checkout

  • Static Code Analysis: ESLint, SonarQube

  • Unit Testing

  • Docker Image Build and Push

  • Staging Deployment

  • Manual Approval and Production Deployment

At first glance, the pipeline appeared robust, but a closer look revealed several issues:

  1. Bloated Docker Build Context
    The build context — all the files sent to the Docker daemon during an image build — had grown to 1.5GB, and transferring it dominated the build time.

  2. Redundant Dependency Installation
    Every stage reinstalled the npm dependencies from scratch, adding minutes of unnecessary delay.

  3. Poor Docker Image Management
    Docker images were rebuilt and pushed to the registry, even when no changes had occurred.

  4. No Parallel Execution
    Independent tasks, such as static code analysis and testing, ran sequentially even though nothing forced them to.

  5. Manual Deployment Steps
    Since this involved updating AWS ECS task definitions manually, deployment of the backend was time-consuming and prone to human error.

The Solutions

Here’s how I transformed the pipeline for a 5x speedup.

Reduce the Size of the Docker Build Context

The Docker build context was unnecessarily large because the project directory was sent to Docker unfiltered. A .dockerignore file excludes files the image never needs, such as node_modules, build output, and logs.

Key File: .dockerignore

node_modules  
*.log  
dist  
coverage  
test-results

Impact:
Cut the build context from 1.5GB to ~10MB, dropping transfer time from roughly 30 minutes to under a minute.
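You can sanity-check the effect locally before a build by archiving the project with the same exclusions as the .dockerignore above. This is a rough sketch (it assumes GNU tar and mirrors the patterns shown; Docker's own filtering differs in edge cases):

```shell
# Approximate the build context: archive the project with the same
# exclusions as .dockerignore and count the bytes Docker would receive.
tar --exclude='node_modules' --exclude='*.log' --exclude='dist' \
    --exclude='coverage' --exclude='test-results' \
    -cf - . | wc -c
```

Run it from the directory containing the Dockerfile; if the number is still huge, something else belongs in .dockerignore.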

Dependency Caching

Every stage ran npm install from scratch. I replaced it with npm ci for reproducible, lockfile-driven installs and enabled a shared cache in Jenkins.

Command Update:

npm ci --cache ~/.npm

Impact:
Reduced dependency installation time from 3–4 minutes per stage down to <20 seconds.
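One way to share the cache across every stage is a pipeline-level environment variable, since npm honors NPM_CONFIG_CACHE. A minimal declarative sketch (the cache path is an example; any agent-local directory works):

```groovy
pipeline {
    agent any
    environment {
        // npm reads NPM_CONFIG_CACHE, so every stage reuses one cache directory
        NPM_CONFIG_CACHE = "${WORKSPACE}/.npm-cache"
    }
    stages {
        stage('Install') {
            steps {
                // npm ci installs exactly what package-lock.json pins
                sh 'npm ci'
            }
        }
    }
}
```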

Improve Docker Image Handling

Previously, the pipeline rebuilt and pushed Docker images regardless of whether anything had changed. I added logic to compare the digest of the local image with the one in the registry, pushing only when they differ.

Updated Logic:

// Digest of the tag as it currently exists in the registry (empty if absent).
// `docker manifest inspect` queries the registry without pulling; requires jq.
def remoteDigest = sh(returnStdout: true, script: "docker manifest inspect -v $DOCKER_IMAGE:$DOCKER_TAG | jq -r '.Descriptor.digest' || echo ''").trim()
// Digest recorded for the local image; empty for a freshly built, never-pushed image.
def localDigest = sh(returnStdout: true, script: "docker inspect --format '{{index .RepoDigests 0}}' $DOCKER_IMAGE:$DOCKER_TAG 2>/dev/null | cut -d'@' -f2 || echo ''").trim()

if (localDigest == '' || localDigest != remoteDigest) {
    sh "docker push $DOCKER_IMAGE:$DOCKER_TAG"
} else {
    echo "Image has not changed; skipping push."
}

Impact:
Avoided unnecessary pushes, saving 3–5 minutes per build.

Run Static Analysis and Testing in Parallel

I extended the Jenkins pipeline to make use of the parallel directive so that tasks like ESLint, SonarQube analysis, and unit tests could proceed simultaneously.

Updated Pipeline:

stage('Static Code Analysis') {
    parallel {
        stage('Frontend ESLint') {
            steps {
                sh 'npm run lint'
            }
        }
        stage('Backend SonarQube') {
            steps {
                withSonarQubeEnv() {
                    sh 'sonar-scanner'
                }
            }
        }
    }
}

Impact:
Reduced static analysis and testing time by 50%.
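By default the surviving parallel branches keep running when one fails. If you would rather abort the whole stage on the first failure, declarative pipelines support failFast; an optional tweak to the stage above:

```groovy
stage('Static Code Analysis') {
    failFast true  // abort the sibling branches as soon as one fails
    parallel {
        stage('Frontend ESLint') {
            steps {
                sh 'npm run lint'
            }
        }
        stage('Backend SonarQube') {
            steps {
                withSonarQubeEnv() {
                    sh 'sonar-scanner'
                }
            }
        }
    }
}
```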

Automatic Backend Deployment

Manual updates to AWS ECS task definitions were time-consuming and error-prone. I automated this step using the AWS CLI.

Automated Script:

def taskDefinitionJson = """
{
    "family": "$ECS_TASK_DEFINITION_NAME",
    "containerDefinitions": [
        {
            "name": "backend",
            "image": "$DOCKER_IMAGE:$DOCKER_TAG",
            "memory": 512,
            "cpu": 256,
            "essential": true
        }
    ]
}
"""
// writeFile sidesteps shell quoting issues with the multi-line JSON payload
writeFile file: 'task-definition.json', text: taskDefinitionJson
sh "aws ecs register-task-definition --cli-input-json file://task-definition.json --region $AWS_REGION"
sh "aws ecs update-service --cluster $ECS_CLUSTER_NAME --service $ECS_SERVICE_NAME --task-definition $ECS_TASK_DEFINITION_NAME --region $AWS_REGION"

Impact:
Streamlined deployments, shaving off 5 minutes.
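To catch failed rollouts instead of assuming success, the deploy step can also block until ECS reports the service stable. A sketch using the same environment variables as above:

```groovy
// Fails the build if the new task definition never reaches a steady state
sh "aws ecs wait services-stable --cluster $ECS_CLUSTER_NAME --service $ECS_SERVICE_NAME --region $AWS_REGION"
```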

Results

After these optimizations, the pipeline time came down from 41 minutes to just 8 minutes — a 5x improvement. Here’s a detailed comparison:

Comparison Table

  • Docker build context transfer: ~30 minutes → <1 minute

  • Dependency installation (per stage): 3–4 minutes → <20 seconds

  • Docker image push: 3–5 minutes → skipped when unchanged

  • Static analysis and testing: sequential → ~50% faster in parallel

  • Backend deployment: manual, error-prone → automated, ~5 minutes saved

  • Total pipeline time: 41 minutes → 8 minutes

Lessons Learned

  1. Logs Are Your Best Friend: Analyze logs to pinpoint bottlenecks.

  2. Caching Saves the Day: Effective use of caching can drastically cut build times.

  3. Run Tasks in Parallel: Use parallel execution for immediate time savings.

  4. Exclude Irrelevant Files: A .dockerignore file can significantly boost performance.

  5. Automate Repetitive Tasks: Automation eliminates errors and speeds up workflows.

Conclusion

Optimizing a CI/CD pipeline was an eye-opening experience. Targeting key bottlenecks and implementing strategic changes transformed a 41-minute chore into an 8-minute powerhouse. The result? Faster deployments, happier developers, and more time to focus on features.

If you’re struggling with a slow pipeline, start by identifying bottlenecks, leverage caching, parallelize tasks, and automate repetitive steps. Even small tweaks can lead to massive gains.

How much time have you saved by optimizing your CI/CD pipeline? Share your experiences and tips in the comments below!
