Introduction:
GitLab is a popular tool for managing the software development lifecycle, including continuous integration and deployment (CI/CD) pipelines. A common scenario is using GitLab CI/CD to deploy applications to AWS EC2 instances. In this guide, we'll walk through deploying an application to an EC2 instance over SSH from a GitLab CI/CD pipeline.
We’ll start by creating the necessary infrastructure in AWS using Terraform. Then, we’ll go over how to configure a GitLab CI/CD pipeline to deploy an application to the EC2 instance via SSH.
Having your own preinstalled and preconfigured EC2 instance gives you more control over the resources used for executing your jobs, as well as the ability to install and configure the necessary software and dependencies. In this project, we assign an IAM role with the necessary permissions to a preconfigured EC2 instance. Attaching an IAM role is preferable to storing AWS access keys on the instance for accessing resources such as ECR; see AWS's best-practice guidance on using IAM roles instead of access keys: https://docs.aws.amazon.com/accounts/latest/reference/credentials-access-keys-best-practices.html
We deploy a simple web application with a predefined default web page based on an Nginx Docker container. Application deployment is done with a blue-green strategy.
About the project:
In this project, we have one folder with the Terraform configuration, environment_aws_EC2, for creating and configuring the EC2 instance. In this configuration, we use the default VPC with default subnets. The infrastructure consists of an SSH key pair, an EC2 instance (or instances, depending on the settings in variables.tf), an Elastic Container Registry repository, a network security group, and an IAM role with an assigned policy. We can easily define in variables.tf how many instances, and of which type, are needed for deployment. Deployment of the application is done with a GitLab CI/CD pipeline.
The project structure:
├── Dockerfile
├── environment_aws_EC2
│ ├── backend.tf
│ ├── main.tf
│ ├── outputs.tf
│ ├── scripts
│ │ ├── docker_aws_install_rhel.sh
│ │ └── docker_aws_install_ubuntu.sh
│ └── variables.tf
├── images
├── index.html
├── LICENSE
└── README.md
Prerequisites:
To follow along with this tutorial, you will need the following:
A GitLab account with an active project.
An AWS account with permissions to create resources and an S3 bucket for storing the Terraform backend.
The Terraform CLI installed on your local machine.
Deployment:
Step 1: Create an SSH Key Pair
The EC2 instances require an SSH key pair for login.
You can create a key pair using the ssh-keygen command.
ssh-keygen -t rsa -b 4096 -f gitlab_runner
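The Terraform configuration needs the public half of this key pair to create the aws_key_pair resource. The exact variable and file names live in environment_aws_EC2/variables.tf, so the fragment below is illustrative only:

```hcl
# Illustrative only — the resource and file names in the real config may differ
resource "aws_key_pair" "gitlab_runner" {
  key_name   = "gitlab_runner"
  public_key = file("gitlab_runner.pub")
}
```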
Step 2. Clone the repository and define the S3 bucket.
Clone the repository https://gitlab.com/Andr1500/CICD_BlueGreen_EC2.git, and go to the repository folder.
Step 3. Apply infrastructure configuration
Go to the *environment_aws_EC2* folder and build the AWS environment: set your AWS credentials, define the S3 bucket in backend.tf, then run init, plan, and apply. The public IP address of the instance will appear in the Terraform output:
terraform init
terraform plan
terraform apply
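Before terraform init can run, backend.tf must point at your own S3 bucket. A typical fragment looks like this; the bucket, key, and region values are placeholders to replace with your own:

```hcl
# Placeholder values — substitute your own bucket, key, and region
terraform {
  backend "s3" {
    bucket = "your-terraform-state-bucket"
    key    = "cicd_bluegreen_ec2/terraform.tfstate"
    region = "eu-central-1"
  }
}
```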
Step 4. Push the repository to your GitLab account
Create a new project in GitLab. In your local Git repository, add the GitLab repository as a remote. Push your local repository to GitLab.
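The steps above can be sketched as follows; the remote URL and branch name are placeholders for your own GitLab project:

```shell
# Placeholder URL — use your own GitLab project path
git remote add gitlab https://gitlab.com/<your-namespace>/<your-project>.git
git push -u gitlab main
```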
Step 5. Set up the necessary variables for the CI/CD pipeline
Add the necessary variables under GitLab Settings -> CI/CD -> Variables. The pipeline expects EC2_HOST, EC2_USER, SSH_PRIVATE_KEY (the private key created in Step 1), AWS_ACCOUNT, and AWS_DEFAULT_REGION.
Step 6. Run the CI/CD pipeline.
CI/CD pipeline:
Ensure you have SSH access to the created EC2 instance and that everything (Docker, Trivy, the AWS CLI) works correctly on it. Then go to CI/CD -> Pipelines -> Run pipeline and run the pipeline.
A working set of CI/CD workflows is provided in .gitlab-ci.yml. The GitLab stages run on GitLab-hosted runners, which connect to the EC2 instance over SSH and perform all the work on the instance.
```yaml
image: alpine

variables:
  IMAGE_NAME: "nginx"
  IMAGE_DEFAULT_TAG: "stable-alpine"
  CI_EXECUTION_DIR: "/home/$EC2_USER/docker_dir"
  ECR_REPOSITORY_URI: "$AWS_ACCOUNT.dkr.ecr.$AWS_DEFAULT_REGION.amazonaws.com"

stages:
  - check_ecr_default_image
  - docker_pull
  - security_check
  - docker_push
  - build_new_image
  - check_and_push
  - deploy
  - blue-green

# check whether the default image already exists in the ECR repo
check_ecr_default_image:
  stage: check_ecr_default_image
  before_script:
    # Establish the SSH connection to the EC2 instance
    - &establish_ssh_connection >-
      apk add openssh-client;
      eval "$(ssh-agent -s)";
      mkdir -p ~/.ssh;
      chmod 700 ~/.ssh;
      ssh-keyscan $EC2_HOST >> ~/.ssh/known_hosts;
      echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
  script:
    - ssh -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST "
      pwd && touch ci_variables.env && chmod 755 ci_variables.env &&
      if ! aws ecr describe-images --repository-name $IMAGE_NAME --image-ids imageTag=stable > /dev/null 2>&1; then
      echo 'ECR_IMAGE_EXISTS=false' > ci_variables.env;
      else
      echo 'ECR_IMAGE_EXISTS=true' > ci_variables.env;
      fi"
    - scp -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST:ci_variables.env .
    - cat ci_variables.env
  artifacts:
    paths:
      - ci_variables.env
    expire_in: 1 hour

# pull the default image from Docker Hub
docker_pull:
  stage: docker_pull
  before_script:
    - if grep -q 'ECR_IMAGE_EXISTS=true' ci_variables.env; then exit 0; fi
    - *establish_ssh_connection
  script:
    - ssh -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST "
      docker pull $IMAGE_NAME:$IMAGE_DEFAULT_TAG"

# security stage: scan the image for vulnerabilities with the Trivy tool
security_check:
  stage: security_check
  before_script:
    - if grep -q 'ECR_IMAGE_EXISTS=true' ci_variables.env; then exit 0; fi
    - *establish_ssh_connection
  script:
    - ssh -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST "
      trivy image $IMAGE_NAME:$IMAGE_DEFAULT_TAG &&
      trivy image --severity CRITICAL --exit-code 1 $IMAGE_NAME:$IMAGE_DEFAULT_TAG"
  allow_failure: true

# push the default image to the ECR repo
docker_push:
  stage: docker_push
  before_script:
    - if grep -q 'ECR_IMAGE_EXISTS=true' ci_variables.env; then exit 0; fi
    - *establish_ssh_connection
  script:
    - ssh -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST "
      docker tag $IMAGE_NAME:$IMAGE_DEFAULT_TAG $ECR_REPOSITORY_URI/$IMAGE_NAME:$IMAGE_DEFAULT_TAG &&
      aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $ECR_REPOSITORY_URI &&
      docker push $ECR_REPOSITORY_URI/$IMAGE_NAME:$IMAGE_DEFAULT_TAG &&
      docker image prune -a -f"

# take the default image from the ECR repo and build a new image
build_new_image:
  stage: build_new_image
  before_script:
    - *establish_ssh_connection
  script:
    - ssh -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST "mkdir -p $CI_EXECUTION_DIR/"
    - scp Dockerfile index.html $EC2_USER@$EC2_HOST:$CI_EXECUTION_DIR/
    - ssh -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST "
      pwd && docker pull $ECR_REPOSITORY_URI/$IMAGE_NAME:$IMAGE_DEFAULT_TAG &&
      cd $CI_EXECUTION_DIR/ &&
      sed -i 's|\${ECR_REPOSITORY_URI}|$ECR_REPOSITORY_URI|g' Dockerfile &&
      sed -i 's|\${IMAGE_NAME}|$IMAGE_NAME|g' Dockerfile &&
      sed -i 's|\${IMAGE_DEFAULT_TAG}|$IMAGE_DEFAULT_TAG|g' Dockerfile &&
      docker build -f Dockerfile -t $ECR_REPOSITORY_URI/$IMAGE_NAME:$CI_PIPELINE_IID . &&
      cd .. && rm -rf $CI_EXECUTION_DIR"

# scan the newly built image and push it to the ECR repo
check_and_push:
  stage: check_and_push
  before_script:
    - *establish_ssh_connection
  script:
    - ssh -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST "
      trivy image $ECR_REPOSITORY_URI/$IMAGE_NAME:$CI_PIPELINE_IID &&
      trivy image --severity CRITICAL --exit-code 1 $ECR_REPOSITORY_URI/$IMAGE_NAME:$CI_PIPELINE_IID &&
      docker push $ECR_REPOSITORY_URI/$IMAGE_NAME:$CI_PIPELINE_IID"
  allow_failure: true

# deploy a Docker container from the new image
deploy:
  stage: deploy
  before_script:
    - *establish_ssh_connection
  script:
    - ssh -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST "
      docker ps -a -q --filter name=nginx-app2 | xargs -r docker stop | xargs -r docker rm &&
      docker ps -a -q --filter name=nginx-app1 | grep -q . &&
      docker run --name nginx-app2 -d -p 8080:80 $ECR_REPOSITORY_URI/$IMAGE_NAME:$CI_PIPELINE_IID ||
      docker run --name nginx-app1 -d -p 80:80 $ECR_REPOSITORY_URI/$IMAGE_NAME:$CI_PIPELINE_IID"

# blue-green stage
blue-green:
  stage: blue-green
  when: manual
  before_script:
    - *establish_ssh_connection
  script:
    - ssh -o StrictHostKeyChecking=no $EC2_USER@$EC2_HOST "
      sudo lsof -i -P -n | grep LISTEN | grep 80 && echo 'port 80 is busy' || echo 'port 80 is not busy' &&
      docker ps -a -q --filter name=nginx-app1 | xargs -r docker stop | xargs -r docker rm &&
      docker run --name nginx-app1 -d -p 80:80 $ECR_REPOSITORY_URI/$IMAGE_NAME:$CI_PIPELINE_IID &&
      docker ps -a -q --filter name=nginx-app2 | xargs -r docker stop | xargs -r docker rm &&
      docker image prune -a -f"
  after_script:
    # Close the SSH connection to the EC2 instance
    - ssh-agent -k
```
GitLab pipeline stages:
check_ecr_default_image: Check if the default image already exists in the ECR repo.
The docker_pull, security_check, and docker_push stages run only when the default image does not yet exist in the ECR repo; otherwise they are skipped. In the security_check stage we use the Trivy tool to scan the default Docker image for vulnerabilities.
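The skip mechanism works because the first stage writes a flag into ci_variables.env and passes the file along as a pipeline artifact, which later before_script sections inspect. A minimal local sketch of that mechanism:

```shell
# Minimal local sketch of the artifact-based skip used by the
# docker_pull, security_check, and docker_push stages.
# In the real pipeline, ci_variables.env is produced by the
# check_ecr_default_image stage and handed on as an artifact.
echo 'ECR_IMAGE_EXISTS=true' > ci_variables.env

# GitLab concatenates before_script and script into one shell,
# so an early `exit 0` ends the job successfully, skipping the stage.
if grep -q 'ECR_IMAGE_EXISTS=true' ci_variables.env; then
  echo "default image already in ECR - stage would be skipped"
fi
```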
build_new_image: Build a new Docker image based on Dockerfile.
check_and_push: Check the built Docker image with the Trivy tool and push the image to the ECR repo.
deploy: Deploy the app on the EC2 instance. "nginx-app1" is the container name for the "blue" deployment and "nginx-app2" is the container name for the "green" deployment. Newly deployed images are versioned with the CI_PIPELINE_IID variable.
blue-green: A manual stage; after verifying that everything is OK with the "green" deployment, we run this stage and the "green" deployment becomes the "blue" deployment.
After the pipeline finishes, we can edit the index.html file (for example, change the version number), commit, and push. Once the pipeline's deploy stage succeeds, the new version of the application is available on port 8080 while the "old" version remains on port 80.
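To compare the two versions side by side, you can request both ports; `<EC2_PUBLIC_IP>` below is a placeholder for the instance address from the Terraform output:

```shell
# <EC2_PUBLIC_IP> is a placeholder for the address from the Terraform output
curl http://<EC2_PUBLIC_IP>/       # "blue" (old) version on port 80
curl http://<EC2_PUBLIC_IP>:8080/  # "green" (new) version on port 8080
```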
If everything looks good, we run the **blue-green** stage: we stop and delete the "nginx-app1" container and recreate it from the "green" deployment image, then stop and delete the "green" deployment container, and finally delete all unused images.
Scanning Docker images:
The Trivy tool is used to check Docker images for vulnerabilities in both the **security_check** and **check_and_push** stages. Trivy is a widely used open-source security scanner for container images that can detect vulnerabilities in images from Docker Hub. For example, Trivy detects 120 vulnerabilities, including 4 critical vulnerabilities, in the nginx:stable image. One way to minimize vulnerabilities is to use minimal base images, such as Alpine. In the case of the nginx:stable-alpine image, Trivy did not find any vulnerabilities.
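You can reproduce these scans locally if Trivy is installed; note that the vulnerability counts change over time as the vulnerability database is updated:

```shell
# Requires a local Trivy installation; results vary with the vulnerability DB version
trivy image nginx:stable
trivy image --severity CRITICAL --exit-code 1 nginx:stable-alpine
```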
In addition to scanning in the pipeline, we also configured scanning in the ECR repository. There too, the nginx:stable-alpine image fared better than nginx:stable in terms of vulnerability findings.
Conclusion
Using a GitLab CI/CD pipeline with an EC2 instance in AWS can be an effective way to automate the process of building, testing, and deploying applications. By provisioning the infrastructure with Terraform, we can easily manage and configure the EC2 instance, the ECR repository, and the IAM role with its policy. Executing the pipeline's commands on the instance over SSH from GitLab-hosted runners provides a secure way to access it. Overall, this approach can help improve the efficiency and security of the software development and deployment process.
If you found this post helpful and interesting, please click the reaction button below to show your support for the author. Feel free to use and share this post!
You can also support me with a virtual coffee https://www.buymeacoffee.com/andrworld1500 .