Oluwafemi Lawal for AWS Community Builders

How to Deploy to AWS from Fargate-Backed GitLab Runners

Introduction

I love AWS CodePipeline. It integrates with GitHub and Bitbucket, and you can add as many stages to your pipeline as you want, whether for approval, build, invocation, testing, or efficient deployment to multiple AWS services. One thing it does not integrate with, unfortunately (at the time of writing), is GitLab.

This article covers the steps required to deploy to AWS from a GitLab runner backed by the Amazon Fargate service. I will not cover the initial setup; GitLab documents it in detail at https://docs.gitlab.com/runner/configuration/runner_autoscale_aws_fargate/

Prerequisites

Before we get started, there are a few things you need to have set up:

  1. A GitLab account with a repository that will run CI jobs on Fargate.
  2. An AWS account.
  3. A GitLab runner set up and running in your AWS account.
  4. A Fargate task definition that the GitLab runner will use to run your CI jobs.

A modified version of the example Debian image from GitLab's instructions will be used for the Fargate task the GitLab runner invokes; the only addition is the AWS CLI:

FROM debian:buster

# ---------------------------------------------------------------------
# Install https://github.com/krallin/tini - a very small 'init' process
# that helps process signals sent to the container properly.
# ---------------------------------------------------------------------
ARG TINI_VERSION=v0.19.0

RUN apt-get update && \
    apt-get install -y curl && \
    curl -Lo /usr/local/bin/tini https://github.com/krallin/tini/releases/download/${TINI_VERSION}/tini-amd64 && \
    chmod +x /usr/local/bin/tini

# --------------------------------------------------------------------------
# Install and configure sshd.
# https://docs.docker.com/engine/examples/running_ssh_service for reference.
# --------------------------------------------------------------------------
RUN apt-get install -y openssh-server && \
    # Creating /run/sshd instead of /var/run/sshd, because in the Debian
    # image /var/run is a symlink to /run. Creating /var/run/sshd directory
    # as proposed in the Docker documentation linked above just doesn't
    # work.
    mkdir -p /run/sshd

EXPOSE 22

# ----------------------------------------
# Install GitLab CI required dependencies.
# ----------------------------------------
ARG GITLAB_RUNNER_VERSION=v12.9.0

RUN curl -Lo /usr/local/bin/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/${GITLAB_RUNNER_VERSION}/binaries/gitlab-runner-linux-amd64 && \
    chmod +x /usr/local/bin/gitlab-runner && \
    # Test if the downloaded file was indeed a binary and not, for example,
    # an HTML page representing S3's internal server error message or something
    # like that.
    gitlab-runner --version

RUN apt-get install -y bash ca-certificates git git-lfs && \
    git lfs install --skip-repo

# ----------------------------------------
# Install AWS CLI.
# ----------------------------------------
RUN apt-get install -y awscli

# -------------------------------------------------------------------------------------
# Execute a startup script.
# https://success.docker.com/article/use-a-script-to-initialize-stateful-container-data
# for reference.
# -------------------------------------------------------------------------------------
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh

RUN chmod +x /usr/local/bin/docker-entrypoint.sh

ENTRYPOINT ["tini", "--", "/usr/local/bin/docker-entrypoint.sh"]
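
The Dockerfile copies a docker-entrypoint.sh that is not shown above. A minimal version, modeled on the script in GitLab's guide, writes the public key the Fargate driver passes in through the SSH_PUBLIC_KEY environment variable into authorized_keys and then starts sshd:

#!/bin/sh

# Create the folder for the user's SSH keys if it does not exist.
USER_SSH_KEYS_FOLDER=~/.ssh
[ ! -d "${USER_SSH_KEYS_FOLDER}" ] && mkdir -p "${USER_SSH_KEYS_FOLDER}"

# Copy the contents of the SSH_PUBLIC_KEY environment variable
# (set by the Fargate driver) into authorized_keys.
echo "${SSH_PUBLIC_KEY}" > "${USER_SSH_KEYS_FOLDER}/authorized_keys"

# Clear the variable so job scripts do not see it.
unset SSH_PUBLIC_KEY

# Start the SSH daemon in the foreground.
/usr/sbin/sshd -D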

You will have to create an ECR repository for the image, build your image and push it to ECR, then create the task definition that uses the image.
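
For example (a minimal sketch; the repository name, region, and account ID below are placeholders you should replace with your own):

# Create the ECR repository for the CI image
aws ecr create-repository --repository-name gitlab-runner-ci-image --region eu-west-1

# Authenticate Docker against your private registry
aws ecr get-login-password --region eu-west-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com

# Build, tag, and push the image
docker build -t gitlab-runner-ci-image .
docker tag gitlab-runner-ci-image:latest 123456789012.dkr.ecr.eu-west-1.amazonaws.com/gitlab-runner-ci-image:latest
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/gitlab-runner-ci-image:latest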

Setting Up The Deployment

The scenario: you have a GitLab repository that holds a CloudFormation template for creating ECR repositories, and merges to the main branch should deploy the updated template to your AWS account. Our pipeline requires two key components:

  1. Creating an ecr.yaml CloudFormation template (a quick local check for it is shown after this list):
AWSTemplateFormatVersion: "2010-09-09"

Description: >
  This template creates ECR resources

Parameters:
  IAMUserName:
    Type: String
    Description: IAM User Name
    Default: "YOUR_USER_NAME"
    AllowedPattern: "[a-zA-Z0-9-_]+"
    ConstraintDescription: must be a valid IAM user name  

Resources:
  ECRRepository:
    Type: AWS::ECR::Repository
    Properties:
      RepositoryName: !Ref AWS::StackName
      RepositoryPolicyText:
        Version: "2012-10-17"
        Statement:
          - Sid: "AllowPushPull"
            Effect: Allow
            Principal:
              AWS: !Sub "arn:aws:iam::${AWS::AccountId}:user/${IAMUserName}"
            Action:
              - "ecr:GetDownloadUrlForLayer"
              - "ecr:BatchGetImage"
              - "ecr:BatchCheckLayerAvailability"
              - "ecr:PutImage"
              - "ecr:InitiateLayerUpload"
              - "ecr:UploadLayerPart"
              - "ecr:CompleteLayerUpload"

Outputs:
  ECRRepository:
    Description: ECR repository
    Value: !Ref ECRRepository
  2. Modifying the .gitlab-ci.yml file to include a deployment stage that deploys your changes to AWS:
variables:
  STACK_NAME: ecr-stack
  TEMPLATE_PATH: templates/ecr.yaml

stages:
  - deploy

cloudformation_deploy:
  stage: deploy
  script:
    - aws cloudformation deploy --template-file $TEMPLATE_PATH --stack-name $STACK_NAME --capabilities CAPABILITY_NAMED_IAM
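
Before wiring this into CI, you can sanity-check the template locally (assuming it lives at templates/ecr.yaml in your repository and your local AWS CLI is already configured; "my-iam-user" is a placeholder):

# Validate the template syntax without creating anything
aws cloudformation validate-template --template-body file://templates/ecr.yaml

# Optionally override the IAMUserName parameter at deploy time
aws cloudformation deploy \
  --template-file templates/ecr.yaml \
  --stack-name ecr-stack \
  --parameter-overrides IAMUserName=my-iam-user \
  --capabilities CAPABILITY_NAMED_IAM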

The Problem

The pipeline is not actually going to work: the AWS CLI will be unable to locate credentials and will ask you to run aws configure. You might think the simple solution is to create IAM credentials for GitLab and set the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, but that is completely unnecessary (and not very secure).

The pipeline is running in a Fargate container inside your AWS account, and the task role grants the necessary permissions, so why doesn't it work?

[Illustration of the Fargate task]

Ordinarily, a container on Fargate obtains temporary AWS credentials by calling the task metadata endpoint, at a path supplied through the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable. Unfortunately, that variable is only injected into the init process (PID 1), and because the runner executes your job over SSH, the job's shell never inherits it. To fix this issue, we have to set that environment variable ourselves.

How do we get an environment variable from an entirely different process? On Linux, a process's environment is exposed in /proc/<pid>/environ, so we can read PID 1's copy by running:

export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI) 
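
As a quick sanity check (not required in the pipeline), you can confirm that credentials are now reachable. 169.254.170.2 is the fixed address the ECS agent serves task credentials from, and sts get-caller-identity should return your task role:

# Fetch the temporary credentials directly from the credentials endpoint
curl "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"

# Confirm the AWS CLI now resolves the task role
aws sts get-caller-identity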

This will make the variable available to the CI process and enable the container to retrieve temporary AWS credentials. The final .gitlab-ci.yml file will look like this:

variables:
  STACK_NAME: ecr-stack
  TEMPLATE_PATH: templates/ecr.yaml

stages:
  - deploy

cloudformation_deploy:
  stage: deploy
  script:
    - export $(strings /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)
    - aws cloudformation deploy --template-file $TEMPLATE_PATH --stack-name $STACK_NAME --capabilities CAPABILITY_NAMED_IAM

Conclusion

In this article, we covered the steps required to use the AWS CLI for deployments from a GitLab runner backed by the Amazon Fargate service. With these steps in place, you can automate the deployment of your resources to AWS.
