Daniane P. Gomes

Building a continuous delivery pipeline for database migrations with GitLab and AWS

Photo by JJ Ying on Unsplash

After testing tools to automate database migrations, it is time to integrate the chosen one with my GitLab repository and build a continuous delivery pipeline for AWS.


Project stack

You can check my previous story comparing Flyway and Liquibase here but, spoiler alert, this implementation uses the following stack:

  • Flyway as the database migration tool
  • GitLab CI as the pipeline runner
  • AWS: Elastic Container Registry (ECR), Elastic Container Service (ECS) on Fargate, Lambda and CloudWatch

The process

  • GitLab CI builds a Flyway Docker image and pushes it to Amazon Elastic Container Registry (ECR).
  • GitLab CI triggers a lambda that runs an Amazon Elastic Container Service (ECS) task with the Flyway Docker image from ECR.
  • The Flyway command “migrate” is executed and the database schema is updated.

The image below illustrates the process.

The continuous delivery pipeline process

GitLab CI

A demo project has the folder db-migrations/scripts, where the migration scripts are placed (an example layout is shown below). Every time a change is pushed to this folder in the GitLab repository, the pipeline runs, builds a Flyway Docker image with the scripts and pushes it to the Amazon Elastic Container Registry (ECR).
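
Flyway discovers versioned migrations by file name, following the V<VERSION>__<DESCRIPTION>.sql convention. A hypothetical layout of the scripts folder could look like the one below; the file names and the SQL statement are only illustrative and not part of the demo project.

db-migrations/scripts/
├── V1__create_customer_table.sql
├── V2__add_email_to_customer.sql
└── V3__create_order_table.sql

-- V2__add_email_to_customer.sql (illustrative only)
ALTER TABLE customer ADD COLUMN email VARCHAR(255);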

Additionally, GitLab CI triggers a lambda that calls an Amazon Elastic Container Service (ECS) task which will run the built image.

The image details are in the Dockerfile below.

# Get image "flyway" from Flyway's repository
FROM flyway/flyway

WORKDIR /flyway 

# Database credentials
COPY db-migrations/flyway.conf /flyway/conf

# Add the scripts I've pushed to my project folder to the Docker image
ADD db-migrations/scripts /flyway/sql

# Execute the command migrate
CMD [ "migrate" ]
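
The flyway.conf copied into the image holds the database connection settings. A minimal sketch of what it could contain is below; the JDBC URL, user and password are placeholders, and in a real setup it is better to avoid baking plain-text credentials into the image (Flyway can also read them from environment variables such as FLYWAY_URL, FLYWAY_USER and FLYWAY_PASSWORD).

# db-migrations/flyway.conf (values are placeholders)
flyway.url=jdbc:postgresql://<DB_HOST>:5432/<DB_NAME>
flyway.user=<DB_USER>
flyway.password=<DB_PASSWORD>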

The following .gitlab-ci.yml shows GitLab’s actions. Check stages “build-docker-image” and “execute-migrations”.

stages:
  - build-docker-image
  - execute-migrations

build-docker-image:
  stage: build-docker-image
  image:
    name: gcr.io/kaniko-project/executor:debug-v0.19.0
    entrypoint: [""]
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json    
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/db-migrations/Dockerfile --destination $AWS_REPOS_FLYWAY:latest
  only:
    refs:
      - <MY_BRANCH_ON_GITLAB>
    changes:
      - db-migrations/scripts/*

execute-migrations:
  stage: execute-migrations
  image: python:3.8-alpine
  before_script:
    - pip install awscli
  script:
    - aws lambda invoke --function-name MyLambdaToTriggerFlyway response.json
  only:
    refs:
      - <MY_BRANCH_ON_GITLAB>
    changes:
      - db-migrations/scripts/*
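
A note on variables: both jobs rely on AWS credentials being available as CI/CD variables (for example AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION), which the kaniko executor's built-in ECR credential helper and the AWS CLI pick up from the environment, and $AWS_REPOS_FLYWAY is assumed to hold the URI of the ECR repository the image is pushed to.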

Lambda

The lambda is not strictly required: the same results could be achieved with the AWS CLI directly from the pipeline, as sketched below.
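
For comparison, a rough equivalent with the AWS CLI alone could look like the command below. The cluster, subnet and security group identifiers are placeholders, the task definition family "flyway" matches the one used by the lambda, and whether assignPublicIp is needed depends on the VPC setup.

aws ecs run-task \
  --cluster <MY_CLUSTER> \
  --launch-type FARGATE \
  --count 1 \
  --task-definition flyway \
  --network-configuration "awsvpcConfiguration={subnets=[<MY_SUBNET>],securityGroups=[<MY_SECURITY_GROUP>],assignPublicIp=ENABLED}"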

However, a lambda offers more flexibility to the process. It is possible to get the execution results, send emails, feed a database table with information to collect statistics and everything else your imagination allows.

It also keeps this part of the infrastructure under control through code and version history.

The lambda was written in Node.js 12.x, reusing code I wrote for another test. The ECS task is triggered by the call “ecs.runTask(params)”.

/**
 * MyLambdaToTriggerFlyway
 *
 * This lambda relies on 3 environment variables: ENV_CLUSTER, ENV_SUBNET, ENV_SECURITY_GROUP.
 * 
 */

var aws = require('aws-sdk');
var ecs = new aws.ECS();

exports.handler = async (event, context) => {

    var taskDefinition = null;

    var CLUSTER = process.env.ENV_CLUSTER;
    var SUBNET = process.env.ENV_SUBNET.split(",");
    var SECURITY_GROUP = process.env.ENV_SECURITY_GROUP;
    var LAUNCH_TYPE = "FARGATE";
    var FAMILY_PREFIX = "flyway";
    var CONTAINER_NAME = "flyway";  

    var taskParams = {
        familyPrefix: FAMILY_PREFIX
    };    

    const listTaskDefinitionsResult = await ecs.listTaskDefinitions(taskParams).promise();

    if(listTaskDefinitionsResult) {
        taskDefinition = listTaskDefinitionsResult.taskDefinitionArns[listTaskDefinitionsResult.taskDefinitionArns.length-1];
        taskDefinition = taskDefinition.split("/")[1];
    }

    var params = {
        cluster: CLUSTER,
        count: 1, 
        launchType: LAUNCH_TYPE,
        networkConfiguration: {
            "awsvpcConfiguration":  {
              "subnets": SUBNET,
              "securityGroups": [SECURITY_GROUP]
            }
        },
        taskDefinition: taskDefinition
    };

    const runTaskResult = await ecs.runTask(params).promise();

    if (runTaskResult.failures && runTaskResult.failures.length > 0) {
        console.log("Error!");
        console.log(runTaskResult.failures);
    }

    return runTaskResult;

};

Elastic Container Service Task

The ECS task pulls the Flyway Docker image from ECR and runs it. The command “migrate” is executed, the scripts are applied and the database schema is updated.

In case of errors, I have decided to keep the fix manual for now, but it would be possible to automate the usage of other commands such as “validate” and “repair”.

I have created a CloudWatch alarm to notify me by email in case of any error. In a future iteration, I intend to manage execution errors through the lambda; a possible direction is sketched below.
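
As an illustration of that idea, and not part of the current implementation, the lambda could publish failures to an SNS topic that already has an email subscription. The snippet below extends the handler shown earlier; ENV_ALERT_TOPIC_ARN is a hypothetical environment variable. It only covers the case where ECS fails to start the task; catching Flyway errors inside the container would additionally require waiting for the task to stop (for example with ecs.waitFor("tasksStopped", ...)) and inspecting its exit code.

var sns = new aws.SNS();

// Hypothetical follow-up to ecs.runTask(): notify by email when ECS rejects the task
if (runTaskResult.failures && runTaskResult.failures.length > 0) {
    await sns.publish({
        TopicArn: process.env.ENV_ALERT_TOPIC_ARN, // placeholder variable, not in the real lambda
        Subject: "Flyway migration task failed to start",
        Message: JSON.stringify(runTaskResult.failures)
    }).promise();
}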

Conclusions

It can be painful to manage database migrations manually, especially with multiple environments such as development, staging and production.

However, a migration tool like Flyway, integrated with a continuous delivery pipeline, avoids manual execution and therefore mitigates human error. Furthermore, it relieves the burden and boredom of the activity.


This article was written in partnership with Elson Neto.

Originally posted on my Medium Stories
