Using GitHub Actions to build test workflows on a Rails on Docker app (+ Postgres and Selenium) leveraging Docker layer caching

Ever since I first heard about GitHub Actions (from now on referred to as GHA), I've been wanting to try it out in one of my projects. I've also been eager to write tests, as there wasn't an opportunity to implement them during the final project of the coding bootcamp I took. Given those two topics I wanted to learn more about, I decided to couple them together and experiment with creating a GitHub Actions workflow for tests. Although I came across tutorials and articles about setting up GHA workflows for Rails apps, I didn't find any explaining how to set up a GHA workflow to test a Rails on Docker app. Hence, I thought that writing an article explaining my approach might help someone out.

I'm assuming that those reading this article are already somewhat familiar with GHA, Docker, and Rails, so I will focus on explaining the workflow setups.

In the simple Rails app used to build the workflow, there are three services we need in order to run the tests: the app itself, the database (Postgres), and a tool that lets us perform system tests by simulating a browser (Selenium). Hence, we need to build three containers, one for each of those services.

I decided to try out two approaches for the workflow: one that uses GHA service containers and another that builds those services from Dockerfiles we define ourselves. In the end, the first one became my favorite, as it is easier to set up and a little bit faster. Let's take a look at both approaches in detail.

The Service Containers Approach
Our goal in setting up this workflow is to test the app every time a push is performed against our repository. There are only a few tests in the app (one model test and a couple of system tests), as the main purpose of this tutorial is to show a basic setup for a GHA workflow that runs tests on every push (and not the testing methodology itself). We are using Rails' default test framework, Minitest.
We will start by walking through the test-services.yml file, placed inside the .github/workflows folder.

Let's walk through each step of this configuration.

First, we define which event will trigger this workflow: in our case, a push. Then we set up some env variables related to the test environment and specify which kind of runner we want to use (Ubuntu).

name: Test

on: [push]

jobs:
  test:
    env:
      RAILS_ENV: test
      NODE_ENV: test
    runs-on: ubuntu-latest # runner

Next comes the services section (which must be placed before the steps section). Here we specify all the services we need to run our tests: in this case, Postgres and Selenium with the Chrome browser. GHA will create one container for each of these services, plus a network so that the containers can communicate with each other. The network is important for this setup because later we will need to connect to it to execute our tests. You can read more about GHA service containers and how to set them up here.

    services:
      database:
        image: postgres
        env:
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      chrome:
        image: selenium/standalone-chrome-debug
        ports:
          - 4444:4444
          - 5900:5900
        volumes:
          - /dev/shm:/dev/shm

Now we get to the steps section of the configuration. The first step, named "Output services network", stores the name of the network created for the service containers under the variable services_network. Later, in the step named "Run tests", we will access this variable and use it to connect our main container to that network. In the following step, "Checkout code", we use a GHA action called checkout that "checks-out your repository under $GITHUB_WORKSPACE, so your workflow can access it."

    steps:

    - name: Output services network
      id: network
      run: |
        echo ::set-output name=services_network::${{ job.container.network }}
        echo ${{ job.container.network }}

    - name: Checkout code
      uses: actions/checkout@v2


The next step relies on a Docker action that installs Docker Buildx in the runner used to run our workflow. Buildx is a CLI plugin that exposes extra features of the BuildKit builder toolkit. The main reason we're installing Buildx is to take advantage of BuildKit's ability to store Docker layers as artifacts, which will later let us speed up our workflow by caching the layers of our Docker builds. The install: true option makes Buildx the default builder in Docker. In the following step, "Prepare Tags", we are, well, preparing the tags for the images we'll build and use.

    - name: Set up Docker Buildx
      uses: docker/setup-buildx-action@v1
      with:
        version: latest
        install: true

    - name: Prepare Tags
      id: tags
      run: |
        TAG=$(echo $GITHUB_SHA | head -c7)
        IMAGE="dev/test"
        echo ::set-output name=tagged_image::${IMAGE}:${TAG}
        echo ::set-output name=tag::${TAG}

Next, we use a GitHub action that allows caching dependencies for a job. Basically, it lets us specify a path that will be used to cache and restore dependencies. You can use any folder you want, as long as it's empty or contains only Buildx data. In the following step, "Build code", we finally build our main image, which contains the app we want to test. We use Docker's build-push action, which, as the name implies, lets us build and push Docker images on GHA runners. There are some important configurations in this step, so let's go over each one of them.

We start with push: false, meaning we are not interested in pushing this image anywhere, as we just want to test it (according to the official documentation, this option is false by default, so we could remove it from the configuration and the result would be the same).

Next comes file: .ci-services/Dockerfile-services.ci, which tells the builder which Dockerfile to use for the build. The load: true option loads the build result into the local Docker image store. If we don't specify this option, the "Run tests" step will fail because it won't find the image built in this step.

Next come the two configurations that enable caching and cache retrieval for our builds: cache-from and cache-to. In cache-from we specify the cache type (local in our case, but it could also be a registry) and its path (src=/tmp/.buildx-main-cache). This path should look familiar, as we set it up in the previous step, "Cache main image layers". So here we are telling the builder to retrieve the cache from the /tmp/.buildx-main-cache folder.

As for the cache-to option, it writes the new cache generated by the build to another folder, /tmp/.buildx-main-cache-new. We could have used the same folder as in cache-from, but since the build-push action doesn't yet offer a way to clean up the cache, we have to cache to a different location, remove the /tmp/.buildx-main-cache folder, and rename /tmp/.buildx-main-cache-new to /tmp/.buildx-main-cache (these removal and renaming commands are executed in the last step, "Move cache"). This way, we prevent the cache from growing indefinitely.

Although we are using a regular Dockerfile in this tutorial (no multi-stage build), it is worth mentioning an important configuration that affects multi-stage builds: the mode=max option inside cache-to. mode=max caches all layers generated by a multi-stage Dockerfile, not just the final ones. If you are using a multi-stage Dockerfile and need access to the intermediate layers, you have to set mode=max in the cache-to entry.

The last option is tags, which receives as input the tagged_image variable set in the "Prepare Tags" step.

    - name: Cache main image layers
      uses: actions/cache@v2
      with:
        path: /tmp/.buildx-main-cache
        key: ${{ runner.os }}-buildx-main-${{ github.sha }}
        restore-keys: |
          ${{ runner.os }}-buildx-main-

    - name: Build code
      uses: docker/build-push-action@v2
      with:
        push: false
        file: .ci-services/Dockerfile-services.ci
        load: true
        cache-from: type=local,src=/tmp/.buildx-main-cache
        cache-to: type=local,mode=max,dest=/tmp/.buildx-main-cache-new
        tags: ${{ steps.tags.outputs.tagged_image }}

Finally, we have everything set up to run our tests! Now we can run our docker-compose file with some env variables to build a container from the image created in the "Build code" step and execute our test service. TEST_IMAGE_TAG stores the name of the image we built in the "Build code" step, and SERVICES_NETWORK stores the name of the network created for the service containers. We also specify which docker-compose file to use (.ci-services/docker-compose.test.services.yml) and the service we want to run (test).

    - name: Run Tests
      run: |
        TEST_IMAGE_TAG=${{ steps.tags.outputs.tagged_image }} SERVICES_NETWORK=${{ steps.network.outputs.services_network }} docker-compose -f .ci-services/docker-compose.test.services.yml run test
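The "Move cache" step mentioned earlier runs last in the workflow. It isn't shown above, but based on the description it boils down to something like this sketch (the exact commands in the repository may differ slightly):

    # Drop the old cache folder and promote the new one, so the
    # layer cache doesn't keep growing from one run to the next.
    - name: Move cache
      run: |
        rm -rf /tmp/.buildx-main-cache
        mv /tmp/.buildx-main-cache-new /tmp/.buildx-main-cache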

Now let's take a look at the docker-compose file we are using to understand how those env variables are used.
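It looks roughly like the sketch below, reconstructed from the description that follows. The HUB_URL value, the compose version, and the assumption that database.yml points at a host named database are mine, not necessarily the exact contents of the repository's file:

version: "3.5"

services:
  test:
    image: ${TEST_IMAGE_TAG}  # the image built in the "Build code" step
    environment:
      RAILS_ENV: test
      HUB_URL: http://chrome:4444/wd/hub  # assumed: the Selenium hub inside the chrome service container
      PARALLEL_WORKERS: "1"               # run the tests sequentially
    # database.yml is assumed to use host "database" (the GHA service name);
    # the real file may also prepare the database before running the tests
    command: bash -c "rails test && rails test:system"
    networks:
      - services

networks:
  services:
    external: true
    name: ${SERVICES_NETWORK}  # join the network GHA created for the service containers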

It's straightforward to see that we use the TEST_IMAGE_TAG env variable to tell docker-compose which image to use for the test service. Inside the test service, we also specify two env variables: HUB_URL and PARALLEL_WORKERS. The first is a Selenium-related configuration used in the application_system_test_case.rb file to tell Rails where to find the Chrome browser that will be used in system tests. Rails will then know that it should execute the Chrome browser located in the chrome container (defined in our services section), and not a local one.

The second env variable, PARALLEL_WORKERS, states that there should be only one worker running the tests, which means they will run sequentially instead of in parallel. Obviously, this isn't the most efficient setup, but creating more workers would mean launching multiple chrome containers, which would make our workflow a bit more complicated. This article should help if you want to know more about scaling the chrome service to run tests in parallel.

Going back to the docker-compose file: after setting the env variables, the test service executes a command that runs the Rails tests (rails test && rails test:system). At last we can see how the SERVICES_NETWORK env variable is used: it tells the test service container to connect to a specific network, the one created for the service containers at the beginning of our workflow. Being connected to this network allows the test container to reach the database and chrome containers while running the tests.

The Dockerfile Containers Approach
I won't go into a lot of detail about this approach, as it has a lot in common with the previous one. As before, the goal is to test the app every time a push is performed against our repository. The main difference is that instead of using GHA service containers, we use Docker's build-push action to build those containers ourselves. So for this approach, we have the following test.yml placed in the .github/workflows folder:
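The full file lives in the repository linked at the end; here's an abbreviated sketch of the kind of steps it adds for each service, shown for postgres only (the Dockerfile path and image tag are assumptions of mine):

    - name: Cache postgres image layers
      uses: actions/cache@v2
      with:
        path: /tmp/.buildx-postgres-cache
        key: ${{ runner.os }}-buildx-postgres-${{ github.sha }}
        restore-keys: |
          ${{ runner.os }}-buildx-postgres-

    - name: Build postgres
      uses: docker/build-push-action@v2
      with:
        push: false
        file: .ci/Dockerfile-postgres.ci  # assumed path
        load: true
        cache-from: type=local,src=/tmp/.buildx-postgres-cache
        cache-to: type=local,mode=max,dest=/tmp/.buildx-postgres-cache-new
        tags: dev/postgres:latest  # assumed tag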

Given that we are now building our two service containers from scratch instead of using GHA service containers, the test.yml file becomes a bit more extensive. The steps we only had to perform for our main service in the previous approach now have to be performed for the postgres and chrome services as well: prepare the caching, build the images, and move the cache.

We also need two additional Dockerfiles: one for the postgres service and one for the chrome service. The docker-compose file is a little different too: it now uses env variables to refer to the images built for the postgres and chrome services. Here's how it looks:
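A sketch of that compose file, with assumed names for the extra env variables (POSTGRES_IMAGE_TAG and CHROME_IMAGE_TAG are placeholders of mine):

version: "3"

services:
  test:
    image: ${TEST_IMAGE_TAG}
    environment:
      RAILS_ENV: test
      HUB_URL: http://chrome:4444/wd/hub  # same Selenium setup as in the first approach
      PARALLEL_WORKERS: "1"
    command: bash -c "rails test && rails test:system"
    depends_on:
      - database
      - chrome

  database:
    image: ${POSTGRES_IMAGE_TAG}  # image built from the postgres Dockerfile
    environment:
      POSTGRES_PASSWORD: postgres

  chrome:
    image: ${CHROME_IMAGE_TAG}    # image built from the chrome Dockerfile
    volumes:
      - /dev/shm:/dev/shm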

Given that docker-compose builds all the services together and automatically creates a network for them, we don't need to worry about connecting to a specific network as we did in the first approach.

Wrapping up
As mentioned before, I prefer the first approach (using GHA service containers), as it is simpler and a bit faster to execute. Still, I thought it was worth mentioning the second approach (building the service containers with the build-push action) because it works, and it was the first one I managed to implement.

You can find the repository with both workflows here. It has a .devcontainer folder, so you can use VS Code and open the code inside a container.

I'm far from being an expert on GHA, Rails, or Docker, so please let me know if you think I can improve this workflow and/or if I wrote something wrong. 😃

I'd also like to mention this excellent article that helped me to set up my workflow and understand how to leverage Docker layer caching on GHA.

Thank you for reading this!
