Kaye Alvarado for Developers @ Asurion

Simulating Multiple Client Load Tests in Gatling with GitHub Actions

This is the second part of the Load Testing with Gatling series. I previously talked about how you can simulate a production load using Gatling Load-Test-as-Code. Gatling provides a way for you to write client requests in code and run them from your local machine at the injection rates you specify.

The Problem

In a real-world scenario, traffic does not come from just one client. Multiple users log in from different clients, and this creates a different scenario compared to a load test from a single machine. With more than one client, other factors come into play that affect performance, specifically around establishing the connection between client and server.

As Gatling load tests are written in code and run with scripts, you can always use any pipeline tool to drive the load test. Pipelines can be triggered either manually or automatically, depending on your use case.

Simulating Multiple Client Load Tests with GitHub Actions

GitHub Actions is a feature of GitHub that allows you to automate, customize, and execute your workflows right in your repository. In GitHub, a workflow is run by GitHub runners, which are essentially hosted virtual machines that are provisioned every time a job is triggered and automatically decommissioned when the job finishes.

Creating a workflow in GitHub is very easy, and learning the code (YAML) doesn't take much time either. There are a lot of GitHub Actions available in the marketplace providing common workflows to be reused. Below, I run through some issues I had to solve while developing the workflow for my load test, and the solutions I put in place for them.

Customizable Fields in the Load Test

Issue #1: I want to be able to run different simulation files in my repository. I do not want to create multiple repositories for different load tests.

To solve this, I added a few inputs to the GitHub Actions workflow. One of these inputs selects which simulation file to run.

on: 
  workflow_dispatch:
    inputs:
      simulation:
        description: 'API Test Collection'     
        required: true
        default: 'LoadTestSimulation1' 
        type: choice
        options:
        - LoadTestSimulation1
        - LoadTestSimulation2
        - LoadTestSimulation3
        - LoadTestSimulation4
        - LoadTestSimulation5

In the workflow, I can then add some logic to replace the simulationClass value so that, when running the gatling.sh script, it auto-selects the file to run.

gatling {
  core {
    runDescription = ""                                 # The description for this simulation run, displayed in each report
    simulationClass = "loadtestpackage.{{ FQCN }}"      # The FQCN of the simulation to run
  }
}

In GitHub Actions, I added a sed command to replace {{ FQCN }} in the gatling.conf file. This way, when I run gatling.sh, it does not ask me to select which simulation file to run.

- name: Update Variables
  run: |
    sed -i -e 's/{{ FQCN }}/${{ github.event.inputs.simulation }}/g' ./conf/gatling.conf

Issue #2: The load test I was developing should be reusable across environments, since the URL varies between them. I also want to pass a header with a tag value on each run, so I can identify the requests in my monitoring tools and filter based on those tags.

Issue #3: I want to be able to override the authorization or apikey headers in my simulation tests in case they change over time.

Issue #4: I want to be able to override the injection rates before running my simulations.

Issues 2-4 are similar to the first issue: I made these values replaceable with the values I feed in as workflow inputs, and substitute them into my Scala file with the same sed approach.
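As a minimal sketch of that substitution, the snippet below stamps an environment URL, a tag header value, an apikey, and an injection rate into a simulation file. The file layout, token names, and values here are placeholders for illustration, not the actual simulation code:

```shell
# Create a stand-in simulation file containing the placeholder tokens
# (in the real repository these tokens would live in the checked-in Scala source)
mkdir -p simulations
cat > simulations/LoadTestSimulation1.scala <<'EOF'
val baseUrl     = "{{ BASE_URL }}"       // environment URL (Issue #2)
val runTag      = "{{ RUN_TAG }}"        // tag header for monitoring (Issue #2)
val apiKey      = "{{ API_KEY }}"        // override-able apikey header (Issue #3)
val usersPerSec = {{ USERS_PER_SEC }}    // injection rate (Issue #4)
EOF

# In the workflow, the replacement values would come from
# ${{ github.event.inputs.* }}, exactly like the FQCN replacement above
sed -i \
  -e 's|{{ BASE_URL }}|https://staging.example.com|g' \
  -e 's/{{ RUN_TAG }}/load-test-run-42/g' \
  -e 's/{{ API_KEY }}/dummy-api-key/g' \
  -e 's/{{ USERS_PER_SEC }}/50/g' \
  simulations/LoadTestSimulation1.scala
```

Note the `|` delimiter for the URL substitution, which avoids having to escape the slashes in `https://`.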

Once everything is ready, I can simply add the following GitHub Actions to run the gatling.sh script.

      - name: Install JAVA dependency
        uses: actions/setup-java@v3
        with:
          distribution: 'zulu' 
          java-version: '17'

      - name: Run the Load Test
        run: |
          #change directory to display compile errors if any
          cd bin
          ./gatling.sh
          cd ..

Triggering Multiple Workflows with Multiple Inputs Is Hard to Do Manually

Let's go back to the original problem that made me reach for GitHub Actions in the first place: running multiple simulations in parallel. Now that my workflow takes inputs, triggering runs close to each other is difficult if I have to type the input values in by hand.

The solution here is provided by GitHub itself! GitHub exposes an API to trigger workflows: simply pass a payload with the input values and call the API with curl or Postman, and the workflows will start within a few seconds of each other. My runs below started 3 seconds apart.
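A sketch of such a trigger script is below, using GitHub's workflow_dispatch REST endpoint. The OWNER, REPO, and workflow file names are placeholders for your own repository, and GH_TOKEN is assumed to be a token with permission to trigger workflows:

```shell
# Placeholder repository coordinates -- replace with your own
OWNER="my-org"
REPO="gatling-load-tests"
WORKFLOW="load-test.yml"

# POST to the workflow_dispatch endpoint, passing the desired input values
trigger_simulation() {
  curl -s -X POST \
    -H "Accept: application/vnd.github+json" \
    -H "Authorization: Bearer ${GH_TOKEN:-}" \
    -d "{\"ref\":\"main\",\"inputs\":{\"simulation\":\"$1\"}}" \
    "https://api.github.com/repos/${OWNER}/${REPO}/actions/workflows/${WORKFLOW}/dispatches"
}

# Fire two dispatches back to back; each run lands on its own runner,
# i.e. its own client machine. Guarded so the sketch is safe to run as-is.
if [ -n "${GH_TOKEN:-}" ]; then
  trigger_simulation "LoadTestSimulation1"
  trigger_simulation "LoadTestSimulation2"
fi
```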

Each GitHub runner is a different client, so in theory the load now comes from at least two clients, achieving my goal of a multiple-client load test.

The Created Gatling Report Cannot be Accessed After the GitHub Runner is Decommissioned

For each workflow run, a hosted virtual machine is provisioned; it checks out your Gatling source code and runs the commands. The Gatling report is also created locally on that machine, so it is no longer accessible after the workflow completes and the VM is decommissioned.

This is fairly easy to solve, as there is a GitHub Action called upload-artifact that creates a zipped file of any path on the hosted VM. This allows the report to be accessed outside the runner after the workflow completes.

      - name: Output the Gatling Test Results
        uses: actions/upload-artifact@v4
        with: 
          name: GatlingArtifact
          path: results

You can browse for the artifact in GitHub and download it locally to your machine to further analyze the report.

A further improvement to the workflow would be to send a notification via a messaging channel or email, so that you don't have to browse the repository manually to download the report.
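Such a notification could be as small as one extra script step. The sketch below assumes a Slack incoming webhook URL stored as a repository secret and exposed to the step as SLACK_WEBHOOK_URL; the message format is illustrative only:

```shell
# Post a short message with a link to the workflow run that holds the artifact.
# SLACK_WEBHOOK_URL is an assumed secret; GITHUB_SERVER_URL, GITHUB_REPOSITORY,
# and GITHUB_RUN_ID are variables GitHub Actions sets on the runner.
notify_report_ready() {
  curl -s -X POST \
    -H 'Content-Type: application/json' \
    -d "{\"text\":\"Gatling report ready: $1\"}" \
    "${SLACK_WEBHOOK_URL:-}"
}

# Guarded so the sketch is a no-op outside a configured workflow
if [ -n "${SLACK_WEBHOOK_URL:-}" ]; then
  notify_report_ready \
    "${GITHUB_SERVER_URL:-https://github.com}/${GITHUB_REPOSITORY:-owner/repo}/actions/runs/${GITHUB_RUN_ID:-0}"
fi
```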


If you have more suggestions on how to improve the solution, feel free to add it in the comments! I'd love to hear about it. 🙂
