Semyon Kirekov

E2E-Testing in CI Environment With Testcontainers

I have written many blog posts about unit and integration testing. But today I want to tell you about something beyond them: E2E testing. It's important to test each service's behaviour in isolation, but it's also crucial to verify the validity of business scenarios on the whole running system. In this article, I explain what E2E testing is, why it is so important, and how you can implement it within your release pipeline. You'll learn how to run E2E tests on each new pull request before merging the changes to the master branch.

The code examples are in Java, but the proposed solution is applicable to any programming language. You can find the source code of the whole project by this link.


Domain

We're going to develop a system that enriches incoming messages with additional data. Take a look at the schema below.

System design

The message processing algorithm is simple:

  1. A user sends a message via REST API.
  2. API-Service transfers it to RabbitMQ.
  3. Gain-Service updates the data used for future enrichment in Redis if the message contains something valuable. Then it adds the extra data to the message itself and transfers the result to RabbitMQ again.
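
For example, here is what a message that already contains both identifiers could look like (the field names and values are the ones used in the test suite later in this article):

{
  "some_key": "some_value",
  "cookie": "cookie-value",
  "msisdn": "msisdn-value"
}

If the msisdn is absent but the cookie is present, Gain-Service looks up the missing value in Redis and enriches the message before passing it on.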

Testing

Unit testing

How can we validate the system's behaviour? There are several options. The simplest one is unit testing. Take a look at the diagram below. I pointed out the testing areas with pale green and blue ovals.

Unit testing areas

Unit tests have several advantages:

  1. They are fast to run.
  2. Easily integrated into CI/CD pipeline.
  3. Can be run in parallel (if properly written).

However, they also have a problem. Unit tests do not check interactions with real external services (e.g. Redis, RabbitMQ). They verify the business logic but not the actual production scenario.

I wrote a longread about unit testing patterns and best practices. Go check it out, it's really awesome.

Integration testing

We need to extend the perspective. So, integration tests can come in handy, right? Have a look at the next diagram below.

Integration Testing

In this case, we do check interactions with external services. Though one problem remains. A business operation involves communication between several components. Even if each module is tested properly, how can we verify the correctness of a multi-service request (i.e. a business scenario)? For example, if the API-Service introduces a breaking change to the format of the output message, then the Gain-Service won't be able to proceed with enrichment successfully, even though the API-Service integration and unit tests would pass.

To overcome this issue we need something beyond integration tests.

I wrote an article explaining integration tests deeply. You should check it out.

E2E testing

The idea of E2E testing is straightforward. We're considering the whole system as a black box that accepts some data and returns the computed result (either synchronously or asynchronously). Take a look at the schema below.

E2E Testing

Well, that sounds reasonable and trustworthy. But how can we implement it? Where do we begin? Let's deconstruct this problem step by step.

Releasing strategies

Firstly, let's clarify the release pipeline of individual services. That'll help us to understand the whole E2E testing approach. Take a look at the schema below.

A single module release pipeline

Here is the flow step by step:

  1. A developer pushes changes to the feature/task branch.
  2. Then they make a pull request from feature/task to the master branch.
  3. During the CI pipeline, the pull request is built (i.e. unit tests and integration tests are executed).
  4. If the pipeline is green, the changes are merged to the master branch.
  5. When the pull request is merged, the resulting artefact is published to Docker Hub.
  6. When the release is triggered (e.g. on a scheduled basis), the deploy stage pulls the required Docker image (latest by default) and runs it in the specified environment.

So, how can we put E2E tests within the stated process? Actually, there are several ways.

Synchronous releasing strategy

That's the easiest approach to understand. No matter how many services we have, the release pipeline deploys each of them within a single job. In this case, we just need to run E2E tests right before deploying artefacts to production. Take a look at the schema below describing the process.

Synchronous releasing strategy

The algorithm is:

  1. Trigger the release.
  2. Pull all services' images from the Docker Hub (latest by default).
  3. Run E2E tests with the pulled images (I'll explain the approach to you later in the article).
  4. If tests succeed, deploy the pulled images.

Despite its simplicity, this approach has a significant drawback. You cannot update a single microservice in isolation. It means that different modules have to be released all at once. In reality, though, some microservices have to be updated more frequently than others. So you have to choose a release trigger that satisfies (at least partially) every service's requirements.

Asynchronous releasing strategy

This one means updating each service as an isolated piece of functionality. Each module can be deployed according to its own rules.

Here is an example of an asynchronous releasing strategy. Take a look at the schema below.

Asynchronous releasing strategy

As you can see, the diagram is similar to the single module release pipeline that we've seen before, though there are slight differences. Now there is an E2E-tests stage that runs both during the pull request build and right before deploying to production.
Why do we need to run E2E-tests again if they have already been completed in the pull request pipeline? Take a look at the picture below to understand the problem.

e2e-tests delayed releases problem

We deployed API-Service immediately after the PR merge. But we delayed the Gain-Service release by one day. So, if E2E-tests run only during the pull request build, there is a chance that some other services have already been updated in the meantime. We verified the correctness only against the previous versions, because at the time of the pull request build the newest releases had not been promoted yet.

If you stick with the asynchronous releasing strategy, you have to run E2E-tests right before deploying to production as well as during the pull request build.

In this article, we're sticking with the asynchronous releasing strategy as the preferred one for microservices.

Establishing the process

Well, that all sounds promising. But how do we establish this scenario? I can say that's not as complex as it seems. Take a look at the example of running E2E-tests for the API-Service below.

E2E-tests running process

There are two parts: running E2E-tests during the pull request build and right before deploying the artefact to production. Let's go through each scenario step by step.

Pull request build

  1. Firstly, unit tests and integration tests are run. These two steps are usually combined with building the artefact itself.
  2. Then the current version of API-Service is built and saved locally as a Docker image. We don't push it to the hub because the proposed changes might not be correct (we haven't run the E2E-tests to check them yet). Though some CI providers don't allow building Docker images locally to reuse them later. In that case, you can specify a tag that won't be used in production. For example, dev-CI_BUILD_ID.
  3. Then we pull the Docker image containing the E2E-tests themselves. As we'll see later, it's a simple application. So, it's convenient to keep it in Docker Hub as well.
  4. And finally, it's time to run the E2E-tests. The app that contains the tests should be configurable to run with different Docker images of the services (in this case, API-Service and Gain-Service). Here we set API_SERVICE_IMAGE to the one that we've built locally in step 2.

All the other services should default to the Docker image with the latest tag. That gives us the opportunity to run the E2E-tests in any repository by overriding just the current service's image version (see the sketch below).
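
As a minimal sketch (not necessarily the repository's exact code), the tests' configuration could resolve each image name with latest as the fallback, so any single image can be overridden via an environment variable or system property. The kirekov/... image names below are the ones pushed later in this article:

final var apiServiceImage = environment.getProperty(
    "image.api-service", String.class, "kirekov/api-service:latest");
final var gainServiceImage = environment.getProperty(
    "image.gain-service", String.class, "kirekov/gain-service:latest");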

If all verifications pass, the PR is allowed to be merged. After the merge, the new version of API-Service is pushed to Docker Hub with the latest tag.

E2E-tests running before the deploy stage

  1. Unit tests and integration tests are run the same way.
  2. The latest version of the E2E-tests image is pulled from Docker Hub.
  3. E2E-tests are run with the latest tag for all the services.

The API-Service has already been pushed to Docker Hub with the latest tag on the pull request merge. Therefore, there is no need to specify a particular image version for the E2E-tests run.

Code Implementation

Let's start implementing the E2E tests. You can check out the source code by this link.
I'm using Spring Boot Test as the framework for E2E tests. But you can apply any technology you like.

I placed all the modules (including e2e-tests) within a single mono-repository for the sake of simplicity. Anyway, the approach I'm describing is universal. So, you can apply it to multi-repository microservices as well.

Let's start with the E2ESuite. This one will contain all configurations and act as a superclass for all the test cases. Take a look at the code example below.



@ContextConfiguration(initializers = Initializer.class)
@SpringBootTest(webEnvironment = RANDOM_PORT)
@Import({
    TestRedisFacade.class,
    TestRabbitListener.class,
    TestRestFacade.class
})
public class E2ESuite {
  private static final Network SHARED_NETWORK = Network.newNetwork();
  private static GenericContainer<?> REDIS;
  private static RabbitMQContainer RABBIT;
  private static GenericContainer<?> API_SERVICE;
  private static GenericContainer<?> GAIN_SERVICE;
}



Firstly, we have to declare the Docker containers to run within the Testcontainers environment. Here we've got Redis and RabbitMQ, which are part of the infrastructure, whilst API_SERVICE and GAIN_SERVICE are the custom services implementing the business logic.

The @Import annotation adds custom classes used for testing purposes to the Spring context. Their implementation is trivial, so you can find it via the repository link above. Though @ContextConfiguration is important. We'll get to it soon.
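
Still, to make the upcoming test cases easier to follow, here is a hypothetical sketch of what TestRestFacade might look like (the real implementation is in the repository; the api.host and api.exposed-port properties are provided by the suite, as we'll see below):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.boot.test.context.TestComponent;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

@TestComponent
public class TestRestFacade {

  private final RestTemplate rest = new RestTemplate();

  @Value("${api.host}")
  private String apiHost;

  @Value("${api.exposed-port}")
  private int apiPort;

  public <T> ResponseEntity<T> post(String path, Object body, Class<T> responseType) {
    // Send the request to the API-Service port forwarded by Testcontainers.
    return rest.postForEntity("http://" + apiHost + ":" + apiPort + path, body, responseType);
  }
}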

Also, SHARED_NETWORK is crucial. You see, the containers should communicate with each other because that's the point of an E2E scenario. But we also have to be able to send HTTP requests to API-Service to invoke the business logic. To achieve both of these goals, we bind all the containers to a single network and forward the API-Service HTTP port to open access for the client. Take a look at the schema below describing the setup.

Docker network with the set of containers

Now we need to initialize and start the containers somehow. Besides, we also have to specify the correct properties to connect our E2E-tests application to the freshly started Docker containers. In this case, the @ContextConfiguration annotation comes in handy. It provides the initializers parameter, which accepts callbacks invoked during the Spring context initialization stage. Here we've put the inner class Initializer. Take a look at the code example below.



static class Initializer implements
      ApplicationContextInitializer<ConfigurableApplicationContext> {

    @Override
    public void initialize(ConfigurableApplicationContext context) {
      final var environment = context.getEnvironment();
      REDIS = createRedisContainer();
      RABBIT = createRabbitMQContainer(environment);

      Startables.deepStart(REDIS, RABBIT).join();
      final var apiExposedPort = environment.getProperty("api.exposed-port", Integer.class);
      API_SERVICE = createApiServiceContainer(environment, apiExposedPort);
      GAIN_SERVICE = createGainServiceContainer(environment);

      Startables.deepStart(API_SERVICE, GAIN_SERVICE).join();

      setPropertiesForConnections(environment);
    }
    ...
}



Let's deconstruct this functionality step by step. The Redis container is created first. Take a look at the code snippet below.



private GenericContainer<?> createRedisContainer() {
      return new GenericContainer<>("redis:5.0.14-alpine3.15")
          .withExposedPorts(6379)
          .withNetwork(SHARED_NETWORK)
          .withNetworkAliases("redis")
          .withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("Redis")));
    }



At the moment of writing, there is no dedicated Redis container in the Testcontainers library. So, I'm using a generic one. The most important attributes are the network and the network aliases. Their presence makes a container reachable by the other ones within the same network. We're also exposing port 6379 (the default Redis port) because the E2E test cases will connect to Redis during execution.

Also, I'd like you to pay attention to the log consumer. You see, when an E2E scenario fails, it's not always obvious why. Sometimes, to understand the source of the problem, you have to dig into the containers' logs. Thankfully, the log consumer allows us to forward a container's logs to any SLF4J logger instance. In this project, the containers' logs are forwarded to regular text files (you can find the Logback configuration in the repository). Though it's much better to transfer logs to an external logging facility (e.g. Kibana).

Next comes RabbitMQ. Take a look at the container initialization below.



private RabbitMQContainer createRabbitMQContainer(Environment environment) {
      return new RabbitMQContainer("rabbitmq:3.7.25-management-alpine")
          .withNetwork(SHARED_NETWORK)
          .withNetworkAliases("rabbit")
          .withQueue(
              environment.getProperty("queue.api", String.class)
          )
          .withQueue(
              environment.getProperty("queue.gain", String.class)
          )
          .withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("Rabbit")));
    }



The idea is similar to the Redis container instantiation. But here we also call the withQueue method (which is part of the RabbitMQContainer class) to declare the queues that should exist when RabbitMQ starts. API-Service sends messages to the queue.api queue and Gain-Service sends messages to the queue.gain queue (both names are configurable). So, it's convenient to create the required queues on application start.

Then there is an interesting line of code.



Startables.deepStart(REDIS, RABBIT).join();



The deepStart method accepts a vararg of containers to start and returns a CompletableFuture. We need those containers to start before API-Service and Gain-Service do. So, we call the join method to wait until the containers are ready to accept requests.

You can also start all the containers with a single deepStart invocation and specify the order by calling the dependsOn method on the containers themselves (see the sketch below). It's more performant but harder to read through. So, I'm sticking with the simpler example.
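
Just for illustration, a sketch of the dependsOn alternative (assuming the same factory methods) might look like this:

// Declare the ordering on the containers themselves and start everything at once.
// deepStart also starts the declared dependencies transitively.
API_SERVICE = createApiServiceContainer(environment, apiExposedPort)
    .dependsOn(REDIS, RABBIT);
GAIN_SERVICE = createGainServiceContainer(environment)
    .dependsOn(REDIS, RABBIT);

Startables.deepStart(API_SERVICE, GAIN_SERVICE).join();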

And now we can start our custom containers.



final var apiExposedPort = environment.getProperty("api.exposed-port", Integer.class);
API_SERVICE = createApiServiceContainer(environment, apiExposedPort);



First of all, let's take a deep dive into the createApiServiceContainer method. Take a look at the code snippet below.



private GenericContainer<?> createApiServiceContainer(
        Environment environment,
        int apiExposedPort
    ) {
      final var apiServiceImage = environment.getProperty(
          "image.api-service",
          String.class
      );
      final var queue = environment.getProperty(
          "queue.api",
          String.class
      );
      return new GenericContainer<>(apiServiceImage)
          .withEnv("SPRING_RABBITMQ_ADDRESSES", "amqp://rabbit:5672")
          .withEnv("QUEUE_NAME", queue)
          .withExposedPorts(8080)
          .withNetwork(SHARED_NETWORK)
          .withNetworkAliases("api-service")
          .withCreateContainerCmdModifier(
              cmd -> cmd.withHostConfig(
                  new HostConfig()
                      .withNetworkMode(SHARED_NETWORK.getId())
                      .withPortBindings(new PortBinding(
                          Ports.Binding.bindPort(apiExposedPort),
                          new ExposedPort(8080)
                      ))
              )
          )
          .waitingFor(
              Wait.forHttp("/actuator/health")
                  .forStatusCode(200)
          )
          .withImagePullPolicy(new AbstractImagePullPolicy() {
            @Override
            protected boolean shouldPullCached(DockerImageName imageName,
                ImageData localImageData) {
              return true;
            }
          })
          .withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("API-Service")));
    }



There are some things I want to point out here.

The withEnv method just sets a regular environment variable. These variables are used to configure API-Service. You have probably noticed that the RabbitMQ URL is amqp://rabbit:5672. That's because rabbit is the corresponding container's name in the internal network (we specified it as a network alias on the container's instantiation). That is what makes RabbitMQ reachable by the API-Service.

The waitingFor clause is more interesting. Testcontainers has to know somehow that a container is ready to accept connections. API-Service exposes the /actuator/health HTTP path, which returns a 200 status code once the instance is ready.

The withCreateContainerCmdModifier combined with the withExposedPorts method binds the container's internal port 8080 to apiExposedPort (specified by an environment variable before the E2E-tests start).

The withImagePullPolicy defines the rule for retrieving images from Docker Hub. By default, Testcontainers checks the image's presence locally. If it finds one, it does not pull anything from the remote server. That behaviour is suitable for testing particular images. But if you specify one with the latest tag, there is a chance that the library won't pull the most recent version. With the policy above, Testcontainers always pulls images from the remote Docker Hub.
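
By the way, the library also ships a built-in factory for exactly this policy, which could replace the anonymous class above; a minimal sketch:

// Equivalent to the anonymous AbstractImagePullPolicy above.
// Requires: import org.testcontainers.images.PullPolicy;
.withImagePullPolicy(PullPolicy.alwaysPull())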

Take a look at the Gain-Service container declaration below.



private GenericContainer<?> createGainServiceContainer(Environment environment) {
      final var gainServiceImage = environment.getProperty(
          "image.gain-service",
          String.class
      );
      final var apiQueue = environment.getProperty(
          "queue.api",
          String.class
      );
      final var gainQueue = environment.getProperty(
          "queue.gain",
          String.class
      );
      return new GenericContainer<>(gainServiceImage)
          .withNetwork(SHARED_NETWORK)
          .withNetworkAliases("gain-service")
          .withEnv("SPRING_RABBITMQ_ADDRESSES", "amqp://rabbit:5672")
          .withEnv("SPRING_REDIS_URL", "redis://redis:6379")
          .withEnv("QUEUE_INPUT_NAME", apiQueue)
          .withEnv("QUEUE_OUTPUT_NAME", gainQueue)
          .waitingFor(
              Wait.forHttp("/actuator/health")
                  .forStatusCode(200)
          )
          .withImagePullPolicy(new AbstractImagePullPolicy() {
            @Override
            protected boolean shouldPullCached(DockerImageName imageName,
                ImageData localImageData) {
              return true;
            }
          })
          .withLogConsumer(new Slf4jLogConsumer(LoggerFactory.getLogger("Gain-Service")));
    }



As you can see, the initialization is similar to API-Service. So, let's go further.

When API-Service and Gain-Service containers are ready, we can start them. Take a look at the code snippet below.



Startables.deepStart(API_SERVICE, GAIN_SERVICE).join();
setPropertiesForConnections(environment);



We have already discussed the idea of Startables.deepStart. Though setPropertiesForConnections requires some explanation. This method sets the URLs of the started containers as properties for the E2E test cases, so the test suites can reach the infrastructure and verify the results. Take a look at the implementation below.



private void setPropertiesForConnections(ConfigurableEnvironment environment) {
      environment.getPropertySources().addFirst(
          new MapPropertySource(
              "testcontainers",
              Map.of(
                  "spring.rabbitmq.addresses", RABBIT.getAmqpUrl(),
                  "spring.redis.url", format(
                      "redis://%s:%s",
                      REDIS.getHost(),
                      REDIS.getMappedPort(6379)
                  ),
                  "api.host", API_SERVICE.getHost()
              )
          )
      );
    }



Here we've specified the connections for RabbitMQ and Redis. Also, we stored the API-Service host so we can send HTTP requests to it.

OK, let's get to the test cases. We're writing a single E2E scenario. Take a look at the bullet list below.

  1. A client sends a message that contains both the msisdn and cookie values to the API-Service.
  2. The message should eventually be transmitted to RabbitMQ with no modifications.
  3. A client sends a message that contains only the cookie value to the API-Service.
  4. The enriched message with the determined msisdn value should eventually be transmitted to RabbitMQ.

Take a look at the test suite below.



class GainTest extends E2ESuite {

  // `rest` and `getGainQueueMessages()` come from the test helpers
  // imported in E2ESuite (TestRestFacade and TestRabbitListener).
  @Test
  void shouldGainMessage() {
    rest.post(
        "/api/message",
        Map.of(
            "some_key", "some_value",
            "cookie", "cookie-value",
            "msisdn", "msisdn-value"
        ),
        Void.class
    );
    await().atMost(FIVE_SECONDS)
        .until(() -> getGainQueueMessages().contains(Map.of(
            "some_key", "some_value",
            "cookie", "cookie-value",
            "msisdn", "msisdn-value"
        )));

    rest.post(
        "/api/message",
        Map.of(
            "another_key", "another_value",
            "cookie", "cookie-value"
        ),
        Void.class
    );
    await().atMost(FIVE_SECONDS)
        .until(() -> getGainQueueMessages().contains(Map.of(
            "another_key", "another_value",
            "cookie", "cookie-value",
            "msisdn", "msisdn-value"
        )));
  }
}



First of all, we send a message with both cookie and msisdn. Then we check that the message is transferred further as-is. The next step is to send another message with the msisdn omitted but the cookie value present. Finally, the message with the enriched msisdn value should eventually be pushed to RabbitMQ by Gain-Service.
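
The getGainQueueMessages() helper used above comes from the imported TestRabbitListener. As a rough, hypothetical sketch of its shape (the real implementation is in the repository; it assumes a JSON message converter is configured for the listener):

import java.util.List;
import java.util.Map;
import java.util.concurrent.CopyOnWriteArrayList;
import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.boot.test.context.TestComponent;

@TestComponent
public class TestRabbitListener {

  // Accumulates everything received from the gain queue during the test run.
  private final List<Map<String, Object>> messages = new CopyOnWriteArrayList<>();

  @RabbitListener(queues = "${queue.gain}")
  public void onMessage(Map<String, Object> message) {
    messages.add(message);
  }

  public List<Map<String, Object>> getGainQueueMessages() {
    return messages;
  }
}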

If you run the test locally, it may take a while. After all, it takes time to download the required images and start the corresponding containers. But the test should pass successfully.

Test result

Running in CI environment

Well, that all sounds great. But how do we run E2E tests during the CI pipeline?

Firstly, we should pack E2E tests as the Docker image. Take a look at the Dockerfile below.



FROM openjdk:17-alpine

WORKDIR /app

COPY . /app

CMD ["/app/gradlew", ":e2e-tests:test"]



So, the tests are compiled and run on the container's start.

The tests are not part of the compiled artefact (in this case, the .jar file). That's why we copy the whole directory with the source code.

Next comes the YAML configuration for the GitHub Actions pipeline. The resulting workflow is quite long. So, I'm showing it to you in small parts.

We're going to run the test cases on each pull request and each merge to the master branch.



name: Java CI

on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]



The whole pipeline consists of four jobs:

  1. build compiles all the services (API-Service and Gain-Service) and runs unit and integration tests.
  2. build-dev-images packs all the components (including the E2E-tests) as Docker images and pushes them to Docker Hub with the dev-$CI_BUILD_NUM tag.
  3. e2e-tests runs the E2E-tests against the images pushed by the build-dev-images job.
  4. build-prod-images packs all the components as Docker images and pushes them to Docker Hub with the latest tag. It runs only on the master branch after the e2e-tests job has passed.

Let's look at each job distinctly.

build

That's the most trivial one. Moreover, GitHub can generate it for you.



jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Set up JDK 17
        uses: actions/setup-java@v3
        with:
          java-version: '17'
          distribution: 'temurin'
      - name: Build with Gradle
        uses: gradle/gradle-build-action@0d13054264b0bb894ded474f08ebb30921341cee
        with:
          arguments: :gain-service:build :api-service:build



build-dev-images

This one is trickier. Firstly, we have to store DOCKERHUB_USERNAME and DOCKERHUB_TOKEN as repository secrets to be able to push the built Docker images. Then we push the artefacts. And finally, we have to forward the calculated dev tag to the next job. Take a look at the implementation below.



jobs:
  ...
  build-dev-images:
    needs:
      - build
    runs-on: ubuntu-latest
    outputs:
      image_tag: ${{ steps.env.outputs.image_tag }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v2
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Define images tags
        id: env
        run: |
          export IMAGE_TAG_ENV=dev-${{ github.run_number }}
          echo "IMAGE_TAG=$IMAGE_TAG_ENV" >> "$GITHUB_ENV"
          echo "::set-output name=image_tag::$IMAGE_TAG_ENV"
      - name: Build and push E2E-tests
        uses: docker/build-push-action@v3
        with:
          file: "./Dockerfile_e2e_tests"
          push: true
          tags: kirekov/e2e-tests:${{ env.IMAGE_TAG }}
      - name: Build and push API-Service
        uses: docker/build-push-action@v3
        with:
          file: "./Dockerfile_api_service"
          push: true
          tags: kirekov/api-service:${{ env.IMAGE_TAG }}
      - name: Build and push Gain-Service
        uses: docker/build-push-action@v3
        with:
          file: "./Dockerfile_gain_service"
          push: true
          tags: kirekov/gain-service:${{ env.IMAGE_TAG }}



I want you to pay attention to these lines of code.



jobs:
  ...
  build-dev-images:
    ...
    outputs:
      image_tag: ${{ steps.env.outputs.image_tag }}
    steps:
      ...
      - name: Define images tags
        id: env
        run: |
          export IMAGE_TAG_ENV=dev-${{ github.run_number }}
          echo "IMAGE_TAG=$IMAGE_TAG_ENV" >> "$GITHUB_ENV"
          echo "::set-output name=image_tag::$IMAGE_TAG_ENV"



The export IMAGE_TAG_ENV=dev-${{ github.run_number }} line puts the dev tag with the generated build number into the IMAGE_TAG_ENV environment variable.

The echo "IMAGE_TAG=$IMAGE_TAG_ENV" >> "$GITHUB_ENV" line makes the ${{ env.IMAGE_TAG }} variable available. It is used to specify the Docker tag when publishing the images in the next steps.

The echo "::set-output name=image_tag::$IMAGE_TAG_ENV" line saves the image_tag variable as a job output. So, the next job can reference it to run the specified version of the E2E-tests.
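
Note that GitHub has since deprecated the ::set-output workflow command. If you reproduce this pipeline today, the equivalent is writing the output to the $GITHUB_OUTPUT file instead: echo "image_tag=$IMAGE_TAG_ENV" >> "$GITHUB_OUTPUT".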

The pushing to Docker Hub itself is implemented with docker/build-push-action. Take a look at the code snippet below.



- name: Build and push E2E-tests
  uses: docker/build-push-action@v3
  with:
    file: "./Dockerfile_e2e_tests"
    push: true
    tags: kirekov/e2e-tests:${{ env.IMAGE_TAG }}



Building and pushing API-Service and Gain-Service is similar.

e2e-tests

And now it's time to run E2E tests. Take a look at the configuration below.



jobs:
  ...
  e2e-tests:
    needs:
      - build-dev-images
    runs-on: ubuntu-latest
    container:
      image: kirekov/e2e-tests:${{needs.build-dev-images.outputs.image_tag}}
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
    steps:
      - name: Run E2E-tests
        run: |
          cd /app
          ./gradlew :e2e-tests:test



The container.image attribute specifies the version of the E2E-tests to run. The ${{needs.build-dev-images.outputs.image_tag}} expression references the output exposed by the build-dev-images job in the previous step.

The volumes: /var/run/docker.sock:/var/run/docker.sock mapping is crucial, because the e2e-tests image uses the Testcontainers library to run other Docker containers. Mounting docker.sock as a volume implements the Docker Wormhole pattern. You can read more about it by this link.

build-prod-images

This job is almost the same as build-dev-images. You can find it in the repository; a rough sketch is shown below.
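
As a hypothetical sketch (the exact job lives in the repository; the if condition and the exact step list are my assumptions about its shape), it might look roughly like this:

jobs:
  ...
  build-prod-images:
    needs:
      - e2e-tests
    if: github.ref == 'refs/heads/master'
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Login to DockerHub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Build and push API-Service
        uses: docker/build-push-action@v3
        with:
          file: "./Dockerfile_api_service"
          push: true
          tags: kirekov/api-service:latest
      # Gain-Service and E2E-tests are pushed the same way with the latest tag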

Conclusion

As a result, we have configured the CI environment to run unit tests, integration tests, and E2E-tests for multiple business components (i.e. Gain-Service and API-Service) and external services (i.e. RabbitMQ, Redis). Testcontainers allows us to build comprehensive and solid pipelines. What's even more exciting is that you don't have to own dedicated servers for E2E testing. Pure CI pipelines are sufficient!

I hope you liked the E2E-testing approach I proposed. If you have any questions or suggestions, please leave your comments down below. Besides, you can always text me directly. I'll be happy to discuss the topic. Thanks for reading!

Resources

  1. Repository with the source code
  2. Apache Spark, Hive, and Spring Boot Testing Guide
  3. Spring Boot Testing — Testcontainers and Flyway
  4. Spring Boot Testing — Data and Services
  5. Spring Data JPA — Clear Tests
  6. A Deep Dive into Unit Testing
  7. Getting Integration Testing Right
  8. SLF4J
  9. Logback
  10. Kibana
  11. Patterns for running tests inside a Docker container
