Jenkins has a feature that allows pipelines to collect JUnit XML report files and display the test results within the UI. The tricky part is figuring out how to run the tests inside the container, copy the results file back to the host, and make sure the temporary container is deleted at the end.
Why not mount the project directory from the host then run the tests?
One way to accomplish this would be to mount the project directory from the host into the container while testing, so you don't have to reach into the container and copy files back out. New files (like the JUnit XML report) would appear on the host directly, since the directory is shared with the container.
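A rough sketch of that approach might look like the following (the /app mount point and the rspec invocation are assumptions about the project layout, and webapp stands in for your compose service name):

# Hypothetical sketch: bind-mount the project into the container and run the
# tests there, so the generated rspec.xml lands directly in the host checkout.
docker-compose run --rm -v "$(pwd):/app" webapp \
  bundle exec rspec --format RspecJunitFormatter --out rspec.xml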
This works in some situations, but it's not testing the actual container image: the code under test comes from the host mount rather than from what was built into the image. Ideally, your tests should run against the container exactly as it was built, to give you confidence that it was built correctly.
The Solution
Here's what I came up with, where webapp is the name of the service that I'm testing. I'm only including the commands that the CI would run, and I'm excluding Jenkins syntax for the sake of portability.
docker-compose run --rm --detach webapp tail -f /dev/null
docker exec $(docker-compose ps -q webapp) bundle exec \
rspec --format RspecJunitFormatter --out rspec.xml
After that, the following commands should be executed in a cleanup step that always runs, regardless of whether the tests pass or fail:
docker cp $(docker-compose ps -q webapp):app/rspec.xml rspec.xml
docker-compose -f docker-compose.test.yml down
After this cleanup, the CI system will still need to be told where to find this XML file to display the results in the UI, but that is outside the scope of this post.
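Putting it all together, a minimal sketch of the whole CI step could look like this. It assumes the docker-compose.test.yml file from the down command applies to every command, and uses a trap so the copy and teardown run even when rspec fails:

#!/usr/bin/env bash
set -euo pipefail

cleanup() {
  # Always copy the report out and tear the stack down, even if the tests failed.
  docker cp "$(docker-compose -f docker-compose.test.yml ps -q webapp):app/rspec.xml" rspec.xml || true
  docker-compose -f docker-compose.test.yml down
}
trap cleanup EXIT

docker-compose -f docker-compose.test.yml run --rm --detach webapp tail -f /dev/null
docker exec "$(docker-compose -f docker-compose.test.yml ps -q webapp)" \
  bundle exec rspec --format RspecJunitFormatter --out rspec.xml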
How it works
The container is started in the background using docker-compose run with the --detach flag, and it will be deleted once its process stops because of the --rm flag.
We need a process to run in the container while we perform operations inside of it, so we run tail -f /dev/null. The command itself isn't important: it's just there to keep the container running.
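Any other long-lived command works just as well. For example (sleep infinity is a GNU coreutils feature, so this assumes the image isn't a minimal busybox-based one):

# Equivalent effect: any command that never exits keeps the container up.
docker-compose run --rm --detach webapp sleep infinity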
With the container started, we can run additional commands (like running rspec tests) inside of it using docker exec. The $(docker-compose ps -q webapp) subshell command gets the container ID of the currently running webapp service.
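If you prefer, you can capture that ID once and fail fast when the service isn't running; a small sketch of that variation:

# Hypothetical variation: store the container ID in a variable and bail out
# early if the webapp service isn't actually up.
CONTAINER_ID="$(docker-compose ps -q webapp)"
if [ -z "$CONTAINER_ID" ]; then
  echo "webapp container is not running" >&2
  exit 1
fi
docker exec "$CONTAINER_ID" bundle exec rspec --format RspecJunitFormatter --out rspec.xml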
In a separate cleanup step, the XML file created by the tests is copied out and then the container is stopped via docker-compose down.
While it might feel a little gross to keep the container running by tailing /dev/null, this approach has a few advantages. If you wanted to run more arbitrary code inside of the container (for example: linting), you wouldn't have to start and stop the container multiple times, which would make the CI pipeline take longer.
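For example, a lint pass could reuse the same running container (rubocop here is just an assumption about the project's tooling):

# Reuse the already-running container for additional checks, e.g. a linter.
docker exec $(docker-compose ps -q webapp) bundle exec rubocop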
I hope you found this post helpful. If you know a better way of doing this, please leave a comment below!