Pierre Gradot

CMake on STM32 | Episode 8: build with Docker

In this episode, we will use Docker to set up and manage our build environment.

The embedded world doesn't always embrace the latest software development techniques. I acknowledged this in the previous episode about unit tests. Four years ago, when I started this series, I was totally unaware of Docker, for instance. Since then, I have used it for non-embedded projects and I think it could have solved some issues I encountered in the past. I would like to share my experience and demonstrate that it can perfectly fit in an MCU project.

In fact, everything I will say here applies to hosted C/C++ projects too!

Dependencies to build our project

Let's take a moment to look back and review all the tools required on our machine to build the project presented in this series:

  • cmake: obviously, this is the quintessence of this series.
  • make: in the first episode, we chose to use -G "MinGW Makefiles" as our generator. We could have chosen another generator, for instance -G Ninja, and would then have another dependency (namely, ninja).
  • arm-none-eabi-gcc: this is the toolchain to compile our embedded code.
  • gcc: we need another toolchain for the unit tests to run on the host computer.
  • git: CMake's FetchContent module needs Git to fetch Catch2 from GitHub.
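
As a reminder, the Catch2 fetching from the previous episode looks roughly like this (a sketch; the exact Git tag used in the series may differ):

include(FetchContent)

FetchContent_Declare(Catch2
    GIT_REPOSITORY https://github.com/catchorg/Catch2.git
    GIT_TAG        v3.4.0
)

FetchContent_MakeAvailable(Catch2)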

The list may seem relatively small for the moment, but it will surely grow in the future. We will probably want to:

  • write scripts in Python and need pip to install packages.
  • debug the unit tests with gdb.
  • analyze our code with clang-tidy and format it with clang-format.

Each developer on the project needs to install everything on their machine. Each time the team decides to upgrade a tool to a newer version or to use a new tool, everyone must update their setup. Not to mention the CI environment (if you have maintained a Jenkins machine once in your life, you know what I mean).

Should I say that everyone may have a different version of everything on their machine? Things could turn into a nightmare.

This is where Docker comes into play to help tackle this complexity. The purpose is to create a Docker image with everything the team needs to work. Instead of having to install many applications, we just need Docker (and obviously a VCS client, probably Git, to clone the central repository).

This article is not meant to be a crash course on Docker. If you don't have the basic knowledge, you should probably read a tutorial. This may be optional though, as you will probably get the concepts and the process I will describe anyway.

Dockerfile

We create a Dockerfile (yes, that's the name of the file) next to our CMakeLists.txt, at the root of our project. We use the latest image of Ubuntu as the base image, and we install the packages we need. Here is the content of the file:

FROM ubuntu:latest

RUN apt-get update && apt-get install -y \
                        build-essential \
                        cmake \
                        gcc-arm-none-eabi \
                        gdb \
                        git

The build-essential package includes gcc and make. Note that I have included gdb, which isn't yet mandatory, but CLion asks for it (and we will see that this Docker image can be used seamlessly within CLion).

And... that's it! We have everything we need! We will see later how we can improve this file by using explicit versions. For the moment, let's try to build our firmware and our unit tests!

Build the Docker image

This Dockerfile describes our image, and we have to build it now. We just need to execute:

docker build -t cmake-on-stm32 .

-t is short for --tag. It gives the image a name (in fact, a tag), and this will make our life easier in the next commands.

If we list the images, we see our image and its base image:

$ docker images
REPOSITORY        TAG       IMAGE ID       CREATED          SIZE
cmake-on-stm32    latest    1ed7f1257600   48 minutes ago   3.12GB
ubuntu            latest    e34e831650c1   5 weeks ago      77.9MB

The image can now be used to run containers, which are the actual execution environments for our Dockerized builds.

Using the container from the terminal

We have two options:

  1. Get a shell inside the container and work inside the container.
  2. Ask the container to run commands and work from the host.

Get a shell inside the container

For the first option, we just have to run the following command to get a shell inside the container:

docker run --rm -it -v $(pwd)/:/nucleo/ -w /nucleo/ cmake-on-stm32

Let's break down the options:

  • --rm tells Docker to remove the container when it exits.
  • -it is short for --interactive + --tty. This is the option that takes us straight inside the container.
  • -v mounts the current directory as /nucleo/ in the container. Indeed, the container is isolated from the host. If we want to share directories between the host and the container, volumes are the way to go.
  • -w is optional; it just sets the working directory so that the shell starts in a handy location.

To demonstrate the usage:

[Screenshot: shell prompts on the host and inside the container]

Notice how the prompt changes and how arm-none-eabi-gcc is found inside the container but not on the host. exit exits the shell and hence the container.

Now, we can just follow the process that has been described throughout this series: generate the project, build the firmware, build the tests, run the tests, etc. The commands are the same, we just execute them inside the container.
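
For instance, a typical session inside the container could look like this (a sketch: the build directory names are arbitrary, the toolchain file is the one from the earlier episodes, and the exact configuration options for the tests come from the unit-test episode):

# configure and build the firmware with the cross-compilation toolchain file
cmake -B build-firmware -DCMAKE_TOOLCHAIN_FILE=arm-none-eabi-gcc.cmake
cmake --build build-firmware

# configure, build and run the unit tests with the host toolchain
# (add the test-specific options from the unit-test episode here)
cmake -B build-tests
cmake --build build-tests
ctest --test-dir build-tests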

Execute commands from the host

We can remove the -it option and add more arguments to the command. The container will execute them. Here is an example with ls and arm-none-eabi-gcc --version:

[Screenshot: ls and arm-none-eabi-gcc --version executed from the host]
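
For reference, the commands behind that screenshot look roughly like this:

# list the project files through the container
docker run --rm -v $(pwd)/:/nucleo/ -w /nucleo/ cmake-on-stm32 ls

# check the version of the embedded toolchain shipped in the image
docker run --rm -v $(pwd)/:/nucleo/ -w /nucleo/ cmake-on-stm32 arm-none-eabi-gcc --version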

I tend to prefer the first technique, because the commands can get very long with this second technique. Furthermore, the host can't auto-complete the commands because it doesn't know what the container supports. Nevertheless, I guess in some cases, mixing both techniques can be interesting. It's up to you to define how you want to interact with the container.

Simplify the command-line with Docker Compose

The above commands may seem a bit "heavy", mostly because of the options to mount the volume and to set the working directory. It's likely that more options will be added, making the commands even longer.

We can take advantage of Docker Compose to simplify things a little. compose is a subcommand of docker, so there is no new software to install on the host machine. We can write a simple docker-compose.yml file to describe what we want:

services:
  cmake-on-stm32:
    build: .
    volumes:
      - .:/nucleo/
    working_dir: /nucleo

We can now use the docker compose run command (instead of just docker run) and omit the options that are described in docker-compose.yml:

[Screenshot: docker compose run]
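
In practice, the commands become as short as this (the service name is the one declared in docker-compose.yml above):

# interactive shell inside the container
docker compose run --rm cmake-on-stm32

# single command executed from the host
docker compose run --rm cmake-on-stm32 arm-none-eabi-gcc --version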

Note that the -it option is not required to use the container interactively with docker compose.

Using the container from CLion

In episode 2, we learned how to build our CMake project for STM32 with CLion by JetBrains. I have good news: CLion supports Docker toolchains out-of-the-box! I tried it and it worked perfectly within minutes:

  • I added a new Docker toolchain that uses cmake-on-stm32:latest as its image.
  • I created 2 profiles to build the project:
    1. For the unit tests: I used all default values, except for the toolchain, where I selected my Docker toolchain.
    2. For the embedded code: same except that I added -DCMAKE_TOOLCHAIN_FILE=arm-none-eabi-gcc.cmake in the CMake options field.

And voilà! We can then work within CLion without noticing that Docker is involved.

Improve user management

There is a little catch with the commands above: inside the container, the user is root. As a consequence, all files and directories created from the container are owned by root. For instance, when building in build-docker:

[Screenshot: file listing showing root as owner]

Most of the time, this is not a real issue, except when we want to delete them because we don't have the appropriate permissions. On Linux, it means we will need sudo to perform the deletion.

To get around this issue, we can add the --user option to specify the user and group we want when calling docker run. For instance on Linux, --user $(id -u):$(id -g) will use our user and group. We will then be the owner of the generated files and directories:

[Screenshot: file listing showing pierre as owner]
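
On Linux, the full command could look like this (a sketch based on the run command shown earlier):

# run the container with our own user and group so that generated files belong to us
docker run --rm -it --user $(id -u):$(id -g) -v $(pwd)/:/nucleo/ -w /nucleo/ cmake-on-stm32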

The user and group can also be set in Docker Compose. Read "How to set user and group in Docker Compose" by Giovanni De Mizio to learn how to do this.
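
As a minimal illustration (the ids are hardcoded here; the article above explains how to pass them properly):

services:
  cmake-on-stm32:
    build: .
    # use our own user and group ids instead of root
    user: "1000:1000"
    volumes:
      - .:/nucleo/
    working_dir: /nucleo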

Note that CLion does this in the background, so the ownership of the files is correct without any further options.

Pin versions of dependencies

In our Dockerfile, we have simply chosen the latest Ubuntu version as the base image (with FROM ubuntu:latest) and have blissfully installed the packages. This is cool when testing things but there is one major drawback: we have absolutely no idea what versions of the tools we will get!

Pin the base image

As of February 29th 2024 (as I'm writing these lines), Docker Hub says that ubuntu:latest is Jammy Jellyfish (aka 22.04, the current LTS). In April 2024, Noble Numbat (aka 24.04, the next LTS release) will be out and will probably become the new latest version. It will come (almost for sure) with new versions of the tools. Who knows if our code will still compile under Noble Numbat? We really don't want our project to suddenly stop building because a new release of Ubuntu is out.

The obvious first counter-measure is to choose a version of Ubuntu explicitly. We just have to replace ubuntu:latest with ubuntu:jammy or any other version. Pinning the base image will also (normally) pin the versions of the tools, because each release of Ubuntu is quite conservative about the versions of the programs in its packages (Debian is even more so). The Ubuntu Packages Search website tells us the content of the packages we can install with apt-get. Thanks to this knowledge, we can decide which Ubuntu version is good for us, whether we want to upgrade to the next version or not, etc. For instance with gcc:

Ubuntu             gcc    gcc-arm-none-eabi
Jammy Jellyfish    11.2   10.3
Lunar Lobster      12.2   12.2
Mantic Minotaur    13.2   12.2

Select more specific packages

On Mantic Minotaur (23.10, the latest version of Ubuntu as I write these lines), the gcc-arm-none-eabi package installs GCC 12.2 (for ARM), while the gcc package installs GCC 13.2 (for x86-64). We would like our hosted and embedded toolchains to use the same version of GCC (at least the same major version). There is a solution: the gcc-12 package installs GCC 12.3, which is a better match with the embedded toolchain. Note that there are packages from gcc-9 up to gcc-13 on Mantic Minotaur.

A better Dockerfile would probably be:

FROM ubuntu:mantic

RUN apt-get update && apt-get install -y \
                        make \
                        cmake \
                        gcc-12 \
                        gcc-arm-none-eabi \
                        gdb \
                        git

RUN ln -s -f gcc-12 /usr/bin/gcc

I should probably explain the link creation. The cmake package has gcc in its list of recommended packages. Because we haven't used the option --no-install-recommends, it will install GCC 13 and create a link /usr/bin/gcc that points to /usr/bin/gcc-13 (which is itself a link). The last line of the Dockerfile is there to overwrite this link. Oh! By the way: the gcc-12 package doesn't create the link anyway, so we really need this line.
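
If we preferred not to pull GCC 13 at all, a possible alternative (just a sketch) is to disable recommended packages altogether; --no-install-recommends then applies to every package on the line, so the recommendations we actually rely on (libc6-dev for the host compiler, ca-certificates for Git over HTTPS) have to be listed explicitly:

# alternative: skip recommended packages, list the required ones explicitly
RUN apt-get update && apt-get install -y --no-install-recommends \
                        make \
                        cmake \
                        gcc-12 \
                        libc6-dev \
                        gcc-arm-none-eabi \
                        gdb \
                        git \
                        ca-certificates

# gcc-12 still doesn't create /usr/bin/gcc, so the link is still needed
RUN ln -s -f gcc-12 /usr/bin/gcc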

Select exact versions of the packages

As I said, each release of Ubuntu is quite conservative. It's very unlikely that the version of GCC in the gcc-arm-none-eabi package in the repositories will change. Unlikely doesn't mean impossible (it could get a patch version bump, for instance). In case we really want to be sure to stick to an exact version, we can add the version of the package in the apt-get command and use gcc-arm-none-eabi=15:12.2.rel1-1 instead of just gcc-arm-none-eabi (with Mantic Minotaur). The version string can be found in the title of the package's page for each release of Ubuntu.

Manual installations

It's possible that the Ubuntu packages won't provide the versions we want. We can then fall back to manual installations. Let's say we want the latest release of CMake (which is 3.29.0-rc2 at the moment): we can script the installation from GitHub in the Dockerfile.

Here is our new Dockerfile:

FROM ubuntu:mantic

RUN apt-get update && apt-get install -y \
                        make \
                        gcc-12 \
                        gcc-arm-none-eabi=15:12.2.rel1-1 \
                        gdb \
                        git \
                        wget

RUN ln -s -f gcc-12 /usr/bin/gcc

RUN wget https://github.com/Kitware/CMake/releases/download/v3.29.0-rc2/cmake-3.29.0-rc2-linux-x86_64.sh
RUN chmod u+x cmake-3.29.0-rc2-linux-x86_64.sh
RUN ./cmake-3.29.0-rc2-linux-x86_64.sh --skip-license --include-subdir
ENV PATH="${PATH}:/cmake-3.29.0-rc2-linux-x86_64/bin"

We update the PATH environment variable so that cmake (and its associated binaries, such as ctest) is available without typing its full path.
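
We can quickly check that the manually installed version is the one picked up (assuming the image has been rebuilt with this new Dockerfile):

# should report 3.29.0-rc2, not a version from the Ubuntu repositories
docker compose run --rm cmake-on-stm32 cmake --version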

Assessing our solution

In my opinion, this Docker-centered approach has a lot of benefits:

  • It's much easier to set up our computer when we join a new project. Install Git (and SVN or whatever we need), install Docker, clone the repository, and we're good to build!
  • Everyone in the team has the same environment (including Jenkins or whatever CI service we use), assuming that everyone rebuilds the Docker image regularly.
  • It is very easy to try a new version of a compiler (or any other tool) without smashing our current environment. We create a new branch in our VCS, we update the Dockerfile to fetch the new compiler, and we can work. We can then seamlessly switch back to our original branch and have our "normal" compiler back.
  • On the same computer, we can work on several projects that use different versions of the same tool, without conflicts (e.g. project A uses GCC 12 while project B uses GCC 13), thanks to the isolation that Docker offers.
  • It's easier to rebuild old versions of the project. When we check out an old commit, we can rebuild a Docker image to get the tools that the project required back then, and then rebuild the project with it.

Of course, there is no magic in real life and every solution comes with some downsides:

  • We now have yet another tool in our environment. We have to learn it, we will run into Docker-specific issues, we have to maintain Docker code.
  • It may be difficult to integrate some tools in the image if their installations aren't scriptable.
  • We can't force people to regularly rebuild their local Docker image, so there may still be inconsistency among the team's machines.
  • We never know if our docker build command will work tomorrow. We rely on the Docker registry to download the Ubuntu images, on the Ubuntu repositories to download the compilers, and so on. It's not really a daily issue for the HEAD of an active repository. However, we have no guarantee that a given Dockerfile will still build in X months. That's why I have written "it's easier to rebuild old versions of the project", instead of "you can rebuild any old version of the project".

In this article, I have shown one way to use Docker. The repository contains a Dockerfile to create the build environment from scratch, making the repository, in a way, able to build itself. This is a nice approach because the repository is self-sufficient.

We can proceed differently to mitigate the last two downsides. For instance, we can manage the creation of the images separately from the project(s) that will use them.

Treat Docker images as a separate project

Creating build environments can be considered a separate project.

On the one hand, we create images with revisions. Instead of always tagging the images as "latest" with docker build -t cmake-on-stm32 ., we use explicit version numbers with docker build -t cmake-on-stm32:0.1 ., docker build -t cmake-on-stm32:0.2 ., and so on. Each version of the image corresponds to a set of tools at known versions. If we need a new version of CMake (for instance), we update the Dockerfile and release image 0.3.

On the other hand, we fetch the exact Docker image we need in our STM32-based project. We can create build scripts to wrap the commands shown in this article to force people to use the correct Docker image at any revision of the repository.
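
For instance, a small wrapper script committed next to the sources could pin the image version (a sketch; the script name, image name and version are placeholders):

#!/bin/sh
# build.sh: run any command inside the pinned build-environment image
IMAGE="cmake-on-stm32:0.2"
docker run --rm --user "$(id -u):$(id -g)" -v "$(pwd)":/nucleo/ -w /nucleo/ "$IMAGE" "$@"

Invoking ./build.sh cmake --build build-firmware then guarantees that everyone uses the same image, whatever is installed on their machine.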

Sharing the Docker images can be manual, using docker image save + docker image load + a shared drive. The images can also be pushed to Docker Hub or a self-hosted registry with docker image push.
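
The corresponding commands look like this (the registry name is a placeholder):

# manual sharing through a tarball on a shared drive
docker image save cmake-on-stm32:0.2 -o cmake-on-stm32-0.2.tar
docker image load -i cmake-on-stm32-0.2.tar

# publication to Docker Hub or a self-hosted registry
docker image tag cmake-on-stm32:0.2 registry.example.com/cmake-on-stm32:0.2
docker image push registry.example.com/cmake-on-stm32:0.2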

Conclusion

In this episode, we have seen how we can leverage Docker to seamlessly set up and manage our build environment. Instead of having to install several applications on our computer, we now only need Docker to build the project. We have also seen that there are several ways to include Docker in the process. It's up to you to find how you want to use it in your project. I really wish I could go back in time and try Docker on some of my previous projects! Tell me in the comments how you use, or would like to use, Docker for your projects!
