Stephen Murphy

Automating Yocto Builds w/ Self-Hosted Runners

Links to follow along:
yocto-runner image repo
yocto-runner ghcr.io
gl-yocto-sandbox-software repo

Intro

Building a Linux kernel and rootfs on your local machine can take a LONG time to complete and can be taxing on your machine's CPU and RAM. We can speed the process up by using a machine with more cores, but a better machine alone is not always convenient either - you need to access the build machine (physically or remotely), pull the new source, kick off your build, and cross your fingers hoping nothing goes wrong while you're not present or looking 😩 Once the build completes, you may also want to store the output images and generated SDK for future use, which you would otherwise have to manage and distribute yourself.

The first thing that comes to mind to help solve these problems is GitHub Actions and runners. However, the hosted instances offered by GitHub are underpowered and lack the storage space we need for our large Linux builds (~50GB of scratch space). Which brings us to our solution - a self-hosted runner on our own 12-core build server, capable of tearing through Linux builds in about 30 minutes compared to the usual 3.5 hours on our laptops. 😎 We will also use the artifacts functionality to upload the final images and SDK to GitHub, where any developer can retrieve them.

Self-Hosted Runner setup

To make setting up our Yocto build machine on a server an easy process, we will create our own Docker image that has all of the packages and tools necessary to spin up a self-hosted runner as well as the tools necessary to build a Yocto project.

If you'd like to skip the step of building your own image, you can pull a yocto-runner image from our GitHub Container Registry!
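
A pull straight from the registry is a one-liner (this is the same image the docker run command later in this post uses):

sudo docker pull ghcr.io/glassboard-dev/yocto-runner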

The Dockerfile is pretty straightforward. We specify a base image of ubuntu:20.04 and the version of the self-hosted runner source we want: 2.285.1, the latest at the time of writing. Since we want this image to build on its own without input, we also set our DEBIAN_FRONTEND arg to noninteractive.

# base
FROM ubuntu:20.04

# set the github runner version
ARG RUNNER_VERSION="2.285.1"

# do a non interactive build
ARG DEBIAN_FRONTEND=noninteractive

We then update and upgrade all of the packages in our base install, and add a new user called docker.

# update the base packages and add a non-sudo user
RUN apt-get update -y && apt-get upgrade -y && useradd -m docker

Next we install all of the packages needed for a Self-Hosted runner as well as the packages required for a Yocto build.

# install python and the packages that your code depends on, along with jq so we can parse JSON
# add additional packages as necessary
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends \
    curl \
    jq \
    build-essential \
    libssl-dev \
    libffi-dev \
    python3 \
    python3-venv \
    python3-dev \
    python3-pip \
    gawk \
    wget \
    git-core \
    diffstat \
    unzip \
    texinfo \
    gcc-multilib \
    chrpath \
    socat \
    libsdl1.2-dev \
    xterm \
    cpio \
    file \
    locales

When building a Yocto project, you will also need to ensure that the locale environment variables are set appropriately.

# Update the locales to UTF-8
RUN locale-gen en_US.UTF-8 && update-locale LC_ALL=en_US.UTF-8 \
    LANG=en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8

Next we move into the home directory of our new docker user, create a folder to hold the actions-runner source, download the release tarball, and extract it into that folder.

# cd into the user directory, download and unzip the github actions runner
RUN cd /home/docker && mkdir actions-runner && cd actions-runner \
    && curl -O -L https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz \
    && tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
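
Each runner release publishes a SHA-256 checksum alongside the tarball. If you'd like to verify the download before extracting it, a minimal sketch (the <expected-sha256> placeholder must be filled in from the v2.285.1 release notes):

# note the two spaces between the hash and the filename
echo "<expected-sha256>  actions-runner-linux-x64-2.285.1.tar.gz" | sha256sum -c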

Next we install the additional dependencies needed by the actions-runner, copy our start.sh script (responsible for setting up and tearing down the runner; it must sit next to the Dockerfile in the build context) into the image, make the script executable, and finally set it as our entry point.

# install some additional dependencies
RUN chown -R docker ~docker && /home/docker/actions-runner/bin/installdependencies.sh

# copy over the start.sh script
COPY start.sh start.sh

# make the script executable
RUN chmod +x start.sh

# since the config and run script for actions are not allowed to be run by root,
# set the user to "docker" so all subsequent commands are run as the docker user
USER docker

# set the entrypoint to the start.sh script
ENTRYPOINT ["./start.sh"]

The last bit is the start.sh script. To make the image modular and usable by others, two environment variables will be passed in when spinning up a container: the name of the organization this runner will be made available to, and a Personal Access Token (PAT) needed to retrieve a registration token for your org. The PAT will need the repo, workflow and admin:org access scopes.

#!/bin/bash
ORGANIZATION=$ORGANIZATION
ACCESS_TOKEN=$ACCESS_TOKEN

REG_TOKEN=$(curl -sX POST -H "Authorization: token ${ACCESS_TOKEN}" https://api.github.com/orgs/${ORGANIZATION}/actions/runners/registration-token | jq .token --raw-output)
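
Before baking your PAT into a container, you can sanity-check it against the same endpoint from your own shell. A quick sketch, assuming ACCESS_TOKEN is exported and using a hypothetical org name of my-org:

# a valid PAT returns a JSON object with "token" and "expires_at" fields
curl -sX POST -H "Authorization: token ${ACCESS_TOKEN}" https://api.github.com/orgs/my-org/actions/runners/registration-token | jq .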

Next we move into our previously created actions-runner folder and begin the startup process. We indicate which org we are connecting to, provide our newly retrieved registration token, and add any labels specific to this machine. We will use these labels later in a repository workflow file to ensure the correct remote runner picks up our jobs. The cleanup function handles unregistering and removing the runner from the org in case the Docker container is stopped.

cd /home/docker/actions-runner

./config.sh --url https://github.com/${ORGANIZATION} --token ${REG_TOKEN} --labels yocto,x64,linux

cleanup() {
    echo "Removing runner..."
    ./config.sh remove --unattended --token ${REG_TOKEN}
}

trap 'cleanup; exit 130' INT
trap 'cleanup; exit 143' TERM

# launch the runner in the background and wait on it so the traps above can fire on INT/TERM
./run.sh & wait $!

Finally we can build our new Docker image.

sudo docker build --tag yocto-runner .
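
As a quick sanity check of the freshly built image, you can override the entrypoint and dump the locale settings in a throwaway container - you should see the en_US.UTF-8 values we baked in earlier:

sudo docker run --rm --entrypoint locale yocto-runner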

You're all set! You can now spin up a new container and it should appear in your organization's available action runners. You'll find it by going to your Organization settings > Actions > Runners.

Note: the run command below uses our glassboard-dev/yocto-runner image pulled from the GitHub Container Registry. You can instead replace it with the tag you gave your own image when building it yourself.

sudo docker run -d --env ORGANIZATION=<YOUR-GITHUB-ORGANIZATION> --env ACCESS_TOKEN=<YOUR-GITHUB-ACCESS-TOKEN> --name runner ghcr.io/glassboard-dev/yocto-runner
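
To confirm the container came up and registered itself, tail its logs (runner is the --name we gave above). You should see the runner authenticate and begin listening for jobs:

sudo docker logs -f runner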

!! Note !! You will need to adjust your repo access settings within the Default runner group by going to Actions > Runner groups. If you intend to use a self-hosted runner with a public repository, you should also verify your general settings for Actions - specifically, make sure that outside collaborators require approval before a workflow can be run on a pull request. This ensures that someone can't fork your repo, make a malicious change in the workflow file, and then have YOUR build machine execute the malicious config.

Utilizing a remote-runner

Now that we have our remote runner up and running, we can start using it for our Linux builds. We will use our gl-yocto-sandbox-software repository as an example. This repo contains the recipes necessary to build a Linux rootfs for the STM32MP1 dev kit from STMicroelectronics. You don't need to know much about how the recipes work; just know that we have two commands to execute when building the project.

To do so, we created a .github/workflows/ci.yml file in our repo with our workflow instructions. For the most part it will look very familiar to a workflow file found in any other GitHub Actions setup. We give our workflow a name and tell it to execute on pushes to the main and develop branches.

name: CI - Compile

on:
  push:
    branches: [ main, develop ]

The next portion is where things differ. In the runs-on argument, we pass an array of labels used to determine which runners can be used to build our app. self-hosted tells GitHub to look at our own runners, linux and x64 specify the OS and architecture the runner should be on, and finally the yocto label must match, which tells us the runner has the tools necessary for our Yocto build (remember, we added the yocto label in the start.sh script of our Docker image recipe). The checkout step just tells the runner to recursively check out the submodules within the repo.

jobs:
  build:
    runs-on: [self-hosted, linux, x64, yocto]
    steps:
    - uses: actions/checkout@v2
      with:
        submodules: recursive

Next we set up our build environment and begin building our image.

    - name: Build Yocto Rootfs
      run: |
        source poky/oe-init-build-env .
        bitbake stephendpmurphy-image
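
The intro mentioned distributing the SDK as well. If you want that, bitbake can generate a cross-toolchain installer with the standard populate_sdk task - a sketch of the extra commands (the installer ends up under tmp/deploy/sdk):

source poky/oe-init-build-env .
bitbake stephendpmurphy-image -c populate_sdk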

Next we tar and compress the output images so they can be uploaded as an artifact of the build.

    - name: Compress the output images
      run: |
        tar -czvf images.tar.gz -C ./tmp/deploy/images/stm32mp1 .
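
On the receiving end, a developer can unpack the archive straight into a working directory (the images folder name here is arbitrary):

mkdir -p images && tar -xzvf images.tar.gz -C ./images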

Lastly, we upload the artifacts to GitHub with a retention policy of 1 day. We keep the duration short since these builds will be made often and will likely be downloaded onto a developer's machine for testing as soon as they complete.

    - name: Upload Artifacts
      uses: actions/upload-artifact@v2
      with:
        name: images
        path: images.tar.gz
        retention-days: 1
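
Besides grabbing artifacts from the web UI, they can also be fetched from the command line with the GitHub CLI if you have gh installed and authenticated (images is the artifact name we set above; <run-id> is a placeholder you can look up with gh run list):

gh run download <run-id> --name images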

As you can see, we have success. We finished building our Linux image in 31 minutes, and we can download the output files from the artifacts section!
(Screenshot: build results)

Conclusion

Voila 🎉 You now have the ability to remotely and automatically build a Yocto Linux image. Our use case was getting builds completed faster and unattended. You may not have a hefty server to leverage for quicker builds, but you could still dole out jobs to a remote runner with adequate space and let it build overnight - removing the workload from your main machine.

I hope you found this useful, and please feel free to contribute and ask questions on our repositories! Happy Hacking! ⚡
