Puru

GitHub Actions Self-Hosted Runner on Kubernetes

Deploy a scalable GitHub Actions self-hosted runner on Kubernetes using Helm.


Why Self-Hosted Runner?

Self-hosted runners are ideal for use cases where you need to run workflows in a highly customizable environment, with more granular control over hardware, security, operating system, and software tools than GitHub-hosted runners provide.

Self-hosted runners can be physical, virtual, containerized, on-premises, or in the cloud. In this guide, we’ll deploy one as a container in a Kubernetes cluster on AWS.


Deploy Kubernetes Cluster (optional)


If you already have an existing K8s cluster, feel free to skip this step.

In this guide, we’ll deploy a managed K8s cluster on AWS using eksctl, the official CLI for Amazon EKS. It’s written in Go, uses CloudFormation under the hood, and is by far the easiest way to spin up a managed Kubernetes cluster on AWS. See Installing eksctl.

Create Kubernetes Cluster

Our cluster will consist of a single worker node (c6g.large: 2 vCPU, 4 GiB RAM) in the us-east-1 region with a dedicated VPC. Feel free to modify the cluster config as per your requirements. See more example configs.

Save the following cluster config as cluster-config.yaml
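The embedded config doesn’t render here, so as a sketch, a minimal eksctl config matching the specs above might look like the following. The cluster name `github-actions` and nodegroup name `ng-1` are taken from the eksctl log output later in this post; everything else is an assumed default.

```yaml
# cluster-config.yaml — minimal sketch of a single-node EKS cluster config
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: github-actions   # cluster name, as seen in the eksctl output below
  region: us-east-1

nodeGroups:
  - name: ng-1                 # nodegroup name, as seen in the teardown logs
    instanceType: c6g.large    # 2 vCPU, 4 GiB RAM (ARM-based Graviton2)
    desiredCapacity: 1         # single worker node
```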

And run the following command using the above cluster config:



```shell
$ eksctl create cluster -f cluster-config.yaml
```

NOTE: Cluster creation may take up to 15-20 minutes.



```
2021-06-22 19:14:19 [✔]  EKS cluster "github-actions" in "us-east-1" region is ready
```

Once the cluster is created and ready, you’ll find that eksctl automatically added the cluster credentials to your kubeconfig at $HOME/.kube/config.

Now, verify the cluster connectivity, access, and node status:



```shell
$ kubectl get nodes
$ kubectl get namespaces
```

Deploy Action Runner Controller using Helm

Helm is a package manager for Kubernetes that makes it easy to install and manage Kubernetes applications. See Installing Helm.


What is actions-runner-controller?

actions-runner-controller operates self-hosted runners for GitHub Actions on a Kubernetes cluster. It provides CRDs (Custom Resource Definitions) such as Runner, RunnerDeployment, and HorizontalRunnerAutoscaler, which allow us to easily deploy scalable self-hosted runners on Kubernetes.

Installation of cert-manager

cert-manager is a required component used by actions-runner-controller for certificate management of its admission webhook.



```shell
# Add repository
$ helm repo add jetstack https://charts.jetstack.io
$ helm repo update

# Install chart
$ helm install --wait --create-namespace --namespace cert-manager cert-manager jetstack/cert-manager --version v1.3.0 --set installCRDs=true

# Verify installation
$ kubectl --namespace cert-manager get all
```

GitHub Personal Access Token

Next, we need to create a Personal Access Token (PAT) which will be used by the controller to register self-hosted runners to GitHub Actions.

  1. Log in to your GitHub account and navigate to https://github.com/settings/tokens
  2. Click the Generate new token button.
  3. Select the repo (Full control) scope.
  4. Click Generate token.


Now, store the access token in a YAML file called custom-values.yaml as follows:



```yaml
authSecret:
  github_token: REPLACE_YOUR_TOKEN_HERE
```

Installation of actions-runner-controller

We’re now ready to install the controller using Helm.



```shell
# Add repository
$ helm repo add actions-runner-controller https://actions-runner-controller.github.io/actions-runner-controller

# Install chart
$ helm install -f custom-values.yaml --wait --namespace actions-runner-system --create-namespace actions-runner-controller actions-runner-controller/actions-runner-controller

# Verify installation
$ kubectl --namespace actions-runner-system get all
```

Deploy Self-Hosted Runner

We now have everything in place to deploy a self-hosted runner tied to a specific repository.

First, create a namespace to host the self-hosted runner resources:



```shell
$ kubectl create namespace self-hosted-runners
```

Next, save the following K8s manifest file as self-hosted-runner.yaml, and modify the following:

  • Replace tuladhar/self-hosted-runner with your own repository.
  • Adjust the minReplicas and maxReplicas as required.
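The embedded manifest isn’t shown above, so here is a sketch of what it might contain, built from the RunnerDeployment and HorizontalRunnerAutoscaler CRDs mentioned earlier. The resource names and the replica bounds are assumptions; replace `tuladhar/self-hosted-runner` with your own repository as noted.

```yaml
# self-hosted-runner.yaml — sketch of a repository-scoped runner deployment
apiVersion: actions.summerwind.dev/v1alpha1
kind: RunnerDeployment
metadata:
  name: self-hosted-runner
spec:
  template:
    spec:
      repository: tuladhar/self-hosted-runner   # replace with your repository
---
apiVersion: actions.summerwind.dev/v1alpha1
kind: HorizontalRunnerAutoscaler
metadata:
  name: self-hosted-runner-autoscaler
spec:
  scaleTargetRef:
    name: self-hosted-runner   # the RunnerDeployment above
  minReplicas: 1               # adjust as required
  maxReplicas: 3               # adjust as required
  metrics:
    - type: TotalNumberOfQueuedAndInProgressWorkflowRuns
      repositoryNames:
        - tuladhar/self-hosted-runner
```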

And apply the Kubernetes manifest:



```shell
$ kubectl --namespace self-hosted-runners apply -f self-hosted-runner.yaml
```

Verify that the runner is deployed and in the Ready state:



```shell
$ kubectl --namespace self-hosted-runners get runner
```

Now, navigate to your repository’s Settings > Actions > Runners to view the registered runner.


🚀 We’re now ready to give our self-hosted runner a try!


Create a workflow to test your self-hosted runner

Save and commit the following sample GitHub Actions workflow as .github/workflows/hello-world.yml in the repository where the self-hosted runner is registered.

NOTE: The important part of this workflow is runs-on: self-hosted
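The workflow file itself isn’t shown above, so here is a minimal sketch of what hello-world.yml could look like. The workflow, job, and step names are assumptions; the `workflow_dispatch` trigger enables the manual Run workflow button used below, and the essential line is `runs-on: self-hosted`.

```yaml
# .github/workflows/hello-world.yml — minimal sketch of a test workflow
name: Hello World

on:
  workflow_dispatch:   # allows manual triggering from the Actions tab

jobs:
  hello:
    runs-on: self-hosted   # the important part: target our self-hosted runner
    steps:
      - name: Say hello
        run: echo "Hello from a self-hosted runner!"
```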

Now, navigate to the Actions tab, where you’ll see the Hello World workflow listed. Let’s trigger it manually by clicking Run workflow.


… and voila! 🎉 The workflow ran successfully on our self-hosted runner, and we can see all the steps and logs.



Clean-up Kubernetes Cluster (optional)

Once you’re done exploring the self-hosted runner, you can easily destroy the cluster and its associated resources, such as the VPC.



```shell
$ eksctl delete cluster -f cluster-config.yaml
```

Output:



```
2021-06-22 20:16:02 [ℹ]  eksctl version 0.54.0
2021-06-22 20:16:02 [ℹ]  using region us-east-1
2021-06-22 20:16:02 [ℹ]  deleting EKS cluster "github-actions"
2021-06-22 20:16:06 [ℹ]  deleted 0 Fargate profile(s)
2021-06-22 20:16:10 [✔]  kubeconfig has been updated
2021-06-22 20:16:10 [ℹ]  cleaning up AWS load balancers created by Kubernetes objects of Kind Service or Ingress
2021-06-22 20:16:23 [ℹ]  2 sequential tasks: { delete nodegroup "ng-1", delete cluster control plane "github-actions" [async] }
2021-06-22 20:16:23 [ℹ]  will delete stack "eksctl-github-actions-nodegroup-ng-1"
2021-06-22 20:16:23 [ℹ]  waiting for stack "eksctl-github-actions-
2021-06-22 20:18:21 [ℹ]  will delete stack "eksctl-github-actions-cluster"
2021-06-22 20:18:22 [✔]  all cluster resources were deleted
```

Finally, remove the dangling offline runner from the repository’s runner settings as well.



