We all know that every serious project needs CI/CD, and I'm pretty sure it's not necessary to explain why. There are a lot of tools, platforms and solutions to choose from when deciding where to build your CI/CD, though. You could pick Jenkins, Travis, CircleCI, Bamboo or many others, but if you're building CI/CD for cloud-native applications running on Kubernetes, then it just makes sense to run cloud-native CI/CD alongside them, using an appropriate tool.
One such solution that allows you to run CI/CD natively on Kubernetes is Tekton, so in this article we will begin a series about building CI/CD with Tekton, starting with the introduction, installation and customization of Tekton to kick-start our journey to cloud-native CI/CD on Kubernetes.
TL;DR: All resources, scripts and files needed to kick-start your CI/CD with Tekton are available at https://github.com/MartinHeinz/tekton-kickstarter.
What is it? (and Why Even Bother?)
As the title and intro imply, Tekton is a cloud-native CI/CD tool. It was originally developed at Google and was known as Knative pipelines. It runs on Kubernetes as a set of custom resources (CRDs), such as Pipeline or Task, whose lifecycle is managed by Tekton's controller. The fact that it runs natively on Kubernetes makes it ideal for managing, building and deploying any applications and resources that are also deployed on Kubernetes.
This shows that it's suitable for managing Kubernetes workloads, but why not use other, more popular tools for this?
Commonly used CI/CD solutions such as Jenkins, Travis or Bamboo weren't built to run on Kubernetes, or lack proper integration with it. This makes it difficult and/or annoying to deploy, maintain and manage the CI/CD tool itself, as well as to deploy any Kubernetes-native applications with it. Tekton, on the other hand, can be deployed very easily as a Kubernetes operator alongside all the other containerized applications, and every Tekton pipeline is just another Kubernetes resource, managed the same way as good old Pods or Deployments.
This also makes Tekton work well with GitOps practices, as you can take all your pipelines and configurations and maintain them in git - which cannot be said about at least one of the above-mentioned tools (yes, I hate Jenkins with a burning passion). The same goes for resource consumption: considering that the whole Tekton deployment is just a couple of pods, very little memory and CPU is consumed while pipelines are not running, in comparison to other CI/CD tools.
With that said, it's pretty clear that if you're running all your workloads on Kubernetes, then it's very much advisable to use some Kubernetes-native tool for your CI/CD. Is Tekton the only option though? No, there are - of course - other tools you could use, one of them being JenkinsX, which is an opinionated way to do continuous delivery with Kubernetes, natively. It packs a lot of tools, which can make your life easier if you don't have strong preferences for alternative tooling, but it can also be very annoying if you want to customize your tech stack. JenkinsX uses Tekton in the background anyway, so you might as well learn to use Tekton and then decide whether you also want all the other components that JenkinsX provides.
Another option would be Spinnaker - a multi-cloud solution that has been around for a long time. It uses plugins to integrate with various providers, one of them being Kubernetes. It is, however, not a build engine - it does not provide tools to test your code, build your application images or push them to a registry; for those tasks you would still need some other CI tool.
Let's now take a closer look at what Tekton consists of. The core of Tekton consists of just a few CustomResourceDefinitions (CRDs): Tasks and Pipelines, which act as blueprints for TaskRuns and PipelineRuns. These four (plus a few others that are either about to be deprecated or aren't relevant right now) are enough to start running some pipelines and tasks.
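To make this a bit more concrete, here is a minimal sketch of a Task with a single step, plus a TaskRun that executes it. The names and the step content are made up purely for illustration, using the v1beta1 API that is current at the time of writing:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: hello
spec:
  steps:
  - name: say-hello
    image: alpine  # any image with a shell works here
    script: |
      #!/bin/sh
      echo "Hello from Tekton!"
---
apiVersion: tekton.dev/v1beta1
kind: TaskRun
metadata:
  name: hello-run
spec:
  taskRef:
    name: hello  # runs the Task defined above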
That however is usually not sufficient, considering that most setups require builds and deployments - and therefore also the pipelines - to be triggered by some event. That's why we also install Tekton Triggers, which provides additional resources, namely EventListener, TriggerBinding and TriggerTemplate. These three resources give us the means to listen for particular events - such as (GitHub) webhooks, CloudEvents or events sent by cron jobs - and fire up specific pipelines.
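Roughly speaking, an EventListener exposes an endpoint and ties the other two together: TriggerBindings extract values from incoming event payloads, and TriggerTemplates use those values to instantiate PipelineRuns. As a rough, illustrative sketch (the names and the referenced binding/template are hypothetical):

apiVersion: triggers.tekton.dev/v1alpha1
kind: EventListener
metadata:
  name: http-event-listener
spec:
  serviceAccountName: pipeline  # ServiceAccount allowed to create PipelineRuns
  triggers:
  - name: http-trigger
    bindings:
    - ref: http-binding         # TriggerBinding: pulls values out of the event payload
    template:
      ref: http-template        # TriggerTemplate: creates a PipelineRun from those values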
The last - and entirely optional - component is Tekton Dashboard, a very simple yet convenient GUI for inspecting all the CRDs, including tasks, pipelines and triggers. It also allows searching and filtering, which can be helpful when looking for TaskRuns and PipelineRuns, and you can even use it to create TaskRuns and PipelineRuns from existing Tasks and Pipelines.
All these pieces are managed by controller deployments and pods, which take care of the lifecycle of the above-mentioned CRDs.
Setting Things Up
Considering that Tekton consists of multiple components, installation can be done in various ways and can get a little complicated. Usually you will want to install at least Pipelines and Triggers; the most obvious way is to install them with raw Kubernetes manifests, but you can also take the simpler route and install the Tekton Operator from OperatorHub, which already includes all the parts. As a prerequisite (for any of the installation approaches) we will obviously need a cluster; here we will use KinD (Kubernetes in Docker) for local pipeline development. We will use the following custom config for KinD, as we need to deploy an Ingress controller and expose ports 80/443 to be able to reach the Tekton Triggers event listener.
# kind-tekton.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
And we can create the cluster with the following commands:
~ $ kind create cluster --name tekton --image=kindest/node:v1.20.2 --config=kind-tekton.yaml
~ $ kubectl cluster-info --context kind-tekton
~ $ kubectl config use-context kind-tekton
~ $ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
~ $ kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s
Now, for the actual deployment of Tekton Pipelines and Triggers. I mentioned installation via the Tekton Operator, which might seem like the fastest and best way to get up and running with everything preconfigured. However, at the time of writing, the operator lacks any real documentation, so you would need to dig around quite a lot to find any explanation of how things work - which, by itself, wouldn't be that big of a problem for me. The real problem is that the Operator in OperatorHub isn't up to date, and I couldn't find a current build/image, which renders it more or less useless. I'm sure this will change at some point as the Tekton Operator matures (so keep an eye on its repository), but until then, other installation options should be used.
If you happen to be running on OpenShift, one option you could use is the Red Hat Pipeline Operator, which is - again - a Kubernetes Operator, but in this case curated by Red Hat and customized for OpenShift. It can be installed with just a few clicks in the web console, so if you have access to an OpenShift cluster, you should give it a try. One downside is its slower release cycle, which forces you onto a not-quite-up-to-date version of Tekton.
If OpenShift is not an option, or you just want to run things on Kubernetes, then installation using raw manifests will work just fine, and this is how it's done:
~ $ kubectl apply -f https://storage.googleapis.com/tekton-releases/pipeline/latest/release.yaml # Deploy pipelines
~ $ kubectl apply -f https://storage.googleapis.com/tekton-releases/triggers/latest/release.yaml # Deploy triggers
~ $ kubectl get svc,deploy --namespace tekton-pipelines --selector=app.kubernetes.io/part-of=tekton-pipelines
NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                              AGE
service/tekton-pipelines-controller   ClusterIP   10.106.114.94   <none>        9090/TCP,8080/TCP                    2m13s
service/tekton-pipelines-webhook      ClusterIP   10.105.247.0    <none>        9090/TCP,8008/TCP,443/TCP,8080/TCP   2m13s

NAME                                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tekton-pipelines-controller   1/1     1            1           2m13s
deployment.apps/tekton-pipelines-webhook      1/1     1            1           2m13s
If you want to also include Tekton Dashboard in this installation, then you need to apply one more set of manifests:
~ $ kubectl apply -f https://storage.googleapis.com/tekton-releases/dashboard/latest/tekton-dashboard-release.yaml # Deploy dashboard
~ $ kubectl get svc,deploy -n tekton-pipelines --selector=app=tekton-dashboard
NAME                       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/tekton-dashboard   ClusterIP   10.111.144.87   <none>        9097/TCP   25s

NAME                               READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tekton-dashboard   1/1     1            1           25s
On top of that, we also need an extra Ingress to reach the Dashboard:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: tekton-pipelines
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: '/$2'
spec:
  rules:
  - http:
      paths:
      - path: /dashboard(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: tekton-dashboard
            port:
              number: 9097
The previously applied Dashboard resources are by default created in the tekton-pipelines namespace and include a Service named tekton-dashboard that uses port 9097, which are the values referenced in the Ingress above. This Ingress also has a rewrite rule to serve the dashboard at the /dashboard/... path instead of /. This is because we will want to use the default / (root) path for the webhook of our event listener (a topic for later).
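Assuming you saved the manifest above as, say, dashboard-ingress.yaml (the filename is arbitrary), applying it is a single command:

~ $ kubectl apply -f dashboard-ingress.yaml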
To verify that the Dashboard really is live and everything is running, you can browse to localhost/dashboard/ (assuming you're using KinD) and you should see the Dashboard UI, looking something like this (minus the actual pipeline):
If all this setup seems like way too much effort, then you can grab the tekton-kickstarter repository, just run make, and you will have all of the above ready in a minute.
With this deployed, we have all the (very) basic pieces up and running, so let's poke around in the CLI to see what we actually deployed with those few commands...
Exploring Custom Resources
If you followed the steps above (or just used the make target from the kick-start repository), then you should have quite a few new resources in your cluster now. All the components of Tekton are located in the tekton-pipelines namespace and should include the following:
~ $ kubectl get deploy,service,ingress,hpa -n tekton-pipelines
NAME                                                READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/tekton-dashboard                    1/1     1            1           2m24s
deployment.apps/tekton-pipelines-controller         1/1     1            1           6m57s
deployment.apps/tekton-pipelines-webhook            1/1     1            1           6m57s
deployment.apps/tekton-triggers-controller          1/1     1            1           6m56s
deployment.apps/tekton-triggers-core-interceptors   1/1     1            1           6m56s
deployment.apps/tekton-triggers-webhook             1/1     1            1           6m56s

NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                              AGE
service/tekton-dashboard                    ClusterIP   10.108.143.42    <none>        9097/TCP                             2m24s
service/tekton-pipelines-controller         ClusterIP   10.98.218.218    <none>        9090/TCP,8080/TCP                    6m57s
service/tekton-pipelines-webhook            ClusterIP   10.101.192.94    <none>        9090/TCP,8008/TCP,443/TCP,8080/TCP   6m57s
service/tekton-triggers-controller          ClusterIP   10.98.189.205    <none>        9090/TCP                             6m56s
service/tekton-triggers-core-interceptors   ClusterIP   10.110.47.172    <none>        80/TCP                               6m56s
service/tekton-triggers-webhook             ClusterIP   10.111.209.100   <none>        443/TCP                              6m56s

NAME                                  CLASS    HOSTS   ADDRESS     PORTS   AGE
ingress.networking.k8s.io/dashboard   <none>   *       localhost   80      2m24s

NAME                                                           REFERENCE                             TARGETS          MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/tekton-pipelines-webhook   Deployment/tekton-pipelines-webhook   <unknown>/100%   1         5         1          6m57s
These include all the deployments and services, as well as a HorizontalPodAutoscaler that can help with availability when the number of incoming requests grows. If full HA is required, then you can also look into the docs section that explains how to configure Tekton for HA.
Besides the resources shown above, you can also find event listeners and their resources in the default namespace. These could share a namespace with the core components, but splitting them like this allows you to keep pipelines and their webhooks divided based on the application/project they are used for:
~ $ kubectl get deploy,service,ingress,hpa -n default

NAME                                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/el-cron-listener         1/1     1            1           8m40s
deployment.apps/el-http-event-listener   1/1     1            1           8m40s

NAME                             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/el-cron-listener         ClusterIP   10.100.238.60   <none>        8080/TCP   8m40s
service/el-http-event-listener   ClusterIP   10.98.88.164    <none>        8080/TCP   8m40s

NAME                                      CLASS    HOSTS   ADDRESS     PORTS   AGE
ingress.networking.k8s.io/http-listener   <none>   *       localhost   80      8m40s
The installation of Tekton also brings along a couple of CRDs, which are used to manage all the tasks, pipelines and triggers:
~ $ kubectl get crd | grep tekton
clustertasks.tekton.dev                        2021-02-27T20:23:35Z
clustertriggerbindings.triggers.tekton.dev     2021-02-27T20:23:36Z
conditions.tekton.dev                          2021-02-27T20:23:35Z
eventlisteners.triggers.tekton.dev             2021-02-27T20:23:36Z
extensions.dashboard.tekton.dev                2021-02-27T20:28:08Z
pipelineresources.tekton.dev                   2021-02-27T20:23:35Z
pipelineruns.tekton.dev                        2021-02-27T20:23:35Z
pipelines.tekton.dev                           2021-02-27T20:23:35Z
runs.tekton.dev                                2021-02-27T20:23:35Z
taskruns.tekton.dev                            2021-02-27T20:23:35Z
tasks.tekton.dev                               2021-02-27T20:23:35Z
triggerbindings.triggers.tekton.dev            2021-02-27T20:23:36Z
triggers.triggers.tekton.dev                   2021-02-27T20:23:36Z
triggertemplates.triggers.tekton.dev           2021-02-27T20:23:36Z
You can use these CRDs to list and inspect Tasks and Pipelines with kubectl get or kubectl describe.
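For example (the Pipeline name deploy here is just an example - use whatever exists in your cluster):

~ $ kubectl get pipelines.tekton.dev -n default
~ $ kubectl describe pipelines.tekton.dev deploy -n default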
For every user of Kubernetes, the natural way of interacting with resources is kubectl, but Tekton also has its own CLI tool called tkn. You can download it from the release page. This CLI allows you to interact with Tekton resources without having to deal with the CRDs directly. As an example, you can list or inspect Pipelines:
~ $ tkn pipeline list
NAME              AGE            LAST RUN    STARTED          DURATION    STATUS
database-backup   12 hours ago   job-qxcwc   39 minutes ago   8 minutes   Failed
deploy            12 hours ago   ---         ---              ---         ---
~ $ tkn pipeline describe deploy
# ... Long and verbose output
Besides inspecting resources, you can also use it to start TaskRuns or PipelineRuns and subsequently read the logs without having to look up individual pods:
~ $ tkn task start send-to-webhook-slack
? Value for param `webhook-secret` of type `string`? slack-webhook
? Value for param `message` of type `string`? Hello There!
TaskRun started: send-to-webhook-slack-run-d5sxv
In order to track the TaskRun progress run:
tkn taskrun logs send-to-webhook-slack-run-d5sxv -f -n default
~ $ tkn taskrun logs send-to-webhook-slack-run-d5sxv -f -n default
[post] % Total % Received % Xferd Average Speed Time Time Time Current
[post] Dload Upload Total Spent Left Speed
100 23 0 0 100 23 0 111 --:--:-- --:--:-- --:--:-- 111
As you can see above, it even prompts you for parameters if you don't specify them initially!
One thing that annoys me to no end though, is the reversed order of arguments this CLI tool uses compared to kubectl. With kubectl the order is kubectl <VERB> <RESOURCE>, while with tkn it's tkn <RESOURCE> <VERB> - a minor nitpick about an otherwise very handy tool.
Customizing Everything
The installation that we've done puts in place reasonable default values for all the configs. These can be tweaked to better suit your needs and simplify things down the line. There are two ConfigMaps that we will take a look at:
The first of them is config-defaults, which - as the name implies - sets defaults for pipeline and task executions. These include things like the default timeout, ServiceAccount or node selector. This ConfigMap also initially includes an _example key, which lists all the possible (commented-out) options along with their descriptions, so when in doubt, just run kubectl get cm config-defaults -n tekton-pipelines -o=jsonpath='{ .data._example }'.
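For illustration, a customized config-defaults could look like the sketch below. The key names (default-service-account, default-timeout-minutes) are documented options; the values - a ServiceAccount called pipeline and a 30-minute timeout - are just example choices:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: tekton-pipelines
data:
  default-service-account: "pipeline"  # used by TaskRuns/PipelineRuns unless overridden
  default-timeout-minutes: "30"        # runs exceeding this are cancelled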
The other available ConfigMap is feature-flags, which allows you to switch some of Tekton's features on and off. You can mostly leave these at their default values. The only one I change is require-git-ssh-secret-known-hosts, which I prefer to have switched on, so that known_hosts must be included when authenticating to git over SSH. To view the current settings, you can run kubectl get cm feature-flags -n tekton-pipelines -o=jsonpath='{.data}' | jq .
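Flipping that single flag can be done with a one-liner (a sketch using a merge patch):

~ $ kubectl patch cm feature-flags -n tekton-pipelines \
      --type merge \
      -p '{"data":{"require-git-ssh-secret-known-hosts":"true"}}'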
If you want a complete, customized version of both of these configs, you can grab them in my repository here. If you used the make script in this repository to set up your Tekton environment, then these were already applied during installation.
Besides these global defaults, there are other configs you might want to set. The most important of these is the SSH key for authenticating to git. This gets configured in a Kubernetes Secret containing the SSH private key and a known_hosts file, both in base64 format:
apiVersion: v1
kind: Secret
metadata:
  name: ssh-key
  annotations:
    tekton.dev/git-0: github.com
type: kubernetes.io/ssh-auth
data:
  # cat ~/.ssh/id_rsa | base64
  ssh-privatekey: |-
    ...
  # cat ~/.ssh/known_hosts | base64
  known_hosts: |-
    ...
This Secret also includes Tekton annotation(s) (tekton.dev/git-*) to make Tekton aware that it should use the Secret when authenticating to the specified provider.
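If you prefer the imperative route, roughly the same Secret can be created like this (the file paths are examples, and kubectl handles the base64 encoding for you):

~ $ kubectl create secret generic ssh-key \
      --type=kubernetes.io/ssh-auth \
      --from-file=ssh-privatekey=$HOME/.ssh/id_rsa \
      --from-file=known_hosts=$HOME/.ssh/known_hosts
~ $ kubectl annotate secret ssh-key tekton.dev/git-0=github.com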
Another important Secret holds the registry credentials that allow you to push Docker images (or pull from a private registry). For this one we use the dockerconfigjson Secret type, and once again we specify a Tekton annotation, this time with the registry URL of your provider:
# kubectl create secret generic reg-cred \
#   --from-file=.dockerconfigjson=<DOCKER_CFG_PATH> \
#   --type=kubernetes.io/dockerconfigjson
apiVersion: v1
kind: Secret
metadata:
  name: reg-cred
  annotations:
    tekton.dev/docker-0: 'https://docker.io'
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ...
Both of these then need to be added to the ServiceAccount that your tasks and pipelines will be using. This should be the ServiceAccount that was previously specified in the config-defaults ConfigMap.
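Attaching them amounts to listing the Secrets on the ServiceAccount; a minimal sketch, assuming the ServiceAccount is named pipeline as in the earlier example:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: pipeline
secrets:
- name: ssh-key    # git-over-SSH credentials from above
- name: reg-cred   # registry credentials from above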
Speaking of ServiceAccounts - you will need to give yours enough permissions to interact with pipelines, and optionally add extra permissions based on what your pipelines will be doing. For example, if you want to run kubectl rollout, then your ServiceAccount will need a permission for that.
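As a sketch of what such an extra permission might look like (the names are illustrative; the verbs cover kubectl rollout restart and kubectl rollout status against Deployments):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pipeline-rollout
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch", "patch"]  # rollout restart patches, rollout status watches
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pipeline-rollout
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pipeline-rollout
subjects:
- kind: ServiceAccount
  name: pipeline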
Both the ServiceAccount and reasonable Roles and RoleBindings are available in the repository here.
Last but not least, I also recommend setting a LimitRange to make sure that each of your tasks gets enough CPU and memory without consuming way too much. The exact values depend on your use case, but some reasonable defaults are shown here.
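For reference, a LimitRange sketch with placeholder values - apply it to the namespace where your TaskRuns execute and tune the numbers to your workloads:

apiVersion: v1
kind: LimitRange
metadata:
  name: task-limit-range
  namespace: default
spec:
  limits:
  - type: Container
    default:           # limits applied when a step doesn't specify any
      cpu: 500m
      memory: 512Mi
    defaultRequest:    # requests applied when a step doesn't specify any
      cpu: 100m
      memory: 128Mi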
And with that, we have a fully prepared and customized installation of Tekton, with all the components and their configs in place, ready to run some pipelines!
Conclusion
Tekton is a versatile tool that can get quite complicated, so one short article definitely isn't enough to cover every piece of it in detail. This introduction should, however, give you enough to get up and running with all the configurations in place. In the following articles in this series, we will explore how to build and use your own custom Tasks and Pipelines, deal with event handling - both HTTP events and cron schedules - and much more. So, stay tuned for the next article, and in the meantime you can have a sneak peek at the files in the tekton-kickstarter repository, where all the resources from this and the following articles are already available. And in case you have feedback or suggestions, feel free to open an issue in the repository, or just star it if you like the content. 😉