Adil H

Building a Kubernetes Mutating Admission Webhook

A “magic” way to inject a file into Pod Containers

I originally posted this article on Medium

Have you ever noticed that when you create Pods in Kubernetes, the containers (usually) contain an authentication token file located at /var/run/secrets/kubernetes.io/serviceaccount/token? You can try it out by running the following command in your cluster:

$ kubectl run busybox --image=busybox --restart=Never -it --rm -- ls -l /var/run/secrets/kubernetes.io/serviceaccount/token
# output
/var/run/secrets/kubernetes.io/serviceaccount/token

Side note: you can actually opt out of this behaviour in Kubernetes versions 1.6+.
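For reference, here's a minimal sketch of what that opt-out looks like on a Pod spec:

```yaml
# Minimal sketch: disable the default service account token mount for this Pod
apiVersion: v1
kind: Pod
metadata:
  name: no-token
spec:
  automountServiceAccountToken: false
  containers:
    - name: busybox
      image: busybox
      command: ["sleep", "3600"]
```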

Let’s now imagine that we want to automatically add a magic “hello.txt” file to all (or a group of) our Pod containers’ filesystems, without explicitly attaching a volumeMount to each Pod spec. How can we achieve that?

To make things more fun, we’ll use a piece of ASCII art (generated via this tool) as our “hello.txt” file:

Our hello.txt file content

Enter Admission Webhooks

One way to achieve the goal stated in the last paragraph is to use Kubernetes Admission Webhooks. But what are those? Let’s look at the official documentation:

Admission webhooks are HTTP callbacks that receive admission requests and do something with them. You can define two types of admission webhooks, validating admission webhook and mutating admission webhook. Mutating admission webhooks are invoked first, and can modify objects sent to the API server to enforce custom defaults

The diagram below, borrowed from this Kubernetes.io blog post, can also help us understand the concept:

Admission Controller Phases

So the way we’ll add a magic “hello.txt” file to Pod containers in this article is by extending Kubernetes through a Mutating Admission Webhook, so that every time we send a request to the API server to create a Pod, the Pod spec is mutated before being saved to storage. Then when the Kubelet creates our Pod on a worker node, it should have the “hello.txt” file included, automagically. Let’s try it!

The setup

I’ve included all of the code and commands to run this project in this GitHub repository. You can use it to follow along.

The first thing you’ll need is an up-and-running Kubernetes cluster. You can use a Kind cluster, for example, which runs your cluster nodes in containers.

Next we define a ConfigMap which contains the “hello.txt” file content:
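The actual manifest is in the repo; it looks roughly like this (the ASCII art payload is abbreviated to a placeholder here):

```yaml
# Sketch of the ConfigMap holding the file we want to inject into Pods
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-configmap
data:
  hello.txt: |
    Hello from the mutating webhook!
    (the real file contains the ASCII art shown above)
```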

To build the webhook, we’ll use a pretty simple Go API server. The most important part of our webhook implementation code is the actual HTTP handler:
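The full handler is in the repository; here's a condensed, illustrative sketch of the approach (the function name, patch details and error handling are simplified, and it assumes the incoming Pod already has volumes, volumeMounts and labels):

```go
// Condensed sketch of a mutating webhook handler (illustrative, not the repo's exact code)
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"

	admissionv1 "k8s.io/api/admission/v1"
	corev1 "k8s.io/api/core/v1"
)

func handleMutate(w http.ResponseWriter, r *http.Request) {
	body, err := io.ReadAll(r.Body)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// 1. Deserialize the AdmissionReview sent by the API server
	var review admissionv1.AdmissionReview
	if err := json.Unmarshal(body, &review); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// 2. Read the Pod spec out of the admission request
	var pod corev1.Pod
	if err := json.Unmarshal(review.Request.Object.Raw, &pod); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// 3. Build a JSON Patch: add the ConfigMap-backed volume, mount it in every
	//    container, and add the bonus "hello-added" label.
	//    (Assumes volumes/volumeMounts/labels already exist on the Pod; a real
	//    handler would also cover the empty cases.)
	patch := []map[string]interface{}{
		{
			"op":   "add",
			"path": "/spec/volumes/-",
			"value": corev1.Volume{
				Name: "hello-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "hello-configmap"},
					},
				},
			},
		},
		{"op": "add", "path": "/metadata/labels/hello-added", "value": "OK"},
	}
	for i := range pod.Spec.Containers {
		patch = append(patch, map[string]interface{}{
			"op":    "add",
			"path":  fmt.Sprintf("/spec/containers/%d/volumeMounts/-", i),
			"value": corev1.VolumeMount{Name: "hello-volume", MountPath: "/etc/config"},
		})
	}
	patchBytes, _ := json.Marshal(patch)

	// 4. Build the AdmissionReview response carrying the patch
	patchType := admissionv1.PatchTypeJSONPatch
	review.Response = &admissionv1.AdmissionResponse{
		UID:       review.Request.UID,
		Allowed:   true,
		Patch:     patchBytes,
		PatchType: &patchType,
	}

	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(review)
}
```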

The code above, like a lot of Kubernetes code, uses the schema types from https://github.com/kubernetes/api and https://github.com/kubernetes/apimachinery. What the code actually does is:

  • Deserialize the AdmissionReview input JSON from the HTTP request
  • Read the Pod spec
  • Add a “hello-volume” Volume to our Pod, using our “hello-configmap” as a source
  • Mount the Volume into the Pod containers
  • Build the JSON Patch for the mutations, including the volumes change, the volumeMounts changes and, as a bonus, an extra “hello-added” label on the Pod
  • Build the JSON response, including our requested changes

I’ve also included a unit/functional test for the handler here, to make sure it does what it’s intended to do.
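As a rough illustration of how such a test can be written (the repo's test is the reference; the setup below is simplified and reuses the hypothetical handleMutate sketch above):

```go
// Rough sketch of a handler test (illustrative)
package main

import (
	"bytes"
	"encoding/json"
	"net/http"
	"net/http/httptest"
	"testing"

	admissionv1 "k8s.io/api/admission/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
)

func TestHandleMutate(t *testing.T) {
	// Build an AdmissionReview wrapping a minimal Pod
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-1", Labels: map[string]string{"hello": "true"}},
		Spec:       corev1.PodSpec{Containers: []corev1.Container{{Name: "busybox", Image: "busybox"}}},
	}
	rawPod, _ := json.Marshal(pod)
	review := admissionv1.AdmissionReview{
		Request: &admissionv1.AdmissionRequest{
			UID:    "test-uid",
			Object: runtime.RawExtension{Raw: rawPod},
		},
	}
	body, _ := json.Marshal(review)

	// Call the handler through httptest
	req := httptest.NewRequest(http.MethodPost, "/mutate", bytes.NewReader(body))
	rec := httptest.NewRecorder()
	handleMutate(rec, req)

	if rec.Code != http.StatusOK {
		t.Fatalf("expected 200, got %d", rec.Code)
	}
	var out admissionv1.AdmissionReview
	if err := json.NewDecoder(rec.Body).Decode(&out); err != nil {
		t.Fatal(err)
	}
	if out.Response == nil || !out.Response.Allowed || len(out.Response.Patch) == 0 {
		t.Fatal("expected an allowed response with a non-empty patch")
	}
}
```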

A small complication: TLS

Our webhook API server needs to serve the webhook over TLS, and as we want to deploy it inside our Kubernetes cluster, we’ll need to generate the certificate somehow. One way I’ve found to do it is via a little piece of software from New Relic that can handle the webhook certificate generation for us. I forked the repo to be able to make a couple of changes, and it can be deployed as a Job:
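For illustration only, here's roughly what deploying such a Job can look like; the image and arguments below are placeholders, and the real manifest in the repo points at the forked cert tool:

```yaml
# Sketch only: the real Job uses the forked New Relic cert tool image and its actual arguments
apiVersion: batch/v1
kind: Job
metadata:
  name: webhook-cert-setup
spec:
  template:
    spec:
      serviceAccountName: webhook-cert-sa
      restartPolicy: OnFailure
      containers:
        - name: webhook-cert-setup
          # Placeholder image: substitute the forked webhook cert generator
          image: example.org/webhook-cert-tool:latest
          args:
            # Illustrative: generate a serving cert for the webhook Service,
            # store it in a Secret, and patch the webhook's caBundle
            - "--service=hello-webhook-service"
            - "--secret=hello-webhook-certs"
            - "--namespace=default"
            - "--webhook=hello-webhook.leclouddev.com"
```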

More YAML

After building the container image for the webhook API server and pushing it to a container registry, we deploy it to the cluster using a Deployment:
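A sketch of that Deployment (the container port and the TLS secret name are illustrative; the real manifest lives in the repo):

```yaml
# Sketch of the webhook server Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-webhook-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-webhook
  template:
    metadata:
      labels:
        app: hello-webhook
    spec:
      containers:
        - name: hello-webhook
          image: quay.io/didil/hello-webhook:0.1.8  # set via "kustomize edit set image"
          ports:
            - containerPort: 8443          # illustrative port
          volumeMounts:
            - name: tls-certs
              mountPath: /etc/webhook/certs  # illustrative path
              readOnly: true
      volumes:
        - name: tls-certs
          secret:
            secretName: hello-webhook-certs  # hypothetical Secret created by the cert Job
```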

And a ClusterIP Service:
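Something along these lines (the target port is illustrative and must match the container port above):

```yaml
# Sketch of the ClusterIP Service in front of the webhook Deployment
apiVersion: v1
kind: Service
metadata:
  name: hello-webhook-service
spec:
  type: ClusterIP
  selector:
    app: hello-webhook
  ports:
    - port: 443
      targetPort: 8443  # illustrative
```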

Then we can create our MutatingWebhookConfiguration that registers our webhook with the Kubernetes API server:
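Roughly like this (the repo's manifest is the reference; here the label match is expressed as an objectSelector, and the caBundle is intentionally left out, more on that below):

```yaml
# Sketch of the MutatingWebhookConfiguration
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: hello-webhook.leclouddev.com
webhooks:
  - name: hello-webhook.leclouddev.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    objectSelector:
      matchLabels:
        hello: "true"
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: hello-webhook-service
        namespace: default
        path: /mutate
```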

In this last manifest, we ask Kubernetes to send every Pod creation request that matches the label “hello=true” to the hello-webhook-service Service (in the namespace where we deployed it), at the path “/mutate”. The match label is optional; I just wanted to include it in this example so that we have a way to circumvent the Mutating Webhook.

If you’re wondering why there is no “caBundle” key in the “clientConfig” section of this last manifest (as specified in the docs), it’s because the webhook-cert-setup Job we defined previously takes care of adding that key automatically.

The Webhook in Action

Our project is now ready to deploy to the cluster, with a little bit of Makefile and Kustomize trickery:

$ make k8s-deploy
# output
kustomize build k8s/other | kubectl apply -f -
configmap/hello-configmap created
service/hello-webhook-service created
mutatingwebhookconfiguration.admissionregistration.k8s.io/hello-webhook.leclouddev.com created
kustomize build k8s/csr | kubectl apply -f -
serviceaccount/webhook-cert-sa created
clusterrole.rbac.authorization.k8s.io/webhook-cert-cluster-role created
clusterrolebinding.rbac.authorization.k8s.io/webhook-cert-cluster-role-binding created
job.batch/webhook-cert-setup created
Waiting for cert creation ...
kubectl certificate approve hello-webhook-service.default
certificatesigningrequest.certificates.k8s.io/hello-webhook-service.default approved
kustomize build k8s/csr | kubectl apply -f -
serviceaccount/webhook-cert-sa unchanged
clusterrole.rbac.authorization.k8s.io/webhook-cert-cluster-role unchanged
clusterrolebinding.rbac.authorization.k8s.io/webhook-cert-cluster-role-binding unchanged
job.batch/webhook-cert-setup unchanged
Waiting for cert creation ...
kubectl certificate approve hello-webhook-service.default
certificatesigningrequest.certificates.k8s.io/hello-webhook-service.default approved
(cd k8s/deployment && \
 kustomize edit set image CONTAINER_IMAGE=quay.io/didil/hello-webhook:0.1.8)
kustomize build k8s/deployment | kubectl apply -f -
deployment.apps/hello-webhook-deployment created

Let’s see if our mutating webhook works at this point by running a simple busybox image, including our target match label “hello=true”:

$ kubectl run busybox-1 --image=busybox --restart=Never -l=app=busybox,hello=true -- sleep 3600

Let’s see if the file is present in the container filesystem:

$ kubectl exec busybox-1 -it -- sh -c "ls /etc/config/hello.txt"
# output
/etc/config/hello.txt

And let’s have a look at the content:

$ kubectl exec busybox-1 -it -- sh -c "cat /etc/config/hello.txt"

The file is in the Pod container!

Let’s now create a second pod without the special label “hello=true”:

$ kubectl run busybox-2 --image=busybox --restart=Never -l=app=busybox -- sleep 3600
# output
pod/busybox-2 created
$ kubectl exec busybox-2 -it -- sh -c "ls /etc/config/hello.txt"
# output
ls: /etc/config/hello.txt: No such file or directory

As expected, the file was only added to busybox-1, which matched our webhook’s label selector, and not to busybox-2.

Let’s check that our bonus label “hello-added” was added for busybox-1 but not for busybox-2:

$ kubectl get pod -l=app=busybox -L=hello-added
# output
NAME        READY   STATUS    RESTARTS   AGE    HELLO-ADDED
busybox-1   1/1     Running   0          3m7s   OK
busybox-2   1/1     Running   0          53s

Our Mutating Webhook works! 🎉🦄🎊

Conclusion

With Mutating Admission Webhooks, we have just explored our first way to extend Kubernetes. We didn’t mention Validating Admission Webhooks, but you should also check those out if you need advanced validation for your resources, beyond what the OpenAPI schemas allow.

I hope you’ll find this article useful for your Kubernetes journey, and please let me know if you have any questions or remarks. And make sure to stay tuned: for the next article we’ll be discussing another way to extend Kubernetes as we try to implement a Kubernetes Operator.
