When we automate tasks in a Kubernetes cluster, we first think about managing Kubernetes resources. But we should not forget that we are not limited to them.
As in every post since the beginning of this series, we will talk about the Operator SDK, the Go framework for creating operators. And since an operator is just Go code, it can:
- read from a database
- call an API
- write a message to a queue
Indeed, everything that is possible in Go is possible in your operator.
Why is this interesting?
For a simple reason: you are not limited to your Kubernetes cluster to automate tasks and resource management.
Here is a simple example: your operator manages Prometheus and/or Thanos instances in your cluster, but a single Grafana instance, living outside the cluster, serves all of them. In that case, if you want to manage the datasources in Grafana, you must do it through its API.
This is where the power of an operator skyrockets: it can drive any API or service that manages resources. You don't need to implement anything special or migrate the service into your Kubernetes cluster.
Of course, this won't be useful for everyone, but consider the following example:
You work for a company that offers several services to its clients, including:
- metric collection with Prometheus
- a Grafana instance to monitor all the metrics

The issue is that each element is managed by a dedicated operator on a dedicated Kubernetes cluster. How can you automate the management of a customer's stack?
The answer: with APIs!
Whether through dedicated APIs in front of each cluster (built with the Python Kubernetes library, for example) or directly through the Kubernetes API, a single operator can manage the whole stack by driving the other operators with these APIs.
From there, the entire distributed stack can be automated!
I hope this helps. If you have any questions (there are no dumb questions) or if some points are unclear, don't hesitate to ask in the comments or to contact me directly on LinkedIn.