Microsoft Azure

Manage Azure Event Hubs with Azure Service Operator on Kubernetes

Abhishek Gupta ・ 8 min read

Azure Service Operator is an open source project to help you provision and manage Azure services using Kubernetes. Developers can use it to provision Azure services from any environment, be it Azure, any other cloud provider or on-premises - Kubernetes is the only common denominator!

It can also be included as a part of CI/CD pipelines to create, use and tear down Azure resources on-demand. Behind the scenes, all the heavy lifting is taken care of by a combination of Custom Resource Definitions which define Azure resources and the corresponding Kubernetes Operator(s) which ensure that the state defined by the Custom Resource Definition is reflected in Azure as well.

Read more in the recent announcement here - https://cloudblogs.microsoft.com/opensource/2020/06/25/announcing-azure-service-operator-kubernetes/

In this blog post, you will:

  • Get a high-level overview of Azure Service Operator (sometimes referred to as ASO in this post)
  • Learn how to set it up and use it to provision Azure Event Hubs
  • Deploy apps to Kubernetes that use the provisioned Event Hubs instance

All the artefacts are available on this GitHub repo https://github.com/abhirockzz/eventhubs-using-aso-on-k8s

Getting started....

Azure Service Operator supports many Azure services, including databases (Azure Cosmos DB, PostgreSQL, MySQL, Azure SQL, etc.), core infrastructure components (Virtual Machines, VM Scale Sets, Virtual Networks, etc.), and more.

It also supports Azure Event Hubs, which is a fully managed data streaming platform and event ingestion service with support for Apache Kafka and other tools in the Kafka ecosystem. With Azure Service Operator you can provision and manage Azure Event Hubs namespaces, Event Hubs, and consumer groups.

So, let's dive in without further ado! Before we do that, please note what you will need to try out this tutorial:

Start by getting an Azure account if you don't have one already - you can get one for free! Please make sure you have kubectl and Helm 3 installed as well.

Although the steps outlined in this blog should work with any Kubernetes cluster (including minikube, etc.), I used Azure Kubernetes Service (AKS). You can set up a cluster using the Azure CLI, the Azure portal, or even an ARM template. Once that's done, simply configure kubectl to point to it:

az aks get-credentials --resource-group <CLUSTER_RESOURCE_GROUP> --name <CLUSTER_NAME>

Ok, you're now ready to...

... Install Azure Service Operator

Nothing too fancy about it - just follow the steps to install it using Helm.

Start by installing cert-manager:

kubectl create namespace cert-manager

kubectl label namespace cert-manager cert-manager.io/disable-validation=true

kubectl apply --validate=false -f https://github.com/jetstack/cert-manager/releases/download/v0.12.0/cert-manager.yaml

# make sure cert-manager is up and running
kubectl rollout status -n cert-manager deploy/cert-manager-webhook


Since the operator will create resources in Azure, we need to authorize it to do so by providing the appropriate credentials. Currently, you can use a Managed Identity or a Service Principal.

I will be using a Service Principal, so let's start by creating one (with the Azure CLI) using the az ad sp create-for-rbac command:

az ad sp create-for-rbac -n "aso-rbac-sp"

# JSON output
{
  "appId": "eb4280db-4242-4ed0-a7d2-42424242f0d0",
  "displayName": "aso-rbac-sp",
  "name": "http://aso-rbac-sp",
  "password": "7d69a422-428d-42d4-a242-cd1d425424b2",
  "tenant": "42f988bf-42f1-42af-42ab-2d7cd421db42"
}


Set up the required environment variables:

export AZURE_SUBSCRIPTION_ID=<enter Azure subscription ID>
export AZURE_TENANT_ID=<enter value from the "tenant" attribute in the JSON payload above>
export AZURE_CLIENT_ID=<enter value from the "appId" attribute in the JSON payload above>
export AZURE_CLIENT_SECRET=<enter value from the "password" attribute in the JSON payload above>
export AZURE_SERVICE_OPERATOR_NAMESPACE=<name of the namespace into which ASO will be installed>

Add the Helm repo and create the namespace:

helm repo add azureserviceoperator https://raw.githubusercontent.com/Azure/azure-service-operator/master/charts

kubectl create namespace $AZURE_SERVICE_OPERATOR_NAMESPACE

Use helm upgrade to initiate setup:

helm upgrade --install aso azureserviceoperator/azure-service-operator \
--set azureSubscriptionID=$AZURE_SUBSCRIPTION_ID \
--set azureTenantID=$AZURE_TENANT_ID \
--set azureClientID=$AZURE_CLIENT_ID \
--set azureClientSecret=$AZURE_CLIENT_SECRET

Before you proceed, wait for the Azure Service Operator Pod to start up:


NAME                                              READY   STATUS    RESTARTS   AGE
azureoperator-controller-manager-68f44fd4-cm6wl   2/2     Running   0          6m

Set up the Azure Event Hubs components...

Start by cloning the repo:

git clone https://github.com/abhirockzz/eventhubs-using-aso-on-k8s
cd eventhubs-using-aso-on-k8s

Create an Azure Resource Group

I have used the southeastasia location. Please update eh-resource-group.yaml if you need to use a different one

kubectl apply -f deploy/eh-resource-group.yaml

# confirm that it's created
kubectl get resourcegroups/eh-aso-rg
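
For reference, the applied manifest looks roughly like the sketch below. The exact contents are in deploy/eh-resource-group.yaml in the repo; the field names here are based on the operator's v1alpha1 CRD schema, not copied from that file:

```yaml
# Sketch of a ResourceGroup custom resource (ASO v1alpha1 API group assumed)
apiVersion: azure.microsoft.com/v1alpha1
kind: ResourceGroup
metadata:
  name: eh-aso-rg
spec:
  # Azure region in which the resource group will be created
  location: southeastasia
```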

Create Event Hubs namespace

I have used the southeastasia location. Please update eh-namespace.yaml if you need to use a different one

kubectl apply -f deploy/eh-namespace.yaml

# wait for creation
kubectl get eventhubnamespaces -w

Once done, you should see this:

eh-aso-ns   true          successfully provisioned

You can get details with kubectl describe eventhubnamespaces and also double-check using az eventhubs namespace show
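
The EventhubNamespace manifest follows the same pattern. Here is a sketch (field names assumed from the ASO v1alpha1 CRDs; see the repo's eh-namespace.yaml for the exact contents):

```yaml
# Sketch of an EventhubNamespace custom resource (ASO v1alpha1 API group assumed)
apiVersion: azure.microsoft.com/v1alpha1
kind: EventhubNamespace
metadata:
  name: eh-aso-ns
spec:
  location: southeastasia
  # the ResourceGroup created in the previous step
  resourceGroup: eh-aso-rg
```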

The namespace is ready; we can now create an Event Hub:

kubectl apply -f deploy/eh-hub.yaml

kubectl get eventhubs/eh-aso-hub

# once done...
eh-aso-hub  true          successfully provisioned

You can get details with kubectl describe eventhub and also double-check using az eventhubs eventhub show
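
An Eventhub manifest might look roughly like this sketch. The secretName, location, and resourceGroup fields match the eh-hub.yaml snippet discussed later in this post; the properties values are illustrative (a partition count of 3 is consistent with the three partitions you'll see in the consumer logs):

```yaml
# Sketch of an Eventhub custom resource (ASO v1alpha1 API group assumed)
apiVersion: azure.microsoft.com/v1alpha1
kind: Eventhub
metadata:
  name: eh-aso-hub
spec:
  location: southeastasia
  namespace: eh-aso-ns       # the EventhubNamespace created earlier
  resourceGroup: eh-aso-rg
  properties:
    # illustrative values, not copied from the repo
    partitionCount: 3
    messageRetentionInDays: 1
  # name of the Kubernetes Secret the operator creates with
  # connection details (used by the client apps later)
  secretName: eh-secret
```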

As a final step, create the consumer group.

This is in addition to the default consumer group (appropriately named $Default)

kubectl apply -f deploy/eh-consumer-group.yaml

kubectl get consumergroups/eh-aso-cg

eh-aso-cg  true          successfully provisioned

You can get details with kubectl describe consumergroup and also double-check using az eventhubs eventhub consumer-group show
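
A ConsumerGroup manifest ties the group to its Event Hub, namespace, and resource group. This sketch is based on the ASO v1alpha1 CRD schema (field names are assumptions; see the repo's eh-consumer-group.yaml for the exact contents):

```yaml
# Sketch of a ConsumerGroup custom resource (ASO v1alpha1 API group assumed)
apiVersion: azure.microsoft.com/v1alpha1
kind: ConsumerGroup
metadata:
  name: eh-aso-cg
spec:
  eventHub: eh-aso-hub
  namespace: eh-aso-ns
  resourceGroup: eh-aso-rg
```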

What's next?

Let's make use of what we just set up! We'll deploy a pair of producer and consumer apps to Kubernetes that will send messages to and receive messages from Event Hubs, respectively. Both client apps are written in Go and use the Sarama library for Kafka. I am not going to dive into their details since they are relatively straightforward.

Deploy the consumer app:

kubectl apply -f deploy/consumer.yaml

# wait for it to start
kubectl get pods -l=app=eh-consumer -w

Keep track of the logs for the consumer app:

kubectl logs -f $(kubectl get pods -l=app=eh-consumer --output=jsonpath={.items..metadata.name})

You should see something similar to:

Event Hubs broker [eh-aso-ns.servicebus.windows.net:9093]
Sarama client consumer group ID eh-aso-cg
new consumer group created
Event Hubs topic eh-aso-hub
Waiting for program to exit
Partition allocation - map[eh-aso-hub:[0 1 2]]

Using another terminal, deploy the producer app:

kubectl apply -f deploy/producer.yaml

Once the producer app is up and running, the consumer should kick in, start consuming the messages, and print them to the console. So you'll see logs similar to this:

Message topic:"eh-aso-hub" partition:0 offset:6
Message content value-2020-07-06 15:37:06.116674866 +0000 UTC m=+67.450171692
Message topic:"eh-aso-hub" partition:0 offset:7
Message content value-2020-07-06 15:37:09.133115988 +0000 UTC m=+70.466612714
Message topic:"eh-aso-hub" partition:0 offset:8
Message content value-2020-07-06 15:37:12.149068005 +0000 UTC m=+73.482564831

In case you want to check producer logs as well: kubectl logs -f $(kubectl get pods -l=app=eh-producer --output=jsonpath={.items..metadata.name})

Alright, it worked!

  • We created an Event Hubs namespace, an Event Hub, and a consumer group - all using kubectl (and YAML, of course)
  • Deployed simple producer and consumer apps for testing

But, what just happened ...?

... how did the consumer and producer apps connect to Event Hubs without connection info, credentials etc.?

Notice this part of the Event Hub manifest (eh-hub.yaml):

  secretName: eh-secret
  location: southeastasia
  resourceGroup: eh-aso-rg

secretName: eh-secret ensured that a Kubernetes Secret was created with the required connectivity details including connection strings (primary, secondary), keys (primary, secondary), along with the basic info such as Event Hubs namespace and hub name.
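
If you inspect the generated Secret (e.g. with kubectl describe secret eh-secret), you should find entries along these lines. The keys primaryConnectionString, eventhubNamespace, and eventhubName appear in the Deployment snippet discussed next; the remaining key names are assumptions about how the operator stores the other values it creates:

```yaml
# Rough shape of the Secret created by the operator.
# Values are base64-encoded; keys marked "assumed" are guesses at naming.
apiVersion: v1
kind: Secret
metadata:
  name: eh-secret
type: Opaque
data:
  primaryConnectionString: <base64>
  secondaryConnectionString: <base64>   # assumed key name
  primaryKey: <base64>                  # assumed key name
  secondaryKey: <base64>                # assumed key name
  eventhubNamespace: <base64>
  eventhubName: <base64>
```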

The producer and consumer Deployments were simply able to refer to this. Take a look at this snippet from the consumer app Deployment:

        - name: eh-consumer
          image: abhirockzz/eh-kafka-consumer
          env:
            - name: EVENTHUBS_CONNECTION_STRING
              valueFrom:
                secretKeyRef:
                  name: eh-secret
                  key: primaryConnectionString
            - name: EVENTHUBS_NAMESPACE
              valueFrom:
                secretKeyRef:
                  name: eh-secret
                  key: eventhubNamespace
            - name: EVENTHUBS_BROKER
              value: $(EVENTHUBS_NAMESPACE).servicebus.windows.net:9093
            - name: EVENTHUBS_TOPIC
              valueFrom:
                secretKeyRef:
                  name: eh-secret
                  key: eventhubName
            - name: EVENTHUBS_CONSUMER_GROUPID
              value: eh-aso-cg

The app uses env vars EVENTHUBS_CONNECTION_STRING, EVENTHUBS_NAMESPACE and EVENTHUBS_TOPIC whose values were sourced from the Secret (eh-secret). The value for EVENTHUBS_CONSUMER_GROUPID is hardcoded to eh-aso-cg which was the name of the consumer group specified in eh-consumer-group.yaml.

Clean up

To remove all the resources including Event Hubs and the client apps, simply use kubectl delete -f deploy


Azure Service Operator provides a layer of abstraction on top of Azure-specific primitives. It allows you to manage Azure resources and also provides ways for other applications deployed in the same Kubernetes cluster to connect to them.

I covered Azure Event Hubs as an example, but as mentioned earlier, Azure Service Operator supports many other services. Head over to the GitHub repo and give them a try!
