Have you ever heard about Keda?
No? Soooo, just grab a coffee or whatever you like and follow me to the goods!
What's Keda?
Keda is a Kubernetes-based Event Driven Autoscaler. It was developed by Microsoft and Red Hat and is now a Cloud Native Computing Foundation (CNCF) sandbox project.
With Keda you can scale any container in Kubernetes based on the number of events it needs to process.
This means you can build an event-driven application that comes up when something arrives and scales back down when there is nothing to do. This keeps the cost of your application low when running on a cloud provider such as AWS, Azure or GCP.
You can use Keda to scale based on a queue in RabbitMQ, Azure Service Bus or Kafka, on CPU usage, on a cron schedule, on MongoDB queries and many more.
Awesome, isn't it? 🤓
Great, but how does it work?
I will drop my sample on GitHub here.
In this tutorial, I will show you how simple it is to create an Azure Function using Python ❤️ and deploy it on AKS. I will also show a pipeline that deploys the function automatically to your AKS.
Alright, talk is cheap, so let's code!
Pre-requisites
To get started with our Python function we need a couple of things.
- First of all, we need an Azure subscription to create our AKS cluster and the Azure Service Bus. The free trial is just fine for this.
- We are going to use the Azure Functions Core Tools to create, start and run our functions.
- Docker installed and a DockerHub account are essential.
- kubectl to watch the beautiful babies getting up.
- A repository on GitHub, GitLab, Azure DevOps Repos,...
- Not a must, but if you want to deploy using the Azure pipeline from the example, you need to log in to Azure DevOps and run the pipeline. If you have another CI/CD pipeline or want to deploy in a different way, that's fine too.
Let's start!!
First of all, we must create the project on Azure DevOps Repos, GitHub or GitLab. You just have to configure Azure DevOps to connect to your repository so it can run the pipeline when you want it to.
After creating it, clone the project to your machine.
From here I will assume that you already have an AKS cluster with Keda 2.1 installed, and a Service Bus with a queue and its connection string.
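If you still need to install Keda on the cluster, the official Helm chart is the quickest route. Assuming you have Helm 3 installed and access to the cluster, something like this does it:
helm repo add kedacore https://kedacore.github.io/charts
helm repo update
helm install keda kedacore/keda --namespace keda --create-namespace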
1. Starting a function
With the Azure Functions Core Tools it's pretty easy to start a function project. We just need to run
func init . --docker
And that's all folks! Thanks for reading...
Just kidding 🤣
After running this command, you will see something like this in your terminal:
Select a number for worker runtime:
1. dotnet
2. node
3. python
4. powershell
5. custom
Choose option:
Right here you will choose option 3. After that, our project will be created with a Dockerfile that is prepared to run a Python function. But it's not done yet: we still need to create our function.
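Before moving on, it's worth a quick look at that generated Dockerfile. It looks roughly like this (the exact base image tag depends on your Core Tools version):

FROM mcr.microsoft.com/azure-functions/python:3.0-python3.8

ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
    AzureFunctionsJobHost__Logging__Console__IsEnabled=true

COPY requirements.txt /
RUN pip install -r /requirements.txt

COPY . /home/site/wwwroot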
2. Creating a function
In step 1 we just initialized our project. Now we are going to create our function by running
func new
And now we choose which template we need for this function. For this tutorial, you must choose option 11.
Select a number for template:
1. Azure Blob Storage trigger
2. Azure Cosmos DB trigger
3. Durable Functions activity
4. Durable Functions HTTP starter
5. Durable Functions orchestrator
6. Azure Event Grid trigger
7. Azure Event Hub trigger
8. HTTP trigger
9. Azure Queue Storage trigger
10. RabbitMQ trigger
11. Azure Service Bus Queue trigger
12. Azure Service Bus Topic trigger
13. Timer trigger
Choose option:
After that, we must choose a name for our function. You can choose the name you want.
Azure Service Bus Queue trigger
Function name: [ServiceBusQueueTrigger]
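The command creates a folder with that name containing the function files. The generated __init__.py handler looks roughly like this, just logging every message it consumes:

import logging

import azure.functions as func


def main(msg: func.ServiceBusMessage):
    logging.info('Python ServiceBus queue trigger processed message: %s',
                 msg.get_body().decode('utf-8'))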
After creating our function, we must change the local.settings.json file and include the Service Bus connection like this:
{
  "IsEncrypted": false,
  "Values": {
    "FUNCTIONS_WORKER_RUNTIME": "python",
    "AzureWebJobsStorage": "<service-bus-connection>"
  }
}
And in the function.json inside the folder that was created for our function, we need to include the queue name as well:
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "name": "msg",
      "type": "serviceBusTrigger",
      "direction": "in",
      "queueName": "<queue-name>",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
And for now, our function is ready!! 🎉🎊
3. Testing it locally
To run our function, just run
func start
and it will start. To test it, I created a Python script that lives in the GitHub sample I linked above, but it's simple to do and I'll show you how.
We need to export AzureWebJobsStorage and QUEUE_NAME as environment variables, like this:
export AzureWebJobsStorage='<service-bus-connection>'
export QUEUE_NAME=<queue-name>
With those exported, you can create or reuse this script:
import os

from azure.servicebus import ServiceBusClient, ServiceBusMessage

# "logger" is a small helper from the sample repo; the standard logging module works just as well.
from logger import logger

connection_string = os.environ['AzureWebJobsStorage']
queue_name = os.environ['QUEUE_NAME']

# Client for the Service Bus namespace, built from the connection string
servicebus_client = ServiceBusClient.from_connection_string(conn_str=connection_string)


def send_a_list_of_messages(sender):
    # Send 100 messages to the queue in a single batch call
    messages = [ServiceBusMessage("Message in list") for _ in range(100)]
    sender.send_messages(messages)
    logger.info("Sent a list of 100 messages")


with servicebus_client:
    sender = servicebus_client.get_queue_sender(queue_name=queue_name)
    with sender:
        send_a_list_of_messages(sender)

logger.info("Done sending messages")
logger.info("-----------------------")
This script will send 100 events to your Service Bus queue, and your function running locally will consume all of them.
Alright, we have created our function and tested it! Now we need to deploy it to our AKS. As I wrote above, I will use an azure-pipelines.yml to do the deployment.
4. Manifest files and deploy pipeline
In this project I created a folder called manifests, and in there we have two files: deployment.yml, which we are going to use to configure our pod on AKS, and scaledobject.yml, which is the configuration file Keda uses to understand when to scale the application. Let's see what's in deployment.yml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: <pod-name>
  namespace: <namespace>
  labels:
    app: <pod-name>
spec:
  selector:
    matchLabels:
      app: <pod-name>
  template:
    metadata:
      labels:
        app: <pod-name>
    spec:
      containers:
        - image: arthuravila/keda-container
          name: keda-container
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "250m"
This is a simple deployment file that I used to configure my pod. In the containers section I'm using my image from DockerHub. Feel free to use it or create your own, it's up to you.
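If you do build your own, pushing it to DockerHub is just the usual two commands (swap in your own DockerHub user and image name, these are placeholders):
docker build -t <dockerhub-user>/<image-name>:latest .
docker push <dockerhub-user>/<image-name>:latest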
Now, let's see what scaledobject.yml looks like:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: <pod-name>
  namespace: <namespace>
spec:
  scaleTargetRef:
    name: <pod-name>
  minReplicaCount: 0
  maxReplicaCount: 10
  pollingInterval: 1
  triggers:
    - type: azure-servicebus
      metadata:
        queueName: <queue-name>
        messageCount: '1'
        connectionFromEnv: AzureWebJobsStorage
This file is for Keda version 2.x; for version 1.x it may be different, so have a look at the documentation.
Explaining the file: in scaleTargetRef we point at the deployment we want to scale. With minReplicaCount I set the minimum number of replicas to 0, because when there is no event in the queue I assume it's not necessary to have a pod running, and maxReplicaCount caps it at 10 replicas.
In triggers I use azure-servicebus. This changes when you want to scale from another service, but as this tutorial is about Service Bus, we use it like that.
In the metadata we give the name of the queue in queueName. I set messageCount to 1 because as soon as an event arrives, a pod should come up to consume it. Finally, in connectionFromEnv I set AzureWebJobsStorage; this variable holds the connection string that we passed in the local.settings.json file.
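Just to show that only the triggers block changes when you scale on something else, a cron-based Keda trigger, for example, looks roughly like this (the schedule values here are just an illustration):

triggers:
  - type: cron
    metadata:
      timezone: Europe/Lisbon
      start: 0 8 * * *
      end: 0 18 * * *
      desiredReplicas: "2"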
Cool, now let's talk about the pipeline. It is pretty simple.
trigger:
- main

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: 'DockerHub'
  imageRepository: 'arthuravila/keda-container'
  dockerfilePath: '**/Dockerfile'
  tag: '$(Build.BuildId)'
  imagePullSecret: 'keda-container'

  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build an image
      inputs:
        command: build
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - task: Docker@2
      displayName: Push an image to container registry
      inputs:
        command: push
        repository: $(imageRepository)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - upload: manifests
      artifact: manifests

- stage: DeployNONPROD
  displayName: Deploy NONPROD
  dependsOn:
  - Build
  condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
  jobs:
  - deployment: Deploy
    displayName: Deploy
    pool:
      vmImage: $(vmImageName)
    environment: '<AKS environment>'
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Create imagePullSecret
            inputs:
              action: createSecret
              secretName: $(imagePullSecret)
              dockerRegistryEndpoint: $(dockerRegistryServiceConnection)
          - task: KubernetesManifest@0
            displayName: Deploy to Kubernetes cluster
            inputs:
              action: deploy
              manifests: |
                $(Pipeline.Workspace)/manifests/deployment.yml
                $(Pipeline.Workspace)/manifests/scaledobject.yml
              imagePullSecrets: |
                $(imagePullSecret)
              containers: |
                $(containerRegistry)/$(imageRepository):$(tag)
This pipeline triggers on a merge to the main branch, builds an image with the Dockerfile created by the Azure Functions Core Tools, pushes this image to our Docker registry and deploys to AKS using the deployment.yml and scaledobject.yml files.
But we are not done yet: we still have to configure Azure DevOps to run our pipeline.
5. Configuring Azure DevOps and Deploying to AKS
To run our pipeline and deploy it to our AKS, we must configure a couple things in Azure DevOps.
First of all, we start by configuring the Service connections. Here we configure our AKS connection, our repository connection (so the pipeline triggers on merges to main) and our Docker registry connection.
After configuring these connections, we must configure our environments. As you may have noticed, in the Deploy stage of the pipeline we reference an environment. We create this environment under Pipelines and point it at the AKS cluster we want to deploy to.
The first time we run a pipeline in Azure DevOps, we must trigger it manually. To do this, we just go to Pipelines -> New pipeline, choose the repository where the project is, select the azure-pipelines.yml file and run the pipeline.
We are almost done, can you believe that?
After your pipeline runs and deploys, you should see something like this:
It means that your function has been deployed successfully!! 🎊🥳
Alright, alright... Let's see it running, mate!
If you run the command
kubectl get pods -n <your-name-space>
You will see something like this:
It means that your pod is scaled down because there are no events in your queue, so it's not necessary to keep it up consuming resources.
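In fact, with an empty queue and minReplicaCount set to 0, kubectl will typically report no pods at all, something like:
No resources found in <your-name-space> namespace.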
To watch how beautiful your serverless function looks scaling up with Keda, you can run the test script I showed above again.
After running it, use the command again to see your pods:
kubectl get pods -n <your-name-space>
And you will see the pods coming up like this:
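Something along these lines (pod names, ages and the number of replicas will differ on your cluster):
NAME                          READY   STATUS              RESTARTS   AGE
<pod-name>-5f7d9c6b8-2lmn4    1/1     Running             0          12s
<pod-name>-5f7d9c6b8-8qrst    0/1     ContainerCreating   0          3s
<pod-name>-5f7d9c6b8-xw9vp    1/1     Running             0          12s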
Pretty nice, huh?
This is just one example of many you can try out with Keda. You can check more examples in Keda's GitHub samples repository.
Well, I'll finish here... This was huge!
If you have any doubts or feedback, feel free to comment or contact me on LinkedIn. This is the first article I have ever written in my life, so any feedback is welcome!
Top comments (1)
Good morning,
Thanks for the great post. I am looking for a way to deploy durable functions with Keda. Do you have any idea how to do that? Is it also possible to have the activity functions in different pods so that they can scale individually?