Christian Dennig

Originally published at partlycloudy.blog

Getting started with KrakenD on Kubernetes / AKS

If you develop applications in a cloud-native environment and, for example, rely on the microservices architecture pattern, you will sooner or later have to deal with API gateways. There is a wide range of offerings available "in the wild", both as managed services from the various cloud providers and from the open-source world. When it comes to API gateways, many people first think of the well-known OSS projects such as Kong, Tyk, or Gloo. The same was true for me. However, when I took a closer look at those projects, I wasn't always satisfied with the feature set. I was looking for a product that can be hosted in your own Kubernetes cluster, is flexible and easy to configure ("desired state"), and offers good performance. During my work as a Cloud Solution Architect at Microsoft, I became aware of the OSS API gateway KrakenD during a project about 1.5 years ago.

KrakenD API Gateway

KrakenD logo

KrakenD is an API gateway implemented in Go that relies on the ultra-fast Gin framework under the hood. It offers an impressive number of features out of the box that cover just about any gateway requirement:

  • request proxying and aggregation (merge multiple responses)
  • decoding (from JSON, XML…)
  • filtering (allow- and block-lists)
  • request & response transformation
  • caching
  • circuit breaker pattern via configuration, timeouts…
  • protocol translation
  • JWT validation / signing
  • SSL
  • OAuth2
  • Prometheus/OpenCensus integration

As you can see, this is quite an extensive list of features, and it is still far from complete. The homepage and documentation provide much more information about what the product offers in its entirety.

The creators also recently published an Azure Marketplace offer: a container image that you can push directly to your Azure Container Registry. So I thought it was an appropriate time to publish a blog post about how to get started with KrakenD on Azure Kubernetes Service (AKS).

Getting Started with KrakenD on AKS

Ok, let’s get started then. First, we need a Kubernetes cluster on which we can roll out a sample application that we want to expose via KrakenD. So, as with all Azure deployments, let’s start with a resource group and then add a corresponding AKS service. We will be using the Azure Command Line Interface for this, but you can also create the cluster via the Azure Portal.

# create an Azure resource group

$ az group create --name krakend-aks-rg \
   --location westeurope

{
  "id": "/subscriptions/xxx/resourceGroups/krakend-aks-rg",
  "location": "westeurope",
  "managedBy": null,
  "name": "krakend-aks-rg",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": null,
  "type": "Microsoft.Resources/resourceGroups"
}

# create a Kubernetes cluster

$ az aks create -g krakend-aks-rg \
   -n krakend-aks \
   --enable-managed-identity \
   --generate-ssh-keys


After a few minutes, the cluster has been created and we can download the access credentials to our workstation.


$ az aks get-credentials -g krakend-aks-rg \
   -n krakend-aks 

# in case you don't have kubectl on your 
# machine, there's a handy installer coming with 
# the Azure CLI:

$ az aks install-cli


Let's check if we have access to the cluster…


$ kubectl get nodes

NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-34625029-vmss000000   Ready    agent   24h   v1.18.14
aks-nodepool1-34625029-vmss000001   Ready    agent   24h   v1.18.14
aks-nodepool1-34625029-vmss000002   Ready    agent   24h   v1.18.14


Looks great and we are all set from an infrastructure perspective. Let’s add a service that we can expose via KrakenD.

Add a sample service

We are now going to deploy a very simple service implemented in .NET Core that creates and stores "contact" objects in a Microsoft SQL Server 2019 (Linux) instance, which runs (for convenience) as a single container/pod on the same Kubernetes cluster. After the services have been deployed, the in-cluster situation looks like this:

In-cluster architecture w/o KrakenD

Let’s deploy everything. First, the MS SQL server with its service definition:


# content of sql-server.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mssql-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mssql
  template:
    metadata:
      labels:
        app: mssql
    spec:
      terminationGracePeriodSeconds: 30
      securityContext:
        fsGroup: 10001
      containers:
        - name: mssql
          image: mcr.microsoft.com/mssql/server:2019-latest
          ports:
            - containerPort: 1433
          env:
            - name: MSSQL_PID
              value: 'Developer'
            - name: ACCEPT_EULA
              value: 'Y'
            - name: SA_PASSWORD
              value: 'Ch@ngeMe!23'
---
apiVersion: v1
kind: Service
metadata:
  name: mssqlsvr
spec:
  selector:
    app: mssql
  ports:
    - protocol: TCP
      port: 1433
      targetPort: 1433
  type: ClusterIP


Create a file called sql-server.yaml and apply it to the cluster.


$ kubectl apply -f sql-server.yaml

deployment.apps/mssql-deployment created
service/mssqlsvr created

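SQL Server can take a little while to start. Before deploying the API, you can optionally block until the pod reports ready; a small sketch using the `app=mssql` label from the manifest above:

```shell
# Wait until the SQL Server pod passes its readiness check
# (times out with a non-zero exit code after 3 minutes)
kubectl wait --for=condition=ready pod -l app=mssql --timeout=180s
```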

Second, the contacts API plus a service definition:


# content of contacts-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ca-deploy
  labels:
    application: scmcontacts
    service: contactsapi
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: scmcontacts
      service: contactsapi
  template:
    metadata:
      labels:
        application: scmcontacts
        service: contactsapi
    spec:
      automountServiceAccountToken: false
      containers:
        - name: application
          resources:
            requests:
              memory: '64Mi'
              cpu: '100m'
            limits:
              memory: '256Mi'
              cpu: '500m'
          image: ghcr.io/azuredevcollege/adc-contacts-api:3.0
          env:
            - name: ConnectionStrings__DefaultConnectionString
              value: "Server=tcp:mssqlsvr,1433;Initial Catalog=scmcontactsdb;Persist Security Info=False;User ID=sa;Password=Ch@ngeMe!23;MultipleActiveResultSets=False;Encrypt=False;TrustServerCertificate=True;Connection Timeout=30;"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: contacts
  labels:
    application: scmcontacts
    service: contactsapi
spec:
  type: ClusterIP
  selector:
    application: scmcontacts
    service: contactsapi
  ports:
    - port: 8080
      targetPort: 5000


Create a file called contacts-app.yaml and apply it to the cluster.


$ kubectl apply -f contacts-app.yaml

deployment.apps/ca-deploy created
service/contacts created


To check if the contacts pods can communicate with the SQL server, let's quickly spin up an interactive pod and issue a few requests from within the cluster. As you can see in the YAML manifests, the services have been created with type ClusterIP, which means they don't get an external IP address. Exposing the contacts service to the public will be the responsibility of KrakenD.


$ kubectl run -it --rm --image csaocpger/httpie:1.0 http --restart Never -- /bin/sh


$ echo '{"firstname": "Satya", "lastname": "Nadella", "email": "satya@microsoft.com", "company": "Microsoft", "avatarLocation": "", "phone": "+1 32 6546 6545", "mobile": "+1 32 6546 6542", "description": "CEO of Microsoft", "street": "Street", "houseNumber": "1", "city": "Redmond", "postalCode": "123456", "country": "USA"}' | http POST http://contacts:8080/api/contacts

HTTP/1.1 201 Created
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 10:58:57 GMT
Location: http://contacts:8080/api/contacts/ee176782-a767-45ad-a7df-dbcefef22688
Server: Kestrel
Transfer-Encoding: chunked

{
    "avatarLocation": "",
    "city": "Redmond",
    "company": "Microsoft",
    "country": "USA",
    "description": "CEO of Microsoft",
    "email": "satya@microsoft.com",
    "firstname": "Satya",
    "houseNumber": "1",
    "id": "ee176782-a767-45ad-a7df-dbcefef22688",
    "lastname": "Nadella",
    "mobile": "+1 32 6546 6542",
    "phone": "+1 32 6546 6545",
    "postalCode": "123456",
    "street": "Street"
}

$ http GET http://contacts:8080/api/contacts
HTTP/1.1 200 OK
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 11:00:07 GMT
Server: Kestrel
Transfer-Encoding: chunked

[
    {
        "avatarLocation": "",
        "city": "Redmond",
        "company": "Microsoft",
        "country": "USA",
        "description": "CEO of Microsoft",
        "email": "satya@microsoft.com",
        "firstname": "Satya",
        "houseNumber": "1",
        "id": "ee176782-a767-45ad-a7df-dbcefef22688",
        "lastname": "Nadella",
        "mobile": "+1 32 6546 6542",
        "phone": "+1 32 6546 6545",
        "postalCode": "123456",
        "street": "Street"
    }
]


As you can see, we can create new contacts by POSTing a JSON payload to the endpoint http://contacts:8080/api/contacts (first request) and retrieve what has been added to the database by issuing a GET against the same endpoint (second request).
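If you prefer not to spin up an interactive pod, you can also test a ClusterIP service straight from your workstation with kubectl port-forward. A quick alternative sketch; the service name and ports are taken from the manifest above:

```shell
# Forward local port 8080 to the in-cluster "contacts" service
kubectl port-forward svc/contacts 8080:8080

# ...then, in a second terminal:
# http GET http://localhost:8080/api/contacts
```

Note that port-forwarding is a debugging convenience only; it tunnels through the API server and is not a substitute for a real ingress path.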

Create a KrakenD Configuration

So far, everything works as expected and we have a working API in the cluster that is storing its data in a MSSQL server. As discussed in the previous section, we did not expose the contacts service to the internet on purpose. We will do this later by adding KrakenD in front of that service giving the API gateway a public IP so that it is externally reachable.

But first, we need to create a KrakenD configuration (a plain JSON file) where we configure the endpoints, the backend services, how requests should be routed, and so on. Fortunately, KrakenD has a very easy-to-use designer that gives you a head start when creating that configuration file – it's simply called the KrakenDesigner.

KrakenDesigner – sample service

KrakenDesigner – logging configuration

When creating such a configuration, it comes down to these simple steps:

  1. Adjust the "common" configuration for KrakenD, like service name, port, CORS, exposed/allowed headers, etc.
  2. Add the backend services, in our case just the Kubernetes service for our contacts API (http://contacts:8080).
  3. Add the exposed endpoints (/contacts) at the gateway and define which backend to route them to (http://contacts:8080/api/contacts). Here you can also configure whether a JWT token should be validated, which headers to pass to the backend, etc. There are a lot of options, which we obviously don't need in our simple setup.
  4. Add the logging configuration. It is optional, but you should do it. We simply enable stdout logging, but you can also use OpenCensus, for example, and even expose metrics to a Prometheus instance (nice!).

As a last step, you can export the configuration you created in the UI to a JSON file. For our sample, this file looks like this:


{
    "version": 2,
    "extra_config": {
      "github_com/devopsfaith/krakend-cors": {
        "allow_origins": [
          "*"
        ],
        "expose_headers": [
          "Content-Length",
          "Location"
        ],
        "max_age": "12h",
        "allow_methods": [
          "GET",
          "POST",
          "PUT",
          "DELETE",
          "OPTIONS"
        ]
      },
      "github_com/devopsfaith/krakend-gologging": {
        "level": "INFO",
        "prefix": "[KRAKEND]",
        "syslog": false,
        "stdout": true,
        "format": "default"
      }
    },
    "timeout": "3000ms",
    "cache_ttl": "300s",
    "output_encoding": "json",
    "name": "contacts",
    "port": 8080,
    "endpoints": [
      {
        "endpoint": "/contacts",
        "method": "GET",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "GET",
            "extra_config": {},
            "host": [
              "http://contacts:8080"
            ],
            "disable_host_sanitize": true
          }
        ]
      },
      {
        "endpoint": "/contacts",
        "method": "POST",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "POST",
            "extra_config": {},
            "host": [
              "http://contacts:8080"
            ],
            "disable_host_sanitize": true
          }
        ]
      }
    ]
  }


We simply expose two endpoints: one that lets us create contacts (POST) and one that retrieves all contacts from the database (GET); basically the same calls we issued against the contacts service from within the cluster.

Save the file above to your local machine (name it krakend.json), since we will add it to Kubernetes as a ConfigMap later.
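Before wiring the file into Kubernetes, you can let KrakenD itself validate it: the `krakend check` command ships with the same container image we will deploy later. A sketch; adjust the mount path if your file lives elsewhere:

```shell
# Validate krakend.json with the same image version we deploy later;
# the command exits non-zero if the configuration cannot be parsed
docker run --rm -v "$PWD:/etc/krakend" \
  devopsfaith/krakend:1.2 check --config /etc/krakend/krakend.json
```

Catching a syntax error here is much quicker than debugging a crash-looping pod afterwards.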

Add the KrakenD API Gateway

So, now we are ready to deploy KrakenD to the cluster: we have an API that we want to expose and we have the KrakenD configuration. To dynamically add the configuration (krakend.json) to our running KrakenD instance, we will use a Kubernetes ConfigMap object. This gives us the ability to decouple the configuration from our KrakenD application instance/pod. If you are not familiar with the concept, have a look at the official documentation here.

During the startup of KrakenD, we will use this ConfigMap and mount its content (the krakend.json file) into the container (folder /etc/krakend) so that the KrakenD process can pick it up and apply the configuration.

In the folder where you saved the config file, issue the following commands:


$ kubectl create configmap krakend-cfg --from-file=./krakend.json

configmap/krakend-cfg created

# check the contents of the configmap

$ kubectl describe configmap krakend-cfg

Name: krakend-cfg
Namespace: default
Labels: <none>
Annotations: <none>

Data
====
krakend.json:
---------
{
    "version": 2,
    "extra_config": {
      "github_com/devopsfaith/krakend-cors": {
        "allow_origins": [
          "*"
        ],
        "expose_headers": [
          "Content-Length",
          "Location"
        ],
        "max_age": "12h",
        "allow_methods": [
          "GET",
          "POST",
          "PUT",
          "DELETE",
          "OPTIONS"
        ]
      },
      "github_com/devopsfaith/krakend-gologging": {
        "level": "INFO",
        "prefix": "[KRAKEND]",
        "syslog": false,
        "stdout": true,
        "format": "default"
      }
    },
    "timeout": "3000ms",
    "cache_ttl": "300s",
    "output_encoding": "json",
    "name": "contacts",
    "port": 8080,
    "endpoints": [
      {
        "endpoint": "/contacts",
        "method": "GET",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "GET",
            "extra_config": {},
            "host": [
              "http://contacts:8080"
            ],
            "disable_host_sanitize": true
          }
        ]
      },
      {
        "endpoint": "/contacts",
        "method": "POST",
        "output_encoding": "no-op",
        "extra_config": {},
        "backend": [
          {
            "url_pattern": "/api/contacts",
            "encoding": "no-op",
            "sd": "static",
            "method": "POST",
            "extra_config": {},
            "host": [
              "http://contacts:8080"
            ],
            "disable_host_sanitize": true
          }
        ]
      }
    ]
  }

Events: <none>


That looks great. We are finally ready to spin up KrakenD in the cluster. We therefore apply the following Kubernetes manifest file, which creates a deployment and a Kubernetes service of type LoadBalancer; the latter gives us a public IP address for KrakenD via the Azure load balancer.


# content of api-gateway.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: krakend-deploy
  labels:
    application: apigateway
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      application: apigateway
  template:
    metadata:
      labels:
        application: apigateway
    spec:
      automountServiceAccountToken: false
      volumes:
        - name: krakend-cfg
          configMap:
            name: krakend-cfg
      containers:
        - name: application
          resources:
            requests:
              memory: '64Mi'
              cpu: '100m'
            limits:
              memory: '1024Mi'
              cpu: '1000m'
          image: devopsfaith/krakend:1.2
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
          volumeMounts:
          - name: krakend-cfg
            mountPath: /etc/krakend

---
apiVersion: v1
kind: Service
metadata:
  name: apigateway
  labels:
    application: apigateway
spec:
  type: LoadBalancer
  selector:
    application: apigateway
  ports:
    - port: 8080
      targetPort: 8080


Let me highlight the two important parts here that mount the configuration file into our pod: first, we define a volume named krakend-cfg in the pod spec, referencing the ConfigMap we created before; second, we mount that volume into the container via volumeMounts with the mountPath /etc/krakend.

Save the manifest file and apply it to the cluster.


$ kubectl apply -f api-gateway.yaml

deployment.apps/krakend-deploy created
service/apigateway created

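Before sending traffic, it is worth confirming that the deployment actually became available; a short sketch, with the label matching the selector from the manifest above:

```shell
# Wait for the KrakenD deployment to finish rolling out
kubectl rollout status deployment/krakend-deploy --timeout=120s

# The gateway pod should now show up as Running
kubectl get pods -l application=apigateway
```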

The resulting architecture within the cluster is now as follows:

Architecture with krakend
Architecture with KrakenD API gateway

As a last step, we just need to retrieve the public IP of our “LoadBalancer” service.


$ kubectl get services

NAME         TYPE           CLUSTER-IP    EXTERNAL-IP    PORT(S)          AGE
apigateway   LoadBalancer   10.0.26.150   104.45.73.37   8080:31552/TCP   4h53m
contacts     ClusterIP      10.0.155.35   <none>         8080/TCP         3h47m
kubernetes   ClusterIP      10.0.0.1      <none>         443/TCP          26h
mssqlsvr     ClusterIP      10.0.192.57   <none>         1433/TCP         3h59m

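If the EXTERNAL-IP column still shows `<pending>`, the Azure load balancer has not finished provisioning yet. A small wait loop saves polling by hand; the jsonpath expression points at the standard Kubernetes service status field:

```shell
# Poll until the LoadBalancer service has a public IP assigned
EXTERNAL_IP=""
while [ -z "$EXTERNAL_IP" ]; do
  EXTERNAL_IP=$(kubectl get service apigateway \
    -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  [ -z "$EXTERNAL_IP" ] && sleep 5
done
echo "KrakenD is reachable at http://$EXTERNAL_IP:8080/contacts"
```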

So, in our case, we got 104.45.73.37. Let's issue a few requests (either with a browser or a tool like httpie, which I use all the time) against the resulting URL http://104.45.73.37:8080/contacts.


$ http http://104.45.73.37:8080/contacts

HTTP/1.1 200 OK
Content-Length: 337
Content-Type: application/json; charset=utf-8
Date: Wed, 17 Feb 2021 12:10:20 GMT
Server: Kestrel
Vary: Origin
X-Krakend: Version 1.2.0
X-Krakend-Completed: false

[
    {
        "avatarLocation": "",
        "city": "Redmond",
        "company": "Microsoft",
        "country": "USA",
        "description": "CEO of Microsoft",
        "email": "satya@microsoft.com",
        "firstname": "Satya",
        "houseNumber": "1",
        "id": "ee176782-a767-45ad-a7df-dbcefef22688",
        "lastname": "Nadella",
        "mobile": "+1 32 6546 6542",
        "phone": "+1 32 6546 6545",
        "postalCode": "123456",
        "street": "Street"
    }
]


Works like a charm! Also, have a look at the logs of the KrakenD container:


$ kubectl logs krakend-deploy-86c44c787d-qczjh -f=true

Parsing configuration file: /etc/krakend/krakend.json
[KRAKEND] 2021/02/17 - 09:59:59.745 ▶ ERROR unable to create the GELF writer: getting the extra config for the krakend-gelf module
[KRAKEND] 2021/02/17 - 09:59:59.745 ▶ INFO Listening on port: 8080
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN influxdb: unable to load custom config
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN opencensus: no extra config defined for the opencensus module
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN building the etcd client: unable to create the etcd client: no config
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN bloomFilter: no config for the bloomfilter
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ WARNIN no config present for the httpsecure module
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: signer disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: validator disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: signer disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.746 ▶ INFO JOSE: validator disabled for the endpoint /contacts
[KRAKEND] 2021/02/17 - 09:59:59.747 ▶ INFO registering usage stats for cluster ID '293C0vbu4hqE6jM0BsSNl/HCzaAKsvjhSbHtWo9Hacc='
[GIN] 2021/02/17 - 10:01:44 | 200 | 4.093438ms | 10.244.1.1 | GET "/contacts"
[GIN] 2021/02/17 - 10:01:46 | 200 | 5.397977ms | 10.244.1.1 | GET "/contacts"
[GIN] 2021/02/17 - 10:01:56 | 200 | 6.820172ms | 10.244.1.1 | GET "/contacts"
[GIN] 2021/02/17 - 10:01:57 | 200 | 5.911475ms | 10.244.1.1 | GET "/contacts"


As mentioned before, KrakenD logs its events to stdout, and we can see how the requests come in, their destination, and the time each request needed to complete at the gateway level.
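The warnings in the log above also show that the opencensus module is not configured. As a hedged sketch of what enabling the Prometheus exporter could look like, you would add an entry like the following to the top-level extra_config section (the key and field names follow the KrakenD v1 opencensus documentation; verify them against the exact version you deploy):

```json
"github_com/devopsfaith/krakend-opencensus": {
  "sample_rate": 100,
  "reporting_period": 1,
  "exporters": {
    "prometheus": {
      "port": 9091
    }
  }
}
```

With that in place, the gateway would expose metrics on a separate port that a Prometheus instance can scrape.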

Wrap-Up

In this brief article, I showed you how to deploy KrakenD to an AKS/Kubernetes cluster on Azure and how to set up a first, simple example of exposing an API running in Kubernetes via the KrakenD API gateway. The project has so many useful features that this post only covers the very basics. I really encourage you to have a look at the product when you consider hosting an API gateway within your Kubernetes cluster. The folks at KrakenD do a great job and are also open to pull requests, if you want to contribute to the project.

As mentioned at the beginning of this article, they recently published a version of their KrakenD container image to the Azure Marketplace. This gives you the ability to import their current and future images directly into your own Azure Container Registry, enabling scenarios like static image scanning, Azure Security Center integration, geo-replication, etc. You can find their offering here: KrakenD API Gateway

Hope you enjoyed this brief introduction…happy hacking, friends! 🖖
