Robin Cher

Automating Kong API Gateway deployment with Flux

Introduction

In recent years, GitOps has created an industry shift in how configuration change is managed. GitOps elevates the source control repository to the source of truth for configuration change management, and makes the repository the central hub of change control. The benefits of following this development paradigm are many, for example, GitOps helps:

  • Improve Collaboration
  • Increase deployment reliability, stability and frequency
  • Decrease deployment time and reduce human error
  • Improve compliance and auditing
  • and many others…

For these reasons, many engineering organizations are implementing GitOps, including in control of deployments to their API Gateway.

One of the most popular tools in this space is Flux, which we'll be using today. In this post, we will set up Flux to demonstrate how you can deploy Kong in a GitOps fashion.

What is Flux?

Let's start by defining what Flux is. Flux is a tool that keeps your Kubernetes clusters in sync with sources of configuration, such as Git repositories, and automates updates to your configuration when there is new code to deploy. The declarative nature of Flux means that Kong configurations can be written as a set of facts directly within the source code, with Git as the "single source of truth". Essentially, Flux lets you manage your infrastructure and platform the way developers are already familiar with: by committing code changes to Git.

What is Kong?

Kong is the world's most adopted API gateway that lets you secure, manage and extend APIs or microservices. Get started with Kong API Gateway and learn why it's one of the best API gateways in the industry.

Architecture

Now that we understand what Flux is, let's dive into what our architecture looks like when using Flux. Below, we'll find a diagram that summarizes how Flux (with its CRDs) manages Kong using GitOps.

Context

As one can see from the diagram above, the Platform Engineer has one responsibility: to push code to the repository. It is at this point that the whole GitOps flow starts and triggers a number of actions that will finish with a release of an API (in our case). Let's dive into it further.
Tech Stack and Tooling

Note: You will incur some cost running public cloud resources, so remember to tear them down once you finish this exercise.

For this walkthrough, these are the tools that we will be using:

  • Amazon EKS (provisioned with eksctl) as the Kubernetes cluster
  • Amazon RDS for PostgreSQL as the Kong control-plane database
  • kubectl and the AWS CLI
  • The Flux CLI
  • GitHub as the GitOps source repository
  • Kong Gateway (Enterprise), deployed via its official Helm chart

Prerequisites

To get started, we need to make sure that we have our stack set up, so we'll perform the following steps (a short CLI install sketch follows this list):

  • Fork this repository and clone it locally - https://github.com/robincher/kong-flux-gitops. This will be the working directory from which we execute commands for this exercise.

  • Generate a GitHub Personal Access Token (PAT) - refer to the GitHub guide for details.
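
You will also need the AWS CLI, kubectl, eksctl and the Flux CLI installed locally. A minimal install sketch is shown below, assuming macOS with Homebrew (see each project's documentation for other platforms):

# Assumes Homebrew on macOS; see the official docs for other install methods
brew install awscli kubectl eksctl
brew install fluxcd/tap/flux

# Verify the CLIs are available
eksctl version
flux --version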

How to Deploy Kong using Flux

Set up the EKS Cluster

To get started, let's create an EKS cluster with eksctl



eksctl create cluster --name Kong-GitOps-Test-Cluster  --version 1.23 --region <preferred-aws-region>  --without-nodegroup



Subsequently, let's create a node group for the cluster:



eksctl create nodegroup --cluster Kong-GitOps-Test-Cluster --name Worker-NG  --region <preferred-aws-region>  --node-type t3.medium --nodes 1 --max-pods-per-node 50



After the cluster is completely set up, let's ensure you are able to access the kube-api by running the following:



aws eks update-kubeconfig --region <preferred-aws-region> --name Kong-GitOps-Test-Cluster



Let's check that the cluster is up and that you have access to it:



kubectl get nodes



There should be one node up and running



NAME                                               STATUS   ROLES    AGE    VERSION
ip-192-168-49-64.ap-southeast-1.compute.internal   Ready    <none>   6d2h   v1.23.7



Creating a Remote RDS Postgres (For Kong Control Plane)

Generally, we advise customers to use a managed Postgres when running Kong in production-like environments.

(Image: RDS setup)

Remember to pre-create an initial database. When using a remote database like RDS, Kong will not automatically initialize the database.

(Image: Kong database created in RDS)
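
If you have not created the initial database yet, a rough sketch using psql is shown below. It assumes konger is the master user configured on the RDS instance and that the hostname is a placeholder for your actual RDS endpoint; adjust as needed:

# Hypothetical example: connect to the default database as the RDS master user
# and pre-create the "kong" database that Kong will use
psql --host=postgres.internal.somewhere --port=5432 --username=konger --dbname=postgres \
  -c "CREATE DATABASE kong OWNER konger;"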

Next, create a secret for the DB password



kubectl create secret generic kong-db-password --from-literal=postgresql-password=xxxxx -n kong



Let’s test the connectivity from the EKS cluster to Postgres by first creating a temporary postgres pod



kubectl run -i --tty --rm debug --image=postgres --restart=Never -- sh



Subsequently, we can run a test command to check the connectivity:



psql --host=postgres.internal.somewhere --port=5432 --username=konger --password --dbname=kong



Lastly, we can create an ExternalName service for the Postgres host:



kubectl create service externalname  kong-db-postgresql  --external-name postgres.internal.somewhere -n kong


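As an optional sanity check (a small sketch, reusing the temporary pod approach from above), you can confirm that the ExternalName service resolves and is reachable from inside the cluster:

# The service was created in the "kong" namespace, so its cluster DNS name is
# kong-db-postgresql.kong.svc.cluster.local
kubectl run -i --tty --rm db-check --image=postgres --restart=Never -n kong -- \
  psql --host=kong-db-postgresql.kong.svc.cluster.local --port=5432 --username=konger --password --dbname=kong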

Deploying Kong via HelmRepository and HelmRelease

At this point, our cluster should be properly set up.

Now it's time to briefly explain the configurations. For our experiment, we will deploy Kong using Helm. Flux supports Helm deployments via its HelmRepository and HelmRelease CRDs.

Let's go into the folder of the repository you forked and cloned locally:

cd ~/yourpath/kong-flux-gitops

A HelmRepository defines the source from which Flux will pull the Helm charts.

If you forked the repository, you will see a HelmRepository CRD pre-configured for you to pull Kong's Helm charts. At this stage, you do not need to modify anything in this file.



#sources/kong.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: kong
spec:
  url: https://charts.konghq.com
  interval: 10m



Let's look at the HelmRelease CRD that we will be updating, and go through the configuration step by step:



cat ~/yourpath/kong-flux-gitops/platform/kong-gw/release.yaml



Understanding HelmRelease CRD

From Flux's official documentation: a HelmRelease defines a resource for controller-driven reconciliation of Helm releases via Helm actions such as install, upgrade, test, uninstall, and rollback. As such, we need to tell Flux what to deploy and how to configure Kong.

The first step is to specify, under spec.chart, the chart that will be pulled for the deployment, referencing the HelmRepository created in the previous step and the corresponding chart version.



#platform/kong-gw/release.yaml

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kong
spec:
  interval: 5m
  chart:
    spec:
      chart: kong
      version: 2.15.3
      sourceRef:
        kind: HelmRepository
        name: kong
        namespace: flux-system
      interval: 1m




The next section is spec.values, where you can set any of the Helm values supported by Kong's official Helm chart, which is located in Kong's official GitHub repository, here.

So, without further ado, let's start by setting the image repository and the tag for the Kong container image that we will be deploying. The result is shown below:



#platform/kong-gw/release.yaml

apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: kong
spec:
  chart:
  ….
  ….
  values:
    image:
      repository: kong/kong-gateway
      tag: "3.1.1.3"
    replicaCount: 1
---



Configuring Kong Environment Variables

Within the file, you may notice the spec.values.env section. This section allows you to override the default Kong Gateway configuration parameters via environment variables. This is a very important feature, which is documented here. An example of some of the environment variables you may wish to override is shown below:



#platform/kong-gw/release.yaml
……
   env:
      prefix: /kong_prefix/
      database: postgres
      # Pre-created Kubernetes ExternalName pointing to the PG host
      pg_host: kong-db-postgresql.kong.svc.cluster.local
      pg_port: 5432
      pg_user: konger
      pg_database: kong # Pre-created in RDS first
      pg_password: 
        valueFrom:
          secretKeyRef:
            name: kong-db-password # Pre-created previously
            key: postgresql-password # Pre-created previously

      # Logs Output
      log_level: warn

      # Configuring Admin GUI (Kong Manager) and Admin API
      admin_api_uri: http://admin.customer.kongtest.net:8001 
      admin_gui_url: http://manager.customer.kongtest.net:8002

      # Configuring Portal Settings 
      portal_gui_protocol: http
      portal_api_url: http://portalapi.customer.kongtest.net:8004
      portal_gui_host: portal.customer.kongtest.net:8003   
      portal_session_conf:
        valueFrom:
          secretKeyRef:
            name: kong-session-config
            key: portal_session_conf

      portal: on



Configuring Admin GUI (Kong Manager) and Admin API

Within this section, you'll find some important settings such as admin_api_uri and admin_gui_url, where you have to indicate two hostnames through which the API operator will access Kong Manager. admin_gui_url is the value Kong uses to set the "Access-Control-Allow-Origin" header for CORS purposes, and it decides which domain can access the Kong Admin API via the web browser.

Configuring Portal Settings (Enterprise)

portal_gui_host is the hostname for your Developer Portal, and portal_api_url is the address at which your Kong Dev Portal API is accessible. Both settings are required if you are mapping your portal to your own DNS hostname.

portal_session_conf is the portal session configuration, used to create the session cookie that authenticates Dev Portal users. A sample snippet to create portal_session_conf can be found in scripts/kong-gw-initial.sh.

More information about portal session configuration can be found here: https://docs.konghq.com/gateway/latest/kong-enterprise/dev-portal/authentication/sessions/
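
For illustration only, the kong-session-config secret could be created along the lines below. The cookie names, secret value and storage settings are placeholder assumptions; use the values from scripts/kong-gw-initial.sh and the session documentation linked above for a real setup:

# Hypothetical sketch: a single secret holding both the portal and admin session configs
kubectl create secret generic kong-session-config -n kong \
  --from-literal=portal_session_conf='{"cookie_name":"portal_session","secret":"change-me","storage":"kong","cookie_secure":false}' \
  --from-literal=admin_gui_session_conf='{"cookie_name":"admin_session","secret":"change-me","storage":"kong","cookie_secure":false}'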

Configuring Enterprise License and Default Kong Manager Secret

For an enterprise customer, one needs to create a license secret in the cluster by running the following:



kubectl create secret generic kong-enterprise-license --from-file=license=license.json -n kong --dry-run=client -o yaml | kubectl apply -f -



In the Helm configuration, indicate the license_secret you just created; in this case the secret name is kong-enterprise-license.

For a better security posture, we enable RBAC to secure Kong Manager with basic-auth. To achieve this, we have to indicate the authentication mechanism via enterprise.rbac.admin_gui_auth, and the session configuration via enterprise.rbac.session_conf_secret.

You can read more about this here: https://docs.konghq.com/gateway/latest/kong-manager/auth/basic/



#platform/kong-gw/release.yaml
……
   enterprise:
      enabled: true
      # CHANGEME: https://github.com/Kong/charts/blob/main/charts/kong/README.md#kong-enterprise-license
      license_secret: kong-enterprise-license
      vitals:
        enabled: true
      portal:
        enabled: true
      rbac:
        enabled: true
        admin_gui_auth: basic-auth
        session_conf_secret: kong-session-config
        admin_gui_auth_conf_secret: kong-session-config
      smtp:
        enabled: false



Configuring Kong Services

As you might know, a single Kong container comprises several core services that make it work:

  • Kong Manager - Admin UI for managing your APIs. It interacts with the Kong Admin API behind the scenes
  • Kong Admin API - Admin API for managing your APIs
  • Kong Developer Portal - Developer Portal to browse and search API documentation, and test API endpoints
  • Kong Developer Portal API - API to interact with the Dev Portal
  • Kong Proxy - Proxy service that processes API requests

Deploying these services is as simple as enabling them in the Helm values. For this exercise, we will only enable the Kong Proxy and the Admin API.



#platform/kong-gw/release.yaml
……
   admin:
      enabled: true
      type: LoadBalancer
      annotations: 
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb" 
        service.beta.kubernetes.io/aws-load-balancer-internal: "false"
   manager:
      enabled: false
   proxy:
      # Creating a Kubernetes service for the proxy
      enabled: true
      type: LoadBalancer 
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-type: "nlb" 
        service.beta.kubernetes.io/aws-load-balancer-internal: "false"
   portal:
      enabled: false

   portalapi:
      enabled: false



The full sample HelmRelease CRD can be accessed here: https://github.com/robincher/kong-flux-gitops/blob/main/platform/kong-gw/release.yaml

Now, we can update these configuration files and check them into the repository.

Bootstrapping our cluster with Flux + Kong

After the cluster has been created and the configuration updated, we need to bootstrap the cluster with the Flux CLI. Bootstrapping is a Flux process that installs the Flux controllers and CRDs, and then deploys the manifests in the repository, which in our case means Kong. To find out more, see the Flux documentation on bootstrapping.

Before running the bootstrap command, we need the GitHub Personal Access Token (PAT) generated earlier. Without a PAT, the Flux CLI will not be able to access the GitHub API to perform the necessary actions, such as creating a fresh source repository if one doesn't exist.

The commands to bootstrap the cluster are shown below.

Create an environment variable based on your GitHub PAT. This token will be used by Flux to interact with the corresponding GitHub repository.



export GITHUB_TOKEN=<your-token>



Run the following command to bootstrap Flux into the remote EKS Cluster



# clusters/staging is the folder that contains the manifests
flux bootstrap github \
 --owner=your-github-id \
 --repository=kong-flux-gitops \
 --path=clusters/staging \
 --personal



Once you've bootstrapped the cluster, you should have the Flux CRDs installed in your cluster and also see the Kong pods being created.

The flux bootstrap command performs the following:

  • A Git repository for our manifests is created on GitHub (if it does not already exist)
  • A flux-system namespace with all Flux components is configured on our cluster


kubectl get pods -n flux-system



Expected output, with all of Flux's components up and running:



NAME                                       READY   STATUS    RESTARTS   AGE
helm-controller-56fc8dd99d-sf4ml           1/1     Running   0          4d2h
kustomize-controller-bc455c688-mwfv4       1/1     Running   0          4d2h
notification-controller-644f548fb6-69wr4   1/1     Running   0          4d2h
source-controller-7f66565fb8-q4r7j         1/1     Running   0          4d2h


  • The Flux controllers are set up to sync with our new git repository


flux get sources git



Expected output, with the Git source ready and synced:



NAME        REVISION        SUSPENDED   READY   MESSAGE
flux-system main/43187e2    False       True    stored artifact for revision 'main/43187e206a7d6f3e06406afc17b9579dec3ee04d'



After the sync is established, any Kubernetes manifests checked into the source repository will be automatically deployed into the target cluster.
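
You can also ask Flux directly whether the Kustomization and the Kong HelmRelease have reconciled:

# List the reconciled Kustomizations and HelmReleases across all namespaces
flux get kustomizations
flux get helmreleases -A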

Let’s check if all the Kong pods are up and running



kubectl get pods -n kong



Expected output



NAME                              READY   STATUS      RESTARTS   AGE
kong-kong-665fc7d8db-m67lb        1/1     Running     0          2m2s
kong-kong-init-migrations-62c7x   0/1     Completed   0          2m2s




kubectl get services -n kong



Expected output



NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                         AGE
kong-kong-admin      LoadBalancer   10.100.0.98     xx2.elb.ap-southeast-1.amazonaws.com   8001:31386/TCP,8444:30909/TCP   2m21s
kong-kong-proxy      LoadBalancer   10.100.140.25   xx1.elb.ap-southeast-1.amazonaws.com   80:31675/TCP,443:30040/TCP      2m21s




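As a quick smoke test (a sketch that assumes the NLB has finished provisioning and no routes are configured yet, so Kong is expected to answer with a 404 "no Route matched" message), you can curl the proxy's external hostname:

# Grab the proxy LoadBalancer hostname and send a request to it
PROXY_HOST=$(kubectl get svc kong-kong-proxy -n kong -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -i http://$PROXY_HOST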

Making Kong Config Updates

As shown above, there is only one Kong pod. So what can we do to scale it up via GitOps?

Let’s update the replica count to 2



values:
    image:
      repository: kong/kong-gateway
      tag: "3.1.1.3"
    replicaCount: 2 # Change to this



Commit the code, and push to the remote repository.



git add platform/kong-gw/release.yaml
git commit -m "chore: Bump 2 x Kong pod"

git push origin main




Observe the number of Kong pods now by running the command below:




❯ kubectl get pods -n kong
NAME                                      READY   STATUS      RESTARTS   AGE
kong-kong-665fc7d8db-m67lb                1/1     Running     0          5m
kong-kong-665fc7d8db-nsqm7                1/1     Running     0          26s



You should now see two Kong pods running. This happened because Flux pulled the new manifest from the source repository it is synchronized with, and then updated the state of the cluster accordingly.
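
By default, Flux picks up the change on its next sync interval. If you prefer not to wait, you can trigger a reconciliation manually; the HelmRelease name and namespace below assume the setup in this walkthrough, so adjust them if your manifests differ:

# Force Flux to fetch the latest commit and reconcile the Kong HelmRelease
flux reconcile source git flux-system
flux reconcile helmrelease kong -n kong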

Cleaning-Up

To avoid any bill shock, remember to destroy the EKS cluster at the conclusion of your experiment. Below are the steps to clean everything up.

Uninstall Flux


flux uninstall --namespace=flux-system


Clean up Kong Resources

Do a full clean-up of any remaining Kong resources:


kubectl delete ns kong


Delete the node groups

eksctl delete nodegroup --cluster Kong-GitOps-Test-Cluster --name Worker-NG --region <aws-region>

Delete the cluster


eksctl delete cluster --name Kong-GitOps-Test-Cluster --region <aws-region>




Summary

The key message here is that Kong is flexible and agnostic enough to be installed using several methods. GitOps tooling like Flux supports all of Kong's Kubernetes deployment flavours, with their operators or controllers, and lets you deploy them consistently across clusters.

With GitOps, you can easily automate configurations across multiple clusters. This is critical if, in some unfortunate event, you have to rebuild your current cluster at another site. With Flux, you can easily replicate the configuration by bootstrapping the new cluster and pointing it at the existing source repository, as we did in this exercise. As a result, downtime is decreased and services are minimally impacted.

Kong supports the following deployment flavours for Kubernetes, and you can easily use them with Flux GitOps:

  • Helm
  • Kubernetes Custom Resource Definition (CRD)
  • Kubernetes Operators (Incubator)

Sample Repository can be found here: https://github.com/robincher/kong-flux-gitops
