Ali Naqvi for Flomesh

Flomesh Ingress Controller with Kubernetes Multi-tenancy


It's rare for organizations to provide a dedicated Kubernetes cluster for each tenant; sharing a cluster is generally more cost-effective and simpler to manage. Sharing clusters, however, also comes with difficulties, such as dealing with noisy neighbors and ensuring security.

Clusters can be shared in many ways. In some cases, different applications may run in the same cluster. In other cases, multiple instances of the same application may run in the same cluster, one for each end user. All these types of sharing are frequently described using the umbrella term multi-tenancy.

While Kubernetes does not have first-class concepts of end users or tenants, it provides several features to help manage different tenancy requirements. These are discussed below.

  • A common form of multi-tenancy is to share a cluster between multiple teams within an organization, each of whom may operate one or more workloads. In this scenario, members of the teams often have direct access to Kubernetes resources via tools such as kubectl, or indirect access through GitOps controllers or other types of release automation tools. There is often some level of trust between members of different teams, but Kubernetes policies such as RBAC, quotas, and network policies are essential to safely and fairly share clusters.

  • The other major form of multi-tenancy frequently involves a Software-as-a-Service (SaaS) vendor running multiple instances of a workload for customers. This business model is so strongly associated with this deployment style that many people call it "SaaS tenancy." In this scenario, the customers do not have access to the cluster; Kubernetes is invisible from their perspective and is only used by the vendor to manage the workloads. Cost optimization is frequently a critical concern, and Kubernetes policies are used to ensure that the workloads are strongly isolated from each other.
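For the team-sharing model described above, per-namespace quotas are a common first line of defense against noisy neighbors. As a minimal sketch (the namespace name `team-a` and the limit values are hypothetical, not from this article), a ResourceQuota could look like this:

```shell
# Hypothetical example: cap the compute a single team's namespace can consume.
# The namespace name "team-a" and the limit values are illustrative only.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"
EOF
```

With this in place, the scheduler rejects new pods in `team-a` once the aggregate requests or limits would exceed the quota, so one team cannot starve the others.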

Tenant Isolation

There are several ways to design and build multi-tenant solutions with Kubernetes at its control plane, data plane, or at both levels. Each of these methods comes with its own set of tradeoffs that impact the isolation level, implementation effort, operational complexity, and cost of service.

Control plane isolation ensures that different tenants cannot access or affect each other's Kubernetes API resources, and the Flomesh Service Mesh (FSM) Ingress controller builds on this by providing isolated Ingress controllers per Kubernetes Namespace.

In Kubernetes, a Namespace provides a mechanism for isolating groups of API resources within a single cluster. This isolation has two key dimensions:

  • Object names within a namespace can overlap with names in other namespaces, similar to files in folders. This allows tenants to name their resources without having to consider what other tenants are doing.

  • Many Kubernetes security policies are scoped to namespaces. For example, RBAC Roles and Network Policies are namespace-scoped resources. Using RBAC, Users and Service Accounts can be restricted to a namespace.
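As a sketch of that second point (the Role name, namespace, and ServiceAccount below are hypothetical, chosen only for illustration), a namespace-scoped Role and RoleBinding restricting a tenant's ServiceAccount might look like this:

```shell
# Illustrative only: grant a tenant's ServiceAccount read/write access to
# Deployments and Services inside its own namespace, and nothing cluster-wide.
kubectl apply -f - <<EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: tenant-a-editor
  namespace: tenant-a
rules:
  - apiGroups: ["", "apps"]
    resources: ["services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-editor-binding
  namespace: tenant-a
subjects:
  - kind: ServiceAccount
    name: tenant-a-sa
    namespace: tenant-a
roleRef:
  kind: Role
  name: tenant-a-editor
  apiGroup: rbac.authorization.k8s.io
EOF
```

Because both the Role and the RoleBinding live in `tenant-a`, the ServiceAccount gains no permissions in any other namespace.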

In a multi-tenant environment, a Namespace helps segment a tenant's workload into a logical and distinct management unit. A common practice is to isolate every workload in its own namespace, even if multiple workloads are operated by the same tenant. This ensures that each workload has its own identity and can be configured with an appropriate security policy.
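One such security policy is a default-deny NetworkPolicy that keeps traffic from crossing tenant boundaries. The sketch below (the namespace name `tenant-a` is hypothetical) allows pods in the tenant's namespace to receive traffic only from pods in the same namespace:

```shell
# Illustrative default-deny policy: pods in the tenant namespace only accept
# ingress traffic from pods in the same namespace.
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: tenant-a
spec:
  podSelector: {}        # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # any pod, but only within this namespace
EOF
```

Note that NetworkPolicies only take effect when the cluster's CNI plugin enforces them.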

In this article, we will learn how to use the Flomesh Service Mesh Ingress controller to create physical isolation of Ingress controllers when hosting multiple tenants in your Kubernetes cluster.

Flomesh Service Mesh (FSM)

FSM is an open-source product from Flomesh for Kubernetes north-south traffic, Gateway API control, and multi-cluster management. FSM uses Pipy, a programmable proxy, at its core and provides an Ingress controller, a Gateway API controller, load balancing, cross-cluster service registration and discovery, and more.

The FSM Ingress controller supports a multi-tenancy model via its NamespacedIngress CRD, which deploys a physically isolated Ingress controller for the requested Namespace.

For example, the YAML below defines an Ingress controller that listens on port 100 and also creates a LoadBalancer-type Service for it on the same port.

apiVersion: flomesh.io/v1alpha1
kind: NamespacedIngress
metadata:
  name: namespaced-ingress-100
  namespace: test-100
spec:
  serviceType: LoadBalancer
  ports:
    - name: http
      port: 100
      protocol: TCP

FSM NamespacedIngress

Install FSM

FSM provides a standard Helm chart, which can be installed via the Helm CLI.

$ helm repo add fsm
$ helm repo update

$ helm install fsm fsm/fsm --namespace flomesh --create-namespace --set fsm.ingress.namespaced=true

Verify that all pods are up and running properly.

$ kubectl get po -n flomesh
NAME                                          READY   STATUS    RESTARTS   AGE
fsm-manager-6857f96858-sjksm                  1/1     Running   0          55s
fsm-repo-59bbbfdc5f-w7vg6                     1/1     Running   0          55s
fsm-bootstrap-8576c5ff4f-7qr7k                1/1     Running   0          55s
fsm-cluster-connector-local-8f8fb87f6-h7z9j   1/1     Running   0          32s

Create Sample Application

In this demo, we will deploy the httpbin service in a namespace named httpbin.

# Create Namespace
kubectl create ns httpbin

# Deploy sample
kubectl apply -f -n httpbin

Creating a standalone Ingress Controller

The next step is to create a separate Ingress Controller for the namespace httpbin.

$ kubectl apply -f - <<EOF
apiVersion: flomesh.io/v1alpha1
kind: NamespacedIngress
metadata:
  name: namespaced-ingress-httpbin
  namespace: httpbin
spec:
  serviceType: LoadBalancer
  ports:
    - name: http
      port: 81
      nodePort: 30081
  resources:
    limits:
      cpu: 500m
      memory: 200Mi
    requests:
      cpu: 100m
      memory: 20Mi
EOF

After applying the above resource, you will see an Ingress controller pod running in the httpbin namespace.

$ kubectl get po -n httpbin -l app=fsm-ingress-pipy
NAME                                        READY   STATUS    RESTARTS   AGE
fsm-ingress-pipy-httpbin-5594ffcfcc-zl5gl   1/1     Running   0          58s

At this point, there should be a corresponding Service under this namespace.

$ kubectl get svc -n httpbin -l app=fsm-ingress-pipy
NAME                       TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
fsm-ingress-pipy-httpbin   LoadBalancer   <cluster-ip>   <external-ip>   81:30081/TCP   2m49s

Once you have the Ingress Controller, it's time to create the Ingress resource.

$ kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: httpbin
  namespace: httpbin
spec:
  ingressClassName: pipy
  rules:
    - host:
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: httpbin
                port:
                  number: 14001
EOF

Now that we have created the Ingress resource, let's do a quick curl to see if things are working as expected.

The curl below targets the LoadBalancer IP of my local demo setup; yours will likely differ, so make sure you run the curl against your own EXTERNAL-IP.

curl -sI -H "Host:"
HTTP/1.1 200 OK
server: gunicorn/19.9.0
date: Mon, 03 Oct 2022 12:02:04 GMT
content-type: application/json
content-length: 239
access-control-allow-origin: *
access-control-allow-credentials: true
connection: keep-alive


In this blog post, you learned about Kubernetes multi-tenancy, the features Kubernetes provides to support it, tenancy isolation levels, and how to use the Flomesh Service Mesh (FSM) Ingress controller to set up isolated Ingress controllers for namespaces.

Flomesh Service Mesh (FSM) is a Kubernetes north-south traffic manager that provides Ingress controllers, a Gateway API controller, load balancing, and cross-cluster service registration and discovery. FSM uses Pipy, a programmable network proxy, as its data plane and is suitable for cloud, edge, and IoT environments.
