Objective
Network traffic management is a key area of any Kubernetes setup, and it is important to understand how the different components of your cluster handle incoming traffic.
The main objective of this post is to discuss the components of the Nginx ingress controller and the Istio service mesh, the main differences between them, and the following topics:
- Different types of services used in a kubernetes cluster
- What is an ingress controller?
- What is an ingress resource?
- What is an Istio service mesh?
- Traffic flow in Nginx Ingress Controller vs Istio service mesh.
- When to use Istio Service Mesh vs Nginx Ingress controller?
Services in Kubernetes - A quick recap
Before diving into the Nginx ingress controller and the Istio service mesh, it is important to understand the concept of services in a native Kubernetes setup.
If you have worked with Kubernetes or learned its basics, you are probably familiar with the object type called "Service".
For the workloads hosted on your pods to accept traffic, you need some kind of load balancer. Pods are ephemeral: when a pod is terminated or killed, it is replaced by another pod with a different IP address. Because of this, we do not communicate with a pod directly; instead, we use the Service object.
A service is a logical abstraction that exposes the deployed pods hosting your application. Below are a few different types of services that are commonly used:
- NodePort
- ClusterIP
- LoadBalancer
- ExternalName
Depending on the type of service you choose, traffic will be routed accordingly. I'll briefly touch upon what each of these services does.
NodePort
When you use this kind of Service object, Kubernetes allocates a port in the range 30000-32767 on each node, and you can access the backend pods using the IP address of any node followed by that port, for example:
http://192.168.126.8:30007
Below is a YAML manifest for a NodePort service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: MyApp
  ports:
      # By default and for convenience, the `targetPort` is set to the same value as the `port` field.
    - port: 80
      targetPort: 80
      # Optional field
      # By default and for convenience, the Kubernetes control plane will allocate a port from a range (default: 30000-32767)
      nodePort: 30007
If you have multiple nodes, each node has its own IP, and you need to use a node's IP address and the port number to access your workload. This is good for testing and local development, but not ideal for real-world scenarios.
ClusterIP
This is the most common service type and is used to expose your workloads within the cluster. Unlike a NodePort service, you can choose the port the service listens on. When you create a ClusterIP service, Kubernetes creates a Service object with an IP address and the specified port, and you can use that IP, or the DNS name allocated to the service, to reach the workload from anywhere in the cluster. If you do not specify a service type (e.g. ClusterIP/NodePort/LoadBalancer) when creating a Service object, Kubernetes creates a ClusterIP service by default.
Below is a YAML snippet from kubernetes.io for a service of type ClusterIP.
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
This type of service does not expose your workloads outside the Kubernetes cluster, which makes it a good fit for workloads such as backend APIs, databases, and batch-processing jobs. Within the cluster, the service is reachable at a DNS name of the form my-service.<namespace>.svc.cluster.local.
Typically, services that do not need to be exposed to the external world are created as ClusterIP.
Load Balancer
This is one of the most commonly used service types for exposing your workloads to the external world when running in the cloud. When you use this kind of service, Kubernetes provisions a load balancer with the cloud provider that hosts your managed Kubernetes cluster, along with a public IP address that can be reached from outside the cluster. This service type is typically used to expose front-end services. You can also specify the IP address you would like to assign to the service.
Below is a YAML manifest for a LoadBalancer service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  clusterIP: 10.0.171.239
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
      - ip: 192.0.2.127
ExternalName
A service of type ExternalName maps the service to a DNS name instead of selecting pods with a pod selector like the other service types described above.
Below is a YAML manifest that shows how to define an ExternalName service:
apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: prod
spec:
  type: ExternalName
  externalName: my.database.example.com
To learn more about these services in detail, follow this link.
What is an Ingress Controller and why do we need one?
Now that we have covered the basics of Kubernetes services, let's move on to the Nginx ingress controller.
One of the services we discussed above is LoadBalancer. While you can use this type of service to expose your workloads, it works well only for a handful of them. As your microservices grow, exposing them externally requires multiple LoadBalancer services, each of which provisions an additional load balancer and a public IP with your cloud provider. You end up not only managing all of them, but also paying for every public IP provisioned by a LoadBalancer service.
This is where an ingress controller comes into the picture.
An ingress controller is a piece of software that provides reverse proxy, configurable traffic routing, and TLS termination capabilities for Kubernetes services. Kubernetes ingress resources are used to configure the ingress rules and routes for individual Kubernetes services.
When you use an ingress controller such as the Nginx ingress controller, you don't need to create multiple LoadBalancer services to expose your workloads. Installing the Nginx ingress controller creates a single LoadBalancer service, which you can use as the inbound IP for all your workloads while exposing them internally through ClusterIP services. But how do we route the traffic arriving at that single service to multiple backend services? That is achieved using an Ingress resource. An Ingress resource is another Kubernetes object that defines how incoming traffic is forwarded to the appropriate backend service, which in turn directs it to the pods.
I'm going to deploy a sample hello-world application on an AKS cluster, install the Nginx ingress controller, and configure an Ingress resource to route the traffic. While I'm not going to cover this setup in detail, I followed this link to install the ingress controller on AKS, and it is pretty straightforward.
I have created an AKS cluster and deployed the sample application in a new namespace called 'ingress-namespace'.
Then I installed the Nginx ingress controller, which created a few Kubernetes objects in the 'ingress-basic' namespace.
One of them, as you can see, is a LoadBalancer service that accepts the incoming traffic.
What is an Ingress Resource?
An Ingress resource is one of the resource types that defines routing to your backend services. Two routing methods are available in Ingress:
1. Host-based routing
2. Path-based routing
Using an ingress resource you can map the incoming requests to respective backend services.
Below is an Ingress resource YAML file that shows an ingress routing rule based on host name. Two hosts are defined in the resource. If the incoming request is https://foo.bar.com/bar, where the host name is "foo.bar.com" and the path prefix is "/bar", it goes to "service1"; requests for "*.foo.com" with the prefix "/foo" go to "service2". This type of routing is called host-based routing.
code snippet credits: Kubernetes.io
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-wildcard-host
spec:
  rules:
    - host: "foo.bar.com"
      http:
        paths:
          - pathType: Prefix
            path: "/bar"
            backend:
              service:
                name: service1
                port:
                  number: 80
    - host: "*.foo.com"
      http:
        paths:
          - pathType: Prefix
            path: "/foo"
            backend:
              service:
                name: service2
                port:
                  number: 80
Below is an Ingress resource YAML file that shows how routing happens for an incoming request whose path has the prefix '/testpath'. That means an incoming request for https://myexamplesite.com/testpath is evaluated against the rule below and sent to the service called "test" on port 80. This is path-based routing.
code snippet credits: Kubernetes.io
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx-example
  rules:
    - http:
        paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
Traffic flow when you use Ingress Controller and Ingress resource
Here is a picture of how incoming traffic flows when you use Ingress.
Image credits: kubernetes.io
Traffic flow explained:
- The request originates from the client and reaches the ingress-managed load balancer.
- The request is then matched against the host/path rules defined in the Ingress resource.
- The request is then sent to the matching service.
- Finally, the request is sent to the actual backend pod.
Now that we have taken a look at Ingress and its options, let's go over a few concepts of Istio and how they differ from a traditional ingress controller.
What is Istio?
Istio is one of the most widely used service mesh tools, providing capabilities such as observability, traffic management, and security for the microservice workloads hosted on your Kubernetes cluster. For more information on Istio, visit https://istio.io/latest/about/service-mesh/
What is an Istio IngressGateway?
The Istio Ingress Gateway is a component that operates at the edge of the service mesh and serves as the traffic controller for incoming requests. Interestingly, it is also installed as a Service object with a few pods running behind it. The logic to handle traffic lives in the pods that run the Istio Ingress Gateway, and Istio uses Envoy proxy images to run these pods. In this respect it is similar to a plain Nginx ingress controller. The Ingress Gateway pods are configured by a Gateway and a VirtualService.
What is an Istio Gateway? And how is it different from Ingress Controller?
The Istio Gateway is a component similar to the Ingress resource. Just as an Ingress resource is used to configure an ingress controller, an Istio Gateway is used to configure the Istio Ingress Gateway mentioned in the section above. With this component, we can configure which hosts the gateway accepts traffic for, and configure TLS certificates for incoming requests.
Below is a YAML snippet of the Istio Gateway component. In the selector it uses 'istio: ingressgateway' as the label, which is how it binds to the Istio Ingress Gateway. It also has a 'servers' section, which holds the configuration for the port numbers and the hosts this gateway accepts traffic on.
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: httpbin-gateway
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "httpbin.example.com"
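A Gateway on its own only opens a port for the listed hosts; the routing itself comes from a VirtualService that binds to the Gateway through its 'gateways' field. A minimal sketch of that binding is below; the backend service name 'httpbin' and port 8000 are assumptions for illustration, not part of the setup described above.

```yaml
# Sketch: a VirtualService bound to the httpbin-gateway defined above.
# The backend service name "httpbin" and port 8000 are assumed for illustration.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: httpbin
spec:
  hosts:
    - "httpbin.example.com"
  gateways:
    - httpbin-gateway        # binds this routing to the Gateway above
  http:
    - route:
        - destination:
            host: httpbin    # ClusterIP service inside the mesh (assumed)
            port:
              number: 8000
```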
What is a Virtual Service?
A VirtualService is used to configure routing to backend services. Typically, you configure one VirtualService per application and its backend services.
Below is a snippet of a VirtualService that shows how it is configured to route traffic to a backend service based on the incoming host and URI prefix.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
    - reviews.prod.svc.cluster.local
  http:
    - name: "reviews-v2-routes"
      match:
        - uri:
            prefix: "/wpcatalog"
        - uri:
            prefix: "/consumercatalog"
      rewrite:
        uri: "/newcatalog"
      route:
        - destination:
            host: reviews.prod.svc.cluster.local
            subset: v2
    - name: "reviews-v1-route"
      route:
        - destination:
            host: reviews.prod.svc.cluster.local
            subset: v1
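Note that the subsets 'v1' and 'v2' referenced in the VirtualService are not defined there; they come from a DestinationRule. A minimal sketch is below; the 'version' pod labels are assumptions about how the reviews deployments are labeled.

```yaml
# Sketch: DestinationRule defining the subsets used by the VirtualService above.
# The "version" labels are assumed pod labels on the v1/v2 deployments.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews-destination
spec:
  host: reviews.prod.svc.cluster.local
  subsets:
    - name: v1
      labels:
        version: v1   # assumed label selecting the v1 pods
    - name: v2
      labels:
        version: v2   # assumed label selecting the v2 pods
```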
Traffic flow when you use Istio Ingress Gateway with Istio gateway and Virtual Service
The picture below shows how traffic flows in Istio and how the services are configured.
When to use Istio service mesh vs Nginx Ingress Controller?
So far we have seen the differences between a traditional Nginx ingress controller and the Istio service mesh. Choosing a service mesh over the Nginx ingress controller is recommended only if you are looking for:
- Enabling mutual TLS between services
- Observability of your service traffic
- Deployment techniques like blue/green, circuit breaking, A/B testing, etc.
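As an example of the first point, enabling strict mutual TLS for all workloads in a namespace is a one-resource change in Istio. A minimal sketch is below; the namespace 'prod' is an assumption for illustration.

```yaml
# Sketch: enforce mutual TLS for every workload in the "prod" namespace (assumed).
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: prod   # assumed namespace
spec:
  mtls:
    mode: STRICT    # only accept mTLS traffic between sidecars
```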
Use a traditional Nginx ingress controller if all you want is to handle incoming traffic and distribute it to a small number of backend services. As your workloads and services grow, a service mesh tool like Istio becomes essential.
In this blog post, we covered the differences between a traditional ingress controller and the Istio service mesh, and when to use each of them.
This brings us to the end of this article.
Thank you for reading this post and I hope you find it informative.
Happy Learning!!!