Vlad Fratila

Originally published at Medium

The many faces of Kubernetes Services

If you've heard the terms NodePort, Ingress Controller, Cluster Ingress, and kube-proxy and thought "this is confusing," then read on.

Kubernetes is quickly becoming the standard in container orchestration. The platform serves as a solid base for a whole new generation of services. But, as with anything good in life, some assembly is required.

In this article, let's build our understanding of Services and the required K8s add-ons to make them work. We will start with a simple example and expand until we reach the concept of Ingress Controllers.


There are three main things missing from Kubernetes out of the box.

Networking. K8s expects a network setup where each pod can talk to every other pod, and each node can reach every other node. No magic is allowed here (read: no NAT).

Storage. Apart from the API objects stored in etcd, K8s has no storage solution to speak of. You need to provide one suited to your needs. Or more. As you'll see, you can always add more of everything.

Ingress. And lastly, K8s does not offer a way to expose your services. The Endpoint/Service combo is made for in-cluster communication, but out of the box, there's nothing for the outside world to consume.

Services: let's make a door

In this article, we'll focus on the third aspect: how to ingress into the cluster. Say we want to deploy a Go app. We create a Deployment for the pods and a Service to expose them. What are our options?
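Before we weigh the options, here's a minimal sketch of that starting point. The name go-app, the image, and port 8080 are all made up for this example:

```yaml
# Hypothetical Go app: a Deployment for the pods...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
        - name: go-app
          image: example/go-app:1.0   # made-up image
          ports:
            - containerPort: 8080
---
# ...and a Service to expose them inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: go-app
spec:
  selector:
    app: go-app
  ports:
    - port: 80          # the Service's own port
      targetPort: 8080  # the container port it forwards to
```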

DNS

Most cluster operators install a cluster-aware DNS solution like CoreDNS. This is basically the first thing you do. Thanks to in-cluster DNS, our service receives a locally-resolvable DNS name. Other services inside the cluster can resolve this name and reach our service; traffic is balanced via kube-proxy, a little utility that works its magic via iptables.
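To see this in action, a throwaway pod can resolve the Service's name. This assumes the hypothetical go-app Service from above lives in the default namespace:

```yaml
# One-shot pod that resolves the Service's in-cluster DNS name.
# The name follows the <service>.<namespace>.svc.cluster.local convention.
apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  restartPolicy: Never
  containers:
    - name: lookup
      image: busybox:1.36
      command: ["nslookup", "go-app.default.svc.cluster.local"]
```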

Remember, K8s expects us to provide this Network connectivity. Usually, cluster operators implement a robust networking component via a K8s CNI solution (a topic for another post).

If we use a cloud-managed cluster, we can be sure that CNI and DNS are already set up. That leads us straight to our first real choice in the matter.

NodePort

This is the first thing we can do to expose our Service to the outside world. We pick a port, and K8s reserves it for our Service on every node in the cluster. This is very much what Docker would do on a normal host. This approach is built in and does not require any additional components.
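As a sketch, here's the hypothetical go-app Service again, this time as a NodePort:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: go-app
spec:
  type: NodePort
  selector:
    app: go-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080   # optional; must fall in the NodePort range (30000-32767 by default)
```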

It has some drawbacks:

  • no port re-use - a service claims ports, so a second service cannot use them
  • you need to keep track of ports across the entire cluster to avoid annoying conflicts
  • you need a solution to manage a DNS entry that includes all of your node IPs - not feasible if you think about node replacements
  • if you use Deployments, a small degradation in latency is expected.

More on that latency bit: this model is usually well-suited to a DaemonSet, so that each Node has a Pod listening on this port. This way, when a request lands on a Node, it finds a Pod right there, and K8s doesn't need to forward it to another host. Compare this to a regular Deployment, where your Pods may not end up on every Node: K8s has to take requests that land on a Pod-less Node and forward them to a Node that is running your Pod (this happens through kube-proxy). Make sense? Yeah, it's a bit weird - but perfectly normal once you spend some time thinking about it.
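For illustration, here's the DaemonSet variant of our hypothetical app - one Pod per Node, so a request never has to hop hosts:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: go-app
spec:
  selector:
    matchLabels:
      app: go-app
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
        - name: go-app
          image: example/go-app:1.0   # made-up image
          ports:
            - containerPort: 8080
```

If you go this route, setting `externalTrafficPolicy: Local` on the Service tells kube-proxy to keep traffic on the node it arrived on, instead of forwarding it elsewhere.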

LoadBalancer

If we're doing this in the cloud (and why wouldn't we?), then we can choose to create a LoadBalancer for our Kubernetes Service. Just change the Service type from NodePort to LoadBalancer, and all of our problems will go away:

  • who cares about port re-use: behind the scenes, K8s allocates a NodePort for each service automatically and maps the load balancer to it
  • we don't need to keep track of these ports anymore
  • we can create a DNS record that points to the load balancer - every cloud will manage this endpoint for you, and you can set up scaling groups to handle node replacements as well
  • If we use Deployments (and we should) the latency increase… does a 360 and stays in place

Turns out that if we're not using a DaemonSet, the same latency problem applies. All of our Nodes will be tied to the LoadBalancer, DNS will be managed by the cloud, but our Pods may not be spread out to all Nodes. So the kube-proxy forwarding will still happen.

No surprise there.
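For reference, the switch itself is a one-line change to the Service from earlier:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: go-app
spec:
  type: LoadBalancer   # was NodePort; the cloud provisions a load balancer and wires it up
  selector:
    app: go-app
  ports:
    - port: 80
      targetPort: 8080
```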

But wait. There's one more problem. What if we want to deploy another service?

Oops! It just created another Load Balancer. Well, that was unexpected.

Our cluster is now like a house with many doors.


Cluster Ingress: Our welcome area

To solve these issues, we arrive at the concept of Cluster Ingress.

The Cluster-wide Ingress is a single point of entrance for the whole cluster. The most common implementation is a Service of type LoadBalancer (which, behind the curtains, is backed by a NodePort). Typically, this would be a Layer 4 load balancer, which will be backed by…

… drum roll…

Nginx.

In K8s, as in life, we're always building on what we have.

That's right, we can use Nginx. Or Envoy, or Traefik, or something else. You are now back on familiar ground. All of our Services can be upstreams to our new Ingress (or vhosts, or backends, what have you), and we only need to deploy a single Load Balancer in the cloud.

This way, we solve all of our drawbacks. Our Ingress app will sit behind a LoadBalancer, so no more ports to keep track of.
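As a sketch, only the ingress deployment gets a LoadBalancer Service; all the app Services stay internal behind it. The names and labels here are hypothetical:

```yaml
# The single cloud load balancer, pointing at the nginx ingress pods.
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  selector:
    app: ingress-nginx   # whatever label your ingress pods carry
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```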

We can even solve the latency issue. By deploying our Ingress solution as a DaemonSet, it will be available on each node. Right where you want it. This is an excellent use of DaemonSets, which by their nature need to be used sparingly (or else they threaten to fill up your nodes and leave no room for other pods).

Ingress controller: The friendly front-desk staff

Now that we have a fancy waiting area to impress our requests, we need one more piece: the Ingress Controller.

You didn't think you would write nginx config by hand, did you?

The Ingress controller is a Kubernetes controller - a component deployed in the cluster that manages a special resource type: the Ingress resource. (Strictly speaking, Ingress is a built-in API object rather than a CRD - but custom resources are, again, a topic for another post.)

If you deploy an Ingress resource, you can link your service to the Ingress app, say nginx, and specify ports, server names, and what have you. The Ingress Controller is then responsible for translating your Ingress YAML into nginx configuration and reloading nginx so it can serve your new upstream.
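Here's a minimal sketch of such an Ingress resource, using the networking.k8s.io/v1 API; the hostname is made up, and it points back at our hypothetical go-app Service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: go-app
spec:
  rules:
    - host: go-app.example.com   # hypothetical hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: go-app
                port:
                  number: 80
```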

IngressRoute

You may see the term IngressRoute thrown around. This is a CRD, created by the folks at Heptio to manage backends for Envoy. You can think of it as Ingress v2. If you deploy an Envoy-based Ingress Controller such as Contour, you will use the IngressRoute object to define rules, rather than the default Ingress object.
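For comparison, a minimal IngressRoute for the same hypothetical app might look like this (note that newer Contour releases have since superseded IngressRoute with HTTPProxy):

```yaml
apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: go-app
spec:
  virtualhost:
    fqdn: go-app.example.com   # hypothetical hostname
  routes:
    - match: /
      services:
        - name: go-app
          port: 80
```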


I hope this has been informative. Should I go into more detail? Are there more topics I should explore? Let me know in the comments.
