Hello! This is my first post on dev.to, and I'm excited to be a part of this great community.
A few weeks ago, during the development of one of our apps, we came across an issue where we needed to communicate with all associated pods instead of load-balancing across them. Our backend has 7 pods, all running the same image but with different secrets so they can update databases at different locations.
Instead of writing a service for each location and creating a ton of overhead with repetitive YAML files, we found that changing the deployed service to a headless service was the best way to go.
A headless service is a service with no cluster IP: instead of load-balancing, its DNS entry returns the IPs of the associated Pods. This allows us to interact directly with the Pods instead of going through a proxy. It's as simple as setting `.spec.clusterIP` to `None`, and it can be used with or without selectors; you'll see an example with selectors in a moment.
Check out this sample config:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None # <--
  selector:
    app: test-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
```
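As mentioned, a headless service can also be defined without a selector. In that case Kubernetes doesn't create Endpoints for you, so you supply them yourself in an Endpoints object with the same name as the service. Here's a rough sketch (the `external-headless` name and the IPs are made up for illustration):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-headless
spec:
  clusterIP: None
  ports:
    - protocol: TCP
      port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-headless
subsets:
  - addresses:
      - ip: 10.1.0.50 # example IPs, e.g. backends outside the cluster
      - ip: 10.1.0.51
    ports:
      - port: 80
```

DNS lookups for `external-headless` would then return those manually listed IPs, just like the Pod IPs in the selector-based example below.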
Create a deployment with five pods:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-deployment
  labels:
    app: api
spec:
  replicas: 5
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: eddiehale/hellonodeapi
          ports:
            - containerPort: 3000
```
Create a regular service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: normal-service
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
```
And a headless service:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: headless-service
spec:
  clusterIP: None # <-- Don't forget!!
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 3000
```
Apply the YAML and verify everything deployed correctly:
```shell
$ kubectl apply -f deployment.yaml
$ kubectl get all
NAME                                 READY   STATUS    RESTARTS   AGE
pod/api-deployment-f457fbcf6-6j8f9   1/1     Running   0          5s
pod/api-deployment-f457fbcf6-9gvbp   1/1     Running   0          5s
pod/api-deployment-f457fbcf6-kqbds   1/1     Running   0          5s
pod/api-deployment-f457fbcf6-m76l9   1/1     Running   0          5s
pod/api-deployment-f457fbcf6-qzhxw   1/1     Running   0          5s

NAME                       TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/headless-service   ClusterIP   None             <none>        80/TCP    5s
service/kubernetes         ClusterIP   10.96.0.1        <none>        443/TCP   45h
service/normal-service     ClusterIP   10.109.192.226   <none>        80/TCP    5s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/api-deployment   5/5     5            5           5s

NAME                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/api-deployment-f457fbcf6   5         5         5       5s
```
Now that it's all running, deploy a Pod and execute a few commands to test.
```shell
$ kubectl run --generator=run-pod/v1 --rm utils -it --image eddiehale/utils bash
If you don't see a command prompt, try pressing enter.
root@utils:/#
```
Run `nslookup` on each service to see what DNS entries exist. `nslookup normal-service` returns a single DNS name and IP, whereas `nslookup headless-service` returns the service DNS name along with the IPs of all associated Pods:
```shell
root@utils:/# nslookup normal-service
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      normal-service.default.svc.cluster.local
Address:   10.109.192.226

root@utils:/# nslookup headless-service
Server:    10.96.0.10
Address:   10.96.0.10#53

Name:      headless-service.default.svc.cluster.local
Address:   10.1.0.41
Name:      headless-service.default.svc.cluster.local
Address:   10.1.0.39
Name:      headless-service.default.svc.cluster.local
Address:   10.1.0.38
Name:      headless-service.default.svc.cluster.local
Address:   10.1.0.40
Name:      headless-service.default.svc.cluster.local
Address:   10.1.0.37
```
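An application can consume those same A records directly to fan out a request to every Pod, which is exactly the "update all locations" use case that led us here. A minimal Python sketch (the `resolve_all` helper is my own, and the `headless-service` lookup assumes the code runs inside the cluster):

```python
import socket

def resolve_all(hostname, port=80):
    """Return the distinct IP addresses DNS reports for hostname.

    Against a headless service this yields one address per backing Pod,
    so the caller can contact every Pod instead of one load-balanced target.
    """
    infos = socket.getaddrinfo(hostname, port, proto=socket.IPPROTO_TCP)
    return sorted({info[4][0] for info in infos})

# Inside the cluster this would return all five Pod IPs, which you could
# then iterate over, e.g. making an HTTP call to each one:
# for ip in resolve_all("headless-service"):
#     send_update(ip)  # hypothetical per-pod call
```

Note that DNS results can be cached and Pods come and go, so a real client should re-resolve before each fan-out rather than holding onto the IP list.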
Exit the `utils` pod:
```shell
root@utils:/# exit
exit
Session ended, resume using 'kubectl attach utils -c utils -i -t' command when the pod is running
pod "utils" deleted
```
Then delete the services and deployment:
```shell
$ kubectl delete svc headless-service normal-service && kubectl delete deployment api-deployment
service "headless-service" deleted
service "normal-service" deleted
deployment.extensions "api-deployment" deleted
```
Headless services allow us to reach each Pod directly, rather than having the service act as a load balancer or proxy. There are many use cases for this, and I'd love to hear your experiences and thoughts in the comments!