Joseph D. Marhee

Kubernetes Pod DNS with Jobs not sharing a Pod

I have StackOverflow send me a daily digest of questions asked about Kubernetes, so I can stay aware of the sorts of things people are asking about, and the issues they run into organically when using it in production.

I don't often step in unless a question goes unanswered, but I recently fielded a question about DNS between Pods without using a Service.

Normally, when you create a Service in Kubernetes, you can then access the Pod resources that back it via DNS, i.e. ${your_service_name}.${namespace}.svc.cluster.local.
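For reference, here is a minimal Service sketch; the name, namespace, label, and ports are illustrative assumptions, not from the question. Any Pod carrying the app: my-app label becomes reachable at my-service.my-namespace.svc.cluster.local:

apiVersion: v1
kind: Service
metadata:
  name: my-service
  namespace: my-namespace
spec:
  selector:
    app: my-app        # Pods with this label back the Service
  ports:
  - port: 80           # port exposed at the DNS name
    targetPort: 8080   # port the container actually listens on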

However, Service resources can't be applied to every resource that is defined by a Pod spec (the Pod being the primitive Kubernetes relies on for anything that runs a set of containers). Job resources are one such case, as this user asked on StackOverflow.

You can read my response, but to break it down a little bit:

Basically, when you define a Job, you're mostly asking the Kubernetes scheduler to manage the container lifecycle in a particular way: a Job (or CronJob) is expected to exit upon completion, whereas when that happens in a Pod (or something like a Deployment or StatefulSet), it's probably a little troubling (because it means something you expected to remain online is definitely down).

Normally, when two containers need to communicate and the resource can't be used as a backend to a Service (like a Load Balancer) and exposed with a DNS record, the solution is to create them in the same Pod, which a Job uses under the hood. The component containers are then scheduled to the same node, share a network namespace, and so every exposed port is reachable via localhost.
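To illustrate, here is a minimal sketch of a two-container Pod sharing localhost; the images, ports, and names are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: shared-localhost
spec:
  restartPolicy: Never
  containers:
  - name: server
    image: python:3.12-slim
    # serve HTTP on port 8080 inside the shared network namespace
    command: ["python", "-m", "http.server", "8080"]
  - name: client
    image: curlimages/curl
    # reach the sibling container over localhost; no Service or DNS involved
    command: ["sh", "-c", "sleep 5 && curl -s http://localhost:8080/"]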

However, this user specifically could not do this (for unspecified reasons), so the question becomes: how do you use Service-style DNS with Pod resources that are part of a Job?

In Kubernetes 1.20, a feature in the Pod spec graduated to beta: setHostnameAsFQDN. When set to true, the Pod's hostname becomes its fully qualified domain name, a Service-style DNS name like pod-instance-1.default-subdomain.my-namespace.svc.cluster-domain.example.
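For context, the leading labels of that FQDN come from the Pod spec's hostname and subdomain fields; per the Kubernetes DNS documentation, the subdomain should match a headless Service of the same name for the record to resolve. A minimal Pod-level sketch, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-instance-1
  namespace: my-namespace        # illustrative; the namespace must already exist
spec:
  hostname: pod-instance-1
  subdomain: default-subdomain   # matches a headless Service named default-subdomain
  setHostnameAsFQDN: true
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]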

Because Job resources use the Pod spec, it's as simple as defining your Job to include this:

apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template:
    spec:
      containers:
      - name: pi
        # image and command filled in from the canonical pi Job example
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
      # setHostnameAsFQDN sits at the Pod spec level, alongside containers
      setHostnameAsFQDN: true
      restartPolicy: Never
  backoffLimit: 4

which, as I reported in my answer, I tested to be working through Kubernetes 1.21 (setHostnameAsFQDN is a beta feature as of 1.20).
