Joseph D. Marhee

Manage an Authoritative DNS Server on Kubernetes with Helm and Fleet

Among my other clusters, I run infrastructure-specific services out of a small Kubernetes cluster. One such service (aside from things like a VPN gateway, Docker registry, etc.) is a DNS nameserver pair that I use both inside and outside of the cluster. I install and configure these nameservers using Helm, and deploy/update the Deployment (and the ConfigMaps it serves zone files from) using Fleet.

In the repo that Fleet watches, I create the authoritative-dns chart by running:

helm create authoritative-dns

which creates the directory and a template chart (i.e. a templates directory, Chart.yaml, values.yaml, etc.). I'm mostly concerned with setting up templates/deployment.yaml and templates/configmap.yaml (and optionally templates/service.yaml) to receive the BIND configuration and zone files I'd like it to serve.
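
For reference, the generated scaffold looks roughly like this (the exact file list varies by Helm version, and templates/configmap.yaml is one we add ourselves below):

authoritative-dns/
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── _helpers.tpl
    ├── configmap.yaml
    ├── deployment.yaml
    └── service.yaml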

In templates/configmap.yaml, I'm going to set up two ConfigMap resources, one to handle the named.conf configuration, and another to hold the zone file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: bind-config
data:
{{- toYaml .Values.bindconfig | nindent 2 }}
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: bind-zone-config
data:
{{- toYaml .Values.bindzones | nindent 2 }}

The .Values references inside the template brackets ({{ }}) refer to data in your values.yaml file, which is the part of the chart most of your modifications should go into, since it holds the variable configuration items. I'll return to that in a moment.
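
As a rough illustration of that wiring (placeholder content, rendered output approximate), a values entry like:

bindconfig:
  named.conf: |
    // BIND options and zone declarations go here

renders the first ConfigMap's data section as roughly:

data:
  named.conf: |
    // BIND options and zone declarations go here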

In templates/deployment.yaml, we just need a Deployment that will mount these ConfigMap resources to the BIND container:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.name }}
  labels:
    {{- include "authoritative-dns.selectorLabels" . | nindent 4 }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      {{- include "authoritative-dns.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      annotations:
        roll: {{ randAlphaNum 5 | quote }}
        {{- with .Values.podAnnotations }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
      labels:
        {{- include "authoritative-dns.selectorLabels" . | nindent 8 }}
    spec:
      hostNetwork: true
      affinity:
        {{- toYaml .Values.affinity | nindent 8 }}
      containers:
        - name: {{ .Values.name }}-bind
          image: {{ .Values.image.image }}:{{ .Values.image.tag }}
          imagePullPolicy: IfNotPresent
          ports:
            - name: dns-tcp
              containerPort: {{ .Values.service.targetPort }}
              protocol: TCP
            - name: dns-udp
              containerPort: {{ .Values.service.targetPort }}
              protocol: UDP
          {{- if .Values.readinessProbe.enabled }}
          readinessProbe:
            tcpSocket:
              port: dns-tcp
            initialDelaySeconds: 5
            periodSeconds: 10
          {{- end }}
          {{- if .Values.livenessProbe.enabled }}
          livenessProbe:
            tcpSocket:
              port: dns-tcp
            initialDelaySeconds: 5
            periodSeconds: 10
          {{- end }}
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
          volumeMounts:
          - name: bind-config
            mountPath: /etc/bind
          - name: bind-zone-config
            mountPath: /var/lib/bind
      volumes:
        - name: bind-config
          configMap:
            name: bind-config
            items:
              - key: named.conf
                path: named.conf
        - name: bind-zone-config
          configMap:
            name: bind-zone-config
{{- toYaml .Values.zoneconfigs | nindent 12 }}

where in this case, we want:

        - name: bind-zone-config
          configMap:
            name: bind-zone-config
{{- toYaml .Values.zoneconfigs | nindent 12 }}

to refer to each zone file in the ConfigMap, which you'll also populate in your values.yaml file; each zone file then appears in the container under /var/lib/bind/{{ your zone }}.

Note: the Deployment's pod template carries an annotation, roll: {{ randAlphaNum 5 | quote }}. This is there because BIND requires a restart (and therefore the Pods need to be restarted) to pick up changes in the zone configs. The annotation is given a new random value on each upgrade, which triggers the restart. As written, this rolls the Pods on every upgrade; there are other options for rolling your Deployment's Pods (for example, only on changes to the ConfigMap itself).
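
For example, a common alternative (a sketch, not what this chart uses) is to annotate the pod template with a checksum of the rendered ConfigMap template, so the Pods only roll when the configuration actually changes:

      annotations:
        checksum/bind-config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}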

Because of how I use it, this Deployment just uses the host network, so a Service is not required; but you can use a template like this templates/service.yaml to expose ports 53/TCP and 53/UDP through a LoadBalancer, in this case MetalLB:

apiVersion: v1
kind: Service
metadata:
  name: {{ include "authoritative-dns.name" . }}-tcp
  labels:
    {{- include "authoritative-dns.labels" . | nindent 4 }}
  {{- if .Values.service.annotations }}
  annotations:
    {{- toYaml .Values.service.annotations | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.type }}
  externalTrafficPolicy: Local
  ports:
    - port: {{ .Values.service.targetPort }}
      targetPort: {{ .Values.service.targetPort }}
      protocol: TCP
      name: dns-tcp
  selector:
    {{- include "authoritative-dns.selectorLabels" . | nindent 4 }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "authoritative-dns.name" . }}-udp
  labels:
    {{- include "authoritative-dns.labels" . | nindent 4 }}
  {{- if .Values.service.annotations }}
  annotations:
    {{- toYaml .Values.service.annotations | nindent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.targetPort }}
      targetPort: {{ .Values.service.targetPort }}
      protocol: UDP
      name: dns-udp
  selector:
    {{- include "authoritative-dns.selectorLabels" . | nindent 4 }}
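
One MetalLB-specific note: each LoadBalancer Service normally gets its own address, so if you want the TCP and UDP Services to share a single IP, you can set MetalLB's shared-IP annotation through service.annotations in values.yaml (the sharing key value below is arbitrary; both Services pick it up from the same values block):

service:
  annotations:
    metallb.universe.tf/allow-shared-ip: "authoritative-dns"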

With these templates in place, we now need a values.yaml to supply the data:

replicaCount: 2
region: ORD1
name: authoritative-dns
image:
  image: internetsystemsconsortium/bind9
  pullPolicy: IfNotPresent
  tag: 9.11

imageConfig:
  pullPolicy: IfNotPresent

service:
  type: LoadBalancer
  port: 53
  targetPort: 53
  labels:

# resources:
#   requests:
#     memory: 1Gi
#     cpu: 300m

readinessProbe:
  enabled: true
livenessProbe:
  enabled: true

bindzones:
  c00lz0ne.internal: |
      $TTL    604800
      @       IN      SOA     ns1.c00lz0ne.internal. admin.c00lz0ne.internal. (
                                    5         ; Serial
                               604800         ; Refresh
                                86400         ; Retry
                              2419200         ; Expire
                               604800 )       ; Negative Cache TTL
      ;
      c00lz0ne.internal.       IN      NS      ns1.c00lz0ne.internal.
      c00lz0ne.internal.       IN      NS      ns2.c00lz0ne.internal.
      ns1                      IN      A       10.24.0.11
      ns2                      IN      A       10.24.0.12
      rke00.mgr                IN      A       10.24.0.99
      rke01.mgr                IN      A       10.24.0.100
      rke-lb.mgr               IN      A       100.69.29.9

bindconfig:
  named.conf: |
    options {
            directory "/var/cache/bind";
            listen-on port 53 { any; };
            auth-nxdomain yes;
            forwarders { 
                    1.1.1.1; 
                    1.0.0.1; 
            };
            listen-on-v6 { ::1; };
            allow-recursion {
                    none;
            };
            allow-transfer {
                    none;
            };
            allow-update {
                    none;
            };
    };

    zone "c00lz0ne.internal" {
      type master;
      file "/var/lib/bind/c00lz0ne.internal";
    };

zoneconfigs:
  items:
  - key: c00lz0ne.internal
    path: c00lz0ne.internal

You'll see that for my cool domain c00lz0ne.internal, I populate bindzones.c00lz0ne.internal with the BIND zone file itself, and then update bindconfig.named.conf to add a zone stanza pointing at that file:

    zone "c00lz0ne.internal" {
      type master;
      file "/var/lib/bind/c00lz0ne.internal";
    };

The file path above uses the mount path we defined in the Deployment for the ConfigMap containing these files. Finally, zoneconfigs.items should contain a key and path for each of the zone files you created and referenced in named.conf, so that each one is mounted into the Deployment when the template renders.
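
To serve an additional zone (another.internal here is purely illustrative), the same three places get touched: a new key under bindzones holding its zone file, a matching zone stanza in named.conf pointing at /var/lib/bind/another.internal, and another key/path pair under zoneconfigs.items:

bindzones:
  another.internal: |
      ; zone file contents for another.internal

zoneconfigs:
  items:
  - key: c00lz0ne.internal
    path: c00lz0ne.internal
  - key: another.internal
    path: another.internal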

At this point, you could install your chart:

helm install authoritative-dns ./
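
If you want to check how everything renders before installing (or after editing values.yaml), helm lint and helm template are useful:

helm lint ./
helm template authoritative-dns ./ | less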

Once the Deployment is online:

jdmarhee@boris ~/repos/terraform-digitalocean-k3s-highavailability (master) $ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
authoritative-dns-6576df5d48-7glqk   1/1     Running   0          15s
authoritative-dns-6576df5d48-nhjsq   1/1     Running   0          15s
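
Before testing resolution, you can also confirm that the zone files were mounted where named expects them (the Deployment name comes from .Values.name):

kubectl exec deploy/authoritative-dns -- ls /var/lib/bind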

you can test resolution:

dig +short -t ns c00lz0ne.internal @${SVC_IP}
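
You can also check an individual record, or force TCP to exercise the dns-tcp port, using names from the example zone above:

dig +short rke00.mgr.c00lz0ne.internal A @${SVC_IP}
dig +tcp +short ns1.c00lz0ne.internal A @${SVC_IP}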

However, because we want the Deployment to update with the changes to the ConfigMap and other values, we can use a GitOps tool like Fleet to update on changes to the repository.

Let's say I keep all of my charts in a repo called helm-charts (Fleet also supports private repositories, but for the sake of demonstration, this is a basic public repo):

apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: sample
  # This namespace is special and auto-wired to deploy to the local cluster
  namespace: fleet-local
spec:
  # Everything from this repo will be run in this cluster. You trust me, right?
  repo: "https://github.com/c00lz0ne-infra/helm-charts"
  paths:
  - authoritative-dns

Here, paths lists the charts that Fleet will track for changes once this resource is applied and Fleet is running.
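
Once the GitRepo is applied, you can watch its status (and the Bundles Fleet creates from it) to confirm the chart was picked up and deployed:

kubectl -n fleet-local get gitrepo sample
kubectl -n fleet-local get bundles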

In my case, I only want this chart deployed to specific clusters (here, those with an env label matching svcs), so I add the following to the above GitRepo:

  targets:
  - clusterSelector:
      matchLabels:
        env: svcs
    name: svcs-cluster

so when this cluster, or a new cluster with a matching label (or one matched by any of the other methods of mapping to downstream clusters), is targeted, the chart will be applied on each change to the specified branch or revision of the git repository.
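
For the selector to match, the downstream cluster's Cluster resource registered with Fleet needs the env: svcs label; with Rancher-managed Fleet these typically live in the fleet-default namespace (the namespace and cluster name below are assumptions for illustration):

kubectl -n fleet-default label clusters.fleet.cattle.io svcs-cluster env=svcs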

In my case, only a small, fixed number of clusters will ever be managed this way, but if you want to create ClusterGroups to target (using these matching rules, so that rather than mapping the GitRepo to the rule, you map the GitRepo to the ClusterGroup and manage it as its own resource), one can be defined like this:

kind: ClusterGroup
apiVersion: fleet.cattle.io/v1alpha1
metadata:
  name: services-group
  namespace: clusters
spec:
  selector:
    matchLabels:
      env: svcs

and then update the GitRepo resource's targets to reference this group:

    clusterGroup: services-group

or:

    clusterGroupSelector:
      matchLabels:
        region: us-east

That is, you can name a specific group, use a clusterGroupSelector to find ClusterGroups in a specific region (as in the example above), or combine these approaches to subset the group further (for example, targeting only the us-east portion of that group).
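
As a sketch (the target name is illustrative), a single target entry that names the group and also narrows it by cluster label might look like:

  targets:
  - name: svcs-us-east
    clusterGroup: services-group
    clusterSelector:
      matchLabels:
        region: us-east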

The result of all of this is that, rather than writing complex rules for when and how to run Helm against multiple clusters, Fleet uses the above resources to apply chart changes from a specific branch across the defined clusters.
