Joseph D. Marhee

Deploying Kubernetes (and DigitalOcean Cloud Controller Manager) on DigitalOcean with Terraform

I recently put together a Terraform repository to manage a Kubernetes cluster deployment on DigitalOcean:

https://bitbucket.org/jmarhee/cousteau/src/master/

The goal of this project was to deploy a cluster along with the DigitalOcean cloud-controller-manager (so Kubernetes can provision DigitalOcean resources itself: Load Balancers for Services of type LoadBalancer, and Block Storage Volumes through the DigitalOcean storage class), plus facilities for scaling the node pool and for storing the Terraform state in DigitalOcean Spaces object storage.

Once you clone the repository, create a file called terraform.tfvars (using terraform.tfvars.sample as a template) to define your variables. Only two of them are required:

digitalocean_token = "digitalocean_rw_api_key"

ssh_key_fingerprints = ["key:fingerprint1","key:fingerprint2"]
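If you use doctl, you can look up the fingerprints of the SSH keys already registered on your account to fill in ssh_key_fingerprints:

# The fingerprint column of this listing holds the values to paste into
# ssh_key_fingerprints in terraform.tfvars.
doctl compute ssh-key list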

I recommend looking at vars.tf for a complete listing of the variables available to you; one newer addition is secrets_encrypt (defaults to no), which, when set to yes, will configure at-rest Secrets encryption for your cluster on spin-up.
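Enabling it is a one-line addition to terraform.tfvars; the yes/no values follow the description above, but check vars.tf for the exact type the variable expects:

secrets_encrypt = "yes"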

Once you have all of your options set, the remaining step before you can plan your deployment is to initialize the Terraform providers and backend using the Makefile:

make init-with-storage

which will prompt you for your DigitalOcean Spaces credentials, and then you can format, validate, and plan your deployment:

make validate-apply

before apply-ing.
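For reference, the Spaces-backed state that make init-with-storage sets up uses Terraform's standard S3-compatible backend, since Spaces speaks the S3 API. A minimal sketch of that configuration is below; the bucket, key, and region values are placeholders, and the repository's Makefile may template this differently:

terraform {
  backend "s3" {
    # Point the backend at the Spaces endpoint for your region
    # (placeholder values below).
    endpoint = "nyc3.digitaloceanspaces.com"
    region   = "us-east-1"
    bucket   = "my-terraform-state"
    key      = "cousteau/terraform.tfstate"

    # Spaces is not AWS, so skip the AWS-specific validation checks.
    skip_credentials_validation = true
    skip_metadata_api_check     = true
  }
}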

To scale the pool, modify count in terraform.tfvars to the desired value, then plan and apply another run. Updating the join key only requires refreshing the join token on the controller before bumping the node pool: the token is stored in Terraform state as random_string.kube_init_token_a and _b (concatenated, the two halves form a valid kubeadm token), so retrieve it from state and pass it to kubeadm token create [token] on the controller. Keeping the token in Terraform state, rather than local to kubeadm, means you don't have to manually provide it to the new nodes.
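A rough sketch of that workflow is below. The resource names come from the paragraph above, but how the two halves are joined and how your controller is reached are assumptions, so treat this as an outline rather than the repository's documented procedure:

# 1. Bump the pool size in terraform.tfvars, e.g. count = 5
#    (check vars.tf for the exact variable name).

# 2. Read the two token halves out of Terraform state.
TOKEN_A=$(terraform state show random_string.kube_init_token_a | awk -F'"' '/result/ {print $2}')
TOKEN_B=$(terraform state show random_string.kube_init_token_b | awk -F'"' '/result/ {print $2}')

# 3. Re-register the join token on the controller so new nodes can join;
#    the dot-joined form below assumes the halves map to kubeadm's
#    <6 chars>.<16 chars> token layout.
ssh root@CONTROLLER_IP "kubeadm token create ${TOKEN_A}.${TOKEN_B}"

# 4. Plan and apply the larger node pool.
make validate-apply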

Once deployed, the cluster can, out of the box, provision DigitalOcean Load Balancer resources and Volumes via the included digitalocean cloud-controller-manager. For example:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15.4
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer

If you use doctl, once the above manifest has been applied successfully, the output of doctl compute load-balancer list will show the new load balancer object.
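The same applies to block storage: a claim against the DigitalOcean storage class is enough for the cluster to provision a Volume on your account. A minimal sketch is below; the storage class name shown (do-block-storage) is the upstream DigitalOcean default, so check kubectl get storageclass for the name this cluster actually installs:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-data
spec:
  accessModes:
    - ReadWriteOnce
  # Upstream DigitalOcean default class name; verify with
  # `kubectl get storageclass` on your cluster.
  storageClassName: do-block-storage
  resources:
    requests:
      storage: 5Gi

Once the claim is bound, doctl compute volume list should show the corresponding DigitalOcean Volume, just as the load balancer appeared above.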
