Deploying to DigitalOcean Kubernetes using Terraform Cloud and GitHub Actions

This is a follow-up to my previous post about my submission to the DigitalOcean Kubernetes Challenge! I recommend that you at least skim through it for context on Kubegres, the Kubernetes operator we'll be using to deploy Postgres.

Disclaimer: This isn't an intro to any of the technologies mentioned! If you haven't used Terraform with GitHub Actions before, I highly suggest going through this tutorial from HashiCorp itself. I'll mostly be riffing off of it, pointing out important departures throughout.

You will need:

  • A GitHub account and a working Git installation
  • A Terraform Cloud account and a working Terraform installation
  • A DigitalOcean account (the process for other providers is very similar, however)

Brief overview

I had recently deployed a scalable Postgres cluster to DigitalOcean Kubernetes, but I did it manually. The process is straightforward but quite tedious, which makes it a prime candidate for automation.

How it works

  1. Infrastructure configuration is pushed to the GitHub repo, triggering a GitHub Actions workflow
  2. GitHub Actions checks out code to a runner
  3. Runner connects to Terraform Cloud to plan and apply the configuration
  4. Terraform Cloud connects to the provider (DigitalOcean in this case) to provision the needed resources

Diagram of how it works

The steps

  1. Set up Terraform Cloud
  2. Set up the GitHub repository
  3. Set up the Terraform file
  4. Push to the repository
  5. Cleanup!

1. Set up Terraform Cloud

1.1. From your DigitalOcean account, create a personal access token.

1.2. From your Terraform Cloud account, create a new workspace, selecting API-driven workflow as its type.

Workflow types

1.3. In your newly created workspace, go to the Variables tab and create a new workspace variable called DIGITALOCEAN_TOKEN. Choose the env variable category (not a Terraform variable) and check the Sensitive box.

Variables tab
Entering DigitalOcean token

1.4. From your Terraform Cloud account, go to the User settings page, select Tokens from the sidebar, and generate a new token. We'll need this for GitHub Actions.

Generating a Terraform Cloud token


2. Set up the GitHub repository

2.1. Create a new repository. Go to the Settings tab and select Secrets from the sidebar.

Settings tab

GitHub secrets

2.2. Create a new secret called TF_API_TOKEN and paste the Terraform Cloud token you just generated.

Terraform API token

2.3. Navigate to the Actions tab in your repository and find the Terraform template. Click Set up this workflow.

Actions tab
Terraform template

Refer to the Review Actions workflow section in this tutorial for a breakdown of the workflow steps. The template we're using is slightly different in that it doesn't have the update pull request steps.
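For reference, the template ends up looking roughly like this. This is a trimmed sketch; the exact step names and action versions may differ depending on when you grab the template, and I've assumed main as the default branch.

name: "Terraform"

on:
  push:
    branches: [ main ]
  pull_request:

jobs:
  terraform:
    runs-on: ubuntu-latest
    steps:
      # Check out the repo so the runner can see main.tf and the manifests
      - uses: actions/checkout@v2

      # Install Terraform and point it at Terraform Cloud using the
      # TF_API_TOKEN secret from step 2.2
      - uses: hashicorp/setup-terraform@v1
        with:
          cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}

      - run: terraform fmt -check
      - run: terraform init
      - run: terraform plan

      # Only apply on pushes to the default branch
      - name: Terraform Apply
        if: github.ref == 'refs/heads/main' && github.event_name == 'push'
        run: terraform apply -auto-approve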

2.4. Commit the file. The workflow is going to be triggered but it'll quickly error out because we don't have a Terraform file yet!


3. Set up the Terraform file

Click here to see the Terraform file I used. This section is a breakdown of that file rather than a sequence of steps. For the experts in the audience: I'm new to Terraform, so go easy on me! I've ordered the snippets in a way that's conducive to explanation.

terraform {
  backend "remote" {
    organization = "your-org-here"

    workspaces {
      name = "your-workspace-name-here"
    }
  }

  # required_providers also goes in here (see the next snippet)
}

This part tells Terraform to use Terraform Cloud to plan, apply, etc. instead of doing it locally. This also means the state of your deployment will be stored remotely and securely.

  # This block sits inside the terraform block from the previous snippet.
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.16.0"
    }

    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.6.0"
    }

    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.7.0"
    }
  }

Pretty straightforward. The kubectl provider is super useful for elegantly doing kubectl apply to our cluster (we did a lot of that manually last time). We'll see it in action later.
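One thing to note: the file never passes the DigitalOcean token explicitly. The digitalocean provider reads the DIGITALOCEAN_TOKEN environment variable, which Terraform Cloud injects from the workspace variable we set in step 1.3, so a bare provider block is all you need:

# No token argument needed: the provider picks up DIGITALOCEAN_TOKEN from the
# environment, which Terraform Cloud injects from the workspace variable.
provider "digitalocean" {}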

resource "digitalocean_project" "k8s_challenge" {
  name        = "k8s-challenge"
  description = "Entry for the DigitalOcean Kubernetes Challenge"
  purpose     = "Just trying out DigitalOcean"
  environment = "Development"

  resources = [
    digitalocean_kubernetes_cluster.postgres.urn
  ]
}

resource "digitalocean_vpc" "k8s" {
  name   = "k8s-vpc"
  region = "sgp1"

  timeouts {
    delete = "4m"
  }
}

DigitalOcean uses projects to organize resources, so we'll put our cluster in a new one and create a new VPC for it. The delete timeout on the VPC resource gives everything else time to finish deleting before the VPC itself is removed (otherwise the destroy step throws an error; I've found that deletions take a few minutes to register).

data "digitalocean_kubernetes_versions" "prefix" {
  version_prefix = "1.21."
}

resource "digitalocean_kubernetes_cluster" "postgres" {
  name         = "postgres"
  region       = "sgp1"
  auto_upgrade = true
  version      = data.digitalocean_kubernetes_versions.prefix.latest_version

  vpc_uuid = digitalocean_vpc.k8s.id

  maintenance_policy {
    start_time = "04:00"
    day        = "sunday"
  }

  node_pool {
    name       = "worker-pool"
    size       = "s-2vcpu-2gb"
    node_count = 3
  }
}

Here we're finally configuring the cluster itself. We're more or less creating a default one. Notice that we're using the id of the VPC we created. The maintenance policy determines when DigitalOcean will install updates and patches.
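This isn't in my original file, but if you want to poke at the cluster with kubectl from your own machine, you can add an output that exposes the kubeconfig. It's marked sensitive so Terraform Cloud hides the value by default:

# Optional: expose the cluster's kubeconfig as a sensitive output so you can
# grab it from Terraform Cloud and run kubectl against the cluster locally.
output "kubeconfig" {
  value     = digitalocean_kubernetes_cluster.postgres.kube_config[0].raw_config
  sensitive = true
}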

provider "kubernetes" {
  host  = digitalocean_kubernetes_cluster.postgres.endpoint
  token = digitalocean_kubernetes_cluster.postgres.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.postgres.kube_config[0].cluster_ca_certificate
  )
}

provider "kubectl" {
  host  = digitalocean_kubernetes_cluster.postgres.endpoint
  token = digitalocean_kubernetes_cluster.postgres.kube_config[0].token
  cluster_ca_certificate = base64decode(
    digitalocean_kubernetes_cluster.postgres.kube_config[0].cluster_ca_certificate
  )
  load_config_file = false
}

Here we're configuring the kubernetes and kubectl providers to pull credentials from the cluster we just defined, so we can create the Kubegres resources next.

variable "superUserPassword" {}
variable "replicationUserPassword" {}


resource "kubernetes_secret" "postgres_secret" {
  metadata {
    name      = "mypostgres-secret"
    namespace = "default"
  }

  data = {
    superUserPassword       = var.superUserPassword
    replicationUserPassword = var.replicationUserPassword
  }

  type = "Opaque"
}

This is basically the equivalent of the my-postgres-secret.yaml in the Kubegres tutorial.

Short detour: add superUserPassword and replicationUserPassword as Terraform variables in your Terraform Cloud workspace (and mark them Sensitive)!

Terraform Cloud workspace variables
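If you're on Terraform 0.14 or newer, you can also mark the variable declarations themselves as sensitive so the passwords are redacted from plan and apply output. Something like this in place of the bare variable blocks above:

variable "superUserPassword" {
  type      = string
  sensitive = true
}

variable "replicationUserPassword" {
  type      = string
  sensitive = true
}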

data "kubectl_path_documents" "docs" {
  pattern = "./manifests/*.yaml"
}

resource "kubectl_manifest" "kubegres" {
  for_each  = toset(data.kubectl_path_documents.docs.documents)
  yaml_body = each.value
}

Here we're telling the kubectl provider to apply all the YAML manifests in the manifests directory. We're using kubectl_path_documents instead of kubectl_filename_list because the kubegres.yaml file actually consists of multiple documents defining different resources. I got stuck on this the first time around :^)

See the kubectl provider docs for more details.

Short detour: create a manifests directory in your repo and put the required manifests in it! Check the previous post for context; there's also a rough sketch of the cluster manifest below.

Manifests directory
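For the curious, the directory holds the multi-document kubegres.yaml operator manifest plus the cluster definition. As a rough sketch of the latter (based on the Kubegres getting-started example, with the secret names matching the kubernetes_secret above; the image tag, replica count, and storage size are just illustrative):

# Illustrative Kubegres cluster definition; adjust image, replicas, and size.
apiVersion: kubegres.reactive-tech.io/v1
kind: Kubegres
metadata:
  name: mypostgres
  namespace: default
spec:
  replicas: 3
  image: postgres:14.1
  database:
    size: 200Mi
  env:
    - name: POSTGRES_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mypostgres-secret
          key: superUserPassword
    - name: POSTGRES_REPLICATION_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mypostgres-secret
          key: replicationUserPassword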


4. Push to the repository

4.1. You should be pretty much done! Push everything to the repository. At a minimum, you should have a main.tf file, a manifests directory, and a .github/workflows directory.

4.2. Look at your Actions tab to see the triggered workflow. You should see something like the following.

Triggered workflow

A lot of configuration is hidden in that Kubegres manifest. Don't panic if the console throws thousands of lines of output at you.

Thousands of lines of output

You can also check the ongoing run in your Terraform Cloud account. The terraform apply part takes a few minutes. Grab a cup of your favorite beverage and sit tight!

Terraform Cloud output

After a few minutes:

GitHub Action apply complete
Terraform Cloud Apply complete

You can also view the cluster in your DigitalOcean control panel.

DigitalOcean control panel

Clusters also come with a dashboard by default.

Kubernetes cluster dashboard


5. Cleanup!

5.1. Since we used Terraform Cloud, we can simply queue up a destroy plan! Go to your workspace Settings and select Destruction and Deletion. Click the red Queue destroy plan button and confirm by entering the name of your cluster.

Terraform Cloud settings
Queue destroy plan
Confirm destroy plan

5.2. You should be taken to a new run. Click Confirm & Apply below, add a comment, and click Confirm Plan.

Confirm destroy plan

5.3. Wait a few minutes and your cluster should be destroyed! The created DigitalOcean project should also disappear from your control panel shortly.

Successful destroy

VPC delay

As you can see, the VPC took some time to get destroyed.


Thank you!

And that's it! I know this tutorial was a bit gisty, so feel free to ask questions or reach out for debugging help. Thanks to DigitalOcean for organizing the challenge! The repo can be found here.
