DEV Community

Spacelift team for Spacelift

Posted on • Originally published at spacelift.io

What is the Terraform Kubernetes provider?

When it comes to managing infrastructure as code, Terraform is the go-to choice for modern engineers. As well as using the Terraform provider specific to the cloud platform hosting your Kubernetes instance, such as azurerm for Azure Kubernetes Service (AKS) or aws for Elastic Kubernetes Service (EKS), you can also use the native kubernetes provider to deploy and manage objects on your K8s cluster directly.

In this article, we will dive into using the kubernetes Terraform provider, first obtaining a token from the AKS cluster to authenticate. Once connected, we will deploy Kubernetes manifests described in HCL (HashiCorp Configuration Language).

What is a Terraform provider?

Terraform providers are plugins that enable Terraform to interact with specific infrastructure resources. They serve as an interface between Terraform and the provider you want to use, converting Terraform configurations into API calls and allowing Terraform to manage resources across multiple environments. All providers use the same language to describe different Terraform components, but each provider has its own set of these components (resources and data sources).

A common misconception about Terraform providers is that they only work with cloud platforms. In fact, if your application exposes an API, you can implement your own Terraform provider, and many providers are not tied to a cloud at all: Kubernetes, Helm, RabbitMQ, Spacelift, and Aviatrix are all examples.
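For instance, a provider is pulled into a configuration through a required_providers block; a minimal sketch for the Kubernetes provider (the version constraint here is illustrative):

```hcl
terraform {
  required_providers {
    kubernetes = {
      source  = "hashicorp/kubernetes" # provider address on the public registry
      version = ">= 2.17.0"            # illustrative constraint
    }
  }
}
```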

What is the Kubernetes provider in Terraform?

The Kubernetes provider in Terraform translates your Terraform configurations into API calls to your Kubernetes cluster. It enables you to create and manage Kubernetes resources like namespaces, pods, deployments, config maps, secrets, and more within your cluster, all using the HCL language.


When to use the Terraform Kubernetes provider?

As a best practice, you should avoid using the Terraform Kubernetes provider to manage your K8s resources; instead, use Helm or Kustomize directly for this task. However, if your entire codebase is in Terraform and your Kubernetes setup isn't complex, leveraging Terraform can be beneficial, especially if you want everything in a single workflow.

Using the Terraform K8s provider alongside other Terraform providers allows you to manage your stack in one place, simplifying dependency management and automating deployments. It also makes sense to use the Terraform Kubernetes provider for multi-cloud deployments that rely on the managed K8s services your cloud providers offer.

How to configure the Terraform Kubernetes provider?

To configure the Terraform Kubernetes provider, first add the provider block in your Terraform configuration files with the required provider version. Then, specify all the necessary parameters to authenticate and connect Terraform to your Kubernetes cluster.

You can use different attributes when configuring the Terraform Kubernetes provider, for example:

  • host - the hostname (URI) of the Kubernetes API server
  • username - the username for HTTP basic authentication
  • password - the password for HTTP basic authentication
  • config_path - the path to the kubeconfig file
  • config_paths - a list of paths to kubeconfig files
  • config_context - the kubeconfig context to use
  • client_key - the client certificate key for TLS authentication (PEM-encoded)
  • client_certificate - the client certificate for TLS authentication (PEM-encoded)
  • cluster_ca_certificate - the root certificate bundle for TLS authentication (PEM-encoded)
  • token - a service account token

Head over to the provider documentation for the full list of options.

Authentication with a kubeconfig path

In this example, we simply point the provider at the kubeconfig file, and it loads all connection details from there.

provider "kubernetes" {
  config_path = "~/.kube/config" # Path to the kubeconfig file
}

HTTP authentication with a token

Here, we specify the Kubernetes API server's URL and an authentication token, avoiding the need for a kubeconfig file.

provider "kubernetes" {
  host  = "https://your-kubernetes-api-server"
  token = "your-token"
}
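Hardcoding a token like this is fine for illustration, but in practice it is better supplied through a sensitive variable so it never lands in version control. A minimal sketch (the variable name is illustrative):

```hcl
# Declare the token as a sensitive input variable (name is illustrative).
variable "cluster_token" {
  type        = string
  sensitive   = true
  description = "Service account token for the Kubernetes API server"
}

provider "kubernetes" {
  host  = "https://your-kubernetes-api-server"
  token = var.cluster_token
}
```

The value can then be passed via the TF_VAR_cluster_token environment variable or a .tfvars file excluded from source control.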

HTTP basic authentication with username/password

Similar to the example above, in this provider configuration, we simply replace the token with a username/password combination.

provider "kubernetes" {
  host     = "https://your-kubernetes-api-server"
  username = "your-username"
  password = "your-password"
}

TLS authentication with client certificates

With this provider configuration, we specify the certificates and keys used for secure communication with the API server.

provider "kubernetes" {
  host                   = "https://your-kubernetes-api-server"
  client_certificate     = file("path/to/client.crt")
  client_key             = file("path/to/client.key")
  cluster_ca_certificate = file("path/to/ca.crt")
}

Authentication with a kubeconfig and a specific context

In this example, we use the kubeconfig file and specify which context we want to use.

provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "dev"
}


Example: Setting up the AKS cluster using the Terraform Kubernetes provider

To set up the AKS cluster itself, it is recommended to use the Terraform provider for your cloud platform of choice; in the case of Azure, that is azurerm. You can use the azuread provider for Terraform to set up Azure Active Directory authentication to your AKS cluster.

If you need to set up AKS, check out the Provision Azure Kubernetes Service (AKS) Cluster using Terraform article to learn how to use the Terraform registry module to deploy a test cluster with just four lines of code.

The easiest way to set up the Kubernetes provider with AKS is to first use the Azure CLI command below to get credentials:

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

Next, configure the kubernetes provider block by supplying a path to your kubeconfig file using the config_path attribute or using the KUBE_CONFIG_PATH environment variable.

A kubeconfig file may have multiple contexts. If config_context is not specified, the provider will use the default context.

provider "kubernetes" {
  config_path = "~/.kube/config"
}
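As a quick sketch of the environment-variable route, export KUBE_CONFIG_PATH before running Terraform so the path stays out of your HCL entirely:

```shell
# The Kubernetes provider reads KUBE_CONFIG_PATH when config_path is not set in HCL.
export KUBE_CONFIG_PATH="$HOME/.kube/config"
echo "$KUBE_CONFIG_PATH"
```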

Another way to configure the provider, shown below, supplies the host and cluster CA certificate in the kubernetes provider block and uses the exec plugin to run the kubelogin command, which obtains an Azure AD (AAD) token for the cluster.

The service principal is created along with the AKS cluster deployment and can be referenced here to extract the server ID and client secret. The client ID is pulled from the Azure AD application, and the Azure AD tenant ID is also required.

The command is set up to log in as a service principal (spn). Pulled together, this information is enough to request a login token for the AKS cluster.

providers.tf

terraform {
  required_version = ">= 1.3.7"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.41.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = ">= 2.33.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.17.0"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)

  # using kubelogin to get an AAD token for the cluster.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "kubelogin"
    args = [
      "get-token",
      "--environment",
      "AzurePublicCloud",
      "--server-id",
      data.azuread_service_principal.aks_aad_server.application_id, # Note: The AAD server app ID of AKS Managed AAD is always 6dae42f8-4368-4678-94ff-3960e28e3630 in any environment.
      "--client-id",
      azuread_application.app.application_id, # SPN App Id created via terraform
      "--client-secret",
      azuread_service_principal_password.spn_password.value,
      "--tenant-id",
      data.azurerm_subscription.current.tenant_id, # AAD Tenant Id
      "--login",
      "spn"
    ]
  }
}

Example: Managing Kubernetes resources with Terraform provider

We will continue with the setup from the previous example to show how to use the Terraform Kubernetes provider to create and manage Kubernetes resources.

1. Create a Kubernetes namespace

Now that the provider is set to authenticate to our cluster, we need to create a namespace for our new resources.

The example file below takes a variable for the name called var.kube_namespace and sets an annotation and label.

ns.tf

resource "kubernetes_namespace_v1" "ns" {

  metadata {
    name = var.kube_namespace

    annotations = {
      name = "This blog post is amazing"
    }

    labels = {
      tier = "frontend"
    }
  }
}
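The var.kube_namespace referenced above still needs to be declared somewhere; a minimal declaration could look like the sketch below (the default value is an illustrative assumption):

```hcl
# variables.tf - declaration for the namespace name used in ns.tf
variable "kube_namespace" {
  type        = string
  description = "Name of the Kubernetes namespace to create"
  default     = "demo" # illustrative default
}
```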

2. Create a Kubernetes Pod

The example below shows how to create a pod running an NGINX image. It pins the image version, exposes container port 80 (the port NGINX listens on by default), and sets up a liveness probe that sends a custom HTTP header.

The namespace name is referenced from the ns resource we created in ns.tf.

NGINX is an open-source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and web server (origin server).

pod_nginx.tf

resource "kubernetes_pod_v1" "pod_nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace_v1.ns.metadata.0.name
  }

  spec {
    container {
      image = "nginx:1.23.3"
      name  = "nginx"

      env {
        name  = "environment"
        value = "dev"
      }

      port {
        container_port = 80
      }

      liveness_probe {
        http_get {
          path = "/"
          port = 80

          http_header {
            name  = "X-Custom-Header"
            value = "GreatBlogArticle"
          }
        }

        initial_delay_seconds = 2
        period_seconds        = 2
      }
    }
  }
}

3. Create a Kubernetes deployment resource

The example deployment file below creates our NGINX deployment in our namespace and specifies three replicas with resource limits.

deployment.tf

resource "kubernetes_deployment_v1" "deploy" {
  metadata {
    name      = "deploy-nginx"
    namespace = kubernetes_namespace_v1.ns.metadata.0.name

    labels = {
      tier = "frontend"
    }
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        tier = "frontend"
      }
    }

    template {
      metadata {
        labels = {
          tier = "frontend"
        }
      }

      spec {
        container {
          image = "nginx:1.23.3"
          name  = "nginx"

          resources {
            limits = {
              cpu    = "1"
              memory = "256Mi"
            }
            requests = {
              cpu    = "500m"
              memory = "30Mi"
            }
          }

          liveness_probe {
            http_get {
              path = "/"
              port = 80

              http_header {
                name  = "X-Custom-Header"
                value = "GreatBlogArticle"
              }
            }

            initial_delay_seconds = 2
            period_seconds        = 2
          }
        }
      }
    }
  }
}

4. Create a Kubernetes service

The example below creates a frontend service in the namespace we created earlier in ns.tf.

It uses the tier selector, a frontend port of 4444, a backend (target) port of 80, and is of type LoadBalancer, which exposes the traffic publicly using a public IP.

svc.tf

resource "kubernetes_service_v1" "svc" {
  metadata {
    name      = "frontend-svc"
    namespace = kubernetes_namespace_v1.ns.metadata.0.name
  }
  spec {
    selector = {
      tier = kubernetes_deployment_v1.deploy.spec.0.template.0.metadata.0.labels.tier
    }
    port {
      port        = 4444
      target_port = 80
    }

    type = "LoadBalancer"
  }
}
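Once the service is provisioned, the public IP assigned by the load balancer can be surfaced with a Terraform output. A sketch, assuming the platform populates an IP address in the service status (some platforms report a hostname instead):

```hcl
output "frontend_lb_ip" {
  description = "Public IP of the frontend LoadBalancer service"
  value       = kubernetes_service_v1.svc.status.0.load_balancer.0.ingress.0.ip
}
```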

Managing custom resources with the Kubernetes Terraform provider

To manage custom resources with the Terraform provider, use the kubernetes_manifest resource. This allows you to define and apply custom resource definitions (CRDs) and their instances directly in your Terraform configurations.

Let's suppose you have a CRD deployed with the following structure:

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: crdresources.test.example.com
spec:
  group: test.example.com
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                name:
                  type: string
  scope: Namespaced
  names:
    plural: crdresources
    singular: crdresource
    kind: crdResource
    shortNames:
      - cyr

Now, we can create the Terraform code that will deploy resources of this type:

provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_manifest" "my_custom_resource" {
  manifest = {
    apiVersion = "test.example.com/v1"
    kind       = "crdResource"
    metadata = {
      name      = "example-custom-resource"
      namespace = "default"
    }
    spec = {
      name = "example-name"
    }
  }
}

Let's run a terraform apply and then see the resource:

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
 Terraform will perform the actions described above.
 Only 'yes' will be accepted to approve.

 Enter a value: yes

kubernetes_manifest.my_custom_resource: Creating...
kubernetes_manifest.my_custom_resource: Creation complete after 0s

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
kubectl get crdResource
NAME                      AGE
example-custom-resource   7s


Kubernetes and Terraform with Spacelift

Spacelift supports both Terraform and Kubernetes and enables users to create stacks based on them. Leveraging Spacelift, you can build CI/CD pipelines that combine them and get the best of each tool. This way, you use a single tool to manage the lifecycle of your Terraform and Kubernetes resources, allow your teams to collaborate easily, and add the necessary security controls to your workflows.

You could, for example, deploy Kubernetes clusters with Terraform stacks and then, on separate Kubernetes stacks, deploy your containerized applications to your clusters. With this approach, you can easily integrate drift detection into your Kubernetes stacks and enable your teams to manage all your stacks from a single place.

To see why using Kubernetes and Terraform with Spacelift makes the most sense, check out this article. The code is available here.

If you want to learn more about Spacelift, create a free account or book a demo with one of our engineers.

Key Points

You can use the kubernetes Terraform provider to manage objects on your Kubernetes cluster. When combined with the cloud provider for your Kubernetes service, such as azurerm for Azure, you can deploy your cluster and the objects inside it, all using Terraform and avoiding YAML manifest files!

Written by Jack Roper and Flavius Dinu.
