Aurelia Peters

Setting Up The Home Lab: Setting up Kubernetes Using Ansible

In my previous article I went over how to set up VMs in Proxmox VE using Terraform to deploy the VMs and Cloud-Init to provision them. In this article I'll discuss using Ansible playbooks to do further provisioning of VMs.

Since I want to play with Kubernetes anyway, I'll set up a k8s cluster. It will have three master nodes and three worker nodes. Each VM will have 4 cores, 8 GB of RAM, a 32 GB root virtual disk, and a 250 GB data virtual disk for Longhorn volumes. I'll create an ansible user via cloud-init and allow SSH access for it.

For the purposes of this article, I'm going to run Ansible separately, rather than from within Terraform. As soon as I figure out how to run the two together, I'll post a new article about that. XD

Anyway, let's get started. To begin with, I'll add some configuration variables to my vars.tf.

variable "k8s_worker_pve_node" {
  description = "Proxmox node(s) to target"
  type = list(string)
  sensitive = false
  default = ["thebeast"]
}

variable "k8s_master_count" {
  description = "Number of k8s masters to create"
  default = 3 # I need an odd number of masters for etcd
}

variable "k8s_worker_count" {
  description = "Number of k8s workers to create"
  default = 3
}

variable "k8s_master_cores" {
  description = "Number of CPU cores for each k8s master"
  default = 4
}

variable "k8s_master_mem" {
  description = "Memory (in KB) to assign to each k8s master"
  default = 8192
}

variable "k8s_worker_cores" {
  description = "Number of CPU cores for each k8s worker"
  default = 4
}

variable "k8s_worker_mem" {
  description = "Memory (in KB) to assign to each k8s worker"
  default = 8192
}

variable "k8s_user" {
  description = "Used by Ansible"
  default = "ansible"
}

variable "k8s_nameserver" {
  default = "192.168.1.9"
}

variable "k8s_nameserver_domain" {
  default = "scurrilous.foo"
}

variable "k8s_gateway" {
  default = "192.168.1.1"
}

variable "k8s_master_ip_addresses" {
  type = list(string)
  default = ["192.168.1.80/24", "192.168.1.81/24", "192.168.1.82/24"]
}

variable "k8s_worker_ip_addresses" {
  type = list(string)
  default = ["192.168.1.90/24", "192.168.1.91/24", "192.168.1.92/24"]
}

variable "k8s_node_root_disk_size" {
  default = "32G"
}

variable "k8s_node_data_disk_size" {
  default = "250G"
}

variable "k8s_node_disk_storage" {
  default = "containers-and-vms"
}

variable "k8s_template_name" {
  default = "ubuntu-2404-base"
}
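Most of these defaults match my environment; anything different can be overridden in a terraform.tfvars file instead of editing vars.tf. A minimal sketch (the values here are placeholders, not necessarily what you'd use):

# terraform.tfvars -- example overrides (placeholder values)
k8s_master_count        = 3
k8s_worker_count        = 3
k8s_master_ip_addresses = ["192.168.1.80/24", "192.168.1.81/24", "192.168.1.82/24"]
k8s_worker_ip_addresses = ["192.168.1.90/24", "192.168.1.91/24", "192.168.1.92/24"]
k8s_ssh_key_file        = "ansible.pub" # name of the public key file under files/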

Next I'll set up my k8s master and worker nodes in Terraform.

resource "proxmox_vm_qemu" "k8s_master" {
    count = var.k8s_master_count
    name = "k8s-master-${count.index}"
    desc = "K8S Master Node"
    ipconfig0 = "gw=${var.k8s_gateway},ip=${var.k8s_master_ip_addresses[count.index]}"
    target_node = element(var.k8s_pve_node, count.index)
    onboot = true
    clone = var.k8s_template_name
    agent = 1
    ciuser = var.k8s_user
    memory = var.k8s_master_mem
    cores = var.k8s_master_cores
    nameserver = var.k8s_nameserver
    os_type = "cloud-init"
    cpu = "host"
    scsihw = "virtio-scsi-single"
    sshkeys = file("${path.module}/files/${var.k8s_ssh_key_file}")
    tags = "k8s,ubuntu,k8s_master"

    # Set up the disks
    disks {
        ide {
            ide2 {
                cloudinit {
                    storage = "containers-and-vms"
                }
            }
        }
        scsi {
            scsi0 {
                disk {
                  size     = var.k8s_node_root_disk_size
                  storage  = var.k8s_node_disk_storage
                  discard  = true
                  iothread = true
                }
            }
            scsi1 {
                disk {
                  size     = var.k8s_node_data_disk_size
                  storage  = var.k8s_node_disk_storage
                  discard  = true
                  iothread = true
                }
            }
        }
    }

    network {
        model = "virtio"
        bridge = var.nic_name
        tag = -1
    }

    # Boot from the root disk (IP configuration is handled by ipconfig0 above).
    boot = "order=scsi0"
    skip_ipv6 = true

    lifecycle {
      ignore_changes = [
        disks,
        target_node,
        sshkeys,
        network
      ]
    }
}

resource "proxmox_vm_qemu" "k8s_workers" {
    count = var.k8s_worker_count
    name = "k8s-worker-${count.index}"
    desc = "K8S Worker Node"
    ipconfig0 = "gw=${var.k8s_gateway},ip=${var.k8s_worker_ip_addresses[count.index]}"
    target_node = element(var.k8s_pve_node, count.index)
    onboot = true
    clone = var.k8s_template_name
    agent = 1
    ciuser = var.k8s_user
    memory = var.k8s_worker_mem
    cores = var.k8s_worker_cores
    nameserver = var.k8s_nameserver
    os_type = "cloud-init"
    cpu = "host"
    scsihw = "virtio-scsi-single"
    sshkeys = file("${path.module}/files/${var.k8s_ssh_key_file}")
    tags="k8s,ubuntu,k8s_worker"

    # Set up the disks
    disks {
        ide {
            ide2 {
                cloudinit {
                    storage = "containers-and-vms"
                }
            }
        }
        scsi {
            scsi0 {
                disk {
                  size     = var.k8s_node_root_disk_size
                  storage  = var.k8s_node_disk_storage
                  discard  = true
                  iothread = true
                }
            }
            scsi1 {
                disk {
                  size     = var.k8s_node_data_disk_size
                  storage  = var.k8s_node_disk_storage
                  discard  = true
                  iothread = true
                }
            }
        }
    }

    network {
        model = "virtio"
        bridge = var.nic_name
        tag = -1
    }

    # Boot from the root disk (IP configuration is handled by ipconfig0 above).
    boot = "order=scsi0"
    skip_ipv6 = true

    lifecycle {
      ignore_changes = [
        disks,
        target_node,
        sshkeys,
        network
      ]
    }
}
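Since I'll want the node names and addresses handy when setting up DNS and the Ansible inventory, it can be convenient to surface them as Terraform outputs. This isn't part of the original config, just a small sketch built from the variables above:

output "k8s_master_ips" {
  description = "Static addresses assigned to the master nodes via cloud-init"
  value       = { for i, vm in proxmox_vm_qemu.k8s_master : vm.name => var.k8s_master_ip_addresses[i] }
}

output "k8s_worker_ips" {
  description = "Static addresses assigned to the worker nodes via cloud-init"
  value       = { for i, vm in proxmox_vm_qemu.k8s_workers : vm.name => var.k8s_worker_ip_addresses[i] }
}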

A quick terraform apply and I have all of my VMs set up. Next, since I've already installed Ansible on my local machine, I'll set up Kubernetes using Kubespray following Pradeep Kumar's excellent tutorial.
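Kubespray itself needs to be cloned and its Python requirements installed before anything else; roughly like this (the virtualenv is just my habit, and the paths are my own; check the Kubespray README for the current requirements):

# Clone Kubespray and install its Ansible/Python dependencies
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
python3 -m venv .venv && source .venv/bin/activate
pip install -r requirements.txt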

First I set up an inventory file, like so:

all:
  hosts:
    k8s-master-0:
    k8s-master-1:
    k8s-master-2:
    k8s-worker-0:
    k8s-worker-1:
    k8s-worker-2:
  vars:
    ansible_user: ansible
    ansible_python_interpreter: /usr/bin/python3
  children:
    kube_control_plane:
      hosts:
        k8s-master-0:
        k8s-master-1:
        k8s-master-2:
    kube_node:
      hosts:
        k8s-worker-0:
        k8s-worker-1:
        k8s-worker-2:
    etcd:
      hosts:
        k8s-master-0:
        k8s-master-1:
        k8s-master-2:
    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}

Note here that I've also added DNS entries to my local nameserver for these hosts. I could also have used IP addresses instead of hostnames. In a later revision of this configuration I'll try setting up dynamic inventory via the Proxmox inventory plugin for Ansible, but for now I'll hardcode things.

Note also that I've set the ansible_user variable in this inventory. That's important to make sure that Ansible uses the service account that I already set up in Terraform. I've also set the location of the Ansible Python interpreter (via the ansible_python_interpreter variable) so that I don't get bombarded with warnings from Ansible about using the discovered Python interpreter.
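Before kicking off the full playbook, a quick connectivity check against that inventory can save a lot of waiting around (run from the kubespray directory, using the same relative path as the playbook command below):

ansible -i ../ansible/inventory/k8s-cluster/hosts.yml all -m ping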

So now that I've got my hosts deployed, it's time to set up Kubernetes. With the Kubespray repo cloned and its requirements installed (see above), I'll run the cluster.yml playbook from the repository root:

ansible-playbook -i ../ansible/inventory/k8s-cluster/hosts.yml --become --become-user=root cluster.yml

After some time (I think it took a good half hour, all told), Kubernetes is installed and ready for me to deploy my applications.
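To sanity-check the result, I can hop onto one of the masters and list the nodes; pointing kubectl at the admin kubeconfig explicitly avoids any assumptions about root's default config:

# Run kubectl on a control-plane node against the admin kubeconfig
ssh ansible@k8s-master-0 "sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get nodes"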

So now I have a working k8s installation on my home lab, but there were several steps involved in getting it set up. It sure would be nice if I could deploy and provision everything in one fell swoop. I'll discuss that next time. I'd also like to not have to SSH into one of my master nodes in order to run kubectl.
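For that last itch, the likely fix is to copy the admin kubeconfig down to my workstation, something like the sketch below. I haven't wired this up yet, and the server address inside the copied file may need to be changed to point at one of the masters:

ssh ansible@k8s-master-0 "sudo cat /etc/kubernetes/admin.conf" > ~/.kube/config-homelab
export KUBECONFIG=~/.kube/config-homelab
kubectl get nodes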
