Algo7

Install K3s on Proxmox Using Ansible

Introduction

This article will guide you through how to install K3s with Flannel VXLAN backend on Proxmox using Ansible. It is assumed that you have basic knowledge about Kubernetes, Proxmox, SSH, and Ansible.

This is my first post on dev.to, any constructive feedback is welcomed.

Prerequisites

  1. Proxmox installed and running
  2. Ansible installed on your local machine
  3. At least 4 VMs (3 control plane nodes + 1 worker node)
    • OS: Debian-based OSes, preferably the latest Ubuntu LTS
    • SSH installed and configured using Public Key Auth
  4. kubectl installed locally
  5. SSH installed locally

Proxmox Firewall Configuration

When you create VMs on Proxmox, the firewall is disabled by default. If you want to use the VM firewall, make sure you open the required ports.

Below are the inbound rules for K3s nodes from the official documentation:

K3s Ports

For more information: https://docs.k3s.io/installation/requirements#inbound-rules-for-k3s-nodes

We don't need all of these ports opened because we are not using the Flannel Wireguard backend and Spegel registry.

Creating Proxmox Security Group

As we will be applying the same firewall rules to multiple VMs, it is easier to create security groups in Proxmox and apply them to all the target VMs.

We will be creating 2 Security Groups in Proxmox:

  1. k3s: Ports that need to be opened on all nodes
  2. k3s_server: Ports that only need to be opened on the control plane nodes

To create a security group in Proxmox:

  1. Login to the Proxmox UI
  2. Go to Datacenter on the left menu
  3. Go to Firewall and then Security Group
  4. Click on Create and give the group a name
  5. Select the security group you just created and click on Add to start adding rules

For the k3s security group:

  1. TCP 10250 for Kubelet metrics
  2. UDP 8472 for Flannel VXLAN
  3. TCP 22 for SSH

For the k3s_server security group:

  1. TCP 6443 for K3s supervisor and Kubernetes API Server
  2. TCP 2379 for HA with embedded etcd (client port)
  3. TCP 2380 for HA with embedded etcd (peer port)
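Behind the GUI, Proxmox persists security groups in /etc/pve/firewall/cluster.fw. As a rough sketch of what the two groups above would look like in that file (the exact rule syntax may vary with your Proxmox version, so treat this as illustrative rather than authoritative):

```
[group k3s]
IN ACCEPT -p tcp -dport 10250 # Kubelet metrics
IN ACCEPT -p udp -dport 8472 # Flannel VXLAN
IN ACCEPT -p tcp -dport 22 # SSH

[group k3s_server]
IN ACCEPT -p tcp -dport 6443 # K3s supervisor / Kubernetes API server
IN ACCEPT -p tcp -dport 2379 # etcd client
IN ACCEPT -p tcp -dport 2380 # etcd peer
```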

Applying the Security Groups to your VMs

  1. Click on your VM in the left menu
  2. Select Firewall
  3. Click on the Insert: Security Group button
  4. Select the security groups we just created and enable them
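If you prefer editing config files over clicking through the GUI, the steps above correspond to the per-VM firewall file /etc/pve/firewall/&lt;vmid&gt;.fw (e.g. 100.fw) looking roughly like this (a sketch; verify the syntax against your Proxmox version):

```
[OPTIONS]
enable: 1

[RULES]
GROUP k3s
GROUP k3s_server
```

Note that GROUP k3s_server should only be referenced on the control plane VMs; worker VMs only need GROUP k3s.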

Ansible Setup

Update your SSH config

To streamline working with Ansible, it is recommended that you add the target hosts to your SSH config so they can be referenced by hostname in the Ansible inventory file. I am using Ubuntu on my local machine, so the SSH config is located at ~/.ssh/config; the same should apply to other Linux/Unix OSes.

Example configuration

# Control plane 1
Host k3s-m1
  User ubuntu
  HostName 10.0.0.30
  IdentityFile ~/.ssh/your_pk

# Control plane 2
Host k3s-m2
  User ubuntu
  HostName 10.0.0.31
  IdentityFile ~/.ssh/your_pk

# Control plane 3
Host k3s-m3
  User ubuntu
  HostName 10.0.0.32
  IdentityFile ~/.ssh/your_pk

# Worker node 1
Host k3s-w1
  User ubuntu
  HostName 10.0.0.33
  IdentityFile ~/.ssh/your_pk

# Add more if needed

Directory Structure

.
├── ansible.cfg # project specific Ansible configuration
├── inventory
│   └── inventories.ini # Inventory information
├── k3s.yml # The playbook to install K3s
└── roles
    └── k3s
        ├── tasks
        │   └── main.yml # Ansible role to install K3s
        └── vars
            └── k3s-secrets.yml # K3s enrollment token

Ansible Config File

Path: project_root/ansible.cfg

We will create a custom Ansible configuration file named ansible.cfg at the root of the project directory, which will reference the inventory file.

Example configuration:

[defaults]
inventory = ./inventory/inventories.ini
# If you haven't connected to the K3s VMs before and they are not in your SSH `known_hosts` file, uncomment the following option so the playbook doesn't throw an error:
# host_key_checking = False

Ansible Inventory

Path: project_root/inventory/inventories.ini

We will organize the target VMs using an Ansible inventory file.

# Initial master node setup for bootstrapping the K3S cluster.
# 'node_type' is used in the Ansible roles to identify and execute specific tasks for this node (see the role section).
[k3s_initial_master]
# k3s-m1 is the hostname we defined in our SSH config
k3s-m1 node_type=k3s_initial_master

# Additional master nodes for the K3S cluster.
[k3s_masters]
k3s-m2 node_type=k3s_master
k3s-m3 node_type=k3s_master
# Additional masters...

# Worker nodes for running containerized applications.
[k3s_workers]
k3s-w1 node_type=k3s_worker
k3s-w2 node_type=k3s_worker
k3s-w3 node_type=k3s_worker
# Additional workers...

# Group definition for simplified playbook targeting.
[k3s:children]
k3s_initial_master
k3s_masters
k3s_workers

The Enrollment Token

Path: project_root/roles/k3s/vars/k3s-secrets.yml

You can optionally encrypt the file using the ansible-vault encrypt path_to_file command.

---
# You have to create a token yourself
k3s_token: "your_token"
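The token itself is just a shared secret, so any sufficiently long random string works. One convenient way to generate one (assuming openssl is available locally) is:

```shell
# Generate a 64-character hex token to paste into k3s-secrets.yml.
# K3s accepts any random string here; openssl is just one easy source of randomness.
K3S_TOKEN=$(openssl rand -hex 32)
echo "k3s_token: \"${K3S_TOKEN}\""
```

Paste the printed line into roles/k3s/vars/k3s-secrets.yml, then optionally encrypt the file with ansible-vault as described above.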

The K3s Ansible Role

Path: project_root/roles/k3s/tasks/main.yml

# code: language=ansible
---
- name: Install K3S Requirements
  ansible.builtin.apt:
    update_cache: true
    pkg:
      - policycoreutils
    state: present

# To make sure the role is idempotent. The tasks after this one will only be executed if K3S hasn't been installed already.
- name: Check if K3S is already installed
  ansible.builtin.shell:
    cmd: 'test -f /usr/local/bin/k3s'
  register: k3s_installed
  failed_when: false

- name: Download K3s installation script
  ansible.builtin.uri:
    url: 'https://get.k3s.io'
    method: GET
    return_content: yes
    dest: '/tmp/k3s_install.sh'
  when: k3s_installed.rc != 0

- name: Import K3S Token
  ansible.builtin.include_vars:
    file: k3s-secrets.yml
  when: k3s_installed.rc != 0

# Note that the node_type variable is set in the inventory file
- name: Execute K3s installation script [Initial Master Node]
  ansible.builtin.shell:
    cmd: 'sh /tmp/k3s_install.sh --token {{ k3s_token }} --disable=traefik --flannel-backend=vxlan --cluster-init'
  args:
    executable: /bin/bash
  when: node_type | default('undefined') == 'k3s_initial_master' and k3s_installed.rc != 0

- name: Execute K3s installation script [Master Nodes]
  ansible.builtin.shell:
    cmd: 'sh /tmp/k3s_install.sh --token {{ k3s_token }} --disable=traefik --flannel-backend=vxlan --server https://{{ hostvars["k3s-m1"]["ansible_default_ipv4"]["address"] }}:6443'
  args:
    executable: /bin/bash
  when: node_type | default('undefined') == 'k3s_master' and k3s_installed.rc != 0

- name: Execute K3s installation script [Worker Nodes]
  ansible.builtin.shell:
    cmd: 'sh /tmp/k3s_install.sh agent --token {{ k3s_token }} --server https://{{ hostvars["k3s-m1"]["ansible_default_ipv4"]["address"] }}:6443'
  args:
    executable: /bin/bash
  when: node_type | default('undefined') == 'k3s_worker' and k3s_installed.rc != 0

The Playbook

Path: project_root/k3s.yml

# code: language=ansible
# K3S Ansible Playbook
---
- name: K3S
  hosts: k3s
  gather_facts: true
  roles:
    - role: k3s
      become: true

Run the Playbook

In your terminal, run ansible-playbook k3s.yml. If you have encrypted k3s-secrets.yml, run ansible-playbook k3s.yml --ask-vault-pass instead and enter the vault password.

Cluster Access

After the playbook has finished running, you can obtain the cluster's kubeconfig by SSHing into one of the master nodes and reading the contents of /etc/rancher/k3s/k3s.yaml.

More information: https://docs.k3s.io/cluster-access

To use the config locally, remember to change the server: https://127.0.0.1:6443 property in the file to point to one of the master nodes.
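For example, the rewrite can be done with sed. The 10.0.0.30 address below is the first master's IP from the SSH config earlier, and a minimal stand-in kubeconfig is fabricated locally here purely so the step is visible end to end; in practice you would run the sed against the real file you copied (e.g. ~/.kube/config):

```shell
# Create a minimal stand-in for the copied /etc/rancher/k3s/k3s.yaml,
# so the server-address rewrite below can be demonstrated end to end.
mkdir -p /tmp/k3s-demo
cat > /tmp/k3s-demo/k3s.yaml <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Point the kubeconfig at the first master node (10.0.0.30 in the SSH config above)
# instead of the loopback address K3s writes by default.
sed -i 's|server: https://127.0.0.1:6443|server: https://10.0.0.30:6443|' /tmp/k3s-demo/k3s.yaml
grep 'server:' /tmp/k3s-demo/k3s.yaml
```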

Once you have copied the kubeconfig to your local machine, you can run kubectl get ns to list all the namespaces and test the connection. If you have multiple clusters configured locally, make sure you select the correct context.

Resources

The playbook and role configuration can be found here on my GitHub repo: https://github.com/algo7/homelab_ansible/tree/main/roles/k3s
