r0psteev

Learning some terraform for Proxmox

Introduction

In an effort to capture some of the effort that goes into setting up test infrastructures on Proxmox, I decided to learn some Terraform and formalize infrastructure deployment on Proxmox as Terraform configs.

Personal objectives

  • Build a Terraform provider from source and use it locally (useful in case I need to extend the provider with additional features that the Proxmox API provides)
  • Deploy n LXC containers on Proxmox using Terraform

The Provider

1. Building the Provider

I chose the Telmate/proxmox provider as it is one of the most popular ones on the HashiCorp registry.

This provider has a fairly comprehensible, friendly structure, with an entrypoint main.go and a proxmox package dedicated to Proxmox-specific logic.

peek at the code

It is also surprisingly easy to build. After installing the Go compiler and the build-essential package on Ubuntu, make does the trick.

building the provider from source

The resulting provider binary terraform-provider-proxmox is stored in the ./bin directory.

terraform-provider-proxmox binary built in bin directory
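
For reference, the build boils down to something like the following (the repository URL is the upstream Telmate one; this assumes a Linux host with Go and make already installed):


# clone the provider source and build it with the provided Makefile
git clone https://github.com/Telmate/terraform-provider-proxmox.git
cd terraform-provider-proxmox
make

# the binary ends up in ./bin
ls bin/
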

2. Where Terraform finds provider binaries

By default, Terraform fetches providers from the public registry and caches them under the working directory's .terraform folder.

If we want to run a custom-built provider instead, we can put it in a local filesystem mirror, typically ~/.terraform.d/plugins on Linux, using a directory structure similar to the one below.

directory structure for terraform plugins on the system
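
In my case the mirror layout looks roughly like this (the version number 1.0.0 and the linux_amd64 target are placeholders; use whatever your build and platform actually are):


~/.terraform.d/plugins/
└── terraform.local/
    └── telmate/
        └── proxmox/
            └── 1.0.0/
                └── linux_amd64/
                    └── terraform-provider-proxmox
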

We can now reference this provider in our Terraform files as follows.



# lxc-example.tf
terraform {
  required_providers {
    proxmox = {
      source  = "terraform.local/telmate/proxmox"
    }
  }
}

provider "proxmox" {
    pm_tls_insecure = true
    pm_api_url = "https://localhost:8006/api2/json"
    pm_password = "donthackmeplease"
    pm_user = "root@hostname"
}


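As an aside (not the approach used in this post), recent Terraform versions also offer a dev_overrides block in the CLI configuration file ~/.terraformrc, which points Terraform straight at the build output directory and skips provider resolution for that one provider. The path below is a placeholder for wherever you cloned the provider:


# ~/.terraformrc (Terraform CLI configuration, not a .tf file)
provider_installation {
  dev_overrides {
    # directory containing the freshly built terraform-provider-proxmox binary
    "terraform.local/telmate/proxmox" = "/home/youruser/terraform-provider-proxmox/bin"
  }

  # every other provider keeps being installed the normal way
  direct {}
}
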

The containers

1. A minimal LXC container

At this point, to get a little confidence in my understanding of this provider, I wanted to write a simple Terraform file that would spin up an LXC container in a designated pool on Proxmox.

So this exercise consisted of creating an LXC container that:

  • lives within a chosen pool (pools are resource groupings in Proxmox with common administrative policies; this pool exists mainly because Proxmox lets you group resources visually, so it is primarily for aesthetic purposes)
  • has 1024 MB of RAM
  • has 2 vCPUs
  • is connected to the internet over the virtual switch vmbr0
  • starts immediately after creation
  • is based on the Ubuntu 22.04 LXC template

To achieve this we can start from the example file terraform-provider-proxmox/examples/lxc_example.tf provided by the codebase. However, we get more insight into the Terraform directives that satisfy these requirements by reading terraform-provider-proxmox/proxmox/resource_lxc.go directly, since it is the entrypoint into the Schema definition of an LXC resource from Terraform's point of view (it essentially maps what Terraform understands an LXC container to be onto what Proxmox understands it to be).

Similarly, the file terraform-provider-proxmox/proxmox/resource_pool.go provides the Schema definition of a Pool.

  • Create Pool TestPool1


# lxc-example.tf
resource "proxmox_pool" "pool-test" {
  poolid = "TestPool1"
  comment = "Just a test pool"
}


  • The lxc container


resource "proxmox_lxc" "lxc-test" {
  pool = proxmox_pool.pool-test.poolid # depends on pool-test
  target_node = "pve" # node of your cluster on which deployment should be done
  cores = 2
  memory = 1024
  hostname = "bot"
  password = "rootroot"
  network  {
    name = "eth0"
    bridge = "vmbr0"
    ip = "dhcp"
  }
  start = true # start after creation

  # using ubuntu container template
  ostemplate = "local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst"
  rootfs {
    storage = "local-lvm"
    size = "8G"
  }
}


  • ostemplate points to the volume on which the LXC template of the container is found, and follows Proxmox's storage volume notation.

ostemplate is a special location pointing to the LXC template

This notation ultimately resolves to a local filesystem path, as shown below for our ubuntu-22.04-standard_22.04-1_amd64.tar.zst template.

ostemplate resolves to a location in /var/lib/vz/template/cache on the system
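
If the template is not present on the node yet, it can be fetched with Proxmox's pveam tool (run on the Proxmox node itself; the exact template file name may differ from the one shown here):


# refresh the list of available appliance templates
pveam update

# download the Ubuntu 22.04 template into the 'local' storage
pveam download local ubuntu-22.04-standard_22.04-1_amd64.tar.zst

# list the templates stored in 'local' to confirm the volid to use as ostemplate
pveam list local
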

  • rootfs specifies the storage on which the container's root disk will be placed, and its size

rootfs is local-lvm in my case
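
To see which storages are available on the node (and could therefore back rootfs), the pvesm tool can be queried on the Proxmox host:


# list all storages on the node with their type, status and usage
pvesm status

# list the contents of a particular storage, e.g. local-lvm
pvesm list local-lvm
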

Testing:



$ terraform init
$ terraform plan  # just to check
$ terraform apply



Testing the Deployment

Results

Studying this codebase, I developed a (perhaps overly simple) mental model, which might be useful if you ever find yourself needing to extend it or add support for some Proxmox feature.

Mental Model for Telmate/proxmox

  • The terraform CLI depends on Telmate/terraform-provider-proxmox for Schema definitions.
  • Telmate/terraform-provider-proxmox binds the Schema definitions to the Proxmox JSON API using Telmate/proxmox-api-go.
  • Telmate/proxmox-api-go ultimately interacts with the Proxmox server via its API.

The uniform naming conventions used throughout the Telmate codebase mean that a Schema directive's name is exactly the same as in the Proxmox API. So we can refer directly to the Proxmox API docs for a description of the role and purpose of a Schema directive; for example, the hostname argument in the Terraform schema matches the hostname parameter of Proxmox's LXC creation endpoint.

2. Spinning up 20 LXC containers

To spin up n = 20 containers, we can use Terraform's count meta-argument.



resource "proxmox_pool" "pool-test" {
  poolid = "TestPool1"
  comment = "Just a test pool"
}

resource "proxmox_lxc" "lxc-test" {
  count = 20 # create 20 lxc containers
  pool = proxmox_pool.pool-test.poolid # depends on pool-test
  target_node = "pve" # node of your cluster on which deployment should be done
  cores = 1
  memory = 512
  hostname = "bot-${count.index}"
  password = "rootroot"

  network  {
    name = "eth0"
    bridge = "vmbr0"
    ip = "dhcp"
  }
  start = true # start after creation

  # using ubuntu container template
  ostemplate = "local:vztmpl/ubuntu-22.04-standard_22.04-1_amd64.tar.zst"
  rootfs {
    storage = "local-lvm"
    size = "8G"
  }
}



After running terraform apply we get

Terraform apply

And there they are: 20 containers created in pool TestPool1.

20 lxc containers
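
As an optional extra (not part of the original config), an output block can collect the hostnames of all containers created by count, which is handy when you want to iterate over them later:


output "lxc_hostnames" {
  # the splat expression gathers the hostname of every instance created by count
  value = proxmox_lxc.lxc-test[*].hostname
}
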

Just as easily as we created them, we can destroy them again with terraform destroy.

Terraform destroy

Top comments (1)

Aurelia Peters

I just deployed a test LXC to my Proxmox machine in a pretty similar fashion to what you've done here. Have you figured out how to provision the LXC after you deploy it? Seems like you might could do something with hookscript, but I haven't tried it yet.

ETA: also it looks like the hookscript runs every time the LXC boots, which is not what I'm looking for here.