Introduction
In the previous posts of the series we learned about Ansible basics by creating a hello world role. In an independent post I wrote about LXD, which you can use to run virtual machines and containers without Docker. In this post I will show you how to use Ansible to install LXD. This time I will focus on simplicity instead of the best solution, so we can improve it later and more easily understand what we are doing.
If you want to be notified about my new videos, you can subscribe to my channel: https://www.youtube.com/@akos.takacs
Table of contents
- Before you begin
- How much should an Ansible role be capable of
- Install zfs utils and create a zfs pool
- Install LXD
- Conclusion
Before you begin
Requirements
- The project requires Python 3.11. If you have an older version and you don't know how to install a newer one, read about Nix in Install Ansible 8 on Ubuntu 20.04 LTS using Nix
- You will also need to create a virtual Python environment. In this tutorial I used the "venv" Python module and the name of the folder of the virtual environment will be "venv".
- You will also need an Ubuntu remote server. I recommend an Ubuntu 22.04 virtual machine.
Download the already written code of the previous episode
If you started the tutorial with this episode, clone the project from GitHub:
git clone https://github.com/rimelek/homelab.git
cd homelab
If you cloned the project now, or you want to make sure you are using the exact same code I did, switch to the previous episode in a new branch
git checkout -b tutorial.episode.4b tutorial.episode.4
Have the inventory file
Copy the inventory template
cp inventory-example.yml inventory.yml
And change ansible_host to the IP address of your Ubuntu server that you use for this tutorial, and change ansible_user to the username on the remote server that Ansible can use to log in. If you still don't have an SSH private key, read the Generate an SSH key part of Ansible playbook and SSH keys.
Activate the Python virtual environment
How you activate the virtual environment depends on how you created it. The episode The first Ansible playbook describes the way to create and activate the virtual environment using the "venv" Python module, and in the episode The first Ansible role we created helper scripts as well. If you haven't created the environment yet, you can create it by running
./create-nix-env.sh venv
Optionally start an ssh agent:
ssh-agent $SHELL
and activate the environment with
source homelab-env.sh
How much should an Ansible role be capable of?
An Ansible role can be very simple or very complicated. When I started to learn Ansible, I thought I had to do everything with Ansible and nothing manually, but that's wrong. It is ideal if you can implement everything in Ansible roles, but the main goal is to use Ansible to simplify the deployment and make it repeatable. When something is simple enough that implementing it in an Ansible role would make it harder to maintain or less reliable, documenting it and doing it manually is just fine.
You should also make sure that the role does what you need, not what you think it could do with a little more work that you would probably never use. For example, don't add a parameter just so the role could be shared and customized by other people when it is unlikely that you will ever share it. Add a new parameter when you have a new use case and actually need it, or when it is likely that you will need it soon and you feel it is easier to add it now than later. Otherwise, the role will be harder to maintain with nothing to gain.
So let's keep this blogpost simple too.
Install ZFS utils and create a ZFS pool
We need a new role called "zfs_pool" which can create a new ZFS pool for LXD and also install the dependencies that make it possible. Besides installing dependencies, the role has to be able to replace the following command:
sudo zpool create "$name" "${disks[@]}"
So we will need two default variables. You already know what a basic Ansible role looks like. We will need a task file and a file for the default variables.
zfs_pool/defaults/main.yml
zfs_pool_name: default
zfs_pool_disks: []
I use "default" as the default value for the ZFS pool name because this is a general role. Otherwise, I would have called it "lxd_zfs_pool". I know I have just told you not to implement a feature that you don't need, and you probably don't need to create other ZFS pools which are not used by LXD, but separating the ZFS pool creation from the LXD installation will actually make it simpler and easier to understand.
The default value of zfs_pool_disks is an empty list.
Let's also create a task file:
zfs_pool/tasks/main.yml
- name: Install zfslinux-utils
  become: true
  ansible.builtin.package:
    name: zfsutils-linux
    state: present
This is the task that replaces apt-get install zfsutils-linux. The builtin module called "package" would work on other Linux distributions as well, but the package name could be different on those distributions. Since for now I support only Ubuntu, I don't need a more complicated task. And since I added become: true, it will be executed as root.
Now we should create the ZFS pool. There is actually some ZFS support in Ansible 8.0.0, but it doesn't support creating pools. So we need to check whether the pool already exists and create it only if it doesn't, which means we will use a conditional task that requires the register keyword we have already learned about.
First of all, we need to determine whether the pool exists or not:
- name: Get zpool facts
  ignore_errors: true
  community.general.zpool_facts:
    name: "{{ zfs_pool_name }}"
  register: _zpool_facts_task
We need ignore_errors: true because otherwise the task would fail if the pool doesn't exist. The zpool_facts module in the community.general collection will also set the ansible_zpool_facts and ansible_facts.zpool_facts variables, but we don't need those. However, we need to save the status information into a variable. That's why we use the register keyword again. By the way, that status information also contains the facts, so you have three ways to get them.
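If you are curious about what ends up in the registered variable, you could temporarily add a debug task right after it (illustrative only; remove it once you have seen the output):

```yaml
- name: Show the registered result of the zpool facts task
  ansible.builtin.debug:
    var: _zpool_facts_task
```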
As a next step, we need a conditional task that runs only if the pool is not created yet:
- name: "Create ZFS pool: {{ zfs_pool_name }}"
  when: _zpool_facts_task.failed
  become: true
  ansible.builtin.command: "zpool create {{ zfs_pool_name }} {{ zfs_pool_disks | join(' ') }}"
To make the logs more informative, I used the zfs_pool_name variable in the task name. The when keyword expects a boolean value or a list of boolean values, but we need only one. The previously registered variable will contain "failed" as a boolean property, so the task will run when the previous task failed. And finally, we use the builtin command module to execute our zpool create command. The join(' ') filter takes the list of disks and converts it to a string containing the disks separated by a space character.
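The effect of join(' ') is the same as expanding a shell array into a single space-separated argument list; a quick sketch with made-up disk paths:

```shell
# Mimic what "{{ zfs_pool_disks | join(' ') }}" produces in the task above
# (the disk paths are made-up examples):
disks=(/dev/sdb /dev/sdc)
echo zpool create lxd-default "${disks[@]}"
# prints: zpool create lxd-default /dev/sdb /dev/sdc
```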
The final task file will look like this:
- name: Install zfslinux-utils
  become: true
  ansible.builtin.package:
    name: zfsutils-linux
    state: present

- name: Get zpool facts
  ignore_errors: true
  community.general.zpool_facts:
    name: "{{ zfs_pool_name }}"
  register: _zpool_facts_task

- name: "Create ZFS pool: {{ zfs_pool_name }}"
  when: _zpool_facts_task.failed
  become: true
  ansible.builtin.command: "zpool create {{ zfs_pool_name }} {{ zfs_pool_disks | join(' ') }}"
We need a playbook to call this role. Our first playbook was simply called playbook.yml, but now let's rename it to playbook-hello.yml so we can have more playbooks.
mv playbook.yml playbook-hello.yml
Although we have a role for creating the ZFS pool, our final goal is to install LXD, so our new playbook will be "playbook-lxd-install.yml".
- name: Install LXD
  hosts: all
  roles:
    - role: zfs_pool
      zfs_pool_name: lxd-default
      zfs_pool_disks: "{{ config_lxd_zfs_pool_disks }}"
We still have only one host, so the "hosts" parameter can refer to all hosts. We have two parameters for the role, but for now we want a statically set pool name. lxd-default will be fine, but obviously I can't include the paths of the disks, since they will be different for everyone and probably on every machine unless you already added aliases. It means we need some global parameters. Although you could easily set zfs_pool_name and zfs_pool_disks in the inventory file, I usually find it good practice to set role parameters in playbooks and create project-level configuration variables. This is optional, and setting role parameters in inventory files makes the playbooks shorter and cleaner, but it also makes it much harder to follow where the parameters are set, since there are so many places where you can set them. So choose the way you find more maintainable in your project.
In my case I had to change my inventory file, so the "inventory.yml" in the project root looks like this now:
all:
  vars:
    ansible_user: ta
    config_lxd_zfs_pool_disks:
      - /dev/disk/by-id/scsi-1ATA_Samsung_SSD_850_EVO_500GB_S2RBNX0J103301N-part6
  hosts:
    ta-lxlt:
      ansible_host: 192.168.4.58
      ansible_ssh_private_key_file: ~/.ssh/ansible
If you don't understand what this inventory file is, please read the previous posts to learn more about it. The new config_lxd_zfs_pool_disks variable has to contain the list of your disks. If you don't have a physical partition, you can create a virtual disk for testing and set the size in gigabytes after -s:
truncate -s 50G <PATH>/lxd-default.img
And refer to its absolute path in the inventory file. In the example I set 50G, but make sure you set a size that is appropriate for your free disk space.
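The image created by truncate is a sparse file, so it does not immediately occupy the full size on disk. A quick sanity check (the path and size below are just examples; stat -c is GNU coreutils syntax):

```shell
# Create a small sparse image for testing (path and size are examples):
truncate -s 100M /tmp/lxd-test.img
stat -c %s /tmp/lxd-test.img   # reports the full size: 104857600 bytes
du -h /tmp/lxd-test.img        # but almost no disk space is allocated yet
```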
We can try to run the playbook:
ansible-playbook \
-i inventory.yml playbook-lxd-install.yml \
--ask-become-pass
Output:
BECOME password:
PLAY [Install LXD] *******************************************************************************************
TASK [Gathering Facts] ***************************************************************************************
ok: [ta-lxlt]
TASK [zfs_pool : Install zfslinux-utils] *********************************************************************
ok: [ta-lxlt]
TASK [zfs_pool : Get zpool facts] ****************************************************************************
fatal: [ta-lxlt]: FAILED! => {"changed": false, "msg": "ZFS pool lxd-default does not exist!"}
...ignoring
TASK [zfs_pool : Create ZFS pool: lxd-default] ***************************************************************
changed: [ta-lxlt]
PLAY RECAP ***************************************************************************************************
ta-lxlt : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=1
Notice that we had a fatal error, which we ignored and then used to decide that the ZFS pool had to be created. If you run the same command again, the error will not appear and the pool creation will be skipped:
BECOME password:
PLAY [Install LXD] *******************************************************************************************
TASK [Gathering Facts] ***************************************************************************************
ok: [ta-lxlt]
TASK [zfs_pool : Install zfslinux-utils] *********************************************************************
ok: [ta-lxlt]
TASK [zfs_pool : Get zpool facts] ****************************************************************************
ok: [ta-lxlt]
TASK [zfs_pool : Create ZFS pool: lxd-default] ***************************************************************
skipping: [ta-lxlt]
PLAY RECAP ***************************************************************************************************
ta-lxlt : ok=3 changed=0 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Install LXD
As a next step, we will install LXD using a config file. If you don't have the config file yet, please read Creating virtual machines with LXD first. I will use the same config file that I exported in that post.
lxd_install/files/lxd-init.yml
config:
  images.auto_update_interval: "0"
networks:
- config:
    ipv4.address: auto
    ipv6.address: none
  description: ""
  name: lxdbr0
  type: ""
  project: default
storage_pools:
- config:
    source: lxd-default
  description: ""
  name: default
  driver: zfs
profiles:
- config: {}
  description: ""
  devices:
    eth0:
      name: eth0
      network: lxdbr0
      type: nic
    root:
      path: /
      pool: default
      type: disk
  name: default
projects: []
cluster: null
It would be better to use a template, but I want to make it as simple as possible for now. Let's save some default variables.
lxd_install/defaults/main.yml
lxd_install_snap_channel: 5.0/stable
lxd_install_init_enabled: true
lxd_install_init_config_dir: /opt/lxd
lxd_install_init_config_file_name: init.yml
lxd_install_init_config_file_path: "{{ lxd_install_init_config_dir }}/{{ lxd_install_init_config_file_name }}"
As you can see, I set the LTS version as the default channel; it should give you a more stable LXD, but you can still override it in the playbook. We also need to initialize LXD after installing it, although you might not want to initialize it. Again, this is something that does not add real value to our role, but we can practice conditional tasks. By default, the init config will be copied to /opt/lxd/init.yml, and you need to override this value if you don't like it. It's time to create our task file.
lxd_install/tasks/main.yml
- name: Install LXD snap package
  become: true
  community.general.snap:
    state: present
    name: lxd
    channel: "{{ lxd_install_snap_channel }}"
The above task will install the snap package, but it will not initialize it. Since the initialization is optional, we will create a conditional block:
- name: Initialize LXD
  when: lxd_install_init_enabled | bool
  block:
A block is a list of tasks. Since we have multiple tasks that we have to skip if lxd_install_init_enabled is not true, it is easier to set the condition on the block. We also use "| bool" after the variable name, because the variable can also come from the command line by passing -e lxd_install_init_enabled=false, in which case it will always be a string, so we have to convert it to a boolean type. If you don't do that, the string "false" will also be treated as boolean true. In the block we have to indent the tasks, of course. The first task in the block will create the config directory:
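The same pitfall exists in plain shell, which makes it easy to demonstrate: a non-empty string is "truthy" even when its content is the word "false", so you have to test the content, not just the presence of the value:

```shell
# A non-empty string passes the emptiness test even if it says "false":
flag="false"
if [ -n "$flag" ]; then echo "block would run"; fi
# What we actually want is to compare the content:
if [ "$flag" = "true" ]; then echo "run"; else echo "skip"; fi
```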
    - name: Create LXD config folder
      become: true
      ansible.builtin.file:
        state: directory
        path: "{{ lxd_install_init_config_dir }}"
        mode: 0700
The builtin file module is for creating directories, setting permissions and ownership, and creating links as well. We don't want anyone to read our config, so we allow only the file owner (root in this case) to access files in the directory.
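What mode 0700 gives you can be checked quickly on any Linux machine (the path is just an example; stat -c is GNU coreutils syntax):

```shell
# 0700: the owner can read, write, and enter the directory;
# group and others get nothing.
mkdir -p /tmp/lxd-config-demo
chmod 0700 /tmp/lxd-config-demo
stat -c %a /tmp/lxd-config-demo   # prints 700
```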
Now that the folder is created, we can use the copy module to copy the init config to the remote server:
    - name: Copy LXD config
      become: true
      ansible.builtin.copy:
        src: lxd-init.yml
        dest: "{{ lxd_install_init_config_file_path }}"
        mode: 0600
and finally we can initialize LXD. Note that we need the shell module instead of command here, because the input redirection (<) requires a shell:
    - name: Apply LXD config
      become: true
      ansible.builtin.shell: lxd init --preseed < "{{ lxd_install_init_config_file_path }}"
Let's add the role to the playbook, so the new content of playbook-lxd-install.yml is:
- name: Install LXD
  hosts: all
  roles:
    - role: zfs_pool
      zfs_pool_name: lxd-default
      zfs_pool_disks: "{{ config_lxd_zfs_pool_disks }}"
    - role: lxd_install
You can run it again:
ansible-playbook -i inventory.yml playbook-lxd-install.yml --ask-become-pass
The relevant output:
TASK [lxd_install : Install LXD snap package] ****************************************************************
[DEPRECATION WARNING]: The DependencyMixin is being deprecated. Modules should use
community.general.plugins.module_utils.deps instead. This feature will be removed from community.general in
version 9.0.0. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
changed: [ta-lxlt]
TASK [lxd_install : Create LXD config folder] ****************************************************************
changed: [ta-lxlt]
TASK [lxd_install : Copy LXD config] *************************************************************************
changed: [ta-lxlt]
TASK [lxd_install : Apply LXD config] ************************************************************************
changed: [ta-lxlt]
PLAY RECAP ***************************************************************************************************
ta-lxlt : ok=7 changed=4 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
The only problem is that if you run the playbook again, it will initialize LXD again, even though the init config hasn't changed. Let's modify the last two tasks:
    - name: Copy LXD config
      become: true
      ansible.builtin.copy:
        src: lxd-init.yml
        dest: "{{ lxd_install_init_config_file_path }}"
        mode: 0600
      register: _copy_init_task

    - name: Apply LXD config
      when: _copy_init_task.changed
      become: true
      ansible.builtin.shell: lxd init --preseed < "{{ lxd_install_init_config_file_path }}"
All we added was the register
keyword to the copy task and the when
keyword to the apply task.
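The register + when pattern mirrors what you would do by hand in plain shell: apply the config only when the file content actually changed. A rough sketch (the paths are made up, and echo stands in for the lxd init --preseed call):

```shell
# "Apply only when the copy changed", sketched in plain shell:
src=/tmp/demo-init.yml
dest=/tmp/deployed-init.yml
rm -f "$dest"
echo "config: {}" > "$src"

apply_if_changed() {
  if ! cmp -s "$src" "$dest"; then
    cp "$src" "$dest"
    echo "applied"   # here Ansible would run the Apply LXD config task
  else
    echo "skipped"   # unchanged config: the task is skipped
  fi
}

apply_if_changed   # first run: prints "applied"
apply_if_changed   # second run: prints "skipped"
```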
Note: This is usually a good way to determine whether a file changed, but if for some reason the second task fails and you need to rerun the playbook, the config file will already exist, so the "Apply LXD config" task will not run. If that happens, remove or change the init config on the remote server and run the playbook again.
The new task file looks like this:
- name: Install LXD snap package
  become: true
  community.general.snap:
    state: present
    name: lxd
    channel: "{{ lxd_install_snap_channel }}"

- name: Initialize LXD
  when: lxd_install_init_enabled | bool
  block:
    - name: Create LXD config folder
      become: true
      ansible.builtin.file:
        state: directory
        path: "{{ lxd_install_init_config_dir }}"
        mode: 0700

    - name: Copy LXD config
      become: true
      ansible.builtin.copy:
        src: lxd-init.yml
        dest: "{{ lxd_install_init_config_file_path }}"
        mode: 0600
      register: _copy_init_task

    - name: Apply LXD config
      when: _copy_init_task.changed
      become: true
      ansible.builtin.shell: lxd init --preseed < "{{ lxd_install_init_config_file_path }}"
Conclusion
We finally learned how to install LXD using Ansible, but we still need to remove it manually. That's okay for now, but since we use Ansible to build a home lab that we will probably want to reinstall many times, next time we will learn how to remove LXD with Ansible as well.
The final source code of this episode can be found on GitHub:
https://github.com/rimelek/homelab/tree/tutorial.episode.5
README
This project was created to help you build your own home lab where you can test your applications and configurations without breaking your workstation, so you can learn on cheap devices without paying for more expensive cloud services.
The project contains code written for the tutorial, but you can also use parts of it if you refer to this repository.
Tutorial on YouTube in English: https://www.youtube.com/watch?v=K9grKS335Mo&list=PLzMwEMzC_9o7VN1qlfh-avKsgmiU8Jofv
Tutorial on YouTube in Hungarian: https://www.youtube.com/watch?v=dmg7lYsj374&list=PLUHwLCacitP4DU2v_DEHQI0U2tQg0a421
Note: The inventory.yml file is not shared since it depends on the actual environment, so it will be different for everyone. If you want to learn more about the inventory file, watch the videos on YouTube or read the written version on https://dev.to. Links are in the video descriptions on YouTube.
You can also find an example inventory file in the project root. You can copy that and change the content, so you will use your IP…