ZFS is a fascinating filesystem that is packed with features. It is mainly geared towards data storage use cases such as
NAS or datacenter deployments. One feature however intrigued me for workstation usage: the ability to not only take incremental
snapshots, but also transfer those over the network to another ZFS filesystem. This feature could solve a lot of my current
backup woes - so I set out on a journey to install Linux on ZFS.
I quickly learned that guides about this topic are very cargo-cult-y, repeating things earlier guides have said without
questioning why those things were said. This leads to a situation where these guides are needlessly complicated and often
fail to address the few important pieces of information. This is where this series of blog posts comes in: my goal is to provide
a simple guide to running Linux on ZFS, written largely from scratch. I will try to leave no stone unturned and nothing implied,
so readers can get a feel for which parts of the guide are important for a running system and which are down to my personal
preferences.
To start things off, let's install archlinux on ZFS. I will be using Arch Linux for these guides as its installation process (and
its lack of automation) lends itself very well to this kind of systems exploration: if you have to do everything yourself,
you have no choice but to learn how things work.
0. Assumptions
This guide makes a couple of assumptions:
- You are installing this on a UEFI-based system. This should be true for all modern PCs, but if you are trying this in a virtual machine, you may have to explicitly configure it that way 1
- No dual-booting is required
- We are installing on a 64-bit Intel-based system
- We are going to use grub
- We are not using ZFS encryption. In order to use encryption on ANY dataset in the root pool, the `/boot` directory must be located on a different partition, similar to LUKS setups. I will create a follow-up article explaining how Linux on encrypted ZFS works.
1. Build a Live ISO that contains ZFS
Like a lot of distributions, archlinux live environments do not ship with ZFS. This is mainly due to the licensing disagreement 2
that comes up a lot when talking about ZFS in Linux environments. For archlinux, one can use the archzfs repository 3. In order to
have ZFS included in an archlinux live environment, one has to build their own archiso that includes the archzfs packages.
I will skip over this fairly quickly; for more information check out the archwiki page on ZFS 4, the one on archiso 5 and my
repository containing the building blocks described here 6.
- Install `archiso`
- Create a new archiso profile by copying `/usr/share/archiso/configs/releng/` into a directory of your own (e.g. `./archlive`)
- Trust the pacman key and add the archzfs repository to the `pacman.conf` file in your archiso directory
- Append `linux-headers`, `zfs-dkms` and `zfs-utils` to the list of packages that will be installed in the live environment, located in `packages.x86_64`
- Optional: In order to save yourself the tedious job of manually typing the pacman key later on, create a file containing it in `airootfs`. If you are using my repository, there will be `/zfs-key.sh` with a script to add the key and `/zfs-pacman.conf` with the pacman configuration.
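If you are building the profile by hand, the steps above can be sketched roughly like this (run as root; the directory names are this guide's conventions, adjust as needed):

```shell
# Install archiso and copy the default 'releng' profile
pacman -S --noconfirm archiso
cp -r /usr/share/archiso/configs/releng/ ./archlive

# Add the archzfs repository to the profile's pacman.conf
cat >> ./archlive/pacman.conf <<'EOF'

[archzfs]
Server = https://archzfs.com/$repo/$arch
EOF

# Add the ZFS packages to the live environment's package list
printf '%s\n' linux-headers zfs-dkms zfs-utils >> ./archlive/packages.x86_64

# Build the ISO (the image lands in ./out by default)
mkarchiso -v -w /tmp/archiso-work -o ./out ./archlive
```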
2. Booting the Live environment
After booting into the live environment, we'll have to check a couple of things before we start:
- Use `loadkeys` to load a different keymap if you don't use QWERTY (e.g. `loadkeys de` or `loadkeys colemak`)
- Use `ping 1.1.1.1` to check that we have internet connectivity. Networking should work out of the box for wired networks; wireless networking can be set up using `iwctl` 7
- Ensure that the `zfs` and `zpool` commands exist
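Condensed into commands, the checks above look like this (the keymap name is just an example):

```shell
# Load a non-QWERTY keymap if needed
loadkeys colemak

# Check internet connectivity
ping -c 3 1.1.1.1

# Verify that the ZFS userspace tools are available
command -v zfs zpool
```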
3. Partitioning your drive
At this point, we can begin preparing our drive. First, identify the device you want to use as your boot drive using `lsblk`.
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 784.6M 1 loop /run/archiso/airootfs
sr0 11:0 1 943.3M 0 rom /run/archiso/bootmnt
sda 254:0 0 50G 0 disk
In this example, I am going to use `/dev/sda` as my boot drive. If your system boots off of an NVMe drive, this will likely
be `/dev/nvme0n1` or similar for you. We are going to use `cgdisk` to partition the drive (WARNING: this will delete
the data that is currently on that drive):
- Create one partition of 500M in size. This will be the EFI partition that your system uses for booting. Use the type `EF00` to indicate the fact that this is an EFI partition
- Create a second partition that spans the rest of your drive. The type for this partition is not really important. Popular choices are `bf00` (Solaris root, Solaris being the 'original' ZFS-supporting OS), `8300` (Linux filesystem) or `8304` (Linux root) 8
cgdisk 1.0.9
Disk Drive: /dev/vda
Size: 104857600, 50.0 GiB
Part. # Size Partition Type Partition Name
----------------------------------------------------------------
1007.0 KiB free space
1 500.0 MiB EFI system partition EFI
2 49.5 GiB Linux filesystem zroot
1007.5 KiB free space
Partition setup inside cgdisk 9
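If you prefer a scriptable tool over the interactive `cgdisk`, the same layout can be created with `sgdisk` (a sketch; this irreversibly wipes the partition table on the given device):

```shell
# DANGER: destroys the existing partition table on /dev/sda
sgdisk --zap-all /dev/sda

# Partition 1: 500M EFI system partition (type EF00)
sgdisk -n 1:0:+500M -t 1:EF00 -c 1:EFI /dev/sda

# Partition 2: rest of the disk for ZFS (type BF00, Solaris root)
sgdisk -n 2:0:0 -t 2:BF00 -c 2:zroot /dev/sda
```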
Write the partition table and exit `cgdisk`. Now, `lsblk` should show two partitions: `/dev/sda1` (will be EFI) and `/dev/sda2` (will be ZFS).
root@archiso ~ $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 784.6M 1 loop /run/archiso/airootfs
sr0 11:0 1 943.3M 0 rom /run/archiso/bootmnt
sda 254:0 0 50G 0 disk
├─sda1 254:1 0 500M 0 part
└─sda2 254:2 0 49.5G 0 part
As a final step of the partitioning phase, we will use `mkfs.vfat /dev/sda1` to initialize a FAT filesystem on the EFI partition.
Unfortunately, FAT is all EFI supports 10 - this limitation applies no matter which filesystem the rest of the system uses. Even for default ext4 Linux systems, a FAT partition
is always involved in booting. This doesn't matter much in practice, because the EFI partition only contains relatively volatile data: it can be
recreated from the data in the system at any point in time (in case of data loss & reinstallation, be sure to recreate the EFI data using the `grub-install`
command).
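For reference, the formatting step looks like this (FAT32 is the most widely supported variant for EFI system partitions):

```shell
# Create a FAT32 filesystem on the EFI partition
mkfs.vfat -F 32 /dev/sda1
```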
4. Setup ZFS
After all of this preparation, it is finally time to start with the interesting part: actually setting up the ZFS zpool and datasets. In this part I will assume
that you are at least faintly familiar with the concepts behind ZFS, but will do my best to describe them as we go. To skip ahead a bit, the following is
the dataset layout we are going to build:
root@archiso ~ $ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 1.01M 48.0G 96K none
zroot/DATA 288K 48.0G 96K none
zroot/DATA/docker 96K 48.0G 96K /var/lib/docker
zroot/DATA/home 96K 48.0G 96K /home
zroot/ROOT 96K 48.0G 96K /
Under our root dataset, there are 2 'logical' datasets which are not mounted but used to logically separate the data:
- `ROOT` will contain our system root
- `DATA` will contain datasets for userdata (e.g. home directory, docker files, ...)

This distinction is made because snapshots are created on a per-dataset basis. By separating system data and user data well, we will be able to roll back
a failed system upgrade in the future without compromising our user data. For alternative dataset configurations, check "4.2 Aside: Alternative dataset
configurations".
We are going to start out creating a zpool, which describes an array of one or more physical disks that are handled as a single unit by ZFS. When using
multiple disks, ZFS can arrange them in multiple RAID configurations, but for this simple guide we are going to assume that a single drive is being used.
A zpool then contains datasets: You can think of these as a cross between directories and partitions. Creating a zpool will always also create a dataset
with the same name.
$ zpool create \
zroot \
-o ashift=12 \
-O acltype=posixacl \
-O relatime=on \
-O xattr=sa \
-O mountpoint=none \
-O canmount=off \
-R /mnt \
/dev/sda2
Now that's a lot of options. Options set with `-o` are set on the zpool, while options set with `-O` are set on its root dataset.
Let's go through these options one by one and explain what each does. I will also note whether the option is strictly necessary (as in, required
for booting) or recommended (as in, you should probably set it for optimal performance and compatibility):
- `zroot` is the name of the zpool we are creating. You are free to call it whatever you want, but for this use case `zroot` seems to have become the convention.
- `-o ashift=12` sets the pool's block size to 4K (2^12 bytes). This should match the sector size of modern hard drives and can only be set once, when creating the pool. This value is sometimes incorrectly detected as 9, making the pool's performance suboptimal. 11
- `-O acltype=posixacl` instructs ZFS to use POSIX-compatible ACLs (Access Control Lists). This option is not strictly required for the whole system - but at least `/var/log/journal` needs it to be set. 12
- `-O xattr=sa` sets the storage mechanism for extended attributes. Some applications and use cases (including the ACLs mentioned above) add additional metadata to files, and ZFS has multiple mechanisms for storing it. `xattr=sa` instructs ZFS to save the metadata in the file's inode itself. This means that reading metadata does not cause another read operation, making reading and writing metadata more performant 13. This comes at the cost of compatibility, however: at the time of writing, only ZFS on Linux and supposedly OpenZFS on macOS support this xattr storage (although I haven't tested the latter; check the OpenZFS wiki 14 for more information). Your ZFS dataset will still be mountable and accessible on FreeBSD, but extended attributes will be lost. Since this dataset is meant to be used exclusively with Linux, the lack of compatibility is ok.
- `-O relatime=on` - by default, most filesystems save access timestamps for files. This however means that file metadata has to be refreshed every single time a file is accessed, turning every read into a read plus a write. Access time tracking can be disabled entirely with `atime=off`. `relatime` is a compromise between full atime tracking and none, saving the timestamp only when the file is updated or if the last access is more than 24h in the past, so that not every read causes a write 15. It is said that for maximum compatibility `relatime` should be used - I for my part have run ext4-based systems with `noatime` (the equivalent of ZFS `atime=off`) for more than 10 years at this point and haven't had an issue so far. So if you want to be safe, use `relatime=on`; if you want things to be more efficient, use `atime=off`.
- `-O mountpoint=none` & `-O canmount=off` tell ZFS that this dataset is not mountable. It will only exist as a way to structure the rest of our datasets.
- `-R /mnt` is not an option for the zpool itself, but instructs ZFS to mount our datasets relative to `/mnt`. This means a dataset with `mountpoint=/` will be mounted at `/mnt` instead.
- `/dev/sda2` is the identifier of the block device we want to use for our zpool.
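Once the pool exists, it's worth double-checking that the options were applied as intended:

```shell
# ashift is a pool-level property
zpool get ashift zroot

# the rest are dataset properties on the root dataset
zfs get acltype,xattr,relatime,mountpoint,canmount zroot
```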
At this point we can start adding datasets to our pool:
root@archiso ~ $ zfs create -o mountpoint=/ -o canmount=noauto zroot/ROOT
root@archiso ~ $ zfs create zroot/DATA
root@archiso ~ $ zfs create -o mountpoint=/home zroot/DATA/home
root@archiso ~ $ zfs create -o mountpoint=/var/lib/docker zroot/DATA/docker
A couple of additional notes about the datasets:
- Use `-o mountpoint=...` to set the mountpoint of the dataset if the parent does not have a mountpoint
- The dataset for the system root (`zroot/ROOT`) has to have `canmount=noauto` set: by default, when importing a zpool, ZFS will automatically mount all datasets on it. `canmount=noauto` tells ZFS that while this dataset is mountable, it should not be mounted automatically. When booting, the initramfs mounts the root dataset itself, so ZFS will not have to mount it later on 16. In live environments this means that we'll have to mount it manually using `zfs mount {DATASET_NAME}`. Setting `canmount=noauto` is required for the root dataset.
To double-check that a) the pool is set up correctly and b) all datasets are mounted correctly, we are going to export and re-import the pool (ZFS calls the process of removing a pool from the
system "exporting" and adding it "importing" - think of it as ejecting and inserting a thumb drive).
root@archiso ~ $ zpool export zroot
root@archiso ~ $ zpool import -R /mnt -N zroot
root@archiso ~ $ zfs mount zroot/ROOT
root@archiso ~ $ zfs mount -a
A couple of notes about this:
- We are using `-R /mnt` and `-N` for `zpool import`. We have used `-R /mnt` before when creating the pool: it causes the datasets to be mounted relative to `/mnt`. `-N` causes no datasets to be mounted on import. This is important because our root dataset has `canmount=noauto` set, and automatically mounting the other datasets would cause them to be mounted in the wrong order
- `zfs mount -a` mounts all datasets that are automatically mountable
4.1 Aside: Device identifier
- Usually the recommendation with ZFS is to use `/dev/disk/by-id/...` instead of `/dev/...` device IDs, since they are more stable
- With this kind of use case however, I opted for the less stable `/dev/...` identifier in order to make switching physical hard drives easier
- If you use `/dev/disk/by-id/...`, you have to set the environment variable `ZPOOL_VDEV_NAME_PATH` in order for grub to be installable correctly 17
4.2 Aside: Alternative dataset configurations
There are multiple alternative ways of structuring your datasets that mostly come down to personal preference. The main rules (the dataset mounted at `/` having to have `canmount=noauto`)
are the same for all of them - they just differ in the dataset layout.
4.2.1 Multiple roots
Some guides place the system root in `zroot/ROOT/default` in order to support multiple systems booting from the same pool with the same data directories.
root@archiso ~ $ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 1.11M 48.0G 96K none
zroot/DATA 288K 48.0G 96K none
zroot/DATA/docker 96K 48.0G 96K /var/lib/docker
zroot/DATA/home 96K 48.0G 96K /home
zroot/ROOT 192K 48.0G 96K none
zroot/ROOT/default 96K 48.0G 96K /
4.2.2 Not separating ROOT & DATA
The simplest setup would be to use the root dataset as the system root mounted at `/`, with child datasets for data. Since datasets define a directory tree by default,
you will have to be careful to set `canmount=off` on the parent datasets of your data directories in order to keep all system data in the root dataset instead of scattered
across multiple ones.
root@archiso ~ $ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 1.18M 48.0G 96K /
zroot/home 96K 48.0G 96K /home
zroot/var 288K 48.0G 96K /var
zroot/var/lib 192K 48.0G 96K /var/lib
zroot/var/lib/docker 96K 48.0G 96K /var/lib/docker
root@archiso ~ $ zfs get canmount
NAME PROPERTY VALUE SOURCE
zroot canmount noauto local
zroot/home canmount on local
zroot/var canmount off local
zroot/var/lib canmount off local
zroot/var/lib/docker canmount on local
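A sketch of the commands that would produce this layout, reusing the pool options from earlier (the device path `/dev/sda2` is this guide's example):

```shell
# The root dataset itself becomes the system root
zpool create -o ashift=12 -O acltype=posixacl -O relatime=on -O xattr=sa \
    -O mountpoint=/ -O canmount=noauto -R /mnt zroot /dev/sda2

# Intermediate datasets exist only for structure: canmount=off
zfs create -o canmount=off zroot/var
zfs create -o canmount=off zroot/var/lib

# Leaf datasets hold the actual data and inherit their mountpoints
zfs create zroot/var/lib/docker
zfs create zroot/home
```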
5. Bootstrap System
At this point, we can bootstrap an arch system using `pacstrap` as we would on a regular system.
For more detailed information, check the arch installation guide 18.
Before we start, we have to mount our EFI partition at `/mnt/boot/efi`:
root@archiso ~ $ mkdir -p /mnt/boot/efi
root@archiso ~ $ mount /dev/sda1 /mnt/boot/efi
root@archiso ~ $ pacstrap /mnt base base-devel linux linux-firmware linux-headers dkms efibootmgr grub neovim
Note the inclusion of `linux-headers` and `dkms` - these packages will be required a bit later when installing ZFS inside the bootstrapped system. I also installed
`neovim` here as it is my preferred text editor. If you prefer a different text editor, install that instead - you'll just need something to edit files in a second.
In preparation for a later step, we can also generate an `/etc/fstab` file for our new system and immediately edit it: the generated file will include all mount points, but
most of them are handled by ZFS and don't need to be in `/etc/fstab`. Only the non-ZFS mounts (in this case only the EFI partition) need to be in the file. Comment out all
ZFS datasets in the resulting `/mnt/etc/fstab` and save the file.
root@archiso ~ $ genfstab -U /mnt >> /mnt/etc/fstab
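Commenting out the ZFS entries can also be scripted. A self-contained sketch on a sample file (the dataset names mirror this guide's setup; the UUID is made up):

```shell
# Sample of what genfstab might emit: ZFS datasets plus the EFI partition
cat > /tmp/fstab.example <<'EOF'
zroot/ROOT / zfs rw,xattr,posixacl 0 0
zroot/DATA/home /home zfs rw,xattr,posixacl 0 0
UUID=ABCD-1234 /boot/efi vfat rw,relatime 0 2
EOF

# Comment out every line whose filesystem type is zfs
sed -i '/\szfs\s/ s/^/#/' /tmp/fstab.example

# The zfs lines are now commented out; the vfat line is untouched
cat /tmp/fstab.example
```

On the real system the same `sed` invocation would target `/mnt/etc/fstab` instead.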
6. Install ZFS
At this point we have a bootstrapped archlinux system in `/mnt`, but that system a) does not know about ZFS and b) is not bootable. Let's tackle the
first point first: installing ZFS inside the new system.
ZFS can save data about a zpool in a cachefile. In a later step, the `zfs` hook will copy this cachefile into the initramfs so that the booting kernel knows where
to find the pool. This is where a bit of weirdness comes in: the zpool cache is generated by the ZFS kernel module of the currently running system.
This means we first have to create the cache and then copy it into the new system by hand.
root@archiso ~ $ mkdir -p /mnt/etc/zfs
root@archiso ~ $ zpool set cachefile=/etc/zfs/zpool.cache zroot
root@archiso ~ $ cp /etc/zfs/zpool.cache /mnt/etc/zfs/zpool.cache
Note: You are not free to choose any path here. `/etc/zfs/zpool.cache` is hardcoded in the initramfs hook for ZFS.
This surprised me, but you can check for yourself in `/usr/lib/initcpio/install/zfs` 19.
At this point, it is time to use `arch-chroot` to change into our newly installed system and continue the installation there. We will continue by adding
the archzfs pacman repository and keys, as we did earlier while creating the ISO.
root@archiso ~ $ arch-chroot /mnt
[root@archiso /]$ echo -e "\n[archzfs]\nServer = https://archzfs.com/\$repo/\$arch\n" >> /etc/pacman.conf
[root@archiso /]$ pacman-key -r DDF7DB817396A49B2A2723F7403BD972F75D9D76
[root@archiso /]$ pacman-key --lsign-key DDF7DB817396A49B2A2723F7403BD972F75D9D76
Especially adding the keys can be a bit awkward on real hardware, as it involves transcribing a hash from another screen.
In case you are using the archiso I built (see step 1), you can make this step easier by using the scripts I have built in:
root@archiso ~ $ cat /zfs-pacman.conf >> /mnt/etc/pacman.conf
root@archiso ~ $ cp /zfs-key.sh /mnt/zfs-key.sh
root@archiso ~ $ arch-chroot /mnt
[root@archiso /]$ /zfs-key.sh
Now we can install ZFS via the `zfs-dkms` and `zfs-utils` packages:
[root@archiso /]$ pacman -Sy zfs-dkms zfs-utils
Now ZFS should be installed. We can confirm this using the `zfs` and `zpool` commands.
[root@archiso /]$ zpool list
NAME SIZE ALLOC FREE CKPOINT EXPANDSZ FRAG CAP DEDUP HEALTH ALTROOT
zroot 49.5G 2.48G 47.0G - - 0% 5% 1.00x ONLINE /mnt
[root@archiso /]$ zfs list
NAME USED AVAIL REFER MOUNTPOINT
zroot 2.45G 45.5G 96K none
zroot/DATA 288K 45.5G 96K none
zroot/DATA/docker 96K 45.5G 96K /mnt/var/lib/docker
zroot/DATA/home 96K 45.5G 96K /mnt/home
zroot/ROOT 2.45G 45.5G 2.45G /mnt
At this point we can activate the following systemd services to ensure that ZFS will be initialized correctly upon boot:
systemctl enable zfs.target
systemctl enable zfs-import-cache.service
systemctl enable zfs-mount.service
systemctl enable zfs-import.target
6.1 Aside: zfs-linux vs zfs-dkms
The archzfs repository contains 2 different ways of installing ZFS 20:
- The `zfs-linux` (or `archzfs-linux-lts`, `archzfs-linux-zen`, ...) packages provide the kernel modules specific to those kernels
- The `zfs-dkms` package uses dkms 21 in order to be compatible with all kernel versions. This comes at the cost of having to rebuild the kernel module every time you switch or upgrade the kernel.
I am opting to use the latter for 2 reasons:
- It reduces the mental load of having to install the correct package for your kernel
- I have been running into version incompatibilities between the kernel package and the zfs package due to the archlinux kernel being more recent than the archzfs repository anticipated
7. Configure bootloader & kernel images
With our system bootstrapped and ZFS installed in it, it is time to get it into a bootable state. Mainly this means configuring initramfs to mount ZFS datasets and installing grub
as a bootloader.
The high-level overview of how a linux system usually boots is as follows:
- The mainboards EFI is configured to start a bootloader (in our case grub)
- The bootloader then loads the kernel together with an image (the so-called initramfs), which contains the minimum set of applications needed to start the rest of the system. The bootloader also passes certain configuration on to the kernel and initramfs.
- The initramfs is tasked with bringing the system into a running state. This mainly includes mounting the system root partition (or dataset) as `/`. If the system partition is encrypted, the initramfs is also responsible for decrypting it (e.g. by prompting the user for a password)
To add ZFS support to this chain of events, we first have to add the `zfs` hook to `/etc/mkinitcpio.conf`. The `zfs` hook must come before `filesystems`,
and `keyboard` must come before `zfs`. The `fsck` hook is meant for filesystems that need a filesystem check, so it is not needed for ZFS. The resulting line should look as follows:
HOOKS=(base udev autodetect modconf block keyboard zfs filesystems)
Save the file and use `mkinitcpio` to regenerate the initramfs images:
[root@archiso /]$ mkinitcpio -P
Now we'll have to configure grub to pass the correct boot device configuration down to the initramfs. To do this, edit `/etc/default/grub`
and adjust the `GRUB_CMDLINE_LINUX=` variable: add `root=zfs` and `zfs={ROOT_DATASET_NAME}`.
GRUB_CMDLINE_LINUX="root=zfs zfs=zroot/ROOT"
Lastly, we'll need to install grub as an EFI boot option and generate its config:
[root@archiso /]$ grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=arch-zfs
[root@archiso /]$ grub-mkconfig -o /boot/grub/grub.cfg
Notes about this:
- `--bootloader-id` can be any string. It is what will show up in your EFI configuration
- If you have set up your zpool using a disk id instead of the disk path (e.g. `/dev/disk/by-id/...` instead of `/dev/...`), then `grub-mkconfig` will likely fail with `grub-install: error: failed to get canonical path of '/dev/bus-Your_Disk_ID-part#'`. In this case you will have to set the environment variable `ZPOOL_VDEV_NAME_PATH=1` 17. To set it globally for future grub config updates, add it to `/etc/profile`
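Before leaving the chroot, two quick checks can confirm the bootloader setup worked (output will vary per machine):

```shell
# The firmware should now list the new boot entry
efibootmgr | grep arch-zfs

# The generated config should contain our ZFS kernel parameters
grep -m1 'root=zfs' /boot/grub/grub.cfg
```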
7.1 Aside: bootfs
An alternative approach to setting the dataset that should be used for booting is setting the `bootfs` property on the pool.
This way the dataset name can be changed much more easily, without having to touch the grub config.
To do so, use `root=zfs zfs=bootfs` in `/etc/default/grub` and set the `bootfs` property on the zpool:
$ zpool set bootfs=zroot/ROOT zroot
This method can be interesting in order to more easily boot off of a snapshot of your system. I personally prefer the simplicity of
setting the dataset name directly in the grub config. If a system upgrade goes wrong, I will more likely completely rollback the
dataset to the last snapshot instead of booting off of the snapshot itself.
7.2 Aside: Grub root= format
There are 2 formats for specifying the `root=...` string in `/etc/default/grub`:
- `root=zfs zfs={DATASET_NAME}`
- `root=ZFS={DATASET_NAME}`
Both do the same thing - when researching the topic, you will see some guides use one format and others use the other.
If you are curious about more details as well as additional options, check out the mkinitcpio install script 22
as well as the script that will be embedded in the initramfs 23. There's much less magic in there than you might think.
8. Configure Rest of the System
At this point, all ZFS specific configuration has been done, and we'll have to finish configuring the system. This is not ZFS specific, so I will gloss over it. If you want more
information about this step, check out the arch installation guide 24
[root@archiso /]$ ln -sf /usr/share/zoneinfo/Europe/Berlin /etc/localtime
[root@archiso /]$ hwclock --systohc
[root@archiso /]$ nvim /etc/locale.gen
[root@archiso /]$ locale-gen
Generating locales...
de_DE.UTF-8... done
en_DK.UTF-8... done
en_US.UTF-8... done
Generation complete.
[root@archiso /]$ echo -e "LANG=en_US.UTF-8\nLC_TIME=en_DK.UTF-8" > /etc/locale.conf
[root@archiso /]$ echo 'KEYMAP=colemak' > /etc/vconsole.conf # Or your preferred keyboard layout
[root@archiso /]$ echo 'arch-zfs-testmachine' > /etc/hostname
[root@archiso /]$ passwd
If you need to connect to Wi-Fi or have your IP address configured via DHCP, you should also install `iwd` and `dhcpcd`.
9. Reboot
Use `exit` to leave the chroot environment and `reboot` to reboot your system. You should now boot into your newly installed archlinux system running on ZFS.
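Once the new system is up, a few quick checks confirm that everything ZFS-related survived the reboot:

```shell
# The pool should be ONLINE and imported without an altroot
zpool status zroot

# / should be mounted from the root dataset
findmnt -n -o SOURCE,FSTYPE /

# The ZFS units enabled earlier should be active
systemctl is-active zfs.target zfs-mount.service
```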
A freshly booted ArchLinux installation running on top of ZFS
10. Honorable mentions
There are a couple of guides on installing ZFS on archlinux:
- The official OpenZFS documentation contains a section named "Root on ZFS" 25. This is the most complete guide, but it guides you through an extremely complicated setup. I don't recommend using this guide directly - but it is very helpful as a reference
- Arch-Wiki contains a page on installing arch on ZFS 26. It is not as complicated as the official guide, but does not explain a lot of things
- The YouTube channel "Stephens Tech Talks" has a video guide 27, which is the simplest guide so far, showing a full runthrough of the whole thing. It mostly mirrors the arch wiki guide, but walks you through a 'golden path'. Really, this was the first guide I found that made me understand what was going on.
- BIOS-boot systems should work similarly, but without the EFI partition and with a different `grub-install` command. I haven't tried it though, so I can't vouch for it ↩
- https://wiki.archlinux.org/title/ZFS#Create_an_Archiso_image_with_ZFS_support ↩
- https://zfsonlinux.topicbox.com/groups/zfs-discuss/T5177f234d7c777ab-M68f3f3eee18142560b193538/proper-partition-type-linux ↩
- Depending on the size and layout of your disk, free space may be inserted automatically. This is normal. ↩
- https://openzfs.github.io/openzfs-docs/Performance%20and%20Tuning/Workload%20Tuning.html#alignment-shift-ashift ↩
- https://askubuntu.com/questions/970886/journalctl-says-failed-to-search-journal-acl-operation-not-supported ↩
- https://github.com/openzfs/zfs/commit/82a37189aac955c81a59a5ecc3400475adb56355 ↩
- https://github.com/archzfs/zfs-utils/blob/f6e3a5e93796bbb4919ff611d22b55ae692c67e8/zfs-utils.initcpio.hook#L110 ↩
- https://openzfs.github.io/openzfs-docs/Getting%20Started/Arch%20Linux/Root%20on%20ZFS/5-bootloader.html ↩
- https://wiki.archlinux.org/title/Installation_guide#Installation ↩
- https://github.com/archzfs/zfs-utils/blob/f6e3a5e93796bbb4919ff611d22b55ae692c67e8/zfs-utils.initcpio.install#L44 ↩
- https://github.com/archzfs/archzfs/wiki#included-package-groups ↩
- https://wiki.archlinux.org/title/Dynamic_Kernel_Module_Support ↩
- https://github.com/archzfs/zfs-utils/blob/f6e3a5e93796bbb4919ff611d22b55ae692c67e8/zfs-utils.initcpio.install ↩
- https://github.com/archzfs/zfs-utils/blob/f6e3a5e93796bbb4919ff611d22b55ae692c67e8/zfs-utils.initcpio.hook ↩
- https://wiki.archlinux.org/title/Installation_guide#Configure_the_system ↩
- https://openzfs.github.io/openzfs-docs/Getting%20Started/Arch%20Linux/Root%20on%20ZFS/0-overview.html ↩
- https://wiki.archlinux.org/title/Install_Arch_Linux_on_ZFS ↩