Alugbin Abiodun Olutola
Creating an NFS Server with Vagrant and Archlinux for Kubernetes Cluster

Disclaimer: This post was originally meant to house my process for creating an NFS server (it was a huge toil). As such, a lot of things might be omitted or assumed. If you find it useful and need more insight, or have questions, please feel free to drop a comment and ask.

Also, there are several cloud storage providers online; almost all, if not all, major cloud providers offer the service. However, in the spirit of curiosity and wanting to see how it is done, I decided to create mine.
(Also, it's a pet project that might never see the light of day, so why waste $?)

That said, let's move on.

While learning Kubernetes recently from here, I took on the task of migrating one of my Laravel pet projects to Kubernetes.

Everything was fine until I got to the point where I needed to persist some of my essential files (logs, database, images, etc.).

First, I provisioned my k8s cluster using Vagrant with the knowledge I learnt from here. Then I added an extra machine to the Vagrantfile to provision an Arch Linux box as well.

Here is the NFS server box:

#filename=Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure("2") do |cf|
  # NFS Server
  cf.vm.define "nfs-server" do |nfs|
    nfs.vm.box = "archlinux/archlinux"
    nfs.vm.hostname = "nfs-server.example.com"
    nfs.vm.network "private_network", ip: "172.42.42.99"
    nfs.vm.provider "virtualbox" do |n|
      n.name = "nfs-server"
      n.memory = 1024
      n.cpus = 1
    end
    nfs.vm.provision "shell", path: "bootstrap_nfs.sh"
  end
end

This box defines a machine named nfs-server with the following properties:

  • OS: the official Arch Linux image from the Vagrant box collection
  • Hostname: "nfs-server.example.com". The machine can also be reached by this name within the network.
  • IP address: The entire cluster runs on a private network, and since I would like this box to be available to the cluster alone, I add it to the same address space.
  • Provider: I use VirtualBox; various options exist, as described in the official Vagrant documentation. The available providers are listed here.
  • Name: Uniquely identifies the box in the list of machines (I have a lot of them for my k8s cluster).
  • Memory: The memory assigned to this machine, 1 GB in this case.
  • CPUs: Just 1; it can be increased for more performance.
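
If you want to try just this part, the NFS box can be brought up and inspected on its own (a rough sketch, assuming the definition above sits alongside the rest of the cluster in the same Vagrantfile):

# Bring up only the NFS server machine and log into it
vagrant up nfs-server
vagrant ssh nfs-server

# Inside the box, confirm the provisioner did its job
cat /etc/exports
systemctl status nfs-server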

bootstrap_nfs.sh is the script that is used to provision the machine.

#filename=bootstrap_nfs.sh
#!/bin/bash
# Update hosts file
echo "[TASK 1] Update /etc/hosts file"
cat >>/etc/hosts<<EOF
172.42.42.99 nfs-server.example.com nfs-server
172.42.42.100 lmaster.example.com lmaster
172.42.42.101 lworker1.example.com lworker1
172.42.42.102 lworker2.example.com lworker2
EOF

echo "[TASK 2] Download and install NFS server"
sudo pacman -Sy --noconfirm nfs-utils

echo "[TASK 3] Create a kubedata directory"
mkdir -p /srv/nfs/kubedata
mkdir -p /srv/nfs/kubedata/db
mkdir -p /srv/nfs/kubedata/storage
mkdir -p /srv/nfs/kubedata/logs

echo "[TASK 4] Update the shared folder access"
chmod -R 777 /srv/nfs/kubedata

echo "[TASK 5] Make the kubedata directory available on the network"
cat >>/etc/exports<<EOF
/srv/nfs/kubedata    *(rw,sync,no_subtree_check,no_root_squash)
EOF

echo "[TASK 6] Export the updates"
sudo exportfs -rav

echo "[TASK 7] Enable NFS Server"
sudo systemctl enable nfs-server

echo "[TASK 8] Start NFS Server"
sudo systemctl start nfs-server

I added a task number with a description to each step to show what is being done and also to serve as documentation.

The hosts file needs to be updated so that the NFS server can reach the cluster nodes by name. Each node will also have the corresponding entries; it's just a way for the machines to find each other within the private network.
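
As an example, a matching entry in each node's own bootstrap script plus a quick reachability check afterwards could look like this (a sketch; showmount assumes the NFS client utilities, nfs-utils or nfs-common depending on the distro, are installed on the node):

# Snippet for each node's bootstrap script (runs as root under the provisioner)
cat >>/etc/hosts<<EOF
172.42.42.99 nfs-server.example.com nfs-server
EOF

# Then, from the node, check that the server and its export are visible
ping -c 1 nfs-server.example.com
showmount -e nfs-server.example.com    # should list /srv/nfs/kubedata

The client utilities are also what the kubelet relies on to mount NFS volumes, so make sure they are installed on every node that will run pods using these volumes.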

On to my PersistentVolume definitions on the cluster:

#filename=storage_volume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-storage-pv
  labels:
    tier: storage
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  storageClassName: app-storage-pv
  mountOptions:
    - nfsvers=4.1
  nfs:
    path: "/srv/nfs/kubedata/storage"
    server: nfs-server.example.com
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-storage-pvc
  namespace: app
  labels:
    tier: storage
    app: app-storage
spec:
  storageClassName: app-storage-pv
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

I created a PV that maps to the exported folder /srv/nfs/kubedata/storage. Since I'm using the FQDN within the network, I can use nfs-server.example.com as my server name (please ping it from one of your nodes to ensure it is reachable).
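
Beyond pinging, a quick sanity check is to mount the export manually from one of the nodes before handing it over to Kubernetes (a sketch; /mnt/nfstest is just a throwaway directory):

# On any cluster node with the NFS client utilities installed
sudo mkdir -p /mnt/nfstest
sudo mount -t nfs -o nfsvers=4.1 nfs-server.example.com:/srv/nfs/kubedata/storage /mnt/nfstest
touch /mnt/nfstest/hello    # the file should also appear on the NFS server
sudo umount /mnt/nfstest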

#filename=logs_volume.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-logs-pv
  labels:
    tier: storage
    app: app-logs
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
    - ReadOnlyMany
  storageClassName: app-logs-pv
  mountOptions:
    - nfsvers=4.1
  nfs:
    path: "/srv/nfs/kubedata/logs"
    server: nfs-server.example.com
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-logs-pvc
  namespace: app
  labels:
    tier: storage
    app: app-logs
spec:
  storageClassName: app-logs-pv
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

This creates a PersistentVolume mapping to the path /srv/nfs/kubedata/logs on our NFS server, using the same server name, nfs-server.example.com.
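
With both manifests in place, they can be applied and checked for binding (assuming the app namespace already exists and the files are named as above):

kubectl apply -f storage_volume.yml -f logs_volume.yml
kubectl get pv                 # both PVs should appear, eventually as Bound
kubectl get pvc -n app         # both PVCs should reach the Bound status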

Now comes our deployment file

#filename=deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-backend
  namespace: app
  labels:
    app: app-backend
    tier: backend

spec:
  selector:
    matchLabels:
      app: app-backend
  template:
    metadata:
      labels:
        app: app-backend
        tier: backend

    spec:
      containers:
        - name: app-backend
          image: image_name
          env:
            - name: DB_PASSWORD
              value: "password"
          ports:
            - containerPort: 80
          livenessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            tcpSocket:
              port: 80
            initialDelaySeconds: 10
            periodSeconds: 10
          volumeMounts:
            - mountPath: /app/storage/app
              name: app-storage
            - mountPath: /app/storage/logs
              name: app-logs

      volumes:
        - name: app-storage
          persistentVolumeClaim:
            claimName: app-storage-pvc
        - name: app-logs
          persistentVolumeClaim:
            claimName: app-logs-pvc
---

apiVersion: v1
kind: Service
metadata:
  name: app-backend
  namespace: app
  labels:
    tier: backend
spec:
  selector:
    app: app-backend
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 80

In Laravel, the storage folder is mostly used to house file uploads and other generated files (see the official docs).

Given that pods are mortal, you don't want to lose customers' profile pictures, image uploads, or any other essential files when your pod(s) go down (they do that often), so we mount /srv/nfs/kubedata/storage from the NFS server to Laravel's storage/app on our pod.

Application logs are also super important. Laravel uses Monolog, and you don't want your log files to go down with each pod, so we also mount /srv/nfs/kubedata/logs to Laravel's storage/logs/.

Once this mount is successful, you can be sure that even if your cluster goes down, as long as your NFS server is still up and running, your files will be safe and you can bring your cluster back to life without changing a thing.
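
Once the deployment is up, the mounts can also be verified from inside the pod itself (a sketch; kubectl exec with the deploy/ prefix picks one of the deployment's pods, and the second command assumes the container image ships a shell):

kubectl -n app exec deploy/app-backend -- df -h /app/storage/app /app/storage/logs
kubectl -n app exec deploy/app-backend -- sh -c 'mount | grep nfs'

Both should show paths served from nfs-server.example.com.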

PS: There might be a better way to mount the entire storage folder, but I didn't want to include files that aren't needed, e.g. the view cache files. That might still be useful for some people, though, and it might make sense to add them to the mount.

Again, there are various cloud provider options that can be used, and Kubernetes supports most of them; this was just to get my pet project up and running in Kubernetes and to add another feather to my cap during this lockdown.

Go and explore!!!

Thank you!
