
Mohammed Ashour


Setting up a shared volume for your docker swarm using GlusterFs

Working on a distributed software product before containers was certainly different from how it is now, after the sweet introduction of the world of containerization.

It goes without saying that for most of the community out there, when containerization is mentioned, Docker is one of the first things that pops into their heads, if not the first!
I'm not here to argue that others started the idea of containerization before Docker; that may be the topic of another post.

I'm here to talk about a problem I faced (and I'm sure others did too) while dealing with Docker Swarm: needing some sort of data sharing between the nodes of the swarm. Unfortunately, this is not natively supported in Docker. You can either rely on a third-party storage service that provides a good API for your nodes, which will cost you a good deal of money, or you can go die-hard and build your own shared storage yourself. Here, I'm sharing the die-hard way that I chose.

Introduction:

What is GlusterFS?

Gluster is a scalable, distributed file system that aggregates disk storage resources from multiple servers into a single global namespace.
Gluster is open source and provides replication, quotas, geo-replication, snapshots, and more.

Gluster gives you the ability to aggregate multiple nodes into one namespace, and from there you have multiple options.
You can simply:

  • Have a replicated volume that keeps your data available without you having to worry about data loss

  • Have a distributed volume that aggregates space by dividing your data across multiple machines; here you lose availability, but you gain more storage with the same resources (a rough sketch of both options follows after the link below)

You can learn about the various setups of Gluster from here
https://docs.gluster.org/en/latest/Quick-Start-Guide/Architecture/
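
As a rough illustration of the difference (the hostnames and brick paths below are just placeholders, and the real commands for our setup come later in this post):

# replicated: every brick keeps a full copy of the data
gluster volume create my-replicated replica 3 node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1

# distributed (the default): files are spread across the bricks
gluster volume create my-distributed node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1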

Let's start with our setup

First, let's imagine that we have a swarm setup of 3 manager nodes and 3 worker nodes, and we need the containers on the worker nodes to see the same data wherever they run. We also need consistency for our data, and availability matters the most. So we need a replicated GlusterFS volume that keeps a copy of all the data on multiple replication nodes, and since we don't have the resources for extra machines to act as a dedicated storage pool, we will use our swarm machines to also act as storage pool nodes.

So, our architecture will be something like this


  • 3 Swarm Managers
  • 3 Swarm Workers
  • GlusterFS volume connected to the 3 worker servers

Yes, I know what you are wondering now: this design makes our workers act as storage pool nodes and clients at the same time, so they hold the replicas of the data and also mount the volume and read from it. This may seem weird at first, but if you think about it, it can be a very good setup for a lot of use cases your application may need, and it doesn't cost you anything extra!

Let's start the dirty work.

Let's assume we are working with a Debian-based distro, like Ubuntu.

First, you will need to install GlusterFS on all three worker machines.
You can do this by running:

sudo apt update
sudo apt install software-properties-common
sudo add-apt-repository ppa:gluster/glusterfs-7
sudo apt update
sudo apt install glusterfs-server
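
Once the install finishes, you can quickly check that the Gluster CLI is available (on each machine):

gluster --version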

Then, you will need to start and enable the GlusterFS daemon service on all the worker machines:

sudo systemctl start glusterd
sudo systemctl enable glusterd
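
You can check that the daemon is actually up with:

sudo systemctl status glusterd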

Then, make sure you generate an SSH key on each machine:

ssh-keygen -t rsa

After that, to be able to address all the machines by their hostnames, you will need to edit /etc/hosts on each of these machines and map the other nodes' hostnames to their IPs, in this format:

<IP1> <HOSTNAME1>
<IP2> <HOSTNAME2>
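
For example, on the first worker, with made-up IPs and hostnames (adjust them to your own machines), the extra lines could be:

192.168.1.12 worker2
192.168.1.13 worker3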

Now, let's create our storage cluster. Start from one of the machines and add the others using this command:

sudo gluster peer probe <Hostname>
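
For example, if the other two workers are named worker2 and worker3 (placeholder hostnames), you would run, from the first machine:

sudo gluster peer probe worker2
sudo gluster peer probe worker3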

After you add the other 2 nodes, run this command to make sure that all of the nodes have joined the storage cluster:

sudo gluster pool list
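
You can also inspect each peer in more detail; every node other than the local one should be reported as connected:

sudo gluster peer status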

Now, we will need to make a directory on each of the 3 workers to act as a brick for our volume.
But wait, what is a brick?
A brick is basically a directory that acts as a storage unit of the volume; GlusterFS uses it on each of its storage pool nodes to know where to store and read the data.

So you will need to create this directory on each worker node. Let's name it brick1 and put it under /gluster-volumes:

sudo mkdir -p /gluster-volumes/brick1

Now, we are ready to create our replicated volume (let's call it demo-v).
[Run this command only on the main machine]

sudo gluster volume create demo-v replica 3 <HOSTNAME1>:<BRICK_DIRECTORY> <HOSTNAME2>:<BRICK_DIRECTORY> <HOSTNAME3>:<BRICK_DIRECTORY> force
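
For example, with the brick directory we created above and worker1, worker2, worker3 as placeholder hostnames, the command would look like:

sudo gluster volume create demo-v replica 3 worker1:/gluster-volumes/brick1 worker2:/gluster-volumes/brick1 worker3:/gluster-volumes/brick1 force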

Then start the volume:

sudo gluster volume start demo-v
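
To confirm that the volume was created and that all three bricks are online, you can run:

sudo gluster volume info demo-v
sudo gluster volume status demo-v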

And congrats, you now have a replicated volume that is ready to be mounted and used on any machine.
Now, let's mount this volume on our 3 workers.
Let's say we will mount the volume under /our-application/logs.
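
Before mounting, create the mount point on each worker (the mount will fail if the directory doesn't exist):

sudo mkdir -p /our-application/logs/

Then mount the volume on top of it: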

sudo mount.glusterfs localhost:/demo-v /our-application/logs/

Then, to make the mount permanent, we will need to add it to our fstab file.
So open /etc/fstab and add this line (note that the mount options must be a single comma-separated field with no spaces):


localhost:/demo-v /our-application/logs/ glusterfs defaults,_netdev,acl,backupvolfile-server=<HOSTNAME> 0 0
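
To check the entry without rebooting, you can unmount the volume and let fstab mount everything again:

sudo umount /our-application/logs/
sudo mount -a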

Instead of <HOSTNAME>, put the hostname of one of the other worker machines; that way, if the local node ever needs to be taken out of the storage pool, you can still mount the volume through the other machine.

Now you can try this

touch /our-application/logs/test.txt
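
Then, on any of the other two workers, list the directory to check that the file was replicated:

ls /our-application/logs/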

Found the file on the other workers? Congrats! You have a working replicated volume mounted across all of your workers.
Any questions? Leave a comment!

Top comments (1)

Oleg A.

Before mounting, you should create the dir:

mkdir -p /our-application/logs/