Imagine having your own free Docker cluster right now. You would probably call me crazy, or assume I'm lying or trying to sell you some shady deal.
Below I will walk you through every little step you need to get that free Docker cluster.
It's going to be composed of the following:
- GCP free-tier nodes.
- Docker Swarm only (no Kubernetes).
- Swarmpit as a GUI (Totally optional but useful).
Because Swarm forms a mesh, we can add machines from any provider as nodes in this cluster, so the setup here lets you put basically anything into your cluster, from old hardware to top-notch nodes; it's all up to you and how much you're willing to spend. Following this tutorial is free, and stays free even after the first year, so don't worry about that.
With some big providers now charging around $0.10 an hour just for the Kubernetes control plane, for some people that money could instead be another node in a cluster like the one below, so pay attention.
In this setup we run our own management interface. It's not as hyped or as full-featured as what some providers offer, but for many setups it's more than enough.
So what will this setup look like? Let's start with the nodes. Google gives you a free tier advertised as "life-time"; I guess that lasts until a lot of folks start using it (after this post, haha).
- Set up two or more accounts on GCP: https://cloud.google.com/ (you will need at least two credit cards, for example yours and your dad's or your wife's, to create the accounts; after you see this working, you will get the whole family creating GCP accounts and side-loading f1-micros in, hehe).
- On each account, create the free-tier server that Google provides, which is: Series N1 -> Type: f1-micro (1 shared vCPU, 614 MB).
- Change the disk image to Ubuntu -> Ubuntu 20.04 LTS Minimal
- Make sure to enable HTTP and HTTPS traffic in the network settings. Having those enabled out of the box saves us an extra configuration step for these ports later.
If you have already done this twice, you're off to a great start. Now we get into the fun technical part.
Think about this: with 614 MB, the Minimal OS image leaves only around 520 MB available, so we will be working under serious limitations. When people come up to you and say the word "Kubernetes", that alone wants at least 4 GB of RAM before the conversation even starts. For folks who already have a Kubernetes setup, the information below can be a great strategy for optimizing it, and for some companies a great way to save some money in the current pandemic situation. So let's stop talking and continue.
Docker Swarm, on the other hand, consumes around 65 MB of RAM on its own, plus a small amount for the containerd.io runtime that supports it, so we can round the full operation up to about 70 MB. We don't need anything beyond Docker Swarm to have a fully operational cluster that doesn't depend on any provider or anything of that sort. As a bonus, I will also show how to install Swarmpit, which gives you a great UI to manage the swarm or to study it further, which is where I am at this point in time. I have to tell you, I love the low footprint of Swarm; it's crazy how performant it is, especially compared to Kubernetes.
OBS: I'm not here to say which one is best or which one wins, just to give you insights and information. No sides taken, but I had to choose Swarm because of the resource limitations chosen for this setup.
Swap to the rescue: we will need to configure more swap than is usually recommended because of the memory restrictions imposed by this setup. Having a lot of stuff running in swap is not great for speed, especially because this free server doesn't give you fast disk technology either, but we're after the free setup, so we don't really care at this point.
On any server you bring up, if it's going to be an f1-micro, do this swap configuration so you don't run into issues when running workloads on it. You would usually set up some swap anyway, but here we go:
We will need nano installed on the nodes so please make sure to install nano:
- Update the APT package index:
sudo apt update
- Install nano:
sudo apt install nano
OBS: You can use apt or apt-get either is fine, you're in charge.
I'm assuming a fresh start, so I'm not checking whether you already have swap. And remember, you can change any of the numbers below if you feel strongly about them; I'm just describing what I did.
- Give 2 GB of Swap:
sudo fallocate -l 2G /swapfile
- Make swap file only accessible to root:
sudo chmod 600 /swapfile
- Mark the file as a swap file:
sudo mkswap /swapfile
- Tell Ubuntu to start using our new swap:
sudo swapon /swapfile
- To survive a reboot, save it to the /etc/fstab file:
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
- Change the swappiness value:
sudo sysctl vm.swappiness=10
- Change the vfs_cache_pressure value:
sudo sysctl vm.vfs_cache_pressure=50
- To persist these values across reboots, edit the sysctl config:
sudo nano /etc/sysctl.conf
- Add the following two lines to the bottom of the file:
vm.swappiness=10
vm.vfs_cache_pressure=50
- To save the file in nano and exit, press:
CTRL + X
- Then type Y and press Enter to confirm.
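The swap steps above can be collected into one small script. This is just a sketch of what this post already does by hand; it defaults to a dry run that only prints the commands so you can review them, and executes them only when you set DRY_RUN=0.

```shell
#!/usr/bin/env bash
# Sketch: the swap configuration steps from this post in one script.
# Defaults to a dry run that prints each command; set DRY_RUN=0 to apply.
set -eu

DRY_RUN="${DRY_RUN:-1}"

run() {
  echo "+ $*"
  if [ "$DRY_RUN" = "0" ]; then "$@"; fi
}

run sudo fallocate -l 2G /swapfile
run sudo chmod 600 /swapfile
run sudo mkswap /swapfile
run sudo swapon /swapfile
run sudo sh -c "echo '/swapfile none swap sw 0 0' >> /etc/fstab"
run sudo sysctl vm.swappiness=10
run sudo sysctl vm.vfs_cache_pressure=50
```

Note the sysctl values still need the /etc/sysctl.conf edit above to survive a reboot; the script only applies them to the running kernel.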
Next, we need to open some ports. If you're doing this in GCP, click on Firewall (under VPC Network), then click Create Firewall Rule.
For our case, just set the target to all servers, then make sure to open the ports Swarm needs, plus the Swarmpit port if you want the GUI (if not, just open the Swarm ports):
TCP: 2377 (cluster management), TCP/UDP: 7946 (node communication), UDP: 4789 (overlay network traffic)
TCP: 888 for Swarmpit (or whatever other port you want to configure; you can also configure a virtual path, it's up to you)
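If you prefer the command line over the console, the same rule can be created with gcloud. This is only a sketch: the rule name swarm-ports is an example, the port list is the standard set Docker Swarm documents plus 888 for Swarmpit, and the snippet just prints the command so you can review it before running it yourself.

```shell
# Sketch: compose the gcloud command that opens the Swarm + Swarmpit ports.
# TCP 2377 (cluster management), TCP/UDP 7946 (node communication),
# UDP 4789 (overlay traffic), TCP 888 (Swarmpit, optional).
SWARM_PORTS="tcp:2377,tcp:7946,udp:7946,udp:4789"
SWARMPIT_PORT="tcp:888"

CMD="gcloud compute firewall-rules create swarm-ports --direction=INGRESS --allow=${SWARM_PORTS},${SWARMPIT_PORT}"

# Review the command, then paste it into your shell to apply it.
echo "$CMD"
```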
Now that we're done with swap on our nodes and with opening the ports, we can start the Docker and Swarm setup, which is the product we actually want to build our cluster with.
On all nodes, do the Docker setup (if you prefer, you can follow the instructions in the Resources section below instead of here):
Install the prerequisites for Docker:
sudo apt-get install \
    apt-transport-https \
    ca-certificates \
    curl \
    gnupg-agent \
    software-properties-common
Add Apt key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Add Apt repository:
sudo add-apt-repository \
   "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
Update APT again so it sees the new repository:
sudo apt update
Install docker and containerd.io:
sudo apt-get install docker-ce docker-ce-cli containerd.io
Allow your user to run docker without sudo (log out and back in afterwards for the group change to take effect):
sudo usermod -aG docker YOURUBUNTUUSER
Now, on the first master node, run the following command to start the swarm:
docker swarm init
You'll get back a string with the command for adding worker nodes to your master. Copy and save that string somewhere so you don't lose it. It looks like the following:
docker swarm join --token BBBBB-1-3kre8r9q3120o46elb9aaaaav6ohrxxx9byy6bbb99g2jffow1-05y7ck1r4x0qvf6smli06175l 99.999.110.96:2377
Make sure to replace the IP at the end with your external IP, because the command is generated with your internal IP.
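Instead of editing the join command by hand, you can tell Swarm which address to advertise when you initialise it, using Docker's --advertise-addr flag. EXTERNAL_IP below is a placeholder for your master node's external IP; the snippet just composes the command for you to review and run:

```shell
# Sketch: initialise the swarm advertising the external IP directly,
# so the generated join command already contains the right address.
EXTERNAL_IP="203.0.113.10"   # placeholder: replace with your node's external IP

INIT_CMD="docker swarm init --advertise-addr ${EXTERNAL_IP}"

# Run this on the first master node:
echo "$INIT_CMD"
```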
On your worker nodes, run that command (with the external IP) and you will be adding nodes to your swarm. You can have multiple master nodes as well as multiple worker nodes in your setup, and Docker Swarm does a fantastic job of scheduling work across them.
Swarm also has automatic failover out of the box. I haven't gotten there yet; I just got this setup going and wanted to share it, so I will continue to explore and investigate, and share my findings in another post.
Run the following command to verify that you see the other nodes from the master node:
docker node ls
Some people don't like GUIs at all; I'm actually all in for visuals, dashboards, and every bit of information you can give me. I searched for a good UI for Swarm and found this great project called Swarmpit. It's beautiful, it helps me understand the main Swarm concepts, and I can use it to manage my swarm, so huge thanks to everyone behind that work.
Within the master node please do the following:
- Create a directory for the Swarmpit docker-compose file:
mkdir swarmpit
- Create the compose file:
cd swarmpit && nano docker-compose.yml
- Put the following content into the file. OBS: the current version of Swarmpit as of this writing is 1.9; please refer to the Swarmpit website (see the Credits section) for the latest version and use that in the docker-compose.yml instead.
version: '3'
services:
  app:
    image: swarmpit/swarmpit:1.9
    environment:
      SWARMPIT_DB: http://db:5984
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
    ports:
      - 888:8080
    networks:
      - net
    deploy:
      placement:
        constraints:
          - node.role == manager
  db:
    image: klaemo/couchdb:2.0.0
    volumes:
      - db-data:/opt/couchdb/data
    networks:
      - net
networks:
  net:
    driver: overlay
volumes:
  db-data:
    driver: local
- Go back to the folder you were in:
cd ..
- Deploy a stack for Swarmpit:
docker stack deploy -c swarmpit/docker-compose.yml swarmpit
- To see if Swarmpit is up, you can check with:
docker service ls
- Now it's time to test and see it working:
curl -v http://127.0.0.1:888
OBS: If you see HTML tags in the response, it means it's up and running.
Now you can access Swarmpit at the external IP of your master node (or, if you have multiple masters, any of their IPs): http://EXTERNALIPMASTERNODE:888
It will ask you for login and password, the defaults are admin/admin. Please change your password after first login.
You can also think about a load-balancing scenario and about Prometheus for monitoring, but I will leave those to you as homework.
Thank you god, mom, wife, son, the company I work for, friends, these companies that made this possible, all these talents working on the Docker eco-system and special thanks to Google for the f1-micro.