I've been wanting to set up a Kubernetes (k8s) cluster for a while, mainly because I want to learn how it works. But, because I'm me, I refuse to do anything the easy way, so I didn't want to use GCP or AWS. I have four bare metal machines floating about the internet; there's no reason I can't use them, right?
Well, a couple of reasons, actually. One, they aren't exactly bare metal; they're cloud bare metal, which means most of them sit behind a weird NAT that doesn't share the external IP with the machine, so none of the network interfaces actually have the external IP. Two, my home machine, which I plan to use as the control plane machine, is behind a router. So, again, NAT.
Surely that's not a problem, right?
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
- pods on a node can communicate with all pods on all nodes without NAT [1]
Huh. Well, let's see. AWS and GCP clearly have to be able to make this happen with NAT, right? There's no way the IP space exists otherwise.
Let's use a VPN. This post looks perfect: an easy install script for OpenVPN [2].
Installed it, connected the client, and the client promptly lost all internet connectivity. Huh. It could ping the VPN server, so that wasn't it.
Turns out you'll want to edit a couple of lines in server.conf.
(you can find the full example here: [3])
You want to:
- remove the line that starts
- add the
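The exact lines are in the linked example, but the symptoms match a well-known pair of server.conf edits. The fragment below is illustrative, not a copy of that file:

```
# Removing (or commenting out) this push stops OpenVPN from replacing the
# client's default route, which is what was killing the client's internet:
# push "redirect-gateway def1 bypass-dhcp"

# Adding this lets VPN clients reach each other directly over the tunnel,
# which the cluster nodes will need:
client-to-client
```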
Now the server can ping the client, and the client can ping the server. Success!
There's just one thing left to do, and that's to tell kubeadm init to use the VPN. I used flannel as my pod network, so my init command was:
kubeadm -v 1 init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.8.0.1
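With the API server advertising the VPN address, the remaining steps are the usual ones: apply the flannel manifest on the control plane node, then run the join command that kubeadm init prints on each worker. The commands below are a sketch; the token and hash are placeholders for the values from your own init output:

```
# On the control plane node: install the flannel CNI
# (kube-flannel.yml downloaded from the flannel repo)
kubectl apply -f kube-flannel.yml

# On each worker: join the cluster over the VPN, using the
# token and CA cert hash printed by kubeadm init
kubeadm join 10.8.0.1:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```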
And there we are!
drazisil@central:~$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
central   Ready    master   28m   v1.15.3
server1   Ready    <none>   26m   v1.15.3
server2   Ready    <none>   94s   v1.15.3
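One sanity check worth calling out: the pod network CIDR handed to kubeadm (flannel's default, 10.244.0.0/16) must not overlap the VPN subnet (OpenVPN's default is 10.8.0.0/24, which is what's assumed here), or the two sets of routes will fight each other. A quick check with Python's ipaddress module:

```python
import ipaddress

pod_cidr = ipaddress.ip_network("10.244.0.0/16")  # flannel's default pod network
vpn_cidr = ipaddress.ip_network("10.8.0.0/24")    # OpenVPN's default server subnet

# The pod network and VPN subnet must be disjoint for routing to work.
print(pod_cidr.overlaps(vpn_cidr))  # → False
```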