CodeCadim by Brahim Hamdouni

Experiment Nebula Mesh - part 1

Is it possible to use a public network, namely the Internet, to make two machines communicate securely? And if possible, with something easier to install and configure than OpenVPN?

A few months ago, I heard about Nebula from Defined and wanted to test it: a single binary, a few certificates, and as secure as it can be. That sounded promising!

So here we are, simulating a mesh network between two virtual machines using Vagrant, on my Ubuntu 20.04 machine with VirtualBox already installed.

Vagrant

Vagrant from HashiCorp is the tool I use to manage virtual machines.

sudo apt install -y vagrant
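
A quick check that the installation went fine:

vagrant --version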

I configure Vagrant to create two Debian 11 virtual machines, named boxA and boxB respectively.

To keep it simple, I bridge the two virtual machines onto my host network interface (eth0 in my case) and let my local network's DHCP server assign their addresses.

cat <<EOF > Vagrantfile
Vagrant.configure("2") do |config|
    config.vm.define "boxA" do |boxA|
        boxA.vm.box = "generic/debian11"
        boxA.vm.network "public_network", bridge: "eth0"
    end
    config.vm.define "boxB" do |boxB|
        boxB.vm.box = "generic/debian11"
        boxB.vm.network "public_network", bridge: "eth0"
    end
end
EOF
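
Optionally, I can make sure the Vagrantfile parses correctly before booting anything:

vagrant validate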

Now I can launch the two machines with a single command:

vagrant up
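
If I want to double-check that both boxes came up, Vagrant can report their state:

vagrant status

Both boxA and boxB should be listed as running.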

Nebula

Installing Nebula is as simple as downloading the binaries from GitHub. I choose nebula-linux-amd64.tar.gz to match my setup.

wget https://github.com/slackhq/nebula/releases/download/v1.6.0/nebula-linux-amd64.tar.gz

After downloading, I extract the archive:

tar xf nebula-linux-amd64.tar.gz

I now have two binaries: nebula and nebula-cert.

  • nebula will be copied to the two virtual machines. This is the main binary that establishes the tunnel between them.
  • nebula-cert will be used to generate all the needed certificates:
./nebula-cert ca -name "ACME, Inc"
./nebula-cert sign -name "boxA" -ip "192.168.168.100/24"
./nebula-cert sign -name "boxB" -ip "192.168.168.200/24"

I choose 192.168.168.* as my mesh network IP addresses: .100 for boxA and .200 for boxB.

These commands produce the following files:

ca.crt
ca.key
boxA.crt
boxA.key
boxB.crt
boxB.key
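
If you are curious about what went into a certificate, nebula-cert can print its contents; this is an optional check:

./nebula-cert print -path boxA.crt

It shows the name, the Nebula IP, and the validity period that were baked into the certificate.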

To make it simpler to copy the files to the virtual machines, I create one folder per machine and move the needed files into it:

mkdir boxA boxB
mv boxA.* boxA && mv boxB.* boxB

Now I want to generate a Nebula configuration file that will be common to the two machines. I need their IP addresses on my local network (the ones assigned by my DHCP server). I know that Vagrant uses the eth1 interface inside the machines, so I use a bit of shell magic:

First on boxA:

boxAip=$(vagrant ssh boxA --no-tty -c "ip address show eth1| grep 'inet ' | sed -e 's/^.*inet //' -e 's/\/.*$//'" | tr -d '\r')

Then on boxB:

boxBip=$(vagrant ssh boxB --no-tty -c "ip address show eth1| grep 'inet ' | sed -e 's/^.*inet //' -e 's/\/.*$//'" | tr -d '\r')
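
A quick sanity check that both variables are filled before generating the config:

echo "boxA is at $boxAip and boxB is at $boxBip"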

The magic is that the $boxAip and $boxBip variables now contain the wanted IPs. Let's use them in the static_host_map section of the Nebula config. Since neither machine acts as a lighthouse here, this static mapping is how each node finds the other's real address on my local network:

cat <<EOF > config.yml
pki:
  ca: /etc/nebula/ca.crt
  cert: /etc/nebula/host.crt
  key: /etc/nebula/host.key
static_host_map:
    "192.168.168.100": ["$boxAip:4242"]
    "192.168.168.200": ["$boxBip:4242"]
lighthouse:
  am_lighthouse: false
listen:
  host: 0.0.0.0
  port: 4242
punchy:
  punch: true
tun:
  disabled: false
  dev: nebula1
  drop_local_broadcast: false
  drop_multicast: false
  tx_queue: 500
  mtu: 1300
  routes:
  unsafe_routes:
logging:
  level: info
  format: text
firewall:
  conntrack:
    tcp_timeout: 12m
    udp_timeout: 3m
    default_timeout: 10m
    max_connections: 100000
  outbound:
    - port: any
      proto: any
      host: any
  inbound:
    - port: any
      proto: any
      host: any
EOF

It's time to configure the machines. First, let's send all the files to each machine:

vagrant upload ./boxA /tmp/ boxA
vagrant upload ./boxB /tmp/ boxB
vagrant upload ca.crt /tmp/ boxA
vagrant upload ca.crt /tmp/ boxB
vagrant upload config.yml /tmp/ boxA
vagrant upload config.yml /tmp/ boxB
vagrant upload nebula /tmp/ boxA
vagrant upload nebula /tmp/ boxB
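
To make sure the uploads landed, a quick listing of /tmp on one of the boxes helps:

vagrant ssh boxA -c "ls /tmp"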

Now I ssh into each machine and continue the configuration:

First, boxA:

vagrant ssh boxA

Inside boxA, I move the files to their respective locations and finally launch Nebula in the background before leaving boxA:

sudo mkdir /etc/nebula
sudo mv /tmp/config.yml /etc/nebula/config.yml
sudo mv /tmp/ca.crt /etc/nebula/ca.crt
sudo mv /tmp/boxA.crt /etc/nebula/host.crt
sudo mv /tmp/boxA.key /etc/nebula/host.key
sudo mkdir /opt/nebula 
sudo mv /tmp/nebula /opt/nebula/
chmod +x /opt/nebula/nebula
cd /opt/nebula
sudo ./nebula -config /etc/nebula/config.yml &
exit

Let's configure boxB:

vagrant ssh boxB

Then I move all the files and launch Nebula:

sudo mkdir /etc/nebula
sudo mv /tmp/config.yml /etc/nebula/config.yml
sudo mv /tmp/ca.crt /etc/nebula/ca.crt
sudo mv /tmp/boxB.crt /etc/nebula/host.crt
sudo mv /tmp/boxB.key /etc/nebula/host.key
sudo mkdir /opt/nebula 
sudo mv /tmp/nebula /opt/nebula/
chmod +x /opt/nebula/nebula
cd /opt/nebula
sudo ./nebula -config /etc/nebula/config.yml &
exit
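
Launching Nebula in the background with & is enough for this experiment, but it will not survive a reboot. As a side note, a minimal systemd unit sketch, assuming the paths used above, could look like this (to be run inside each box):

sudo tee /etc/systemd/system/nebula.service > /dev/null <<EOF
[Unit]
Description=Nebula mesh VPN
After=network.target

[Service]
ExecStart=/opt/nebula/nebula -config /etc/nebula/config.yml
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now nebula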

OK, at this point, I have two virtual machines with a Nebula network between them. How am I going to verify that it really works?

Well, if I can ping boxB from boxA, it will mean that everything is OK, so let's find out:

vagrant ssh boxA -c "ping 192.168.168.200"

Hurrah! It's working. Of course, I can do the same from the other side:

vagrant ssh boxB -c "ping 192.168.168.100"
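
One more quick check, assuming the tun device name nebula1 from the config above: look at the interface Nebula created inside a box.

vagrant ssh boxA -c "ip address show nebula1"

It should show the 192.168.168.100/24 address coming from the certificate.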

And that's all, folks, for part 1!

Next, I'll try to use this setup to run a MySQL server on one machine and access it from a MySQL client on the other. And to make it a little more challenging, I'll use Podman as the container engine to run both the client and the server.

See Part 2
