First, a warning
If you've been following along with our previous steps you will have created your own local Linux VM running MicroK8s. In this step we're going to expose this VM to the public internet.
Making a hole in your network, or anyone else's, to expose a machine, virtual or not, increases that network's risk of attack. You are responsible for your security and the risks created by following these steps.
...On with the show!
Reverse Tunnels
Because I'm running my MicroK8s Linux VM at home on my desktop, it isn't exposed to the public internet; getting it there takes a little work.
I could pay extra for a static IP address, but depending on your ISP that may simply not be an option, or it could be prohibitively expensive.
As an alternative, we can use reverse tunnels to route network traffic between our local machine and the outside world, regardless of whether we have a static IP or not.
Previously I've used services like ngrok for this, and there is a list of other options here. But quite often when using a paid service I've quickly run into limitations, like a limited number of addresses or not being able to route traffic across multiple ports. Because of this I looked for another option, and like the rest of this series, I wanted to do it myself.
Enter SSH, which we can use to provide a tunnel from one server to another.
So far we've made use of free open source software... well, apart from Windows 11 Pro, but I'm guessing the majority of the audience already had that. For this stage, though, one of the things we will need is our own server that is already accessible to the outside world, which in general is going to cost us something.
There are a huge number of options here, varying in price and provider, but for this example I'm going to be creating and hosting a VM in Microsoft Azure. If you don't have an account with Azure, you can sign up for a free one and receive $200 credit for 30 days if you just want to test things out. Alternatively you can use any other provider like DigitalOcean, AWS or Google Cloud; the choice is yours.
Now, before we start to worry too much about cost: all we're going to be using this VM for is forwarding data to and from a couple of ports, so we're going to build all this on the cheapest VM we can get away with.
Last but not least, before we get started, I used the following guide from the brilliant Jeff Geerling when I was trying this out for the first time on a Raspberry Pi cluster I built (maybe that will be the basis for another guide!). I urge you to check out Jeff's page and YouTube channel; he makes great videos.
Create a Linux VM....again!
This time though we're creating a new VM using Azure. Of course we could have created our first VM in Azure and skipped this step entirely, but hosting a full Linux VM running MicroK8s and our services in the cloud could get expensive; this way we're only paying for a minimal VM.
With an Azure account, head into the portal and choose the option to 'Create Resource'.
And then choose the option to create a virtual machine.
The following are the settings I used for creating my VM but you may want to adjust these as you see fit. Pay close attention to the 'Size' as this determines the cost of your VM. I've chosen the minimum possible for my subscription, but if you want to use this VM for other things you may want to increase the performance here.
For 'Disks' you may want to change from premium storage to standard HDD which again reduces the cost.
My 'Network' settings are set as follows.
Make sure you have the 'Public IP' set to '(new)', and also make a note of the inbound ports that are allowed (also shown on the first screen). This should default to allowing connections via 'SSH' (port 22), which will let us log in to our VM once it's created.
Go ahead and create the machine, which should only take a few minutes, and once complete you should be able to see the resource in your dashboard.
Open up the terminal on your local machine and you should be able to SSH to this new VM using the IP address shown.
ssh {username}@{Azure VM IP}
Setup SSH
The first thing we need to do on our Azure VM is set up SSH correctly. We need the AllowTcpForwarding option enabled, which should be the default, but we can check it with the following command.
sudo sshd -T | grep -E 'allowtcpforwarding'
You should see the following.
allowtcpforwarding yes
We now need to enable GatewayPorts, which you can do by editing the SSH config file.
sudo nano /etc/ssh/sshd_config
Find the line that reads #GatewayPorts no and change it to GatewayPorts yes.
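As an aside, the same change can be made with a sed one-liner instead of editing the file by hand; a quick sketch, assuming the line is still commented out exactly as #GatewayPorts no.
sudo sed -i 's/^#GatewayPorts no/GatewayPorts yes/' /etc/ssh/sshd_config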
Exit out of nano (CTRL+X, Y, ENTER) and run the following to restart SSH (on some distributions the service is named sshd rather than ssh).
sudo systemctl restart ssh
Now using the following command you can check that both options are enabled.
sudo sshd -T | grep -E 'gatewayports|allowtcpforwarding'
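If everything is set correctly the output should show both options set to yes.
gatewayports yes
allowtcpforwarding yes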
Back on our Local Linux VM we need to configure things so we can connect directly via SSH to our Azure VM. This involves creating an SSH key pair.
Run the following command
ssh-keygen -t ed25519 -C "{Local VM HostName}"
And press Enter for all the prompts.
Get the contents of the created file using the following command.
cat /home/{username}/.ssh/id_ed25519.pub
Copy the string returned, then back on your Azure VM edit the ~/.ssh/authorized_keys file and paste the copied string in.
sudo nano ~/.ssh/authorized_keys
Exit out of nano (CTRL+X, Y, ENTER) and head back to your Local VM.
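As an alternative to copying the key by hand, if ssh-copy-id is available on your Local VM it can append the public key to authorized_keys on the Azure VM for you (it will prompt for your Azure VM password once).
ssh-copy-id -i ~/.ssh/id_ed25519.pub {username}@{Azure VM IP}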
You should now be able to SSH to your Azure VM from your Local VM without the need for a password; simply use the following.
ssh {username}@{Azure VM IP}
Answer yes when prompted and you should connect straight to the VM. You can type exit to return to your Local VM.
Open Ports
As we saw when we created our Azure VM, apart from port 22 for SSH connections, most ports are disabled by default.
The Panache Legal services we have running on our local VM use ports 30000-30010, so we'll need to open those up. Choose the Networking menu item of our Azure VM, click on Add inbound port rule, and create a rule for our ports.
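If you prefer the command line to the portal, the Azure CLI can add an equivalent rule. This is just a sketch, assuming {resource-group} and {VM name} are the names you chose when creating the VM and that your CLI version accepts a port range.
az vm open-port --resource-group {resource-group} --name {VM name} --port 30000-30010 --priority 1010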
Setup AutoSSH
autossh will allow us to persist our connections. We could simply create the tunnels manually via the command line using something like the following.
ssh -nNTv -R 0.0.0.0:8080:localhost:80 {username}@{Azure VM IP}
This would map our web server's HTTP port to our Azure VM (-n and -N stop ssh from reading stdin or running a remote command, -T disables TTY allocation, -v gives verbose output, and -R sets up the reverse tunnel from the remote port back to the local one), but you would need to re-run it after a reboot, or whenever the session drops, to maintain the connection. With autossh we can ensure the tunnel is always running, but first we need to install it.
On our Local VM run the following command.
sudo apt install autossh
Once the install completes, create an autossh config file with the following command.
sudo nano /etc/default/autossh
Within this new file add the following lines.
# How often (in seconds) autossh checks the connection, and the delay before the first check
AUTOSSH_POLL=60
AUTOSSH_FIRST_POLL=30
# Treat early ssh exits as normal so autossh always retries
AUTOSSH_GATETIME=0
# Base port autossh uses for its connection monitoring
AUTOSSH_PORT=22000
Following those lines we need to add the configuration for each of the ports. This is a single line beginning with SSH_OPTIONS="-N, followed by -R 0.0.0.0:30001:localhost:30001 {Azure VM Username}@{Azure VM IP}, repeated for each of the ports. For example.
SSH_OPTIONS="-N -R 0.0.0.0:30001:localhost:30001 {Azure VM Username}@{Azure VM IP} -R 0.0.0.0:30002:localhost:30002 {Azure VM Username}@{Azure VM IP} -R 0.0.0.0:30003:localhost:30003 {Azure VM Username}@{Azure VM IP} -R 0.0.0.0:30004:localhost:30004 {Azure VM Username}@{Azure VM IP} -R 0.0.0.0:30005:localhost:30005 {Azure VM Username}@{Azure VM IP} -R 0.0.0.0:30006:localhost:30006 {Azure VM Username}@{Azure VM IP} -R 0.0.0.0:30007:localhost:30007 {Azure VM Username}@{Azure VM IP} -R 0.0.0.0:30008:localhost:30008 {Azure VM Username}@{Azure VM IP} -R 0.0.0.0:30009:localhost:30009 {Azure VM Username}@{Azure VM IP} -R 0.0.0.0:30010:localhost:30010 {Azure VM Username}@{Azure VM IP}"
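Note that standard OpenSSH stops reading options at the first destination it sees, so if you find that only the first port gets forwarded, the documented syntax is to list every -R flag before a single destination. A sketch of that form, using the same ports and placeholders.
SSH_OPTIONS="-N -R 0.0.0.0:30001:localhost:30001 -R 0.0.0.0:30002:localhost:30002 -R 0.0.0.0:30003:localhost:30003 -R 0.0.0.0:30004:localhost:30004 -R 0.0.0.0:30005:localhost:30005 -R 0.0.0.0:30006:localhost:30006 -R 0.0.0.0:30007:localhost:30007 -R 0.0.0.0:30008:localhost:30008 -R 0.0.0.0:30009:localhost:30009 -R 0.0.0.0:30010:localhost:30010 {Azure VM Username}@{Azure VM IP}"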
Exit out of nano (CTRL+X, Y, ENTER).
Next we need to tell systemd about autossh. Create a new file with the following.
sudo nano /lib/systemd/system/autossh.service
Add the following, making sure to replace the {username} placeholder with the username you use on your Local VM.
[Unit]
Description=autossh
Wants=network-online.target
After=network-online.target
[Service]
Type=simple
User={username}
# Pull SSH_OPTIONS from the file we created above
EnvironmentFile=/etc/default/autossh
ExecStart=/usr/bin/autossh $SSH_OPTIONS
# Restart the tunnel automatically if it ever dies
Restart=always
RestartSec=60
[Install]
WantedBy=multi-user.target
Exit out of nano (CTRL+X, Y, ENTER), then add a symlink for systemd.
sudo ln -s /lib/systemd/system/autossh.service /etc/systemd/system/autossh.service
Finally, run the following commands to reload systemd, start autossh, and enable it on startup.
sudo systemctl daemon-reload
sudo systemctl start autossh
sudo systemctl enable autossh
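You can check that the service is running and watch its logs with the following.
sudo systemctl status autossh
journalctl -u autossh -f
And on the Azure VM, ss -tln should show the forwarded ports (30001-30010) in a LISTEN state once the tunnels are up.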
We have one final step, assuming you're using the Panache Legal containers. If you remember, when we created our deployment files in step 4 some of the environment variables referred to our Local VM; this is where you replaced {server-IP} with the IP address of the local VM. You now need to change this to the IP address of your Azure VM, although you can leave {db-server-IP} as it is, because the MySQL connection does not need to be routed across the internet.
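The exact variable names depend on the Panache Legal deployment files from step 4, so treat this purely as an illustrative sketch (SERVER_URL is a made-up name), but the change in each deployment file will look something like this.
env:
  - name: SERVER_URL
    # was http://{server-IP}:{port} - now points at the Azure VM's public IP
    value: "http://{Azure VM IP}:{port}"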
Again, if you're using the Panache Legal containers you can run the DeleteService.sh script to remove all the pods, make your changes to the deployment files, and then run the StartServices.sh script to recreate all the pods.
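Once the script finishes you can confirm the pods are back up and running with the following.
microk8s kubectl get pods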
If all has gone according to plan you should be able to visit http://{Azure VM IP}:30001 in your browser and, fingers crossed, see the Panache Legal login page and be able to log in to the system.
All Finished :o)
That's it.
We've created a local Linux VM.
We've installed MicroK8s and MySQL on the local VM.
We've installed NGINX and phpMyAdmin.
We've spun up pods in MicroK8s using deployment files.
We've created an Azure Linux VM.
And finally we've configured autossh and used a tunnel to access our local VM via the internet through our Azure VM.
Hopefully this has given you all the tools you need to build your own environments and expose them to the outside world if you need to.
In Closing
Keep an eye out for further tutorials and posts that I'll be putting out.
And please take a look at Panache Legal; it's a fully open source application built using .NET, with Blazor and IdentityServer. It's still in active development, so consider it pre-alpha, but take a look and why not get involved?
Pete