
Daniel Hofman


Homelab Adventures: Crafting a Personal Tech Playground

Disclaimer: I am not affiliated with any of the applications mentioned in this post. I have chosen to discuss them solely based on my positive experiences and the benefits they have offered in my personal tech projects.

No, I am not talking about a server rack in your closet, or an email server in your guest bathroom. This is about an old spare PC or laptop that can be repurposed into a Linux home server and provide a lot of fun if you are into tech.

Why Linux, you wonder? Linux is often recommended because unlike its Windows cousin, it can run for long periods without needing constant attention. Ok, it’s also free which is a nice feature. Yes, Windows servers are stable, but definitely not free.

This is a story of my own venture into the homelab world and what I learned along the way.

A long, long time ago, last year, I finally realized that I had reached the end of my patience with being unable to access my notes from all my devices. Thousands of my notes were stuck in the past using the desktop version of OneNote 2010. The OneNote upgrade prompts also increased over the last couple of years and kept nagging me to switch to the new and shiny cloud connected OneNote. Since I am overprotective of my life’s collective stash of information, I was not going to put that in the cloud. Microsoft must already have a copy of it somewhere in a data center saved under my barcode, but that’s probably a story for another time. In any case, I was not going to make it easier for them if I could help it.

So I went on a quest to find a good, reliable replacement for my old OneNote, a replacement that could be accessed from anywhere and from any device.

Another serious problem I was facing was the fact that OneNote 2010 does not offer a nice way to export all the notes. I tried all forms of exporting, and none of them really worked well. Either my laptop crashed or the notes were unreadable. This was a terrible experience, and I felt very locked into OneNote at that point.

Hello Trilium!

After spending a significant amount of time searching and testing different apps, I finally settled on an app and an idea to solve all my issues. Welcome to Trilium, an open-source application that feels perfect. It has all the features I could dream of and more. What's more important, it has the nested tree structure I have loved and missed since my days using TreePad in the 2000s. When TreePad was discontinued, I made the switch to OneNote and have missed the old nested tree ever since. Trilium handles thousands of notes seamlessly, has very good search, and keeps all the notes in a SQLite database, which, as a software developer, I can appreciate. It even has a query page for the database, so you can run SQL queries against your notes all day long. You can even make updates to your notes via SQL. How cool is that?
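To give a feel for the SQL console, here is a hedged sketch: it builds a throwaway SQLite file with a simplified notes table (the real Trilium schema in document.db has many more columns), then runs the kind of SELECT you could paste into Trilium's query page.

```shell
# Illustrative only: a toy database mimicking a simplified Trilium notes table.
rm -f demo.db
sqlite3 demo.db <<'SQL'
CREATE TABLE notes (noteId TEXT, title TEXT, isDeleted INT, dateModified TEXT);
INSERT INTO notes VALUES ('a1', 'Homelab ideas', 0, '2024-01-02');
INSERT INTO notes VALUES ('a2', 'Old draft',     1, '2023-05-01');
-- the kind of query you could run from Trilium's SQL console:
SELECT noteId, title FROM notes WHERE isDeleted = 0 ORDER BY dateModified DESC;
SQL
# prints: a1|Homelab ideas
```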

Finding a home for the Trilium app

I mentioned earlier that I wasn't going to store my notes in the cloud. That still holds true, but the catch is that I would consider it if it were my own cloud, with a solid security layer around it. As a software developer, this seemed like an exciting new technology on the horizon, and I was eager to dive in, learn new shiny tech, and perhaps even need to buy new hardware. Along with a host of delightful new issues and bugs to tackle, and more notes to jot down. This was great, and I was prepared to face the challenge. If successful, I would have a homelab server and access to all my notes from anywhere in the world. A long-awaited dream, but now I was determined to make it a reality.

First attempt at homelab server: Raspberry Pi 4 and 5

So I watched many tutorials online, and it seemed that a Raspberry Pi would be a good fit. Small, quiet, and very well-equipped: on-board Wi-Fi, USB 3, Ethernet, and even two 4K display outputs. All the online videos and tutorials were raving about it, so I purchased a version 4, since version 5 was not shipping yet, and put the 5 on backorder, where it sat for 4 months.

I received the new Raspberry Pi 4 and started tinkering with it. I copied the latest Raspberry Pi OS onto the micro SD card and booted the Pi. I connected a 4K monitor to it and, to my surprise, realized that one thing all those YouTube videos did not mention or emphasize was that using it with a monitor would feel like being stuck in molasses. Not exactly a pleasant user experience. I had plans to install Docker and containerized versions of Trilium and NGINX on it, but the Raspberry Pi really felt underpowered. After some quick research, I found references to overclocking it.

Since it seemed very simple to do by modifying a text file, I tried to max it out to the point just before crashing. Going overboard simply meant the device would not boot at all.

The parameters I ended up with, at the maximum I could push the board to, went into the /boot/config.txt file.


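For reference, overclock settings of this kind live in /boot/config.txt. The values below are a hedged illustration, not the exact numbers from my board; every Pi and cooling setup tops out differently, so raise them gradually:

```ini
# /boot/config.txt -- illustrative Raspberry Pi 4 overclock values only.
# Push too far and the Pi will not boot; hold SHIFT during boot to
# temporarily disable the overclock, or edit the SD card on another machine.
over_voltage=6
arm_freq=2000
gpu_freq=700
```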

Yes, I bumped up the voltage a bit as well. I did purchase the intercooler fan and a large heat sink for the Pi, so my temps were reasonable, staying at around 60 degrees Celsius. Not great for the long term, but good enough for quick tests to see if I could even make this thing work.

I installed Docker, Trilium, and a few more containers with some admin tools. It was still too slow, and I started to get disappointed. Even after overclocking it, it was still sluggish, especially when navigating through thousands of notes.

Some time later, the new Raspberry Pi 5 showed up at my doorstep. At that point, I did not have much hope, since I had read all the specs and knew that it was a bit faster, but not a groundbreaking improvement.

To no surprise, the new Pi 5 was just a bit quicker, even after, again, overclocking it to the max. I had already made some other plans for it, so I was not disappointed about the purchase.

The new Raspberry Pi was repurposed as a portable device I would carry with my iPad for remote development while on the go. I configured it with a static IP, installed VS Code, and I use rsync to copy data between the home PC and the Linux OS on the Pi. Connecting to the iPad via USB-C was super reliable, and actually using it with the small screen of the 12.9" M2 iPad is a blast. I went through some of the remote desktop apps as well before finally settling on the winner, RealVNC. I tried Jump Desktop and MS Remote Desktop as well. They all work great, so RealVNC is just a personal preference.

Why the USB-C connected Raspberry Pi, you ask?

For the occasions when I have no internet access while traveling, or when schools and universities block protocols like RDP and VNC. It's nice to have a device with all your stuff, connected whenever needed via an old-fashioned cable.

VSCode Server and the iPad

Yes, I did try VS Code server in a Docker container. The issue with it is that it does not work with all the extensions I needed. What was even worse, the keyboard mappings in the browser on the iPad are just terrible when it comes to VS Code. It was actually much better to simply connect the iPad to the Raspberry Pi over USB-C and then connect to the Pi via remote desktop and go with that. No issues with key mapping, and the speed was quite acceptable. Plus, it was truly a full-screen experience. The browser version, even when started as an app, is never quite the same. Another significant advantage of running remote desktop to the Pi was the addition of the function keys on the iPad. Yes, they are on-screen, but at least they are available. So, F3, F5, or F10 are there when I need them, and I don’t have to search through the menus.

I also tried the VS Code tunneling extension, but faced the same browser issues listed above.

Learning experience

I did learn a few tricks while working on the Pi. Even though I worked on Linux in my day job, this was a great learning exercise. I realized, for example, that the ARM processor could only run certain software, and the software I had planned to run was not supported. I am referring to a SQL Server instance on ARM. I've seen some mentions on the Internet that Microsoft was working on a version that would run on the Raspberry Pi, but it wasn't available yet. What a bummer. I just needed a small DB for some testing that would be accessible from the web. Unfortunately, it had to be SQL Server at that time.

I did have a good old laptop that I could use in place of the Pi. I was reluctant to use it before since it had Windows 11 Pro on it, and I didn’t want to wipe it out and install Linux.

I finally had no choice. I decided to migrate all the apps I had installed on the Pi and use my old i7 laptop in its place.

That worked well, the speed definitely improved, and I was quite happy. The now-idle Raspberry Pi 4 was put aside and is currently waiting for a new assignment.

Here is my docker-compose that I used in Portainer, which worked for me:

version: '3.8'

services:
  trilium:
    image: zadam/trilium:latest
    container_name: trilium
    ports:
      - "8080:8080"
    environment:
      - USER_UID=1000
      - USER_GID=1000
    volumes:
      - ./data/trilium:/home/node/trilium-data
    networks:
      - dh_network
    restart: unless-stopped

networks:
  dh_network:
    driver: bridge

Docker, the endless containers

I worked with Docker on a daily basis in my day job and understood the power of containers, especially coming from a Windows environment where DLL hell and the ActiveX control referencing nightmare could drive someone crazy. Only minor improvements have been made to resolve this in the last couple of decades. Recently, .NET has made some strides to address it with its framework-dependent and self-contained install types.

Let me tell you, this is still not that great. Knowing how frequently .NET receives updates, the framework-dependent type will swap out DLLs unexpectedly, leading your application to need another update. I am not excited about requiring so many .NET versions on my system just to run certain apps. On the flip side, the self-contained type seems excellent until you have to deploy the code and bundle around 300+ files, mostly .NET DLLs, each time. In my situation, I use the WiX Toolset installer. I cringe when I attempt to publish my code and have to create an installer. Most of the time, the .NET version gets automatically updated on my PC, and WiX complains about missing or outdated DLLs due to the recent .NET update. It's quite exhausting, and there doesn't seem to be any solution in sight.

I also work with Golang and truly value the single executable output. It's incredibly convenient. I am aware that .NET can also compile into a single file; however, this feature has never functioned properly for me, and the resulting file is massive. Sometimes I wish I had written the API in Go, and other times I tell myself that I will rewrite it in Go. The truth is that I can't, due to some .NET encryption I use across my desktop app and the API. I tried a couple of times to make it work, but due to some padding issues, I was unable to, and probably gave up too easily.

Let's get back to the topic. Running apps in containers is super convenient. When something goes wrong, you can just restart it and move on. Of course, I am not referring to large Kubernetes deployments and container management; I am talking about a simple homelab environment. It's something you can tinker with without losing sleep over, well, maybe once or twice. Yes, I know some of us take homelabs to the next level, but I'm not talking about a swarm of interconnected Raspberry Pis either :)

So the point here is that since it's so nice to run Docker containers, the next logical step is to add a bunch of apps to have more fun with it and learn something in the process as well.


Portainer

My first app to manage the whole thing was Portainer, which is easy to follow and works really great. I used Rancher at work, and Portainer seemed like a super user-friendly version of Rancher.

File Browser

I then thought it would be nice to have something that allows me to access the file system without needing to write CLI commands in WSL. The best app I could find was one called File Browser. Yes, another container to play with. I could do some basic operations on the file system, and, even cooler, I was able to simply create browser links to files I wanted to open with a single click. It's really a great solution, and the app is super responsive and fast. Some basic user management is also available. I highly recommend this application; it's a joy to use.


Webmin

Then came the time when I wanted something for managing the Linux OS and getting some basic stats. I tried many apps until I finally settled on Webmin. Unfortunately, this app does not run well in a container, but it was worth it. One very helpful feature it has is the built-in terminal. It proved to be quite helpful for the times I wanted to access the server while on the iPad.


iSH

I should probably also mention that there is an app available on mobile that is essentially a terminal emulator and works great for free. The app is called iSH. I tried some paid options, but they seem to have too much fluff. This is a very streamlined and focused app that works great!


GitLab-CE

Now, an app that I've been eagerly waiting to install on my personal server is GitLab-CE (Community Edition). As I mentioned, I am a software developer and have written and collected a ton of source code. I have many personal projects, and I am not ready to put them in the cloud, not even in a private repo. Call me paranoid. At my regular job, I use Bitbucket, Jira, and Confluence on a daily basis. For some personal projects, I use GitHub, and for others, I use GitLab. My most anticipated scenario is a self-hosted GitLab version, one that I have full control of.

I was finally ready for GitLab. The install was fairly easy, and I was able to migrate all of my code and tickets into my long-awaited self-hosted GitLab. I must say the app ran well, but there was quite a bit of a configuration headache, and the memory footprint was larger than I liked. It consumed more memory than any of my other apps, but I had no performance issues at all.

Here is my docker-compose that I used in Portainer, which worked for me:

version: '3.6'

services:
  gitlab:
    image: gitlab/gitlab-ce:16.6.2-ce.0
    container_name: gitlab
    restart: unless-stopped
    hostname: 'gitlab'
    user: "0:0"
    environment:
      GITLAB_OMNIBUS_CONFIG: |
        gitlab_rails['gitlab_shell_ssh_port'] = <please_enter>
    ports:
      - '8282:80'
      - '2323:22'
    volumes:
      - ./data/gitlab/config:/etc/gitlab
      - ./data/gitlab/logs:/var/log/gitlab
      - ./data/gitlab/data:/var/opt/gitlab
    networks:
      - dh_network

networks:
  dh_network:
    external: true

Here are the GitLab settings from the gitlab.rb file:

external_url ''
nginx['redirect_http_to_https'] = false
gitlab_rails['gitlab_shell_ssh_port'] = 22
letsencrypt['enable'] = false
nginx['listen_https'] = false
nginx['listen_port'] = 80
puma['worker_processes'] = 0

registry_nginx['proxy_set_headers'] = {
  "X-Forwarded-Proto" => "https",
  "X-Forwarded-Ssl" => "on"
}


NextCloud

After the successful GitLab addition, I ended up installing another containerized app worth mentioning: NextCloud. With the idea of replacing the standard cloud services and even iPhone photo sync, this was an interesting and almost too-good-to-be-true solution. This whole homelab server was really paying off and getting better by the minute. Since I had everything working just right, I spent all my free time researching new apps I could add to my Docker setup. This was indeed a pleasurable experience and one that also gave me a lot of technical satisfaction.

At this point, I had about 15 container apps, and let me tell you, not all of them wanted to play nicely on the system. There was quite a bit of research and configuration to get them all to behave and work as expected. I became even more familiar with Docker, Portainer, the OS, and the networking between containers and all the components in the server. I had many happy moments when something finally clicked and started to work. Sometimes due to my tinkering and other times due to miracles.

Back to NextCloud: I initially had many issues with performance. It got to the point that I wanted to uninstall it and drop the idea of a personal dedicated cloud service. After a lot of research and trial and error, I finally got the performance under control. My eyes finally opened to the idea that it's true: I can have a personal cloud not shared with the big boys and not data mined for someone else's benefit. There are some aspects of NextCloud I use on a daily basis: the calendar, the contacts list, and of course the iPhone photo sync.

The actual data storage portion of the experience is not that great; I use the File Browser app for that. The reason? It can take forever when it comes to copying files to the cloud, and I think it's because of the file indexing performed internally. Copying a file directly to a folder that is part of NextCloud's external storage locations will not show that file if it was not copied using the NextCloud interface (NextCloud's occ files:scan command can re-index such files, but that is an extra step). This might be an error on my part, not entirely sure yet, but I do know that copying a large number of files can be extremely slow. I can't say the same for File Browser, which copies files very quickly. Overall, it's a good solution, except for the minor indexing problem.

Here is my File Browser docker-compose that I used in Portainer, which worked for me:

version: '3.8'

services:
  filebrowser:
    image: filebrowser/filebrowser:latest
    container_name: filebrowser
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    ports:
      - "8083:80"
    volumes:
      - "/:/srv"  # Mounts the root of the host to /srv in the container
      - "./data/filebrowser:/data"
    networks:
      - dh_network
    restart: unless-stopped

networks:
  dh_network:
    driver: bridge

Here is the docker-compose for the NextCloud app:

version: "2"

services:
  nextcloud:
    image: linuxserver/nextcloud:latest
    container_name: nextcloud
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=America/New_York
    volumes:
      - ./data/next_cloud/config:/config
      - ./data/next_cloud/data:/data
      - ./next_cloud_users/daniel_cloud:/daniel_cloud
    ports:
      - 444:443
      - 8082:80
    restart: unless-stopped
    depends_on:
      - nextcloud_db

  nextcloud_db:
    image: linuxserver/mariadb:latest
    container_name: nextcloud_db
    environment:
      - PUID=1000
      - PGID=1000
      - MYSQL_ROOT_PASSWORD=<strong_password>
      - TZ=America/New_York
      - MYSQL_DATABASE=nextcloud_db
      - MYSQL_USER=nextcloud
      - MYSQL_PASSWORD=<strong_password>
    volumes:
      - ./data/next_cloud/db:/config
    restart: unless-stopped


WeTTY

Since I already alluded to terminal access from the iPad via a browser, from Webmin's interface, I have to mention my favorite browser-based app for this. That would be the open-source and free app called WeTTY. I have tried many, some outright paid and others on a subscription. This app wins on simplicity and functionality. It is a Dockerized app that runs in a container, so it is a huge win for my environment and convenience.


Glances

I must also mention an open-source, stats-dedicated app I use, called Glances. It's a great, to-the-point app that lets you keep an eye on system performance at a glance. :) It provides a window into your Docker container stats as well, making it super useful for a homelab setup running Docker.

As I loaded my laptop with so many containerized apps, I realized that there was still one thing missing. I had all my notes connected via Trilium, and NextCloud provided photo sync and a very functional calendar with reminders. It would be cool if I could also connect my hard drives to the server and basically use it as a file server. Being an avid photographer with half a million high-res RAW images collected from my travels, I use Lightroom to organize them all. I thought to myself, it would be great to have all those images and collections accessible from anywhere. I use Lightroom on the phone and also on the iPad, so not having all my images with me while traveling is always a bummer. Yes, you can copy selected sets of images to the cloud or the iPad on the go, but it always feels limited and more of a hassle than it should be. Having my entire life's collection always with me and at my fingertips has always been a dream.


Samba

After some research, I came across Samba, which allowed me to share folders from my server to any device. It's possible to install it in a Docker container, so it was a no-brainer. After a quick docker-compose stack creation and volumes setup in Portainer, I had it working on the server. I created a shared folder and configured my hard drives to permanently mount in that folder and become instantly available. Lightroom does work with shares, so repointing the main catalog was a breeze, and just like that, I had all my images available in Lightroom.

I must say, mounting drives on Linux is a much nicer experience than on Windows. Once you have the correct UUID of the drive and have modified the "/etc/fstab" file, you can count on that drive always being there. I can't say the same for Windows drives. External drives seem to always get a different drive letter. Even going through the Disk Management utility, there is no guarantee that a drive will get the same letter. Many times I opened Lightroom on my desktop just to find out that the catalog could not find the images because the drive letter had changed. Absolutely crazy. No more of this nonsense, and it was a side benefit to my setup I had not thought about before. One thing I wish I could do is reuse my Windows PC Lightroom catalog on my iPad. Unfortunately, that is not possible.
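For context, a permanent mount in "/etc/fstab" looks roughly like this; the UUID and mount point below are placeholders, and "nofail" keeps the server booting even if the drive happens to be unplugged:

```ini
# Illustrative /etc/fstab entry; list your drive's real UUID with: sudo blkid
UUID=1234-ABCD  /mnt/photos  ext4  defaults,nofail  0  2
```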

Here is the docker-compose for the Samba server:

version: '3.7'

services:
  samba:
    image: dperson/samba
    container_name: samba_server
    restart: unless-stopped
    environment:
      - TZ=America/New_York
    volumes:
      - ./data/samba/:/data
      - ./data/docs:/samba/public
    ports:
      - "139:139"
      - "445:445"
    networks:
      - sambanet

networks:
  sambanet:
    driver: bridge

Outgrowing my laptop

It seemed that my laptop was reaching a breaking point with all these Docker images, so I decided to up the game to a dedicated desktop server. I wanted more performance, more space, and more of everything for all my toys. This was especially true when I added the photo drives, where each file I tried to open on the iPad was about 30-100MB or more in size. This is the price you pay for high-res files and the ability to push and pull image boundaries in Lightroom. I shot a few weddings and other professional photo shoots, and preserving as much detail as possible in the images was particularly important.

I also added a media server, Jellyfin, to my collection of Docker images. Serving music and movies with occasional transcoding required not only a fast CPU but also a better GPU.

It was time to repurpose the laptop and get something a bit more performant that would serve all my data in a more satisfying way. I wanted to forget about how long the file takes to open; I just wanted to use the data.

I purchased an i9 DELL server with a pretty nice GPU and 128GB of RAM. I was all in on the homelab server thing, and I convinced my wife that it was for the betterment of the household. She completely understood that my life and well-being were going to improve when I got a better server.

Thankfully, early on, I decided that I would not install an app on the server, with one or two exceptions, unless it was containerized for Docker. That decision really paid off. I had all my docker-compose files that became stacks in Portainer, and I kept all my server setup/configuration notes in Trilium.

When it came to migrating everything, I promptly installed Ubuntu Server and recreated the same folder structure as on the laptop so my Docker volumes would just snap into place. I auto-mounted the same hard drives, installed UFW (Uncomplicated Firewall), set up all the policies and rules from my Trilium notes, and I was almost back. I copied over my SSH key and disabled password access on the new Ubuntu install. All I had to do now was search the Internet for the correct rsync commands to safely copy my data from the laptop to the shiny new server.

One thing I found out right away is that I had to go into the server's BIOS and configure it to restart in case of power failure. This is something that slipped my mind, and frankly, I was not aware it existed until I accidentally unplugged my server and realized it was still dead after I plugged it back in. It felt like a slap in the face; how silly of me to not even think about it beforehand. After using a battery-powered laptop, you forget what that plug is for. Plus, I thought, I write software for a living; I don't administer servers, even though I have built a few desktop PCs in my lifetime. I do remember one feature I loved on my 2008 MacBook Pro: the ability to start the laptop on a timer, a super useful feature I have missed ever since switching to Windows. The same goes for Time Machine backup. I don't know how I live without that as well.

iPad as a server display

Another purpose for my iPad: since the server didn't come with a monitor and there was no monitor in sight, I decided it would be great if I could connect my iPad to the server when I needed to work on it directly. As luck would have it, I had just recently purchased a $10 capture card on Amazon in order to play Steam Deck games on the iPad. The capture card adapter itself weighs nothing. As I mentioned, it's a 12.9" display, and it is just perfect. Now it's also perfect to grab and take to the server when needed. The software that works great with this capture card and the Steam Deck is Genki Studio, and it's free on the Apple App Store. One thing that is awesome about this software is that it lets me adjust the screen resolution on the fly, super seamlessly. I have tried a few apps for this purpose, but the easiest to use, and the one that actually had the features I needed without extra bloat, was this one.

I am happy that I found another purpose for the iPad. I always considered iPads to be just larger versions of the iPhone. Now, with the possibility to access external drives, use it as a monitor, and access more and more real productivity apps, it turns out to be a pretty cool device. I must mention another feature of the iPad that is just not the same on a laptop: the ability to open it and be ready to go without waiting for the thing to wake up from sleep or hibernation. The updates are painless, and it just works. The battery life is excellent too. Now, with full-featured development possible on top of all that, I just love this setup. I never would have thought I'd say that just a few months ago.

Hello World!

Let's Encrypt

What good is it when you have all those apps and files to access, but can only do it at home on the local network? Since I've already added NGINX to Docker and got it running, I thought I would configure it to access all my apps remotely. This is something I've always wanted to do, but was hesitant to actually go ahead with, fearing that malicious actors might discover me and exploit my security setup. This time, my desire to open my files to the outside overcame my security worries, and I got a Let's Encrypt certificate for my server.

This did not go smoothly out of the box. The first issue that came up was the Let's Encrypt certificate creation. Since I am on COX Internet, I ran into an issue with port 80, which is essential for the default HTTP-01 challenge to work, but COX blocks it. I had to do some more research and found good information on a workaround. Here is some info on the HTTP-01 challenge directly from the Let's Encrypt website:


Pros:

- It’s easy to automate without extra knowledge about a domain’s configuration.
- It allows hosting providers to issue certificates for domains CNAMEd to them.
- It works with off-the-shelf web servers.

Cons:

- It doesn’t work if your ISP blocks port 80 (this is rare, but some residential ISPs do this).
- Let’s Encrypt doesn’t let you use this challenge to issue wildcard certificates.
- If you have multiple web servers, you have to make sure the file is available on all of them.

DNS-01 challenge

The workaround is the DNS-01 challenge, which does not require port 80, or any inbound port at all. This worked as expected, and I received my new shiny Let's Encrypt certificate. Since port 443 was not blocked on my connection, all was good now.

I started to reconfigure NGINX to allow traffic into my Docker containers. This went smoothly, with a few exceptions. Some containers did not like to be reached via a Docker service name and insisted on the container's IP address. It took me a bit of time to figure that out. I had to tweak some of my Docker networking, but in the end, it worked well. Another hiccup was the WebSocket configuration for some containers. Some apps required it (like Trilium), and the configuration was sometimes tricky due to different versions and setups.
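For the WebSocket piece, what the apps needed was the standard HTTP/1.1 upgrade headers on the proxied location. A minimal sketch, assuming Trilium is reachable on the Docker network as trilium:8080 (the names here are placeholders for your setup):

```nginx
location / {
    proxy_pass http://trilium:8080;
    # WebSocket support: forward the HTTP/1.1 upgrade handshake
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```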

Now that I had all this working, I added Apache basic auth to ease my paranoia. I also implemented some NGINX throttling for good measure. Even though most apps have the ability to authenticate, and in the case of NextCloud, offer Multi-Factor Authentication, not all do. I thought that adding the extra step would be a good idea, and I can live with it. This didn't quite work out as I expected.

Since one of my objectives was to use my homelab apps from my iPad, I soon realized how much of a pain it would be to deal with that extra Apache auth. The issue became apparent when, after logging into an app like Trilium (remember, I already went through two sets of authentication), the gesture to scroll to the top would prompt the app to log in again. This wasn't the only app facing this issue. The Apache auth proved to be too much for me to handle, but what to replace it with? Not to mention, I didn't feel very comfortable with all the port forwards and extra records in my domain DNS setup.

NGINX did its job and forwarded all requests to my targets. The ISP allowed me to port forward all the required ports, and even the Apache basic auth did what it was supposed to. However, the iPad wouldn't stop me from scrolling past the top of the page, causing pages to refresh. This issue wasn't isolated to the Trilium app; it occurred with many other apps as well. Maybe it had to do with my NGINX setup of the server sections, who knows; it was really getting on my nerves. The potential security vulnerability of the open ports, with authentication as the only thing standing between my data and the rest of the world, was too worrisome for me. I started to look for alternatives. I loved the no-port-forwarding aspect of my local access. I thought about a VPN; would that solve my dilemma?

VPN forever

After some digging, I came to the conclusion that the VPN was probably the best and the safest option. I first looked at self-hosted OpenVPN and WireGuard. Both seemed like pretty good choices. I used both for work, and I had no issues. I started looking at some reviews online and quickly realized that most people mentioned that WireGuard was the faster option and that OpenVPN was a few times slower. That's unfortunate since most also mentioned that the learning curve to set up WireGuard was much steeper.

I kept searching for an even better solution that was secure and performant. I came across Cloudflare, and the reviews were extremely positive, almost convincing me. Then I watched Christian Lempa's video about how Cloudflare funnels all your data through its servers, allowing it to see everything passing through. It felt like one of those countless VPN commercials advising users to use a VPN from a specific company to hide their identity. It's ridiculous, and many non-techy individuals might fall prey to it. It's insane how many online influencers promote this idea without considering their audience.


Final VPN solution: well, I found something that quickly struck me as an excellent idea. Welcome to Tailscale. From a high-level overview, it's just like Cloudflare in that no port forwarding is required: a small app is installed on each device, IPs are automatically synchronized with their server so even a moving device will resolve to its new IP, and, best of all, it uses WireGuard under the hood, so it's fast, minus a short handshake between devices. Let me explain.

How Tailscale works: Tailscale uses the WireGuard protocol to establish a secure network that is easy to manage. It prioritizes simplicity and efficiency. Unlike OpenVPN, Tailscale creates direct, secure connections between your devices. This isn’t just convenient—it’s fast.

Why it’s better than traditional VPNs: Traditional VPNs often route your data through a central server, slowing things down. Tailscale skips that step. It utilizes a central coordination node to assist in setting up connections initially but does not handle your traffic. Therefore, your data flows directly between your devices, resulting in less lag and enhanced security.

Under the hood: At its core, Tailscale is powered by WireGuard, renowned for its high-speed capabilities and modern cryptographic techniques. Not to forget the ease of use.

I signed up for an account and surprisingly received an allowance for 100 connected devices. This is unbelievable. I installed the app on my Ubuntu server, iPad, iPhone, and my travel laptop. I quickly retired the port forwards on my router and the NGINX server routing. I could not be happier with this setup.


Building this homelab has been more than merely a technical project - it's been an adventure from start to finish. I struggled to run Trilium smoothly on an underpowered Raspberry Pi, then fine-tuned Docker containers on a beefy i9 server. Each step taught me some valuable lessons.

What started as a quest for secure remote note access morphed into a full-fledged dive into networking, security, and server management know-how. It's been a rollercoaster with ups and downs - hardware limits, software breakthroughs, Docker networking, you name it. But throughout, I developed a newfound appreciation for the freedom and control of owning my personal homelab setup.

Introducing Tailscale was a total game-changer. It simplified remote access while locking down my data tightly and keeping it blazing fast no matter where I worked. It shows how the right tools can radically shift how we engage with tech, making even the most intricate stuff feel seamless.

So if you're considering your own homelab journey, here's my two cents: just dive in headfirst. Yeah, the learning curve can seem daunting at times. But the payoff of building and mastering your tech environment from the ground up? Absolutely worth the effort, no question.

Are you planning to start your own homelab journey, or do you have insights from your personal tech adventures? Please share your thoughts and questions in the comments below!

Thanks for reading.
