A home lab is something I've been wanting for a very long time. In this post, I'll drag you along for the adventure that has been my build out, some troubles I faced, and how I solved them.
A lot of people can get by starting out with a Raspberry Pi as a home lab. I also started with one, but my needs have grown beyond what it can do. The Pi (especially the Pi 3 Model B+) is still a great little tool, and if you are just getting started I'd recommend one; for the price they can't be beat. This is a bit of a brain dump, so if you want to jump around, here are the sections:
- Intro (you are here)
- The Build
- The Storage
- The Network
- The Software
- Install and Troubleshooting
I spent a lot of time trying to figure out what parts or prebuilt system I should get for this. I wanted solid performance, but like everyone I didn't want to pay a truckload of money to get it. I fell into the rabbit hole of /r/homelab and /r/DataHoarder looking for suggestions.
I found myself over at serverbuilds.net looking through their build lists. That led me to eBay, which is how I came across MET Servers, who specialize in used enterprise hardware. I had found my sweet spot.
The prices on MET simply blew me away. I chose to start my configuration with a Dell R710, chiefly because the RAID controller it ships with (the PERC H700) can handle large drives and supports SATA at 6Gb/s. Here is the full configuration I ended up going with:
- Dell PowerEdge R710 6-Bay LFF 2U Rackmount Server
- Processors: 2x Intel Xeon X5660 2.8GHz 6-Core, 12MB Cache
- Memory: 24GB DDR3 ECC Registered Memory (6 x 4GB)
- RAID Controller: Dell PERC H700 6Gbps SAS/SATA RAID Controller, 512MB Cache (RAID 0/1/5/6/10/50/60)
- Hard Drive Bays: 6x Dell 3.5" Tray with screws
- Daughter Card: Embedded Broadcom NetXtreme II 5709C Gigabit Ethernet NIC
- Management: iDRAC 6 Express Management Module
- Networking: Additional Network Card Not Included
- Power Supplies: 2x Redundant 870W Power Supplies
- Rail Kit: Rail Kit Not Included
- Front Security Bezel: Bezel Included with Key
The configuration started at $50, which is crazy. The CPUs were a $40 upgrade, the RAM cost $54, upping the PSUs to 870 watts was $15, and the drive bays were $10 apiece. The amazing thing about this configuration is that it comes with a 4-port gigabit Ethernet adapter built in.
This cost me around $245 shipped, which is less than I was finding some empty chassis for online. The issue, of course, is that it came with no drives. They did have an option to ship it with drives (4TB @ $80 each), which wasn't a bad price. However, I wanted to try something else I had found online: shucking large external hard drives (which, oddly, cost less than bare internal drives) and using them as internal storage. While buying CPUs and RAM used isn't really a big deal, wear on used hard drives can be, so sadly storage was the one thing I had to buy new.
- Bad idea - $6.99 - 1x Kingston Digital 16GB DataTraveler SE9 G2 USB 3.0 Flash Drive (DTSE9G2/16GB)
- Good idea - $67 - WD Blue 3D NAND 500GB PC SSD, SATA III 6Gb/s, 2.5"/7mm (WDS500G2B0A)
- $8.95 - Protronix SATA Optical Bay 2nd Hard Drive Caddy, Universal for 12.7mm CD/DVD Drive Slot
- $110 each - 6x WD 6TB Elements Desktop Hard Drive, USB 3.0 (WDBWLG0060HBK-NESN)
The seeming outlier here is the USB flash drive, but the cool thing is that it can be used as an OS drive, so I don't have to take up any of the storage bays for the operating system. That was the original plan, anyway. The flash drive couldn't handle the read/write cycles of the main OS and died after about a month. I replaced it with the optical-bay caddy and SSD combo, which has been working flawlessly ever since. I also didn't lose any data, which was a big relief.
The WD Elements drives all arrived earlier than everything else, which let me test the drives in their enclosures to make sure they were all functional. I did a simple file transfer (around 10GB) and a S.M.A.R.T. status check on each of them. Afterwards it was time to coax them out of their shells. I followed a YouTube video on the process by JDM_WAAAT, the same person behind the serverbuilds.net build lists, who sadly doesn't appear to have an account here. The process was actually easier for the 6TB drives I had, and only required removing two screws.
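If you want to replicate the pre-shuck check, a quick test with `smartctl` (from the smartmontools package) looks something like this. This is a sketch, not my exact commands; the device name is a placeholder, and some USB enclosures need `-d sat` added so smartctl can talk through the USB bridge:

```shell
# Check an external drive's health before shucking it.
# /dev/sdX is a placeholder; find the real device with lsblk (Linux)
# or camcontrol devlist (FreeBSD).
smartctl -i /dev/sdX               # confirm the drive is detected, note the model
smartctl -t short /dev/sdX         # start a short (~2 minute) self-test
sleep 150                          # give the self-test time to finish
smartctl -H -l selftest /dev/sdX   # overall health plus the self-test log
```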
The drive inside is not the best, but it's fine for my use. Amusingly, the same drive costs $50 more if you buy it without the enclosure, which still makes no sense to me.
- (already owned) Netgear R7500v2
- $40 - Dell PowerConnect 2748 48-port gigabit Ethernet switch
- $20 - Cable Matters 5-Pack Snagless Cat 6a (SSTP/SFTP) Shielded Ethernet Cable in Blue, 10 Feet
Given that I was planning to create a Link Aggregation Group (LAG) with the four gigabit ports that came with this box, I knew I was going to need some additional networking hardware. My router is fine, but it only has four ports total (most of which are already in use) and doesn't support link aggregation.
I went back to eBay and found a preowned Dell PowerConnect 2748 for around $40 shipped, which is more than enough for what I need now and in the future. Even better, the switch is managed, so I can set up my link aggregations, and VLANs if I so choose.
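The FreeNAS side of the LAG can be sketched from the shell, though the web UI accomplishes the same thing. The interface names below are assumptions (the R710's Broadcom NetXtreme II ports typically show up as bce0-bce3 on FreeBSD, but yours may differ):

```shell
# Bundle four gigabit ports into a single LACP lagg interface (FreeBSD/FreeNAS).
ifconfig lagg0 create
ifconfig lagg0 up laggproto lacp \
    laggport bce0 laggport bce1 laggport bce2 laggport bce3
ifconfig lagg0 inet 192.168.1.10/24   # example address; match your own subnet
# The matching switch ports must also be configured as an LACP group,
# or the links will not aggregate.
```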
For cables, I picked up some additional Cat 6a cables, which are better than Cat 5e in every way and cost only slightly more. They also support 10GbE, which means that if 10GbE networking is ever affordable, I can upgrade without having to buy new cables.
- FreeNAS 11
- Nextcloud
- Plex Media Server
- Docker
I originally was going to use unRAID for this setup, as its ease of use was attractive to me, as was its ability to add a single disk to an existing drive array. However, further research showed that the trade-off for that flexibility is reduced write speed as you add more drives. That's a problem, because one primary goal I have for this system is backups from my other devices, as well as a replacement for Dropbox so I can stop shipping them my money. Both involve frequent writes, which ultimately made me lean toward FreeNAS, which has (supposedly) better write speeds and most of the same feature set. It also happens to be completely free (unlike unRAID) and is open source.
Docker is also supported out of the box on FreeNAS, which is great.
- $137 - CyberPower CP850PFCLCD PFC Sinewave UPS System, 850VA/510W
- $50 - Universal 12U portable floorstanding 19" server rack
- $40 - Dell R610/R710 static server rail kit (2-post/4-post)
- $7.50 - Pasow 50pcs reusable fastening cable ties
- $70 - SiliconDust HDHomeRun Connect Duo 2-Tuner
The first four items on this list aren't really "extras," but they don't fit anywhere else. I'd argue a UPS is required to protect a system like this. A rack, though small, does wonders for space management and organization. The rails go with the server, but also with the rack, since I didn't buy the kit from MET.
The cable ties, which are reusable, are a purely quality-of-life item, but they make dealing with networking and power cables much easier.
The TV tuner is definitely an extra, and I wouldn't even want it if I weren't going to use it with Plex's excellent live TV & DVR system.
Installing FreeNAS 11 was straightforward, but not pain-free. If you go the route I initially tried, running the install off of a flash drive, you technically need two flash drives: one to put the installer image on, and one to install the OS to. Fortunately, I had another flash drive that worked for this purpose. A pain point of this install was that the system, being the older hardware that it is, does not have USB 3.0. As such, installing onto the flash drive over USB 2.0 was a very time-consuming process.

Installing to the SSD later was faster, mostly because of its SATA interface, but also because I discovered that iDRAC, the remote management utility built into this server, lets you mount an ISO over the network. That meant I didn't have to use USB at all to install the system, which was great.
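For reference, getting the installer image onto the first flash drive was just a `dd` from another machine. A hedged sketch (the device name and image filename are placeholders; `dd` to the wrong device will destroy data, so double-check it first):

```shell
# Write the FreeNAS installer image to a USB stick.
# /dev/da1 is a placeholder for the flash drive (verify with dmesg).
dd if=FreeNAS-11.0-RELEASE.iso of=/dev/da1 bs=1m   # bs=1M with GNU dd on Linux
sync   # flush all writes before pulling the drive
```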
The hard drives were easy enough to shuck, as I mentioned above. However, I did break a drive caddy: I accidentally installed the drives too far forward in the caddies, so they weren't making contact with the backplane, and while fixing that, one of the screws seized and I stripped it (which was totally my fault). That mistake cost me another $7 on eBay for a replacement caddy.
Once the drives were all properly installed in the drive bays, I did have to configure each one as its own single-drive RAID 0 array to get the controller to pass them through to FreeNAS, something FreeNAS really doesn't like because it hides some drive monitoring (like S.M.A.R.T. data) from the OS. It would be better to use a RAID controller that supports direct passthrough, but this one doesn't, and at this point I'm tired of spending money on this project.
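I did the per-drive RAID 0 setup in the PERC's boot-time configuration utility (Ctrl+R at POST), but on LSI-based controllers like the H700 the same thing can usually be done from a running OS with MegaCli. I didn't run this myself, so treat it as a hedged sketch:

```shell
# Create a single-disk RAID 0 virtual disk for every unconfigured drive,
# so the controller exposes each drive individually to the OS.
MegaCli -CfgEachDskRaid0 WB RA Direct -a0
# Verify the resulting virtual disks:
MegaCli -LDInfo -Lall -a0
```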
The ZFS pool configured easily. I'm only using one parity disk (RAIDZ1), something that is frowned upon at this pool size (36TB raw), but losing 12TB of space to a second parity disk isn't something I'm currently willing to do.
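The arithmetic behind that decision is simple; ignoring ZFS overhead (which eats a bit more in practice), it comes down to how many of the 6TB disks go to parity:

```shell
# Usable capacity for six 6TB drives: RAIDZ1 (one parity disk)
# vs RAIDZ2 (two parity disks). Rough numbers, ignoring ZFS overhead.
disks=6; size_tb=6
raidz1_usable=$(( (disks - 1) * size_tb ))
raidz2_usable=$(( (disks - 2) * size_tb ))
echo "RAIDZ1: ${raidz1_usable}TB usable"   # 30TB
echo "RAIDZ2: ${raidz2_usable}TB usable"   # 24TB
# The pool itself was created in the web UI; from the shell it would
# amount to something like (device names assumed):
# zpool create tank raidz1 da0 da1 da2 da3 da4 da5
```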
The FreeNAS interface is fairly easy to use, and after stumbling through creating the data pool above, the next challenge was the software I actually wanted: Plex and Nextcloud.
Nextcloud was easy: FreeNAS offers Nextcloud as a "plugin," which uses a concept called a jail (similar to a lightweight virtual machine) to run the software you want in isolation from the rest of the system. The only configuration I had to do here was setting up a username and password during Nextcloud's web-based install. Super easy.
Plex was a different matter. I wanted to be able to access Plex's files (things like pictures and videos) directly off of a network drive, which meant I had to set up a Windows-compatible Samba share in FreeNAS. FreeNAS actually makes this trivial and has an entire wizard to walk you through it. I set up a user, named a share, and could connect to it from my network. Awesome. At this point I thought I was set, as Plex also has an official plugin provided by FreeNAS, just like Nextcloud. No matter what I did, though, I could not get the web interface to load to get past Plex's initial configuration. This was very frustrating and almost made me give up.
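As an aside, under the hood that share wizard is essentially writing a Samba stanza like the one below. The share name, path, and user are assumptions matching my layout, not the wizard's exact output:

```ini
[media]
    path = /mnt/tank/media
    valid users = mediauser
    read only = no
    browseable = yes
```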
I finally found a guide that went through setting up Plex in your own jail, essentially recreating the plugin from scratch. After a little trial and error, this method worked for me, which meant both of my services were up and running as I had hoped.
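I won't reproduce the whole guide here, but the from-scratch jail boils down to something like the following iocage commands (FreeNAS 11.1+; the release number, address, and paths are assumptions to adjust for your own system):

```shell
# Create a jail, mount the media dataset into it read-only,
# then install and enable Plex inside it. Names and paths are examples.
iocage create -n plexjail -r 11.1-RELEASE ip4_addr="vnet0|192.168.1.50/24" vnet=on
iocage fstab -a plexjail /mnt/tank/media /media nullfs ro 0 0
iocage exec plexjail pkg install -y plexmediaserver
iocage exec plexjail sysrc plexmediaserver_enable=YES
iocage exec plexjail service plexmediaserver start
```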
This was a long process, mostly thanks to USB 2.0 and a failing flash drive, that involved a lot of headaches and trial and error. So much so that I know I've missed details when creating this build log. If you have any questions about any step, or anything I did that seems to skip forward in time, please drop a comment and I'll try to update the post to address it. I also ended up not setting up a Docker environment on this system, and am instead using jails for that purpose; given the way FreeNAS allocates memory and CPU resources, jails are much simpler. Got a cool idea for something I should do with this? Also please drop a comment.
Thanks for reading, and coming along with me on this adventure.