
Colo Migration Write-up

Andrew DeChristopher ・6 min read

Hello /r/homelab, DEV.to, and whoever in KIWI is reading this. A little intro: my name is Andrew. I run a large gaming community called KIWI that operates quite a few of the most popular Counter-Strike servers on the east coast of the United States. I'm also an avid home-labber and a network engineer by trade, pursuing a bachelor's degree in Computer Science (two years left). This will be a highly technical write-up that goes past the normal bounds of what's considered "KIWI," with lots of information about my personal projects and home lab. If that's not interesting to you, then escape now before the pictures rope you in!

To begin, a short history of my endeavors in the whole lab scene:

I've always been the kind of person to self-host as much as I can. This began with hosting my own web-accessible file storage and Minecraft servers in early high school. Soon it got out of hand with plenty of Craigslist purchases. My lab had lots of ancient HP and Dell servers that really hated working properly.

Old Lab (circa early-mid 2017)

And from the front

Over the years, with dives into DigitalOcean, Vultr, AWS, and the like, I got sick of paying monthly for shite virtual hardware with metered bandwidth. The time had come: I was ready to start a real home lab. The pictures above show my earliest lab in a very volatile state, with much failure and many learning experiences. Taking all of that hard-won wisdom into account, and after much research, I began investing in better hardware, starting with a Dell R410 with 64GB of RAM and dual Xeon E2650s for just under $100 on eBay (steeeeeal). This server runs the majority of the virtualization workloads across my home lab. More on the individual services later in this post.

With the compute resources flourishing, I realized I needed better network infrastructure. In the pictures above there's a VERY old Dell PowerConnect 6024 gigabit switch. It handled my needs just fine, but the management port was broken beyond repair, so I couldn't configure it at all. Flat networks are boring, so I invested in a fairly new Cisco Catalyst 3560G 24-port with PoE. Neat!

New equipment coming along nicely

But that beige rack looks so tacky...

Yes. Yes it did. That's why I hunted for weeks to get some of that fully enclosed short-rack goodness for my lab. It's an off-brand StarTech 24U rack (I think...) and it's perfect for what I need. Locking doors, removable side panels, and, best of all, square holes! Tossing all of my equipment into this thing and getting it perfectly cable-managed was a freaking blast. Side note: I'm freakishly in love with perfect cable management. I actually enjoy it. Hate me. Anyway, behold the beauty!

The full frontal

And out back

Okay cool but what about the colo?

Oh yeah, the colo. That is what this post is about, huh? Here goes.

So, a bit of an explanation: I don't own the colo; my company owns and pays for it. We each have our own personal equipment in the rack, and our employer has the top half reserved for the cloud services division it plans to get going in the very near future. The colo features:

  • 42U full-depth APC rack
  • 200Mb/s 95th percentile commit on a 1Gb/s port
  • 30A power drop at 208V
  • 24/7 key card access
  • Round-the-clock audio and video security monitoring
  • Lots of tools and peripherals in the DC itself for our free use
  • Located in an inconspicuous refurbished industrial building
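A quick aside on that bandwidth line, since 95th percentile billing confuses a lot of first-time colo customers: the provider samples port throughput (typically every 5 minutes), discards the top 5% of samples for the billing period, and bills on the highest remaining sample, never below the committed rate. A rough sketch of the math, with made-up sample data:

```python
# Sketch of 95th-percentile ("burstable") bandwidth billing.
# Inputs are 5-minute average throughput samples in Mb/s for one
# billing period; the sample data below is made up for illustration.

def billable_rate(samples_mbps, commit_mbps=200.0):
    """Drop the top 5% of samples, bill on the highest remaining
    sample, but never below the committed rate."""
    ordered = sorted(samples_mbps)
    idx = max(int(len(ordered) * 0.95) - 1, 0)  # 95th-percentile position
    return max(ordered[idx], commit_mbps)

# One month of 5-minute samples: mostly idle, with evening bursts.
samples = [50.0] * 8000 + [300.0] * 500 + [950.0] * 140  # 8640 ≈ 30 days

print(billable_rate(samples))  # 300.0 -> billed at 300 Mb/s this month
```

The upshot: you can burst to line rate for up to ~36 hours a month (5% of the time) without it affecting the bill.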

Some of the hardware currently in the rack was migrated here from our old colo facility in New Jersey. We recently took a road trip down there to grab all of it and bid them farewell. The distance was too great, and with larger customers with higher space and bandwidth requirements moving in, prices were sure to rise when we inevitably renewed our contract.

I personally own the two Cisco UCS C240 M3s. Most of the other equipment belongs to a friend of mine; he does quite a bit that I won't get into here, other than to say he runs Project1999, the largest (and only officially sanctioned) unofficial EverQuest classic server.

Here are some pictures of the final product:

Drives, drives, drives...

Brocade ICX 6610-24 L2/L3 Core Switch

The belly of the beast with two TrippLite PDUs to power it all

Yeah, the bottom server is on a shelf and is totally unplugged. What of it?

Now let's dive deep into what I do with my lab and the lengths I go to with the other projects that run on it.

I won't do a full hardware breakdown unless that's widely requested. Let's start listing some of the things I run in my lab:

  • Plex (should I even mention this? it's a staple of most labs at this point)
  • This blog (and a few other supporting sites for other projects)
  • Lots of file storage (and I mean lots as in over 20TB)
  • Client VPN for my devices when I'm out in the wild
  • Site-to-site VPNs with a few of my friends via pfSense for distributed labbing (thanks Muffin! I owe you a VM or two, PM me)
  • A Pi-hole box with 16GB of RAM that I've configured all my family, friends, and customers to use (over 200 devices and ~1000 lookups per minute peak)
  • Quite a few CS:GO servers for my leisure and development
  • Quite a few more CS:GO servers for my gaming community
  • A private GitLab server for a few friends and me to keep our projects under wraps
  • A Windows domain (everybody's gotta learn)
  • Numerous tiny VMs for development
  • Super swanky Grafana dashboards for everythinggggg
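Since the Pi-hole box gets a bullet of its own: its web interface exposes a small stats API that makes figures like that ~1000 lookups per minute easy to check. A minimal Python sketch, assuming the v5-era /admin/api.php endpoint and its field names (verify both, plus the host name, against your own install):

```python
# Sketch: reading query volume off Pi-hole's summary API. The endpoint
# path and field names below are from the v5-era web interface and may
# differ on your install.
import json
from urllib.request import urlopen

PIHOLE_URL = "http://pi.hole/admin/api.php?summaryRaw"  # adjust for your box

def fetch_summary(url: str = PIHOLE_URL) -> dict:
    """Pull the raw summary JSON (dns_queries_today, ads_blocked_today, ...)."""
    with urlopen(url) as resp:
        return json.load(resp)

def queries_per_minute(summary: dict) -> float:
    """Average lookups per minute, given a full day's query count."""
    return summary["dns_queries_today"] / (24 * 60)

# Example: queries_per_minute(fetch_summary()) on a box handling ~1.44M
# queries per day works out to an average of ~1000 lookups per minute.
```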

Future plans:

  • Play with Docker more (I'm an avid software engineer and have used it heavily in the past, but I'd like to get deeper into container orchestration and microservices)
  • I'm running 10Gig between servers right now, but I'd love to play around with higher bandwidth like 40Gig or, God forbid, 100Gig *wallet screams in terror*
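For a sense of what those link speeds actually buy, here's the back-of-the-envelope math for moving a 20TB file store at ideal wire rate (ignoring protocol overhead and disk limits, which will eat into these numbers in practice):

```python
# Ideal (wire-rate) transfer time for a ~20 TB store at various link
# speeds. Real throughput loses a chunk to protocol overhead and disk I/O.

def transfer_hours(terabytes: float, gigabits_per_sec: float) -> float:
    bits = terabytes * 1e12 * 8              # decimal TB -> bits
    return bits / (gigabits_per_sec * 1e9) / 3600

for rate in (10, 40, 100):
    print(f"{rate:>3} GbE: {transfer_hours(20, rate):5.1f} h")
# 10 GbE comes out to roughly 4.4 hours; 100 GbE to well under one.
```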

So, as this blog post wraps up, I urge you to ask me questions about anything. This project has been full of mistakes and learning from them, so let me share my experiences with you. Comment below or PM me on Reddit at /u/dropslays for any of that jazz.

Thanks for stopping by!

-Andrew DeChristopher

Discussion

Dude, I was an EQ fiend, and mentioning p99 got me super interested in what you're doing. Do you have a guide for getting started? I'd be starting from scratch, as I'm only just getting started with setting up droplets on DigitalOcean.

 

P99 is actually run out of this very same rack. I could definitely put a guide together if you'd like. What aspects of everything would you like covered?

 

Thanks!

Well, for a while now I've thought it would be cool to host my own test projects myself, rather than use DigitalOcean. I don't even know what I need to know to get started. I'm not even sure what questions I should be asking.

So the following would be helpful.

1) An understanding of the big picture

  • minimum cost for hardware?
  • do I need a special account with an ISP?
  • security concerns?

2) An overview of good resources. I wouldn't expect you to rewrite info that is widely available elsewhere.

3) What end result should I be shooting for to get familiar with the basics? For example: at the end of this guide, you'll have something set up at home, hosting your own projects that anyone in the world can visit. You'll know how to keep things reasonably secure, etc.

After going through this initial work, I would hope to be more familiar with the subject, and I'll have better questions and ideas for more advanced setups.

I hope that answers the question and isn't too vague?

 

You own community CS:GO servers! The pings must be dayum fine.

 

Players all along the east coast get sub-20ms ping. It's good times!

 

Aww, east-coast! You are several time-zones away. :(

Unfortunately yes, we've got plans to expand out west in the coming months though!

 

What are you doing with the EdgeRouter?

 

We're using it strictly for VPN access to the management network of the rack.

 

Nicely detailed write-up of your setup with all that hardware and its associated software.

PS: Your lab contains more components than most production environments ;-)