How I Built and Hosted My Own Server Using a Raspberry Pi: Lessons Learned
As developers, we often rely on cloud platforms like Heroku, AWS, or DigitalOcean to host our applications. But what if you could host your own projects, save money, and deepen your understanding of server management in the process? That’s exactly what I decided to do with a Raspberry Pi. In this post, I’ll share my journey, the challenges I faced, and what I learned from setting up and running my own server at home.
Why Build Your Own Server?
When I started this project, I had three main goals:
- Cost Savings: Stop paying for hosting services like Heroku.
- Learning: Dive deeper into deployment, infrastructure, and DevOps.
- Showcasing My Work: Make my portfolio and side projects accessible globally, running entirely on my own hardware.
The Setup
Hardware and Tools
- Raspberry Pi: Affordable, energy-efficient, and surprisingly powerful.
- Cloudflare Tunnels: To expose my local server to the internet securely and bypass shared-network restrictions.
- Ruby on Rails: My portfolio and side project, CelebrantGPT, are built on Rails.
- Redis and Sidekiq: For background job processing.
- Systemd: To manage services and ensure everything starts automatically on reboot.
Setting Up My Raspberry Pi
The first step was to set up my Raspberry Pi as a production server. Here's what I did:
1. Installing Necessary Software
I installed Ruby, Rails, PostgreSQL, Redis, and other tools needed for my Rails apps. I also installed a variety of additional tools and technologies to future-proof my server for upcoming projects, such as those that might use Python and other languages and frameworks.
# Update the package list
sudo apt update
# Install Ruby, Node.js (for the Rails asset pipeline), and npm
sudo apt install ruby-full nodejs npm
# Yarn isn't in the default Raspberry Pi OS repositories, so install it via npm
sudo npm install -g yarn
# Install PostgreSQL
sudo apt install postgresql postgresql-contrib
# Install Redis
sudo apt install redis
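With everything installed, it's worth making sure PostgreSQL and Redis come back after a reboot. A quick sketch (the service names here are the Debian/Raspberry Pi OS defaults and may differ on other distros):

```shell
# Enable and start PostgreSQL and Redis so they survive reboots
sudo systemctl enable --now postgresql redis-server
```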
2. Deploying My Rails Apps
I cloned my repositories, set up databases, and configured Rails for production. For my project, CelebrantGPT, I use Redis and Sidekiq to handle API requests as background jobs. If your project relies on similar services or technologies, you’ll need to install and configure them accordingly to ensure everything runs smoothly.
# Clone the repository
git clone https://github.com/MM-Japan/celebrant_speech_generator.git
cd celebrant_speech_generator
# Install gems and dependencies
bundle install
# Set up the database
RAILS_ENV=production rails db:create db:migrate
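Rails also refuses to boot in production without a secret key base. A minimal sketch, assuming an environment-variable setup (a checked-in config/master.key works just as well):

```shell
# Generate a random secret key base (128 hex characters) and export it
# so Rails can boot in production. This assumes an env-var based setup
# rather than config/master.key.
export SECRET_KEY_BASE=$(openssl rand -hex 64)
```

You'd typically also precompile assets at this point with `RAILS_ENV=production bundle exec rails assets:precompile`.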
3. Configuring Services with systemd
To ensure my apps, Redis, and Sidekiq started on boot, I created systemd service files. This setup ensures that the Pi only needs to be powered on and connected to Wi-Fi to function as a server.
Here’s an example for a Rails app:
# /etc/systemd/system/celebrant.service
[Unit]
Description=Celebrant Speech Generator
After=network.target
[Service]
User=pi
WorkingDirectory=/var/www/celebrant_speech_generator
ExecStart=/usr/bin/bash -lc 'bundle exec rails server -e production'
Restart=always
Environment="RAILS_ENV=production"
[Install]
WantedBy=multi-user.target
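Since Sidekiq also needs to survive reboots, I'd pair the unit above with a companion service. A sketch assuming the same user and working directory (the unit name and paths are illustrative):

```ini
# /etc/systemd/system/celebrant-sidekiq.service
[Unit]
Description=Celebrant Sidekiq Worker
After=network.target redis.service

[Service]
User=pi
WorkingDirectory=/var/www/celebrant_speech_generator
ExecStart=/usr/bin/bash -lc 'bundle exec sidekiq -e production'
Restart=always
Environment="RAILS_ENV=production"

[Install]
WantedBy=multi-user.target
```

After adding a unit file, reload systemd and enable it with `sudo systemctl daemon-reload` followed by `sudo systemctl enable --now celebrant-sidekiq.service`.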
Exposing My Server to the Internet
Since I live in a shared house and don’t have direct access to the router, traditional port forwarding wasn’t an option. This is where Cloudflare Tunnels came in. By routing traffic to my domains through the tunnel, I can expose the Pi's localhost to the web, making it globally accessible.
Setting Up Cloudflare Tunnels
I installed cloudflared and created a tunnel to expose my server.
# Install cloudflared (this grabs the arm64 build; 32-bit Raspberry Pi OS needs the armhf .deb instead)
wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-arm64.deb
sudo dpkg -i cloudflared-linux-arm64.deb
# Authenticate with Cloudflare
cloudflared tunnel login
# Create a new tunnel
cloudflared tunnel create portfolio-tunnel
Here’s a snippet of my config.yml file:
# ~/.cloudflared/config.yml
tunnel: portfolio-tunnel
credentials-file: /home/pi/.cloudflared/portfolio-tunnel.json
ingress:
  - hostname: www.maximmccain.com
    service: http://localhost:3000
  - hostname: celebrant.maximmccain.com
    service: http://localhost:3001
  - service: http_status:404
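For the hostnames in the config to actually resolve to the tunnel, the DNS records have to point at it, and the tunnel itself has to run at boot. A sketch using cloudflared's built-in commands (hostnames as above):

```shell
# Point each hostname's DNS record at the tunnel
cloudflared tunnel route dns portfolio-tunnel www.maximmccain.com
cloudflared tunnel route dns portfolio-tunnel celebrant.maximmccain.com
# Install cloudflared as a systemd service so the tunnel starts on boot
sudo cloudflared service install
```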
Challenges and How I Solved Them
- Debugging in Production
Errors like "missing database user" and routing issues appeared only in production. To track down the correct production configuration, I used journalctl to read the systemd logs and debug errors. This was invaluable for troubleshooting services.
# View logs for a specific service
sudo journalctl -u celebrant.service
# Follow the logs live while reproducing an error
sudo journalctl -u celebrant.service -f
- API Key Management
Keeping API keys and credentials secure in production is vital. If they aren't encrypted correctly, or if they're accidentally committed to GitHub, you expose them to potential abuse. To avoid this, I used Rails credentials and .env files:
# Add to credentials.yml.enc (edit it with: bin/rails credentials:edit)
production:
  openai_api_key: your_openai_api_key
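Because the key sits under a production: block, reading it back in app code looks roughly like this (a sketch; with environment-specific credentials files the nesting would differ):

```ruby
# Read the nested key at runtime (assumes the credentials layout above)
Rails.application.credentials.dig(:production, :openai_api_key)
```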
Lessons Learned
- Full-Stack Understanding: I learned about DNS, networking, and server management beyond coding.
- Automation Is Key: Setting up systemd services made deployment and maintenance far easier.
- The Value of Debugging Tools: Logs and tools like curl were critical for diagnosing issues.
Final Thoughts
Hosting your own server might seem intimidating, but it’s incredibly rewarding. Not only does it save money, but it also gives you a deeper understanding of how applications run in production.
If you’ve ever hosted your own apps or are considering it, I’d love to hear your thoughts or experiences. Let’s connect and share our learning journeys!