Mary, a software developer in training and the owner of a small online retail business, found herself in a bind. A viral Instagram post about her products caused a sudden surge in orders, and her manual order tracking system couldn’t keep up.
Fortunately, she had been building a web app to manage her business, featuring a React frontend and a FastAPI backend. But there was one problem: she did not know how to deploy it. She needed a scalable, robust system, complete with monitoring, to handle future traffic spikes and prevent downtime.
That’s where I came in. Together, we embarked on a journey to transform her project into a fully deployed application with a stack that included Docker Compose, Traefik, Prometheus, Grafana, and Loki. In this article, I’ll walk you through the process we followed to:
- Containerize and orchestrate a full-stack app.
- Configure a reverse proxy for secure routing.
- Set up real-time monitoring for metrics and logs.
- Deploy the stack to a cloud platform with a custom domain and HTTPS.
Let’s dive in.
Overview
To accomplish this, the following tools and services are employed:
Application Stack Services:
- React Frontend: A dynamic and responsive UI powered by Chakra UI.
- FastAPI Backend: Provides REST APIs and Swagger documentation, and uses Poetry as its package manager.
- PostgreSQL: A robust database for persistent storage.
- Traefik: A reverse proxy for routing traffic seamlessly to the appropriate services.
Monitoring Stack Services:
- Prometheus: Collects and stores real-time metrics and provides querying capabilities.
- Grafana: Visualizes performance and logs using data from Prometheus and Loki.
- Loki & Promtail: Promtail collects logs, and Loki stores them for querying and visualization.
- cAdvisor: Monitors container resource usage and forwards metrics to Prometheus.
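To make the Prometheus piece concrete, here is a minimal sketch of a `prometheus.yml` scrape configuration covering cAdvisor and Prometheus itself. The job names, target hostnames, and scrape interval are my assumptions, not the exact values used in this setup:

```yaml
global:
  scrape_interval: 15s  # how often Prometheus pulls metrics from each target

scrape_configs:
  # Prometheus scraping its own metrics endpoint
  - job_name: prometheus
    static_configs:
      - targets: ["localhost:9090"]

  # cAdvisor exposes per-container resource metrics on port 8080;
  # the hostname resolves via the shared Docker network
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8080"]
```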
The Process
The application code to be deployed can be found here.
First, I retrieved the application:
git clone https://github.com/The-DevOps-Dojo/cv-challenge01
The repository is organized as follows:
Frontend: Contains the ReactJS application.
Backend: Contains the FastAPI application and PostgreSQL database integration.
While exploring the codebase, I discovered a few uncommon aspects. First, I was unfamiliar with Poetry as a package manager for Python, so I researched it and how to deploy Poetry applications. I tested the steps locally and made sure everything was working before moving on to the next step: containerization.
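For context, a Poetry project is defined by a `pyproject.toml` at the project root; a minimal sketch looks like the following (the package names and version pins are illustrative, not the repository's actual file):

```toml
[tool.poetry]
name = "backend"
version = "0.1.0"
description = "FastAPI backend"
authors = ["Mary <mary@example.com>"]

[tool.poetry.dependencies]
python = "^3.10"
fastapi = "^0.100"
uvicorn = "^0.23"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
```

With a file like this in place, `poetry install` resolves and installs the dependencies, and `poetry run uvicorn app.main:app --reload` runs the app locally.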
🖥️ Containerization
I wrote Dockerfiles for both the frontend and backend code.
Frontend:
FROM node:16-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm install
COPY . .
EXPOSE 3000
# Vite serves on port 5173 by default; bind explicitly to 3000 to match EXPOSE
CMD ["npm", "run", "dev", "--", "--host", "0.0.0.0", "--port", "3000"]
This Dockerfile is responsible for building the image for the frontend application.
Backend:
# Build stage
FROM python:3.10-slim AS builder
RUN pip install poetry
WORKDIR /app
COPY poetry.lock pyproject.toml /app/
RUN poetry config virtualenvs.create false \
&& poetry install --no-interaction --no-ansi --no-root --no-dev
# Final stage
FROM python:3.10-slim
WORKDIR /app
RUN apt-get update \
&& apt-get install -y libpq-dev gcc \
&& rm -rf /var/lib/apt/lists/*
COPY --from=builder /usr/local/lib/python3.10/site-packages/ /usr/local/lib/python3.10/site-packages/
COPY --from=builder /usr/local/bin/ /usr/local/bin/
COPY . /app
RUN adduser --disabled-password --gecos "" appuser
RUN chown -R appuser:appuser /app
USER appuser
ENV PYTHONPATH=/app
ENV PORT=8000
EXPOSE 8000
CMD ["sh", "-c", "bash ./prestart.sh && uvicorn app.main:app --host 0.0.0.0 --port $PORT"]
This handles building the image for the FastAPI backend. It looks more complicated because I used a multi-stage build to reduce the image size, given the many layers involved in building an image for an app that uses Poetry for package management. If you are not familiar with multi-stage builds, you can read more here.
The images for the other services I will need for this project will be retrieved from Docker Hub.
🛳️ Docker Compose
Docker Compose is an orchestration feature of Docker that lets you manage several containers running on one system. I will be using Docker Compose to manage my services. To accomplish this, I set up a folder structure and created several YAML files for configuration:
- docker-compose.yml: main configuration file
- traefik.yml: Traefik-specific configurations
- monitoring/docker-compose.yml: configuration file for the monitoring stack
- prometheus.yml: Prometheus-specific configs
- promtail-config.yml: Promtail-specific configs
- loki-config.yml: Loki-specific configs
- .env: environment variables for the configurations
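One plausible layout for these files is sketched below; the exact placement of the monitoring configs inside `monitoring/` is my assumption (the `acme.json` file comes up later, when TLS is set up):

```
.
├── docker-compose.yml
├── traefik.yml
├── .env
├── acme.json
└── monitoring/
    ├── docker-compose.yml
    ├── prometheus.yml
    ├── promtail-config.yml
    └── loki-config.yml
```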
I set up the configuration for all of the services mentioned earlier. Here is a sample of my frontend configuration:
frontend:
  build:
    context: ./frontend
    dockerfile: Dockerfile
  labels:
    - "traefik.enable=true"
    - "traefik.http.routers.frontend.rule=Host(`${DOMAIN}`)"
    - "traefik.http.routers.frontend.priority=1"
    - "traefik.http.routers.frontend.entrypoints=websecure"
    - "traefik.http.routers.frontend.tls=true"
    - "traefik.http.routers.frontend.tls.certresolver=myresolver"
    - "traefik.http.services.frontend.loadbalancer.server.port=3000"
  environment:
    - VITE_API_URL=https://${DOMAIN}/api
  networks:
    - app-network
  depends_on:
    - backend
Additionally, I ensured the complete setup can be deployed by running docker compose up -d from the root directory. I did this by leveraging the extends feature and placing all services on the same bridge network for easy service discovery and communication.
services:
  prometheus:
    extends:
      file: ./monitoring/docker-compose.yml
      service: prometheus
After some back and forth between the documentation for Traefik, Loki, and Promtail, trying to figure out the routing and service discovery configurations, I arrived at the correct settings. After running docker compose, all my services were up and running properly.
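For illustration, a minimal `promtail-config.yml` that ships container logs to Loki might look like this. The Loki hostname assumes the shared Docker network described above, and the log path assumes Docker's default json-file logging driver; treat it as a sketch rather than the exact config used here:

```yaml
server:
  http_listen_port: 9080

positions:
  # Promtail records how far it has read each file here
  filename: /tmp/positions.yaml

clients:
  # Push logs to the Loki service over the shared network
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: containers
    static_configs:
      - targets: [localhost]
        labels:
          job: docker
          # Docker writes each container's stdout/stderr as JSON files here
          __path__: /var/lib/docker/containers/*/*-json.log
```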
💭 Into the Cloud
Up to this point, I had been working on my local system. With all my services running locally, I was able to estimate the compute power needed to run Docker and the services, so I decided to move the application to a cloud server and complete the setup there.
First, I provisioned a virtual machine on Microsoft Azure, and then I set up a simple deployment pipeline using GitHub Actions that would SSH into my VM, clone my repository (containing all my working configs), and deploy the application using a docker compose command.
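A minimal workflow along these lines might look like the following sketch. The `appleboy/ssh-action` step, the secret names, and the clone path are illustrative choices, not the exact pipeline we used:

```yaml
name: deploy
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Deploy over SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.VM_HOST }}
          username: ${{ secrets.VM_USER }}
          key: ${{ secrets.SSH_PRIVATE_KEY }}
          script: |
            # Clone on first run, pull on subsequent runs
            git clone https://github.com/The-DevOps-Dojo/cv-challenge01 app || (cd app && git pull)
            cd app
            docker compose up -d --build
```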
On successful run of the pipeline, I had all my services up and running on my cloud server.
NB: If you opt to do this manually, follow these steps:
- SSH into the VM
- Install Docker
- Clone the repository
- cd into the project folder
- Run touch acme.json && chmod 600 acme.json. This creates the file that Traefik and Let's Encrypt will use to store TLS certificate details, which is crucial for accessing the application over HTTPS.
- Run docker compose up -d --build
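For reference, the certificate resolver that the frontend labels point at (`myresolver`) is declared in Traefik's static configuration; a sketch might look like the following, where the email address is a placeholder and the entrypoint names mirror the labels shown earlier:

```yaml
entryPoints:
  web:
    address: ":80"
  websecure:
    address: ":443"

certificatesResolvers:
  myresolver:
    acme:
      email: admin@example.com     # placeholder; Let's Encrypt expiry notices go here
      storage: /acme.json          # the file created with touch/chmod above
      httpChallenge:
        entryPoint: web            # Let's Encrypt validates over plain HTTP
```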
Next, I accessed my DNS provider's website and created a record mapping my domain name to my VM's IP address. This way, all requests to my domain name are forwarded to my server VM; on hitting the server on port 80 or 443, a request is picked up by Traefik and routed to one of the services running on the VM, depending on the Traefik configuration. At this point, the application is up and running and accessible via HTTPS over the internet.
👁️ Monitoring
Now that we have our application up and running, we should set up our monitoring stack.
First, access the Prometheus and Grafana UIs using their specified paths, and explore Prometheus to confirm that service discovery is working.
Next, in the Grafana UI, I added Loki and Prometheus as data sources, then moved on to building dashboards to display metrics and logs. After a fair amount of time, PromQL queries, and tweaks, I arrived at the following dashboards.
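As an example of the kind of PromQL involved, a query like the following charts per-container CPU usage from the cAdvisor metrics (the container name is illustrative):

```
rate(container_cpu_usage_seconds_total{name="backend"}[5m])
```

A similar pattern with `container_memory_usage_bytes` covers memory panels, and Loki's LogQL (e.g. `{job="docker"}`) drives the log panels.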
Challenges
At this point, I had a good amount of headache. I called up Mary to remind her that I would not be maintaining or managing the application, and I took time to discuss with her the challenges I faced at different points of the setup, hoping it would help her if she runs into a problem in the future.
1. Dockerizing the Frontend: I had successfully built the application and it was running locally, but the application was never accessible and therefore could not be reached by other services on the network. I used docker exec -it -u root <container> sh to access the container for troubleshooting, and docker logs <container> to check the logs. After some back and forth, I realized the issue was with the open ports: Vite runs on port 5173 by default, but my container's open port was 3000. I had to redirect the port internally to ensure the frontend service was accessible.
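One way to align the two ports is to tell Vite explicitly which port to listen on, either via CLI flags or in `vite.config.js`; the following is a sketch assuming a standard Vite setup, not the project's actual config file:

```javascript
// vite.config.js
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    host: "0.0.0.0", // listen on all interfaces so the container port is reachable
    port: 3000,      // match the port exposed in the Dockerfile and Traefik label
  },
});
```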
2. Traefik Routing & TLS Configurations
This was probably the most tasking and least fun part of the project.
3. Loki Version Issue
This was a minor issue, but it took a lot of back and forth with the documentation before I realized I was using a Loki version that did not match the configurations in my Loki config file.
4. CORS Issue
This was not an issue I spent much time on, as I was expecting it; it is a common one. But when I moved from localhost to the cloud, I forgot to update my .env and ran into it, so if you ever change your DNS name, remember to make these updates.
Lessons
- Managing multi-container applications with a single configuration file.
- Networking containers and defining service dependencies.
- Configuring Traefik for routing traffic, load balancing, and handling TLS certificates.
- Understanding how reverse proxies improve scalability and security.
- Visualizing container performance and debugging issues using dashboards.
- Building Dashboards
- Identifying and fixing container misconfigurations.
- Diagnosing common issues like misaligned ports and CORS issues.
- Leveraging monitoring and alerting to catch performance issues early.
- When troubleshooting, spend ample time with logs and error messages, and always read the documentation!
Conclusion
Deploying Mary’s application was not just about putting code into production; it was a journey of learning, problem-solving, and implementing best practices to ensure scalability, reliability, and observability. By leveraging Docker Compose, Traefik, and a comprehensive monitoring stack, we transformed a simple project into a robust, cloud-deployed application capable of handling real-world demands.
This process highlights the importance of containerization, network orchestration, and monitoring in modern application deployment. From navigating configuration challenges to ensuring seamless service communication and building insightful dashboards, every step reinforced the value of preparation, testing, and documentation.
Now, Mary’s application is not only ready to support her growing business but also serves as a model for deploying scalable, well-monitored web applications in the cloud. Whether you're a developer building for a client or managing your own projects, this guide can help you tackle similar deployment challenges with confidence.
What’s next? Expand this setup with features like autoscaling, a more robust CI/CD process for automated deployments, or Kubernetes for more advanced container orchestration. The possibilities are endless!