This post is cross-published with OnePublish.
In this article, we will cover how to set up service monitoring for Python projects with Prometheus and Grafana using Docker containers.
Service monitoring allows us to analyze specific events in our projects, such as database calls, API interactions, and resource usage. It makes it easy to detect unusual behaviour or to discover useful clues behind issues.
A Real-World Scenario
We had a temporary service that redirected incoming requests from specific websites until Google stopped indexing those pages. With service monitoring, we could easily watch the redirect counts on a regular basis. At a certain point in the future the number of redirects will decrease, which means the traffic has migrated to the target website and we no longer need to run this service.
Setting up Docker containers
We are going to run all our services locally in Docker containers. In large companies there is usually a central Prometheus and Grafana setup that covers monitoring for all microservices, so you probably won't even need to write any deployment pipelines for the monitoring tools yourself.
Let's start by creating a docker-compose file with the required services:
docker-compose.yml
version: "3.3"
services:
prometheus:
image: prom/prometheus
ports:
- "9090:9090"
volumes:
- ${PWD}/prometheus.yml:/etc/prometheus/prometheus.yml
grafana:
hostname: grafana
image: grafana/grafana
ports:
- 3000:3000
app:
build:
context: .
dockerfile: Dockerfile
depends_on:
- prometheus
ports:
- "8080:8080"
command: ["python3", "app/main.py"]
The most important point in the configuration above is the mounting of the prometheus.yml file from our local machine into the Docker container. This file contains the configuration for pulling data (metrics) from our app service, i.e. the Python project. Without it, you won't be able to see the custom metrics that your project exposes.
So, create a new file named prometheus.yml at the root level of your project.
prometheus.yml
global:
  scrape_interval: 15s # how often Prometheus scrapes metrics from its targets
  evaluation_interval: 30s # how often Prometheus evaluates its alerting rules
scrape_configs:
  - job_name: app # your project name
    static_configs:
      - targets:
          - app:8000
Now, Prometheus will pull data from our project.
The other settings in the compose file are self-explanatory and not as critical as the Prometheus configuration we just discussed.
Create a new Python project
Now, let's create a very simple Python app that exposes a metric tracking time spent and requests made. Create a new folder named app at the root level of the project, and include an __init__.py to mark it as a Python package.
Next, create another file named main.py which will hold the main logic of the program:
app/main.py
from prometheus_client import start_http_server, Summary
import random
import time

# Create a metric to track time spent and requests made.
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')

# Decorate function with metric.
@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())
Here, we are using a Python package named prometheus_client to interact with Prometheus. It makes it easy to create the different types of metrics our project requires.
The code above is taken from the official prometheus_client documentation; it creates a metric named request_processing_seconds that measures the time spent processing each request. We'll cover other types of metrics later in this post.
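By the way, decorating a function is not the only way to feed a Summary; the same metric object can record observations directly. A minimal sketch, assuming the REQUEST_TIME metric defined above (the sleep calls only stand in for real work):

from prometheus_client import Summary
import time

REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')

# Record a timing explicitly with observe().
start = time.time()
time.sleep(0.1)  # stands in for real work
REQUEST_TIME.observe(time.time() - start)

# Or use the metric as a context manager instead of a decorator.
with REQUEST_TIME.time():
    time.sleep(0.1)

Both variants end up in the same request_processing_seconds series, so they can be mixed freely within one service.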
Now, let's create a Dockerfile and a requirements.txt to build our project.
Dockerfile
FROM python:3.9-slim-buster
WORKDIR /app
RUN apt update
RUN pip3 install --upgrade pip
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
COPY app app
ENV PYTHONUNBUFFERED 1
ENV PYTHONPATH=/app
CMD ["python3", "app/main.py"]
requirements.txt
prometheus-client
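At this point the project layout should look roughly like this (only the files created in this post are listed):

.
├── docker-compose.yml
├── prometheus.yml
├── Dockerfile
├── requirements.txt
└── app/
    ├── __init__.py
    └── main.py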
Now, start the services to see everything in action:
docker-compose up -d
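Before moving on to Grafana, it's worth checking that the exporter is actually serving data. A quick sketch, assuming the app's metrics port 8000 is published by the compose file as shown above:

import urllib.request

# Fetch the raw text metrics that Prometheus will scrape.
metrics = urllib.request.urlopen("http://localhost:8000/metrics").read().decode()
print(metrics)

You should see request_processing_seconds_count and its siblings in the output; you can also open http://localhost:9090/targets to confirm that Prometheus reports the app target as UP.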
Setting up Grafana
In this section, we will use Prometheus as a data source to display the metrics in Grafana charts.
Navigate to localhost:3000 to see the Grafana login page and use admin for both the username and the password. You will then be asked to set a new password; since we're testing locally, we can keep the same one.
After logging in successfully, we should see the default Grafana dashboard. Select Data Sources from this page.
Next, select Prometheus as a data source:
Then it will ask for the URL that the Prometheus service is running on, which is the Docker service name we created: http://prometheus:9090.
And finally, click the Save & Test button to check the data source:
Great! Now our Grafana is ready to illustrate the metrics that come from Prometheus.
Let's now navigate to http://localhost:3000/dashboards to create a new dashboard and add a new panel. Click New Dashboard and then New Panel to get started:
Next, select Code inside the Query panel and type request_processing_seconds. You will see three different suffixes for your custom metric: _count, _sum and _created. That is because the client library exposes a Summary as several time series (a running count of observations, their running sum in seconds, and a creation timestamp) rather than as a single value.
Select one of the options and click Run query to see it in the chart:
Finally, we can see the metrics of our project illustrated by Grafana very nicely.
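If you would rather chart a single meaningful number than the raw series, the suffixes can be combined in a PromQL expression. For example, assuming the metric defined above, the average request duration over the last five minutes is:

rate(request_processing_seconds_sum[5m]) / rate(request_processing_seconds_count[5m])

Paste that into the same query box and run it to plot the average duration per request instead of the ever-growing totals.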
Other Metrics
There are several metric types available, depending on what the project requires. If we want to count a specific event, such as record updates in the database, we can use a Counter(). If we have a message queue such as Kafka or RabbitMQ, we can use a Gauge() to show the number of items waiting in the queue.
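A minimal sketch of such a gauge, using made-up names that are not part of the project above:

from prometheus_client import Gauge

# Unlike a Counter, a Gauge can go both up and down.
QUEUE_SIZE = Gauge('queue_size', 'Number of items waiting in the queue')

QUEUE_SIZE.inc()    # an item was enqueued
QUEUE_SIZE.dec()    # an item was consumed
QUEUE_SIZE.set(42)  # or set an absolute value read from the broker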
Try adding another metric in main.py as below and apply the same steps to connect Prometheus with Grafana:
from prometheus_client import start_http_server, Summary, Counter
import random
import time

# Create a metric to track time spent and requests made.
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')
# Count how many database update events have occurred.
UPDATE_COUNT = Counter('update_count', 'Number of updates')

# Decorate function with metric.
@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())
        UPDATE_COUNT.inc(random.randint(1, 100))
Here, we added a Counter() to count the number of database updates. Don't forget to rebuild the Docker images for all services:
docker-compose up -d --build
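One detail worth knowing when charting this counter: the Python client appends a _total suffix to counters, so the series in Prometheus should be named update_count_total. Since a counter only ever grows, a rate query (here with an arbitrary 1-minute window) usually makes a more readable panel:

rate(update_count_total[1m])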
The full source code for this tutorial is available in the PylotStuff / python-prometheus-grafana repository on GitHub.
Support
If you feel like you've unlocked new skills, please share this post with your friends and subscribe to the YouTube channel so you don't miss any valuable information.
Thumbnail Reference - Monitoring icons created by juicy_fish - Flaticon