Prometheus is a metrics-based monitoring platform and one of my all-time favorite tools. For a while now I have been meaning to build a project around it. What's my plan? I want to create a multi-language application cluster monitored with Prometheus, and then add some Grafana Loki, Cortex and Thanos integrations.
What's the first step? Integrate a Prometheus metrics library into a basic Python app. To do that, I simply take the Prometheus "client_python" sample, build a container image from it and push it to a public registry. So...
First step! Create a GitLab repository.
Second step! Create an "app" folder and copy this code into a "main.py" file:
#!/usr/bin/env python3
from prometheus_client import start_http_server, Summary
import random
import time

# Create a metric to track time spent and requests made.
REQUEST_TIME = Summary('request_processing_seconds', 'Time spent processing request')

# Decorate function with metric.
@REQUEST_TIME.time()
def process_request(t):
    """A dummy function that takes some time."""
    time.sleep(t)

if __name__ == '__main__':
    # Start up the server to expose the metrics.
    start_http_server(8000)
    # Generate some requests.
    while True:
        process_request(random.random())
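The Summary above is only one of the metric types that "client_python" offers. As an optional sketch (not part of the upstream sample; the metric names here are made up for illustration), a Counter and a Gauge could track how many requests were processed and how many are in flight:
from prometheus_client import Counter, Gauge
import time

# Illustrative extra metrics, not part of the upstream client_python sample.
REQUEST_COUNT = Counter('requests_processed_total', 'Total number of processed requests')
IN_PROGRESS = Gauge('requests_in_progress', 'Number of requests currently being processed')

def process_request_extended(t):
    """Same dummy work as process_request, plus a counter and an in-flight gauge."""
    REQUEST_COUNT.inc()                    # one increment per request
    with IN_PROGRESS.track_inprogress():   # gauge goes up while the block runs
        time.sleep(t)
These would show up on the same metrics endpoint exposed by start_http_server(8000), alongside the Summary.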
Add a "requisites.txt" file with this content:
prometheus_client
Add a "Dockerfile" like this one:
FROM python:3.9-alpine
WORKDIR /app
COPY . .
RUN pip install -r requisites.txt
RUN chmod u+x main.py
ENTRYPOINT ["/app/main.py"]
Third step! Create a GitLab CI pipeline that pushes to the Container Registry. To handle this task, I created a ".gitlab-ci.yml" file in the repository root:
stages:
  - build
image: docker:stable
services:
  - docker:dind
build:
  stage: build
  when: on_success
  only:
    - master
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
    - docker build -f app/Dockerfile -t $CI_REGISTRY_IMAGE app
    - docker push $CI_REGISTRY_IMAGE
This repository should look like this:
├── app
│   ├── Dockerfile
│   ├── main.py
│   └── requisites.txt
├── .gitlab-ci.yml
└── README.md
Now, let's commit all the files and wait until the pipeline finishes.
Fourth and last step! Run that image and scrape some metrics:
Run the Docker container in detached mode, named "python-prom", listening on 8000/TCP, and remove the container when it stops:
$> docker run -d --rm --name python-prom -p 8000:8000 registry.gitlab.com/cosckoya/python-prom
Check that the container is up and running (and listening on that port):
$> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
380cc1a00a8b registry.gitlab.com/cosckoya/python-prom "/app/main.py" About a minute ago Up About a minute 0.0.0.0:8000->8000/tcp, :::8000->8000/tcp python-prom
And scrape those metrics!
$> curl localhost:8000
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 309.0
python_gc_objects_collected_total{generation="1"} 43.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable object found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 36.0
python_gc_collections_total{generation="1"} 3.0
python_gc_collections_total{generation="2"} 0.0
[..]
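Instead of eyeballing the raw exposition text, the same "client_python" library can parse it. Here is a minimal sketch, assuming the container above is still running on localhost:8000, that fetches the endpoint and prints the samples of the Summary defined in "main.py":
import urllib.request
from prometheus_client.parser import text_string_to_metric_families

# Fetch the raw Prometheus exposition text from the running container.
raw = urllib.request.urlopen('http://localhost:8000/').read().decode('utf-8')

# Parse it into metric families and print the Summary generated by main.py.
for family in text_string_to_metric_families(raw):
    if family.name == 'request_processing_seconds':
        for sample in family.samples:
            print(sample.name, sample.labels, sample.value)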
Our Python Prometheus base image is done.
In my next post, I will create Go and Java Prometheus base applications, deploy all three base images into a Kubernetes cluster alongside Prometheus, and build a Prometheus workbench with them.