Max Sveshnikov

Monitoring API Performance with Express, Prometheus, and Grafana

Introduction

As APIs become critical to modern applications, being able to monitor their performance is vital. By tracking API metrics like request rates, response times, and error counts, we can gain visibility into the health and efficiency of our backend services.

In this post, we’ll look at an approach for monitoring Express APIs using Prometheus to collect metrics and Grafana to visualize them. We’ll use docker-compose to spin up containers for all three tools to create an integrated monitoring stack.

Setting Up the Express API

To easily add Prometheus metrics to our Express app, we'll use the express-prom-bundle module. This wraps prom-client and provides middleware for request tracking plus default Node.js metrics.

First install the module:

npm install express-prom-bundle

Then update our app.js:

const express = require('express');
const promBundle = require('express-prom-bundle');

const app = express();

// Parse JSON bodies so the transformLabels hook below can read req.body
app.use(express.json());

// Report metrics per Express route pattern (e.g. "/data") instead of the raw URL,
// falling back to "No" when no route matched
promBundle.normalizePath = (req, opts) => {
    return req.route?.path ?? "No";
};

// Init metrics middleware: records request durations labelled by method, path and status code
const metricsMiddleware = promBundle({
    includeMethod: true,
    includePath: true,
    metricType: "summary",
    customLabels: { model: "No" },
    // Add a custom "model" label taken from the request body when present
    transformLabels: (labels, req, res) => {
        labels.model = req?.body?.model ?? req?.body?.imageModel ?? req?.body?.voice ?? "No";
        return labels;
    },
});

app.use(metricsMiddleware);

// Routes 
app.get('/data', (req, res) => {
  res.json({data: 'Some data...'});
}); 

app.listen(3000);

This instruments our app to track request counts and durations (from which request rates and error rates can be derived), labelled by method, path, and status code. Default process metrics are also collected.

The metrics are exposed on /metrics for Prometheus to scrape. We can also add custom metrics and labels as needed.
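As a sketch of what a custom metric could look like (the counter name, route, and labels here are illustrative, not part of the module's defaults), we can define it against the same prom-client registry that express-prom-bundle uses and it will show up on /metrics automatically:

// Added to the app.js above, after the metrics middleware is registered.
// express-prom-bundle re-exports prom-client as promBundle.promClient,
// so this counter lands in the same registry that /metrics serves.
const promClient = promBundle.promClient;

// Hypothetical business-level counter
const ordersCounter = new promClient.Counter({
    name: 'orders_processed_total',
    help: 'Total number of orders processed',
    labelNames: ['status'],
});

app.post('/orders', (req, res) => {
    // ...handle the order...
    ordersCounter.inc({ status: 'ok' });
    res.json({ ok: true });
});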

This module makes instrumenting Express apps for Prometheus a breeze! The generated metrics can then be visualized in Grafana without any additional work.

The app now serves a /metrics route exposing Prometheus metric data. Because we set metricType: "summary", request durations are recorded as an http_request_duration_seconds summary with precomputed quantiles, labelled by method, path, status code, and our custom model label.
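For example, a scrape of /metrics should contain lines roughly like these (exact label order, quantiles, and values will vary):

http_request_duration_seconds{quantile="0.95",status_code="200",method="GET",path="/data",model="No"} 0.002
http_request_duration_seconds_sum{status_code="200",method="GET",path="/data",model="No"} 0.015
http_request_duration_seconds_count{status_code="200",method="GET",path="/data",model="No"} 12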

Setting Up Prometheus

Now we need to set up a Prometheus server that will scrape metrics data from our application. We’ll use an official Prometheus docker image to spin up an instance.

Here is a basic prometheus.yml config file that targets our Express app:

global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.  
scrape_configs:
  - job_name: 'express' 
    static_configs:
    - targets: ['express:3000'] #Our express app container

We map port 9090 from the container so the Prometheus UI and API are reachable at http://localhost:9090, where we can check that the express target is being scraped.
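For a quick sanity check, queries like the following can be run in the Prometheus UI (the metric name assumes express-prom-bundle's default http_request_duration_seconds, and the job label matches the prometheus.yml above):

# Scrape health for the express job (1 = last scrape succeeded)
up{job="express"}

# Per-route request rate over the last 5 minutes
rate(http_request_duration_seconds_count{job="express"}[5m])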

Visualizing with Grafana

For visualizations, we’ll use Grafana, which has built-in support for querying and displaying Prometheus data. We set up a Grafana Docker container and map its internal port 3000 to a host port (7000 in the compose file below).

Once running, we configure our Prometheus instance as a data source in Grafana using the URL http://prometheus:9090. From there, Grafana can query any metric Prometheus has scraped, and ready-made community dashboards for common Prometheus and Node.js metrics can be imported in a few clicks.

We can also create custom charts and graphs to visualize metrics like our request duration summary. This allows us to monitor 95th percentile request times, error rates, request rates, and more.
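For example, panels for this setup could use queries like the following (assuming the default summary quantiles and the labels produced by the configuration above):

# 95th percentile request duration for the /data route (summaries expose precomputed quantiles)
http_request_duration_seconds{quantile="0.95", path="/data"}

# Rate of 5xx responses across all routes over the last 5 minutes
sum(rate(http_request_duration_seconds_count{status_code=~"5.."}[5m]))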

Docker Compose allows us to spin up multiple containers for different services and link them together easily. This is perfect for creating an integrated monitoring stack.

Here is what a sample docker-compose.yml file might look like:

version: '3'

services:
    express:
        build: .
        ports:
            - 3000:3000

    prometheus:
        image: prom/prometheus
        ports:
            - "9090:9090"
        volumes:
            - ./prometheus:/etc/prometheus
            - prometheus:/prometheus
        command:
            - "--config.file=/etc/prometheus/prometheus.yml"

    grafana:
        image: grafana/grafana
        ports:
            - "7000:3000"
        depends_on:
            - prometheus
        volumes:
            - ./grafana/provisioning:/etc/grafana/provisioning
            - grafana:/var/lib/grafana
        environment:
            - GF_SECURITY_ADMIN_PASSWORD=${GF_SECURITY_ADMIN_PASSWORD}

# Named volumes referenced above must be declared at the top level
volumes:
    prometheus:
    grafana:


This defines three services:

  • Our Express app container, exposing port 3000
  • A Prometheus server configured via a volume-mounted config file, with its UI exposed on port 9090
  • Grafana, exposed on host port 7000

Using a single compose file like this means:

  • All containers share a network so Prometheus can scrape metrics from the Express app
  • We can easily scale up our Express API containers for load balancing
  • Adding and linking extra services like a database is trivial
  • We can spin the whole monitored stack up and down with a single command (docker-compose up / docker-compose down), as shown below
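For instance (the --build flag assumes the Express image is built from a Dockerfile in the project root, as referenced by build: . in the compose file):

# Build the Express image and start the whole stack in the background
docker-compose up -d --build

# Tear the stack down again
docker-compose down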

We could enhance this further by:

  • Adding Grafana config via shared volumes
  • Configuring Grafana data sources and dashboards on start

Docker Compose combined with Prometheus and Grafana provides a powerful way to prototype and deploy a well-instrumented microservices architecture with monitoring baked in from the start.

Configuring Grafana

To simplify Grafana setup, we can configure data sources and dashboards using configuration files mounted from the host system:

grafana:
  image: grafana/grafana
  volumes:
    - ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
    - ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
  environment:
    - GF_SECURITY_ADMIN_USER=admin 
    - GF_SECURITY_ADMIN_PASSWORD=admin
  ports:
    - 7000:3000 # host port 7000 keeps port 3000 free for the Express app

This mounts volumes containing:

  • datasources/prometheus.yml - Configures our Prometheus instance as a data source
  • dashboards/express.yml - Configures a dashboard provider that loads our dashboard JSON files

And sets the default Grafana admin credentials.

When Grafana starts up, it will automatically detect these files and configure the data sources and dashboards specified. We can easily add more yaml config files to provision additional dashboards and organize them.

Sample datasources/prometheus.yml:

apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    url: http://prometheus:9090
    access: proxy  
    isDefault: true

Sample dashboards/express.yml:

apiVersion: 1

providers:
  - name: 'default'
    orgId: 1
    folder: ''
    type: file
    disableDeletion: false
    updateIntervalSeconds: 10
    options:
      # Grafana loads every dashboard JSON file found in this directory,
      # e.g. a default.json exported from the Grafana UI
      path: /var/lib/grafana/dashboards

This allows us to manage and version Grafana config alongside our Prometheus and app config for a fully reproducible monitoring setup.
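Putting it together, the project layout implied by the volume mounts above looks roughly like this (the ./grafana/dashboards directory and its mount to /var/lib/grafana/dashboards are an assumption, since the dashboard JSON files must end up wherever the provider's path points):

.
├── app.js
├── Dockerfile
├── docker-compose.yml
├── prometheus/
│   └── prometheus.yml
└── grafana/
    ├── provisioning/
    │   ├── datasources/
    │   │   └── prometheus.yml
    │   └── dashboards/
    │       └── express.yml
    └── dashboards/
        └── default.json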


Conclusion

Using Prometheus and Grafana provides powerful and customizable monitoring for our Express APIs with very little integration code required. Running them via Docker provides a simple way to set up a unified metrics stack.

As applications grow to involve dozens of services, maintaining visibility into API performance becomes crucial. Robust monitoring helps detect issues before they impact users, improving both reliability and the development process.

This is just one approach to monitoring Express and Node.js services - there are many other great tools like InfluxDB, Datadog and New Relic that may suit other needs. Prometheus and Grafana offer an easy path to get started instrumenting the critical internal APIs that drive our applications today.
