Artur Bartosik

Spring Boot monitoring with Prometheus Operator

In this article, we will install the Prometheus Operator, which automatically detects targets for monitoring. If you have used Prometheus before, either without the Operator or outside of Kubernetes, you will see how the Operator and its CRDs make Prometheus flexible, and how much happens seemingly by magic by leveraging Kubernetes capabilities.

We will use a Spring Boot application in the demo. However, you will be able to configure any other app by following this article. If your stack isn't Spring Boot, just skip the first section.

Prepare Spring Boot to expose Prometheus metrics

My demo app (GitHub Link) uses Spring Boot version 3, or more precisely the latest release from 2022, i.e. 3.0.1. The core monitoring component in Spring Boot is Actuator. If you remember the migration of Spring Boot from version 1 to 2, you'll probably remember that the update brought a lot of breaking changes to Actuator. Fortunately, no such changes were made in version 3, so you can also apply the following configuration to Spring Boot version 2.x.x.

To expose metrics that Prometheus can consume, you need to add two dependencies. The first one enables Actuator features; the second one is the Prometheus exporter provided by Micrometer.

implementation("org.springframework.boot:spring-boot-starter-actuator")
runtimeOnly("io.micrometer:micrometer-registry-prometheus")

All you have to do to enable the default metrics is provide the configuration below. As you can see, I expose the entire Actuator on a separate port. It is good practice to separate the business layer from the technical endpoints at the port level.

management:
  server:
    port: 8081
  endpoints:
    web:
      exposure:
        include: "health,info,metrics,prometheus"

From now on, Spring Boot metrics in Prometheus format should be visible at http://localhost:8081/actuator/prometheus.
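
You can quickly verify this locally, for example:

# assuming the app runs locally with the management port set to 8081 as configured above
curl -s http://localhost:8081/actuator/prometheus | head -n 20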

Prometheus Operator

Kubernetes operators are applications that automate installation and configuration (Day-1 tasks) as well as scaling, upgrades, backups, recovery, etc. (Day-2 tasks) for stateful applications. We can say that operators replace part of the manual administration work. Under the hood, operators run a reconciliation loop (watching for changes in the application state) and use CRDs to extend the Kubernetes API. Generally speaking, an operator is the operational knowledge of a specific piece of software encoded in custom controller code.

Prometheus Operator is a project independent of the Prometheus project itself. I know, it can lead to confusion. In the official README you can find a short comparison. Basically, Prometheus Operator does what an operator should do - it provides Kubernetes-native deployment and management of Prometheus and related monitoring components like Grafana or Alertmanager.

Quick installation with helmfile

If you haven't used helmfile yet, I strongly encourage you to check out this tool. It provides a lot of improvements for working with Helm charts, but you don't need to dive into all of them. You can easily switch your Helm releases to helmfile, streamline them, and immediately gain one killer feature - an interactive helm diff that works like Terraform plan. Installation Gist.
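
If you haven't seen one before, a minimal helmfile.yaml for a kube-prometheus-stack release looks roughly like this (an illustrative sketch; the layout and values file path in my repo may differ):

repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: kube-prometheus-stack
    namespace: monitoring
    chart: prometheus-community/kube-prometheus-stack
    version: 43.2.0
    values:
      # hypothetical path to a file with overrides, e.g. additionalServiceMonitors
      - values/kube-prometheus-stack.yaml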

First, clone my GitHub repo with the Prometheus Operator helmfile and check how little configuration is needed to install everything. This is because the Prometheus Operator installation ships with reasonable defaults wherever possible, so we only have to override a few crucial values.
To install it, we need to execute a single command.

helmfile apply -i

The -i flag enables interactive mode: helmfile will ask for confirmation before attempting to modify cluster state. On the first installation you will probably see a very loooooong diff, so it won't be very useful yet. The power of this feature becomes apparent once you start making small changes to your releases - the same as with Terraform.

After a short while, you should see a message that you have successfully installed three releases.

UPDATED RELEASES:
NAME                    CHART                                        VERSION
kube-prometheus-stack   prometheus-community/kube-prometheus-stack    43.2.0
grafana-dashboards      local-charts/grafana-dashboards                1.0.0
demo                    luafanti/spring-debug-app                      1.0.0

Establish a tunnel to Grafana and check whether the preinstalled dashboards show some data.

kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 3000:80

You should be able to see 3 dashboard directories.

Grafana predefined dashboards

The first one - General - is preinstalled together with the Prometheus Operator; the remaining ones come from the local Helm chart. This is the place where you can add any Grafana dashboard as a JSON file and install it along with the whole stack. If you want to add your own dashboards, I recommend first importing or creating the dashboard in the Grafana UI, then exporting it as a JSON file, and then adding it to the project. This way you will avoid problems with a missing datasource.

I could end this post here. We managed to install what we wanted, so the goal was achieved 🎉 🎯. However, let me briefly explain the most interesting things that happen underneath.

How do metrics flow from the Spring Boot application to Grafana?

Metrics flow from Spring Boot via Prometheus to Grafana

Thanks to the Spring Boot Actuator project, exposing operational information becomes trivial. As you can see above, all metrics are exposed on a separate port, 8081. Thanks to this, we have a dedicated gateway that we can open only for Prometheus. Actuator, extended with the Prometheus exporter from Micrometer, adds a dedicated endpoint, /actuator/prometheus, that publishes application metrics in Prometheus format. We'll configure this endpoint to be polled (scraped) by Prometheus, which fetches the metrics and stores them in its database. Note that in the Spring Boot app I added an additional label configuration. This adds the application=spring-boot-demo label to every single metric. The label will be used in the preinstalled Grafana dashboard as one of the filter variables.
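
For reference, adding such a common tag in application.yml looks roughly like this (a sketch based on the standard Micrometer common-tags property; the demo repo is the source of truth):

management:
  metrics:
    tags:
      application: spring-boot-demo   # added as a label to every metric the app publishes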

Grafana variables

You need to know that Prometheus stores all data as time series. Every time series is uniquely identified by its metric name and an optional set of labels. Labels enable a dimensional data model: any combination of labels for the same metric name identifies a particular dimension of that metric. Because Prometheus stores metrics in this way, tools such as Grafana can filter the results and present them across various dimensions.
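
For illustration, a single time series scraped from the /actuator/prometheus endpoint might look like this (label values are examples):

http_server_requests_seconds_count{application="spring-boot-demo",method="GET",outcome="SUCCESS",status="200",uri="/actuator/health"} 42.0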

How does Prometheus Operator discover endpoints to scrape?

This is a fundamental question that should bother us. If you've worked with Prometheus before, you probably know that it requires all scrape endpoints to be configured. In a Kubernetes environment, where pods appear and disappear quite often, it is impossible to maintain such a static config. This is where one of the most powerful parts of Prometheus Operator comes into play - the ServiceMonitor. ServiceMonitor is one of the Prometheus Operator CRDs. It defines a set of targets to be monitored by Prometheus, and the Operator automatically generates the scrape configuration based on that definition. Below you can see the configuration responsible for defining the ServiceMonitor for the Spring Boot app.

additionalServiceMonitors:
    - name: kube-prometheus-stack-spring-boot
      selector:
        matchLabels:
          prometheus-monitoring: 'true'
      namespaceSelector:
        matchNames:
          - sandbox
      endpoints:
        - port: management
          interval: 5s
          path: /actuator/prometheus

It uses label selectors to define which Services to monitor, the namespaces to look for, and the port on which the metrics are exposed.

# check all installed ServiceMonitors
kubectl get servicemonitors.monitoring.coreos.com

Besides our explicitly defined ServiceMonitor, the default installation of the Prometheus Operator creates several others. These are ServiceMonitors used to monitor the Kubernetes cluster itself, as well as the Prometheus and Grafana instances.

You can also view all targets defined by ServiceMonitor in the Prometheus UI.

kubectl port-forward -n monitoring svc/kube-prometheus-stack-prometheus 9090:9090
chrome http://localhost:9090/targets

Prometheus targets

One very important thing! Targets appear only when Prometheus finds a Service with the appropriate labels - like here for my Spring Boot chart. If your Service doesn't have matching labels, the target will not be marked as unavailable/down; it simply will not appear in the list at all.
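
For completeness, a Service that would be picked up by the ServiceMonitor above could look roughly like this (a sketch; the name and pod selector are assumptions, the actual Spring Boot chart may differ):

apiVersion: v1
kind: Service
metadata:
  name: demo
  namespace: sandbox
  labels:
    prometheus-monitoring: "true"   # must match the ServiceMonitor matchLabels
spec:
  selector:
    app: spring-debug-app           # hypothetical pod selector
  ports:
    - name: management              # must match the endpoint port name in the ServiceMonitor
      port: 8081
      targetPort: 8081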

Where are Grafana dashboards installed?

When Grafana starts, it updates/inserts all dashboards available in the configured path. Dashboards are placed under this path with the help of a sidecar container. The sidecar watches for new dashboards defined as ConfigMaps and adds them dynamically without restarting the pod. Below you can see the relevant configuration.

grafana:
  sidecar:
    dashboards:
      enabled: true
      label: grafana_dashboard
      folder: /tmp/dashboards
      provider:
        foldersFromFilesStructure: true

This is only the first part of the setup. We still need to provide the definitions of our predefined dashboards as ConfigMaps. For this I created a local Helm chart, grafana-dashboards. As you can see, the only objects this chart creates are ConfigMaps with dashboard definitions, which the Grafana sidecar reads and extracts into the configured path.
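
A minimal sketch of such a ConfigMap might look like this (the name, label value, and file name are assumptions; the data value should be the full dashboard JSON exported from the Grafana UI):

apiVersion: v1
kind: ConfigMap
metadata:
  name: grafana-dashboards-spring-boot   # hypothetical name
  labels:
    grafana_dashboard: "1"               # the label the sidecar watches for (as configured above)
data:
  # placeholder content - replace with the exported dashboard JSON
  spring-boot.json: |
    { "title": "Spring Boot demo dashboard", "panels": [] }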

# get all ConfigMaps with dashboard definitions
kubectl get cm | grep grafana-dashboards

# check if the ConfigMaps are properly injected under the configured path. You should see two dirs with predefined dashboards.
kubectl exec -it kube-prometheus-stack-grafana-5f4976649d-w7q56 -c grafana -- ls /tmp/dashboards
