Prometheus is exclusively pull-based…
…ish
> The Pushgateway is an intermediary service which allows you to push metrics from jobs which cannot be scraped. For details, see Pushing metrics. (From Prometheus.io)
The Pushgateway is a Prometheus service designed to overcome some of the limitations of a pull-based system. As described above, ephemeral jobs are the Pushgateway's bread and butter. But I was curious to dig deeper and figure out: how exactly does it work?
The word ‘push’ in the name suggests that you are pushing metrics to Prometheus, but as we’ll explore below, you are in fact not: you are pushing metrics to an intermediary that is then scraped by Prometheus. Most importantly, the Pushgateway cannot be used to process events.
The Pushgateway is not an event store. While you can use Prometheus as a data source for Grafana annotations, tracking something like release events has to happen with some event-logging framework.
So you can’t send a metric when an event happens and expect it to behave the way you might expect, e.g. marking a deployment or recording the file size of a recently uploaded file. This makes sense given that Prometheus is a time-series metrics store, but can we look deeper into the Pushgateway to get a better sense of why?
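To make that concrete, here is roughly what pushing from a batch job looks like using the Go client library's push package. This is a minimal sketch: the gateway address is hypothetical, and the job name and metric are made up for illustration.

```go
package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	// A gauge recording when this batch job last finished successfully.
	completionTime := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "db_backup_last_completion_timestamp_seconds",
		Help: "Timestamp of the last successful DB backup.",
	})
	completionTime.SetToCurrentTime()

	// Push to the (hypothetical) Pushgateway address, grouped under job "db_backup".
	// Note that this pushes to the gateway, not to Prometheus itself.
	if err := push.New("http://pushgateway.example.org:9091", "db_backup").
		Collector(completionTime).
		Push(); err != nil {
		log.Fatalf("could not push to Pushgateway: %v", err)
	}
}
```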
The Pushgateway is a relatively simple service. There's one main entry point file (main.go) and an API file (api.go), both 279 lines long. There are a dozen or so API endpoints, but the key one is the one where you post metrics, defined here.
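The documented URL pattern for that endpoint is /metrics/job/&lt;JOB_NAME&gt;{/&lt;LABEL_NAME&gt;/&lt;LABEL_VALUE&gt;}. As a rough sketch of what a push against it looks like on the wire (the gateway address, job, labels, and metric below are all made up), you can hand-roll one with a plain HTTP POST and a text-format body:

```go
package main

import (
	"log"
	"net/http"
	"strings"
)

func main() {
	// A single gauge sample in the text exposition format.
	body := "# TYPE some_metric gauge\nsome_metric 42\n"

	// The job name and any extra grouping labels are part of the URL path.
	url := "http://pushgateway.example.org:9091/metrics/job/some_job/instance/some_instance"

	resp, err := http.Post(url, "text/plain", strings.NewReader(body))
	if err != nil {
		log.Fatalf("push failed: %v", err)
	}
	defer resp.Body.Close()
	log.Printf("Pushgateway responded with %s", resp.Status)
}
```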
This simplicity is key to the limitations of the Pushgateway. The Pushgateway does not push metrics to Prometheus; it caches metrics for Prometheus to scrape, just like every other Prometheus exporter. (I'm not sure why they called it a push gateway rather than a push exporter, but they did.)
So then the main logic of the Pushgateway is encapsulated in the push function of pushgateway/push.go. Roughly, it does the following (a simplified sketch follows the list):
- Labels are parsed from the incoming URL string (Line 73)
- There must be a job label, parsed as a parameter of the URL and appended to the label set (Line 79)
- Metrics are accepted in either protobuf or text format (Line 88)
- The metrics are written to the metric store (Line 116)
- Errors are returned to the user (Line 134)
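Pieced together, the flow looks something like the sketch below. To be clear, this is a simplified paraphrase for illustration and not the actual pushgateway source: it only handles the text format, skips persistence and the real metric store, and condenses error handling.

```go
package main

import (
	"log"
	"net/http"
	"strings"
	"sync"

	dto "github.com/prometheus/client_model/go"
	"github.com/prometheus/common/expfmt"
)

// An in-memory stand-in for the metric store, keyed by the label group
// taken from the URL path.
var (
	mu    sync.Mutex
	store = map[string]map[string]*dto.MetricFamily{}
)

func handlePush(w http.ResponseWriter, r *http.Request) {
	// 1. Parse the grouping labels from the URL path:
	//    /metrics/job/<job>{/<label>/<value>}
	parts := strings.Split(strings.Trim(r.URL.Path, "/"), "/")

	// 2. The job label is mandatory.
	if len(parts) < 3 || parts[0] != "metrics" || parts[1] != "job" {
		http.Error(w, "expected /metrics/job/<job>...", http.StatusBadRequest)
		return
	}
	groupKey := strings.Join(parts[1:], "/")

	// 3. Decode the body. The real gateway accepts protobuf or text format;
	//    this sketch only parses the text exposition format.
	families, err := (&expfmt.TextParser{}).TextToMetricFamilies(r.Body)
	if err != nil {
		// 5. Errors go straight back to the caller.
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}

	// 4. Write the metrics to the local store. Nothing is forwarded to
	//    Prometheus here; the data just sits and waits to be scraped.
	mu.Lock()
	store[groupKey] = families
	mu.Unlock()

	w.WriteHeader(http.StatusAccepted)
}

func main() {
	http.HandleFunc("/metrics/job/", handlePush)
	log.Fatal(http.ListenAndServe(":9091", nil))
}
```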
Notice that we’re writing metrics to the local metric store; we’re not shipping them upstream. Metrics are cached in the Pushgateway and picked up at the next scrape, not pushed through directly. So if you write a metric once at 12:30:47 and you are scraping every minute (for simplicity's sake, we'll assume scrapes land on the minute boundary), Prometheus will scrape and store that value at 12:31. If you don't overwrite or delete the metric, the same value will be stored again at 12:32, 12:33, 12:34, and so on.
Metrics are persisted locally to the “Disk Metrics Store”
In api.go (Line 129) you can see where the /metrics endpoint concatenates all metrics into the standard Prometheus exposition format, waiting to be scraped by the Prometheus server.
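For example, after the hypothetical some_metric push sketched earlier, a scrape of the gateway's /metrics endpoint would return something along these lines. The grouping labels from the URL are attached to the exposed series, and the gateway adds bookkeeping metrics such as push_time_seconds per group; the exact names, labels, and values here are from memory, so treat them as approximate.

```
# TYPE push_time_seconds gauge
push_time_seconds{instance="some_instance",job="some_job"} 1.7005e+09
# TYPE some_metric gauge
some_metric{instance="some_instance",job="some_job"} 42
```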
I decided to research and write this post because I've known the Pushgateway existed, but I wasn't sure how it worked or why there were so many caveats to its usage. Diving into the code helped me understand exactly what it does, why it's described as not an event store (metrics are not read at write time but during the normal Prometheus scrape interval), and the fact that it is not a gateway in the way I might use the term (it waits to be scraped; it does not forward metrics upstream).
Let me know in the comments if this was useful, or if there’s anything else in the push gateway you’d like to dig into.
Top comments (1)
Great post. Is the push gateway only useful with Prometheus? Can it be scraped by other tools such as Datadog or Dynatrace? Because they do scrape exporters.