In many production environments, having your application instrumented with something like Prometheus metrics is the bare minimum. These metrics can tell you a lot about the state of your application, and your alerting system can monitor them and page you when something is wrong (e.g. the error rate goes above a threshold).
Let's say your application is public facing. The systems involved with metrics, however, are all internal: everything runs inside your organization's private network. That is because your metrics contain information specific to your server and domain; among other things, they expose the routes and methods your server handles.
I love Fastify. It is a great Node.js web framework. If you want to expose Prometheus metrics from your server, you have to use fastify-metrics. By default, it will add a /metrics route and serve the default metrics along with the HTTP route histograms and summaries. On a public facing Fastify server, you are going to be exposing these metrics to the public.
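To make that concrete, here is roughly what the route metrics look like (an illustrative sample with made-up values; exact metric and label names depend on the plugin version). Note how the route and HTTP method show up as labels:

http_request_duration_seconds_count{method="GET",route="/",status_code="200"} 42
http_request_duration_seconds_sum{method="GET",route="/",status_code="200"} 0.87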
Fastify has great recommendations on productionizing your Fastify server. There is a section on running multiple instances of your Fastify server:
There are several use-cases where running multiple Fastify apps on the same server might be considered. A common example would be exposing metrics endpoints on a separate port, to prevent public access, when using a reverse proxy or an ingress firewall is not an option.
Exposing metrics
You usually start a Fastify application like this:
import Fastify from "fastify";

const mainApp = Fastify({ logger: true });

mainApp.get("/", () => {
  return { hello: "world" };
});

try {
  await mainApp.listen({ port: 3000, host: "0.0.0.0" });
} catch (err) {
  mainApp.log.error(err);
  process.exit(1);
}
If you follow the fastify-metrics docs, you will set up metrics by registering the plugin on the mainApp:
import fastifyMetrics from 'fastify-metrics';

...

await mainApp.register(fastifyMetrics, {
  endpoint: '/metrics'
})
If you navigate to http://0.0.0.0:3000/metrics, you will see a bunch of metrics. So if your application is serving traffic on port 3000, then your metrics will be accessible publicly.
Here is an example of a Kubernetes Service that routes ports 80 and 443 to your deployment:
---
apiVersion: v1
kind: Service
metadata:
  annotations:
    external-dns.alpha.kubernetes.io/hostname: fastify-metrics-demo.test.domain
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: '443'
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: ARN
  labels:
    app: fastify-metrics-demo
  name: fastify-metrics-demo
  namespace: fastify
spec:
  ports:
    - name: http
      port: 80
      targetPort: 3000
    - name: https
      port: 443
      targetPort: 3000
  selector:
    app: fastify-metrics-demo
  type: LoadBalancer
This is what the ServiceMonitor for this application would look like:
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: fastify-metrics-demo
    component: metrics
    prometheus: app
  name: fastify-metrics-demo-metrics
  namespace: fastify
spec:
  endpoints:
    - honorLabels: true
      interval: 15s
      path: "/metrics"
      port: http
  selector:
    matchLabels:
      app: fastify-metrics-demo
Based on the config above, if you went to https://fastify-metrics-demo.test.domain/metrics, you would see your Prometheus metrics. Let's fix that.
Running a second instance
Instead of serving metrics from the mainApp, you can create a second Fastify instance and set the endpoint value to null.
const mainApp = Fastify({ logger: true });
const metricsApp = Fastify({ logger: true });

await mainApp.register(fastifyMetrics, {
  endpoint: null,
});
This will still let fastify-metrics instrument mainApp and collect metrics, without exposing a route on it. Then you can have metricsApp serve them on a separate port, reading from the same prom-client registry.
import type { FastifyReply } from "fastify";

metricsApp.get(
  "/metrics",
  { logLevel: "error" },
  async (_, reply: FastifyReply) => {
    // mainApp.metrics.client is the prom-client instance exposed by fastify-metrics
    return reply
      .header("Content-Type", mainApp.metrics.client.register.contentType)
      .send(await mainApp.metrics.client.register.metrics());
  }
);
Don't forget to start mainApp and metricsApp:
try {
  await mainApp.listen({ port: 3000, host: "0.0.0.0" });
  await metricsApp.listen({ port: 3001, host: "0.0.0.0" });
} catch (err) {
  mainApp.log.error(err);
  process.exit(1);
}
Now, if you navigate to http://0.0.0.0:3000/metrics, you will get a 404. Metrics are no longer accessible publicly. You can access them privately through http://0.0.0.0:3001/metrics.
Now let's tell Kubernetes to scrape metrics from the new port. First, we need a Service for it:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: fastify-metrics-demo
    component: metrics
  name: fastify-metrics-demo-metrics
  namespace: fastify
spec:
  clusterIP: None
  ports:
    - name: prometheus
      port: 3001
      protocol: TCP
      targetPort: 3001
  selector:
    app: fastify-metrics-demo
  sessionAffinity: None
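Both Services select the same pods through the app: fastify-metrics-demo label. For reference, the matching Deployment might look something like this (just a sketch; the image name and replica count are assumptions, not taken from the demo repo):

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fastify-metrics-demo
  namespace: fastify
  labels:
    app: fastify-metrics-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: fastify-metrics-demo
  template:
    metadata:
      labels:
        app: fastify-metrics-demo
    spec:
      containers:
        - name: fastify-metrics-demo
          image: fastify-metrics-demo:latest
          ports:
            # Public traffic served by mainApp
            - name: http
              containerPort: 3000
            # Private metrics served by metricsApp
            - name: prometheus
              containerPort: 3001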
Then we declare the ServiceMonitor:
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  labels:
    app: fastify-metrics-demo
    component: metrics
    prometheus: app
  name: fastify-metrics-demo-metrics
  namespace: fastify
spec:
  endpoints:
    - honorLabels: true
      interval: 15s
      path: "/metrics"
      port: prometheus
  selector:
    matchLabels:
      app: fastify-metrics-demo
Now, if you try to go to https://fastify-metrics-demo.test.domain/metrics, you will get a 404. But if you go to your Prometheus instance, you will be able to query your metrics.
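For example, assuming the default histogram name shown earlier (adjust it to whatever your plugin version actually exports), a query like this shows the per-route request rate:

sum by (route, method) (rate(http_request_duration_seconds_count[5m]))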
Wrap up
As you can see, it is very simple to expose your metrics on another port. This is also how it is commonly done with Rails applications, and I wanted to replicate the same behavior.
One downside is running two instances of your server. Granted, the metricsApp doesn't have all the extra plugins registered; its sole responsibility is to serve a single route. Other than the memory overhead, you aren't paying a big price for this implementation.
Let me know if you have built something like this before.
Code can be found here: https://github.com/umarov/fastify-metrics-demo