
Reducing On-call Alert Fatigue with Deduplication


Alert noise is a very common on-call complaint, leading to fatigue and burnout. This article is an attempt to help folks address this problem.

What is alert fatigue?

Most organizations today have an expansive set of tools to monitor their applications and services. This is to ensure that all the system metrics, events, logs, etc. are tracked to keep abreast of how their systems are doing. But it is humanly impossible to constantly supervise the various dashboards of these tools. So, it makes sense that when these tools detect anything that is even remotely important, the team receives a notification informing them of it. This in turn enables engineering teams to know how reliable their systems are and be proactive in avoiding downtime.

But the issues arise when engineers start to get flooded with alerts from their monitoring setup. The sheer volume of alerts that are mostly informational and not necessarily actionable is much higher than the number of actual incidents that need immediate action.

So, a typical day in the life of an on-call engineer is to wade through the ocean of alerts on their incident management platform of choice. Engineers who have experienced this know how overwhelming it can get. The really important incidents start to get lost in the superfluous alert noise. This is Alert Fatigue.

Alert noise can kill on-call productivity

Alert fatigue has become an increasingly painful and widespread problem in DevOps and SRE teams, given the amount of data that is available to them. While the whole point of using monitoring tools to send alerts is to build a culture of proactive incident management, alert noise slowly begins to defeat that very objective.
You know you have a problem to fix if the volume of low-priority/warning alerts greatly exceeds the number of actionable alerts to such an extent that the real, high-severity incidents end up getting detected much later or not at all.

It follows that it is super important to ensure that on-call engineers who respond to these incidents are not overloaded with alert noise.

The problem now centres around finding a way to capture all the data while ensuring that you get notified only for the actionable ones, or, in essence, finding a tool that can distinguish between alerts and incidents.
No Engineer wants to be woken up at 3AM only to find out that it is a false alarm.

How Kevin Loses His Sanity Because of Alert Fatigue: An On-call Story

Let’s take a look at this in an illustrative way.

This is Kevin and he is an SRE (crowd cheers? Hahaha). He deals with services and makes sure they are healthy. And to top it all, he needs to do this while not losing his sanity.

An alert woke him up. Another one woke him up even more.


And this is a Herculean task when he is being woken up by a production alert at 1AM.

He looks like a zombie himself, and the King of Pop's Thriller ringing on his phone keeps up with the theme of this unfortunate series of events.

Don't judge him. (Cause this is THRILLER 🧟 on loop).

So, Kevin sees that the service has sent a warning for CPU usage. It will probably take a week for it to move into the critical stage. He takes steps to fix this by reaching out to his team. But the service continues to send him notifications, disrupting his sleep.


While he understands that the alerting tool is just doing its job by pinging him ruthlessly until he wakes up to his responsibilities, he sees no reason to lose his sleep or sanity unless there's a serious production issue (he secretly prays that this isn't the case every time the phone rings).

Here's how he lost his sanity in just about an hour. I'm pretty sure he's a little sick of Thriller by now.


Timeline of D-Day:

12:58:59AM Thriller
01:00:22AM Sleep deprived, yet slapping himself in the face to stay awake and check the audit logs
01:21:31AM Wakes up from an unexpected snooze, finds out the spacebar ain't working anymore due to a salivary short circuit
01:30:01AM Copies spaces from websites using the mouse and pastes them into grep to filter logs
01:36:03AM Eureka moment, followed by a thought of "Oh shoot, I'm desperate now"
01:40:40AM Food delivery arrives. The high point of this incident so far.
01:40:41AM Thriller
01:47:12AM BURP
01:52:15AM Coffee Refill.
01:52:34AM Thriller
02:00:44AM Thriller
02:12:49AM Thriller
02:33:52AM Thriller
02:45:53AM Thriller
02:52:53AM Thriller Thriller Thriller
02:56:54AM Thriller Thriller Thriller Thriller Thriller
03:03:00AM Played dunk-the-phone-in-coffee. Sparks.
03:08:17AM Wakes up the duck. Duck is not so thrilled.
03:10:29AM Hot air to the face... either from the duck or the CPU exhaust
03:27:05AM Manages to find the fix
03:29:30AM Figures out that his phone survived the 6 inch dunk
03:37:15AM Face hits the pillow as he contemplates throwing his phone out of the window

Kevin Configures De-duplication in Squadcast

Kevin sees that his alerts are pouring in from Prometheus. He realises that he can't keep dunking his phone in the coffee every time alerts flood in.

He decides to deal with the alert noise once and for all after resolving the prod issue.

He manages to configure deduplication rules on his platform.

Prometheus had been complaining about deployment rolling updates and some completely unrelated CPU usage issues every 10 seconds or so. He executes a runbook and fixes both issues (apparently this happens about once a month).

Now he rolls up his sleeves and decides to configure de-duplication for his alerts.

For deployment issues, he decides to group and de-duplicate alerts based on the impacted services.
For CPU Usage related issues, he decides to group and de-duplicate alerts based on the impacted services, but create a new alert if the same event has already occurred 50 times.
He sees that the alert payload for one specific alert has to do with the deployment of that service.

"status" : "firing",
"annotations": {
"description": "Deployment replicas are not updated for payments",
"summary": "Deployment has not been rolled out properly"
"startsAt": "2019-11-11T12:58:59Z",
"endsAt": "0001-01-01T00:00:00Z",
"generatorURL": "..",
"labels": {
"alertname": "DeploymentReplicasNotUpdated",
"deployment": "payments",
"...": "...",
"kubernetes_namespace": "monitoring",
"severity": "warning"

He writes a rule to de-duplicate the incident for deployment errors.

(past.labels.alertname == current.labels.alertname) &&
(current.labels.alertname == "DeploymentReplicasNotUpdated") &&
(past.labels.deployment == current.labels.deployment)
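
To make the effect of this rule concrete, here is a minimal sketch in Python of how such a past/current comparison could be evaluated. This is only an illustration of the idea, not Squadcast's actual rule engine; the label names come from the payload above, and everything else is hypothetical.

# Hypothetical sketch of how a de-duplication rule like the one above could be
# evaluated. This is not Squadcast's rule engine, just an illustration.

def is_duplicate(past: dict, current: dict) -> bool:
    """Return True if the current alert should merge into the past incident."""
    past_labels = past.get("labels", {})
    current_labels = current.get("labels", {})
    return (
        past_labels.get("alertname") == current_labels.get("alertname")
        and current_labels.get("alertname") == "DeploymentReplicasNotUpdated"
        and past_labels.get("deployment") == current_labels.get("deployment")
    )

# A repeat of the payments deployment alert gets de-duplicated:
past_alert = {"labels": {"alertname": "DeploymentReplicasNotUpdated", "deployment": "payments"}}
current_alert = {"labels": {"alertname": "DeploymentReplicasNotUpdated", "deployment": "payments"}}
print(is_duplicate(past_alert, current_alert))  # True -> merged, no new notification

In other words, every repeat of the same alertname for the same deployment folds into the already-open incident instead of ringing the phone again.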

He writes a similar rule for the CPU Usage based alerts and adds another one to fire this incident again only if it has occurred 50 times in a row.

"status" : "firing",
"annotations": {
"description": "CPU Usage higher than 60% in postgres-worker-7uflf558tx-ulr5h pod",
"summary": "CPU Usage high in postgres-worker-7uflf558tx-ulr5h pod"
"startsAt": "2019-12-11T01:40:39Z",
"endsAt": "0001-01-01T00:00:00Z",
"generatorURL": "..",
"labels": {
"alertname": "CPUThrottlingHigh",
"podname": "postgres-worker-7uflf558tx-ulr5h",
"deployment": "postgres-worker",
"...": "...",
"kubernetes_namespace": "monitoring",
"severity": "critical"

Rule that Kevin used for this:

(past.labels.alertname == current.labels.alertname) &&
(current.labels.alertname == "CPUThrottlingHigh") &&
(past.labels.deployment == current.labels.deployment) &&
event_count < 50

At least he won't hate Thriller now + No phone dunking + No coffee wastage + most importantly, No more alert noise!!!


Kevin finally manages to configure de-duplication rules for his Prometheus alerts and sets severities for incidents so that he gets woken up only for the really, really important ones.


Kevin is smart. Be like Kevin.

Originally published at Squadcast Blog


Squadcast is an end-to-end incident response platform that helps tech teams adopt SRE best practices to maximize service reliability, accelerate innovation velocity and deliver outstanding customer experiences.

