DEV Community


The API Gateway security risk you need to pay attention to

Yan Cui ・ 4 min read

When you deploy an API to API Gateway, throttling is enabled by default in the stage configurations.

By default, every method inherits its throttling settings from the stage.

Having built-in throttling enabled by default is great. However, the default method limits – 10k req/s with a burst of 5,000 requests – match your account-level limits. As a result, ALL your APIs in the entire region share a rate limit that can be exhausted by a single method.
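To find out which of your stages are exposed, a quick audit script helps. Below is a minimal sketch using boto3; running the scan requires AWS credentials, and the 10k/5k figures are the current account-level defaults:

```python
# Sketch: flag REST API stages whose stage-wide throttling still matches
# the account-level defaults (10,000 req/s, burst of 5,000).

DEFAULT_RATE = 10000.0
DEFAULT_BURST = 5000

def uses_account_defaults(method_settings: dict) -> bool:
    """True if the stage-wide '*/*' setting still inherits the account limits."""
    stage_wide = method_settings.get("*/*", {})
    return (
        stage_wide.get("throttlingRateLimit", DEFAULT_RATE) == DEFAULT_RATE
        and stage_wide.get("throttlingBurstLimit", DEFAULT_BURST) == DEFAULT_BURST
    )

def scan_stages():
    """Print every at-risk stage in the current region (needs AWS credentials)."""
    import boto3

    apigw = boto3.client("apigateway")
    for api in apigw.get_rest_apis()["items"]:
        for stage in apigw.get_stages(restApiId=api["id"])["item"]:
            if uses_account_defaults(stage.get("methodSettings", {})):
                print(f"{api['name']}/{stage['stageName']} shares the account-wide limit")
```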

It also means that, as an attacker, I only need to DoS attack one public endpoint. I can bring down not just the API in question but all of your APIs in the entire region, effectively rendering your entire system unavailable.

Given that many organizations run their entire production environment out of a single AWS region and account, this is a risk you can’t afford to ignore.

Is WAF not the answer to DoS?

You can configure WAF rules for both API Gateway and CloudFront. For API Gateway, you do this in the stage settings.

With AWS WAF, you can create rate-based rules that rate-limit at the IP level.
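A rate-based rule is just one entry in a web ACL's rule list. Here is a sketch of building one for the WAFv2 API; the rule name and the 2,000-requests-per-5-minutes limit are assumptions you would tune:

```python
# Sketch: a WAFv2 rate-based rule that blocks any single IP exceeding
# `limit` requests per 5-minute window.

def ip_rate_limit_rule(name: str, limit: int, priority: int = 0) -> dict:
    """Build a rate-based rule for the Rules list of wafv2 create_web_acl."""
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "RateBasedStatement": {"Limit": limit, "AggregateKeyType": "IP"}
        },
        "Action": {"Block": {}},
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

# e.g. pass [ip_rate_limit_rule("per-ip-limit", 2000)] as the Rules argument
# to boto3's wafv2 create_web_acl, with Scope="REGIONAL" for API Gateway.
```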

This is sufficient to repel basic DoS attacks where all the requests originate from a handful of IP addresses. But it’s far from a foolproof system.

For starters, it won’t protect you from DDoS attacks by even a small botnet with thousands of hosts. The rise of IoT devices (and their poor security) has also given rise to IoT botnets, which can comprise millions of compromised devices.

These rate-based WAF rules also struggle to deal with low-and-slow DoS attacks. These attacks generate a slow and steady stream of requests that is hard to differentiate from normal traffic.

This naive IP-level rate limiting can also block traffic from institutions whose users share the same IP address. These can include universities and, in some cases, even small towns. In the past, I also observed that many AOL users shared the same IP address.

In short, WAF can keep the script kiddies out, but it is not a good enough answer to the threat of DoS attacks. The core of the problem is that one method is allowed to inflict maximum damage on the whole region, and that problem really needs to be addressed at the platform level.

So what can we do?

The solution is simple, but the challenge is in governance.

All “you have to do” is apply a sensible rate limit to each method individually. However, doing so requires constant developer discipline, and we know from history that this leads to failure: humans are terrible at doing the same thing over and over consistently.
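Until the tooling catches up, you can apply the overrides yourself after deployment through API Gateway's update_stage API. A minimal sketch; the resource path, method, IDs, and limits are all placeholder assumptions:

```python
# Sketch: build update_stage patch operations that override the throttling
# for a single method. Forward slashes in the resource path must be
# escaped as '~1' in the patch path.

def throttle_patch(resource_path: str, http_method: str,
                   rate: float, burst: int) -> list:
    """Patch operations for apigateway update_stage."""
    escaped = resource_path.replace("/", "~1")
    prefix = f"/{escaped}/{http_method}/throttling"
    return [
        {"op": "replace", "path": f"{prefix}/rateLimit", "value": str(rate)},
        {"op": "replace", "path": f"{prefix}/burstLimit", "value": str(burst)},
    ]

# Usage (needs AWS credentials; the API id and stage are placeholders):
# import boto3
# boto3.client("apigateway").update_stage(
#     restApiId="abc123", stageName="prod",
#     patchOperations=throttle_patch("/orders", "GET", rate=100, burst=50))
```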

At the time of writing, there’s no built-in support in the Serverless framework to configure these method settings. The best solution seems to be the serverless-api-stage plugin. It works but has been dormant for over a year. And the author has not responded to any of the recent issues or PRs.

You can create a custom rule in AWS Config to check that every API Gateway method is created with a rate limit override. This is a good way to catch non-compliance and enforce better practices in the organization.
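The heart of such a rule is a simple check over each stage's methodSettings. A sketch of the decision logic; the 1,000 req/s ceiling is an assumed organisational policy, not an AWS default:

```python
# Sketch: compliance decision for a custom AWS Config rule. A stage is
# non-compliant if it has no throttling overrides at all, or if any of its
# method settings still permits more than the assumed ceiling.

MAX_ALLOWED_RATE = 1000.0  # assumed organisational ceiling, not an AWS value

def evaluate_stage(method_settings: dict) -> str:
    """Return 'COMPLIANT' or 'NON_COMPLIANT' for one stage's methodSettings."""
    if not method_settings:
        return "NON_COMPLIANT"  # everything inherits the account-level limit
    for setting in method_settings.values():
        if setting.get("throttlingRateLimit", 10000.0) > MAX_ALLOWED_RATE:
            return "NON_COMPLIANT"
    return "COMPLIANT"
```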

You can also implement some automated remediation. For example, you can trigger a Lambda function after every API Gateway deployment using CloudTrail and CloudWatch Events/EventBridge. If the API author has left the default rate limits on, the function can override them with more sensible settings. This wouldn’t be my first port of call, though, as it can be confusing to API authors when the configuration of their API changes without any action on their part.
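Such a remediation function might look like the sketch below. The event shape (a CloudTrail-sourced EventBridge event for CreateDeployment) and the fallback limits are assumptions to verify against your own CloudTrail records:

```python
# Sketch: pull the API and stage out of a CloudTrail-sourced EventBridge
# event for apigateway CreateDeployment, then apply a stage-wide override.

SAFE_RATE, SAFE_BURST = "500", "200"  # assumed fallback limits

def deployment_target(event: dict):
    """Return (rest_api_id, stage_name) from a CreateDeployment event."""
    params = event.get("detail", {}).get("requestParameters", {})
    return params.get("restApiId"), params.get("stageName")

def handler(event, context):
    """Lambda entry point: override the stage-wide throttling defaults."""
    import boto3

    rest_api_id, stage = deployment_target(event)
    if rest_api_id and stage:
        boto3.client("apigateway").update_stage(
            restApiId=rest_api_id,
            stageName=stage,
            patchOperations=[
                {"op": "replace", "path": "/*/*/throttling/rateLimit", "value": SAFE_RATE},
                {"op": "replace", "path": "/*/*/throttling/burstLimit", "value": SAFE_BURST},
            ],
        )
```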

Another strategy is to reduce the amount of traffic that reaches API Gateway by putting CloudFront in front of it as a CDN. The rate-based WAF rules can be applied to CloudFront too, although the same limitations we discussed earlier still apply, which means you can still incur extra CloudFront costs during a DDoS attack.

With AWS Shield Advanced ($3,000/month plus various other fees), you get payment protection against the extra cost incurred during an attack. Perhaps more importantly, you also get access to the DDoS Response Team if you have an existing Business or Enterprise support plan. Given the cost involved, this is likely to be out of reach for many startups.

All in all, the tooling needs to improve to help people do the right thing by default. We need better support from the likes of the Serverless framework so we can configure these rate limits easily. And I hope AWS changes the default behaviour of applying region-wide limits to every method, or at the very least shows a warning in the console when your rate limit settings expose you to this risk.

Hi, my name is Yan Cui. I’m an AWS Serverless Hero and the author of Production-Ready Serverless. I have run production workloads at scale on AWS for nearly 10 years, and I have been an architect or principal engineer in a variety of industries, ranging from banking and e-commerce to sports streaming and mobile gaming. I currently work as an independent consultant focused on AWS and serverless.

You can contact me via Email, Twitter and LinkedIn.

Hire me.



Renato Byrro

Informative post, thanks for sharing Yan!

One question: by using a relatively low rate limit for all endpoints, wouldn't we make it easier for low-and-slow DDoS attacks? Especially on endpoints that are central to most or all of the user experience...

Do you think reducing the timeout to a minimum, reasonable level could be an extra measure to mitigate this sort of attack?

Yan Cui (Author)

Thank you :-)

As a best effort to find the "right" rate limits, I tend to combine historical norms (in case of migration projects) + some buffer, or based on business req (e.g. need to support X concurrent users, which translates to X req/s) + some buffer.
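As a rough illustration of that arithmetic (all the numbers here are made up):

```python
# Sketch of the "expected load + buffer" heuristic for picking a rate limit.
import math

def suggested_rate_limit(expected_rps: float, buffer: float = 0.5) -> int:
    """Expected steady-state req/s plus a safety buffer (50% by default)."""
    return math.ceil(expected_rps * (1 + buffer))

# e.g. 2,000 concurrent users each making a request every 10 seconds is
# roughly 200 req/s; with a 50% buffer that suggests a limit of 300 req/s.
```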

It would make those endpoints easier targets for DoS attacks that aren't caught by WAF (e.g. low-and-slow attacks). But at least you limit the blast radius to just one endpoint/API, as opposed to ALL the APIs in the region.

Also, if you have more critical endpoints then you can allow a bigger buffer to better handle unexpected spikes in traffic. You should also be more strict about the rate limits for low-priority endpoints.

The alternative is that your critical endpoints, the ones important to the user experience (and which should be protected by some authentication & authorization mechanism), can be taken offline by an attacker who DDoSes a low-priority, public endpoint (e.g. the "about us" page).