A Facebook crawler was making 7M requests per day to my stupid website

Francesco Napoletano ・ 1 min read

I own a small website I use for some SEO experiments. Of course there's some content, plus a Facebook share button on every post. The site is so small it runs on a "single controller" PHP app backed by a 400 KB SQLite database, yet it can generate thousands of different pages.

Everything is hosted (together with a bunch of other websites) on a cheap DigitalOcean machine, with the free Cloudflare plan in front for some caching. One of those websites has alerting set up, and it started warning me that the site was down.

After some investigation I found the culprit... the Facebook crawler.

That crawler was making more than 7M requests per day (with peaks of 300 req/second) to that website.
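To put that in perspective, a quick back-of-the-envelope check of what 7M requests per day means as a sustained rate:

```python
requests_per_day = 7_000_000
seconds_per_day = 86_400

average_rps = requests_per_day / seconds_per_day
print(round(average_rps))  # ~81 req/s sustained, on top of 300 req/s peaks
```

That's roughly 81 requests per second, around the clock, for a hobby site on a cheap VPS.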

Their documentation was no help in figuring out how to slow down or block the bot. Everything I tried was ignored:

  • og:ttl -> ignored
  • robots.txt -> ignored
  • HTTP 429 -> ignored

In the end I had to block the user agent with a Cloudflare firewall rule.
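If you're not behind Cloudflare, a similar block can be done at the web server itself. A minimal nginx sketch (to go inside the `server` block; `facebookexternalhit` and `Facebot` are the user-agent tokens Facebook documents for its crawler — check your own access logs to confirm what's actually hitting you):

```nginx
# Return 403 to the Facebook crawler based on its User-Agent header
if ($http_user_agent ~* (facebookexternalhit|Facebot)) {
    return 403;
}
```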

If someone working on that crawler is here on dev.to, please stop ignoring basic Internet netiquette for crawlers.

Next time you could hit someone on AWS. And then they'll probably ask you to pay the bill ;)


Try the following in robots.txt:

User-agent: Facebot/1.0
Crawl-delay: 1

User-agent: Facebot/1.1
Crawl-delay: 10
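For what it's worth, Facebook documents its crawler user agents as `facebookexternalhit` and `Facebot`, and robots.txt groups are usually matched on the product token without a version number, so versioned tokens like `Facebot/1.0` may never match. A variant like this seems more likely to be picked up — though, as the post notes, the crawler appeared to ignore robots.txt entirely:

```
User-agent: facebookexternalhit
User-agent: Facebot
Crawl-delay: 10
```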