I own a little website that I use for some SEO experiments. Of course there's some content and a Facebook sharing button on every post. The site is so small it runs on a "single controller" PHP app plus a 400 KB SQLite database, yet it can generate thousands of different pages.
Everything is hosted (together with a bunch of other websites) on a cheap DigitalOcean machine, with the free Cloudflare plan in front for some caching. One of those websites has alerting set up, and it started warning me that the site was down.
After some investigation I found the problem... the Facebook crawler.
That crawler was making more than 7M requests per day (with peaks of 300 req/second) to that website.
Their docs were no help in figuring out how to block the bot:
- og:ttl -> ignored
- robots.txt -> ignored
- HTTP 429 -> ignored
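For context, `og:ttl` is a meta tag that is supposed to tell the crawler how many seconds to wait before re-scraping a page (e.g. `<meta property="og:ttl" content="2419200" />`), and `robots.txt` is the standard way to ask any well-behaved crawler to stay away. A sketch of the `robots.txt` entry, assuming the crawler's usual `facebookexternalhit` user-agent token:

```txt
# robots.txt: ask the Facebook crawler not to fetch anything on this site
User-agent: facebookexternalhit
Disallow: /
```

A polite crawler honors this; in my case it was ignored, just like the TTL hint and the 429 responses.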
In the end I had to block the user-agent with a Cloudflare rule.
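The rule itself is a one-liner in Cloudflare's rule expression language; a sketch, again assuming the `facebookexternalhit` user-agent string (check your access logs for the exact token the crawler sends):

```txt
(http.user_agent contains "facebookexternalhit")
```

Attach this expression to a firewall rule with the action set to Block, and the requests get rejected at Cloudflare's edge before they ever reach your origin server.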
If anyone working on that crawler is here on dev.to: please stop ignoring basic Internet netiquette about crawlers.
Next time you could hit someone on AWS. And then they'll probably ask you to pay the bill ;)