A few months back, I wrote about how Google is breaking its social contract with the web, harvesting our content not in order to send search traffic to relevant results, but to feed a large language model that will spew auto-completed sentences instead.
I still think Chris put it best:
I just think it’s fuckin’ rude.
When it comes to the crawlers that are ingesting our words to feed large language models, Neil Clarke describes the situation:
It should be strictly opt-in. No one should be required to provide their work for free to any person or organization. The online community is under no responsibility to help them create their products. Some will declare that I am “Anti-AI” for saying such things, but that would be a misrepresentation. I am not declaring that these systems should be torn down, simply that their developers aren’t entitled to our work. They can still build those systems with purchased or donated data.
Alas, the current situation is opt-out. The onus is on us to update our robots.txt file.
Neil handily provides the current list to add to your file. Pass it on:
User-agent: CCBot
Disallow: /
User-agent: ChatGPT-User
Disallow: /
User-agent: GPTBot
Disallow: /
User-agent: Google-Extended
Disallow: /
User-agent: Omgilibot
Disallow: /
User-agent: FacebookBot
Disallow: /
In theory you should be able to group those user agents together, but citation needed on whether that’s honoured everywhere:
User-agent: CCBot
User-agent: ChatGPT-User
User-agent: GPTBot
User-agent: Google-Extended
User-agent: Omgilibot
User-agent: FacebookBot
Disallow: /
There’s a bigger issue with robots.txt though. It too is a social contract. And as we’ve seen, when it comes to large language models, social contracts are being ripped up by the companies looking to feed their beasts.
As Jim says:
I realized why I hadn’t yet added any rules to my robots.txt: I have zero faith in it.
That realisation was prompted in part by Manuel Moreale’s experiment with blocking crawlers:
So, what’s the takeaway here? I guess that the vast majority of crawlers don’t give a shit about your robots.txt.
Time to up the ante. Neil’s post offers an option if you’re running Apache. Either in .htaccess or in a .conf file, you can block user agents using mod_rewrite:
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} (CCBot|ChatGPT|GPTBot|Omgilibot|FacebookBot) [NC]
RewriteRule ^ - [F]
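Once that’s in place, you can sanity-check it by sending a request with a spoofed user agent string. A minimal sketch, assuming your site lives at example.com; swap in your own domain:

curl -I -A "GPTBot" https://example.com/

If the rule is working, the [F] flag means the response should come back as 403 Forbidden.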
You’ll see that Google-Extended isn’t in that list. It isn’t a crawler. Rather it’s the permissions model that Google have implemented for using your site’s content to train large language models: unless you opt out via robots.txt, it’s assumed that you’re totally fine with your content being used to feed their stochastic parrots.
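Which means the Apache rules above won’t help with Google-Extended; the only opt-out is the robots.txt entry already included in the list:

User-agent: Google-Extended
Disallow: /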