The Turing Test was a hypothetical test imagined by, guess who, Alan Turing in the 1950s. In a nutshell, a human chats with a machine over text messages, and if they cannot tell the machine apart from another human, various conclusions follow depending on your philosophical inclinations. Some might declare that robots have become sentient, some might say that nothing replaces a soul; I would say that GPT-3 is pretty damn close and in no way can it be considered sentient.
More importantly, this test has framed our conception of robots and Artificial Intelligence as a pure duplication of human capabilities through artificial means. However, the past decade has shown us that this is far from the case. For some tasks, Artificial Intelligence is now much better than humans: by some accounts, AI has outperformed humans at speech and image recognition for years now. Car makers like Tesla claim superhuman capabilities.
And while physical robots are expensive and potentially dangerous in the real world, the web has been swarming with bots since its earliest days. Some look for security holes to exploit, some mass-enter contests to win an iPhone, others probe weaknesses in social media algorithms. But perhaps the most successful bot of all is practically omniscient and omnipotent on the Internet: Google's crawler. It visits every single page on the open web at least once a month.
This makes it a very interesting case:
- We solve an increasing number of CAPTCHAs every day because website owners want to keep bots out. Yet those same owners would lose their minds if Google were to skip their sites.
- While today Google's algorithms are most likely chock-full of AI, Google acquired its power at a time when it mostly relied on one simple algorithm, namely PageRank.
This tells us that while many bots are annoying, Google is a desirable bot, and one that, on top of that, did not need AI to be desired. It is not sentient, it does not pass the Turing Test, it does not even speak, yet you welcome it into your home.
Let us now consider another family of bots: Twitter bots. Most of us have seen them liking posts with a particular hashtag, propagating fake news, inflating engagement numbers, and doing many, many things I don't dare imagine. But is all of this really the product of bots? Or of click farms in India? Worse, angry groups of people will harass individuals for free, at great scale, for little reason beyond their own boredom.
And there it is. That line. So blurred that it is not a line anymore; it is the air we breathe. It has become impossible to tell the difference between a bot and a human. If Twitter is considered so toxic for women, it is not because of bots; it is because of 100% legitimate users. And if Google is so useful, it is not because a librarian classified all 130 trillion pages of the Internet; it is because a robot did.
Humans can be "bad", robots can be "good". This is because robots are ultimately controlled by sentient beings (for now, just us humans). So it all boils down to means and motives. Robots are a means. Humans have the motive.
Starting from this point, consider the following: if you think robots can do a lot more harm than a "bare" person, don't you think robots can also do a lot more good? That if you give robots a way to use your website/platform/app, someone might build something nice on top of it?
A good example is RSS feeds. In that distant past of blogging, you could aggregate the content of several websites to stay on top of everything, relay your friends' content and vice versa, and overall enjoy a truly interoperable web. That has faded away, for better or worse, but one thing remains true: this way of working only happened because we gave robots an opportunity to cooperate.
The question that remains, and whose answer is far from simple, is this: how can we imagine a web that is open to both humans and robots?
A first hint is to consider robots as actual users of your website, users with a different purpose than hanging out reading your content. A few ideas:
- Everyone tries their best to remove bots from analytics. Instead, embrace the fact that bots will visit your website and keep separate metrics for different "classes" of users: how many people came to your website versus how much of your content was crawled.
- You could "watermark" your content per bot to keep track: UTM variables in URLs, hidden metadata in images, etc.
- Present several interfaces: HTML/CSS for people, microformats for crawlers, and APIs for services building on top of yours.
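The first idea above, keeping per-class metrics instead of discarding bot traffic, can be sketched in a few lines. This is a minimal illustration, not a production detector: the regex and the user-agent strings are assumptions, and a real deployment would rely on a maintained crawler list and reverse-DNS verification rather than substring matching.

```python
import re
from collections import Counter

# Hypothetical pattern; real crawlers are best verified via maintained
# lists and reverse DNS, not substring matching on the user agent.
BOT_PATTERN = re.compile(r"(googlebot|bingbot|crawler|spider|bot)", re.I)

def classify(user_agent: str) -> str:
    """Bucket a request into a coarse 'class' of user."""
    if not user_agent:
        return "unknown"
    if BOT_PATTERN.search(user_agent):
        return "crawler"
    return "human"

def per_class_metrics(user_agents):
    """Count page views per class instead of throwing bot hits away."""
    return Counter(classify(ua) for ua in user_agents)

hits = per_class_metrics([
    "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Firefox/115.0",
    "",
])
```

With this in place, "how much of my content was crawled" becomes a first-class number next to "how many people visited", instead of noise to be filtered out.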
On a regulatory level, it seems increasingly obvious that laws need to adapt so we can keep track of responsibilities. At the level of a website, that means making it natural to associate a robot's actions with the will of a person:
- Have a simple-to-use, well-documented API which can be accessed with an identifying token.
- When you put up a wall like a CAPTCHA or SMS verification, you create a mechanism you will eventually lose control of. Once users figure out how to bypass your security, you have no means left of detecting the bad things they are doing.
- A few years back, I worked at a company making Facebook apps. In spite of all the verification Facebook does on people's identity, phone number, and overall credibility, let me tell you that roughly 90% of the participation in our apps came from bots.
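The identifying-token idea above amounts to a very small contract: every token maps to an accountable person, and every action is logged against that person. Here is a minimal sketch under assumptions of my own making (the token names, the registry dict, and the in-memory audit log are all hypothetical; a real API would issue tokens securely and persist the log).

```python
import time

# Hypothetical registry: every API token is issued to a named person.
TOKEN_OWNERS = {
    "tok_rss_aggregator": "alice@example.com",
    "tok_price_watcher": "bob@example.com",
}

AUDIT_LOG = []  # (timestamp, owner, action): the robot's acts, a person's will

def handle_request(token: str, action: str) -> str:
    """Accept a bot request only if its token identifies a person,
    and record who asked for what."""
    owner = TOKEN_OWNERS.get(token)
    if owner is None:
        return "401 Unauthorized"
    AUDIT_LOG.append((time.time(), owner, action))
    return "200 OK"
```

The point is not the code, it is the property: when a bot misbehaves, the audit trail ends at a human, which is exactly the means-and-motives link the regulation needs.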
But look at Twitter. They had an open API and are walking it back, partly because of bots. Aren't they, however, mistaking the means for the motive? Even if they cut the harm done by bots, they will never cut the harm done by humans. The change they need is deeper.
- It might sound stupid, and it certainly remains to be proven, but you could try selling the service you provide. Not everyone can be an ads platform; it's ridiculous. There is a legion of innovative business models we can copy from the world champion of consumption, China.
- Money is a far stronger trust platform than reCAPTCHA. Netflix shows require money to get produced, people pay money to watch them, and creators are compensated when the content succeeds. Any idiot with a phone can say whatever they want on YouTube. Guess which platform draws the most criticism?
- Work against nefarious behaviors, not nefarious users. To stay with Twitter examples, every day I see "masculists" at war with feminists for what seems to be no good reason at all. If you can detect that one cluster is starting to attack another (which is almost trivial with current AI libraries), then just silence the fuck out of those tweets. Deciding who is right is not your job, but you can decide to spare both parties harm by pulling the plug from time to time.
- Give people no reason to commit those nefarious acts in the first place. Facebook's likes have been debated over and over as the social currency, yet they fit none of the characteristics of a currency except being a measurable number. For example, instead of measuring a creator's success in likes, measure it in money provided by their patrons.
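The cluster-versus-cluster detection mentioned above really is simple once users have been assigned to communities. The sketch below assumes that assignment already happened upstream (e.g. community detection on the follow graph); the user IDs, cluster labels, and threshold are all hypothetical, and a real system would work on time windows and normalized rates rather than raw counts.

```python
from collections import Counter

# Hypothetical: users already assigned to clusters by an upstream
# community-detection step on the social graph.
CLUSTER = {"u1": "A", "u2": "A", "u3": "A", "u4": "B", "u5": "B"}

def detect_brigading(mentions, threshold=3):
    """Flag (source_cluster, target_cluster) pairs whose cross-cluster
    mention volume reaches a threshold. `mentions` is (author, target) pairs."""
    flows = Counter()
    for author, target in mentions:
        a, b = CLUSTER.get(author), CLUSTER.get(target)
        if a and b and a != b:
            flows[(a, b)] += 1
    return [pair for pair, count in flows.items() if count >= threshold]

attacks = detect_brigading([
    ("u1", "u4"), ("u2", "u4"), ("u3", "u5"),  # cluster A piling onto B
    ("u4", "u1"),                              # a single reply back
])
```

Note that the output names clusters, not individuals: the platform can throttle a flow of hostility without ruling on who is right, which is exactly the point made above.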
Basically it comes down to this: if your business model is keeping people engaged in your app as long as possible to show them as many ads as possible, then you are abusing people, and other people can leverage that abuse too. If your business model is sane, you are far less likely to face problems and abuse. Of all the big tech names, which ones are considered a danger to democracy, and which ones depend on ads to stay filthy rich? Is the solution to fake news really an SMS verification code to keep the nasty Russian bots away?
The real problem is that existing platforms, the major social media first in line, are ill-conceived and facilitate nefarious behavior. This is exactly why robots need to be granted first-class citizenship on the web. Not only can robots build fantastic things, but if your platform is ready to absorb the nastiest behavior of the nastiest bot run by the worst-intentioned human, then there is a good chance that the humans on your platform are properly protected and will stay protected no matter which direction technology evolves in.
By negating the worst at its largest scale, you gain the strength to enable the best for humans, robots, and the whole ecosystem around you.