
Juan De Dios Santos

Originally published at juandes.com

On the sensationalism of artificial intelligence news


It’s no longer a secret that artificial intelligence (AI) is here to stay. What once was a puzzling and rather niche area of computer science has suddenly started to take over our lives through its many applications. As a result of this mysterious and unfamiliar quality of AI and of its most prominent child, machine learning, news sites, and the press in general, have taken a liking to overstating the reality behind the successes or advances in the field. This phenomenon often leads to articles of an unsavory nature that seem to sensationalize, and even fearmonger about, what’s genuinely going on. In this essay, I want to shed some light on this issue.

AI is often described as “the new electricity” or “the most important thing humanity has ever worked on” by some of its most prominent advocates, tech evangelists, and companies such as Google. These phrases, honestly, give a sense of hope, of the future, even of progress. After all, electricity did transform the way we live, and with that, the world and our economy. However, they are just that: phrases, hyperboles, and metaphors. As powerful as AI is, it’s just another piece of technology. One that, like many others, is designed to make our days more comfortable, accessible, and straightforward. Hence, we should talk about it the same way we’d talk about, say, mobile devices. Yet some news sites disregard the fact that AI is just another tool, and in the process, they overstate and misreport the real events. Let’s see some examples.

Photo by Alex Iby on Unsplash

In 2015, a factory worker died after a machine picked him up and crushed him. This sad event resulted in many sensationalized stories stating that a “robot killed a man.” That sentence, in some sense, is partly true. However, some news outlets described the accident as if the machine had knowingly killed the man. This conjecture is not true at all; as automatic as the robot is, it is not autonomous (accurate source).

Another example is a Facebook experiment, which featured two chatbots chatting with each other. The test, according to Facebook, was a failure, and they turned it off because the bots were basically “saying” gibberish. Pretty innocent, right? Well, not to some people. Right after this announcement, news surfaced saying that Facebook had shut the experiment down because the bots had invented their own language, and it was too dangerous to keep it running. This statement is not accurate at all. The reality is that the bots were not able to communicate using proper English grammar and were thus useless to us. The text below is a portion of their conversation (accurate source).

Bob: I can i i everything else

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Then there are the so-called racist AI models. For example, a while ago MIT revealed their Norman model, an AI they call the world’s first “psychopath” AI. And sure, the model might produce some lines of dark and morbid content (examples below), but calling it racist right away doesn’t seem appropriate. After all, it is just a controlled experiment in which the researchers fed the model “disturbing” data.

“A man is electrocuted and catches to death”

“A man is shot dead”

Norman looks a bit creepy, though (source).

A similar case is the recent and controversial ImageNet Roulette project. This art project featured (the experiment has since been turned off) a website where a user uploads an image of themselves and receives a label from an AI describing the kind of person it “sees.” As you might expect, not everybody was happy with their results. ImageNet’s Person categories contain a wide array of classes, some of them rather offensive, e.g., “rape suspect,” “drug addict,” and “failure.”

As a consequence, users who visited the website expecting a nice classification to share with their friends were instead greeted with a negative one. This outcome led to a considerable uproar in which the media called the model all sorts of things. I, personally, had no issues, since the model called me a rock star. However, I can understand why some people might feel uneasy about a machine calling them a failure. But we need to accept that an AI system is merely a black box that extrapolates from the information it has seen; it doesn’t feel, “know,” or hate you. Before getting all mad and pointing fingers, we should research the context of the intelligent entity. How was it trained? What’s the data behind it? Who labeled the data? These are the essential questions here.

Rock star
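To make that point concrete, here is a minimal, hypothetical sketch in Python (with toy data invented for illustration; this is not ImageNet Roulette’s actual model, data, or categories) showing how a classifier simply echoes whatever labels humans attached to its training examples:

```python
# A minimal, hypothetical sketch: train a tiny text classifier on toy
# "person description" data and watch it echo its training labels.
# None of this is ImageNet Roulette's actual model, data, or categories.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Made-up training data; the labels encode the labeler's bias, not truth.
texts = [
    "smiling person on stage holding a guitar",
    "person singing into a microphone at a concert",
    "person sitting alone in a dark room",
    "person staring blankly at an empty desk",
]
labels = ["rock star", "rock star", "failure", "failure"]

# Bag-of-words plus Naive Bayes: a simple, opaque pipeline.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

# The model doesn't "know" or hate anyone; it just matches words it saw
# during training to whatever labels humans attached to them.
print(model.predict(["person alone in a room with a guitar"]))
```

Swap the two label lists and the very same description would get the opposite verdict. The “judgment” lives entirely in the training data and in whoever labeled it, which is exactly why those three questions matter.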

The last example I want to illustrate concerns a piece I read about an AI that generates music. Here, the author writes about a recently released music album entirely produced by a machine, but then rapidly shifts to what it means for the music industry. I’ll oversimplify the issue (if there’s even one) and say that, in my opinion, it means nothing. How many albums are released weekly? Many, I’m pretty sure. So why can’t we just treat this “AI album” as another one? To be clear, the author doesn’t say anything negative. In fact, he seems very curious about the topic of generative AI, and his stance is a neutral one.

Now we have to ask: why is the media doing this? We can’t know for sure. However, I’m willing to say that it is partly due to two reasons: clicks and lack of understanding. The internet is vast, and getting users to visit your website is basically a competition. To attract them, people employ some of the cheapest tricks we all know, such as clickbait titles or over-the-top (and often fake) news. A sensationalized headline will surely bring in many users, which translates to ad revenue or popularity, and by manipulating the content and straying from the truth, they’ll grab the attention of those who aren’t familiar with the reality.

Reason number two could be genuinely not understanding the issues behind what they are trying to report. AI and machine learning are complicated fields, and many of their concepts are complex, tedious, and not easy to explain. Thus, I wouldn’t be surprised if some of the misreported facts are merely unintended human errors.

Doing machine learning. Photo by me.

Nonetheless, whether it was a mistake or something done on purpose, one of the main problems here is that this trend of misreporting AI news may lead to fearmongering. As I said before, AI is a somewhat mysterious field that, for years, has been associated with The Terminator, the robot uprising, and the end of the world. So, when we have news saying that a robot killed a man, that another is calling people “negroes,” or that a Tesla on semi-autopilot caused a deadly accident, the masses will start to associate the idea of AI, the so-called new electricity that should benefit humankind, with Armageddon, or see it as something detrimental to us, when in reality, it isn’t like that.

Featured image by Roman Kraft on Unsplash
