DEV Community


Tesla Hack shows how far AI is from being really intelligent

Rodion Gorkovenko ・ 2 min read

Recent news reported a successful "hack" of a Tesla car. Its computer-vision system recognizes images of road signs to support the "autopilot" function. Researchers modified the shape of a character on a speed-limit sign (see picture below), and Tesla misread it as 85 instead of 35, a mistake that, it seems obvious, a human brain and eye couldn't make.

What does it mean?

Image which bewildered Tesla

We are surrounded nowadays by "smart" things, including smart toothbrushes and smart toilet seats. We are led to believe that a computer brain can now do wonders.

But it's not exactly so. Not even close.

During the last 50 years humankind has come up with many clever (even whimsical) algorithms that allow computers to outperform the human brain at many tasks. Siri can find you the nearest toilet or correct the spelling of any word. Some "robo-babes" can politely talk to an audience, while Boston Dynamics robo-dogs open doors by pressing the handles. And this impresses!

Nevertheless, we still have only a vague idea of how the human brain works and learns. We can teach a computer to self-learn in some narrow field (e.g. decision-making in a game), or to extract "diapers and beer" associations from sales logs.
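The "diapers and beer" story refers to market-basket analysis: counting which items show up together in sales logs. A minimal sketch of the idea, with made-up transaction data, might look like this:

```python
# Toy market-basket analysis: count item pairs that co-occur in transactions.
# (Illustrative data only; real systems mine millions of receipts.)
from collections import Counter
from itertools import combinations

transactions = [
    {"diapers", "beer", "chips"},
    {"diapers", "beer"},
    {"milk", "bread"},
    {"diapers", "beer", "milk"},
    {"bread", "chips"},
]

pair_counts = Counter()
for basket in transactions:
    # Sort items so each pair is counted under one canonical key.
    for pair in combinations(sorted(basket), 2):
        pair_counts[pair] += 1

# The most frequent pair is the "surprising" association.
print(pair_counts.most_common(1))  # [(('beer', 'diapers'), 3)]
```

Real association-rule mining (e.g. the Apriori algorithm) adds support and confidence thresholds on top of exactly this kind of co-occurrence counting.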

We can teach it letters and digits, but then it turns out the underlying algorithm "classifies" them in a different, more primitive manner than a human does. Consider, for example, how dogs recognize other dogs of very different breeds, even ones they have never seen anything like before. A computer (e.g. a neural network) can't extrapolate knowledge in this way.
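To make this brittleness concrete, here is a deliberately tiny caricature (this is not how Tesla's system works): a nearest-prototype classifier over 5x7 pixel digits. A few added "tape" pixels flip a 3 into an 8, much like the modified speed-limit sign, even though most of the image is unchanged:

```python
# Toy nearest-prototype "digit classifier" fooled by a few altered pixels.
# (Hypothetical example; real vision systems are far more complex.)

THREE = [
    "#####",
    "....#",
    "....#",
    "#####",
    "....#",
    "....#",
    "#####",
]

EIGHT = [
    "#####",
    "#...#",
    "#...#",
    "#####",
    "#...#",
    "#...#",
    "#####",
]

def distance(a, b):
    """Count the pixels where two glyphs differ (Hamming distance)."""
    return sum(pa != pb for ra, rb in zip(a, b) for pa, pb in zip(ra, rb))

def classify(glyph):
    """Assign the label of the nearest stored prototype."""
    return min([("3", THREE), ("8", EIGHT)],
               key=lambda item: distance(glyph, item[1]))[0]

# Start from a clean "3", then add short strokes (like strips of tape)
# on the left side of three rows.
tampered = list(THREE)
for row in (1, 2, 4):
    tampered[row] = "#" + tampered[row][1:]

print(classify(THREE))     # 3
print(classify(tampered))  # 8 -- three flipped pixels changed the label
```

The classifier has no notion of what a "3" means; it only measures raw pixel distance, so a small, targeted change crosses the decision boundary while a human still sees the original digit.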

And so, in 2020, most breaking news about using AI, ML, NN or GA anywhere from the automotive industry to crime investigation carries a strong smell of marketing. Still, those algorithms are clever mathematical functions and methods that astonishingly achieve the goal in many cases, and surprisingly fail in many others!

So don't be easily fooled by hype words. We still have a very long way to go before machines are "clever" in the general sense of the word. Though it still seems possible.

Hapless robot

Discussion (1)

José Coelho

“ Consider, for example, how dogs recognize other dogs despite being of very different breed, even if never seeing anything alike. Computer (e.g. Neural Network) can't extrapolate knowledge in this way”.

I have to disagree: there are neural networks generating unique human faces without ever seeing one.

Have a look: thispersondoesnotexist.com/

Of course many companies are selling vaporware, but there is no denying ML can do some pretty neat stuff.