DEV Community

Ben Halpern


Why are you NOT worried about the robot apocalypse?

The robots are coming and they will enslave us! Or not. Why are you not all that concerned?

Top comments (37)

Ryan Palo • Edited

Robot slipping on banana peel

Martin Himmel

I can't stop watching this. πŸ˜‚

Juan De los santos • Edited

The battle has begun... XD

Dian Fay

Because there's no reason to prefer a human labor pool over more robots. The next question of course is whether they'd simply kill us; but while it's easy enough to see a lack of concern for individual humans (witness the ongoing discussions over driverless cars), it's difficult to come up with a compelling reason for a campaign of extermination that doesn't presuppose a capacity for abstract thought.

The most likely scenario as I see it is that automation will continue to put people out of work since robots are better in every way than humans at physical labor, and programs are faster and more reliable than we are at calculations and algorithms. Under the present capitalist paradigm this means more and more un- and underemployment and a concentration of wealth in an ever-smaller group. As of 2016 the richest ten billionaires combined owned half a trillion US dollars. If you ranked that figure on a chart of national GDP for that year, those ten people would land on a position in the low 20s. Imagine what you could do with the labor power represented by $505 billion in a society that put it to work!

The problem with automation isn't the robots.

Andy Zhao (he/him)

"Because there's no reason to prefer a human labor pool over more robots."

Never thought of it that way; great point.

I guess "inequality apocalypse" isn't as catchy, huh?

Dian Fay • Edited

The big question to me is whether the development of surveillance, policing, and military technologies (all inextricably related) will outpace the capabilities of a good old-fashioned angry mob with pitchforks and torches. The super-rich have been asking themselves this too: check out this article in The Guardian from a year ago about Silicon Valley royalty buying "just in case" property in New Zealand. They aren't thinking about robots or zombies or a titanic wave of molten metal covering the earth; they're worried about being called to account for a level of wealth that can only be termed obscene in the face of any amount of human suffering or privation.

Yechiel Kalmenson

Because the more intelligent AI becomes, the more it will be plagued by the same bugs that plague the rest of us:

xkcd skynet

Ross Henderson

To be honest, technology works roughly 67% of the time at the moment. So if we continue developing to our current standards I think the Robot Apocalypse would fizzle out after a week or so.

aurel kurtula • Edited

I feel like there are two scenarios

Robots will take our jobs but work for the Bills and Larrys of the world (for the 1%), and the rest of us will suffer.

Kind of how the poor suffer because our politics does not support them. Talk about basic income and listen to all the objections. If the Gateses and Pages of the world control the production of these robots, then we would be unemployed and blamed through no fault of our own - again, similar to how we blame poor people.

Robots will take our jobs but work for us!

There is a beautiful science fiction story, "The Lifecycle of Software Objects" by Ted Chiang (fantastic writer), that amongst other things deals with our moral obligation towards AI. If by robots we mean Ex Machina style, then "robots working for us" is slavery. But when I think of robots, I'm thinking of automated forklifts and pencil sharpeners on steroids - performing the important jobs of pencil pushers. True "can openers" with no feelings and without all the good stuff that makes us us.

These robots would do our jobs and we would reap the benefits!

Someone said something like "if you take jobs away from people you'd be taking away what makes them human". It doesn't make sense to me, but of course there are many who might believe their 9-5 gives them meaning. I've heard one or two philosophers/psychologists say the same thing.

Not to be an ass about it, but I do not agree! I think that because the 9-5 is what most of us have to do, we've come to believe that we need to do it.

Think about it!

If no one had to work ... we'd truly flourish!

As for them enslaving us, it could happen if only the 1% had a say in what kind of robots are created. The 1% of tomorrow might be blinded by their egos and start developing robots for some sort of final solution!

But I truly believe we are becoming better and better human beings, and so we'll mostly use this for the good of humanity - with a little bit of mischief on the side, to make the likes of Trump feel ... important. He will not find a robot to do what he truly wants, but he will find robots that can quickly build walls! I can imagine two robots in a loop: one builds and one destroys.


We are the creators in this scenario, and we should really learn from the mistakes of the master! She (why not) created humans and some of them (yours truly) turned atheist. We don't want our creatures to have that bug in them :)

Dian Fay

Futurists were predicting shorter and shorter workweeks all through the 20th century. Instead we've kept working the same hours while technology has enabled ever-increasing productivity. Wages (in the US) have basically stagnated since the 1970s, and our legal minimum wage isn't even enough to live on -- companies like Wal-Mart and McDonald's are effectively subsidized by taxpayers because their employees have to go on welfare just to survive.

We were all supposed to have flying cars and robot butlers by now!

rhymes • Edited

That's because we should have built robots to manage the 1%'s greed :-D

aurel kurtula

That's true.

Take that as my own wishful thinking

rhymes

I think the fact that we all assume the first thing AI robots will do is terminate us all says way more about us as a species than about any eventual AI.

Maciej Cegłowski's talk Superintelligence: The Idea That Eats Smart People is very interesting and on point regarding superintelligence and the human intelligence bias.

p-mcgowan

I tried to wipe the hair off that "L" about 4 times. More robots would not be a bad thing...

Max Antonucci • Edited

When I die, I at least want it to be in a way that's awesome and inspiring, usually through sheer horror or cool factor. Dying in a robot apocalypse is beaten only by being zombie patient zero or by coding a large-scale project with only inline CSS.

MN Mark

We don't understand what consciousness really is, how it works, or why we have it. We can't program a computer to do something we don't understand, and without self-aware consciousness a computer is just a machine.

Erin Moore

Because it's just a regular apocalypse with extra steps. The simpler ones are much more likely to happen first.

Donald Merand

Maciej Cegłowski presented a very nice counter-argument to rampant AI/robots in his talk Superintelligence: The Idea That Eats Smart People.

edA‑qa mort‑ora‑y

Because I'm a robot.

Ben Halpern

ryan lee martin

I think that the current pop culture understanding of AI is a gross misunderstanding built on a mythology. A mythology created by science fiction and the marketing of AI that is more than happy to make people believe AI is smarter than it is.

Cognitive thinking in a machine does not exist. We still don't even know where to start; AI has been a complete failure in that respect. The kind of AI that has become so pervasive works so well because it is fed massive amounts of data which, in spite of its relative size, represents a very narrow and specific subset of reality to accommodate a dumb machine.

I'd be more worried about a future where environmentally aware machines are controlled and weaponized by humans as tools for enslavement or war. An army of autonomous, non-thinking robots doesn't need to be sentient to inflict mass death or serve as a tool for enslavement.

Damien Cosset

So, you are telling me that we will be smart enough to make robots that will have the power to enslave the entire human race, but dumb enough to not give those robots the ability to make the right choices?

GnosticMike

One short answer: we will enhance ourselves before robots can enhance themselves, whether that be through new skill sets, new jobs, new ways of earning a living, or new medical advances. However, I cannot wait for the day the robots overtake us. That will be the day we can truly be free of our biological constraints. We are already slaves to our own biology. Why not free us from that type of bondage?

Stanislav (Stas) Katkov

Not concerned.

It's been more than 20 years of computer evolution and it's still a pain to get a printer working on Linux. What other operating system could robots use internally? Windows? OSX? I hope you're joking. :-)