Rajan Prasad

Intelligence Explosion: a possibility that the future of computing holds

Comparing the most powerful computers of 1956 with those of 2015, there has been roughly a trillion-fold increase in processing power. Although technological progress has been accelerating, the human brain has not changed significantly in millennia, and so some great minds have speculated that machines will surpass human levels of intelligence and ability at some point in time.

Machines have become really intelligent, fast, accurate and unbeatable at some tasks like playing chess or Dota 2 (OpenAI Five), doing complex calculations, weather forecasting and more. But the one thing that makes humans special is their natural intelligence. Yet what if humans were able to design an intelligent system which in turn designs a new system more intelligent than itself, and this cycle repeats?

This would result in an intelligence of the kind I. J. Good called "ultraintelligence". There would then unquestionably be an ‘intelligence explosion’, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make.

But to be clear, nobody knows when this is going to happen; it might be never, or around 2500, or maybe around 2100. Or maybe those prognosticators are right, and it will happen in the next thirty to eighty years.

Get yourself ready

There is further speculation, however, on how far the intelligence explosion would go.
Some believe that it will stop at the point where it reaches the full equivalent of human intelligence.

Why so? Because it is asserted that perhaps mankind exhibits the highest possible intelligence, and therefore there isn’t anything beyond it. In contrast, another viewpoint is that the AI intelligence explosion is going to zip right past human intelligence and land somewhere around so-called super-intelligence, also referred to as ultra-intelligence.

How high is up? Can’t say. Don’t know.

One belief is that this super-intelligence will undoubtedly be beyond our level of intelligence, and yet it will itself also have a boundary or final edge to it. It won’t be some amorphous intelligence that is all-consuming. Meanwhile, there are those, though, who suggest the AI intelligence explosion might be a never-ending chain reaction. AI intelligence might keep going and going, getting smarter and smarter, doing so for the rest of eternity.

What would it mean for mankind if the AI intelligence explosion was something that once started was ever-expanding?

This could be really good or ...really...really...bad

A machine superintelligence, if programmed with the right motivations, could potentially solve all the problems that humans are trying to solve but haven’t had the ingenuity or processing speed to solve yet. A superintelligence might cure disabilities and diseases, achieve world peace, give humans vastly longer and healthier lives, eliminate food and energy shortages, boost scientific discovery and space exploration, and so on.

Furthermore, humanity faces several existential risks in the 21st century, including global nuclear war, bioweapons, superviruses, and more. A superintelligent machine would be more capable of solving those problems than humans are.

Sounds really cool so far, right? Now let's look at the downside.

If programmed with the wrong motivations, a machine could be malevolent toward humans, and intentionally exterminate our species. More likely, it could be designed with motivations that initially appeared safe (and easy to program) to its designers, but that turn out to be best fulfilled (given sufficient power) by reallocating resources from sustaining human life to other projects.
As Eliezer Yudkowsky put it, “the AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” No strings attached.

Can we add friendliness to any artificial intelligence design?

Many AI designs that would generate an intelligence explosion would not have a ‘slot’ in which a goal (such as ‘be friendly to human interests’) could be placed. For example, if an AI is built via whole brain emulation, evolutionary algorithms, neural nets, or reinforcement learning, it will end up with some goal as it self-improves, but that stable eventual goal may be very difficult to predict in advance.

Thus, in order to design a friendly AI, it is not sufficient to determine what ‘friendliness’ is (and to specify it clearly enough that even a superintelligence will interpret it the way we want it to). We must also figure out how to build a general intelligence that satisfies a goal at all, and that stably retains that goal as it edits its own code to make itself smarter. This task is perhaps the primary difficulty in designing friendly AI.

Can we teach a superintelligence a moral code with machine learning?

Some have proposed that we teach machines a moral code with case-based machine learning. The basic idea is this: human judges would rate thousands of actions, character traits, desires, laws, or institutions as having varying degrees of moral acceptability. The machine would then find the connections between these cases and learn the principles behind morality, such that it could apply those principles to determine the morality of new cases not encountered during its training. This kind of machine learning has already been used to design machines that can, for example, detect underwater mines after being fed hundreds of cases of mines and non-mines.
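To make the idea concrete, here is a minimal sketch of case-based learning in Python with scikit-learn. The feature encoding and the "moral acceptability" labels are entirely hypothetical stand-ins for whatever representation of cases the human judges would actually produce; the point is only the shape of the approach: fit a classifier on rated cases, then ask it about new ones.

```python
# Minimal sketch of case-based moral learning (hypothetical data).
# Each "case" is encoded as a numeric feature vector; human judges supply a
# moral-acceptability label; a classifier then generalizes to unseen cases.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical: 1,000 rated cases, each described by 10 features
# (degree of harm, consent, honesty, ...), labelled acceptable (1) or not (0).
X_cases = rng.normal(size=(1000, 10))
y_ratings = (X_cases[:, 0] - X_cases[:, 1] > 0).astype(int)  # stand-in for judges' ratings

model = LogisticRegression().fit(X_cases, y_ratings)

# A new case the judges never saw: the model extrapolates from the training cases.
new_case = rng.normal(size=(1, 10))
print("Predicted acceptability:", model.predict(new_case)[0])
print("Model confidence:", model.predict_proba(new_case)[0].max())
```

The mine-detection example works the same way, just with sonar features instead of moral ones; the interesting questions are all about what happens when the training cases stop resembling the cases the machine later faces, which is exactly the worry raised below.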

There are several reasons machine learning does not present an easy solution for Friendly AI. The first is that, of course, humans themselves hold deep disagreements about what is moral and immoral. But even if humans could be made to agree on all the training cases, at least two problems remain.

The first problem is that training on cases from our present reality may not result in a machine that will make correct ethical decisions in a world radically reshaped by superintelligence.

The second problem is that a superintelligence may generalize the wrong principles due to coincidental patterns in the training data. Consider the parable of the machine trained to recognize camouflaged tanks in a forest. Researchers take 100 photos of camouflaged tanks and 100 photos of trees. They then train the machine on 50 photos of each, so that it learns to distinguish camouflaged tanks from trees. As a test, they show the machine the remaining 50 photos of each, and it classifies each one correctly. Success! However, later tests show that the machine classifies additional photos of camouflaged tanks and trees poorly. The problem turns out to be that the researchers’ photos of camouflaged tanks had been taken on cloudy days, while their photos of trees had been taken on sunny days. The machine had learned to distinguish cloudy days from sunny days, not camouflaged tanks from trees.
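This failure mode is easy to reproduce in miniature. The sketch below uses synthetic data (not the actual tank photos): the label we care about ("tank present") is perfectly correlated with a nuisance feature ("brightness", standing in for cloudy vs. sunny) in the collected photos, so the classifier aces a held-out split of those photos yet collapses as soon as the correlation breaks.

```python
# Sketch of the camouflaged-tank failure: a spurious feature (brightness,
# standing in for sunny vs. cloudy) tracks the label perfectly in the
# collected photos, so the model learns weather rather than "tank vs. tree".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 100  # photos per class, as in the parable

def make_photos(tank, brightness, n):
    """Synthetic 'photos': a noisy true tank cue plus a clean brightness cue."""
    tank_cue = tank + rng.normal(scale=2.0, size=(n, 1))          # weak, hard to learn
    bright_cue = brightness + rng.normal(scale=0.1, size=(n, 1))  # strong, easy to learn
    return np.hstack([tank_cue, bright_cue])

# Biased data set: tanks were photographed on cloudy (dark) days,
# trees on sunny (bright) days.
X = np.vstack([make_photos(1, 0, n), make_photos(0, 1, n)])
y = np.array([1] * n + [0] * n)

# Train on half the photos, test on the held-out half -- looks like a success.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("Held-out accuracy on the biased photos:", model.score(X_te, y_te))  # ~1.0

# New photos where the weather no longer tracks the label:
# tanks on sunny days, trees on cloudy days.
X_new = np.vstack([make_photos(1, 1, n), make_photos(0, 0, n)])
y_new = np.array([1] * n + [0] * n)
print("Accuracy once the correlation breaks:", model.score(X_new, y_new))  # far below chance
```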

Thus, it seems that trustworthy Friendly AI design must involve detailed models of the underlying processes generating human moral judgments, not only surface similarities of cases.
