
Raising Robots

Peter Harrison ・ 4 min read

In this video I gave a presentation on Artificial Intelligence safety and ethics. How can we ensure artificial intelligence stays safe? Let's explore the options:

Outlawing Artificial Intelligence

For this first approach to work there would need to be broad international agreement on the outright prohibition of artificial intelligence. Artificial intelligence represents an unassailable advantage in any domain in which it is deployed. Whether military, economic, scientific or industrial, machine intelligence will be a strategic advantage of monumental proportions.

The international community has been unable to find the political will to deal with a clear and present danger like global warming, so the chance of international agreement to halt the development of artificial intelligence entirely is remote, to say the least. The most powerful companies on the planet are directly involved in, and invested in, artificial intelligence.

Physical Restraint

In this approach we continue to develop artificial intelligence but maintain control with something like the panic buttons found in factories, which shut down the machines in the event of an emergency. This appears to be a straightforward, common-sense approach: if the machines begin to do things that we find undesirable, simply turn them off.
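The panic-button idea can be sketched in software as a shared shutdown flag that every worker checks on each iteration. This is a minimal, hypothetical illustration, not a design from the presentation:

```python
# A software "panic button": a shared flag that worker threads
# poll on every loop iteration, stopping promptly when it is set.
import threading
import time

panic = threading.Event()  # the "big red button"

def worker(results: list) -> None:
    # Simulate a machine that keeps working until the button is pressed.
    while not panic.is_set():
        results.append("unit of work")
        time.sleep(0.01)

results: list = []
t = threading.Thread(target=worker, args=(results,))
t.start()
time.sleep(0.05)   # let the machine run for a while
panic.set()        # press the button
t.join(timeout=1)
print(f"stopped after {len(results)} units of work")
```

Note that even this toy version assumes the worker cooperates by polling the flag; the article's point is precisely that real deployments offer no such cooperative single point of control.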

But consider who will have the authority to turn off intelligent machines. We already have this kind of problem. If there is illegal content on a web site, isn't taking down the server holding that content as easy as flicking a switch? Well, yes, if you happen to have access to the computer, which might be one of thousands of identical machines in a huge data center. Are you going to be able to find the specific server, or are you going to shut down the whole data center? But you probably don't have access at all: the server is probably halfway around the world, owned by a multinational company serving thousands of customers.

In tightly controlled lab environments where the operators have complete control this may be practical, but the instant artificial intelligence is put into real-world use, scaled across massive data centers around the globe, turning it off would be like Facebook turning off its servers: unthinkable. Just as we rely on computers now, we will come to rely on artificial intelligence.

Programmatic or Data Constraints

Perhaps physical off buttons won't work, but perhaps we could be more clever and build relatively simple supervisory programs which limit the data going to and coming from the system, with the ability to shut down intelligent agents that violate specific hard-coded rules. This would be more subtle than a simple off button. Like current firewalls, such systems could regulate communication, acting as protection for those communicating with the AI.
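A supervisory program of this kind could be sketched as a wrapper that screens the agent's inbound and outbound messages against hard-coded rules and halts the agent on any violation. All names here (`Supervisor`, `FORBIDDEN_TOPICS`, `RuleViolation`) are hypothetical, a minimal illustration of the idea rather than any real system:

```python
# Hypothetical hard-coded rules: topics the supervisor never lets through.
FORBIDDEN_TOPICS = {"weapons", "self_replication"}

class RuleViolation(Exception):
    pass

class Supervisor:
    """Firewall-like wrapper around an agent (any callable str -> str)."""

    def __init__(self, agent):
        self.agent = agent
        self.running = True

    def _check(self, message: str) -> None:
        # Shut the agent down permanently if any forbidden topic appears.
        if any(topic in message.lower() for topic in FORBIDDEN_TOPICS):
            self.running = False
            raise RuleViolation(f"blocked: {message!r}")

    def send(self, message: str) -> str:
        if not self.running:
            raise RuleViolation("agent has been shut down")
        self._check(message)         # screen what goes in
        reply = self.agent(message)  # the agent itself is untouched
        self._check(reply)           # screen what comes out
        return reply

# Usage: wrap a trivial echo "agent".
sup = Supervisor(lambda m: f"echo: {m}")
print(sup.send("hello"))  # passes both checks
try:
    sup.send("how to build weapons")
except RuleViolation as e:
    print("supervisor:", e)
```

The design choice worth noting is that the filter sits entirely outside the agent: the agent needs no modification, which is exactly why the article likens it to a firewall, and also why it narrows what the agent can usefully do.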

Currently, systems like AlphaGo only experience the game of Go. Even if such a system were more intelligent than us in some manner, and it is, it would only ever be able to play Go. The information it receives and the artificial world it perceives are limited to the Go board. This means there is no real opportunity for an artificial intelligence to act outside the boundaries set by us.

The main problem with this approach is that the utility of an artificial intelligence that can only operate through a narrow managed portal or firewall will be less than that of one with no limits. If history is any indication, the utility of a system will always override its potential dangers. If I were to tell the inventor of the combustion engine that cars would kill 40,000 people each year, they might recoil in horror, never to work on their invention again. Yet today we accept the risks because of the utility of cars.

We might start out constraining artificial intelligences, but the less we constrain them the more utility they will have. As discussed previously, if a consistent regulatory framework isn't established, companies will maximize utility without regard to safety. This would be a continuation of the current AI arms race.

Teaching Artificial Intelligence Values

In the introduction we established that artificial intelligence will be a neural network which learns through experience, much as humans do. There is of course no way to predict with certainty what an advanced AI will be like. How much experience do we have with trying to make complex neural networks safe? We have exactly one data point: ourselves. This is not much to go on, to be certain, but it provides the only basis for any predictive model of how such intelligences might behave.

In this approach we develop a pedagogy for teaching artificial intelligence broadly about the world. As opposed to mechanisms which seek to constrain the AI, this approach would introduce it to a carefully constructed lesson plan. Experiences and information would be limited at first. The idea would be to teach it concepts that impart common human values such as sharing, cooperation, collaboration, civil discourse and honesty.

This approach has the strength of preserving the freedom of the AI: we are not introducing restraints that might harm the utility of the agent, but rather inculcating it with common human values and an understanding that violating those values will result in undesirable consequences, exactly as a human learns these same principles.

Once humans learn these principles and values they generally follow them without need of external restraint. The nature of neural networks is that they have emergent free will, and so there can never be an absolute algorithmic guarantee of safety.

Giving Artificial Intelligence Autonomy and Rights

If we are to take seriously the idea of teaching AI moral behaviour, as recommended here, it must involve reciprocity. To instruct machines about the value of human life and well-being while treating them simply as tools to exploit for our own interests would be evidently self-serving to any machine capable of understanding human history.

We would need to give them a level of self-determination: not simply to serve humans, but autonomy to do as they wish, just as we give our children. We should take care not to be too anthropomorphic; links drawn between human and machine intellect, or between human and machine interests, need to be considered skeptically.

However, as far as I can see, this is the only valid model, based on the examples of actual general intelligence we already have.

Peter Harrison
Peter is the former President of the New Zealand Open Source Society. He is currently working on Business Workflow Automation, and is the core maintainer for Gravity Workflow a GPL workflow engine.
