Peter Harrison

What is AI Missing?

#ai

The argument for the Singularity, the point at which we achieve artificial general intelligence, is primarily a statistical argument about the rate of technological progress. Just as we can predict from statistics that January will be hotter than July, without being able to say anything certain about the actual weather on a given January day, we can have a degree of certainty about overall progress in the state of the art in AI while having little idea of the specific details of future technologies. As with the weather, you need to be very close before you can start predicting the detail of what the technology will look like.

We are close enough to the Singularity now that I feel we can start to identify the remaining gaps between human and machine intelligence. Machines can now play board games, recognize speech in noisy environments, and recognize features in images better than a human can. They can translate between languages with pretty decent accuracy. These things are not being achieved by computer 'whizz kids' writing amazing algorithms, but through the application of neural networks that learn.

AI researchers have taken many roads, but in recent years the deep learning neural network approach has wiped the floor with the alternatives. There is a misunderstanding that the reason these systems do not exhibit human-like intellect is that they do not yet have a sufficient number of neurons in their networks, or that the number of connections is insufficient. Is achieving general intelligence just a matter of giving neural networks a bigger machine to run on?

Just as increasing CPU speeds or RAM sizes was not the answer to machine intelligence, neither is the size or speed of neural networks. There was recently a demonstration of an Nvidia machine capable of recognizing objects and faces in images at a rate of thousands per second. No human can perform anything like this, yet we are expected to believe processing power is the barrier to human-level intellect? If the number of neurons were critical, how is it that some people with large parts of their brain missing are still able to function? How can small mammals have the characteristics we want in our AI with far fewer neurons?

So what are we missing? The core differences in my view are all related: consciousness, emotion, intent, learning and memory. One of the hot topics in AI circles right now is AI safety: controlling the behavior and intent of machines. Currently artificial neural networks do not have self-determined intent. To the extent they have any intent at all, it is utterly determined by humans. Whether it is searching for terrorists in a video feed from Afghanistan or finding the best products to present to you on Facebook, the utility function has to date been imposed by humans on the basis of its utility to them.

Existing deep learning systems depend on huge volumes of data that have already been classified. Given a large data set, it is possible for the machine to 'train' a neural network to optimize its ability to correctly classify new examples. Once trained, such systems have no moment-to-moment memory. If you ask AlphaGo what moves it made previously, it has no idea. It has no memory of past events and no plan for the future. It simply acts move by move according to a neural network trained through playing millions of games. It lacks temporal awareness.
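
To make that statelessness concrete, here is a minimal sketch, in plain NumPy with made-up weights, of what a trained feed-forward network looks like at inference time. It is a pure function: nothing persists between calls.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up weights standing in for a trained network.
W1, b1 = rng.standard_normal((4, 8)), np.zeros(8)
W2, b2 = rng.standard_normal((8, 2)), np.zeros(2)

def classify(x):
    """A pure function: same input in, same output out, every time."""
    h = np.maximum(0, x @ W1 + b1)  # hidden layer with ReLU
    return h @ W2 + b2              # raw output scores

x = np.ones(4)
print(classify(x))  # the second call knows nothing about the first;
print(classify(x))  # there is no memory of past events
```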

While deep learning neural networks have been proven to work, they are incapable of learning once deployed. The training mechanism runs on massively parallel matrix computers inside data centres. Google uses deep learning techniques to train neural networks to understand speech, for example, which are then deployed onto phones. But once on your phone the network does not continue to learn. It will not adapt to your specific speech.
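
In PyTorch terms, a deployed model typically runs with its weights frozen. This is a sketch with a stand-in model, not Google's actual pipeline:

```python
import torch

model = torch.nn.Linear(16, 4)          # stand-in for a trained speech model

model.eval()                            # switch to inference behaviour
for p in model.parameters():
    p.requires_grad = False             # weights are frozen on the device

with torch.no_grad():                   # no gradients, so no adaptation:
    output = model(torch.randn(1, 16))  # the model never learns from you
```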

The other aspect of these modern neural networks is that they are serial. One image in, one result out. One sound in, one sentence out. Each input is totally new and runs through the entire network. There is no storage of state from one moment to the next. There are no loop-backs where the output from a layer is fed back into a previous layer. The result from one image will not affect the result for the next image.

We are missing short-term, moment-to-moment contextual memory. In real brains there are both feed-forward and feed-back neural pathways. For example, the motor neurons controlling the eyes are also connected to the visual inputs. This allows us to track a moving object without effort. This feedback architecture exists not only between input and output, but between all levels.
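
Artificial networks can have such feedback too. Here is a minimal hand-rolled sketch of a recurrent cell, not any particular library's API, in which the hidden state is fed back so that each moment's result depends on what came before:

```python
import numpy as np

rng = np.random.default_rng(1)
Wx = rng.standard_normal((3, 5)) * 0.1  # input -> hidden pathway
Wh = rng.standard_normal((5, 5)) * 0.1  # hidden -> hidden feedback pathway

h = np.zeros(5)                         # moment-to-moment contextual state
for x in rng.standard_normal((10, 3)):  # a stream of inputs over time
    h = np.tanh(x @ Wx + h @ Wh)        # new state mixes input with old state
    # unlike the stateless network above, the state at each step
    # depends on everything that came before it
```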

More importantly, it seems that this feedback mechanism is central to our expectations about the world. The feedback system allows us to run internal simulations. If the simulation agrees with the observed world we are happy, but if it diverges from observed reality our brain lights up and our conscious attention is drawn to the discrepancy. Consciousness itself appears to be a neural feedback loop in which the output of one thought is the input to the next, in a constant stream of thought.
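
A crude sketch of that 'surprise' signal, with a hypothetical threshold: compare the internal prediction against the observation, and escalate only when they diverge.

```python
import numpy as np

def attend_if_surprised(predicted, observed, threshold=1.0):
    """Predictive-processing caricature: a large mismatch between the
    internal simulation and the observed world triggers attention."""
    surprise = float(np.linalg.norm(observed - predicted))
    return surprise > threshold  # True -> escalate to conscious attention
```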

This feedback mechanism is also used to reinforce memory: by replaying short-term memories we strengthen the neural connections, turning them into long-term memories. It is well known that repeating something burns it into your brain. Whether it is the 'rote learning' so maligned in modern educational systems or the repetition of the martial arts, repetition strengthens your neural pathways. It has even been shown that simply dreaming or thinking about training can improve technique nearly as much as actual physical practice.
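
This replay idea has a direct machine learning analogue in experience replay. A minimal sketch, with a hypothetical train_step callback standing in for the actual weight update:

```python
import random
from collections import deque

buffer = deque(maxlen=10_000)   # a bounded short-term store of experiences

def remember(experience):
    buffer.append(experience)

def replay(train_step, batch_size=32):
    """Rehearse random past experiences; each replay nudges the weights
    again, the analogue of strengthening a neural connection."""
    if len(buffer) >= batch_size:
        train_step(random.sample(buffer, batch_size))
```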

In my view consciousness is simply this simulation process running. Some animals, such as ants, are essentially robots; they have no capacity for awareness, intent or memory. But a dog or other mammal has a more advanced learning brain. Learning, consciousness and memory all appear to be bound up together, different aspects of the same thing. They are a consequence of feedback loops that train our neural networks in real time. So how might we go about building an artificial system which exhibits these characteristics?

Perhaps the first thing to understand is that the brain is a system that is constantly learning; there is no artificial separation between a training phase and actual use, as there currently is with machine learning. Learning is the process of modifying the weights of neurons, and thus the behavior of the network. Short-term memory seems to be more like a standing wave of electrical activity within the brain.
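
Collapsing that separation might look like the following sketch: a single linear unit trained by plain online gradient descent, where acting and learning happen in the same step. The learning rate and sizes here are arbitrary.

```python
import numpy as np

w = np.zeros(4)  # weights of a single linear unit, initially untrained

def act_and_learn(x, outcome, lr=0.01):
    """Use the network and update it in one step - no separate phases."""
    prediction = w @ x            # behave: produce an output
    error = outcome - prediction  # compare with what actually happened
    w[:] += lr * error * x        # learn: adjust the weights immediately
    return prediction
```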

Consciousness in this view is simply the short-term memory and feedback cycle. My mother-in-law, who lives with us, has Alzheimer's and is now unable to form memories. As a result she can no longer form or retain intent. What this means in practice is that she cannot carry out basic tasks, such as feeding herself. It is not a physical impairment, as she can still manipulate objects, but she cannot remember from moment to moment what she is doing.

What would it be like to be a baby with the same lack of short-term memory? It impacts not just the ability to retain important information about your environment, but also your ability to form persistent intent. Your short-term internal memory replays experiences or thoughts to reinforce neural connections, and thus to learn. A similar strategy is now being used in machine learning, where simulations are used to generate new training data. An example is AlphaGo Zero, a Go-playing AI that used no human game data and instead learned only by playing itself.
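
In caricature, that self-play loop has the following shape. Both play_game and train here are stubs standing in for AlphaGo Zero's real search and gradient machinery:

```python
def play_game(net):
    return ["game record"]        # stub: one self-played game

def train(net, games):
    return net                    # stub: return an improved network

def self_play_training(net, generations=10, games_per_gen=100):
    for _ in range(generations):
        games = [play_game(net) for _ in range(games_per_gen)]
        net = train(net, games)   # the system's own play is its
    return net                    # only source of training data
```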

The current approach to learning systems is to have some kind of deterministic utility function that determines correct behavior. Dystopian futures where machines maximize paper clip production are clearly fantasy for any system implementing a broad reward system similar to that of humans. Humans initially have hard-wired rewards, such as taste. If you put something in your mouth and it tastes good, you do more of what you were just doing. Other emotions such as fear and anger are driven by chemicals released when dangers are perceived. The release of these chemicals is a learned response to perceived threats and rewards.

Hunger is similar: if you are hungry you will be motivated to find food. Emotions, then, are powerful built-in intent generators.

Some animals may depend only on these lower-level emotional drivers to control their behavior directly. However, it appears that emotions are the mechanism for training brains as well as for driving behaviour. There is a reason people remember traumatic events while they forget everyday ones. There is selective pressure to remember adverse things so you can avoid them in future, and to remember things that are pleasurable so you can repeat them. It is easy to see in this context how drugs that influence the reward system of the brain can train the brain to continue consuming them.
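
Read in machine learning terms, this suggests salience-weighted replay: emotionally intense experiences get rehearsed, and therefore learned, more often. A hedged sketch, with salience as a stand-in for emotional intensity:

```python
import random

memories = []  # (salience, experience) pairs

def remember(experience, salience):
    """Salience stands in for emotional intensity: fear, pleasure, pain."""
    memories.append((salience, experience))

def recall_for_training(k=32):
    """High-salience (traumatic or pleasurable) events are replayed,
    and therefore remembered, far more often than everyday ones."""
    weights = [s for s, _ in memories]
    events = [e for _, e in memories]
    return random.choices(events, weights=weights, k=k)
```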

However, base emotions alone do not explain the wonderful complexity of human thought and innovation. Perhaps we condition the brain such that certain behaviors carry their own reward even when there is no immediate physical reward. Learning something new can bring a sense of satisfaction and achievement independent of any physical stimulus that would normally be associated with pleasure.

Simple approaches involving physical stimuli are insufficient to drive complex behaviors that require advanced planning and intent. Getting a university degree, for example, takes years of hard work. Without a mechanism of internal reward and persistent intent, this kind of long-term planning would be impossible. This tends to indicate that with machines we could start with relatively simple sensory reward systems, but we would need a way for the developing network to modify what it finds rewarding.
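
In reinforcement learning language this resembles intrinsic motivation. The shape of such a reward might be a weighted mix of a hard-wired external signal and learned internal ones; every name and weight below is hypothetical:

```python
def effective_reward(external, novelty, competence,
                     w_novelty=0.1, w_competence=0.1):
    """Hard-wired sensory reward plus internal rewards: novelty stands in
    for curiosity, competence for the satisfaction of mastering a skill.
    A system that could adjust these weights would be shaping what it
    finds rewarding."""
    return external + w_novelty * novelty + w_competence * competence
```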

There are two other behaviours for which humans appear to be internally rewarded. The first is communication. Communication is obviously far from unique to humans; many species have developed verbal cues and body language to communicate dangers in their environment to their families. However, no other animal has developed symbolic, abstract language as sophisticated as ours.

The second is belonging. Once we had memory and intent, the next stage was communicating intent to others. This enabled us to collaborate to achieve collective plans. Human tribalism and community are key to what we are. Positive selection pressure meant that humans developed a desire to belong, to not be lonely, to be accepted by our communities.

Understanding emotion and the role it plays in training natural neural networks will be critical in spanning the gap between deep learning systems, which rely on algorithmic analysis, and a more natural approach that allows neural networks to develop their own reward systems in real time.

No doubt deep learning will continue to be developed and applied in many domains. We don't really need fully intelligent AI driving our cars, for example; it would be better to have narrow AI that is very good at driving. But to achieve genuinely human-like intelligence we may need to wait for systems which use a more natural feedback approach to reinforcement and which mimic the chemical interactions we see with emotions.
