AlphaGo: Observations about Machine Intelligence

Nested Software on May 06, 2018

 

I watched the documentary AlphaGo (named after the program), which was a great introduction and just a really beautiful story of human achievement against the new crop of machine talent. Pretty sure it's available on Netflix; I highly recommend it.

Thanks for the great article.

 

Yes - it's a great doc. I'm pretty sure Lee Sedol is somewhat of a household name in Korea, China, and Japan, even among people who don't play go at all themselves. It was really cool that this documentary brought some knowledge of this legendary player to a wider audience in the West too.

 

I really liked it. It's a great story about what humans are able to achieve and how humans react to those achievements.

 

I tend to think about ethics in terms of risk mitigation and ass coverage. Ethical reasons are great excuses to do nothing and avoid liability for potentially harmful effects, but usually that just delays the inevitable. Self-driving cars causing accidents is not an ethical problem but a legal and practical challenge. The math is brutally simple: as soon as AI cars cause fewer traffic deaths than distracted, drunk, or otherwise incompetent human drivers, it's ethically the right thing to use them. Tesla's marketing material suggests they clearly believe that has already happened. The legalities, liability, and moral responsibility when the inevitable deaths occur with self-driving cars are still worth debating. But that's not a reason to stop working on self-driving cars; rather the opposite.

Humans are funny when it comes to risk assessment. If everyone were to drive self-driving cars in their current state, there would probably be a massive reduction in traffic deaths, followed by a further rapid reduction as the few remaining accidents caused by bugs, glitches, and other issues got fixed. Most traffic deaths are caused by humans; fundamentally, self-driving cars are already quite safe. However, we're stuck with overly conservative bureaucrats holding the industry back through their insistence on ass coverage, and a legal climate that leads vendors to avoid ever being liable for anything because of the financial risk of class-action suits. So we're effectively killing people by exposing them to human drivers. Is that ethical, or just stupid?

 

AlphaGo Zero, described in a Nature paper in the Fall of 2017, learned how to play go entirely on its own without using any human games, just by playing against itself.

I feel like this all got completely overlooked by mainstream reporting. Bravo on this post.
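To make the "no human games, only self-play" idea concrete, here's a minimal sketch of the same learning pattern on a toy game: a Nim variant (take 1-3 stones; whoever takes the last stone wins), chosen only because its optimal strategy is easy to verify. Everything here, from the game to the tabular update, is my own illustration of self-play learning, not DeepMind's algorithm, which pairs a deep network with Monte Carlo tree search.

```python
import random

def train(episodes=30000, epsilon=0.2, alpha=0.1, seed=0):
    """Learn the Nim variant purely by self-play, starting from random moves."""
    rng = random.Random(seed)
    Q = {}  # Q[(stones, move)] -> estimated value of taking `move` stones
    for _ in range(episodes):
        stones, history = 10, []
        while stones > 0:
            moves = [m for m in (1, 2, 3) if m <= stones]
            if rng.random() < epsilon:          # explore occasionally
                move = rng.choice(moves)
            else:                               # otherwise play greedily
                move = max(moves, key=lambda m: Q.get((stones, m), 0.0))
            history.append((stones, move))
            stones -= move
        # Self-play: one set of values plays both sides, so the final
        # outcome is propagated backwards with alternating sign
        # (the player who made the last move won).
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + alpha * (reward - old)
            reward = -reward
    return Q

def best_move(Q, stones):
    """Greedy move after training."""
    moves = [m for m in (1, 2, 3) if m <= stones]
    return max(moves, key=lambda m: Q.get((stones, m), 0.0))
```

After training, the agent has rediscovered the known optimal strategy for this game (leave your opponent a multiple of 4 stones) without ever seeing an example game, which is the same flavor of result, in miniature, that AlphaGo Zero demonstrated at scale.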

 
 

Thanks for this very well written article! I am checking out the references at the end.

A minor point:

It started off with random moves and quickly became superhuman (with an ELO of about 4500) after only 3 days of training.

The number of days is probably not a good metric for judging the speed of training. It played around 5 million games against itself during those 3 days, so its playing experience exceeds even the most experienced human player's by more than an order of magnitude.
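Some rough back-of-the-envelope numbers bear this out (the human figures below are my own guesses, not from the paper):

```python
# AlphaGo Zero's reported ~5 million self-play games over 3 days,
# compared against a hypothetical human professional playing
# ~10 games a day for a 50-year career (assumed numbers).
zero_games = 5_000_000
zero_days = 3
human_games = 10 * 365 * 50          # ~182,500 games in a lifetime

games_per_day = zero_games / zero_days
ratio = zero_games / human_games

print(f"~{games_per_day:,.0f} self-play games per day of training")
print(f"~{ratio:.0f}x a dedicated human's lifetime experience")
```

Under those assumptions the program packs roughly 27 human careers' worth of games into each of its 3 training days, which is why wall-clock time alone understates what happened.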

 

That's a really good point. It's easy to overlook how much processing power is involved in training the network. I'm also really impressed by how DeepMind were able to break the problem down into tasks that could be massively distributed across processing units in parallel.
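One way to picture that parallelism: individual self-play games don't depend on each other, so game generation is embarrassingly parallel and can be farmed out to many workers at once. The sketch below uses Python's multiprocessing with a stand-in game function; the function names, game-record format, and numbers are my own illustration, not DeepMind's actual pipeline.

```python
import random
from multiprocessing import Pool

def play_one_game(seed):
    """Stand-in for a full self-play game: returns a fake game record.
    A real implementation would run the engine from the opening
    position to the end of the game and record every move."""
    rng = random.Random(seed)
    n_moves = rng.randint(50, 300)           # plausible go game length
    winner = rng.choice(("black", "white"))
    return {"moves": n_moves, "winner": winner}

def generate_batch(n_games, workers=4):
    """Run games in parallel; no shared state between workers."""
    with Pool(workers) as pool:
        return pool.map(play_one_game, range(n_games))

if __name__ == "__main__":
    batch = generate_batch(100)
    print(len(batch), "games generated in parallel")
```

Because each game is independent, throughput scales almost linearly with the number of workers until the training step that consumes the batch becomes the bottleneck.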

 

Thanks for the interesting article!

It's interesting how they created an unbeatable AI...

ps. Golang should have chosen a different name :D

 
 

Great post, super interesting topic. Comparing human intelligence and artificial intelligence is super cool; every time I do it, and every time I read about it, I find new differences between the two.

 