AlphaGo Beats One of the Top Go Players

I first met Demis Hassabis, founder of DeepMind (the creator of AlphaGo), at the Mind Sports Olympiad during the late 1990s. Even then there was an aura around Demis, a PentaMind World Champion many times over. The PentaMind is a pentathlon-style competition for players of multiple mind sports, where competitors score points (in a similar fashion to the decathlon) that count towards the overall title.

Demis, a chess master with an Elo rating of over 2300, is also a skilled shogi (Japanese chess), Diplomacy and poker player, having cashed six times in the World Series of Poker. I myself only played backgammon and poker (winning the No-Limit Hold'em title in 2000) at the Mind Sports Olympiad, and I always looked on in awe at people like Demis: people for whom flitting between rooms during the week-long competition, playing multiple games and resetting their mind for each set of rules seemed easy.

Whilst achieving success at mind sports, Demis was completing a PhD in cognitive neuroscience to go with his BSc in computing. He then went on to research artificial intelligence (AI), eventually setting up DeepMind, which was acquired by Google in 2014.

DeepMind specialises in deep learning, the branch of machine learning built on many-layered neural networks. You can see some of DeepMind's work in a video of a lecture given at my alma mater, entitled The Future of Artificial Intelligence, which shows how far AI has come. The video is impressive, especially the superhuman way in which an AI, using deep learning, learns how to play video games. Though I must admit that I knew of the Breakout "cheat" when I was a teenager.
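For readers curious about how such game-playing agents learn, here is a minimal sketch of the Q-learning update rule that DeepMind's Atari work builds on. The tiny "game" below is purely hypothetical, and the real system replaces the Q-table with a deep neural network fed raw pixels; this is just an illustration of learning by trial and error, not DeepMind's actual code.

```python
import random

# Tabular Q-learning on a trivial made-up "game": move right along a line of
# 5 cells to reach a reward at the far end. The deep-learning version
# approximates this Q-table with a neural network.

N_STATES, ACTIONS = 5, [0, 1]          # action 0 = left, 1 = right
alpha, gamma, epsilon = 0.1, 0.9, 0.1  # learning rate, discount, exploration
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state < N_STATES - 1:
        # epsilon-greedy action selection
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[state][a])
        next_state = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # the core Q-learning update
        Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
        state = next_state

print("Learned Q-values:", [[round(q, 2) for q in row] for row in Q])
```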

AlphaGo first came to the world's attention when it beat the European champion, Fan Hui, in a match played in October 2015. DeepMind was then confident enough to challenge one of the top players in the world, Lee Sedol. Recordings of the five-game match can be seen on YouTube with expert commentary and analysis.

Being an AI researcher myself, I am impressed by this feat, which wasn't expected for decades to come. In 1997, IBM's Deep Blue beat Garry Kasparov at chess. However, Go is regarded as a more complex game: the rules are very simple, but the 19x19 board produces vastly more possible positions than chess, which led people to believe it would be a much harder game for machines to master. And yet, only two decades after Deep Blue, we have a machine intelligence able to beat the best Go players.
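To give a sense of the gap, here is a rough back-of-the-envelope comparison. It uses the simple upper bound of three states per board point for Go (most of those configurations are not legal positions) and Shannon's classic order-of-magnitude estimate for chess.

```python
from math import log10

# Upper bound on Go board configurations: each of the 361 points on a
# 19x19 board can be empty, black or white, giving 3**361 configurations.
go_digits = 361 * log10(3)  # log10(3**361), roughly 172

print(f"Go: roughly 10^{go_digits:.0f} board configurations (upper bound)")
print("Chess: roughly 10^43 positions (Shannon's estimate)")
```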

Artificial intelligence has come a long way since the post-war thinking of Alan Turing, who envisioned a computer learning to play chess. Even so, it took nearly fifty years before one could beat a world champion. Since then, the pace of progress has accelerated.

I expect to see researchers achieving, or getting close to, the technological singularity during my lifetime: a time when machine intelligence and capability far surpass those of humanity. This won't be a case of creating a human intelligence. Why would anyone bother? There is a far more enjoyable way of creating a human intelligence, involving a man, a woman and a private moment. Instead, the technological singularity will catch and surpass human intelligence in the blink of an eye, thanks to the accelerating pace of technological advancement.

Many scientists are worried that the technological singularity will be a danger to humanity. I am not one of them. I believe that humanity will require a merger between biological humans and a future machine intelligence in order to progress and survive, because life on Earth won't be viable forever. Humans are very fragile, and the universe beyond our atmosphere is a very dangerous place. I don't see humans exploring the universe in biological form.

There are two ways that humans could explore the universe in comfort using the singularity. The first would be to use technology created by the singularity to terraform planets and build humans cell by cell. A suitable planet is chosen as a new home for humanity and a probe is sent to it. The probe carries an onboard laboratory for terraforming the planet so that it can sustain life, and also for building humans. Exploring the universe in this manner does not require humans to travel between the stars for hundreds or thousands of years.

The other way to explore the universe is by man-machine hybridisation. I am a little worried by biochemists exploring longevity. I don't like the idea of two-hundred-year-old people on our planet; the world is crowded enough as it is. Imagine two-hundred-year-old scientists who won't give way to younger scientists with new ideas, or a two-hundred-year-old politician who just won't go. Humanity will stagnate if people in power live for exceedingly long periods of time.

A better idea is for our minds to live on in human-like machines after our bodies are no longer serviceable. Most people want to live forever. I certainly do. If my mind could live on in a machine with no need for food, water, air or warmth, then I could live forever on a space station orbiting the Earth. I would not be a burden on biological humans, and I could explore the universe without the sort of worries a biological human would have.

I look forward to the technological singularity, and people like Demis Hassabis are the kind of people who are going to create it. For me, as a private researcher, AI is not used for such lofty goals. I have recently created a trading position optimiser using a genetic algorithm. I'll pop it into my next book.
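For the curious, here is a minimal sketch of the kind of genetic algorithm loop involved. It is an illustrative toy, not the optimiser from the book: the fitness function, the target-exposure constraint and the encoding of position sizes are all placeholder assumptions standing in for whatever profit and risk measures a real optimiser would use.

```python
import random

# Toy genetic algorithm for choosing position sizes across a handful of trades.
# Fitness here simply rewards a target total exposure; a real optimiser would
# score candidates against historical returns, drawdown, risk limits, etc.

POP_SIZE, GENES, GENERATIONS = 30, 5, 100
TARGET_EXPOSURE = 1.0  # hypothetical constraint: sizes should sum to ~100% of capital

def fitness(individual):
    # Higher is better: penalise deviation from the target total exposure.
    return -abs(sum(individual) - TARGET_EXPOSURE)

def crossover(a, b):
    point = random.randint(1, GENES - 1)
    return a[:point] + b[point:]

def mutate(individual, rate=0.1):
    return [g + random.gauss(0, 0.05) if random.random() < rate else g
            for g in individual]

population = [[random.random() for _ in range(GENES)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]  # truncation selection: keep the best half
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print("Best position sizes:", [round(g, 3) for g in best])
```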

Addenda

This was written after game three, when AlphaGo had taken a 3-0 lead in the five-game challenge. In the next game, Lee Sedol won after AlphaGo made mistakes, so it's not quite over yet for us biological humans.

And I have just noticed this article about someone with similar ideas to mine. His work will feature in a future episode of the BBC's Horizon science series.