Google DeepMind just learned to read the London Underground map through memory and basic reasoning

Learning to read the London Underground map is a rite of passage for any new Londoner, but DeepMind -- Google's deep learning AI with an interest in healthcare and board games -- has joined the ranks of those who know the fastest way to get from Acton Town to Wapping.

That may not sound hugely impressive, but the manner in which it has learned to read the Tube map is very interesting for the future of artificial intelligence, as it used basic reasoning and memory to conquer the commute. In other words, it was more human than your average Tube map app.

"I think this can be described as rational reasoning," Herbert Jaegar, a computer scientist from the University of Bremen, told The Guardian. "They [the tasks] involve planning and structuring information into chunks and re-combining them." Combining deep learning with an external memory means that DeepMind could take what it learned from the London Underground, and apply it to navigating other similar transport networks around the world.

This is different from what has gone before. As Alex Graves, a research scientist at DeepMind, told Wired: "You can't give normal neural networks a piece of information and let them keep it indefinitely in their internal state -- at some point it will be overwritten and they will essentially forget it." This neural network, however, can retain the memory indefinitely.
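The idea Graves describes can be sketched in a few lines: instead of squeezing everything into a hidden state that gets overwritten, the network writes vectors into a separate memory matrix and reads them back by content similarity. The class below is a loose, hand-simplified illustration of that read-write pattern, not DeepMind's actual differentiable neural computer (which learns where to read and write).

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity, with a small epsilon so zero rows score 0.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8)

class ExternalMemory:
    """Minimal sketch of content-addressable read-write memory
    (loosely inspired by the DNC; not DeepMind's implementation)."""
    def __init__(self, slots=8, width=4):
        self.M = np.zeros((slots, width))
        self.next_free = 0

    def write(self, vector):
        # Write to the next free slot; a real DNC learns where to write.
        self.M[self.next_free % len(self.M)] = np.asarray(vector, dtype=float)
        self.next_free += 1

    def read(self, key):
        # Content-based addressing: softmax over similarity to the key,
        # then a weighted sum of memory rows. The stored row persists
        # untouched until it is explicitly overwritten.
        scores = np.array([cosine(row, key) for row in self.M])
        weights = np.exp(scores) / np.exp(scores).sum()
        return weights @ self.M
```

Because the written row stays in `M` until explicitly overwritten, a later read with a matching key can still retrieve it, which is exactly the property plain recurrent networks lack.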

The same strategy was used on two other tasks, both of which, again, seem trivial to humans. DeepMind was given simple extracts of stories, such as "John is in the playground. John picked up the football." From there the AI would be asked where the football was, and provided the correct answer to these kinds of puzzles 96% of the time. Graves concedes that while these puzzles "look so trivial to a human that they don't seem like questions at all," it's the methodology that is interesting. Traditional computers, he says, "do really badly at this."
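To show what counts as a correct answer in this story task, here is a deliberately hand-coded tracker for the two sentence patterns the example uses. The parsing rules are my own illustrative assumption; DeepMind's system learns this behaviour from data rather than following rules like these.

```python
def track_objects(story):
    """Rule-based sketch of the story task described above: track where
    people are, and infer that an object a person picks up is wherever
    that person currently is."""
    person_location = {}
    object_location = {}
    for sentence in story:
        words = sentence.rstrip(".").split()
        if words[1] == "is":         # e.g. "John is in the playground"
            person_location[words[0]] = words[-1]
        elif words[1] == "picked":   # e.g. "John picked up the football"
            object_location[words[-1]] = person_location.get(words[0])
    return object_location

story = ["John is in the playground.", "John picked up the football."]
print(track_objects(story))  # {'football': 'playground'}
```

A system that answers "where is the football?" correctly has to chain the two facts together, which is the kind of relational step these puzzles are probing.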

In another puzzle, demonstrated in an accompanying video, DeepMind was able to establish familial relationships by reading a family tree.

"Taken together, our results demonstrate that [differentiable neural computers] have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory," the authors concluded in their paper. "Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data."

Alan Martin

After a false career start producing flash games, Alan Martin has been writing about phones, wearables and internet culture for over a decade with bylines all over the web and print.

Previously Deputy Editor of Alphr, he turned freelance in 2018 and his words can now be found all over the web, on the likes of Tom's Guide, The i, TechRadar, NME, Gizmodo, Coach, T3, The New Statesman and ShortList, as well as in the odd magazine and newspaper.

He's rarely seen not wearing at least one smartwatch, can talk your ear off about political biographies, and is a long-suffering fan of Derby County FC (which, on balance, he'd rather not talk about). He lives in London, right at the bottom of the Northern Line, long after you think it ends.

You can find Alan tweeting at @alan_p_martin, or email him at mralanpmartin@gmail.com.