DeepMind wants to enable neural networks to emulate algorithms to get the best of both worlds, and it’s using Google Maps as a testbed.
Classical algorithms are what have enabled software to eat the world, but the data they work with does not always reflect the real world. Deep learning is what powers some of the most iconic AI applications today, but deep learning models need retraining to be applied in domains they were not originally designed for.
DeepMind is trying to combine deep learning and algorithms, creating the one algorithm to rule them all: a deep learning model that can learn how to emulate any algorithm, generating an algorithm-equivalent model that can work with real-world data.
DeepMind has made headlines for some iconic feats in AI. After developing AlphaGo, the first program to beat a professional human Go player and, later, a world champion in a five-game match, and AlphaFold, a solution to a 50-year-old grand challenge in biology, DeepMind has set its sights on another grand challenge: bridging deep learning, an AI technique, with classical computer science.
The birth of neural algorithmic reasoning
Charles Blundell and Petar Veličković both hold senior research positions at DeepMind. They share a background in classical computer science and a passion for applied innovation. When Veličković met Blundell at DeepMind, a line of research known as Neural Algorithmic Reasoning (NAR) was born, named after the eponymous position paper recently published by the duo.
The key thesis is that algorithms possess fundamentally different qualities from deep learning methods — something Blundell and Veličković elaborated upon in their introduction of NAR. This suggests that if deep learning methods could better mimic algorithms, then the kind of generalization seen with algorithms would become possible with deep learning.
Like all well-grounded research, NAR has a pedigree that goes back to the roots of the fields it touches upon, and branches out to collaborations with other researchers. Unlike much pie-in-the-sky research, NAR has some early results and applications to show.
We recently sat down with Veličković and Blundell to discuss the first principles and foundations of NAR. We were joined by MILA researcher Andreea Deac, who expanded on specifics, applications, and future directions. Areas of interest include the processing of graph-shaped data and pathfinding.
Pathfinding: There’s an algorithm for that
Deac interned at DeepMind and became interested in graph representation learning through the lens of drug discovery. Graph representation learning is an area Veličković is a leading expert in, and he believes it’s a great tool for processing graph-shaped data.
“If you squint hard enough, any kind of data can be fit into a graph representation. Images can be seen as graphs of pixels connected by proximity. Text can be seen as a sequence of objects linked together. More generally, things that truly come from nature that aren’t engineered to fit within a frame or within a sequence like humans would do it, are actually quite naturally represented as graph structures,” said Veličković.
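Veličković's point about images can be made concrete: a grayscale image becomes a graph whose nodes are pixels and whose edges connect neighbouring pixels. The sketch below is purely illustrative (real graph-learning pipelines use dedicated libraries), with a hypothetical `image_to_graph` helper and a toy 2×2 image.

```python
# Illustrative sketch: a tiny grayscale "image" viewed as a graph of
# pixels connected by 4-neighbour proximity. The helper name and the
# toy data are assumptions for demonstration, not a real API.

def image_to_graph(image):
    """Return (nodes, edges): node features are pixel intensities,
    edges link horizontally and vertically adjacent pixels."""
    h, w = len(image), len(image[0])
    nodes = {(r, c): image[r][c] for r in range(h) for c in range(w)}
    edges = []
    for r in range(h):
        for c in range(w):
            if c + 1 < w:                 # edge to the right neighbour
                edges.append(((r, c), (r, c + 1)))
            if r + 1 < h:                 # edge to the bottom neighbour
                edges.append(((r, c), (r + 1, c)))
    return nodes, edges

image = [[0, 128], [255, 64]]             # 2x2 toy image
nodes, edges = image_to_graph(image)
print(len(nodes), len(edges))             # 4 nodes, 4 edges
```

The same recipe extends to text (nodes are tokens, edges link consecutive positions), which is why graph representations are such a general container for data.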
Another real-world problem that lends itself well to graphs — and a standard one for DeepMind, which, like Google, is part of Alphabet — is pathfinding. In 2020, Google Maps was the most downloaded map and navigation app in the U.S. and is used by millions of people every day. One of its killer features, pathfinding, is powered by none other than DeepMind.
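The classical side of pathfinding is worth recalling: on a weighted road graph, Dijkstra's algorithm finds the minimum-cost route. The snippet below is a minimal sketch on a hypothetical toy network with made-up travel times in minutes — not Google Maps data or DeepMind's system.

```python
import heapq

# Minimal Dijkstra sketch on a toy road network. Edge weights are
# hypothetical travel times in minutes; graph and values are invented
# for illustration.

def shortest_time(graph, start, goal):
    """Return the minimum travel time from start to goal."""
    dist = {start: 0}
    queue = [(0, start)]                      # (time so far, node)
    while queue:
        t, node = heapq.heappop(queue)
        if node == goal:
            return t
        if t > dist.get(node, float("inf")):
            continue                          # stale queue entry
        for neighbour, weight in graph.get(node, []):
            new_t = t + weight
            if new_t < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_t
                heapq.heappush(queue, (new_t, neighbour))
    return float("inf")                       # goal unreachable

roads = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
print(shortest_time(roads, "A", "D"))         # 8, via A -> C -> B -> D
```

An algorithm like this is provably correct on any input graph — exactly the kind of generalization NAR aims to transplant into neural networks, which in exchange can cope with noisy, real-world inputs.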
The popular app now showcases an approach that could revolutionize AI and software as the world knows them. Veličković noted that DeepMind has worked on a Google Maps application that applies graph networks to the real-world road network to predict travel times. This system now serves queries in Google Maps worldwide, and the details are laid out in a recent publication.