Neural networks can now “think” more like humans than ever before, scientists show in a new study.
The research, published Wednesday (Oct. 25) in the journal Nature, signals a shift in a decades-long debate in cognitive science — a field that explores what kind of computer would best represent the human mind. Since the 1980s, a subset of cognitive scientists has argued that neural networks, a type of artificial intelligence (AI), aren't viable models of the mind because their architecture fails to capture a key feature of how humans think.
But with training, neural networks can now gain this human-like ability.
“Our work here suggests that this critical aspect of human intelligence … can be acquired through practice using a model that’s been dismissed for lacking those abilities,” study co-author Brenden Lake, an assistant professor of psychology and data science at New York University, told Live Science.
Neural networks somewhat mimic the human brain's structure: their information-processing nodes are linked to one another, and their data processing flows in hierarchical layers. But historically, these AI systems haven't behaved like the human mind because they lacked the ability to combine known concepts in new ways — a capacity called "systematic compositionality."
For example, Lake explained, if a standard neural network learns the words “hop,” “twice” and “in a circle,” it needs to be shown many examples of how those words can be combined into meaningful phrases, such as “hop twice” and “hop in a circle.” But if the system is then fed a new word, such as “spin,” it would again need to see a bunch of examples to learn how to use it similarly.
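To make the idea concrete, here is a minimal sketch of systematic compositionality as a hand-written interpreter (this is illustrative only, not the study's code; the action symbols and modifier rules are invented for the example). Because the rules for "twice" and "in a circle" are defined once, a brand-new action word like "spin" composes with them immediately — the kind of one-shot recombination that standard neural networks have historically needed many examples to learn.

```python
# Illustrative compositional interpreter (not from the study).
ACTIONS = {"hop": "HOP", "spin": "SPIN"}  # primitive words -> action symbols

MODIFIERS = {
    "twice": lambda seq: seq * 2,                   # repeat the whole action
    "in a circle": lambda seq: seq + ["TURN"] * 4,  # hypothetical: add four turns
}

def interpret(phrase: str) -> list[str]:
    """Map a phrase like 'hop twice' to a sequence of action symbols."""
    for mod, rule in MODIFIERS.items():
        if phrase.endswith(" " + mod):
            head = phrase[: -len(mod) - 1]  # strip the modifier, keep the rest
            return rule(interpret(head))
    return [ACTIONS[phrase]]

print(interpret("hop twice"))         # ['HOP', 'HOP']
print(interpret("spin in a circle"))  # ['SPIN', 'TURN', 'TURN', 'TURN', 'TURN']
```

Adding "spin" required only one new dictionary entry; every existing modifier worked with it for free. That reuse of known rules on novel words is what "systematic compositionality" means here.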
In the new study, Lake and study co-author Marco Baroni of Pompeu Fabra University in Barcelona tested both AI models and human volunteers using a made-up language with words like "dax" and "wif." These words corresponded either with colored dots or with a function that somehow manipulated those dots' order in a sequence. Thus, a sequence of words determined the order in which the colored dots appeared.
So given a nonsensical phrase, the AI and humans had to figure out the underlying “grammar rules” that determined which dots went with the words.
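A toy version of that task can be sketched in a few lines (the word-to-dot mappings and function meanings below are invented for illustration; the study's actual grammar differed). Some words name colored dots, while others name functions that rearrange the dot sequence built so far:

```python
# Toy stand-in for the study's pseudolanguage task (illustrative mappings).
DOTS = {"dax": "RED", "wif": "GREEN", "lug": "BLUE"}  # word -> colored dot

FUNCS = {
    "fep": lambda dots: dots[::-1],    # hypothetical rule: reverse the sequence
    "blicket": lambda dots: dots * 2,  # hypothetical rule: repeat the sequence
}

def run(phrase: str) -> list[str]:
    """Interpret a word sequence into the resulting sequence of colored dots."""
    dots: list[str] = []
    for word in phrase.split():
        if word in DOTS:
            dots.append(DOTS[word])
        else:
            dots = FUNCS[word](dots)  # function words transform the whole sequence
    return dots

print(run("dax wif fep"))  # ['GREEN', 'RED'] -- 'fep' reversed the order
print(run("dax blicket"))  # ['RED', 'RED']
```

The participants, of course, never saw these rules; they had to infer them purely from example word-and-dot pairings — including whether a word like "fep" named a dot or a function.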
The human participants produced the correct dot sequences about 80% of the time. When they failed, they made consistent types of errors, such as thinking a word represented a single dot rather than a function that shuffled the whole dot sequence.
After testing seven AI models, Lake and Baroni landed on a method, called meta-learning for compositionality (MLC), that lets a neural network practice applying different sets of rules to the newly learned words, while also giving feedback on whether it applied the rules correctly.
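The episode-based training idea behind that approach can be sketched roughly as follows (a loose illustration only: MLC trains a real neural network, whereas the "learner" here is a trivial rule-reading stub, and all names are invented). The key point is that each practice episode uses a fresh random grammar, so the learner is rewarded for inducing rules from that episode's examples rather than memorizing fixed word meanings:

```python
import random

WORDS = ["dax", "wif", "lug"]
COLORS = ["RED", "GREEN", "BLUE"]

def new_episode(rng: random.Random):
    """Sample a fresh word->color grammar plus study examples and a query."""
    grammar = dict(zip(WORDS, rng.sample(COLORS, len(COLORS))))
    study = [(w, [grammar[w]]) for w in WORDS]              # worked examples
    query = ("dax wif", [grammar["dax"], grammar["wif"]])   # held-out phrase
    return study, query

def toy_learner(study, phrase: str) -> list[str]:
    """Stand-in for the network: read word meanings off the study examples."""
    lexicon = {word: output[0] for word, output in study}
    return [lexicon[word] for word in phrase.split()]

rng = random.Random(0)
correct = 0
for _ in range(100):  # 100 episodes, each with its own grammar and feedback
    study, (phrase, target) = new_episode(rng)
    correct += toy_learner(study, phrase) == target  # feedback signal
print(correct)  # the rule-reading stub solves every episode: 100
```

In MLC proper, that per-episode feedback is what drives the network's weights toward a general rule-applying strategy instead of a memorized vocabulary.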
The MLC-trained neural network matched or exceeded the humans’ performance on these tests. And when the researchers added data on the humans’ common mistakes, the AI model then made the same types of mistakes as people did.
The authors also pitted MLC against two neural network-based models from OpenAI, the company behind ChatGPT, and found both MLC and humans performed far better than OpenAI models on the dots test. MLC also aced additional tasks, which involved interpreting written instructions and the meanings of sentences.
“They got impressive success on that task, on computing the meaning of sentences,” said Paul Smolensky, a professor of cognitive science at Johns Hopkins and senior principal researcher at Microsoft Research, who was not involved in the new study. But the model was still limited in its ability to generalize. “It could work on the types of sentences it was trained on, but it couldn’t generalize to new types of sentences,” Smolensky told Live Science.
Nevertheless, “until this paper, we really haven’t succeeded in training a network to be fully compositional,” he said. “That’s where I think their paper moves things forward,” despite its current limitations.
Boosting MLC’s ability to show compositional generalization is an important next step, Smolensky added.
“That is the central property that makes us intelligent, so we need to nail that,” he said. “This work takes us in that direction but doesn’t nail it.” (Yet.)