A visual illustration of the hypothetical scenario and its formalization. Credit: Computational Brain & Behavior (2024). DOI: 10.1007/s42113-024-00217-5

Don't believe the hype: Artificial general intelligence is far from inevitable, researchers say

by Tech Xplore

Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes present a new proof that those claims are overblown and unlikely ever to come to fruition. Their findings are published in Computational Brain & Behavior today.

Creating artificial general intelligence (AGI) with human-level cognition is "impossible," explains Iris van Rooij, lead author of the paper and professor of Computational Cognitive Science, who heads the Cognitive Science and AI department at Radboud University.

"Some argue that AGI is possible in principle, that it's only a matter of time before we have computers that can think like humans think. But principle isn't enough to make it actually doable. Our paper explains why chasing this goal is a fool's errand, and a waste of humanity's resources," says van Rooij.

Infinite possibilities with finite power

In their paper, the researchers introduce a thought experiment in which an AGI is developed under ideal circumstances.

Olivia Guest, co-author and assistant professor in Computational Cognitive Science at Radboud University, says, "For the sake of the thought experiment, we assume that engineers would have access to everything they might conceivably need, from perfect datasets to the most efficient machine learning methods possible. But even if we give the AGI-engineer every advantage, every benefit of the doubt, there is no conceivable method of achieving what big tech companies promise."

That's because cognition, or the ability to observe, learn and gain new insight, is incredibly hard to replicate through AI on the scale that it occurs in the human brain.

"If you have a conversation with someone, you might recall something you said fifteen minutes before. Or a year before. Or that someone else explained to you half your life ago. Any such knowledge might be crucial to advancing the conversation you're having. People do that seamlessly," explains van Rooij.

"There will never be enough computing power to create AGI using machine learning that can do the same, because we'd run out of natural resources long before we'd even get close," Olivia Guest adds.
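The resource argument above can be made concrete with a back-of-the-envelope calculation. This is only an illustrative sketch, not the paper's formal proof: it shows how quickly the space of candidate behaviors a learner would have to distinguish between blows up, even for situations described by just a handful of binary features.

```python
# Illustrative sketch of combinatorial explosion in learning.
# With n binary features describing a situation, there are 2**n
# possible situations, and 2**(2**n) distinct binary behaviors
# (mappings from situations to a yes/no response).

ATOMS_IN_UNIVERSE = 10**80  # rough common estimate


def num_situations(n_features: int) -> int:
    """Number of distinct situations over n binary features."""
    return 2 ** n_features


def num_behaviors(n_features: int) -> int:
    """Number of distinct binary behaviors over those situations."""
    return 2 ** num_situations(n_features)


if __name__ == "__main__":
    for n in (3, 5, 9):
        s = num_situations(n)
        b = num_behaviors(n)
        print(f"{n} features -> {s} situations, about {b:.3e} candidate behaviors")
```

Already at 9 binary features, the number of candidate behaviors (about 1.3 × 10^154) dwarfs the estimated number of atoms in the observable universe, which is the flavor of scaling the researchers' intractability argument formalizes.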

Critical AI literacy is essential

The paper is a collaboration between researchers at Radboud University, Aarhus University, the University of Bristol, the University of Amsterdam, Memorial University of Newfoundland and the University of Bayreuth, bringing together the fields of cognitive science, neuroscience, philosophy and computer science. According to the researchers, the current hype surrounding AI risks widespread misunderstanding of what both humans and AI systems are capable of.

Few people realize that cognitive science is crucial for evaluating claims about AI capabilities. "We often overestimate what computers are capable of, while vastly underestimating what human cognition is capable of," says van Rooij.

"It's important that we help people develop critical AI literacy, so that they have the tools to judge how feasible the claims of big tech companies are. If a company popped up claiming to have a machine that creates world peace at the press of a button, you'd distrust it too.

"So why are we so quick to believe the promises of big tech companies, which are driven by profit? We want to help build a better understanding of AI systems, so that everyone can bring a critical eye to the promises of the tech industry."

More information: Iris van Rooij et al, Reclaiming AI as a Theoretical Tool for Cognitive Science, Computational Brain & Behavior (2024). DOI: 10.1007/s42113-024-00217-5

Provided by Radboud University Nijmegen