Much like the sweeping blue ocean covering our world, the human brain is vast, largely unexplored, and mostly a mystery, despite being essential to humanity. The biological origins of intelligence – and how it is created by the network architecture of the human brain – remain largely unknown, but Beckman researchers and Google Research are one step closer to finding answers with their recent collaboration.
A team of researchers is turning to advances in computer science and engineering to uncover the secrets. The interdisciplinary team is jointly led by Lav Varshney, associate professor of electrical and computer engineering, neuroscience, and computer science, and Aron Barbey, professor of psychology, neuroscience, and bioengineering, in partnership with Been Kim, a research scientist at Google Research.
“The biological origins of human intelligence remain one of the greatest mysteries of modern science. How is intelligence created within the human brain and how do individual differences in the functional organization of the brain shape the way we understand and reason about the world?” Barbey said. “Recent advances in network neuroscience and modern AI provide new opportunities to address these questions – motivating novel theories and powerful methods to more precisely characterize the network architecture of intelligence in the human brain.”
The quest to uncover the cognitive foundations of human intelligence began more than a century ago, led by the pioneering work of Charles Spearman. Spearman examined performance on tests of academic achievement – for example, within mathematics, French, English, and classics – and provided early evidence that all cognitive tests measure something in common. Spearman referred to this commonality as the general factor, g, which represents the component of individual differences variance that is common across all tests of mental ability.
Rather than originating from a specific mental ability or the capacity to solve a particular type of problem, general intelligence reflects the ability to solve a wide range of cognitive tasks and represents skills that are common across all areas of human performance. Since Spearman’s discovery of g more than a century ago, thousands of subsequent studies have observed the general factor of intelligence, which now represents one of the most well-established findings in all of psychology. However, the biological origins of general intelligence – how the general factor is created by the network architecture of the human brain – remain the focus of ongoing research and debate in cognitive neuroscience.
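Spearman's "positive manifold" can be illustrated with a small simulation. The sketch below is purely illustrative – the loadings and simulated examinees are made up, not real test data. It generates scores that share a common factor, then recovers that factor from the correlation matrix via power iteration, a simplified stand-in for the factor-analytic methods Spearman pioneered.

```python
import random

random.seed(0)

# Illustrative simulation (not real data): each test score is driven by a
# shared factor g plus test-specific noise -- the classic common-factor model.
N = 2000                          # simulated examinees
loadings = [0.8, 0.7, 0.6, 0.5]   # assumed strength of g on each of 4 tests

def simulate():
    data = []
    for _ in range(N):
        g = random.gauss(0, 1)
        data.append([l * g + (1 - l * l) ** 0.5 * random.gauss(0, 1)
                     for l in loadings])
    return data

def correlation_matrix(data):
    k, n = len(data[0]), len(data)
    means = [sum(row[j] for row in data) / n for j in range(k)]
    sds = [(sum((row[j] - means[j]) ** 2 for row in data) / n) ** 0.5
           for j in range(k)]
    return [[sum((row[i] - means[i]) * (row[j] - means[j]) for row in data)
             / (n * sds[i] * sds[j]) for j in range(k)] for i in range(k)]

def first_eigenvector(m, iters=200):
    """Power iteration: the dominant eigenvector of the correlation matrix
    plays the role of the loadings on the general factor."""
    v = [1.0] * len(m)
    for _ in range(iters):
        w = [sum(m[i][j] * v[j] for j in range(len(m))) for i in range(len(m))]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

corr = correlation_matrix(simulate())
v = first_eigenvector(corr)

# The "positive manifold": every pair of tests correlates positively,
# and every test loads positively on the recovered factor.
assert all(corr[i][j] > 0 for i in range(4) for j in range(4))
assert all(x > 0 for x in v)
```

The key observation is that a single latent variable suffices to induce positive correlations among all the tests, which is what Spearman saw in his achievement data.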
While traditional theories propose that general intelligence depends on a specific brain region or a primary brain network, Barbey’s research framework, the Network Neuroscience Theory, asserts that general intelligence reflects individual differences in the system-wide topology and dynamics of the human brain.
“Rather than originating from a fixed set of regions or a specific brain network, we believe that general intelligence arises from network mechanisms for efficient and flexible information processing,” Barbey said.
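One way to make "efficient information processing" concrete is global efficiency, a graph measure widely used in network neuroscience: the mean inverse shortest-path length over all pairs of nodes. The toy sketch below uses hypothetical four-node networks, not brain data, to show that a densely connected network scores higher than a sparse ring.

```python
from collections import deque

# Toy sketch of "global efficiency": mean inverse shortest-path length over
# all node pairs. The two four-node networks below are hypothetical examples,
# not brain connectivity data.

def bfs_distances(adj, src):
    """Hop counts from src in an unweighted graph (adjacency dict)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    n = len(adj)
    total = 0.0
    for u in adj:
        dist = bfs_distances(adj, u)
        total += sum(1.0 / d for v, d in dist.items() if v != u)
    return total / (n * (n - 1))

ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}              # sparse ring
full = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}  # fully connected

# Shorter paths between all pairs -> more "efficient" integration:
assert global_efficiency(full) > global_efficiency(ring)
```

Measures of this kind, computed per participant from imaging-derived connectivity, are the sort of system-wide topological property the Network Neuroscience Theory emphasizes over any single region.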
To investigate the role of brain network topology and dynamics in general intelligence, the team is using cloud computing resources from Google to apply state-of-the-art machine learning and AI analysis methods. Because Barbey’s team has acquired a wealth of high-resolution brain imaging and cognitive performance data from hundreds of participants, artificial intelligence can be used to discover patterns and complex relationships that could go unnoticed by the human eye.
“We’re working with one of the world’s experts in artificial intelligence, Been Kim, who focuses on interpretable machine learning,” Varshney said, explaining that some machine learning models are complicated and not intuitive, making it hard for humans to understand what the computers are doing. “Interpretable machine learning moves toward a scenario where humans can intuitively understand the explanations that the machine learning algorithm is coming up with.”
According to Kim, the field of interpretable machine learning has traditionally focused on explaining tasks that humans already know how to do, but automated by machines, whereas the goal of this project is to explain something that humans don’t yet know.
“There are plenty of examples of machines learning to do something that humans don’t yet know how to do, like playing Go,” Kim said. “Interpretability helps humans to have a conversation with these machines, so that humans learn how machines do what they do.”
Using this interpretable artificial intelligence approach, the computational model will explain the relationship between an individual's intelligence score and the network architecture of their brain, revealing how scores correlate with what’s physically happening inside the brain.
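One simple, widely used interpretability technique is permutation importance: shuffle one input feature and measure how much a model's predictive error grows. The sketch below is purely illustrative – the feature names, the synthetic "participants," and the stand-in linear model are assumptions for the example, not the project's actual data or methods.

```python
import random

random.seed(1)

# Hypothetical brain-network feature names, for illustration only.
FEATURES = ["efficiency", "modularity", "clustering"]

def make_data(n=500):
    """Synthetic 'participants': the score depends on the first two features."""
    X, y = [], []
    for _ in range(n):
        row = [random.gauss(0, 1) for _ in FEATURES]
        X.append(row)
        y.append(2.0 * row[0] + 1.0 * row[1] + random.gauss(0, 0.1))
    return X, y

def mse(model, X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, j):
    """Error increase when feature j is shuffled across participants."""
    base = mse(model, X, y)
    col = [r[j] for r in X]
    random.shuffle(col)
    X_perm = [r[:j] + [s] + r[j + 1:] for r, s in zip(X, col)]
    return mse(model, X_perm, y) - base

X, y = make_data()
model = lambda r: 2.0 * r[0] + 1.0 * r[1]   # stand-in for a trained black box

scores = {name: permutation_importance(model, X, y, j)
          for j, name in enumerate(FEATURES)}

# Features the model actually relies on show large importance;
# the irrelevant "clustering" feature shows none.
assert scores["efficiency"] > scores["modularity"] > scores["clustering"]
```

Because the technique only needs the model's predictions, it works on any black box; the human-readable output is a ranking of which brain-network properties drive the predicted score.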
“It’s not useful for us as scientists to discover knowledge we don’t understand,” Varshney said. “We want results we can write in sentences, something we can put in textbooks, and for that, humans need to understand.”
While Barbey imagines it will be decades before scientists have concrete answers about the origins of general intelligence in the human brain, the eventual impact could be profound.
Advancing our understanding of the network architecture of the brain not only promises new insights into the nature of human intelligence and associated concepts (for example, judgment and decision-making), but may also advance innovation in machine intelligence. Indeed, by studying human intelligence, modern research aims to gain new insights about how to improve machine intelligence – making artificial intelligence smarter, more resource-efficient, and, ideally, more ethical, by learning more about the human brain.
“One can definitely imagine that in the far future, if we know how to describe more intelligent humans, that might lead to design principles for designing more intelligent AI systems,” Varshney said. “But putting a timeline on this is tough. These are some of the biggest open questions in science. It’s hard to say exactly where it will lead.”
To complete the project, the greatest minds – and machinery – are coming together from psychology, neuroscience, engineering, and computer science.
“I've been passionate about applying and inventing some interpretability techniques to advance science, and I am particularly excited to work with the experts at Beckman on this project,” Kim said. “This will not only advance science on the nature of human intelligence, but it is also likely to advance interpretable machine learning by challenging it to do something much harder.”