University of Amsterdam: Rare electrical recordings from within the human brain give new picture of neural activity
The human brain is an incredibly complex organ that is inherently dynamic. Even viewing a simple, static image on a screen unleashes a vast network of neural activity in our brains. Because the techniques needed to examine such activity in detail are invasive, scientific studies frequently focus on non-human subjects. Now, however, an international research team, including Dr Iris Groen from the University of Amsterdam, has been able to use medical data collected from humans to examine neural activity related to visual perception in unprecedented detail. Their findings will be published on 5 October in the Journal of Neuroscience.
The detailed study of neural dynamics is usually restricted to the domain of animal studies, where invasive neural measurements can be made using electrical implants placed specifically for research purposes. In contrast, brain activity in humans is usually measured non-invasively, using fMRI, MEG or EEG scanners that do not ‘enter’ the brain in any way. In the new study, researchers at the UvA, along with colleagues from New York University and Utrecht University, used data from two academic medical centres to uncover temporal dynamics – rapid fluctuations in neural activity reflecting the processing of the visual image – across the human brain at an unprecedented level of detail and precision.
The human subjects in the study were epilepsy patients who were implanted with electrodes to measure brain activity associated with seizures. The patients took part in the research by watching pictures on a laptop computer positioned at their hospital bedsides, allowing the neuroscientists to make the rare new measurements.
‘We found that both human and animal brains seem to be using a similar “toolkit” of neural calculations to make sense of the continuous stream of inputs arriving from our senses. Understanding how and why these dynamics unfold as they do is an important part of understanding how the brain represents the outside world and will help us to learn how we can make machine vision more human-like.’

Lead researcher Groen, who works at the interface of computer vision and cognitive neuroscience
The new study shows that computational models developed to explain neural responses in non-human primates can also be applied to human brains. The models can predict how brain activity changes in response to a variety of changes in the visually presented image – for example, how much longer the neurons remain active when a stimulus stays on the screen for twice as long, or how much their activity decreases when an image is shown for a second time.
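The two effects described above can be illustrated with a toy simulation. The sketch below is not the model used in the study; it is a minimal, hypothetical variant of a common idea in this literature (a fast linear response divided by a slower, delayed normalization signal), with made-up parameters, just to show how a single mechanism can produce both sublinear duration scaling and a weaker response to a repeated stimulus:

```python
import numpy as np

def toy_neural_response(stimulus, tau=0.05, sigma=0.1, dt=0.001):
    """Toy delayed-normalization sketch (hypothetical parameters).

    A fast low-pass filter of the stimulus (the 'linear' drive) is divided
    by a slower low-pass copy of itself (the 'normalization pool'), so the
    response peaks at stimulus onset, adapts during sustained input, and
    stays suppressed for a while after the stimulus turns off.
    """
    linear = np.zeros_like(stimulus, dtype=float)
    pool = np.zeros_like(stimulus, dtype=float)
    for t in range(1, len(stimulus)):
        # fast filter tracking the stimulus
        linear[t] = linear[t - 1] + dt / tau * (stimulus[t] - linear[t - 1])
        # much slower filter feeding the normalization pool
        pool[t] = pool[t - 1] + dt / (10 * tau) * (linear[t] - pool[t - 1])
    return linear / (sigma + pool)

dt = 0.001
t = np.arange(0.0, 1.0, dt)

# 1) Doubling stimulus duration: the response lasts longer but its total
#    is compressed, i.e. it grows less than linearly with duration.
r_short = toy_neural_response(np.where(t < 0.1, 1.0, 0.0), dt=dt)
r_long = toy_neural_response(np.where(t < 0.2, 1.0, 0.0), dt=dt)
print(r_long.sum() / r_short.sum())  # between 1 and 2: subadditive

# 2) Repetition: the second of two identical flashes evokes a weaker peak,
#    because the normalization pool has not yet recovered.
pair = np.where((t < 0.1) | ((t >= 0.3) & (t < 0.4)), 1.0, 0.0)
r_pair = toy_neural_response(pair, dt=dt)
first = r_pair[t < 0.3].max()
second = r_pair[t >= 0.3].max()
print(second < first)  # True
```

The function name, time constants, and semi-saturation constant `sigma` are all illustrative choices, not values from the paper; the point is only that one adaptive computation reproduces both qualitative effects at once.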
Perception of sensory stimuli
The fact that a single computational model can predict all these different phenomena suggests that the apparent complexity in neural dynamics in both human and non-human primate brains could result from just a handful of neural computations, which are repeatedly used throughout the cortical networks involved in perceiving sensory stimuli.
Groen: ‘Now that we have identified these simple computations, we can try to incorporate them into computer vision algorithms to see if that helps them get better at solving problems that humans do effortlessly but that computers still struggle with, such as recognizing and predicting events in videos.’