Research Finds Vision Allows Brain To Make Predictions
The moment a pitcher unleashes a fastball toward Toronto Blue Jays shortstop Bo Bichette, the crowd at Rogers Centre hopes something special is about to happen. But while the fans can’t predict what comes next, Bichette might.
A new Western study shows how a player like Bichette might do just that, by processing in real time the information reaching his eyes (more specifically, his retinae) from a ball moving at 150 kilometres (90+ miles) per hour, giving the all-star slugger a fighting chance to anticipate the outcome.
Blending math and artificial intelligence, mathematics professor Lyle Muller and his collaborators at Western and the Salk Institute for Biological Studies in La Jolla, California, have developed a neural network model that can be rapidly and efficiently trained to predict what comes next in a sequence of movie frames.
This model provides new insight into how a particular pattern of neural activity, called “travelling waves,” could play a role in embedding visual information onto the circuits of the brain in a highly structured way.
The findings were published in the high-impact journal Nature Communications.
“Each cortical region in the visual system contains a map of visual space. In this new paper, we reasoned that waves travelling over these maps may enable short-term predictions into the future,” said Muller, a Western Institute for Neuroscience faculty member. “When we developed this network with travelling waves, we found it can help the system to forecast what comes next in upcoming movie frames.”
By learning how the brain extrapolates information from individual actions to build a reserve of “mental movies” for forecasting the future, engineers developing the latest AI technologies – for everything from chatbots to smart cars – may now have a blueprint for teaching machines the predictive expertise inherent in human vision.
New insight into visual perception
Consider the baseball example: a ball takes around 400 milliseconds to travel the 18 metres (60 feet) from the pitcher’s hand to Bichette at home plate. Bichette’s brain needs time to make the neural computations that enable him to perceive the ball and estimate its trajectory. This includes both the time required for sensory information to travel from the retinae to relevant areas of the brain and the time required to compute the ball’s trajectory in space from that information.
It is estimated the entire computation can be accomplished in 150 milliseconds. During this time, the ball will have already travelled more than six metres (22 feet), or a third of the distance to home plate, so Bichette’s brain must be using other visual cues to predict where the ball is heading.
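As a rough sanity check on those numbers, here is a back-of-envelope calculation (ours, not the study’s) of the total flight time and of how far the ball moves during that 150-millisecond computation:

```python
# Back-of-envelope arithmetic from the article's numbers (illustrative only)
speed_ms = 150.0 * 1000 / 3600       # 150 km/h is roughly 41.7 m/s
distance_m = 18.0                    # pitcher's hand to home plate

flight_time_ms = distance_m / speed_ms * 1000   # about 430 ms of total flight
stale_m = speed_ms * 0.150                      # about 6.3 m covered in 150 ms

print(f"Flight time: {flight_time_ms:.0f} ms")
print(f"Ball travels {stale_m:.1f} m while the brain computes")
```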
Because the information available to Bichette’s visual system is already 150 milliseconds old, the brain may actively predict a few hundred milliseconds into the future, using dynamic patterns of neural activity, such as travelling waves, to forecast how the movie of a visual experience will change.
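One way to picture this compensation, purely as a hypothetical sketch (the function and its numbers are ours, not the model in the paper), is to extrapolate a stale position estimate forward along the ball’s estimated velocity:

```python
def predict_now(stale_position_m, velocity_ms, latency_s=0.150):
    """Estimate the ball's current position from a 150 ms old observation."""
    return stale_position_m + velocity_ms * latency_s

# If the visual system last registered the ball 6.0 m from the mound,
# the compensated estimate places it roughly 12.3 m along its path by now:
print(predict_now(6.0, 41.7))
```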
These results provide new insight into how the visual system processes information arriving from the eyes. Models of the visual system, building on foundational work by David Hubel and Torsten Wiesel, winners of the 1981 Nobel Prize in Physiology or Medicine, generally focus on the transformations of visual information between different areas in the visual system. These transformations, created by the “feed-forward” connections that carry information from the eyes through the optic nerve and into the brain, can explain how the brain turns static images into outlines of edges and how it recognizes objects. But neuroscientists are beginning to see that this is not the only way vision may work.
“The status quo is a very static view of vision and does not consider things like latency or the movement we experience in our normal visual experience,” said Gabriel Benigno, a PhD student in mathematics and first author on the study. “We know the brain processes visual input using connections that stretch far across the map of visual space. These ‘recurrent’ connections can connect processing far across different portions of an image, and using these connections, we basically found a way the brain may ‘animate’ predictions in the visual system, going from a static image to a movie.”
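To make the idea of recurrent connections “animating” an input concrete, here is a deliberately minimal toy, our illustration rather than the network from the paper: a ring of units with one-directional recurrent coupling, in which a static bump of activity propagates like a travelling wave and effectively extrapolates the stimulus forward.

```python
import numpy as np

n = 100                            # units arranged around a ring of visual space
W = np.roll(np.eye(n), 1, axis=0)  # asymmetric recurrent weights: each unit
                                   # drives its clockwise neighbour

x = np.zeros(n)
x[10:15] = 1.0                     # a static "flash" presented to the ring

frames = [x.copy()]
for _ in range(5):
    x = W @ x                      # recurrent dynamics shift the bump one
    frames.append(x.copy())        # unit per step: a travelling wave

# The bump now sits at positions 15-19: the network has "animated" the
# static input forward, a cartoon of wave-based short-term prediction.
print(np.nonzero(frames[-1])[0])
```

The model in the study is far richer than this cartoon, but the intuition is the same: recurrent connections let activity outrun the input, turning a static image into a moving prediction.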
New model for artificial brains
Previous work from Muller and his collaborators found that neural activity can move across individual regions of the brain as a travelling wave, similar to how waves move across the ocean. In 2014, Muller discovered that visual stimuli could create travelling waves of neural activity, in a study also published in Nature Communications. Muller and his colleagues followed this finding with a landmark 2020 Nature study, which showed spontaneous neural activity is also organized into travelling waves that can modulate perception. Why these waves exist in the visual system, however, has remained unclear: what computation could they be doing?
This new research project, largely cultivated at the Western Academy for Advanced Research, begins to answer this question. The Academy team connects purely theoretical work, such as the neural network studied here, with the work of Roberto Budzinski, a Western Institute for Neuroscience clinical postdoctoral fellow and co-author on this study, who is applying these mathematical techniques in collaboration with neurologists and neurosurgeons at London Health Sciences Centre (LHSC). In this way, Muller and his collaborators aim to develop new mathematical techniques that help explain information processing in neural networks, both artificial and biological.