Brains Speed Up Perception by Guessing What’s Next
Introduction
Imagine picking up a glass of what you think is apple juice, only to take a sip and discover that it’s actually ginger ale. Even though you usually love the soda, this time it tastes terrible. That’s because context and internal states, including expectation, influence how all animals perceive and process sensory information, explained Alfredo Fontanini, a neurobiologist at Stony Brook University in New York. In this case, anticipating the wrong stimulus leads to a surprise, and a negative response.
But this influence isn’t limited to the quality of the perception. Among other effects, priming sensory systems to expect an input, good or bad, can also accelerate how quickly the animal then detects, identifies and reacts to it.
Years ago, Fontanini and his team found direct neural evidence of this speedup effect in the gustatory cortex, the part of the brain responsible for taste perception. Since then, they have been trying to pin down the structure of the cortical circuitry that made their results possible. Now they have. Last month, they published their findings in Nature Neuroscience: a model of a network with a specific kind of architecture that not only provides new insights into how expectation works, but also bears on broader questions about how scientists should think about perception more generally. Moreover, it falls in step with a theory of decision making that suggests the brain really does leap to conclusions, rather than building up to them.
Faster Senses and Active States
Taste, the least studied of the senses, was the perfect place to start. After a taste hits the tongue, a few hundred milliseconds pass before activity in the gustatory cortex starts reflecting the input. “In brain terms, that’s like forever,” said Don Katz, a neuroscientist at Brandeis University in Massachusetts (in whose lab Fontanini did his postdoctoral work). “In the visual cortex, it takes a fraction of that time,” making it much more difficult to discern the expectation effect that these researchers wanted to study.
In 2012, Fontanini and his colleagues performed an experiment in which rats heard a sound (an “anticipatory cue”) and then received a tiny burst of flavor through a tube in their mouth. The taste itself could be sweet, salty, sour or bitter, and the anticipatory cue contained no information about which of the four it might be.
Even so, the researchers found that such general expectations could drive the neurons in the gustatory cortex to recognize the stimulus nearly twice as fast as when the rats received the taste without hearing the sound first: the latency dropped from roughly 200 milliseconds to only about 120 milliseconds.
Fontanini wanted to know what kind of neural network could theoretically enable this more rapid coding. And so he brought someone from outside the taste field into the fold: fellow Stony Brook neurobiologist Giancarlo La Camera, who had previously worked on modeling the spontaneous brain activity that occurs even in the absence of a stimulus.
Research over the past few decades has increasingly shown that much of the activity in sensory networks is intrinsically generated, rather than driven by external stimuli. Compare the activity in the visual cortex of an animal in complete darkness with that of an animal looking around, and it’s difficult to tell the two apart. Even in the absence of light, sets of neurons in the cortex begin to fire together, either at the same time or in predictable waves. This correlated firing persists as a so-called metastable state for anywhere from a few hundred milliseconds to a few seconds, and then the firing pattern shifts to another configuration. The metastability, or tendency to hop between transient states, continues after a stimulus is introduced, but some states tend to arise more often for a particular stimulus and are therefore thought of as “coding states.”
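To get a feel for what that hopping looks like, here is a deliberately simple sketch, in Python, of a system that sits in one of a handful of activity states and then jumps to another at random. The number of states, the dwell times and the uniform hopping rule are illustrative placeholders, not the dynamics of any real cortical circuit or of the model described below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "metastable" dynamics: the network occupies one of a few discrete
# activity states, dwells there for a few hundred milliseconds, then hops
# to another state. All numbers here are illustrative, not measured values.
n_states = 5
mean_dwell_ms = 300.0          # typical dwell time in each state

def simulate_hopping(total_ms=3000.0):
    """Return a list of (state, dwell_ms) pairs for one spontaneous run."""
    t, state, trajectory = 0.0, rng.integers(n_states), []
    while t < total_ms:
        dwell = rng.exponential(mean_dwell_ms)      # random dwell time
        trajectory.append((int(state), round(dwell)))
        # hop to a different state, chosen uniformly here for simplicity
        state = rng.choice([s for s in range(n_states) if s != state])
        t += dwell
    return trajectory

print(simulate_hopping())   # e.g. [(3, 214), (0, 522), (4, 97), ...]
```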
La Camera and others (including Katz) had previously modeled metastability by building what’s called a clustered network. In it, groups of excitatory neurons had strong interconnections, but inhibitory neurons were also randomly connected to the excitatory ones, which added a broad damping effect to the system. “This clustered architecture is fundamental for producing metastability,” Fontanini said.
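What “clustered” means here can be sketched as a connectivity matrix. The sizes and synaptic weights below are placeholder values, not the parameters the researchers actually used: excitatory neurons are grouped into strongly interconnected clusters, and inhibitory neurons project sparsely and randomly onto the excitatory population, supplying the broad damping Fontanini describes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sizes and relative synaptic strengths only.
n_clusters, cluster_size, n_inhib = 4, 20, 20
n_exc = n_clusters * cluster_size

w_within, w_between, w_inhib = 1.0, 0.2, -0.8   # within-cluster > between-cluster
p_connect = 0.2                                 # sparse random connectivity

# Excitatory-to-excitatory weights: stronger inside each cluster.
W_ee = np.where(rng.random((n_exc, n_exc)) < p_connect, w_between, 0.0)
for c in range(n_clusters):
    idx = slice(c * cluster_size, (c + 1) * cluster_size)
    W_ee[idx, idx] = np.where(rng.random((cluster_size, cluster_size)) < p_connect,
                              w_within, 0.0)

# Inhibitory neurons connect randomly onto excitatory ones (broad damping).
W_ie = np.where(rng.random((n_exc, n_inhib)) < p_connect, w_inhib, 0.0)

print(W_ee.shape, W_ie.shape)   # (80, 80) (80, 20)
```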
Fontanini, La Camera and their postdoctoral fellow Luca Mazzucato (now at the University of Oregon) found that the same network structure was fundamental for recreating the effects of expectation, too. In a metastable model with a clustered architecture, the researchers simulated a general anticipatory cue followed by the arrival of a particular taste stimulus. When they did this, they successfully reproduced the pattern of accelerated coding that Fontanini had observed in rats in 2012: The transitions from one metastable state to the next got faster, which also made it possible for the system to reach coding states faster. The results demonstrated that simply by building a network to show these metastable patterns of activity, “you can also capture a lot of the neurological responses … when you simulate a gustatory input,” Fontanini said.
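The logic of the speedup can be illustrated by reusing the toy hopping picture from above and treating the anticipatory cue as nothing more than a shortening of the dwell time in each state. This is a schematic stand-in for the paper’s spiking-network simulation, and every number in it is made up; the point is only that faster transitions between states translate directly into a shorter wait before a “coding state” is reached.

```python
import numpy as np

rng = np.random.default_rng(2)

n_states, coding_state = 5, 4        # pretend state 4 "codes" the taste
baseline_dwell_ms = 150.0            # dwell time without the cue (illustrative)
cue_speedup = 0.6                    # cue shortens dwell times (illustrative)

def time_to_coding_state(dwell_ms):
    """Hop between states until the coding state is reached; return the latency."""
    t, state = 0.0, 0
    while state != coding_state:
        t += rng.exponential(dwell_ms)
        state = rng.choice([s for s in range(n_states) if s != state])
    return t

trials = 2000
no_cue = np.mean([time_to_coding_state(baseline_dwell_ms) for _ in range(trials)])
with_cue = np.mean([time_to_coding_state(baseline_dwell_ms * cue_speedup)
                    for _ in range(trials)])
print(f"mean latency without cue: {no_cue:.0f} ms, with cue: {with_cue:.0f} ms")
```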
When the researchers tried modeling the anticipatory cue and stimulus in a network without clusters, they couldn’t generate the 2012 results. And so “only certain types of networks allow this [effect] to happen,” Katz said.
A Less Strenuous Hike
The finding was notable, first, for providing insights into what kind of architecture to search for in the actual gustatory cortex — and perhaps in other sensory cortices as well. Currently, neuroscientists are debating how taste gets processed: Some argue that certain neurons might encode “sweet” and others “salty,” creating very specific neural signatures for specific tastes. Others tie it to broader patterns of activity; most neurons respond to most tastes, and a given neural signature is only loosely correlated with one taste rather than another. The work done by Fontanini and his colleagues supports the latter theory while providing predictions about what that connectivity should look like. The clusters alone “capture many, many features of the gustatory cortex,” Fontanini said: “the spontaneous activity, the patterns of response to taste, the expectation effect.” He hopes to continue digging into how those clusters form, and what other kinds of neural activity they affect.
The work also paints a picture of the neural substrate underlying expectation in the brain. It’s not just that an anticipatory cue excites particular neurons, or induces a particular set of states, which then encode the stimulus. Instead, it’s more significant that expectation seemed to modify the dynamics — namely, the switching speed — of the entire system.
Fontanini and La Camera liken these dynamics to a ball moving through a landscape filled with troughs. Those pockets or valleys represent response states, and anticipation tips the landscape so that the ball falls into the first trough faster. It also smooths out the hilly path the ball needs to traverse between troughs, making it easier to pass from one state to the next without getting stuck.
That is, expectation makes the network a little less sticky. It allows for an easier hike toward the states that encode an actual taste, but it does not confer so much stability that the system gets stuck in a single state. That’s a problem that often plagues these kinds of clustered networks: With such clustering, some “trough” states end up being too deep, and the system amplifies the wrong information. But these findings show that “you don’t need an elaborate system” in place to resolve that, said Georg Keller, a neuroscientist who studies visual processing at the Friedrich Miescher Institute for Biomedical Research in Switzerland.
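One way to make the landscape picture concrete is with a generic double-well toy, which is not taken from the paper: consider a potential U(x) = x^4/4 - x^2/2 - c·x, where the two wells stand in for activity states and the constant c stands in for the anticipatory cue. With c = 0, the wells sit at x = ±1 and are separated by a barrier of height 1/4. A small positive c tips the landscape toward the right-hand well and lowers the barrier out of the left-hand well by roughly c, so a noise-driven ball escapes sooner; at the same time, the barrier does not vanish, which is what keeps the system from collapsing into a single state.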
Fontanini and La Camera hope this kind of mechanism might also explain the effects of other context-setting processes beyond expectation, like attention and learning. But perhaps the “most important implication [of our work] is that it shifts the focus from the static firing responses of neurons coding for things, to dynamical behaviors of neurons,” La Camera said.
While a dynamical systems approach to neuroscience is hardly new, it’s been difficult to test and model. The way experts think about basic sensory perception tends toward the hierarchical: The cortex builds up and integrates features to form perceptions, sending signals to other layers of the network that integrate still more information until the brain ultimately arrives at a decision or behavior.
Not so in this new work. Instead, the team’s results support a different kind of processing in which “all of this happens at the same time, and … before the stimulus even arrives,” said Leslie Kay, a neuroscientist at the University of Chicago who focuses on olfaction. “You learn stuff within a cortical area,” forming a system of connected clusters to reflect that learning, “and then you influence it [with expectation], and what it knows emerges.”
A Sudden Tumble
The model implies that decision making isn’t a gradual process driven by the buildup of information at all, but rather a sort of “aha” moment, a jump in neural fluctuations. In fact, Katz has used the same kind of modeling as Fontanini and La Camera to support the idea that arriving at a decision (say, to swallow or spit out a piece of food) “happens in a sudden tumble,” he said.
The connection between these “very different corners of the taste field” — Fontanini’s work on sensory perception and his own research on later processing — leaves Katz feeling “super excited.”
It also highlights the need to move away from focusing on single neurons that respond to particular cues, and toward making internal states and dynamics more explicit in our understanding of sensory networks — even for the most basic sensory stimuli. “It’s much easier to say that a neuron increases its firing rate,” said Anan Moran, a neurobiologist at Tel Aviv University in Israel. But to understand how organisms work, “you cannot account only for the stimulus, but also for the internal state,” he added. “And this means that our previous [understanding of] the mechanism used by the brain to achieve perception and action and so on needs to be reevaluated.”
“The stuff going on in the gustatory cortex before the stimulus arrives is a large part of how that stimulus gets processed when it gets there,” Katz said. And in this case, examining how those internal states get modified by an experience or cue revealed something about the overall network connectivity.
Now, Moran said, this kind of context dependence needs to find its way into other studies of perception and cognition. “The last frontier is the visual system… This [kind of work] might tell us something interesting about how visual information is processed.”
“We still don’t have any good, single model that really encapsulates all this activity,” he added. But this is “a good starting point.”