Your brain is an energy-saving ‘predicting machine’


How our brain – a three-pound mass of tissue encased within a bony skull – creates perceptions from sensations is an ancient mystery. Abundant evidence and decades of ongoing research suggest that the brain cannot simply be assembling sensory information, as if it were putting together a jigsaw puzzle, to make sense of its surroundings. This is borne out by the fact that the brain can construct a scene from the light entering our eyes even when the incoming information is noisy and ambiguous.

As a result, many neuroscientists have come to view the brain as a “prediction machine”. Through predictive processing, the brain uses its prior knowledge of the world to make inferences, or generate hypotheses, about the causes of incoming sensory information. These hypotheses – not the sensory input itself – give rise to perceptions in our mind’s eye. The more ambiguous the input, the greater the reliance on prior knowledge.
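One way to make this trade-off concrete is the standard Bayesian picture often used to illustrate predictive processing: a percept as a precision-weighted blend of a prior expectation and noisy sensory evidence. The sketch below is my own illustration of that intuition, not a model taken from the research described here; all numbers are made up.

```python
def posterior_mean(prior_mean, prior_var, obs, obs_var):
    """Combine a Gaussian prior with a Gaussian observation.

    Each source is weighted by its precision (1 / variance),
    so the noisier the observation, the more the prior dominates.
    """
    w_prior = 1.0 / prior_var
    w_obs = 1.0 / obs_var
    return (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)

prior_mean, prior_var = 0.0, 1.0   # what the brain expects
obs = 10.0                         # what the senses report

# Crisp input: the percept tracks the senses.
clear = posterior_mean(prior_mean, prior_var, obs, obs_var=0.1)
# Vague input: the percept is pulled toward the prior.
vague = posterior_mean(prior_mean, prior_var, obs, obs_var=10.0)

print(f"clear input -> percept {clear:.2f}")
print(f"vague input -> percept {vague:.2f}")
```

With a sharp observation the estimate lands near 9.1 (close to the sensory report of 10); with a noisy one it collapses to about 0.9, close to the prior – the same qualitative pattern as “the more ambiguous the input, the greater the reliance on prior knowledge.”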

“The beauty of the predictive processing framework [is] that it has a really great capacity – critics might sometimes say too great – to explain a lot of different phenomena in many different systems,” said Floris de Lange, a neuroscientist in the Predictive Brain Lab at Radboud University in the Netherlands.

However, the growing neuroscientific evidence for this idea has mostly been circumstantial and open to alternative interpretations. “If you look at cognitive neuroscience and neuroimaging in humans, [there’s] a lot of evidence – but indirect, implicit evidence,” said Tim Kietzmann of Radboud University, whose research lies at the interdisciplinary intersection of machine learning and neuroscience.

So researchers have turned to computational models to understand and test the idea of a predictive brain. Computational neuroscientists have built artificial neural networks, with designs inspired by the behavior of biological neurons, that learn to make predictions about incoming information. These models display some uncanny abilities that seem to mimic those of real brains. Some experiments with these models even hint that brains had to evolve as prediction machines to satisfy energy constraints.
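The core loop of such networks – predict the next input, compare it with what actually arrives, and learn only from the mismatch – can be sketched in a few lines. This is a toy illustration of that loop, not any of the actual models from the research described: a single linear unit learns to predict the next sample of a regular signal, and the prediction error shrinks as its internal model improves.

```python
import math

# A stream of predictable "sensory" input: a plain sinusoid.
signal = [math.sin(0.3 * t) for t in range(3000)]

w = [0.0, 0.0]   # weights over the two most recent samples
lr = 0.05        # learning rate
errors = []

for t in range(2, len(signal)):
    x1, x2 = signal[t - 2], signal[t - 1]
    prediction = w[0] * x1 + w[1] * x2   # top-down guess at the next input
    error = signal[t] - prediction       # bottom-up prediction error
    w[0] += lr * error * x1              # learn only from the mismatch
    w[1] += lr * error * x2
    errors.append(abs(error))

early = sum(errors[:200]) / 200
late = sum(errors[-200:]) / 200
print(f"mean |error| early: {early:.4f}  late: {late:.4f}")
```

Once the unit has internalized the signal’s regularity, almost nothing remains to be transmitted or learned – a hint at why predicting inputs, rather than re-encoding them from scratch, could be the energy-cheap strategy.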

And as computational models proliferate, neuroscientists who study living animals are becoming more convinced that brains learn to infer the causes of sensory input. While the finer details of how the brain does this remain unclear, the broad brushstrokes are coming into focus.

Unconscious Inferences in Perception

Predictive processing may at first seem like a counterintuitively complex mechanism of perception, but there is a long history of scientists turning to it because other explanations seemed wanting. Even a thousand years ago, the Arab Muslim astronomer and mathematician Hasan Ibn al-Haytham highlighted a form of it in his Book of Optics to explain various aspects of vision. The idea gained traction in the 1860s, when the German physicist and physician Hermann von Helmholtz argued that the brain infers the external causes of its incoming sensory input rather than constructing its perceptions “from the bottom up” out of those inputs.

Helmholtz invoked this concept of “unconscious inference” to explain bistable or multistable perception, in which an image can be perceived in more than one way. This happens, for example, with the well-known ambiguous image that can be seen as either a duck or a rabbit: our perception keeps flipping between the two animals. In such cases, Helmholtz argued, the percept must be the result of an unconscious process of top-down inference about the causes of the sensory data, because the image formed on the retina does not change.

During the twentieth century, cognitive psychologists continued to build the case that perception is an active construction process drawing on both bottom-up sensory and top-down conceptual input. This effort culminated in an influential 1980 paper, “Perceptions as Hypotheses,” by the late Richard Langton Gregory, who argued that perceptual illusions are essentially the brain’s wrong guesses about the causes of sensory impressions. Meanwhile, computer vision scientists faltered in their efforts to use bottom-up reconstruction to enable computers to see without an internal “generative” model to reference.
