High-Level Feedback in Vision: Rules of Prediction, Suppression, and Filling-In

24.04.2026, Friday, 11:30-13:00

  

General focus of the symposium:  

Perception is increasingly viewed as an inferential process in which the brain actively predicts its sensory inputs and updates its beliefs based on prediction errors. Yet, even within this predictive processing framework, key questions remain unresolved: What kind of information is fed back into the visual system? At what level of abstraction are predictions formulated? And how do these signals shape representations when sensory evidence is weak, ambiguous, or entirely absent?  

This symposium focuses on high-level feedback in vision, with an emphasis on the rules and the content of predictive signals. On the one hand, we ask how predictive feedback operates in a normally sighted visual system: whether prediction errors in early areas are dominated by high-level, abstract expectations rather than low-level features, and under what conditions expectations selectively sharpen relevant information or suppress competing inputs in cluttered scenes. These questions speak directly to the computational role of feedback in the visual hierarchy and to ongoing debates about sharpening versus dampening accounts of expectation.  

On the other hand, we examine the content of predictions when input is missing – either transiently, as in occluded scenes, or chronically, as in congenital blindness. Here, the key issue is whether the brain fills in only coarse contextual information or whether it actively represents unseen objects and semantic properties with fine-grained, feature-specific codes. Studying blindness also reveals how high-level, top-down signals can repurpose “visual” cortex for abstract, non-visual computations, providing a strong test of functional plasticity within a predictive brain. 

By bringing together work on prediction errors, selective suppression, occlusion, and blindness, the symposium aims to converge on a feature-specific, task-sensitive view of predictive processing in vision – one that clarifies what the brain sends back, when it does so, and how this feedback sculpts both moment-to-moment perception and long-term cortical organization. 

Chair: Jakub Szewczyk (Kraków)

11:30 David Richter 

AffMind, Brain and Behavior Research Center (CIMCYC), University of Granada, Granada, Spain; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands  

"What Does the Brain Predict? A Case for High-Level Prediction in the Visual System"

Predictive processing accounts suggest that perception fundamentally relies on prediction and prediction error computation.

Yet it remains unclear what features, and at which level of abstraction, the brain predicts and hence computes prediction errors for.

Combining neuroimaging (EEG, fMRI) with deep neural network-based computational modelling, we show that neural responses across the visual hierarchy, including in early visual cortex, scale with high-level, but not low-level, visual surprise.

Moreover, high-level surprise influences perceptual processing rapidly, emerging around 190 ms after stimulus onset, suggesting that high-level predictions are readily integrated during perceptual inference.

These results converge with related studies in macaques and mice, supporting a feature-specific view of predictive processing in which the visual system predominantly leverages high-level predictions to guide perceptual inference rather than relying on fine-grained, low-level information. Together, these findings help constrain neurocomputational models of perceptual inference and suggest that the brain’s predictive machinery is tuned to higher-order structure in sensory input.

11:50 Jakub Szewczyk

Institute of Psychology, Jagiellonian University, Kraków, Poland  

"When Expectations Silence the Background: Selective Sharpening in Object Vision"

Top-down expectations are thought to shape object representations via sharpening or dampening. Sharpening proposes that expectations boost diagnostic features of an expected object while suppressing irrelevant features, such as background objects. Dampening predicts that expected objects generate smaller prediction errors and thus reduced neural activation. Prior tests of these accounts have typically used single-object displays. Here, I introduce a novel test that adjudicates between them by focusing on background objects in cluttered scenes. Participants viewed overlapping target (foreground) and non-target (background) objects while I manipulated target expectancy. Item-level encoding models quantified how strongly each background object was represented across stages of the ventral visual stream. Expectations produced sharpening-like inhibition of background features, but only when the background interfered with the task. When the background was task-irrelevant, its features remained uninhibited across all stages. These results suggest that top-down expectations selectively sharpen object representations under competition, rather than uniformly suppressing non-target features.

12:10 Mandy Viktoria Bartsch

Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands  

"Filling in the Blanks: Representation of Occluded Scene Parts in Early Visual Cortex"

Does the brain automatically fill in missing parts of a scene? Prior fMRI work shows that even when part of a scene is occluded, early visual cortex still carries information about overall scene identity.

Yet it remains unclear whether this activity reflects the specific missing content or only contextual cues from the visible parts.

To test this, we created feature-controllable scenes composed of four colored shapes. After learning these layouts behaviorally, participants performed an item-swap detection task during fMRI, with one object occluded on half of the trials.

Replicating earlier findings, we decoded scene identity from early visual cortex responses to the occluded regions alone. Critically, feature-specific models trained on color and shape also recovered information about the occluded item’s features.

This shows that the brain represents missing scene elements with detailed sensory predictions, not just at the level of global scene context.

12:30 Łukasz Bola

Institute of Psychology, Jagiellonian University, Kraków, Poland; Center for Brain Research, Jagiellonian University, Kraków, Poland  

"Semantic representations in the visual cortex of blind and sighted humans"

What is the function of the visual cortex when deprived of visual input? Research on this question shows that in blind individuals the “visual” areas respond to linguistic stimuli, such as words and sentences. This may indicate that, in the absence of vision, the visual cortex becomes recruited for high-level computations that are atypical for this region. Alternatively, the lack of visual input may uncover typical (i.e., present also in sighted individuals) representations of linguistic stimuli in this area.
In this talk, I will describe the results of three studies aimed at disentangling these competing hypotheses.
Sighted and blind participants were presented with concrete, abstract, and pseudo words while undergoing fMRI. We used multi-voxel pattern analysis to probe representations elicited in the visual areas in both groups during word processing.
The results suggest that, during word processing, the blind visual cortex represents a specific semantic dimension: the knowledge about physical properties of word referents. In line with the “uncovering hypothesis”, we found representations of the same dimension in the sighted visual cortex. Notably, however, in sighted individuals these representations seemed more salient in high-level visual areas, whereas in blind individuals they were robust in both low-level and high-level regions.
These findings suggest that at least some of the responses to linguistic stimuli in the blind visual cortex can be driven by mechanisms that are also present in the sighted adult brain. In sighted individuals, the physical properties of word referents might be back-projected to the visual system to predict incoming visual information, initiate visual imagery, and support visuospatial thinking. In blind individuals, this mechanism might be preserved and, combined with the increased sensitivity of the visual cortex in this population, drive language-related responses in this region.
