Radboud University, Nijmegen, The Netherlands
Marius Peelen is a principal investigator at the Donders Institute for Brain, Cognition and Behaviour, The Netherlands, where he also leads one of the institute's four research themes (Perception, Action and Decision Making). He received his PhD from the University of Wales in 2006. Subsequently, he completed postdoctoral work in Switzerland (University of Geneva) and the USA (Princeton University, Harvard University). Before moving to The Netherlands in 2017, Dr. Peelen held an associate professorship at the Center for Mind/Brain Sciences (CIMeC) at the University of Trento, Italy. Dr. Peelen's research uses behavioral and neuroimaging (fMRI, EEG, MEG, TMS) measures to understand how the brain makes sense of our natural daily-life environments: how, from the vast amounts of information received by the sensory systems, it so rapidly creates sparse, conceptual-level representations of the objects that are currently relevant to the observer. To address this question, his group investigates the nature of object and scene representations in visual cortex, the role of experience in shaping these representations, and top-down (attention, memory, expectation) and crossmodal influences on visual processing.
Our understanding of visual perception and attention is based mainly on studies using stimuli defined solely by physical features (e.g. shape and colour) and presented either in isolation or in arrays of mutually independent items. By contrast, the real-world scenes we perceive every day are far more complex and cluttered. Yet, in most cases, we are able to recognize such scenes, and the objects in them, rapidly and without effort. What distinguishes scenes from the simple stimuli mentioned above, and can be considered their defining feature, is that they are well structured and highly organised, with objects embedded in the global context of a scene and standing in relation to other objects. These relations may be spatial (we know where to expect certain objects), temporal (in dynamic scenes, we often know when certain objects might appear), or semantic (we know what objects to expect in certain contexts). There is growing evidence that these relations, combined with our experience and knowledge, effectively facilitate the perception and attentional exploration of real-world scenes, and that their effects reach beyond those of physical features.
The proposed symposium will present recent research exploring the perceptual, attentional and neuronal mechanisms involved in the perception of real-world scenes, with a particular focus on the role of scene structure and organisation. The complex and multifaceted nature of the research problem will be reflected in the broad range of methodological approaches presented, which will include psychophysics, fMRI, EEG, eye-tracking, and computational modelling, as well as in the range of theoretical perspectives. Specifically, first, we will discuss how representations of objects are created, how they are integrated into a coherent scene representation, and how they affect one another to facilitate (or, in some cases, hamper) perception. Second, we will present insights into the perceptual integration of scene elements obtained through computational modelling of behavioural data. Third, we will consider the extent to which the semantic relations present in scenes can automatically guide spatial attention in a fashion similar to physical features. Fourth, we will explore how expectations derived from prior knowledge affect eye movements to scenes. Finally, we will focus on the link between how scenes are explored with the eyes and described in natural language.
The symposium will thus present state-of-the-art research on the perception of real-world scenes and discuss challenges and opportunities related to investigating mechanisms involved in the perception of naturalistic stimuli.
Expectations derived from scene context influence perception. For example, objects presented in their typical context (e.g., a car on a road) are recognized more easily than objects presented in an atypical context. Recent behavioural studies have shown that context-based expectations influence not only semantic judgements but also how sharply we perceive objects. Furthermore, there is now also evidence for the reverse influence, with objects affecting scene perception. Here, I present results from fMRI and MEG studies investigating the neural basis of such bidirectional interactions between object and scene processing. The results provide evidence for scene-based sharpening of object representations in visual cortex from around 280 ms after stimulus onset, reflecting feedback signals that follow the initial parallel processing of scenes and objects. This expectation-based modulation was observed even when the stimuli were task-irrelevant and attention was directed away from the scenes in both time and space. Interestingly, the reverse influence, with objects sharpening scene representations, was found at the same latency, in line with a common predictive processing mechanism for bidirectional object-scene interactions. These results indicate that objects and scenes, while initially processed in parallel pathways, engage in mutual, facilitatory interactions. These interactions shape the feedback signals propagated within each pathway, modulating activity at hierarchically lower levels of the visual system and thereby reducing overall uncertainty and sharpening visual perception.