Maria Ida Gobbini
Department of Medical and Surgical Sciences, University of Bologna, Bologna, Italy
For decades the face perception system has been investigated with well-controlled stimuli, typically still images of strangers’ faces, which limits our ability to characterize the system in all its complexity. My talk will focus on two major points: the use of naturalistic stimuli to investigate the neural system for face perception, and the use of familiar faces to better delineate the individual components of this system. I will present fMRI data collected during movie viewing that were used to estimate multiple category-selective topographies, including the face-selective topography, while preserving the idiosyncrasies of each individual’s functional brain architecture. I will also highlight how, using naturalistic stimuli, we have shown that the human face perception system cannot yet be fully modelled by state-of-the-art DCNNs. Recognition of familiar faces is remarkably effortless and robust. Automatic activation of knowledge about familiar individuals and the associated emotional responses play crucial roles in familiar face recognition. I will present data showing how familiarity affects the earliest stages of face processing, facilitating rapid, even preconscious detection of these highly socially salient stimuli, and data supporting the hypothesis that the representation of personally familiar faces develops hierarchically through the engagement of multiple levels of the distributed neural system, from early visual processes to higher levels of social cognition and emotion.
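To make the DCNN comparison above concrete, here is a minimal, purely illustrative sketch (not the speaker’s pipeline) of how a DCNN layer’s representational geometry can be compared with fMRI response patterns using representational similarity analysis; the array sizes, the region of interest, and the random data are all hypothetical.

```python
# Illustrative sketch: compare a DCNN layer's representational geometry with
# fMRI response patterns via representational similarity analysis (RSA).
# All data and dimensions below are hypothetical.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 40                                       # e.g. clips sampled from a movie
brain_patterns = rng.normal(size=(n_stimuli, 500))   # voxels in a face-selective ROI (assumed)
dcnn_features = rng.normal(size=(n_stimuli, 2048))   # activations from one DCNN layer (assumed)

# Representational dissimilarity matrices (condensed form): 1 - Pearson r
brain_rdm = pdist(brain_patterns, metric="correlation")
dcnn_rdm = pdist(dcnn_features, metric="correlation")

# Second-order (Spearman) correlation: how well the DCNN layer accounts for
# the geometry of the neural face representation.
rho, p = spearmanr(brain_rdm, dcnn_rdm)
print(f"RSA correlation (DCNN layer vs. face-selective ROI): rho={rho:.2f}, p={p:.3f}")
```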
Ilona Kotlewska1, Bartłomiej Panek1, Anna Nowicka2, Dariusz Asanowicz1
1Institute of Psychology, Jagiellonian University, Krakow, Poland
2Laboratory of Language Neurobiology, Nencki Institute of Experimental Biology, Warsaw, Poland.
Self-related visual information, especially one’s own face and name, is processed in a specific, prioritized way. However, the spatio-temporal brain dynamics of this self-prioritization have remained elusive. In an EEG study, 25 married women (who changed their surnames after marriage, so that their past and present surnames could be used as stimuli) performed a detection task with faces and names from five categories: self, self from the past, friend, famous person, and unknown person. The aim was to determine the temporal and spatial characteristics of early electrophysiological markers of self-referential processing. Local theta power over occipito-temporal (visual) areas and inter-regional theta phase coherence between visual and midfrontal areas showed that self-relevance differentiation of faces began as early as about 100–300 ms after stimulus onset. Posterior theta activity thus revealed an early signal of self-face recognition.
Funding: NCN Poland 2022/45/B/HS6/01107
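As a minimal sketch of the two measures named in the abstract above (occipito-temporal theta power and visual-to-midfrontal theta phase coherence), the example below uses synthetic single-trial data and a Hilbert-based phase-locking value; it is illustrative only, not the study’s analysis pipeline, and the channel names and filter settings are assumptions.

```python
# Minimal sketch: theta-band power at a visual channel and a trial-wise
# phase-locking value (PLV) between a visual and a midfrontal channel.
# Data, channels, and parameters are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                    # sampling rate (Hz), assumed
n_trials, n_times = 100, fs                 # 1-s epochs
rng = np.random.default_rng(1)
occipital = rng.normal(size=(n_trials, n_times))   # e.g. PO8 (assumed)
midfrontal = rng.normal(size=(n_trials, n_times))  # e.g. FCz (assumed)

def theta_analytic(x, low=4.0, high=7.0):
    """Band-pass in the theta range and return the analytic signal."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    return hilbert(filtfilt(b, a, x, axis=-1), axis=-1)

occ = theta_analytic(occipital)
mid = theta_analytic(midfrontal)

# Trial-averaged theta power at the visual site, per time point
theta_power = np.mean(np.abs(occ) ** 2, axis=0)

# Inter-site phase-locking value across trials, per time point
phase_diff = np.angle(occ) - np.angle(mid)
plv = np.abs(np.mean(np.exp(1j * phase_diff), axis=0))

print(theta_power.shape, plv.shape)         # (500,), (500,)
```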
Szmytke, M.1,2, Ilyka, D.3, Duda-Goławska, J.4, Laudańska, Z.4, Malinowska-Korczak, A.4, and Tomalski, P.4
1Institute of Psychology, Faculty of Philosophy and Social Sciences, Nicolaus Copernicus University in Torun, Poland
2Faculty of Psychology, University of Warsaw, Warsaw, Poland
3Department of Psychology, University of Cambridge, Cambridge, UK
4Neurocognitive Development Lab (Babylab PAN), Institute of Psychology, Polish Academy of Sciences
Abstract: Humans pay special attention to faces and speech from birth, but the interplay of the developmental processes leading to their specialization is poorly understood. We investigated the effects of face orientation on audiovisual (AV) speech perception in two age groups of infants (younger: 5- to 6.5-month-olds; older: 9- to 10.5-month-olds) and in adults. We recorded ERPs in response to videos of upright and inverted faces producing a /ba/ articulation dubbed with auditory syllables that either matched (/ba/) or mismatched (/ga/) the mouth movement. In younger infants and adults, but not in older infants, we observed an increased amplitude of the audiovisual mismatch response (AVMMR) to the incongruent visual /ba/–auditory /ga/ syllable in comparison to the other stimuli. An AV mismatch response to the inverted visual /ba/–auditory /ga/ stimulus relative to congruent stimuli was also detected, but only in the younger group of infants and in adults. We show that face configuration affects the neural response to AV mismatch differently across the age groups. This may imply featural face processing in younger infants and adults when they process inverted faces articulating incongruent speech. The lack of differential responses to upright and inverted incongruent stimuli in older infants suggests a likely functional cortical reorganization of AV speech processing.
Funding: Polish National Science Centre, Grant/Award Number: 2016/23/B/HS6/03860; Institute of Psychology, Polish Academy of Sciences.
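For a concrete picture of the contrast described in the abstract above, the sketch below computes a difference wave between incongruent and congruent trials at a single channel and tests its mean amplitude in a time window; the data, the channel, and the window are hypothetical and do not reproduce the study’s analysis.

```python
# Minimal sketch: an audiovisual mismatch difference wave (incongruent minus
# congruent trials) and a mean-amplitude test in an illustrative time window.
# All data below are synthetic placeholders.
import numpy as np
from scipy.stats import ttest_ind

fs = 250                                         # sampling rate (Hz), assumed
times = np.arange(-0.2, 0.8, 1 / fs)             # epoch from -200 to 800 ms
rng = np.random.default_rng(2)
incongruent = rng.normal(size=(60, times.size))  # trials x time, e.g. channel Fz (assumed)
congruent = rng.normal(size=(60, times.size))

# Condition-average ERPs and their difference wave (the putative mismatch response)
avmmr = incongruent.mean(axis=0) - congruent.mean(axis=0)

# Mean amplitude in a time window (290-430 ms here, purely illustrative)
win = (times >= 0.29) & (times <= 0.43)
t, p = ttest_ind(incongruent[:, win].mean(axis=1),
                 congruent[:, win].mean(axis=1))
print(f"Mismatch window amplitude difference: t={t:.2f}, p={p:.3f}")
```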
Maria Nalberczak-Skóra, Aleksandra Bartnik, Aleksandra Olechowska, Łukasz Gawęda
Experimental Psychopathology Lab, Institute of Psychology, Polish Academy of Sciences
Abstract: Audio-visual integration plays a major role in our understanding of the world around us: we tend to hear better when we can see the source of a sound, and to see better when we can hear it. Sometimes this integration produces illusions such as the sound-induced flash illusion (SIFI) or the McGurk effect. Audio-visual integration is theorized to be a top-down process involved in the generation of hallucinations, a positive symptom of numerous psychiatric disorders including schizophrenia. Research from our laboratory shows that audio-visual integration is strongly associated with false perception of spoken words in both healthy and clinical populations. Here, I will present results from a study investigating the neural correlates of the false perception of a spoken word and how it is associated with congruent facial articulation of that word.
Funding: The project is financed by the National Science Centre, Poland (SONATINA-5 grant 2021/40/C/HS6/00226).
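As a purely illustrative aside, false perceptions of spoken words of the kind described in the abstract above are often quantified with signal detection measures; the sketch below derives sensitivity (d') and criterion from made-up hit and false-alarm counts and is not the study’s analysis.

```python
# Illustrative sketch: signal detection measures for false perception of words.
# Counts below are invented for demonstration only.
import numpy as np
from scipy.stats import norm

hits, misses = 42, 8        # word present and reported / word present but missed
fas, crs = 12, 38           # word absent yet reported (false perception) / correct rejection

# Log-linear correction avoids infinite z-scores for perfect rates
hit_rate = (hits + 0.5) / (hits + misses + 1)
fa_rate = (fas + 0.5) / (fas + crs + 1)

d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
print(f"d' = {d_prime:.2f}, criterion = {criterion:.2f}")
```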