25.04.2026, Saturday, 12:00-13:30
Chair: Kalinka Timmer
Faculty of Psychology, University of Warsaw, Warsaw, Poland
12:00 Hanna Cwynar
Faculty of Psychology, University of Warsaw, Warsaw, Poland
"Effects of cue transparency on language control in pure- and mixed-language contexts"
Bilinguals typically incur costs when switching between languages compared to repeating a language, a phenomenon attributed to bilingual language control (BLC). To date, most research has used arbitrary cues to signal the response language, and studies examining the effects of different cues on BLC have focused on mixed-language contexts.
We investigated how different cues affect BLC across pure- and mixed-language contexts to understand the impact of cue transparency and processing on language switching tasks.
Thirty-one unbalanced Polish (L1)–English (L2) bilinguals named pictures in L1 and L2 as signalled by arbitrary auditory tone cues (low vs. high) or transparent question cues (“Co?” vs. “What?”) in pure- and mixed-language blocks, while reaction times (RTs) and electrophysiological (EEG) activity were recorded.
In mixed-language blocks, behavioral switch costs were 37 ms smaller with question cues than with tone cues, a pattern supported by cue-locked N2 modulations showing greater switch costs for tones than for questions. In addition to greater cue-N2 amplitudes, cue-N1 amplitudes were also greater for tones than for questions, suggesting that tones may require more selective attention (N1) and cue updating (N2). We also observed language-specific temporal differences in the cue-N2 for questions (L1 earlier than L2) but not for tones. In pure-language blocks, however, cue-N1 and cue-N2 patterns by cue type and language only partially matched those in mixed blocks, while RTs were unaffected by cue type. This suggests that some neural effects in mixed blocks may reflect cue processing rather than BLC, with behavioral cue-transparency facilitation emerging only when cues are necessary for task performance.
Results suggest that BLC is shaped by cue processing and transparency. We propose that cues resembling input encountered in talk-in-interaction, which directly signal the language and the action that achieves the goal of communicating in that language, may reduce the apparent need for control commonly found with arbitrary cues.
FINANCIAL SUPPORT: The research was supported by the National Science Centre (Narodowe Centrum Nauki) through the OPUS grant (2022/45/B/HS6/01931) awarded to Kalinka Timmer.
12:15 Wiktoria Ogonowska
Faculty of Psychology, University of Warsaw, Warsaw, Poland
"The differences between bilinguals’ languages in conversational turn-taking"
The mechanisms underlying bilingual language processing are understudied in conversational contexts. Previous research has focused on isolated speech production, but in real life we produce speech in response to other people’s utterances. Recent conversational studies have revealed that monolinguals prepare their responses during the interlocutor’s turn but begin speaking only when the utterance is finished.
This study aims to examine conversational turn-taking processes in bilingual speakers and explore the similarities and differences between their languages.
Polish (L1)–English (L2) bilinguals completed a speech production task in both languages. They heard general knowledge questions and answered them aloud. The critical information needed to answer the question was presented either in the middle (EARLY) or at the end (LATE) of the question. Behavioral and EEG data were collected.
Bilinguals were more accurate and faster in L1 than in L2. Their accuracy was similar for EARLY and LATE questions, but they answered EARLY questions faster. A Late Positivity (LP) ERP component was elicited by the critical word relative to the non-critical word. There was no difference between languages in the LP when the critical word appeared EARLY; when it appeared LATE, the LP amplitude was larger for L1 than for L2.
The turn-taking pattern previously observed in monolinguals was replicated in both the L1 and L2 of bilinguals. Although previous research has shown slower word retrieval in L2, results revealed that speech planning in L2 is not delayed (LP). This suggests that auditory language cues directly activate the heard language and facilitate speech planning in L2. The difference between languages at the end of the question suggests that the longer RTs for L2 are caused by the later stages of production connected to articulation, and not by speech planning.
FINANCIAL SUPPORT: This work was supported by grants awarded to Kalinka Timmer from the National Science Centre (Narodowe Centrum Nauki) with an OPUS (2022/45/B/HS6/01931) and the Faculty of Psychology, University of Warsaw, from the funds awarded by the Ministry of Science and Higher Education in the form of a subsidy for the maintenance and development of research potential in the year 2026 (501-D125-01-1250000 zlec. 5011000677).
12:30 Kamil Wałczyk
College of Interdisciplinary Individual Studies, University of Silesia in Katowice, Katowice, Poland
"Late frontal positivity (LFP) as an indicator of metaphor processing in Polish: A study using nonparametric cluster-based permutation analysis"
This work addresses how the human brain processes mental metaphors in verbal phraseologisms in Polish. It engages with the embodied cognition framework to understand dynamic changes in the brain's electrical activity during language comprehension.
The study aimed to investigate neural processing differences between four contexts of action verbs: motor literal (physical action, e.g., 'throws a ball'), phraseological (common metaphor, e.g., 'throws an idea'), unknown figurative (novel metaphor, e.g., 'throws a wish'), and mental literal (abstract control, e.g., 'considers a decision'). The central research question was whether the neural representation of these contexts would differ and if phraseological contexts are processed differently than literal ones.
Thirty-four right-handed participants read 30 sentences per condition while EEG signals were recorded. Stimuli were presented in an 1800 ms time window, and artifact-free epochs (-200 to 1500 ms) were time-locked to stimulus onset. We used nonparametric cluster-based permutation analysis to identify significant spatiotemporal differences in event-related potentials (ERPs) without specifying a priori boundaries, effectively correcting for multiple comparisons.
Amplitudes decreased in the sequence phraseological > unknown figurative > motor literal > mental literal. A robust effect distinguished the phraseological condition from the mental literal and motor literal conditions, corresponding to a spatiotemporal cluster maximal over frontal/central areas (550–700 ms); this represents a Late Frontal Positivity (LFP). A second, earlier effect (250–350 ms) differentiated the phraseological and unknown figurative conditions from the mental literal condition.
Results suggest a two-stage mechanism for processing non-literal action verbs. The LFP indicates a specific cognitive cost for the phraseological condition. Findings support the embodied cognition framework and demonstrate the utility of cluster-based analysis.
FINANCIAL SUPPORT: This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
12:45 Rafał Jończyk
Faculty of English, Adam Mickiewicz University, Poznań, Poland
Cognitive Neuroscience Center, Adam Mickiewicz University, Poznań, Poland
"Language-Selective Inhibition of Negative Affect in Bilingual Word Production"
Semantic processing of negative content appears to be reduced when bilinguals read for comprehension in their second language (L2). However, this phenomenon has seldom been studied in production or when individuals listen to words, and the underlying mechanism remains unknown.
We investigated whether the dampening of negative emotional content applies in production and examined its underlying mechanism, specifically testing inhibitory control during L2-to-L1 translation of negative words by Polish-English bilinguals.
Polish–English bilinguals translated written (Experiment 1, n = 35) or spoken (Experiment 2, n = 35) negative and neutral words either from L2 to L1 or from L1 to L2. We analyzed behavioral accuracy and response times, event-related potentials (ERPs), and neural oscillation power via hierarchical linear modeling (LIMO; Pernet 2011, 2015).
Bilinguals were less accurate and slower when translating into L2 than into L1, but there was no interaction between target language and word valence in either experiment. Hierarchical linear modeling of ERPs showed greater positivity for negative than for neutral words only in the L2-to-L1 direction (production in Polish) and only in Experiment 1, between 300 and 750 ms after stimulus onset. Hierarchical linear modeling of neural oscillation power revealed a two-step inhibitory process: stimulus integration (240–365 ms over parietal electrodes) and speech preparation (620–800 ms over anterior sites).
We provide initial evidence that emotion modulation by language in bilinguals involves inhibitory control. The absence of such effects in Experiment 2 may relate to the evolutionary precedence of auditory over visual language processing and to the production context used. Our results point to inhibitory control as a possible mechanism underlying the commonly observed attenuation of negative emotion in a second language.
FINANCIAL SUPPORT: National Science Center (2020/37/B/HS6/00610)
13:00 Andras Ambrus
HUN-REN Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Budapest, Hungary
Department of Cognitive Science, Faculty of Natural Sciences, Budapest University of Technology and Economics, Budapest, Hungary
"Neonatal learning of speech sound patterns"
The automatic capacity to extract, encode and utilize statistical properties of the ever-changing sensory environment, known as statistical learning, is a fundamental ability present from birth. The ability to detect longer recurring auditory patterns consisting of as many as 10 pure tones has been demonstrated in newborns. Whereas pure tones only provide information about pitch transitions, speech carries a wealth of acoustic information.
This study investigated whether sleeping newborns detect regularities in speech-like sound sequences.
Using time-locked EEG measures, we compared the neural responses of 35 newborns (up to 2 days of age) to regularly recurring and random pseudo-syllable sequences.
We found that neonates show distinct electrophysiological responses to regular versus random patterns, indicating that the neonatal auditory system is sensitive to structured syllable sequences.
These findings suggest that even at a very early age, infants can exploit redundancies in speech input, an ability underpinning later language acquisition. The present results provide an important step toward identifying early neural signatures of pattern learning in speech. Such responses may serve as potential biomarkers of language learning capacities in early infancy, offering a window into the developing mechanisms that support speech segmentation and grammar acquisition. Beyond their theoretical relevance, these results underscore the methodological value of the current paradigm: it provides a robust, non-invasive, and replicable tool for probing the infant brain’s capacity to extract structure from continuous auditory input. This paradigm can be extended to assess individual differences and atypical trajectories, contributing to the early identification of infants at risk for language disorders.
FINANCIAL SUPPORT: BIO-PREPA (2024-1.2.2-ERA_NET-2025-00023), "Biomarkers and Behavioral Probes for Preclinical Perinatal Asphyxia"; funded by ERA-NET NEURON Cofund2, Horizon 2020 Grant Agreement No. 964215. Principal Investigator and Project Coordinator: Brigitta Tóth.