AREA

Neuroimage

SITE

Cereb Cortex

TYPE

Articles

YEAR

Formal publication: Oct 2020

Authors: Adolfo M García, Eugenia Hesse, Agustina Birba, Federico Adolfi, Ezequiel Mikulan, Miguel Martorell Caro, Agustín Petroni, Tristan A Bekinschtein, María Del Carmen García, Walter Silva, Carlos Ciraolo, Esteban Vaucheret, Lucas Sedeño, Agustín Ibáñez

Abstract:
In construing meaning, the brain recruits multimodal (conceptual) systems and embodied (modality-specific) mechanisms. Yet, no consensus exists on how crucial the latter are for the inception of semantic distinctions. To address this issue, we combined scalp electroencephalographic (EEG) and intracranial EEG (iEEG) recordings to examine when nouns denoting facial body parts (FBPs) and non-FBPs are discriminated in face-processing and multimodal networks. First, FBP words increased N170 amplitude (a hallmark of early facial processing). Second, they triggered fast (~100 ms) activity boosts within the face-processing network, alongside later (~275 ms) effects in multimodal circuits. Third, iEEG recordings from face-processing hubs allowed decoding of ~80% of items before 200 ms, while classification based on multimodal-network activity only surpassed ~70% accuracy after 250 ms. Finally, EEG and iEEG connectivity between both networks proved greater in early (0-200 ms) than later (200-400 ms) windows. Collectively, our findings indicate that, at least for some lexico-semantic categories, meaning is construed through fast reenactments of modality-specific experience.
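The decoding results summarized above rest on time-resolved classification: word category is predicted from network activity within successive time windows, yielding an accuracy-over-time curve. The sketch below illustrates that general approach on synthetic data; the data shapes, 50-ms windows, and logistic-regression classifier are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of time-resolved decoding, loosely analogous to the
# classification analysis described in the abstract. All shapes, window
# sizes, and classifier choices here are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical epoched recordings: trials x channels x time samples
# (e.g., 120 word trials, 32 contacts, 400 ms at 1 kHz).
n_trials, n_channels, n_times = 120, 32, 400
X = rng.standard_normal((n_trials, n_channels, n_times))
y = rng.integers(0, 2, n_trials)  # 0 = non-FBP noun, 1 = FBP noun

# Decode category separately in each 50-ms window to trace how
# classification accuracy evolves over time.
win = 50
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
accuracy = []
for start in range(0, n_times - win + 1, win):
    # Average activity within the window -> trials x channels features.
    feats = X[:, :, start:start + win].mean(axis=2)
    scores = cross_val_score(clf, feats, y, cv=5)
    accuracy.append(scores.mean())

print(np.round(accuracy, 2))  # one cross-validated accuracy per window
```

With real recordings, accuracies rising above chance in early windows for one network and only in later windows for another would correspond to the kind of temporal dissociation reported here.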