Browsing by Author "Michon Desbiey, Maëva"
Now showing 1 - 5 of 5
- Chapter 13: Origin and evolution of human speech: emergence from a trimodal auditory, visual and vocal network (2019). Michon Desbiey, Maëva; López Hernández, Vladimir; Aboitiz, Francisco.
  In recent years, there have been important additions to the classical model of speech processing as originally depicted by the Broca–Wernicke model, consisting of an anterior, productive region and a posterior, perceptive region, both connected via the arcuate fasciculus. The modern view implies a separation into a dorsal and a ventral pathway conveying different kinds of linguistic information, which parallels the organization of the visual system. Furthermore, this organization is highly conserved in evolution and can be seen as the neural scaffolding from which the speech networks originated. In this chapter we emphasize that the speech networks are embedded in a multimodal system encompassing audio-vocal and visuo-vocal connections, which can be traced back to an ancestral audio-visuo-motor pathway present in nonhuman primates. Likewise, we propose a trimodal repertoire for speech processing and acquisition involving auditory, visual and motor representations of the basic elements of speech: phonemes, the observation of mouth movements, and articulatory processes. Finally, we discuss this proposal in the context of a scenario for early speech acquisition in infants and in human evolution.
- Do you what I am saying?: electrophysiological dynamics of visual speech processing and the role of orofacial effectors for cross-modal predictions (2019). Michon Desbiey, Maëva; López Hernández, Vladimir; Pontificia Universidad Católica de Chile. Escuela de Psicología.
  The need for a comprehensive model to account for the neurobiology of language has quite a long history. New empirical data have challenged classical models, and a new framework is needed to foster our understanding of the brain mechanisms underlying speech perception and production. In this dissertation, we postulate the existence of a trimodal network for speech perception that emphasizes the importance of the visual processing of speech-related orofacial movements and its representation in motor cortices. In that sense, we hypothesize that auditory (phonemes), visual (visemes) and motor (articulemes) aspects of speech are bonded in a trimodal repertoire. In order to test our hypothesis, we recorded the EEG signal while participants attentively observed different types of linguistic and non-linguistic orofacial movements in two conditions: normal observation, and observation while holding a depressor horizontally between their teeth to restrict the orofacial speech effectors. ERP analyses of the signal provide evidence of cross-modal predictions indexed by the N270 and N400-like components. The amplitude of these components was specifically modulated by the salience of the visual speech cues: the more salient, the more predictable. Interestingly, when the orofacial effectors were restricted, the amplitude of the N400 was significantly reduced, suggesting that the language production system is recruited for predictions. The time-frequency analysis, on the other hand, demonstrated the involvement of motor cortices in visual speech perception. More specifically, a significant difference in µ-suppression was observed between linguistic and non-linguistic orofacial movements. The power of the µ-suppression was modulated by visual salience but diminished for the most salient visual speech cues when the participants' orofacial effectors were blocked. The results reported in this dissertation represent preliminary evidence for the existence of the proposed trimodal network and, in particular, of the articuleme. Undoubtedly, further research using complementary neuroimaging techniques is required to better understand this multimodal interplay during language perception and production.
- Electrophysiological Dynamics of Visual Speech Processing and the Role of Orofacial Effectors for Cross-Modal Predictions (2020). Michon Desbiey, Maëva; Boncompte, G.; López Hernández, Vladimir.
- Faces and Voices Processing in Human and Primate Brains: Rhythmic and Multimodal Mechanisms Underlying the Evolution and Development of Speech (2022). Michon Desbiey, Maëva; Zamorano Abramson, José; Aboitiz, Francisco.
  While influential works since the 1970s have widely assumed that imitation is an innate skill in both human and non-human primate neonates, recent empirical studies and meta-analyses have challenged this view, indicating other forms of reward-based learning as relevant factors in the development of social behavior. The translation of visual input into matching motor output that underlies imitation abilities instead seems to develop along with social interactions and sensorimotor experience during infancy and childhood. Recently, a new visual stream has been identified in both human and non-human primate brains, updating the dual visual stream model. This third pathway is thought to be specialized for dynamic aspects of social perception, such as eye gaze and facial expression, and, crucially, for the audio-visual integration of speech. Here, we review empirical studies addressing an understudied but crucial aspect of speech and communication, namely the processing of visual orofacial cues (i.e., the perception of a speaker's lip and tongue movements) and its integration with vocal auditory cues. Throughout this review, we offer new insights from our understanding of speech as the product of the evolution and development of a rhythmic and multimodal organization of sensorimotor brain networks, supporting volitional motor control of the upper vocal tract and audio-visual faces-voices integration.
- How embodied is action language? Neurological evidence from motor diseases (2014). Cardona, J. F.; Kargieman, L.; Sinay, V.; Gershanik, O.; Gelormini, C.; Amoruso, L.; Roca, M.; Pineda, D.; Trujillo, N.; Michon Desbiey, Maëva; García, A. M.; Szenkman, D.; Bekinschtein, T.; Manes, F.; Ibáñez, A.
  Although motor-language coupling is now being extensively studied, its underlying mechanisms are not fully understood. In this sense, a crucial opposition has emerged between the non-representational and the representational views of embodiment. The former posits that action language is grounded in the non-brain motor system directly engaged by musculoskeletal activity (i.e., the peripheral involvement of ongoing actions). Conversely, the latter proposes that such grounding is afforded by the brain's motor system (i.e., the activation of neural areas representing motor action). We addressed this controversy through the action-sentence compatibility effect (ACE) paradigm, which induces a contextual coupling of motor actions and verbal processing. ACEs were measured in three patient groups, early Parkinson's disease (EPD), neuromyelitis optica (NMO), and acute transverse myelitis (ATM) patients, as well as their respective healthy controls. NMO and ATM constitute models of injury to non-brain motor areas and the peripheral motor system, whereas EPD provides a model of brain motor system impairment. In our study, EPD patients exhibited impaired ACE and verbal processing relative to healthy participants, NMO, and ATM patients. These results indicate that the processing of action-related words is mainly subserved by a cortico-subcortical motor network system, thus supporting a brain-based embodied view of action language. More generally, our findings are consistent with contemporary perspectives for which action/verb processing depends on distributed brain networks supporting context-sensitive motor-language coupling.