A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements
Successful lip-reading requires a mapping from visual to phonological information [1]. Recently, visual and motor cortices have been implicated in tracking lip movements (e.g., [2]). It remains unclear, however, whether visuo-phonological mapping occurs already at the level of the visual cortex, that is, whether this structure tracks the acoustic signal in a functionally relevant manner. To elucidate this, we investigated how the cortex tracks (i.e., entrains to) absent acoustic speech signals carried by silent lip movements. Crucially, we contrasted the entrainment to unheard forward (intelligible) and backward (unintelligible) acoustic speech. We observed that the visual cortex exhibited stronger entrainment to the unheard forward acoustic speech envelope compared to the unheard backward acoustic speech envelope. Supporting the notion of a visuo-phonological mapping process, this forward-backward difference of occipital entrainment was not present for actually observed lip movements. Importantly, the respective occipital region received more top-down input, especially from left premotor, primary motor, and somatosensory regions and, to a lesser extent, also from posterior temporal cortex. Strikingly, across participants, the extent of top-down modulation of the visual cortex stemming from these regions partially correlated with the strength of entrainment to the absent acoustic forward speech envelope, but not to present forward lip movements. Our findings demonstrate that a distributed cortical network, including key dorsal stream auditory regions [3, 4, 5], influences how the visual cortex shows sensitivity to the intelligibility of speech while tracking silent lip movements.
Main Authors: Hauswald, Anne; Lithari, Chrysa; Collignon, Olivier; Leonardelli, Elisa; Weisz, Nathan
Format: Online Article Text
Language: English
Published: Cell Press, 2018
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5956463/ https://www.ncbi.nlm.nih.gov/pubmed/29681475 http://dx.doi.org/10.1016/j.cub.2018.03.044
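A note on the central measure: the "entrainment" described in the abstract is commonly operationalized as spectral coherence between the amplitude envelope of the speech audio and a cortical time series in the low-frequency (roughly syllabic, 1-7 Hz) range. The following Python sketch illustrates that general idea only; it is not the authors' source-localized MEG pipeline, and every signal name (`occipital_ts`, `audio`, `fs`) is a hypothetical placeholder.

```python
# Illustrative sketch only: "entrainment" as magnitude-squared coherence
# between a speech envelope and a cortical signal in the 1-7 Hz band
# (roughly the syllabic rate). Names and parameters are hypothetical.
import numpy as np
from scipy.signal import hilbert, coherence

def speech_envelope(audio: np.ndarray) -> np.ndarray:
    """Broadband amplitude envelope via the analytic (Hilbert) signal."""
    return np.abs(hilbert(audio))

def entrainment(cortical: np.ndarray, envelope: np.ndarray, fs: float,
                fmin: float = 1.0, fmax: float = 7.0) -> float:
    """Mean magnitude-squared coherence within [fmin, fmax] Hz."""
    freqs, coh = coherence(cortical, envelope, fs=fs, nperseg=int(2 * fs))
    band = (freqs >= fmin) & (freqs <= fmax)
    return float(coh[band].mean())

# Contrast of interest: coherence of an occipital time series with the
# *unheard* forward envelope vs. the time-reversed (backward) envelope.
# fw = entrainment(occipital_ts, speech_envelope(audio), fs)
# bw = entrainment(occipital_ts, speech_envelope(audio[::-1]), fs)
# A positive fw - bw difference would indicate intelligibility-specific tracking.
```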
_version_ | 1783323898797883392 |
author | Hauswald, Anne; Lithari, Chrysa; Collignon, Olivier; Leonardelli, Elisa; Weisz, Nathan
author_facet | Hauswald, Anne; Lithari, Chrysa; Collignon, Olivier; Leonardelli, Elisa; Weisz, Nathan
author_sort | Hauswald, Anne |
collection | PubMed |
description | Successful lip-reading requires a mapping from visual to phonological information [1]. Recently, visual and motor cortices have been implicated in tracking lip movements (e.g., [2]). It remains unclear, however, whether visuo-phonological mapping occurs already at the level of the visual cortex, that is, whether this structure tracks the acoustic signal in a functionally relevant manner. To elucidate this, we investigated how the cortex tracks (i.e., entrains to) absent acoustic speech signals carried by silent lip movements. Crucially, we contrasted the entrainment to unheard forward (intelligible) and backward (unintelligible) acoustic speech. We observed that the visual cortex exhibited stronger entrainment to the unheard forward acoustic speech envelope compared to the unheard backward acoustic speech envelope. Supporting the notion of a visuo-phonological mapping process, this forward-backward difference of occipital entrainment was not present for actually observed lip movements. Importantly, the respective occipital region received more top-down input, especially from left premotor, primary motor, and somatosensory regions and, to a lesser extent, also from posterior temporal cortex. Strikingly, across participants, the extent of top-down modulation of the visual cortex stemming from these regions partially correlated with the strength of entrainment to the absent acoustic forward speech envelope, but not to present forward lip movements. Our findings demonstrate that a distributed cortical network, including key dorsal stream auditory regions [3, 4, 5], influences how the visual cortex shows sensitivity to the intelligibility of speech while tracking silent lip movements. |
format | Online Article Text |
id | pubmed-5956463 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Cell Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-5956463 2018-05-17 A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements Hauswald, Anne; Lithari, Chrysa; Collignon, Olivier; Leonardelli, Elisa; Weisz, Nathan Curr Biol Article Successful lip-reading requires a mapping from visual to phonological information [1]. Recently, visual and motor cortices have been implicated in tracking lip movements (e.g., [2]). It remains unclear, however, whether visuo-phonological mapping occurs already at the level of the visual cortex, that is, whether this structure tracks the acoustic signal in a functionally relevant manner. To elucidate this, we investigated how the cortex tracks (i.e., entrains to) absent acoustic speech signals carried by silent lip movements. Crucially, we contrasted the entrainment to unheard forward (intelligible) and backward (unintelligible) acoustic speech. We observed that the visual cortex exhibited stronger entrainment to the unheard forward acoustic speech envelope compared to the unheard backward acoustic speech envelope. Supporting the notion of a visuo-phonological mapping process, this forward-backward difference of occipital entrainment was not present for actually observed lip movements. Importantly, the respective occipital region received more top-down input, especially from left premotor, primary motor, and somatosensory regions and, to a lesser extent, also from posterior temporal cortex. Strikingly, across participants, the extent of top-down modulation of the visual cortex stemming from these regions partially correlated with the strength of entrainment to the absent acoustic forward speech envelope, but not to present forward lip movements. Our findings demonstrate that a distributed cortical network, including key dorsal stream auditory regions [3, 4, 5], influences how the visual cortex shows sensitivity to the intelligibility of speech while tracking silent lip movements. Cell Press 2018-05-07 /pmc/articles/PMC5956463/ /pubmed/29681475 http://dx.doi.org/10.1016/j.cub.2018.03.044 Text en © 2018 The Authors http://creativecommons.org/licenses/by-nc-nd/4.0/ This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). |
spellingShingle | Article; Hauswald, Anne; Lithari, Chrysa; Collignon, Olivier; Leonardelli, Elisa; Weisz, Nathan; A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements
title | A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements |
title_full | A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements |
title_fullStr | A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements |
title_full_unstemmed | A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements |
title_short | A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements |
title_sort | visual cortical network for deriving phonological information from intelligible lip movements |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5956463/ https://www.ncbi.nlm.nih.gov/pubmed/29681475 http://dx.doi.org/10.1016/j.cub.2018.03.044 |
work_keys_str_mv | AT hauswaldanne avisualcorticalnetworkforderivingphonologicalinformationfromintelligiblelipmovements AT litharichrysa avisualcorticalnetworkforderivingphonologicalinformationfromintelligiblelipmovements AT collignonolivier avisualcorticalnetworkforderivingphonologicalinformationfromintelligiblelipmovements AT leonardellielisa avisualcorticalnetworkforderivingphonologicalinformationfromintelligiblelipmovements AT weisznathan avisualcorticalnetworkforderivingphonologicalinformationfromintelligiblelipmovements AT hauswaldanne visualcorticalnetworkforderivingphonologicalinformationfromintelligiblelipmovements AT litharichrysa visualcorticalnetworkforderivingphonologicalinformationfromintelligiblelipmovements AT collignonolivier visualcorticalnetworkforderivingphonologicalinformationfromintelligiblelipmovements AT leonardellielisa visualcorticalnetworkforderivingphonologicalinformationfromintelligiblelipmovements AT weisznathan visualcorticalnetworkforderivingphonologicalinformationfromintelligiblelipmovements |