Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading
Main Authors: | O’Sullivan, Aisling E., Crosse, Michael J., Di Liberto, Giovanni M., Lalor, Edmund C. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2017 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5225113/ https://www.ncbi.nlm.nih.gov/pubmed/28123363 http://dx.doi.org/10.3389/fnhum.2016.00679 |
_version_ | 1782493456167337984 |
---|---|
author | O’Sullivan, Aisling E. Crosse, Michael J. Di Liberto, Giovanni M. Lalor, Edmund C. |
author_facet | O’Sullivan, Aisling E. Crosse, Michael J. Di Liberto, Giovanni M. Lalor, Edmund C. |
author_sort | O’Sullivan, Aisling E. |
collection | PubMed |
description | Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. |
format | Online Article Text |
id | pubmed-5225113 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-52251132017-01-25 Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading O’Sullivan, Aisling E. Crosse, Michael J. Di Liberto, Giovanni M. Lalor, Edmund C. Front Hum Neurosci Neuroscience Speech is a multisensory percept, comprising an auditory and visual component. While the content and processing pathways of audio speech have been well characterized, the visual component is less well understood. In this work, we expand current methodologies using system identification to introduce a framework that facilitates the study of visual speech in its natural, continuous form. Specifically, we use models based on the unheard acoustic envelope (E), the motion signal (M) and categorical visual speech features (V) to predict EEG activity during silent lipreading. Our results show that each of these models performs similarly at predicting EEG in visual regions and that respective combinations of the individual models (EV, MV, EM and EMV) provide an improved prediction of the neural activity over their constituent models. In comparing these different combinations, we find that the model incorporating all three types of features (EMV) outperforms the individual models, as well as both the EV and MV models, while it performs similarly to the EM model. Importantly, EM does not outperform EV and MV, which, considering the higher dimensionality of the V model, suggests that more data is needed to clarify this finding. Nevertheless, the performance of EMV, and comparisons of the subject performances for the three individual models, provides further evidence to suggest that visual regions are involved in both low-level processing of stimulus dynamics and categorical speech perception. This framework may prove useful for investigating modality-specific processing of visual speech under naturalistic conditions. Frontiers Media S.A. 
2017-01-11 /pmc/articles/PMC5225113/ /pubmed/28123363 http://dx.doi.org/10.3389/fnhum.2016.00679 Text en Copyright © 2017 O’Sullivan, Crosse, Di Liberto and Lalor. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution and reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Neuroscience O’Sullivan, Aisling E. Crosse, Michael J. Di Liberto, Giovanni M. Lalor, Edmund C. Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading |
title | Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading |
title_full | Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading |
title_fullStr | Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading |
title_full_unstemmed | Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading |
title_short | Visual Cortical Entrainment to Motion and Categorical Speech Features during Silent Lipreading |
title_sort | visual cortical entrainment to motion and categorical speech features during silent lipreading |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5225113/ https://www.ncbi.nlm.nih.gov/pubmed/28123363 http://dx.doi.org/10.3389/fnhum.2016.00679 |
work_keys_str_mv | AT osullivanaislinge visualcorticalentrainmenttomotionandcategoricalspeechfeaturesduringsilentlipreading AT crossemichaelj visualcorticalentrainmenttomotionandcategoricalspeechfeaturesduringsilentlipreading AT dilibertogiovannim visualcorticalentrainmenttomotionandcategoricalspeechfeaturesduringsilentlipreading AT laloredmundc visualcorticalentrainmenttomotionandcategoricalspeechfeaturesduringsilentlipreading |