Transformation of a temporal speech cue to a spatial neural code in human auditory cortex
In speech, listeners extract continuously-varying spectrotemporal cues from the acoustic signal to perceive discrete phonetic categories. Spectral cues are spatially encoded in the amplitude of responses in phonetically-tuned neural populations in auditory cortex. It remains unknown whether similar...
Main Authors: | Fox, Neal P; Leonard, Matthew; Sjerps, Matthias J; Chang, Edward F |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | eLife Sciences Publications, Ltd, 2020 |
Subjects: | Neuroscience |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7556862/ https://www.ncbi.nlm.nih.gov/pubmed/32840483 http://dx.doi.org/10.7554/eLife.53051 |
_version_ | 1783594299862024192 |
---|---|
author | Fox, Neal P; Leonard, Matthew; Sjerps, Matthias J; Chang, Edward F |
author_facet | Fox, Neal P; Leonard, Matthew; Sjerps, Matthias J; Chang, Edward F |
author_sort | Fox, Neal P |
collection | PubMed |
description | In speech, listeners extract continuously-varying spectrotemporal cues from the acoustic signal to perceive discrete phonetic categories. Spectral cues are spatially encoded in the amplitude of responses in phonetically-tuned neural populations in auditory cortex. It remains unknown whether similar neurophysiological mechanisms encode temporal cues like voice-onset time (VOT), which distinguishes sounds like /b/ and /p/. We used direct brain recordings in humans to investigate the neural encoding of temporal speech cues with a VOT continuum from /ba/ to /pa/. We found that distinct neural populations respond preferentially to VOTs from one phonetic category, and are also sensitive to sub-phonetic VOT differences within a population’s preferred category. In a simple neural network model, simulated populations tuned to detect either temporal gaps or coincidences between spectral cues captured encoding patterns observed in real neural data. These results demonstrate that a spatial/amplitude neural code underlies the cortical representation of both spectral and temporal speech cues. |
format | Online Article Text |
id | pubmed-7556862 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | eLife Sciences Publications, Ltd |
record_format | MEDLINE/PubMed |
spelling | pubmed-7556862 2020-10-16 Transformation of a temporal speech cue to a spatial neural code in human auditory cortex Fox, Neal P; Leonard, Matthew; Sjerps, Matthias J; Chang, Edward F eLife Neuroscience In speech, listeners extract continuously-varying spectrotemporal cues from the acoustic signal to perceive discrete phonetic categories. Spectral cues are spatially encoded in the amplitude of responses in phonetically-tuned neural populations in auditory cortex. It remains unknown whether similar neurophysiological mechanisms encode temporal cues like voice-onset time (VOT), which distinguishes sounds like /b/ and /p/. We used direct brain recordings in humans to investigate the neural encoding of temporal speech cues with a VOT continuum from /ba/ to /pa/. We found that distinct neural populations respond preferentially to VOTs from one phonetic category, and are also sensitive to sub-phonetic VOT differences within a population’s preferred category. In a simple neural network model, simulated populations tuned to detect either temporal gaps or coincidences between spectral cues captured encoding patterns observed in real neural data. These results demonstrate that a spatial/amplitude neural code underlies the cortical representation of both spectral and temporal speech cues. eLife Sciences Publications, Ltd 2020-08-25 /pmc/articles/PMC7556862/ /pubmed/32840483 http://dx.doi.org/10.7554/eLife.53051 Text en © 2020, Fox et al. This article is distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use and redistribution provided that the original author and source are credited. |
spellingShingle | Neuroscience Fox, Neal P Leonard, Matthew Sjerps, Matthias J Chang, Edward F Transformation of a temporal speech cue to a spatial neural code in human auditory cortex |
title | Transformation of a temporal speech cue to a spatial neural code in human auditory cortex |
title_full | Transformation of a temporal speech cue to a spatial neural code in human auditory cortex |
title_fullStr | Transformation of a temporal speech cue to a spatial neural code in human auditory cortex |
title_full_unstemmed | Transformation of a temporal speech cue to a spatial neural code in human auditory cortex |
title_short | Transformation of a temporal speech cue to a spatial neural code in human auditory cortex |
title_sort | transformation of a temporal speech cue to a spatial neural code in human auditory cortex |
topic | Neuroscience |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7556862/ https://www.ncbi.nlm.nih.gov/pubmed/32840483 http://dx.doi.org/10.7554/eLife.53051 |
work_keys_str_mv | AT foxnealp transformationofatemporalspeechcuetoaspatialneuralcodeinhumanauditorycortex AT leonardmatthew transformationofatemporalspeechcuetoaspatialneuralcodeinhumanauditorycortex AT sjerpsmatthiasj transformationofatemporalspeechcuetoaspatialneuralcodeinhumanauditorycortex AT changedwardf transformationofatemporalspeechcuetoaspatialneuralcodeinhumanauditorycortex |
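
For readers curious about the gap/coincidence-detector idea summarized in the abstract above, the following is a minimal illustrative sketch in Python, not the authors' model or code. The sigmoid tuning function and its `boundary_ms` and `slope` parameters are invented for illustration; the sketch only shows, under those assumptions, how two simulated populations could convert a temporal VOT cue into response amplitude: a coincidence detector prefers short VOTs (/ba/-like), a gap detector prefers long VOTs (/pa/-like), and each still varies in amplitude with sub-phonetic VOT differences within its preferred category.

```python
import numpy as np

# Illustrative sketch only (NOT the authors' model or code): two simulated
# populations convert a temporal cue (voice-onset time, VOT) into response
# amplitude. A "coincidence detector" responds most when the burst and the
# voicing onset nearly coincide (short VOT, /ba/-like); a "gap detector"
# responds most when voicing onset lags the burst (long VOT, /pa/-like).

def simulate_responses(vots_ms, boundary_ms=20.0, slope=0.3):
    """Return (coincidence, gap) population amplitudes for each VOT in ms.

    boundary_ms and slope are invented illustration parameters, not values
    reported in the paper.
    """
    vots = np.asarray(vots_ms, dtype=float)
    # Gap detector: amplitude rises as the burst-to-voicing gap lengthens.
    gap = 1.0 / (1.0 + np.exp(-slope * (vots - boundary_ms)))
    # Coincidence detector: amplitude rises as burst and voicing coincide.
    coincidence = 1.0 - gap
    return coincidence, gap

if __name__ == "__main__":
    continuum = range(0, 60, 10)          # a /ba/-to-/pa/ VOT continuum (ms)
    ba_pop, pa_pop = simulate_responses(continuum)
    for vot, b, p in zip(continuum, ba_pop, pa_pop):
        label = "/ba/" if b > p else "/pa/"
        # Each population prefers one category, yet its amplitude is graded
        # for sub-phonetic VOT differences inside that preferred category.
        print(f"VOT {vot:2d} ms -> coincidence {b:.2f}, gap {p:.2f} ({label})")
```

Run as a script, it prints one amplitude per population across the continuum, mirroring the pattern described in the abstract: category preference expressed as a spatial/amplitude code, plus within-category sensitivity.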