
Neural Correlates of Auditory Figure-Ground Segregation Based on Temporal Coherence


Bibliographic Details
Main Authors: Teki, Sundeep; Barascud, Nicolas; Picard, Samuel; Payne, Christopher; Griffiths, Timothy D.; Chait, Maria
Format: Online Article (Text)
Language: English
Published: Oxford University Press, 2016
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5004755/
https://www.ncbi.nlm.nih.gov/pubmed/27325682
http://dx.doi.org/10.1093/cercor/bhw173
Description: To make sense of natural acoustic environments, listeners must parse complex mixtures of sounds that vary in frequency, space, and time. Emerging work suggests that, in addition to the well-studied spectral cues for segregation, sensitivity to temporal coherence—the coincidence of sound elements in and across time—is also critical for the perceptual organization of acoustic scenes. Here, we examine pre-attentive, stimulus-driven neural processes underlying auditory figure-ground segregation using stimuli that capture the challenges of listening in complex scenes where segregation cannot be achieved based on spectral cues alone. Signals (“stochastic figure-ground”: SFG) comprised a sequence of brief broadband chords containing random pure-tone components that varied from one chord to another. Occasional tone repetitions across chords are perceived as “figures” popping out of a stochastic “ground.” Magnetoencephalography (MEG) measurements in naïve, distracted human subjects revealed robust evoked responses, commencing from about 150 ms after figure onset, that reflect the emergence of the “figure” from the randomly varying “ground.” Neural sources underlying this bottom-up-driven figure-ground segregation were localized to the planum temporale and the intraparietal sulcus, demonstrating that the latter area, outside the “classic” auditory system, is also involved in the early stages of auditory scene analysis.
Journal: Cereb Cortex (Original Articles)
Collection: PubMed (National Center for Biotechnology Information); record pubmed-5004755
Publication dates: issue 2016-09; published online 2016-08-30
© The Author 2016. Published by Oxford University Press. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.