
The Music-In-Noise Task (MINT): A Tool for Dissecting Complex Auditory Perception


Bibliographic Details
Main Authors: Coffey, Emily B. J., Arseneau-Bruneau, Isabelle, Zhang, Xiaochen, Zatorre, Robert J.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2019
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6427094/
https://www.ncbi.nlm.nih.gov/pubmed/30930734
http://dx.doi.org/10.3389/fnins.2019.00199
Collection: PubMed
Description: The ability to segregate target sounds in noisy backgrounds is relevant both to neuroscience and to clinical applications. Recent research suggests that hearing-in-noise (HIN) problems are solved using combinations of sub-skills that are applied according to task demand and information availability. While evidence is accumulating for a musician advantage in HIN, the exact nature of the reported training effect is not fully understood. Existing HIN tests focus on tasks requiring understanding of speech in the presence of competing sound. Because visual, spatial and predictive cues are not systematically considered in these tasks, few tools exist to investigate the most relevant components of cognitive processes involved in stream segregation. We present the Music-In-Noise Task (MINT) as a flexible tool to expand HIN measures beyond speech perception, and for addressing research questions pertaining to the relative contributions of HIN sub-skills, inter-individual differences in their use, and their neural correlates. The MINT uses a match-mismatch trial design: in four conditions (Baseline, Rhythm, Spatial, and Visual) subjects first hear a short instrumental musical excerpt embedded in an informational masker of “multi-music” noise, followed by either a matching or scrambled repetition of the target musical excerpt presented in silence; the four conditions differ according to the presence or absence of additional cues. In a fifth condition (Prediction), subjects hear the excerpt in silence as a target first, which helps to anticipate incoming information when the target is embedded in masking sound. Data from samples of young adults show that the MINT has good reliability and internal consistency, and demonstrate selective benefits of musicianship in the Prediction, Rhythm, and Visual subtasks. We also report a performance benefit of multilingualism that is separable from that of musicianship. Average MINT scores were correlated with scores on a sentence-in-noise perception task, but only accounted for a relatively small percentage of the variance, indicating that the MINT is sensitive to additional factors and can provide a complement and extension of speech-based tests for studying stream segregation. A customizable version of the MINT is made available for use and extension by the scientific community.
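The match-mismatch design described above can be sketched as a simple scoring scheme: each trial belongs to one of the five conditions, presents a matching or scrambled second excerpt, and records the subject's "match" judgment, from which percent-correct is computed per condition. The structure and names below are purely illustrative assumptions, not the released MINT implementation.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical sketch of a MINT-style match-mismatch paradigm;
# condition names follow the abstract, everything else is assumed.
CONDITIONS = ("Baseline", "Rhythm", "Spatial", "Visual", "Prediction")

@dataclass
class Trial:
    condition: str   # one of CONDITIONS
    is_match: bool   # second excerpt matches (True) or is scrambled (False)
    response: bool   # subject judged the pair as matching

def score_by_condition(trials):
    """Return percent-correct per condition for match-mismatch responses."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t in trials:
        total[t.condition] += 1
        if t.response == t.is_match:
            correct[t.condition] += 1
    return {c: correct[c] / total[c] for c in total}
```

For example, one correct and one incorrect Rhythm trial yield a Rhythm score of 0.5; comparing such per-condition scores across groups is how condition-selective benefits (e.g., of musicianship) would be expressed.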
Journal: Front Neurosci (Neuroscience)
Published online: 2019-03-14
Record: pubmed-6427094 (MEDLINE/PubMed, National Center for Biotechnology Information)
Copyright © 2019 Coffey, Arseneau-Bruneau, Zhang and Zatorre. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.