CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition
Over the past few years, the field of visual social cognition and face processing has been dramatically impacted by a series of data-driven studies employing computer-graphics tools to synthesize arbitrary meaningful facial expressions. In the auditory modality, reverse correlation is traditionally used to characterize sensory processing at the level of spectral or spectro-temporal stimulus properties, but not higher-level cognitive processing of e.g. words, sentences or music, by lack of tools able to manipulate the stimulus dimensions that are relevant for these processes. Here, we present an open-source audio-transformation toolbox, called CLEESE, able to systematically randomize the prosody/melody of existing speech and music recordings. CLEESE works by cutting recordings in small successive time segments (e.g. every successive 100 milliseconds in a spoken utterance), and applying a random parametric transformation of each segment’s pitch, duration or amplitude, using a new Python-language implementation of the phase-vocoder digital audio technique. We present here two applications of the tool to generate stimuli for studying intonation processing of interrogative vs declarative speech, and rhythm processing of sung melodies.
Main Authors: Burred, Juan José; Ponsot, Emmanuel; Goupil, Louise; Liuni, Marco; Aucouturier, Jean-Julien
Format: Online Article Text
Language: English
Published: Public Library of Science, 2019
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6448843/ https://www.ncbi.nlm.nih.gov/pubmed/30947281 http://dx.doi.org/10.1371/journal.pone.0205943
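The abstract describes CLEESE's core mechanism: cut a recording into short fixed-length segments (e.g. 100 ms), draw a random pitch, duration, or amplitude value for each segment, and apply the resulting breakpoint function with a phase vocoder. The sketch below illustrates only that segment-and-randomize idea in plain numpy; it is not CLEESE's actual API, and the function name, parameters, and the 200-cent spread are illustrative assumptions. See the paper linked above for the real toolbox and its interface.

```python
# Illustrative sketch only -- NOT CLEESE's actual API. It shows the general
# idea described in the abstract: divide a recording into fixed-length
# segments and draw one random pitch-shift value per segment, yielding a
# breakpoint function (BPF) that a phase vocoder could then apply.
import numpy as np

def random_pitch_bpf(duration_s, segment_s=0.1, sd_cents=200.0, seed=None):
    """Return (time, shift) breakpoints: one pitch shift (in cents) drawn
    from a normal distribution for every segment_s-long segment."""
    rng = np.random.default_rng(seed)
    times = np.arange(0.0, duration_s, segment_s)          # segment onsets
    shifts = rng.normal(loc=0.0, scale=sd_cents, size=times.size)
    return np.column_stack([times, shifts])

# Example: a 2-second utterance cut every 100 ms -> 20 random breakpoints,
# one such randomized BPF being generated per experimental trial.
bpf = random_pitch_bpf(duration_s=2.0, segment_s=0.1, sd_cents=200.0, seed=0)
print(bpf[:3])
```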
_version_ | 1783408734128570368 |
author | Burred, Juan José Ponsot, Emmanuel Goupil, Louise Liuni, Marco Aucouturier, Jean-Julien |
author_facet | Burred, Juan José Ponsot, Emmanuel Goupil, Louise Liuni, Marco Aucouturier, Jean-Julien |
author_sort | Burred, Juan José |
collection | PubMed |
description | Over the past few years, the field of visual social cognition and face processing has been dramatically impacted by a series of data-driven studies employing computer-graphics tools to synthesize arbitrary meaningful facial expressions. In the auditory modality, reverse correlation is traditionally used to characterize sensory processing at the level of spectral or spectro-temporal stimulus properties, but not higher-level cognitive processing of e.g. words, sentences or music, by lack of tools able to manipulate the stimulus dimensions that are relevant for these processes. Here, we present an open-source audio-transformation toolbox, called CLEESE, able to systematically randomize the prosody/melody of existing speech and music recordings. CLEESE works by cutting recordings in small successive time segments (e.g. every successive 100 milliseconds in a spoken utterance), and applying a random parametric transformation of each segment’s pitch, duration or amplitude, using a new Python-language implementation of the phase-vocoder digital audio technique. We present here two applications of the tool to generate stimuli for studying intonation processing of interrogative vs declarative speech, and rhythm processing of sung melodies. |
format | Online Article Text |
id | pubmed-6448843 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-64488432019-04-19 CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition Burred, Juan José Ponsot, Emmanuel Goupil, Louise Liuni, Marco Aucouturier, Jean-Julien PLoS One Research Article Over the past few years, the field of visual social cognition and face processing has been dramatically impacted by a series of data-driven studies employing computer-graphics tools to synthesize arbitrary meaningful facial expressions. In the auditory modality, reverse correlation is traditionally used to characterize sensory processing at the level of spectral or spectro-temporal stimulus properties, but not higher-level cognitive processing of e.g. words, sentences or music, by lack of tools able to manipulate the stimulus dimensions that are relevant for these processes. Here, we present an open-source audio-transformation toolbox, called CLEESE, able to systematically randomize the prosody/melody of existing speech and music recordings. CLEESE works by cutting recordings in small successive time segments (e.g. every successive 100 milliseconds in a spoken utterance), and applying a random parametric transformation of each segment’s pitch, duration or amplitude, using a new Python-language implementation of the phase-vocoder digital audio technique. We present here two applications of the tool to generate stimuli for studying intonation processing of interrogative vs declarative speech, and rhythm processing of sung melodies. Public Library of Science 2019-04-04 /pmc/articles/PMC6448843/ /pubmed/30947281 http://dx.doi.org/10.1371/journal.pone.0205943 Text en © 2019 Burred et al http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article Burred, Juan José Ponsot, Emmanuel Goupil, Louise Liuni, Marco Aucouturier, Jean-Julien CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition |
title | CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition |
title_full | CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition |
title_fullStr | CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition |
title_full_unstemmed | CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition |
title_short | CLEESE: An open-source audio-transformation toolbox for data-driven experiments in speech and music cognition |
title_sort | cleese: an open-source audio-transformation toolbox for data-driven experiments in speech and music cognition |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6448843/ https://www.ncbi.nlm.nih.gov/pubmed/30947281 http://dx.doi.org/10.1371/journal.pone.0205943 |
work_keys_str_mv | AT burredjuanjose cleeseanopensourceaudiotransformationtoolboxfordatadrivenexperimentsinspeechandmusiccognition AT ponsotemmanuel cleeseanopensourceaudiotransformationtoolboxfordatadrivenexperimentsinspeechandmusiccognition AT goupillouise cleeseanopensourceaudiotransformationtoolboxfordatadrivenexperimentsinspeechandmusiccognition AT liunimarco cleeseanopensourceaudiotransformationtoolboxfordatadrivenexperimentsinspeechandmusiccognition AT aucouturierjeanjulien cleeseanopensourceaudiotransformationtoolboxfordatadrivenexperimentsinspeechandmusiccognition |