
Matching novel face and voice identity using static and dynamic facial images

Research investigating whether faces and voices share common source identity information has offered contradictory results. Accurate face–voice matching is consistently above chance when the facial stimuli are dynamic, but not when the facial stimuli are static. We tested whether procedural differences might help to account for the previous inconsistencies. In Experiment 1, participants completed a sequential two-alternative forced choice matching task. They either heard a voice and then saw two faces or saw a face and then heard two voices. Face–voice matching was above chance when the facial stimuli were dynamic and articulating, but not when they were static. In Experiment 2, we tested whether matching was more accurate when faces and voices were presented simultaneously. The participants saw two face–voice combinations, presented one after the other. They had to decide which combination was the same identity. As in Experiment 1, only dynamic face–voice matching was above chance. In Experiment 3, participants heard a voice and then saw two static faces presented simultaneously. With this procedure, static face–voice matching was above chance. The overall results, analyzed using multilevel modeling, showed that voices and dynamic articulating faces, as well as voices and static faces, share concordant source identity information. It seems, therefore, that above-chance static face–voice matching is sensitive to the experimental procedure employed. In addition, the inconsistencies in previous research might depend on the specific stimulus sets used; our multilevel modeling analyses show that some people look and sound more similar than others.

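The abstract turns on two technical points: chance performance in a two-alternative forced choice (2AFC) task is 0.5, and the overall analysis used multilevel modeling. As a minimal sketch of the first point (not the authors' analysis code; the trial counts and variable names below are hypothetical), a one-sided exact binomial test checks whether matching accuracy exceeds chance:

# Minimal sketch: testing 2AFC face-voice matching accuracy against
# chance (p = 0.5, since each trial offers two alternatives).
# Hypothetical counts; this is NOT the article's analysis, which used
# multilevel models so that results generalize over both participants
# and stimulus identities.
from scipy.stats import binomtest

n_trials = 64    # hypothetical number of matching trials
n_correct = 41   # hypothetical number of correct matches

# One-sided exact binomial test: is accuracy reliably above 0.5?
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.3f}, p = {result.pvalue:.4f}")

The article's multilevel analysis can be thought of as a logistic extension of this test, roughly logit Pr(correct) = b0 + u_participant + v_identity, where the identity-level random effects capture the closing claim that some people look and sound more similar than others; above-chance matching corresponds to b0 > 0.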

Bibliographic Details
Main Authors: Smith, Harriet M. J., Dunn, Andrew K., Baguley, Thom, Stacey, Paula C.
Format: Online, Article, Text
Language: English
Published: Springer US, 2016
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4819615/
https://www.ncbi.nlm.nih.gov/pubmed/26732264
http://dx.doi.org/10.3758/s13414-015-1045-8
Collection: PubMed
Record ID: pubmed-4819615
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Atten Percept Psychophys
Published Online: 2016-01-05
Rights: © The Author(s) 2016. Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.