
Concordant Cues in Faces and Voices: Testing the Backup Signal Hypothesis


Bibliographic Details
Main Authors: Smith, Harriet M. J., Dunn, Andrew K., Baguley, Thom, Stacey, Paula C.
Format: Online Article Text
Language: English
Published: SAGE Publications 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10481076/
http://dx.doi.org/10.1177/1474704916630317
author Smith, Harriet M. J.
Dunn, Andrew K.
Baguley, Thom
Stacey, Paula C.
collection PubMed
description Information from faces and voices combines to provide multimodal signals about a person. Faces and voices may offer redundant, overlapping (backup signals), or complementary information (multiple messages). This article reports two experiments which investigated the extent to which faces and voices deliver concordant information about dimensions of fitness and quality. In Experiment 1, participants rated faces and voices on scales for masculinity/femininity, age, health, height, and weight. The results showed that people make similar judgments from faces and voices, with particularly strong correlations for masculinity/femininity, health, and height. If, as these results suggest, faces and voices constitute backup signals for various dimensions, it is hypothetically possible that people would be able to accurately match novel faces and voices for identity. However, previous investigations into novel face–voice matching offer contradictory results. In Experiment 2, participants saw a face and heard a voice and were required to decide whether the face and voice belonged to the same person. Matching accuracy was significantly above chance level, suggesting that judgments made independently from faces and voices are sufficiently similar that people can match the two. Both sets of results were analyzed using multilevel modeling and are interpreted as being consistent with the backup signal hypothesis.
format Online
Article
Text
id pubmed-10481076
institution National Center for Biotechnology Information
language English
publishDate 2016
publisher SAGE Publications
record_format MEDLINE/PubMed
spelling pubmed-10481076 2023-09-07 Concordant Cues in Faces and Voices: Testing the Backup Signal Hypothesis. Smith, Harriet M. J.; Dunn, Andrew K.; Baguley, Thom; Stacey, Paula C. Evol Psychol, Articles.
SAGE Publications 2016-02-10 /pmc/articles/PMC10481076/ http://dx.doi.org/10.1177/1474704916630317 Text en © The Author(s) 2016. This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 3.0 License (https://creativecommons.org/licenses/by-nc/3.0/), which permits non-commercial use, reproduction, and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).
title Concordant Cues in Faces and Voices: Testing the Backup Signal Hypothesis
topic Articles
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10481076/
http://dx.doi.org/10.1177/1474704916630317