
Warning: Humans cannot reliably detect speech deepfakes

Speech deepfakes are artificial voices generated by machine learning models. Previous literature has highlighted deepfakes as one of the biggest security threats arising from progress in artificial intelligence due to their potential for misuse. However, studies investigating human detection capabilities are limited. We presented genuine and deepfake audio to n = 529 individuals and asked them to identify the deepfakes. We ran our experiments in English and Mandarin to understand if language affects detection performance and decision-making rationale. We found that detection capability is unreliable. Listeners only correctly spotted the deepfakes 73% of the time, and there was no difference in detectability between the two languages. Increasing listener awareness by providing examples of speech deepfakes only improves results slightly. As speech synthesis algorithms improve and become more realistic, we can expect the detection task to become harder. The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defenses against this threat are needed.


Bibliographic Details
Main Authors: Mai, Kimberly T., Bray, Sergi, Davies, Toby, Griffin, Lewis D.
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10395974/
https://www.ncbi.nlm.nih.gov/pubmed/37531336
http://dx.doi.org/10.1371/journal.pone.0285333
collection PubMed
id pubmed-10395974
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal PLoS One
publishDate 2023-08-02
license © 2023 Mai et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
topic Research Article