Fooled twice: People cannot detect deepfakes but think they can
Hyper-realistic manipulations of audio-visual content, i.e., deepfakes, present new challenges for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (1) people cannot reliably detect deepfakes and (2) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (3) people are biased toward mistaking deepfakes as authentic videos (rather than vice versa) and (4) they overestimate their own detection abilities. Together, these results suggest that people adopt a “seeing-is-believing” heuristic for deepfake detection while being overconfident in their (low) detection abilities. The combination renders people particularly susceptible to be influenced by deepfake content.
Main Authors: Köbis, Nils C.; Doležalová, Barbora; Soraperra, Ivan
Format: Online Article Text
Language: English
Published: Elsevier, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8602050/ https://www.ncbi.nlm.nih.gov/pubmed/34820608 http://dx.doi.org/10.1016/j.isci.2021.103364
_version_ | 1784601493438988288 |
---|---|
author | Köbis, Nils C. Doležalová, Barbora Soraperra, Ivan |
author_facet | Köbis, Nils C. Doležalová, Barbora Soraperra, Ivan |
author_sort | Köbis, Nils C. |
collection | PubMed |
description | Hyper-realistic manipulations of audio-visual content, i.e., deepfakes, present new challenges for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (1) people cannot reliably detect deepfakes and (2) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (3) people are biased toward mistaking deepfakes as authentic videos (rather than vice versa) and (4) they overestimate their own detection abilities. Together, these results suggest that people adopt a “seeing-is-believing” heuristic for deepfake detection while being overconfident in their (low) detection abilities. The combination renders people particularly susceptible to be influenced by deepfake content. |
format | Online Article Text |
id | pubmed-8602050 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Elsevier |
record_format | MEDLINE/PubMed |
spelling | pubmed-8602050 2021-11-23 Fooled twice: People cannot detect deepfakes but think they can Köbis, Nils C. Doležalová, Barbora Soraperra, Ivan iScience Article Hyper-realistic manipulations of audio-visual content, i.e., deepfakes, present new challenges for establishing the veracity of online content. Research on the human impact of deepfakes remains sparse. In a pre-registered behavioral experiment (N = 210), we show that (1) people cannot reliably detect deepfakes and (2) neither raising awareness nor introducing financial incentives improves their detection accuracy. Zeroing in on the underlying cognitive processes, we find that (3) people are biased toward mistaking deepfakes as authentic videos (rather than vice versa) and (4) they overestimate their own detection abilities. Together, these results suggest that people adopt a “seeing-is-believing” heuristic for deepfake detection while being overconfident in their (low) detection abilities. The combination renders people particularly susceptible to be influenced by deepfake content. Elsevier 2021-10-29 /pmc/articles/PMC8602050/ /pubmed/34820608 http://dx.doi.org/10.1016/j.isci.2021.103364 Text en © 2021 The Authors https://creativecommons.org/licenses/by-nc-nd/4.0/ This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). |
spellingShingle | Article Köbis, Nils C. Doležalová, Barbora Soraperra, Ivan Fooled twice: People cannot detect deepfakes but think they can |
title | Fooled twice: People cannot detect deepfakes but think they can |
title_full | Fooled twice: People cannot detect deepfakes but think they can |
title_fullStr | Fooled twice: People cannot detect deepfakes but think they can |
title_full_unstemmed | Fooled twice: People cannot detect deepfakes but think they can |
title_short | Fooled twice: People cannot detect deepfakes but think they can |
title_sort | fooled twice: people cannot detect deepfakes but think they can |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8602050/ https://www.ncbi.nlm.nih.gov/pubmed/34820608 http://dx.doi.org/10.1016/j.isci.2021.103364 |
work_keys_str_mv | AT kobisnilsc fooledtwicepeoplecannotdetectdeepfakesbutthinktheycan AT dolezalovabarbora fooledtwicepeoplecannotdetectdeepfakesbutthinktheycan AT soraperraivan fooledtwicepeoplecannotdetectdeepfakesbutthinktheycan |