Deepfake detection with and without content warnings
The rapid advancement of ‘deepfake’ video technology—which uses deep learning artificial intelligence algorithms to create fake videos that look real—has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people's alertness to and ability to detect a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals who are exposed to a deepfake video of neutral content are no more likely to detect anything out of the ordinary (32.9%) compared to a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are given a warning that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video, while the remainder erroneously select at least one genuine video as a deepfake.
Main Authors: | Lewis, Andrew; Vu, Patrick; Duch, Raymond M.; Chowdhury, Areeq |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | The Royal Society, 2023 |
Subjects: | Science, Society and Policy |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10679876/ https://www.ncbi.nlm.nih.gov/pubmed/38026025 http://dx.doi.org/10.1098/rsos.231214 |
Field | Value
---|---|
_version_ | 1785150642612862976
author | Lewis, Andrew; Vu, Patrick; Duch, Raymond M.; Chowdhury, Areeq
author_facet | Lewis, Andrew; Vu, Patrick; Duch, Raymond M.; Chowdhury, Areeq
author_sort | Lewis, Andrew |
collection | PubMed |
description | The rapid advancement of ‘deepfake’ video technology—which uses deep learning artificial intelligence algorithms to create fake videos that look real—has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people's alertness to and ability to detect a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals who are exposed to a deepfake video of neutral content are no more likely to detect anything out of the ordinary (32.9%) compared to a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are given a warning that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video, while the remainder erroneously select at least one genuine video as a deepfake.
format | Online Article Text |
id | pubmed-10679876 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | The Royal Society |
record_format | MEDLINE/PubMed |
spelling | pubmed-10679876 2023-11-27 Deepfake detection with and without content warnings Lewis, Andrew; Vu, Patrick; Duch, Raymond M.; Chowdhury, Areeq R Soc Open Sci Science, Society and Policy The rapid advancement of ‘deepfake’ video technology—which uses deep learning artificial intelligence algorithms to create fake videos that look real—has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people's alertness to and ability to detect a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals who are exposed to a deepfake video of neutral content are no more likely to detect anything out of the ordinary (32.9%) compared to a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are given a warning that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video, while the remainder erroneously select at least one genuine video as a deepfake. The Royal Society 2023-11-27 /pmc/articles/PMC10679876/ /pubmed/38026025 http://dx.doi.org/10.1098/rsos.231214 Text en © 2023 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, provided the original author and source are credited.
spellingShingle | Science, Society and Policy; Lewis, Andrew; Vu, Patrick; Duch, Raymond M.; Chowdhury, Areeq; Deepfake detection with and without content warnings
title | Deepfake detection with and without content warnings |
title_full | Deepfake detection with and without content warnings |
title_fullStr | Deepfake detection with and without content warnings |
title_full_unstemmed | Deepfake detection with and without content warnings |
title_short | Deepfake detection with and without content warnings |
title_sort | deepfake detection with and without content warnings |
topic | Science, Society and Policy |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10679876/ https://www.ncbi.nlm.nih.gov/pubmed/38026025 http://dx.doi.org/10.1098/rsos.231214 |
work_keys_str_mv | AT lewisandrew deepfakedetectionwithandwithoutcontentwarnings AT vupatrick deepfakedetectionwithandwithoutcontentwarnings AT duchraymondm deepfakedetectionwithandwithoutcontentwarnings AT chowdhuryareeq deepfakedetectionwithandwithoutcontentwarnings |