Deepfake detection with and without content warnings

Bibliographic Details
Main Authors: Lewis, Andrew, Vu, Patrick, Duch, Raymond M., Chowdhury, Areeq
Format: Online Article Text
Language: English
Published: The Royal Society 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10679876/
https://www.ncbi.nlm.nih.gov/pubmed/38026025
http://dx.doi.org/10.1098/rsos.231214
Description
Summary: The rapid advancement of ‘deepfake’ video technology—which uses deep learning artificial intelligence algorithms to create fake videos that look real—has given urgency to the question of how policymakers and technology companies should moderate inauthentic content. We conduct an experiment to measure people's alertness to and ability to detect a high-quality deepfake among a set of videos. First, we find that in a natural setting with no content warnings, individuals exposed to a deepfake video with neutral content are no more likely to detect anything out of the ordinary (32.9%) than a control group who viewed only authentic videos (34.1%). Second, we find that when individuals are warned that at least one video in a set of five is a deepfake, only 21.6% of respondents correctly identify the deepfake as the only inauthentic video, while the remainder erroneously select at least one genuine video as a deepfake.