
Deepfake detection by human crowds, machines, and machine-informed crowds

The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model’s prediction are more accurate than either alone, but inaccurate model predictions often decrease participants’ accuracy. To probe the relative strengths and weaknesses of humans and machines as detectors of deepfakes, we examine human and machine performance across video-level features, and we evaluate the impact of preregistered randomized interventions on deepfake detection. We find that manipulations designed to disrupt visual processing of faces hinder human participants’ performance while mostly not affecting the model’s performance, suggesting a role for specialized cognitive capacities in explaining human deepfake detection performance.


Bibliographic Details
Main Authors: Groh, Matthew, Epstein, Ziv, Firestone, Chaz, Picard, Rosalind
Format: Online Article Text
Language: English
Published: National Academy of Sciences 2021
Subjects: Social Sciences
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8740705/
https://www.ncbi.nlm.nih.gov/pubmed/34969837
http://dx.doi.org/10.1073/pnas.2110013119
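As an illustrative aside only (not part of this record or the authors' analysis), the abstract's comparison of crowd, model, and machine-informed-crowd accuracy can be sketched in a few lines of Python. The labels, votes, and model confidences below are made-up placeholders, and averaging crowd and model confidence is just one naive way to combine the two signals.

```python
import numpy as np

# Hypothetical per-video data: ground truth (1 = deepfake, 0 = authentic),
# individual participant guesses, and a model's predicted probability of "fake".
labels = np.array([1, 0, 1, 1, 0])                      # ground truth
crowd_votes = np.array([                                 # rows: videos, cols: participants
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 0, 1],
])
model_prob_fake = np.array([0.9, 0.2, 0.4, 0.8, 0.6])    # model confidence per video

# Crowd signal: average the individual votes into a per-video confidence.
crowd_conf = crowd_votes.mean(axis=1)

# A naive "machine-informed crowd": average crowd confidence with the model's.
combined_conf = (crowd_conf + model_prob_fake) / 2

def accuracy(conf, labels, threshold=0.5):
    """Fraction of videos classified correctly at a fixed decision threshold."""
    return np.mean((conf >= threshold).astype(int) == labels)

print("crowd accuracy:   ", accuracy(crowd_conf, labels))
print("model accuracy:   ", accuracy(model_prob_fake, labels))
print("combined accuracy:", accuracy(combined_conf, labels))
```

With these placeholder numbers the combined score preserves the crowd's correct calls; with real data, as the abstract notes, inaccurate model predictions can also pull participants' accuracy down.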
_version_ 1784629359582117888
author Groh, Matthew
Epstein, Ziv
Firestone, Chaz
Picard, Rosalind
author_facet Groh, Matthew
Epstein, Ziv
Firestone, Chaz
Picard, Rosalind
author_sort Groh, Matthew
collection PubMed
description The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model’s prediction are more accurate than either alone, but inaccurate model predictions often decrease participants’ accuracy. To probe the relative strengths and weaknesses of humans and machines as detectors of deepfakes, we examine human and machine performance across video-level features, and we evaluate the impact of preregistered randomized interventions on deepfake detection. We find that manipulations designed to disrupt visual processing of faces hinder human participants’ performance while mostly not affecting the model’s performance, suggesting a role for specialized cognitive capacities in explaining human deepfake detection performance.
format Online
Article
Text
id pubmed-8740705
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher National Academy of Sciences
record_format MEDLINE/PubMed
spelling pubmed-87407052022-01-25 Deepfake detection by human crowds, machines, and machine-informed crowds Groh, Matthew Epstein, Ziv Firestone, Chaz Picard, Rosalind Proc Natl Acad Sci U S A Social Sciences The recent emergence of machine-manipulated media raises an important societal question: How can we know whether a video that we watch is real or fake? In two online studies with 15,016 participants, we present authentic videos and deepfakes and ask participants to identify which is which. We compare the performance of ordinary human observers with the leading computer vision deepfake detection model and find them similarly accurate, while making different kinds of mistakes. Together, participants with access to the model’s prediction are more accurate than either alone, but inaccurate model predictions often decrease participants’ accuracy. To probe the relative strengths and weaknesses of humans and machines as detectors of deepfakes, we examine human and machine performance across video-level features, and we evaluate the impact of preregistered randomized interventions on deepfake detection. We find that manipulations designed to disrupt visual processing of faces hinder human participants’ performance while mostly not affecting the model’s performance, suggesting a role for specialized cognitive capacities in explaining human deepfake detection performance. National Academy of Sciences 2021-12-28 2022-01-04 /pmc/articles/PMC8740705/ /pubmed/34969837 http://dx.doi.org/10.1073/pnas.2110013119 Text en Copyright © 2021 the Author(s). Published by PNAS. https://creativecommons.org/licenses/by-nc-nd/4.0/This open access article is distributed under Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND) (https://creativecommons.org/licenses/by-nc-nd/4.0/) .
spellingShingle Social Sciences
Groh, Matthew
Epstein, Ziv
Firestone, Chaz
Picard, Rosalind
Deepfake detection by human crowds, machines, and machine-informed crowds
title Deepfake detection by human crowds, machines, and machine-informed crowds
title_full Deepfake detection by human crowds, machines, and machine-informed crowds
title_fullStr Deepfake detection by human crowds, machines, and machine-informed crowds
title_full_unstemmed Deepfake detection by human crowds, machines, and machine-informed crowds
title_short Deepfake detection by human crowds, machines, and machine-informed crowds
title_sort deepfake detection by human crowds, machines, and machine-informed crowds
topic Social Sciences
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8740705/
https://www.ncbi.nlm.nih.gov/pubmed/34969837
http://dx.doi.org/10.1073/pnas.2110013119
work_keys_str_mv AT grohmatthew deepfakedetectionbyhumancrowdsmachinesandmachineinformedcrowds
AT epsteinziv deepfakedetectionbyhumancrowdsmachinesandmachineinformedcrowds
AT firestonechaz deepfakedetectionbyhumancrowdsmachinesandmachineinformedcrowds
AT picardrosalind deepfakedetectionbyhumancrowdsmachinesandmachineinformedcrowds