
Deepfake forensics analysis: An explainable hierarchical ensemble of weakly supervised models

Deepfakes have become exponentially more common and sophisticated in recent years, so much so that forensic specialists, policy makers, and the public alike are anxious about their role in spreading disinformation. Detecting and creating such forgeries has recently become a popular research topic,...


Bibliographic Details
Main Authors: Silva, Samuel Henrique, Bethany, Mazal, Votto, Alexis Megan, Scarff, Ian Henry, Beebe, Nicole, Najafirad, Peyman
Format: Online Article Text
Language: English
Published: Elsevier 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8808059/
https://www.ncbi.nlm.nih.gov/pubmed/35128371
http://dx.doi.org/10.1016/j.fsisyn.2022.100217
_version_ 1784643804270166016
author Silva, Samuel Henrique
Bethany, Mazal
Votto, Alexis Megan
Scarff, Ian Henry
Beebe, Nicole
Najafirad, Peyman
author_facet Silva, Samuel Henrique
Bethany, Mazal
Votto, Alexis Megan
Scarff, Ian Henry
Beebe, Nicole
Najafirad, Peyman
author_sort Silva, Samuel Henrique
collection PubMed
description Deepfakes have become exponentially more common and sophisticated in recent years, so much so that forensic specialists, policy makers, and the public alike are anxious about their role in spreading disinformation. Detecting and creating such forgeries has recently become a popular research topic, leading to significant growth in publications on deepfake creation, detection methods, and datasets covering the latest creation techniques. The most successful approaches to identifying and preventing deepfakes are deep learning methods that use convolutional neural networks as the backbone of a binary classification task: a convolutional neural network extracts the underlying patterns from the input frames and feeds them to a fully connected binary classification network, which labels those patterns as trustworthy or untrustworthy. We argue that this approach falls short when generation algorithms constantly evolve, because the detector is not robust enough to pick up the comparatively minor artifacts that newer generators introduce. This work proposes a hierarchical, explainable forensics algorithm that keeps humans in the detection loop. We first screen the data with a deep learning detection algorithm and present an explainable decision to human analysts, alongside a set of forensic analyses of the decision region. On the detection side, we propose an attention-based, explainable deepfake detection algorithm. To address the generalization issue, we implement an ensemble of standard and attention-based, data-augmented detection networks. The attention blocks identify the face regions on which each model focuses its decision; during training we both drop and enlarge those regions, pushing the model to base its decision on more of the face while maintaining a specific focal point. The ensemble of such models further improves generalization. We also explain each model's decision with Grad-CAM, focusing on the attention maps. The region uncovered by the explanation layer is cropped and undergoes a series of frequency and statistical analyses that help a human analyst decide whether the frame is real or fake. We evaluate our model on one of the most challenging datasets, the Deepfake Detection Challenge (DFDC) dataset, and achieve an accuracy of 92.4%. We maintain this accuracy on datasets not used during training.
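The abstract describes a concrete pipeline: an ensemble of CNN binary classifiers scores each face frame, a Grad-CAM-style saliency map localizes the evidence behind the score, and the highlighted region is cropped and frequency-analyzed for human review. Below is a minimal Python/PyTorch sketch of that flow; the tiny backbone, the three-model ensemble, the 64x64 crop size, and the log-magnitude FFT spectrum are illustrative assumptions, not the authors' implementation.

import numpy as np
import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Stand-in for a CNN backbone plus a fully connected binary head."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: fake vs. real

    def forward(self, x):
        feats = self.backbone(x).flatten(1)
        return self.head(feats)

def ensemble_fake_probability(models, frame):
    """Average the sigmoid outputs of several detectors (simple soft voting)."""
    with torch.no_grad():
        probs = [torch.sigmoid(m(frame)) for m in models]
    return torch.stack(probs).mean().item()

def crop_salient_region(frame, saliency, size=64):
    """Crop a square window around the peak of a Grad-CAM-like saliency map."""
    _, _, h, w = frame.shape
    cy, cx = np.unravel_index(np.argmax(saliency), saliency.shape)
    y0 = int(np.clip(cy - size // 2, 0, h - size))
    x0 = int(np.clip(cx - size // 2, 0, w - size))
    return frame[:, :, y0:y0 + size, x0:x0 + size]

def log_magnitude_spectrum(patch):
    """2D FFT log-magnitude of the grayscale patch; generator artifacts
    often appear as periodic peaks in this spectrum."""
    gray = patch.squeeze(0).mean(dim=0).numpy()
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))

# Usage: score a frame, then surface the evidence a human analyst would inspect.
models = [TinyDetector().eval() for _ in range(3)]
frame = torch.rand(1, 3, 224, 224)   # placeholder face crop
saliency = np.random.rand(224, 224)  # placeholder Grad-CAM map
p_fake = ensemble_fake_probability(models, frame)
patch = crop_salient_region(frame, saliency)
spectrum = log_magnitude_spectrum(patch)
print(f"ensemble P(fake) = {p_fake:.3f}, spectrum shape = {spectrum.shape}")

Soft voting (averaging sigmoid outputs) is only one way to combine the ensemble; the paper's attention-guided drop/enlarge augmentation and the specific statistical tests applied to the cropped patch are not reproduced in this sketch.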
format Online
Article
Text
id pubmed-8808059
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-8808059 2022-02-04 Deepfake forensics analysis: An explainable hierarchical ensemble of weakly supervised models Silva, Samuel Henrique Bethany, Mazal Votto, Alexis Megan Scarff, Ian Henry Beebe, Nicole Najafirad, Peyman Forensic Sci Int Synerg Interdisciplinary Forensics Elsevier 2022-01-27 /pmc/articles/PMC8808059/ /pubmed/35128371 http://dx.doi.org/10.1016/j.fsisyn.2022.100217 Text en © 2022 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle Interdisciplinary Forensics
Silva, Samuel Henrique
Bethany, Mazal
Votto, Alexis Megan
Scarff, Ian Henry
Beebe, Nicole
Najafirad, Peyman
Deepfake forensics analysis: An explainable hierarchical ensemble of weakly supervised models
title Deepfake forensics analysis: An explainable hierarchical ensemble of weakly supervised models
title_full Deepfake forensics analysis: An explainable hierarchical ensemble of weakly supervised models
title_fullStr Deepfake forensics analysis: An explainable hierarchical ensemble of weakly supervised models
title_full_unstemmed Deepfake forensics analysis: An explainable hierarchical ensemble of weakly supervised models
title_short Deepfake forensics analysis: An explainable hierarchical ensemble of weakly supervised models
title_sort deepfake forensics analysis: an explainable hierarchical ensemble of weakly supervised models
topic Interdisciplinary Forensics
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8808059/
https://www.ncbi.nlm.nih.gov/pubmed/35128371
http://dx.doi.org/10.1016/j.fsisyn.2022.100217
work_keys_str_mv AT silvasamuelhenrique deepfakeforensicsanalysisanexplainablehierarchicalensembleofweaklysupervisedmodels
AT bethanymazal deepfakeforensicsanalysisanexplainablehierarchicalensembleofweaklysupervisedmodels
AT vottoalexismegan deepfakeforensicsanalysisanexplainablehierarchicalensembleofweaklysupervisedmodels
AT scarffianhenry deepfakeforensicsanalysisanexplainablehierarchicalensembleofweaklysupervisedmodels
AT beebenicole deepfakeforensicsanalysisanexplainablehierarchicalensembleofweaklysupervisedmodels
AT najafiradpeyman deepfakeforensicsanalysisanexplainablehierarchicalensembleofweaklysupervisedmodels