
A Robust Approach to Multimodal Deepfake Detection

The widespread use of deep learning techniques for creating realistic synthetic media, commonly known as deepfakes, poses a significant threat to individuals, organizations, and society. As the malicious use of such data could lead to harmful situations, it is becoming crucial to distinguish between authentic and fake media. Nonetheless, although deepfake generation systems can create convincing images and audio, they may struggle to maintain consistency across different data modalities, e.g., producing a realistic video sequence in which both the visual frames and the speech are fake yet consistent with one another. Moreover, these systems may fail to reproduce semantic and temporal aspects accurately. All these weaknesses can be exploited to perform robust detection of fake content. In this paper, we propose a novel approach for detecting deepfake video sequences by leveraging data multimodality. Our method extracts audio-visual features from the input video over time and analyzes them using time-aware neural networks. We exploit both the video and audio modalities to capture inconsistencies between and within them, enhancing the final detection performance. The peculiarity of the proposed method is that we never train on multimodal deepfake data, but on disjoint monomodal datasets containing visual-only or audio-only deepfakes. This frees us from relying on multimodal datasets during training, which is desirable given their scarcity in the literature, and, at test time, allows us to evaluate the robustness of the proposed detector on unseen multimodal deepfakes. We test different fusion techniques between data modalities and investigate which one leads to more robust predictions by the developed detectors. Our results indicate that a multimodal approach is more effective than a monomodal one, even when trained on disjoint monomodal datasets.
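As an illustration only (not the authors' implementation), the sketch below shows one way the approach described in the abstract could be structured: each modality's feature sequence is processed by its own time-aware network, and the per-modality scores are fused at decision level. The feature dimensions, the GRU-based branches, and the averaging fusion rule are assumptions made for this sketch; in the paper's setting, each branch would be trained on its own monomodal (visual-only or audio-only) dataset, with fusion applied only when scoring a video.

```python
# Minimal sketch, not the authors' code: feature dimensions, the GRU branches,
# and the averaging fusion rule are all assumptions made for illustration.
import torch
import torch.nn as nn


class MonomodalBranch(nn.Module):
    """Time-aware scoring of one modality from a sequence of feature vectors."""

    def __init__(self, feat_dim: int, hidden: int = 128):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)  # models temporal evolution
        self.head = nn.Linear(hidden, 1)                        # one fake/real logit

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, time, feat_dim) -> (batch, 1)
        _, h_n = self.rnn(feats)
        return self.head(h_n[-1])


class LateFusionDetector(nn.Module):
    """Each branch can be trained on its own monomodal (visual-only or audio-only)
    deepfake dataset; the modalities are combined only when scoring a video."""

    def __init__(self, video_dim: int = 512, audio_dim: int = 128):
        super().__init__()
        self.video_branch = MonomodalBranch(video_dim)
        self.audio_branch = MonomodalBranch(audio_dim)

    def forward(self, video_feats: torch.Tensor, audio_feats: torch.Tensor) -> torch.Tensor:
        p_video = torch.sigmoid(self.video_branch(video_feats))  # P(fake | visual stream)
        p_audio = torch.sigmoid(self.audio_branch(audio_feats))  # P(fake | audio stream)
        return 0.5 * (p_video + p_audio)                         # score-level fusion (one option)


if __name__ == "__main__":
    detector = LateFusionDetector()
    video = torch.randn(2, 50, 512)  # e.g. 50 frame-level visual embeddings per clip
    audio = torch.randn(2, 50, 128)  # e.g. 50 aligned audio embeddings per clip
    print(detector(video, audio).shape)  # torch.Size([2, 1]) fused fake scores
```

Score-level (late) fusion keeps the two branches fully independent, which fits the constraint of never training on multimodal data; the paper compares several such fusion strategies to find the most robust one.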


Bibliographic Details
Main Authors: Salvi, Davide; Liu, Honggu; Mandelli, Sara; Bestagini, Paolo; Zhou, Wenbo; Zhang, Weiming; Tubaro, Stefano
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10299653/
https://www.ncbi.nlm.nih.gov/pubmed/37367470
http://dx.doi.org/10.3390/jimaging9060122
author Salvi, Davide
Liu, Honggu
Mandelli, Sara
Bestagini, Paolo
Zhou, Wenbo
Zhang, Weiming
Tubaro, Stefano
collection PubMed
format Online
Article
Text
id pubmed-10299653
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10299653 2023-06-28. J Imaging, Article. MDPI, 2023-06-19. /pmc/articles/PMC10299653/ /pubmed/37367470 http://dx.doi.org/10.3390/jimaging9060122 Text en. © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title A Robust Approach to Multimodal Deepfake Detection
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10299653/
https://www.ncbi.nlm.nih.gov/pubmed/37367470
http://dx.doi.org/10.3390/jimaging9060122