Deepfake attack prevention using steganography GANs

Bibliographic Details
Main Authors: Noreen, Iram, Muneer, Muhammad Shahid, Gillani, Saira
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2022
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9680891/
https://www.ncbi.nlm.nih.gov/pubmed/36426246
http://dx.doi.org/10.7717/peerj-cs.1125
_version_ 1784834504541601792
author Noreen, Iram
Muneer, Muhammad Shahid
Gillani, Saira
author_facet Noreen, Iram
Muneer, Muhammad Shahid
Gillani, Saira
author_sort Noreen, Iram
collection PubMed
description BACKGROUND: Deepfakes are fake images or videos generated by deep learning algorithms. Ongoing progress in deep learning techniques such as auto-encoders and generative adversarial networks (GANs) is approaching a level that makes reliable deepfake detection nearly impossible. A deepfake is created by swapping the content of videos, images, or audio with that of a target, consequently raising digital media threats over the internet. Much work has been done to detect deepfake videos through feature detection using convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spatiotemporal CNNs. However, these techniques will not remain effective as GANs continue to improve. StyleGANs can already create highly realistic fake videos that cannot be easily detected. Hence, deepfake prevention, rather than mere detection, is the need of the hour. METHODS: Recently, blockchain-based ownership methods, image tags, and watermarks in video frames have been used to prevent deepfakes. However, these measures are not fully effective: an image frame could still be faked by copying its watermark and reusing it to create a deepfake. In this research, an enhanced, modified version of the RivaGAN steganography technique is used to address this issue. The proposed approach encodes watermarks into the features of video frames by training an “attention model” with the ReLU activation function to achieve a fast learning rate. RESULTS: The proposed attention-generating approach was validated with multiple activation functions and learning rates and achieved 99.7% accuracy in embedding watermarks into video frames. After generating the attention model, a generative adversarial network was trained using DeepFaceLab 2.0 and tested for prevention of deepfake attacks on watermark-embedded videos comprising 8,074 frames from different benchmark datasets. The proposed approach achieved a 100% success rate in preventing deepfake attacks. Our code is available at https://github.com/shahidmuneer/deepfakes-watermarking-technique.
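The method described above hinges on an attention model, trained with ReLU activations, that decides where in each frame the watermark bits are embedded. Below is a minimal PyTorch sketch of that idea for a single frame; it is not the authors' implementation (their code is in the GitHub repository linked above), and the module names, layer sizes, and 32-bit payload are illustrative assumptions.

# Minimal sketch (assumed architecture, not the authors' code): an attention-guided
# watermark embedder for video frames, loosely in the spirit of RivaGAN-style steganography.
import torch
import torch.nn as nn

class AttentionEncoder(nn.Module):
    """Embeds a binary watermark into a frame, weighted by a learned attention map."""
    def __init__(self, data_dim: int = 32):
        super().__init__()
        # Attention branch: predicts a per-pixel embedding strength in [0, 1].
        self.attention = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),  # ReLU activation, as in the abstract, for fast training
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Embedding branch: mixes frame features with the watermark payload.
        self.embed = nn.Sequential(
            nn.Conv2d(3 + data_dim, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Tanh(),  # bounded residual added to the frame
        )

    def forward(self, frame: torch.Tensor, watermark: torch.Tensor) -> torch.Tensor:
        # frame: (B, 3, H, W); watermark: (B, data_dim) binary payload
        _, _, h, w = frame.shape
        payload = watermark[:, :, None, None].expand(-1, -1, h, w)
        residual = self.embed(torch.cat([frame, payload], dim=1))
        return frame + self.attention(frame) * residual  # watermarked frame

if __name__ == "__main__":
    encoder = AttentionEncoder(data_dim=32)
    frame = torch.rand(1, 3, 128, 128)           # one video frame
    bits = torch.randint(0, 2, (1, 32)).float()  # hypothetical 32-bit watermark
    marked = encoder(frame, bits)
    print(marked.shape)  # torch.Size([1, 3, 128, 128])

In a full pipeline, a companion decoder would be trained jointly to recover the payload from watermarked (and possibly manipulated) frames; a frame whose watermark can no longer be recovered would be treated as tampered.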
format Online
Article
Text
id pubmed-9680891
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-9680891 2022-11-23 Deepfake attack prevention using steganography GANs Noreen, Iram Muneer, Muhammad Shahid Gillani, Saira PeerJ Comput Sci Artificial Intelligence BACKGROUND: Deepfakes are fake images or videos generated by deep learning algorithms. Ongoing progress in deep learning techniques such as auto-encoders and generative adversarial networks (GANs) is approaching a level that makes reliable deepfake detection nearly impossible. A deepfake is created by swapping the content of videos, images, or audio with that of a target, consequently raising digital media threats over the internet. Much work has been done to detect deepfake videos through feature detection using convolutional neural networks (CNNs), recurrent neural networks (RNNs), and spatiotemporal CNNs. However, these techniques will not remain effective as GANs continue to improve. StyleGANs can already create highly realistic fake videos that cannot be easily detected. Hence, deepfake prevention, rather than mere detection, is the need of the hour. METHODS: Recently, blockchain-based ownership methods, image tags, and watermarks in video frames have been used to prevent deepfakes. However, these measures are not fully effective: an image frame could still be faked by copying its watermark and reusing it to create a deepfake. In this research, an enhanced, modified version of the RivaGAN steganography technique is used to address this issue. The proposed approach encodes watermarks into the features of video frames by training an “attention model” with the ReLU activation function to achieve a fast learning rate. RESULTS: The proposed attention-generating approach was validated with multiple activation functions and learning rates and achieved 99.7% accuracy in embedding watermarks into video frames. After generating the attention model, a generative adversarial network was trained using DeepFaceLab 2.0 and tested for prevention of deepfake attacks on watermark-embedded videos comprising 8,074 frames from different benchmark datasets. The proposed approach achieved a 100% success rate in preventing deepfake attacks. Our code is available at https://github.com/shahidmuneer/deepfakes-watermarking-technique. PeerJ Inc. 2022-10-20 /pmc/articles/PMC9680891/ /pubmed/36426246 http://dx.doi.org/10.7717/peerj-cs.1125 Text en © 2022 Noreen et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Artificial Intelligence
Noreen, Iram
Muneer, Muhammad Shahid
Gillani, Saira
Deepfake attack prevention using steganography GANs
title Deepfake attack prevention using steganography GANs
title_full Deepfake attack prevention using steganography GANs
title_fullStr Deepfake attack prevention using steganography GANs
title_full_unstemmed Deepfake attack prevention using steganography GANs
title_short Deepfake attack prevention using steganography GANs
title_sort deepfake attack prevention using steganography gans
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9680891/
https://www.ncbi.nlm.nih.gov/pubmed/36426246
http://dx.doi.org/10.7717/peerj-cs.1125
work_keys_str_mv AT noreeniram deepfakeattackpreventionusingsteganographygans
AT muneermuhammadshahid deepfakeattackpreventionusingsteganographygans
AT gillanisaira deepfakeattackpreventionusingsteganographygans