
Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation


Bibliographic Details
Main Authors: Alsakar, Yasmin M., Mekky, Nagham E., Hikal, Noha A.
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8321313/
https://www.ncbi.nlm.nih.gov/pubmed/34460703
http://dx.doi.org/10.3390/jimaging7030047
_version_ 1783730821875630080
author Alsakar, Yasmin M.
Mekky, Nagham E.
Hikal, Noha A.
author_facet Alsakar, Yasmin M.
Mekky, Nagham E.
Hikal, Noha A.
author_sort Alsakar, Yasmin M.
collection PubMed
description Great attention is paid to detecting video forgeries nowadays, especially with the widespread sharing of videos over social media and websites. Many video editing programs are available and perform well at tampering with video content or even creating fake videos. Forgery affects video integrity and authenticity and has serious implications; for example, digital videos for security and surveillance purposes are used as evidence in courts. In this paper, a newly developed passive video forgery scheme is introduced and discussed. The scheme represents highly correlated video data as a low-computational-complexity third-order tensor in tube-fiber mode. An arbitrary number of core tensors is selected to detect and locate two serious types of forgery: insertion and deletion. The tensor data are orthogonally transformed to achieve further data reduction and to provide good features for tracing forgery along the whole video. Experimental results and comparisons show the superiority of the proposed scheme, with a precision of up to 99% in detecting and locating both types of attack for static as well as dynamic videos, quick-moving foreground items (single or multiple), and zooming-in and zooming-out datasets, which are rarely tested in previous works. Moreover, the proposed scheme offers a reduction in time and linear computational complexity. On the computer configuration used, an average of 35 s is needed to detect and locate 40 forged frames out of 300. (An illustrative sketch of such a detection pipeline follows this record.)
format Online
Article
Text
id pubmed-8321313
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-83213132021-08-26 Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation Alsakar, Yasmin M. Mekky, Nagham E. Hikal, Noha A. J Imaging Article Great attention is paid to detecting video forgeries nowadays, especially with the widespread sharing of videos over social media and websites. Many video editing software programs are available and perform well in tampering with video contents or even creating fake videos. Forgery affects video integrity and authenticity and has serious implications. For example, digital videos for security and surveillance purposes are used as evidence in courts. In this paper, a newly developed passive video forgery scheme is introduced and discussed. The developed scheme is based on representing highly correlated video data with a low computational complexity third-order tensor tube-fiber mode. An arbitrary number of core tensors is selected to detect and locate two serious types of forgeries which are: insertion and deletion. These tensor data are orthogonally transformed to achieve more data reductions and to provide good features to trace forgery along the whole video. Experimental results and comparisons show the superiority of the proposed scheme with a precision value of up to 99% in detecting and locating both types of attacks for static as well as dynamic videos, quick-moving foreground items (single or multiple), zooming in and zooming out datasets which are rarely tested by previous works. Moreover, the proposed scheme offers a reduction in time and a linear computational complexity. Based on the used computer’s configurations, an average time of 35 s. is needed to detect and locate 40 forged frames out of 300 frames. MDPI 2021-03-05 /pmc/articles/PMC8321313/ /pubmed/34460703 http://dx.doi.org/10.3390/jimaging7030047 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) ).
spellingShingle Article
Alsakar, Yasmin M.
Mekky, Nagham E.
Hikal, Noha A.
Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation
title Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation
title_full Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation
title_fullStr Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation
title_full_unstemmed Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation
title_short Detecting and Locating Passive Video Forgery Based on Low Computational Complexity Third-Order Tensor Representation
title_sort detecting and locating passive video forgery based on low computational complexity third-order tensor representation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8321313/
https://www.ncbi.nlm.nih.gov/pubmed/34460703
http://dx.doi.org/10.3390/jimaging7030047
work_keys_str_mv AT alsakaryasminm detectingandlocatingpassivevideoforgerybasedonlowcomputationalcomplexitythirdordertensorrepresentation
AT mekkynaghame detectingandlocatingpassivevideoforgerybasedonlowcomputationalcomplexitythirdordertensorrepresentation
AT hikalnohaa detectingandlocatingpassivevideoforgerybasedonlowcomputationalcomplexitythirdordertensorrepresentation
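
The abstract above only describes the method at a high level, so the following is a minimal, hypothetical Python sketch of the kind of pipeline it suggests: grayscale frames are stacked into a third-order tensor (time x height x width), each frame is reduced to a compact set of orthogonal-transform coefficients, and abrupt drops in the correlation of consecutive frames' features are flagged as candidate insertion/deletion points. The 2-D DCT, the 8x8 low-frequency block, and the 0.9 correlation threshold are illustrative assumptions standing in for the paper's tube-fiber tensor representation and core-tensor selection, which this record does not detail.

```python
# Minimal sketch (not the authors' published method): stack frames into a
# third-order tensor, compress each frame with an orthogonal transform, and
# flag abrupt drops in the correlation of consecutive frame features as
# candidate insertion/deletion points. Transform, block size, and threshold
# are illustrative assumptions.
import numpy as np
from scipy.fft import dctn  # orthogonal 2-D DCT as a stand-in transform

def frame_features(frames, block=8):
    """frames: third-order tensor of shape (T, H, W), grayscale floats.
    Returns one compact feature vector per frame."""
    feats = []
    for frame in frames:
        coeffs = dctn(frame, norm="ortho")
        # Keep a small low-frequency block and drop the DC term so that a
        # global brightness shift does not dominate the feature.
        feats.append(coeffs[:block, :block].ravel()[1:])
    return np.asarray(feats)

def suspect_cuts(frames, threshold=0.9):
    """Return frame indices where the correlation between consecutive
    frame features falls below `threshold` (possible insertion/deletion)."""
    feats = frame_features(frames)
    suspects = []
    for t in range(1, len(feats)):
        corr = np.corrcoef(feats[t - 1], feats[t])[0, 1]
        if corr < threshold:
            suspects.append(t)
    return suspects

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic "video": 300 slowly varying frames with a 40-frame alien
    # segment spliced in at index 100 to mimic an insertion attack.
    base, alien = rng.random((64, 64)), rng.random((64, 64))
    video = np.stack([base + 0.01 * t for t in range(300)])
    video[100:140] = np.stack([alien + 0.01 * t for t in range(40)])
    print(suspect_cuts(video))  # expect [100, 140], the splice boundaries
```

In the actual paper, the per-frame DCT step would correspond to the tube-fiber tensor representation with selected core tensors, and the reported figures (about 35 s for 300 frames, up to 99% precision) refer to that method, not to this sketch.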