
A Short Video Classification Framework Based on Cross-Modal Fusion


Bibliographic Details
Main Authors: Pang, Nuo, Guo, Songlin, Yan, Ming, Chan, Chien Aun
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10611385/
https://www.ncbi.nlm.nih.gov/pubmed/37896519
http://dx.doi.org/10.3390/s23208425
author Pang, Nuo
Guo, Songlin
Yan, Ming
Chan, Chien Aun
collection PubMed
description The explosive growth of online short videos has brought great challenges to the efficient management of video content classification, retrieval, and recommendation. Video features for video management can be extracted from video image frames by various algorithms, and they have been proven to be effective in the video classification of sensor systems. However, frame-by-frame processing of video image frames not only requires huge computing power, but also classification algorithms based on a single modality of video features cannot meet the accuracy requirements in specific scenarios. In response to these concerns, we introduce a short video categorization architecture centered around cross-modal fusion in visual sensor systems which jointly utilizes video features and text features to classify short videos, avoiding processing a large number of image frames during classification. Firstly, the image space is extended to three-dimensional space–time by a self-attention mechanism, and a series of patches are extracted from a single image frame. Each patch is linearly mapped into the embedding layer of the Timesformer network and augmented with positional information to extract video features. Second, the text features of subtitles are extracted through the bidirectional encoder representation from the Transformers (BERT) pre-training model. Finally, cross-modal fusion is performed based on the extracted video and text features, resulting in improved accuracy for short video classification tasks. The outcomes of our experiments showcase a substantial superiority of our introduced classification framework compared to alternative baseline video classification methodologies. This framework can be applied in sensor systems for potential video classification.
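The pipeline the abstract describes (splitting a frame into patches, linearly embedding each patch with added positional information, and fusing the pooled video feature with a subtitle text feature) can be sketched roughly in NumPy. Everything below is an illustrative stand-in: the array sizes, the random projection weights, and the random vector standing in for a BERT subtitle embedding are assumptions, not the paper's actual Timesformer or BERT components.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_to_patches(frame, patch=16):
    """Split an H x W x C frame into flattened non-overlapping patches."""
    h, w, c = frame.shape
    patches = [
        frame[i:i + patch, j:j + patch].reshape(-1)
        for i in range(0, h - patch + 1, patch)
        for j in range(0, w - patch + 1, patch)
    ]
    return np.stack(patches)  # shape: (num_patches, patch * patch * c)

# Toy 32x32 RGB frame -> 4 patches of 16*16*3 = 768 values each
frame = rng.random((32, 32, 3))
patches = frame_to_patches(frame)

# Linear embedding plus positional information (random stand-ins for
# the learned projection and position embeddings)
d_model = 64
W_embed = rng.random((patches.shape[1], d_model))
pos = rng.random((patches.shape[0], d_model))
video_tokens = patches @ W_embed + pos  # shape: (4, 64)

# Stand-in for a pooled BERT subtitle embedding (real BERT-base outputs
# width 768; a small width keeps the toy example readable)
text_feature = rng.random(d_model)

# Simple late fusion: pool the video tokens, then concatenate with text.
# The paper's actual fusion mechanism may differ.
video_feature = video_tokens.mean(axis=0)
fused = np.concatenate([video_feature, text_feature])  # shape: (128,)
print(fused.shape)
```

The fused vector would then feed a classification head; in a real system the random matrices above are replaced by trained Timesformer and BERT weights.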
format Online Article Text
id pubmed-10611385
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-10611385 2023-10-28
Journal: Sensors (Basel), Article
MDPI 2023-10-12 /pmc/articles/PMC10611385/ /pubmed/37896519 http://dx.doi.org/10.3390/s23208425
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
title A Short Video Classification Framework Based on Cross-Modal Fusion
topic Article