“In the Wild” Video Content as a Special Case of User Generated Content and a System for Its Recognition

Bibliographic Details
Main Authors: Leszczuk, Mikołaj; Kobosko, Marek; Nawała, Jakub; Korus, Filip; Grega, Michał
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9961411/
https://www.ncbi.nlm.nih.gov/pubmed/36850368
http://dx.doi.org/10.3390/s23041769
Description
Summary: In the five years between 2017 and 2022, IP video traffic tripled, according to Cisco. User-Generated Content (UGC) is largely responsible for this IP video traffic. Although early UGC was frequently characterized by amateur acquisition conditions and unprofessional processing, widely accessible knowledge and affordable equipment now make it possible to produce UGC whose quality is practically indistinguishable from professional content. In this research, we focus only on UGC whose quality clearly differs from that of professional content. For the purposes of this paper, “in the wild” content denotes a particular case of the general idea of UGC. Studies on UGC recognition are scarce; according to the literature, there are currently no operational algorithms that distinguish UGC from other content. In this study, we demonstrate that the Extreme Gradient Boosting (XGBoost) machine learning algorithm can be used to develop a novel objective “in the wild” video content recognition model. The final model is trained and tested on video sequence databases containing both professional and “in the wild” content. The model achieves an accuracy of 0.916. Owing to this comparatively high accuracy, a free implementation is made available to the research community as an easy-to-use Python package installable with pip (Pip Installs Packages).
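
The recognition task described in the summary can be illustrated with a minimal sketch: a binary XGBoost classifier trained on per-video feature vectors, with class 1 standing for “in the wild” content and class 0 for professional content. The feature matrix, labels, and hyperparameters below are placeholders for illustration only; they are not the authors' actual feature set, training data, or published configuration, and the released pip package is not named here.

# Minimal sketch: "in the wild" vs. professional video classification with XGBoost.
# All data and hyperparameters are illustrative placeholders, not the authors' setup.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))    # placeholder: 500 videos x 10 numeric features per video
y = rng.integers(0, 2, size=500)  # placeholder labels: 1 = "in the wild", 0 = professional

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = XGBClassifier(
    n_estimators=200,             # illustrative values, not the published configuration
    max_depth=4,
    learning_rate=0.1,
    eval_metric="logloss",
)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

In a real setup, the placeholder features would be replaced by numeric indicators computed from each video sequence; the classifier itself is a standard supervised binary model.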