
Implementation of Short Video Click-Through Rate Estimation Model Based on Cross-Media Collaborative Filtering Neural Network

In this paper, we analyze the construction of a cross-media collaborative filtering neural network and design a deep model for short video click-through rate estimation based on it. By directly extracting the image, behavioral, and audio features of short videos as the video representation, the model takes more video information into account than other models do. The experimental results show that incorporating these multimodal features improves AUC compared with models that omit them. We also exploit the strength of recurrent neural networks in processing sequential information and incorporate them into the deep-wide model to compensate for the original deep-wide model's inability to learn dependencies within user behavior sequences. On this basis, we propose an attention-based deep-wide model that represents users' historical behaviors and uses the attention mechanism to measure how each past behavior influences the current one. Data augmentation is applied when user behavior sequences are too short, and historical behavior sequences are introduced both at the input layer and through a top-level connection. Compared against models commonly used in recent years, the proposed model improves AUC, accuracy, and log loss.
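
For readers who want a concrete picture of the attention-based deep-wide idea summarized above, the following is a minimal NumPy sketch, not the authors' implementation. The embedding size, history length, dot-product attention form, pre-fused multimodal video vector, and randomly initialized weights are all assumptions made for illustration; it only shows how attention over a user's historical behavior embeddings can be pooled and combined with a wide (linear) part to produce a click-through probability.

```python
# Minimal sketch (assumed shapes and random weights, not the paper's implementation):
# attention over a user's historical behavior embeddings, pooled and fed to a
# deep MLP, combined with a wide linear part, to score a candidate short video.
import numpy as np

rng = np.random.default_rng(0)
d = 32          # embedding size (assumption)
T = 10          # length of the user's behavior history (assumption)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def relu(x):
    return np.maximum(x, 0.0)

# Stand-in inputs: a candidate video vector fused from image/audio/behavioral
# features, the user's T historical behavior embeddings, and raw wide features.
video_vec = rng.normal(size=d)          # multimodal candidate representation
history = rng.normal(size=(T, d))       # embeddings of past interactions
wide_x = rng.normal(size=16)            # hand-crafted cross features (assumption)

# Attention: weight each historical behavior by its relevance to the candidate.
scores = history @ video_vec / np.sqrt(d)       # (T,)
weights = softmax(scores)
user_interest = weights @ history               # (d,) attention-pooled history

# Deep part: MLP over [pooled history, candidate video vector].
deep_in = np.concatenate([user_interest, video_vec])
W1, b1 = rng.normal(size=(64, deep_in.size)) * 0.1, np.zeros(64)
W2, b2 = rng.normal(size=64) * 0.1, 0.0
deep_logit = W2 @ relu(W1 @ deep_in + b1) + b2

# Wide part: linear model over cross features.
w_wide = rng.normal(size=wide_x.size) * 0.1
wide_logit = w_wide @ wide_x

# Predicted click-through rate.
ctr = 1.0 / (1.0 + np.exp(-(deep_logit + wide_logit)))
print(f"predicted CTR: {ctr:.3f}")
```

In the paper's setting the history and video embeddings would be learned jointly and the model trained with a log loss objective; the sketch above only illustrates the forward computation.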


Bibliographic Details
Main Authors: Feng, Ying; Zhao, Guisheng
Format: Online Article Text
Language: English
Published: Hindawi 2022
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9173947/
https://www.ncbi.nlm.nih.gov/pubmed/35685157
http://dx.doi.org/10.1155/2022/4951912
_version_ 1784722130516049920
author Feng, Ying
Zhao, Guisheng
collection PubMed
format Online
Article
Text
id pubmed-9173947
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Hindawi
record_format MEDLINE/PubMed
spelling pubmed-9173947 (2022-06-08). Comput Intell Neurosci, Research Article. Hindawi, 2022-05-31. /pmc/articles/PMC9173947/ /pubmed/35685157 http://dx.doi.org/10.1155/2022/4951912. Text, en. Copyright © 2022 Ying Feng and Guisheng Zhao. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
title Implementation of Short Video Click-Through Rate Estimation Model Based on Cross-Media Collaborative Filtering Neural Network
topic Research Article