A multi-intent based multi-policy relay contrastive learning for sequential recommendation

Bibliographic Details
Main Author: Di, Weiqiang
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9455276/
https://www.ncbi.nlm.nih.gov/pubmed/36091987
http://dx.doi.org/10.7717/peerj-cs.1088
Description
Summary: Sequential recommendation has become a trending topic for its ability to capture dynamic user preferences. However, when dealing with sparse data, it still falls short of expectations. Recent contrastive learning (CL) has shown potential in mitigating the issue of data sparsity. Because of this sparsity, many item representations are inevitably poorly learned; it is therefore better to focus on learning a set of influential latent intents that have a greater impact on how the sequence evolves. In this article, we devise a novel multi-intent self-attention module, which modifies the self-attention mechanism to break down user behavior sequences into multiple latent intents that capture users' different tastes and inclinations. In addition to this change in the model architecture, we also extend the model to handle multiple contrastive tasks. Specifically, some data augmentations in CL can be very different; used together, they may fail to cooperate and stumble over each other. To solve this problem, we propose a multi-policy relay training strategy, which divides training into multiple stages according to the number of data augmentations. In each stage we optimize starting from the best result of the previous stage, which combines the advantages of the different schemes and makes the best use of each. Experiments on four public recommendation datasets demonstrate the superiority of our model.
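The abstract's idea of decomposing a behavior sequence into latent intents via a modified self-attention can be illustrated with a minimal numpy sketch. This is not the paper's actual architecture; the function name, the use of a single attention layer, and the learned per-intent query vectors are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_intent_attention(item_emb, intent_queries):
    """Sketch: decompose one behavior sequence into per-intent representations.

    item_emb:       (seq_len, d) embeddings of the user's interacted items
    intent_queries: (k, d)       hypothetical learned queries, one per latent intent
    Returns:        (k, d)       one attention-pooled representation per intent
    """
    d = item_emb.shape[1]
    scores = intent_queries @ item_emb.T / np.sqrt(d)  # (k, seq_len)
    weights = softmax(scores, axis=-1)  # each intent attends over the whole sequence
    return weights @ item_emb           # (k, d) intent representations

rng = np.random.default_rng(0)
items = rng.normal(size=(10, 16))    # a sequence of 10 items, embedding dim 16
queries = rng.normal(size=(4, 16))   # assume 4 latent intents
intents = multi_intent_attention(items, queries)
print(intents.shape)  # (4, 16)
```

Each row of the output is a different weighted view of the same sequence, so downstream layers can treat the k rows as k candidate "tastes" of the user.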
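The multi-policy relay strategy, as described, trains in one stage per data augmentation, with each stage picking up from the best state of the previous one. A minimal control-flow sketch, with a toy stage function standing in for the real training loop (the function names and the list-based "state" are assumptions for illustration):

```python
def relay_train(init_state, augmentations, train_stage):
    """Relay schedule sketch: one training stage per augmentation policy,
    each stage initialized from the best checkpoint of the previous stage."""
    best = init_state
    for stage, aug in enumerate(augmentations):
        # train_stage is assumed to return this stage's best checkpoint
        best = train_stage(best, aug, stage)
    return best

# toy usage: each "stage" just records which augmentation it trained with
log = []
def toy_stage(state, aug, stage):
    log.append((stage, aug))
    return state + [aug]

final = relay_train([], ["crop", "mask", "reorder"], toy_stage)
print(final)  # ['crop', 'mask', 'reorder']
```

The point of the schedule is that conflicting augmentations never compete within one objective: each contrastive task gets its own stage, and later stages only refine what earlier ones produced.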