Hotel Review Classification Based on the Text Pretraining Heterogeneous Graph Neural Network Model
Main Authors:
Format: Online Article Text
Language: English
Published: Hindawi, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8923762/ https://www.ncbi.nlm.nih.gov/pubmed/35300392 http://dx.doi.org/10.1155/2022/5259305
Summary: With the amount of online information continuously growing, it becomes increasingly important for online stores to recommend products precisely based on users' preferences. Reviews of various products can be of great help for this recommendation task. However, most recommendation platforms only classify reviews as positive or negative based on sentiment analysis, without considering the actual demands of users, which reduces the effectiveness of the classification task. To address this issue, we propose a new model that integrates a heterogeneous graph neural network and a text pretraining model, and we compare it with other models on a travel type classification task. The model combines a pretrained text model, Bidirectional Encoder Representations from Transformers (BERT), with a heterogeneous graph attention network (HGAN). First, we fine-tune BERT on a dataset of 1.4 million hotel reviews from the Ctrip website to obtain refined representations of trip-related words. Then, we propose a similarity fuzzy-matching method to extract the main topic of each review. Next, we construct a heterogeneous graph neural network and apply an attention mechanism to it to mine users' travel preferences. Finally, the classification task is performed based on each user's preference. In Section 5 of this study, we report an experiment comparing our model with five others. The results show that our model achieves an accuracy of 70%, which is higher than that of the other five models.
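The summary describes a multi-stage pipeline: fine-tuning BERT on Ctrip hotel reviews, fuzzy topic matching, and classification with a heterogeneous graph attention network. The sketch below illustrates only the first stage in a minimal, hedged form, fine-tuning a pretrained BERT classifier on placeholder reviews; the checkpoint name bert-base-chinese, the five travel-type labels, and the toy data are assumptions for illustration, not the authors' released code.

```python
# Minimal sketch of the BERT fine-tuning stage described in the summary.
# Assumptions (not from the paper): bert-base-chinese checkpoint, 5 travel-type
# labels, and two placeholder reviews standing in for the 1.4M Ctrip reviews.
import torch
from torch.utils.data import DataLoader, Dataset
from transformers import BertTokenizerFast, BertForSequenceClassification

class ReviewDataset(Dataset):
    """Wraps (review text, travel-type label) pairs for fine-tuning."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=max_len, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

# A Chinese BERT checkpoint is assumed because the reviews come from Ctrip.
tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-chinese", num_labels=5)  # 5 travel types: an illustrative choice

texts = ["酒店位置很好，适合家庭出游", "商务出差入住，离会场很近"]  # placeholder reviews
labels = [0, 1]                                                     # placeholder travel-type ids
loader = DataLoader(ReviewDataset(texts, labels, tokenizer), batch_size=2)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for batch in loader:          # one pass over the toy data
    out = model(**batch)      # returns a loss because labels are supplied
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

In the full pipeline the fine-tuned encoder would supply word and review representations to the topic-matching step and to the heterogeneous graph attention network, which are not sketched here.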