
OpExHAN: opinion extraction using hierarchical attention network from unstructured reviews


Bibliographic Details
Main Authors: Ratmele, Ankur; Thakur, Ramesh
Format: Online Article Text
Language: English
Published: Springer Vienna 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9534471/
https://www.ncbi.nlm.nih.gov/pubmed/36217360
http://dx.doi.org/10.1007/s13278-022-00971-z
Description
Summary: In recent decades, sellers and merchants have asked their customers to share opinions on products at online marketplaces. Analyzing the massive volume of reviews to decide whether or not to purchase a product is an immense challenge for a potential customer. In this paper, a hierarchical attention network-based framework is presented to resolve this challenge. In the proposed framework, Amazon's Smartphone review dataset is preprocessed using NLP approaches, and GloVe embeddings are then applied to extract word vector representations of reviews that capture the contextual information of words. These word vectors are fed into the hierarchical attention network, which produces vectors at the word and sentence levels; a Bi-GRU model encodes the words and sentences into hidden vectors. Finally, reviews are classified into five opinion classes: extremely positive, positive, extremely negative, negative and neutral. To perform the experiments with the proposed method, the dataset is divided into three parts: 80% training, 10% validation and 10% test. Experiments reveal that the proposed framework outperforms baseline methods in terms of accuracy, precision and recall. After extensive hyper-parameter experimentation, the OpExHAN model achieved 94.6% accuracy and 91% precision and recall.
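For illustration, the sketch below shows a hierarchical attention network of the general kind described in the abstract: a word-level Bi-GRU with attention that pools each sentence into a vector, a sentence-level Bi-GRU with attention that pools the review into a document vector, and a softmax classifier over five opinion classes. The framework choice (PyTorch), the vocabulary size, the layer dimensions, and all class and function names are assumptions made for this sketch, not details reported by the authors.

```python
# Minimal sketch of a hierarchical attention network (HAN) with Bi-GRU encoders.
# All sizes are illustrative; the paper initialises the embedding layer with
# pre-trained GloVe vectors, which is only indicated by a comment here.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Attention(nn.Module):
    """Additive attention that pools a sequence of hidden vectors into one vector."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.context = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, h):                        # h: (batch, seq_len, hidden_dim)
        u = torch.tanh(self.proj(h))             # (batch, seq_len, hidden_dim)
        a = F.softmax(self.context(u), dim=1)    # attention weights over the sequence
        return (a * h).sum(dim=1)                # weighted sum: (batch, hidden_dim)


class HAN(nn.Module):
    """Word-level Bi-GRU + attention, then sentence-level Bi-GRU + attention."""
    def __init__(self, vocab_size=20000, embed_dim=100, gru_dim=50, num_classes=5):
        super().__init__()
        # In the paper this layer would be loaded with GloVe word vectors.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.word_gru = nn.GRU(embed_dim, gru_dim, bidirectional=True, batch_first=True)
        self.word_attn = Attention(2 * gru_dim)
        self.sent_gru = nn.GRU(2 * gru_dim, gru_dim, bidirectional=True, batch_first=True)
        self.sent_attn = Attention(2 * gru_dim)
        self.classifier = nn.Linear(2 * gru_dim, num_classes)

    def forward(self, x):                        # x: (batch, num_sents, num_words)
        b, s, w = x.shape
        words = self.embedding(x.view(b * s, w))            # embed every word
        word_h, _ = self.word_gru(words)                     # encode words per sentence
        sent_vecs = self.word_attn(word_h).view(b, s, -1)    # one vector per sentence
        sent_h, _ = self.sent_gru(sent_vecs)                 # encode sentences per review
        doc_vec = self.sent_attn(sent_h)                     # one vector per review
        return self.classifier(doc_vec)                      # logits over 5 opinion classes


# Toy usage: 8 reviews, each padded to 10 sentences of 30 word indices.
reviews = torch.randint(1, 20000, (8, 10, 30))
logits = HAN()(reviews)                          # shape: (8, 5)
```

In this reading of the abstract, the five logits correspond to the extremely positive, positive, neutral, negative and extremely negative classes, and the model would be trained with a standard cross-entropy loss on the 80% training split while the 10% validation split guides hyper-parameter tuning.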