Explaining sentiment analysis results on social media texts through visualization
| Main Authors: | |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | Springer US, 2023 |
| Subjects: | |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9892668/ https://www.ncbi.nlm.nih.gov/pubmed/36747895 http://dx.doi.org/10.1007/s11042-023-14432-y |
| Summary: | Today, Artificial Intelligence achieves prodigious real-time performance thanks to growing data and computational capacity. However, little is known about what system results actually convey; they are therefore susceptible to bias, and with Artificial Intelligence ("AI") rooted in almost every domain, even a minuscule bias can cause excessive damage. Efforts to make AI interpretable have been made to address fairness, accountability, and transparency concerns. This paper proposes two methods for understanding a system's decisions by visualizing its results. For this study, interpretability is applied to Natural Language Processing-based sentiment analysis of data from social media sites such as Twitter, Facebook, and Reddit. With the Valence Aware Dictionary for Sentiment Reasoning ("VADER"), heatmaps are generated that provide visual justification for the results and increase comprehensibility. Furthermore, Locally Interpretable Model-Agnostic Explanations ("LIME") are used to provide in-depth insight into the predictions. Experiments show that the proposed system can surpass several contemporary systems designed for interpretability. |
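The abstract names two off-the-shelf components: VADER for lexicon-based sentiment scoring and LIME for local, model-agnostic explanations. The sketch below is a minimal illustration of how these two pieces are commonly combined in Python, assuming the `vaderSentiment`, `lime`, `numpy`, and `matplotlib` packages are available; the token-level heatmap and the pseudo-probability wrapper around VADER are assumptions made for this example and are not the authors' exact pipeline.

```python
# Illustrative sketch only: token-level VADER scores rendered as a heatmap,
# plus a LIME explanation of a simple VADER-based "classifier".
import numpy as np
import matplotlib.pyplot as plt
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from lime.lime_text import LimeTextExplainer

analyzer = SentimentIntensityAnalyzer()
text = "The new update is great, but the login screen is painfully slow"

# --- VADER heatmap: score each token and show the scores on a colour scale ---
tokens = text.split()
scores = np.array([[analyzer.polarity_scores(t)["compound"] for t in tokens]])

fig, ax = plt.subplots(figsize=(10, 1.5))
im = ax.imshow(scores, cmap="RdYlGn", vmin=-1, vmax=1, aspect="auto")
ax.set_xticks(range(len(tokens)))
ax.set_xticklabels(tokens, rotation=45, ha="right")
ax.set_yticks([])
fig.colorbar(im, label="VADER compound score")
plt.tight_layout()
plt.show()

# --- LIME: explain a probability-style wrapper around VADER ---
def predict_proba(texts):
    """Map VADER compound scores in [-1, 1] to pseudo-probabilities
    [P(negative), P(positive)] for each input text (assumed wrapper)."""
    probs = []
    for t in texts:
        p_pos = (analyzer.polarity_scores(t)["compound"] + 1.0) / 2.0
        probs.append([1.0 - p_pos, p_pos])
    return np.array(probs)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(text, predict_proba, num_features=6)
for word, weight in explanation.as_list():
    print(f"{word:>12s}  {weight:+.3f}")
```

The printed (word, weight) pairs show which tokens pushed the prediction toward the positive or negative class, which is the kind of local, per-prediction insight the paper attributes to LIME; the heatmap gives the coarser, at-a-glance visual justification attributed to VADER.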