
Interpretable Multi-Head Self-Attention Architecture for Sarcasm Detection in Social Media

With more than half of the world's population online, social media plays an important role in the lives of individuals and businesses alike. Social media enables businesses to advertise their products, build brand value, and reach out to their customers. To leverage these social media platforms, it is important for businesses to process customer feedback in the form of posts and tweets. Sentiment analysis is the process of identifying the emotion, whether positive, negative, or neutral, associated with these social media texts. The presence of sarcasm in text is a major hindrance to the performance of sentiment analysis. Sarcasm is a linguistic expression often used to communicate the opposite of what is said, usually something very unpleasant, with the intention to insult or ridicule. The inherent ambiguity of sarcastic expressions makes sarcasm detection very difficult. In this work, we focus on detecting sarcasm in textual conversations from various social networking platforms and online media. To this end, we develop an interpretable deep learning model using multi-head self-attention and gated recurrent units. The multi-head self-attention module aids in identifying crucial sarcastic cue-words in the input, and the recurrent units learn long-range dependencies between these cue-words to better classify the input text. We show the effectiveness of our approach by achieving state-of-the-art results on multiple datasets from social networking platforms and online media. Models trained with our approach are easily interpretable: they enable the identification of the sarcastic cues in the input text that contribute to the final classification score. We visualize the learned attention weights for several sample inputs to showcase the effectiveness and interpretability of our model.
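The pipeline the abstract describes, multi-head self-attention over token embeddings followed by gated recurrent units and a classification head, can be sketched compactly. Below is a minimal PyTorch sketch of that kind of architecture, not the authors' implementation: the layer sizes, the single attention block, and the bidirectional GRU are illustrative assumptions.

# Minimal sketch of the described pipeline: multi-head self-attention to
# surface candidate sarcastic cue-words, a GRU to model long-range
# dependencies between them, and a binary classification head.
# All hyperparameters are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

class SarcasmClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, num_heads=8, gru_hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.gru = nn.GRU(embed_dim, gru_hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * gru_hidden, 1)

    def forward(self, token_ids):
        x = self.embed(token_ids)               # (batch, seq, embed_dim)
        attended, weights = self.attn(x, x, x)  # self-attention: query = key = value
        _, h = self.gru(attended)               # h: (2, batch, gru_hidden)
        h = torch.cat([h[0], h[1]], dim=-1)     # concatenate both GRU directions
        logit = self.classifier(h).squeeze(-1)  # (batch,)
        # Returning the attention weights is what enables the interpretability
        # the abstract highlights: they can be plotted over the input tokens
        # to show which cue-words drove the classification.
        return logit, weights

model = SarcasmClassifier(vocab_size=10000)
tokens = torch.randint(0, 10000, (2, 20))       # dummy batch of two sequences
logits, attn_weights = model(tokens)
sarcasm_prob = torch.sigmoid(logits)            # per-example sarcasm probability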

Bibliographic Details
Main Authors: Akula, Ramya; Garibay, Ivan
Format: Online Article Text
Language: English
Published: MDPI, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8067006/
https://www.ncbi.nlm.nih.gov/pubmed/33810363
http://dx.doi.org/10.3390/e23040394
Collection: PubMed
Record ID: pubmed-8067006
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Entropy (Basel)
Published Online: 26 March 2021
License: © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).