
Enhanced Semantic Representation Learning for Sarcasm Detection by Integrating Context-Aware Attention and Fusion Network

Sarcasm is a sophisticated form of figurative language that is prevalent on social media platforms. Automatic sarcasm detection is significant for understanding the real sentiment tendencies of users. Traditional approaches mostly focus on content features by using lexicon, n-gram, and pragmatic feature-based models. However, these methods ignore the diverse contextual clues that could provide more evidence of the sarcastic nature of sentences. In this work, we propose a Contextual Sarcasm Detection Model (CSDM) that models enhanced semantic representations with user profiling and forum topic information, where context-aware attention and a user-forum fusion network are used to obtain diverse representations from distinct aspects. In particular, we employ a Bi-LSTM encoder with context-aware attention to obtain a refined comment representation by capturing sentence composition information and the corresponding context situations. Then, we employ a user-forum fusion network to obtain a comprehensive context representation by capturing the corresponding sarcastic tendencies of the user and the background knowledge about the comments. Our proposed method achieves accuracy values of 0.69, 0.70, and 0.83 on the Main balanced, Pol balanced, and Pol imbalanced datasets, respectively. The experimental results on a large Reddit corpus, SARC, demonstrate that our proposed method achieves a significant performance improvement over state-of-the-art textual sarcasm detection methods.
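The abstract describes the CSDM architecture only at a high level: a Bi-LSTM comment encoder, attention conditioned on context, and a fusion of user and forum information. As a rough illustration of that kind of pipeline, here is a minimal PyTorch-style sketch. It is not the authors' code: every module name, dimension, and the gating used for the user-forum fusion are assumptions made purely for illustration.

# Illustrative sketch only -- not the published CSDM implementation.
# All module names, dimensions, and the gated fusion are hypothetical choices.
import torch
import torch.nn as nn

class ContextAwareAttention(nn.Module):
    """Attend over Bi-LSTM states using a context vector as the query."""
    def __init__(self, hidden_dim, context_dim):
        super().__init__()
        self.proj = nn.Linear(hidden_dim + context_dim, hidden_dim)
        self.score = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, states, context):
        # states: (batch, seq_len, hidden_dim); context: (batch, context_dim)
        ctx = context.unsqueeze(1).expand(-1, states.size(1), -1)
        energy = torch.tanh(self.proj(torch.cat([states, ctx], dim=-1)))
        weights = torch.softmax(self.score(energy).squeeze(-1), dim=-1)
        # Weighted sum of the encoder states -> refined comment representation.
        return torch.bmm(weights.unsqueeze(1), states).squeeze(1)

class CSDMSketch(nn.Module):
    """Toy pipeline: Bi-LSTM comment encoder, context-aware attention,
    and a gated fusion of user and forum embeddings."""
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128,
                 num_users=1000, num_forums=100, ctx_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        self.user_embed = nn.Embedding(num_users, ctx_dim)
        self.forum_embed = nn.Embedding(num_forums, ctx_dim)
        self.attention = ContextAwareAttention(hidden_dim, 2 * ctx_dim)
        self.fusion_gate = nn.Linear(2 * ctx_dim, ctx_dim)    # user-forum fusion
        self.classifier = nn.Linear(hidden_dim + ctx_dim, 2)  # sarcastic / not

    def forward(self, tokens, user_ids, forum_ids):
        states, _ = self.encoder(self.embed(tokens))
        user = self.user_embed(user_ids)
        forum = self.forum_embed(forum_ids)
        context = torch.cat([user, forum], dim=-1)
        comment_repr = self.attention(states, context)
        # Gate decides how much user vs. forum information to keep.
        gate = torch.sigmoid(self.fusion_gate(context))
        fused_context = gate * user + (1 - gate) * forum
        return self.classifier(torch.cat([comment_repr, fused_context], dim=-1))

For a batch of tokenized comments with their author and forum ids, calling CSDMSketch(...)(tokens, user_ids, forum_ids) would return two-class sarcasm logits; a faithful implementation would follow the fusion and training details given in the paper itself.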


Bibliographic Details
Main Authors: Hao, Shufeng; Yao, Jikun; Shi, Chongyang; Zhou, Yu; Xu, Shuang; Li, Dengao; Cheng, Yinghan
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10297453/
https://www.ncbi.nlm.nih.gov/pubmed/37372222
http://dx.doi.org/10.3390/e25060878
Record Information
Collection: PubMed
Record ID: pubmed-10297453
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Entropy (Basel)
Publication Date: 2023-05-30
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).