
AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model


Bibliographic Details
Main Authors: Mingyu, Ji; Jiawei, Zhou; Ning, Wei
Format: Online Article Text
Language: English
Published: Public Library of Science 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9462790/
https://www.ncbi.nlm.nih.gov/pubmed/36084041
http://dx.doi.org/10.1371/journal.pone.0273936
_version_ 1784787268364402688
author Mingyu, Ji
Jiawei, Zhou
Ning, Wei
author_facet Mingyu, Ji
Jiawei, Zhou
Ning, Wei
author_sort Mingyu, Ji
collection PubMed
description Multimodal sentiment analysis is an essential task in natural language processing in which machines learn multimodal emotional features and then analyze and recognize emotions through logical reasoning and mathematical operations. To address the problems of effectively fusing multimodal data and of capturing the relevance between modalities, we propose an attention-based feature relevance fusion multimodal sentiment analysis model (AFR-BERT). In the data pre-processing stage, text features are extracted with the pre-trained language model BERT (Bidirectional Encoder Representations from Transformers), and a BiLSTM (Bidirectional Long Short-Term Memory) network captures the internal structure of the audio. In the data fusion stage, the multimodal data fusion network fuses multimodal features through the interaction of text and audio information. In the data analysis stage, the multimodal data association network analyzes the fused representation by exploring the correlation between text and audio. In the data output stage, the model produces the sentiment analysis results. We conducted extensive comparative experiments on the publicly available sentiment analysis datasets CMU-MOSI and CMU-MOSEI. The results show that AFR-BERT improves on classical multimodal sentiment analysis models across the relevant performance metrics. In addition, ablation experiments and example analysis show that the multimodal data analysis network in AFR-BERT effectively captures and analyzes the sentiment features in text and audio.
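
The description above walks through a four-stage pipeline: BERT text encoding, BiLSTM audio encoding, attention-based fusion, and a sentiment output. The following PyTorch sketch illustrates one plausible shape of that pipeline. The class name AFRSketch, the hidden sizes, the audio feature dimension, and the use of nn.MultiheadAttention as a stand-in for the paper's fusion and association networks are illustrative assumptions, not the authors' published implementation.

import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class AFRSketch(nn.Module):
    """Minimal sketch of the pipeline described in the abstract:
    BERT encodes text, a BiLSTM encodes frame-level audio features,
    and a cross-modal attention layer fuses the two streams before
    a regression head predicts a sentiment score."""

    def __init__(self, audio_dim=74, hidden_dim=128):
        super().__init__()
        # Pre-trained BERT text encoder (768-d token embeddings).
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        # BiLSTM over audio feature frames; audio_dim=74 matches the
        # COVAREP features shipped with CMU-MOSI/MOSEI, but the paper's
        # exact input dimension is an assumption here.
        self.bilstm = nn.LSTM(audio_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Project text tokens into the same space as the BiLSTM output.
        self.text_proj = nn.Linear(768, 2 * hidden_dim)
        # Cross-modal attention: text queries attend over audio states,
        # a stand-in for the "feature relevance fusion" network.
        self.cross_attn = nn.MultiheadAttention(2 * hidden_dim,
                                                num_heads=4,
                                                batch_first=True)
        # Regression head producing one sentiment score, as on
        # CMU-MOSI/MOSEI (labels in [-3, 3]).
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, input_ids, attention_mask, audio):
        text = self.bert(input_ids=input_ids,
                         attention_mask=attention_mask).last_hidden_state
        text = self.text_proj(text)           # (B, T_text, 2H)
        audio_states, _ = self.bilstm(audio)  # (B, T_audio, 2H)
        fused, _ = self.cross_attn(query=text, key=audio_states,
                                   value=audio_states)
        # Mean-pool the fused sequence and predict the sentiment score.
        return self.head(fused.mean(dim=1)).squeeze(-1)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
enc = tokenizer(["the movie was surprisingly good"], return_tensors="pt")
audio = torch.randn(1, 50, 74)  # 50 dummy audio frames
model = AFRSketch()
score = model(enc["input_ids"], enc["attention_mask"], audio)
print(score)

Text queries attending over audio states is only one of several possible attention directions; the abstract does not specify whether the fusion network is symmetric, so a faithful reimplementation would need the paper's Methods section.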
format Online
Article
Text
id pubmed-9462790
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-9462790 2022-09-10 AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model Mingyu, Ji; Jiawei, Zhou; Ning, Wei PLoS One Research Article Public Library of Science 2022-09-09 /pmc/articles/PMC9462790/ /pubmed/36084041 http://dx.doi.org/10.1371/journal.pone.0273936 Text en © 2022 Mingyu et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Research Article
Mingyu, Ji
Jiawei, Zhou
Ning, Wei
AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model
title AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model
title_full AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model
title_fullStr AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model
title_full_unstemmed AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model
title_short AFR-BERT: Attention-based mechanism feature relevance fusion multimodal sentiment analysis model
title_sort afr-bert: attention-based mechanism feature relevance fusion multimodal sentiment analysis model
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9462790/
https://www.ncbi.nlm.nih.gov/pubmed/36084041
http://dx.doi.org/10.1371/journal.pone.0273936
work_keys_str_mv AT mingyuji afrbertattentionbasedmechanismfeaturerelevancefusionmultimodalsentimentanalysismodel
AT jiaweizhou afrbertattentionbasedmechanismfeaturerelevancefusionmultimodalsentimentanalysismodel
AT ningwei afrbertattentionbasedmechanismfeaturerelevancefusionmultimodalsentimentanalysismodel