
Multi-modal affine fusion network for social media rumor detection



Bibliographic Details
Main Authors: Fu, Boyang, Sui, Jie
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9138019/
https://www.ncbi.nlm.nih.gov/pubmed/35634114
http://dx.doi.org/10.7717/peerj-cs.928
_version_ 1784714522984972288
author Fu, Boyang
Sui, Jie
author_facet Fu, Boyang
Sui, Jie
author_sort Fu, Boyang
collection PubMed
description With the rapid development of the Internet, people obtain much of their information from social media platforms such as Twitter and Weibo every day. However, due to the complex structure of social media, many rumors with accompanying images are mixed with factual information and spread widely, misleading readers and exerting adverse effects on society. Automatically detecting social media rumors has become a challenge faced by contemporary society. To overcome this challenge, we propose the multimodal affine fusion network (MAFN), a new end-to-end framework that combines entity recognition with multimodal feature fusion to detect rumors effectively. The MAFN consists of four main parts: the entity-recognition-enhanced textual feature extractor, the visual feature extractor, the multimodal affine fuser, and the rumor detector. The entity-recognition-enhanced textual feature extractor extracts textual features from posts whose semantics are enriched by entity recognition. The visual feature extractor extracts visual features. The multimodal affine fuser takes the three types of modal features and fuses them by an affine method; it cooperates with the rumor detector to learn representations for rumor detection and produce reliable fused predictions. Extensive experiments on real multimodal Weibo and Twitter datasets verified the effectiveness of the proposed multimodal fusion neural network for rumor detection.
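The abstract describes fusing modal features "by an affine method". As a rough illustration only (the paper's actual layers and dimensions are not given in this record), an affine fusion can be sketched in plain Python in the FiLM style: one modality's features produce a per-dimension scale and shift that modulate another modality's features. All weight matrices and dimensions below are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of affine fusion: visual features generate a scale
# (gamma) and shift (beta) that are applied element-wise to text features.
# The weight matrices here are toy placeholders, not learned parameters.

def affine_fuse(text_feats, visual_feats, w_gamma, w_beta):
    """Fuse by applying a visually conditioned affine transform to text features."""
    # gamma and beta are linear projections of the visual features
    gamma = [sum(w * v for w, v in zip(row, visual_feats)) for row in w_gamma]
    beta = [sum(w * v for w, v in zip(row, visual_feats)) for row in w_beta]
    # element-wise affine transform: gamma * text + beta
    return [g * t + b for g, t, b in zip(gamma, text_feats, beta)]

# Toy example: 3-dim text features conditioned on 2-dim visual features.
text = [1.0, 2.0, 3.0]
visual = [0.5, -0.5]
w_gamma = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # maps visual -> per-dim scale
w_beta = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]   # maps visual -> per-dim shift
fused = affine_fuse(text, visual, w_gamma, w_beta)
print(fused)  # -> [0.5, -0.5, -0.5]
```

In a real model the projections would be learned layers and the fused representation would feed the rumor detector; this sketch only shows the shape of the affine operation itself.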
format Online
Article
Text
id pubmed-9138019
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-91380192022-05-28 Multi-modal affine fusion network for social media rumor detection Fu, Boyang Sui, Jie PeerJ Comput Sci Computer Vision With the rapid development of the Internet, people obtain much of their information from social media platforms such as Twitter and Weibo every day. However, due to the complex structure of social media, many rumors with accompanying images are mixed with factual information and spread widely, misleading readers and exerting adverse effects on society. Automatically detecting social media rumors has become a challenge faced by contemporary society. To overcome this challenge, we propose the multimodal affine fusion network (MAFN), a new end-to-end framework that combines entity recognition with multimodal feature fusion to detect rumors effectively. The MAFN consists of four main parts: the entity-recognition-enhanced textual feature extractor, the visual feature extractor, the multimodal affine fuser, and the rumor detector. The entity-recognition-enhanced textual feature extractor extracts textual features from posts whose semantics are enriched by entity recognition. The visual feature extractor extracts visual features. The multimodal affine fuser takes the three types of modal features and fuses them by an affine method; it cooperates with the rumor detector to learn representations for rumor detection and produce reliable fused predictions. Extensive experiments on real multimodal Weibo and Twitter datasets verified the effectiveness of the proposed multimodal fusion neural network for rumor detection. PeerJ Inc. 
2022-05-03 /pmc/articles/PMC9138019/ /pubmed/35634114 http://dx.doi.org/10.7717/peerj-cs.928 Text en © 2022 Fu and Sui https://creativecommons.org/licenses/by/4.0/This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Computer Vision
Fu, Boyang
Sui, Jie
Multi-modal affine fusion network for social media rumor detection
title Multi-modal affine fusion network for social media rumor detection
title_full Multi-modal affine fusion network for social media rumor detection
title_fullStr Multi-modal affine fusion network for social media rumor detection
title_full_unstemmed Multi-modal affine fusion network for social media rumor detection
title_short Multi-modal affine fusion network for social media rumor detection
title_sort multi-modal affine fusion network for social media rumor detection
topic Computer Vision
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9138019/
https://www.ncbi.nlm.nih.gov/pubmed/35634114
http://dx.doi.org/10.7717/peerj-cs.928
work_keys_str_mv AT fuboyang multimodalaffinefusionnetworkforsocialmediarumordetection
AT suijie multimodalaffinefusionnetworkforsocialmediarumordetection