Understanding image-text relations and news values for multimodal news analysis

Bibliographic Details
Main Authors: Cheema, Gullal S., Hakimov, Sherzod, Müller-Budack, Eric, Otto, Christian, Bateman, John A., Ewerth, Ralph
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10185854/
https://www.ncbi.nlm.nih.gov/pubmed/37205296
http://dx.doi.org/10.3389/frai.2023.1125533
author Cheema, Gullal S.
Hakimov, Sherzod
Müller-Budack, Eric
Otto, Christian
Bateman, John A.
Ewerth, Ralph
collection PubMed
description The analysis of news dissemination is of utmost importance since the credibility of information and the identification of disinformation and misinformation affect society as a whole. Given the large amounts of news data published daily on the Web, the empirical analysis of news with regard to research questions and the detection of problematic news content on the Web require computational methods that work at scale. Today's online news is typically disseminated in multimodal form, including presentation modalities such as text, image, audio, and video. Recent developments in multimodal machine learning now make it possible to capture basic “descriptive” relations between modalities, such as correspondences between words and phrases on the one hand and visual depictions of the verbally expressed information on the other. Although such advances have enabled tremendous progress in tasks like image captioning, text-to-image generation, and visual question answering, in domains such as news dissemination there is a need to go further. In this paper, we introduce a novel framework for the computational analysis of multimodal news. We motivate a set of more complex image-text relations as well as multimodal news values based on real examples of news reports and consider their realization by computational approaches. To this end, we provide (a) an overview of existing literature from semiotics where detailed proposals have been made for taxonomies covering diverse image-text relations generalisable to any domain; (b) an overview of computational work that derives models of image-text relations from data; and (c) an overview of a particular class of news-centric attributes developed in journalism studies called news values. The result is a novel framework for multimodal news analysis that closes existing gaps in previous work while maintaining and combining the strengths of those accounts. We assess and discuss the elements of the framework with real-world examples and use cases, setting out research directions at the intersection of multimodal learning, multimodal analytics, and computational social sciences that can benefit from our approach.
format Online
Article
Text
id pubmed-10185854
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10185854 2023-05-17 Understanding image-text relations and news values for multimodal news analysis Cheema, Gullal S. Hakimov, Sherzod Müller-Budack, Eric Otto, Christian Bateman, John A. Ewerth, Ralph Front Artif Intell Artificial Intelligence Frontiers Media S.A. 2023-05-02 /pmc/articles/PMC10185854/ /pubmed/37205296 http://dx.doi.org/10.3389/frai.2023.1125533 Text en Copyright © 2023 Cheema, Hakimov, Müller-Budack, Otto, Bateman and Ewerth. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Understanding image-text relations and news values for multimodal news analysis
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10185854/
https://www.ncbi.nlm.nih.gov/pubmed/37205296
http://dx.doi.org/10.3389/frai.2023.1125533