
Natural-Language-Driven Multimodal Representation Learning for Audio-Visual Scene-Aware Dialog System

With the development of multimedia systems in wireless environments, there is a rising need for artificial intelligence systems that can communicate with humans in a human-like manner, with a comprehensive understanding of various types of information. This paper therefore addresses an audio-visual scene-aware dialog system that can communicate with users about audio-visual scenes, which requires a comprehensive understanding of not only visual and textual information but also audio information. Despite substantial progress in multimodal representation learning with the language and visual modalities, two caveats remain: ineffective use of auditory information and the lack of interpretability of deep learning systems' reasoning. To address these issues, we propose a novel audio-visual scene-aware dialog system that represents explicit information from each modality as natural language, which can be fused into a language model in a natural way. It leverages a transformer-based decoder to generate a coherent and correct response based on multimodal knowledge in a multitask learning setting. We also address model interpretability with a response-driven temporal moment localization method that verifies how the system generates its response: the system provides the user with the evidence it referred to while generating the response, in the form of a timestamp within the scene. The proposed model outperforms the baseline on all quantitative and qualitative measurements; in particular, it achieves robust performance even when all three modalities, including audio, are used. We also conducted extensive experiments to investigate the proposed model and obtained state-of-the-art performance on the system response reasoning task.
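
The abstract's core technique can be illustrated with a short sketch. Below is a minimal, hypothetical Python example of the natural-language fusion idea: explicit information extracted from the audio and visual modalities (e.g., event tags and action captions) is verbalized as plain text and concatenated with the user's question, so a single transformer-based decoder can consume all modalities without modality-specific encoders. GPT-2 stands in for the paper's decoder; the tag lists and prompt format are assumptions for illustration, not the authors' actual pipeline.

```python
# Minimal sketch of natural-language multimodal fusion.
# Assumptions: GPT-2 as the transformer decoder; the audio/visual tag
# lists are hypothetical outputs of off-the-shelf taggers, not the
# paper's actual extraction pipeline.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# Explicit, human-readable information from each modality.
audio_tags = ["a door slams", "footsteps on a wooden floor"]
visual_tags = ["a man enters a kitchen", "he opens the refrigerator"]
question = "What does the man do after entering the room?"

# Fuse all modalities as plain natural language, so one language
# model can consume everything without modality-specific encoders.
context = (
    "Audio: " + "; ".join(audio_tags) + "\n"
    "Video: " + "; ".join(visual_tags) + "\n"
    "Question: " + question + "\nAnswer:"
)

inputs = tokenizer(context, return_tensors="pt")
output_ids = model.generate(
    **inputs,
    max_new_tokens=30,
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens (the response).
print(tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:],
    skip_special_tokens=True,
))
```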


Bibliographic Details
Main Authors: Heo, Yoonseok; Kang, Sangwoo; Seo, Jungyun
Format: Online Article, Text
Language: English
Published: Sensors (Basel), MDPI, 14 September 2023
Subjects: Article
Collection: PubMed (pubmed-10536977)
Rights: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10536977/
https://www.ncbi.nlm.nih.gov/pubmed/37765933
http://dx.doi.org/10.3390/s23187875
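
The abstract also describes a response-driven temporal moment localization method that returns a timestamp as evidence for the generated response. The schematic sketch below illustrates only the interface, not the authors' learned method: each video segment carries a hypothetical natural-language caption, and the segment whose caption best matches the response is returned as the supporting moment. The paper learns this alignment end-to-end; simple lexical overlap stands in for the learned scoring here.

```python
# Schematic stand-in for response-driven temporal moment localization.
# Assumptions: segments come with hypothetical natural-language captions
# and timestamps; lexical overlap replaces the paper's learned scoring.
import re


def tokens(text: str) -> set[str]:
    """Lowercase word tokens, stripped of punctuation."""
    return set(re.findall(r"[a-z']+", text.lower()))


def localize(response: str,
             segments: list[tuple[float, float, str]]) -> tuple[float, float]:
    """Return (start_sec, end_sec) of the segment whose caption best
    supports the generated response."""
    resp = tokens(response)

    def overlap(caption: str) -> float:
        cap = tokens(caption)
        return len(resp & cap) / max(len(cap), 1)

    start, end, _ = max(segments, key=lambda seg: overlap(seg[2]))
    return start, end


# Hypothetical per-segment captions with timestamps in seconds.
segments = [
    (0.0, 4.0, "a man walks into the kitchen"),
    (4.0, 9.0, "he opens the refrigerator and takes out a bottle"),
    (9.0, 12.0, "he closes the door and leaves"),
]
print(localize("He opens the refrigerator.", segments))  # -> (4.0, 9.0)
```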