
Vital information matching in vision-and-language navigation

With the rapid development of artificial intelligence, many researchers have turned their attention to vision-and-language navigation, one of the most important tasks in multi-modal machine learning. A central question in this multi-modal field is how to fuse multiple inputs, which is crucial for the integrated feedback of intrinsic information. However, existing models rely only on simple data augmentation or expansion and fall far short of capturing the intrinsic relationships between modalities. To overcome these challenges, this paper proposes a novel multi-modal matching feedback self-tuning model: a neural network called the Vital Information Matching Feedback Self-tuning Network (VIM-Net). VIM-Net consists mainly of two matching feedback modules, a visual matching feedback module (V-mat) and a trajectory matching feedback module (T-mat). Specifically, V-mat matches the target information from visual recognition with the entity information extracted from the instruction, while T-mat matches the serialized trajectory features with the movement directions described in the instruction. Ablation and comparative experiments are conducted using the Matterport3D simulator and the Room-to-Room (R2R) benchmark dataset, and the resulting navigation performance is reported in detail. The results demonstrate that the proposed model is effective on this task.

Bibliographic Details
Main Authors: Jia, Zixi, Yu, Kai, Ru, Jingyu, Yang, Sikai, Coleman, Sonya
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9712967/
https://www.ncbi.nlm.nih.gov/pubmed/36467568
http://dx.doi.org/10.3389/fnbot.2022.1035921
_version_ 1784841903161737216
author Jia, Zixi
Yu, Kai
Ru, Jingyu
Yang, Sikai
Coleman, Sonya
author_facet Jia, Zixi
Yu, Kai
Ru, Jingyu
Yang, Sikai
Coleman, Sonya
author_sort Jia, Zixi
collection PubMed
description With the rapid development of artificial intelligence, many researchers have turned their attention to vision-and-language navigation, one of the most important tasks in multi-modal machine learning. A central question in this multi-modal field is how to fuse multiple inputs, which is crucial for the integrated feedback of intrinsic information. However, existing models rely only on simple data augmentation or expansion and fall far short of capturing the intrinsic relationships between modalities. To overcome these challenges, this paper proposes a novel multi-modal matching feedback self-tuning model: a neural network called the Vital Information Matching Feedback Self-tuning Network (VIM-Net). VIM-Net consists mainly of two matching feedback modules, a visual matching feedback module (V-mat) and a trajectory matching feedback module (T-mat). Specifically, V-mat matches the target information from visual recognition with the entity information extracted from the instruction, while T-mat matches the serialized trajectory features with the movement directions described in the instruction. Ablation and comparative experiments are conducted using the Matterport3D simulator and the Room-to-Room (R2R) benchmark dataset, and the resulting navigation performance is reported in detail. The results demonstrate that the proposed model is effective on this task.
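As a purely illustrative aside to the matching idea in the description above, the following minimal Python sketch shows one way a V-mat-style matching feedback score could be computed: visual features of recognized objects are compared against embeddings of entities extracted from the instruction, and the aggregated similarity serves as a feedback signal. The feature dimension, the cosine_matching_score helper, and the use of cosine similarity with max-pooling are assumptions made for this sketch only; they are not taken from the paper's implementation.

# Minimal, hypothetical sketch of a V-mat-style matching feedback score.
# Assumptions (not from the paper): 512-d features, cosine similarity as the
# matching function, and max-pooling over entity/object pairs.
import numpy as np

def cosine_matching_score(object_feats: np.ndarray,
                          entity_embeds: np.ndarray) -> float:
    """Score how well detected objects match entities named in the instruction.

    object_feats:  (num_objects, d)  visual features of recognized objects
    entity_embeds: (num_entities, d) embeddings of entities extracted from
                                     the navigation instruction
    Returns a scalar feedback score in [-1, 1].
    """
    # L2-normalize both sets of vectors.
    obj = object_feats / (np.linalg.norm(object_feats, axis=1, keepdims=True) + 1e-8)
    ent = entity_embeds / (np.linalg.norm(entity_embeds, axis=1, keepdims=True) + 1e-8)

    # Pairwise cosine similarities: (num_entities, num_objects).
    sim = ent @ obj.T

    # For each instruction entity, take its best-matching object, then average.
    return float(sim.max(axis=1).mean())

# Toy usage with random features (dimensions are illustrative only).
rng = np.random.default_rng(0)
objects = rng.normal(size=(5, 512))   # e.g., 5 detected objects in the current view
entities = rng.normal(size=(3, 512))  # e.g., 3 entities parsed from the command
print(f"matching feedback score: {cosine_matching_score(objects, entities):.3f}")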
format Online
Article
Text
id pubmed-9712967
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9712967 2022-12-02
Vital information matching in vision-and-language navigation
Jia, Zixi; Yu, Kai; Ru, Jingyu; Yang, Sikai; Coleman, Sonya
Front Neurorobot
Microbiology
With the rapid development of artificial intelligence, many researchers have turned their attention to vision-and-language navigation, one of the most important tasks in multi-modal machine learning. A central question in this multi-modal field is how to fuse multiple inputs, which is crucial for the integrated feedback of intrinsic information. However, existing models rely only on simple data augmentation or expansion and fall far short of capturing the intrinsic relationships between modalities. To overcome these challenges, this paper proposes a novel multi-modal matching feedback self-tuning model: a neural network called the Vital Information Matching Feedback Self-tuning Network (VIM-Net). VIM-Net consists mainly of two matching feedback modules, a visual matching feedback module (V-mat) and a trajectory matching feedback module (T-mat). Specifically, V-mat matches the target information from visual recognition with the entity information extracted from the instruction, while T-mat matches the serialized trajectory features with the movement directions described in the instruction. Ablation and comparative experiments are conducted using the Matterport3D simulator and the Room-to-Room (R2R) benchmark dataset, and the resulting navigation performance is reported in detail. The results demonstrate that the proposed model is effective on this task.
Frontiers Media S.A. 2022-11-17
/pmc/articles/PMC9712967/
/pubmed/36467568
http://dx.doi.org/10.3389/fnbot.2022.1035921
Text en
Copyright © 2022 Jia, Yu, Ru, Yang and Coleman.
https://creativecommons.org/licenses/by/4.0/
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Microbiology
Jia, Zixi
Yu, Kai
Ru, Jingyu
Yang, Sikai
Coleman, Sonya
Vital information matching in vision-and-language navigation
title Vital information matching in vision-and-language navigation
title_full Vital information matching in vision-and-language navigation
title_fullStr Vital information matching in vision-and-language navigation
title_full_unstemmed Vital information matching in vision-and-language navigation
title_short Vital information matching in vision-and-language navigation
title_sort vital information matching in vision-and-language navigation
topic Microbiology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9712967/
https://www.ncbi.nlm.nih.gov/pubmed/36467568
http://dx.doi.org/10.3389/fnbot.2022.1035921
work_keys_str_mv AT jiazixi vitalinformationmatchinginvisionandlanguagenavigation
AT yukai vitalinformationmatchinginvisionandlanguagenavigation
AT rujingyu vitalinformationmatchinginvisionandlanguagenavigation
AT yangsikai vitalinformationmatchinginvisionandlanguagenavigation
AT colemansonya vitalinformationmatchinginvisionandlanguagenavigation