A Modular Vision Language Navigation and Manipulation Framework for Long Horizon Compositional Tasks in Indoor Environment

Bibliographic Details
Main Authors: Saha, Homagni; Fotouhi, Fateme; Liu, Qisai; Sarkar, Soumik
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9340572/
https://www.ncbi.nlm.nih.gov/pubmed/35923304
http://dx.doi.org/10.3389/frobt.2022.930486
_version_ 1784760433840750592
author Saha, Homagni
Fotouhi, Fateme
Liu, Qisai
Sarkar, Soumik
author_sort Saha, Homagni
collection PubMed
description In this paper, we propose a new framework, MoViLan (Modular Vision and Language), for the execution of visually grounded natural language instructions for day-to-day indoor household tasks. While several data-driven, end-to-end learning frameworks have been proposed for targeted navigation tasks based on the vision and language modalities, performance on recent benchmark data sets has revealed a gap: comprehensive techniques are still lacking for long-horizon, compositional tasks (involving both manipulation and navigation) with diverse object categories, realistic instructions, and visual scenarios with non-reversible state changes. We propose a modular approach to the combined navigation and object-interaction problem that does not require strictly aligned vision and language training data (e.g., in the form of expert-demonstrated trajectories). Such an approach is a significant departure from the traditional end-to-end techniques in this space and allows for a more tractable training process with separate vision and language data sets. Specifically, we propose a novel geometry-aware mapping technique for cluttered indoor environments and a language understanding model generalized for household instruction following. We demonstrate a significant increase in success rates on long-horizon, compositional tasks over recent works on the recently released benchmark data set, ALFRED.
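The description above outlines a modular architecture: a language understanding model and a geometry-aware mapping module, trained on separate data sets and composed only at execution time. As a purely illustrative Python sketch, assuming hypothetical module interfaces (none of the class, method, or action names below come from the paper or the ALFRED benchmark), the composition might look like this:

```python
# A minimal, hypothetical sketch of the modular decomposition described
# above. Class and method names are illustrative only, not the paper's API.
from dataclasses import dataclass
from typing import List

@dataclass
class Subgoal:
    action: str  # e.g., "navigate", "open", "pickup"
    target: str  # object or location referenced in the instruction

class LanguageModule:
    """Trained on instruction text alone, with no paired trajectories."""
    def parse(self, instruction: str) -> List[Subgoal]:
        # Stub: a real model would map free-form instructions to subgoals.
        return [Subgoal("navigate", "microwave"), Subgoal("open", "microwave")]

class MappingModule:
    """Trained on visual data alone; maintains a geometry-aware map."""
    def update(self, rgb_frame, depth_frame) -> None:
        # Stub: fuse the new observation into a map of the cluttered scene.
        pass

    def plan_to(self, target: str) -> List[str]:
        # Stub: plan low-level motions toward the named target on the map.
        return ["MoveAhead", "RotateLeft", "MoveAhead"]

def execute(instruction: str, lang: LanguageModule, mapper: MappingModule) -> None:
    """Compose the modules: language decides *what*, mapping decides *how*."""
    for subgoal in lang.parse(instruction):
        if subgoal.action == "navigate":
            for motion in mapper.plan_to(subgoal.target):
                print("motion:", motion)  # send to the robot's controller
        else:
            print("interaction:", subgoal.action, subgoal.target)

execute("heat the mug in the microwave", LanguageModule(), MappingModule())
```

The design point the abstract makes is visible in the sketch: the language module never sees images and the mapping module never sees instructions, so each can be trained on its own data set, with only a thin execution loop tying them together.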
format Online
Article
Text
id pubmed-9340572
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-9340572 2022-08-02 A Modular Vision Language Navigation and Manipulation Framework for Long Horizon Compositional Tasks in Indoor Environment. Saha, Homagni; Fotouhi, Fateme; Liu, Qisai; Sarkar, Soumik. Front Robot AI, Robotics and AI. Frontiers Media S.A. 2022-07-13. /pmc/articles/PMC9340572/ /pubmed/35923304 http://dx.doi.org/10.3389/frobt.2022.930486 Text en Copyright © 2022 Saha, Fotouhi, Liu and Sarkar. Open access under the Creative Commons Attribution License (CC BY): https://creativecommons.org/licenses/by/4.0/
title A Modular Vision Language Navigation and Manipulation Framework for Long Horizon Compositional Tasks in Indoor Environment
title_sort modular vision language navigation and manipulation framework for long horizon compositional tasks in indoor environment
topic Robotics and AI
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9340572/
https://www.ncbi.nlm.nih.gov/pubmed/35923304
http://dx.doi.org/10.3389/frobt.2022.930486