
A Study on Generative Models for Visual Recognition of Unknown Scenes Using a Textual Description

In this study, we investigate the application of generative models to assist artificial agents, such as delivery drones or service robots, in visualising unfamiliar destinations solely based on textual descriptions. We explore the use of generative models, such as Stable Diffusion, and embedding representations, such as CLIP and VisualBERT, to compare generated images obtained from textual descriptions of target scenes with images of those scenes. Our research encompasses three key strategies: image generation, text generation, and text enhancement, the latter involving tools such as ChatGPT to create concise textual descriptions for evaluation. The findings of this study contribute to an understanding of the impact of combining generative tools with multi-modal embedding representations to enhance the artificial agent’s ability to recognise unknown scenes. Consequently, we assert that this research holds broad applications, particularly in drone parcel delivery, where an aerial robot can employ text descriptions to identify a destination. Furthermore, this concept can also be applied to other service robots tasked with delivering to unfamiliar locations, relying exclusively on user-provided textual descriptions.
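The comparison step described in the abstract, matching an image generated from a textual description against a real image of the scene in a shared embedding space, reduces to a similarity score between embedding vectors. A minimal sketch, assuming both images have already been encoded into vectors (e.g. with CLIP, whose image embeddings are typically 512- or 768-dimensional; the short vectors below are hypothetical stand-ins):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical stand-ins for CLIP-style image embeddings.
generated_scene = np.array([0.9, 0.1, 0.4])   # image generated from the text description
real_scene      = np.array([0.8, 0.2, 0.5])   # camera image of the target scene
other_scene     = np.array([-0.5, 0.9, 0.1])  # camera image of an unrelated scene

# The agent would accept the candidate scene whose real image is most
# similar to the image generated from the textual description.
print(cosine_similarity(generated_scene, real_scene))   # high similarity
print(cosine_similarity(generated_scene, other_scene))  # low similarity
```

In practice the embeddings would come from the vision encoder of a model such as CLIP, and the decision could be a threshold on the score or an argmax over candidate scenes; those details are not specified in the abstract.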


Bibliographic Details
Main Authors: Martinez-Carranza, Jose; Hernández-Farías, Delia Irazú; Vazquez-Meza, Victoria Eugenia; Rojas-Perez, Leticia Oyuki; Cabrera-Ponce, Aldrich Alfredo
Format: Online Article Text
Language: English
Published: MDPI, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10649081/
https://www.ncbi.nlm.nih.gov/pubmed/37960458
http://dx.doi.org/10.3390/s23218757
Collection: PubMed (record pubmed-10649081, MEDLINE/PubMed format)
Institution: National Center for Biotechnology Information
Journal: Sensors (Basel)
Published online: 2023-10-27
License: © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).