Enhancing Visual Feedback Control through Early Fusion Deep Learning
A visual servoing system is a type of control system used in robotics that employs visual feedback to guide the movement of a robot or a camera to achieve a desired task. This problem is addressed using deep models that receive a visual representation of the current and desired scene to compute the control input.
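The early-fusion idea described in the abstract amounts to stacking ready-to-use extra maps as additional channels next to the current and desired images before they enter the network. Below is a minimal sketch of that input construction, assuming grayscale 8-bit inputs and a simple Otsu-thresholded segmentation as the extra map; the paper also considers image-moment and feature-point maps, which are not shown here, and all function names are illustrative rather than the authors' implementation.

```python
# Minimal early-fusion sketch (illustrative, not the authors' code):
# stack the current image, the desired image, and one extra map per scene
# into a single multi-channel input array for a velocity-regression network.
import cv2
import numpy as np

def binary_map(gray: np.ndarray) -> np.ndarray:
    """Region-based segmentation map from simple Otsu thresholding (assumed choice)."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return mask.astype(np.float32) / 255.0

def early_fusion_input(current_gray: np.ndarray, desired_gray: np.ndarray) -> np.ndarray:
    """Build the neural input array: image channels plus extra low-level maps."""
    cur = current_gray.astype(np.float32) / 255.0
    des = desired_gray.astype(np.float32) / 255.0
    extra_cur = binary_map(current_gray)
    extra_des = binary_map(desired_gray)
    # Channels-last stack of shape (H, W, 4). A deep model trained on such
    # inputs would regress the 6-DOF camera velocity [vx, vy, vz, wx, wy, wz].
    return np.stack([cur, des, extra_cur, extra_des], axis=-1)
```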
Main Authors: | Botezatu, Adrian-Paul; Ferariu, Lavinia-Eugenia; Burlacu, Adrian |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606400/ https://www.ncbi.nlm.nih.gov/pubmed/37895500 http://dx.doi.org/10.3390/e25101378 |
_version_ | 1785127307263868928 |
---|---|
author | Botezatu, Adrian-Paul; Ferariu, Lavinia-Eugenia; Burlacu, Adrian |
author_facet | Botezatu, Adrian-Paul; Ferariu, Lavinia-Eugenia; Burlacu, Adrian |
author_sort | Botezatu, Adrian-Paul |
collection | PubMed |
description | A visual servoing system is a type of control system used in robotics that employs visual feedback to guide the movement of a robot or a camera to achieve a desired task. This problem is addressed using deep models that receive a visual representation of the current and desired scene to compute the control input. The focus is on early fusion, which consists of integrating additional information into the neural input array. In this context, we discuss how ready-to-use information can be obtained directly from the current and desired scenes to facilitate the learning process. Inspired by some of the most effective traditional visual servoing techniques, we introduce early fusion based on image moments and provide an extensive analysis of approaches based on image moments, region-based segmentation, and feature points. These techniques are applied stand-alone or in combination to obtain maps with different levels of detail. The role of the extra maps is experimentally investigated for scenes with different layouts. The results show that early fusion facilitates a more accurate approximation of the linear and angular camera velocities used to control the movement of a 6-degree-of-freedom robot from a current configuration to a desired one. The best results were obtained for extra maps providing details of low and medium levels. |
format | Online Article Text |
id | pubmed-10606400 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10606400 2023-10-28 Enhancing Visual Feedback Control through Early Fusion Deep Learning Botezatu, Adrian-Paul Ferariu, Lavinia-Eugenia Burlacu, Adrian Entropy (Basel) Article A visual servoing system is a type of control system used in robotics that employs visual feedback to guide the movement of a robot or a camera to achieve a desired task. This problem is addressed using deep models that receive a visual representation of the current and desired scene, to compute the control input. The focus is on early fusion, which consists of using additional information integrated into the neural input array. In this context, we discuss how ready-to-use information can be directly obtained from the current and desired scenes, to facilitate the learning process. Inspired by some of the most effective traditional visual servoing techniques, we introduce early fusion based on image moments and provide an extensive analysis of approaches based on image moments, region-based segmentation, and feature points. These techniques are applied stand-alone or in combination, to allow obtaining maps with different levels of detail. The role of the extra maps is experimentally investigated for scenes with different layouts. The results show that early fusion facilitates a more accurate approximation of the linear and angular camera velocities, in order to control the movement of a 6-degree-of-freedom robot from a current configuration to a desired one. The best results were obtained for the extra maps providing details of low and medium levels. MDPI 2023-09-25 /pmc/articles/PMC10606400/ /pubmed/37895500 http://dx.doi.org/10.3390/e25101378 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Botezatu, Adrian-Paul Ferariu, Lavinia-Eugenia Burlacu, Adrian Enhancing Visual Feedback Control through Early Fusion Deep Learning |
title | Enhancing Visual Feedback Control through Early Fusion Deep Learning |
title_full | Enhancing Visual Feedback Control through Early Fusion Deep Learning |
title_fullStr | Enhancing Visual Feedback Control through Early Fusion Deep Learning |
title_full_unstemmed | Enhancing Visual Feedback Control through Early Fusion Deep Learning |
title_short | Enhancing Visual Feedback Control through Early Fusion Deep Learning |
title_sort | enhancing visual feedback control through early fusion deep learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606400/ https://www.ncbi.nlm.nih.gov/pubmed/37895500 http://dx.doi.org/10.3390/e25101378 |
work_keys_str_mv | AT botezatuadrianpaul enhancingvisualfeedbackcontrolthroughearlyfusiondeeplearning AT ferariulaviniaeugenia enhancingvisualfeedbackcontrolthroughearlyfusiondeeplearning AT burlacuadrian enhancingvisualfeedbackcontrolthroughearlyfusiondeeplearning |