
An Embedded Framework for Fully Autonomous Object Manipulation in Robotic-Empowered Assisted Living


Bibliographic Details
Main Authors: Mezzina, Giovanni; De Venuto, Daniela
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9823472/
https://www.ncbi.nlm.nih.gov/pubmed/36616705
http://dx.doi.org/10.3390/s23010103
_version_ 1784866168069160960
author Mezzina, Giovanni
De Venuto, Daniela
author_facet Mezzina, Giovanni
De Venuto, Daniela
author_sort Mezzina, Giovanni
collection PubMed
description Most humanoid social robots currently in widespread use are designed only for verbal and animated interactions with users, and despite being equipped with two upper arms for interactive animation, they lack object manipulation capabilities. In this paper, we propose the MONOCULAR (eMbeddable autONomous ObjeCt manipULAtion Routines) framework, which implements a set of routines to add manipulation functionalities to social robots by exploiting the functional data fusion of two RGB cameras and a 3D depth sensor placed in the head frame. The framework is designed to: (i) localize specific objects to be manipulated via RGB cameras; (ii) define the characteristics of the shelf on which they are placed; and (iii) autonomously adapt approach and manipulation routines to avoid collisions and maximize grabbing accuracy. To localize the item on the shelf, MONOCULAR exploits an embeddable version of the You Only Look Once (YOLO) object detector. The RGB camera outcomes are also used to estimate the height of the shelf using an edge-detection algorithm. Based on the item’s position and the estimated shelf height, MONOCULAR selects between two possible routines that dynamically optimize the approach and object manipulation parameters according to the real-time analysis of RGB and 3D sensor frames. These two routines are optimized for a central or lateral approach to objects on a shelf. The MONOCULAR procedures are designed to be fully automatic, intrinsically protecting sensitive user data and stored home or hospital maps. MONOCULAR was optimized for Pepper by SoftBank Robotics. To characterize the proposed system, a case study in which Pepper is used as a drug delivery operator is proposed. The case study is divided into: (i) pharmaceutical package search; (ii) object approach and manipulation; and (iii) delivery operations. 
Experimental data showed that the object manipulation routine for laterally placed objects achieves a grabbing success rate of up to 96%, while the routine for centrally placed objects reaches up to 97% across a wide range of shelf heights. Finally, a proof of concept is proposed here to demonstrate the applicability of the MONOCULAR framework in a real-life scenario.
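The abstract describes selecting between a central and a lateral manipulation routine from the item's position in the RGB frame. A minimal sketch of that decision step, assuming a YOLO-style bounding box in pixel coordinates and an illustrative midline-band threshold (the names and threshold are hypothetical, not the authors' implementation):

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """YOLO-style bounding box, horizontal extent in pixel coordinates."""
    x_min: float
    x_max: float


def select_routine(det: Detection, frame_width: int,
                   central_band: float = 0.2) -> str:
    """Pick the manipulation routine from the object's horizontal position.

    Returns 'central' if the bbox centre falls within a band (here 20% of
    the frame width, an assumed value) around the image midline, otherwise
    'lateral'.
    """
    centre = (det.x_min + det.x_max) / 2.0
    # Normalized distance of the bbox centre from the image midline.
    offset = abs(centre - frame_width / 2.0) / frame_width
    return "central" if offset <= central_band / 2.0 else "lateral"


# Example: in a 640 px wide frame, a bbox spanning x = 300..340 is centred
# on the midline, while one at x = 500..560 sits well to the side.
print(select_routine(Detection(300, 340), 640))  # -> central
print(select_routine(Detection(500, 560), 640))  # -> lateral
```

In the paper, this choice is additionally conditioned on the estimated shelf height; the sketch isolates only the horizontal-position criterion.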
format Online
Article
Text
id pubmed-9823472
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-98234722023-01-08 An Embedded Framework for Fully Autonomous Object Manipulation in Robotic-Empowered Assisted Living Mezzina, Giovanni De Venuto, Daniela Sensors (Basel) Article MDPI 2022-12-22 /pmc/articles/PMC9823472/ /pubmed/36616705 http://dx.doi.org/10.3390/s23010103 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Mezzina, Giovanni
De Venuto, Daniela
An Embedded Framework for Fully Autonomous Object Manipulation in Robotic-Empowered Assisted Living
title An Embedded Framework for Fully Autonomous Object Manipulation in Robotic-Empowered Assisted Living
title_full An Embedded Framework for Fully Autonomous Object Manipulation in Robotic-Empowered Assisted Living
title_fullStr An Embedded Framework for Fully Autonomous Object Manipulation in Robotic-Empowered Assisted Living
title_full_unstemmed An Embedded Framework for Fully Autonomous Object Manipulation in Robotic-Empowered Assisted Living
title_short An Embedded Framework for Fully Autonomous Object Manipulation in Robotic-Empowered Assisted Living
title_sort embedded framework for fully autonomous object manipulation in robotic-empowered assisted living
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9823472/
https://www.ncbi.nlm.nih.gov/pubmed/36616705
http://dx.doi.org/10.3390/s23010103
work_keys_str_mv AT mezzinagiovanni anembeddedframeworkforfullyautonomousobjectmanipulationinroboticempoweredassistedliving
AT devenutodaniela anembeddedframeworkforfullyautonomousobjectmanipulationinroboticempoweredassistedliving
AT mezzinagiovanni embeddedframeworkforfullyautonomousobjectmanipulationinroboticempoweredassistedliving
AT devenutodaniela embeddedframeworkforfullyautonomousobjectmanipulationinroboticempoweredassistedliving