
Improving Robotic Hand Prosthesis Control With Eye Tracking and Computer Vision: A Multimodal Approach Based on the Visuomotor Behavior of Grasping

The complexity and dexterity of the human hand make the development of natural and robust control of hand prostheses challenging. Although a large number of control approaches have been developed and investigated over the last decades, limited robustness in real-life conditions has often prevented their application in clinical settings and commercial products. In this paper, we investigate a multimodal approach that exploits eye-hand coordination to improve the control of myoelectric hand prostheses. The analyzed data come from the publicly available MeganePro Dataset 1, which includes multimodal data from transradial amputees and able-bodied subjects grasping numerous household objects with ten grasp types. A continuous grasp-type classification based on surface electromyography served as both intent detector and classifier. At the same time, eye-hand coordination parameters, gaze data, and object recognition in first-person videos made it possible to identify the object a person aims to grasp. The results show that the inclusion of visual information significantly increases the average offline classification accuracy, by up to 15.61 ± 4.22% for the transradial amputees and by up to 7.37 ± 3.52% for the able-bodied subjects, allowing transradial amputees to reach an average classification accuracy comparable to that of intact subjects. This suggests that the robustness of hand prosthesis control based on grasp-type recognition can be significantly improved by including visual information extracted by leveraging natural eye-hand coordination behavior, without placing additional cognitive burden on the user.

Bibliographic Details
Main Authors: Cognolato, Matteo, Atzori, Manfredo, Gassert, Roger, Müller, Henning
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2022
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8822121/
https://www.ncbi.nlm.nih.gov/pubmed/35146422
http://dx.doi.org/10.3389/frai.2021.744476
_version_ 1784646546655019008
author Cognolato, Matteo
Atzori, Manfredo
Gassert, Roger
Müller, Henning
author_sort Cognolato, Matteo
collection PubMed
description The complexity and dexterity of the human hand make the development of natural and robust control of hand prostheses challenging. Although a large number of control approaches have been developed and investigated over the last decades, limited robustness in real-life conditions has often prevented their application in clinical settings and commercial products. In this paper, we investigate a multimodal approach that exploits eye-hand coordination to improve the control of myoelectric hand prostheses. The analyzed data come from the publicly available MeganePro Dataset 1, which includes multimodal data from transradial amputees and able-bodied subjects grasping numerous household objects with ten grasp types. A continuous grasp-type classification based on surface electromyography served as both intent detector and classifier. At the same time, eye-hand coordination parameters, gaze data, and object recognition in first-person videos made it possible to identify the object a person aims to grasp. The results show that the inclusion of visual information significantly increases the average offline classification accuracy, by up to 15.61 ± 4.22% for the transradial amputees and by up to 7.37 ± 3.52% for the able-bodied subjects, allowing transradial amputees to reach an average classification accuracy comparable to that of intact subjects. This suggests that the robustness of hand prosthesis control based on grasp-type recognition can be significantly improved by including visual information extracted by leveraging natural eye-hand coordination behavior, without placing additional cognitive burden on the user.
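The fusion idea in the abstract (an sEMG grasp-type classifier combined with a gaze-driven, object-conditioned prior) can be sketched in a few lines of Python. The sketch below is a hypothetical illustration, not the authors' implementation: the function name fuse_grasp_posteriors, the weight parameter, and the example probabilities are all assumptions introduced here.

    import numpy as np

    N_GRASPS = 10  # the MeganePro protocol uses ten grasp types

    def fuse_grasp_posteriors(emg_probs, object_prior, weight=1.0):
        # Late fusion: scale the sEMG posterior by a prior over grasp
        # types conditioned on the gazed-at object, then renormalize.
        # 'weight' (hypothetical) controls trust in the visual prior.
        emg_probs = np.asarray(emg_probs, dtype=float)
        object_prior = np.asarray(object_prior, dtype=float)
        fused = emg_probs * np.power(object_prior, weight)
        total = fused.sum()
        # Fall back to the sEMG posterior if the prior zeroes everything out.
        return fused / total if total > 0 else emg_probs

    # Example: sEMG is ambiguous between grasp types 2 and 5, but the
    # recognized object (say, a mug) affords mainly grasp type 5.
    emg = np.full(N_GRASPS, 0.02)
    emg[2], emg[5] = 0.45, 0.43
    prior = np.full(N_GRASPS, 0.05)
    prior[5] = 0.55
    print(np.argmax(fuse_grasp_posteriors(emg, prior)))  # -> 5

Multiplying the two distributions amounts to a naive-Bayes-style late fusion: a uniform visual prior leaves the sEMG decision unchanged, while a sharp object-conditioned prior can resolve grasp types the sEMG posterior finds ambiguous.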
format Online
Article
Text
id pubmed-8822121
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-8822121 2022-02-09 Front Artif Intell Artificial Intelligence Frontiers Media S.A. 2022-01-25 /pmc/articles/PMC8822121/ /pubmed/35146422 http://dx.doi.org/10.3389/frai.2021.744476 Text en Copyright © 2022 Cognolato, Atzori, Gassert and Müller. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
title Improving Robotic Hand Prosthesis Control With Eye Tracking and Computer Vision: A Multimodal Approach Based on the Visuomotor Behavior of Grasping
title_sort improving robotic hand prosthesis control with eye tracking and computer vision: a multimodal approach based on the visuomotor behavior of grasping
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8822121/
https://www.ncbi.nlm.nih.gov/pubmed/35146422
http://dx.doi.org/10.3389/frai.2021.744476
work_keys_str_mv AT cognolatomatteo improvingrobotichandprosthesiscontrolwitheyetrackingandcomputervisionamultimodalapproachbasedonthevisuomotorbehaviorofgrasping
AT atzorimanfredo improvingrobotichandprosthesiscontrolwitheyetrackingandcomputervisionamultimodalapproachbasedonthevisuomotorbehaviorofgrasping
AT gassertroger improvingrobotichandprosthesiscontrolwitheyetrackingandcomputervisionamultimodalapproachbasedonthevisuomotorbehaviorofgrasping
AT mullerhenning improvingrobotichandprosthesiscontrolwitheyetrackingandcomputervisionamultimodalapproachbasedonthevisuomotorbehaviorofgrasping