
Learning-Based Motion-Intention Prediction for End-Point Control of Upper-Limb-Assistive Robots

The lack of intuitive and active human–robot interaction makes upper-limb-assistive devices difficult to use. In this paper, we propose a novel learning-based controller that intuitively uses onset motion to predict the desired end-point position for an assistive robot. A multi-modal sensing system comprising inertial measurement units (IMUs), electromyographic (EMG) sensors, and mechanomyography (MMG) sensors was implemented. This system was used to acquire kinematic and physiological signals during reaching and placing tasks performed by five healthy subjects. The onset motion data of each motion trial were extracted as input to traditional regression models and deep learning models for training and testing. The models predict the position of the hand in planar space, which serves as the reference position for low-level position controllers. The results show that an IMU sensor with the proposed prediction model is sufficient for motion-intention detection, providing nearly the same prediction performance as adding EMG or MMG. Additionally, recurrent neural network (RNN)-based models can predict target positions over a short onset time window for reaching motions and are suitable for predicting targets over a longer horizon for placing tasks. This study’s detailed analysis can improve the usability of assistive/rehabilitation robots.
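To make the abstract's pipeline concrete, here is a minimal, hypothetical sketch (not the paper's implementation) of one of the "traditional regression models" it mentions: a ridge regression that maps a flattened onset-motion window of IMU features to a planar (x, y) end-point. All shapes, the window length, and the synthetic data are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials = 200   # motion trials (assumed)
window = 30      # onset-window samples, e.g. ~0.3 s at 100 Hz (assumed)
n_features = 6   # e.g. 3-axis accelerometer + 3-axis gyroscope (assumed)

# Synthetic stand-in data: onset windows X and their reached 2-D targets Y.
X = rng.normal(size=(n_trials, window * n_features))
true_w = rng.normal(size=(window * n_features, 2))
Y = X @ true_w + 0.1 * rng.normal(size=(n_trials, 2))

# Ridge regression: W = (X^T X + lambda * I)^-1 X^T Y
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

# Predict planar end-points for some onset windows; such a prediction would
# serve as the reference input to a low-level position controller.
pred = X[:5] @ W
rmse = np.sqrt(np.mean((pred - Y[:5]) ** 2))
print(pred.shape)
```

The RNN variants discussed in the abstract would replace the flattened-window linear map with a recurrent encoder over the same onset sequence, but the input/output contract (onset window in, 2-D reference position out) stays the same.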


Bibliographic Details
Main Authors: Yang, Sibo, Garg, Neha P., Gao, Ruobin, Yuan, Meng, Noronha, Bernardo, Ang, Wei Tech, Accoto, Dino
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10056111/
https://www.ncbi.nlm.nih.gov/pubmed/36991709
http://dx.doi.org/10.3390/s23062998
Published in Sensors (Basel), Article, by MDPI on 10 March 2023. © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).