
Interactive method research of dual mode information coordination integration for astronaut gesture and eye movement signals based on hybrid model

Bibliographic Details
Main Authors: Zhuang, HongChao, Xia, YiLu, Wang, Ning, Li, WeiHua, Dong, Lei, Li, Bo
Format: Online Article Text
Language: English
Published: Science China Press 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10182537/
https://www.ncbi.nlm.nih.gov/pubmed/37288339
http://dx.doi.org/10.1007/s11431-022-2368-y
author Zhuang, HongChao
Xia, YiLu
Wang, Ning
Li, WeiHua
Dong, Lei
Li, Bo
collection PubMed
description A lightweight human-robot interaction model with high real-time performance, high accuracy, and strong anti-interference capability is well suited to future lunar surface exploration and construction work. Using feature information captured by a monocular camera, the acquisition, processing, and fusion of astronaut gesture and eye-movement interaction signals can be performed. Compared with a single mode, a bimodal collaborative interaction model can issue complex interactive commands more efficiently. The target detection model is optimized by inserting attention modules into YOLOv4 and by filtering image motion blur. The central coordinates of the pupils are located by a neural network to realize interaction in the eye-movement mode. The gesture and eye-movement signals are fused at the end of the collaborative model so that complex command interaction is achieved with a lightweight model. The training dataset is augmented and extended to simulate a realistic lunar interaction environment. The human-robot interaction performance on complex commands in the single modes is compared with that of the bimodal collaboration. The experimental results show that the concatenated interaction model of astronaut gesture and eye-movement signals mines the bimodal interaction signals more thoroughly, discriminates complex interaction commands more quickly, and resists signal interference more strongly, owing to its greater feature-mining capacity. Compared with command interaction using the single gesture signal or the single eye-movement signal, the bimodal collaborative model completes interaction in roughly 79% to 91% of the single-mode interaction time. Under every image-interference condition tested, the overall judgment accuracy of the proposed model remains at about 83% to 97%. The effectiveness of the proposed method is verified.
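The abstract names two concrete mechanisms: an attention module inserted into a YOLOv4-style detector, and end-of-model (concatenation) fusion of the gesture and eye-movement signals. The record carries no code, so the following is only a minimal PyTorch sketch of those two ideas; the SE-style attention variant, every layer size, the 2-D pupil-centre representation, and the eight-command output are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the published code) of the two mechanisms the
# abstract names: channel attention inserted into a YOLOv4-style
# backbone stage, and late (concatenation) fusion of gesture and
# eye-movement features. All names and dimensions are assumptions.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """SE-style channel attention; the paper says attention is inserted
    into YOLOv4 but does not fix the variant, so this choice is assumed."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze spatial dims to 1x1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gate in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight the detector's feature channels

class BimodalCommandHead(nn.Module):
    """Concatenates a gesture feature vector with pupil-centre coordinates
    and classifies the joint command, mirroring the end-of-model fusion
    the abstract describes. Dimensions and command count are placeholders."""
    def __init__(self, gesture_dim: int = 256, eye_dim: int = 2,
                 n_commands: int = 8):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(gesture_dim + eye_dim, 128),
            nn.ReLU(inplace=True),
            nn.Linear(128, n_commands),
        )

    def forward(self, gesture_feat: torch.Tensor,
                pupil_xy: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([gesture_feat, pupil_xy], dim=1)  # late fusion
        return self.classifier(fused)  # logits over joint commands

if __name__ == "__main__":
    attn = ChannelAttention(channels=64)
    fmap = torch.randn(1, 64, 52, 52)          # a YOLOv4-like feature map
    head = BimodalCommandHead()
    logits = head(torch.randn(1, 256), torch.rand(1, 2))
    print(attn(fmap).shape, logits.shape)      # (1,64,52,52), (1,8)
```

Fusing by concatenation only at the end of the pipeline keeps each single-mode branch independently trainable and cheap at inference time, which is consistent with the lightweight, end-of-model fusion design the abstract describes.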
format Online
Article
Text
id pubmed-10182537
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Science China Press
record_format MEDLINE/PubMed
spelling pubmed-10182537 2023-05-14 Sci China Technol Sci. Science China Press, 2023-05-09. Text en. © Science China Press 2023. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
title Interactive method research of dual mode information coordination integration for astronaut gesture and eye movement signals based on hybrid model
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10182537/
https://www.ncbi.nlm.nih.gov/pubmed/37288339
http://dx.doi.org/10.1007/s11431-022-2368-y