
Face mediated human–robot interaction for remote medical examination

Real-time visual feedback from the consequences of actions is useful for future safety-critical human–robot interaction applications such as remote physical examination of patients. Given multiple formats to present visual feedback, using the face as feedback for mediating human–robot interaction in remote examination remains understudied. Here we describe a face mediated human–robot interaction approach for remote palpation. It builds upon a robodoctor–robopatient platform where a user can palpate the robopatient to remotely control the robodoctor to diagnose a patient. A tactile sensor array mounted on the end effector of the robodoctor measures the haptic response of the patient under diagnosis and transfers it to the robopatient to render pain facial expressions in response to palpation forces. We compare this approach against a direct presentation of tactile sensor data in a visual tactile map. As feedback, the former has the advantage of recruiting advanced human capabilities to decode expressions on a human face, whereas the latter has the advantage of being able to present details such as the intensity and spatial information of palpation. In a user study, we compare these two approaches in a teleoperated palpation task to find the hard nodule embedded in the remote abdominal phantom. We show that the face mediated human–robot interaction approach leads to statistically significant improvements in localizing the hard nodule without compromising the nodule position estimation time. We highlight the inherent power of facial expressions as communicative signals to enhance the utility and effectiveness of human–robot interaction in remote medical examinations.
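The abstract describes a feedback pipeline in which a tactile sensor array on the robodoctor's end effector drives either a rendered pain facial expression or a visual tactile map on the robopatient side. The sketch below is purely illustrative and not taken from the paper: the grid size, force range, and helper names (pain_expression_intensity, tactile_map) are hypothetical, and they only serve to contrast the two feedback formats being compared.

    # Illustrative sketch (assumptions, not the authors' implementation):
    # map a palpation force grid to either a single pain-expression intensity
    # or a normalised visual tactile map.
    import numpy as np

    def pain_expression_intensity(force_grid: np.ndarray, max_force: float = 10.0) -> float:
        """Map a palpation force grid (newtons) to a 0..1 pain-expression intensity.

        Hypothetical mapping: the peak contact force, normalised by an assumed
        maximum force, drives how strongly the robopatient face displays pain.
        """
        peak = float(np.max(force_grid))
        return min(peak / max_force, 1.0)

    def tactile_map(force_grid: np.ndarray, max_force: float = 10.0) -> np.ndarray:
        """Return the same grid normalised to 0..1 for direct visual display."""
        return np.clip(force_grid / max_force, 0.0, 1.0)

    if __name__ == "__main__":
        # Simulated 4x4 tactile array reading during palpation (values in newtons).
        reading = np.array([[0.2, 0.5, 0.4, 0.1],
                            [0.6, 7.8, 6.9, 0.3],
                            [0.5, 6.5, 5.9, 0.2],
                            [0.1, 0.4, 0.3, 0.1]])
        print("pain expression intensity:", pain_expression_intensity(reading))
        print("visual tactile map:\n", tactile_map(reading))

In this toy contrast, the facial-expression route compresses the reading into one signal the user decodes at a glance, while the tactile map preserves the intensity and spatial detail, mirroring the trade-off studied in the paper.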

Bibliographic Details
Main Authors: Lalitharatne, Thilina D., Costi, Leone, Hashem, Ryman, Nisky, Ilana, Jack, Rachael E., Nanayakkara, Thrishantha, Iida, Fumiya
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9307637/
https://www.ncbi.nlm.nih.gov/pubmed/35869154
http://dx.doi.org/10.1038/s41598-022-16643-z
_version_ 1784752807390216192
author Lalitharatne, Thilina D.
Costi, Leone
Hashem, Ryman
Nisky, Ilana
Jack, Rachael E.
Nanayakkara, Thrishantha
Iida, Fumiya
author_facet Lalitharatne, Thilina D.
Costi, Leone
Hashem, Ryman
Nisky, Ilana
Jack, Rachael E.
Nanayakkara, Thrishantha
Iida, Fumiya
author_sort Lalitharatne, Thilina D.
collection PubMed
description Real-time visual feedback from the consequences of actions is useful for future safety-critical human–robot interaction applications such as remote physical examination of patients. Given multiple formats to present visual feedback, using the face as feedback for mediating human–robot interaction in remote examination remains understudied. Here we describe a face mediated human–robot interaction approach for remote palpation. It builds upon a robodoctor–robopatient platform where a user can palpate the robopatient to remotely control the robodoctor to diagnose a patient. A tactile sensor array mounted on the end effector of the robodoctor measures the haptic response of the patient under diagnosis and transfers it to the robopatient to render pain facial expressions in response to palpation forces. We compare this approach against a direct presentation of tactile sensor data in a visual tactile map. As feedback, the former has the advantage of recruiting advanced human capabilities to decode expressions on a human face, whereas the latter has the advantage of being able to present details such as the intensity and spatial information of palpation. In a user study, we compare these two approaches in a teleoperated palpation task to find the hard nodule embedded in the remote abdominal phantom. We show that the face mediated human–robot interaction approach leads to statistically significant improvements in localizing the hard nodule without compromising the nodule position estimation time. We highlight the inherent power of facial expressions as communicative signals to enhance the utility and effectiveness of human–robot interaction in remote medical examinations.
format Online
Article
Text
id pubmed-9307637
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-9307637 2022-07-24 Face mediated human–robot interaction for remote medical examination Lalitharatne, Thilina D. Costi, Leone Hashem, Ryman Nisky, Ilana Jack, Rachael E. Nanayakkara, Thrishantha Iida, Fumiya Sci Rep Article Real-time visual feedback from the consequences of actions is useful for future safety-critical human–robot interaction applications such as remote physical examination of patients. Given multiple formats to present visual feedback, using the face as feedback for mediating human–robot interaction in remote examination remains understudied. Here we describe a face mediated human–robot interaction approach for remote palpation. It builds upon a robodoctor–robopatient platform where a user can palpate the robopatient to remotely control the robodoctor to diagnose a patient. A tactile sensor array mounted on the end effector of the robodoctor measures the haptic response of the patient under diagnosis and transfers it to the robopatient to render pain facial expressions in response to palpation forces. We compare this approach against a direct presentation of tactile sensor data in a visual tactile map. As feedback, the former has the advantage of recruiting advanced human capabilities to decode expressions on a human face, whereas the latter has the advantage of being able to present details such as the intensity and spatial information of palpation. In a user study, we compare these two approaches in a teleoperated palpation task to find the hard nodule embedded in the remote abdominal phantom. We show that the face mediated human–robot interaction approach leads to statistically significant improvements in localizing the hard nodule without compromising the nodule position estimation time. We highlight the inherent power of facial expressions as communicative signals to enhance the utility and effectiveness of human–robot interaction in remote medical examinations. Nature Publishing Group UK 2022-07-22 /pmc/articles/PMC9307637/ /pubmed/35869154 http://dx.doi.org/10.1038/s41598-022-16643-z Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Lalitharatne, Thilina D.
Costi, Leone
Hashem, Ryman
Nisky, Ilana
Jack, Rachael E.
Nanayakkara, Thrishantha
Iida, Fumiya
Face mediated human–robot interaction for remote medical examination
title Face mediated human–robot interaction for remote medical examination
title_full Face mediated human–robot interaction for remote medical examination
title_fullStr Face mediated human–robot interaction for remote medical examination
title_full_unstemmed Face mediated human–robot interaction for remote medical examination
title_short Face mediated human–robot interaction for remote medical examination
title_sort face mediated human–robot interaction for remote medical examination
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9307637/
https://www.ncbi.nlm.nih.gov/pubmed/35869154
http://dx.doi.org/10.1038/s41598-022-16643-z
work_keys_str_mv AT lalitharatnethilinad facemediatedhumanrobotinteractionforremotemedicalexamination
AT costileone facemediatedhumanrobotinteractionforremotemedicalexamination
AT hashemryman facemediatedhumanrobotinteractionforremotemedicalexamination
AT niskyilana facemediatedhumanrobotinteractionforremotemedicalexamination
AT jackrachaele facemediatedhumanrobotinteractionforremotemedicalexamination
AT nanayakkarathrishantha facemediatedhumanrobotinteractionforremotemedicalexamination
AT iidafumiya facemediatedhumanrobotinteractionforremotemedicalexamination