
Can you notice my attention? A novel information vision enhancement method in MR remote collaborative assembly

In mixed reality (MR) remote collaborative assembly, remote experts can guide local users to complete the assembly of physical tasks by sharing user cues (eye gazes, gestures, etc.) and spatial visual cues (such as AR annotations, virtual replicas). At present, remote experts need to carry out compl...

Full description

Bibliographic Details
Main authors: Yan, YuXiang; Bai, Xiaoliang; He, Weiping; Wang, Shuxia; Zhang, XiangYu; Wang, Peng; Liu, Liwei; Zhang, Bing
Format: Online Article Text
Language: English
Published: Springer London, 2023
Subjects: Original Article
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10237060/
https://www.ncbi.nlm.nih.gov/pubmed/37360661
http://dx.doi.org/10.1007/s00170-023-11652-2
collection: PubMed
description: In mixed reality (MR) remote collaborative assembly, remote experts can guide local users to complete the assembly of physical tasks by sharing user cues (eye gazes, gestures, etc.) and spatial visual cues (such as AR annotations, virtual replicas). At present, remote experts need to carry out complex operations to transfer information to local users, but the fusion of virtual and real information makes the display of information in the MR collaborative interaction interface appear messy and redundant, and local users sometimes find it difficult to pay attention to the focus of information transferred by experts. Our research aims to simplify the operation of remote experts in MR remote collaborative assembly and to enhance the expression of visual cues that reflect experts’ attention, so as to promote the expression and communication of collaborative intention that user has and improve assembly efficiency. We developed a system (EaVAS) through a method that is based on the assembly semantic association model and the expert operation visual enhancement mechanism that integrates gesture, eye gaze, and spatial visual cues. EaVAS can give experts great freedom of operation in MR remote collaborative assembly, so that experts can strengthen the visual expression of the information they want to convey to local users. EaVAS was tested for the first time in an engine physical assembly task. The experimental results show that the EaVAS has better time performance, cognitive performance, and user experience than that of the traditional MR remote collaborative assembly method (3DGAM). Our research results have certain guiding significance for the research of user cognition in MR remote collaborative assembly, which expands the application of MR technology in collaborative assembly tasks.
format: Online Article Text
id: pubmed-10237060
institution: National Center for Biotechnology Information
language: English
publishDate: 2023
publisher: Springer London
record_format: MEDLINE/PubMed
spelling: pubmed-10237060 (2023-06-06). Int J Adv Manuf Technol, Original Article. Springer London, published online 2023-06-02. /pmc/articles/PMC10237060/ /pubmed/37360661 http://dx.doi.org/10.1007/s00170-023-11652-2 Text en
rights: © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2023. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
topic: Original Article