A virtual surgical prototype system based on gesture recognition for virtual surgical training in maxillofacial surgery

Bibliographic Details
Main Authors: Zhao, Hanjiang, Cheng, Mengjia, Huang, Jingyang, Li, Meng, Cheng, Huanchong, Tian, Kun, Yu, Hongbo
Format: Online Article Text
Language: English
Published: Springer International Publishing 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10113313/
https://www.ncbi.nlm.nih.gov/pubmed/36418763
http://dx.doi.org/10.1007/s11548-022-02790-1
author Zhao, Hanjiang
Cheng, Mengjia
Huang, Jingyang
Li, Meng
Cheng, Huanchong
Tian, Kun
Yu, Hongbo
author_sort Zhao, Hanjiang
collection PubMed
description BACKGROUND: Virtual reality (VR) technology is an ideal alternative for operation training and surgical teaching. However, virtual surgery is usually carried out with a mouse or data gloves, which limits the authenticity of the virtual operation. A virtual surgery system with gesture recognition and real-time image feedback was explored to achieve more authentic immersion. METHOD: An efficient, real-time, high-fidelity gesture recognition technology was explored. Recognition of the hand contour, palm and fingertips was first realized by hand data extraction. Then, a Support Vector Machine (SVM) classifier was used to classify and recognize common gestures after feature extraction. The collision detection algorithm adopted an Axis-Aligned Bounding Box (AABB) binary tree to build hand and scalpel collision models. In addition, the nominal radius theorem (NRT) and the separating axis theorem (SAT) were applied to speed up collision detection. Based on the maxillofacial virtual surgical system we proposed previously, the feasibility of integrating the above technologies into this prototype system was evaluated. RESULTS: Ten single static gestures were designed to test the gesture recognition algorithm. The accuracy of gesture recognition was more than 80%, and for some gestures over 90%. With NRT and SAT, the generation speed of the collision detection model met the software requirements. The response time of gesture recognition was less than 40 ms; that is, the hand gesture recognition system ran at more than 25 Hz. With hand gesture recognition integrated, typical virtual surgical procedures, including grabbing a scalpel, puncture site selection, virtual puncture and incision, were carried out with real-time image feedback.
CONCLUSION: Based on the previous maxillofacial virtual surgical system, which consisted of VR, triangular mesh collision detection and maxillofacial biomechanical model construction, integrating hand gesture recognition proved a feasible way to improve the interactivity and immersion of virtual surgical operation training.
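The recognition pipeline described in the abstract extracts hand features and then classifies each of the ten static gestures. The paper uses an SVM classifier; as a minimal dependency-free sketch of that classification step, the code below substitutes a nearest-centroid classifier (named plainly as a stand-in, not the authors' method), and the 2-D feature vectors and gesture labels are hypothetical, since the abstract does not specify the feature set.

```python
def train_centroids(features, labels):
    """Average the feature vectors of each gesture class into one centroid."""
    sums, counts = {}, {}
    for x, y in zip(features, labels):
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def classify(centroids, x):
    """Return the gesture label whose centroid is nearest to feature vector x."""
    def dist_sq(c):
        return sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist_sq(centroids[y]))
```

An SVM with an appropriate kernel would replace `classify` with a margin-based decision over the same feature vectors; the surrounding extract-features/train/predict structure is unchanged.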
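The collision pipeline pairs a cheap quick-rejection step with AABB tests: for axis-aligned boxes, the separating axis theorem reduces to an interval-overlap check on each coordinate axis. The abstract does not state the formulation of the nominal radius theorem, so the bounding-sphere rejection below is an assumed analogue, not the paper's exact method.

```python
def spheres_disjoint(center_a, radius_a, center_b, radius_b):
    """Quick rejection: if the bounding spheres do not intersect,
    the finer AABB test can be skipped entirely."""
    dist_sq = sum((a - b) ** 2 for a, b in zip(center_a, center_b))
    return dist_sq > (radius_a + radius_b) ** 2

def aabbs_overlap(min_a, max_a, min_b, max_b):
    """SAT for axis-aligned boxes: the boxes overlap iff their
    projections overlap on all three coordinate axes."""
    return all(max_a[i] >= min_b[i] and max_b[i] >= min_a[i] for i in range(3))
```

In an AABB binary tree, these tests are applied top-down: if two nodes' boxes do not overlap, their whole subtrees are pruned, which is what keeps detection fast enough for the reported sub-40 ms response times.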
format Online
Article
Text
id pubmed-10113313
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-10113313 2023-04-20 A virtual surgical prototype system based on gesture recognition for virtual surgical training in maxillofacial surgery. Zhao, Hanjiang; Cheng, Mengjia; Huang, Jingyang; Li, Meng; Cheng, Huanchong; Tian, Kun; Yu, Hongbo. Int J Comput Assist Radiol Surg, Original Article. Springer International Publishing, 2022-11-23. /pmc/articles/PMC10113313/ /pubmed/36418763 http://dx.doi.org/10.1007/s11548-022-02790-1 Text en © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title A virtual surgical prototype system based on gesture recognition for virtual surgical training in maxillofacial surgery
topic Original Article