An Automated Skill Assessment Framework Based on Visual Motion Signals and a Deep Neural Network in Robot-Assisted Minimally Invasive Surgery
Surgical skill assessment quantifies the quality of a surgical operation from the motion state of the surgical instrument tip (SIT) and is considered an effective means of improving the accuracy of surgical operation. Traditional methods have shown promising results in skill assessment. However, their success depends on sensors mounted at the SIT, which is impractical for minimally invasive surgical robots with very small end effectors. To address the assessment of operation quality in robot-assisted minimally invasive surgery (RAMIS), this paper proposes a new automated framework for assessing surgical skills based on visual motion tracking and deep learning. The method innovatively combines vision and kinematics: a kernel correlation filter (KCF) tracker extracts the key motion signals of the SIT, and a residual neural network (ResNet) classifies them, realizing automated skill assessment in RAMIS. To verify its effectiveness and accuracy, the proposed method is applied to JIGSAWS, a public minimally invasive surgical robot dataset. The results show that the method can effectively and accurately assess robot-assisted surgical skill in near real time. With a processing time of only 3 to 5 s, it achieves average accuracies of 92.04% and 84.80% in distinguishing two and three skill levels, respectively. This study makes an important contribution to the safe and high-quality development of RAMIS.
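The pipeline the abstract describes — tracking the SIT in the endoscopic video and deriving kinematic signals from the track — can be illustrated with a short sketch. The code below is not the authors' implementation: the video filename, the hand-initialized bounding box, and the use of OpenCV's stock KCF tracker (shipped with the opencv-contrib-python package) are all assumptions made for illustration.

```python
# Minimal sketch: extract SIT motion signals with a KCF tracker.
# Assumptions: a local video file and a manually chosen initial bounding box.
import cv2
import numpy as np

def track_tip(video_path, init_bbox):
    """Return an (N, 2) array of tip-center pixel positions, one row per frame."""
    cap = cv2.VideoCapture(video_path)
    ok, frame = cap.read()
    if not ok:
        raise IOError(f"cannot read {video_path}")
    tracker = cv2.TrackerKCF_create()   # stock KCF; requires opencv-contrib-python
    tracker.init(frame, init_bbox)      # init_bbox = (x, y, w, h) around the tip
    centers = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        ok, bbox = tracker.update(frame)
        if ok:
            x, y, w, h = bbox
            centers.append((x + w / 2.0, y + h / 2.0))
    cap.release()
    return np.asarray(centers)

# Position, velocity, and acceleration of the tip center: the kind of
# kinematic signal a downstream skill classifier consumes.
pos = track_tip("suturing_capture.avi", (300, 200, 40, 40))  # hypothetical inputs
vel = np.gradient(pos, axis=0)
acc = np.gradient(vel, axis=0)
signals = np.concatenate([pos, vel, acc], axis=1)  # shape (N, 6)
```

Stacking position with its first and second differences gives a compact six-channel time series per trial, which is one plausible way to turn a visual track into the "key motion signals" the abstract mentions.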
| Main authors: | Pan, Mingzhang; Wang, Shuo; Li, Jingao; Li, Jing; Yang, Xiuze; Liang, Ke |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2023 |
| Subjects: | Article |
| Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10181496/ https://www.ncbi.nlm.nih.gov/pubmed/37177699 http://dx.doi.org/10.3390/s23094496 |
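The second stage the abstract describes is classifying the extracted motion signals with a ResNet into two or three skill levels. The paper's exact architecture and input encoding are not given in this record, so the sketch below is a minimal 1D residual classifier in PyTorch under assumed settings: six input channels (position, velocity, and acceleration along two image axes, as in the tracking sketch above) and trials resampled to a fixed length.

```python
# Minimal 1D residual classifier sketch in PyTorch. Depth, widths, and the
# (batch, 6, T) input encoding are assumptions, not the paper's settings.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
            nn.ReLU(inplace=True),
            nn.Conv1d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # identity shortcut, as in ResNet

class SkillResNet(nn.Module):
    """Classify a (batch, 6, T) motion-signal tensor into skill levels."""
    def __init__(self, in_channels=6, num_classes=3):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(64),
            nn.ReLU(inplace=True),
        )
        self.blocks = nn.Sequential(ResBlock1d(64), ResBlock1d(64))
        self.head = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.blocks(self.stem(x))
        x = x.mean(dim=-1)            # global average pooling over time
        return self.head(x)

# Usage: num_classes=2 for the two-level setting, 3 for the three-level one.
model = SkillResNet(num_classes=3)
logits = model(torch.randn(8, 6, 1024))  # 8 trials, 6 signals, 1024 timesteps
```

Here num_classes=2 would correspond to the two-level setting (92.04% average accuracy reported in the abstract) and num_classes=3 to the three-level setting (84.80%).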
| Field | Value |
|---|---|
| author | Pan, Mingzhang; Wang, Shuo; Li, Jingao; Li, Jing; Yang, Xiuze; Liang, Ke |
| collection | PubMed |
| format | Online Article Text |
| id | pubmed-10181496 |
| institution | National Center for Biotechnology Information |
| language | English |
| publishDate | 2023 |
| publisher | MDPI |
| record_format | MEDLINE/PubMed |
| spelling | Sensors (Basel), Article; MDPI, published online 2023-05-05. © 2023 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |