Automated recognition of objects and types of forceps in surgical images using deep learning
Analysis of operative data with convolutional neural networks (CNNs) is expected to improve the knowledge and professional skills of surgeons. Identification of objects in videos recorded during surgery can be used for surgical skill assessment and surgical navigation. The objectives of this study were to recognize objects and types of forceps in surgical videos acquired during colorectal surgeries and evaluate detection accuracy. Images (n = 1818) were extracted from 11 surgical videos for model training, and another 500 images were extracted from 6 additional videos for validation. The following 5 types of forceps were selected for annotation: ultrasonic scalpel, grasping, clip, angled (Maryland and right-angled), and spatula. IBM Visual Insights software was used, which incorporates the most popular open-source deep-learning CNN frameworks. In total, 1039/1062 (97.8%) forceps were correctly identified among 500 test images. Calculated recall and precision values were as follows: grasping forceps, 98.1% and 98.0%; ultrasonic scalpel, 99.4% and 93.9%; clip forceps, 96.2% and 92.7%; angled forceps, 94.9% and 100%; and spatula forceps, 98.1% and 94.5%, respectively. Forceps recognition can be achieved with high accuracy using deep-learning models, providing the opportunity to evaluate how forceps are used in various operations.
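The per-class figures in the abstract are the standard detection metrics precision = TP / (TP + FP) and recall = TP / (TP + FN), computed from true-positive, false-positive, and false-negative detection counts. The minimal Python sketch below illustrates that computation; the per-class counts are hypothetical placeholders for illustration only, not data from the study.

```python
# Illustrative sketch of the per-class precision/recall computation
# reported in the abstract. The detection counts below are hypothetical
# placeholders, NOT the study's actual data.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical (tp, fp, fn) counts per forceps class.
counts = {
    "grasping": (490, 10, 9),
    "ultrasonic scalpel": (170, 11, 1),
    "clip": (101, 8, 4),
}

for name, (tp, fp, fn) in counts.items():
    p, r = precision_recall(tp, fp, fn)
    print(f"{name}: precision = {p:.1%}, recall = {r:.1%}")
```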
Main Authors: | Bamba, Yoshiko; Ogawa, Shimpei; Itabashi, Michio; Kameoka, Shingo; Okamoto, Takahiro; Yamamoto, Masakazu |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2021-11-19 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8604928/ https://www.ncbi.nlm.nih.gov/pubmed/34799625 http://dx.doi.org/10.1038/s41598-021-01911-1 |
Journal: | Sci Rep |
Record ID: | pubmed-8604928 (collection: PubMed; institution: National Center for Biotechnology Information; record format: MEDLINE/PubMed) |
Rights: | © The Author(s) 2021. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/) |