
Mask then classify: multi-instance segmentation for surgical instruments


Bibliographic Details
Main Authors: Kurmann, Thomas, Márquez-Neila, Pablo, Allan, Max, Wolf, Sebastian, Sznitman, Raphael
Format: Online Article Text
Language: English
Published: Springer International Publishing 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8260538/
https://www.ncbi.nlm.nih.gov/pubmed/34143374
http://dx.doi.org/10.1007/s11548-021-02404-2
_version_ 1783718828140658688
author Kurmann, Thomas
Márquez-Neila, Pablo
Allan, Max
Wolf, Sebastian
Sznitman, Raphael
author_facet Kurmann, Thomas
Márquez-Neila, Pablo
Allan, Max
Wolf, Sebastian
Sznitman, Raphael
author_sort Kurmann, Thomas
collection PubMed
description PURPOSE: The detection and segmentation of surgical instruments have been a vital step for many applications in minimally invasive surgical robotics. Previously, the problem was tackled from a semantic segmentation perspective, yet these methods fail to provide good segmentation maps of instrument types and do not contain any information on the instance affiliation of each pixel. We propose to overcome this limitation with a novel instance segmentation method which first masks instruments and then classifies them into their respective types. METHODS: We introduce a novel method for instance segmentation in which a pixel-wise mask of each instance is found prior to classification. An encoder–decoder network is used to extract instrument instances, which are then separately classified using the features of the previous stages. Furthermore, we present a method to incorporate instrument priors from surgical robots. RESULTS: Experiments are performed on the robotic instrument segmentation dataset of the 2017 Endoscopic Vision Challenge. We perform a fourfold cross-validation and show an improvement of over 18% over the previous state of the art. Furthermore, we perform an ablation study which highlights the importance of certain design choices, and we observe an increase of 10% over semantic segmentation methods. CONCLUSIONS: We have presented a novel instance segmentation method for surgical instruments which outperforms previous semantic segmentation-based methods. Our method further provides a more informative output of instance-level information while retaining a precise segmentation mask. Finally, we have shown that robotic instrument priors can be used to further increase performance. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11548-021-02404-2.
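The abstract describes a two-stage "mask then classify" pipeline: an encoder–decoder first predicts a pixel-wise mask per instrument instance, and each masked instance is then classified separately. The following is a minimal illustrative sketch of that control flow in NumPy, not the authors' network; `mask_stage`, `classify_stage`, and the counts `N_INSTANCES` and `N_CLASSES` are hypothetical stand-ins (the EndoVis 2017 challenge distinguishes seven instrument types).

```python
import numpy as np

N_INSTANCES = 3  # assumed maximum number of instrument instances
N_CLASSES = 7    # instrument types in the EndoVis 2017 dataset

def mask_stage(image):
    """Stand-in for the encoder-decoder: returns one probability
    map per instance, shape (N_INSTANCES, H, W)."""
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)
    return rng.random((N_INSTANCES, h, w))

def classify_stage(image, masks):
    """Stand-in classifier: pools image features under each mask,
    then maps the pooled vector to class logits via a fixed random
    projection (a real system would reuse the encoder features)."""
    rng = np.random.default_rng(1)
    proj = rng.random((image.shape[-1], N_CLASSES))
    logits = []
    for m in masks:
        weight = m[..., None]                      # (H, W, 1)
        pooled = (image * weight).sum(axis=(0, 1)) / (weight.sum() + 1e-8)
        logits.append(pooled @ proj)               # (N_CLASSES,)
    return np.stack(logits)                        # (N_INSTANCES, N_CLASSES)

image = np.zeros((64, 64, 3))                      # dummy RGB frame
masks = mask_stage(image)                          # stage 1: instance masks
labels = classify_stage(image, masks).argmax(axis=1)  # stage 2: one type per mask
```

The key design point the sketch mirrors is that classification happens per extracted instance rather than per pixel, so each pixel inherits both a type label and an instance identity.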
format Online
Article
Text
id pubmed-8260538
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-82605382021-07-20 Mask then classify: multi-instance segmentation for surgical instruments Kurmann, Thomas Márquez-Neila, Pablo Allan, Max Wolf, Sebastian Sznitman, Raphael Int J Comput Assist Radiol Surg Original Article PURPOSE: The detection and segmentation of surgical instruments have been a vital step for many applications in minimally invasive surgical robotics. Previously, the problem was tackled from a semantic segmentation perspective, yet these methods fail to provide good segmentation maps of instrument types and do not contain any information on the instance affiliation of each pixel. We propose to overcome this limitation with a novel instance segmentation method which first masks instruments and then classifies them into their respective types. METHODS: We introduce a novel method for instance segmentation in which a pixel-wise mask of each instance is found prior to classification. An encoder–decoder network is used to extract instrument instances, which are then separately classified using the features of the previous stages. Furthermore, we present a method to incorporate instrument priors from surgical robots. RESULTS: Experiments are performed on the robotic instrument segmentation dataset of the 2017 Endoscopic Vision Challenge. We perform a fourfold cross-validation and show an improvement of over 18% over the previous state of the art. Furthermore, we perform an ablation study which highlights the importance of certain design choices, and we observe an increase of 10% over semantic segmentation methods. CONCLUSIONS: We have presented a novel instance segmentation method for surgical instruments which outperforms previous semantic segmentation-based methods. Our method further provides a more informative output of instance-level information while retaining a precise segmentation mask. Finally, we have shown that robotic instrument priors can be used to further increase performance.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11548-021-02404-2. Springer International Publishing 2021-06-18 2021 /pmc/articles/PMC8260538/ /pubmed/34143374 http://dx.doi.org/10.1007/s11548-021-02404-2 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle Original Article
Kurmann, Thomas
Márquez-Neila, Pablo
Allan, Max
Wolf, Sebastian
Sznitman, Raphael
Mask then classify: multi-instance segmentation for surgical instruments
title Mask then classify: multi-instance segmentation for surgical instruments
title_full Mask then classify: multi-instance segmentation for surgical instruments
title_fullStr Mask then classify: multi-instance segmentation for surgical instruments
title_full_unstemmed Mask then classify: multi-instance segmentation for surgical instruments
title_short Mask then classify: multi-instance segmentation for surgical instruments
title_sort mask then classify: multi-instance segmentation for surgical instruments
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8260538/
https://www.ncbi.nlm.nih.gov/pubmed/34143374
http://dx.doi.org/10.1007/s11548-021-02404-2
work_keys_str_mv AT kurmannthomas maskthenclassifymultiinstancesegmentationforsurgicalinstruments
AT marquezneilapablo maskthenclassifymultiinstancesegmentationforsurgicalinstruments
AT allanmax maskthenclassifymultiinstancesegmentationforsurgicalinstruments
AT wolfsebastian maskthenclassifymultiinstancesegmentationforsurgicalinstruments
AT sznitmanraphael maskthenclassifymultiinstancesegmentationforsurgicalinstruments