
Mandible segmentation from CT data for virtual surgical planning using an augmented two-stepped convolutional neural network


Bibliographic Details
Main Authors: Pankert, Tobias, Lee, Hyun, Peters, Florian, Hölzle, Frank, Modabber, Ali, Raith, Stefan
Format: Online Article Text
Language: English
Published: Springer International Publishing 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10363055/
https://www.ncbi.nlm.nih.gov/pubmed/36637748
http://dx.doi.org/10.1007/s11548-022-02830-w
author Pankert, Tobias
Lee, Hyun
Peters, Florian
Hölzle, Frank
Modabber, Ali
Raith, Stefan
author_sort Pankert, Tobias
collection PubMed
description PURPOSE: For computer-aided planning of facial bony surgery, the creation of high-resolution 3D models of the bones by segmenting volume imaging data is a labor-intensive step, especially as metal dental inlays or implants cause severe artifacts that reduce the quality of the computed tomography imaging data. This study provides a method to segment accurate, artifact-free 3D surface models of mandibles from CT data using convolutional neural networks. METHODS: The presented approach cascades two independently trained 3D U-Nets to perform accurate segmentations of the mandible bone from full-resolution CT images. The networks are trained in different settings using three different loss functions and a data augmentation pipeline. Training and evaluation datasets consist of manually segmented CT images from 307 dentate and edentulous individuals, partly with heavy imaging artifacts. The accuracy of the models is measured using overlap-based, surface-based, and anatomical-curvature-based metrics. RESULTS: Our approach produces high-resolution segmentations of the mandibles, coping with severe imaging artifacts in the CT imaging data. The use of the two-stepped approach yields highly significant improvements to the prediction accuracies. The best models achieve a Dice coefficient of 94.824% and an average surface distance of 0.31 mm on our test dataset. CONCLUSION: The use of two cascaded U-Nets allows high-resolution predictions for small regions of interest in the imaging data. The proposed method is fast and allows user-independent image segmentation, producing objective and repeatable results that can be used in automated surgical planning procedures.
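The abstract reports segmentation accuracy as a Dice coefficient, an overlap-based metric defined as 2|A∩B| / (|A| + |B|) for two binary masks. The following is a minimal illustrative NumPy sketch of that metric, not the authors' evaluation code; the function name and toy volumes are assumptions for demonstration only:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient 2|A∩B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / total

# Toy 3D example: two partially overlapping cubes in a small volume.
a = np.zeros((10, 10, 10), dtype=bool)
b = np.zeros((10, 10, 10), dtype=bool)
a[2:8, 2:8, 2:8] = True   # 6x6x6 cube (216 voxels)
b[3:9, 3:9, 3:9] = True   # shifted 6x6x6 cube; overlap is 5x5x5 (125 voxels)
print(round(dice_coefficient(a, b), 4))  # 2*125 / (216 + 216) ≈ 0.5787
```

A Dice value of 1.0 means perfect overlap; the 94.824% reported in the abstract corresponds to a near-complete voxel-wise agreement between predicted and manually segmented mandibles.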
format Online
Article
Text
id pubmed-10363055
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-10363055 2023-07-24 Mandible segmentation from CT data for virtual surgical planning using an augmented two-stepped convolutional neural network. Int J Comput Assist Radiol Surg (Original Article).
Springer International Publishing 2023-01-13 2023 /pmc/articles/PMC10363055/ /pubmed/36637748 http://dx.doi.org/10.1007/s11548-022-02830-w Text en © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated.
title Mandible segmentation from CT data for virtual surgical planning using an augmented two-stepped convolutional neural network
topic Original Article