Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography

Purpose: Classic encoder–decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance, the condyles and coronoids of the mandible, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that can accurately segment these detailed anatomical structures. Methods: Unlike classic EDCNNs, which need to slice or crop the whole CT scan into 2D slices or 3D patches during segmentation, the proposed approach performs mandible segmentation on complete 3D CT scans. The proposed method, RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph, in which recurrent connections between adjacent nodes retain their connectivity. Each node then functions as a classic EDCNN that segments a single slice of the CT scan. The proposed approach can perform 3D mandible segmentation on sequential data of varying lengths and does not require a large computational cost. RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the public PDDCA dataset. The final accuracy was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. Results: RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performance compared to state-of-the-art approaches on the PDDCA dataset. RCNNSeg generated the most accurate segmentations, with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on the 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. Conclusions: The proposed RCNNSeg method generated more accurate automated segmentations than the other classic EDCNN segmentation techniques in terms of both quantitative and qualitative evaluation. RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information.


Bibliographic Details
Main Authors: Qiu, Bingjiang, Guo, Jiapan, Kraeima, Joep, Glas, Haye Hendrik, Zhang, Weichuan, Borra, Ronald J. H., Witjes, Max Johannes Hendrikus, van Ooijen, Peter M. A.
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8229770/
https://www.ncbi.nlm.nih.gov/pubmed/34072714
http://dx.doi.org/10.3390/jpm11060492
_version_ 1783713056223657984
author Qiu, Bingjiang
Guo, Jiapan
Kraeima, Joep
Glas, Haye Hendrik
Zhang, Weichuan
Borra, Ronald J. H.
Witjes, Max Johannes Hendrikus
van Ooijen, Peter M. A.
author_facet Qiu, Bingjiang
Guo, Jiapan
Kraeima, Joep
Glas, Haye Hendrik
Zhang, Weichuan
Borra, Ronald J. H.
Witjes, Max Johannes Hendrikus
van Ooijen, Peter M. A.
author_sort Qiu, Bingjiang
collection PubMed
description Purpose: Classic encoder–decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance, the condyles and coronoids of the mandible, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that can accurately segment these detailed anatomical structures. Methods: Unlike classic EDCNNs, which need to slice or crop the whole CT scan into 2D slices or 3D patches during segmentation, the proposed approach performs mandible segmentation on complete 3D CT scans. The proposed method, RCNNSeg, adopts the structure of recurrent neural networks to form a directed acyclic graph, in which recurrent connections between adjacent nodes retain their connectivity. Each node then functions as a classic EDCNN that segments a single slice of the CT scan. The proposed approach can perform 3D mandible segmentation on sequential data of varying lengths and does not require a large computational cost. RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the public PDDCA dataset. The final accuracy was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. Results: RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performance compared to state-of-the-art approaches on the PDDCA dataset. RCNNSeg generated the most accurate segmentations, with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on the 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. Conclusions: The proposed RCNNSeg method generated more accurate automated segmentations than the other classic EDCNN segmentation techniques in terms of both quantitative and qualitative evaluation. RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information.
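To make the recurrent, slice-by-slice idea in the description above concrete, the following is a minimal sketch in PyTorch. It is not the authors' RCNNSeg code: the tiny encoder–decoder, the hidden-state update, the channel sizes, and the single forward sweep along the slice axis are illustrative assumptions, and the paper's directed-acyclic-graph connections may be structured differently.

```python
# Minimal sketch of recurrent, slice-wise 3D segmentation (NOT the authors'
# RCNNSeg implementation). Each CT slice is segmented by a small 2D
# encoder-decoder node, and a recurrent hidden state carries information
# between adjacent slices, as described in the abstract.
import torch
import torch.nn as nn


class SliceEncoderDecoder(nn.Module):
    """A deliberately small 2D encoder-decoder acting on one slice plus the hidden state."""

    def __init__(self, in_ch: int, hid_ch: int):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch + hid_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(16, 1, 1)          # per-pixel mandible logit
        self.state_head = nn.Conv2d(16, hid_ch, 1)   # next recurrent state

    def forward(self, slice_and_state):
        feat = self.decoder(self.encoder(slice_and_state))
        return self.seg_head(feat), torch.tanh(self.state_head(feat))


class RecurrentSliceSegmenter(nn.Module):
    """Runs one encoder-decoder node per slice, passing a hidden state along the z-axis."""

    def __init__(self, hid_ch: int = 8):
        super().__init__()
        self.hid_ch = hid_ch
        self.node = SliceEncoderDecoder(in_ch=1, hid_ch=hid_ch)

    def forward(self, volume: torch.Tensor) -> torch.Tensor:
        # volume: (batch, depth, height, width); depth may vary between scans.
        b, d, h, w = volume.shape
        state = volume.new_zeros(b, self.hid_ch, h, w)
        logits = []
        for z in range(d):                      # sweep through adjacent slices
            x = torch.cat([volume[:, z:z + 1], state], dim=1)
            logit, state = self.node(x)
            logits.append(logit)
        return torch.cat(logits, dim=1)         # (batch, depth, height, width)


if __name__ == "__main__":
    model = RecurrentSliceSegmenter()
    fake_scan = torch.randn(1, 12, 64, 64)      # tiny fake CT volume (even H, W assumed)
    print(model(fake_scan).shape)               # torch.Size([1, 12, 64, 64])
```

Because the loop reuses one small 2D node per slice, memory scales with slice size rather than with the full 3D volume, which is consistent with the abstract's claim that whole scans of varying lengths can be processed without a large computational cost.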
format Online
Article
Text
id pubmed-8229770
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8229770 2021-06-26 Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography Qiu, Bingjiang Guo, Jiapan Kraeima, Joep Glas, Haye Hendrik Zhang, Weichuan Borra, Ronald J. H. Witjes, Max Johannes Hendrikus van Ooijen, Peter M. A. J Pers Med Article Purpose: Classic encoder–decoder-based convolutional neural network (EDCNN) approaches cannot accurately segment detailed anatomical structures of the mandible in computed tomography (CT), for instance, condyles and coronoids of the mandible, which are often affected by noise and metal artifacts. The main reason is that EDCNN approaches ignore the anatomical connectivity of the organs. In this paper, we propose a novel CNN-based 3D mandible segmentation approach that has the ability to accurately segment detailed anatomical structures. Methods: Different from the classic EDCNNs that need to slice or crop the whole CT scan into 2D slices or 3D patches during the segmentation process, our proposed approach can perform mandible segmentation on complete 3D CT scans. The proposed method, namely, RCNNSeg, adopts the structure of the recurrent neural networks to form a directed acyclic graph in order to enable recurrent connections between adjacent nodes to retain their connectivity. Each node then functions as a classic EDCNN to segment a single slice in the CT scan. Our proposed approach can perform 3D mandible segmentation on sequential data of any varied lengths and does not require a large computation cost. The proposed RCNNSeg was evaluated on 109 head and neck CT scans from a local dataset and 40 scans from the PDDCA public dataset. The final accuracy of the proposed RCNNSeg was evaluated by calculating the Dice similarity coefficient (DSC), average symmetric surface distance (ASD), and 95% Hausdorff distance (95HD) between the reference standard and the automated segmentation. Results: The proposed RCNNSeg outperforms the EDCNN-based approaches on both datasets and yields superior quantitative and qualitative performances when compared to the state-of-the-art approaches on the PDDCA dataset. The proposed RCNNSeg generated the most accurate segmentations with an average DSC of 97.48%, ASD of 0.2170 mm, and 95HD of 2.6562 mm on 109 CT scans, and an average DSC of 95.10%, ASD of 0.1367 mm, and 95HD of 1.3560 mm on the PDDCA dataset. Conclusions: The proposed RCNNSeg method generated more accurate automated segmentations than those of the other classic EDCNN segmentation techniques in terms of quantitative and qualitative evaluation. The proposed RCNNSeg has potential for automatic mandible segmentation by learning spatially structured information. MDPI 2021-05-31 /pmc/articles/PMC8229770/ /pubmed/34072714 http://dx.doi.org/10.3390/jpm11060492 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
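The record above names three evaluation metrics: DSC, ASD, and 95HD. The sketch below shows one common way to compute them from binary 3D masks with NumPy and SciPy; it is an illustration under stated assumptions (surface extraction by one-voxel erosion, directional 95th percentiles for 95HD, voxel spacing supplied in mm), not the evaluation code used in the paper.

```python
# Hedged sketch of the three metrics named in the abstract (DSC, ASD, 95HD),
# computed for two binary 3D masks. This follows common conventions and is
# not the paper's own evaluation code; voxel `spacing` in mm is assumed known.
import numpy as np
from scipy import ndimage


def surface_voxels(mask: np.ndarray) -> np.ndarray:
    """Boundary voxels: the mask minus its one-voxel erosion."""
    return mask & ~ndimage.binary_erosion(mask)


def surface_distances(a: np.ndarray, b: np.ndarray, spacing) -> np.ndarray:
    """Distances (mm) from each surface voxel of `a` to the nearest surface voxel of `b`."""
    dist_to_b = ndimage.distance_transform_edt(~surface_voxels(b), sampling=spacing)
    return dist_to_b[surface_voxels(a)]


def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())


def asd_and_95hd(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)):
    """Average symmetric surface distance and 95% Hausdorff distance (one common convention)."""
    d_ab = surface_distances(a, b, spacing)
    d_ba = surface_distances(b, a, spacing)
    asd = (d_ab.sum() + d_ba.sum()) / (len(d_ab) + len(d_ba))
    hd95 = max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))
    return asd, hd95


if __name__ == "__main__":
    ref = np.zeros((32, 64, 64), dtype=bool)
    ref[10:20, 20:40, 20:40] = True            # toy "reference" mandible block
    pred = np.roll(ref, shift=1, axis=2)       # toy prediction, shifted one voxel
    print("DSC:", dice(pred, ref))
    print("ASD, 95HD:", asd_and_95hd(pred, ref, spacing=(2.0, 1.0, 1.0)))
```

Exact values of ASD and 95HD depend on implementation choices (surface definition, percentile convention, voxel spacing), so published numbers such as those in this record are only comparable when the same convention is used.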
spellingShingle Article
Qiu, Bingjiang
Guo, Jiapan
Kraeima, Joep
Glas, Haye Hendrik
Zhang, Weichuan
Borra, Ronald J. H.
Witjes, Max Johannes Hendrikus
van Ooijen, Peter M. A.
Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography
title Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography
title_full Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography
title_fullStr Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography
title_full_unstemmed Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography
title_short Recurrent Convolutional Neural Networks for 3D Mandible Segmentation in Computed Tomography
title_sort recurrent convolutional neural networks for 3d mandible segmentation in computed tomography
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8229770/
https://www.ncbi.nlm.nih.gov/pubmed/34072714
http://dx.doi.org/10.3390/jpm11060492
work_keys_str_mv AT qiubingjiang recurrentconvolutionalneuralnetworksfor3dmandiblesegmentationincomputedtomography
AT guojiapan recurrentconvolutionalneuralnetworksfor3dmandiblesegmentationincomputedtomography
AT kraeimajoep recurrentconvolutionalneuralnetworksfor3dmandiblesegmentationincomputedtomography
AT glashayehendrik recurrentconvolutionalneuralnetworksfor3dmandiblesegmentationincomputedtomography
AT zhangweichuan recurrentconvolutionalneuralnetworksfor3dmandiblesegmentationincomputedtomography
AT borraronaldjh recurrentconvolutionalneuralnetworksfor3dmandiblesegmentationincomputedtomography
AT witjesmaxjohanneshendrikus recurrentconvolutionalneuralnetworksfor3dmandiblesegmentationincomputedtomography
AT vanooijenpeterma recurrentconvolutionalneuralnetworksfor3dmandiblesegmentationincomputedtomography