
Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation


Bibliographic Details
Main Authors: Dong, Xianling, Xu, Shiqi, Liu, Yanli, Wang, Aihui, Saripan, M. Iqbal, Li, Li, Zhang, Xiaolei, Lu, Lijun
Format: Online Article Text
Language: English
Published: BioMed Central 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7395980/
https://www.ncbi.nlm.nih.gov/pubmed/32738913
http://dx.doi.org/10.1186/s40644-020-00331-0
_version_ 1783565493413609472
author Dong, Xianling
Xu, Shiqi
Liu, Yanli
Wang, Aihui
Saripan, M. Iqbal
Li, Li
Zhang, Xiaolei
Lu, Lijun
author_facet Dong, Xianling
Xu, Shiqi
Liu, Yanli
Wang, Aihui
Saripan, M. Iqbal
Li, Li
Zhang, Xiaolei
Lu, Lijun
author_sort Dong, Xianling
collection PubMed
description BACKGROUND: Convolutional neural networks (CNNs) have been extensively applied to two-dimensional (2D) medical image segmentation, yielding excellent performance. However, their application to three-dimensional (3D) nodule segmentation remains a challenge. METHODS: In this study, we propose a multi-view secondary input residual (MV-SIR) convolutional neural network model for 3D lung nodule segmentation using the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset of chest computed tomography (CT) images. Lung nodule cubes are prepared from the sample CT images. Further, from the axial, coronal, and sagittal perspectives, multi-view patches are generated with randomly selected voxels in the lung nodule cubes as centers. Our model consists of six submodels, which enable learning of 3D lung nodules sliced into three views of features; each submodel extracts voxel heterogeneity and shape heterogeneity features. We convert the segmentation of 3D lung nodules into voxel classification by inputting the multi-view patches into the model and determining whether the voxel points belong to the nodule. The structure of the secondary input residual submodel comprises a residual block followed by a secondary input module. We integrate the six submodels to classify whether voxel points belong to nodules, and then reconstruct the segmentation image. RESULTS: The results of tests conducted using our model and comparison with other existing CNN models indicate that the MV-SIR model achieves excellent results in the 3D segmentation of pulmonary nodules, with a Dice coefficient of 0.926 and an average surface distance of 0.072. CONCLUSION: Our MV-SIR model can accurately perform 3D segmentation of lung nodules with the same segmentation accuracy as the U-net model.
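The patch-generation step described in the abstract (axial, coronal, and sagittal patches centered on randomly selected voxels of a lung nodule cube) might be sketched as follows. This is a minimal illustration, not the authors' implementation: the patch size (32 × 32) and zero-padding at the cube border are assumptions.

```python
import numpy as np

def multi_view_patches(cube, center, size=32):
    """Extract axial, coronal, and sagittal patches centered on a voxel.

    `cube` is a 3D array (z, y, x); `center` is the voxel index.
    Patch size and padding strategy are illustrative assumptions.
    """
    half = size // 2
    # Zero-pad so patches near the cube border stay full-sized.
    padded = np.pad(cube, half, mode="constant")
    z, y, x = (c + half for c in center)  # shift center into padded coords
    axial    = padded[z, y - half:y + half, x - half:x + half]  # fix z
    coronal  = padded[z - half:z + half, y, x - half:x + half]  # fix y
    sagittal = padded[z - half:z + half, y - half:y + half, x]  # fix x
    return axial, coronal, sagittal

# Example: one 64^3 nodule cube, patches around a randomly chosen voxel.
cube = np.zeros((64, 64, 64), dtype=np.float32)
rng = np.random.default_rng(0)
center = tuple(int(c) for c in rng.integers(0, 64, size=3))
views = multi_view_patches(cube, center)
print([v.shape for v in views])  # [(32, 32), (32, 32), (32, 32)]
```

Each of the three 2D patches would then be fed to the corresponding submodels, turning 3D segmentation into per-voxel classification as the abstract describes.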
format Online
Article
Text
id pubmed-7395980
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-73959802020-08-06 Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation Dong, Xianling Xu, Shiqi Liu, Yanli Wang, Aihui Saripan, M. Iqbal Li, Li Zhang, Xiaolei Lu, Lijun Cancer Imaging Research Article BACKGROUND: Convolutional neural networks (CNNs) have been extensively applied to two-dimensional (2D) medical image segmentation, yielding excellent performance. However, their application to three-dimensional (3D) nodule segmentation remains a challenge. METHODS: In this study, we propose a multi-view secondary input residual (MV-SIR) convolutional neural network model for 3D lung nodule segmentation using the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset of chest computed tomography (CT) images. Lung nodule cubes are prepared from the sample CT images. Further, from the axial, coronal, and sagittal perspectives, multi-view patches are generated with randomly selected voxels in the lung nodule cubes as centers. Our model consists of six submodels, which enable learning of 3D lung nodules sliced into three views of features; each submodel extracts voxel heterogeneity and shape heterogeneity features. We convert the segmentation of 3D lung nodules into voxel classification by inputting the multi-view patches into the model and determine whether the voxel points belong to the nodule. The structure of the secondary input residual submodel comprises a residual block followed by a secondary input module. We integrate the six submodels to classify whether voxel points belong to nodules, and then reconstruct the segmentation image. RESULTS: The results of tests conducted using our model and comparison with other existing CNN models indicate that the MV-SIR model achieves excellent results in the 3D segmentation of pulmonary nodules, with a Dice coefficient of 0.926 and an average surface distance of 0.072. 
CONCLUSION: Our MV-SIR model can accurately perform 3D segmentation of lung nodules with the same segmentation accuracy as the U-net model. BioMed Central 2020-08-01 /pmc/articles/PMC7395980/ /pubmed/32738913 http://dx.doi.org/10.1186/s40644-020-00331-0 Text en © The Author(s) 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
spellingShingle Research Article
Dong, Xianling
Xu, Shiqi
Liu, Yanli
Wang, Aihui
Saripan, M. Iqbal
Li, Li
Zhang, Xiaolei
Lu, Lijun
Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation
title Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation
title_full Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation
title_fullStr Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation
title_full_unstemmed Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation
title_short Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation
title_sort multi-view secondary input collaborative deep learning for lung nodule 3d segmentation
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7395980/
https://www.ncbi.nlm.nih.gov/pubmed/32738913
http://dx.doi.org/10.1186/s40644-020-00331-0
work_keys_str_mv AT dongxianling multiviewsecondaryinputcollaborativedeeplearningforlungnodule3dsegmentation
AT xushiqi multiviewsecondaryinputcollaborativedeeplearningforlungnodule3dsegmentation
AT liuyanli multiviewsecondaryinputcollaborativedeeplearningforlungnodule3dsegmentation
AT wangaihui multiviewsecondaryinputcollaborativedeeplearningforlungnodule3dsegmentation
AT saripanmiqbal multiviewsecondaryinputcollaborativedeeplearningforlungnodule3dsegmentation
AT lili multiviewsecondaryinputcollaborativedeeplearningforlungnodule3dsegmentation
AT zhangxiaolei multiviewsecondaryinputcollaborativedeeplearningforlungnodule3dsegmentation
AT lulijun multiviewsecondaryinputcollaborativedeeplearningforlungnodule3dsegmentation