
Hybrid U‐Net‐based deep learning model for volume segmentation of lung nodules in CT images

Bibliographic Details
Main Authors: Wang, Yifan; Zhou, Chuan; Chan, Heang‐Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Kazerooni, Ella A.
Format: Online, Article, Text
Language: English
Published: John Wiley and Sons Inc., 2022
Subjects: QUANTITATIVE IMAGING AND IMAGE PROCESSING
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10087884/
https://www.ncbi.nlm.nih.gov/pubmed/35717560
http://dx.doi.org/10.1002/mp.15810
author Wang, Yifan
Zhou, Chuan
Chan, Heang‐Ping
Hadjiiski, Lubomir M.
Chughtai, Aamer
Kazerooni, Ella A.
collection PubMed
description OBJECTIVE: Accurate segmentation of lung nodules in computed tomography (CT) images is a critical component of a computer‐assisted lung cancer detection/diagnosis system. However, lung nodule segmentation is a challenging task due to the heterogeneity of nodules. The aim of this study was to develop a hybrid deep learning (H‐DL) model for the segmentation of lung nodules with a wide variety of sizes, shapes, margins, and opacities.
MATERIALS AND METHODS: A dataset of 847 cases collected from the Lung Image Database Consortium (LIDC) image collection, containing lung nodules with diameters greater than 7 mm and less than 45 mm manually annotated by at least two radiologists, was randomly split into 683 training/validation cases and 164 independent test cases. The 50% consensus consolidation of the radiologists' annotations was used as the reference standard for each nodule. We designed a new H‐DL model that combines two deep convolutional neural networks (DCNNs) with different structures as encoders to increase the learning capability for the segmentation of complex lung nodules. Leveraging the basic symmetric U‐shaped architecture of U‐Net, we redesigned two new U‐shaped deep learning (U‐DL) models that were expanded to six levels of convolutional layers. One U‐DL model used a shallow DCNN structure containing 16 convolutional layers adapted from VGG‐19 as the encoder, and the other used a deep DCNN structure containing 200 layers adapted from DenseNet‐201 as the encoder; the same decoder, with only one convolutional layer at each level, was used in both U‐DL models. We refer to them as the shallow and deep U‐DL models. Finally, an ensemble layer combined the two U‐DL models into the H‐DL model. We compared the effectiveness of the H‐DL, shallow U‐DL, and deep U‐DL models by deploying each of them separately on the test set. The accuracy of volume segmentation for each nodule was evaluated by the 3D Dice coefficient and Jaccard index (JI) relative to the reference standard. For comparison, we calculated the median and minimum of the 3D Dice and JI over the individual radiologists who segmented each nodule, referred to as M‐Dice, min‐Dice, M‐JI, and min‐JI.
RESULTS: For the 164 test cases with 327 nodules, our H‐DL model achieved an average 3D Dice coefficient of 0.750 ± 0.135 and an average JI of 0.617 ± 0.159. The radiologists' average M‐Dice was 0.778 ± 0.102 and average M‐JI was 0.651 ± 0.127; both were significantly higher than those achieved by the H‐DL model (p < 0.05). The radiologists' average min‐Dice (0.685 ± 0.139) and average min‐JI (0.537 ± 0.153) were significantly lower than those achieved by the H‐DL model (p < 0.05). These results indicate that the H‐DL model approached the average performance of the radiologists and was superior to the radiologist whose manual segmentation had the min‐Dice and min‐JI. Moreover, the average Dice and average JI achieved by the H‐DL model were significantly higher than those achieved by the shallow U‐DL model alone (Dice of 0.745 ± 0.139, JI of 0.611 ± 0.161; p < 0.05) or the deep U‐DL model alone (Dice of 0.739 ± 0.145, JI of 0.604 ± 0.163; p < 0.05).
CONCLUSION: Our newly developed H‐DL model outperformed the individual shallow and deep U‐DL models. The H‐DL method, combining multilevel features learned by both the shallow and deep DCNNs, could achieve segmentation accuracy comparable to radiologists' segmentation for nodules with a wide range of image characteristics.
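The two sketches below illustrate, respectively, the hybrid-ensemble architecture and the evaluation metrics described in the abstract; neither is the authors' code. The first is a minimal PyTorch sketch: it uses 2D convolutions rather than full 3D volume processing for brevity, and it only approximates the VGG‐19 and DenseNet‐201 encoders by varying the number of convolutions per encoder level, so the class names, channel counts, and output-averaging ensemble rule are illustrative assumptions.

```python
# Minimal PyTorch sketch of the H-DL ensemble idea (not the authors' implementation).
# Two six-level U-shaped models share the same single-conv-per-level decoder design
# but differ in encoder depth; an ensemble step averages their per-pixel nodule
# probabilities. Channel counts and 2D convolutions are illustrative assumptions.
import torch
import torch.nn as nn


class ConvBlock(nn.Module):
    """A stack of 3x3 conv + ReLU layers; deeper encoders use more convs per level."""

    def __init__(self, in_ch: int, out_ch: int, n_convs: int):
        super().__init__()
        layers = []
        for i in range(n_convs):
            layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                       nn.ReLU(inplace=True)]
        self.block = nn.Sequential(*layers)

    def forward(self, x):
        return self.block(x)


class UDL(nn.Module):
    """Symmetric U-shaped model: variable-depth encoder, one conv layer per decoder level."""

    def __init__(self, levels: int = 6, base_ch: int = 16, convs_per_level: int = 2):
        super().__init__()
        chs = [base_ch * 2 ** i for i in range(levels)]
        self.encoders = nn.ModuleList(
            ConvBlock(1 if i == 0 else chs[i - 1], chs[i], convs_per_level)
            for i in range(levels))
        self.pool = nn.MaxPool2d(2)
        self.ups = nn.ModuleList(
            nn.ConvTranspose2d(chs[i], chs[i - 1], 2, stride=2)
            for i in range(levels - 1, 0, -1))
        # Decoder: a single convolutional layer at each level, per the description above.
        self.decoders = nn.ModuleList(
            nn.Sequential(nn.Conv2d(2 * chs[i - 1], chs[i - 1], 3, padding=1),
                          nn.ReLU(inplace=True))
            for i in range(levels - 1, 0, -1))
        self.head = nn.Conv2d(chs[0], 1, 1)

    def forward(self, x):
        skips = []
        for i, enc in enumerate(self.encoders):
            x = enc(x)
            if i < len(self.encoders) - 1:
                skips.append(x)
                x = self.pool(x)
        for up, dec, skip in zip(self.ups, self.decoders, reversed(skips)):
            x = dec(torch.cat([up(x), skip], dim=1))
        return torch.sigmoid(self.head(x))


class HybridDL(nn.Module):
    """Ensemble of a shallow and a deep U-DL model (stand-ins for the VGG-19 and
    DenseNet-201 based encoders); the final probability map is their average."""

    def __init__(self):
        super().__init__()
        self.shallow = UDL(convs_per_level=2)
        self.deep = UDL(convs_per_level=4)

    def forward(self, x):
        return 0.5 * (self.shallow(x) + self.deep(x))
```

A typical call would be `HybridDL()(torch.randn(1, 1, 64, 64))`, with input sizes divisible by 32 so the six levels of pooling and upsampling line up. The second sketch computes the reported metrics, assuming each radiologist's segmentation is scored against the same 50% consensus reference standard; the function names are hypothetical.

```python
# Illustrative computation of the evaluation metrics (not the authors' code): the
# 3D Dice coefficient and Jaccard index (JI) between a binary nodule mask and the
# reference standard, plus the median (M-) and minimum (min-) per-radiologist scores.
import numpy as np


def dice_3d(pred: np.ndarray, ref: np.ndarray) -> float:
    """3D Dice = 2|P intersect R| / (|P| + |R|) over boolean voxel masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return float(2.0 * inter / denom) if denom else 1.0


def jaccard_3d(pred: np.ndarray, ref: np.ndarray) -> float:
    """Jaccard index = |P intersect R| / |P union R| over boolean voxel masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    inter = np.logical_and(pred, ref).sum()
    union = np.logical_or(pred, ref).sum()
    return float(inter / union) if union else 1.0


def radiologist_stats(radiologist_masks, ref: np.ndarray):
    """M-Dice, min-Dice, M-JI, min-JI for one nodule, taken over the individual
    radiologists' segmentations relative to the same reference standard."""
    dices = [dice_3d(m, ref) for m in radiologist_masks]
    jis = [jaccard_3d(m, ref) for m in radiologist_masks]
    return float(np.median(dices)), min(dices), float(np.median(jis)), min(jis)
```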
format Online
Article
Text
id pubmed-10087884
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher John Wiley and Sons Inc.
record_format MEDLINE/PubMed
spelling pubmed-10087884 2023-04-12 Hybrid U‐Net‐based deep learning model for volume segmentation of lung nodules in CT images. Wang, Yifan; Zhou, Chuan; Chan, Heang‐Ping; Hadjiiski, Lubomir M.; Chughtai, Aamer; Kazerooni, Ella A. Med Phys, QUANTITATIVE IMAGING AND IMAGE PROCESSING. John Wiley and Sons Inc. Published online 2022-08-17; issue date 2022-11. /pmc/articles/PMC10087884/ /pubmed/35717560 http://dx.doi.org/10.1002/mp.15810 Text en © 2022 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of the American Association of Physicists in Medicine. This is an open access article under the terms of the Creative Commons Attribution‐NonCommercial‐NoDerivs 4.0 (CC BY‐NC‐ND 4.0) License (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits use and distribution in any medium, provided the original work is properly cited, the use is non‐commercial, and no modifications or adaptations are made.
title Hybrid U‐Net‐based deep learning model for volume segmentation of lung nodules in CT images
topic QUANTITATIVE IMAGING AND IMAGE PROCESSING
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10087884/
https://www.ncbi.nlm.nih.gov/pubmed/35717560
http://dx.doi.org/10.1002/mp.15810