Convolutional neural network for automated mass segmentation in mammography
BACKGROUND: Automatic segmentation and localization of lesions in mammogram (MG) images are challenging even when employing advanced methods such as deep learning (DL). We developed a new model based on the architecture of the semantic segmentation U-Net model to precisely segment mass lesio...
Main authors: | Abdelhafiz, Dina; Bi, Jinbo; Ammar, Reda; Yang, Clifford; Nabavi, Sheida |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central, 2020 |
Subjects: | Methodology |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7724817/ https://www.ncbi.nlm.nih.gov/pubmed/33297952 http://dx.doi.org/10.1186/s12859-020-3521-y |
_version_ | 1783620594587140096 |
author | Abdelhafiz, Dina Bi, Jinbo Ammar, Reda Yang, Clifford Nabavi, Sheida |
author_facet | Abdelhafiz, Dina Bi, Jinbo Ammar, Reda Yang, Clifford Nabavi, Sheida |
author_sort | Abdelhafiz, Dina |
collection | PubMed |
description | BACKGROUND: Automatic segmentation and localization of lesions in mammogram (MG) images are challenging even when employing advanced methods such as deep learning (DL). We developed a new model based on the architecture of the semantic segmentation U-Net model to precisely segment mass lesions in MG images. The proposed end-to-end convolutional neural network (CNN) based model extracts contextual information by combining low-level and high-level features. We trained the proposed model using large publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and a private database from the University of Connecticut Health Center (UCHC). RESULTS: We compared the performance of the proposed model with those of state-of-the-art DL models, including the fully convolutional network (FCN), SegNet, Dilated-Net, original U-Net, and Faster R-CNN models, and the conventional region growing (RG) method. The proposed Vanilla U-Net model significantly outperforms the Faster R-CNN model in terms of runtime and the Intersection over Union metric (IOU). Trained on digitized film-based and fully digitized MG images, the proposed Vanilla U-Net model achieves a mean test accuracy of 92.6%. The proposed model achieves a mean Dice coefficient index (DI) of 0.951 and a mean IOU of 0.909, which show how close the output segments are to the corresponding lesions in the ground-truth maps. Data augmentation was very effective in our experiments, increasing the mean DI from 0.922 to 0.951 and the mean IOU from 0.856 to 0.909. CONCLUSIONS: The proposed Vanilla U-Net based model can be used for precise segmentation of masses in MG images, because the segmentation process incorporates multi-scale spatial context and captures both local and global context to predict a precise pixel-wise segmentation map of an input full MG image. These predicted maps can help radiologists differentiate benign and malignant lesions based on lesion shape. We show that using transfer learning, introducing augmentation, and modifying the architecture of the original model result in better performance, in terms of mean accuracy, mean DI, and mean IOU, in detecting mass lesions compared with the other DL models and the conventional method. |
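The Dice index (DI) and IOU cited in the description are standard overlap measures between a predicted binary mask and its ground-truth mask. As a point of reference (not taken from the paper), a minimal sketch of how they are computed; the helper name and the toy masks are illustrative:

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Compute the Dice coefficient and Intersection over Union
    for a pair of binary segmentation masks of equal shape."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()   # pixels marked in both masks
    union = np.logical_or(pred, truth).sum()    # pixels marked in either mask
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    iou = inter / union
    return dice, iou

# Toy 2x3 masks: prediction misses one ground-truth pixel.
pred = np.array([[0, 1, 1],
                 [0, 1, 0]])
truth = np.array([[0, 1, 1],
                  [1, 1, 0]])
dice, iou = dice_and_iou(pred, truth)
# intersection = 3, union = 4 -> dice = 6/7 ≈ 0.857, iou = 3/4 = 0.75
```

A DI of 0.951 and IOU of 0.909, as reported for the proposed model, therefore indicate near-complete pixel-wise overlap with the ground-truth lesion maps.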
format | Online Article Text |
id | pubmed-7724817 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-77248172020-12-09 Convolutional neural network for automated mass segmentation in mammography Abdelhafiz, Dina Bi, Jinbo Ammar, Reda Yang, Clifford Nabavi, Sheida BMC Bioinformatics Methodology BACKGROUND: Automatic segmentation and localization of lesions in mammogram (MG) images are challenging even when employing advanced methods such as deep learning (DL). We developed a new model based on the architecture of the semantic segmentation U-Net model to precisely segment mass lesions in MG images. The proposed end-to-end convolutional neural network (CNN) based model extracts contextual information by combining low-level and high-level features. We trained the proposed model using large publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and a private database from the University of Connecticut Health Center (UCHC). RESULTS: We compared the performance of the proposed model with those of state-of-the-art DL models, including the fully convolutional network (FCN), SegNet, Dilated-Net, original U-Net, and Faster R-CNN models, and the conventional region growing (RG) method. The proposed Vanilla U-Net model significantly outperforms the Faster R-CNN model in terms of runtime and the Intersection over Union metric (IOU). Trained on digitized film-based and fully digitized MG images, the proposed Vanilla U-Net model achieves a mean test accuracy of 92.6%. The proposed model achieves a mean Dice coefficient index (DI) of 0.951 and a mean IOU of 0.909, which show how close the output segments are to the corresponding lesions in the ground-truth maps. Data augmentation was very effective in our experiments, increasing the mean DI from 0.922 to 0.951 and the mean IOU from 0.856 to 0.909. CONCLUSIONS: The proposed Vanilla U-Net based model can be used for precise segmentation of masses in MG images, because the segmentation process incorporates multi-scale spatial context and captures both local and global context to predict a precise pixel-wise segmentation map of an input full MG image. These predicted maps can help radiologists differentiate benign and malignant lesions based on lesion shape. We show that using transfer learning, introducing augmentation, and modifying the architecture of the original model result in better performance, in terms of mean accuracy, mean DI, and mean IOU, in detecting mass lesions compared with the other DL models and the conventional method. BioMed Central 2020-12-09 /pmc/articles/PMC7724817/ /pubmed/33297952 http://dx.doi.org/10.1186/s12859-020-3521-y Text en © The Author(s) 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Methodology Abdelhafiz, Dina Bi, Jinbo Ammar, Reda Yang, Clifford Nabavi, Sheida Convolutional neural network for automated mass segmentation in mammography |
title | Convolutional neural network for automated mass segmentation in mammography |
title_full | Convolutional neural network for automated mass segmentation in mammography |
title_fullStr | Convolutional neural network for automated mass segmentation in mammography |
title_full_unstemmed | Convolutional neural network for automated mass segmentation in mammography |
title_short | Convolutional neural network for automated mass segmentation in mammography |
title_sort | convolutional neural network for automated mass segmentation in mammography |
topic | Methodology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7724817/ https://www.ncbi.nlm.nih.gov/pubmed/33297952 http://dx.doi.org/10.1186/s12859-020-3521-y |
work_keys_str_mv | AT abdelhafizdina convolutionalneuralnetworkforautomatedmasssegmentationinmammography AT bijinbo convolutionalneuralnetworkforautomatedmasssegmentationinmammography AT ammarreda convolutionalneuralnetworkforautomatedmasssegmentationinmammography AT yangclifford convolutionalneuralnetworkforautomatedmasssegmentationinmammography AT nabavisheida convolutionalneuralnetworkforautomatedmasssegmentationinmammography |