Deep Learning to Improve Breast Cancer Detection on Screening Mammography

Bibliographic Details
Main Authors: Shen, Li, Margolies, Laurie R., Rothstein, Joseph H., Fluder, Eugene, McBride, Russell, Sieh, Weiva
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6715802/
https://www.ncbi.nlm.nih.gov/pubmed/31467326
http://dx.doi.org/10.1038/s41598-019-48995-4
author Shen, Li
Margolies, Laurie R.
Rothstein, Joseph H.
Fluder, Eugene
McBride, Russell
Sieh, Weiva
collection PubMed
description The rapid development of deep learning, a family of machine learning techniques, has spurred much interest in its application to medical imaging problems. Here, we develop a deep learning algorithm that can accurately detect breast cancer on screening mammograms using an “end-to-end” training approach that efficiently leverages training datasets with either complete clinical annotation or only the cancer status (label) of the whole image. In this approach, lesion annotations are required only in the initial training stage, and subsequent stages require only image-level labels, eliminating the reliance on rarely available lesion annotations. Our all convolutional network method for classifying screening mammograms attained excellent performance in comparison with previous methods. On an independent test set of digitized film mammograms from the Digital Database for Screening Mammography (CBIS-DDSM), the best single model achieved a per-image AUC of 0.88, and four-model averaging improved the AUC to 0.91 (sensitivity: 86.1%, specificity: 80.1%). On an independent test set of full-field digital mammography (FFDM) images from the INbreast database, the best single model achieved a per-image AUC of 0.95, and four-model averaging improved the AUC to 0.98 (sensitivity: 86.7%, specificity: 96.1%). We also demonstrate that a whole image classifier trained using our end-to-end approach on the CBIS-DDSM digitized film mammograms can be transferred to INbreast FFDM images using only a subset of the INbreast data for fine-tuning and without further reliance on the availability of lesion annotations. These findings show that automatic deep learning methods can be readily trained to attain high accuracy on heterogeneous mammography platforms, and hold tremendous promise for improving clinical tools to reduce false positive and false negative screening mammography results. Code and model available at: https://github.com/lishen/end2end-all-conv.
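The four-model averaging and the per-image AUC, sensitivity, and specificity reported in the abstract can be illustrated with a short, self-contained sketch. This is not the authors' code (their implementation is at the GitHub link above); it is a minimal Python example that assumes the four trained whole-image classifiers have already produced per-image malignancy probabilities, averages those probabilities, and scores them with scikit-learn. The array names, the toy data, and the 0.5 decision threshold are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch (not the authors' implementation): average per-image
# malignancy probabilities from four independently trained whole-image
# classifiers and report per-image AUC, sensitivity, and specificity.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix


def evaluate_ensemble(probs_per_model, labels, threshold=0.5):
    """probs_per_model: (n_models, n_images) array of predicted probabilities.
    labels: binary array of ground-truth image labels (1 = cancer)."""
    avg_probs = np.mean(probs_per_model, axis=0)       # four-model averaging
    auc = roc_auc_score(labels, avg_probs)             # per-image AUC
    preds = (avg_probs >= threshold).astype(int)       # assumed operating point
    tn, fp, fn, tp = confusion_matrix(labels, preds).ravel()
    sensitivity = tp / (tp + fn)                        # true positive rate
    specificity = tn / (tn + fp)                        # true negative rate
    return auc, sensitivity, specificity


if __name__ == "__main__":
    # Toy data standing in for real model outputs and labels.
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=200)
    probs_per_model = np.clip(labels + rng.normal(0, 0.4, size=(4, 200)), 0, 1)
    auc, sens, spec = evaluate_ensemble(probs_per_model, labels)
    print(f"AUC={auc:.3f}  sensitivity={sens:.3f}  specificity={spec:.3f}")
```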
format Online
Article
Text
id pubmed-6715802
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-6715802 2019-09-13 Deep Learning to Improve Breast Cancer Detection on Screening Mammography Shen, Li; Margolies, Laurie R.; Rothstein, Joseph H.; Fluder, Eugene; McBride, Russell; Sieh, Weiva Sci Rep Article Nature Publishing Group UK 2019-08-29 /pmc/articles/PMC6715802/ /pubmed/31467326 http://dx.doi.org/10.1038/s41598-019-48995-4 Text en © The Author(s) 2019. Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License; to view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
title Deep Learning to Improve Breast Cancer Detection on Screening Mammography
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6715802/
https://www.ncbi.nlm.nih.gov/pubmed/31467326
http://dx.doi.org/10.1038/s41598-019-48995-4