Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images
Because histologic types are subjective and difficult to reproduce between pathologists, tissue morphology often takes a back seat to molecular testing for the selection of breast cancer treatments. This work explores whether a deep-learning algorithm can learn objective histologic H&E features...
Main Authors: | Rawat, Rishi R.; Ortega, Itzel; Roy, Preeyam; Sha, Fei; Shibata, Darryl; Ruderman, Daniel; Agus, David B. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2020 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7190637/ https://www.ncbi.nlm.nih.gov/pubmed/32350370 http://dx.doi.org/10.1038/s41598-020-64156-4 |
_version_ | 1783527723943067648 |
---|---|
author | Rawat, Rishi R.; Ortega, Itzel; Roy, Preeyam; Sha, Fei; Shibata, Darryl; Ruderman, Daniel; Agus, David B. |
author_facet | Rawat, Rishi R.; Ortega, Itzel; Roy, Preeyam; Sha, Fei; Shibata, Darryl; Ruderman, Daniel; Agus, David B. |
author_sort | Rawat, Rishi R. |
collection | PubMed |
description | Because histologic types are subjective and difficult to reproduce between pathologists, tissue morphology often takes a back seat to molecular testing for the selection of breast cancer treatments. This work explores whether a deep-learning algorithm can learn objective histologic H&E features that predict the clinical subtypes of breast cancer, as assessed by immunostaining for estrogen, progesterone, and Her2 receptors (ER/PR/Her2). Translating deep learning to this and related problems in histopathology presents a challenge due to the lack of large, well-annotated datasets, which are typically required for the algorithms to learn statistically significant discriminatory patterns. To overcome this limitation, we introduce the concept of “tissue fingerprints,” which leverages large, unannotated datasets in a label-free manner to learn H&E features that can distinguish one patient from another. The hypothesis is that training the algorithm to learn the morphological differences between patients will implicitly teach it about the biologic variation between them. Following this training internship, we used the features the network learned, which we call “fingerprints,” to predict ER, PR, and Her2 status in two datasets. Despite the discovery dataset being relatively small by the standards of the machine learning community (n = 939), fingerprints enabled the determination of ER, PR, and Her2 status from whole slide H&E images with 0.89 AUC (ER), 0.81 AUC (PR), and 0.79 AUC (Her2) on a large, independent test set (n = 2531). Tissue fingerprints are concise but meaningful histopathologic image representations that capture biological information and may enable machine learning algorithms that go beyond the traditional ER/PR/Her2 clinical groupings by directly predicting theragnosis. (See the illustrative code sketch following this record.) |
format | Online Article Text |
id | pubmed-7190637 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-7190637 2020-05-05 Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images Rawat, Rishi R.; Ortega, Itzel; Roy, Preeyam; Sha, Fei; Shibata, Darryl; Ruderman, Daniel; Agus, David B. Sci Rep Article Nature Publishing Group UK 2020-04-29 /pmc/articles/PMC7190637/ /pubmed/32350370 http://dx.doi.org/10.1038/s41598-020-64156-4 Text en © The Author(s) 2020. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License; to view a copy, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Article Rawat, Rishi R.; Ortega, Itzel; Roy, Preeyam; Sha, Fei; Shibata, Darryl; Ruderman, Daniel; Agus, David B. Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images |
title | Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images |
title_full | Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images |
title_fullStr | Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images |
title_full_unstemmed | Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images |
title_short | Deep learned tissue “fingerprints” classify breast cancers by ER/PR/Her2 status from H&E images |
title_sort | deep learned tissue “fingerprints” classify breast cancers by er/pr/her2 status from h&e images |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7190637/ https://www.ncbi.nlm.nih.gov/pubmed/32350370 http://dx.doi.org/10.1038/s41598-020-64156-4 |
work_keys_str_mv | AT rawatrishir deeplearnedtissuefingerprintsclassifybreastcancersbyerprher2statusfromheimages AT ortegaitzel deeplearnedtissuefingerprintsclassifybreastcancersbyerprher2statusfromheimages AT roypreeyam deeplearnedtissuefingerprintsclassifybreastcancersbyerprher2statusfromheimages AT shafei deeplearnedtissuefingerprintsclassifybreastcancersbyerprher2statusfromheimages AT shibatadarryl deeplearnedtissuefingerprintsclassifybreastcancersbyerprher2statusfromheimages AT rudermandaniel deeplearnedtissuefingerprintsclassifybreastcancersbyerprher2statusfromheimages AT agusdavidb deeplearnedtissuefingerprintsclassifybreastcancersbyerprher2statusfromheimages |
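
The description above outlines a two-stage approach: a label-free pretext task (a network trained to tell patients apart from H&E patches, whose learned features are the “fingerprints”), followed by a supervised classifier that maps whole-slide fingerprints to ER/PR/Her2 status, scored by AUC. The sketch below is a minimal, hypothetical illustration of that idea using PyTorch and scikit-learn. It is not the authors' implementation; every name and architectural detail in it (FingerprintNet, pretrain_step, slide_fingerprint, the toy CNN encoder) is an assumption made for illustration.

```python
# Minimal sketch of the "tissue fingerprint" idea described above.
# NOT the authors' code: the encoder, patch size, and training details
# are illustrative stand-ins.

import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score


class FingerprintNet(nn.Module):
    """Embeds an H&E patch, then classifies which patient it came from
    (the label-free pretext task); the embedding is the 'fingerprint'."""

    def __init__(self, n_patients: int, dim: int = 512):
        super().__init__()
        self.encoder = nn.Sequential(                   # stand-in CNN encoder
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
        self.patient_head = nn.Linear(dim, n_patients)  # used only in pretraining

    def forward(self, x):
        z = self.encoder(x)                             # z: the tissue fingerprint
        return self.patient_head(z), z


def pretrain_step(model, optimizer, patches, patient_ids):
    """One optimization step of the patient-discrimination pretext task."""
    logits, _ = model(patches)
    loss = nn.functional.cross_entropy(logits, patient_ids)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


@torch.no_grad()
def slide_fingerprint(model, patches):
    """Pool per-patch fingerprints into one whole-slide representation."""
    model.eval()
    _, z = model(patches)
    return z.mean(dim=0).numpy()


if __name__ == "__main__":
    # Toy demo with random tensors standing in for real H&E patches.
    model = FingerprintNet(n_patients=939)              # discovery-set scale
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    patches = torch.randn(16, 3, 64, 64)                # 16 fake RGB patches
    pids = torch.randint(0, 939, (16,))
    pretrain_step(model, opt, patches, pids)

    # Downstream: fit a simple receptor-status classifier on slide
    # fingerprints and score it by AUC, the metric used in the abstract.
    fps = np.stack([slide_fingerprint(model, torch.randn(4, 3, 64, 64))
                    for _ in range(40)])                # 40 fake slides
    er = np.arange(40) % 2                              # fake ER labels (0/1)
    clf = LogisticRegression(max_iter=1000).fit(fps[:30], er[:30])
    auc = roc_auc_score(er[30:], clf.predict_proba(fps[30:])[:, 1])
    print(f"toy ER AUC: {auc:.2f}")
```

The printed toy AUC is meaningless; it only illustrates the pipeline shape. With real fingerprints, the paper reports 0.89 (ER), 0.81 (PR), and 0.79 (Her2) AUC on the independent test set (n = 2531).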