
Overcoming the limitations of patch-based learning to detect cancer in whole slide images

Bibliographic Details
Main Authors: Ciga, Ozan, Xu, Tony, Nofech-Mozes, Sharon, Noy, Shawna, Lu, Fang-I, Martel, Anne L.
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8076327/
https://www.ncbi.nlm.nih.gov/pubmed/33903725
http://dx.doi.org/10.1038/s41598-021-88494-z
author Ciga, Ozan
Xu, Tony
Nofech-Mozes, Sharon
Noy, Shawna
Lu, Fang-I
Martel, Anne L.
author_sort Ciga, Ozan
collection PubMed
description Whole slide images (WSIs) pose unique challenges when training deep learning models: they are very large, which makes it necessary to break each image into smaller patches for analysis; image features must be extracted at multiple scales to capture both detail and context; and extreme class imbalances may exist. Significant progress has been made in the analysis of these images, thanks largely to the availability of public annotated datasets. We postulate, however, that even if a method scores well on a challenge task, this success may not translate to good performance in a more clinically relevant workflow. Many datasets consist of image patches that may suffer from data curation bias; other datasets are labelled only at the whole-slide level, and the lack of annotations across an image may mask erroneous local predictions so long as the final decision is correct. In this paper, we outline the differences between patch- or slide-level classification and methods that need to localize or segment cancer accurately across the whole slide, and we experimentally verify that best practices differ between the two. We apply a binary cancer detection network to post-neoadjuvant therapy breast cancer WSIs to find the tumor bed outlining the extent of cancer, a task which requires sensitivity and precision across the whole slide. We extensively study multiple design choices, including architectures and augmentations, and their effects on the outcome. We propose a negative data sampling strategy which drastically reduces the false positive rate (25% of false positives versus 62.5%) and improves every metric pertinent to our problem, with a 53% reduction in the error of tumor extent. Our results indicate that classification performance on image patches versus on WSIs is inversely related when the same negative data sampling strategy is used. Specifically, injecting negatives into the training data degrades patch-level classification performance, whereas it improves slide- and pixel-level WSI classification. Furthermore, we find that extensive augmentation helps more in WSI-based tasks than in patch-level image classification.
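The negative data sampling strategy described in the abstract can be sketched roughly as follows: alongside annotated tumor (positive) patches, extra patches are drawn from tumor-free regions of the slides and injected into the training pool as negatives. The function name, the balancing ratio, and the patch representation below are illustrative assumptions, not the authors' implementation:

```python
import random

def build_training_pool(positive_patches, background_patches,
                        neg_ratio=1.0, seed=0):
    """Return a shuffled list of (patch, label) pairs with negatives injected.

    neg_ratio controls how many tumor-free background patches are added
    per positive patch (1.0 = a balanced pool). Labels: 1 = tumor, 0 = not.
    """
    rng = random.Random(seed)
    n_neg = min(int(len(positive_patches) * neg_ratio),
                len(background_patches))
    # Sample negatives from tumor-free slide regions rather than relying
    # only on curated patches; this is the "injection" step.
    negatives = rng.sample(background_patches, n_neg)
    pool = [(p, 1) for p in positive_patches] + [(p, 0) for p in negatives]
    rng.shuffle(pool)
    return pool

# Toy usage: 10 positive patches, a large pool of background patches.
pool = build_training_pool(["pos"] * 10, ["bg"] * 100, neg_ratio=1.0)
```

Per the abstract, a pool built this way would be expected to hurt a patch-level classifier but help slide- and pixel-level WSI predictions.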
format Online
Article
Text
id pubmed-8076327
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-80763272021-04-28 Overcoming the limitations of patch-based learning to detect cancer in whole slide images Ciga, Ozan Xu, Tony Nofech-Mozes, Sharon Noy, Shawna Lu, Fang-I Martel, Anne L. Sci Rep Article Nature Publishing Group UK 2021-04-26 /pmc/articles/PMC8076327/ /pubmed/33903725 http://dx.doi.org/10.1038/s41598-021-88494-z Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated.
title Overcoming the limitations of patch-based learning to detect cancer in whole slide images
topic Article