
A loss-based patch label denoising method for improving whole-slide image analysis using a convolutional neural network


Bibliographic Details
Main Authors: Ashraf, Murtaza; Robles, Willmer Rafell Quiñones; Kim, Mujin; Ko, Young Sin; Yi, Mun Yong
Format: Online, Article, Text
Language: English
Published: Nature Publishing Group UK, 2022
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8791954/
https://www.ncbi.nlm.nih.gov/pubmed/35082315
http://dx.doi.org/10.1038/s41598-022-05001-8
Collection: PubMed
Description: This paper proposes a deep learning-based patch label denoising method (LossDiff) for improving the classification of whole-slide images of cancer using a convolutional neural network (CNN). Automated whole-slide image classification is challenging and requires a large amount of labeled data. Pathologists annotate regions of interest by marking malignant areas, a practice that carries a high risk of patch-level label noise because small benign regions are often enclosed within the malignant annotations, resulting in low classification accuracy with many Type-II errors. To overcome this critical problem, this paper presents a simple yet effective method for classifying patches with noisy labels. The proposed method, validated on stomach cancer images, provides a significant improvement over existing methods in patch-based cancer classification, with accuracies of 98.81%, 97.30%, and 89.47% for binary, ternary, and quaternary classes, respectively. Moreover, several experiments at different noise levels on a publicly available dataset further demonstrate the robustness of the proposed method. Given the high cost of producing explicit annotations for whole-slide images and the unavoidable error-prone nature of human annotation of medical images, the proposed method has practical implications for whole-slide image annotation and automated cancer diagnosis.
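The abstract does not detail the LossDiff algorithm itself, but the general idea behind loss-based patch label denoising can be illustrated with a small-loss criterion: patches whose per-sample training loss is unusually high for their assigned label are treated as likely label noise and excluded or relabeled. The sketch below is a minimal PyTorch illustration of that generic idea, assuming a sequential (non-shuffled) patch loader; the function name, the keep_ratio threshold, and the quantile rule are illustrative assumptions, not the authors' LossDiff method.

```python
# Minimal sketch of generic loss-based patch label denoising (small-loss
# criterion). Illustrative assumption, NOT the paper's LossDiff algorithm:
# patches whose per-sample loss is unusually high for their assigned label
# are flagged as likely label noise.
import torch
import torch.nn as nn

def flag_noisy_patches(model, loader, device, keep_ratio=0.9):
    """Return dataset indices of patches in the highest (1 - keep_ratio)
    loss fraction, i.e. candidates for exclusion or relabeling.
    Assumes `loader` iterates the patch dataset sequentially (no shuffle)."""
    criterion = nn.CrossEntropyLoss(reduction="none")  # per-sample losses
    model.eval()
    losses, indices = [], []
    offset = 0
    with torch.no_grad():
        for patches, labels in loader:
            patches, labels = patches.to(device), labels.to(device)
            per_sample_loss = criterion(model(patches), labels)
            losses.append(per_sample_loss.cpu())
            indices.append(torch.arange(offset, offset + labels.size(0)))
            offset += labels.size(0)
    losses = torch.cat(losses)
    indices = torch.cat(indices)
    # Patches above the keep_ratio loss quantile are flagged as noisy.
    threshold = torch.quantile(losses, keep_ratio)
    return indices[losses > threshold]
```

In a typical noisy-label training pipeline, the flagged indices would then be dropped from, or down-weighted in, subsequent training epochs; this is the usual way a small-loss criterion is applied and is shown here only as a generic illustration.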
Record ID: pubmed-8791954
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sci Rep
Published Online: 2022-01-26
Rights: © The Author(s) 2022. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).