Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection
_version_ | 1783484198914359296 |
author | Kohlberger, Timo; Liu, Yun; Moran, Melissa; Chen, Po-Hsuan Cameron; Brown, Trissia; Hipp, Jason D.; Mermel, Craig H.; Stumpe, Martin C. |
author_facet | Kohlberger, Timo; Liu, Yun; Moran, Melissa; Chen, Po-Hsuan Cameron; Brown, Trissia; Hipp, Jason D.; Mermel, Craig H.; Stumpe, Martin C. |
author_sort | Kohlberger, Timo |
collection | PubMed |
description | BACKGROUND: Digital pathology enables remote access or consults and powerful image analysis algorithms. However, the slide digitization process can create artifacts such as out-of-focus (OOF) regions. OOF is often only detected on careful review, potentially causing rescanning and workflow delays. Although scan-time operator screening for whole-slide OOF is feasible, manual screening for OOF affecting only parts of a slide is impractical. METHODS: We developed a convolutional neural network (ConvFocus) to exhaustively localize and quantify the severity of OOF regions on digitized slides. ConvFocus was developed using our refined semi-synthetic OOF data generation process and evaluated using seven slides spanning three different tissue and three different stain types, each of which was digitized using two different whole-slide scanner models. ConvFocus's predictions were compared with pathologist-annotated focus quality grades across 514 distinct regions representing 37,700 35 μm × 35 μm image patches, and 21 digitized “z-stack” WSIs that contain known OOF patterns. RESULTS: When compared to pathologist-graded focus quality, ConvFocus achieved Spearman rank coefficients of 0.81 and 0.94 on two scanners and reproduced the expected OOF patterns from z-stack scanning. We also evaluated the impact of OOF on the accuracy of a state-of-the-art metastatic breast cancer detector and saw a consistent decrease in performance with increasing OOF. CONCLUSIONS: Comprehensive whole-slide OOF categorization could enable rescans before pathologist review, potentially reducing the impact of digitization focus issues on the clinical workflow. We show that the algorithm trained on our semi-synthetic OOF data generalizes well to real OOF regions across tissue types, stains, and scanners. Finally, quantitative OOF maps can flag regions that might otherwise be misclassified by image analysis algorithms, preventing OOF-induced errors. |
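The METHODS and RESULTS above rest on two ideas: generating semi-synthetic OOF training data by degrading in-focus patches, and evaluating predicted focus grades against reference grades with a Spearman rank correlation. A minimal illustrative sketch of both (not the authors' code: Gaussian blur stands in for the paper's refined OOF simulation, variance of the Laplacian is an assumed focus measure, and the 35 × 35 patch is synthetic):

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Stand-in "in-focus" patch: random texture with fine detail
# (matching the 35 x 35 patch size used in the study).
patch = rng.random((35, 35))

def sharpness(img):
    # Variance of the Laplacian, a common focus measure:
    # blur suppresses high frequencies, shrinking this value.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return lap.var()

# Semi-synthetic OOF: blur the in-focus patch with increasing sigma,
# treating sigma as the OOF severity grade (0 = in focus).
severities = list(range(8))
scores = [sharpness(gaussian_filter(patch, sigma=s)) for s in severities]

# Sharpness falls monotonically as severity rises, so severity and
# sharpness have a strong negative Spearman rank correlation.
rho, _ = spearmanr(severities, scores)
```

In the study itself the rank correlation is computed between ConvFocus's predicted grades and pathologist-assigned grades; the sketch only shows why a rank-based metric suits ordinal focus grades.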
format | Online Article Text |
id | pubmed-6939343 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | Wolters Kluwer - Medknow |
record_format | MEDLINE/PubMed |
spelling | pubmed-6939343 2020-01-09 Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection Kohlberger, Timo; Liu, Yun; Moran, Melissa; Chen, Po-Hsuan Cameron; Brown, Trissia; Hipp, Jason D.; Mermel, Craig H.; Stumpe, Martin C. J Pathol Inform Research Article BACKGROUND: Digital pathology enables remote access or consults and powerful image analysis algorithms. However, the slide digitization process can create artifacts such as out-of-focus (OOF) regions. OOF is often only detected on careful review, potentially causing rescanning and workflow delays. Although scan-time operator screening for whole-slide OOF is feasible, manual screening for OOF affecting only parts of a slide is impractical. METHODS: We developed a convolutional neural network (ConvFocus) to exhaustively localize and quantify the severity of OOF regions on digitized slides. ConvFocus was developed using our refined semi-synthetic OOF data generation process and evaluated using seven slides spanning three different tissue and three different stain types, each of which was digitized using two different whole-slide scanner models. ConvFocus's predictions were compared with pathologist-annotated focus quality grades across 514 distinct regions representing 37,700 35 μm × 35 μm image patches, and 21 digitized “z-stack” WSIs that contain known OOF patterns. RESULTS: When compared to pathologist-graded focus quality, ConvFocus achieved Spearman rank coefficients of 0.81 and 0.94 on two scanners and reproduced the expected OOF patterns from z-stack scanning. We also evaluated the impact of OOF on the accuracy of a state-of-the-art metastatic breast cancer detector and saw a consistent decrease in performance with increasing OOF. CONCLUSIONS: Comprehensive whole-slide OOF categorization could enable rescans before pathologist review, potentially reducing the impact of digitization focus issues on the clinical workflow. We show that the algorithm trained on our semi-synthetic OOF data generalizes well to real OOF regions across tissue types, stains, and scanners. Finally, quantitative OOF maps can flag regions that might otherwise be misclassified by image analysis algorithms, preventing OOF-induced errors. Wolters Kluwer - Medknow 2019-12-12 /pmc/articles/PMC6939343/ /pubmed/31921487 http://dx.doi.org/10.4103/jpi.jpi_11_19 Text en Copyright: © 2019 Journal of Pathology Informatics http://creativecommons.org/licenses/by-nc-sa/4.0 This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms. |
spellingShingle | Research Article Kohlberger, Timo; Liu, Yun; Moran, Melissa; Chen, Po-Hsuan Cameron; Brown, Trissia; Hipp, Jason D.; Mermel, Craig H.; Stumpe, Martin C. Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection |
title | Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection |
title_full | Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection |
title_fullStr | Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection |
title_full_unstemmed | Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection |
title_short | Whole-Slide Image Focus Quality: Automatic Assessment and Impact on AI Cancer Detection |
title_sort | whole-slide image focus quality: automatic assessment and impact on ai cancer detection |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6939343/ https://www.ncbi.nlm.nih.gov/pubmed/31921487 http://dx.doi.org/10.4103/jpi.jpi_11_19 |
work_keys_str_mv | AT kohlbergertimo wholeslideimagefocusqualityautomaticassessmentandimpactonaicancerdetection AT liuyun wholeslideimagefocusqualityautomaticassessmentandimpactonaicancerdetection AT moranmelissa wholeslideimagefocusqualityautomaticassessmentandimpactonaicancerdetection AT chenpohsuancameron wholeslideimagefocusqualityautomaticassessmentandimpactonaicancerdetection AT browntrissia wholeslideimagefocusqualityautomaticassessmentandimpactonaicancerdetection AT hippjasond wholeslideimagefocusqualityautomaticassessmentandimpactonaicancerdetection AT mermelcraigh wholeslideimagefocusqualityautomaticassessmentandimpactonaicancerdetection AT stumpemartinc wholeslideimagefocusqualityautomaticassessmentandimpactonaicancerdetection |