Validation of machine learning models to detect amyloid pathologies across institutions
Semi-quantitative scoring schemes like the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) are the most commonly used methods in Alzheimer’s disease (AD) neuropathology practice. Computational approaches based on machine learning have recently generated quantitative scores for whole slide images (WSIs) that are highly correlated with human-derived semi-quantitative scores, such as those of CERAD, for Alzheimer’s disease pathology.
Main Authors: | Vizcarra, Juan C., Gearing, Marla, Keiser, Michael J., Glass, Jonathan D., Dugger, Brittany N., Gutman, David A. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central 2020 |
Subjects: | Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7189549/ https://www.ncbi.nlm.nih.gov/pubmed/32345363 http://dx.doi.org/10.1186/s40478-020-00927-4 |
_version_ | 1783527520465846272 |
---|---|
author | Vizcarra, Juan C. Gearing, Marla Keiser, Michael J. Glass, Jonathan D. Dugger, Brittany N. Gutman, David A. |
author_facet | Vizcarra, Juan C. Gearing, Marla Keiser, Michael J. Glass, Jonathan D. Dugger, Brittany N. Gutman, David A. |
author_sort | Vizcarra, Juan C. |
collection | PubMed |
description | Semi-quantitative scoring schemes like the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) are the most commonly used methods in Alzheimer’s disease (AD) neuropathology practice. Computational approaches based on machine learning have recently generated quantitative scores for whole slide images (WSIs) that are highly correlated with human-derived semi-quantitative scores, such as those of CERAD, for Alzheimer’s disease pathology. However, the robustness of such models has yet to be tested in different cohorts. To validate previously published machine learning algorithms using convolutional neural networks (CNNs) and determine whether pathological heterogeneity may alter algorithm-derived measures, 40 cases from the Goizueta Emory Alzheimer’s Disease Center brain bank displaying an array of pathological diagnoses (including AD with and without Lewy body disease (LBD) and/or TDP-43-positive inclusions) and levels of Aβ pathologies were evaluated. Furthermore, to provide deeper phenotyping, amyloid burden in gray matter vs whole tissue was compared, and quantitative CNN scores for both correlated significantly with CERAD-like scores. Quantitative scores also showed clear stratification based on AD pathologies with or without additional diagnoses (including LBD and TDP-43 inclusions) vs cases with no significant neurodegeneration (control cases), as well as by NIA Reagan scoring criteria. Specifically, the concomitant diagnosis group of AD + TDP-43 showed a significantly greater CNN score for cored plaques than the AD group. Finally, we report that whole-tissue computational scores correlate better with CERAD-like categories than computational scores from the field of view with the densest pathology, which is the standard of practice in neuropathological assessment per CERAD guidelines. Together, these findings validate and extend CNN models, demonstrating robustness to cohort variations, and provide additional proof-of-concept for future studies to incorporate machine learning algorithms into neuropathological practice. |
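The description above compares two ways of summarizing CNN output only at a high level. As a minimal illustrative sketch (not the authors' published pipeline), the snippet below assumes tile-level plaque counts have already been produced by a CNN classifier and contrasts the two aggregation strategies the abstract discusses: a whole-tissue score versus a densest field-of-view (CERAD-style) score. The grid size, tissue mask, and window size are all hypothetical.

```python
# Hedged sketch (not the authors' released code): aggregating hypothetical
# tile-level CNN plaque counts into (a) a whole-tissue score and (b) a
# densest field-of-view score, the two strategies compared in the abstract.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-tile cored-plaque counts for one whole slide image (WSI);
# in practice each value would come from a CNN applied to one image tile.
tile_counts = rng.poisson(lam=0.3, size=(120, 160)).astype(float)

# Hypothetical tissue mask: True where a tile contains tissue/gray matter.
tissue_mask = rng.random((120, 160)) > 0.2

# Whole-tissue score: average plaque count per tissue-containing tile.
whole_tissue_score = tile_counts[tissue_mask].sum() / tissue_mask.sum()


def densest_fov_score(counts: np.ndarray, window: int = 8) -> float:
    """Largest plaque count inside any window x window block of tiles,
    a stand-in for the single densest microscope field used by CERAD."""
    h, w = counts.shape
    best = 0.0
    for r in range(h - window + 1):
        for c in range(w - window + 1):
            best = max(best, counts[r:r + window, c:c + window].sum())
    return best


print(f"whole-tissue score: {whole_tissue_score:.3f} plaques per tissue tile")
print(f"densest-FOV score:  {densest_fov_score(tile_counts):.1f} plaques in one field")
```

The brute-force sliding window is adequate at this tile-grid size; for denser tilings a summed-area table would be the obvious optimization.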
format | Online Article Text |
id | pubmed-7189549 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-71895492020-05-04 Validation of machine learning models to detect amyloid pathologies across institutions Vizcarra, Juan C. Gearing, Marla Keiser, Michael J. Glass, Jonathan D. Dugger, Brittany N. Gutman, David A. Acta Neuropathol Commun Research Semi-quantitative scoring schemes like the Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) are the most commonly used methods in Alzheimer’s disease (AD) neuropathology practice. Computational approaches based on machine learning have recently generated quantitative scores for whole slide images (WSIs) that are highly correlated with human-derived semi-quantitative scores, such as those of CERAD, for Alzheimer’s disease pathology. However, the robustness of such models has yet to be tested in different cohorts. To validate previously published machine learning algorithms using convolutional neural networks (CNNs) and determine whether pathological heterogeneity may alter algorithm-derived measures, 40 cases from the Goizueta Emory Alzheimer’s Disease Center brain bank displaying an array of pathological diagnoses (including AD with and without Lewy body disease (LBD) and/or TDP-43-positive inclusions) and levels of Aβ pathologies were evaluated. Furthermore, to provide deeper phenotyping, amyloid burden in gray matter vs whole tissue was compared, and quantitative CNN scores for both correlated significantly with CERAD-like scores. Quantitative scores also showed clear stratification based on AD pathologies with or without additional diagnoses (including LBD and TDP-43 inclusions) vs cases with no significant neurodegeneration (control cases), as well as by NIA Reagan scoring criteria. Specifically, the concomitant diagnosis group of AD + TDP-43 showed a significantly greater CNN score for cored plaques than the AD group. Finally, we report that whole-tissue computational scores correlate better with CERAD-like categories than computational scores from the field of view with the densest pathology, which is the standard of practice in neuropathological assessment per CERAD guidelines. Together, these findings validate and extend CNN models, demonstrating robustness to cohort variations, and provide additional proof-of-concept for future studies to incorporate machine learning algorithms into neuropathological practice. BioMed Central 2020-04-28 /pmc/articles/PMC7189549/ /pubmed/32345363 http://dx.doi.org/10.1186/s40478-020-00927-4 Text en © The Author(s) 2020 Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
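For the correlation between continuous CNN-derived scores and ordinal CERAD-like categories described in the abstract, a rank-based statistic is the natural fit. The sketch below is a hedged illustration on synthetic values (not the published analysis or data): it encodes CERAD-like categories ordinally and computes a Spearman correlation against simulated whole-tissue CNN scores.

```python
# Hedged sketch (not the published analysis): Spearman rank correlation
# between synthetic whole-tissue CNN scores and ordinal CERAD-like categories.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# CERAD-like categories encoded ordinally for 40 hypothetical cases:
# 0 = none, 1 = sparse, 2 = moderate, 3 = frequent.
cerad_like = rng.integers(0, 4, size=40)

# Simulated CNN scores that rise with category plus case-to-case noise,
# mimicking (not reproducing) the monotone relationship reported above.
cnn_score = cerad_like * 0.8 + rng.normal(0.0, 0.4, size=40)

rho, p_value = spearmanr(cnn_score, cerad_like)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3g}")
```

Group-wise contrasts such as AD vs AD + TDP-43 would typically rely on a separate nonparametric test (e.g., scipy.stats.mannwhitneyu); the abstract does not specify the exact test used.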
spellingShingle | Research Vizcarra, Juan C. Gearing, Marla Keiser, Michael J. Glass, Jonathan D. Dugger, Brittany N. Gutman, David A. Validation of machine learning models to detect amyloid pathologies across institutions |
title | Validation of machine learning models to detect amyloid pathologies across institutions |
title_full | Validation of machine learning models to detect amyloid pathologies across institutions |
title_fullStr | Validation of machine learning models to detect amyloid pathologies across institutions |
title_full_unstemmed | Validation of machine learning models to detect amyloid pathologies across institutions |
title_short | Validation of machine learning models to detect amyloid pathologies across institutions |
title_sort | validation of machine learning models to detect amyloid pathologies across institutions |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7189549/ https://www.ncbi.nlm.nih.gov/pubmed/32345363 http://dx.doi.org/10.1186/s40478-020-00927-4 |
work_keys_str_mv | AT vizcarrajuanc validationofmachinelearningmodelstodetectamyloidpathologiesacrossinstitutions AT gearingmarla validationofmachinelearningmodelstodetectamyloidpathologiesacrossinstitutions AT keisermichaelj validationofmachinelearningmodelstodetectamyloidpathologiesacrossinstitutions AT glassjonathand validationofmachinelearningmodelstodetectamyloidpathologiesacrossinstitutions AT duggerbrittanyn validationofmachinelearningmodelstodetectamyloidpathologiesacrossinstitutions AT gutmandavida validationofmachinelearningmodelstodetectamyloidpathologiesacrossinstitutions |