In-depth insights into Alzheimer’s disease by using explainable machine learning approach
Alzheimer’s disease remains a field of research with many open questions. The complexity of the disease prevents early diagnosis before visible symptoms affecting the individual’s cognitive abilities occur. This research presents an in-depth analysis of a large data set encompassing medica...
Main Authors: | Bogdanovic, Bojan; Eftimov, Tome; Simjanoska, Monika |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9021280/ https://www.ncbi.nlm.nih.gov/pubmed/35444165 http://dx.doi.org/10.1038/s41598-022-10202-2 |
_version_ | 1784689775872049152 |
---|---|
author | Bogdanovic, Bojan; Eftimov, Tome; Simjanoska, Monika
author_facet | Bogdanovic, Bojan; Eftimov, Tome; Simjanoska, Monika
author_sort | Bogdanovic, Bojan |
collection | PubMed |
description | Alzheimer’s disease remains a field of research with many open questions. The complexity of the disease prevents early diagnosis before visible symptoms affecting the individual’s cognitive abilities occur. This research presents an in-depth analysis of a large data set encompassing medical, cognitive and lifestyle measurements from more than 12,000 individuals. Several hypotheses were established, and their validity was examined against the obtained results. The importance of appropriate experimental design is strongly stressed in the research. Thus, a sequence of methods for handling missing data, redundancy, data imbalance, and correlation analysis was applied to preprocess the data set appropriately, and subsequently an XGBoost model was trained and evaluated with special attention to hyperparameter tuning. The model was explained using the Shapley values produced by the SHAP method. XGBoost achieved an F1-score of 0.84 and is therefore considered highly competitive among the results published in the literature. This achievement, however, was not the main contribution of the paper. The research’s goal was to perform global and local interpretation of the intelligent model and derive valuable conclusions about the established hypotheses. These methods led to a single scheme that presents the positive or negative influence of the values of each feature whose importance was confirmed by means of Shapley values. This scheme might be considered an additional source of knowledge for physicians and other experts concerned with the exact diagnosis of the early stage of Alzheimer’s disease. The conclusions derived from the intelligent model’s data-driven interpretation confronted all the established hypotheses. This research clearly shows the importance of an explainable machine learning approach that opens the black box and unveils the relationships among the features and the diagnoses. (A minimal illustrative sketch of such an XGBoost + SHAP workflow is given below this record.) |
format | Online Article Text |
id | pubmed-9021280 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-9021280 2022-04-21 In-depth insights into Alzheimer’s disease by using explainable machine learning approach Bogdanovic, Bojan Eftimov, Tome Simjanoska, Monika Sci Rep Article Alzheimer’s disease remains a field of research with many open questions. The complexity of the disease prevents early diagnosis before visible symptoms affecting the individual’s cognitive abilities occur. This research presents an in-depth analysis of a large data set encompassing medical, cognitive and lifestyle measurements from more than 12,000 individuals. Several hypotheses were established, and their validity was examined against the obtained results. The importance of appropriate experimental design is strongly stressed in the research. Thus, a sequence of methods for handling missing data, redundancy, data imbalance, and correlation analysis was applied to preprocess the data set appropriately, and subsequently an XGBoost model was trained and evaluated with special attention to hyperparameter tuning. The model was explained using the Shapley values produced by the SHAP method. XGBoost achieved an F1-score of 0.84 and is therefore considered highly competitive among the results published in the literature. This achievement, however, was not the main contribution of the paper. The research’s goal was to perform global and local interpretation of the intelligent model and derive valuable conclusions about the established hypotheses. These methods led to a single scheme that presents the positive or negative influence of the values of each feature whose importance was confirmed by means of Shapley values. This scheme might be considered an additional source of knowledge for physicians and other experts concerned with the exact diagnosis of the early stage of Alzheimer’s disease. The conclusions derived from the intelligent model’s data-driven interpretation confronted all the established hypotheses. This research clearly shows the importance of an explainable machine learning approach that opens the black box and unveils the relationships among the features and the diagnoses. Nature Publishing Group UK 2022-04-20 /pmc/articles/PMC9021280/ /pubmed/35444165 http://dx.doi.org/10.1038/s41598-022-10202-2 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Bogdanovic, Bojan Eftimov, Tome Simjanoska, Monika In-depth insights into Alzheimer’s disease by using explainable machine learning approach |
title | In-depth insights into Alzheimer’s disease by using explainable machine learning approach |
title_full | In-depth insights into Alzheimer’s disease by using explainable machine learning approach |
title_fullStr | In-depth insights into Alzheimer’s disease by using explainable machine learning approach |
title_full_unstemmed | In-depth insights into Alzheimer’s disease by using explainable machine learning approach |
title_short | In-depth insights into Alzheimer’s disease by using explainable machine learning approach |
title_sort | in-depth insights into alzheimer’s disease by using explainable machine learning approach |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9021280/ https://www.ncbi.nlm.nih.gov/pubmed/35444165 http://dx.doi.org/10.1038/s41598-022-10202-2 |
work_keys_str_mv | AT bogdanovicbojan indepthinsightsintoalzheimersdiseasebyusingexplainablemachinelearningapproach AT eftimovtome indepthinsightsintoalzheimersdiseasebyusingexplainablemachinelearningapproach AT simjanoskamonika indepthinsightsintoalzheimersdiseasebyusingexplainablemachinelearningapproach |
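The description above outlines a concrete workflow: preprocessing a tabular clinical data set, training an XGBoost classifier with hyperparameter tuning, evaluating it with the F1-score, and explaining it globally and locally through Shapley values (SHAP). The minimal sketch below illustrates that kind of workflow; it is not the authors' code. The data set is a synthetic stand-in generated with scikit-learn's make_classification, and the hyperparameter grid and all other parameter values are illustrative assumptions rather than the configuration reported in the paper.

```python
# Minimal sketch of an XGBoost + SHAP workflow, assuming synthetic data.
# The study itself uses medical, cognitive and lifestyle measurements from
# more than 12,000 individuals; those data are not reproduced here.
import xgboost as xgb
import shap
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic, imbalanced stand-in for the preprocessed clinical feature table.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           weights=[0.7, 0.3], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Hyperparameter tuning with cross-validated grid search
# (this grid is illustrative, not the paper's configuration).
param_grid = {
    "max_depth": [3, 5, 7],
    "learning_rate": [0.05, 0.1],
    "n_estimators": [200, 400],
}
search = GridSearchCV(
    xgb.XGBClassifier(eval_metric="logloss", random_state=42),
    param_grid, scoring="f1", cv=5)
search.fit(X_train, y_train)
model = search.best_estimator_

# Evaluate with the F1-score, the metric reported in the abstract.
print("F1-score on held-out data:", f1_score(y_test, model.predict(X_test)))

# Explain the trained model with Shapley values via SHAP's TreeExplainer.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global interpretability: feature influence summarised over all test cases.
shap.summary_plot(shap_values, X_test)

# Local interpretability: per-feature contributions for a single individual.
print("SHAP values for the first test case:", shap_values[0])
```

In the paper, the SHAP output is further condensed into a single scheme of positive and negative feature influences; in this sketch the summary plot stands in for the global view and the printed per-case values for the local one.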