Towards greater neuroimaging classification transparency via the integration of explainability methods and confidence estimation approaches
The field of neuroimaging has increasingly sought to develop artificial intelligence-based models for neurological and neuropsychiatric disorder automated diagnosis and clinical decision support. However, if these models are to be implemented in a clinical setting, transparency will be vital. Two aspects of transparency are (1) confidence estimation and (2) explainability. Confidence estimation approaches indicate confidence in individual predictions. Explainability methods give insight into the importance of features to model predictions. In this study, we integrate confidence estimation and explainability approaches for the first time. We demonstrate their viability for schizophrenia diagnosis using resting state functional magnetic resonance imaging (rs-fMRI) dynamic functional network connectivity (dFNC) data. We compare two confidence estimation approaches: Monte Carlo dropout (MCD) and MC batch normalization (MCBN). We combine them with two gradient-based explainability approaches, saliency and layer-wise relevance propagation (LRP), and examine their effects upon explanations. We find that MCD often adversely affects model gradients, making it ill-suited for integration with gradient-based explainability methods. In contrast, MCBN does not affect model gradients. Additionally, we find many participant-level differences between regular explanations and the distributions of explanations for combined explainability and confidence estimation approaches. This suggests that a similar confidence estimation approach used in a clinical context with explanations only output for the regular model would likely not yield adequate explanations. We hope that our findings will provide a starting point for the integration of the two fields, provide useful guidance for future studies, and accelerate the development of transparent neuroimaging clinical decision support systems.
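The two techniques the abstract pairs are easy to show concretely: Monte Carlo dropout samples a distribution of predictions from a stochastic network, while gradient saliency attributes a prediction to input features. The sketch below is a minimal illustration under assumptions, not the authors' code: the PyTorch architecture, the input size (1378 features), and the 50-sample count are hypothetical placeholders; only the two techniques themselves come from the abstract.

```python
# A minimal sketch of the two techniques named in the abstract: Monte Carlo
# dropout (MCD) for confidence estimation and gradient saliency for
# explainability. NOT the authors' code; the architecture, input size, and
# sample count are hypothetical placeholders.
import torch
import torch.nn as nn

# Hypothetical two-class classifier over a flattened dFNC feature vector.
model = nn.Sequential(
    nn.Linear(1378, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)

x = torch.randn(1, 1378)  # one participant's (synthetic) feature vector

# --- MCD: keep dropout stochastic at test time and sample predictions ---
model.train()  # train mode leaves Dropout active
with torch.no_grad():
    probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(50)])
mean_prob = probs.mean(dim=0)  # predictive mean over 50 stochastic passes
std_prob = probs.std(dim=0)    # spread; larger spread = lower confidence

# --- Saliency: gradient of the predicted-class score w.r.t. the input ---
model.eval()  # deterministic pass, i.e., the "regular" explanation
x_req = x.clone().requires_grad_(True)
score = model(x_req)[0, mean_prob.argmax()]
score.backward()
saliency = x_req.grad.abs().squeeze()  # per-feature importance
```

MCBN works analogously, except that the stochasticity comes from batch normalization statistics rather than dropout masks, which, per the abstract's findings, leaves model gradients intact.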
Main Authors: | Ellis, Charles A.; Miller, Robyn L.; Calhoun, Vince D. |
Format: | Online Article Text |
Language: | English |
Published: | Inform Med Unlocked, 2023 (online 2023-01-18) |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10078989/ https://www.ncbi.nlm.nih.gov/pubmed/37035832 http://dx.doi.org/10.1016/j.imu.2023.101176 |
collection | PubMed |
id | pubmed-10078989 |
institution | National Center for Biotechnology Information |
record_format | MEDLINE/PubMed |
license | Open access under the CC BY-NC-ND 4.0 license: https://creativecommons.org/licenses/by-nc-nd/4.0/