
Can we explain machine learning-based prediction for rupture status assessments of intracranial aneurysms?

Bibliographic Details
Main Authors: Mu, N; Rezaeitaleshmahalleh, M; Lyu, Z; Wang, M; Tang, J; Strother, C M; Gemmete, J J; Pandey, A S; Jiang, J
Format: Online, Article, Text
Language: English
Published: IOP Publishing, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9999353/
https://www.ncbi.nlm.nih.gov/pubmed/36626819
http://dx.doi.org/10.1088/2057-1976/acb1b3
Description:
Although applying machine learning (ML) algorithms to rupture status assessment of intracranial aneurysms (IA) has yielded promising results, the opaqueness of some ML methods has limited their clinical translation. We presented the first explainability comparison of six commonly used ML algorithms: multivariate logistic regression (LR), support vector machine (SVM), random forest (RF), extreme gradient boosting (XGBoost), multi-layer perceptron neural network (MLPNN), and Bayesian additive regression trees (BART). A total of 112 IAs with known rupture status were selected for this study. The ML-based classification used two anatomical features, nine hemodynamic parameters, and thirteen morphologic variables. We utilized permutation feature importance, local interpretable model-agnostic explanations (LIME), and SHapley Additive exPlanations (SHAP) algorithms to explain and analyze the six ML algorithms. All models performed comparably: the area under the curve (AUC) was 0.71 for LR, 0.76 for SVM, 0.73 for RF, 0.78 for XGBoost, 0.73 for MLPNN, and 0.73 for BART. Our interpretability analysis demonstrated consistent results across all the methods; i.e., the utility of the top 12 features was broadly consistent. Furthermore, the contributions of 9 important features (aneurysm area, aneurysm location, aneurysm type, wall shear stress maximum during systole, ostium area, the size ratio between aneurysm width, (parent) vessel diameter, one standard deviation among time-averaged low shear area, and one standard deviation of temporally averaged low shear area less than 0.4 Pa) were nearly the same. This research suggested that ML classifiers can provide explainable predictions consistent with general domain knowledge concerning IA rupture. With an improved understanding of ML algorithms, clinicians’ trust in them will be enhanced, accelerating their clinical translation.
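As a rough illustration of the kind of workflow the abstract describes, the sketch below trains a few of the named classifier families on synthetic data, reports the AUC, and ranks features with permutation importance. It is a minimal sketch only: the data, feature count, estimator choices, and hyperparameters are assumptions for demonstration, not the authors' pipeline (BART, for instance, has no scikit-learn implementation and is omitted here).

```python
# Minimal, self-contained sketch (synthetic data; estimators and settings are
# illustrative assumptions, not the study's dataset or exact models).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for 112 aneurysms described by 24 anatomical/hemodynamic/morphologic features.
X, y = make_classification(n_samples=112, n_features=24, n_informative=9, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "MLPNN": make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
}

for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    # Permutation importance: mean drop in AUC when each feature is shuffled on the test set.
    imp = permutation_importance(
        model, X_te, y_te, scoring="roc_auc", n_repeats=20, random_state=0
    )
    top5 = np.argsort(imp.importances_mean)[::-1][:5]
    print(f"{name}: AUC = {auc:.2f}; top features by permutation importance: {top5.tolist()}")
```

From the same fitted models, per-case explanations in the style of SHAP or LIME (as named in the abstract) could be produced with the shap and lime Python packages, and XGBoost would slot into the same loop via xgboost.XGBClassifier.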
Journal: Biomed Phys Eng Express
Source: PubMed Central (PMC9999353), National Center for Biotechnology Information; MEDLINE/PubMed record format
Dates: issue 2023-05-01; online 2023-03-10
Rights: © 2023 The Author(s). Published by IOP Publishing Ltd. Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence (https://creativecommons.org/licenses/by/4.0/). Any further distribution of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.