
Machine Learning Interpretability Methods to Characterize Brain Network Dynamics in Epilepsy

The rapid adoption of machine learning (ML) algorithms in a wide range of biomedical applications has highlighted issues of trust and the lack of understanding regarding the results generated by ML algorithms. Recent studies have focused on developing interpretable ML models and establishing guidelines for transparency and ethical use, ensuring the responsible integration of machine learning in healthcare. In this study, we demonstrate the effectiveness of ML interpretability methods in providing important insights into the dynamics of brain network interactions in epilepsy, a serious neurological disorder affecting more than 60 million persons worldwide. Using high-resolution intracranial electroencephalogram (EEG) recordings from a cohort of 16 patients, we developed high-accuracy ML models to categorize these brain activity recordings into either seizure or non-seizure classes, followed by the more complex task of delineating the different stages of seizure propagation to different parts of the brain as a multi-class classification task. We applied three distinct types of interpretability methods to the high-accuracy ML models to gain an understanding of the relative contributions of different categories of brain interaction patterns, including multi-foci interactions, which play an important role in distinguishing between different states of the brain. The results of this study demonstrate for the first time that post-hoc interpretability methods enable us to understand why ML algorithms generate a given set of results and how variations in input values affect the accuracy of the ML algorithms. In particular, we show that interpretability methods can be used to identify brain regions and interaction patterns that have a significant impact on seizure events. These results highlight the importance of implementing ML algorithms together with interpretability methods in studies of aberrant brain networks and in the wider domain of biomedical research.
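The abstract describes a two-step workflow: first train high-accuracy classifiers on brain-interaction features derived from intracranial EEG, then apply post-hoc interpretability methods to rank how much each interaction pattern contributes to the predictions. This record does not name the three interpretability methods the authors used, so the sketch below is only a minimal, hypothetical illustration of that kind of workflow, using permutation importance (one common post-hoc method) on synthetic stand-in features:

```python
# Hypothetical sketch of a seizure/non-seizure classification plus post-hoc
# interpretability workflow; the features, model, and method here are
# stand-ins, not the ones used in the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-window interaction features (e.g., pairwise connectivity
# strengths between electrode sites); labels: 0 = non-seizure, 1 = seizure.
n_windows, n_features = 2000, 40
X = rng.normal(size=(n_windows, n_features))
y = (X[:, 3] + 0.8 * X[:, 17] + rng.normal(scale=0.5, size=n_windows) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Step 1: fit a high-accuracy classifier for seizure vs. non-seizure windows.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")

# Step 2: apply a post-hoc interpretability method (permutation importance)
# to rank the contribution of each interaction feature to the predictions.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for idx in np.argsort(result.importances_mean)[::-1][:5]:
    print(f"feature {idx:2d}: mean importance {result.importances_mean[idx]:.3f}")
```

The multi-class task the abstract mentions (delineating stages of seizure propagation) would follow the same pattern with multi-class labels; in the study itself, the ranked features correspond to brain regions and interaction patterns rather than synthetic columns.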


Bibliographic Details
Main Authors: Upadhyaya, Dipak P., Prantzalos, Katrina, Thyagaraj, Suraj, Shafiabadi, Nassim, Fernandez-Baca Vaca, Guadalupe, Sivagnanam, Subhashini, Majumdar, Amitava, Sahoo, Satya S.
Format: Online Article Text
Language: English
Published: Cold Spring Harbor Laboratory 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10327223/
https://www.ncbi.nlm.nih.gov/pubmed/37425941
http://dx.doi.org/10.1101/2023.06.25.23291874
_version_ 1785069580213813248
author Upadhyaya, Dipak P.
Prantzalos, Katrina
Thyagaraj, Suraj
Shafiabadi, Nassim
Fernandez-Baca Vaca, Guadalupe
Sivagnanam, Subhashini
Majumdar, Amitava
Sahoo, Satya S.
author_facet Upadhyaya, Dipak P.
Prantzalos, Katrina
Thyagaraj, Suraj
Shafiabadi, Nassim
Fernandez-Baca Vaca, Guadalupe
Sivagnanam, Subhashini
Majumdar, Amitava
Sahoo, Satya S.
author_sort Upadhyaya, Dipak P.
collection PubMed
description The rapid adoption of machine learning (ML) algorithms in a wide range of biomedical applications has highlighted issues of trust and the lack of understanding regarding the results generated by ML algorithms. Recent studies have focused on developing interpretable ML models and establishing guidelines for transparency and ethical use, ensuring the responsible integration of machine learning in healthcare. In this study, we demonstrate the effectiveness of ML interpretability methods in providing important insights into the dynamics of brain network interactions in epilepsy, a serious neurological disorder affecting more than 60 million persons worldwide. Using high-resolution intracranial electroencephalogram (EEG) recordings from a cohort of 16 patients, we developed high-accuracy ML models to categorize these brain activity recordings into either seizure or non-seizure classes, followed by the more complex task of delineating the different stages of seizure propagation to different parts of the brain as a multi-class classification task. We applied three distinct types of interpretability methods to the high-accuracy ML models to gain an understanding of the relative contributions of different categories of brain interaction patterns, including multi-foci interactions, which play an important role in distinguishing between different states of the brain. The results of this study demonstrate for the first time that post-hoc interpretability methods enable us to understand why ML algorithms generate a given set of results and how variations in input values affect the accuracy of the ML algorithms. In particular, we show that interpretability methods can be used to identify brain regions and interaction patterns that have a significant impact on seizure events. These results highlight the importance of implementing ML algorithms together with interpretability methods in studies of aberrant brain networks and in the wider domain of biomedical research.
format Online
Article
Text
id pubmed-10327223
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Cold Spring Harbor Laboratory
record_format MEDLINE/PubMed
spelling pubmed-10327223 2023-07-08 Machine Learning Interpretability Methods to Characterize Brain Network Dynamics in Epilepsy Upadhyaya, Dipak P. Prantzalos, Katrina Thyagaraj, Suraj Shafiabadi, Nassim Fernandez-Baca Vaca, Guadalupe Sivagnanam, Subhashini Majumdar, Amitava Sahoo, Satya S. medRxiv Article The rapid adoption of machine learning (ML) algorithms in a wide range of biomedical applications has highlighted issues of trust and the lack of understanding regarding the results generated by ML algorithms. Recent studies have focused on developing interpretable ML models and establishing guidelines for transparency and ethical use, ensuring the responsible integration of machine learning in healthcare. In this study, we demonstrate the effectiveness of ML interpretability methods in providing important insights into the dynamics of brain network interactions in epilepsy, a serious neurological disorder affecting more than 60 million persons worldwide. Using high-resolution intracranial electroencephalogram (EEG) recordings from a cohort of 16 patients, we developed high-accuracy ML models to categorize these brain activity recordings into either seizure or non-seizure classes, followed by the more complex task of delineating the different stages of seizure propagation to different parts of the brain as a multi-class classification task. We applied three distinct types of interpretability methods to the high-accuracy ML models to gain an understanding of the relative contributions of different categories of brain interaction patterns, including multi-foci interactions, which play an important role in distinguishing between different states of the brain. The results of this study demonstrate for the first time that post-hoc interpretability methods enable us to understand why ML algorithms generate a given set of results and how variations in input values affect the accuracy of the ML algorithms. In particular, we show that interpretability methods can be used to identify brain regions and interaction patterns that have a significant impact on seizure events. These results highlight the importance of implementing ML algorithms together with interpretability methods in studies of aberrant brain networks and in the wider domain of biomedical research. Cold Spring Harbor Laboratory 2023-10-19 /pmc/articles/PMC10327223/ /pubmed/37425941 http://dx.doi.org/10.1101/2023.06.25.23291874 Text en https://creativecommons.org/licenses/by-nc-nd/4.0/ This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License (https://creativecommons.org/licenses/by-nc-nd/4.0/), which allows reusers to copy and distribute the material in any medium or format in unadapted form only, for noncommercial purposes only, and only so long as attribution is given to the creator.
spellingShingle Article
Upadhyaya, Dipak P.
Prantzalos, Katrina
Thyagaraj, Suraj
Shafiabadi, Nassim
Fernandez-Baca Vaca, Guadalupe
Sivagnanam, Subhashini
Majumdar, Amitava
Sahoo, Satya S.
Machine Learning Interpretability Methods to Characterize Brain Network Dynamics in Epilepsy
title Machine Learning Interpretability Methods to Characterize Brain Network Dynamics in Epilepsy
title_full Machine Learning Interpretability Methods to Characterize Brain Network Dynamics in Epilepsy
title_fullStr Machine Learning Interpretability Methods to Characterize Brain Network Dynamics in Epilepsy
title_full_unstemmed Machine Learning Interpretability Methods to Characterize Brain Network Dynamics in Epilepsy
title_short Machine Learning Interpretability Methods to Characterize Brain Network Dynamics in Epilepsy
title_sort machine learning interpretability methods to characterize brain network dynamics in epilepsy
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10327223/
https://www.ncbi.nlm.nih.gov/pubmed/37425941
http://dx.doi.org/10.1101/2023.06.25.23291874
work_keys_str_mv AT upadhyayadipakp machinelearninginterpretabilitymethodstocharacterizebrainnetworkdynamicsinepilepsy
AT prantzaloskatrina machinelearninginterpretabilitymethodstocharacterizebrainnetworkdynamicsinepilepsy
AT thyagarajsuraj machinelearninginterpretabilitymethodstocharacterizebrainnetworkdynamicsinepilepsy
AT shafiabadinassim machinelearninginterpretabilitymethodstocharacterizebrainnetworkdynamicsinepilepsy
AT fernandezbacavacaguadalupe machinelearninginterpretabilitymethodstocharacterizebrainnetworkdynamicsinepilepsy
AT sivagnanamsubhashini machinelearninginterpretabilitymethodstocharacterizebrainnetworkdynamicsinepilepsy
AT majumdaramitava machinelearninginterpretabilitymethodstocharacterizebrainnetworkdynamicsinepilepsy
AT sahoosatyas machinelearninginterpretabilitymethodstocharacterizebrainnetworkdynamicsinepilepsy