Machine Learning Interpretability Methods to Characterize Brain Network Dynamics in Epilepsy

Bibliographic Details
Main Authors: Upadhyaya, Dipak P., Prantzalos, Katrina, Thyagaraj, Suraj, Shafiabadi, Nassim, Fernandez-Baca Vaca, Guadalupe, Sivagnanam, Subhashini, Majumdar, Amitava, Sahoo, Satya S.
Format: Online Article Text
Language: English
Published: Cold Spring Harbor Laboratory, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10327223/
https://www.ncbi.nlm.nih.gov/pubmed/37425941
http://dx.doi.org/10.1101/2023.06.25.23291874
Description
Summary: The rapid adoption of machine learning (ML) algorithms in a wide range of biomedical applications has highlighted issues of trust and a lack of understanding regarding the results generated by ML algorithms. Recent studies have focused on developing interpretable ML models and establishing guidelines for transparency and ethical use, ensuring the responsible integration of machine learning in healthcare. In this study, we demonstrate the effectiveness of ML interpretability methods in providing important insights into the dynamics of brain network interactions in epilepsy, a serious neurological disorder affecting more than 60 million persons worldwide. Using high-resolution intracranial electroencephalogram (EEG) recordings from a cohort of 16 patients, we developed high-accuracy ML models to categorize these brain activity recordings into seizure or non-seizure classes, followed by the more complex, multi-class task of delineating the stages of seizure progression across different parts of the brain. We applied three distinct types of interpretability methods to the high-accuracy ML models to understand the relative contributions of different categories of brain interaction patterns, including multi-foci interactions, which play an important role in distinguishing between different states of the brain. The results of this study demonstrate for the first time that post-hoc interpretability methods enable us to understand why ML algorithms generate a given set of results and how variations in input values affect the accuracy of the ML algorithms. In particular, we show that interpretability methods can be used to identify brain regions and interaction patterns that have a significant impact on seizure events. These results highlight the importance of implementing ML algorithms together with interpretability methods in studies of aberrant brain networks and in the wider domain of biomedical research.
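
To illustrate the kind of workflow the summary describes (training a seizure/non-seizure classifier on features derived from intracranial EEG recordings and then applying a post-hoc interpretability method), the following is a minimal, hypothetical sketch in Python. The synthetic data, the feature names, the random-forest model, and the choice of permutation importance from scikit-learn are assumptions made for illustration; they are not the authors' data, models, or the three interpretability methods evaluated in the paper.

```python
# Hypothetical sketch: binary seizure / non-seizure classification followed by
# a post-hoc interpretability step (permutation importance). All data and
# feature names below are synthetic placeholders, not the study's dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for per-segment features, e.g. counts of interaction patterns
# between recording sites (names are illustrative assumptions).
feature_names = [
    "intra_focal", "focal_to_nonfocal", "nonfocal_to_focal",
    "multi_foci", "nonfocal_only", "global_sync",
]
X = rng.normal(size=(500, len(feature_names)))
# Synthetic labels: seizure segments (1) depend mostly on two features.
y = (X[:, 3] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train the seizure vs. non-seizure classifier and report held-out accuracy.
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))

# Post-hoc interpretability: rank how much each interaction-pattern feature
# contributes to the model's predictions on unseen data.
result = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in sorted(
    zip(feature_names, result.importances_mean, result.importances_std),
    key=lambda t: -t[1],
):
    print(f"{name:>18s}: {mean:.3f} +/- {std:.3f}")
```

In this sketch the ranking produced by the interpretability step plays the role the summary attributes to the paper's methods: identifying which interaction-pattern features most influence the model's separation of seizure from non-seizure activity.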