
Connectome-based machine learning models are vulnerable to subtle data manipulations

Bibliographic Details
Main Authors: Rosenblatt, Matthew, Rodriguez, Raimundo X., Westwater, Margaret L., Dai, Wei, Horien, Corey, Greene, Abigail S., Constable, R. Todd, Noble, Stephanie, Scheinost, Dustin
Format: Online Article Text
Language: English
Published: Elsevier 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10382940/
https://www.ncbi.nlm.nih.gov/pubmed/37521052
http://dx.doi.org/10.1016/j.patter.2023.100756
Description
Summary: Neuroimaging-based predictive models continue to improve in performance, yet a widely overlooked aspect of these models is “trustworthiness,” or robustness to data manipulations. High trustworthiness is imperative for researchers to have confidence in their findings and interpretations. In this work, we used functional connectomes to explore how minor data manipulations influence machine learning predictions. These manipulations included a method to falsely enhance prediction performance and adversarial noise attacks designed to degrade performance. Although these data manipulations drastically changed model performance, the original and manipulated data were extremely similar (r = 0.99) and did not affect other downstream analyses. Essentially, connectome data could be inconspicuously modified to achieve any desired prediction performance. Overall, our enhancement attacks and evaluation of existing adversarial noise attacks in connectome-based models highlight the need for countermeasures that improve trustworthiness and preserve the integrity of academic research and any potential translational applications.
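The enhancement-attack idea summarized above can be illustrated with a minimal, hypothetical sketch (this is an assumption-laden toy, not the authors' actual method or data): injecting a faint copy of the prediction target into every feature before the train/test split leaves the feature matrix nearly identical to the original (correlation near 0.99), yet makes an otherwise unpredictable target appear highly predictable.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 300                        # subjects x edge-like features (toy sizes)
X = rng.standard_normal((n, p))        # synthetic "connectome" features, pure noise
y = rng.standard_normal(n)             # phenotype with no true signal in X

# Hypothetical enhancement-style manipulation: hide a faint copy of the
# target inside every feature, applied to the whole dataset before any split.
eps = 0.1
X_hacked = X + eps * y[:, None]

# The manipulated matrix is nearly indistinguishable from the original.
similarity = np.corrcoef(X.ravel(), X_hacked.ravel())[0, 1]

def loo_ridge_pred(X, y, lam=10.0):
    """Leave-one-out ridge-regression predictions (plain least squares + L2)."""
    preds = []
    idx = np.arange(len(y))
    for i in idx:
        m = idx != i
        A = X[m]
        w = np.linalg.solve(A.T @ A + lam * np.eye(X.shape[1]), A.T @ y[m])
        preds.append(X[i] @ w)
    return np.array(preds)

r_orig = np.corrcoef(loo_ridge_pred(X, y), y)[0, 1]           # near zero: no signal
r_hacked = np.corrcoef(loo_ridge_pred(X_hacked, y), y)[0, 1]  # spuriously high
```

Because the leaked target component sits in every held-out sample's own features, cross-validation offers no protection, which is why such manipulations can be used to reach essentially any desired prediction performance while standard similarity checks see nothing amiss.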