
Connectome-based machine learning models are vulnerable to subtle data manipulations

Neuroimaging-based predictive models continue to improve in performance, yet a widely overlooked aspect of these models is “trustworthiness,” or robustness to data manipulations. High trustworthiness is imperative for researchers to have confidence in their findings and interpretations. In this work, we used functional connectomes to explore how minor data manipulations influence machine learning predictions. These manipulations included a method to falsely enhance prediction performance and adversarial noise attacks designed to degrade performance. Although these data manipulations drastically changed model performance, the original and manipulated data were extremely similar (r = 0.99) and did not affect other downstream analysis. Essentially, connectome data could be inconspicuously modified to achieve any desired prediction performance. Overall, our enhancement attacks and evaluation of existing adversarial noise attacks in connectome-based models highlight the need for counter-measures that improve the trustworthiness to preserve the integrity of academic research and any potential translational applications.
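
The following is a minimal, hypothetical sketch of what such an enhancement-style manipulation and similarity check might look like. It uses synthetic data, ridge regression, and arbitrary dimensions and manipulation strength; none of these choices are taken from the paper, which should be consulted for the actual attack methods.

```python
# Hypothetical sketch (not the authors' code or data): inject a small
# target-correlated pattern into synthetic "connectome" features, then compare
# cross-validated prediction performance and per-subject feature similarity
# before and after the manipulation.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_subjects, n_edges = 200, 500              # assumed dimensions
X = rng.normal(size=(n_subjects, n_edges))  # stand-in for vectorized connectomes
y = rng.normal(size=n_subjects)             # behavioral score with no true signal

# Enhancement-style manipulation: add the (scaled) score along one fixed edge
# pattern, so each connectome changes only slightly but now encodes the score.
pattern = rng.normal(size=n_edges)
pattern /= np.linalg.norm(pattern)
epsilon = 2.0                               # assumed manipulation strength
X_attacked = X + epsilon * np.outer(y, pattern)

def cv_performance(features):
    """Cross-validated correlation between predicted and observed scores."""
    predictions = cross_val_predict(Ridge(alpha=10.0), features, y, cv=5)
    return np.corrcoef(predictions, y)[0, 1]

# Mean per-subject correlation between original and manipulated connectomes.
similarity = np.mean([np.corrcoef(a, b)[0, 1] for a, b in zip(X, X_attacked)])

print(f"performance on original data:    r = {cv_performance(X):.2f}")
print(f"performance on manipulated data: r = {cv_performance(X_attacked):.2f}")
print(f"original vs. manipulated features: mean r = {similarity:.3f}")
```

Running the sketch simply contrasts cross-validated performance and feature similarity before and after the injection; the paper itself evaluates enhancement and adversarial noise attacks on real connectome data.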


Bibliographic Details
Main Authors: Rosenblatt, Matthew; Rodriguez, Raimundo X.; Westwater, Margaret L.; Dai, Wei; Horien, Corey; Greene, Abigail S.; Constable, R. Todd; Noble, Stephanie; Scheinost, Dustin
Format: Online Article Text
Language: English
Published: Elsevier, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10382940/
https://www.ncbi.nlm.nih.gov/pubmed/37521052
http://dx.doi.org/10.1016/j.patter.2023.100756
Collection: PubMed
Record ID: pubmed-10382940
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Patterns (N Y)
Publication Date: 2023-05-15
License: © 2023 The Author(s). This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).