
Attention module improves both performance and interpretability of four‐dimensional functional magnetic resonance imaging decoding neural network

Bibliographic Details
Main Authors: Jiang, Zhoufan, Wang, Yanming, Shi, ChenWei, Wu, Yueyang, Hu, Rongjie, Chen, Shishuo, Hu, Sheng, Wang, Xiaoxiao, Qiu, Bensheng
Format: Online Article Text
Language: English
Published: John Wiley & Sons, Inc. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9057093/
https://www.ncbi.nlm.nih.gov/pubmed/35212436
http://dx.doi.org/10.1002/hbm.25813
Description
Summary: Decoding brain cognitive states from neuroimaging signals is an important topic in neuroscience. In recent years, deep neural networks (DNNs) have been recruited for multiple brain state decoding and achieved good performance. However, the open question of how to interpret the DNN black box remains unanswered. Capitalizing on advances in machine learning, we integrated attention modules into brain decoders to facilitate an in‐depth interpretation of DNN channels. A four‐dimensional (4D) convolution operation was also included to extract temporo‐spatial interaction within the fMRI signal. The experiments showed that the proposed model obtains very high accuracy (97.4%) and outperforms previous studies on the seven task benchmarks from the Human Connectome Project (HCP) dataset. The visualization analysis further illustrated the hierarchical emergence of task‐specific masks with depth. Finally, the model was retrained to regress individual traits within the HCP and to classify viewed images from the BOLD5000 dataset, respectively. Transfer learning also achieved good performance. Further visualization analysis showed that, after transfer learning, low‐level attention masks remained similar to the source domain, whereas high‐level attention masks changed adaptively. In conclusion, the proposed 4D model with attention module performed well and facilitated interpretation of DNNs, which is helpful for subsequent research.
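The channel-attention idea described in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a generic squeeze-and-excitation style attention module, written in NumPy under assumed shapes, where a 4D fMRI feature map has one channel axis plus three spatial axes and one temporal axis. Each channel is "squeezed" to a scalar by global averaging, passed through a small bottleneck, and sigmoid-gated; the resulting per-channel gates are the quantities one can inspect for interpretability. The weights `w1` and `w2` here are random placeholders for learned parameters.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel attention on a 4D fMRI
    feature map of shape (channels, X, Y, Z, T).

    Squeeze: global average per channel. Excitation: a two-layer
    bottleneck MLP with ReLU, then a sigmoid that yields one gate per
    channel. The gated output rescales each channel, and the gates
    themselves can be visualized to see which channels the model uses.
    """
    c = x.shape[0]
    squeeze = x.reshape(c, -1).mean(axis=1)          # (C,) per-channel average
    hidden = np.maximum(squeeze @ w1, 0.0)           # ReLU bottleneck, (C//r,)
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))     # sigmoid gates, (C,)
    return x * gates.reshape(c, 1, 1, 1, 1), gates

# Illustrative usage with hypothetical sizes: 8 channels, 4x4x4 volume,
# 6 time points, and a bottleneck reduction ratio of 2.
rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((C, 4, 4, 4, 6))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
y, gates = channel_attention(x, w1, w2)
assert y.shape == x.shape and gates.shape == (C,)
```

A full 4D (temporo-spatial) convolution, as used in the paper, would additionally slide a kernel over the three spatial axes and the time axis jointly; standard frameworks only ship 3D convolutions, so 4D is typically built by summing 3D convolutions across temporal kernel offsets.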