Predicting visual working memory with multimodal magnetic resonance imaging
The indispensability of visual working memory (VWM) in human daily life suggests its importance in higher cognitive functions and neurological diseases. However, despite the extensive research efforts, most findings on the neural basis of VWM are limited to a unimodal context (either structure or function) and have low generalization. To address the above issues, this study proposed the usage of multimodal neuroimaging in combination with machine learning to reveal the neural mechanism of VWM across a large cohort (N = 547). Specifically, multimodal magnetic resonance imaging features extracted from voxel‐wise amplitude of low‐frequency fluctuations, gray matter volume, and fractional anisotropy were used to build an individual VWM capacity prediction model through a machine learning pipeline, including the steps of feature selection, relevance vector regression, cross‐validation, and model fusion. The resulting model exhibited promising predictive performance on VWM (r = .402, p < .001), and identified features within the subcortical‐cerebellum network, default mode network, motor network, corpus callosum, anterior corona radiata, and external capsule as significant predictors. The main results were then compared with those obtained on emotional regulation and fluid intelligence using the same pipeline, confirming the specificity of our findings. Moreover, the main results maintained well under different cross‐validation regimes and preprocess strategies. These findings, while providing richer evidence for the importance of multimodality in understanding cognitive functions, offer a solid and general foundation for comprehensively understanding the VWM process from the top down.
Main Authors: Xiao, Yu; Lin, Ying; Ma, Junji; Qian, Jiehui; Ke, Zijun; Li, Liangfang; Yi, Yangyang; Zhang, Jinbo; Dai, Zhengjia
Format: Online Article Text
Language: English
Published: John Wiley & Sons, Inc., 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7927291/ https://www.ncbi.nlm.nih.gov/pubmed/33277955 http://dx.doi.org/10.1002/hbm.25305
_version_ | 1783659648121831424 |
author | Xiao, Yu Lin, Ying Ma, Junji Qian, Jiehui Ke, Zijun Li, Liangfang Yi, Yangyang Zhang, Jinbo Dai, Zhengjia |
author_facet | Xiao, Yu Lin, Ying Ma, Junji Qian, Jiehui Ke, Zijun Li, Liangfang Yi, Yangyang Zhang, Jinbo Dai, Zhengjia |
author_sort | Xiao, Yu |
collection | PubMed |
description | The indispensability of visual working memory (VWM) in human daily life suggests its importance in higher cognitive functions and neurological diseases. However, despite the extensive research efforts, most findings on the neural basis of VWM are limited to a unimodal context (either structure or function) and have low generalization. To address the above issues, this study proposed the usage of multimodal neuroimaging in combination with machine learning to reveal the neural mechanism of VWM across a large cohort (N = 547). Specifically, multimodal magnetic resonance imaging features extracted from voxel‐wise amplitude of low‐frequency fluctuations, gray matter volume, and fractional anisotropy were used to build an individual VWM capacity prediction model through a machine learning pipeline, including the steps of feature selection, relevance vector regression, cross‐validation, and model fusion. The resulting model exhibited promising predictive performance on VWM (r = .402, p < .001), and identified features within the subcortical‐cerebellum network, default mode network, motor network, corpus callosum, anterior corona radiata, and external capsule as significant predictors. The main results were then compared with those obtained on emotional regulation and fluid intelligence using the same pipeline, confirming the specificity of our findings. Moreover, the main results maintained well under different cross‐validation regimes and preprocess strategies. These findings, while providing richer evidence for the importance of multimodality in understanding cognitive functions, offer a solid and general foundation for comprehensively understanding the VWM process from the top down. |
format | Online Article Text |
id | pubmed-7927291 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | John Wiley & Sons, Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-79272912021-03-12 Predicting visual working memory with multimodal magnetic resonance imaging Xiao, Yu Lin, Ying Ma, Junji Qian, Jiehui Ke, Zijun Li, Liangfang Yi, Yangyang Zhang, Jinbo Dai, Zhengjia Hum Brain Mapp Research Articles The indispensability of visual working memory (VWM) in human daily life suggests its importance in higher cognitive functions and neurological diseases. However, despite the extensive research efforts, most findings on the neural basis of VWM are limited to a unimodal context (either structure or function) and have low generalization. To address the above issues, this study proposed the usage of multimodal neuroimaging in combination with machine learning to reveal the neural mechanism of VWM across a large cohort (N = 547). Specifically, multimodal magnetic resonance imaging features extracted from voxel‐wise amplitude of low‐frequency fluctuations, gray matter volume, and fractional anisotropy were used to build an individual VWM capacity prediction model through a machine learning pipeline, including the steps of feature selection, relevance vector regression, cross‐validation, and model fusion. The resulting model exhibited promising predictive performance on VWM (r = .402, p < .001), and identified features within the subcortical‐cerebellum network, default mode network, motor network, corpus callosum, anterior corona radiata, and external capsule as significant predictors. The main results were then compared with those obtained on emotional regulation and fluid intelligence using the same pipeline, confirming the specificity of our findings. Moreover, the main results maintained well under different cross‐validation regimes and preprocess strategies. These findings, while providing richer evidence for the importance of multimodality in understanding cognitive functions, offer a solid and general foundation for comprehensively understanding the VWM process from the top down. John Wiley & Sons, Inc. 
2020-12-05 /pmc/articles/PMC7927291/ /pubmed/33277955 http://dx.doi.org/10.1002/hbm.25305 Text en © 2020 The Authors. Human Brain Mapping published by Wiley Periodicals LLC. This is an open access article under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Research Articles Xiao, Yu Lin, Ying Ma, Junji Qian, Jiehui Ke, Zijun Li, Liangfang Yi, Yangyang Zhang, Jinbo Dai, Zhengjia Predicting visual working memory with multimodal magnetic resonance imaging |
title | Predicting visual working memory with multimodal magnetic resonance imaging |
title_full | Predicting visual working memory with multimodal magnetic resonance imaging |
title_fullStr | Predicting visual working memory with multimodal magnetic resonance imaging |
title_full_unstemmed | Predicting visual working memory with multimodal magnetic resonance imaging |
title_short | Predicting visual working memory with multimodal magnetic resonance imaging |
title_sort | predicting visual working memory with multimodal magnetic resonance imaging |
topic | Research Articles |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7927291/ https://www.ncbi.nlm.nih.gov/pubmed/33277955 http://dx.doi.org/10.1002/hbm.25305 |
work_keys_str_mv | AT xiaoyu predictingvisualworkingmemorywithmultimodalmagneticresonanceimaging AT linying predictingvisualworkingmemorywithmultimodalmagneticresonanceimaging AT majunji predictingvisualworkingmemorywithmultimodalmagneticresonanceimaging AT qianjiehui predictingvisualworkingmemorywithmultimodalmagneticresonanceimaging AT kezijun predictingvisualworkingmemorywithmultimodalmagneticresonanceimaging AT liliangfang predictingvisualworkingmemorywithmultimodalmagneticresonanceimaging AT yiyangyang predictingvisualworkingmemorywithmultimodalmagneticresonanceimaging AT zhangjinbo predictingvisualworkingmemorywithmultimodalmagneticresonanceimaging AT daizhengjia predictingvisualworkingmemorywithmultimodalmagneticresonanceimaging |
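The record's description outlines the paper's machine-learning pipeline: feature selection, relevance vector regression, cross-validation, and model fusion across modalities, scored by the correlation between predicted and observed behavior. As an illustration only (the authors' actual code and data are not part of this record), the general shape of such a pipeline can be sketched with scikit-learn, substituting `ARDRegression` (a close relative of relevance vector regression) and synthetic stand-in data for the three imaging modalities:

```python
# Illustrative sketch of a multimodal prediction pipeline of the kind the
# abstract describes. All names and data here are placeholders, not the
# authors' method: ARDRegression stands in for relevance vector regression,
# and random matrices stand in for the ALFF / gray matter / FA features.
import numpy as np
from scipy.stats import pearsonr
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.linear_model import ARDRegression
from sklearn.model_selection import cross_val_predict
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_subjects = 120

# Behavioral score (e.g., VWM capacity) and three synthetic "modalities",
# each a subjects x features matrix with one weakly informative column.
y = rng.normal(size=n_subjects)
modalities = []
for _ in range(3):
    X = rng.normal(size=(n_subjects, 50))
    X[:, 0] += y  # plant a signal so out-of-sample prediction is possible
    modalities.append(X)

# Per-modality model: univariate feature selection followed by a sparse
# Bayesian regressor, evaluated via cross-validated out-of-sample predictions.
preds = []
for X in modalities:
    model = make_pipeline(SelectKBest(f_regression, k=10), ARDRegression())
    preds.append(cross_val_predict(model, X, y, cv=5))

# Model fusion: average the per-modality predictions, then score with the
# Pearson correlation between predicted and observed values.
fused = np.mean(preds, axis=0)
r, p = pearsonr(fused, y)
print(f"fused prediction r = {r:.3f}")
```

Placing the feature selector inside the pipeline matters: `cross_val_predict` then refits the selection on each training fold, so no test-fold information leaks into the selected features.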