
Autoencoder and restricted Boltzmann machine for transfer learning in functional magnetic resonance imaging task classification

Deep neural networks (DNNs) have been widely adopted as classifiers for functional magnetic resonance imaging (fMRI) data, advancing beyond traditional machine learning models. Consequently, transfer learning from a pre-trained DNN becomes crucial to enhance classification performance, specifically by alleviating the overfitting that occurs when a large number of DNN parameters are fitted to a relatively small number of fMRI samples. In this study, we first systematically compared the two most widely used unsupervised pretraining models for resting-state fMRI (rfMRI) volume data, the autoencoder (AE) and the restricted Boltzmann machine (RBM). The group in-brain mask used to train the AE and RBM displayed a sizable overlap ratio with Yeo's seven functional brain networks (FNs), and the parcellated FNs obtained from the RBM were finer-grained than those from the AE. The pre-trained AE and RBM weights served as the weight parameters of the first of the DNN's two hidden layers, and the DNN acted as a task classifier for task fMRI (tfMRI) data from the Human Connectome Project (HCP). We tested two transfer learning schemes: (1) fixing and (2) fine-tuning the pre-trained AE or RBM weights. The DNN with transfer learning was compared to a baseline DNN trained from random initial weights. Overall, classification performance was superior when the pre-trained RBM weights were fixed and when the pre-trained AE weights were fine-tuned (average error rates: 14.8% for the fixed RBM, 15.1% for the fine-tuned AE, and 15.5% for the baseline model) compared to the alternative transfer learning schemes. Moreover, the optimal scheme between the fixed RBM and the fine-tuned AE varied across the seven task conditions in the HCP. Nonetheless, the computational load was substantially lower for fixed-weight transfer learning than for fine-tuning-based transfer learning (e.g., the number of trainable weight parameters of the fixed-weight DNN was reduced to 1.9% of that of a baseline/fine-tuned DNN). Our findings suggest that initializing the DNN's first layer with RBM-based pre-trained weights is the most promising approach when whole-brain fMRI volumes support the associated task classification. We believe our scheme could be applied to a variety of task conditions to improve classification performance and to use computational resources efficiently, by employing AE/RBM-based pre-trained weights rather than random initial weights for DNN training.
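
The transfer-learning scheme the abstract describes can be illustrated with a minimal PyTorch sketch. This is an illustrative reconstruction, not the authors' code: the voxel count, hidden-layer size, activations, and optimizer settings below are assumptions; only the overall structure (unsupervised pretraining, transfer into the first of two hidden layers, fixed vs. fine-tuned weights, seven task classes) follows the abstract.

    import torch
    import torch.nn as nn

    N_VOXELS = 59421   # in-brain mask size: hypothetical placeholder
    N_HIDDEN = 500     # hidden units: assumed, not stated in the abstract
    N_CLASSES = 7      # seven HCP task conditions (from the abstract)

    # Unsupervised pretraining model: an autoencoder on rfMRI volumes.
    # (An RBM trained with contrastive divergence would supply weights
    # of the same shape and slot in identically.)
    class AE(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Linear(N_VOXELS, N_HIDDEN)
            self.dec = nn.Linear(N_HIDDEN, N_VOXELS)

        def forward(self, x):
            return self.dec(torch.sigmoid(self.enc(x)))

    # Task classifier: two hidden layers; the first is seeded with the
    # pre-trained encoder weights and either frozen ("fixed" scheme) or
    # left trainable ("fine-tuned" scheme).
    class TaskDNN(nn.Module):
        def __init__(self, pretrained: nn.Linear, freeze: bool):
            super().__init__()
            self.h1 = nn.Linear(N_VOXELS, N_HIDDEN)
            self.h1.load_state_dict(pretrained.state_dict())  # transfer weights
            self.h1.requires_grad_(not freeze)
            self.h2 = nn.Linear(N_HIDDEN, N_HIDDEN)
            self.out = nn.Linear(N_HIDDEN, N_CLASSES)

        def forward(self, x):
            x = torch.relu(self.h1(x))
            x = torch.relu(self.h2(x))
            return self.out(x)

    ae = AE()
    # ... pretrain `ae` on rfMRI volumes with an MSE reconstruction loss ...
    model = TaskDNN(pretrained=ae.enc, freeze=True)  # freeze=False fine-tunes instead
    # Only trainable parameters go to the optimizer; with freeze=True the large
    # voxel-to-hidden matrix is excluded, which is the source of the parameter
    # reduction reported in the abstract.
    opt = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)

Freezing the first layer removes the voxel-to-hidden matrix (by far the largest weight block, since N_VOXELS >> N_HIDDEN) from training, which is consistent with the abstract's report that the fixed-weight model trains only about 1.9% of the parameters of the baseline/fine-tuned model.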

Bibliographic Details
Main Authors: Hwang, Jundong, Lustig, Niv, Jung, Minyoung, Lee, Jong-Hwan
Format: Online Article Text
Language: English
Published: Elsevier 2023
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10372668/
https://www.ncbi.nlm.nih.gov/pubmed/37519689
http://dx.doi.org/10.1016/j.heliyon.2023.e18086
Collection: PubMed
Record ID: pubmed-10372668
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Published Online: 2023-07-16
License: © 2023 The Authors. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).