
Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction

INTRODUCTION: Fine-tuning (FT) is a widely adopted transfer learning method for deep learning-based magnetic resonance imaging (MRI) reconstruction. In this approach, the reconstruction model is initialized with pre-trained weights derived from a source domain with ample data and subsequently updated with limited data from the target domain. However, the direct full-weight update strategy poses a risk of "catastrophic forgetting" and overfitting, hindering its effectiveness. The goal of this study is to develop a zero-weight-update transfer strategy that preserves pre-trained generic knowledge and reduces overfitting.

METHODS: Based on the commonality between the source and target domains, we assume a linear transformation relationship of the optimal model weights from the source domain to the target domain. Accordingly, we propose a novel transfer strategy, linear fine-tuning (LFT), which introduces scaling and shifting (SS) factors into the pre-trained model. In contrast to FT, LFT updates only the SS factors in the transfer phase, while the pre-trained weights remain fixed.

RESULTS: To evaluate the proposed LFT, we designed three different transfer scenarios and conducted a comparative analysis of FT, LFT, and other methods at various sampling rates and data volumes. In the transfer scenario between different contrasts, LFT outperforms typical transfer strategies at various sampling rates and considerably reduces artifacts in reconstructed images. In transfer scenarios between different slice directions or anatomical structures, LFT surpasses FT, particularly as the number of training images in the target domain decreases, with an improvement of up to 2.06 dB (5.89%) in peak signal-to-noise ratio.

DISCUSSION: The LFT strategy shows great potential to address the issues of "catastrophic forgetting" and overfitting in transfer scenarios for MRI reconstruction, while reducing reliance on the amount of data in the target domain. Linear fine-tuning is expected to shorten the development cycle of reconstruction models for adapting to complicated clinical scenarios, thereby enhancing the clinical applicability of deep MRI reconstruction.
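The METHODS paragraph above is the technical core: a linear transformation of frozen pre-trained weights via trainable scaling and shifting (SS) factors. The Python sketch below illustrates that idea for a single convolutional layer. It is a minimal illustration assuming a PyTorch implementation; the class name LinearFineTune, the per-output-channel granularity of the SS factors, and the optimizer settings are our assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LinearFineTune(nn.Module):
    # Hypothetical wrapper around a pre-trained Conv2d. Effective weight
    # during transfer: W' = scale * W + shift (elementwise, broadcast per
    # output channel). Only scale/shift train; W and the bias stay frozen.
    def __init__(self, conv: nn.Conv2d):
        super().__init__()
        self.conv = conv
        for p in self.conv.parameters():
            p.requires_grad_(False)  # freeze the pre-trained weights
        c = conv.out_channels
        # Identity initialization (scale=1, shift=0) reproduces the
        # pre-trained model exactly at the start of the transfer phase.
        self.scale = nn.Parameter(torch.ones(c, 1, 1, 1))
        self.shift = nn.Parameter(torch.zeros(c, 1, 1, 1))

    def forward(self, x):
        # Linear transformation of the frozen weights, then a normal conv.
        w = self.scale * self.conv.weight + self.shift
        return F.conv2d(x, w, self.conv.bias,
                        stride=self.conv.stride, padding=self.conv.padding,
                        dilation=self.conv.dilation, groups=self.conv.groups)

# Transfer phase: pass only the trainable SS factors to the optimizer, so
# the pre-trained weights are untouched by construction (the abstract's
# "zero-weight update" property).
# trainable = [p for p in model.parameters() if p.requires_grad]
# optimizer = torch.optim.Adam(trainable, lr=1e-4)

Because the frozen weights are never written to, forgetting of source-domain knowledge is ruled out by construction, and the small number of trainable parameters is what limits overfitting when target-domain data are scarce.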


Bibliographic Details
Main Authors: Bi, Wanqing, Xv, Jianan, Song, Mengdie, Hao, Xiaohan, Gao, Dayong, Qi, Fulang
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Neuroscience
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10318193/
https://www.ncbi.nlm.nih.gov/pubmed/37409107
http://dx.doi.org/10.3389/fnins.2023.1202143
_version_ 1785067983929868288
author Bi, Wanqing
Xv, Jianan
Song, Mengdie
Hao, Xiaohan
Gao, Dayong
Qi, Fulang
author_facet Bi, Wanqing
Xv, Jianan
Song, Mengdie
Hao, Xiaohan
Gao, Dayong
Qi, Fulang
author_sort Bi, Wanqing
collection PubMed
description INTRODUCTION: Fine-tuning (FT) is a widely adopted transfer learning method for deep learning-based magnetic resonance imaging (MRI) reconstruction. In this approach, the reconstruction model is initialized with pre-trained weights derived from a source domain with ample data and subsequently updated with limited data from the target domain. However, the direct full-weight update strategy poses a risk of "catastrophic forgetting" and overfitting, hindering its effectiveness. The goal of this study is to develop a zero-weight-update transfer strategy that preserves pre-trained generic knowledge and reduces overfitting. METHODS: Based on the commonality between the source and target domains, we assume a linear transformation relationship of the optimal model weights from the source domain to the target domain. Accordingly, we propose a novel transfer strategy, linear fine-tuning (LFT), which introduces scaling and shifting (SS) factors into the pre-trained model. In contrast to FT, LFT updates only the SS factors in the transfer phase, while the pre-trained weights remain fixed. RESULTS: To evaluate the proposed LFT, we designed three different transfer scenarios and conducted a comparative analysis of FT, LFT, and other methods at various sampling rates and data volumes. In the transfer scenario between different contrasts, LFT outperforms typical transfer strategies at various sampling rates and considerably reduces artifacts in reconstructed images. In transfer scenarios between different slice directions or anatomical structures, LFT surpasses FT, particularly as the number of training images in the target domain decreases, with an improvement of up to 2.06 dB (5.89%) in peak signal-to-noise ratio. DISCUSSION: The LFT strategy shows great potential to address the issues of "catastrophic forgetting" and overfitting in transfer scenarios for MRI reconstruction, while reducing reliance on the amount of data in the target domain. Linear fine-tuning is expected to shorten the development cycle of reconstruction models for adapting to complicated clinical scenarios, thereby enhancing the clinical applicability of deep MRI reconstruction.
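As a point of reference for the figure quoted in the results above: peak signal-to-noise ratio is defined as PSNR = 10 * log10(MAX^2 / MSE), where MAX is the peak image intensity and MSE is the mean squared error between the reconstruction and the ground truth. Reading the reported 2.06 dB gain as 5.89% implies a baseline PSNR of roughly 2.06 / 0.0589 ≈ 35 dB, a plausible value for undersampled MRI reconstruction; this back-of-the-envelope baseline is our inference, not a number stated in the record.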
format Online
Article
Text
id pubmed-10318193
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10318193 2023-07-05 Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction Bi, Wanqing Xv, Jianan Song, Mengdie Hao, Xiaohan Gao, Dayong Qi, Fulang Front Neurosci Neuroscience INTRODUCTION: Fine-tuning (FT) is a widely adopted transfer learning method for deep learning-based magnetic resonance imaging (MRI) reconstruction. In this approach, the reconstruction model is initialized with pre-trained weights derived from a source domain with ample data and subsequently updated with limited data from the target domain. However, the direct full-weight update strategy poses a risk of "catastrophic forgetting" and overfitting, hindering its effectiveness. The goal of this study is to develop a zero-weight-update transfer strategy that preserves pre-trained generic knowledge and reduces overfitting. METHODS: Based on the commonality between the source and target domains, we assume a linear transformation relationship of the optimal model weights from the source domain to the target domain. Accordingly, we propose a novel transfer strategy, linear fine-tuning (LFT), which introduces scaling and shifting (SS) factors into the pre-trained model. In contrast to FT, LFT updates only the SS factors in the transfer phase, while the pre-trained weights remain fixed. RESULTS: To evaluate the proposed LFT, we designed three different transfer scenarios and conducted a comparative analysis of FT, LFT, and other methods at various sampling rates and data volumes. In the transfer scenario between different contrasts, LFT outperforms typical transfer strategies at various sampling rates and considerably reduces artifacts in reconstructed images. In transfer scenarios between different slice directions or anatomical structures, LFT surpasses FT, particularly as the number of training images in the target domain decreases, with an improvement of up to 2.06 dB (5.89%) in peak signal-to-noise ratio. DISCUSSION: The LFT strategy shows great potential to address the issues of "catastrophic forgetting" and overfitting in transfer scenarios for MRI reconstruction, while reducing reliance on the amount of data in the target domain. Linear fine-tuning is expected to shorten the development cycle of reconstruction models for adapting to complicated clinical scenarios, thereby enhancing the clinical applicability of deep MRI reconstruction. Frontiers Media S.A. 2023-06-20 /pmc/articles/PMC10318193/ /pubmed/37409107 http://dx.doi.org/10.3389/fnins.2023.1202143 Text en Copyright © 2023 Bi, Xv, Song, Hao, Gao and Qi. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
Bi, Wanqing
Xv, Jianan
Song, Mengdie
Hao, Xiaohan
Gao, Dayong
Qi, Fulang
Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction
title Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction
title_full Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction
title_fullStr Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction
title_full_unstemmed Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction
title_short Linear fine-tuning: a linear transformation based transfer strategy for deep MRI reconstruction
title_sort linear fine-tuning: a linear transformation based transfer strategy for deep mri reconstruction
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10318193/
https://www.ncbi.nlm.nih.gov/pubmed/37409107
http://dx.doi.org/10.3389/fnins.2023.1202143
work_keys_str_mv AT biwanqing linearfinetuningalineartransformationbasedtransferstrategyfordeepmrireconstruction
AT xvjianan linearfinetuningalineartransformationbasedtransferstrategyfordeepmrireconstruction
AT songmengdie linearfinetuningalineartransformationbasedtransferstrategyfordeepmrireconstruction
AT haoxiaohan linearfinetuningalineartransformationbasedtransferstrategyfordeepmrireconstruction
AT gaodayong linearfinetuningalineartransformationbasedtransferstrategyfordeepmrireconstruction
AT qifulang linearfinetuningalineartransformationbasedtransferstrategyfordeepmrireconstruction