Memory-Efficient Training for Fully Unrolled Deep Learned PET Image Reconstruction with Iteration-Dependent Targets

We propose a new version of the forward-backward splitting expectation-maximisation network (FBSEM-Net) along with a new memory-efficient training method enabling the training of fully unrolled implementations of 3D FBSEM-Net. FBSEM-Net unfolds the maximum a posteriori expectation-maximisation algorithm and replaces the regularisation step with a residual convolutional neural network. Both the gradient of the prior and the regularisation strength are learned from training data. In this new implementation, three modifications of the original framework are included. First, iteration-dependent networks are used to provide customised regularisation at each iteration. Second, iteration-dependent targets and losses are introduced so that the regularised reconstruction matches the reconstruction of noise-free data at every iteration. Third, sequential training is performed, making the training of large unrolled networks far more memory efficient and feasible. Since sequential training permits unrolling a high number of iterations, there is no need for artificial use of the regularisation step as a leapfrogging acceleration. The results obtained on 2D and 3D simulated data show that FBSEM-Net using iteration-dependent targets and losses improves the consistency in the optimisation of the network parameters over different training runs. We also found that using iteration-dependent targets increases the generalisation capabilities of the network. Furthermore, unrolled networks using iteration-dependent regularisation allowed a slight reduction in reconstruction error compared to using a fixed regularisation network at each iteration. Finally, we demonstrate that sequential training successfully addresses potentially serious memory issues during the training of deep unrolled networks. In particular, it enables the training of 3D fully unrolled FBSEM-Net, not previously feasible, by reducing memory usage by up to 98% compared with conventional end-to-end training. We also note that the truncation of the backpropagation (due to sequential training) does not notably impact the network’s performance compared to conventional training with full backpropagation through the entire network.
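To make the training scheme concrete, the following is a minimal PyTorch sketch of how the three modifications described above could fit together. It is an illustration under stated assumptions, not the authors' implementation: the names RegNet, em_update, fuse, and targets[k] are hypothetical, the projector A, back-projector At, and sensitivity image sens are placeholders, and the fusion step assumes the standard De Pierro-style closed-form MAPEM update. The memory-saving idea is that detaching the input image at each unrolled iteration truncates backpropagation to one iteration at a time, so activations from earlier iterations never need to be stored.

# Hypothetical sketch: names and operators are illustrative, not from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RegNet(nn.Module):
    """Residual CNN standing in for the learned regularisation step;
    one independent instance per unrolled iteration."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
        # Learned regularisation strength, as described in the abstract.
        self.beta = nn.Parameter(torch.tensor(0.1))

    def forward(self, x):
        return x + self.body(x)  # residual connection

def em_update(x, A, At, sens, m):
    """One EM step, x * A^T(m / (A x)) / A^T 1, with placeholder
    forward projector A, back-projector At and sensitivity image sens."""
    return x / sens * At(m / (A(x) + 1e-9))

def fuse(x_em, x_reg, beta):
    """De Pierro-style fusion of the EM and regularised images: positive
    root of beta*x^2 + (1 - beta*x_reg)*x - x_em = 0 (assumed MAPEM form)."""
    b = 1.0 - beta * x_reg
    return (torch.sqrt(b * b + 4.0 * beta * x_em) - b) / (2.0 * beta)

def train_sequentially(reg_nets, x0, A, At, sens, m, targets, n_epochs=10):
    """Train each unrolled iteration in turn. Detaching the input image
    truncates backpropagation to a single iteration, which is where the
    memory saving over end-to-end training comes from."""
    x = x0
    for k, net in enumerate(reg_nets):        # iteration-dependent networks
        opt = torch.optim.Adam(net.parameters(), lr=1e-4)
        x_in = x.detach()                     # truncate the graph here
        for _ in range(n_epochs):
            opt.zero_grad()
            x_out = fuse(em_update(x_in, A, At, sens, m), net(x_in), net.beta)
            # Iteration-dependent target: reconstruction of noise-free
            # data at iteration k (precomputed outside this sketch).
            loss = F.mse_loss(x_out, targets[k])
            loss.backward()
            opt.step()
        x = x_out                             # input to the next iteration

In this sketch each iteration's network is optimised against its own target before the next iteration is trained, so peak memory is bounded by a single iteration's computation graph regardless of how many iterations are unrolled.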


Bibliographic Details
Main Authors: Corda-D’Incan, Guillaume, Schnabel, Julia A., Reader, Andrew J.
Format: Online Article Text
Language: English
Published: 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7612803/
https://www.ncbi.nlm.nih.gov/pubmed/35664091
http://dx.doi.org/10.1109/TRPMS.2021.3101947
collection PubMed
id pubmed-7612803
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal IEEE Trans Radiat Plasma Med Sci
published 2022-05 (online 2021-08-02)
license CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/)
topic Article