
Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging


Bibliographic Details
Main Authors: Sanaat, Amirhossein, Shiri, Isaac, Arabi, Hossein, Mainta, Ismini, Nkoulou, René, Zaidi, Habib
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8241799/
https://www.ncbi.nlm.nih.gov/pubmed/33495927
http://dx.doi.org/10.1007/s00259-020-05167-1
_version_ 1783715490839920640
author Sanaat, Amirhossein
Shiri, Isaac
Arabi, Hossein
Mainta, Ismini
Nkoulou, René
Zaidi, Habib
author_facet Sanaat, Amirhossein
Shiri, Isaac
Arabi, Hossein
Mainta, Ismini
Nkoulou, René
Zaidi, Habib
author_sort Sanaat, Amirhossein
collection PubMed
description PURPOSE: The current tendency is to reduce the injected activity and/or shorten the acquisition time in PET examinations to minimize potential radiation hazards and increase patient comfort. This work aims to assess the performance of synthesizing regular full-dose (FD) images from fast/low-dose (LD) whole-body (WB) PET images using deep learning techniques. METHODS: Instead of using synthetic LD scans, two separate clinical WB (18)F-fluorodeoxyglucose ((18)F-FDG) PET/CT studies of 100 patients were acquired: one regular FD (~ 27 min) and one fast or LD (~ 3 min), corresponding to 1/8th of the standard acquisition time. Modified cycle-consistent generative adversarial network (CycleGAN) and residual neural network (ResNet) models, denoted as CGAN and RNET, respectively, were implemented to predict FD PET images. The quality of the predicted PET images was assessed by two nuclear medicine physicians. Moreover, the diagnostic quality of the predicted PET images was evaluated using a pass/fail scheme for the lesion detectability task. Quantitative analysis using established metrics, including standardized uptake value (SUV) bias, was performed for the liver, left/right lung, brain, and 400 malignant lesions from the test and evaluation datasets. RESULTS: CGAN scored 4.92 and 3.88 (out of 5; adequate to good) for brain and neck + trunk, respectively. The average SUV bias calculated over normal tissues was 3.39 ± 0.71% and − 3.83 ± 1.25% for CGAN and RNET, respectively. For malignant lesions, Bland-Altman analysis reported the lowest SUV bias (0.01%) and a 95% confidence interval of (− 0.36, + 0.47) for CGAN compared with the reference FD images. CONCLUSION: CycleGAN is able to synthesize clinical FD WB PET images from LD images acquired with 1/8th of the standard injected activity or acquisition time. The predicted FD images offer performance comparable to that of the reference FD images in terms of lesion detectability, qualitative scores, and quantification bias and variance.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00259-020-05167-1.
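The quantitative evaluation named in the abstract combines a relative SUV bias with a Bland-Altman analysis (mean difference with 95% limits of agreement). A minimal sketch of how such metrics are commonly computed is shown below; the function name and the input values are illustrative only and are not taken from the study's data or code.

```python
import numpy as np

def suv_bias_and_bland_altman(suv_pred, suv_ref):
    """Relative SUV bias (%) and Bland-Altman mean difference with
    95% limits of agreement between predicted and reference SUVs.
    Illustrative helper; not the paper's implementation."""
    suv_pred = np.asarray(suv_pred, dtype=float)
    suv_ref = np.asarray(suv_ref, dtype=float)
    # Per-region relative bias in percent, averaged over regions
    rel_bias = 100.0 * (suv_pred - suv_ref) / suv_ref
    # Bland-Altman: bias is the mean paired difference,
    # limits of agreement are bias +/- 1.96 * SD of the differences
    diff = suv_pred - suv_ref
    mean_diff = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return rel_bias.mean(), (mean_diff - half_width, mean_diff + half_width)

# Illustrative values only (not from the paper)
bias, (lo, hi) = suv_bias_and_bland_altman([2.1, 3.0, 1.8], [2.0, 3.1, 1.9])
```

In a study setting the two arrays would hold paired SUV measurements (e.g., per lesion or per organ) from the predicted and reference FD images.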
format Online
Article
Text
id pubmed-8241799
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Springer Berlin Heidelberg
record_format MEDLINE/PubMed
spelling pubmed-8241799 2021-07-14 Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging Sanaat, Amirhossein Shiri, Isaac Arabi, Hossein Mainta, Ismini Nkoulou, René Zaidi, Habib Eur J Nucl Med Mol Imaging Original Article Springer Berlin Heidelberg 2021-01-25 2021 /pmc/articles/PMC8241799/ /pubmed/33495927 http://dx.doi.org/10.1007/s00259-020-05167-1 Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format with appropriate credit (https://creativecommons.org/licenses/by/4.0/).
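The CycleGAN approach named in this record rests on a cycle-consistency constraint: one generator maps LD images to FD and a second maps FD back to LD, and each image should survive the round trip. The sketch below shows only that generic loss term with NumPy stand-ins for the generators; it does not reproduce the paper's modified architecture, adversarial terms, or hyperparameters, and the weight value is an assumption.

```python
import numpy as np

def cycle_consistency_loss(g_ld2fd, g_fd2ld, ld_batch, fd_batch, weight=10.0):
    """L1 cycle-consistency term used by CycleGAN-style models:
    LD -> FD -> LD and FD -> LD -> FD should reconstruct the inputs.
    Generic sketch; weight=10.0 is a common default, not the paper's value."""
    ld_cycle = g_fd2ld(g_ld2fd(ld_batch))   # round trip for low-dose images
    fd_cycle = g_ld2fd(g_fd2ld(fd_batch))   # round trip for full-dose images
    l1 = np.mean(np.abs(ld_cycle - ld_batch)) + np.mean(np.abs(fd_cycle - fd_batch))
    return weight * l1

# With identity "generators" the round trip is exact, so the loss is zero.
identity = lambda x: x
batch = np.ones((2, 4, 4))
loss = cycle_consistency_loss(identity, identity, batch, batch)
```

In a real training loop this term is added to the adversarial losses of the two discriminators, and the generators are trained jointly to minimize the sum.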
spellingShingle Original Article
Sanaat, Amirhossein
Shiri, Isaac
Arabi, Hossein
Mainta, Ismini
Nkoulou, René
Zaidi, Habib
Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging
title Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging
title_full Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging
title_fullStr Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging
title_full_unstemmed Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging
title_short Deep learning-assisted ultra-fast/low-dose whole-body PET/CT imaging
title_sort deep learning-assisted ultra-fast/low-dose whole-body pet/ct imaging
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8241799/
https://www.ncbi.nlm.nih.gov/pubmed/33495927
http://dx.doi.org/10.1007/s00259-020-05167-1
work_keys_str_mv AT sanaatamirhossein deeplearningassistedultrafastlowdosewholebodypetctimaging
AT shiriisaac deeplearningassistedultrafastlowdosewholebodypetctimaging
AT arabihossein deeplearningassistedultrafastlowdosewholebodypetctimaging
AT maintaismini deeplearningassistedultrafastlowdosewholebodypetctimaging
AT nkoulourene deeplearningassistedultrafastlowdosewholebodypetctimaging
AT zaidihabib deeplearningassistedultrafastlowdosewholebodypetctimaging