
Comparative study of the quantitative accuracy of oncological PET imaging based on deep learning methods

BACKGROUND: [¹⁸F]Fluorodeoxyglucose (FDG) positron emission tomography/computed tomography (PET/CT) is an important tool for tumor assessment. Shortening scanning time and reducing the amount of radioactive tracer remain the most difficult challenges. Deep learning methods have provided powerful solutions, which makes choosing an appropriate neural network architecture important.

METHODS: A total of 311 tumor patients who underwent ¹⁸F-FDG PET/CT were retrospectively collected. The PET acquisition time was 3 min/bed. The first 15 and 30 s of each bed acquisition were selected to simulate low-dose acquisition, and the first 90 s served as the clinical standard protocol (s-PET). With the low-dose PET as input, a convolutional neural network (CNN, with 3D Unet as representative) and a generative adversarial network (GAN, with P2P as representative) were used to predict the full-dose images. Visual image scores, noise levels, and quantitative parameters of tumor tissue were compared.

RESULTS: Image quality scores were highly consistent across all groups [Kappa = 0.719, 95% confidence interval (CI): 0.697–0.741, P<0.001]. An image quality score ≥3 was reached in 264 cases (3D Unet-15s), 311 cases (3D Unet-30s), 89 cases (P2P-15s), and 247 cases (P2P-30s), and the score composition differed significantly among the groups (χ² = 1,325.46, P<0.001). Both deep learning models reduced the standard deviation (SD) of the background and increased the signal-to-noise ratio (SNR). When 8% PET images (15 s) were used as input, P2P and 3D Unet enhanced the SNR of tumor lesions to a similar degree, but 3D Unet significantly improved the contrast-to-noise ratio (CNR) (P<0.05); the SUVmean of tumor lesions showed no significant difference from the s-PET group (P>0.05). When 17% PET images (30 s) were used as input, the SNR, CNR, and SUVmax of tumor lesions in the 3D Unet group showed no statistical difference from those of the s-PET group (P>0.05).

CONCLUSIONS: Both the GAN and the CNN suppress image noise to varying degrees and improve image quality. However, 3D Unet not only reduces the noise of tumor lesions but also improves their CNR; moreover, the quantitative parameters of tumor tissue remain similar to those under the standard acquisition protocol, which can meet the needs of clinical diagnosis.
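The METHODS describe mapping simulated low-dose PET volumes to full-dose images with a 3D Unet (CNN) and P2P (presumably a pix2pix-style conditional GAN). As a rough illustration of the input/output structure of that task, here is a minimal one-level 3D U-Net-style denoiser in PyTorch; the layer widths and depth are hypothetical, not the authors' architecture:

```python
import torch
import torch.nn as nn

class TinyUNet3D(nn.Module):
    """One-level 3D U-Net-style sketch: low-dose PET volume in,
    full-dose estimate out. Channel widths and depth are illustrative
    only; the abstract does not specify the paper's configuration."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(inplace=True))
        self.down = nn.Sequential(nn.MaxPool3d(2),
                                  nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.ConvTranspose3d(32, 16, 2, stride=2)
        self.dec = nn.Sequential(nn.Conv3d(32, 16, 3, padding=1), nn.ReLU(inplace=True),
                                 nn.Conv3d(16, 1, 1))

    def forward(self, x):  # x: (N, 1, D, H, W), with even D/H/W
        e = self.enc(x)
        u = self.up(self.down(e))                   # encode, downsample, upsample back
        return self.dec(torch.cat([e, u], dim=1))   # skip connection + reconstruction

net = TinyUNet3D()
pred = net(torch.randn(1, 1, 32, 64, 64))  # toy volume; prediction has the input's shape
```

Training such a network would regress the 15 s or 30 s reconstruction onto the 90 s target (e.g., with an L1 or L2 loss); a GAN variant would add an adversarial discriminator on top of a similar generator.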
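The SNR, CNR, SD, and SUV figures in the RESULTS are standard ROI-based quantities. A short sketch of the usual definitions (assumed here; the abstract does not give the paper's exact ROI conventions):

```python
import numpy as np

def roi_metrics(lesion_suv: np.ndarray, background_suv: np.ndarray) -> dict:
    """Common ROI-based PET image-quality metrics (assumed definitions):
    SNR = lesion SUVmean / background SD,
    CNR = (lesion SUVmean - background SUVmean) / background SD."""
    bg_mean = background_suv.mean()
    bg_sd = background_suv.std(ddof=1)   # background SD: the noise estimate
    suv_mean = lesion_suv.mean()
    return {
        "SUVmean": suv_mean,
        "SUVmax": lesion_suv.max(),
        "background_SD": bg_sd,
        "SNR": suv_mean / bg_sd,
        "CNR": (suv_mean - bg_mean) / bg_sd,
    }
```

Under these definitions, the reported pattern (both models lowering background SD and raising SNR, with 3D Unet additionally raising CNR) corresponds to denoising that also preserves lesion contrast.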


Bibliographic Details
Main Authors: Hu, Yiyi; Lv, Doudou; Jian, Shaojie; Lang, Limin; Cui, Caozhe; Liang, Meng; Song, Liwei; Li, Sijin; Wu, Zhifang
Format: Online Article (Text)
Language: English
Published: AME Publishing Company, 2023
Journal: Quantitative Imaging in Medicine and Surgery (Quant Imaging Med Surg)
Subjects: Original Article
License: Open Access (CC BY-NC-ND 4.0)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10240003/
https://www.ncbi.nlm.nih.gov/pubmed/37284102
http://dx.doi.org/10.21037/qims-22-1181