A deep learning approach for (18)F-FDG PET attenuation correction
BACKGROUND: To develop and evaluate the feasibility of a data-driven deep learning approach (deepAC) for positron-emission tomography (PET) image attenuation correction without anatomical imaging. A PET attenuation correction pipeline was developed utilizing deep learning to generate continuously valued pseudo-computed tomography (CT) images from uncorrected (18)F-fluorodeoxyglucose ((18)F-FDG) PET images.
Main Authors: | Liu, Fang; Jang, Hyungseok; Kijowski, Richard; Zhao, Gengyan; Bradshaw, Tyler; McMillan, Alan B. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer International Publishing, 2018 |
Subjects: | Original Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6230542/ https://www.ncbi.nlm.nih.gov/pubmed/30417316 http://dx.doi.org/10.1186/s40658-018-0225-8 |
_version_ | 1783370094846410752 |
---|---|
author | Liu, Fang Jang, Hyungseok Kijowski, Richard Zhao, Gengyan Bradshaw, Tyler McMillan, Alan B. |
author_facet | Liu, Fang Jang, Hyungseok Kijowski, Richard Zhao, Gengyan Bradshaw, Tyler McMillan, Alan B. |
author_sort | Liu, Fang |
collection | PubMed |
description | BACKGROUND: To develop and evaluate the feasibility of a data-driven deep learning approach (deepAC) for positron-emission tomography (PET) image attenuation correction without anatomical imaging. A PET attenuation correction pipeline was developed utilizing deep learning to generate continuously valued pseudo-computed tomography (CT) images from uncorrected (18)F-fluorodeoxyglucose ((18)F-FDG) PET images. A deep convolutional encoder-decoder network was trained to identify tissue contrast in volumetric uncorrected PET images co-registered to CT data. A set of 100 retrospective 3D FDG PET head images was used to train the model. The model was evaluated in another 28 patients by comparing the generated pseudo-CT to the acquired CT using Dice coefficient and mean absolute error (MAE) and finally by comparing reconstructed PET images using the pseudo-CT and acquired CT for attenuation correction. Paired-sample t tests were used for statistical analysis to compare PET reconstruction error using deepAC with CT-based attenuation correction. RESULTS: deepAC produced pseudo-CTs with Dice coefficients of 0.80 ± 0.02 for air, 0.94 ± 0.01 for soft tissue, and 0.75 ± 0.03 for bone and MAE of 111 ± 16 HU relative to the PET/CT dataset. deepAC provides quantitatively accurate (18)F-FDG PET results with average errors of less than 1% in most brain regions. CONCLUSIONS: We have developed an automated approach (deepAC) that allows generation of a continuously valued pseudo-CT from a single (18)F-FDG non-attenuation-corrected (NAC) PET image and evaluated it in PET/CT brain imaging. |
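The evaluation described in the abstract compares the generated pseudo-CT against the acquired CT using per-class Dice coefficients (air, soft tissue, bone) and mean absolute error (MAE) in Hounsfield units. The following is a minimal sketch of how those two metrics are commonly computed; it is not the authors' code, and the HU thresholds used to derive the three tissue-class masks are illustrative assumptions.

```python
# Sketch (not the paper's implementation): per-class Dice and whole-volume MAE
# between a pseudo-CT and the acquired CT, both expressed in HU.
import numpy as np

def segment_ct(ct_hu):
    """Split a CT volume (in HU) into air / soft tissue / bone masks.
    The -400 HU and +300 HU cut-offs are assumptions for illustration."""
    return {
        "air": ct_hu < -400,
        "soft_tissue": (ct_hu >= -400) & (ct_hu < 300),
        "bone": ct_hu >= 300,
    }

def dice(mask_a, mask_b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|)."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    denom = mask_a.sum() + mask_b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

def evaluate_pseudo_ct(pseudo_ct_hu, acquired_ct_hu):
    """Return per-class Dice scores and whole-volume MAE in HU."""
    pred_masks = segment_ct(pseudo_ct_hu)
    ref_masks = segment_ct(acquired_ct_hu)
    scores = {name: dice(pred_masks[name], ref_masks[name]) for name in ref_masks}
    scores["mae_hu"] = float(np.mean(np.abs(pseudo_ct_hu - acquired_ct_hu)))
    return scores

if __name__ == "__main__":
    # Toy example with random volumes; real use would load co-registered CT volumes.
    rng = np.random.default_rng(0)
    ct = rng.uniform(-1000, 1500, size=(32, 32, 32))
    pseudo = ct + rng.normal(0, 50, size=ct.shape)
    print(evaluate_pseudo_ct(pseudo, ct))
```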
format | Online Article Text |
id | pubmed-6230542 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-6230542 2018-11-26 A deep learning approach for (18)F-FDG PET attenuation correction Liu, Fang Jang, Hyungseok Kijowski, Richard Zhao, Gengyan Bradshaw, Tyler McMillan, Alan B. EJNMMI Phys Original Research BACKGROUND: To develop and evaluate the feasibility of a data-driven deep learning approach (deepAC) for positron-emission tomography (PET) image attenuation correction without anatomical imaging. A PET attenuation correction pipeline was developed utilizing deep learning to generate continuously valued pseudo-computed tomography (CT) images from uncorrected (18)F-fluorodeoxyglucose ((18)F-FDG) PET images. A deep convolutional encoder-decoder network was trained to identify tissue contrast in volumetric uncorrected PET images co-registered to CT data. A set of 100 retrospective 3D FDG PET head images was used to train the model. The model was evaluated in another 28 patients by comparing the generated pseudo-CT to the acquired CT using Dice coefficient and mean absolute error (MAE) and finally by comparing reconstructed PET images using the pseudo-CT and acquired CT for attenuation correction. Paired-sample t tests were used for statistical analysis to compare PET reconstruction error using deepAC with CT-based attenuation correction. RESULTS: deepAC produced pseudo-CTs with Dice coefficients of 0.80 ± 0.02 for air, 0.94 ± 0.01 for soft tissue, and 0.75 ± 0.03 for bone and MAE of 111 ± 16 HU relative to the PET/CT dataset. deepAC provides quantitatively accurate (18)F-FDG PET results with average errors of less than 1% in most brain regions. CONCLUSIONS: We have developed an automated approach (deepAC) that allows generation of a continuously valued pseudo-CT from a single (18)F-FDG non-attenuation-corrected (NAC) PET image and evaluated it in PET/CT brain imaging. Springer International Publishing 2018-11-12 /pmc/articles/PMC6230542/ /pubmed/30417316 http://dx.doi.org/10.1186/s40658-018-0225-8 Text en © The Author(s). 2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. |
spellingShingle | Original Research Liu, Fang Jang, Hyungseok Kijowski, Richard Zhao, Gengyan Bradshaw, Tyler McMillan, Alan B. A deep learning approach for (18)F-FDG PET attenuation correction |
title | A deep learning approach for (18)F-FDG PET attenuation correction |
title_full | A deep learning approach for (18)F-FDG PET attenuation correction |
title_fullStr | A deep learning approach for (18)F-FDG PET attenuation correction |
title_full_unstemmed | A deep learning approach for (18)F-FDG PET attenuation correction |
title_short | A deep learning approach for (18)F-FDG PET attenuation correction |
title_sort | deep learning approach for (18)f-fdg pet attenuation correction |
topic | Original Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6230542/ https://www.ncbi.nlm.nih.gov/pubmed/30417316 http://dx.doi.org/10.1186/s40658-018-0225-8 |
work_keys_str_mv | AT liufang adeeplearningapproachfor18ffdgpetattenuationcorrection AT janghyungseok adeeplearningapproachfor18ffdgpetattenuationcorrection AT kijowskirichard adeeplearningapproachfor18ffdgpetattenuationcorrection AT zhaogengyan adeeplearningapproachfor18ffdgpetattenuationcorrection AT bradshawtyler adeeplearningapproachfor18ffdgpetattenuationcorrection AT mcmillanalanb adeeplearningapproachfor18ffdgpetattenuationcorrection AT liufang deeplearningapproachfor18ffdgpetattenuationcorrection AT janghyungseok deeplearningapproachfor18ffdgpetattenuationcorrection AT kijowskirichard deeplearningapproachfor18ffdgpetattenuationcorrection AT zhaogengyan deeplearningapproachfor18ffdgpetattenuationcorrection AT bradshawtyler deeplearningapproachfor18ffdgpetattenuationcorrection AT mcmillanalanb deeplearningapproachfor18ffdgpetattenuationcorrection |