Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning
Main Authors: | Shiri, Isaac; Vafaei Sadr, Alireza; Akhavan, Azadeh; Salimi, Yazdan; Sanaat, Amirhossein; Amini, Mehdi; Razeghi, Behrooz; Saberi, Abdollah; Arabi, Hossein; Ferdowsi, Sohrab; Voloshynovskiy, Slava; Gündüz, Deniz; Rahmim, Arman; Zaidi, Habib |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer Berlin Heidelberg, 2022 |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9742659/ https://www.ncbi.nlm.nih.gov/pubmed/36508026 http://dx.doi.org/10.1007/s00259-022-06053-8 |
_version_ | 1784848573014212608 |
---|---|
author | Shiri, Isaac Vafaei Sadr, Alireza Akhavan, Azadeh Salimi, Yazdan Sanaat, Amirhossein Amini, Mehdi Razeghi, Behrooz Saberi, Abdollah Arabi, Hossein Ferdowsi, Sohrab Voloshynovskiy, Slava Gündüz, Deniz Rahmim, Arman Zaidi, Habib |
author_facet | Shiri, Isaac Vafaei Sadr, Alireza Akhavan, Azadeh Salimi, Yazdan Sanaat, Amirhossein Amini, Mehdi Razeghi, Behrooz Saberi, Abdollah Arabi, Hossein Ferdowsi, Sohrab Voloshynovskiy, Slava Gündüz, Deniz Rahmim, Arman Zaidi, Habib |
author_sort | Shiri, Isaac |
collection | PubMed |
description | PURPOSE: Attenuation correction and scatter compensation (AC/SC) are two main steps toward quantitative PET imaging, which remain challenging in PET-only and PET/MRI systems. These can be effectively tackled via deep learning (DL) methods. However, trustworthy and generalizable DL models commonly require well-curated, heterogeneous, and large datasets from multiple clinical centers. At the same time, owing to legal/ethical issues and privacy concerns, forming a large collective, centralized dataset poses significant challenges. In this work, we aimed to develop a DL-based model in a multicenter setting without direct sharing of data, using federated learning (FL) for AC/SC of PET images. METHODS: Non-attenuation/scatter-corrected and CT-based attenuation/scatter-corrected (CT-ASC) ¹⁸F-FDG PET images of 300 patients were included in this study. The dataset came from 6 different centers, each contributing 50 patients, with scanner, image acquisition, and reconstruction protocols varying across the centers. CT-based ASC PET images served as the standard reference. All images were reviewed to include only high-quality, artifact-free PET images. Both corrected and uncorrected PET images were converted to standardized uptake values (SUVs). We used a modified nested U-Net utilizing residual U-blocks in a U-shaped architecture. We evaluated two FL models, namely sequential (FL-SQ) and parallel (FL-PL), and compared their performance with the baseline centralized (CZ) learning model, wherein the data were pooled on one server, as well as with center-based (CB) models, in which a model was built and evaluated separately for each center. Data from each center were divided into training (30 patients), validation (10 patients), and test (10 patients) sets. Final evaluations and reports were performed on 60 patients (10 from each center). RESULTS: In terms of percent SUV absolute relative error (ARE%), both the FL-SQ (CI: 12.21–14.81%) and FL-PL (CI: 11.82–13.84%) models demonstrated excellent agreement with the centralized framework (CI: 10.32–12.00%), while FL-based algorithms improved model performance by over 11% compared to the CB training strategy (CI: 22.34–26.10%). Furthermore, the Mann–Whitney test between the different strategies revealed no significant differences between CZ and FL-based algorithms (p-value > 0.05) in center-categorized mode. At the same time, a significant difference was observed between the different training approaches on the overall dataset (p-value < 0.05). In addition, voxel-wise comparison with respect to the reference CT-ASC exhibited similar performance for images predicted by CZ (R² = 0.94), FL-SQ (R² = 0.93), and FL-PL (R² = 0.92), while the CB model achieved a far lower coefficient of determination (R² = 0.74). Despite the strong correlations of the CZ and FL-based methods with the reference CT-ASC, a slight underestimation of predicted voxel values was observed. CONCLUSION: Deep learning-based models provide promising results toward quantitative PET image reconstruction. Specifically, we developed two FL models and compared their performance with center-based and centralized models. The proposed FL-based models achieved higher performance than center-based models, comparable with the centralized model. Our work provided strong empirical evidence that the FL framework can fully benefit from the generalizability and robustness of DL models used for AC/SC in PET, while obviating the need for direct sharing of datasets between clinical imaging centers. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s00259-022-06053-8. |
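The two federated strategies named in the abstract (parallel FL-PL and sequential FL-SQ) are described only at a high level. The following minimal Python sketch illustrates, under stated assumptions, how one communication round of a parallel, FedAvg-style weight average and one cyclic, center-to-center training pass could be organized across six centers; the toy linear model, `local_update`, and all hyperparameters are hypothetical stand-ins, not the paper's nested U-Net, optimizer, or aggregation scheme, and the `suv` and `are_percent` helpers only restate the standard definitions of the SUV normalization and the ARE% metric mentioned in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CENTERS = 6          # six participating centers, as in the study
N_ROUNDS = 5           # hypothetical number of communication rounds

def local_update(weights, data, lr=0.1, epochs=1):
    """One local training pass at a single center (gradient steps on a toy least-squares loss)."""
    x, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = x.T @ (x @ w - y) / len(y)
        w -= lr * grad
    return w

# Hypothetical per-center data: stand-ins for each center's uncorrected/CT-ASC PET pairs.
centers = [(rng.normal(size=(30, 8)), rng.normal(size=30)) for _ in range(N_CENTERS)]

# Parallel FL (FL-PL), FedAvg-style: every center trains from the same global weights
# in each round, and the server averages the returned weights; raw data never leave a center.
w_parallel = np.zeros(8)
for _ in range(N_ROUNDS):
    local_weights = [local_update(w_parallel, d) for d in centers]
    w_parallel = np.mean(local_weights, axis=0)

# Sequential FL (FL-SQ), cyclic weight transfer: the model visits the centers one after
# another, each continuing training from the weights produced by the previous center.
w_sequential = np.zeros(8)
for _ in range(N_ROUNDS):
    for d in centers:
        w_sequential = local_update(w_sequential, d)

def suv(activity_kbq_per_ml, injected_dose_kbq, body_weight_g):
    """Standard body-weight SUV: tissue activity concentration divided by injected dose per gram."""
    return activity_kbq_per_ml / (injected_dose_kbq / body_weight_g)

def are_percent(pred, ref, eps=1e-8):
    """Percent absolute relative error (ARE%) of predicted versus reference SUV images."""
    return 100.0 * np.abs(pred - ref) / (np.abs(ref) + eps)
```

In the study, both decentralized strategies reached ARE% values close to the centralized model while no PET data were exchanged between centers; the sketch mirrors only that control flow, not the actual network, losses, or communication protocol.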
format | Online Article Text |
id | pubmed-9742659 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer Berlin Heidelberg |
record_format | MEDLINE/PubMed |
spelling | pubmed-97426592022-12-12 Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning Shiri, Isaac Vafaei Sadr, Alireza Akhavan, Azadeh Salimi, Yazdan Sanaat, Amirhossein Amini, Mehdi Razeghi, Behrooz Saberi, Abdollah Arabi, Hossein Ferdowsi, Sohrab Voloshynovskiy, Slava Gündüz, Deniz Rahmim, Arman Zaidi, Habib Eur J Nucl Med Mol Imaging Original Article Springer Berlin Heidelberg 2022-12-12 2023 /pmc/articles/PMC9742659/ /pubmed/36508026 http://dx.doi.org/10.1007/s00259-022-06053-8 Text en © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Original Article Shiri, Isaac Vafaei Sadr, Alireza Akhavan, Azadeh Salimi, Yazdan Sanaat, Amirhossein Amini, Mehdi Razeghi, Behrooz Saberi, Abdollah Arabi, Hossein Ferdowsi, Sohrab Voloshynovskiy, Slava Gündüz, Deniz Rahmim, Arman Zaidi, Habib Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning |
title | Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning |
title_full | Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning |
title_fullStr | Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning |
title_full_unstemmed | Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning |
title_short | Decentralized collaborative multi-institutional PET attenuation and scatter correction using federated deep learning |
title_sort | decentralized collaborative multi-institutional pet attenuation and scatter correction using federated deep learning |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9742659/ https://www.ncbi.nlm.nih.gov/pubmed/36508026 http://dx.doi.org/10.1007/s00259-022-06053-8 |
work_keys_str_mv | AT shiriisaac decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT vafaeisadralireza decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT akhavanazadeh decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT salimiyazdan decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT sanaatamirhossein decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT aminimehdi decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT razeghibehrooz decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT saberiabdollah decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT arabihossein decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT ferdowsisohrab decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT voloshynovskiyslava decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT gunduzdeniz decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT rahmimarman decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning AT zaidihabib decentralizedcollaborativemultiinstitutionalpetattenuationandscattercorrectionusingfederateddeeplearning |