
A deep learning method for translating 3DCT to SPECT ventilation imaging: First comparison with (81m)Kr‐gas SPECT ventilation imaging


Bibliographic Details
Main Authors: Kajikawa, Tomohiro, Kadoya, Noriyuki, Maehara, Yosuke, Miura, Hiroshi, Katsuta, Yoshiyuki, Nagasawa, Shinsuke, Suzuki, Gen, Yamazaki, Hideya, Tamaki, Nagara, Yamada, Kei
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9545310/
https://www.ncbi.nlm.nih.gov/pubmed/35510535
http://dx.doi.org/10.1002/mp.15697
_version_ 1784804791785881600
author Kajikawa, Tomohiro
Kadoya, Noriyuki
Maehara, Yosuke
Miura, Hiroshi
Katsuta, Yoshiyuki
Nagasawa, Shinsuke
Suzuki, Gen
Yamazaki, Hideya
Tamaki, Nagara
Yamada, Kei
author_facet Kajikawa, Tomohiro
Kadoya, Noriyuki
Maehara, Yosuke
Miura, Hiroshi
Katsuta, Yoshiyuki
Nagasawa, Shinsuke
Suzuki, Gen
Yamazaki, Hideya
Tamaki, Nagara
Yamada, Kei
author_sort Kajikawa, Tomohiro
collection PubMed
description PURPOSE: This study aimed to evaluate the accuracy of deep learning (DL)‐based computed tomography (CT) ventilation imaging (CTVI). METHODS: A total of 71 cases that underwent single‐photon emission CT (81m)Kr‐gas ventilation (SPECT V) and CT imaging were included. Sixty cases were assigned to the training and validation sets, and the remaining 11 cases were assigned to the test set. To directly transform three‐dimensional (3D) CT (free‐breathing CT) images into SPECT V images, a DL‐based model was implemented based on the U‐Net architecture. The input and output data were the 3DCT and SPECT V images, respectively, masked to the whole‐lung volumes. In preprocessing, these data were resampled to a common voxel size, rigidly registered, cropped, and normalized. In addition to a standard estimation method (i.e., without dropout during the estimation process), a Monte Carlo dropout (MCD) method (i.e., with dropout during the estimation process) was used to calculate prediction uncertainty. To evaluate the performance of the two models (CTVI(MCD U‐Net) and CTVI(U‐Net)), we used fivefold cross‐validation on the training and validation sets. To test the final performance of each approach, we applied the test set to each of the five trained models and averaged their predictions to acquire the mean test result (bagging). For the MCD method, prediction was repeated (sample size = 200), and average and standard deviation (SD) maps were calculated voxel‐wise from the repeated predictions; the average maps were defined as the test prediction results in each fold. As evaluation indexes, the voxel‐wise Spearman rank correlation coefficient (Spearman r(s)) and the Dice similarity coefficient (DSC) were calculated. The DSC was calculated for three functional regions (high, moderate, and low) of approximately equal volume. The coefficient of variation was defined as the prediction uncertainty, and its average value was calculated within each of the three functional regions. The Wilcoxon signed‐rank test was used to test for a significant difference between the two DL‐based approaches. RESULTS: The average indexes with one SD (1SD) between CTVI(MCD U‐Net) and SPECT V were 0.76 ± 0.06, 0.69 ± 0.07, 0.51 ± 0.06, and 0.75 ± 0.04 for Spearman r(s), DSC(high), DSC(moderate), and DSC(low), respectively. The average indexes with 1SD between CTVI(U‐Net) and SPECT V were 0.72 ± 0.05, 0.66 ± 0.04, 0.48 ± 0.04, and 0.74 ± 0.06 for Spearman r(s), DSC(high), DSC(moderate), and DSC(low), respectively. These indexes showed no significant difference between CTVI(MCD U‐Net) and CTVI(U‐Net) (Spearman r(s), p = 0.175; DSC(high), p = 0.123; DSC(moderate), p = 0.278; DSC(low), p = 0.520). The average coefficients of variation with 1SD were 0.27 ± 0.00, 0.27 ± 0.01, and 0.36 ± 0.03 for the high‐, moderate‐, and low‐functional regions, respectively, and the low‐functional region tended to exhibit larger uncertainties than the others. CONCLUSION: We evaluated a DL‐based framework for estimating lung‐functional ventilation images from CT images alone. The results indicated that the DL‐based approach could potentially be used for lung‐ventilation estimation.
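The abstract describes two computational steps that can be made concrete: (1) Monte Carlo dropout, where repeated stochastic predictions (sample size = 200) are summarized per voxel into average, SD, and coefficient-of-variation maps, and (2) the evaluation, where the voxel-wise Spearman r(s) and the DSC over high/moderate/low-function regions of roughly equal volume are computed against SPECT V. The Python sketch below is a minimal illustration under stated assumptions, not the authors' implementation: it assumes the 200 MCD predictions for a case are already stacked in a NumPy array, approximates the equal-volume split with percentile tertiles, and uses hypothetical function names and array shapes.

# Minimal sketch (not the authors' code): per-voxel MCD summary maps and the
# evaluation indexes named in the abstract. Shapes and helpers are illustrative.
import numpy as np
from scipy.stats import spearmanr

def mcd_summary(mcd_preds: np.ndarray, lung_mask: np.ndarray):
    """mcd_preds: (n_samples, Z, Y, X) stochastic predictions; lung_mask: bool (Z, Y, X)."""
    mean_map = mcd_preds.mean(axis=0)
    sd_map = mcd_preds.std(axis=0)
    cov_map = np.zeros_like(mean_map)
    # coefficient of variation = SD / mean, evaluated inside the lung mask only
    cov_map[lung_mask] = sd_map[lung_mask] / np.clip(mean_map[lung_mask], 1e-8, None)
    return mean_map, sd_map, cov_map

def tertile_regions(image: np.ndarray, lung_mask: np.ndarray):
    """Split the masked lung into high/moderate/low-function regions of ~equal volume."""
    vals = image[lung_mask]
    lo, hi = np.percentile(vals, [100 / 3, 200 / 3])
    high = lung_mask & (image > hi)
    moderate = lung_mask & (image > lo) & (image <= hi)
    low = lung_mask & (image <= lo)
    return high, moderate, low

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean region masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def evaluate(pred: np.ndarray, spect_v: np.ndarray, lung_mask: np.ndarray):
    """Voxel-wise Spearman r(s) and DSC per functional region against SPECT V."""
    r_s, _ = spearmanr(pred[lung_mask], spect_v[lung_mask])
    dscs = {
        name: dice(p_reg, s_reg)
        for name, p_reg, s_reg in zip(
            ("high", "moderate", "low"),
            tertile_regions(pred, lung_mask),
            tertile_regions(spect_v, lung_mask),
        )
    }
    return r_s, dscs

As a usage note, pred would be the average map returned by mcd_summary for the MCD U-Net (or a single deterministic forward pass for the standard U-Net), and averaging evaluate results over the 11 test cases would yield summary statistics analogous to those reported in the RESULTS section; the exact thresholding and masking choices here are assumptions.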
format Online
Article
Text
id pubmed-9545310
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher John Wiley and Sons Inc.
record_format MEDLINE/PubMed
spelling pubmed-9545310 2022-10-14 A deep learning method for translating 3DCT to SPECT ventilation imaging: First comparison with (81m)Kr‐gas SPECT ventilation imaging Kajikawa, Tomohiro Kadoya, Noriyuki Maehara, Yosuke Miura, Hiroshi Katsuta, Yoshiyuki Nagasawa, Shinsuke Suzuki, Gen Yamazaki, Hideya Tamaki, Nagara Yamada, Kei Med Phys THERAPEUTIC INTERVENTIONS PURPOSE: This study aimed to evaluate the accuracy of deep learning (DL)‐based computed tomography (CT) ventilation imaging (CTVI). METHODS: A total of 71 cases that underwent single‐photon emission CT (81m)Kr‐gas ventilation (SPECT V) and CT imaging were included. Sixty cases were assigned to the training and validation sets, and the remaining 11 cases were assigned to the test set. To directly transform three‐dimensional (3D) CT (free‐breathing CT) images into SPECT V images, a DL‐based model was implemented based on the U‐Net architecture. The input and output data were the 3DCT and SPECT V images, respectively, masked to the whole‐lung volumes. In preprocessing, these data were resampled to a common voxel size, rigidly registered, cropped, and normalized. In addition to a standard estimation method (i.e., without dropout during the estimation process), a Monte Carlo dropout (MCD) method (i.e., with dropout during the estimation process) was used to calculate prediction uncertainty. To evaluate the performance of the two models (CTVI(MCD U‐Net) and CTVI(U‐Net)), we used fivefold cross‐validation on the training and validation sets. To test the final performance of each approach, we applied the test set to each of the five trained models and averaged their predictions to acquire the mean test result (bagging). For the MCD method, prediction was repeated (sample size = 200), and average and standard deviation (SD) maps were calculated voxel‐wise from the repeated predictions; the average maps were defined as the test prediction results in each fold. As evaluation indexes, the voxel‐wise Spearman rank correlation coefficient (Spearman r(s)) and the Dice similarity coefficient (DSC) were calculated. The DSC was calculated for three functional regions (high, moderate, and low) of approximately equal volume. The coefficient of variation was defined as the prediction uncertainty, and its average value was calculated within each of the three functional regions. The Wilcoxon signed‐rank test was used to test for a significant difference between the two DL‐based approaches. RESULTS: The average indexes with one SD (1SD) between CTVI(MCD U‐Net) and SPECT V were 0.76 ± 0.06, 0.69 ± 0.07, 0.51 ± 0.06, and 0.75 ± 0.04 for Spearman r(s), DSC(high), DSC(moderate), and DSC(low), respectively. The average indexes with 1SD between CTVI(U‐Net) and SPECT V were 0.72 ± 0.05, 0.66 ± 0.04, 0.48 ± 0.04, and 0.74 ± 0.06 for Spearman r(s), DSC(high), DSC(moderate), and DSC(low), respectively. These indexes showed no significant difference between CTVI(MCD U‐Net) and CTVI(U‐Net) (Spearman r(s), p = 0.175; DSC(high), p = 0.123; DSC(moderate), p = 0.278; DSC(low), p = 0.520). The average coefficients of variation with 1SD were 0.27 ± 0.00, 0.27 ± 0.01, and 0.36 ± 0.03 for the high‐, moderate‐, and low‐functional regions, respectively, and the low‐functional region tended to exhibit larger uncertainties than the others. CONCLUSION: We evaluated a DL‐based framework for estimating lung‐functional ventilation images from CT images alone. The results indicated that the DL‐based approach could potentially be used for lung‐ventilation estimation. John Wiley and Sons Inc. 2022-05-17 2022-07 /pmc/articles/PMC9545310/ /pubmed/35510535 http://dx.doi.org/10.1002/mp.15697 Text en © 2022 The Authors. Medical Physics published by Wiley Periodicals LLC on behalf of American Association of Physicists in Medicine. This is an open access article under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
spellingShingle THERAPEUTIC INTERVENTIONS
Kajikawa, Tomohiro
Kadoya, Noriyuki
Maehara, Yosuke
Miura, Hiroshi
Katsuta, Yoshiyuki
Nagasawa, Shinsuke
Suzuki, Gen
Yamazaki, Hideya
Tamaki, Nagara
Yamada, Kei
A deep learning method for translating 3DCT to SPECT ventilation imaging: First comparison with (81m)Kr‐gas SPECT ventilation imaging
title A deep learning method for translating 3DCT to SPECT ventilation imaging: First comparison with (81m)Kr‐gas SPECT ventilation imaging
title_full A deep learning method for translating 3DCT to SPECT ventilation imaging: First comparison with (81m)Kr‐gas SPECT ventilation imaging
title_fullStr A deep learning method for translating 3DCT to SPECT ventilation imaging: First comparison with (81m)Kr‐gas SPECT ventilation imaging
title_full_unstemmed A deep learning method for translating 3DCT to SPECT ventilation imaging: First comparison with (81m)Kr‐gas SPECT ventilation imaging
title_short A deep learning method for translating 3DCT to SPECT ventilation imaging: First comparison with (81m)Kr‐gas SPECT ventilation imaging
title_sort deep learning method for translating 3dct to spect ventilation imaging: first comparison with (81m)kr‐gas spect ventilation imaging
topic THERAPEUTIC INTERVENTIONS
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9545310/
https://www.ncbi.nlm.nih.gov/pubmed/35510535
http://dx.doi.org/10.1002/mp.15697
work_keys_str_mv AT kajikawatomohiro adeeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT kadoyanoriyuki adeeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT maeharayosuke adeeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT miurahiroshi adeeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT katsutayoshiyuki adeeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT nagasawashinsuke adeeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT suzukigen adeeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT yamazakihideya adeeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT tamakinagara adeeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT yamadakei adeeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT kajikawatomohiro deeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT kadoyanoriyuki deeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT maeharayosuke deeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT miurahiroshi deeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT katsutayoshiyuki deeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT nagasawashinsuke deeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT suzukigen deeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT yamazakihideya deeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT tamakinagara deeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging
AT yamadakei deeplearningmethodfortranslating3dcttospectventilationimagingfirstcomparisonwith81mkrgasspectventilationimaging