
Prediction of an oxygen extraction fraction map by convolutional neural network: validation of input data among MR and PET images

Bibliographic Details
Main Authors: Matsubara, Keisuke, Ibaraki, Masanobu, Shinohara, Yuki, Takahashi, Noriyuki, Toyoshima, Hideto, Kinoshita, Toshibumi
Format: Online Article Text
Language: English
Published: Springer International Publishing 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8589760/
https://www.ncbi.nlm.nih.gov/pubmed/33821419
http://dx.doi.org/10.1007/s11548-021-02356-7
collection PubMed
description PURPOSE: Oxygen extraction fraction (OEF) is a biomarker of brain-tissue viability in ischemic stroke. However, acquiring an OEF map with oxygen-15 positron emission tomography (PET) is uncomfortable for patients because of the long fixation time, invasive arterial sampling, and radiation exposure. We aimed to predict the OEF map from magnetic resonance (MR) and PET images using a deep convolutional neural network (CNN) and to determine which PET and MR images are optimal as inputs for the prediction of OEF maps.
METHODS: Maps of cerebral blood flow at rest (CBF) and during stress (sCBF) and of cerebral blood volume (CBV) acquired with oxygen-15 PET, together with routine MR images (T1-, T2-, and T2*-weighted), from 113 patients with steno-occlusive disease were used to train a U-Net. MR and PET images from another 25 patients were used as test data. For each combination of MRI, CBF, CBV, and sCBF inputs, we compared the predicted OEF maps with the real OEF values using the intraclass correlation coefficient (ICC).
RESULTS: Among the combinations of input images, the model trained with MRI, CBF, CBV, and sCBF maps predicted OEF maps most similar to the real ones (ICC: 0.597 ± 0.082). However, the contrast of the predicted OEF maps was lower than that of the real OEF maps.
CONCLUSION: These results suggest that the deep CNN learned useful features from the CBF, sCBF, CBV, and MR images and predicted qualitatively realistic OEF maps. The model could therefore shorten the fixation time for ¹⁵O PET by making ¹⁵O₂ scans unnecessary. Further training with a larger data set is required to predict quantitatively accurate OEF maps.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1007/s11548-021-02356-7.
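The abstract reports agreement between predicted and real OEF values as an intraclass correlation coefficient (ICC). As a minimal illustrative sketch, the code below computes ICC(3,1) (two-way mixed, single-measure, consistency) with numpy; the abstract does not state which ICC variant the authors used, and `real`/`pred` are synthetic stand-ins, not study data.

```python
import numpy as np

def icc_3_1(y):
    """ICC(3,1): two-way mixed, single-measure, consistency.

    y : (n_subjects, k_raters) array, e.g. column 0 = real OEF values,
    column 1 = CNN-predicted OEF values, one row per voxel or region.
    """
    y = np.asarray(y, dtype=float)
    n, k = y.shape
    grand = y.mean()
    ss_rows = k * ((y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((y.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_total = ((y - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols                 # residual
    msr = ss_rows / (n - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse)

# Hypothetical regional OEF values: real vs. a noisy, contrast-preserving estimate.
rng = np.random.default_rng(0)
real = rng.uniform(0.3, 0.6, size=200)          # plausible OEF range
pred = real + rng.normal(0.0, 0.02, size=200)   # prediction with small error
print(round(icc_3_1(np.column_stack([real, pred])), 3))
```

A perfect prediction gives an ICC of 1; the study's value of about 0.6 on test patients indicates moderate agreement, consistent with the reduced contrast the authors note.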
id pubmed-8589760
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-8589760 2021-11-15 Int J Comput Assist Radiol Surg, Original Article. Springer International Publishing, published online 2021-04-05. Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, as long as appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated. Third-party material in the article is included under the same licence unless indicated otherwise in a credit line; uses beyond the licence or statutory regulation require permission from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/
topic Original Article