Deep learning‐based convolutional neural network for intramodality brain MRI synthesis
PURPOSE: The existence of multicontrast magnetic resonance (MR) images increases the level of clinical information available for the diagnosis and treatment of brain cancer patients. However, acquiring the complete set of multicontrast MR images is not always practically feasible. In this study, we...
Main Authors: | Osman, Alexander F. I., Tamam, Nissren M. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | John Wiley and Sons Inc., 2022 |
Subjects: | Radiation Oncology Physics |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8992958/ https://www.ncbi.nlm.nih.gov/pubmed/35044073 http://dx.doi.org/10.1002/acm2.13530 |
_version_ | 1784683812602511360 |
---|---|
author | Osman, Alexander F. I. Tamam, Nissren M. |
author_facet | Osman, Alexander F. I. Tamam, Nissren M. |
author_sort | Osman, Alexander F. I. |
collection | PubMed |
description | PURPOSE: The existence of multicontrast magnetic resonance (MR) images increases the level of clinical information available for the diagnosis and treatment of brain cancer patients. However, acquiring the complete set of multicontrast MR images is not always practically feasible. In this study, we developed a state‐of‐the‐art deep learning convolutional neural network (CNN) for image‐to‐image translation across three standard MRI contrasts for the brain. METHODS: The BRATS’2018 MRI dataset of 477 patients clinically diagnosed with glioma brain cancer was used in this study, with each patient having T1‐weighted (T1), T2‐weighted (T2), and FLAIR contrasts. It was randomly split into 64%, 16%, and 20% as training, validation, and test sets, respectively. We developed a U‐Net model to learn the nonlinear mapping of a source image contrast to a target image contrast across the three MRI contrasts. The model was trained and validated with 2D paired MR images using a mean‐squared error (MSE) cost function, the Adam optimizer with a 0.001 learning rate, and 120 epochs with a batch size of 32. The generated synthetic MR images were evaluated against the ground‐truth images by computing the MSE, mean absolute error (MAE), peak signal‐to‐noise ratio (PSNR), and structural similarity index (SSIM). RESULTS: The synthetic MR images generated by our model were nearly indistinguishable from the real images on the test dataset for all translations, except that synthetic FLAIR images had slightly lower quality and exhibited a loss of detail. The ranges of average PSNR, MSE, MAE, and SSIM values over the six translations were 29.44–33.25 dB, 0.0005–0.0012, 0.0086–0.0149, and 0.932–0.946, respectively. Our results were as good as the best reported results by other deep learning models on BRATS datasets. CONCLUSIONS: Our U‐Net model demonstrated that it can accurately perform image‐to‐image translation across brain MRI contrasts. It holds great promise for clinical use, enabling improved clinical decision‐making and better diagnosis of brain cancer patients through the availability of multicontrast MRIs. This approach is clinically relevant and represents a significant step toward efficiently filling the gap of absent MR sequences without additional scanning. |
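The evaluation protocol in the abstract (comparing synthetic against ground-truth images via MSE, MAE, and PSNR) can be sketched as below. This is a minimal illustration, not the authors' code: the function name `evaluate_synthesis` is hypothetical, and it assumes image intensities normalized to a [0, 1] range, a common preprocessing step before computing PSNR on MR slices.

```python
import numpy as np

def evaluate_synthesis(gt, pred, data_range=1.0):
    """Compare a synthetic MR slice against its ground-truth counterpart.

    Assumes both arrays hold intensities in [0, data_range].
    Returns MSE, MAE, and PSNR (in dB).
    """
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    mse = np.mean((gt - pred) ** 2)       # mean-squared error
    mae = np.mean(np.abs(gt - pred))      # mean absolute error
    # PSNR in dB; infinite when the images are identical
    psnr = np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)
    return {"mse": mse, "mae": mae, "psnr": psnr}

# Toy example: a synthetic "slice" offset from ground truth by 0.1 everywhere
gt = np.full((8, 8), 0.5)
pred = gt + 0.1
metrics = evaluate_synthesis(gt, pred)
# mse = 0.01, mae = 0.1, psnr = 20 dB
```

SSIM, the fourth metric in the abstract, involves local luminance, contrast, and structure comparisons; in practice it is usually computed with `skimage.metrics.structural_similarity`, which also requires the `data_range` to be specified explicitly.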
format | Online Article Text |
id | pubmed-8992958 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | John Wiley and Sons Inc. |
record_format | MEDLINE/PubMed |
spelling | pubmed-89929582022-04-13 Deep learning‐based convolutional neural network for intramodality brain MRI synthesis Osman, Alexander F. I. Tamam, Nissren M. J Appl Clin Med Phys Radiation Oncology Physics John Wiley and Sons Inc. 2022-01-19 /pmc/articles/PMC8992958/ /pubmed/35044073 http://dx.doi.org/10.1002/acm2.13530 Text en © 2022 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, LLC on behalf of The American Association of Physicists in Medicine. This is an open access article under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Radiation Oncology Physics Osman, Alexander F. I. Tamam, Nissren M. Deep learning‐based convolutional neural network for intramodality brain MRI synthesis |
title | Deep learning‐based convolutional neural network for intramodality brain MRI synthesis |
title_full | Deep learning‐based convolutional neural network for intramodality brain MRI synthesis |
title_fullStr | Deep learning‐based convolutional neural network for intramodality brain MRI synthesis |
title_full_unstemmed | Deep learning‐based convolutional neural network for intramodality brain MRI synthesis |
title_short | Deep learning‐based convolutional neural network for intramodality brain MRI synthesis |
title_sort | deep learning‐based convolutional neural network for intramodality brain mri synthesis |
topic | Radiation Oncology Physics |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8992958/ https://www.ncbi.nlm.nih.gov/pubmed/35044073 http://dx.doi.org/10.1002/acm2.13530 |
work_keys_str_mv | AT osmanalexanderfi deeplearningbasedconvolutionalneuralnetworkforintramodalitybrainmrisynthesis AT tamamnissrenm deeplearningbasedconvolutionalneuralnetworkforintramodalitybrainmrisynthesis |