Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images

Bibliographic Details
Main Authors: Li, Yafen, Li, Wen, Xiong, Jing, Xia, Jun, Xie, Yaoqin
Format: Online Article Text
Language: English
Published: Hindawi 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7661122/
https://www.ncbi.nlm.nih.gov/pubmed/33204701
http://dx.doi.org/10.1155/2020/5193707
_version_ 1783609145828573184
author Li, Yafen
Li, Wen
Xiong, Jing
Xia, Jun
Xie, Yaoqin
author_facet Li, Yafen
Li, Wen
Xiong, Jing
Xia, Jun
Xie, Yaoqin
author_sort Li, Yafen
collection PubMed
description Cross-modality medical image synthesis between magnetic resonance (MR) images and computed tomography (CT) images has attracted increasing attention in many medical imaging areas. Many deep learning methods have been used to generate pseudo-MR/CT images from counterpart modality images. In this study, we used U-Net and Cycle-Consistent Adversarial Networks (CycleGAN), which are representative supervised and unsupervised deep learning networks, respectively, to transform MR/CT images to their counterpart modality. Experimental results show that synthetic images predicted by the proposed U-Net method achieved a lower mean absolute error (MAE) and a higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) in both directions of CT/MR synthesis, especially in synthetic CT image generation. Although the synthetic images produced by the U-Net method have less contrast information than those produced by the CycleGAN method, the pixel-value profiles of the U-Net images follow the ground truth images more closely. This work demonstrates that the supervised deep learning method outperforms the unsupervised method in accuracy for MR/CT synthesis tasks.
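The record reports image quality in terms of MAE, SSIM, and PSNR but does not include the evaluation code. The minimal Python sketch below is not the authors' implementation; it only illustrates how these metrics are commonly computed for a synthetic/ground-truth image pair. The function name, array names, and the choice of scikit-image with the ground-truth intensity range as data_range are assumptions made for illustration.

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_synthesis(synthetic, ground_truth):
    # Illustrative only: compute MAE, PSNR, and SSIM for one 2D image pair.
    # Array names and the assumed intensity range are not from the paper.
    synthetic = synthetic.astype(np.float64)
    ground_truth = ground_truth.astype(np.float64)
    data_range = float(ground_truth.max() - ground_truth.min())  # assumed data range
    mae = float(np.mean(np.abs(synthetic - ground_truth)))
    psnr = peak_signal_noise_ratio(ground_truth, synthetic, data_range=data_range)
    ssim = structural_similarity(ground_truth, synthetic, data_range=data_range)
    return {"MAE": mae, "PSNR": psnr, "SSIM": ssim}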
format Online
Article
Text
id pubmed-7661122
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Hindawi
record_format MEDLINE/PubMed
spelling pubmed-7661122 2020-11-16 Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images Li, Yafen Li, Wen Xiong, Jing Xia, Jun Xie, Yaoqin Biomed Res Int Research Article Cross-modality medical image synthesis between magnetic resonance (MR) images and computed tomography (CT) images has attracted increasing attention in many medical imaging areas. Many deep learning methods have been used to generate pseudo-MR/CT images from counterpart modality images. In this study, we used U-Net and Cycle-Consistent Adversarial Networks (CycleGAN), which are representative supervised and unsupervised deep learning networks, respectively, to transform MR/CT images to their counterpart modality. Experimental results show that synthetic images predicted by the proposed U-Net method achieved a lower mean absolute error (MAE) and a higher structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) in both directions of CT/MR synthesis, especially in synthetic CT image generation. Although the synthetic images produced by the U-Net method have less contrast information than those produced by the CycleGAN method, the pixel-value profiles of the U-Net images follow the ground truth images more closely. This work demonstrates that the supervised deep learning method outperforms the unsupervised method in accuracy for MR/CT synthesis tasks. Hindawi 2020-11-05 /pmc/articles/PMC7661122/ /pubmed/33204701 http://dx.doi.org/10.1155/2020/5193707 Text en Copyright © 2020 Yafen Li et al. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Research Article
Li, Yafen
Li, Wen
Xiong, Jing
Xia, Jun
Xie, Yaoqin
Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images
title Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images
title_full Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images
title_fullStr Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images
title_full_unstemmed Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images
title_short Comparison of Supervised and Unsupervised Deep Learning Methods for Medical Image Synthesis between Computed Tomography and Magnetic Resonance Images
title_sort comparison of supervised and unsupervised deep learning methods for medical image synthesis between computed tomography and magnetic resonance images
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7661122/
https://www.ncbi.nlm.nih.gov/pubmed/33204701
http://dx.doi.org/10.1155/2020/5193707
work_keys_str_mv AT liyafen comparisonofsupervisedandunsuperviseddeeplearningmethodsformedicalimagesynthesisbetweencomputedtomographyandmagneticresonanceimages
AT liwen comparisonofsupervisedandunsuperviseddeeplearningmethodsformedicalimagesynthesisbetweencomputedtomographyandmagneticresonanceimages
AT xiongjing comparisonofsupervisedandunsuperviseddeeplearningmethodsformedicalimagesynthesisbetweencomputedtomographyandmagneticresonanceimages
AT xiajun comparisonofsupervisedandunsuperviseddeeplearningmethodsformedicalimagesynthesisbetweencomputedtomographyandmagneticresonanceimages
AT xieyaoqin comparisonofsupervisedandunsuperviseddeeplearningmethodsformedicalimagesynthesisbetweencomputedtomographyandmagneticresonanceimages