
T1-weighted and T2-weighted MRI image synthesis with convolutional generative adversarial networks


Bibliographic Details
Main Authors: Kawahara, Daisuke; Nagata, Yasushi
Format: Online Article Text
Language: English
Published: Via Medica 2021
Subjects: Research Paper
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8086713/
https://www.ncbi.nlm.nih.gov/pubmed/33948300
http://dx.doi.org/10.5603/RPOR.a2021.0005
author Kawahara, Daisuke
Nagata, Yasushi
collection PubMed
description BACKGROUND: The objective of this study was to propose an optimal input image quality for a conditional generative adversarial network (GAN) applied to T1-weighted and T2-weighted magnetic resonance imaging (MRI) images. MATERIALS AND METHODS: A total of 2,024 images scanned from 2017 to 2018 in 104 patients were used. Prediction frameworks from T1-weighted to T2-weighted MRI images and from T2-weighted to T1-weighted MRI images were created with a GAN. Two image sizes (512 × 512 and 256 × 256) and two grayscale conversion methods (simple and adaptive) were used for the input images. In the simple conversion method, the images were converted from 16-bit to 8-bit by dividing the full range into 256 levels. In the adaptive conversion method, the unused levels were eliminated from the 16-bit images, which were then converted to 8-bit by dividing by the value obtained from dividing the maximum pixel value by 256. RESULTS: The relative mean absolute error (rMAE) was smallest with the adaptive conversion method: 0.15 for T1-weighted to T2-weighted MRI images and 0.17 for T2-weighted to T1-weighted MRI images. The adaptive conversion method also had the smallest relative mean square error (rMSE) and relative root mean square error (rRMSE), and the largest peak signal-to-noise ratio (PSNR) and mutual information (MI). The computation time depended on the image size. CONCLUSIONS: Input resolution and image size affect the accuracy of prediction. The proposed model and prediction framework can help improve the versatility and quality of multi-contrast MRI tests without the need for prolonged examinations.
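The two grayscale conversions described in MATERIALS AND METHODS can be illustrated with a short sketch. This is only a minimal interpretation of the abstract's wording, assuming the input is a 16-bit NumPy array; the function names, the clipping behaviour, and the handling of an all-zero image are assumptions of this sketch, not code published by the authors.

import numpy as np

def simple_16bit_to_8bit(img16):
    # Simple conversion: map the full 16-bit range onto 256 levels
    # by dividing every pixel value by 256.
    return np.clip(img16.astype(np.float64) / 256.0, 0, 255).astype(np.uint8)

def adaptive_16bit_to_8bit(img16):
    # Adaptive conversion: ignore the unused upper levels of the 16-bit
    # range and divide by (maximum pixel value / 256), so the levels
    # actually present in the image are spread over 0-255.
    max_val = float(img16.max())
    if max_val == 0:
        return np.zeros_like(img16, dtype=np.uint8)  # assumption: a blank slice stays blank
    scale = max_val / 256.0
    return np.clip(img16.astype(np.float64) / scale, 0, 255).astype(np.uint8)

Read this way, the adaptive variant stretches the intensity range actually used by each MRI image before it is fed to the GAN, which is presumably the behaviour behind the lower rMAE/rMSE/rRMSE and higher PSNR/MI reported for that method.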
format Online
Article
Text
id pubmed-8086713
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher Via Medica
record_format MEDLINE/PubMed
spelling pubmed-8086713 2021-05-03 T1-weighted and T2-weighted MRI image synthesis with convolutional generative adversarial networks. Kawahara, Daisuke; Nagata, Yasushi. Rep Pract Oncol Radiother, Research Paper. Via Medica, 2021-02-25. /pmc/articles/PMC8086713/ /pubmed/33948300 http://dx.doi.org/10.5603/RPOR.a2021.0005 Text en. © 2021 Greater Poland Cancer Centre. https://creativecommons.org/licenses/by-nc-nd/4.0/ This article is available in open access under the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, which allows readers to download articles and share them with others as long as they credit the authors and the publisher, but not to change them in any way or use them commercially.
title T1-weighted and T2-weighted MRI image synthesis with convolutional generative adversarial networks
topic Research Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8086713/
https://www.ncbi.nlm.nih.gov/pubmed/33948300
http://dx.doi.org/10.5603/RPOR.a2021.0005