Evaluation of motion artefact reduction depending on the artefacts’ directions in head MRI using conditional generative adversarial networks

Bibliographic Details
Main Authors: Usui, Keisuke; Muro, Isao; Shibukawa, Syuhei; Goto, Masami; Ogawa, Koichi; Sakano, Yasuaki; Kyogoku, Shinsuke; Daida, Hiroyuki
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10220077/
https://www.ncbi.nlm.nih.gov/pubmed/37237139
http://dx.doi.org/10.1038/s41598-023-35794-1
author Usui, Keisuke
Muro, Isao
Shibukawa, Syuhei
Goto, Masami
Ogawa, Koichi
Sakano, Yasuaki
Kyogoku, Shinsuke
Daida, Hiroyuki
author_sort Usui, Keisuke
collection PubMed
description Motion artefacts caused by the patient’s body movements affect the accuracy of magnetic resonance imaging (MRI). This study compared and evaluated the accuracy of motion artefact correction using a conditional generative adversarial network (CGAN) against autoencoder and U-net models. The training dataset consisted of images with motion artefacts generated through simulation. Motion artefacts occur in the phase-encoding direction, which was set to either the horizontal or vertical direction of the image. To create T2-weighted axial images with simulated motion artefacts, 5500 head images were used for each direction. Of these data, 90% were used for training and the remainder for the evaluation of image quality; the validation data used during model training comprised 10% of the training dataset. The training data were divided by the direction (horizontal or vertical) in which the motion artefacts appeared, and the effect of combining both directions in the training dataset was verified. The corrected images were evaluated using the structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), computed against the corresponding images without motion artefacts. The largest improvements in SSIM and PSNR were observed when the direction of the motion artefacts was consistent between the training and evaluation datasets. However, SSIM > 0.9 and PSNR > 29 dB were also achieved by the model trained on both artefact directions, and this model was the most robust to actual patient motion in head MRI images. Moreover, the image quality of the CGAN-corrected images was the closest to that of the original images, with improvement rates of approximately 26% and 7.7% for SSIM and PSNR, respectively. The CGAN model demonstrated high image reproducibility, and the best results were obtained when the training condition matched the direction in which the motion artefacts appeared. (An illustrative code sketch of the simulation and evaluation steps described here follows at the end of this record.)
format Online
Article
Text
id pubmed-10220077
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Nature Publishing Group UK
record_format MEDLINE/PubMed
spelling pubmed-10220077 2023-05-28 Evaluation of motion artefact reduction depending on the artefacts’ directions in head MRI using conditional generative adversarial networks. Usui, Keisuke; Muro, Isao; Shibukawa, Syuhei; Goto, Masami; Ogawa, Koichi; Sakano, Yasuaki; Kyogoku, Shinsuke; Daida, Hiroyuki. Sci Rep, Article. Nature Publishing Group UK, 2023-05-26. /pmc/articles/PMC10220077/ /pubmed/37237139 http://dx.doi.org/10.1038/s41598-023-35794-1 Text en © The Author(s) 2023. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title Evaluation of motion artefact reduction depending on the artefacts’ directions in head MRI using conditional generative adversarial networks
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10220077/
https://www.ncbi.nlm.nih.gov/pubmed/37237139
http://dx.doi.org/10.1038/s41598-023-35794-1
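
The description above outlines the study's pipeline only in prose: motion artefacts are simulated along a chosen phase-encoding direction, and corrected images are scored against their artefact-free references with SSIM and PSNR. The following is a minimal, hypothetical Python sketch of those two ingredients, not the authors' implementation: the corruption model (random per-line phase errors in k-space), the function and parameter names (simulate_motion_artefact, pe_axis, corrupted_fraction, max_shift_px), the scikit-image metric calls, and the synthetic phantom are all illustrative assumptions.

# Hypothetical sketch (not the article's code): corrupt phase-encoding lines in
# k-space so ghosting appears along the chosen phase-encoding direction, then
# score a "corrected" image against its artefact-free reference with SSIM/PSNR.
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

def simulate_motion_artefact(image, pe_axis=0, corrupted_fraction=0.3,
                             max_shift_px=4.0, seed=None):
    """Apply random linear phase ramps to a fraction of phase-encoding lines,
    mimicking rigid translation along the readout direction during acquisition.
    The line-to-line inconsistency produces ghosting along pe_axis
    (0 = vertical, 1 = horizontal in image space; an assumption of this sketch)."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fftshift(np.fft.fft2(image))
    n_pe = kspace.shape[pe_axis]          # number of phase-encoding lines
    n_fe = kspace.shape[1 - pe_axis]      # samples along the readout direction
    lines = rng.choice(n_pe, size=int(corrupted_fraction * n_pe), replace=False)
    freqs = np.fft.fftshift(np.fft.fftfreq(n_fe))
    for line in lines:
        shift = rng.uniform(-max_shift_px, max_shift_px)
        ramp = np.exp(-2j * np.pi * freqs * shift)   # phase ramp = spatial shift
        if pe_axis == 0:
            kspace[line, :] *= ramp
        else:
            kspace[:, line] *= ramp
    return np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))

def evaluate(reference, corrected):
    """SSIM and PSNR of a corrected image against the artefact-free reference,
    the two metrics reported in the study."""
    data_range = float(reference.max() - reference.min())
    ssim = structural_similarity(reference, corrected, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, corrected, data_range=data_range)
    return ssim, psnr

# Toy usage with a synthetic phantom standing in for a T2-weighted slice.
clean = np.zeros((256, 256))
clean[64:192, 64:192] = 1.0
corrupted = simulate_motion_artefact(clean, pe_axis=1, seed=0)
print("SSIM = %.3f, PSNR = %.1f dB" % evaluate(clean, corrupted))

In the study's terms, generating training data with a single pe_axis corresponds to the direction-consistent condition, while mixing both values of pe_axis corresponds to the model trained on both artefact directions.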