MR-contrast-aware image-to-image translations with generative adversarial networks
Main Authors: | Denck, Jonas; Guehring, Jens; Maier, Andreas; Rothgang, Eva
---|---
Format: | Online Article Text
Language: | English
Published: | Springer International Publishing, 2021
Subjects: | Original Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8616894/ https://www.ncbi.nlm.nih.gov/pubmed/34148167 http://dx.doi.org/10.1007/s11548-021-02433-x
_version_ | 1784604428389580800 |
author | Denck, Jonas Guehring, Jens Maier, Andreas Rothgang, Eva |
author_facet | Denck, Jonas Guehring, Jens Maier, Andreas Rothgang, Eva |
author_sort | Denck, Jonas |
collection | PubMed |
description | PURPOSE: A magnetic resonance imaging (MRI) exam typically consists of several sequences that yield different image contrasts. Each sequence is parameterized through multiple acquisition parameters that influence image contrast, signal-to-noise ratio, acquisition time, and/or resolution. Depending on the clinical indication, different contrasts are required by the radiologist to make a diagnosis. As MR sequence acquisition is time-consuming and acquired images may be corrupted by motion, a method to synthesize MR images with adjustable contrast properties is required. METHODS: We therefore trained an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time. Our approach is motivated by style transfer networks, although in our case the “style” of an image is given explicitly, as it is determined by the MR acquisition parameters on which our network is conditioned. RESULTS: This enables us to synthesize MR images with adjustable image contrast. We evaluated our approach on the fastMRI dataset, a large set of publicly available MR knee images, and show that our method outperforms a benchmark pix2pix approach in the translation of non-fat-saturated MR images to fat-saturated images. Our approach yields a peak signal-to-noise ratio of 24.48 and a structural similarity of 0.66, significantly surpassing the pix2pix benchmark model. CONCLUSION: Our model is the first that enables fine-tuned contrast synthesis, which can be used to synthesize missing MR contrasts or as a data augmentation technique for AI training in MRI. It can also serve as a basis for other image-to-image translation tasks within medical imaging, e.g., to enhance intermodality translation (MRI → CT) or 7 T image synthesis from 3 T MR images.
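The description outlines the core idea: a generator conditioned on the scalar acquisition parameters repetition time (TR) and echo time (TE), in the spirit of style transfer. The record does not spell out the conditioning mechanism, so the following is a minimal, hypothetical PyTorch sketch of one common way to inject such scalar parameters into a generator (FiLM-style feature modulation); the class name, layer sizes, and normalization ranges are illustrative assumptions, not the authors' architecture.

```python
# Hypothetical sketch of conditioning a generator block on acquisition
# parameters (TR, TE) via FiLM-style feature modulation. Assumes PyTorch;
# layer sizes, normalization ranges, and names are illustrative only and
# NOT taken from the paper.
import torch
import torch.nn as nn


class ParamConditionedBlock(nn.Module):
    """Conv block whose feature maps are scaled/shifted by (TR, TE)."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.norm = nn.InstanceNorm2d(out_ch, affine=False)
        # Map the two scalar acquisition parameters to a per-channel
        # scale (gamma) and shift (beta).
        self.film = nn.Linear(2, 2 * out_ch)

    def forward(self, x: torch.Tensor, params: torch.Tensor) -> torch.Tensor:
        # params: (batch, 2) tensor holding normalized [TR, TE]
        h = self.norm(self.conv(x))
        gamma, beta = self.film(params).chunk(2, dim=1)
        gamma = gamma[:, :, None, None]  # broadcast to (B, C, 1, 1)
        beta = beta[:, :, None, None]
        return torch.relu((1.0 + gamma) * h + beta)


# Usage: push one knee slice toward a target contrast given by (TR, TE).
block = ParamConditionedBlock(in_ch=1, out_ch=64)
image = torch.randn(1, 1, 256, 256)                       # input MR slice
params = torch.tensor([[2800.0 / 5000.0, 30.0 / 100.0]])  # normalized TR, TE
features = block(image, params)                           # (1, 64, 256, 256)
```

A full generator would stack such blocks, with pix2pix-style adversarial and reconstruction losses driving the translation; none of those specifics are confirmed by this record.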
format | Online Article Text |
id | pubmed-8616894 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Springer International Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-8616894 2021-12-01 MR-contrast-aware image-to-image translations with generative adversarial networks Denck, Jonas Guehring, Jens Maier, Andreas Rothgang, Eva Int J Comput Assist Radiol Surg Original Article PURPOSE: A magnetic resonance imaging (MRI) exam typically consists of several sequences that yield different image contrasts. Each sequence is parameterized through multiple acquisition parameters that influence image contrast, signal-to-noise ratio, acquisition time, and/or resolution. Depending on the clinical indication, different contrasts are required by the radiologist to make a diagnosis. As MR sequence acquisition is time-consuming and acquired images may be corrupted by motion, a method to synthesize MR images with adjustable contrast properties is required. METHODS: We therefore trained an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time. Our approach is motivated by style transfer networks, although in our case the “style” of an image is given explicitly, as it is determined by the MR acquisition parameters on which our network is conditioned. RESULTS: This enables us to synthesize MR images with adjustable image contrast. We evaluated our approach on the fastMRI dataset, a large set of publicly available MR knee images, and show that our method outperforms a benchmark pix2pix approach in the translation of non-fat-saturated MR images to fat-saturated images. Our approach yields a peak signal-to-noise ratio of 24.48 and a structural similarity of 0.66, significantly surpassing the pix2pix benchmark model. CONCLUSION: Our model is the first that enables fine-tuned contrast synthesis, which can be used to synthesize missing MR contrasts or as a data augmentation technique for AI training in MRI. It can also serve as a basis for other image-to-image translation tasks within medical imaging, e.g., to enhance intermodality translation (MRI → CT) or 7 T image synthesis from 3 T MR images. Springer International Publishing 2021-06-20 2021 /pmc/articles/PMC8616894/ /pubmed/34148167 http://dx.doi.org/10.1007/s11548-021-02433-x Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/ .
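As a reading aid for the reported results (PSNR 24.48, SSIM 0.66), the snippet below shows how these two metrics are commonly computed with scikit-image. It is a minimal sketch only: the authors' actual evaluation pipeline, intensity normalization, and data ranges are not specified in this record, and the stand-in arrays are random.

```python
# Minimal sketch of PSNR/SSIM computation, assuming scikit-image.
# The normalization and data range here are assumptions, not the
# evaluation protocol from the paper.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity


def evaluate_pair(synthesized: np.ndarray, reference: np.ndarray):
    """Return (PSNR, SSIM) between a synthesized and a reference slice."""
    data_range = float(reference.max() - reference.min())
    psnr = peak_signal_noise_ratio(reference, synthesized, data_range=data_range)
    ssim = structural_similarity(reference, synthesized, data_range=data_range)
    return psnr, ssim


# Stand-in arrays; in practice these would be fastMRI knee slices.
reference = np.random.rand(320, 320).astype(np.float32)
synthesized = reference + 0.05 * np.random.randn(320, 320).astype(np.float32)
print(evaluate_pair(synthesized, reference))
```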
spellingShingle | Original Article Denck, Jonas Guehring, Jens Maier, Andreas Rothgang, Eva MR-contrast-aware image-to-image translations with generative adversarial networks |
title | MR-contrast-aware image-to-image translations with generative adversarial networks |
title_full | MR-contrast-aware image-to-image translations with generative adversarial networks |
title_fullStr | MR-contrast-aware image-to-image translations with generative adversarial networks |
title_full_unstemmed | MR-contrast-aware image-to-image translations with generative adversarial networks |
title_short | MR-contrast-aware image-to-image translations with generative adversarial networks |
title_sort | mr-contrast-aware image-to-image translations with generative adversarial networks |
topic | Original Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8616894/ https://www.ncbi.nlm.nih.gov/pubmed/34148167 http://dx.doi.org/10.1007/s11548-021-02433-x |
work_keys_str_mv | AT denckjonas mrcontrastawareimagetoimagetranslationswithgenerativeadversarialnetworks AT guehringjens mrcontrastawareimagetoimagetranslationswithgenerativeadversarialnetworks AT maierandreas mrcontrastawareimagetoimagetranslationswithgenerativeadversarialnetworks AT rothgangeva mrcontrastawareimagetoimagetranslationswithgenerativeadversarialnetworks |