
MRI Cross-Modality Image-to-Image Translation

We present a cross-modality generation framework that learns to generate translated modalities from given modalities in MR images. Our method performs Image Modality Translation (IMT) with a deep learning model built on conditional generative adversarial networks (cGANs). The framework jointly exploits low-level features (pixel-wise information) and high-level representations (e.g., brain tumors and brain structures such as gray matter) across modalities, which are important for resolving the challenging complexity of brain structures. Building on this framework, we first propose a cross-modality registration method that fuses deformation fields to incorporate cross-modality information from the translated modalities. Second, we propose translated multichannel segmentation (TMS), an approach to MRI segmentation in which given modalities, together with their translated counterparts, are segmented by fully convolutional networks (FCNs) in a multichannel manner. Both methods exploit cross-modality information to improve performance without adding any extra data. Experiments demonstrate that our framework advances the state of the art on five brain MRI datasets, and we observe encouraging results for cross-modality registration and segmentation on widely adopted brain datasets. Overall, our work can serve as an auxiliary method in medical use and can be applied to various tasks in medical fields.
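The translation model is trained as a cGAN. As a rough illustration of the kind of objective such a model optimizes, the sketch below computes a pix2pix-style generator loss (an adversarial term plus an L1 reconstruction term). The function name, the lambda weight of 100, and the use of plain NumPy are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cgan_generator_loss(d_fake, fake, target, lam=100.0):
    """pix2pix-style generator objective (illustrative, not the
    paper's exact loss): an adversarial term that rewards fooling
    the discriminator, plus a lambda-weighted L1 term that keeps
    the translated image close to the ground-truth modality.

    d_fake: discriminator scores in (0, 1) for translated images.
    fake, target: translated and ground-truth images (same shape).
    """
    adv = -np.mean(np.log(d_fake + 1e-8))  # fool the discriminator
    l1 = np.mean(np.abs(fake - target))    # pixel-wise fidelity
    return adv + lam * l1

# Toy check: a perfect reconstruction leaves only the adversarial term.
loss = cgan_generator_loss(np.array([0.5]), np.zeros((4, 4)), np.zeros((4, 4)))
```

With a perfect reconstruction the L1 term vanishes and the loss reduces to -log(0.5); in practice the heavy lambda makes the L1 term dominate early training, which is why pix2pix-style translators converge to sharp, well-aligned outputs.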
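The registration method fuses deformation fields estimated from the given and translated modalities. A minimal sketch of one possible fusion, assuming dense displacement fields and a simple convex combination (the paper's exact fusion rule is not given in this record):

```python
import numpy as np

def fuse_deformation_fields(phi_given, phi_translated, alpha=0.5):
    """Fuse two dense deformation fields by weighted averaging.

    phi_given / phi_translated: arrays of shape (H, W, 2) holding the
    per-pixel displacement estimated by registering the given modality
    and the translated modality, respectively. The convex combination
    used here is an illustrative assumption.
    """
    if phi_given.shape != phi_translated.shape:
        raise ValueError("deformation fields must have the same shape")
    return alpha * phi_given + (1.0 - alpha) * phi_translated

# Toy example: fusing a unit displacement with a zero displacement.
fused = fuse_deformation_fields(np.ones((4, 4, 2)), np.zeros((4, 4, 2)))
```

Averaging the two fields lets complementary cross-modality evidence correct displacements that either modality alone estimates poorly.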
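The "multichannel manner" in TMS amounts to feeding the segmentation network both images at once. The sketch below builds such an input by stacking a given modality with its translated counterpart along a channel axis; array shapes and channel ordering are illustrative assumptions.

```python
import numpy as np

def stack_modalities(given, translated):
    """Build the multichannel input for translated multichannel
    segmentation (TMS): the given modality and its translated
    counterpart are stacked along a leading channel axis so a fully
    convolutional network sees both at once.

    given, translated: 2-D arrays of shape (H, W), same spatial size.
    Returns an array of shape (2, H, W).
    """
    if given.shape != translated.shape:
        raise ValueError("modalities must share spatial dimensions")
    return np.stack([given, translated], axis=0)

# Toy example: a 4x4 slice of the given modality plus its translation.
x = stack_modalities(np.random.rand(4, 4), np.random.rand(4, 4))
print(x.shape)  # (2, 4, 4)
```

Because the translated channel is generated from the given one, this enriches the network's input without requiring any additional acquired data, matching the abstract's claim.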


Bibliographic Details
Main Authors: Yang, Qianye; Li, Nannan; Zhao, Zixu; Fan, Xingyu; Chang, Eric I-Chao; Xu, Yan
Format: Online Article (Text)
Language: English
Published: Nature Publishing Group UK, 2020
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7048849/
https://www.ncbi.nlm.nih.gov/pubmed/32111966
http://dx.doi.org/10.1038/s41598-020-60520-6
Published in Sci Rep by Nature Publishing Group UK, 2020-02-28. © The Author(s) 2020. Open Access: distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).