
Unsupervised Image Registration towards Enhancing Performance and Explainability in Cardiac and Brain Image Analysis


Bibliographic Details
Main Authors: Wang, Chengjia; Yang, Guang; Papanastasiou, Giorgos
Format: Online, Article, Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8951078/
https://www.ncbi.nlm.nih.gov/pubmed/35336295
http://dx.doi.org/10.3390/s22062125
collection PubMed
description Magnetic Resonance Imaging (MRI) typically recruits multiple sequences (defined here as “modalities”). As each modality is designed to offer different anatomical and functional clinical information, there are evident disparities in imaging content across modalities. Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging, for example before imaging biomarkers are derived and clinically evaluated across different MRI modalities, time phases and slices. Although commonly needed in real clinical scenarios, affine and non-rigid image registration has not been extensively investigated using a single unsupervised model architecture. In our work, we present an unsupervised deep learning registration methodology that can accurately model affine and non-rigid transformations simultaneously. Moreover, inverse consistency is a fundamental inter-modality registration property that is generally not considered in deep learning registration algorithms. To address inverse consistency, our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations, and involves two factorised transformation networks (one per encoder-decoder channel) and an inverse-consistency loss to learn topology-preserving anatomical transformations. Overall, our model (named “FIRE”) shows improved performance against the reference standard baseline method (Symmetric Normalization, implemented in the ANTs toolbox) on multi-modality brain 2D and 3D MRI and intra-modality cardiac 4D MRI experiments. We focus on explaining model-data components to enhance model explainability in medical image registration. In computational time experiments, we show that the FIRE model operates in a memory-saving mode, as it inherently learns topology-preserving image registration directly during training. We therefore demonstrate an efficient and versatile registration technique that can have merit in multi-modal image registration in the clinical setting.
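The inverse-consistency loss named in the abstract penalizes forward and backward deformation fields that fail to cancel when composed. The following NumPy sketch illustrates the general idea only; it is not the FIRE implementation, and the field layout, nearest-neighbour resampling, and function name are assumptions for the sake of a self-contained example.

```python
import numpy as np

def inverse_consistency_loss(u_fwd, u_bwd):
    """Mean squared residual of composing forward and backward 2D
    displacement fields; zero when the fields are exact inverses.

    u_fwd, u_bwd: arrays of shape (H, W, 2) holding per-pixel
    displacements in (row, col) order.
    """
    H, W, _ = u_fwd.shape
    rows, cols = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    # Positions after applying the forward field: x + u_fwd(x),
    # rounded and clipped for simple nearest-neighbour sampling.
    r = np.clip(np.rint(rows + u_fwd[..., 0]).astype(int), 0, H - 1)
    c = np.clip(np.rint(cols + u_fwd[..., 1]).astype(int), 0, W - 1)
    # Backward displacement sampled at the warped positions.
    u_bwd_at_warped = u_bwd[r, c]
    # u_fwd(x) + u_bwd(x + u_fwd(x)) should vanish everywhere.
    residual = u_fwd + u_bwd_at_warped
    return float(np.mean(np.sum(residual ** 2, axis=-1)))

# A constant one-pixel shift and its negation are exact inverses,
# so the loss is zero.
u_fwd = np.zeros((8, 8, 2))
u_fwd[..., 0] = 1.0  # shift every pixel one row down
loss = inverse_consistency_loss(u_fwd, -u_fwd)  # → 0.0
```

In practice such a term is differentiable (with bilinear rather than nearest-neighbour sampling) and is minimized jointly with the similarity and regularization losses during training.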
id pubmed-8951078
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
Journal: Sensors (Basel), Article
Published online: 2022-03-09
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
topic Article