
Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach

Multi-modal image registration is the primary step in integrating information stored in two or more images captured using different imaging modalities. In addition to intensity variations and structural differences between the images, they may have partial or full overlap, which adds an extra hurdle to the success of the registration process.


Bibliographic Details
Main Authors: Bashiri, Fereshteh S., Baghaie, Ahmadreza, Rostami, Reihaneh, Yu, Zeyun, D’Souza, Roshan M.
Format: Online Article Text
Language: English
Published: MDPI 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8320870/
https://www.ncbi.nlm.nih.gov/pubmed/34470183
http://dx.doi.org/10.3390/jimaging5010005
_version_ 1783730717637738496
author Bashiri, Fereshteh S.
Baghaie, Ahmadreza
Rostami, Reihaneh
Yu, Zeyun
D’Souza, Roshan M.
author_facet Bashiri, Fereshteh S.
Baghaie, Ahmadreza
Rostami, Reihaneh
Yu, Zeyun
D’Souza, Roshan M.
author_sort Bashiri, Fereshteh S.
collection PubMed
description Multi-modal image registration is the primary step in integrating information stored in two or more images captured using different imaging modalities. In addition to intensity variations and structural differences between the images, they may have partial or full overlap, which adds an extra hurdle to the success of the registration process. In this contribution, we propose a multi-modal to mono-modal transformation method that enables the direct application of well-established mono-modal registration methods to obtain accurate alignment of multi-modal images in both cases, with complete (full) and incomplete (partial) overlap. The proposed transformation makes it possible to recover large scalings, rotations, and translations. We explain the method thoroughly and discuss the choice of parameters. For evaluation, the effectiveness of the proposed method is examined and compared with widely used information-theoretic techniques using simulated and clinical human brain images with full data. Using the RIRE dataset, mean absolute errors of 1.37, 1.00, and 1.41 mm are obtained for registering CT images with PD-, T1-, and T2-weighted MR images, respectively. Finally, we empirically investigate the efficacy of the proposed transformation in registering multi-modal, partially overlapping images.
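The description outlines a two-stage pipeline: map each multi-modal image into a common mono-modal (structural) representation via manifold learning, then align the resulting images with a standard mono-modal registration method. The Python sketch below illustrates that general idea only and is not the authors' implementation; the patch size, the use of scikit-learn's SpectralEmbedding (a Laplacian-eigenmaps-style embedding), and the translation-only phase-correlation registration are illustrative assumptions.

# Minimal sketch of a manifold-learning multi-modal -> mono-modal pipeline,
# NOT the authors' method. Patch size, SpectralEmbedding, and translation-only
# registration are assumptions made for illustration; intended for small 2-D
# arrays, since embedding every pixel patch is expensive.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from skimage.util import view_as_windows
from skimage.registration import phase_cross_correlation

def to_mono_modal(image, patch_size=5, n_neighbors=10):
    """Map an image to a modality-independent intensity map by embedding
    local patches onto a 1-D manifold."""
    pad = patch_size // 2
    padded = np.pad(image, pad, mode="reflect")
    patches = view_as_windows(padded, (patch_size, patch_size))
    h, w = image.shape
    features = patches.reshape(h * w, -1).astype(float)
    # One embedding coordinate per pixel; its sign and scale are arbitrary,
    # so the result is normalized to [0, 1] before registration.
    embedder = SpectralEmbedding(n_components=1, n_neighbors=n_neighbors)
    mono = embedder.fit_transform(features).reshape(h, w)
    mono = mono - mono.min()
    return mono / (np.ptp(mono) + 1e-12)

def register_translation(fixed, moving):
    """Mono-modal, translation-only registration of the embedded images via
    phase correlation (stands in for any mono-modal registration method)."""
    shift, _, _ = phase_cross_correlation(fixed, moving)
    return shift

# Hypothetical usage with two small 2-D slices `ct_slice` and `mr_slice`:
#   fixed = to_mono_modal(ct_slice)
#   moving = to_mono_modal(mr_slice)
#   dy, dx = register_translation(fixed, moving)

Because each image is embedded independently here, the two mono-modal maps are only comparable after normalization, and the arbitrary sign of a spectral embedding may still need to be fixed in practice; a full method would also handle rotation, scaling, and partial overlap, which this sketch does not attempt.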
format Online
Article
Text
id pubmed-8320870
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-8320870 2021-08-26 Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach Bashiri, Fereshteh S.; Baghaie, Ahmadreza; Rostami, Reihaneh; Yu, Zeyun; D’Souza, Roshan M. J Imaging Article Multi-modal image registration is the primary step in integrating information stored in two or more images captured using different imaging modalities. In addition to intensity variations and structural differences between the images, they may have partial or full overlap, which adds an extra hurdle to the success of the registration process. In this contribution, we propose a multi-modal to mono-modal transformation method that enables the direct application of well-established mono-modal registration methods to obtain accurate alignment of multi-modal images in both cases, with complete (full) and incomplete (partial) overlap. The proposed transformation makes it possible to recover large scalings, rotations, and translations. We explain the method thoroughly and discuss the choice of parameters. For evaluation, the effectiveness of the proposed method is examined and compared with widely used information-theoretic techniques using simulated and clinical human brain images with full data. Using the RIRE dataset, mean absolute errors of 1.37, 1.00, and 1.41 mm are obtained for registering CT images with PD-, T1-, and T2-weighted MR images, respectively. Finally, we empirically investigate the efficacy of the proposed transformation in registering multi-modal, partially overlapping images. MDPI 2018-12-30 /pmc/articles/PMC8320870/ /pubmed/34470183 http://dx.doi.org/10.3390/jimaging5010005 Text en © 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Bashiri, Fereshteh S.
Baghaie, Ahmadreza
Rostami, Reihaneh
Yu, Zeyun
D’Souza, Roshan M.
Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach
title Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach
title_full Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach
title_fullStr Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach
title_full_unstemmed Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach
title_short Multi-Modal Medical Image Registration with Full or Partial Data: A Manifold Learning Approach
title_sort multi-modal medical image registration with full or partial data: a manifold learning approach
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8320870/
https://www.ncbi.nlm.nih.gov/pubmed/34470183
http://dx.doi.org/10.3390/jimaging5010005
work_keys_str_mv AT bashirifereshtehs multimodalmedicalimageregistrationwithfullorpartialdataamanifoldlearningapproach
AT baghaieahmadreza multimodalmedicalimageregistrationwithfullorpartialdataamanifoldlearningapproach
AT rostamireihaneh multimodalmedicalimageregistrationwithfullorpartialdataamanifoldlearningapproach
AT yuzeyun multimodalmedicalimageregistrationwithfullorpartialdataamanifoldlearningapproach
AT dsouzaroshanm multimodalmedicalimageregistrationwithfullorpartialdataamanifoldlearningapproach