Real-time multimodal image registration with partial intraoperative point-set data
We present Free Point Transformer (FPT) – a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods.
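The abstract describes FPT as two modules, a global feature extraction module and a point transformation module, that accept unordered point-sets with a variable number of points and are trained by minimizing an unsupervised loss (see the description field below). The following is a minimal sketch of that style of architecture, assuming a PointNet-style max-pooled feature extractor, per-point displacement prediction, and a Chamfer-distance loss; all class names, layer sizes, and the specific loss are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a two-module point-set registration network in the
# spirit of the abstract: a global feature extraction module and a point
# transformation module, trained with an unsupervised Chamfer-style loss.
# Names, layer sizes, and the loss are illustrative assumptions only.
import torch
import torch.nn as nn


class GlobalFeatureExtractor(nn.Module):
    """Shared per-point MLP followed by max-pooling, applied to each
    point-set to produce an order- and size-invariant global feature."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, feat_dim),
        )

    def forward(self, pts: torch.Tensor) -> torch.Tensor:
        # pts: (B, N, 3) -> (B, feat_dim); max-pooling makes the feature
        # invariant to point ordering and to the number of points N.
        return self.mlp(pts).max(dim=1).values


class PointTransformer(nn.Module):
    """Predicts a free per-point displacement for the moving set,
    conditioned on the joint global feature (no vicinity constraints)."""
    def __init__(self, feat_dim: int = 1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 3),
        )

    def forward(self, moving: torch.Tensor, feat: torch.Tensor) -> torch.Tensor:
        # moving: (B, N, 3), feat: (B, feat_dim)
        cond = feat.unsqueeze(1).expand(-1, moving.shape[1], -1)
        return moving + self.mlp(torch.cat([moving, cond], dim=-1))


def chamfer_loss(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance, a common unsupervised surrogate when
    ground-truth deformations cannot be measured."""
    d = torch.cdist(a, b)  # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


# Illustrative usage: register a "moving" MR-derived point-set to a
# sparse "fixed" intraoperative (e.g. TRUS) point-set.
extractor = GlobalFeatureExtractor(feat_dim=512)
transformer = PointTransformer(feat_dim=1024)
moving = torch.rand(1, 1024, 3)   # e.g. MR prostate surface points
fixed = torch.rand(1, 200, 3)     # e.g. sparsely-sampled TRUS points
feat = torch.cat([extractor(moving), extractor(fixed)], dim=-1)
warped = transformer(moving, feat)
loss = chamfer_loss(warped, fixed)
loss.backward()
```

Max-pooling over per-point features is what allows unordered inputs of variable size, and predicting free per-point displacements rather than a parametric deformation mirrors the "model-free" framing in the abstract.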
Main Authors: | Baum, Zachary M C; Hu, Yipeng; Barratt, Dean C |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Elsevier 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8566274/ https://www.ncbi.nlm.nih.gov/pubmed/34583240 http://dx.doi.org/10.1016/j.media.2021.102231 |
_version_ | 1784593979521630208 |
---|---|
author | Baum, Zachary M C; Hu, Yipeng; Barratt, Dean C
author_facet | Baum, Zachary M C; Hu, Yipeng; Barratt, Dean C
author_sort | Baum, Zachary M C |
collection | PubMed |
description | We present Free Point Transformer (FPT) – a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a “model-free” approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete TRUS imaging and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested and substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial. |
format | Online Article Text |
id | pubmed-8566274 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Elsevier |
record_format | MEDLINE/PubMed |
spelling | pubmed-8566274 2021-12-01 Real-time multimodal image registration with partial intraoperative point-set data Baum, Zachary M C Hu, Yipeng Barratt, Dean C Med Image Anal Article We present Free Point Transformer (FPT) – a deep neural network architecture for non-rigid point-set registration. Consisting of two modules, a global feature extraction module and a point transformation module, FPT does not assume explicit constraints based on point vicinity, thereby overcoming a common requirement of previous learning-based point-set registration methods. FPT is designed to accept unordered and unstructured point-sets with a variable number of points and uses a “model-free” approach without heuristic constraints. Training FPT is flexible and involves minimizing an intuitive unsupervised loss function, but supervised, semi-supervised, and partially- or weakly-supervised training are also supported. This flexibility makes FPT amenable to multimodal image registration problems where the ground-truth deformations are difficult or impossible to measure. In this paper, we demonstrate the application of FPT to non-rigid registration of prostate magnetic resonance (MR) imaging and sparsely-sampled transrectal ultrasound (TRUS) images. The registration errors were 4.71 mm and 4.81 mm for complete TRUS imaging and sparsely-sampled TRUS imaging, respectively. The results indicate superior accuracy to the alternative rigid and non-rigid registration algorithms tested and substantially lower computation time. The rapid inference possible with FPT makes it particularly suitable for applications where real-time registration is beneficial. Elsevier 2021-12 /pmc/articles/PMC8566274/ /pubmed/34583240 http://dx.doi.org/10.1016/j.media.2021.102231 Text en © 2021 The Author(s) https://creativecommons.org/licenses/by/4.0/ This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Baum, Zachary M C Hu, Yipeng Barratt, Dean C Real-time multimodal image registration with partial intraoperative point-set data |
title | Real-time multimodal image registration with partial intraoperative point-set data |
title_full | Real-time multimodal image registration with partial intraoperative point-set data |
title_fullStr | Real-time multimodal image registration with partial intraoperative point-set data |
title_full_unstemmed | Real-time multimodal image registration with partial intraoperative point-set data |
title_short | Real-time multimodal image registration with partial intraoperative point-set data |
title_sort | real-time multimodal image registration with partial intraoperative point-set data |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8566274/ https://www.ncbi.nlm.nih.gov/pubmed/34583240 http://dx.doi.org/10.1016/j.media.2021.102231 |
work_keys_str_mv | AT baumzacharymc realtimemultimodalimageregistrationwithpartialintraoperativepointsetdata AT huyipeng realtimemultimodalimageregistrationwithpartialintraoperativepointsetdata AT barrattdeanc realtimemultimodalimageregistrationwithpartialintraoperativepointsetdata |