Robust Orthogonal-View 2-D/3-D Rigid Registration for Minimally Invasive Surgery
Main Authors: An, Zhou; Ma, Honghai; Liu, Lilu; Wang, Yue; Lu, Haojian; Zhou, Chunlin; Xiong, Rong; Hu, Jian
Format: Online Article Text
Language: English
Published: MDPI, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8303962/ https://www.ncbi.nlm.nih.gov/pubmed/34357254 http://dx.doi.org/10.3390/mi12070844
_version_ | 1783727217751097344 |
author | An, Zhou Ma, Honghai Liu, Lilu Wang, Yue Lu, Haojian Zhou, Chunlin Xiong, Rong Hu, Jian |
author_facet | An, Zhou Ma, Honghai Liu, Lilu Wang, Yue Lu, Haojian Zhou, Chunlin Xiong, Rong Hu, Jian |
author_sort | An, Zhou |
collection | PubMed |
description | Intra-operative target pose estimation is fundamental in minimally invasive surgery (MIS) for guiding surgical robots. This task can be fulfilled by 2-D/3-D rigid registration, which aligns anatomical structures between intra-operative 2-D fluoroscopy and pre-operative 3-D computed tomography (CT) carrying annotated target information. Although this technique has been researched for decades, achieving accuracy, robustness and efficiency simultaneously remains challenging. In this paper, a novel orthogonal-view 2-D/3-D rigid registration framework is proposed that combines deep-learning-based dense reconstruction with GPU-accelerated 3-D/3-D rigid registration. First, we employ X2CT-GAN to reconstruct a target CT from two orthogonal fluoroscopy images. The generated target CT and the pre-operative CT are then fed into the 3-D/3-D rigid registration stage, which typically needs only a few iterations to converge to the global optimum. To further improve efficiency, we parallelize the 3-D/3-D registration algorithm and accelerate it on a GPU. For evaluation, a novel tool is employed to preprocess the public head CT dataset CQ500, and a CT-DRR dataset is presented as the benchmark. The proposed method achieves 1.65 ± 1.41 mm in mean target registration error (mTRE), 20% in gross failure rate (GFR) and 1.8 s in running time, outperforming state-of-the-art methods in most test cases. It is promising to apply the proposed method to the localization and nano-manipulation of micro surgical robots for highly precise MIS. |
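The description above outlines a two-stage pipeline: (1) X2CT-GAN reconstructs a coarse "target" CT volume from two orthogonal fluoroscopy views, and (2) that volume is rigidly registered to the pre-operative CT in 3-D, with the optimization parallelized on a GPU; accuracy is reported as mTRE and GFR. The following Python sketch illustrates that structure under stated assumptions; it is not the authors' implementation. `reconstruct_ct` is a hypothetical placeholder for X2CT-GAN inference, SimpleITK's CPU registration stands in for the paper's GPU-parallel optimizer, and the metric definitions follow common conventions (GFR as the share of cases with TRE above a threshold, often 10 mm), which may differ in detail from the paper's.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# NOT the authors' implementation: reconstruct_ct() is a hypothetical
# placeholder for X2CT-GAN inference, and SimpleITK's CPU optimizer stands
# in for the paper's GPU-parallel 3-D/3-D registration.
import numpy as np
import SimpleITK as sitk


def reconstruct_ct(frontal_view, lateral_view):
    """Hypothetical stand-in for X2CT-GAN: two orthogonal fluoroscopy
    images -> a coarse reconstructed "target" CT volume."""
    raise NotImplementedError("run a trained X2CT-GAN model here")


def register_3d3d(fixed_ct, moving_ct):
    """Rigid (6-DoF) 3-D/3-D registration of the reconstructed target CT
    against the pre-operative CT; returns the estimated transform."""
    fixed = sitk.Cast(fixed_ct, sitk.sitkFloat32)    # registration needs float voxels
    moving = sitk.Cast(moving_ct, sitk.sitkFloat32)
    reg = sitk.ImageRegistrationMethod()
    reg.SetMetricAsMeanSquares()                     # simple intensity metric for illustration
    reg.SetInterpolator(sitk.sitkLinear)
    reg.SetOptimizerAsRegularStepGradientDescent(
        learningRate=1.0, minStep=1e-4, numberOfIterations=200)
    init = sitk.CenteredTransformInitializer(
        fixed, moving, sitk.Euler3DTransform(),
        sitk.CenteredTransformInitializerFilter.GEOMETRY)
    reg.SetInitialTransform(init, inPlace=False)
    return reg.Execute(fixed, moving)


def mtre(landmarks_mm, t_est, t_gt):
    """mean Target Registration Error: mean distance (mm) between target
    landmarks mapped by the estimated vs. the ground-truth transform."""
    errors = [np.linalg.norm(np.asarray(t_est.TransformPoint(tuple(p))) -
                             np.asarray(t_gt.TransformPoint(tuple(p))))
              for p in landmarks_mm]
    return float(np.mean(errors))


def gfr(tre_per_case_mm, threshold_mm=10.0):
    """Gross Failure Rate: fraction of test cases whose TRE exceeds a
    threshold (10 mm is a common convention; the paper's may differ)."""
    return float(np.mean(np.asarray(tre_per_case_mm) > threshold_mm))
```

Swapping the CPU optimizer above for a GPU-parallel cost evaluation is where the reported 1.8 s running time would come from; SimpleITK is used here only to keep the sketch self-contained and runnable.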
format | Online Article Text |
id | pubmed-8303962 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8303962 2021-07-25 Robust Orthogonal-View 2-D/3-D Rigid Registration for Minimally Invasive Surgery An, Zhou Ma, Honghai Liu, Lilu Wang, Yue Lu, Haojian Zhou, Chunlin Xiong, Rong Hu, Jian Micromachines (Basel) Article Intra-operative target pose estimation is fundamental in minimally invasive surgery (MIS) for guiding surgical robots. This task can be fulfilled by 2-D/3-D rigid registration, which aligns anatomical structures between intra-operative 2-D fluoroscopy and pre-operative 3-D computed tomography (CT) carrying annotated target information. Although this technique has been researched for decades, achieving accuracy, robustness and efficiency simultaneously remains challenging. In this paper, a novel orthogonal-view 2-D/3-D rigid registration framework is proposed that combines deep-learning-based dense reconstruction with GPU-accelerated 3-D/3-D rigid registration. First, we employ X2CT-GAN to reconstruct a target CT from two orthogonal fluoroscopy images. The generated target CT and the pre-operative CT are then fed into the 3-D/3-D rigid registration stage, which typically needs only a few iterations to converge to the global optimum. To further improve efficiency, we parallelize the 3-D/3-D registration algorithm and accelerate it on a GPU. For evaluation, a novel tool is employed to preprocess the public head CT dataset CQ500, and a CT-DRR dataset is presented as the benchmark. The proposed method achieves 1.65 ± 1.41 mm in mean target registration error (mTRE), 20% in gross failure rate (GFR) and 1.8 s in running time, outperforming state-of-the-art methods in most test cases. It is promising to apply the proposed method to the localization and nano-manipulation of micro surgical robots for highly precise MIS. MDPI 2021-07-20 /pmc/articles/PMC8303962/ /pubmed/34357254 http://dx.doi.org/10.3390/mi12070844 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article An, Zhou Ma, Honghai Liu, Lilu Wang, Yue Lu, Haojian Zhou, Chunlin Xiong, Rong Hu, Jian Robust Orthogonal-View 2-D/3-D Rigid Registration for Minimally Invasive Surgery |
title | Robust Orthogonal-View 2-D/3-D Rigid Registration for Minimally Invasive Surgery |
title_full | Robust Orthogonal-View 2-D/3-D Rigid Registration for Minimally Invasive Surgery |
title_fullStr | Robust Orthogonal-View 2-D/3-D Rigid Registration for Minimally Invasive Surgery |
title_full_unstemmed | Robust Orthogonal-View 2-D/3-D Rigid Registration for Minimally Invasive Surgery |
title_short | Robust Orthogonal-View 2-D/3-D Rigid Registration for Minimally Invasive Surgery |
title_sort | robust orthogonal-view 2-d/3-d rigid registration for minimally invasive surgery |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8303962/ https://www.ncbi.nlm.nih.gov/pubmed/34357254 http://dx.doi.org/10.3390/mi12070844 |
work_keys_str_mv | AT anzhou robustorthogonalview2d3drigidregistrationforminimallyinvasivesurgery AT mahonghai robustorthogonalview2d3drigidregistrationforminimallyinvasivesurgery AT liulilu robustorthogonalview2d3drigidregistrationforminimallyinvasivesurgery AT wangyue robustorthogonalview2d3drigidregistrationforminimallyinvasivesurgery AT luhaojian robustorthogonalview2d3drigidregistrationforminimallyinvasivesurgery AT zhouchunlin robustorthogonalview2d3drigidregistrationforminimallyinvasivesurgery AT xiongrong robustorthogonalview2d3drigidregistrationforminimallyinvasivesurgery AT hujian robustorthogonalview2d3drigidregistrationforminimallyinvasivesurgery |