
Weakly-supervised convolutional neural networks for multimodal image registration

One of the fundamental challenges in supervised learning for multimodal image registration is the lack of ground-truth for voxel-level spatial correspondence. This work describes a method to infer voxel-level transformation from higher-level correspondence information contained in anatomical labels. We argue that such labels are more reliable and practical to obtain for reference sets of image pairs than voxel-level correspondence. Typical anatomical labels of interest may include solid organs, vessels, ducts, structure boundaries and other subject-specific ad hoc landmarks. The proposed end-to-end convolutional neural network approach aims to predict displacement fields to align multiple labelled corresponding structures for individual image pairs during training, while only unlabelled image pairs are used as the network input for inference. We highlight the versatility of the proposed strategy for training, utilising diverse types of anatomical labels, which need not be identifiable across all training image pairs. At inference, the resulting 3D deformable image registration algorithm runs in real time and is fully automated, requiring no anatomical labels or initialisation. Several network architecture variants are compared for registering T2-weighted magnetic resonance images and 3D transrectal ultrasound images from prostate cancer patients. A median target registration error of 3.6 mm on landmark centroids and a median Dice of 0.87 on prostate glands are achieved in cross-validation experiments, in which 108 pairs of multimodal images from 76 patients were tested with high-quality anatomical labels.
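The weakly-supervised training idea described in the abstract can be illustrated with a small sketch (not the paper's implementation, which uses a multiscale label-similarity loss): the network predicts a dense displacement field (DDF) from an unlabelled image pair, the moving image's anatomical labels are warped by that field, and the loss scores overlap against the fixed image's labels. The helper names below (`warp_labels_nn`, `dice_loss`) are hypothetical, and nearest-neighbour resampling plus a plain Dice loss stand in for the paper's differentiable, smoothed formulation:

```python
import numpy as np

def warp_labels_nn(labels, ddf):
    """Warp a binary label volume with a dense displacement field (DDF)
    using nearest-neighbour resampling.
    labels: (D, H, W) binary array of the moving image's anatomical label.
    ddf:    (D, H, W, 3) voxel displacements predicted by the network.
    """
    D, H, W = labels.shape
    # Identity sampling grid of voxel coordinates, shape (D, H, W, 3).
    grid = np.stack(np.meshgrid(np.arange(D), np.arange(H), np.arange(W),
                                indexing="ij"), axis=-1)
    # Each output voxel samples the moving label at (position + displacement).
    src = np.rint(grid + ddf).astype(int)
    # Clamp sample coordinates to the volume bounds.
    for axis, size in enumerate((D, H, W)):
        src[..., axis] = np.clip(src[..., axis], 0, size - 1)
    return labels[src[..., 0], src[..., 1], src[..., 2]]

def dice_loss(warped, fixed, eps=1e-6):
    """1 - Dice overlap between the warped moving label and the fixed label;
    minimising this drives the predicted DDF to align the two structures."""
    intersection = np.sum(warped * fixed)
    return 1.0 - (2.0 * intersection + eps) / (warped.sum() + fixed.sum() + eps)
```

In the paper's setting this loss is evaluated only during training, over whichever labelled structures happen to exist for each image pair; at inference the network sees only the two images and outputs the DDF directly.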

Bibliographic Details
Main Authors: Hu, Yipeng, Modat, Marc, Gibson, Eli, Li, Wenqi, Ghavami, Nooshin, Bonmati, Ester, Wang, Guotai, Bandula, Steven, Moore, Caroline M., Emberton, Mark, Ourselin, Sébastien, Noble, J. Alison, Barratt, Dean C., Vercauteren, Tom
Format: Online Article Text
Language: English
Published: 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6742510/
https://www.ncbi.nlm.nih.gov/pubmed/30007253
http://dx.doi.org/10.1016/j.media.2018.07.002
Journal: Med Image Anal
Published: 2018-07-04 (online); 2018-10-01 (issue)
License: This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/)