Landmark tracking in 4D ultrasound using generalized representation learning

Bibliographic Details
Main Authors: Wulff, Daniel; Hagenah, Jannis; Ernst, Floris
Format: Online Article Text
Language: English
Published: Springer International Publishing 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9939499/
https://www.ncbi.nlm.nih.gov/pubmed/36242701
http://dx.doi.org/10.1007/s11548-022-02768-z
_version_ 1784890867082854400
author Wulff, Daniel
Hagenah, Jannis
Ernst, Floris
author_facet Wulff, Daniel
Hagenah, Jannis
Ernst, Floris
author_sort Wulff, Daniel
collection PubMed
description PURPOSE: In this study, we present and validate a novel concept for target tracking in 4D ultrasound. The key idea is to replace image patch similarity metrics with distances in a latent representation. For this, 3D ultrasound patches are mapped into a representation space using sliced-Wasserstein autoencoders. METHODS: A novel target tracking method for 4D ultrasound is presented that performs tracking in a representation space instead of in image space. Sliced-Wasserstein autoencoders are trained in an unsupervised manner and used to map 3D ultrasound patches into a representation space. The tracking procedure is based on a greedy algorithm that measures distances between representation vectors to relocate the target. The proposed algorithm is validated on an in vivo data set of liver images. Furthermore, three different concepts for training the autoencoder are presented to provide cross-patient generalizability while aiming at minimal training time on data of the individual patient. RESULTS: Eight annotated 4D ultrasound sequences are used to test the tracking method. Tracking could be performed in all sequences with all autoencoder training approaches. A mean tracking error of 3.23 mm was achieved using generalized fine-tuned autoencoders. Using generalized autoencoders and fine-tuning them achieves better tracking results than training subject-individual autoencoders. CONCLUSION: Distances between encoded image patches in a representation space can serve as a meaningful measure of image patch similarity, even under realistic deformations of the anatomical structure. Based on this, we validated the proposed tracking algorithm in an in vivo setting. Furthermore, our results indicate that fine-tuning generalized autoencoders on only a small number of patches from the individual patient yields promising results.
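The following is a minimal, illustrative sketch of the two ingredients named in the abstract: a sliced-Wasserstein loss term of the kind used to train the autoencoder, and greedy target relocation by nearest latent code. It is not the authors' implementation; the `encode` callable, the patch size, the search step, and the neighbourhood are illustrative assumptions.

```python
# Illustrative sketch only (assumed names/parameters, not the published method):
# track a target by comparing latent codes of 3D patches instead of raw voxels.
import numpy as np

PATCH = (16, 16, 16)  # assumed 3D patch size in voxels

def sliced_wasserstein(z_a, z_b, n_proj=50, rng=None):
    """Approximate squared sliced-Wasserstein distance between two equally sized
    batches of latent codes; illustrates the loss family used to train a
    sliced-Wasserstein autoencoder against a prior."""
    rng = rng or np.random.default_rng(0)
    d = z_a.shape[1]
    theta = rng.normal(size=(n_proj, d))                 # random directions
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    proj_a = np.sort(z_a @ theta.T, axis=0)              # 1D projections, sorted
    proj_b = np.sort(z_b @ theta.T, axis=0)
    return np.mean((proj_a - proj_b) ** 2)

def extract_patch(volume, center):
    """Crop a 3D patch around `center` (no bounds handling in this sketch)."""
    sl = tuple(slice(c - s // 2, c + s // 2) for c, s in zip(center, PATCH))
    return volume[sl]

def greedy_track(volume, prev_center, target_code, encode, step=2, n_iter=10):
    """Greedy relocation: repeatedly move to the neighbouring patch whose
    latent code is closest (Euclidean) to the reference code of the target."""
    center = np.array(prev_center)
    offsets = step * np.array([[dx, dy, dz]
                               for dx in (-1, 0, 1)
                               for dy in (-1, 0, 1)
                               for dz in (-1, 0, 1)])
    for _ in range(n_iter):
        candidates = center + offsets
        codes = np.stack([encode(extract_patch(volume, tuple(c)))
                          for c in candidates])
        dists = np.linalg.norm(codes - target_code, axis=1)
        best = candidates[int(np.argmin(dists))]
        if np.array_equal(best, center):  # current position is the local minimum
            break
        center = best
    return tuple(center)
```

In use, a reference code would be taken from the annotated target patch in the first frame of a sequence, and `greedy_track` would then be called once per subsequent volume, starting from the position found in the previous frame.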
format Online
Article
Text
id pubmed-9939499
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Springer International Publishing
record_format MEDLINE/PubMed
spelling pubmed-9939499 2023-02-21 Landmark tracking in 4D ultrasound using generalized representation learning Wulff, Daniel Hagenah, Jannis Ernst, Floris Int J Comput Assist Radiol Surg Original Article Springer International Publishing 2022-10-15 2023 /pmc/articles/PMC9939499/ /pubmed/36242701 http://dx.doi.org/10.1007/s11548-022-02768-z Text en © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Original Article
Wulff, Daniel
Hagenah, Jannis
Ernst, Floris
Landmark tracking in 4D ultrasound using generalized representation learning
title Landmark tracking in 4D ultrasound using generalized representation learning
title_full Landmark tracking in 4D ultrasound using generalized representation learning
title_fullStr Landmark tracking in 4D ultrasound using generalized representation learning
title_full_unstemmed Landmark tracking in 4D ultrasound using generalized representation learning
title_short Landmark tracking in 4D ultrasound using generalized representation learning
title_sort landmark tracking in 4d ultrasound using generalized representation learning
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9939499/
https://www.ncbi.nlm.nih.gov/pubmed/36242701
http://dx.doi.org/10.1007/s11548-022-02768-z
work_keys_str_mv AT wulffdaniel landmarktrackingin4dultrasoundusinggeneralizedrepresentationlearning
AT hagenahjannis landmarktrackingin4dultrasoundusinggeneralizedrepresentationlearning
AT ernstfloris landmarktrackingin4dultrasoundusinggeneralizedrepresentationlearning