
A comparison of point-tracking algorithms in ultrasound videos from the upper limb

Tracking points in ultrasound (US) videos can be especially useful to characterize tissues in motion. Tracking algorithms that analyze successive video frames, such as variations of Optical Flow and Lucas–Kanade (LK), exploit frame-to-frame temporal information to track regions of interest. In contrast, convolutional neural-network (CNN) models process each video frame independently of neighboring frames. In this paper, we show that frame-to-frame trackers accumulate error over time. We propose three interpolation-like methods to combat error accumulation and show that all three methods reduce tracking errors in frame-to-frame trackers. On the neural-network end, we show that a CNN-based tracker, DeepLabCut (DLC), outperforms all four frame-to-frame trackers when tracking tissues in motion. DLC is more accurate than the frame-to-frame trackers and less sensitive to variations in types of tissue movement. The only caveat found with DLC comes from its non-temporal tracking strategy, leading to jitter between consecutive frames. Overall, when tracking points in videos of moving tissue, we recommend using DLC when prioritizing accuracy and robustness across movements in videos, and using LK with the proposed error-correction methods for small movements when tracking jitter is unacceptable.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12938-023-01105-y.
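
The abstract contrasts frame-to-frame trackers with per-frame CNN trackers. As a rough, generic illustration of what frame-to-frame tracking looks like in code (not the authors' implementation), the sketch below propagates a point through a video with OpenCV's pyramidal Lucas–Kanade routine, cv2.calcOpticalFlowPyrLK; the video path, seed coordinates, and tracker parameters are placeholder assumptions.

import cv2
import numpy as np

# Hypothetical input video and seed point; replace with real data.
cap = cv2.VideoCapture("ultrasound.mp4")
ok, prev_frame = cap.read()
assert ok, "could not read the first frame"
prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)

# One seed point to track, shape (N, 1, 2), float32 pixel coordinates.
pts = np.array([[[120.0, 80.0]]], dtype=np.float32)
trajectory = [pts.reshape(-1, 2).copy()]

lk_params = dict(
    winSize=(21, 21),
    maxLevel=3,
    criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01),
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Frame-to-frame step: the estimate from frame t seeds the search in frame t+1,
    # which is why small per-frame errors can accumulate over a long video.
    new_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None, **lk_params)
    pts = np.where(status[..., None] == 1, new_pts, pts)  # keep last estimate on failure
    trajectory.append(pts.reshape(-1, 2).copy())
    prev_gray = gray

cap.release()
# trajectory[t] holds the tracked (x, y) location(s) at frame t.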

Bibliographic Details
Main Authors: Magana-Salgado, Uriel; Namburi, Praneeth; Feigin-Almon, Micha; Pallares-Lopez, Roger; Anthony, Brian
Format: Online Article Text
Language: English
Published: BioMed Central 2023
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10207829/
https://www.ncbi.nlm.nih.gov/pubmed/37226240
http://dx.doi.org/10.1186/s12938-023-01105-y
Collection: PubMed
Record ID: pubmed-10207829
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Published in: Biomed Eng Online (Research). BioMed Central, 24 May 2023.
Rights: © The Author(s) 2023. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. Images or other third-party material in this article are included in the article's Creative Commons licence unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.