
Joint keypoint detection and description network for color fundus image registration

BACKGROUND: Retinal imaging is widely used to diagnose many diseases, both systemic and eye-specific. In these cases, image registration, which is the process of aligning images taken from different viewpoints or moments in time, is fundamental to compare different images and to assess changes in their appearance, commonly caused by disease progression.

Full description

Bibliographic Details
Main Authors: Rivas-Villar, David, Hervella, Álvaro S., Rouco, José, Novo, Jorge
Format: Online Article Text
Language: English
Published: AME Publishing Company 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10347320/
https://www.ncbi.nlm.nih.gov/pubmed/37456305
http://dx.doi.org/10.21037/qims-23-4
_version_ 1785073522314313728
author Rivas-Villar, David
Hervella, Álvaro S.
Rouco, José
Novo, Jorge
author_facet Rivas-Villar, David
Hervella, Álvaro S.
Rouco, José
Novo, Jorge
author_sort Rivas-Villar, David
collection PubMed
description BACKGROUND: Retinal imaging is widely used to diagnose many diseases, both systemic and eye-specific. In these cases, image registration, which is the process of aligning images taken from different viewpoints or moments in time, is fundamental to compare different images and to assess changes in their appearance, commonly caused by disease progression. Currently, the field of color fundus registration is dominated by classical methods, as deep learning alternatives have not shown sufficient improvement over classical methods to justify the added computational cost. However, deep learning registration methods are still considered beneficial as they can be easily adapted to different modalities and devices following a data-driven learning approach. METHODS: In this work, we propose a novel methodology to register color fundus images using deep learning for the joint detection and description of keypoints. In particular, we use an unsupervised neural network trained to obtain repeatable keypoints and reliable descriptors. These keypoints and descriptors allow us to produce an accurate registration using RANdom SAmple Consensus (RANSAC). We train the method using the Messidor dataset and test it with the Fundus Image Registration Dataset (FIRE), both of which are publicly accessible. RESULTS: Our work demonstrates a color fundus registration method that is robust to changes in imaging devices and capture conditions. Moreover, we conduct multiple experiments exploring several of the method’s parameters to assess their impact on the registration performance. The method obtained an overall Registration Score of 0.695 for the whole FIRE dataset (0.925 for category S, 0.352 for P, and 0.726 for A). CONCLUSIONS: Our proposal improves the results of previous deep learning methods in every category and surpasses the performance of classical approaches in category A, which includes disease progression and thus represents the most relevant scenario for clinical practice, as registration is commonly used for disease monitoring in patients.
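
To illustrate the registration stage summarized in the abstract (keypoints and descriptors matched and fed to RANSAC to estimate a transform), the following Python sketch shows a generic keypoints-plus-RANSAC pipeline using OpenCV. It is a minimal, hypothetical example and not the authors' implementation: the detect_and_describe function is a stand-in for the paper's joint detection/description network, and the homography model and RANSAC threshold are assumptions.

import cv2
import numpy as np

def register_fundus_pair(fixed_img, moving_img, detect_and_describe):
    # detect_and_describe(image) -> (keypoints as an Nx2 array, descriptors as an NxD array).
    # This callable is a placeholder for the trained detection/description network.
    kps_f, desc_f = detect_and_describe(fixed_img)
    kps_m, desc_m = detect_and_describe(moving_img)

    # Brute-force descriptor matching with L2 distance; cross-check keeps only mutual matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
    matches = matcher.match(desc_m.astype(np.float32), desc_f.astype(np.float32))

    # Collect the matched keypoint coordinates (moving -> fixed correspondences).
    src = np.float32([kps_m[m.queryIdx] for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kps_f[m.trainIdx] for m in matches]).reshape(-1, 1, 2)

    # RANSAC rejects outlier correspondences while estimating the transform.
    # A homography and a 5-pixel reprojection threshold are assumptions for this sketch.
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=5.0)

    # Warp the moving image into the fixed image's coordinate frame.
    h, w = fixed_img.shape[:2]
    registered = cv2.warpPerspective(moving_img, H, (w, h))
    return registered, H, inlier_mask

The paper's actual matching strategy and transformation model may differ; the sketch only captures the generic detect-describe-match-RANSAC pipeline that the abstract describes.
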
format Online
Article
Text
id pubmed-10347320
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher AME Publishing Company
record_format MEDLINE/PubMed
spelling pubmed-103473202023-07-15 Joint keypoint detection and description network for color fundus image registration Rivas-Villar, David Hervella, Álvaro S. Rouco, José Novo, Jorge Quant Imaging Med Surg Original Article BACKGROUND: Retinal imaging is widely used to diagnose many diseases, both systemic and eye-specific. In these cases, image registration, which is the process of aligning images taken from different viewpoints or moments in time, is fundamental to compare different images and to assess changes in their appearance, commonly caused by disease progression. Currently, the field of color fundus registration is dominated by classical methods, as deep learning alternatives have not shown sufficient improvement over classical methods to justify the added computational cost. However, deep learning registration methods are still considered beneficial as they can be easily adapted to different modalities and devices following a data-driven learning approach. METHODS: In this work, we propose a novel methodology to register color fundus images using deep learning for the joint detection and description of keypoints. In particular, we use an unsupervised neural network trained to obtain repeatable keypoints and reliable descriptors. These keypoints and descriptors allow us to produce an accurate registration using RANdom SAmple Consensus (RANSAC). We train the method using the Messidor dataset and test it with the Fundus Image Registration Dataset (FIRE), both of which are publicly accessible. RESULTS: Our work demonstrates a color fundus registration method that is robust to changes in imaging devices and capture conditions. Moreover, we conduct multiple experiments exploring several of the method’s parameters to assess their impact on the registration performance. The method obtained an overall Registration Score of 0.695 for the whole FIRE dataset (0.925 for category S, 0.352 for P, and 0.726 for A). CONCLUSIONS: Our proposal improves the results of previous deep learning methods in every category and surpasses the performance of classical approaches in category A, which includes disease progression and thus represents the most relevant scenario for clinical practice, as registration is commonly used for disease monitoring in patients. AME Publishing Company 2023-05-26 2023-07-01 /pmc/articles/PMC10347320/ /pubmed/37456305 http://dx.doi.org/10.21037/qims-23-4 Text en 2023 Quantitative Imaging in Medicine and Surgery. All rights reserved. https://creativecommons.org/licenses/by-nc-nd/4.0/ Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
spellingShingle Original Article
Rivas-Villar, David
Hervella, Álvaro S.
Rouco, José
Novo, Jorge
Joint keypoint detection and description network for color fundus image registration
title Joint keypoint detection and description network for color fundus image registration
title_full Joint keypoint detection and description network for color fundus image registration
title_fullStr Joint keypoint detection and description network for color fundus image registration
title_full_unstemmed Joint keypoint detection and description network for color fundus image registration
title_short Joint keypoint detection and description network for color fundus image registration
title_sort joint keypoint detection and description network for color fundus image registration
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10347320/
https://www.ncbi.nlm.nih.gov/pubmed/37456305
http://dx.doi.org/10.21037/qims-23-4
work_keys_str_mv AT rivasvillardavid jointkeypointdetectionanddescriptionnetworkforcolorfundusimageregistration
AT hervellaalvaros jointkeypointdetectionanddescriptionnetworkforcolorfundusimageregistration
AT roucojose jointkeypointdetectionanddescriptionnetworkforcolorfundusimageregistration
AT novojorge jointkeypointdetectionanddescriptionnetworkforcolorfundusimageregistration