
Deep learning approaches to landmark detection in tsetse wing images


Bibliographic Details
Main Authors: Geldenhuys, Dylan S., Josias, Shane, Brink, Willie, Makhubele, Mulanga, Hui, Cang, Landi, Pietro, Bingham, Jeremy, Hargrove, John, Hazelbag, Marijn C.
Format: Online Article Text
Language: English
Published: Public Library of Science 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10328335/
https://www.ncbi.nlm.nih.gov/pubmed/37363914
http://dx.doi.org/10.1371/journal.pcbi.1011194
collection PubMed
description Morphometric analysis of wings has been suggested for identifying and controlling isolated populations of tsetse (Glossina spp.), vectors of human and animal trypanosomiasis in Africa. Single-wing images were captured from an extensive data set of field-collected tsetse wings of the species Glossina pallidipes and G. m. morsitans. Morphometric analysis required locating 11 anatomical landmarks on each wing. Manual location of landmarks is time-consuming, prone to error, and infeasible for large data sets. We developed a two-tier method using deep learning architectures to classify images and make accurate landmark predictions. The first tier used a classification convolutional neural network to remove most wings that were missing landmarks. The second tier provided landmark coordinates for the remaining wings. For the second tier, we compared direct coordinate regression using a convolutional neural network with segmentation using a fully convolutional network. For the resulting landmark predictions, we evaluated shape bias using Procrustes analysis. We paid particular attention to consistent labelling to improve model performance. For an image size of 1024 × 1280, data augmentation reduced the mean pixel distance error for the regression model from 8.3 (95% confidence interval [4.4, 10.3]) to 5.34 (95% confidence interval [3.0, 7.0]). For the segmentation model, data augmentation did not alter the mean pixel distance error of 3.43 (95% confidence interval [1.9, 4.4]). Segmentation had a higher computational complexity and some large outliers. Both models showed minimal shape bias. We deployed the regression model on the complete unannotated data set of 14,354 pairs of wing images, since this model had a lower computational cost and more stable predictions than the segmentation model. The resulting landmark data set is provided for future morphometric analysis.
The methods we have developed could provide a starting point for studying the wings of other insect species. All the code used in this study was written in Python and open-sourced.
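The mean pixel distance error reported in the abstract can be sketched as the mean Euclidean distance between predicted and annotated landmark coordinates. A minimal illustration follows; the coordinates and the helper name `mean_pixel_distance_error` are made up for this example and are not taken from the study's data or code.

```python
import numpy as np

# Illustrative, made-up landmark coordinates for a single wing, given as
# (row, col) pixel positions. The study locates 11 landmarks per wing;
# only three are shown here to keep the example short.
true_pts = np.array([[100.0, 200.0], [300.0, 420.0], [512.0, 640.0]])
pred_pts = np.array([[103.0, 204.0], [304.0, 423.0], [512.0, 645.0]])

def mean_pixel_distance_error(pred: np.ndarray, true: np.ndarray) -> float:
    """Mean Euclidean distance, in pixels, between predicted and
    annotated landmarks: the error metric quoted in the abstract."""
    return float(np.mean(np.linalg.norm(pred - true, axis=1)))

print(mean_pixel_distance_error(pred_pts, true_pts))  # 5.0
```

Each of the three example offsets has Euclidean length 5 pixels, so the mean error is 5.0. The shape-bias check mentioned in the abstract would, by contrast, first remove translation, scale, and rotation (Procrustes alignment) before comparing landmark configurations.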
id pubmed-10328335
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
PLoS Comput Biol, Research Article. Public Library of Science, published online 2023-06-26. PMC record pubmed-10328335, dated 2023-07-08.
© 2023 Geldenhuys et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
topic Research Article