
Identification of herbarium specimen sheet components from high‐resolution images using deep learning


Bibliographic Details
Main Authors: Thompson, Karen M., Turnbull, Robert, Fitzgerald, Emily, Birch, Joanne L.
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10425611/
https://www.ncbi.nlm.nih.gov/pubmed/37589042
http://dx.doi.org/10.1002/ece3.10395
_version_ 1785089877651488768
author Thompson, Karen M.
Turnbull, Robert
Fitzgerald, Emily
Birch, Joanne L.
author_facet Thompson, Karen M.
Turnbull, Robert
Fitzgerald, Emily
Birch, Joanne L.
author_sort Thompson, Karen M.
collection PubMed
description Advanced computer vision techniques hold the potential to mobilise vast quantities of biodiversity data by facilitating the rapid extraction of text‐ and trait‐based data from herbarium specimen digital images, and to increase the efficiency and accuracy of downstream data capture during digitisation. This investigation developed an object detection model using YOLOv5 and digitised collection images from the University of Melbourne Herbarium (MELU). The MELU‐trained ‘sheet‐component’ model—trained on 3371 annotated images, validated on 1000 annotated images, run using ‘large’ model type, at 640 pixels, for 200 epochs—successfully identified most of the 11 component types of the digital specimen images, with an overall model precision measure of 0.983, recall of 0.969 and mean average precision (mAP0.5–0.95) of 0.847. Specifically, ‘institutional’ and ‘annotation’ labels were predicted with mAP0.5–0.95 of 0.970 and 0.878 respectively. It was found that annotating at least 2000 images was required to train an adequate model, likely due to the heterogeneity of specimen sheets. The full model was then applied to selected specimens from nine global herbaria (Biodiversity Data Journal, 7, 2019), quantifying its generalisability: for example, the ‘institutional label’ was identified with mAP0.5–0.95 of between 0.68 and 0.89 across the various herbaria. Further detailed study demonstrated that starting with the MELU‐model weights and retraining for as few as 50 epochs on 30 additional annotated images was sufficient to enable the prediction of a previously unseen component. As many herbaria are resource‐constrained, the MELU‐trained ‘sheet‐component’ model weights are made available and application encouraged.
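The description above reports applying a YOLOv5 ‘large’ model trained at 640 pixels. As a rough illustration only, the sketch below shows how the released MELU ‘sheet‐component’ weights might be loaded and applied to a specimen image via PyTorch Hub; it is not taken from the article, and the weights filename and image path are hypothetical placeholders.

```python
# Minimal sketch (assumptions, not the authors' code): apply the released
# MELU 'sheet-component' YOLOv5 weights to one digitised specimen image.
import torch

# Load custom YOLOv5 weights through PyTorch Hub; "melu_sheet_component.pt"
# is a hypothetical filename standing in for the published weights file.
model = torch.hub.load("ultralytics/yolov5", "custom", path="melu_sheet_component.pt")

# Run inference at 640 px, matching the input size reported in the abstract.
results = model("specimen_sheet.jpg", size=640)

# Bounding boxes, confidences and class names for the detected sheet
# components (e.g. institutional label, annotation label) as a DataFrame.
print(results.pandas().xyxy[0])
```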
format Online
Article
Text
id pubmed-10425611
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher John Wiley and Sons Inc.
record_format MEDLINE/PubMed
spelling pubmed-10425611 2023-08-16 Identification of herbarium specimen sheet components from high‐resolution images using deep learning Thompson, Karen M. Turnbull, Robert Fitzgerald, Emily Birch, Joanne L. Ecol Evol Research Articles Advanced computer vision techniques hold the potential to mobilise vast quantities of biodiversity data by facilitating the rapid extraction of text‐ and trait‐based data from herbarium specimen digital images, and to increase the efficiency and accuracy of downstream data capture during digitisation. This investigation developed an object detection model using YOLOv5 and digitised collection images from the University of Melbourne Herbarium (MELU). The MELU‐trained ‘sheet‐component’ model—trained on 3371 annotated images, validated on 1000 annotated images, run using ‘large’ model type, at 640 pixels, for 200 epochs—successfully identified most of the 11 component types of the digital specimen images, with an overall model precision measure of 0.983, recall of 0.969 and mean average precision (mAP0.5–0.95) of 0.847. Specifically, ‘institutional’ and ‘annotation’ labels were predicted with mAP0.5–0.95 of 0.970 and 0.878 respectively. It was found that annotating at least 2000 images was required to train an adequate model, likely due to the heterogeneity of specimen sheets. The full model was then applied to selected specimens from nine global herbaria (Biodiversity Data Journal, 7, 2019), quantifying its generalisability: for example, the ‘institutional label’ was identified with mAP0.5–0.95 of between 0.68 and 0.89 across the various herbaria. Further detailed study demonstrated that starting with the MELU‐model weights and retraining for as few as 50 epochs on 30 additional annotated images was sufficient to enable the prediction of a previously unseen component. As many herbaria are resource‐constrained, the MELU‐trained ‘sheet‐component’ model weights are made available and application encouraged. John Wiley and Sons Inc. 2023-08-14 /pmc/articles/PMC10425611/ /pubmed/37589042 http://dx.doi.org/10.1002/ece3.10395 Text en © 2023 The Authors. Ecology and Evolution published by John Wiley & Sons Ltd. This is an open access article under the terms of the Creative Commons Attribution 4.0 License (https://creativecommons.org/licenses/by/4.0/), which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
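The record also notes that retraining from the MELU weights for as few as 50 epochs on roughly 30 additional annotated images was enough to detect a previously unseen component. A hedged sketch of that fine‐tuning step, using the standard YOLOv5 train.py command‐line interface, is given below; the dataset YAML and weights filenames are hypothetical.

```python
# Minimal sketch (assumptions, not the authors' code): fine-tune YOLOv5 from
# the MELU 'sheet-component' weights for a new component class.
import subprocess

subprocess.run(
    [
        "python", "train.py",                    # train.py from the ultralytics/yolov5 repository
        "--img", "640",                          # image size used in the study
        "--epochs", "50",                        # 50 epochs reported as sufficient for adaptation
        "--data", "new_component.yaml",          # hypothetical config pointing at ~30 annotated images
        "--weights", "melu_sheet_component.pt",  # start from the MELU-trained weights, not from scratch
    ],
    check=True,
)
```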
spellingShingle Research Articles
Thompson, Karen M.
Turnbull, Robert
Fitzgerald, Emily
Birch, Joanne L.
Identification of herbarium specimen sheet components from high‐resolution images using deep learning
title Identification of herbarium specimen sheet components from high‐resolution images using deep learning
title_full Identification of herbarium specimen sheet components from high‐resolution images using deep learning
title_fullStr Identification of herbarium specimen sheet components from high‐resolution images using deep learning
title_full_unstemmed Identification of herbarium specimen sheet components from high‐resolution images using deep learning
title_short Identification of herbarium specimen sheet components from high‐resolution images using deep learning
title_sort identification of herbarium specimen sheet components from high‐resolution images using deep learning
topic Research Articles
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10425611/
https://www.ncbi.nlm.nih.gov/pubmed/37589042
http://dx.doi.org/10.1002/ece3.10395
work_keys_str_mv AT thompsonkarenm identificationofherbariumspecimensheetcomponentsfromhighresolutionimagesusingdeeplearning
AT turnbullrobert identificationofherbariumspecimensheetcomponentsfromhighresolutionimagesusingdeeplearning
AT fitzgeraldemily identificationofherbariumspecimensheetcomponentsfromhighresolutionimagesusingdeeplearning
AT birchjoannel identificationofherbariumspecimensheetcomponentsfromhighresolutionimagesusingdeeplearning