Deep learning pose estimation for multi-cattle lameness detection

Bibliographic Details
Main Authors: Barney, Shaun; Dlay, Satnam; Crowe, Andrew; Kyriazakis, Ilias; Leach, Matthew
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10024686/
https://www.ncbi.nlm.nih.gov/pubmed/36934125
http://dx.doi.org/10.1038/s41598-023-31297-1
collection PubMed
description The objective of this study was to develop a fully automated, real-time, multiple-cow lameness detection system, using a deep learning approach for cattle detection and pose estimation, that could be deployed across dairy farms. Utilising computer vision and deep learning, the system can simultaneously analyse both the posture and gait of each cow within the camera field of view to a very high degree of accuracy (94–100%). Twenty-five video sequences containing 250 cows with varying degrees of lameness were recorded and independently scored by three accredited Agriculture and Horticulture Development Board (AHDB) mobility scorers using the AHDB dairy mobility scoring system, providing ground-truth lameness data; these observers showed significant inter-observer reliability. The video sequences were broken down into their constituent frames and, together with a further 500 images downloaded from Google, annotated with 15 anatomical points per animal. A modified Mask R-CNN estimated the pose of each cow, outputting 5 key-points used to determine back arching and 2 key-points used to determine head position. Using the SORT (simple, online, and real-time tracking) algorithm, cows were tracked as they moved through the frames of a video sequence (i.e., in moving animals). All features were combined using the CatBoost gradient boosting algorithm, with accuracy determined by threefold cross-validation including recursive feature elimination. Agreement and classification performance were assessed using Cohen’s kappa coefficient together with precision and recall. This methodology was applied to cows with varying degrees of lameness (according to accredited scoring, n = 3) and demonstrated that several characteristics directly associated with lameness could be monitored simultaneously. By combining the algorithm’s results over time, a more robust evaluation of individual cow lameness was obtained. The model showed high performance in predicting and matching the ground-truth lameness data. Overall, a threefold lameness detection accuracy of 100% and a lameness severity classification accuracy of 94% were achieved with a high degree of precision (Cohen’s kappa = 0.8782, precision = 0.8650, recall = 0.9209).
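The description above outlines a multi-stage pipeline: Mask R-CNN key-point detection, SORT tracking across frames, hand-crafted posture and gait features, and CatBoost classification evaluated with threefold cross-validation, Cohen's kappa, precision and recall. The sketch below is a minimal, hypothetical illustration of the feature-and-classification stages only; the key-point layout, the back-arch and head-drop feature definitions, the synthetic data and the hyperparameters are assumptions made for the example and are not the authors' implementation.

```python
# Minimal, hypothetical sketch of the feature-extraction and classification
# stages described above. Synthetic key-points stand in for Mask R-CNN / SORT
# output; feature definitions and hyperparameters are illustrative assumptions.
import numpy as np
from catboost import CatBoostClassifier
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import cohen_kappa_score, precision_score, recall_score


def back_arch_angle(spine_pts):
    """Angle (degrees) at the middle of five spine key-points, a proxy for back arching."""
    p0, pm, p1 = spine_pts[0], spine_pts[2], spine_pts[-1]
    v1, v2 = p0 - pm, p1 - pm
    cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0))))


def head_drop(head_pts, spine_pts):
    """Vertical offset of the two head key-points relative to the first spine key-point."""
    return float(np.mean(head_pts[:, 1]) - spine_pts[0, 1])


# Synthetic stand-in for per-cow pose-estimation output (250 cows, as in the study).
rng = np.random.default_rng(0)
features, labels = [], []
for _ in range(250):
    lame = int(rng.integers(0, 2))                        # hypothetical binary label
    spine = rng.normal(0.0, 2.0, (5, 2)) + np.arange(5)[:, None] * [20.0, 0.0]
    spine[2, 1] -= 12.0 * lame                            # exaggerate arching when "lame"
    head = rng.normal([-30.0, 6.0 * lame], 2.0, (2, 2))   # drop the head when "lame"
    features.append([back_arch_angle(spine), head_drop(head, spine)])
    labels.append(lame)
X, y = np.asarray(features), np.asarray(labels)

# Threefold cross-validated CatBoost classification, scored as in the abstract.
kappas, precisions, recalls = [], [], []
for train_idx, test_idx in StratifiedKFold(n_splits=3, shuffle=True, random_state=0).split(X, y):
    model = CatBoostClassifier(iterations=200, depth=4, verbose=0, random_seed=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx]).ravel().astype(int)
    kappas.append(cohen_kappa_score(y[test_idx], pred))
    precisions.append(precision_score(y[test_idx], pred, zero_division=0))
    recalls.append(recall_score(y[test_idx], pred, zero_division=0))

print(f"kappa={np.mean(kappas):.3f}  precision={np.mean(precisions):.3f}  recall={np.mean(recalls):.3f}")
```

In the published system the per-frame pose outputs are aggregated along each cow's SORT track before classification ("combining the algorithm results over time"); here a single feature vector per cow stands in for that aggregation, and the binary label is a simplification of the AHDB mobility scores.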
id pubmed-10024686
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
spelling pubmed-10024686 2023-03-20. Sci Rep, Article. Nature Publishing Group UK, published 2023-03-18. /pmc/articles/PMC10024686/ /pubmed/36934125 http://dx.doi.org/10.1038/s41598-023-31297-1. Text, en. © The Author(s) 2023. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).