
A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model


Bibliographic Details
Main Authors: Zhang, Fenyun, Sun, Hongwei, Xie, Shuang, Dong, Chunwang, Li, You, Xu, Yiting, Zhang, Zhengwei, Chen, Fengnong
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Plant Science
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10570925/
https://www.ncbi.nlm.nih.gov/pubmed/37841621
http://dx.doi.org/10.3389/fpls.2023.1199473
_version_ 1785119872984809472
author Zhang, Fenyun
Sun, Hongwei
Xie, Shuang
Dong, Chunwang
Li, You
Xu, Yiting
Zhang, Zhengwei
Chen, Fengnong
author_facet Zhang, Fenyun
Sun, Hongwei
Xie, Shuang
Dong, Chunwang
Li, You
Xu, Yiting
Zhang, Zhengwei
Chen, Fengnong
author_sort Zhang, Fenyun
collection PubMed
description INTRODUCTION: The identification and localization of tea picking points is a prerequisite for the automatic picking of famous tea. However, because tea buds are similar in color to both young and old leaves, they are difficult to identify accurately by eye. METHODS: To address the segmentation, detection, and localization of tea picking points in the complex environment of mechanical picking of famous tea, this paper proposes a new model, MDY7-3PTB, which combines the high-precision segmentation capability of DeepLabv3+ with the rapid detection capability of YOLOv7. The model performs segmentation first, then detection, and finally localization of tea buds, yielding accurate identification of the tea bud picking point. The DeepLabv3+ feature extraction network was replaced with the more lightweight MobileNetV2 network to increase computation speed. In addition, convolutional block attention modules (CBAM) were fused into the feature extraction and ASPP modules to further optimize performance. Moreover, to address class imbalance in the dataset, the Focal Loss function was used to correct the imbalance and improve segmentation, detection, and positioning accuracy. RESULTS AND DISCUSSION: The MDY7-3PTB model achieved a mean intersection over union (mIoU) of 86.61%, a mean pixel accuracy (mPA) of 93.01%, and a mean recall (mRecall) of 91.78% on the tea bud segmentation dataset, outperforming common segmentation models such as PSPNet, UNet, and DeepLabv3+. In terms of tea bud picking point recognition and positioning, the model achieved a mean average precision (mAP) of 93.52%, an F1 score (the weighted harmonic mean of precision and recall) of 93.17%, a precision of 97.27%, and a recall of 89.41%.
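The segmentation metrics reported above (mIoU and mPA) are standard quantities derived from a per-class pixel confusion matrix. A minimal sketch of how they are computed, using NumPy and a toy two-class example (background vs. tea bud); the data here is illustrative, not from the paper:

```python
import numpy as np

def confusion_matrix(gt, pred, num_classes):
    """Per-pixel confusion matrix: rows = ground truth, columns = prediction."""
    idx = num_classes * gt.reshape(-1) + pred.reshape(-1)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def miou_mpa(cm):
    """Mean IoU and mean pixel accuracy from a confusion matrix."""
    tp = np.diag(cm).astype(float)
    iou = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)  # TP / (TP + FP + FN), per class
    pa = tp / cm.sum(axis=1)                            # per-class pixel accuracy
    return iou.mean(), pa.mean()

# Toy label maps: 0 = background, 1 = tea bud
gt   = np.array([[0, 0, 1, 1], [0, 1, 1, 1]])
pred = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])
cm = confusion_matrix(gt, pred, num_classes=2)
miou, mpa = miou_mpa(cm)  # one bud pixel missed lowers both averages
```

Averaging per-class scores (rather than pooling all pixels) is what makes these "mean" metrics sensitive to the rare tea-bud class, which is why they are the headline numbers for an imbalanced segmentation task like this one.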
The model showed significant improvements over existing mainstream YOLO series detection models in all respects, with strong versatility and robustness. The method eliminates the influence of the background and detects the tea bud picking points directly, with almost no missed detections, providing accurate two-dimensional coordinates for the picking points at a positioning precision of 96.41%. This provides a strong theoretical basis for future tea bud picking.
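The Focal Loss mentioned in the abstract counters class imbalance by down-weighting well-classified examples, so the rare tea-bud pixels dominate training. A minimal pure-Python sketch of the binary form, FL(p_t) = -α_t (1 - p_t)^γ log(p_t); the values α = 0.25, γ = 2 are the commonly used defaults, not necessarily those chosen in the paper:

```python
import math

def binary_focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for a single prediction.
    p: predicted probability of the positive class; y: true label (0 or 1).
    The (1 - p_t)**gamma factor shrinks the loss of easy examples."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# An easy positive (p = 0.9) contributes far less loss than a hard one (p = 0.1)
easy = binary_focal_loss(0.9, 1)
hard = binary_focal_loss(0.1, 1)
```

With γ = 0 this reduces to α-weighted cross-entropy; increasing γ pushes training effort toward the hard, misclassified pixels, which is the behavior the abstract relies on to correct the background/bud imbalance.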
format Online
Article
Text
id pubmed-10570925
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-105709252023-10-14 A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model Zhang, Fenyun Sun, Hongwei Xie, Shuang Dong, Chunwang Li, You Xu, Yiting Zhang, Zhengwei Chen, Fengnong Front Plant Sci Plant Science Frontiers Media S.A. 2023-09-28 /pmc/articles/PMC10570925/ /pubmed/37841621 http://dx.doi.org/10.3389/fpls.2023.1199473 Text en Copyright © 2023 Zhang, Sun, Xie, Dong, Li, Xu, Zhang and Chen https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Plant Science
Zhang, Fenyun
Sun, Hongwei
Xie, Shuang
Dong, Chunwang
Li, You
Xu, Yiting
Zhang, Zhengwei
Chen, Fengnong
A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model
title A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model
title_full A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model
title_fullStr A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model
title_full_unstemmed A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model
title_short A tea bud segmentation, detection and picking point localization based on the MDY7-3PTB model
title_sort tea bud segmentation, detection and picking point localization based on the mdy7-3ptb model
topic Plant Science
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10570925/
https://www.ncbi.nlm.nih.gov/pubmed/37841621
http://dx.doi.org/10.3389/fpls.2023.1199473
work_keys_str_mv AT zhangfenyun ateabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT sunhongwei ateabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT xieshuang ateabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT dongchunwang ateabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT liyou ateabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT xuyiting ateabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT zhangzhengwei ateabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT chenfengnong ateabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT zhangfenyun teabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT sunhongwei teabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT xieshuang teabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT dongchunwang teabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT liyou teabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT xuyiting teabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT zhangzhengwei teabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel
AT chenfengnong teabudsegmentationdetectionandpickingpointlocalizationbasedonthemdy73ptbmodel