Adaptive Multi-ROI Agricultural Robot Navigation Line Extraction Based on Image Semantic Segmentation

Bibliographic Details
Main Authors: Li, Xia, Su, Junhao, Yue, Zhenchao, Duan, Fangtao
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9609012/
https://www.ncbi.nlm.nih.gov/pubmed/36298058
http://dx.doi.org/10.3390/s22207707
_version_ 1784818911287443456
author Li, Xia
Su, Junhao
Yue, Zhenchao
Duan, Fangtao
author_facet Li, Xia
Su, Junhao
Yue, Zhenchao
Duan, Fangtao
author_sort Li, Xia
collection PubMed
description Automated robots are an important part of realizing sustainable food production in smart agriculture. Agricultural robots require a robust and precise navigation system to perform tasks in the field. To address the problems of complex image backgrounds and interference from weeds and lighting that affect visual navigation systems in field and greenhouse environments, a Faster-U-net model that retains the skip connections of the U-net model is proposed. The U-net model was pruned and optimized to predict crop ridges. First, a corn dataset was trained to obtain corn weights. These weights were then used as the pretraining weights for the cucumber, wheat, and tomato datasets, which were trained separately. Finally, the navigation line between ridges and the yaw angle of the robot were generated by B-spline curve fitting. The experimental results showed that the parameters of the improved path segmentation model were reduced by 65.86% and the mean pixel accuracy (mPA) was 97.39%. The recognition accuracy (mIoU) of the Faster-U-net model for corn, tomatoes, cucumbers, and wheat was 93.86%, 94.01%, 93.14%, and 89.10%, respectively. The processing speed on a single-core CPU was 22.32 fps. The proposed method was robust in predicting rows of different crops: the average angle difference of the navigation line in corn, tomato, cucumber, and wheat ridge environments was 0.624°, 0.556°, 0.526°, and 0.999°, respectively. This research can provide technical support and a reference for the research and development of intelligent agricultural robot navigation equipment in the field.
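As a rough illustration of the navigation-line step described in the abstract (a sketch under stated assumptions, not the authors' code): the function below splits a binary ridge-path segmentation mask into horizontal ROI bands, fits a cubic B-spline through the band centroids with SciPy, and derives a yaw angle from the fitted line. The mask format, the number of ROI bands, the centroid extraction, and the yaw convention are all illustrative assumptions.

# Illustrative sketch only: B-spline navigation line and yaw angle from a
# binary segmentation mask (nonzero = inter-ridge path). Not the paper's code.
import numpy as np
from scipy.interpolate import splprep, splev

def navigation_line_and_yaw(mask, n_rois=10):
    h = mask.shape[0]
    points = []
    # Assumed multi-ROI scheme: split the image into horizontal bands and take
    # the centroid of the path pixels in each band as one sample point.
    for band in np.array_split(np.arange(h), n_rois):
        ys, xs = np.nonzero(mask[band])
        if xs.size:
            points.append((xs.mean(), band[0] + ys.mean()))
    if len(points) < 4:
        raise ValueError("not enough path pixels to fit a cubic B-spline")
    pts = np.asarray(points)
    # Cubic smoothing B-spline through the band centroids.
    tck, _ = splprep([pts[:, 0], pts[:, 1]], s=len(pts), k=3)
    x_fit, y_fit = splev(np.linspace(0.0, 1.0, 100), tck)
    # Yaw angle: deviation of the fitted line from the image vertical axis
    # (0 degrees means the robot is aligned with the ridge direction).
    yaw_deg = np.degrees(np.arctan2(x_fit[-1] - x_fit[0], y_fit[-1] - y_fit[0]))
    return np.stack([x_fit, y_fit], axis=1), yaw_deg

Given a thresholded output of the segmentation network, navigation_line_and_yaw(mask) would return the fitted centerline points and the yaw offset the robot steers against; both names and parameters are hypothetical.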
format Online
Article
Text
id pubmed-9609012
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9609012 2022-10-28 Adaptive Multi-ROI Agricultural Robot Navigation Line Extraction Based on Image Semantic Segmentation Li, Xia Su, Junhao Yue, Zhenchao Duan, Fangtao Sensors (Basel) Article MDPI 2022-10-11 /pmc/articles/PMC9609012/ /pubmed/36298058 http://dx.doi.org/10.3390/s22207707 Text en © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Li, Xia
Su, Junhao
Yue, Zhenchao
Duan, Fangtao
Adaptive Multi-ROI Agricultural Robot Navigation Line Extraction Based on Image Semantic Segmentation
title Adaptive Multi-ROI Agricultural Robot Navigation Line Extraction Based on Image Semantic Segmentation
title_full Adaptive Multi-ROI Agricultural Robot Navigation Line Extraction Based on Image Semantic Segmentation
title_fullStr Adaptive Multi-ROI Agricultural Robot Navigation Line Extraction Based on Image Semantic Segmentation
title_full_unstemmed Adaptive Multi-ROI Agricultural Robot Navigation Line Extraction Based on Image Semantic Segmentation
title_short Adaptive Multi-ROI Agricultural Robot Navigation Line Extraction Based on Image Semantic Segmentation
title_sort adaptive multi-roi agricultural robot navigation line extraction based on image semantic segmentation
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9609012/
https://www.ncbi.nlm.nih.gov/pubmed/36298058
http://dx.doi.org/10.3390/s22207707
work_keys_str_mv AT lixia adaptivemultiroiagriculturalrobotnavigationlineextractionbasedonimagesemanticsegmentation
AT sujunhao adaptivemultiroiagriculturalrobotnavigationlineextractionbasedonimagesemanticsegmentation
AT yuezhenchao adaptivemultiroiagriculturalrobotnavigationlineextractionbasedonimagesemanticsegmentation
AT duanfangtao adaptivemultiroiagriculturalrobotnavigationlineextractionbasedonimagesemanticsegmentation