
Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers

Sky and ground are two essential semantic components in computer vision, robotics, and remote sensing, and sky and ground segmentation has become increasingly popular. This research proposes a sky and ground segmentation framework for rover navigation visions by adopting weak supervision and transfer learning technologies. A new sky and ground segmentation neural network (network in U-shaped network, NI-U-Net) and a conservative annotation method are proposed. The pre-training process achieves the best results on a popular open benchmark (the Skyfinder dataset) across seven metrics compared to the state-of-the-art: 99.232% accuracy, 99.211% precision, 99.221% recall, 99.104% dice score (F1), 0.0077 misclassification rate (MCR), 0.0427 root mean squared error (RMSE), and 98.223% intersection over union (IoU). The conservative annotation method achieves superior performance with limited manual intervention. NI-U-Net operates at 40 frames per second (FPS), maintaining real-time performance. The proposed framework fills the gap between laboratory results (with rich, ideal data) and practical application (in the wild), providing essential semantic information (sky and ground) for rover navigation vision.
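All seven evaluation metrics named in the abstract follow from the per-pixel confusion counts of a binary sky/ground mask. The sketch below (not the paper's code; NumPy assumed, with True = sky) shows how each metric is derived from true/false positives and negatives:

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Compute the seven binary-segmentation metrics from the abstract.

    pred, truth: arrays of the same shape; truthy pixels are "sky".
    Returns (accuracy, precision, recall, dice, mcr, rmse, iou).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)

    # Per-pixel confusion counts.
    tp = np.sum(pred & truth)
    tn = np.sum(~pred & ~truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    n = pred.size

    accuracy = (tp + tn) / n
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)   # dice score (F1)
    mcr = (fp + fn) / n                  # misclassification rate
    # For 0/1 masks, RMSE reduces to the square root of the MCR.
    rmse = np.sqrt(np.mean((pred.astype(float) - truth.astype(float)) ** 2))
    iou = tp / (tp + fp + fn)            # intersection over union
    return accuracy, precision, recall, dice, mcr, rmse, iou
```

Note that for binary masks RMSE and MCR carry the same information (RMSE = √MCR), which is consistent with the abstract reporting both on a 0–1 scale rather than as percentages.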


Bibliographic Details
Main Authors: Kuang, Boyu, Rana, Zeeshan A., Zhao, Yifan
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8588092/
https://www.ncbi.nlm.nih.gov/pubmed/34770302
http://dx.doi.org/10.3390/s21216996
_version_ 1784598352216719360
author Kuang, Boyu
Rana, Zeeshan A.
Zhao, Yifan
author_facet Kuang, Boyu
Rana, Zeeshan A.
Zhao, Yifan
author_sort Kuang, Boyu
collection PubMed
description Sky and ground are two essential semantic components in computer vision, robotics, and remote sensing, and sky and ground segmentation has become increasingly popular. This research proposes a sky and ground segmentation framework for rover navigation visions by adopting weak supervision and transfer learning technologies. A new sky and ground segmentation neural network (network in U-shaped network, NI-U-Net) and a conservative annotation method are proposed. The pre-training process achieves the best results on a popular open benchmark (the Skyfinder dataset) across seven metrics compared to the state-of-the-art: 99.232% accuracy, 99.211% precision, 99.221% recall, 99.104% dice score (F1), 0.0077 misclassification rate (MCR), 0.0427 root mean squared error (RMSE), and 98.223% intersection over union (IoU). The conservative annotation method achieves superior performance with limited manual intervention. NI-U-Net operates at 40 frames per second (FPS), maintaining real-time performance. The proposed framework fills the gap between laboratory results (with rich, ideal data) and practical application (in the wild), providing essential semantic information (sky and ground) for rover navigation vision.
format Online
Article
Text
id pubmed-8588092
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-85880922021-11-13 Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers Kuang, Boyu Rana, Zeeshan A. Zhao, Yifan Sensors (Basel) Article Sky and ground are two essential semantic components in computer vision, robotics, and remote sensing, and sky and ground segmentation has become increasingly popular. This research proposes a sky and ground segmentation framework for rover navigation visions by adopting weak supervision and transfer learning technologies. A new sky and ground segmentation neural network (network in U-shaped network, NI-U-Net) and a conservative annotation method are proposed. The pre-training process achieves the best results on a popular open benchmark (the Skyfinder dataset) across seven metrics compared to the state-of-the-art: 99.232% accuracy, 99.211% precision, 99.221% recall, 99.104% dice score (F1), 0.0077 misclassification rate (MCR), 0.0427 root mean squared error (RMSE), and 98.223% intersection over union (IoU). The conservative annotation method achieves superior performance with limited manual intervention. NI-U-Net operates at 40 frames per second (FPS), maintaining real-time performance. The proposed framework fills the gap between laboratory results (with rich, ideal data) and practical application (in the wild), providing essential semantic information (sky and ground) for rover navigation vision. MDPI 2021-10-21 /pmc/articles/PMC8588092/ /pubmed/34770302 http://dx.doi.org/10.3390/s21216996 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Kuang, Boyu
Rana, Zeeshan A.
Zhao, Yifan
Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title_full Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title_fullStr Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title_full_unstemmed Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title_short Sky and Ground Segmentation in the Navigation Visions of the Planetary Rovers
title_sort sky and ground segmentation in the navigation visions of the planetary rovers
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8588092/
https://www.ncbi.nlm.nih.gov/pubmed/34770302
http://dx.doi.org/10.3390/s21216996
work_keys_str_mv AT kuangboyu skyandgroundsegmentationinthenavigationvisionsoftheplanetaryrovers
AT ranazeeshana skyandgroundsegmentationinthenavigationvisionsoftheplanetaryrovers
AT zhaoyifan skyandgroundsegmentationinthenavigationvisionsoftheplanetaryrovers