
An embarrassingly simple approach for visual navigation of forest environments

Navigation in forest environments is a challenging and open problem in the area of field robotics. Rovers in forest environments are required to infer the traversability of a priori unknown terrains, comprising a number of different types of compliant and rigid obstacles, under varying lighting and weather conditions. The challenges are further compounded for inexpensive small-sized (portable) rovers. While such rovers may be useful for collaboratively monitoring large tracts of forests as a swarm, with low environmental impact, their small size affords them only a low viewpoint of their proximal terrain. Moreover, their limited view may frequently be partially occluded by compliant obstacles in close proximity such as shrubs and tall grass. Perhaps consequently, most studies on off-road navigation use large-sized rovers equipped with expensive exteroceptive navigation sensors. We design a low-cost navigation system tailored for small-sized forest rovers. For navigation, a lightweight convolutional neural network is used to predict depth images from the RGB input of a low-viewpoint monocular camera. Subsequently, a simple coarse-grained navigation algorithm aggregates the predicted depth information to steer our mobile platform towards open traversable areas in the forest while avoiding obstacles. In this study, the steering commands output from our navigation algorithm direct an operator pushing the mobile platform. Our navigation algorithm has been extensively tested in high-fidelity forest simulations and in field trials. Using no more than a 16 × 16 pixel depth prediction image from a 32 × 32 pixel RGB image, our algorithm running on a Raspberry Pi was able to successfully navigate a total of over 750 m of real-world forest terrain comprising shrubs, dense bushes, tall grass, fallen branches, fallen tree trunks, small ditches and mounds, and standing trees, under five different weather conditions and four different times of day. Furthermore, our algorithm exhibits robustness to changes in the mobile platform’s camera pitch angle, motion blur, low lighting at dusk, and high-contrast lighting conditions.
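
The abstract does not spell out how the predicted depth image is aggregated into steering commands. Purely as an illustration, the Python sketch below shows one way such a coarse-grained rule could work, splitting a small depth image into vertical sectors and steering towards the most open one; the sector count, clearance threshold, and all function names are assumptions made for this example and are not taken from the paper.

    # Illustrative sketch only: one way a coarse-grained steering rule could
    # aggregate a small predicted depth image, in the spirit of the approach
    # described above. Sector count, clearance threshold, and function names
    # are assumptions for this example, not the authors' implementation.
    import numpy as np

    def steer_from_depth(depth, n_sectors=3, min_clearance=1.0):
        """Return a steering command from a small (e.g., 16 x 16) depth image.

        depth         -- 2D array of predicted depths in metres (larger = farther)
        n_sectors     -- number of vertical strips the image is split into
        min_clearance -- mean depth (m) below which a sector counts as blocked
        """
        h, w = depth.shape
        band = depth[h // 2:, :]  # lower half of the image views the proximal terrain
        # Mean predicted depth in each vertical sector, ordered left to right.
        sectors = [band[:, i * w // n_sectors:(i + 1) * w // n_sectors].mean()
                   for i in range(n_sectors)]
        if max(sectors) < min_clearance:
            return "stop_and_turn"      # nothing ahead looks traversable
        best = int(np.argmax(sectors))  # head for the most open sector
        if best < n_sectors // 2:
            return "turn_left"
        if best > n_sectors // 2:
            return "turn_right"
        return "go_straight"

    # Example: a synthetic 16 x 16 depth image that is more open on the right.
    demo = np.full((16, 16), 0.5)
    demo[:, 11:] = 3.0
    print(steer_from_depth(demo))  # prints "turn_right"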

Bibliographic Details
Main Authors: Niu, Chaoyue; Newlands, Callum; Zauner, Klaus-Peter; Tarapore, Danesh
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Robotics and AI
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10338120/
https://www.ncbi.nlm.nih.gov/pubmed/37448877
http://dx.doi.org/10.3389/frobt.2023.1086798
Collection: PubMed
Record ID: pubmed-10338120
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Robot AI
Published online: 2023-06-28

Copyright © 2023 Niu, Newlands, Zauner and Tarapore. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY), https://creativecommons.org/licenses/by/4.0/. The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.