Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller
This work presents a framework that allows Unmanned Surface Vehicles (USVs) to avoid dynamic obstacles through initial training on an Unmanned Ground Vehicle (UGV) and cross-domain retraining on a USV. This is achieved by integrating a Deep Reinforcement Learning (DRL) agent that generates high-level control commands and leveraging a neural-network-based model predictive controller (NN-MPC) to reach target waypoints and reject disturbances. A Deep Q Network (DQN) used in this framework is trained in a ground environment using a Turtlebot robot and retrained in a water environment using the BREAM USV in the Gazebo simulator to avoid dynamic obstacles. The network is then validated in both simulation and real-world tests. Cross-domain learning substantially decreases the training time ([Formula: see text]) and increases the obstacle avoidance performance (70 more reward points) compared to pure water-domain training. This methodology shows that it is possible to leverage data-rich and accessible ground environments to train a DRL agent for data-poor and difficult-to-access marine environments. This allows rapid and iterative agent development without requiring further training whenever the environment or vehicle dynamics change.
Main Authors: Li, Jianwen; Chavez-Galaviz, Jalil; Azizzadenesheli, Kamyar; Mahmoudian, Nina
Format: Online Article Text
Language: English
Published: MDPI, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10099039/ https://www.ncbi.nlm.nih.gov/pubmed/37050633 http://dx.doi.org/10.3390/s23073572
_version_ | 1785024962093907968
author | Li, Jianwen; Chavez-Galaviz, Jalil; Azizzadenesheli, Kamyar; Mahmoudian, Nina
author_facet | Li, Jianwen; Chavez-Galaviz, Jalil; Azizzadenesheli, Kamyar; Mahmoudian, Nina
author_sort | Li, Jianwen |
collection | PubMed |
description | This work presents a framework that allows Unmanned Surface Vehicles (USVs) to avoid dynamic obstacles through initial training on an Unmanned Ground Vehicle (UGV) and cross-domain retraining on a USV. This is achieved by integrating a Deep Reinforcement Learning (DRL) agent that generates high-level control commands and leveraging a neural-network-based model predictive controller (NN-MPC) to reach target waypoints and reject disturbances. A Deep Q Network (DQN) used in this framework is trained in a ground environment using a Turtlebot robot and retrained in a water environment using the BREAM USV in the Gazebo simulator to avoid dynamic obstacles. The network is then validated in both simulation and real-world tests. Cross-domain learning substantially decreases the training time ([Formula: see text]) and increases the obstacle avoidance performance (70 more reward points) compared to pure water-domain training. This methodology shows that it is possible to leverage data-rich and accessible ground environments to train a DRL agent for data-poor and difficult-to-access marine environments. This allows rapid and iterative agent development without requiring further training whenever the environment or vehicle dynamics change.
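The transfer scheme the abstract describes — pre-train in a data-rich ground domain, then briefly retrain in the data-poor target water domain — can be illustrated with a much simpler stand-in. The sketch below uses tabular Q-learning on a toy 1-D world instead of the paper's DQN and NN-MPC stack; the environment, the drift model for water currents, and all names here are illustrative assumptions, not the authors' actual setup.

```python
import random

N = 6  # toy 1-D world: states 0..N, goal at state N

def step(state, action, drift=0.0):
    """One transition; `drift` is the chance a simulated current pushes the agent back."""
    if drift and random.random() < drift:
        action = -1
    nxt = max(0, min(N, state + action))
    return nxt, (1.0 if nxt == N else -0.01), nxt == N

def q_learning(q, drift=0.0, episodes=300, alpha=0.5, gamma=0.95, eps=0.2):
    """Tabular Q-learning; `q` may be zero-initialized or warm-started from another domain."""
    for _ in range(episodes):
        s = 0
        for _ in range(50):
            if random.random() < eps:                      # epsilon-greedy exploration
                a = random.choice((-1, 1))
            else:
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2, r, done = step(s, a, drift)
            # standard Q-learning update toward the bootstrapped target
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if done:
                break
    return q

random.seed(0)
zeros = {(s, a): 0.0 for s in range(N + 1) for a in (-1, 1)}
ground_q = q_learning(dict(zeros), drift=0.0, episodes=300)   # long "ground" pre-training
water_q = q_learning(dict(ground_q), drift=0.3, episodes=50)  # short "water" retraining
print(water_q[(0, 1)] > water_q[(0, -1)])  # greedy policy still heads toward the goal
```

The warm start is the whole point: the retraining phase begins from the ground-trained value table rather than from zeros, so only the domain-specific disturbance behavior has to be relearned — the toy analogue of the paper's reported training-time reduction.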
format | Online Article Text |
id | pubmed-10099039 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10099039 2023-04-14 Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller Li, Jianwen Chavez-Galaviz, Jalil Azizzadenesheli, Kamyar Mahmoudian, Nina Sensors (Basel) Article This work presents a framework that allows Unmanned Surface Vehicles (USVs) to avoid dynamic obstacles through initial training on an Unmanned Ground Vehicle (UGV) and cross-domain retraining on a USV. This is achieved by integrating a Deep Reinforcement Learning (DRL) agent that generates high-level control commands and leveraging a neural-network-based model predictive controller (NN-MPC) to reach target waypoints and reject disturbances. A Deep Q Network (DQN) used in this framework is trained in a ground environment using a Turtlebot robot and retrained in a water environment using the BREAM USV in the Gazebo simulator to avoid dynamic obstacles. The network is then validated in both simulation and real-world tests. Cross-domain learning substantially decreases the training time ([Formula: see text]) and increases the obstacle avoidance performance (70 more reward points) compared to pure water-domain training. This methodology shows that it is possible to leverage data-rich and accessible ground environments to train a DRL agent for data-poor and difficult-to-access marine environments. This allows rapid and iterative agent development without requiring further training whenever the environment or vehicle dynamics change. MDPI 2023-03-29 /pmc/articles/PMC10099039/ /pubmed/37050633 http://dx.doi.org/10.3390/s23073572 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Li, Jianwen Chavez-Galaviz, Jalil Azizzadenesheli, Kamyar Mahmoudian, Nina Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller |
title | Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller |
title_full | Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller |
title_fullStr | Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller |
title_full_unstemmed | Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller |
title_short | Dynamic Obstacle Avoidance for USVs Using Cross-Domain Deep Reinforcement Learning and Neural Network Model Predictive Controller |
title_sort | dynamic obstacle avoidance for usvs using cross-domain deep reinforcement learning and neural network model predictive controller |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10099039/ https://www.ncbi.nlm.nih.gov/pubmed/37050633 http://dx.doi.org/10.3390/s23073572 |
work_keys_str_mv | AT lijianwen dynamicobstacleavoidanceforusvsusingcrossdomaindeepreinforcementlearningandneuralnetworkmodelpredictivecontroller AT chavezgalavizjalil dynamicobstacleavoidanceforusvsusingcrossdomaindeepreinforcementlearningandneuralnetworkmodelpredictivecontroller AT azizzadeneshelikamyar dynamicobstacleavoidanceforusvsusingcrossdomaindeepreinforcementlearningandneuralnetworkmodelpredictivecontroller AT mahmoudiannina dynamicobstacleavoidanceforusvsusingcrossdomaindeepreinforcementlearningandneuralnetworkmodelpredictivecontroller |