
Signal Novelty Detection as an Intrinsic Reward for Robotics


Bibliographic Details
Main Authors: Kubovčík, Martin, Dirgová Luptáková, Iveta, Pospíchal, Jiří
Format: Online Article Text
Language: English
Published: MDPI 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10142593/
https://www.ncbi.nlm.nih.gov/pubmed/37112324
http://dx.doi.org/10.3390/s23083985
_version_ 1785033650240225280
author Kubovčík, Martin
Dirgová Luptáková, Iveta
Pospíchal, Jiří
author_facet Kubovčík, Martin
Dirgová Luptáková, Iveta
Pospíchal, Jiří
author_sort Kubovčík, Martin
collection PubMed
description In advanced robot control, reinforcement learning is a common technique used to transform sensor data into signals for actuators, based on feedback from the robot’s environment. However, the feedback or reward is typically sparse, as it is provided mainly after the task’s completion or failure, leading to slow convergence. Additional intrinsic rewards based on the state visitation frequency can provide more feedback. In this study, an autoencoder deep learning neural network was utilized as a novelty detector that supplies intrinsic rewards to guide the search process through a state space. The neural network processed signals from various types of sensors simultaneously. It was tested on simulated robotic agents in a benchmark set of classic control OpenAI Gym test environments (including Mountain Car, Acrobot, CartPole, and LunarLander), achieving more efficient and accurate robot control in three of the four tasks (with only slight degradation in the Lunar Lander task) when purely intrinsic rewards were used compared to standard extrinsic rewards. By incorporating autoencoder-based intrinsic rewards, robots could potentially become more dependable in autonomous operations like space or underwater exploration or during natural disaster response. This is because the system could better adapt to changing environments or unexpected situations.
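The idea summarized above can be illustrated with a minimal sketch (not the authors' code; the class name, dimensions, and learning rate below are illustrative assumptions): a small linear autoencoder is trained online on visited states, and its reconstruction error serves as the intrinsic reward. Frequently visited states reconstruct well and earn little reward; novel states reconstruct poorly and earn more, which pushes the agent toward unexplored parts of the state space.

```python
import numpy as np

class NoveltyReward:
    """Intrinsic reward as the reconstruction error of a linear
    autoencoder trained online on observed states (illustrative sketch)."""

    def __init__(self, state_dim, latent_dim, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Encoder and decoder weights of a linear autoencoder.
        self.W_enc = rng.normal(0.0, 0.1, (state_dim, latent_dim))
        self.W_dec = rng.normal(0.0, 0.1, (latent_dim, state_dim))
        self.lr = lr

    def __call__(self, state):
        """Return the intrinsic reward for `state`, then take one
        SGD step so familiar states earn less reward over time."""
        z = state @ self.W_enc            # encode
        recon = z @ self.W_dec            # decode
        err = recon - state
        reward = float(np.mean(err ** 2))  # novelty = reconstruction error
        # Gradients of the squared-error loss (constant factors folded
        # into the learning rate); compute both before updating.
        grad_dec = np.outer(z, err)
        grad_enc = np.outer(state, err @ self.W_dec.T)
        self.W_dec -= self.lr * grad_dec
        self.W_enc -= self.lr * grad_enc
        return reward

# Repeated visits to the same state drive its intrinsic reward down:
novelty = NoveltyReward(state_dim=4, latent_dim=2)
s = np.ones(4)
first = novelty(s)
for _ in range(300):
    last = novelty(s)
```

In an actual agent, this reward would be added to (or, as in the study's purely intrinsic setting, used instead of) the environment's extrinsic reward at each step; the paper uses a deep autoencoder over multi-sensor signals rather than the single linear layer shown here.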
format Online
Article
Text
id pubmed-10142593
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-101425932023-04-29 Signal Novelty Detection as an Intrinsic Reward for Robotics Kubovčík, Martin Dirgová Luptáková, Iveta Pospíchal, Jiří Sensors (Basel) Article MDPI 2023-04-14 /pmc/articles/PMC10142593/ /pubmed/37112324 http://dx.doi.org/10.3390/s23083985 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Kubovčík, Martin
Dirgová Luptáková, Iveta
Pospíchal, Jiří
Signal Novelty Detection as an Intrinsic Reward for Robotics
title Signal Novelty Detection as an Intrinsic Reward for Robotics
title_full Signal Novelty Detection as an Intrinsic Reward for Robotics
title_fullStr Signal Novelty Detection as an Intrinsic Reward for Robotics
title_full_unstemmed Signal Novelty Detection as an Intrinsic Reward for Robotics
title_short Signal Novelty Detection as an Intrinsic Reward for Robotics
title_sort signal novelty detection as an intrinsic reward for robotics
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10142593/
https://www.ncbi.nlm.nih.gov/pubmed/37112324
http://dx.doi.org/10.3390/s23083985
work_keys_str_mv AT kubovcikmartin signalnoveltydetectionasanintrinsicrewardforrobotics
AT dirgovaluptakovaiveta signalnoveltydetectionasanintrinsicrewardforrobotics
AT pospichaljiri signalnoveltydetectionasanintrinsicrewardforrobotics