
Computational Optimization of Image-Based Reinforcement Learning for Robotics

Bibliographic Details
Main Authors: Ferraro, Stefano, Van de Maele, Toon, Mazzaglia, Pietro, Verbelen, Tim, Dhoedt, Bart
Format: Online Article Text
Language: English
Published: MDPI 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9571553/
https://www.ncbi.nlm.nih.gov/pubmed/36236477
http://dx.doi.org/10.3390/s22197382
_version_ 1784810391460642816
author Ferraro, Stefano
Van de Maele, Toon
Mazzaglia, Pietro
Verbelen, Tim
Dhoedt, Bart
author_facet Ferraro, Stefano
Van de Maele, Toon
Mazzaglia, Pietro
Verbelen, Tim
Dhoedt, Bart
author_sort Ferraro, Stefano
collection PubMed
description The robotics field has been deeply influenced by the advent of deep learning. In recent years, this trend has been characterized by the adoption of large, pretrained models for robotic use cases, which are not compatible with the computational hardware available in robotic systems. Moreover, such large, computationally intensive models impede the low-latency execution required by many closed-loop control systems. In this work, we propose different strategies for improving the computational efficiency of the deep-learning models adopted in reinforcement-learning (RL) scenarios. As a use case, we consider an image-based RL method built on the synergy between pushing and grasping actions. As a first optimization step, we reduce the complexity of the model architecture by decreasing the number of layers and altering its structure. Second, we downscale the input resolution to reduce the computational load. Finally, we perform weight quantization, comparing post-training quantization and quantization-aware training. We benchmark the improvements introduced by each optimization by running a standard testing routine. We show that the optimization strategies introduced can improve the computational efficiency by around 300 times, while also slightly improving the functional performance of the system. In addition, we demonstrate closed-loop control behaviour on a real-world robot, while processing everything on a Jetson Xavier NX edge device.
format Online
Article
Text
id pubmed-9571553
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-9571553 2022-10-17 Computational Optimization of Image-Based Reinforcement Learning for Robotics Ferraro, Stefano Van de Maele, Toon Mazzaglia, Pietro Verbelen, Tim Dhoedt, Bart Sensors (Basel) Article The robotics field has been deeply influenced by the advent of deep learning. In recent years, this trend has been characterized by the adoption of large, pretrained models for robotic use cases, which are not compatible with the computational hardware available in robotic systems. Moreover, such large, computationally intensive models impede the low-latency execution required by many closed-loop control systems. In this work, we propose different strategies for improving the computational efficiency of the deep-learning models adopted in reinforcement-learning (RL) scenarios. As a use case, we consider an image-based RL method built on the synergy between pushing and grasping actions. As a first optimization step, we reduce the complexity of the model architecture by decreasing the number of layers and altering its structure. Second, we downscale the input resolution to reduce the computational load. Finally, we perform weight quantization, comparing post-training quantization and quantization-aware training. We benchmark the improvements introduced by each optimization by running a standard testing routine. We show that the optimization strategies introduced can improve the computational efficiency by around 300 times, while also slightly improving the functional performance of the system. In addition, we demonstrate closed-loop control behaviour on a real-world robot, while processing everything on a Jetson Xavier NX edge device. MDPI 2022-09-28 /pmc/articles/PMC9571553/ /pubmed/36236477 http://dx.doi.org/10.3390/s22197382 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Article
Ferraro, Stefano
Van de Maele, Toon
Mazzaglia, Pietro
Verbelen, Tim
Dhoedt, Bart
Computational Optimization of Image-Based Reinforcement Learning for Robotics
title Computational Optimization of Image-Based Reinforcement Learning for Robotics
title_full Computational Optimization of Image-Based Reinforcement Learning for Robotics
title_fullStr Computational Optimization of Image-Based Reinforcement Learning for Robotics
title_full_unstemmed Computational Optimization of Image-Based Reinforcement Learning for Robotics
title_short Computational Optimization of Image-Based Reinforcement Learning for Robotics
title_sort computational optimization of image-based reinforcement learning for robotics
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9571553/
https://www.ncbi.nlm.nih.gov/pubmed/36236477
http://dx.doi.org/10.3390/s22197382
work_keys_str_mv AT ferrarostefano computationaloptimizationofimagebasedreinforcementlearningforrobotics
AT vandemaeletoon computationaloptimizationofimagebasedreinforcementlearningforrobotics
AT mazzagliapietro computationaloptimizationofimagebasedreinforcementlearningforrobotics
AT verbelentim computationaloptimizationofimagebasedreinforcementlearningforrobotics
AT dhoedtbart computationaloptimizationofimagebasedreinforcementlearningforrobotics
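
The abstract above describes three optimization steps: reducing the model architecture, downscaling the input resolution, and quantizing the weights, where post-training quantization (PTQ) is compared against quantization-aware training (QAT). The sketch below is a minimal, hypothetical illustration of that PTQ/QAT comparison using PyTorch's eager-mode quantization API; the SmallQNet network, the 64x64 input size, and the helper functions are assumptions made for illustration only and are not taken from the paper, which may rely on a different framework, model, and deployment toolchain.

import torch
import torch.nn as nn
import torch.quantization as tq  # eager-mode quantization API

class SmallQNet(nn.Module):
    """Hypothetical stand-in for a reduced image-based Q-network (not the paper's model)."""
    def __init__(self):
        super().__init__()
        self.quant = tq.QuantStub()      # quantizes the float input at inference time
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(32, 1, 1)  # per-pixel action-value map
        self.dequant = tq.DeQuantStub()  # returns a float tensor to the caller

    def forward(self, x):
        x = self.quant(x)
        x = self.head(self.features(x))
        return self.dequant(x)

def post_training_quantize(model, calibration_batches):
    """Static PTQ: observe activation ranges on a few batches, then convert to int8."""
    model.eval()
    model.qconfig = tq.get_default_qconfig("fbgemm")  # backend-dependent ("qnnpack" on ARM)
    tq.prepare(model, inplace=True)                   # insert observers
    with torch.no_grad():
        for batch in calibration_batches:
            model(batch)                              # calibration passes
    return tq.convert(model, inplace=True)            # swap in quantized modules

def prepare_for_qat(model):
    """QAT: insert fake-quantization so training or fine-tuning sees the rounding error.
    After training, call tq.convert(model.eval()) to obtain the int8 model."""
    model.train()
    model.qconfig = tq.get_default_qat_qconfig("fbgemm")
    return tq.prepare_qat(model, inplace=True)

if __name__ == "__main__":
    # Downscaled 64x64 input as an example of reducing the input resolution.
    calib = [torch.randn(1, 3, 64, 64) for _ in range(8)]
    int8_model = post_training_quantize(SmallQNet(), calib)
    print(int8_model(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 1, 16, 16])

In this sketch, PTQ only needs a handful of calibration batches, whereas the QAT path requires running (or resuming) training with fake-quantized weights before conversion, which is the trade-off the abstract's comparison refers to.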