
Resource-Efficient Sensor Data Management for Autonomous Systems Using Deep Reinforcement Learning

Hyperconnectivity via modern Internet of Things (IoT) technologies has recently driven us to envision the “digital twin”, in which physical attributes are all embedded and their latest updates are synchronized on digital spaces in a timely fashion. From the point of view of cyber-physical system (CPS) architectures, the goals of the digital twin include providing a common programming abstraction at the same level as databases, thereby facilitating seamless integration of real-world physical objects and digital assets at several different system layers. However, the inherent limitations of sampling and observing physical attributes often pose issues related to data uncertainty in practice. In this paper, we propose a learning-based data management scheme whose implementation is layered between the sensors attached to physical attributes and the domain-specific applications, thereby mitigating the data uncertainty between them. To do so, we present a sensor data management framework, namely D2WIN, which adopts reinforcement learning (RL) techniques to manage data quality for CPS applications and autonomous systems. To deal with the scale issue incurred by many physical attributes and sensor streams when adopting RL, we propose an action embedding strategy that exploits their distance-based similarity in physical space coordinates. We introduce two embedding methods, i.e., a user-defined function and a generative model, for different conditions. Through experiments, we demonstrate that the D2WIN framework with the action embedding outperforms several known heuristics in terms of achievable data quality under certain resource restrictions. We also test the framework with an autonomous driving simulator, clearly showing its benefit: for example, with only 30% of updates selectively applied by the learned policy, the driving agent maintains about 96.2% of its performance compared to the ideal condition with full updates.
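The abstract only sketches the action-embedding idea, so the following is a loose illustration rather than the authors' D2WIN implementation: it assumes a user-defined embedding that simply normalizes each sensor's physical coordinates, and a nearest-neighbor lookup that turns a continuous proto-action into a budget-limited set of sensor updates. All names here (`embed_actions`, `select_updates`, the staleness heuristic standing in for the learned policy) are hypothetical.

```python
import numpy as np

# Hypothetical sensor layout: each sensor stream is identified by the (x, y)
# coordinate of its physical attribute; the coordinate drives the embedding.
rng = np.random.default_rng(0)
NUM_SENSORS = 50
sensor_coords = rng.uniform(0.0, 100.0, size=(NUM_SENSORS, 2))

def embed_actions(coords):
    """User-defined embedding: scale physical coordinates into [0, 1]^2 so that
    sensors that are close in space stay close in the action-embedding space."""
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    return (coords - lo) / (hi - lo)

def select_updates(proto_action, embedding, budget):
    """Map a continuous proto-action (as a policy network might emit) to a
    concrete set of sensor updates: the `budget` nearest embedded actions."""
    dists = np.linalg.norm(embedding - proto_action, axis=1)
    return np.argsort(dists)[:budget]

action_embedding = embed_actions(sensor_coords)

# Stand-in for a learned policy: aim the proto-action at the staleness-weighted
# centroid, i.e., toward the region whose cached values are most out of date.
staleness = rng.exponential(1.0, size=NUM_SENSORS)   # seconds since last refresh
proto_action = np.average(action_embedding, axis=0, weights=staleness)

budget = int(0.3 * NUM_SENSORS)                      # refresh only 30% of streams
chosen = select_updates(proto_action, action_embedding, budget)
print(f"Refreshing {budget} of {NUM_SENSORS} sensor streams:", sorted(chosen.tolist()))
```

Mapping a continuous proto-action to its nearest discrete neighbors is a common way to keep RL tractable over large discrete action sets; the paper's second embedding method, a generative model, would presumably take the place of the hand-written `embed_actions` above.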

Bibliographic Details
Main Authors: Jeong, Seunghwan; Yoo, Gwangpyo; Yoo, Minjong; Yeom, Ikjun; Woo, Honguk
Format: Online Article Text
Language: English
Published: MDPI 2019
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6832860/
https://www.ncbi.nlm.nih.gov/pubmed/31614654
http://dx.doi.org/10.3390/s19204410

Collection: PubMed
ID: pubmed-6832860
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Sensors (Basel)
Published Online: 2019-10-11
License: © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).