Deep Q-network for social robotics using emotional social signals
Main Authors: | Belo, José Pedro R., Azevedo, Helio, Ramos, Josué J. G., Romero, Roseli A. F. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2022 |
Subjects: | Robotics and AI |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9548603/ https://www.ncbi.nlm.nih.gov/pubmed/36226257 http://dx.doi.org/10.3389/frobt.2022.880547 |
_version_ | 1784805467391787008 |
---|---|
author | Belo, José Pedro R. Azevedo, Helio Ramos, Josué J. G. Romero, Roseli A. F. |
author_facet | Belo, José Pedro R. Azevedo, Helio Ramos, Josué J. G. Romero, Roseli A. F. |
author_sort | Belo, José Pedro R. |
collection | PubMed |
description | Social robotics is a branch of human-robot interaction dedicated to developing systems that control robots operating in unstructured environments shared with human beings. Social robots must interact with humans by understanding social signals and responding appropriately to them. Most social robots are still pre-programmed and have little ability to learn and respond with adequate actions during an interaction with humans. More elaborate recent methods use body movements, gaze direction, and body language, but they generally neglect vital cues present during an interaction, such as the human emotional state. In this article, we address the problem of developing a system that enables a robot to decide, autonomously, which behaviors to emit as a function of the human emotional state. On one side, Reinforcement Learning (RL) offers social robots a way to learn advanced models of social cognition, following a self-learning paradigm, using features automatically extracted from high-dimensional sensory information. On the other side, Deep Learning (DL) models can help robots capture information from the environment, abstracting complex patterns from visual information. The combination of these two techniques is known as Deep Reinforcement Learning (DRL). The purpose of this work is to develop a DRL system that promotes natural and socially acceptable interaction between humans and robots. To this end, we propose an architecture, Social Robotics Deep Q-Network (SocialDQN), for teaching social robots to behave and interact appropriately with humans based on social signals, especially human emotional states. This constitutes a relevant contribution to the area, since social signals must not only be recognized by the robot but also help it take the appropriate action for the situation at hand. Features extracted from people’s faces are used to estimate the human emotional state, aiming to improve the robot’s perception. The development and validation of the system are carried out with the support of the SimDRLSR simulator. Results obtained through several tests demonstrate that the system learned to maximize the rewards satisfactorily and, consequently, that the robot behaves in a socially acceptable way. |
format | Online Article Text |
id | pubmed-9548603 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-9548603 2022-10-11 Deep Q-network for social robotics using emotional social signals Belo, José Pedro R. Azevedo, Helio Ramos, Josué J. G. Romero, Roseli A. F. Front Robot AI Robotics and AI Social robotics is a branch of human-robot interaction dedicated to developing systems that control robots operating in unstructured environments shared with human beings. Social robots must interact with humans by understanding social signals and responding appropriately to them. Most social robots are still pre-programmed and have little ability to learn and respond with adequate actions during an interaction with humans. More elaborate recent methods use body movements, gaze direction, and body language, but they generally neglect vital cues present during an interaction, such as the human emotional state. In this article, we address the problem of developing a system that enables a robot to decide, autonomously, which behaviors to emit as a function of the human emotional state. On one side, Reinforcement Learning (RL) offers social robots a way to learn advanced models of social cognition, following a self-learning paradigm, using features automatically extracted from high-dimensional sensory information. On the other side, Deep Learning (DL) models can help robots capture information from the environment, abstracting complex patterns from visual information. The combination of these two techniques is known as Deep Reinforcement Learning (DRL). The purpose of this work is to develop a DRL system that promotes natural and socially acceptable interaction between humans and robots. To this end, we propose an architecture, Social Robotics Deep Q-Network (SocialDQN), for teaching social robots to behave and interact appropriately with humans based on social signals, especially human emotional states. This constitutes a relevant contribution to the area, since social signals must not only be recognized by the robot but also help it take the appropriate action for the situation at hand. Features extracted from people’s faces are used to estimate the human emotional state, aiming to improve the robot’s perception. The development and validation of the system are carried out with the support of the SimDRLSR simulator. Results obtained through several tests demonstrate that the system learned to maximize the rewards satisfactorily and, consequently, that the robot behaves in a socially acceptable way. Frontiers Media S.A. 2022-09-26 /pmc/articles/PMC9548603/ /pubmed/36226257 http://dx.doi.org/10.3389/frobt.2022.880547 Text en Copyright © 2022 Belo, Azevedo, Ramos and Romero. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Robotics and AI Belo, José Pedro R. Azevedo, Helio Ramos, Josué J. G. Romero, Roseli A. F. Deep Q-network for social robotics using emotional social signals |
title | Deep Q-network for social robotics using emotional social signals |
title_full | Deep Q-network for social robotics using emotional social signals |
title_fullStr | Deep Q-network for social robotics using emotional social signals |
title_full_unstemmed | Deep Q-network for social robotics using emotional social signals |
title_short | Deep Q-network for social robotics using emotional social signals |
title_sort | deep q-network for social robotics using emotional social signals |
topic | Robotics and AI |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9548603/ https://www.ncbi.nlm.nih.gov/pubmed/36226257 http://dx.doi.org/10.3389/frobt.2022.880547 |
work_keys_str_mv | AT belojosepedror deepqnetworkforsocialroboticsusingemotionalsocialsignals AT azevedohelio deepqnetworkforsocialroboticsusingemotionalsocialsignals AT ramosjosuejg deepqnetworkforsocialroboticsusingemotionalsocialsignals AT romeroroseliaf deepqnetworkforsocialroboticsusingemotionalsocialsignals |
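To make the approach described in the abstract above more concrete, here is a minimal, hypothetical sketch of a Deep Q-Network whose state combines visual features with a discrete human-emotion label, in the spirit of SocialDQN. This is not the authors' implementation: the framework (PyTorch), the dimensions, the emotion categories, the action set, and every name below are assumptions chosen only for illustration; the actual SocialDQN architecture, training procedure, and SimDRLSR interface are described in the article itself.

```python
# Hypothetical sketch (not the authors' SocialDQN code): a DQN whose state
# combines visual features with a discrete human-emotion label, as the
# abstract describes for selecting socially appropriate robot behaviors.
import random
import torch
import torch.nn as nn

NUM_EMOTIONS = 7   # assumption: e.g., basic emotion categories from a face classifier
NUM_ACTIONS = 4    # assumption: robot social behaviors (e.g., greet, look, approach, wait)
VISUAL_DIM = 512   # assumption: size of a CNN feature vector for the camera image


class SocialQNet(nn.Module):
    """Q-network over a state = (visual features, one-hot emotional state)."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(VISUAL_DIM + NUM_EMOTIONS, 256),
            nn.ReLU(),
            nn.Linear(256, NUM_ACTIONS),
        )

    def forward(self, visual_feats, emotion_onehot):
        # Concatenate the image embedding with the detected emotion before
        # estimating one Q-value per candidate social behavior.
        state = torch.cat([visual_feats, emotion_onehot], dim=-1)
        return self.net(state)


def select_action(qnet, visual_feats, emotion_onehot, epsilon=0.1):
    """Epsilon-greedy action selection, the standard DQN exploration scheme."""
    if random.random() < epsilon:
        return random.randrange(NUM_ACTIONS)
    with torch.no_grad():
        q_values = qnet(visual_feats, emotion_onehot)
    return int(q_values.argmax(dim=-1).item())


# Example: one decision step with placeholder inputs.
qnet = SocialQNet()
visual = torch.randn(1, VISUAL_DIM)  # stand-in for CNN features of the camera image
emotion = nn.functional.one_hot(torch.tensor([3]), NUM_EMOTIONS).float()  # e.g., "happy"
action = select_action(qnet, visual, emotion)
```

The point the sketch illustrates is the one the abstract emphasizes: the recognized emotional state is part of the state the Q-network conditions on, so the learned policy chooses social behaviors as a function of that state rather than reacting to visual input alone.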