Active Inference Through Energy Minimization in Multimodal Affective Human–Robot Interaction
During communication, humans express their emotional states using various modalities (e.g., facial expressions and gestures), and they estimate the emotional states of others by paying attention to multimodal signals. To ensure that a communication robot with limited resources can pay attention to s...
Main Authors: Horii, Takato; Nagai, Yukie
Format: Online Article (Text)
Language: English
Published: Frontiers Media S.A., 2021
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8662315/
https://www.ncbi.nlm.nih.gov/pubmed/34901166
http://dx.doi.org/10.3389/frobt.2021.684401
Similar Items
- Integrated Cognitive Architecture for Robot Learning of Action and Language
  by: Miyazawa, Kazuki, et al.
  Published: (2019)
- Editorial: Language and Robotics
  by: Taniguchi, Tadahiro, et al.
  Published: (2021)
- Affect-Driven Learning of Robot Behaviour for Collaborative Human-Robot Interactions
  by: Churamani, Nikhil, et al.
  Published: (2022)
- SIGVerse: A Cloud-Based VR Platform for Research on Multimodal Human-Robot Interaction
  by: Inamura, Tetsunari, et al.
  Published: (2021)
- Robotic Telemedicine for Mental Health: A Multimodal Approach to Improve Human-Robot Engagement
  by: Lima, Maria R., et al.
  Published: (2021)