Developing crossmodal expression recognition based on a deep neural model
A robot capable of understanding emotion expressions can increase its own capability of solving problems by using emotion expressions as part of its own decision-making, in a similar way to humans. Evidence shows that the perception of human interaction starts with an innate perception mechanism, wh...
Main Authors: Barros, Pablo; Wermter, Stefan
Format: Online Article Text
Language: English
Published: SAGE Publications, 2016
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5098700/ https://www.ncbi.nlm.nih.gov/pubmed/27853349 http://dx.doi.org/10.1177/1059712316664017
Similar Items
- Crossmodal Language Grounding in an Embodied Neurocognitive Model
  by: Heinrich, Stefan, et al.
  Published: (2020)
- Bioinspired multisensory neural network with crossmodal integration and recognition
  by: Tan, Hongwei, et al.
  Published: (2021)
- What Can Computational Models Learn From Human Selective Attention? A Review From an Audiovisual Unimodal and Crossmodal Perspective
  by: Fu, Di, et al.
  Published: (2020)
- The Embodied Crossmodal Self Forms Language and Interaction: A Computational Cognitive Review
  by: Röder, Frank, et al.
  Published: (2021)
- Introduction to Machine Learning, Neural Networks, and Deep Learning
  by: Choi, Rene Y., et al.
  Published: (2020)