Human but not robotic gaze facilitates action prediction
Do people ascribe intentions to humanoid robots as they would to humans or non-human-like animated objects? In six experiments, we compared people’s ability to extract non-mentalistic (i.e., where an agent is looking) and mentalistic (i.e., what an agent is looking at; what an agent is going to do)...
Main authors: Tidoni, Emmanuele; Holle, Henning; Scandola, Michele; Schindler, Igor; Hill, Loron; Cross, Emily S.
Format: Online Article Text
Language: English
Published: Elsevier, 2022
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9189121/ · https://www.ncbi.nlm.nih.gov/pubmed/35707718 · http://dx.doi.org/10.1016/j.isci.2022.104462
Similar items
- Apparent Biological Motion in First and Third Person Perspective
  by: Tidoni, Emmanuele, et al.
  Published: (2016)
- Rubber hand illusion induced by touching the face ipsilaterally to a deprived hand: evidence for plastic “somatotopic” remapping in tetraplegics
  by: Scandola, Michele, et al.
  Published: (2014)
- Commentary: Understanding intentions from actions: Direct perception, inference, and the roles of mirror and mentalizing systems
  by: Tidoni, Emmanuele, et al.
  Published: (2016)
- Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot
  by: Tidoni, Emmanuele, et al.
  Published: (2014)
- Does a robot’s gaze aversion affect human gaze aversion?
  by: Mishra, Chinmaya, et al.
  Published: (2023)