Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents
With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods for monitoring AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are not aware of the exact performance levels of automated algorithms often experience...
Main Authors: de Visser, Ewart J.; Beatty, Paul J.; Estepp, Justin R.; Kohn, Spencer; Abubshait, Abdulaziz; Fedota, John R.; McDonald, Craig G.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2018
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6095965/
https://www.ncbi.nlm.nih.gov/pubmed/30147648
http://dx.doi.org/10.3389/fnhum.2018.00309
Similar Items

- Measurement of Trust in Automation: A Narrative Review and Reference Guide
  by: Kohn, Spencer C., et al.
  Published: (2021)
- Examining Social Cognition with Embodied Robots: Does Prior Experience with a Robot Impact Feedback-associated Learning in a Gambling Task?
  by: Abubshait, Abdulaziz, et al.
  Published: (2021)
- You Look Human, But Act Like a Machine: Agent Appearance and Behavior Modulate Different Aspects of Human–Robot Interaction
  by: Abubshait, Abdulaziz, et al.
  Published: (2017)
- Trust-Based Smart Contract for Automated Agent to Agent Communication
  by: Mhamdi, Halima, et al.
  Published: (2022)
- Lack of Association between Human Plasma Oxytocin and Interpersonal Trust in a Prisoner's Dilemma Paradigm
  by: Christensen, James C., et al.
  Published: (2014)