
Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents


Bibliographic Details
Main Authors: de Visser, Ewart J., Beatty, Paul J., Estepp, Justin R., Kohn, Spencer, Abubshait, Abdulaziz, Fedota, John R., McDonald, Craig G.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6095965/
https://www.ncbi.nlm.nih.gov/pubmed/30147648
http://dx.doi.org/10.3389/fnhum.2018.00309
_version_ 1783348020590411776
author de Visser, Ewart J.
Beatty, Paul J.
Estepp, Justin R.
Kohn, Spencer
Abubshait, Abdulaziz
Fedota, John R.
McDonald, Craig G.
author_facet de Visser, Ewart J.
Beatty, Paul J.
Estepp, Justin R.
Kohn, Spencer
Abubshait, Abdulaziz
Fedota, John R.
McDonald, Craig G.
author_sort de Visser, Ewart J.
collection PubMed
description With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people, who are not aware of the exact performance levels of automated algorithms, often experience a mismatch in expectations. Consequently, they will often provide either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, or trust calibration, remains a fundamental challenge in research investigating the use of automation. Due to the context-dependent nature of trust, universal measures of trust have not been established. Trust is a difficult construct to investigate because even the act of reflecting on how much a person trusts a certain agent can change the perception of that agent. We hypothesized that electroencephalograms (EEGs) would be able to provide such a universal index of trust without the need of self-report. In this work, EEGs were recorded for 21 participants (mean age = 22.1; 13 females) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm’s degree of credibility and reliability were manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) it is possible to reliably elicit both the oERN and oPe while participants monitored these computer algorithms, (2) the oPe, as opposed to the oERN, significantly distinguished between high and low reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring for examining trust in computer algorithms.
format Online
Article
Text
id pubmed-6095965
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-60959652018-08-24 Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents de Visser, Ewart J. Beatty, Paul J. Estepp, Justin R. Kohn, Spencer Abubshait, Abdulaziz Fedota, John R. McDonald, Craig G. Front Hum Neurosci Neuroscience With the rise of increasingly complex artificial intelligence (AI), there is a need to design new methods to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people, who are not aware of the exact performance levels of automated algorithms, often experience a mismatch in expectations. Consequently, they will often provide either too little or too much trust in an algorithm. Detecting such a mismatch in expectations, or trust calibration, remains a fundamental challenge in research investigating the use of automation. Due to the context-dependent nature of trust, universal measures of trust have not been established. Trust is a difficult construct to investigate because even the act of reflecting on how much a person trusts a certain agent can change the perception of that agent. We hypothesized that electroencephalograms (EEGs) would be able to provide such a universal index of trust without the need of self-report. In this work, EEGs were recorded for 21 participants (mean age = 22.1; 13 females) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm’s degree of credibility and reliability were manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and observational error positivity (oPe), are potential candidates for monitoring computer algorithm performance. Our findings demonstrate that (1) it is possible to reliably elicit both the oERN and oPe while participants monitored these computer algorithms, (2) the oPe, as opposed to the oERN, significantly distinguished between high and low reliability algorithms, and (3) the oPe significantly correlated with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring for examining trust in computer algorithms. Frontiers Media S.A. 2018-08-10 /pmc/articles/PMC6095965/ /pubmed/30147648 http://dx.doi.org/10.3389/fnhum.2018.00309 Text en Copyright © 2018 de Visser, Beatty, Estepp, Kohn, Abubshait, Fedota and McDonald. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Neuroscience
de Visser, Ewart J.
Beatty, Paul J.
Estepp, Justin R.
Kohn, Spencer
Abubshait, Abdulaziz
Fedota, John R.
McDonald, Craig G.
Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents
title Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents
title_full Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents
title_fullStr Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents
title_full_unstemmed Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents
title_short Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents
title_sort learning from the slips of others: neural correlates of trust in automated agents
topic Neuroscience
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6095965/
https://www.ncbi.nlm.nih.gov/pubmed/30147648
http://dx.doi.org/10.3389/fnhum.2018.00309
work_keys_str_mv AT devisserewartj learningfromtheslipsofothersneuralcorrelatesoftrustinautomatedagents
AT beattypaulj learningfromtheslipsofothersneuralcorrelatesoftrustinautomatedagents
AT esteppjustinr learningfromtheslipsofothersneuralcorrelatesoftrustinautomatedagents
AT kohnspencer learningfromtheslipsofothersneuralcorrelatesoftrustinautomatedagents
AT abubshaitabdulaziz learningfromtheslipsofothersneuralcorrelatesoftrustinautomatedagents
AT fedotajohnr learningfromtheslipsofothersneuralcorrelatesoftrustinautomatedagents
AT mcdonaldcraigg learningfromtheslipsofothersneuralcorrelatesoftrustinautomatedagents