Explainable Agents for Less Bias in Human-Agent Decision Making
Main Authors: | Malhi, Avleen; Knapic, Samanta; Främling, Kary
---|---
Format: | Online Article Text
Language: | English
Published: | 2020
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7338195/ http://dx.doi.org/10.1007/978-3-030-51924-7_8
_version_ | 1783554633054027776 |
---|---
author | Malhi, Avleen; Knapic, Samanta; Främling, Kary
author_facet | Malhi, Avleen; Knapic, Samanta; Främling, Kary
author_sort | Malhi, Avleen |
collection | PubMed |
description | As autonomous agents become more self-governing, ubiquitous and sophisticated, it is vital that humans can interact with them effectively. Agents often use Machine Learning (ML) to acquire expertise, but traditional ML methods produce opaque results that are difficult to interpret. Hence, these autonomous agents should be able to explain their behaviour and decisions before humans can trust them. This paper focuses on analyzing human understanding of explainable agents' behaviour. It reports a preliminary human-agent interaction study investigating the effect of explanations on participants' ability to detect bias introduced into the agents' decisions. We test the hypothesis that different explanation types help users detect the bias introduced into the autonomous agents' decisions. We compare three user groups: agents without explanations, and explainable agents using two different algorithms that automatically generate explanations for agent actions. A quantitative analysis of the three user groups (n = 20, 25, 20), in which users assess the bias in the agents' decisions for 15 test data cases, is conducted for each explanation type. Although the interaction study does not yield statistically significant findings, it shows notable differences between explanation-based and non-XAI recommendations in human-agent decision making. (A sketch of this kind of between-group comparison appears after the record below.)
format | Online Article Text |
id | pubmed-7338195 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
record_format | MEDLINE/PubMed |
spelling | pubmed-7338195 2020-07-07 Explainable Agents for Less Bias in Human-Agent Decision Making Malhi, Avleen; Knapic, Samanta; Främling, Kary Explainable, Transparent Autonomous Agents and Multi-Agent Systems Article 2020-06-04 /pmc/articles/PMC7338195/ http://dx.doi.org/10.1007/978-3-030-51924-7_8 Text en © Springer Nature Switzerland AG 2020 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
spellingShingle | Article Malhi, Avleen; Knapic, Samanta; Främling, Kary Explainable Agents for Less Bias in Human-Agent Decision Making
title | Explainable Agents for Less Bias in Human-Agent Decision Making |
title_full | Explainable Agents for Less Bias in Human-Agent Decision Making |
title_fullStr | Explainable Agents for Less Bias in Human-Agent Decision Making |
title_full_unstemmed | Explainable Agents for Less Bias in Human-Agent Decision Making |
title_short | Explainable Agents for Less Bias in Human-Agent Decision Making |
title_sort | explainable agents for less bias in human-agent decision making |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7338195/ http://dx.doi.org/10.1007/978-3-030-51924-7_8 |
work_keys_str_mv | AT malhiavleen explainableagentsforlessbiasinhumanagentdecisionmaking AT knapicsamanta explainableagentsforlessbiasinhumanagentdecisionmaking AT framlingkary explainableagentsforlessbiasinhumanagentdecisionmaking |
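For readers curious how the between-group comparison described in the abstract might be carried out, here is a minimal sketch. It assumes per-participant detection outcomes are aggregated into counts and compared with a chi-square test of independence; the counts, the majority-vote aggregation rule, and the use of SciPy's chi2_contingency are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch: chi-square comparison of bias-detection rates
# across the study's three participant groups (no explanation, and two
# explanation algorithms; n = 20, 25, 20). The counts below are invented
# placeholders, not the study's data; each participant is tallied by
# whether they detected the bias in a majority of the 15 test cases
# (an assumed aggregation rule).
from scipy.stats import chi2_contingency

# Rows: participant groups; columns: [detected bias, did not detect bias]
observed = [
    [8, 12],   # group 1: agent without explanations (n = 20)
    [15, 10],  # group 2: explanation algorithm A (n = 25)
    [13, 7],   # group 3: explanation algorithm B (n = 20)
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
```

With groups this small, a non-significant p-value is consistent with the abstract's report of notable but not statistically significant differences between explanation-based and non-XAI recommendations.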