
Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI

Artificial intelligence (AI) systems hold great promise as decision-support tools, but we must be able to identify and understand their inevitable mistakes if they are to fulfill this potential. This is particularly true in domains where the decisions are high-stakes, such as law, medicine, and the military. In this Perspective, we describe the particular challenges for AI decision support posed in military coalition operations. These include having to deal with limited, low-quality data, which inevitably compromises AI performance. We suggest that these problems can be mitigated by taking steps that allow rapid trust calibration so that decision makers understand the AI system's limitations and likely failures and can calibrate their trust in its outputs appropriately. We propose that AI services can achieve this by being both interpretable and uncertainty-aware. Creating such AI systems poses various technical and human factors challenges. We review these challenges and recommend directions for future research.


Bibliographic Details
Main Authors: Tomsett, Richard, Preece, Alun, Braines, Dave, Cerutti, Federico, Chakraborty, Supriyo, Srivastava, Mani, Pearson, Gavin, Kaplan, Lance
Format: Online Article Text
Language: English
Published: Elsevier 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7660448/
https://www.ncbi.nlm.nih.gov/pubmed/33205113
http://dx.doi.org/10.1016/j.patter.2020.100049
author Tomsett, Richard
Preece, Alun
Braines, Dave
Cerutti, Federico
Chakraborty, Supriyo
Srivastava, Mani
Pearson, Gavin
Kaplan, Lance
collection PubMed
description Artificial intelligence (AI) systems hold great promise as decision-support tools, but we must be able to identify and understand their inevitable mistakes if they are to fulfill this potential. This is particularly true in domains where the decisions are high-stakes, such as law, medicine, and the military. In this Perspective, we describe the particular challenges for AI decision support posed in military coalition operations. These include having to deal with limited, low-quality data, which inevitably compromises AI performance. We suggest that these problems can be mitigated by taking steps that allow rapid trust calibration so that decision makers understand the AI system's limitations and likely failures and can calibrate their trust in its outputs appropriately. We propose that AI services can achieve this by being both interpretable and uncertainty-aware. Creating such AI systems poses various technical and human factors challenges. We review these challenges and recommend directions for future research.
format Online
Article
Text
id pubmed-7660448
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-7660448 2020-11-16 Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI. Patterns (N Y), Perspective. Elsevier 2020-07-10 /pmc/articles/PMC7660448/ /pubmed/33205113 http://dx.doi.org/10.1016/j.patter.2020.100049 Text en © 2020 The Authors. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
title Rapid Trust Calibration through Interpretable and Uncertainty-Aware AI
topic Perspective