
A mental models approach for defining explainable artificial intelligence

Bibliographic Details
Main Authors: Merry, Michael, Riddle, Pat, Warren, Jim
Format: Online Article Text
Language: English
Published: BioMed Central 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8656102/
https://www.ncbi.nlm.nih.gov/pubmed/34886856
http://dx.doi.org/10.1186/s12911-021-01703-7
collection PubMed
description BACKGROUND: Wide-ranging concerns exist regarding the use of black-box modelling methods in sensitive contexts such as healthcare. Despite performance gains and hype, uptake of artificial intelligence (AI) is hindered by these concerns. Explainable AI is thought to help alleviate these concerns. However, existing definitions of 'explainable' do not form a solid foundation for this work. METHODS: We critique recent reviews of the literature regarding: the agency of an AI within a team; mental models, especially as they apply to healthcare, and the practical aspects of their elicitation; and existing and current definitions of explainability, especially from the perspective of AI researchers. On the basis of this literature, we create a new definition of explainable, and supporting terms, providing definitions that can be objectively evaluated. Finally, we apply the new definition of explainable to three existing models, demonstrating how it can apply to previous research, and providing guidance for future research on the basis of this definition. RESULTS: Existing definitions of explanation are premised on global applicability and do not address the question 'understandable by whom?'. Eliciting mental models can be likened to creating explainable AI if one considers the AI as a member of a team. On this basis, we define explainability in terms of the context of the model, comprising the purpose, audience, and language of the model and explanation. As examples, this definition is applied to regression models, neural nets, and human mental models in operating-room teams. CONCLUSIONS: Existing definitions of explanation have limitations for ensuring that the concerns raised by practical applications are resolved. Defining explainability in terms of the context of its application forces evaluations to be aligned with the practical goals of the model. Further, it allows researchers to explicitly distinguish between explanations for technical and lay audiences, allowing different evaluations to be applied to each.
id pubmed-8656102
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal BMC Med Inform Decis Mak (Research)
published BioMed Central 2021-12-09
license © The Author(s) 2021. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title A mental models approach for defining explainable artificial intelligence
topic Research