Mitigating belief projection in explainable artificial intelligence via Bayesian teaching
State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We propose explicitly modelling the human explainee via Bayesi...
Main authors: Yang, Scott Cheng-Hsin; Vong, Wai Keen; Sojitra, Ravi B.; Folke, Tomas; Shafto, Patrick
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2021
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8110978/ https://www.ncbi.nlm.nih.gov/pubmed/33972625 http://dx.doi.org/10.1038/s41598-021-89267-4
Similar items
- To explain or not to explain?—Artificial intelligence explainability in clinical decision support systems
  by: Amann, Julia, et al.
  Published: (2022)
- Causability and explainability of artificial intelligence in medicine
  by: Holzinger, Andreas, et al.
  Published: (2019)
- Explainable Artificial Intelligence for Neuroscience: Behavioral Neurostimulation
  by: Fellous, Jean-Marc, et al.
  Published: (2019)
- Explainable Artificial Intelligence in Endocrinological Medical Research
  by: Webb-Robertson, Bobbie-Jo M
  Published: (2021)
- Artificial intelligence explainability: the technical and ethical dimensions
  by: McDermid, John A., et al.
  Published: (2021)