To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
The main objective of eXplainable Artificial Intelligence (XAI) is to provide effective explanations for black-box classifiers. The existing literature lists many desirable properties for explanations to be useful, but there is scarce consensus on how to quantitatively evaluate explanations in practice...
Main Authors: Amparore, Elvio; Perotti, Alan; Bajardi, Paolo
Format: Online Article Text
Language: English
Published: PeerJ Inc., 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8056245/ https://www.ncbi.nlm.nih.gov/pubmed/33977131 http://dx.doi.org/10.7717/peerj-cs.479
Similar Items
- Feature relevance XAI in anomaly detection: Reviewing approaches and challenges
  by: Tritscher, Julian, et al.
  Published: (2023)
- First impressions of a financial AI assistant: differences between high trust and low trust users
  by: Schreibelmayr, Simon, et al.
  Published: (2023)
- The Relationship Between Performance and Trust in AI in E-Finance
  by: Maier, Torsten, et al.
  Published: (2022)
- Trust Dynamics and Verbal Assurances in Human Robot Physical Collaboration
  by: Alhaji, Basel, et al.
  Published: (2021)
- Corrigendum: Trust Dynamics and Verbal Assurances in Human Robot Physical Collaboration
  by: Alhaji, Basel, et al.
  Published: (2021)