Scrutinizing XAI using linear ground-truth data with suppressor variables
Machine learning (ML) is increasingly often used to inform high-stakes decisions. As complex ML models (e.g., deep neural networks) are often considered black boxes, a wealth of procedures has been developed to shed light on their inner workings and the ways in which their predictions come about, de...
Main authors: Wilming, Rick; Budding, Céline; Müller, Klaus-Robert; Haufe, Stefan
Format: Online Article (Text)
Language: English
Published: Springer US, 2022
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9123083/
https://www.ncbi.nlm.nih.gov/pubmed/35611184
http://dx.doi.org/10.1007/s10994-022-06167-y
Similar Items
- Ground Truth
  by: Garrity, George M.
  Published: (2009)
- To trust or not to trust an explanation: using LEAF to evaluate local linear XAI methods
  by: Amparore, Elvio, et al.
  Published: (2021)
- Scrutinizing the epigenetics revolution
  by: Meloni, Maurizio, et al.
  Published: (2014)
- Hands-On Explainable AI (XAI) with Python
  by: Rothman, Denis
  Published: (2020)
- Tryggo: Old norse for truth: The real truth about ground truth: New insights into the challenges of generating ground truth maps for WSI CAD algorithm evaluation
  by: Hipp, Jason D., et al.
  Published: (2012)