Defense Against Explanation Manipulation
Explainable machine learning is attracting increasing attention because it improves model transparency, which helps machine learning earn trust in real-world applications. However, explanation methods have recently been shown to be vulnerable to manipulation, where we can easily change a...
Main authors: Tang, Ruixiang; Liu, Ninghao; Yang, Fan; Zou, Na; Hu, Xia
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Online access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8866947/ https://www.ncbi.nlm.nih.gov/pubmed/35224483 http://dx.doi.org/10.3389/fdata.2022.704203
Similar items
- Deep Representation Learning for Social Network Analysis
  by: Tan, Qiaoyu, et al.
  Published: (2019)
- Effects of Feature-Based Explanation and Its Output Modality on User Satisfaction With Service Recommender Systems
  by: Zhang, Zhirun, et al.
  Published: (2022)
- Vulnerabilities of Connectionist AI Applications: Evaluation and Defense
  by: Berghoff, Christian, et al.
  Published: (2020)
- On Robustness of Neural Architecture Search Under Label Noise
  by: Chen, Yi-Wei, et al.
  Published: (2020)
- PME: pruning-based multi-size embedding for recommender systems
  by: Liu, Zirui, et al.
  Published: (2023)