
Defense Against Explanation Manipulation

Explainable machine learning attracts increasing attention because it improves model transparency, which helps machine learning earn trust in real-world applications. However, explanation methods have recently been shown to be vulnerable to manipulation: a model's explanation can be changed substantially while its prediction stays the same. To address this problem, prior efforts have relied on more stable explanation methods or on changing model configurations. In this work, we tackle the problem from the training perspective and propose a new training scheme, Adversarial Training on EXplanations (ATEX), that improves the internal explanation stability of a model regardless of the specific explanation method applied. Instead of directly specifying explanation values over data instances, ATEX only places constraints on model predictions, which avoids second-order derivatives in the optimization. As a further discussion, we also find that explanation stability is closely related to another property of the model: its susceptibility to adversarial attacks. Experiments show that ATEX improves model robustness against manipulation targeting explanations, and it also brings additional benefits, including smoother explanations and improved efficacy of adversarial training when applied to the model.
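The abstract describes ATEX only at a high level, so the following is a minimal, hypothetical sketch of the general idea it states: constraining model predictions on perturbed inputs (a first-order objective) instead of penalizing explanation values directly, which would require second-order derivatives. The PyTorch code, the function names prediction_consistency_loss and training_step, the Gaussian noise model, and the KL-divergence penalty are all illustrative assumptions, not the algorithm from the paper.

```python
# Hypothetical sketch (not the authors' code): encourage stable
# gradient-based explanations by keeping the model's predictions
# consistent in a small neighborhood of each training point.
import torch
import torch.nn.functional as F

def prediction_consistency_loss(model, x, epsilon=0.05, n_samples=4):
    """Penalize prediction changes on small random perturbations of x.

    Keeping the predictive distribution (nearly) constant around x is a
    first-order surrogate for keeping saliency maps stable there, so no
    second-order derivatives appear in the objective.
    """
    with torch.no_grad():
        clean_logits = model(x)  # reference predictions on the clean input
    loss = 0.0
    for _ in range(n_samples):
        noise = epsilon * torch.randn_like(x)
        noisy_logits = model(x + noise)
        # KL divergence between clean and perturbed predictive distributions
        loss = loss + F.kl_div(
            F.log_softmax(noisy_logits, dim=-1),
            F.softmax(clean_logits, dim=-1),
            reduction="batchmean",
        )
    return loss / n_samples

def training_step(model, optimizer, x, y, lam=1.0):
    """One training step: task loss plus the prediction-consistency penalty."""
    optimizer.zero_grad()
    task_loss = F.cross_entropy(model(x), y)
    stability_loss = prediction_consistency_loss(model, x)
    (task_loss + lam * stability_loss).backward()
    optimizer.step()
```

In this sketch, explanation stability is encouraged only indirectly, through the behavior of the predictions in a neighborhood of each instance, which is why no Hessian-vector products are needed in the loss.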

Bibliographic Details
Main Authors: Tang, Ruixiang; Liu, Ninghao; Yang, Fan; Zou, Na; Hu, Xia
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2022
Subjects: Big Data
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8866947/
https://www.ncbi.nlm.nih.gov/pubmed/35224483
http://dx.doi.org/10.3389/fdata.2022.704203
Journal: Front Big Data (Frontiers Media S.A.); published online February 10, 2022.
Rights: Copyright © 2022 Tang, Liu, Yang, Zou and Hu. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY, https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.