Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI
Artificial Intelligence (AI) has achieved remarkable success in image generation, image analysis, and language modeling, making data-driven techniques increasingly relevant in practical real-world applications, promising enhanced creativity and efficiency for human users. However, the deployment of...
Main Authors: Lucieri, Adriano; Dengel, Andreas; Ahmed, Sheraz
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2023
Subjects: Bioinformatics
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10356902/ https://www.ncbi.nlm.nih.gov/pubmed/37484865 http://dx.doi.org/10.3389/fbinf.2023.1194993
_version_ | 1785075378462654464 |
author | Lucieri, Adriano Dengel, Andreas Ahmed, Sheraz |
author_facet | Lucieri, Adriano Dengel, Andreas Ahmed, Sheraz |
author_sort | Lucieri, Adriano |
collection | PubMed |
description | Artificial Intelligence (AI) has achieved remarkable success in image generation, image analysis, and language modeling, making data-driven techniques increasingly relevant in practical real-world applications, promising enhanced creativity and efficiency for human users. However, the deployment of AI in high-stakes domains such as infrastructure and healthcare still raises concerns regarding algorithm accountability and safety. The emerging field of explainable AI (XAI) has made significant strides in developing interfaces that enable humans to comprehend the decisions made by data-driven models. Among these approaches, concept-based explainability stands out due to its ability to align explanations with high-level concepts familiar to users. Nonetheless, early research in adversarial machine learning has unveiled that exposing model explanations can render victim models more susceptible to attacks. This is the first study to investigate and compare the impact of concept-based explanations on the privacy of Deep Learning based AI models in the context of biomedical image analysis. An extensive privacy benchmark is conducted on three different state-of-the-art model architectures (ResNet50, NFNet, ConvNeXt) trained on two biomedical (ISIC and EyePACS) and one synthetic dataset (SCDB). The success of membership inference attacks while exposing varying degrees of attribution-based and concept-based explanations is systematically compared. The findings indicate that, in theory, concept-based explanations can potentially increase the vulnerability of a private AI system by up to 16% compared to attributions in the baseline setting. However, it is demonstrated that, in more realistic attack scenarios, the threat posed by explanations is negligible in practice. Furthermore, actionable recommendations are provided to ensure the safe deployment of concept-based XAI systems. 
In addition, the impact of differential privacy (DP) on the quality of concept-based explanations is explored, revealing that, besides negatively influencing explanation quality, DP can also have an adverse effect on the models’ privacy. |
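The membership inference attacks benchmarked in the study can be illustrated with a minimal confidence-thresholding sketch. This is not the paper's method or data; the synthetic score distributions below are hypothetical stand-ins for the intuition that a model tends to be more confident on its training members than on non-members, which a threshold attack exploits:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical softmax confidences: members (training points) skew high,
# non-members sit lower. Real attacks estimate these from shadow models.
member_conf = rng.beta(8, 2, size=1000)
nonmember_conf = rng.beta(4, 4, size=1000)

def mia_accuracy(members, nonmembers, threshold):
    """Predict 'member' whenever the model's confidence exceeds the threshold."""
    tp = np.mean(members > threshold)       # members correctly flagged
    tn = np.mean(nonmembers <= threshold)   # non-members correctly rejected
    return (tp + tn) / 2                    # balanced attack accuracy

# The attacker sweeps thresholds and keeps the best one; 0.5 means the
# attack is no better than random guessing.
thresholds = np.linspace(0, 1, 101)
best = max(mia_accuracy(member_conf, nonmember_conf, t) for t in thresholds)
print(f"best balanced attack accuracy: {best:.2f}")
```

Exposing explanations (attributions or concept scores) gives the attacker extra per-sample features on top of confidence, which is the mechanism by which the vulnerability discussed above can grow.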
format | Online Article Text |
id | pubmed-10356902 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-103569022023-07-21 Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI Lucieri, Adriano Dengel, Andreas Ahmed, Sheraz Front Bioinform Bioinformatics Artificial Intelligence (AI) has achieved remarkable success in image generation, image analysis, and language modeling, making data-driven techniques increasingly relevant in practical real-world applications, promising enhanced creativity and efficiency for human users. However, the deployment of AI in high-stakes domains such as infrastructure and healthcare still raises concerns regarding algorithm accountability and safety. The emerging field of explainable AI (XAI) has made significant strides in developing interfaces that enable humans to comprehend the decisions made by data-driven models. Among these approaches, concept-based explainability stands out due to its ability to align explanations with high-level concepts familiar to users. Nonetheless, early research in adversarial machine learning has unveiled that exposing model explanations can render victim models more susceptible to attacks. This is the first study to investigate and compare the impact of concept-based explanations on the privacy of Deep Learning based AI models in the context of biomedical image analysis. An extensive privacy benchmark is conducted on three different state-of-the-art model architectures (ResNet50, NFNet, ConvNeXt) trained on two biomedical (ISIC and EyePACS) and one synthetic dataset (SCDB). The success of membership inference attacks while exposing varying degrees of attribution-based and concept-based explanations is systematically compared. The findings indicate that, in theory, concept-based explanations can potentially increase the vulnerability of a private AI system by up to 16% compared to attributions in the baseline setting. 
However, it is demonstrated that, in more realistic attack scenarios, the threat posed by explanations is negligible in practice. Furthermore, actionable recommendations are provided to ensure the safe deployment of concept-based XAI systems. In addition, the impact of differential privacy (DP) on the quality of concept-based explanations is explored, revealing that, besides negatively influencing explanation quality, DP can also have an adverse effect on the models’ privacy. Frontiers Media S.A. 2023-07-05 /pmc/articles/PMC10356902/ /pubmed/37484865 http://dx.doi.org/10.3389/fbinf.2023.1194993 Text en Copyright © 2023 Lucieri, Dengel and Ahmed. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Bioinformatics Lucieri, Adriano Dengel, Andreas Ahmed, Sheraz Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI |
title | Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI |
title_full | Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI |
title_fullStr | Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI |
title_full_unstemmed | Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI |
title_short | Translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical AI |
title_sort | translating theory into practice: assessing the privacy implications of concept-based explanations for biomedical ai |
topic | Bioinformatics |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10356902/ https://www.ncbi.nlm.nih.gov/pubmed/37484865 http://dx.doi.org/10.3389/fbinf.2023.1194993 |
work_keys_str_mv | AT lucieriadriano translatingtheoryintopracticeassessingtheprivacyimplicationsofconceptbasedexplanationsforbiomedicalai AT dengelandreas translatingtheoryintopracticeassessingtheprivacyimplicationsofconceptbasedexplanationsforbiomedicalai AT ahmedsheraz translatingtheoryintopracticeassessingtheprivacyimplicationsofconceptbasedexplanationsforbiomedicalai |