Defense against membership inference attack in graph neural networks through graph perturbation
Graph neural networks have demonstrated remarkable performance in learning node or graph representations for various graph-related tasks. However, learning with graph data or its embedded representations may induce privacy issues when the node representations contain sensitive or private user information. Although many machine learning models or techniques have been proposed for privacy preservation of traditional non-graph structured data, there is limited work to address graph privacy concerns. In this paper, we investigate the privacy problem of embedding representations of nodes, in which an adversary can infer the user's privacy by designing an inference attack algorithm. To address this problem, we develop a defense algorithm against white-box membership inference attacks, based on perturbation injection on the graph. In particular, we employ a graph reconstruction model and inject a certain size of noise into the intermediate output of the model, i.e., the latent representations of the nodes. The experimental results obtained on real-world datasets, along with reasonable usability and privacy metrics, demonstrate that our proposed approach can effectively resist membership inference attacks. Meanwhile, based on our method, the trade-off between usability and privacy brought by defense measures can be observed intuitively, which provides a reference for subsequent research in the field of graph privacy protection.
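The core defense described in the abstract, injecting a controlled amount of noise into the latent node representations produced by a graph reconstruction model, can be illustrated with a minimal sketch. The function name, the choice of Gaussian noise, and the plain-list embedding format below are assumptions made for illustration, not the paper's actual implementation:

```python
import random

def perturb_embeddings(embeddings, noise_scale=0.1, seed=0):
    # Add zero-mean Gaussian noise to every coordinate of the latent
    # node representations (the model's intermediate output), so that
    # membership signals carried by exact embedding values are obscured.
    rng = random.Random(seed)
    return [[v + rng.gauss(0.0, noise_scale) for v in node]
            for node in embeddings]

# Toy latent representations: 3 nodes with 4-dimensional embeddings.
z = [[1.0, 0.5, -0.2, 0.0],
     [0.3, 0.3, 0.3, 0.3],
     [-1.0, 2.0, 0.1, 0.7]]
z_noisy = perturb_embeddings(z, noise_scale=0.05)
```

The `noise_scale` parameter is where the usability/privacy trade-off the authors study would surface: larger noise hides membership better but degrades the downstream utility of the embeddings.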
Main Authors: Wang, Kai; Wu, Jinxia; Zhu, Tianqing; Ren, Wei; Hong, Ying
Format: Online Article Text
Language: English
Published: Springer Berlin Heidelberg, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9756746/ https://www.ncbi.nlm.nih.gov/pubmed/36540905 http://dx.doi.org/10.1007/s10207-022-00646-y
_version_ | 1784851684175904768 |
author | Wang, Kai Wu, Jinxia Zhu, Tianqing Ren, Wei Hong, Ying |
author_facet | Wang, Kai Wu, Jinxia Zhu, Tianqing Ren, Wei Hong, Ying |
author_sort | Wang, Kai |
collection | PubMed |
description | Graph neural networks have demonstrated remarkable performance in learning node or graph representations for various graph-related tasks. However, learning with graph data or its embedded representations may induce privacy issues when the node representations contain sensitive or private user information. Although many machine learning models or techniques have been proposed for privacy preservation of traditional non-graph structured data, there is limited work to address graph privacy concerns. In this paper, we investigate the privacy problem of embedding representations of nodes, in which an adversary can infer the user’s privacy by designing an inference attack algorithm. To address this problem, we develop a defense algorithm against white-box membership inference attacks, based on perturbation injection on the graph. In particular, we employ a graph reconstruction model and inject a certain size of noise into the intermediate output of the model, i.e., the latent representations of the nodes. The experimental results obtained on real-world datasets, along with reasonable usability and privacy metrics, demonstrate that our proposed approach can effectively resist membership inference attacks. Meanwhile, based on our method, the trade-off between usability and privacy brought by defense measures can be observed intuitively, which provides a reference for subsequent research in the field of graph privacy protection. |
format | Online Article Text |
id | pubmed-9756746 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer Berlin Heidelberg |
record_format | MEDLINE/PubMed |
spelling | pubmed-9756746 2022-12-16 Defense against membership inference attack in graph neural networks through graph perturbation Wang, Kai Wu, Jinxia Zhu, Tianqing Ren, Wei Hong, Ying Int J Inf Secur Regular Contribution Graph neural networks have demonstrated remarkable performance in learning node or graph representations for various graph-related tasks. However, learning with graph data or its embedded representations may induce privacy issues when the node representations contain sensitive or private user information. Although many machine learning models or techniques have been proposed for privacy preservation of traditional non-graph structured data, there is limited work to address graph privacy concerns. In this paper, we investigate the privacy problem of embedding representations of nodes, in which an adversary can infer the user's privacy by designing an inference attack algorithm. To address this problem, we develop a defense algorithm against white-box membership inference attacks, based on perturbation injection on the graph. In particular, we employ a graph reconstruction model and inject a certain size of noise into the intermediate output of the model, i.e., the latent representations of the nodes. The experimental results obtained on real-world datasets, along with reasonable usability and privacy metrics, demonstrate that our proposed approach can effectively resist membership inference attacks. Meanwhile, based on our method, the trade-off between usability and privacy brought by defense measures can be observed intuitively, which provides a reference for subsequent research in the field of graph privacy protection. Springer Berlin Heidelberg 2022-12-16 2023 /pmc/articles/PMC9756746/ /pubmed/36540905 http://dx.doi.org/10.1007/s10207-022-00646-y Text en © The Author(s), under exclusive licence to Springer-Verlag GmbH, DE 2022. Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Regular Contribution Wang, Kai Wu, Jinxia Zhu, Tianqing Ren, Wei Hong, Ying Defense against membership inference attack in graph neural networks through graph perturbation |
title | Defense against membership inference attack in graph neural networks through graph perturbation |
title_full | Defense against membership inference attack in graph neural networks through graph perturbation |
title_fullStr | Defense against membership inference attack in graph neural networks through graph perturbation |
title_full_unstemmed | Defense against membership inference attack in graph neural networks through graph perturbation |
title_short | Defense against membership inference attack in graph neural networks through graph perturbation |
title_sort | defense against membership inference attack in graph neural networks through graph perturbation |
topic | Regular Contribution |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9756746/ https://www.ncbi.nlm.nih.gov/pubmed/36540905 http://dx.doi.org/10.1007/s10207-022-00646-y |
work_keys_str_mv | AT wangkai defenseagainstmembershipinferenceattackingraphneuralnetworksthroughgraphperturbation AT wujinxia defenseagainstmembershipinferenceattackingraphneuralnetworksthroughgraphperturbation AT zhutianqing defenseagainstmembershipinferenceattackingraphneuralnetworksthroughgraphperturbation AT renwei defenseagainstmembershipinferenceattackingraphneuralnetworksthroughgraphperturbation AT hongying defenseagainstmembershipinferenceattackingraphneuralnetworksthroughgraphperturbation |