Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks
Main Authors: | Famili, Azadeh, Lao, Yingjie |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10538103/ https://www.ncbi.nlm.nih.gov/pubmed/37765778 http://dx.doi.org/10.3390/s23187722 |
_version_ | 1785113249631436800 |
---|---|
author | Famili, Azadeh Lao, Yingjie |
author_facet | Famili, Azadeh Lao, Yingjie |
author_sort | Famili, Azadeh |
collection | PubMed |
description | Machine learning deployment on edge devices faces challenges such as computational cost and privacy. A membership inference attack (MIA) is an attack in which the adversary aims to infer whether a data sample belongs to the training set; in other words, user data privacy might be compromised by an MIA mounted against a well-trained model. It is therefore vital to have defense mechanisms in place to protect training data, especially in privacy-sensitive applications such as healthcare. This paper examines the implications of quantization for privacy leakage and proposes a novel quantization method that enhances the resistance of a neural network against MIA. Recent studies have shown that model quantization leads to resistance against membership inference attacks. However, unlike existing quantization approaches, whose primary objectives are performance, energy efficiency, compression, or speed, the proposed quantization framework is designed specifically to boost resistance against MIA. We evaluate the effectiveness of our methods on various popular benchmark datasets and model architectures. All popular evaluation metrics, including accuracy, true positive rate (TPR), and F1-score, show improvement when compared to the full-bitwidth model. For example, for ResNet on CIFAR-10, our experimental results show that our algorithm can reduce the attack accuracy of MIA by 14%, the true positive rate by 37%, and the F1-score of members by 39% compared to the full-bitwidth network. Here, a reduction in true positive rate means the attacker will not be able to identify the training dataset members, which is the main goal of the MIA. |
format | Online Article Text |
id | pubmed-10538103 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-105381032023-09-29 Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks Famili, Azadeh Lao, Yingjie Sensors (Basel) Article Machine learning deployment on edge devices faces challenges such as computational cost and privacy. A membership inference attack (MIA) is an attack in which the adversary aims to infer whether a data sample belongs to the training set; in other words, user data privacy might be compromised by an MIA mounted against a well-trained model. It is therefore vital to have defense mechanisms in place to protect training data, especially in privacy-sensitive applications such as healthcare. This paper examines the implications of quantization for privacy leakage and proposes a novel quantization method that enhances the resistance of a neural network against MIA. Recent studies have shown that model quantization leads to resistance against membership inference attacks. However, unlike existing quantization approaches, whose primary objectives are performance, energy efficiency, compression, or speed, the proposed quantization framework is designed specifically to boost resistance against MIA. We evaluate the effectiveness of our methods on various popular benchmark datasets and model architectures. All popular evaluation metrics, including accuracy, true positive rate (TPR), and F1-score, show improvement when compared to the full-bitwidth model. For example, for ResNet on CIFAR-10, our experimental results show that our algorithm can reduce the attack accuracy of MIA by 14%, the true positive rate by 37%, and the F1-score of members by 39% compared to the full-bitwidth network. Here, a reduction in true positive rate means the attacker will not be able to identify the training dataset members, which is the main goal of the MIA. MDPI 2023-09-07 /pmc/articles/PMC10538103/ /pubmed/37765778 http://dx.doi.org/10.3390/s23187722 Text en © 2023 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Famili, Azadeh Lao, Yingjie Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks |
title | Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks |
title_full | Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks |
title_fullStr | Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks |
title_full_unstemmed | Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks |
title_short | Deep Neural Network Quantization Framework for Effective Defense against Membership Inference Attacks |
title_sort | deep neural network quantization framework for effective defense against membership inference attacks |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10538103/ https://www.ncbi.nlm.nih.gov/pubmed/37765778 http://dx.doi.org/10.3390/s23187722 |
work_keys_str_mv | AT familiazadeh deepneuralnetworkquantizationframeworkforeffectivedefenseagainstmembershipinferenceattacks AT laoyingjie deepneuralnetworkquantizationframeworkforeffectivedefenseagainstmembershipinferenceattacks |
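The abstract above rests on two technical ingredients: reducing weight bitwidth via quantization, and measuring membership leakage by attack accuracy and true positive rate. The following is a minimal, self-contained sketch of both, assuming uniform symmetric quantization and a simple confidence-threshold attack on synthetic confidence scores; the function names, the threshold value, and the Beta-distributed confidences are illustrative assumptions, not the authors' actual framework.

```python
import numpy as np

def quantize(weights, bits=4):
    """Uniform symmetric fake-quantization of a weight tensor to `bits` bitwidth.

    Weights are scaled so the largest magnitude maps to the top quantization
    level, rounded to integers, then rescaled back to floating point.
    """
    qmax = 2 ** (bits - 1) - 1                # e.g. 7 for 4-bit signed
    scale = np.max(np.abs(weights)) / qmax
    q = np.clip(np.round(weights / scale), -qmax - 1, qmax)
    return q * scale                          # dequantized weights

def threshold_mia(member_conf, nonmember_conf, tau=0.9):
    """Confidence-threshold MIA: predict 'member' when the model's confidence
    on a sample exceeds tau. Returns balanced attack accuracy and TPR."""
    tpr = (member_conf > tau).mean()          # members correctly flagged
    tnr = 1.0 - (nonmember_conf > tau).mean() # non-members correctly rejected
    acc = 0.5 * (tpr + tnr)                   # balanced attack accuracy
    return acc, tpr

rng = np.random.default_rng(0)
# Hypothetical confidences: an overfit model is typically more confident on
# training members than on unseen samples -- the signal an MIA exploits.
members = rng.beta(8, 2, size=1000)
nonmembers = rng.beta(5, 5, size=1000)
acc, tpr = threshold_mia(members, nonmembers)

w = rng.normal(size=(4, 4))
wq = quantize(w, bits=4)
```

A defense in the spirit of the paper would aim to shrink the confidence gap between `members` and `nonmembers` (driving `acc` toward the 0.5 random-guess baseline) while keeping task accuracy high; plain post-training quantization as sketched here changes the weights but is not, by itself, the authors' privacy-targeted objective.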