MixNN: A Design for Protecting Deep Learning Models
Main Authors: | Liu, Chao; Chen, Hao; Wu, Yusen; Jin, Rui |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9656547/ https://www.ncbi.nlm.nih.gov/pubmed/36365952 http://dx.doi.org/10.3390/s22218254 |
_version_ | 1784829462782672896 |
---|---|
author | Liu, Chao; Chen, Hao; Wu, Yusen; Jin, Rui |
author_facet | Liu, Chao; Chen, Hao; Wu, Yusen; Jin, Rui |
author_sort | Liu, Chao |
collection | PubMed |
description | In this paper, we propose a novel design, called MixNN, for protecting a deep learning model's structure and parameters, since the model consists of several layers and each layer contains its own structure and parameters. The layers of a MixNN deep learning model are fully decentralized. Using ideas from mix networks, MixNN hides communication addresses, layer parameters and operations, and the forward and backward message flows among non-adjacent layers. MixNN has the following advantages: (i) an adversary cannot fully control all layers of a model, including their structure and parameters; (ii) even if some layers collude, they cannot tamper with other honest layers; (iii) model privacy is preserved during the training phase. We provide detailed descriptions for deployment. In a classification experiment, we compared a neural network deployed in a single virtual machine with the same network deployed using the MixNN design on AWS EC2. The results show that MixNN differs by less than 0.001 in classification accuracy, while the overall running time of MixNN is about 7.5 times slower than that of the network running on a single virtual machine. |
format | Online Article Text |
id | pubmed-9656547 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-9656547 2022-11-15 MixNN: A Design for Protecting Deep Learning Models Liu, Chao; Chen, Hao; Wu, Yusen; Jin, Rui Sensors (Basel) Article In this paper, we propose a novel design, called MixNN, for protecting a deep learning model's structure and parameters, since the model consists of several layers and each layer contains its own structure and parameters. The layers of a MixNN deep learning model are fully decentralized. Using ideas from mix networks, MixNN hides communication addresses, layer parameters and operations, and the forward and backward message flows among non-adjacent layers. MixNN has the following advantages: (i) an adversary cannot fully control all layers of a model, including their structure and parameters; (ii) even if some layers collude, they cannot tamper with other honest layers; (iii) model privacy is preserved during the training phase. We provide detailed descriptions for deployment. In a classification experiment, we compared a neural network deployed in a single virtual machine with the same network deployed using the MixNN design on AWS EC2. The results show that MixNN differs by less than 0.001 in classification accuracy, while the overall running time of MixNN is about 7.5 times slower than that of the network running on a single virtual machine. MDPI 2022-10-28 /pmc/articles/PMC9656547/ /pubmed/36365952 http://dx.doi.org/10.3390/s22218254 Text en © 2022 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Liu, Chao; Chen, Hao; Wu, Yusen; Jin, Rui; MixNN: A Design for Protecting Deep Learning Models |
title | MixNN: A Design for Protecting Deep Learning Models |
title_full | MixNN: A Design for Protecting Deep Learning Models |
title_fullStr | MixNN: A Design for Protecting Deep Learning Models |
title_full_unstemmed | MixNN: A Design for Protecting Deep Learning Models |
title_short | MixNN: A Design for Protecting Deep Learning Models |
title_sort | mixnn: a design for protecting deep learning models |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9656547/ https://www.ncbi.nlm.nih.gov/pubmed/36365952 http://dx.doi.org/10.3390/s22218254 |
work_keys_str_mv | AT liuchao mixnnadesignforprotectingdeeplearningmodels AT chenhao mixnnadesignforprotectingdeeplearningmodels AT wuyusen mixnnadesignforprotectingdeeplearningmodels AT jinrui mixnnadesignforprotectingdeeplearningmodels |
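The abstract in the record above describes MixNN's core architectural idea: each layer of the network is held by a separate, decentralized party, and only forward activations and backward gradients travel between adjacent layers, so no single party sees the whole model. Below is a minimal sketch of such a layer-per-node chain, not the authors' implementation: the `LayerNode` class, the dense-layer/ReLU choices, the NumPy training loop, and all dimensions are illustrative assumptions, and the mix-network machinery MixNN uses to hide addresses and message flows is omitted.

```python
# Minimal sketch of a layer-per-node chain: each "node" owns one dense layer's
# parameters and knows only its immediate neighbours. Forward activations and
# backward gradients are passed hop by hop, so no single node holds the whole
# model. All names and hyperparameters here are illustrative assumptions.
import numpy as np


class LayerNode:
    """One decentralized node holding a single dense layer."""

    def __init__(self, in_dim, out_dim, lr=0.1, last=False, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, size=(in_dim, out_dim))
        self.b = np.zeros(out_dim)
        self.lr = lr
        self.last = last          # last layer uses an identity activation
        self.next_node = None     # only adjacent addresses are known
        self.prev_node = None

    def forward(self, x):
        # Cache local values, apply the layer, and pass only the output onward.
        self.x = x
        self.z = x @ self.W + self.b
        out = self.z if self.last else np.maximum(self.z, 0.0)  # ReLU
        return self.next_node.forward(out) if self.next_node else out

    def backward(self, grad_out):
        # Receive the gradient from the next hop, update local parameters,
        # and send the gradient w.r.t. this node's input to the previous hop.
        grad_z = grad_out if self.last else grad_out * (self.z > 0)
        grad_x = grad_z @ self.W.T
        self.W -= self.lr * (self.x.T @ grad_z)
        self.b -= self.lr * grad_z.sum(axis=0)
        if self.prev_node is not None:
            self.prev_node.backward(grad_x)


# Wire three independent nodes into a chain; each knows only its neighbours.
nodes = [LayerNode(4, 8, seed=1), LayerNode(8, 8, seed=2),
         LayerNode(8, 1, last=True, seed=3)]
for a, b in zip(nodes, nodes[1:]):
    a.next_node, b.prev_node = b, a

# Toy regression data and a plain mean-squared-error training loop.
rng = np.random.default_rng(0)
x = rng.normal(size=(64, 4))
y = x.sum(axis=1, keepdims=True)

for step in range(200):
    pred = nodes[0].forward(x)          # forward message flow, hop by hop
    grad = 2.0 * (pred - y) / len(x)    # dLoss/dPred for MSE
    nodes[-1].backward(grad)            # backward message flow, hop by hop
```

In this sketch the driver calls `nodes[0].forward` and `nodes[-1].backward` directly; in the design the abstract describes, those calls would instead be messages routed between separate machines, with each hop knowing only its immediate neighbours and the non-adjacent hops hidden mix-network style.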