Is Homomorphic Encryption-Based Deep Learning Secure Enough?
Main Authors: | Shin, Jinmyeong; Choi, Seok-Hwan; Choi, Yoon-Ho |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8659496/ https://www.ncbi.nlm.nih.gov/pubmed/34883809 http://dx.doi.org/10.3390/s21237806 |
_version_ | 1784612976249012224 |
---|---|
author | Shin, Jinmyeong; Choi, Seok-Hwan; Choi, Yoon-Ho |
author_facet | Shin, Jinmyeong; Choi, Seok-Hwan; Choi, Yoon-Ho |
author_sort | Shin, Jinmyeong |
collection | PubMed |
description | As the amount of data collected and analyzed by machine learning technology increases, data that can identify individuals is also being collected in large quantities. In particular, as deep learning technology, which requires a large amount of analysis data, is adopted in various service fields, the possibility of exposing users' sensitive information increases, and the user privacy problem is growing more than ever. As a solution to this data privacy problem, homomorphic encryption, an encryption technology that supports arithmetic operations on encrypted data, has been applied in recent years to various fields, including finance and health care. But is it possible to use deep learning services while preserving users' data privacy by applying homomorphic encryption to their data? In this paper, we are the first to propose three attack methods that infringe on users' data privacy by exploiting possible security vulnerabilities in the process of using homomorphic encryption-based deep learning services. To specify and verify the feasibility of exploiting these vulnerabilities, we propose three attacks: (1) an adversarial attack exploiting the communication link between the client and the trusted party; (2) a reconstruction attack using paired input and output data; and (3) a membership inference attack by a malicious insider. In addition, we describe real-world exploit scenarios for financial and medical services. The experimental evaluation results show that the adversarial example and reconstruction attacks are a practical threat to homomorphic encryption-based deep learning models: the adversarial attack decreased the average classification accuracy from 0.927 to 0.043, and the reconstruction attack achieved an average reclassification accuracy of 0.888. |
format | Online Article Text |
id | pubmed-8659496 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8659496 2021-12-10 Is Homomorphic Encryption-Based Deep Learning Secure Enough? Shin, Jinmyeong; Choi, Seok-Hwan; Choi, Yoon-Ho Sensors (Basel) Article As the amount of data collected and analyzed by machine learning technology increases, data that can identify individuals is also being collected in large quantities. In particular, as deep learning technology, which requires a large amount of analysis data, is adopted in various service fields, the possibility of exposing users' sensitive information increases, and the user privacy problem is growing more than ever. As a solution to this data privacy problem, homomorphic encryption, an encryption technology that supports arithmetic operations on encrypted data, has been applied in recent years to various fields, including finance and health care. But is it possible to use deep learning services while preserving users' data privacy by applying homomorphic encryption to their data? In this paper, we are the first to propose three attack methods that infringe on users' data privacy by exploiting possible security vulnerabilities in the process of using homomorphic encryption-based deep learning services. To specify and verify the feasibility of exploiting these vulnerabilities, we propose three attacks: (1) an adversarial attack exploiting the communication link between the client and the trusted party; (2) a reconstruction attack using paired input and output data; and (3) a membership inference attack by a malicious insider. In addition, we describe real-world exploit scenarios for financial and medical services. The experimental evaluation results show that the adversarial example and reconstruction attacks are a practical threat to homomorphic encryption-based deep learning models: the adversarial attack decreased the average classification accuracy from 0.927 to 0.043, and the reconstruction attack achieved an average reclassification accuracy of 0.888. MDPI 2021-11-24 /pmc/articles/PMC8659496/ /pubmed/34883809 http://dx.doi.org/10.3390/s21237806 Text en © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Shin, Jinmyeong Choi, Seok-Hwan Choi, Yoon-Ho Is Homomorphic Encryption-Based Deep Learning Secure Enough? |
title | Is Homomorphic Encryption-Based Deep Learning Secure Enough? |
title_full | Is Homomorphic Encryption-Based Deep Learning Secure Enough? |
title_fullStr | Is Homomorphic Encryption-Based Deep Learning Secure Enough? |
title_full_unstemmed | Is Homomorphic Encryption-Based Deep Learning Secure Enough? |
title_short | Is Homomorphic Encryption-Based Deep Learning Secure Enough? |
title_sort | is homomorphic encryption-based deep learning secure enough? |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8659496/ https://www.ncbi.nlm.nih.gov/pubmed/34883809 http://dx.doi.org/10.3390/s21237806 |
work_keys_str_mv | AT shinjinmyeong ishomomorphicencryptionbaseddeeplearningsecureenough AT choiseokhwan ishomomorphicencryptionbaseddeeplearningsecureenough AT choiyoonho ishomomorphicencryptionbaseddeeplearningsecureenough |
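
The description above characterizes homomorphic encryption as an encryption technology that supports arithmetic operations on encrypted data. This record does not name the scheme the paper's services use, so the following is only a minimal illustrative sketch of that homomorphic property, using textbook additively homomorphic Paillier with toy parameters (deployed deep learning services typically rely on lattice-based schemes such as CKKS or BFV):

```python
# Minimal sketch of textbook Paillier encryption, shown only to illustrate
# the homomorphic property the abstract refers to. Toy parameters; not the
# scheme or parameters used in the paper.
import math
import secrets

p, q = 2003, 2011                  # toy primes; real keys use ~1024-bit primes
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)       # Carmichael's lambda(n)
g = n + 1                          # standard choice of generator

def L(x):
    # L(x) = (x - 1) / n, defined for x congruent to 1 mod n
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    while True:
        r = secrets.randbelow(n - 1) + 1
        if math.gcd(r, n) == 1:     # r must be a unit mod n
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# The homomorphic property: multiplying ciphertexts adds the plaintexts mod n.
c1, c2 = encrypt(20), encrypt(22)
assert decrypt((c1 * c2) % n2) == 42
print("Enc(20) * Enc(22) decrypts to", decrypt((c1 * c2) % n2))
```

A server evaluating a model on such ciphertexts never sees the plaintext inputs, which is why, per the abstract, the three attacks target the surrounding service process (the communication link, paired input/output data, and a malicious insider) rather than the encryption itself.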
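The record also reports that the adversarial attack cut average classification accuracy from 0.927 to 0.043 but does not name the attack algorithm. As a hedged illustration of the general technique only, the sketch below applies an FGSM-style perturbation (sign of the input gradient; an assumption, not necessarily the paper's method) to a toy logistic classifier:

```python
# Illustrative FGSM-style adversarial perturbation on a toy linear classifier.
# The weights, input, and epsilon below are hypothetical; shown only to
# demonstrate how a gradient-sign perturbation degrades a model's output.
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1      # hypothetical trained weights and bias
x = rng.normal(size=8)              # a clean input whose true label is y = 1
y = 1.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Gradient of the binary cross-entropy loss with respect to the input x.
grad_x = (sigmoid(w @ x + b) - y) * w

eps = 0.5                           # perturbation budget
x_adv = x + eps * np.sign(grad_x)   # FGSM step: move along the gradient sign

print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))
```

Because the perturbation follows the sign of the loss gradient, the score for the true class drops; in the abstract's first attack scenario, such perturbed inputs would be injected over the communication link between the client and the trusted party.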