Advancing Federated Learning through Verifiable Computations and Homomorphic Encryption
Federated learning, as one of the three main technical routes for privacy computing, has been widely studied and applied in both academia and industry. However, malicious nodes may tamper with the algorithm execution process or submit false learning results, which directly affects the performance of...
Main Authors: | Zhang, Bingxue; Lu, Guangguang; Qiu, Pengpeng; Gui, Xumin; Shi, Yang |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10670442/ https://www.ncbi.nlm.nih.gov/pubmed/37998241 http://dx.doi.org/10.3390/e25111550 |
_version_ | 1785149312522518528 |
---|---|
author | Zhang, Bingxue Lu, Guangguang Qiu, Pengpeng Gui, Xumin Shi, Yang |
author_facet | Zhang, Bingxue Lu, Guangguang Qiu, Pengpeng Gui, Xumin Shi, Yang |
author_sort | Zhang, Bingxue |
collection | PubMed |
description | Federated learning, as one of the three main technical routes for privacy computing, has been widely studied and applied in both academia and industry. However, malicious nodes may tamper with the algorithm execution process or submit false learning results, which directly affects the performance of federated learning. In addition, learning nodes can easily obtain the global model. In practical applications, we would like the federated learning results to be obtained only by the demand side. Unfortunately, no discussion on protecting the privacy of the global model is found in the existing research. As emerging cryptographic tools, the zero-knowledge virtual machine (ZKVM) and homomorphic encryption provide new ideas for the design of federated learning frameworks. We introduce the ZKVM for the first time, casting learning nodes as local computing provers. This provides execution integrity proofs for multi-class machine learning algorithms. Meanwhile, we discuss how to generate verifiable proofs for large-scale machine learning tasks under resource constraints. In addition, we implement a fully homomorphic encryption (FHE) scheme in the ZKVM. We encrypt the model weights so that the federated learning nodes always collaborate in the ciphertext space. The real results can be obtained only after the demand side decrypts them using the private key. The innovations of this paper are as follows: 1. We introduce the ZKVM for the first time, which achieves zero-knowledge proofs (ZKPs) for machine learning tasks of multiple classes and arbitrary scales. 2. We encrypt the global model, which protects model privacy during local computation and transmission. 3. We propose and implement a new federated learning framework. We measure the verification costs under different numbers of federated learning rounds on the IRIS dataset. Despite the impact of homomorphic encryption on computational accuracy, the framework proposed in this paper achieves a satisfactory 90% model accuracy. Our framework is highly secure, and its overall efficiency is expected to improve further as cryptographic tools continue to evolve. |
format | Online Article Text |
id | pubmed-10670442 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10670442 2023-11-16 Advancing Federated Learning through Verifiable Computations and Homomorphic Encryption Zhang, Bingxue Lu, Guangguang Qiu, Pengpeng Gui, Xumin Shi, Yang Entropy (Basel) Article Federated learning, as one of the three main technical routes for privacy computing, has been widely studied and applied in both academia and industry. However, malicious nodes may tamper with the algorithm execution process or submit false learning results, which directly affects the performance of federated learning. In addition, learning nodes can easily obtain the global model. In practical applications, we would like the federated learning results to be obtained only by the demand side. Unfortunately, no discussion on protecting the privacy of the global model is found in the existing research. As emerging cryptographic tools, the zero-knowledge virtual machine (ZKVM) and homomorphic encryption provide new ideas for the design of federated learning frameworks. We introduce the ZKVM for the first time, casting learning nodes as local computing provers. This provides execution integrity proofs for multi-class machine learning algorithms. Meanwhile, we discuss how to generate verifiable proofs for large-scale machine learning tasks under resource constraints. In addition, we implement a fully homomorphic encryption (FHE) scheme in the ZKVM. We encrypt the model weights so that the federated learning nodes always collaborate in the ciphertext space. The real results can be obtained only after the demand side decrypts them using the private key. The innovations of this paper are as follows: 1. We introduce the ZKVM for the first time, which achieves zero-knowledge proofs (ZKPs) for machine learning tasks of multiple classes and arbitrary scales. 2. We encrypt the global model, which protects model privacy during local computation and transmission. 3. We propose and implement a new federated learning framework. We measure the verification costs under different numbers of federated learning rounds on the IRIS dataset. Despite the impact of homomorphic encryption on computational accuracy, the framework proposed in this paper achieves a satisfactory 90% model accuracy. Our framework is highly secure, and its overall efficiency is expected to improve further as cryptographic tools continue to evolve. MDPI 2023-11-16 /pmc/articles/PMC10670442/ /pubmed/37998241 http://dx.doi.org/10.3390/e25111550 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Zhang, Bingxue Lu, Guangguang Qiu, Pengpeng Gui, Xumin Shi, Yang Advancing Federated Learning through Verifiable Computations and Homomorphic Encryption |
title | Advancing Federated Learning through Verifiable Computations and Homomorphic Encryption |
title_full | Advancing Federated Learning through Verifiable Computations and Homomorphic Encryption |
title_fullStr | Advancing Federated Learning through Verifiable Computations and Homomorphic Encryption |
title_full_unstemmed | Advancing Federated Learning through Verifiable Computations and Homomorphic Encryption |
title_short | Advancing Federated Learning through Verifiable Computations and Homomorphic Encryption |
title_sort | advancing federated learning through verifiable computations and homomorphic encryption |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10670442/ https://www.ncbi.nlm.nih.gov/pubmed/37998241 http://dx.doi.org/10.3390/e25111550 |
work_keys_str_mv | AT zhangbingxue advancingfederatedlearningthroughverifiablecomputationsandhomomorphicencryption AT luguangguang advancingfederatedlearningthroughverifiablecomputationsandhomomorphicencryption AT qiupengpeng advancingfederatedlearningthroughverifiablecomputationsandhomomorphicencryption AT guixumin advancingfederatedlearningthroughverifiablecomputationsandhomomorphicencryption AT shiyang advancingfederatedlearningthroughverifiablecomputationsandhomomorphicencryption |
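The description field above outlines how the proposed framework operates: each learning node encrypts its local model weights, aggregation happens entirely in the ciphertext space, and only the demand side, which holds the private key, can decrypt the global model. A minimal sketch of that ciphertext-space aggregation follows. It is an illustration only: it uses the Paillier additively homomorphic scheme from the `phe` Python package as a stand-in for the paper's FHE scheme, omits the ZKVM execution-integrity proofs entirely, and all variable names and weight values are hypothetical.

```python
# Illustrative sketch only: additively homomorphic aggregation of model weights,
# in the spirit of the framework described in the abstract. Paillier (via the
# `phe` package, `pip install phe`) stands in for the paper's FHE scheme.
from phe import paillier

NUM_NODES = 3
WEIGHTS_PER_NODE = [          # pretend local model weights from 3 learning nodes
    [0.12, -0.40, 0.33],
    [0.10, -0.35, 0.30],
    [0.15, -0.42, 0.29],
]

# 1. The demand side generates the keypair and publishes only the public key.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# 2. Each learning node encrypts its local weights; nodes exchange only ciphertexts.
encrypted_updates = [
    [public_key.encrypt(w) for w in weights] for weights in WEIGHTS_PER_NODE
]

# 3. Aggregation stays in the ciphertext space: ciphertext addition and
#    multiplication by a plaintext scalar are homomorphic operations.
aggregated = []
for i in range(len(encrypted_updates[0])):
    total = encrypted_updates[0][i]
    for update in encrypted_updates[1:]:
        total = total + update[i]
    aggregated.append(total * (1.0 / NUM_NODES))   # encrypted average weight

# 4. Only the demand side, holding the private key, can recover the global model.
global_weights = [private_key.decrypt(c) for c in aggregated]
print(global_weights)   # approximately [0.1233, -0.39, 0.3067]
```

The design point this illustrates is the one highlighted in the abstract: because only the demand side ever holds the private key, intermediate nodes can collaborate on, but never read, the global model.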