Confidential machine learning on untrusted platforms: a survey
With the ever-growing data and the need for developing powerful machine learning models, data owners increasingly depend on various untrusted platforms (e.g., public clouds, edges, and machine learning service providers) for scalable processing or collaborative learning. Thus, sensitive data and mod...
Main Authors: | Sagar, Sharma; Keke, Chen |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer Singapore, 2021 |
Subjects: | Survey |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8591683/ https://www.ncbi.nlm.nih.gov/pubmed/34805760 http://dx.doi.org/10.1186/s42400-021-00092-8 |
_version_ | 1784599304834383872 |
---|---|
author | Sagar, Sharma; Keke, Chen |
author_facet | Sagar, Sharma; Keke, Chen |
author_sort | Sagar, Sharma |
collection | PubMed |
description | With the ever-growing data and the need for developing powerful machine learning models, data owners increasingly depend on various untrusted platforms (e.g., public clouds, edges, and machine learning service providers) for scalable processing or collaborative learning. Thus, sensitive data and models are in danger of unauthorized access, misuse, and privacy compromises. A relatively new body of research confidentially trains machine learning models on protected data to address these concerns. In this survey, we summarize notable studies in this emerging area of research. With a unified framework, we highlight the critical challenges and innovations in outsourcing machine learning confidentially. We focus on the cryptographic approaches for confidential machine learning (CML), primarily on model training, while also covering other directions such as perturbation-based approaches and CML in the hardware-assisted computing environment. The discussion will take a holistic way to consider a rich context of the related threat models, security assumptions, design principles, and associated trade-offs amongst data utility, cost, and confidentiality. |
format | Online Article Text |
id | pubmed-8591683 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Springer Singapore |
record_format | MEDLINE/PubMed |
spelling | pubmed-8591683 2021-11-19 Confidential machine learning on untrusted platforms: a survey Sagar, Sharma; Keke, Chen Cybersecur (Singap) Survey With the ever-growing data and the need for developing powerful machine learning models, data owners increasingly depend on various untrusted platforms (e.g., public clouds, edges, and machine learning service providers) for scalable processing or collaborative learning. Thus, sensitive data and models are in danger of unauthorized access, misuse, and privacy compromises. A relatively new body of research confidentially trains machine learning models on protected data to address these concerns. In this survey, we summarize notable studies in this emerging area of research. With a unified framework, we highlight the critical challenges and innovations in outsourcing machine learning confidentially. We focus on the cryptographic approaches for confidential machine learning (CML), primarily on model training, while also covering other directions such as perturbation-based approaches and CML in the hardware-assisted computing environment. The discussion will take a holistic way to consider a rich context of the related threat models, security assumptions, design principles, and associated trade-offs amongst data utility, cost, and confidentiality. Springer Singapore 2021-09-01 2021 /pmc/articles/PMC8591683/ /pubmed/34805760 http://dx.doi.org/10.1186/s42400-021-00092-8 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Survey; Sagar, Sharma; Keke, Chen; Confidential machine learning on untrusted platforms: a survey
title | Confidential machine learning on untrusted platforms: a survey |
title_full | Confidential machine learning on untrusted platforms: a survey |
title_fullStr | Confidential machine learning on untrusted platforms: a survey |
title_full_unstemmed | Confidential machine learning on untrusted platforms: a survey |
title_short | Confidential machine learning on untrusted platforms: a survey |
title_sort | confidential machine learning on untrusted platforms: a survey |
topic | Survey |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8591683/ https://www.ncbi.nlm.nih.gov/pubmed/34805760 http://dx.doi.org/10.1186/s42400-021-00092-8 |
work_keys_str_mv | AT sagarsharma confidentialmachinelearningonuntrustedplatformsasurvey AT kekechen confidentialmachinelearningonuntrustedplatformsasurvey |
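
The abstract above mentions perturbation-based approaches as one direction for confidential machine learning, alongside cryptographic and hardware-assisted techniques. As a purely illustrative sketch, not drawn from the surveyed paper, the snippet below shows a Gaussian-mechanism gradient perturbation step for logistic regression in plain NumPy; the function name, clipping bound, and noise scale are assumptions chosen for the example, not values taken from the survey.

```python
# Illustrative sketch of perturbation-based confidential training:
# per-example gradient clipping plus calibrated Gaussian noise.
# All names and parameters here are assumptions for the example.
import numpy as np

def noisy_gradient_step(w, X, y, lr=0.1, clip=1.0, noise_sigma=1.0, rng=None):
    """One logistic-regression update with clipped, noised gradients."""
    rng = np.random.default_rng() if rng is None else rng
    preds = 1.0 / (1.0 + np.exp(-X @ w))           # sigmoid predictions
    per_example = (preds - y)[:, None] * X         # per-example gradients
    norms = np.linalg.norm(per_example, axis=1, keepdims=True)
    clipped = per_example / np.maximum(1.0, norms / clip)  # L2 norm <= clip
    grad = clipped.mean(axis=0)
    # Noise std scaled to the clipping bound and batch size (mean of sum).
    grad += rng.normal(0.0, noise_sigma * clip / len(X), size=grad.shape)
    return w - lr * grad

# Toy usage on synthetic data
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
y = (X @ np.ones(5) > 0).astype(float)
w = np.zeros(5)
for _ in range(100):
    w = noisy_gradient_step(w, X, y, rng=rng)
```

The clipping bound limits any single record's influence on the update, and the added noise masks what remains; stronger noise improves confidentiality at the cost of model utility, which is the trade-off the abstract highlights.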