
Poster Abstract: Protecting User Data Privacy with Adversarial Perturbations

The increased availability of on-body sensors gives researchers access to rich time-series data, many of which are related to human health conditions. Sharing such data can allow cross-institutional collaborations that create advanced data-driven models to make inferences on human well-being. However, such data are usually considered privacy-sensitive, and publicly sharing this data may incur significant privacy concerns. In this work, we seek to protect clinical time-series data against membership inference attacks, while maximally retaining the data utility. We achieve this by adding an imperceptible noise to the raw data. Known as adversarial perturbations, the noise is specially trained to force a deep learning model to make inference mistakes (in our case, mispredicting user identities). Our preliminary results show that our solution can better protect the data from membership inference attacks than the baselines, while succeeding in all the designed data quality checks.


Bibliographic Details
Main Authors: Wang, Ziqi, Wang, Brian, Srivastava, Mani
Format: Online Article Text
Language: English
Published: 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8513393/
https://www.ncbi.nlm.nih.gov/pubmed/34651144
http://dx.doi.org/10.1145/3412382.3458776
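
The perturbation technique the abstract describes can be illustrated with a minimal sketch. The poster does not disclose its exact attack, so the FGSM-style gradient-sign step, the toy logistic-regression "identity classifier," and the step size `eps` below are all assumptions chosen for illustration, not the authors' implementation:

```python
import numpy as np

def sigmoid(z):
    """Logistic function, used here as a stand-in identity classifier."""
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps=0.6):
    """FGSM-style perturbation (assumed technique, not the paper's exact method):
    take one step of size eps along the sign of the loss gradient w.r.t. x,
    pushing the classifier toward mispredicting the identity label y."""
    p = sigmoid(np.dot(w, x) + b)   # predicted probability of identity y=1
    grad_x = (p - y) * w            # d(cross-entropy)/dx for a logistic model
    return x + eps * np.sign(grad_x)

# Toy example: a 2-feature sample correctly classified as identity y=1.
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])
y = 1.0
x_adv = fgsm_perturb(x, w, b, y, eps=0.6)

print(sigmoid(np.dot(w, x) + b) > 0.5)      # True: original sample is identified
print(sigmoid(np.dot(w, x_adv) + b) > 0.5)  # False: perturbed sample is misidentified
```

Note that the per-feature change is bounded by `eps`, mirroring the abstract's claim that the noise can be kept small; the paper additionally validates utility with data quality checks, which this sketch does not model.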
author Wang, Ziqi
Wang, Brian
Srivastava, Mani
collection PubMed
description The increased availability of on-body sensors gives researchers access to rich time-series data, many of which are related to human health conditions. Sharing such data can allow cross-institutional collaborations that create advanced data-driven models to make inferences on human well-being. However, such data are usually considered privacy-sensitive, and publicly sharing this data may incur significant privacy concerns. In this work, we seek to protect clinical time-series data against membership inference attacks, while maximally retaining the data utility. We achieve this by adding an imperceptible noise to the raw data. Known as adversarial perturbations, the noise is specially trained to force a deep learning model to make inference mistakes (in our case, mispredicting user identities). Our preliminary results show that our solution can better protect the data from membership inference attacks than the baselines, while succeeding in all the designed data quality checks.
format Online
Article
Text
id pubmed-8513393
institution National Center for Biotechnology Information
language English
publishDate 2021
record_format MEDLINE/PubMed
spelling pubmed-8513393 2021-10-13 Poster Abstract: Protecting User Data Privacy with Adversarial Perturbations. Wang, Ziqi; Wang, Brian; Srivastava, Mani. IPSN Article. 2021-05 /pmc/articles/PMC8513393/ /pubmed/34651144 http://dx.doi.org/10.1145/3412382.3458776 Text en. This work is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
title Poster Abstract: Protecting User Data Privacy with Adversarial Perturbations
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8513393/
https://www.ncbi.nlm.nih.gov/pubmed/34651144
http://dx.doi.org/10.1145/3412382.3458776