Human intuition as a defense against attribute inference
Attribute inference—the process of analyzing publicly available data in order to uncover hidden information—has become a major threat to privacy, given the recent technological leap in machine learning. One way to tackle this threat is to strategically modify one’s publicly available data in order to keep one’s private information hidden from attribute inference. We evaluate people’s ability to perform this task, and compare it against algorithms designed for this purpose. We focus on three attributes: the gender of the author of a piece of text, the country in which a set of photos was taken, and the link missing from a social network. For each of these attributes, we find that people’s effectiveness is inferior to that of AI, especially when it comes to hiding the attribute in question. Moreover, when people are asked to modify the publicly available information in order to hide these attributes, they are less likely to make high-impact modifications compared to AI. This suggests that people are unable to recognize the aspects of the data that are critical to an inference algorithm. Taken together, our findings highlight the limitations of relying on human intuition to protect privacy in the age of AI, and emphasize the need for algorithmic support to protect private information from attribute inference.
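The abstract mentions inferring "the link missing from a social network" as one of the three attributes studied. As a purely illustrative sketch of what such attribute inference can look like—not the paper's actual method—the following Python snippet ranks hidden candidate links in a hypothetical friendship graph using the common-neighbours heuristic; the graph, node names, and helper function are all assumptions made for this example.

```python
# Illustrative sketch (not the paper's method): attribute inference on a
# social network, where an adversary tries to recover a hidden link by
# scoring candidate pairs with the common-neighbours heuristic.
# The graph and node names below are hypothetical.
from itertools import combinations

# Adjacency sets of a small, publicly visible friendship network.
graph = {
    "alice": {"bob", "carol", "dave"},
    "bob":   {"alice", "carol"},
    "carol": {"alice", "bob", "dave"},
    "dave":  {"alice", "carol", "erin"},
    "erin":  {"dave"},
}

def common_neighbours(u, v):
    """Score a candidate (hidden) link by the number of shared neighbours."""
    return len(graph[u] & graph[v])

# Rank every non-existing pair: the highest-scoring pair is the adversary's
# best guess for the link the users tried to keep hidden.
candidates = [(u, v) for u, v in combinations(graph, 2) if v not in graph[u]]
ranked = sorted(candidates, key=lambda pair: common_neighbours(*pair), reverse=True)

for u, v in ranked:
    print(f"{u} -- {v}: {common_neighbours(u, v)} common neighbours")
```

In this toy graph the pair with the most shared neighbours (bob and dave) would be the adversary's top guess; a defender following the paper's premise would need to modify the visible network in a way that lowers exactly such high-impact signals.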
Main authors: | Waniek, Marcin; Suri, Navya; Zameek, Abdullah; AlShebli, Bedoor; Rahwan, Talal
---|---
Format: | Online Article Text
Language: | English
Published: | Nature Publishing Group UK, 2023
Subjects: | Article
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10522675/ https://www.ncbi.nlm.nih.gov/pubmed/37752210 http://dx.doi.org/10.1038/s41598-023-43062-5
_version_ | 1785110403117744128 |
---|---|
author | Waniek, Marcin; Suri, Navya; Zameek, Abdullah; AlShebli, Bedoor; Rahwan, Talal
author_sort | Waniek, Marcin |
collection | PubMed |
description | Attribute inference—the process of analyzing publicly available data in order to uncover hidden information—has become a major threat to privacy, given the recent technological leap in machine learning. One way to tackle this threat is to strategically modify one’s publicly available data in order to keep one’s private information hidden from attribute inference. We evaluate people’s ability to perform this task, and compare it against algorithms designed for this purpose. We focus on three attributes: the gender of the author of a piece of text, the country in which a set of photos was taken, and the link missing from a social network. For each of these attributes, we find that people’s effectiveness is inferior to that of AI, especially when it comes to hiding the attribute in question. Moreover, when people are asked to modify the publicly available information in order to hide these attributes, they are less likely to make high-impact modifications compared to AI. This suggests that people are unable to recognize the aspects of the data that are critical to an inference algorithm. Taken together, our findings highlight the limitations of relying on human intuition to protect privacy in the age of AI, and emphasize the need for algorithmic support to protect private information from attribute inference. |
format | Online Article Text |
id | pubmed-10522675 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-10522675 2023-09-28 Human intuition as a defense against attribute inference. Waniek, Marcin; Suri, Navya; Zameek, Abdullah; AlShebli, Bedoor; Rahwan, Talal. Sci Rep, Article. Nature Publishing Group UK, 2023-09-26. /pmc/articles/PMC10522675/ /pubmed/37752210 http://dx.doi.org/10.1038/s41598-023-43062-5 Text en. © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given and any changes are indicated.
title | Human intuition as a defense against attribute inference |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10522675/ https://www.ncbi.nlm.nih.gov/pubmed/37752210 http://dx.doi.org/10.1038/s41598-023-43062-5 |