Hiding opinions from machine learning
Main Authors: | Waniek, Marcin; Magdy, Walid; Rahwan, Talal
---|---
Format: | Online Article Text
Language: | English
Published: | Oxford University Press, 2022
Subjects: | Physical Sciences and Engineering
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9802261/ https://www.ncbi.nlm.nih.gov/pubmed/36712321 http://dx.doi.org/10.1093/pnasnexus/pgac256
Field | Value
---|---
_version_ | 1784861646499348480 |
author | Waniek, Marcin; Magdy, Walid; Rahwan, Talal |
author_facet | Waniek, Marcin; Magdy, Walid; Rahwan, Talal |
author_sort | Waniek, Marcin |
collection | PubMed |
description | Recent breakthroughs in machine learning and big data analysis are allowing our online activities to be scrutinized at an unprecedented scale, and our private information to be inferred without our consent or knowledge. Here, we focus on algorithms designed to infer the opinions of Twitter users toward a growing number of topics, and consider the possibility of modifying the profiles of these users in the hope of hiding their opinions from such algorithms. We ran a survey to understand the extent of this privacy threat, and found evidence suggesting that a significant proportion of Twitter users wish to avoid revealing at least some of their opinions about social, political, and religious issues. Moreover, our participants were unable to reliably identify the Twitter activities that reveal one’s opinion to such algorithms. Given these findings, we consider the possibility of fighting AI with AI, i.e., instead of relying on human intuition, people may have a better chance at hiding their opinion if they modify their Twitter profiles following advice from an automated assistant. We propose a heuristic that identifies which Twitter accounts the users should follow or mention in their tweets, and show that such a heuristic can effectively hide the user’s opinions. Altogether, our study highlights the risk associated with developing machine learning algorithms that analyze people’s profiles, and demonstrates the potential to develop countermeasures that preserve the basic right of choosing which of our opinions to share with the world. |
format | Online Article Text |
id | pubmed-9802261 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Oxford University Press |
record_format | MEDLINE/PubMed |
spelling | pubmed-9802261 2023-01-26. Hiding opinions from machine learning. Waniek, Marcin; Magdy, Walid; Rahwan, Talal. PNAS Nexus, Physical Sciences and Engineering. [Abstract as in the description field above.] Oxford University Press 2022-11-16 /pmc/articles/PMC9802261/ /pubmed/36712321 http://dx.doi.org/10.1093/pnasnexus/pgac256 Text en © The Author(s) 2022. Published by Oxford University Press on behalf of National Academy of Sciences. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Physical Sciences and Engineering; Waniek, Marcin; Magdy, Walid; Rahwan, Talal; Hiding opinions from machine learning |
title | Hiding opinions from machine learning |
title_full | Hiding opinions from machine learning |
title_fullStr | Hiding opinions from machine learning |
title_full_unstemmed | Hiding opinions from machine learning |
title_short | Hiding opinions from machine learning |
title_sort | hiding opinions from machine learning |
topic | Physical Sciences and Engineering |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9802261/ https://www.ncbi.nlm.nih.gov/pubmed/36712321 http://dx.doi.org/10.1093/pnasnexus/pgac256 |
work_keys_str_mv | AT waniekmarcin hidingopinionsfrommachinelearning AT magdywalid hidingopinionsfrommachinelearning AT rahwantalal hidingopinionsfrommachinelearning |
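
The description field above mentions, at a high level, a heuristic that recommends which accounts a user should follow or mention in order to hide their opinion from inference algorithms. This record does not reproduce that method; purely as an illustration of the general "fighting AI with AI" idea, a greedy perturbation loop against a toy stance classifier might look like the sketch below. All data, features, and the classifier here are synthetic assumptions, not the authors' implementation.

```python
# Purely illustrative: a greedy "fight AI with AI" sketch in the spirit of
# the heuristic the abstract describes. Everything here (data, features,
# classifier) is synthetic; it is NOT the paper's actual method.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_accounts = 50  # candidate accounts a user could follow or mention

# Synthetic training set: rows are users, columns are binary
# follow/mention indicators, labels are an opinion correlated with them.
X = rng.integers(0, 2, size=(500, n_accounts))
w_true = rng.normal(size=n_accounts)
y = (X @ w_true + rng.normal(scale=0.5, size=500) > 0).astype(int)

clf = LogisticRegression(max_iter=1000).fit(X, y)

def hide_opinion(profile, true_label, budget=5):
    """Greedily add follows/mentions that most reduce the classifier's
    confidence in the user's true opinion."""
    profile = profile.copy()
    for _ in range(budget):
        base = clf.predict_proba(profile[None])[0, true_label]
        best_gain, best_j = 0.0, None
        for j in np.flatnonzero(profile == 0):  # accounts not yet used
            trial = profile.copy()
            trial[j] = 1
            gain = base - clf.predict_proba(trial[None])[0, true_label]
            if gain > best_gain:
                best_gain, best_j = gain, j
        if best_j is None:  # no single addition lowers confidence further
            break
        profile[best_j] = 1
    return profile

user, label = X[0], y[0]
print("confidence before:", clf.predict_proba(user[None])[0, label])
hidden = hide_opinion(user, label)
print("confidence after: ", clf.predict_proba(hidden[None])[0, label])
```

The greedy choice here (repeatedly adding the single follow or mention that most lowers the classifier's confidence, up to a budget) is one natural reading of an "automated assistant" suggesting profile changes; the paper's actual heuristic, feature set, and target classifier may differ.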