Algorithmic Political Bias in Artificial Intelligence Systems
| Main Author: | Peters, Uwe |
| --- | --- |
| Format: | Online Article Text |
| Language: | English |
| Published: | Springer Netherlands, 2022 |
| Subjects: | Research Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8967082/ https://www.ncbi.nlm.nih.gov/pubmed/35378902 http://dx.doi.org/10.1007/s13347-022-00512-8 |
| Field | Value |
| --- | --- |
| _version_ | 1784678762934173696 |
author | Peters, Uwe |
author_facet | Peters, Uwe |
author_sort | Peters, Uwe |
collection | PubMed |
description | Some artificial intelligence (AI) systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are (in a democratic society) strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases even though they all can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine. |
format | Online Article Text |
id | pubmed-8967082 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer Netherlands |
record_format | MEDLINE/PubMed |
spelling | pubmed-8967082 2022-03-31 Algorithmic Political Bias in Artificial Intelligence Systems Peters, Uwe Philos Technol Research Article Some artificial intelligence (AI) systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are (in a democratic society) strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases even though they all can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine. Springer Netherlands 2022-03-30 2022 /pmc/articles/PMC8967082/ /pubmed/35378902 http://dx.doi.org/10.1007/s13347-022-00512-8 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Research Article Peters, Uwe Algorithmic Political Bias in Artificial Intelligence Systems |
title | Algorithmic Political Bias in Artificial Intelligence Systems |
title_full | Algorithmic Political Bias in Artificial Intelligence Systems |
title_fullStr | Algorithmic Political Bias in Artificial Intelligence Systems |
title_full_unstemmed | Algorithmic Political Bias in Artificial Intelligence Systems |
title_short | Algorithmic Political Bias in Artificial Intelligence Systems |
title_sort | algorithmic political bias in artificial intelligence systems |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8967082/ https://www.ncbi.nlm.nih.gov/pubmed/35378902 http://dx.doi.org/10.1007/s13347-022-00512-8 |
work_keys_str_mv | AT petersuwe algorithmicpoliticalbiasinartificialintelligencesystems |