
Algorithms are not neutral: Bias in collaborative filtering

When Artificial Intelligence (AI) is applied in decision-making that affects people’s lives, it is now well established that the outcomes can be biased or discriminatory. The question of whether algorithms themselves can be among the sources of bias has been the subject of recent debate among Artificial Intelligence researchers and scholars who study the social impact of technology. There has been a tendency to focus on examples where the data set used to train the AI is biased, and a denial on the part of some researchers that algorithms can also be biased. Here we illustrate the point that algorithms themselves can be the source of bias with the example of collaborative filtering algorithms for recommendation and search. These algorithms are known to suffer from cold-start, popularity, and homogenizing biases, among others. While these are typically described as statistical biases rather than biases of moral import, in this paper we show that these statistical biases can lead directly to discriminatory outcomes. The intuitive idea is that data points on the margins of distributions of human data tend to correspond to marginalized people. The statistical biases described here have the effect of further marginalizing the already marginal. Biased algorithms for applications such as media recommendations can have a significant impact on individuals’ and communities’ access to information and culturally relevant resources. This source of bias warrants serious attention given the ubiquity of algorithmic decision-making.
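The biases named in the abstract are easy to reproduce in miniature. Below is a minimal sketch, not taken from the paper: a toy user-based collaborative filter over a hypothetical interaction matrix (the data, function names, and the cutoff k are assumptions made for this illustration), showing how an item with few interactions is starved of recommendations while popular items accumulate them.

import numpy as np

# Hypothetical interaction matrix: rows are users, columns are items;
# 1 means the user interacted with the item. Item 5 has only a single
# interaction, and user 5 is the one user with atypical ("marginal") taste.
ratings = np.array([
    [1, 1, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [1, 1, 0, 0, 1, 0],
    [0, 0, 0, 0, 1, 1],
], dtype=float)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return (a @ b) / denom if denom else 0.0

def recommend(user, ratings, k=2):
    # Rank the items this user has not seen by similarity-weighted votes
    # from all other users (a classic user-based collaborative filter).
    sims = np.array([cosine(ratings[user], ratings[v]) if v != user else 0.0
                     for v in range(len(ratings))])
    scores = sims @ ratings              # popular items accumulate more votes
    scores[ratings[user] > 0] = -np.inf  # never re-recommend seen items
    return np.argsort(scores)[::-1][:k]

for u in range(len(ratings)):
    print(f"user {u}: top-2 recommendations = {recommend(u, ratings).tolist()}")

In this toy run, item 5 (the niche item) never reaches anyone’s top-2 list because it has too few interactions to gather similarity-weighted votes, and user 5, whose taste sits on the margin of the distribution, is steered toward the two most popular mainstream items. This is the cold-start, popularity, and homogenizing pattern the abstract describes, arising from the algorithm itself rather than from a biased training set.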


Bibliographic Details
Main Author: Stinson, Catherine
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8802245/
https://www.ncbi.nlm.nih.gov/pubmed/35128540
http://dx.doi.org/10.1007/s43681-022-00136-w
Source: AI Ethics (Original Research), Springer International Publishing; published online 31 January 2022.
Rights: © The Author(s), under exclusive licence to Springer Nature Switzerland AG 2022. This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.