
Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings

Among the myriad of technical approaches and abstract guidelines proposed on the topic of AI bias, there has been an urgent call to translate the principle of fairness into the operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias...


Bibliographic Details
Main Authors: Curto, Georgina, Jojoa Acosta, Mario Fernando, Comim, Flavio, Garcia-Zapirain, Begoña
Format: Online Article Text
Language: English
Published: Springer London 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9243923/
https://www.ncbi.nlm.nih.gov/pubmed/35789618
http://dx.doi.org/10.1007/s00146-022-01494-z
_version_ 1784738416172204032
author Curto, Georgina
Jojoa Acosta, Mario Fernando
Comim, Flavio
Garcia-Zapirain, Begoña
author_facet Curto, Georgina
Jojoa Acosta, Mario Fernando
Comim, Flavio
Garcia-Zapirain, Begoña
author_sort Curto, Georgina
collection PubMed
description Among the myriad of technical approaches and abstract guidelines proposed on the topic of AI bias, there has been an urgent call to translate the principle of fairness into the operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since there is no generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual framework of the issue and a tailor-made model from which meaningful data are obtained using Natural Language Processing word vectors in pretrained Google Word2Vec, Twitter and Wikipedia GloVe word embeddings. The results of the study offer the first set of data that evidences the existence of bias against the poor and suggest that Google Word2Vec shows a higher degree of bias when the terms are related to beliefs, whereas bias is higher in Twitter GloVe when the terms express behaviour. This article contributes to the body of work on bias, both from an AI and a social sciences perspective, by providing evidence of a transversal aggravating factor for historical types of discrimination. The evidence of bias against the poor also has important consequences in terms of human development, since it often leads to discrimination, which constitutes an obstacle to the effectiveness of poverty reduction policies.
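
The abstract describes the measurement approach only at a high level. The sketch below is a minimal, hypothetical illustration (not the authors' tailor-made model) of how associations between poverty-related terms and attribute words can be probed in pretrained embeddings via cosine similarity; it assumes the gensim library, and the model names, target terms and attribute lists are illustrative placeholders rather than the study's actual lexicon.

import gensim.downloader as api

# Pretrained embeddings comparable to those named in the abstract (assumed gensim model names).
model = api.load("word2vec-google-news-300")      # Google News Word2Vec
# model = api.load("glove-twitter-200")           # Twitter GloVe
# model = api.load("glove-wiki-gigaword-300")     # Wikipedia GloVe

target_terms = ["poor", "poverty", "homeless"]          # hypothetical target terms
pleasant = ["honest", "hardworking", "trustworthy"]     # hypothetical attribute words
unpleasant = ["lazy", "dishonest", "criminal"]          # hypothetical attribute words

def mean_similarity(kv, word, attributes):
    # Average cosine similarity between one word and the attribute words present in the vocabulary.
    present = [a for a in attributes if a in kv]
    return sum(kv.similarity(word, a) for a in present) / len(present)

for w in target_terms:
    if w not in model:
        continue
    gap = mean_similarity(model, w, pleasant) - mean_similarity(model, w, unpleasant)
    # A negative gap means the term sits closer to the unpleasant attributes than to the pleasant ones.
    print(f"{w}: association gap = {gap:+.3f}")
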
format Online
Article
Text
id pubmed-9243923
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Springer London
record_format MEDLINE/PubMed
spelling pubmed-9243923 2022-06-30 Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings Curto, Georgina Jojoa Acosta, Mario Fernando Comim, Flavio Garcia-Zapirain, Begoña AI Soc Original Paper Among the myriad of technical approaches and abstract guidelines proposed on the topic of AI bias, there has been an urgent call to translate the principle of fairness into the operational AI reality with the involvement of social sciences specialists to analyse the context of specific types of bias, since there is no generalizable solution. This article offers an interdisciplinary contribution to the topic of AI and societal bias, in particular against the poor, providing a conceptual framework of the issue and a tailor-made model from which meaningful data are obtained using Natural Language Processing word vectors in pretrained Google Word2Vec, Twitter and Wikipedia GloVe word embeddings. The results of the study offer the first set of data that evidences the existence of bias against the poor and suggest that Google Word2Vec shows a higher degree of bias when the terms are related to beliefs, whereas bias is higher in Twitter GloVe when the terms express behaviour. This article contributes to the body of work on bias, both from an AI and a social sciences perspective, by providing evidence of a transversal aggravating factor for historical types of discrimination. The evidence of bias against the poor also has important consequences in terms of human development, since it often leads to discrimination, which constitutes an obstacle to the effectiveness of poverty reduction policies. Springer London 2022-06-28 /pmc/articles/PMC9243923/ /pubmed/35789618 http://dx.doi.org/10.1007/s00146-022-01494-z Text en © The Author(s) 2022, corrected publication 2022 https://creativecommons.org/licenses/by/4.0/ Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/).
spellingShingle Original Paper
Curto, Georgina
Jojoa Acosta, Mario Fernando
Comim, Flavio
Garcia-Zapirain, Begoña
Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings
title Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings
title_full Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings
title_fullStr Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings
title_full_unstemmed Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings
title_short Are AI systems biased against the poor? A machine learning analysis using Word2Vec and GloVe embeddings
title_sort are ai systems biased against the poor? a machine learning analysis using word2vec and glove embeddings
topic Original Paper
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9243923/
https://www.ncbi.nlm.nih.gov/pubmed/35789618
http://dx.doi.org/10.1007/s00146-022-01494-z
work_keys_str_mv AT curtogeorgina areaisystemsbiasedagainstthepooramachinelearninganalysisusingword2vecandgloveembeddings
AT jojoaacostamariofernando areaisystemsbiasedagainstthepooramachinelearninganalysisusingword2vecandgloveembeddings
AT comimflavio areaisystemsbiasedagainstthepooramachinelearninganalysisusingword2vecandgloveembeddings
AT garciazapirainbegona areaisystemsbiasedagainstthepooramachinelearninganalysisusingword2vecandgloveembeddings