Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types
Concerns about gender bias in word embedding models have captured substantial attention in the algorithmic bias research literature. Other bias types, however, have received far less scrutiny. This work describes a large-scale analysis of sentiment associations in popular word embedding model...
Main Author: | Rozado, David |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science 2020 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7173861/ https://www.ncbi.nlm.nih.gov/pubmed/32315320 http://dx.doi.org/10.1371/journal.pone.0231189 |
_version_ | 1783524529438457856 |
---|---|
author | Rozado, David |
author_facet | Rozado, David |
author_sort | Rozado, David |
collection | PubMed |
description | Concerns about gender bias in word embedding models have captured substantial attention in the algorithmic bias research literature. Other bias types, however, have received far less scrutiny. This work describes a large-scale analysis of sentiment associations in popular word embedding models along the dimensions of gender and ethnicity, but also along the less frequently studied dimensions of socioeconomic status, age, physical appearance, sexual orientation, religious sentiment and political leanings. Consistent with previous scholarly literature, this work finds systemic bias against given names popular among African-Americans in most embedding models examined. Gender bias in embedding models, however, appears to be multifaceted and often reversed in polarity relative to what has been commonly reported. Interestingly, using the common operationalization of the term bias in the fairness literature, several previously unreported bias types in word embedding models have also been identified. Specifically, the popular embedding models analyzed here display negative biases against middle- and working-class socioeconomic status, male children, senior citizens, plain physical appearance, and intellectual phenomena such as Islamic religious faith, non-religiosity and conservative political orientation. The reasons for the paradoxical underreporting of these bias types in the relevant literature are probably manifold, but widely held blind spots when searching for algorithmic bias and the lack of a widespread technical vocabulary to unambiguously describe the variety of algorithmic associations could conceivably play a role. The causal origins of the multiplicity of loaded associations attached to distinct demographic groups within embedding models are often unclear, but the heterogeneity of these associations and their potentially multifactorial roots raise doubts about the validity of grouping them all under the umbrella term bias. Richer and more fine-grained terminology, as well as a more comprehensive exploration of the bias landscape, could help the fairness epistemic community characterize and neutralize algorithmic discrimination more efficiently. |
format | Online Article Text |
id | pubmed-7173861 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-71738612020-04-27 Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types Rozado, David PLoS One Research Article Concerns about gender bias in word embedding models have captured substantial attention in the algorithmic bias research literature. Other bias types however have received lesser amounts of scrutiny. This work describes a large-scale analysis of sentiment associations in popular word embedding models along the lines of gender and ethnicity but also along the less frequently studied dimensions of socioeconomic status, age, physical appearance, sexual orientation, religious sentiment and political leanings. Consistent with previous scholarly literature, this work has found systemic bias against given names popular among African-Americans in most embedding models examined. Gender bias in embedding models however appears to be multifaceted and often reversed in polarity to what has been regularly reported. Interestingly, using the common operationalization of the term bias in the fairness literature, novel types of so far unreported bias types in word embedding models have also been identified. Specifically, the popular embedding models analyzed here display negative biases against middle and working-class socioeconomic status, male children, senior citizens, plain physical appearance and intellectual phenomena such as Islamic religious faith, non-religiosity and conservative political orientation. Reasons for the paradoxical underreporting of these bias types in the relevant literature are probably manifold but widely held blind spots when searching for algorithmic bias and a lack of widespread technical jargon to unambiguously describe a variety of algorithmic associations could conceivably be playing a role. The causal origins for the multiplicity of loaded associations attached to distinct demographic groups within embedding models are often unclear but the heterogeneity of said associations and their potential multifactorial roots raises doubts about the validity of grouping them all under the umbrella term bias. Richer and more fine-grained terminology as well as a more comprehensive exploration of the bias landscape could help the fairness epistemic community to characterize and neutralize algorithmic discrimination more efficiently. Public Library of Science 2020-04-21 /pmc/articles/PMC7173861/ /pubmed/32315320 http://dx.doi.org/10.1371/journal.pone.0231189 Text en © 2020 David Rozado http://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article Rozado, David Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types |
title | Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types |
title_full | Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types |
title_fullStr | Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types |
title_full_unstemmed | Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types |
title_short | Wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types |
title_sort | wide range screening of algorithmic bias in word embedding models using large sentiment lexicons reveals underreported bias types |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7173861/ https://www.ncbi.nlm.nih.gov/pubmed/32315320 http://dx.doi.org/10.1371/journal.pone.0231189 |
work_keys_str_mv | AT rozadodavid widerangescreeningofalgorithmicbiasinwordembeddingmodelsusinglargesentimentlexiconsrevealsunderreportedbiastypes |
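The abstract above describes screening word embedding models for sentiment associations with demographic terms using large sentiment lexicons. As a rough illustration of how such an association can be quantified, the sketch below scores a target word by its mean cosine similarity to positive versus negative lexicon words. This is a minimal sketch under assumed simplifications, not the published study's procedure: the tiny 4-dimensional vectors, the tokens `name_a`/`name_b`, and the two-word "lexicons" are all invented for the example, and the paper's actual embedding models, lexicon sizes, and association statistic are not reproduced here.

```python
import numpy as np

# Toy embedding table standing in for a real pretrained model (e.g. word2vec
# or GloVe vectors); the 4-dimensional vectors and the target tokens
# "name_a"/"name_b" are invented purely for illustration.
embeddings = {
    "wonderful": np.array([0.9, 0.1, 0.0, 0.2]),
    "excellent": np.array([0.8, 0.2, 0.1, 0.1]),
    "terrible":  np.array([-0.7, 0.1, 0.3, -0.2]),
    "awful":     np.array([-0.8, 0.0, 0.2, -0.1]),
    "name_a":    np.array([0.6, 0.3, 0.1, 0.1]),
    "name_b":    np.array([-0.5, 0.2, 0.2, 0.0]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def sentiment_association(word, positive_lexicon, negative_lexicon, emb):
    """Mean similarity of `word` to positive lexicon entries minus its mean
    similarity to negative entries; values above zero suggest the model places
    the word closer to pleasant vocabulary, values below zero the opposite."""
    pos = np.mean([cosine(emb[word], emb[w]) for w in positive_lexicon if w in emb])
    neg = np.mean([cosine(emb[word], emb[w]) for w in negative_lexicon if w in emb])
    return pos - neg

# Two-word stand-ins for a sentiment lexicon that would normally contain
# thousands of positive and negative entries.
positive_words = ["wonderful", "excellent"]
negative_words = ["terrible", "awful"]

# Score hypothetical demographic target terms and compare their associations.
for target in ["name_a", "name_b"]:
    score = sentiment_association(target, positive_words, negative_words, embeddings)
    print(f"{target}: {score:+.3f}")
```

In a real screening of this kind, the targets would be lists of given names or group labels for each demographic dimension, the lexicon would be a large curated sentiment word list, and differences in scores between contrasting target sets would be the quantity of interest.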