Moving beyond “algorithmic bias is a data problem”
A surprisingly sticky belief is that a machine learning model merely reflects existing algorithmic bias in the dataset and does not itself contribute to harm. Why, despite clear evidence to the contrary, does the myth of the impartial model still hold allure for so many within our research community? Algorithms are not impartial, and some design choices are better than others. Recognizing how model design impacts harm opens up new mitigation techniques that are less burdensome than comprehensive data collection.
Main Author: Hooker, Sara
Format: Online Article Text
Language: English
Published: Elsevier, 2021
Subjects: Opinion
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8085589/ https://www.ncbi.nlm.nih.gov/pubmed/33982031 http://dx.doi.org/10.1016/j.patter.2021.100241
author | Hooker, Sara |
collection | PubMed |
description | A surprisingly sticky belief is that a machine learning model merely reflects existing algorithmic bias in the dataset and does not itself contribute to harm. Why, despite clear evidence to the contrary, does the myth of the impartial model still hold allure for so many within our research community? Algorithms are not impartial, and some design choices are better than others. Recognizing how model design impacts harm opens up new mitigation techniques that are less burdensome than comprehensive data collection. |
format | Online Article Text |
id | pubmed-8085589 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Elsevier |
record_format | MEDLINE/PubMed |
journal | Patterns (N Y)
published online | 2021-04-09 (Elsevier)
license | © 2021 The Author. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
title | Moving beyond “algorithmic bias is a data problem” |
topic | Opinion |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8085589/ https://www.ncbi.nlm.nih.gov/pubmed/33982031 http://dx.doi.org/10.1016/j.patter.2021.100241 |