Algorithmic fairness in pandemic forecasting: lessons from COVID-19
Main Authors: | Tsai, Thomas C.; Arik, Sercan; Jacobson, Benjamin H.; Yoon, Jinsung; Yoder, Nate; Sava, Dario; Mitchell, Margaret; Graham, Garth; Pfister, Tomas |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2022 |
Subjects: | Perspective |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9090910/ https://www.ncbi.nlm.nih.gov/pubmed/35538215 http://dx.doi.org/10.1038/s41746-022-00602-z |
_version_ | 1784704826622345216 |
---|---|
author | Tsai, Thomas C. Arik, Sercan Jacobson, Benjamin H. Yoon, Jinsung Yoder, Nate Sava, Dario Mitchell, Margaret Graham, Garth Pfister, Tomas |
author_facet | Tsai, Thomas C. Arik, Sercan Jacobson, Benjamin H. Yoon, Jinsung Yoder, Nate Sava, Dario Mitchell, Margaret Graham, Garth Pfister, Tomas |
author_sort | Tsai, Thomas C. |
collection | PubMed |
description | Racial and ethnic minorities have borne a particularly acute burden of the COVID-19 pandemic in the United States. There is a growing awareness from both researchers and public health leaders of the critical need to ensure fairness in forecast results. Without careful and deliberate bias mitigation, inequities embedded in data can be transferred to model predictions, perpetuating disparities, and exacerbating the disproportionate harms of the COVID-19 pandemic. These biases in data and forecasts can be viewed through both statistical and sociological lenses, and the challenges of both building hierarchical models with limited data availability and drawing on data that reflects structural inequities must be confronted. We present an outline of key modeling domains in which unfairness may be introduced and draw on our experience building and testing the Google-Harvard COVID-19 Public Forecasting model to illustrate these challenges and offer strategies to address them. While targeted toward pandemic forecasting, these domains of potentially biased modeling and concurrent approaches to pursuing fairness present important considerations for equitable machine-learning innovation. |
format | Online Article Text |
id | pubmed-9090910 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-9090910 2022-05-12 Algorithmic fairness in pandemic forecasting: lessons from COVID-19 Tsai, Thomas C. Arik, Sercan Jacobson, Benjamin H. Yoon, Jinsung Yoder, Nate Sava, Dario Mitchell, Margaret Graham, Garth Pfister, Tomas NPJ Digit Med Perspective Racial and ethnic minorities have borne a particularly acute burden of the COVID-19 pandemic in the United States. There is a growing awareness from both researchers and public health leaders of the critical need to ensure fairness in forecast results. Without careful and deliberate bias mitigation, inequities embedded in data can be transferred to model predictions, perpetuating disparities, and exacerbating the disproportionate harms of the COVID-19 pandemic. These biases in data and forecasts can be viewed through both statistical and sociological lenses, and the challenges of both building hierarchical models with limited data availability and drawing on data that reflects structural inequities must be confronted. We present an outline of key modeling domains in which unfairness may be introduced and draw on our experience building and testing the Google-Harvard COVID-19 Public Forecasting model to illustrate these challenges and offer strategies to address them. While targeted toward pandemic forecasting, these domains of potentially biased modeling and concurrent approaches to pursuing fairness present important considerations for equitable machine-learning innovation. Nature Publishing Group UK 2022-05-10 /pmc/articles/PMC9090910/ /pubmed/35538215 http://dx.doi.org/10.1038/s41746-022-00602-z Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/) . |
spellingShingle | Perspective Tsai, Thomas C. Arik, Sercan Jacobson, Benjamin H. Yoon, Jinsung Yoder, Nate Sava, Dario Mitchell, Margaret Graham, Garth Pfister, Tomas Algorithmic fairness in pandemic forecasting: lessons from COVID-19 |
title | Algorithmic fairness in pandemic forecasting: lessons from COVID-19 |
title_full | Algorithmic fairness in pandemic forecasting: lessons from COVID-19 |
title_fullStr | Algorithmic fairness in pandemic forecasting: lessons from COVID-19 |
title_full_unstemmed | Algorithmic fairness in pandemic forecasting: lessons from COVID-19 |
title_short | Algorithmic fairness in pandemic forecasting: lessons from COVID-19 |
title_sort | algorithmic fairness in pandemic forecasting: lessons from covid-19 |
topic | Perspective |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9090910/ https://www.ncbi.nlm.nih.gov/pubmed/35538215 http://dx.doi.org/10.1038/s41746-022-00602-z |
work_keys_str_mv | AT tsaithomasc algorithmicfairnessinpandemicforecastinglessonsfromcovid19 AT ariksercan algorithmicfairnessinpandemicforecastinglessonsfromcovid19 AT jacobsonbenjaminh algorithmicfairnessinpandemicforecastinglessonsfromcovid19 AT yoonjinsung algorithmicfairnessinpandemicforecastinglessonsfromcovid19 AT yodernate algorithmicfairnessinpandemicforecastinglessonsfromcovid19 AT savadario algorithmicfairnessinpandemicforecastinglessonsfromcovid19 AT mitchellmargaret algorithmicfairnessinpandemicforecastinglessonsfromcovid19 AT grahamgarth algorithmicfairnessinpandemicforecastinglessonsfromcovid19 AT pfistertomas algorithmicfairnessinpandemicforecastinglessonsfromcovid19 |