Towards a pragmatist dealing with algorithmic bias in medical machine learning
Main Authors: Starke, Georg; De Clercq, Eva; Elger, Bernice S.
Format: Online Article Text
Language: English
Published: Springer Netherlands, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7955212/ https://www.ncbi.nlm.nih.gov/pubmed/33713239 http://dx.doi.org/10.1007/s11019-021-10008-5
_version_ | 1783664210625953792 |
author | Starke, Georg; De Clercq, Eva; Elger, Bernice S. 
author_facet | Starke, Georg; De Clercq, Eva; Elger, Bernice S. 
author_sort | Starke, Georg |
collection | PubMed |
description | Machine Learning (ML) is on the rise in medicine, promising improved diagnostic, therapeutic and prognostic clinical tools. While these technological innovations are bound to transform health care, they also bring new ethical concerns to the forefront. One particularly elusive challenge regards discriminatory algorithmic judgements based on biases inherent in the training data. A common line of reasoning distinguishes between justified differential treatments that mirror true disparities between socially salient groups, and unjustified biases which do not, leading to misdiagnosis and erroneous treatment. In the curation of training data this strategy runs into severe problems though, since distinguishing between the two can be next to impossible. We thus plead for a pragmatist dealing with algorithmic bias in healthcare environments. By recurring to a recent reformulation of William James’s pragmatist understanding of truth, we recommend that, instead of aiming at a supposedly objective truth, outcome-based therapeutic usefulness should serve as the guiding principle for assessing ML applications in medicine. |
format | Online Article Text |
id | pubmed-7955212 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Springer Netherlands |
record_format | MEDLINE/PubMed |
spelling | pubmed-7955212 2021-03-15 Towards a pragmatist dealing with algorithmic bias in medical machine learning Starke, Georg; De Clercq, Eva; Elger, Bernice S. Med Health Care Philos Scientific Contribution Springer Netherlands 2021-03-13 2021 /pmc/articles/PMC7955212/ /pubmed/33713239 http://dx.doi.org/10.1007/s11019-021-10008-5 Text en © The Author(s) 2021. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). 
spellingShingle | Scientific Contribution; Starke, Georg; De Clercq, Eva; Elger, Bernice S.; Towards a pragmatist dealing with algorithmic bias in medical machine learning 
title | Towards a pragmatist dealing with algorithmic bias in medical machine learning |
title_full | Towards a pragmatist dealing with algorithmic bias in medical machine learning |
title_fullStr | Towards a pragmatist dealing with algorithmic bias in medical machine learning |
title_full_unstemmed | Towards a pragmatist dealing with algorithmic bias in medical machine learning |
title_short | Towards a pragmatist dealing with algorithmic bias in medical machine learning |
title_sort | towards a pragmatist dealing with algorithmic bias in medical machine learning |
topic | Scientific Contribution |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7955212/ https://www.ncbi.nlm.nih.gov/pubmed/33713239 http://dx.doi.org/10.1007/s11019-021-10008-5 |
work_keys_str_mv | AT starkegeorg towardsapragmatistdealingwithalgorithmicbiasinmedicalmachinelearning AT declercqeva towardsapragmatistdealingwithalgorithmicbiasinmedicalmachinelearning AT elgerbernices towardsapragmatistdealingwithalgorithmicbiasinmedicalmachinelearning |