Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare
Main author: | Pozzi, Giorgia |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Springer Netherlands, 2023 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9869303/ https://www.ncbi.nlm.nih.gov/pubmed/36711076 http://dx.doi.org/10.1007/s10676-023-09676-z |
_version_ | 1784876741750161408 |
author | Pozzi, Giorgia |
author_facet | Pozzi, Giorgia |
author_sort | Pozzi, Giorgia |
collection | PubMed |
description | Artificial intelligence (AI)-based technologies such as machine learning (ML) systems are playing an increasingly relevant role in medicine and healthcare, raising novel ethical and epistemological issues that need to be addressed in a timely manner. Even though ethical questions connected to epistemic concerns have been at the center of the debate, it has gone largely unnoticed how epistemic forms of injustice can be ML-induced, specifically in healthcare. I analyze the shortcomings of an ML system currently deployed in the USA to predict patients’ likelihood of opioid addiction and misuse (PDMP algorithmic platforms). Drawing on this analysis, I aim to show that the wrong inflicted on epistemic agents involved in and affected by these systems’ decision-making processes can be captured through the lens of Miranda Fricker’s account of hermeneutical injustice. I further argue that ML-induced hermeneutical injustice is particularly harmful due to what I define as an automated hermeneutical appropriation on the part of the ML system. The latter occurs when the ML system establishes meanings and shared hermeneutical resources without allowing for human oversight, impairing understanding and communication practices among the stakeholders involved in medical decision-making. Furthermore, and crucially, an automated hermeneutical appropriation can be recognized when physicians are strongly limited in their ability to safeguard patients from ML-induced hermeneutical injustice. Overall, my paper expands the analysis of ethical issues raised by ML systems that are epistemic in nature, thus contributing to bridging the gap between these two dimensions in the ongoing debate. |
format | Online Article Text |
id | pubmed-9869303 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Springer Netherlands |
record_format | MEDLINE/PubMed |
spelling | pubmed-9869303 2023-01-23 Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare. Pozzi, Giorgia. Ethics Inf Technol, OriginalPaper. Springer Netherlands, 2023-01-23. /pmc/articles/PMC9869303/ /pubmed/36711076 http://dx.doi.org/10.1007/s10676-023-09676-z Text en © The Author(s) 2023, Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | OriginalPaper Pozzi, Giorgia Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare |
title | Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare |
title_full | Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare |
title_fullStr | Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare |
title_full_unstemmed | Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare |
title_short | Automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare |
title_sort | automated opioid risk scores: a case for machine learning-induced epistemic injustice in healthcare |
topic | OriginalPaper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9869303/ https://www.ncbi.nlm.nih.gov/pubmed/36711076 http://dx.doi.org/10.1007/s10676-023-09676-z |
work_keys_str_mv | AT pozzigiorgia automatedopioidriskscoresacaseformachinelearninginducedepistemicinjusticeinhealthcare |