
Rules for robots, and why medical AI breaks them

This article critiques the quest to state general rules to protect human rights against AI/ML computational tools. The White House Blueprint for an AI Bill of Rights was a recent attempt that fails in ways this article explores. There are limits to how far ethicolegal analysis can go in abstracting...


Bibliographic Details
Main Author: Evans, Barbara J
Format: Online Article Text
Language: English
Published: Oxford University Press 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9934949/
https://www.ncbi.nlm.nih.gov/pubmed/36815975
http://dx.doi.org/10.1093/jlb/lsad001
_version_ 1784889977024282624
author Evans, Barbara J
author_facet Evans, Barbara J
author_sort Evans, Barbara J
collection PubMed
description This article critiques the quest to state general rules to protect human rights against AI/ML computational tools. The White House Blueprint for an AI Bill of Rights was a recent attempt that fails in ways this article explores. There are limits to how far ethicolegal analysis can go in abstracting AI/ML tools, as a category, from the specific contexts where AI tools are deployed. Health technology offers a good example of this principle. The salient dilemma with AI/ML medical software is that privacy policy has the potential to undermine distributional justice, forcing a choice between two competing visions of privacy protection. The first, stressing individual consent, won favor among bioethicists, information privacy theorists, and policymakers after 1970 but displays an ominous potential to bias AI training data in ways that promote health care inequities. The alternative, an older duty-based approach from medical privacy law, aligns with a broader critique of how late-20th-century American law and ethics endorsed atomistic autonomy as the highest moral good, neglecting principles of caring, social interdependency, justice, and equity. Disregarding the context of such choices can produce suboptimal policies when - as in medicine and many other contexts - the use of personal data has high social value.
format Online
Article
Text
id pubmed-9934949
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-99349492023-02-17 Rules for robots, and why medical AI breaks them Evans, Barbara J J Law Biosci Original Article Oxford University Press 2023-02-16 /pmc/articles/PMC9934949/ /pubmed/36815975 http://dx.doi.org/10.1093/jlb/lsad001 Text en © The Author(s) 2023. Published by Oxford University Press on behalf of Duke University School of Law, Harvard Law School, Oxford University Press, and Stanford Law School.
https://creativecommons.org/licenses/by/4.0/ This is an Open Access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted reuse, distribution, and reproduction in any medium, provided the original work is properly cited.
spellingShingle Original Article
Evans, Barbara J
Rules for robots, and why medical AI breaks them
title Rules for robots, and why medical AI breaks them
title_full Rules for robots, and why medical AI breaks them
title_fullStr Rules for robots, and why medical AI breaks them
title_full_unstemmed Rules for robots, and why medical AI breaks them
title_short Rules for robots, and why medical AI breaks them
title_sort rules for robots, and why medical ai breaks them
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9934949/
https://www.ncbi.nlm.nih.gov/pubmed/36815975
http://dx.doi.org/10.1093/jlb/lsad001
work_keys_str_mv AT evansbarbaraj rulesforrobotsandwhymedicalaibreaksthem