
Intelligent decision support in medical triage: are people robust to biased advice?


Bibliographic Details
Main Authors: van der Stigchel, Birgit, van den Bosch, Karel, van Diggelen, Jurriaan, Haselager, Pim
Format: Online Article Text
Language: English
Published: Oxford University Press 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10470333/
https://www.ncbi.nlm.nih.gov/pubmed/36947701
http://dx.doi.org/10.1093/pubmed/fdad005
author van der Stigchel, Birgit
van den Bosch, Karel
van Diggelen, Jurriaan
Haselager, Pim
collection PubMed
description BACKGROUND: Intelligent artificial agents (‘agents’) have emerged in various domains of human society (healthcare, legal, social). Since using intelligent agents can lead to biases, a common proposed solution is to keep the human in the loop. Will this be enough to ensure unbiased decision making? METHODS: To address this question, an experimental testbed was developed in which a human participant and an agent collaboratively conduct triage on patients during a pandemic crisis. The agent uses data to support the human by providing advice and extra information about the patients. In one condition, the agent provided sound advice; the agent in the other condition gave biased advice. The research question was whether participants neutralized bias from the biased artificial agent. RESULTS: Although it was an exploratory study, the data suggest that human participants may not be sufficiently in control to correct the agent’s bias. CONCLUSIONS: This research shows how important it is to design and test for human control in concrete human–machine collaboration contexts. It suggests that insufficient human control can potentially result in people being unable to detect biases in machines and thus unable to prevent machine biases from affecting decisions.
format Online
Article
Text
id pubmed-10470333
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Oxford University Press
record_format MEDLINE/PubMed
spelling pubmed-10470333 2023-09-01
Intelligent decision support in medical triage: are people robust to biased advice? van der Stigchel, Birgit; van den Bosch, Karel; van Diggelen, Jurriaan; Haselager, Pim. J Public Health (Oxf), Original Article. Oxford University Press, 2023-03-20. /pmc/articles/PMC10470333/ /pubmed/36947701 http://dx.doi.org/10.1093/pubmed/fdad005
© The Author(s) 2023. Published by Oxford University Press on behalf of the Faculty of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial re-use, distribution and reproduction in any medium, provided the original work is properly cited. For commercial re-use, please contact journals.permissions@oup.com.
title Intelligent decision support in medical triage: are people robust to biased advice?
topic Original Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10470333/
https://www.ncbi.nlm.nih.gov/pubmed/36947701
http://dx.doi.org/10.1093/pubmed/fdad005