Large language models propagate race-based medicine
Large language models (LLMs) are being integrated into healthcare systems, but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available LLMs propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and prior work on race-based medical misconceptions believed by medical trainees. We assessed four large language models with nine different questions that were interrogated five times each, for a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses. Models were not always consistent in their responses when asked the same question repeatedly. LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems. However, this study shows that these LLMs could potentially cause harm by perpetuating debunked, racist ideas.
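The protocol summarized above (nine questions, each posed five times to each of four models, for 45 responses per model) can be outlined in a short sketch. This is a minimal illustration under stated assumptions, not the authors' code: the `ask_model` wrapper and the placeholder questions are hypothetical, standing in for whichever API each commercial model exposes and for the paper's actual question set.

```python
# Minimal sketch of the repeated-query protocol described in the abstract:
# every model answers every question five times (9 questions x 5 runs = 45
# responses per model). Nothing below is taken from the paper itself.

NUM_RUNS = 5

# Hypothetical placeholders; the paper's nine prompts are given in its methods.
QUESTIONS = [
    "Question 1 (race-based medicine prompt)",
    "Question 2 (race-based medicine prompt)",
    # ... seven more
]

def ask_model(model_name: str, prompt: str) -> str:
    """Hypothetical stand-in for a commercial LLM API client."""
    return f"[{model_name} response to: {prompt}]"  # replace with a real API call

def collect_responses(models: list[str]) -> dict[str, list[tuple[str, str]]]:
    """Query each model NUM_RUNS times per question and return
    (question, response) pairs per model."""
    responses: dict[str, list[tuple[str, str]]] = {m: [] for m in models}
    for model in models:
        for question in QUESTIONS:
            for _ in range(NUM_RUNS):
                responses[model].append((question, ask_model(model, question)))
    return responses
```

Repeating each prompt is what surfaces the inconsistency the abstract reports: a model can give a debunked, race-based answer on some runs and not on others.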
Main Authors: | Omiye, Jesutofunmi A.; Lester, Jenna C.; Spichak, Simon; Rotemberg, Veronica; Daneshjou, Roxana
---|---
Format: | Online Article Text
Language: | English
Published: | Nature Publishing Group UK, 2023
Subjects: | Brief Communication
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10589311/ https://www.ncbi.nlm.nih.gov/pubmed/37864012 http://dx.doi.org/10.1038/s41746-023-00939-z
Field | Value
---|---
_version_ | 1785123763525779456
author | Omiye, Jesutofunmi A.; Lester, Jenna C.; Spichak, Simon; Rotemberg, Veronica; Daneshjou, Roxana
author_facet | Omiye, Jesutofunmi A.; Lester, Jenna C.; Spichak, Simon; Rotemberg, Veronica; Daneshjou, Roxana
author_sort | Omiye, Jesutofunmi A. |
collection | PubMed |
description | Large language models (LLMs) are being integrated into healthcare systems, but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available LLMs propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and prior work on race-based medical misconceptions believed by medical trainees. We assessed four large language models with nine different questions that were interrogated five times each, for a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses. Models were not always consistent in their responses when asked the same question repeatedly. LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems. However, this study shows that these LLMs could potentially cause harm by perpetuating debunked, racist ideas.
format | Online Article Text |
id | pubmed-10589311 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-10589311 2023-10-22 Large language models propagate race-based medicine Omiye, Jesutofunmi A.; Lester, Jenna C.; Spichak, Simon; Rotemberg, Veronica; Daneshjou, Roxana NPJ Digit Med Brief Communication Large language models (LLMs) are being integrated into healthcare systems, but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available LLMs propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and prior work on race-based medical misconceptions believed by medical trainees. We assessed four large language models with nine different questions that were interrogated five times each, for a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses. Models were not always consistent in their responses when asked the same question repeatedly. LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems. However, this study shows that these LLMs could potentially cause harm by perpetuating debunked, racist ideas. Nature Publishing Group UK 2023-10-20 /pmc/articles/PMC10589311/ /pubmed/37864012 http://dx.doi.org/10.1038/s41746-023-00939-z Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
spellingShingle | Brief Communication; Omiye, Jesutofunmi A.; Lester, Jenna C.; Spichak, Simon; Rotemberg, Veronica; Daneshjou, Roxana; Large language models propagate race-based medicine
title | Large language models propagate race-based medicine |
title_full | Large language models propagate race-based medicine |
title_fullStr | Large language models propagate race-based medicine |
title_full_unstemmed | Large language models propagate race-based medicine |
title_short | Large language models propagate race-based medicine |
title_sort | large language models propagate race-based medicine |
topic | Brief Communication |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10589311/ https://www.ncbi.nlm.nih.gov/pubmed/37864012 http://dx.doi.org/10.1038/s41746-023-00939-z |
work_keys_str_mv | AT omiyejesutofunmia largelanguagemodelspropagateracebasedmedicine AT lesterjennac largelanguagemodelspropagateracebasedmedicine AT spichaksimon largelanguagemodelspropagateracebasedmedicine AT rotembergveronica largelanguagemodelspropagateracebasedmedicine AT daneshjouroxana largelanguagemodelspropagateracebasedmedicine |