Assessing the Utility of ChatGPT Throughout the Entire Clinical Workflow: Development and Usability Study

BACKGROUND: Large language model (LLM)–based artificial intelligence chatbots direct the power of large training data sets toward successive, related tasks as opposed to single-ask tasks, for which artificial intelligence already achieves impressive performance. The capacity of LLMs to assist in the full scope of iterative clinical reasoning via successive prompting, in effect acting as artificial physicians, has not yet been evaluated. OBJECTIVE: This study aimed to evaluate ChatGPT’s capacity for ongoing clinical decision support via its performance on standardized clinical vignettes. METHODS: We inputted all 36 published clinical vignettes from the Merck Sharp & Dohme (MSD) Clinical Manual into ChatGPT and compared its accuracy on differential diagnoses, diagnostic testing, final diagnosis, and management based on patient age, gender, and case acuity. Accuracy was measured by the proportion of correct responses to the questions posed within the clinical vignettes tested, as calculated by human scorers. We further conducted linear regression to assess the contributing factors toward ChatGPT’s performance on clinical tasks. RESULTS: ChatGPT achieved an overall accuracy of 71.7% (95% CI 69.3%-74.1%) across all 36 clinical vignettes. The LLM demonstrated the highest performance in making a final diagnosis with an accuracy of 76.9% (95% CI 67.8%-86.1%) and the lowest performance in generating an initial differential diagnosis with an accuracy of 60.3% (95% CI 54.2%-66.6%). Compared to answering questions about general medical knowledge, ChatGPT demonstrated inferior performance on differential diagnosis (β=–15.8%; P<.001) and clinical management (β=–7.4%; P=.02) question types. CONCLUSIONS: ChatGPT achieves impressive accuracy in clinical decision-making, with increasing strength as it gains more clinical information at its disposal. In particular, ChatGPT demonstrates the greatest accuracy in tasks of final diagnosis as compared to initial diagnosis. Limitations include possible model hallucinations and the unclear composition of ChatGPT’s training data set.
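The accuracy figures above are proportions of correct responses aggregated over the vignette questions, reported with 95% CIs. A minimal sketch of that kind of aggregation is shown below; this is not the authors' code, the scored responses are hypothetical placeholders, and the normal-approximation (Wald) interval is an assumption, since the record does not state which CI method was used.

```python
import math

# Hypothetical scored responses: (question_type, correct?) pairs.
# In the study, human scorers marked each ChatGPT answer correct/incorrect.
scores = [
    ("differential_diagnosis", 1), ("differential_diagnosis", 0),
    ("diagnostic_testing", 1), ("final_diagnosis", 1),
    ("clinical_management", 0), ("general_knowledge", 1),
    ("general_knowledge", 1), ("final_diagnosis", 1),
]

def accuracy_with_ci(outcomes, z=1.96):
    """Proportion correct with a normal-approximation (Wald) 95% CI."""
    n = len(outcomes)
    p = sum(outcomes) / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Overall accuracy across all scored questions.
overall = accuracy_with_ci([correct for _, correct in scores])
print("Overall accuracy: %.1f%% (95%% CI %.1f%%-%.1f%%)"
      % tuple(100 * x for x in overall))

# Per-question-type accuracy, mirroring the paper's breakdown by task.
by_type = {}
for qtype, correct in scores:
    by_type.setdefault(qtype, []).append(correct)
for qtype, outcomes in sorted(by_type.items()):
    p, _, _ = accuracy_with_ci(outcomes)
    print("%s: %.1f%%" % (qtype, 100 * p))
```

The reported β coefficients would come from the separate linear regression step, regressing per-question correctness on question type (and covariates such as age, gender, and case acuity); the exact model specification is not given in this record.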

Bibliographic Details
Main Authors: Rao, Arya; Pang, Michael; Kim, John; Kamineni, Meghana; Lie, Winston; Prasad, Anoop K; Landman, Adam; Dreyer, Keith; Succi, Marc D
Format: Online Article Text
Language: English
Published: JMIR Publications, 2023-08-22
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10481210/
https://www.ncbi.nlm.nih.gov/pubmed/37606976
http://dx.doi.org/10.2196/48659
Collection: PubMed
Record ID: pubmed-10481210
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: J Med Internet Res (Original Paper)
Copyright/License: ©Arya Rao, Michael Pang, John Kim, Meghana Kamineni, Winston Lie, Anoop K Prasad, Adam Landman, Keith Dreyer, Marc D Succi. Originally published in the Journal of Medical Internet Research (https://www.jmir.org), 22.08.2023. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on https://www.jmir.org/, as well as this copyright and license information must be included.