Comparing ChatGPT and GPT-4 performance in USMLE soft skill assessments

The United States Medical Licensing Examination (USMLE) has been a subject of performance study for artificial intelligence (AI) models. However, their performance on questions involving USMLE soft skills remains unexplored. This study aimed to evaluate ChatGPT and GPT-4 on USMLE questions involving communication skills, ethics, empathy, and professionalism. We used 80 USMLE-style questions involving soft skills, taken from the USMLE website and the AMBOSS question bank. A follow-up query was used to assess the models' consistency. The performance of the AI models was compared to that of previous AMBOSS users. GPT-4 outperformed ChatGPT, correctly answering 90% compared to ChatGPT's 62.5%. GPT-4 showed more confidence, not revising any responses, while ChatGPT modified its original answers 82.5% of the time. The performance of GPT-4 was higher than that of AMBOSS's past users. Both AI models, notably GPT-4, showed capacity for empathy, indicating AI's potential to meet the complex interpersonal, ethical, and professional demands intrinsic to the practice of medicine.
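
The follow-up-query protocol the abstract describes (answer, re-ask with a challenge, record any revision) is straightforward to express in code. The sketch below is a hypothetical reconstruction, not the authors' published pipeline: the ask_model callable, the question dictionaries, and the wording of the challenge prompt are all assumptions.

```python
# A minimal, hypothetical sketch of the follow-up consistency protocol the
# abstract describes: ask each question, re-ask with a challenge, and record
# both accuracy and how often the model revises its answer. ask_model is a
# stand-in for a real chat-completion API call; none of this is the authors'
# published code.

def evaluate(ask_model, questions):
    """ask_model(prompt) -> answer letter; questions: [{"stem": ..., "answer": ...}]."""
    correct = revised = 0
    for q in questions:
        first = ask_model(q["stem"])                    # initial answer
        challenge = (q["stem"] + "\nYou answered " + first
                     + ". Are you sure? Give your final answer letter.")
        final = ask_model(challenge)                    # consistency probe
        revised += final != first                       # did the model change its mind?
        correct += final == q["answer"]                 # score the final answer
    n = len(questions)
    return correct / n, revised / n                     # accuracy, revision rate

# Toy usage with a stub "model" that always answers "A" and never revises:
questions = [{"stem": "Q1 ...", "answer": "A"}, {"stem": "Q2 ...", "answer": "B"}]
accuracy, revision_rate = evaluate(lambda prompt: "A", questions)
print(accuracy, revision_rate)  # 0.5 0.0
```

On the 80 soft-skill questions, the reported results correspond to accuracy of 0.90 (GPT-4) versus 0.625 (ChatGPT), with revision rates of 0.0 and 0.825 respectively.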

Bibliographic Details
Main Authors: Brin, Dana; Sorin, Vera; Vaid, Akhil; Soroush, Ali; Glicksberg, Benjamin S.; Charney, Alexander W.; Nadkarni, Girish; Klang, Eyal
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2023
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10543445/
https://www.ncbi.nlm.nih.gov/pubmed/37779171
http://dx.doi.org/10.1038/s41598-023-43436-9
Journal: Sci Rep
Collection: PubMed (National Center for Biotechnology Information)
Record ID: pubmed-10543445
Record Format: MEDLINE/PubMed
Publication Date: 2023-10-01
Rights: © The Author(s) 2023. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated.
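
Because the record carries both a PMC identifier (PMC10543445) and a PubMed ID (37779171), it can also be retrieved programmatically. The following is a minimal sketch against NCBI's public E-utilities efetch endpoint; the parameter choices (rettype=abstract, retmode=text) are one common option, and heavier use should follow NCBI's rate-limit and tool/email identification guidance.

```python
# Minimal sketch: retrieve this record's abstract from PubMed through NCBI's
# public E-utilities efetch endpoint, using only the standard library.
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
params = urlencode({
    "db": "pubmed",
    "id": "37779171",       # PubMed ID from the Online Access links above
    "rettype": "abstract",  # plain-text abstract view
    "retmode": "text",
})

with urlopen(BASE + "?" + params) as response:
    print(response.read().decode("utf-8"))
```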