Comparative performance of humans versus GPT-4.0 and GPT-3.5 in the self-assessment program of American Academy of Ophthalmology
To compare the performance of humans, GPT-4.0 and GPT-3.5 in answering multiple-choice questions from the American Academy of Ophthalmology (AAO) Basic and Clinical Science Course (BCSC) self-assessment program, available at https://www.aao.org/education/self-assessments. In June 2023, text-based mu...
Main Authors: | Taloni, Andrea; Borselli, Massimiliano; Scarsi, Valentina; Rossi, Costanza; Coco, Giulia; Scorcia, Vincenzo; Giannaccare, Giuseppe |
Format: | Online Article Text |
Language: | English |
Published: | Nature Publishing Group UK, 2023 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10613606/ https://www.ncbi.nlm.nih.gov/pubmed/37899405 http://dx.doi.org/10.1038/s41598-023-45837-2 |
_version_ | 1785128865212923904 |
author | Taloni, Andrea; Borselli, Massimiliano; Scarsi, Valentina; Rossi, Costanza; Coco, Giulia; Scorcia, Vincenzo; Giannaccare, Giuseppe |
author_facet | Taloni, Andrea; Borselli, Massimiliano; Scarsi, Valentina; Rossi, Costanza; Coco, Giulia; Scorcia, Vincenzo; Giannaccare, Giuseppe |
author_sort | Taloni, Andrea |
collection | PubMed |
description | To compare the performance of humans, GPT-4.0 and GPT-3.5 in answering multiple-choice questions from the American Academy of Ophthalmology (AAO) Basic and Clinical Science Course (BCSC) self-assessment program, available at https://www.aao.org/education/self-assessments. In June 2023, text-based multiple-choice questions were submitted to GPT-4.0 and GPT-3.5. The AAO provides the percentage of humans who selected the correct answer, which was analyzed for comparison. All questions were classified into 10 subspecialties and 3 practice areas (diagnostics/clinics, medical treatment, surgery). Out of 1023 questions, GPT-4.0 achieved the best score (82.4%), followed by humans (75.7%) and GPT-3.5 (65.9%), with significant differences in accuracy rates (all P < 0.0001). Both GPT-4.0 and GPT-3.5 showed the worst results on surgery-related questions (74.6% and 57.0%, respectively). For difficult questions (answered incorrectly by > 50% of humans), both GPT models compared favorably to humans, although the differences did not reach statistical significance. The word count of answers provided by GPT-4.0 was significantly lower than that of answers produced by GPT-3.5 (160 ± 56 and 206 ± 77 words, respectively; P < 0.0001); however, incorrect responses were longer (P < 0.02). GPT-4.0 represented a substantial improvement over GPT-3.5 and outperformed humans on the AAO BCSC self-assessment test. However, ChatGPT is still limited by inconsistency across different practice areas, especially surgery. |
format | Online Article Text |
id | pubmed-10613606 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-10613606 2023-10-31 Comparative performance of humans versus GPT-4.0 and GPT-3.5 in the self-assessment program of American Academy of Ophthalmology Taloni, Andrea Borselli, Massimiliano Scarsi, Valentina Rossi, Costanza Coco, Giulia Scorcia, Vincenzo Giannaccare, Giuseppe Sci Rep Article To compare the performance of humans, GPT-4.0 and GPT-3.5 in answering multiple-choice questions from the American Academy of Ophthalmology (AAO) Basic and Clinical Science Course (BCSC) self-assessment program, available at https://www.aao.org/education/self-assessments. In June 2023, text-based multiple-choice questions were submitted to GPT-4.0 and GPT-3.5. The AAO provides the percentage of humans who selected the correct answer, which was analyzed for comparison. All questions were classified into 10 subspecialties and 3 practice areas (diagnostics/clinics, medical treatment, surgery). Out of 1023 questions, GPT-4.0 achieved the best score (82.4%), followed by humans (75.7%) and GPT-3.5 (65.9%), with significant differences in accuracy rates (all P < 0.0001). Both GPT-4.0 and GPT-3.5 showed the worst results on surgery-related questions (74.6% and 57.0%, respectively). For difficult questions (answered incorrectly by > 50% of humans), both GPT models compared favorably to humans, although the differences did not reach statistical significance. The word count of answers provided by GPT-4.0 was significantly lower than that of answers produced by GPT-3.5 (160 ± 56 and 206 ± 77 words, respectively; P < 0.0001); however, incorrect responses were longer (P < 0.02). GPT-4.0 represented a substantial improvement over GPT-3.5 and outperformed humans on the AAO BCSC self-assessment test. However, ChatGPT is still limited by inconsistency across different practice areas, especially surgery. Nature Publishing Group UK 2023-10-29 /pmc/articles/PMC10613606/ /pubmed/37899405 http://dx.doi.org/10.1038/s41598-023-45837-2 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Taloni, Andrea Borselli, Massimiliano Scarsi, Valentina Rossi, Costanza Coco, Giulia Scorcia, Vincenzo Giannaccare, Giuseppe Comparative performance of humans versus GPT-4.0 and GPT-3.5 in the self-assessment program of American Academy of Ophthalmology |
title | Comparative performance of humans versus GPT-4.0 and GPT-3.5 in the self-assessment program of American Academy of Ophthalmology |
title_full | Comparative performance of humans versus GPT-4.0 and GPT-3.5 in the self-assessment program of American Academy of Ophthalmology |
title_fullStr | Comparative performance of humans versus GPT-4.0 and GPT-3.5 in the self-assessment program of American Academy of Ophthalmology |
title_full_unstemmed | Comparative performance of humans versus GPT-4.0 and GPT-3.5 in the self-assessment program of American Academy of Ophthalmology |
title_short | Comparative performance of humans versus GPT-4.0 and GPT-3.5 in the self-assessment program of American Academy of Ophthalmology |
title_sort | comparative performance of humans versus gpt-4.0 and gpt-3.5 in the self-assessment program of american academy of ophthalmology |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10613606/ https://www.ncbi.nlm.nih.gov/pubmed/37899405 http://dx.doi.org/10.1038/s41598-023-45837-2 |
work_keys_str_mv | AT taloniandrea comparativeperformanceofhumansversusgpt40andgpt35intheselfassessmentprogramofamericanacademyofophthalmology AT borsellimassimiliano comparativeperformanceofhumansversusgpt40andgpt35intheselfassessmentprogramofamericanacademyofophthalmology AT scarsivalentina comparativeperformanceofhumansversusgpt40andgpt35intheselfassessmentprogramofamericanacademyofophthalmology AT rossicostanza comparativeperformanceofhumansversusgpt40andgpt35intheselfassessmentprogramofamericanacademyofophthalmology AT cocogiulia comparativeperformanceofhumansversusgpt40andgpt35intheselfassessmentprogramofamericanacademyofophthalmology AT scorciavincenzo comparativeperformanceofhumansversusgpt40andgpt35intheselfassessmentprogramofamericanacademyofophthalmology AT giannaccaregiuseppe comparativeperformanceofhumansversusgpt40andgpt35intheselfassessmentprogramofamericanacademyofophthalmology |