Evaluation of the performance of GPT-3.5 and GPT-4 on the Polish Medical Final Examination
The study aimed to evaluate the performance of two Large Language Models (LLMs): ChatGPT (based on GPT-3.5) and GPT-4 with two temperature parameter values, on the Polish Medical Final Examination (MFE). The models were tested on three editions of the MFE from Spring 2022, Autumn 2022, and Spring 2023, in two language versions, English and Polish.
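The evaluation summarized above comes down to scoring multiple-choice answers at two temperature settings (0 and 1) and relating per-question correctness to question metrics such as the index of difficulty mentioned in the abstract below. As a minimal sketch only (this record does not include the authors' code), the Python below shows how such a scoring pass could look; the `ask_model` stub, the exam-item schema (`question`, `choices`, `correct`, `difficulty_index`), and the choice of a point-biserial correlation are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch, not the authors' pipeline: score multiple-choice answers at two
# temperature settings and correlate per-question correctness with an index of
# difficulty. The exam-item schema and ask_model stub are assumptions.

from scipy.stats import pointbiserialr


def ask_model(question: str, choices: list[str], temperature: float) -> str:
    """Hypothetical stand-in for a chat-completion call; replace with a real LLM client.

    Here it simply returns the first option so the sketch runs end to end."""
    return "A"


def evaluate(exam: list[dict], temperature: float) -> list[int]:
    """Return 1/0 correctness per question for one pass over the exam."""
    return [
        int(ask_model(item["question"], item["choices"], temperature) == item["correct"])
        for item in exam
    ]


def summarize(exam: list[dict], temperature: float) -> None:
    correctness = evaluate(exam, temperature)
    accuracy = sum(correctness) / len(correctness)
    # The record does not state which statistic the authors used for the
    # correctness vs. index-of-difficulty relationship; point-biserial is one
    # standard choice for a binary/continuous pair.
    r, p = pointbiserialr(correctness, [item["difficulty_index"] for item in exam])
    print(f"temperature={temperature}: accuracy={accuracy:.1%}, r={r:.2f}, p={p:.3f}")


if __name__ == "__main__":
    # Toy exam items for illustration only.
    exam = [
        {"question": "Q1", "choices": ["A", "B", "C", "D", "E"], "correct": "A", "difficulty_index": 0.9},
        {"question": "Q2", "choices": ["A", "B", "C", "D", "E"], "correct": "B", "difficulty_index": 0.4},
        {"question": "Q3", "choices": ["A", "B", "C", "D", "E"], "correct": "A", "difficulty_index": 0.7},
    ]
    for t in (0.0, 1.0):  # the two temperature settings compared in the study
        summarize(exam, t)
```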
Main authors: | Rosoł, Maciej, Gąsior, Jakub S., Łaba, Jonasz, Korzeniewski, Kacper, Młyńczak, Marcel
---|---
Format: | Online Article Text
Language: | English
Published: | Nature Publishing Group UK, 2023
Subjects: | Article
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10665355/ https://www.ncbi.nlm.nih.gov/pubmed/37993519 http://dx.doi.org/10.1038/s41598-023-46995-z
_version_ | 1785148851732086784 |
---|---
author | Rosoł, Maciej Gąsior, Jakub S. Łaba, Jonasz Korzeniewski, Kacper Młyńczak, Marcel |
author_facet | Rosoł, Maciej Gąsior, Jakub S. Łaba, Jonasz Korzeniewski, Kacper Młyńczak, Marcel |
author_sort | Rosoł, Maciej |
collection | PubMed |
description | The study aimed to evaluate the performance of two Large Language Models (LLMs), ChatGPT (based on GPT-3.5) and GPT-4, each with two temperature parameter values, on the Polish Medical Final Examination (MFE). The models were tested on three editions of the MFE (Spring 2022, Autumn 2022, and Spring 2023) in two language versions, English and Polish. The accuracies of both models were compared, and the relationships between the correctness of the answers and the answers' metrics were investigated. The study demonstrated that GPT-4 outperformed GPT-3.5 on all three examinations regardless of the language used. GPT-4 achieved mean accuracies of 79.7% for both the Polish and English versions, passing all MFE versions. GPT-3.5 had mean accuracies of 54.8% for Polish and 60.3% for English, passing none of the Polish versions at a temperature of 0 and 2 of 3 at a temperature of 1, while passing all English versions regardless of the temperature value. The GPT-4 score was mostly lower than the average score of a medical student. There was a statistically significant correlation between the correctness of the answers and the index of difficulty for both models. The overall accuracy of both models was still suboptimal and worse than the average for medical students, which emphasizes the need for further improvements before LLMs can be reliably deployed in medical settings. Nevertheless, these findings suggest a growing potential for the use of LLMs in medical education. |
format | Online Article Text |
id | pubmed-10665355 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-10665355 2023-11-22 Evaluation of the performance of GPT-3.5 and GPT-4 on the Polish Medical Final Examination Rosoł, Maciej Gąsior, Jakub S. Łaba, Jonasz Korzeniewski, Kacper Młyńczak, Marcel Sci Rep Article The study aimed to evaluate the performance of two Large Language Models (LLMs), ChatGPT (based on GPT-3.5) and GPT-4, each with two temperature parameter values, on the Polish Medical Final Examination (MFE). The models were tested on three editions of the MFE (Spring 2022, Autumn 2022, and Spring 2023) in two language versions, English and Polish. The accuracies of both models were compared, and the relationships between the correctness of the answers and the answers' metrics were investigated. The study demonstrated that GPT-4 outperformed GPT-3.5 on all three examinations regardless of the language used. GPT-4 achieved mean accuracies of 79.7% for both the Polish and English versions, passing all MFE versions. GPT-3.5 had mean accuracies of 54.8% for Polish and 60.3% for English, passing none of the Polish versions at a temperature of 0 and 2 of 3 at a temperature of 1, while passing all English versions regardless of the temperature value. The GPT-4 score was mostly lower than the average score of a medical student. There was a statistically significant correlation between the correctness of the answers and the index of difficulty for both models. The overall accuracy of both models was still suboptimal and worse than the average for medical students, which emphasizes the need for further improvements before LLMs can be reliably deployed in medical settings. Nevertheless, these findings suggest a growing potential for the use of LLMs in medical education. Nature Publishing Group UK 2023-11-22 /pmc/articles/PMC10665355/ /pubmed/37993519 http://dx.doi.org/10.1038/s41598-023-46995-z Text en © The Author(s) 2023 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Article Rosoł, Maciej Gąsior, Jakub S. Łaba, Jonasz Korzeniewski, Kacper Młyńczak, Marcel Evaluation of the performance of GPT-3.5 and GPT-4 on the Polish Medical Final Examination |
title | Evaluation of the performance of GPT-3.5 and GPT-4 on the Polish Medical Final Examination |
title_full | Evaluation of the performance of GPT-3.5 and GPT-4 on the Polish Medical Final Examination |
title_fullStr | Evaluation of the performance of GPT-3.5 and GPT-4 on the Polish Medical Final Examination |
title_full_unstemmed | Evaluation of the performance of GPT-3.5 and GPT-4 on the Polish Medical Final Examination |
title_short | Evaluation of the performance of GPT-3.5 and GPT-4 on the Polish Medical Final Examination |
title_sort | evaluation of the performance of gpt-3.5 and gpt-4 on the polish medical final examination |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10665355/ https://www.ncbi.nlm.nih.gov/pubmed/37993519 http://dx.doi.org/10.1038/s41598-023-46995-z |
work_keys_str_mv | AT rosołmaciej evaluationoftheperformanceofgpt35andgpt4onthepolishmedicalfinalexamination AT gasiorjakubs evaluationoftheperformanceofgpt35andgpt4onthepolishmedicalfinalexamination AT łabajonasz evaluationoftheperformanceofgpt35andgpt4onthepolishmedicalfinalexamination AT korzeniewskikacper evaluationoftheperformanceofgpt35andgpt4onthepolishmedicalfinalexamination AT młynczakmarcel evaluationoftheperformanceofgpt35andgpt4onthepolishmedicalfinalexamination |