ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom)
INTRODUCTION: Large language models, in particular ChatGPT, have showcased remarkable language processing capabilities. Given the substantial workload of university medical staff, this study aims to assess the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared to questions written by university professoriate staff based on standard medical textbooks.
Main Authors: Cheung, Billy Ho Hung; Lau, Gary Kui Kai; Wong, Gordon Tin Chun; Lee, Elaine Yuen Phin; Kulkarni, Dhananjay; Seow, Choon Sheong; Wong, Ruby; Co, Michael Tiong-Hong
Format: Online Article Text
Language: English
Published: Public Library of Science, 2023
Subjects: Research Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10464959/ https://www.ncbi.nlm.nih.gov/pubmed/37643186 http://dx.doi.org/10.1371/journal.pone.0290691
_version_ | 1785098575163686912 |
author | Cheung, Billy Ho Hung; Lau, Gary Kui Kai; Wong, Gordon Tin Chun; Lee, Elaine Yuen Phin; Kulkarni, Dhananjay; Seow, Choon Sheong; Wong, Ruby; Co, Michael Tiong-Hong
author_facet | Cheung, Billy Ho Hung; Lau, Gary Kui Kai; Wong, Gordon Tin Chun; Lee, Elaine Yuen Phin; Kulkarni, Dhananjay; Seow, Choon Sheong; Wong, Ruby; Co, Michael Tiong-Hong
author_sort | Cheung, Billy Ho Hung |
collection | PubMed |
description | INTRODUCTION: Large language models, in particular ChatGPT, have showcased remarkable language processing capabilities. Given the substantial workload of university medical staff, this study aims to assess the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared to questions written by university professoriate staff based on standard medical textbooks. METHODS: 50 MCQs were generated by ChatGPT with reference to two standard undergraduate medical textbooks (Harrison’s and Bailey & Love’s). Another 50 MCQs were drafted by two university professoriate staff using the same medical textbooks. All 100 MCQs were individually numbered, randomized and sent to five independent international assessors for quality assessment using a standardized assessment score covering five domains: appropriateness of the question, clarity and specificity, relevance, discriminative power of alternatives, and suitability for medical graduate examination. RESULTS: The total time required for ChatGPT to create the 50 questions was 20 minutes 25 seconds, while it took the two human examiners a total of 211 minutes 33 seconds to draft their 50 questions. When the mean scores of the A.I.-constructed questions were compared with those of the human-drafted questions, the A.I. was inferior to humans only in the relevance domain (A.I.: 7.56 +/- 0.94 vs human: 7.88 +/- 0.52; p = 0.04). There was no significant difference in question quality between questions drafted by A.I. and by humans in the total assessment score or in the other domains. Questions generated by A.I. yielded a wider range of scores, while those created by humans were consistent and within a narrower range. CONCLUSION: ChatGPT has the potential to generate MCQs of comparable quality for medical graduate examinations within a significantly shorter time.
format | Online Article Text |
id | pubmed-10464959 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-10464959 2023-08-30 ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom) Cheung, Billy Ho Hung; Lau, Gary Kui Kai; Wong, Gordon Tin Chun; Lee, Elaine Yuen Phin; Kulkarni, Dhananjay; Seow, Choon Sheong; Wong, Ruby; Co, Michael Tiong-Hong PLoS One Research Article INTRODUCTION: Large language models, in particular ChatGPT, have showcased remarkable language processing capabilities. Given the substantial workload of university medical staff, this study aims to assess the quality of multiple-choice questions (MCQs) produced by ChatGPT for use in graduate medical examinations, compared to questions written by university professoriate staff based on standard medical textbooks. METHODS: 50 MCQs were generated by ChatGPT with reference to two standard undergraduate medical textbooks (Harrison’s and Bailey & Love’s). Another 50 MCQs were drafted by two university professoriate staff using the same medical textbooks. All 100 MCQs were individually numbered, randomized and sent to five independent international assessors for quality assessment using a standardized assessment score covering five domains: appropriateness of the question, clarity and specificity, relevance, discriminative power of alternatives, and suitability for medical graduate examination. RESULTS: The total time required for ChatGPT to create the 50 questions was 20 minutes 25 seconds, while it took the two human examiners a total of 211 minutes 33 seconds to draft their 50 questions. When the mean scores of the A.I.-constructed questions were compared with those of the human-drafted questions, the A.I. was inferior to humans only in the relevance domain (A.I.: 7.56 +/- 0.94 vs human: 7.88 +/- 0.52; p = 0.04). There was no significant difference in question quality between questions drafted by A.I. and by humans in the total assessment score or in the other domains. Questions generated by A.I. yielded a wider range of scores, while those created by humans were consistent and within a narrower range. CONCLUSION: ChatGPT has the potential to generate MCQs of comparable quality for medical graduate examinations within a significantly shorter time. Public Library of Science 2023-08-29 /pmc/articles/PMC10464959/ /pubmed/37643186 http://dx.doi.org/10.1371/journal.pone.0290691 Text en © 2023 Cheung et al https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle | Research Article; Cheung, Billy Ho Hung; Lau, Gary Kui Kai; Wong, Gordon Tin Chun; Lee, Elaine Yuen Phin; Kulkarni, Dhananjay; Seow, Choon Sheong; Wong, Ruby; Co, Michael Tiong-Hong; ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom)
title | ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom) |
title_full | ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom) |
title_fullStr | ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom) |
title_full_unstemmed | ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom) |
title_short | ChatGPT versus human in generating medical graduate exam multiple choice questions—A multinational prospective study (Hong Kong S.A.R., Singapore, Ireland, and the United Kingdom) |
title_sort | chatgpt versus human in generating medical graduate exam multiple choice questions—a multinational prospective study (hong kong s.a.r., singapore, ireland, and the united kingdom) |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10464959/ https://www.ncbi.nlm.nih.gov/pubmed/37643186 http://dx.doi.org/10.1371/journal.pone.0290691 |
work_keys_str_mv | AT cheungbillyhohung chatgptversushumaningeneratingmedicalgraduateexammultiplechoicequestionsamultinationalprospectivestudyhongkongsarsingaporeirelandandtheunitedkingdom AT laugarykuikai chatgptversushumaningeneratingmedicalgraduateexammultiplechoicequestionsamultinationalprospectivestudyhongkongsarsingaporeirelandandtheunitedkingdom AT wonggordontinchun chatgptversushumaningeneratingmedicalgraduateexammultiplechoicequestionsamultinationalprospectivestudyhongkongsarsingaporeirelandandtheunitedkingdom AT leeelaineyuenphin chatgptversushumaningeneratingmedicalgraduateexammultiplechoicequestionsamultinationalprospectivestudyhongkongsarsingaporeirelandandtheunitedkingdom AT kulkarnidhananjay chatgptversushumaningeneratingmedicalgraduateexammultiplechoicequestionsamultinationalprospectivestudyhongkongsarsingaporeirelandandtheunitedkingdom AT seowchoonsheong chatgptversushumaningeneratingmedicalgraduateexammultiplechoicequestionsamultinationalprospectivestudyhongkongsarsingaporeirelandandtheunitedkingdom AT wongruby chatgptversushumaningeneratingmedicalgraduateexammultiplechoicequestionsamultinationalprospectivestudyhongkongsarsingaporeirelandandtheunitedkingdom AT comichaeltionghong chatgptversushumaningeneratingmedicalgraduateexammultiplechoicequestionsamultinationalprospectivestudyhongkongsarsingaporeirelandandtheunitedkingdom |