Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?

Bibliographic Details

Main Authors: Ghosh, Abhra, Maini Jindal, Nandita, Gupta, Vikram K, Bansal, Ekta, Kaur Bajwa, Navjot, Sett, Abhishek
Format: Online Article Text
Language: English
Published: Cureus 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10657167/
https://www.ncbi.nlm.nih.gov/pubmed/38021639
http://dx.doi.org/10.7759/cureus.47329
author Ghosh, Abhra
Maini Jindal, Nandita
Gupta, Vikram K
Bansal, Ekta
Kaur Bajwa, Navjot
Sett, Abhishek
collection PubMed
description Introduction ChatGPT is a large language model (LLM)-based chatbot that uses natural language processing to create humanlike conversational dialogue. Since its inception, it has had a significant impact on the global landscape, especially in sectors such as finance and banking, e-commerce, education, legal services, human resources (HR), and recruitment. There have been multiple ongoing controversies regarding the seamless integration of ChatGPT into the healthcare system because of concerns about its factual accuracy and its lack of experience, clarity, expertise, and, above all, empathy. Our study seeks to compare ChatGPT’s knowledge and interpretative abilities with those of first-year medical students in India in the subject of medical biochemistry. Materials and methods A total of 79 medical biochemistry questions (40 multiple-choice questions and 39 subjective questions) were set for the Phase 1, block II term examination. ChatGPT was enrolled as the 101st student in the class. The questions were entered into ChatGPT’s interface, and its responses were recorded, along with the response time for each multiple-choice question (MCQ). The answers given by ChatGPT and the 100 students in the class were checked by two subject experts, and marks were awarded according to the quality of the answers. The marks obtained by the AI chatbot were then compared with those obtained by the students. Results ChatGPT scored 140 marks out of 200, outperforming almost all the students and ranking fifth in the class. It scored very well on information-based MCQs (92%) and descriptive logical-reasoning questions (80%), whereas it performed poorly on descriptive clinical scenario-based questions (52%). It took significantly longer to answer logical-reasoning MCQs than simple information-based MCQs (3.10±0.882 sec vs. 2.02±0.477 sec, p<0.005). Conclusions ChatGPT was able to outperform almost all the students in the subject of medical biochemistry. If the ethical issues are dealt with efficiently, these LLMs have huge potential to be used successfully in the teaching and learning of modern medicine.
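The abstract's timing comparison (3.10±0.882 sec vs. 2.02±0.477 sec, p<0.005) can be reproduced from summary statistics with a standard two-sample test. The sketch below computes Welch's t statistic from the reported means and standard deviations; the per-group sample sizes (20 and 20) are an assumption for illustration, since the abstract does not state how the 40 MCQs were split between question types.

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic for two independent samples
    summarized by mean, standard deviation, and size."""
    se = math.sqrt(sd1 ** 2 / n1 + sd2 ** 2 / n2)
    return (mean1 - mean2) / se

# Summary statistics reported in the abstract (seconds);
# the per-group counts of 20 are hypothetical, not from the paper.
t = welch_t(3.10, 0.882, 20, 2.02, 0.477, 20)
print(f"t = {t:.2f}")  # well above ~2.0, consistent with p < 0.005
```

Even under this assumed split, the statistic is large enough that the reported significance is plausible; the exact p-value would additionally depend on the true group sizes.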
format Online
Article
Text
id pubmed-10657167
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Cureus
record_format MEDLINE/PubMed
spelling pubmed-10657167 Cureus 2023-10-19 /pmc/articles/PMC10657167/ /pubmed/38021639 http://dx.doi.org/10.7759/cureus.47329 Text en Copyright © 2023, Ghosh et al. https://creativecommons.org/licenses/by/3.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
title Is ChatGPT’s Knowledge and Interpretative Ability Comparable to First Professional MBBS (Bachelor of Medicine, Bachelor of Surgery) Students of India in Taking a Medical Biochemistry Examination?
topic Other
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10657167/
https://www.ncbi.nlm.nih.gov/pubmed/38021639
http://dx.doi.org/10.7759/cureus.47329