Assessing the Utilization of Large Language Models in Medical Education: Insights From Undergraduate Medical Students
Background Artificial intelligence (AI) has the potential to be integrated into medical education. Among AI-based technology, large language models (LLMs) such as ChatGPT, Google Bard, Microsoft Bing, and Perplexity have emerged as powerful tools with capabilities in natural language processing. With this background, this study investigates the knowledge, attitude, and practice of undergraduate medical students regarding the utilization of LLMs in medical education in a medical college in Jharkhand, India.
Main Authors: | Biri, Sairavi Kiran; Kumar, Subir; Panigrahi, Muralidhar; Mondal, Shaikat; Behera, Joshil Kumar; Mondal, Himel |
Format: | Online Article Text |
Language: | English |
Published: | Cureus, 2023 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10662537/ https://www.ncbi.nlm.nih.gov/pubmed/38021810 http://dx.doi.org/10.7759/cureus.47468 |
_version_ | 1785138218490920960 |
author | Biri, Sairavi Kiran; Kumar, Subir; Panigrahi, Muralidhar; Mondal, Shaikat; Behera, Joshil Kumar; Mondal, Himel |
author_facet | Biri, Sairavi Kiran; Kumar, Subir; Panigrahi, Muralidhar; Mondal, Shaikat; Behera, Joshil Kumar; Mondal, Himel |
author_sort | Biri, Sairavi Kiran |
collection | PubMed |
description | Background Artificial intelligence (AI) has the potential to be integrated into medical education. Among AI-based technology, large language models (LLMs) such as ChatGPT, Google Bard, Microsoft Bing, and Perplexity have emerged as powerful tools with capabilities in natural language processing. With this background, this study investigates the knowledge, attitude, and practice of undergraduate medical students regarding the utilization of LLMs in medical education in a medical college in Jharkhand, India. Methods A cross-sectional online survey was sent to 370 undergraduate medical students on Google Forms. The questionnaire comprised the following three domains: knowledge, attitude, and practice, each containing six questions. Cronbach’s alphas for knowledge, attitude, and practice domains were 0.703, 0.707, and 0.809, respectively. Intraclass correlation coefficients for knowledge, attitude, and practice domains were 0.82, 0.87, and 0.78, respectively. The average scores in the three domains were compared using ANOVA. Results A total of 172 students participated in the study (response rate: 46.49%). The majority of the students (45.93%) rarely used the LLMs for their teaching-learning purposes (chi-square (3) = 41.44, p < 0.0001). The overall score of knowledge (3.21±0.55), attitude (3.47±0.54), and practice (3.26±0.61) were statistically significantly different (ANOVA F (2, 513) = 10.2, p < 0.0001), with the highest score in attitude and lowest in knowledge. Conclusion While there is a generally positive attitude toward the incorporation of LLMs in medical education, concerns about overreliance and potential inaccuracies are evident. LLMs offer the potential to enhance learning resources and provide accessible education, but their integration requires further planning. Further studies are required to explore the long-term impact of LLMs in diverse educational contexts. |
format | Online Article Text |
id | pubmed-10662537 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Cureus |
record_format | MEDLINE/PubMed |
spelling | pubmed-106625372023-10-22 Assessing the Utilization of Large Language Models in Medical Education: Insights From Undergraduate Medical Students Biri, Sairavi Kiran Kumar, Subir Panigrahi, Muralidhar Mondal, Shaikat Behera, Joshil Kumar Mondal, Himel Cureus Psychology Background Artificial intelligence (AI) has the potential to be integrated into medical education. Among AI-based technology, large language models (LLMs) such as ChatGPT, Google Bard, Microsoft Bing, and Perplexity have emerged as powerful tools with capabilities in natural language processing. With this background, this study investigates the knowledge, attitude, and practice of undergraduate medical students regarding the utilization of LLMs in medical education in a medical college in Jharkhand, India. Methods A cross-sectional online survey was sent to 370 undergraduate medical students on Google Forms. The questionnaire comprised the following three domains: knowledge, attitude, and practice, each containing six questions. Cronbach’s alphas for knowledge, attitude, and practice domains were 0.703, 0.707, and 0.809, respectively. Intraclass correlation coefficients for knowledge, attitude, and practice domains were 0.82, 0.87, and 0.78, respectively. The average scores in the three domains were compared using ANOVA. Results A total of 172 students participated in the study (response rate: 46.49%). The majority of the students (45.93%) rarely used the LLMs for their teaching-learning purposes (chi-square (3) = 41.44, p < 0.0001). The overall score of knowledge (3.21±0.55), attitude (3.47±0.54), and practice (3.26±0.61) were statistically significantly different (ANOVA F (2, 513) = 10.2, p < 0.0001), with the highest score in attitude and lowest in knowledge. Conclusion While there is a generally positive attitude toward the incorporation of LLMs in medical education, concerns about overreliance and potential inaccuracies are evident. 
LLMs offer the potential to enhance learning resources and provide accessible education, but their integration requires further planning. Further studies are required to explore the long-term impact of LLMs in diverse educational contexts. Cureus 2023-10-22 /pmc/articles/PMC10662537/ /pubmed/38021810 http://dx.doi.org/10.7759/cureus.47468 Text en Copyright © 2023, Biri et al. https://creativecommons.org/licenses/by/3.0/This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Psychology; Biri, Sairavi Kiran; Kumar, Subir; Panigrahi, Muralidhar; Mondal, Shaikat; Behera, Joshil Kumar; Mondal, Himel; Assessing the Utilization of Large Language Models in Medical Education: Insights From Undergraduate Medical Students |
title | Assessing the Utilization of Large Language Models in Medical Education: Insights From Undergraduate Medical Students |
title_full | Assessing the Utilization of Large Language Models in Medical Education: Insights From Undergraduate Medical Students |
title_fullStr | Assessing the Utilization of Large Language Models in Medical Education: Insights From Undergraduate Medical Students |
title_full_unstemmed | Assessing the Utilization of Large Language Models in Medical Education: Insights From Undergraduate Medical Students |
title_short | Assessing the Utilization of Large Language Models in Medical Education: Insights From Undergraduate Medical Students |
title_sort | assessing the utilization of large language models in medical education: insights from undergraduate medical students |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10662537/ https://www.ncbi.nlm.nih.gov/pubmed/38021810 http://dx.doi.org/10.7759/cureus.47468 |
work_keys_str_mv | AT birisairavikiran assessingtheutilizationoflargelanguagemodelsinmedicaleducationinsightsfromundergraduatemedicalstudents AT kumarsubir assessingtheutilizationoflargelanguagemodelsinmedicaleducationinsightsfromundergraduatemedicalstudents AT panigrahimuralidhar assessingtheutilizationoflargelanguagemodelsinmedicaleducationinsightsfromundergraduatemedicalstudents AT mondalshaikat assessingtheutilizationoflargelanguagemodelsinmedicaleducationinsightsfromundergraduatemedicalstudents AT beherajoshilkumar assessingtheutilizationoflargelanguagemodelsinmedicaleducationinsightsfromundergraduatemedicalstudents AT mondalhimel assessingtheutilizationoflargelanguagemodelsinmedicaleducationinsightsfromundergraduatemedicalstudents |