Assessing Health Students' Attitudes and Usage of ChatGPT in Jordan: Validation Study
BACKGROUND: ChatGPT is a conversational large language model that has the potential to revolutionize knowledge acquisition. However, the impact of this technology on the quality of education remains unknown, particularly given the risks and concerns surrounding ChatGPT use. It is therefore necessary to assess the usability and acceptability of this promising tool.
Main authors: | Sallam, Malik; Salim, Nesreen A; Barakat, Muna; Al-Mahzoum, Kholoud; Al-Tammemi, Ala'a B; Malaeb, Diana; Hallit, Rabih; Hallit, Souheil |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | JMIR Publications 2023 |
Subjects: | |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10509747/ https://www.ncbi.nlm.nih.gov/pubmed/37578934 http://dx.doi.org/10.2196/48254 |
author | Sallam, Malik Salim, Nesreen A Barakat, Muna Al-Mahzoum, Kholoud Al-Tammemi, Ala'a B Malaeb, Diana Hallit, Rabih Hallit, Souheil |
author_sort | Sallam, Malik |
collection | PubMed |
description | BACKGROUND: ChatGPT is a conversational large language model that has the potential to revolutionize knowledge acquisition. However, the impact of this technology on the quality of education remains unknown, particularly given the risks and concerns surrounding ChatGPT use. It is therefore necessary to assess the usability and acceptability of this promising tool. As an innovative technology, the intention to use ChatGPT can be studied in the context of the technology acceptance model (TAM). OBJECTIVE: This study aimed to develop and validate a TAM-based survey instrument called TAME-ChatGPT (Technology Acceptance Model Edited to Assess ChatGPT Adoption) that could be employed to examine the successful integration and use of ChatGPT in health care education. METHODS: The survey tool was created based on the TAM framework. It comprised 13 items for participants who had heard of ChatGPT but did not use it and 23 items for participants who had used ChatGPT. Using a convenience sampling approach, the survey link was circulated electronically among university students between February and March 2023. Exploratory factor analysis (EFA) was used to assess the construct validity of the survey instrument. RESULTS: The final sample comprised 458 respondents, the majority of whom were undergraduate students (n=442, 96.5%). Only 109 (23.8%) respondents had heard of ChatGPT prior to participation, and only 55 (11.3%) self-reported ChatGPT use before the study. EFA of the attitude and usage scales showed significant Bartlett tests of sphericity (P<.001) and adequate Kaiser-Meyer-Olkin measures (0.823 for the attitude scale and 0.702 for the usage scale), confirming the factorability of the correlation matrices. The EFA showed that 3 constructs explained a cumulative 69.3% of the variance in the attitude scale; these subscales represented perceived risks, attitude to technology/social influence, and anxiety. For the ChatGPT usage scale, EFA showed that 4 constructs explained a cumulative 72% of the variance and comprised perceived usefulness, perceived risks, perceived ease of use, and behavior/cognitive factors. All the ChatGPT attitude and usage subscales showed good reliability, with Cronbach α values >.78. CONCLUSIONS: The TAME-ChatGPT demonstrated good reliability, validity, and usefulness in assessing health care students’ attitudes toward ChatGPT. The findings highlighted the importance of considering risk perceptions, usefulness, ease of use, attitudes toward technology, and behavioral factors when adopting ChatGPT as a tool in health care education. This information can aid stakeholders in creating strategies to support the optimal and ethical use of ChatGPT and to identify potential challenges hindering its successful implementation. Future research is recommended to guide the effective adoption of ChatGPT in health care education. |
format | Online Article Text |
id | pubmed-10509747 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | JMIR Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-10509747 2023-09-21 Assessing Health Students' Attitudes and Usage of ChatGPT in Jordan: Validation Study Sallam, Malik Salim, Nesreen A Barakat, Muna Al-Mahzoum, Kholoud Al-Tammemi, Ala'a B Malaeb, Diana Hallit, Rabih Hallit, Souheil JMIR Med Educ Original Paper JMIR Publications 2023-09-05 /pmc/articles/PMC10509747/ /pubmed/37578934 http://dx.doi.org/10.2196/48254 Text en ©Malik Sallam, Nesreen A Salim, Muna Barakat, Kholoud Al-Mahzoum, Ala'a B Al-Tammemi, Diana Malaeb, Rabih Hallit, Souheil Hallit. Originally published in JMIR Medical Education (https://mededu.jmir.org), 05.09.2023. 
https://creativecommons.org/licenses/by/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Education, is properly cited. The complete bibliographic information, a link to the original publication on https://mededu.jmir.org/, as well as this copyright and license information must be included. |
title | Assessing Health Students' Attitudes and Usage of ChatGPT in Jordan: Validation Study |
topic | Original Paper |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10509747/ https://www.ncbi.nlm.nih.gov/pubmed/37578934 http://dx.doi.org/10.2196/48254 |
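As a rough illustration of the construct-validity workflow summarized in the abstract above (Bartlett's test of sphericity, the Kaiser-Meyer-Olkin measure, exploratory factor analysis, and Cronbach's α), the following is a minimal Python sketch using the factor_analyzer package. The item names and the random Likert-style responses are placeholders, not the actual TAME-ChatGPT items or survey data, and this is not the authors' analysis code.

```python
# Minimal sketch of the validation steps named in the abstract: factorability
# checks (Bartlett's test of sphericity, KMO), exploratory factor analysis,
# and internal consistency (Cronbach's alpha). The 13 "item_*" columns and the
# random responses are placeholders, not the TAME-ChatGPT instrument.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity,
    calculate_kmo,
)

rng = np.random.default_rng(0)
items = pd.DataFrame(
    rng.integers(1, 6, size=(458, 13)),  # Likert-type responses, 1-5
    columns=[f"item_{i}" for i in range(1, 14)],
)

# Factorability of the correlation matrix
chi2, p_value = calculate_bartlett_sphericity(items)
_, kmo_total = calculate_kmo(items)
print(f"Bartlett chi2={chi2:.1f}, P={p_value:.3f}; KMO={kmo_total:.3f}")

# EFA with 3 factors (the number retained for the attitude scale in the paper)
efa = FactorAnalyzer(n_factors=3, rotation="varimax")
efa.fit(items)
loadings = pd.DataFrame(efa.loadings_, index=items.columns)
_, _, cumulative = efa.get_factor_variance()
print(f"Cumulative variance explained: {cumulative[-1]:.1%}")

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha for a set of items (columns of df)."""
    k = df.shape[1]
    item_variances = df.var(axis=0, ddof=1).sum()
    total_variance = df.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

print(f"Cronbach's alpha (all items): {cronbach_alpha(items):.2f}")
```

With real response data, the rotated loadings would then be inspected to assign items to subscales such as the perceived-risk, attitude-to-technology/social-influence, and anxiety constructs reported in the results.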