Online assessment in two consequent semesters during COVID-19 pandemic: K-means clustering using data mining approach
Main Authors: , , , ,
Format: Online Article Text
Language: English
Published: Wolters Kluwer - Medknow, 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9683448/ https://www.ncbi.nlm.nih.gov/pubmed/36439001 http://dx.doi.org/10.4103/jehp.jehp_1466_21
Summary: BACKGROUND: Education and assessment changed during the COVID-19 pandemic: online courses replaced face-to-face classes to maintain social distancing. The quality of the online assessments conducted during the pandemic is an important subject to be addressed. In this study, the quality of online assessments held in two consecutive semesters was investigated. MATERIALS AND METHODS: One thousand two hundred and sixty-nine multiple-choice online examinations held in the first (n = 535) and second (n = 734) semesters at Birjand University of Medical Sciences during 2020–2021 were examined. The mean, standard deviation, number of questions, skewness, kurtosis, difficulty, and discrimination index of the tests were calculated. Data mining was applied using the k-means clustering approach to classify the tests. RESULTS: The mean percentage of correct answers across all tests was 69.97 ± 19.16, and the mean number of questions was 34.48 ± 18.75. Between the two semesters, there was no significant difference in the difficulty of the examinations (P = 0.84). However, there were significant differences in the discrimination index, skewness, and kurtosis of the tests (P < 0.001). Moreover, according to the results of the clustering analysis, in the first semester 43% of the tests were very hard, 16% hard, and 7% moderate. In the second semester, 43% were hard, 16% moderate, and 41% easy. CONCLUSION: To evaluate test quality, calculating difficulty and discrimination indices is not sufficient; many factors can affect the quality of tests. Furthermore, the experience of the first semester changed the characteristics of the second-semester examinations. To enhance the quality of online tests, establishing appropriate rules for holding the examinations and using questions with higher taxonomy are recommended.
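The abstract's method — classifying tests by k-means clustering over per-test statistics such as difficulty and discrimination index — can be sketched as follows. This is a minimal illustration of plain Lloyd's algorithm, not the authors' actual pipeline: the feature values, the choice of k = 3, and the fixed initial centroids below are all assumptions made for the example (the paper clustered 1,269 tests on more features).

```python
import numpy as np

def kmeans(X, k, init, iters=100):
    """Plain Lloyd's algorithm: assign each point to its nearest
    centroid, then recompute centroids, until assignments stabilize."""
    centroids = X[init].astype(float)  # deterministic seed centroids
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # Euclidean distance of every test to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return labels, centroids

# Hypothetical per-test features: [difficulty index, discrimination index]
# (values invented for illustration only)
X = np.array([
    [0.20, 0.10], [0.25, 0.12], [0.22, 0.08],   # very hard tests
    [0.50, 0.30], [0.55, 0.28], [0.48, 0.33],   # moderate tests
    [0.85, 0.15], [0.90, 0.12], [0.88, 0.18],   # easy tests
])
labels, centroids = kmeans(X, k=3, init=[0, 3, 6])
```

With these well-separated synthetic points, each block of three tests lands in its own cluster; in practice one would standardize the features first, since difficulty and question count live on very different scales.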