
Measuring student's proficiency in MOOCs: multiple attempts extensions for the Rasch model


Bibliographic Details
Main Authors: Abbakumov, Dmitry, Desmet, Piet, Van den Noortgate, Wim
Format: Online Article Text
Language: English
Published: Elsevier 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6280598/
https://www.ncbi.nlm.nih.gov/pubmed/30555955
http://dx.doi.org/10.1016/j.heliyon.2018.e01003
_version_ 1783378713986990080
author Abbakumov, Dmitry
Desmet, Piet
Van den Noortgate, Wim
author_facet Abbakumov, Dmitry
Desmet, Piet
Van den Noortgate, Wim
author_sort Abbakumov, Dmitry
collection PubMed
description The popularity of online courses with open access and unlimited student participation, the so-called massive open online courses (MOOCs), has been growing rapidly. Students, professors, and universities have an interest in accurate measures of students' proficiency in MOOCs. However, these measurements face several challenges: (a) assessments are dynamic: items can be added, removed, or replaced by a course author at any time; (b) students may be allowed to make several attempts within one assessment; and (c) assessments may include too few items for accurate individual-level conclusions. Therefore, the common psychometric models and techniques of Classical Test Theory (CTT) and Item Response Theory (IRT) do not serve well to measure proficiency in this setting. In this study we address this gap and propose cross-classification multilevel logistic extensions of a common IRT model, the Rasch model, aimed at improving the assessment of student proficiency by modeling the effect of attempts and by incorporating non-assessment data such as students' interactions with video lectures and practical tasks. We illustrate these extensions on logged data from one MOOC and check their quality using a cross-validation procedure on three MOOCs. We found that (a) how performance changes over attempts depends on the student: for some students performance improves, whereas for others it may deteriorate; (b) similarly, the change over attempts varies across items; (c) students' activity with video lectures and practical tasks is a significant predictor of response correctness, in the sense that higher activity leads to a higher chance of a correct response; and (d) the overall accuracy of predicting students' item responses using the extensions is 6% higher than with the traditional Rasch model.
In sum, our results show that the approach improves assessment procedures in MOOCs and could serve as an additional source for accurate conclusions about students' proficiency.
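The core idea in the abstract, extending the Rasch model so that the attempt number shifts a student's log-odds of answering correctly, can be sketched in a few lines. This is a minimal illustration under assumed notation (theta for proficiency, b for item difficulty, delta for the per-attempt effect), not the paper's exact specification, which also includes cross-classified random effects and activity covariates.

```python
import math

def rasch_attempt_probability(theta, b, delta=0.0, attempt=1):
    """Probability of a correct response under a Rasch-style model with
    a linear attempt effect:

        logit(p) = theta - b + delta * (attempt - 1)

    theta: student proficiency, b: item difficulty, delta: change in
    log-odds per additional attempt (positive = improvement over
    attempts, negative = deterioration). Names are illustrative.
    """
    logit = theta - b + delta * (attempt - 1)
    return 1.0 / (1.0 + math.exp(-logit))

# A student matched to item difficulty has a 0.5 chance on attempt 1;
# with delta > 0 the chance rises on later attempts.
p1 = rasch_attempt_probability(theta=0.0, b=0.0, delta=0.5, attempt=1)
p3 = rasch_attempt_probability(theta=0.0, b=0.0, delta=0.5, attempt=3)
```

In the paper's full models, delta itself varies over students and over items (which is what makes the extension cross-classified and multilevel), so in practice such a model would be fitted with mixed-effects software rather than computed pointwise as above.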
format Online
Article
Text
id pubmed-6280598
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Elsevier
record_format MEDLINE/PubMed
spelling pubmed-62805982018-12-14 Measuring student's proficiency in MOOCs: multiple attempts extensions for the Rasch model Abbakumov, Dmitry Desmet, Piet Van den Noortgate, Wim Heliyon Article Elsevier 2018-12-04 /pmc/articles/PMC6280598/ /pubmed/30555955 http://dx.doi.org/10.1016/j.heliyon.2018.e01003 Text en © 2018 The Authors http://creativecommons.org/licenses/by-nc-nd/4.0/ This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
spellingShingle Article
Abbakumov, Dmitry
Desmet, Piet
Van den Noortgate, Wim
Measuring student's proficiency in MOOCs: multiple attempts extensions for the Rasch model
title Measuring student's proficiency in MOOCs: multiple attempts extensions for the Rasch model
title_full Measuring student's proficiency in MOOCs: multiple attempts extensions for the Rasch model
title_fullStr Measuring student's proficiency in MOOCs: multiple attempts extensions for the Rasch model
title_full_unstemmed Measuring student's proficiency in MOOCs: multiple attempts extensions for the Rasch model
title_short Measuring student's proficiency in MOOCs: multiple attempts extensions for the Rasch model
title_sort measuring student's proficiency in moocs: multiple attempts extensions for the rasch model
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6280598/
https://www.ncbi.nlm.nih.gov/pubmed/30555955
http://dx.doi.org/10.1016/j.heliyon.2018.e01003
work_keys_str_mv AT abbakumovdmitry measuringstudentsproficiencyinmoocsmultipleattemptsextensionsfortheraschmodel
AT desmetpiet measuringstudentsproficiencyinmoocsmultipleattemptsextensionsfortheraschmodel
AT vandennoortgatewim measuringstudentsproficiencyinmoocsmultipleattemptsextensionsfortheraschmodel