Comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments

Bibliographic Details
Main Authors: Park, Jung Yeon, Dedja, Klest, Pliakos, Konstantinos, Kim, Jinho, Joo, Sean, Cornillie, Frederik, Vens, Celine, Van den Noortgate, Wim
Format: Online Article Text
Language: English
Published: Springer US 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9275388/
https://www.ncbi.nlm.nih.gov/pubmed/35819719
http://dx.doi.org/10.3758/s13428-022-01910-8
_version_ 1784745475480485888
author Park, Jung Yeon
Dedja, Klest
Pliakos, Konstantinos
Kim, Jinho
Joo, Sean
Cornillie, Frederik
Vens, Celine
Van den Noortgate, Wim
author_facet Park, Jung Yeon
Dedja, Klest
Pliakos, Konstantinos
Kim, Jinho
Joo, Sean
Cornillie, Frederik
Vens, Celine
Van den Noortgate, Wim
author_sort Park, Jung Yeon
collection PubMed
description To obtain more accurate and robust feedback information from the students’ assessment outcomes and to communicate it to students and optimize teaching and learning strategies, educational researchers and practitioners must critically reflect on whether the existing methods of data analytics are capable of retrieving the information provided in the database. This study compared and contrasted the prediction performance of an item response theory method, particularly the use of an explanatory item response model (EIRM), and six supervised machine learning (ML) methods for predicting students’ item responses in educational assessments, considering student- and item-related background information. Each of seven prediction methods was evaluated through cross-validation approaches under three prediction scenarios: (a) unrealized responses of new students to existing items, (b) unrealized responses of existing students to new items, and (c) missing responses of existing students to existing items. The results of a simulation study and two real-life assessment data examples showed that employing student- and item-related background information in addition to the item response data substantially increases the prediction accuracy for new students or items. We also found that the EIRM is as competitive as the best performing ML methods in predicting the student performance outcomes for the educational assessment datasets.
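The description above outlines three prediction scenarios; the following sketch illustrates scenario (a), predicting unrealized responses of new students to existing items. It is a hypothetical toy simulation, not the authors' code: all names are invented, item difficulties are estimated by a simple logit-of-proportion shortcut rather than a fitted EIRM, and the covariate-to-ability slope is assumed known (1.0) instead of estimated.

```python
# Toy "new students to existing items" scenario: simulate Rasch-style
# responses where ability depends on a student covariate, estimate item
# difficulties on training students, then predict held-out students'
# responses from their covariate alone.
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

n_students, n_items = 200, 20
beta = [random.gauss(0, 1) for _ in range(n_items)]          # true item difficulties
covariate = [random.gauss(0, 1) for _ in range(n_students)]  # e.g., a prior grade
theta = [1.0 * c + random.gauss(0, 0.5) for c in covariate]  # ability driven by covariate

# Binary item responses under a Rasch model: P(correct) = sigmoid(theta - beta).
responses = [[1 if random.random() < sigmoid(theta[p] - beta[i]) else 0
              for i in range(n_items)] for p in range(n_students)]

train = range(0, 150)    # "existing" students
test = range(150, 200)   # "new" students (held out)

def est_difficulty(i):
    # Crude difficulty estimate: negative logit of the training proportion correct.
    p = sum(responses[s][i] for s in train) / len(train)
    p = min(max(p, 0.01), 0.99)  # clip to avoid infinite logits
    return -math.log(p / (1 - p))

beta_hat = [est_difficulty(i) for i in range(n_items)]

# Predict each new student's ability from the covariate (slope assumed
# known here for brevity), then threshold predicted probabilities at 0.5.
correct = total = 0
for s in test:
    theta_hat = 1.0 * covariate[s]
    for i in range(n_items):
        pred = 1 if sigmoid(theta_hat - beta_hat[i]) >= 0.5 else 0
        correct += (pred == responses[s][i])
        total += 1

accuracy = correct / total
print(round(accuracy, 3))
```

Because the covariate carries most of the information about ability, the prediction accuracy for entirely new students lands well above chance, which mirrors the article's finding that student-related background information substantially improves prediction for new students.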
format Online
Article
Text
id pubmed-9275388
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Springer US
record_format MEDLINE/PubMed
spelling pubmed-9275388 2022-07-14 Behav Res Methods Article Springer US 2022-07-11 2023 /pmc/articles/PMC9275388/ /pubmed/35819719 http://dx.doi.org/10.3758/s13428-022-01910-8 Text en © The Psychonomic Society, Inc.
2022 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic.
spellingShingle Article
Park, Jung Yeon
Dedja, Klest
Pliakos, Konstantinos
Kim, Jinho
Joo, Sean
Cornillie, Frederik
Vens, Celine
Van den Noortgate, Wim
Comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments
title Comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments
title_full Comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments
title_fullStr Comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments
title_full_unstemmed Comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments
title_short Comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments
title_sort comparing the prediction performance of item response theory and machine learning methods on item responses for educational assessments
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9275388/
https://www.ncbi.nlm.nih.gov/pubmed/35819719
http://dx.doi.org/10.3758/s13428-022-01910-8
work_keys_str_mv AT parkjungyeon comparingthepredictionperformanceofitemresponsetheoryandmachinelearningmethodsonitemresponsesforeducationalassessments
AT dedjaklest comparingthepredictionperformanceofitemresponsetheoryandmachinelearningmethodsonitemresponsesforeducationalassessments
AT pliakoskonstantinos comparingthepredictionperformanceofitemresponsetheoryandmachinelearningmethodsonitemresponsesforeducationalassessments
AT kimjinho comparingthepredictionperformanceofitemresponsetheoryandmachinelearningmethodsonitemresponsesforeducationalassessments
AT joosean comparingthepredictionperformanceofitemresponsetheoryandmachinelearningmethodsonitemresponsesforeducationalassessments
AT cornilliefrederik comparingthepredictionperformanceofitemresponsetheoryandmachinelearningmethodsonitemresponsesforeducationalassessments
AT vensceline comparingthepredictionperformanceofitemresponsetheoryandmachinelearningmethodsonitemresponsesforeducationalassessments
AT vandennoortgatewim comparingthepredictionperformanceofitemresponsetheoryandmachinelearningmethodsonitemresponsesforeducationalassessments