
Modeling Rating Order Effects Under Item Response Theory Models for Rater-Mediated Assessments

Bibliographic Details
Main Author: Huang, Hung-Yu
Format: Online Article Text
Language: English
Published: SAGE Publications 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10240569/
https://www.ncbi.nlm.nih.gov/pubmed/37283589
http://dx.doi.org/10.1177/01466216231174566
_version_ 1785053791348850688
author Huang, Hung-Yu
author_facet Huang, Hung-Yu
author_sort Huang, Hung-Yu
collection PubMed
description Rater effects are commonly observed in rater-mediated assessments. By using item response theory (IRT) modeling, raters can be treated as independent factors that function as instruments for measuring ratees. Most rater effects are static and can be addressed appropriately within an IRT framework, and a few models have been developed for dynamic rater effects. Operational rating projects often require human raters to continuously and repeatedly score ratees over a certain period, imposing a burden on the cognitive processing abilities and attention spans of raters that stems from judgment fatigue and thus affects the rating quality observed during the rating period. As a result, ratees’ scores may be influenced by the order in which they are graded by raters in a rating sequence, and the rating order effect should be considered in new IRT models. In this study, two types of many-faceted (MF)-IRT models are developed to account for such dynamic rater effects, which assume that rater severity can drift systematically or stochastically. The results obtained from two simulation studies indicate that the parameters of the newly developed models can be estimated satisfactorily using Bayesian estimation and that disregarding the rating order effect produces biased model structure and ratee proficiency parameter estimations. A creativity assessment is outlined to demonstrate the application of the new models and to investigate the consequences of failing to detect the possible rating order effect in a real rater-mediated evaluation.
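The abstract's core idea — rater severity drifting with rating order so that later ratees face a harsher (or more lenient) rater — can be illustrated with a small simulation. The sketch below assumes a dichotomous many-facet Rasch formulation with a linear (systematic) severity drift; the parameter names, values, and the linear drift form are illustrative assumptions, not the paper's exact model specification.

```python
import numpy as np

rng = np.random.default_rng(7)

def p_correct(theta, b, severity):
    """Dichotomous many-facet Rasch probability: logit = theta - b - severity."""
    return 1.0 / (1.0 + np.exp(-(theta - b - severity)))

n_ratees, n_items = 200, 5
theta = rng.normal(0, 1, n_ratees)   # ratee proficiency
b = rng.normal(0, 1, n_items)        # item (criterion) difficulty
severity0, drift = -0.2, 0.01        # baseline rater severity, per-position drift

# Ratees are scored in sequence; severity grows linearly with rating order t,
# mimicking a systematic rating order effect (e.g., judgment fatigue).
X = np.empty((n_ratees, n_items), dtype=int)
for t in range(n_ratees):
    sev_t = severity0 + drift * t    # systematic (linear) severity drift
    X[t] = rng.binomial(1, p_correct(theta[t], b, sev_t))

# Later positions in the rating sequence face a harsher rater, so mean raw
# scores decline with rating order even though proficiency does not.
early = X[:50].mean()
late = X[-50:].mean()
```

A stochastic drift variant would instead let `sev_t` follow a random walk, `sev_t = sev_{t-1} + rng.normal(0, tau)`. Fitting either model — and recovering `theta` unbiased — is what the paper's Bayesian estimation addresses; ignoring the drift conflates rating order with ratee proficiency, which is the bias the simulation studies quantify.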
format Online
Article
Text
id pubmed-10240569
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher SAGE Publications
record_format MEDLINE/PubMed
spelling pubmed-10240569 2023-06-06 Modeling Rating Order Effects Under Item Response Theory Models for Rater-Mediated Assessments Huang, Hung-Yu Appl Psychol Meas Articles Rater effects are commonly observed in rater-mediated assessments. By using item response theory (IRT) modeling, raters can be treated as independent factors that function as instruments for measuring ratees. Most rater effects are static and can be addressed appropriately within an IRT framework, and a few models have been developed for dynamic rater effects. Operational rating projects often require human raters to continuously and repeatedly score ratees over a certain period, imposing a burden on the cognitive processing abilities and attention spans of raters that stems from judgment fatigue and thus affects the rating quality observed during the rating period. As a result, ratees’ scores may be influenced by the order in which they are graded by raters in a rating sequence, and the rating order effect should be considered in new IRT models. In this study, two types of many-faceted (MF)-IRT models are developed to account for such dynamic rater effects, which assume that rater severity can drift systematically or stochastically. The results obtained from two simulation studies indicate that the parameters of the newly developed models can be estimated satisfactorily using Bayesian estimation and that disregarding the rating order effect produces biased model structure and ratee proficiency parameter estimations. A creativity assessment is outlined to demonstrate the application of the new models and to investigate the consequences of failing to detect the possible rating order effect in a real rater-mediated evaluation.
SAGE Publications 2023-05-13 2023-06 /pmc/articles/PMC10240569/ /pubmed/37283589 http://dx.doi.org/10.1177/01466216231174566 Text en © The Author(s) 2023 https://creativecommons.org/licenses/by-nc/4.0/This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).
spellingShingle Articles
Huang, Hung-Yu
Modeling Rating Order Effects Under Item Response Theory Models for Rater-Mediated Assessments
title Modeling Rating Order Effects Under Item Response Theory Models for Rater-Mediated Assessments
title_full Modeling Rating Order Effects Under Item Response Theory Models for Rater-Mediated Assessments
title_fullStr Modeling Rating Order Effects Under Item Response Theory Models for Rater-Mediated Assessments
title_full_unstemmed Modeling Rating Order Effects Under Item Response Theory Models for Rater-Mediated Assessments
title_short Modeling Rating Order Effects Under Item Response Theory Models for Rater-Mediated Assessments
title_sort modeling rating order effects under item response theory models for rater-mediated assessments
topic Articles
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10240569/
https://www.ncbi.nlm.nih.gov/pubmed/37283589
http://dx.doi.org/10.1177/01466216231174566
work_keys_str_mv AT huanghungyu modelingratingordereffectsunderitemresponsetheorymodelsforratermediatedassessments