Empirical comparison of item response theory models with rater's parameters
In various assessment contexts including entrance examinations, educational assessments, and personnel appraisal, performance assessment by raters has attracted much attention to measure higher order abilities of examinees. However, a persistent difficulty is that the ability measurement accuracy depends strongly on rater and task characteristics. To resolve this shortcoming, various item response theory (IRT) models that incorporate rater and task characteristic parameters have been proposed. However, because various models with different rater and task parameters exist, it is difficult to understand each model's features. Therefore, this study presents empirical comparisons of IRT models. Specifically, after reviewing and summarizing features of existing models, we compare their performance through simulation and actual data experiments.
Main Authors: | Uto, Masaki; Ueno, Maomi
---|---|
Format: | Online Article Text
Language: | English
Published: | Elsevier, 2018
Subjects: | Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5948474/ https://www.ncbi.nlm.nih.gov/pubmed/29761162 http://dx.doi.org/10.1016/j.heliyon.2018.e00622
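The "IRT models with rater parameters" named in the abstract can be illustrated by the best-known member of this family, the many-facet Rasch model (Linacre, 1989). In a common rating-scale formulation (the notation here is illustrative and may differ from the paper's), the probability that rater $r$ assigns score $k \in \{1, \dots, K\}$ to examinee $j$ on task $t$ is

$$
P(x_{tjr} = k) = \frac{\exp \sum_{m=1}^{k} \left( \theta_j - \beta_t - \rho_r - d_m \right)}{\sum_{l=1}^{K} \exp \sum_{m=1}^{l} \left( \theta_j - \beta_t - \rho_r - d_m \right)},
$$

where $\theta_j$ is the examinee's ability, $\beta_t$ is the task's difficulty, $\rho_r$ is the rater's severity, and $d_m$ is the step parameter for category $m$ (with $d_1 \equiv 0$ for identifiability). The extended models compared in the study add further rater and task parameters, such as rater consistency and task discrimination, to this basic structure.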
Field | Value
---|---|
_version_ | 1783322557444784128
author | Uto, Masaki; Ueno, Maomi
author_facet | Uto, Masaki; Ueno, Maomi
author_sort | Uto, Masaki |
collection | PubMed |
description | In various assessment contexts including entrance examinations, educational assessments, and personnel appraisal, performance assessment by raters has attracted much attention to measure higher order abilities of examinees. However, a persistent difficulty is that the ability measurement accuracy depends strongly on rater and task characteristics. To resolve this shortcoming, various item response theory (IRT) models that incorporate rater and task characteristic parameters have been proposed. However, because various models with different rater and task parameters exist, it is difficult to understand each model's features. Therefore, this study presents empirical comparisons of IRT models. Specifically, after reviewing and summarizing features of existing models, we compare their performance through simulation and actual data experiments. |
format | Online Article Text |
id | pubmed-5948474 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2018 |
publisher | Elsevier |
record_format | MEDLINE/PubMed |
spelling | pubmed-5948474 2018-05-14 Empirical comparison of item response theory models with rater's parameters Uto, Masaki Ueno, Maomi Heliyon Article In various assessment contexts including entrance examinations, educational assessments, and personnel appraisal, performance assessment by raters has attracted much attention to measure higher order abilities of examinees. However, a persistent difficulty is that the ability measurement accuracy depends strongly on rater and task characteristics. To resolve this shortcoming, various item response theory (IRT) models that incorporate rater and task characteristic parameters have been proposed. However, because various models with different rater and task parameters exist, it is difficult to understand each model's features. Therefore, this study presents empirical comparisons of IRT models. Specifically, after reviewing and summarizing features of existing models, we compare their performance through simulation and actual data experiments. Elsevier 2018-05-08 /pmc/articles/PMC5948474/ /pubmed/29761162 http://dx.doi.org/10.1016/j.heliyon.2018.e00622 Text en © 2018 The Authors http://creativecommons.org/licenses/by/4.0/ This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Uto, Masaki Ueno, Maomi Empirical comparison of item response theory models with rater's parameters |
title | Empirical comparison of item response theory models with rater's parameters |
title_full | Empirical comparison of item response theory models with rater's parameters |
title_fullStr | Empirical comparison of item response theory models with rater's parameters |
title_full_unstemmed | Empirical comparison of item response theory models with rater's parameters |
title_short | Empirical comparison of item response theory models with rater's parameters |
title_sort | empirical comparison of item response theory models with rater's parameters |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5948474/ https://www.ncbi.nlm.nih.gov/pubmed/29761162 http://dx.doi.org/10.1016/j.heliyon.2018.e00622 |
work_keys_str_mv | AT utomasaki empiricalcomparisonofitemresponsetheorymodelswithratersparameters AT uenomaomi empiricalcomparisonofitemresponsetheorymodelswithratersparameters |