
Approximate measurement invariance in cross-classified rater-mediated assessments


Bibliographic Details

Main Authors: Kelcey, Ben; McGinn, Dan; Hill, Heather
Format: Online Article Text
Language: English
Published: Frontiers Media S.A., 2014
Subjects: Psychology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4274900/
https://www.ncbi.nlm.nih.gov/pubmed/25566145
http://dx.doi.org/10.3389/fpsyg.2014.01469
Collection: PubMed
Description: An important assumption underlying meaningful comparisons of scores in rater-mediated assessments is that measurement is commensurate across raters. When raters differentially apply the standards established by an instrument, scores from different raters are on fundamentally different scales and no longer preserve a common meaning and basis for comparison. In this study, we developed a method to accommodate measurement noninvariance across raters when measurements are cross-classified within two distinct hierarchical units. We conceptualized random item effects cross-classified graded response models and used random discrimination and threshold effects to test, calibrate, and account for measurement noninvariance among raters. By leveraging empirical estimates of rater-specific deviations in the discrimination and threshold parameters, the proposed method allows us to identify noninvariant items and empirically estimate and directly adjust for this noninvariance within a cross-classified framework. Within the context of teaching evaluations, the results of a case study suggested substantial noninvariance across raters and that establishing an approximately invariant scale through random item effects improves model fit and predictive validity.
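As a rough sketch of the kind of model the abstract describes (not the authors' exact specification; the subscripts and variance structure here are illustrative assumptions), a graded response model in Samejima's standard parameterization can carry rater-specific random deviations in the discrimination and threshold parameters. With ratees indexed by t, raters by r, and items by i:

\[
P\bigl(Y_{itr} \ge k \mid \theta_t\bigr)
  = \frac{1}{1 + \exp\!\bigl[-a_{ir}\,(\theta_t - b_{ikr})\bigr]},
\qquad k = 1, \dots, K_i - 1,
\]
\[
a_{ir} = a_i + u_{ir}, \qquad
b_{ikr} = b_{ik} + v_{ir}, \qquad
u_{ir} \sim N\bigl(0, \sigma_{u_i}^2\bigr), \quad
v_{ir} \sim N\bigl(0, \sigma_{v_i}^2\bigr).
\]

Under exact invariance the rater-level variances are zero; estimated nonzero variances flag an item as noninvariant, and retaining the random effects in scoring adjusts for that noninvariance rather than assuming it away.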
ID: pubmed-4274900
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Front Psychol
Published online: 2014-12-23
License: Copyright © 2014 Kelcey, McGinn and Hill. http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Topic: Psychology