Overfactoring in rating scale data: A comparison between factor analysis and item response theory
Educational and psychological measurement is typically based on dichotomous variables or rating scales comprising a few ordered categories. When the mean of the observed responses approaches the upper or the lower bound of the scale, the distribution of the data becomes skewed and, if a categorical factor model holds in the population, the Pearson correlation between variables is attenuated.
Main Authors: | Revuelta, Javier; Ximénez, Carmen; Minaya, Noelia |
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A., 2022 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9750161/ https://www.ncbi.nlm.nih.gov/pubmed/36533017 http://dx.doi.org/10.3389/fpsyg.2022.982137 |
_version_ | 1784850192742219776 |
author | Revuelta, Javier; Ximénez, Carmen; Minaya, Noelia |
author_facet | Revuelta, Javier; Ximénez, Carmen; Minaya, Noelia |
author_sort | Revuelta, Javier |
collection | PubMed |
description | Educational and psychological measurement is typically based on dichotomous variables or rating scales comprising a few ordered categories. When the mean of the observed responses approaches the upper or the lower bound of the scale, the distribution of the data becomes skewed and, if a categorical factor model holds in the population, the Pearson correlation between variables is attenuated. The consequence of this correlation attenuation is that the traditional linear factor model renders an excessive number of factors. This article presents the results of a simulation study investigating the problem of overfactoring and some solutions. We compare five widely known approaches: (1) the maximum-likelihood factor analysis (FA) model for normal data, (2) the categorical factor analysis (FAC) model based on polychoric correlations and maximum likelihood (ML) estimation, (3) the FAC model estimated using a weighted least squares algorithm, (4) the Satorra–Bentler mean-corrected chi-square statistic to handle the lack of normality, and (5) Samejima's graded response model (GRM) from item response theory (IRT). Likelihood-ratio chi-square, parallel analysis (PA), and categorical parallel analysis (CPA) are used as goodness-of-fit criteria to estimate the number of factors in the simulation study. Our results indicate that maximum-likelihood estimation led to overfactoring in the presence of skewed variables for both the linear and the categorical factor models. The Satorra–Bentler statistic and the GRM constitute the most reliable alternatives for estimating the number of factors. |
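The two mechanisms the abstract names — Pearson correlations attenuated by skewed categorization, and parallel analysis as a criterion for the number of factors — can be illustrated with a minimal NumPy sketch. This is illustrative only, not code from the article; the one-factor structure, loading of 0.7, thresholds, and sample size are assumptions chosen to reproduce the described skew.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, loading = 1000, 6, 0.7

# One common factor underlies six continuous variables.
factor = rng.standard_normal((n, 1))
continuous = loading * factor + np.sqrt(1 - loading**2) * rng.standard_normal((n, p))

# Collapse each variable into 3 ordered categories with high thresholds,
# so most responses pile up in the lowest category (skewed rating-scale items).
items = np.digitize(continuous, [1.5, 2.2])

def mean_offdiag_corr(x):
    """Average off-diagonal Pearson correlation across the variables."""
    r = np.corrcoef(x, rowvar=False)
    return r[np.triu_indices_from(r, k=1)].mean()

# Categorizing skewed data attenuates the Pearson correlations.
r_cont, r_cat = mean_offdiag_corr(continuous), mean_offdiag_corr(items)

def parallel_analysis(data, n_sims=100):
    """Horn's parallel analysis: retain factors whose observed eigenvalue
    exceeds the mean eigenvalue obtained from uncorrelated random data."""
    n, p = data.shape
    obs = np.linalg.eigvalsh(np.corrcoef(data, rowvar=False))[::-1]
    rand = np.mean(
        [np.linalg.eigvalsh(np.corrcoef(rng.standard_normal((n, p)),
                                        rowvar=False))[::-1]
         for _ in range(n_sims)],
        axis=0,
    )
    return int(np.sum(obs > rand))

n_factors = parallel_analysis(items)
print(f"r(continuous)={r_cont:.2f}  r(categorized)={r_cat:.2f}  PA factors={n_factors}")
```

Note that this sketch runs PA on plain Pearson correlations; the article's categorical variants (polychoric correlations, CPA, the GRM) require specialized estimation routines not shown here.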
format | Online Article Text |
id | pubmed-9750161 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-97501612022-12-15 Overfactoring in rating scale data: A comparison between factor analysis and item response theory Revuelta, Javier Ximénez, Carmen Minaya, Noelia Front Psychol Psychology Frontiers Media S.A. 2022-11-30 /pmc/articles/PMC9750161/ /pubmed/36533017 http://dx.doi.org/10.3389/fpsyg.2022.982137 Text en Copyright © 2022 Revuelta, Ximénez and Minaya.
https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Revuelta, Javier Ximénez, Carmen Minaya, Noelia Overfactoring in rating scale data: A comparison between factor analysis and item response theory |
title | Overfactoring in rating scale data: A comparison between factor analysis and item response theory |
title_full | Overfactoring in rating scale data: A comparison between factor analysis and item response theory |
title_fullStr | Overfactoring in rating scale data: A comparison between factor analysis and item response theory |
title_full_unstemmed | Overfactoring in rating scale data: A comparison between factor analysis and item response theory |
title_short | Overfactoring in rating scale data: A comparison between factor analysis and item response theory |
title_sort | overfactoring in rating scale data: a comparison between factor analysis and item response theory |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9750161/ https://www.ncbi.nlm.nih.gov/pubmed/36533017 http://dx.doi.org/10.3389/fpsyg.2022.982137 |
work_keys_str_mv | AT revueltajavier overfactoringinratingscaledataacomparisonbetweenfactoranalysisanditemresponsetheory AT ximenezcarmen overfactoringinratingscaledataacomparisonbetweenfactoranalysisanditemresponsetheory AT minayanoelia overfactoringinratingscaledataacomparisonbetweenfactoranalysisanditemresponsetheory |