Modeling Wording Effects Does Not Help in Recovering Uncontaminated Person Scores: A Systematic Evaluation With Random Intercept Item Factor Analysis
The item wording (or keying) effect consists of logically inconsistent answers to positively and negatively worded items that tap into similar (but polarly opposite) content. Previous research has shown that this effect can be successfully modeled through the random intercept item factor analysis (RIIFA) model…
Main Authors: | Nieto, María Dolores; Garrido, Luis Eduardo; Martínez-Molina, Agustín; Abad, Francisco José
Format: | Online Article Text
Language: | English
Published: | Frontiers Media S.A., 2021
Subjects: | Psychology
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8206482/ https://www.ncbi.nlm.nih.gov/pubmed/34149573 http://dx.doi.org/10.3389/fpsyg.2021.685326
_version_ | 1783708635188166656 |
author | Nieto, María Dolores; Garrido, Luis Eduardo; Martínez-Molina, Agustín; Abad, Francisco José
author_facet | Nieto, María Dolores; Garrido, Luis Eduardo; Martínez-Molina, Agustín; Abad, Francisco José
author_sort | Nieto, María Dolores |
collection | PubMed |
description | The item wording (or keying) effect consists of logically inconsistent answers to positively and negatively worded items that tap into similar (but polarly opposite) content. Previous research has shown that this effect can be successfully modeled through the random intercept item factor analysis (RIIFA) model, as evidenced by the improvements in the model fit in comparison to models that only contain substantive factors. However, little is known regarding the capability of this model to recover the uncontaminated person scores. To address this issue, the study analyzes the performance of the RIIFA approach across three types of wording effects proposed in the literature: carelessness, item verification difficulty, and acquiescence. In the context of unidimensional substantive models, four independent variables were manipulated using Monte Carlo methods: type of wording effect, amount of wording effect, sample size, and test length. The results corroborated previous findings by showing that the RIIFA models were consistently able to account for the variance in the data, attaining an excellent fit regardless of the amount of bias. Conversely, the models without the RIIFA factor produced an increasingly poorer fit with greater amounts of wording effects. Surprisingly, however, the RIIFA models were not able to better estimate the uncontaminated person scores for any type of wording effect in comparison to the substantive unidimensional models. The simulation results were then corroborated with an empirical dataset, examining the relationships of learning strategies and personality with grade point average in undergraduate studies. The apparently paradoxical findings regarding the model fit and the recovery of the person scores are explained considering the properties of the factor models examined. (A schematic sketch of the RIIFA model specification follows the record fields below.) |
format | Online Article Text |
id | pubmed-8206482 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8206482 2021-06-17 Modeling Wording Effects Does Not Help in Recovering Uncontaminated Person Scores: A Systematic Evaluation With Random Intercept Item Factor Analysis Nieto, María Dolores Garrido, Luis Eduardo Martínez-Molina, Agustín Abad, Francisco José Front Psychol Psychology The item wording (or keying) effect consists of logically inconsistent answers to positively and negatively worded items that tap into similar (but polarly opposite) content. Previous research has shown that this effect can be successfully modeled through the random intercept item factor analysis (RIIFA) model, as evidenced by the improvements in the model fit in comparison to models that only contain substantive factors. However, little is known regarding the capability of this model to recover the uncontaminated person scores. To address this issue, the study analyzes the performance of the RIIFA approach across three types of wording effects proposed in the literature: carelessness, item verification difficulty, and acquiescence. In the context of unidimensional substantive models, four independent variables were manipulated using Monte Carlo methods: type of wording effect, amount of wording effect, sample size, and test length. The results corroborated previous findings by showing that the RIIFA models were consistently able to account for the variance in the data, attaining an excellent fit regardless of the amount of bias. Conversely, the models without the RIIFA factor produced an increasingly poorer fit with greater amounts of wording effects. Surprisingly, however, the RIIFA models were not able to better estimate the uncontaminated person scores for any type of wording effect in comparison to the substantive unidimensional models. The simulation results were then corroborated with an empirical dataset, examining the relationships of learning strategies and personality with grade point average in undergraduate studies. The apparently paradoxical findings regarding the model fit and the recovery of the person scores are explained considering the properties of the factor models examined. Frontiers Media S.A. 2021-06-02 /pmc/articles/PMC8206482/ /pubmed/34149573 http://dx.doi.org/10.3389/fpsyg.2021.685326 Text en Copyright © 2021 Nieto, Garrido, Martínez-Molina and Abad. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Nieto, María Dolores Garrido, Luis Eduardo Martínez-Molina, Agustín Abad, Francisco José Modeling Wording Effects Does Not Help in Recovering Uncontaminated Person Scores: A Systematic Evaluation With Random Intercept Item Factor Analysis |
title | Modeling Wording Effects Does Not Help in Recovering Uncontaminated Person Scores: A Systematic Evaluation With Random Intercept Item Factor Analysis |
title_full | Modeling Wording Effects Does Not Help in Recovering Uncontaminated Person Scores: A Systematic Evaluation With Random Intercept Item Factor Analysis |
title_fullStr | Modeling Wording Effects Does Not Help in Recovering Uncontaminated Person Scores: A Systematic Evaluation With Random Intercept Item Factor Analysis |
title_full_unstemmed | Modeling Wording Effects Does Not Help in Recovering Uncontaminated Person Scores: A Systematic Evaluation With Random Intercept Item Factor Analysis |
title_short | Modeling Wording Effects Does Not Help in Recovering Uncontaminated Person Scores: A Systematic Evaluation With Random Intercept Item Factor Analysis |
title_sort | modeling wording effects does not help in recovering uncontaminated person scores: a systematic evaluation with random intercept item factor analysis |
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8206482/ https://www.ncbi.nlm.nih.gov/pubmed/34149573 http://dx.doi.org/10.3389/fpsyg.2021.685326 |
work_keys_str_mv | AT nietomariadolores modelingwordingeffectsdoesnothelpinrecoveringuncontaminatedpersonscoresasystematicevaluationwithrandominterceptitemfactoranalysis AT garridoluiseduardo modelingwordingeffectsdoesnothelpinrecoveringuncontaminatedpersonscoresasystematicevaluationwithrandominterceptitemfactoranalysis AT martinezmolinaagustin modelingwordingeffectsdoesnothelpinrecoveringuncontaminatedpersonscoresasystematicevaluationwithrandominterceptitemfactoranalysis AT abadfranciscojose modelingwordingeffectsdoesnothelpinrecoveringuncontaminatedpersonscoresasystematicevaluationwithrandominterceptitemfactoranalysis |
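For readers unfamiliar with the RIIFA approach referenced in the description above, the following is a minimal LaTeX sketch of the random intercept item factor model as it is commonly specified in the psychometric literature (the formulation usually attributed to Maydeu-Olivares and Coffman, 2006). It is an illustrative assumption drawn from that general literature, not an equation reproduced from the article itself.

```latex
% Minimal sketch of the random intercept item factor analysis (RIIFA) model.
% Assumed from the general literature, not taken from the article itself.
% Response of person i to item j:
\[
  x_{ij} \;=\; \mu_j \;+\; \lambda_j\,\theta_i \;+\; \gamma_i \;+\; \varepsilon_{ij},
  \qquad
  \gamma_i \sim \mathcal{N}\!\left(0,\ \sigma_{\gamma}^{2}\right),
  \qquad
  \operatorname{Cov}(\theta_i,\gamma_i) = 0,
\]
% theta_i   : substantive (content) trait of person i
% gamma_i   : person-specific random intercept capturing the wording effect,
%             with its loading fixed to 1 for every item regardless of keying
% epsilon_ij: item-specific residual
```

Under this sketch, the "models without the RIIFA factor" mentioned in the abstract would simply omit the \(\gamma_i\) term, which is consistent with the reported pattern of increasingly poorer fit as the amount of wording effect grows.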