Model Selection for Cognitive Diagnostic Analysis of the Reading Comprehension Test
Reading subskills are generally regarded as continuous variables, whereas most models used in previous reading diagnoses assume that the latent variables are dichotomous. Considering that the multidimensional item response theory (MIRT) model has continuous latent variables and can be...
Main Authors: | Liu, Hui, Bian, Yufang |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Frontiers Media S.A. 2021 |
Subjects: | Psychology |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8422035/ https://www.ncbi.nlm.nih.gov/pubmed/34504454 http://dx.doi.org/10.3389/fpsyg.2021.644764 |
_version_ | 1783749204034715648 |
---|---|
author | Liu, Hui Bian, Yufang |
author_facet | Liu, Hui Bian, Yufang |
author_sort | Liu, Hui |
collection | PubMed |
description | Reading subskills are generally regarded as continuous variables, whereas most models used in previous reading diagnoses assume that the latent variables are dichotomous. Considering that the multidimensional item response theory (MIRT) model has continuous latent variables and can be used for diagnostic purposes, this study compared the performance of MIRT with two representative models traditionally used in reading diagnoses [the reduced reparametrized unified model (R-RUM) and the generalized deterministic inputs, noisy and gate model (G-DINA)]. The comparison was carried out with both empirical and simulated data. First, model-data fit indices were used to evaluate whether MIRT was more appropriate than R-RUM and G-DINA for the real data. Then, with the simulated data, the relations between the true scores and the scores estimated by MIRT, R-RUM, and G-DINA were compared to examine whether the true abilities were well represented; correct classification rates under different research conditions were calculated for the three models to examine person parameter recovery; and the frequency distributions of subskill mastery probabilities were compared to show how far the estimated subskill mastery probabilities deviated from the true values in the overall value distribution. MIRT obtained better model-data fit, yielded estimated scores that represented the true abilities more reasonably, had an advantage in correct classification rates, and showed less deviation from the true values in the frequency distributions of subskill mastery probabilities, which means it can produce more accurate diagnostic information about the reading abilities of test-takers. Considering that more accurate diagnostic information has greater guiding value for remedial teaching and learning, and that score interpretation in reading diagnoses is more reasonable under the MIRT model, this study recommended MIRT as a new methodology for future reading diagnostic analyses. |
format | Online Article Text |
id | pubmed-8422035 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Frontiers Media S.A. |
record_format | MEDLINE/PubMed |
spelling | pubmed-8422035 2021-09-08 Model Selection for Cognitive Diagnostic Analysis of the Reading Comprehension Test Liu, Hui Bian, Yufang Front Psychol Psychology Reading subskills are generally regarded as continuous variables, whereas most models used in previous reading diagnoses assume that the latent variables are dichotomous. Considering that the multidimensional item response theory (MIRT) model has continuous latent variables and can be used for diagnostic purposes, this study compared the performance of MIRT with two representative models traditionally used in reading diagnoses [the reduced reparametrized unified model (R-RUM) and the generalized deterministic inputs, noisy and gate model (G-DINA)]. The comparison was carried out with both empirical and simulated data. First, model-data fit indices were used to evaluate whether MIRT was more appropriate than R-RUM and G-DINA for the real data. Then, with the simulated data, the relations between the true scores and the scores estimated by MIRT, R-RUM, and G-DINA were compared to examine whether the true abilities were well represented; correct classification rates under different research conditions were calculated for the three models to examine person parameter recovery; and the frequency distributions of subskill mastery probabilities were compared to show how far the estimated subskill mastery probabilities deviated from the true values in the overall value distribution. MIRT obtained better model-data fit, yielded estimated scores that represented the true abilities more reasonably, had an advantage in correct classification rates, and showed less deviation from the true values in the frequency distributions of subskill mastery probabilities, which means it can produce more accurate diagnostic information about the reading abilities of test-takers. Considering that more accurate diagnostic information has greater guiding value for remedial teaching and learning, and that score interpretation in reading diagnoses is more reasonable under the MIRT model, this study recommended MIRT as a new methodology for future reading diagnostic analyses. Frontiers Media S.A. 2021-08-13 /pmc/articles/PMC8422035/ /pubmed/34504454 http://dx.doi.org/10.3389/fpsyg.2021.644764 Text en Copyright © 2021 Liu and Bian. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. |
spellingShingle | Psychology Liu, Hui Bian, Yufang Model Selection for Cognitive Diagnostic Analysis of the Reading Comprehension Test |
title | Model Selection for Cognitive Diagnostic Analysis of the Reading Comprehension Test
title_full | Model Selection for Cognitive Diagnostic Analysis of the Reading Comprehension Test
title_fullStr | Model Selection for Cognitive Diagnostic Analysis of the Reading Comprehension Test
title_full_unstemmed | Model Selection for Cognitive Diagnostic Analysis of the Reading Comprehension Test
title_short | Model Selection for Cognitive Diagnostic Analysis of the Reading Comprehension Test
title_sort | model selection for cognitive diagnostic analysis of the reading comprehension test
topic | Psychology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8422035/ https://www.ncbi.nlm.nih.gov/pubmed/34504454 http://dx.doi.org/10.3389/fpsyg.2021.644764 |
work_keys_str_mv | AT liuhui modelselectionforcognitivediagnosticanalysisofthereadingcomprehensiontest AT bianyufang modelselectionforcognitivediagnosticanalysisofthereadingcomprehensiontest
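
The abstract above evaluates the competing models partly through correct classification rates computed on simulated data. As a minimal sketch of what that computation typically looks like in such simulation studies (not the authors' code; the variable names, the 0.5 mastery cut-off, and all simulated numbers are illustrative assumptions), the following Python snippet dichotomizes estimated subskill mastery probabilities and compares them with true attribute profiles, both per subskill and per whole profile.

```python
# Hedged sketch: illustrative computation of correct classification rates (CCR)
# for a reading-diagnosis simulation. All data below are randomly generated,
# not taken from the article.

import numpy as np

rng = np.random.default_rng(0)

n_persons, n_subskills = 1000, 4

# Hypothetical true attribute profiles (1 = mastery, 0 = non-mastery).
true_profiles = rng.integers(0, 2, size=(n_persons, n_subskills))

# Hypothetical estimated mastery probabilities from a fitted model
# (e.g., posterior mastery probabilities from a diagnostic model).
noise = rng.normal(0.0, 0.2, size=(n_persons, n_subskills))
est_probs = np.clip(true_profiles * 0.8 + 0.1 + noise, 0.0, 1.0)

# Dichotomize at 0.5, a commonly used cut-off for mastery classification.
est_profiles = (est_probs >= 0.5).astype(int)

# Attribute-wise CCR: proportion of correct classifications per subskill.
attr_ccr = (est_profiles == true_profiles).mean(axis=0)

# Pattern-wise CCR: proportion of persons whose whole profile is recovered.
pattern_ccr = (est_profiles == true_profiles).all(axis=1).mean()

print("Attribute-wise CCR:", np.round(attr_ccr, 3))
print("Pattern-wise CCR:  ", round(float(pattern_ccr), 3))
```

The pattern-wise rate is necessarily no higher than any attribute-wise rate, since it requires every subskill of a profile to be classified correctly at once; reporting both is a common way to summarize the person parameter recovery mentioned in the abstract.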