
A comparison of the polytomous Rasch analysis output of RUMM2030 and R (ltm/eRm/TAM/lordif)

BACKGROUND: Patient-reported outcome measures developed using Classical Test Theory commonly comprise ordinal-level items on a Likert response scale, which are problematic as they do not permit the results to be compared between patients. Rasch analysis provides a solution to overcome this by evalu...

Full description

Bibliographic Details
Main Authors: Robinson, Michael, Johnson, Andrew M., Walton, David M., MacDermid, Joy C.
Format: Online Article Text
Language: English
Published: BioMed Central 2019
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381688/
https://www.ncbi.nlm.nih.gov/pubmed/30786868
http://dx.doi.org/10.1186/s12874-019-0680-5
_version_ 1783396550881312768
author Robinson, Michael
Johnson, Andrew M.
Walton, David M.
MacDermid, Joy C.
author_facet Robinson, Michael
Johnson, Andrew M.
Walton, David M.
MacDermid, Joy C.
author_sort Robinson, Michael
collection PubMed
description BACKGROUND: Patient-reported outcome measures developed using Classical Test Theory commonly comprise ordinal-level items on a Likert response scale, which are problematic as they do not permit the results to be compared between patients. Rasch analysis provides a solution to overcome this by evaluating the measurement characteristics of the rating scales using probability estimates. This is typically achieved using commercial software dedicated to Rasch analysis; however, it is possible to conduct this analysis using non-specific open-source software such as R. METHODS: Rasch analysis was conducted using the most commonly used commercial software package, RUMM2030, and R, using four open-source packages, with a common data set (6-month post-injury PRWE Questionnaire responses) to evaluate the statistical results for consistency. The analysis plan followed recommendations used in a similar study, supported by the software packages' instructions, in order to obtain category thresholds, item and person fit statistics, and measures of reliability, and to evaluate the data for construct validity, differential item functioning, local dependency and unidimensionality of the items. RESULTS: There was substantial agreement between RUMM2030 and R with regard to most of the results; however, there were some small discrepancies between the output of the two programs. CONCLUSIONS: While the differences in output between RUMM2030 and R can easily be explained by comparing the underlying statistical approaches taken by each program, there is disagreement on critical statistical decisions made by each program. This disagreement, however, should not be an issue, as Rasch analysis requires users to apply their own subjective analysis. While researchers might expect that Rasch analysis performed on a large sample would be stable, two authors who completed Rasch analysis of the PRWE found somewhat dissimilar findings. So, while some variations in results may be due to samples, this paper adds that some variation in findings may be software dependent.
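The abstract refers to evaluating rating scales "using probability estimates": both RUMM2030 and the R packages named in the title fit the polytomous Rasch (partial credit) model, under which the probability of each response category follows from a person location θ and ordered item thresholds δ. As a minimal illustrative sketch of that model (the function name and the numeric values below are hypothetical, not taken from the article):

```python
import math

def pcm_probabilities(theta, thresholds):
    """Category response probabilities under the partial credit model.

    theta      : person location in logits
    thresholds : item step difficulties delta_1..delta_m in logits

    Returns a list of probabilities for responding in category 0..m.
    """
    # Cumulative logits: psi_0 = 0, psi_k = psi_{k-1} + (theta - delta_k).
    psi = [0.0]
    for delta in thresholds:
        psi.append(psi[-1] + theta - delta)
    denom = sum(math.exp(p) for p in psi)
    return [math.exp(p) / denom for p in psi]

# A person located at theta = 0 on a 4-category item with
# symmetric thresholds gets a symmetric category distribution:
probs = pcm_probabilities(0.0, [-1.0, 0.0, 1.0])
```

Raising θ shifts probability mass toward the higher categories, which is the ordering property Rasch analysis checks when it evaluates category thresholds.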
format Online
Article
Text
id pubmed-6381688
institution National Center for Biotechnology Information
language English
publishDate 2019
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-63816882019-03-01 A comparison of the polytomous Rasch analysis output of RUMM2030 and R (ltm/eRm/TAM/lordif) Robinson, Michael Johnson, Andrew M. Walton, David M. MacDermid, Joy C. BMC Med Res Methodol Technical Advance BACKGROUND: Patient-reported outcome measures developed using Classical Test Theory commonly comprise ordinal-level items on a Likert response scale, which are problematic as they do not permit the results to be compared between patients. Rasch analysis provides a solution to overcome this by evaluating the measurement characteristics of the rating scales using probability estimates. This is typically achieved using commercial software dedicated to Rasch analysis; however, it is possible to conduct this analysis using non-specific open-source software such as R. METHODS: Rasch analysis was conducted using the most commonly used commercial software package, RUMM2030, and R, using four open-source packages, with a common data set (6-month post-injury PRWE Questionnaire responses) to evaluate the statistical results for consistency. The analysis plan followed recommendations used in a similar study, supported by the software packages' instructions, in order to obtain category thresholds, item and person fit statistics, and measures of reliability, and to evaluate the data for construct validity, differential item functioning, local dependency and unidimensionality of the items. RESULTS: There was substantial agreement between RUMM2030 and R with regard to most of the results; however, there were some small discrepancies between the output of the two programs. CONCLUSIONS: While the differences in output between RUMM2030 and R can easily be explained by comparing the underlying statistical approaches taken by each program, there is disagreement on critical statistical decisions made by each program. This disagreement, however, should not be an issue, as Rasch analysis requires users to apply their own subjective analysis.
While researchers might expect that Rasch analysis performed on a large sample would be stable, two authors who completed Rasch analysis of the PRWE found somewhat dissimilar findings. So, while some variations in results may be due to samples, this paper adds that some variation in findings may be software dependent. BioMed Central 2019-02-20 /pmc/articles/PMC6381688/ /pubmed/30786868 http://dx.doi.org/10.1186/s12874-019-0680-5 Text en © The Author(s). 2019 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
spellingShingle Technical Advance
Robinson, Michael
Johnson, Andrew M.
Walton, David M.
MacDermid, Joy C.
A comparison of the polytomous Rasch analysis output of RUMM2030 and R (ltm/eRm/TAM/lordif)
title A comparison of the polytomous Rasch analysis output of RUMM2030 and R (ltm/eRm/TAM/lordif)
title_full A comparison of the polytomous Rasch analysis output of RUMM2030 and R (ltm/eRm/TAM/lordif)
title_fullStr A comparison of the polytomous Rasch analysis output of RUMM2030 and R (ltm/eRm/TAM/lordif)
title_full_unstemmed A comparison of the polytomous Rasch analysis output of RUMM2030 and R (ltm/eRm/TAM/lordif)
title_short A comparison of the polytomous Rasch analysis output of RUMM2030 and R (ltm/eRm/TAM/lordif)
title_sort comparison of the polytomous rasch analysis output of rumm2030 and r (ltm/erm/tam/lordif)
topic Technical Advance
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6381688/
https://www.ncbi.nlm.nih.gov/pubmed/30786868
http://dx.doi.org/10.1186/s12874-019-0680-5
work_keys_str_mv AT robinsonmichael acomparisonofthepolytomousraschanalysisoutputofrumm2030andrltmermtamlordif
AT johnsonandrewm acomparisonofthepolytomousraschanalysisoutputofrumm2030andrltmermtamlordif
AT waltondavidm acomparisonofthepolytomousraschanalysisoutputofrumm2030andrltmermtamlordif
AT macdermidjoyc acomparisonofthepolytomousraschanalysisoutputofrumm2030andrltmermtamlordif
AT robinsonmichael comparisonofthepolytomousraschanalysisoutputofrumm2030andrltmermtamlordif
AT johnsonandrewm comparisonofthepolytomousraschanalysisoutputofrumm2030andrltmermtamlordif
AT waltondavidm comparisonofthepolytomousraschanalysisoutputofrumm2030andrltmermtamlordif
AT macdermidjoyc comparisonofthepolytomousraschanalysisoutputofrumm2030andrltmermtamlordif