Reliability of the PEDro scale: comparison between trials published in predatory and non-predatory journals
BACKGROUND: The lack of an effective peer-review process in predatory journals, resulting in greater ambiguity in reporting and language and in incomplete descriptions of methods, might affect the reliability of the PEDro scale. The aim of this investigation was to compare the reliability of the PEDro scale...
Main Authors: | Paci, Matteo; Bianchini, Claudio; Baccini, Marco |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8969341/ https://www.ncbi.nlm.nih.gov/pubmed/35354496 http://dx.doi.org/10.1186/s40945-022-00133-6 |
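The agreement statistics named in the abstract (percent agreement, Cohen's kappa, and the SEM derived from the ICC) can be sketched as follows. This is an illustrative sketch only: the rating data and the total-score SD below are hypothetical, not taken from the study, and this is not the authors' code.

```python
# Illustrative sketch of the agreement statistics used in the study,
# computed on hypothetical ratings (not the authors' data or code).
from math import sqrt

def percent_agreement(a, b):
    """Proportion of items on which two raters give the same score."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(a)
    po = percent_agreement(a, b)                  # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n)  # agreement expected by chance
             for c in set(a) | set(b))
    return (po - pe) / (1 - pe)

def sem(sd, icc):
    """Standard Error of Measurement from the score SD and the ICC."""
    return sd * sqrt(1 - icc)

# Hypothetical yes/no ratings of one PEDro item by two raters
rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
rater2 = [1, 0, 0, 1, 0, 1, 1, 1, 1, 0]
print(round(percent_agreement(rater1, rater2), 2))  # 0.8
print(round(cohens_kappa(rater1, rater2), 2))       # 0.58
print(round(sem(1.5, 0.537), 2))  # 1.02, for a hypothetical SD of 1.5
```

Note that kappa can be much lower than raw percent agreement when one rating category dominates, which is why the study reports both measures per item.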
_version_ | 1784679226730872832 |
---|---|
author | Paci, Matteo; Bianchini, Claudio; Baccini, Marco |
author_facet | Paci, Matteo; Bianchini, Claudio; Baccini, Marco |
author_sort | Paci, Matteo |
collection | PubMed |
description | BACKGROUND: The lack of an effective peer-review process in predatory journals, resulting in greater ambiguity in reporting and language and in incomplete descriptions of methods, might affect the reliability of the PEDro scale. The aim of this investigation was to compare the reliability of the PEDro scale when evaluating the methodological quality of RCTs published in predatory (PJs) and non-predatory (NPJs) journals, so that interventions appropriate for application to practice can be selected with more confidence. METHODS: A selected sample of RCTs was independently rated by two raters randomly drawn from a pool of 11 physical therapists. The reliability of each PEDro scale item was assessed by Cohen's kappa statistic and percent agreement, and the reliability of the total PEDro score by the Intraclass Correlation Coefficient (ICC) and the Standard Error of Measurement (SEM). The Chi-square test was used to compare the rate of agreement between PJs and NPJs. RESULTS: A total of 298 RCTs were assessed (119 published in NPJs). Cronbach's alpha was .704 for trials published in PJs and .845 for those published in NPJs. Kappa values for individual scale items ranged from .14 to .73 for PJs and from .09 to .70 for NPJs. The ICC was .537 (95% CI .425-.634) for PJs and .729 (95% CI .632-.803) for NPJs, with SEMs of 1.055 and 0.957, respectively. Inter-rater reliability in discriminating between studies of moderate-to-high and low quality was higher for NPJs (k = .57) than for PJs (k = .28). CONCLUSIONS: The inter-rater reliability of the PEDro score is lower for RCTs published in PJs than for trials published in NPJs, likely due in part to ambiguous language and incomplete reporting. This might make risk of bias harder to detect when selecting interventions for application to practice or producing secondary literature. |
format | Online Article Text |
id | pubmed-8969341 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-8969341 2022-04-01 Reliability of the PEDro scale: comparison between trials published in predatory and non-predatory journals. Paci, Matteo; Bianchini, Claudio; Baccini, Marco. Arch Physiother, Research Article. BioMed Central 2022-03-31 /pmc/articles/PMC8969341/ /pubmed/35354496 http://dx.doi.org/10.1186/s40945-022-00133-6 Text en © The Author(s) 2022. Open Access: this article is licensed under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Research Article Paci, Matteo Bianchini, Claudio Baccini, Marco Reliability of the PEDro scale: comparison between trials published in predatory and non-predatory journals |
title | Reliability of the PEDro scale: comparison between trials published in predatory and non-predatory journals |
title_full | Reliability of the PEDro scale: comparison between trials published in predatory and non-predatory journals |
title_fullStr | Reliability of the PEDro scale: comparison between trials published in predatory and non-predatory journals |
title_full_unstemmed | Reliability of the PEDro scale: comparison between trials published in predatory and non-predatory journals |
title_short | Reliability of the PEDro scale: comparison between trials published in predatory and non-predatory journals |
title_sort | reliability of the pedro scale: comparison between trials published in predatory and non-predatory journals |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8969341/ https://www.ncbi.nlm.nih.gov/pubmed/35354496 http://dx.doi.org/10.1186/s40945-022-00133-6 |
work_keys_str_mv | AT pacimatteo reliabilityofthepedroscalecomparisonbetweentrialspublishedinpredatoryandnonpredatoryjournals AT bianchiniclaudio reliabilityofthepedroscalecomparisonbetweentrialspublishedinpredatoryandnonpredatoryjournals AT baccinimarco reliabilityofthepedroscalecomparisonbetweentrialspublishedinpredatoryandnonpredatoryjournals |