Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013
OBJECTIVE: To conduct a systematic review and meta-analysis of the equivalence between electronic and paper administration of patient reported outcome measures (PROMs) in studies conducted subsequent to those included in Gwaltney et al’s 2008 review. METHODS: A systematic literature review of PROM e...
Main Authors: | Muehlhausen, Willie; Doll, Helen; Quadri, Nuz; Fordham, Bethany; O’Donohoe, Paul; Dogar, Nijda; Wild, Diane J. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central 2015 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4597451/ https://www.ncbi.nlm.nih.gov/pubmed/26446159 http://dx.doi.org/10.1186/s12955-015-0362-x |
_version_ | 1782393927821688832 |
---|---|
author | Muehlhausen, Willie Doll, Helen Quadri, Nuz Fordham, Bethany O’Donohoe, Paul Dogar, Nijda Wild, Diane J. |
author_facet | Muehlhausen, Willie Doll, Helen Quadri, Nuz Fordham, Bethany O’Donohoe, Paul Dogar, Nijda Wild, Diane J. |
author_sort | Muehlhausen, Willie |
collection | PubMed |
description | OBJECTIVE: To conduct a systematic review and meta-analysis of the equivalence between electronic and paper administration of patient reported outcome measures (PROMs) in studies conducted subsequent to those included in Gwaltney et al’s 2008 review. METHODS: A systematic literature review of PROM equivalence studies conducted between 2007 and 2013 identified 1,997 records from which 72 studies met pre-defined inclusion/exclusion criteria. PRO data from each study were extracted, in terms of both correlation coefficients (ICCs, Spearman and Pearson correlations, Kappa statistics) and mean differences (standardized by the standard deviation, SD, and the response scale range). Pooled estimates of correlation and mean difference were estimated. The modifying effects of mode of administration, year of publication, study design, time interval between administrations, mean age of participants and publication type were examined. RESULTS: Four hundred thirty-five individual correlations were extracted, these correlations being highly variable (I² = 93.8) but showing generally good equivalence, with ICCs ranging from 0.65 to 0.99 and the pooled correlation coefficient being 0.88 (95 % CI 0.87 to 0.88). Standardised mean differences for 307 studies were small and less variable (I² = 33.5) with a pooled standardised mean difference of 0.037 (95 % CI 0.031 to 0.042). Average administration mode/platform-specific correlations from 56 studies (61 estimates) had a pooled estimate of 0.88 (95 % CI 0.86 to 0.90) and were still highly variable (I² = 92.1). Similarly, average platform-specific ICCs from 39 studies (42 estimates) had a pooled estimate of 0.90 (95 % CI 0.88 to 0.92) with an I² of 91.5. After excluding 20 studies with outlying correlation coefficients (≥3 SD from the mean), the I² was 54.4, with the equivalence still high, the overall pooled correlation coefficient being 0.88 (95 % CI 0.87 to 0.88). Agreement was found to be greater in more recent studies (p < 0.001), in randomized studies compared with non-randomised studies (p < 0.001), in studies with a shorter interval (<1 day) (p < 0.001), and in respondents of mean age 28 to 55 compared with those either younger or older (p < 0.001). In terms of mode/platform, paper vs Interactive Voice Response System (IVRS) comparisons had the lowest pooled agreement and paper vs tablet/touch screen the highest (p < 0.001). CONCLUSION: The present study supports the conclusion of Gwaltney’s previous meta-analysis showing that PROMs administered on paper are quantitatively comparable with measures administered on an electronic device. It also confirms the ISPOR Taskforce’s conclusion that quantitative equivalence studies are not required for migrations with minor changes only. This finding should be reassuring to investigators, regulators and sponsors using questionnaires on electronic devices after migration using best practices. Although there are data indicating that migrations with moderate changes produce equivalent instrument versions, and hence do not require quantitative equivalence studies, additional work is necessary to establish this. Furthermore, there is a need to standardize migration and reporting practices (i.e. include copies of tested instrument versions and screenshots) so that clear recommendations regarding equivalence testing can be made in the future. This raises questions about the necessity of conducting equivalence testing moving forward. |
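The pooled correlation coefficients and I² heterogeneity values quoted in the abstract are standard meta-analytic quantities. As an illustration only (not a reconstruction of the authors' actual analysis, which also covered standardized mean differences and several moderators), the minimal Python sketch below shows one common way such figures are obtained: Fisher-z transformation of the per-study correlations, fixed-effect inverse-variance pooling, and I² derived from Cochran's Q. The function name and the example inputs are invented for illustration.

```python
import math

def pool_correlations(correlations, sample_sizes):
    """Pool correlations via Fisher's z (fixed-effect, inverse-variance
    weights) and report the I-squared heterogeneity statistic.

    Returns (pooled_r, ci_low, ci_high, i_squared_percent).
    """
    # Fisher z transform; the sampling variance of z is approximately 1 / (n - 3).
    zs = [0.5 * math.log((1 + r) / (1 - r)) for r in correlations]
    weights = [n - 3 for n in sample_sizes]  # inverse-variance weights

    # Inverse-variance weighted mean of the z values and its standard error.
    z_bar = sum(w * z for w, z in zip(weights, zs)) / sum(weights)
    se = 1.0 / math.sqrt(sum(weights))

    # Cochran's Q and I² = (Q - df) / Q (floored at 0), expressed as a percentage.
    q = sum(w * (z - z_bar) ** 2 for w, z in zip(weights, zs))
    df = len(zs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

    # Back-transform the pooled z and its 95 % confidence limits to the r scale.
    return (math.tanh(z_bar),
            math.tanh(z_bar - 1.96 * se),
            math.tanh(z_bar + 1.96 * se),
            i2)

# Purely illustrative inputs (not data from the review):
r, lo, hi, i2 = pool_correlations([0.91, 0.85, 0.88, 0.79], [60, 45, 120, 30])
print(f"pooled r = {r:.2f} (95% CI {lo:.2f} to {hi:.2f}), I2 = {i2:.1f}%")
```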
format | Online Article Text |
id | pubmed-4597451 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2015 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-45974512015-10-08 Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013 Muehlhausen, Willie Doll, Helen Quadri, Nuz Fordham, Bethany O’Donohoe, Paul Dogar, Nijda Wild, Diane J. Health Qual Life Outcomes Research OBJECTIVE: To conduct a systematic review and meta-analysis of the equivalence between electronic and paper administration of patient reported outcome measures (PROMs) in studies conducted subsequent to those included in Gwaltney et al’s 2008 review. METHODS: A systematic literature review of PROM equivalence studies conducted between 2007 and 2013 identified 1,997 records from which 72 studies met pre-defined inclusion/exclusion criteria. PRO data from each study were extracted, in terms of both correlation coefficients (ICCs, Spearman and Pearson correlations, Kappa statistics) and mean differences (standardized by the standard deviation, SD, and the response scale range). Pooled estimates of correlation and mean difference were estimated. The modifying effects of mode of administration, year of publication, study design, time interval between administrations, mean age of participants and publication type were examined. RESULTS: Four hundred thirty-five individual correlations were extracted, these correlations being highly variable (I² = 93.8) but showing generally good equivalence, with ICCs ranging from 0.65 to 0.99 and the pooled correlation coefficient being 0.88 (95 % CI 0.87 to 0.88). Standardised mean differences for 307 studies were small and less variable (I² = 33.5) with a pooled standardised mean difference of 0.037 (95 % CI 0.031 to 0.042). Average administration mode/platform-specific correlations from 56 studies (61 estimates) had a pooled estimate of 0.88 (95 % CI 0.86 to 0.90) and were still highly variable (I² = 92.1). Similarly, average platform-specific ICCs from 39 studies (42 estimates) had a pooled estimate of 0.90 (95 % CI 0.88 to 0.92) with an I² of 91.5. After excluding 20 studies with outlying correlation coefficients (≥3 SD from the mean), the I² was 54.4, with the equivalence still high, the overall pooled correlation coefficient being 0.88 (95 % CI 0.87 to 0.88). Agreement was found to be greater in more recent studies (p < 0.001), in randomized studies compared with non-randomised studies (p < 0.001), in studies with a shorter interval (<1 day) (p < 0.001), and in respondents of mean age 28 to 55 compared with those either younger or older (p < 0.001). In terms of mode/platform, paper vs Interactive Voice Response System (IVRS) comparisons had the lowest pooled agreement and paper vs tablet/touch screen the highest (p < 0.001). CONCLUSION: The present study supports the conclusion of Gwaltney’s previous meta-analysis showing that PROMs administered on paper are quantitatively comparable with measures administered on an electronic device. It also confirms the ISPOR Taskforce’s conclusion that quantitative equivalence studies are not required for migrations with minor changes only. This finding should be reassuring to investigators, regulators and sponsors using questionnaires on electronic devices after migration using best practices. Although there are data indicating that migrations with moderate changes produce equivalent instrument versions, and hence do not require quantitative equivalence studies, additional work is necessary to establish this.
Furthermore, there is a need to standardize migration and reporting practices (i.e. include copies of tested instrument versions and screenshots) so that clear recommendations regarding equivalence testing can be made in the future. This raises questions about the necessity of conducting equivalence testing moving forward. BioMed Central 2015-10-07 /pmc/articles/PMC4597451/ /pubmed/26446159 http://dx.doi.org/10.1186/s12955-015-0362-x Text en © Muehlhausen et al. 2015 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. |
spellingShingle | Research Muehlhausen, Willie Doll, Helen Quadri, Nuz Fordham, Bethany O’Donohoe, Paul Dogar, Nijda Wild, Diane J. Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013 |
title | Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013 |
title_full | Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013 |
title_fullStr | Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013 |
title_full_unstemmed | Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013 |
title_short | Equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013 |
title_sort | equivalence of electronic and paper administration of patient-reported outcome measures: a systematic review and meta-analysis of studies conducted between 2007 and 2013 |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4597451/ https://www.ncbi.nlm.nih.gov/pubmed/26446159 http://dx.doi.org/10.1186/s12955-015-0362-x |
work_keys_str_mv | AT muehlhausenwillie equivalenceofelectronicandpaperadministrationofpatientreportedoutcomemeasuresasystematicreviewandmetaanalysisofstudiesconductedbetween2007and2013 AT dollhelen equivalenceofelectronicandpaperadministrationofpatientreportedoutcomemeasuresasystematicreviewandmetaanalysisofstudiesconductedbetween2007and2013 AT quadrinuz equivalenceofelectronicandpaperadministrationofpatientreportedoutcomemeasuresasystematicreviewandmetaanalysisofstudiesconductedbetween2007and2013 AT fordhambethany equivalenceofelectronicandpaperadministrationofpatientreportedoutcomemeasuresasystematicreviewandmetaanalysisofstudiesconductedbetween2007and2013 AT odonohoepaul equivalenceofelectronicandpaperadministrationofpatientreportedoutcomemeasuresasystematicreviewandmetaanalysisofstudiesconductedbetween2007and2013 AT dogarnijda equivalenceofelectronicandpaperadministrationofpatientreportedoutcomemeasuresasystematicreviewandmetaanalysisofstudiesconductedbetween2007and2013 AT wilddianej equivalenceofelectronicandpaperadministrationofpatientreportedoutcomemeasuresasystematicreviewandmetaanalysisofstudiesconductedbetween2007and2013 |