
Presenting comparative study PRO results to clinicians and researchers: beyond the eye of the beholder

Bibliographic Details
Main Authors: Brundage, Michael; Blackford, Amanda; Tolbert, Elliott; Smith, Katherine; Bantug, Elissa; Snyder, Claire
Format: Online Article Text
Language: English
Published: Springer International Publishing, 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5770492/
https://www.ncbi.nlm.nih.gov/pubmed/29098606
http://dx.doi.org/10.1007/s11136-017-1710-6
Description
Summary: PURPOSE: Patient-reported outcome (PRO) results from clinical trials can inform clinical care, but PRO interpretation is challenging. We evaluated the interpretation accuracy and perceived clarity of various strategies for displaying clinical trial PRO findings.
METHODS: We conducted an e-survey of oncology clinicians and PRO researchers (supplemented by one-on-one clinician interviews) that randomized respondents to view one of three line-graph formats (average scores over time for two treatments on four domains): (1) higher scores consistently indicating "better" patient status; (2) higher scores indicating "more" of what was being measured (better for function, worse for symptoms); or (3) normed scores. Two formats displayed proportions changed (pie/bar charts). Multivariate modeling was used to analyze interpretation accuracy and clarity ratings.
RESULTS: Two hundred and thirty-three clinicians and 248 researchers responded; ten clinicians were interviewed. Line graphs with "better" directionality were more likely to be interpreted accurately than "normed" line graphs (OR 1.55; 95% CI 1.01–2.38; p = 0.04). No significant differences were found between "better" and "more" formats. "Better" formatted graphs were also more likely to be rated "very clear" versus "normed" formatted graphs (OR 1.91; 95% CI 1.44–2.54; p < 0.001). For proportions changed, respondents were less likely to make an interpretation error with pie versus bar charts (OR 0.35; 95% CI 0.2–0.6; p < 0.001); clarity ratings did not differ between formats. Qualitative findings informed the interpretation of the survey findings.
CONCLUSIONS: Graphic formats for presenting PRO data differ in how accurately they are interpreted and how clear they are perceived to be. These findings will inform the development of best practices for optimally reporting PRO findings.
ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s11136-017-1710-6) contains supplementary material, which is available to authorized users.