Preliminary testing of the reliability and feasibility of SAGE: a system to measure and score engagement with and use of research in health policies and programs
BACKGROUND: Few measures of research use in health policymaking are available, and the reliability of such measures has yet to be evaluated. A new measure called the Staff Assessment of Engagement with Evidence (SAGE) incorporates an interview that explores policymakers’ research use within discrete...
Main Authors: | Makkar, Steve R.; Williamson, Anna; D’Este, Catherine; Redman, Sally |
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central, 2017 |
Subjects: | Research |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5735943/ https://www.ncbi.nlm.nih.gov/pubmed/29258601 http://dx.doi.org/10.1186/s13012-017-0676-7 |
_version_ | 1783287299365142528 |
author | Makkar, Steve R.; Williamson, Anna; D’Este, Catherine; Redman, Sally
author_facet | Makkar, Steve R.; Williamson, Anna; D’Este, Catherine; Redman, Sally
author_sort | Makkar, Steve R. |
collection | PubMed |
description | BACKGROUND: Few measures of research use in health policymaking are available, and the reliability of such measures has yet to be evaluated. A new measure called the Staff Assessment of Engagement with Evidence (SAGE) incorporates an interview that explores policymakers’ research use within discrete policy documents and a scoring tool that quantifies the extent of policymakers’ research use based on the interview transcript and analysis of the policy document itself. We aimed to conduct a preliminary investigation of the usability, sensitivity, and reliability of the scoring tool in measuring research use by policymakers. METHODS: Nine experts in health policy research and two independent coders were recruited. Each expert used the scoring tool to rate a random selection of 20 interview transcripts, and each independent coder rated 60 transcripts. The distribution of scores among experts was examined, and interrater reliability was then tested within and between the experts and independent coders. Average- and single-measure reliability coefficients were computed for each SAGE subscale. RESULTS: Experts’ scores ranged from the limited to the extensive scoring bracket for all subscales. Experts as a group also exhibited at least a fair level of interrater agreement across all subscales. Single-measure reliability was at least fair except for three subscales: Relevance Appraisal, Conceptual Use, and Instrumental Use. Average- and single-measure reliability among independent coders was good to excellent for all subscales. Finally, reliability between experts and independent coders was fair to excellent for all subscales. CONCLUSIONS: Among experts, the scoring tool was comprehensible, usable, and sensitive enough to discriminate between documents with varying degrees of research use. The scoring tool also yielded scores with good reliability among the independent coders. There was greater variability among the experts, although as a group their ratings were fairly reliable. The alignment between experts’ and independent coders’ ratings indicates that the independent coders were scoring in a manner comparable to health policy research experts. If the present findings are replicated in a larger sample, end users (e.g. policy agency staff) could potentially be trained to use SAGE to reliably score research use within their agencies, which would provide a cost-effective and time-efficient approach to utilising this measure in practice. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1186/s13012-017-0676-7) contains supplementary material, which is available to authorized users. |
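The methods refer to average- and single-measure reliability coefficients computed across raters. The record does not state the exact estimator, but such coefficients are conventionally the Shrout–Fleiss two-way intraclass correlation coefficients, ICC(2,1) (single measure) and ICC(2,k) (average measure). The sketch below is a minimal illustration under that assumption; the ratings matrix is hypothetical, not data from the study.

```python
import numpy as np

def icc_two_way_random(scores):
    """Shrout-Fleiss ICC(2,1) (single-measure) and ICC(2,k) (average-measure)
    for an n_targets x k_raters matrix. Assumes a two-way random-effects
    model; this is an illustrative choice, not necessarily the estimator
    used in the SAGE study."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)  # one mean per rated transcript/document
    col_means = scores.mean(axis=0)  # one mean per rater

    # Two-way ANOVA mean squares
    ms_rows = k * ((row_means - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((col_means - grand) ** 2).sum() / (k - 1)
    ss_err = (((scores - grand) ** 2).sum()
              - (n - 1) * ms_rows - (k - 1) * ms_cols)
    ms_err = ss_err / ((n - 1) * (k - 1))

    single = (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    average = (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)
    return single, average

# Hypothetical example: 6 documents scored by 3 coders on one subscale
ratings = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [7, 8, 8],
    [5, 5, 6],
    [1, 2, 1],
    [6, 7, 6],
], dtype=float)
icc_single, icc_average = icc_two_way_random(ratings)
print(f"single-measure ICC(2,1):  {icc_single:.2f}")
print(f"average-measure ICC(2,k): {icc_average:.2f}")
```

For interpreting the resulting values, the abstract's fair/good/excellent language is consistent with the widely used Cicchetti benchmarks: below 0.40 poor, 0.40–0.59 fair, 0.60–0.74 good, and 0.75 or above excellent.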
format | Online Article Text |
id | pubmed-5735943 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-5735943 2017-12-21 Preliminary testing of the reliability and feasibility of SAGE: a system to measure and score engagement with and use of research in health policies and programs Makkar, Steve R.; Williamson, Anna; D’Este, Catherine; Redman, Sally Implement Sci Research BACKGROUND: Few measures of research use in health policymaking are available, and the reliability of such measures has yet to be evaluated. A new measure called the Staff Assessment of Engagement with Evidence (SAGE) incorporates an interview that explores policymakers’ research use within discrete policy documents and a scoring tool that quantifies the extent of policymakers’ research use based on the interview transcript and analysis of the policy document itself. We aimed to conduct a preliminary investigation of the usability, sensitivity, and reliability of the scoring tool in measuring research use by policymakers. METHODS: Nine experts in health policy research and two independent coders were recruited. Each expert used the scoring tool to rate a random selection of 20 interview transcripts, and each independent coder rated 60 transcripts. The distribution of scores among experts was examined, and interrater reliability was then tested within and between the experts and independent coders. Average- and single-measure reliability coefficients were computed for each SAGE subscale. RESULTS: Experts’ scores ranged from the limited to the extensive scoring bracket for all subscales. Experts as a group also exhibited at least a fair level of interrater agreement across all subscales. Single-measure reliability was at least fair except for three subscales: Relevance Appraisal, Conceptual Use, and Instrumental Use. Average- and single-measure reliability among independent coders was good to excellent for all subscales. Finally, reliability between experts and independent coders was fair to excellent for all subscales. CONCLUSIONS: Among experts, the scoring tool was comprehensible, usable, and sensitive enough to discriminate between documents with varying degrees of research use. The scoring tool also yielded scores with good reliability among the independent coders. There was greater variability among the experts, although as a group their ratings were fairly reliable. The alignment between experts’ and independent coders’ ratings indicates that the independent coders were scoring in a manner comparable to health policy research experts. If the present findings are replicated in a larger sample, end users (e.g. policy agency staff) could potentially be trained to use SAGE to reliably score research use within their agencies, which would provide a cost-effective and time-efficient approach to utilising this measure in practice. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1186/s13012-017-0676-7) contains supplementary material, which is available to authorized users. BioMed Central 2017-12-19 /pmc/articles/PMC5735943/ /pubmed/29258601 http://dx.doi.org/10.1186/s13012-017-0676-7 Text en © The Author(s). 2017 Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. |
spellingShingle | Research; Makkar, Steve R.; Williamson, Anna; D’Este, Catherine; Redman, Sally; Preliminary testing of the reliability and feasibility of SAGE: a system to measure and score engagement with and use of research in health policies and programs |
title | Preliminary testing of the reliability and feasibility of SAGE: a system to measure and score engagement with and use of research in health policies and programs |
title_full | Preliminary testing of the reliability and feasibility of SAGE: a system to measure and score engagement with and use of research in health policies and programs |
title_fullStr | Preliminary testing of the reliability and feasibility of SAGE: a system to measure and score engagement with and use of research in health policies and programs |
title_full_unstemmed | Preliminary testing of the reliability and feasibility of SAGE: a system to measure and score engagement with and use of research in health policies and programs |
title_short | Preliminary testing of the reliability and feasibility of SAGE: a system to measure and score engagement with and use of research in health policies and programs |
title_sort | preliminary testing of the reliability and feasibility of sage: a system to measure and score engagement with and use of research in health policies and programs |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5735943/ https://www.ncbi.nlm.nih.gov/pubmed/29258601 http://dx.doi.org/10.1186/s13012-017-0676-7 |
work_keys_str_mv | AT makkarstever preliminarytestingofthereliabilityandfeasibilityofsageasystemtomeasureandscoreengagementwithanduseofresearchinhealthpoliciesandprograms AT williamsonanna preliminarytestingofthereliabilityandfeasibilityofsageasystemtomeasureandscoreengagementwithanduseofresearchinhealthpoliciesandprograms AT destecatherine preliminarytestingofthereliabilityandfeasibilityofsageasystemtomeasureandscoreengagementwithanduseofresearchinhealthpoliciesandprograms AT redmansally preliminarytestingofthereliabilityandfeasibilityofsageasystemtomeasureandscoreengagementwithanduseofresearchinhealthpoliciesandprograms |