
Advancing implementation science through measure development and evaluation: a study protocol


Bibliographic Details
Main Authors: Lewis, Cara C., Weiner, Bryan J., Stanick, Cameo, Fischer, Sarah M.
Format: Online Article Text
Language: English
Published: BioMed Central, 2015
Subjects: Study Protocol
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4511441/
https://www.ncbi.nlm.nih.gov/pubmed/26197880
http://dx.doi.org/10.1186/s13012-015-0287-0
Description:

BACKGROUND: Significant gaps related to measurement issues are among the most critical barriers to advancing implementation science. Three issues motivated the study aims: (a) the lack of stakeholder involvement in defining pragmatic measure qualities; (b) the dearth of measures, particularly for implementation outcomes; and (c) the unknown psychometric and pragmatic strength of existing measures. Aim 1: Establish a stakeholder-driven operationalization of pragmatic measures and develop reliable, valid rating criteria for assessing the construct. Aim 2: Develop reliable, valid, and pragmatic measures of three critical implementation outcomes: acceptability, appropriateness, and feasibility. Aim 3: Identify measures, linked to the Consolidated Framework for Implementation Research and the Implementation Outcome Framework, that demonstrate both psychometric and pragmatic strength.

METHODS/DESIGN: For Aim 1, we will conduct (a) interviews with stakeholder panelists (N = 7) and complete a literature review to populate pragmatic measure construct criteria, (b) Q-sort activities (N = 20) to clarify the internal structure of the definition, (c) Delphi activities (N = 20) to achieve consensus on the dimension priorities, (d) test-retest and inter-rater reliability assessments of the emergent rating system, and (e) known-groups validity testing of the top three prioritized pragmatic criteria. For Aim 2, our systematic development process involves domain delineation, item generation, substantive validity assessment, structural validity assessment, reliability assessment, and predictive validity assessment. We will also assess discriminant validity, known-groups validity, structural invariance, sensitivity to change, and other pragmatic features. For Aim 3, we will refine our established evidence-based assessment (EBA) criteria, extract the relevant data from the literature, rate each measure using the EBA criteria, and summarize the data.

DISCUSSION: The study outputs of each aim are expected to have a positive impact, as they will establish and guide a comprehensive measurement-focused research agenda for implementation science and provide empirically supported measures, tools, and methods for accomplishing this work.
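The protocol's Aim 1 includes inter-rater reliability assessment of the emergent rating system but does not specify the agreement statistic. As a minimal illustrative sketch only (not the authors' stated method), one common choice for categorical ratings from two raters is Cohen's kappa, which corrects raw agreement for agreement expected by chance:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters' categorical ratings.

    rater1, rater2: equal-length sequences of category labels, where
    position i holds each rater's rating of item i.
    """
    if len(rater1) != len(rater2) or not rater1:
        raise ValueError("ratings must be non-empty and equal length")
    n = len(rater1)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected agreement under independence, from each rater's marginals.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[k] * c2[k] for k in c1) / (n * n)
    if p_e == 1.0:
        return 1.0  # both raters used a single identical category throughout
    return (p_o - p_e) / (1 - p_e)
```

For example, ratings `[1, 1, 0, 1]` vs. `[1, 1, 0, 0]` agree on 3 of 4 items (p_o = 0.75) with chance agreement p_e = 0.5, giving kappa = 0.5. Test-retest reliability of the same rating system would instead compare one rater's scores across two occasions (e.g., via a correlation or intraclass coefficient).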
Published in: Implement Sci (Study Protocol), BioMed Central, 2015-07-22.
© Lewis et al. 2015. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.