Evaluation of a brief virtual implementation science training program: the Penn Implementation Science Institute

Bibliographic Details
Main Authors: Van Pelt, Amelia E., Bonafide, Christopher P., Rendle, Katharine A., Wolk, Courtney, Shea, Judy A., Bettencourt, Amanda, Beidas, Rinad S., Lane-Fall, Meghan B.
Format: Online Article Text
Language: English
Published: BioMed Central, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10626776/
https://www.ncbi.nlm.nih.gov/pubmed/37932840
http://dx.doi.org/10.1186/s43058-023-00512-5
Description
Summary: BACKGROUND: To meet the growing demand for implementation science expertise, building capacity is a priority. Various training opportunities have emerged to meet this need. To ensure rigor and the achievement of specific implementation science competencies, it is critical to systematically evaluate training programs.

METHODS: The Penn Implementation Science Institute (PennISI) offers 4 days (20 h) of virtual synchronous training on foundational and advanced topics in implementation science. Through a pre-post design, this study evaluated the sixth PennISI, delivered in 2022. Survey measures included 43 implementation science training evaluation competencies grouped into four thematic domains (e.g., items related to implementation science study design grouped into the "design and analysis" competency category), course-specific evaluation criteria, and open-ended questions to evaluate change in knowledge and gather suggestions for improving future institutes. Mean composite scores were created for each of the competency themes. Descriptive statistics and thematic analysis were completed.

RESULTS: One hundred four participants (95.41% response rate) completed the pre-survey and 55 (50.46% response rate) completed the post-survey. Participants included a diverse cohort of individuals primarily affiliated with US-based academic institutions, most of whom (81.73%) self-reported novice or beginner-level knowledge of implementation science at baseline. In the pre-survey, all mean composite scores for implementation science competencies were below one (i.e., beginner-level). Participants reported high value from the PennISI across standard course evaluation criteria (e.g., a mean score of 3.77/4.00 for overall course quality). Scores for all competency domains increased to between beginner and intermediate levels following training. In both the pre-survey and post-survey, competencies related to "definition, background, and rationale" had the highest mean composite score, whereas competencies related to "design and analysis" received the lowest. Qualitative themes captured overall impressions of the PennISI, its didactic content and structure, and suggestions for improvement. Prior experience with or knowledge of implementation science influenced many themes.

CONCLUSIONS: This evaluation highlights the strengths of an established implementation science institute, which can serve as a model for brief, virtual training programs. The findings provide insight for improving future program efforts to meet the needs of the heterogeneous implementation science community (e.g., different disciplines and levels of implementation science knowledge). This study contributes to ensuring rigorous implementation science capacity building through the evaluation of training programs.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s43058-023-00512-5.