
Evaluation of a brief virtual implementation science training program: the Penn Implementation Science Institute


Bibliographic Details
Main Authors: Van Pelt, Amelia E., Bonafide, Christopher P., Rendle, Katharine A., Wolk, Courtney, Shea, Judy A., Bettencourt, Amanda, Beidas, Rinad S., Lane-Fall, Meghan B.
Format: Online Article Text
Language: English
Published: BioMed Central, 2023
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10626776/
https://www.ncbi.nlm.nih.gov/pubmed/37932840
http://dx.doi.org/10.1186/s43058-023-00512-5
description BACKGROUND: To meet the growing demand for implementation science expertise, capacity building is a priority. Various training opportunities have emerged to meet this need. To ensure rigor and the achievement of specific implementation science competencies, it is critical to evaluate training programs systematically.
METHODS: The Penn Implementation Science Institute (PennISI) offers 4 days (20 h) of virtual synchronous training on foundational and advanced topics in implementation science. Through a pre-post design, this study evaluated the sixth PennISI, delivered in 2022. Survey measures included 43 implementation science training evaluation competencies grouped into four thematic domains (e.g., items related to implementation science study design grouped into the “design and analysis” competency category), course-specific evaluation criteria, and open-ended questions to evaluate change in knowledge and to gather suggestions for improving future institutes. Mean composite scores were created for each competency theme. Descriptive statistics and thematic analysis were completed.
RESULTS: One hundred four participants (95.41% response rate) completed the pre-survey and 55 (50.46% response rate) completed the post-survey. Participants comprised a diverse cohort of individuals primarily affiliated with US-based academic institutions, most of whom self-reported novice or beginner-level knowledge of implementation science at baseline (81.73%). In the pre-survey, all mean composite scores for implementation science competencies were below one (i.e., beginner-level). Participants reported high value from the PennISI across standard course evaluation criteria (e.g., a mean score of 3.77/4.00 for overall course quality). Following training, scores for all competency domains increased to between beginner-level and intermediate-level. In both the pre-survey and post-survey, competencies related to “definition, background, and rationale” had the highest mean composite score, whereas competencies related to “design and analysis” received the lowest. Qualitative themes captured overall impressions of the PennISI, didactic content, PennISI structure, and suggestions for improvement. Prior experience with or knowledge of implementation science influenced many themes.
CONCLUSIONS: This evaluation highlights the strengths of an established implementation science institute, which can serve as a model for brief, virtual training programs. Findings provide insight for improving future program efforts to meet the needs of the heterogeneous implementation science community (e.g., different disciplines and levels of implementation science knowledge). This study contributes to ensuring rigorous implementation science capacity building through the evaluation of programs.
SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s43058-023-00512-5.
Collection: PubMed (id pubmed-10626776)
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Implement Sci Commun (Research)
Published online: 2023-11-06
License: © The Author(s) 2023. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution, and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source, a link to the licence is provided, and any changes are indicated.
Topic: Research