Does faculty development influence the quality of in-training evaluation reports in pharmacy?
BACKGROUND: In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines. However, outside of medicine, the quality of submitted workplace-based assessments is largely uninvestigated. This study assessed the quality of ITERs in pharmacy and whether clinical supervisors could be trained to complete higher quality reports.
Main Author: | Wilbur, Kerry |
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central, 2017 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5697106/ https://www.ncbi.nlm.nih.gov/pubmed/29157239 http://dx.doi.org/10.1186/s12909-017-1054-5 |
_version_ | 1783280545259585536 |
author | Wilbur, Kerry |
author_facet | Wilbur, Kerry |
author_sort | Wilbur, Kerry |
collection | PubMed |
description | BACKGROUND: In-training evaluation reports (ITERs) of student workplace-based learning are completed by clinical supervisors across various health disciplines. However, outside of medicine, the quality of submitted workplace-based assessments is largely uninvestigated. This study assessed the quality of ITERs in pharmacy and whether clinical supervisors could be trained to complete higher quality reports. METHODS: A random sample of ITERs submitted in a pharmacy program during 2013–2014 was evaluated. These ITERs served as a historical control (control group 1) for comparison with ITERs submitted in 2015–2016 by clinical supervisors who participated in an interactive faculty development workshop (intervention group) and those who did not (control group 2). Two trained independent raters scored the ITERs using a previously validated nine-item scale assessing report quality, the Completed Clinical Evaluation Report Rating (CCERR). The scoring scale for each item is anchored at 1 (“not at all”) and 5 (“exemplary”), with 3 categorized as “acceptable”. RESULTS: The mean CCERR score for reports completed after the workshop (22.9 ± 3.39) did not significantly improve compared with prospective control group 2 (22.7 ± 3.63, p = 0.84) and was worse than that of historical control group 1 (37.9 ± 8.21, p = 0.001). Mean scores for individual CCERR items were below acceptable thresholds for 5 of the 9 domains in control group 1, including supervisor-documented evidence of specific examples to clearly explain weaknesses and concrete recommendations for student improvement. Mean scores for individual CCERR items were below acceptable thresholds for 6 and 7 of the 9 domains in control group 2 and the intervention group, respectively. CONCLUSIONS: This study is the first to use the CCERR to evaluate ITER quality outside of medicine. Findings demonstrate low baseline CCERR scores in a pharmacy program that were not demonstrably changed by a faculty development workshop, but strategies are identified to augment future rater training. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1186/s12909-017-1054-5) contains supplementary material, which is available to authorized users.
format | Online Article Text |
id | pubmed-5697106 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2017 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-5697106 2017-12-01 Does faculty development influence the quality of in-training evaluation reports in pharmacy? Wilbur, Kerry BMC Med Educ Research Article BioMed Central 2017-11-21 /pmc/articles/PMC5697106/ /pubmed/29157239 http://dx.doi.org/10.1186/s12909-017-1054-5 Text en © The Author(s). 2017 Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
spellingShingle | Research Article Wilbur, Kerry Does faculty development influence the quality of in-training evaluation reports in pharmacy? |
title | Does faculty development influence the quality of in-training evaluation reports in pharmacy? |
title_full | Does faculty development influence the quality of in-training evaluation reports in pharmacy? |
title_fullStr | Does faculty development influence the quality of in-training evaluation reports in pharmacy? |
title_full_unstemmed | Does faculty development influence the quality of in-training evaluation reports in pharmacy? |
title_short | Does faculty development influence the quality of in-training evaluation reports in pharmacy? |
title_sort | does faculty development influence the quality of in-training evaluation reports in pharmacy? |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5697106/ https://www.ncbi.nlm.nih.gov/pubmed/29157239 http://dx.doi.org/10.1186/s12909-017-1054-5 |
work_keys_str_mv | AT wilburkerry doesfacultydevelopmentinfluencethequalityofintrainingevaluationreportsinpharmacy |
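The RESULTS field above reports between-group comparisons of mean CCERR scores (e.g., p = 0.84 versus prospective control group 2 and p = 0.001 versus historical control group 1). As a minimal sketch of how such a two-group comparison could be carried out, assuming per-report CCERR totals (nine items, each scored 1–5, summed to a 9–45 total) are available as plain lists, the Python snippet below applies Welch's t-test via SciPy. The abstract does not state which statistical test the authors used, and the group names and values here are hypothetical, not drawn from the study data.

```python
# Minimal sketch: comparing mean CCERR totals between two groups of ITERs,
# as described in the abstract. NOT the authors' actual analysis code.
# CCERR: nine items, each scored 1-5, so a report total ranges from 9 to 45.
from scipy import stats

# Hypothetical per-report CCERR totals (illustrative values only).
intervention_totals = [23, 21, 26, 19, 24, 22, 25, 20]
control_totals = [22, 24, 20, 23, 21, 25, 19, 26]

# Welch's t-test (does not assume equal variances) comparing group means.
t_stat, p_value = stats.ttest_ind(intervention_totals, control_totals, equal_var=False)

mean_int = sum(intervention_totals) / len(intervention_totals)
mean_ctl = sum(control_totals) / len(control_totals)
print(f"Intervention mean: {mean_int:.1f}, Control mean: {mean_ctl:.1f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.2f}")
```

Welch's variant is used because it does not assume equal variances between the two groups; passing `equal_var=True` instead recovers the classic Student's t-test.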