Assessing Interrater Reliability of a Faculty-Provided Feedback Rating Instrument
High quality feedback on resident clinical performance is pivotal to growth and development. Therefore, a reliable means of assessing faculty feedback is necessary. A feedback assessment instrument would also allow for appropriate focus of interventions to improve faculty feedback. We piloted an assessment of the interrater reliability of a seven-item feedback rating instrument on faculty educators trained via a three-workshop frame-of-reference training regimen.
Main Authors: | Walsh, Daniel P.; Chen, Michael J.; Buhl, Lauren K.; Neves, Sara E.; Mitchell, John D. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | SAGE Publications, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9168931/ https://www.ncbi.nlm.nih.gov/pubmed/35677580 http://dx.doi.org/10.1177/23821205221093205 |
_version_ | 1784721107263160320 |
---|---|
author | Walsh, Daniel P.; Chen, Michael J.; Buhl, Lauren K.; Neves, Sara E.; Mitchell, John D. |
author_facet | Walsh, Daniel P.; Chen, Michael J.; Buhl, Lauren K.; Neves, Sara E.; Mitchell, John D. |
author_sort | Walsh, Daniel P. |
collection | PubMed |
description | High quality feedback on resident clinical performance is pivotal to growth and development. Therefore, a reliable means of assessing faculty feedback is necessary. A feedback assessment instrument would also allow for appropriate focus of interventions to improve faculty feedback. We piloted an assessment of the interrater reliability of a seven-item feedback rating instrument on faculty educators trained via a three-workshop frame-of-reference training regimen. The rating instrument's items assessed for the presence or absence of six feedback traits: actionable, behavior focused, detailed, negative feedback, professionalism / communication, and specific; as well as for overall utility of feedback with regard to devising a resident performance improvement plan on an ordinal scale from 1 to 5. Participants completed three cycles consisting of one-hour-long workshops where an instructor led a review of the feedback rating instrument on deidentified feedback comments, followed by participants independently rating a set of 20 deidentified feedback comments, and the study team reviewing the interrater reliability for each feedback rating category to guide future workshops. Comments came from four different anesthesia residency programs in the United States; each set of feedback comments was balanced with respect to utility scores to promote participants’ ability to discriminate between high and low utility comments. On the third and final independent rating exercise, participants achieved moderate or greater interrater reliability on all seven rating categories of a feedback rating instrument using Gwet's agreement coefficient 1 for the six feedback traits and using intraclass correlation for utility score. This illustrates that when this instrument is utilized by trained, expert educators, reliable assessments of faculty-provided feedback can be made. This rating instrument, with further validity evidence, has the potential to help programs reliably assess both the quality and utility of their feedback, as well as the impact of any educational interventions designed to improve feedback. |
format | Online Article Text |
id | pubmed-9168931 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | SAGE Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-9168931 2022-06-07 Assessing Interrater Reliability of a Faculty-Provided Feedback Rating Instrument Walsh, Daniel P.; Chen, Michael J.; Buhl, Lauren K.; Neves, Sara E.; Mitchell, John D. J Med Educ Curric Dev Original Research (abstract as in the description field above) SAGE Publications 2022-06-02 /pmc/articles/PMC9168931/ /pubmed/35677580 http://dx.doi.org/10.1177/23821205221093205 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by-nc/4.0/ This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/), which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage). |
spellingShingle | Original Research Walsh, Daniel P. Chen, Michael J. Buhl, Lauren K. Neves, Sara E. Mitchell, John D. Assessing Interrater Reliability of a Faculty-Provided Feedback Rating Instrument |
title | Assessing Interrater Reliability of a Faculty-Provided Feedback Rating Instrument |
title_full | Assessing Interrater Reliability of a Faculty-Provided Feedback Rating Instrument |
title_fullStr | Assessing Interrater Reliability of a Faculty-Provided Feedback Rating Instrument |
title_full_unstemmed | Assessing Interrater Reliability of a Faculty-Provided Feedback Rating Instrument |
title_short | Assessing Interrater Reliability of a Faculty-Provided Feedback Rating Instrument |
title_sort | assessing interrater reliability of a faculty-provided feedback rating instrument |
topic | Original Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9168931/ https://www.ncbi.nlm.nih.gov/pubmed/35677580 http://dx.doi.org/10.1177/23821205221093205 |
work_keys_str_mv | AT walshdanielp assessinginterraterreliabilityofafacultyprovidedfeedbackratinginstrument AT chenmichaelj assessinginterraterreliabilityofafacultyprovidedfeedbackratinginstrument AT buhllaurenk assessinginterraterreliabilityofafacultyprovidedfeedbackratinginstrument AT nevessarae assessinginterraterreliabilityofafacultyprovidedfeedbackratinginstrument AT mitchelljohnd assessinginterraterreliabilityofafacultyprovidedfeedbackratinginstrument |
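For context on the agreement statistics named in the description field: the abstract reports Gwet's first-order agreement coefficient (AC1) for the six binary feedback traits and an intraclass correlation for the 1-to-5 utility score. The Python sketch below only illustrates how those two statistics are computed; it is not code from the study, and the five-rater panel, the simulated ratings, and the choice of the ICC(2,1) form are assumptions made for the example.

```python
import numpy as np

def gwet_ac1(ratings: np.ndarray, categories=(0, 1)) -> float:
    """Gwet's first-order agreement coefficient (AC1) for an items x raters matrix."""
    n_items, n_raters = ratings.shape
    # counts[i, c] = number of raters assigning category c to item i
    counts = np.stack([(ratings == c).sum(axis=1) for c in categories], axis=1)
    # Observed agreement: average pairwise agreement across items.
    p_a = ((counts * (counts - 1)).sum(axis=1) / (n_raters * (n_raters - 1))).mean()
    # Chance agreement based on overall category prevalence.
    pi = counts.sum(axis=0) / (n_items * n_raters)
    p_e = (pi * (1 - pi)).sum() / (len(categories) - 1)
    return (p_a - p_e) / (1 - p_e)

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1): two-way random effects, absolute agreement, single rater (Shrout & Fleiss)."""
    n, k = ratings.shape
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # between comments
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    msr, msc = ss_rows / (n - 1), ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: 20 comments rated by 5 raters (the rater count is an assumption).
    actionable = rng.integers(0, 2, size=(20, 5))               # binary trait ratings
    utility = rng.integers(1, 6, size=(20, 5)).astype(float)    # 1-5 utility scores
    print(f"AC1 (actionable trait): {gwet_ac1(actionable):.2f}")
    print(f"ICC(2,1) (utility score): {icc_2_1(utility):.2f}")
```

On the benchmark scales commonly applied to kappa-like coefficients (e.g., Landis and Koch), values of roughly 0.41 to 0.60 are read as moderate agreement, which is the level the abstract reports the trained raters reaching or exceeding on all seven categories.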