
Effect of Insufficient Interaction on the Evaluation of Anesthesiologists’ Quality of Clinical Supervision by Anesthesiology Residents and Fellows

Bibliographic Details
Main Authors: Hadler, Rachel A, Dexter, Franklin, Hindman, Bradley J
Format: Online Article Text
Language: English
Published: Cureus 2022
Subjects: Anesthesiology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9036497/
https://www.ncbi.nlm.nih.gov/pubmed/35494980
http://dx.doi.org/10.7759/cureus.23500
author Hadler, Rachel A
Dexter, Franklin
Hindman, Bradley J
author_facet Hadler, Rachel A
Dexter, Franklin
Hindman, Bradley J
author_sort Hadler, Rachel A
collection PubMed
description Introduction In this study, we tested whether raters’ (residents and fellows) decisions to evaluate (or not) critical care anesthesiologists were significantly associated with clinical interactions documented from electronic health record progress notes and whether that influenced the reliability of supervision scores. We used the de Oliveira Filho clinical supervision scale for the evaluation of faculty anesthesiologists. Email requests were sent to raters who worked one hour or longer with the anesthesiologist the preceding day in an operating room. In contrast, potential raters were requested to evaluate all critical care anesthesiologists scheduled in intensive care units during the preceding week.
Methods Over 7.6 years, raters (N=172) received a total of 7764 requests to evaluate 21 critical care anesthesiologists. Each rater received a median/mode of three evaluation requests, one per anesthesiologist on service that week. In this retrospective cohort study, we related responses (2970 selections of “insufficient interaction” to evaluate the faculty, and 3127 completed supervision scores) to progress notes (N=25,469) electronically co-signed by the rater and anesthesiologist combination during that week.
Results Raters with few jointly signed notes were more likely to select insufficient interaction for evaluation (P < 0.0001): 62% when no joint notes versus 1% with at least 20 joint notes during the week. Still, rater-anesthesiologist combinations with no co-authored notes accounted not only for most (78%) of the evaluation requests but also most (56%) of the completed evaluations (both P < 0.0001). Among rater and anesthesiologist combinations with each anesthesiologist receiving evaluations from multiple (at least nine) raters and each rater evaluating multiple anesthesiologists, most (72%) rater-anesthesiologist combinations were among raters who had no co-authored notes with the anesthesiologist (P < 0.0001).
Conclusions Regular use of the supervision scale should be practiced with raters who were selected not only from their scheduled clinical site but also using electronic health record data verifying joint workload with the anesthesiologist.
format Online
Article
Text
id pubmed-9036497
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher Cureus
record_format MEDLINE/PubMed
spelling pubmed-9036497 2022-04-27 Effect of Insufficient Interaction on the Evaluation of Anesthesiologists’ Quality of Clinical Supervision by Anesthesiology Residents and Fellows Hadler, Rachel A Dexter, Franklin Hindman, Bradley J Cureus Anesthesiology Cureus 2022-03-26 /pmc/articles/PMC9036497/ /pubmed/35494980 http://dx.doi.org/10.7759/cureus.23500 Text en Copyright © 2022, Hadler et al. https://creativecommons.org/licenses/by/3.0/ This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
spellingShingle Anesthesiology
Hadler, Rachel A
Dexter, Franklin
Hindman, Bradley J
Effect of Insufficient Interaction on the Evaluation of Anesthesiologists’ Quality of Clinical Supervision by Anesthesiology Residents and Fellows
title Effect of Insufficient Interaction on the Evaluation of Anesthesiologists’ Quality of Clinical Supervision by Anesthesiology Residents and Fellows
title_full Effect of Insufficient Interaction on the Evaluation of Anesthesiologists’ Quality of Clinical Supervision by Anesthesiology Residents and Fellows
title_fullStr Effect of Insufficient Interaction on the Evaluation of Anesthesiologists’ Quality of Clinical Supervision by Anesthesiology Residents and Fellows
title_full_unstemmed Effect of Insufficient Interaction on the Evaluation of Anesthesiologists’ Quality of Clinical Supervision by Anesthesiology Residents and Fellows
title_short Effect of Insufficient Interaction on the Evaluation of Anesthesiologists’ Quality of Clinical Supervision by Anesthesiology Residents and Fellows
title_sort effect of insufficient interaction on the evaluation of anesthesiologists’ quality of clinical supervision by anesthesiology residents and fellows
topic Anesthesiology
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9036497/
https://www.ncbi.nlm.nih.gov/pubmed/35494980
http://dx.doi.org/10.7759/cureus.23500
work_keys_str_mv AT hadlerrachela effectofinsufficientinteractionontheevaluationofanesthesiologistsqualityofclinicalsupervisionbyanesthesiologyresidentsandfellows
AT dexterfranklin effectofinsufficientinteractionontheevaluationofanesthesiologistsqualityofclinicalsupervisionbyanesthesiologyresidentsandfellows
AT hindmanbradleyj effectofinsufficientinteractionontheevaluationofanesthesiologistsqualityofclinicalsupervisionbyanesthesiologyresidentsandfellows