Reliability of simulation-based assessment for practicing physicians: performance is context-specific
INTRODUCTION: Even physicians who routinely work in complex, dynamic practices may be unprepared to optimally manage challenging critical events. High-fidelity simulation can realistically mimic critical clinically relevant events; however, the reliability and validity of simulation-based assessment...
Main Authors: | Sinz, Elizabeth; Banerjee, Arna; Steadman, Randolph; Shotwell, Matthew S.; Slagle, Jason; McIvor, William R.; Torsher, Laurence; Burden, Amanda; Cooper, Jeffrey B.; DeMaria, Samuel; Levine, Adam I.; Park, Christine; Gaba, David M.; Weinger, Matthew B.; Boulet, John R.
Format: | Online Article Text
Language: | English
Published: | BioMed Central, 2021
Subjects: | Research Article
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8042680/ https://www.ncbi.nlm.nih.gov/pubmed/33845837 http://dx.doi.org/10.1186/s12909-021-02617-8
_version_ | 1783678167227039744 |
author | Sinz, Elizabeth Banerjee, Arna Steadman, Randolph Shotwell, Matthew S. Slagle, Jason McIvor, William R. Torsher, Laurence Burden, Amanda Cooper, Jeffrey B. DeMaria, Samuel Levine, Adam I. Park, Christine Gaba, David M. Weinger, Matthew B. Boulet, John R. |
author_facet | Sinz, Elizabeth Banerjee, Arna Steadman, Randolph Shotwell, Matthew S. Slagle, Jason McIvor, William R. Torsher, Laurence Burden, Amanda Cooper, Jeffrey B. DeMaria, Samuel Levine, Adam I. Park, Christine Gaba, David M. Weinger, Matthew B. Boulet, John R. |
author_sort | Sinz, Elizabeth |
collection | PubMed |
description | INTRODUCTION: Even physicians who routinely work in complex, dynamic practices may be unprepared to optimally manage challenging critical events. High-fidelity simulation can realistically mimic critical clinically relevant events; however, the reliability and validity of simulation-based assessment scores for practicing physicians have not been established. METHODS: Standardised complex simulation scenarios were developed and administered to board-certified, practicing anesthesiologists who volunteered to participate in an assessment study during formative maintenance of certification activities. A subset of the study population agreed to participate as the primary responder in a second scenario for this study. The physicians were assessed independently by trained raters on both teamwork/behavioural and technical performance measures. Analyses using Generalisability and Decision studies were completed for the two scenarios with two raters. RESULTS: The behavioural score was not more reliable than the technical score. With two raters, > 20 scenarios would be required to achieve a reliability estimate of 0.7. Increasing the number of raters for a given scenario would have little effect on reliability. CONCLUSIONS: The performance of practicing physicians on simulated critical events may be highly context-specific. Realistic simulation-based assessment for practicing physicians is resource-intensive and may be best suited for individualized formative feedback. More importantly, aggregate data from a population of participants may have an even higher impact if used to identify skill or knowledge gaps to be addressed by training programs and inform continuing education improvements across the profession. |
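The RESULTS reported in the description summarise a Decision (D) study, in which reliability is projected from G-study variance components for different numbers of scenarios and raters. The Python sketch below is purely illustrative: it assumes a fully crossed person × scenario × rater design and uses hypothetical variance components (not the study's estimates), chosen only so that the person-by-scenario interaction dominates, which reproduces the qualitative pattern reported in the abstract.

```python
# Illustrative D-study projection for a fully crossed person x scenario x rater
# (p x s x r) design. The variance components are hypothetical, NOT the values
# estimated in the study; they are chosen so the person-by-scenario interaction
# dominates, i.e. performance is context-specific.

def projected_reliability(var_p, var_ps, var_pr, var_psr_e, n_s, n_r):
    """E(rho^2) = var_p / (var_p + var_ps/n_s + var_pr/n_r + var_psr_e/(n_s*n_r))."""
    relative_error = var_ps / n_s + var_pr / n_r + var_psr_e / (n_s * n_r)
    return var_p / (var_p + relative_error)

# Hypothetical variance components: person, person x scenario, person x rater,
# and residual. The large var_ps term swamps the rater-related terms.
components = dict(var_p=0.20, var_ps=1.50, var_pr=0.02, var_psr_e=0.30)

for n_s in (1, 2, 5, 10, 20, 25):
    for n_r in (1, 2, 4):
        rho = projected_reliability(**components, n_s=n_s, n_r=n_r)
        print(f"scenarios={n_s:>2}  raters={n_r}  projected reliability={rho:.2f}")
```

With components like these, adding raters at a fixed number of scenarios barely moves the projection, while adding scenarios moves it substantially; more than 20 scenarios with two raters are needed to cross 0.7, mirroring the pattern described in the abstract.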
format | Online Article Text |
id | pubmed-8042680 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-8042680 2021-04-14 Reliability of simulation-based assessment for practicing physicians: performance is context-specific Sinz, Elizabeth Banerjee, Arna Steadman, Randolph Shotwell, Matthew S. Slagle, Jason McIvor, William R. Torsher, Laurence Burden, Amanda Cooper, Jeffrey B. DeMaria, Samuel Levine, Adam I. Park, Christine Gaba, David M. Weinger, Matthew B. Boulet, John R. BMC Med Educ Research Article INTRODUCTION: Even physicians who routinely work in complex, dynamic practices may be unprepared to optimally manage challenging critical events. High-fidelity simulation can realistically mimic critical clinically relevant events; however, the reliability and validity of simulation-based assessment scores for practicing physicians have not been established. METHODS: Standardised complex simulation scenarios were developed and administered to board-certified, practicing anesthesiologists who volunteered to participate in an assessment study during formative maintenance of certification activities. A subset of the study population agreed to participate as the primary responder in a second scenario for this study. The physicians were assessed independently by trained raters on both teamwork/behavioural and technical performance measures. Analyses using Generalisability and Decision studies were completed for the two scenarios with two raters. RESULTS: The behavioural score was not more reliable than the technical score. With two raters, > 20 scenarios would be required to achieve a reliability estimate of 0.7. Increasing the number of raters for a given scenario would have little effect on reliability. CONCLUSIONS: The performance of practicing physicians on simulated critical events may be highly context-specific. Realistic simulation-based assessment for practicing physicians is resource-intensive and may be best suited for individualized formative feedback. More importantly, aggregate data from a population of participants may have an even higher impact if used to identify skill or knowledge gaps to be addressed by training programs and inform continuing education improvements across the profession. BioMed Central 2021-04-12 /pmc/articles/PMC8042680/ /pubmed/33845837 http://dx.doi.org/10.1186/s12909-021-02617-8 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ (https://creativecommons.org/licenses/by/4.0/). The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/ (https://creativecommons.org/publicdomain/zero/1.0/)) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Article Sinz, Elizabeth Banerjee, Arna Steadman, Randolph Shotwell, Matthew S. Slagle, Jason McIvor, William R. Torsher, Laurence Burden, Amanda Cooper, Jeffrey B. DeMaria, Samuel Levine, Adam I. Park, Christine Gaba, David M. Weinger, Matthew B. Boulet, John R. Reliability of simulation-based assessment for practicing physicians: performance is context-specific |
title | Reliability of simulation-based assessment for practicing physicians: performance is context-specific |
title_full | Reliability of simulation-based assessment for practicing physicians: performance is context-specific |
title_fullStr | Reliability of simulation-based assessment for practicing physicians: performance is context-specific |
title_full_unstemmed | Reliability of simulation-based assessment for practicing physicians: performance is context-specific |
title_short | Reliability of simulation-based assessment for practicing physicians: performance is context-specific |
title_sort | reliability of simulation-based assessment for practicing physicians: performance is context-specific |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8042680/ https://www.ncbi.nlm.nih.gov/pubmed/33845837 http://dx.doi.org/10.1186/s12909-021-02617-8 |
work_keys_str_mv | AT sinzelizabeth reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT banerjeearna reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT steadmanrandolph reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT shotwellmatthews reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT slaglejason reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT mcivorwilliamr reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT torsherlaurence reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT burdenamanda reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT cooperjeffreyb reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT demariasamuel reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT levineadami reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT parkchristine reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT gabadavidm reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT weingermatthewb reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific AT bouletjohnr reliabilityofsimulationbasedassessmentforpracticingphysiciansperformanceiscontextspecific |