Are evaluations in simulated medical encounters reliable among rater types? A comparison between standardized patient and outside observer ratings of OSCEs
Main Authors: | Wollney, Easton N.; Vasquez, Taylor S.; Stalvey, Carolyn; Close, Julia; Markham, Merry Jennifer; Meyer, Lynne E.; Cooper, Lou Ann; Bylund, Carma L. |
Format: | Online Article Text |
Language: | English |
Published: | Elsevier, 2023 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10194306/ https://www.ncbi.nlm.nih.gov/pubmed/37214504 http://dx.doi.org/10.1016/j.pecinn.2023.100125 |
_version_ | 1785043989820342272
author | Wollney, Easton N.; Vasquez, Taylor S.; Stalvey, Carolyn; Close, Julia; Markham, Merry Jennifer; Meyer, Lynne E.; Cooper, Lou Ann; Bylund, Carma L.
author_sort | Wollney, Easton N. |
collection | PubMed |
description | OBJECTIVE: By analyzing Objective Structured Clinical Examination (OSCE) evaluations of first-year interns’ communication with standardized patients (SPs), our study aimed to examine the differences between the ratings given by SPs and those given by outside observers trained in healthcare communication. METHODS: Immediately after each OSCE, SPs evaluated interns’ communication skills using a 30-item instrument. Later, two observers independently coded video recordings of the encounters using the same items. We conducted two-tailed t-tests to examine differences between SP and observer ratings. RESULTS: Rater scores differed significantly on 21 items (p < .05); on 20 of those 21 items, the difference reflected higher in-person SP scores. The items most divergent between SPs and observers concerned empathic communication and nonverbal communication. CONCLUSION: Differences between SP and observer ratings should be investigated further to determine whether additional rater training or a revised evaluation measure is needed. Educators may benefit from reducing the number of items raters must complete, for example by using more global questions that encompass several criteria, and evaluation measures may be strengthened through reliability and validity testing. INNOVATION: This study highlights the strengths and limitations of each rater type (observers or SPs) and of each evaluation method (recorded or in-person).
format | Online Article Text |
id | pubmed-10194306 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Elsevier |
record_format | MEDLINE/PubMed |
spelling | pubmed-10194306 2023-05-19 Are evaluations in simulated medical encounters reliable among rater types? A comparison between standardized patient and outside observer ratings of OSCEs PEC Innov Short communication Elsevier 2023-01-29 /pmc/articles/PMC10194306/ /pubmed/37214504 http://dx.doi.org/10.1016/j.pecinn.2023.100125 Text en © 2023 The Authors. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
title | Are evaluations in simulated medical encounters reliable among rater types? A comparison between standardized patient and outside observer ratings of OSCEs |
topic | Short communication |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10194306/ https://www.ncbi.nlm.nih.gov/pubmed/37214504 http://dx.doi.org/10.1016/j.pecinn.2023.100125 |
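The METHODS summary in the record above describes item-by-item, two-tailed t-tests comparing SP ratings with outside-observer ratings on a 30-item instrument, flagging items at p < .05. The following is a minimal, illustrative sketch of that kind of per-item comparison, not the authors' analysis code: the ratings are synthetic, the number of encounters is made up, and because the record does not say whether scores were paired by encounter, the sketch assumes related-samples tests via scipy.stats.ttest_rel.

```python
# Minimal, illustrative sketch (synthetic data; not the authors' analysis code).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_encounters = 50   # hypothetical number of rated OSCE encounters
n_items = 30        # the record states the evaluation used 30 items

# Hypothetical 1-5 ratings: SP in-person scores and the mean of the two
# observers' video-based scores, per encounter and per item.
sp = rng.integers(3, 6, size=(n_encounters, n_items)).astype(float)
observers = rng.integers(2, 6, size=(n_encounters, n_items)).astype(float)

flagged = []
for item in range(n_items):
    # Two-tailed, related-samples t-test; pairing by encounter is an assumption,
    # the record only says "two-tailed t-tests".
    result = stats.ttest_rel(sp[:, item], observers[:, item])
    if result.pvalue < 0.05:
        flagged.append((item + 1, result.statistic, result.pvalue))

for item, t, p in flagged:
    direction = "SP higher" if t > 0 else "observers higher"
    print(f"Item {item:2d}: t = {t:6.2f}, p = {p:.3f} ({direction})")
```

With 30 simultaneous tests, some adjustment for multiple comparisons (e.g., Holm or Bonferroni) would often be considered; the record does not indicate whether the original analysis applied one.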