eOSCE stations live versus remote evaluation and scores variability
BACKGROUND: Objective structured clinical examinations (OSCEs) are known to be a fair evaluation method. In recent years, the use of online OSCEs (eOSCEs) has spread. This study aimed to compare remote versus live evaluation and to assess the factors associated with score variability during eOSCEs....
Main Authors: Bouzid, Donia; Mullaert, Jimmy; Ghazali, Aiham; Ferré, Valentine Marie; Mentré, France; Lemogne, Cédric; Ruszniewski, Philippe; Faye, Albert; Dinh, Alexy Tran; Mirault, Tristan
Format: Online Article Text
Language: English
Published: BioMed Central, 2022
Subjects: Research
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9745699/ https://www.ncbi.nlm.nih.gov/pubmed/36514011 http://dx.doi.org/10.1186/s12909-022-03919-1
_version_ | 1784849205072756736 |
author | Bouzid, Donia Mullaert, Jimmy Ghazali, Aiham Ferré, Valentine Marie Mentré, France Lemogne, Cédric Ruszniewski, Philippe Faye, Albert Dinh, Alexy Tran Mirault, Tristan |
author_facet | Bouzid, Donia Mullaert, Jimmy Ghazali, Aiham Ferré, Valentine Marie Mentré, France Lemogne, Cédric Ruszniewski, Philippe Faye, Albert Dinh, Alexy Tran Mirault, Tristan |
author_sort | Bouzid, Donia |
collection | PubMed |
description | BACKGROUND: Objective structured clinical examinations (OSCEs) are known to be a fair evaluation method. In recent years, the use of online OSCEs (eOSCEs) has spread. This study aimed to compare remote versus live evaluation and to assess the factors associated with score variability during eOSCEs. METHODS: We conducted large-scale eOSCEs at the medical school of the Université de Paris Cité in June 2021 and recorded all the students’ performances, allowing a second evaluation. To assess agreement in our context of multiple raters and students, we fitted a linear mixed model with student and rater as random effects and the score as the explained variable. RESULTS: One hundred seventy observations were analyzed for the first station after quality control. We retained 192 and 110 observations for the statistical analysis of the two other stations. The median scores and interquartile ranges were 60 out of 100 (IQR 50–70), 60 out of 100 (IQR 54–70), and 53 out of 100 (IQR 45–62) for the three stations. The proportions of score variance explained by the rater (rater ICC) were 23.0, 16.8, and 32.8%, respectively. Of the 31 raters, 18 (58%) were male. Scores did not differ significantly according to the gender of the rater (p = 0.96, 0.10, and 0.26, respectively). The two evaluations showed no systematic difference in scores (p = 0.92, 0.053, and 0.38, respectively). CONCLUSION: Our study suggests that remote evaluation is as reliable as live evaluation for eOSCEs. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12909-022-03919-1. |
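The variance decomposition described in the abstract (student and rater as crossed random effects, with the rater ICC defined as the share of total score variance attributable to the rater) can be sketched as follows. This is an illustrative simulation with invented variance values, not the authors' data or code, and it uses a method-of-moments (two-way ANOVA) estimate for a balanced, fully crossed design rather than the linear mixed model fitted in the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented variance components (NOT the study's values):
# score = grand mean + student effect + rater effect + residual.
n_students, n_raters = 200, 30
sd_student, sd_rater, sd_resid = 10.0, 8.0, 12.0

student_eff = rng.normal(0, sd_student, n_students)
rater_eff = rng.normal(0, sd_rater, n_raters)
scores = (60.0
          + student_eff[:, None]
          + rater_eff[None, :]
          + rng.normal(0, sd_resid, (n_students, n_raters)))

# Method-of-moments estimates for a balanced crossed design with
# one observation per student-rater cell.
grand = scores.mean()
ms_student = n_raters * np.sum((scores.mean(axis=1) - grand) ** 2) / (n_students - 1)
ms_rater = n_students * np.sum((scores.mean(axis=0) - grand) ** 2) / (n_raters - 1)
resid = (scores
         - scores.mean(axis=1, keepdims=True)
         - scores.mean(axis=0, keepdims=True)
         + grand)
ms_error = np.sum(resid ** 2) / ((n_students - 1) * (n_raters - 1))

var_student = max((ms_student - ms_error) / n_raters, 0.0)
var_rater = max((ms_rater - ms_error) / n_students, 0.0)
var_resid = ms_error

# Rater ICC: proportion of total score variance explained by the rater.
icc_rater = var_rater / (var_student + var_rater + var_resid)
print(f"ICC rater = {icc_rater:.1%}")  # true value in this simulation is 64/308 ≈ 20.8%
```

The study's rater ICCs of 23.0, 16.8, and 32.8% correspond to `icc_rater` here; a higher value means a larger share of the score spread is attributable to which rater graded the recording rather than to the student's performance.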
format | Online Article Text |
id | pubmed-9745699 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-9745699 2022-12-13 eOSCE stations live versus remote evaluation and scores variability Bouzid, Donia Mullaert, Jimmy Ghazali, Aiham Ferré, Valentine Marie Mentré, France Lemogne, Cédric Ruszniewski, Philippe Faye, Albert Dinh, Alexy Tran Mirault, Tristan BMC Med Educ Research BACKGROUND: Objective structured clinical examinations (OSCEs) are known to be a fair evaluation method. In recent years, the use of online OSCEs (eOSCEs) has spread. This study aimed to compare remote versus live evaluation and to assess the factors associated with score variability during eOSCEs. METHODS: We conducted large-scale eOSCEs at the medical school of the Université de Paris Cité in June 2021 and recorded all the students’ performances, allowing a second evaluation. To assess agreement in our context of multiple raters and students, we fitted a linear mixed model with student and rater as random effects and the score as the explained variable. RESULTS: One hundred seventy observations were analyzed for the first station after quality control. We retained 192 and 110 observations for the statistical analysis of the two other stations. The median scores and interquartile ranges were 60 out of 100 (IQR 50–70), 60 out of 100 (IQR 54–70), and 53 out of 100 (IQR 45–62) for the three stations. The proportions of score variance explained by the rater (rater ICC) were 23.0, 16.8, and 32.8%, respectively. Of the 31 raters, 18 (58%) were male. Scores did not differ significantly according to the gender of the rater (p = 0.96, 0.10, and 0.26, respectively). The two evaluations showed no systematic difference in scores (p = 0.92, 0.053, and 0.38, respectively). CONCLUSION: Our study suggests that remote evaluation is as reliable as live evaluation for eOSCEs. SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12909-022-03919-1.
BioMed Central 2022-12-13 /pmc/articles/PMC9745699/ /pubmed/36514011 http://dx.doi.org/10.1186/s12909-022-03919-1 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by/4.0/ Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data. |
spellingShingle | Research Bouzid, Donia Mullaert, Jimmy Ghazali, Aiham Ferré, Valentine Marie Mentré, France Lemogne, Cédric Ruszniewski, Philippe Faye, Albert Dinh, Alexy Tran Mirault, Tristan eOSCE stations live versus remote evaluation and scores variability |
title | eOSCE stations live versus remote evaluation and scores variability |
title_full | eOSCE stations live versus remote evaluation and scores variability |
title_fullStr | eOSCE stations live versus remote evaluation and scores variability |
title_full_unstemmed | eOSCE stations live versus remote evaluation and scores variability |
title_short | eOSCE stations live versus remote evaluation and scores variability |
title_sort | eosce stations live versus remote evaluation and scores variability |
topic | Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9745699/ https://www.ncbi.nlm.nih.gov/pubmed/36514011 http://dx.doi.org/10.1186/s12909-022-03919-1 |
work_keys_str_mv | AT bouziddonia eoscestationsliveversusremoteevaluationandscoresvariability AT mullaertjimmy eoscestationsliveversusremoteevaluationandscoresvariability AT ghazaliaiham eoscestationsliveversusremoteevaluationandscoresvariability AT ferrevalentinemarie eoscestationsliveversusremoteevaluationandscoresvariability AT mentrefrance eoscestationsliveversusremoteevaluationandscoresvariability AT lemognecedric eoscestationsliveversusremoteevaluationandscoresvariability AT ruszniewskiphilippe eoscestationsliveversusremoteevaluationandscoresvariability AT fayealbert eoscestationsliveversusremoteevaluationandscoresvariability AT dinhalexytran eoscestationsliveversusremoteevaluationandscoresvariability AT miraulttristan eoscestationsliveversusremoteevaluationandscoresvariability AT eoscestationsliveversusremoteevaluationandscoresvariability |