
i-Assess: Evaluating the impact of electronic data capture for OSCE

INTRODUCTION: Tablet-based assessments offer benefits over scannable-paper assessments; however, little is known about their impact on the variability of assessment scores.
METHODS: Two studies were conducted to evaluate changes in rating technology. Rating modality (paper vs. tablets) was manipulated between candidates (Study 1) and within candidates (Study 2). Average scores were analyzed using repeated-measures ANOVA, Cronbach's alpha and generalizability theory. Post-hoc analyses included a Rasch analysis and McDonald's omega.
RESULTS: Study 1 revealed a main effect of modality (F(1, 152) = 25.06, p < 0.01): average tablet-based scores were higher (3.39/5, 95% CI 3.28 to 3.51) than average scan-sheet scores (3.00/5, 95% CI 2.90 to 3.11). Study 2 also revealed a main effect of modality (F(1, 88) = 15.64, p < 0.01); however, the difference was reduced to 2%, with higher scan-sheet scores (3.36, 95% CI 3.30 to 3.42) than tablet scores (3.27, 95% CI 3.21 to 3.33). Internal consistency (alpha and omega) remained high (>0.8) and inter-station reliability remained constant (0.3). Rasch analyses showed no relationship between station difficulty and rating modality.
DISCUSSION: Analyses of average scores may be misleading without an understanding of the internal consistency and overall reliability of the scores. Although updating to tablet-based forms did not result in systematic variations in scores, routine analyses ensured accurate interpretation of the variability of assessment scores.
CONCLUSION: This study demonstrates the importance of ongoing program evaluation and data analysis.
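The METHODS above name the statistical checks but give no computational detail. The following is a minimal, hypothetical Python sketch, not the authors' analysis code: all data are simulated, and the candidate and station counts and score distributions are assumptions chosen only to mirror the general design described in the abstract. It illustrates how a two-level within-candidate comparison of rating modalities and a Cronbach's alpha estimate of internal consistency might be computed for OSCE-style scores.

# Minimal illustrative sketch; simulated data only, not the study's actual analysis.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n_candidates, n_stations = 90, 10  # assumed design, loosely echoing Study 2's within-candidate setup

# Simulated 1-5 global ratings: one candidate-by-station matrix per rating modality.
ability = rng.normal(3.3, 0.4, size=(n_candidates, 1))
paper = np.clip(ability + rng.normal(0.0, 0.5, (n_candidates, n_stations)), 1, 5)
tablet = np.clip(ability - 0.09 + rng.normal(0.0, 0.5, (n_candidates, n_stations)), 1, 5)

# Within-candidate modality effect: with only two repeated levels, a repeated-measures
# ANOVA is equivalent to a paired t-test on candidate means (F = t**2).
t_stat, p_val = ttest_rel(paper.mean(axis=1), tablet.mean(axis=1))
print(f"Modality effect: F(1, {n_candidates - 1}) = {t_stat**2:.2f}, p = {p_val:.4f}")

def cronbach_alpha(scores):
    """Cronbach's alpha across stations (columns) for one modality."""
    k = scores.shape[1]
    item_var_sum = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

print(f"Alpha, paper forms:  {cronbach_alpha(paper):.2f}")
print(f"Alpha, tablet forms: {cronbach_alpha(tablet):.2f}")

Because there are only two modality levels, scipy's paired t-test stands in for the repeated-measures ANOVA reported in the abstract; the generalizability, Rasch and omega analyses would require additional tooling and are not sketched here.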


Bibliographic Details
Main Authors: Monteiro, Sandra, Sibbald, Debra, Coetzee, Karen
Format: Online Article Text
Language: English
Published: Bohn Stafleu van Loghum 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5889381/
https://www.ncbi.nlm.nih.gov/pubmed/29488098
http://dx.doi.org/10.1007/s40037-018-0410-4
author Monteiro, Sandra
Sibbald, Debra
Coetzee, Karen
collection PubMed
format Online
Article
Text
id pubmed-5889381
institution National Center for Biotechnology Information
language English
publishDate 2018
publisher Bohn Stafleu van Loghum
record_format MEDLINE/PubMed
spelling Perspect Med Educ, Original Article. Bohn Stafleu van Loghum; published online 2018-02-27, issue 2018-04. © The Author(s) 2018. Open Access: this article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
title i-Assess: Evaluating the impact of electronic data capture for OSCE
topic Original Article