i-Assess: Evaluating the impact of electronic data capture for OSCE


Bibliographic Details
Main Authors: Monteiro, Sandra; Sibbald, Debra; Coetzee, Karen
Format: Online Article Text
Language: English
Published: Bohn Stafleu van Loghum, 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5889381/
https://www.ncbi.nlm.nih.gov/pubmed/29488098
http://dx.doi.org/10.1007/s40037-018-0410-4
Description
Summary:
INTRODUCTION: Tablet-based assessments offer benefits over scannable-paper assessments; however, little is known about their impact on the variability of assessment scores.
METHODS: Two studies were conducted to evaluate changes in rating technology. Rating modality (paper vs tablets) was manipulated between candidates (Study 1) and within candidates (Study 2). Average scores were analyzed using repeated measures ANOVA, Cronbach's alpha and generalizability theory. Post-hoc analyses included a Rasch analysis and McDonald's omega.
RESULTS: Study 1 revealed a main effect of modality (F(1, 152) = 25.06, p < 0.01). Average tablet-based scores were higher (3.39/5, 95% CI = 3.28 to 3.51) compared with average scan-sheet scores (3.00/5, 95% CI = 2.90 to 3.11). Study 2 also revealed a main effect of modality (F(1, 88) = 15.64, p < 0.01); however, the difference was reduced to 2%, with higher scan-sheet scores (3.36, 95% CI = 3.30 to 3.42) compared with tablet scores (3.27, 95% CI = 3.21 to 3.33). Internal consistency (alpha and omega) remained high (>0.8) and inter-station reliability remained constant (0.3). Rasch analyses showed no relationship between station difficulty and rating modality.
DISCUSSION: Analyses of average scores may be misleading without an understanding of the internal consistency and overall reliability of scores. Although updating to tablet-based forms did not result in systematic variations in scores, routine analyses ensured accurate interpretation of the variability of assessment scores.
CONCLUSION: This study demonstrates the importance of ongoing program evaluation and data analysis.
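The abstract reports internal consistency via Cronbach's alpha (alongside McDonald's omega). As a minimal sketch of how alpha is computed from a candidates-by-stations OSCE score matrix, using simulated data only (the function, the 1-5 scale clipping, and the ability/noise parameters are illustrative assumptions, not the study's data or code):

```python
import random
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a candidates x stations score matrix.

    scores: list of rows, one row of station scores per candidate.
    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of totals)
    """
    k = len(scores[0])
    item_vars = [variance(col) for col in zip(*scores)]    # per-station variance
    total_var = variance([sum(row) for row in scores])     # variance of candidate totals
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical synthetic data: 154 candidates rated at 10 stations on a
# 1-5 scale. A shared candidate "ability" term induces the inter-station
# correlation that drives alpha upward; all numbers here are simulated.
random.seed(1)
ability = [random.gauss(3.2, 0.4) for _ in range(154)]
scores = [[min(5.0, max(1.0, a + random.gauss(0, 0.5))) for _ in range(10)]
          for a in ability]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Because every station score in the simulation shares the candidate-level ability variance, alpha comes out well above zero; with purely independent noise per station it would fall toward zero, which is why the study could use alpha to check that switching rating modality had not degraded score consistency.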