Providing Validation Evidence for a Clinical-Science Module: Improving Testing Reliability with Quizzes

DESCRIPTION OF THE PROBLEM: High-stakes decision-making should rest on sound validation evidence; reliability is vital to this. A short exam may not be very reliable on its own within didactic courses, so supplementing it with quizzes might help. But how much? This study's objective was to understand how much reliability (for overall module-grades) could be gained by adding quiz data to traditional exam data in a clinical-science module.

THE INNOVATION: In didactic coursework, quizzes are a common instructional strategy. However, individual contexts/instructors can vary quiz use formatively and/or summatively. Second-year PharmD students took a clinical-science course in which a 5-week module focused on cardiovascular therapeutics. Generalizability Theory (G-Theory) combined seven quizzes leading to an exam into one module-level reliability, based on a model in which students were crossed with items nested in eight fixed testing occasions (using mGENOVA). Furthermore, G-Theory decision-studies were planned to illustrate changes in module-grade reliability as the number of quiz items and the relative weighting of quizzes were altered.

CRITICAL ANALYSIS: One hundred students took seven quizzes and one exam. Individually, the exam had 32 multiple-choice questions (MCQs; KR-20 reliability = 0.67), while the quizzes had a total of 50 MCQs (5-9 each), with most individual quiz KR-20s at or below 0.54. After combining the quizzes and exam using G-Theory, the estimated reliability of module-grades was 0.73, improved from the exam alone. Doubling the quiz weight from the syllabus' 18% quizzes and 82% exam increased the composite reliability of module-grades to 0.77. Reliability of 0.80 was achieved with equal weight for quizzes and exam.

NEXT STEPS: As expected, more items led to higher reliability. However, using quizzes predominantly formatively had little impact on reliability, while using quizzes more summatively (i.e., increasing their relative weight in the module-grade) improved it further. Thus, depending on use, quizzes can add to a course's rigor.
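The abstract reports per-test KR-20 reliabilities and a composite reliability for the weighted module grade. As a minimal illustrative sketch only (the study fit a G-Theory model in mGENOVA, not these classical-test-theory formulas), the two quantities are commonly computed as below; the function names and the assumption of uncorrelated component errors are mine, not the authors':

```python
import numpy as np

def kr20(items: np.ndarray) -> float:
    """Kuder-Richardson 20 reliability for a persons-by-items matrix of 0/1 scores."""
    k = items.shape[1]
    p = items.mean(axis=0)                      # proportion correct per item
    q = 1.0 - p
    total_var = items.sum(axis=1).var(ddof=1)   # variance of examinee total scores
    return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

def composite_reliability(weights, reliabilities, sds, corr=None):
    """Reliability of a weighted composite of component scores (classical test theory):
    rho_C = 1 - sum(w_i^2 * sd_i^2 * (1 - rho_i)) / var(composite).
    If corr is None, observed component scores are treated as uncorrelated,
    which is a simplifying assumption."""
    w = np.asarray(weights, dtype=float)
    r = np.asarray(reliabilities, dtype=float)
    s = np.asarray(sds, dtype=float)
    if corr is None:
        corr = np.eye(len(w))
    cov = np.outer(s, s) * np.asarray(corr, dtype=float)  # observed covariance matrix
    comp_var = w @ cov @ w                                # variance of the weighted composite
    error_var = np.sum(w**2 * s**2 * (1.0 - r))           # error variances add through squared weights
    return 1.0 - error_var / comp_var
```

Raising a component's weight raises its squared contribution to both signal and error, which is why shifting weight toward the (collectively longer) quiz pool can move the composite reliability the way the abstract describes.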

Bibliographic Details
Main Authors: Peeters, Michael J., Cor, M. Kenneth, Maki, Erik D.
Format: Online Article Text
Language: English
Published: University of Minnesota Libraries Publishing 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8102960/
https://www.ncbi.nlm.nih.gov/pubmed/34007668
http://dx.doi.org/10.24926/iip.v12i1.2235
_version_ 1783689215324717056
author Peeters, Michael J.
Cor, M. Kenneth
Maki, Erik D.
author_sort Peeters, Michael J.
collection PubMed
description DESCRIPTION OF THE PROBLEM: High-stakes decision-making should rest on sound validation evidence; reliability is vital to this. A short exam may not be very reliable on its own within didactic courses, so supplementing it with quizzes might help. But how much? This study's objective was to understand how much reliability (for overall module-grades) could be gained by adding quiz data to traditional exam data in a clinical-science module. THE INNOVATION: In didactic coursework, quizzes are a common instructional strategy. However, individual contexts/instructors can vary quiz use formatively and/or summatively. Second-year PharmD students took a clinical-science course in which a 5-week module focused on cardiovascular therapeutics. Generalizability Theory (G-Theory) combined seven quizzes leading to an exam into one module-level reliability, based on a model in which students were crossed with items nested in eight fixed testing occasions (using mGENOVA). Furthermore, G-Theory decision-studies were planned to illustrate changes in module-grade reliability as the number of quiz items and the relative weighting of quizzes were altered. CRITICAL ANALYSIS: One hundred students took seven quizzes and one exam. Individually, the exam had 32 multiple-choice questions (MCQs; KR-20 reliability = 0.67), while the quizzes had a total of 50 MCQs (5-9 each), with most individual quiz KR-20s at or below 0.54. After combining the quizzes and exam using G-Theory, the estimated reliability of module-grades was 0.73, improved from the exam alone. Doubling the quiz weight from the syllabus' 18% quizzes and 82% exam increased the composite reliability of module-grades to 0.77. Reliability of 0.80 was achieved with equal weight for quizzes and exam. NEXT STEPS: As expected, more items led to higher reliability. However, using quizzes predominantly formatively had little impact on reliability, while using quizzes more summatively (i.e., increasing their relative weight in the module-grade) improved it further. Thus, depending on use, quizzes can add to a course's rigor.
format Online
Article
Text
id pubmed-8102960
institution National Center for Biotechnology Information
language English
publishDate 2021
publisher University of Minnesota Libraries Publishing
record_format MEDLINE/PubMed
spelling pubmed-8102960 2021-05-17 Providing Validation Evidence for a Clinical-Science Module: Improving Testing Reliability with Quizzes Peeters, Michael J.; Cor, M. Kenneth; Maki, Erik D. Innov Pharm Note University of Minnesota Libraries Publishing 2021-02-26 /pmc/articles/PMC8102960/ /pubmed/34007668 http://dx.doi.org/10.24926/iip.v12i1.2235 Text en © Individual authors https://creativecommons.org/licenses/by-nc/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial License, which permits noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
title Providing Validation Evidence for a Clinical-Science Module: Improving Testing Reliability with Quizzes
topic Note
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8102960/
https://www.ncbi.nlm.nih.gov/pubmed/34007668
http://dx.doi.org/10.24926/iip.v12i1.2235