
Reliability of a seminar grading rubric in a grand rounds course

PURPOSE: Formal presentations are a common requirement for students in health professional programs, and evaluations are often viewed as subjective. To date, literature describing the reliability or validity of seminar grading rubrics is lacking. The objectives of this study were to characterize inter-rater agreement and internal consistency of a grading rubric used in a grand rounds seminar course. METHODS: Retrospective study of 252 student presentations given from fall 2007 to fall 2008. Data including student and faculty demographics, overall content scores, overall communication scores, subcomponents of content and communication, and total presentation scores were collected. Statistical analyses were performed using SPSS 16.0. RESULTS: The rubric demonstrated internal consistency (Cronbach’s alpha = 0.826). Mean grade difference between faculty graders was 4.54 percentage points (SD = 3.614), with a ≤ 10-point difference for 92.5% of faculty evaluations. Student self-evaluations correlated with faculty scores for content, communication, and overall presentation (r = 0.513, r = 0.455, and r = 0.539, respectively; P < 0.001 for all). When comparing mean faculty scores with students’ self-evaluations across quintiles, students with lower faculty evaluations overestimated their performance, and those with higher faculty evaluations underestimated their performance (P < 0.001). CONCLUSION: The seminar evaluation rubric demonstrated inter-rater agreement and internal consistency.

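For context, the internal consistency reported in the abstract (Cronbach’s alpha = 0.826) is conventionally computed from the variances of the rubric’s individual scored items and of the total score. This record does not show the computation, so the formula below is the standard textbook definition rather than the authors’ exact SPSS procedure; here k denotes the number of scored rubric items, \sigma_{Y_i}^2 the variance of scores on item i across presentations, and \sigma_X^2 the variance of total scores:

\alpha = \frac{k}{k-1} \left( 1 - \frac{\sum_{i=1}^{k} \sigma_{Y_i}^2}{\sigma_X^2} \right)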

Bibliographic Details
Autores principales: MacLaughlin, Eric J, Fike, David S, Alvarez, Carlos A, Seifert, Charles F, Blaszczyk, Amie T
Formato: Texto
Language: English
Published: Dove Medical Press 2010
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3004600/
https://www.ncbi.nlm.nih.gov/pubmed/21197366
http://dx.doi.org/10.2147/JMDH.S12346
_version_ 1782194005741666304
author MacLaughlin, Eric J
Fike, David S
Alvarez, Carlos A
Seifert, Charles F
Blaszczyk, Amie T
author_facet MacLaughlin, Eric J
Fike, David S
Alvarez, Carlos A
Seifert, Charles F
Blaszczyk, Amie T
author_sort MacLaughlin, Eric J
collection PubMed
description PURPOSE: Formal presentations are a common requirement for students in health professional programs, and evaluations are often viewed as subjective. To date, literature describing the reliability or validity of seminar grading rubrics is lacking. The objectives of this study were to characterize inter-rater agreement and internal consistency of a grading rubric used in a grand rounds seminar course. METHODS: Retrospective study of 252 student presentations given from fall 2007 to fall 2008. Data including student and faculty demographics, overall content scores, overall communication scores, subcomponents of content and communication, and total presentation scores were collected. Statistical analyses were performed using SPSS 16.0. RESULTS: The rubric demonstrated internal consistency (Cronbach’s alpha = 0.826). Mean grade difference between faculty graders was 4.54 percentage points (SD = 3.614), with a ≤ 10-point difference for 92.5% of faculty evaluations. Student self-evaluations correlated with faculty scores for content, communication, and overall presentation (r = 0.513, r = 0.455, and r = 0.539, respectively; P < 0.001 for all). When comparing mean faculty scores with students’ self-evaluations across quintiles, students with lower faculty evaluations overestimated their performance, and those with higher faculty evaluations underestimated their performance (P < 0.001). CONCLUSION: The seminar evaluation rubric demonstrated inter-rater agreement and internal consistency.
format Text
id pubmed-3004600
institution National Center for Biotechnology Information
language English
publishDate 2010
publisher Dove Medical Press
record_format MEDLINE/PubMed
spelling pubmed-30046002010-12-30 Reliability of a seminar grading rubric in a grand rounds course MacLaughlin, Eric J Fike, David S Alvarez, Carlos A Seifert, Charles F Blaszczyk, Amie T J Multidiscip Healthc Original Research PURPOSE: Formal presentations are a common requirement for students in health professional programs, and evaluations are often viewed as subjective. To date, literature describing the reliability or validity of seminar grading rubrics is lacking. The objectives of this study were to characterize inter-rater agreement and internal consistency of a grading rubric used in a grand rounds seminar course. METHODS: Retrospective study of 252 student presentations given from fall 2007 to fall 2008. Data including student and faculty demographics, overall content scores, overall communication scores, subcomponents of content and communication, and total presentation scores were collected. Statistical analyses were performed using SPSS 16.0. RESULTS: The rubric demonstrated internal consistency (Cronbach’s alpha = 0.826). Mean grade difference between faculty graders was 4.54 percentage points (SD = 3.614), with a ≤ 10-point difference for 92.5% of faculty evaluations. Student self-evaluations correlated with faculty scores for content, communication, and overall presentation (r = 0.513, r = 0.455, and r = 0.539, respectively; P < 0.001 for all). When comparing mean faculty scores with students’ self-evaluations across quintiles, students with lower faculty evaluations overestimated their performance, and those with higher faculty evaluations underestimated their performance (P < 0.001). CONCLUSION: The seminar evaluation rubric demonstrated inter-rater agreement and internal consistency. Dove Medical Press 2010-09-09 /pmc/articles/PMC3004600/ /pubmed/21197366 http://dx.doi.org/10.2147/JMDH.S12346 Text en © 2010 MacLaughlin et al, publisher and licensee Dove Medical Press Ltd. This is an Open Access article which permits unrestricted noncommercial use, provided the original work is properly cited.
spellingShingle Original Research
MacLaughlin, Eric J
Fike, David S
Alvarez, Carlos A
Seifert, Charles F
Blaszczyk, Amie T
Reliability of a seminar grading rubric in a grand rounds course
title Reliability of a seminar grading rubric in a grand rounds course
title_full Reliability of a seminar grading rubric in a grand rounds course
title_fullStr Reliability of a seminar grading rubric in a grand rounds course
title_full_unstemmed Reliability of a seminar grading rubric in a grand rounds course
title_short Reliability of a seminar grading rubric in a grand rounds course
title_sort reliability of a seminar grading rubric in a grand rounds course
topic Original Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3004600/
https://www.ncbi.nlm.nih.gov/pubmed/21197366
http://dx.doi.org/10.2147/JMDH.S12346
work_keys_str_mv AT maclaughlinericj reliabilityofaseminargradingrubricinagrandroundscourse
AT fikedavids reliabilityofaseminargradingrubricinagrandroundscourse
AT alvarezcarlosa reliabilityofaseminargradingrubricinagrandroundscourse
AT seifertcharlesf reliabilityofaseminargradingrubricinagrandroundscourse
AT blaszczykamiet reliabilityofaseminargradingrubricinagrandroundscourse