Assessing a Capstone Research Project in Medical Training: Examiner Consistency Using Generic Versus Domain-Specific Rubrics
Main Authors: | Reid, Katharine J.; Chiavaroli, Neville G.; Bilszta, Justin L. C. |
Format: | Online Article Text |
Language: | English |
Published: | SAGE Publications, 2022 |
Subjects: | |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8883397/ https://www.ncbi.nlm.nih.gov/pubmed/35237723 http://dx.doi.org/10.1177/23821205221081813 |
_version_ | 1784659921100341248 |
author | Reid, Katharine J. Chiavaroli, Neville G. Bilszta, Justin L. C. |
author_facet | Reid, Katharine J. Chiavaroli, Neville G. Bilszta, Justin L. C. |
author_sort | Reid, Katharine J. |
collection | PubMed |
description | Rubrics are utilized extensively in tertiary contexts to assess student performance on written tasks; however, their use for assessment of research projects has received little attention. In particular, there is little evidence on the reliability of examiner judgements according to rubric type (general or specific) in a research context. This research examines the concordance between pairs of examiners assessing a medical student research project during a two-year period employing a generic rubric followed by a subsequent two-year implementation of task-specific rubrics. Following examiner feedback, and with consideration to the available literature, we expected the task-specific rubrics would increase the consistency of examiner judgements and reduce the need for arbitration due to discrepant marks. However, in contrast, results showed that generic rubrics provided greater consistency of examiner judgements and fewer arbitrations compared with the task-specific rubrics. These findings have practical implications for educational practice in the assessment of research projects and contribute valuable empirical evidence to inform the development and use of rubrics in medical education. |
format | Online Article Text |
id | pubmed-8883397 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | SAGE Publications |
record_format | MEDLINE/PubMed |
spelling | pubmed-88833972022-03-01 Assessing a Capstone Research Project in Medical Training: Examiner Consistency Using Generic Versus Domain-Specific Rubrics Reid, Katharine J. Chiavaroli, Neville G. Bilszta, Justin L. C. J Med Educ Curric Dev Original Research Rubrics are utilized extensively in tertiary contexts to assess student performance on written tasks; however, their use for assessment of research projects has received little attention. In particular, there is little evidence on the reliability of examiner judgements according to rubric type (general or specific) in a research context. This research examines the concordance between pairs of examiners assessing a medical student research project during a two-year period employing a generic rubric followed by a subsequent two-year implementation of task-specific rubrics. Following examiner feedback, and with consideration to the available literature, we expected the task-specific rubrics would increase the consistency of examiner judgements and reduce the need for arbitration due to discrepant marks. However, in contrast, results showed that generic rubrics provided greater consistency of examiner judgements and fewer arbitrations compared with the task-specific rubrics. These findings have practical implications for educational practice in the assessment of research projects and contribute valuable empirical evidence to inform the development and use of rubrics in medical education. SAGE Publications 2022-02-24 /pmc/articles/PMC8883397/ /pubmed/35237723 http://dx.doi.org/10.1177/23821205221081813 Text en © The Author(s) 2022 https://creativecommons.org/licenses/by-nc/4.0/This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 License (https://creativecommons.org/licenses/by-nc/4.0/) which permits non-commercial use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage). |
spellingShingle | Original Research Reid, Katharine J. Chiavaroli, Neville G. Bilszta, Justin L. C. Assessing a Capstone Research Project in Medical Training: Examiner Consistency Using Generic Versus Domain-Specific Rubrics |
title | Assessing a Capstone Research Project in Medical Training: Examiner Consistency Using Generic Versus Domain-Specific Rubrics |
title_full | Assessing a Capstone Research Project in Medical Training: Examiner Consistency Using Generic Versus Domain-Specific Rubrics |
title_fullStr | Assessing a Capstone Research Project in Medical Training: Examiner Consistency Using Generic Versus Domain-Specific Rubrics |
title_full_unstemmed | Assessing a Capstone Research Project in Medical Training: Examiner Consistency Using Generic Versus Domain-Specific Rubrics |
title_short | Assessing a Capstone Research Project in Medical Training: Examiner Consistency Using Generic Versus Domain-Specific Rubrics |
title_sort | assessing a capstone research project in medical training: examiner consistency using generic versus domain-specific rubrics |
topic | Original Research |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8883397/ https://www.ncbi.nlm.nih.gov/pubmed/35237723 http://dx.doi.org/10.1177/23821205221081813 |
work_keys_str_mv | AT reidkatharinej assessingacapstoneresearchprojectinmedicaltrainingexaminerconsistencyusinggenericversusdomainspecificrubrics AT chiavarolinevilleg assessingacapstoneresearchprojectinmedicaltrainingexaminerconsistencyusinggenericversusdomainspecificrubrics AT bilsztajustinlc assessingacapstoneresearchprojectinmedicaltrainingexaminerconsistencyusinggenericversusdomainspecificrubrics |