Measuring Assessment Quality With an Assessment Utility Rubric for Medical Education

INTRODUCTION: Prior research has identified seven elements of a good assessment, but the elements have not been operationalized in the form of a rubric to rate assessment utility. It would be valuable for medical educators to have a systematic way to evaluate the utility of an assessment in order to determine if the assessment used is optimal for the setting. METHODS: We developed and refined an assessment utility rubric using a modified Delphi process. Twenty-nine graduate students pilot-tested the rubric in 2016 with hypothetical data from three examinations, and interrater reliability of rubric scores was measured with intraclass correlation coefficients (ICCs). RESULTS: Consensus for all rubric items was reached after three rounds. The resulting assessment utility rubric includes four elements (equivalence, educational effect, catalytic effect, acceptability) with three items each, one element (validity evidence) with five items, and space to provide four feasibility items relating to time and cost. Rater scores had ICC values greater than .75. DISCUSSION: The rubric shows promise in allowing educators to evaluate the utility of an assessment specific to their setting. The medical education field needs to give more consideration to how an assessment drives learning forward, how it motivates trainees, and whether it produces acceptable ranges of scores for all stakeholders.
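The abstract reports interrater reliability as ICC values above .75. Purely as an illustration (this is not the authors' code; the score matrix and the choice of the ICC(2,1) form are hypothetical assumptions, since the record does not state which ICC variant was used), a minimal Python/NumPy sketch of computing a two-way random-effects, single-rater ICC from a targets-by-raters matrix of rubric scores might look like this:

import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """ICC(2,1): two-way random-effects, absolute-agreement, single-rater ICC.

    scores is an (n_targets x k_raters) matrix, e.g. total rubric scores that
    several raters assigned to the same set of examinations.
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # one mean per rated examination
    col_means = scores.mean(axis=0)   # one mean per rater

    # Two-way ANOVA decomposition of the score matrix
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((scores - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    # Shrout & Fleiss ICC(2,1)
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

# Hypothetical data: 3 examinations, each scored by 4 raters on the rubric
scores = np.array([[42, 40, 44, 41],
                   [30, 28, 31, 29],
                   [55, 57, 54, 56]], dtype=float)
print(round(icc_2_1(scores), 2))  # about 0.99 here; values above .75 suggest strong agreement

Whether single or average measures (and consistency or absolute agreement) is the appropriate form depends on how the rubric scores are used, so the variant above is only one reasonable choice.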


Bibliographic Details
Main Authors: Colbert-Getz, Jorie M., Ryan, Michael, Hennessey, Erin, Lindeman, Brenessa, Pitts, Brian, Rutherford, Kim A., Schwengel, Deborah, Sozio, Stephen M., George, Jessica, Jung, Julianna
Format: Online Article Text
Language: English
Published in: MedEdPORTAL
Published: Association of American Medical Colleges, 2017-05-24
Subjects: Original Publication
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6338154/
https://www.ncbi.nlm.nih.gov/pubmed/30800790
http://dx.doi.org/10.15766/mep_2374-8265.10588
Rights: Copyright © 2017 Colbert-Getz et al. This is an open-access publication distributed under the terms of the Creative Commons Attribution-NonCommercial-Share Alike license (https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode).