Validation Evidence using Generalizability Theory for an Objective Structured Clinical Examination
OBJECTIVES: Performance-based assessments, including objective structured clinical examinations (OSCEs), are essential learning assessments within pharmacy education. Because important educational decisions can follow from performance-based assessment results, pharmacy colleges/schools should demonstrate acceptable rigor in validation of their learning assessments…
Main Authors: | Peeters, Michael J., Cor, M. Kenneth, Petite, Sarah E., Schroeder, Michelle N. |
Format: | Online Article Text |
Language: | English |
Published: | University of Minnesota Libraries Publishing, 2021 |
Subjects: | Note |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8102968/ https://www.ncbi.nlm.nih.gov/pubmed/34007675 http://dx.doi.org/10.24926/iip.v12i1.2110 |
_version_ | 1783689217206910976 |
author | Peeters, Michael J. Cor, M. Kenneth Petite, Sarah E. Schroeder, Michelle N. |
author_facet | Peeters, Michael J. Cor, M. Kenneth Petite, Sarah E. Schroeder, Michelle N. |
author_sort | Peeters, Michael J. |
collection | PubMed |
description | OBJECTIVES: Performance-based assessments, including objective structured clinical examinations (OSCEs), are essential learning assessments within pharmacy education. Because important educational decisions can follow from performance-based assessment results, pharmacy colleges/schools should demonstrate acceptable rigor in validation of their learning assessments. Though G-Theory has rarely been reported in pharmacy education, it would behoove pharmacy educators to use G-Theory to produce evidence demonstrating reliability as part of their OSCE validation process. This investigation demonstrates the use of G-Theory to describe reliability for an OSCE, as well as to show methods for enhancement of the OSCE’s reliability. INNOVATION: To evaluate practice-readiness in the semester before final-year rotations, third-year PharmD students took an OSCE. This OSCE included 14 stations over three weeks. Each week had four or five stations; one or two stations were scored by faculty-raters while three stations required students’ written responses. All stations were scored 1-4. For G-Theory analyses, we used G_Strings and then mGENOVA. CRITICAL ANALYSIS: Ninety-seven students completed the OSCE; stations were scored independently. First, a univariate G-Theory design of students crossed with stations nested in weeks (p × s:w) was used. The total-score g-coefficient (reliability) for this OSCE was 0.72. Variance components for test parameters were identified. Of note, students accounted for only some OSCE score variation. Second, a multivariate G-Theory design of students crossed with stations (p(•) × s°) was used. This further analysis revealed which week(s) were weakest for the reliability of test-scores from this learning assessment. Moreover, decision-studies showed how reliability could change depending on the number of stations each week. For a g-coefficient >0.80, seven stations per week were needed. 
Additionally, targets for improvements were identified. IMPLICATIONS: In test validation, evidence of reliability is vital for the inference of generalization; G-Theory provided this for our OSCE. Results indicated that the reliability of scores was mediocre and could be improved with more stations. Revision of problematic stations could help reliability as well. Within this need for more stations, one practical insight was to administer those stations over multiple weeks/occasions (instead of all stations in one occasion). |
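The decision-study (D-study) logic in the abstract above, in which reliability is projected as a function of the number of stations per week, can be sketched numerically. This is a minimal illustration of the standard relative g-coefficient for a persons × (stations:weeks) design; the variance components used here are illustrative placeholders, not values from the article's G_Strings or mGENOVA output.

```python
# Hedged sketch of a D-study for a p x (s:w) design (persons crossed with
# stations nested in weeks). Variance components are ASSUMED for illustration.

def g_coefficient(var_p, var_pw, var_psw, n_weeks, n_stations):
    """Relative (norm-referenced) g-coefficient for a p x (s:w) design.

    var_p    -- person (universe-score) variance
    var_pw   -- person x week interaction variance
    var_psw  -- person x station:week variance (confounded with error)
    """
    # Relative error variance shrinks as weeks and stations-per-week increase.
    rel_error = var_pw / n_weeks + var_psw / (n_weeks * n_stations)
    return var_p / (var_p + rel_error)

# Illustrative variance components (not the article's estimates).
var_p, var_pw, var_psw = 0.04, 0.01, 0.20

# Project reliability for alternative numbers of stations per week.
for n_s in (4, 5, 6, 7, 8):
    g = g_coefficient(var_p, var_pw, var_psw, n_weeks=3, n_stations=n_s)
    print(f"{n_s} stations/week over 3 weeks: g = {g:.2f}")
```

With these placeholder components, adding stations per week raises the g-coefficient by shrinking the station-related error term, which mirrors the article's finding that more stations (spread across weeks) would be needed to reach a g-coefficient above 0.80.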
format | Online Article Text |
id | pubmed-8102968 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | University of Minnesota Libraries Publishing |
record_format | MEDLINE/PubMed |
spelling | pubmed-81029682021-05-17 Validation Evidence using Generalizability Theory for an Objective Structured Clinical Examination Peeters, Michael J. Cor, M. Kenneth Petite, Sarah E. Schroeder, Michelle N. Innov Pharm Note OBJECTIVES: Performance-based assessments, including objective structured clinical examinations (OSCEs), are essential learning assessments within pharmacy education. Because important educational decisions can follow from performance-based assessment results, pharmacy colleges/schools should demonstrate acceptable rigor in validation of their learning assessments. Though G-Theory has rarely been reported in pharmacy education, it would behoove pharmacy educators to use G-Theory to produce evidence demonstrating reliability as part of their OSCE validation process. This investigation demonstrates the use of G-Theory to describe reliability for an OSCE, as well as to show methods for enhancement of the OSCE’s reliability. INNOVATION: To evaluate practice-readiness in the semester before final-year rotations, third-year PharmD students took an OSCE. This OSCE included 14 stations over three weeks. Each week had four or five stations; one or two stations were scored by faculty-raters while three stations required students’ written responses. All stations were scored 1-4. For G-Theory analyses, we used G_Strings and then mGENOVA. CRITICAL ANALYSIS: Ninety-seven students completed the OSCE; stations were scored independently. First, a univariate G-Theory design of students crossed with stations nested in weeks (p × s:w) was used. The total-score g-coefficient (reliability) for this OSCE was 0.72. Variance components for test parameters were identified. Of note, students accounted for only some OSCE score variation. Second, a multivariate G-Theory design of students crossed with stations (p(•) × s°) was used. This further analysis revealed which week(s) were weakest for the reliability of test-scores from this learning assessment. 
Moreover, decision-studies showed how reliability could change depending on the number of stations each week. For a g-coefficient >0.80, seven stations per week were needed. Additionally, targets for improvements were identified. IMPLICATIONS: In test validation, evidence of reliability is vital for the inference of generalization; G-Theory provided this for our OSCE. Results indicated that the reliability of scores was mediocre and could be improved with more stations. Revision of problematic stations could help reliability as well. Within this need for more stations, one practical insight was to administer those stations over multiple weeks/occasions (instead of all stations in one occasion). University of Minnesota Libraries Publishing 2021-02-26 /pmc/articles/PMC8102968/ /pubmed/34007675 http://dx.doi.org/10.24926/iip.v12i1.2110 Text en © Individual authors https://creativecommons.org/licenses/by-nc/4.0/This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial License, which permits noncommercial use, distribution, and reproduction in any medium, provided the original work is properly cited. |
spellingShingle | Note Peeters, Michael J. Cor, M. Kenneth Petite, Sarah E. Schroeder, Michelle N. Validation Evidence using Generalizability Theory for an Objective Structured Clinical Examination |
title | Validation Evidence using Generalizability Theory for an Objective Structured Clinical Examination |
title_full | Validation Evidence using Generalizability Theory for an Objective Structured Clinical Examination |
title_fullStr | Validation Evidence using Generalizability Theory for an Objective Structured Clinical Examination |
title_full_unstemmed | Validation Evidence using Generalizability Theory for an Objective Structured Clinical Examination |
title_short | Validation Evidence using Generalizability Theory for an Objective Structured Clinical Examination |
title_sort | validation evidence using generalizability theory for an objective structured clinical examination |
topic | Note |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8102968/ https://www.ncbi.nlm.nih.gov/pubmed/34007675 http://dx.doi.org/10.24926/iip.v12i1.2110 |
work_keys_str_mv | AT peetersmichaelj validationevidenceusinggeneralizabilitytheoryforanobjectivestructuredclinicalexamination AT cormkenneth validationevidenceusinggeneralizabilitytheoryforanobjectivestructuredclinicalexamination AT petitesarahe validationevidenceusinggeneralizabilitytheoryforanobjectivestructuredclinicalexamination AT schroedermichellen validationevidenceusinggeneralizabilitytheoryforanobjectivestructuredclinicalexamination |