The reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement
Main Authors: | Roberts, Chris; Shadbolt, Narelle; Clark, Tyler; Simpson, Phillip |
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central, 2014 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4182797/ https://www.ncbi.nlm.nih.gov/pubmed/25240385 http://dx.doi.org/10.1186/1472-6920-14-197 |
_version_ | 1782337610206674944 |
author | Roberts, Chris; Shadbolt, Narelle; Clark, Tyler; Simpson, Phillip |
author_facet | Roberts, Chris; Shadbolt, Narelle; Clark, Tyler; Simpson, Phillip |
author_sort | Roberts, Chris |
collection | PubMed |
description | BACKGROUND: Little is known about the technical adequacy of portfolios in reporting multiple complex academic and performance-based assessments. We explored, first, the factors influencing the precision of scoring within a programmatic assessment of student learning outcomes in an integrated clinical placement and, second, the degree to which validity evidence supported the interpretation of student scores. METHODS: Within generalisability theory, we estimated the contribution that the wanted factor (i.e. student capability) and unwanted factors (e.g. the impact of assessors) made to the variation in portfolio task scores. Relative and absolute standard errors of measurement provided a confidence interval around a pre-determined pass/fail standard for all six tasks. Validity evidence was sought by demonstrating the internal consistency of the portfolio and by exploring the relationship of student scores with clinical experience. RESULTS: The mean portfolio mark for 257 students, across 372 raters, based on six tasks, was 75.56 (SD 6.68). For a single student on one assessment task, 11% of the variance in scores was due to true differences in student capability. The largest interaction was context specificity (49%), the tendency for a student to engage with one task but not with another. Rater subjectivity accounted for 29%. An absolute standard error of measurement of 4.74% gave a 95% CI of +/- 9.30%, and a 68% CI of +/- 4.74%, around a pass/fail score of 57%. Construct validity was supported by the demonstration of an assessment framework, the internal consistency of the portfolio tasks, and higher scores for students who did the clinical placement later in the academic year. CONCLUSION: A portfolio designed as a programmatic assessment of an integrated clinical placement has sufficient evidence of validity to support a specific interpretation of student scores around passing a clinical placement. It has modest precision in assessing students’ achievement of a competency standard. There were identifiable areas for reducing measurement error and providing more certainty around decision-making. Reducing the measurement error would require engaging with the student body on the value of the tasks, more focussed academic and clinical supervisor training, and revisiting the rubric of the assessment in the light of feedback. |
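The confidence intervals reported in the abstract above follow directly from the absolute standard error of measurement (SEM). Below is a minimal sketch of that arithmetic, assuming normally distributed measurement error so that the 68% band is +/- 1 SEM and the 95% band is +/- 1.96 SEM; the multipliers are a standard assumption, not stated in the abstract.

```python
# Confidence bands around the pass/fail cut score, using the values
# reported in the abstract: absolute SEM = 4.74 percentage points,
# cut score = 57%.
sem = 4.74   # absolute standard error of measurement (% points)
cut = 57.0   # pre-determined pass/fail standard (%)

ci_68 = (cut - sem, cut + sem)                # +/- 1 SEM ~= 68% coverage
ci_95 = (cut - 1.96 * sem, cut + 1.96 * sem)  # +/- 1.96 SEM ~= 95% coverage

print(f"68% CI: {ci_68[0]:.2f}% to {ci_68[1]:.2f}%")  # 52.26% to 61.74%
print(f"95% CI: {ci_95[0]:.2f}% to {ci_95[1]:.2f}%")  # 47.71% to 66.29% (+/- ~9.30)
```

The variance shares likewise indicate how precision improves as scores are aggregated across tasks. The sketch below treats everything other than true student variance as error and assumes each error source averages down as tasks are added; this is a simplification for illustration, not the study's exact G-study design.

```python
# Variance shares for a single student on a single task, from the abstract.
var_student = 0.11  # true differences in student capability
var_context = 0.49  # student x task interaction (context specificity)
var_rater   = 0.29  # rater subjectivity
var_other   = 1.0 - (var_student + var_context + var_rater)  # unattributed remainder

error_one_task = var_context + var_rater + var_other

for n_tasks in (1, 6):
    # Averaging over n tasks divides the error variance by n.
    phi = var_student / (var_student + error_one_task / n_tasks)
    print(f"{n_tasks} task(s): dependability ~= {phi:.2f}")
# -> 1 task: ~0.11; 6 tasks: ~0.43, consistent with the "modest precision" noted.
```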
format | Online Article Text |
id | pubmed-4182797 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2014 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-4182797 2014-10-03 The reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement Roberts, Chris; Shadbolt, Narelle; Clark, Tyler; Simpson, Phillip BMC Med Educ Research Article BACKGROUND: Little is known about the technical adequacy of portfolios in reporting multiple complex academic and performance-based assessments. We explored, first, the factors influencing the precision of scoring within a programmatic assessment of student learning outcomes in an integrated clinical placement and, second, the degree to which validity evidence supported the interpretation of student scores. METHODS: Within generalisability theory, we estimated the contribution that the wanted factor (i.e. student capability) and unwanted factors (e.g. the impact of assessors) made to the variation in portfolio task scores. Relative and absolute standard errors of measurement provided a confidence interval around a pre-determined pass/fail standard for all six tasks. Validity evidence was sought by demonstrating the internal consistency of the portfolio and by exploring the relationship of student scores with clinical experience. RESULTS: The mean portfolio mark for 257 students, across 372 raters, based on six tasks, was 75.56 (SD 6.68). For a single student on one assessment task, 11% of the variance in scores was due to true differences in student capability. The largest interaction was context specificity (49%), the tendency for a student to engage with one task but not with another. Rater subjectivity accounted for 29%. An absolute standard error of measurement of 4.74% gave a 95% CI of +/- 9.30%, and a 68% CI of +/- 4.74%, around a pass/fail score of 57%. Construct validity was supported by the demonstration of an assessment framework, the internal consistency of the portfolio tasks, and higher scores for students who did the clinical placement later in the academic year. CONCLUSION: A portfolio designed as a programmatic assessment of an integrated clinical placement has sufficient evidence of validity to support a specific interpretation of student scores around passing a clinical placement. It has modest precision in assessing students’ achievement of a competency standard. There were identifiable areas for reducing measurement error and providing more certainty around decision-making. Reducing the measurement error would require engaging with the student body on the value of the tasks, more focussed academic and clinical supervisor training, and revisiting the rubric of the assessment in the light of feedback. BioMed Central 2014-09-20 /pmc/articles/PMC4182797/ /pubmed/25240385 http://dx.doi.org/10.1186/1472-6920-14-197 Text en © Roberts et al.; licensee BioMed Central Ltd. 2014 This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. |
spellingShingle | Research Article Roberts, Chris; Shadbolt, Narelle; Clark, Tyler; Simpson, Phillip The reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement |
title | The reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement |
title_full | The reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement |
title_fullStr | The reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement |
title_full_unstemmed | The reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement |
title_short | The reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement |
title_sort | reliability and validity of a portfolio designed as a programmatic assessment of performance in an integrated clinical placement |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4182797/ https://www.ncbi.nlm.nih.gov/pubmed/25240385 http://dx.doi.org/10.1186/1472-6920-14-197 |
work_keys_str_mv | AT robertschris thereliabilityandvalidityofaportfoliodesignedasaprogrammaticassessmentofperformanceinanintegratedclinicalplacement AT shadboltnarelle thereliabilityandvalidityofaportfoliodesignedasaprogrammaticassessmentofperformanceinanintegratedclinicalplacement AT clarktyler thereliabilityandvalidityofaportfoliodesignedasaprogrammaticassessmentofperformanceinanintegratedclinicalplacement AT simpsonphillip thereliabilityandvalidityofaportfoliodesignedasaprogrammaticassessmentofperformanceinanintegratedclinicalplacement AT robertschris reliabilityandvalidityofaportfoliodesignedasaprogrammaticassessmentofperformanceinanintegratedclinicalplacement AT shadboltnarelle reliabilityandvalidityofaportfoliodesignedasaprogrammaticassessmentofperformanceinanintegratedclinicalplacement AT clarktyler reliabilityandvalidityofaportfoliodesignedasaprogrammaticassessmentofperformanceinanintegratedclinicalplacement AT simpsonphillip reliabilityandvalidityofaportfoliodesignedasaprogrammaticassessmentofperformanceinanintegratedclinicalplacement |