Construct validation of judgement-based assessments of medical trainees’ competency in the workplace using a “Kanesian” approach to validation

BACKGROUND: Evaluations of clinical assessments that use judgement-based methods have frequently shown them to have sub-optimal reliability and internal validity evidence for their interpretation and intended use. The aim of this study was to enhance that validity evidence by evaluating the internal validity and reliability of competency constructs from supervisors’ end-of-term summative assessments of prevocational medical trainees.

METHODS: The populations were medical trainees preparing for full registration as a medical practitioner (n = 74) and supervisors who undertook ≥2 end-of-term summative assessments (n = 349), all from a single institution. Confirmatory factor analysis was used to evaluate the internal construct validity of the assessment. The hypothesised competency construct model to be tested, identified by exploratory factor analysis, had a theoretical basis established in the workplace-psychology literature. Comparisons were made with competing models of potential competency constructs, including the competency construct model of the original assessment. The optimal model for the competency constructs was identified using model fit and measurement invariance analyses. Construct homogeneity was assessed with Cronbach’s α. Reliability measures were the variance components of individual competency items and of the identified competency constructs, and the number of assessments needed to achieve adequate reliability (R > 0.80).

RESULTS: The hypothesised competency constructs of “general professional job performance”, “clinical skills” and “professional abilities” provided a good model fit to the data, and a better fit than all alternative models. Model fit indices were χ2/df = 2.8; RMSEA = 0.073 (CI 0.057–0.088); CFI = 0.93; TLI = 0.95; SRMR = 0.039; WRMR = 0.93; AIC = 3879; and BIC = 4018. The optimal model had adequate measurement invariance, with nested analysis of important population subgroups supporting the presence of full metric invariance. Reliability estimates for the competency construct “general professional job performance” indicated a resource-efficient and reliable assessment for such a construct (6 assessments for R > 0.80). Item homogeneity was good (Cronbach’s α = 0.899). The other competency constructs are resource intensive, requiring ≥11 assessments for a reliable assessment score.

CONCLUSION: The internal validity and reliability of clinical competence assessments using judgement-based methods are acceptable when the actual competency constructs used by assessors are adequately identified. Validation for the interpretation and use of supervisors’ assessments in local training schemes is feasible using standard methods for gathering validity evidence.

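As a minimal illustration of the internal-consistency statistic cited in the abstract (Cronbach’s α = 0.899 for the “general professional job performance” items), the Python sketch below computes Cronbach’s α for a small matrix of hypothetical supervisor ratings. The data and item count are made up for illustration; this is not the study’s code or dataset.

# Minimal sketch: Cronbach's alpha for an (assessments x items) rating matrix.
# The `ratings` array is hypothetical example data, not the study's dataset.
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (assessments x items) matrix of ratings."""
    item_scores = np.asarray(item_scores, dtype=float)
    n_items = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)      # per-item variance
    total_variance = item_scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings: 8 assessments x 4 items on a 1-5 scale.
ratings = np.array([
    [4, 4, 5, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 3, 2],
    [4, 5, 4, 4],
    [3, 3, 3, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.3f}")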
Bibliographic Details
Main Authors: McGill, D. A., van der Vleuten, C. P. M., Clarke, M. J.
Format: Online Article Text
Language: English
Published: BioMed Central, 30 December 2015
Journal: BMC Med Educ
Subjects: Research Article
Rights: © McGill et al. 2015. Distributed under the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/)
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4696206/
https://www.ncbi.nlm.nih.gov/pubmed/26715145
http://dx.doi.org/10.1186/s12909-015-0520-1
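
The abstract’s statement that 6 assessments achieve R > 0.80 for the “general professional job performance” construct is the kind of projection typically obtained from the Spearman-Brown formula applied to a single-assessment reliability coefficient. The sketch below shows that arithmetic; the single-assessment reliability of 0.40 is back-calculated here purely for illustration and is not a figure reported in the abstract.

# Illustrative sketch of the Spearman-Brown projection behind statements such as
# "6 assessments for an R > 0.80". The value rho = 0.40 is an assumed,
# back-calculated single-assessment reliability, not reported in the abstract.
import math

def composite_reliability(rho, n):
    """Spearman-Brown: reliability of the mean of n parallel assessments."""
    return n * rho / (1 + (n - 1) * rho)

def assessments_needed(rho, target):
    """Smallest n whose composite reliability reaches the target."""
    return math.ceil(target * (1 - rho) / (rho * (1 - target)))

rho = 0.40  # hypothetical single-assessment reliability
print(assessments_needed(rho, 0.80))            # -> 6
print(round(composite_reliability(rho, 6), 3))  # -> 0.8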