
Validity of a new assessment rubric for a short-answer test of clinical reasoning

BACKGROUND: The validity of high-stakes decisions derived from assessment results is of primary concern to candidates and certifying institutions in the health professions. In the field of orthopaedic manual physical therapy (OMPT), there is a dearth of documented validity evidence to support the certification process particularly for short-answer tests. To address this need, we examined the internal structure of the Case History Assessment Tool (CHAT); this is a new assessment rubric developed to appraise written responses to a short-answer test of clinical reasoning in post-graduate OMPT certification in Canada. METHODS: Fourteen physical therapy students (novices) and 16 physical therapists (PT) with minimal and substantial OMPT training respectively completed a mock examination. Four pairs of examiners (n = 8) participated in appraising written responses using the CHAT. We conducted separate generalizability studies (G studies) for all participants and also by level of OMPT training. Internal consistency was calculated for test questions with more than 2 assessment items. Decision studies were also conducted to determine optimal application of the CHAT for OMPT certification. RESULTS: The overall reliability of CHAT scores was found to be moderate; however, reliability estimates for the novice group suggest that the scale was incapable of accommodating for scores of novices. Internal consistency estimates indicate item redundancies for several test questions which will require further investigation. CONCLUSION: Future validity studies should consider discriminating the clinical reasoning competence of OMPT trainees strictly at the post-graduate level. Although rater variance was low, the large variance attributed to error sources not incorporated in our G studies warrant further investigations into other threats to validity. Future examination of examiner stringency is also warranted.


Bibliographic Details
Main Authors: Yeung, Euson, Kulasagarem, Kulamakan, Woods, Nicole, Dubrowski, Adam, Hodges, Brian, Carnahan, Heather
Format: Online Article Text
Language: English
Published: BioMed Central 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4962495/
https://www.ncbi.nlm.nih.gov/pubmed/27461249
http://dx.doi.org/10.1186/s12909-016-0714-1
_version_ 1782444848438050816
author Yeung, Euson
Kulasagarem, Kulamakan
Woods, Nicole
Dubrowski, Adam
Hodges, Brian
Carnahan, Heather
author_facet Yeung, Euson
Kulasagarem, Kulamakan
Woods, Nicole
Dubrowski, Adam
Hodges, Brian
Carnahan, Heather
author_sort Yeung, Euson
collection PubMed
description BACKGROUND: The validity of high-stakes decisions derived from assessment results is of primary concern to candidates and certifying institutions in the health professions. In the field of orthopaedic manual physical therapy (OMPT), there is a dearth of documented validity evidence to support the certification process particularly for short-answer tests. To address this need, we examined the internal structure of the Case History Assessment Tool (CHAT); this is a new assessment rubric developed to appraise written responses to a short-answer test of clinical reasoning in post-graduate OMPT certification in Canada. METHODS: Fourteen physical therapy students (novices) and 16 physical therapists (PT) with minimal and substantial OMPT training respectively completed a mock examination. Four pairs of examiners (n = 8) participated in appraising written responses using the CHAT. We conducted separate generalizability studies (G studies) for all participants and also by level of OMPT training. Internal consistency was calculated for test questions with more than 2 assessment items. Decision studies were also conducted to determine optimal application of the CHAT for OMPT certification. RESULTS: The overall reliability of CHAT scores was found to be moderate; however, reliability estimates for the novice group suggest that the scale was incapable of accommodating for scores of novices. Internal consistency estimates indicate item redundancies for several test questions which will require further investigation. CONCLUSION: Future validity studies should consider discriminating the clinical reasoning competence of OMPT trainees strictly at the post-graduate level. Although rater variance was low, the large variance attributed to error sources not incorporated in our G studies warrant further investigations into other threats to validity. Future examination of examiner stringency is also warranted.
format Online
Article
Text
id pubmed-4962495
institution National Center for Biotechnology Information
language English
publishDate 2016
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-49624952016-07-28 Validity of a new assessment rubric for a short-answer test of clinical reasoning Yeung, Euson Kulasagarem, Kulamakan Woods, Nicole Dubrowski, Adam Hodges, Brian Carnahan, Heather BMC Med Educ Research Article BACKGROUND: The validity of high-stakes decisions derived from assessment results is of primary concern to candidates and certifying institutions in the health professions. In the field of orthopaedic manual physical therapy (OMPT), there is a dearth of documented validity evidence to support the certification process particularly for short-answer tests. To address this need, we examined the internal structure of the Case History Assessment Tool (CHAT); this is a new assessment rubric developed to appraise written responses to a short-answer test of clinical reasoning in post-graduate OMPT certification in Canada. METHODS: Fourteen physical therapy students (novices) and 16 physical therapists (PT) with minimal and substantial OMPT training respectively completed a mock examination. Four pairs of examiners (n = 8) participated in appraising written responses using the CHAT. We conducted separate generalizability studies (G studies) for all participants and also by level of OMPT training. Internal consistency was calculated for test questions with more than 2 assessment items. Decision studies were also conducted to determine optimal application of the CHAT for OMPT certification. RESULTS: The overall reliability of CHAT scores was found to be moderate; however, reliability estimates for the novice group suggest that the scale was incapable of accommodating for scores of novices. Internal consistency estimates indicate item redundancies for several test questions which will require further investigation. CONCLUSION: Future validity studies should consider discriminating the clinical reasoning competence of OMPT trainees strictly at the post-graduate level. Although rater variance was low, the large variance attributed to error sources not incorporated in our G studies warrant further investigations into other threats to validity. Future examination of examiner stringency is also warranted. BioMed Central 2016-07-26 /pmc/articles/PMC4962495/ /pubmed/27461249 http://dx.doi.org/10.1186/s12909-016-0714-1 Text en © The Author(s). 2016 Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
spellingShingle Research Article
Yeung, Euson
Kulasagarem, Kulamakan
Woods, Nicole
Dubrowski, Adam
Hodges, Brian
Carnahan, Heather
Validity of a new assessment rubric for a short-answer test of clinical reasoning
title Validity of a new assessment rubric for a short-answer test of clinical reasoning
title_full Validity of a new assessment rubric for a short-answer test of clinical reasoning
title_fullStr Validity of a new assessment rubric for a short-answer test of clinical reasoning
title_full_unstemmed Validity of a new assessment rubric for a short-answer test of clinical reasoning
title_short Validity of a new assessment rubric for a short-answer test of clinical reasoning
title_sort validity of a new assessment rubric for a short-answer test of clinical reasoning
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4962495/
https://www.ncbi.nlm.nih.gov/pubmed/27461249
http://dx.doi.org/10.1186/s12909-016-0714-1
work_keys_str_mv AT yeungeuson validityofanewassessmentrubricforashortanswertestofclinicalreasoning
AT kulasagaremkulamakan validityofanewassessmentrubricforashortanswertestofclinicalreasoning
AT woodsnicole validityofanewassessmentrubricforashortanswertestofclinicalreasoning
AT dubrowskiadam validityofanewassessmentrubricforashortanswertestofclinicalreasoning
AT hodgesbrian validityofanewassessmentrubricforashortanswertestofclinicalreasoning
AT carnahanheather validityofanewassessmentrubricforashortanswertestofclinicalreasoning
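Note on the methods named in the description field: the abstract reports that internal consistency was calculated for test questions with more than two assessment items. As a minimal illustration only (not the authors' analysis, which additionally relied on generalizability and decision studies), the Python sketch below computes Cronbach's alpha from a hypothetical candidates-by-items score matrix; the function name and the example data are assumptions introduced here for demonstration.

# Illustrative sketch, not the CHAT study's actual analysis.
# Cronbach's alpha for the items within a single test question.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = candidates, columns = assessment items."""
    k = scores.shape[1]                         # number of items
    item_vars = scores.var(axis=0, ddof=1)      # variance of each item across candidates
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of candidates' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: 6 candidates scored 0-4 on 3 items of one question
scores = np.array([
    [3, 4, 3],
    [2, 2, 3],
    [4, 4, 4],
    [1, 2, 1],
    [3, 3, 2],
    [2, 3, 2],
])
print(round(cronbach_alpha(scores), 2))  # ~0.91 for this made-up matrix

A value near 1.0 for a question with few items is the kind of pattern the abstract describes as possible item redundancy, though the study's reliability estimates themselves come from its G-study variance components rather than from this formula alone.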