Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen
Main authors: | Goos, Matthias; Schubach, Fabian; Seifert, Gabriel; Boeker, Martin |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BioMed Central, 2016 |
Subjects: | Research Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4989333/ https://www.ncbi.nlm.nih.gov/pubmed/27535826 http://dx.doi.org/10.1186/s12893-016-0173-y |
_version_ | 1782448551614218240 |
---|---|
author | Goos, Matthias; Schubach, Fabian; Seifert, Gabriel; Boeker, Martin |
author_facet | Goos, Matthias; Schubach, Fabian; Seifert, Gabriel; Boeker, Martin |
author_sort | Goos, Matthias |
collection | PubMed |
description | BACKGROUND: Health professionals often manage medical problems in critical situations under time pressure and on the basis of vague information. In recent years, dual process theory has provided a framework of cognitive processes to assist students in developing clinical reasoning skills, which are especially critical in surgery because of the high workload and elevated stress levels. However, clinical reasoning skills can be observed only indirectly, and the corresponding constructs are difficult to measure when assessing student performance. The script concordance test has become established in this field, and a number of studies suggest that it delivers a valid assessment of clinical reasoning. However, different scoring methods have been proposed, and they reflect different interpretations of the underlying construct. In this work we shed light on the theoretical framework of script theory and give an overview of script concordance testing. We constructed a script concordance test in the clinical context of the “acute abdomen” and compared previously proposed scores with regard to their validity. METHODS: A test comprising 52 items in 18 clinical scenarios was developed, revised according to the guidelines, and administered to 56 fourth- and fifth-year medical students at the end of a blended-learning seminar. We scored the answers using five different scoring methods (distance (2×), aggregate (2×), single best answer) and compared the scoring keys, the resulting final scores, and Cronbach’s α after normalization of the raw scores. RESULTS: All scores except the single-best-answer calculation achieved acceptable reliability (≥ 0.75), as measured by Cronbach’s α. Students were clearly distinguishable from the experts, whose results were set to a mean of 80 and an SD of 5 by the normalization process. With the two aggregate scoring methods, the students’ mean values were between 62.5 (AGGPEN) and 63.9 (AGG), equivalent to about three expert SDs below the experts’ mean (Cronbach’s α: 0.76 (AGGPEN) and 0.75 (AGG)). With the two distance scoring methods, the students’ means were between 62.8 (DMODE) and 66.8 (DMEAN), equivalent to about two expert SDs below the experts’ mean (Cronbach’s α: 0.77 (DMODE) and 0.79 (DMEAN)). In this study, the single best answer (SBA) scoring key yielded the worst psychometric results (Cronbach’s α: 0.68). CONCLUSION: If the psychometric properties of the script concordance test scores are valid, clinical reasoning skills can be measured reliably with different scoring keys in the SCT presented here. Psychometrically, the distance methods seem superior, and inherent statistical properties of the scales might play a significant role in this. For methodological reasons, the aggregate methods can also be used. Despite the limitations and complexity of the underlying scoring process and the reliability calculation, we advocate the SCT because it allows a new perspective on the measurement and teaching of cognitive skills. |
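The two measurement steps named in the abstract — rescaling raw scores so the expert panel lands at a mean of 80 and an SD of 5, then computing Cronbach's α over the item scores — can be sketched as follows. This is an illustrative implementation using only the conventions stated in the abstract, not the authors' actual scoring code; all score data in the usage example are invented.

```python
# Sketch of the normalization and reliability steps described in the
# abstract: raw scores are linearly rescaled so that the expert group
# has mean 80 and SD 5, and internal consistency is then measured with
# Cronbach's alpha. Illustrative only; data are invented.
import statistics

def normalize(raw_scores, expert_raw_mean, expert_raw_sd,
              target_mean=80.0, target_sd=5.0):
    """Rescale raw scores so the experts end up at mean 80, SD 5."""
    return [target_mean + target_sd * (x - expert_raw_mean) / expert_raw_sd
            for x in raw_scores]

def cronbach_alpha(item_scores):
    """Cronbach's alpha for item_scores given as a list of per-item
    score lists (one inner list per item, one entry per examinee)."""
    k = len(item_scores)
    item_variances = [statistics.pvariance(item) for item in item_scores]
    total_scores = [sum(examinee) for examinee in zip(*item_scores)]
    total_variance = statistics.pvariance(total_scores)
    return k / (k - 1) * (1 - sum(item_variances) / total_variance)
```

For example, if the experts' raw mean is 10 with raw SD 2, a student scoring 8 is rescaled to 75.0, one expert SD (5 points) below the expert mean of 80 — the same convention used to express the student–expert gap in the results.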
format | Online Article Text |
id | pubmed-4989333 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | BioMed Central |
record_format | MEDLINE/PubMed |
spelling | pubmed-4989333 2016-08-19 Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen Goos, Matthias; Schubach, Fabian; Seifert, Gabriel; Boeker, Martin. BMC Surg, Research Article. BioMed Central 2016-08-17 /pmc/articles/PMC4989333/ /pubmed/27535826 http://dx.doi.org/10.1186/s12893-016-0173-y Text en © The Author(s). 2016. Open Access: This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated. |
spellingShingle | Research Article; Goos, Matthias; Schubach, Fabian; Seifert, Gabriel; Boeker, Martin; Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen |
title | Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen |
title_full | Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen |
title_fullStr | Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen |
title_full_unstemmed | Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen |
title_short | Validation of undergraduate medical student script concordance test (SCT) scores on the clinical assessment of the acute abdomen |
title_sort | validation of undergraduate medical student script concordance test (sct) scores on the clinical assessment of the acute abdomen |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4989333/ https://www.ncbi.nlm.nih.gov/pubmed/27535826 http://dx.doi.org/10.1186/s12893-016-0173-y |
work_keys_str_mv | AT goosmatthias validationofundergraduatemedicalstudentscriptconcordancetestsctscoresontheclinicalassessmentoftheacuteabdomen AT schubachfabian validationofundergraduatemedicalstudentscriptconcordancetestsctscoresontheclinicalassessmentoftheacuteabdomen AT seifertgabriel validationofundergraduatemedicalstudentscriptconcordancetestsctscoresontheclinicalassessmentoftheacuteabdomen AT boekermartin validationofundergraduatemedicalstudentscriptconcordancetestsctscoresontheclinicalassessmentoftheacuteabdomen |