
Do medical students’ scores using different assessment instruments predict their scores in clinical reasoning using a computer-based simulation?

PURPOSE: The development of clinical problem-solving skills evolves over time and requires structured training and background knowledge. Computer-based case simulations (CCS) have been used for teaching and assessment of clinical reasoning skills. However, previous studies examining the psychometric properties of CCS as an assessment tool have been controversial. Furthermore, studies reporting the integration of CCS into problem-based medical curricula have been limited.

METHODS: This study examined the psychometric properties of using CCS software (DxR Clinician) for the assessment of medical students (n=130) studying in a problem-based, integrated multisystem module (Unit IX) during the academic year 2011–2012. Internal consistency reliability of CCS scores was calculated using Cronbach's alpha. The relationships between students' scores in the CCS components (clinical reasoning, diagnostic performance, and patient management) and their scores in the other examination tools used at the end of the unit (multiple-choice questions, short-answer questions, the objective structured clinical examination [OSCE], and real patient encounters) were analyzed using stepwise hierarchical linear regression.

RESULTS: Internal consistency reliability of the CCS scores was high (α=0.862). Inter-item correlations between students' scores in the different CCS components, and between CCS scores and the other test items, were statistically significant. Regression analysis indicated that OSCE scores predicted 32.7% and 35.1% of the variance in clinical reasoning and patient management scores, respectively (P<0.01). Multiple-choice question scores, however, predicted only 15.4% of the variance in diagnostic performance scores (P<0.01), while students' scores in real patient encounters did not predict any of the CCS scores.

CONCLUSION: Students' OSCE scores are the most important predictors of their clinical reasoning and patient management scores in CCS. Real patient encounter assessment, however, does not appear to test a construct similar to that tested by CCS.
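
The abstract names two computations: Cronbach's alpha for internal consistency and hierarchical linear regression for the share of variance (R²) each assessment explains. The following is a minimal Python sketch of what those computations look like in general; the variable names (osce, mcq, ccs_reasoning, etc.) and the synthetic data are invented for illustration and are not the study's dataset or code.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a table of item scores (rows = students, columns = items)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic stand-in data (n=130 students, as in the study; values are made up).
rng = np.random.default_rng(0)
n = 130
df = pd.DataFrame({
    "osce": rng.normal(70, 8, n),
    "mcq": rng.normal(65, 10, n),
    "saq": rng.normal(60, 9, n),
})
# Assumed outcome: a CCS clinical-reasoning score that partly tracks OSCE performance.
df["ccs_reasoning"] = 0.6 * df["osce"] + 0.2 * df["mcq"] + rng.normal(0, 6, n)

# Cronbach's alpha across illustrative CCS component scores.
ccs_items = pd.DataFrame({
    "reasoning": df["ccs_reasoning"],
    "diagnosis": 0.5 * df["mcq"] + rng.normal(0, 7, n),
    "management": 0.5 * df["osce"] + rng.normal(0, 7, n),
})
print(f"Cronbach's alpha: {cronbach_alpha(ccs_items):.3f}")

# Hierarchical (blockwise) regression: add predictors step by step and compare R^2.
steps = [["osce"], ["osce", "mcq"], ["osce", "mcq", "saq"]]
prev_r2 = 0.0
for predictors in steps:
    X = sm.add_constant(df[predictors])
    model = sm.OLS(df["ccs_reasoning"], X).fit()
    print(f"{predictors}: R^2 = {model.rsquared:.3f} (delta R^2 = {model.rsquared - prev_r2:.3f})")
    prev_r2 = model.rsquared
```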

Bibliographic Details
Main Authors: Fida, Mariam; Kassab, Salah Eldin
Format: Online Article (Text)
Language: English
Published: Dove Medical Press, 20 February 2015
Journal: Adv Med Educ Pract (Advances in Medical Education and Practice)
Subjects: Original Research
License: © 2015 Fida and Kassab. Published by Dove Medical Press Limited under the Creative Commons Attribution – Non Commercial (unported, v3.0) License (http://creativecommons.org/licenses/by-nc/3.0/).
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4345894/
https://www.ncbi.nlm.nih.gov/pubmed/25759603
http://dx.doi.org/10.2147/AMEP.S77459