Validation of response processes in medical assessment using an explanatory item response model
BACKGROUND: Response process validation is a crucial source of test validity. The expected cognitive load scale was created to reflect expert judgments of the mental effort a borderline student needs to solve an item. Stem length affects the students' extraneous cognitive load. The purposes of this study were to develop an exam for medical students and to corroborate response process validity by analyzing the correlations between expected cognitive load, stem length, and item difficulty.

METHODS: This was a correlational study. Five medical teachers, serving as the experts, and 183 third-year medical students were enrolled from the Faculty of Medicine, Prince of Songkla University, Thailand. The instruments were a medical physiology exam and a three-level expected cognitive load evaluation form judged by the medical teachers. Data were analyzed using an explanatory item response model.

RESULTS: The test consisted of 20 items and 21 possible scores. The median score was 8, with a quartile deviation of 1.5. Nine items had long stems (more than two lines). Sixteen items were judged as high (level 2 or 3) expected cognitive load. When expected cognitive load was added to a Rasch model, it correlated significantly with item difficulty. In the Rasch model that included both expected cognitive load and stem length, a long stem had a greater effect on item difficulty than low expected cognitive load. However, the plain Rasch model (without item covariates) showed the best fit.

CONCLUSIONS: The long stem had a stronger correlation with test difficulty than expected cognitive load, which indirectly implied response process validity. We suggest incorporating stem length and expected cognitive load to create an appropriate distribution of the difficulty of the entire test.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s12909-022-03942-2.

Main Authors: Vattanavanit, Veerapong; Ngudgratoke, Sungworn; Khaninphasut, Purimpratch
Format: Online Article Text
Language: English
Published: BioMed Central, 2022-12-10
Subjects: Research
License: © The Author(s) 2022. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
Online Access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9737731/
https://www.ncbi.nlm.nih.gov/pubmed/36496386
http://dx.doi.org/10.1186/s12909-022-03942-2
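The explanatory item response model described in the abstract extends a Rasch model by expressing item difficulty through item properties (here, stem length and expected cognitive load), in the spirit of a linear logistic test model. The sketch below is an illustrative reconstruction only, using synthetic data and scikit-learn: the sample sizes, effect sizes, and the logistic-regression formulation are assumptions, not the authors' analysis or code.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_persons, n_items = 200, 20          # 20 items, as in the study; person count is assumed

# Synthetic person abilities and item properties
theta = rng.normal(0.0, 1.0, n_persons)
long_stem = (np.arange(n_items) < 9).astype(float)  # 9 long-stem items, as reported
load = rng.integers(1, 4, n_items).astype(float)    # expected cognitive load, levels 1-3

# Assumed generating model: both properties raise item difficulty
difficulty = 1.2 * long_stem + 0.6 * load

# Simulate correct/incorrect responses under the Rasch model
logit = theta[:, None] - difficulty[None, :]
y = (rng.random((n_persons, n_items)) < 1.0 / (1.0 + np.exp(-logit))).astype(int).ravel()

# Design matrix: one dummy per person plus the two item covariates
person_idx = np.repeat(np.arange(n_persons), n_items)
item_idx = np.tile(np.arange(n_items), n_persons)
X = np.column_stack([np.eye(n_persons)[person_idx],
                     long_stem[item_idx],
                     load[item_idx]])

# Near-unpenalized logistic regression approximates joint maximum likelihood
model = LogisticRegression(fit_intercept=False, C=1e6, max_iter=2000).fit(X, y)
eta_long, eta_load = model.coef_[0][-2:]
print(eta_long, eta_load)  # both negative: long stems and higher load reduce success odds
```

In a published analysis one would typically fit this as a generalized linear mixed model with random person effects (e.g., the `eirm` or `lme4` packages in R); the dummy-coded logistic regression above is just a compact way to show how item properties replace free item-difficulty parameters.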