
Automated Item Generation: impact of item variants on performance and standard setting

Bibliographic Details
Main Authors: Westacott, R., Badger, K., Kluth, D., Gurnell, M., Reed, M. W. R., Sam, A. H.
Format: Online Article Text
Language: English
Published: BioMed Central, 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10496230/
https://www.ncbi.nlm.nih.gov/pubmed/37697275
http://dx.doi.org/10.1186/s12909-023-04457-0
Collection: PubMed
Description:

BACKGROUND: Automated Item Generation (AIG) uses computer software to create multiple items from a single question model. There is currently a lack of data on whether item variants of a single question result in differences in student performance or human-derived standard setting. The purpose of this study was to use 50 Multiple Choice Questions (MCQs) as models to create four distinct tests, which would be standard set and given to final year UK medical students, and then to compare the performance and standard setting data for each.

METHODS: Pre-existing questions from the UK Medical Schools Council (MSC) Assessment Alliance item bank, created using traditional item writing techniques, were used to generate four 'isomorphic' 50-item MCQ tests using AIG software. Isomorphic questions use the same question template with minor alterations to test the same learning outcome. All UK medical schools were invited to deliver one of the four papers as an online formative assessment for their final year students. Each test was standard set using a modified Angoff method. Thematic analysis was conducted for item variants with high and low levels of variance in facility (for student performance) and average scores (for standard setting).

RESULTS: Two thousand two hundred eighteen students from 12 UK medical schools participated, with each school using one of the four papers. The average facility of the four papers ranged from 0.55 to 0.61, and the cut score ranged from 0.58 to 0.61. Twenty item models had a facility difference > 0.15, and 10 item models had a difference in standard setting of > 0.1. Variation in parameters that could alter clinical reasoning strategies had the greatest impact on item facility.

CONCLUSIONS: Item facility varied to a greater extent than the standard set. This difference may relate to variants causing greater disruption of clinical reasoning strategies in novice learners than in experts, but it is confounded by the possibility that the performance differences may be explained at school level, and therefore warrants further study.
Published in BMC Med Educ (Research section) by BioMed Central, 2023-09-11. © The Author(s) 2023. Open Access under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).