Calibrating the Medical Council of Canada’s Qualifying Examination Part I using an integrated item response theory framework: a comparison of models and designs
Main Authors: De Champlain, Andre F.; Boulais, Andre-Philippe; Dallas, Andrew
Format: Online Article Text
Language: English
Published: National Health Personnel Licensing Examination Board of the Republic of Korea, 2016
Subjects: Technical Report
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4751294/ https://www.ncbi.nlm.nih.gov/pubmed/26883811 http://dx.doi.org/10.3352/jeehp.2016.13.6
author | De Champlain, Andre F. Boulais, Andre-Philippe Dallas, Andrew |
collection | PubMed |
description | PURPOSE: The aim of this research was to compare different methods of calibrating the multiple choice question (MCQ) and clinical decision making (CDM) components of the Medical Council of Canada’s Qualifying Examination Part I (MCCQEI) based on item response theory. METHODS: Our data consisted of test results from 8,213 first-time applicants to the MCCQEI in the spring and fall 2010 and 2011 test administrations. The data set contained several thousand multiple choice items and several hundred CDM cases. Four dichotomous calibrations were run using BILOG-MG 3.0. All 3 mixed item format (dichotomous MCQ responses and polytomous CDM case scores) calibrations were conducted using PARSCALE 4. RESULTS: The 2-PL model had identical numbers of items with chi-square values at or below a Type I error rate of 0.01 (83/3,499 or 0.02). In all 3 polytomous models, whether the MCQs were anchored or concurrently run with the CDM cases, results suggested very poor fit. All IRT abilities estimated from dichotomous calibration designs correlated very highly with each other. IRT-based pass-fail rates were extremely similar, not only across calibration designs and methods, but also with the decisions actually reported to candidates. The largest difference noted in pass rates was 4.78%, which occurred between the mixed format concurrent 2-PL graded response model (pass rate = 80.43%) and the dichotomous anchored 1-PL calibrations (pass rate = 85.21%). CONCLUSION: Simpler calibration designs with dichotomized items should be implemented. The dichotomous calibrations provided better fit to the item response matrix than the more complex, polytomous calibrations. |
format | Online Article Text |
id | pubmed-4751294 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2016 |
publisher | National Health Personnel Licensing Examination Board of the Republic of Korea |
record_format | MEDLINE/PubMed |
spelling | pubmed-4751294 2016-03-01. J Educ Eval Health Prof, Technical Report. National Health Personnel Licensing Examination Board of the Republic of Korea, 2016-01-20. /pmc/articles/PMC4751294/ /pubmed/26883811 http://dx.doi.org/10.3352/jeehp.2016.13.6 Text en © 2016, National Health Personnel Licensing Examination Board of the Republic of Korea. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. |
title | Calibrating the Medical Council of Canada’s Qualifying Examination Part I using an integrated item response theory framework: a comparison of models and designs |
topic | Technical Report |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4751294/ https://www.ncbi.nlm.nih.gov/pubmed/26883811 http://dx.doi.org/10.3352/jeehp.2016.13.6 |
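For context, a minimal sketch of the IRT models named in the description field follows. These are the standard formulations of the two-parameter logistic (2-PL) model and of the graded response model; the notation (θ for candidate ability, a_i and b_i for item discrimination and difficulty) is conventional IRT notation and is not taken from the article itself. Under the 2-PL model, the probability of a correct response to dichotomous MCQ item i is

$$P_i(\theta) = \frac{1}{1 + \exp\!\left[-a_i(\theta - b_i)\right]},$$

with the 1-PL model as the special case in which all items share a common discrimination. For a polytomous CDM case i scored in ordered categories k = 0, 1, …, m_i, the graded response model specifies the cumulative probability of scoring in category k or higher, and the category probability as the difference of adjacent cumulative probabilities:

$$P^{*}_{ik}(\theta) = \frac{1}{1 + \exp\!\left[-a_i(\theta - b_{ik})\right]}, \qquad P_{ik}(\theta) = P^{*}_{ik}(\theta) - P^{*}_{i,k+1}(\theta),$$

with $P^{*}_{i0}(\theta) = 1$ and $P^{*}_{i,m_i+1}(\theta) = 0$. As the abstract notes, the dichotomous calibration designs were run in BILOG-MG 3.0, while the mixed-format designs combining dichotomous MCQ responses with polytomous CDM case scores were run in PARSCALE 4.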