
BMAT's predictive validity for medical school performance: A retrospective cohort study

Bibliographic Details

Main Authors: Davies, Daniel J., Sam, Amir H., Murphy, Kevin G., Khan, Shahid A., Choe, Ruth, Cleland, Jennifer
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2022
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9545404/
https://www.ncbi.nlm.nih.gov/pubmed/35514145
http://dx.doi.org/10.1111/medu.14819
collection PubMed
description

BACKGROUND: Although widely used, there is limited evidence of the BioMedical Admissions Test's (BMAT) predictive validity and incremental validity over prior educational attainment (PEA). We investigated BMAT's predictive and incremental validity for performance in two undergraduate medical schools, Imperial College School of Medicine (ICSM), UK, and Lee Kong Chian School of Medicine (LKCMedicine), Singapore. Our secondary goal was to compare the evidence collected with published evidence relating to comparable tools.

METHODS: This was a retrospective cohort study of four ICSM cohorts (1188 students, entering 2010–2013) and three LKCMedicine cohorts (222 students, 2013–2015). We investigated associations between BMAT Section 1 ('Thinking Skills'), Section 2 ('Scientific Knowledge and Applications') and Section 3a ('Writing Task') scores and written and clinical assessment performance across all programme years. Incremental validity over PEA (A‑levels) was investigated in a subset of ICSM students.

RESULTS: When BMAT sections were investigated independently, Section 2 scores predicted performance on all written assessments in both institutions, with mainly small effect sizes (standardised coefficient ranges: ICSM: 0.08–0.19; LKCMedicine: 0.22–0.36). Section 1 scores predicted Years 5 and 6 written assessment performance at ICSM (0.09–0.14) but nothing at LKCMedicine. Section 3a scores predicted only Year 5 clinical assessment performance at ICSM, with a coefficient <0.1. There were no positive associations with standardised coefficients >0.1 between BMAT performance and clinical assessment performance. Multivariable regressions confirmed that Section 2 scores were the most predictive. We found no clear evidence of incremental validity for any BMAT section scores over A‑level grades.

DISCUSSION: Schools that wish to assess scientific knowledge independently of A‑levels may find BMAT Section 2 useful. Comparison with previous studies indicates that, overall, BMAT seems less useful than comparable tools. Larger scale studies are needed. Broader questions regarding why institutions adopt certain admissions tests, including those with little evidence, need consideration.
id pubmed-9545404
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Med Educ (Medical Education), Research Articles
publishDate 2022-05-16 (online); 2022-09 (issue)
rights © 2022 The Authors. Medical Education published by Association for the Study of Medical Education and John Wiley & Sons Ltd. This is an open access article under the terms of the CC BY‑NC‑ND 4.0 License (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits use and distribution in any medium, provided the original work is properly cited, the use is non‑commercial and no modifications or adaptations are made.
topic Research Articles