
Predictive Validity Evidence for Medical Education Research Study Quality Instrument Scores: Quality of Submissions to JGIM’s Medical Education Special Issue


Bibliographic Details
Main Authors: Reed, Darcy A., Beckman, Thomas J., Wright, Scott M., Levine, Rachel B., Kern, David E., Cook, David A.
Format: Text
Language: English
Published: Springer-Verlag 2008
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2517948/
https://www.ncbi.nlm.nih.gov/pubmed/18612715
http://dx.doi.org/10.1007/s11606-008-0664-3
Description
Summary:
BACKGROUND: Deficiencies in medical education research quality are widely acknowledged. Content, internal structure, and criterion validity evidence support the use of the Medical Education Research Study Quality Instrument (MERSQI) to measure education research quality, but predictive validity evidence has not been explored.
OBJECTIVE: To describe the quality of manuscripts submitted to the 2008 Journal of General Internal Medicine (JGIM) medical education issue and determine whether MERSQI scores predict editorial decisions.
DESIGN AND PARTICIPANTS: Cross-sectional study of original, quantitative research studies submitted for publication.
MEASUREMENTS: Study quality measured by MERSQI scores (possible range 5–18).
RESULTS: Of 131 submitted manuscripts, 100 met inclusion criteria. The mean (SD) total MERSQI score was 9.6 (2.6), range 5–15.5. Most studies used single-group cross-sectional (54%) or pre-post designs (32%), were conducted at one institution (78%), and reported satisfaction or opinion outcomes (56%). Few (36%) reported validity evidence for evaluation instruments. A one-point increase in MERSQI score was associated with editorial decisions to send manuscripts for peer review versus reject without review (OR 1.31, 95% CI 1.07–1.61, p = 0.009) and to invite revisions after review versus reject after review (OR 1.29, 95% CI 1.05–1.58, p = 0.02). MERSQI scores predicted final acceptance versus rejection (OR 1.32; 95% CI 1.10–1.58, p = 0.003). The mean total MERSQI score of accepted manuscripts was significantly higher than that of rejected manuscripts (10.7 [2.5] versus 9.0 [2.4], p = 0.003).
CONCLUSIONS: MERSQI scores predicted editorial decisions and identified areas of methodological strength and weakness in submitted manuscripts. Researchers, reviewers, and editors might use this instrument as a measure of methodological quality.
ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1007/s11606-008-0664-3) contains supplementary material, which is available to authorized users.
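As a rough illustration not stated in the abstract, and assuming the reported odds ratio of 1.32 per point comes from a logistic regression model (so per-point effects multiply), the 1.7-point gap between the mean scores of accepted (10.7) and rejected (9.0) manuscripts would correspond to roughly 1.6 times the odds of acceptance:

$$
1.32^{\,10.7-9.0} \;=\; 1.32^{\,1.7} \;=\; e^{1.7\,\ln 1.32} \;\approx\; e^{0.47} \;\approx\; 1.6
$$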