Quality of cost evaluations of physician continuous professional development: Systematic review of reporting and methods

Bibliographic Details
Main Authors: Cook, David A., Wilkinson, John M., Foo, Jonathan
Format: Online Article Text
Language: English
Published: Bohn Stafleu van Loghum 2022
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9240125/
https://www.ncbi.nlm.nih.gov/pubmed/35357652
http://dx.doi.org/10.1007/s40037-022-00705-z
Description
Summary:

INTRODUCTION: We sought to evaluate the reporting and methodological quality of cost evaluations of physician continuing professional development (CPD).

METHODS: We conducted a systematic review, searching MEDLINE, Embase, PsycInfo, and the Cochrane Database for studies comparing the cost of physician CPD (last update 23 April 2020). Two reviewers, working independently, screened all articles for inclusion. Two reviewers extracted information on reporting quality using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS), and on methodological quality using the Medical Education Research Study Quality Instrument (MERSQI) and a published reference case.

RESULTS: Of 3338 potentially eligible studies, 62 were included. Operational definitions of methodological and reporting quality elements were iteratively revised. Articles reported a mean (SD) of 43% (20%) of CHEERS elements for the Title/Abstract, 56% (34%) for the Introduction, 66% (19%) for the Methods, 61% (17%) for the Results, and 66% (30%) for the Discussion, with an overall reporting index of 292 (83) (maximum 500). Valuation methods were reported infrequently (resource selection 10 of 62 [16%], resource quantitation 10 [16%], pricing 26 [42%]), as were descriptions/discussion of the physicians trained (42 [68%]), the training setting (42 [68%]), the training intervention (40 [65%]), sensitivity analyses of uncertainty (9 [15%]), and generalizability (30 [48%]). MERSQI scores ranged from 6.0 to 16.0 (mean 11.2 [2.4]). Changes over time in the reporting index (initial 241 [105], final 321 [52]) and MERSQI scores (initial 9.8 [2.7], final 11.9 [1.9]) were not statistically significant (p ≥ 0.08).

DISCUSSION: Methods and reporting of health professions education (HPE) cost evaluations fall short of current standards. Gaps exist in the valuation, analysis, and contextualization of cost outcomes.

SUPPLEMENTARY INFORMATION: The online version of this article (10.1007/s40037-022-00705-z) contains supplementary material, which is available to authorized users. This material includes the full search strategy, operational definitions of the CHEERS elements, and a list of all included studies with key information.
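Note on the reporting index: the overall figure appears to follow directly from the five section percentages above. Since each CHEERS section is scored as a percentage out of 100, summing the five means gives 43 + 56 + 66 + 61 + 66 = 292, matching the reported index (maximum 5 × 100 = 500). A minimal Python sketch of that arithmetic, under the assumption that the index is constructed as this simple sum (the article's supplementary material defines the operational details):

    # A hedged sketch, not the authors' code: reconstructing the overall
    # CHEERS reporting index as the sum of the five mean section percentages.
    # Assumption: each section contributes 0-100, so the maximum is 5 * 100 = 500.
    section_means = {
        "Title/Abstract": 43,  # mean % of CHEERS elements reported
        "Introduction": 56,
        "Methods": 66,
        "Results": 61,
        "Discussion": 66,
    }
    reporting_index = sum(section_means.values())
    print(f"Reporting index: {reporting_index} / 500")  # -> Reporting index: 292 / 500

The reported standard deviation of the index (83) is not the sum of the section SDs, as expected when the section scores are correlated across studies.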