
Designs of trials assessing interventions to improve the peer review process: a vignette-based survey


Bibliographic Details
Main Authors: Heim, Amytis, Ravaud, Philippe, Baron, Gabriel, Boutron, Isabelle
Format: Online Article Text
Language: English
Published: BioMed Central 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6192007/
https://www.ncbi.nlm.nih.gov/pubmed/30318018
http://dx.doi.org/10.1186/s12916-018-1167-7
Description
Summary:

BACKGROUND: We aimed to determine the best study designs for assessing interventions to improve the peer review process according to experts’ opinions. Furthermore, for interventions previously evaluated, we determined whether the study designs actually used were rated as the best study designs.

METHODS: Study design: A series of six vignette-based surveys exploring the best study designs for six different interventions (training peer reviewers, adding an expert to the peer review process, use of reporting guideline checklists, blinding peer reviewers to the results (i.e., results-free peer review), giving incentives to peer reviewers, and post-publication peer review). Vignette construction: Vignettes were case scenarios of trials assessing interventions aimed at improving the quality of peer review. For each intervention, the vignette included the study type (e.g., randomized controlled trial [RCT]), setting (e.g., single biomedical journal), and type of manuscript assessed (e.g., actual manuscripts received by the journal); each of these three features varied between vignettes. Participants: Researchers with expertise in peer review or the methodology of clinical trials. Outcome: Participants were presented with two vignettes describing two different study designs to assess the same intervention and had to indicate which study design they preferred on a scale from −5 (preference for study A) to 5 (preference for study B), with 0 indicating no preference between the suggested designs (primary outcome). Secondary outcomes were trust in the results and feasibility of the designs.

RESULTS: A total of 204 experts assessed 1044 paired comparisons. The preferred study type was RCTs with randomization of manuscripts for four interventions (adding an expert, use of a reporting guideline checklist, results-free peer review, post-publication peer review) and RCTs with randomization of peer reviewers for two interventions (training peer reviewers and using incentives). The preferred setting was mainly several biomedical journals from different publishers, and the preferred type of manuscript was actual manuscripts submitted to journals. However, the most feasible designs were often cluster RCTs and interrupted time series analyses set in a single biomedical journal, with the assessment of a fabricated manuscript. Three interventions had been assessed previously: none used the design rated first in preference by experts.

CONCLUSION: The vignette-based survey allowed us to identify the best study designs for assessing different interventions to improve peer review according to experts’ opinions. There is a gap between the preferred study designs and the designs actually used.

ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (10.1186/s12916-018-1167-7) contains supplementary material, which is available to authorized users.