
How well can we assess the validity of non-randomised studies of medications? A systematic review of assessment tools

Bibliographic Details
Main Authors: D'Andrea, Elvira, Vinals, Lydia, Patorno, Elisabetta, Franklin, Jessica M., Bennett, Dimitri, Largent, Joan A., Moga, Daniela C., Yuan, Hongbo, Wen, Xuerong, Zullo, Andrew R., Debray, Thomas P. A., Sarri, Grammati
Format: Online Article Text
Language: English
Published: BMJ Publishing Group 2021
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7993210/
https://www.ncbi.nlm.nih.gov/pubmed/33762237
http://dx.doi.org/10.1136/bmjopen-2020-043961
Description
Summary:
OBJECTIVE: To determine whether assessment tools for non-randomised studies (NRS) address critical elements that influence the validity of NRS findings for comparative safety and effectiveness of medications.
DESIGN: Systematic review and Delphi survey.
DATA SOURCES: We searched PubMed, Embase, Google, bibliographies of reviews and websites of influential organisations from inception to November 2019. In parallel, we conducted a Delphi survey among the International Society for Pharmacoepidemiology Comparative Effectiveness Research Special Interest Group to identify key methodological challenges for NRS of medications. We created a framework consisting of the reported methodological challenges to evaluate the selected NRS tools.
STUDY SELECTION: Checklists or scales assessing NRS.
DATA EXTRACTION: Two reviewers extracted general information and content data related to the prespecified framework.
RESULTS: Of 44 tools reviewed, 48% (n=21) assessed multiple NRS designs, while the other tools specifically addressed only case–control (n=12, 27%) or cohort studies (n=11, 25%). The response rate to the Delphi survey was 73% (35 of 48 content experts), and consensus was reached in only two rounds. Most tools evaluated methods for selecting study participants (n=43, 98%), although only one addressed selection bias due to depletion of susceptibles (2%). Many tools addressed the measurement of exposure and outcome (n=40, 91%), and measurement and control of confounders (n=40, 91%). Most tools had at least one item/question on design-specific sources of bias (n=40, 91%), but only a few investigated reverse causation (n=8, 18%), detection bias (n=4, 9%), time-related bias (n=3, 7%), lack of new-user design (n=2, 5%) or active comparator design (n=0). Few tools addressed the appropriateness of statistical analyses (n=15, 34%), methods for assessing internal (n=15, 34%) or external validity (n=11, 25%), or statistical uncertainty in the findings (n=21, 48%). None of the reviewed tools investigated all the methodological domains and subdomains.
CONCLUSIONS: The acknowledgement of major design-specific sources of bias (eg, lack of new-user design, lack of active comparator design, time-related bias, depletion of susceptibles, reverse causation) and the statistical assessment of internal and external validity are currently not sufficiently addressed in most of the existing tools. These critical elements should be integrated to systematically investigate the validity of NRS on comparative safety and effectiveness of medications.
SYSTEMATIC REVIEW PROTOCOL AND REGISTRATION: https://osf.io/es65q.