
Evaluation of randomized controlled trials: a primer and tutorial for mental health researchers

Bibliographic Details
Main Authors: Harrer, Mathias, Cuijpers, Pim, Schuurmans, Lea K. J., Kaiser, Tim, Buntrock, Claudia, van Straten, Annemieke, Ebert, David
Format: Online Article Text
Language: English
Published: BioMed Central 2023
Subjects: Methodology
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10469910/
https://www.ncbi.nlm.nih.gov/pubmed/37649083
http://dx.doi.org/10.1186/s13063-023-07596-3
Collection: PubMed
Description:
BACKGROUND: Considered one of the highest levels of evidence, results of randomized controlled trials (RCTs) remain an essential building block in mental health research. They are frequently used to confirm that an intervention “works” and to guide treatment decisions. Given their importance in the field, it is concerning that the quality of many RCT evaluations in mental health research remains poor. Common errors range from inadequate missing data handling and inappropriate analyses (e.g., baseline randomization tests or analyses of within-group changes) to undue interpretations of trial results and insufficient reporting. These deficiencies pose a threat to the robustness of mental health research and its impact on patient care. Many of these issues may be avoided in the future if mental health researchers are provided with a better understanding of what constitutes a high-quality RCT evaluation.

METHODS: In this primer article, we give an introduction to core concepts and caveats of clinical trial evaluations in mental health research. We also show how to implement current best practices using open-source statistical software.

RESULTS: Drawing on Rubin’s potential outcome framework, we describe how RCTs put us in a privileged position to study causality by ensuring that the potential outcomes of the randomized groups become exchangeable. We discuss how missing data can threaten the validity of our results if dropouts systematically differ from non-dropouts, introduce trial estimands as a way to align analyses with the goals of the evaluation, and explain how to set up an appropriate analysis model to test the treatment effect at one or several assessment points. A novice-friendly tutorial is provided alongside this primer. It lays out concepts in greater detail and showcases how to implement techniques using the statistical software R, based on a real-world RCT dataset.

DISCUSSION: Many problems of RCTs already arise at the design stage, and we examine some avoidable and unavoidable “weak spots” of this design in mental health research. For instance, we discuss how lack of prospective registration can give way to issues like outcome switching and selective reporting, how allegiance biases can inflate effect estimates, review recommendations and challenges in blinding patients in mental health RCTs, and describe problems arising from underpowered trials. Lastly, we discuss why not all randomized trials necessarily have limited external validity and examine how RCTs relate to ongoing efforts to personalize mental health care.

SUPPLEMENTARY INFORMATION: The online version contains supplementary material available at 10.1186/s13063-023-07596-3.
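The abstract recommends testing the treatment effect with a proper between-group analysis model rather than the within-group-change analyses it flags as an error. A minimal illustrative sketch of one common such model, ANCOVA (regressing the post-treatment score on group assignment while adjusting for the baseline score): this is not taken from the article's R tutorial; it uses Python with simulated data, and all numbers (sample size, effect of −3 points) are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 200  # hypothetical two-arm trial size

# Simulate a baseline symptom score and a post-treatment score with a
# true treatment effect of -3 points in the intervention group.
baseline = rng.normal(20, 5, n)
group = rng.integers(0, 2, n)  # 0 = control, 1 = intervention
post = 0.6 * baseline + 8 - 3 * group + rng.normal(0, 4, n)
df = pd.DataFrame({"baseline": baseline, "group": group, "post": post})

# ANCOVA: the coefficient on `group` estimates the between-group
# treatment effect at the post-treatment assessment, adjusted for baseline.
model = smf.ols("post ~ group + baseline", data=df).fit()
print(model.params["group"])          # estimated effect, near the true -3
print(model.conf_int().loc["group"])  # 95% confidence interval
```

Unlike a paired test of pre-to-post change within each arm, the `group` coefficient directly contrasts the randomized groups, which is the comparison randomization licenses.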
Record ID: pubmed-10469910
Institution: National Center for Biotechnology Information
Record Format: MEDLINE/PubMed
Journal: Trials (Methodology)
Published online: 2023-08-30
License: © The Author(s) 2023. Open Access under a Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).
Topic: Methodology