Beyond the Randomized Controlled Trial: A Review of Alternatives in mHealth Clinical Trial Methods

Bibliographic Details
Main Authors: Pham, Quynh; Wiljer, David; Cafazzo, Joseph A
Format: Online Article (Text)
Language: English
Published: JMIR Publications, 2016
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5035379/
https://www.ncbi.nlm.nih.gov/pubmed/27613084
http://dx.doi.org/10.2196/mhealth.5720
Description
Summary: BACKGROUND: Randomized controlled trials (RCTs) have long been considered the primary research study design capable of eliciting causal relationships between health interventions and consequent outcomes. However, with a prolonged duration from recruitment to publication, high-cost trial implementation, and a rigid trial protocol, RCTs are perceived as an impractical evaluation methodology for most mHealth apps.

OBJECTIVE: Given the recent development of alternative evaluation methodologies and tools to automate mHealth research, we sought to determine the breadth of these methods and the extent to which they were being used in clinical trials.

METHODS: We conducted a review of the ClinicalTrials.gov registry to identify and examine current clinical trials involving mHealth apps and retrieved relevant trials registered between November 2014 and November 2015.

RESULTS: Of the 137 trials identified, 71 met the inclusion criteria. The majority used a randomized controlled trial design (80%, 57/71). Study designs included 36 two-group pretest-posttest control group comparisons (51%, 36/71), 16 posttest-only control group comparisons (23%, 16/71), 7 one-group pretest-posttest designs (10%, 7/71), 2 one-shot case study designs (3%, 2/71), and 2 static-group comparisons (3%, 2/71). A total of 17 trials included a qualitative component in their methodology (24%, 17/71). Complete trial data collection required an average of 20 months (mean 21, SD 12). For trials with a total duration of 2 years or more (31%, 22/71), the average time from recruitment to complete data collection (mean 35 months, SD 10) was 2 years longer than the average time required to collect primary data (mean 11, SD 8). Trials had a moderate sample size of 112 participants. Two trials were conducted online (3%, 2/71), and 7 trials collected data continuously (10%, 7/68). Onsite study implementation was heavily favored (97%, 69/71). Trials with four data collection points had a longer study duration than trials with two data collection points: F(4,56)=3.2, P=.021, η²=0.18. Single-blinded trials had a longer data collection period than open trials: F(2,58)=3.8, P=.028, η²=0.12. Academic sponsorship was the most common form of trial funding (73%, 52/71). Trials with academic sponsorship had a longer study duration than trials with industry sponsorship: F(2,61)=3.7, P=.030, η²=0.11. Combined, data collection frequency, study masking, sample size, and study sponsorship accounted for 32.6% of the variance in study duration: F(4,55)=6.6, P<.01, adjusted R²=.33. Only 7 trials had been completed at the time this retrospective review was conducted (10%, 7/71).

CONCLUSIONS: mHealth evaluation methodology has not deviated from common methods, despite the need for more relevant and timely evaluations. Clinical evaluation needs to keep pace with the level of innovation in mHealth if it is to have a meaningful impact in informing payers, providers, policy makers, and patients.
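The RESULTS section reports one-way ANOVAs with eta-squared effect sizes and a multiple regression of study duration on trial characteristics. The sketch below is not the authors' analysis code; it uses entirely hypothetical data and hypothetical groupings to illustrate how statistics of that form (F, p, η², adjusted R²) can be computed in Python.

```python
# Minimal sketch of the kinds of analyses summarized in the abstract.
# All values below are simulated placeholders, not data from the reviewed trials.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Hypothetical study durations (months) grouped by number of data collection points.
groups = {
    2: rng.normal(24, 10, 20),
    3: rng.normal(30, 10, 20),
    4: rng.normal(38, 10, 20),
}

# One-way ANOVA comparing study duration across groups.
f_stat, p_val = stats.f_oneway(*groups.values())

# Eta-squared = SS_between / SS_total.
all_vals = np.concatenate(list(groups.values()))
grand_mean = all_vals.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_total = ((all_vals - grand_mean) ** 2).sum()
eta_sq = ss_between / ss_total
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.3f}, eta^2={eta_sq:.2f}")

# Multiple regression of study duration on data collection frequency, masking,
# sample size, and sponsorship (adjusted R^2, as in "adjusted R²=.33").
n = len(all_vals)
X = np.column_stack([
    np.repeat([2, 3, 4], 20),     # data collection points
    rng.integers(0, 2, n),        # masking (0 = open, 1 = single-blind)
    rng.normal(112, 40, n),       # sample size
    rng.integers(0, 2, n),        # sponsorship (0 = industry, 1 = academic)
])
model = sm.OLS(all_vals, sm.add_constant(X)).fit()
print(f"Regression: adjusted R^2={model.rsquared_adj:.2f}")
```

With simulated inputs the printed values are arbitrary; the point is only the shape of the analysis (group comparison with an effect size, then a combined model summarized by adjusted R²).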