
Power analysis for random‐effects meta‐analysis


Bibliographic Details
Main Authors: Jackson, Dan; Turner, Rebecca
Format: Online Article Text
Language: English
Published: John Wiley and Sons Inc. 2017
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5590730/
https://www.ncbi.nlm.nih.gov/pubmed/28378395
http://dx.doi.org/10.1002/jrsm.1240
Description
Summary: One of the reasons for the popularity of meta‐analysis is the notion that these analyses will possess more power to detect effects than individual studies. This is inevitably the case under a fixed‐effect model. However, the inclusion of the between‐study variance in the random‐effects model, and the need to estimate this parameter, can have unfortunate implications for this power. We develop methods for assessing the power of random‐effects meta‐analyses, and the average power of the individual studies that contribute to them, so that these powers can be compared. In addition to deriving new analytical results and methods, we apply our methods to 1991 meta‐analyses taken from the Cochrane Database of Systematic Reviews to retrospectively calculate their powers. We find that, in practice, 5 or more studies are needed to reasonably consistently achieve power from a random‐effects meta‐analysis that exceeds the power of the studies that contribute to it. Not only is statistical inference under the random‐effects model challenging when there are very few studies, but it is also less worthwhile in such cases. Our findings challenge the assumption that meta‐analysis will result in an increase in power.
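The power comparison described in the summary can be illustrated with a standard Wald-test calculation for the pooled random‐effects estimate. The sketch below is a minimal illustration, not the authors' exact method: it assumes the between‐study variance tau^2 is known rather than estimated, uses hypothetical effect size and within‐study variances, and compares (i) the power of the pooled random‐effects estimate, whose variance is 1 / sum(1 / (v_i + tau^2)), with (ii) the average power of the individual studies, treating each study's marginal variance as v_i + tau^2.

import numpy as np
from scipy.stats import norm

def power_two_sided(theta, variance, alpha=0.05):
    # Power of a two-sided Wald test of H0: effect = 0, given the true
    # effect `theta` and the variance of its estimator.
    z = norm.ppf(1 - alpha / 2)
    se = np.sqrt(variance)
    return (1 - norm.cdf(z - theta / se)) + norm.cdf(-z - theta / se)

def random_effects_meta_power(theta, within_vars, tau2, alpha=0.05):
    # Power of the random-effects pooled estimate, assuming tau^2 is known.
    # The pooled estimate has variance 1 / sum(1 / (v_i + tau^2)).
    within_vars = np.asarray(within_vars, dtype=float)
    pooled_var = 1.0 / np.sum(1.0 / (within_vars + tau2))
    return power_two_sided(theta, pooled_var, alpha)

def average_study_power(theta, within_vars, tau2, alpha=0.05):
    # Average power of the individual studies, using the marginal
    # variance v_i + tau^2 for each study under the random-effects model.
    within_vars = np.asarray(within_vars, dtype=float)
    powers = [power_two_sided(theta, v + tau2, alpha) for v in within_vars]
    return float(np.mean(powers))

if __name__ == "__main__":
    # Hypothetical inputs for illustration only.
    theta = 0.3                                    # assumed true average effect (e.g. log odds ratio)
    within_vars = [0.05, 0.08, 0.10, 0.12, 0.20]   # hypothetical within-study variances
    tau2 = 0.04                                    # assumed between-study variance
    print("Random-effects meta-analysis power:",
          random_effects_meta_power(theta, within_vars, tau2))
    print("Average individual-study power:",
          average_study_power(theta, within_vars, tau2))

With these hypothetical inputs the pooled power is substantially higher than the average study power, but shrinking the number of studies or inflating tau^2 in the example narrows or reverses that gap, which is the behaviour the article quantifies across the Cochrane meta-analyses.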