Do alternative methods for analysing count data produce similar estimates? Implications for meta-analyses

BACKGROUND: Many randomised trials have count outcomes, such as the number of falls or the number of asthma exacerbations. These outcomes have been treated as counts, continuous outcomes or dichotomised and analysed using a variety of analytical methods. This study examines whether different methods...

Full description

Bibliographic Details
Main Authors: Herbison, Peter, Robertson, M. Clare, McKenzie, Joanne E.
Format: Online Article Text
Language: English
Published: BioMed Central 2015
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4650317/
https://www.ncbi.nlm.nih.gov/pubmed/26577545
http://dx.doi.org/10.1186/s13643-015-0144-x
_version_ 1782401474843639808
author Herbison, Peter
Robertson, M. Clare
McKenzie, Joanne E.
author_facet Herbison, Peter
Robertson, M. Clare
McKenzie, Joanne E.
author_sort Herbison, Peter
collection PubMed
description BACKGROUND: Many randomised trials have count outcomes, such as the number of falls or the number of asthma exacerbations. These outcomes have been treated as counts, continuous outcomes or dichotomised and analysed using a variety of analytical methods. This study examines whether different methods of analysis yield estimates of intervention effect that are similar enough to be reasonably pooled in a meta-analysis. METHODS: Data were simulated for 10,000 randomised trials under three different amounts of overdispersion, four different event rates and two effect sizes. Each simulated trial was analysed using nine different methods of analysis: rate ratio, Poisson regression, negative binomial regression, risk ratio from dichotomised data, survival to the first event, two methods of adjusting for multiple survival times, ratio of means and ratio of medians. Individual patient data were gathered from eight fall prevention trials, and similar analyses were undertaken. RESULTS: All methods produced similar effect sizes when there was no difference between treatments. Results were similar when there was a moderate difference, with two exceptions when the event became more common: (1) risk ratios computed from dichotomised count outcomes and hazard ratios from survival analysis of the time to the first event yielded intervention effects that differed from rate ratios estimated from the negative binomial model (reference model), and (2) the precision of the estimates differed depending on the method used, which may affect both the pooled intervention effect and the observed heterogeneity. The results of the case study of individual data from eight trials evaluating exercise programmes to prevent falls in older people supported the simulation study findings. CONCLUSIONS: Information about the differences in treatments is lost when event rates increase and the outcome is dichotomised or time to the first event is analysed; otherwise, similar results are obtained. Further research is needed to examine the effect of differing variances from the different methods on the confidence intervals of pooled estimates. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s13643-015-0144-x) contains supplementary material, which is available to authorized users.
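As an illustration of the methods compared in the abstract, the short Python sketch below is a minimal example, not the authors' code: the sample size, event rate, overdispersion and effect-size values are arbitrary assumptions, and numpy and statsmodels are assumed to be available. It simulates one two-arm trial with overdispersed counts via a gamma-Poisson mixture, then contrasts the rate ratio from Poisson and negative binomial regression with the risk ratio obtained after dichotomising the counts into any event versus none.

# Minimal illustrative sketch (assumed values, not the authors' code):
# simulate one two-arm trial with overdispersed counts, then compare the
# rate ratio from Poisson and negative binomial regression with the risk
# ratio after dichotomising the counts (any event vs none).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2015)

n_per_arm = 200         # assumed participants per arm
baseline_rate = 2.0     # assumed mean events per control participant
true_rate_ratio = 0.75  # assumed intervention effect
shape = 1.0             # gamma shape; smaller values give more overdispersion

treat = np.repeat([0, 1], n_per_arm)
mu = baseline_rate * true_rate_ratio ** treat

# Gamma-Poisson mixture yields negative binomial counts: Var(y) = mu + mu^2/shape
lam = rng.gamma(shape=shape, scale=mu / shape)
y = rng.poisson(lam)

X = sm.add_constant(treat)

# Poisson regression: exp(treatment coefficient) estimates the rate ratio
poisson_rr = np.exp(sm.GLM(y, X, family=sm.families.Poisson()).fit().params[1])

# Negative binomial regression (the reference model in the paper)
negbin_rr = np.exp(sm.NegativeBinomial(y, X).fit(disp=0).params[1])

# Dichotomise, then estimate a risk ratio with a modified Poisson model
# (Poisson GLM with robust standard errors); this particular estimator is
# an assumption here, used as a convenient stand-in for log-binomial regression.
any_event = (y > 0).astype(int)
risk_ratio = np.exp(
    sm.GLM(any_event, X, family=sm.families.Poisson()).fit(cov_type="HC1").params[1]
)

print("Poisson rate ratio:          ", round(poisson_rr, 3))
print("Negative binomial rate ratio:", round(negbin_rr, 3))
print("Dichotomised risk ratio:     ", round(risk_ratio, 3))

Repeating such a simulation over many trials, as described under METHODS, is what allows point estimates and precision to be compared across methods; under overdispersion the Poisson model in particular tends to give intervals that are too narrow even when its point estimate agrees with the negative binomial rate ratio.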
format Online
Article
Text
id pubmed-4650317
institution National Center for Biotechnology Information
language English
publishDate 2015
publisher BioMed Central
record_format MEDLINE/PubMed
spelling pubmed-46503172015-11-19 Do alternative methods for analysing count data produce similar estimates? Implications for meta-analyses Herbison, Peter Robertson, M. Clare McKenzie, Joanne E. Syst Rev Research BACKGROUND: Many randomised trials have count outcomes, such as the number of falls or the number of asthma exacerbations. These outcomes have been treated as counts, continuous outcomes or dichotomised and analysed using a variety of analytical methods. This study examines whether different methods of analysis yield estimates of intervention effect that are similar enough to be reasonably pooled in a meta-analysis. METHODS: Data were simulated for 10,000 randomised trials under three different amounts of overdispersion, four different event rates and two effect sizes. Each simulated trial was analysed using nine different methods of analysis: rate ratio, Poisson regression, negative binomial regression, risk ratio from dichotomised data, survival to the first event, two methods of adjusting for multiple survival times, ratio of means and ratio of medians. Individual patient data were gathered from eight fall prevention trials, and similar analyses were undertaken. RESULTS: All methods produced similar effect sizes when there was no difference between treatments. Results were similar when there was a moderate difference, with two exceptions when the event became more common: (1) risk ratios computed from dichotomised count outcomes and hazard ratios from survival analysis of the time to the first event yielded intervention effects that differed from rate ratios estimated from the negative binomial model (reference model), and (2) the precision of the estimates differed depending on the method used, which may affect both the pooled intervention effect and the observed heterogeneity. The results of the case study of individual data from eight trials evaluating exercise programmes to prevent falls in older people supported the simulation study findings. CONCLUSIONS: Information about the differences in treatments is lost when event rates increase and the outcome is dichotomised or time to the first event is analysed; otherwise, similar results are obtained. Further research is needed to examine the effect of differing variances from the different methods on the confidence intervals of pooled estimates. ELECTRONIC SUPPLEMENTARY MATERIAL: The online version of this article (doi:10.1186/s13643-015-0144-x) contains supplementary material, which is available to authorized users. BioMed Central 2015-11-17 /pmc/articles/PMC4650317/ /pubmed/26577545 http://dx.doi.org/10.1186/s13643-015-0144-x Text en © Herbison et al. 2015 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
spellingShingle Research
Herbison, Peter
Robertson, M. Clare
McKenzie, Joanne E.
Do alternative methods for analysing count data produce similar estimates? Implications for meta-analyses
title Do alternative methods for analysing count data produce similar estimates? Implications for meta-analyses
title_full Do alternative methods for analysing count data produce similar estimates? Implications for meta-analyses
title_fullStr Do alternative methods for analysing count data produce similar estimates? Implications for meta-analyses
title_full_unstemmed Do alternative methods for analysing count data produce similar estimates? Implications for meta-analyses
title_short Do alternative methods for analysing count data produce similar estimates? Implications for meta-analyses
title_sort do alternative methods for analysing count data produce similar estimates? implications for meta-analyses
topic Research
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4650317/
https://www.ncbi.nlm.nih.gov/pubmed/26577545
http://dx.doi.org/10.1186/s13643-015-0144-x
work_keys_str_mv AT herbisonpeter doalternativemethodsforanalysingcountdataproducesimilarestimatesimplicationsformetaanalyses
AT robertsonmclare doalternativemethodsforanalysingcountdataproducesimilarestimatesimplicationsformetaanalyses
AT mckenziejoannee doalternativemethodsforanalysingcountdataproducesimilarestimatesimplicationsformetaanalyses