Agreement between ranking metrics in network meta-analysis: an empirical study
Main authors: | Chiocchia, Virginia; Nikolakopoulou, Adriani; Papakonstantinou, Theodoros; Egger, Matthias; Salanti, Georgia |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | BMJ Publishing Group, 2020 |
Subjects: | Epidemiology |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7440831/ https://www.ncbi.nlm.nih.gov/pubmed/32819946 http://dx.doi.org/10.1136/bmjopen-2020-037744 |
_version_ | 1783573190398705664 |
---|---|
author | Chiocchia, Virginia; Nikolakopoulou, Adriani; Papakonstantinou, Theodoros; Egger, Matthias; Salanti, Georgia |
author_facet | Chiocchia, Virginia; Nikolakopoulou, Adriani; Papakonstantinou, Theodoros; Egger, Matthias; Salanti, Georgia |
author_sort | Chiocchia, Virginia |
collection | PubMed |
description | OBJECTIVE: To empirically explore the level of agreement of the treatment hierarchies from different ranking metrics in network meta-analysis (NMA) and to investigate how network characteristics influence the agreement. DESIGN: Empirical evaluation from re-analysis of NMAs. DATA: 232 networks of four or more interventions from randomised controlled trials, published between 1999 and 2015. METHODS: We calculated treatment hierarchies from several ranking metrics: relative treatment effects, the probability of producing the best value ($p_{BV}$) and the surface under the cumulative ranking curve (SUCRA). We estimated the level of agreement between the treatment hierarchies using different measures: Kendall's $\tau$ and Spearman's $\rho$ correlation coefficients, and the Yilmaz $\tau_{AP}$ and Average Overlap, which give more weight to the top of the rankings. Finally, we assessed how the amount of information present in a network affects the agreement between treatment hierarchies, using the average variance, the relative range of variance and the total sample size over the number of interventions in a network. RESULTS: Overall, the pairwise agreement was high for all treatment hierarchies obtained by the different ranking metrics. The highest agreement was observed between SUCRA and the relative treatment effect, for both correlation and top-weighted measures, whose medians were all equal to 1. The agreement between rankings decreased for networks with less precise estimates, and the hierarchies obtained from $p_{BV}$ appeared to be the most sensitive to large differences in the variance estimates. However, such large differences were rare. CONCLUSIONS: Different ranking metrics address different treatment hierarchy problems; however, they produced similar rankings in the published networks. Researchers reporting NMA results can use the ranking metric they prefer, unless there are imprecise estimates or large imbalances in the variance estimates, in which case treatment hierarchies based on both probabilistic and non-probabilistic ranking metrics should be presented. (A minimal computational sketch of these ranking metrics and agreement measures follows the record fields below.) |
format | Online Article Text |
id | pubmed-7440831 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | BMJ Publishing Group |
record_format | MEDLINE/PubMed |
spelling | pubmed-7440831 2020-08-28 Agreement between ranking metrics in network meta-analysis: an empirical study Chiocchia, Virginia; Nikolakopoulou, Adriani; Papakonstantinou, Theodoros; Egger, Matthias; Salanti, Georgia BMJ Open Epidemiology OBJECTIVE: To empirically explore the level of agreement of the treatment hierarchies from different ranking metrics in network meta-analysis (NMA) and to investigate how network characteristics influence the agreement. DESIGN: Empirical evaluation from re-analysis of NMAs. DATA: 232 networks of four or more interventions from randomised controlled trials, published between 1999 and 2015. METHODS: We calculated treatment hierarchies from several ranking metrics: relative treatment effects, the probability of producing the best value ($p_{BV}$) and the surface under the cumulative ranking curve (SUCRA). We estimated the level of agreement between the treatment hierarchies using different measures: Kendall's $\tau$ and Spearman's $\rho$ correlation coefficients, and the Yilmaz $\tau_{AP}$ and Average Overlap, which give more weight to the top of the rankings. Finally, we assessed how the amount of information present in a network affects the agreement between treatment hierarchies, using the average variance, the relative range of variance and the total sample size over the number of interventions in a network. RESULTS: Overall, the pairwise agreement was high for all treatment hierarchies obtained by the different ranking metrics. The highest agreement was observed between SUCRA and the relative treatment effect, for both correlation and top-weighted measures, whose medians were all equal to 1. The agreement between rankings decreased for networks with less precise estimates, and the hierarchies obtained from $p_{BV}$ appeared to be the most sensitive to large differences in the variance estimates. However, such large differences were rare. CONCLUSIONS: Different ranking metrics address different treatment hierarchy problems; however, they produced similar rankings in the published networks. Researchers reporting NMA results can use the ranking metric they prefer, unless there are imprecise estimates or large imbalances in the variance estimates, in which case treatment hierarchies based on both probabilistic and non-probabilistic ranking metrics should be presented. BMJ Publishing Group 2020-08-20 /pmc/articles/PMC7440831/ /pubmed/32819946 http://dx.doi.org/10.1136/bmjopen-2020-037744 Text en © Author(s) (or their employer(s)) 2020. Re-use permitted under CC BY. Published by BMJ. https://creativecommons.org/licenses/by/4.0/ This is an open access article distributed in accordance with the Creative Commons Attribution 4.0 Unported (CC BY 4.0) license, which permits others to copy, redistribute, remix, transform and build upon this work for any purpose, provided the original work is properly cited, a link to the licence is given, and indication of whether changes were made. See: https://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Epidemiology; Chiocchia, Virginia; Nikolakopoulou, Adriani; Papakonstantinou, Theodoros; Egger, Matthias; Salanti, Georgia; Agreement between ranking metrics in network meta-analysis: an empirical study |
title | Agreement between ranking metrics in network meta-analysis: an empirical study |
title_full | Agreement between ranking metrics in network meta-analysis: an empirical study |
title_fullStr | Agreement between ranking metrics in network meta-analysis: an empirical study |
title_full_unstemmed | Agreement between ranking metrics in network meta-analysis: an empirical study |
title_short | Agreement between ranking metrics in network meta-analysis: an empirical study |
title_sort | agreement between ranking metrics in network meta-analysis: an empirical study |
topic | Epidemiology |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7440831/ https://www.ncbi.nlm.nih.gov/pubmed/32819946 http://dx.doi.org/10.1136/bmjopen-2020-037744 |
work_keys_str_mv | AT chiocchiavirginia agreementbetweenrankingmetricsinnetworkmetaanalysisanempiricalstudy AT nikolakopoulouadriani agreementbetweenrankingmetricsinnetworkmetaanalysisanempiricalstudy AT papakonstantinoutheodoros agreementbetweenrankingmetricsinnetworkmetaanalysisanempiricalstudy AT eggermatthias agreementbetweenrankingmetricsinnetworkmetaanalysisanempiricalstudy AT salantigeorgia agreementbetweenrankingmetricsinnetworkmetaanalysisanempiricalstudy |
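As a quick illustration of the metrics and agreement measures named in the abstract, the sketch below (Python, not code from the study) computes $p_{BV}$ and SUCRA from a hypothetical rank-probability matrix and then compares the two resulting treatment hierarchies with Kendall's $\tau$, Spearman's $\rho$ and the Average Overlap; the Yilmaz $\tau_{AP}$ is omitted for brevity. All numbers, treatment labels and variable names are illustrative assumptions, not data or results from the 232 networks.

```python
# Minimal illustrative sketch (not code from the study): compute two ranking
# metrics from a hypothetical rank-probability matrix and measure how well the
# resulting treatment hierarchies agree. All numbers below are made up.
import numpy as np
from scipy.stats import kendalltau, spearmanr

# rank_probs[i, j] = probability that treatment i is ranked (j + 1)-th,
# with rank 1 = best; rows and columns each sum to 1 (hypothetical values).
rank_probs = np.array([
    [0.45, 0.05, 0.05, 0.45],   # A: often best, but also often worst
    [0.30, 0.40, 0.20, 0.10],   # B
    [0.15, 0.35, 0.40, 0.10],   # C
    [0.10, 0.20, 0.35, 0.35],   # D
])
K = rank_probs.shape[0]

# p_BV: probability of producing the best value (probability of rank 1).
p_bv = rank_probs[:, 0]

# SUCRA_i = (1 / (K - 1)) * sum over ranks j = 1..K-1 of P(rank of i <= j).
sucra = np.cumsum(rank_probs, axis=1)[:, :-1].sum(axis=1) / (K - 1)

# Treatment hierarchies: rank position of each treatment under each metric
# (0 = best), obtained by a double argsort of the negated metric.
ranks_pbv = np.argsort(np.argsort(-p_bv))
ranks_sucra = np.argsort(np.argsort(-sucra))

# Correlation-based agreement between the two hierarchies.
tau, _ = kendalltau(ranks_pbv, ranks_sucra)
rho, _ = spearmanr(ranks_pbv, ranks_sucra)

def average_overlap(ranks_a, ranks_b):
    """Average Overlap: mean proportion of shared treatments in the top-d sets,
    d = 1..K, which weights agreement at the top of the rankings more heavily."""
    order_a, order_b = np.argsort(ranks_a), np.argsort(ranks_b)  # best to worst
    overlaps = [len(set(order_a[:d]) & set(order_b[:d])) / d
                for d in range(1, len(ranks_a) + 1)]
    return sum(overlaps) / len(overlaps)

ao = average_overlap(ranks_pbv, ranks_sucra)
print(f"p_BV hierarchy:  {ranks_pbv}")    # [0 1 2 3]
print(f"SUCRA hierarchy: {ranks_sucra}")  # [2 0 1 3]
print(f"Kendall tau = {tau:.2f}, Spearman rho = {rho:.2f}, Average Overlap = {ao:.2f}")
```

With these hypothetical probabilities the two metrics disagree about the top treatment because treatment A has a spread-out, bimodal rank distribution; this is the kind of imprecision for which, per the abstract, presenting hierarchies from both probabilistic and non-probabilistic ranking metrics is advisable.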