
Model selection on solid ground: Rigorous comparison of nine ways to evaluate Bayesian model evidence

Bayesian model selection or averaging objectively ranks a number of plausible, competing conceptual models based on Bayes' theorem. It implicitly performs an optimal trade-off between performance in fitting available data and minimum model complexity. The procedure requires determining Bayesian model evidence (BME), which is the likelihood of the observed data integrated over each model's parameter space. Computing this integral is highly challenging because its dimensionality equals the number of model parameters. Three classes of techniques to compute BME are available, each with its own challenges and limitations: (1) exact and fast analytical solutions are limited by strong assumptions; (2) numerical evaluation quickly becomes infeasible for expensive models; (3) approximations known as information criteria (ICs), such as the AIC, BIC, or KIC (Akaike, Bayesian, or Kashyap information criterion, respectively), yield contradictory model rankings. Our study features a theory-based intercomparison of these techniques. We further assess their accuracy in a simple synthetic example for which, in some scenarios, an exact analytical solution exists. In more challenging scenarios, we use a brute-force Monte Carlo integration method as the reference. We complete this analysis with a real-world application of hydrological model selection. This is the first benchmarking of the various methods for BME evaluation against true solutions. Results show that BME values from ICs are often heavily biased and that the choice of approximation method substantially influences the accuracy of model ranking. For reliable model selection, bias-free numerical methods should be preferred over ICs whenever computationally feasible.
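The abstract defines BME verbally; written out as an equation (the notation below is ours, not reproduced from the paper), the evidence of model $M_k$ given observed data $\mathbf{y}_0$ is

$$
\mathrm{BME}_k \;=\; p(\mathbf{y}_0 \mid M_k) \;=\; \int p(\mathbf{y}_0 \mid \boldsymbol{\theta}_k, M_k)\, p(\boldsymbol{\theta}_k \mid M_k)\,\mathrm{d}\boldsymbol{\theta}_k ,
$$

and Bayes' theorem turns the evidences into posterior model weights for ranking or averaging:

$$
P(M_k \mid \mathbf{y}_0) \;=\; \frac{p(\mathbf{y}_0 \mid M_k)\,P(M_k)}{\sum_j p(\mathbf{y}_0 \mid M_j)\,P(M_j)} .
$$

Of the ICs the paper compares as surrogates for $-2\ln\mathrm{BME}$, the two most common have these standard textbook forms, with maximized likelihood $L(\hat{\boldsymbol{\theta}})$, $k$ parameters, and $N$ observations (the paper's exact KIC variant is not reproduced here):

$$
\mathrm{AIC} = -2\ln L(\hat{\boldsymbol{\theta}}) + 2k, \qquad
\mathrm{BIC} = -2\ln L(\hat{\boldsymbol{\theta}}) + k\ln N .
$$

The brute-force Monte Carlo reference method mentioned in the abstract averages the likelihood over draws from the prior. Below is a minimal sketch in Python; the function names and the toy linear model are illustrative assumptions, not the authors' code:

```python
import numpy as np

def log_bme_monte_carlo(log_likelihood, prior_sampler, n_samples=100_000, seed=None):
    """Brute-force Monte Carlo estimate of log(BME).

    BME = E_prior[L(theta)] is approximated by the sample mean of the
    likelihood over prior draws; log-sum-exp keeps it numerically stable.
    """
    rng = np.random.default_rng(seed)
    theta = prior_sampler(n_samples, rng)            # shape (n_samples, n_params)
    log_l = np.array([log_likelihood(t) for t in theta])
    m = log_l.max()
    return m + np.log(np.mean(np.exp(log_l - m)))    # log of the mean likelihood

# Toy example: data from y = 2x + Gaussian noise, model y = a*x with a ~ N(0, 5^2).
x = np.linspace(0.0, 1.0, 20)
y_obs = 2.0 * x + np.random.default_rng(0).normal(0.0, 0.1, x.size)
sigma = 0.1  # assumed known noise level

def log_likelihood(theta):
    resid = y_obs - theta[0] * x
    return -0.5 * np.sum((resid / sigma) ** 2) - x.size * np.log(sigma * np.sqrt(2.0 * np.pi))

def prior_sampler(n, rng):
    return rng.normal(0.0, 5.0, size=(n, 1))

print("log BME ~", log_bme_monte_carlo(log_likelihood, prior_sampler, seed=1))
```

As the abstract notes, such sampling is bias-free but requires many model runs, which is exactly what makes it infeasible for expensive models and motivates the comparison with the cheaper IC approximations.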

Bibliographic Details
Main Authors: Schöniger, Anneli; Wöhling, Thomas; Samaniego, Luis; Nowak, Wolfgang
Format: Online Article Text
Language: English
Published: Blackwell Publishing Ltd, December 2014
Journal: Water Resources Research
Subjects: Research Articles
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4328146/
https://www.ncbi.nlm.nih.gov/pubmed/25745272
http://dx.doi.org/10.1002/2014WR016062
License: © 2014 The Authors. Open access under the Creative Commons Attribution-NonCommercial-NoDerivs License (http://creativecommons.org/licenses/by-nc-nd/4.0/).