
Response


Bibliographic Details
Main author: BENDAVID, Josh
Language: eng
Published: 2015
Subjects: LPCC Workshops
Online access: http://cds.cern.ch/record/2067051
_version_ 1780948715231510528
author BENDAVID, Josh
author_facet BENDAVID, Josh
author_sort BENDAVID, Josh
collection CERN
description Approximate Bayesian computation (ABC) is the name given to a collection of Monte Carlo algorithms used for fitting complex computer models to data. The methods rely upon simulation, rather than likelihood-based calculation, and so can be used to calibrate a much wider set of simulation models. The simplest version of ABC is intuitive: we sample repeatedly from the prior distribution and accept parameter values that give a close match between the simulation and the data. This has been extended in many ways: for example, reducing the dimension of the datasets using summary statistics and then calibrating to the summaries instead of the full data; using more efficient Monte Carlo algorithms (MCMC, SMC, etc.); and introducing modelling approaches to overcome computational cost and to minimize the error in the approximation. The two key challenges for ABC methods are i) dealing with computational constraints and ii) finding good low-dimensional summaries. Much of the early work on i) was based upon finding efficient sampling algorithms, adapting methods such as MCMC and sequential Monte Carlo to find good regions of parameter space more efficiently. Although these methods can dramatically reduce the amount of computation needed, they still require hundreds of thousands of simulations. Recent work has instead focused on the use of meta-models or emulators: cheap statistical surrogates that approximate the simulator and can be used in its place to find the posterior distribution. A key question when using these methods concerns the experimental design: where should we next run the simulator in order to maximise our information about the posterior distribution?
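The rejection sampler described in the abstract is short enough to sketch in code. Below is a minimal Python illustration of rejection ABC with a summary statistic, assuming a toy Gaussian simulator and the sample mean as the summary; the names (simulate, summary, TOL) and all numbers are illustrative, not from the talk.

import numpy as np

# Minimal rejection-ABC sketch (toy model, for illustration only).
rng = np.random.default_rng(0)

def simulate(theta, n=100):
    # Toy simulator: n draws from N(theta, 1).
    return rng.normal(theta, 1.0, size=n)

def summary(x):
    # Low-dimensional summary statistic: here, the sample mean.
    return np.mean(x)

observed = simulate(2.0)          # stand-in for the observed data
s_obs = summary(observed)

TOL = 0.05                        # acceptance tolerance (epsilon)
accepted = []
for _ in range(100_000):
    theta = rng.uniform(-10.0, 10.0)   # draw a candidate from a uniform prior
    if abs(summary(simulate(theta)) - s_obs) < TOL:
        accepted.append(theta)         # keep parameters whose simulated summary matches

# 'accepted' approximates a sample from the posterior p(theta | s_obs).
print(len(accepted), "accepted; posterior mean ~", np.mean(accepted))

Shrinking TOL tightens the approximation but lowers the acceptance rate; that trade-off is exactly the computational constraint that the MCMC, SMC, and emulator-based extensions mentioned in the abstract aim to relieve.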
id cern-2067051
institution European Organization for Nuclear Research
language eng
publishDate 2015
record_format invenio
spelling cern-2067051 2022-11-02T22:33:49Z http://cds.cern.ch/record/2067051 eng BENDAVID, Josh Response Data Science @ LHC 2015 Workshop LPCC Workshops [abstract: see description above] oai:cds.cern.ch:2067051 2015
spellingShingle LPCC Workshops
BENDAVID, Josh
Response
title Response
title_full Response
title_fullStr Response
title_full_unstemmed Response
title_short Response
title_sort response
topic LPCC Workshops
url http://cds.cern.ch/record/2067051
work_keys_str_mv AT bendavidjosh response
AT bendavidjosh datasciencelhc2015workshop