
A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty

In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid method), for history matching in reservoir models. History matching is a process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application that combines the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model’s parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we identify the relatively ‘important’ elements of the gradient by comparing the magnitudes of the components of the stochastic gradient, substitute them with values from the Finite Difference method to form a new gradient, and then iterate with this new gradient. Through the application of the Hybrid method, we optimize the objective function efficiently and accurately. We present a number of numerical simulations showing that the method is accurate and computationally efficient.
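The hybrid gradient scheme described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy objective `misfit`, the SPSA-style ±1 perturbation, the number `k` of components refined by finite differences, and the fixed step size are all hypothetical choices for demonstration.

```python
import numpy as np

def spsa_gradient(f, x, c=1e-2, rng=None):
    """Stochastic (simultaneous-perturbation) gradient estimate.
    Costs two evaluations of f regardless of dimension."""
    rng = rng or np.random.default_rng()
    delta = rng.choice([-1.0, 1.0], size=x.size)   # random +/-1 directions
    return (f(x + c * delta) - f(x - c * delta)) / (2.0 * c * delta)

def hybrid_gradient(f, x, k=2, c=1e-2, rng=None):
    """Replace the k largest-magnitude ('important') components of the
    stochastic gradient with central finite-difference estimates."""
    g = spsa_gradient(f, x, c, rng)
    for i in np.argsort(np.abs(g))[-k:]:           # indices of the k largest |g_i|
        e = np.zeros_like(x)
        e[i] = 1.0
        g[i] = (f(x + c * e) - f(x - c * e)) / (2.0 * c)
    return g

def minimize(f, x0, step=0.1, iters=300, k=2, seed=0):
    """Plain gradient descent driven by the hybrid gradient."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        x -= step * hybrid_gradient(f, x, k=k, rng=rng)
    return x

# Toy least-squares misfit standing in for the history-matching objective.
target = np.array([1.0, -2.0, 0.5, 3.0])
misfit = lambda x: float(np.sum((x - target) ** 2))
estimate = minimize(misfit, np.zeros(4))
```

On this quadratic misfit the iterates contract toward `target`. In real history matching, `misfit` would be a reservoir-simulator run, so each finite-difference refinement costs one extra pair of simulations while the stochastic estimate covers all remaining parameters with only two.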


Bibliographic Details
Main Authors: Zhang, Kai, Wang, Zengfei, Zhang, Liming, Yao, Jun, Yan, Xia
Format: Online Article Text
Language: English
Published: Public Library of Science 2015
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4529104/
https://www.ncbi.nlm.nih.gov/pubmed/26252392
http://dx.doi.org/10.1371/journal.pone.0132418
_version_ 1782384741617500160
author Zhang, Kai
Wang, Zengfei
Zhang, Liming
Yao, Jun
Yan, Xia
author_facet Zhang, Kai
Wang, Zengfei
Zhang, Liming
Yao, Jun
Yan, Xia
author_sort Zhang, Kai
collection PubMed
description In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid method), for history matching in reservoir models. History matching is a process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application that combines the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model’s parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we identify the relatively ‘important’ elements of the gradient by comparing the magnitudes of the components of the stochastic gradient, substitute them with values from the Finite Difference method to form a new gradient, and then iterate with this new gradient. Through the application of the Hybrid method, we optimize the objective function efficiently and accurately. We present a number of numerical simulations showing that the method is accurate and computationally efficient.
format Online
Article
Text
id pubmed-4529104
institution National Center for Biotechnology Information
language English
publishDate 2015
publisher Public Library of Science
record_format MEDLINE/PubMed
spelling pubmed-45291042015-08-12 A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty Zhang, Kai Wang, Zengfei Zhang, Liming Yao, Jun Yan, Xia PLoS One Research Article In this paper, we investigate the application of a new method, the Finite Difference and Stochastic Gradient (Hybrid method), for history matching in reservoir models. History matching is a process of solving an inverse problem by calibrating reservoir models to the dynamic behaviour of the reservoir, in which an objective function is formulated based on a Bayesian approach for optimization. The goal of history matching is to identify the minimum value of an objective function that expresses the misfit between the predicted and measured data of a reservoir. To address the optimization problem, we present a novel application that combines the stochastic gradient and finite difference methods for solving inverse problems. The optimization is constrained by a linear equation that contains the reservoir parameters. We reformulate the reservoir model’s parameters and dynamic data by operating on the objective function, whose approximate gradient can guarantee convergence. At each iteration step, we identify the relatively ‘important’ elements of the gradient by comparing the magnitudes of the components of the stochastic gradient, substitute them with values from the Finite Difference method to form a new gradient, and then iterate with this new gradient. Through the application of the Hybrid method, we optimize the objective function efficiently and accurately. We present a number of numerical simulations showing that the method is accurate and computationally efficient.
Public Library of Science 2015-08-07 /pmc/articles/PMC4529104/ /pubmed/26252392 http://dx.doi.org/10.1371/journal.pone.0132418 Text en © 2015 Zhang et al http://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are properly credited.
spellingShingle Research Article
Zhang, Kai
Wang, Zengfei
Zhang, Liming
Yao, Jun
Yan, Xia
A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty
title A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty
title_full A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty
title_fullStr A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty
title_full_unstemmed A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty
title_short A Hybrid Optimization Method for Solving Bayesian Inverse Problems under Uncertainty
title_sort hybrid optimization method for solving bayesian inverse problems under uncertainty
topic Research Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4529104/
https://www.ncbi.nlm.nih.gov/pubmed/26252392
http://dx.doi.org/10.1371/journal.pone.0132418
work_keys_str_mv AT zhangkai ahybridoptimizationmethodforsolvingbayesianinverseproblemsunderuncertainty
AT wangzengfei ahybridoptimizationmethodforsolvingbayesianinverseproblemsunderuncertainty
AT zhangliming ahybridoptimizationmethodforsolvingbayesianinverseproblemsunderuncertainty
AT yaojun ahybridoptimizationmethodforsolvingbayesianinverseproblemsunderuncertainty
AT yanxia ahybridoptimizationmethodforsolvingbayesianinverseproblemsunderuncertainty
AT zhangkai hybridoptimizationmethodforsolvingbayesianinverseproblemsunderuncertainty
AT wangzengfei hybridoptimizationmethodforsolvingbayesianinverseproblemsunderuncertainty
AT zhangliming hybridoptimizationmethodforsolvingbayesianinverseproblemsunderuncertainty
AT yaojun hybridoptimizationmethodforsolvingbayesianinverseproblemsunderuncertainty
AT yanxia hybridoptimizationmethodforsolvingbayesianinverseproblemsunderuncertainty