A reinforcement learning method for optimal control of oil well production using cropped well group samples
The influence of geological development factors such as reservoir heterogeneity needs to be comprehensively considered when determining an oil well production control strategy. In the past, many optimization algorithms have been introduced and coupled with numerical simulation for well control problems...
Main Authors: | Ding, Yangyang, Wang, Xiang, Cao, Xiaopeng, Hu, Huifang, Bu, Yahui |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Elsevier, 2023 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10362194/ https://www.ncbi.nlm.nih.gov/pubmed/37483805 http://dx.doi.org/10.1016/j.heliyon.2023.e17919 |
_version_ | 1785076371076153344 |
---|---|
author | Ding, Yangyang Wang, Xiang Cao, Xiaopeng Hu, Huifang Bu, Yahui |
author_facet | Ding, Yangyang Wang, Xiang Cao, Xiaopeng Hu, Huifang Bu, Yahui |
author_sort | Ding, Yangyang |
collection | PubMed |
description | The influence of geological development factors such as reservoir heterogeneity needs to be comprehensively considered when determining an oil well production control strategy. In the past, many optimization algorithms have been introduced and coupled with numerical simulation for well control problems. However, these methods require a large number of simulations, and the experience gained from these simulations is not preserved by the algorithm. For each new reservoir, the optimization algorithm needs to start over again. To address these problems, two reinforcement learning methods are introduced in this research. A personalized Deep Q-Network (DQN) algorithm and a personalized Soft Actor-Critic (SAC) algorithm are designed for determining optimal control of oil wells. The inputs of the algorithms are matrices of reservoir properties, including reservoir saturation, permeability, etc., which can be treated as images. The output is the oil well production strategy. A series of samples is cut from two different reservoirs to form a dataset. Each sample is a square area with an oil well at its center, with different permeability and saturation distributions and different oil-water well patterns. Moreover, all samples are expanded using image enhancement technology to further increase the number of samples and improve their coverage of reservoir conditions. During the training process, two training strategies are investigated for each personalized algorithm; the second strategy uses 4 times more samples than the first. Finally, a new set of samples is designed to verify the model's accuracy and generalization ability. Results show that both the trained DQN and SAC models can learn and store historical experience, and push appropriate control strategies based on the reservoir characteristics of new oil wells.
The agreement between the optimal control strategy obtained by both algorithms and the global optimal strategy obtained by the exhaustive method is more than 95%. The personalized SAC algorithm shows better performance than the personalized DQN algorithm. Compared to traditional Particle Swarm Optimization (PSO), the personalized models were faster and better at capturing complex patterns and adapting to different geological conditions, making them effective for real-time decision-making and for optimizing oil well production strategies. Since a large amount of historical experience has been learned and stored in the algorithm, the proposed method requires only 1 simulation for a new oil well control optimization problem, demonstrating its superiority in computational efficiency. |
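The sample-construction step described in the abstract (square crops centered on each oil well, then image-enhancement expansion that yields roughly 4 times more samples, as in the second training strategy) can be sketched as follows. This is a minimal illustration under stated assumptions: `crop_well_sample`, `augment`, and all parameter names are hypothetical, not the authors' actual code, and 90-degree rotations stand in for whatever augmentation the paper uses.

```python
import numpy as np

def crop_well_sample(field, well_ij, half):
    """Cut a square sample centered on an oil well from a 2-D property
    map (e.g. a permeability or saturation grid).

    field   : 2-D array of one reservoir property
    well_ij : (row, col) of the well at the sample's center
    half    : half-width of the square window
    Returns a (2*half+1, 2*half+1) view of the field.
    """
    i, j = well_ij
    return field[i - half:i + half + 1, j - half:j + half + 1]

def augment(sample):
    """Expand one cropped sample into four: the 0/90/180/270-degree
    rotations, a simple image-enhancement scheme that multiplies the
    dataset size by 4 (the k=0 rotation is the original sample)."""
    return [np.rot90(sample, k) for k in range(4)]
```

In practice each training input would stack several such crops (saturation, permeability, well-pattern masks) into an image-like tensor fed to the DQN or SAC network.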
format | Online Article Text |
id | pubmed-10362194 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | Elsevier |
record_format | MEDLINE/PubMed |
spelling | pubmed-10362194 2023-07-23 A reinforcement learning method for optimal control of oil well production using cropped well group samples Ding, Yangyang Wang, Xiang Cao, Xiaopeng Hu, Huifang Bu, Yahui Heliyon Research Article The influence of geological development factors such as reservoir heterogeneity needs to be comprehensively considered when determining an oil well production control strategy. In the past, many optimization algorithms have been introduced and coupled with numerical simulation for well control problems. However, these methods require a large number of simulations, and the experience gained from these simulations is not preserved by the algorithm. For each new reservoir, the optimization algorithm needs to start over again. To address these problems, two reinforcement learning methods are introduced in this research. A personalized Deep Q-Network (DQN) algorithm and a personalized Soft Actor-Critic (SAC) algorithm are designed for determining optimal control of oil wells. The inputs of the algorithms are matrices of reservoir properties, including reservoir saturation, permeability, etc., which can be treated as images. The output is the oil well production strategy. A series of samples is cut from two different reservoirs to form a dataset. Each sample is a square area with an oil well at its center, with different permeability and saturation distributions and different oil-water well patterns. Moreover, all samples are expanded using image enhancement technology to further increase the number of samples and improve their coverage of reservoir conditions. During the training process, two training strategies are investigated for each personalized algorithm; the second strategy uses 4 times more samples than the first. Finally, a new set of samples is designed to verify the model's accuracy and generalization ability.
Results show that both the trained DQN and SAC models can learn and store historical experience, and push appropriate control strategies based on the reservoir characteristics of new oil wells. The agreement between the optimal control strategy obtained by both algorithms and the global optimal strategy obtained by the exhaustive method is more than 95%. The personalized SAC algorithm shows better performance than the personalized DQN algorithm. Compared to traditional Particle Swarm Optimization (PSO), the personalized models were faster and better at capturing complex patterns and adapting to different geological conditions, making them effective for real-time decision-making and for optimizing oil well production strategies. Since a large amount of historical experience has been learned and stored in the algorithm, the proposed method requires only 1 simulation for a new oil well control optimization problem, demonstrating its superiority in computational efficiency. Elsevier 2023-07-04 /pmc/articles/PMC10362194/ /pubmed/37483805 http://dx.doi.org/10.1016/j.heliyon.2023.e17919 Text en © 2023 The Authors https://creativecommons.org/licenses/by/4.0/ This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Research Article Ding, Yangyang Wang, Xiang Cao, Xiaopeng Hu, Huifang Bu, Yahui A reinforcement learning method for optimal control of oil well production using cropped well group samples |
title | A reinforcement learning method for optimal control of oil well production using cropped well group samples |
title_full | A reinforcement learning method for optimal control of oil well production using cropped well group samples |
title_fullStr | A reinforcement learning method for optimal control of oil well production using cropped well group samples |
title_full_unstemmed | A reinforcement learning method for optimal control of oil well production using cropped well group samples |
title_short | A reinforcement learning method for optimal control of oil well production using cropped well group samples |
title_sort | reinforcement learning method for optimal control of oil well production using cropped well group samples |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10362194/ https://www.ncbi.nlm.nih.gov/pubmed/37483805 http://dx.doi.org/10.1016/j.heliyon.2023.e17919 |
work_keys_str_mv | AT dingyangyang areinforcementlearningmethodforoptimalcontrolofoilwellproductionusingcroppedwellgroupsamples AT wangxiang areinforcementlearningmethodforoptimalcontrolofoilwellproductionusingcroppedwellgroupsamples AT caoxiaopeng areinforcementlearningmethodforoptimalcontrolofoilwellproductionusingcroppedwellgroupsamples AT huhuifang areinforcementlearningmethodforoptimalcontrolofoilwellproductionusingcroppedwellgroupsamples AT buyahui areinforcementlearningmethodforoptimalcontrolofoilwellproductionusingcroppedwellgroupsamples AT dingyangyang reinforcementlearningmethodforoptimalcontrolofoilwellproductionusingcroppedwellgroupsamples AT wangxiang reinforcementlearningmethodforoptimalcontrolofoilwellproductionusingcroppedwellgroupsamples AT caoxiaopeng reinforcementlearningmethodforoptimalcontrolofoilwellproductionusingcroppedwellgroupsamples AT huhuifang reinforcementlearningmethodforoptimalcontrolofoilwellproductionusingcroppedwellgroupsamples AT buyahui reinforcementlearningmethodforoptimalcontrolofoilwellproductionusingcroppedwellgroupsamples |