Dynamic stock-decision ensemble strategy based on deep reinforcement learning
In a complex and changeable stock market, it is very important to design a trading agent that can benefit investors. In this paper, we propose two stock trading decision-making methods. First, we propose a nested reinforcement learning (Nested RL) method based on three deep reinforcement learning models...
Main Authors: | Yu, Xiaoming; Wu, Wenjun; Liao, Xingchuang; Han, Yong |
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2022 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9082989/ https://www.ncbi.nlm.nih.gov/pubmed/35572052 http://dx.doi.org/10.1007/s10489-022-03606-0 |
_version_ | 1784703323091238912 |
author | Yu, Xiaoming; Wu, Wenjun; Liao, Xingchuang; Han, Yong
author_facet | Yu, Xiaoming; Wu, Wenjun; Liao, Xingchuang; Han, Yong
author_sort | Yu, Xiaoming |
collection | PubMed |
description | In a complex and changeable stock market, it is very important to design a trading agent that can benefit investors. In this paper, we propose two stock trading decision-making methods. First, we propose a nested reinforcement learning (Nested RL) method based on three deep reinforcement learning models (the Advantage Actor Critic, Deep Deterministic Policy Gradient, and Soft Actor Critic models) that adopts an integration strategy by nesting reinforcement learning on the basic decision-makers. Thus, this strategy can dynamically select agents according to the current situation to generate trading decisions under different market environments. Second, to inherit the advantages of the three basic decision-makers, we consider confidence and propose a weight random selection with confidence (WRSC) strategy. In this way, investors can gain more profits by integrating the advantages of all agents. All the algorithms are validated on U.S., Japanese, and British stocks and evaluated with different performance indicators. The experimental results show that the annualized return, cumulative return, and Sharpe ratio values of our ensemble strategy are higher than those of the baselines, which indicates that our Nested RL and WRSC methods can assist investors in their portfolio management with more profits under the same level of investment risk. |
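The abstract describes two ensemble mechanisms: a nested RL meta-policy that picks which basic agent (A2C, DDPG, or SAC) acts in the current market state, and the WRSC rule that samples one agent's action with probability proportional to its confidence. The sketch below illustrates both ideas under stated assumptions; the agent interfaces, confidence scores, and stand-in policies are hypothetical placeholders, not the paper's actual implementation.

```python
import numpy as np

def nested_rl_select(meta_policy, basic_agents, state):
    """Nested RL (illustrative): a higher-level policy chooses which basic
    decision-maker (e.g. A2C, DDPG, or SAC) should act on the current market
    state, and the chosen agent's trading action is returned."""
    k = meta_policy(state)            # discrete index of the selected agent
    return basic_agents[k](state)     # trading action proposed by that agent

def wrsc_select(actions, confidences, rng=None):
    """WRSC (illustrative): sample one proposed action with probability
    proportional to each agent's confidence, so every agent can contribute
    but more trusted agents are picked more often."""
    rng = np.random.default_rng() if rng is None else rng
    conf = np.asarray(confidences, dtype=float)
    idx = rng.choice(len(actions), p=conf / conf.sum())
    return actions[idx]

# Hypothetical usage: three stand-in agents mapping a market state to
# portfolio weights over three stocks.
agents = [
    lambda s: np.array([0.5, 0.3, 0.2]),   # stand-in for A2C
    lambda s: np.array([0.4, 0.4, 0.2]),   # stand-in for DDPG
    lambda s: np.array([0.6, 0.2, 0.2]),   # stand-in for SAC
]
state = np.zeros(8)                        # placeholder market features
meta_policy = lambda s: 1                  # stand-in meta policy: pick DDPG

print(nested_rl_select(meta_policy, agents, state))
print(wrsc_select([a(state) for a in agents], confidences=[0.7, 0.5, 0.9]))
```

Either rule reduces to the same interface, namely one trading action per time step, so a backtest can swap between the two combiners without changing the surrounding trading environment.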
format | Online Article Text |
id | pubmed-9082989 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Springer US |
record_format | MEDLINE/PubMed |
spelling | pubmed-90829892022-05-09 Dynamic stock-decision ensemble strategy based on deep reinforcement learning Yu, Xiaoming Wu, Wenjun Liao, Xingchuang Han, Yong Appl Intell (Dordr) Article In a complex and changeable stock market, it is very important to design a trading agent that can benefit investors. In this paper, we propose two stock trading decision-making methods. First, we propose a nested reinforcement learning (Nested RL) method based on three deep reinforcement learning models (the Advantage Actor Critic, Deep Deterministic Policy Gradient, and Soft Actor Critic models) that adopts an integration strategy by nesting reinforcement learning on the basic decision-maker. Thus, this strategy can dynamically select agents according to the current situation to generate trading decisions made under different market environments. Second, to inherit the advantages of three basic decision-makers, we consider confidence and propose a weight random selection with confidence (WRSC) strategy. In this way, investors can gain more profits by integrating the advantages of all agents. All the algorithms are validated for the U.S., Japanese and British stocks and evaluated by different performance indicators. The experimental results show that the annualized return, cumulative return, and Sharpe ratio values of our ensemble strategy are higher than those of the baselines, which indicates that our nested RL and WRSC methods can assist investors in their portfolio management with more profits under the same level of investment risk. Springer US 2022-05-09 2023 /pmc/articles/PMC9082989/ /pubmed/35572052 http://dx.doi.org/10.1007/s10489-022-03606-0 Text en © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2022 This article is made available via the PMC Open Access Subset for unrestricted research re-use and secondary analysis in any form or by any means with acknowledgement of the original source. These permissions are granted for the duration of the World Health Organization (WHO) declaration of COVID-19 as a global pandemic. |
spellingShingle | Article Yu, Xiaoming Wu, Wenjun Liao, Xingchuang Han, Yong Dynamic stock-decision ensemble strategy based on deep reinforcement learning |
title | Dynamic stock-decision ensemble strategy based on deep reinforcement learning |
title_full | Dynamic stock-decision ensemble strategy based on deep reinforcement learning |
title_fullStr | Dynamic stock-decision ensemble strategy based on deep reinforcement learning |
title_full_unstemmed | Dynamic stock-decision ensemble strategy based on deep reinforcement learning |
title_short | Dynamic stock-decision ensemble strategy based on deep reinforcement learning |
title_sort | dynamic stock-decision ensemble strategy based on deep reinforcement learning |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9082989/ https://www.ncbi.nlm.nih.gov/pubmed/35572052 http://dx.doi.org/10.1007/s10489-022-03606-0 |
work_keys_str_mv | AT yuxiaoming dynamicstockdecisionensemblestrategybasedondeepreinforcementlearning AT wuwenjun dynamicstockdecisionensemblestrategybasedondeepreinforcementlearning AT liaoxingchuang dynamicstockdecisionensemblestrategybasedondeepreinforcementlearning AT hanyong dynamicstockdecisionensemblestrategybasedondeepreinforcementlearning |