Adversarial Decision-Making for Moving Target Defense: A Multi-Agent Markov Game and Reinforcement Learning Approach
Reinforcement learning has shown great ability and has defeated human players in real-time strategy games. In recent years, reinforcement learning has been used in cyberspace to carry out automated and intelligent attacks. Traditional defense methods are not sufficient to deal with this problem, so it is necessary to design defense agents to counter intelligent attacks…
| Main Authors: | Yao, Qian; Wang, Yongjie; Xiong, Xinli; Wang, Peng; Li, Yang |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2023 |
| Subjects: | Article |
| Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10137508/ https://www.ncbi.nlm.nih.gov/pubmed/37190393 http://dx.doi.org/10.3390/e25040605 |
_version_ | 1785032481561378816 |
author | Yao, Qian; Wang, Yongjie; Xiong, Xinli; Wang, Peng; Li, Yang
author_facet | Yao, Qian; Wang, Yongjie; Xiong, Xinli; Wang, Peng; Li, Yang
author_sort | Yao, Qian |
collection | PubMed |
description | Reinforcement learning has shown great ability and has defeated human players in real-time strategy games. In recent years, reinforcement learning has been used in cyberspace to carry out automated and intelligent attacks. Traditional defense methods are not sufficient to deal with this problem, so it is necessary to design defense agents to counter intelligent attacks. The interaction between the attack agent and the defense agent can be modeled as a multi-agent Markov game. In this paper, an adversarial decision-making approach that combines the Bayesian Strong Stackelberg and WoLF algorithms is proposed to obtain the equilibrium point of multi-agent Markov games. With this method, the defense agent can obtain an adversarial decision-making strategy and continuously adjust it in cyberspace. As verified in experiments, the defense agent should prioritize short-term rewards in a real-time game between the attack agent and the defense agent. The proposed approach obtains the largest rewards for the defense agent compared with the classic Nash-Q and URS-Q algorithms. In addition, the proposed approach adjusts the action-selection probability dynamically, so that the decision entropy of the optimal action gradually decreases.
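The WoLF (Win or Learn Fast) component named in the description adjusts how quickly the defender's mixed policy moves toward its greedy action: cautiously while winning, quickly while losing. The sketch below is a minimal, hypothetical illustration of a WoLF-PHC-style update and of the decision entropy the record mentions; the class name, reward interface, and parameter values are assumptions for illustration, not the authors' implementation, and the Bayesian Strong Stackelberg solver the paper combines with WoLF is omitted.

```python
import numpy as np

class WoLFDefender:
    """Illustrative WoLF-PHC-style learner for the defense agent (a sketch,
    not the paper's code). delta_lose > delta_win implements 'learn fast
    when losing, cautiously when winning'."""

    def __init__(self, n_states, n_actions, alpha=0.1, gamma=0.9,
                 delta_win=0.01, delta_lose=0.04):
        self.Q = np.zeros((n_states, n_actions))                        # action-value estimates
        self.pi = np.full((n_states, n_actions), 1.0 / n_actions)       # current mixed policy
        self.pi_avg = np.full((n_states, n_actions), 1.0 / n_actions)   # running average policy
        self.counts = np.zeros(n_states)
        self.alpha, self.gamma = alpha, gamma
        self.delta_win, self.delta_lose = delta_win, delta_lose

    def act(self, s, rng):
        # Sample a defense action from the current mixed policy.
        return rng.choice(self.pi.shape[1], p=self.pi[s])

    def update(self, s, a, r, s_next):
        n = self.pi.shape[1]
        # Standard Q-learning step on the defender's reward.
        self.Q[s, a] += self.alpha * (r + self.gamma * self.Q[s_next].max() - self.Q[s, a])

        # Maintain the average policy seen so far.
        self.counts[s] += 1
        self.pi_avg[s] += (self.pi[s] - self.pi_avg[s]) / self.counts[s]

        # WoLF test: winning if the current policy outperforms the average
        # policy under the current value estimates.
        winning = self.pi[s] @ self.Q[s] > self.pi_avg[s] @ self.Q[s]
        delta = self.delta_win if winning else self.delta_lose

        # Shift probability mass toward the greedy action, then renormalize.
        best = self.Q[s].argmax()
        self.pi[s] -= delta / (n - 1)
        self.pi[s][best] += delta + delta / (n - 1)
        self.pi[s] = np.clip(self.pi[s], 1e-6, 1.0)
        self.pi[s] /= self.pi[s].sum()

    def decision_entropy(self, s):
        # Entropy H(pi(s)) = -sum_a pi(s,a) log2 pi(s,a) of the action-selection
        # distribution; it shrinks as the policy concentrates on the optimal
        # action, matching the decreasing decision entropy the record reports.
        p = self.pi[s]
        return float(-(p * np.log2(p)).sum())
```

In the paper's Stackelberg setting the attacker would best-respond to the defender's announced policy before each update; that outer game loop, and the Bayesian handling of attacker types, are left out of this single-agent sketch.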
format | Online Article Text |
id | pubmed-10137508 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2023 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-10137508 2023-04-28 Adversarial Decision-Making for Moving Target Defense: A Multi-Agent Markov Game and Reinforcement Learning Approach Yao, Qian; Wang, Yongjie; Xiong, Xinli; Wang, Peng; Li, Yang Entropy (Basel) Article Reinforcement learning has shown great ability and has defeated human players in real-time strategy games. In recent years, reinforcement learning has been used in cyberspace to carry out automated and intelligent attacks. Traditional defense methods are not sufficient to deal with this problem, so it is necessary to design defense agents to counter intelligent attacks. The interaction between the attack agent and the defense agent can be modeled as a multi-agent Markov game. In this paper, an adversarial decision-making approach that combines the Bayesian Strong Stackelberg and WoLF algorithms is proposed to obtain the equilibrium point of multi-agent Markov games. With this method, the defense agent can obtain an adversarial decision-making strategy and continuously adjust it in cyberspace. As verified in experiments, the defense agent should prioritize short-term rewards in a real-time game between the attack agent and the defense agent. The proposed approach obtains the largest rewards for the defense agent compared with the classic Nash-Q and URS-Q algorithms. In addition, the proposed approach adjusts the action-selection probability dynamically, so that the decision entropy of the optimal action gradually decreases. MDPI 2023-04-02 /pmc/articles/PMC10137508/ /pubmed/37190393 http://dx.doi.org/10.3390/e25040605 Text en © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Yao, Qian; Wang, Yongjie; Xiong, Xinli; Wang, Peng; Li, Yang Adversarial Decision-Making for Moving Target Defense: A Multi-Agent Markov Game and Reinforcement Learning Approach
title | Adversarial Decision-Making for Moving Target Defense: A Multi-Agent Markov Game and Reinforcement Learning Approach |
title_full | Adversarial Decision-Making for Moving Target Defense: A Multi-Agent Markov Game and Reinforcement Learning Approach |
title_fullStr | Adversarial Decision-Making for Moving Target Defense: A Multi-Agent Markov Game and Reinforcement Learning Approach |
title_full_unstemmed | Adversarial Decision-Making for Moving Target Defense: A Multi-Agent Markov Game and Reinforcement Learning Approach |
title_short | Adversarial Decision-Making for Moving Target Defense: A Multi-Agent Markov Game and Reinforcement Learning Approach |
title_sort | adversarial decision-making for moving target defense: a multi-agent markov game and reinforcement learning approach |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10137508/ https://www.ncbi.nlm.nih.gov/pubmed/37190393 http://dx.doi.org/10.3390/e25040605 |
work_keys_str_mv | AT yaoqian adversarialdecisionmakingformovingtargetdefenseamultiagentmarkovgameandreinforcementlearningapproach AT wangyongjie adversarialdecisionmakingformovingtargetdefenseamultiagentmarkovgameandreinforcementlearningapproach AT xiongxinli adversarialdecisionmakingformovingtargetdefenseamultiagentmarkovgameandreinforcementlearningapproach AT wangpeng adversarialdecisionmakingformovingtargetdefenseamultiagentmarkovgameandreinforcementlearningapproach AT liyang adversarialdecisionmakingformovingtargetdefenseamultiagentmarkovgameandreinforcementlearningapproach |