AlphaDDA: strategies for adjusting the playing strength of a fully trained AlphaZero system to a suitable human training partner

Artificial intelligence (AI) has achieved superhuman performance in board games such as Go, chess, and Othello (Reversi); that is, AI systems now surpass strong human expert players in these games. As a result, it is difficult for a human player to enjoy playing against such an AI. To keep human players entertained and immersed in a game, the AI must dynamically balance its skill with that of the human player. To address this issue, we propose AlphaDDA, an AlphaZero-based AI with dynamic difficulty adjustment (DDA). AlphaDDA consists of a deep neural network (DNN) and a Monte Carlo tree search (MCTS), as in AlphaZero. AlphaDDA learns and plays a game in the same way as AlphaZero but can change its skill. AlphaDDA estimates the value of the game state from the board state alone using the DNN and changes the parameter that dominantly controls its skill according to the estimated value. Consequently, AlphaDDA adjusts its skill to the current game state, using only the state of the game and no prior knowledge of its opponent. In this study, AlphaDDA plays Connect4, Othello, and 6x6 Othello against other AI agents: AlphaZero, MCTS, the minimax algorithm, and a random player. The results show that AlphaDDA can balance its skill with that of each of these agents except the random player. AlphaDDA can weaken itself according to the estimated value, but it still beats the random player because it remains stronger than random play even when weakened to its limit. The DDA ability of AlphaDDA rests on an accurate estimation of the value from the game state. We believe that the AlphaDDA approach to DDA can be applied to any game AI system, provided the DNN can accurately estimate the value of the game state and a parameter controlling the system's skill is known.

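The abstract states that AlphaDDA estimates the value of the current game state with its DNN and then adjusts a single skill-controlling parameter according to that estimate. The sketch below is only an illustration of this scheme, not the rule used in the paper: it assumes, hypothetically, that the controlled parameter is the number of MCTS simulations per move and that the mapping from value estimate to search budget is linear.

```python
def adjust_skill(value_estimate, min_sims=10, max_sims=800):
    """Map the DNN's value estimate for the current board state
    (taken from AlphaDDA's own perspective, in [-1, 1]) to a
    skill-controlling parameter -- here, hypothetically, the
    number of MCTS simulations per move.

    The better AlphaDDA judges its own position (value near +1),
    the fewer simulations it runs, weakening its play; when it
    judges itself to be losing (value near -1), it searches at
    full strength.
    """
    # Normalize the value from [-1, 1] to a weakening factor in [0, 1]:
    # 0 -> full strength, 1 -> weakest play.
    weakening = (value_estimate + 1.0) / 2.0
    sims = max_sims - weakening * (max_sims - min_sims)
    return max(min_sims, min(max_sims, int(round(sims))))


# Example: a clearly winning position throttles the search budget,
# while a losing one restores near-full strength.
print(adjust_skill(0.8))   # few simulations -> weaker play
print(adjust_skill(-0.6))  # many simulations -> near full strength
```

Any monotone mapping would realize the same idea; the published AlphaDDA variants may adjust a different parameter or use a different function of the estimated value.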

Bibliographic Details
Main Author: Fujita, Kazuhisa
Format: Online Article Text
Language: English
Published: PeerJ Inc. 2022
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9575865/
https://www.ncbi.nlm.nih.gov/pubmed/36262155
http://dx.doi.org/10.7717/peerj-cs.1123
_version_ 1784811406249426944
author Fujita, Kazuhisa
author_facet Fujita, Kazuhisa
author_sort Fujita, Kazuhisa
collection PubMed
description Artificial intelligence (AI) has achieved superhuman performance in board games such as Go, chess, and Othello (Reversi); that is, AI systems now surpass strong human expert players in these games. As a result, it is difficult for a human player to enjoy playing against such an AI. To keep human players entertained and immersed in a game, the AI must dynamically balance its skill with that of the human player. To address this issue, we propose AlphaDDA, an AlphaZero-based AI with dynamic difficulty adjustment (DDA). AlphaDDA consists of a deep neural network (DNN) and a Monte Carlo tree search (MCTS), as in AlphaZero. AlphaDDA learns and plays a game in the same way as AlphaZero but can change its skill. AlphaDDA estimates the value of the game state from the board state alone using the DNN and changes the parameter that dominantly controls its skill according to the estimated value. Consequently, AlphaDDA adjusts its skill to the current game state, using only the state of the game and no prior knowledge of its opponent. In this study, AlphaDDA plays Connect4, Othello, and 6x6 Othello against other AI agents: AlphaZero, MCTS, the minimax algorithm, and a random player. The results show that AlphaDDA can balance its skill with that of each of these agents except the random player. AlphaDDA can weaken itself according to the estimated value, but it still beats the random player because it remains stronger than random play even when weakened to its limit. The DDA ability of AlphaDDA rests on an accurate estimation of the value from the game state. We believe that the AlphaDDA approach to DDA can be applied to any game AI system, provided the DNN can accurately estimate the value of the game state and a parameter controlling the system's skill is known.
format Online
Article
Text
id pubmed-9575865
institution National Center for Biotechnology Information
language English
publishDate 2022
publisher PeerJ Inc.
record_format MEDLINE/PubMed
spelling pubmed-95758652022-10-18 AlphaDDA: strategies for adjusting the playing strength of a fully trained AlphaZero system to a suitable human training partner Fujita, Kazuhisa PeerJ Comput Sci Artificial Intelligence Artificial intelligence (AI) has achieved superhuman performance in board games such as Go, chess, and Othello (Reversi). In other words, the AI system surpasses the level of a strong human expert player in such games. In this context, it is difficult for a human player to enjoy playing the games with the AI. To keep human players entertained and immersed in a game, the AI is required to dynamically balance its skill with that of the human player. To address this issue, we propose AlphaDDA, an AlphaZero-based AI with dynamic difficulty adjustment (DDA). AlphaDDA consists of a deep neural network (DNN) and a Monte Carlo tree search, as in AlphaZero. AlphaDDA learns and plays a game the same way as AlphaZero, but can change its skills. AlphaDDA estimates the value of the game state from only the board state using the DNN. AlphaDDA changes a parameter dominantly controlling its skills according to the estimated value. Consequently, AlphaDDA adjusts its skills according to a game state. AlphaDDA can adjust its skill using only the state of a game without any prior knowledge regarding an opponent. In this study, AlphaDDA plays Connect4, Othello, and 6x6 Othello with other AI agents. Other AI agents are AlphaZero, Monte Carlo tree search, the minimax algorithm, and a random player. This study shows that AlphaDDA can balance its skill with that of the other AI agents, except for a random player. AlphaDDA can weaken itself according to the estimated value. However, AlphaDDA beats the random player because AlphaDDA is stronger than a random player even if AlphaDDA weakens itself to the limit. The DDA ability of AlphaDDA is based on an accurate estimation of the value from the state of a game. We believe that the AlphaDDA approach for DDA can be used for any game AI system if the DNN can accurately estimate the value of the game state and we know a parameter controlling the skills of the AI system. PeerJ Inc. 2022-10-04 /pmc/articles/PMC9575865/ /pubmed/36262155 http://dx.doi.org/10.7717/peerj-cs.1123 Text en © 2022 Fujita https://creativecommons.org/licenses/by/4.0/This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/) , which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ Computer Science) and either DOI or URL of the article must be cited.
spellingShingle Artificial Intelligence
Fujita, Kazuhisa
AlphaDDA: strategies for adjusting the playing strength of a fully trained AlphaZero system to a suitable human training partner
title AlphaDDA: strategies for adjusting the playing strength of a fully trained AlphaZero system to a suitable human training partner
title_full AlphaDDA: strategies for adjusting the playing strength of a fully trained AlphaZero system to a suitable human training partner
title_fullStr AlphaDDA: strategies for adjusting the playing strength of a fully trained AlphaZero system to a suitable human training partner
title_full_unstemmed AlphaDDA: strategies for adjusting the playing strength of a fully trained AlphaZero system to a suitable human training partner
title_short AlphaDDA: strategies for adjusting the playing strength of a fully trained AlphaZero system to a suitable human training partner
title_sort alphadda: strategies for adjusting the playing strength of a fully trained alphazero system to a suitable human training partner
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9575865/
https://www.ncbi.nlm.nih.gov/pubmed/36262155
http://dx.doi.org/10.7717/peerj-cs.1123
work_keys_str_mv AT fujitakazuhisa alphaddastrategiesforadjustingtheplayingstrengthofafullytrainedalphazerosystemtoasuitablehumantrainingpartner