
Learning to play against any mixture of opponents

Intuitively, experience playing against one mixture of opponents in a given domain should be relevant for a different mixture in the same domain. If the mixture changes, ideally we would not have to train from scratch, but rather could transfer what we have learned to construct a policy to play against the new mixture. We propose a transfer learning method, Q-Mixing, that starts by learning Q-values against each pure-strategy opponent. Then a Q-value for any distribution of opponent strategies is approximated by appropriately averaging the separately learned Q-values. From these components, we construct policies against all opponent mixtures without any further training. We empirically validate Q-Mixing in two environments: a simple grid-world soccer environment, and a social dilemma game. Our experiments find that Q-Mixing can successfully transfer knowledge across any mixture of opponents. Next, we consider the use of observations during play to update the believed distribution of opponents. We introduce an opponent policy classifier—trained reusing Q-learning data—and use the classifier results to refine the mixing of Q-values. Q-Mixing augmented with the opponent policy classifier performs better, with higher variance, than training directly against a mixed-strategy opponent.
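As a reading aid, here is a minimal sketch of the Q-Mixing idea the abstract describes, assuming tabular Q-values indexed by state. The names (QMixingPolicy, q_tables, opponent_prior, classifier_probs) are hypothetical illustrations, not the authors' implementation:

    import numpy as np

    class QMixingPolicy:
        """Play against a mixture by averaging per-opponent Q-values."""

        def __init__(self, q_tables, opponent_prior):
            # q_tables[i][state] -> array of Q-values (one per action),
            # learned separately against pure-strategy opponent i.
            self.q_tables = q_tables
            # Believed distribution over the opponent's pure strategies.
            self.belief = np.asarray(opponent_prior, dtype=float)

        def mixed_q(self, state):
            # Q-values against the mixture: belief-weighted average of
            # the separately learned per-opponent Q-values.
            return sum(w * np.asarray(q[state], dtype=float)
                       for w, q in zip(self.belief, self.q_tables))

        def act(self, state):
            # Greedy action under the mixed Q-values; no retraining is
            # needed when the opponent mixture changes.
            return int(np.argmax(self.mixed_q(state)))

        def update_belief(self, classifier_probs):
            # Refine the opponent distribution using an opponent-policy
            # classifier's per-opponent likelihoods for the observations
            # seen so far, then renormalize.
            posterior = self.belief * np.asarray(classifier_probs, dtype=float)
            self.belief = posterior / posterior.sum()

For example, with Q-values learned against two pure strategies and a 70/30 prior, QMixingPolicy([q_vs_a, q_vs_b], [0.7, 0.3]).act(s) selects the greedy action for that mixture without any further training, and update_belief refines the 70/30 prior as observations reveal which opponent is more likely.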


Bibliographic Details
Main Authors: Smith, Max Olan, Anthony, Thomas, Wellman, Michael P.
Format: Online Article Text
Language: English
Published: Frontiers Media S.A. 2023
Subjects: Artificial Intelligence
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10400709/
https://www.ncbi.nlm.nih.gov/pubmed/37547229
http://dx.doi.org/10.3389/frai.2023.804682
_version_ 1785084504465997824
author Smith, Max Olan
Anthony, Thomas
Wellman, Michael P.
author_facet Smith, Max Olan
Anthony, Thomas
Wellman, Michael P.
author_sort Smith, Max Olan
collection PubMed
description Intuitively, experience playing against one mixture of opponents in a given domain should be relevant for a different mixture in the same domain. If the mixture changes, ideally we would not have to train from scratch, but rather could transfer what we have learned to construct a policy to play against the new mixture. We propose a transfer learning method, Q-Mixing, that starts by learning Q-values against each pure-strategy opponent. Then a Q-value for any distribution of opponent strategies is approximated by appropriately averaging the separately learned Q-values. From these components, we construct policies against all opponent mixtures without any further training. We empirically validate Q-Mixing in two environments: a simple grid-world soccer environment, and a social dilemma game. Our experiments find that Q-Mixing can successfully transfer knowledge across any mixture of opponents. Next, we consider the use of observations during play to update the believed distribution of opponents. We introduce an opponent policy classifier—trained reusing Q-learning data—and use the classifier results to refine the mixing of Q-values. Q-Mixing augmented with the opponent policy classifier performs better, with higher variance, than training directly against a mixed-strategy opponent.
format Online
Article
Text
id pubmed-10400709
institution National Center for Biotechnology Information
language English
publishDate 2023
publisher Frontiers Media S.A.
record_format MEDLINE/PubMed
spelling pubmed-10400709 2023-08-05 Learning to play against any mixture of opponents Smith, Max Olan Anthony, Thomas Wellman, Michael P. Front Artif Intell Artificial Intelligence Intuitively, experience playing against one mixture of opponents in a given domain should be relevant for a different mixture in the same domain. If the mixture changes, ideally we would not have to train from scratch, but rather could transfer what we have learned to construct a policy to play against the new mixture. We propose a transfer learning method, Q-Mixing, that starts by learning Q-values against each pure-strategy opponent. Then a Q-value for any distribution of opponent strategies is approximated by appropriately averaging the separately learned Q-values. From these components, we construct policies against all opponent mixtures without any further training. We empirically validate Q-Mixing in two environments: a simple grid-world soccer environment, and a social dilemma game. Our experiments find that Q-Mixing can successfully transfer knowledge across any mixture of opponents. Next, we consider the use of observations during play to update the believed distribution of opponents. We introduce an opponent policy classifier—trained reusing Q-learning data—and use the classifier results to refine the mixing of Q-values. Q-Mixing augmented with the opponent policy classifier performs better, with higher variance, than training directly against a mixed-strategy opponent. Frontiers Media S.A. 2023-07-20 /pmc/articles/PMC10400709/ /pubmed/37547229 http://dx.doi.org/10.3389/frai.2023.804682 Text en Copyright © 2023 Smith, Anthony and Wellman. https://creativecommons.org/licenses/by/4.0/ This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
spellingShingle Artificial Intelligence
Smith, Max Olan
Anthony, Thomas
Wellman, Michael P.
Learning to play against any mixture of opponents
title Learning to play against any mixture of opponents
title_full Learning to play against any mixture of opponents
title_fullStr Learning to play against any mixture of opponents
title_full_unstemmed Learning to play against any mixture of opponents
title_short Learning to play against any mixture of opponents
title_sort learning to play against any mixture of opponents
topic Artificial Intelligence
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10400709/
https://www.ncbi.nlm.nih.gov/pubmed/37547229
http://dx.doi.org/10.3389/frai.2023.804682
work_keys_str_mv AT smithmaxolan learningtoplayagainstanymixtureofopponents
AT anthonythomas learningtoplayagainstanymixtureofopponents
AT wellmanmichaelp learningtoplayagainstanymixtureofopponents