Some performance considerations when using multi-armed bandit algorithms in the presence of missing data
When comparing the performance of multi-armed bandit algorithms, the potential impact of missing data is often overlooked. In practice, it also affects their implementation, where the simplest approach to overcome this is to continue to sample according to the original bandit algorithm, ignoring missing outcomes.
Main Authors: | Chen, Xijin; Lee, Kim May; Villar, Sofia S.; Robertson, David S. |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | Public Library of Science, 2022 |
Subjects: | Research Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9467360/ https://www.ncbi.nlm.nih.gov/pubmed/36094920 http://dx.doi.org/10.1371/journal.pone.0274272 |
_version_ | 1784788176689168384 |
---|---|
author | Chen, Xijin; Lee, Kim May; Villar, Sofia S.; Robertson, David S. |
author_facet | Chen, Xijin; Lee, Kim May; Villar, Sofia S.; Robertson, David S. |
author_sort | Chen, Xijin |
collection | PubMed |
description | When comparing the performance of multi-armed bandit algorithms, the potential impact of missing data is often overlooked. In practice, it also affects their implementation, where the simplest approach to overcome this is to continue to sample according to the original bandit algorithm, ignoring missing outcomes. We investigate the impact on performance of this approach to dealing with missing data for several bandit algorithms through an extensive simulation study, assuming the rewards are missing at random. We focus on two-armed bandit algorithms with binary outcomes in the context of patient allocation for clinical trials with relatively small sample sizes. However, our results apply to other applications of bandit algorithms where missing data are expected to occur. We assess the resulting operating characteristics, including the expected reward. Different probabilities of missingness in both arms are considered. The key finding of our work is that when using the simplest strategy of ignoring missing data, the impact on the expected performance of multi-armed bandit strategies varies according to the way these strategies balance the exploration-exploitation trade-off. Algorithms that are geared towards exploration continue to assign samples to the arm with more missing responses (which, being perceived as the arm with less observed information, is deemed more appealing by the algorithm than it would otherwise be). In contrast, algorithms that are geared towards exploitation rapidly assign a high value to samples from the arms with a currently high mean, irrespective of the level of observations per arm. Furthermore, for algorithms focusing more on exploration, we illustrate that the problem of missing responses can be alleviated using a simple mean imputation approach. |
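The behaviour described in the abstract can be sketched with a minimal Thompson sampling simulation for a two-armed Bernoulli bandit. This is not the paper's actual code: the function name `simulate`, the reward probabilities, the missingness rates, and the fractional-pseudo-count imputation update are all illustrative assumptions, kept only to show the two strategies the abstract contrasts (ignoring missing outcomes vs. a simple mean imputation).

```python
import random

def simulate(n_patients=200, p=(0.3, 0.5), p_miss=(0.4, 0.0),
             impute=False, seed=0):
    """Two-armed Bernoulli bandit allocated by Thompson sampling,
    with rewards missing at random. A missing outcome is either
    ignored (impute=False) or replaced by the arm's current observed
    mean (impute=True), pushed into the posterior as a fractional
    pseudo-count update."""
    rng = random.Random(seed)
    alpha = [1.0, 1.0]          # Beta(1, 1) priors on each arm
    beta = [1.0, 1.0]
    observed = [[], []]         # observed rewards per arm (for imputation)
    pulls = [0, 0]
    for _ in range(n_patients):
        # Thompson sampling: draw from each posterior, play the larger draw
        draws = [rng.betavariate(alpha[k], beta[k]) for k in (0, 1)]
        arm = 0 if draws[0] >= draws[1] else 1
        pulls[arm] += 1
        reward = 1 if rng.random() < p[arm] else 0
        if rng.random() < p_miss[arm]:
            # Outcome is missing at random
            if impute and observed[arm]:
                m = sum(observed[arm]) / len(observed[arm])
                alpha[arm] += m          # fractional mean-imputation update
                beta[arm] += 1.0 - m
            # else: ignore the missing outcome entirely
        else:
            observed[arm].append(reward)
            alpha[arm] += reward
            beta[arm] += 1 - reward
    return pulls

# Arm 0 has heavy missingness; compare allocations under both strategies
print(simulate(impute=False))
print(simulate(impute=True))
```

With ignored missingness, the posterior for arm 0 stays wide (less observed information), so an exploration-leaning rule keeps sampling it; the imputation variant keeps the posterior updated for every allocation, which is the mitigation the abstract describes for exploration-focused algorithms.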
format | Online Article Text |
id | pubmed-9467360 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2022 |
publisher | Public Library of Science |
record_format | MEDLINE/PubMed |
spelling | pubmed-9467360 2022-09-13 Some performance considerations when using multi-armed bandit algorithms in the presence of missing data Chen, Xijin Lee, Kim May Villar, Sofia S. Robertson, David S. PLoS One Research Article When comparing the performance of multi-armed bandit algorithms, the potential impact of missing data is often overlooked. In practice, it also affects their implementation, where the simplest approach to overcome this is to continue to sample according to the original bandit algorithm, ignoring missing outcomes. We investigate the impact on performance of this approach to dealing with missing data for several bandit algorithms through an extensive simulation study, assuming the rewards are missing at random. We focus on two-armed bandit algorithms with binary outcomes in the context of patient allocation for clinical trials with relatively small sample sizes. However, our results apply to other applications of bandit algorithms where missing data are expected to occur. We assess the resulting operating characteristics, including the expected reward. Different probabilities of missingness in both arms are considered. The key finding of our work is that when using the simplest strategy of ignoring missing data, the impact on the expected performance of multi-armed bandit strategies varies according to the way these strategies balance the exploration-exploitation trade-off. Algorithms that are geared towards exploration continue to assign samples to the arm with more missing responses (which, being perceived as the arm with less observed information, is deemed more appealing by the algorithm than it would otherwise be). In contrast, algorithms that are geared towards exploitation rapidly assign a high value to samples from the arms with a currently high mean, irrespective of the level of observations per arm.
Furthermore, for algorithms focusing more on exploration, we illustrate that the problem of missing responses can be alleviated using a simple mean imputation approach. Public Library of Science 2022-09-12 /pmc/articles/PMC9467360/ /pubmed/36094920 http://dx.doi.org/10.1371/journal.pone.0274272 Text en © 2022 Chen et al. This is an open access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. |
spellingShingle | Research Article Chen, Xijin Lee, Kim May Villar, Sofia S. Robertson, David S. Some performance considerations when using multi-armed bandit algorithms in the presence of missing data |
title | Some performance considerations when using multi-armed bandit algorithms in the presence of missing data |
title_full | Some performance considerations when using multi-armed bandit algorithms in the presence of missing data |
title_fullStr | Some performance considerations when using multi-armed bandit algorithms in the presence of missing data |
title_full_unstemmed | Some performance considerations when using multi-armed bandit algorithms in the presence of missing data |
title_short | Some performance considerations when using multi-armed bandit algorithms in the presence of missing data |
title_sort | some performance considerations when using multi-armed bandit algorithms in the presence of missing data |
topic | Research Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9467360/ https://www.ncbi.nlm.nih.gov/pubmed/36094920 http://dx.doi.org/10.1371/journal.pone.0274272 |
work_keys_str_mv | AT chenxijin someperformanceconsiderationswhenusingmultiarmedbanditalgorithmsinthepresenceofmissingdata AT leekimmay someperformanceconsiderationswhenusingmultiarmedbanditalgorithmsinthepresenceofmissingdata AT villarsofias someperformanceconsiderationswhenusingmultiarmedbanditalgorithmsinthepresenceofmissingdata AT robertsondavids someperformanceconsiderationswhenusingmultiarmedbanditalgorithmsinthepresenceofmissingdata |