Robust Universal Inference
Learning and making inference from a finite set of samples are among the fundamental problems in science. In most popular applications, the paradigmatic approach is to seek a model that best explains the data. This approach has many desirable properties when the number of samples is large. However,...
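The full abstract (in the record below) describes choosing a predictive distribution by a minimax criterion over a class of “reasonable” models, and ties the resulting worst-case value to channel capacity via the redundancy-capacity theorem. A hedged LaTeX sketch of that formulation follows; the notation (\(x^n\), \(\Theta(x^n)\), \(p_\theta\), \(q\)) is illustrative and not taken from the paper:

\[
q^{*} \;=\; \arg\min_{q}\; \max_{\theta \in \Theta(x^n)} D_{\mathrm{KL}}\!\left(p_{\theta}\,\|\,q\right),
\qquad
\min_{q}\; \max_{\theta \in \Theta(x^n)} D_{\mathrm{KL}}\!\left(p_{\theta}\,\|\,q\right)
\;=\; \max_{w}\, I_{w}(\theta; Y),
\]

where \(x^n\) stands for the train set, \(\Theta(x^n)\) for the class of models deemed reasonable given it, and the maximum on the right runs over priors \(w\) on \(\Theta(x^n)\); the equality of the minimax KL divergence and the capacity of the “channel” from \(\theta\) to a future observation \(Y\) is the classical redundancy-capacity theorem referenced in the abstract.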
Main Authors: | Painsky, Amichai; Feder, Meir |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI 2021 |
Subjects: | Article |
Online Access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8235138/ https://www.ncbi.nlm.nih.gov/pubmed/34207449 http://dx.doi.org/10.3390/e23060773 |
_version_ | 1783714246637387776 |
---|---|
author | Painsky, Amichai Feder, Meir |
author_facet | Painsky, Amichai Feder, Meir |
author_sort | Painsky, Amichai |
collection | PubMed |
description | Learning and making inference from a finite set of samples are among the fundamental problems in science. In most popular applications, the paradigmatic approach is to seek a model that best explains the data. This approach has many desirable properties when the number of samples is large. However, in many practical setups, data acquisition is costly and only a limited number of samples is available. In this work, we study an alternative approach for this challenging setup. Our framework suggests that the role of the train-set is not to provide a single estimated model, which may be inaccurate due to the limited number of samples. Instead, we define a class of “reasonable” models. Then, the worst-case performance in the class is controlled by a minimax estimator with respect to it. Further, we introduce a robust estimation scheme that provides minimax guarantees, also for the case where the true model is not a member of the model class. Our results draw important connections to universal prediction, the redundancy-capacity theorem, and channel capacity theory. We demonstrate our suggested scheme in different setups, showing a significant improvement in worst-case performance over currently known alternatives. |
format | Online Article Text |
id | pubmed-8235138 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-8235138 2021-06-27 Robust Universal Inference Painsky, Amichai Feder, Meir Entropy (Basel) Article Learning and making inference from a finite set of samples are among the fundamental problems in science. In most popular applications, the paradigmatic approach is to seek a model that best explains the data. This approach has many desirable properties when the number of samples is large. However, in many practical setups, data acquisition is costly and only a limited number of samples is available. In this work, we study an alternative approach for this challenging setup. Our framework suggests that the role of the train-set is not to provide a single estimated model, which may be inaccurate due to the limited number of samples. Instead, we define a class of “reasonable” models. Then, the worst-case performance in the class is controlled by a minimax estimator with respect to it. Further, we introduce a robust estimation scheme that provides minimax guarantees, also for the case where the true model is not a member of the model class. Our results draw important connections to universal prediction, the redundancy-capacity theorem, and channel capacity theory. We demonstrate our suggested scheme in different setups, showing a significant improvement in worst-case performance over currently known alternatives. MDPI 2021-06-18 /pmc/articles/PMC8235138/ /pubmed/34207449 http://dx.doi.org/10.3390/e23060773 Text en © 2021 by the authors. https://creativecommons.org/licenses/by/4.0/ Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article Painsky, Amichai Feder, Meir Robust Universal Inference |
title | Robust Universal Inference |
title_full | Robust Universal Inference |
title_fullStr | Robust Universal Inference |
title_full_unstemmed | Robust Universal Inference |
title_short | Robust Universal Inference |
title_sort | robust universal inference |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8235138/ https://www.ncbi.nlm.nih.gov/pubmed/34207449 http://dx.doi.org/10.3390/e23060773 |
work_keys_str_mv | AT painskyamichai robustuniversalinference AT federmeir robustuniversalinference |