Learnability for the Information Bottleneck
The Information Bottleneck (IB) method provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective $I(X;Z) - \beta I(Y;Z)$ employs a Lagrange multiplier $\beta$ to tune this trade-off. However, in practice, not only is $\beta$ chosen empirically without theoretical guidance, but there is also a lack of theoretical understanding of the relationship between $\beta$, learnability, the intrinsic nature of the dataset, and model capacity. In this paper, we show that if $\beta$ is improperly chosen, learning cannot happen: the trivial representation $p(z|x) = p(z)$ becomes the global minimum of the IB objective. We show how this can be avoided by identifying a sharp phase transition between the unlearnable and the learnable regimes that arises as $\beta$ is varied. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good $\beta$. We further show that IB-Learnability is determined by the largest confident, typical, and imbalanced subset of the examples (the conspicuous subset), and discuss its relation to model capacity. We give practical algorithms to estimate the minimum $\beta$ for a given dataset. We also empirically demonstrate our theoretical conditions with analyses of synthetic datasets, MNIST, and CIFAR10.
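For reference, the formula placeholders in the abstract can be reconstructed from the paper's definitions. The sketch below (our reconstruction, not a quotation of the record) states the objective, the value of the trivial representation, and the one-line data-processing argument for why $\beta \le 1$ is always unlearnable.

```latex
% Reconstructed from the paper's definitions: the IB objective is
% minimized over stochastic encoders p(z|x) under the Markov chain
% Z <- X <-> Y, i.e. Z depends on Y only through X.
\[
  \mathrm{IB}_{\beta}\bigl[p(z|x)\bigr] \;=\; I(X;Z) \;-\; \beta\, I(Y;Z),
  \qquad
  \mathrm{IB}_{\beta}\bigl[p(z|x)=p(z)\bigr] \;=\; 0 .
\]
% The data-processing inequality gives I(Y;Z) <= I(X;Z), so for
% beta <= 1 every encoder satisfies
%   IB_beta >= (1 - beta) I(X;Z) >= 0,
% making the trivial representation a global minimum: learnability
% requires beta > 1, with the exact threshold beta_0 determined by
% the conspicuous subset of the data.
```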
Main authors: | Wu, Tailin; Fischer, Ian; Chuang, Isaac L.; Tegmark, Max |
---|---|
Format: | Online Article Text |
Language: | English |
Published: | MDPI, 2019 |
Subjects: | Article |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7514257/ http://dx.doi.org/10.3390/e21100924 |
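To make the phase transition concrete, here is a minimal Python sketch, not the authors' estimation algorithm: on a hypothetical toy joint distribution it compares the trivial encoder, which always scores $\mathrm{IB}_\beta = 0$, with the identity encoder $Z = X$, which scores $H(X) - \beta I(X;Y)$ and therefore wins exactly when $\beta > H(X)/I(X;Y)$. That crossover is only an upper bound on the true threshold $\beta_0$; the function names and the distribution `pxy` are illustrative assumptions.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a probability vector (zero entries allowed)."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def mutual_information(pxy):
    """I(X;Y) in bits from a joint probability table p(x, y)."""
    return entropy(pxy.sum(axis=1)) + entropy(pxy.sum(axis=0)) - entropy(pxy.ravel())

# Hypothetical toy dataset: X is a uniform bit, Y = X flipped with probability eps.
eps = 0.1
pxy = np.array([[0.5 * (1 - eps), 0.5 * eps],
                [0.5 * eps,       0.5 * (1 - eps)]])

hx  = entropy(pxy.sum(axis=1))   # H(X) = I(X;Z) for the identity encoder Z = X
ixy = mutual_information(pxy)    # I(X;Y) = I(Y;Z) for Z = X

# The trivial encoder p(z|x) = p(z) always scores IB_beta = 0; the identity
# encoder scores H(X) - beta * I(X;Y), so the problem is certainly learnable
# once beta exceeds H(X) / I(X;Y) (an upper bound on the threshold beta_0).
for beta in (0.5, 1.0, hx / ixy, 2 * hx / ixy):
    ib_identity = hx - beta * ixy
    verdict = "learnable" if ib_identity < 0 else "trivial encoder wins"
    print(f"beta = {beta:5.2f}: IB_beta(Z=X) = {ib_identity:+.3f}  ({verdict})")
```

With flip probability $\varepsilon = 0.1$ the crossover lands at $\beta \approx 1.88$; the paper's conspicuous-subset conditions pin $\beta_0$ down more tightly, between 1 and single-encoder bounds of this kind.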
_version_ | 1783586546458296320 |
---|---|
author | Wu, Tailin; Fischer, Ian; Chuang, Isaac L.; Tegmark, Max |
author_facet | Wu, Tailin; Fischer, Ian; Chuang, Isaac L.; Tegmark, Max |
author_sort | Wu, Tailin |
collection | PubMed |
description | The Information Bottleneck (IB) method provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective $I(X;Z) - \beta I(Y;Z)$ employs a Lagrange multiplier $\beta$ to tune this trade-off. However, in practice, not only is $\beta$ chosen empirically without theoretical guidance, but there is also a lack of theoretical understanding of the relationship between $\beta$, learnability, the intrinsic nature of the dataset, and model capacity. In this paper, we show that if $\beta$ is improperly chosen, learning cannot happen: the trivial representation $p(z|x) = p(z)$ becomes the global minimum of the IB objective. We show how this can be avoided by identifying a sharp phase transition between the unlearnable and the learnable regimes that arises as $\beta$ is varied. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good $\beta$. We further show that IB-Learnability is determined by the largest confident, typical, and imbalanced subset of the examples (the conspicuous subset), and discuss its relation to model capacity. We give practical algorithms to estimate the minimum $\beta$ for a given dataset. We also empirically demonstrate our theoretical conditions with analyses of synthetic datasets, MNIST, and CIFAR10. |
format | Online Article Text |
id | pubmed-7514257 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2019 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7514257 2020-11-09 Learnability for the Information Bottleneck Wu, Tailin; Fischer, Ian; Chuang, Isaac L.; Tegmark, Max Entropy (Basel) Article The Information Bottleneck (IB) method provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective $I(X;Z) - \beta I(Y;Z)$ employs a Lagrange multiplier $\beta$ to tune this trade-off. However, in practice, not only is $\beta$ chosen empirically without theoretical guidance, but there is also a lack of theoretical understanding of the relationship between $\beta$, learnability, the intrinsic nature of the dataset, and model capacity. In this paper, we show that if $\beta$ is improperly chosen, learning cannot happen: the trivial representation $p(z|x) = p(z)$ becomes the global minimum of the IB objective. We show how this can be avoided by identifying a sharp phase transition between the unlearnable and the learnable regimes that arises as $\beta$ is varied. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good $\beta$. We further show that IB-Learnability is determined by the largest confident, typical, and imbalanced subset of the examples (the conspicuous subset), and discuss its relation to model capacity. We give practical algorithms to estimate the minimum $\beta$ for a given dataset. We also empirically demonstrate our theoretical conditions with analyses of synthetic datasets, MNIST, and CIFAR10. MDPI 2019-09-23 /pmc/articles/PMC7514257/ http://dx.doi.org/10.3390/e21100924 Text en © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). |
spellingShingle | Article; Wu, Tailin; Fischer, Ian; Chuang, Isaac L.; Tegmark, Max; Learnability for the Information Bottleneck |
title | Learnability for the Information Bottleneck |
title_full | Learnability for the Information Bottleneck |
title_fullStr | Learnability for the Information Bottleneck |
title_full_unstemmed | Learnability for the Information Bottleneck |
title_short | Learnability for the Information Bottleneck |
title_sort | learnability for the information bottleneck |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7514257/ http://dx.doi.org/10.3390/e21100924 |
work_keys_str_mv | AT wutailin learnabilityfortheinformationbottleneck AT fischerian learnabilityfortheinformationbottleneck AT chuangisaacl learnabilityfortheinformationbottleneck AT tegmarkmax learnabilityfortheinformationbottleneck |