
Markov Information Bottleneck to Improve Information Flow in Stochastic Neural Networks

While rate distortion theory compresses data under a distortion constraint, the information bottleneck (IB) generalizes rate distortion theory to learning problems by replacing the distortion constraint with a constraint on relevant information. In this work, we further extend IB to multiple Markov bottlenecks (i.e., latent variables that form a Markov chain), namely the Markov information bottleneck (MIB), which fits the context of stochastic neural networks (SNNs) better than the original IB. We show that the Markov bottlenecks cannot simultaneously achieve their information optimality in a non-collapse MIB, and thus devise an optimality compromise. Under MIB, we take the novel perspective that each layer of an SNN is a bottleneck whose learning goal is to encode relevant information from the data in a compressed form. The inference from a hidden layer to the output layer is then interpreted as a variational approximation to that layer's decoding of relevant information in the MIB. As a consequence of this perspective, the maximum likelihood estimation (MLE) principle in the context of SNNs becomes a special case of the variational MIB. We show that, compared to MLE, the variational MIB encourages better information flow in SNNs in both principle and practice, and empirically improves performance in classification, adversarial robustness, and multi-modal learning on MNIST.
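For readers unfamiliar with the framework: the classical IB objective of Tishby, Pereira, and Bialek, on which the abstract builds, seeks a stochastic encoding T of the data X that is maximally compressed while retaining the information relevant to a target Y. The paper's exact multi-layer MIB objective is not reproduced in this record, so the following is only the standard single-bottleneck form:

\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)

where \beta > 0 trades compression I(X;T) against relevance I(T;Y). In the MIB setting described in the abstract, the layers T_1, \dots, T_L of an SNN form a Markov chain Y \to X \to T_1 \to \cdots \to T_L, so the data processing inequality makes I(T_\ell; Y) non-increasing in \ell; this gives some intuition for the abstract's claim that the bottlenecks cannot all reach their individual information optima at once, hence the proposed optimality compromise.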

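As a concrete illustration of the variational-bottleneck idea, below is a minimal PyTorch sketch of a single stochastic layer trained with a variational IB-style loss. This is in the spirit of deep variational IB rather than the paper's exact MIB objective, which this record does not spell out; all class, function, and parameter names are illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticBottleneck(nn.Module):
    # One stochastic layer: encodes input x into a Gaussian code t.
    def __init__(self, in_dim, code_dim):
        super().__init__()
        self.mu = nn.Linear(in_dim, code_dim)
        self.logvar = nn.Linear(in_dim, code_dim)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        # Reparameterized sample t ~ N(mu, diag(exp(logvar)))
        t = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # KL(q(t|x) || N(0, I)): a variational upper bound on I(X; T)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=1)
        return t, kl

def vib_loss(logits, labels, kl, beta=1e-3):
    # Cross-entropy serves as a variational surrogate for maximizing I(T; Y);
    # beta trades prediction accuracy against compression of the code.
    return F.cross_entropy(logits, labels) + beta * kl.mean()

# Illustrative usage on MNIST-sized inputs:
bottleneck = StochasticBottleneck(784, 32)
decoder = nn.Linear(32, 10)
x, y = torch.randn(8, 784), torch.randint(0, 10, (8,))
t, kl = bottleneck(x)
loss = vib_loss(decoder(t), y, kl)
loss.backward()

Per the abstract, a full MIB model would stack several such stochastic layers into a Markov chain and attach an analogous encode/decode objective to each layer, rather than to a single bottleneck as sketched here.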

Bibliographic Details
Main Authors: Tang Nguyen, Thanh; Choi, Jaesik
Format: Online Article (Text)
Language: English
Published: MDPI, 2019-10-06
Journal: Entropy (Basel)
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7514307/
http://dx.doi.org/10.3390/e21100976
License: © 2019 by the authors. Licensee MDPI, Basel, Switzerland. Open access under the terms of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).