Information Bottleneck Theory Based Exploration of Cascade Learning

In solving challenging pattern recognition problems, deep neural networks have shown excellent performance by forming powerful mappings between inputs and targets, learning representations (features) and making subsequent predictions. A recent tool to help understand how representations are formed is based on observing the dynamics of learning on an information plane using mutual information, linking the input to the representation (I(X;T)) and the representation to the target (I(T;Y)). In this paper, we use an information-theoretic approach to understand how Cascade Learning (CL), a method to train deep neural networks layer-by-layer, learns representations, as CL has shown comparable results while saving computation and memory costs. We observe that performance is not linked to information compression, which differs from observations of End-to-End (E2E) learning. Additionally, CL can inherit information about targets and gradually specialise extracted features layer by layer. We evaluate this effect by proposing an information transition ratio, [Formula: see text], and show that it can serve as a useful heuristic in setting the depth of a neural network that achieves satisfactory classification accuracy.
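The information-plane tool mentioned in the abstract tracks, for each layer's representation T, two mutual-information coordinates: I(X;T) between input and representation, and I(T;Y) between representation and target. The sketch below estimates one such coordinate pair with the widely used equal-width binning estimator (in the style of Shwartz-Ziv and Tishby); it is a minimal illustration under assumed choices (the function names, the 30-bin discretization, distinct input samples, integer class labels), not the paper's exact procedure.

import numpy as np

def discretize(acts: np.ndarray, n_bins: int = 30) -> np.ndarray:
    """Map each row of layer activations to a discrete state id by
    equal-width binning (an assumed choice, not the paper's exact one)."""
    lo, hi = float(acts.min()), float(acts.max())
    edges = np.linspace(lo, hi + 1e-12, n_bins)
    binned = np.digitize(acts, edges)
    # Each distinct row of bin indices is one discrete representation state T.
    _, state_ids = np.unique(binned, axis=0, return_inverse=True)
    return state_ids

def entropy_bits(ids: np.ndarray) -> float:
    """Shannon entropy (in bits) of a discrete variable given sample ids."""
    _, counts = np.unique(ids, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_information_bits(a: np.ndarray, b: np.ndarray) -> float:
    """I(A;B) = H(A) + H(B) - H(A,B), with (A,B) encoded as a joint id."""
    joint = a.astype(np.int64) * (int(b.max()) + 1) + b.astype(np.int64)
    return entropy_bits(a) + entropy_bits(b) - entropy_bits(joint)

def information_plane_point(activations: np.ndarray,
                            labels: np.ndarray,
                            n_bins: int = 30):
    """One (I(X;T), I(T;Y)) coordinate for a layer's activations.

    Assumes every input sample is distinct, so T is a deterministic
    function of X and I(X;T) reduces to H(T) under the binning.
    """
    t = discretize(activations, n_bins)
    i_xt = entropy_bits(t)
    i_ty = mutual_information_bits(t, labels)
    return i_xt, i_ty

For a cascade-trained network, computing one such point per layer at intervals during training traces the trajectories the paper analyses; the proposed information transition ratio (which the placeholder in the abstract stands in for) is derived from mutual-information quantities of this kind and used as a heuristic for choosing network depth.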


Bibliographic Details
Main Authors: Du, Xin; Farrahi, Katayoun; Niranjan, Mahesan
Format: Online Article Text
Language: English
Published: MDPI 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8535168/
https://www.ncbi.nlm.nih.gov/pubmed/34682084
http://dx.doi.org/10.3390/e23101360
collection PubMed
id pubmed-8535168
institution National Center for Biotechnology Information
record_format MEDLINE/PubMed
journal Entropy (Basel)
publishDate 2021-10-18
license © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).