Convergence Behavior of DNNs with Mutual-Information-Based Regularization
Information theory concepts are leveraged with the goal of better understanding and improving Deep Neural Networks (DNNs). The information plane of neural networks describes the behavior during training of the mutual information at various depths between input/output and hidden-layer variables. (Full abstract in the description field below.)
Main Authors: Jónsson, Hlynur; Cherubini, Giovanni; Eleftheriou, Evangelos
Format: Online Article Text
Language: English
Published: MDPI, 2020
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7517266/ https://www.ncbi.nlm.nih.gov/pubmed/33286499 http://dx.doi.org/10.3390/e22070727
_version_ | 1783587190509404160 |
author | Jónsson, Hlynur Cherubini, Giovanni Eleftheriou, Evangelos |
author_facet | Jónsson, Hlynur Cherubini, Giovanni Eleftheriou, Evangelos |
author_sort | Jónsson, Hlynur |
collection | PubMed |
description | Information theory concepts are leveraged with the goal of better understanding and improving Deep Neural Networks (DNNs). The information plane of neural networks describes the behavior during training of the mutual information at various depths between input/output and hidden-layer variables. Previous analysis revealed that most of the training epochs are spent on compressing the input, in some networks where finiteness of the mutual information can be established. However, the estimation of mutual information is nontrivial for high-dimensional continuous random variables. Therefore, the computation of the mutual information for DNNs and its visualization on the information plane mostly focused on low-complexity fully connected networks. In fact, even the existence of the compression phase in complex DNNs has been questioned and viewed as an open problem. In this paper, we present the convergence of mutual information on the information plane for a high-dimensional VGG-16 Convolutional Neural Network (CNN) by resorting to Mutual Information Neural Estimation (MINE), thus confirming and extending the results obtained with low-dimensional fully connected networks. Furthermore, we demonstrate the benefits of regularizing a network, especially for a large number of training epochs, by adopting mutual information estimates as additional terms in the loss function characteristic of the network. Experimental results show that the regularization stabilizes the test accuracy and significantly reduces its variance. |
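The description above outlines the paper's two ingredients: estimating mutual information in a high-dimensional CNN with MINE, and adding the resulting estimates as extra terms to the network's loss. Below is a minimal PyTorch sketch of what such a setup can look like. It is not the authors' implementation; the statistics-network architecture, the regularization weight `beta`, the sign of the penalty, and the `classifier(x) -> (logits, hidden)` interface are all illustrative assumptions.

```python
# Sketch only: MINE-style estimate of I(X; Z) between the input X and a hidden
# representation Z, used as an additional term in the classification loss.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class StatisticsNetwork(nn.Module):
    """T_theta(x, z): scores joint samples against product-of-marginals samples."""
    def __init__(self, x_dim, z_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1))

def mine_lower_bound(T, x, z):
    """Donsker-Varadhan bound: E_P[T(x,z)] - log E_{P_X x P_Z}[exp(T(x,z))]."""
    joint = T(x, z).mean()
    # Shuffling z across the batch approximates sampling from the product of marginals.
    z_shuffled = z[torch.randperm(z.size(0), device=z.device)]
    marginal = torch.logsumexp(T(x, z_shuffled).squeeze(-1), dim=0) - math.log(z.size(0))
    return joint - marginal

def training_step(classifier, T, optimizer, x, y, beta=1e-3):
    """One step of MI-regularized training (illustrative sign and weight)."""
    logits, hidden = classifier(x)                      # hidden: a chosen layer's activations
    mi_xz = mine_lower_bound(T, x.flatten(1), hidden.flatten(1))
    loss = F.cross_entropy(logits, y) + beta * mi_xz    # penalize high I(X; hidden)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item(), mi_xz.item()
```

In the full MINE procedure the statistics network T is trained with its own optimizer to maximize the Donsker-Varadhan bound, alternating with (or preceding) the classifier step above; that loop is omitted here for brevity.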
format | Online Article Text |
id | pubmed-7517266 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2020 |
publisher | MDPI |
record_format | MEDLINE/PubMed |
spelling | pubmed-7517266 2020-11-09 Convergence Behavior of DNNs with Mutual-Information-Based Regularization Jónsson, Hlynur; Cherubini, Giovanni; Eleftheriou, Evangelos Entropy (Basel) Article MDPI 2020-06-30 /pmc/articles/PMC7517266/ /pubmed/33286499 http://dx.doi.org/10.3390/e22070727 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
spellingShingle | Article Jónsson, Hlynur Cherubini, Giovanni Eleftheriou, Evangelos Convergence Behavior of DNNs with Mutual-Information-Based Regularization |
title | Convergence Behavior of DNNs with Mutual-Information-Based Regularization |
title_full | Convergence Behavior of DNNs with Mutual-Information-Based Regularization |
title_fullStr | Convergence Behavior of DNNs with Mutual-Information-Based Regularization |
title_full_unstemmed | Convergence Behavior of DNNs with Mutual-Information-Based Regularization |
title_short | Convergence Behavior of DNNs with Mutual-Information-Based Regularization |
title_sort | convergence behavior of dnns with mutual-information-based regularization |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7517266/ https://www.ncbi.nlm.nih.gov/pubmed/33286499 http://dx.doi.org/10.3390/e22070727 |
work_keys_str_mv | AT jonssonhlynur convergencebehaviorofdnnswithmutualinformationbasedregularization AT cherubinigiovanni convergencebehaviorofdnnswithmutualinformationbasedregularization AT eleftheriouevangelos convergencebehaviorofdnnswithmutualinformationbasedregularization |