
Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference

We present a novel approach for training deep neural networks in a Bayesian way. Compared to other Bayesian deep learning formulations, our approach allows for quantifying the uncertainty in model parameters while only adding very few additional parameters to be optimized. The proposed approach uses variational inference to approximate the intractable a posteriori distribution on the basis of a normal prior. By representing the a posteriori uncertainty of the network parameters per network layer and depending on the estimated parameter expectation values, only very few additional parameters need to be optimized compared to a non-Bayesian network. We compare our approach to classical deep learning, Bernoulli dropout and Bayes by Backprop using the MNIST dataset. Compared to classical deep learning, the test error is reduced by 15%. We also show that the uncertainty information obtained can be used to calculate credible intervals for the network prediction and to optimize network architecture for the dataset at hand. To illustrate that our approach also scales to large networks and input vector sizes, we apply it to the GoogLeNet architecture on a custom dataset, achieving an average accuracy of 0.92. Using 95% credible intervals, all but one wrong classification result can be detected.

Bibliographic Details
Main Authors: Steinbrener, Jan, Posch, Konstantin, Pilz, Jürgen
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7660222/
https://www.ncbi.nlm.nih.gov/pubmed/33113927
http://dx.doi.org/10.3390/s20216011
_version_ 1783608966325993472
author Steinbrener, Jan
Posch, Konstantin
Pilz, Jürgen
author_facet Steinbrener, Jan
Posch, Konstantin
Pilz, Jürgen
author_sort Steinbrener, Jan
collection PubMed
description We present a novel approach for training deep neural networks in a Bayesian way. Compared to other Bayesian deep learning formulations, our approach allows for quantifying the uncertainty in model parameters while only adding very few additional parameters to be optimized. The proposed approach uses variational inference to approximate the intractable a posteriori distribution on the basis of a normal prior. By representing the a posteriori uncertainty of the network parameters per network layer and depending on the estimated parameter expectation values, only very few additional parameters need to be optimized compared to a non-Bayesian network. We compare our approach to classical deep learning, Bernoulli dropout and Bayes by Backprop using the MNIST dataset. Compared to classical deep learning, the test error is reduced by 15%. We also show that the uncertainty information obtained can be used to calculate credible intervals for the network prediction and to optimize network architecture for the dataset at hand. To illustrate that our approach also scales to large networks and input vector sizes, we apply it to the GoogLeNet architecture on a custom dataset, achieving an average accuracy of 0.92. Using 95% credible intervals, all but one wrong classification result can be detected.
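The parameterization sketched in the abstract, a normal posterior whose per-layer spread is tied to the estimated weight expectations, can be illustrated in code. Below is a minimal PyTorch-style sketch under our own assumptions: the class name, the softplus link, and the rule of scaling the standard deviation by |mu| are illustrative choices, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PerLayerVariationalLinear(nn.Module):
    """Linear layer whose weight posterior is N(mu, (rho * |mu|)^2).

    Only ONE extra scalar (rho_raw) is learned per layer, so the
    parameter count is almost identical to a deterministic layer.
    """

    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(torch.randn(out_features, in_features) * 0.05)
        self.bias = nn.Parameter(torch.zeros(out_features))
        # Single per-layer spread parameter (softplus keeps it positive).
        self.rho_raw = nn.Parameter(torch.tensor(-3.0))

    def forward(self, x):
        sigma = F.softplus(self.rho_raw) * self.mu.abs()  # std scales with |mu|
        eps = torch.randn_like(self.mu)                   # reparameterization trick
        weight = self.mu + sigma * eps                    # one posterior sample
        return F.linear(x, weight, self.bias)
```

Because each layer learns just one extra scalar, a network built from such layers has essentially the same parameter count as its deterministic counterpart, which is the efficiency argument the abstract makes.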
format Online
Article
Text
id pubmed-7660222
institution National Center for Biotechnology Information
language English
publishDate 2020
publisher MDPI
record_format MEDLINE/PubMed
spelling pubmed-7660222 2020-11-13 Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference Steinbrener, Jan Posch, Konstantin Pilz, Jürgen Sensors (Basel) Article We present a novel approach for training deep neural networks in a Bayesian way. Compared to other Bayesian deep learning formulations, our approach allows for quantifying the uncertainty in model parameters while only adding very few additional parameters to be optimized. The proposed approach uses variational inference to approximate the intractable a posteriori distribution on the basis of a normal prior. By representing the a posteriori uncertainty of the network parameters per network layer and depending on the estimated parameter expectation values, only very few additional parameters need to be optimized compared to a non-Bayesian network. We compare our approach to classical deep learning, Bernoulli dropout and Bayes by Backprop using the MNIST dataset. Compared to classical deep learning, the test error is reduced by 15%. We also show that the uncertainty information obtained can be used to calculate credible intervals for the network prediction and to optimize network architecture for the dataset at hand. To illustrate that our approach also scales to large networks and input vector sizes, we apply it to the GoogLeNet architecture on a custom dataset, achieving an average accuracy of 0.92. Using 95% credible intervals, all but one wrong classification result can be detected. MDPI 2020-10-23 /pmc/articles/PMC7660222/ /pubmed/33113927 http://dx.doi.org/10.3390/s20216011 Text en © 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
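The detection of wrong classifications via 95% credible intervals described above amounts to Monte Carlo sampling of the weight posterior at test time. The following is a hedged sketch of that procedure; the function name, sample count, and interval rule are our assumptions, not the authors' exact protocol.

```python
import torch

@torch.no_grad()
def predict_with_credible_interval(model, x, n_samples=100, alpha=0.05):
    """Monte Carlo forward passes through a stochastic (Bayesian) model.

    Returns the predicted class plus a (lower, upper) credible interval
    for its probability; a wide interval signals an unreliable prediction.
    """
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )                                   # (n_samples, batch, n_classes)
    mean = probs.mean(dim=0)
    pred = mean.argmax(dim=-1)
    class_probs = probs.gather(
        -1, pred.unsqueeze(0).unsqueeze(-1).expand(n_samples, -1, -1)
    ).squeeze(-1)                       # sampled probs of the predicted class
    lower = class_probs.quantile(alpha / 2, dim=0)
    upper = class_probs.quantile(1 - alpha / 2, dim=0)
    return pred, lower, upper
```

Predictions whose interval is wide, or whose lower bound falls below a chosen threshold, can then be flagged as unreliable, mirroring the reported result that all but one misclassification was detectable this way.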
spellingShingle Article
Steinbrener, Jan
Posch, Konstantin
Pilz, Jürgen
Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference
title Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference
title_full Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference
title_fullStr Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference
title_full_unstemmed Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference
title_short Measuring the Uncertainty of Predictions in Deep Neural Networks with Variational Inference
title_sort measuring the uncertainty of predictions in deep neural networks with variational inference
topic Article
url https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7660222/
https://www.ncbi.nlm.nih.gov/pubmed/33113927
http://dx.doi.org/10.3390/s20216011
work_keys_str_mv AT steinbrenerjan measuringtheuncertaintyofpredictionsindeepneuralnetworkswithvariationalinference
AT poschkonstantin measuringtheuncertaintyofpredictionsindeepneuralnetworkswithvariationalinference
AT pilzjurgen measuringtheuncertaintyofpredictionsindeepneuralnetworkswithvariationalinference