
The Entropy Gain of Linear Systems and Some of Its Implications

We study the increase in per-sample differential entropy rate of random sequences and processes after being passed through a non-minimum-phase (NMP) discrete-time, linear time-invariant (LTI) filter G. For LTI discrete-time filters and random processes, it has long been established by Theorem 14 in Shannon's seminal paper that this entropy gain equals (1/2π) ∫_{−π}^{π} log|G(e^{jω})| dω. In this note, we first show that Shannon's Theorem 14 does not hold in general. Then, we prove that, when comparing the input differential entropy to that of the entire (longer) output of G, the entropy gain equals (1/2π) ∫_{−π}^{π} log|G(e^{jω})| dω. We show that the entropy gain between equal-length input and output sequences is upper bounded by (1/2π) ∫_{−π}^{π} log|G(e^{jω})| dω, and that it arises if and only if there exists an output additive disturbance with finite differential entropy (no matter how small) or a random initial state. Unlike what happens with linear maps, the entropy gain in this case depends on the distribution of all the signals involved. We illustrate some of the consequences of these results by presenting their implications in three different problems: conditions for equality in an information inequality of importance in networked control problems; extending to a much broader class of sources the existing results on the rate-distortion function for non-stationary Gaussian sources; and an observation on the capacity of auto-regressive Gaussian channels with feedback.

Bibliographic Details
Main Authors: Derpich, Milan S.; Müller, Matias; Østergaard, Jan
Format: Online Article Text
Language: English
Published: MDPI, 2021
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8392109/
https://www.ncbi.nlm.nih.gov/pubmed/34441087
http://dx.doi.org/10.3390/e23080947
Description: We study the increase in per-sample differential entropy rate of random sequences and processes after being passed through a non-minimum-phase (NMP) discrete-time, linear time-invariant (LTI) filter G. For LTI discrete-time filters and random processes, it has long been established by Theorem 14 in Shannon's seminal paper that this entropy gain equals (1/2π) ∫_{−π}^{π} log|G(e^{jω})| dω. In this note, we first show that Shannon's Theorem 14 does not hold in general. Then, we prove that, when comparing the input differential entropy to that of the entire (longer) output of G, the entropy gain equals (1/2π) ∫_{−π}^{π} log|G(e^{jω})| dω. We show that the entropy gain between equal-length input and output sequences is upper bounded by (1/2π) ∫_{−π}^{π} log|G(e^{jω})| dω, and that it arises if and only if there exists an output additive disturbance with finite differential entropy (no matter how small) or a random initial state. Unlike what happens with linear maps, the entropy gain in this case depends on the distribution of all the signals involved. We illustrate some of the consequences of these results by presenting their implications in three different problems: conditions for equality in an information inequality of importance in networked control problems; extending to a much broader class of sources the existing results on the rate-distortion function for non-stationary Gaussian sources; and an observation on the capacity of auto-regressive Gaussian channels with feedback.
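The integral that governs the entropy gain can be checked numerically. As a minimal sketch (not from the paper): for a monic FIR filter G(z) = ∏_i (1 − ρ_i z^{−1}), Jensen's formula gives (1/2π) ∫_{−π}^{π} log|G(e^{jω})| dω = Σ_{|ρ_i|>1} log|ρ_i|, so only zeros outside the unit circle (the NMP zeros) contribute, and the gain is zero for a minimum-phase filter. The filter choices below are illustrative assumptions, not examples from the paper.

```python
import numpy as np

def entropy_gain(zeros, n=1 << 16):
    # Approximate (1/2pi) * integral over [0, 2pi) of log|G(e^{jw})| dw
    # for the monic FIR filter G(z) = prod_i (1 - rho_i * z^{-1}),
    # using a uniform Riemann sum (spectrally accurate for smooth
    # periodic integrands, i.e. when no zero lies on the unit circle).
    w = 2.0 * np.pi * np.arange(n) / n
    G = np.ones(n, dtype=complex)
    for rho in zeros:
        G *= 1.0 - rho * np.exp(-1j * w)
    return float(np.mean(np.log(np.abs(G))))

# Only NMP zeros (|rho| > 1) contribute to the entropy gain.
print(entropy_gain([0.5]))       # minimum phase: ~0 (no gain)
print(entropy_gain([2.0]))       # NMP zero at 2: ~log 2 ~ 0.6931
print(entropy_gain([2.0, 0.5]))  # mixed: still ~log 2
```

The Riemann sum over n uniform samples is essentially exact here because ∏_k (1 − ρ e^{−j2πk/n}) = 1 − ρ^n, so the discretization error decays geometrically in n.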
Record ID: pubmed-8392109 (MEDLINE/PubMed, National Center for Biotechnology Information)
Published online 24 July 2021 in Entropy (Basel). © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).