Error Exponents and α-Mutual Information
| Main author: | |
|---|---|
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2021 |
| Subjects: | |
| Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7915702/ https://www.ncbi.nlm.nih.gov/pubmed/33562882 http://dx.doi.org/10.3390/e23020199 |
| Summary: | Over the last six decades, the representation of error exponent functions for data transmission through noisy channels at rates below capacity has seen three distinct approaches: (1) through Gallager's $E_0$ functions (with and without cost constraints); (2) a large-deviations form, in terms of conditional relative entropy and mutual information; (3) through the $\alpha$-mutual information and the Augustin–Csiszár mutual information of order $\alpha$ derived from the Rényi divergence. While a fairly complete picture has emerged in the absence of cost constraints, gaps have remained in the interrelationships between the three approaches in the general case of cost-constrained encoding. Furthermore, no systematic approach has been proposed to solve the attendant optimization problems by exploiting the specific structure of the information functions. This paper closes those gaps and proposes a simple method to maximize the Augustin–Csiszár mutual information of order $\alpha$ under cost constraints by means of the maximization of the $\alpha$-mutual information subject to an exponential average constraint. |
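The summary names these information measures without stating them. For orientation, here is a sketch of the standard discrete-alphabet definitions, in our own notation ($P_X$ for the input distribution, $P_{Y|X}$ for the channel), not drawn from the record itself:

```latex
% Gallager's E_0 function:
E_0(\rho, P_X) = -\log \sum_{y} \Bigl( \sum_{x} P_X(x)\, P_{Y|X}(y|x)^{\frac{1}{1+\rho}} \Bigr)^{1+\rho}

% Renyi divergence of order \alpha:
D_\alpha(P \,\|\, Q) = \frac{1}{\alpha - 1} \log \sum_{x} P(x)^{\alpha}\, Q(x)^{1-\alpha}

% Sibson's \alpha-mutual information, with its closed form:
I_\alpha(X;Y) = \min_{Q_Y} D_\alpha\bigl( P_{XY} \,\big\|\, P_X \times Q_Y \bigr)
             = \frac{\alpha}{\alpha-1} \log \sum_{y} \Bigl( \sum_{x} P_X(x)\, P_{Y|X}(y|x)^{\alpha} \Bigr)^{\frac{1}{\alpha}}

% Augustin--Csiszar mutual information of order \alpha:
I_\alpha^{\mathrm{c}}(X;Y) = \min_{Q_Y} \sum_{x} P_X(x)\, D_\alpha\bigl( P_{Y|X=x} \,\big\|\, Q_Y \bigr)
```

Approaches (1) and (3) of the summary are tied together by the identity $E_0(\rho, P_X) = \rho\, I_{1/(1+\rho)}(X;Y)$, which follows by substituting $\alpha = 1/(1+\rho)$ in the closed form above.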
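For readers who want to experiment, the following is a minimal numerical sketch of Sibson's $\alpha$-mutual information and Gallager's $E_0$ for a discrete memoryless channel, checking the identity above. The binary symmetric channel and uniform input distribution are illustrative choices of ours, not taken from the paper:

```python
import numpy as np

# Toy setup (ours, for illustration): binary symmetric channel with
# crossover probability 0.1 and uniform input distribution.
P = np.array([0.5, 0.5])            # input distribution P_X
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])          # channel P_{Y|X}; rows (x) sum to 1

def alpha_mutual_information(P, W, alpha):
    """Sibson's alpha-mutual information I_alpha(X;Y) in nats,
    for alpha > 0, alpha != 1 (alpha -> 1 recovers Shannon's I)."""
    inner = P @ (W ** alpha)        # sum_x P(x) W(y|x)^alpha, for each y
    return (alpha / (alpha - 1.0)) * np.log(np.sum(inner ** (1.0 / alpha)))

def gallager_E0(P, W, rho):
    """Gallager's E_0(rho, P_X) in nats."""
    inner = P @ (W ** (1.0 / (1.0 + rho)))
    return -np.log(np.sum(inner ** (1.0 + rho)))

rho = 0.5
lhs = gallager_E0(P, W, rho)
rhs = rho * alpha_mutual_information(P, W, 1.0 / (1.0 + rho))
print(lhs, rhs)   # the two values agree: E_0(rho, P_X) = rho * I_{1/(1+rho)}
```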