Conditional Rényi Divergence Saddlepoint and the Maximization of α-Mutual Information
Rényi-type generalizations of entropy, relative entropy and mutual information have found numerous applications throughout information theory and beyond. While there is consensus that the ways A. Rényi generalized entropy and relative entropy in 1961 are the “right” ones, several candidates have been…
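For orientation, the following are the standard textbook definitions of the quantities named in the abstract; they are assumed here rather than taken from the record itself, and the paper's specific conditional Rényi divergence and saddlepoint results are not reproduced.

```latex
% Standard (assumed) definitions; notation: P, Q distributions on a finite alphabet.
% Rényi divergence of order \alpha:
\[
  D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}
  \log \sum_{x} P(x)^{\alpha}\, Q(x)^{1-\alpha},
  \qquad \alpha \in (0,1) \cup (1,\infty).
\]
% Sibson's \alpha-mutual information, one of the candidate
% order-\alpha generalizations of mutual information referred to above:
\[
  I_\alpha(X;Y) \;=\; \min_{Q_Y} D_\alpha\!\left(P_{XY} \,\middle\|\, P_X \times Q_Y\right)
  \;=\; \frac{\alpha}{\alpha - 1}
  \log \sum_{y} \Bigl( \sum_{x} P_X(x)\, P_{Y|X}(y \mid x)^{\alpha} \Bigr)^{\!1/\alpha}.
\]
```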
| Main authors: | Cai, Changxiao; Verdú, Sergio |
| --- | --- |
| Format: | Online Article Text |
| Language: | English |
| Published: | MDPI, 2019 |
| Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7514300/ http://dx.doi.org/10.3390/e21100969 |
Similar items

- Saddlepoint approximations with applications
  by: Butler, Ronald W
  Published: (2007)
- Conditional Rényi Divergences and Horse Betting
  by: Bleuler, Cédric, et al.
  Published: (2020)
- Rényi Entropy and Rényi Divergence in Product MV-Algebras
  by: Markechová, Dagmar, et al.
  Published: (2018)
- Error Exponents and α-Mutual Information
  by: Verdú, Sergio
  Published: (2021)
- Quantifying Data Dependencies with Rényi Mutual Information and Minimum Spanning Trees
  by: Eggels, Anne, et al.
  Published: (2019)