Exact Partial Information Decompositions for Gaussian Systems Based on Dependency Constraints
Main Authors:
Format: Online Article, Text
Language: English
Published: MDPI, 2018
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7512755/ ; https://www.ncbi.nlm.nih.gov/pubmed/33265331 ; http://dx.doi.org/10.3390/e20040240
Summary: The Partial Information Decomposition, introduced by Williams P. L. et al. (2010), provides a theoretical framework to characterize and quantify the structure of multivariate information sharing. A new method ($I_{\mathrm{dep}}$) has recently been proposed by James R. G. et al. (2017) for computing a two-predictor partial information decomposition over discrete spaces. A lattice of maximum entropy probability models is constructed based on marginal dependency constraints, and the unique information that a particular predictor has about the target is defined as the minimum increase in joint predictor-target mutual information when that particular predictor-target marginal dependency is constrained. Here, we apply the $I_{\mathrm{dep}}$ approach to Gaussian systems, for which the marginally constrained maximum entropy models are Gaussian graphical models. Closed-form solutions for the $I_{\mathrm{dep}}$ PID are derived for both univariate and multivariate Gaussian systems. Numerical and graphical illustrations are provided, together with practical and theoretical comparisons of the $I_{\mathrm{dep}}$ PID with the minimum mutual information partial information decomposition ($I_{\mathrm{mmi}}$), which was discussed by Barrett A. B. (2015). The results obtained using $I_{\mathrm{dep}}$ appear to be more intuitive than those given by other methods, such as $I_{\mathrm{mmi}}$, in which the redundant and unique information components are constrained to depend only on the predictor-target marginal distributions. In particular, it is proved that the $I_{\mathrm{mmi}}$ method generally produces larger estimates of redundancy and synergy than does the $I_{\mathrm{dep}}$ method. In discussion of the practical examples, the PIDs are complemented by the use of tests of deviance for the comparison of Gaussian graphical models.
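The dependency-constraint idea in the summary can be made concrete in the trivariate Gaussian case. The following is a minimal Python sketch of a single edge of the constraint lattice (the helper name gaussian_mi and the example correlations are illustrative, not from the paper): the maximum entropy Gaussian matching only the (X1, X2) and (X2, Y) marginal dependencies renders X1 conditionally independent of Y given X2, and adding back the X1-Y marginal constraint increases the joint predictor-target information I(X1, X2; Y). Per the summary, the $I_{\mathrm{dep}}$ unique information is the minimum of such increases over the lattice; this sketch computes only one such increase, not the full decomposition.

```python
import numpy as np

def gaussian_mi(cov, xs, ys):
    """I(X;Y) in nats between jointly Gaussian blocks indexed by xs and ys."""
    det = lambda idx: np.linalg.det(cov[np.ix_(idx, idx)])
    return 0.5 * np.log(det(xs) * det(ys) / det(xs + ys))

# Joint covariance of (X1, X2, Y): unit variances, pairwise correlations.
r12, r1y, r2y = 0.2, 0.5, 0.4
full = np.array([[1.0, r12, r1y],
                 [r12, 1.0, r2y],
                 [r1y, r2y, 1.0]])

# Maximum entropy Gaussian matching only the (X1,X2) and (X2,Y) marginal
# dependencies: the X1-Y entry of the precision matrix is zero, so X1 is
# conditionally independent of Y given X2 and the implied X1-Y
# correlation is r12 * r2y.
constrained = full.copy()
constrained[0, 2] = constrained[2, 0] = r12 * r2y

# Increase in joint predictor-target information when the X1-Y marginal
# dependency constraint is added back (one edge of the constraint lattice).
delta = gaussian_mi(full, [0, 1], [2]) - gaussian_mi(constrained, [0, 1], [2])
print(f"information gained from the X1-Y constraint: {delta:.4f} nats")
```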
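For comparison, the $I_{\mathrm{mmi}}$ baseline of Barrett (2015) mentioned in the summary has a simple closed form for Gaussians: redundancy is the smaller of the two predictor-target mutual informations, so at least one unique component is always exactly zero. A minimal sketch under the same illustrative example covariance (function names are again mine, not the paper's):

```python
import numpy as np

def gaussian_mi(cov, xs, ys):
    """I(X;Y) in nats between jointly Gaussian blocks indexed by xs and ys."""
    det = lambda idx: np.linalg.det(cov[np.ix_(idx, idx)])
    return 0.5 * np.log(det(xs) * det(ys) / det(xs + ys))

def mmi_pid(cov, x1, x2, y):
    """Two-predictor minimum mutual information PID (Barrett, 2015)."""
    i1, i2 = gaussian_mi(cov, x1, y), gaussian_mi(cov, x2, y)
    i12 = gaussian_mi(cov, x1 + x2, y)
    red = min(i1, i2)               # redundancy: the smaller predictor-target MI
    return {"redundancy": red,
            "unique_1": i1 - red,   # one unique term is always zero
            "unique_2": i2 - red,
            "synergy": i12 - max(i1, i2)}  # closes the decomposition

cov = np.array([[1.0, 0.2, 0.5],
                [0.2, 1.0, 0.4],
                [0.5, 0.4, 1.0]])
print(mmi_pid(cov, [0], [1], [2]))
```

Because the $I_{\mathrm{mmi}}$ components depend only on the three mutual information values, they are insensitive to how the predictor-target marginal dependencies interact, which is the feature the summary contrasts with the dependency-constrained $I_{\mathrm{dep}}$ approach.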