Factor analysis models via I-divergence optimization
Main authors: |
Format: | Online Article Text |
Language: | English |
Published: | Springer US, 2015 |
Subjects: |
Online access: | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4978782/ https://www.ncbi.nlm.nih.gov/pubmed/26608962 http://dx.doi.org/10.1007/s11336-015-9486-5 |
Summary: | Given a positive definite covariance matrix Σ of dimension n, we approximate it with a covariance of the form HHᵀ + D, where H has a prescribed number k < n of columns and D is diagonal. The quality of the approximation is gauged by the I-divergence between the zero mean normal laws with covariances Σ and HHᵀ + D, respectively. To determine a pair (H, D) that minimizes the I-divergence we construct, by lifting the minimization into a larger space, an iterative alternating minimization algorithm (AML) à la Csiszár–Tusnády. As it turns out, the proper choice of the enlarged space is crucial for optimization. The convergence of the algorithm is studied, with special attention given to the case where D is singular. The theoretical properties of the AML are compared to those of the popular EM algorithm for exploratory factor analysis. Inspired by the ECME (a Newton–Raphson variation on EM), we develop a similar variant of AML, called ACML, and in a few numerical experiments, we compare the performances of the four algorithms. |
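The abstract contrasts AML with the classical EM algorithm for exploratory factor analysis. The following is a minimal NumPy sketch, under stated assumptions, of the two ingredients the abstract refers to: the I-divergence between zero-mean normals with covariances Σ and HHᵀ + D, and the standard EM update for the factor model (the textbook EM iteration, not the paper's AML; the function and variable names are illustrative).

```python
import numpy as np

def i_divergence(sigma0, sigma):
    """I-divergence D(N(0, sigma0) || N(0, sigma)) between zero-mean normals:
    0.5 * (tr(sigma^{-1} sigma0) - log det(sigma^{-1} sigma0) - n)."""
    n = sigma0.shape[0]
    sinv_s0 = np.linalg.solve(sigma, sigma0)
    _, logdet = np.linalg.slogdet(sinv_s0)
    return 0.5 * (np.trace(sinv_s0) - logdet - n)

def em_step(sigma0, H, d):
    """One classical EM update for the factor model sigma ≈ H H^T + diag(d),
    with sigma0 playing the role of the (population) covariance to be fitted."""
    sigma = H @ H.T + np.diag(d)
    beta = np.linalg.solve(sigma, H).T                    # H^T sigma^{-1}
    ezx = beta @ sigma0                                   # E[z x^T]
    ezz = np.eye(H.shape[1]) - beta @ H + ezx @ beta.T    # E[z z^T]
    H_new = np.linalg.solve(ezz, ezx).T                   # sigma0 beta^T ezz^{-1}
    d_new = np.diag(sigma0 - H_new @ ezx)                 # residual variances
    return H_new, d_new

# Synthetic target covariance that exactly admits a rank-k factor structure.
rng = np.random.default_rng(0)
n, k = 6, 2
H_true = rng.standard_normal((n, k))
sigma0 = H_true @ H_true.T + np.diag(rng.uniform(0.5, 1.5, n))

# Iterate EM from a rough starting point; the I-divergence is nonincreasing.
H, d = rng.standard_normal((n, k)), np.ones(n)
divs = []
for _ in range(50):
    divs.append(i_divergence(sigma0, H @ H.T + np.diag(d)))
    H, d = em_step(sigma0, H, d)
```

Each EM step can only decrease the I-divergence to the fixed target Σ, which is the monotonicity property against which the abstract compares AML.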