Nonlinear independent component analysis for principled disentanglement in unsupervised deep learning


Bibliographic Details
Main Authors: Hyvärinen, Aapo; Khemakhem, Ilyes; Morioka, Hiroshi
Format: Online Article Text
Language: English
Published: Elsevier 2023
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10591132/
https://www.ncbi.nlm.nih.gov/pubmed/37876900
http://dx.doi.org/10.1016/j.patter.2023.100844
Description
Summary: A central problem in unsupervised deep learning is how to find useful representations of high-dimensional data, sometimes called “disentanglement.” Most approaches are heuristic and lack a proper theoretical foundation. In linear representation learning, independent component analysis (ICA) has been successful in many application areas, and it is principled, i.e., based on a well-defined probabilistic model. However, extending ICA to the nonlinear case has been problematic because of the lack of identifiability, i.e., uniqueness of the representation. Recently, nonlinear extensions that utilize temporal structure or some auxiliary information have been proposed. Such models are in fact identifiable, and consequently, an increasing number of algorithms have been developed. In particular, some self-supervised algorithms can be shown to estimate nonlinear ICA, even though they were initially proposed from heuristic perspectives. This paper reviews the state of the art of nonlinear ICA theory and algorithms.
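To make the linear baseline concrete: the abstract contrasts identifiable linear ICA with its (generally non-identifiable) nonlinear extension. Below is a minimal, self-contained sketch of linear ICA via a FastICA-style fixed-point iteration in NumPy. The toy sources, mixing matrix, and iteration count are illustrative assumptions, not taken from the paper; it demonstrates only the linear case, where the sources are recoverable up to permutation and sign.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy independent non-Gaussian sources (illustrative, not from the paper):
# a sine wave and uniform noise, each of length n.
n = 2000
t = np.linspace(0, 8, n)
S = np.vstack([np.sin(2 * np.pi * t), rng.uniform(-1, 1, n)])  # (2, n)

A = np.array([[1.0, 0.5], [0.5, 1.0]])  # unknown mixing matrix
X = A @ S                               # observed mixtures

# Whiten the observations: zero mean, identity covariance.
Xc = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(Xc @ Xc.T / n)
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

# FastICA fixed-point iteration with tanh nonlinearity and
# symmetric decorrelation of the unmixing matrix W.
W = rng.standard_normal((2, 2))
for _ in range(200):
    WZ = W @ Z
    g, g_prime = np.tanh(WZ), 1 - np.tanh(WZ) ** 2
    W = g @ Z.T / n - np.diag(g_prime.mean(axis=1)) @ W
    # Symmetric orthogonalization: W <- (W W^T)^{-1/2} W
    u, _, vt = np.linalg.svd(W)
    W = u @ vt

S_hat = W @ Z  # recovered sources, up to permutation and sign
```

In the nonlinear setting surveyed by the paper, replacing `A @ S` with an arbitrary invertible nonlinear mixing breaks this recovery guarantee unless extra structure (e.g., temporal dependencies or auxiliary variables) is exploited.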