
An Efficient Orthonormalization-Free Approach for Sparse Dictionary Learning and Dual Principal Component Pursuit


Bibliographic Details
Main Authors: Hu, Xiaoyin; Liu, Xin
Format: Online Article Text
Language: English
Published: MDPI 2020
Subjects:
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7308875/
https://www.ncbi.nlm.nih.gov/pubmed/32471176
http://dx.doi.org/10.3390/s20113041
Description
Summary: Sparse dictionary learning (SDL) is a classic representation learning method that has been widely used in data analysis. Recently, $\ell_m$-norm ($m \geq 3$, $m \in \mathbb{N}$) maximization has been proposed to solve SDL, which recasts the problem as an optimization problem with orthogonality constraints. In this paper, we first propose an $\ell_m$-norm maximization model for solving dual principal component pursuit (DPCP), based on the similarities between DPCP and SDL. We then propose a smooth unconstrained exact penalty model and show its equivalence with the $\ell_m$-norm maximization model. Building on this penalty model, we develop an efficient first-order algorithm (PenNMF) and establish its global convergence. Extensive experiments illustrate the high efficiency of PenNMF compared with other state-of-the-art algorithms for $\ell_m$-norm maximization with orthogonality constraints.
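
For orientation only (not part of the record): the summary describes maximizing a high-order norm over matrices with orthonormal columns and then trading the orthogonality constraint for a smooth penalty. A schematic of such a pair of models is sketched below, under assumed notation in which $A$ collects the data, $X \in \mathbb{R}^{n \times p}$ is the variable with orthonormal columns, and $\beta > 0$ is a penalty parameter; the paper's exact objective and exact penalty term may differ from this generic quadratic-penalty illustration.

% Schematic $\ell_m$-norm maximization with orthogonality constraints (assumed notation).
\[
  \max_{X \in \mathbb{R}^{n \times p}} \; \bigl\| A^{\top} X \bigr\|_m^m
  \quad \text{s.t.} \quad X^{\top} X = I_p .
\]
% A generic orthonormalization-free surrogate: the constraint is replaced by a
% smooth quadratic penalty; the paper's exact penalty model may take another form.
\[
  \max_{X \in \mathbb{R}^{n \times p}} \; \bigl\| A^{\top} X \bigr\|_m^m
  \;-\; \frac{\beta}{2} \bigl\| X^{\top} X - I_p \bigr\|_F^2 .
\]

A penalized model of this kind can be tackled with plain first-order updates, with no orthonormalization step (e.g., QR or SVD retraction) inside the loop, which is the "orthonormalization-free" property the title refers to.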