A general double-proximal gradient algorithm for d.c. programming
Main authors:
Format: Online Article (Text)
Language: English
Published: Springer Berlin Heidelberg, 2018
Subjects:
Online access:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6830843/
https://www.ncbi.nlm.nih.gov/pubmed/31762494
http://dx.doi.org/10.1007/s10107-018-1292-2
Summary: The possibilities of exploiting the special structure of d.c. programs, which consist of optimising the difference of convex functions, are currently more or less limited to variants of the DCA proposed by Pham Dinh Tao and Le Thi Hoai An in 1997. These assume that either the convex or the concave part, or both, are evaluated by one of their subgradients. In this paper we propose an algorithm which allows the evaluation of both the concave and the convex part by their proximal points. Additionally, we allow a smooth part, which is evaluated via its gradient. In the spirit of primal-dual splitting algorithms, the concave part might be the composition of a concave function with a linear operator, which are, however, evaluated separately. For this algorithm we show that every cluster point is a solution of the optimisation problem. Furthermore, we show the connection to the Toland dual problem and prove a descent property for the objective function values of a primal-dual formulation of the problem. Convergence of the iterates is shown if this objective function satisfies the Kurdyka–Łojasiewicz property. In the last part, we apply the algorithm to an image processing model.
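The summary above describes the method only in words. As a reading aid, the display below is a hedged sketch of the problem structure and of a generic primal-dual, double-proximal iteration of the kind the summary alludes to. The symbols g, h, Φ, A and the step sizes γ, μ are assumed notation for illustration and are not taken from this record, and the update shown is not necessarily the authors' exact scheme.

% d.c. program with a convex part g (accessed via its proximal mapping),
% a smooth convex part h (accessed via its gradient), and a concave part
% -\Phi(Ax) given as the composition of a convex function \Phi with a
% linear operator A; all symbols are assumed notation.
\[
  \min_{x \in \mathcal{H}} \; g(x) + h(x) - \Phi(Ax)
\]
% Writing \Phi(Ax) = \sup_{y} \{ \langle Ax, y \rangle - \Phi^{*}(y) \}
% gives a primal-dual formulation in which both g and the conjugate
% \Phi^{*} are evaluated by proximal steps and h by a gradient step:
\begin{align*}
  y^{n+1} &= \operatorname{prox}_{\mu \Phi^{*}}\!\bigl(y^{n} + \mu A x^{n}\bigr),\\
  x^{n+1} &= \operatorname{prox}_{\gamma g}\!\bigl(x^{n} - \gamma \nabla h(x^{n}) + \gamma A^{*} y^{n+1}\bigr),
\end{align*}
% with step sizes \gamma, \mu > 0 and A^{*} the adjoint of A.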