Robustifying FISTA via the infinity norm of its smooth component’s gradient
2020
FISTA is a well-known and fast procedure for solving optimization problems composed of the sum of two convex functions, i.e. $F = f + g$, where $\nabla f$ is $L$-Lipschitz continuous and $g$ is possibly nonsmooth. FISTA's well-studied theoretical rate of convergence (RoC) is $\mathcal{O}\left( k^{-2} \right)$; in practice, however, it depends on both the extragradient rule and the step size (SS) that estimates $L$. An ill-chosen SS (e.g. a large pre-defined constant) can, at worst, force the objective to diverge; furthermore, some adaptive SS methods (e.g. line search, Cauchy, etc.) can slow convergence or force the objective to exhibit oscillatory behavior. In this work we present a simple add-on feature to robustify FISTA against an ill-chosen SS when $F$ is the $\ell_1$-regularized problem. It is based on modifying some entries of $\nabla f_k$ so that $\left\{ \left\| \nabla f_k \right\|_\infty \right\}$ becomes a non-increasing sequence. Furthermore, tracking and limiting $\left\{ \left\| \nabla f_k \right\|_\infty \right\}$ can be used (i) as an early-warning method to avoid divergence and (ii) to allow larger or even consistently increasing SS sequences. Our computational results particularly target Convolutional Sparse Representations (CSR), where our method indeed boosts FISTA's practical performance.
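To make the idea concrete, below is a minimal sketch of FISTA with such an infinity-norm cap, for the common $\ell_1$-regularized least-squares case $f(x) = \tfrac{1}{2}\|Ax - b\|_2^2$, $g(x) = \lambda\|x\|_1$. The function name `fista_capped`, the least-squares choice of $f$, and the exact clipping rule are illustrative assumptions drawn from our reading of the abstract, not the authors' implementation.

```python
import numpy as np

def fista_capped(A, b, lam, step, n_iter=200):
    """FISTA for min 0.5*||Ax - b||^2 + lam*||x||_1, with a hypothetical
    add-on that clips entries of grad f_k so that {||grad f_k||_inf}
    is non-increasing (a sketch of the abstract's idea, not the paper's code)."""
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    g_inf_prev = np.inf  # running bound on ||grad f_k||_inf
    for _ in range(n_iter):
        grad = A.T @ (A @ y - b)               # gradient of the smooth part f
        cap = min(g_inf_prev, np.max(np.abs(grad)))
        grad = np.clip(grad, -cap, cap)        # enforce a non-increasing sup-norm
        g_inf_prev = cap
        z = y - step * grad                    # forward (gradient) step
        x_new = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # prox of lam*||.||_1
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # extragradient / momentum step
        x, t = x_new, t_new
    return x
```

In this sketch the cap `g_inf_prev` could also serve as the early-warning signal the abstract mentions: if the unclipped $\|\nabla f_k\|_\infty$ repeatedly exceeds it, the step size is likely too large.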