Proximal or gradient steps for cocoercive operators

2021 
This paper provides a theoretical and numerical comparison of classical first-order splitting methods for solving smooth convex optimization problems and cocoercive equations. From a theoretical point of view, we compare the convergence rates of the gradient descent, forward-backward, Peaceman-Rachford, and Douglas-Rachford algorithms for minimizing the sum of two smooth convex functions when one of them is strongly convex. A similar comparison is given in the more general cocoercive setting in the presence of strong monotonicity, and we observe that, for some algorithms, the convergence rates in optimization are strictly better than the corresponding rates for cocoercive equations. In several instances we obtain rates that improve on those in the literature by exploiting the structure of our problems. From a numerical point of view, we verify our theoretical results by implementing and comparing the above algorithms on well-established signal and image inverse problems involving sparsity. We replace the widely used $\ell_1$ norm by the Huber loss and observe that fully proximal-based strategies have numerical and theoretical advantages over methods using gradient steps. In particular, Peaceman-Rachford is the best-performing algorithm in our examples.
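
The sketch below is a minimal illustration, not the paper's experiments: it compares a forward-backward (gradient step on the smooth term, proximal step on the Huber term) iteration with a fully proximal Peaceman-Rachford iteration on a toy 1-D denoising problem with a Huber penalty in place of the $\ell_1$ norm. The signal, the regularization and step-size parameters, and the problem itself are assumptions chosen for illustration; only the general form of the iterations and the Huber proximal operator are standard.

```python
# Toy problem (illustrative, not from the paper): minimize f(x) + g(x) with
#   f(x) = 0.5 * ||x - b||^2              (smooth and strongly convex, L = mu = 1)
#   g(x) = lam * sum_i H_delta(x_i)       (Huber loss replacing the l1 norm)
import numpy as np

def prox_huber(v, gamma, delta):
    """Prox of gamma * H_delta, with H_delta(t) = t^2/(2*delta) if |t| <= delta, else |t| - delta/2."""
    return np.sign(v) * np.maximum(np.abs(v) - gamma, np.abs(v) * delta / (delta + gamma))

def prox_quadratic(v, gamma, b):
    """Prox of gamma * 0.5 * ||x - b||^2."""
    return (v + gamma * b) / (1.0 + gamma)

rng = np.random.default_rng(0)
b = np.concatenate([np.zeros(50), np.ones(20), np.zeros(50)]) + 0.1 * rng.standard_normal(120)
lam, delta, gamma = 0.5, 0.05, 1.0  # illustrative parameters, gamma in (0, 2/L)

# Forward-backward: x <- prox_{gamma*g}(x - gamma * grad f(x)), with grad f(x) = x - b.
x = np.zeros_like(b)
for _ in range(200):
    x = prox_huber(x - gamma * (x - b), gamma * lam, delta)

# Peaceman-Rachford: z <- R_g(R_f(z)), reflections R = 2*prox - I (fully proximal).
z = np.zeros_like(b)
for _ in range(200):
    xf = prox_quadratic(z, gamma, b)
    z_ref = 2.0 * xf - z
    xg = prox_huber(z_ref, gamma * lam, delta)
    z = 2.0 * xg - z_ref
x_pr = prox_quadratic(z, gamma, b)

print(np.max(np.abs(x - x_pr)))  # both iterations approach the same unique minimizer
```

Since f is strongly convex, the minimizer is unique and both sequences converge to it; the reflection in the Peaceman-Rachford step relies only on proximal evaluations, which is the "fully proximal" strategy contrasted with gradient-based steps in the abstract.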