Inexact Proximal-Point Penalty Methods for Constrained Non-Convex Optimization.

2020 
In this paper, an inexact proximal-point penalty method is studied for constrained optimization problems in which the objective function is non-convex and the constraint functions can also be non-convex. The proposed method approximately solves a sequence of subproblems, each formed by adding to the original objective a proximal term and quadratic penalty terms associated with the constraint functions. Under a weak-convexity assumption, each subproblem is made strongly convex and can be solved to a required accuracy by an optimal gradient-based method. The computational complexity of the proposed method is analyzed separately for the cases of convex and non-convex constraints. In both cases, the complexity results are established in terms of the number of proximal gradient steps needed to find an $\varepsilon$-stationary point. When the constraint functions are convex, we show a complexity of $\tilde O(\varepsilon^{-5/2})$ to produce an $\varepsilon$-stationary point under Slater's condition. When the constraint functions are non-convex, the complexity becomes $\tilde O(\varepsilon^{-3})$ if a non-singularity condition holds on the constraints, and $\tilde O(\varepsilon^{-4})$ otherwise, provided a feasible initial solution is available.
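To make the subproblem construction concrete, the display below sketches one plausible form of the $k$-th subproblem for inequality constraints $g_i(x) \le 0$; the symbols $f$, $g_i$, $x^{(k-1)}$, the penalty parameter $\beta_k$, and the proximal parameter $\rho$ are assumed notation for this sketch rather than the paper's own.

$$
x^{(k)} \;\approx\; \operatorname*{arg\,min}_{x}\; f(x)
\;+\; \frac{\beta_k}{2}\sum_{i}\bigl[\max\{g_i(x),\,0\}\bigr]^2
\;+\; \frac{\rho}{2}\,\bigl\|x - x^{(k-1)}\bigr\|^2 .
$$

If $\rho$ is chosen large enough that the quadratic proximal term dominates the non-convexity of the remaining terms, each such subproblem is strongly convex, which is what allows it to be solved to the required accuracy by an optimal gradient-based method, as stated in the abstract.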