Regularized asymptotic descents for nonconvex optimization

2020 
In this paper we propose regularized asymptotic descent (RAD) methods for solving nonconvex optimization problems. The idea is first to apply a regularized iteration and then to use an explicit asymptotic formula to approximate the solution of each regularized minimization. We consider a class of possibly nonconvex, nonsmooth, or even discontinuous objectives extended from strongly convex functions with Lipschitz-continuous gradients, each of which has a unique global minimum and is continuously differentiable at the global minimizer. The main theoretical result shows that the RAD method enjoys global linear convergence with high probability for this class of nonconvex objectives, i.e., the method is not trapped in saddle points, local minima, or even discontinuities. Moreover, the method is derivative-free and its per-iteration cost, i.e., the number of function evaluations, is bounded, so it achieves a complexity bound of $\mathcal{O}(\log\frac{1}{\epsilon})$ for finding a point whose optimality gap is less than $\epsilon>0$.
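The abstract describes the iteration only at a high level. As an illustration of the general idea, and not the paper's actual asymptotic formula, the sketch below approximates each regularized subproblem argmin_y f(y) + ||y - x_k||^2 / (2*delta) by a derivative-free, Gibbs-weighted Gaussian average of sampled function values; the function names, the soft-min weighting, and the parameters delta, lam, and n_samples are assumptions made for this sketch.

    import numpy as np

    def rad_style_step(f, x, delta=0.1, lam=0.1, n_samples=1000, rng=None):
        """One derivative-free step that approximates the regularized subproblem
            argmin_y  f(y) + ||y - x||^2 / (2*delta)
        by a Gibbs-weighted (soft-min) Gaussian average.  This is only an
        illustrative stand-in for the paper's explicit asymptotic formula."""
        rng = np.random.default_rng() if rng is None else rng
        # Sample candidate points around the current iterate (the regularization center).
        y = x + np.sqrt(delta) * rng.standard_normal((n_samples, x.size))
        vals = np.array([f(yi) for yi in y])
        # Soft-min weights on the sampled values; subtract the minimum for numerical stability.
        w = np.exp(-(vals - vals.min()) / lam)
        return (w[:, None] * y).sum(axis=0) / w.sum()

    def rad_style_minimize(f, x0, iters=50, **kwargs):
        """Repeat the regularized step; each iteration uses a fixed number of
        function evaluations, so the total cost grows linearly in `iters`."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            x = rad_style_step(f, x, **kwargs)
        return x

    if __name__ == "__main__":
        # A nonconvex, discontinuous perturbation of a strongly convex quadratic.
        f = lambda x: (np.dot(x, x)
                       + 0.3 * np.sin(25 * np.linalg.norm(x))
                       + 0.2 * (np.linalg.norm(x) > 1.0))
        x_star = rad_style_minimize(f, x0=np.array([2.0, -1.5]))
        print("approximate minimizer:", x_star)

The per-iteration budget of n_samples function evaluations mirrors the abstract's claim of a bounded number of evaluations per step, but the sampling scheme and weighting here are stand-ins rather than the method defined in the paper.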