Linear Convergence of First- and Zeroth-Order Primal-Dual Algorithms for Distributed Nonconvex Optimization

2021 
This paper considers the distributed nonconvex optimization problem of minimizing a global cost function, formed as the sum of local cost functions, using only local information exchange. We first propose a distributed first-order primal-dual algorithm. We show that it converges sublinearly to a stationary point if each local cost function is smooth, and linearly to a global optimum under the additional condition that the global cost function satisfies the Polyak-Łojasiewicz (P-Ł) condition. The P-Ł condition is weaker than strong convexity, the standard assumption for proving linear convergence of distributed optimization algorithms, and it does not require the global minimizer to be unique; the set of global minimizers need not even be finite. Motivated by situations where gradients are unavailable, we then propose a distributed zeroth-order algorithm, derived from the first-order algorithm by replacing each gradient with a deterministic gradient estimator, and show that it retains the same convergence properties under the same conditions. The theoretical results are illustrated by numerical simulations.
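For reference, the minimization problem and the Polyak-Łojasiewicz condition referred to above can be written out as follows; the symbols n, p, f_i, μ, and f⋆ are standard notation introduced here rather than taken verbatim from the paper:

```latex
% Problem: n agents cooperatively minimize the average of their local costs
\min_{x \in \mathbb{R}^p} \; f(x) \triangleq \frac{1}{n} \sum_{i=1}^{n} f_i(x)

% Polyak-Lojasiewicz (P-L) condition with constant \mu > 0, where f^\star
% denotes the optimal value: every stationary point is a global minimizer,
% but global minimizers need not be unique.
\frac{1}{2} \,\bigl\| \nabla f(x) \bigr\|^2 \;\ge\; \mu \bigl( f(x) - f^\star \bigr),
\qquad \forall x \in \mathbb{R}^p
```

Strong convexity with modulus μ implies the P-Ł inequality, but not conversely: for example, f(x) = x² + 3 sin²(x) satisfies the P-Ł condition yet is nonconvex.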
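The following is a minimal sketch, in Python/NumPy, of a generic distributed first-order primal-dual iteration of the kind the abstract describes. The update form, the step sizes alpha and beta, and the use of a graph Laplacian L are illustrative assumptions, not the paper's exact algorithm:

```python
import numpy as np

def primal_dual_step(X, V, grads, L, alpha=0.1, beta=0.1):
    """One synchronous primal-dual iteration over all agents.

    X     : (n, p) array, row i is agent i's primal variable x_i
    V     : (n, p) array, row i is agent i's dual variable v_i
    grads : (n, p) array, row i is the local gradient of f_i at x_i
    L     : (n, n) graph Laplacian of the communication network
    """
    # Primal step: local gradient descent plus a consensus term and
    # a dual correction that drives the agents toward agreement.
    X_new = X - alpha * (grads + beta * (L @ X) + V)
    # Dual step: accumulate the disagreement measured by the Laplacian.
    V_new = V + alpha * beta * (L @ X)
    return X_new, V_new

# Example on a fully connected 3-agent network with local costs
# f_i(x) = 0.5 * (x - c_i)^2, whose global minimizer is mean(c) = 1.0.
n, p = 3, 1
c = np.array([[0.0], [1.0], [2.0]])
L = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])
X, V = np.zeros((n, p)), np.zeros((n, p))
for _ in range(500):
    grads = X - c                      # grad f_i(x_i) = x_i - c_i
    X, V = primal_dual_step(X, V, grads, L)
print(X.ravel())                        # all rows approach mean(c) = 1.0
```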
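For the zeroth-order variant, one common deterministic gradient estimator is a coordinate-wise central difference built from 2p function evaluations. The sketch below assumes that construction and a smoothing radius delta, which may differ from the estimator analyzed in the paper:

```python
import numpy as np

def deterministic_grad_estimate(f, x, delta=1e-4):
    """Estimate grad f(x) with 2p function evaluations using central
    differences along the coordinate directions e_1, ..., e_p."""
    p = x.size
    g = np.zeros(p)
    for j in range(p):
        e = np.zeros(p)
        e[j] = 1.0
        # Central difference along coordinate j: O(delta^2) accurate
        # when f is twice continuously differentiable.
        g[j] = (f(x + delta * e) - f(x - delta * e)) / (2.0 * delta)
    return g

# Example: estimate the gradient of a smooth nonconvex test function.
f = lambda x: x[0] ** 2 + 3.0 * np.sin(x[1]) ** 2
x0 = np.array([1.0, 0.5])
print(deterministic_grad_estimate(f, x0))  # ≈ [2.0, 3*sin(1.0)] ≈ [2.0, 2.52]
```

Plugging such an estimate in place of grads in the primal-dual sketch above yields a gradient-free variant of the same iteration.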