Linear Convergence for Distributed Optimization Without Strong Convexity

2020 
This paper considers the distributed optimization problem of minimizing a global cost function formed by the sum of local smooth cost functions, using only local information exchange. Various distributed algorithms have been proposed for this problem, and a standard condition for proving their linear convergence is strong convexity of the cost functions. However, strong convexity may not hold in many practical applications, such as least squares and logistic regression. In this paper, we propose a distributed primal-dual gradient descent algorithm and establish its linear convergence under the condition that the global cost function satisfies the Polyak–Łojasiewicz (PL) condition. This condition is weaker than strong convexity, and under it the global minimizer is not necessarily unique. The theoretical result is illustrated by numerical simulations.
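For reference, the Polyak–Łojasiewicz condition is standardly stated as the following inequality on the global cost f; the symbol f* for the minimum value is introduced here for illustration. Any function that is strongly convex with parameter mu satisfies it with the same mu, but the converse fails: for example, least squares with a rank-deficient design matrix satisfies the PL condition while having a whole affine subspace of minimizers.

```latex
% Polyak--Lojasiewicz (PL) condition on the global cost f:
% there exists a constant \mu > 0 such that, for all x,
\exists\, \mu > 0:\quad
\frac{1}{2}\,\bigl\lVert \nabla f(x) \bigr\rVert^{2}
\;\ge\; \mu \bigl( f(x) - f^{*} \bigr)
\quad \text{for all } x,
\qquad f^{*} = \min_{x} f(x).
```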
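The paper's exact update rule is not reproduced in this abstract, so the following is a minimal sketch of one common form of distributed primal-dual gradient descent over an undirected network: each agent takes a local gradient step combined with a Laplacian-based consensus term, while a dual variable accumulates the disagreement between neighbors. The function name, step size eta, gains alpha and beta, and the ring-graph least-squares example are all illustrative assumptions, not taken from the paper.

```python
import numpy as np

def distributed_primal_dual(grads, L, x0, eta=0.02, alpha=1.0, beta=1.0, iters=2000):
    """Sketch of a distributed primal-dual gradient method (assumed form).

    grads : list of callables; grads[i](x_i) is agent i's local gradient.
    L     : (n, n) graph Laplacian of the communication network.
    x0    : (n, d) initial local iterates, one row per agent.
    """
    n, d = x0.shape
    x = x0.copy()
    v = np.zeros((n, d))  # dual variables enforcing consensus
    for _ in range(iters):
        g = np.stack([grads[i](x[i]) for i in range(n)])
        # Primal step: local gradient + consensus penalty + dual feedback.
        x_new = x - eta * (g + alpha * (L @ x) + beta * v)
        # Dual ascent step, driven by the current disagreement L @ x.
        v = v + eta * beta * (L @ x)
        x = x_new
    return x

if __name__ == "__main__":
    # Hypothetical example: distributed least squares with a rank-deficient
    # design, so the global cost satisfies the PL condition but is not
    # strongly convex (the minimizer set is an affine subspace).
    n, d = 4, 3
    rng = np.random.default_rng(0)
    A = [rng.standard_normal((5, d)) for _ in range(n)]
    for Ai in A:
        Ai[:, -1] = 0.0  # zero out one direction -> no strong convexity
    b = [Ai @ np.ones(d) for Ai in A]
    grads = [lambda x, Ai=Ai, bi=bi: Ai.T @ (Ai @ x - bi) for Ai, bi in zip(A, b)]
    # Laplacian of a ring graph on 4 agents.
    Lap = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    Lap[0, n - 1] = Lap[n - 1, 0] = -1.0
    x = distributed_primal_dual(grads, Lap, np.zeros((n, d)))
    print(x)  # rows agree up to consensus error and minimize the global cost
```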