Exponential Convergence for Distributed Smooth Optimization Under the Restricted Secant Inequality Condition

2020 
This paper considers the distributed smooth optimization problem in which the objective is to minimize a global cost function formed as the sum of local smooth cost functions, using only local information exchange. The standard assumption for proving exponential/linear convergence of first-order methods is strong convexity of the cost functions, which does not hold in many practical applications. In this paper, we first show that the continuous-time distributed primal-dual gradient algorithm converges exponentially to a global minimizer under the assumption that the global cost function satisfies the restricted secant inequality condition. This condition is weaker than strong convexity since it requires neither convexity nor uniqueness of the global minimizers. We then show that the discrete-time distributed primal-dual algorithm, constructed via Euler's approximation method, converges linearly to a global minimizer under the same condition. The theoretical results are illustrated by numerical simulations.
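The abstract does not state the restricted secant inequality explicitly; a common formulation from the literature (the paper's exact constants and notation may differ) is the following:

```latex
% Restricted secant inequality (RSI) for a differentiable f with a
% nonempty set of global minimizers X*; mu > 0 is the RSI constant.
% Standard formulation from the literature, not necessarily the
% paper's exact statement.
\[
  \langle \nabla f(x),\; x - \operatorname{proj}_{\mathcal{X}^*}(x) \rangle
  \;\ge\; \mu \,\bigl\| x - \operatorname{proj}_{\mathcal{X}^*}(x) \bigr\|^2
  \qquad \text{for all } x,
\]
```

where proj denotes Euclidean projection onto the minimizer set. Strongly convex functions satisfy this inequality, but it also holds for some nonconvex functions whose minimizers are not unique, which is why it is a weaker assumption.

To illustrate the flavor of the discrete-time algorithm, below is a minimal sketch of a Euler-discretized distributed primal-dual gradient update on a toy consensus problem. The gains alpha and beta, the step size eta, the ring topology, and the quadratic local costs are all assumptions for this example, not taken from the paper.

```python
import numpy as np

# Minimal sketch of a Euler-discretized distributed primal-dual gradient
# method for minimizing sum_i f_i(x) over a connected graph. Gains, step
# size, topology, and local costs are illustrative assumptions; the
# paper's exact algorithm and parameter conditions may differ.

n = 5
rng = np.random.default_rng(0)
b = rng.standard_normal(n)           # data for local costs f_i(x) = 0.5*(x - b_i)**2

# Laplacian of a ring graph over the n agents (assumed topology).
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

def grad(x):
    # Stacked local gradients: grad f_i evaluated at agent i's estimate x_i.
    return x - b

alpha, beta, eta = 1.0, 1.0, 0.1     # primal/consensus gains and Euler step (assumed)
x = np.zeros(n)                      # agents' primal estimates
v = np.zeros(n)                      # dual variables enforcing consensus

for _ in range(500):
    # Euler step of the primal-dual gradient flow:
    #   x_dot = -alpha*grad(x) - beta*L@x - L@v,   v_dot = L@x
    x, v = x - eta * (alpha * grad(x) + beta * L @ x + L @ v), v + eta * (L @ x)

print("agents' estimates:", x)       # each entry approaches the global minimizer
print("global minimizer: ", b.mean())
```

At a fixed point, L@x = 0 forces consensus, and summing the primal update across agents shows the common value must minimize the global cost; in this toy run the agents' estimates approach mean(b) at a linear (geometric) rate.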