Gradient-Consensus Method for Distributed Optimization in Directed Networks

2019 
In this article, we focus on a multi-agent optimization problem of minimizing a sum $f = \sum_{i=1}^n f_i$ of convex objective functions over a network of $n$ agents, where each $f_i$ is available only locally to agent $i$. The agents are connected in a directed topology, and an agent can communicate only with the neighbors to which it is connected by a directed edge in the network-connectivity structure. We propose a novel gradient-based distributed method to solve this multi-agent convex optimization problem. In this method, each agent maintains an estimate of the optimal solution. During each iteration of the proposed algorithm, the agents combine locally available gradient information with a finite-time approximate consensus protocol to move towards the optimal solution (hence the name "gradient-consensus" method). We show that the proposed algorithm converges linearly if the aggregate function $f$ is strongly convex and smooth. We also establish that, under the relaxed assumption that the $f_i$'s are merely convex and smooth, the proposed method retains a linear rate of convergence until reaching an $O(\eta)$ neighborhood of the optimal objective function value, for a given $\eta > 0$. To the best of our knowledge, the proposed method achieves a convergence rate better than the best known rate estimates in the literature under these assumptions. Further, we numerically evaluate the proposed algorithm on three distributed optimization problems and demonstrate its improved efficacy over existing distributed optimization schemes.
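The iteration described above can be sketched in a few lines of NumPy. This is a minimal, hypothetical illustration only: the local objectives $f_i(x) = \frac{1}{2}(x - a_i)^2$, the agent count `n`, the step size `alpha`, and the number of consensus rounds `K` are all assumptions for the example, and the paper's finite-time approximate consensus protocol over a general directed graph is simplified here to `K` rounds of averaging with a doubly stochastic matrix on a directed ring.

```python
import numpy as np

# Hypothetical toy problem: n agents, each holding a local quadratic
# f_i(x) = 0.5 * (x - a_i)^2, so the minimizer of f = sum_i f_i is mean(a).
n = 5
rng = np.random.default_rng(0)
a = rng.normal(size=n)

# Directed ring with self-loops. This particular mixing matrix happens to
# be doubly stochastic, which sidesteps the weight imbalance a general
# directed graph introduces (the role of the paper's finite-time
# approximate consensus protocol).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.5

x = np.zeros(n)   # each agent's current estimate of the optimizer
alpha = 0.1       # step size (illustrative choice)
K = 10            # consensus rounds per iteration (approximate consensus)

for _ in range(200):
    grad = x - a             # each agent's local gradient of f_i
    x = x - alpha * grad     # local gradient step
    for _ in range(K):       # approximate consensus among neighbors
        x = W @ x

# All agents' estimates end up in a small neighborhood of mean(a).
print(np.abs(x - a.mean()).max())
```

Because only finitely many consensus rounds run per iteration, the estimates agree only approximately; this mirrors the $O(\eta)$ neighborhood guarantee stated in the abstract rather than exact consensus.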