Gradient-Consensus Method for Distributed Optimization in Directed Multi-Agent Networks

2020 
In this article, a distributed optimization problem for minimizing a sum, $\sum_{i=1}^n f_i$, of convex objective functions, $f_i$, is addressed. Each function $f_i$ is a function of $n$ variables and is private to agent $i$, defining that agent's objective. Agents can only communicate locally with neighbors defined by a communication network topology. The $f_i$'s are assumed to be Lipschitz-differentiable convex functions. To solve this optimization problem, we develop a novel distributed algorithm, which we term the gradient-consensus method. The gradient-consensus scheme uses a finite-time terminated consensus protocol, called $\rho$-consensus, which ensures that the agents' local estimates are $\rho$-close to one another at every iteration. The parameter $\rho$ is a fixed constant that can be chosen independently of the network size or topology. It is shown that the estimate of the optimal solution at any local agent $i$ converges geometrically to within $O(\rho)$ of the optimal solution, where $\rho$ can be chosen arbitrarily small.
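The structure described above, alternating a local gradient step with a consensus round that terminates once all estimates are $\rho$-close, can be sketched as follows. This is an illustrative simulation only, not the paper's exact algorithm: the quadratic objectives, the undirected ring topology, the doubly stochastic averaging weights, and the step size are all assumptions made for the sake of a runnable example (the paper treats directed networks).

```python
import numpy as np

# Illustrative sketch of a gradient-consensus iteration (assumptions:
# scalar decision variable, undirected ring network, quadratic objectives
# f_i(x) = 0.5 * (x - a_i)^2, so the minimizer of the sum is mean(a)).

def rho_consensus(x, W, rho):
    """Repeated weighted averaging until all estimates are rho-close.

    This models a finite-time terminated consensus round: agents keep
    mixing with neighbors and stop once the spread drops below rho.
    """
    while x.max() - x.min() > rho:
        x = W @ x
    return x

def gradient_consensus(a, rho=1e-3, alpha=0.1, iters=200):
    n = len(a)
    # Doubly stochastic averaging matrix for a ring (assumed topology).
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    x = np.array(a, dtype=float)       # each agent starts at its own a_i
    for _ in range(iters):
        grad = x - a                   # grad of f_i at x_i is x_i - a_i
        # Local gradient step, then a rho-consensus round.
        x = rho_consensus(x - alpha * grad, W, rho)
    return x

a = np.array([1.0, 2.0, 3.0, 4.0])
x = gradient_consensus(a)
# Estimates end up rho-close to each other and near the optimum mean(a) = 2.5.
```

After every outer iteration the agents' estimates differ by at most $\rho$, mirroring the paper's guarantee that each local estimate converges to within $O(\rho)$ of the optimal solution.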