Distributed Linearized Alternating Direction Method of Multipliers for Composite Convex Consensus Optimization

2018 
Given an undirected graph $\mathcal{G}=(\mathcal{N},\mathcal{E})$ of agents $\mathcal{N}=\lbrace 1,\ldots,N\rbrace$ connected by edges in $\mathcal{E}$, we study how to compute an optimal decision on which there is consensus among agents and which minimizes the sum of agent-specific private convex composite functions $\lbrace \Phi_i\rbrace_{i\in \mathcal{N}}$, where $\Phi_i \triangleq \xi_i + f_i$ is private to agent $i$. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In each iteration, every agent $i$ computes the prox map of $\xi_i$ and the gradient of $f_i$, followed by local communication with neighboring agents. We also study its stochastic gradient variant, SDPGA, in which each agent $i$ can access only noisy estimates of $\nabla f_i$. This computational model abstracts a number of applications in distributed sensing, machine learning, and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for DPGA and SDPGA with rates $\mathcal{O}(1/t)$ and $\mathcal{O}(1/\sqrt{t})$, respectively.
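The abstract does not spell out the exact DPGA update rule, so the sketch below is only a hedged illustration of the generic distributed proximal-gradient consensus step it describes: each agent mixes iterates with its neighbors, takes a gradient step on $f_i$, and applies the prox map of $\xi_i$. The concrete choices here are assumptions, not taken from the paper: $\xi_i$ as an $\ell_1$ penalty (prox is soft-thresholding), $f_i$ as a local least-squares loss, and `W` as a doubly stochastic mixing matrix whose sparsity pattern matches the edge set $\mathcal{E}$.

```python
import numpy as np

def soft_threshold(v, tau):
    """Prox map of tau * ||.||_1, applied elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def distributed_prox_grad(A, b, W, lam=0.1, gamma=0.01, iters=500):
    """Generic distributed proximal-gradient consensus iteration.

    Assumed (illustrative) setup: agent i minimizes
    Phi_i(x) = lam * ||x||_1 + 0.5 * ||A_i x - b_i||^2,
    so the prox of xi_i is soft-thresholding and
    grad f_i(x) = A_i^T (A_i x - b_i).
    This is a sketch of the algorithm template, not the paper's exact DPGA update.
    """
    N, d = len(A), A[0].shape[1]
    X = np.zeros((N, d))                          # row i: agent i's local copy of x
    for _ in range(iters):
        mixed = W @ X                             # local communication with neighbors
        for i in range(N):
            grad = A[i].T @ (A[i] @ X[i] - b[i])  # gradient of f_i at local iterate
            X[i] = soft_threshold(mixed[i] - gamma * grad, gamma * lam)  # prox of xi_i
    return X

# Toy example: 3 agents on a path graph 1-2-3 with a doubly stochastic W.
rng = np.random.default_rng(0)
A = [rng.standard_normal((5, 4)) for _ in range(3)]
b = [rng.standard_normal(5) for _ in range(3)]
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
X = distributed_prox_grad(A, b, W)
print("max consensus violation:", np.abs(X - X.mean(axis=0)).max())
```

Printing the maximum deviation of the local copies from their average mirrors the consensus-violation quantity whose ergodic $\mathcal{O}(1/t)$ decay the paper establishes for DPGA.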