Distributed optimization over directed graphs with row stochasticity and constraint regularity

2019 
Abstract

This paper deals with an optimization problem over a network of agents, where the cost function is the sum of the individual (possibly nonsmooth) objectives of the agents and the constraint set is the intersection of local constraints. Most existing methods that employ subgradient and consensus steps to solve this problem require the weight matrix associated with the network to be column stochastic or even doubly stochastic, conditions that can be hard to arrange in directed networks. Moreover, known convergence analyses for distributed subgradient methods vary depending on whether the problem is unconstrained or constrained, and on whether the local constraint sets are identical or nonidentical and compact. The main goals of this paper are: (i) removing the common column stochasticity requirement; (ii) relaxing the compactness assumption; and (iii) providing a unified convergence analysis. Specifically, assuming the communication graph to be fixed and strongly connected and the weight matrix to be only row stochastic, a distributed projected subgradient algorithm and a variation of this algorithm are presented to solve the problem for cost functions that are convex and Lipschitz continuous. The key component of both algorithms is to rescale the subgradient of each agent by an estimate of its corresponding entry in the normalized left Perron eigenvector of the weight matrix. These estimates are obtained locally from an augmented consensus iteration that uses the same row stochastic weight matrix and requires very limited global information about the network. Moreover, based on a regularity assumption on the local constraint sets, a unified analysis is given that applies to both unconstrained and constrained problems, without assuming compactness of the constraint sets or the existence of an interior point in their intersection. Finally, we establish an upper bound on the absolute objective error evaluated at each agent’s available local estimate under a nonincreasing step size sequence; this bound allows us to analyze the convergence rate of both algorithms.
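The update described in the abstract can be made concrete with a short sketch. The code below is a minimal illustration under stated assumptions, not the paper's exact recursion: the function name dps_row_stochastic, the argument names, and the choice to evaluate each subgradient at the agent's current iterate are all illustrative (some variants instead evaluate the subgradient at the mixed point). Each agent i maintains its decision estimate together with an auxiliary vector initialized to the i-th canonical basis vector; mixing that vector with the same row stochastic matrix drives its i-th entry toward the i-th entry of the normalized left Perron eigenvector, which is then used to rescale the subgradient before the local projection.

```python
import numpy as np

def dps_row_stochastic(A, subgrads, projections, x_init, step, num_iters):
    """Hypothetical sketch of a distributed projected subgradient method
    over a directed graph with a (merely) row stochastic weight matrix A.

    A           -- (n, n) row stochastic weights; A[i, j] > 0 iff agent i
                   receives from agent j, with A[i, i] > 0.
    subgrads    -- list of n callables; subgrads[i](x) returns a
                   subgradient of the local objective f_i at x.
    projections -- list of n callables; projections[i](x) projects x
                   onto the local constraint set X_i.
    x_init      -- (n, d) array of initial local estimates.
    step        -- callable k -> alpha_k, a nonincreasing step size.
    """
    n, _ = x_init.shape
    x = x_init.copy()
    # Augmented consensus state: row i is agent i's estimate vector,
    # started at the i-th basis vector, so the only global information
    # each agent needs is the network size n and its own identifier.
    y = np.eye(n)
    for k in range(num_iters):
        x_mix = A @ x   # consensus step on the decision variables
        y = A @ y       # same row stochastic mixing; y[i, i] -> pi_i
        alpha = step(k)
        x_new = np.empty_like(x)
        for i in range(n):
            g = subgrads[i](x[i])
            # Dividing by the Perron eigenvector estimate y[i, i] corrects
            # the bias that a merely row stochastic matrix would introduce.
            x_new[i] = projections[i](x_mix[i] - alpha * g / y[i, i])
        x = x_new
    return x

# Toy usage on a 3-agent directed ring: f_i(x) = |x - c_i|, X_i = [-10, 10];
# the minimizer of the summed objective is the median of the c_i, here 2.0.
A = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5]])  # row stochastic, strongly connected
c = [1.0, 2.0, 3.0]
subgrads = [lambda x, ci=ci: np.sign(x - ci) for ci in c]
projections = [lambda x: np.clip(x, -10.0, 10.0)] * 3
x_final = dps_row_stochastic(A, subgrads, projections,
                             np.zeros((3, 1)), lambda k: 1.0 / (k + 1), 2000)
```

Note that with a positive diagonal, y[i, i] stays bounded away from zero along the iteration, so the rescaling is always well defined; this matches the strong connectivity and row stochasticity assumptions stated in the abstract.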