Achieving linear convergence for distributed constrained optimisation over time-varying directed unbalanced networks

2019 
The problem of distributed constrained optimisation over a network of agents, where the goal is to cooperatively minimise the sum of all local convex objective functions, is studied. Each agent possesses only its private local convex objective function and is subject to a coupling equality constraint and a local inequality constraint. Moreover, the authors focus on the scenario where each agent is only allowed to interact with its in-neighbours over a sequence of time-varying directed unbalanced networks. To solve the optimisation problem collectively, a novel distributed primal-dual push-DIGing algorithm (termed DPD-PD), which integrates the push-sum strategy with the distributed inexact gradient tracking method, is proposed. Unlike other methods, DPD-PD requires only that the mixing matrices be column-stochastic and additionally permits the agents' step-sizes to be uncoordinated. An important feature of DPD-PD is its ability to handle distributed constrained optimisation problems over time-varying directed unbalanced networks. When the objective functions are strongly convex and smooth, the authors demonstrate that DPD-PD converges linearly to the optimal solution provided that the uncoordinated step-sizes are smaller than a certain upper bound. An explicit convergence rate is also derived. Preliminary numerical experiments validate the theoretical findings.
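
A generic problem formulation consistent with the abstract, written in illustrative notation that may differ from the authors' own, is

\[
\min_{x_1,\dots,x_n}\ \sum_{i=1}^{n} f_i(x_i)
\quad \text{s.t.} \quad \sum_{i=1}^{n} \bigl(A_i x_i - b_i\bigr) = 0, \qquad g_i(x_i) \le 0,\ \ i = 1,\dots,n,
\]

where each \(f_i\) is the private convex objective of agent \(i\), the equality constraint couples all agents, and each \(g_i\) encodes agent \(i\)'s local inequality constraint.

The sketch below illustrates only the push-DIGing building block (push-sum combined with distributed gradient tracking over column-stochastic mixing matrices) that DPD-PD builds on. It omits the primal-dual handling of the constraints, and the problem data, graph sequence, and step-sizes are made up, so it should be read as a simplified illustration rather than the authors' algorithm.

```python
# Minimal push-DIGing sketch over time-varying column-stochastic matrices.
# NOT the full primal-dual DPD-PD method: constraints and dual updates are omitted.
import numpy as np

n = 5                                   # number of agents
b = np.arange(1.0, n + 1.0)             # f_i(x) = 0.5 * (x - b_i)^2 (strongly convex, smooth)

def grad(i, x):
    return x - b[i]

x_star = b.mean()                       # minimiser of sum_i f_i

def column_stochastic(shift):
    """Directed ring (with self-loops); every column sums to 1."""
    C = np.eye(n) * 0.5
    for j in range(n):
        C[(j + shift) % n, j] += 0.5
    return C

C_seq = [column_stochastic(1), column_stochastic(2)]   # time-varying digraph sequence

alpha = np.linspace(0.05, 0.1, n)       # uncoordinated (agent-specific) step-sizes
u = np.zeros(n)                         # push-sum numerators
v = np.ones(n)                          # push-sum weights
z = u / v                               # de-biased local estimates
y = np.array([grad(i, z[i]) for i in range(n)])        # gradient trackers

for k in range(300):
    C = C_seq[k % len(C_seq)]
    u = C @ (u - alpha * y)             # push-sum mixing of the primal iterate
    v = C @ v                           # push-sum weight update
    z_new = u / v                       # correct the column-stochastic bias
    y = C @ y + np.array([grad(i, z_new[i]) - grad(i, z[i]) for i in range(n)])
    z = z_new

print("max error:", np.max(np.abs(z - x_star)))        # decays at a linear rate
```

The gradient-tracking update keeps the (column-preserved) sum of the trackers equal to the sum of the current local gradients, which is what allows a constant, agent-specific step-size and a linear convergence rate under strong convexity and smoothness.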