Optimal distributed convex optimization on slowly time-varying graphs

2019 
We study optimal distributed first-order optimization algorithms when the network (i.e., the communication constraints between the agents) changes with time. This problem is motivated by scenarios where agents experience network malfunctions. We provide a sufficient condition that guarantees a convergence rate with optimal (up to logarithmic terms) dependencies on the network and function parameters if the network changes are constrained to a small percentage $\alpha$ of the total number of iterations. We call such networks \textit{slowly} time-varying networks. Moreover, we show that Nesterov's method, applied in this decentralized setting, has an iteration complexity of $\Omega\left(\left(\sqrt{\kappa_\Phi\cdot\bar\chi} + \alpha \log(\kappa_\Phi\cdot\bar\chi)\right)\log({1}/{\varepsilon})\right)$, where $\kappa_\Phi$ is the condition number of the objective function, and $\bar\chi$ is a worst-case bound on the condition number of the sequence of communication graphs. Additionally, we provide an explicit upper bound on $\alpha$ in terms of the condition number of the objective function and the network topologies.
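To make the setting concrete, the following is a minimal toy sketch (not the paper's algorithm) of decentralized gradient descent over a graph sequence that switches topology on a small fraction $\alpha$ of iterations. The agent count, the quadratic objectives, the ring/star topologies, and the Metropolis mixing rule are all illustrative assumptions; the point is only that consensus-plus-gradient steps still drive the agents near the global minimizer when topology changes are rare.

```python
import random

def metropolis_weights(edges, n):
    # Doubly stochastic mixing matrix via the Metropolis rule:
    # W[i][j] = 1 / (1 + max(deg_i, deg_j)) on edges, self-weight fills the rest.
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    W = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        w = 1.0 / (1 + max(deg[i], deg[j]))
        W[i][j] = W[j][i] = w
    for i in range(n):
        W[i][i] = 1.0 - sum(W[i])
    return W

def decentralized_gradient(a, b, iters=5000, eta=0.002, alpha=0.05, seed=0):
    # Each agent i holds f_i(x) = 0.5 * a_i * (x - b_i)^2, so the global
    # minimizer of sum_i f_i is the weighted mean sum(a_i*b_i)/sum(a_i).
    n = len(a)
    rng = random.Random(seed)
    ring = [(i, (i + 1) % n) for i in range(n)]       # usual topology
    star = [(0, i) for i in range(1, n)]              # rare alternate topology
    x = [0.0] * n
    for _ in range(iters):
        # "Slowly time-varying": the graph differs on ~alpha of iterations.
        edges = star if rng.random() < alpha else ring
        W = metropolis_weights(edges, n)
        # Consensus step (mix with neighbors), then local gradient step.
        mixed = [sum(W[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [mixed[i] - eta * a[i] * (mixed[i] - b[i]) for i in range(n)]
    return x
```

With `a = [1, 2, 1, 2, 1]` and `b = [1, 2, 3, 4, 5]`, the global minimizer is `21/7 = 3.0`, and every agent ends up close to it despite the occasional topology switches; with a constant step size, plain decentralized gradient descent only reaches an $O(\eta)$ neighborhood, which is why accelerated, exactly convergent schemes such as the one studied in the paper are of interest.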