Accelerated Primal-Dual Algorithm for Distributed Non-convex Optimization
2021
This paper investigates accelerating the convergence of distributed optimization algorithms on non-convex problems. We propose a distributed primal-dual stochastic gradient descent (SGD) algorithm equipped with the "powerball" method to accelerate convergence. We show that the proposed algorithm achieves the linear-speedup convergence rate $\mathcal{O}(1/\sqrt{nT})$ for general smooth (possibly non-convex) cost functions. We demonstrate the efficiency of the algorithm through numerical experiments, training two-layer fully connected neural networks and convolutional neural networks on the MNIST dataset and comparing against state-of-the-art distributed SGD algorithms and centralized SGD algorithms.
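The powerball method referenced above replaces each gradient coordinate $g$ with $\operatorname{sign}(g)\,|g|^{\gamma}$ for some exponent $\gamma \in (0, 1)$ before the update. The following is a minimal single-worker sketch of that transform inside a plain SGD step; the function names, the choice of $\gamma$, and the toy objective are illustrative assumptions and do not reproduce the paper's distributed primal-dual algorithm.

```python
import numpy as np

def powerball(grad, gamma=0.6):
    """Element-wise powerball transform: sign(g) * |g|**gamma, with gamma in (0, 1)."""
    return np.sign(grad) * np.abs(grad) ** gamma

def sgd_powerball_step(params, grad, lr=0.05, gamma=0.6):
    """One SGD step using the powerball-transformed (stochastic) gradient."""
    return params - lr * powerball(grad, gamma)

# Toy usage: minimize f(x) = ||x||^2 / 2 from a random starting point.
x = np.random.randn(5)
for _ in range(100):
    g = x  # gradient of f at x (a stochastic estimate in the distributed setting)
    x = sgd_powerball_step(x, g)
print(x)
```

In the distributed setting of the paper, each of the $n$ workers would apply such an update to its local stochastic gradient while primal-dual coupling terms enforce consensus across workers.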