Decentralized Accelerated Gradient Methods With Increasing Penalty Parameters

2020 
In this paper, we study the communication and (sub)gradient computation costs in distributed optimization and give a sharp complexity analysis for the proposed distributed accelerated gradient methods. We present two algorithms based on the framework of the accelerated penalty method with increasing penalty parameters. Our first algorithm is for smooth distributed optimization and it attains the near-optimal $O\left(\sqrt{\frac{L}{\epsilon(1-\sigma_2(W))}}\log\frac{1}{\epsilon}\right)$ communication complexity and the optimal $O\left(\sqrt{\frac{L}{\epsilon}}\right)$ gradient computation complexity for $L$-smooth convex problems, where $\sigma_2(W)$ denotes the second largest singular value of the weight matrix $W$ associated with the network and $\epsilon$ is the target accuracy. When the problem is $\mu$-strongly convex and $L$-smooth, our algorithm has the near-optimal $O\left(\sqrt{\frac{L}{\mu(1-\sigma_2(W))}}\log^2\frac{1}{\epsilon}\right)$ communication complexity and the optimal $O\left(\sqrt{\frac{L}{\mu}}\log\frac{1}{\epsilon}\right)$ gradient computation complexity. Our communication complexities exceed the lower bounds for smooth distributed optimization only by a factor of $\log\frac{1}{\epsilon}$. As far as we know, our method is the first to achieve both the communication and the gradient computation lower bounds, up to an extra logarithmic factor, for smooth distributed optimization. Our second algorithm is designed for non-smooth distributed optimization and it achieves both the optimal $O\left(\frac{1}{\epsilon\sqrt{1-\sigma_2(W)}}\right)$ communication complexity and the optimal $O\left(\frac{1}{\epsilon^2}\right)$ subgradient computation complexity, which match the corresponding lower bounds for non-smooth distributed optimization.
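To make the accelerated-penalty framework mentioned above more concrete, the sketch below illustrates the generic pattern such methods follow: the consensus constraint is penalized through the mixing matrix $W$, each outer round minimizes the penalized objective with Nesterov's accelerated gradient method, and the penalty parameter increases between rounds. This is a minimal illustration under common assumptions, not the paper's actual algorithm; the function name, the doubling schedule for the penalty, and the example data are all hypothetical.

```python
import numpy as np

def accelerated_penalty_sketch(grads, W, x0, L, beta0=1.0, outer=10, inner=50):
    """Illustrative sketch of an accelerated penalty scheme for
    decentralized optimization (not the paper's exact method).

    Each outer round approximately minimizes
        F_k(x) = sum_i f_i(x_i) + (beta_k / 2) * <x, (I - W) x>
    with Nesterov's accelerated gradient method, then increases beta_k.
    Multiplying a vector by W is the step that requires one round of
    neighbor communication on the network.

    grads : list of per-node gradient callables, grads[i](x_i) -> (d,) array
    W     : (m, m) symmetric, doubly stochastic mixing matrix
    x0    : (m, d) array of initial local variables, one row per node
    L     : smoothness constant of the local objectives f_i
    """
    m, d = x0.shape
    I = np.eye(m)
    x = x0.copy()
    beta = beta0
    for _ in range(outer):
        # Eigenvalues of I - W lie in [0, 2], so L + 2*beta bounds the
        # smoothness constant of the penalized objective F_k.
        Lk = L + 2.0 * beta
        x_prev = x.copy()
        y = x.copy()
        for t in range(inner):
            local = np.stack([grads[i](y[i]) for i in range(m)])  # gradients of the f_i
            penalty = beta * ((I - W) @ y)                        # gradient of the penalty term
            x_new = y - (local + penalty) / Lk                    # gradient step
            y = x_new + (t / (t + 3.0)) * (x_new - x_prev)        # Nesterov momentum
            x_prev = x_new
        x = x_prev
        beta *= 2.0  # increase the penalty parameter between rounds
    return x

# Hypothetical usage: decentralized least squares on a 4-node ring.
rng = np.random.default_rng(0)
A = [rng.standard_normal((20, 3)) for _ in range(4)]
b = [rng.standard_normal(20) for _ in range(4)]
grads = [lambda x, A=A[i], b=b[i]: A.T @ (A @ x - b) for i in range(4)]
W = np.array([[.5, .25, 0, .25], [.25, .5, .25, 0],
              [0, .25, .5, .25], [.25, 0, .25, .5]])
L = max(np.linalg.norm(A[i].T @ A[i], 2) for i in range(4))
x = accelerated_penalty_sketch(grads, W, np.zeros((4, 3)), L)
```

The trade-off this pattern is meant to convey is that a larger penalty enforces consensus more tightly (fewer communication-heavy outer rounds) at the price of a worse conditioned subproblem, and letting the penalty grow across rounds balances the two costs.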