Distributed Gradient Methods for Convex Machine Learning Problems in Networks: Distributed Optimization

2020 
This article provides an overview of distributed gradient methods for solving convex machine learning problems of the form $\min_{x \in \mathbb{R}^n} \frac{1}{m}\sum_{i=1}^{m} f_i(x)$ in a system of $m$ agents embedded in a communication network. Each agent $i$ holds a collection of data captured by its privately known objective function $f_i(x)$. The distributed algorithms considered here obey two simple rules: the privately known agent functions $f_i(x)$ cannot be disclosed to any other agent in the network, and every agent knows only the local connectivity structure of the network, i.e., its one-hop neighbors. While obeying these two rules, the algorithms that the agents execute should find a solution to the overall system problem despite this limited knowledge of the objective function and limited local communication. The algorithms surveyed here typically involve two update steps: a gradient step based on the agent's local objective function and a mixing step that diffuses relevant information from each agent to all other agents in the network.
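To make the two-step structure concrete, below is a minimal sketch of such a method, where each agent first mixes iterates with its one-hop neighbors and then takes a gradient step on its private objective. The ring topology, the quadratic local objectives $f_i(x) = \frac{1}{2}\|x - b_i\|^2$, the Metropolis mixing weights, and the diminishing step size are all illustrative assumptions for this sketch, not choices made in the article.

```python
import numpy as np

m, n = 5, 3                       # m agents, decision variable x in R^n
rng = np.random.default_rng(0)
b = rng.normal(size=(m, n))       # private data defining each f_i (assumed)

# Ring network: agent i communicates only with agents i-1 and i+1.
adj = np.zeros((m, m))
for i in range(m):
    adj[i, (i - 1) % m] = adj[i, (i + 1) % m] = 1.0

# Metropolis weights give a symmetric, doubly stochastic mixing matrix W
# that each agent can build from one-hop (neighbor degree) information.
deg = adj.sum(axis=1)
W = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if adj[i, j]:
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
    W[i, i] = 1.0 - W[i].sum()

def grad_f(i, x):
    """Gradient of the illustrative local objective f_i(x) = 0.5*||x - b_i||^2."""
    return x - b[i]

x = np.zeros((m, n))              # row i holds agent i's local iterate
for k in range(2000):
    alpha = 1.0 / (k + 2)         # diminishing step size (assumed schedule)
    mixed = W @ x                 # mixing step: diffuse neighbors' iterates
    grads = np.stack([grad_f(i, x[i]) for i in range(m)])
    x = mixed - alpha * grads     # gradient step on each private objective

# Every agent approaches the minimizer of (1/m)*sum_i f_i(x), here mean(b).
print(np.max(np.abs(x - b.mean(axis=0))))   # small consensus/optimality gap
```

Note that row $i$ of $W$ is nonzero only at agent $i$ and its one-hop neighbors, so the matrix product `W @ x` stands in for the per-agent message exchange; no agent ever sees another agent's $f_i$.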