Asynchronous Decentralized Learning of Randomization-Based Neural Networks

2021 
In a communication network, decentralized learning refers to knowledge collaboration among local agents (processing nodes) that improves local estimation performance without sharing private data. Ideally, the decentralized solution approximates the centralized solution, as if all the data were available at a single node, while requiring low computational power and communication overhead. In this work, we propose a decentralized learning scheme for randomization-based neural networks with asynchronous communication that achieves centralized-equivalent performance. We propose an ARock-based alternating-direction-method-of-multipliers (ADMM) algorithm that enables individual node activation and one-sided communication in an undirected connected network characterized by a doubly-stochastic network policy matrix. In addition, the asynchronous nature of the proposed algorithm reduces computational cost and communication overhead. We study the proposed algorithm on several randomization-based neural networks, including ELM, SSFN, RVFL, and their variants, and achieve centralized-equivalent performance at efficient computation and communication costs. We also show that the proposed asynchronous decentralized learning algorithm can outperform a synchronous learning algorithm in computational complexity, especially when the network connections are sparse.
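The abstract does not spell out the algorithm, but the general flavor of asynchronous consensus ADMM for a randomization-based network can be illustrated with a toy sketch. The sketch below is an assumption-laden simplification, not the paper's method: an ELM-style output layer is fit by consensus ADMM, with one randomly activated node per iteration (ARock-style), and the consensus averaging is simulated by a central coordinator rather than the paper's one-sided, policy-matrix-based communication. All variable names (H, T, W, U, Z, rho) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy ELM-style setup: a shared random hidden layer, private data per node.
n_nodes, n_in, n_hidden, n_out, n_samples = 4, 5, 20, 1, 200
R = rng.normal(size=(n_in, n_hidden))        # random input weights (shared seed)
X = rng.normal(size=(n_samples, n_in))
H_all = np.tanh(X @ R)                       # ELM hidden features
T_all = H_all @ rng.normal(size=(n_hidden, n_out))  # synthetic targets
H = np.array_split(H_all, n_nodes)           # each node keeps its slice private
T = np.array_split(T_all, n_nodes)

rho = 1.0
Z = np.zeros((n_hidden, n_out))              # consensus output weights
W = [np.zeros_like(Z) for _ in range(n_nodes)]   # local primal variables
U = [np.zeros_like(Z) for _ in range(n_nodes)]   # scaled dual variables

# Per-node normal-equation factors for the ridge-like local W-update.
A = [Hi.T @ Hi + rho * np.eye(n_hidden) for Hi in H]
B = [Hi.T @ Ti for Hi, Ti in zip(H, T)]

for _ in range(3000):
    i = int(rng.integers(n_nodes))           # ARock-style: one node activates
    W[i] = np.linalg.solve(A[i], B[i] + rho * (Z - U[i]))       # local step
    Z = np.mean([W[j] + U[j] for j in range(n_nodes)], axis=0)  # consensus
    U[i] = U[i] + W[i] - Z                   # dual update at the active node

# Compare with the centralized least-squares solution on the pooled data.
W_central = np.linalg.lstsq(H_all, T_all, rcond=None)[0]
rel_err = np.linalg.norm(Z - W_central) / np.linalg.norm(W_central)
```

At a fixed point, all local weights agree with Z and the dual variables sum to zero, which recovers the centralized normal equations — a small-scale illustration of the "centralized-equivalent" property, not a substitute for the paper's convergence analysis.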