A Neural Network Sparseness Algorithm Based on Relevance Dropout

2019 
Dropout is a regularization method that prevents model over-fitting by randomly deleting some neurons in the original network structure. As neural networks grow in scale and complexity, the number of linearly correlated neurons in the network increases. Correlated neurons not only create data redundancy and waste resources, but also hinder the sparsification of the network, reducing the efficiency of algorithm execution. Motivated by this, this paper integrates the idea of relevance into Dropout and improves on the original method, denoting the result R-Dropout (Relevance-Dropout). The key idea of R-Dropout is to improve the sparseness of the network by deleting highly correlated neurons with higher probability. Experimental results on public data sets show that the R-Dropout regularization method not only slightly improves accuracy, but also speeds up convergence and improves efficiency.
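
The abstract does not specify how neuron correlation is mapped to a drop probability, so the following is a minimal sketch of the idea in NumPy: each neuron's drop probability is raised in proportion to its mean absolute correlation with the other neurons in the layer. The function name r_dropout, the mixing weight alpha, and the blending formula are illustrative assumptions, not the paper's exact formulation.

    import numpy as np

    def r_dropout(activations, p=0.5, alpha=0.5, rng=None):
        # activations: (batch, units) array of a layer's outputs.
        # p is the base drop rate; alpha (an assumption) controls how
        # strongly correlation raises a neuron's drop probability.
        rng = rng or np.random.default_rng()
        # Pairwise Pearson correlations between neurons over the batch;
        # nan_to_num guards against zero-variance neurons.
        corr = np.nan_to_num(np.corrcoef(activations, rowvar=False))
        np.fill_diagonal(corr, 0.0)
        relevance = np.abs(corr).mean(axis=1)  # mean |corr| per neuron
        # Highly correlated neurons get a higher drop probability (assumed form).
        drop_prob = np.clip((1 - alpha) * p + alpha * relevance, 0.0, 0.99)
        keep = rng.random(activations.shape[1]) >= drop_prob
        # Inverted-dropout scaling keeps expected activations unchanged.
        return activations * (keep / (1.0 - drop_prob))

    # Training-time usage on a batch of 64 examples with 128 hidden units:
    acts = np.random.randn(64, 128)
    out = r_dropout(acts, p=0.5, alpha=0.5)

As in standard inverted dropout, the mask would be applied only during training; at test time the layer is used unchanged.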