On the Approximation Capabilities of ReLU Neural Networks and Random ReLU Features

2018 
Inspired by Barron's seminal work on quantitative approximation results for neural networks with sigmoidal activation units, we study the approximation properties of neural networks with ReLU nodes. By considering functions expressed as transforms of signed measures under the transformation induced by ReLU nodes, we prove approximation results stronger than Barron's, obtaining upper bounds on both the inner and outer weights for a given approximation accuracy. We also extend the approximation result to multi-layer networks and prove a depth separation result for the function class we consider. Because of the strong connection between single-hidden-layer neural networks and random features models, we further study the approximation properties of random ReLU features. We provide sufficient conditions for the universality of random ReLU features and describe a random ReLU features algorithm with a provable learning rate. We also generalize our result on random ReLU features to a broader class of random features.
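
The random ReLU features model mentioned in the abstract fixes randomly sampled inner weights and trains only the outer (linear) layer. The following is a minimal illustrative sketch, not the paper's exact algorithm: the sampling distribution (unit-norm directions with uniform biases), the ridge-regression fit of the outer weights, and the toy target function are all assumptions made for the example.

```python
import numpy as np

def random_relu_features(X, W, b):
    """Map inputs through fixed random ReLU nodes: phi_i(x) = max(0, w_i^T x + b_i)."""
    return np.maximum(0.0, X @ W.T + b)

rng = np.random.default_rng(0)
d, n_features, n_samples = 2, 500, 1000

# Inner weights are sampled once from a fixed distribution (here, an assumed
# choice: unit-norm directions with uniform biases) and are never trained.
W = rng.normal(size=(n_features, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)
b = rng.uniform(-1.0, 1.0, size=n_features)

# Toy regression data: a smooth target observed with small noise.
X = rng.uniform(-1.0, 1.0, size=(n_samples, d))
y = np.sin(X[:, 0]) + np.cos(X[:, 1]) + 0.01 * rng.normal(size=n_samples)

# Only the outer weights are learned, here via ridge regression on the features.
Phi = random_relu_features(X, W, b)
lam = 1e-3
a = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_features), Phi.T @ y)

# Evaluate on held-out points.
X_test = rng.uniform(-1.0, 1.0, size=(200, d))
y_test = np.sin(X_test[:, 0]) + np.cos(X_test[:, 1])
pred = random_relu_features(X_test, W, b) @ a
print("test RMSE:", np.sqrt(np.mean((pred - y_test) ** 2)))
```

Because the inner weights are frozen, fitting reduces to a convex problem in the outer weights, which is what makes the connection to single-hidden-layer networks and a provable learning rate tractable.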