Sparse-grid sampling recovery and deep ReLU neural networks in high-dimensional approximation.

2020 
We investigate the approximation of functions from the H\"older-Zygmund space of mixed smoothness $H^\alpha_\infty({\mathbb I}^d)$, defined on the $d$-dimensional unit cube ${\mathbb I}^d:=[0,1]^d$, by linear sparse-grid sampling-recovery algorithms and by deep ReLU (Rectified Linear Unit) neural networks when the dimension $d$ may be very large. The approximation error is measured in the norm of the isotropic Sobolev space $W^{1,p}_0({\mathbb I}^d)$. The optimality of this sampling recovery is studied in terms of sampling $n$-widths. Optimal linear sampling algorithms are constructed on sparse grids using a piecewise-linear B-spline interpolation representation. We prove tight dimension-dependent bounds on the sampling $n$-widths that are explicit in $d$ and $n$. Based on these sampling-recovery results, we investigate the expressive power of deep ReLU neural networks for approximating functions in this H\"older-Zygmund space. Namely, for any function $f\in H^\alpha_\infty({\mathbb I}^d)$ we explicitly construct a deep ReLU neural network whose output approximates $f$ in the $W^{1,p}_0({\mathbb I}^d)$-norm with a prescribed accuracy $\varepsilon$, and we prove tight dimension-dependent bounds on the computational complexity of this approximation, characterized by the number of weights and the depth of the network, explicit in $d$ and $\varepsilon$. Moreover, we show that under a certain restriction the curse of dimensionality can be avoided in approximation by sparse-grid sampling recovery and by deep ReLU neural networks.
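
To make the link between piecewise-linear B-spline interpolation and ReLU networks concrete, the following minimal sketch (not taken from the paper; the names `hat` and `hat_relu` are purely illustrative) demonstrates the standard fact that the univariate hat function, the basic building block of sparse-grid interpolation, is represented exactly by a one-hidden-layer ReLU network with three neurons.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hat(x):
    # Piecewise-linear hat function on [0, 1] with peak value 1 at x = 1/2,
    # i.e. the univariate piecewise-linear B-spline used in sparse-grid interpolation.
    return np.maximum(0.0, 1.0 - 2.0 * np.abs(x - 0.5))

def hat_relu(x):
    # Exact ReLU representation:
    #   hat(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1),
    # a one-hidden-layer ReLU network with 3 hidden neurons.
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

x = np.linspace(-0.5, 1.5, 1001)
assert np.allclose(hat(x), hat_relu(x))  # the two expressions agree everywhere
```

Roughly speaking, dilated and translated copies of this hat function and their tensor products span the sparse-grid interpolation spaces, which is why sparse-grid interpolants can be converted into ReLU networks whose number of weights and depth are controlled explicitly; the precise dimension-dependent bounds are the subject of the paper.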