Nonclosedness of the Set of Neural Networks in Sobolev Space

2021 
Abstract. We examine the closedness of sets of realized neural networks of a fixed architecture in Sobolev spaces. For an exactly $m$-times differentiable activation function $\rho$, we construct a sequence of neural networks $(\Phi_n)_{n \in \mathbb{N}}$ whose realizations converge in the order-$(m-1)$ Sobolev norm to a function that cannot be realized exactly by any neural network. Consequently, sets of realized neural networks are not closed in the order-$(m-1)$ Sobolev spaces $W^{m-1,p}$ for $p \in [1, \infty)$. Under slightly stronger conditions on the $m$-th derivative of $\rho$, we further show that these sets are not closed in $W^{m,p}$. For a real analytic activation function, we show that sets of realized neural networks are not closed in $W^{k,p}$ for every $k \in \mathbb{N}$. This nonclosedness allows non-network target functions to be approximated arbitrarily well, but only at the cost of unbounded growth of the network parameters. We partially characterize the rate of this parameter growth for most activation functions by exhibiting a specific sequence of realized neural networks that approximates the derivative of the activation function with weights growing inversely proportional to the $L^p$ approximation error. Finally, we present experimental results showing that trained networks can closely approximate non-network target functions with increasing parameter magnitudes.
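The parameter-growth claim admits a concrete numerical illustration. Below is a minimal sketch, assuming $\rho = \tanh$ and the classical difference-quotient sequence $\Phi_n(x) = n\,(\rho(x + 1/n) - \rho(x))$; both the choice of activation and this specific sequence are illustrative assumptions, not necessarily the paper's construction. Each $\Phi_n$ is realizable by a two-neuron network, converges to $\rho'$, and has an outer weight $n$ that grows like the reciprocal of the achieved $L^p$ error.

```python
import numpy as np

# Activation: tanh, assumed here for illustration (the paper treats
# general m-times differentiable activations).
rho = np.tanh
rho_prime = lambda x: 1.0 / np.cosh(x) ** 2  # exact derivative of tanh

def phi_n(x, n):
    # Two-neuron difference-quotient network with weights of size ~ n:
    #   Phi_n(x) = n * (rho(x + 1/n) - rho(x))
    return n * (rho(x + 1.0 / n) - rho(x))

def lp_error(n, p=2, a=-3.0, b=3.0, num=10001):
    # L^p distance between Phi_n and rho' on [a, b] via trapezoidal quadrature.
    x = np.linspace(a, b, num)
    err = np.abs(phi_n(x, n) - rho_prime(x)) ** p
    return np.trapz(err, x) ** (1.0 / p)

for n in [1, 10, 100, 1000]:
    e = lp_error(n)
    # For large n, the product n * error approaches a constant,
    # i.e. the weight size scales inversely with the error.
    print(f"n={n:5d}  L2 error={e:.2e}  n*error={n * e:.3f}")
```

Running this, the $L^2$ error decays like $1/n$ while the printed products $n \cdot \text{error}$ stabilize, consistent with weights growing inversely proportional to the approximation error.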