Analysis of Regularized Learning in Banach Spaces.

2021 
This article presents a new framework for studying the theory of regularized learning for generalized data in Banach spaces, including representer theorems and convergence theorems. The generalized data consist of linear functionals and real scalars, serving as the input and output elements that represent the discrete information of many engineering and physics models. Extending classical machine learning, the empirical risks are computed from the generalized data and the loss functions. Following regularization techniques, the exact solutions are approximated by minimizing the regularized empirical risks over the Banach spaces. The existence and convergence of the approximate solutions are guaranteed by the relative compactness of the generalized input data in the predual spaces of the Banach spaces.
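As a rough illustration of the minimization described above, the following is a minimal finite-dimensional sketch of regularized empirical risk minimization, not the paper's Banach-space construction: the generalized input data (linear functionals) are modeled as rows of a matrix acting on a coefficient vector, the outputs are real scalars, and squared loss with an l2 (Tikhonov) regularizer admits a closed-form minimizer. All names (`A`, `y`, `lam`) are illustrative assumptions.

```python
import numpy as np

def regularized_erm(A, y, lam):
    """Closed-form minimizer of the l2-regularized empirical risk
    (1/n) * sum_i (A[i] @ c - y[i])**2 + lam * ||c||**2."""
    n, d = A.shape
    # Normal equations: ((1/n) A^T A + lam * I) c = (1/n) A^T y
    gram = A.T @ A / n + lam * np.eye(d)
    return np.linalg.solve(gram, A.T @ y / n)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 3))   # 50 linear functionals on R^3
c_true = np.array([1.0, -2.0, 0.5])
y = A @ c_true                     # noiseless scalar outputs
c_hat = regularized_erm(A, y, lam=1e-8)
print(np.allclose(c_hat, c_true, atol=1e-4))
```

With a small regularization parameter and noiseless data, the recovered coefficients approach the true ones; in the article's setting the analogous statement is the convergence of the regularized approximate solutions to the exact solution.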