Certified dimension reduction of the input parameter space of vector-valued functions
2018
Approximation of multivariate functions is a difficult task when the number of input parameters is large. Identifying the directions along which the function does not vary significantly is a key preprocessing step for reducing the complexity of approximation algorithms.
In this talk, we propose a methodology for dimension reduction which consists in minimizing an upper bound of the approximation error obtained using Poincaré-type inequalities. This approach is fundamentally gradient-based, and generalizes the so-called active subspace method to vector-valued functions, e.g., functions with multiple scalar-valued outputs or functions taking values in function spaces.
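The gradient-based idea underlying the active subspace method can be sketched as follows: estimate the matrix H = E[∇f(X)ᵀ∇f(X)] (for a vector-valued f, a sum of Jacobian outer products) by Monte Carlo, and keep its leading eigenvectors as the reduced input directions. This is a minimal illustrative sketch, not the authors' implementation; the toy function `f` and all names are assumptions.

```python
import numpy as np

# Toy vector-valued function R^5 -> R^2 that varies only along
# the direction a = (1, 1, 0, 0, 0); purely illustrative.
def grad_f(x):
    s = x[0] + x[1]
    J = np.zeros((2, 5))          # Jacobian of f(x) = [sin(s), s^2]
    J[0, :2] = np.cos(s)
    J[1, :2] = 2 * s
    return J

rng = np.random.default_rng(0)
n = 2000
H = np.zeros((5, 5))
for _ in range(n):
    x = rng.standard_normal(5)
    J = grad_f(x)
    H += J.T @ J                  # accumulate Jacobian outer products
H /= n                            # Monte Carlo estimate of E[J^T J]

# Directions with negligible eigenvalues are ones along which f
# barely varies; the leading eigenvectors span the "active" subspace.
eigval, eigvec = np.linalg.eigh(H)
order = np.argsort(eigval)[::-1]
active = eigvec[:, order[:1]]     # leading reduced direction
```

For this toy function the leading eigenvector recovers (up to sign) the direction (1, 1, 0, 0, 0)/√2, and the remaining eigenvalues are numerically zero, so a one-dimensional reduction is exact here.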
We also compare the proposed gradient-based approach with the widely used truncated Karhunen-Loève decomposition (KL). We show that, from a theoretical perspective, the truncated KL can be interpreted as a method which minimizes a looser upper bound of the error than the one we derive. Numerical comparisons also show that better dimension reduction can be obtained when gradients of the function are available.
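The contrast with truncated KL can be illustrated with a small assumed example (not from the paper): KL/PCA retains the highest-variance input directions regardless of f, so it can rank last the only direction f actually depends on, whereas the gradient-based criterion would detect it.

```python
import numpy as np

# Assumed setup: input X has variance 10 along e0 and 1 along e1,
# but f depends only on the LOW-variance coordinate x1.
rng = np.random.default_rng(1)
C = np.diag([10.0, 1.0])
X = rng.multivariate_normal([0.0, 0.0], C, size=5000)
f = np.sin(X[:, 1])               # f ignores the high-variance direction

# Truncated KL / PCA direction: leading eigenvector of the input
# covariance, computed without any information about f.
eigval, eigvec = np.linalg.eigh(np.cov(X.T))
kl_dir = eigvec[:, np.argmax(eigval)]
# kl_dir is (up to sign) close to e0, the direction f does not vary in.
```

A single KL mode here keeps x0 and discards x1, losing all of f's variation; this is the sense in which KL minimizes a looser, function-independent error bound.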