
Variance-based sensitivity analysis

Variance-based sensitivity analysis (often referred to as the Sobol method or Sobol indices, after Ilya M. Sobol) is a form of global sensitivity analysis. Working within a probabilistic framework, it decomposes the variance of the output of the model or system into fractions which can be attributed to inputs or sets of inputs. For example, given a model with two inputs and one output, one might find that 70% of the output variance is caused by the variance in the first input, 20% by the variance in the second, and 10% by interactions between the two. These percentages are directly interpreted as measures of sensitivity. Variance-based measures of sensitivity are attractive because they measure sensitivity across the whole input space (i.e. the method is global), they can deal with nonlinear responses, and they can measure the effect of interactions in non-additive systems.

From a black-box perspective, any model may be viewed as a function Y = f(X), where X is a vector of d uncertain model inputs {X_1, X_2, ..., X_d} and Y is a chosen univariate model output (note that this approach examines scalar model outputs, but multiple outputs can be analysed by multiple independent sensitivity analyses). It is further assumed that the inputs are independently and uniformly distributed within the unit hypercube, i.e. \( X_i \in [0, 1] \) for \( i = 1, 2, \ldots, d \). This incurs no loss of generality, because any input space can be transformed onto this unit hypercube.

f(X) may be decomposed in the following way:

\[
Y = f(X) = f_0 + \sum_{i=1}^{d} f_i(X_i) + \sum_{i<j}^{d} f_{ij}(X_i, X_j) + \cdots + f_{12 \ldots d}(X_1, X_2, \ldots, X_d),
\]

where f_0 is a constant and f_i is a function of X_i, f_{ij} a function of X_i and X_j, etc. A condition of this decomposition is that

\[
\int_0^1 f_{i_1 i_2 \ldots i_s}(X_{i_1}, X_{i_2}, \ldots, X_{i_s}) \, dX_{i_k} = 0, \qquad k = 1, 2, \ldots, s,
\]

i.e. all the terms in the functional decomposition are orthogonal. This leads to definitions of the terms of the functional decomposition in terms of conditional expected values:

\[
f_0 = E(Y), \qquad
f_i(X_i) = E(Y \mid X_i) - f_0, \qquad
f_{ij}(X_i, X_j) = E(Y \mid X_i, X_j) - f_0 - f_i(X_i) - f_j(X_j).
\]

From these it can be seen that f_i is the effect of varying X_i alone (known as the main effect of X_i), and f_{ij} is the effect of varying X_i and X_j simultaneously, additional to the effect of their individual variations. This is known as a second-order interaction. Higher-order terms have analogous definitions.

Now, further assuming that f(X) is square-integrable, the functional decomposition may be squared and integrated to give

\[
\int_0^1 f^2(X) \, dX - f_0^2 = \sum_{s=1}^{d} \sum_{i_1 < \cdots < i_s}^{d} \int f^2_{i_1 \ldots i_s} \, dX_{i_1} \cdots dX_{i_s}.
\]

Notice that the left-hand side is equal to the variance of Y, and the terms of the right-hand side are variance terms, now decomposed with respect to sets of the X_i. This finally leads to the decomposition-of-variance expression

\[
\operatorname{Var}(Y) = \sum_{i=1}^{d} V_i + \sum_{i<j}^{d} V_{ij} + \cdots + V_{12 \ldots d},
\]

where each term on the right-hand side is the variance contributed by the corresponding term of the functional decomposition, e.g. \( V_i = \operatorname{Var}_{X_i}\!\left( E(Y \mid X_i) \right) \). Dividing these terms by Var(Y) gives the sensitivity indices mentioned above.
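In practice these variance fractions are usually estimated by Monte Carlo sampling rather than computed analytically. The sketch below is one minimal, illustrative way to do this with a "pick-and-freeze" sampling scheme, using the Saltelli-style estimator for first-order indices and the Jansen estimator for total-order indices; the toy model, sample size, and function names are assumptions made for the example and are not prescribed by the text above.

```python
import numpy as np

def sobol_indices(model, d, n=100_000, seed=0):
    """Monte Carlo estimates of first-order (S_i) and total-order (S_Ti) Sobol indices.

    Assumes the d inputs are i.i.d. uniform on [0, 1], as in the text above.
    Uses a pick-and-freeze scheme: two independent sample matrices A and B,
    plus matrices AB_i equal to A with column i taken from B.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))   # base sample matrix
    B = rng.random((n, d))   # independent resample matrix

    f_A = model(A)
    f_B = model(B)
    var_Y = np.var(np.concatenate([f_A, f_B]), ddof=1)  # estimate of Var(Y)

    S, S_T = np.empty(d), np.empty(d)
    for i in range(d):
        AB_i = A.copy()
        AB_i[:, i] = B[:, i]          # "freeze" all inputs except X_i
        f_ABi = model(AB_i)
        S[i] = np.mean(f_B * (f_ABi - f_A)) / var_Y          # first-order index estimate
        S_T[i] = 0.5 * np.mean((f_A - f_ABi) ** 2) / var_Y   # total-order index estimate
    return S, S_T


# Hypothetical two-input toy model with an interaction term, chosen only to
# illustrate how main effects and interactions show up in the indices.
def toy_model(X):
    x1, x2 = X[:, 0], X[:, 1]
    return 4.0 * x1 + 2.0 * x2 + 3.0 * x1 * x2

S, S_T = sobol_indices(toy_model, d=2)
print("first-order:", S.round(3))
print("total-order:", S_T.round(3))
```

For such a model, S_i estimates the fraction of Var(Y) explained by X_i alone, while the gap S_Ti - S_i reflects the variance contributed by interactions involving X_i, mirroring the 70%/20%/10% example given earlier.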

[ "Variance decomposition of forecast errors", "One-way analysis of variance", "Direct material price variance", "Fourier amplitude sensitivity testing", "Elementary effects method", "Direct material usage variance" ]
Parent Topic
Child Topic
    No Parent Topic