Singular value

In mathematics, in particular functional analysis, the singular values, or s-numbers, of a compact operator T : X → Y acting between Hilbert spaces X and Y are the square roots of the non-negative eigenvalues of the self-adjoint operator T*T (where T* denotes the adjoint of T). The singular values are non-negative real numbers, usually listed in decreasing order (s1(T), s2(T), …). The largest singular value s1(T) is equal to the operator norm of T (see Min-max theorem). When T acts on Euclidean space Rn, there is a simple geometric interpretation of the singular values: the image of the unit sphere under T is an ellipsoid, and the lengths of its semi-axes are the singular values of T; in R2, for instance, the unit circle is mapped to an ellipse whose semi-axes have lengths s1(T) and s2(T).

The singular values of a normal matrix A are the absolute values of its eigenvalues, because the spectral theorem gives a unitary diagonalization A = UΛU*, and therefore

$\sqrt{A^{*}A} = \sqrt{U\Lambda^{2}U^{*}} = U\,|\Lambda|\,U^{*}.$

Most commonly studied norms on Hilbert space operators are defined using s-numbers. For example, the Ky Fan k-norm is the sum of the first k singular values, the trace norm is the sum of all singular values, and the Schatten p-norm is the pth root of the sum of the pth powers of the singular values. Each of these norms is defined only on a special class of operators, so s-numbers are useful in classifying different operators.

In the finite-dimensional case, a matrix A can always be decomposed in the form A = UΣV*, where U and V* are unitary matrices and Σ is a diagonal matrix whose diagonal entries are the singular values. This is the singular value decomposition.

Min-max theorem for singular values: for $A \in \mathbb{C}^{m \times n}$ and $i = 1, 2, \ldots, \min\{m, n\}$,

$\sigma_{i}(A) = \max_{U : \dim(U) = i} \; \min_{x \in U,\, \|x\|_{2} = 1} \|Ax\|_{2} = \min_{U : \dim(U) = n - i + 1} \; \max_{x \in U,\, \|x\|_{2} = 1} \|Ax\|_{2},$

where U ranges over subspaces of $\mathbb{C}^{n}$ of the indicated dimension. In particular, $\sigma_{1}(A)$ is the operator norm of A.
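The finite-dimensional statements above can be checked directly. The following is a minimal sketch using NumPy (a library choice assumed here, not part of the original article) on a small random complex matrix; the values of k and p are likewise arbitrary illustrations.

```python
# A minimal sketch (NumPy assumed) checking the facts above on a random complex matrix.
import numpy as np

rng = np.random.default_rng(0)
m, n = 5, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))

# Singular value decomposition A = U @ diag(s) @ Vh, singular values s in decreasing order.
U, s, Vh = np.linalg.svd(A, full_matrices=False)
assert np.allclose(A, U @ np.diag(s) @ Vh)

# The singular values are the square roots of the non-negative eigenvalues of A*A.
eigs = np.linalg.eigvalsh(A.conj().T @ A)        # real, in ascending order
assert np.allclose(np.sqrt(eigs[::-1]), s)

# The largest singular value equals the operator (spectral) norm of A.
assert np.isclose(s[0], np.linalg.norm(A, 2))

# Norms defined from the singular values (k and p chosen arbitrarily for illustration).
k, p = 2, 3
ky_fan_k = s[:k].sum()                       # Ky Fan k-norm: sum of the first k singular values
trace_norm = s.sum()                         # trace norm: sum of all singular values
schatten_p = (s ** p).sum() ** (1.0 / p)     # Schatten p-norm
print(ky_fan_k, trace_norm, schatten_p)

# For a normal matrix (here a Hermitian one), the singular values are the
# absolute values of the eigenvalues.
N = A[:n, :] + A[:n, :].conj().T
abs_eigs = np.sort(np.abs(np.linalg.eigvals(N)))[::-1]
assert np.allclose(abs_eigs, np.linalg.svd(N, compute_uv=False))
```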

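The max-min characterization can also be illustrated numerically, again with NumPy assumed. The sketch below only checks one direction of the theorem: the subspace spanned by the first i right singular vectors attains the value σi(A); it does not search over all i-dimensional subspaces.

```python
# A sketch (NumPy assumed) of the max-min characterization: restricted to the
# span of the first i right singular vectors, the minimum of ||Ax||_2 over unit
# vectors x equals sigma_i(A). This only verifies that the maximum in the
# theorem is attained by that particular subspace.
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 4
A = rng.standard_normal((m, n))

U, s, Vh = np.linalg.svd(A, full_matrices=False)
V = Vh.conj().T                    # columns v_1, ..., v_n are the right singular vectors

for i in range(1, n + 1):
    B = A @ V[:, :i]               # A restricted to span(v_1, ..., v_i)
    # Unit vectors in that span are x = V[:, :i] @ c with ||c||_2 = 1, so
    # min ||Ax||_2 over such x is the smallest singular value of B.
    attained = np.linalg.svd(B, compute_uv=False)[-1]
    assert np.isclose(attained, s[i - 1])
print("max-min values match the singular values")
```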
[ "Singular value decomposition", "Matrix (mathematics)", "partial singular value decomposition", "Hankel singular value", "Bidiagonalization", "Heinz mean", "Bidiagonal matrix" ]