Gramian matrix

In linear algebra, the Gram matrix (Gramian matrix or Gramian) of a set of vectors $v_1, \dots, v_n$ in an inner product space is the Hermitian matrix of inner products, whose entries are given by $G_{ij} = \langle v_i, v_j \rangle$. An important application is to test linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero. It is named after Jørgen Pedersen Gram.

For finite-dimensional real vectors in $\mathbb{R}^n$ with the usual Euclidean dot product, the Gram matrix is simply $G = V^{\mathrm{T}} V$, where $V$ is a matrix whose columns are the vectors $v_k$. For complex vectors in $\mathbb{C}^n$, $G = V^{\mathrm{H}} V$, where $V^{\mathrm{H}}$ is the conjugate transpose of $V$ (see the first sketch below).

Given square-integrable functions $\{\ell_i(\cdot),\ i = 1, \dots, n\}$ on the interval $[t_0, t_f]$, the Gram matrix $G = [G_{ij}]$ is

$$G_{ij} = \int_{t_0}^{t_f} \ell_i(\tau)\, \overline{\ell_j(\tau)}\, d\tau.$$

For any bilinear form $B$ on a finite-dimensional vector space over any field, we can define a Gram matrix $G$ attached to a set of vectors $v_1, \dots, v_n$ by $G_{ij} = B(v_i, v_j)$. The matrix will be symmetric if the bilinear form $B$ is symmetric.

The square root of the Gram determinant of a set of vectors is the volume of the parallelotope they span. This generalizes the classical surface integral of a parametrized surface $\phi : U \to S \subset \mathbb{R}^3$ for $(x, y) \in U \subset \mathbb{R}^2$:

$$\operatorname{Area}(S) = \iint_U \left| \frac{\partial \phi}{\partial x} \times \frac{\partial \phi}{\partial y} \right| \, dx \, dy = \iint_U \sqrt{\det G}\; dx\, dy,$$

where $G$ is the Gram matrix of the partial derivatives $\partial \phi / \partial x$ and $\partial \phi / \partial y$.

The Gramian matrix is positive-semidefinite, and every symmetric positive-semidefinite matrix is the Gramian matrix of some set of vectors. Further, in finite dimensions it determines the vectors up to isomorphism: any two sets of vectors with the same Gramian matrix must be related by a single unitary matrix. These facts follow from taking the spectral decomposition of any positive-semidefinite matrix $P$, so that $P = U D U^{\mathrm{H}} = (U\sqrt{D})(U\sqrt{D})^{\mathrm{H}}$ and thus $P$ is the Gramian matrix of the rows of $U\sqrt{D}$ (a numerical sketch is given below). The Gramian matrix of any orthonormal basis is the identity matrix. The infinite-dimensional analogue of this statement is Mercer's theorem.
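As a quick illustration of the matrix formulas above, the following is a minimal sketch in Python using NumPy; the example matrix V and the numerical tolerance are illustrative assumptions, not from the source.

```python
import numpy as np

# Columns of V are the vectors v_1, ..., v_n; complex entries are allowed.
V = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, 3.0]])

# Gram matrix G = V^H V (reduces to V^T V for real vectors).
G = V.conj().T @ V

# The vectors are linearly independent iff the Gram determinant is non-zero;
# the tolerance below is an arbitrary illustrative choice.
gram_det = np.linalg.det(G)
print(G)
print("independent:", abs(gram_det) > 1e-12)
```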
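For the square-integrable case, the entries of the Gram matrix can be approximated by numerical quadrature. The sketch below assumes SciPy is available; the interval $[t_0, t_f] = [0, 1]$ and the monomial functions are illustrative choices, not from the source.

```python
import numpy as np
from scipy.integrate import quad

t0, tf = 0.0, 1.0
funcs = [lambda t: 1.0, lambda t: t, lambda t: t**2]  # example l_1, l_2, l_3

n = len(funcs)
G = np.empty((n, n))
for i in range(n):
    for j in range(n):
        # G_ij = integral of l_i(t) * l_j(t) over [t0, tf] (real-valued case,
        # so the complex conjugate is a no-op).
        G[i, j], _ = quad(lambda t: funcs[i](t) * funcs[j](t), t0, tf)

print(G)  # for these monomials on [0, 1] this is the 3x3 Hilbert matrix
```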
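Finally, the factorization $P = (U\sqrt{D})(U\sqrt{D})^{\mathrm{H}}$ from the last paragraph can be carried out directly with an eigendecomposition, recovering a set of vectors whose Gram matrix is a given positive-semidefinite matrix. A hedged sketch; the example matrix P is an illustrative assumption.

```python
import numpy as np

P = np.array([[2.0, 1.0],
              [1.0, 2.0]])             # symmetric positive-semidefinite

eigvals, U = np.linalg.eigh(P)         # spectral decomposition P = U D U^H
eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negative round-off
B = U * np.sqrt(eigvals)               # B = U sqrt(D), scaling columns of U

# The rows of B have P as their Gram matrix: B B^H = U D U^H = P.
assert np.allclose(B @ B.conj().T, P)
```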
