
Arnoldi iteration

In numerical linear algebra, the Arnoldi iteration is an eigenvalue algorithm and an important example of an iterative method. Arnoldi finds an approximation to the eigenvalues and eigenvectors of general (possibly non-Hermitian) matrices by constructing an orthonormal basis of the Krylov subspace, which makes it particularly useful when dealing with large sparse matrices.

The Arnoldi method belongs to a class of linear algebra algorithms that give a partial result after a small number of iterations, in contrast to so-called direct methods, which must run to completion to give any useful result (see, for example, the Householder transformation). The partial result in this case is the first few vectors of the basis the algorithm is building. When applied to Hermitian matrices, the Arnoldi iteration reduces to the Lanczos algorithm. The Arnoldi iteration was invented by W. E. Arnoldi in 1951.

An intuitive method for finding the largest (in absolute value) eigenvalue of a given m × m matrix A is the power iteration: starting with an arbitrary initial vector b, calculate Ab, A²b, A³b, …, normalizing the result after every application of the matrix A. This sequence converges to the eigenvector corresponding to the eigenvalue with the largest absolute value, λ₁. However, much potentially useful computation is wasted by using only the final result, Aⁿ⁻¹b.
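The power iteration described above can be sketched in a few lines of NumPy. This is a minimal illustration, not part of the original article; the function name and the fixed iteration count are choices made here for clarity.

```python
import numpy as np

def power_iteration(A, b, num_iters=100):
    """Approximate the dominant eigenpair of A by repeatedly applying A
    to b and normalizing after each application."""
    for _ in range(num_iters):
        b = A @ b
        b = b / np.linalg.norm(b)
    # Rayleigh quotient: estimate of the eigenvalue for the vector b
    eigval = b @ (A @ b)
    return eigval, b
```

Note that only the final normalized vector is used; the intermediate vectors Ab, A²b, … are discarded, which is exactly the waste the Arnoldi iteration avoids.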
This suggests that instead we form the so-called Krylov matrix:

Kₙ = [b, Ab, A²b, …, Aⁿ⁻¹b].

The columns of this matrix are not in general orthogonal, but we can extract an orthogonal basis via a method such as Gram–Schmidt orthogonalization. The resulting set of vectors is thus an orthogonal basis of the Krylov subspace Kₙ. We may expect the vectors of this basis to give good approximations of the eigenvectors corresponding to the n largest eigenvalues, for the same reason that Aⁿ⁻¹b approximates the dominant eigenvector.

The Arnoldi iteration uses the stabilized Gram–Schmidt process to produce a sequence of orthonormal vectors q₁, q₂, q₃, …, called the Arnoldi vectors, such that for every n, the vectors q₁, …, qₙ span the Krylov subspace Kₙ. At each step k, the inner loop projects out the components of the new candidate vector in the directions of q₁, …, qₖ; this ensures the orthogonality of all the generated vectors. The algorithm breaks down when the candidate vector for qₖ₊₁ is the zero vector. This happens when the minimal polynomial of A is of degree k. In most applications of the Arnoldi iteration, including the eigenvalue algorithm below and GMRES, the algorithm has converged at this point.
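The procedure above can be sketched as follows with NumPy. This is a hedged sketch rather than a canonical implementation; the function name, the breakdown tolerance, and the return convention are assumptions made for this example. It builds the orthonormal Arnoldi vectors Q and the upper Hessenberg matrix H satisfying A Qₙ = Qₙ₊₁ H̃ₙ.

```python
import numpy as np

def arnoldi_iteration(A, b, n, tol=1e-12):
    """Compute an orthonormal basis Q of the Krylov subspace K_n(A, b)
    and the (n+1) x n upper Hessenberg matrix H with A @ Q[:, :n] = Q @ H."""
    m = A.shape[0]
    Q = np.zeros((m, n + 1))
    H = np.zeros((n + 1, n))
    Q[:, 0] = b / np.linalg.norm(b)
    for k in range(n):
        v = A @ Q[:, k]                # candidate for the next Arnoldi vector
        for j in range(k + 1):         # stabilized Gram-Schmidt:
            H[j, k] = Q[:, j] @ v      # project out the q_j component
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        if H[k + 1, k] < tol:          # breakdown: Krylov subspace is invariant
            return Q[:, :k + 1], H[:k + 1, :k]
        Q[:, k + 1] = v / H[k + 1, k]
    return Q, H
```

Subtracting each projection from v immediately (rather than accumulating all projections against the original v) is what distinguishes the stabilized Gram–Schmidt process from the classical one, and it is why the generated vectors stay orthogonal in floating-point arithmetic.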
