
Lanczos algorithm

The Lanczos algorithm is a direct algorithm devised by Cornelius Lanczos that is an adaptation of power methods to find the m most useful eigenvalues and eigenvectors of an n × n Hermitian matrix, where m is often, but not necessarily, much smaller than n. Although computationally efficient in principle, the method as initially formulated was not useful, due to its numerical instability.

In 1970, Ojalvo and Newman showed how to make the method numerically stable and applied it to the solution of very large engineering structures subjected to dynamic loading. This was achieved using a method for purifying the Lanczos vectors (i.e. by repeatedly reorthogonalizing each newly generated vector with all previously generated ones) to any degree of accuracy; when this purification was not performed, the iteration produced a series of vectors that were highly contaminated by those associated with the lowest natural frequencies. In their original work, these authors also suggested how to select a starting vector (i.e. use a random-number generator to select each element of the starting vector) and proposed an empirically determined method for choosing m, the reduced number of vectors (i.e. it should be selected to be approximately 1.5 times the number of accurate eigenvalues desired). Soon thereafter their work was followed by Paige, who also provided an error analysis. In 1988, Ojalvo produced a more detailed history of this algorithm and an efficient eigenvalue error test.

There are in principle four ways to write the iteration procedure. Paige and other works show that the above order of operations is the most numerically stable. In practice the initial vector v_1 may be taken as another argument of the procedure, with β_j = 0 and indicators of numerical imprecision being included as additional loop termination conditions.

Not counting the matrix–vector multiplication, each iteration does O(n) arithmetic operations. If d is the average number of nonzero elements in a row of A, then the matrix–vector multiplication can be done in O(dn) arithmetic operations. The total complexity is thus O(dmn), or O(dn^2) if m = n; the Lanczos algorithm can therefore be very fast for sparse matrices. Schemes for improving numerical stability are typically judged against this high performance.

The vectors v_j are called Lanczos vectors. The vector w_j' is not used after w_j is computed, and the vector w_j is not used after v_{j+1} is computed. Hence one may use the same storage for all three.
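The iteration itself is short enough to sketch in code. The following is a minimal illustrative sketch in Python/NumPy for the real symmetric case (the function name lanczos and its signature are assumptions made for this example, not taken from the source): it runs the three-term recurrence in the order described above, optionally reorthogonalizes each new vector against all previously generated ones in the spirit of the Ojalvo and Newman purification, and returns the diagonal alpha and off-diagonal beta of the tridiagonal matrix T together with the Lanczos vectors V.

import numpy as np

def lanczos(A, v1, m, reorthogonalize=True):
    # Minimal sketch of the Lanczos iteration (real symmetric case; not production code).
    # A: symmetric matrix, or any object supporting A @ v (e.g. a sparse matrix).
    # Returns the diagonal alpha and off-diagonal beta of T, and the Lanczos vectors V.
    n = v1.shape[0]
    V = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(m)                       # beta[0] is a placeholder for beta_1 = 0
    V[:, 0] = v1 / np.linalg.norm(v1)
    for j in range(m):
        w = A @ V[:, j]                      # matrix-vector product: O(dn) for a sparse A
        alpha[j] = np.dot(w, V[:, j])
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j] * V[:, j - 1]
        if reorthogonalize:
            # "Purify" the new vector against all previously generated Lanczos vectors.
            w = w - V[:, :j + 1] @ (V[:, :j + 1].T @ w)
        if j + 1 == m:
            break
        beta[j + 1] = np.linalg.norm(w)
        if beta[j + 1] == 0:                 # invariant subspace found: terminate early
            return alpha[:j + 1], beta[1:j + 1], V[:, :j + 1]
        V[:, j + 1] = w / beta[j + 1]
    return alpha, beta[1:], V

As the preceding paragraph notes, storing all of V is only needed when eigenvectors of A are wanted or when reorthogonalization is used; if only T is sought, the working vector and the last two Lanczos vectors can share storage.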
Likewise, if only the tridiagonal matrix T is sought, then the raw iteration does not need v_{j-1} after having computed w_j, although some schemes for improving the numerical stability would need it later on. Sometimes the subsequent Lanczos vectors are recomputed from v_1 when needed.

The Lanczos algorithm is most often brought up in the context of finding the eigenvalues and eigenvectors of a matrix, but whereas an ordinary diagonalization of a matrix would make eigenvectors and eigenvalues apparent from inspection, the same is not true for the tridiagonalization performed by the Lanczos algorithm; nontrivial additional steps are needed to compute even a single eigenvalue or eigenvector. Nonetheless, applying the Lanczos algorithm is often a significant step forward in computing the eigendecomposition.

First observe that when T is n × n, it is similar to A: if λ is an eigenvalue of T then it is also an eigenvalue of A, and if Tx = λx (x is an eigenvector of T) then y = Vx is the corresponding eigenvector of A (since Ay = AVx = VTV*Vx = VTIx = VTx = V(λx) = λVx = λy). Thus the Lanczos algorithm transforms the eigendecomposition problem for A into the eigendecomposition problem for T.
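As a concrete illustration of that last step, the sketch below (building on the hypothetical lanczos function above, with a random symmetric test matrix chosen purely for the example) diagonalizes the small tridiagonal T and maps its eigenvectors back through V via y = Vx. For m < n the resulting pairs are Ritz approximations to eigenpairs of A rather than exact ones; the extreme eigenvalues are usually the ones approximated well.

import numpy as np

rng = np.random.default_rng(0)
n, m = 500, 50
A = rng.standard_normal((n, n))
A = (A + A.T) / 2                            # random symmetric test matrix

alpha, beta, V = lanczos(A, rng.standard_normal(n), m)    # sketch from above
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

theta, X = np.linalg.eigh(T)                 # eigenpairs of the small tridiagonal T
Y = V @ X                                    # y = V x: approximate eigenvectors of A

# Compare the largest Ritz value with the largest eigenvalue of A.
print(theta[-1], np.linalg.eigvalsh(A)[-1])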

[ "Lanczos resampling", "lanczos process", "Block Lanczos algorithm", "Lanczos approximation" ]