
Power iteration

In mathematics, power iteration (also known as the power method) is an eigenvalue algorithm: given a diagonalizable matrix A, the algorithm produces a number λ, the greatest (in absolute value) eigenvalue of A, and a nonzero vector v, a corresponding eigenvector of λ, that is, Av = λv. The algorithm is also known as the von Mises iteration.

Power iteration is a very simple algorithm, but it may converge slowly. The most time-consuming operation of the algorithm is the multiplication of the matrix A by a vector, so with an appropriate implementation it is effective for very large sparse matrices.

The power iteration algorithm starts with a vector b₀, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence relation

    b_{k+1} = A b_k / ‖A b_k‖.

So, at every iteration, the vector b_k is multiplied by the matrix A and normalized. If A has an eigenvalue that is strictly greater in magnitude than its other eigenvalues, and the starting vector b₀ has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence (b_k) converges to an eigenvector associated with the dominant eigenvalue.
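The recurrence above can be sketched in a few lines of code. The sketch below uses plain Python lists rather than a linear-algebra library, and the example matrix and starting vector are illustrative choices, not part of the original text:

```python
import math

def mat_vec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def power_iteration(A, b, num_iters=100):
    """Repeatedly apply A to b and normalize: b_{k+1} = A b_k / ||A b_k||."""
    for _ in range(num_iters):
        Ab = mat_vec(A, b)
        n = math.sqrt(sum(x * x for x in Ab))  # Euclidean norm ||A b_k||
        b = [x / n for x in Ab]
    return b

# Symmetric example matrix with eigenvalues 3 and 1;
# the dominant eigenvector is proportional to (1, 1).
A = [[2.0, 1.0], [1.0, 2.0]]
b = power_iteration(A, [1.0, 0.0])
```

After enough iterations, b approaches the unit dominant eigenvector, here (1/√2, 1/√2).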
Without the two assumptions above, the sequence (b_k) does not necessarily converge. In this sequence,

    b_k = e^{iφ_k} v₁ / ‖v₁‖ + r_k,

where v₁ is an eigenvector associated with the dominant eigenvalue, and ‖r_k‖ → 0. The presence of the term e^{iφ_k} implies that (b_k) does not converge unless e^{iφ_k} = 1. Under the two assumptions listed above, the sequence (μ_k) defined by

    μ_k = b_k* A b_k / (b_k* b_k)

converges to the dominant eigenvalue.
