Regularization properties of Krylov iterative solvers CGME and LSMR for linear discrete ill-posed problems with an application to truncated randomized SVDs
2020
For the large-scale linear discrete ill-posed problem $\min \|Ax-b\|$ or $Ax = b$ with $b$ contaminated by Gaussian white noise, the following Krylov solvers are commonly used: LSQR and its mathematically equivalent CGLS (i.e., the Conjugate Gradient (CG) method applied to $A^{T}Ax = A^{T}b$), CGME (i.e., the CG method applied to $\min \|AA^{T}y-b\|$ or $AA^{T}y = b$ with $x = A^{T}y$), and LSMR (i.e., the minimal residual (MINRES) method applied to $A^{T}Ax = A^{T}b$). These methods have intrinsic regularizing effects, with the number $k$ of iterations playing the role of the regularization parameter. In this paper, we analyze the regularizing effects of CGME and LSMR and establish a number of results, including a filtered SVD expansion of the CGME iterates. These results prove that the 2-norm filtering best possible regularized solution by CGME is less accurate than that by LSQR, while the one by LSMR is at least as accurate as that by LSQR. We also prove that the semi-convergence of CGME always occurs no later than that of LSQR, and that the semi-convergence of LSMR always occurs no sooner than that of LSQR. As a byproduct, using the analysis approach developed for CGME, we improve a fundamental result on the accuracy of the truncated rank-$k$ approximate SVD of $A$ generated by randomized algorithms, and reveal how the truncation step damages the accuracy. Numerical experiments justify our results on CGME and LSMR.
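The semi-convergence phenomenon described above, where the iterate first approaches the true solution and then deteriorates as noise is amplified, can be observed numerically. The sketch below is illustrative only: it uses SciPy's `lsqr` and `lsmr` routines as stand-ins for the solvers analyzed in the paper, and the Hilbert test matrix, noise level, and random seed are arbitrary choices, not taken from the paper's experiments.

```python
# Illustrative sketch of semi-convergence for LSQR and LSMR on a small
# discrete ill-posed problem (Hilbert matrix; choices are arbitrary).
import numpy as np
from scipy.linalg import hilbert
from scipy.sparse.linalg import lsqr, lsmr

n = 32
A = hilbert(n)                      # severely ill-conditioned test matrix
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-4 * rng.standard_normal(n)   # Gaussian white noise

err_lsqr, err_lsmr = [], []
for k in range(1, 16):
    # Disable residual-based stopping so exactly k iterations are taken;
    # the iteration count k is the only regularization parameter here.
    xq = lsqr(A, b, atol=0.0, btol=0.0, conlim=1e300, iter_lim=k)[0]
    xm = lsmr(A, b, atol=0.0, btol=0.0, conlim=1e300, maxiter=k)[0]
    err_lsqr.append(np.linalg.norm(xq - x_true))
    err_lsmr.append(np.linalg.norm(xm - x_true))

# The error first decreases, then grows: the minimizing k marks the
# semi-convergence point for each solver.
k_best_lsqr = 1 + int(np.argmin(err_lsqr))
k_best_lsmr = 1 + int(np.argmin(err_lsmr))
```

Plotting `err_lsqr` and `err_lsmr` against $k$ produces the characteristic V-shaped error curves; stopping at the minimizer of each curve yields the best regularized solution that solver can deliver on this problem.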