On the iterative solution of KKT systems in potential reduction software for large-scale quadratic problems
Citations: 31 · References: 57 · Related Papers: 10
Keywords:
Karush–Kuhn–Tucker conditions
Interior point method
Current state-of-the-art preconditioners for the reduced Hessian and the Karush–Kuhn–Tucker (KKT) operator for large-scale inverse problems are typically based on approximating the reduced Hessian with the regularization operator. However, the quality of this approximation degrades with increasingly informative observations or data. Thus the best-case scenario from a scientific standpoint (fully informative data) is the worst-case scenario from a computational perspective. In this paper we present an augmented Lagrangian-type preconditioner based on a block-diagonal approximation of the augmented upper-left block of the KKT operator. The preconditioner requires solvers for two linear subproblems that arise in the augmented KKT operator, which we expect to be much easier to precondition than the reduced Hessian. Analysis of the spectrum of the preconditioned KKT operator indicates that the preconditioner is effective when the regularization is chosen appropriately. In particular, it is effective when the regularization does not overpenalize highly informed parameter modes and does not underpenalize uninformed modes. Finally, we present a numerical study for a large-data/low-noise Poisson source inversion problem, demonstrating the effectiveness of the preconditioner. In this example, three MINRES iterations on the KKT system with our preconditioner result in a reconstruction with better accuracy than 50 iterations of CG on the reduced Hessian system with regularization preconditioning.
Karush–Kuhn–Tucker conditions
Hessian matrix
Augmented Lagrangian method
Regularization
Condition number
Citations (8)
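The block-diagonal augmented-Lagrangian idea in the abstract above can be sketched on a small dense saddle-point system: precondition MINRES with diag(H + ρCᵀC, (1/ρ)I), where the (1,1) block is the "augmented" operator. The problem sizes, the parameter `rho`, and the dense subproblem solves below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.sparse.linalg import minres, LinearOperator

rng = np.random.default_rng(0)
n, m = 20, 8
H = rng.standard_normal((n, n))
H = H @ H.T + np.eye(n)            # SPD Hessian block (assumption)
C = rng.standard_normal((m, n))    # constraint Jacobian (assumption)

# KKT (saddle-point) operator [[H, C^T], [C, 0]] -- symmetric indefinite
K = np.block([[H, C.T], [C, np.zeros((m, m))]])
b = rng.standard_normal(n + m)

# Block-diagonal augmented-Lagrangian preconditioner:
# M = diag(H + rho*C^T C, (1/rho) I); rho is a tuning parameter.
rho = 1.0
A_aug = H + rho * C.T @ C

def apply_prec(r):
    # Apply M^{-1}: one solve with the augmented (1,1) block,
    # one trivial scaling for the multiplier block.
    y = np.empty_like(r)
    y[:n] = np.linalg.solve(A_aug, r[:n])
    y[n:] = rho * r[n:]
    return y

M = LinearOperator((n + m, n + m), matvec=apply_prec)

x, info = minres(K, b, M=M)
print(info, np.linalg.norm(K @ x - b))
```

In practice the solve with `A_aug` would itself be replaced by an inner preconditioned iteration, which is the point of the paper: the two subproblems are expected to be easier to precondition than the reduced Hessian.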
In this paper, we study the solution of banded linear systems on distributed-memory multicomputers, with emphasis on the preconditioned conjugate gradient method. By restructuring the preconditioner of the traditional preconditioned conjugate gradient method, we obtain a preconditioner suited to parallel computation. The efficiency of the conjugate gradient method is improved while its parallelism is preserved. Moreover, three examples have been implemented on a Linux Networx cluster, and the numerical experiments indicate that our algorithm is feasible and effective.
Conjugate
Biconjugate gradient method
Citations (0)
In this letter, an effective symmetric successive overrelaxation (SSOR) preconditioning scheme is applied to the conjugate-gradient method for solving a large system of linear equations resulting from the use of the edge-based finite-element method (FEM). With SSOR as the preconditioner as well as its efficient implementation in the conjugate-gradient (CG) algorithm, the PCG method converges five times as fast as the CG method. This result demonstrates that SSOR is a good preconditioner for the CG iterative method when the FEM is applied to solve large-scale electromagnetic problems. © 2000 John Wiley & Sons, Inc. Microwave Opt Technol Lett 27: 235–238, 2000.
Conjugate
Citations (23)
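The SSOR preconditioner named in the abstract above factors as M = (1/(ω(2−ω))) (D/ω + L) (D/ω)⁻¹ (D/ω + L)ᵀ for A = L + D + Lᵀ, so applying M⁻¹ costs one forward and one backward triangular solve. A minimal dense sketch follows; the 1-D Laplacian stands in for the edge-based FEM matrix, and ω = 1.2 is just a typical choice, both assumptions:

```python
import numpy as np
from scipy.linalg import solve_triangular
from scipy.sparse.linalg import cg, LinearOperator

# SPD model problem (a 1-D Laplacian stands in for the FEM matrix)
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

omega = 1.2                      # relaxation parameter (typical choice)
D = np.diag(A)                   # diagonal of A
L = np.tril(A, -1)               # strictly lower part of A
T = np.diag(D) / omega + L       # lower-triangular SSOR factor

def ssor_apply(r):
    # Solve M z = r with M = (1/(omega*(2-omega))) * T D^{-1} T^T:
    # forward solve, diagonal scaling, backward solve.
    y = solve_triangular(T, omega * (2 - omega) * r, lower=True)
    y = (D / omega) * y
    return solve_triangular(T.T, y, lower=False)

M = LinearOperator((n, n), matvec=ssor_apply)
x, info = cg(A, b, M=M)
print(info, np.linalg.norm(A @ x - b))
```

Note that M is symmetric positive definite for 0 < ω < 2, which is what makes it admissible as a CG preconditioner.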
A new algorithm using the primal-dual interior point method with a predictor-corrector for solving nonlinear optimal power flow (OPF) problems is presented. The formulation and the solution technique are new. Both equalities and inequalities in the OPF are considered and solved simultaneously in a nonlinear manner based on the Karush-Kuhn-Tucker (KKT) conditions. The major computational effort of the algorithm is solving a symmetric system of equations whose sparsity structure is fixed; therefore only one optimal ordering and one symbolic factorization are involved. Numerical results for several test systems ranging in size from 9 to 2423 buses are presented, and comparisons are made with the pure primal-dual interior point algorithm. The results show that the predictor-corrector primal-dual interior point algorithm for OPF is computationally more attractive than the pure primal-dual interior point algorithm in terms of speed and iteration count.
Karush–Kuhn–Tucker conditions
Interior point method
Predictor–corrector method
Citations (6)
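The predictor-corrector mechanics described above are easiest to see on a linear program rather than the full nonlinear OPF. The textbook sketch below (Mehrotra-style, on min cᵀx s.t. Ax = b, x ≥ 0) is an assumption-laden stand-in for the paper's algorithm, but it shows the key structural point: the same Newton system, with fixed sparsity, is solved twice per iteration, once for the affine predictor and once for the centering corrector.

```python
import numpy as np

def mehrotra_lp(A, b, c, iters=20, tol=1e-8):
    """Predictor-corrector primal-dual interior point method for
    min c^T x s.t. A x = b, x >= 0 (textbook sketch, not the OPF code)."""
    m, n = A.shape
    x, s, y = np.ones(n), np.ones(n), np.zeros(m)
    for _ in range(iters):
        rp = b - A @ x                    # primal residual
        rd = c - A.T @ y - s              # dual residual
        mu = x @ s / n                    # complementarity measure
        if max(np.linalg.norm(rp), np.linalg.norm(rd), mu) < tol:
            break
        # One Jacobian per iteration: the same fixed-sparsity system
        # serves both the predictor and the corrector solve.
        J = np.block([
            [A, np.zeros((m, m)), np.zeros((m, n))],
            [np.zeros((n, n)), A.T, np.eye(n)],
            [np.diag(s), np.zeros((n, m)), np.diag(x)],
        ])
        def solve(rc):
            d = np.linalg.solve(J, np.concatenate([rp, rd, rc]))
            return d[:n], d[n:n + m], d[n + m:]
        def steplen(v, dv):
            neg = dv < 0
            return min(1.0, 0.99 * np.min(-v[neg] / dv[neg])) if neg.any() else 1.0
        # Predictor (affine-scaling) step
        dx, dy, ds = solve(-x * s)
        ap, ad = steplen(x, dx), steplen(s, ds)
        mu_aff = (x + ap * dx) @ (s + ad * ds) / n
        sigma = (mu_aff / mu) ** 3        # adaptive centering parameter
        # Corrector step with centering and second-order term
        dx, dy, ds = solve(-x * s + sigma * mu - dx * ds)
        ap, ad = steplen(x, dx), steplen(s, ds)
        x, y, s = x + ap * dx, y + ad * dy, s + ad * ds
    return x, y, s

# Tiny LP: min x1 + 2*x2 s.t. x1 + x2 = 1, x >= 0 (optimum at x = (1, 0))
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
x, y, s = mehrotra_lp(A, b, c)
print(x)
```

A production OPF solver would of course symmetrize and sparsely factorize this system once per iteration rather than form it densely, which is exactly the "one ordering, one symbolic factorization" advantage the abstract emphasizes.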
To decrease the condition number of the coefficient matrix when solving linear systems with the conjugate gradient method, and thereby accelerate convergence, preconditioning is commonly used to transform the original system into an equivalent one with a smaller condition number. A preconditioner should be close to the original coefficient matrix yet have an easily computed inverse. Starting from diagonal preconditioners, we use the eigenvalue decomposition of the coefficient matrix to derive an explicit expression for the eigenvalues of the preconditioned matrix, and we obtain and prove the optimal choice of diagonal preconditioner. Since the eigenvalue decomposition is computationally expensive, we introduce three p-norm preconditioners to approximate the optimal one. Experimental comparisons with commonly used preconditioners show that the three diagonal p-norm preconditioners perform better and make the algorithm converge faster.
Coefficient matrix
Citations (0)
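One simple family of diagonal preconditioners in the spirit of the abstract above scales row i by the p-norm of row i of A; the exact construction in the paper may differ, and the badly scaled test matrix below is purely an illustrative assumption. The sketch compares the condition number before and after symmetric diagonal scaling for p = 1, 2, ∞:

```python
import numpy as np

def pnorm_diag_scaling(A, p):
    """Diagonal scaling whose i-th entry is 1 / (p-norm of row i of A)
    (row-norm diagonal preconditioning; an illustration only)."""
    d = np.linalg.norm(A, ord=p, axis=1)
    return np.diag(1.0 / d)

# Badly scaled SPD matrix (assumption, for demonstration)
rng = np.random.default_rng(1)
n = 50
S = np.diag(10.0 ** rng.uniform(-3, 3, n))         # wide range of row scales
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = S @ (Q @ np.diag(rng.uniform(1, 2, n)) @ Q.T) @ S   # SPD, badly scaled

conds = {}
for p in (1, 2, np.inf):
    Minv = pnorm_diag_scaling(A, p)
    D = np.sqrt(Minv)               # symmetric split keeps D A D symmetric
    conds[p] = np.linalg.cond(D @ A @ D)
print(np.linalg.cond(A), conds)
```

The symmetric split `D A D` matters: CG requires a symmetric positive definite preconditioned operator, so the diagonal must be applied on both sides (or equivalently as an SPD preconditioner inside PCG).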
Abstract This paper presents a preconditioned conjugate gradient approach to structural static reanalysis for general layout modifications. It is suitable for all types of layout modifications, including the general case in which some original members and nodes are deleted while new members and nodes are added concurrently. The approach is based on the preconditioned conjugate gradient technique. The preconditioner is constructed, and an efficient implementation for applying it is presented, which requires factorizing only the stiffness matrix block corresponding to the newly added degrees of freedom. In particular, the approach can adaptively monitor the accuracy of approximate solutions. Numerical examples show that the condition number of the preconditioned matrix is remarkably reduced, so fast convergence and accurate results are achieved. Copyright © 2006 John Wiley & Sons, Ltd.
Conjugate
Citations (17)