A preconditioned second-order convex splitting algorithm with a difference of varying convex functions and line search
Abstract:
This paper introduces a preconditioned convex splitting algorithm enhanced with line search techniques for nonconvex optimization problems. The algorithm utilizes second-order backward differentiation formulas (BDF) for the implicit and linear components and the Adams-Bashforth scheme for the nonlinear and explicit parts of the gradient flow in variational functions. The proposed algorithm, resembling a generalized difference-of-convex-function approach, involves a changing set of convex functions in each iteration. It integrates the Armijo line search strategy to improve performance. The study also discusses classical preconditioners such as symmetric Gauss-Seidel, Jacobi, and Richardson within this context. The global convergence of the algorithm is established through the Kurdyka-Łojasiewicz properties, ensuring convergence within a finite number of preconditioned iterations. Numerical experiments demonstrate the superiority of the proposed second-order convex splitting with line search over conventional difference-of-convex-function algorithms.
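The abstract does not reproduce the scheme itself, so the following is only a sketch of the standard second-order convex-splitting form that such methods build on, assuming a gradient flow u_t = -∇E(u) for an energy written as a difference of convex parts E = E_1 - E_2 (in the paper this splitting may vary from iteration to iteration, with the preconditioner applied on top of it). BDF2 treats the convex part implicitly while Adams-Bashforth extrapolation treats the subtracted part explicitly:

\[
\frac{3u^{n+1} - 4u^{n} + u^{n-1}}{2\tau}
= -\nabla E_1(u^{n+1}) + 2\,\nabla E_2(u^{n}) - \nabla E_2(u^{n-1}).
\]

A trial step u^{n+1} = u^{n} + t\,d^{n} obtained this way can then be screened by the Armijo rule: for some fixed c ∈ (0, 1), shrink t (e.g., by halving) until

\[
E(u^{n} + t\,d^{n}) \le E(u^{n}) + c\,t\,\langle \nabla E(u^{n}),\, d^{n} \rangle .
\]

The paper's exact acceptance test and preconditioned updates are not given in the abstract; the displays above only fix the generic template.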
Related Papers:
Finding efficient and provable methods to solve non-convex optimization problems is an outstanding challenge in machine learning and optimization theory. A popular approach used to tackle non-convex problems is to use convex relaxation techniques to find a convex surrogate for the problem. Unfortunately, convex relaxations typically must be found on a problem-by-problem basis. Thus, providing a general-purpose strategy to estimate a convex relaxation would have a wide-reaching impact. Here, we introduce Convex Relaxation Regression (CoRR), an approach for learning convex relaxations for a class of smooth functions. The idea behind our approach is to estimate the convex envelope of a function f by evaluating f at a set of T random points and then fitting a convex function to these function evaluations. We prove that with probability greater than 1 - δ, the solution of our algorithm converges to the global optimizer of f with error O((log(1/δ)/T)^α) for some α > 0. Our approach enables the use of convex optimization tools to solve non-convex optimization problems.
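CoRR's actual estimator and guarantees live in the paper; the toy sketch below (plain NumPy, with a hypothetical test function) only illustrates the sample-then-fit idea in one dimension, using a convex quadratic as the fitted surrogate in place of a richer convex-envelope model.

```python
import numpy as np

# Hypothetical nonconvex test function (not from the paper).
f = lambda x: x**2 + 1.5 * np.sin(3 * x)

rng = np.random.default_rng(0)
T = 500
xs = rng.uniform(-3.0, 3.0, T)   # evaluate f at T random points
ys = f(xs)

# Fit a simple convex surrogate g(x) = a + b*x + c*x^2 by least
# squares; clipping c at zero keeps g convex.
A = np.stack([np.ones_like(xs), xs, xs**2], axis=1)
a, b, c = np.linalg.lstsq(A, ys, rcond=None)[0]
c = max(c, 0.0)

# Minimize the convex surrogate instead of f itself.
x_star = -b / (2 * c) if c > 0 else xs[np.argmin(ys)]
print(f"surrogate minimizer: {x_star:.3f}, f(x_star) = {f(x_star):.3f}")
```

As T grows, the fit is driven by ever more function evaluations, which is the mechanism behind the paper's O((log(1/δ)/T)^α) error bound; a quadratic surrogate, of course, captures only the coarse shape of f.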
Machine learning algorithms typically perform optimization over a class of non-convex functions. In this work, we provide bounds on the fundamental hardness of identifying the global minimizer of a non-convex function. Specifically, we design a family of parametrized non-convex functions and employ statistical lower bounds for parameter estimation. We show that the parameter estimation problem is equivalent to the problem of function identification in the given family. We then argue that non-convex optimization is at least as hard as function identification. Jointly, we prove that any first-order method can take exponential time to converge to a global minimizer.
One of the major concerns in numerical analysis of the electromagnetic scattering properties of convex objects with the finite-difference time-domain (FDTD) method is the Yee cell model of the convex object. Based on the geometric relationship between a point and any volume cell of a convex object, a method for building convex-object Yee cell models, called the convex geometry Yee cell model building of convex objects (CGYCMBCO) method, is proposed. Results for the Yee cells of four convex objects show that the CGYCMBCO method is applicable to any convex object and that its Yee cells can be obtained easily.
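The abstract does not spell out the CGYCMBCO construction, so the NumPy sketch below (geometry and grid are hypothetical) shows only the generic test such a model builder rests on: a Yee cell belongs to a convex object exactly when its representative point satisfies every half-space inequality defining the object.

```python
import numpy as np

# Hypothetical convex object in half-space form A x <= b:
# here the octahedron |x| + |y| + |z| <= 1.
A = np.array([[sx, sy, sz] for sx in (-1, 1)
              for sy in (-1, 1) for sz in (-1, 1)], dtype=float)
b = np.ones(8)

# Uniform grid of cell centers over [-1.5, 1.5]^3.
h = 0.1
axis = np.arange(-1.5 + h / 2, 1.5, h)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
centers = np.stack([X, Y, Z], axis=-1)          # shape (30, 30, 30, 3)

# A cell is inside iff its center satisfies all inequalities; for a
# convex object this single point-versus-half-space test suffices.
inside = (centers @ A.T <= b).all(axis=-1)
print(f"{inside.sum()} of {inside.size} cells are inside the object")
```

The point-versus-cell relationship the abstract refers to reduces, for convex geometry, to exactly this kind of membership test, which is why the method applies to any convex object.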
A method is given for the minimization of a convex functional on a convex set in a Banach space. The paper proves the convergence of the method and presents its applications to problems of optimal linear programming and of convex programming.
Randomized optimization is an established tool for control design with modulated robustness. While efficient randomized approaches exist for uncertain convex programs, this is not the case for non-convex problems. Methods based on statistical learning theory are applicable to non-convex problems, but they are usually conservative in achieving the desired probabilistic guarantees. In this paper, we derive a novel scenario approach for a wide class of random non-convex programs, with a sample complexity similar to that of uncertain convex programs and with probabilistic guarantees that hold not only for the optimal solution of the scenario program but for all feasible solutions inside a set of a priori chosen complexity. We also address measure-theoretic issues for uncertain convex and non-convex programs. Among the family of non-convex control-design problems that can be addressed via randomization, we apply our scenario approach to stochastic model predictive control for chance-constrained nonlinear control-affine systems.
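As a pocket-sized illustration of the scenario mechanism only (the paper's non-convex extension and its complexity bounds are not reproduced here; the constraint and all names are hypothetical), the sketch below draws N uncertainty samples and enforces the constraint for every one of them, so the returned solution is feasible for unseen uncertainties with a probability controlled by N.

```python
import numpy as np

# Toy scenario program: minimize x subject to x >= g(delta) for
# every sampled uncertainty delta (kept convex for simplicity).
rng = np.random.default_rng(1)
N = 200                                  # scenario count
deltas = rng.normal(0.0, 1.0, N)         # hypothetical uncertainty samples
g = lambda d: 1.0 + 0.5 * np.sin(d) + 0.1 * d**2

# Enforcing all sampled constraints makes the minimizer the largest
# sampled requirement; this is the scenario solution.
x_scenario = g(deltas).max()
print(f"scenario solution: x = {x_scenario:.3f}")
```

Larger N tightens the probabilistic feasibility guarantee at the cost of a more constrained, hence more conservative, solution; this is the trade-off that sample-complexity results of this kind quantify.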
Convex optimization yields solutions that are both locally and globally optimal; in the case of non-convex optimization, obtaining a global solution is difficult. This paper presents some optimality criteria for the non-convex programming problem whose objective function is a fuzzy pseudo-convex function. Keywords: fuzzy pseudo-convex functions, fuzzy quasi-convex functions, non-convex programming problem.
This paper discusses the results presented by Youness in 1999 for E-convex functions and E-convex programming. By using the basic properties of E-convex functions and E-convex programming together with optimization analysis techniques, some incorrect results for E-convex functions and E-convex programming are corrected.
To estimate and compensate for disturbances effectively, disturbance observers (DOBs) have been widely employed in industry. This paper is dedicated to designing a DOB directly from frequency-response data. By transforming all the non-convex constraints into convex form, the bandwidth of the DOB is maximized through an iterative convex optimization process. Simulation results verify the effectiveness of the proposed method.