Abstract:
This thesis is about mathematical optimization. Mathematical optimization involves the construction of methods to solve optimization problems, which arise when real-life problems in applied science are mathematically modeled. Examples come from electrical design, engineering, control theory, telecommunications, the environment, finance, and logistics.
This thesis deals especially with semidefinite optimization problems. Semidefinite programming is a large class of convex optimization problems whose constraints take the form of linear matrix inequalities, that is, the matrix variables are required to lie in the cone of positive semidefinite matrices. These inequalities induce bounds on the eigenvalues of sums of symmetric or Hermitian matrices, and eigenvalues play an important role when we reformulate the semidefinite optimization problem. Optimization has been used for centuries, but since the 1950s, with the advent of computer technology, it has become far more prominent. Computers have made it possible to apply algorithms to real-life applications, which usually involve a large number of variables and/or relations between those variables; the size of the problem can be in the order of thousands or even millions of variables. In certain applications the computations must be completed in a fraction of a second, so apart from powerful computers, efficient algorithms are also needed. In this thesis we concentrate on the design of an efficient solution method for solving semidefinite programming problems, applicable or extensible to large-scale convex programming problems. Methods that consume less computation time are needed, for example subspace methods such as those that already exist for solving linear equations or eigenvalue problems.
Subspace methods project the data of a specific problem onto a much smaller subproblem that can be solved very fast, and they return a good approximation to the solution of the original problem. We do this by transforming the semidefinite programming problem into a convex non-smooth optimization problem, writing its constraint as an eigenvalue constraint, and we develop an algorithm that solves the semidefinite optimization problem in this formulation. The algorithm can be adjusted to apply subspace methods for eigenvalue problems. The newly designed algorithm is verified on a set of test problems.
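To make the eigenvalue reformulation concrete, the following is a minimal sketch, not the thesis's algorithm: all names and the toy instance are invented. It minimizes an objective of the form f(y) = lambda_max(C - sum_i y_i A_i) + b^T y, a standard eigenvalue-optimization form of a dual SDP, with a plain subgradient method in which the subgradient is built from an eigenvector of the largest eigenvalue.

```python
import numpy as np

def lmax_and_vec(X):
    """Largest eigenvalue of a symmetric matrix and a unit eigenvector for it."""
    w, V = np.linalg.eigh(X)          # eigenvalues returned in ascending order
    return w[-1], V[:, -1]

def subgradient_eigenvalue_method(C, A_list, b, y0, iters=200):
    """Minimize f(y) = lambda_max(C - sum_i y_i A_i) + b^T y by a plain
    subgradient method; a valid subgradient is g_i = b_i - v^T A_i v, where
    v is a unit eigenvector for the largest eigenvalue."""
    y = y0.copy()
    best_y, best_f = y.copy(), np.inf
    for k in range(1, iters + 1):
        X = C - sum(yi * Ai for yi, Ai in zip(y, A_list))
        lam, v = lmax_and_vec(X)
        f = lam + b @ y
        if f < best_f:
            best_f, best_y = f, y.copy()
        g = b - np.array([v @ Ai @ v for Ai in A_list])
        y = y - (1.0 / np.sqrt(k)) * g   # diminishing step size
    return best_y, best_f

# Tiny random instance, purely illustrative.
rng = np.random.default_rng(0)
n, m = 6, 3
C = rng.standard_normal((n, n)); C = (C + C.T) / 2
A_list = []
for _ in range(m):
    A = rng.standard_normal((n, n)); A_list.append((A + A.T) / 2)
b = rng.standard_normal(m)
y, fval = subgradient_eigenvalue_method(C, A_list, b, np.zeros(m))
print("approximate minimizer:", y, "objective:", fval)
```

This sketch deliberately omits the subspace acceleration; it only shows where the eigenvalue computation enters each iteration.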
As already noted, the developed method uses a subspace procedure, which reduces the size of the matrix variables dramatically. As a consequence, for the reduced problems all computations can be done without restrictions on memory or computation time when the subspace size is set to a small number. The most expensive part of the newly designed algorithm is the computation of a small number, say s, of eigenvalues at every iteration. The value of s is considerably smaller than m, the number of constraints, and n, the order of the matrix variable in the semidefinite programming problem. By using an appropriate iterative eigensolver for the computation of these eigenvalues we expect large gains in computing time and memory requirements.
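As a hedged illustration of that last point (assuming SciPy is available; this is not tied to the thesis's implementation), a few extreme eigenpairs of a large sparse symmetric matrix can be obtained with an iterative Lanczos-type routine instead of a full eigendecomposition:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def top_eigenpairs(X, s=5):
    """Return the s largest eigenvalues (and eigenvectors) of a large sparse
    symmetric matrix with an iterative solver, avoiding the O(n^3) cost and
    O(n^2) memory of a dense eigendecomposition."""
    vals, vecs = spla.eigsh(X, k=s, which='LA')   # 'LA' = largest algebraic
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]

# Example: a large sparse symmetric matrix (illustrative data).
n = 10_000
X = sp.random(n, n, density=1e-4, format='csr', random_state=0)
X = (X + X.T) * 0.5
vals, vecs = top_eigenpairs(X, s=5)
print("five largest eigenvalues:", vals)
```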
Keywords:
Semidefinite Programming
Conic optimization
Linear matrix inequality
Semidefinite embedding
Second-order cone programming
Matrix (mathematics)
Parallelizable manifold
Gauss–Seidel method
Interval arithmetic
Sequential quadratic programming
Citations (16)
We outline in this paper a general approach to solving distributed optimization problems of a certain type. We assume there are n processing elements, connected together so that they can share information. We expect that our approach would be particularly useful in large-scale problems, where a centralized approach may not be able to find a solution in an acceptable time, or where the problem data may inherently be decentralized. However, our approach is applicable to problems of any size, provided they meet the conditions we impose to ensure convergence. First we duplicate some (or all) of the problem variables, so as to decompose the constraints into purely local subsets. This produces an artificial decomposition and an increase in the number of variables. In order to make sure the solutions of the new problem and the original one coincide, we introduce consistency constraints which force all duplicated copies of a variable to be equal at the solution. These equality constraints are the only non-local constraints in the resulting transformed problem. We then apply the Auxiliary Problem Principle (APP) to the problem of finding a saddle-point of the Augmented Lagrangian of the transformed optimization problem. Informally, the APP involves linearizing any non-separable terms in the Augmented Lagrangian and adding convex/concave terms which can be chosen to be separable. Under conditions stated below, the new problem (after applying the APP) has the same saddle-point as the original problem. We describe an iterative algorithm to find the saddle-point of this new problem. By choosing the APP terms to be decomposable, the central step of this iterative algorithm decomposes into independent parallel minimizations, followed by Lagrange multiplier updates which involve only local information transfers.
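A minimal sketch of the duplication-plus-consistency idea (illustrative only: it uses a plain alternating augmented-Lagrangian scheme rather than the paper's APP linearization, and the tiny problem is invented):

```python
# Duplicate the shared variable, enforce the consistency constraint x1 == x2,
# and alternate local minimizations of the augmented Lagrangian with a
# multiplier update.  Problem: minimize (x - a)^2 + (x - b)^2, whose solution
# is x = (a + b) / 2.  The APP variant would additionally linearize the
# coupling term so that the two local updates could run in parallel.
a, b = 1.0, 5.0
rho = 1.0                      # augmented-Lagrangian penalty parameter
x1 = x2 = lam = 0.0            # duplicated local copies and the multiplier
for _ in range(100):
    # local minimization for agent 1: (x1 - a)^2 + lam*x1 + rho/2*(x1 - x2)^2
    x1 = (2 * a - lam + rho * x2) / (2 + rho)
    # local minimization for agent 2: (x2 - b)^2 - lam*x2 + rho/2*(x1 - x2)^2
    x2 = (2 * b + lam + rho * x1) / (2 + rho)
    # multiplier update on the consistency constraint x1 - x2 = 0
    lam += rho * (x1 - x2)
print(x1, x2)   # both copies approach (a + b) / 2 = 3.0
```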
Augmented Lagrangian method
Saddle point
Citations (5)
This paper is about a simple method for solving nonlinear, non-convex optimization problems (NLPs) with matrix inequality constraints. The method is based on the fact that a symmetric matrix is positive semi-definite if and only if it admits a Cholesky decomposition, and it works by reformulating the original matrix-inequality-constrained problem into a standard NLP, for which many high-quality codes are currently available. Examples of optimization problems involving matrix inequality constraints are relatively frequent in the structural optimization literature, and to illustrate a potential use of our method we present numerical solutions for weight minimization of trusses subject to compliance and global buckling constraints. Looking ahead, we also expect problems involving simultaneous optimization of both structures and control systems to become common, and since matrix-inequality-constrained problems appear frequently in control theory, we believe that the number of applications for codes like the one presented here will continue to grow rapidly.
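A hedged sketch of the reformulation idea (the toy problem, solver choice, and names below are illustrative, not the paper's code): parameterizing the matrix variable as X = L L^T with L lower triangular removes the matrix inequality entirely, so a general NLP solver can be applied.

```python
import numpy as np
from scipy.optimize import minimize

n = 4
rng = np.random.default_rng(1)
C = rng.standard_normal((n, n)); C = (C + C.T) / 2

tril = np.tril_indices(n)                 # positions of the lower triangle

def unpack(theta):
    """Rebuild the lower-triangular factor L and form X = L @ L.T (always PSD)."""
    L = np.zeros((n, n)); L[tril] = theta
    return L @ L.T

# Toy SDP: minimize trace(C X) subject to trace(X) = 1, X >= 0;
# its optimal value equals the smallest eigenvalue of C.
obj  = lambda theta: np.trace(C @ unpack(theta))
cons = {'type': 'eq', 'fun': lambda theta: np.trace(unpack(theta)) - 1.0}

theta0 = np.eye(n)[tril] / np.sqrt(n)     # start from X = I/n
res = minimize(obj, theta0, constraints=[cons], method='SLSQP')
print(res.fun, np.linalg.eigvalsh(C).min())   # the two values should roughly agree
```

The reformulated problem is nonconvex in the factor, so a local NLP solver is only expected to find a stationary point; for this toy instance it typically recovers the global optimum.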
Linear matrix inequality
Matrix (mathematics)
Citations (0)
Prevailing thought often decouples the ‘finding’ and ‘verifying’ aspects of global optimization. On one hand there are local search procedures and local optimality conditions, and on the other there is branch-and-bound. In this investigation, we aim to integrate these aspects into finite algorithms for concave programming and nonconvex quadratic programming. We will identify the notion of ‘finiteness’ using recently developed models of real-number computation.
As in linear programming (LP), if a quadratic programming (QP) problem has a solution, then some basic solution is always a global optimum. This remains true even when the QP is nonconvex. Such a problem having m constraints can be solved finitely by enumerating at most 2^m + 1 candidates—all of its basic solutions. In other words, QP, like LP and integer programming, and possibly unlike other continuous global optimization problems, can be solved finitely by a naive algorithm. It comes as a great surprise, then, that most global optimization methods proposed in the literature fail to terminate finitely. The same phenomenon is even more striking in the literature on concave programming, a problem in which the solution is not only basic, but is always an extreme point of the feasible set. This observation, particularly true of branch-and-bound algorithms, motivated the present study.
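A small sketch of such a naive finite algorithm (illustrative only, under the assumption that enumerating feasible basic solutions, i.e. vertices, suffices, as it does for this tiny concave QP over a box):

```python
import itertools
import numpy as np

# Concave QP toy: minimize f(x) = -||x||^2 over the box 0 <= x <= 1 in R^2.
# Constraints are written as A x <= b; the optimum lies at a vertex.
A = np.array([[ 1.0, 0.0], [ 0.0, 1.0], [-1.0, 0.0], [ 0.0, -1.0]])
b = np.array([1.0, 1.0, 0.0, 0.0])
f = lambda x: -x @ x

best_x, best_val = None, np.inf
n = A.shape[1]
# Enumerate all basic solutions: every choice of n constraints made active.
for rows in itertools.combinations(range(len(b)), n):
    A_act, b_act = A[list(rows)], b[list(rows)]
    if abs(np.linalg.det(A_act)) < 1e-12:
        continue                          # the chosen rows are not a basis
    x = np.linalg.solve(A_act, b_act)
    if np.all(A @ x <= b + 1e-9):         # keep only feasible vertices
        if f(x) < best_val:
            best_x, best_val = x, f(x)
print(best_x, best_val)                   # expect [1. 1.] and -2.0
```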
This document examines finiteness issues in the context of structured global optimization problems. Specific aspects of this research include: (1) development of finite algorithms; (2) analysis of the algorithms' computational complexity; (3) demonstration of the efficiency of the algorithms in applications, the applications coming from several engineering disciplines and from finance.
Global Optimization
Extreme point
Branch and bound
Citations (2)
Classical first-order (FO) algorithms of convex optimization, such as the Mirror Descent algorithm or Nesterov's optimal algorithm for smooth convex optimization, are well known to have optimal (theoretical) complexity estimates which do not depend on the problem dimension. However, to attain this optimality, the domain of the problem should admit a good proximal setup. The latter essentially means that (i) the problem domain should satisfy certain geometric conditions (be of favorable geometry), and (ii) the practical use of these methods is conditioned on our ability to solve efficiently an auxiliary optimization task -- computing the proximal transformation -- at each iteration of the method. More often than not these two conditions are satisfied in optimization problems arising in computational learning, which explains the fact that FO methods of proximal type have recently become the methods of choice when solving various learning problems. Yet they meet their limits in several important problems, such as multi-task learning with a large number of tasks, where the problem domain does not exhibit favorable geometry, and learning and matrix completion problems with a nuclear norm constraint, where the numerical cost of solving the auxiliary problem becomes prohibitive at large scale. We propose a novel approach to solving nonsmooth optimization problems arising in learning applications where a Fenchel-type representation of the objective function is available. The approach is based on applying FO algorithms to the dual problem and using the accuracy certificates supplied by the method to recover the primal solution. While suboptimal in terms of accuracy guarantees, the proposed approach does not rely upon a good proximal setup for the primal problem but assumes that the problem domain admits a Linear Optimization oracle -- the ability to efficiently maximize a linear form on the domain of the primal problem.
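To illustrate what a Linear Optimization oracle buys, here is a generic conditional-gradient (Frank-Wolfe) sketch, not the authors' dual-certificate scheme, and the instance is invented: over the nuclear-norm ball the oracle only needs the top singular pair of the gradient, which is far cheaper than a full projection.

```python
import numpy as np

def nuclear_ball_lmo(G, tau):
    """Linear Optimization oracle over {X : ||X||_* <= tau}: minimize <G, S>.
    Only the leading singular pair of G is needed; a truncated/iterative SVD
    would suffice at large scale."""
    U, s, Vt = np.linalg.svd(G)
    return -tau * np.outer(U[:, 0], Vt[0, :])

def frank_wolfe(M, tau, iters=200):
    """Conditional gradient for min 0.5*||X - M||_F^2 s.t. ||X||_* <= tau."""
    X = np.zeros_like(M)
    for k in range(iters):
        G = X - M                         # gradient of the smooth objective
        S = nuclear_ball_lmo(G, tau)
        gamma = 2.0 / (k + 2.0)           # standard open-loop step size
        X = (1 - gamma) * X + gamma * S
    return X

rng = np.random.default_rng(0)
M = rng.standard_normal((30, 20))
X = frank_wolfe(M, tau=5.0)
print("nuclear norm of iterate:", np.linalg.svd(X, compute_uv=False).sum())
```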
Conic optimization
Citations (0)
Optimization is a major field in applied mathematics. Many applications involve the search for the best solution to a problem according to some criterion. Depending on the optimization problem considered, finding the best solution is not always possible in a reasonable amount of time. Heuristic algorithms are often used when the problem is too difficult to solve exactly. These methods speed up the search for a good solution, but they do not guarantee that an optimal solution will be found. In this thesis, we explore such heuristic approaches for three different matrix problems. First, we study the minimum-volume bounding box problem, which consists in finding the smallest rectangular parallelepiped enclosing a given set of points in three-dimensional space. This problem appears, for example, in collision detection, which is a very important issue in computational geometry and computer graphics. In these applications, a solution has to be determined in a very short amount of time. We propose a new hybrid algorithm able to approximate optimal bounding boxes at a low computational cost. In particular, it is several orders of magnitude faster than the only currently known exact algorithm. Second, we investigate the subset selection problem. Given a large set of features, we want to choose a small subset containing the most relevant features while removing the redundant ones. This problem has applications in data mining, since it can be seen as a dimensionality reduction problem. We develop several windowed algorithms that tackle the subset selection problem for the maximum-volume criterion, which is NP-hard. Finally, we address the approximation of the joint spectral radius. This quantity characterizes the growth rate of products of matrices and is NP-hard to approximate. The joint spectral radius appears in many fields, including system theory, graph theory, combinatorics, and language theory. We present an experimental study of existing approaches and propose a new genetic algorithm that is able to find bounds on the joint spectral radius in a short amount of time.
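As a rough illustration of how bounds on the joint spectral radius can be computed (a brute-force sketch, not the thesis's genetic algorithm; the example pair of matrices is a classical one whose joint spectral radius is approximately the golden ratio):

```python
import itertools
import numpy as np

def jsr_bounds(mats, k=6):
    """Brute-force bounds on the joint spectral radius from all products of
    length k: max spectral-radius^(1/k) <= JSR <= max spectral-norm^(1/k)."""
    lower, upper = 0.0, 0.0
    for word in itertools.product(mats, repeat=k):
        P = np.linalg.multi_dot(word) if k > 1 else word[0]
        rho = max(abs(np.linalg.eigvals(P)))      # spectral radius of the product
        lower = max(lower, rho ** (1.0 / k))
        upper = max(upper, np.linalg.norm(P, 2) ** (1.0 / k))
    return lower, upper

A0 = np.array([[1.0, 1.0], [0.0, 1.0]])
A1 = np.array([[1.0, 0.0], [1.0, 1.0]])
print(jsr_bounds([A0, A1], k=6))   # bounds bracketing the JSR (about 1.618 here)
```

The cost grows exponentially in k, which is exactly why heuristic and genetic approaches like the one in the thesis are attractive.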
Matrix (mathematics)
Parallelepiped
Set cover problem
Minimum bounding box
Citations (4)
An insightful, concise, and rigorous treatment of the basic theory of convex sets and functions in finite dimensions, and of the duality between a convex optimization problem and its dual. Topics touched on include subgradient methods, the ellipsoid method, the Frank-Wolfe algorithm, feasibility and Lagrange multipliers for equality constraints, the drift-plus-penalty method, and the role of the Hessian matrix in smooth problems; the treatment also indicates why non-convex problems, in which local optima need not be global, are generally much harder to solve than linear programs or other convex problems.
Ellipsoid method
Feasible region
Hessian matrix
Subgradient method
Conic optimization
Convexity
Matrix (mathematics)
Citations (646)
Semidefinite Programming
Semidefinite embedding
Citations (176)
A large number of problems in engineering design and in many areas of the social and physical sciences and technology lend themselves to particular instances of the problems studied in this paper. Cutting-plane methods have traditionally been used as an effective tool in devising exact algorithms for solving convex and large-scale combinatorial optimization problems. Their use in nonconvex optimization has also been promising. A cutting plane, essentially a hyperplane defined by a linear inequality, can be used to effectively reduce the computational effort in the search for a global solution. Each cut is generated in order to eliminate a large portion of the search domain. Thus, a deep cut is intuitively superior in that it excludes a larger set of extraneous points from consideration. This paper is concerned with the development of deep-cutting-plane techniques applied to reverse-convex programs. An upper bound and a lower bound for the optimal value are found, updated, and improved at each iteration. The algorithm terminates when the two bounds collapse or all the generated subdivisions have been fathomed. Finally, computational considerations and numerical results on a set of test problems are discussed. An illustrative example, walking through the steps of the algorithm and explaining the computational process, is presented.
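For intuition on how a single linear cut removes part of the search region, here is a generic cutting-plane (Kelley-type) sketch for a convex toy problem; it is illustrative only and is not the deep-cut reverse-convex algorithm of the paper.

```python
import numpy as np
from scipy.optimize import linprog

# Each iteration adds one cut  t >= f(x_k) + f'(x_k) * (x - x_k),
# which discards part of the epigraph search region; the next candidate is
# the minimizer of the current piecewise-linear under-estimator.
f  = lambda x: (x - 2.0) ** 2 + 1.0      # convex objective on [a, b]
fp = lambda x: 2.0 * (x - 2.0)           # its derivative
a, b = -5.0, 5.0
cuts = []                                # list of (slope, intercept) pairs
x = a
for _ in range(25):
    g = fp(x)
    cuts.append((g, f(x) - g * x))       # cut: t >= g*x + intercept
    # minimize t over variables (x, t) subject to all cuts and a <= x <= b
    A_ub = [[slope, -1.0] for slope, _ in cuts]
    b_ub = [-intercept for _, intercept in cuts]
    res = linprog(c=[0.0, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(a, b), (None, None)])
    x, t = res.x
print(x, f(x))   # x approaches 2.0, the true minimizer
```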
Cutting-plane method
Hyperplane
Citations (3)
Abstract: The solution to an instance of the standard Shortest Path problem is a single shortest route in a directed graph. Suppose, however, that each arc has both a distance and a cost, and that one would like to find a route that is both short and inexpensive. A solution in this case typically consists of multiple routes, because in general no single route is both shortest and cheapest. Finding the efficient frontier, or tradeoff curve, for such a multi-criteria problem is often difficult, in part because the solution set can have exponentially many points. As a result, multi-criteria problems are often solved by approximating the efficient frontier. A fast approximation scheme (FAS) for a multi-criteria problem is an algorithm that can quickly find an arbitrarily good approximation to the problem's efficient frontier. Necessary and sufficient conditions for the existence of an FAS for a problem were introduced in [8094]. The conditions are stated in terms of the existence of a V-pseudo-polynomial (VPP) time algorithm for the problem. A new form of reducibility is also introduced and shown to be useful for studying the existence of FASs. This paper extends the work of [8094] by applying it to a variety of discrete optimization problems. The Source-to-Tree (STT) Network Flow problem is introduced. This problem is interesting because it generalizes many commonly treated combinatorial optimization problems. A VPP time algorithm is demonstrated for a particular STT problem and is used to show that the problem has an FAS. Results are also derived about the computational complexity of finding approximate solutions to several multi-criteria knapsack, scheduling, and production planning problems. Many theorems extend previously known results to multi-criteria problems. For some problems, results about problems with binary variables are extended to problems with general integer variables. These and other results are of interest even for problems with only a single criterion.
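A small label-correcting sketch (illustrative only; it computes an exact frontier on a tiny instance and is not the paper's approximation scheme): each node keeps only Pareto non-dominated (distance, cost) labels, and the labels at the target form the efficient frontier.

```python
from collections import defaultdict

def dominated(lbl, labels):
    """True if some label in `labels` is at least as good in both criteria."""
    return any(d <= lbl[0] and c <= lbl[1] and (d, c) != lbl for d, c in labels)

def efficient_frontier(arcs, source, target):
    """arcs: dict node -> list of (next_node, distance, cost)."""
    labels = defaultdict(set)
    labels[source].add((0, 0))
    queue = [(source, (0, 0))]
    while queue:
        u, (d, c) = queue.pop()
        for v, dd, cc in arcs.get(u, []):
            new = (d + dd, c + cc)
            if not dominated(new, labels[v]) and new not in labels[v]:
                # keep only labels at v that the new label does not dominate
                labels[v] = {l for l in labels[v] if not dominated(l, {new})} | {new}
                queue.append((v, new))
    return sorted(labels[target])

# Two routes from s to t: one short but expensive, one long but cheap.
arcs = {'s': [('a', 1, 10), ('b', 5, 1)], 'a': [('t', 1, 10)], 'b': [('t', 5, 1)]}
print(efficient_frontier(arcs, 's', 't'))   # [(2, 20), (10, 2)]
```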
Polynomial-time approximation scheme
Citations (15)