    An introduction to convex optimization for communications and signal processing
    Citations: 531 · References: 36 · Related papers: 10
    Abstract:
    Convex optimization methods are widely used in the design and analysis of communication systems and signal processing algorithms. This tutorial surveys some of the recent progress in this area. The tutorial contains two parts. The first part gives a survey of basic concepts and main techniques in convex optimization. Special emphasis is placed on a class of conic optimization problems, including second-order cone programming and semidefinite programming. The second part gives several examples of the application of conic programming to communication problems. We give an interpretation of Lagrangian duality in a multiuser multi-antenna communication problem; we illustrate the role of semidefinite relaxation in multiuser detection problems; and we review methods to formulate robust optimization problems via second-order cone programming techniques.
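The Lagrangian-duality interpretation mentioned in the abstract can be illustrated by the classical water-filling power allocation, in which the dual variable of the total-power constraint acts as a "water level". The following is a minimal sketch, not taken from the tutorial; the channel gains, iteration count, and bisection bracket are illustrative choices.

```python
import numpy as np

def water_filling(gains, total_power):
    """Maximize sum(log(1 + p_i * g_i)) subject to sum(p) = P, p >= 0.
    The KKT conditions give p_i = max(0, 1/lam - 1/g_i), where lam is the
    dual variable of the power constraint; we find lam by bisection."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 1e-12, g.max()  # lam -> 0 gives too much power, lam = max(g) gives none
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        p = np.maximum(0.0, 1.0 / lam - 1.0 / g)
        if p.sum() > total_power:
            lo = lam  # water level too high -> increase dual variable
        else:
            hi = lam
    return np.maximum(0.0, 1.0 / lam - 1.0 / g)
```

For gains `[4, 1]` and unit total power, the stronger channel receives more power, as duality predicts: the allocation is `[0.875, 0.125]`.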
    Keywords:
    Conic optimization
    Second-order cone programming
    Semidefinite programming
    Interior point method
    Linear matrix inequality
    Duality
    In this paper, we study possible extensions of the main ideas and methods of constrained DC optimization to the case of nonlinear semidefinite programming problems and more general nonlinear and nonsmooth cone constrained optimization problems. In the first part of the paper, we analyse two different approaches to the definition of DC matrix-valued functions (namely, order-theoretic and componentwise), study some properties of convex and DC matrix-valued mappings and demonstrate how to compute DC decompositions of some nonlinear semidefinite constraints appearing in applications. We also compute a DC decomposition of the maximal eigenvalue of a DC matrix-valued function. This DC decomposition can be used to reformulate DC semidefinite constraints as DC inequality constraints. Finally, we study local optimality conditions for general cone constrained DC optimization problems. The second part of the paper is devoted to a detailed convergence analysis of two extensions of the well-known DCA method for solving DC (Difference of Convex functions) optimization problems to the case of general cone constrained DC optimization problems. We study the global convergence of the DCA for cone constrained problems and present a comprehensive analysis of a version of the DCA utilizing exact penalty functions. In particular, we study the exactness property of the penalized convex subproblems and provide two types of sufficient conditions for the convergence of the exact penalty method to a feasible and critical point of a cone constrained DC optimization problem from an infeasible starting point. In the numerical section of this work, the exact penalty DCA is applied to the problem of computing compressed modes for variational problems and the sphere packing problem on the Grassmannian.
    Semidefinite programming
    Interior point method
    Conic optimization
    Linear matrix inequality
    Second-order cone programming
    Penalty method
    Constrained optimization
    Citations (1)
    An interesting recent trend in optimization is the application of semidefinite programming techniques to new classes of optimization problems. In particular, this trend has been successful in showing that under suitable circumstances, polynomial optimization problems can be approximated via a sequence of semidefinite programs. Similar ideas apply to conic optimization over the cone of copositive matrices and to certain optimization problems involving random variables with some known moment information. We bring together several of these approximation results by studying the approximability of cones of positive semidefinite forms (homogeneous polynomials). Our approach enables us to extend the existing methodology to new approximation schemes. In particular, we derive a novel approximation to the cone of copositive forms, that is, the cone of forms that are positive semidefinite over the nonnegative orthant. The format of our construction can be extended to forms that are positive semidefinite over more general conic domains. We also construct polyhedral approximations to cones of positive semidefinite forms over a polyhedral domain. This opens the possibility of using linear programming technology in optimization problems over these cones.
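Copositivity, the central notion above, means x^T A x >= 0 for all x in the nonnegative orthant. A crude stand-in for the polyhedral approximations the paper constructs is to check the quadratic form on a grid over the standard simplex; this sketch (the grid size and test matrices are illustrative choices, not from the paper) shows the idea for symmetric 2x2 matrices.

```python
import numpy as np

def simplex_grid_min(A, n=200):
    """Minimum of x^T A x over a uniform grid on the standard simplex
    {x >= 0, x1 + x2 = 1} -- a coarse polyhedral-style check of
    copositivity for a symmetric 2x2 matrix A."""
    t = np.linspace(0.0, 1.0, n + 1)
    X = np.stack([t, 1.0 - t])                    # grid points on the simplex
    vals = np.einsum('ik,ij,jk->k', X, A, X)      # quadratic form at each point
    return vals.min()
```

A PSD matrix such as [[1, -1], [-1, 1]] is copositive (grid minimum 0), a matrix with nonnegative entries such as [[1, 2], [2, 1]] is copositive (grid minimum 1), while [[0, -1], [-1, 0]] is not (the form is negative at x = (1/2, 1/2)).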
    Semidefinite programming
    Conic optimization
    Orthant
    Second-order cone programming
    Citations (43)
    We propose a homogeneous primal-dual interior-point method to solve sum-of-squares optimization problems by combining non-symmetric conic optimization techniques and polynomial interpolation. The approach optimizes directly over the sum-of-squares cone and its dual, circumventing the semidefinite programming (SDP) reformulation which requires a large number of auxiliary variables. As a result, it has substantially lower theoretical time and space complexity than the conventional SDP-based approach. Although our approach avoids the semidefinite programming reformulation, an optimal solution to the semidefinite program can be recovered with little additional effort. Computational results confirm that for problems involving high-degree polynomials, the proposed method is several orders of magnitude faster than semidefinite programming.
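The SDP reformulation that the proposed method circumvents rests on a simple fact: a polynomial p is a sum of squares iff p(x) = z(x)^T Q z(x) for a positive semidefinite Gram matrix Q and a monomial vector z(x). A minimal verification sketch (the polynomial and Gram matrix are an illustrative example, not from the paper):

```python
import numpy as np

# SOS certificate for p(x) = x^4 - 2x^2 + 1 = (x^2 - 1)^2:
# p(x) = z(x)^T Q z(x) with monomial vector z(x) = [1, x, x^2].
Q = np.array([[ 1.0, 0.0, -1.0],
              [ 0.0, 0.0,  0.0],
              [-1.0, 0.0,  1.0]])

def p_from_gram(x):
    z = np.array([1.0, x, x * x])
    return z @ Q @ z

# Q has eigenvalues {0, 0, 2}, so it is PSD and certifies p >= 0 everywhere.
eigs = np.linalg.eigvalsh(Q)
```

Note the auxiliary matrix variable Q is what makes the SDP route expensive for high-degree polynomials: its size grows with the number of monomials, which is exactly the overhead the direct method above avoids.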
    Semidefinite programming
    Conic optimization
    Interior point method
    Second-order cone programming
    Least-squares function approximation
    Citations (0)
    Semidefinite programming
    Conic optimization
    Second-order cone programming
    Citations (5)
    This paper presents rigorous forward error bounds for linear conic optimization problems. The error bounds are formulated in a quite general framework; the underlying vector spaces are not required to be finite-dimensional, and the convex cones defining the partial ordering are not required to be polyhedral. In the cases of linear programming, second-order cone programming, and semidefinite programming, specialized formulas are deduced, yielding guaranteed accuracy. All computed bounds are completely rigorous because all rounding errors due to floating point arithmetic are taken into account. Numerical results, applications and software for linear and semidefinite programming problems are described.
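The spirit of such rigorous a posteriori bounds can be conveyed with exact rational arithmetic instead of directed rounding (an illustrative stand-in, not the paper's method; the tiny LP below is made up): given a candidate solution, we certify feasibility and evaluate the objective with zero rounding error.

```python
from fractions import Fraction as F

# Certify that x = (1/3, 2/3) solves the linear system side of
#   min  x1 + 2*x2   s.t.  x1 + x2 = 1,  2*x1 - x2 = 0,  x >= 0
# exactly, and evaluate its objective free of floating-point rounding.
x = (F(1, 3), F(2, 3))
residual1 = x[0] + x[1] - 1    # exact constraint residual: 0
residual2 = 2 * x[0] - x[1]    # exact constraint residual: 0
objective = x[0] + 2 * x[1]    # exact objective value: 5/3
```

Floating-point solvers return only approximate residuals; the point of rigorous bounds is that statements like `residual1 == 0` and `objective == 5/3` hold exactly, not up to a tolerance.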
    Semidefinite programming
    Second-order cone programming
    Conic optimization
    Rounding
    Interior point method
    Convex cone
    Citations (5)
    Semidefinite programming
    Conic optimization
    Second-order cone programming
    Convex cone
    Duality
    Quadratic growth
    This thesis is about mathematical optimization. Mathematical optimization involves the construction of methods to solve optimization problems, which can arise from real-life problems in applied science once they are mathematically modeled. Examples come from electrical design, engineering, control theory, telecommunication, environment, finance, and logistics. This thesis deals especially with semidefinite optimization problems. Semidefinite programming is a large class of convex optimization problems whose constraints take the form of linear matrix inequalities, where the matrix variables lie in a semidefinite cone. The inequalities induce a bound on the eigenvalues of sums of symmetric or Hermitian matrices. Eigenvalues will play an important role if we reformulate the semidefinite optimization problem. Optimization has been practiced for centuries, but since the 1950s, with the advent of computer technology, it has gained prominence. Computers have made it possible to apply algorithms to real-life applications, which usually involve a large number of variables and/or relations between those variables. The size of the problem can be in the order of thousands or even millions of variables. In certain applications the computations must be completed in a fraction of a second, so apart from powerful computers, efficient algorithms are also needed. In this thesis we concentrate on the design of an efficient solution method for solving semidefinite programming problems, applicable or extensible to large-scale convex programming problems. Methods that consume less computation time are needed, for example, subspace methods of the kind that already exist for solving linear equations or eigenvalue problems. Subspace methods project the data of a specific problem onto a much smaller subproblem that can be solved very fast, and return a good approximation of the solution of the original problem.
We do this by transforming the semidefinite programming problem into a convex nonsmooth optimization problem, writing its constraint as an eigenvalue constraint, and we develop an algorithm that solves the semidefinite optimization problem in this formulation. The algorithm can be adjusted to apply subspace methods for eigenvalue problems. The newly designed algorithm is verified on a number of test problems. As already noted, the developed method uses a subspace procedure, which reduces the size of the matrix variables dramatically. As a consequence, for the reduced problems all computations can be done without restrictive memory or computation-time requirements when the subspace size is set to a small number. The most expensive part of the newly designed algorithm is the computation of a small number, say s, of eigenvalues at every iteration. The value of s is considerably smaller than m, the number of constraints, and n, the size of the matrix variable in the semidefinite programming problem. By using an appropriate iterative eigensolver for the computation of the eigenvalues we expect large gains in computing time and memory requirements.
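The cheap iterative eigensolver that the approach above relies on can be sketched with the simplest such method, power iteration (an illustrative sketch, not the thesis's solver; a production code would typically use a Lanczos-type method such as `scipy.sparse.linalg.eigsh`).

```python
import numpy as np

def power_iteration(A, iters=500, seed=0):
    """Largest-magnitude eigenvalue of a symmetric matrix A by power
    iteration: repeatedly apply A and renormalize, then return the
    Rayleigh quotient at the converged direction."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        w = A @ v
        v = w / np.linalg.norm(w)
    return v @ A @ v  # Rayleigh quotient approximates lambda_max

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # eigenvalues 1 and 3
```

Only matrix-vector products with A are needed, which is what makes such eigensolvers attractive inside a subspace method where only a few extreme eigenvalues per iteration are required.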
    Semidefinite programming
    Conic optimization
    Linear matrix inequality
    Second-order cone programming
    Citations (0)