A Convex Approximation of the Relaxed Binaural Beamforming Optimization Problem
5 Citations · 28 References · 10 Related Papers
Abstract:
The recently proposed relaxed binaural beamforming (RBB) optimization problem provides a flexible trade-off between noise suppression and binaural-cue preservation of the sound sources in the acoustic scene. It minimizes the output noise power under constraints that keep the target unchanged after processing and bound the binaural-cue distortions of the acoustic sources below a user-defined threshold. However, the RBB problem is a computationally demanding non-convex optimization problem. The only existing suboptimal method that approximately solves the RBB problem is a successive convex optimization (SCO) method, which typically requires solving multiple convex optimization problems per frequency bin in order to converge. Convergence is achieved when all constraints of the RBB optimization problem are satisfied. In this paper, we propose a semi-definite convex relaxation (SDCR) of the RBB optimization problem. The proposed suboptimal SDCR method solves a single convex optimization problem per frequency bin, resulting in a much lower computational complexity than the SCO method. Unlike the SCO method, however, the SDCR method does not guarantee user-controlled upper-bounded binaural-cue distortions. To tackle this problem we also propose a suboptimal hybrid method that combines the SDCR and SCO methods. Instrumental measures combined with a listening test show that the SDCR and hybrid methods achieve significantly lower computational complexity than the SCO method and, in most cases, a better trade-off between predicted intelligibility and binaural-cue preservation.
Keywords:
Conic optimization
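The semidefinite relaxation idea summarized in the abstract above can be sketched generically (this is an illustration of the standard lifting technique, not the authors' exact SDCR formulation): the beamformer w is lifted to the matrix variable W = w wᴴ, the rank-one constraint is dropped so the problem becomes a convex semidefinite program, and an approximate beamformer is recovered from the principal eigenvector of the solved W. A minimal numpy sketch of the recovery step, assuming a hypothetical relaxed solver already returned W:

```python
import numpy as np

# Hypothetical output of the relaxed SDP: a PSD matrix W approximating w w^H.
rng = np.random.default_rng(0)
w_true = rng.standard_normal(4) + 1j * rng.standard_normal(4)
W = np.outer(w_true, w_true.conj())   # exactly rank-1 here, for illustration
W += 1e-6 * np.eye(4)                 # small uniform perturbation, still PSD

# Rank-1 recovery: principal eigenvector scaled by sqrt of the top eigenvalue.
eigvals, eigvecs = np.linalg.eigh(W)  # eigenvalues in ascending order
w_hat = np.sqrt(eigvals[-1]) * eigvecs[:, -1]

# Up to a global phase, w_hat matches the underlying beamformer.
phase = np.vdot(w_hat, w_true)
phase /= abs(phase)
print(np.allclose(w_hat * phase, w_true, atol=1e-3))   # True
```

When the relaxed solution is not close to rank one, this recovery is only a heuristic; randomized rounding schemes are the usual fallback.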
The unifying theme of this thesis is the study of measures of conditioning for convex feasibility problems in conic linear form, or conic linear systems. Such problems are an important tool in mathematical programming. They provide a very general format for studying the feasible regions of convex optimization problems (in fact, any convex feasibility problem can be modeled as a conic linear system), and include linear programming feasibility problems as a special case. Over the last decade many important developments in linear programming, most notably the theory of interior-point methods, have been extended to convex problems in this form. In recent years, a new and powerful theory of "condition numbers" for convex optimization has been developed. Condition numbers capture the intuitive notion of problem "conditioning" and have been shown to be important in studying the efficiency of algorithms for convex optimization, including interior-point algorithms, as well as other behavioral characteristics of these problems, such as their geometry.
The contribution of this thesis is twofold. We continue the research in the theory of condition numbers for convex problems by developing an elementary algorithm for solving a conic linear system, whose complexity depends on the condition number of the problem.
We also discuss some potential drawbacks in using the condition number as the sole measure of conditioning of a conic linear system, motivating the study of "data-independent" measures. We introduce a new measure of conditioning for feasible conic linear systems in special form and study its relationship to the condition number and other measures of conditioning arising in recent linear programming literature. We study many of the implications of the new measure for problem geometry, conditioning, and algorithm complexity, and demonstrate that the new measure is data-independent. We also introduce the notion of "pre-conditioning" for conic linear systems, i.e., finding equivalent formulations of the problem with better condition numbers. We characterize the best such formulation and provide an algorithm for constructing a formulation whose condition number is within a known factor of the best possible.
Conic optimization
Conic section
Second-order cone programming
Proper convex function
Interior point method
Citations (3)
Convex optimization methods are widely used in the design and analysis of communication systems and signal processing algorithms. This tutorial surveys some of the recent progress in this area. The tutorial contains two parts. The first part gives a survey of basic concepts and main techniques in convex optimization. Special emphasis is placed on a class of conic optimization problems, including second-order cone programming and semidefinite programming. The second part of the survey gives several examples of the application of conic programming to communication problems. We give an interpretation of Lagrangian duality in a multiuser multi-antenna communication problem; we illustrate the role of semidefinite relaxation in multiuser detection problems; and we review methods to formulate robust optimization problems via second-order cone programming techniques.
Conic optimization
Second-order cone programming
Semidefinite Programming
Conic section
Duality (order theory)
Citations (531)
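As a small illustration of the robust-optimization-via-SOCP technique mentioned in the tutorial above: a linear constraint aᵀx ≤ b that must hold for every a in a Euclidean ball of radius ρ around a nominal âcollapses, by taking the worst case over the ball, to the single second-order cone constraint âᵀx + ρ‖x‖₂ ≤ b. A hedged numpy sketch (the function name is illustrative):

```python
import numpy as np

def robust_feasible(a_hat, rho, b, x):
    """Second-order cone form of the robust linear constraint
    a^T x <= b for all ||a - a_hat||_2 <= rho.
    The worst case is attained at a = a_hat + rho * x / ||x||."""
    return a_hat @ x + rho * np.linalg.norm(x) <= b

a_hat = np.array([1.0, 2.0])
x = np.array([1.0, 1.0])
print(robust_feasible(a_hat, 0.5, 4.0, x))  # 3 + 0.5*sqrt(2) <= 4 -> True
print(robust_feasible(a_hat, 1.0, 4.0, x))  # 3 + sqrt(2) > 4 -> False
```

The point of the reformulation is that the infinite family of linear constraints becomes one cone constraint that an SOCP solver handles directly.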
Conic optimization
Semidefinite Programming
Proper convex function
Conic section
Citations (6)
Beamforming filter optimization can be performed over a distributed wireless sensor network, but the output calculation remains either centralized or linked in time to the weight optimization. We propose a distributed method for calculating the beamformer output that is independent of the filter optimization. The new method trades a small decrease in signal-to-noise performance for a large decrease in transmission power. Background is given on distributed convex optimization and acoustic beamforming. The new model is described, with an analysis of its behaviour under independent noise. Simulation results demonstrate the desirable properties of the new model in comparison with centralized output computation.
Citations (0)
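A toy sketch of the distributed output computation described above (an assumption-laden illustration, not the paper's exact protocol): each node transmits only the scalar product of its local weight and sample, and the network sums these contributions instead of shipping raw microphone signals to a fusion center:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 6                                                       # number of sensor nodes
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)    # per-node filter weights
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)    # per-node microphone samples

# Centralized computation: every node ships its raw sample to a fusion center.
y_central = np.vdot(w, x)

# Distributed computation: each node transmits only its scalar contribution
# w_i^* x_i, and the network aggregates the sum in-network.
contributions = [np.conj(w[i]) * x[i] for i in range(M)]
y_distributed = sum(contributions)

print(np.isclose(y_central, y_distributed))  # True
```

The two computations agree exactly here; the paper's trade-off arises when the aggregation itself is noisy or quantized.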
This course focuses on the theory, algorithms, and applications of convex optimization. Convex optimization deals with nonlinear optimization problems in which both the objective function and the constraints are convex. These problems appear …
Conic optimization
Proper convex function
Citations (0)
Convex relaxations are basic tools for solving moment and polynomial optimization problems. This chapter reviews background material for convex optimization. We introduce basic topics such as vector spaces, convex sets and convex cones, positive semidefinite cones, linear matrix inequalities, semidefinite programming, linear conic optimization, and the linear convex formulation of moment and polynomial optimization.
Conic optimization
Semidefinite Programming
Linear matrix inequality
Second-order cone programming
Conic section
Proper convex function
Convex cone
Citations (0)
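The linear matrix inequalities mentioned in the chapter summary above can be checked numerically: an LMI F₀ + Σᵢ xᵢ Fᵢ ⪰ 0 holds exactly when the smallest eigenvalue of the assembled symmetric matrix is non-negative. A small numpy sketch (function and variable names are illustrative):

```python
import numpy as np

def lmi_feasible(F0, Fs, x, tol=1e-9):
    """Check the linear matrix inequality F0 + sum_i x_i * F_i >= 0 (PSD)
    by testing the smallest eigenvalue of the assembled symmetric matrix."""
    F = F0 + sum(xi * Fi for xi, Fi in zip(x, Fs))
    return np.linalg.eigvalsh(F).min() >= -tol

F0 = np.eye(2)
F1 = np.array([[0.0, 1.0], [1.0, 0.0]])
print(lmi_feasible(F0, [F1], [0.5]))  # eigenvalues {0.5, 1.5} -> True
print(lmi_feasible(F0, [F1], [2.0]))  # eigenvalues {-1.0, 3.0} -> False
```

An SDP solver enforces this condition as a constraint rather than checking it after the fact; the eigenvalue test is just the feasibility certificate.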
Intelligent reflecting surface (IRS)-aided millimeter-wave (mmWave) multiple-input single-output (MISO) is considered one of the promising techniques in next-generation wireless communication. However, existing beamforming methods for IRS-aided mmWave MISO systems require high computational power, which limits their wide deployment. In this paper, we combine an unsupervised-learning-based fast beamforming method with IRS-aided MISO systems to significantly reduce the computational complexity of the system. Specifically, a new beamforming design method is proposed by adopting feature fusion in unsupervised learning. By designing a specific loss function, a beamformer that improves spectral efficiency can be obtained, at lower complexity than the existing algorithms. Simulation results show that the proposed beamforming method can effectively reduce the computational complexity while achieving relatively good performance.
Extremely high frequency
WSDMA
Citations (1)
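In the single-user MISO setting of the abstract above, the quantity such a loss function would target, spectral efficiency, can be written as log₂(1 + |hᴴw|² / σ²); the matched-filter (MRT) beamformer w ∝ h maximizes it, which gives a useful sanity bound for any learned beamformer. A hedged numpy sketch (not the paper's network or loss function):

```python
import numpy as np

def spectral_efficiency(h, w, noise_power=1.0):
    """Achievable rate (bits/s/Hz) of a MISO link with channel h and a
    unit-norm transmit beamformer w: log2(1 + |h^H w|^2 / sigma^2)."""
    w = w / np.linalg.norm(w)
    return np.log2(1.0 + np.abs(np.vdot(h, w)) ** 2 / noise_power)

rng = np.random.default_rng(2)
h = rng.standard_normal(8) + 1j * rng.standard_normal(8)

# The matched-filter (MRT) beamformer w = h / ||h|| maximizes |h^H w| by
# Cauchy-Schwarz, so no learned beamformer can beat it in this setting.
rate_mrt = spectral_efficiency(h, h)
rate_rand = spectral_efficiency(h, rng.standard_normal(8) + 1j * rng.standard_normal(8))
print(rate_mrt >= rate_rand)  # True
```

With an IRS in the loop, the optimization couples the transmit beamformer with the surface's phase shifts, which is what makes the learned approach attractive.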
We introduce a convex optimization modeling framework that transforms a convex optimization problem expressed in a form natural and convenient for the user into an equivalent cone program in a way that preserves fast linear transforms in the original problem. By representing linear functions in the transformation process not as matrices, but as graphs that encode composition of linear operators, we arrive at a matrix-free cone program, i.e., one whose data matrix is represented by a linear operator and its adjoint. This cone program can then be solved by a matrix-free cone solver. By combining the matrix-free modeling framework and cone solver, we obtain a general method for efficiently solving convex optimization problems involving fast linear transforms.
Conic optimization
Convex cone
Linear map
Solver
Cone (formal languages)
Second-order cone programming
Citations (0)
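The matrix-free idea in the abstract above can be illustrated concretely: an operator is kept as a (forward, adjoint) pair of callables rather than an explicit matrix, and a matrix-free solver relies on the adjoint identity ⟨Ax, y⟩ = ⟨x, Aᴴy⟩ to validate the pair. A small numpy sketch using the FFT as the fast linear transform:

```python
import numpy as np

# Matrix-free linear operator: a (forward, adjoint) pair of callables instead
# of an explicit matrix. Here the operator is the unnormalized DFT via FFT.
forward = lambda x: np.fft.fft(x)
adjoint = lambda y: np.fft.ifft(y) * len(y)  # adjoint of the unnormalized DFT

rng = np.random.default_rng(3)
x = rng.standard_normal(16) + 1j * rng.standard_normal(16)
y = rng.standard_normal(16) + 1j * rng.standard_normal(16)

# Adjoint identity <A x, y> = <x, A^H y>: the correctness check a
# matrix-free cone solver performs on user-supplied operators.
lhs = np.vdot(forward(x), y)
rhs = np.vdot(x, adjoint(y))
print(np.isclose(lhs, rhs))  # True
```

Storing the operator this way costs O(n log n) per application instead of the O(n²) of a dense matrix-vector product, which is the point of the matrix-free framework.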