Convex optimization is a subfield of mathematical optimization that studies the problem of minimizing convex functions over convex sets. Many classes of convex optimization problems admit polynomial-time algorithms, whereas mathematical optimization is in general NP-hard.

Convex optimization has applications in a wide range of disciplines, such as automatic control systems, estimation and signal processing, communications and networks, electronic circuit design, data analysis and modeling, finance, statistics (optimal experimental design), and structural optimization. With recent advancements in computing and optimization algorithms, convex programming is nearly as straightforward as linear programming.

A convex optimization problem is an optimization problem in which the objective function is a convex function and the feasible set is a convex set. A function $f$ mapping some subset of $\mathbb{R}^n$ into $\mathbb{R} \cup \{\pm\infty\}$ is convex if its domain is convex and, for all $\theta \in [0,1]$ and all $x, y$ in its domain,

$$f(\theta x + (1-\theta)y) \leq \theta f(x) + (1-\theta) f(y).$$

A set is convex if, for all members $x, y$ and all $\theta \in [0,1]$, the point $\theta x + (1-\theta)y$ is also in the set.

Concretely, a convex optimization problem is the problem of finding some $x^{\ast} \in C$ attaining

$$\inf \{ f(x) : x \in C \},$$

where the objective function $f$ is convex, as is the feasible set $C$. If such a point exists, it is referred to as an optimal point; the set of all optimal points is called the optimal set. If $f$ is unbounded below over $C$ or the infimum is not attained, the optimization problem is said to be unbounded. If $C$ is the empty set, the problem is said to be infeasible.

A convex optimization problem is said to be in standard form if it is written as

$$
\begin{aligned}
\underset{x}{\text{minimize}} \quad & f(x) \\
\text{subject to} \quad & g_i(x) \leq 0, \quad i = 1, \ldots, m, \\
& h_i(x) = 0, \quad i = 1, \ldots, p,
\end{aligned}
$$

where $x \in \mathbb{R}^n$ is the optimization variable, the functions $f, g_1, \ldots, g_m$ are convex, and the functions $h_1, \ldots, h_p$ are affine. In this notation, $f$ is the objective function of the problem, and the functions $g_i$ and $h_i$ are referred to as the constraint functions. The feasible set of the optimization problem is the set of all points $x \in \mathbb{R}^n$ satisfying $g_1(x) \leq 0, \ldots, g_m(x) \leq 0$ and $h_1(x) = 0, \ldots, h_p(x) = 0$. This set is convex because the sublevel sets of convex functions are convex, affine sets are convex, and the intersection of convex sets is convex.
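To make the standard form concrete, the short sketch below writes one small instance of such a problem in Python using the cvxpy modeling library. The library choice, the particular objective $f(x) = \|x - a\|_2^2$, and the constraints are illustrative assumptions, not part of the definition above; it is a minimal sketch rather than a reference implementation.

import cvxpy as cp
import numpy as np

n = 3
x = cp.Variable(n)                 # optimization variable x in R^n

# Convex objective f(x) = ||x - a||_2^2 for a fixed vector a (chosen arbitrarily here).
a = np.array([1.0, -2.0, 0.5])
objective = cp.Minimize(cp.sum_squares(x - a))

constraints = [
    cp.sum(x) == 1,                # affine equality constraint h(x) = 0
    x >= 0,                        # convex inequality constraints g_i(x) <= 0
]

problem = cp.Problem(objective, constraints)
problem.solve()                    # hands the problem to one of cvxpy's backend convex solvers

print("status:", problem.status)   # e.g. 'optimal'
print("optimal point x*:", x.value)

Because the objective is convex and the constraints define a convex feasible set, the reported solution is a global minimizer, which is the practical payoff of casting a problem in this form.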
Many optimization problems can be equivalently formulated in this standard form. For example, the problem of maximizing a concave function $f$ can be reformulated equivalently as the problem of minimizing the convex function $-f$; as such, the problem of maximizing a concave function over a convex set is often referred to as a convex optimization problem. The following are useful properties of convex optimization problems: