
Linear-fractional programming

In mathematical optimization, linear-fractional programming (LFP) is a generalization of linear programming (LP). Whereas the objective function in a linear program is a linear function, the objective function in a linear-fractional program is a ratio of two linear functions. A linear program can be regarded as a special case of a linear-fractional program in which the denominator is the constant function one.

Both linear programming and linear-fractional programming represent optimization problems using linear equations and linear inequalities, which for each problem instance define a feasible set. Linear-fractional programs have a richer set of objective functions. Informally, linear programming computes a policy delivering the best outcome, such as maximum profit or lowest cost, whereas linear-fractional programming is used to achieve the highest ratio of outcome to cost, i.e. the highest efficiency. For example, in the context of LP we maximize the objective function profit = income − cost and might obtain a maximal profit of $100 (= $1100 of income − $1000 of cost), an efficiency of $100/$1000 = 0.1. Using LFP we might instead obtain an efficiency of $10/$50 = 0.2: a profit of only $10, which however requires only $50 of investment.
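The profit-versus-efficiency comparison above can be spelled out in a few lines of Python. The plan names are hypothetical; the figures follow the article's example (plan A: $1100 income on $1000 cost; plan B: $60 income on $50 cost):

```python
# Two hypothetical investment plans, with the article's numbers.
plans = {
    "plan A": {"income": 1100.0, "cost": 1000.0},
    "plan B": {"income": 60.0, "cost": 50.0},
}

def profit(p):
    """LP-style objective: profit = income - cost."""
    return p["income"] - p["cost"]

def efficiency(p):
    """LFP-style objective: ratio of profit to cost."""
    return profit(p) / p["cost"]

best_lp = max(plans, key=lambda k: profit(plans[k]))
best_lfp = max(plans, key=lambda k: efficiency(plans[k]))
print(best_lp, best_lfp)  # plan A plan B
```

The LP objective prefers plan A ($100 profit, efficiency 0.1), while the LFP objective prefers plan B ($10 profit, efficiency 0.2), illustrating how the two problem classes can rank the same alternatives differently.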
Formally, a linear-fractional program is defined as the problem of maximizing (or minimizing) a ratio of affine functions over a polyhedron:

maximize   (c^T x + α) / (d^T x + β)
subject to Ax ≤ b,

where x ∈ R^n represents the vector of variables to be determined, c, d ∈ R^n and b ∈ R^m are vectors of (known) coefficients, A ∈ R^(m×n) is a (known) matrix of coefficients, and α, β ∈ R are constants. The constraints have to restrict the feasible region to { x | d^T x + β > 0 }, i.e. the region on which the denominator is positive. (Alternatively, the denominator of the objective function may be strictly negative on the entire feasible region.)

Under the assumption that the feasible region is non-empty and bounded, the Charnes–Cooper transformation

y = x / (d^T x + β),   t = 1 / (d^T x + β)

translates the linear-fractional program above into the equivalent linear program

maximize   c^T y + αt
subject to Ay ≤ bt,  d^T y + βt = 1,  t ≥ 0.
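The key identity behind the Charnes–Cooper transformation is that, under the substitution y = x/(d^T x + β), t = 1/(d^T x + β), the linear objective c^T y + αt equals the original ratio (c^T x + α)/(d^T x + β) at every feasible x. A minimal numeric check on a hypothetical toy instance (the coefficients below are illustrative, not from the article):

```python
# Hypothetical instance: maximize (2*x1 + x2) / (x1 + x2 + 1)
# subject to x1 + x2 <= 4, x1, x2 >= 0. The denominator x1 + x2 + 1
# is positive on the whole feasible set, as the definition requires.
c, alpha = [2.0, 1.0], 0.0   # numerator coefficients
d, beta = [1.0, 1.0], 1.0    # denominator coefficients

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def charnes_cooper(x):
    """Map a feasible point x to the transformed variables (y, t)."""
    t = 1.0 / (dot(d, x) + beta)   # t = 1 / (d^T x + beta)
    y = [t * xi for xi in x]       # y = t * x
    return y, t

for x in [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (1.0, 2.0)]:
    ratio = (dot(c, x) + alpha) / (dot(d, x) + beta)  # LFP objective
    y, t = charnes_cooper(x)
    linear = dot(c, y) + alpha * t                    # LP objective
    # The two objectives agree point by point, and (y, t) satisfies
    # the normalization constraint d^T y + beta*t = 1.
    assert abs(ratio - linear) < 1e-12
    assert abs(dot(d, y) + beta * t - 1.0) < 1e-12
print("objectives agree on all sample points")
```

Because the mapping is invertible on the feasible region (x = y/t for t > 0), an optimal solution of the transformed linear program recovers an optimal solution of the original linear-fractional program.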

Related topics: Linear programming, Criss-cross algorithm, Revised simplex method, Bilevel linear programming, Fundamental theorem of linear programming