Strong RLT1 bounds from decomposable Lagrangean relaxation for some quadratic 0–1 optimization problems with linear constraints

2018 
The Reformulation-Linearization Technique (RLT) of Sherali and Adams (Manag Sci 32(10):1274–1290, 1986; SIAM J Discrete Math 3(3):411–430, 1990), when applied to a pure 0–1 quadratic optimization problem with linear constraints (P), constructs a hierarchy of LP (i.e., continuous and linear) models of increasing size. These provide monotonically improving continuous bounds on the optimal value of (P) as the level, i.e., the stage in the process, increases. When the level reaches the dimension of the original solution space, the last model yields an LP bound equal to the IP optimum. In practice, unfortunately, the size and complexity of the models increase so drastically with the level k that even computing bounds for level-k RLT models (called RLTk) can be challenging for large instances, even for small k. To our knowledge, only bounds of levels 1, 2, and 3 have been reported in the literature. We propose, for certain types of quadratic problems, a way of producing stronger bounds than continuous RLT1 bounds in a fraction of the time it would take to compute continuous RLT2 bounds. The approach consists of applying a specific decomposable Lagrangean relaxation to a specially constructed RLT1-type linear 0–1 model. If the overall Lagrangean problem does not have the integrality property, and if it can be solved as a 0–1 rather than a continuous problem, one may be able to obtain 0–1 RLT1 bounds of roughly the same quality as standard continuous RLT2 bounds, but in a fraction of the time and with much smaller storage requirements. If one actually decomposes the Lagrangean relaxation model, this two-step procedure, reformulation followed by decomposed Lagrangean relaxation, produces linear 0–1 Lagrangean subproblems with a dimension no larger than that of the original model. We first present numerical results for the Crossdock Door Assignment Problem, a special case of the Generalized Quadratic Assignment Problem. These show that solving a single Lagrangean relaxation problem in 0–1 variables produces a bound close to or better than the standard continuous RLT2 bound (when available), but much faster, especially for larger instances, even if one does not actually decompose the Lagrangean problem. We then present numerical results for the 0–1 quadratic knapsack problem, for which, to our knowledge, no RLT2 bounds are available, and show that solving an initial Lagrangean relaxation of a specific 0–1 RLT1 decomposable model drastically improves the quality of the bounds. In both cases, solving the fully decomposed rather than the decomposable Lagrangean problem to optimality makes it feasible to compute such bounds for instances far too large for computing standard continuous RLT2 bounds.
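To fix ideas, the sketch below shows one standard way such an RLT1-type 0–1 model and its decomposable Lagrangean relaxation can be set up. The notation (cost coefficients q_{ij}, constraint data a_{ri}, b_r, product variables w_{ij}) and the choice of dualizing the symmetry constraints w_{ij} = w_{ji} are illustrative assumptions, not necessarily the exact formulation used in the paper.

% Illustrative sketch only: the data names and the constraints that are
% dualized are assumptions made for exposition, not the paper's exact model.
\[
(P)\qquad \min_{x \in \{0,1\}^n} \; \sum_{i=1}^n c_i x_i + \sum_{i \ne j} q_{ij}\, x_i x_j
\qquad \text{s.t.}\qquad \sum_{i=1}^n a_{ri}\, x_i \le b_r, \quad r = 1,\dots,m.
\]

Multiplying each constraint by $x_j$ and replacing every product $x_i x_j$ by a 0–1 variable $w_{ij}$ (with $w_{jj} = x_j$) gives an RLT1-type linear 0–1 model; the original constraints and those multiplied by $(1 - x_j)$ may be kept as well, but are omitted here for brevity:
\[
(\mathrm{RLT1})\qquad
\begin{aligned}
\min\;\; & \sum_{j} c_j\, w_{jj} + \sum_{i \ne j} q_{ij}\, w_{ij} \\
\text{s.t.}\;\; & \sum_{i=1}^n a_{ri}\, w_{ij} \le b_r\, w_{jj}, && r = 1,\dots,m,\ \ j = 1,\dots,n, \\
& w_{ij} = w_{ji}, && 1 \le i < j \le n \quad (\text{linking}), \\
& w_{ij} \in \{0,1\}, && i, j = 1,\dots,n.
\end{aligned}
\]

Dualizing the linking constraints with multipliers $\lambda_{ij}$ yields, for every $\lambda$, a lower bound on the optimal value $z^*(P)$:
\[
L(\lambda) \;=\; \min\Bigl\{ \sum_{j} c_j\, w_{jj} + \sum_{i \ne j} q_{ij}\, w_{ij}
+ \sum_{i<j} \lambda_{ij}\,\bigl(w_{ij} - w_{ji}\bigr)
\;:\; \text{all constraints of } (\mathrm{RLT1}) \text{ except } w_{ij}=w_{ji} \Bigr\}
\;\le\; z^*(P).
\]

With the linking constraints dualized, the remaining constraints involve each column $w_{\cdot j}$ separately, so the Lagrangean problem splits into $n$ linear 0–1 subproblems, each of dimension $n$, i.e., no larger than the original model. Because these subproblems are solved in 0–1 rather than continuous variables, the Lagrangean dual bound $\max_\lambda L(\lambda)$ can strictly dominate the continuous RLT1 bound whenever the Lagrangean problem lacks the integrality property, which is the mechanism the abstract refers to.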