
AIMMS Modeling Guide - Linear Programming Tricks

This file contains only one chapter of the book. For a free download of the complete book in pdf format, please visit www.aimms.com or order your hardcopy at www.lulu.com/aimms.
Aimms 3.13
Copyright © 1993-2012 by Paragon Decision Technology B.V. All rights reserved.
Paragon Decision Technology B.V.
Schipholweg 1
2034 LS Haarlem
The Netherlands
Tel.: +31 23 5511512
Fax: +31 23 5511517
Paragon Decision Technology Inc.
500 108th Avenue NE
Ste. # 1085
Bellevue, WA 98004
USA
Tel.: +1 425 458 4024
Fax: +1 425 458 4025
Paragon Decision Technology Pte. Ltd.
55 Market Street #10-00
Singapore 048941
Tel.: +65 6521 2827
Fax: +65 6521 3001
Paragon Decision Technology
Shanghai Representative Office
Middle Huaihai Road 333
Shuion Plaza, Room 1206
Shanghai
China
Tel.: +86 21 51160733
Fax: +86 21 5116 0555
Email: info@aimms.com
WWW: www.aimms.com
Aimms is a registered trademark of Paragon Decision Technology B.V. IBM ILOG CPLEX and CPLEX is a
registered trademark of IBM Corporation. GUROBI is a registered trademark of Gurobi Optimization, Inc.
KNITRO is a registered trademark of Ziena Optimization, Inc. XPRESS-MP is a registered trademark of
FICO Fair Isaac Corporation. Mosek is a registered trademark of Mosek ApS. Windows and Excel are
registered trademarks of Microsoft Corporation. TeX, LaTeX, and AMS-LaTeX are trademarks of the American Mathematical Society. Lucida is a registered trademark of Bigelow & Holmes Inc. Acrobat is a registered trademark of Adobe Systems Inc. Other brands and their products are trademarks of their respective holders.
Information in this document is subject to change without notice and does not represent a commitment on
the part of Paragon Decision Technology B.V. The software described in this document is furnished under
a license agreement and may only be used and copied in accordance with the terms of the agreement. The
documentation may not, in whole or in part, be copied, photocopied, reproduced, translated, or reduced to
any electronic medium or machine-readable form without prior consent, in writing, from Paragon Decision
Technology B.V.
Paragon Decision Technology B.V. makes no representation or warranty with respect to the adequacy
of this documentation or the programs which it describes for any particular purpose or with respect
to its adequacy to produce any particular result. In no event shall Paragon Decision Technology B.V.,
its employees, its contractors or the authors of this documentation be liable for special, direct, indirect
or consequential damages, losses, costs, charges, claims, demands, or claims for lost profits, fees or
expenses of any nature or kind.
In addition to the foregoing, users should recognize that all complex software systems and their doc-
umentation contain errors and omissions. The authors, Paragon Decision Technology B.V. and its em-
ployees, and its contractors shall not be responsible under any circumstances for providing information
or corrections to errors and omissions discovered at any time in this book or the software it describes,
whether or not they are aware of the errors or omissions. The authors, Paragon Decision Technology
B.V. and its employees, and its contractors do not recommend the use of the software described in this
book for applications in which errors or omissions could threaten life, injury or significant loss.
This documentation was typeset by Paragon Decision Technology B.V. using LaTeX and the Lucida font family.
Part II
General Optimization Modeling
Tricks
Chapter 6
Linear Programming Tricks
This chapter
This chapter explains several tricks that help to transform some models with special, for instance nonlinear, features into conventional linear programming models. Since the fastest and most powerful solution methods are those for linear programming models, it is often advisable to use this format instead of solving a nonlinear or integer programming model where possible.
References
The linear programming tricks in this chapter are not discussed in any particular reference, but are scattered throughout the literature. Several tricks can be found in [Wi90]. Other tricks are referenced directly.
Statement of a linear program
Throughout this chapter the following general statement of a linear programming model is used:

Minimize:    ∑_{j∈J} c_j x_j

Subject to:  ∑_{j∈J} a_{ij} x_j ⋛ b_i    ∀i ∈ I
             x_j ≥ 0                      ∀j ∈ J

In this statement, the c_j's are referred to as cost coefficients, the a_{ij}'s are referred to as constraint coefficients, and the b_i's are referred to as requirements. The symbol ⋛ denotes any of ≤, =, or ≥ constraints. A maximization model can always be written as a minimization model by multiplying the objective by (−1) and minimizing it.
6.1 Absolute values

The model
Consider the following model statement:

Minimize:    ∑_{j∈J} c_j |x_j|,    c_j > 0

Subject to:  ∑_{j∈J} a_{ij} x_j ⋛ b_i    ∀i ∈ I
             x_j free
Instead of the standard cost function, a weighted sum of the absolute values
of the variables is to be minimized. To begin with, a method to remove these
absolute values is explained, and then an application of such a model is given.
Handling absolute values . . .
The presence of absolute values in the objective function means it is not possible to directly apply linear programming. The absolute values can be avoided by replacing each x_j and |x_j| as follows:

    x_j = x_j⁺ − x_j⁻
    |x_j| = x_j⁺ + x_j⁻
    x_j⁺, x_j⁻ ≥ 0
The linear program of the previous paragraph can then be rewritten as follows.

Minimize:    ∑_{j∈J} c_j (x_j⁺ + x_j⁻),    c_j > 0

Subject to:  ∑_{j∈J} a_{ij} (x_j⁺ − x_j⁻) ⋛ b_i    ∀i ∈ I
             x_j⁺, x_j⁻ ≥ 0                         ∀j ∈ J
. . . correctly
The optimal solutions of both linear programs are the same if, for each j, at least one of the values x_j⁺ and x_j⁻ is zero. In that case, x_j = x_j⁺ when x_j ≥ 0, and x_j = −x_j⁻ when x_j ≤ 0. Assume for a moment that the optimal values of x_j⁺ and x_j⁻ are both positive for a particular j, and let δ = min{x_j⁺, x_j⁻}. Subtracting δ > 0 from both x_j⁺ and x_j⁻ leaves the value of x_j = x_j⁺ − x_j⁻ unchanged, but reduces the value of |x_j| = x_j⁺ + x_j⁻ by 2δ. This contradicts the optimality assumption, because the objective function value can be reduced by 2δc_j.
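The splitting trick above can be sketched with a small numerical example. This is an illustrative addition (not from the original text), assuming the SciPy library's linprog solver is available; the problem data are invented: minimize |x1| + |x2| subject to x1 + x2 ≥ 1 with both variables free.

```python
from scipy.optimize import linprog

# Minimize |x1| + |x2| subject to x1 + x2 >= 1, with x1, x2 free.
# Split each x_j into x_j = xp_j - xm_j with xp_j, xm_j >= 0, and
# minimize xp_j + xm_j in place of |x_j|.
# Variable order: [xp1, xm1, xp2, xm2]
c = [1, 1, 1, 1]                      # cost of each xp_j + xm_j pair
# x1 + x2 >= 1  becomes  -(xp1 - xm1) - (xp2 - xm2) <= -1
A_ub = [[-1, 1, -1, 1]]
b_ub = [-1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
xp1, xm1, xp2, xm2 = res.x
x1, x2 = xp1 - xm1, xp2 - xm2
print(round(res.fun, 6), round(x1 + x2, 6))   # 1.0 1.0
```

Because the objective penalizes x_j⁺ + x_j⁻, at most one variable of each pair is nonzero at the optimum, exactly as argued above.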
Application: curve fitting
Sometimes x_j represents a deviation between the left- and the right-hand side of a constraint, such as in regression. Regression is a well-known statistical method of fitting a curve through observed data. One speaks of linear regression when a straight line is fitted.
Example
Consider fitting a straight line through the points (v_j, w_j) in Figure 6.1. The coefficients a and b of the straight line w = av + b are to be determined. The coefficient a is the slope of the line, and b is the intercept with the w-axis. In general, these coefficients can be determined using a model of the following form:

Minimize:    f(z)

Subject to:  w_j = a v_j + b − z_j    ∀j ∈ J
Figure 6.1: Linear regression (a straight line w = av + b with slope a and intercept (0, b))
In this model z_j denotes the difference between the value of a v_j + b proposed by the linear expression and the observed value, w_j. In other words, z_j is the error or deviation in the w direction. Note that in this case a, b, and z_j are the decision variables, whereas v_j and w_j are data. A function f(z) of the error variables must be minimized. There are different options for the objective function f(z).
Different objectives in curve fitting
Least-squares estimation is an often used technique that fits a line such that the sum of the squared errors is minimized. The formula for the objective function is:

    f(z) = ∑_{j∈J} z_j²

It is apparent that quadratic programming must be used for least-squares estimation since the objective is quadratic.
Least absolute deviations estimation is an alternative technique that minimizes the sum of the absolute errors. The objective function takes the form:

    f(z) = ∑_{j∈J} |z_j|
When the data contains a few extreme observations w_j, this objective is appropriate, because it is less influenced by extreme outliers than is least-squares estimation.
Least maximum deviation estimation is a third technique that minimizes the maximum error. This has an objective of the form:

    f(z) = max_{j∈J} |z_j|
This form can also be translated into a linear programming model, as explained in the next section.
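A least absolute deviations fit can be written directly in the form above. The following sketch is an illustrative addition (not from the original text), assuming SciPy's linprog is available; the three data points are invented and lie exactly on w = 2v + 1, so an exact fit with zero total deviation should be recovered.

```python
from scipy.optimize import linprog

# Fit w = a*v + b by least absolute deviations:
#   minimize sum_j |z_j|  with  w_j = a*v_j + b - z_j.
# Split z_j = zp_j - zm_j; variables: [a, b, zp1, zm1, zp2, zm2, zp3, zm3].
points = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]    # lie on w = 2v + 1
n = len(points)
c = [0, 0] + [1] * (2 * n)                       # cost: sum of zp_j + zm_j
A_eq, b_eq = [], []
for j, (v, w) in enumerate(points):
    # a*v + b - zp_j + zm_j = w
    row = [v, 1.0] + [0.0] * (2 * n)
    row[2 + 2 * j] = -1.0        # -zp_j
    row[3 + 2 * j] = 1.0         # +zm_j
    A_eq.append(row)
    b_eq.append(w)
bounds = [(None, None), (None, None)] + [(0, None)] * (2 * n)  # a, b free
res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
a, b = res.x[0], res.x[1]
print(round(a, 6), round(b, 6), round(res.fun, 6))   # 2.0 1.0 0.0
```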
6.2 A minimax objective

The model
Consider the model

Minimize:    max_{k∈K} ∑_{j∈J} c_{kj} x_j

Subject to:  ∑_{j∈J} a_{ij} x_j ⋛ b_i    ∀i ∈ I
             x_j ≥ 0                      ∀j ∈ J
Such an objective, which requires a maximum to be minimized, is known as a minimax objective. For example, when K = {1, 2, 3} and J = {1, 2}, then the objective is:

Minimize:    max{c₁₁x₁ + c₁₂x₂, c₂₁x₁ + c₂₂x₂, c₃₁x₁ + c₃₂x₂}
An example of such a problem is in least maximum deviation regression, explained in the previous section.
Transforming a minimax objective
The minimax objective can be transformed by including an additional decision variable z, which represents the maximum costs:

    z = max_{k∈K} ∑_{j∈J} c_{kj} x_j

In order to establish this relationship, the following extra constraints must be imposed:

    ∑_{j∈J} c_{kj} x_j ≤ z    ∀k ∈ K

Now when z is minimized, these constraints ensure that z will be greater than, or equal to, ∑_{j∈J} c_{kj} x_j for all k. At the same time, the optimal value of z will be no greater than the maximum of all ∑_{j∈J} c_{kj} x_j because z has been minimized. Therefore the optimal value of z will be both as small as possible and exactly equal to the maximum cost over K.
The equivalent linear program
Minimize:    z

Subject to:  ∑_{j∈J} a_{ij} x_j ⋛ b_i    ∀i ∈ I
             ∑_{j∈J} c_{kj} x_j ≤ z      ∀k ∈ K
             x_j ≥ 0                      ∀j ∈ J
The problem of maximizing a minimum (a maximin objective) can be transformed in a similar fashion.
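The transformation can be sketched numerically. This is an illustrative addition (not from the original text), assuming SciPy's linprog; the two cost rows and the single constraint are invented data.

```python
from scipy.optimize import linprog

# Minimize  max(2*x1 + x2, x1 + 3*x2)  subject to  x1 + x2 >= 1, x >= 0,
# via an auxiliary variable z and the constraints  c_k . x - z <= 0.
# Variables: [x1, x2, z]; objective: minimize z.
c = [0, 0, 1]
A_ub = [
    [ 2,  1, -1],    # 2*x1 +   x2 <= z
    [ 1,  3, -1],    #   x1 + 3*x2 <= z
    [-1, -1,  0],    # x1 + x2 >= 1, written as -(x1 + x2) <= -1
]
b_ub = [0, 0, -1]
bounds = [(0, None), (0, None), (None, None)]   # z is free
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(round(res.fun, 6))   # 1.666667, attained at x1 = 2/3, x2 = 1/3
```

At the optimum both cost expressions equal z, so z is exactly the minimized maximum.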
6.3 A fractional objective

The model
Consider the following model:

Minimize:    (∑_{j∈J} c_j x_j + α) / (∑_{j∈J} d_j x_j + β)

Subject to:  ∑_{j∈J} a_{ij} x_j ⋛ b_i    ∀i ∈ I
             x_j ≥ 0                      ∀j ∈ J
In this problem the objective is the ratio of two linear terms. It is assumed that the denominator (the expression ∑_{j∈J} d_j x_j + β) is either positive or negative over the entire feasible set of x_j. The constraints are linear, so that a linear program will be obtained if the objective can be transformed to a linear function. Such problems typically arise in financial planning models. Possible objectives include the rate of return, turnover ratios, accounting ratios and productivity ratios.
Transforming a fractional objective
The following method for transforming the above model into a regular linear programming model is from Charnes and Cooper ([Ch62]). The main trick is to introduce variables y_j and t which satisfy y_j = t x_j. In the explanation below, it is assumed that the value of the denominator is positive. If it is negative, the directions in the inequalities must be reversed.

1. Rewrite the objective function in terms of t, where

       t = 1/(∑_{j∈J} d_j x_j + β)

   and add this equality and the constraint t > 0 to the model. This gives:

   Minimize:    ∑_{j∈J} c_j x_j t + α t

   Subject to:  ∑_{j∈J} a_{ij} x_j ⋛ b_i    ∀i ∈ I
                ∑_{j∈J} d_j x_j t + β t = 1
                t > 0
                x_j ≥ 0                      ∀j ∈ J
2. Multiply both sides of the original constraints by t (t > 0), and rewrite the model in terms of y_j and t, where y_j = x_j t. This yields the model:

   Minimize:    ∑_{j∈J} c_j y_j + α t

   Subject to:  ∑_{j∈J} a_{ij} y_j ⋛ b_i t    ∀i ∈ I
                ∑_{j∈J} d_j y_j + β t = 1
                t > 0
                y_j ≥ 0                        ∀j ∈ J

3. Finally, temporarily allow t ≥ 0 instead of t > 0 in order to get a linear programming model. This linear programming model is equivalent to the fractional objective model stated above, provided t > 0 at the optimal solution. The values of the variables x_j in the optimal solution of the fractional objective model are obtained by dividing the optimal y_j by the optimal t.
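The three steps can be sketched on a tiny invented instance. This is an illustrative addition (not from the original text), assuming SciPy's linprog is available.

```python
from scipy.optimize import linprog

# Fractional program: minimize (x1 + 2)/(x2 + 1) s.t. x2 <= 3, x >= 0
# (so c = (1, 0), alpha = 2, d = (0, 1), beta = 1; denominator > 0).
# The Charnes-Cooper substitution y_j = t*x_j, t = 1/(x2 + 1) gives:
#   minimize  y1 + 2t
#   s.t.      y2 - 3t <= 0     (x2 <= 3, multiplied by t)
#             y2 + t   = 1     (normalizing the denominator)
#             y1, y2, t >= 0
# Variables: [y1, y2, t]
res = linprog([1, 0, 2],
              A_ub=[[0, 1, -3]], b_ub=[0],
              A_eq=[[0, 1, 1]], b_eq=[1],
              bounds=[(0, None)] * 3)
y1, y2, t = res.x
x1, x2 = y1 / t, y2 / t           # recover the original variables
print(round(res.fun, 6), round(x1, 6), round(x2, 6))   # 0.5 0.0 3.0
```

The optimal ratio 0.5 is indeed attained by the original model at x1 = 0, x2 = 3, and t > 0 holds at the optimum as required.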
6.4 A range constraint

The model
Consider the following model:

Minimize:    ∑_{j∈J} c_j x_j

Subject to:  d_i ≤ ∑_{j∈J} a_{ij} x_j ≤ e_i    ∀i ∈ I
             x_j ≥ 0                            ∀j ∈ J

When one of the constraints has both an upper and lower bound, it is called a range constraint. Such a constraint occurs, for instance, when a minimum amount of a nutrient is required in a blend and, at the same time, there is a limited amount available.
Handling a range constraint
The most obvious way to model such a range constraint is to replace it by two constraints:

    ∑_{j∈J} a_{ij} x_j ≥ d_i  and  ∑_{j∈J} a_{ij} x_j ≤ e_i    ∀i ∈ I

However, as each constraint is now stated twice, both must be modified when changes occur. A more elegant way is to introduce extra variables. By introducing new variables u_i, one can rewrite the constraints as follows:

    u_i + ∑_{j∈J} a_{ij} x_j = e_i    ∀i ∈ I
The following bound is then imposed on u_i:

    0 ≤ u_i ≤ e_i − d_i    ∀i ∈ I

It is clear that u_i = 0 results in

    ∑_{j∈J} a_{ij} x_j = e_i

while u_i = e_i − d_i results in

    ∑_{j∈J} a_{ij} x_j = d_i
The equivalent linear program
A summary of the formulation is:

Minimize:    ∑_{j∈J} c_j x_j

Subject to:  u_i + ∑_{j∈J} a_{ij} x_j = e_i    ∀i ∈ I
             x_j ≥ 0                            ∀j ∈ J
             0 ≤ u_i ≤ e_i − d_i                ∀i ∈ I
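The slack-variable formulation can be sketched on an invented instance. This is an illustrative addition (not from the original text), assuming SciPy's linprog is available.

```python
from scipy.optimize import linprog

# Minimize x1 + x2 subject to the range constraint 2 <= x1 + 2*x2 <= 5,
# modeled as  u + x1 + 2*x2 = 5  with  0 <= u <= 5 - 2 = 3.
# Variables: [x1, x2, u]
res = linprog([1, 1, 0],
              A_eq=[[1, 2, 1]], b_eq=[5],
              bounds=[(0, None), (0, None), (0, 3)])
x1, x2, u = res.x
print(round(res.fun, 6), round(x1 + 2 * x2, 6))   # 1.0 2.0
```

At the optimum u sits at its upper bound, so the range constraint is active at its lower limit d = 2, as the discussion above predicts.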
6.5 A constraint with unknown-but-bounded coefficients

This section
This section considers the situation in which the coefficients of a linear inequality constraint are unknown-but-bounded. Such an inequality in terms of uncertainty intervals is not a deterministic linear programming constraint. Any particular selection of values for these uncertain coefficients results in an unreliable formulation. In this section it will be shown how to transform the original nondeterministic inequality into a set of deterministic linear programming constraints.
Unknown-but-bounded coefficients
Consider the constraint with unknown-but-bounded coefficients a_j

    ∑_{j∈J} a_j x_j ≤ b

where a_j assumes an unknown value in the interval [L_j, U_j], b is the fixed right-hand side, and x_j refers to the solution variables to be determined. Without loss of generality, the corresponding bounded uncertainty intervals can be written as [ā_j − δ_j, ā_j + δ_j], where ā_j is the midpoint of [L_j, U_j].
Midpoints can be unreliable
Replacing the unknown coefficients by their midpoint results in a deterministic linear programming constraint that is not necessarily a reliable representation of the original nondeterministic inequality. Consider the simple linear program

Maximize:    x
Subject to:  ax ≤ 8

with the uncertainty interval a ∈ [1, 3]. Using the midpoint ā = 2 gives the optimal solution x = 4. However, if the true value of a had been 3 instead of the midpoint value 2, then for x = 4 the constraint would have been violated by 50%.
Worst-case analysis
Consider a set of arbitrary but fixed x_j values. The requirement that the constraint with unknown-but-bounded coefficients must hold for the unknown values of a_j is certainly satisfied when the constraint holds for all possible values of a_j in the interval [ā_j − δ_j, ā_j + δ_j]. In that case it suffices to consider only those values of a_j for which the term a_j x_j attains its maximum value. Note that this situation occurs when a_j is at one of its bounds. The sign of x_j determines which bound needs to be selected:

    a_j x_j ≤ ā_j x_j + δ_j x_j    for x_j ≥ 0
    a_j x_j ≤ ā_j x_j − δ_j x_j    for x_j ≤ 0
Note that both inequalities can be combined into a single inequality in terms of |x_j|:

    a_j x_j ≤ ā_j x_j + δ_j |x_j|    ∀x_j
An absolute value formulation
As a result of the above worst-case analysis, solutions to the previous formulation of the original constraint with unknown-but-bounded coefficients a_j can now be guaranteed by writing the following inequality without reference to a_j:

    ∑_{j∈J} ā_j x_j + ∑_{j∈J} δ_j |x_j| ≤ b
A tolerance . . .
In the above absolute value formulation it is usually too conservative to require that the original deterministic value of b cannot be loosened. Typically, a tolerance ε > 0 is introduced to allow solutions x_j to violate the original right-hand side b by an amount of at most ε max(1, |b|).
. . . relaxes the right-hand side
The term max(1, |b|) guarantees a positive increment of at least ε, even in case the right-hand side b is equal to zero. This modified right-hand side leads to the following ε-tolerance formulation, where a solution x_j is feasible whenever it satisfies the following inequality:

    ∑_{j∈J} ā_j x_j + ∑_{j∈J} δ_j |x_j| ≤ b + ε max(1, |b|)
The final formulation
This ε-tolerance formulation can be rewritten as a deterministic linear programming constraint by replacing the |x_j| terms with nonnegative variables y_j, and requiring that −y_j ≤ x_j ≤ y_j. It is straightforward to verify that these last two inequalities imply that y_j ≥ |x_j|. These two terms are likely to be equal when the underlying inequality becomes binding for optimal x_j values in a linear program. The final result is the following set of deterministic linear programming constraints, which captures the uncertainty reflected in the original constraint with unknown-but-bounded coefficients as presented at the beginning of this section:

    ∑_{j∈J} ā_j x_j + ∑_{j∈J} δ_j y_j ≤ b + ε max(1, |b|)
    −y_j ≤ x_j ≤ y_j
    y_j ≥ 0
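The robust counterpart can be sketched on the section's own example (a ∈ [1, 3], b = 8), taking tolerance ε = 0. This is an illustrative addition (not from the original text), assuming SciPy's linprog is available.

```python
from scipy.optimize import linprog

# Robust version of the section's example:  maximize x  s.t.  a*x <= 8
# with a in [1, 3], so abar = 2 and delta = 1.  With eps = 0 the
# deterministic constraints are:
#   2*x + 1*y <= 8,   -y <= x <= y,   y >= 0
# Variables: [x, y]; maximize x is written as minimize -x.
res = linprog([-1, 0],
              A_ub=[[2, 1],      # abar*x + delta*y <= b
                    [1, -1],     #  x - y <= 0   (x <= y)
                    [-1, -1]],   # -x - y <= 0   (-y <= x)
              b_ub=[8, 0, 0],
              bounds=[(None, None), (0, None)])
x, y = res.x
print(round(x, 6))   # 2.666667
```

The robust optimum x = 8/3 remains feasible even when a takes its worst-case value 3 (since 3 · 8/3 = 8), unlike the midpoint solution x = 4 discussed earlier.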
6.6 A probabilistic constraint

This section
This section considers the situation that occurs when the right-hand side of a linear constraint is a random variable. As will be shown, such a constraint can be rewritten as a purely deterministic constraint. Results pertaining to probabilistic constraints (also referred to as chance-constraints) were first published by Charnes and Cooper ([Ch59]).
Stochastic right-hand side
Consider the following linear constraint

    ∑_{j∈J} a_j x_j ≤ B

where J = {1, 2, . . . , n} and B is a random variable. A solution x_j, j ∈ J, is feasible when the constraint is satisfied for all possible values of B.
Acceptable values only
For open-ended distributions the right-hand side B can take on any value between −∞ and +∞, which means that there cannot be a feasible solution. If the distribution is not open-ended, suppose for instance that B_min ≤ B ≤ B_max, then the substitution of B_min for B results in a deterministic model. In most practical applications, it does not make sense for the above constraint to hold for all values of B.
A probabilistic constraint
Specifying that the constraint ∑_{j∈J} a_j x_j ≤ B must hold for all values of B is equivalent to stating that this constraint must hold with probability 1. In practical applications it is natural to allow for a small margin of failure. Such failure can be reflected by replacing the above constraint by an inequality of the form

    Pr[ ∑_{j∈J} a_j x_j ≤ B ] ≥ 1 − α

which is called a linear probabilistic constraint or a linear chance-constraint. Here Pr denotes the phrase "Probability of", and α is a specified constant fraction (α ∈ [0, 1]), typically denoting the maximum error that is allowed.
Deterministic equivalent
Consider the density function f_B and a particular value of α as displayed in Figure 6.2.

Figure 6.2: A density function f_B (the area α lies to the left of the critical value B̂, and 1 − α to its right)
A solution x_j, j ∈ J, is considered feasible for the above probabilistic constraint if and only if the term ∑_{j∈J} a_j x_j takes a value beneath point B̂. In this case a fraction (1 − α) or more of all values of B will be larger than the value of the term ∑_{j∈J} a_j x_j. For this reason B̂ is called the critical value. The probabilistic constraint of the previous paragraph therefore has the following deterministic equivalent:

    ∑_{j∈J} a_j x_j ≤ B̂
Computation of critical value
The critical value B̂ can be determined by integrating the density function from −∞ until a point where the area underneath the curve becomes equal to α. This point is then the value of B̂. Note that the determination of B̂ as described in this paragraph is equivalent to using the inverse cumulative distribution function of f_B evaluated at α. From probability theory, the cumulative distribution function F_B is defined by F_B(x) = Pr[B ≤ x]. The value of F_B is the corresponding area underneath the curve (probability). Its inverse specifies, for each particular level of probability, the point B̂ for which the integral equals the probability level. The cumulative distribution function F_B and its inverse are illustrated in Figure 6.3.

Figure 6.3: Cumulative distribution function F and its inverse.
Use Aimms-supplied function
As the previous paragraph indicated, the critical B̂ can be determined through the inverse of the cumulative distribution function. Aimms supplies this function for a large number of distributions. For instance, when the underlying distribution is normal with mean 0 and standard deviation 1, then the value of B̂ can be found as follows:

    B̂ = InverseCumulativeDistribution( Normal(0,1), α )
Example
Consider the constraint ∑_j a_j x_j ≤ B with a stochastic right-hand side. Let B = N(0, 1) and α = 0.05. Then the value of B̂ based on the inverse cumulative distribution is −1.645. By requiring that ∑_j a_j x_j ≤ −1.645, you make sure that the solution x_j is feasible for 95% of all instances of the random variable B.
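Outside Aimms, the same critical value can be reproduced with any inverse cumulative distribution function. As an illustrative addition (not from the original text), Python's standard library offers one for the normal distribution, which plays the role of InverseCumulativeDistribution here:

```python
from statistics import NormalDist

# Critical value Bhat for B ~ N(0, 1) and alpha = 0.05, computed with the
# inverse cumulative distribution function (inv_cdf).
alpha = 0.05
b_hat = NormalDist(mu=0, sigma=1).inv_cdf(alpha)
print(round(b_hat, 3))   # -1.645
```

This matches the value −1.645 used in the example above.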
Overview of probabilistic constraints
Table 6.1 presents an overview of the four linear probabilistic constraints with stochastic right-hand sides, together with their deterministic equivalents.
Writing F_B for the cumulative distribution function of B, the four cases are (in the original typeset table, each row is accompanied by a figure whose shaded area is the feasible region of ∑_{j∈J} a_j x_j):

    Pr[ ∑_{j∈J} a_j x_j ≤ B ] ≥ 1 − α   ⇒   ∑_{j∈J} a_j x_j ≤ B̂,  with B̂ = F_B⁻¹(α)
    Pr[ ∑_{j∈J} a_j x_j ≤ B ] ≤ α       ⇒   ∑_{j∈J} a_j x_j ≥ B̂,  with B̂ = F_B⁻¹(1 − α)
    Pr[ ∑_{j∈J} a_j x_j ≥ B ] ≥ 1 − α   ⇒   ∑_{j∈J} a_j x_j ≥ B̂,  with B̂ = F_B⁻¹(1 − α)
    Pr[ ∑_{j∈J} a_j x_j ≥ B ] ≤ α       ⇒   ∑_{j∈J} a_j x_j ≤ B̂,  with B̂ = F_B⁻¹(α)

Table 6.1: Overview of linear probabilistic constraints
6.7 Summary
This chapter presented a number of techniques to transform some special models into conventional linear programming models. It was shown that some curve fitting procedures can be modeled, and solved, as linear programming models by reformulating the objective. A method to reformulate objectives which incorporate absolute values was given. In addition, a trick was shown to make it possible to incorporate a minimax objective in a linear programming model. For the case of a fractional objective with linear constraints, such as those that occur in financial applications, it was shown that these can be transformed into linear programming models. A method was demonstrated to specify a range constraint in a linear programming model. At the end of this chapter, it was shown how to reformulate constraints with a stochastic right-hand side to deterministic linear constraints.
Bibliography

[Ch59] A. Charnes and W.W. Cooper, Chance-constrained programming, Management Science 6 (1959), 73–80.

[Ch62] A. Charnes and W.W. Cooper, Programming with linear fractional functionals, Naval Research Logistics Quarterly 9 (1962), 181–186.

[Wi90] H.P. Williams, Model building in mathematical programming, 3rd ed., John Wiley & Sons, Chichester, 1990.
