Abstract - In the multiplex method proposed here the solution is found by following
a gradient path through the interior of the feasible region and through subspaces of
reduced dimension corresponding to the bounding hypersurfaces of the feasible
region. The path moves from any initial feasible point through a sequence of linear
steps to a vertex of the polytope defined by the constraints. Although similar, the
current method differs fundamentally from Rosen's [5] gradient projection method,
and the strategy adopted in adding and dropping constraints from the enforcing set
is different from that of the reduced gradient method of Wolfe [6]. Once the path
reaches a vertex the algorithm determines whether or not it is optimal by applying a
simple perturbation procedure for which the perturbed points are generated as a
by-product of the computed path to the vertex. A theoretical convergence argument
is put forward and the method has successfully been applied on the computer to a
large number of test problems.
1. INTRODUCTION

2. THE MULTIPLEX ALGORITHM
where

$$h_i = \begin{cases} -(c^t a_i)^{-1} g_i(x') & \text{if } c^t a_i > 0 \\ \infty & \text{otherwise.} \end{cases} \qquad (2)$$

Notice that if $c^t a_i < 0$ then c points away from the constraint boundary.
Thus even in this case we choose to ignore the constraint, and in this way we allow
for maximum freedom in constructing the ascent path.
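A minimal sketch of how the steplength rule (2) can be computed in practice (all names here are illustrative, not taken from the paper's program):

```python
# Hedged sketch of the steplength rule (2): from a feasible point x the
# ascent direction is the gradient c, and for each constraint
# g_i(x) = a_i . x - b_i <= 0 the step to its boundary is
#   h_i = -g_i(x) / (c . a_i)   if c . a_i > 0,  else infinity.
# The step actually taken is h = min_i h_i.
import math

def steplength(x, c, A, b):
    """Return h = min over the h_i, and the index of the limiting constraint."""
    best_h, best_i = math.inf, None
    for i, (a, bi) in enumerate(zip(A, b)):
        ca = sum(cj * aj for cj, aj in zip(c, a))            # c^t a_i
        if ca > 0:                                           # boundary lies ahead
            g = sum(aj * xj for aj, xj in zip(a, x)) - bi    # g_i(x) <= 0
            h = -g / ca
            if h < best_h:
                best_h, best_i = h, i
    return best_h, best_i

# Maximize c^t x over the unit square 0 <= x_1, x_2 <= 1 from the interior
# point (0.25, 0.25) along c = (1, 2): the path stops at the boundary x_2 = 1.
A = [[1, 0], [0, 1], [-1, 0], [0, -1]]
b = [1, 1, 0, 0]
h, i = steplength([0.25, 0.25], [1, 2], A, b)
print(h, i)   # 0.375 1
```

The constraints with $c^t a_i \le 0$ are simply skipped, exactly as the text prescribes.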
If h > 0 and finite there exists at least one active constraint at x*, say
g_k. We now propose that the next step taken by the MP-algorithm be in the affine
2.3 Reduced LP-problem
For greater generality assume that there are p enforcing constraints identified by
the index set $I = \{w_i\}_{i=1}^{p}$. Clearly the system of active constraints
$A_I x = b_I$ may be written as $P x_a + Q x_o = d$, $d \in R^p$, where the
relationship between $x_a$, $x_o$ and the original x is given by

$$x = E \begin{bmatrix} x_a \\ x_o \end{bmatrix}$$

for some appropriate permutation matrix E. We may now solve for the p dependent
variables:

$$x_a = P^{-1} d - P^{-1} Q x_o \qquad (4)$$

Substituting (4) into the objective function yields the reduced problem:
maximize $f = c^{*t} x_o + K$ such that

$$g_j(x_o) = a_j^{*t} x_o - b_j^* \le 0, \quad j \notin I \qquad (5)$$

where

$$c^{*t} = c_o^t - c_a^t P^{-1} Q, \qquad K = c_a^t P^{-1} d \qquad (6)$$

$$a_j^{*t} = a_{jo}^t - a_{ja}^t P^{-1} Q \quad \text{and} \quad b_j^* = a_{ja}^t P^{-1} d - b_j,$$
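As a check on the algebra of (4)-(6), a sketch for the simplest case p = 1 (one enforcing constraint; the numerical data are illustrative): the full-space direction recovered from the reduced gradient c* must lie in the constraint plane.

```python
# Sketch verifying the reduced-problem algebra (4)-(6) for p = 1 (one
# enforcing constraint a^t x = b). Partition x = (x_a, x_o) with
# P = a_1 (scalar, assumed nonzero) and Q = (a_2, ..., a_n), so by (4)
#   x_a = P^{-1} b - P^{-1} Q x_o,
# and by (6) the reduced gradient is c*^t = c_o^t - c_a P^{-1} Q.
# Moving by delta_xo = c* in the reduced variables, and recovering
# delta_xa from (4), must leave a^t x = b satisfied.
a = [2.0, 1.0, -1.0]          # active constraint a^t x = b
c = [1.0, 3.0, 2.0]           # objective gradient
P, Q = a[0], a[1:]            # P = a_1, Q = (a_2, a_3)
c_a, c_o = c[0], c[1:]

# reduced gradient c* = c_o - c_a * P^{-1} Q              (eq. 6)
c_star = [co - c_a * q / P for co, q in zip(c_o, Q)]

# full-space direction: delta_xo = c*, delta_xa = -P^{-1} Q delta_xo   (eq. 4)
delta_xo = c_star
delta_xa = -sum(q * d for q, d in zip(Q, delta_xo)) / P
delta_x = [delta_xa] + delta_xo

# the ascent direction lies in the constraint plane: a^t delta_x = 0
print(sum(ai * di for ai, di in zip(a, delta_x)))   # 0.0 up to rounding
```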
and x_a in the original objective function and constraints. We note that the path
terminates at a vertex that in general is not necessarily optimal. We now propose a
strategy, the product of experimentation, that increases the probability of non-zero
steps along higher dimensional hyperplanes and hypersurfaces, thus delaying early
termination at a vertex. We also add a perturbation procedure that assures eventual
termination at an optimal vertex.
The basic strategy is as follows. If the computed stepsize is zero, the new
enforcing constraint is added for the next iteration as before. If, however, the
stepsize is greater than zero, then to begin with only one of the new constraints is
enforced in the next iteration. This allows for the attractive possibility of a
non-zero step being taken in the (n-1)-dimensional hyperplane corresponding to the
new constraint. If however the stepsize (with only one constraint enforced) is zero
then we allow for two situations that may arise. In the first case, if all the
previous enforcing constraints give $h_i$'s that are zero we add them all to the
current new constraint in the next iteration. Otherwise, if one or more of the
previous constraints yield a negative $h_i$ (i.e. $a_i^{*t} c^* < 0$) then we add
only one of the previous constraints with an $h_i = 0$ to the current constraint for
the next iteration. This strategy is formally set out in Step 6 of Section 2.4.
Once a vertex is finally reached the new algorithm tests whether or not it is
optimal by applying a simple perturbation procedure (Step 9 of 2.4) for which the
perturbed points are generated as by-products of the computed path to the vertex, so
that the solution of a new system of equations is not required (see Section 3). The
convergence of the algorithm is discussed in Section 4. A formal statement of the
algorithm now follows.
flag v ← 0.
Step 6. Cases I and II: If h > 0 and v = 0 or 1, then set v ← 1, store
the "old" set of enforcing constraints W_old ← W and set p_old ← p.
Use only the new enforcing constraint in the next iteration, i.e. set
p ← 1, w_1 ← s, and go to Step 2.
Case III: If h = 0 and v = 0, go to Step 7.
Case IV: If h = 0 and v = 1 (i.e. p = 1) then (a) if h_w = 0 for all
w ∈ W_old, set p ← p_old + 1, W ← W_old ∪ W, v ← 0 and go to Step 2;
(b) otherwise set v ← 0 and go to Step 7.

Step 7. Set p ← p + 1 and w_p ← s.

Step 8. If p = n then x^{i+1} is a vertex; go to Step 9. Otherwise go to Step 2.
Step 9. (Perturbation procedure) Compute the perturbation points $y^j$,
$j = 1,2,\ldots,n$, each lying on a separate edge emanating from the vertex
$x^{i+1}$ and given by the solution of the systems

$$P x_a = d - e^j \qquad (7)$$

where $e^j = [0,0,\ldots,0,\varepsilon,0,\ldots,0]^t$, for some suitably small
$\varepsilon > 0$ appearing in the j-th position. If $f(y^j) \le f(x^{i+1})$ for
all j then $x^{i+1}$ is optimal and STOP; otherwise choose the first j for which
$f(y^j) > f(x^{i+1})$, set the new starting point $x^1 \leftarrow y^j$ and go to
Step 1.
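The case analysis of Step 6 can be sketched as a small transition function on the flags v and p (a sketch only; the zero-step tests are assumed to be supplied by the caller):

```python
def step6_transition(h_zero, v, p, p_old, all_old_h_zero):
    """Sketch of the Step 6 case logic: returns the (v, p) pair for the
    next iteration. h_zero: whether the computed stepsize was zero;
    all_old_h_zero: whether every previously enforcing constraint gives
    h_w = 0 (only consulted in Case IV)."""
    if not h_zero:                 # Cases I and II: a nonzero step was taken;
        return 1, 1                # enforce only the new constraint next
    if v == 0:                     # Case III: zero step, grow the enforcing set
        return 0, p + 1
    # Case IV: zero step with only the new constraint enforced (p = 1)
    if all_old_h_zero:             # (a) restore the old set plus the new one
        return 0, p_old + 1
    return 0, p + 1                # (b) add one old constraint (p becomes 2)

# e.g. a nonzero step (Case I) drops back to a single enforcing constraint:
print(step6_transition(False, 0, 3, 0, False))   # (1, 1)
```

These transitions are exactly the relationships I-IV listed in the convergence proof of Section 4.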
3. IMPLEMENTATION
Step 4 in the MP-algorithm requires the construction of eqs. (3) and the computation
of the solution (4). This is done economically by exploiting the fact that in the
application of the MP-algorithm we either have only one enforcing constraint or one
enforcing constraint is added to a set (the previous set or the "old" set) for which
the solution has already been computed. In the case of only one constraint (p = 1)
the construction is trivial: set $x_a \leftarrow x_i$ where $a_{si} \neq 0$; then
$P = a_{si}$. In the second case, when the p-th constraint (p ≥ 2) is added, we
already have available the solution for the previous p-1 enforcing constraints in
the form

$$I x_a + P^{-1} Q x_o = P^{-1} d.$$

Writing the new constraint as $a_a^t x_a + a_o^t x_o = d_p$, the solution for the
new $x_a$ is obtained by applying Gauss-Jordan elimination to the augmented system

$$\left[\begin{array}{cc|c} I & P^{-1}Q & P^{-1}d \\ a_a^t & a_o^t & d_p \end{array}\right]. \qquad (8)$$
Adding appropriate multiples of the upper rows to the p-th row yields the modified
system:

$$\left[\begin{array}{ccc|c} I & \multicolumn{2}{c|}{B} & \bar d \\ 0 \;\cdots\; 0 & \alpha & \bar a^t & \delta \end{array}\right]$$

where $\alpha$ may be zero. If $\alpha$ is zero we interchange the p-th column with
a column to the right (the k-th) for which $\bar a_k \neq 0$. The diagonal entry is
then normalized by dividing the p-th row by $\alpha$. We now add appropriate
multiples of the p-th row to the upper rows to ensure zero entries in the p-th
column above the diagonal and thus obtain the p×n solution matrix

$$\left[\, I \;\; B' \mid \bar d\,' \,\right]$$

which gives the new dependent vector $x_a$ in terms of the new reduced independent
vector $x_o$.
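The updating step just described may be sketched as follows (a sketch, not the paper's FORTRAN implementation; the tolerance used for the zero-pivot test is an assumption):

```python
def add_constraint(tableau, new_row, p):
    """Sketch of the Section 3 updating step. `tableau` holds the p-1 rows
    of [I | B | d_bar] (each of length n+1, last entry the right-hand side);
    `new_row` is the appended constraint [a^t | d_p]. One Gauss-Jordan sweep
    restores the identity block; returns the new tableau and the column
    order (changed only if a zero pivot forced an interchange)."""
    n = len(new_row) - 1
    row = list(new_row)
    # eliminate the first p-1 entries of the new row using the upper rows
    for i, upper in enumerate(tableau):
        m = row[i]
        row = [rj - m * uj for rj, uj in zip(row, upper)]
    cols = list(range(n))
    # if the pivot alpha is zero, interchange column p-1 with a column to
    # the right whose entry in the new row is nonzero
    if abs(row[p - 1]) < 1e-12:
        k = next(j for j in range(p, n) if abs(row[j]) > 1e-12)
        cols[p - 1], cols[k] = cols[k], cols[p - 1]
        for r in tableau:
            r[p - 1], r[k] = r[k], r[p - 1]
        row[p - 1], row[k] = row[k], row[p - 1]
    # normalize the pivot row, then clear column p-1 above the diagonal
    alpha = row[p - 1]
    row = [rj / alpha for rj in row]
    new_tab = []
    for u in tableau:
        m = u[p - 1]
        new_tab.append([uj - m * rj for uj, rj in zip(u, row)])
    new_tab.append(row)
    return new_tab, cols

# one enforcing constraint x1 + x2 + x3 = 3 gives the row [1, 1, 1, 3];
# adding the constraint x2 = 1 yields x1 = 2 - x3 with x2 = 1:
tab, cols = add_constraint([[1, 1, 1, 3]], [0, 1, 0, 1], p=2)
print(tab)    # [[1.0, 0.0, 1.0, 2.0], [0.0, 1.0, 0.0, 1.0]]
print(cols)   # [0, 1, 2]
```

Because only one new row is processed per iteration, the cost of the update is O(pn) rather than the O(p²n) of a full re-elimination.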
The calculation of the perturbed points required for the perturbation procedure
may be incorporated in the above elimination procedure by noticing that we need only
extend system (8) at each iteration by $[0,0,\ldots,\varepsilon]^t$ on the
right-hand side to obtain the perturbation vectors (see Step 9, Section 2.4), and
thus the perturbed points are computed as a by-product of the computed path to the
vertex. $\varepsilon$ must of course be chosen small enough to ensure that, in the
absence of degeneracy, the perturbed points remain feasible.
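Once the perturbed points $y^j$ are available, the Step 9 test itself reduces to simple objective comparisons; a sketch on a unit-cube example (the stored edge directions stand in for the $y^j$ of (7)):

```python
# Sketch of the Step 9 perturbation test at a vertex: move a small eps along
# each of the n edges leaving the vertex and compare objective values.
# The vertex (1,1,1) of the unit cube with f = x1 + 2x2 + 3x3 serves as the
# example; all edges from it point back into the cube.
def f(x):
    return x[0] + 2 * x[1] + 3 * x[2]

def is_optimal_vertex(vertex, edges, eps=1e-3):
    fv = f(vertex)
    for e in edges:
        y = [vi + eps * ei for vi, ei in zip(vertex, e)]   # perturbed point y^j
        if f(y) > fv:                  # an improving edge exists: not optimal
            return False, y
    return True, None

edges_into_cube = [[-1, 0, 0], [0, -1, 0], [0, 0, -1]]
print(is_optimal_vertex([1, 1, 1], edges_into_cube))       # (True, None)
# a non-optimal vertex: from (0,1,1), moving along x1 increases f
print(is_optimal_vertex([0, 1, 1], [[1, 0, 0], [0, -1, 0], [0, 0, -1]])[0])
```

When an improving $y^j$ is found, it becomes the new starting point for the path, exactly as in Step 9.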
Because in practice the computer calculations are done in finite arithmetic we
introduce in the implementation a tolerance parameter $\delta > 0$ that is used when
testing for zero. Thus, for example, we set h ← 0 if $|h| < \delta$ (or preferably
if $|h|/\|c^*\| < \delta$ for a more scale-free test).
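The scale-free form of the test can be sketched as follows (the default tolerance is an arbitrary choice):

```python
# Sketch of the scale-free zero test: treat the steplength h as zero when
# |h| / ||c*|| < delta rather than |h| < delta, so that the test does not
# depend on the scaling of the objective.
import math

def h_is_zero(h, c_star, delta=1e-10):
    norm = math.sqrt(sum(ci * ci for ci in c_star))
    return abs(h) < delta * norm if norm > 0 else True

print(h_is_zero(1e-14, [1e3, 1e3], delta=1e-10))   # True: tiny relative to ||c*||
print(h_is_zero(1e-4, [1.0, 1.0]))                 # False
```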
In the application of the MP-algorithm a feasible starting point may be obtained, if
necessary, in the usual way of first applying the method to the auxiliary problem.
Lemma 1. If the LP-problem (1) has a bounded solution then $h^{(k)} \to 0$, where
$h^{(k)}$ denotes the steplength taken at the k-th iteration.

Proof. It can easily be shown that the increase in the objective function at the
k-th iteration is $\|c^*_{(k)}\|^2 h^{(k)}$, and that for this algorithm
$h^{(k)} \ge 0$. Moreover, and less trivially, we can show that in the case where
$c^*_{(k)} = 0$ and $h^{(k)}$ is thus not defined, we have termination at an
optimal point. Since only a finite number of distinct nonzero reduced gradients
$c^*_{(k)}$ can arise from the constraints defining the feasible region, there
exists a $\xi > 0$ such that $\|c^*_{(k)}\|^2 \ge \xi$ for all possible k. Now the
boundedness of the solution implies that $\sum_k \|c^*_{(k)}\|^2 h^{(k)}$ is
convergent, and since $\|c^*_{(k)}\|^2 \ge \xi > 0$ for all k it follows that
$h^{(k)} \to 0$. □
Proof. From Step 6 of the MP-algorithm we deduce the following possible relation-
ships between iterations:

I   v = 0 & h > 0 leads to v = 1 & p = 1 in next iteration;
II  v = 1 & h > 0 leads to v = 1 & p = 1 in next iteration;
III v = 0 & h = 0 leads to p ← p+1 & v = 0 in next iteration;
IV  v = 1 & h = 0 leads to either:
    (a) p ← p_old + 1 & v = 0 in next iteration; or
    (b) p ← p+1 = 2 & v = 0 in next iteration.

If the algorithm does not terminate we must have an infinite incidence of cases I,
II or IV(b), which are the only cases in which the number of enforcing constraints
is reduced: p ← 1 in cases I and II and p ← 2 in case IV(b). Since IV(b) can
only follow on cases I and II, non-termination due to the infinite occurrence of case
Figures 1(a) and (b) give a three-dimensional representation of the working of the
MP-algorithm when applied respectively to the simple problem

(a) maximize f = x_1 + 2x_2 + 3x_3 such that 0 ≤ x_i ≤ 1, i = 1,2,3

and the "pathological" Klee-Minty problem

(b) maximize f = 100x_1 + 10x_2 + x_3 such that
    x_1 ≤ 1, 20x_1 + x_2 ≤ 100, 200x_1 + 20x_2 + x_3 ≤ 10000, x_i ≥ 0, i = 1,2,3.
[Figure 1 appears here: MP-step paths in (X_1, X_2, X_3)-space; labelled vertices
include (1;1;1) for problem (a) and (0;100;8000) and (0;0;10000) for problem (b).]
FIGURE 1 Geometrical representation of the MP-steps required for the solution of
problems (a) and (b). For the sake of clarity the figure for (b) is not drawn to
scale.
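The optimal vertices labelled in Figure 1 can be confirmed by direct enumeration and feasibility checks (a sketch; the candidate points are taken from the figure):

```python
from itertools import product

# Brute-force check of the optima in Figure 1. For problem (a) every vertex
# of the unit cube is enumerated; f = x1 + 2x2 + 3x3 is maximal at (1,1,1).
best = max(product([0, 1], repeat=3), key=lambda x: x[0] + 2*x[1] + 3*x[2])
print(best)                          # (1, 1, 1), f = 6

# For the Klee-Minty problem (b) we check that the labelled vertex
# (0, 0, 10000) is feasible and attains f = 10000.
def feasible(x):
    x1, x2, x3 = x
    return (x1 <= 1 and 20*x1 + x2 <= 100 and
            200*x1 + 20*x2 + x3 <= 10000 and min(x) >= 0)

def f_b(x):
    return 100*x[0] + 10*x[1] + x[2]

assert feasible((0, 0, 10000))
print(f_b((0, 0, 10000)))            # 10000
```

The vertex (0;100;8000) labelled in the figure is one of the intermediate vertices visited by the simplex method on this problem, which is what makes the Klee-Minty example "pathological" for vertex-following methods.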
Note that the path moves not only along 1-dimensional edges of the feasible polytope
but also through the internal space and along higher dimensional planes.
The MP-algorithm has been implemented in a FORTRAN program that also allows for
the solution of the auxiliary problem if necessary. The program was successfully
used in the solution of many problems with n ≤ 20 and m ≤ 60. For small values
of n (≤ 10) the performance of the MP-algorithm appeared reasonably competitive
with that of the IMSL subroutine ZX3LP [1], but for larger problems the comparison
was poor and the program, as it stands, is probably not suitable for large problems.
One must however bear in mind that the prototype program developed here was not
designed with optimum efficiency in mind, but only to demonstrate that the algorithm
may successfully be applied to LP-problems. The program, for example, uses a
standard linear equations solver of the IMSL subroutine library [1]. In
large LP-problems which occur in practice the coefficient matrix is usually very
sparse, and one may expect that if we adapt the program to exploit this sparseness
in the Gauss-Jordan elimination procedure the performance of the MP-algorithm
would be greatly improved. It also remains, of course, to take advantage of the
considerable parallelism inherent in the new algorithm.
It is significant that in many cases the first termination occurs at an optimal
or near optimal vertex. This indicates that the current algorithm, in agreement
with the work of Mitts et al [4], may be used in a hybrid scheme that uses both the
new method and the simplex method. Not only can the MP-algorithm provide a better
starting point for the simplex method, but it can also be used to provide a so-called
"purification" step by which a non-basic internal solution may be improved to an
extreme point basic feasible solution. As reported by Mitts et al [4] the execution
of this purification step is of importance where iterative internal methods, such as
Karmarkar's method, are used and where in about 25% - 30% of the total number of
iterations one reaches about 80% of the optimum value of the objective function.
The application of a hybrid multiplex-simplex procedure at this stage may possibly
be the most appropriate termination procedure for such methods.
6. REFERENCES