
A Note on the Relationship of Primal and Dual Simplex

SFU-CMPT TR 1998-21

Lou Hafer

December 1998

Abstract. This note looks at the correspondence between the primal and dual simplex algorithms, and their respective variables and bases. The first section examines the correspondence for a dual problem formed from a primal problem with explicit bound constraints and slack variables added before creating the dual. The second section examines the mechanics of the case which occurs in practice, a dual simplex algorithm running off the data structures used by the revised primal simplex with implicit upper and lower bounds on the variables. The third section revisits the expanded primal and dual problems, presenting an alternative view which offers different insights. In essence, a single coefficient, nominally associated with a dual surplus variable, is overloaded and interpreted as the coefficient for one of the two dual variables which would normally be associated with explicit upper and lower bound constraints on a primal variable. Since only one of these dual variables can be dual basic at any given moment, there is no conflict over interpretation.

The author was motivated to write this note as a consequence of having to answer these questions while writing a computer code for primal and dual simplex algorithms. Those of you who also count yourselves among the ranks of the mathematically challenged may find it useful.

1 Fully Expanded Primal and Dual


Consider the primal problem
$$\max\; cx \qquad Ax \le b \qquad l \le x \le u \tag{1}$$
We can rewrite (1) by converting the upper and lower bounds to explicit constraints and adding slack variables. Each upper bound results in a constraint of the form $x_j \le u_j$; each lower bound in a constraint of the form $-x_j \le -l_j$. The expanded primal problem becomes
$$\max\; \begin{bmatrix} c & 0 \end{bmatrix}\begin{bmatrix} x \\ s \end{bmatrix} \qquad
\begin{bmatrix} A & I & 0 & 0 \\ -I & 0 & I & 0 \\ I & 0 & 0 & I \end{bmatrix}
\begin{bmatrix} x \\ s \end{bmatrix} = \begin{bmatrix} b \\ -l \\ u \end{bmatrix} \qquad s \ge 0 \tag{2}$$
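For concreteness, here is (2) written out for a tiny made-up instance (one variable, one architectural constraint), $\max\; 3x$ subject to $2x \le 8$ and $1 \le x \le 5$. The instance is an illustrative assumption, not part of the original note:
$$\max\; 3x \qquad
\begin{bmatrix} 2 & 1 & 0 & 0 \\ -1 & 0 & 1 & 0 \\ 1 & 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} x \\ s_1 \\ s_2 \\ s_3 \end{bmatrix} =
\begin{bmatrix} 8 \\ -1 \\ 5 \end{bmatrix} \qquad s \ge 0.$$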

Note that the variables $x_j$ become free variables and hence will always be part of the primal basis. Pivoting for this problem will involve moving slack variables between the basic and nonbasic partitions. Now consider the dual problem for (2),
$$\min\; y\begin{bmatrix} b \\ -l \\ u \end{bmatrix} \qquad
y\begin{bmatrix} A \\ -I \\ I \end{bmatrix} = c \qquad yI \ge 0 \qquad y \text{ free} \tag{3}$$

The constraints $y\begin{bmatrix} A \\ -I \\ I \end{bmatrix} = c$ are equalities to match the free variables $x$, the constraints $yI \ge 0$ are inequalities to match the nonnegative variables $s$, and the $y$ are free variables to match the equalities of (2). ($yI \ge 0$ are explicit constraints following from the primal.) Expanding (3) by adding surplus variables to convert the inequalities to equalities, we have
$$\min\; \begin{bmatrix} y & \sigma \end{bmatrix}\begin{bmatrix} b' \\ 0 \end{bmatrix} \qquad
\begin{bmatrix} y & \sigma \end{bmatrix}\begin{bmatrix} A' & I \\ 0 & -I \end{bmatrix} = c' \qquad \sigma \ge 0 \tag{4}$$
where $A' = \begin{bmatrix} A & -I & I \end{bmatrix}^T$, $c' = \begin{bmatrix} c & 0 \end{bmatrix}$, and $b' = \begin{bmatrix} b & -l & u \end{bmatrix}$. Note that there are only $m + 2n$ dual surplus variables.

With these definitions, we can consider the relationship of the primal and dual basis partitions. For (2), partition $\begin{bmatrix} A' & I \end{bmatrix}$ into basic and nonbasic columns $\begin{bmatrix} B & N \end{bmatrix}$. Designate the coefficient matrix of (4) as $\bar{A}^T$ and partition it into basic and nonbasic rows $\begin{bmatrix} \bar{B} & \bar{N} \end{bmatrix}^T$.

Let's consider first the structure of the primal basis partition. The size of $B$ is set by the number of rows in $\begin{bmatrix} A' & I \end{bmatrix}$: $m$ rows for the architectural constraints, and another $2n$ rows for the constraints introduced for the upper and lower bounds on $x$, for a total of $m + 2n$ rows. To define an extreme point, $n$ tight constraints are required, and the $n$ slack variables associated with these constraints will be nonbasic. Order the rows of $\begin{bmatrix} A' & I \end{bmatrix}$ so that tight constraints occupy the top rows and loose constraints the bottom rows. Order the columns from left to right as architectural variables, basic slacks, and nonbasic slacks. The primal basis partition then becomes
$$\begin{bmatrix} B & N \end{bmatrix} = \begin{bmatrix} A^t & 0 & I^t \\ A^l & I^l & 0 \end{bmatrix} \tag{5}$$

$A^t$ is the $n \times n$ matrix comprised of the coefficients of the tight constraints and $I^t$ is the $n \times n$ identity matrix formed by the coefficients of the associated slacks. Similarly, $A^l$ ($(m+n) \times n$) and $I^l$ ($(m+n) \times (m+n)$) are formed by the loose constraints. Denote by $c^B$ the vector $\begin{bmatrix} c & 0 \end{bmatrix}$ and note that $c^N \equiv 0$.

Now consider the structure of the dual basis partition. The size of $\bar{B}$ is set by the number of columns in $\begin{bmatrix} A' & I \end{bmatrix}$: $n$ columns for the architectural variables, plus $m + 2n$ columns for the slack variables associated with the constraints, for a total of $m + 3n$ columns. Since all dual variables $y$ are free, all of the $m + 2n$ rows $\begin{bmatrix} A' & I \end{bmatrix}$ of the dual coefficient matrix will appear in the basic partition. The remaining $n$ rows must be chosen from the dual surplus variables. The appropriate set can be reasoned as follows: $n$ tight constraints are required to define an extreme point, and the dual variables associated with these constraints can all be (potentially) nonzero. Looking at (4), notice that for any constraint $i$, the coefficients associated with the primal slack $s_i$ and the dual surplus $\sigma_i$ combine to produce the dual constraint $y_i - \sigma_i = 0$. In order to have $y_i > 0$, we must also have $\sigma_i > 0$, so the appropriate rows to finish the dual basis are the dual surpluses associated with the primal slacks for the tight constraints in the primal. The remaining $m + n$ dual surpluses (those associated with the loose constraints in the primal) will make up the nonbasic partition. Ordering the variables as $\begin{bmatrix} y & \sigma^{\bar{B}} & \sigma^{\bar{N}} \end{bmatrix}$, we have
$$\begin{bmatrix} \bar{B} \\ \bar{N} \end{bmatrix} =
\begin{bmatrix} A^t & 0 & I^t \\ A^l & I^l & 0 \\ 0 & 0 & -I^t \\ 0 & -I^l & 0 \end{bmatrix} \tag{6}$$
where the first three block rows form $\bar{B}$ and the final block row forms $\bar{N}$.

With these partitions, we can proceed to verify the equivalences between the primal and dual simplex, and see how the necessary coefficients for the dual simplex tableau can be read from the primal tableau.

In the dual simplex, we have $\begin{bmatrix} y & \sigma^{\bar{B}} \end{bmatrix} = \begin{bmatrix} c^B & 0 \end{bmatrix}\bar{B}^{-1}$. The inverse of $\bar{B}$ can be calculated from the $\begin{bmatrix} B & N \end{bmatrix}$ partition as
$$\bar{B}^{-1} = \begin{bmatrix} B^{-1} & B^{-1}N \\ 0 & -I^t \end{bmatrix},$$
which gives
$$\begin{bmatrix} y & \sigma^{\bar{B}} \end{bmatrix} = \begin{bmatrix} c^B & 0 \end{bmatrix}\bar{B}^{-1}
= \begin{bmatrix} c^B B^{-1} & c^B B^{-1}N \end{bmatrix}. \tag{7}$$
We can see that the dual variables are indeed properly calculated if we look a bit more closely. We have
$$\begin{bmatrix} y^t & y^l & \sigma^{\bar{B}} \end{bmatrix} =
\begin{bmatrix} c & 0 & 0 \end{bmatrix}
\begin{bmatrix} (A^t)^{-1} & 0 & (A^t)^{-1} \\ -A^l(A^t)^{-1} & I^l & -A^l(A^t)^{-1} \\ 0 & 0 & -I^t \end{bmatrix}
= \begin{bmatrix} c(A^t)^{-1} & 0 & c(A^t)^{-1} \end{bmatrix}, \tag{8}$$

illustrating that only the dual variables associated with tight constraints take on nonzero values (complementary slackness). Equation (8) also shows the expected correspondence that the primal reduced costs $\bar{c} = -c^B B^{-1} N$ are the negative of the values of the dual variables $y^t$. (More typically, one sees $\bar{c} = c^N - c^B B^{-1} N$, but here the nonbasic partition contains only slack variables, with objective function coefficients of 0.) It's also interesting to see the partition in the matrix calculation between the dual variables and the surplus variables.

To see that the dual objective and reduced costs are also properly reflected in the primal, we can proceed as follows:
$$\begin{aligned}
\begin{bmatrix} y & \sigma^{\bar{B}} \end{bmatrix}\bar{B} + \sigma^{\bar{N}}\bar{N} &= \begin{bmatrix} c^B & 0 \end{bmatrix} \\
\begin{bmatrix} y & \sigma^{\bar{B}} \end{bmatrix} &= \begin{bmatrix} c^B & 0 \end{bmatrix}\bar{B}^{-1} - \sigma^{\bar{N}}\bar{N}\bar{B}^{-1}
\end{aligned} \tag{9}$$
and then
$$\begin{aligned}
z &= \begin{bmatrix} y & \sigma \end{bmatrix}\begin{bmatrix} b' \\ 0 \end{bmatrix}
   = \left(\begin{bmatrix} c^B & 0 \end{bmatrix}\bar{B}^{-1} - \sigma^{\bar{N}}\bar{N}\bar{B}^{-1}\right)
     \begin{bmatrix} (b')^t \\ (b')^l \\ 0 \end{bmatrix} \\
  &= \begin{bmatrix} c & 0 & 0 \end{bmatrix}
     \begin{bmatrix} (A^t)^{-1} & 0 & (A^t)^{-1} \\ -A^l(A^t)^{-1} & I^l & -A^l(A^t)^{-1} \\ 0 & 0 & -I^t \end{bmatrix}
     \begin{bmatrix} (b')^t \\ (b')^l \\ 0 \end{bmatrix}
   - \sigma^{\bar{N}}
     \begin{bmatrix} 0 & -I^l & 0 \end{bmatrix}
     \begin{bmatrix} (A^t)^{-1} & 0 & (A^t)^{-1} \\ -A^l(A^t)^{-1} & I^l & -A^l(A^t)^{-1} \\ 0 & 0 & -I^t \end{bmatrix}
     \begin{bmatrix} (b')^t \\ (b')^l \\ 0 \end{bmatrix} \\
  &= \begin{bmatrix} c(A^t)^{-1} & 0 & c(A^t)^{-1} \end{bmatrix}
     \begin{bmatrix} (b')^t \\ (b')^l \\ 0 \end{bmatrix}
   - \sigma^{\bar{N}}\begin{bmatrix} A^l(A^t)^{-1} & -I^l & A^l(A^t)^{-1} \end{bmatrix}
     \begin{bmatrix} (b')^t \\ (b')^l \\ 0 \end{bmatrix} \\
  &= c(A^t)^{-1}(b')^t - \sigma^{\bar{N}}\left(A^l(A^t)^{-1}(b')^t - (b')^l\right)
\end{aligned} \tag{10}$$

Taking into account that
$$\begin{bmatrix} x \\ s^B \end{bmatrix} = B^{-1}b' =
\begin{bmatrix} (A^t)^{-1} & 0 \\ -A^l(A^t)^{-1} & I^l \end{bmatrix}
\begin{bmatrix} (b')^t \\ (b')^l \end{bmatrix} =
\begin{bmatrix} (A^t)^{-1}(b')^t \\ -A^l(A^t)^{-1}(b')^t + (b')^l \end{bmatrix}$$
we have the desired correspondence: the values calculated for the basic slacks in the primal problem match the reduced costs needed to select an entering dual surplus variable in the dual problem. Notice that here too there's a neat partition between the values of the architectural variables and the slack variables.
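As a numerical sanity check of (7), (8), and the correspondence just described, the following sketch builds the expanded basis of (5) for a small made-up instance and compares $c(A^t)^{-1}$ with the reduced costs of the nonbasic slacks and with the basic variable values. The data, the choice of tight rows, and the use of numpy are illustrative assumptions, not part of the original note.

```python
import numpy as np

# Made-up instance of (1): max 3x1 + 2x2, 2x1 + x2 <= 8, x1 + 3x2 <= 9, 0 <= x <= 5.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([8.0, 9.0])
c = np.array([3.0, 2.0])
l = np.array([0.0, 0.0])
u = np.array([5.0, 5.0])
m, n = A.shape

# Rows of the expanded system (2): [A; -I; I] x + s = [b; -l; u].
A_exp = np.vstack([A, -np.eye(n), np.eye(n)])
b_exp = np.concatenate([b, -l, u])

# Assume the two architectural rows are the tight ones (true at the optimum x = (3, 2)).
tight, loose = [0, 1], [2, 3, 4, 5]
A_t, A_l = A_exp[tight], A_exp[loose]

# Primal basis from (5): the free x's plus the slacks of the loose rows; N holds the tight-row slacks.
k = len(loose)
B  = np.block([[A_t, np.zeros((n, k))], [A_l, np.eye(k)]])
N  = np.vstack([np.eye(n), np.zeros((k, n))])
cB = np.concatenate([c, np.zeros(k)])

y_t      = c @ np.linalg.inv(A_t)                    # duals of the tight rows, per (8)
red_cost = -cB @ np.linalg.inv(B) @ N                # reduced costs of the nonbasic slacks
basics   = np.linalg.inv(B) @ b_exp[tight + loose]   # [x; s^B], per the display above

print(y_t)        # [1.4 0.2]
print(red_cost)   # [-1.4 -0.2] -- the negatives of y^t
print(basics)     # [3. 2. 3. 2. 2. 3.] -- x = (3, 2) followed by the loose-row slacks
```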

2 Compact Primal and Dual


Consider the traditional primal problem
$$\max\; cx \qquad Ax \le b \qquad x \ge 0 \tag{11}$$

In contrast to the previous section, we will not convert the bounds to explicit constraints. The bounds will be handled implicitly, and any of the variables $x$ can become nonbasic. For the moment, we'll consider only lower bounds $x \ge 0$, then generalise to arbitrary upper and lower bounds $l \le x \le u$. Add slack variables $s$ and partition $\begin{bmatrix} A & I \end{bmatrix}$ into basic and nonbasic portions as
$$\begin{bmatrix} B & N \end{bmatrix} =
\begin{bmatrix} B^t & 0 & N^t & I^t \\ B^l & I^l & N^l & 0 \end{bmatrix} \tag{12}$$
with corresponding partitions $\begin{bmatrix} x^B & s^B & x^N & s^N \end{bmatrix}$ for $x$ and $s$, and $\begin{bmatrix} b^t & b^l \end{bmatrix}$ for $b$. The objective $c$ is augmented with 0s in the columns corresponding to the slack variables, then partitioned as $\begin{bmatrix} c^B & 0 & c^N & 0 \end{bmatrix}$. The basis inverse will be
$$B^{-1} = \begin{bmatrix} (B^t)^{-1} & 0 \\ -B^l(B^t)^{-1} & I^l \end{bmatrix}. \tag{13}$$
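A quick numerical check of the block form (13) against a direct inverse, on made-up blocks (a sketch, not part of the original note):

```python
import numpy as np

# Illustrative blocks for (12)-(13); the data is made up purely for the check.
Bt = np.array([[2.0, 1.0], [1.0, 3.0]])   # basic structural columns, tight rows
Bl = np.array([[1.0, 0.0]])               # basic structural columns, loose rows
Il = np.eye(1)                            # slacks of the loose rows

B = np.block([[Bt, np.zeros((2, 1))],
              [Bl, Il]])

# Block form of the inverse from (13)
Binv_blocks = np.block([[np.linalg.inv(Bt), np.zeros((2, 1))],
                        [-Bl @ np.linalg.inv(Bt), Il]])

assert np.allclose(Binv_blocks, np.linalg.inv(B))
print(Binv_blocks)
```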

We then have
$$\begin{aligned}
\begin{bmatrix} x^B \\ s^B \end{bmatrix}
 &= B^{-1}b - B^{-1}N\begin{bmatrix} x^N \\ s^N \end{bmatrix} \\
 &= \begin{bmatrix} (B^t)^{-1} & 0 \\ -B^l(B^t)^{-1} & I^l \end{bmatrix}\begin{bmatrix} b^t \\ b^l \end{bmatrix}
  - \begin{bmatrix} (B^t)^{-1} & 0 \\ -B^l(B^t)^{-1} & I^l \end{bmatrix}
    \begin{bmatrix} N^t & I^t \\ N^l & 0 \end{bmatrix}
    \begin{bmatrix} x^N \\ s^N \end{bmatrix} \\
 &= \begin{bmatrix} (B^t)^{-1}b^t \\ b^l - B^l(B^t)^{-1}b^t \end{bmatrix}
  - \begin{bmatrix} (B^t)^{-1}N^t & (B^t)^{-1} \\ N^l - B^l(B^t)^{-1}N^t & -B^l(B^t)^{-1} \end{bmatrix}
    \begin{bmatrix} x^N \\ s^N \end{bmatrix}
\end{aligned} \tag{14}$$
and
$$\begin{aligned}
z &= \begin{bmatrix} c^B & 0 \end{bmatrix}\begin{bmatrix} x^B \\ s^B \end{bmatrix}
   + \begin{bmatrix} c^N & 0 \end{bmatrix}\begin{bmatrix} x^N \\ s^N \end{bmatrix} \\
  &= \begin{bmatrix} c^B & 0 \end{bmatrix}B^{-1}b
   + \left(\begin{bmatrix} c^N & 0 \end{bmatrix} - \begin{bmatrix} c^B & 0 \end{bmatrix}B^{-1}N\right)
     \begin{bmatrix} x^N \\ s^N \end{bmatrix} \\
  &= c^B(B^t)^{-1}b^t
   + \left(\begin{bmatrix} c^N & 0 \end{bmatrix}
   - \begin{bmatrix} c^B & 0 \end{bmatrix}
     \begin{bmatrix} (B^t)^{-1}N^t & (B^t)^{-1} \\ N^l - B^l(B^t)^{-1}N^t & -B^l(B^t)^{-1} \end{bmatrix}\right)
     \begin{bmatrix} x^N \\ s^N \end{bmatrix} \\
  &= c^B(B^t)^{-1}b^t
   + \begin{bmatrix} c^N - c^B(B^t)^{-1}N^t & -c^B(B^t)^{-1} \end{bmatrix}
     \begin{bmatrix} x^N \\ s^N \end{bmatrix}
\end{aligned} \tag{15}$$

Now consider the traditional dual problem for (11),
$$\min\; yb \qquad yA \ge c \qquad y \ge 0 \tag{16}$$
The bounds on $y$ will be handled implicitly, so that any of the variables $y$ can become nonbasic. Add surplus variables $\sigma$ and partition $\begin{bmatrix} A \\ -I \end{bmatrix}$ into basic and nonbasic portions as
$$\begin{bmatrix} \bar{B} \\ \bar{N} \end{bmatrix} =
\begin{bmatrix} 0 & -I^{\bar{B}} \\ B^t & N^t \\ -I^{\bar{N}} & 0 \\ B^l & N^l \end{bmatrix} \tag{17}$$
where the first two block rows form $\bar{B}$ and the last two form $\bar{N}$,

with corresponding partitions $\begin{bmatrix} \sigma^{\bar{B}} & y^{\bar{B}} & \sigma^{\bar{N}} & y^{\bar{N}} \end{bmatrix}$ for $y$ and $\sigma$, and $\begin{bmatrix} c^B & c^N \end{bmatrix}$ for $c$. The right-hand side $b$ is augmented with 0s in the rows corresponding to the surplus variables, then partitioned as $\begin{bmatrix} 0 & b^t & 0 & b^l \end{bmatrix}^T$. The rationale behind the partition is that the dual variables associated with tight constraints should be basic, while those associated with loose constraints should be nonbasic. Then, since $ya_j$ should equal $c_j$ for basic variables $x_j$, the dual surpluses $\sigma_j$ associated with basic columns should be 0, hence nonbasic in the dual. The remaining surpluses go to the basic partition. The basis inverse will be
$$\bar{B}^{-1} = \begin{bmatrix} (B^t)^{-1}N^t & (B^t)^{-1} \\ -I^{\bar{B}} & 0 \end{bmatrix}. \tag{18}$$
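The block form (18) can be checked the same way as (13); the blocks below are made-up data for the check, not part of the original note:

```python
import numpy as np

# Illustrative blocks for (17)-(18); data is made up purely for the check.
Bt = np.array([[2.0, 1.0], [1.0, 3.0]])   # tight rows, basic structural columns
Nt = np.array([[1.0], [2.0]])             # tight rows, nonbasic structural columns
IB = np.eye(1)                            # identity block for the basic dual surpluses

Bbar = np.block([[np.zeros((1, 2)), -IB],
                 [Bt, Nt]])

Bbar_inv_blocks = np.block([[np.linalg.inv(Bt) @ Nt, np.linalg.inv(Bt)],
                            [-IB, np.zeros((1, 2))]])

assert np.allclose(Bbar_inv_blocks, np.linalg.inv(Bbar))
print(Bbar_inv_blocks)
```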

We then have
$$\begin{aligned}
\begin{bmatrix} \sigma^{\bar{B}} & y^{\bar{B}} \end{bmatrix}
 &= \begin{bmatrix} c^B & c^N \end{bmatrix}\bar{B}^{-1}
  - \begin{bmatrix} \sigma^{\bar{N}} & y^{\bar{N}} \end{bmatrix}\bar{N}\bar{B}^{-1} \\
 &= \begin{bmatrix} c^B & c^N \end{bmatrix}
    \begin{bmatrix} (B^t)^{-1}N^t & (B^t)^{-1} \\ -I^{\bar{B}} & 0 \end{bmatrix}
  - \begin{bmatrix} \sigma^{\bar{N}} & y^{\bar{N}} \end{bmatrix}
    \begin{bmatrix} -I^{\bar{N}} & 0 \\ B^l & N^l \end{bmatrix}
    \begin{bmatrix} (B^t)^{-1}N^t & (B^t)^{-1} \\ -I^{\bar{B}} & 0 \end{bmatrix} \\
 &= \begin{bmatrix} c^B(B^t)^{-1}N^t - c^N & c^B(B^t)^{-1} \end{bmatrix}
  - \begin{bmatrix} \sigma^{\bar{N}} & y^{\bar{N}} \end{bmatrix}
    \begin{bmatrix} -(B^t)^{-1}N^t & -(B^t)^{-1} \\ B^l(B^t)^{-1}N^t - N^l & B^l(B^t)^{-1} \end{bmatrix}
\end{aligned} \tag{19}$$
and
$$\begin{aligned}
z &= \begin{bmatrix} \sigma^{\bar{B}} & y^{\bar{B}} \end{bmatrix}\begin{bmatrix} 0 \\ b^t \end{bmatrix}
   + \begin{bmatrix} \sigma^{\bar{N}} & y^{\bar{N}} \end{bmatrix}\begin{bmatrix} 0 \\ b^l \end{bmatrix} \\
  &= \begin{bmatrix} c^B & c^N \end{bmatrix}\bar{B}^{-1}\begin{bmatrix} 0 \\ b^t \end{bmatrix}
   + \begin{bmatrix} \sigma^{\bar{N}} & y^{\bar{N}} \end{bmatrix}
     \left(\begin{bmatrix} 0 \\ b^l \end{bmatrix} - \bar{N}\bar{B}^{-1}\begin{bmatrix} 0 \\ b^t \end{bmatrix}\right) \\
  &= c^B(B^t)^{-1}b^t
   + \begin{bmatrix} \sigma^{\bar{N}} & y^{\bar{N}} \end{bmatrix}
     \begin{bmatrix} (B^t)^{-1}b^t \\ b^l - B^l(B^t)^{-1}b^t \end{bmatrix}
\end{aligned} \tag{20}$$

Pulling together the relevant bits, we have
$$\begin{aligned}
\begin{bmatrix} x^B \\ s^B \end{bmatrix}
 &= \begin{bmatrix} (B^t)^{-1}b^t \\ b^l - B^l(B^t)^{-1}b^t \end{bmatrix}
  - \begin{bmatrix} (B^t)^{-1}N^t & (B^t)^{-1} \\ N^l - B^l(B^t)^{-1}N^t & -B^l(B^t)^{-1} \end{bmatrix}
    \begin{bmatrix} x^N \\ s^N \end{bmatrix} \\
\begin{bmatrix} \sigma^{\bar{B}} & y^{\bar{B}} \end{bmatrix}
 &= \begin{bmatrix} c^B(B^t)^{-1}N^t - c^N & c^B(B^t)^{-1} \end{bmatrix}
  - \begin{bmatrix} \sigma^{\bar{N}} & y^{\bar{N}} \end{bmatrix}
    \begin{bmatrix} -(B^t)^{-1}N^t & -(B^t)^{-1} \\ B^l(B^t)^{-1}N^t - N^l & B^l(B^t)^{-1} \end{bmatrix} \\
z &= c^B(B^t)^{-1}b^t
   + \begin{bmatrix} c^N - c^B(B^t)^{-1}N^t & -c^B(B^t)^{-1} \end{bmatrix}
     \begin{bmatrix} x^N \\ s^N \end{bmatrix} \\
z &= c^B(B^t)^{-1}b^t
   + \begin{bmatrix} \sigma^{\bar{N}} & y^{\bar{N}} \end{bmatrix}
     \begin{bmatrix} (B^t)^{-1}b^t \\ b^l - B^l(B^t)^{-1}b^t \end{bmatrix}
\end{aligned} \tag{21}$$

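The correspondences implied by (21), summarized in the list that follows, can be checked numerically. The sketch below (made-up data; numpy is used purely for the linear algebra, and the basis choice is an assumption) builds a primal basis of $[A\ I]$ and the matching dual basis of the rows of $[A;\,-I]$, then compares values and reduced costs on the two sides.

```python
import numpy as np

# Made-up instance: max 3x1 + 2x2 + x3 subject to
#   2x1 + x2 + x3 <= 8,  x1 + 3x2 + 2x3 <= 9,  x1 <= 5,  x >= 0.
A = np.array([[2.0, 1.0, 1.0], [1.0, 3.0, 2.0], [1.0, 0.0, 0.0]])
b = np.array([8.0, 9.0, 5.0])
c = np.array([3.0, 2.0, 1.0])
m, n = A.shape
AI = np.hstack([A, np.eye(m)])
c_ext = np.concatenate([c, np.zeros(m)])

# Primal basis: x1, x2 and the slack of row 3 (rows 1 and 2 are the tight ones).
basic, nonbasic = [0, 1, 5], [2, 3, 4]          # columns of [A I]
B, N = AI[:, basic], AI[:, nonbasic]
xB = np.linalg.solve(B, b)                      # values of (x1, x2, s3)
dN = c_ext[nonbasic] - c_ext[basic] @ np.linalg.inv(B) @ N   # reduced costs of (x3, s1, s2)

# Matching dual basis per (17): rows for y1, y2 (tight rows) and sigma3 (nonbasic column x3).
D = np.vstack([A, -np.eye(n)])                  # rows: y1..y3 then sigma1..sigma3
g = np.concatenate([b, np.zeros(n)])            # dual objective coefficients
dual_basic, dual_nonbasic = [0, 1, 5], [2, 3, 4]
Bbar, Nbar = D[dual_basic], D[dual_nonbasic]
w  = c @ np.linalg.inv(Bbar)                    # values of (y1, y2, sigma3)
gN = g[dual_nonbasic] - Nbar @ np.linalg.inv(Bbar) @ g[dual_basic]  # reduced costs of (y3, sigma1, sigma2)

print(xB, gN)   # [3. 2. 2.] [2. 3. 2.]  -- x1<->sigma1, x2<->sigma2, s3<->y3
print(dN, w)    # [-0.8 -1.4 -0.2] [1.4 0.2 0.8]  -- x3<->sigma3, s1<->y1, s2<->y2, sign flipped
```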
The expected equivalences are apparent:

- The values of the primal basic variables $x^B$ are equal to the reduced costs of the dual nonbasic surplus variables $\sigma^{\bar{N}}$.
- The values of the primal basic slack variables $s^B$ are equal to the reduced costs of the dual nonbasic variables $y^{\bar{N}}$.
- The values of the reduced costs for the primal nonbasic variables $x^N$ are the negative of the values of the dual basic surplus variables $\sigma^{\bar{B}}$.
- The values of the reduced costs for the primal nonbasic slack variables $s^N$ are the negative of the values of the dual basic variables $y^{\bar{B}}$.
- The coefficients of $B^{-1}N$ are the negative of the coefficients of $\bar{N}\bar{B}^{-1}$.

But the question remains: What's really happening when a primal variable $x_j$ moves between $B$ and $N$ due to reaching or leaving its lower bound? After all, the bound constraints are not explicitly present in the constraint matrix. Yet if they were, one would expect that the associated dual would be in the dual basis and be non-zero when the bound was tight. How is it that we can run a dual simplex algorithm from the primal tableau under these circumstances?

If the lower bound constraints $x \ge 0$ were represented explicitly in $A$, $x_j$ would appear to be a free variable and never leave the basis. The corresponding dual surplus $\sigma_j$ would not exist, since a free variable in the primal requires an equality in the dual. There would be a dual variable $y_{m+j}$ attached to the lower bound constraint $x_j \ge 0$. The dual constraint would be
$$ya_j - y_{m+j} = c_j. \tag{22}$$

In fact, $y_{m+j}$ is indistinguishable from the surplus variable $\sigma_j$ in all but its interpretation, and this is the key observation. What is happening is that the revised dual simplex algorithm, extended for implicit bounds and running from the primal tableau, is algorithmically enforcing the observations of the previous paragraph and constraint (22). The presence of a dual surplus variable $\sigma_j$ is in fact a sort of convenient fiction. The coefficient nominally associated with $\sigma_j$ should really be interpreted as the coefficient of $y_{m+j}$. The fiction is sufficient only because the bound on $x_j$ is 0, hence the absence of $(y_{m+j})(0)$ in the dual objective function $yb$ makes no difference. In the remainder of this section, $\sigma_j$ will be used for convenience in parts of the exposition, but it should be understood that it's simply standing in for one of $y_{m+j}$ or $y_{m+n+j}$.

With this observation in hand, it is relatively easy to extend the explanation to the general case of arbitrary primal bounds, $l \le x \le u$. There would be a dual variable $y_{m+n+j}$ attached to the upper bound constraint $x_j \le u_j$ and a dual variable $y_{m+j}$ attached to the lower bound constraint $-x_j \le -l_j$. The dual constraint would be
$$ya_j - y_{m+j} + y_{m+n+j} = c_j. \tag{23}$$
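As a small illustration of (23), with numbers made up for the purpose: suppose $c_j = 3$, $x_j$ is nonbasic at its upper bound, and the current duals give $ya_j = 1$. Then (23) with $y_{m+j} = 0$ yields $y_{m+n+j} = c_j - ya_j = 2 \ge 0$, the correct nonnegative multiplier for the tight upper bound, and the primal reduced cost is likewise $c_j - ya_j = 2 > 0$. Read through the fiction of $\sigma_j$, however, the tableau reports $\sigma_j = ya_j - c_j = -2$, which is the apparent negative value discussed next.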

Note that only one of $y_{m+j}$ and $y_{m+n+j}$ will be basic at a time, since only one of the constraints can be tight. (In the case where $l_j = u_j$, this argument still holds, as only one of the constraints is needed.) If $x_j$ is nonbasic at a lower bound, the proper dual variable would be $y_{m+j}$, and the $-1$ is the proper coefficient. The only difference is that $l_j$ is no longer restricted to 0. If $x_j$ is nonbasic at an upper bound, the proper dual variable would be $y_{m+n+j}$. We require a $+1$ as the coefficient but have only the same $-1$ as in the previous cases. This leads directly to a

subset of the dual variables apparently taking on negative values. This is exactly what is required if we are to read the values of the basic dual variables from the primal tableau as the negative of the primal reduced costs. (In a maximisation problem, the reduced cost of a variable nonbasic at its upper bound should be positive at optimality.) When $x_j$ is basic (and, in the absence of degeneracy, not at bound), both $y_{m+j}$ and $y_{m+n+j}$ are nonbasic and 0. We don't need to decide which variable to use until we're considering a pivot, at which time the choice will become clear (by virtue of which bound $x_j$ is approaching as it's driven out of the primal basis). The only permanent damage is to the dual objective function. If $l_j$ or $u_j$ are non-zero, the absence of $y_{m+j}l_j$ or $y_{m+n+j}u_j$ will make the value of $yb$ incorrect, because it does not capture the contribution of the implicit bound constraints.

With the reasoning in hand, this can be seen by some judicious manipulation of the matrix partitions. Consider a primal basis partition with $x_j$ nonbasic and assume that $x_j$ occupies the first column of the nonbasic partition:
"
t B t 0 a t N\ j I t j l B l I l a lj N\ j 0

(24)

The dual basis that matches (24) is
$$\begin{bmatrix} 0 & 0 & -I^{\bar{B}}_{\setminus j} \\ 0 & -1 & 0 \\ B^t & a^t_j & N^t_{\setminus j} \\ B^l & a^l_j & N^l_{\setminus j} \\ -I^{\bar{N}} & 0 & 0 \end{bmatrix} \tag{25}$$
where the first three block rows form the dual basis $\bar{B}$, the last two form $\bar{N}$, and the $-1$ that occurs above $a^t_j$ is nominally the coefficient of the surplus variable $\sigma_j$ associated with column $j$. Now consider the primal partition if we were to make the bound constraint explicit, as $-x_j + s_{m+j} = -l_j$. The primal partition would become
$$\begin{bmatrix} B^t & a^t_j & 0 & N^t_{\setminus j} & I^t & 0 \\ 0 & -1 & 0 & 0 & 0 & 1 \\ B^l & a^l_j & I^l & N^l_{\setminus j} & 0 & 0 \end{bmatrix} \tag{26}$$

The dual partition would have the same coefficients as (25), but would be better viewed as
$$\begin{bmatrix} 0 & 0 & -I^{\bar{B}}_{\setminus j} & 0 \\ B^t & a^t_j & N^t_{\setminus j} & 0 \\ 0 & -1 & 0 & 1 \\ B^l & a^l_j & N^l_{\setminus j} & 0 \\ -I^{\bar{N}} & 0 & 0 & 0 \end{bmatrix} \tag{27}$$

where the $-1$ below $a^t_j$ is now the coefficient of the dual variable $y_{m+j}$. If we had chosen to consider an upper bound constraint, $x_j + s_{m+n+j} = u_j$, the correspondence would be the same, with the exception that a $+1$ should appear in the basic partition, as opposed to a $-1$. This means that when we force $-1$ into the role of coefficient for $y_{m+n+j}$ when running the dual simplex from the primal tableau, the variable appears to take on a negative value.

The interpretation that a single $-1$, nominally the coefficient for $\sigma_j$, is serving as the coefficient for $y_{m+j}$ and $y_{m+n+j}$ does an adequate job of explaining the mechanics of running the revised dual simplex algorithm, extended for implicit bounds, from the primal tableau. In particular, it provides one explanation for the negative value apparently taken on by a dual variable which should be the multiplier for an upper bound inequality $x_j \le u_j$. As we'll see in Section 3, there's a better way of approaching this from a theoretical point of view, but we'll need to once again work with the expanded problem.

One more point is worth mentioning here. Because the algorithm chooses the correct interpretation ($y_{m+j}$ or $y_{m+n+j}$) at the time of the pivot, and maintains the equality $ya_j - y_{m+j} + y_{m+n+j} = c_j$ at all times, there is no problem when the range of $x_j$ crosses 0.

We can restate the previous line of reasoning from the viewpoint of the relationships between the dual variables and the primal reduced costs, keeping in mind that a dual feasible solution must be primal optimal. The dual variable $y_i$ corresponds to constraint $i$. When this constraint is tight, the dual variable should be basic and greater than 0. From (21), the value of $y_i$ corresponds to the reduced cost of the slack variable $s_i$ associated with constraint $i$. These variables most often have only a lower bound, $s_i \ge 0$, so that if they are nonbasic the associated reduced cost must be less than 0 to retain optimality. Thus the two interpretations are consistent. If a slack $s_i$ has an upper bound, this comes about as a compact means of handling a range constraint $\underline{b}_i \le a_i x \le b_i$. When the slack is nonbasic at its upper bound, the tight inequality is $a_i x \ge \underline{b}_i$. In a world of $\le$ constraints, a negative $y_i$ gives the normal the correct orientation in the absence of an explicit inequality $-a_i x \le -\underline{b}_i$.

For architectural variables $x_j$ and the corresponding dual variables $\sigma_j$, the story is a bit different but the result is similar. If $x_j$ is nonbasic at its lower bound, $\sigma_j$ will be basic and greater than 0, but if $x_j$ is nonbasic at its upper bound, $\sigma_j$ will be basic and less than 0. Again, this is consistent with forcing $\sigma_j$ into the roles of $y_{m+j}$ and $y_{m+n+j}$, and with the sign of reduced costs for primal optimality.

The rule for choosing the entering dual variable is generally stated as "choose a primal variable $x_j$ whose value $\bar{b}_i$ is out of bound", where $i$ is the row in $B$ of the constraint solved for $x_j$. Since $\bar{b}_i$ is in this case serving as the reduced cost of a dual variable, it'd be nice to see the rule interpreted from this view.

Again, suppose that there are explicit bound constraints on $x_j$. Since $x_j$ is a free variable, it will be in the basis. Since $x_j$ is not at bound, the slack variables for the lower and upper bound constraints will also be in the basis. Suppose the constraint in position $k$ of $B^t$ has been solved for $x_j$ and that the lower and upper bound constraints for $x_j$ occupy the last two rows of the set of loose constraints $B^l$. Then we have
$$\begin{bmatrix} x^B_{\setminus j} \\ x_j \\ s^{l'} \\ s_{m+j} \\ s_{m+n+j} \end{bmatrix} =
\begin{bmatrix} \left((B^t)^{-1}b^t\right)_{\setminus k} \\ \beta_k b^t \\ b^{l'} - B^{l'}(B^t)^{-1}b^t \\ -l_j + \beta_k b^t \\ u_j - \beta_k b^t \end{bmatrix} \tag{28}$$
where $l' = l \setminus \{m+j,\, m+n+j\}$, $\beta_k = e_k (B^t)^{-1}$, and $e_k = \begin{bmatrix} 0 & \dots & 1 & \dots & 0 \end{bmatrix}$. (I.e., $e_k$ is the unit vector with a 1 in the $k$th position and $\beta_k$ is row $k$ of $(B^t)^{-1}$.)
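A minimal numerical sketch of the quantities in (28), on made-up data (the blocks, bounds, and row index are illustrative assumptions):

```python
import numpy as np

# Made-up data: row k of B^t has been solved for x_j, with bounds l_j = 1, u_j = 5.
Bt = np.array([[2.0, 1.0], [1.0, 3.0]])
bt = np.array([8.0, 9.0])
k, l_j, u_j = 0, 1.0, 5.0

beta_k = np.linalg.inv(Bt)[k, :]      # beta_k = e_k (B^t)^{-1}, row k of the inverse
x_j = beta_k @ bt                     # value of x_j, from (28)
s_lower = x_j - l_j                   # s_{m+j}   = -l_j + beta_k b^t
s_upper = u_j - x_j                   # s_{m+n+j} =  u_j - beta_k b^t
print(x_j, s_lower, s_upper)          # 3.0  2.0  2.0  -- both bound slacks are in bound
```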

When $x_j < l_j$, we have $s_{m+j} = x_j - l_j = \beta_k b^t - l_j \le 0$. Since its reduced cost is negative, $y_{m+j}$ is a candidate to enter the basis. A similar analysis says that $y_{m+n+j}$ should enter when $x_j > u_j$. But notice that when $x_j$ leaves at its lower bound, $y_{m+j}$ will enter at 0 and increase, while when $x_j$ leaves at its upper bound, $y_{m+n+j}$ will enter at 0 and apparently decrease, due to the incorrect sign of the coefficient.

Turning to the criteria for choosing the leaving dual variable, we have to maintain dual feasibility. Dual variables $y_i$ corresponding to $\le$ constraints (thus primal slacks $s_i$ nonbasic at their lower bound of 0), and dual surpluses $\sigma_j$ corresponding to primal variables $x_j$ which are nonbasic at their lower bound, will have positive values (negative reduced costs in the primal tableau). Dual variables $y_i$ corresponding to hidden $\ge$ constraints derived from range constraints (thus primal slacks $s_i$ nonbasic at their upper bound $b_i - \underline{b}_i$), and dual surpluses $\sigma_j$ corresponding to primal variables $x_j$ which are nonbasic at their upper bound, will have negative values (positive reduced costs in the primal tableau). In each case, we'll drive the dual variable toward 0; the limiting variable will leave the basis.

For $y_i$ or $\sigma_j$ entering rising from 0 ($s_i$ or $x_j$ leaving, increasing to their lower bound), we have:
- $y_i$ or $\sigma_j$ decreasing to 0 and leaving ($s_i$ or $x_j$ entering, rising from their lower bound). This requires the pivot element to be positive, hence $\bar{a}_{ij} < 0$.
- $y_i$ or $\sigma_j$ increasing to 0 and leaving ($s_i$ or $x_j$ entering, decreasing from their upper bound). This requires the pivot element to be negative, hence $\bar{a}_{ij} > 0$.

For $y_i$ or $\sigma_j$ entering decreasing from 0 ($s_i$ or $x_j$ leaving, decreasing to their upper bound), we have:
- $y_i$ or $\sigma_j$ decreasing to 0 and leaving ($s_i$ or $x_j$ entering, rising from their lower bound). This requires the pivot element to be negative, hence $\bar{a}_{ij} > 0$.
- $y_i$ or $\sigma_j$ increasing to 0 and leaving ($s_i$ or $x_j$ entering, decreasing from their upper bound). This requires the pivot element to be positive, hence $\bar{a}_{ij} < 0$.

As a final observation, the usual technique for constructing a dual feasible basis (assuming primal maximisation) is:
- If $c_j > 0$, set $x_j$ to its upper bound.
- If $c_j < 0$, set $x_j$ to its lower bound.

The validity of this is immediately obvious from (23). If the upper bound is tight we'll have $y_{m+n+j} = c_j \ge 0$, and if the lower bound is tight, we'll have $y_{m+j} = -c_j \ge 0$. We thus have $n$ dual variables with feasible values as a starting solution.
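The starting-basis rule is simple enough to state as code. The sketch below (made-up objective and bounds, with $y = 0$) applies the rule and evaluates (23) to show that the resulting bound duals are nonnegative; it is an illustration, not an implementation from the original note.

```python
import numpy as np

# Sketch of the dual feasible starting basis rule, on made-up data.
c = np.array([3.0, -2.0, 1.0])
l = np.array([0.0, 1.0, -1.0])
u = np.array([5.0, 4.0, 2.0])

# If c_j > 0, start x_j at its upper bound; otherwise at its lower bound.
x = np.where(c > 0, u, l)

# With y = 0, (23) gives y_{m+n+j} = c_j at a tight upper bound and y_{m+j} = -c_j at a
# tight lower bound; both are nonnegative by construction, so the start is dual feasible.
y_upper = np.where(c > 0, c, 0.0)
y_lower = np.where(c > 0, 0.0, -c)
print(x, y_upper, y_lower)   # [5. 1. 2.] [3. 0. 1.] [0. 2. 0.]
```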

3 Another View of the Expanded Problems


The previous sections have examined the relationship between the primal and dual problems in their fully expanded form and their compact form, and specified the mechanical relationships which are used when running the revised dual simplex algorithm from the primal simplex data structures. There is still some additional insight to be gained by looking at the expanded forms from a slightly different viewpoint.

We'll again start with an expanded version of the primal problem, but this time we'll add explicit bound constraints for the slack variables $s$ associated with the architectural constraints, and omit the slack variables that would be added to the explicit bound constraints. This will allow us to incorporate range constraints, which require an upper bound on the associated slack variables. The expanded primal constraint system will be
$$\max\; \begin{bmatrix} c & 0 \end{bmatrix}\begin{bmatrix} x \\ s \end{bmatrix} \qquad
\begin{bmatrix} A & I \end{bmatrix}\begin{bmatrix} x \\ s \end{bmatrix} = b \qquad
-I\begin{bmatrix} x \\ s \end{bmatrix} \le \begin{bmatrix} -l \\ 0 \end{bmatrix} \qquad
I\begin{bmatrix} x \\ s \end{bmatrix} \le \begin{bmatrix} u \\ u^s \end{bmatrix} \tag{29}$$
where $u^s$ specifies upper bounds for the slack variables.

Both $x$ and $s$ are free variables in (29), but we can sort them into basic and nonbasic partitions as they would appear if the bounds were handled implicitly. A superscript $N$ will represent the set of variables at their lower bound, $\bar{N}$ the set of variables at their upper bound, and $B$ the set of basic variables. The dual of (29) is
$$\min\; \begin{bmatrix} y & \underline{y} & \overline{y} \end{bmatrix}\begin{bmatrix} b \\ -l' \\ u' \end{bmatrix} \qquad
\begin{bmatrix} y & \underline{y} & \overline{y} \end{bmatrix}\begin{bmatrix} A' \\ -I \\ I \end{bmatrix} = c' \qquad
\underline{y},\, \overline{y} \ge 0 \tag{30}$$
where $l' = \begin{bmatrix} l & 0 \end{bmatrix}$, $u' = \begin{bmatrix} u & u^s \end{bmatrix}$, $A' = \begin{bmatrix} A & I \end{bmatrix}$, $c' = \begin{bmatrix} c & 0 \end{bmatrix}$, and the dual variables have been split into three groups $y$, $\underline{y}$, and $\overline{y}$ corresponding to the architectural constraints, lower bound constraints, and upper bound constraints, respectively.
The variables $y$ are free variables, to match the equalities in the primal, and will always be part of the dual basis. The variables $\underline{y}$ and $\overline{y}$ can be separated into three groups, matching the status of their associated primal variable. For $\underline{y}$, the variables $\underline{y}^N$ will be part of the dual basis because the primal lower bound is tight. The variables $\underline{y}^{\bar{N}}$ and $\underline{y}^B$ will be dual nonbasic, as they are associated with primal variables not at their lower bound. For $\overline{y}$, it will be the case that the variables $\overline{y}^{\bar{N}}$ are dual basic and the variables $\overline{y}^N$ and $\overline{y}^B$ are dual nonbasic. We can now rewrite the dual constraint system (30) to show the basic and nonbasic partitions as
$$\begin{bmatrix} y & \underline{y}^N & \overline{y}^{\bar{N}} & \underline{y}^{\bar{N}} & \overline{y}^N & \underline{y}^B & \overline{y}^B \end{bmatrix}
\begin{bmatrix} B & N & \bar{N} \\ 0 & -I & 0 \\ 0 & 0 & I \\ 0 & 0 & -I \\ 0 & I & 0 \\ -I & 0 & 0 \\ I & 0 & 0 \end{bmatrix}
= \begin{bmatrix} c^B & c^N & c^{\bar{N}} \end{bmatrix} \tag{31}$$
where $B$, $N$, and $\bar{N}$ denote the columns of $A'$ associated with the basic variables, the variables at lower bound, and the variables at upper bound, respectively, and $c'$ is partitioned conformally.


From this, we can see that there are three groups of dual constraints,
$$\begin{aligned}
yB - \underline{y}^B I + \overline{y}^B I &= c^B \\
yN - \underline{y}^N I + \overline{y}^N I &= c^N \\
y\bar{N} - \underline{y}^{\bar{N}} I + \overline{y}^{\bar{N}} I &= c^{\bar{N}}
\end{aligned}$$
The variables $\underline{y}^B$, $\overline{y}^B$, $\underline{y}^{\bar{N}}$, and $\overline{y}^N$ are nonbasic and therefore have the value zero. We can rewrite the dual constraints as
$$\begin{aligned}
yB &= c^B \\
yN - \underline{y}^N I &= c^N \\
y\bar{N} + \overline{y}^{\bar{N}} I &= c^{\bar{N}}
\end{aligned} \tag{32}$$
One way of interpreting these is that $yN - \underline{y}^N I = c^N$ is the inequality $yN \ge c^N$, with surplus variables $\underline{y}^N$, and $y\bar{N} + \overline{y}^{\bar{N}} I = c^{\bar{N}}$ is the inequality $y\bar{N} \le c^{\bar{N}}$, with slack variables $\overline{y}^{\bar{N}}$. Then, for example, choosing the variable $\underline{y}^N_q$ to leave the dual basis (in primal terms, choosing $x_q$ to enter by rising from its lower bound) causes the dual constraint $ya_q \ge c_q$ to become tight. Choosing the variable $\overline{y}^{\bar{N}}_q$ to leave (choosing $x_q$ to enter by dropping from its upper bound) causes the dual constraint $ya_q \le c_q$ to become tight.

To solve (31) for the basic variables in terms of the nonbasic variables, we first need the basis inverse. For
$$\bar{B} = \begin{bmatrix} B & N & \bar{N} \\ 0 & -I & 0 \\ 0 & 0 & I \end{bmatrix},
\qquad \text{the inverse is} \qquad
\bar{B}^{-1} = \begin{bmatrix} B^{-1} & B^{-1}N & -B^{-1}\bar{N} \\ 0 & -I & 0 \\ 0 & 0 & I \end{bmatrix}.$$
Then we can rewrite (31) as
$$\begin{bmatrix} y & \underline{y}^N & \overline{y}^{\bar{N}} \end{bmatrix}
= \begin{bmatrix} c^B & c^N & c^{\bar{N}} \end{bmatrix}\bar{B}^{-1}
- \begin{bmatrix} \underline{y}^{\bar{N}} & \overline{y}^N & \underline{y}^B & \overline{y}^B \end{bmatrix}
  \begin{bmatrix} 0 & 0 & -I \\ 0 & I & 0 \\ -I & 0 & 0 \\ I & 0 & 0 \end{bmatrix}\bar{B}^{-1}$$
which simplifies to
$$\begin{bmatrix} y & \underline{y}^N & \overline{y}^{\bar{N}} \end{bmatrix}
= \begin{bmatrix} c^B B^{-1} & c^B B^{-1}N - c^N & -c^B B^{-1}\bar{N} + c^{\bar{N}} \end{bmatrix}
- \begin{bmatrix} \underline{y}^{\bar{N}} & \overline{y}^N & \underline{y}^B & \overline{y}^B \end{bmatrix}
  \begin{bmatrix} 0 & 0 & -I \\ 0 & -I & 0 \\ -B^{-1} & -B^{-1}N & B^{-1}\bar{N} \\ B^{-1} & B^{-1}N & -B^{-1}\bar{N} \end{bmatrix} \tag{33}$$

There are a number of observations that can be made from (33). Looking at the first term, we can see that the values of the dual variables $y$ are $c^B B^{-1}$, as expected. The values of the variables $\underline{y}^N$ are the negative of the primal reduced costs, while those of $\overline{y}^{\bar{N}}$ are equal to the reduced costs, so that both sets of dual variables have positive values. (As outlined in the previous section, the absence of the proper coefficient when the dual simplex is run from the primal data structures, coupled with a somewhat imperfect framework for an explanation, causes these variables to apparently take on negative values.) Turning to the second term, it's interesting to note that increasing the variables $\underline{y}^{\bar{N}}$ or $\overline{y}^N$ can only affect $\overline{y}^{\bar{N}}$ and $\underline{y}^N$, respectively. We can pivot only if some $\underline{y}^N$ or $\overline{y}^{\bar{N}}$ has a negative (dual infeasible) value. This is the equivalent of a primal bound-to-bound pivot in the dual simplex. Such a pivot is possible only if one of the initial or final solutions is dual infeasible.
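The first term of (33) is easy to evaluate on a tiny made-up instance; the sketch below (data, basis choice, and numpy usage are all illustrative assumptions) computes $y$, $\underline{y}^N$, and $\overline{y}^{\bar{N}}$ and shows they are nonnegative at an optimal basis, matching the observation above about the sign of the reduced costs.

```python
import numpy as np

# Tiny made-up instance of (29)/(30): max 3x1 + 2x2, x1 + x2 + s1 = 4, 0 <= x <= 3, s1 >= 0.
A_prime = np.array([[1.0, 1.0, 1.0]])   # A' = [A I]: columns x1, x2, s1
c_prime = np.array([3.0, 2.0, 0.0])

Bcols, Ncols, Nbar_cols = [1], [2], [0]  # basic: x2; at lower bound: s1; at upper bound: x1

B  = A_prime[:, Bcols]
y  = c_prime[Bcols] @ np.linalg.inv(B)                # duals of the equalities
yl = y @ A_prime[:, Ncols] - c_prime[Ncols]           # duals of the tight lower bounds
yu = c_prime[Nbar_cols] - y @ A_prime[:, Nbar_cols]   # duals of the tight upper bounds

print(y, yl, yu)   # [2.] [2.] [1.] -- all nonnegative, so this basis is dual feasible (primal optimal)
```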


Dual pivots from one basic feasible solution to another are accomplished by increasing some $\underline{y}^B$ or $\overline{y}^B$ in order to drive some $\underline{y}^N$ or $\overline{y}^{\bar{N}}$ to zero. To see that we can easily derive the usual dual pivoting rules, consider a phase I primal pivot in which variable $x_j$ enters by rising from its lower bound and $x_i$ leaves by falling to its upper bound. This would be accomplished by increasing $\overline{y}^B_i$ in order to drive $\underline{y}^N_j$ to zero. This requires that the pivot coefficient $\bar{a}_{ij}$ be positive, as expected.

Pivots involving slack variables are no different. Each slack $s_i$ for a range constraint has an associated variable in $\underline{y}$ and $\overline{y}$. The dual constraint for the slack is $y_i - \underline{y}_i + \overline{y}_i = 0$. Recalling the comments about constraints (32), we can see that when $s_i$ is nonbasic at its lower bound, the dual constraint is $y_i - \underline{y}_i = 0$ and $y_i$ will take on the expected positive value. When $s_i$ becomes basic, the constraint devolves to $y_i = 0$. When $s_i$ becomes nonbasic at its upper bound, the constraint becomes $y_i + \overline{y}_i = 0$, and $y_i$ takes on the expected negative value.

