Electronic Companion
Exploiting the structure of two-stage robust optimization problems with integer variables in the adversary's problem
EC.1. Proof of Lemma 1
We use the following notation.

$(Y, W)$: The uncertainty set, defined as $(Y, W) = \{(y, w) \mid \text{constraints (25)-(28) are satisfied}\}$.

$J$: The index set of $(Y, W)$, defined as $J = \{1, 2, \ldots, |(Y, W)|\}$, where $|(Y, W)|$ denotes the cardinality of $(Y, W)$.

$(y^j, w^j)$: The $j$-th member of $(Y, W)$.


We also define $f_k(x, w)$ and $g_k(x, e_{ks})$ as follows.

f_k(x, w) = \min_{z_k} \; c_{2k}^\top z_k   (EC.1.1)
  A_k x + \sum_{s \in S_k} e_{ks} w_{ks} + C_k z_k \ge b_k,   k \in K   (EC.1.2)
  z_k \in Z_k,   k \in K   (EC.1.3)

g_k(x, e_{ks}) = \min_{z'_{ks}} \; c_{2k}^\top z'_{ks}   (EC.1.4)
  A_k x + e_{ks} + C_k z'_{ks} \ge b_k,   k \in K   (EC.1.5)
  z'_{ks} \in Z_k,   k \in K   (EC.1.6)

For each adversary scenario $(y, w) \in (Y, W)$ with index $j \in J$, with respect to constraints (17) and (25), exactly one of the variables $w_{ks}^j$, $s \in S_k$, is equal to 1 for each $k \in K$. Let $s_j$ denote the index in $S_k$ for which $w_{ks}^j$ is equal to 1. Therefore we have the following relations.

w_{k s_j}^j = 1,   j \in J   (EC.1.7)
w_{ks}^j = 0,   j \in J, \; s \neq s_j   (EC.1.8)

In the following we prove the if-statement of Lemma 1. The only-if-statement of this lemma can be proven in the reverse direction. Assume that $\bar{x}$ is a first-stage feasible solution of Model (P1). In the following we separately prove that


- $\bar{x}$ is also a first-stage feasible solution of Model (P2).
- The objective values of (P1) and (P2) for this fixed first-stage solution are the same if the adversary and the decision maker choose their optimal solutions in the adversary's problem and the second stage, respectively.
Proof of Part 1: Since Model (P1) is feasible, there is at least one feasible second-stage policy $\langle \zeta_k^j \rangle_{k \in K}$ for each $j \in J$ such that

A_k \bar{x} + \sum_{s \in S_k} e_{ks} w_{ks}^j + C_k \zeta_k^j \ge b_k,   k \in K, \; j \in J   (EC.1.9)
\zeta_k^j \in Z_k,   k \in K, \; j \in J   (EC.1.10)

Using (EC.1.7) and (EC.1.8), we can rewrite relations (EC.1.9)-(EC.1.10) as follows.

A_k \bar{x} + e_{k s_j} + C_k \zeta_k^j \ge b_k,   k \in K, \; j \in J   (EC.1.11)
\zeta_k^j \in Z_k,   k \in K, \; j \in J   (EC.1.12)

Relations (EC.1.11)-(EC.1.12) demonstrate that for each $k \in K$ and $s \in S_k$ there is at least one $j \in J$ such that for $z'_{ks} = \zeta_k^j$ the constraints $A_k \bar{x} + e_{ks} + C_k z'_{ks} \ge b_k$ and $z'_{ks} \in Z_k$ are satisfied. Therefore, $\bar{x}$ is also a first-stage feasible solution of Model (P2).
Proof of Part 2: To prove that the objective values of (P1) and (P2) for the fixed first-stage solution $\bar{x}$ are the same, it is enough to prove that relation (EC.1.13), or its equivalent, relation (EC.1.14), holds.

\max_{(y,w) \in (Y,W)} \left( c_1^\top \bar{x} + \sum_{k \in K} f_k(\bar{x}, w) \right) = \max_{(y,w) \in (Y,W)} \left( c_1^\top \bar{x} + \sum_{k \in K} \sum_{s \in S_k} g_k(\bar{x}, e_{ks}) w_{ks} \right)   (EC.1.13)

\max_{(y,w) \in (Y,W)} \sum_{k \in K} f_k(\bar{x}, w) = \max_{(y,w) \in (Y,W)} \sum_{k \in K} \sum_{s \in S_k} g_k(\bar{x}, e_{ks}) w_{ks}   (EC.1.14)

Moreover, regarding (EC.1.7) and (EC.1.8), in constraint (EC.1.2) of $f_k(\bar{x}, w^j)$ we can substitute $\sum_{s \in S_k} e_{ks} w_{ks}^j$ by $e_{k s_j}$. It is then clear that the mathematical programs corresponding to $g_k(\bar{x}, e_{k s_j})$ and $f_k(\bar{x}, w^j)$ have the same structure and the following relations hold.

g_k(\bar{x}, e_{k s_j}) = f_k(\bar{x}, w^j),   k \in K, \; j \in J   (EC.1.15)
\arg\min_{z'_{ks}} g_k(\bar{x}, e_{k s_j}) = \arg\min_{z_k} f_k(\bar{x}, w^j),   k \in K, \; j \in J   (EC.1.16)

The following stream of equalities proves the validity of (EC.1.14). In the following relations the second equality is obtained using (EC.1.15). The third equality is valid because of
(EC.1.7)-(EC.1.8).







\max_{(y,w) \in (Y,W)} \sum_{k \in K} f_k(\bar{x}, w) = \max_{j \in J} \sum_{k \in K} f_k(\bar{x}, w^j) = \max_{j \in J} \sum_{k \in K} g_k(\bar{x}, e_{k s_j})
\qquad = \max_{j \in J} \sum_{k \in K} \sum_{s \in S_k} g_k(\bar{x}, e_{ks}) w_{ks}^j = \max_{(y,w) \in (Y,W)} \sum_{k \in K} \sum_{s \in S_k} g_k(\bar{x}, e_{ks}) w_{ks}

In addition, (EC.1.16) shows that we can obtain the second-stage optimal policies for the variables $z_k$ in Model (P1) from the optimal values of the variables $z'_{ks}$.


EC.2. Proof of Theorem 2


As discussed in Appendix EC.1, we can present the inner max problem in Model (P2) by

\max_{(y,w) \in (Y,W)} \left( c_1^\top \bar{x} + \sum_{k \in K} \sum_{s \in S_k} g_k(\bar{x}, e_{ks}) w_{ks} \right)   (EC.2.1)

where $g_k(\bar{x}, e_{ks})$ is defined as follows.

g_k(x, e_{ks}) = \min_{z'_{ks}} \; c_{2k}^\top z'_{ks}   (EC.2.2)
  A_k x + e_{ks} + C_k z'_{ks} \ge b_k,   k \in K   (EC.2.3)
  z'_{ks} \in Z_k,   k \in K   (EC.2.4)

It is clear that the optimal values of the vectors $z'_{ks}$ for $k \in K$, $s \in S_k$ are independent of $(y, w) \in (Y, W)$ and are defined by

z'^{*}_{ks} = \arg\min_{z'_{ks} \in G_{ks}} \left( c_{2k}^\top z'_{ks} \right),   k \in K, \; s \in S_k   (EC.2.5)

where $G_{ks} = \{ z'_{ks} \in Z_k \mid A_k \bar{x} + e_{ks} + C_k z'_{ks} \ge b_k \}$. Therefore, because of the independence of $z'_{ks}$, $k \in K$, $s \in S_k$, from $(y, w) \in (Y, W)$, we can swap $\max_{(y,w)}$ and $\min_{(z')}$ in Model (P2), and Theorem 2 is proven.
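To make the independence argument concrete, the following toy script (all numbers hypothetical; scalar blocks stand in for the matrices $A_k$, $C_k$) precomputes the table $g_k(\bar{x}, e_{ks})$ once and then treats the adversary's problem as a pure selection over that table, which is exactly what justifies swapping the max and the min.

```python
# Toy illustration of the max-min swap in EC.2; all data are hypothetical.
# Each block k is one-dimensional: a_k*x + e_ks + c_k*z >= b_k, z integer.
Z = range(6)                              # small integer domain Z_k
c2 = {0: 2.0, 1: 1.0}                     # second-stage costs c_2k
a  = {0: 1.0, 1: 2.0}                     # stands in for A_k
b  = {0: 6.0, 1: 7.0}                     # right-hand sides b_k
cC = {0: 1.0, 1: 2.0}                     # stands in for C_k
e  = {(0, 0): 0.0, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 3.0}  # deviations e_ks
x  = 1.0                                  # fixed first-stage solution

def g(k, s):
    """g_k(x, e_ks): cheapest feasible recourse; independent of (y, w)."""
    feasible = [z for z in Z if a[k] * x + e[(k, s)] + cC[k] * z >= b[k]]
    return min(c2[k] * z for z in feasible)

# Precompute the inner min once per pair (k, s) ...
G = {(k, s): g(k, s) for k in (0, 1) for s in (0, 1)}
# ... after which the adversary merely selects one deviation s per block k,
# so the max over (y, w) and the min over z' can be interchanged.
worst = sum(max(G[(k, 0)], G[(k, 1)]) for k in (0, 1))
print(G, worst)   # {(0, 0): 10.0, (0, 1): 6.0, (1, 0): 2.0, (1, 1): 1.0} 12.0
```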


EC.3. Proof of Theorem 3


Consider the following problem.

(MP')   \min_{(x, z') \in (X, Z')} \max_{(y,w) \in (Y,W)'} \left( c_1^\top x + \sum_{k \in K} \sum_{s \in S_k} w_{ks} \, c_{2k}^\top z'_{ks} \right)   (EC.3.1)

where $(Y, W)' = \{(y^j, w^j), \; j = 1, 2, \ldots, m\}$. Since $(Y, W)' \subseteq (Y, W)$, the optimal objective value of Model (MP') is a valid lower bound for the optimal objective value of the original robust problem (P4). In the following we demonstrate that (MP') is equivalent to (MP). By writing the convex combination of the $m$ scenarios $(y^j, w^j)$ with multipliers $\lambda_j$, Model (MP') can be rewritten as follows.

(MP'')   \min_{(x, z') \in (X, Z')} \max_{\lambda} \left( c_1^\top x + \sum_{j=1}^{m} \lambda_j \left( \sum_{k \in K} \sum_{s \in S_k} w_{ks}^j \, c_{2k}^\top z'_{ks} \right) \right)   (EC.3.2)
  \sum_{j=1}^{m} \lambda_j = 1   (EC.3.3)
  \lambda_j \ge 0,   j = 1, 2, \ldots, m   (EC.3.4)

In Model (MP''), for a fixed value of $(x, z')$, the inner max problem is a linear programming model and one of its extreme points will be an optimal solution. Each extreme point of this model corresponds to one of the scenarios $(y^j, w^j)$. Therefore, Model (MP'') is equivalent to Model (MP'). By dualizing the inner max problem in Model (MP''), with a dual variable assigned to constraint (EC.3.3), we obtain Model (MP) and Theorem 3 is proven.
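For reference, the dualization step is standard LP duality over the simplex. Writing $v_j = c_1^\top x + \sum_{k \in K} \sum_{s \in S_k} w_{ks}^j \, c_{2k}^\top z'_{ks}$ for the value of scenario $j$, and using $\theta$ as a generic name for the dual variable of constraint (EC.3.3), we have

\max_{\lambda \ge 0} \left\{ \sum_{j=1}^{m} \lambda_j v_j \;:\; \sum_{j=1}^{m} \lambda_j = 1 \right\} \;=\; \min_{\theta} \left\{ \theta \;:\; \theta \ge v_j, \; j = 1, \ldots, m \right\},

so the inner max problem collapses to a single epigraph variable that must dominate the value of every scenario.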


EC.4. Proof of Theorem 4


To prove that the Benders algorithm without stopping conditions converges in at most $|W| + 1$ iterations, it is enough to show that if the algorithm visits an adversary scenario with a repeated vector $w$ in the subproblem, then the optimal solution is found and the Benders algorithm has converged. Let us denote this adversary scenario by $(\bar{y}, \bar{w})$. We also assume that the algorithm obtains the solution $(\bar{x}, \bar{z}')$ by solving the master problem just before the subproblem in which scenario $(\bar{y}, \bar{w})$ is found. Since in scenario $(\bar{y}, \bar{w})$ vector $\bar{w}$ is repeated, the above master problem includes an instance of constraint (42) corresponding to vector $\bar{w}$. Further, since $(\bar{x}, \bar{z}')$ is a feasible solution of the master problem, we have

c_1^\top \bar{x} + \sum_{k \in K} \sum_{s \in S_k} c_{2k}^\top \bar{z}'_{ks} \bar{w}_{ks} \le Opt   (EC.4.1)

where $(\bar{x}, \bar{z}')$ is the optimal solution of the master problem in this iteration and $Opt$ is the optimal objective value of the robust problem. Besides, in the Benders algorithm without stopping conditions for the master problem and subproblem, the adversary scenario $(\bar{y}, \bar{w})$ is visited in the subproblem if it is the optimal solution of the subproblem. Therefore, we have

Opt \le c_1^\top \bar{x} + \sum_{k \in K} \sum_{s \in S_k} c_{2k}^\top \bar{z}'_{ks} \bar{w}_{ks}   (EC.4.2)

Relation (EC.4.2) is valid because the optimal objective value of the subproblem is a valid upper bound for the optimal objective value of the robust problem. Consequently, (EC.4.1)-(EC.4.2) result in relation (EC.4.3).

Opt = c_1^\top \bar{x} + \sum_{k \in K} \sum_{s \in S_k} c_{2k}^\top \bar{z}'_{ks} \bar{w}_{ks}   (EC.4.3)

This relation means that $(\bar{x}, \bar{z}')$ is the optimal solution of the robust problem and the Benders algorithm converges in at most $|W| + 1$ iterations. Since for each $w \in W$ there is at least one $y \in Y$ satisfying $(y, w) \in (Y, W)$, $|W| + 1$ is bounded above by $n + 1$.


Notation used in EC.5 to EC.9

We use the following notation in the proofs of Appendices EC.5 to EC.9.

$W$: The set of vectors $w$ for which there is $y \in Y$ such that $(y, w) \in (Y, W)$.
$n$: The number of adversary scenarios in $(Y, W)$.
$n'$: The number of unique vectors $w$ that the algorithm visits in the subproblem before it converges.
$n''$: The number of times that the algorithm visits an already encountered vector $w$ in the subproblem before it converges.
$\varepsilon$: A positive constant used in the stopping conditions of the master problem and subproblem.
$MP(i)$: The master problem in iteration $i$.
$SP(i)$: The subproblem in iteration $i$.
$Opt$: The optimal objective value of the original robust problem.
$U_i^{MP}$: The upper bound of the master problem in iteration $i$.
$O_i^{MP}$: The optimal objective value of the master problem in iteration $i$.
$L_i^{MP}$: The lower bound of the master problem in iteration $i$.
$U_i^{SP}$: The upper bound of the subproblem in iteration $i$.
$O_i^{SP}$: The optimal objective value of the subproblem in iteration $i$.
$L_i^{SP}$: The lower bound of the subproblem in iteration $i$.
$f(j)$: The iteration in which, for the $j$-th time, the algorithm generates an adversary scenario with a new vector $w$ in the subproblem.
$g(i)$: The iteration in which, for the $i$-th time, the algorithm re-visits any of the generated vectors $w$ in the subproblem.
$I_i$: An indicator that is equal to 1 if in iteration $i$ the algorithm generates an adversary scenario with a repeated vector $w$, and 0 otherwise.
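The $\varepsilon$-stopping logic that the following proofs reason about can be summarized by this schematic loop. It is only a sketch: `solve_master` and `solve_subproblem` are placeholders for the paper's master problem (with constraints (42)) and the adversary's subproblem, the vector `w` is assumed hashable (e.g., a tuple), and the bound bookkeeping follows the conventions above.

```python
def benders_with_stopping(solve_master, solve_subproblem, eps):
    """Schematic epsilon-stopping Benders loop (a sketch, not the paper's code).

    solve_master(cuts, stop_at) may stop early once it finds a feasible
    solution whose upper bound is <= stop_at; solve_subproblem(x, z, stop_at)
    may stop early once it finds a scenario whose value is >= stop_at.
    """
    cuts, seen_w = [], set()
    sp_lb = float("-inf")          # L^SP: current lower bound on the subproblem
    while True:
        # Master problem: may stop once U^MP <= L^SP - eps (its stopping condition).
        x, z, master_ub = solve_master(cuts, stop_at=sp_lb - eps)
        # Subproblem: may stop once a scenario worth >= master value + eps is found.
        (y, w), sp_value = solve_subproblem(x, z, stop_at=master_ub + eps)
        if w in seen_w:
            # Repeated vector w: by Lemma 2 this scenario is optimal for the
            # subproblem and sp_value equals the latest master upper bound, so
            # the two bounds have met; Lemmas 3-5 and Theorem 5 bound how many
            # times this branch can be reached.
            return x, z, sp_value
        seen_w.add(w)
        cuts.append((y, w))        # add an instance of constraint (42) for w
        sp_lb = sp_value           # the subproblem value updates L^SP
```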


EC.5. Proof of Lemma 2


Let $(\bar{x}, \bar{z}')$ and $O_{i-1}^{MP}$ respectively denote the solution and the objective value of the master problem in iteration $i-1$. Further, $(\bar{y}, \bar{w})$ denotes the adversary scenario with the repeated vector $w = \bar{w}$ found in the subproblem in iteration $i$. Since vector $w = \bar{w}$ is repeated, we have already included an instance of constraint (42) corresponding to this vector in the master problem in iteration $i-1$, and the following relation holds.

c_1^\top \bar{x} + \sum_{k \in K} \sum_{s \in S_k} c_{2k}^\top \bar{z}'_{ks} \bar{w}_{ks} \le O_{i-1}^{MP}   (EC.5.1)

The Benders algorithm applies the solution $(\bar{x}, \bar{z}')$ to modify the objective function of the subproblem in iteration $i$. If $(\bar{y}, \bar{w})$ is not the optimal solution of the subproblem, then the following stopping condition is satisfied in the subproblem.

c_1^\top \bar{x} + \sum_{k \in K} \sum_{s \in S_k} c_{2k}^\top \bar{z}'_{ks} \bar{w}_{ks} \ge O_{i-1}^{MP} + \varepsilon   (EC.5.2)

Obviously, relation (EC.5.2) contradicts (EC.5.1), and we conclude that if the algorithm visits an adversary scenario with a repeated vector $w$ in the subproblem, this adversary scenario is the optimal solution of the subproblem. To prove that the optimal objective value of the subproblem is equal to the upper bound of the master problem in iteration $i-1$, we have to show that in the master problem an instance of constraint (42) corresponding to the repeated vector $\bar{w}$ is binding. If constraint (42) is binding for another adversary scenario with a different repeated vector $w' \neq \bar{w}$, then we must have

c_1^\top \bar{x} + \sum_{k \in K} \sum_{s \in S_k} c_{2k}^\top \bar{z}'_{ks} \bar{w}_{ks} < c_1^\top \bar{x} + \sum_{k \in K} \sum_{s \in S_k} c_{2k}^\top \bar{z}'_{ks} w'_{ks},

which is a contradiction regarding the optimality of $(\bar{y}, \bar{w})$ in the subproblem in iteration $i$. Therefore, if the algorithm finds an adversary scenario with a repeated vector $\bar{w}$ in the subproblem, the optimal objective value of the subproblem is equal to the upper bound of the most recent master problem.


EC.6. Proof of Lemma 3


Equivalently, this lemma states that if in $k = \lfloor (O_i^{SP} - Opt)/\varepsilon \rfloor$ iterations after iteration $i$ the algorithm does not find any adversary scenario with a new vector $w$, then $O_{i+k}^{SP} - Opt < \varepsilon$ holds. In iteration $i$, since the algorithm found an adversary scenario with a repeated vector $w$ in subproblem $SP(i)$, regarding Lemma 2 this adversary scenario is the optimal solution of the subproblem and $L_i^{SP} = O_i^{SP}$ holds. Further, in the master problem $MP(i)$ that is solved after subproblem $SP(i)$, two cases are possible.

Case 1) $O_i^{MP} > O_i^{SP} - \varepsilon$ holds. First note that $O_i^{MP} \le Opt$ is valid regarding Theorem 3. $O_i^{MP} > O_i^{SP} - \varepsilon$ together with $O_i^{MP} \le Opt$ results in $Opt > O_i^{SP} - \varepsilon$. The latter relation contradicts the initial assumption $O_i^{SP} - Opt > \varepsilon$. Therefore, this case does not happen.

Case 2) $O_i^{MP} \le O_i^{SP} - \varepsilon$ holds. This relation is equivalent to $O_i^{MP} \le L_i^{SP} - \varepsilon$ with respect to the relation $L_i^{SP} = O_i^{SP}$. Regarding $O_i^{MP} \le L_i^{SP} - \varepsilon$, the stopping condition in master problem $MP(i)$ is satisfied, and the master problem stops when it finds a feasible solution with an upper bound $U_i^{MP}$ satisfying the following relation.

U_i^{MP} \le L_i^{SP} - \varepsilon = O_i^{SP} - \varepsilon   (EC.6.1)

We have assumed that no adversary scenario with a new vector $w$ is generated in $k = \lfloor (O_i^{SP} - Opt)/\varepsilon \rfloor$ iterations after iteration $i$. Therefore, in iteration $i+1$ an adversary scenario with a repeated vector $w$ is generated, and regarding Lemma 2 we have $O_{i+1}^{SP} = U_i^{MP}$. The recent relation together with (EC.6.1) results in the following relation.

O_{i+1}^{SP} \le O_i^{SP} - \varepsilon   (EC.6.2)

Similarly, we can show that for every $h \in \{1, 2, \ldots, k\}$ with $k = \lfloor (O_i^{SP} - Opt)/\varepsilon \rfloor$, relation (EC.6.3) holds. This is because it is supposed that from iteration $i$ to iteration $i + \lfloor (O_i^{SP} - Opt)/\varepsilon \rfloor$ all visited adversary scenarios have repeated vectors $w$.

O_{i+h}^{SP} \le O_{i+h-1}^{SP} - \varepsilon,   h \in \{1, 2, \ldots, k\}   (EC.6.3)

Relation (EC.6.3) is equivalent to (EC.6.4).

O_{i+h}^{SP} - Opt \le O_{i+h-1}^{SP} - Opt - \varepsilon,   h \in \{1, 2, \ldots, k\}   (EC.6.4)

From (EC.6.4) we can simply obtain

O_{i+k}^{SP} - Opt \le O_i^{SP} - Opt - k\varepsilon   (EC.6.5)

and by setting $k = \lfloor (O_i^{SP} - Opt)/\varepsilon \rfloor$ we will have

O_{i+k}^{SP} - Opt \le O_i^{SP} - Opt - \lfloor (O_i^{SP} - Opt)/\varepsilon \rfloor \, \varepsilon < \varepsilon   (EC.6.6)

which is equivalent to

O_{i+k}^{SP} < Opt + \varepsilon   (EC.6.7)

Therefore, we proved that if in $k = \lfloor (O_i^{SP} - Opt)/\varepsilon \rfloor$ iterations after iteration $i$ the algorithm does not find any adversary scenario with a new vector $w$, then $O_{i+k}^{SP} < Opt + \varepsilon$ holds.

ec11

EC.7. Proof of Lemma 4


Three cases are possible.

Case 1) $O_i^{SP} - Opt < \varepsilon$ and $O_i^{SP} - \varepsilon \ge O_i^{MP}$ hold. We show that in this case, in the next iteration, the algorithm generates an adversary scenario with a new vector $w$. Because of $O_i^{SP} - \varepsilon \ge O_i^{MP}$, the stopping condition in the master problem in iteration $i$ is satisfied and the following relation holds.

U_i^{MP} \le O_i^{SP} - \varepsilon < Opt   (EC.7.1)

If the algorithm visits an adversary scenario with a repeated vector $w$ in the subproblem in iteration $i+1$, we must have $U_i^{MP} = O_{i+1}^{SP}$ regarding Lemma 2. Then, with respect to (EC.7.1), $O_{i+1}^{SP} < Opt$ holds, which is a contradiction because the optimal objective value of the subproblem is an upper bound on the optimal objective value of the robust problem. Therefore, in this case, in the next iteration an adversary scenario with a new vector $w$ will be generated.

Case 2) $O_i^{SP} - Opt < \varepsilon$, $O_i^{SP} - \varepsilon < O_i^{MP}$ and $O_i^{MP} < Opt$ hold. We show that in the next iteration the algorithm generates an adversary scenario with a new vector $w$. Because of $O_i^{SP} - \varepsilon < O_i^{MP}$, in the master problem in iteration $i$ there is not any feasible solution satisfying the stopping condition. Thus, the master problem is solved optimally and we will have the following relation.

O_i^{MP} = U_i^{MP}   (EC.7.2)

In the subproblem of the next iteration, if the algorithm visits an adversary scenario with a repeated vector $w$, then regarding Lemma 2 we must have relation (EC.7.3).

U_i^{MP} = O_{i+1}^{SP}   (EC.7.3)

Considering the primary assumption $O_i^{MP} < Opt$ and relations (EC.7.2)-(EC.7.3), we must have $O_{i+1}^{SP} < Opt$, which is a contradiction because the optimal objective value of the subproblem is an upper bound on the optimal objective value of the robust problem. Therefore, in this case, in iteration $i+1$ the algorithm generates an adversary scenario with a new vector $w$.

Case 3) $O_i^{SP} - Opt < \varepsilon$, $O_i^{SP} - \varepsilon < O_i^{MP}$ and $O_i^{MP} = Opt$ hold. We show that in this case, in the next iteration, either the Benders algorithm converges or it generates an adversary scenario with a new vector $w$. Because of $O_i^{SP} - \varepsilon < O_i^{MP}$, in the master problem in iteration $i$ there is not any feasible solution satisfying the stopping condition. Therefore, the master problem is solved optimally and relation (EC.7.4) holds.

L_i^{MP} = O_i^{MP} = U_i^{MP}   (EC.7.4)

In the subproblem of iteration $i+1$, the algorithm generates an adversary scenario with either a new vector $w$ or a repeated vector $w$. In the latter case, regarding Lemma 2, we must have relation (EC.7.3). Considering the primary assumption $O_i^{MP} = Opt$ and relations (EC.7.3)-(EC.7.4), we have $L_i^{MP} = Opt = O_{i+1}^{SP}$. This relation demonstrates that the optimal solution of the robust problem is obtained and the Benders algorithm has converged. Therefore, in this case, in the next iteration either the Benders algorithm converges or it generates an adversary scenario with a new vector $w$.


EC.8. Proof of Lemma 5


Regarding constraint (45), since the algorithm visits an adversary scenario with a repeated vector $w$ in the subproblem of iteration $g(i_1)$, in any iteration $j \ge g(i_1)$ the inequality $U_j^{MP} \le O_{g(i_1)}^{SP}$ holds, and by setting $j = g(i_2) - 1 \ge g(i_1)$ we obtain the following relation.

U_{g(i_2)-1}^{MP} \le O_{g(i_1)}^{SP}   (EC.8.1)

Note that $g(i_2) - 1 \ge g(i_1)$ holds because $i_1 < i_2$. Also, regarding Lemma 2, in the subproblem of iteration $g(i_2)$, in which the algorithm has visited an adversary scenario with a repeated vector $w$, we have $U_{g(i_2)-1}^{MP} = O_{g(i_2)}^{SP}$. This relation together with (EC.8.1) demonstrates the validity of $O_{g(i_2)}^{SP} \le O_{g(i_1)}^{SP}$.


EC.9. Proof of Theorem 5


To prove that the Benders algorithm converges in at most $\sum_{j=1}^{n'} \left( 1 + \left( \lfloor (O_{f(j)+1}^{SP} - Opt)/\varepsilon \rfloor + 1 \right) I_{f(j)+1} \right)$ iterations, it is enough to show that it takes at most $1 + \left( \lfloor (O_{f(j)+1}^{SP} - Opt)/\varepsilon \rfloor + 1 \right) I_{f(j)+1}$ iterations between visiting the $j$-th and the $(j+1)$-th new vector $w$ in the subproblem. Let us first assume $j < n'$. Two cases are possible.

Case 1) $I_{f(j)+1} = 0$, which means that in iteration $f(j) + 1$ the algorithm finds an adversary scenario with a new vector $w$. In this case, the number of iterations between visiting the $j$-th and the $(j+1)$-th new adversary scenario is 1.

Case 2) $I_{f(j)+1} = 1$, which means that in iteration $f(j) + 1$ the algorithm visits an adversary scenario with a repeated vector $w$. In this case, after visiting the adversary scenario with a repeated vector $w$ in iteration $f(j) + 1$, with respect to Lemma 3 it takes at most $k = \lfloor (O_{f(j)+1}^{SP} - Opt)/\varepsilon \rfloor$ iterations to find an adversary scenario with a new vector $w$ or to have $O_{f(j)+1+k}^{SP} - Opt < \varepsilon$. In the latter case, regarding Lemma 4, we know that in the next iteration, $f(j) + k + 2$, either the Benders algorithm converges or an adversary scenario with a new vector $w$ is found. Since it is assumed that $j < n'$, the Benders algorithm does not converge before finding the $(j+1)$-th adversary scenario with a new vector $w$. Thus, the algorithm generates the $(j+1)$-th new vector $w$ by iteration $f(j) + k + 2$. In other words, the number of iterations between visiting the $j$-th and the $(j+1)$-th new vector $w$ is at most $\lfloor (O_{f(j)+1}^{SP} - Opt)/\varepsilon \rfloor + 2$. Therefore, for $j < n'$ the number of iterations between visiting the $j$-th and the $(j+1)$-th new vector $w$ is bounded by relation (EC.9.1).

(1 - I_{f(j)+1}) + \left( \lfloor (O_{f(j)+1}^{SP} - Opt)/\varepsilon \rfloor + 2 \right) I_{f(j)+1}   (EC.9.1)

For $j = n'$ we can use similar reasoning as presented above for $j < n'$. The difference is that only Case 2 is applicable because, regarding the definition of $n'$, no new vector $w$ is visited after visiting the $n'$-th new vector $w$. Moreover, when we use Lemmas 3 and 4 in Case 2, the generation of an adversary scenario with a new vector $w$ is not an option, and we are sure that after finding the $n'$-th new vector $w$ the Benders algorithm converges in at most $\lfloor (O_{f(j)+1}^{SP} - Opt)/\varepsilon \rfloor + 2$ iterations, which is the same as (EC.9.1) with respect to $I_{f(j)+1} = 1$ for $j = n'$. Therefore, by summing the number of iterations computed by (EC.9.1) from $j = 1$ to $j = n'$ we obtain the following maximum number of iterations.

\sum_{j=1}^{n'} \left( 1 + \left( \lfloor (O_{f(j)+1}^{SP} - Opt)/\varepsilon \rfloor + 1 \right) I_{f(j)+1} \right) = n' + \sum_{j=1}^{n'} \left( \lfloor (O_{f(j)+1}^{SP} - Opt)/\varepsilon \rfloor + 1 \right) I_{f(j)+1}
\le n' + \sum_{j=1}^{n'} \left( \lfloor (O_{g(1)}^{SP} - Opt)/\varepsilon \rfloor + 1 \right) = n' \left( \lfloor (O_{g(1)}^{SP} - Opt)/\varepsilon \rfloor + 2 \right) \le |W| \left( \lfloor (O_{g(1)}^{SP} - Opt)/\varepsilon \rfloor + 2 \right)


Proof of the first inequality: We know that the algorithm visits at least one adversary scenario with a repeated vector $w$ whenever some $I_{f(j)+1} = 1$, and $g(1)$ denotes the iteration in which a repeated vector $w$ is visited for the first time. To prove the first inequality, it is enough to show the validity of relation (EC.9.2).

\left( \lfloor (O_{f(j)+1}^{SP} - Opt)/\varepsilon \rfloor + 1 \right) I_{f(j)+1} \le \lfloor (O_{g(1)}^{SP} - Opt)/\varepsilon \rfloor + 1,   j \in \{1, 2, \ldots, n'\}   (EC.9.2)

As $O_{g(1)}^{SP} \ge Opt$ is a valid relation, (EC.9.2) holds when $I_{f(j)+1}$ equals 0. In the case that $I_{f(j)+1}$ is equal to 1, regarding the definitions of $g(1)$ and $I_{f(j)+1}$ we know that $g(1) \le f(j) + 1$. Thus, with respect to Lemma 5, we have $O_{g(1)}^{SP} \ge O_{f(j)+1}^{SP}$, which results in $\lfloor (O_{g(1)}^{SP} - Opt)/\varepsilon \rfloor + 1 \ge \lfloor (O_{f(j)+1}^{SP} - Opt)/\varepsilon \rfloor + 1$. Therefore, relation (EC.9.2) is valid.


EC.10. An example to show the local optimality of the dual algorithm


Consider the problem $\min_{(x_1, x_2) \in X} \left( 2x_1 + 1.5x_2 + \max_{(y_1, y_2) \in Y} (x_1 y_1 + x_2 y_2) \right)$, where

Y = \{ (y_1, y_2) \in \mathbb{N}^2 \mid y_1 \le 2, \; y_2 \ge 1, \; 0.99 y_1 + 2 y_2 \le 5.98, \; 1.99 y_1 + y_2 \ge 2.99 \}

and

X = \{ (x_1, x_2) \in \mathbb{R}^2 \mid x_1 + x_2 = 1, \; (x_1, x_2) \in \{0, 1\}^2 \}.

The solution space of the adversary variables $(y_1, y_2)$ consists of the four points A, B, C, and D in Figure EC.1. The optimal solution of this problem is $(x_1, x_2) = (0, 1)$. For this solution, the objective line in the adversary's problem is line L1, which shows that scenarios A and B in the adversary's problem are optimal, with a total objective value of 3.5. If we relax the integrality constraints on variables $y_1$ and $y_2$, the solution space of the adversary extends to the polytope E-B-C-D. In this case, for the solution $(x_1, x_2) = (0, 1)$ the worst adversary scenario is point E, with an objective value of 4.49. For the solution $(x_1, x_2) = (1, 0)$, the objective line L2 presents the objective function of the inner max problem; it finds points B and C as the optimal adversary scenarios, with an objective value of 4. Consequently, if we apply the dual algorithm to this problem, it converges in the first iteration to the non-optimal solution $(x_1, x_2) = (1, 0)$, because over the relaxed adversary region $(1, 0)$ attains the smaller worst-case value (4 versus 4.49).
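The numbers above can be checked by brute force. In the following short script (a verification sketch; the point coordinates are inferred from the constraint set rather than quoted from the paper), exact rational arithmetic is used because the constraints $0.99y_1 + 2y_2 \le 5.98$ and $1.99y_1 + y_2 \ge 2.99$ are tight at integer points.

```python
from fractions import Fraction as F

# Integer adversary set Y, enumerated exactly.
Y = [(y1, y2) for y1 in range(3) for y2 in range(4)
     if y2 >= 1
     and F(99, 100) * y1 + 2 * y2 <= F(598, 100)
     and F(199, 100) * y1 + y2 >= F(299, 100)]
# Y == [(1, 1), (1, 2), (2, 1), (2, 2)]: points D, A, C, B (inferred coordinates).

def worst_case(x, scenarios):
    """2*x1 + 1.5*x2 plus the adversary's best response x1*y1 + x2*y2."""
    x1, x2 = x
    return 2 * x1 + 1.5 * x2 + max(x1 * y1 + x2 * y2 for y1, y2 in scenarios)

print(worst_case((0, 1), Y))   # 3.5: scenarios A and B are worst for (0, 1)
print(worst_case((1, 0), Y))   # 4.0: scenarios B and C are worst for (1, 0)
# Relaxing integrality adds the vertex E = (0, 2.99) to the adversary's region:
print(worst_case((0, 1), Y + [(0, F(299, 100))]))   # 4.49
# Over the relaxed region, (1, 0) looks better (4.0 < 4.49), which is how the
# dual algorithm ends up at the non-optimal first-stage solution.
```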

Figure EC.1: The adversary solution space in the example presented to show the non-optimality of the dual algorithm.


EC.11. Non-adjustable nurse planning problem


In a non-adjustable robust problem, the decision maker is not allowed to take recourse actions in the second stage. Therefore, to formulate and solve the non-adjustable nurse planning problem, in Model (32)-(40) we should set the second-stage variables $z'_{ds}$ to zero. In this case, Model (32)-(40) reduces to the following problem.

\min_{x} \max_{y, w} \; \sum_{d \in D} c_1 x_d   (EC.11.1)
  x_d \ge s,   d \in D, \; s \in S_d   (EC.11.2)
  x_d \ge 0 \text{ and integer},   d \in D   (EC.11.3)
  \sum_{s \in S_d} w_{ds} = 1,   d \in D   (EC.11.4)
  \sum_{t \in T} \sum_{p \in P_{td}} y_{tp} = \sum_{s \in S_d} s \, w_{ds},   d \in D   (EC.11.5)
  \sum_{p \in P_t} y_{tp} = 1,   t \in T   (EC.11.6)
  w_{ds} \in \{0, 1\},   d \in D, \; s \in S_d   (EC.11.7)
  y_{tp} \in \{0, 1\},   t \in T, \; p \in P_t   (EC.11.8)

It is clear that we can remove the adversary's variables $y_{tp}$ and $w_{ds}$ from the above model. Also, we can consider constraint (EC.11.2) only for the highest value $s \in S_d$ for each $d \in D$. Therefore, assuming that $s_{\max,d}$ denotes the highest value $s \in S_d$, the non-adjustable nurse planning problem reduces to the following model.

\min_{x} \; \sum_{d \in D} c_1 x_d   (EC.11.9)
  x_d \ge s_{\max,d},   d \in D   (EC.11.10)
  x_d \ge 0 \text{ and integer},   d \in D   (EC.11.11)

The above model is trivial, and its optimal solution is given by

x_d = \max\left( 0, \lceil s_{\max,d} \rceil \right),   d \in D   (EC.11.12)
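A direct transcription of the closed form (EC.11.12); the function name and the demand data below are illustrative only.

```python
import math

def non_adjustable_staffing(s_max):
    """Closed form (EC.11.12): for each department d, plan
    x_d = max(0, ceil(s_max_d)) first-stage nurses, where s_max_d is the
    highest demand value in S_d."""
    return {d: max(0, math.ceil(s)) for d, s in s_max.items()}

# Hypothetical highest-demand values for three departments:
print(non_adjustable_staffing({"d1": 3.2, "d2": 0.0, "d3": 5.0}))
# -> {'d1': 4, 'd2': 0, 'd3': 5}
```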


EC.12. Details on the number of first-stage nurses in the best and non-adjustable solutions in Tables 2 to 5
Table EC.12.1: Details of the number of first-stage nurses in the best and non-adjustable solutions for instances with a planning horizon of two weeks (L = 2). "Best" columns refer to the best solution; "Non-adj." columns to the non-adjustable solution.

| IF | OR | Sur. | Best Ave | Best Min | Best Max | Best STD | Non-adj. Ave | Non-adj. Min | Non-adj. Max | Non-adj. STD |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.1 | 1 | 39 | 2.63 | 2.14 | 2.93 | 0.22 | 3.15 | 2.79 | 3.71 | 0.28 |
| 1.1 | 2 | 79 | 5.00 | 4.29 | 5.86 | 0.46 | 6.20 | 5.50 | 7.36 | 0.50 |
| 1.1 | 3 | 119 | 7.46 | 7.14 | 8.07 | 0.30 | 9.10 | 8.64 | 9.71 | 0.36 |
| 1.1 | 4 | 157 | 9.26 | 8.21 | 10.86 | 0.80 | 12.07 | 11.07 | 13.57 | 0.79 |
| 1.1 | 5 | 202 | 11.69 | 9.57 | 13.36 | 0.89 | 15.19 | 13.43 | 16.21 | 0.71 |
| 1.3 | 1 | 39 | 2.63 | 2.14 | 2.93 | 0.22 | 3.15 | 2.79 | 3.71 | 0.28 |
| 1.3 | 2 | 79 | 5.00 | 4.29 | 5.86 | 0.46 | 6.20 | 5.50 | 7.36 | 0.50 |
| 1.3 | 3 | 119 | 7.46 | 7.14 | 8.07 | 0.30 | 9.10 | 8.64 | 9.71 | 0.36 |
| 1.3 | 4 | 157 | 9.51 | 8.00 | 10.36 | 0.72 | 12.07 | 11.07 | 13.57 | 0.79 |
| 1.3 | 5 | 202 | 11.89 | 9.86 | 12.71 | 0.78 | 15.19 | 13.43 | 16.21 | 0.71 |
| 1.5 | 1 | 39 | 2.65 | 2.14 | 3.00 | 0.24 | 3.15 | 2.79 | 3.71 | 0.28 |
| 1.5 | 2 | 79 | 5.02 | 4.50 | 5.86 | 0.43 | 6.20 | 5.50 | 7.36 | 0.50 |
| 1.5 | 3 | 119 | 7.51 | 7.14 | 8.50 | 0.45 | 9.10 | 8.64 | 9.71 | 0.36 |
| 1.5 | 4 | 157 | 9.58 | 8.07 | 10.50 | 0.72 | 12.07 | 11.07 | 13.57 | 0.79 |
| 1.5 | 5 | 202 | 12.01 | 10.00 | 13.29 | 0.80 | 15.19 | 13.43 | 16.21 | 0.71 |
| 1.7 | 1 | 39 | 2.67 | 2.14 | 3.00 | 0.24 | 3.15 | 2.79 | 3.71 | 0.28 |
| 1.7 | 2 | 79 | 5.06 | 4.50 | 5.86 | 0.43 | 6.20 | 5.50 | 7.36 | 0.50 |
| 1.7 | 3 | 119 | 7.54 | 7.14 | 8.50 | 0.43 | 9.10 | 8.64 | 9.71 | 0.36 |
| 1.7 | 4 | 157 | 9.76 | 8.36 | 10.86 | 0.70 | 12.07 | 11.07 | 13.57 | 0.79 |
| 1.7 | 5 | 202 | 12.49 | 10.14 | 13.79 | 0.88 | 15.19 | 13.43 | 16.21 | 0.71 |
| 1.9 | 1 | 39 | 2.67 | 2.14 | 3.00 | 0.24 | 3.15 | 2.79 | 3.71 | 0.28 |
| 1.9 | 2 | 79 | 5.06 | 4.50 | 5.86 | 0.43 | 6.20 | 5.50 | 7.36 | 0.50 |
| 1.9 | 3 | 119 | 7.54 | 7.14 | 8.50 | 0.43 | 9.10 | 8.64 | 9.71 | 0.36 |
| 1.9 | 4 | 157 | 9.75 | 8.43 | 10.71 | 0.66 | 12.07 | 11.07 | 13.57 | 0.79 |
| 1.9 | 5 | 202 | 12.89 | 10.36 | 15.43 | 1.24 | 15.19 | 13.43 | 16.21 | 0.71 |
| Average | | 119 | 7.39 | 6.38 | 8.31 | 0.54 | 9.14 | 8.29 | 10.11 | 0.53 |


Table EC.12.2: Details of the number of first-stage nurses in the best and non-adjustable solutions for instances with a planning horizon of three weeks (L = 3). "Best" columns refer to the best solution; "Non-adj." columns to the non-adjustable solution.

| IF | OR | Sur. | Best Ave | Best Min | Best Max | Best STD | Non-adj. Ave | Non-adj. Min | Non-adj. Max | Non-adj. STD |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.1 | 1 | 59 | 3.36 | 3.00 | 3.76 | 0.20 | 4.31 | 3.90 | 4.71 | 0.23 |
| 1.1 | 2 | 121 | 6.09 | 5.05 | 7.52 | 0.67 | 8.64 | 7.48 | 9.86 | 0.63 |
| 1.1 | 3 | 182 | 8.71 | 7.76 | 9.52 | 0.53 | 12.61 | 11.57 | 13.19 | 0.48 |
| 1.1 | 4 | 240 | 11.31 | 10.67 | 12.81 | 0.56 | 16.57 | 15.67 | 17.33 | 0.56 |
| 1.1 | 5 | 300 | 14.17 | 13.43 | 15.10 | 0.54 | 20.71 | 19.43 | 21.95 | 0.70 |
| 1.3 | 1 | 59 | 3.36 | 3.00 | 3.76 | 0.20 | 4.31 | 3.90 | 4.71 | 0.23 |
| 1.3 | 2 | 121 | 6.28 | 5.43 | 7.19 | 0.53 | 8.64 | 7.48 | 9.86 | 0.63 |
| 1.3 | 3 | 182 | 8.95 | 7.95 | 9.76 | 0.52 | 12.61 | 11.57 | 13.19 | 0.48 |
| 1.3 | 4 | 240 | 11.73 | 11.14 | 13.00 | 0.51 | 16.57 | 15.67 | 17.33 | 0.56 |
| 1.3 | 5 | 300 | 14.66 | 13.90 | 15.38 | 0.52 | 20.71 | 19.43 | 21.95 | 0.70 |
| 1.5 | 1 | 59 | 3.48 | 3.14 | 3.90 | 0.23 | 4.31 | 3.90 | 4.71 | 0.23 |
| 1.5 | 2 | 121 | 6.67 | 5.76 | 7.95 | 0.66 | 8.64 | 7.48 | 9.86 | 0.63 |
| 1.5 | 3 | 182 | 9.25 | 8.48 | 9.90 | 0.42 | 12.61 | 11.57 | 13.19 | 0.48 |
| 1.5 | 4 | 240 | 11.78 | 11.14 | 13.14 | 0.55 | 16.57 | 15.67 | 17.33 | 0.56 |
| 1.5 | 5 | 300 | 14.90 | 14.05 | 15.52 | 0.52 | 20.71 | 19.43 | 21.95 | 0.70 |
| 1.7 | 1 | 59 | 3.50 | 3.14 | 3.90 | 0.24 | 4.31 | 3.90 | 4.71 | 0.23 |
| 1.7 | 2 | 121 | 6.86 | 6.29 | 7.81 | 0.46 | 8.64 | 7.48 | 9.86 | 0.63 |
| 1.7 | 3 | 182 | 9.90 | 9.14 | 10.57 | 0.44 | 12.61 | 11.57 | 13.19 | 0.48 |
| 1.7 | 4 | 240 | 12.30 | 11.52 | 14.00 | 0.72 | 16.57 | 15.67 | 17.33 | 0.56 |
| 1.7 | 5 | 300 | 15.90 | 14.24 | 20.86 | 1.77 | 20.71 | 19.43 | 21.95 | 0.70 |
| 1.9 | 1 | 59 | 3.50 | 3.14 | 3.90 | 0.24 | 4.31 | 3.90 | 4.71 | 0.23 |
| 1.9 | 2 | 121 | 6.88 | 6.24 | 7.81 | 0.47 | 8.64 | 7.48 | 9.86 | 0.63 |
| 1.9 | 3 | 182 | 10.83 | 9.14 | 12.67 | 1.16 | 12.61 | 11.57 | 13.19 | 0.48 |
| 1.9 | 4 | 240 | 15.70 | 12.14 | 17.00 | 1.43 | 16.57 | 15.67 | 17.33 | 0.56 |
| 1.9 | 5 | 300 | 20.40 | 19.19 | 21.52 | 0.66 | 20.71 | 19.43 | 21.95 | 0.70 |
| Average | | 180 | 9.62 | 8.72 | 10.73 | 0.59 | 12.57 | 11.61 | 13.41 | 0.52 |


Table EC.12.3: Details of the number of first-stage nurses in the best and non-adjustable solutions for instances with a planning horizon of four weeks (L = 4). "Best" columns refer to the best solution; "Non-adj." columns to the non-adjustable solution.

| IF | OR | Sur. | Best Ave | Best Min | Best Max | Best STD | Non-adj. Ave | Non-adj. Min | Non-adj. Max | Non-adj. STD |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.1 | 1 | 80 | 3.67 | 3.21 | 4.18 | 0.31 | 5.01 | 4.36 | 5.39 | 0.26 |
| 1.1 | 2 | 163 | 6.79 | 5.93 | 7.54 | 0.44 | 9.88 | 8.86 | 10.71 | 0.56 |
| 1.1 | 3 | 241 | 9.45 | 8.36 | 9.89 | 0.45 | 14.34 | 13.71 | 14.71 | 0.31 |
| 1.1 | 4 | 318 | 12.13 | 11.43 | 12.93 | 0.46 | 18.74 | 18.18 | 19.64 | 0.46 |
| 1.1 | 5 | 397 | 15.33 | 14.68 | 16.00 | 0.43 | 23.09 | 22.18 | 23.79 | 0.48 |
| 1.3 | 1 | 80 | 3.83 | 3.46 | 4.18 | 0.22 | 5.01 | 4.36 | 5.39 | 0.26 |
| 1.3 | 2 | 163 | 7.13 | 6.21 | 7.96 | 0.48 | 9.88 | 8.86 | 10.71 | 0.56 |
| 1.3 | 3 | 241 | 9.77 | 8.57 | 10.32 | 0.50 | 14.34 | 13.71 | 14.71 | 0.31 |
| 1.3 | 4 | 318 | 12.74 | 12.00 | 13.79 | 0.52 | 18.74 | 18.18 | 19.64 | 0.46 |
| 1.3 | 5 | 397 | 15.82 | 15.14 | 16.46 | 0.41 | 23.09 | 22.18 | 23.79 | 0.48 |
| 1.5 | 1 | 80 | 3.96 | 3.57 | 4.29 | 0.23 | 5.01 | 4.36 | 5.39 | 0.26 |
| 1.5 | 2 | 163 | 7.47 | 6.50 | 7.86 | 0.38 | 9.88 | 8.86 | 10.71 | 0.56 |
| 1.5 | 3 | 241 | 10.10 | 8.79 | 10.79 | 0.61 | 14.34 | 13.71 | 14.71 | 0.31 |
| 1.5 | 4 | 318 | 13.33 | 12.29 | 14.57 | 0.70 | 18.74 | 18.18 | 19.64 | 0.46 |
| 1.5 | 5 | 397 | 19.44 | 15.61 | 23.14 | 3.16 | 23.09 | 22.18 | 23.79 | 0.48 |
| 1.7 | 1 | 80 | 4.00 | 3.57 | 4.29 | 0.19 | 5.01 | 4.36 | 5.39 | 0.26 |
| 1.7 | 2 | 163 | 7.78 | 7.21 | 8.64 | 0.43 | 9.88 | 8.86 | 10.71 | 0.56 |
| 1.7 | 3 | 241 | 10.75 | 9.54 | 11.54 | 0.53 | 14.34 | 13.71 | 14.71 | 0.31 |
| 1.7 | 4 | 318 | 16.10 | 12.64 | 19.39 | 2.78 | 18.74 | 18.18 | 19.64 | 0.46 |
| 1.7 | 5 | 397 | 22.84 | 22.00 | 23.57 | 0.46 | 23.09 | 22.18 | 23.79 | 0.48 |
| 1.9 | 1 | 80 | 4.01 | 3.57 | 4.29 | 0.18 | 5.01 | 4.36 | 5.39 | 0.26 |
| 1.9 | 2 | 163 | 8.52 | 7.32 | 10.50 | 1.25 | 9.88 | 8.86 | 10.71 | 0.56 |
| 1.9 | 3 | 241 | 14.04 | 13.39 | 14.46 | 0.35 | 14.34 | 13.71 | 14.71 | 0.31 |
| 1.9 | 4 | 318 | 18.44 | 17.71 | 19.39 | 0.50 | 18.74 | 18.18 | 19.64 | 0.46 |
| 1.9 | 5 | 397 | 22.79 | 21.96 | 23.54 | 0.47 | 23.09 | 22.18 | 23.79 | 0.48 |
| Average | | 240 | 11.21 | 10.19 | 12.14 | 0.66 | 14.21 | 13.46 | 14.85 | 0.42 |


Table EC.12.4: Details of the number of first-stage nurses in the best and non-adjustable solutions for instances with a planning horizon of five weeks (L = 5). "Best" columns refer to the best solution; "Non-adj." columns to the non-adjustable solution.

| IF | OR | Sur. | Best Ave | Best Min | Best Max | Best STD | Non-adj. Ave | Non-adj. Min | Non-adj. Max | Non-adj. STD |
|---|---|---|---|---|---|---|---|---|---|---|
| 1.1 | 1 | 101 | 3.76 | 3.51 | 3.94 | 0.17 | 5.29 | 5.11 | 5.49 | 0.11 |
| 1.1 | 2 | 202 | 6.71 | 6.46 | 7.11 | 0.21 | 10.33 | 10.00 | 11.03 | 0.28 |
| 1.1 | 3 | 302 | 10.04 | 9.46 | 11.00 | 0.44 | 15.31 | 14.74 | 16.06 | 0.41 |
| 1.1 | 4 | 401 | 13.04 | 12.49 | 13.89 | 0.41 | 20.19 | 19.54 | 20.89 | 0.44 |
| 1.1 | 5 | 503 | 17.06 | 16.49 | 17.86 | 0.40 | 25.30 | 24.49 | 26.31 | 0.55 |
| 1.3 | 1 | 101 | 4.03 | 3.71 | 4.26 | 0.16 | 5.29 | 5.11 | 5.49 | 0.11 |
| 1.3 | 2 | 202 | 7.29 | 6.86 | 7.66 | 0.26 | 10.33 | 10.00 | 11.03 | 0.28 |
| 1.3 | 3 | 302 | 10.44 | 9.80 | 11.26 | 0.45 | 15.31 | 14.74 | 16.06 | 0.41 |
| 1.3 | 4 | 401 | 13.70 | 13.34 | 14.57 | 0.37 | 20.19 | 19.54 | 20.89 | 0.44 |
| 1.3 | 5 | 503 | 17.51 | 16.89 | 18.14 | 0.40 | 25.30 | 24.49 | 26.31 | 0.55 |
| 1.5 | 1 | 101 | 4.30 | 3.89 | 4.66 | 0.22 | 5.29 | 5.11 | 5.49 | 0.11 |
| 1.5 | 2 | 202 | 7.51 | 7.03 | 8.23 | 0.37 | 10.33 | 10.00 | 11.03 | 0.28 |
| 1.5 | 3 | 302 | 11.59 | 9.91 | 15.69 | 2.05 | 15.31 | 14.74 | 16.06 | 0.41 |
| 1.5 | 4 | 401 | 17.98 | 13.74 | 20.40 | 2.68 | 20.19 | 19.54 | 20.89 | 0.44 |
| 1.5 | 5 | 503 | 25.03 | 24.37 | 26.17 | 0.55 | 25.30 | 24.49 | 26.31 | 0.55 |
| 1.7 | 1 | 101 | 4.23 | 3.97 | 4.54 | 0.16 | 5.29 | 5.11 | 5.49 | 0.11 |
| 1.7 | 2 | 202 | 7.93 | 7.57 | 8.77 | 0.36 | 10.33 | 10.00 | 11.03 | 0.28 |
| 1.7 | 3 | 302 | 11.88 | 10.80 | 15.63 | 1.36 | 15.31 | 14.74 | 16.06 | 0.41 |
| 1.7 | 4 | 401 | 19.99 | 19.34 | 20.66 | 0.45 | 20.19 | 19.54 | 20.89 | 0.44 |
| 1.7 | 5 | 503 | 25.02 | 24.31 | 26.00 | 0.52 | 25.30 | 24.49 | 26.31 | 0.55 |
| 1.9 | 1 | 101 | 4.26 | 4.06 | 4.46 | 0.13 | 5.29 | 5.11 | 5.49 | 0.11 |
| 1.9 | 2 | 202 | 8.89 | 7.57 | 9.97 | 0.97 | 10.33 | 10.00 | 11.03 | 0.28 |
| 1.9 | 3 | 302 | 15.03 | 14.40 | 15.63 | 0.39 | 15.31 | 14.74 | 16.06 | 0.41 |
| 1.9 | 4 | 401 | 19.96 | 19.34 | 20.66 | 0.48 | 20.19 | 19.54 | 20.89 | 0.44 |
| 1.9 | 5 | 503 | 25.01 | 24.34 | 26.00 | 0.52 | 25.30 | 24.49 | 26.31 | 0.55 |
| Average | | 302 | 12.49 | 11.75 | 13.49 | 0.58 | 15.28 | 14.78 | 15.95 | 0.36 |
