
Genetic Algorithms and

Fuzzy Multiobjective Optimization


OPERATIONS RESEARCH/COMPUTER SCIENCE
INTERFACES SERIES

Series Editors

Professor Ramesh Sharda Prof. Dr. Stefan Voß


Oklahoma State University Technische Universität Braunschweig

Other published titles in the series:

Brown, Donald/Scherer, William T.


Intelligent Scheduling Systems

Nash, Stephen G./Sofer, Ariela


The Impact of Emerging Technologies on Computer Science & Operations Research

Barth, Peter
Logic-Based 0-1 Constraint Programming

Jones, Christopher V.
Visualization and Optimization

Barr, Richard S./ Helgason, Richard V./ Kennington, Jeffery L.


Interfaces in Computer Science and Operations Research: Advances in
Metaheuristics, Optimization, and Stochastic Modeling Technologies

Ellacott, Stephen W./ Mason, John C./ Anderson, Iain J.


Mathematics of Neural Networks: Models, Algorithms & Applications

Woodruff, David L.
Advances in Computational & Stochastic Optimization, Logic Programming, and Heuristic Search

Klein, Robert
Scheduling of Resource-Constrained Projects

Bierwirth, Christian
Adaptive Search and the Management of Logistics Systems

Laguna, Manuel/ González-Velarde, José Luis


Computing Tools for Modeling, Optimization and Simulation

Stilman, Boris
Linguistic Geometry: From Search to Construction
GENETIC ALGORITHMS AND
FUZZY MULTIOBJECTIVE OPTIMIZATION

MASATOSHI SAKAWA
Department of Artificial Complex Systems Engineering
Graduate School of Engineering
Hiroshima University
Higashi-Hiroshima, 739-8527, Japan

Springer Science+Business Media, LLC


Library of Congress Cataloging-in-Publication Data

Sakawa, Masatoshi, 1947-


Genetic algorithms and fuzzy multiobjective optimization / Masatoshi Sakawa.
p. cm. -- (Operations research/computer science interfaces series ; ORCS 14)
Includes bibliographical references and index.
ISBN 978-1-4613-5594-6 ISBN 978-1-4615-1519-7 (eBook)
DOI 10.1007/978-1-4615-1519-7
1. Genetic algorithms. 2. Mathematical optimization. 3. Fuzzy logic. 4. Fuzzy systems.
5. Fuzzy algorithms. I. Title. II. Series.

QA402.5 .S247 2001


519.3--dc21 2001038702

Copyright 2002 by Springer Science+Business Media New York


Originally published by Kluwer Academic Publishers in 2002
Softcover reprint of the hardcover 1st edition 2002
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system
or transmitted in any form or by any means, mechanical, photo-copying, recording, or
otherwise, without the prior written permission of the publisher, Springer Science+Business Media, LLC.

Printed on acid-free paper.


To my parents,
Takeshige and Toshiko;
my wife Masako; and
my son Hideaki
Contents

Preface ix

1. INTRODUCTION 1
1.1 Introduction and historical remarks 1
1.2 Organization of the book 7
2. FOUNDATIONS OF GENETIC ALGORITHMS 11
2.1 Outline of genetic algorithms 11
2.2 Coding, fitness, and genetic operators 15
3. GENETIC ALGORITHMS FOR 0-1 PROGRAMMING 29
3.1 Introduction 29
3.2 Multidimensional 0-1 knapsack problems 30
3.3 0-1 programming 39
3.4 Conclusion 52
4. FUZZY MULTIOBJECTIVE 0-1 PROGRAMMING 53
4.1 Introduction 53
4.2 Fuzzy multiobjective 0-1 programming 54
4.3 Fuzzy multiobjective 0-1 programming with fuzzy numbers 70
4.4 Conclusion 81
5. GENETIC ALGORITHMS FOR INTEGER PROGRAMMING 83
5.1 Introduction 83
5.2 Multidimensional integer knapsack problems 84
5.3 Integer programming 98
5.4 Conclusion 104
6. FUZZY MULTIOBJECTIVE INTEGER PROGRAMMING 107
6.1 Introduction 107
6.2 Fuzzy multiobjective integer programming 108
6.3 Fuzzy multiobjective integer programming with fuzzy
numbers 118

6.4 Conclusion 131


7. GENETIC ALGORITHMS FOR NONLINEAR
PROGRAMMING 133
7.1 Introduction 133
7.2 Floating-point genetic algorithms 134
7.3 GENOCOP III 141
7.4 Revised GENOCOP III 143
7.5 Conclusion 151
8. FUZZY MULTIOBJECTIVE NONLINEAR PROGRAMMING 153
8.1 Introduction 153
8.2 Multiobjective nonlinear programming 154
8.3 Multiobjective nonlinear programming problems with fuzzy
numbers 159
8.4 Conclusion 167
9. GENETIC ALGORITHMS FOR JOB-SHOP SCHEDULING 169
9.1 Introduction 169
9.2 Job-shop scheduling 171
9.3 Genetic algorithms for job-shop scheduling 174
10. FUZZY MULTIOBJECTIVE JOB-SHOP SCHEDULING 189
10.1 Introduction 189
10.2 Job-shop scheduling with fuzzy processing time and fuzzy
due date 191
10.3 Multiobjective job-shop scheduling under fuzziness 208
11. SOME APPLICATIONS 223
11.1 Flexible scheduling in a machining center 223
11.2 Operation planning of district heating and cooling plants 236
11.3 Coal purchase planning in electric power plants 253
References 273
Index 287
Preface

In the early 1970s, genetic algorithms were initially proposed by Hol-
land, his colleagues, and his students at the University of Michigan as
stochastic search techniques based on the mechanism of natural selection
and natural genetics. Although genetic algorithms were not well-known
at the beginning, since the First International Conference on Genetic Al-
gorithms was held at the Carnegie-Mellon University in 1985, an enor-
mous number of articles together with several significant monographs
and books have been published, and nowadays, genetic algorithms make
a major contribution to optimization, adaptation, and learning in a wide
variety of unexpected fields.
As we look at recent applications of genetic algorithms to optimization
problems, especially to various kinds of discrete optimization problems,
global optimization problems, or other hard optimization problems, we
can see continuing advances. However, there seemed to be no genetic
algorithm approach to deal with multiobjective programming problems,
until Schaffer first proposed the so-called VEGA (Vector Evaluated Ge-
netic Algorithm). Although VEGA was implemented to find Pareto
optimal solutions of several multiobjective numerical optimization test
problems, the algorithm seems to have a bias toward some Pareto optimal
solutions. Since then, several articles have been published to overcome
the weakness of VEGA.
Unfortunately, however, these papers focused on multiobjective non-
linear programming problems with continuous variables and were mainly
weighted toward finding Pareto optimal solutions, not toward deriving
a compromise or satisficing (see p. 2) solution for the decision maker.
Although several excellent books in the field of genetic algorithm opti-
mization have already been published in recent years, they focus mainly
on single-objective discrete or other hard optimization problems under
certainty. In spite of its urgent necessity, there seems to be no book that
is designed to present genetic algorithms for solving not only single-
objective but also fuzzy and multiobjective optimization problems in a
unified way.
In this book, the author is concerned with introducing the latest
advances in the field of genetic algorithm optimization for 0-1 pro-
gramming, integer programming, nonconvex programming, and job-shop
scheduling problems under multiobjectiveness and fuzziness together
with a wide range of actual applications on the basis of the author's con-
tinuing research works. Special stress is placed on interactive decision-
making aspects of fuzzy multiobjective optimization for human-centered
systems in most realistic situations when dealing with fuzziness.
The intended readers of this book are senior undergraduate students,
graduate students, researchers, and practitioners in the fields of oper-
ations research, computer science, industrial engineering, management
science, systems engineering, and other engineering disciplines that deal
with the subjects of multiobjective programming for discrete or other
hard optimization problems under fuzziness. In order to master all the
material discussed in this book, the readers would probably be required
to have some background in linear algebra and mathematical program-
ming. However, by skipping the mathematical details, much can be
learned about fuzzy multiobjective programming through genetic algo-
rithms for human-centered systems in most realistic settings without
prior mathematical sophistication.
The author would like to express his sincere appreciation to Profes-
sor Yoshikazu Sawaragi, chairman of the Japan Institute of Systems
Research and emeritus professor of Kyoto University, Department of
Applied Mathematics and Physics, for his invariant stimulus and en-
couragement ever since the author's student days at Kyoto University.
The author is also thankful to Dr. Kosuke Kato of Hiroshima University
for his contribution to Chapters 3 through 6 and Dr. Masahiro Inuiguchi
of Osaka University and Dr. Isao Shiroumaru of The Chugoku Electric
Power Co., Inc. for their contributions to Section 11.3. Further thanks
are due to Dr. Kosuke Kato of Hiroshima University for reviewing parts
of the manuscript and for his helpful comments and suggestions. The
author also wishes to thank all of his undergraduate and graduate stu-
dents at Hiroshima University. Finally, the author would like to thank
Dr. Gary Folven, the Managing Editor of Kluwer Academic Publishers,
Boston, for his assistance in the publication of this book.
Hiroshima, April 2001 Masatoshi Sakawa
Chapter 1

INTRODUCTION

1.1 Introduction and historical remarks


Genetic algorithms, originally called genetic plans, were initiated by
Holland, his colleagues, and his students at the University of Michigan
in the 1970s as stochastic search techniques based on the mechanism of
natural selection and natural genetics. In his 1975 monograph Adap-
tation in Natural and Artificial Systems [75], Holland presented genetic
algorithms as an abstraction of biological evolution with a theoretical
framework for adaptation. In the same year, De Jong completed his
dissertation An Analysis of the Behavior of a Class of Genetic Adap-
tive Systems [46]. De Jong considered genetic algorithms in a function
optimization setting, and since then genetic algorithms have attracted
considerable attention as global methods for complex function optimiza-
tion.
Although genetic algorithms were not well-known at the beginning,
since the First International Conference on Genetic Algorithms [70] was
held at the Carnegie-Mellon University, Pittsburgh, in 1985, an enor-
mous number of articles together with several significant monographs
and books have been published. Especially, after the publication of Gold-
berg's 1989 book entitled Genetic Algorithms in Search, Optimization,
and Machine Learning [66], genetic algorithms have attracted consider-
able attention in a number of fields as a methodology for search, opti-
mization, and learning [12, 21, 43, 44, 54, 58, 71, 109, 188, 190, 212]. In
1992, with the publication of his book Genetic Algorithms + Data Struc-
tures = Evolution Programs [112], Michalewicz presented three years of
research results from early 1989 through 1991, including various mod-
ifications of genetic algorithms for numerical optimization. The third
revised and extended edition was issued in 1996, and nowadays, genetic
algorithms are considered to make a major contribution to optimiza-
tion, adaptation, and learning in a wide variety of unexpected fields
[11, 13, 20, 24, 32, 39, 40, 43, 60, 72, 74, 112, 127, 165, 189].
As we look at recent applications of genetic algorithms to optimization
problems, especially to various kinds of discrete optimization problems,
global optimization problems, or other hard optimization problems, we
can see continuing advances [13, 55, 60, 61, 111, 136]. However, there
seemed to be no genetic algorithm approach to deal with multiobjective
programming problems until Schaffer [187] first proposed the so-called
Vector Evaluated Genetic Algorithm (VEGA) as a natural extension of
Grefenstette's GENESIS program [69] to include multiobjective nonlin-
ear functions. Although VEGA was implemented to find Pareto optimal
solutions of several multiobjective nonlinear optimization test problems,
the algorithm seems to have a bias toward some Pareto optimal solutions.
In his famous book, Goldberg [66] suggested a nondominated sorting
procedure to overcome the weakness of VEGA. By extending the idea of
Goldberg [66], Fonseca and Fleming [62, 63] proposed the Multiple Ob-
jective GA (MOGA). Horn, Nafpliotis, and Goldberg [76] introduced the
Niched Pareto GA (NPGA) as an algorithm for finding diverse Pareto
optimal solutions based on Pareto domination tournaments and sharing
on the nondominated surface. Similarly, to eliminate the bias in VEGA,
Srinivas and Deb [197] proposed the Nondominated Sorting GA (NSGA)
on the basis of Goldberg's idea of nondominated sorting together with
a niche and speciation method. However, these papers focused on mul-
tiobjective nonlinear programming problems with continuous variables
and were mainly weighted toward finding Pareto optimal solutions, not
toward deriving a compromise or satisficing¹ solution for the decision
maker.
As a natural extension of single-objective 0-1 knapsack problems, in
the mid-1990s, Sakawa et al. [138, 144, 148] formulated multiobjective
multidimensional 0-1 knapsack problems by assuming that the decision
maker may have a fuzzy goal for each of the objective functions. Once
the linear membership functions that well-represent the fuzzy goals of
the decision maker have been elicited, the fuzzy decision of Bellman and
Zadeh [22] can be adopted for combining them. In order to derive a
compromise solution for the decision maker by solving the formulated
problem, genetic algorithms with double strings that decode an individ-
ual represented by a double string to the corresponding feasible solution
for treating the constraints of the knapsack type have been proposed
[144, 148]. Also, through the combination of the desirable features of
both the interactive fuzzy satisficing methods for continuous variables
[135] and the genetic algorithms with double strings [144], an interactive
fuzzy satisficing method to derive a satisficing solution for the decision
maker to multiobjective multidimensional 0-1 knapsack problems has
been proposed [160, 161]. These results are immediately extended to
multiobjective multidimensional 0-1 knapsack problems involving fuzzy
numbers reflecting the experts' ambiguous understanding of the nature
of the parameters in the problem-formulation process [162].

¹ "Satisficing" is a term or concept defined by March and Simon [108]. An alternative
is satisficing if: (1) there exists a set of criteria that describe minimally satisfactory
alternatives and (2) the alternative in question meets or exceeds all these criteria.
Unfortunately, however, because these genetic algorithms with double
strings are based mainly on the decoding algorithm for treating the con-
straints of the knapsack type, they cannot be applied to more general
0-1 programming problems involving positive and negative coefficients
in both sides of the constraints.
In order to overcome such difficulties, Sakawa et al. [137, 143] re-
visited genetic algorithms with double strings for multidimensional 0-1
knapsack problems [144, 148] with some modifications and examined
their computational efficiency and effectiveness through a lot of compu-
tational experiments. Then Sakawa et al. [137, 143] extended the ge-
netic algorithms with double strings for 0-1 knapsack problems to deal
with more general 0-1 programming problems involving both positive
and negative coefficients in the constraints. New decoding algorithms
for double strings using reference solutions both without and with t.he
reference solution updating procedure were especially proposed so that
each of the individuals would be decoded to the corresponding feasible
solution for the general 0-1 programming problems. Using several nu-
merical examples, the proposed genetic algorithms and the branch and
bound method were compared with respect to the solution accuracy and
computation time. Moreover, Sakawa et al. [149, 155] presented fuzzy
and interactive fuzzy programming for multiobjective 0-1 programming
problems by incorporating the fuzzy goals of the decision maker.
Recently, to deal with multidimensional integer knapsack problems,
Sakawa et al. [164] proposed genetic algorithms with double strings
through the modification of the decoding algorithms for multidimen-
sional 0-1 knapsack problems. They also formulated multiobjective mul-
tidimensional integer knapsack problems by assuming that the decision
maker may have a fuzzy goal for each of the objective functions and
proposed an interactive fuzzy satisficing method to derive a satisficing
solution for the decision maker through the proposed genetic algorithms
[164]. In order to improve the accuracy or precision of solutions, Sakawa
et al. [145] proposed genetic algorithms with double strings for multidi-
mensional integer knapsack problems through the use of information of
optimal solutions to the corresponding continuous relaxation problems.
Furthermore, Sakawa et al. extended genetic algorithms with double
strings based on reference solution updating for 0-1 programming prob-
lems into integer programming problems [146]. For dealing with general
integer programming problems involving positive and negative coeffi-
cients, Sakawa et al. [140, 155] further extended coding and decoding of
the genetic algorithms with double strings based on reference solution
updating for multiobjective multidimensional 0-1 knapsack problems.
Since De Jong [46] considered genetic algorithms in a function opti-
mization setting, genetic algorithms have attracted considerable atten-
tion as global methods for complex function optimization. However,
many of the test function minimization problems solved by a lot of re-
searchers during the past 20 years involve only specified domains of
variables. Only recently several approaches have been proposed for solv-
ing general nonlinear programming problems through genetic algorithms
[60, 88, 112, 165].
For handling nonlinear constraints of general nonlinear programming
problems through genetic algorithms, most of them are based on the
concept of penalty functions, which penalize infeasible solutions [60, 88,
112, 114, 119]. Although several ideas have been proposed about how the
penalty function is designed and applied to infeasible solutions, penalty-
based methods have several drawbacks, and the experimental results on
many test cases have been disappointing [112, 119], as pointed out in
the field of nonlinear optimization.
In 1995, as a new constraint-handling method for avoiding many draw-
backs of these penalty methods, Michalewicz and Nazhiyath [118] and
Michalewicz and Schoenauer [119] proposed GENOCOP (GEnetic algo-
rithm for Numerical Optimization of COnstrained Problems) III for solv-
ing general nonlinear programming problems. GENOCOP III incorpo-
rates the original GENOCOP system for linear constraints [112, 113, 116]
but extends it by maintaining two separate populations in which a de-
velopment in one population influences evaluations of individuals in the
other population. The first population consists of so-called search points
that satisfy linear constraints of the problem as in the original GENO-
COP system. The second population consists of so-called reference
points that satisfy all constraints of the problem. Recent excellent sur-
vey papers of Michalewicz and Schoenauer [119] and Michalewicz and
associates [115] are devoted to reviewing and classifying the major tech-
niques for constrained optimization problems.
Unfortunately, however, in GENOCOP III, because an initial refer-
ence point is generated randomly from individuals satisfying the lower
and upper bounds, it is quite difficult to generate an initial reference
point in practice. Furthermore, because a new search point is randomly
generated on the line segment between a search point and a reference
point, effectiveness and speed of search may be quite low.
Realizing such difficulties, in the late 1990s Sakawa and Yauchi [179]
proposed the coevolutionary genetic algorithm called the revised GENO-
COP III through the introduction of a generating method of an initial
reference point by minimizing the sum of squares of violated nonlinear
constraints and a bisection method for generating a new feasible point on
the line segment between a search point and a reference point efficiently.
Sakawa and Yauchi [181] also formulated nonconvex multiobjective
nonlinear programming problems and presented an interactive fuzzy sat-
isficing method through the revised GENOCOP III. After determining
the fuzzy goals of the decision maker for the objective functions, if the de-
cision maker specifies the reference membership values, the correspond-
ing Pareto optimal solutions can be obtained by solving the augmented
minimax problems for which the revised GENOCOP III is effectively
applicable. An interactive fuzzy satisficing method for deriving a satis-
ficing solution for the decision maker from a Pareto optimal solution
set is presented. Furthermore, by considering the experts' vague or
fuzzy understanding of the nature of the parameters in the problem-
formulation process, nonconvex multiobjective programming problems
with fuzzy numbers are formulated. Using the α-level sets of fuzzy num-
bers, the corresponding nonfuzzy α-multiobjective programming and
an extended Pareto optimality concept were introduced. Sakawa and
Yauchi [180, 182, 183] then presented interactive decision-making meth-
ods through the revised GENOCOP III, both without and with the fuzzy
goals of the decision maker, to derive a satisficing solution for the deci-
sion maker efficiently from an extended Pareto optimal solution set as a
generalization of their previous results.
The job-shop scheduling problem [19, 27, 31, 36, 59, 121] has been well-
known as one of the hardest combinatorial optimization problems, and
numerous exact and heuristic algorithms have been proposed [6]. One
of the first attempts to approach a simple job-shop scheduling problem
through the application of genetic algorithms [66] can be seen in the re-
search of Davis [42] in 1985. Since then, a significant number of success-
ful applications of genetic algorithms to job-shop scheduling problems
have been appearing [16, 42, 43, 60, 99, 112, 123, 165, 198, 203, 219].
A comprehensive survey of conventional and new solution techniques
for solving the job-shop scheduling problems proposed through the mid-
1990s can be found in the invited review of Blazewicz et al. [26].
In 1997, by incorporating the concept of similarity among individuals
into the genetic algorithm that uses a set of completion times as in-
dividual representation and the Giffler and Thompson algorithm-based
crossover [219], Sakawa and Mori [157] proposed an efficient genetic al-
gorithm for job-shop scheduling problems.
However, when formulating job-shop scheduling problems that closely
describe and represent the real-world problems, various factors involved
in the problems are often only imprecisely or ambiguously known to
the analyst. This is particularly true in the real-world situations when
human-centered factors are incorporated into the problems. In such sit-
uations, it may be more appropriate to consider fuzzy processing time
because of man-made factors and fuzzy due date, tolerating a certain
amount of delay in the due date [81, 139, 206]. Recently, in order to
reflect such situations, a mathematical programming approach to a sin-
gle machine fuzzy scheduling problem with fuzzy precedence relation
[81] and job-shop scheduling incorporating fuzzy processing time using
genetic algorithms [206] have been proposed.
In order to more suitably model actual scheduling situations, Sakawa
and Mori [158, 159] formulated job-shop scheduling problems incorpo-
rating fuzzy processing time and fuzzy due date. On the basis of the
concept of an agreement index for fuzzy due date and fuzzy completion
time for each job, the formulated problem is interpreted as seeking a
schedule that maximizes the minimum agreement index. For solving the
formulated fuzzy job-shop scheduling problems, an efficient genetic al-
gorithm for job-shop scheduling problems proposed by Sakawa and Mori
[157] is extended to deal with the fuzzy due dates and fuzzy completion
time.
Unfortunately, however, in these fuzzy job-shop scheduling problems,
only a single objective function is considered, and extensions to multiob-
jective job-shop scheduling problems are desired for reflecting real-world situa-
tions more adequately. On the basis of the agreement index of fuzzy
due date and fuzzy completion time, multiobjective job-shop scheduling
problems with fuzzy due date and fuzzy processing time are formulated
as three-objective problems that not only maximize the minimum agree-
ment index but also maximize the average agreement index and mini-
mize the maximum fuzzy completion time. Moreover, by considering
the imprecise nature of human judgments, the fuzzy goals of the deci-
sion maker for the objective functions are introduced. After eliciting the
linear membership functions through the interaction with the decision
maker, the fuzzy decision of Bellman and Zadeh or minimum operator
[22] is adopted for combining them. Then, a genetic algorithm that is
suitable for solving the formulated problems is proposed.
Finally, it is appropriate to mention some application aspects of ge-
netic algorithms. Although some of the early practical applications can
be found in Goldberg [66], many real-world optimization problems are
inherently complex and quite difficult to solve by conventional optimiza-
tion techniques. Genetic algorithms have attracted considerable atten-
tion regarding their potential as an optimization technique for complex
optimization problems and have been applied successfully in the areas
of operations research, computer science, industrial engineering, man-
agement science, and systems engineering [13, 32, 33, 39, 50, 55, 60,
61, 111, 112, 127, 136, 213]. They can be found, for example, in the
areas of scheduling in hot rolling process [205], production ordering in
acid rinsing of steelmaking plants [186], vehicle routing and scheduling
[34], self-organizing manufacturing systems [102], manufacturing cell de-
sign [87,120], design of flexible electronic assembly systems [128], flexible
scheduling in a machining center [142], inspection allocation in manufac-
turing systems [211], multiprocessor scheduling [207], mobile robot path
planning [218], optimal design of reliable networks [49], optimization of
textile processes [9], operation optimization of an industrial cogeneration
system [110], channel resource management in cellular mobile systems
[185], time scheduling of transit systems [45], optimization of the low-
pressure spool speed governor of a Pegasus gas turbine engine [64], mul-
timedia multicast routing [223], resource-constrained project scheduling
[126], operation planning of district heating and cooling plants [150-
152, 154], modeling of ship trajectory in collision situations [196], coal
purchase planning in an electric power plant [192], capacitated multi-
point network design [106], camera calibration [85], container loading
[28], synthesis of low-power digital signal processing (DSP) systems [30],
and scheduling of a high-throughput screening (HTS) system [7].

1.2 Organization of the book


The organization of each chapter is briefly summarized as follows.
Chapter 2 is devoted to foundations of genetic algorithms that will be
used in the remainder of this book. Starting with several basic notions
and definitions in genetic algorithms, fundamental procedures of genetic
algorithms are outlined. The main idea of genetic algorithms, involving
coding, fitness, scaling, and genetic operators, is then examined without
going into unnecessary details. Some of the important genetic operators
are also discussed in the context of bit string representations by putting
special emphasis on implementation issues for genetic algorithms.
Chapter 3 presents a detailed treatment of genetic algorithms with
double strings as developed for multidimensional 0-1 knapsack prob-
lems. Through the introduction of a double string representation and
the corresponding decoding algorithm, it is shown that a potential solu-
tion satisfying constraints can be obtained for each individual. Then the
genetic algorithms with double strings are extended to deal with more
general 0-1 programming problems involving both positive and nega-
tive coefficients in the constraints. New decoding algorithms for double
strings using reference solutions both without and with the reference
solution updating procedure are introduced especially so that each of
the individuals is decoded to the corresponding feasible solution for the
general 0-1 programming problems. The detailed comparative numerical
experiments with a branch and bound method are also provided.
In Chapter 4, as a natural extension of single-objective 0-1 program-
ming problems discussed in the previous chapter, multiobjective 0-1 pro-
gramming problems are formulated by assuming that the decision maker
may have a fuzzy goal for each of the objective functions. Through
the combination of the desirable features of both the interactive fuzzy
satisficing methods for continuous variables and the genetic algorithms
with double strings discussed in the previous chapter, an interactive
fuzzy satisficing method to derive a satisficing solution for the decision
maker is presented. Furthermore, by considering the experts' imprecise
or fuzzy understanding of the nature of the parameters in the problem-
formulation process, the multiobjective 0-1 programming problems in-
volving fuzzy parameters are formulated. Through the introduction
of extended Pareto optimality concepts, an interactive decision-making
method for deriving a satisficing solution for the decision maker from
among the extended Pareto optimal solution set is presented together
with detailed numerical examples.
In Chapter 5, as the integer version of Chapter 3, genetic algorithms
with double strings for 0-1 programming problems are extended to deal
with integer programming problems. New decoding algorithms for
double strings using the reference solution updating procedure are especially
introduced so that each of the individuals is decoded to the corresponding
feasible solution for integer programming problems. The chapter also
includes several numerical experiments.
Chapter 6 can be viewed as the fuzzy multiobjective version of Chap-
ter 5 and is devoted to an integer generalization along the same lines as
Chapter 3. Through the use of genetic algorithms with double strings,
considerable effort is devoted to the development of interactive fuzzy
multiobjective integer programming as well as fuzzy multiobjective in-
teger programming with fuzzy numbers together with several numerical
experiments.
In Chapter 7, after introducing genetic algorithms for nonlinear pro-
gramming including the original GENOCOP system for linear constraints,
the coevolutionary genetic algorithm called GENOCOP III proposed by
Michalewicz et al. is discussed in detail. Realizing some drawbacks of
GENOCOP III, the coevolutionary genetic algorithm called the revised
GENOCOP III is presented through the introduction of a generating
method of an initial reference point by minimizing the sum of squares
of violated nonlinear constraints and a bisection method for generating
a new feasible point on the line segment between a search point and a
reference point efficiently.
In Chapter 8, attention is focused on not only multiobjective nonlinear
programming problems but also multiobjective nonlinear programming
problems with fuzzy numbers. Along the same lines as Chapters 4 and
6, through the revised GENOCOP III, some refined interactive fuzzy
multiobjective nonlinear programming as well as fuzzy multiobjective
nonlinear programming with fuzzy numbers are developed for deriving
a satisficing solution for the decision maker.
Chapter 9 treats job-shop scheduling problems that are to determine a
processing order of operations on each machine in order to minimize the
maximum completion time. By incorporating the concept of similarity
among individuals into the genetic algorithm that uses a set of comple-
tion times as individual representation and the Giffler and Thompson
algorithm-based crossover, an efficient genetic algorithm for job-shop
scheduling problems is presented. The chapter also includes the com-
parative numerical experiments with simulated annealing and the branch
and bound method for job-shop scheduling problems.
In Chapter 10, by considering the imprecise or fuzzy nature of the
data in real-world problems, job-shop scheduling problems with fuzzy
processing time and fuzzy due date are formulated. On the basis of
the agreement index of fuzzy due date and fuzzy completion time, the
formulated fuzzy job-shop scheduling problems are interpreted so as to
maximize the minimum agreement index. Furthermore, multiobjective
job-shop scheduling problems with fuzzy due date and fuzzy processing
time are formulated as three-objective problems. Having elicited the
linear membership functions reflecting the fuzzy goals of the decision
maker, the fuzzy decision of Bellman and Zadeh is adopted for combin-
ing them. The genetic algorithm introduced in the previous chapter is
extended to be suitable for solving the formulated problems.
Finally, Chapter 11 is concerned with some application aspects of ge-
netic algorithms. As examples of Japanese case studies, we present some
applications of genetic algorithms to flexible scheduling in a machining
center, operation planning of district heating and cooling plants, and
coal purchase planning in an actual electric power plant.
Chapter 2

FOUNDATIONS OF GENETIC
ALGORITHMS

This chapter is devoted to the foundations of the genetic algorithms


that will be used in the remainder of this book. Starting with several
basic notions and definitions in genetic algorithms, fundamental pro-
cedures of genetic algorithms are outlined. The main idea of genetic
algorithms, involving coding, fitness, scaling, and genetic operators, is
then examined. In the context of bit string representations, some of
the important genetic operators are also discussed by putting special
emphasis on implementation issues for genetic algorithms.

2.1 Outline of genetic algorithms


Genetic algorithms [75], initiated by Holland, his colleagues, and his
students at the University of Michigan in the 1970s as stochastic search
techniques based on the mechanism of natural selection and natural
genetics, have received a great deal of attention regarding their potential
as optimization techniques for solving discrete optimization problems or
other hard optimization problems. Although genetic algorithms were not
well-known at the beginning, after the publication of Goldberg's book
[66], genetic algorithms attracted considerable attention in a number
of fields as a methodology for optimization, adaptation, and learning
[11, 13, 32, 39, 43, 60, 112, 127, 165, 189].
Genetic algorithms start with an initial population of individuals gen-
erated at random. Each individual in the population represents a poten-
tial solution to the problem under consideration. The individuals evolve
through successive iterations, called generations. During each genera-
tion, each individual in the population is evaluated using some measure
of fitness. Then the population of the next generation is created through
genetic operators. The procedure continues until the termination con-
dition is satisfied. The general framework of genetic algorithms is de-
scribed as follows [112], where P(t) denotes the population at generation
t:

procedure: Genetic Algorithms


begin
t := 0;
initialize P(t);
evaluate P(t);
while (not termination condition) do
begin
t := t + 1;
select P(t) from P(t - 1);
alter P(t);
evaluate P(t);
end
end.
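
As a minimal illustration of this framework, the loop can be sketched in
Python as follows; the initialization, evaluation, selection, and alteration
routines are assumed, problem-dependent placeholders supplied by the
reader, not part of the original text:

def genetic_algorithm(initialize, evaluate, select, alter, T):
    # t := 0; initialize and evaluate P(t)
    t = 0
    population = initialize()
    fitness = evaluate(population)
    # loop until the termination condition (here, a maximal generation T) is met
    while t < T:
        t = t + 1
        # select P(t) from P(t - 1)
        population = select(population, fitness)
        # alter P(t) by crossover and mutation
        population = alter(population)
        # evaluate P(t)
        fitness = evaluate(population)
    return population, fitness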

To explain the fundamental procedures of genetic algorithms, consider
a population that consists of N individuals representing potential solu-
tions to a problem. In genetic algorithms, an individual in a population
is represented by a string s of length n as follows:

s = s_1 s_2 ... s_j ... s_n

The string s is regarded as a chromosome that consists of n genes.


The character s_j is a gene at the jth locus, and the different values of a
gene are called alleles. The chromosome s is called the genotype of an
individual; a potential solution to a problem corresponding to a string
s is called the phenotype. Usually, it is assumed that there is a one-to-
one correspondence between genotypes and phenotypes. The mapping
from phenotypes to genotypes is called a coding, and the mapping from
genotypes to phenotypes is called a decoding.
The fitness is the link between genetic algorithms and the problem to
be solved. In maximization problems, the fitness of a string s is usually
kept the same as the objective function value f(x) of its phenotype x.
In minimization problems, the fitness of a string s should increase as
the objective function value f(x) of its phenotype x decreases. Thus, in
minimization problems, the string with a smaller objective function value
has a higher fitness. Through three main genetic operators together with
fitness, the population P(t) at generation t evolves to form the next
population P(t + 1). After some number of generations, the algorithms
converge to the best string s*, which hopefully represents the optimal
or approximate optimal solution x* to the optimization problem.
In genetic algorithms, the three main genetic operators-reproduction,
crossover, and mutation-are usually used to create the next generation.
Reproduction: According to the fitness values, increase or decrease
the number of offspring for each individual in the population P(t).
Crossover: Select two distinct individuals from the population at ran-
dom and exchange some portion of the strings between them with a
probability equal to the crossover rate Pc.
Mutation: Alter one or more genes of a selected individual with a
probability equal to the mutation rate Pm.
The probability to perform the crossover operation is chosen in a way
so that recombination of potential strings (highly fitted individuals) in-
creases without disruption. Generally, the crossover rate lies between
0.6 and 0.9. Since mutation occurs occasionally, it is clear that the prob-
ability of performing the mutation operation will be quite low. Typically,
the mutation rate lies between 0.001 and 0.01.
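
As a small illustration of these rates, bit-flip mutation on a binary string
can be sketched in Python as follows (a hedged example only; the default
pm = 0.005 is merely a typical value within the range quoted above):

import random

def mutate(string, pm=0.005):
    # flip each gene independently with probability pm (the mutation rate)
    return [1 - gene if random.random() < pm else gene for gene in string]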
After the preceding discussions, the fundamental procedures of genetic
algorithms can be summarized as follows:

Fundamental procedures of genetic algorithms


Step 0: (Initialization)
Generate N individuals at random to form the initial population
P(O). Set the generation index t := 0 and determine the value of the
maximal generation T.
Step 1: (Evaluation)
Calculate the fitness value of each individual in the population P(t).
Step 2: (Reproduction)
Apply the reproduction operator to the population P(t).
Step 3: (Crossover)
Apply the crossover operator to the population after reproduction.
Step 4: (Mutation)
Apply the mutation operator to the population after crossover to
create the new population P(t + 1) of the next generation t + 1.
Step 5: (Termination test)
If t = T, stop. Then an individual with the maximal fitness obtained
thus far is regarded as an approximate optimal solution. Otherwise,
set t := t + 1 and return to step 1.
Such fundamental procedures of genetic algorithms are shown as a
flowchart in Figure 2.1.

Figure 2.1. Flowchart of fundamental procedures of genetic algorithms

Figure 2.2 illustrates the fundamental structure of genetic algorithms.


Here, potential solutions of phenotype are coded into individuals of geno-
type to form an initial population. Each individual in the population is
evaluated using its fitness. Through reproduction, crossover, and muta-
tion, the population of the next generation is created. The procedure
continues in this fashion, and when the termination condition is satisfied,
the best individual obtained is regarded as an optimal or approximate
optimal solution to the problem.
In applying genetic algorithms to solve particular optimization prob-
lems, further detailed considerations concerning (1) a genetic represen-
tation for potential solutions, (2) a way to create an initial population,
(3) an evaluation process in terms of their fitness, (4) genetic operators,
(5) constraint-handling techniques, and (6) values for various parameters
in genetic algorithms, such as population size, probabilities of applying
genetic operators, termination conditions, and so on, are required.
As Goldberg [66] summarized, genetic algorithms differ from conven-
tional optimization and search procedures in the following four ways:
Figure 2.2. Fundamental structure of genetic algorithms

(1) Genetic algorithms work with a coding of the solution set, not the
solutions themselves.
(2) Genetic algorithms search from a population of solutions, not a sin-
gle solution.
(3) Genetic algorithms use fitness information, not derivatives or other
auxiliary knowledge.
(4) Genetic algorithms use probabilistic transformation rules, not de-
terministic ones.

2.2 Coding, fitness, and genetic operators


2.2.1 Coding
To explain how genetic algorithms work for an optimization problem,
consider a population that consists of N individuals representing poten-
tial solutions to the problem. In genetic algorithms, an n-dimensional


vector x of decision variables corresponding to an individual is repre-
sented by a string s of length n as follows:

s = s_1 s_2 ... s_j ... s_n   (2.1)

The string s is regarded as a chromosome that consists of n genes.


The character s_j is a gene at the jth locus, and the different values of a
gene are called alleles. The chromosome s is called the genotype of an
individual; the x corresponding to s is called the phenotype. Usually,
it is assumed that it establishes a one-to-one correspondence between
genotypes and phenotypes. However, depending on the situation, m-
to-one and one-to-m correspondences are also useful. In either case,
the mapping from phenotypes to genotypes is called a coding, and the
mapping from genotypes to phenotypes is called a decoding. The length
of a chromosome is fixed at a certain value n in many cases, but a
chromosome of variable length is more convenient in some cases.
Although real numbers, integers, alphabets, or some symbols may be
used to represent strings, in all of the work of Holland [75], individuals
are represented in binary strings of 0s and 1s. Such binary {0, 1} strings
are often called bit strings or binary strings, and an individual s is
represented as

s = s_1 s_2 ... s_j ... s_n,  s_j ∈ {0, 1}   (2.2)

Such bit strings have been shown to be capable of usefully coding a


wide variety of information, and they have been shown to be effective
representation mechanisms in unexpected areas. The properties of bit
string representations for genetic algorithms have been extensively stud-
ied, and a good deal is known about the genetic operators and parameter
values that work well with them.
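
As a simple illustration of coding and decoding with bit strings, a real
decision variable x in an interval [a, b] can be decoded by interpreting
the string as an integer and mapping it linearly onto the interval. The
following Python sketch is a common textbook scheme, given here as an
assumed example rather than the coding used later in this book:

def decode(string, a, b):
    # interpret the bit string as a base-2 integer
    value = int("".join(str(bit) for bit in string), 2)
    # map the integer linearly onto the interval [a, b]
    n = len(string)
    return a + (b - a) * value / (2 ** n - 1)

# example: a 9-bit genotype decoded to a phenotype in [0, 1]
x = decode([1, 1, 0, 0, 0, 0, 0, 0, 1], 0.0, 1.0)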

2.2.2 Fitness and scaling


Nature obeys the principle of Darwinian "survival of the fittest": the
individuals with high fitness values will, on average, reproduce more of-
ten than will those with low fitness values. In genetic algorithms, fitness
is defined in such a way that highly fitted strings have high fitness values,
and it is used to evaluate every individual in a population. Observe that
the fitness is the only link between genetic algorithms and the problem
to be solved, and it is the measure to select an individual to reproduce
for the next generation.
As discussed in Goldberg [66], in minimization problems such as the
minimization of some cost function z(x), by introducing C_max satisfying
C_max − z(x) ≥ 0, it is desirable to define the fitness of a string s as
f(s) = C_max − z(x). However, the value of C_max is not known in advance;
C_max may be taken as the largest z(x) value observed thus far, as the
largest z(x) value in the current population, or as the largest z(x) value
of the last t generations. Consequently, in minimization problems the
fitness of a string s is defined as

f(s) = { C_max − z(x),  if z(x) < C_max
       { 0,              otherwise            (2.3)

Similarly, in maximization problems such as the maximization of some
profit or utility function u(x), if u(x) < 0 for some x, by introducing
C_min satisfying u(x) + C_min ≥ 0, the fitness function should be defined
as f(s) = u(x) + C_min. However, the value of C_min is not known in
advance; C_min may be taken as the absolute value of the smallest u(x)
value observed thus far, in the current population, or in the last t gen-
erations. Hence, in maximization problems the fitness of a string s is
defined as

f(s) = { u(x) + C_min,  if u(x) + C_min > 0
       { 0,              otherwise            (2.4)
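
In Python, definitions (2.3) and (2.4) can be written directly as follows
(a minimal sketch; how C_max and C_min are updated from observed
objective values is left to the variants described above):

def fitness_min(z, c_max):
    # fitness for minimization of z(x), cf. (2.3)
    return c_max - z if z < c_max else 0.0

def fitness_max(u, c_min):
    # fitness for maximization of u(x), cf. (2.4)
    return u + c_min if u + c_min > 0 else 0.0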

In a reproduction operator based on the ratio of fitness of each indi-


vidual to the total fitness such as roulette or expected value selection,
which will be discussed in the next subsection, it is frequently pointed
out that the probability of selection depends on the relative ratio of fit-
ness of each individual. Thus, several scaling mechanisms, such as linear
scaling, sigma truncation, and power law scaling, have been introduced,
as is well summarized in Goldberg [66] and Michalewicz [112].
In the linear scaling, fitness f_i of an individual i is transformed into
f'_i according to

f'_i = a f_i + b,   (2.5)

where the coefficients a and b are determined so that the mean fitness
f_mean of the population should be a fixed point and the maximal fitness
f_max of the population should be equal to C_mult · f_mean. The constant
C_mult, usually set as 1.2 ≤ C_mult ≤ 2.0, means the expected value of the
number of the best individual in the current generation surviving in the
next generation.
Figure 2.3 illustrates the linear scaling.
Unfortunately, however, in the linear scaling rule (2.5), when a few
strings are far below the mean and maximal fitness, there is a possibility
Figure 2.3. Linear scaling

that the low fitness values become negative after scaling, as shown in
Figure 2.4.


Figure 2.4. Linear scaling with negative fitness values

In order that f'_i will be nonnegative for all i, Goldberg [66] proposed
the following algorithm for linear scaling.
Algorithm for linear scaling


Step 1: Calculate the mean fitness f_mean, the maximal fitness f_max, and
the minimal fitness f_min of the population.

Step 2: If f_min > (C_mult · f_mean − f_max)/(C_mult − 1.0), then go to
step 3. Otherwise, go to step 4.

Step 3: Set a := (C_mult − 1.0) · f_mean/(f_max − f_mean),
b := f_mean · (f_max − C_mult · f_mean)/(f_max − f_mean), and go to step 5.

Step 4: Set a := f_mean/(f_mean − f_min), b := −f_min · f_mean/(f_mean − f_min),
and go to step 5.

Step 5: Calculate f'_i = a · f_i + b for i = 1, 2, ..., N.
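
A direct Python transcription of this algorithm might read as follows (a
sketch; the guard against a population of equal fitness values is an added
assumption to avoid division by zero):

def linear_scaling(fitness, c_mult=1.5):
    n = len(fitness)
    f_mean = sum(fitness) / n
    f_max = max(fitness)
    f_min = min(fitness)
    if f_max == f_mean:
        # all individuals have the same fitness; scaling changes nothing
        return list(fitness)
    # step 2: choose the coefficients a and b
    if f_min > (c_mult * f_mean - f_max) / (c_mult - 1.0):
        a = (c_mult - 1.0) * f_mean / (f_max - f_mean)
        b = f_mean * (f_max - c_mult * f_mean) / (f_max - f_mean)
    else:
        a = f_mean / (f_mean - f_min)
        b = -f_min * f_mean / (f_mean - f_min)
    # step 5: apply f' = a * f + b to every individual
    return [a * f + b for f in fitness]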


To deal with the negative fitness values as well as to incorporate the
problem-dependent information into the mapping, sigma scaling, also
called sigma truncation, is introduced. Goldberg [66] called it sigma (σ)
truncation because of the use of population standard deviation informa-
tion; a constant is subtracted from raw fitness values as follows:

f'_i = f_i − (f_mean − C · σ),   (2.6)

where the constant C is chosen as a reasonable multiple of the population
standard deviation (between 1 and 3) and negative fitness values are set
to 0.
Power law scaling is defined as the following specified power of the
raw fitness:

f'_i = f_i^k,   (2.7)

where k is a constant. In limited studies, a value of k = 1.005 is suggested.
Unfortunately, however, in general the k value is problem dependent.
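
Both scalings admit equally short Python sketches (the parameter
defaults are assumed typical values; negative values after sigma
truncation are clipped to 0 as stated above, and nonnegative raw fitness
is assumed for the power law):

def sigma_truncation(fitness, c=2.0):
    # f' = f - (f_mean - c * sigma), negative values set to 0, cf. (2.6)
    n = len(fitness)
    f_mean = sum(fitness) / n
    sigma = (sum((f - f_mean) ** 2 for f in fitness) / n) ** 0.5
    return [max(f - (f_mean - c * sigma), 0.0) for f in fitness]

def power_law_scaling(fitness, k=1.005):
    # f' = f^k, cf. (2.7)
    return [f ** k for f in fitness]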

2.2.3 Genetic operators


2.2.3.1 Reproduction
In genetic algorithms, the idea of natural selection, that highly fitted
individuals will reproduce more often at the cost of lower fitted ones, is
called reproduction or selection. Reproduction or selection concerns how
to select the individuals in the population who will create offspring for
the next generation and how many offspring each will create. There are
many methods for implementing this, and one commonly used method
is roulette selection, originally proposed by Holland [75]. The basic idea
is to determine selection probability for each individual proportional to
the fitness value. Namely, in roulette selection, calculating the fitness
f_i (≥ 0), i = 1, ..., N, of each individual i and the whole sum Σ_{j=1}^N f_j,
the selection probability, or survival probability, of each individual i is
determined as

p_i = f_i / Σ_{j=1}^N f_j   (2.8)

Figure 2.5 illustrates the roulette selection. Observe that each indi-
vidual is assigned a slice of a circular roulette wheel, the size of the slice
being proportional to the individual's fitness. Then, conceptually, the
wheel is spun N times, where N is the number of individuals in the
population. On each spin, an individual marked by the roulette wheel
pointer is selected as a parent for the next generation.

Figure 2.5. Roulette selection

The algorithm of the roulette selection is summarized as follows.

Algorithm of roulette selection


Step 1: Calculate the fitness f_i, i = 1, ..., N, of N individuals and
their whole sum f_sum = Σ_{i=1}^N f_i in a population at generation t.

Step 2: Generate a real random number rand() in [0, 1], and set s =
rand() × f_sum.

Step 3: Obtain the minimal k such that Σ_{i=1}^k f_i ≥ s, and select the
kth individual at generation t + 1.

Step 4: Repeat steps 2 and 3 until the number of selected individuals


becomes N.
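
The four steps translate into Python as follows (a minimal sketch of
roulette selection; floating-point edge cases and zero total fitness are
ignored here):

import random

def roulette_selection(population, fitness):
    # step 1: total fitness of the current population
    f_sum = sum(fitness)
    selected = []
    # step 4: repeat until N individuals have been selected
    for _ in range(len(population)):
        # step 2: spin the wheel
        s = random.random() * f_sum
        # step 3: find the minimal k with f_1 + ... + f_k >= s
        cumulative = 0.0
        for individual, f in zip(population, fitness):
            cumulative += f
            if cumulative >= s:
                selected.append(individual)
                break
    return selected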

In addition to roulette selection, various selection operators, such as


expected-value selection, ranking selection, elitist preserving selection,
and so forth, have been proposed.
For reducing the stochastic errors of roulette selection, De Jong [46]
first introduced expected-value selection. In expected-value selection,
the expected value of the number of offspring is calculated for an in-
dividual i as f_i / f_mean, where f_mean is the average fitness value in the
current population.
In expected-value selection for a population consisting of N individu-
als, the expected number of the ith individual is calculated by

N_i = f_i / f_mean = N f_i / Σ_{j=1}^N f_j   (2.9)

Then, the integer part of N_i denotes the deterministic number of the ith
individual preserved in the next population. The fractional part of N_i is
regarded as a probability for one of the individual i to survive; in other
words, N − Σ_{i=1}^N ⌊N_i⌋ individuals are determined on the basis of this
probability.
An example of reproduction by expected-value selection is shown in
Table 2.1.

Table 2.1. Reproduction by expected-value selection

fitness 6 1 10 11 17 32 4 12 5 2
expected value 0.6 0.1 1.0 1.1 1.7 3.2 0.4 1.2 0.5 0.2
number of offspring 1 0 1 1 2 3 0 1 1 0
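
A Python sketch of expected-value selection follows; the use of
random.choices to resolve the fractional parts is one possible
interpretation of the stochastic step, not necessarily De Jong's original
procedure:

import random

def expected_value_selection(population, fitness):
    n = len(population)
    f_sum = sum(fitness)
    # expected number of offspring N_i = N * f_i / (f_1 + ... + f_N), cf. (2.9)
    expected = [n * f / f_sum for f in fitness]
    # deterministic part: floor(N_i) copies of individual i
    selected = []
    for individual, e in zip(population, expected):
        selected.extend([individual] * int(e))
    # stochastic part: fill the remaining slots using the fractional parts
    fractions = [e - int(e) for e in expected]
    while len(selected) < n:
        selected.append(random.choices(population, weights=fractions)[0])
    return selected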

Ranking selection means that only the rank order of the fitness of the
individuals within the current population determines the probability of
selection. In ranking selection, the population is sorted from the best
to the worst, the expected value of each individual depends on its rank
rather than on its absolute fitness, and the selection probability of each
individual is assigned according to the ranking rather than its raw fitness.
There is no need to scale fitness in ranking selection, because absolute
differences in fitness are obscured. There are many methods to assign a
selection probability to each individual on the basis of ranking, including
linear and nonlinear ranking methods.
In the linear ranking method proposed by Baker [18], each individual
in the population is ranked in decreasing order of fitness, and the selection
probability of each individual i in the population is determined by

p_i = (1/N) (η⁺ − (η⁺ − η⁻) · (i − 1)/(N − 1))   (2.10)
where the constants η⁺ and η⁻ denote the maximum and minimum
expected values, respectively, and determine the slope of the linear
function. The condition Σ_{i=1}^N p_i = 1 requires that 1 ≤ η⁺ ≤ 2 and
η⁻ = 2 − η⁺ be fulfilled. Normally, a value of η⁺ = 1.1 is recommended.
An example of reproduction by linear ranking selection with η⁺ = 2
and rounding is shown in Table 2.2.

Table 2.2. Reproduction by linear ranking selection

fitness 32 17 12 11 10 6 5 4 2 1
rank 1 2 3 4 5 6 7 8 9 10
number of offspring 2 2 2 1 1 1 1 0 0 0
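
For instance, the selection probabilities of (2.10) can be computed as
follows (Python sketch; N ≥ 2 is assumed so that the division by N − 1
is well defined):

def linear_ranking_probabilities(n, eta_plus=1.1):
    # eta_minus = 2 - eta_plus guarantees that the probabilities sum to 1
    eta_minus = 2.0 - eta_plus
    # p_i for ranks i = 1 (best) through n (worst), cf. (2.10)
    return [(eta_plus - (eta_plus - eta_minus) * (i - 1) / (n - 1)) / n
            for i in range(1, n + 1)]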

As one possible way of nonlinear ranking, Michalewicz [112] proposed
the exponential ranking method. Adopting the exponential ranking
method, the selection probability p_i for the individual of rank i is deter-
mined by

p_i = c (1 − c)^(i−1),   (2.11)

where c ∈ (0, 1) represents the probability that an individual of rank
1 is selected. Observe that a larger value of c implies stronger selective
pressure.
Elitist preserving selection, also called elitism, first introduced by De
Jong [46], is an addition to many selection operators. In elitism, if the
fitness of an individual in the past populations is larger than that of
every individual in the current population, this individual is preserved
into the current generation. By introducing elitism, the best individual
generated up to generation t can be included in the population at gen-
eration t + 1, even if this best individual would otherwise be lost at
generation t + 1.
Observe that elitism, when combined with the selection operators dis-
cussed thus far, produces elitist roulette selection, elitist expected value
selection, and elitist ranking selection.
Concerning the comparison of important selection operators, the pa-
pers of Bäck [10] and Goldberg and Deb [67] would be useful for inter-
ested readers.

2.2.3.2 Crossover
It is well-recognized that the main distinguishing feature of genetic
algorithms is the use of crossover. Crossover, also called recombination,
is an operator that creates new individuals from the current population.
The main role of this operator is to combine pieces of information com-
ing from different individuals in the population. Actually, it recombines
genetic material of two parent individuals to create offspring for the next
generation. The basic crossover operation, introduced by Holland [75],
is a three-step procedure. First, two individuals are selected at ran-
dom from the population of parent strings generated by the selection.
Second, one or more string locations are selected as crossover points de-
lineating the string segment to exchange. Finally, parent string segments
are exchanged and then combined to produce two resulting offspring in-
dividuals. The proportion of parent strings undergoing crossover during
a generation is controlled by the crossover rate Pc ∈ [0, 1], which deter-
mines how frequently the crossover operator is invoked.
In addition to the crossover rate Pc and the number of crossover points
CP, generation gap G was introduced by De Jong [46] to permit over-
lapping populations, where G = 1 and 0 < G < 1, respectively, imply
nonoverlapping populations and overlapping populations.
The general algorithm of crossover is summarized as follows:

General algorithm of crossover


Step 0: Let i := 1.
Step 1: Select an individual mating with the ith individual at random
from the current population including N individuals.
Step 2: Generate a real random number rand() in [0, 1]. For a given
crossover rate Pc, if Pc ≥ rand(), then go to step 3. Otherwise, go to
step 4.
Step 3: Mate two individuals using an appropriate crossover technique,
and go to step 5.
Step 4: Preserve the two individuals that are not mated, and go to step
6.
Step 5: Preserve the mated two individuals, and go to step 6.
Step 6: If i < N, set i := i + 1 and return to step 1. Otherwise, go to
step 7.
Step 7: Select N·G individuals from the 2N preserved individuals at random,
and replace N·G individuals of the current population consisting of
N individuals with the N·G selected individuals.
Depending on the ways of individual representations, many different
crossover techniques have been proposed.
When individuals are represented by binary {0, 1} strings, also called
bit strings, some of the commonly used crossover techniques are one-point crossover, multipoint crossover, and uniform crossover. One-point

crossover, also called simple crossover, is the simplest crossover technique. In one-point crossover, a single crossover point "|" is randomly
selected on the two strings of two parents, then the substrings on the
right side of the crossover point are exchanged for creating two offspring.
An example of one-point crossover is illustrated as

    Parent 1: 11000|0001        Offspring 1: 11000|0100
    Parent 2: 10111|0100   ⟹   Offspring 2: 10111|0001
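A minimal Python sketch of one-point crossover on bit strings (an illustration of ours, not code from the text); with the parents above and the cut point after the fifth bit, it reproduces the two offspring shown:

    import random

    def one_point_crossover(parent1, parent2):
        # Pick a single cut point 1 <= cut <= n-1 and swap the right substrings.
        cut = random.randint(1, len(parent1) - 1)
        return (parent1[:cut] + parent2[cut:],
                parent2[:cut] + parent1[cut:])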

In multipoint crossover, an extension of one-point crossover, several
crossover points "|" are randomly selected on the two strings of two
parents, then the segments between the crossover points are exchanged for
creating two offspring. An example of two-point crossover is illustrated
as

    Parent 1: 11|0000|001        Offspring 1: 11|1110|001
    Parent 2: 10|1110|100   ⟹   Offspring 2: 10|0000|100
Two-point crossover is commonly used as multipoint crossover.
Observe that the extreme case of multipoint crossover is known as
uniform crossover [202]. In uniform crossover, a randomly selected n-bit
mask is used. The parity of each bit in the mask determines, for
each corresponding bit in an offspring, from which parent it will receive
that bit. To be more explicit, for each bit position on the mask, its
value "1" or "0," respectively, indicates that the first parent or second
parent contributes its value in that position to the first offspring, and
vice versa for the second offspring. An example of uniform crossover is
illustrated as

    Parent 1:    110000001
    Parent 2:    101110100
    Mask:        101101101
    Offspring 1: 100010001
    Offspring 2: 111100100
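Under the mask convention just described, uniform crossover can be sketched as follows (again our own illustration); applied to the parents and mask of the example, it returns the two offspring shown above:

    def uniform_crossover(parent1, parent2, mask):
        # Mask bit '1': offspring 1 takes the bit from parent 1, offspring 2
        # from parent 2; mask bit '0' reverses the roles.
        child1 = "".join(a if m == "1" else b
                         for a, b, m in zip(parent1, parent2, mask))
        child2 = "".join(b if m == "1" else a
                         for a, b, m in zip(parent1, parent2, mask))
        return child1, child2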
It should be noted here that in the general algorithm of crossover,
appropriate values for the crossover rate p_c, number of crossover points
CP, and generation gap G must be set. Also, for replacing N·G individuals of the current population consisting of N individuals with the
N·G individuals selected from the 2N preserved individuals, 2N memory
storage is required. As one possible way to circumvent such problems,
Sakawa et al. [138, 148] proposed the following simplified algorithm of
crossover, which approximately satisfies p_c ≤ 1, 1 ≤ CP ≤ n − 1 (n
is the length of a string), and 0 < G ≤ 1 without requiring 2N memory
storage.

Simplified algorithm of crossover


Step 0: Set the iteration index i := 1.
Step 1: Select a pair of individuals at random from the current population including N individuals.
Step 2: Generate a real random number rand() in [0, 1]. For a given
crossover rate p_c, if p_c ≥ rand(), then go to step 3. Otherwise, go to
step 5.
Step 3: Choose a crossover point at random and generate a pair of new
individuals using a one-point crossover operator.
Step 4: Replace the pair of individuals before crossover with the pair of
individuals after crossover in the population.
Step 5: Set i := i + 1. If i > N/2, stop. Otherwise, return to step 1.
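The simplified algorithm can be rendered in a few lines of Python, reusing the one_point_crossover sketch given earlier; note that drawing the two mates independently at random is an illustrative reading of step 1, and the offspring replace their parents in place as in step 4:

    import random

    def simplified_crossover(population, p_c):
        # N/2 random pairs; each pair is recombined with probability p_c.
        n_individuals = len(population)
        for _ in range(n_individuals // 2):
            i = random.randrange(n_individuals)
            j = random.randrange(n_individuals)
            if random.random() <= p_c:
                population[i], population[j] = one_point_crossover(
                    population[i], population[j])
        return population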

It is significant to emphasize here that, if a one-point or multipoint
crossover operator is applied to individuals represented by permutations
of {1, 2, ..., n} for permutation problems such as traveling salesperson
problems (TSP) or scheduling problems, frequently neither of the two
offspring represents a valid permutation.
For solving such problems, partially matched crossovers (PMX), or-
dered crossovers (OX), and cycle crossovers (CX) are proposed. PMX
was first proposed by Goldberg and Lingle [68] for tackling a blind TSP.
OX was proposed by Davis [42] for a job-shop scheduling problem. CX
was proposed by Oliver et al. [124] for a TSP.
For convenience in our subsequent discussion, we introduce the PMX
for individuals represented by permutations of {1, 2, ... , n}.
To explain the PMX for permutations, assume that two individuals x and y are represented by two strings s_x(1) s_x(2) ··· s_x(j) ··· s_x(n)
and s_y(1) s_y(2) ··· s_y(j) ··· s_y(n), respectively, where s_x(j) ∈ {1, ..., n},
s_x(j) ≠ s_x(j′) for j ≠ j′, and s_y(j) ∈ {1, ..., n}, s_y(j) ≠ s_y(j′) for j ≠ j′.
The PMX for permutations can be described as follows:

Partially Matched Crossover (PMX) for permutations


Step 0: Choose x and y as parent individuals. Then, prepare copies x'
and y' of x and y, respectively.
Step 1: Choose two crossover points at random on these strings, say, h
and k (h < k).
Step 2: Repeat the following procedures.

(a) Set j := h.
(b) Find j′ such that s_x′(j′) = s_y(j). Then, interchange s_x′(j) with
s_x′(j′) and set j := j + 1.
(c) If j > k, stop. Otherwise, return to (b).

Step 3: Let x′ be the offspring of x. This procedure is carried out for
y′ and x in the same manner, and let y′ be the offspring of y.
Observe that the PMX can be viewed as a natural extension of two-
point crossover for binary strings to permutation representation.
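For concreteness, the following Python sketch (0-based positions, our own rendering) realizes the interchange loop of step 2 with list.index, which locates the position j′ holding the gene demanded by the other parent:

    def pmx(x, y, h, k):
        # PMX for permutations: positions h..k (inclusive) of each offspring
        # are forced to match the other parent by pairwise interchanges,
        # so the result is always a valid permutation.
        def cross(parent, template):
            child = list(parent)
            for j in range(h, k + 1):
                jp = child.index(template[j])   # j' with child[j'] = template[j]
                child[j], child[jp] = child[jp], child[j]
            return child
        return cross(x, y), cross(y, x)

For example, pmx([1, 2, 3, 4, 5], [3, 4, 1, 2, 5], 1, 2) returns ([3, 4, 1, 2, 5], [1, 2, 3, 4, 5]), both valid permutations.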

2.2.3.3 Mutation
It is well-recognized that a mutation operator plays a role of local
random search in genetic algorithms. For bit strings, the following algo-
rithm of mutation of bit reverse type is proposed.

Mutation of bit reverse type


Step 0: Set r := 1.
Step 1: Set j := 1.

Step 2: Generate a real random number rand() in [0, 1]. For a given
mutation rate p_m, if rand() ≤ p_m, then go to step 3. Otherwise, go
to step 4.

Step 3: If s_j = 0, let s_j := 1 and go to step 4. Otherwise, i.e., if s_j = 1,
let s_j := 0 and go to step 4.

Step 4: If j < n, set j := j + 1 and return to step 2. Otherwise, go to
step 5.

Step 5: If r < N, set r := r + 1 and return to step 1. Otherwise, stop.
An example of mutation of bit reverse type is illustrated as

    Parent: 11101001   ⟹   Offspring: 10101001

when the second character of the bit string is selected for mutation.
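In Python, mutation of bit reverse type amounts to an independent coin flip per position (a sketch of ours):

    import random

    def bit_reverse_mutation(bits, p_m):
        # Each bit is flipped independently with mutation rate p_m.
        return "".join(("1" if b == "0" else "0")
                       if random.random() <= p_m else b
                       for b in bits)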
Another genetic operator, called an inversion, was introduced by Holland [75]. The inversion operator proceeds by inverting the order of
genes on a randomly selected segment of the string. The inversion proceeds as follows:

Inversion

Step 0: Set r := 1.

Step 1: Generate a real random number rand() in [0, 1]. For a given
inversion rate p_i, if rand() ≤ p_i, then go to step 2. Otherwise, go to
step 4.

Step 2: Choose two inversion points h and k (1 ≤ h < k ≤ n) at
random.

Step 3: Invert the substring between the two inversion points h and k.

Step 4: If r < N, set r := r + 1 and return to step 1. Otherwise, stop.
An example of inversion is illustrated as

                h   k                          h   k
    Parent: 111|001|110   ⟹   Offspring: 111|100|110

when the substring between h and k is selected for inversion.
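A sketch of the inversion operator for a single bit string (0-based indices, our own illustration of the h < k case described above):

    import random

    def inversion(bits, p_i):
        # With probability p_i, reverse the substring between two random
        # points h < k (0-based, inclusive).
        if random.random() <= p_i:
            h = random.randint(0, len(bits) - 2)
            k = random.randint(h + 1, len(bits) - 1)
            bits = bits[:h] + bits[h:k + 1][::-1] + bits[k + 1:]
        return bits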


Further details of the theory, methods, and applications of genetic
algorithms can be found in Goldberg [66], Michalewicz [112], Gen and
Cheng [60, 61], Sakawa and Tanaka [165], and Bäck et al. [13-15].
Chapter 3

GENETIC ALGORITHMS FOR 0-1 PROGRAMMING

In this chapter, genetic algorithms with double strings (GADS) as


developed for multidimensional 0-1 knapsack problems are discussed in
detail. Through the introduction of a double string representation and
the corresponding decoding algorithm, it is shown that a potential so-
lution satisfying constraints can be obtained for each individual. Then
the GADS are extended to deal with more general 0-1 programming
problems involving both positive and negative coefficients in the con-
straints. Especially, new decoding algorithms for double strings using
reference solutions both without and with the reference solution updat-
ing procedure are introduced so that each individual is decoded to the
corresponding feasible solution for the general 0-1 programming prob-
lems. The detailed comparative numerical experiments with a branch
and bound method are also provided.

3.1 Introduction
Recently, as a natural extension of single-objective 0-1 knapsack prob-
lems, Sakawa et al. [144, 148] formulated multiobjective multidimen-
sional 0-1 knapsack problems by assuming that the decision maker (DM)
may have a fuzzy goal for each of the objective functions. Having elicited
the linear membership functions that represent the fuzzy goals of the
DM well, the fuzzy decision of Bellman and Zadeh [22] is adopted for
combining them. In order to derive a compromise solution for the DM
by solving the formulated problem, GADS that decode an individual
represented by a double string to the corresponding feasible solution
for treating the constraints of the knapsack type have been proposed
[144, 148]. Also, through the combination of the desirable features of
both the interactive fuzzy satisficing methods for continuous variables
[135] and the GADS [144], an interactive fuzzy satisficing method to de-
rive a satisficing solution for the DM to multiobjective multidimensional
0-1 knapsack problems has been proposed [160, 161]. These results are
immediately extended to multiobjective multidimensional 0-1 knapsack
problems involving fuzzy numbers [162].
Unfortunately, however, because these GADS are based mainly on the
decoding algorithm for treating the constraints of the knapsack type,
they cannot be applied to more general 0-1 programming problems in-
volving positive and negative coefficients in both sides of the constraints.
In this chapter, we first revisit GADS for multidimensional 0-1 knap-
sack problems [144, 148] with some modifications and examine their
computational efficiency and effectiveness through computational experiments. Then we extend the GADS for 0-1 knapsack problems to deal
with more general 0-1 programming problems involving both positive
and negative coefficients in the constraints. New decoding algorithms
for double strings using reference solutions both without and with the
reference solution updating procedure are especially proposed so that
each individual will be decoded to the corresponding feasible solution
for the general 0-1 programming problems. Moreover, to demonstrate
the efficiency and effectiveness of the proposed genetic algorithms, the
proposed methods and the branch and bound method for several numer-
ical examples are compared with respect to the solution accuracy and
computational time.

3.2 Multidimensional 0-1 knapsack problems


In this section, for convenience in our subsequent discussion, genetic
algorithms with double strings for multidimensional 0-1 knapsack problems proposed by Sakawa et al. [138, 144, 147, 148, 160, 161, 163] are
revisited with some modifications, and their computational efficiency
and effectiveness are examined through computational experiments.

3.2.1 Problem formulation


As is well-known, a multidimensional 0-1 knapsack problem is formu-
lated as
    minimize    cx
    subject to  Ax ≤ b                                       (3.1)
                x_j ∈ {0, 1},  j = 1, ..., n,

where c = (c_1, ..., c_n) is an n-dimensional row vector; x = (x_1, ..., x_n)^T
is an n-dimensional column vector of 0-1 decision variables; A = [a_ij],
i = 1, ..., m, j = 1, ..., n, is an m × n coefficient matrix; and b =
(b_1, ..., b_m)^T is an m-dimensional column vector. It should be noted
here that, in a multidimensional 0-1 knapsack problem, each element of
c is assumed to be nonpositive and each element of A and b is assumed
to be nonnegative.

3.2.2 Coding and decoding


For solving 0-1 programming problems through genetic algorithms,
an individual is usually represented by a binary 0-1 string of length n
[66, 112]. For handling m constraints defined by Ax ≤ b in a multi-
dimensional 0-1 knapsack problem, the most straightforward technique
is to transform the constrained problem into an unconstrained problem
by penalizing infeasible solutions; namely, a penalty term is added to the
objective function for any violation of the constraints. Based on the
concept of penalty functions, it is possible to define the fitness function
of each individual s by

    f(s) = { −cx,  if Ax ≤ b
           {   0,  if Ax ≰ b                                  (3.2)

or

    f(s) = −cx − θ · max_{i=1,...,m} { (a_i x − b_i)/b_i , 0 },   (3.3)

where a_i, i = 1, ..., m, is the n-dimensional ith row vector of the coefficient matrix A; b_i, i = 1, ..., m, is the ith element of the vector b; and θ is
a positive parameter to adjust the penalty value.
The fitness function (3.2) or (3.3) is defined to prevent the generation
of infeasible solutions by imposing penalties on individuals that violate
the constraints. Although several ideas have been proposed about how
the penalty function is designed and applied to infeasible solutions, it is
generally recognized that the smaller the feasible region, the harder it is
for the penalty function methods to generate feasible solutions, as
pointed out in the field of nonlinear optimization.
For multidimensional 0-1 knapsack problems, Sakawa et al. [138, 144,
147, 148, 160, 161, 163] proposed a double string representation as shown
in Figure 3.1, where g_{s(j)} ∈ {0, 1}, s(j) ∈ {1, ..., n}, and s(j) ≠ s(j′)
for j ≠ j′.

Figure 3.1. Double string (upper row: indices of variables s(j); lower row: 0-1 values g_{s(j)})

In a double string, regarding s(j) and g_{s(j)} as the index of an element
in a solution vector and the value of that element, respectively, a string
s can be transformed into a solution x = (x_1, ..., x_n)^T as

    x_{s(j)} = g_{s(j)},  j = 1, ..., n.                      (3.4)

Unfortunately, however, because this mapping may generate infeasi-


ble solutions, the following decoding algorithm for eliminating infeasible
solutions has been proposed [138, 148]. In the algorithm, n, j, s(j), x_{s(j)},
and p_{s(j)} denote the length of a string, a position in a string, the index of a
variable, the 0-1 value of a variable with index s(j) decoded from a string,
and the s(j)th column vector of the coefficient matrix A, respectively.

Decoding algorithm for double string

Step 1: Set j := 1 and sum := 0.

Step 2: If g_{s(j)} = 0, set x_{s(j)} := 0 and j := j + 1, and go to step 4.
Otherwise, i.e., if g_{s(j)} = 1, then go to step 3.

Step 3: If sum + p_{s(j)} ≤ b, set x_{s(j)} := 1, sum := sum + p_{s(j)}, and
j := j + 1, and go to step 4. Otherwise, set x_{s(j)} := 0 and j := j + 1,
and go to step 4.

Step 4: If j > n, stop. Otherwise, return to step 2.
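The decoding algorithm can be sketched in Python as follows; 0-based indices and a NumPy coefficient matrix whose s(j)th column plays the role of p_{s(j)} are conventions of ours, not of the original:

    import numpy as np

    def decode_double_string(s, g, A, b):
        # Scan the variables in the order s(1), ..., s(n); fix x_{s(j)} = 1
        # only if all m knapsack constraints remain satisfied (steps 1-4).
        n = len(s)
        x = np.zeros(n, dtype=int)
        total = np.zeros(len(b))             # 'sum' in the algorithm
        for j in range(n):
            idx = s[j]
            if g[j] == 1 and np.all(total + A[:, idx] <= b):
                x[idx] = 1
                total = total + A[:, idx]
        return x

By construction every decoded x satisfies Ax ≤ b, so no penalty term is needed.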

3.2.3 Fitness and scaling


For multidimensional 0-1 knapsack problems, it seems quite natural
to define the fitness function of each individual s by

    f(s) = cx / Σ_{j=1}^{n} c_j ,                             (3.5)

where s denotes an individual represented by a double string and x
is the phenotype of s. Observe that the fitness is normalized by the
minimum of the objective function, and hence the fitness f(s) satisfies
0 ≤ f(s) ≤ 1.
In a reproduction operator based on the ratio of fitness of each individ-
ual to the total fitness such as an expected value model, it is frequently
pointed out that the probability of selection depends on the relative ratio
of fitness of each individual. Thus, several scaling mechanisms have been
introduced [66, 112]. Here, the linear scaling [66] discussed in Chapter 2
is adopted.

3.2.4 Genetic operators


3.2.4.1 Reproduction
Up to now, various reproduction methods have been proposed and
considered [66, 112]. Using several multiobjective 0-1 programming test
problems, Sakawa et al. [138, 148] investigated the performance of each
of the six reproduction operators-ranking selection, elitist ranking se-
lection, expected value selection, elitist expected value selection, roulette
wheel selection, and elitist roulette wheel selection-and as a result con-
firmed that elitist expected value selection is relatively efficient. Based
mainly on our experience [138, 148] as a reproduction operator, elitist
expected value selection is adopted here. Elitist expected value selection
is a combination of elitism and expected value selection as mentioned
below.
Elitism: If the fitness of a string in the past populations is larger than
that of every string in the current population, preserve this string
into the current generation.
Expected value selection: For a population consisting of N strings, the
expected value of the number of the ith string s_i in the next population,

    N_i = f(s_i) · N / Σ_{j=1}^{N} f(s_j) ,                   (3.6)

is calculated. Then, the integral part of N_i denotes the deterministic
number of copies of the string s_i preserved in the next population. The decimal
part of N_i is regarded as the probability for one more copy of the string s_i to survive;
in other words, the remaining N − Σ_{i=1}^{N} ⌊N_i⌋ strings are determined on the basis of
this probability.
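A Python sketch of expected value selection following (3.6) (our own rendering; strictly positive fitness values are assumed, and elitism would simply re-insert the best individual found so far):

    import math
    import random

    def expected_value_selection(population, fitnesses):
        # N_i = N * f(s_i) / sum_j f(s_j): floor(N_i) deterministic copies,
        # remaining slots drawn with probabilities given by the decimal parts.
        N, total = len(population), sum(fitnesses)
        expected = [N * f / total for f in fitnesses]
        next_population = []
        for individual, e in zip(population, expected):
            next_population.extend([individual] * math.floor(e))
        fractions = [e - math.floor(e) for e in expected]
        while len(next_population) < N:
            next_population.append(
                random.choices(population, weights=fractions)[0])
        return next_population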

3.2.4.2 Crossover
If a single-point or multipoint crossover operator is applied to individuals represented by double strings, an index s(j) in an offspring may
take the same number that an index s(j′) (j ≠ j′) takes. Recall that the
same violation occurs in solving TSPs or scheduling problems through
genetic algorithms. As one possible approach to circumvent such violation,
the crossover method called PMX is useful. It enables us to generate
desirable offspring without changing the double string structure, unlike
the ordinal representation [73]. However, in order to process each element g_{s(j)} in the double string structure efficiently, it is necessary to
modify some points of the procedures. The PMX for double strings can
be described as follows:

Partially Matched Crossover (PMX) for double string

Step 0: Set r := 1.

Step 1: Choose X and Y as parent individuals. Then, let X′ := X and
Y′ := Y.

Step 2: Generate a real random number rand() in [0, 1]. For a given
crossover rate p_c, if rand() ≤ p_c, then go to step 3. Otherwise, go to
step 8.

Step 3: Choose two crossover points h, k (h ≠ k) from {1, 2, ..., n}
at random. Then, set l := h. First, perform the operations in steps 4
through 6 for X′ and Y.

Step 4: Let j := ((l − 1)%n) + 1 (p%q is defined as the remainder when
an integer p is divided by an integer q). After finding j′ such that
s_Y(j) = s_X′(j′), interchange (s_X′(j), g_{s_X′(j)})^T with (s_X′(j′), g_{s_X′(j′)})^T.
Furthermore, set l := l + 1, and go to step 5.

Step 5: 1) If h < k and l > k, then go to step 6. If h < k and l ≤ k,
then return to step 4. 2) If h > k and l > (k + n), then go to step 6.
If h > k and l ≤ (k + n), then return to step 4.

Step 6: 1) If h < k, let g_{s_X′(j)} := g_{s_Y(j)} for all j such that h ≤ j ≤ k,
and go to step 7. 2) If h > k, let g_{s_X′(j)} := g_{s_Y(j)} for all j such that
1 ≤ j ≤ k or h ≤ j ≤ n, and go to step 7.

Step 7: Carry out the same operations as in steps 4 through 6 for Y′
and X.

Step 8: Preserve X′ and Y′ as the offspring of X and Y.

Step 9: If r < N, set r := r + 1 and return to step 1. Otherwise, go to
step 10.

Step 10: Choose N·G individuals from the 2N preserved individuals
at random, and replace N·G individuals of the current population
consisting of N individuals with the N·G chosen individuals. Here,
G is a constant called the generation gap.

It should be noted here that the original PMX for double strings is
extended to deal with the substrings not only between h and k but also
between k and h.

3.2.4.3 Mutation
It is well-recognized that a mutation operator plays a role in local
random search in genetic algorithms. Here, for the lower string of a
double string, mutation of bit reverse type is adopted.

Mutation of bit reverse type for double strings

Step 0: Set r := 1.

Step 1: Set j := 1.

Step 2: Generate a random number rand() in [0, 1]. For a given mutation rate p_m, if rand() ≤ p_m, then go to step 3. Otherwise, go to
step 4.

Step 3: If g_{s(j)} = 0, let g_{s(j)} := 1 and go to step 4. Otherwise, i.e., if
g_{s(j)} = 1, let g_{s(j)} := 0, and go to step 4.

Step 4: If j < n, set j := j + 1 and return to step 2. Otherwise, go to
step 5.

Step 5: If r < N, set r := r + 1 and return to step 1. Otherwise, stop.

We may introduce another genetic operator, called an inversion. The
inversion for double strings proceeds as follows:

Inversion for double strings

Step 0: Set r := 1.

Step 1: Generate a random number rand() in [0, 1]. For a given inversion rate p_i, if rand() ≤ p_i, then go to step 2. Otherwise, go to step
4.

Step 2: Choose two points h, k (h ≠ k) from {1, 2, ..., n} at random.
Then, set l := h.

Step 3: Let j := ((l − 1)%n) + 1. Then, interchange (s(j), g_{s(j)})^T with
(s((n + k − (l − h) − 1)%n + 1), g_{s((n+k−(l−h)−1)%n+1)})^T. Furthermore,
set l := l + 1 and go to step 4.

Step 4: 1) If h < k and l < h + ⌊(k − h + 1)/2⌋, return to step 3.
If h < k and l ≥ h + ⌊(k − h + 1)/2⌋, go to step 5. 2) If h > k
and l < h + ⌊(k + n − h + 1)/2⌋, return to step 3. If h > k and
l ≥ h + ⌊(k + n − h + 1)/2⌋, go to step 5.

Step 5: If r < N, set r := r + 1 and return to step 1. Otherwise, stop.



Observe that the original inversion for double strings is extended to
deal with the substrings not only between h and k but also between k
and h.

3.2.5 Termination conditions


When applying genetic algorithms to the multidimensional 0-1 knap-
sack problem (3.1), an approximate solution of desirable precision must
be obtained in a proper time. For this reason, two parameters I_min
and I_max, which respectively denote the minimum and maximum numbers
of generations to be searched, are introduced. Then the following termination conditions are imposed.

Step 1: Set the iteration (generation) index t := 0 and the parameter
of the termination condition ε > 0.

Step 2: Carry out a series of procedures for search through genetic
algorithms (reproduction, crossover, and mutation).

Step 3: Calculate the mean fitness f_mean and the maximal fitness f_max
of the population.

Step 4: If t > I_min and (f_max − f_mean)/f_max < ε, stop.

Step 5: If t > I_max, stop. Otherwise, set t := t + 1 and return to step
2.
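Both conditions fold into a single predicate; the defaults below mirror the parameter values used in the numerical experiments later in this section, and a positive f_max is assumed (a sketch of ours):

    def should_terminate(t, f_mean, f_max, i_min=100, i_max=1000, eps=0.02):
        # Stop when the population has nearly converged (after at least I_min
        # generations) or when the generation limit I_max is exceeded.
        if t > i_min and (f_max - f_mean) / f_max < eps:
            return True
        return t > i_max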

3.2.6 Numerical experiments


For investigating the feasibility and efficiency of the GADS, consider
multidimensional 0-1 knapsack problems with 30, 50, 100, 150, and 200
variables. Each element of c_j and a_ij, i = 1, ..., m, j = 1, ..., n, in the
numerical examples corresponding to the multidimensional 0-1 knapsack
problem (3.1) was selected at random from the closed intervals [−999, 0]
and [0, 999], respectively. On the basis of these values, each element b_i,
i = 1, ..., m, was determined by

    b_i = γ Σ_{j=1}^{n} a_ij ,                                (3.7)

where the positive constant γ denotes the degree of strictness of the
constraints.
Our numerical experiments were performed on a personal computer
(processor: Celeron 333MHz, memory: 128MB, operating system (OS):
Windows NT 4.0) using a Visual C++ compiler (version 6.0). Also,
in order to compare the obtained results with the corresponding exact
optimal solutions or incumbent values, the same problems were solved


using LP_SOLVE [23] by M. Berkelaar.¹

The parameter values used in GADS were set as follows: population
size = 100, generation gap G = 0.9, probability of crossover p_c = 0.9,
probability of mutation p_m = 0.01, probability of inversion p_i = 0.03,
ε = 0.02, I_max = 1000, I_min = 100, and c_mult = 1.8. Observe that these
parameter values were found through our experience and were used in
all of the trials of GADS.
First, consider the multidimensional 0-1 knapsack problem with 30
variables and 10 constraints. For γ = 0.25, 0.50, and 0.75, 10 trials
for each example were performed through GADS. Also, for comparison
with exact optimal solutions, each example was solved by LP_SOLVE
[23]. Table 3.1 shows the experimental results for γ = 0.25, 0.50, and
0.75, where best, average, worst, time, and # represent the best value,
average value, worst value, average processing time, and the number of
best solutions in 10 trials, respectively.
For problems with 30 variables, it can be seen from Table 3.1 that
optimal solutions were obtained 10 times out of 10 trials for GADS.
However, concerning the processing time, as expected, LP_SOLVE was
faster than GADS. As a result, for problems with 30 variables, there
is no evidence that would reveal an advantage of GADS over LP_SOLVE.

Table 3.1. Experimental results for 30 variables and 10 constraints (10 trials)

γ      Method     Best      Average     Worst     Time (sec)     #
0.25   GADS       -5378     -5378.0     -5378     5.07           10/10
       LP_SOLVE   -5378 (optimal)                 2.30 × 10^-1   -
0.50   GADS       -9661     -9661.0     -9661     5.97           10/10
       LP_SOLVE   -9661 (optimal)                 7.71 × 10^-1   -
0.75   GADS       -12051    -12051.0    -12051    6.24           10/10
       LP_SOLVE   -12051 (optimal)                9.00 × 10^-2   -

Next, consider another multidimensional 0-1 knapsack problem with


50 variables and 20 constraints. The results obtained through 10 trials
of GADS for each of the problems are shown in Table 3.2 together
with the experimental results by LP_SOLVE. From Table 3.2, it can be
seen that GADS succeeds 10 times out of 10 trials for γ = 0.25 and 0.75
and 8 times out of 10 trials for γ = 0.50. Furthermore, for γ = 0.25 and
0.50, the required processing time of GADS is about 40 to 50% of that
of LP_SOLVE. As a result, for problems with 50 variables, GADS seems
to be more desirable than LP_SOLVE.

¹LP_SOLVE [23] solves (mixed integer) linear programming problems. The implementation of the simplex kernel was mainly based on the text by Orchard-Hays [125]. The mixed integer branch and bound part was inspired by Dakin [37].

Table 3.2. Experimental results for 50 variables and 20 constraints (10 trials)

γ      Method     Best      Average     Worst     Time (sec)     #
0.25   GADS       -8940     -8940.0     -8940     1.28 × 10^1    10/10
       LP_SOLVE   -8940 (optimal)                 3.01 × 10^1    -
0.50   GADS       -16485    -16483.2    -16476    1.37 × 10^1    8/10
       LP_SOLVE   -16485 (optimal)                3.34 × 10^1    -
0.75   GADS       -21931    -21931.0    -21931    1.47 × 10^1    10/10
       LP_SOLVE   -21931 (optimal)                8.91 × 10^-1   -

Similar computational experiments were performed on numerical examples with 100 variables and 30 constraints, 150 variables and 40 constraints, and 200 variables and 50 constraints, and the corresponding
results are shown in Tables 3.3, 3.4, and 3.5, respectively, where for the numerical examples with 200 variables and 50 constraints, I_max is increased
from I_max = 1000 to I_max = 2000.
It is significant to note here that although the accuracy of the best
solutions obtained through GADS tends to decrease compared with
the case of 30 variables or 50 variables, on the average GADS gives
relatively desirable results. Especially concerning the processing times,
GADS is much faster than LP_SOLVE.

Table 3.3. Experimental results for 100 variables and 30 constraints (10 trials)

γ      Method     Best      Average     Worst     Time (sec)    #
0.25   GADS       -20846    -20493.7    -20244    3.89 × 10^1   -
       LP_SOLVE   -19807 (incumbent)              1.08 × 10^4   -
0.50   GADS       -36643    -36242.1    -35802    4.33 × 10^1   0/10
       LP_SOLVE   -36818 (optimal)                1.50 × 10^3   -
0.75   GADS       -46097    -45892.5    -45771    4.77 × 10^1   0/10
       LP_SOLVE   -46198 (optimal)                1.13 × 10^2   -

Table 3.4. Experimental results for 150 variables and 40 constraints (10 trials)

γ      Method     Best      Average     Worst     Time (sec)    #
0.25   GADS       -30697    -30044.1    -29116    8.08 × 10^1   -
       LP_SOLVE   -30343 (incumbent)              1.08 × 10^4   -
0.50   GADS       -52768    -51700.0    -51283    8.57 × 10^1   -
       LP_SOLVE   -53755 (incumbent)              1.08 × 10^4   -
0.75   GADS       -65989    -65285.8    -64708    9.30 × 10^1   0/10
       LP_SOLVE   -66543 (optimal)                4.44 × 10^3   -

Table 3.5. Experimental results for 200 variables and 50 constraints (10 trials)

γ      Method     Best      Average     Worst     Time (sec)    #
0.25   GADS       -37266    -36431.9    -35534    2.61 × 10^2   -
       LP_SOLVE   -36915 (incumbent)              1.08 × 10^4   -
0.50   GADS       -67060    -65384.1    -64148    2.85 × 10^2   -
       LP_SOLVE   -68541 (incumbent)              1.08 × 10^4   -
0.75   GADS       -85169    -84341.3    -83657    3.11 × 10^2   0/10
       LP_SOLVE   -86757 (optimal)                1.10 × 10^4   -

3.3 0-1 programming


3.3.1 Problem formulation
In general, a 0-1 programming problem is formulated as

    minimize    cx
    subject to  Ax ≤ b                                       (3.8)
                x ∈ {0, 1}^n,

where c = (c_1, ..., c_n) is an n-dimensional row vector; x = (x_1, ..., x_n)^T
is an n-dimensional column vector of 0-1 decision variables; A = [a_ij],
i = 1, ..., m, j = 1, ..., n, is an m × n coefficient matrix; and b =
(b_1, ..., b_m)^T is an m-dimensional column vector.
For such 0-1 programming problems (3.8), as discussed in the previous section, Sakawa et al. focused on the knapsack type in which all of the
elements of A and b are nonnegative and proposed a genetic algorithm
with double string representation on the basis of the decoding algorithm
that decodes an individual represented by a double string to the corresponding feasible solution for treating the knapsack-type constraints
[144, 148].

Unfortunately, however, the GADS proposed by Sakawa et al. [144,


148] cannot be directly applied to more general 0-1 programming prob-
lems in which not only positive elements but also negative ones of A and
b exist.
In this section, we extend the GADS to be applicable to more general
0-1 programming problems with A ∈ R^{m×n} and b ∈ R^m.

3.3.2 Genetic algorithms with double strings based


on a reference solution
In the GADS proposed by Sakawa et al. [144, 148] for multidimen-
sional 0-1 knapsack problems (3.1), each individual is decoded to the
corresponding feasible solution by a decoding algorithm. Unfortunately,
however, it should be noted here that this decoding algorithm does not
work well for more general 0-1 programming problems involving positive
and negative coefficients in both sides of the constraints.
In order to overcome such a defect of the original decoding algorithm, by
introducing a reference solution with backtracking and individual mod-
ification, we propose a new decoding algorithm for double strings that
is applicable to more general 0-1 programming problems with positive
and negative coefficients in the constraints.
It is significant to note here that, because x = 0 is always a feasible
solution for 0-1 knapsack problems, this decoding algorithm enables
us to decode each individual to the corresponding feasible solution by
determining the values of x_{s(j)} as

    x_{s(j)} = 0, if g_{s(j)} = 0, or if g_{s(j)} = 1 and the constraints are not satisfied;
    x_{s(j)} = 1, if g_{s(j)} = 1 and the constraints are satisfied.

Unfortunately, however, this decoding algorithm cannot be directly ap-


plied to 0-1 programming problems with negative elements in b.
Realizing this difficulty, by introducing a reference solution with back-
tracking and individual modification, we propose a new decoding algo-
rithm for double strings that is applicable to more general 0-1 program-
ming problems with positive and negative coefficients in the constraints.
For that purpose, it is required to find a feasible solution x^0 for generating a reference solution by some method. One possible way to obtain
a feasible solution to the 0-1 programming problem (3.8) is to maximize
the exponential function for the violation of the constraints defined by

    g(x) = exp( −θ Σ_{i=1}^{m} R(a_i x − b_i) / ( Σ_{j∈J_{a_i}^+} a_ij − Σ_{j∈J_{a_i}^-} a_ij ) ),   (3.9)

where a_i, i = 1, ..., m, denotes the n-dimensional ith row vector of
the coefficient matrix A; J_{a_i}^+ = {j | a_ij ≥ 0, 1 ≤ j ≤ n}, J_{a_i}^- = {j |
a_ij < 0, 1 ≤ j ≤ n}; Σ_{j∈J_{a_i}^+} a_ij and Σ_{j∈J_{a_i}^-} a_ij are the maximum and
minimum of a_i x, respectively; θ is a positive parameter to adjust the
severity of the violation of the constraints; and

    R(ξ) = { ξ,  ξ ≥ 0
           { 0,  ξ < 0.                                       (3.10)

Namely, for obtaining a feasible solution x^0, solve the maximization problem

    maximize g(x),  x ∈ {0, 1}^n                              (3.11)

through GADS (without using the decoding algorithm) by regarding the
fitness function of an individual s as f(s) = g(x).
If a feasible solution cannot be found within a prescribed number of
trials, it is concluded that the original 0-1 programming problem (3.8)
has no feasible solutions.
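A Python sketch of the feasibility measure, following the reconstruction of (3.9) given above (the exact normalization in the original may differ): each violated constraint contributes R(a_i x − b_i), scaled by the range of a_i x, so that g(x) = 1 exactly when x is feasible.

    import numpy as np

    def violation_fitness(x, A, b, theta=1.0):
        lhs = A @ x
        # Range of a_i x: sum of positive entries minus sum of negative entries.
        ranges = A.clip(min=0).sum(axis=1) - A.clip(max=0).sum(axis=1)
        # R(a_i x - b_i) of (3.10), normalized constraint by constraint.
        violations = np.clip(lhs - b, 0.0, None) / ranges
        return float(np.exp(-theta * violations.sum()))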
By using the feasible solution x^0 thus obtained, the decoding algorithm using a reference solution with backtracking and individual modification can be described as follows:

Decoding algorithm using a reference solution with backtrack-


ing and individual modification
Step 1: Set j := 1, l := 0, and sum := 0.

Step 2: If g_{s(j)} = 0, set j := j + 1 and go to step 4. If g_{s(j)} = 1, let
sum := sum + p_{s(j)} and go to step 3.

Step 3: If sum ≤ b, let l := j and j := j + 1, and go to step 4.
Otherwise, let j := j + 1 and go to step 4.

Step 4: If j > n, go to step 5. Otherwise, return to step 2.

Step 5: If l > 0, let x*_{s(j)} := g_{s(j)} for all j such that 1 ≤ j ≤ l and
x*_{s(j)} := 0 for all j such that l + 1 ≤ j ≤ n, and go to step 6.
Otherwise, let x* := x^0 and go to step 6.
Step 6: Let j := 1 and sum := Σ_{k=1}^{n} p_{s(k)} x*_{s(k)}.

Step 7: If g_{s(j)} = x*_{s(j)}, let x_{s(j)} := g_{s(j)} and j := j + 1, and go to step
9. If g_{s(j)} ≠ x*_{s(j)}, then go to step 8.

Step 8: 1) If g_{s(j)} = 1 and sum + p_{s(j)} ≤ b, let x_{s(j)} := 1, sum :=
sum + p_{s(j)}, and j := j + 1. Here, if there exists at least one negative
element in p_{s(j)}, then go to substep 1 for backtracking and individual
modification. If not, go to step 9. If g_{s(j)} = 1 and sum + p_{s(j)} ≰ b,
let x_{s(j)} := 2 and j := j + 1, and go to step 9. 2) If g_{s(j)} = 0 and
sum − p_{s(j)} ≤ b, let x_{s(j)} := 0, sum := sum − p_{s(j)}, and j := j + 1.
Here, if there exists at least one positive element in p_{s(j)}, then go to
substep 1 for backtracking and individual modification. If not, go to
step 9. If g_{s(j)} = 0 and sum − p_{s(j)} ≰ b, let x_{s(j)} := 2 and j := j + 1,
and go to step 9.

Substeps for backtracking and individual modification

Substep 1: Set h := 1.

Substep 2: If x_{s(h)} = 2, go to substep 3. Otherwise, let h := h + 1
and go to substep 4.

Substep 3: 1) If g_{s(h)} = 1 and sum + p_{s(h)} ≤ b, let x_{s(h)} :=
1, sum := sum + p_{s(h)}, and h := h + 1. Then, interchange
(s(j), g_{s(j)})^T with (s(h), g_{s(h)})^T. If there exists at least one negative element in p_{s(h)}, then return to substep 1. If not, go to
substep 4. If g_{s(h)} = 1 and sum + p_{s(h)} ≰ b, let h := h + 1
and go to substep 4. 2) If g_{s(h)} = 0 and sum − p_{s(h)} ≤ b, let
x_{s(h)} := 0, sum := sum − p_{s(h)}, and h := h + 1. Then, interchange (s(j), g_{s(j)})^T with (s(h), g_{s(h)})^T. If there exists at least
one positive element in p_{s(h)}, then return to substep 1. If not, go
to substep 4. If g_{s(h)} = 0 and sum − p_{s(h)} ≰ b, let h := h + 1 and
go to substep 4.

Substep 4: If h ≥ j, then go to step 9. Otherwise, return to substep
2.

Step 9: If j > n, let h := 1 and go to step 10. Otherwise, return to
step 7.

Step 10: If x_{s(h)} = 2, let x_{s(h)} := x*_{s(h)} and h := h + 1, and go to step
11. Otherwise, let h := h + 1 and go to step 11.

Step 11: If h > n, stop. Otherwise, return to step 10.
Examples of the decoding algorithm (steps 0 through 5) are illustrated
in Figure 3.2. In the figure, o or × indicates whether the inequality
sum ≤ b holds at the locus or not. First, find a feasible solution x^0 =
(1, 0, 0, 1, 0, 1) in step 0, and let j = 1, l = 0, and sum = 0 in step 1.
Then, apply steps 2 through 4 repeatedly.

For the individual a) in Figure 3.2, because the maximal j such that
sum ≤ b holds is equal to 4, l = 4. Thus, a reference solution, which
becomes the basis of decoding, x* = (1, 1, 0, 1, 0, 0) is obtained. For
the individual b) in Figure 3.2, because there exists no locus such that
sum ≤ b holds, let x* = x^0 be the reference solution.

Figure 3.2. Examples of decoding (steps 0 through 5)

Examples of the decoding algorithm (steps 6 through 11) are illustrated in Figure 3.3. First, using the reference solution obtained by step
5, calculate sum = Σ_{j=1}^{n} p_{s(j)} x*_{s(j)} in step 6. Then, according to steps
7 through 11, decode the individual from the left-hand side as follows:
1) Because s(1) = 2 and g_2 = 1 = x*_2, let x_2 = 1. 2) Because s(2) = 4,
g_4 = 0 ≠ x*_4 and sum − p_4 ≰ b, let x_4 = 2 temporarily. 3) Because
s(3) = 3 and g_3 = 0 = x*_3, let x_3 = 0. 4) Because s(4) = 5, g_5 = 1 ≠ x*_5
and sum + p_5 ≤ b, let x_5 = 1 and sum = sum + p_5. When backtracking to j = 2, because sum − p_4 ≤ b, let x_4 = 0. Then, apply the
individual modification to exchange (s(2), g_{s(2)})^T for (s(4), g_{s(4)})^T. In
this example, a feasible solution x = (1, 1, 0, 0, 1, 0)^T is obtained by use
of the decoding algorithm.

As can be seen from the previous discussions, for general 0-1 programming problems involving positive and negative coefficients in the
constraints, this newly developed decoding algorithm enables us to decode each of the individuals represented by the double strings to the
corresponding feasible solution.
Figure 3.3. Examples of decoding (steps 6 through 11)

3.3.2.1 Fitness
For general 0-1 programming problems, it seems quite natural to define the fitness function of each individual s by

    f(s) = ( cx − Σ_{j∈J_c^+} c_j ) / ( Σ_{j∈J_c^-} c_j − Σ_{j∈J_c^+} c_j ),   (3.12)

where J_c^+ = {j | c_j ≥ 0, 1 ≤ j ≤ n} and J_c^- = {j | c_j < 0, 1 ≤ j ≤ n}. It
should be noted here that the fitness f(s) becomes

    f(s) = { 0, if x_j = 1, j ∈ J_c^+ and x_j = 0, j ∈ J_c^-
           { 1, if x_j = 0, j ∈ J_c^+ and x_j = 1, j ∈ J_c^-          (3.13)

and the fitness f(s) satisfies 0 ≤ f(s) ≤ 1.

3.3.2.2 Genetic operators


Quite similar to GADS, elitist expected value selection, which is the
combination of expected value selection and elitist preserving selection,
is adopted, and the linear scaling technique is used. Also, PMX for
double strings is adopted. A mutation operator of bit reverse type and
an inversion operator are used.

3.3.3 Genetic algorithms with double strings based


on reference solution updating
In the GADS based on a reference solution (GADSRS) introduced
thus far, through the use of a reference solution with backtracking and
individual modification the original decoding algorithm in GADS is ex-
tended to deal with general 0-1 programming problems with positive and
negative coefficients in the constraints.
Unfortunately, however, backtracking and individual modification re-
quire a great deal of computational time in the decoding algorithm. Also,
the diversity of phenotypes x greatly depends on the reference solution
used in the decoding algorithm. To overcome such problems, we further
propose a modified decoding algorithm using a reference solution with-
out backtracking and individual modification as well as the reference
solution updating procedure.
After eliminating the backtracking and individual modification procedures in the decoding algorithm, by introducing the solutions for the
relaxed problem with the constraints corresponding to the positive right-hand side constants, the following decoding algorithm using a reference
solution can be proposed. In the following decoding algorithm, b^+ denotes the column vector of positive right-hand side constants, and the corresponding coefficient matrix is denoted by A^+ = (p_1^+, ..., p_n^+). Also,
g_{s(j)} and x*_{s(j)}, j = 1, ..., n, respectively, denote the values of variables
of an individual and a reference solution.

Decoding algorithm using reference solution


Step 1: Set j := 1 and psum := 0.

Step 2: If g_{s(j)} = 0, let q_{s(j)} := 0 and j := j + 1, and go to step 4. If
g_{s(j)} = 1, then go to step 3.

Step 3: If psum + p^+_{s(j)} ≤ b^+, let q_{s(j)} := 1, psum := psum + p^+_{s(j)}, and
j := j + 1, and go to step 4. Otherwise, let q_{s(j)} := 0 and j := j + 1,
and go to step 4.

Step 4: If j > n, go to step 5. Otherwise, return to step 2.

Step 5: Let j := 1, l := 0, and sum := 0.

Step 6: If g_{s(j)} = 0, let j := j + 1, and go to step 8. If g_{s(j)} = 1, let
sum := sum + p_{s(j)}, and go to step 7.

Step 7: If sum ≤ b, let l := j and j := j + 1, and go to step 8.
Otherwise, let j := j + 1, and go to step 8.

Step 8: If j > n, go to step 9. Otherwise, return to step 6.

Step 9: If l > 0, go to step 10. Otherwise, go to step 11.

Step 10: For x_{s(j)} satisfying 1 ≤ j ≤ l, let x_{s(j)} := g_{s(j)}. For x_{s(j)}
satisfying l + 1 ≤ j ≤ n, let x_{s(j)} := 0, and stop.

Step 11: Let sum := Σ_{k=1}^{n} p_{s(k)} x*_{s(k)} and j := 1.

Step 12: If g_{s(j)} = x*_{s(j)}, let x_{s(j)} := g_{s(j)} and j := j + 1, and go to step
14. If g_{s(j)} ≠ x*_{s(j)}, go to step 13.

Step 13: 1) In the case of g_{s(j)} = 1: If sum + p_{s(j)} ≤ b, let x_{s(j)} := 1,
sum := sum + p_{s(j)}, and j := j + 1, and go to step 14. If sum + p_{s(j)} ≰
b, let x_{s(j)} := 0 and j := j + 1, and go to step 14. 2) In the case of
g_{s(j)} = 0: If sum − p_{s(j)} ≤ b, let x_{s(j)} := 0, sum := sum − p_{s(j)}, and
j := j + 1, and go to step 14. If sum − p_{s(j)} ≰ b, let x_{s(j)} := 1 and
j := j + 1, and go to step 14.

Step 14: If j > n, stop. Otherwise, return to step 12.
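Steps 5 through 14 can be sketched in Python as follows (0-based indices and NumPy arrays are our conventions; steps 1 through 4, which produce the auxiliary solution q used only by the second fitness function introduced below, are omitted for brevity):

    import numpy as np

    def decode_with_reference(s, g, A, b, x_ref):
        n = len(s)
        # Steps 5-9: largest prefix l of 1-genes that is feasible on its own.
        total, l = np.zeros(len(b)), 0
        for j in range(n):
            if g[j] == 1:
                total = total + A[:, s[j]]
                if np.all(total <= b):
                    l = j + 1
        x = np.zeros(n, dtype=int)
        if l > 0:                            # Step 10: greedy prefix decoding
            for j in range(l):
                x[s[j]] = g[j]
            return x
        # Steps 11-14: start from the reference solution x* and flip a gene
        # only when the flip keeps Ax <= b.
        x[:] = x_ref
        total = A @ x
        for j in range(n):
            idx = s[j]
            if g[j] == x[idx]:
                continue
            if g[j] == 1 and np.all(total + A[:, idx] <= b):
                x[idx], total = 1, total + A[:, idx]
            elif g[j] == 0 and np.all(total - A[:, idx] <= b):
                x[idx], total = 0, total - A[:, idx]
        return x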
In the decoding algorithm, from step 1 to step 4, the values of variables
satisfying only the constraints corresponding to the positive right-hand
side constants are decoded into q, and from step 5 to step 14, each
individual is decoded into the corresponding feasible solution x.
Examples of the decoding algorithm are illustrated in Figure 3.4.
First, in a), according to steps 1 through 4, decode the individual satisfying the constraints corresponding to the positive right-hand side constants into q. Then, in b), according to steps 5 through 9, l is found.
In this example, since l = 0, go to step 11. Then, in c), according to
steps 11 through 14, using a reference solution x*, decode the individual
satisfying the constraints into x.

Figure 3.4. Examples of decoding

As can be seen from the previous discussions, for general 0-1 programming problems involving positive and negative coefficients in the
constraints, this newly developed decoding algorithm enables us to decode the individuals represented by the double strings to the corresponding feasible solution. However, the diversity of phenotypes x greatly

depends on the reference solution used in the decoding algorithm. To overcome such situations, we propose the following reference
solution updating procedure, in which the current reference solution
is updated by another feasible solution if the diversity of phenotypes
seems to be lost. To do so, for every generation, check the dependence
on the reference solution through the calculation of the mean of the
Hamming distances between all individuals of phenotypes and the reference solution, and when the dependence on the reference solution is
strong, replace the reference solution by the individual of phenotype
having maximum Hamming distance.

Let N, x*, η (< 1.0), and x^r denote the number of individuals, the
reference solution, a parameter for reference solution updating, and the
feasible solution decoded from the rth individual, respectively; then the
reference solution updating procedure can be described as follows.

The reference solution updating procedure


Step 1: Set r := 1, r_max := 1, d_max := 0, and d_sum := 0.

Step 2: Calculate d_r = Σ_{j=1}^{n} |x*_j − x^r_j| and let d_sum := d_sum + d_r.
If d_r > d_max and cx^r < cx^0, let d_max := d_r, r_max := r, and r := r + 1,
and go to step 3. Otherwise, let r := r + 1 and go to step 3.

Step 3: If r > N, go to step 4. Otherwise, return to step 2.

Step 4: If d_sum/(N·n) < η, then update the reference solution as x* :=
x^{r_max}, and stop. Otherwise, stop without updating the reference
solution.
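A sketch of the procedure in Python (NumPy 0-1 vectors assumed). One caveat: the original compares each candidate's objective value cx^r with that of the initial feasible solution x^0, whereas this self-contained version compares against the current reference solution, which is an assumption of ours:

    import numpy as np

    def update_reference(x_ref, decoded, c, eta=0.2):
        # decoded: list of feasible phenotypes x^1, ..., x^N of this generation.
        N, n = len(decoded), len(x_ref)
        distances = [int(np.abs(x_ref - x_r).sum()) for x_r in decoded]
        if sum(distances) / (N * n) >= eta:
            return x_ref                     # diversity is still sufficient
        improving = [r for r in range(N) if c @ decoded[r] < c @ x_ref]
        if not improving:
            return x_ref
        r_max = max(improving, key=lambda r: distances[r])
        return decoded[r_max]                # farthest improving phenotype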
It should be observed here that when the constraints of the problem
are strict, there exists a possibility that all of the individuals are decoded
in the neighborhood of the reference solution. To avoid such a possibility,
in addition to the reference solution updating procedure, after every P
generations the reference solution is replaced by the feasible solution
obtained by solving the maximization problem (3.11) through GADS
(without using the decoding algorithm).

3.3.3.1 Fitness
In GADS based on reference solution updating (GADSRSU), two
kinds of fitness functions are defined as

    f_1(s) = [ ( cx − Σ_{j∈J_c^+} c_j ) / ( Σ_{j∈J_c^-} c_j − Σ_{j∈J_c^+} c_j ) ]
             · exp( −(1/n) Σ_{j=1}^{n} |g_{s(j)} − x_{s(j)}| ),           (3.14)

    f_2(s) = ( cq − Σ_{j∈J_c^+} c_j ) / ( Σ_{j∈J_c^-} c_j − Σ_{j∈J_c^+} c_j ),   (3.15)

where J_c^+ = {j | c_j ≥ 0, 1 ≤ j ≤ n}, J_c^- = {j | c_j < 0, 1 ≤ j ≤ n}, and
the last term of f_1(s) is added so that the smaller the difference between
the genotype g and the phenotype x is, the larger the corresponding
fitness becomes. Observe that f_1(s) and f_2(s) indicate the goodness of
the phenotype x of an individual s and that of the phenotype q of an
individual s, respectively. Using these two kinds of fitness functions,
we attempt to reduce the reference solution dependence. For these two
kinds of fitness functions, the linear scaling technique is used.

3.3.3.2 Genetic operators


As a reproduction operator, elitist expected value selection, which is
the combination of expected value selection and elitist preserving selection, is adopted by using the two kinds of fitness functions defined by
(3.14) and (3.15). To be more explicit, a parameter λ (0.5 < λ < 1) is
introduced, and the expected value selection is modified as

    N_i = f_2(s_i) · N(1 − λ) / Σ_{n=1}^{N} f_2(s_n),          (3.16)

where Nλ individuals in the population are reproduced on the basis of
f_1 and N(1 − λ) individuals are reproduced on the basis of f_2. Also,
elitist preserving selection is adopted on the basis of f_1.
Quite similar to GADS and GADSRS, PMX for double strings is
adopted. A mutation operator of bit reverse type and an inversion operator are used.

3.3.4 Numerical experiments


For investigating the feasibility and efficiency of the GADSRS and
the GADSRSU, consider multidimensional 0-1 programming problems
with 30, 50, and 100 variables. Each element of c_j and a_ij, i = 1, ..., m,
j = 1, ..., n, in the numerical example corresponding to the problem
(3.8) was selected at random from the closed interval [−500, 499]. On
the basis of these values, each element b_i, i = 1, ..., m, was determined
by

    b_i = γ Σ_{j∈J_{a_i}^+} a_ij + (1 − γ) Σ_{j∈J_{a_i}^-} a_ij ,       (3.17)

where J_{a_i}^+ = {j | a_ij ≥ 0, 1 ≤ j ≤ n}, J_{a_i}^- = {j | a_ij < 0, 1 ≤ j ≤ n}, and
γ is a parameter to control the strictness of the constraints.
It is significant to note here that Σ_{j∈J_{a_i}^-} a_ij and Σ_{j∈J_{a_i}^+} a_ij denote
the minimum and maximum values of the left-hand side of the ith
constraint, respectively. Hence, for γ = 0 or 1, the value of b_i becomes
the minimum or maximum value of the left-hand side, respectively, and
the smaller the value of γ becomes, the stronger the strictness of the
constraints becomes.
Our numerical experiments were performed on a personal computer
(processor: Celeron 333MHz, memory: 128MB, OS: Windows NT 4.0)
using a Visual C++ compiler (version 6.0). Also, in order to compare
the obtained results with the corresponding exact optimal solutions or
incumbent values, the same problems were solved using LP_SOLVE [23].

The parameter values used in both GADSRS and GADSRSU were set
as follows: population size = 100, generation gap G = 0.9, probability
of crossover p_c = 0.9, probability of mutation p_m = 0.05, probability of
inversion p_i = 0.05, ε = 0.01, I_max = 1000, I_min = 100, c_mult = 1.6,
λ = 0.9, η = 0.2, and P = 50. Observe that these parameter values were
found through our experience, and these values were used in all of the trials of
both GADSRS and GADSRSU.
First, consider multidimensional 0-1 programming problems with 30
variables and 10 constraints. For γ = 0.50 and 0.55, 10 trials for each
example were performed through both GADSRS and GADSRSU. Also,
for comparison with exact optimal solutions, each example was solved
by LP_SOLVE [23]. Table 3.6 shows the experimental results for γ =
0.50 and 0.55, where best, average, worst, time, and # represent the
best value, average value, worst value, average processing time, and the
number of best solutions in 10 trials, respectively.

For problems with 30 variables, it can be seen from Table 3.6 that optimal solutions were obtained 10 times out of 10 trials for GADSRSU
and 6 or 9 times out of 10 trials for GADSRS. However, concerning the
processing time, although GADSRSU reduced it to approximately 50% of
that of GADSRS, LP_SOLVE was faster than both GADSRS and GADSRSU.
As a result, for problems with 30 variables, there is no evidence that
would reveal an advantage of GADSRSU and GADSRS over LP_SOLVE.

Table 3.6. Experimental results for 30 variables and 10 constraints (10 trials)

γ      Method     Best     Average    Worst    Time (sec)     #
0.50   GADSRS     -2543    -2529.3    -2448    1.98 × 10^1    6/10
       GADSRSU    -2543    -2543.0    -2543    1.11 × 10^1    10/10
       LP_SOLVE   -2543 (optimal)              5.50 × 10^-1   -
0.55   GADSRS     -2976    -2969.5    -2911    1.96 × 10^1    9/10
       GADSRSU    -2976    -2976.0    -2976    1.08 × 10^1    10/10
       LP_SOLVE   -2976 (optimal)              3.80 × 10^-1   -

Next, consider another multidimensional 0-1 programming problem
with 50 variables and 20 constraints. The results obtained through 10
trials of GADSRS and GADSRSU for each of the problems are
shown in Table 3.7 together with the experimental results by LP_SOLVE.
From Table 3.7, although the accuracy of the best solutions obtained
through GADSRSU tends to decrease compared with the case of 30
variables, on the average GADSRSU gives better results than GADSRS does with respect to the accuracy of the obtained solutions. Furthermore, the required processing time of GADSRSU is reduced to approximately 50% of that of GADSRS and approximately 20% of that
of LP_SOLVE. As a result, for problems with 50 variables, GADSRSU
seems to be more desirable than GADSRS and LP_SOLVE.

Table 3.7. Experimental results for 50 variables and 20 constraints (10 trials)

γ      Method     Best     Average    Worst    Time (sec)    #
0.50   GADSRS     -4876    -4376.4    -4115    5.03 × 10^1   0/10
       GADSRSU    -6014    -5966.6    -5774    2.37 × 10^1   0/10
       LP_SOLVE   -6071 (optimal)              1.58 × 10^2   -
0.55   GADSRS     -6494    -5779.0    -4939    5.02 × 10^1   0/10
       GADSRSU    -7081    -6969.2    -6803    2.31 × 10^1   3/10
       LP_SOLVE   -7081 (optimal)              8.90 × 10^1   -

Similar computational experiments were performed on a numerical example with 100 variables, and the corresponding results are shown in
Table 3.8.

From Table 3.8, it can be seen that although the accuracy of the best
solutions obtained through GADSRSU tends to decrease compared
with the case of 30 variables or 50 variables, on the average GADSRSU
gives better results than GADSRS does with respect to the accuracy
of the obtained solutions. Especially concerning the processing times,
GADSRSU is much faster than LP_SOLVE.

Table 3.8. Experimental results for 100 variables and 30 constraints (10 trials)

γ      Method     Best      Average     Worst    Time (sec)    #
0.50   GADSRS     -5439     -3185.6     -1335    2.00 × 10^2   -
       GADSRSU    -8959     -8713.4     -8433    8.24 × 10^1   -
       LP_SOLVE   -6173 (incumbent)              3.60 × 10^4   -
0.55   GADSRS     -8247     -5575.8     -4108    1.51 × 10^2   0/10
       GADSRSU    -10606    -10282.7    -9864    6.64 × 10^1   0/10
       LP_SOLVE   -11249 (optimal)               2.36 × 10^4   -

3.4 Conclusion
In this chapter, GADS for multidimensional 0-1 knapsack problems
were first revisited with some modifications, and their computational
efficiency and effectiveness were examined through a number of compu-
tational experiments. Then, to deal with more general 0-1 programming
problems involving positive and negative coefficients in both sides of the
constraints, GADS were extended to GADSRS By introducing a ref-
erence solution with backtracking and individual modification, a new
decoding algorithm for double strings was proposed so that individuals
could be decoded to the corresponding feasible solution for the general
0-1 programming problems. Moreover, to reduce computational time
required for backtracking and individual modification as well as to cir-
cumvent the dependence on the reference solution, through the introduc-
tion of a modified decoding algorithm using a reference solution without
backtracking and individual modification as well as the reference solu-
tion updating procedure, GADSRSU were proposed. From a number
of computational results for several numerical examples, the efficiency
and effectiveness of the proposed GADSRSU and GADSRSU were ex-
amined. As a result, proposed genetic algorithms, especially GADSRSU,
were shown to be efficient and effective with respect to both the solution
accuracy and the processing time. Extensions of the proposed method
to more general cases such as multiobjective multidimensional 0-1 pro-
gramming problems with positive and negative coefficients are now un-
der investigation and will be reported elsewhere. Further extensions to
interactive decision making will also be reported elsewhere.
Chapter 4

FUZZY MULTIOBJECTIVE 0-1 PROGRAMMING

In this chapter, as a natural extension of single-objective 0-1 pro-


gramming problems discussed in the previous chapter, multiobjective
0-1 programming problems are formulated by assuming that the deci-
sion maker may have a fuzzy goal for each of the objective functions.
Through the combination of the desirable features of both the interactive
fuzzy satisficing methods for continuous variables and the genetic algo-
rithms with double strings (GADS) discussed in the previous chapter,
an interactive fuzzy satisficing method to derive a satisficing solution for
the decision maker is presented. Furthermore, by considering the ex-
perts' imprecise or fuzzy understanding of the nature of the parameters
in the problem-formulation process, the multiobjective 0-1 programming
problems involving fuzzy parameters are formulated. Through the intro-
duction of extended Pareto optimality concepts, an interactive decision-
making method for deriving a satisficing solution of the decision maker
from among the extended Pareto optimal solution set is presented to-
gether with detailed numerical examples.

4.1 Introduction
In the mid-1990s, as a natural extension of single-objective 0-1 knap-
sack problems, Sakawa et al. [138, 144, 148] formulated multiobjective
multidimensional 0-1 knapsack problems by assuming that the decision
maker may have a fuzzy goal for each of the objective functions. Having
elicited the linear membership functions that well-represent the fuzzy
goals of the decision maker, the fuzzy decision of Bellman and Zadeh
[22] is adopted for combining them. In order to derive a compromise so-
lution for the decision maker by solving the formulated problem, GADS
that decode an individual represented by a double string to the corre-
sponding feasible solution for treating the constraints of the knapsack


type have been proposed [144, 148]. Also, through the combination
of the desirable features of both the interactive fuzzy satisficing meth-
ods for continuous variables [135] and the GADS [144], an interactive
fuzzy satisficing method to derive a satisficing solution for the decision
maker to multiobjective multidimensional 0-1 knapsack problems has
been proposed [160, 161]. These results are immediately extended to
multiobjective multidimensional 0-1 knapsack problems involving fuzzy
numbers reflecting the experts' ambiguous understanding of the nature
of the parameters in the problem-formulation process [162].
Unfortunately, however, because these GADS are based mainly on the
decoding algorithm for treating the constraints of the knapsack type,
they cannot be applied to more general 0-1 programming problems in-
volving positive and negative coefficients in both sides of the constraints.
In order to overcome such difficulties, Sakawa et al. [137, 143] revis-
ited GADS for multidimensional 0-1 knapsack problems [144, 148] with
some modifications and examined their computational efficiency and ef-
fectiveness through a lot of computational experiments. Then Sakawa
et al. [137, 143] extended the GADS for 0-1 knapsack problems to deal
with more general 0-1 programming problems involving both positive
and negative coefficients in the constraints. New decoding algorithms
for double strings using reference solutions both without and with the
reference solution updating procedure were especially proposed so that
individuals would be decoded to the corresponding feasible solution for
the general 0-1 programming problems. Using several numerical exam-
ples, the proposed genetic algorithms and the branch and bound method
were compared with respect to the solution accuracy and computation
time. Moreover, Sakawa et al. [149, 155] presented fuzzy and interac-
tive fuzzy programming for multiobjective 0-1 programming problems
by incorporating the fuzzy goals of the decision maker.

4.2 Fuzzy multiobjective 0-1 programming


4.2.1 Problem formulation and solution concept
In general, a multiobjective 0-1 programming problem with k conflicting objective functions $c_i x$, $i = 1, \ldots, k$, is formulated as

$$
\begin{array}{ll}
\mbox{minimize} & c_i x, \quad i = 1, \ldots, k \\
\mbox{subject to} & Ax \le b \\
& x \in \{0, 1\}^n,
\end{array}
\eqno(4.1)
$$

where $c_i = (c_{i1}, \ldots, c_{in})$, $i = 1, \ldots, k$, are n-dimensional row vectors; $x = (x_1, \ldots, x_n)^T$ is an n-dimensional column vector of 0-1 decision variables; $A = [a_{ij}]$, $i = 1, \ldots, m$, $j = 1, \ldots, n$, is an $m \times n$ coefficient matrix; and $b = (b_1, \ldots, b_m)^T$ is an m-dimensional column vector.
It should be noted here that in a multiobjective 0-1 programming
problem (4.1), when each element of A and b is assumed to be nonneg-
ative, then the problem (4.1) can be viewed as a multiobjective multidi-
mensional 0-1 knapsack problem.
For example, consider a project selection problem in a company in
which the manager determines the projects to actually be approved to
maximize the total profit and minimize the total amount of waste un-
der the resource constraints. Such a project selection problem can be
formulated as a multiobjective multidimensional 0-1 knapsack problem
expressed by (4.1).
In the following, for notational convenience, let X denote the feasible region satisfying all of the constraints of the problem (4.1), namely

$$X \triangleq \{x \in \{0,1\}^n \mid Ax \le b\}. \eqno(4.2)$$

If we directly apply the notion of optimality for an ordinary single-objective 0-1 programming problem to this multiobjective 0-1 programming problem (4.1), we arrive at the following notion of a complete optimal solution.

DEFINITION 4.1 (COMPLETE OPTIMAL SOLUTION)
$x^*$ is said to be a complete optimal solution if and only if there exists $x^* \in X$ such that $c_i x^* \le c_i x$, $i = 1, \ldots, k$, for all $x \in X$.
However, in general, such a complete optimal solution that simultane-
ously minimizes all of the multiple objective functions does not always
exist when the objective functions conflict with each other. Thus, instead
of a complete optimal solution, a new solution concept, called Pareto op-
timality, is introduced in multiobjective programming [135, 199, 222].

DEFINITION 4.2 (PARETO OPTIMAL SOLUTION)
$x^* \in X$ is said to be a Pareto optimal solution if and only if there does not exist another $x \in X$ such that $c_i x \le c_i x^*$ for all $i = 1, \ldots, k$, and $c_j x < c_j x^*$ for at least one $j \in \{1, \ldots, k\}$.
A Pareto optimal solution is sometimes called a noninferior solution
because it is not inferior to other feasible solutions.
The following weak Pareto optimality is defined as a slightly weaker solution concept than Pareto optimality.

DEFINITION 4.3 (WEAK PARETO OPTIMAL SOLUTION)
$x^* \in X$ is said to be a weak Pareto optimal solution if and only if there does not exist another $x \in X$ such that $c_i x < c_i x^*$, $i = 1, \ldots, k$.

For notational convenience, let $X^{CO}$, $X^P$, and $X^{WP}$ denote the complete optimal, Pareto optimal, and weak Pareto optimal solution sets, respectively. Then, from their definitions, it can easily be understood that the following relation holds:

$$X^{CO} \subseteq X^P \subseteq X^{WP}. \eqno(4.3)$$

4.2.2 Interactive fuzzy multiobjective programming


For such a multiobjective 0-1 programming problem (4.1), considering the vague or fuzzy nature of human judgments, it is quite natural to assume that the decision maker (DM) may have a fuzzy goal for each of the objective functions $c_i x$. In a minimization problem, the goal stated by the DM may be to achieve "substantially less than or equal to some value $a_i$" [135, 170, 176, 184, 225, 227]. Such a fuzzy goal of the DM can be quantified by eliciting the corresponding membership function through an interaction with the DM. Here, for simplicity, the linear membership function
$$
\mu_i(c_i x) = \left\{
\begin{array}{ll}
0, & c_i x > z_i^0 \\[4pt]
\dfrac{c_i x - z_i^0}{z_i^1 - z_i^0}, & z_i^1 \le c_i x \le z_i^0 \\[4pt]
1, & c_i x \le z_i^1
\end{array}
\right.
\eqno(4.4)
$$

is assumed for representing the fuzzy goal of the DM [135, 225, 228], where $z_i^0$ and $z_i^1$ denote the values of the objective function $c_i x$ whose degrees of membership are 0 and 1, respectively. These values are subjectively determined through an interaction with the DM. Figure 4.1 illustrates the possible shape of the linear membership function.
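For readers who prefer code to piecewise notation, a minimal Python sketch of the linear membership function (4.4) follows; the function and variable names are ours, not part of the book's formulation.

```python
def linear_membership(cx, z1, z0):
    """Linear membership function (4.4) for a minimization objective.

    cx : value of the objective function c_i x
    z1 : value whose membership degree is 1 (totally satisfactory)
    z0 : value whose membership degree is 0 (unacceptable); z1 < z0
    """
    if cx >= z0:
        return 0.0
    if cx <= z1:
        return 1.0
    # linear interpolation between z0 (degree 0) and z1 (degree 1)
    return (cx - z0) / (z1 - z0)
```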
As one of the possible ways to help the DM determine $z_i^0$ and $z_i^1$, it is convenient to calculate the individual minimum $z_i^{\min} = \min_{x \in X} c_i x$ and maximum $z_i^{\max} = \max_{x \in X} c_i x$ of each objective function under the given constraint set. Then, by taking account of the calculated individual minimum and maximum of each objective function, the DM is asked to assess $z_i^0$ and $z_i^1$ in the closed interval $[z_i^{\min}, z_i^{\max}]$, $i = 1, \ldots, k$.
Figure 4.1. Linear membership function for fuzzy goal

Zimmermann [225] suggested a way to determine the linear membership function $\mu_i(c_i x)$ by assuming the existence of an optimal solution $x^{io}$ of the individual objective function minimization problem under the constraints defined by

$$\min\{c_i x \mid x \in X\}, \quad i = 1, \ldots, k. \eqno(4.5)$$

To be more specific, using the individual minimum

$$z_i^{\min} = c_i x^{io} = \min\{c_i x \mid Ax \le b,\ x \in \{0,1\}^n\}, \quad i = 1, \ldots, k, \eqno(4.6)$$

together with

$$z_i^m = \max\left(c_i x^{1o}, \ldots, c_i x^{i-1,o}, c_i x^{i+1,o}, \ldots, c_i x^{ko}\right), \quad i = 1, \ldots, k, \eqno(4.7)$$

Zimmermann [225] determined the linear membership function as in (4.4) by choosing $z_i^1 = z_i^{\min}$ and $z_i^0 = z_i^m$.
Having elicited the linear membership functions $\mu_i(c_i x)$, $i = 1, \ldots, k$, from the DM for each of the objective functions $c_i x$, $i = 1, \ldots, k$, if we introduce a general aggregation function

$$\mu_D(x) = \mu_D(\mu_1(c_1 x), \mu_2(c_2 x), \ldots, \mu_k(c_k x)), \eqno(4.8)$$

a fuzzy multiobjective decision-making problem can be defined by

$$\mathop{\mbox{maximize}}_{x \in X}\ \mu_D(x). \eqno(4.9)$$

Observe that the value of the aggregation function $\mu_D(x)$ can be interpreted as representing an overall degree of satisfaction with the DM's multiple fuzzy goals [135, 170, 176, 184, 225]. As the aggregation function, if we adopt the well-known fuzzy decision of Bellman and Zadeh or the minimum operator

$$\mu_D(x) = \min_{i=1,\ldots,k} \mu_i(c_i x), \eqno(4.10)$$

the multiobjective 0-1 programming problem (4.1) can be interpreted as

$$
\begin{array}{ll}
\mbox{maximize} & \displaystyle\min_{i=1,\ldots,k} \mu_i(c_i x) \\
\mbox{subject to} & Ax \le b \\
& x \in \{0,1\}^n.
\end{array}
\eqno(4.11)
$$

In the conventional fuzzy approaches, it has been implicitly assumed that the fuzzy decision of Bellman and Zadeh [22] or the minimum operator is the proper representation of the DM's fuzzy preferences, and hence the multiobjective 0-1 programming problem (4.1) has been interpreted as (4.11).
However, it should be emphasized here that this approach is prefer-
able only when the DM feels that the minimum operator is appropriate.
In other words, in general decision situations, the DM does not always
use the minimum operator when combining the fuzzy goals and/or con-
straints. Probably the most crucial problem in (4.9) is the identification of an appropriate aggregation function that well represents the DM's fuzzy preferences. If $\mu_D(\cdot)$ can be explicitly identified, then (4.9) reduces to a standard mathematical programming problem. However, this rarely happens, and as an alternative, an interaction with the DM is necessary for finding the satisficing solution of (4.9).
To generate a candidate for the satisficing solution that is also Pareto
optimal, in interactive fuzzy multiobjective 0-1 programming, the DM is
asked to specify the value of the corresponding membership function for
each fuzzy goal, called reference membership levels. Observe that the
idea of the reference membership levels [135] can be viewed as an obvi-
ous extension of the idea of the reference point of Wierzbicki [215, 216].
For the DM's reference membership levels $\bar{\mu}_i$, $i = 1, \ldots, k$, the corresponding Pareto optimal solution, which is nearest to the requirement or better than it if the reference membership levels are attainable in the minimax sense, is obtained by solving the following minimax problem in a membership function space [135]:

$$
\begin{array}{ll}
\mbox{minimize} & \displaystyle\max_{i=1,\ldots,k} \{\bar{\mu}_i - \mu_i(c_i x)\} \\
\mbox{subject to} & Ax \le b \\
& x \in \{0,1\}^n.
\end{array}
\eqno(4.12)
$$

It must be noted here that, for generating Pareto optimal solutions by solving the minimax problem, if the uniqueness of the optimal solution is not guaranteed, it is necessary to perform the Pareto optimality test.
To circumvent the necessity of performing the Pareto optimality test in the minimax problems, it is reasonable to use the following augmented minimax problem instead of the minimax problem (4.12) [135]:

$$
\begin{array}{ll}
\mbox{minimize} & \displaystyle\max_{i=1,\ldots,k} \left\{ (\bar{\mu}_i - \mu_i(c_i x)) + \rho \sum_{i=1}^{k} (\bar{\mu}_i - \mu_i(c_i x)) \right\} \\
\mbox{subject to} & Ax \le b \\
& x \in \{0,1\}^n.
\end{array}
\eqno(4.13)
$$

The term augmented is adopted because the term $\rho \sum_{i=1}^{k} (\bar{\mu}_i - \mu_i(c_i x))$ is added to the standard minimax problem, where $\rho$ is a sufficiently small positive number.
It is significant to note here that this problem preserves the linearity of the constraints. For this problem, it is quite natural to define the fitness function by

$$f(s) = (1.0 + k\rho) - \max_{i=1,\ldots,k} \left\{ (\bar{\mu}_i - \mu_i(c_i x)) + \rho \sum_{i=1}^{k} (\bar{\mu}_i - \mu_i(c_i x)) \right\}, \eqno(4.14)$$

where $s$ denotes an individual represented by a double string and $x$ is the phenotype of $s$.
With this observation in mind, the augmented minimax problem (4.13) can be effectively solved through GADS or the genetic algorithms with double strings based on reference solution updating (GADSRSU) introduced in the preceding sections.
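The fitness computation (4.14) is straightforward to express in code. The following Python sketch assumes the membership values of the decoded phenotype have already been computed; the decoding of a double string itself is described in Chapter 3 and is omitted here.

```python
def fitness(mu_values, mu_ref, rho=0.005):
    """Fitness (4.14) for the augmented minimax problem (4.13).

    mu_values : membership values mu_i(c_i x) of the decoded phenotype
    mu_ref    : reference membership levels bar-mu_i given by the DM
    rho       : sufficiently small positive number (0.005 in the experiments)
    """
    k = len(mu_values)
    diffs = [r - m for r, m in zip(mu_ref, mu_values)]
    # the larger the worst (augmented) deviation, the smaller the fitness;
    # adding 1.0 + k*rho keeps the fitness nonnegative
    return (1.0 + k * rho) - (max(diffs) + rho * sum(diffs))
```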
Incorporating GADS [138, 144, 147, 148, 160, 161, 163], introduced in the previous chapter, into the interactive fuzzy satisficing methods [135, 170, 176, 184] proposed by Sakawa et al., it becomes possible to introduce the following interactive algorithm for deriving a satisficing solution for the DM for multiobjective 0-1 programming problems [144, 147, 160, 165]. The steps marked with an asterisk involve interaction with the DM.

Interactive fuzzy multiobjective 0-1 programming

Step 0: Calculate the individual minimum and maximum of each objective function under the given constraints.

Step 1*: Elicit a membership function from the DM for each of the objective functions by taking account of the calculated individual minimum and maximum of each objective function. Then ask the DM to select the initial reference membership levels $\bar{\mu}_i$, $i = 1, \ldots, k$ (if it is difficult to determine these values, set them to 1).

Step 2: For the reference membership levels specified by the DM, solve the augmented minimax problem.

Step 3*: If the DM is satisfied with the current values of the membership functions and/or objective functions given by the current best solution, stop. Otherwise, ask the DM to update the reference membership levels by taking into account the current values of the membership functions and/or objective functions, and return to step 2.

It must be observed here that, in this interactive algorithm, GADS or GADSRSU are used mainly in step 2. However, observe that in step 0, GADS or GADSRSU can also be used for calculating $z_i^{\min}$ and $z_i^{\max}$, $i = 1, \ldots, k$.
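A rough Python skeleton of this interactive loop is given below; `solve_minimax` and `ask_dm` are hypothetical stand-ins for the GADS/GADSRSU run of step 2 and the DM interaction of steps 1 and 3, not functions defined in the book.

```python
def interactive_fuzzy_01(solve_minimax, memberships, k, ask_dm):
    """Skeleton of the interactive algorithm above.

    solve_minimax : runs GADS/GADSRSU on problem (4.13) for given levels
    memberships   : elicited membership functions mu_i (step 1)
    ask_dm        : queries the DM; returns (satisfied, updated levels)
    """
    mu_ref = [1.0] * k                    # step 1: initial reference levels
    while True:
        x = solve_minimax(mu_ref)         # step 2: augmented minimax via GA
        values = [mu(x) for mu in memberships]
        satisfied, mu_ref = ask_dm(values, mu_ref)
        if satisfied:                     # step 3: stop, or update and repeat
            return x
```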

4.2.3 Multiobjective project selection problems


As a simple numerical example of multiobjective multidimensional
0-1 knapsack problems, consider the following two-objective project se-
lection problems.

Two-objective project selection problems: A firm has n projects awaiting approval. If the jth project is approved, it requires a budget of $a_{1j}$ million yen and manpower of $a_{2j}$ persons. The total available funds and manpower are limited to $b_1$ million yen and $b_2$ persons, respectively. If the jth project is approved, the expected profit, its success probability, and the amount of waste are $d_j$ million yen, $p_j$, and $g_j$ tons, respectively. The problem is to determine the projects to be actually approved so as to maximize the total expected profit and minimize the total amount of waste under the constraints on total funds and manpower.

By introducing the 0-1 variables $x_j$ with the interpretation that $x_j = 1$ if project j is approved and $x_j = 0$ if project j is not approved, the problem can be formulated as the following two-objective 0-1 programming problem:
$$
\begin{array}{ll}
\mbox{minimize} & c_1 x \triangleq -\displaystyle\sum_{j=1}^{n} d_j p_j x_j \\[4pt]
\mbox{minimize} & c_2 x \triangleq \displaystyle\sum_{j=1}^{n} g_j x_j \\[4pt]
\mbox{subject to} & \displaystyle\sum_{j=1}^{n} a_{1j} x_j \le b_1 \\[4pt]
& \displaystyle\sum_{j=1}^{n} a_{2j} x_j \le b_2 \\[4pt]
& x_j = 0 \mbox{ or } 1, \quad j = 1, \ldots, n.
\end{array}
\eqno(4.15)
$$
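As an illustration, the two objective functions of (4.15) can be evaluated for a candidate selection as in the following Python sketch; the function and argument names are ours.

```python
def project_objectives(x, d, p, g):
    """Objective values of (4.15): negated expected profit and total waste.

    x : 0-1 decision vector, x[j] = 1 if project j is approved
    d, p, g : expected profit, success probability, and waste per project
    """
    c1 = -sum(dj * pj * xj for dj, pj, xj in zip(d, p, x))
    c2 = sum(gj * xj for gj, xj in zip(g, x))
    return c1, c2
```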
As a numerical example, assume that there are 30 different projects, and for each project the required budget, manpower, expected profit, success probability, and resulting amount of waste are assumed to be as shown in Table 4.1. It is also assumed that the total funds and total available manpower are limited to 5617 million yen and 487 persons, respectively.

Table 4.1. Budget, manpower, profit, probability, and waste for each project

Project number  Budget  Manpower  Profit  Probability  Waste
1 336 18 790 0.41 386
2 100 3 831 0.96 318
3 10 3 161 0.42 78
4 328 48 341 0.65 715
5 444 44 697 0.43 99
6 67 18 259 0.52 425
7 173 30 220 0.72 171
8 302 32 479 0.94 851
9 624 22 953 0.33 837
10 28 16 957 0.19 812
11 704 14 929 0.36 996
12 555 18 799 0.38 68
13 616 49 673 0.27 285
14 517 24 950 0.66 855
15 616 16 857 0.93 154
16 143 24 514 0.25 421
17 367 40 438 0.50 960
18 406 8 624 0.92 123
19 180 37 906 0.58 655
20 258 12 505 0.89 359
21 258 36 349 0.23 332
22 27 45 381 0.54 799
23 750 47 789 0.25 612
24 161 6 515 0.39 926
25 39 33 730 0.52 267
26 453 20 564 0.94 180
27 81 47 594 0.83 170
28 617 46 848 0.99 900
29 762 30 858 0.95 260
30 146 16 868 0.50 108

The numerical experiments were performed on a personal computer (processor: Celeron 333MHz; memory: 128MB; OS: Windows 2000), and a Visual C++ compiler (version 6.0) was used. The parameter values of GADS are set as population size N = 50, generation gap G = 0.9, probability of crossover pc = 0.9, probability of mutation pm = 0.01, probability of inversion pi = 0.03, minimal search generation Imin = 150, maximal search generation Imax = 500, ε = 0.02, and Cmult = 1.8. The coefficient ρ of the augmented minimax problem is set as 0.005.
Concerning the fuzzy goals of the DM, following Zimmermann [225], for i = 1, 2, after calculating the individual minimum $z_i^{\min}$ together with $z_i^m$, each linear membership function $\mu_i(c_i x)$ is determined by choosing $z_i^1 = z_i^{\min}$ and $z_i^0 = z_i^m$. For this numerical example, $z_1^{\min} = -9126.77$, $z_2^{\min} = 0$, $z_1^m = 0$, and $z_2^m = 8725$ are obtained through GADS.
At each interaction with the DM, the corresponding augmented min-
imax problem is solved through 10 trials of GADS, as shown in Table
4.2.
The augmented minimax problem is solved for the initial reference membership levels, and the DM is supplied with the corresponding Pareto optimal solution and membership values, as shown in the first interaction of Table 4.2. On the basis of such information, because the DM is not satisfied with the current membership values, the DM updates the reference membership levels to $\bar{\mu}_1 = 0.8$ and $\bar{\mu}_2 = 1.0$ for improving the satisfaction level for $\mu_2(c_2 x)$ at the expense of $\mu_1(c_1 x)$. For the updated reference membership levels, the corresponding augmented minimax problem yields the Pareto optimal solution and membership values shown in the second interaction of Table 4.2. The same procedure continues in this manner until the DM is satisfied with the current values of the membership functions. In this example, as shown in Table 4.2, a satisficing solution for the DM is derived at the third interaction.

Table 4.2. Interactive processes (10 trials)

Interaction          $\mu_1(c_1 x)$  $\mu_2(c_2 x)$  $c_1 x$    $c_2 x$   #
1st (1.00, 1.00)     0.696392        0.694785        -6355.81   2663.00   10
2nd (0.80, 1.00)     0.594302        0.799771        -5424.06   1747.00   10
3rd (0.80, 0.95)     0.619311        0.769628        -5652.31   2010.00   9
                     0.619067        0.771232        -5650.08   1996.00   1

#: Number of solutions

4.2.4 Numerical experiments

To illustrate the proposed method, the following numerical examples were considered. The numerical experiments were performed on a personal computer (processor: Celeron 333MHz; memory: 128MB; OS: Windows 2000), and a Visual C++ compiler (version 6.0) was used.

4.2.4.1 Multiobjective 0-1 knapsack problems

As a numerical example, consider a three-objective 0-1 knapsack problem with 50 variables and 10 constraints.
The coefficients involved in this numerical example are randomly generated in the following way. Coefficients $c_{1j}$ are randomly chosen from the interval $[-1000, 0)$, coefficients $c_{2j}$ from the interval $(0, 1000]$, coefficients $c_{3j}$ from the interval $[-500, 500)$, and coefficients $a_{ij}$ from the interval $[0, 1000)$. On the basis of these $a_{ij}$ values, using a positive constant $\gamma$ that denotes the degree of strictness of the constraints, each element $b_i$, $i = 1, \ldots, m$, is determined by

$$b_i = \gamma \sum_{j=1}^{n} a_{ij}, \eqno(4.16)$$

where the value $\gamma = 0.25$ is adopted in this example.
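A Python sketch of this instance-generation scheme, under the stated ranges and (4.16), might look as follows; the function name and the use of integer sampling are our choices.

```python
import random

def generate_knapsack_instance(m=10, n=50, gamma=0.25):
    """Random three-objective knapsack instance with the ranges above.

    Returns objective coefficients c, constraint matrix a, and right-hand
    sides b with b_i = gamma * sum_j a_ij, as in (4.16).
    """
    c = [[random.randint(-1000, -1) for _ in range(n)],   # c_1j in [-1000, 0)
         [random.randint(1, 1000) for _ in range(n)],     # c_2j in (0, 1000]
         [random.randint(-500, 499) for _ in range(n)]]   # c_3j in [-500, 500)
    a = [[random.randint(0, 999) for _ in range(n)] for _ in range(m)]
    b = [gamma * sum(row) for row in a]
    return c, a, b
```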


As a numerical example generated in this way, we use the values of
coefficients as shown in Tables 4.3 and 4.4.

Table 4.3. Values of coefficients $c_{ij}$

Cl -997 -909 -881 -151 -218 -87 -528 -365 -291 -224
-893 -241 -581 -232 -975 -189 -897 -458 -388 -64
-904 -57 -244 -982 -645 -258 -220 -169 -271 -329
-421 -210 -585 -97 -828 -77 -523 -722 -715 -610
-99 -355 -510 -291 -339 -470 -782 -134 -323 -315
C2 358 710 369 503 558 330 712 560 19 521
889 735 761 363 27 689 69 613 155 961
444 789 13 302 888 892 859 23 841 745
681 128 97 377 580 327 42 628 208 258
473 135 262 632 135 82 807 52 905 348
C3 -148 -243 98 342 470 161 397 109 -1 -214
-215 121 -342 -323 -140 -371 -339 245 58 -59
-416 -370 476 -24 -132 -342 461 -160 -216 -161
-158 -27 -44 370 343 416 336 -27 -352 -405
33 261 472 -122 72 -152 278 158 -349 -97

The parameter values of GADS are set as population size N = 100, generation gap G = 0.9, probability of crossover pc = 0.9, probability of mutation pm = 0.01, probability of inversion pi = 0.03, minimal search generation Imin = 100, maximal search generation Imax = 1000, ε = 0.02, and Cmult = 1.8. The coefficient ρ of the augmented minimax problem is set as 0.005.

Table 4.4. Values of coefficients $a_{ij}$ and $b_i$

al 999 884 671 435 36 55 590 570 71 438


999 877 52 458 671 829 811 719 240 884
149 500 940 830 437 145 584 747 767 898
113 580 906 509 20 272 346 472 392 304
682 456 134 749 67 983 444 796 839 293
a2 524 633 716 299 105 803 270 873 370 357
195 25 453 219 793 832 558 864 103 580
410 276 203 840 669 782 888 371 347 96
520 319 460 982 196 550 163 770 182 766
628 224 329 385 721 757 573 852 793 699
a3 568 915 346 810 235 947 730 543 472 560
897 818 466 320 461 207 268 256 329 226
566 913 516 458 264 11 691 231 198 16
550 380 8 877 399 482 282 862 224 824
320 713 887 557 832 963 520 184 286 888
a4 249 406 587 584 314 210 183 884 283 371
706 362 739 112 550 44 214 253 513 996
552 707 410 936 572 6 568 637 653 113
444 395 134 246 925 425 981 459 43 331
582 532 922 775 787 756 969 270 781 600
a5 386 903 716 119 350 842 141 282 625 636
322 993 17 593 868 473 502 882 855 928
291 902 227 344 344 368 821 408 900 854
903 909 766 692 818 295 393 999 537 942
980 981 104 688 202 904 672 740 307 444
a6 420 679 486 290 464 859 45 423 333 647
806 433 126 509 997 728 522 114 861 879
693 493 464 831 897 56 196 636 447 276
277 53 522 258 86 784 408 153 142 223
769 190 819 821 255 191 311 439 552 579
a7 769 766 576 521 634 226 840 538 678 138
712 52 696 732 848 682 472 38 778 948
371 409 717 83 907 791 459 467 263 352
534 808 405 919 524 859 327 438 213 352
114 622 328 784 351 343 231 266 236 156
a8 706 508 6 299 627 679 503 943 63 312
654 273 829 393 178 378 680 725 832 39
27 324 755 867 108 416 576 102 841 679
217 731 756 872 302 269 948 110 522 678
854 407 862 911 726 937 446 63 417 542
a9 197 923 449 540 231 100 914 583 263 834
302 45 249 159 175 404 118 167 877 449
641 124 670 868 266 719 475 35 547 318
251 828 902 46 726 236 230 37 735 471
715 285 26 291 761 730 846 648 607 630
a10 46 35 436 542 877 323 686 200 96 778
41 724 867 418 503 779 135 543 755 634
781 212 613 508 503 365 573 289 496 193
484 31 779 89 517 201 928 741 110 97
167 734 40 926 337 432 676 902 598 299
b 6660.75 6330.75 6319.00 6267.75 7542.00 5860.50 6318.25 6473.00 5660.75 5759.75
The individual minimum $z_i^{\min}$ and maximum $z_i^{\max}$ of each objective function $c_i x$, $i = 1, 2, 3$, are calculated through GADS, as shown in Table 4.5.

Table 4.5. Individual minimum and maximum of each objective function

           Minimum ($z_i^{\min}$)   Maximum ($z_i^{\max}$)
$c_1 x$    -9760.0                  0.0
$c_2 x$    0.0                      9575.0
$c_3 x$    -4144.0                  4189.0

By taking account of the calculated individual minimum and maximum of each objective function, the DM subjectively determines the linear membership functions $\mu_i(c_i x)$, $i = 1, 2, 3$, as shown in Table 4.6.

Table 4.6. Membership functions for objective functions

                  $z_i^1$    $z_i^0$
$\mu_1(c_1 x)$    -9700.0    0.0
$\mu_2(c_2 x)$    0.0        9500.0
$\mu_3(c_3 x)$    -4000.0    4000.0

Having determined the linear membership functions in this way, at each interaction with the DM, the corresponding augmented minimax problem (4.13) is solved through 10 trials of GADS, as shown in Table 4.7.
For the initial reference membership levels (1.00, 1.00, 1.00), which can be viewed as the ideal values, the corresponding augmented minimax problem (4.13) is solved by GADS, and the DM is supplied with the corresponding membership function values, as shown in the first interaction of Table 4.7. On the basis of such information, because the DM is not satisfied with the current membership function values, the DM updates the reference membership levels to $\bar{\mu}_1 = 1.00$, $\bar{\mu}_2 = 0.80$, and $\bar{\mu}_3 = 1.00$ for improving the satisfaction levels for $\mu_1$ and $\mu_3$ at the expense of $\mu_2$. For the updated reference membership levels, the corresponding augmented minimax problem (4.13) is solved by GADS, and the corresponding membership function values are calculated as shown in the second interaction of Table 4.7. Because the DM is not satisfied with the current membership function values, the DM updates the reference membership levels to $\bar{\mu}_1 = 0.90$, $\bar{\mu}_2 = 0.80$, and $\bar{\mu}_3 = 1.00$ for improving the satisfaction levels for $\mu_2$ and $\mu_3$ at the expense of $\mu_1$. For the updated reference membership levels, the corresponding augmented minimax problem (4.13) is solved by GADS, and the corresponding membership function values are calculated as shown in the third interaction of Table 4.7. The same procedure continues in this manner until the DM is satisfied with the current values of the membership functions and the objective functions. In this example, a satisficing solution for the DM is derived at the fourth interaction.

Table 4.7. Interactive processes (10 trials)

Interaction            $\mu_1(c_1 x)$  $\mu_2(c_2 x)$  $\mu_3(c_3 x)$  $c_1 x$    $c_2 x$   $c_3 x$    #
1st (1.00,1.00,1.00)   0.789485        0.765579        0.793000        -7658.00   2227.00   -2344.00   10
2nd (1.00,0.80,1.00)   0.832474        0.635789        0.834625        -8075.00   3460.00   -2677.00   6
                       0.833505        0.631579        0.833500        -8085.00   3500.00   -2668.00   4
3rd (0.90,0.80,1.00)   0.771237        0.662421        0.859500        -7481.00   3207.00   -2876.00   8
                       0.767423        0.658211        0.862000        -7444.00   3247.00   -2896.00   2
4th (0.90,0.80,0.95)   0.806495        0.693263        0.850250        -7823.00   2914.00   -2802.00   10

#: Number of solutions

4.2.4.2 Multiobjective 0-1 programming problems

Next, as a numerical example of general multiobjective 0-1 programming problems involving positive and negative coefficients, consider a three-objective general 0-1 programming problem with 50 variables and 10 constraints.
The coefficients involved in this numerical example are randomly generated in the following way. Coefficients $c_{1j}$ are randomly chosen from the interval $[-1000, 0)$, coefficients $c_{2j}$ from the interval $(0, 1000]$, coefficients $c_{3j}$ from the interval $[-500, 500)$, and coefficients $a_{ij}$ from the interval $[-500, 500)$. On the basis of these $a_{ij}$ values, using a positive constant $\gamma$ that denotes the degree of strictness of the constraints, each element $b_i$, $i = 1, \ldots, m$, is determined by

$$b_i = \gamma \sum_{j \in J_i^+} a_{ij} + (1 - \gamma) \sum_{j \in J_i^-} a_{ij}, \eqno(4.17)$$

where $J_i^+ = \{j \mid a_{ij} \ge 0,\ 1 \le j \le n\}$, $J_i^- = \{j \mid a_{ij} < 0,\ 1 \le j \le n\}$, and the value $\gamma = 0.50$ is adopted in this example.
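Under our reading of (4.17), the right-hand side interpolates between the smallest and largest values attainable by the left-hand side of the constraint; a hypothetical Python sketch of this construction (function name and interpretation ours) follows.

```python
def rhs_general(a_row, gamma=0.5):
    """Right-hand side b_i per our reading of (4.17), for a constraint row
    that may contain both positive and negative coefficients."""
    pos = sum(v for v in a_row if v >= 0)   # sum over indices in J_i^+
    neg = sum(v for v in a_row if v < 0)    # sum over indices in J_i^-
    # gamma = 1 allows every variable set to 1; gamma = 0 is strictest
    return gamma * pos + (1.0 - gamma) * neg
```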

Table 4.8. Values of coefficients Cij

c1 -3 -281 -678 -374 -116 -494 -731 -874 -518 -76


-803 -986 -567 -357 -92 -337 -547 -186 -16 -625
-65 -889 -84 -886 -836 -837 -489 -798 -534 -208
-10 -134 -932 -359 -322 -968 -772 -601 -919 -10
-753 -94 -166 -58 -639 -478 -585 -504 -613 -444
C2 89 632 698 296 945 816 582 797 57 601
580 774 992 120 636 783 383 760 481 740
684 96 38 594 9 291 781 397 106 411
151 36 26 170 892 103 607 928 42 365
169 98 792 866 917 23 556 81 858 87
C3 196 -278 -97 59 257 -140 447 326 101 81
258 187 -365 418 -305 -38 43 -447 60 375
217 142 101 -467 -382 -249 458 -75 -490 115
-6 -114 -90 -186 -304 50 87 -427 -303 15
422 176 -276 -433 -221 171 -348 103 330 270

As a numerical example generated in this way, we use the values of coefficients as shown in Tables 4.8 and 4.9.
The parameter values of the genetic algorithms with double strings based on reference solution updating (GADSRSU) are set as population size N = 100, generation gap G = 0.9, probability of crossover pc = 0.9, probability of mutation pm = 0.01, probability of inversion pi = 0.03, minimal search generation Imin = 100, maximal search generation Imax = 1000, ε = 0.02, Cmult = 1.8, λ = 0.9, η = 0.1, θ = 5.0, and P = 200. The coefficient ρ of the augmented minimax problem is set as 0.005.
The individual minimum $z_i^{\min}$ and maximum $z_i^{\max}$ of each objective function $c_i x$, $i = 1, 2, 3$, are calculated by GADSRSU, as shown in Table 4.10.

By considering the calculated individual minimum and maximum, the DM subjectively determines the linear membership functions $\mu_i(c_i x)$, $i = 1, 2, 3$, as shown in Table 4.11.

Having determined the linear membership functions in this way, at each interaction with the DM, the corresponding augmented minimax problem (4.13) is solved through 10 trials of GADSRSU, as shown in Table 4.12.
The augmented minimax problem (4.13) is solved by GADSRSU for the initial reference membership levels (1.00, 1.00, 1.00), which can be viewed as the ideal values, and the DM is supplied with the corresponding membership function values, as shown in the first interaction of Table 4.12. On the basis of such information, because the DM is not
Table 4.9. Values of coefficients $a_{ij}$ and $b_i$

a1 77 18 117 228 331 -193 147 154 -223 -451


-492 225 182 93 11 336 -233 -346 89 82
-270 215 383 -272 -60 -127 -60 -140 227 -64
-58 -420 -490 -285 -193 341 116 -493 79 0
251 -294 455 -12 -429 44 -82 -383 -107 -39
a2 439 -41 124 150 207 272 269 -325 343 257
-99 -271 -52 -311 -281 -333 -418 -402 -173 -494
82 -158 -490 310 487 -486 -403 -382 323 -122
139 119 -442 212 109 450 60 -255 363 -385
307 -478 -299 -55 466 8 98 347 172 -7
a3 303 -269 134 193 -3 -248 197 12 -136 32
-205 475 -23 394 -145 -287 0 -443 168 323
248 -92 322 262 -54 195 -440 303 183 465
48 -222 320 362 -413 -65 240 -448 434 432
-245 274 269 -47 117 23 273 -65 340 334
a4 492 424 207 202 264 -253 -29 -485 -187 238
-471 -185 316 391 -48 -281 105 258 -354 111
189 276 -297 279 418 -294 -134 -340 270 -307
135 259 -33 -12 467 -28 154 220 70 -275
430 118 -146 -464 107 -2 -377 62 357 -105
a5 109 -204 120 377 137 203 141 255 404 -283
360 -493 385 -60 -443 103 102 -20 244 -59
-223 221 -398 -5 -369 -307 -55 -260 -367 349
214 -456 41 40 -404 86 -421 249 -487 -234
-40 180 183 -228 -318 222 -410 -453 214 151
a6 -40 126 -468 -208 58 -396 -247 -423 346 346
31 -171 315 461 -23 -370 -209 104 435 178
404 290 456 14 218 -377 -1 -407 481 439
424 218 488 251 -18 90 -193 258 377 -199
-192 165 126 464 -169 -373 479 -32 252 -105
a7 201 435 -410 364 17 166 415 -276 -97 -67
92 125 92 238 252 -115 105 -7 372 468
456 387 -445 332 432 -390 -401 -299 -281 188
-313 434 -46 247 41 95 -489 428 325 -221
-366 -272 -236 -347 202 457 329 18 420 -185
a8 180 45 -230 286 179 201 -374 187 -395 -21
-451 -49 -222 152 -357 -283 227 -454 -179 -254
72 340 -359 284 -415 319 312 -301 343 308
-328 24 238 -225 354 -191 -157 121 -173 418
-32 -490 268 -223 -479 495 290 27 -496 129
a9 -451 -172 358 -290 100 468 466 191 -372 308
-239 -360 -485 -143 -399 -141 -11 -164 -310 -413
12 21 362 -390 491 -80 -382 -91 128 -463
486 -180 -265 -159 328 465 -170 -439 484 -491
160 -156 -103 -109 -126 -379 252 -498 -74 -496
a10 447 367 -411 360 112 376 131 -310 -425 196
-265 463 175 -84 -277 -70 -460 -137 -393 353
-254 -383 -405 -13 383 -271 -312 -430 425 -455
145 -441 166 398 245 23 460 206 243 15
-57 163 210 -245 -2 465 432 -308 359 -318
b -2067.2 -1852.0 760.0 -336.6 -2162.2 545.0 95.4 -1963.2 -3368.6 -1108.4
Table 4.10. Individual minimum and maximum of each objective function

           Minimum ($z_i^{\min}$)   Maximum ($z_i^{\max}$)
$c_1 x$    -18301.0                 -5343.0
$c_2 x$    5731.0                   18960.0
$c_3 x$    -3453.0                  2130.0

Table 4.11. Membership functions for objective functions

                  $z_i^1$     $z_i^0$
$\mu_1(c_1 x)$    -19000.0    -4000.0
$\mu_2(c_2 x)$    6000.0      19000.0
$\mu_3(c_3 x)$    -4000.0     3000.0

Table 4.12. Interactive processes (10 trials)

Interaction            $\mu_1(c_1 x)$  $\mu_2(c_2 x)$  $\mu_3(c_3 x)$  $c_1 x$    $c_2 x$   $c_3 x$   #
1st (1.00,1.00,1.00)   0.641067        0.643385        0.685143        -13616.0   10636.0   -1796.0   2
                       0.640600        0.650462        0.642286        -13609.0   10544.0   -1496.0   1
                       0.645733        0.656538        0.639714        -13686.0   10465.0   -1478.0   1
                       0.641400        0.644538        0.638143        -13621.0   10621.0   -1467.0   1
                       0.645600        0.638385        0.635714        -13684.0   10701.0   -1450.0   1
                       0.640467        0.649000        0.635143        -13607.0   10563.0   -1446.0   3
                       0.630867        0.620231        0.673000        -13463.0   10937.0   -1711.0   1
2nd (1.00,1.00,0.80)   0.702867        0.700462        0.487857        -14543.0   9894.0    -415.0    1
                       0.687533        0.689385        0.493857        -14313.0   10038.0   -457.0    2
                       0.692067        0.685154        0.510714        -14381.0   10093.0   -575.0    3
                       0.684000        0.674000        0.559000        -14260.0   10238.0   -913.0    2
                       0.676400        0.673000        0.479857        -14146.0   10251.0   -359.0    2
3rd (1.00,0.85,0.80)   0.774733        0.602231        0.549429        -15621.0   11171.0   -846.0    2
                       0.749000        0.586385        0.549000        -15235.0   11377.0   -843.0    2
                       0.753400        0.592231        0.532000        -15301.0   11301.0   -724.0    3
                       0.754200        0.578692        0.586286        -15313.0   11477.0   -1104.0   2
                       0.728733        0.580538        0.539000        -14931.0   11453.0   -773.0    1
4th (0.95,0.85,0.80)   0.727600        0.608154        0.613857        -14914.0   11094.0   -1297.0   3
                       0.724200        0.606923        0.606429        -14863.0   11110.0   -1245.0   4
                       0.704333        0.622769        0.564429        -14565.0   10904.0   -951.0    3

#: Number of solutions

satisfied with the current membership function values, the DM updates the reference membership levels to $\bar{\mu}_1 = 1.00$, $\bar{\mu}_2 = 1.00$, and $\bar{\mu}_3 = 0.80$ for improving the satisfaction levels for $\mu_1$ and $\mu_2$ at the expense of $\mu_3$. For the updated reference membership levels, the corresponding augmented minimax problem (4.13) is solved by GADSRSU, and the corresponding membership function values are calculated as shown in the second interaction of Table 4.12. Because the DM is not satisfied with the current membership function values, the DM updates the reference membership levels to $\bar{\mu}_1 = 1.00$, $\bar{\mu}_2 = 0.85$, and $\bar{\mu}_3 = 0.80$ for improving the satisfaction levels for $\mu_1$ and $\mu_3$ at the expense of $\mu_2$. For the updated reference membership levels, the corresponding augmented minimax problem (4.13) is solved by GADSRSU, and the corresponding membership function values are calculated as shown in the third interaction of Table 4.12. The same procedure continues in this manner until the DM is satisfied with the current values of the membership functions and the objective functions. In this example, a satisficing solution for the DM is derived at the fourth interaction.

4.3 Fuzzy multiobjective 0-1 programming with fuzzy numbers
4.3.1 Problem formulation and solution concept
As discussed in the previous section, the problem of optimizing multiple conflicting linear objective functions simultaneously under given linear constraints and 0-1 conditions on the decision variables is called the multiobjective 0-1 programming (MOO-1P) problem and is formulated as (4.1). In addition, fundamental to the MOO-1P is the concept of Pareto optimality, also known as a noninferior solution. Qualitatively, a Pareto optimal solution of the MOO-1P is one in which any improvement of one objective function can be achieved only at the expense of another.
In practice, however, it would certainly be more appropriate to consider that the possible values of the parameters in the description of the objective functions and the constraints usually involve the ambiguity of the experts' understanding of the real system. For this reason, consider a multiobjective 0-1 programming problem with fuzzy numbers (MOO-1-FN) formulated as

$$
\begin{array}{ll}
\mbox{minimize} & \tilde{c}_i x, \quad i = 1, \ldots, k \\
\mbox{subject to} & \tilde{A} x \le \tilde{b} \\
& x \in \{0,1\}^n,
\end{array}
\eqno(4.18)
$$
where $x = (x_1, \ldots, x_n)^T$ is an n-dimensional column vector of 0-1 decision variables, $\tilde{A}$ is an $m \times n$ matrix whose elements are fuzzy numbers, and $\tilde{c}_i$, $i = 1, \ldots, k$, and $\tilde{b}$ are n- and m-dimensional vectors, respectively, whose elements are fuzzy numbers. These fuzzy numbers, reflecting the experts' vague understanding of the nature of the parameters in the problem-formulation process, are assumed to be characterized by the fuzzy numbers introduced by Dubois and Prade [52, 53].
It is significant to note that, in a multiobjective 0-1 programming problem with fuzzy numbers (4.18), when all of the fuzzy numbers in $\tilde{A}$ and $\tilde{b}$ are assumed to be nonnegative, the problem (4.18) can be viewed as a multiobjective multidimensional 0-1 knapsack problem with fuzzy numbers.

Observing that this problem involves fuzzy numbers in both the objective functions and the constraints, it is evident that the notion of Pareto optimality cannot be applied directly. Thus, it seems essential to extend the notion of usual Pareto optimality in some sense. For that purpose, we first introduce the α-level set of all of the vectors and matrices whose elements are fuzzy numbers.

DEFINITION 4.4 (α-LEVEL SET)
The α-level set of the fuzzy parameters $\tilde{A}$, $\tilde{b}$, and $\tilde{c}$ is defined as the ordinary set $(\tilde{A}, \tilde{b}, \tilde{c})_\alpha$ for which the degree of the corresponding membership functions exceeds the level α.

Now suppose that the DM decides that the degree of all of the membership functions of the fuzzy numbers involved in the MOO-1-FN should be greater than or equal to some value α. Then, for such a degree α, the MOO-1-FN can be interpreted as a nonfuzzy multiobjective 0-1 programming (MOO-1-FN(A, b, c)) problem that depends on the coefficient vector $(A, b, c) \in (\tilde{A}, \tilde{b}, \tilde{c})_\alpha$. Observe that there exists an infinite number of such MOO-1-FN(A, b, c) depending on the coefficient vector $(A, b, c) \in (\tilde{A}, \tilde{b}, \tilde{c})_\alpha$, and the values of $(A, b, c)$ are arbitrary for any $(A, b, c) \in (\tilde{A}, \tilde{b}, \tilde{c})_\alpha$ in the sense that the degree of all of the membership functions for the fuzzy numbers in the MOO-1-FN exceeds the level α. However, if possible, it would be desirable for the DM to choose $(A, b, c) \in (\tilde{A}, \tilde{b}, \tilde{c})_\alpha$ in the MOO-1-FN(A, b, c) so as to minimize the objective functions under the constraints. From such a point of view, for a certain degree α, it seems quite natural to reformulate the MOO-1-FN as the following nonfuzzy α-multiobjective 0-1 programming (α-MOO-1) problem:

$$
\begin{array}{ll}
\mbox{minimize} & c_i x, \quad i = 1, \ldots, k \\
\mbox{subject to} & Ax \le b \\
& x \in \{0,1\}^n \\
& (A, b, c) \in (\tilde{A}, \tilde{b}, \tilde{c})_\alpha.
\end{array}
\eqno(4.19)
$$

In the following, for notational convenience, we denote the feasible region satisfying the constraints of the problem (4.19) with respect to x by X(A, b); in other words,

$$X(A, b) \triangleq \{x \in \{0,1\}^n \mid Ax \le b,\ (A, b, c) \in (\tilde{A}, \tilde{b}, \tilde{c})_\alpha\}. \eqno(4.20)$$

It should be emphasized here that in problem (4.19) the parameters (A, b, c) are treated as decision variables rather than as constants.

On the basis of the α-level sets of the fuzzy numbers, we can introduce the concept of an α-Pareto optimal solution to the problem (4.19) as a natural extension of the usual Pareto optimality concept.

DEFINITION 4.5 (α-PARETO OPTIMAL SOLUTION)
$x^* \in X(A^*, b^*)$ is said to be an α-Pareto optimal solution to the problem (4.19) if and only if there does not exist another $x \in X(A, b)$, $(A, b, c) \in (\tilde{A}, \tilde{b}, \tilde{c})_\alpha$, such that $c_i x \le c_i x^*$, $i = 1, \ldots, k$, with strict inequality holding for at least one i, where the corresponding values of the parameters $(A^*, b^*, c^*)$ are called α-level optimal parameters.

Observe that α-Pareto optimal solutions and α-level optimal parameters can be obtained through a direct application of the usual scalarizing methods for generating Pareto optimal solutions by regarding the decision variables in the problem (4.19) as (x, A, b, c).
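For the triangular fuzzy numbers used in the numerical examples of Section 4.3.3, the α-level set of each fuzzy coefficient is a closed interval that can be computed directly; the following Python sketch (with our function name) illustrates this.

```python
def alpha_cut(tri, alpha):
    """alpha-level set of a triangular fuzzy number (l, m, r).

    Returns the closed interval [left, right] whose membership degree
    is at least alpha (0 <= alpha <= 1)."""
    l, m, r = tri
    left = l + alpha * (m - l)      # left extreme  y_alpha^L
    right = r - alpha * (r - m)     # right extreme y_alpha^R
    return left, right
```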

4.3.2 Interactive fuzzy multiobjective 0-1 programming with fuzzy numbers

For such an α-MOO-1 problem (4.19), considering the vague nature of human judgments, it is quite natural to assume that the DM may have a fuzzy goal for each of the objective functions $z_i(x) = c_i x$. In a minimization problem, the goal stated by the DM may be to achieve "substantially less than or equal to some value $p_i$" [135, 225, 228]. These fuzzy goals can be quantified by eliciting the corresponding membership functions through an interaction with the DM.
To elicit a linear membership function $\mu_i(c_i x)$ from the DM for each of the fuzzy goals, the DM is asked to assess a minimum value of unacceptable levels for $c_i x$, denoted by $z_i^0$, and a maximum value of totally desirable levels for $c_i x$, denoted by $z_i^1$. Then the linear membership functions $\mu_i(c_i x)$, $i = 1, \ldots, k$, for the fuzzy goals of the DM are defined by

$$
\mu_i(c_i x) = \left\{
\begin{array}{ll}
0, & c_i x > z_i^0 \\[4pt]
\dfrac{c_i x - z_i^0}{z_i^1 - z_i^0}, & z_i^1 \le c_i x \le z_i^0 \\[4pt]
1, & c_i x \le z_i^1.
\end{array}
\right.
\eqno(4.21)
$$

These membership functions are depicted in Figure 4.2.


Figure 4.2. Linear membership function for fuzzy goal

As one possible way to help the DM determine $z_i^0$ and $z_i^1$, it is convenient to calculate the minimal value $z_i^{\min}$ and the maximal value $z_i^{\max}$ of each objective function under the given constraints. Then, by taking account of the calculated individual minimum and maximum of each objective function, the DM is asked to assess $z_i^0$ and $z_i^1$ in the interval $[z_i^{\min}, z_i^{\max}]$, $i = 1, \ldots, k$.
Zimmermann [225] suggested a way to determine the linear membership function $\mu_i(c_i x)$ by assuming the existence of an optimal solution $x^{io}$ of the individual objective function minimization problem under the constraints defined by

$$\min\{c_i x \mid Ax \le b,\ x \in \{0,1\}^n\}, \quad i = 1, \ldots, k. \eqno(4.22)$$

To be more specific, using the individual minimum

$$z_i^{\min} = c_i x^{io} = \min\{c_i x \mid Ax \le b,\ x \in \{0,1\}^n\}, \quad i = 1, \ldots, k, \eqno(4.23)$$

together with

$$z_i^m = \max\left(c_i x^{1o}, \ldots, c_i x^{i-1,o}, c_i x^{i+1,o}, \ldots, c_i x^{ko}\right), \quad i = 1, \ldots, k, \eqno(4.24)$$

Zimmermann [225] determined the linear membership function as in (4.21) by choosing $z_i^1 = z_i^{\min}$ and $z_i^0 = z_i^m$.
Having elicited the linear membership functions $\mu_i(c_i x)$ from the DM for each of the objective functions $c_i x$, $i = 1, \ldots, k$, if we introduce a general aggregation function

$$\mu_D(\mu_1(c_1 x), \mu_2(c_2 x), \ldots, \mu_k(c_k x), \alpha), \eqno(4.25)$$

the problem to be solved is transformed into the fuzzy multiobjective decision-making problem defined by

$$
\begin{array}{ll}
\mbox{maximize} & \mu_D(\mu_1(c_1 x), \mu_2(c_2 x), \ldots, \mu_k(c_k x), \alpha) \\
\mbox{subject to} & (x, A, b, c) \in P(\alpha), \quad \alpha \in [0, 1],
\end{array}
\eqno(4.26)
$$
where $P(\alpha)$ is the set of α-Pareto optimal solutions and the corresponding α-level optimal parameters of the problem (4.19). Observe that the value of the aggregation function $\mu_D(\cdot)$ can be interpreted as representing an overall degree of satisfaction with the DM's k fuzzy goals [135]. If $\mu_D(\cdot)$ can be explicitly identified, then (4.26) reduces to a standard mathematical programming problem. However, this rarely happens, and as an alternative, an interaction with the DM is necessary for finding a satisficing solution for the DM to (4.26).
To generate a candidate for the satisficing solution that is also α-Pareto optimal, in our interactive decision-making method the DM is asked to specify the degree α of the α-level set and the reference membership values. Observe that the idea of the reference membership values can be viewed as an obvious extension of the idea of the reference point of Wierzbicki [215]. To be more explicit, for the DM's degree α and reference membership values $\bar{\mu}_i$, $i = 1, \ldots, k$, the corresponding α-Pareto optimal solution, which is, in the minimax sense, nearest to the requirement or better than that if the reference membership values are attainable, is obtained by solving the following minimax problem:

$$
\begin{array}{ll}
\mbox{minimize} & \displaystyle\max_{i=1,\ldots,k} \{\bar{\mu}_i - \mu_i(c_i x)\} \\
\mbox{subject to} & Ax \le b \\
& x_j \in \{0, 1\}, \quad j = 1, \ldots, n \\
& (A, b, c) \in (\tilde{A}, \tilde{b}, \tilde{c})_\alpha.
\end{array}
\eqno(4.27)
$$

It must be noted here that, for generating α-Pareto optimal solutions by solving the minimax problem, if the uniqueness of the optimal solution $x^*$ is not guaranteed, it is necessary to perform the α-Pareto optimality test of $x^*$. To circumvent the necessity of performing the α-Pareto optimality test in the minimax problems, it is reasonable to use the following augmented minimax problem instead of the minimax problem (4.27):

$$
\begin{array}{ll}
\mbox{minimize} & \displaystyle\max_{i=1,\ldots,k} \left\{ (\bar{\mu}_i - \mu_i(c_i x)) + \rho \sum_{i=1}^{k} (\bar{\mu}_i - \mu_i(c_i x)) \right\} \\
\mbox{subject to} & Ax \le b \\
& x_j \in \{0, 1\}, \quad j = 1, \ldots, n \\
& (A, b, c) \in (\tilde{A}, \tilde{b}, \tilde{c})_\alpha,
\end{array}
\eqno(4.28)
$$

where $\rho$ is a sufficiently small positive number.
In this formulation, however, the constraints are nonlinear because the parameters A, b, and c are treated as decision variables. To deal with such nonlinearities, we introduce the set-valued functions $S_i(\cdot)$, $i = 1, \ldots, k$, and $T(\cdot,\cdot)$.

Then it can easily be verified that the following relations hold for $S_i(\cdot)$ and $T(\cdot,\cdot)$ when $x \ge 0$.

PROPOSITION 4.1
(1) $c_i^1 \le c_i^2 \Rightarrow S_i(c_i^1) \supseteq S_i(c_i^2)$
(2) $b^1 \le b^2 \Rightarrow T(A, b^1) \subseteq T(A, b^2)$
(3) $A^1 \le A^2 \Rightarrow T(A^1, b) \supseteq T(A^2, b)$

Now, from the properties of the α-level sets of the vectors and/or matrices of fuzzy numbers, it should be noted here that the feasible regions for A, b, and $c_i$ can be denoted by the closed intervals $[A_\alpha^L, A_\alpha^R]$, $[b_\alpha^L, b_\alpha^R]$, and $[c_{i\alpha}^L, c_{i\alpha}^R]$, respectively, where $y_\alpha^L$ or $y_\alpha^R$ represents the left or right extreme point of the α-level set $Y_\alpha$. Therefore, through the use of Proposition 4.1, we can obtain an optimal solution of the problem (4.28) by solving the following 0-1 programming problem:

$$
\begin{array}{ll}
\mbox{minimize} & \displaystyle\max_{i=1,\ldots,k} \left\{ (\bar{\mu}_i - \mu_i(c_{i\alpha}^L x)) + \rho \sum_{i=1}^{k} (\bar{\mu}_i - \mu_i(c_{i\alpha}^L x)) \right\} \\
\mbox{subject to} & A_\alpha^L x \le b_\alpha^R \\
& x_j \in \{0, 1\}, \quad j = 1, \ldots, n.
\end{array}
\eqno(4.30)
$$
Observe that this problem preserves the linearity of the constraints, and hence it is quite natural to define the fitness function by

$$f(s) = (1.0 + k\rho) - \max_{i=1,\ldots,k} \left\{ (\bar{\mu}_i - \mu_i(c_{i\alpha}^L x)) + \rho \sum_{i=1}^{k} (\bar{\mu}_i - \mu_i(c_{i\alpha}^L x)) \right\}, \eqno(4.31)$$

where $s$ denotes an individual represented by a double string and $x$ is the phenotype of $s$.

With this observation in mind, the augmented minimax problem (4.30) can be effectively solved through GADS or GADSRSU, introduced in the preceding sections.
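Combining the α-cut sketch from Section 4.3.1 with the linear membership sketch from Section 4.2.2, the membership values needed in (4.30) and (4.31) can be computed as follows; this composition and its function names are our illustration, not the book's code.

```python
def mu_left(c_row, x, z1, z0, alpha):
    """Membership value mu_i(c_{i,alpha}^L x) appearing in (4.30)/(4.31).

    c_row : objective coefficients; triangular fuzzy numbers as (l, m, r)
            tuples, crisp coefficients as plain numbers
    x     : 0-1 decision vector (phenotype of a double string)
    """
    # left extremes of the alpha-level sets give the most optimistic
    # (smallest) objective value attainable at level alpha
    cx = sum((alpha_cut(c, alpha)[0] if isinstance(c, tuple) else c) * xj
             for c, xj in zip(c_row, x))
    return linear_membership(cx, z1, z0)
```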
We can now construct the interactive algorithm in order to derive a satisficing solution for the DM from the α-Pareto optimal solution set. The steps marked with an asterisk involve interaction with the DM.

Interactive fuzzy multiobjective 0-1 programming with fuzzy numbers

Step 0: Calculate the individual minimum and maximum of each objective function under the given constraints for α = 0 and α = 1, respectively.

Step 1*: Elicit a membership function $\mu_i(c_i x)$ from the DM for each of the objective functions by considering the calculated individual minimum and maximum of each objective function.

Step 2*: Ask the DM to select the initial value of α (0 ≤ α ≤ 1) and the initial reference membership values $\bar{\mu}_i$, $i = 1, \ldots, k$.

Step 3: For the degree α and the reference membership values $\bar{\mu}_i$, $i = 1, \ldots, k$, specified by the DM, solve the augmented minimax problem.

Step 4*: If the DM is satisfied with the current values of the membership functions and/or objective functions given by the current best solution, stop. Otherwise, ask the DM to update the reference membership levels and/or α by taking into account the current values of the membership functions and/or objective functions, and return to step 3.

It must be observed here that, in this interactive algorithm, GADS or GADSRSU are used mainly in step 3. However, observe that, in step 0, GADS or GADSRSU can be used for calculating $z_i^{\min}$ and $z_i^{\max}$, $i = 1, \ldots, k$.

4.3.3 Numerical experiments

To illustrate the proposed method, the following numerical examples are considered. The numerical experiments were performed on a personal computer (processor: Celeron 333MHz; memory: 128MB; OS: Windows 2000), and a Visual C++ compiler (version 6.0) was used.

4.3.3.1 Multiobjective 0-1 knapsack problems with fuzzy numbers

As a numerical example, consider a three-objective 0-1 knapsack problem with 50 variables and 10 constraints involving fuzzy numbers. For convenience, it is assumed here that some of the coefficients $c_{ij}$, $a_{ij}$, and $b_i$ in Tables 4.3 and 4.4 are fuzzy numbers. To be more explicit, among the coefficients in Tables 4.3 and 4.4, the following are assumed to be triangular fuzzy numbers, as shown in Table 4.13.
The parameter values of GADS are set as population size N = 100, generation gap G = 0.9, probability of crossover pc = 0.9, probability

Table 4.13. Triangular fuzzy numbers

c1,14 (-348.00, -232.00, -207.00)   c1,22 (-71.00, -57.00, -56.00)
c2,5 (358.00, 558.00, 596.00)       c2,8 (306.00, 560.00, 697.00)
c2,23 (12.00, 13.00, 18.00)         c2,33 (68.00, 97.00, 109.00)
c2,44 (615.00, 632.00, 922.00)      c2,46 (52.00, 82.00, 91.00)
c3,2 (-338.00, -243.00, -161.00)    c3,8 (69.00, 109.00, 145.00)
c3,9 (-1.00, -1.00, 0.00)           c3,21 (-463.00, -416.00, -378.00)
c3,39 (-447.00, -352.00, -211.00)   c3,41 (19.00, 33.00, 38.00)
a1,20 (573.00, 884.00, 933.00)      a1,22 (298.00, 500.00, 735.00)
a1,23 (596.00, 940.00, 1188.00)     a1,45 (47.00, 67.00, 78.00)
a1,49 (694.00, 839.00, 1209.00)     a2,6 (491.00, 803.00, 1148.00)
a2,7 (162.00, 270.00, 273.00)       a2,22 (175.00, 276.00, 376.00)
a2,26 (725.00, 782.00, 1092.00)     a2,33 (394.00, 460.00, 577.00)
a3,14 (301.00, 320.00, 355.00)      a3,37 (187.00, 282.00, 322.00)
a3,43 (467.00, 887.00, 1050.00)     a4,4 (494.00, 584.00, 675.00)
a4,6 (158.00, 210.00, 234.00)       a4,21 (395.00, 552.00, 790.00)
a4,38 (381.00, 459.00, 515.00)      a5,4 (90.00, 119.00, 172.00)
a5,9 (390.00, 625.00, 635.00)       a5,16 (297.00, 473.00, 625.00)
a5,19 (547.00, 855.00, 1048.00)     a5,21 (268.00, 291.00, 312.00)
a5,25 (321.00, 344.00, 468.00)      a5,26 (203.00, 368.00, 475.00)
a5,29 (484.00, 900.00, 1102.00)     a5,31 (464.00, 903.00, 1211.00)
a6,1 (304.00, 420.00, 501.00)       a6,14 (413.00, 509.00, 763.00)
a6,32 (33.00, 53.00, 60.00)         a7,3 (522.00, 576.00, 679.00)
a7,14 (559.00, 732.00, 1001.00)     a7,19 (586.00, 778.00, 1042.00)
a7,34 (722.00, 919.00, 1005.00)     a7,35 (494.00, 524.00, 550.00)
a8,7 (372.00, 503.00, 707.00)       a8,19 (446.00, 832.00, 1073.00)
a8,35 (222.00, 302.00, 431.00)      a8,37 (680.00, 948.00, 1136.00)
a8,45 (616.00, 726.00, 1027.00)     a8,47 (234.00, 446.00, 629.00)
a9,3 (446.00, 449.00, 458.00)       a9,5 (131.00, 231.00, 247.00)
a9,8 (471.00, 583.00, 796.00)       a9,14 (86.00, 159.00, 180.00)
a9,18 (161.00, 167.00, 233.00)      a9,22 (123.00, 124.00, 126.00)
a9,42 (159.00, 285.00, 375.00)      a9,47 (516.00, 846.00, 1201.00)
a10,18 (516.00, 543.00, 652.00)     a10,24 (441.00, 508.00, 712.00)
a10,29 (288.00, 496.00, 549.00)     a10,31 (425.00, 484.00, 511.00)
b1 (3766.25, 6660.75, 6960.00)

of mutation pm = 0.01, probability of inversion pi = 0.03, minimal search generation Imin = 100, maximal search generation Imax = 2000, ε = 0.02, and Cmult = 1.8. The coefficient ρ of the augmented minimax problem is set as 0.005.

The individual minimum and maximum of each objective function $c_i x$, $i = 1, 2, 3$, for α = 1 and α = 0 are calculated by GADS, as shown in Table 4.14.

Table 4.14. Individual minimum and maximum of each objective function

           Minimum ($z_i^{\min}$)    Maximum ($z_i^{\max}$)
           α = 1      α = 0          α = 1     α = 0
$c_1 x$    -9760.0    -9907.0        0.0       0.0
$c_2 x$    0.0        0.0            9575.0    9907.0
$c_3 x$    -4144.0    -4342.0        4189.0    4328.0

By taking account of these values, assume that the DM subjectively determines the linear membership functions $\mu_i(c_i x)$, $i = 1, 2, 3$, as shown in Table 4.15.

Table 4.15. Membership functions for objective functions

                  $z_i^1$    $z_i^0$
$\mu_1(c_1 x)$    -9900.0    0.0
$\mu_2(c_2 x)$    0.0        9900.0
$\mu_3(c_3 x)$    -4300.0    4300.0

Having determined the linear membership functions in this way, at each interaction with the DM, the corresponding augmented minimax problem (4.30) is solved through 10 trials of GADS, as shown in Table 4.16.

The augmented minimax problem (4.30) is solved by GADS for the initial reference membership levels (1.00, 1.00, 1.00) and α = 1.00. Then the DM is supplied with the corresponding membership function values, as shown in the first interaction of Table 4.16. Because the DM is not satisfied with the current membership function values, the DM updates the reference membership values to $\bar{\mu}_1 = 1.00$, $\bar{\mu}_2 = 0.80$, and $\bar{\mu}_3 = 1.00$ for improving the satisfaction levels for $\mu_1$ and $\mu_3$ at the expense of $\mu_2$. For the updated reference membership values, the corresponding augmented minimax problem (4.30) is solved by GADS, and the corresponding membership function values are calculated as shown in the second interaction of Table 4.16. Because the DM is not satisfied with the current membership function values, the DM updates the reference membership values to $\bar{\mu}_1 = 0.95$, $\bar{\mu}_2 = 0.80$, and $\bar{\mu}_3 = 1.00$ for improving the satisfaction levels for $\mu_2$ and $\mu_3$ at the expense of $\mu_1$. For the updated reference membership values, the corresponding augmented minimax problem (4.30) is solved by GADS, and the corresponding membership function values are calculated as shown in the third interaction of Table 4.16. Furthermore, the DM updates the degree α from 1.0 to 0.6 to improve all membership function values at the cost of the degree of realization α of the fuzzy coefficients. In this example, a satisficing solution for the DM is derived at the fourth interaction.

Table 4.16. Interactive processes (10 trials)

Interaction                      $\mu_1(c_1 x)$  $\mu_2(c_2 x)$  $\mu_3(c_3 x)$  $c_1 x$    $c_2 x$   $c_3 x$    #
1st (1.00,1.00,1.00), α = 1.00   0.773535        0.775051        0.772558        -7658.00   2227.00   -2344.00   9
                                 0.753636        0.753535        0.752442        -7461.00   2440.00   -2171.00   1
2nd (1.00,0.80,1.00), α = 1.00   0.835556        0.638182        0.835349        -8272.00   3582.00   -2884.00   10
3rd (0.95,0.80,1.00), α = 1.00   0.795455        0.658889        0.837674        -7875.00   3377.00   -2904.00   10
4th (0.95,0.80,1.00), α = 0.60   0.800141        0.661273        0.848698        -7921.40   3353.40   -2998.80   10

#: Number of solutions

4.3.3.2 Multiobjective 0-1 programming problems with fuzzy numbers

Next, as a numerical example of general multiobjective 0-1 programming problems involving fuzzy numbers, consider a three-objective general 0-1 programming problem having 50 variables and 10 constraints involving fuzzy numbers. For convenience, it is assumed here that some of the coefficients $c_{ij}$, $a_{ij}$, and $b_i$ in Tables 4.8 and 4.9 are fuzzy numbers. To be more explicit, among the coefficients in Tables 4.8 and 4.9, the following are assumed to be triangular fuzzy numbers, as shown in Table 4.17.
The parameter values of GADSRSU are set as population size N = 100, generation gap G = 0.9, probability of crossover pc = 0.9, probability of mutation pm = 0.01, probability of inversion pi = 0.03, minimal search generation Imin = 100, maximal search generation Imax = 1000, ε = 0.02, Cmult = 1.8, λ = 0.9, η = 0.1, θ = 5.0, and P = 200. The coefficient ρ of the augmented minimax problem is set as 0.005.

In order to determine the linear membership functions that well represent the fuzzy goals of the DM, the individual minimum and max-

Table 4.17. Fuzzy parameters

c1,3 (-876.00, -678.00, -588.00)    c1,6 (-520.00, -494.00, -467.00)
c1,14 (-531.00, -357.00, -277.00)   c1,15 (-119.00, -92.00, -56.00)
c1,45 (-913.00, -639.00, -631.00)   c1,48 (-739.00, -504.00, -336.00)
c2,2 (349.00, 632.00, 906.00)       c2,20 (504.00, 740.00, 890.00)
c2,36 (54.00, 103.00, 146.00)       c3,9 (87.00, 101.00, 133.00)
c3,24 (-476.00, -467.00, -396.00)   c3,33 (-93.00, -90.00, -88.00)
a1,1 (48.00, 77.00, 87.00)          a1,2 (-22.00, -18.00, -15.00)
a1,3 (61.00, 117.00, 128.00)        a1,4 (219.00, 228.00, 322.00)
a1,16 (294.00, 336.00, 384.00)      a1,50 (-49.00, -39.00, -23.00)
a2,6 (262.00, 272.00, 405.00)       a2,16 (-384.00, -333.00, -194.00)
a2,19 (-238.00, -173.00, -126.00)   a2,27 (-568.00, -403.00, -358.00)
a2,39 (358.00, 363.00, 501.00)      a3,5 (-4.00, -3.00, -2.00)
a3,13 (-31.00, -23.00, -12.00)      a3,48 (-70.00, -65.00, -53.00)
a4,31 (116.00, 135.00, 159.00)      a4,40 (-305.00, -275.00, -217.00)
a5,10 (-378.00, -283.00, -165.00)   a5,35 (-542.00, -404.00, -255.00)
a6,2 (66.00, 126.00, 152.00)        a6,12 (-194.00, -171.00, -102.00)
a6,19 (389.00, 435.00, 594.00)      a6,46 (-378.00, -373.00, -313.00)
a6,48 (-44.00, -32.00, -23.00)      a7,1 (189.00, 201.00, 208.00)
a7,19 (311.00, 372.00, 417.00)      a7,23 (-644.00, -445.00, -388.00)
a7,27 (-494.00, -401.00, -339.00)   a7,33 (-51.00, -46.00, -38.00)
a7,42 (-320.00, -272.00, -237.00)   a8,2 (37.00, 45.00, 61.00)
a8,5 (103.00, 179.00, 261.00)       a8,7 (-520.00, -374.00, -188.00)
a8,31 (-447.00, -328.00, -318.00)   a8,46 (480.00, 495.00, 699.00)
a9,2 (-221.00, -172.00, -97.00)     a9,5 (96.00, 100.00, 119.00)
a9,10 (292.00, 308.00, 420.00)      a9,28 (-127.00, -91.00, -58.00)
a9,42 (-216.00, -156.00, -140.00)   a10,2 (208.00, 367.00, 375.00)
a10,3 (-481.00, -411.00, -331.00)   a10,19 (-399.00, -393.00, -292.00)
a10,20 (302.00, 353.00, 385.00)     a10,26 (-389.00, -271.00, -197.00)
a10,27 (-438.00, -312.00, -212.00)  b1 (-2648.00, -2067.20, -964.40)

imum of each objective function $c_i x$, $i = 1, 2, 3$, are calculated by GADSRSU, as shown in Table 4.18.

Table 4.18. Individual minimum and maximum of each objective function

           Minimum ($z_i^{\min}$)    Maximum ($z_i^{\max}$)
           α = 1       α = 0         α = 1      α = 0
$c_1 x$    -18301.0    -19754.0      -5343.0    -3954.0
$c_2 x$    5731.0      5271.0        18960.0    20773.0
$c_3 x$    -3453.0     -3860.0       2130.0     2570.0

On the basis of the values in Table 4.18, assume that the DM subjectively specifies the parameter values of the linear membership functions $\mu_i(c_i x)$, $i = 1, 2, 3$, as shown in Table 4.19.

Table 4.19. Membership functions for objective functions

                  $z_i^1$     $z_i^0$
$\mu_1(c_1 x)$    -20000.0    -3000.0
$\mu_2(c_2 x)$    5000.0      20000.0
$\mu_3(c_3 x)$    -4000.0     3000.0

Having determined the linear membership functions in this way, at each interaction with the DM, the corresponding augmented minimax problem (4.30) is solved through 10 trials of GADSRSU, as shown in Table 4.20.

The augmented minimax problem (4.30) is solved by GADSRSU for the initial reference membership levels (1.0, 1.0, 1.0) and α = 1.0, and the DM is supplied with the corresponding membership function values, as shown in the first interaction of Table 4.20. Because the DM is not satisfied with the current membership function values, the DM updates the reference membership values to $\bar{\mu}_1 = 0.8$, $\bar{\mu}_2 = 1.0$, $\bar{\mu}_3 = 1.0$, and α = 1.0 for improving the satisfaction levels for $\mu_2$ and $\mu_3$ at the expense of $\mu_1$. For the updated reference membership values, the corresponding augmented minimax problem (4.30) is solved by GADSRSU, and the corresponding membership function values are calculated as shown in the second interaction of Table 4.20. Because the DM is not satisfied with the current membership function values, the DM updates the reference membership values to $\bar{\mu}_1 = 0.8$, $\bar{\mu}_2 = 0.9$, $\bar{\mu}_3 = 1.0$, and α = 0.6 for improving the satisfaction levels for $\mu_1$ and $\mu_3$ at the expense of $\mu_2$. For the updated reference membership values, the corresponding augmented minimax problem (4.30) is solved by GADSRSU, and the corresponding membership function values are calculated as shown in the third interaction of Table 4.20. The same procedure continues in this manner until the DM is satisfied with the current values of the membership functions. In this example, a satisficing solution for the DM is derived at the fourth interaction.

4.4 Conclusion
In this chapter, as a natural extension of single-objective 0-1 program-
ming problems discussed in the previous chapter, multiobjective 0-1 pro-
gramming problems are formulated by assuming that the decision maker

Table 4.20. Interactive processes (10 trials)

Interaction   μ1(c1x)   μ2(c2x)   μ3(c3x)   c1x   c2x   c3x   #
1st 0.624471 0.624267 0.685143 -13616.0 10636.0 -1796.0 1
(1.00,1.00,1.00) 0.628588 0.635667 0.639714 -13686.0 10465.0 -1478.0 1
α = 1.0 0.662412 0.643733 0.625571 -14261.0 10344.0 -1379.0 4
0.624059 0.630400 0.642286 -13609.0 10544.0 -1496.0 2
0.615000 0.593933 0.599000 -14162.0 11091.0 -1193.0 1
0.616118 0.628733 0.611571 -13474.0 10569.0 -1281.0 1
2nd 0.498235 0.706067 0.720286 -11470.0 9409.0 -2042.0 3
(0.80,1.00,1.00) 0.506471 0.696133 0.707429 -11610.0 9558.0 -1952.0 1
α = 1.0 0.505118 0.693467 0.693714 -11587.0 9598.0 -1856.0 1
0.510882 0.692467 0.739714 -11685.0 9613.0 -2178.0 3
0.492294 0.692333 0.735429 -11369.0 9615.0 -2148.0 1
0.483118 0.683667 0.709000 -11213.0 9745.0 -1963.0 1
3rd 0.615847 0.640480 0.743343 -13469.4 10392.8 -2203.4 1
(0.80,0.90,1.00) 0.558035 0.655480 0.747200 -12486.6 10167.8 -2230.4 1
α = 0.6 0.544682 0.652547 0.826486 -12259.6 10211.8 -2785.4 1
0.542624 0.643133 0.810543 -12224.6 10353.0 -2673.8 1
0.570800 0.652080 0.742343 -12703.6 10218.8 -2196.4 2
0.540647 0.671187 0.741543 -12191.0 9932.2 -2190.8 1
0.551153 0.639200 0.743343 -12369.6 10412.0 -2203.4 3
#: Number of solutions

may have a fuzzy goal for each of the objective functions. Through the
combination of the desirable features of both the interactive fuzzy sat-
isficing methods for continuous variables and the GADS discussed in
the previous chapter, an interactive fuzzy satisficing method to derive a
satisficing solution for the decision maker is presented. Furthermore, by
considering the experts' imprecise or fuzzy understanding of the nature
of the parameters in the problem-formulation process, the multiobjective
0-1 programming problems involving fuzzy parameters are formulated.
Through the introduction of extended Pareto optimality concepts, an in-
teractive decision-making method for deriving a satisficing solution of the
DM from among the extended Pareto optimal solution set is presented
together with detailed numerical examples. An integer generalization
along the same lines as in Chapter 3 will be found in Chapter 6.
Chapter 5

GENETIC ALGORITHMS FOR INTEGER


PROGRAMMING

This chapter is the integer version of Chapter 3, and genetic algorithms
with double strings (GADS) for 0-1 programming problems are
extended to deal with integer programming problems. New decoding
algorithms for double strings using reference solutions with the reference
solution updating procedure are proposed so that individuals
are decoded to corresponding feasible solutions of integer
programming problems. The chapter also includes several numerical
experiments.

5.1 Introduction
As discussed in Chapter 3, GADS performed efficiently for not only
multidimensional 0-1 knapsack problems but also general 0-1 program-
ming problems involving positive and negative coefficients. To deal with
multidimensional integer knapsack problems, a direct generalization of
our previous results along this line is first performed. Unfortunately,
however, observing that integer ones have a vast search space compared
with 0-1 ones, it is quite evident that the computational time for finding
an approximate optimal solution with high accuracy becomes enormous.
Realizing this difficulty, information about an optimal solution to the
corresponding linear programming relaxation problems is incorporated
for improving the search efficiency and processing time, because it is
expected to be useful for searching an approximate optimal solution to
the integer programming problem. Furthermore, GADS based on refer-
ence solution updating (GADSRSU) for 0-1 programming problems are
extended to deal with integer programming problems [140, 146].

5.2 Multidimensional integer knapsack problems


5.2.1 Problem formulation
As an integer version of a multidimensional 0-1 knapsack problem, a
multidimensional integer knapsack problem is formulated as

minimize   cx
subject to Ax ≤ b                                        (5.1)
           x_j ∈ {0, ..., ν_j},  j = 1, ..., n

where c = (c_1, ..., c_n) is an n-dimensional row vector, x = (x_1, ..., x_n)^T
is an n-dimensional column vector of integer decision variables, A =
[a_ij], i = 1, ..., m, j = 1, ..., n, is an m × n coefficient matrix, b =
(b_1, ..., b_m)^T is an m-dimensional column vector, and ν_j, j = 1, ..., n are
nonnegative integers. It should be noted here that, in a multidimensional
integer knapsack problem, each element of c is assumed to be nonpositive
and each element of A and b is assumed to be nonnegative.

5.2.2 Linear programming relaxation


It is expected that an optimal solution to the linear programming
relaxation problem becomes a good approximate optimal solution of the
original integer programming problem. With this observation in mind,
to check the relationships between optimal solutions to integer knapsack
problems and those to the corresponding linear programming relaxation
problems, 20 integer programming problems with 50 variables and 10
constraints are generated at random. To be more explicit, the values
for coefficients c_j, j = 1, ..., n, and a_ij, i = 1, ..., m, j = 1, ..., n are
determined by uniform integer random numbers in [-999, 0] and [0, 999],
respectively, and the values of b_i, i = 1, ..., m are defined as

b_i = γ Σ_{j=1}^{n} a_ij,  i = 1, ..., m,                (5.2)

where a positive constant γ is a parameter to control the degree of strictness
of the constraints and is determined by a uniform real random number
ranging from 10 to 20. In addition, the upper bounds ν_j of x_j, j = 1, ..., n
are set at 20 for all j.
Table 5.1 shows the relationships between optimal solutions x_j^* to integer
knapsack problems and those x̄_j to the corresponding linear programming
relaxation problems. Figure 5.1 shows the frequency distribution
of differences between the values of an optimal solution x_j^* of the integer
programming problems and an optimal solution x̄_j of the linear programming
relaxation problems.

Table 5.1. Relationships between optimal solutions x_j^* to integer knapsack problems
and those x̄_j to the corresponding linear programming relaxation problems

              x_j^* = x̄_j    x_j^* ≠ x̄_j
x̄_j = 0          328              2
x̄_j ≠ 0          585             85

Figure 5.1. Frequency distribution of x_j^* − x̄_j

As a result, it is recognized that each variable x_j^* takes exactly or
approximately the same value that x̄_j does; in particular, variables x_j^*
for which x̄_j = 0 are very likely to be equal to 0.

5.2.3 Coding and decoding
For multidimensional 0-1 knapsack problems, Sakawa et al. [138, 144,
147, 148, 160, 161, 163] proposed a double string representation, in which
an element in the upper row denotes an index of an element in a solution
vector and an element in the lower row denotes the corresponding value
of the variable. To be more explicit, for multidimensional 0-1 knapsack
problems, in a double string representation as shown in Figure 5.2, it is
natural to assume that g_{s(j)} ∈ {0, 1}, s(j) ∈ {1, ..., n}, and s(j) ≠ s(j')
for j ≠ j'.

Indices of variables
Values of variables

Figure 5.2. Individual representation by double string



Keeping the same spirit as a double string for multidimensional 0-1
knapsack problems, it is reasonable to assume that g_{s(j)} ∈ {0, ..., ν_{s(j)}},
s(j) ∈ {1, ..., n}, and s(j) ≠ s(j') for j ≠ j' for multidimensional integer
knapsack problems.
For dealing with multidimensional integer knapsack problems, the decoding
algorithm for double strings for 0-1 problems [138, 144, 147, 148, 160,
161, 163], which maps an individual represented by a double string to a
feasible solution, can be generalized as follows.

Decoding algorithm for double string


Step 1: Set j := 1 and sum_i := 0, i = 1, ..., m.

Step 2: Let a_{is(j)} denote the (i, s(j)) element of the coefficient matrix
A. Then x_{s(j)} is determined as

x_{s(j)} := min( min_{i=1,...,m} ⌊(b_i − sum_i)/a_{is(j)}⌋, g_{s(j)} ),   (5.3)

where a_{is(j)} ≠ 0.

Step 3: Let sum_i := sum_i + a_{is(j)} x_{s(j)}, i = 1, ..., m and j := j + 1.

Step 4: If j > n, stop. Otherwise, go to step 2.
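To make the decoding step concrete, a minimal sketch in Python may be given as follows; the function name decode_double_string and the array-based data layout are illustrative assumptions rather than part of the original presentation.

    import numpy as np

    def decode_double_string(s, g, A, b):
        # Decode a double string (s, g) into a feasible solution x of (5.1).
        # s: permutation of variable indices 0..n-1 (upper row of the string)
        # g: gene values with 0 <= g[j] <= nu[s[j]] (lower row of the string)
        # A, b: nonnegative constraint data of the knapsack problem
        m, n = A.shape
        x = np.zeros(n, dtype=int)
        sums = np.zeros(m)                      # the quantities sum_i
        for j in range(n):
            col = A[:, s[j]]
            rows = col > 0                      # rows with a_{i s(j)} != 0
            if rows.any():
                cap = np.floor((b[rows] - sums[rows]) / col[rows]).min()
                x[s[j]] = min(int(cap), int(g[j]))   # equation (5.3)
            else:
                x[s[j]] = int(g[j])             # no binding constraint row
            sums += col * x[s[j]]
        return x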
In the preceding decoding algorithm for double strings depicted in
Figure 5.2, a double string is decoded from the left edge to the right
edge in order. Namely, a gene (s(j), g_{s(j)})^T located in the left part of a
double string tends to be decoded to a value around g_{s(j)}, whereas one
in the right part is apt to be decoded to nearly 0. By taking account
of the relationships between optimal solutions to integer knapsack prob-
lems and those to the corresponding continuous relaxation problems,
we propose a new decoding algorithm in which decision variables for
the corresponding solution to the continuous relaxation problem greater
than 0 are given priority in decoding. As a result, decoded solutions will
be closer to an optimal solution to the continuous relaxation problem.
However, some optimal solutions x* to integer programming problems
may not be very close to optimal solutions x̄ to the corresponding linear
programming relaxation problems, even if about 90% of the elements of x*
are equal to those of x̄, as shown in the previous section. In consideration
of the estrangement between x* and x̄, we introduce a constant
R that represents the degree of use of information about solutions to linear
programming relaxation problems.
programming relaxation problems. Now we are ready to introduce the
following decoding algorithm for double strings using linear program-
ming relaxation.

Decoding algorithm using linear programming relaxation


Step 0: If a uniform random number rand() in [0, 1] is less than or
equal to a constant R, go to step 1. Otherwise, use the original
decoding algorithm mentioned earlier.

Step 1: Let j := 1 and sum_i := 0, i = 1, ..., m.

Step 2: If x̄_{s(j)} > 0, proceed to step 3. Otherwise, i.e., if x̄_{s(j)} = 0, let
j := j + 1 and go to step 5.
Step 3: Let a_{is(j)} denote the (i, s(j)) element of the coefficient matrix
A. Then, x_{s(j)} is determined as

x_{s(j)} := min( min_{i=1,...,m} ⌊(b_i − sum_i)/a_{is(j)}⌋, g_{s(j)} ),   (5.4)

where a_{is(j)} ≠ 0.
Step 4: Let sumi := sumi + ais(j)xs(j) , i = 1, ... , m and j := j + 1.
Step 5: If j > n, proceed to step 6. Otherwise, return to step 2.
Step 6: Let j := 1.
Step 7: If x̄_{s(j)} = 0, proceed to step 8. Otherwise, i.e., if x̄_{s(j)} > 0, let
j := j + 1 and go to step 10.
Step 8: Let a_{is(j)} denote the (i, s(j)) element of the coefficient matrix
A. Then, x_{s(j)} is determined as

x_{s(j)} := min( min_{i=1,...,m} ⌊(b_i − sum_i)/a_{is(j)}⌋, g_{s(j)} ),   (5.5)

where a_{is(j)} ≠ 0.
Step 9: Let sumi := sumi + ais(j)Xs(j), i = 1, ... ,m and j := j + 1.
Step 10: If j > n, stop. Otherwise, return to step 7.
In the previous algorithm, the optimal solution x̄ to the linear programming
relaxation problem of the integer programming problem is
supposed to be obtained in advance.
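A sketch of this two-pass decoding in Python, under the same assumptions as the sketch above and with x̄ stored in an array xbar, might look as follows; the random-number interface and the handling of the non-LP branch are illustrative choices.

    import numpy as np

    def decode_with_lp_relaxation(s, g, A, b, xbar, R, rng):
        # With probability 1 - R, keep the plain left-to-right order;
        # otherwise decode positions with xbar[s[j]] > 0 first (steps 1-5)
        # and the remaining positions afterwards (steps 6-10).
        n = len(s)
        if rng.random() > R:
            order = list(range(n))
        else:
            order = ([j for j in range(n) if xbar[s[j]] > 0]
                     + [j for j in range(n) if xbar[s[j]] == 0])
        x = np.zeros(n, dtype=int)
        sums = np.zeros(A.shape[0])
        for j in order:
            col = A[:, s[j]]
            rows = col > 0
            cap = (np.floor((b[rows] - sums[rows]) / col[rows]).min()
                   if rows.any() else g[j])
            x[s[j]] = min(int(cap), int(g[j]))  # equations (5.4) and (5.5)
            sums += col * x[s[j]]
        return x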
Figure 5.3 shows an example of decoding a double string. When the
original decoding algorithm is used, the double string will be decoded
from left to right in order. On the other hand, when the proposed
decoding algorithm using linear programming relaxation is used, after
the genes (s(j), g_{s(j)})^T such that x̄_{s(j)} > 0 are decoded from left to right, the
remainder, in other words, the genes such that x̄_{s(j)} = 0, will be decoded from
left to right.


Figure 5.3. An illustration of decoding of a double string

5.2.4 Generation of initial population


In genetic algorithms, an initial population is usually generated at
random. However, it is important to generate a more promising initial
population to obtain a good approximate optimal solution. For this
reason, the information about the solution x̄ of the linear programming
relaxation problem should be used. To be more specific, determine
g_{s(j)}, j = 1, ..., n randomly according to the corresponding Gaussian
distribution with mean x̄_{s(j)} and variance σ², shown in Figure 5.4.

Figure 5.4. Gaussian distribution for generation of initial population

It should be noted here that because there exist no appropriate policies
and standards for determining the value of σ, at present it must be determined
by repeated trial and error so that the whole genetic algorithm works well.
The procedure of generation of initial population is summarized as
follows.

Generation of initial population

Step 1: Let r := 1.

Step 2: If a uniform random number rand() in [0, 1] is less than or
equal to the constant R, go to step 3. Otherwise, go to step 7.

Step 3: Let j := 1.

Step 4: Determine the value of s(j) by a uniform integer random number
in {1, ..., n} so that s(j) ≠ s(j'), j' = 1, ..., j − 1.

Step 5: Determine the value of g_{s(j)} by a Gaussian random number
with mean x̄_{s(j)} and variance σ², and let j := j + 1.
Step 6: If j > n, let r := r + 1 and go to step 11. Otherwise, return to
step 4.
Step 7: Let j := 1.

Step 8: Determine the value of s(j) by a uniform integer random number
in {1, ..., n} so that s(j) ≠ s(j'), j' = 1, ..., j − 1.

Step 9: Determine the value of g_{s(j)} by a uniform integer random number
in {0, 1, ..., ν_{s(j)}}, and let j := j + 1.
Step 10: If j > n, let r := r + 1 and go to step 11. Otherwise, return
to step 8.

Step 11: If r > N, stop. Otherwise, return to step 2.
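A compact Python sketch of this procedure may be written as follows; clipping the Gaussian draws to the valid gene range {0, ..., ν_{s(j)}} is an implementation choice that the text does not prescribe.

    import numpy as np

    def generate_initial_population(N, n, nu, xbar, sigma, R, rng):
        # Each individual is a double string (s, g); with probability R the
        # gene values are drawn from N(xbar[s[j]], sigma^2), otherwise
        # uniformly from {0, ..., nu[s[j]]}.
        population = []
        for _ in range(N):
            s = rng.permutation(n)                        # steps 4 and 8
            if rng.random() <= R:                         # step 2
                raw = rng.normal(xbar[s], sigma)          # step 5
                g = np.clip(np.rint(raw), 0, nu[s]).astype(int)
            else:
                g = rng.integers(0, nu[s] + 1)            # step 9
            population.append((s, g))
        return population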

5.2.5 Fitness and scaling


For multidimensional integer knapsack problems, it seems quite natural
to define the fitness function of each individual s by

f(s) = cx / Σ_{j=1}^{n} c_j ν_j,                          (5.6)

where s denotes an individual represented by a double string and x
is the phenotype of s. Observe that the fitness is normalized by the

minimum of the objective function, and hence the fitness f(s) satisfies
0 ≤ f(s) ≤ 1.
When the variance of fitness in a population is small, the ordinary
roulette wheel selection often does not work well because there is little
difference between the probability of a good individual surviving and
that of a bad one surviving. In order to overcome this problem, quite
similar to [138, 144, 147, 148, 160, 161, 163], the linear scaling technique
[66] is adopted.

Algorithm for linear scaling

Step 1: Calculate the mean fitness fmean, the maximal fitness fmax, and
the minimal fitness fmin in the current population.

Step 2: If f_min > (c_mult · f_mean − f_max) / (c_mult − 1.0), let

a := (c_mult − 1.0) · f_mean / (f_max − f_mean),
b := f_mean · (f_max − c_mult · f_mean) / (f_max − f_mean),

and go to step 3. Otherwise, let

a := f_mean / (f_mean − f_min),
b := −f_min · f_mean / (f_mean − f_min),

and go to step 3.

Step 3: Perform the transformation f' := a·f + b and regard
f' as a new fitness.
In the previous procedure, c_mult denotes the expectation of the number
of copies of the best individual that will survive in the next generation, usually
set as 1.2 ≤ c_mult ≤ 2.0.
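A direct Python transcription of this scaling may be sketched as follows; the guard for a population without fitness spread is an added safeguard not mentioned in the text.

    def linear_scaling(f, c_mult=2.0):
        # Scale fitness values f' = a*f + b so that the mean is preserved
        # and the best individual receives roughly c_mult * f_mean.
        f_mean = sum(f) / len(f)
        f_max, f_min = max(f), min(f)
        if f_max == f_mean:            # no spread: scaling is a no-op
            return list(f)
        if f_min > (c_mult * f_mean - f_max) / (c_mult - 1.0):
            a = (c_mult - 1.0) * f_mean / (f_max - f_mean)
            b = f_mean * (f_max - c_mult * f_mean) / (f_max - f_mean)
        else:                          # stretch [f_min, f_mean] instead
            a = f_mean / (f_mean - f_min)
            b = -f_min * f_mean / (f_mean - f_min)
        return [a * fi + b for fi in f]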

5.2.6 Genetic operators


5.2.6.1 Reproduction
As discussed in Chapter 3, various reproduction methods have been
proposed [66, 112]. Sakawa et al. [138, 148] have investigated the per-
formance of each of the six reproduction operators-ranking selection,
elitist ranking selection, expected value selection, elitist expected value
selection, roulette wheel selection, and elitist roulette wheel selection-
and as a result confirmed that elitist expected value selection is relatively
efficient. Based mainly on our experience [138, 148], we adopt elitist
expected value selection, which is a combination of elitist preserving
selection and expected value selection, as a reproduction operator.

Elitist preserving selection: One or more individuals with the largest fitness
up to the current population are unconditionally preserved in the
next generation.

Expected value selection: Let N denote the number of individuals in the
population. The expected value of the number of copies of the ith individual
s_i in the next population is calculated as

N_i = N · f(s_i) / Σ_{j=1}^{N} f(s_j).

In expected value selection, the integral part ⌊N_i⌋ of N_i gives
the definite number of copies of the individual s_i preserved in the next population.
Using the fractional parts N_i − ⌊N_i⌋, the probability of
preserving s_i in the next population is determined by

(N_i − ⌊N_i⌋) / Σ_{i=1}^{N} (N_i − ⌊N_i⌋).
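The whole reproduction operator can be sketched in Python as follows; the way remaining slots are filled by a roulette draw on the fractional parts is one natural reading of the procedure.

    import numpy as np

    def elitist_expected_value_selection(population, fitness, rng):
        # Keep the best individual unconditionally (elitism), reproduce
        # floor(N_i) definite copies, and fill the rest by sampling in
        # proportion to the fractional parts N_i - floor(N_i).
        N = len(population)
        f = np.asarray(fitness, dtype=float)
        Ni = N * f / f.sum()                       # expected copy counts
        next_pop = [population[int(np.argmax(f))]]
        for i in range(N):
            next_pop.extend([population[i]] * int(Ni[i]))
        frac = Ni - np.floor(Ni)
        while len(next_pop) < N:
            i = int(rng.choice(N, p=frac / frac.sum()))
            next_pop.append(population[i])
        return next_pop[:N]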

5.2.6.2 Crossover
If a single-point crossover or multipoint crossover is directly applied
to individuals of double string type, the kth element of an offspring
may take the same number that the k'th element takes. A similar violation
occurs in solving traveling salesman problems (TSPs) or scheduling
problems through genetic algorithms. In order to avoid this violation,
a crossover method called partially matched crossover (PMX) was proposed
[68] and was modified to be suitable for double strings [148]. The
PMX for double strings can be described as follows:

Partially Matched Crossover (PMX) for double string


Step 0: Set r := 1.

Step 1: Choose X and Y as parent individuals. Then, let X' := X and
Y' := Y.

Step 2: Generate a real random number rand() in [0, 1]. For a given
crossover rate Pc, if rand() ≤ Pc, then go to step 3. Otherwise, go to
step 8.
Step 3: Choose two crossover points h, k (h ≠ k) from {1, 2, ..., n}
at random. Then, set l := h. First, perform the operations in steps 4
through 6 for X' and Y.

Step 4: Let j := ((l − 1) % n) + 1 (p % q is defined as the remainder when
an integer p is divided by an integer q). After finding j' such that
s_Y(j) = s_{X'}(j'), interchange (s_{X'}(j), g_{s_{X'}(j)})^T with (s_{X'}(j'), g_{s_{X'}(j')})^T.
Furthermore, set l := l + 1, and go to step 5.


Step 5: 1) If h < k and l > k, then go to step 6. If h < k and l ≤ k,
then return to step 4. 2) If h > k and l > (k + n), then go to step 6.
If h > k and l ≤ (k + n), then return to step 4.

Step 6: 1) If h < k, let g_{s_{X'}(j)} := g_{s_Y(j)} for all j such that h ≤ j ≤ k,
and go to step 7. 2) If h > k, let g_{s_{X'}(j)} := g_{s_Y(j)} for all j such that
1 ≤ j ≤ k or h ≤ j ≤ n, and go to step 7.
Step 7: Carry out the same operations as in steps 4 through 6 for Y'
and X.

Step 8: Preserve X' and Y' as the offspring of X and Y.
Step 9: If r < N, set r := r + 1 and return to step 1. Otherwise, go to
step 10.
Step 10: Choose N . G individuals from 2 . N preserved individuals
randomly, and replace N . G individuals of the current population
consisting of N individuals with the N . G chosen individuals. Here,
G is a constant called a generation gap.
It should be noted here that the original PMX for double strings is
extended to deal with the substrings not only between h and k but also
between k and h.
An illustrative example of crossover is shown in Figure 5.5.
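A simplified Python sketch of one direction of this operator (producing X' from X and Y for the case h ≤ k, with NumPy arrays for the two rows) is given below; the wrap-around case h > k follows the same pattern.

    import numpy as np

    def pmx_double_string(sX, gX, sY, gY, h, k):
        # Steps 4-6 for h <= k: align the index row of X' with Y on the
        # matched segment by column swaps, then copy Y's gene values there.
        sX2, gX2 = sX.copy(), gX.copy()
        for j in range(h, k + 1):
            j2 = int(np.where(sX2 == sY[j])[0][0])   # position of s_Y(j) in X'
            sX2[[j, j2]] = sX2[[j2, j]]              # swap the two columns
            gX2[[j, j2]] = gX2[[j2, j]]
        gX2[h:k + 1] = gY[h:k + 1]                   # step 6
        return sX2, gX2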

5.2.6.3 Mutation operator


It is considered that mutation plays the role of local random search in
genetic algorithms. A direct extension of mutation for 0-1 problems is to
change the value of g_{s(j)} at random, uniformly in [0, ν_{s(j)}], when mutation
occurs at g_{s(j)}. The mutation operator is further refined by using the
information about the solution x̄ of the linear programming relaxation
problem. To be more explicit, change g_{s(j)}, j = 1, ..., n randomly,
according to the corresponding Gaussian distribution with mean x̄_{s(j)}
and variance τ², as shown in Figure 5.6.
Just like σ in Figure 5.4, it should be noted that because no appropriate
policies and standards exist for determining the value of τ, at present it must
be determined by repeated trial and error so that the whole genetic algorithm
works well.
The procedures of mutation and inversion for double strings are sum-
marized as follows.

Figure 5.5. Example of crossover

Figure 5.6. Gaussian distribution for mutation

Mutation for double strings

Step 0: Let r := 1.

Step 1: Let j := 1.

Step 2: If a random number rand() in [0, 1] is less than or equal to the
probability of mutation Pm, go to step 3. Otherwise, go to step 4.

Step 3: If another random number rand() in [0, 1] is less than or equal
to a constant R, determine g_{s(j)} randomly according to the Gaussian
distribution with mean x̄_{s(j)} and variance τ², and go to step 4. Otherwise,
determine g_{s(j)} randomly according to the uniform distribution
in [0, ν_{s(j)}], and go to step 4.

Step 4: If j < n, let j := j + 1 and return to step 2. Otherwise, go to


step 5.

Step 5: If r < N, let r := r + 1 and return to step 1. Otherwise, stop.

Inversion for double strings

Step 0: Set r := 1.

Step 1: Generate a random number rand() in [0, 1]. For a given inversion
rate Pi, if rand() ≤ Pi, then go to step 2. Otherwise, go to step
4.

Step 2: Choose two points h, k (h ≠ k) from {1, 2, ..., n} at random.
Then, set l := h.

Step 3: Let j := ((l − 1) % n) + 1. Then, interchange (s(j), g_{s(j)})^T with
(s((n + k − (l − h) − 1) % n + 1), g_{s((n+k−(l−h)−1)%n+1)})^T. Furthermore,
set l := l + 1 and go to step 4.

Step 4: 1) If h < k and l < h + ⌊(k − h + 1)/2⌋, return to step 3.
If h < k and l ≥ h + ⌊(k − h + 1)/2⌋, go to step 5. 2) If h > k
and l < h + ⌊(k + n − h + 1)/2⌋, return to step 3. If h > k and
l ≥ h + ⌊(k + n − h + 1)/2⌋, go to step 5.

Step 5: If r < N, set r := r + 1 and return to step 1. Otherwise, stop.

Observe that the original inversion for double strings is extended to
deal with the substrings not only between h and k but also between k
and h.
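Both operators admit short Python sketches; as before, the names are illustrative, only the h ≤ k case of inversion is shown, and clipping the Gaussian draw to the valid gene range is an added implementation choice.

    import numpy as np

    def mutate(s, g, nu, xbar, Pm, tau, R, rng):
        # Per-gene mutation: with probability Pm redraw g[j], either from
        # N(xbar[s[j]], tau^2) (probability R) or uniformly (steps 2-3).
        g = g.copy()
        for j in range(len(g)):
            if rng.random() <= Pm:
                if rng.random() <= R:
                    v = int(np.rint(rng.normal(xbar[s[j]], tau)))
                    g[j] = min(max(v, 0), int(nu[s[j]]))
                else:
                    g[j] = int(rng.integers(0, nu[s[j]] + 1))
        return g

    def invert(s, g, h, k):
        # Reverse the (s, g) columns of the substring between h and k.
        s, g = s.copy(), g.copy()
        s[h:k + 1] = s[h:k + 1][::-1]
        g[h:k + 1] = g[h:k + 1][::-1]
        return s, g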
Mutation and inversion are illustrated in Figure 5.7.


Figure 5.7. Example of mutation and inversion



5.2.7 Termination conditions


In some generation t, the genetic algorithm terminates if either of the
following conditions is fulfilled: (1) t > I_min and (f_max − f_mean)/f_max < ε,
or (2) t > I_max, where I_min is the minimal search generation, I_max is
the maximal search generation, f_max and f_mean are the maximal and mean
fitness of the current population, and ε is the convergence criterion.

5.2.8 Genetic algorithms with double strings using


linear programming relaxation
Now we are ready to summarize the GADS using linear programming
relaxation (GADSLPR) for solving multidimensional integer knapsack
problems.

Genetic algorithms with double strings using linear program-


ming relaxation

Step 0: Determine the values of the parameters used in the genetic algorithm:
the population size N, the generation gap G, the probability
of crossover Pc, the probability of mutation Pm, the probability of
inversion Pi, the minimal search generation I_min, the maximal search
generation I_max (> I_min), the scaling constant c_mult, the convergence
criterion ε, and the degree of use of information about solutions to
linear programming relaxation problems R, and set the generation
counter t at 0.

Step 1: Generate the initial population consisting of N individuals.

Step 2: Decode each individual (genotype) in the current population


and calculate its fitness based on the corresponding solution (pheno-
type).

Step 3: If the termination condition is fulfilled, stop. Otherwise, let


t := t + 1 and go to step 4.

Step 4: Apply reproduction operator using elitist expected value selec-


tion, after performing linear scaling.

Step 5: Apply crossover operator, the PMX for double string.

Step 6: Apply mutation operator.

Step 7: Apply inversion operator. Return to step 2.
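Putting the pieces together, the main loop can be sketched as follows; the helpers refer to the sketches given earlier in this section, while apply_crossover and apply_inversion stand for hypothetical wrappers that pair individuals and apply PMX and inversion at rates Pc and Pi.

    def gadslpr(problem, params, rng):
        # Skeleton of steps 0-7 of GADSLPR (illustrative, not verbatim).
        pop = generate_initial_population(params.N, problem.n, problem.nu,
                                          problem.xbar, params.sigma,
                                          params.R, rng)              # step 1
        denom = float(problem.c @ problem.nu)      # normalizer in (5.6)
        for t in range(params.I_max + 1):
            xs = [decode_with_lp_relaxation(s, g, problem.A, problem.b,
                                            problem.xbar, params.R, rng)
                  for (s, g) in pop]                                  # step 2
            f = [float(problem.c @ x) / denom for x in xs]
            f_max, f_mean = max(f), sum(f) / len(f)
            if t > params.I_min and (f_max - f_mean) / f_max < params.eps:
                break                                                 # step 3
            pop = elitist_expected_value_selection(
                pop, linear_scaling(f, params.c_mult), rng)           # step 4
            pop = apply_crossover(pop, params.Pc, params.G, rng)      # step 5
            pop = [(s, mutate(s, g, problem.nu, problem.xbar, params.Pm,
                              params.tau, params.R, rng))
                   for (s, g) in pop]                                 # step 6
            pop = apply_inversion(pop, params.Pi, rng)                # step 7
        return xs[max(range(len(xs)), key=lambda r: f[r])]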



5.2.9 Numerical experiments


For investigating the feasibility and efficiency of the proposed GAD-
SLPR, as numerical examples, consider several multidimensional integer
knapsack problems with 50, 80, and 100 variables. In these numerical
examples, the values for coefficients c_j, a_ij, i = 1, ..., m, j = 1, ..., n
are randomly chosen from {−100, −99, ..., −1} and {0, 1, ..., 999}, and
ν_j are all set at 30. The right-hand side constants b_i are determined by
(5.2).
The numerical experiments were carried out on a personal computer
(CPU: Intel Celeron Processor 333MHz, OS: Microsoft Windows 2000,
C compiler: Microsoft Visual C++ 6.0).
For comparison, the direct generalization of the GADS without using
linear programming relaxation is used for solving the same problems.
Also, in order to compare the obtained results with the corresponding
exact optimal solutions or incumbent values, the same problems are
solved using the software LP_SOLVE by M. Berkelaar.¹
In these numerical experiments, GADSLPR and GADS are applied 10
times to every problem, where the following parameter values are used
in both genetic algorithms: the population size N = 100, the generation
gap G = 0.9, the probability of crossover Pc = 0.9, the probability
of mutation Pm = 0.05, the probability of inversion Pi = 0.03, the
minimal search generation I_min = 100, the maximal search generation
I_max (> I_min) = 500, the scaling constant c_mult = 2.0, the convergence
criterion ε = 0.01, and the degree of use of information about solutions to
linear programming relaxation problems R = 0.9. Furthermore, in the
proposed method, the parameters σ and τ are set at 2.0 and 3.0, respectively,
after several preliminary trials.
First, the proposed GADSLPR, GADS, and LP_SOLVE are applied
to two problems with 50 variables and 20 constraints (n = 50, m = 20),
where the values of γ are set at 5.0 (tight) and 10.0 (relatively loose),
respectively. Results for these problems are shown in Table 5.2. In Table
5.2, best, average, worst, time, and AG represent the best value,
average value, worst value, average computation time, and average generation
for obtaining the best value of 10 trials, respectively. Concerning
LP_SOLVE, optimal and incumbent represent the kind of obtained solution,
and time represents the computation time. In these experiments,
for γ = 5.0, the proposed GADSLPR obtains more accurate results
than both GADS and LP_SOLVE. In time, GADSLPR needs only about

¹ LP_SOLVE [23] solves (mixed integer) linear programming problems. The implementation
of the simplex kernel was mainly based on the text by Orchard-Hays [125].
The mixed integer branch and bound part was inspired by Dakin [37].

0.5% of the computation time of LP_SOLVE. Although the computation
time of GADSLPR is almost equal to that of GADS, considering that
the AG of GADSLPR is much smaller than that of GADS, it is supposed
that the effective computation time of GADSLPR is substantially shorter than
that of GADS; in other words, GADSLPR can search for solutions more
effectively than GADS can. For γ = 10.0, fortunately, an exact optimal
solution is found by LP_SOLVE. The proposed GADSLPR can also obtain
the exact optimal solution in slightly more time than LP_SOLVE,
whereas GADS cannot. This means that the information about
solutions to the linear programming relaxation problem plays a key role
in the search.

Table 5.2. Experimental results for 50 variables and 20 constraints (10 trials)

γ     Methods     Best      Average     Worst     Time (sec)    AG
5.0   GADSLPR    -21158    -21134.3    -21116    5.79 × 10^1   197.3
      GADS       -20887    -20653.4    -20423    5.77 × 10^1   418.3
      LP_SOLVE   -21045 (incumbent)              1.08 × 10^4   -
10.0  GADSLPR    -38943    -38915.8    -38943    5.85 × 10^1   295.3
      GADS       -37564    -36826.1    -35772    5.86 × 10^1   464.1
      LP_SOLVE   -38943 (optimal)                4.34 × 10^1   -

In addition, we also apply GADSLPR, GADS, and LP_SOLVE to two
problems with 80 variables and 25 constraints (n = 80, m = 25) and to two
problems with 100 variables and 30 constraints (n = 100, m = 30), where
the values of γ are set at 5.0 and 10.0. Tables 5.3 and 5.4 show the results for
these problems, and it can be seen from these results that GADSLPR obtains
much better approximate optimal solutions than both GADS and
LP_SOLVE do in about 1 to 2% of the computation time of LP_SOLVE.

Table 5.3. Experimental results for 80 variables and 25 constraints (10 trials)

γ     Methods     Best      Average     Worst     Time (sec)    AG
5.0   GADSLPR    -34548    -34529.9    -34508    1.16 × 10^2   318.5
      GADS       -33271    -32815.2    -32023    1.15 × 10^2   471.2
      LP_SOLVE   -33539 (incumbent)              1.08 × 10^4   -
10.0  GADSLPR    -64405    -64380.0    -64346    1.18 × 10^2   291.2
      GADS       -50230    -58767.3    -57356    1.16 × 10^2   441.7
      LP_SOLVE   -64284 (incumbent)              1.08 × 10^4   -

Table 5.4. Experimental results for 100 variables and 30 constraints (10 trials)

γ     Methods     Best      Average     Worst     Time (sec)    AG
5.0   GADSLPR    -46373    -46290.1    -46152    1.76 × 10^2   192.9
      GADS       -43785    -42872.6    -41439    1.74 × 10^2   449.5
      LP_SOLVE   -44881 (incumbent)              1.08 × 10^4   -
10.0  GADSLPR    -86068    -86040.9    -85894    1.75 × 10^2   119.6
      GADS       -78362    -75947.4    -72933    1.73 × 10^2   434.5
      LP_SOLVE   -85715 (incumbent)              1.08 × 10^4   -

From the results in Tables 5.2, 5.3, and 5.4, we can conclude that
GADSLPR is considerably effective as an approximate solution method
for multidimensional integer knapsack problems, because it can obtain
more accurate approximate optimal solutions in much (substantially)
shorter computation time than GADS and LP_SOLVE can in most cases,
and the information about solutions to linear programming relaxation
problems is indispensable for an efficient search.

5.3 Integer programming


5.3.1 Problem formulation
In general, an integer programming problem is formulated as

minimize   cx
subject to Ax ≤ b                                        (5.7)
           x_j ∈ {0, ..., ν_j},  j = 1, ..., n

where c = (c_1, ..., c_n) is an n-dimensional row vector; x = (x_1, ..., x_n)^T
is an n-dimensional column vector of integer decision variables; A =
[a_ij], i = 1, ..., m, j = 1, ..., n, is an m × n coefficient matrix; b =
(b_1, ..., b_m)^T is an m-dimensional column vector; and ν_j, j = 1, ..., n
are nonnegative integers.
For such integer programming problems (5.7), as discussed in the
previous section, Sakawa et al. focused on the knapsack type, in which
all of the elements of A and b are nonnegative, and proposed GADS using
linear programming relaxation [145, 146].
Unfortunately, however, the GADSLPR proposed by Sakawa et al.
[145, 146] cannot be directly applied to more general integer programming
problems in which not only positive elements but also negative
elements of A and b exist.
In this section, we extend the GADSLPR to be applicable to more
general integer programming problems with A ∈ R^{m×n} and b ∈ R^m.

5.3.2 Genetic algorithms with double strings based


on reference solution updating
5.3.2.1 Coding and decoding
In GADS for multidimensional integer knapsack problems introduced
in the previous section, each individual is decoded to the corresponding
feasible solution by a decoding algorithm. Unfortunately, however, it
should be noted here that this decoding algorithm does not work well
for more general integer programming problems involving positive and
negative coefficients in both sides of the constraints.
In order to overcome such defects of the original decoding algorithm,
by introducing a reference solution, we propose a new decoding algorithm
for double strings that is applicable to more general integer programming
problems with positive and negative coefficients in the constraints.
Considering that x = 0 is always a feasible solution for multidimensional
integer knapsack problems, it is significant that this decoding algorithm
enables us to decode each individual to the corresponding feasible solution
by determining the values of x_{s(j)} depending on the value of g_{s(j)} and
on whether the constraints are satisfied. Unfortunately, however, this
decoding algorithm cannot be applied directly to integer programming
problems with negative elements in b.
Realizing this difficulty, by introducing a reference solution, we pro-
pose a new decoding algorithm for double strings that is applicable to
more general integer programming problems with positive and negative
coefficients in the constraints.
For that purpose, a feasible solution x^0 for generating a reference solution
by some method is required. One possible way to obtain a feasible
solution to the integer programming problem (5.7) is to maximize an
exponential function g(x) for the violation of the constraints, defined by

(5.8)

where a_i, i = 1, ..., m, denotes the n-dimensional ith row vector of the
coefficient matrix A; J_ai^+ = {j | a_ij ≥ 0, 1 ≤ j ≤ n}; J_ai^− = {j | a_ij <
0, 1 ≤ j ≤ n}; Σ_{j∈J_ai^+} a_ij and Σ_{j∈J_ai^−} a_ij are the maximum and minimum
of a_i x, respectively; θ is a positive parameter to adjust the severity of
the violation of the constraints; and

R(ξ) = ξ for ξ ≥ 0, and R(ξ) = 0 for ξ < 0.              (5.9)

Namely, for obtaining a feasible solution x^0, solve the maximization problem

maximize g(x) ≡ g(x_1, ..., x_n),  x_j ∈ {0, 1, ..., ν_j}, j = 1, ..., n   (5.10)

through GADS (without using the decoding algorithm) by regarding the
fitness function of an individual s as f(s) = g(x).
If any feasible solution cannot be located for a prescribed number of
times, it is concluded that the original integer programming problem
(5.7) has no feasible solutions.
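The exact expression (5.8) is not reproduced above; one plausible stand-in consistent with the surrounding description (an exponential function of normalized constraint violations with severity parameter θ) is sketched below. The normalization of each violation by the row's spread is an assumption, not the formula of the text.

    import numpy as np

    def violation_fitness(x, A, b, nu, theta=5.0):
        # Assumed instance in the spirit of (5.8): each violation
        # R(a_i x - b_i), with R as in (5.9), is normalized by the spread
        # of a_i x over the box 0 <= x <= nu and fed through exp(-theta*.).
        viol = np.maximum(A @ x - b, 0.0)                 # R from (5.9)
        hi = (np.where(A > 0, A, 0) * nu).sum(axis=1)     # max of a_i x
        lo = (np.where(A < 0, A, 0) * nu).sum(axis=1)     # min of a_i x
        scale = np.maximum(hi - lo, 1.0)                  # avoid division by 0
        return float(np.exp(-theta * (viol / scale).sum()))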
By using the feasible solution x^0 as the reference solution, the following
decoding algorithm using a reference solution can be proposed. In the
following decoding algorithm, b⁺ denotes the column vector of positive
right-hand side constants, and the corresponding coefficient matrix is
denoted by A⁺ = (p_1⁺, ..., p_n⁺). Also, g_{s(j)} and x*_{s(j)}, j = 1, ..., n,
denote the values of the variables of an individual and of the reference solution,
respectively.

Decoding algorithm using reference solution

Step 1: Let j := 1 and psum := 0.

Step 2: If g_{s(j)} = 0, set q_{s(j)} := 0 and j := j + 1, and go to step 4. If
g_{s(j)} ≠ 0, go to step 3.

Step 3: If psum + p⁺_{s(j)} · g_{s(j)} ≤ b⁺, set q_{s(j)} := g_{s(j)}, psum := psum +
p⁺_{s(j)} · g_{s(j)} and j := j + 1, and go to step 4. Otherwise, set q_{s(j)} := 0
and j := j + 1, and go to step 4.

Step 4: If j > n, go to step 5. Otherwise, return to step 2.

Step 5: Let j := 1, l := 0 and sum := 0.

Step 6: If g_{s(j)} = 0, set j := j + 1 and go to step 8. If g_{s(j)} ≠ 0, set
sum := sum + p_{s(j)} · g_{s(j)} and go to step 7.

Step 7: If sum ≤ b, set l := j and j := j + 1, and go to step 8.
Otherwise, set j := j + 1 and go to step 8.

Step 8: If j > n, go to step 9. Otherwise, return to step 6.

Step 9: If 1 > 0, go to step 10. Otherwise go to step II.

Step 10: For xs(j) satisfying 1 ::; j ::; I, let xs(j) . - gs(j)' For xs(j)
satisfying l + 1 ::; j ::; n, let xs(j) := 0, and stop.
Step 11: Let sum := Lk=l Ps(k) . x;(k) and j := 1.

Step 12: If gs(j) = x:(j)' let xs(j) := gs(j) and j := j + 1, and go to step
16. If gs(j) =f. x;(j)' go to step 13.

Step 13: If sum - Ps(j) . x;(j) + Ps(j) . gs(j)


::; b, set sum := sum - Ps(j) .
x;(j) + Ps(j) . gs(j) and Xs(j) := gs(j)' and go to step 16. Otherwise, go
to step 14.

Step 14: Let ts(j) : = lO.5 . (x;(j) + gs(j) J and go to step 15.

Step 15: If sum - Ps(j) . x;(j) + Ps(j) . ts(j)


set sum := sum - Ps(j) .
::; b,
x;(j) + Ps(j) . ts(j), gs(j) := ts(j) and xs(j) := ts(j)' and go to step 16.
Otherwise, set xs(j) := x;(J) and go to step 16.

Step 16: If j > n, stop. Otherwise, return to step 12.
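The repair pass of steps 11 through 16 may be sketched in Python as follows (the feasibility pre-checks of steps 1 through 10 are omitted); starting from the feasible reference solution, each component is moved toward its gene value only when feasibility is preserved, with the midpoint of step 14 as a fallback.

    import numpy as np

    def repair_with_reference(s, g, A, b, xref):
        # Steps 11-16: x starts at the reference solution; try g[j], then
        # the midpoint floor((x*_{s(j)} + g_{s(j)})/2), else keep x*_{s(j)}.
        x = xref.copy()
        sums = A @ x                                  # the sum of step 11
        for j in range(len(s)):
            v = int(s[j])
            for cand in (int(g[j]), (int(xref[v]) + int(g[j])) // 2):
                new = sums + A[:, v] * (cand - x[v])
                if np.all(new <= b):                  # steps 13 and 15
                    sums, x[v] = new, cand
                    break
        return x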

As can be seen from the previous discussions, for general integer programming
problems involving positive and negative coefficients in the
constraints, this newly developed decoding algorithm enables us to decode
each individual represented by a double string to the corresponding
feasible solution. However, the diversity of the phenotypes x greatly
depends on the reference solution used in the preceding decoding algorithm.
To overcome such situations, we propose the following reference
solution updating procedure, in which the current reference solution is
updated by another feasible solution if the diversity of the phenotypes seems
to be lost. To do so, for every generation, check the dependence on the
reference solution through the calculation of the mean of the L1 distance
between all individuals of phenotypes and the reference solution,
and when the dependence on the reference solution is strong, replace the
reference solution by the individual of phenotype having the maximum L1
distance.
Let N, x*, η (< 1.0), and x^r denote the number of individuals, the
reference solution, a parameter for reference solution updating, and a
feasible solution decoded by the rth individual, respectively; then the
reference solution updating procedure can be described as follows:

The reference solution updating procedure

Step 1: Set r := 1, r_max := 1, d_max := 0, and d_sum := 0.

Step 2: Calculate d_r = Σ_{j=1}^{n} |x^r_{s(j)} − x*_{s(j)}| and let d_sum := d_sum + d_r. If
d_r > d_max and cx^r < cx*, let d_max := d_r, r_max := r, and r := r + 1,
and go to step 3. Otherwise, let r := r + 1 and go to step 3.

Step 3: If r > N, go to step 4. Otherwise, return to step 2.

Step 4: If d_sum / (N · Σ_{j=1}^{n} ν_j) < η, then update the reference solution
as x* := x^{r_max}, and stop. Otherwise, stop without updating the
reference solution.
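A Python sketch of this updating test, assuming the decoded phenotypes of the current population are collected in pop_x, is given below; requiring the replacing phenotype to also improve cx mirrors the condition of step 2.

    import numpy as np

    def update_reference(pop_x, xref, c, nu, eta):
        # Replace xref by the most distant phenotype (in L1 distance) that
        # improves cx, when the mean normalized distance falls below eta.
        dists = [np.abs(x - xref).sum() for x in pop_x]
        best, d_max = None, 0
        for x, d in zip(pop_x, dists):
            if d > d_max and c @ x < c @ xref:
                best, d_max = x, d
        if sum(dists) / (len(pop_x) * nu.sum()) < eta and best is not None:
            return best                 # new reference solution
        return xref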
It should be observed here that when the constraints of the problem
are strict, a possibility exists that all of the individuals are decoded into
the neighborhood of the reference solution. To avoid such a possibility,
in addition to the reference solution updating procedure, after every P
generations the reference solution is replaced by the feasible solution
obtained by solving the maximization problem (5.10) through GADS
(without using the decoding algorithm).

5.3.3 Generation of initial population


The procedure of generation of the initial population in GADSLPRRSU is
adopted in a manner quite similar to that of GADSLPR.

5.3.4 Fitness and scaling


Two kinds of fitness functions, f_1(s) and f_2(s), are defined as

(5.11)

(5.12)

where J_c^+ = {j | c_j ≥ 0, 1 ≤ j ≤ n}, J_c^− = {j | c_j < 0, 1 ≤ j ≤ n},
and the last term of f_1(s) is added so that the smaller the difference between
the genotype g and the phenotype x, the larger the corresponding
fitness becomes. Observe that f_1(s) and f_2(s) indicate the goodness of
the phenotype x of an individual s and that of the phenotype q of an
individual s, respectively. Using these two kinds of fitness functions,
we attempt to reduce the dependence on the reference solution. For these two
kinds of fitness functions, the linear scaling technique is used.

5.3.5 Genetic operators


As a reproduction operator, elitist expected value selection, which is
the combination of expected value selection and elitist preserving selection,
is adopted by using the two kinds of fitness functions defined by
(5.11) and (5.12). To be more explicit, the expected value selection is
modified as in (5.13) by introducing a parameter λ (0.5 < λ < 1),
where Nλ individuals in the population are reproduced on the basis of
f_1 and N(1 − λ) individuals are reproduced on the basis of f_2. Also,
elitist preserving selection is adopted on the basis of f_1.
Quite similar to GADSLPR, PMX for double strings is adopted, and a
mutation operator and an inversion operator are used.

5.3.6 Numerical experiments


In order to show the effectiveness of the proposed GADSLPR based
on reference solution updating (GADSLPRRSU), we apply GADSLPRRSU
and LP_SOLVE [23] to several single-objective integer programming
problems involving positive and negative coefficients with 50, 80,
and 100 variables and compare the results of the two methods. In these
experiments, the values of the coefficients c_j, a_ij, i = 1, ..., m, j = 1, ..., n
are randomly chosen from {−500, −499, ..., 499}. The right-hand side constants
b_i, i = 1, ..., m are defined as

(5.14)

where β = max_{j=1,...,n} ν_j and a positive constant γ denotes the degree
of strictness of the constraints.
In these numerical experiments, GADSLPRRSU is applied 10 times
to every problem, in which the following parameter values are used:
the population size N = 100, the generation
gap G = 0.9, the probability of crossover Pc = 0.9, the probability of
mutation Pm = 0.05, the probability of inversion Pi = 0.03, the minimal
search generation I_min = 100, the maximal search generation I_max (>
I_min) = 1000, the scaling constant c_mult = 2.0, the convergence criterion
ε = 0.001, the degree of use of information about solutions to linear
programming relaxation problems R = 0.9, a parameter for reproduction

λ = 0.9, and a parameter for reference solution updating η = 0.05.
Furthermore, in the proposed method, the parameters σ and τ are set at
2.0 and 3.0, respectively, after several preliminary trials.
The experimental results for an integer programming problem with
50 variables (n = 50) and 20 constraints (m = 20) are shown in Table
5.5, where the values of γ are set at 0.50 (tight) and 0.55 (relatively
loose). For GADSLPRRSU, the best objective function value, the average
objective function value, the worst objective function value, and the average
processing time of 10 trials are exhibited. On the other hand, for
LP_SOLVE, the obtained objective function value and the processing
time are given. For γ = 0.50, GADSLPRRSU obtains better approximate
solutions than the incumbent solutions of LP_SOLVE in a much
shorter time. For γ = 0.55, GADSLPRRSU
finds highly accurate approximate solutions in quite a short time (about
40 seconds), whereas LP_SOLVE takes a much longer time (about 1100 seconds)
to obtain an optimal solution. These results imply that GADSLPRRSU
can be used as a fast approximate solution method for general integer
programming problems.

Table 5.5. Experimental results for 50 variables and 20 constraints (10 trials)

γ     Methods        Best       Average      Worst     Time (sec)    AG
0.50  GADSLPRRSU   -128838    -127953.8    -127294    7.40 × 10^1   412.3
      LP_SOLVE     -106973 (incumbent)                1.08 × 10^4   -
0.55  GADSLPRRSU   -152968    -152796.9    -152703    6.00 × 10^1   553.7
      LP_SOLVE     -153053 (optimal)                  1.10 × 10^3   -

In Table 5.6, the results for an integer programming problem with 80
variables (n = 80), 25 constraints (m = 25), and degrees of strictness
of the constraints γ = 0.50, 0.55 are shown. Furthermore, in Table 5.7, the
results for an integer programming problem with 100 variables (n = 100),
30 constraints (m = 30), and degrees of strictness of the constraints
γ = 0.50, 0.55 are shown. In both cases, we can observe results similar
to those in Table 5.5.
From these numerical experiments, it is suggested that GADSLPRRSU
is an effective and promising approximate solution method for general
integer programming problems.

5.4 Conclusion
In this chapter, GADS for multidimensional 0-1 knapsack problems
have been extended to deal with multidimensional integer knapsack

Table 5.6. Experimental results for 80 variables and 25 constraints (10 trials)

γ     Methods        Best       Average      Worst     Time (sec)    AG
0.50  GADSLPRRSU   -202322    -201471.0    -200610    1.86 × 10^2   546.8
      LP_SOLVE     -167052 (incumbent)                1.08 × 10^4   -
0.55  GADSLPRRSU   -246605    -245390.6    -242758    9.49 × 10^1   542.4
      LP_SOLVE     -247137 (optimal)                  1.94 × 10^3   -

Table 5.7. Experimental results for 100 variables and 30 constraints (10 trials)

γ     Methods        Best       Average      Worst     Time (sec)    AG
0.50  GADSLPRRSU   -359483    -357547.6    -353851    2.45 × 10^2   422.1
      LP_SOLVE     -354704 (incumbent)                1.08 × 10^4   -
0.55  GADSLPRRSU   -380573    -379438.5    -377287    1.35 × 10^2   576.0
      LP_SOLVE     -381085 (optimal)                  1.10 × 10^3   -

problems. Information about solutions of linear programming relaxation
problems is especially incorporated for improving the accuracy of
solutions and the processing time, and the computational efficiency and
effectiveness of the resulting methods are examined through numerical experiments.
Then, to deal with more general integer programming problems involving positive
and negative coefficients in both sides of the constraints, a new decoding
algorithm for double strings using the reference solution updating
procedure is introduced so that each individual is decoded to
the corresponding feasible solution of the general integer programming
problems. From computational results for several numerical examples,
the efficiency and effectiveness of the proposed methods are examined.
Chapter 6

FUZZY MULTIOBJECTIVE INTEGER


PROGRAMMING

This chapter can be viewed as the fuzzy multiobjective version of
Chapter 5 and is devoted to an integer generalization along the same
lines as in Chapter 4. Through the use of genetic algorithms with double
strings (GADS), considerable effort is devoted to the development
of fuzzy multiobjective integer programming as well as fuzzy multiobjective
integer programming with fuzzy numbers, together with several
numerical experiments.

6.1 Introduction

In the late 1990s, Sakawa et al. [164] formulated multiobjective mul-


tidimensional integer knapsack problems by assuming that the decision
maker may have a fuzzy goal for each of the objective functions. Through
the introduction of GADS using linear programming relaxation, they
proposed an interactive fuzzy satisficing method to derive a satisficing
solution for the decision maker for multiobjective multidimensional in-
teger knapsack problems [164]. Furthermore, Sakawa et al. [140, 146]
extended GADS based on reference solution updating for 0-1 program-
ming problems to deal with integer programming problems. Through
the use of the GADS, they developed interactive fuzzy multiobjective
integer programming as well as interactive fuzzy multiobjective integer
programming with fuzzy numbers. In this chapter, these interactive
fuzzy multiobjective integer programming methods for deriving a sat-
isficing solution for the decision maker efficiently are presented as an
integer generalization of Chapter 4.

6.2 Fuzzy multiobjective integer programming


6.2.1 Problem formulation and solution concept
In general, a multiobjective integer programming problem with k con-
flicting objective functions CiX, i = 1, ... , k, is formulated as

minimize   (c_1 x, c_2 x, ..., c_k x)^T
subject to Ax ≤ b                                        (6.1)
           x_j ∈ {0, ..., ν_j},  j = 1, ..., n

where c_i = (c_i1, ..., c_in), i = 1, ..., k, are n-dimensional row vectors;
x = (x_1, ..., x_n)^T is an n-dimensional column vector of integer decision
variables; A = [a_ij], i = 1, ..., m, j = 1, ..., n, is an m × n coefficient
matrix; b = (b_1, ..., b_m)^T is an m-dimensional column vector; and ν_j,
j = 1, ..., n are nonnegative integers.
It should be noted here that, in the multiobjective integer programming
problem (6.1), when each element of A and b is assumed to be
nonnegative, the problem (6.1) can be viewed as a multiobjective
multidimensional integer knapsack problem.
In general, however, for multiobjective programming problems, a complete
optimal solution that simultaneously minimizes all of the multiple
objective functions does not always exist when the objective functions
conflict with each other. Thus, instead of a complete optimal solution,
a new solution concept, called Pareto optimality, is introduced in multiobjective
integer programming, as discussed in Chapter 4.

6.2.2 Interactive fuzzy multiobjective programming


For such a multiobjective integer programming problem (6.1), consid-
ering the vague or fuzzy nature of human judgments, it is quite natural
to assume that the decision maker (DM) may have a fuzzy goal for each
of the objective functions c_i x. In a minimization problem, the goal
stated by the DM may be to achieve "substantially less than or equal
to some value a_i" [135, 170, 176, 184, 225, 227]. Such a fuzzy goal of
the DM can be quantified by eliciting the corresponding membership
function through interaction with the DM. Here, for simplicity, the
linear membership function

μ_i(c_i x) = 0 if c_i x ≥ z_i^0;  (c_i x − z_i^0)/(z_i^1 − z_i^0) if z_i^0 > c_i x > z_i^1;  1 if c_i x ≤ z_i^1   (6.2)

is assumed for representing the fuzzy goal of the DM [135, 225, 228],
where z_i^0 and z_i^1 denote the values of the objective function c_i x whose
degrees of membership are 0 and 1, respectively. These values
are subjectively determined through an interaction with the DM. Figure
6.1 illustrates the graph of the possible shape of the linear membership
function.


Figure 6.1. Linear membership function for fuzzy goal
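Evaluating this membership function is straightforward; a small Python sketch (with z0 = z_i^0 and z1 = z_i^1 for a minimized objective, so z1 < z0) follows.

    def linear_membership(z, z0, z1):
        # Linear membership of Figure 6.1: 0 at the unacceptable level z0,
        # 1 at the fully satisfactory level z1, linear in between.
        if z >= z0:
            return 0.0
        if z <= z1:
            return 1.0
        return (z - z0) / (z1 - z0)

    # For example, with z0 = 0.0 and z1 = -60000.0 as in Table 6.3,
    # linear_membership(-30000.0, 0.0, -60000.0) evaluates to 0.5.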

As one of the possible ways to help the DM determine z_i^0 and z_i^1, it is
convenient to calculate the individual minimum z_i^min = min_{x∈X} c_i x and
maximum z_i^max = max_{x∈X} c_i x of each objective function under the given
constrained set, where X is the constrained set of the multiobjective
integer programming problem (6.1), namely,

X = {x | Ax ≤ b, x_j ∈ {0, ..., ν_j}, j = 1, ..., n}.     (6.3)

Then, by taking account of the calculated individual minimum and maximum
of each objective function, the DM is asked to assess z_i^0 and z_i^1 in
the closed interval [z_i^min, z_i^max], i = 1, ..., k.
Having elicited the linear membership functions μ_i(c_i x), i = 1, ..., k,
from the DM for each of the objective functions c_i x, i = 1, ..., k, if we
introduce a general aggregation function

μ_D(x) = μ_D(μ_1(c_1 x), μ_2(c_2 x), ..., μ_k(c_k x)),     (6.4)

a fuzzy multiobjective decision-making problem can be defined by

maximize_{x∈X} μ_D(x).                                   (6.5)

Observe that the value of the aggregation function μ_D(x) can be interpreted
as representing an overall degree of satisfaction with the DM's
multiple fuzzy goals [135, 170, 176, 184, 225]. If we adopt the well-known
fuzzy decision of Bellman and Zadeh or minimum operator as
the aggregation function

μ_D(x) = min_{i=1,...,k} μ_i(c_i x),                     (6.6)

the multiobjective integer programming problem (6.1) can be interpreted
as

maximize   min_{i=1,...,k} μ_i(c_i x)
subject to Ax ≤ b                                        (6.7)
           x_j ∈ {0, ..., ν_j},  j = 1, ..., n

As will be seen later, with an appropriate fitness function, genetic


algorithms are applicable for solving this problem.
As discussed in the previous subsection, in the conventional fuzzy
approaches, it has been implicitly assumed that the fuzzy decision or
the minimum operator of Bellman and Zadeh [22] is the proper representation
of the DM's fuzzy preferences, and, hence, the multiobjective
integer programming problem (6.1) has been interpreted as (6.7).
However, it should be emphasized here that this approach is preferable
only when the DM feels that the minimum operator is appropriate.
In other words, in general decision situations, the DM does not always
use the minimum operator when combining the fuzzy goals and/or constraints.
Probably the most crucial problem in (6.5) is the identification
of an appropriate aggregation function that well represents the DM's
fuzzy preferences. If μ_D(·) can be explicitly identified, then (6.5) reduces
to a standard mathematical programming problem. However, this
rarely happens, and, as an alternative, an interaction with the DM is
necessary for finding the satisficing solution of (6.5).
To generate a candidate for the satisficing solution that is also Pareto
optimal, in interactive fuzzy multiobjective integer programming, the
DM is asked to specify the value of the corresponding membership function
for each fuzzy goal, called the reference membership levels. Observe
that the idea of the reference membership levels [135] can be viewed
as an obvious extension of the idea of the reference point of Wierzbicki
[215, 216]. For the DM's reference membership levels μ̄_i, i = 1, ..., k, the
corresponding Pareto optimal solution, which is nearest to the requirement
or better than it if the reference membership levels are attainable in
the minimax sense, is obtained by solving the following minimax problem
in a membership function space [135]:

minimize   max_{i=1,...,k} {μ̄_i − μ_i(c_i x)}
subject to Ax ≤ b                                        (6.8)
           x_j ∈ {0, ..., ν_j},  j = 1, ..., n

It must be noted here that, for generating Pareto optimal solutions by
solving the minimax problem, if the uniqueness of the optimal solution
is not guaranteed, it is necessary to perform the Pareto optimality test.
To circumvent the necessity of performing the Pareto optimality test in
the minimax problems, it is reasonable to use the following augmented
minimax problem instead of the minimax problem (6.8) [135]:

minimize   max_{i=1,...,k} {(μ̄_i − μ_i(c_i x)) + ρ Σ_{i=1}^{k} (μ̄_i − μ_i(c_i x))}
subject to Ax ≤ b                                        (6.9)
           x_j ∈ {0, ..., ν_j},  j = 1, ..., n

The term augmented is adopted because the term ρ Σ_{i=1}^{k} (μ̄_i − μ_i(c_i x)) is
added to the standard minimax problem, where ρ is a sufficiently small
positive number.
It is significant to note here that this problem preserves the linearities
of the constraints. For this problem, it is quite natural to define the
fitness function by

f(s) = 1 − max_{i=1,...,k} {(μ̄_i − μ_i(c_i x)) + ρ Σ_{i=1}^{k} (μ̄_i − μ_i(c_i x))},   (6.10)

where s denotes an individual represented by a double string and x is
the phenotype of s.
With this observation in mind, the augmented minimax problem (6.9)
can be effectively solved through GADS using linear programming relaxation
(GADSLPR) or GADSLPR based on reference solution updating
(GADSLPRRSU), introduced earlier.
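For instance, under the natural normalization assumed in (6.10), the fitness of a decoded phenotype can be evaluated as in the following sketch; mu_hat holds the reference membership levels and mu_vals the attained values μ_i(c_i x).

    def augmented_minimax_fitness(mu_hat, mu_vals, rho=0.005):
        # Fitness for the augmented minimax problem (6.9): one minus the
        # augmented maximum deviation from the reference levels.
        devs = [mh - mv for mh, mv in zip(mu_hat, mu_vals)]
        return 1.0 - (max(devs) + rho * sum(devs))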
Incorporating GADSLPR or GADSLPRRSU [138, 148], introduced in
the previous chapter, into the interactive fuzzy satisficing methods [135,
170, 176, 184], both of which were proposed by the authors, it becomes
possible to introduce the following interactive algorithm [144, 147, 160,
165] for deriving a satisficing solution for the DM for multiobjective integer
programming problems. The steps marked with an asterisk involve
interaction with the DM.

Interactive fuzzy multiobjective integer programming


Step 0: Calculate the individual minimum and the maximum of each
objective function under the given constraints.
Step 1*: Elicit a membership function from the DM for each of the
objective functions by taking account of the calculated individual

minimum and the maximum of each objective function. Then, ask the
DM to select the initial reference membership levels μ̄_i, i = 1, ..., k
(if it is difficult to determine these values, set them to 1).

Step 2: For the reference membership values specified by the DM, solve
the corresponding augmented minimax problem.

Step 3*: If the DM is satisfied with the current values of the membership
functions and/or objective functions given by the current
best solution, stop. Otherwise, ask the DM to update the reference
membership levels by taking into account the current values of the
membership functions and/or objective functions, and return to step 2.

It must be observed here that, in this interactive algorithm, GADSLPR
or GADSLPRRSU is used mainly in step 2. However, observe that,
in step 0, for calculating z_i^min and z_i^max, i = 1, ..., k, GADSLPR or
GADSLPRRSU can also be used.

6.2.3 Numerical experiments


To illustrate the proposed method, the following numerical exam-
ples were considered. The numerical experiments were performed on a
personal computer (processor: Celeron 333MHz, memory: 128MB, OS:
Windows 2000), and a Visual C++ compiler (version 6.0) was used.

6.2.3.1 Multiobjective integer knapsack problems


As a numerical example, consider a three-objective integer knapsack
problem with 30 integer decision variables x_j ∈ {0, ..., 10}, j = 1, ..., 30
and 10 constraints.
The coefficients involved in this numerical example are randomly generated
in the following way. Coefficients c_1j are randomly chosen from
the interval [-1000, 0). Coefficients c_2j are randomly chosen from
the interval (0, 1000]. Coefficients c_3j are randomly chosen from
the interval [-500, 500). Coefficients a_ij are randomly chosen
from the interval [0, 1000). On the basis of these a_ij values, using
a positive constant γ, which denotes the degree of strictness of the
constraints, each element b_i, i = 1, ..., m, is determined by

b_i = γ · Σ_{j=1}^{n} a_ij,  i = 1, ..., m,               (6.11)

where the value γ = 2.5 is adopted in this example.


As a numerical example generated in this way, we use the coefficients
as shown in Table 6.1.

Table 6.1. Values of coefficients

c1 -566 -589 -791 -438 -381 -604 -126 -442 -646 -525
-914 -602 -398 -206 -976 -877 -271 -186 -500 -392
-927 -94 -525 -905 -405 -971 -105 -553 -565 -740
c2 225 612 975 836 837 644 140 718 2 160
956 650 457 182 466 732 198 929 702 161
601 443 505 869 754 941 808 312 197 613
c3 -299 262 391 170 221 243 -202 -413 -4 328
85 -45 6 67 464 479 -8 38 -359 94
458 21 400 -308 -163 302 316 490 96 -472
a1 597 453 763 487 657 362 827 171 586 95
464 74 134 591 748 617 772 161 343 598
925 3 350 978 432 710 4 40 727 833
a2 825 298 475 579 357 668 816 522 677 82
347 700 92 567 986 947 841 620 887 389
62 316 193 271 12 38 689 917 992 837
a3 323 93 693 716 670 480 125 954 117 466
352 344 568 401 707 753 529 579 975 615
554 305 964 964 652 222 423 437 320 947
a4 159 604 567 187 677 634 60 659 323 305
163 711 957 542 757 815 291 363 950 476
281 480 905 282 705 257 43 504 309 2
a5 84 963 701 564 629 638 416 482 778 2
180 47 541 262 858 533 821 998 108 784
30 26 887 359 712 814 535 505 850 27
a6 596 302 965 460 325 196 330 127 307 419
622 213 187 100 893 207 856 93 631 958
349 312 93 232 570 55 572 776 823 907
a7 799 989 192 966 937 556 437 205 874 162
146 754 925 777 585 602 135 566 115 154
926 470 816 958 964 806 743 19 125 579
a8 434 171 361 460 432 289 830 709 745 926
927 82 42 685 761 741 230 561 887 472
659 659 696 941 642 488 296 992 435 337
a9 428 155 920 677 469 839 391 172 934 598
351 796 41 412 290 372 542 583 257 947
311 176 581 315 874 368 864 954 178 309
a10 721 173 422 984 28 990 392 503 202 651
891 761 719 229 129 256 427 551 789 971
274 915 193 823 522 517 271 748 272 861
b 36255.0 40005.0 40620.0 34920.0 37835.0 33690.0 43205.0 42225.0 37760.0 40462.5

The parameter values of the proposed GADSLPR are set as population
size N = 100, generation gap G = 0.9, probability of crossover p_c = 0.9,
probability of mutation p_m = 0.05, probability of inversion p_i = 0.03,
minimal search generation I_min = 100, maximal search generation

I_max = 2000, ε = 0.02, c_mult = 1.8, α = 2.0, p = 3.0, and R = 0.9.
The coefficient ρ of the augmented minimax problem is set as 0.005.
First, the individual minimum z_i^min and maximum z_i^max of each of the
objective functions z_i(x) = c_i x, i = 1, 2, 3, are calculated by GADSLPR,
as shown in Table 6.2.

Table 6.2. Individual minimum and maximum of each of the objective functions

        Minimum (z_i^min)    Maximum (z_i^max)

c1x     -58091.0             0.0
c2x     0.0                  63460.0
c3x     -19573.0             27770.0

By taking account of these values, the DM subjectively determined


linear membership functions μ_i(c_i x), i = 1, 2, 3, as shown in Table 6.3.

Table 6.3. Membership functions for objective functions

           z_i^1        z_i^0
μ1(c1x)    -60000.0     0.0
μ2(c2x)    0.0          65000.0
μ3(c3x)    -20000.0     30000.0

Having determined the linear membership functions in this way, for


this numerical example, at each interaction with the DM, the corre-
sponding augmented minimax problem (6.9) was solved through 10 trials
of the GADSLPR, as shown in Table 6.4.
The augmented minimax problem (6.9) is solved by the GADSLPR
for the initial reference membership levels (1.00,1.00,1.00), which can
be viewed as the ideal values, and the DM is supplied with the corre-
sponding membership function values in the first interaction of Table
6.4. On the basis of such information, because the DM is not satis-
fied with the current membership function values, the DM updates the
reference membership values as μ̄1 = 1.00, μ̄2 = 1.00, and μ̄3 = 0.80
for improving the satisfaction levels for μ1 and μ2 at the expense of
μ3. For the updated reference membership values, the corresponding
augmented minimax problem (6.9) is solved by the GADSLPR and the
corresponding membership function values are calculated as shown in
the second interaction of Table 6.4. Because the DM is not satisfied
with the current membership function values, the DM updates the ref-
erence membership values as μ̄1 = 1.00, μ̄2 = 0.90, and μ̄3 = 0.80 for

improving the satisfaction levels for μ1 and μ3 at the expense of μ2.
For the updated reference membership values, the corresponding aug-
mented minimax problem (6.9) is solved by the GADSLPR, and the
corresponding membership function values are calculated as shown in
the third interaction of Table 6.4. The same procedure continues in this
manner until the DM is satisfied with the current values of the member-
ship functions and the objective functions. In this example, a satisficing
solution for the DM is derived at the fourth interaction.

Table 6.4. Interactive processes (10 trials)

Interaction  μ1(c1x)  μ2(c2x)  μ3(c3x)  c1x  c2x  c3x  #


1st 0.674467 0.674508 0.679740 -40468.0 21157.0 -3987.0 1
(1.00,1.00,1.00) 0.674667 0.674446 0.676100 -40480.0 21161.0 -3805.0 4
0.674417 0.676862 0.675600 -40465.0 21004.0 -3780.0 5
2nd 0.704150 0.703169 0.509840 -42249.0 19294.0 4508.0 2
(1.00,1.00,0.80) 0.704467 0.702723 0.504220 -42268.0 19323.0 4789.0 5
0.703550 0.705200 0.502580 -42213.0 19162.0 4871.0 2
0.702433 0.702231 0.517340 -42146.0 19355.0 4133.0 1
3rd 0.743983 0.643846 0.544940 -44639.0 23150.0 2753.0 1
(1.00,0.90,0.80) 0.743167 0.644631 0.545200 -44590.0 23099.0 2740.0 1
0.743117 0.644477 0.543940 -44587.0 23109.0 2803.0 5
0.742950 0.645800 0.543680 -44577.0 23023.0 2816.0 1
0.742833 0.643815 0.545840 -44570.0 23152.0 2708.0 1
0.746367 0.642708 0.548360 -44782.0 23224.0 2582.0 1
4th 0.719117 0.668246 0.568820 -43147.0 21564.0 1559.0 10
(0.95,0.90,0.80)
#: Number of solutions

6.2.3.2 Multiobjective integer programming problems


Next, as a numerical example of multiobjective general integer pro-
gramming problems involving both positive and negative coefficients,
consider a three-objective general integer programming problem with 30
variables and 10 constraints.
The coefficients involved in this numerical example are randomly gen-
erated in the following way. Coefficients c_1j are randomly chosen from
the closed interval [-1000, 0). Coefficients c_2j are randomly chosen from
the closed interval (0, 1000]. Coefficients c_3j are randomly chosen from
the closed interval [-500, 500). Coefficients a_ij are randomly chosen
from the closed interval [-500, 500). On the basis of these a_ij values,
using a positive constant γ that denotes the degree of strictness of the

constraints, coefficients b_i, i = 1, ..., 10, are determined by

(6.12)

where β = max_{j=1,...,n} v_j, and the value of γ = 0.40 is adopted in this
example.
As a numerical example generated in this way, we use the coefficients
as shown in Tables 6.5 and 6.6.
The parameter values of GADSLPRRSU are set as population size
N = 100, generation gap G = 0.9, probability of crossover p_c = 0.9,
probability of mutation p_m = 0.05, probability of inversion p_i = 0.03,
minimal search generation I_min = 100, maximal search generation I_max =
2000, ε = 0.02, c_mult = 1.8, α = 2.0, p = 3.0, R = 0.9, λ = 0.9, η = 0.1,
θ = 5.0, and P = 100.
First, the individual minimum z_i^min and maximum z_i^max of each of the
objective functions z_i(x) = c_i x, i = 1, 2, 3, were calculated by GADSL-
PRRSU, as shown in Table 6.7.
By considering these values, the DM subjectively determined linear
membership functions μ_i(c_i x), i = 1, 2, 3, as shown in Table 6.8.
Having determined the linear membership functions in this way, the
augmented minimax problem (6.9) is solved by the GADSLPRRSU for
the initial reference membership levels (1.00,1.00,1.00), which can be
viewed as the ideal values, and the DM is supplied with the correspond-
ing membership function values as shown in the first interaction of Table
6.9. On the basis of such information, because the DM is not satisfied
with the current membership function values, the DM updates the ref-
erence membership values as μ̄1 = 1.00, μ̄2 = 0.70, and μ̄3 = 1.00 for
improving the satisfaction levels for μ1 and μ3 at the expense of μ2.
For the updated reference membership values, the corresponding aug-
mented minimax problem (6.9) is solved by the GADSLPRRSU and the
corresponding membership function values are calculated as shown in
the second interaction of Table 6.9. Because the DM is not satisfied
with the current membership function values, the DM updates the ref-
erence membership values as μ̄1 = 0.80, μ̄2 = 0.70, and μ̄3 = 1.00 for
improving the satisfaction levels for μ2 and μ3 at the expense of μ1. For
the updated reference membership values, the corresponding augmented
minimax problem (6.9) is solved by the GADSLPRRSU, and the cor-
responding membership function values are calculated as shown in the
third interaction of Table 6.9. The same procedure continues in this
manner until the DM is satisfied with the current values of the member-

Table 6.5. Values of coefficients c_ij and a_ij

c1 -529 -59 -629 -413 -306 -415 -608 -898 -584 -188
-167 -593 -236 -450 -599 -284 -534 -468 -195 -586
-223 -373 -393 -464 -451 -200 -55 -65 -360 -732
c2 32 37 15 794 126 634 30 685 123 666
253 632 688 918 854 61 884 981 206 414
82 787 469 84 877 785 206 747 863 66
c3 367 -215 217 5 -245 216 72 -66 -157 378
-290 302 -3 246 -366 -130 -222 283 -18 -159
445 112 457 23 127 -367 -332 -74 -242 40
a1 -306 258 150 79 400 122 243 -50 -412 -116
-386 462 384 190 194 -431 248 316 -191 -199
386 295 -176 151 -315 256 387 -5 -153 -290
a2 343 168 -206 -250 -209 337 175 -332 -268 317
-459 -307 43 14 -485 84 -278 106 357 -468
-398 432 261 352 -318 -387 -180 260 -36 -210
a3 -116 309 -387 108 356 -418 -63 -473 80 -213
-160 139 478 479 333 350 -154 -384 -170 147
-23 -416 -138 -336 242 186 -59 59 -103 -150
a4 31 225 428 -151 -178 -463 438 345 344 -252
-449 -350 -311 -83 -49 -1 16 147 -327 -29
-32 40 -251 75 -430 -264 -72 -406 -41 3
a5 271 -448 -330 -7 327 -412 306 223 -385 -66
287 -137 8 17 -297 -349 118 195 84 441
-215 27 242 -325 -172 232 -109 176 7 -36
a6 376 -247 -384 330 -64 -97 -294 114 -311 492
-463 137 284 439 8 210 289 150 -346 -360
-369 -353 444 225 -279 -40 -398 466 399 -186
a7 438 34 -420 142 283 -156 -241 -336 164 239
288 -474 -371 -177 327 263 139 10 379 -185
444 265 -231 -450 -313 -306 -373 189 71 463
a8 270 -440 -314 -193 27 68 -208 242 -280 203
-109 -127 -325 386 -276 -37 -406 -382 -427 212
-199 206 92 182 103 -353 -274 -198 357 225
a9 -241 -296 156 0 209 24 217 -432 -125 453
-408 120 -224 431 136 -249 -90 -56 429 299
-6 -56 -216 -16 -26 -295 -301 -422 433 -118
a10 100 -250 217 -12 72 -18 -20 -148 -171 -256
379 -497 -382 -497 195 -365 53 242 -460 240
131 -264 281 405 -137 -445 268 -478 -201 -148

Table 6.6. Values of coefficients b_i

b   3679.5   -11730.0   -5999.5    -13350.5   -4759.5
    -3417.0  -3560.5    -13435.5   -6592.0    -14496.0

Table 6.7. Individual minimum and maximum of each of the objective functions

        Minimum (z_i^min)    Maximum (z_i^max)

c1x     -109291.0            -32137.0
c2x     34701.0              124267.0
c3x     -20524.0             20141.0

Table 6.8. Membership functions for objective functions

           z_i^1         z_i^0
μ1(c1x)    -100000.0     -30000.0
μ2(c2x)    -10000.0      130000.0
μ3(c3x)    -20000.0      20000.0

ship functions and the objective functions. In this example, a satisficing


solution for the DM is derived at the fourth interaction.

6.3 Fuzzy multiobjective integer programming with


fuzzy numbers
6.3.1 Problem formulation and solution concept
As discussed in the previous section, the problem to optimize multiple
conflicting linear objective functions simultaneously under the given lin-
ear constraints and integer conditions for decision variables is called the
multiobjective integer programming problem and is formulated as (6.1).
Also, fundamental to the multiobjective integer programming problem
is the Pareto optimal concept, also known as a noninferior solution.
Qualitatively, a Pareto optimal solution of the multiobjective integer
programming problem is one in which any improvement of one objective
function can be achieved only at the expense of another.
In practice, however, it would certainly be more appropriate to con-
sider that the possible values of the parameters in the description of the
objective functions and the constraints usually involve the ambiguity
of the experts' understanding of the real system. For this reason, con-
sider a multiobjective integer programming problem with fuzzy numbers
(MOIP-FN) formulated as

    minimize   z_i(x) = c̃_i x,  i = 1, ..., k
    subject to Ãx ≤ b̃                                        (6.13)
               x_j ∈ {0, ..., v_j},  j = 1, ..., n,

Table 6.9. Interactive processes (10 trials).

Interaction  μ1(c1x)  μ2(c2x)  μ3(c3x)  c1x  c2x  c3x  #


1st 0.664029 0.664590 0.666125 -76482.0 63541.0 -6645.0 2
(1.00,1.00,1.00) 0.663843 0.664680 0.663675 -76469.0 63532.0 -6547.0 2
0.662000 0.662270 0.672825 -76340.0 63773.0 -6913.0 1
0.662829 0.661390 0.668975 -76398.0 63861.0 -6759.0 1
0.663271 0.661370 0.666725 -76429.0 63863.0 -6669.0 1
0.660886 0.660700 0.659450 -76262.0 63930.0 -6378.0 1
0.659914 0.658640 0.669125 -76194.0 64136.0 -6765.0 1
0.657714 0.656190 0.673675 -76040.0 64381.0 -6947.0 1
2nd 0.800500 0.498460 0.801725 -86035.0 80154.0 -12069.0 1
(1.00,0.70,1.00) 0.800214 0.498280 0.798775 -86015.0 80172.0 -11951.0 1
0.797857 0.500300 0.800600 -85850.0 79970.0 -12024.0 1
0.797600 0.498320 0.799050 -85832.0 80168.0 -11962.0 1
0.797929 0.498970 0.797475 -85855.0 80103.0 -11899.0 1
0.797414 0.498200 0.799975 -85819.0 80180.0 -11999.0 1
0.797571 0.497460 0.797175 -85830.0 80254.0 -11887.0 1
0.797014 0.500490 0.798875 -85791.0 79951.0 -11955.0 1
0.798129 0.496780 0.797225 -85869.0 80322.0 -11889.0 1
0.794300 0.494140 0.800775 -85601.0 80586.0 -12031.0 1
3rd 0.667529 0.567610 0.870475 -76727.0 73239.0 -14819.0 2
(0.80,0.70,1.00) 0.668571 0.564960 0.870400 -76800.0 73504.0 -14816.0 1
0.664957 0.565680 0.868575 -76547.0 73432.0 -14743.0 1
0.665557 0.564690 0.864850 -76589.0 73531.0 -14594.0 1
0.663800 0.564100 0.874625 -76466.0 73590.0 -14985.0 1
0.663429 0.564890 0.868950 -76440.0 73511.0 -14758.0 1
0.662600 0.567550 0.862275 -76382.0 73245.0 -14491.0 1
0.661629 0.561890 0.861050 -76314.0 73811.0 -14442.0 2
4th 0.691014 0.593160 0.790125 -78371.0 70684.0 -11605.0 2
(0.80,0.70,0.90) 0.690586 0.589610 0.790575 -78341.0 71039.0 -11623.0 1
0.689143 0.589330 0.797750 -78240.0 71067.0 -11910.0 1
0.688671 0.592200 0.788800 -78207.0 70780.0 -11552.0 1
0.688800 0.588350 0.791525 -78216.0 71165.0 -11661.0 1
0.688286 0.590200 0.792475 -78180.0 70980.0 -11699.0 1
0.687614 0.588890 0.794375 -78133.0 71111.0 -11775.0 1
0.691000 0.587170 0.791100 -78370.0 71283.0 -11644.0 1
0.691086 0.584820 0.791950 -78376.0 71518.0 -11678.0 1
#: Number of solutions

where x = (x_1, ..., x_n)^T is an n-dimensional column vector of integer de-
cision variables, Ã is an m × n matrix whose elements are fuzzy numbers,
and c̃_i, i = 1, ..., k, and b̃ are respectively n- and m-dimensional vec-
tors whose elements are fuzzy numbers. These fuzzy numbers, reflecting

the experts' vague understanding of the nature of the parameters in the


problem-formulation process, are assumed to be characterized as fuzzy
numbers introduced by Dubois and Prade [52, 53].
It is significant to note that in a multiobjective integer programming
problem with fuzzy numbers (6.13), when all of the fuzzy numbers in Ã
and b̃ are assumed to be nonnegative, the problem (6.13) can be
viewed as a multiobjective multidimensional integer knapsack problem
with fuzzy numbers.
Observing that this problem involves fuzzy numbers in both the ob-
jective functions and the constraints, it is evident that the notion of
Pareto optimality cannot be applied. Thus, it seems essential to extend
the notion of usual Pareto optimality in some sense. For that purpose,
we first introduce the α-level set of all of the vectors and matrices whose
elements are fuzzy numbers.

DEFINITION 6.1 (α-LEVEL SET)
The α-level set of the fuzzy parameters Ã, b̃, and c̃ is defined as the
ordinary set (Ã, b̃, c̃)_α for which the degree of their membership functions
exceeds the level α.

Now suppose that the DM decides that the degree of all of the mem-
bership functions of the fuzzy numbers involved in the MOIP-FN should
be greater than or equal to some value α. Then for such a degree α,
the MOIP-FN can be interpreted as a nonfuzzy multiobjective integer
programming (MOIP-FN(A, b, c)) problem that depends on the coeffi-
cient vector (A, b, c) ∈ (Ã, b̃, c̃)_α. Observe that there exist an infinite
number of such MOIP-FN(A, b, c) depending on the coefficient vector
(A, b, c) ∈ (Ã, b̃, c̃)_α, and the values of (A, b, c) are arbitrary for any
(A, b, c) ∈ (Ã, b̃, c̃)_α in the sense that the degree of all of the member-
ship functions for the fuzzy numbers in the MOIP-FN exceeds the level
α. However, if possible, it would be desirable for the DM to choose
(A, b, c) ∈ (Ã, b̃, c̃)_α in the MOIP-FN(A, b, c) to minimize the objective
functions under the constraints. From such a point of view, for a cer-
tain degree α, it seems to be quite natural to have the MOIP-FN as the
following nonfuzzy α-multiobjective integer programming (α-MOIP)
problem.
    minimize   c_i x,  i = 1, ..., k
    subject to Ax ≤ b
               x_j ∈ {0, ..., v_j},  j = 1, ..., n             (6.14)
               (A, b, c) ∈ (Ã, b̃, c̃)_α

In the following, for simplicity, we denote the feasible region satisfying


the constraints of the problem (6.14) with respect to x by X(A, b). It

should be emphasized here that in the problem (6.14), the parameters


(A, b, c) are treated as decision variables rather than as constants.
On the basis of the α-level sets of the fuzzy numbers, we can introduce
the concept of an α-Pareto optimal solution to the problem (6.14) as a
natural extension of the usual Pareto optimality concept.

DEFINITION 6.2 (α-PARETO OPTIMAL SOLUTION)

x* ∈ X(A*, b*) is said to be an α-Pareto optimal solution to the
problem (6.14) if and only if there does not exist another x ∈ X(A, b),
(A, b, c) ∈ (Ã, b̃, c̃)_α, such that c_i x ≤ c_i x*, i = 1, ..., k, with strict
inequality holding for at least one i, where the corresponding values of
the parameters (A*, b*, c*) are called α-level optimal parameters.

Observe that α-Pareto optimal solutions and α-level optimal parame-
ters can be obtained through a direct application of the usual scalarizing
methods for generating Pareto optimal solutions by regarding the deci-
sion variables in the problem (6.14) as (x, A, b, c).

6.3.2 Interactive fuzzy multiobjective integer


programming with fuzzy numbers
For such an α-MOIP problem (6.14), considering the vague nature
of human judgments, it is quite natural to assume that the DM may
have a fuzzy goal for each of the objective functions z_i(x) = c_i x. In
a minimization problem, the goal stated by the DM may be to achieve
"substantially less than or equal to some value p_i" [135, 225, 228]. These
fuzzy goals can be quantified by eliciting the corresponding membership
functions through the interaction with the DM.
To elicit a linear membership function μ_i(c_i x) for each i from the DM
for each of the fuzzy goals, the DM is asked to assess a minimum value
of unacceptable levels for c_i x, denoted by z_i^0, and a maximum value of
totally desirable levels for c_i x, denoted by z_i^1. Then the linear membership
functions μ_i(c_i x), i = 1, ..., k, for the fuzzy goals of the DM are defined
by

    μ_i(c_i x) = 1                                   if c_i x ≤ z_i^1
                 (c_i x − z_i^0)/(z_i^1 − z_i^0)     if z_i^1 < c_i x ≤ z_i^0     (6.15)
                 0                                   if c_i x > z_i^0

These membership functions are depicted in Figure 6.2.
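In code, (6.15) amounts to a clamped linear interpolation; the following
Python function is a direct transcription of it (the function name is an
illustrative choice).

    def linear_membership(z, z0, z1):
        """Linear membership (6.15): z0 is the unacceptable level and z1
        the totally desirable level of a minimization objective (z1 < z0)."""
        if z >= z0:
            return 0.0
        if z <= z1:
            return 1.0
        return (z - z0) / (z1 - z0)  # rises linearly from 0 at z0 to 1 at z1

For instance, with z_1^1 = -60000 and z_1^0 = 0 as in Table 6.3, the objective
value c1x = -40468.0 of the first interaction yields μ1 = 40468/60000 ≈ 0.6745,
matching the value reported in Table 6.4.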


As one of the possible ways to help the DM determine z_i^0 and z_i^1,
it is convenient to calculate the minimal value z_i^min and the maximal
value z_i^max of each objective function under the given constraints. Then,
by taking account of the calculated individual minimum and maximum

Figure 6.2. Linear membership function for fuzzy goal

of each objective function, the DM is asked to assess z_i^0 and z_i^1 in the
interval [z_i^min, z_i^max], i = 1, ..., k.
, zimax]
Having elicited the linear membership functions μ_i(c_i x) from the DM
for each of the objective functions c_i x, i = 1, ..., k, if we introduce a
general aggregation function

    μ_D(μ_1(c_1 x), μ_2(c_2 x), ..., μ_k(c_k x), α),              (6.16)

the problem to be solved is transformed into the fuzzy multiobjective
decision-making problem defined by

    maximize   μ_D(μ_1(c_1 x), μ_2(c_2 x), ..., μ_k(c_k x), α)
    subject to (x, A, b, c) ∈ P(α),  α ∈ [0, 1],                (6.17)

where P(α) is the set of α-Pareto optimal solutions and the correspond-
ing α-level optimal parameters to the problem (6.14). Observe that the
value of the aggregation function μ_D(·) can be interpreted as represent-
ing an overall degree of satisfaction with the DM's k fuzzy goals [135].
If μ_D(·) can be explicitly identified, then (6.17) reduces to a standard
mathematical programming problem. However, this rarely happens, and
as an alternative, an interaction with the DM is necessary for finding a
satisficing solution for the DM to (6.17).
To generate a candidate for the satisficing solution that is also α-
Pareto optimal, in our interactive decision-making method, the DM is
asked to specify the degree α of the α-level set and the reference mem-
bership values. Observe that the idea of the reference membership val-
ues can be viewed as an obvious extension of the idea of the reference
point in Wierzbicki [215]. To be more explicit, for the DM's degree α
and reference membership values μ̄_i, i = 1, ..., k, the corresponding α-
Pareto optimal solution, which is, in the minimax sense, nearest to the

requirement or better than that if the reference membership values are


attainable, is obtained by solving the following minimax problem.
mInImIZe
subject to (6.18)

It must be noted here that, for generating α-Pareto optimal solutions by
solving the minimax problem, if the uniqueness of the optimal solution
x* is not guaranteed, it is necessary to perform the α-Pareto optimality
test of x*. To circumvent the necessity to perform the α-Pareto optimal-
ity test in the minimax problems, it is reasonable to use the following
augmented minimax problem instead of the minimax problem (6.18).

    minimize   max_{i=1,...,k} { (μ̄_i − μ_i(c_i x)) + ρ Σ_{i=1}^{k} (μ̄_i − μ_i(c_i x)) }
    subject to Ax ≤ b                                           (6.19)
               x_j ∈ {0, ..., v_j},  j = 1, ..., n
               (A, b, c) ∈ (Ã, b̃, c̃)_α

where ρ is a sufficiently small positive number.
In this formulation, however, the constraints are nonlinear because the
parameters A, b, and c are treated as decision variables. To deal with
such nonlinearities, we introduce the following set-valued functions S_i(·)
and T(·, ·) for i = 1, ..., k:

    S_i(c_i) = { x | (μ̄_i − μ_i(c_i x)) + ρ Σ_{l=1}^{k} (μ̄_l − μ_l(c_l x)) ≤ v }     (6.20)

    T(A, b) = { x | Ax ≤ b },

where v denotes a given level. Then it can be easily verified that the
following relations hold for S_i(·) and T(·, ·) when x ≥ 0.

PROPOSITION 6.1
(1) c_i^1 ≤ c_i^2 ⟹ S_i(c_i^1) ⊇ S_i(c_i^2)
(2) b^1 ≤ b^2 ⟹ T(A, b^1) ⊆ T(A, b^2)
(3) A^1 ≤ A^2 ⟹ T(A^1, b) ⊇ T(A^2, b)

Now, from the properties of the α-level set for the vectors and/or
matrices of fuzzy numbers, it should be noted here that the feasible
regions for A, b, and c_i can be denoted by the closed intervals [A_α^L, A_α^R],

[b_α^L, b_α^R], and [c_iα^L, c_iα^R], respectively, where Y_α^L or Y_α^R represents the left or
right extreme point of the α-level set Y_α. Therefore, through the use of
Proposition 6.1, we can obtain an optimal solution of the problem (6.19)
by solving the following integer programming problem.

mInImIZe

subject to

Observe that this problem preserves the linearities of the constraints,
and hence it is quite natural to define the fitness function by

    f(s) = (1.0 + kρ) − max_{i=1,...,k} { (μ̄_i − μ_i(c_iα^L x)) + ρ Σ_{i=1}^{k} (μ̄_i − μ_i(c_iα^L x)) },   (6.22)

where s and x denote an individual represented by a double string and
the phenotype of s, respectively.
With this observation in mind, the augmented minimax problem (6.21)
can be effectively solved through GADSLPR or GADSLPRRSU intro-
duced in the preceding sections.
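The two computational ingredients of this reduction, the α-level endpoints
of triangular fuzzy numbers (represented here as (left, center, right) triples
as in Table 6.10) and the fitness (6.22), can be sketched in Python as
follows; the function names and argument layout are illustrative.

    def alpha_cut(tfn, alpha):
        """alpha-level interval [y_L, y_R] of a triangular fuzzy number
        tfn = (l, c, r): membership 1 at c, 0 outside (l, r)."""
        l, c, r = tfn
        return (l + alpha * (c - l), r - alpha * (r - c))

    def fitness(x, mu, ref, c_left, rho, k):
        """Fitness (6.22): (1 + k*rho) minus the augmented minimax value,
        where c_left[i] is the left-endpoint vector of c_i at level alpha
        and mu[i] the membership function of the i-th objective."""
        d = [ref[i] - mu[i](sum(cj * xj for cj, xj in zip(c_left[i], x)))
             for i in range(k)]
        # The augmentation term rho * sum(d) is common to every i, so the
        # maximum can be taken over the first terms alone.
        return (1.0 + k * rho) - (max(d) + rho * sum(d))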
Following the preceding discussions, we can now construct the inter-
active algorithm in order to derive a satisficing solution for the DM from
the α-Pareto optimal solution set. The steps marked with an asterisk
involve interaction with the DM.

Interactive fuzzy multiobjective integer programming with fuzzy


numbers

Step 0: Calculate the individual minimum and maximum of each ob-


jective function under the given constraints for α = 0 and α = 1,
respectively.

Step 1*: Elicit a membership function μ_i(c_i x) from the DM for each
of the objective functions by considering the calculated individual
minimum and maximum of each objective function.

Step 2*: Ask the DM to select the initial value of α (0 ≤ α ≤ 1) and


the initial reference membership values μ̄_i, i = 1, ..., k.

Step 3: For the degree α and the reference membership values μ̄_i,
i = 1, ..., k, specified by the DM, solve the corresponding augmented
minimax problem.

Step 4*: If the DM is satisfied with the current values of the member-
ship functions and/or objective functions given by the current best
solution, stop. Otherwise, ask the DM to update the reference mem-
bership levels and/or α by taking into account the current values of
membership functions and/or objective functions, and return to step
3.

It must be observed here that, in this interactive algorithm, GADSLPR
or GADSLPRRSU is used mainly in step 3. However, observe that in
step 0 GADSLPR or GADSLPRRSU can be used for calculating z_i^min
and z_i^max, i = 1, ..., k.

6.3.3 Numerical experiments


To illustrate the proposed method, the following numerical examples
are considered. The numerical experiments were performed on a personal
computer (processor: Celeron 333MHz, memory: 128MB, OS: Windows
2000), and a Visual C++ compiler (version 6.0) was used.

6.3.3.1 Multiobjective integer knapsack problems with fuzzy


numbers
As a numerical example, consider a three-objective integer knapsack
problem with 30 variables and 10 constraints that involves fuzzy num-
bers. For convenience, it is assumed here that some of the coefficients c_ij,
a_ij, and b_i in Table 6.1 are fuzzy numbers. To be more explicit, among
the coefficients in Table 6.1, the following are assumed to be triangular
fuzzy numbers, as shown in Table 6.10.
The parameter values of the proposed GADSLPR are set as popu-
lation size N = 100, generation gap G = 0.9, probability of crossover
p_c = 0.9, probability of mutation p_m = 0.05, probability of inversion
p_i = 0.03, minimal search generation I_min = 100, maximal search gener-
ation I_max = 2000, ε = 0.02, c_mult = 1.8, α = 2.0, p = 3.0, and R = 0.9.
The coefficient ρ of the augmented minimax problem is set as 0.005.
First, the individual minimum and maximum of each objective func-
tion c_i x, i = 1, 2, 3, for α = 1 and α = 0 are calculated by GADSLPR,
as shown in Table 6.11.
By taking these values into account, the DM subjectively determined
linear membership functions μ_i(c_i x), i = 1, 2, 3, as shown in Table 6.12.
Having determined the linear membership functions in this way, for
this numerical example, at each interaction with the DM, the corre-
sponding augmented minimax problem (6.21) was solved through 10
trials of GADSLPR, as shown in Table 6.13.

Table 6.10. Triangular fuzzy numbers

c1,7   (-184.00, -126.00, -91.00)     c1,18  (-199.00, -186.00, -177.00)
c1,19  (-714.00, -500.00, -442.00)    c1,27  (-121.00, -105.00, -98.00)
c2,5   (672.00, 837.00, 1229.00)      c2,15  (377.00, 466.00, 646.00)
c2,16  (545.00, 732.00, 833.00)       c3,28  (303.00, 490.00, 685.00)
c3,29  (51.00, 96.00, 102.00)         c3,30  (-674.00, -472.00, -449.00)
a1,10  (89.00, 95.00, 111.00)         a1,14  (392.00, 591.00, 841.00)
a1,19  (328.00, 343.00, 378.00)       a2,4   (364.00, 579.00, 763.00)
a2,15  (777.00, 986.00, 1157.00)      a2,26  (26.00, 38.00, 51.00)
a3,5   (597.00, 670.00, 713.00)       a3,18  (486.00, 579.00, 649.00)
a3,23  (526.00, 964.00, 1200.00)      a3,27  (315.00, 423.00, 439.00)
a4,12  (550.00, 711.00, 971.00)       a5,3   (553.00, 701.00, 721.00)
a5,4   (414.00, 564.00, 621.00)       a5,8   (477.00, 482.00, 515.00)
a5,10  (1.00, 2.00, 2.00)             a6,2   (239.00, 302.00, 360.00)
a6,13  (135.00, 187.00, 203.00)       a6,16  (147.00, 207.00, 291.00)
a7,11  (89.00, 146.00, 160.00)        a7,20  (134.00, 154.00, 167.00)
a8,5   (399.00, 432.00, 528.00)       a8,13  (33.00, 42.00, 42.00)
a9,1   (423.00, 428.00, 488.00)       a9,6   (568.00, 839.00, 860.00)
a9,8   (107.00, 172.00, 255.00)       a9,29  (122.00, 178.00, 247.00)
a10,12 (499.00, 761.00, 984.00)       a10,14 (181.00, 229.00, 301.00)
a10,17 (309.00, 427.00, 490.00)       a10,21 (230.00, 274.00, 388.00)
a10,29 (219.00, 272.00, 304.00)       b7     (40974.50, 43205.00, 43773.75)
b8     (28445.75, 42225.00, 48512.00) b9     (35442.00, 37760.00, 46930.75)

Table 6.11. Individual minimum and maximum of each objective function

        Minimum (z_i^min)          Maximum (z_i^max)
        α = 1       α = 0          α = 1      α = 0
c1x     -58091.0    -60804.0       0.0        0.0
c2x     0.0         0.0            63460.0    71438.0
c3x     -19573.0    -21687.0       27770.0    32322.0

Table 6.12. Membership functions for objective functions

           z_i^1        z_i^0
μ1(c1x)    -60000.0     0.0
μ2(c2x)    0.0          70000.0
μ3(c3x)    -21000.0     32000.0

The augmented minimax problem is solved by the GADSLPR for the


initial reference membership levels (1.00,1.00,1.00), which can be viewed
as the ideal values, and α = 1.00. Then the DM is supplied with the
corresponding membership function values in the first interaction of Ta-
ble 6.13. Because the DM is not satisfied with the current membership
function values, the DM updates the reference membership values as
μ̄1 = 0.70, μ̄2 = 1.00, and μ̄3 = 1.00 for improving the satisfaction levels
for μ2 and μ3 at the expense of μ1. For the updated reference member-
ship values, the corresponding augmented minimax problem is solved by
the GADSLPR, and the corresponding membership function values are
calculated as shown in the second interaction of Table 6.13. Because the
DM is not satisfied with the current membership function values, the
DM updates the reference membership values as μ̄1 = 0.70, μ̄2 = 0.80,
and μ̄3 = 1.00 for improving the satisfaction levels for μ1 and μ3 at the
expense of μ2. For the updated reference membership values, the corre-
sponding augmented minimax problem is solved by the GADSLPR, and
the corresponding membership function values are calculated as shown
in the third interaction of Table 6.13. Furthermore, the DM updates the
degree 0' = 1.0 --t 0.6 to improve all membership function values at the
cost of the degree of realization of coefficients Q. In this example, the
satisficing solution for the DM is derived at the fourth interaction.

Table 6.13. Interactive processes (10 trials)

Interaction  μ1(c1x)  μ2(c2x)  μ3(c3x)  c1x  c2x  c3x  #


1st 0.685317 0.685214 0.683849 -41119.0 22035.0 -4244.0 4
(1.00,1.00,1.00) 0.683550 0.686986 0.685038 -41013.0 21911.0 -4307.0 5
α = 1.0 0.683350 0.685814 0.685226 -41001.0 21993.0 -4317.0 1
2nd 0.485467 0.785171 0.784925 -29128.0 15038.0 -9601.0 1
(0.70,1.00,1.00) 0.484600 0.785371 0.787547 -29076.0 15024.0 -9740.0 1
α = 1.0 0.485000 0.786871 0.784472 -29100.0 14919.0 -9577.0 3
0.484283 0.785029 0.785132 -29057.0 15048.0 -9612.0 1
0.484217 0.784200 0.784566 -29053.0 15106.0 -9582.0 3
0.484033 0.786643 0.785491 -29042.0 14935.0 -9631.0 1
3rd 0.567167 0.667729 0.868358 -34030.0 23259.0 -14023.0 2
(0.70,0.80,1.00) 0.569317 0.668100 0.867132 -34159.0 23233.0 -13958.0 6
α = 1.0 0.568083 0.667057 0.867415 -34085.0 23306.0 -13973.0 1
0.570283 0.668329 0.866113 -34217.0 23217.0 -13904.0 1
4th 0.574410 0.674286 0.876830 -34464.6 22800.0 -14472.0 3
(0.70,0.80,1.00) 0.574340 0.673900 0.873943 -34460.4 22827.0 -14319.0 2
α = 0.6 0.577710 0.673214 0.873981 -34662.6 22875.0 -14321.0 5
#: Number of solutions

6.3.3.2 Multiobjective integer programming problems with


fuzzy numbers
As a numerical example of multiobjective general integer programming
problems involving fuzzy numbers, consider a three-objective general
integer programming problem having 30 variables and 10 constraints
involving fuzzy numbers. It is assumed here that some of the coefficients
Cij, aij, and bi in Tables 6.5 and 6.6 are fuzzy numbers. To be more
explicit, among the coefficients in Tables 6.5 and 6.6, the following are
assumed to be triangular fuzzy numbers, as shown in Table 6.14.

Table 6.14. Fuzzy parameters

c1,2   (-62.00, -59.00, -34.00)      c1,12  (-664.00, -593.00, -405.00)
c1,18  (-477.00, -468.00, -297.00)   c1,21  (-248.00, -223.00, -166.00)
c1,30  (-1095.00, -732.00, -701.00)  c2,18  (738.00, 981.00, 1022.00)
c2,19  (133.00, 206.00, 210.00)      c2,20  (379.00, 414.00, 507.00)
c2,22  (599.00, 787.00, 1073.00)     c2,26  (690.00, 785.00, 1119.00)
c2,30  (62.00, 66.00, 84.00)         c3,13  (-3.00, -3.00, -1.00)
c3,14  (224.00, 246.00, 255.00)      c3,29  (-350.00, -242.00, -148.00)
a1,6   (88.00, 122.00, 140.00)       a1,10  (-170.00, -116.00, -106.00)
a1,20  (-269.00, -199.00, -187.00)   a2,5   (-306.00, -209.00, -180.00)
a2,15  (-626.00, -485.00, -419.00)   a3,3   (-564.00, -387.00, -203.00)
a3,4   (97.00, 108.00, 124.00)       a3,7   (-83.00, -63.00, -46.00)
a3,27  (-77.00, -59.00, -46.00)      a4,3   (219.00, 428.00, 582.00)
a4,17  (8.00, 16.00, 20.00)          a4,19  (-424.00, -327.00, -204.00)
a4,26  (-389.00, -264.00, -221.00)   a5,11  (202.00, 287.00, 376.00)
a5,21  (-231.00, -215.00, -160.00)   a5,28  (120.00, 176.00, 215.00)
a6,19  (-441.00, -346.00, -272.00)   a6,26  (-41.00, -40.00, -36.00)
a6,28  (261.00, 466.00, 597.00)      a7,6   (-170.00, -156.00, -137.00)
a7,17  (85.00, 139.00, 147.00)       a7,27  (-506.00, -373.00, -265.00)
a7,29  (52.00, 71.00, 98.00)         a8,4   (-245.00, -193.00, -151.00)
a8,6   (37.00, 68.00, 88.00)         a8,11  (-125.00, -109.00, -76.00)
a8,14  (232.00, 386.00, 446.00)      a9,6   (18.00, 24.00, 31.00)
a9,12  (90.00, 120.00, 140.00)       a9,13  (-293.00, -224.00, -152.00)
a9,14  (307.00, 431.00, 575.00)      a10,8  (-221.00, -148.00, -99.00)
a10,10 (-376.00, -256.00, -131.00)   a10,16 (-404.00, -365.00, -267.00)
a10,19 (-478.00, -460.00, -442.00)   a10,21 (73.00, 131.00, 141.00)
a10,25 (-196.00, -137.00, -126.00)   a10,28 (-619.00, -478.00, -332.00)
a10,29 (-297.00, -201.00, -200.00)   b2     (-22102.50, -11730.00, 2782.05)
b5     (-14250.90, -4759.50, 7399.95)

The parameter values of GADSLPRRSU are set as population size


N = 100, generation gap G = 0.9, probability of crossover p_c = 0.9,
probability of mutation p_m = 0.01, probability of inversion p_i = 0.03,

minimal search generation I_min = 100, maximal search generation I_max =


2000, ε = 0.02, c_mult = 1.8, α = 2.0, p = 3.0, R = 0.9, λ = 0.9, η = 0.1,
θ = 5.0, and P = 100. The coefficient ρ of the augmented minimax
problem is set as 0.005.
First, for determining the linear membership functions, the individual
minimum and maximum of each objective function c_i x, i = 1, 2, 3, are re-
spectively calculated by GADSLPRRSU, as shown in Table 6.15. Based
on the values in Table 6.15, the DM subjectively specified parameters
of linear membership functions μ_i(c_i x), i = 1, 2, 3, as shown in Table
6.16. At each interaction with the DM, the corresponding augmented
minimax problem (6.21) is solved through 10 trials of GADSLPRRSU,
as shown in Table 6.17.

Table 6.15. Individual minimum and maximum of each objective function

        Minimum (z_i^min)           Maximum (z_i^max)
        α = 1        α = 0          α = 1       α = 0
c1x     -109291.0    -117922.0      -32137.0    -15100.0
c2x     34701.0      22987.0        124267.0    139535.0
c3x     -20524.0     -24274.0       20141.0     24802.0

Table 6.16. Membership functions for objective functions

           z_i^1         z_i^0
μ1(c1x)    -120000.0     -20000.0
μ2(c2x)    20000.0       140000.0
μ3(c3x)    -30000.0      30000.0

The augmented minimax problem is solved by the GADSLPRRSU


for the initial reference membership levels (1.00, 1.00, 1.00) and α = 1.0,
which can be viewed as the ideal values, and the DM is supplied with
the corresponding membership function values in the first interaction
of Table 6.17. Because the DM is not satisfied with the current mem-
bership function values, the DM updates the reference membership val-
ues as μ̄1 = 1.00, μ̄2 = 0.80, μ̄3 = 1.00, and α = 1.0 for improving
the satisfaction levels for μ1 and μ3 at the expense of μ2. For the up-
dated reference membership values, the corresponding augmented min-
imax problem is solved by the proposed GADSLPRRSU, and the cor-
responding membership function values are calculated as shown in the
second interaction of Table 6.17. Because the DM is not satisfied with

Table 6.17. Interactive processes (10 trials)

Interaction  μ1(c1x)  μ2(c2x)  μ3(c3x)  c1x  c2x  c3x  #


1st 0.599660 0.600142 0.608500 -79966.0 67983.0 -6510.0 1
(1.00,1.00,1.00) 0.599320 0.599308 0.599217 -79932.0 68083.0 -5953.0 2
α = 1.0 0.599070 0.600008 0.600533 -79907.0 67999.0 -6032.0 1
0.600550 0.599058 0.599250 -80055.0 68113.0 -5955.0 1
0.598720 0.601983 0.598733 -79872.0 67762.0 -5924.0 1
0.598330 0.597817 0.599050 -79833.0 68262.0 -5943.0 1
0.597840 0.601458 0.597750 -79784.0 67825.0 -5865.0 1
0.598420 0.597733 0.597867 -79842.0 68272.0 -5872.0 1
0.596740 0.600142 0.597800 -79674.0 67983.0 -5868.0 1
2nd 0.682120 0.480375 0.680767 -88212.0 82355.0 -10846.0 1
(1.00,0.80,1.00) 0.680460 0.480208 0.680667 -88046.0 82375.0 -10840.0 1
α = 1.0 0.681400 0.480008 0.682433 -88140.0 82399.0 -10946.0 1
0.679750 0.481158 0.685483 -87975.0 82261.0 -11129.0 1
0.679570 0.479633 0.682300 -87957.0 82444.0 -10938.0 1
0.679410 0.480075 0.681633 -87941.0 82391.0 -10898.0 1
0.678850 0.480742 0.684383 -87885.0 82311.0 -11063.0 1
0.678400 0.479775 0.684850 -87840.0 82427.0 -11091.0 1
0.677820 0.477942 0.678300 -87782.0 82647.0 -10698.0 1
0.679820 0.482008 0.677267 -87982.0 82159.0 -10636.0 1
3rd 0.621460 0.521317 0.724950 -82146.0 77442.0 -13497.0 2
(0.90,0.80,1.00) 0.620720 0.522900 0.721500 -82072.0 77252.0 -13290.0 1
α = 1.0 0.620310 0.522808 0.723083 -82031.0 77263.0 -13385.0 1
0.621090 0.520183 0.719700 -82109.0 77578.0 -13182.0 1
0.619710 0.519533 0.720900 -81971.0 77656.0 -13254.0 1
0.619370 0.522317 0.721233 -81937.0 77322.0 -13274.0 1
0.619240 0.520642 0.721167 -81924.0 77523.0 -13270.0 1
0.618790 0.518650 0.723917 -81879.0 77762.0 -13435.0 1
0.618040 0.519925 0.718067 -81804.0 77609.0 -13084.0 1
4th 0.640835 0.544483 0.741927 -84083.5 74662.0 -14515.6 1
(0.90,0.80,1.00) 0.643561 0.542167 0.740767 -84356.1 74940.0 -14446.0 1
α = 0.7 0.640645 0.541192 0.740660 -84064.5 75057.0 -14439.6 2
0.641774 0.540546 0.746543 -84177.4 75134.5 -14792.6 1
0.643176 0.539967 0.742027 -84317.6 75204.0 -14521.6 1
0.641244 0.539925 0.741043 -84124.4 75209.0 -14462.6 1
0.639856 0.543083 0.743200 -83985.6 74830.0 -14592.0 1
0.639694 0.539479 0.746910 -83969.4 75262.5 -14814.6 1
0.639813 0.539000 0.742103 -83981.3 75320.0 -14526.2 1
#: Number of solutions

the current membership function values, the DM updates the reference


membership values as μ̄1 = 0.90, μ̄2 = 0.80, μ̄3 = 1.00, and α = 1.0 for
improving the satisfaction levels for μ2 and μ3 at the expense of μ1. For

the updated reference membership values, the corresponding augmented


minimax problem (6.21) is solved by the proposed GADSLPRRSU, and
the corresponding membership function values are calculated as shown
in the third interaction of Table 6.17. The same procedure continues
in this manner until the DM is satisfied with the current values of the
membership functions and the objective functions. In this example, a
satisficing solution for the DM is derived at the fourth interaction.

6.4 Conclusion
In this chapter, as the fuzzy multiobjective version of Chapter 5 and
an integer generalization along the same lines as Chapter 4, interactive
fuzzy multiobjective integer programming methods have been discussed.
Through the use of GADS for integer programming introduced in Chap-
ter 5, considerable effort is devoted to the development of interactive
fuzzy multiobjective integer programming as well as fuzzy multiobjec-
tive integer programming with fuzzy numbers together with several nu-
merical experiments.
In the next chapter, we will proceed to genetic algorithms for non-
linear programming. In Chapter 8, attention is focused on not only
multiobjective nonlinear programming problems but also multiobjective
nonlinear programming problems with fuzzy numbers as a generalized
version of this chapter.
Chapter 7

GENETIC ALGORITHMS FOR


NONLINEAR PROGRAMMING

In this chapter, after introducing genetic algorithms for nonlinear


programming including the original GEnetic algorithm for Numerical
Optimization of COnstrained Problems (GENOCOP) system for linear
constraints, the coevolutionary genetic algorithm, called GENOCOP
III, proposed by Michalewicz et al. is discussed in detail. Realizing
some drawbacks of GENOCOP III, the coevolutionary genetic algorithm,
called the revised GENOCOP III, is presented through the introduction
of a generating method of an initial reference point by minimizing the
sum of squares of violated nonlinear constraints and a bisection method
for generating a new feasible point on the line segment between a search
point and a reference point efficiently. Illustrative numerical examples
are provided to demonstrate the feasibility and efficiency of the revised
GENOCOP III.

7.1 Introduction
Genetic algorithms (GAs) initiated by Holland [75] have attracted
considerable attention as global methods for complex function optimiza-
tion since De Jong considered GAs in a function optimization setting
[66]. However, many of the test function minimization problems solved
by a number of researchers during the past 20 years involve only spec-
ified domains of variables. Only recently, several approaches have been
proposed for solving general nonlinear programming problems through
GAs [60, 88, 112, 165].
For handling nonlinear constraints of general nonlinear programming
problems by GAs, most of them are based on the concept of penalty
functions, which penalize infeasible solutions [60, 88, 112, 114, 115, 119].
Although several ideas have been proposed about how the penalty func-

tion is designed and applied to infeasible solutions, penalty-based meth-


ods have several drawbacks, and the experimental results on many test
cases have been disappointing [112, 115, 119], as pointed out in the field
of nonlinear optimization.
In 1995, Michalewicz et al. [118, 119] proposed GENOCOP III for
solving general nonlinear programming problems as a new constraint-
handling method for avoiding many drawbacks of these penalty meth-
ods. GENOCOP III incorporates the original GENOCOP system for
linear constraints [112, 113, 116], but extends it by maintaining two
separate populations, where a development in one population influences
evaluations of individuals in the other population. The first population
consists of so-called search points that satisfy linear constraints of the
problem as in the original GENOCOP system. The second population
consists of so-called reference points that satisfy all constraints of the
problem.
Unfortunately, however, in GENOCOP III, because an initial refer-
ence point is generated randomly from individuals satisfying the lower
and upper bounds, it is quite difficult to generate an initial reference
point in practice. Furthermore, because a new search point is randomly
generated on the line segment between a search point and a reference
point, effectiveness and speed of search may be quite low.
Realizing such difficulties, in this chapter, we propose the coevolu-
tionary genetic algorithm, called the revised GENOCOP III, through
the introduction of a generating method of an initial reference point by
minimizing the sum of squares of violated nonlinear constraints and a
bisection method for generating a new feasible point on the line segment
between a search point and a reference point efficiently. Illustrative nu-
merical examples demonstrate both the feasibility and the efficiency of
the proposed method.

7.2 Floating-point genetic algorithms


As indicated by Michalewicz [112], the binary representation tradi-
tionally used in genetic algorithms has some drawbacks when applied
to multidimensional, high-precision constrained optimization problems.
For example, for multidimensional constrained optimization problems
having 100 variables with domains in the range [-500, 500], where a
precision of six digits after the decimal point is required, the length
of the binary solution vector is 3000. This, in turn, generates a search
space of about 10^1000. For such problems genetic algorithms using binary
representations perform poorly.
With this observation in mind, in recent years, for most applications
of genetic algorithms to constrained optimization problems the floating-

point coding techniques have been used to represent potential solutions


to the problems. In genetic algorithms with floating-point representa-
tion, each string is coded as a vector of real numbers of the same lengths
as the solution vector. Such coding is also known as real-valued represen-
tation or real number representation. Using the floating-point represen-
tation, several constraint-preserving operators for handling constraints
are designed by Michalewicz [112].
In this section, for convenience in our subsequent discussion, following
Michalewicz [112], genetic algorithms with floating-point representation
for constrained optimization problems, including the GENOCOP sys-
tem, are introduced. The GENOCOP system is based on maintaining
feasible solutions from the convex search space.

7.2.1 Nonlinear programming problems


In general, nonlinear programming, also called constrained nonlinear
optimization, seeks to select values of decision variables that minimize
an objective function subject to a set of constraints. To be more explicit,
nonlinear programming is concerned with solving a constrained nonlinear
optimization problem of the form
    minimize   f(x)
    subject to x ∈ X,                                         (7.1)

where x = (x_1, ..., x_n) is an n-dimensional vector of decision variables,
f(x) is an objective function, and X is a feasible set or feasible region
defined as

    X ≜ { x ∈ R^n | g_j(x) ≤ 0, j = 1, ..., m_1;  h_j(x) = 0,
          j = m_1 + 1, ..., m;  l_i ≤ x_i ≤ u_i, i = 1, ..., n }.      (7.2)
The constraints g_j(x) ≤ 0, j = 1, ..., m_1, are m_1 inequality constraints;
h_j(x) = 0, j = m_1 + 1, ..., m, are m − m_1 equality constraints; and l_i and
u_i, i = 1, ..., n, are lower and upper bounds on the decision variables,
which are usually called the domain constraints.
Depending on the nature of the objective function and the constraints,
the nonlinear programming problem (7.1) can be classified accordingly.
When X = R^n, (7.1) becomes an unconstrained minimization problem.
If the objective function f(x) is convex and the constrained set X is
convex, (7.1) is called a convex programming problem.¹ When the con-
vexity conditions of the objective function and/or the feasible region are

¹ A nonempty set C is called convex if the line segment joining any two points of the set also
belongs to the set, i.e., λx₁ + (1 − λ)x₂ ∈ C for all x₁, x₂ ∈ C and all λ ∈ [0, 1]. A function f(x)
defined on a nonempty convex set C in R^n is said to be convex if f(λx₁ + (1 − λ)x₂) ≤ λf(x₁) +

not satisfied, (7.1) becomes a nonconvex programming problem. The


desirable feature of convex programming is based on the fact that any
local minimum of a convex programming problem is a global minimum.²
In this section, we consider the convex programming problem formu-
lated as
    minimize   f(x)
    subject to x = (x_1, ..., x_n) ∈ D ⊂ R^n,                  (7.3)

where D is a feasible region defined by the lower and the upper bounds
of the decision variables (l_i ≤ x_i ≤ u_i, i = 1, ..., n) and by a set of
convex constraints C. Hence the feasible region D is a convex set.
From the convexity of the feasible region D, it follows that for each
point in the search space (x_1, ..., x_n) ∈ D there exists a feasible range
(l(i), u(i)) of a variable x_i, i = 1, ..., n, where the other variables x_j, j =
1, ..., i − 1, i + 1, ..., n, remain fixed. To be more explicit, for a given
(x_1, ..., x_i, ..., x_n) ∈ D, it holds that

    y ∈ (l(i), u(i)) if and only if (x_1, ..., x_{i−1}, y, x_{i+1}, ..., x_n) ∈ D,   (7.4)

where all x_j, j = 1, ..., i − 1, i + 1, ..., n, remain constant. In this
section, it is also assumed that the ranges (l(i), u(i)) can be efficiently
computed.
For example, as in Michalewicz [112], for a convex set D ⊆ R² and a
given point (2, 5) ∈ D, the ranges l(i) and u(i), i = 1, 2, may become

    l(1) = 1,  u(1) = √5,
    l(2) = 4,  u(2) = 6.

This means that the first element of the vector (2, 5) can vary from 1 to
√5 while x₂ = 5 remains constant, and the second element of the vector
(2, 5) can vary from 4 to 6 while x₁ = 2 remains constant. Naturally, if
the set of constraints C is empty, then the convex feasible region becomes
D = ∏_{i=1}^{n} (l_i, u_i) with l(i) = l_i and u(i) = u_i, i = 1, ..., n.

(1 − λ)f(x₂) for all x₁, x₂ ∈ C and all λ ∈ [0, 1]. A function f(x) is said to be concave if −f(x) is
convex. It should be noted here that if all the functions f(x) and g_j(x), j = 1, ..., m₁, are
convex and all the functions h_j(x), j = m₁ + 1, ..., m, are linear, (7.1) becomes a convex
programming problem.
² A point x* is said to be a local minimum point of (7.1) if there exists a real number δ > 0
such that f(x) ≥ f(x*) for all x ∈ X satisfying ‖x − x*‖ < δ. A point x* is called a global
minimum point of (7.1) if f(x) ≥ f(x*) for all x ∈ X.

7.2.2 Individual representation


In the floating-point representation, each individual is coded as a vec-
tor of floating-point numbers of the same length as the solution vector,
and each element is forced to be within the feasible region D.
Naturally, the precision of the floating-point representation depends
on the underlying machine, but it is pointed out that the precision is gen-
erally much better than that of the binary representation. Although the
precision of the binary representation can always be extended by introduc-
ing more bits, this considerably slows down the algorithm. In addition,
the floating-point representation is capable of representing quite large
domains. On the other hand, the binary representation must sacrifice
precision with an increase in domain size, given a fixed binary length.
Also, in the floating-point representation it is much easier to design spe-
cial genetic operators for handling constraints.

7.2.3 Initial population


An initial population for the floating-point genetic algorithms is usu-
ally generated in the following way.

(1) Some specified percentage of initial individuals is randomly gener-


ated in the feasible region D.

(2) The remaining individuals are generated on the boundary of the


feasible region D.

Observe that the minimum or maximum of the problems to be solved


frequently lies on the boundary of the feasible region.

7.2.4 Crossover
For the floating-point genetic algorithms, several interesting types of
crossover operators have been proposed. Some of them are discussed in
turn following Michalewicz [112, 116].

7.2.4.1 Simple crossover


The simple crossover operator is similar to that of the binary rep-
resentation. The basic one is one-point crossover. For two parents
v = (v₁, ..., v_n) and w = (w₁, ..., w_n), if they are crossed after the ith
position, the resulting offspring are v′ = (v₁, ..., v_i, w_{i+1}, ..., w_n) and
w′ = (w₁, ..., w_i, v_{i+1}, ..., v_n). Unfortunately, however, such an opera-
tor may generate offspring outside the feasible region D. To avoid this,
using the convexity of the feasible region D, from two parents v and w,
two offspring v′ and w′ that are convex combinations of their parents

after the ith position, are generated, namely

    v′ = (v₁, ..., v_i, a w_{i+1} + (1 − a) v_{i+1}, ..., a w_n + (1 − a) v_n)     (7.5)

    w′ = (w₁, ..., w_i, a v_{i+1} + (1 − a) w_{i+1}, ..., a v_n + (1 − a) w_n),    (7.6)


where a ∈ [0, 1]. It should be noted here that the only permissible split
points are between individual floating-point numbers, because it is impossible
to split anywhere else when using the floating-point representation.

7.2.4.2 Single arithmetic crossover


The single arithmetic crossover operates as follows. For two parents
v = (v₁, ..., v_n) and w = (w₁, ..., w_n), if they are crossed at the ith
position, the resulting offspring are

    v′ = (v₁, ..., v_i′, ..., v_n) and w′ = (w₁, ..., w_i′, ..., w_n),     (7.7)

where

    v_i′ = a w_i + (1 − a) v_i and w_i′ = a v_i + (1 − a) w_i.            (7.8)
Here, a is a parameter chosen so that the resulting two offspring v′ and w′
are in the convex feasible region D. Actually, as can be seen from
simple calculations, the value of a is randomly chosen as follows:

    a ∈ [max(α, β), min(γ, δ)]   if v_i > w_i
    a ∈ [0, 0]                   if v_i = w_i                          (7.9)
    a ∈ [max(γ, δ), min(α, β)]   if v_i < w_i,

where

    α = (l(w_i) − w_i)/(v_i − w_i),  β = (u(v_i) − v_i)/(w_i − v_i),   (7.10)

    γ = (l(v_i) − v_i)/(w_i − v_i),  δ = (u(w_i) − w_i)/(v_i − w_i).   (7.11)


Here, l(w_i) or u(w_i) denotes the lower or upper bound of w_i, respectively,
when the remaining components are fixed; l(v_i) or u(v_i) is similarly
defined.
The whole arithmetic crossover operates as follows. From two parents
v and w, the operator generates two offspring v′ and w′ that are convex
combinations of their parents, namely

    v′ = a w + (1 − a) v and w′ = a v + (1 − a) w,  a ∈ [0, 1].      (7.12)

The operator uses a random value a ∈ [0, 1], and the two newly generated
offspring v′ and w′ always become feasible when the feasible region D
is convex.

When a = 1/2, the operator is called average crossover, as in Davis


[44].
Observe that arithmetic crossover can be generalized into a multipar-
ent operator.
Both the single arithmetic crossover and whole arithmetic crossover
are illustrated in Figure 7.1.

Figure 7.1. Single arithmetic crossover and whole arithmetic crossover
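A minimal Python sketch of the whole arithmetic crossover (7.12) is given
below; because each offspring is a convex combination of the parents,
feasibility is preserved on a convex region D. The function name is an
illustrative choice.

    import random

    def whole_arithmetic_crossover(v, w):
        """Whole arithmetic crossover (7.12): both offspring are convex
        combinations of the parents v and w with a common random a."""
        a = random.random()  # a in [0, 1]
        v_new = [a * wi + (1.0 - a) * vi for vi, wi in zip(v, w)]
        w_new = [a * vi + (1.0 - a) * wi for vi, wi in zip(v, w)]
        return v_new, w_new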

7.2.4.3 Heuristic crossover


The heuristic crossover is a unique operator, proposed by Wright [217].
It uses values of the objective function in determining search direction
and generates only one offspring z from two parents v and w, according
to the following rule:

    z = r(w − v) + w,                                        (7.13)

where r is a random number between 0 and 1 and the parent w is not
worse than v; that is, f(w) ≤ f(v) for minimization problems.
This operator may generate an offspring vector that is not feasible.
In such a case, generate another random value r and create another
offspring. If after a prescribed number of attempts no new solution
satisfying the constraints is found, the operator generates no offspring.
It seems that heuristic crossover contributes to the precision of the
solution found; its major responsibilities are fine local tuning and search
in the most promising direction.
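The retry logic of the heuristic crossover can be sketched as follows; f and
feasible are caller-supplied callbacks, and max_attempts stands for the
prescribed number of attempts, which the text leaves unspecified.

    import random

    def heuristic_crossover(v, w, f, feasible, max_attempts=10):
        """Heuristic crossover (7.13): z = r(w - v) + w with w the not-worse
        parent (f(w) <= f(v) for minimization). Returns None when no feasible
        offspring is found within max_attempts, i.e., no offspring is made."""
        if f(w) > f(v):
            v, w = w, v  # ensure w is the not-worse parent
        for _ in range(max_attempts):
            r = random.random()
            z = [r * (wi - vi) + wi for vi, wi in zip(v, w)]
            if feasible(z):
                return z
        return None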

7.2.5 Mutation
Mutation operators are somewhat different from the traditional ones.
Following Michalewicz [112], some of them are discussed in turn.

7.2.5.1 Uniform mutation


The definition of uniform mutation is similar to that of traditional
mutation. This operator generates a single offspring v′ from a single
parent v. The operator selects a random component i ∈ {1, ..., n} of
the vector v = (v₁, ..., v_i, ..., v_n) and generates v′ = (v₁, ..., v_i′, ..., v_n).
Here, v_i′ is a random value (uniform probability distribution) from the
range (l(v_i), u(v_i)), where l(v_i) and u(v_i) denote the lower bound and
upper bound of v_i, respectively.

7.2.5.2 Boundary mutation


The boundary mutation also requires a single parent v and generates
a single offspring v′. The operator is a variation of the uniform mutation,
with v_i′ being either l(v_i) or u(v_i), each with equal probability.
Both the uniform mutation and the boundary mutation are illustrated
in Figure 7.2.

Figure 7.2. Uniform mutation and boundary mutation

7.2.5.3 Nonuniform mutation


The nonuniform mutation operator, originally proposed by Janikow
and Michalewicz [83], is designed for fine-tuning capabilities aimed at
achieving high precision. For a given parent v, if the ith element v_i is
selected for mutation, the resulting offspring is v′ = (v₁, ..., v_i′, ..., v_n),
where

    v_i′ = v_i + Δ(t, u(v_i) − v_i)   if a random digit is 0,
    v_i′ = v_i − Δ(t, v_i − l(v_i))   if a random digit is 1.         (7.14)

The function Δ(t, y) returns a value in the range [0, y] such that the
probability of Δ(t, y) being close to 0 increases as t increases, where t
is the generation number. This property causes this operator to search

the space uniformly initially (when t is small) and very locally at later
stages. In Michalewicz et al. [117], the function

    Δ(t, y) = y · r · (1 − t/T)^b                              (7.15)

is used, where r is a random number from [0, 1], T is the maximal
generation number, and b is a parameter determining the degree of
nonuniformity.
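A Python sketch of the nonuniform mutation (7.14)-(7.15) follows; lower
and upper are caller-supplied functions returning l(v_i) and u(v_i) for the
current vector, and the remaining names are illustrative.

    import random

    def nonuniform_mutation(v, i, t, T, b, lower, upper):
        """Nonuniform mutation of component i at generation t: the step
        delta(t, y) = y * r * (1 - t/T)**b shrinks toward 0 as t approaches
        the maximal generation number T, giving fine local tuning late on."""
        def delta(y):
            return y * random.random() * (1.0 - t / T) ** b

        child = list(v)
        if random.random() < 0.5:              # a random binary digit
            child[i] = v[i] + delta(upper(i) - v[i])
        else:
            child[i] = v[i] - delta(v[i] - lower(i))
        return child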

7.2.5.4 Whole nonuniform mutation


When a nonuniform mutation operator is applied to a whole solution
vector rather than to a single element of it, the whole vector is slightly
slipped in the space. In this case, the operator is called whole nonuniform
mutation.

7.3 GENOCOP III


7.3.1 Nonlinear programming problems
As discussed in the previous section, in general, a nonlinear program-
ming problem is formulated as
    minimize   f(x)
    subject to g_j(x) ≤ 0,  j = 1, ..., m_1
               h_j(x) = 0,  j = m_1 + 1, ..., m                 (7.16)
               l_i ≤ x_i ≤ u_i,  i = 1, ..., n,

where x = (x_1, ..., x_n) is an n-dimensional vector of decision variables;
f(x) is an objective function; g_j(x) ≤ 0, j = 1, ..., m_1, are m_1 inequal-
ity constraints; and h_j(x) = 0, j = m_1 + 1, ..., m, are m − m_1 equality
constraints, and these functions are assumed to be either linear or non-
linear real-valued ones. Moreover, l_i and u_i, i = 1, ..., n, are the lower
and upper bounds of the decision variables, respectively.
It should be noted here that the nonlinear programming problem
(7.16) not satisfying the convexity conditions of the objective function
and/or the feasible region becomes a non convex programming problem.
In the following, for notational convenience, we denote the feasible
region satisfying all of the constraints of the nonlinear programming
problem (7.16) by X; in other words,
    X ≜ {x ∈ R^n | g_j(x) ≤ 0, j = 1, ..., m_1;  h_j(x) = 0,
         j = m_1 + 1, ..., m;  l_i ≤ x_i ≤ u_i, i = 1, ..., n}      (7.17)
Moreover, the feasible region satisfying only the linear constraints and
the upper and lower bounds of the nonlinear programming problem
(7.16) is denoted by S.

7.3.2 Coevolutionay genetic algorithms:


GENOCOP III
GENOCOP III [118, 119], which is based on the ideas of coevolution
and repair algorithms, unlike the methods based on penalty functions,
incorporates the original GENOCOP system for linear constraints [112,
113, 116] but extends it by maintaining two separate populations, where
a development in one population influences evaluations of individuals
in the other population. The first population consists of the so-called
search points that satisfy the linear constraints S of the problem as in
the original GENOCOP system. The second population consists of the
so-called reference points that satisfy all of the constraints X of the
problem.
GENOCOP III uses the objective function for evaluation of fully fea-
sible individuals (reference points) only, so the evaluation function is not
distorted as in the penalty-based methods.
In GENOCOP III, an initial reference point is assumed to be gener-
ated randomly from individuals satisfying the lower and upper bounds.
If GENOCOP III has difficulties in locating such a reference point for
creating an initial population, the user is prompted to supply it.
Assuming that there is a search point s ∉ X, the repair process of
GENOCOP III works as follows.

(1) Select one reference point r ∈ X.

(2) Create random points z from a segment between s and r according
to z = a·s + (1 − a)·r by generating random numbers a from the range
(0, 1).

(3) Once a feasible z is found and if the evaluation of z is better than
that of r, then replace r by the point z as a new reference point.
Also, replace s by z with some probability of replacement p_r.

Such a repair process is performed for one reference point and all
search points. When a feasible region is nonconvex or a feasible region
is very small, it becomes very difficult to generate a feasible individual.
Thus, if a newly generated point z is infeasible, a process of generating
random numbers a is repeated until either a feasible point is found or
the prescribed number of iterations is reached .
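A minimal sketch of this repair step might read as follows in Python,
under the assumption that evaluate returns the objective value to be
minimized and is_fully_feasible tests all of the constraints X (both names
are hypothetical placeholders):

    import random

    def repair(s, r, is_fully_feasible, evaluate, p_r=0.2, max_iter=100):
        """One repair of a search point s against a reference point r.
        Contracts s toward r until a fully feasible z is found."""
        for _ in range(max_iter):
            a = random.random()                 # a from the range (0, 1)
            z = [a * si + (1 - a) * ri for si, ri in zip(s, r)]
            if is_fully_feasible(z):
                if evaluate(z) < evaluate(r):   # z becomes the new
                    r[:] = z                    # reference point
                if random.random() < p_r:       # replace s with
                    s[:] = z                    # probability p_r
                return z
        return None                             # no feasible z found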
In this way, two separate populations in GENOCOP III coevolve in
such a way that a development in one population influences evaluations
of individuals in the other population. Also, by evaluating reference
points directly via the objective function, fully feasible individuals can be
obtained. Although GENOCOP III can be applied to general nonconvex

programming problems, it should be noted here that at least one initial


reference point is required to create an initial population of reference
points because new individuals are created randomly from a segment
between a search point and a reference point.

7.4 Revised GENOCOP III


7.4.1 Some difficulties in GENOCOP III
GENOCOP III uses two separate populations consisting of search
points and reference points. As a result, at least one initial reference
point is required to create an initial population of reference points. Un-
fortunately, however, in GENOCOP III, because an initial population is
randomly generated from individuals satisfying the lower and the upper
bounds, it is quite difficult to generate an initial reference point for many
problems. Also, the prompt message that GENOCOP III cannot locate
an initial reference point is meaningless to the user in practice, because
in general the user does not know an appropriate initial reference point.
This is one of the major difficulties in GENOCOP III: when GENO-
COP III cannot find an initial reference point, it is impossible to continue
the following procedures.
Creating random points from a segment between a search point and
a reference point causes another difficulty in GENOCOP III, especially
in the following cases, in which
(a) the search space is very large and the feasible space is very small;
(b) an optimal solution lies in the neighborhood of the boundary of the
feasible space.
In general an optimal solution frequently lies on the boundary of fea-
sible space and, in such a case, individuals are required to be evolved
to the boundary generation after generation. Thus, case (b) happens
frequently.
Therefore, in GENOCOP III, the improvement of individuals slows
down as the generations proceed. Namely, the effectiveness and speed of
the search may deteriorate as individuals approach the boundary points
of the feasible space.
For example, as illustrated in Figure 7.3, in case (1) it is relatively
easy to find feasible individuals, but in case (2) it becomes harder to
find feasible points and hence the number of searches tends to increase.

7.4.2 Modification of GENOCOP III


In order to overcome the drawbacks of GENOCOP III discussed thus
far, we propose the revised GENOCOP III through the introduction of

Figure 7.3. Search by GENOCOP III

a generating method of an initial reference point by minimizing the sum


of squares of violated nonlinear constraints and a bisection method for
generating a new feasible point on the line segment between a search
point and a reference point efficiently.
In our revised method, like GENOCOP III, we use two separate popu-
lations; the first one consists of search points and the second one consists
of reference points. As a result, at least one initial reference point is re-
quired to create an initial population of reference points. Such an initial
reference point is not always easy to find, as in GENOCOP III, and, in
fact, none will exist if the constraints are inconsistent.
In contrast to GENOCOP III, in which an initial population is ran-
domly generated from individuals satisfying the lower and upper bounds,
in the revised GENOCOP III, an initial reference point is generated by
minimizing the sum of squares of violated nonlinear constraints.
To be more explicit, for some x ∈ S, using the index set of violated
nonlinear inequality constraints

    I_g = {j | g_j(x) > 0, j = 1, ..., q}                          (7.18)

and the index set of violated nonlinear equality constraints

    I_h = {j | h_j(x) ≠ 0, j = q + 1, ..., m},                     (7.19)

the revised method formulates an unconstrained optimization problem
that minimizes the sum of squares of violated nonlinear constraints

    minimize   Σ_{j∈I_g} (g_j(x))² + Σ_{j∈I_h} (h_j(x))²           (7.20)
      x∈S

and solves the formulated problem (7.20) for obtaining one initial refer-
ence point or yielding the information that none exists through the orig-
inal GENOCOP system [112, 113, 116] that uses the elementary opera-
tors consisting of simple crossover, whole arithmetic crossover, boundary
mutation, uniform mutation, and nonuniform mutation.
Then an initial population of reference points is created via multiple
copies of the initial reference point obtained in this way.
An initial population of search points is created randomly from indi-
viduals satisfying the lower and upper bounds determined by both the
linear constraints and the original lower and upper bounds.
In this way, two initial separate populations consisting of search points
and reference points, respectively, can be generated efficiently.
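The penalty objective of (7.20) can be written down directly. The
sketch below is illustrative, assuming the nonlinear inequality constraints
are supplied as functions satisfied when g(x) ≤ 0 and the equalities as
functions satisfied when h(x) = 0:

    def violation_sum_of_squares(x, g_list, h_list):
        """Sum of squares of violated nonlinear constraints, cf. (7.20).
        g_list: inequality constraint functions, feasible when g(x) <= 0.
        h_list: equality constraint functions, feasible when h(x) == 0."""
        total = 0.0
        for g in g_list:      # j in I_g: g_j(x) > 0 is a violation
            total += max(0.0, g(x)) ** 2
        for h in h_list:      # j in I_h: h_j(x) != 0 is a violation
            total += h(x) ** 2
        return total

Minimizing this function over S yields an initial reference point whenever
the minimum value 0 is attained, and signals that none exists otherwise.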
Similar to GENOCOP III, our revised method searches a new point
on the line segment between a search point and a reference point. To
overcome the difficulty for creating feasible points from a segment be-
tween a search point and a reference point in GENOCOP III, we propose
a bisection method for generating a new search point on the line segment
between a search point and a reference point efficiently.
In the proposed method, we consider two cases-(a) search points are
feasible individuals and (b) search points are infeasible individuals-and
present an efficient search method for generating feasible individuals.
If search points are feasible, we generate a new point on the line
segment between a search point and a reference point. In this case, if
the feasible space is convex, a newly generated point becomes feasible. If
the feasible space is nonconvex, a newly generated point does not always
become feasible. Thus we search for a feasible point using a bisection
method in the following way.
Let s̄ ∈ S and r̄ ∈ X be a search point and a reference point, respec-
tively, and set s = s̄ and r = r̄.

Step 1: Create a random point z on the segment between s and r ac-
cording to z = a·r + (1 − a)·s by generating a random number a from
the range (0, 1).

Step 2: If z is feasible, go to step 7. If infeasible, go to step 3.

Step 3: Determine a search direction in the direction of either the ref-
erence point r or the search point s with probability 1/2. For
convenience, in the following steps let it be the direction of the ref-
erence point r. Set s = z and go to step 4.

Step 4: If the distance between s and r becomes less than a sufficiently
small value, set z = r and go to step 7. Otherwise, go to step 5.

Step 5: Generate a new individual z by z = (1/2)s + (1/2)r and go to step 6.


Step 6: If z is feasible, go to step 7. If infeasible, set s = z and return
to step 4.

Step 7: If the evaluation of either z or s is better than that of r, then
replace r by the better point as a new reference point. Also, replace
the search point s̄ with either z or r with some probability of
replacement p_r.

If search points are infeasible, we first search for a boundary point t


and then generate a new point z on the line segment between a boundary
point t and a reference point r. In this case, we search for feasible points
by making use of a bisection method in the following way.

Step 1: Create a new individual t according to t = (1/2)s + (1/2)r.


Step 2: If t is feasible, set r = t and go to step 3. If infeasible, set
s = t and go to step 3.

Step 3: If the distance between s and r is less than the prescribed
sufficiently small value, set t = r as a boundary point and go to step
4. Otherwise, return to step 1.

Step 4: Using a boundary point t and a reference point r, generate a


new individual z similar to the case in which a search point is feasible,
and go to step 5.

Step 5: If the evaluation of either t or z is better than that of r, then
replace r by the better point as a new reference point. Also, replace
the search point s̄ by either t or z with some probability of
replacement p_r.

Two kinds of search algorithms discussed thus far are illustrated in


Figure 7.4.
Using the previous search algorithms, it becomes possible to search for
the boundary points on which an optimal solution often lies. Also, because
the search is performed using boundary points, it becomes possible to
search a feasible space globally. Furthermore, when randomly generated
individuals are feasible, feasible solutions can be found effectively using
the bisection method.
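For the case of a feasible search point, steps 1 through 6 might be
sketched as follows in Python. The feasibility test is_feasible is an
assumed placeholder, and for brevity the sketch always bisects toward
the reference point rather than choosing the direction with probability
1/2 as the method prescribes:

    import random

    def create_new_point(s, r, is_feasible, eps=0.001, max_bisect=50):
        # Step 1: random point z on the segment between the feasible
        # search point s and the reference point r.
        a = random.random()
        z = [a * ri + (1 - a) * si for si, ri in zip(s, r)]
        if is_feasible(z):         # step 2: done if z is feasible
            return z
        # Steps 3-6: bisect between the infeasible z and the feasible r.
        lo, hi = z, list(r)        # lo is infeasible, hi is feasible
        for _ in range(max_bisect):
            dist = sum((qi - pi) ** 2 for pi, qi in zip(lo, hi)) ** 0.5
            if dist < eps:         # step 4: interval small enough
                return hi
            mid = [(pi + qi) / 2 for pi, qi in zip(lo, hi)]  # step 5
            if is_feasible(mid):
                return mid         # step 6: feasible point found
            lo = mid               # still infeasible: shrink toward hi
        return hi

The distance parameter eps = 0.001 mirrors the value used in the
numerical experiments reported later in this section.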

7.4.3 Revised GENOCOP III


For presenting the details of the revised GENOCOP III, we will give
a brief explanation of the genetic operators.

Figure 7.4. Search by bisection method

7.4.3.1 Selection operator


We use ranking selection as a selection operator, where the population
is sorted from the best to the worst and the selection probability of
each individual is assigned according to the ranking. Among linear and
nonlinear ranking methods, the exponential ranking method is adopted
following GENOCOP III. Following the exponential ranking method,
the selection probability Pi for the individual of rank i is determined by

    p_i = c(1 − c)^{i−1},                                          (7.21)

where c ∈ (0, 1) represents the probability that an individual of rank
1 is selected. Observe that a larger value of c implies stronger selective
pressure.
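A sketch of this selection scheme, assuming larger fitness values are
better, might read:

    import random

    def exponential_ranking_selection(population, fitness, c=0.1):
        """Select one individual by exponential ranking (7.21): after
        sorting from best to worst, the individual of rank i is chosen
        with probability proportional to c * (1 - c) ** (i - 1)."""
        ranked = sorted(population, key=fitness, reverse=True)
        weights = [c * (1 - c) ** i for i in range(len(ranked))]
        return random.choices(ranked, weights=weights, k=1)[0]

The value c = 0.1 used as a default corresponds to the setting adopted
in the numerical examples of Section 7.4.4.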

7.4.3.2 Crossover and mutation operators


It should be emphasized here that the crossover and mutation operators
used in the original GENOCOP system [112, 113, 116] are closed op-
erators in the sense that the resulting offspring always satisfy the linear
constraints S. For example, arithmetic crossover for two points x, y ∈ S
yields a·x + (1 − a)·y ∈ S, 0 ≤ a ≤ 1, and the resulting offspring always
belongs to S when S is a convex set.
As genetic operators for crossover and mutation, simple crossover,
whole arithmetic crossover, heuristic crossover, boundary mutation, uni-
form and nonuniform mutations, and whole nonuniform mutation [112,
117] are adopted. These operators are selected at random.

7.4.4 Numerical examples


7.4.4.1 Numerical example for convex programming problem
Using the four test problems in Michalewicz [113], Sakawa and Yauchi
[179, 180] compared GENOCOP III with the revised GENOCOP III.
The experimental results show that the revised GENOCOP III gives
better results than GENOCOP III did with respect to the computa-
tional time and the precision of solution because of the introduction of
a generating method of an initial reference point by minimizing the sum
of squares of violated nonlinear constraints and a bisection method for
generating a new feasible point on the line segment between a search
point and a reference point efficiently.
All four test problems are summarized in Table 7.1; for each problem,
the number n of variables, the type of the objective function f, the ratio
ρ = |X ∩ S|/|S|, the number of constraints of each category (linear
inequalities LI and nonlinear inequalities NI), the number a of active
constraints at the optimum, and the optimum value of the objective
function are listed. Observe that the ratio ρ was determined experimen-
tally by generating 1,000,000 random points from S and checking whether
they belong to X. The full description of these test problems can be
found in [113].

Table 7.1. Summary of four test problems

Problem    n    Objective function    ρ (%)     LI    NI    a    Optimum

1          13   quadratic             0.0111    9     0     6    -15.000
2          8    linear                0.0010    3     3     6    7049.331
3          7    nonlinear             0.5121    0     4     2    680.630
4          10   quadratic             0.0003    3     5     6    24.306

Because of space constraints, we will only explain the experimental


results for the single-objective quadratic programming problem (#4).
    minimize   f(x) = x_1² + x_2² + x_1x_2 − 14x_1 − 16x_2 + (x_3 − 10)²
                      + 4(x_4 − 5)² + (x_5 − 3)² + 2(x_6 − 1)² + 5x_7²
                      + 7(x_8 − 11)² + 2(x_9 − 10)² + (x_10 − 7)² + 45

    subject to 105 − 4x_1 − 5x_2 + 3x_7 − 9x_8 ≥ 0
               −10x_1 + 8x_2 + 17x_7 − 2x_8 ≥ 0
               8x_1 − 2x_2 − 5x_9 + 2x_10 + 12 ≥ 0
               −3(x_1 − 2)² − 4(x_2 − 3)² − 2x_3² + 7x_4 + 120 ≥ 0
               −5x_1² − 8x_2 − (x_3 − 6)² + 2x_4 + 40 ≥ 0
               −x_1² − 2(x_2 − 2)² + 2x_1x_2 − 14x_5 + 6x_6 ≥ 0
               −0.5(x_1 − 8)² − 2(x_2 − 4)² − 3x_5² + x_6 + 30 ≥ 0
               3x_1 − 6x_2 − 12(x_9 − 8)² + 7x_10 ≥ 0
               −10.0 ≤ x_i ≤ 10.0,  i = 1, ..., 10
This problem is to minimize the quadratic objective function under
the three linear and five nonlinear constraints. The global optimal solu-
tion of this problem is

x* = (2.171996,2.363683,8.773926,5.095984,0.9906548,
1.430574,1.321644,9.828726,8.280092,8.375927)
f(x*) = 24.3062091,

where the first six constraints are active at the global minimum.
For this numerical example, the parameter values of the revised GENO-
COP III are set as follows: both population sizes are 70, the replacement
probability is p_r = 0.2, c = 0.1 in the exponential ranking selection, the
number of generations is 5000, and the trials are performed 10 times.
In all trials, the same initial populations, operators, and probabilities
of all operators are used.
The maximum number of searches for generating an initial reference
point in GENOCOP III is set to be 100, and the distance parameter for
judging a boundary point in a bisection method is set to be 0.001.
The obtained solutions with computational times are shown in Table
7.2.

Table 7.2. Result for convex example

GENOCOP III revised GENOCOP III


best solution 25.328 24.556
worst solution 32.019 30.201
average 28.943 27.538
mean computational time (second) 733.388 233.950

From Table 7.2 it can be seen that the obtained optimal solutions
of the revised GENOCOP III are slightly better than those of GENO-
COP III. Furthermore, the mean computational time of the revised
GENOCOP III is about one-third that of GENOCOP III.

7.4.4.2 Numerical example for nonconvex programming


problem
Now it is appropriate to compare GENOCOP III with the revised
GENOCOP III using general nonconvex programming problems. For

that purpose, Sakawa and Yauchi [179, 180] used the following single-
objective nonconvex programming problem.

    minimize   f(x) = x_1³ + (x_2 − 5)² + 3(x_3 − 9)² − 12x_3 + 2x_4³ + 4x_5²
                      + (x_6 − 5)² − 6x_7 + 3(x_7 − 2)x_8² − x_9x_10
                      + 4x_9² + 5x_1x_3 − 3x_1x_7 + 2x_8x_7

    subject to −3(x_1 − 2)² − 4(x_2 − 3)² − 2x_3² + 7x_4 − 2x_5x_6x_8
                      + 120 ≥ 0
               −5x_1² − 8x_2 − (x_3 − 6)² + 2x_4 + 40 ≥ 0
               −x_1² − 2(x_2 − 2)² + 2x_1x_2 − 14x_5 − 6x_5x_6 ≥ 0
               −0.5(x_1 − 8)² − 2(x_2 − 4)² − 3x_5² + x_5x_8 + 30 ≥ 0
               3x_1 − 6x_2 − 12(x_9 − 8)² + 7x_10 ≥ 0
               4x_1 + 5x_2 − 3x_7 + 9x_8 ≤ 105
               10x_1 − 8x_2 − 17x_7 + 2x_8 ≤ 0
               −8x_1 + 2x_2 + 5x_9 − 2x_10 ≤ 12
               −5.0 ≤ x_i ≤ 10.0,  i = 1, ..., 10

The parameter values of the revised GENOCOP III are set to the
same values as in the quadratic programming example.
The required numbers of searches and the obtained solutions are shown
in Table 7.3 and Table 7.4, respectively.

Table 7.3. Comparison of number of searches for nonconvex example


GENOCOP III revised GENOCOP III
maximum search number 100 30
average search number 52.263 11.450

From Table 7.3, the required search number of the revised GENOCOP
III is much smaller than that of GENOCOP III, and in the worst case,
GENOCOP III sometimes requires the maximum search number 100,
which means that an initial feasible point cannot be located.
Naturally, the difference of the search numbers between GENOCOP
III and the revised GENOCOP III has great influence on the computa-
tion time shown in Table 7.4.
As can be seen from Table 7.4, the revised GENOCOP III gives better
results than GENOCOP III does.
Furthermore, for comparing the generation methods of an initial fea-
sible point, 10 trials are performed for both the revised GENOCOP III
and the GENOCOP III. The experimental results show that the revised

Table 7.4. Comparison of optimal solutions for nonconvex example


GENOCOP III revised GENOCOP III
best solution -160.163 -216.602
worst solution 7.513 -13.531
average -48.975 -124.312
mean computational time (second) 763.918 275.193

GENOCOP III succeeds 10 times out of 10 trials, whereas GENOCOP III
generates no initial feasible solution at all.
As discussed thus far, because of the introduction of a generating
method of an initial reference point and a bisection method for gener-
ating a new feasible point, the revised GENOCOP III can perform a
global search and save computation time.
Finally, it is significant to point out here that, through the intro-
duction of a homomorphous mapping between an n-dimensional cube and
the feasible search space, the GENOCOP III presented in this chapter
has recently been extended by Koziel and Michalewicz [100, 101]. This
approach constitutes an example of the fifth decoder-based category of
constraint handling techniques, and hence is called GENOCOP V. Ob-
serving that GENOCOP V uses the original GENOCOP III engine, as
discussed in this chapter, it is recommended to use the revised GENO-
COP III instead of GENOCOP III.

7.5 Conclusion
In this chapter, we focused on general nonlinear programming prob-
lems and considered an applicability of the coevolutionary genetic al-
gorithm, called GENOCOP III. Unfortunately, however, in GENOCOP
III, because an initial population is randomly generated, it is quite dif-
ficult to generate reference points. Furthermore, a new search point is
randomly generated on the line segment between a search point and a
reference point and effectiveness and speed of search may be quite low.
In order to overcome such drawbacks of GENOCOP III, we proposed
the revised GENOCOP III by introducing a method for generating a
reference point by minimizing the sum of squares of violated nonlinear
constraints and a bisection method for generating a new search point
on the line segment between a search point and a reference point. Il-
lustrative numerical examples demonstrated both the feasibility and the
effectiveness of the proposed method.
Chapter 8

FUZZY MULTIOBJECTIVE NONLINEAR


PROGRAMMING

In this chapter, attention is focused on not only multiobjective nonlin-


ear programming (MONLP) problems but also MONLP problems with
fuzzy numbers. Along the same lines as in Chapters 4 and 6, through
the revised GENOCOP III, some refined interactive fuzzy MONLP and
fuzzy MONLP with fuzzy numbers are developed for deriving a satisfic-
ing solution for the decision maker.

8.1 Introduction
In the late 1990s, Sakawa and Yauchi [181] formulated nonconvex
MONLP problems and presented an interactive fuzzy satisficing method
through the revised GENOCOP III introduced in the previous chap-
ter. Having determined the fuzzy goals of the decision maker (DM) for
the objective functions, if the DM specifies the reference membership
values, the corresponding Pareto optimal solutions can be obtained by
solving the augmented minimax problems for which the revised GENO-
COP III is effectively applicable. An interactive fuzzy satisficing method
for deriving a satisficing solution for the decision maker from a Pareto
optimal solution set is presented. Furthermore, by considering the ex-
perts' vague or fuzzy understanding of the nature of the parameters
in the problem-formulation process, multiobjective nonconvex program-
ming problems with fuzzy numbers are formulated. Using the a-level
sets of fuzzy numbers, the corresponding nonfuzzy a-multiobjective pro-
gramming and an extended Pareto optimality concept were introduced.
Sakawa and Yauchi [180, 182, 183] then presented interactive decision-
making methods through the revised GENOCOP III, both without and
with the fuzzy goals of the DM, to derive a satisficing solution for the

DM efficiently from an extended Pareto optimal solution set as a gener-


alization of their previous results.

8.2 Multiobjective nonlinear programming


8.2.1 Problem formulation and solution concept
In general, a multiobjective nonlinear programming (MONLP) prob-
lem is formulated as

    minimize    f(x) = (f_1(x), f_2(x), ..., f_k(x))
    subject to  g_j(x) ≤ 0,  j = 1, ..., m_1                        (8.1)
                h_j(x) = 0,  j = m_1 + 1, ..., m
                l_i ≤ x_i ≤ u_i,  i = 1, ..., n

where x = (x_1, ..., x_n) is an n-dimensional vector of decision variables;
f_i(x), i = 1, ..., k, are k conflicting objective functions; g_j(x) ≤ 0,
j = 1, ..., m_1, are m_1 inequality constraints; and h_j(x) = 0, j =
m_1 + 1, ..., m, are m − m_1 equality constraints, and these functions
are assumed to be either linear or nonlinear real-valued ones. Moreover,
l_i and u_i, i = 1, ..., n, are the lower and the upper bounds of the decision
variables, respectively.
It should be noted here that if the convexity conditions of the objective
functions and/or the feasible region are not satisfied, the MONLP (8.1)
becomes a nonconvex MONLP problem.
In the following, for notational convenience, we denote the feasible
region satisfying all of the constraints of the MONLP (8.1) by X, in
other words,
    X ≜ {x ∈ R^n | g_j(x) ≤ 0, j = 1, ..., m_1;  h_j(x) = 0,
         j = m_1 + 1, ..., m;  l_i ≤ x_i ≤ u_i, i = 1, ..., n}       (8.2)
Moreover, the feasible region satisfying only the linear constraints and
the upper and lower bounds of the MONLP (8.1) is denoted by S.
In general, for multiobjective programming problems, a complete opti-
mal solution that simultaneously minimizes all of the multiple objective
functions does not always exist when the objective functions conflict
with each other. Thus, instead of a complete optimal solution, Pareto
optimality is introduced in multiobjective programming problems [135].
DEFINITION 8.1 (PARETO OPTIMAL SOLUTION)
x* ∈ X is said to be a Pareto optimal solution to the MONLP (8.1) if and
only if there does not exist another x ∈ X such that f_i(x) ≤ f_i(x*) for
all i and f_j(x) ≠ f_j(x*) for at least one j.
In practice, however, because only local optimal solutions are guaran-
teed in solving a single-objective nonlinear programming problem by any
available nonlinear programming technique, unless the problem is con-
vex, the local Pareto optimality concept is also defined for the MONLP
(8.1).

DEFINITION 8.2 (LOCAL PARETO OPTIMAL SOLUTION)


x* ∈ X is said to be a local Pareto optimal solution to the MONLP
(8.1) if and only if there exists a real number δ > 0 such that x* is
Pareto optimal in X ∩ N(x*, δ); i.e., there does not exist another x ∈
X ∩ N(x*, δ) such that f_i(x) ≤ f_i(x*) for all i and f_j(x) ≠ f_j(x*) for
at least one j, where N(x*, δ) denotes the δ neighborhood of x* defined
by {x ∈ R^n | ‖x − x*‖ < δ}.

8.2.2 Fuzzy goals


For the MONLP (8.1), considering the vague or fuzzy nature of human
judgments, it is quite natural to assume that the DM may have a fuzzy
goal for each of the objective functions f_i(x). In a minimization problem,
the fuzzy goal stated by the DM may be to achieve "substantially less
than or equal to some value p_i." This type of statement can be quantified
by eliciting a corresponding membership function μ_i(f_i(x)), which is a
strictly monotone decreasing function with respect to f_i(x).
In the fuzzy approaches, however, we can further treat a more general
MONLP problem in which the DM has two types of fuzzy goals, namely
fuzzy goals expressed as "f_i(x) should be in the vicinity of r_i" (called
fuzzy equal) as well as "f_i(x) should be substantially less than or equal
to p_i or greater than or equal to q_i" (called fuzzy min or fuzzy max).
Such a generalized multiobjective nonlinear programming problem
(GMONLP) can be expressed as

    fuzzy min    f_i(x),  i ∈ I_1
    fuzzy max    f_i(x),  i ∈ I_2                                   (8.3)
    fuzzy equal  f_i(x),  i ∈ I_3
    subject to   x ∈ X

where I_1 ∪ I_2 ∪ I_3 = {1, 2, ..., k}, I_i ∩ I_j = ∅, i, j = 1, 2, 3, i ≠ j.


To elicit a membership function μ_i(f_i(x)) from the DM for a fuzzy
goal such as "f_i(x) should be in the vicinity of r_i," it is obvious that we
can use different functions for the left and right sides of r_i.
When the fuzzy equal is included in the fuzzy goals of the DM, it is de-
sirable that f_i(x) should be as close to r_i as possible. Consequently, the
notion of Pareto optimal solutions defined in terms of objective functions
cannot be applied. For this reason, the concept of (local) M-Pareto opti-
mal solutions that are defined in terms of membership functions instead
of objective functions is introduced, where M refers to membership.

DEFINITION 8.3 ((LOCAL) M-PARETO OPTIMAL SOLUTION)


x* ∈ X is said to be a (local) M-Pareto optimal solution to the
GMONLP (8.3) if and only if there does not exist another x ∈ X
(∩N(x*, δ)) such that μ_i(f_i(x)) ≥ μ_i(f_i(x*)) for all i and μ_j(f_j(x)) ≠
μ_j(f_j(x*)) for at least one j.
Unfortunately, however, (local) M-Pareto optimal solutions consist of
an infinite number of points, and thus the DM must select a (local) final
solution from (local) (M-) Pareto optimal solutions as the satisficing
solution.

8.2.3 Interactive fuzzy multiobjective programming


After determining the membership functions μ_i(f_i(x)) for each of
the objective functions f_i(x), i = 1, ..., k, for the MONLP (8.1) or
GMONLP (8.3), the DM is asked to specify the reference membership
levels for all the membership functions. For the DM's reference member-
ship levels μ̄_i, i = 1, ..., k, the corresponding (local) (M-) Pareto optimal
solution, which is, in the minimax sense, nearest to the requirement or
better than that if the reference membership levels are attainable, is
obtained by solving the minimax problem

    minimize    max    {μ̄_i − μ_i(f_i(x))}                          (8.4)
      x∈X     i=1,...,k

To circumvent the necessity to perform the (local) (M-) Pareto opti-


mality tests in the minimax problems, use of augmented minimax prob-
lems is recommended instead of minimax problems.

    minimize    max    {μ̄_i − μ_i(f_i(x)) + ρ Σ_{i=1}^{k} (μ̄_i − μ_i(f_i(x)))}   (8.5)
      x∈X     i=1,...,k

The term augmented is adopted because the term ρ Σ_{i=1}^{k} (μ̄_i − μ_i(f_i(x)))
is added to the standard minimax problem, where ρ is a sufficiently small
positive scalar.
Although, for the nonconvex MONLP, the augmented minimax prob-
lem (8.5) involves nonconvexity, if we define the fitness function

    f(s) = 1.0 + kρ −    max    {(μ̄_i − μ_i(f_i(x))) + ρ Σ_{i=1}^{k} (μ̄_i − μ_i(f_i(x)))}
                       i=1,...,k
                                                                    (8.6)
for each string s, the revised GENOCOP III [179, 181] proposed by
Sakawa and Yauchi is applicable.
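For illustration, the fitness (8.6) can be computed as in the following
sketch; memberships is assumed to be a list of functions, each already
composing μ_i with f_i (a hypothetical arrangement, not the actual
GENOCOP data structure):

    def fitness(x, memberships, ref_levels, rho=0.0001):
        """Fitness (8.6) for a string representing the solution x.
        memberships: functions, each returning mu_i(f_i(x)).
        ref_levels:  reference membership levels mu-bar_i.
        Larger fitness corresponds to a better solution of (8.5)."""
        k = len(memberships)
        devs = [mbar - mu(x) for mu, mbar in zip(memberships, ref_levels)]
        return 1.0 + k * rho - (max(devs) + rho * sum(devs))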

The algorithm of the revised GENOCOP III for solving the augmented
minimax problem (8.5) can be summarized as follows.

Step 1: Generate two separate initial populations.

Step 2: Apply the crossover and mutation operators to the population


of search points.

Step 3: Create a new point z on a segment between a search point


and a reference point using a bisection method, and perform the
replacement procedure.

Step 4: After evaluating the individuals, apply the selection operator


for generating individuals of the next generation.

Step 5: If termination conditions are satisfied, stop. Otherwise, return


to step 2.

We are now ready to propose an interactive algorithm for deriving


a satisficing solution for the DM to the MONLP (8.1) or GMONLP
(8.3) by incorporating the revised GENOCOP III into interactive fuzzy
satisficing methods [135]. The steps marked with an asterisk involve
interaction with the DM.

Interactive fuzzy multiobjective nonlinear programming


Step 0: Calculate the individual minimum and maximum of each ob-
jective function under the given constraints.

Step 1*: Elicit a membership function μ_i(f_i(x)) from the DM for each
of the objective functions by considering the calculated individual
minimum and maximum of each objective function.

Step 2: Set the initial reference membership levels to be 1.

Step 3: Generate two separate initial populations.

Step 4: Apply the crossover and mutation operators to the population


of search points.

Step 5: Using a bisection method, create a new point on a segment


between a search point and a reference point and perform the re-
placement procedure.

Step 6: Having evaluated the individuals via fitness function, apply the
selection operator for generating individuals of the next generation.

Step 7: If termination conditions are satisfied, go to step 8. Otherwise,


return to step 4.

Step 8*: If the DM is satisfied with the current solution, stop. Oth-
erwise, ask the DM to update the reference membership values and
return to step 3.

8.2.4 Numerical example


To demonstrate the feasibility and efficiency of the proposed inter-
active fuzzy satisficing method, consider the following three-objective
nonconvex programming problem with 10 variables and 8 linear and
nonlinear constraints and lower and upper bounds.

    minimize   f_1(x) = 7x_1² − x_2² + x_1x_2 − 14x_1 − 16x_2 + 8(x_3 − 10)²
                        + 4(x_4 − 5)² + (x_5 − 3)² + 2(x_6 − 1)² + 5x_7²
                        + 7(x_8 − 11)² + 2(x_9 − 10)² + x_10² + 45
    minimize   f_2(x) = (x_1 − 5)² + 5(x_2 − 12)² + 0.5x_3⁴ + 3(x_4 − 11)²
                        + 0.2x_5⁶ + 7x_6² + 0.1x_7⁴ − 4x_6x_7 − 10x_6 − 8x_7
                        + x_8² + 3(x_9 − 5)² + (x_10 − 5)²
    minimize   f_3(x) = x_1³ + (x_2 − 5)² + 3(x_3 − 9)² − 12x_3 + 2x_4³ + 4x_5²
                        + (x_6 − 5)² + 6x_7² + 3(x_7 − 2)x_8² − x_9x_10 + 4x_9²
                        + 5x_1 − 8x_1x_7
    subject to −3(x_1 − 2)² − 4(x_2 − 3)² − 2x_3² + 7x_4 − 2x_5x_6x_8
                        + 120 ≥ 0
               −5x_1² − 8x_2 − (x_3 − 6)² + 2x_4 + 40 ≥ 0
               −x_1² − 2(x_2 − 2)² + 2x_1x_2 − 14x_5 − 6x_5x_6 ≥ 0
               −0.5(x_1 − 8)² − 2(x_2 − 4)² − 3x_5² + x_5x_8 + 30 ≥ 0
               3x_1 − 6x_2 − 12(x_9 − 8)² + 7x_10 ≥ 0
               4x_1 + 5x_2 − 3x_7 + 9x_8 ≤ 105
               10x_1 − 8x_2 − 17x_7 + 2x_8 ≤ 0
               −8x_1 + 2x_2 + 5x_9 − 2x_10 ≤ 12
               −5.0 ≤ x_i ≤ 10.0,  i = 1, ..., 10

The parameter values of the revised GENOCOP III are set to the
same values as in the nonconvex nonlinear programming example. The
coefficient ρ of the augmented minimax problem is set as 0.0001.
After calculating the individual minimum and maximum of the objec-
tive functions, assume that the DM subjectively determines the mem-
bership functions for the objective functions as

    μ_1(f_1(x)) = (1500 − f_1(x)) / 1410
    μ_2(f_2(x)) = (3500 − f_2(x)) / 3150
    μ_3(f_3(x)) = (3000 − f_3(x)) / 2950
For this numerical example, at each interaction with the DM, the
corresponding augmented minimax problem is solved through the revised
GENOCOP III for obtaining a Pareto optimal solution.
As shown in Table 8.1, in this example, the reference membership
values (μ̄_1, μ̄_2, μ̄_3) are updated from (1.0, 1.0, 1.0) to (0.8, 1.0, 1.0),
(0.8, 1.0, 0.9), and (0.8, 1.0, 0.95) sequentially.
In the whole interaction processes as shown in Table 8.1, the aug-
mented minimax problem is solved for the initial reference membership
levels and the DM is supplied with the corresponding Pareto optimal
solution and membership values as is shown in interaction 1 of Table
8.1. On the basis of such information, because the DM is not satisfied
with the current membership values (0.81766, 0.81766, 0.81765), the
DM updates the reference membership values to μ̄_1 = 0.80, μ̄_2 = 1.0,
and μ̄_3 = 1.0 for improving the satisfaction levels for μ_2 and μ_3 at the
expense of μ_1. For the updated reference membership values, the cor-
responding augmented minimax problem yields the Pareto optimal so-
lution and membership values as is shown in interaction 2 of Table 8.1.
The same procedure continues in this manner until the DM is satisfied
with the current values of the membership functions. In this example,
after updating the reference membership values (μ̄_1, μ̄_2, μ̄_3) three times,
at the fourth interaction the satisficing solution of the DM is derived
and the entire interactive process is summarized in Table 8.1.

8.3 Multiobjective nonlinear programming problem


with fuzzy numbers
8.3.1 Problem formulation and solution concept
As discussed in the previous section, the problem to optimize multiple
conflicting nonlinear objective functions simultaneously under the given
nonlinear constraints is called the multiobjective nonlinear programming
(MONLP) problem and is formulated as (8.1).
However, it would certainly be more appropriate to consider that
the possible values of the parameters in the description of the objec-
tive functions and the constraints of the MONLP (8.1), although they

Table 8.1. Interactive processes for multiobjective nonconvex example


Interaction     1          2          3          4
μ̄_1             1          0.8        0.8        0.8
μ̄_2             1          1          1          1
μ̄_3             1          1          0.9        0.95
f_1(x)          347.101    538.983    496.228    512.934
f_2(x)          924.365    761.788    628.111    669.255
f_3(x)          587.942    435.589    604.252    496.469
μ_1(x)          0.81766    0.68157    0.71190    0.70005
μ_2(x)          0.81766    0.86927    0.91171    0.89865
μ_3(x)          0.81765    0.86929    0.81212    0.84866

are fixed at some values in the conventional approaches, usually involve


the ambiguity of the experts' understanding of the real system in the
problem-formulation process. For this reason, in this chapter, we con-
sider the following multiobjective nonlinear programming problem with
fuzzy numbers (MONLP-FN)

    minimize    f(x, ã) = (f_1(x, ã_1), ..., f_k(x, ã_k))
    subject to  g_j(x, b̃_j) ≤ 0,  j = 1, ..., m_1                   (8.7)
                h_j(x, b̃_j) = 0,  j = m_1 + 1, ..., m
                l_i ≤ x_i ≤ u_i,  i = 1, ..., n

where x = (x_1, ..., x_n) is an n-dimensional vector of decision variables;
f_i(x, ã_i), i = 1, ..., k, are k conflicting objective functions; g_j(x, b̃_j) ≤
0, j = 1, ..., m_1, are m_1 inequality constraints; and h_j(x, b̃_j) = 0,
j = m_1 + 1, ..., m, are m − m_1 equality constraints, and l_i and u_i,
i = 1, ..., n, are lower and upper bounds on the decision variables. Further-
more, ã_i = (ã_i1, ..., ã_ik_i) and b̃_j = (b̃_j1, ..., b̃_jm_j) represent vectors
of fuzzy numbers involved in the ith objective function and the jth
constraint function, respectively. These fuzzy numbers, which reflect
the experts' ambiguous understanding of the nature of the parameters
in the problem-formulation process, are assumed to be characterized
by their membership functions μ_ã_i(a_i) = (μ_ã_i1(a_i1), ..., μ_ã_ik_i(a_ik_i)) and
μ_b̃_j(b_j) = (μ_b̃_j1(b_j1), ..., μ_b̃_jm_j(b_jm_j)), respectively.

Because the MONLP-FN involves fuzzy numbers in both the objective


functions and the constraints, it is necessary to extend the notion of usual
Pareto optimality in some sense. For that purpose, we first introduce
the following α-level set [135] for the vectors of fuzzy numbers ã_i and b̃_j.

DEFINITION 8.4 (α-LEVEL SET)

The α-level set of the vectors of fuzzy numbers ã_i and b̃_j is defined
as the ordinary set (ã, b̃)_α for which the degree of their membership
functions exceeds the level α:

    (ã, b̃)_α = {(a, b) | μ_ã_ir(a_ir) ≥ α, μ_b̃_js(b_js) ≥ α, i = 1, ..., k,
                r = 1, ..., k_i, j = 1, ..., m, s = 1, ..., m_j},

where a_i = (a_i1, ..., a_ik_i), b_j = (b_j1, ..., b_jm_j), a = (a_1, ..., a_k), and
b = (b_1, ..., b_m).
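For the triangular fuzzy numbers used in the numerical example later
in this chapter, the α-level set of a single coefficient reduces to a closed
interval. The following sketch (a hypothetical helper written for
illustration) shows the computation:

    def alpha_level_interval(L, M, R, alpha):
        """Closed interval forming the alpha-level set of a triangular
        fuzzy number with left point L, mean M, and right point R."""
        return (L + alpha * (M - L), R - alpha * (R - M))

    # At alpha = 1 the interval degenerates to the mean value M, e.g.,
    # for a triangular fuzzy number (-14.5, -14, -13.5):
    assert alpha_level_interval(-14.5, -14.0, -13.5, 1.0) == (-14.0, -14.0)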

For a certain degree α, the MONLP-FN can be interpreted as the
following nonfuzzy α-MONLP problem [135, 174, 177, 178]:

    minimize    f(x, a) = (f_1(x, a_1), ..., f_k(x, a_k))
    subject to  g_j(x, b_j) ≤ 0,  j = 1, ..., m_1
                h_j(x, b_j) = 0,  j = m_1 + 1, ..., m                (8.8)
                l_i ≤ x_i ≤ u_i,  i = 1, ..., n
                (a, b) ∈ (ã, b̃)_α

It must be observed here that in the α-MONLP the parameters (a, b)
are treated as decision variables rather than as constants.
Through the introduction of the α-MONLP, we can now define the
concepts of α-Pareto optimality and local α-Pareto optimality, where
X(b) denotes the feasible region satisfying all of the constraints of the
α-MONLP with respect to x, as follows.

DEFINITION 8.5 ((LOCAL) α-PARETO OPTIMAL SOLUTION)

x* ∈ X(b*) is said to be an α-Pareto optimal solution to the α-MONLP
if and only if there does not exist another x ∈ X(b) (∩N(x*; δ)) and
(a, b) ∈ (ã, b̃)_α (∩N(a*, b*; δ′)) such that f_i(x, a_i) ≤ f_i(x*, a_i*), i =
1, ..., k, with strict inequality holding for at least one i, where the corre-
sponding values of parameters a* and b* are called α-level (local) optimal
parameters.

It should be noted here that (local) α-Pareto optimal solutions can be
obtained through a direct application of the usual scalarizing methods
for generating (local) Pareto optimal solutions just by regarding the
decision variables in the α-MONLP as (x, a, b).

8.3.2 Fuzzy goals


For the α-MONLP (8.8), considering the vague or fuzzy nature of
human judgments, it is quite natural to assume that the DM may have
a fuzzy goal for each of the objective functions f_i(x, a_i). In a mini-
mization problem, the fuzzy goal stated by the DM may be to achieve
"substantially less than or equal to some value p_i." This type of state-
ment can be quantified by eliciting a corresponding membership function
μ_i(f_i(x, a_i)) that is a strictly monotone decreasing function with respect
to f_i(x, a_i).
In the fuzzy approaches, however, we can further treat a more general
MONLP problem in which the DM has two types of fuzzy goals, namely
fuzzy goals expressed as "f_i(x, a_i) should be in the vicinity of r_i" (fuzzy
equal) as well as "f_i(x, a_i) should be substantially less than or equal to p_i
or greater than or equal to q_i" (fuzzy min or fuzzy max).
Such a generalized α-MONLP (Gα-MONLP) problem can be ex-
pressed as

    fuzzy min    f_i(x, a_i),  i ∈ I_1
    fuzzy max    f_i(x, a_i),  i ∈ I_2
    fuzzy equal  f_i(x, a_i),  i ∈ I_3                              (8.9)
    subject to   x ∈ X(b)
                 (a, b) ∈ (ã, b̃)_α

where I_1 ∪ I_2 ∪ I_3 = {1, 2, ..., k}, I_i ∩ I_j = ∅, i, j = 1, 2, 3, i ≠ j.
To elicit a membership function μ_i(f_i(x, a_i)) from the DM for a fuzzy
goal such as "f_i(x, a_i) should be in the vicinity of r_i," it is obvious that
we can use different functions for the left and right sides of r_i.
When the fuzzy equal is included in the fuzzy goals of the DM, it
is desirable that f_i(x, a_i) be as close to r_i as possible. Consequently,
the notion of α-Pareto optimal solutions defined in terms of objective
functions cannot be applied. For this reason, the concept of (local) M-
α-Pareto optimal solutions, which are defined in terms of membership
functions instead of objective functions, is introduced, where M refers
to membership.

DEFINITION 8.6 ((LOCAL) M-o:-PARETO OPTIMAL SOLUTION)


x* ∈ X(b*) is said to be a (local) M-α-Pareto optimal solution to
the Gα-MONLP (8.9) if and only if there does not exist another x ∈
X(b) (∩N(x*, δ)) and (a, b) ∈ (ã, b̃)_α (∩N(a*, b*; δ′)) such that
μ_i(f_i(x, a_i)) ≥ μ_i(f_i(x*, a_i*)) for all i and μ_j(f_j(x, a_j)) ≠
μ_j(f_j(x*, a_j*)) for at least one j, where the corresponding values of pa-
rameters a* and b* are called α-level (local) optimal parameters.

Unfortunately, however, (local) M-α-Pareto optimal solutions consist of
an infinite number of points, and thus the DM must select a (local) final
solution from (local) (M-) α-Pareto optimal solutions as the satisficing
solution.

8.3.3 Interactive fuzzy multiobjective programming


After determining the membership functions μ_i(f_i(x, a_i)) for each of
the objective functions f_i(x, a_i), i = 1, ..., k, for the α-MONLP (8.8)
or Gα-MONLP (8.9), the DM is asked to specify the reference mem-
bership levels for all the membership functions and the degree α. For
the DM's reference membership levels μ̄_i, i = 1, ..., k, and the degree
α, the corresponding (local) (M-) α-Pareto optimal solution that is, in
the minimax sense, nearest to the requirement or better than that if the
reference membership levels are attainable, is obtained by solving the
minimax problem

    minimize         max    {μ̄_i − μ_i(f_i(x, a_i))}               (8.10)
     x∈X(b),       i=1,...,k
     (a,b)∈(ã,b̃)_α

To circumvent the necessity to perform the (local) (M-) α-Pareto opti-
mality tests in the minimax problems, use of augmented minimax prob-
lems instead of minimax problems is recommended.

    minimize         max    {μ̄_i − μ_i(f_i(x, a_i)) + ρ Σ_{i=1}^{k} (μ̄_i − μ_i(f_i(x, a_i)))}
     x∈X(b),       i=1,...,k
     (a,b)∈(ã,b̃)_α
                                                                    (8.11)

Note that the term augmented is adopted because the term ρ Σ_{i=1}^{k} (μ̄_i −
μ_i(f_i(x, a_i))) is added to the standard minimax problem, where ρ is a
sufficiently small positive scalar.
Although, for the nonconvex MONLP, the augmented minimax prob-
lem (8.11) involves nonconvexity, if we define the fitness function

    f(s) = 1.0 + kρ −    max    {(μ̄_i − μ_i(f_i(x, a_i))) + ρ Σ_{i=1}^{k} (μ̄_i − μ_i(f_i(x, a_i)))}
                       i=1,...,k
                                                                    (8.12)
for each string s, the revised GENOCOP III [179, 181] proposed by
Sakawa and Yauchi is applicable.
The algorithm of the revised GENOCOP III for solving the augmented
minimax problem (8.11) can be summarized as follows.
Step 1: Generate two separate initial populations.
Step 2: Apply the crossover and mutation operators to the population
of search points.
Step 3: Create a new point z on a segment between a search point and a
reference point using a bisection method and perform the replacement
procedure.

Step 4: After evaluating the individuals, apply the selection operator


for generating individuals of the next generation.

Step 5: If termination conditions are satisfied, stop. Otherwise, return


to step 2.

We are now ready to propose an interactive algorithm for deriving a


satisficing solution for the DM to the O:'-MONLP (8.8) or Ga-MONLP
(8.9) by incorporat.ing the revised GENOCOP III into interactive fuzzy
satisficing methods [135]. The steps marked with an asterisk involve
interaction with the DM.

Interactive fuzzy multiobjective nonlinear programming with


fuzzy numbers

Step 0: Calculate the individual minimum and maximum of each ob-
jective function under the given constraints for α = 0 and α = 1.

Step 1*: Elicit a membership function μ_i(f_i(x, a_i)) from the DM for


each of the objective functions by considering the calculated individ-
ual minimum and maximum of each objective function.

Step 2*: Ask the DM to select the initial value of α (0 ≤ α ≤ 1) and
set the initial reference membership levels to be 1.

Step 3: Generate two separate initial populations.

Step 4: Apply the crossover and mutation operators to the population


of search points.

Step 5: Using a bisection method, create a new point on a segment


between a search point and a reference point and perform the re-
placement procedure.

Step 6: Having evaluated the individuals via fitness function, apply the
selection operator for generating individuals of the next generation.

Step 7: If termination conditions are satisfied, go to step 8. Otherwise,


return to step 4.

Step 8*: If the DM is satisfied with the current solution, stop. Other-
wise, ask the DM to update the reference membership values and/or
the degree α and return to step 3.

8.3.4 Numerical example


To demonstrate the feasibility and efficiency of the proposed inter-
active fuzzy satisficing method, consider the following three-objective
nonconvex programming problem involving fuzzy numbers.

    minimize   f_1(x, ã_1) = 7x_1² − x_2² + x_1x_2 + ã_11x_1 + ã_12x_2 + 8(x_3 − 10)²
                             + ã_13(x_4 − 5)² + (x_5 − 3)² + 2(x_6 − 1)² + ã_14x_7²
                             + ã_15(x_8 − 11)² + 2(x_9 − 10)² + x_10² + 45
    minimize   f_2(x, ã_2) = (x_1 − 5)² + ã_21(x_2 − 12)² + 0.5x_3⁴ + ã_22(x_4 − 11)²
                             + 0.2x_5⁶ + ã_23x_6² + 0.1x_7⁴ + ã_24x_6x_7
                             + ã_25x_6 − 8x_7 + x_8² + 3(x_9 − 5)² + (x_10 − 5)²
    minimize   f_3(x, ã_3) = x_1³ + (x_2 − 5)² + ã_31(x_3 − 9)² + ã_32x_3
                             + 2x_4³ + ã_33x_5² + (x_6 − 5)² + ã_34x_7²
                             + ã_35(x_7 − 2)x_8² − x_9x_10 + 4x_9² + 5x_1 − 8x_1x_7
    subject to b̃_11(x_1 − 2)² − 4(x_2 − 3)² − 2x_3² + 7x_4
                             + b̃_12x_5x_6x_8 + 120 ≥ 0
               −5x_1² + b̃_21x_2 − (x_3 − 6)² + b̃_22x_4 + 40 ≥ 0
               −x_1² − 2(x_2 − 2)² + b̃_31x_1x_2 + b̃_32x_5 − 6x_5x_6 ≥ 0
               b̃_41(x_1 − 8)² − 2(x_2 − 4)² − 3x_5² + b̃_42x_5x_8
                             + 30 ≥ 0
               3x_1 − 6x_2 + b̃_51(x_9 − 8)² + b̃_52x_10 ≥ 0
               b̃_61x_1 + 5x_2 + b̃_62x_7 + 9x_8 ≤ 105
               10x_1 + b̃_71x_2 + b̃_72x_7 + 2x_8 ≤ 0
               −8x_1 + 2x_2 + 5x_9 + b̃_81x_10 ≤ 12
               −5.0 ≤ x_i ≤ 10.0,  i = 1, ..., 10

For simplicity, it is assumed here that all of the membership functions
for the fuzzy numbers involved in this example are triangular ones, as
shown in Table 8.2, where L, M, and R denote the left, mean, and right
points of the fuzzy numbers, respectively.
The parameter values of the revised GENOCOP III are set to the same
values as in the nonconvex nonlinear programming example. The coefficient
ρ of the augmented minimax problem is set as 0.0001.
For illustrative purposes, assume that the DM subjectively determines
the membership functions for the objective functions as

    μ_1(f_1(x, a_1)) = (1500 − f_1(x, a_1)) / 1420

Table 8.2. Fuzzy numbers for nonconvex MONLP-FN

          L       M     R                L       M     R
ã_11    -14.5   -14   -13.5    b̃_11    -3.3    -3    -2.6
ã_12    -16.5   -16   -15.5    b̃_12    -2.2    -2    -1.8
ã_13      3.7     4     4.4    b̃_21    -8.6    -8    -7.4
ã_14      4.5     5     5.5    b̃_22     1.8     2     2.2
ã_15      6.4     7     7.5    b̃_31     1.7     2     2.4
ã_21      4.5     5     5.5    b̃_32   -14.8   -14   -13.2
ã_22      2.7     3     3.3    b̃_41    -0.9    -0.5  -0.2
ã_23      6.5     7     7.5    b̃_42     0.8     1     1.2
ã_24     -4.5    -4    -3.6    b̃_51   -12.8   -12   -11.2
ã_25    -10.7   -10    -9.3    b̃_52     6.5     7     7.5
ã_31      2.8     3     3.3    b̃_61     3.6     4     4.4
ã_32    -12.8   -12   -11.2    b̃_62    -3.4    -3    -2.6
ã_33      3.5     4     4.5    b̃_71    -8.4    -8    -7.6
ã_34      5.6     6     6.4    b̃_72   -18     -17   -16
ã_35      2.8     3     3.2    b̃_81    -2.2    -2    -1.8

    μ_2(f_2(x, a_2)) = (3500 − f_2(x, a_2)) / 3300
    μ_3(f_3(x, a_3)) = (3000 − f_3(x, a_3)) / 3050

For this numerical example, at each interaction with the DM, the
corresponding augmented minimax problem is solved through the revised
GENOCOP III for obtaining an α-Pareto optimal solution.
As shown in Table 8.3, in this example, the values of (μ̄_1, μ̄_2, μ̄_3; α)
are updated from (1.0, 1.0, 1.0; 1.0) to (0.8, 1.0, 1.0; 0.9), (0.8, 1.0, 0.9;
0.9), and (0.8, 1.0, 0.95; 0.8) sequentially.
In the whole interaction processes as shown in Table 8.3, the aug-
mented minimax problem is solved for the initial reference membership
levels and the degree α, and the DM is supplied with the correspond-
ing α-Pareto optimal solution and membership values, as is shown in
interaction 1 of Table 8.3. On the basis of such information, because
the DM is not satisfied with the current membership values (0.80406,
0.80406, 0.80407) and the degree α = 1, the DM updates the reference
membership values and the degree α to μ̄_1 = 0.80, μ̄_2 = 1.0, μ̄_3 = 1.0,
and α = 0.9 for improving the satisfaction levels for μ_2 and μ_3 at the
expense of μ_1 and the degree α. For the updated reference membership
values and the degree α, the corresponding augmented minimax prob-
lem yields the α-Pareto optimal solution and membership values, as is
shown in interaction 2 of Table 8.3. The same procedure continues in
this manner, and in this example session, after updating the reference

membership values (μ̄_1, μ̄_2, μ̄_3) three times and updating the degree α
two times, at the fourth interaction, the satisficing solution for the DM
is derived and the whole interactive processes are summarized in Table
8.3, and the example session takes about 5 minutes.
The satisficing values for the membership (objective) functions for the
compromised degree α = 0.8 can be interpreted as compromised values
of the DM between the conflicting membership (objective) functions.

Table 8.3. Interactive processes for nonconvex multiobjective example


Interaction     1          2          3          4
α               1          0.9        0.9        0.8
μ̄_1             1          0.8        0.8        0.8
μ̄_2             1          1          1          1
μ̄_3             1          1          0.9        0.95
f_1             358.237    577.095    522.099    535.736
f_2             846.607    766.002    573.928    610.764
f_3             647.595    573.117    699.483    582.128
μ_1             0.80406    0.64993    0.68866    0.67906
μ_2             0.80406    0.82848    0.88669    0.87553
μ_3             0.80407    0.82849    0.78706    0.82553

In this example session, with the DM at the fourth interaction, the
satisficing solution for the DM is derived, but even if the DM is not
satisfied with the current values of the membership (objective) functions
and/or the degree α of the α-Pareto optimal solution, it is possible for
the DM to continue the same procedure in this manner until the DM is
satisfied with the current values of the membership (objective) functions
and the degree α of the α-Pareto optimal solution.

8.4 Conclusion
In this chapter, nonconvex MONLP problems were formulated and
an interactive fuzzy satisficing method through the revised GENOCOP
III was presented. After determining the fuzzy goals of the DM, if the
DM specifies the reference membership values, the corresponding Pareto
optimal solutions can be obtained by solving the augmented minimax
problems for which the revised GENOCOP III is effectively applicable.
Furthermore, by considering the experts' vague or fuzzy understanding
of the nature of the parameters in the problem-formulation process, non-
convex MONLP-FN problems were formulated. Using the α-level sets
of fuzzy numbers, the corresponding nonfuzzy α-MONLP problem was
introduced. Having determined the fuzzy goals of the DM, if the DM
specified the degree α and the reference membership values, the cor-
responding (M-) α-Pareto optimal solution was obtained by solving the
augmented minimax problems for which the revised GENOCOP III is ef-
fectively applicable. Then an interactive fuzzy satisficing method for de-
riving a satisficing solution for the DM efficiently from an (M-) α-Pareto
optimal solution set was presented. In the near future, applications of
the proposed method to real-world decision-making situations as well
as extensions to more general cases will be required.
Chapter 9

GENETIC ALGORITHMS FOR JOB-SHOP


SCHEDULING

This chapter considers job-shop scheduling problems, which determine


a processing order of operations on each machine in order to minimize the
maximum completion time. By incorporating the concept of similarity
among individuals into the genetic algorithm that uses a set of comple-
tion times as individual representation and the Giffler and Thompson
algorithm-based crossover, an efficient genetic algorithm for job-shop
scheduling problems is presented. As illustrative numerical examples,
6 x 6, 10 x 10, and 15 x 15 job-shop scheduling problems are considered.
The comparative numerical experiments with simulated annealing and
the branch and bound method for job-shop scheduling problems are also
demonstrated.

9.1 Introduction
Scheduling is the allocation of shared resources over time in order
to perform a number of tasks. In manufacturing systems, scheduling
allocates a set of jobs on a set of machines for processing. Scheduling
problems are found in diverse areas such as manufacturing, production
planning, computing, communications, transportation, logistics, health-
care, and so on.
The classic job-shop scheduling problem (JSP) is generally described
as follows: There is a set of jobs to be processed through a set of ma-
chines. Each job must pass through each machine once and once only,
and each machine is capable of processing at most one job at a time.
The processing of a job on a machine is called an operation. Technolog-
ical constraints demand that each job should be processed through the
machines in a particular order. The problem is to determine a process-
ing order of operations on each machine in order to minimize the time

required to complete all jobs. As an important special case, when all


the jobs share the same processing order, a job-shop scheduling prob-
lem reduces to a flow-shop scheduling problem because the flow of jobs
between the machines is in the same order.
Since Johnson [86] published the first paper on the two machine flow-
shop scheduling problem to minimize the time required to complete all
jobs in 1954, an enormous amount of literature on machine scheduling,
including job-shop scheduling, has appeared. Among others, some of the
well-known books on scheduling are the first book on scheduling theory
by Conway et al. [36], the books by Baker [19] and French [59], and the
recent books by Morton and Pentico [121], Blazewicz et al. [27], and
Brucker [31].
The job-shop scheduling problem has been well-known as one of the
hardest combinatorial optimization problems, and numerous exact and
heuristic algorithms have been proposed [6, 26]. A comprehensive survey
of conventional and new solution techniques for solving the job-shop
scheduling problems proposed through the mid-1990s can be found in
the invited review of Blazewicz et al. [26].
The first attempts to approach a simple job-shop scheduling prob-
lem through genetic algorithms can be found in the research of Davis
[42] in 1985. Since then, a significant number of papers on solving
job-shop scheduling problems through genetic algorithms has appeared
[16, 25, 42, 43, 48, 51, 99, 112, 123, 198, 203, 219]. Among them, observ-
ing that a schedule can be represented by using the completion times of
operations, in 1992 Yamada and Nakano [219] used a more natural and
direct representation of individuals and first introduced the Giffler and
Thompson algorithm as a crossover operator. Using a set of comple-
tion times as an individual representation, they designed the Giffler and
Thompson algorithm-based crossover for generating feasible offspring.
Starting with an empty schedule, when generating offspring, all process-
ing conflicts are identified at each stage in a manner similar to that of
the Giffler and Thompson algorithm, and then one operation is chosen
from the conflict set of operations according to one of their parents'
schedules.
In order to maintain the diversity of individuals during the evolution,
in 1997 Sakawa and Mori [158] proposed an efficient genetic algorithm for
job-shop scheduling problems by incorporating the concept of similarity
among individuals into the genetic algorithm proposed by Yamada and
Nakano [219]. Using 6 x 6, 10 x 10, and 15 x 15 job-shop scheduling
problems, the feasibility and efficiency of the proposed genetic algorithm
were demonstrated through comparative numerical experiments with the

genetic algorithm of Yamada and Nakano [219], simulated annealing, and


the branch and bound method.

9.2 Job-shop scheduling


In general, an n × m job-shop scheduling problem (JSP) is formulated
as follows. There are n jobs J_j, j = 1, 2, ..., n, to be processed on m
machines M_r, r = 1, 2, ..., m. The processing of job J_j on machine
M_r is the operation O_{j,i,r}, where i ∈ {1, ..., m} indicates the position
of the operation in the technological sequence of the job. Operation
O_{j,i,r} requires the exclusive use of M_r for an uninterrupted duration
p_{j,i,r}, its processing time. In addition to some assumptions mentioned
explicitly before, the following assumptions are made for the general
job-shop scheduling problem [59]:

- No two operations of the same job may be processed simultaneously.
- No preemption, that is, process interruption, of operations is allowed.
- Each machine can process only one job at a time.
- Each job must be processed to completion.
- The processing times are independent of the schedule.
- Jobs may be started at any time; in other words, no release time exists.
- Jobs may be finished at any time; in other words, no due date exists.
- Machine setup times are negligible.
- Machines may be idle.
- Machines never break down and are available throughout the scheduling.
- The technological constraints are known in advance and are immutable.
- There is no randomness.
Denoting the completion time of job J_j by C_j, j = 1, 2, ..., n, the time required to complete all jobs, called the maximum completion time or makespan, is defined by C_max = max{C_1, C_2, ..., C_n}. The problem is to determine a processing order of operations on each machine in order to minimize the maximum completion time C_max.
For notational convenience, in the following, denote the JSP of n jobs and m machines by an n x m JSP. An example of a 2 x 3 JSP is shown in Table 9.1 using its processing time and machine sequence matrices.

Table 9.1. Example of 2 x 3 JSP.

        Processing time              Machine sequence
        Operations                   Operations
Job     1     2     3        Job     1     2     3
J1      3     2     1        J1      M1    M2    M3
J2      1     1     3        J2      M1    M3    M2

In the following, for the sake of simplicity, the 2 x 3 JSP in Table 9.1 is represented as in Table 9.2.

Table 9.2. Example of 2 x 3 JSP.

Job     Processing machines (processing time)
J1      1(3)   2(2)   3(1)
J2      1(1)   3(1)   2(3)

It is convenient to use so-called Gantt charts for the graphical representation of schedules. Using the machine Gantt chart, one feasible schedule for the 2 x 3 JSP in Table 9.1 is shown in Figure 9.1 through a graphical description consisting of a collection of blocks, where the horizontal axis represents time. The length of each block is equal to the processing time of the associated operation using the scale of the Gantt chart. The chart in Figure 9.1 indicates the schedule from the perspective of what time the various jobs are on each machine. The completion time of the rightmost job in the Gantt chart gives the maximum completion time achieved.

[Figure: machine Gantt chart of a feasible schedule; one row of blocks per machine M1, M2, M3, horizontal axis t.]

Figure 9.1. Gantt chart (feasible schedule)


In principle, there are an infinite number of feasible schedules for any JSP, because an arbitrary amount of idle time can be inserted between two operations. However, it is desirable to shift operations to the left so that the schedule is as compact as possible. For JSPs, the following three types of schedules are introduced [19, 121].
DEFINITION 9.1 (SEMIACTIVE SCHEDULE)
A schedule is semiactive if no operation can be started earlier without altering the operation sequences, i.e., a semiactive schedule contains no superfluous idle time.

DEFINITION 9.2 (ACTIVE SCHEDULE)
A schedule is active if no operation can be started earlier without either delaying some other operation or violating the technological constraints.

DEFINITION 9.3 (NONDELAY SCHEDULE)
A schedule is nondelay if no machine is kept idle at a time when it could start processing some operation.

From these definitions, the relationship among nondelay, active, and semiactive schedules can be stated as follows:

(Nondelay schedules) ⊆ (Active schedules) ⊆ (Semiactive schedules)
Figure 9.2 illustrates the three types of schedules to be distinguished for a 2 x 3 JSP.

[Figure: three Gantt charts of the same 2 x 3 JSP, labeled Semiactive, Active, and Nondelay.]

Figure 9.2. Example of semiactive, active, and nondelay schedules.

Although the set of nondelay schedules is smaller than the set of active schedules, it will not necessarily contain an optimal schedule. The desirable feature of active schedules is that an optimal schedule is within the set of active schedules [19, 59].

9.3 Genetic algorithms for job-shop scheduling


In this section, detailed discussions of the efficient genetic algorithm
for job-shop scheduling problems proposed by Sakawa and Mori [158]
are given. Using 6 x 6, 10 x 10, and 15 x 15 job-shop scheduling prob-
lems, the feasibility and efficiency of the proposed genetic algorithm were
demonstrated through comparative numerical experiments with the ge-
netic algorithm of Yamada and Nakano [219], simulated annealing, and
the branch and bound method.

9.3.1 Active schedule generating algorithm


As discussed previously, in JSPs, an optimal schedule exists within
the set of active schedules.
The Giffler and Thompson algorithm is well-known as the procedure for generating active schedules [19, 59, 65, 219].

The Giffler and Thompson algorithm

Step 1 (Fig. 9.3): Find the set C of all the earliest operations in the technological sequence among the operations that are not yet scheduled. The set C is called the "cut."

Figure 9.3. Step 1 of the Giffler and Thompson algorithm

Step 2 (Fig. 9.4): Disregarding that only one operation can be processed on each machine at a time, create a schedule of earliest completion times for each operation and denote the obtained earliest completion time of each operation O_{j,i,r} ∈ C by EC_{j,i,r}.

Figure 9.4. Step 2 of the Giffler and Thompson algorithm

Step 3 (Fig. 9.5): Find the operation O_{j*,i*,r*} that has the minimum EC in the set C: EC_{j*,i*,r*} = min{EC_{j,i,r} | O_{j,i,r} ∈ C}, where ties are broken randomly. Find the set G of operations that consists of the operations O_{j,i,r*} ∈ C sharing the same machine M_{r*} with the operation O_{j*,i*,r*} and whose processing overlaps that of O_{j*,i*,r*}. Because the operations in G overlap in time, G is called the conflict set.

Figure 9.5. Step 3 of the Giffler and Thompson algorithm

Step 4 (Fig. 9.6): Randomly select one operation O_{j',i',r*} from the conflict set G.

Figure 9.6. Step 4 of the Giffler and Thompson algorithm

Step 5 (Fig. 9.7): Taking the selected operation as a standard, update EC. Remove the selected operation from the cut.

Step 6: Repeat steps 3 to 5 until all operations are scheduled.

In step 4 of the Giffler and Thompson algorithm, it is possible to generate all active schedules by considering all possible selections.
Figure 9.7. Step 5 of the Giffler and Thompson algorithm
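To make the procedure concrete, the following is a minimal Python sketch of active schedule generation by the Giffler and Thompson algorithm. It is not the implementation used in this book; the data layout (a job given as a list of (machine, processing time) pairs) and all names are illustrative assumptions.

    import random

    def giffler_thompson(jobs, rng=random):
        """Generate one active schedule for a JSP.

        jobs[j] is the technological sequence of job j as a list of
        (machine, processing_time) pairs.  Returns a dict mapping
        (job, operation_index) -> (start, finish, machine)."""
        n = len(jobs)
        next_op = [0] * n       # next unscheduled operation of each job
        job_ready = [0] * n     # completion time of each job's last scheduled operation
        mach_ready = {}         # completion time of the last operation on each machine
        schedule = {}
        remaining = sum(len(ops) for ops in jobs)
        while remaining > 0:
            # Step 1: the cut C = the earliest unscheduled operation of every job.
            cut = [j for j in range(n) if next_op[j] < len(jobs[j])]
            # Step 2: earliest completion time of every operation in the cut.
            ec = {}
            for j in cut:
                m, p = jobs[j][next_op[j]]
                ec[j] = (max(job_ready[j], mach_ready.get(m, 0)) + p, m)
            # Step 3: operation with minimum EC and the conflict set G on its machine.
            j_star = min(cut, key=lambda j: ec[j][0])
            ec_star, m_star = ec[j_star]
            conflict = [j for j in cut
                        if jobs[j][next_op[j]][0] == m_star
                        and max(job_ready[j], mach_ready.get(m_star, 0)) < ec_star]
            # Step 4: resolve the conflict by a random selection.
            j_sel = rng.choice(conflict)
            m, p = jobs[j_sel][next_op[j_sel]]
            start = max(job_ready[j_sel], mach_ready.get(m, 0))
            schedule[(j_sel, next_op[j_sel])] = (start, start + p, m)
            # Step 5: update the ready times and remove the operation from the cut.
            job_ready[j_sel] = mach_ready[m] = start + p
            next_op[j_sel] += 1
            remaining -= 1
        return schedule

    # The 2 x 3 JSP of Table 9.2:
    jobs = [[(1, 3), (2, 2), (3, 1)], [(1, 1), (3, 1), (2, 3)]]
    sched = giffler_thompson(jobs)
    print(max(finish for _, finish, _ in sched.values()))  # makespan of one active schedule

Replacing the random choice in step 4 by an enumeration of all possible selections would enumerate all active schedules.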

9.3.2 Individual representation


For a JSP with m machines and n jobs, the schedules are represented
as m x n matrices of earliest completion times. Hence, it is quite natural
to represent individuals by the matrices of earliest completion times for
each operation.

9.3.3 Method for generating initial populations


It is possible to generate an initial population through the random selection in step 4 of the Giffler and Thompson algorithm for generating active schedules. In order to keep the diversity of the initial populations, we propose to generate only individuals of a certain degree of similarity or less, by introducing the concept of similarity between individuals and calculating the degree of similarity between individuals at the time of their initial generation. Through a number of simulations using this method, it has been found that the most stable solutions are obtained when generating individuals with degrees of similarity of 0.8 or less as the initial individuals. In the following numerical experiments, therefore, we generate individuals with degrees of similarity of 0.8 or less as the initial individuals.

9.3.4 Calculating the degree of similarity


Here, we explain the method for calculating the degree of similarity, using a 4 x 4 JSP as an example; a sketch in code follows Figure 9.8. It is evident that similar calculations can be done for the general case. In Figure 9.8, we assume a job-processing sequencing table for each machine of individual 1 and individual 2. For individual 1, job 1 at machine 1 has priority over jobs 2, 4, and 3. For individual 2, job 1 at machine 1 loses priority to job 2, but it has priority over jobs 4 and 3. Because the two individuals have the same priority relationship for job 1 at machine 1 over jobs 4 and 3, this job is given the value 2. If we perform the same calculations for jobs 2, 4, and 3 at machine 1, we obtain 2, 3, and 3, respectively, which added together give 10. If we make this calculation for all of the machines, the answers are 10, 12, 12, and 10, respectively, to obtain a total of 44. Because the total is 12 x 4 = 48 when the order of priority is the same for all, the ratio becomes 44/48, and we can see that the degree of similarity is 0.917.

Job-processing sequence of individual 1 for each machine:
Machine 1:  1 2 4 3    (2 + 2 + 3 + 3 = 10)
Machine 2:  3 1 4 2    (3 + 3 + 3 + 3 = 12)
Machine 3:  4 3 1 2    (3 + 3 + 3 + 3 = 12)
Machine 4:  3 2 1 4    (3 + 3 + 2 + 2 = 10)

Job-processing sequence of individual 2 for each machine:
Machine 1:  2 1 4 3
Machine 2:  3 1 4 2
Machine 3:  4 3 1 2
Machine 4:  3 2 4 1

(10 + 12 + 12 + 10)/48 = 44/48 ≈ 0.917

Figure 9.8. Calculation of degrees of similarity
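The calculation of Figure 9.8 can be sketched as follows; representing an individual by one job sequence per machine is an assumption made for illustration. For every machine and every ordered pair of distinct jobs, the pair scores 1 when both individuals agree on which of the two jobs is processed first.

    def degree_of_similarity(ind1, ind2):
        """Fraction of per-machine job-pair priority relations on which the
        two individuals agree; ind1 and ind2 are lists (one per machine)
        of job-processing sequences."""
        agree = total = 0
        for seq1, seq2 in zip(ind1, ind2):
            pos1 = {job: p for p, job in enumerate(seq1)}
            pos2 = {job: p for p, job in enumerate(seq2)}
            for j in seq1:
                for k in seq1:
                    if j == k:
                        continue
                    total += 1
                    if (pos1[j] < pos1[k]) == (pos2[j] < pos2[k]):
                        agree += 1
        return agree / total

    # The two individuals of Figure 9.8:
    ind1 = [[1, 2, 4, 3], [3, 1, 4, 2], [4, 3, 1, 2], [3, 2, 1, 4]]
    ind2 = [[2, 1, 4, 3], [3, 1, 4, 2], [4, 3, 1, 2], [3, 2, 4, 1]]
    print(round(degree_of_similarity(ind1, ind2), 3))  # 44/48 = 0.917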

9.3.5 Crossover and selection


Crossover is an operator that, out of two parent individuals, creates offspring individuals that retain their character in some form. The Giffler and Thompson algorithm for generating active schedules can generate all of the active schedules. The characteristics of individuals enter this algorithm in step 4. Therefore, each time a conflict occurs, with probability one half the sequencing proceeds toward eliminating the conflict as in the selected parent. Based on the Giffler and Thompson algorithm, similar to Yamada and Nakano [219], we adopt the following crossover method that creates one offspring individual from two parent individuals.

Giffler and Thompson algorithm-based crossover

Step 1: Perform steps 1 to 3 of the Giffler and Thompson algorithm for generating active schedules and obtain the cut set C and the conflict set G.

Step 2: Select one of the two parent individuals with equal probability 1/2. From among the conflict set G, choose the operation with the minimum earliest completion time in the schedule represented by the selected parent individual and denote it by O_{j',i',r*}.

Step 3: Perform steps 5 and 6 of the Giffler and Thompson algorithm for generating active schedules.
The Giffler and Thompson algorithm-based crossover is illustrated in Figure 9.9. As shown in Figure 9.9, in offspring 1, assume that O_{1,r*} is the operation having the minimum EC in the cut set C; then the conflict exists among the operations O_{1,r*}, O_{2,r*}, and O_{3,r*}. If parent 1 is selected with probability 1/2, then, because O_{2,r*} is processed with the highest priority among the conflict set in parent 1, O_{2,r*} is selected toward eliminating the conflict.
[Figure: the schedules of parent 1 and parent 2, the conflict set, and the offspring built by choosing the conflicting operation processed first in the parent selected with probability 1/2.]

Figure 9.9. Giffler and Thompson algorithm-based crossover

According to the Giffler and Thompson algorithm-based crossover procedure, one offspring individual is newly generated. This procedure is repeated c times to generate c offspring individuals.
To prevent the extinction of good individuals, from the (c + 2) individuals consisting of the c offspring individuals plus the two parent individuals, two individuals are preserved to the next generation, selected in the following way.

(1) Among the c offspring individuals, select the one individual with the smallest maximum completion time.

(2) Among the (c + 1) individuals consisting of the (c - 1) offspring not selected in (1) plus the two parents, select the one individual with the smallest maximum completion time.
In other words, local ranking selection is performed among the (c + 2) individuals. Observe that no fitness function is set, because there is no need to calculate the fitness.
In this crossover method, because a larger value of c means a larger number of offspring, the probability that excellent offspring are generated also becomes high. However, because they are generated from the same parents, the degree of similarity among the offspring individuals is expected to become high. With this observation, we set c = 3 in the following numerical experiments.
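A sketch of this local ranking selection, assuming a makespan function for evaluating individuals (the names are illustrative):

    def local_ranking_selection(parents, offspring, makespan):
        """Preserve two of the (c + 2) individuals: the best of the c
        offspring, then the best among the remaining offspring and the
        two parents."""
        best_child = min(offspring, key=makespan)
        rest = [x for x in offspring if x is not best_child] + list(parents)
        return best_child, min(rest, key=makespan)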

9.3.6 Mutation
At the time of the Giffler and Thompson algorithm-based crossover, without selecting from either parent, mutation is performed by selecting an operation from the set G at random with a mutation ratio of p% [219].

9.3.7 Population construction


In order to prevent premature convergence and to keep the diversity of the populations, Sakawa and Mori [157, 158] proposed to generate populations on the basis of the degrees of similarity between individuals. To be more specific, based on the degrees of similarity, a specified number of individuals are generated. The procedure is repeated n times to generate n subpopulations of the specified number of individuals. At a generation where each of the n subpopulations has converged to a certain degree, all of the subpopulations are merged, and evolution then continues for more generations until convergence is achieved or a termination condition is satisfied. This population construction can be expected to prevent convergence to local solutions. As an example, the case of three subpopulations is shown in Figure 9.10.
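The overall flow might be organized as in the following sketch, where init_subpopulation and evolve (one generation of crossover, selection, and mutation) are assumed helper functions, not procedures defined in the text:

    def run_with_subpopulations(init_subpopulation, evolve, n_subpops=3,
                                total_generations=80, merge_fraction=2 / 3):
        """Evolve n_subpops subpopulations independently, then merge them
        at about two thirds of the specified generation number and
        continue with a single population."""
        subpops = [init_subpopulation() for _ in range(n_subpops)]
        merge_at = int(total_generations * merge_fraction)
        for generation in range(total_generations):
            if generation == merge_at:
                merged = [ind for pop in subpops for ind in pop]
                subpops = [merged]          # continue with one population
            subpops = [evolve(pop) for pop in subpops]
        return subpops[0]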

9.3.8 Branch and bound method


For comparison with an exact optimal solution, the branch and bound method (BAB) for JSPs, introduced by Ashour and Hiremath [8] following the Giffler and Thompson algorithm for generating active schedules, is adopted with some modifications [122]. Observe that the BAB algorithm [8, 122] is based on the concepts of the successive resolution of conflicts among jobs to be performed on the same machine during common time intervals and the application of a lower bound on the schedule time for generating the resolution of conflicts and eventually identifying the optimal solution.
The BAB algorithm for JSPs can be summarized as follows [8, 122]:
[Figure: three subpopulations of individuals evolving separately; genetic operations are repeated in every subpopulation until a specified generation number, after which the subpopulations are merged.]

Figure 9.10. Population construction

The branch and bound algorithm for JSPs

Step 1: Initialize the scheduling tree.
  1.1 Set the conflict level index L := 0 and the minimum upper bound of the minimum maximum completion time f* := ∞.
  1.2 Disregarding any conflicts, construct the initial completion time table EC^0 for each operation.
  1.3 Determine the initial schedule time interval [S, T] by setting S = 0 and T = min_{i,r} EC^0_{i,r}, where EC^0_{i,r} is the earliest completion time for the operation of job i on machine r in the earliest completion timetable of 1.2.

Step 2: For each machine, check the conflict among the operations with earliest completion times EC^L_{i,r} = T and EC^L_{i,r} ≥ T. Namely,
  2.1 For all operations on a particular machine, if T ≤ EC^L_{i,r} - p_{i,r}, i.e., T + p_{i,r} ≤ EC^L_{i,r}, no conflict exists; go to step 7, where p_{i,r} is the processing time for the operation of job i on machine r.
  2.2 For more than one operation on a particular machine, if T > EC^L_{i,r} - p_{i,r}, i.e., T + p_{i,r} > EC^L_{i,r}, a conflict exists. Set L := L + 1 and go to step 3.

Step 3: For each node under conflict at level L, calculate the lower bound LB^L_{i,r}, which will be defined below.

Step 4: Among the unbranched nodes at level L, find the node with the minimum lower bound (LB^L = min_{i,r} LB^L_{i,r}).

Step 5: Compare the minimum lower bound LB^L at level L with f*. If LB^L < f*, go to step 6. If not, i.e., LB^L ≥ f*, go to step 8.

Step 6: Branch from an unexplored node with the minimum lower bound and update the completion time table EC^L. If more than one node has the minimum lower bound, select one node by a particular rule and branch from the resulting active node. Otherwise, branch from that node.

Step 7: Update the scheduling time interval as follows:
  7.1 If T is not the highest element in the updated completion timetable EC^L, replace S by T and T by the same, if any, or the next higher element in the table. Then return to step 2.
  7.2 If T is the highest element in the updated completion timetable EC^L, set f* = T and go to step 8.

Step 8: Backtrack along the same branch of the tree by setting L := L - 1. Then compare the lower bounds for all unexplored nodes, if any, at this level.
  8.1 If there exist one or more unexplored nodes with a lower bound such that LB^L_{i,r} < f*, revise the scheduling time interval by setting T = min_{(i,r)∈{S^L}} EC^{L-1}_{i,r}, where {S^L} is the conflict set at level L, and S by the next lower element in the completion time table EC^{L-1}. Then return to step 4.
  8.2 If all unexplored nodes have lower bounds such that LB^L_{i,r} ≥ f*, go to step 9.

Step 9: Check for an optimal solution.
  9.1 If L > 1, return to step 8.
  9.2 If L = 1, f* is the minimum maximum completion time, and the earliest completion timetable on the node giving f* is an optimal schedule.

The lower bound LB^L_{i,r} to be calculated at step 3 of the BAB algorithm can be determined on the basis of the following theorem [57].

THEOREM 9.1 Assume that at least one terminal operation of a job is processed on a machine M_k, and let S_k be the set of unscheduled operations. Then, in order to minimize the maximum completion time of all operations in S_k, starting with the earliest possible starting operation, these operations have to be processed in increasing order of their earliest start times ES, where each operation is processed after its ES.

Let the operation of job i on machine r be denoted by O_{i,r}. Then, through the use of Theorem 9.1, the lower bound LB^L_{i,r'} of partial schedules until operation O_{i,r'} is given by [122]

    LB^L_{i,r'} = max [ max_{i∈J} EC^L_{i,l} , max_r { min_{(i,r)∈R} ES^L_{i,r} + Σ_{(j,r)∈R} p_{j,r} } ],    (9.1)

where J is the index set of jobs, R is the set of unscheduled operations, p_{j,r} is the processing time of operation O_{j,r}, EC^L_{i,l} is the completion time of operation O_{i,l} at conflict level L, and ES^L_{j,r} is the starting time of operation O_{j,r} at conflict level L.
A detailed development of the lower bound LB^L_{i,r'} given by (9.1), together with additional results, can be found in [122].

9.3.9 Simulated annealing


For comparison, simulated annealing (SA) [98, 43] is adopted as another probabilistic method for JSPs. Here, observe that SA searches for solutions by exchanging the job-processing order for each machine. The algorithm of SA used for JSPs is briefly summarized as follows.

Simulated annealing

Step 1: Generate one solution Xc through the random selection in step 4 of the active schedule generating algorithm; i.e., generate an initial solution as in the GA of the previous sections. Set an initial temperature.

Step 2: Represent the job-processing order for each machine of the solution Xc by the corresponding matrix.

Step 3: From the matrix, select a certain machine at random. Select two job-processing orders of the machine and exchange them.

Step 4: Based on the job-processing order after the exchange, generate a solution that becomes an active schedule and denote the solution by X.

Step 5: If the maximum completion time of the obtained solution X is smaller than that of the solution Xc before the exchange, set Xc = X.

Step 6: Even if the maximum completion time of the obtained solution X is greater than that of the solution Xc before the exchange, set Xc = X with the acceptance probability.

Step 7: Update the search number and the temperature.

Step 8: If the search number reaches the prescribed search number, stop and take the obtained Xc as an approximate optimal solution. Otherwise, return to step 2.
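Steps 5 and 6 amount to the usual Metropolis acceptance rule. A generic sketch under that reading, with an assumed neighbor function implementing the exchange of steps 2 to 4 and purely illustrative parameter values, is:

    import math
    import random

    def simulated_annealing(initial, neighbor, makespan,
                            t0=100.0, cooling=0.95, max_iters=3000, rng=random):
        """Generic SA loop for makespan minimization.  neighbor(x) should
        exchange two job positions on a randomly chosen machine and
        rebuild an active schedule, as in steps 2 to 4 of the text."""
        current = best = initial
        temp = t0
        for _ in range(max_iters):
            candidate = neighbor(current)
            delta = makespan(candidate) - makespan(current)
            # Step 5: accept an improvement outright; step 6: accept a worse
            # schedule with the acceptance probability exp(-delta / temp).
            if delta <= 0 or rng.random() < math.exp(-delta / temp):
                current = candidate
            if makespan(current) < makespan(best):
                best = current
            temp *= cooling    # step 7: one possible temperature update
        return best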

9.3.10 Numerical experiments


Now we are ready to apply the GAs both with and without the degrees of similarity, the BAB, and the SA presented in the previous sections to JSPs for comparing the accuracy of the solutions and the state of convergence. As illustrative numerical examples, 6 x 6, 10 x 10, and 15 x 15 JSPs are considered. Examples of 6 x 6, 10 x 10, and 15 x 15 JSPs are shown in Tables 9.3, 9.4, and 9.5, respectively. It is assumed here that each of the operations of all jobs shown in the tables must be processed in the given sequence.

Table 9.3. Numerical example of 6 x 6 JSP

Job    Processing machines (processing time)
J1     3(1)   1(3)   2(6)   4(7)   6(3)   5(6)
J2     2(8)   3(5)   5(10)  6(10)  1(10)  4(4)
J3     3(5)   4(4)   6(8)   1(9)   2(1)   5(7)
J4     2(5)   1(5)   3(5)   4(3)   5(8)   6(9)
J5     3(9)   2(3)   5(5)   6(4)   1(3)   4(1)
J6     2(3)   4(3)   6(9)   1(10)  5(4)   3(1)

Simulations are performed for the GAs both with and without the degrees of similarity. Each of the parameter values of the GAs shown in Table 9.6 was found through a number of preliminary experiments, and these values are used in each of the trials of the GAs. In the population construction, three subpopulations are prepared, and at about two thirds of the specified generation number, the subpopulations are merged into one population. The search numbers of SA are set as shown in Table 9.7 by considering the population sizes and the numbers of generations of the GAs. Although it may be appropriate to set the search numbers of SA as 30 x 50 = 1500 and 40 x 80 = 3200 for the 6 x 6 and 10 x 10 JSPs, respectively, we set numbers larger than these, as shown in Table 9.7, for comparing the accuracy of the solutions and the state of convergence of the GAs and SA.

Table 9.4. Numerical example of 10 x 10 JSP

Job    Processing machines (processing time)
J1     3(1)   2(1)   8(4)   6(2)   5(1)   10(3)  7(2)   9(1)   1(4)   4(5)
J2     5(3)   4(2)   10(4)  2(3)   7(5)   3(1)   9(1)   8(4)   6(5)   1(5)
J3     1(3)   9(3)   6(3)   5(1)   4(1)   2(1)   10(3)  3(2)   7(3)   8(2)
J4     4(5)   10(2)  7(2)   6(1)   1(2)   9(5)   3(3)   5(5)   2(3)   8(1)
J5     4(5)   9(5)   8(2)   3(1)   2(1)   7(4)   6(2)   5(1)   1(5)   10(2)
J6     5(4)   7(3)   10(5)  1(2)   4(1)   2(1)   6(4)   3(2)   8(5)   9(1)
J7     5(5)   2(2)   8(4)   7(5)   10(5)  9(5)   4(1)   1(3)   3(3)   6(1)
J8     3(2)   6(5)   4(1)   5(5)   2(5)   9(1)   1(5)   7(2)   10(1)  8(1)
J9     8(2)   5(1)   1(5)   9(4)   7(2)   4(5)   10(3)  3(4)   2(3)   6(4)
J10    1(3)   6(4)   8(1)   2(3)   3(1)   4(1)   9(3)   10(5)  7(5)   5(2)

Table 9.5. Numerical example of 15 x 15 JSP

Job    Processing machines (processing time)
J1     15(3) 5(2)  1(5)  7(1)  9(1)  13(3) 8(4)  2(1)  14(2) 12(3) 11(1) 4(5)  3(1)  10(2) 6(3)
J2     4(3)  15(3) 5(2)  14(1) 12(5) 10(5) 6(2)  13(1) 11(4) 7(4)  2(3)  9(3)  3(2)  8(1)  1(4)
J3     5(3)  1(2)  2(3)  14(4) 7(1)  9(2)  13(3) 11(3) 12(1) 4(1)  8(4)  15(5) 3(4)  6(1)  10(1)
J4     9(1)  1(1)  15(5) 10(4) 12(3) 7(5)  13(1) 6(2)  14(3) 3(3)  5(5)  8(3)  2(5)  11(2) 4(5)
J5     10(4) 6(1)  13(3) 11(5) 4(5)  5(5)  3(4)  7(4)  8(2)  12(2) 1(5)  9(1)  15(2) 14(5) 2(4)
J6     1(2)  8(4)  6(4)  15(1) 4(3)  2(4)  13(1) 9(5)  10(3) 11(3) 14(4) 7(2)  3(1)  12(2) 5(1)
J7     15(3) 12(2) 8(5)  1(4)  4(1)  10(3) 3(1)  9(2)  5(3)  11(2) 2(4)  14(2) 6(5)  13(1) 7(1)
J8     7(3)  9(5)  10(4) 3(2)  12(4) 11(1) 8(4)  6(1)  14(4) 2(2)  13(1) 5(3)  1(5)  4(5)  15(4)
J9     6(1)  12(3) 3(2)  4(1)  2(1)  5(5)  10(4) 11(3) 15(1) 14(5) 1(2)  7(3)  9(3)  8(5)  13(1)
J10    13(4) 3(2)  12(4) 2(2)  7(4)  8(2)  4(3)  6(3)  15(4) 1(4)  10(4) 11(4) 5(2)  9(4)  14(5)
J11    14(4) 10(5) 8(1)  12(4) 3(1)  1(2)  4(3)  7(5)  6(4)  5(2)  2(1)  13(2) 15(3) 9(2)  11(2)
J12    4(3)  13(4) 10(3) 15(2) 7(5)  2(3)  3(1)  9(1)  1(2)  14(5) 12(2) 11(5) 8(5)  5(5)  6(2)
J13    11(2) 13(5) 12(3) 6(4)  5(3)  15(1) 8(5)  10(3) 3(3)  4(5)  1(5)  14(2) 9(4)  7(4)  2(5)
J14    3(1)  9(4)  14(5) 2(2)  11(5) 13(4) 10(2) 4(5)  8(2)  7(5)  1(5)  6(1)  5(3)  15(4) 12(4)
J15    7(1)  5(1)  4(1)  9(3)  12(4) 8(2)  2(1)  15(4) 11(4) 1(3)  3(2)  13(1) 10(5) 6(4)  14(3)

All of the trials of the GAs and SA are performed 10 times for each problem using a Fujitsu S-4/10 workstation. The average time required for the simulations is shown in Table 9.8. Naturally, the computation time of SA is much larger than that of the GAs because of the predetermined search numbers shown in Table 9.7. The maximum completion times for the approximate optimal solutions obtained from these trials are shown in Tables 9.9, 9.10, and 9.11, together with the maximum completion times of the optimal solutions obtained from BAB.

Table 9.6. Parameter values of GAs

Problem    Population size    Number of generations    Mutation rate (%)
6 x 6      30                 50                       5
10 x 10    40                 80                       5
15 x 15    50                 100                      5

Table 9.7. Search numbers of SA

Problem    Search number
6 x 6      3000
10 x 10    5000
15 x 15    10000

Table 9.8. Average computation time for simulations

Problem    GA/DS        GA           SA            BAB
6 x 6      20.1 sec     18.2 sec     50.4 sec      2 sec
10 x 10    127.8 sec    120.1 sec    275.7 sec     168 hours
15 x 15    789.9 sec    752.1 sec    1328.5 sec    -

GA, genetic algorithm; DS, degree of similarity

Table 9.9. Simulation results for 6 x 6 JSP

Trial 1 2 3 4 5 6 7 8 9 10
GA/DS 55 55 55 55 55 55 55 55 55 55
GA 55 55 55 55 55 55 55 55 55 55
SA 56 57 57 57 58 56 55 55 55 57
BAB 55
GA, genetic algorithm; DS, degree of similarity

For the 6 x 6 JSP, an optimal solution with a maximum completion time of 55 is obtained by the GA with the degrees of similarity 10 times out of 10 trials and by the GA without the degrees of similarity

Table 9.10. Simulation results for 10 x 10 JSP

Trial 1 2 3 4 5 6 7 8 9 10
GA/DS 47 47 47 47 47 47 47 47 47 47
GA 47 48 47 48 48 47 49 47 49 48
SA 51 50 49 51 51 49 50 50 50 52
BAB 46
GA, genetic algorithm; DS, degree of similarity

Table 9.11. Simulation results for 15 x 15 JSP

Trial 1 2 3 4 5 6 7 8 9 10
GA/DS 73 73 73 73 73 73 73 73 73 73
GA 73 73 74 73 73 73 74 73 74 73
SA 81 77 80 76 79 81 81 80 78 79
BAB
GA, genetic algorithm; DS, degree of similarity

9 times out of 10 trials. On the contrary, the maximum completion time of 55 is obtained by SA only 3 times out of 10 trials.
For the 10 x 10 JSP, a solution with a maximum completion time of 47 is obtained by the GA with the degrees of similarity 9 times out of 10 trials and by the GA without the degrees of similarity 4 times out of 10 trials. Unfortunately, however, the maximum completion time of the best solution obtained by SA through 10 trials for the 10 x 10 JSP is 49. Observe that an optimal solution with a maximum completion time of 46 is obtained by BAB.
Furthermore, for the 15 x 15 JSP, a solution with a maximum completion time of 73 is obtained by the GA with the degrees of similarity 10 times out of 10 trials. Unfortunately, however, the maximum completion time of the best solution obtained by SA through 10 trials for the 15 x 15 JSP is 76. Observe that an optimal solution cannot be located by BAB because of an unrealistic amount of computation time.
Concerning the computation time, for the 6 x 6 JSP, BAB produces an optimal solution in only 2 seconds and is the most effective method. For the 10 x 10 JSP, although an optimal solution with a maximum completion time of 46 is obtained by BAB, it requires about 168 hours of computation time and is not realistic. On the contrary, a maximum completion time of 47 is obtained by the GA with the degrees of similarity using 127.8 seconds of computation time, which seems to be effective. For the 15 x 15 JSP, BAB requires a prohibitive computation time and cannot locate an optimal solution. On the contrary, a maximum completion time of 73 is obtained by the GA with the degrees of similarity using 789.9 seconds of computation time, which seems to be effective.
Furthermore, to clarify the trend toward convergence for the 10 x 10 JSP, the changes over the generations of the average maximum completion time and the minimum maximum completion time for all trial populations of the GAs both with and without the degree of similarity are shown in Figures 9.11 and 9.12, respectively. Also, the changes over the search number of the minimum maximum completion time for all trials of SA are shown in Figure 9.13.
[Figure: two panels over 80 generations; (a) average maximum completion time, (b) minimum maximum completion time.]

Figure 9.11. Convergence of GA/DS (degree of similarity) for 10 x 10 JSP

As can be seen in Figures 9.11, 9.12, and 9.13, compared with the GAs, the solutions obtained through SA lack evenness, and the convergence trends vary widely during searches. This may be because the one-point search of SA is influenced by the initial solution and by the trend of the solutions during searches.
As can be seen in Figures 9.11 and 9.12, compared with the GA without the degree of similarity, the solutions obtained through the GA with the degree of similarity are much more stable. This may be attributed to the introduction of the degree of similarity.
[Figure: two panels over 80 generations; (a) average maximum completion time, (b) minimum maximum completion time.]

Figure 9.12. Convergence of GA for 10 x 10 JSP

[Figure: minimum maximum completion time over 5000 search steps.]

Figure 9.13. Convergence of SA for 10 x 10 JSP


Chapter 10

FUZZY MULTIOBJECTIVE JOB-SHOP SCHEDULING

In this chapter, by considering the imprecise or fuzzy nature of the data in real-world problems, job-shop scheduling problems with fuzzy processing time and fuzzy due date are formulated. On the basis of the
agreement index of fuzzy due date and fuzzy completion time, the for-
mulated fuzzy job-shop scheduling problems are interpreted to maximize
the minimum agreement index. Furthermore, multiobjective job-shop
scheduling problems with fuzzy due date and fuzzy processing time are
formulated as three-objective problems. Having elicited the linear mem-
bership functions reflecting the fuzzy goals of the decision maker (DM),
the fuzzy decision of Bellman and Zadeh is adopted for combining them.
The genetic algorithm introduced in the previous chapter is extended
for solving the formulated problems.

10.1 Introduction
As discussed in the previous chapter, a significant number of successful
applications of genetic algorithms to job-shop scheduling problems have
appeared [16, 25, 42, 43, 48, 51, 60, 99, 123, 198, 203, 219]. Naturally, in
these job-shop scheduling problems, various factors, such as processing
time, due date, and so forth, have been precisely fixed at some crisp
values.
However, when formulating job-shop scheduling problems that closely
describe and represent the real-world problems, various factors involved
in the problems are often only imprecisely or ambiguously known to
the analyst. This is particularly true in the real-world situations when
human-centered factors are incorporated into the problems. In such sit-
uations, it may be more appropriate to consider fuzzy processing time
because of man-made factors and fuzzy due date tolerating a certain
amount of delay in the due date [81, 139, 206]. To be more specific, because a number of man-made factors exist in operations, and planning and other arrangements are also a part of operations, fuzziness in processing times undeniably exists. Concerning the due date, we can imagine many situations in which it is desirable in principle to strictly satisfy the due date, but a certain amount of delay may be tolerated, with longer delays receiving lower evaluations.
Recently, in order to reflect such situations, a mathematical programming approach to a single machine fuzzy scheduling problem with fuzzy precedence relations [81] and a genetic algorithm approach to job-shop scheduling incorporating fuzzy processing time [206] have been proposed.
In order to more suitably model actual scheduling situations, in 1999, Sakawa and Mori [158, 159] formulated job-shop scheduling problems incorporating fuzzy processing time and fuzzy due date.
On the basis of the concept of an agreement index for fuzzy due date
and fuzzy completion time for each job, the formulated problem was in-
terpreted as seeking a schedule that maximizes the minimum agreement
index. For solving the formulated fuzzy job-shop scheduling problems,
an efficient genetic algorithm for job-shop scheduling problems intro-
duced in the previous chapter is extended to deal with the fuzzy due
dates and fuzzy completion time [157]. As illustrative numerical exam-
ples, both 6 x 6 and 10 x 10 job-shop scheduling problems with fuzzy
processing time and fuzzy due date were considered for demonstrating
the feasibility and efficiency of the proposed genetic algorithm.
Unfortunately, however, in these fuzzy job-shop scheduling problems, only a single objective function is considered, and extensions to multiobjective job-shop scheduling problems are desired for reflecting real-world situations more adequately. On the basis of the agreement index of fuzzy
due date and fuzzy completion time, in 2000, Sakawa and Kubota [156]
formulated multiobjective job-shop scheduling problems with fuzzy due
date and fuzzy processing time as three-objective problems that not
only maximize the minimum agreement index but also maximize the
average agreement index and minimize the maximum fuzzy completion
time. Moreover, by considering the imprecise nature of human judg-
ments, fuzzy goals of the DM for the objective functions are introduced.
After eliciting the linear membership functions through interaction with
the DM, the fuzzy decision of Bellman and Zadeh or minimum operator
[22] is adopted for combining them. Then a genetic algorithm that is
suitable for solving the formulated problems is proposed [156]. As il-
lustrative numerical examples, both 6 x 6 and 10 x 10 three-objective
job-shop scheduling problems with fuzzy due date and fuzzy processing time were considered, and the feasibility and effectiveness of the proposed method were demonstrated by comparison with the simulated annealing method.

10.2 Job-shop scheduling with fuzzy processing time and fuzzy due date

10.2.1 Problem formulation

In contrast to the n x m job-shop scheduling problem (JSP) discussed in the previous chapter, in this section, a job-shop scheduling problem incorporating fuzzy processing time and fuzzy due date is formulated as a fuzzy job-shop scheduling problem (FJSP). In the FJSP considered in this section, the fuzzy processing time of operation O_{j,i,r} is represented by a triangular fuzzy number (TFN) Ã_{j,i,r} and denoted by a triplet (a^1_{j,i,r}, a^2_{j,i,r}, a^3_{j,i,r}), as shown in Figure 10.1 (a). As shown in Figure 10.1 (b), the fuzzy due date D̃_j is represented by the degree of satisfaction with respect to the job completion time and denoted by a doublet (d^1_j, d^2_j). For the fuzzy due date D̃_j of each job, when the fuzzy completion time of each job is expressed as a TFN C̃_j, we seek the schedule that maximizes the minimum value of the agreement index AI of the fuzzy completion time C̃_j with respect to the fuzzy due date D̃_j. Here, as shown in Figure 10.2, the agreement index AI of the fuzzy completion time C̃_j with respect to the fuzzy due date D̃_j is defined as the area of the intersection of the two membership functions divided by the area of the C̃_j membership function [97, 139]. To be more explicit,

    AI = (area of C̃_j ∩ D̃_j) / (area of C̃_j).    (10.1)

The agreement index AI can be viewed as the portion of C̃_j that was completed by the due date.
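As a numerical check of definition (10.1), the agreement index can be computed by integrating the membership functions on a grid; the following sketch is not from the book and assumes c3 > c1 and d2 > d1:

    import numpy as np

    def agreement_index(c, d, num=2000):
        """Agreement index (10.1) of a TFN completion time c = (c1, c2, c3)
        with respect to a fuzzy due date d = (d1, d2) whose satisfaction is
        1 up to d1 and falls linearly to 0 at d2, computed by numerical
        integration."""
        c1, c2, c3 = c
        d1, d2 = d
        xs = np.linspace(c1, c3, num)
        # Triangular membership function of the fuzzy completion time.
        mu_c = np.where(xs <= c2,
                        (xs - c1) / (c2 - c1) if c2 > c1 else 1.0,
                        (c3 - xs) / (c3 - c2) if c3 > c2 else 1.0)
        # Membership function of the fuzzy due date.
        mu_d = np.clip((d2 - xs) / (d2 - d1), 0.0, 1.0)
        area_c = np.trapz(mu_c, xs)
        area_intersection = np.trapz(np.minimum(mu_c, mu_d), xs)
        return area_intersection / area_c

For instance, agreement_index((25, 33, 42), (30, 40)) evaluates the last fuzzy completion time of Job 1 in Table 10.8 against the corresponding due date of Table 10.1.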
It should be noted here that, although the area of the fuzzy completion time becomes zero for a conventional scheduling problem with nonfuzzy processing time and nonfuzzy due date, so that the definition of the agreement index AI itself becomes meaningless, it is possible to treat the nonfuzzy case as an extreme limit. More specifically, we consider the fuzzification of the nonfuzzy processing time a as (a - δ, a, a + δ), δ > 0; regard the agreement index AI(δ) as a function of δ with respect to the fuzzified processing time and due date; and define the agreement index AI of the nonfuzzy processing time and due date as the extreme limit of AI(δ) such that

    AI = lim_{δ→0} AI(δ);    (10.2)
[Figure: (a) a triangular membership function for the fuzzy processing time; (b) a membership function equal to 1 up to d^1_j and decreasing to 0 at d^2_j for the fuzzy due date.]

Figure 10.1. Fuzzy processing time and fuzzy due date

[Figure: the area of the intersection of the fuzzy completion time and the fuzzy due date defines the agreement index.]

Figure 10.2. Agreement index AI

then we can consider the nonfuzzy scheduling problem as being encompassed in the whole.
This kind of FJSP operates under the constraints of the technological sequence, machine processing capacity, and so on, to search for the schedule that gives the fuzzy starting time or fuzzy completion time of each operation O_{j,i,r}. For notational convenience, in the following, denote the FJSP of n jobs and m machines by an n x m FJSP.

10.2.2 Operations on fuzzy processing time

For convenience in our subsequent discussion, consider the operations on fuzzy processing times represented as TFNs that become necessary for generating a schedule of the FJSP [97, 135].
Denote a TFN Ã by the triplet (a^1, a^2, a^3). Then, as is well known, the addition of the two TFNs Ã = (a^1, a^2, a^3) and B̃ = (b^1, b^2, b^3) is given by the following formula.
Addition of Ã + B̃:

    Ã + B̃ = (a^1 + b^1, a^2 + b^2, a^3 + b^3).    (10.3)

Figure 10.3 depicts the addition of the TFNs Ã and B̃. The addition is used when calculating the fuzzy completion time of each operation.

Figure 10.3. Addition of triangular fuzzy numbers

Furthermore, denote the membership functions of the two TFNs Ã = (a^1, a^2, a^3) and B̃ = (b^1, b^2, b^3) by μ_Ã and μ_B̃, respectively. Then, according to the extension principle of Zadeh [221], the membership function μ_{Ã∨B̃}(z) of Ã ∨ B̃ through the ∨ (max) operation becomes as follows.
∨ (max) operation Ã ∨ B̃:

    μ_{Ã∨B̃}(z) = sup_{z = x∨y} min{μ_Ã(x), μ_B̃(y)},    (10.4)

where

    x ∨ y = max(x, y).    (10.5)

Unfortunately, however, because the fuzzy number obtained as a result of the ∨ (max) operation on the basis of the extension principle of Zadeh does not always become a TFN, in this chapter, for simplicity, we shall approximate ∨ (max) with the following formula:

    Ã ∨ B̃ ≅ (a^1 ∨ b^1, a^2 ∨ b^2, a^3 ∨ b^3).    (10.6)

Here, although this operation formula is written with the approximate equality sign ≅, depending on the relationships of the left and right spreads of Ã and B̃, there are cases in which exact equality, and not
[Figure: three positional relationships (a), (b), and (c) of two TFNs under the ∨ operation.]

Figure 10.4. ∨ operation of triangular fuzzy numbers

just an approximation, holds. Such situations are illustrated in Figure 10.4 (a), (b), and (c). When Ã and B̃ are in the positional relationships of (a) and (b), Ã ∨ B̃ = B̃ holds exactly. Although Ã ∨ B̃ does not become a TFN in case (c), we can use the ∨ (max) operation shown earlier to approximate it as a TFN. This ∨ (max) operation is used when calculating the fuzzy starting time of each operation.
Here, to illustrate how the addition and ∨ (max) operations on TFNs for calculating the fuzzy completion time and the fuzzy starting time of each operation are used in generating FJSP schedules, consider the following 2 x 2 FJSP as a simple example.

Job 1: Machine 1 (2,5,6), Machine 2 (5,7,8)
Job 2: Machine 2 (3,4,7), Machine 1 (1,2,3)

Here, (·,·,·) represents the TFN that determines the fuzzy processing time. Then the fuzzy completion times within the same job, concerning Job 1, for example, are expressed as follows through the addition of the fuzzy processing times (2,5,6) and (5,7,8).

Job 1: Machine 1 (2,5,6), Machine 2 (7,12,14)

Here, (·,·,·) represents the fuzzy completion time.
Furthermore, the fuzzy starting time becomes necessary when there are multiple jobs. We illustrate the calculation method when the first operations are scheduled as

Job 1: Machine 1 (2,5,6)
Job 2: Machine 2 (3,4,7)

Here, if we focus on the fuzzy starting time of the second operation of Job 2, we can see that it comes after the processing of the first operation of Job 1 is completed, because the second operation is processed on Machine 1, and that it must also start after the processing of the first operation of Job 2 is completed. Therefore, the fuzzy starting time becomes (3,5,7) by using the ∨ operation on the fuzzy completion time (2,5,6) of Job 1's first operation and the fuzzy completion time (3,4,7) of Job 2's first operation.
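A minimal sketch of these two TFN operations, replayed on the example above:

    def tfn_add(a, b):
        """Addition (10.3) of two TFNs given as 3-tuples."""
        return tuple(x + y for x, y in zip(a, b))

    def tfn_max(a, b):
        """Approximate max operation (10.6): componentwise maximum; exact
        in cases (a) and (b) of Figure 10.4, an approximation in case (c)."""
        return tuple(max(x, y) for x, y in zip(a, b))

    # Fuzzy completion time of Job 1 on Machine 2:
    print(tfn_add((2, 5, 6), (5, 7, 8)))   # (7, 12, 14)
    # Fuzzy starting time of Job 2's second operation:
    print(tfn_max((2, 5, 6), (3, 4, 7)))   # (3, 5, 7)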

10.2.3 Genetic algorithms for fuzzy job-shop scheduling

10.2.3.1 Active schedule generating algorithm

As discussed in the previous chapter, in JSP, it is known that an optimal schedule exists within the set of active schedules [19, 59]. Here, an active schedule has the property that no operation can be started earlier without either delaying some other operation or violating the technological constraints. In other words, a schedule is active if there exists no continuous span of idle time on any machine great enough to process a delayed operation. For JSP, and even for the FJSP incorporating fuzzy processing time and fuzzy due date, because the inclusion of idle time is not preferable, we can expect an optimal schedule of the FJSP to exist within the set of active schedules. Therefore, we use the set of active schedules as the search space.
The Giffler and Thompson algorithm, introduced in the previous chapter, is well-known as the procedure for generating active schedules [19, 59, 219]. To deal with the FJSP, this algorithm can be extended in the following way.

The Giffler and Thompson algorithm for FJSPs

Step 1 (Fig. 10.5): Find the set C of all the earliest operations in the technological sequence among the operations that are not yet scheduled. The set C is called the cut.

Figure 10.5. Step 1 of the Giffler and Thompson algorithm for FJSPs

Step 2 (Fig. 10.6): Disregarding that only one operation can be processed on each machine at a time, create a schedule of fuzzy completion times for each operation and denote the obtained fuzzy completion time of each operation O_{j,i,r} ∈ C by (EC^1_{j,i,r}, EC^2_{j,i,r}, EC^3_{j,i,r}).

Figure 10.6. Step 2 of the Giffler and Thompson algorithm for FJSPs

Step 3 (Fig. 10.7): Find the operation O_{j*,i*,r*} that has the minimum EC^1 in the set C: EC^1_{j*,i*,r*} = min{EC^1_{j,i,r} | O_{j,i,r} ∈ C}. Find the set G of operations that consists of the operations O_{j,i,r*} ∈ C sharing the same machine M_{r*} with the operation O_{j*,i*,r*} and whose processing overlaps that of O_{j*,i*,r*}. Because the operations in G overlap in time, G is called the conflict set.

Figure 10.7. Step 3 of the Giffler and Thompson algorithm for FJSPs

Step 4 (Fig. 10.8): Randomly select one operation O_{j',i',r*} from the conflict set G.

Figure 10.8. Step 4 of the Giffler and Thompson algorithm for FJSPs

Step 5 (Fig. 10.9): Taking the selected operation as a standard, update EC^1, EC^2, and EC^3 by adding the processing time to the operation starting time determined by the max of the selected operation as the precedent of the same job among the jobs, including the operations with conflicts. Observe that the conflict is not considered here. Remove the selected operation from the cut.

Figure 10.9. Step 5 of the Giffler and Thompson algorithm for FJSPs

Step 6: Repeat steps 3 to 5 until all operations are scheduled.


Observe here that the conflicts in the Giffler and Thompson algorithm are resolved as time passes. In applying this algorithm to the FJSP, because the completion time is expressed by a TFN, it becomes necessary to select one of the three numbers as a standard. Although it is possible to set up the algorithm and generate the corresponding solution with any of these numbers as a standard, it should be noted here that EC^1 is selected as the standard.
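Combining the TFN operations of Section 10.2.2 with the algorithm of Chapter 9, the only changes are that times become TFNs and that operations are compared by the first component EC^1. A hypothetical helper might read:

    def tfn_add(a, b):       # addition (10.3)
        return tuple(x + y for x, y in zip(a, b))

    def tfn_max(a, b):       # approximate max (10.6)
        return tuple(max(x, y) for x, y in zip(a, b))

    def fuzzy_earliest_completion(job_ready, mach_ready, p):
        """Fuzzy earliest completion time of an operation whose job and
        machine become ready at the given TFN times: start at the
        approximate max of the two, then add the fuzzy processing time."""
        return tfn_add(tfn_max(job_ready, mach_ready), p)

    # Operations are then compared by the first component EC^1, e.g.:
    # j_star = min(cut, key=lambda j: ec[j][0])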

10.2.3.2 Individual representation

For an FJSP with n jobs and m machines, the corresponding schedules are represented as n x 3m matrices of fuzzy completion times. Here, as shown in Figure 10.10, the individuals are represented by the matrices of fuzzy completion times for each operation.

10.2.3.3 Method for generating initial population

Random selection in step 4 of the Giffler and Thompson algorithm for FJSPs generates an initial population.
J1: (1,2,3)  (5,7,10)   (11,14,19)
J2: (3,4,6)  (5,7,11)   (13,18,28)
J3: (2,5,7)  (12,16,25) (19,24,35)

Figure 10.10. Example of individual representation (3 x 3 FJSP)

As discussed in the previous chapter, to keep the diversity of the initial populations, by introducing the concept of similarity between individuals and calculating the degree of similarity between individuals at the time of their initial generation, we propose to generate only individuals of a certain degree of similarity or less. Through a number of simulations using this method, it has been shown that the most stable solutions are obtained when generating individuals with degrees of similarity of 0.8 or less as the initial individuals. Therefore, we generate individuals with degrees of similarity of 0.8 or less as the initial individuals.

10.2.3.4 Crossover and selection

Crossover is an operator that, out of two parent individuals, creates offspring individuals that retain their character in some form. The Giffler and Thompson algorithm for FJSPs can generate all of the active schedules. The characteristics of individuals enter this algorithm in step 4.
Similar to the Giffler and Thompson algorithm-based crossover introduced in the previous chapter, on the basis of the Giffler and Thompson algorithm for FJSPs, we adopt the following crossover method that creates one offspring individual from two parent individuals.

Giffler and Thompson algorithm-based crossover for FJSPs

Step 1: Perform steps 1 to 3 of the Giffler and Thompson algorithm for FJSPs and obtain the cut set C and the conflict set G.

Step 2: Select one of the two parent individuals with equal probability 1/2. From among the conflict set G, choose the operation with the minimum fuzzy completion time in the schedule represented by the selected parent individual and denote it by O_{j*,i*,r*}.

Step 3: Perform steps 5 and 6 of the Giffler and Thompson algorithm for FJSPs.
Figure 10.11 illustrates the Giffler and Thompson algorithm-based crossover for FJSPs. As shown in Figure 10.11, in offspring 1, assume that O_{1,r*} is the operation having the minimum EC^1 in the cut set C; then the conflict exists among the operations O_{1,r*}, O_{2,r*}, and O_{3,r*}. If parent 1 is selected with probability 1/2, then, because O_{2,r*} is processed with the highest priority among the conflict set in parent 1, O_{2,r*} is selected to eliminate the conflict.
[Figure: the fuzzy schedules of parent 1 and parent 2 and the offspring built by choosing the conflicting operation processed first in the parent selected with probability 1/2.]

Figure 10.11. Giffler and Thompson algorithm-based crossover for FJSPs

According to the Giffler and Thompson algorithm-based crossover procedure for FJSPs, one offspring individual is newly generated. This procedure is repeated c times to generate c offspring individuals.
As discussed in the previous chapter, to prevent the extinction of good individuals, from the (c + 2) individuals consisting of the c offspring individuals plus the two parent individuals, two individuals are preserved to the next generation, selected in the following way.

(1) Among the c offspring individuals, select the one individual with the greatest minimum agreement index.

(2) Among the (c + 1) individuals consisting of the (c - 1) offspring not selected in (1) plus the two parents, select the one individual with the greatest minimum agreement index.

In other words, local ranking selection is performed among the (c + 2) individuals. Observe that no fitness function is set, because there is no need to calculate the fitness.
In this crossover method, because the larger the value of c, the larger the number of offspring, the probability that excellent offspring are generated also becomes high. However, because they are generated from the same parents, the degree of similarity among the offspring individuals becomes high. With this observation in mind, we set c = 3 in the numerical experiments.

10.2.3.5 Mutation

At the time of crossover, without selecting from either parent, mutation randomly selects an operation from the set G with a mutation ratio of p% [219].

10.2.3.6 Population construction


As discussed in the previous chapter, in order to prevent premature convergence and to keep the diversity of the populations, populations are generated on the basis of the degrees of similarity between individuals [157, 158]. In the numerical experiments, we prepare three subpopulations and generate 10 individuals in each subpopulation based on the degrees of similarity. At a generation where the subpopulations have converged to a certain degree, we merge all of the subpopulations and then continue with more generations until convergence is achieved. This population construction can be expected to prevent convergence to local solutions.
So far, we have proposed a genetic algorithm for the FJSP that introduces degrees of similarity and population construction. As discussed in the previous section, this is based on our experience in comparing the effects on the conventional JSP of a genetic algorithm that introduces degrees of similarity and population construction with those of a genetic algorithm that makes no such introduction.

10.2.4 Simulated annealing


For comparison, simulated annealing (SA) [43, 98] is adopted as another probabilistic method for FJSPs. It should be noted here that SA searches for solutions by exchanging the job-processing order for each machine.
The algorithm of SA adopted here is briefly summarized as follows.

The algorithm of SA

Step 1: Generate one solution Xc through the random selection in step 4 of the active schedule generating algorithm; i.e., generate an initial
solution similar to the genetic algorithm in the previous chapter. Set an initial temperature.

Step 2: Represent the job processing order for each machine of a solu-
tion Xc by the corresponding matrix.

Step 3: From the matrix, select a certain machine at random. Select


two job processing orders of the machine and exchange them.

Step 4: Based on the job processing order after exchange, generate a


solution that becomes an active schedule and denote the solution by
X.
Step 5: If the minimum agreement index of the obtained solution X is
greater than that of the solution Xc before exchange, set XC = X.

Step 6: Even if the minimum agreement index of the obtained solution


X is smaller than that of the solution Xc before exchange, set Xc = X
with the acceptance probability.

Step 7: Update the search number and the temperature.

Step 8: If the search number reaches the prescribed search number,


stop to obtain an approximate optimal solution Xc. Otherwise, re-
turn to step 2.

10.2.5 Numerical experiments


As illustrative numerical examples, consider 6 x 6 and 10 x 10 F JSPs
as shown in Tables 10.1 and 10.2. Here, it is assumed that each of
the operations of all jobs shown in the tables must be processed in the
sequencing.

Table 10. 1. Numerical example of 6 x 6 F JSP

Processing machines (fuzzy processing time)


Job 1 1(5,6,13) 5(3,4,5) 2(1,2,3) 6(3,4,5) 4(2,3,4) 3(2,3,4)
Job 2 1(3,4,5) 2(2,4,5) 3(1,3,5) 6(4,5,6) 4(5,6,7) 5(6,7,8)
Job 3 3(1,2,3) 6(5,6,7) 5(4,5,6) 4(3,4,5) 2(1,2,3) 1(1,2,3)
Job 4 6(2,3,4) 5(1,2,3) 4(2,3,4) 2(2,3,5) 1(3,4,6) 3(3,4,5)
Job 5 6(3,4,5) 5(2,3,4) 4(1,2,3) 3(2,3,4) 2(4,5,6) 1(2,3,4)
Job 6 5(6,7,8) 6(4,5,6) 1{2,3,4) 2(3,4,5) 3(2,3,4) 4{1,2,3)
Fuzzy Job 1 Job 2 Job 3 Job 4 Job 5 Job 6
due date 30,40 35,40 20,28 32,40 30,35 40,45
Table 10.2. Numerical example of 10 x 10 FJSP

Job      Processing machines (fuzzy processing time)
Job 1    8(2,3,4)   6(3,5,6)   5(2,4,5)   2(4,5,6)   1(1,2,3)   3(3,5,6)   9(2,3,5)   4(1,2,3)   7(3,4,5)   10(2,3,4)
Job 2    10(2,3,4)  7(2,3,5)   4(2,4,5)   6(1,2,3)   8(4,5,6)   3(2,4,6)   2(2,3,4)   1(1,3,4)   5(2,3,4)   9(3,4,5)
Job 3    6(2,4,5)   9(1,2,3)   10(2,3,5)  8(1,2,4)   1(3,5,6)   7(1,3,4)   4(1,3,5)   2(1,2,4)   5(2,4,5)   3(1,3,5)
Job 4    1(1,2,3)   5(3,4,5)   8(1,3,5)   9(2,4,6)   10(2,4,5)  6(1,2,4)   7(3,4,5)   2(1,3,5)   4(1,3,6)   3(1,3,4)
Job 5    2(2,3,4)   7(1,3,4)   3(1,3,4)   5(1,2,3)   8(1,3,5)   9(2,3,4)   10(3,4,5)  6(1,3,4)   1(3,4,5)   4(1,3,4)
Job 6    4(2,3,4)   2(2,3,4)   3(1,2,3)   5(2,4,5)   6(1,3,4)   8(1,3,4)   7(3,4,5)   9(1,2,3)   10(2,4,5)  1(1,3,4)
Job 7    3(2,3,4)   5(1,4,5)   4(1,3,5)   1(3,4,5)   9(2,3,4)   7(3,4,5)   2(1,2,3)   10(3,5,6)  8(3,5,6)   6(1,2,3)
Job 8    7(3,4,5)   1(1,2,3)   9(3,4,5)   6(2,4,5)   10(1,3,4)  2(2,3,4)   5(1,2,3)   3(2,4,5)   4(3,4,5)   8(2,3,5)
Job 9    9(3,4,5)   4(1,3,4)   10(1,3,5)  2(2,3,4)   3(3,5,6)   6(2,4,5)   8(1,3,4)   1(3,4,5)   5(1,2,3)   7(3,4,5)
Job 10   7(2,4,5)   5(1,2,3)   2(3,4,5)   4(2,3,4)   1(1,2,3)   8(3,4,5)   10(2,4,5)  6(3,4,5)   3(1,2,3)   9(1,2,4)

Fuzzy due date:  Job 1 (45,60), Job 2 (50,60), Job 3 (50,65), Job 4 (50,65), Job 5 (50,65), Job 6 (45,60), Job 7 (45,60), Job 8 (50,60), Job 9 (45,60), Job 10 (50,60)
Now we are ready to apply both the genetic algorithm and the SA presented in this section to the 6 x 6 and 10 x 10 FJSPs of Tables 10.1 and 10.2 to compare the accuracy of the solutions and the state of convergence.
Each of the parameter values of the genetic algorithm shown in Table 10.3 was found through a number of preliminary experiments, and these values are used in each of the trials of the genetic algorithm. The search numbers of SA are set as shown in Table 10.4 by considering the population sizes and the numbers of generations of the genetic algorithm. Although it may be appropriate to set the search numbers of SA as 30 x 50 = 1500 and 40 x 80 = 3200 for the 6 x 6 and 10 x 10 FJSPs, respectively, we set numbers larger than these, as shown in Table 10.4, for comparing the accuracy of the solutions and the state of convergence of the genetic algorithm and SA.

Table 10.3. Parameter values of GA

Problem    Population size    Number of generations    Mutation rate (%)
6 x 6      30                 50                       5
10 x 10    40                 80                       5

Table 10.4. Search numbers of SA

Problem    Search number
6 x 6      3000
10 x 10    5000

All of the trials of GA and SA are performed 10 times for each problem using a Fujitsu S-4/10 workstation. The average time required for the simulations is shown in Table 10.5. Naturally, the computation time of SA is much larger than that of GA because of the predetermined search numbers shown in Table 10.4. The minimum agreement indices for the approximate optimal solutions obtained from these trials are shown in Tables 10.6 and 10.7.
Furthermore, to clarify the trend toward convergence for the 6 x 6 and 10 x 10 FJSPs, the changes over the generations of the average minimum agreement index and the maximum minimum agreement index for all trial populations of GA are shown in Figures 10.12 and 10.14, respectively. Also, the changes over each search

Table 10.5. Average time required for simulation

Problem GA SA
6 x 6 17.6 sec 40.4 sec
10 x 10 170.4 sec 220.5 sec

Table 10.6. Minimum agreement indices for 6 x 6 F JSP

Trial GA SA Trial GA SA
1 0.69 0.59 6 0.69 0.38
2 0.69 0.39 7 0.69 0.38
3 0.69 0.36 8 0.69 0.38
4 0.69 0.36 9 0.69 0.36
5 0.69 0.39 10 0.69 0.36

Table 10.7. Minimum agreement indices for 10 x 10 FJSP

Trial GA SA Trial GA SA
1 0.94 0.76 6 0.93 0.75
2 0.94 0.68 7 0.94 0.67
3 0.94 0.64 8 0.94 0.69
4 0.94 0.76 9 0.93 0.78
5 0.94 0.75 10 0.93 0.60

number of the maximum minimum agreement index for all trials of SA are shown in Figures 10.13 and 10.15.
The fuzzy completion times of each operation for the solutions obtained from GA are also shown in Tables 10.8 and 10.9.

Table 10.8. Approximate optimal solution using GA for 6 × 6 FJSP

Fuzzy completion time


Job 1 (8,10,18) (16,21,26) (17,23,29) (21,27,34) (23,30,38) (25,33,42)
Job 2 (3,4,5) (5,8,10) (6,11,15) (14,18,22) (20,26,32) (26,33,40)
Job 3 (1,2,3) (7,9,11) (11,14,17) (14,18,22) (15,20,25) (16,22,29)
Job 4 (2,3,4) (7,9,11) (9,12,15) (11,15,20) (14,19,26) (20,27,34)
Job 5 (10,13,16) (13,17,21) (15,20,25) (17,23,29) (21,28,35) (23,31,39)
Job 6 (6,7,8) (18,23,28) (20,26,33) (24,32,40) (27,36,46) (28,38,49)
Agreement index = 0.69

Figure 10.12. GA convergence trends for 6 × 6 FJSP: (a) average minimum agreement index of group; (b) maximum minimum agreement index of group (agreement index versus generation)

Figure 10.13. SA convergence trends for 6 × 6 FJSP (maximum minimum agreement index versus search number)



Figure 10.14. GA convergence trends for 10 × 10 FJSP: (a) average minimum agreement index of group; (b) maximum minimum agreement index of group (agreement index versus generation)

Figure 10.15. SA convergence trends for 10 × 10 FJSP (maximum minimum agreement index versus search number)


Table 10.9. Approximate optimal solution using GA for 10 × 10 FJSP

Fuzzy completion time

Job 1   (2,3,4)   (5,9,11)  (7,14,18)  (11,18,24) (12,20,27) (15,25,33) (17,28,38) (18,33,45) (25,40,53) (27,43,61)
Job 2   (2,3,4)   (5,7,10)  (8,17,23)  (10,16,26) (14,21,34) (17,29,40) (19,32,44) (20,34,51) (22,37,55) (25,41,60)
Job 3   (2,4,5)   (4,6,8)   (7,13,19)  (8,15,23)  (15,25,33) (16,28,37) (17,31,42) (20,34,48) (24,41,60) (27,44,65)
Job 4   (1,2,3)   (4,6,8)   (5,9,13)   (9,14,19)  (12,21,28) (15,30,42) (22,36,48) (23,39,53) (24,42,59) (28,47,69)
Job 5   (2,3,4)   (6,10,14) (7,13,18)  (8,15,21)  (9,18,28)  (13,23,32) (16,27,37) (17,30,46) (23,38,56) (25,45,63)
Job 6   (2,3,4)   (4,6,8)   (5,8,11)   (11,21,29) (12,24,33) (15,24,38) (19,32,43) (20,34,46) (24,39,57) (25,42,61)
Job 7   (2,3,4)   (5,10,13) (6,13,18)  (9,17,23)  (11,20,27) (14,24,32) (17,27,36) (20,32,43) (23,36,53) (26,41,60)
Job 8   (3,4,5)   (4,6,8)   (7,10,13)  (9,14,18)  (10,17,23) (16,25,33) (17,27,36) (19,33,45) (22,37,50) (25,39,58)
Job 9   (3,4,5)   (4,7,9)   (5,10,14)  (7,13,18)  (10,18,24) (14,28,38) (16,27,42) (19,31,47) (20,33,50) (28,44,58)
Job 10  (8,14,19) (9,17,24) (14,22,29) (16,25,33) (17,27,36) (20,31,47) (22,35,52) (25,39,57) (26,41,60) (27,43,64)

Agreement index = 0.94

For the 6 × 6 FJSP, a solution with a minimum agreement index of 0.69 is obtained by the proposed GA in 10 out of 10 trials. On the contrary, the minimum agreement index of the best solution obtained by SA through 10 trials is 0.59.
Furthermore, for the 10 × 10 FJSP, a solution with a minimum agreement index of 0.94 is obtained by the proposed GA in 7 out of 10 trials. Unfortunately, however, the minimum agreement index of the best solution obtained by SA through 10 trials for the 10 × 10 FJSP is 0.78.
As can be seen in Figures 10.13 and 10.15, the solutions obtained through SA are uneven, and the convergence trends vary widely during searches. This may be because the one-point search of SA is influenced by the initial solution and by the trend in solutions during the search.
On the contrary, as can be seen in Figures 10.12 and 10.14, the solutions obtained through GA are very stable. This is probably because of the multipoint search of GA, in which the influence of the initial solution and of the trend in solutions during searches tends to average out. Moreover, the uniform generation of initial solutions in the proposed GA also appears to contribute to stability.

10.3 Multiobjective job-shop scheduling under fuzziness
10.3.1 Problem formulation
In contrast to the n × m job-shop scheduling problems (JSPs) discussed in the previous chapter, in the previous section job-shop scheduling problems incorporating fuzzy processing time and fuzzy due date were formulated as fuzzy job-shop scheduling problems (FJSPs), in which the fuzzy processing time of an operation O_{j,i,r} is represented by a TFN A_{j,i,r}, denoted by the triplet (a^1_{j,i,r}, a^2_{j,i,r}, a^3_{j,i,r}), and the fuzzy due date D_j is represented by the degree of satisfaction with respect to the job completion time, denoted by the doublet (d^1_j, d^2_j).
For the fuzzy completion time of each job, expressed as a TFN C̃_j, as an index showing the portion of C̃_j that meets the fuzzy due date D_j, it is convenient to adopt the agreement index AI of the fuzzy completion time C̃_j with respect to the fuzzy due date D_j. Here, as discussed in the previous chapter, the agreement index AI of the fuzzy completion time C̃_j with respect to the fuzzy due date D_j is defined as the area of the intersection of the two membership functions divided by the area of the membership function of C̃_j [97, 139].

For such types of FJSPs, as discussed in the previous chapter, on the basis of the concept of an agreement index between the fuzzy due date and the fuzzy completion time of each job, single-objective FJSPs that seek a schedule maximizing the minimum agreement index are formulated, and an efficient genetic algorithm is proposed by incorporating the concept of similarity among individuals into the genetic algorithm [219] that uses a Gantt chart as individual representation.
In this section, to reflect real-world situations more adequately, we formulate multiobjective fuzzy job-shop scheduling problems (MOFJSPs) as three-objective problems that not only maximize the minimum agreement index but also maximize the average agreement index and minimize the maximum fuzzy completion time. To be more explicit, the formulated problem is to

    maximize  z_1 = (1/n) Σ_{j=1}^{n} AI_j,    (10.7)
    maximize  z_2 = AI_min = min_{j=1,...,n} AI_j,    (10.8)
    minimize  z_3 = C̃_max = max_{j=1,...,n} C̃_j,    (10.9)

under the constraints of the technological sequence, where AI_j and C̃_j denote the agreement index and the fuzzy completion time of job J_j, respectively. It should be emphasized here that the objective functions z_1, z_2, and z_3 denote the average agreement index, the minimum agreement index, and the maximum fuzzy completion time, respectively. For notational convenience, in the following, we denote FJSPs or MOFJSPs of n jobs and m machines by n × m FJSPs or n × m MOFJSPs, respectively.
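Although the following fragment does not appear in the original study, a minimal sketch in C of the computation of the first two objectives may clarify the formulation; the function and variable names are illustrative assumptions of our own. (The third objective z_3 requires the ranking of fuzzy completion times discussed in Section 10.3.2.)

    /* A sketch (not the book's code): computing z1 and z2 from the
       agreement indices AI_j of n jobs. */
    #include <stdio.h>

    static void evaluate_schedule(const double ai[], int n,
                                  double *z1, double *z2)
    {
        double sum = ai[0], min = ai[0];
        for (int j = 1; j < n; j++) {
            sum += ai[j];
            if (ai[j] < min) min = ai[j];   /* minimum agreement index */
        }
        *z1 = sum / n;                      /* average agreement index */
        *z2 = min;
    }

    int main(void)
    {
        double ai[] = {0.9, 0.7, 0.8};      /* hypothetical AI values */
        double z1, z2;
        evaluate_schedule(ai, 3, &z1, &z2);
        printf("z1 = %.3f, z2 = %.3f\n", z1, z2);
        return 0;
    }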
Now, considering the imprecise nature of the DM's judgments, it is
natural to assume that the DM may have imprecise or fuzzy goals for the
objective functions in the multiobjective job-shop scheduling problems
[135]. These fuzzy goals can be quantified by eliciting the corresponding
membership functions.
For the sake of simplicity, we adopt linear membership functions for quantifying the fuzzy goals of the DM. The corresponding linear membership function μ_i(z_i) is defined as

    μ_i(z_i) = 0,                                z_i ≤ z_i^0
    μ_i(z_i) = (z_i − z_i^0)/(z_i^1 − z_i^0),    z_i^0 < z_i ≤ z_i^1    (10.10)
    μ_i(z_i) = 1,                                z_i ≥ z_i^1,

where z_i^0 and z_i^1 denote the values of the objective function z_i for which the degree of membership is 0 and 1, respectively. Such a linear membership function is illustrated in Figure 10.16.
Figure 10.16. Linear membership function (membership degree versus z_i)
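As an illustration only, a minimal C sketch of the linear membership function (10.10) is given below; the function name is an assumption, and for a minimized objective such as z_3 it suffices to take z_i^0 > z_i^1.

    /* Linear membership function (10.10); a sketch, not the book's code.
       z0 and z1 are the objective values at which the membership degree
       is 0 and 1, respectively. */
    static double linear_membership(double z, double z0, double z1)
    {
        double mu = (z - z0) / (z1 - z0);   /* linear interpolation */
        if (mu < 0.0) return 0.0;           /* outside the ramp, below z0 */
        if (mu > 1.0) return 1.0;           /* beyond z1 */
        return mu;
    }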

Having elicited the linear membership functions from the DM for all of the objective functions, if we adopt the well-known fuzzy decision of Bellman and Zadeh, or the minimum operator [22], the problem to be solved is transformed into finding an optimal schedule that maximizes

    f = min { μ_1(z_1), μ_2(z_2), μ_3(z_3) }.    (10.11)

10.3.2 Ranking triangular fuzzy numbers

In FJSPs, fuzzy completion times become TFNs because they are calculated as the sum of fuzzy processing times. Hence, when the minimization of the maximum fuzzy completion time is considered as an objective function, some ranking method [29, 96, 206] becomes necessary for ranking the fuzzy completion times.
In this section, in addition to the operations sum and ∨ (max) discussed in the previous chapter, we adopt the following ranking method for TFNs [29, 96, 206]. In this ranking method, the criterion for dominance is one of the following three, in the order given below.

Criterion 1. The greatest associate ordinary number

    C_1(A) = (a^1 + 2a^2 + a^3)/4    (10.12)

is used as a first criterion for ranking two TFNs. It is illustrated in Figure 10.17.

Figure 10.17. The greatest associate ordinary number of A

Criterion 2. If C_1 does not rank the two TFNs, those that have the best maximal presumption (the mode)

    C_2(A) = a^2    (10.13)

will be chosen as a second criterion.

Criterion 3. If C_1 and C_2 do not rank the TFNs, the difference of the spreads

    C_3(A) = a^3 − a^1    (10.14)

will be used as a third criterion.

According to these three criteria, it becomes possible to rank all TFNs. Among TFNs A_j (j = 1, ..., n), the maximum and minimum TFNs are denoted by A_max and A_min, respectively.
For example, according to these three criteria, determine the order of the TFNs of S = {A_1, A_2, A_3, A_4}, where A_1 = (2,5,8), A_2 = (3,4,9), A_3 = (3,5,7), and A_4 = (4,5,8). Using the first criterion C_1, C_1(A_4) = 5.5 and C_1(A_1) = C_1(A_2) = C_1(A_3) = 5.0. Hence A_max = A_4. By the second criterion C_2, C_2(A_1) = C_2(A_3) = 5 and C_2(A_2) = 4. Thus A_min = A_2. The third criterion C_3 shows C_3(A_1) = 6 and C_3(A_3) = 4. In this way, the decreasing order of the TFNs of S becomes A_4, A_1, A_3, A_2.
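The three criteria combine into a single lexicographic comparison; the following C sketch (our own illustration, not the book's code) reproduces the ranking of the example above.

    /* Ranking criteria (10.12)-(10.14) for a TFN A = (a1, a2, a3). */
    #include <stdio.h>

    typedef struct { double a1, a2, a3; } TFN;

    static double c1(TFN a) { return (a.a1 + 2.0 * a.a2 + a.a3) / 4.0; }
    static double c2(TFN a) { return a.a2; }           /* the mode */
    static double c3(TFN a) { return a.a3 - a.a1; }    /* the spread */

    /* Returns +1 if x ranks above y, -1 if below, 0 if indistinguishable.
       Exact comparisons suffice for the integer-valued data used here. */
    static int tfn_compare(TFN x, TFN y)
    {
        if (c1(x) != c1(y)) return c1(x) > c1(y) ? 1 : -1;
        if (c2(x) != c2(y)) return c2(x) > c2(y) ? 1 : -1;
        if (c3(x) != c3(y)) return c3(x) > c3(y) ? 1 : -1;
        return 0;
    }

    int main(void)
    {
        TFN a1 = {2, 5, 8}, a3 = {3, 5, 7};
        printf("%d\n", tfn_compare(a1, a3));  /* prints 1: A1 above A3 */
        return 0;
    }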

10.3.3 Genetic algorithms for fuzzy multiobjective job-shop scheduling

10.3.3.1 Individual representation
For FJSPs with n jobs and m machines, the corresponding schedules are represented as n × 3m matrices of fuzzy completion times. Here, as shown in Figure 10.18, the individuals are represented by the matrices of fuzzy completion times of each operation.

J_1  ( (1,2,3)   (5,7,10)   (11,14,19) )
J_2  ( (3,4,6)   (5,7,11)   (13,18,28) )
J_3  ( (2,5,7)   (12,16,25) (19,24,35) )

Figure 10.18. Example of individual representation (3 × 3 FJSP)

10.3.3.2 Method for generating initial populations

Random selection in step 4 of the Giffler and Thompson algorithm for FJSPs generates an initial population. Here, to keep diversity in the initial population, by introducing the concept of similarity between individuals and calculating the degree of similarity between individuals at the time of their initial generation, we propose to generate only individuals of a certain degree of similarity or less. Through a number of simulations using this method, it is recognized that the most stable solutions are obtained when generating individuals with degrees of similarity of 0.8 or less as the initial individuals [157, 158]. Therefore, we generate individuals with degrees of similarity of 0.8 or less as the initial individuals.

10.3.3.3 Crossover and selection

Crossover is an operator that generates, from two parent individuals, two offspring individuals that retain their character in some form. Based on the active schedule generating algorithm, we adopt the following crossover method, which creates one offspring individual from two parent individuals.

Giffler and Thompson algorithm-based crossover for MOFJSPs

Step 1: Perform steps 1 to 3 in the active schedule generating algorithm and obtain the cut set C and the conflict set G.

Step 2: Select one of the two parent individuals with the same probability 1/2. Among the conflict set G, choose the operation with the minimum ranking fuzzy completion time in the schedule represented by the selected parent individual and denote it by O_{j*,i*,r*}.

Step 3: Perform steps 5 and 6 in the Giffler and Thompson algorithm for FJSPs.

According to this procedure, one offspring individual is newly generated. This procedure is repeated c times to generate c offspring individuals.
To prevent the extinction of good individuals, from the (c + 2) individuals consisting of the c offspring individuals plus the two parent individuals, two individuals that are preserved to the next generation are selected in the following way.

(1) Among c offspring individuals, select one individual with the great-
est value of the objective function.

(2) Among (c + 1) individuals consisting of (c - 1) offspring not selected


in (1) plus two parents, select one individual with the greatest value
of the objective function.

In this crossover method, because a larger value of c yields more offspring, the probability that excellent offspring are generated also becomes higher. However, because the offspring are generated from the same parents, the degree of similarity among them becomes high. As discussed previously, we set c = 3 in the numerical experiments.
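A minimal C sketch of the preservation rule (1)-(2), under the assumption that larger fitness values are better, may clarify the selection; all names are illustrative.

    /* Keep the best offspring, then the best of the remaining (c - 1)
       offspring plus the two parents. Indices 0..C-1 denote offspring;
       C and C+1 denote the two parents. */
    #define C 3   /* number of offspring per crossover */

    static void select_survivors(const double off[C], const double par[2],
                                 int *keep1, int *keep2)
    {
        int best = 0;
        for (int k = 1; k < C; k++)
            if (off[k] > off[best]) best = k;
        *keep1 = best;                              /* rule (1) */

        int second = -1;
        double f2 = -1e30;
        for (int k = 0; k < C; k++)                 /* remaining offspring */
            if (k != best && off[k] > f2) { f2 = off[k]; second = k; }
        for (int p = 0; p < 2; p++)                 /* and the two parents */
            if (par[p] > f2) { f2 = par[p]; second = C + p; }
        *keep2 = second;                            /* rule (2) */
    }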

10.3.3.4 Mutation operator


At the time of crossover, without selecting from either parent, ran-
domly select operations from the set G with a mutation ratio of p%.

10.3.3.5 Population construction

As discussed in the previous section, populations are generated on the basis of the degrees of similarity among individuals [157, 158]. In the numerical experiments, we prepare three subpopulations and generate 10 individuals in each subpopulation based on the degrees of similarity. At a generation in which each of the subpopulations has converged to a certain degree, we merge all of the subpopulations and then continue with more generations until convergence is achieved. This population construction can be expected to prevent convergence to local solutions.

10.3.4 Simulated annealing

For comparison, simulated annealing (SA) [1, 43, 98] is adopted as another probabilistic search method for FJSPs. Here, observe that SA searches for solutions by exchanging the job processing order on each machine.
The algorithm of SA used here is summarized as follows.

Step 1: Generate one solution (schedule) through the random selection in step 4 of the active schedule generating algorithm and denote it by X_c. Set an initial temperature T_0.

Step 2: Represent the job processing sequence on each machine of the solution X_c by the corresponding matrix, and select one machine at random. Select two jobs on the machine and exchange them. For example, in a problem of 3 jobs and 3 machines, when the first job (J_1) and the third job (J_3) of machine 2 (M_2) are selected, as shown in Figure 10.19(a), the result after the exchange becomes as shown in Figure 10.19(b).

(a) Before job exchange; (b) after job exchange

Figure 10.19. Example of job processing order and job exchange (3 × 3 FJSP)

Step 3: Based on the job processing sequence after the job exchange, resolve the conflict occurring in step 4 of the active schedule generating algorithm and generate a new solution. If the obtained solution is different from the solution before the job exchange, set the solution as a neighborhood solution X and go to step 4. Otherwise, return to step 2 and select a new exchange pair.

Step 4: If the objective function value of the solution obtained through the exchange is improved, accept the exchange and set X_c = X. Otherwise, determine the acceptance by the following substeps.

(1) Using the decrement ΔJ of the objective function value and the temperature T, calculate exp(−ΔJ/T).
(2) Generate a uniform random number on the open interval (0,1) and compare it with the value of exp(−ΔJ/T).
(3) If the value of exp(−ΔJ/T) is greater than the random number, accept the exchange and set X_c = X. Otherwise, the exchange is not accepted.

When the exchange is accepted, go to step 5. Otherwise, return to step 2 to find the next exchange pair.

Step 5: The equilibrium state test is performed by checking whether the change of the objective function value obtained through the exchanges in a prescribed number of iterations is small enough. The number of iterations for the equilibrium state test is called the epoch. Here, the test is performed in the following substeps (1) to (4).

(1) Repeat the procedures from step 2 to step 4 until the number of exchanges reaches the epoch number. When the epoch number has been reached, perform the following substeps (2) to (4).
(2) Calculate the average value f_e of the objective function values during the current epoch and the average value f̄ of the objective function values through all the exchanges thus far.
(3) Check whether the relative error between the overall average f̄ and the epoch average f_e is smaller than the prescribed tolerance value ε, i.e., check whether |f_e − f̄|/f̄ < ε holds.
(4) When the relative error is smaller than the tolerance value, regard the equilibrium state as reached at this temperature and go to step 6 to decrease the temperature. Otherwise, clear the counter of the epoch and return to step 2 to repeat the job exchange process.

Step 6: Starting with the initial temperature T_0, decrease the temperature by the predetermined ratio α, i.e., T_new = α × T_old.

Step 7: If the number of pair exchanges reaches the predetermined number, stop the algorithm.

Repeating this process, when the algorithm terminates, select the solution with the best objective function value among the obtained solutions.
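The essential numerical ingredients of steps 4 and 6 are the Metropolis-type acceptance test and the geometric cooling schedule; the following C sketch is our own illustration under these assumptions, not the code used in the experiments.

    #include <math.h>
    #include <stdlib.h>

    /* dJ is the decrement of the (maximized) objective value caused by
       the exchange; a worse solution (dJ > 0) is accepted with
       probability exp(-dJ / T). */
    static int accept_exchange(double dJ, double T)
    {
        if (dJ <= 0.0) return 1;                       /* improvement */
        double u = (rand() + 1.0) / (RAND_MAX + 2.0);  /* uniform on (0,1) */
        return exp(-dJ / T) > u;
    }

    /* Step 6: T_new = alpha * T_old with the predetermined ratio alpha. */
    static double cool(double T, double alpha)
    {
        return alpha * T;
    }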

10.3.5 Numerical experiments

Now we are ready to apply both the genetic algorithm and the SA presented above to MOFJSPs. As illustrative numerical examples, consider 6 × 6 and 10 × 10 MOFJSPs and solve three different problems for each of the 6 × 6 and 10 × 10 cases. The fuzzy due dates involved in these numerical examples are randomly generated in the following way. Taking the value a^2_{j,i,r} of the fuzzy processing time of each operation as the standard time, calculate the sum of the standard times for each job, and multiply the resulting sum by some appropriate value to determine each d^1_j. Then each d^2_j is determined by adding a randomly generated number on the closed interval [3,15] to each d^1_j.
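For illustration, the due date generation just described can be sketched in C as follows; the function name, the argument layout, and the multiplier parameter k are our own assumptions.

    #include <stdlib.h>

    /* a2[] holds the modes a2_{j,i,r} of the m operations of job j;
       k is the chosen multiplier (hypothetical parameter name). */
    static void make_due_date(const double a2[], int m, double k,
                              double *d1, double *d2)
    {
        double standard = 0.0;
        for (int i = 0; i < m; i++)
            standard += a2[i];          /* sum of standard times of job j */
        *d1 = k * standard;
        *d2 = *d1 + 3 + rand() % 13;    /* random integer on [3,15] */
    }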

The data for Problems 1, 2, and 3 of both the 6 × 6 MOFJSP and the 10 × 10 MOFJSP are shown in Tables 10.10 through 10.15. It is assumed here that the operations of all jobs shown in these tables must be processed in the given sequence.

Table 10.10. Problem 1 of 6 × 6 MOFJSP

Processing machines (fuzzy processing time)

Job 1  4(9,13,17)  3(6,9,12)  1(10,11,13) 5(5,8,11)  2(10,14,17) 6(9,11,15)
Job 2  4(5,8,9)    2(7,8,10)  5(3,4,5)    3(3,5,6)   1(10,14,17) 6(4,7,10)
Job 3  5(3,5,6)    4(3,4,5)   3(2,4,6)    1(5,8,11)  2(3,5,6)    6(1,3,4)
Job 4  6(8,11,14)  3(5,8,10)  1(9,13,17)  4(8,12,13) 2(10,12,13) 5(3,5,7)
Job 5  3(8,12,13)  5(6,9,11)  6(10,13,17) 2(4,6,8)   1(3,5,7)    4(4,7,9)
Job 6  2(8,10,13)  4(8,9,10)  6(6,9,12)   3(1,3,4)   5(3,4,5)    1(2,4,6)

Fuzzy     Job 1    Job 2   Job 3   Job 4    Job 5   Job 6
due date  112,121  82,91   49,60   97,102   83,89   54,59

Table 10.11. Problem 2 of 6 × 6 MOFJSP

Processing machines (fuzzy processing time)

Job 1  6(5,7,10)   5(10,14,17) 4(1,3,5)   3(1,3,5)    2(4,6,8)    1(9,10,11)
Job 2  5(6,7,8)    1(9,13,17)  3(8,12,13) 6(2,3,4)    4(10,13,16) 2(2,3,4)
Job 3  3(4,5,6)    1(10,11,12) 5(9,12,16) 2(8,12,13)  6(6,9,12)   4(4,7,9)
Job 4  4(1,2,4)    5(2,4,5)    6(5,7,8)   3(5,8,10)   1(3,5,7)    2(6,8,10)
Job 5  4(9,11,15)  1(4,6,9)    5(1,2,3)   6(10,11,15) 2(4,7,8)    3(10,11,12)
Job 6  5(6,7,9)    3(1,2,4)    2(6,9,11)  6(10,14,18) 4(1,2,3)    1(9,13,14)

Fuzzy     Job 1   Job 2   Job 3   Job 4   Job 5   Job 6
due date  81,88   66,80   89,92   51,60   91,96   75,78

Table 10.12. Problem 3 of 6 × 6 MOFJSP

Processing machines (fuzzy processing time)

Job 1  1(6,7,9)    6(1,2,4)    4(4,7,8)   5(1,2,3)   3(9,10,13)  2(2,3,5)
Job 2  1(10,13,16) 4(7,11,15)  5(6,8,11)  6(2,4,5)   2(9,12,15)  3(2,3,4)
Job 3  2(5,6,9)    6(9,10,11)  3(6,7,10)  4(9,11,14) 1(8,10,14)  5(9,11,12)
Job 4  6(10,11,15) 4(3,4,6)    1(9,12,16) 2(9,12,15) 5(4,5,7)    3(5,7,9)
Job 5  4(2,3,5)    5(8,12,14)  3(1,3,5)   2(3,4,5)   1(3,4,6)    6(4,5,6)
Job 6  5(5,8,10)   3(7,10,11)  1(1,3,4)   6(6,8,9)   4(4,6,7)    2(3,4,6)

Fuzzy     Job 1   Job 2    Job 3    Job 4   Job 5   Job 6
due date  43,50   96,102   93,103   71,75   49,54   62,70
Table 10.13. Problem 1 of 10 × 10 MOFJSP

Processing machines (fuzzy processing time)

Job 1   4(10,13,16) 6(4,7,9)     7(10,12,13)  8(5,6,7)    9(6,8,9)    2(7,8,12)   5(10,12,15) 3(5,6,7)    1(2,3,5)    10(10,14,18)
Job 2   2(3,5,6)    1(9,10,13)   3(5,8,9)     10(9,12,16) 6(5,6,9)    7(7,11,12)  5(9,13,14)  9(8,12,16)  8(2,4,6)    4(4,7,10)
Job 3   7(9,12,14)  10(10,13,14) 9(5,7,8)     4(3,4,6)    5(4,7,8)    2(3,5,7)    8(3,4,6)    3(1,2,4)    1(5,7,9)    6(9,11,13)
Job 4   5(10,12,16) 7(1,2,4)     2(4,7,10)    3(2,3,5)    10(6,8,10)  6(1,3,5)    4(7,8,11)   1(5,8,10)   8(9,10,14)  9(4,7,8)
Job 5   5(9,12,15)  6(8,11,14)   10(10,14,17) 1(5,7,9)    2(2,4,5)    4(1,3,5)    7(7,8,10)   9(3,4,6)    8(9,11,13)  3(1,2,3)
Job 6   8(4,7,9)    2(10,12,15)  3(3,4,5)     4(10,14,18) 1(5,6,9)    5(10,14,16) 9(10,12,15) 6(8,9,12)   10(5,8,9)   7(4,7,10)
Job 7   5(8,12,13)  10(2,4,6)    8(10,14,18)  1(5,7,9)    6(4,5,8)    9(4,5,7)    7(7,10,11)  2(10,11,12) 3(10,13,15) 4(9,12,13)
Job 8   2(10,12,15) 6(5,6,9)     3(1,2,4)     8(6,9,12)   4(4,6,9)    1(7,11,14)  9(7,11,13)  5(6,9,11)   10(8,11,13) 7(7,9,13)
Job 9   9(2,4,6)    6(2,3,5)     2(2,3,4)     4(4,6,7)    10(6,8,9)   3(8,12,14)  1(4,7,9)    7(8,11,14)  8(1,2,3)    5(3,5,6)
Job 10  7(5,8,9)    8(6,8,9)     3(8,12,16)   1(6,9,12)   9(7,11,13)  5(10,11,14) 4(7,10,11)  2(3,5,7)    10(3,4,6)   6(8,11,15)

Fuzzy     Job 1    Job 2    Job 3    Job 4    Job 5    Job 6    Job 7    Job 8    Job 9   Job 10
due date  169,184  123,134  100,110  102,105  121,136  167,174  120,130  163,176  79,94   160,163
Table 10.14. Problem 2 of 10 × 10 MOFJSP

Processing machines (fuzzy processing time)

Job 1   7(3,4,5)     9(10,12,13) 2(4,7,10)  5(5,8,9)    4(10,12,16) 1(10,11,13) 6(4,7,10)   3(4,5,8)   8(4,6,7)    10(9,12,13)
Job 2   10(10,11,14) 3(9,12,14)  5(7,10,14) 4(7,11,12)  8(2,4,6)    1(8,10,14)  2(7,11,12)  7(8,11,14) 6(6,9,10)   9(10,14,15)
Job 3   5(5,7,10)    7(2,3,5)    4(1,3,4)   3(7,10,12)  1(8,11,12)  8(2,4,5)    6(4,5,7)    2(7,8,10)  9(9,13,14)  10(8,12,15)
Job 4   5(5,8,10)    6(4,5,8)    4(7,10,14) 7(3,5,7)    8(4,5,6)    2(8,10,12)  1(2,3,4)    9(2,3,5)   3(6,8,11)   10(6,8,11)
Job 5   2(4,7,10)    7(5,6,7)    4(9,10,14) 1(2,3,4)    3(9,12,13)  5(5,6,9)    9(5,7,8)    6(1,2,4)   8(3,4,6)    10(2,4,6)
Job 6   7(1,3,4)     3(3,4,5)    8(3,5,6)   2(5,7,8)    4(8,9,13)   9(9,12,14)  10(4,7,8)   1(1,2,4)   6(2,4,5)    5(6,9,12)
Job 7   10(10,11,14) 8(2,3,4)    6(9,10,12) 3(9,10,11)  7(4,5,6)    2(3,5,7)    9(1,3,4)    4(2,4,5)   1(8,10,13)  5(7,10,11)
Job 8   6(7,11,15)   2(9,13,15)  8(5,6,9)   4(8,9,13)   7(6,9,12)   3(6,8,10)   9(6,9,11)   5(1,2,4)   10(8,12,14) 1(6,9,12)
Job 9   7(4,7,10)    2(3,5,6)    8(6,9,10)  6(3,5,6)    9(8,11,12)  4(5,7,10)   10(4,6,9)   1(1,2,4)   3(3,5,7)    5(10,12,15)
Job 10  4(1,2,3)     1(8,12,13)  9(7,8,9)   10(6,9,12)  5(9,11,15)  2(7,11,15)  6(10,14,18) 3(1,3,5)   8(1,2,4)    7(2,3,5)

Fuzzy     Job 1    Job 2    Job 3    Job 4    Job 5   Job 6   Job 7    Job 8    Job 9    Job 10
due date  151,156  154,157  106,117  123,138  85,88   86,94   120,135  149,158  117,124  142,148
Table 10.15. Problem 3 of 10 × 10 MOFJSP

Processing machines (fuzzy processing time)

Job 1   2(7,11,12)  9(5,7,9)   10(4,5,8)   8(7,11,15)  6(8,9,13)   3(7,9,12)   1(6,7,9)    5(2,3,4)    7(4,7,8)    4(2,4,5)
Job 2   6(3,5,6)    1(1,2,3)   5(1,2,4)    8(10,11,14) 2(8,12,14)  4(2,4,5)    7(5,8,10)   10(8,9,11)  3(2,4,5)    9(4,6,8)
Job 3   9(9,11,12)  7(8,9,13)  3(7,9,10)   6(3,5,7)    2(10,13,17) 5(2,4,6)    4(8,9,13)   8(3,4,5)    1(1,2,3)    10(4,5,7)
Job 4   2(8,11,13)  1(2,3,5)   8(4,7,8)    4(4,6,7)    3(5,7,8)    9(6,8,10)   10(2,3,4)   7(8,12,14)  6(5,7,8)    5(4,6,9)
Job 5   3(4,6,7)    9(4,7,9)   7(6,9,12)   10(2,4,5)   4(1,3,5)    6(9,10,12)  5(10,11,12) 1(5,7,9)    2(8,10,13)  8(4,6,9)
Job 6   9(7,8,9)    4(4,5,8)   1(4,5,7)    3(4,6,8)    7(6,7,9)    10(3,5,7)   8(9,11,15)  2(7,10,12)  5(6,9,12)   6(5,7,9)
Job 7   6(5,6,8)    2(5,6,9)   9(4,6,7)    8(8,9,10)   4(9,13,14)  3(4,5,8)    10(7,10,12) 5(3,4,6)    1(9,12,15)  7(5,8,10)
Job 8   6(7,11,14)  4(3,5,6)   3(10,13,14) 10(7,9,11)  8(7,10,13)  1(5,8,10)   9(7,11,13)  5(10,12,16) 7(10,14,16) 2(6,7,10)
Job 9   10(2,3,5)   6(1,3,4)   7(4,5,8)    3(10,14,16) 8(3,4,6)    5(1,3,4)    1(1,2,3)    9(3,5,6)    2(5,8,9)    4(8,11,14)
Job 10  4(6,8,9)    7(5,7,9)   5(5,6,8)    9(3,5,7)    1(2,3,5)    2(1,2,3)    3(8,10,13)  8(9,13,16)  10(4,5,8)   6(2,4,5)

Fuzzy     Job 1    Job 2   Job 3   Job 4    Job 5    Job 6    Job 7    Job 8    Job 9   Job 10
due date  124,128  81,95   92,99   91,103   109,115  102,107  118,128  170,178  75,86   94,107

The proposed algorithms have been coded in C and run on an IBM-compatible PC with a Pentium 133MHz processor under Windows 95. For each of the objective functions, the corresponding membership function is determined on the basis of the best solution obtained through 20 applications of the genetic algorithm and SA. Using the obtained membership functions, 20 trials of both the genetic algorithm and SA are performed for each example to seek the schedule that maximizes the objective function f.
The parameter values of the genetic algorithm are shown in Table 10.16, where N, I_min, I_max, p_c, and p_m denote the population size, the minimum number of generations, the maximum number of generations, the crossover rate, and the mutation rate, respectively.

Table 10.16. Parameter values of GA

Problem   N     I_min   I_max   p_c   p_m
6 × 6     100   50      100     0.9   0.03
10 × 10   200   100     200     0.9   0.03

The parameter values of SA are set as shown in Table 10.17, where T_0, α, S, Epoch, and ε denote the initial temperature, the cooling ratio, the number of search iterations, the epoch number, and the tolerance value, respectively. Here, the initial temperature is set to 100 when solving the problems that minimize the maximum fuzzy completion time using SA.

Table 10.17. Parameter values of SA

Problem   T_0   α     S       Epoch   ε
6 × 6     0.5   0.9   10000   5       0.2
10 × 10   0.5   0.9   40000   5       0.2

It should be emphasized here that these parameter values were found through a number of experiments, and these values are used in all of the trials of GA and SA. All of the trials of GA and SA are performed 20 times for each of the problems. The average times required for computation are shown in Table 10.18 (6 × 6 MOFJSP) and Table 10.19 (10 × 10 MOFJSP), respectively.
The results obtained through the 20 trials of GA and SA for each of the problems are shown in Table 10.20 (6 × 6 MOFJSP) and Table 10.21 (10 × 10 MOFJSP). Number (Best) in these tables denotes the number

Table 10.18. Average computation time (in seconds) for 6 × 6 MOFJSP

GA SA
Problem 1 44.13 53.50
Problem 2 43.97 52.62
Problem 3 40.79 51.29

Table 10.19. Average computation time (in seconds) for 10 x 10 MOFJSP

GA SA
Problem 1 1205.46 1286.77
Problem 2 1158.79 1245.68
Problem 3 1205.51 1186.76

of the best solutions obtained among the 20 trials. Best, Average, and Worst represent the corresponding values of the minimum satisfaction degree.

Table 10.20. Simulation results for 6 × 6 MOFJSP

            Method   Number (Best)   Best    Average   Worst   Variance
Problem 1   GA       18              0.775   0.761     0.628   1.94 × 10^-3
            SA       9               0.775   0.704     0.597   4.51 × 10^-3
Problem 2   GA       19              0.792   0.779     0.542   2.97 × 10^-3
            SA       2               0.792   0.423     0.188   3.51 × 10^-2
Problem 3   GA       20              0.580   0.580     0.580   0.00
            SA       13              0.580   0.510     0.381   9.02 × 10^-3

Table 10.21. Simulation results for 10 × 10 MOFJSP

            Method   Number (Best)   Best    Average   Worst   Variance
Problem 1   GA       1               0.714   0.574     0.439   9.69 × 10^-3
            SA       0               0.627   0.411     0.203   1.43 × 10^-2
Problem 2   GA       8               0.818   0.722     0.545   8.13 × 10^-3
            SA       0               0.688   0.493     0.188   1.71 × 10^-2
Problem 3   GA       1               0.560   0.525     0.475   3.10 × 10^-3
            SA       0               0.395   0.264     0.162   4.05 × 10^-2

For the three problems (Problem 1, Problem 2, and Problem 3) of the 6 × 6 MOFJSPs, the proposed GA reaches the best solution in almost all of the trials. On the contrary, although SA reaches the best solution for all of the problems, the number of best solutions is quite small, and it is extremely low for Problem 2. Furthermore, concerning Problems 1, 2, and 3 of the 10 × 10 MOFJSPs, although the results of GA are worse than those for the 6 × 6 MOFJSPs, GA gives much more stable solutions than SA does. Therefore, through our numerical experiments for MOFJSPs, it may be concluded that the proposed GA gives much more efficient search and stability than SA does.
Chapter 11

SOME APPLICATIONS

It is now appropriate to demonstrate some application aspects of genetic algorithms. As examples of Japanese case studies, we present applications of genetic algorithms to flexible scheduling for a machining center, to the operation planning of district heating and cooling (DHC) plants, and to coal purchase planning in an actual electric power plant.

11.1 Flexible scheduling in a machining center

In this section, as a practical application, we focus on a scheduling problem for a machining center that produces a variety of parts according to a monthly processing plan. In order to take account of flexibility with respect to partial breakdowns of the machining center, urgent orders, and so forth, some parameters that reflect the decision maker's (DM's) judgments on due dates are introduced into the objective functions of the scheduling problems. Realizing that a direct application of conventional simple genetic algorithms to the formulated problems does not always give acceptable results, we introduce a genetic algorithm that is suitable for the formulated scheduling problems. The effectiveness of the proposed algorithm is demonstrated through some numerical simulations.

11.1.1 Introduction
As a real-world application of genetic algorithms, we will consider
a practical scheduling problem of a machining center that produces a
variety of parts with a monthly processing plan. For such a problem,
however, as with several types of scheduling problems including job shop
scheduling, it is generally impossible to obtain an exact optimal solu-
tion because of the well-known combinatorial explosion. Realizing this

difficulty, approximate algorithms such as greedy algorithms, the random method, simulated annealing (SA), genetic algorithms, and so on are usually employed for seeking an approximate optimal solution within a practical period of time. In this section, we formulate a scheduling problem that adequately reflects the practical situation under consideration. Then we introduce a genetic algorithm suitable for solving it. To be more specific, in the generation of the initial population, which consists of individuals represented as matrices, every individual is generated so as to be feasible. As to the crossover and mutation operators, in order to circumvent the problem that many individuals become infeasible after these operations, the operators are modified to alter infeasible solutions into feasible ones. Moreover, some parameters that reflect the DM's judgments on due dates are also introduced into the objective functions of the formulated scheduling problems. Finally, through some numerical simulations, the effectiveness of the proposed genetic algorithm is demonstrated.

11.1.2 Processing system that includes a machining center

Consider a practical scheduling problem of a machining center with a monthly processing plan. A machining center is a machine that produces a variety of parts by exchanging processing tools for other ones.

11.1.2.1 Working Process

Figure 11.1 shows the machining center considered in this section. Works, which are unprocessed parts, are attached to a specific place in the machining center called a block. The number of works attached to a block and the maximal number of blocks available in a day are given for each kind of part. Works attached to blocks are sequentially processed in the machining center. According to the kind of processing (cut, punch, and so on), some tools required for the processing may be changed from the ones used for the previous processing.

11.1.2.2 Processing Plan

In this section, as shown in Table 11.1, we adopt a typical monthly processing plan obtained by modifying actual data. As described in Table 11.1, for parts P_i, i = 1, ..., 10, columns I, II, III, IV, and V represent due dates (days), total number of works processed up to the due date, maximal number of blocks processed in a day, number of works attached to a block, and time to process a work (hours), respectively. For example, as to part P_1, we have to process 50 works in 14 days.

Figure 11.1. A machining center (works attached to blocks are processed using tools from the tool box)

Also, we can attach two works to a block and use two blocks a day, and
it takes 0.650 hours to process a work.
On the basis of Table 11.1, some numerical values required for schedul-
ing are calculated and shown in Table 11.2.

Table 11.1. Given monthly plan of process

part   I    II   III   IV   V
P1     14   50   2     2    0.650
P2     23   50   4     2    1.050
P3     17   24   2     2    0.200
P4     11   10   2     2    0.900
P5     15   26   2     2    0.700
P6     17   40   1     4    0.550
P7     12   4    2     2    0.600
P8     20   40   2     4    0.500
P9     8    12   2     4    0.525
P10    5    12   1     4    0.450

I, due dates; II, total number of works processed up to the due date; III, maximal number of blocks processed in a day; IV, number of works attached to a block; V, time to process a work

As described in Table 11.2, columns I', II', III', IV', and V' represent time to process a block (hours), total processing time (hours), total number of blocks processed up to the due date, evaluated value of priority, and order of priority, respectively.

Table 11.2. Calculated monthly plan of process

part   I'      II'      III'   IV'     V'
P1     1.300   32.500   25     0.893   1
P2     2.100   52.500   25     0.272   6
P3     0.400   4.800    12     0.353   5
P4     1.800   9.000    5      0.227   8
P5     1.400   18.200   13     0.433   4
P6     2.200   22.000   10     0.588   3
P7     1.200   2.400    2      0.083   10
P8     2.000   20.000   10     0.250   7
P9     2.100   6.300    3      0.188   9
P10    1.800   5.400    3      0.600   2

I', time to process a block; II', total processing time; III', total number of blocks processed up to the due date; IV', evaluated value of priority; V', order of priority

To be more explicit, (time to process a block) = (time to process a work) × (number of works attached to a block), (total processing time) = (time to process a work) × (total number of works processed up to the due date), (total number of blocks processed up to the due date) = (total number of works processed up to the due date)/(number of works attached to a block), (evaluated value of priority) = (total number of blocks processed up to the due date)/((maximal number of blocks processed in a day) × (due date)), and (order of priority) denotes the priority order according to the evaluated value of priority.
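As a check, the derived quantities of Table 11.2 can be recomputed from Table 11.1; the following C sketch (an illustration, not the book's program) reproduces the row for part P1.

    #include <stdio.h>

    int main(void)
    {
        /* Data for part P1 from Table 11.1. */
        int    due        = 14;     /* I:   due date (days) */
        int    works      = 50;     /* II:  works up to the due date */
        int    max_blocks = 2;      /* III: maximal blocks in a day */
        int    per_block  = 2;      /* IV:  works attached to a block */
        double t_work     = 0.650;  /* V:   time to process a work (h) */

        double t_block  = t_work * per_block;             /* I':   1.300 */
        double t_total  = t_work * works;                 /* II':  32.500 */
        int    blocks   = works / per_block;              /* III': 25 */
        double priority = (double)blocks / (max_blocks * due); /* IV': 0.893 */

        printf("%.3f %.3f %d %.3f\n", t_block, t_total, blocks, priority);
        return 0;
    }

Running this fragment reproduces the values 1.300, 32.500, 25, and 0.893 of the first row of Table 11.2.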
Table 11.3 shows the numbers of tools required for processing each part; the tool numbers are arranged in the priority order of the tools.
To calculate the total number of tools used in a day, we select a tool of high priority, look at the column corresponding to the tool number, and sum up all elements in the column related to the tools used in the day. For example, if the parts P_1, P_2, and P_5 are processed in a day, the total number of tools used in the day is 49 (24 + 10 + 15).
In addition, by considering the situation in a practical factory, the constraints that the total number of tools used in a day must be less than or equal to 99 and that the total processing time in a day must be less than or equal to 11 hours are imposed on the scheduling problem under consideration.

Table 11.3. The numbers of tools used for processing

       P1   P2   P3   P4   P5   P6   P7   P8   P9   P10
P5     24
P2     10   24
P6     14   12   33
P4     8    8    10   34
P3     11   6    15   4    20
P1     15   10   7    4    6    27
P7     5    6    6    3    13   12   19
P8     14   7    10   7    15   13   6    25
P9     7    9    5    3    6    2    7    8    10
P10    4    6    7    3    6    10   11   4    24   5

11.1.3 Problem formulation

In order to formulate the scheduling problems under consideration, first define the following notation, variables, and constants.

i: index of parts
j: date counted from the beginning day of a schedule
y_ij: processing time of a part P_i on a date j
b_ij: number of blocks used in processing a part P_i on a date j
c_ij: number of tools used in processing P_i on a date j
t_i: total processing time of a part P_i
b_i: maximal number of blocks used in processing a part P_i in a day
s_i: due date of a part P_i
K_i: set of dates j when a part P_i is processed
The constraints are itemized as follows.

The processing time in a day must be less than or equal to 11 hours.


The number of tools used in a day must be less than or equal to 99.

The number of blocks processed in a day must not exceed a given


value.
The processing of parts must be finished by the due dates.

Using our notation, variables, and constants, these constraints are formulated in order as follows.

    Σ_{i=1}^{10} y_ij ≤ 11,  j = 1, ..., 23    (11.1)

    Σ_{i=1}^{10} c_ij ≤ 99,  j = 1, ..., 23    (11.2)

    b_ij ≤ b_i,  i = 1, ..., 10;  j = 1, ..., 23    (11.3)

    Σ_{j=1}^{s_i} y_ij = t_i,  i = 1, ..., 10    (11.4)

Under these four constraints, it may be reasonable to derive the DM's objective function by simultaneous consideration of the following two criteria:

minimization of the total days from the day when the processing of a part P_i is finished to the due date for P_i

minimization of the total processing time

These two criteria can be unified into the minimization of the following single objective function f, because the term (s_i − j)² represents the square of the total days from the completion time to the due date for each part.

    f = Σ_{i=1}^{10} Σ_{j∈K_i} (s_i − j)²    (11.5)

Observe that if the objective function f is minimized, these criteria will


be attained simultaneously.
As a result, the formulated scheduling problem is to minimize the
objective function (11.5) under the constraints (11.1), (11.2), (11.3),
and (11.4).
Depending on the situation, however, the DM may wish to move up the processing of parts or finish it exactly on the due dates. In order to deal with such requirements, it is significant to introduce parameters α_i into the objective function (11.5) that indicate how early the DM wishes to finish the processing of parts before the due dates. To be more specific, the DM sets α_i to zero if he or she wishes to move up the processing of parts, while he or she sets α_i equal to 1 in the opposite case. A value α_i ∈ (0,1) is used in intermediate cases between the two extremes. To reflect such a situation, it is quite reasonable to modify the objective function f as follows:

    f̂ = Σ_{i=1}^{10} Σ_{j∈K_i} (⌊α_i × s_i⌋ − j)²    (11.6)

For convenience in our subsequent discussion, let us convert the variables of time into those of the number of blocks in the constraints and rewrite the whole problem as follows:

minimize
    f = Σ_{i=1}^{10} Σ_{j∈K_i} (s_i − j)²    (11.7)
or
minimize
    f̂ = Σ_{i=1}^{10} Σ_{j∈K_i} (⌊α_i × s_i⌋ − j)²    (11.8)

subject to
    Σ_{i=1}^{10} (b_ij × h_i) ≤ 11,  j = 1, ..., 23    (11.9)

    Σ_{i=1}^{10} c_ij ≤ 99,  j = 1, ..., 23    (11.10)

    b_ij ≤ b_i,  i = 1, ..., 10;  j = 1, ..., 23    (11.11)

    Σ_{j=1}^{s_i} b_ij = r_i,  i = 1, ..., 10,    (11.12)

where

h_i: time to process a block of P_i
r_i: total number of blocks used in processing P_i up to its due date
α_i: parameter related to P_i
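A minimal C sketch of the evaluation of the modified objective function (11.8) is given below as an illustration; the array layout and names are assumptions of our own. With α_i = 1 for all i, the same routine evaluates (11.7).

    #include <math.h>

    #define NPARTS 10
    #define NDAYS  23

    /* A[i][j] is the number of blocks of part P_{i+1} processed on day
       j+1; a day belongs to K_i exactly when A[i][j] > 0. */
    static double objective(const int A[NPARTS][NDAYS],
                            const int s[NPARTS], const double alpha[NPARTS])
    {
        double f = 0.0;
        for (int i = 0; i < NPARTS; i++) {
            int target = (int)floor(alpha[i] * s[i]);  /* floor(alpha_i * s_i) */
            for (int j = 0; j < NDAYS; j++)
                if (A[i][j] > 0)
                    f += (double)(target - (j + 1)) * (target - (j + 1));
        }
        return f;
    }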

11.1.4 Genetic algorithm

Now we are ready to consider an application of genetic algorithms as a method for calculating approximate optimal solutions of the formulated scheduling problem.

11.1.4.1 Coding
Each individual is represented as an m × n matrix A = (a_jk),

where a_jk represents the number of blocks used in processing a part P_j on a date k before the due date.

11.1.4.2 Initialization
In the generation of an initial population, if every element a_jk of A is assigned a random number, then A may become infeasible. To circumvent such a phenomenon, we introduce the following method for generating an initial population that consists of feasible solutions under the constraints (11.9) through (11.12); a rough sketch in C is given after the steps.

Step 1: Set each element ajk of a matrix A to be zero.

Step 2: Rearrange the rows of A so that a part with less leeway will
have higher priority.

Step 3: For the part with the highest priority among the parts that have not been scheduled yet, choose a processing day up to the due date at random and add unity to the element a_jk as long as the constraints (11.9) through (11.11) are satisfied. If the constraint (11.12) is satisfied, go to the next step.

Step 4: If there are any parts which have not been scheduled yet, return
to the previous step. Otherwise, stop the procedure.
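The rough C sketch announced above is as follows; feasible() is a hypothetical stand-in for the test of the constraints (11.9) through (11.11), and all other names are illustrative.

    #include <stdlib.h>

    #define NPARTS 10
    #define NDAYS  23

    /* Hypothetical column-constraint test; a stub standing in for the
       real check of (11.9)-(11.11). */
    static int feasible(int A[NPARTS][NDAYS]) { (void)A; return 1; }

    /* Steps 1-4: build one feasible individual. order[] lists part
       indices by priority, s[] the due dates, r[] the block totals. */
    static void initial_individual(int A[NPARTS][NDAYS], const int order[],
                                   const int s[], const int r[])
    {
        for (int i = 0; i < NPARTS; i++)          /* step 1: zero matrix */
            for (int j = 0; j < NDAYS; j++)
                A[i][j] = 0;

        for (int p = 0; p < NPARTS; p++) {        /* step 2: by priority */
            int i = order[p], placed = 0;
            while (placed < r[i]) {               /* until (11.12) holds */
                int j = rand() % s[i];            /* step 3: random day */
                A[i][j]++;
                if (feasible(A)) placed++;
                else A[i][j]--;                   /* undo and retry */
            }
        }
    }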

11.1.4.3 Reproduction
Ranking selection is adopted for reproduction; that is, we arrange all individuals in order of the value of the objective function (11.7) or (11.8), used as a fitness function, discard the lowest several percent of them, and reproduce the highest.

11.1.4.4 Crossover
Simple crossover of such matrix-type individuals may generate a number of infeasible individuals. Thus, we modify the crossover operator so that individuals after crossover become feasible.

crossover: First, choose a crossover point between a certain pair of adjacent columns. After the single-point crossover, the constraints on columns, that is, the constraints (11.9) through (11.11), are naturally satisfied, whereas the constraint on rows, that is, the constraint (11.12), may not be. In case Σ_{j=1}^{s_i} a_ij is more (less) than r_i, a random choice of j among the days before the due date and a subtraction (addition) of unity from (to) a_ij under the constraints (11.9), (11.10), and (11.11) are repeated while the inequality holds.

In this operation, if we cannot subtract (add) unity from (to) a_ij under the constraints (11.9), (11.10), and (11.11) in spite of the fact that the constraint (11.12) has not been satisfied yet, we regard the individual as a fatal gene and impose some penalty on it. Figure 11.2 illustrates the crossover for matrix-type individuals, and a C sketch of the row repair follows the figure.

Figure 11.2. Crossover (parents 1 and 2 produce offspring 1 and 2)
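The row repair announced above may be sketched in C as follows; can_add() is a hypothetical stand-in for the test of the column constraints (11.9) through (11.11), and the penalty branch for fatal genes is omitted for brevity.

    #include <stdlib.h>

    #define NDAYS 23

    /* Hypothetical test that adding one block on day j keeps the column
       constraints; a stub standing in for the real check. */
    static int can_add(const int row[], int j) { (void)row; (void)j; return 1; }

    /* Repair one row after single-point column crossover: while the row
       sum differs from r_i, pick a random day before the due date s_i
       and add or subtract one block. */
    static void repair_row(int row[NDAYS], int s_i, int r_i)
    {
        int sum = 0;
        for (int j = 0; j < s_i; j++) sum += row[j];

        while (sum != r_i) {
            int j = rand() % s_i;                      /* random day */
            if (sum > r_i && row[j] > 0)           { row[j]--; sum--; }
            else if (sum < r_i && can_add(row, j)) { row[j]++; sum++; }
        }
    }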

11.1.4.5 Mutation
For ensuring that the individuals after mutation remain feasible, the following mutation operation is introduced.

mutation: Randomly choose a couple of rows in the matrix representing an individual and exchange the two rows. The individual after this exchange necessarily satisfies all the constraints except the constraint (11.12). If the constraint (11.12) is not satisfied, then choose another couple of rows.

The mutation for a matrix type individual is illustrated in Figure 11.3.

11.1.5 Numerical experiments

It is now appropriate to present the whole procedure of the proposed genetic algorithm for the scheduling problem formulated in this section.

Step 1: Establish the number of generations, the number of individ-


uals, the crossover rate, the mutation rate, and the reproduction
percentage of ranking selection.

Figure 11.3. Mutation

Step 2: Input the data in connection with the processing plan for every
part.
Step 3: Arrange the data of parts in priority order.
Step 4: Generate an initial population including as many elements as
the total number of individuals given previously.
Step 5: Calculate the fitness of each individual and carry out ranking
selection.
Step 6: Select as many individuals as the crossover rate x the total
number of individuals, pair them randomly, and carry out the intro-
duced crossover with one-point crossover.
Step 7: Carry out mutation with the mutation rate.
Step 8: If the number of generations reaches one established in step 1,
stop the procedure to show the individual with the highest fitness.
Otherwise, return to step 5.

11.1.5.1 Results through the genetic algorithm

In applications of genetic algorithms, one of the most important tasks is to determine appropriate values of parameters such as the crossover rate, the mutation rate, and so forth, because the appropriate parameter values depend essentially on the scale of the problem and the gene structure. Table 11.4 shows one of the most appropriate sets of parameter values, obtained through a number of simulation experiments.
Table 11.5 shows a result of simulations through the proposed genetic algorithm using the parameter values in Table 11.4.

11.1.5.2 Results through random method


Naturally, the performance of genetic algorithms is expected to be
much better than that of a usual random method. With this observation

Table 11.4. Values of parameters

Number of generations (NG) 400


Number of individuals (NI) 120
Crossover rate (CR) 0.98
Mutation rate (MR) 0.01
Reproduction-selection rate (RSR) 0.20

Table 11.5. Result of simulation through the proposed genetic algorithm

I    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23   II
P1   0 2 2 2 2 2 2 2 2 2 2 2 2 1                                    25
P2   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 4 3 4 3 4 4                  25
P3   0 0 0 0 0 0 0 0 0 0 2 2 0 2 2 2 2                              12
P4   0 0 0 0 0 0 0 2 1 1 1                                          5
P5   0 0 0 0 0 0 0 0 2 2 2 2 2 1 2                                  13
P6   0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1                              10
P7   0 0 0 0 0 0 0 2 0 0 0 0                                        2
P8   0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 2 1 2 1                        10
P9   0 0 0 0 0 2 1 0                                                3
P10  0 0 1 1 1                                                      3
III  0.0 2.6 4.4 4.4 4.4 6.8 4.7 10.8 9.4 9.4 10.2 8.4 7.6 9.7 9.8 9.3 7.0 10.4 10.3 10.4 6.3 8.4 8.4
IV   0 27 37 37 37 29 29 56 61 61 72 64 53 78 63 42 58 31 31 31 24 24 24

I, processing date; II, total number of blocks processed up to the due date; III, total processing time in the day; IV, total number of tools used in the day; NG = 400, NI = 120, CR = 0.98, MR = 0.01, RSR = 0.20; min f = 1320

in mind, we carried out simulations through a random method that was


modified to generate only feasible solutions. Table 11.6 shows the best
of 50,000 solutions generated by the random method.

11.1.5.3 Comparison of results

From Tables 11.5 and 11.6, it can immediately be recognized that simulations through the proposed genetic algorithm give approximate solutions that are much better in the fitness sense than those through the random method. Moreover, unevenness of fitness across parts is observed in the approximate optimal solutions given by simulations through the random method, whereas it is not observed in those given by simulations through the proposed genetic algorithm.

Table 11.6. Result of simulation through a random method

I    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23   II
P1   2 2 1 1 2 2 2 2 2 2 1 2 2 2                                    25
P2   0 2 0 0 0 0 0 0 0 2 0 2 0 2 0 2 1 3 1 2 2 3 3                  25
P3   0 2 0 1 1 0 2 2 0 1 1 0 0 2 0 0 0                              12
P4   1 0 0 0 0 1 1 0 0 0 2                                          5
P5   1 0 0 2 2 0 0 2 1 0 1 2 0 0 2                                  13
P6   0 1 0 0 1 1 0 0 1 1 1 0 1 1 1 1 0                              10
P7   0 1 0 0 0 0 1 0 0 0 0 0                                        2
P8   0 0 0 1 0 0 0 2 2 0 0 0 0 0 2 0 1 1 0 1                        10
P9   1 0 1 0 0 0 1 0                                                3
P10  1 0 0 1 1                                                      3
III  9.7 11.0 3.4 8.3 9.8 6.6 8.5 10.2 10.2 9.4 8.9 9.6 4.8 9.8 9.0 6.4 4.1 8.3 2.1 6.2 4.2 6.3 6.3
IV   67 64 51 67 47 58 65 46 63 72 59 54 49 50 42 37 49 24 24 25 24 24 24

I, processing date; II, total number of blocks processed up to the due date; III, total processing time in the day; IV, total number of tools used in the day; min f = 4697

11.1.5.4 Results through the genetic algorithm with parameter α_i

Tables 11.7 and 11.8 show the results of simulations through the proposed genetic algorithm with the parameter α_i. As mentioned previously, α_i = 0 implies that the DM wishes to move up the processing of parts, whereas α_i = 1 implies the opposite; α_i = 0.5 is used in the intermediate case. As shown in Tables 11.7 and 11.8, the results well reflect the DM's desire represented by α_i.

11.1.6 Conclusion
In this section, we formulated a practical scheduling problem of a machining center that adequately reflects the practical situation and proposed a genetic algorithm suitable for it, in which a matrix with constraints on both rows and columns was adopted as a gene. In the comparison of simulations through the proposed genetic algorithm and those through the proposed genetic algorithm with parameters α_i, it was found that the results of the latter reflected the DM's desire better than those of the former. This might be because the processing of parts concentrated on the due dates in the former case, whereas it was dispersed in the latter case. Although relatively satisfactory approximate solutions were obtained for the formulated scheduling problem, it must be observed here that the proposed genetic algorithm does not always work well when additional strict constraints are imposed. This is because the solution space is so small that initial populations could not be

Table 11.7. Result of simulation through the proposed genetic algorithm with parameter α_i (1)

I    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23   II   α_i
P1   0 2 2 2 2 1 2 2 2 2 2 2 2 2                                    25   1.0
P2   0 0 0 0 0 0 0 0 0 0 0 0 0 3 0 0 2 2 3 3 4 4 4                  25   1.0
P3   2 2 2 2 2 2 0 0 0 0 0 0 0 0 0 0 0                              12   0.0
P4   0 0 0 0 0 0 0 0 1 2 2                                          5    1.0
P5   0 0 0 0 0 0 0 0 2 1 2 2 2 2 2                                  13   1.0
P6   1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0                              10   0.0
P7   0 0 0 0 0 0 0 0 0 0 1 1                                        2    1.0
P8   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 2 2 2                        10   1.0
P9   2 1 0 0 0 0 0 0                                                3    0.0
P10  1 1 1 0 0                                                      3    0.0
III  9.0 9.5 7.4 5.6 5.6 4.3 4.8 4.8 9.4 9.8 10.2 6.6 5.4 5.4 9.1 4.0 8.2 8.2 10.3 10.3 8.4 8.4 8.4
IV   60 67 62 55 55 55 40 40 61 61 52 44 39 39 34 25 31 31 31 31 24 24 24

I, processing date; II, total number of blocks processed up to the due date; III, total processing time in the day; IV, total number of tools used in the day; NG = 400, NI = 120, CR = 0.98, MR = 0.01, RSR = 0.20; min f = 1278

Table 11.8. Result of simulation through the proposed genetic algorithm with parameter α_i (2)

I    1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23   II   α_i
P1   0 2 2 2 1 2 2 2 2 2 2 2 2 2                                    25   1.0
P2   0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 4 3 3 3 4 4 4                  25   1.0
P3   0 0 0 0 0 2 2 2 2 2 0 2 0 0 0 0 0                              12   0.5
P4   0 0 0 0 0 0 0 2 1 1 1                                          5    1.0
P5   0 0 0 0 0 0 0 0 2 2 2 2 1 2 2                                  13   1.0
P6   0 0 0 1 1 1 1 1 1 1 1 1 1 0 0 0 0                              10   0.5
P7   0 0 0 0 0 0 0 0 0 0 1 1                                        2    1.0
P8   0 0 0 0 0 0 0 0 0 0 0 0 0 0 2 2 0 2 2 2                        10   1.0
P9   0 0 0 1 2 0 0 0                                                3    0.5
P10  0 1 1 1 0                                                      3    0.5
III  0.0 4.4 4.4 8.7 7.7 5.6 5.6 9.2 10.2 10.2 10.6 9.6 6.2 5.4 6.8 4.0 8.4 10.3 10.3 10.3 8.4 8.4 8.4
IV   0 37 37 52 45 55 55 65 72 72 66 69 53 39 38 25 24 31 31 31 24 24 24

I, processing date; II, total number of blocks processed up to the due date; III, total processing time in the day; IV, total number of tools used in the day; NG = 400, NI = 120, CR = 0.98, MR = 0.01, RSR = 0.20; min f = 1005

generated well in this case. Hence, the solution space must be wide to a certain degree for the proposed genetic algorithm to work well. Unevenness of the approximate optimal solutions was observed in some simulations through the proposed genetic algorithm using the same values of parameters. This would be based on the dependence of the approximate optimal solutions on the initial populations. Concerning this problem, further improvement of reproduction, crossover, and mutation will be required.

11.2 Operation planning of district heating and cooling plants

In recent years, the operation planning of DHC plants has been arousing interest as a result of the development of cooling load prediction methods for DHC systems. In this section, we formulate an operation planning problem of a DHC plant as a nonlinear 0-1 programming problem. Realizing that an efficient exact solution method does not exist and that the problem involves hundreds of variables, we propose an approximate solution method using genetic algorithms for the formulated nonlinear 0-1 programming problem. Furthermore, we investigate the feasibility and effectiveness of the proposed method for an operation planning problem of an actual DHC plant.

11.2.1 Introduction
A DHC system, which aims at saving energy, saving space, inhibiting air pollution, and preventing city disasters, has lately attracted considerable attention as an energy supply system in urban areas. In a DHC system, cold water and steam used at all facilities in a certain district are produced and supplied by a DHC plant, as shown in Figure 11.4.

Figure 11.4. A DHC system (cold water and steam supplied from the DHC plant to the district)

Because there exist a number of large-size freezers and boilers in a DHC plant, as shown in Figure 11.5, if we can predict the amount of cold water and steam, called the cooling load, with high accuracy, it is possible to

determine the optimal operation plan of these instruments in advance. Cooling load prediction methods for such DHC systems have been proposed previously [153, 166-168]. Hence, the operation planning of these instruments on the basis of the predicted cooling load is important for stable and economical management of a DHC system.

Figure 11.5. A DHC plant (freezers, boilers, pumps, and cooling towers driven by gas and electricity)

In recent years, with the improvement of cooling load prediction methods for DHC systems [153, 166-168], the need to formulate the operation planning problem of a DHC plant as a mathematical programming problem and to develop solution methods for it has been increasing [82, 220].
Under these circumstances, after formulating the operation planning of a DHC plant as a nonlinear 0-1 programming problem, we propose an approximate solution method for the formulated problem based on genetic algorithms [60, 66, 112, 165].

11.2.2 Structure of district heating and cooling plants

A DHC plant, as illustrated in Figure 11.5, generates cold water, steam, and electricity by running many instruments using gas and electricity. The relations among the instruments in a DHC plant are depicted in Figure 11.6. From Figure 11.6, it can be seen that the steam required for heating and the cold water required for cooling are generated by running four absorbing freezers (DAR_1, ..., DAR_4, where DAR_1 and DAR_2 are the same type), six turbo freezers (ER_1, ..., ER_6, where ER_1 through ER_4 are the same type), and three boilers (BW_1, ..., BW_3, where BW_1 and BW_2 are the same type) that use gas and electricity in this DHC plant, where pumps and cooling towers are connected to the corresponding freezers.

For the DHC plant, we consider an optimal operation plan to minimize


the cost of gas and electricity under the condition that the demand for
cold water and steam must be satisfied by running instruments.

Figure 11.6. Structure of a DHC plant (absorbing freezers, turbo freezers, pumps CP and CDP, and cooling towers; cold water is sent to the district)

11.2.3 Problem formulation

Given the (predicted) amounts of the demand for cold water C^t_load and for steam S^t_dist at time t, the operation planning problem of the DHC plant can be summarized as follows.

(I) The problem contains 14 decision variables. They are all 0-1 variables: x^t_1, ..., x^t_10, which indicate whether each of the four absorbing freezers and six turbo freezers is running; y^t_1, y^t_2, y^t_3, which indicate whether each of the three boilers is running; and z^t, which indicates whether a certain condition (defined below) holds.

(II) The freezer output load rate P = C^t_load/C^t, which is the ratio of the (predicted) amount of the demand for cold water C^t_load to the total output of the running freezers C^t = Σ_{i=1}^{10} a_i x^t_i, must be less than or equal to 1.0, that is,

    C^t ≥ C^t_load,    (11.13)
where a_i denotes the rating output of the ith freezer. This constraint means that the sum of the outputs of the running freezers must exceed the (predicted) amount of the demand for cold water.

(III) The freezer output load rate P = C^t_load/C^t must be greater than or equal to 0.2, that is,

    0.2 · C^t ≤ C^t_load.    (11.14)

This constraint means that the sum of the outputs of the running freezers must not exceed five times the (predicted) amount of the demand for cold water.

(IV) The boiler output load rate Q = (StAR + S~ist) / Sf, which means
the ratio of the (predicted) amount of the demand for steam to the
total output of running boilers Sf = 2:J==l fyyj, must be less than or
equal to 1.0, that is,

(11.15)

where fj denotes the rating output of the jth boiler and StAR de-
notes the total amount of steam used by absorbing freezers at time
t, defined as
4
StAR = L O(P) . Sr ax . Xi, (11.16)
i== 1

where $S^{\max}_i$ is the maximal steam amount used by the $i$th absorbing freezer. Furthermore, $\Theta(P)$ denotes the rate of use of steam in an absorbing freezer, which is a nonlinear function of the freezer output load rate $P$. For the sake of simplicity in this section, we use the following piecewise linear approximation.
$$ \Theta(P) = \begin{cases} 0.8775\, P + 0.0285, & P \le 0.6 \\ 1.1125\, P - 0.1125, & P > 0.6 \end{cases} \qquad (11.17) $$
This constraint means that the sum of output of running boilers must exceed the (predicted) amount of the demand for steam.

(V) The boiler output load rate $Q = (S^t_{AR} + S^t_{\rm dist})/S^t$ must be greater than or equal to 0.2, that is,
$$ 0.2\, S^t \le S^t_{AR} + S^t_{\rm dist}. \qquad (11.18) $$
This constraint means that the sum of output of running boilers must not exceed five times the (predicted) amount of the demand for steam.

(VI) The minimizing objective function $J(t)$ is the energy cost, which is the sum of the gas bill $G_{\rm cost}\cdot G^t$ and the electricity bill $E_{\rm cost}\cdot E^t$:
$$ J(t) = G_{\rm cost}\cdot G^t + E_{\rm cost}\cdot E^t, \qquad (11.19) $$
where $G_{\rm cost}$ and $E_{\rm cost}$ denote the unit cost of gas and that of electricity, respectively.
The gas amount $G^t$ is defined by the gas amounts $g_j$, $j = 1, 2, 3$, consumed in the rating running of the boilers and the boiler output load rate $Q$:
$$ G^t = \left(\sum_{j=1}^{3} g_j y^t_j\right)\cdot Q \qquad (11.20) $$

On the other hand, $E^t$ is defined as the sum of the electricity amounts consumed by turbo freezers, accompanying cooling towers, and pumps:
$$ E^t = E^t_{ER} + E^t_{CT} + E^t_{CP} + E^t_{CDP} = \sum_{i=5}^{10} \Xi(P)\cdot E^{\max}_i\cdot x^t_i + \sum_{i=1}^{10} c_i x^t_i + \sum_{i=1}^{10} d_i x^t_i + \sum_{i=1}^{10} e_i x^t_i, \qquad (11.21) $$
where $E^{\max}_i$ denotes the maximal electricity amount used by the $i$th turbo freezer, and $c_i$, $d_i$, and $e_i$ are the electricity amounts of the cooling tower and two kinds of pumps.
In the previous equation, $\Xi(P)$ denotes the rate of use of electricity in a turbo freezer, which is a nonlinear function of the freezer output load rate $P$. For the sake of simplicity in this section, we use the following piecewise linear approximation.
$$ \Xi(P) = \begin{cases} 0.6\, P + 0.2, & P \le 0.6 \\ 1.1\, P - 0.1, & P > 0.6 \end{cases} \qquad (11.22) $$

Accordingly, the operation planning problem is formulated as the following nonlinear 0-1 programming problem.

Problem P(t)
$$ \text{minimize } J(x^t, y^t, z^t) = G_{\rm cost}\cdot\left(\sum_{j=1}^{3} g_j y^t_j\right)\cdot Q + E_{\rm cost}\cdot\left\{ z^t\,\Xi_1(P) + (1 - z^t)\,\Xi_2(P) + \sum_{i=1}^{10} c_i x^t_i + \sum_{i=1}^{10} d_i x^t_i + \sum_{i=1}^{10} e_i x^t_i \right\} \qquad (11.23) $$

subject to
$$ -C^t \le -C^t_{\rm load} \qquad (11.24) $$
$$ z^t\cdot(0.2\cdot C^t) + (1 - z^t)\cdot(0.6\cdot C^t) \le C^t_{\rm load} \qquad (11.25) $$
$$ -z^t\cdot(0.6\cdot C^t) \le -C^t_{\rm load} \qquad (11.26) $$
$$ z^t\cdot\Theta_1(P) + (1 - z^t)\cdot\Theta_2(P) - S^t \le -S^t_{\rm dist} \qquad (11.27) $$
$$ 0.2\cdot S^t - z^t\cdot\Theta_1(P) - (1 - z^t)\cdot\Theta_2(P) \le S^t_{\rm dist} \qquad (11.28) $$
$$ x^t_i \in \{0, 1\}, \quad i = 1, \dots, 10 \qquad (11.29) $$
$$ y^t_j \in \{0, 1\}, \quad j = 1, \dots, 3 \qquad (11.30) $$
$$ z^t \in \{0, 1\}, \qquad (11.31) $$
where
$$ C^t = \sum_{i=1}^{10} a_i x^t_i \qquad (11.32) $$
$$ S^t = \sum_{j=1}^{3} f_j y^t_j \qquad (11.33) $$
$$ P = \frac{C^t_{\rm load}}{C^t} \qquad (11.34) $$
$$ \Theta_1(P) = \sum_{i=1}^{4} (0.8775\, P + 0.0285)\cdot S^{\max}_i\cdot x^t_i \qquad (11.35) $$
$$ \Theta_2(P) = \sum_{i=1}^{4} (1.1125\, P - 0.1125)\cdot S^{\max}_i\cdot x^t_i \qquad (11.36) $$
$$ \Xi_1(P) = \sum_{i=5}^{10} (0.6\, P + 0.2)\cdot E^{\max}_i\cdot x^t_i \qquad (11.37) $$
$$ \Xi_2(P) = \sum_{i=5}^{10} (1.1\, P - 0.1)\cdot E^{\max}_i\cdot x^t_i \qquad (11.38) $$
$$ Q = \left\{ z^t\cdot\Theta_1(P) + (1 - z^t)\cdot\Theta_2(P) + S^t_{\rm dist} \right\} / S^t \qquad (11.39) $$


and $z^t = 1$, $z^t = 0$ mean $P \le 0.6$, $P > 0.6$, respectively. In the following, let $\lambda^t = ((x^t)^T, (y^t)^T, z^t)^T$, and let $\Lambda^t$, $\ell^t_i(\lambda^t)$, and $r^t_i$, $i = 1, \dots, 5$ represent the feasible region of P(t), the left sides, and the right sides of constraints (11.24) through (11.28), respectively.
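To make the structure of P(t) concrete, the following Python sketch evaluates one candidate operation pattern: it computes the load rates $P$ and $Q$, checks the load-rate bounds behind constraints (11.24) through (11.28), and returns the energy cost (11.23). All data arguments (ratings, maximal amounts, auxiliary electricity, unit costs) are illustrative placeholders, not the plant's actual parameters; note that $z^t$ need not be passed in, since it is fully determined by whether $P \le 0.6$.

```python
# A minimal sketch of evaluating problem P(t) for one candidate (x, y);
# all data arguments are hypothetical placeholders.
def evaluate_P(x, y, C_load, S_dist, a, f, g, S_max, E_max, c, d, e,
               G_cost, E_cost):
    C = sum(a[i] * x[i] for i in range(10))        # C^t, eq. (11.32)
    S = sum(f[j] * y[j] for j in range(3))         # S^t, eq. (11.33)
    if C == 0 or S == 0:
        return False, float("inf")                 # nothing running
    P = C_load / C                                 # freezer load rate, (11.34)
    # steam used by the absorbing freezers, piecewise Theta(P) of (11.17);
    # the branch on P <= 0.6 plays the role of the 0-1 variable z^t
    theta = 0.8775 * P + 0.0285 if P <= 0.6 else 1.1125 * P - 0.1125
    S_AR = sum(theta * S_max[i] * x[i] for i in range(4))   # eq. (11.16)
    Q = (S_AR + S_dist) / S                        # boiler load rate, (11.39)
    feasible = 0.2 <= P <= 1.0 and 0.2 <= Q <= 1.0 # constraints (II)-(V)
    # electricity: turbo freezers via Xi(P) of (11.22), towers, and pumps
    xi = 0.6 * P + 0.2 if P <= 0.6 else 1.1 * P - 0.1
    E = (sum(xi * E_max[i] * x[i] for i in range(5, 10))
         + sum((c[i] + d[i] + e[i]) * x[i] for i in range(10)))
    G = sum(g[j] * y[j] for j in range(3)) * Q     # gas amount, eq. (11.20)
    return feasible, G_cost * G + E_cost * E       # cost J(t), eq. (11.19)
```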

The problem P(t) can be solved exactly by complete enumeration because it includes only 14 0-1 variables. However, a multiperiod operation plan made by pasting together K solutions to P(t), P(t+1), ..., P(t+K-1) solved independently at each time t often becomes such an unnatural operation that the switching of instruments occurs frequently. Since the starting and stopping of instruments need more electricity and manpower than continuous running does, the additional cost for these operations should be taken into account in optimizing a multiperiod operation planning problem.
Thus, in this section, we formulate an extended operation planning
problem in consideration of the starting and stopping of instruments. To
be more explicit, we deal with the following problem P'(t, K) gathering
K periods.

Extended problem P'(t, K)
$$ \text{minimize } J'(\lambda(t, K)) = \sum_{\tau=t}^{t+K-1}\left[ J(\lambda^\tau) + \sum_{j} w_j\,\big|\lambda^\tau_j - \lambda^{\tau-1}_j\big| \right], \quad \text{subject to } \lambda(t, K) \in \Lambda(t, K), \qquad (11.40) $$
where $\lambda(t, K) = ((\lambda^t)^T, \dots, (\lambda^{t+K-1})^T)^T$, $\Lambda(t, K) = \Lambda^t\times\cdots\times\Lambda^{t+K-1}$, and $w_j$ is the cost of switching the $j$th instrument.
P'(t, K) is a large-scale nonlinear 0-1 programming problem that involves K times as many variables as P(t) does. Consequently, because there is no general method for nonlinear integer programming problems, such as the branch and bound (BAB) method for linear ones, and enumeration methods seem impractical, we propose an approximate solution method through genetic algorithms.

11.2.4 Genetic algorithms for nonlinear 0-1 programming
As discussed in Chapter 3, for multidimensional 0-1 knapsack problems, Sakawa et al. [138, 144, 147, 148] proposed genetic algorithms with double strings (GADS) using double string representation and a decoding algorithm to decode an individual to a feasible solution, and showed its effectiveness [160, 161, 162, 163]. Furthermore, they [155] proposed GADS based on reference solution updating for general 0-1 programming problems involving positive and negative coefficients [137].
In this section, a new genetic algorithm using 0-1 string representation corresponding to $\lambda(t, K)$ is presented on the basis of the decoding

algorithm using a reference solution and the reference solution updating in [137, 155].

11.2.4.1 Coding and decoding
In genetic algorithms for optimization problems, a solution $\lambda(t, K)$ and an individual $S$ are generally called a phenotype and a genotype, respectively. A mapping from a phenotype space to a genotype space is called coding; a mapping from a genotype space to a phenotype space is called decoding. Here, we adopt an individual using only "0" and "1" corresponding to a solution $\lambda(t, K)$ to P'(t, K), as illustrated in Figure 11.7. In an individual $S$, $S^\tau$ is a subindividual corresponding to a solution $\lambda^\tau$ at time $\tau$, and the basic decoding is represented by $\lambda^\tau = S^\tau$, $\tau = t, \dots, t+K-1$, that is, $\lambda(t, K) = S$.

Figure 11.7. Individual

It should be noted that not all phenotypes are feasible when the problem is constrained. From the viewpoint of search efficiency and feasibility, decoding such that all phenotypes are feasible seems desirable.
As discussed previously, Sakawa et al. [148, 155] proposed GADS using double string representation and a decoding algorithm to decode an individual to a feasible solution for multidimensional 0-1 knapsack problems and more general 0-1 programming problems involving positive and negative coefficients. Here, we propose a new decoding algorithm to decode an individual to a feasible solution for the nonlinear 0-1 programming problems P'(t, K), based on the decoding algorithm using a reference solution in Sakawa et al. [155].
In Sakawa et al. [155], a feasible solution used as a template in decoding, called a reference solution, must be obtained in advance. In order to find a feasible solution to the extended problem P'(t, K), we solve a maximizing problem with the following objective function

$$ G(\lambda(t, K)) = \exp\left[ -\theta \sum_{\tau=t}^{t+K-1}\sum_{i=1}^{5} R\!\left( \frac{\ell^\tau_i(\lambda^\tau) - r^\tau_i}{|r^\tau_i|} \right) \right], \qquad (11.41) $$
where
$$ R(\xi) = \begin{cases} \xi, & \xi \ge 0 \\ 0, & \xi < 0 \end{cases} \qquad (11.42) $$

and $\theta$ is a positive constant. To be more explicit, we find a feasible solution $\lambda^0(t, K)$ to an optimization problem to maximize the following objective function $G(\lambda(t, K))$
$$ \text{maximize } G(\lambda(t, K)), \quad \lambda(t, K) \in \Lambda(t, K) \qquad (11.43) $$
by a genetic algorithm that uses a fitness function $f(S) = G(\lambda(t, K))$ and the basic decoding. Then, let the obtained feasible solution be a reference solution in decoding, that is, $\bar\lambda = \lambda^0(t, K)$. If no feasible solution can be obtained after a prescribed number of generations, we judge that no feasible solution to the problem exists.
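The penalized objective (11.41)-(11.42) can be sketched in Python as follows; `lhs` and `rhs` are assumed callables returning $\ell^\tau_i(\lambda^\tau)$ and $r^\tau_i$ for the five constraints, and are not part of the original formulation.

```python
import math

def R(xi):
    """Ramp function of eq. (11.42): positive violations pass, rest is zero."""
    return xi if xi >= 0.0 else 0.0

def G(lams, lhs, rhs, theta=1.0):
    """Penalized objective of eq. (11.41); lams[tau] is lambda^tau."""
    total = 0.0
    for tau, lam in enumerate(lams):
        for i in range(5):
            # normalized violation of constraint i in period tau
            total += R((lhs(tau, i, lam) - rhs(tau, i)) / abs(rhs(tau, i)))
    return math.exp(-theta * total)    # equals 1 exactly when all hold
```

Because $G$ equals 1 on the feasible region and decays exponentially with the total normalized violation, maximizing it drives the search toward feasibility.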
Using the feasible solution $\lambda^0(t, K)$, the decoding algorithm to decode each individual to the corresponding feasible solution to P'(t, K) is summarized as follows. In the algorithm, $s^\tau_j$, $j = 1, \dots, 14$, $\tau = t, \dots, t+K-1$ denotes the $j$th gene of a subindividual $S^\tau$ of an individual $S$, and $\bar\lambda = ((\bar\lambda^t)^T, \dots, (\bar\lambda^{t+K-1})^T)^T$ is a reference solution.

Decoding algorithm using a reference solution


Step 1: Let $\tau := t$.

Step 2: Let $j := 1$, $l := 0$, and $\lambda^\tau := (\mathbf{0}^T, \mathbf{0}^T, s^\tau_{14})^T$.

Step 3: Let $\lambda^\tau_j := s^\tau_j$.

Step 4: If $\ell^\tau_i(\lambda^\tau) \le r^\tau_i$, $i = 1, \dots, 5$, let $l := j$, $j := j + 1$ and go to step 5. Otherwise, let $j := j + 1$ and go to step 5.

Step 5: If $j > 14$, go to step 6. Otherwise, return to step 3.

Step 6: If $l > 0$, let $\tau := \tau + 1$ and go to step 11. Otherwise, go to step 7.

Step 7: Let $j := 1$ and $\lambda^\tau := \bar\lambda^\tau$.

Step 8: Let $\lambda^\tau_j := s^\tau_j$.

Step 9: If $\ell^\tau_i(\lambda^\tau) \le r^\tau_i$, $i = 1, \dots, 5$, let $j := j + 1$ and go to step 10. Otherwise, let $\lambda^\tau_j := \bar\lambda^\tau_j$, $j := j + 1$ and go to step 10.

Step 10: If $j > 14$, let $\tau := \tau + 1$ and go to step 11. Otherwise, return to step 8.

Step 11: Regarding instruments of the same type, revise $\lambda^\tau$ so that instruments will be used in numerical order. Concretely, revise $\lambda^\tau$ as $\lambda^\tau_j := 1$ ($1 \le j \le n_1$), $\lambda^\tau_j := 0$ ($n_1 + 1 \le j \le 2$), $\lambda^\tau_j := 1$ ($5 \le j \le n_2 + 4$), $\lambda^\tau_j := 0$ ($n_2 + 5 \le j \le 8$), $\lambda^\tau_j := 1$ ($11 \le j \le n_3 + 10$), $\lambda^\tau_j := 0$ ($n_3 + 11 \le j \le 12$), where $n_1 = \sum_{j=1}^{2}\lambda^\tau_j$, $n_2 = \sum_{j=5}^{8}\lambda^\tau_j$, $n_3 = \sum_{j=11}^{12}\lambda^\tau_j$. Then, go to step 12.

Step 12: If $\tau > t + K - 1$, let $\lambda := ((\lambda^t)^T, \dots, (\lambda^{t+K-1})^T)^T$ and stop. Otherwise, return to step 2.

If the phenotypes $\lambda$ obtained by the decoding strongly depend on the reference solution, the global search may be impossible without updating the reference solution. Consequently, we introduce the following algorithm for updating the reference solution, where the phenotype that is farthest from the reference solution is substituted for the current reference solution when the average distance between the reference solution and a phenotype in the current population is less than a sufficiently small positive constant $\eta$. In the algorithm, $N$ is the population size, $\lambda(r)$ is the phenotype of the $r$th individual, and $\bar\lambda$ is the reference solution.

The reference solution updating procedure

Step 1: Let $r := 1$, $r_{\max} := 1$, $d_{\max} := 0$ and $d_{\rm sum} := 0$.

Step 2: After calculating
$$ d_r := \sum_{\tau=t}^{t+K-1}\sum_{j=1}^{14} \big|\lambda^\tau_j(r) - \bar\lambda^\tau_j\big|, $$
let $d_{\rm sum} := d_{\rm sum} + d_r$. If $d_r > d_{\max}$ and $J'(\lambda(r)) < J'(\bar\lambda)$, let $d_{\max} := d_r$, $r_{\max} := r$, $r := r + 1$ and go to step 3. Otherwise, let $r := r + 1$ and go to step 3.

Step 3: If $r > N$, go to step 4. Otherwise, return to step 2.

Step 4: If $d_{\rm sum}/(N \cdot n) < \eta$, replace the reference solution $\bar\lambda$ with $\lambda(r_{\max})$ and stop. Otherwise, stop without updating the reference solution.
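In Python, the procedure can be sketched as follows; `lams` holds the phenotypes of the current population and `J` is the cost function $J'$, both assumed to be supplied by the surrounding GA.

```python
def update_reference(lams, ref, J, eta):
    """Reference solution updating; lams[r] and ref are flat 0-1 lists."""
    N, n = len(lams), len(ref)                      # population size, genes
    d_sum, d_max, r_max = 0, 0, None
    for r, lam in enumerate(lams):
        d_r = sum(abs(a - b) for a, b in zip(lam, ref))  # distance to ref
        d_sum += d_r
        # farthest phenotype seen so far that also improves on the reference
        if d_r > d_max and J(lam) < J(ref):
            d_max, r_max = d_r, r
    if d_sum / (N * n) < eta and r_max is not None:
        return list(lams[r_max])                    # adopt the new reference
    return ref                                      # keep the current one
```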

It should be noted that most individuals may be decoded in the neighborhood of the reference solution when constraints are tight. To avoid such a situation, every P generations, we replace the reference solution with a feasible solution obtained by solving the maximizing problem (11.43) through a genetic algorithm that uses a fitness function $f(S) = G(\lambda(t, K))$ and the basic decoding.

11.2.4.2 Evaluation
It is quite natural to define the fitness of an individual $S$ by
$$ f(S) = \beta\cdot\hat J + (1 - \beta)\cdot(1 - dis)\cdot\hat J, \qquad (11.44) $$
where
$$ \hat J = \frac{\bar J - J'(\lambda)}{\bar J - \underline J}. \qquad (11.45) $$
Here, $\underline J$ and $\bar J$ are the minimal and the maximal objective function values of the operation plan obtained by connecting $K$ solutions to $P(\tau)$, $\tau = t, \dots, t+K-1$ solved by complete enumeration; $J'(\lambda)$ denotes the cost of the operation plan for the phenotype $\lambda$ decoded from $S$; $dis$ is the average Hamming distance between the individual $S$ and its phenotype $\lambda$:
$$ dis = \frac{1}{14K}\sum_{\tau=t}^{t+K-1}\sum_{j=1}^{14} \big|\lambda^\tau_j - s^\tau_j\big| \qquad (11.46) $$

Namely, $\hat J$ indicates the cost performance of the individual normalized by the difference between $\bar J$ and $\underline J$, and $(1 - dis)\cdot\hat J$ is the evaluation value in consideration of $(1 - dis)$, which is the degree of similarity between the individual $S$ and its phenotype $\lambda$. Thereby, the dependence of the fitness $f(S)$ on the degree of similarity increases with decreasing $\beta$. Here, $\beta$ is determined as
$$ \beta = \begin{cases} 0.2, & I < I_{\max}/5 \\ I/I_{\max}, & I \ge I_{\max}/5, \end{cases} \qquad (11.47) $$
where $I$ and $I_{\max}$ denote the generation counter and the maximal search generation.
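The fitness computation can be sketched compactly in Python. Note that the second branch of (11.47) is read here as $\beta = I/I_{\max}$, which makes $\beta$ continuous at $I = I_{\max}/5$; this is an assumption, since the printed formula is damaged.

```python
def fitness(S, lam, J_lam, J_min, J_max, I, I_max, K):
    """Fitness (11.44) of an individual S with phenotype lam and cost J_lam.

    S and lam are flat 0-1 lists of length 14*K; J_min and J_max are the
    minimal and maximal costs of plans pasted from single-period optima.
    """
    J_hat = (J_max - J_lam) / (J_max - J_min)                 # eq. (11.45)
    dis = sum(abs(l - s) for l, s in zip(lam, S)) / (14 * K)  # eq. (11.46)
    beta = 0.2 if I < I_max / 5 else I / I_max                # eq. (11.47)
    return beta * J_hat + (1 - beta) * (1 - dis) * J_hat      # eq. (11.44)
```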

11.2.4.3 Scaling
When the variance of fitness in a population is small, the ordinary roulette wheel selection often does not work well because there is little difference between the probability of a good individual surviving and that of a bad one surviving. In order to overcome this problem, the following linear scaling technique, discussed in Chapter 2, has been adopted.

Linear scaling
Step 0: Calculate the mean fitness $f_{\rm mean}$, the maximal fitness $f_{\max}$, and the minimal fitness $f_{\min}$ in the current population.

Step 1: If
$$ f_{\min} > \frac{c_{\rm mult}\cdot f_{\rm mean} - f_{\max}}{c_{\rm mult} - 1.0}, $$
let
$$ \alpha := \frac{(c_{\rm mult} - 1.0)\cdot f_{\rm mean}}{f_{\max} - f_{\rm mean}}, \qquad \beta := \frac{f_{\rm mean}\cdot(f_{\max} - c_{\rm mult}\cdot f_{\rm mean})}{f_{\max} - f_{\rm mean}} $$
and go to step 2. Otherwise, let
$$ \alpha := \frac{f_{\rm mean}}{f_{\rm mean} - f_{\min}}, \qquad \beta := -\frac{f_{\min}\cdot f_{\rm mean}}{f_{\rm mean} - f_{\min}} $$
and go to step 2.
Step 2: Let $r := 1$.
Step 3: For the fitness $f_r = f(S(r))$ of the $r$th individual $S(r)$, carry out the linear scaling $f'_r := \alpha\cdot f_r + \beta$ and let $r := r + 1$.
Step 4: If $r > N$, stop. Otherwise, return to step 3.
In the procedure, $c_{\rm mult}$ denotes the expectation of the number of the best individuals that will survive in the next generation, usually set as $1.2 \le c_{\rm mult} \le 2.0$.
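A direct Python transcription of the procedure is given below; the guard for a uniform population, which the printed steps leave implicit, is made explicit.

```python
def linear_scaling(fs, c_mult=1.8):
    """Scale raw fitness values fs so the best expects about c_mult copies."""
    N = len(fs)
    f_mean, f_max, f_min = sum(fs) / N, max(fs), min(fs)
    if f_max == f_min:
        return list(fs)               # uniform population: nothing to scale
    if f_min > (c_mult * f_mean - f_max) / (c_mult - 1.0):
        alpha = (c_mult - 1.0) * f_mean / (f_max - f_mean)
        beta = f_mean * (f_max - c_mult * f_mean) / (f_max - f_mean)
    else:                             # full stretch would go negative
        alpha = f_mean / (f_mean - f_min)
        beta = -f_min * f_mean / (f_mean - f_min)
    return [alpha * f + beta for f in fs]
```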

11.2.4.4 Reproduction
As discussed in Chapter 3, Sakawa et al. [148] suggested that elitist expected value selection is more effective than the other five reproduction operators (ranking selection, elitist ranking selection, expected value selection, roulette wheel selection, and elitist roulette wheel selection). Accordingly, we adopt elitist expected value selection, which is a combination of elitism and expected value selection as mentioned below.
Elitism: One or more individuals with the largest fitness up to the current population are unconditionally preserved in the next generation.
Expected value selection: Let $N$ denote the number of individuals in the population. The expected value of the number of the $r$th individual $S(r)$ in the next population is calculated as
$$ N_r = \frac{f(S(r))}{\sum_{i=1}^{N} f(S(i))} \times N. \qquad (11.48) $$
In expected value selection, the integral part of $N_r$ ($= \lfloor N_r \rfloor$) denotes the definite number of the $r$th individual $S(r)$ preserved in the next population. Using the fractional part of $N_r$ ($= N_r - \lfloor N_r \rfloor$), the probability to preserve $S(r)$ in the next population is determined by
$$ \frac{N_r - \lfloor N_r \rfloor}{\sum_{i=1}^{N}\left(N_i - \lfloor N_i \rfloor\right)}. \qquad (11.49) $$
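The combined operator can be sketched as follows; the elite count and the handling of the degenerate all-integral case are illustrative choices, not prescribed by the text.

```python
import random

def elitist_expected_value_selection(pop, fs, n_elite=1):
    """Elitist expected value selection based on eqs. (11.48)-(11.49)."""
    N = len(pop)
    best = sorted(range(N), key=lambda r: fs[r], reverse=True)[:n_elite]
    nxt = [pop[r] for r in best]                   # elitism
    total = sum(fs)
    Ns = [f / total * N for f in fs]               # expected copies, (11.48)
    for r in range(N):
        nxt.extend([pop[r]] * int(Ns[r]))          # definite (integral) copies
    fracs = [x - int(x) for x in Ns]               # fractional parts, (11.49)
    if sum(fracs) == 0.0:
        fracs = [1.0] * N                          # all integral: uniform fill
    while len(nxt) < N:
        nxt.append(pop[random.choices(range(N), weights=fracs)[0]])
    return nxt[:N]
```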

11.2.4.5 Crossover
In order to preserve the feasibility of offspring generated by crossover, we use the one-point crossover, in which the crossover point is chosen from among the $K - 1$ end points of the $K$ subindividuals $S^\tau$, $\tau = t, t+1, \dots, t+K-1$, as shown in Figure 11.8.

One-point crossover

Step 0: Let $r := 1$.

Step 1: Generate a random number $R \in [0, 1)$.

Step 2: If $p_c \ge R$ holds for the given crossover rate $p_c$, let the $r$th individual and another one chosen randomly from the population be parent individuals. Otherwise, let $r := r + 1$ and go to step 5.

Step 3: After determining the crossover point $k \in \{1, \dots, K-1\}$ by a random number, generate offspring by exchanging the subindividuals $S^\tau$, $\tau = t+k, \dots, t+K-1$ of one parent individual for those of the other.

Step 4: Get rid of these parent individuals from the population and include these offspring in the population.

Step 5: If $r > N$, stop. Otherwise, return to step 1.
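Because the cut falls only on subindividual boundaries, each period's 14-gene block survives intact in the offspring. A minimal sketch, with individuals represented as lists of per-period gene lists:

```python
import random

def one_point_crossover(p1, p2, K, p_c):
    """Cross two individuals, each a list of K subindividuals (14 genes)."""
    if random.random() > p_c:
        return p1, p2                        # no crossover for this pair
    k = random.randint(1, K - 1)             # boundary between periods
    return p1[:k] + p2[k:], p2[:k] + p1[k:]  # swap the tails
```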

11.2.4.6 Mutation
As the mutation operator, we use the mutation of bit reverse type and inversion.

Mutation of bit reverse type

Step 1: For each gene that takes a 0-1 value in an individual, generate a random number $R \in [0, 1]$.

Step 2: If $p_m \ge R$ holds for the given mutation rate $p_m$, reverse the value of the gene, i.e., $0 \to 1$ or $1 \to 0$.

Figure 11.8. An illustration of crossover

Step 3: If the new individual is feasible, replace the original individual with the new one. If not, preserve the original individual in the population.

Step 4: Carry out the procedure from step 1 to step 3 for all individuals in the population.

Figure 11.9 illustrates an example of mutation.

Figure 11.9. An example of mutation
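A minimal sketch of the operator, assuming the feasibility check of step 3 is supplied by the caller:

```python
import random

def bit_reverse_mutation(ind, p_m, is_feasible):
    """Mutation of bit reverse type; is_feasible is an assumed predicate."""
    trial = [1 - g if random.random() <= p_m else g for g in ind]
    return trial if is_feasible(trial) else ind   # step 3: keep the original
```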

11.2.4.7 Genetic algorithms for nonlinear 0-1 programming


We can now present genetic algorithms for nonlinear 0-1 programming that are suitable for operation planning of a DHC plant.

Genetic algorithms for nonlinear 0-1 programming

Step 0: Determine the values of the parameters used in the genetic algorithm — the population size $N$, the generation gap $G$, the probability of crossover $p_c$, the probability of mutation $p_m$, the probability of inversion $p_i$, the minimal search generation $I_{\min}$, the maximal search generation $I_{\max}$ ($> I_{\min}$), the scaling constant $c_{\rm mult}$, the convergence criterion $\varepsilon$, the parameter for reference solution updating $\eta$ — and set the generation counter $t$ at 0.
Step 1: Generate the initial population consisting of N individuals.
Step 2: Decode each individual (genotype) in the current population
and calculate its fitness based on the corresponding solution (pheno-
type).
Step 3: If the termination condition is satisfied, stop. Otherwise, let
t := t + 1 and go to step 4.
Step 4: Carry out linear scaling.
Step 5: Apply reproduction operator using elitist expected value selec-
tion.
Step 6: Apply one-point crossover.
Step 7: Apply mutation of bit reverse type. Return to step 2.

11.2.5 Numerical experiments


We are now ready to present an operation planning problem P(t) involving 14 0-1 variables in a certain actual DHC plant. After formulating three extended problems P'(t, K) (K = 6, 12, 24) for the problem P(t), we apply the proposed solution method through a genetic algorithm explained in the previous section ("Proposed method") and the method to connect K solutions to $P(\tau)$, $\tau = t, \dots, t+K-1$ solved by complete enumeration ("Conventional method") to each of P'(t, 6), P'(t, 12), and P'(t, 24) for comparison. The numerical experiments were carried out on a personal computer (CPU: Intel Pentium II Processor 266MHz, C compiler: Microsoft Visual C++ 6.0).
Table 11.9 shows the experimental results for the case of K = 6, where the extended problem P'(t, K) involves 84 0-1 variables. In Table 11.9, the results obtained by Proposed method for P'(t, 6) with respect to the best, average, and worst objective function values; the number of best solutions (#); the average number of times of switching; and the average processing time of 10 trials are shown in the upper row. The results obtained by Conventional method with respect to the objective function value, the number of times of switching, and the processing time are shown in the lower row. Here, parameter values in the genetic algorithm are set as population size $N = 70$, crossover rate $p_c = 0.8$, mutation rate $p_m = 0.01$, and maximal search generation $I_{\max} = 2500$.
From Table 11.9, it can be seen that all solutions obtained by 10 trials of Proposed method are the same as those obtained by Conventional

Table 11.9. Experimental results for K = 6 (10 trials)

                 J' (yen)              #    Number of switching   Time (s)
  Proposed       80099.02 (best)       10   1.0 (average)         4.6 x 10^1 (average)
                 80099.02 (average)
                 80099.02 (worst)
  Conventional   80099.02              -    1                     9.0 x 10^-2

method. In case K is less than 8 or so, it would be appropriate to consider that a solution obtained by Conventional method is probably an optimal solution to the extended problem P'(t, K). However, concerning the processing time, as expected, Conventional method is much faster than Proposed method. As a result, for the extended problems with K = 6, there is no evidence that would reveal an advantage of Proposed method over Conventional method.
The experimental results for the case of K = 12 are shown in Table 11.10, where the extended problem P'(t, K) involves 168 0-1 variables. Here, parameter values in the genetic algorithm are set as population size $N = 70$, crossover rate $p_c = 0.8$, mutation rate $p_m = 0.005$, and maximal search generation $I_{\max} = 5000$.

Table 11.10. Experimental results for K = 12 (10 trials)

                 J' (yen)               #   Number of switching   Time (s)
  Proposed       532451.96 (best)       4   6.7 (average)         1.5 x 10^2 (average)
                 535030.28 (average)
                 539265.86 (worst)
  Conventional   536804.4               -   8                     2.0 x 10^-1

From Table 11.10, it can be seen that nine solutions obtained by 10 trials of Proposed method are better than the solution obtained by Conventional method with respect to cost. As to processing time, Proposed method is sufficiently practical because it requires only about 150 seconds.
Finally, the experimental results for the case of K = 24 are shown in Table 11.11. In this case, 336 0-1 variables are contained in the extended problem P'(t, K). Here, parameter values in the genetic algorithm are set as population size $N = 70$, crossover rate $p_c = 0.8$, mutation rate $p_m = 0.003$, and maximal search generation $I_{\max} = 10000$.

Table 11.11. Experimental results for K = 24 (10 trials)

                 J' (yen)                #   Number of switching   Time (s)
  Proposed       1528257.95 (best)       1   12.2 (average)        6.1 x 10^2 (average)
                 1542574.70 (average)
                 1566468.39 (worst)
  Conventional   1573317.37              -   22                    5.9 x 10^-1

From Table 11.11, it can be seen that all solutions obtained by 10 trials of Proposed method are better than the solution obtained by Conventional method with respect to cost. As to processing time, Proposed method is effective and efficient for K = 24 as well as for K = 12, because Proposed method requires only about 600 seconds.
Figure 11.10 illustrates the obtained operation plans based on the solutions by Proposed method and Conventional method. In Figure 11.10, the axis of abscissa denotes time, thick lines indicate the change of cooling load in a day, and shaded bars mean that machines are in operation. From Figure 11.10, we can see that the application of Conventional method, ignoring continuity of operation, to the multiperiod operation planning results in an unnatural operation plan.
Through these experimental results, it is observed that an operation plan obtained by solving P(t) independently every 1 hour is less reasonable, practical, or economical than one obtained by solving the extended problem P'(t, K). Unfortunately, because the extended problem P'(t, K) involves more than 100 0-1 variables if K exceeds 7, the complete enumeration method for P'(t, K) cannot obtain an optimal solution in practical time, and Conventional method cannot obtain a good approximate solution. On the other hand, the application of Proposed method through genetic algorithms is supposed to be more practical and efficient than enumeration-based methods, for it can obtain better approximate solutions than they can in hundreds of seconds.

11.2.6 Conclusion
In this section, for operation planning of DHC plants, single-period operation planning problems P(t) and multiperiod operation planning problems P'(t, K) taking account of the continuity of operation of instruments were formulated as nonlinear 0-1 programming problems. Realizing that the formulated multiperiod operation planning problems P'(t, K) involve hundreds of 0-1 variables, genetic algorithms for nonlinear 0-1 programming were proposed. The feasibility and effectiveness of
(Figure 11.10 shows, for Conventional method and Proposed method, the daily cooling load over time and which machines run at each hour; BW: boiler, DAR: absorbing freezer, ER: turbo freezer.)

Figure 11.10. Comparison of operation plannings for K = 24

the proposed method were demonstrated through an operation planning problem of an actual DHC plant.

11.3 Coal purchase planning in electric power plants
In this section, we treat coal purchase planning in a real electric power plant. Several complex constraints as well as multiple vague goals are involved in the planning problem. The conventional mixed integer programming approach is impractical because of the problem's complexity and its multiple vague goals. We apply a fuzzy satisficing method to deal with the vagueness of the goals. Using a desirable property of the problem, we show that the complex problem can be solved by a couple of rules and genetic algorithms. The validity of the proposed approach is verified by numerical simulations.

11.3.1 Introduction
In real-world programming problems, we may be faced with the difficulty of modeling them as traditional mathematical programming problems. The difficulty comes from the following two facts: (1) The problem is given by verbal descriptions and sometimes includes unclearly described objective(s) and/or constraints. (2) The problem is too complex to be modeled as a mathematical programming problem.
The problem we treat in this section involves both of these intractable difficulties. The problem is a coal purchase planning problem in a real electric power plant. Through a series of interviews with the domain experts, the coal purchase problem has been clarified. The purchase order of the

coal (fuel) is basically made according to an annual sales contract. Since a good annual purchase planning can lead to an efficient fuel inventory control, the coal purchase planning is an important task in the power plant. The problem involves several complex conditions and criteria. Because of the complexity, the formulation of the problem as a mixed integer programming problem is not easy. Even if we can formulate the problem, the formulated problem will be very large scale and difficult to solve. Thus, the conventional mixed integer programming approach seems to be unsuitable for the problem.
A conceivable approach to such a problem is by establishing a partic-
ular solution method using the characteristics of the problem. In this
section, we demonstrate how we can solve the coal purchase planning
problem by a genetic algorithm [66, 75, 112, 165] together with a fuzzy
satisficing method [79, 135] using a desirable property of the problem.
In order to treat the vague goals, we introduce a fuzzy satisficing method
so that the vague goals are represented as fuzzy sets (fuzzy goals). On
the other hand, to tackle the complexity of the problem, we analyzed the
problem and found a desirable property so that, given a coal purchase
sequence, the optimal (or nearly optimal) receipt date of each coal in
the sequence can be determined by two simple rules. Using this desir-
able property, we decompose the problem into upper- and lower-level
problems. For the upper-level problem, we apply a genetic algorithm to
explore the optimal coal purchase sequence. For the lower-level problem,
the optimal receipt date is obtained by applying the two simple rules.
In order to check the validity of the proposed solution method, the
solutions obtained by the proposed method are compared with the com-
plete optimal solutions to the small-sized problems. Moreover, the use-
fulness of the proposed genetic algorithm exploration is examined by
numerical simulations in the real-world problems. Finally, some sugges-
tions to establish fuzzy goals are given toward the real applications.

11.3.2 Problem statement
The electric power plant of our concern is located on the coast of the Sea of Japan and is newly built. In that electric power plant, the electricity is generated by coal. The coal is imported from several countries. The coal from the docked ship at the pier is directly stored in 16 silos. A lack of stored coal obstructs the electricity generation plan. On the other hand, excessive stocks of coal hinder the reception of forthcoming coal as planned because of the stock limitation. Maintaining an appropriate stock of coal is an important task in the plant. Because the purchase order of coal is based on an annual sales contract, in principle, a major

revision is not accepted. Thus, the annual purchase planning plays an important role for the efficient fuel inventory control.
Traditionally, the annual purchase plans were done by the experts in
other plants. However, in the new electric power plant of our concern,
the stock limitation is more restrictive than that of the old plants and
the coals are imported from farther countries. Thus, the fuel inventory
control in the new plant is very difficult because the problem includes
strong restrictions and more uncertain parameters. Planning only by
human experts is regarded as dangerous. In such a case, it will be useful
to solve the problem by a suitable optimization technique. By such an
approach, we can obtain various annual plans under different problem
settings automatically. Comparing the obtained several solutions, we can
select the most suitable solution in the sense of robustness against the
uncertainty of the parameters as well as the quality. In order to build the
automatic planning system, we treat the coal purchase planning problem
in the new electric power plant.
The problem setting is as follows:
$C_1$ The coal is imported from eight countries and conveyed by a ship.

$C_2$ We have 28 kinds of coal with different calorific powers.

$C_3$ Four kinds of coal cannot be used as fuel without being mixed with some others. For such a kind, we assume a one-to-one mixture.

$C_4$ The load displacement of a ship and the lead time depend on the country. However, the load displacement is either 30,000, 60,000, or 80,000 tons.

$C_5$ Only one ship can come alongside the pier in a day. However, when the weather is stormy, no ship can come alongside the pier.

$C_6$ The coal from the docked ship at the pier is directly stored in 16 silos. In each silo, only one kind of coal with the same receipt date can be stored.

$C_7$ The capacity of a silo is 33,000 tons. Thus, the stock limitation of the plant is 33,000 x 16 = 528,000 tons.

$C_8$ The coal stored for more than 60 days should be moved to another empty silo in order to avoid spontaneous combustion.

$C_9$ In winter (November to March), we have many stormy days. On a winter day, the probability of the plant not receiving the coal is high. Thus, there are seasonally different safety stocks for coal: 160,000 tons in the summer season (April to October) and 290,000 tons in the winter season.

$C_{10}$ The annual purchase planning should be suitable for a given annual generating plan.

Because we know the calorific power of each kind of coal, given an annual generating plan, we can estimate the coal consumption by the relation between the calorific power and the generated energy. The flow of coal in the plant is illustrated in Figure 11.11.

Figure 11.11. Flow of coal in the electric power plant

As criteria of an annual coal purchase plan, we consider the following four objectives:

$G_1$ minimize the deviations from a given seasonally changed target stock, where the change is not very often;

$G_2$ minimize the deviations from a given target purchase distribution on five groups of coal mining countries;

$G_3$ minimize the number of movements of stored coals to another silo because of $C_8$;

$G_4$ minimize the total cost of coal purchase.

Under those circumstances, we should decide which coals we buy during a year, the receipt date and amount of each of them, and the consumption policy of the stored coals.
As described earlier, the coal purchase plan under consideration includes not only the factors directly related to the coal purchase such as

the cost, annual generation plan, and so on but also the factors indirectly related to the coal purchase such as the coal inventory control, treatment of coal, and so on. Thus, we must consider the transition of coal stocks and the complex conditions such as mixing coals, the prevention of the spontaneous combustion, and so on. Because of those, the problem becomes a very complex and large-scale one. For example, when the daily stock of the 16 silos is regarded as a part of the decision variables, the number of decision variables becomes more than 5000: counting only the daily stock variables, it is already 16 x 365 = 5840. Moreover, we should introduce a number of 0-1 variables to treat the complex conditions. We cannot apply mathematical programming techniques unless the problem is formulated as a mathematical programming problem. As described earlier, it would unfortunately require a formidable effort to formulate the coal purchase problem, and even if the problem were formulated as a mixed integer programming problem, it would consume a great deal of computation time to solve. Thus, the traditional mathematical programming approach is not suitable for the problem.
In this section, using the desirable property of the given problem, we
demonstrate that, without formulating it as a mixed integer program-
ming problem, a good solution can be explored by a genetic algorithm
together with a fuzzy satisficing method in a practical amount of time.

11.3.3 Desirable property of the problem and two-level programming approach
In the coal purchase planning problem, we must decide when, what kind, how much, and in what sequence to receive the coal during a year. Let us start by considering the determination problem of the receipt date of each coal in a given coal purchase sequence. In this subproblem, the objective function values with respect to $G_2$ and $G_4$ are constant, and that with respect to $G_3$ is also constant unless extreme receipt dates are considered. Thus, we consider the first objective function $G_1$ only. Moreover, for the sake of simplicity, we discard the conditions $C_5$, $C_7$, $C_8$, and $C_9$ for a while. We make it a rule to use coals according to a First In First Out (FIFO) order.
Let $m(t)$ and $q(t)$ be the given target stock and the real stock at time $t$, respectively. An example of $m(t)$ and $q(t)$ is depicted in Figure 11.12. A vertical sudden increment of $q(t)$ means the receipt of a coal at that time. Let $p(t)$ and $u(t)$ be the momentary consumption of coal and the amount of received coal at time $t$. The real stock $q(t)$

Figure 11.12. Differences between target stock $m(t)$ and real stock $q(t)$

Figure 11.13. The case when $m(t)$ is constant

satisfies
$$ \frac{dq(t)}{dt} = u(t) - p(t), \qquad (11.50) $$
where $q(0) = q_0$ (initial stock) and $u(t)$ is a sum of amplified impulsive functions; that is,
$$ u(t) = \begin{cases} u_i\,\delta(t - t_i), & \text{if } t = t_i,\ i = 1, 2, \dots, N \\ 0, & \text{otherwise,} \end{cases} \qquad (11.51) $$
where $u_i$ denotes the amount of the $i$th received coal.

$\delta$ is the Dirac distribution and $N$ is the number of receipts. The momentary consumption $p(t)$ is obtained from the amount of electricity production and the kind of coal being used.
In Figure 11.12, the shaded area $Z_1$ defined by $m(t)$ and $q(t)$ shows the difference between the target stock and the real stock. Let $T$ be the

final time. $Z_1$ is represented by
$$ Z_1 = \int_0^T |m(t) - q(t)|\,dt. \qquad (11.52) $$

Because the given purchase sequence specifies the kind and amount of the forthcoming coal successively, our current problem is to determine the receipt dates of coals, in other words, the $t_i$'s in (11.51), so as to minimize the shaded area $Z_1$ for the plan duration (1 year). Because the target stock $m(t)$ changes seasonally, we have three possible cases: (a) $m(t)$ does not change from one reception to the next, (b) $m(t)$ decreases from one reception to the next, and (c) $m(t)$ increases from one reception to the next.

Figure 11.14. The case when $m(t)$ decreases

Figure 11.15. The case when $m(t)$ increases

First, in case (a), let us consider the optimal receipt date of the forthcoming coal. Let $\bar n$ be its amount and $t^*$ its receipt time, such that $m(t^*) - q(t^*) = 0.5\bar n$. If we receive the coal at time $t^* - \delta$ ($\delta > 0$), that is, a little earlier, the area defined by $m(t)$ and $q(t)$ is larger than when we receive it at time $t^*$. The difference is illustrated as $A - B$ in Figure 11.13. On the other hand, if we receive the coal at time $t^* + \varepsilon$ ($\varepsilon > 0$), that is, a little later, the area defined by $m(t)$ and $q(t)$ is also larger than when we receive it at time $t^*$. The difference is illustrated as $D - C$ in Figure 11.13. Hence, the receipt at time $t^*$ is optimal.
Let us consider case (b). In the real-world situation, $m(t)$ does not change very often, so that at most one change can occur between two receipt dates. Looking at Figure 11.14, where $t_1$ is the time when $m(t)$ decreases, it can be shown that the receipt at time $t_2$ ($< t_1$) or $t_3$ ($\ge t_1$) such that $m(t_i) - q(t_i) = 0.5\bar n$, $i = 2, 3$ is optimal. Indeed, if we receive the coal before $t_1$, $t_2$ is the optimal receipt time by a similar discussion as in case (a). Similarly, if we receive the coal after $t_1$, $t_3$ is the optimal receipt time.
In case (c), let $t_4$ be the time when $m(t)$ increases. It can be proved in the same way as case (a) that the optimal receipt time is $t_4$ if $m(t_4) - q(t_4) \ge 0.5\bar n$; otherwise, it is $t_5$ ($> t_4$) such that $m(t_5) - q(t_5) = 0.5\bar n$ (Figure 11.15).
To sum up, we have the following two rules:
Rule 1. If $m(t)$ does not decrease, the coal should be received whenever $m(t) - q(t) \ge 0.5\bar n$.
Rule 2. If $m(t)$ decreases at time $t_1$, the coal should be received at either $t_2$ or $t_3$, where $t_2$ is the time such that $m(t_2) - q(t_2) = 0.5\bar n$ and $t_2 < t_1$, and $t_3$ is the time such that $m(t_3) - q(t_3) = 0.5\bar n$ and $t_3 \ge t_1$.
To apply one of those rules, we must know whether $m(t)$ decreases. This can be done by the following four steps, sketched in code after this paragraph: (I) Set $\bar t$ as a tentative receipt time if $m(\bar t) - q(\bar t) \ge 0.5\bar n$. (II) Assuming that the coal is received at $\bar t$, calculate the next time $t$ such that $m(t) - q(t) \ge 0.5\bar n$. (III) Check whether $m(t)$ decreased between $\bar t$ and $t$. (IV) If $m(t)$ decreased, cancel the receipt at $\bar t$ and apply Rule 2. Otherwise, fix the receipt at $\bar t$.
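A minimal day-grid sketch of Rule 1 and step (I) follows; the inputs (daily target stock, daily consumption, initial stock) are illustrative, since the text works in continuous time. Steps (II)-(IV) would re-run the same scan under the assumption that the coal was received at the tentative day and compare $m$ before and after.

```python
def rule1_receipt_day(m, p, q0, n_bar):
    """First day t with m[t] - q(t) >= 0.5 * n_bar (Rule 1, step I).

    m: target stock per day; p: consumption per day; q0: initial stock;
    n_bar: amount of the forthcoming coal. Returns (day, stock) or (None, q).
    """
    q = q0
    for t in range(len(m)):
        if m[t] - q >= 0.5 * n_bar:
            return t, q            # tentative receipt day found
        q -= p[t]                  # coal consumed during day t
    return None, q                 # no receipt needed within the horizon
```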
Now, let us introduce the conditions $C_5$, $C_7$, $C_8$, and $C_9$. First, we introduce the safety stock condition $C_9$. Let $s(t)$ be the safety stock level and $\bar t$ the optimal receipt time of the coal. If we have $q(\bar t) < s(\bar t)$, a safety stock violation, it can be remedied by changing the receipt time $\bar t$ to a time $t_6$ such that $q(t_6) = s(t_6)$ and $t_6 < \bar t$. Similarly, we can introduce the stock limitation condition $C_7$. Namely, if we have $q(\bar t) > \bar q$, this violation can be remedied by changing the receipt time $\bar t$ to a time $t_7$ such that $q(t_7) + \bar n = \bar q$ and $t_7 > \bar t$, where $\bar q$ is the stock limitation and $\bar n$ is the amount of the received coal.

The condition of the prevention of the spontaneous combustion, $C_8$, can be satisfied by the following modification. When there is no empty silo for the 60-day-old coal movement, the last coal receipt is postponed until the day after the movement. This may lead to a violation of the safety stock condition, but we assume that the prevention of the spontaneous combustion has a higher priority than the safety stock. In a usual real-world situation, the occurrence of this violation is quite rare because the safety stock level is low enough.
The violation of the condition of unloading one ship per day, $C_5$, can occur only on the day when the target stock $m(t)$ increases, that is, $t_4$ in Figure 11.15. This may occur when the increment of $m(t)$ is more than 30,000 tons. In case of this violation, we change the simultaneous reception at $t_4$ to a sequential reception so that only one ship is unloading the coal in a day. This modification does not guarantee optimality but suboptimality. We consider that this modification is good enough.
As a result, given a coal purchase sequence, we can obtain the optimal or suboptimal receipt time (date) by applying the two rules and the modifications described earlier. Thus, if an optimal coal purchase sequence is found, an optimal or suboptimal solution to the problem can be obtained. Based on this idea, we apply a genetic algorithm for exploration of an optimal or suboptimal coal purchase sequence.
At the beginning, a FIFO order was adopted. However, considering the peril of the spontaneous combustion, the longer a coal is stored in the same silo, the earlier it should be used. Thus, the order of the use of the coal was changed accordingly.

11.3.4 Exploration of coal purchase sequence
For the exploration of a good coal purchase sequence, we apply a genetic algorithm [66, 75, 112, 165]. In order to apply a genetic algorithm, we should define the representation of the solution (coding), the crossover operation, the mutation operation, the fitness function, and so on. In what follows, those definitions are described.

Table 11.12. Codes for the solution representation

  Code   Kind      Amount (x1000 tons)   Country group
  1      A         80                    Group 1
  ...    ...       ...                   ...
  45     Z (mix)   30                    Group 5
  ...    ...       ...                   ...
  48     ...       30                    Group 5

Coding. A different integer is assigned to each possible pair composed of the kind and amount of the received coal, as shown in Table 11.12. In Table 11.12, "(mix)" shows that the kind of coal cannot be used as fuel without being mixed with some others. Thus, a purchase sequence is represented as a repeated permutation of integers from 1 to 48. Since we do not know the correct length of the permutation, we consider a permutation that is sufficiently long. In our problem, we set its length to 60. When the lower level problem is solved, we can know the correct length and discard the superfluous coals. For example, an individual is illustrated in Figure 11.16.

Figure 11.16. An example of an individual

Crossover operation. Two individuals are chosen from the population. The chosen individuals are mated with the given crossover rate $p_c$. Two-point crossover [66, 75, 112, 165] is adopted.

Selection method. Introducing the elitist model [66, 75, 112, 165], the best two individuals survive unconditionally. The other individuals of the next population are chosen based on a ranking selection model [66, 75, 112, 165]. We assign the probability mass $P_s(1)$ of the first-ranked individual twice as much as the probability mass $P_s(N)$ of the last ($N$th)-ranked individual, where $N$ is the population size. We produce the arithmetic progression from $P_s(1) = 2P_s(N)$ to $P_s(N)$ so that the $j$th-ranked individual has the probability mass $P_s(j) = 2(2N - j - 1)/(3N(N - 1))$; see the sketch after these definitions.
Mutation operation. Every element of the repeated permutation is replaced with a random number in {1, 2, ..., 48} with a mutation rate $p_m$.

Initial population. An individual of the initial population is established by repetitively generating random numbers in {1, 2, ..., 48}.

Fitness function. The deviation from the target stock, $Z_1$, is defined by (11.52). The deviation from the target purchase distribution $\bar d \in R^5$ ($\bar d \ge 0$, $\|\bar d\| = 1$) is defined by $Z_2 = \|\bar d - d\|$, where $d = (d_1, d_2, \dots, d_5)$ is defined by $d_i = a(i)/A$. $A$ is the total amount of purchased coal and $a(i)$ is the amount of purchased coal from

Figure 11.17. Linear membership function

countries in Group $i$. The number of movements of stored coals, $Z_3$, and the total purchase cost, $Z_4$, can easily be obtained through the solution procedure of the lower level problem. In a single objective problem, the fitness function is usually defined by the objective function. However, we have four objective functions. In order to treat the four objective functions, we introduce a fuzzy satisficing method [79, 135] that was proposed to obtain a satisfactory solution for the DM's requirements. In this method, eliciting a vague target value of each objective function as a fuzzy set $G_i$ whose membership grade shows the satisfaction degree, we select the solution $r$ that maximizes
$$ \mu_T(r) = \min_{i=1,\dots,4} \mu_{G_i}(Z_i(r)). \qquad (11.53) $$
$\mu_T$ is used as the fitness function. The $\mu_{G_i}$'s are assumed to be linear membership functions defined by two parameters $z_i^0$ (reservation level) and $z_i^1$ (aspiration level), as shown in Figure 11.17.
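The ranking selection mass and the min-operator fitness (11.53) admit a compact sketch; the helper names below are illustrative, and `mu_linear` implements the linear membership function of Figure 11.17.

```python
def ranking_mass(N):
    """Ps(j) = 2(2N - j - 1)/(3N(N - 1)), j = 1,...,N; the masses sum to 1."""
    return [2.0 * (2 * N - j - 1) / (3.0 * N * (N - 1))
            for j in range(1, N + 1)]

def mu_linear(z, z1, z0):
    """Linear membership: 1 at the aspiration z1, 0 at the reservation z0."""
    return max(0.0, min(1.0, (z0 - z) / (z0 - z1)))

def mu_T(zs, params):
    """Fitness (11.53): worst satisfaction degree over the four fuzzy goals."""
    return min(mu_linear(z, z1, z0) for z, (z1, z0) in zip(zs, params))
```

For the population size N = 100 used later, the masses decrease linearly from $P_s(1) = 4/(3N) \approx 0.0133$ to $P_s(100) = 2/(3N) \approx 0.0067$.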
The processes of our approach to the coal purchase problem can be
illustrated in Figure 11.18.

11.3.5 Numerical experiments in the small-sized problem
11.3.5.1 Small-sized problem
For the purpose of checking the proximity of the proposed genetic algorithm solution to the optimal solution, we consider small-sized problems, because the real-world problem is too large to obtain an optimal solution by complete enumeration. As the small-sized problem, we consider a problem with a 45-day duration, five kinds of coal, and at most eight purchases. Even for this size, we have about 390,000 (precisely, $5^8 = 390,625$) alternatives, including infeasible ones.
We assume the load displacement of the available ship, the calorific
power, the target purchase rate, and the price of each coal shown in

(Each individual in the population is decoded — its receipt times determined by the 2 rules — and its fitness evaluated, before the genetic operators produce the next generation.)

Figure 11.18. Procedures of the proposed approach

Table 11.13. Five kinds of coal in the small-sized problem

  Coal no.   Load displacement (tons)   Cal. power (cal/g)   Purchase rate (%)   Price (US$/ton)
  1          80,000                     6,950                20                  44.6
  2          60,000                     6,800                20                  49.6
  3          80,000                     6,800                30                  43.1
  4          60,000                     6,520                20                  40.6
  5          30,000                     6,700                10                  38.7

Table 11.14. Target and safety stocks in the small-sized problem

  Duration        Target stock (tons)   Safety stock (tons)
  1st - 19th d    200,000               160,000
  20th - 39th d   230,000               160,000
  40th - 45th d   260,000               160,000

Table 11.13 and the target and safety stocks shown in Table 11.14. All kinds of coal can be used as fuel without being mixed with some others. One hundred percent output is assigned to every day in the generating plan. The initial state of the fuel stock is shown in Table 11.15. The membership functions $\mu_{G_i}$'s are established using the parameters $z_1^0 = 2000$, $z_1^1 = 0$, $z_2^0 = 150$, $z_2^1 = 0$, $z_3^0 = 3$, $z_3^1 = 1$, $z_4^0 = 2700$, and $z_4^1 = 1000$. By complete enumeration, we obtain 0.684059 as the optimal fitness function value. There are 360,386 feasible solutions.

Table 11.15. The initial state of the fuel stock in the small-sized problem

  Silo number           1  2   3   4   5   6   7   8   9   10  11  12  13  14  15  16
  Amount (x1000 tons)   0  5   5   33  33  33  33  33  33  33  0   0   0   0   0   0
  Coal number           -  1   1   1   2   2   3   3   4   5   -   -   -   -   -   -
  Oldness (d)           -  30  30  30  11  12  13  14  15  16  -   -   -   -   -   -

Table 11.16. The crossover and the mutation rates

         Case 1   Case 2   Case 3   Case 4   Case 5   Case 6
  p_c    0.4      0.5      0.6      0.4      0.5      0.6
  p_m    0.05     0.05     0.05     0.1      0.1      0.1

11.3.5.2 Applied approaches and simulations
We applied the proposed genetic algorithm to this problem, varying the crossover rate $p_c$ and the mutation rate $p_m$ as in Table 11.16. We calculated 100 generations with a population of 100 for each simulation. We did five simulations for each case in Table 11.16.
For the purpose of comparison, we also applied a simulated annealing
(SA) [1, 209] to the small-sized problem. SA is an exploration technique
that looks for a good solution, starting at a random initial solution and
moving from one randomly chosen neighboring solution to another with a
certain acceptance probability. The acceptance probability is controlled
by a cooling schedule so that acceptance probability decreases as itera-
tion number increases. To apply SA, we should define the neighborhood,
the acceptance probability, and the cooling schedule. Those are defined
as follows:

Neighborhood. Generate an integer $i$ between 1 and 8 randomly. The $i$th coal number in the purchase sequence is replaced with a randomly chosen integer between 1 and 5. The purchase sequences obtained by those operations are considered as the neighborhood.
Acceptance probability. Let $\mu_T^0$ be the $\mu_T$ value of the current solution and $\mu_T^1$ that of the generated neighboring solution. The acceptance probability of the movement from the current solution to the neighboring solution is given by
$$ p_t = \min\left( \exp\left( \frac{\mu_T^1 - \mu_T^0}{T} \right),\ 1 \right), \qquad (11.54) $$
where $T$ is the temperature parameter controlled by a given cooling schedule.

Cooling schedules. Given the initial temperature $T^0$, every 100 explorations the temperature parameter is updated as
$$ T := \alpha\cdot T, \qquad (11.55) $$
where $0 < \alpha < 1$. For the simulation, we apply three combinations of $T^0$ and $\alpha$, that is, $T^0 = 10,000$ and $\alpha = 0.9$ for cooling schedule 1, $T^0 = 10,000$ and $\alpha = 0.8$ for cooling schedule 2, and $T^0 = 0.7$ and $\alpha = 0.97$ for cooling schedule 3.
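The move rule (11.54) with the geometric cooling (11.55) can be sketched as follows; improving moves always pass, since the exponential then exceeds 1 and is capped.

```python
import math
import random

def accept_move(mu_cur, mu_new, T):
    """SA acceptance test of eq. (11.54)."""
    return random.random() < min(math.exp((mu_new - mu_cur) / T), 1.0)

def cool(T, alpha):
    """Geometric cooling of eq. (11.55), applied every 100 explorations."""
    return alpha * T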

Using SA, we explored 10,000 solutions. We did five simulations for each cooling schedule.

Table 11.17. The results in the small-sized problem

                   Maximum    Minimum    Average    Variance
  GA   case 1      0.684059   0.684059   0.684059   0.000000
       case 2      0.684059   0.684059   0.684059   0.000000
       case 3      0.684059   0.676877   0.682623   0.000010
       case 4      0.684059   0.676575   0.681474   0.000013
       case 5      0.684059   0.684059   0.684059   0.000000
       case 6      0.684059   0.684059   0.684059   0.000000
  SA   cooling 1   0.654449   0.654449   0.654449   0.000000
       cooling 2   0.664206   0.654449   0.656400   0.000019
       cooling 3   0.664206   0.654449   0.657157   0.000018

11.3.5.3 The results

Figure 11.19. Solution distributions of GA, SA and complete enumeration

The maximum, minimum, average, and variance of the best fitness values obtained by five simulations for the GA and SA are shown in Table 11.17. Figure 11.19 shows the solution distributions of GA, SA, and complete enumeration. The fact that the GA and SA solutions are very good can also be confirmed in Figure 11.19. From Table 11.17 and Figure 11.19, the GA seems to be better than the employed SA. We did similar simulations for different small-sized problems and had similar results. Hence, the proposed GA approach can be regarded as a suitable technique for our coal purchase problem.

11.3.6 Numerical experiments in the real-world problem
11.3.6.1 Real-world problem
In the small-sized problem, we observed that the optimal or a near optimal solution can be obtained by the proposed approach. We now apply the proposed approach to the real-world problem and examine its usefulness. We compare the proposed GA approach with SA and simple random search (RS) approaches.
The circumstances of this problem are as follows. The annual generating plan is given as 82% in July, 100% in August and September, and 70% in other months. We have 48 pairs composed of the kind and amount of received coal. The calorific power of the coal is from around 6000 to around 7000 (cal/g). The price of the coal is from around 39 to 50 (US$/ton). Five pairs are not available for the fuel without mixing them with others. Table 11.18 shows the initial state of the fuel stock. Coal mining countries are divided into five classes. The target annual purchase distribution on the five classes is given as 20% for Groups 1 and 4, 5% for Groups 2 and 5, and 50% for Group 3. Table 11.19 shows the target coal stock. The linear membership functions of the fuzzy goals are determined by the parameters $z_1^0 = 10,000$, $z_1^1 = 600$, $z_2^0 = 40$, $z_2^1 = 0$, $z_3^0 = 3$, $z_3^1 = 1$, $z_4^0 = 13,000$, and $z_4^1 = 9500$.

Table 11.18. The initial state of the silos

  Silo number           1  2   3   4   5   6   7   8   9   10  11  12  13  14  15  16
  Amount (x1000 tons)   0  5   5   33  33  33  33  33  33  33  33  33  0   0   0   0
  Coal number           -  3   4   5   6   7   8   9   10  11  12  13  -   -   -   -
  Oldness (d)           -  50  50  60  11  12  13  14  15  16  17  18  -   -   -   -

11.3.6.2 SA and RS approaches
The adopted SA is the same as in the previous section, but the neighborhood definition is different because of the size difference. The neighborhood is defined as follows:

Table 11.19. The target stock level

  Duration            Target stock (tons)   Duration            Target stock (tons)
  Jan. 1 - Mar. 31    290,000               Oct. 1 - Oct. 15    260,000
  Apr. 1 - Aug. 31    170,000               Oct. 16 - Oct. 31   290,000
  Sep. 1 - Sep. 15    200,000               Nov. 1 - Dec. 31    320,000
  Sep. 16 - Sep. 30   230,000

Neighborhood. Generate an integer $i$ between 1 and 60 randomly. The $i$th coal number in the purchase sequence is replaced with a randomly chosen integer between 1 and 48. The purchase sequences obtained by those operations are considered as the neighborhood.

In the RS, we generate a certain number of solutions (coal purchase sequences) randomly and choose the best.

11.3.6.3 The results
In order to set the GA and SA parameters, we examined all cases in Table 11.16 and cooling schedules 1 to 3. In each parameter setting, we calculated 100 generations with a population of 100 for the GA and 10,000 explorations for SA. We did one simulation for each. Choosing the best settings, we set $p_c = 0.6$, $p_m = 0.05$, $T^0 = 10,000$, and $\alpha = 0.8$. In the GA, we set the population size to 100 for a 200-generation run. In SA and RS, we explored 20,000 solutions. For each approach, we did 20 simulations. The maximum, minimum, average, and variance of the best fitness values obtained by the 20 simulations are shown in Table 11.20. The solution distributions of GA(200), SA(20,000), and RS(20,000) are depicted in Figure 11.20.

Figure 11.20. Solution distributions of GA, SA and RS



For the GA and SA, we saved the best solution of 100 generations and 10,000 explorations, respectively. They are shown in the GA(100) and SA(10,000) rows in Table 11.20. The GA(200), SA(20,000), and RS(20,000) rows in Table 11.20 show the results of the GA of 200 generations, the SA of 20,000 explorations, and the RS of 20,000 explorations, respectively.

Table 11.20. The results of GA, SA and RS

               Maximum    Minimum    Average    Variance
  GA(100)      0.751596   0.636599   0.696591   0.000712
  GA(200)      0.763200   0.674791   0.715217   0.000661
  SA(10,000)   0.772043   0.562536   0.712039   0.003249
  SA(20,000)   0.772043   0.636395   0.732522   0.001060
  RS(20,000)   0.564747   0.462032   0.513046   0.000888

As shown in Table 11.20, even though we explored 20,000 solutions by RS, the best $\mu_T$ is 0.564747. This is smaller than the worst $\mu_T$, that is, 0.636599, of GA(100). RS is inferior to both the GA and SA. The best solution among all has $\mu_T = 0.772043$ and is obtained by SA. The averages of SA(10,000) and SA(20,000) are better than those of GA(100) and GA(200), respectively. On the other hand, the minimum values of SA(10,000) and SA(20,000) are worse than those of GA(100) and GA(200). The variances of SA(10,000) and SA(20,000) are larger than those of GA(100), GA(200), and RS(20,000). This is because of the dependence of the SA solution on the initial solution. Thus, the smaller variance of the GA implies that it is more robust than SA. Because the company would use only one or two random seed settings, SA was judged to be risky and we opted for the GA. The average computation time of each run of GA(200), SA(20,000), and RS(20,000) was 1847.61, 1991.47, and 1735.60 seconds, respectively, on a Sun Sparc Station 10.
We did similar simulations with the same $p_c$, $p_m$, $T^0$, and $\alpha$ in the other settings and we had similar results.

11.3.7 Fuzzy satisficing method
In the previous two subsections, we examined the usefulness of the proposed GA approach. In this subsection, we describe how we can elicit the membership functions for evaluating the goodness of the solution.
In order to establish a linear membership function, the aspiration level $z_i^1$ and the reservation level $z_i^0$ have to be set. Those parameters can be specified by the domain experts who are familiar with the coal purchase or by the management if possible; otherwise, the determination of those parameters would be difficult. In such a case, we can apply a method similar to Zimmermann's method [225, 228] for the parameter determination.
First, adopting each objective function as a fitness function, we minimize it by a GA. Because we have four objective functions — the deviation from the target stock, $Z_1$; the deviation from the target purchase distribution, $Z_2$; the number of coal movements between silos, $Z_3$; and the total cost, $Z_4$ — we have four solutions $r_1^*$, $r_2^*$, $r_3^*$, and $r_4^*$. Using those solutions, we can define
$$ z_i^{\max} = \max_{j=1,2,3,4} z_i(r_j^*), \quad i = 1, 2, 3, 4 \qquad (11.56) $$
$$ z_i^{\min} = \min_{j=1,2,3,4} z_i(r_j^*), \quad i = 1, 2, 3, 4. \qquad (11.57) $$

In Zimmermann's method, we determine z_i^1 and z_i^0 as z_i^1 = z_i^min
and z_i^0 = z_i^max. However, because we are using a GA, a search using
random variables, to obtain r_1^*, r_2^*, r_3^*, and r_4^*, there is no
guarantee of optimality. Considering this, we determine a linear membership
function by the two points (z_i^max, 0.1) and (z_i^min, 0.8). As a result,
z_i^1 and z_i^0 are determined by

    z_i^1 = (9 z_i^min - 2 z_i^max) / 7,    i = 1, 2, 3, 4,        (11.58)

    z_i^0 = (8 z_i^max - z_i^min) / 7,      i = 1, 2, 3, 4.        (11.59)
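As a minimal sketch (in Python; not the authors' implementation), the
determination of the membership parameters by (11.58) and (11.59) and the
resulting linear membership function can be written as follows:

    def membership_parameters(z_min, z_max):
        # z1 (mu = 1) and z0 (mu = 0) of the linear membership function
        # passing through the points (z_max, 0.1) and (z_min, 0.8),
        # i.e., equations (11.58) and (11.59).
        z1 = (9.0 * z_min - 2.0 * z_max) / 7.0   # (11.58)
        z0 = (8.0 * z_max - z_min) / 7.0         # (11.59)
        return z1, z0

    def linear_membership(z, z1, z0):
        # Linear membership function: 1 for z <= z1, 0 for z >= z0,
        # linearly decreasing in between (all objectives are minimized).
        if z <= z1:
            return 1.0
        if z >= z0:
            return 0.0
        return (z0 - z) / (z0 - z1)

One can check that linear_membership(z_max, *membership_parameters(z_min,
z_max)) returns 0.1 and that the value at z_min is 0.8, as required by the
two-point construction above.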

Now let us see how we can reflect the DM's preference. To this end,
we use the real-world problem described in the previous section as an
example.

Table 11.21. The μ_Gi and z_i values under the equally important goals

   i            1               2            3            4
  μ_Gi       0.593652        0.609053        1         0.593652
  z_i        7,542.12        16.8575         1         99,779
          (×1000 tons)         (%)        (time)     (×1000 US$)
Table 11.22. The μ_Gi and z_i values under the minimum cost preference

   i            1               2            3            4
  μ_Gi       0.453331        0.498192        1         0.694574
  z_i        8,142.39        21.6377         1         98,950
          (×1000 tons)         (%)        (time)     (×1000 US$)

Applying the proposed membership function determination technique,
we obtain the membership functions defined by the parameters
z_1^0 = 10,038.8, z_1^1 = 5,855.55, z_2^0 = 43.1195, z_2^1 = 0,
z_3^0 = 3, z_3^1 = 1, z_4^0 = 104,655, and z_4^1 = 96,441.1. Using
those membership functions, we maximize (11.53) by the proposed GA and
obtain the solution of Table 11.21.
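For instance, with the parameters z_4^0 = 104,655 and z_4^1 = 96,441.1
quoted above, the membership value of the total cost in Table 11.21 can be
reproduced (up to rounding of the quoted parameters) with the sketch given
earlier:

    mu_G4 = linear_membership(99779.0, 96441.1, 104655.0)
    # (104655 - 99779) / (104655 - 96441.1) = 0.5936..., consistent with
    # the entry 0.593652 in Table 11.21.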
Assume that the DM feels that the total cost (the fourth objective
function value) of the solution is large and would like to make it smaller.
In order to reflect the DM's preference, the fourth membership function
is multiplied by itself, so that we put importance on the fourth
objective function. Thus, the fitness function μ_T is updated as

    (11.60)

Using this fitness function, we obtain the solution of Table 11.22. From
Tables 11.21 and 11.22, we can observe that the total cost is reduced
from US$99,779,000 to US$98,950,000. However, the other two objective
function values, except the third one (the number of movements), become
worse. Because we are treating a multiobjective problem, we are in a
trade-off situation. If the DM's request is to make the total cost much
smaller, even if it makes the other objective function values worse, then
μ_G4^2 can be replaced with μ_G4^3 in (11.60). In such a way, we can
reflect the DM's preference in the fitness function.
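The exact form of (11.60) is not reproduced here. Assuming, purely for
illustration, that μ_T aggregates the four membership values by the
minimum operator, the emphasis on the total cost could be sketched as
follows (the aggregation operator is an assumption, not the book's
definition):

    def fitness_mu_T(mu_G, cost_emphasis=2):
        # mu_G = [mu_G1, mu_G2, mu_G3, mu_G4]; the fourth membership is
        # raised to cost_emphasis (2 corresponds to multiplying it by
        # itself as in (11.60); 3 emphasizes the total cost further).
        # The minimum-operator aggregation is assumed for illustration.
        return min(mu_G[0], mu_G[1], mu_G[2], mu_G[3] ** cost_emphasis)

Because membership values lie in [0, 1], raising μ_G4 to a power greater
than one decreases it, so a fitness-maximizing search is driven to raise
μ_G4, that is, to reduce the total cost.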

11.3.8 Conclusion
In this section, we showed how we tackled a complex real-world coal
purchase planning problem and how we used the GA and fuzzy program-
ming techniques. In the proposed approach, the coal purchase planning
problem is treated as a two-level problem taking advantage of a desir-
able property of the problem. In the upper-level problem, by applying
a genetic algorithm, a good purchase sequence of the coal is explored.
On the other hand, in the lower-level problem, the reception dates of
the sequentially arriving coal are determined by applying a few rules to
minimize the total deviations from the target stocks. By numerical sim-
ulations of a small-sized problem, we have examined how close the GA
solutions are to the optimum and have confirmed the suitability of the
GA for this problem. Moreover, the proposed GA approach has been
applied to a real-world problem and compared with the RS and SA
approaches. Consequently, it is shown that a good solution can be
obtained by the GA and SA approaches and that the GA approach produces
a good solution more stably than the SA approach in our problem setting.
Using the test problem, a fuzzy programming technique to reflect the
DM's preference is exemplified.
Other similar planning problems are found in the electric power plant
under consideration. The idea of our proposed approach would be useful
for these problems as well.
References

[1] E. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines, John Wiley & Sons, New York, 1989.
[2] N. Abboud, M. Inuiguchi, M. Sakawa, and Y. Uemura, Manpower allocation using genetic annealing, European Journal of Operational Research, Vol. 111, pp. 405-420, 1998.
[3] N. Abboud, M. Sakawa, and M. Inuiguchi, The mutate and spread metaheuristic, Journal of Advanced Computational Intelligence, Vol. 2, No. 2, pp. 43-46, 1998.
[4] N. Abboud, M. Sakawa, and M. Inuiguchi, A fuzzy programming approach to multiobjective multidimensional 0-1 knapsack problems, Fuzzy Sets and Systems, Vol. 86, No. 1, pp. 1-14, 1997.
[5] N. Abboud, M. Sakawa, and M. Inuiguchi, School scheduling using threshold accepting, Cybernetics and Systems: An International Journal, Vol. 29, No. 6, pp. 593-611, 1998.
[6] D. Applegate and W. Cook, A computational study of the job-shop scheduling problem, ORSA Journal on Computing, Vol. 3, pp. 149-156, 1991.
[7] M. Aramaki, K. Enjohji, M. Yoshimura, M. Sakawa, and K. Kato, HTS (High Throughput Screening) system scheduling through genetic algorithms, Proceedings of Fifth International Conference on Knowledge-Based Intelligent Information Engineering Systems & Allied Technologies (KES2001), Osaka (in press), 2001.
[8] S. Ashour and S.R. Hiremath, A branch-and-bound approach to the job-shop scheduling problem, International Journal of Production Research, Vol. 11, pp. 47-58, 1973.
[9] P.G. Backhouse, A.F. Fotheringham, and G. Allan, A comparison of a genetic algorithm with an experimental design technique in the optimisation of a production process, Journal of the Operational Research Society, Vol. 48, pp. 247-254, 1997.
[10] T. Bäck, Selective pressure in evolutionary algorithms: a characterization of selection mechanisms, in Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE Press, Orlando, FL, pp. 57-62, 1994.
[11] T. Bäck, Evolutionary Algorithms in Theory and Practice, Oxford University Press, New York, 1996.
[12] T. Bäck, Proceedings of the Seventh International Conference on Genetic Algorithms, Morgan Kaufmann, San Francisco, CA, 1997.
[13] T. Bäck, D.B. Fogel, and Z. Michalewicz, Handbook of Evolutionary Computation, Oxford University Press, New York, 1997.
[14] T. Bäck, D.B. Fogel, and Z. Michalewicz, Evolutionary Computation 1: Basic Algorithms and Operators, Institute of Physics Publishing, Philadelphia, 2000.
[15] T. Bäck, D.B. Fogel, and Z. Michalewicz, Evolutionary Computation 2: Advanced Algorithms and Operators, Institute of Physics Publishing, Philadelphia, 2000.
[16] S. Bagchi, S. Uckun, Y. Miyabe, and K. Kawamura, Exploring problem-specific recombination operator for job shop, in Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann, San Mateo, CA, pp. 10-17, 1991.
[17] T.P. Bagchi, Multiobjective Scheduling by Genetic Algorithms, Kluwer Academic Publishers, Norwell, MA, 1999.
[18] J.E. Baker, Adaptive selection methods for genetic algorithms, in Proceedings of the First International Conference on Genetic Algorithms, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 101-111, 1985.
[19] K. Baker, Introduction to Sequencing and Scheduling, John Wiley, New York, 1974.
[20] R. Bauer, Genetic Algorithms and Investment Strategies, John Wiley & Sons, New York, 1994.
[21] R.K. Belew and L.B. Booker (eds.), Genetic Algorithms, Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo, CA, 1991.
[22] R.E. Bellman and L.A. Zadeh, Decision making in a fuzzy environment, Management Science, Vol. 17, pp. 141-164, 1970.
[23] M. Berkelaar, lp_solve 2.0, ftp://ftp.es.ele.tue.nl/pub/lp_solve
[24] B. Bhanu and S. Lee, Genetic Learning for Adaptive Image Segmentation, Kluwer Academic Publishers, Norwell, MA, 1994.
[25] C. Bierwirth, A generalized permutation approach to job shop scheduling with genetic algorithms, OR Spektrum, Vol. 17, pp. 87-92, 1995.
[26] J. Blazewicz, W. Domschke, and E. Pesch, The job shop scheduling problem: conventional and new solution techniques, European Journal of Operational Research, Vol. 93, pp. 1-33, 1996.
[27] J. Blazewicz, K.H. Ecker, G. Schmidt, and J. Węglarz, Scheduling in Computer and Manufacturing Systems, Springer-Verlag, Berlin, 1993; 2nd revised edition, 1994.
[28] A. Bortfeldt and H. Gehring, A hybrid genetic algorithm for the container loading problem, European Journal of Operational Research, Vol. 131, pp. 143-161, 2001.
[29] G. Bortolan and R. Degani, A review of some methods for ranking fuzzy subsets, in D. Dubois, H. Prade, and R. Yager (eds.), Readings in Fuzzy Sets for Intelligent Systems, Morgan Kaufmann Publishers, San Francisco, CA, pp. 149-158, 1993.
[30] M.S. Bright and T. Arslan, Synthesis of low-power DSP systems using a genetic algorithm, IEEE Transactions on Evolutionary Computation, Vol. 5, pp. 27-40, 2001.
[31] P. Brucker, Scheduling Algorithms, Springer, Berlin, 1995.
[32] L. Chambers (ed.), Practical Handbook of Genetic Algorithms: Applications, Vol. I, CRC Press, Boca Raton, FL, 1995.
[33] L. Chambers (ed.), Practical Handbook of Genetic Algorithms: New Frontiers, Vol. II, CRC Press, Boca Raton, FL, 1995.
[34] R. Cheng and M. Gen, Fuzzy vehicle routing and scheduling problem using genetic algorithms, in F. Herrera and J. Verdegay (eds.), Genetic Algorithms and Soft Computing, Physica-Verlag, Heidelberg, pp. 683-709, 1996.
[35] G.A. Cleveland and S.F. Smith, Using genetic algorithms to schedule flow shop releases, Proceedings of the Third International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo, CA, pp. 160-169, 1989.
[36] R.W. Conway, W.L. Maxwell, and L.W. Miller, Theory of Scheduling, Addison-Wesley, Reading, MA, 1967.
[37] R.J. Dakin, A tree search algorithm for mixed integer programming problems, Computer Journal, Vol. 8, pp. 250-255, 1965.
[38] G.B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, NJ, 1963.
[39] D. Dasgupta and Z. Michalewicz (eds.), Evolutionary Algorithms in Engineering Applications, Springer, Berlin, 1997.
[40] Y. Davidor, Genetic Algorithms and Robotics, World Scientific, Singapore, 1991.
[41] Y. Davidor, H.-P. Schwefel, and R. Männer (eds.), Parallel Problem Solving from Nature - PPSN III, Springer-Verlag, Berlin, 1994.
[42] L. Davis, Job shop scheduling with genetic algorithms, in Proceedings of the First International Conference on Genetic Algorithms, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 136-140, 1985.
[43] L. Davis (ed.), Genetic Algorithms and Simulated Annealing, Morgan Kaufmann Publishers, San Francisco, CA, 1987.
[44] L. Davis (ed.), Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, 1991.
[45] K. Deb and P. Chakroborty, Time scheduling of transit systems with transfer considerations using genetic algorithms, Evolutionary Computation, Vol. 6, pp. 1-24, 1998.
[46] K.A. De Jong, An Analysis of the Behavior of a Class of Genetic Adaptive Systems, Doctoral dissertation, University of Michigan, Ann Arbor, MI, 1975.
[47] M. Delgado, J. Kacprzyk, J.-L. Verdegay, and M.A. Vila (eds.), Fuzzy Optimization: Recent Advances, Physica-Verlag, Heidelberg, 1994.
[48] F. Della Croce, R. Tadei, and G. Volta, A genetic algorithm for the job shop problem, Computers & Operations Research, Vol. 22, pp. 15-24, 1995.
[49] B. Dengiz, F. Altiparmak, and A.E. Smith, Local search genetic algorithm for optimal design of reliable networks, IEEE Transactions on Evolutionary Computation, Vol. 1, pp. 179-188, 1997.
[50] R. Drechsler, Evolutionary Algorithms for VLSI CAD, Kluwer Academic Publishers, Boston, MA, 1998.
[51] U. Dorndorf and E. Pesch, Evolution based learning in a job shop scheduling environment, Computers & Operations Research, Vol. 22, pp. 25-40, 1995.
[52] D. Dubois and H. Prade, Operations on fuzzy numbers, International Journal of Systems Science, Vol. 9, pp. 613-626, 1978.
[53] D. Dubois and H. Prade, Fuzzy Sets and Systems: Theory and Applications, Academic Press, New York, 1980.
[54] L. Eshelman (ed.), Proceedings of the Sixth International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Francisco, CA, 1995.
[55] E. Falkenauer, Genetic Algorithms and Grouping Problems, John Wiley & Sons, New York, 1998.
[56] H. Fisher and G.L. Thompson, Probabilistic learning combinations of job-shop scheduling rules, in J.F. Muth and G.L. Thompson (eds.), Industrial Scheduling, Prentice-Hall, Englewood Cliffs, NJ, pp. 225-251, 1963.
[57] M. Florian, P. Trepant, and G. McMahon, An implicit enumeration algorithm for the machine sequencing problem, Management Science, Vol. 17, pp. B782-B792, 1971.
[58] S. Forrest (ed.), Proceedings of the Fifth International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo, CA, 1993.
[59] S. French, Sequencing and Scheduling: An Introduction to the Mathematics of the Job-Shop, John Wiley, New York, 1982.
[60] M. Gen and R. Cheng, Genetic Algorithms & Engineering Design, John Wiley & Sons, New York, 1996.
[61] M. Gen and R. Cheng, Genetic Algorithms and Engineering Optimization, John Wiley & Sons, New York, 2000.
[62] C.M. Fonseca and P.J. Fleming, Genetic algorithms for multiobjective optimization: formulation, discussion and generalization, in Proceedings of the Fifth International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo, CA, pp. 416-423, 1993.
[63] C.M. Fonseca and P.J. Fleming, Multiobjective optimization and multiple constraint handling with evolutionary algorithms-Part I: A unified formulation, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 28, pp. 26-37, 1998.
[64] C.M. Fonseca and P.J. Fleming, Multiobjective optimization and multiple constraint handling with evolutionary algorithms-Part II: Application example, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 28, pp. 38-47, 1998.
[65] B. Giffler and G.L. Thompson, Algorithms for solving production scheduling problems, Operations Research, Vol. 8, pp. 487-503, 1960.
[66] D.E. Goldberg, Genetic Algorithms in Search, Optimization, and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[67] D.E. Goldberg and K. Deb, A comparative analysis of selection schemes used in genetic algorithms, in Foundations of Genetic Algorithms, Morgan Kaufmann Publishers, San Francisco, CA, pp. 69-93, 1991.
[68] D.E. Goldberg and R. Lingle, Alleles, loci, and the traveling salesman problem, Proceedings of the First International Conference on Genetic Algorithms and Their Applications, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 154-159, 1985.
[69] J.J. Grefenstette, GENESIS: A system for using genetic search procedures, Proceedings of the 1984 Conference on Intelligent Systems and Machines, pp. 161-165, 1984.
[70] J.J. Grefenstette (ed.), Proceedings of the First International Conference on Genetic Algorithms and Their Applications, Lawrence Erlbaum Associates, Hillsdale, NJ, 1985.
[71] J.J. Grefenstette (ed.), Genetic Algorithms and Their Applications: Proceedings of the Second International Conference on Genetic Algorithms, Lawrence Erlbaum Associates, Hillsdale, NJ, 1987.

[72] J.J. Grefenstette, Genetic Algorithms for Machine Learning, Kluwer Academic
Publishers, Norwell, MA, 1994.
[73] J.J. Grefenstette, R. Gopal, B. Rosmaita, and D. Van Gucht, Genetic algorithms for the traveling salesman problem, in Proceedings of the First International Conference on Genetic Algorithms and Their Applications, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 160-168, 1985.
[74] F. Herrera and J.L. Verdegay (eds.), Genetic Algorithms and Soft Computing, Physica-Verlag, Heidelberg, 1996.
[75] J.H. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, MI, 1975; MIT Press, Cambridge, MA, 1992.
[76] J. Horn, N. Nafpliotis, and D.E. Goldberg, A niched Pareto genetic algorithm for multiobjective optimization, in Proceedings of the First IEEE Conference on Evolutionary Computation, IEEE Press, Orlando, FL, pp. 82-87, 1994.
[77] D. Hertog, Interior Point Approach to Linear, Quadratic and Convex Programming, Kluwer Academic Publishers, Norwell, MA, 1994.
[78] E. Ignall and L. Schrage, Application of the branch and bound technique to some flow-shop scheduling problems, Operations Research, Vol. 13, pp. 400-412, 1965.
[79] M. Inuiguchi, H. Ichihashi, and H. Tanaka, Fuzzy programming: a survey of recent developments, in R. Slowinski and J. Teghem (eds.), Stochastic versus Fuzzy Approaches to Multiobjective Mathematical Programming under Uncertainty, Kluwer Academic Publishers, Norwell, MA, pp. 45-68, 1990.
[80] H. Ishii, M. Sakawa, and S. Iwamoto (eds.), Fuzzy OR, Asakura Publishing, Tokyo, 2001 (in Japanese).
[81] H. Ishii and M. Tada, Single machine scheduling problem with fuzzy precedence relation, European Journal of Operational Research, Vol. 87, pp. 284-288, 1995.
[82] K. Ito and R. Yokoyama, Optimal Planning of Co-Generation Systems, Sangyo Tosho, Tokyo, 1990 (in Japanese).
[83] C. Janikow and Z. Michalewicz, An experimental comparison of binary and floating point representations in genetic algorithms, in Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo, CA, pp. 31-36, 1991.
[84] B. Jansen, Interior Point Techniques in Optimization, Kluwer Academic Publishers, Norwell, MA, 1997.
[85] Q. Ji and Y. Zhang, Camera calibration with genetic algorithms, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 31, pp. 120-130, 2001.
[86] S.M. Johnson, Optimal two- and three-stage production schedules with setup times included, Naval Research Logistics Quarterly, Vol. 1, pp. 61-68, 1954.
[87] J.A. Joines, C.T. Culbreth, and R.E. King, Manufacturing cell design: an integer programming model employing genetics, IIE Transactions, Vol. 28, pp. 69-85, 1996.
[88] J.A. Joines and C.R. Houck, On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GA's, in Proceedings of the First IEEE International Conference on Evolutionary Computation, IEEE Press, Orlando, FL, pp. 579-584, 1994.
[89] K. Kato and M. Sakawa, Genetic algorithms with decomposition procedures for fuzzy multiobjective 0-1 programming problems with block angular structure, Proceedings of 1996 IEEE International Conference on Evolutionary Computation, IEEE Press, Piscataway, NJ, pp. 706-709, 1996.
[90] K. Kato and M. Sakawa, An interactive fuzzy satisficing method for multiobjective structured 0-1 programs through genetic algorithms, Proceedings of mini-Symposium on Genetic Algorithms and Engineering Design, pp. 48-57, 1996.
[91] K. Kato and M. Sakawa, Interactive decision making for multiobjective block angular 0-1 programming problems with fuzzy parameters through genetic algorithms, Proceedings of the Sixth IEEE International Conference on Fuzzy Systems, Vol. 3, pp. 1645-1650, 1997.
[92] K. Kato and M. Sakawa, An interactive fuzzy satisficing method for multiobjective block angular 0-1 programming problems involving fuzzy parameters through genetic algorithms with decomposition procedures, Proceedings of the Seventh International Fuzzy Systems Association World Congress, Vol. 3, pp. 9-14, 1997.
[93] K. Kato and M. Sakawa, An interactive fuzzy satisficing method for large-scale multiobjective 0-1 programming problems with fuzzy parameters through genetic algorithms, European Journal of Operational Research, Vol. 107, No. 3, pp. 590-598, 1998.
[94] K. Kato and M. Sakawa, Large scale fuzzy multiobjective 0-1 programs through genetic algorithms with decomposition procedures, Proceedings of Second International Conference on Knowledge-Based Intelligent Electronic Systems, Vol. 1, pp. 278-284, 1998.
[95] K. Kato and M. Sakawa, Improvement of genetic algorithm by decomposition procedures for fuzzy block angular multiobjective knapsack problems, Proceedings of the Eighth International Fuzzy Systems Association World Congress, Vol. 1, pp. 349-353, 1999.
[96] A. Kaufmann and M. Gupta, Fuzzy Mathematical Models in Engineering and Management Science, North-Holland, Amsterdam, 1988.
[97] A. Kaufmann and M. Gupta, Introduction to Fuzzy Arithmetic, Van Nostrand Reinhold, New York, 1991.
[98] S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi, Optimization by simulated annealing, Science, Vol. 220, pp. 671-680, 1983.
[99] S. Kobayashi, I. Ono, and M. Yamamura, An efficient genetic algorithm for job shop scheduling problems, in Proceedings of the Sixth International Conference on Genetic Algorithms, pp. 506-511, 1995.
[100] S. Koziel and Z. Michalewicz, A decoder-based evolutionary algorithm for constrained parameter optimization problems, in Proceedings of the Fifth Parallel Problem Solving from Nature, Springer-Verlag, Berlin, pp. 231-240, 1998.
[101] S. Koziel and Z. Michalewicz, Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization, Evolutionary Computation, Vol. 7, No. 1, pp. 19-44, 1999.
[102] N. Kubota, T. Fukuda, and K. Shimojima, Virus-evolutionary genetic algorithms for a self-organizing manufacturing system, Computers & Industrial Engineering, Vol. 30, No. 4, pp. 1015-1026, 1996.
[103] Y.J. Lai and C.L. Hwang, Fuzzy Multiple Objective Decision Making: Methods and Applications, Springer-Verlag, Berlin, 1994.
[104] L.S. Lasdon, R.L. Fox, and M.W. Ratner, Nonlinear optimization using the generalized reduced gradient method, Revue Française d'Automatique, Informatique et Recherche Opérationnelle, Vol. 3, pp. 73-103, 1974.
[105] L.S. Lasdon, A.D. Waren, and M.W. Ratner, GRG2 User's Guide, Technical memorandum, University of Texas, 1980.
[106] C.C. Lo and W.H. Chang, A multiobjective hybrid genetic algorithm for the capacitated multipoint network design problem, IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, Vol. 30, pp. 461-470, 2000.
[107] Z.A. Lomnicki, A branch-and-bound algorithm for the exact solution of the three machine scheduling problem, Operational Research Quarterly, Vol. 16, pp. 89-100, 1965.
[108] J.G. March and H.A. Simon, Organizations, John Wiley, New York, 1958.
[109] R. Männer and B. Manderick (eds.), Parallel Problem Solving from Nature, 2, Proceedings of the Second International Conference on Parallel Problem Solving from Nature, Brussels, Belgium, North-Holland, Amsterdam, 1992.
[110] D.A. Manolas, C.A. Frangopoulos, T.P. Gialamas, and D.T. Tsahalis, Operation optimization of an industrial cogeneration system by a genetic algorithm, Energy Conversion and Management, Vol. 38, pp. 1625-1636, 1997.
[111] D.C. Mattfeld, Evolutionary Search and the Job Shop, Physica-Verlag, Heidelberg, 1996.
[112] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, Berlin, 1992; 2nd extended edition, 1994; 3rd revised and extended edition, 1996.
[113] Z. Michalewicz, Genetic algorithms, numerical optimization and constraints, in Proceedings of the Sixth International Conference on Genetic Algorithms, pp. 151-158, 1995.
[114] Z. Michalewicz and N. Attia, Evolutionary optimization of constrained problems, in Proceedings of the Third Annual Conference on Evolutionary Programming, World Scientific Publishers, River Edge, NJ, pp. 98-108, 1994.
[115] Z. Michalewicz, D. Dasgupta, R.G. Le Riche, and M. Schoenauer, Evolutionary algorithms for constrained engineering problems, Computers & Industrial Engineering, Vol. 30, pp. 851-870, 1996.
[116] Z. Michalewicz and C.Z. Janikow, Handling constraints in genetic algorithms, in Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo, CA, pp. 151-157, 1991.
[117] Z. Michalewicz, T. Logan, and S. Swaminathan, Evolutionary operators for continuous convex parameter spaces, in Proceedings of the Third Annual Conference on Evolutionary Programming, World Scientific Publishers, River Edge, NJ, pp. 84-97, 1994.
[118] Z. Michalewicz and G. Nazhiyath, Genocop III: a co-evolutionary algorithm for numerical optimization problems with nonlinear constraints, in Proceedings of 1995 IEEE International Conference on Evolutionary Computation, IEEE Press, Piscataway, NJ, pp. 647-651, 1995.
[119] Z. Michalewicz and M. Schoenauer, Evolutionary algorithms for constrained parameter optimization problems, Evolutionary Computation, Vol. 4, pp. 1-32, 1996.
[120] C. Moon, C.K. Kim, and M. Gen, Genetic algorithm for maximizing the parts flow within manufacturing cell design, Computers & Industrial Engineering, Vol. 36, pp. 1730-1733, 1999.
[121] T.E. Morton and D.W. Pentico, Heuristic Scheduling Systems, John Wiley & Sons, New York, 1993.
[122] I. Nabeshima, Theory of Scheduling, Morikita Publishing, Tokyo, 1974 (in Japanese).
[123] R. Nakano and T. Yamada, Conventional genetic algorithm for job shop problems, in Proceedings of the Fourth International Conference on Genetic Algorithms, pp. 474-479, 1991.
[124] I. Oliver, D. Smith, and J. Holland, A study of permutation crossover operators on the traveling salesman problem, in Proceedings of the Second International Conference on Genetic Algorithms, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 224-230, 1987.
[125] W. Orchard-Hays, Advanced Linear Programming: Computing Techniques, McGraw-Hill, New York, 1968.
[126] L. Ozdamar, A genetic algorithm approach to a general category project scheduling problem, IEEE Transactions on Systems, Man and Cybernetics, Part C: Applications and Reviews, Vol. 29, pp. 44-59, 1999.
[127] W. Pedrycz (ed.), Fuzzy Evolutionary Computation, Kluwer Academic Publishers, Norwell, MA, 1997.
[128] B.A. Peters and M. Rajasekharan, A genetic algorithm for determining facility design and configuration of single-stage flexible electronic assembly systems, Journal of Manufacturing Systems, Vol. 15, pp. 316-324, 1996.
[129] D. Powell and M. Skolnick, Using genetic algorithms in engineering design optimization with non-linear constraints, in Proceedings of the Fifth International Conference on Genetic Algorithms, pp. 424-430, 1993.
[130] R.V. Rogers and K.P. White, Algebraic, mathematical programming, and network models of the deterministic job-shop scheduling problem, IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-21, pp. 693-697, 1991.
[131] M. Sakawa, Interactive computer programs for fuzzy linear programming with multiple objectives, International Journal of Man-Machine Studies, Vol. 18, pp. 489-503, 1983.
[132] M. Sakawa, Interactive fuzzy decision making for multiobjective nonlinear programming problems, in M. Grauer and A.P. Wierzbicki (eds.), Interactive Decision Analysis, Springer-Verlag, Berlin, pp. 105-112, 1984.
[133] M. Sakawa, Optimization of Linear Systems, Morikita Publishing, Tokyo, 1984 (in Japanese).
[134] M. Sakawa, Optimization of Nonlinear Systems, Morikita Publishing, Tokyo, 1986 (in Japanese).
[135] M. Sakawa, Fuzzy Sets and Interactive Multiobjective Optimization, Plenum Press, New York, 1993.
[136] M. Sakawa, Large Scale Interactive Fuzzy Multiobjective Programming: Decomposition Approaches, Physica-Verlag, Heidelberg, 2000.
[137] M. Sakawa, Optimization of Discrete Systems, Morikita Publishing, Tokyo, 2000 (in Japanese).
[138] M. Sakawa, M. Inuiguchi, H. Sunada, and K. Sawada, Fuzzy multiobjective combinatorial optimization through revised genetic algorithms, Japanese Journal of Fuzzy Theory and Systems, Vol. 6, No. 1, pp. 177-186, 1994 (in Japanese).
[139] M. Sakawa, H. Ishii, and I. Nishizaki, Soft Optimization, Asakura Publishing, Tokyo, 1995 (in Japanese).
[140] M. Sakawa and K. Kato, Integer programming through genetic algorithms with double strings based on reference solution updating, Proceedings of 2000 IEEE International Conference on Industrial Electronics, Control and Instrumentation, Nagoya, Japan, pp. 2744-2749, 2000.
[141] M. Sakawa and K. Kato, An interactive fuzzy satisficing method for general multiobjective 0-1 programming problems through genetic algorithms with double strings based on a reference solution, Fuzzy Sets and Systems (in press).
[142] M. Sakawa, K. Kato, and T. Mori, Flexible scheduling in a machining center through genetic algorithms, Computers & Industrial Engineering: An International Journal, Vol. 30, No. 4, pp. 931-940, 1996.
[143] M. Sakawa, K. Kato, H. Obata, and K. Ooura, An approximate solution method for general multiobjective 0-1 programming problems through genetic algorithms with double string representation, Transactions of the Institute of Electronics, Information and Communication Engineers, Vol. J82-A, No. 7, pp. 1066-1073, 1999 (in Japanese).
[144] M. Sakawa, K. Kato, and T. Shibano, An interactive fuzzy satisficing method for multiobjective multidimensional 0-1 knapsack problems through genetic algorithms, Proceedings of 1996 IEEE International Conference on Evolutionary Computation, IEEE Press, Piscataway, NJ, pp. 243-246, 1996.
[145] M. Sakawa, K. Kato, T. Shibano, and K. Hirose, Fuzzy multiobjective integer programs through genetic algorithms using double string representation and information about solutions of continuous relaxation problems, Proceedings of 1999 IEEE International Conference on Systems, Man and Cybernetics, Vol. 3, pp. 967-972, 1999.
[146] M. Sakawa, K. Kato, T. Shibano, and K. Hirose, Genetic algorithms with double strings for multidimensional integer knapsack problems, Journal of Japan Society for Fuzzy Theory and Systems, Vol. 12, No. 4, pp. 562-569, 2000 (in Japanese).
[147] M. Sakawa, K. Kato, H. Sunada, and Y. Enda, An interactive fuzzy satisficing method for multiobjective 0-1 programming problems through revised genetic algorithms, Journal of Japan Society for Fuzzy Theory and Systems, Vol. 7, No. 2, pp. 361-370, 1995 (in Japanese).
[148] M. Sakawa, K. Kato, H. Sunada, and T. Shibano, Fuzzy programming for multiobjective 0-1 programming problems through revised genetic algorithms, European Journal of Operational Research, Vol. 97, pp. 149-158, 1997.
[149] M. Sakawa, K. Kato, and S. Ushiro, An interactive fuzzy satisficing method for multiobjective 0-1 programming problems involving positive and negative coefficients through genetic algorithms with double strings, Proceedings of the Eighth International Fuzzy Systems Association World Congress, Vol. 1, pp. 430-434, 1999.
[150] M. Sakawa, K. Kato, and S. Ushiro, Operation planning of district heating and cooling plants using genetic algorithms for mixed 0-1 linear programming, Proceedings of 2000 IEEE International Conference on Industrial Electronics, Control and Instrumentation, Nagoya, Japan, pp. 2915-2920, 2000.
[151] M. Sakawa, K. Kato, and S. Ushiro, Operational planning of district heating and cooling plants through genetic algorithms for mixed 0-1 linear programming, European Journal of Operational Research (in press).
[152] M. Sakawa, K. Kato, and S. Ushiro, Operation planning of district heating and cooling plants through genetic algorithms for nonlinear 0-1 programming, Computers & Mathematics with Applications (in press).
[153] M. Sakawa, K. Kato, and S. Ushiro, Cooling load prediction in a district heating and cooling system through simplified robust filter and multi-layered neural network, Applied Artificial Intelligence, Vol. 15 (in press), 2001.
[154] M. Sakawa, K. Kato, S. Ushiro, and M. Inaoka, Operation planning of district heating and cooling plants using genetic algorithms for mixed integer programming, Applied Soft Computing (in press).
[155] M. Sakawa, K. Kato, S. Ushiro, and K. Ooura, Fuzzy programming for general multiobjective 0-1 programming problems through genetic algorithms with double strings, 1999 IEEE International Fuzzy Systems Conference Proceedings, Vol. III, pp. 1522-1527, 1999.
[156] M. Sakawa and R. Kubota, Fuzzy programming for multiobjective job shop scheduling with fuzzy processing time and fuzzy duedate through genetic algorithms, European Journal of Operational Research, Vol. 120, pp. 393-407, 2000.
[157] M. Sakawa and T. Mori, Job shop scheduling through genetic algorithms incorporating similarity concepts, Transactions of the Institute of Electronics, Information and Communication Engineers A, Vol. J80-A, No. 6, pp. 960-968, 1997 (in Japanese).
[158] M. Sakawa and T. Mori, Job shop scheduling with fuzzy duedate and fuzzy processing time through genetic algorithms, Journal of Japan Society for Fuzzy Theory and Systems, Vol. 9, pp. 231-238, 1997 (in Japanese).
[159] M. Sakawa and T. Mori, An efficient genetic algorithm for job-shop scheduling problems with fuzzy processing time and fuzzy duedate, Computers & Industrial Engineering: An International Journal, Vol. 36, pp. 325-341, 1999.
[160] M. Sakawa and T. Shibano, Interactive fuzzy programming for multiobjective 0-1 programming problems through genetic algorithms with double strings, in Da Ruan (ed.), Fuzzy Logic Foundations and Industrial Applications, Kluwer Academic Publishers, Norwell, MA, pp. 111-128, 1996.
[161] M. Sakawa and T. Shibano, Multiobjective fuzzy satisficing methods for 0-1 knapsack problems through genetic algorithms, in W. Pedrycz (ed.), Fuzzy Evolutionary Computation, Kluwer Academic Publishers, Norwell, MA, pp. 155-177, 1997.
[162] M. Sakawa and T. Shibano, An interactive fuzzy satisficing method for multiobjective 0-1 programming problems with fuzzy numbers through genetic algorithms with double strings, European Journal of Operational Research, Vol. 107, pp. 564-574, 1998.
[163] M. Sakawa and T. Shibano, An interactive approach to fuzzy multiobjective 0-1 programming problems using genetic algorithms, in M. Gen and Y. Tsujimura (eds.), Evolutionary Computations and Intelligent Systems, Gordon & Breach, New York (to appear).
[164] M. Sakawa, T. Shibano, and K. Kato, An interactive fuzzy satisficing method for multiobjective integer programming problems with fuzzy numbers through genetic algorithms, Journal of Japan Society for Fuzzy Theory and Systems, Vol. 10, pp. 108-116, 1998 (in Japanese).
[165] M. Sakawa and M. Tanaka, Genetic Algorithms, Asakura Publishing, Tokyo, 1995 (in Japanese).
[166] M. Sakawa, S. Ushiro, K. Kato, and T. Inoue, Cooling load prediction through radial basis function network and simplified robust filter, Journal of Japan Society for Fuzzy Theory and Systems, Vol. 11, pp. 112-120, 1999 (in Japanese).
[167] M. Sakawa, S. Ushiro, K. Kato, and T. Inoue, Cooling load prediction through radial basis function network using a hybrid structural learning and simplified robust filter, Transactions of the Institute of Electronics, Information and Communication Engineers, Vol. J82-A, pp. 31-39, 1999 (in Japanese).
[168] M. Sakawa, S. Ushiro, K. Kato, and K. Ohtsuka, Cooling load prediction through simplified robust filter and three-layered neural network in a district heating and cooling system, Transactions of the Institute of Electronics, Information and Communication Engineers, Vol. J83-A, pp. 234-237, 2000 (in Japanese).
[169] M. Sakawa and T. Yumine, Interactive fuzzy decision-making for multiobjective linear fractional programming problems, Large Scale Systems, Vol. 5, pp. 105-114, 1983.
[170] M. Sakawa and H. Yano, An interactive fuzzy satisficing method using augmented minimax problems and its application to environmental systems, IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-15, No. 6, pp. 720-729, 1985.
[171] M. Sakawa and H. Yano, Interactive decision making for multiobjective linear fractional programming problems with fuzzy parameters, Cybernetics and Systems: An International Journal, Vol. 16, pp. 377-394, 1985.
[172] M. Sakawa and H. Yano, Interactive decision making for multiobjective linear problems with fuzzy parameters, in G. Fandel, M. Grauer, A. Kurzhanski, and A.P. Wierzbicki (eds.), Large-Scale Modeling and Interactive Decision Analysis, Springer-Verlag, Berlin, pp. 88-96, 1986.
[173] M. Sakawa and H. Yano, An interactive fuzzy satisficing method for multiobjective linear programming problems with fuzzy parameters, Large Scale Systems: Theory and Applications, Proceedings of the IFAC/IFORS Symposium, 1986.
[174] M. Sakawa and H. Yano, An interactive fuzzy satisficing method for multiobjective nonlinear programming problems with fuzzy parameters, in R. Trappl (ed.), Cybernetics and Systems '86, D. Reidel Publishing, Dordrecht, pp. 607-614, 1986.
[175] M. Sakawa and H. Yano, An interactive satisficing method for multiobjective nonlinear programming problems with fuzzy parameters, in J. Kacprzyk and S.A. Orlovski (eds.), Optimization Models Using Fuzzy Sets and Possibility Theory, D. Reidel Publishing, Dordrecht, pp. 258-271, 1987.
[176] M. Sakawa and H. Yano, An interactive fuzzy satisficing method for multiobjective linear fractional programming problems, Fuzzy Sets and Systems, Vol. 28, pp. 129-144, 1988.
[177] M. Sakawa and H. Yano, Interactive decision making for multiobjective nonlinear programming problems with fuzzy parameters, Fuzzy Sets and Systems, Vol. 29, pp. 315-326, 1989.
[178] M. Sakawa and H. Yano, An interactive fuzzy satisficing method for multiobjective nonlinear programming problems with fuzzy parameters, Fuzzy Sets and Systems, Vol. 30, pp. 221-238, 1989.
[179] M. Sakawa and K. Yauchi, Coevolutionary genetic algorithms for nonconvex nonlinear programming problems: Revised GENOCOP III, Cybernetics and Systems: An International Journal, Vol. 29, No. 8, pp. 885-899, 1998.
[180] M. Sakawa and K. Yauchi, An interactive fuzzy satisficing method for multiobjective nonconvex programming problems with fuzzy numbers through floating point genetic algorithms, Journal of Japan Society for Fuzzy Theory and Systems, Vol. 10, pp. 89-97, 1998.
[181] M. Sakawa and K. Yauchi, An interactive fuzzy satisficing method for multiobjective nonconvex programming problems through floating point genetic algorithms, European Journal of Operational Research, Vol. 117, pp. 113-124, 1999.
[182] M. Sakawa and K. Yauchi, Interactive decision making for multiobjective nonconvex programming problems with fuzzy parameters through coevolutionary genetic algorithms, Fuzzy Sets and Systems, Vol. 114, pp. 151-165, 2000.
[183] M. Sakawa and K. Yauchi, An interactive fuzzy satisficing method for multiobjective nonconvex programming problems with fuzzy numbers through coevolutionary genetic algorithms, IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, Vol. 31, No. 3, 2001.
[184] M. Sakawa, H. Yano, and T. Yumine, An interactive fuzzy satisficing method for multiobjective linear-programming problems and its application, IEEE Transactions on Systems, Man and Cybernetics, Vol. SMC-17, No. 4, pp. 654-661, 1987.
[185] H.G. Sandalidis, P.O. Stavroulakis, and J. Rodriguez-Tellez, An efficient evolutionary algorithm for channel resource management in cellular mobile systems, IEEE Transactions on Evolutionary Computation, Vol. 2, pp. 125-137, 1998.
[186] N. Sannomiya, H. Iima, E. Kako, and Y. Kobayashi, Genetic algorithm approach to a production ordering problem in acid rinsing of steelmaking plant, Proceedings of Thirteenth IFAC World Congress, Vol. D, pp. 297-302, 1996.
[187] J.D. Schaffer, Multiple objective optimization with vector evaluated genetic algorithms, in Proceedings of the First International Conference on Genetic Algorithms and Their Applications, Lawrence Erlbaum Associates, Hillsdale, NJ, pp. 93-100, 1985.
[188] J.D. Schaffer (ed.), Genetic Algorithms, Proceedings of the Third International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo, CA, 1989.
[189] H.-P. Schwefel, Evolution and Optimum Seeking, John Wiley & Sons, New York, 1995.
[190] H.-P. Schwefel and R. Männer (eds.), Parallel Problem Solving from Nature, Proceedings of the First International Conference on Parallel Problem Solving from Nature (PPSN), Dortmund, Germany, Springer-Verlag, Berlin, 1990.
[191] F. Seo and M. Sakawa, Multiple Criteria Decision Analysis in Regional Planning: Concepts, Methods and Applications, D. Reidel Publishing, Dordrecht, 1988.
[192] I. Shiroumaru, M. Inuiguchi, and M. Sakawa, A fuzzy satisficing method for electric power plant coal purchase using genetic algorithms, European Journal of Operational Research, Vol. 126, pp. 218-230, 2000.
[193] R.L. Sisson, Methods of sequencing in job shops - a review, Operations Research, Vol. 7, pp. 10-29, 1959.
[194] R. Slowinski and M. Hapke (eds.), Scheduling under Fuzziness, Physica-Verlag, Heidelberg, 2000.
[195] R. Slowinski and J. Teghem (eds.), Stochastic versus Fuzzy Approaches to Multiobjective Mathematical Programming under Uncertainty, Kluwer Academic Publishers, Norwell, MA, 1990.
[196] R. Smierzchalski and Z. Michalewicz, Modeling of ship trajectory in collision situations by an evolutionary algorithm, IEEE Transactions on Evolutionary Computation, Vol. 4, pp. 227-241, 2000.
[197] N. Srinivas and K. Deb, Multiobjective optimization using nondominated sorting in genetic algorithms, Evolutionary Computation, Vol. 2, pp. 221-248, 1995.
[198] T. Starkweather, S. McDaniel, K. Mathias, D. Whitley, and C. Whitley, A comparison of genetic sequencing operators, in Proceedings of the Fourth International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo, CA, pp. 69-76, 1991.
[199] R.E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application, John Wiley & Sons, New York, 1986.
[200] R.E. Steuer and E.U. Choo, An interactive weighted Tchebycheff procedure for multiple objective programming, Mathematical Programming, Vol. 26, pp. 326-344, 1983.
[201] G. Sommer and M.A. Pollatschek, A fuzzy programming approach to an air pollution regulation problem, in R. Trappl, G.J. Klir, and L. Ricciardi (eds.), Progress in Cybernetics and Systems Research, Hemisphere, pp. 303-323, 1978.
[202] G. Syswerda, Uniform crossover in genetic algorithms, in Proceedings of the Third International Conference on Genetic Algorithms, Morgan Kaufmann Publishers, San Mateo, CA, pp. 2-9, 1989.
[203] H. Tamaki and Y. Nishikawa, A parallel genetic algorithm based on a neighborhood model and its application to the jobshop scheduling, in Parallel Problem Solving from Nature 2, North-Holland, Amsterdam, pp. 573-582, 1992.
[204] H. Tamaki and Y. Nishikawa, Maintenance of diversity in a genetic algorithm and an application to the jobshop scheduling, Proceedings of the IMACS/SICE International Symposium on Robotics, Mechatronics and Manufacturing Systems '92, pp. 869-874, 1992.
[205] H. Tamaki, M. Mori, M. Araki, Y. Mishima, and H. Ogai, Multi-criteria optimization by genetic algorithms: a case of scheduling in hot rolling process, Proceedings of Third Conference of the Association of Asian-Pacific Operational Research Societies within IFORS, pp. 374-381, 1995.
[206] Y. Tsujimura, M. Gen, and E. Kubota, Solving job-shop scheduling problem with fuzzy processing time using genetic algorithm, Journal of Japan Society for Fuzzy Theory and Systems, Vol. 7, pp. 1073-1083, 1995.
[207] Y. Tsujimura and M. Gen, Genetic algorithms for solving multiprocessor scheduling problems, in X. Yao, J.H. Kim, and T. Furuhashi (eds.), Simulated Evolution and Learning, Springer-Verlag, Heidelberg, pp. 106-115, 1997.
[208] E.L. Ulungu and J. Teghem, Multi-objective combinatorial optimization problems: a survey, Journal of Multicriteria Decision Analysis, Vol. 3, pp. 83-104, 1994.
[209] P.J.M. van Laarhoven and E.H.L. Aarts, Simulated Annealing: Theory and Applications, D. Reidel Publishing, Dordrecht, 1987.
[210] J.-L. Verdegay and M. Delgado (eds.), The Interface between Artificial Intelligence and Operations Research in Fuzzy Environment, Verlag TÜV Rheinland, Köln, 1989.
[211] N. Viswanadhan, S.M. Sharma, and M. Taneja, Inspection allocation in manufacturing systems using stochastic search techniques, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, Vol. 26, pp. 222-230, 1996.
[212] H.-M. Voigt, W. Ebeling, I. Rechenberg, and H.-P. Schwefel (eds.), Parallel Problem Solving from Nature - PPSN IV, Springer-Verlag, Berlin, 1996.
[213] G. Winter, J. Periaux, M. Galán, and P. Cuesta (eds.), Genetic Algorithms in Engineering and Computer Science, John Wiley & Sons, New York, 1995.
[214] L.D. Whitley, T. Starkweather, and D'A. Fuquay, Scheduling problems and traveling salesman: The genetic edge recombination operator, Proceedings of the Third International Conference on Genetic Algorithms, pp. 133-140, 1989.
[215] A.P. Wierzbicki, The use of reference objectives in multiobjective optimization, in G. Fandel and T. Gal (eds.), Multiple Criteria Decision Making: Theory and Application, Springer-Verlag, Berlin, pp. 468-486, 1980.
[216] A.P. Wierzbicki, A mathematical basis for satisficing decision making, Mathematical Modeling, Vol. 3, pp. 391-405, 1982.
[217] A. Wright, Genetic algorithms for real parameter optimization, in J.G. Rawlins (ed.), Foundations of Genetic Algorithms, Morgan Kaufmann, San Francisco, CA, pp. 205-218, 1991.
[218] J. Xiao, Z. Michalewicz, L. Zhang, and K. Trojanowski, Adaptive evolutionary planner/navigator for mobile robots, IEEE Transactions on Evolutionary Computation, Vol. 1, pp. 18-28, 1997.
[219] T. Yamada and R. Nakano, A genetic algorithm applicable to large-scale job shop problems, in Parallel Problem Solving from Nature 2, North-Holland, Amsterdam, pp. 281-290, 1992.
[220] R. Yokoyama and K. Ito, A revised decomposition method for MILP problems and its application to operational planning of thermal storage systems, Journal of Energy Resources Technology, Vol. 118, pp. 277-284, 1996.
[221] L.A. Zadeh, Fuzzy sets, Information and Control, Vol. 8, pp. 338-353, 1965.
[222] M. Zeleny, Multiple Criteria Decision Making, McGraw-Hill, New York, 1982.
[223] Q. Zhang and Y.Y. Leung, An orthogonal genetic algorithm for multimedia multicast routing, IEEE Transactions on Evolutionary Computation, Vol. 3, pp. 53-62, 1999.
[224] H.-J. Zimmermann, Description and optimization of fuzzy systems, International Journal of General Systems, Vol. 2, pp. 209-215, 1976.
[225] H.-J. Zimmermann, Fuzzy programming and linear programming with several objective functions, Fuzzy Sets and Systems, Vol. 1, pp. 45-55, 1978.
[226] H.-J. Zimmermann, Fuzzy mathematical programming, Computers & Operations Research, Vol. 10, pp. 291-298, 1983.
[227] H.-J. Zimmermann, Fuzzy Sets, Decision-Making and Expert Systems, Kluwer Academic Publishers, Norwell, MA, 1987.
[228] H.-J. Zimmermann, Fuzzy Set Theory and Its Applications, Kluwer Academic Publishers, Norwell, MA, 1985; 2nd edition, 1991; 3rd edition, 1996.
Index

α-level set, 71, 120, 161
α-multiobjective 0-1 programming, 71
α-multiobjective integer programming, 120
α-Pareto optimal solution, 72, 121, 161
∨ (max) operation, 193
active schedule, 173, 195
addition, 193
aggregation function, 57, 73, 109, 122
agreement index, 191, 208
augmented minimax problem, 58, 74, 111, 123, 156, 163
average agreement index, 209
backtracking, 40, 45
binary string, 16
bisection method, 146
bit string, 16
boundary mutation, 140
branch and bound method, 179
chromosome, 12, 16
coding, 12, 16
complete optimal solution, 55, 108, 154
completion time, 171, 191
conflict set, 175, 196
convex programming, 135
convex programming problem, 136
crossover, 13, 230
cut, 174, 195
cycle crossover: CX, 25
decision variable, 135, 154, 160
decoding, 12, 16
decoding algorithm, 32, 41, 45, 86, 100, 244
degree of similarity, 176, 198
discrete optimization problem, 11
domain constraint, 135
double string, 31, 85
due date, 171, 224
elitism, 33, 247
elitist expected value selection, 33, 45, 49, 90, 103, 247
elitist preserving selection, 20, 49, 90, 103
equality constraint, 135, 154, 160
expected-value selection, 20, 21
expected value selection, 33, 49, 90, 103, 247
extension principle, 193
feasible region, 135
feasible set, 135
fitness, 11, 12, 16, 31, 32, 89
fitness function, 156, 163
FJSP, 191
floating-point representation, 137
flow-shop scheduling problem, 170
fuzzy completion time, 191
fuzzy decision, 57, 109, 210
fuzzy due date, 191, 208
fuzzy equal, 155, 162
fuzzy goal, 56, 108, 155, 161, 209
fuzzy job-shop scheduling problem, 191
fuzzy max, 155, 162
fuzzy min, 155, 162
fuzzy multiobjective decision-making problem, 57, 109
fuzzy number, 70, 120, 160
fuzzy processing time, 191, 208
Gantt chart, 172
gene, 12, 16
generalized α-MONLP, 162
generalized multiobjective nonlinear programming problem, 155
generation, 12
genetic algorithm, 11, 133, 170, 229, 237, 261
GENOCOP, 142
GENOCOP III, 142
genotype, 12, 16
Giffler and Thompson algorithm, 174, 195
Giffler and Thompson algorithm-based crossover, 177, 198
greatest associate ordinary number, 210
heuristic crossover, 139
individual, 11
inequality constraint, 135, 154, 160
initial population, 88, 89, 137, 176, 197, 212, 230, 262
initial reference point, 143
inversion, 26, 35, 94, 248
job-shop scheduling problem, 169, 171, 191
JSP, 169, 171, 191
knapsack problem, 30, 84
linear membership function, 56, 108, 209
linear scaling, 17, 32, 90, 246
local Pareto optimal solution, 155
locus, 12, 16
M-α-Pareto optimal solution, 162
M-Pareto optimal solution, 156
machining center, 224
makespan, 171
maximum completion time, 171
maximum fuzzy completion time, 209
membership function, 155, 160, 209
minimax problem, 58, 74, 110, 123, 156, 163
minimum agreement index, 209
minimum operator, 57, 109, 210
MO0-1-FN, 70
MOFJSP, 209
MOIP-FN, 118
MONLP, 154, 159
MONLP-FN, 160
monthly processing plan, 224
multiobjective 0-1 programming, 54
multiobjective fuzzy job-shop scheduling problem, 209
multiobjective integer programming, 108
multiobjective multidimensional 0-1 knapsack problem, 55
multiobjective multidimensional integer knapsack problem, 108
multiobjective nonlinear programming, 154, 159
multipoint crossover, 23
mutation, 13, 26, 35, 93, 179, 200, 213, 231, 248, 262
natural selection, 19
nondelay schedule, 173
noninferior solution, 55
nonlinear programming, 135
nonuniform mutation, 140
objective function, 135, 154, 160
one-point crossover, 23, 137, 248
optimal schedule, 174
ordered crossover: OX, 25
Pareto optimality, 55
Pareto optimality test, 58, 111, 156
Pareto optimal solution, 55, 154
partially matched crossover: PMX, 25
phenotype, 12, 16
PMX for double strings, 33, 91
population, 11, 12, 15
power law scaling, 19
ranking method, 21, 210
ranking selection, 20, 21, 230, 262
reference membership levels, 58, 110, 156, 163
reference membership values, 74, 122
reference point, 58, 74, 110, 122
reference points, 142
reference solution, 40, 45, 99, 243
reproduction, 13, 19, 33, 90
revised GENOCOP III, 143
roulette selection, 19
satisficing solution, 2, 59, 111, 157, 164
scheduling problem, 227
search points, 142
selection, 19
semiactive schedule, 173
sigma scaling, 19
sigma truncation, 19
simple crossover, 24, 137
simulated annealing, 182, 200, 213, 265
single arithmetic crossover, 138
string, 12, 16
triangular fuzzy number, 191
two-point crossover, 24, 262
unconstrained minimization problem, 135
uniform crossover, 23
uniform mutation, 140
weak Pareto optimality, 55
weak Pareto optimal solution, 56
whole arithmetic crossover, 138
whole nonuniform mutation, 141
