Article · June 2002


Discrete Optimization using Mathematica
Daniel Lichtblau
Wolfram Research, Inc.
100 Trade Centre Dr.
Champaign, Illinois 61820 (USA)
danl@wolfram.com

ABSTRACT

Many important classes of optimization problems are discrete in nature. Examples are the standard problems of integer programming (including the ever important "knapsack problems"), permutation assignment problems (e.g., the notorious "traveling salesman problem"), set coverings, set partitioning, and so on. Problems of practical interest are frequently too large to be solved by brute-force enumeration techniques. Hence there is a need for more refined methods that use various tactics for sampling and searching the domain in pursuit of optimal solutions. Version 4.2 of Mathematica has a flexible package for performing global optimization. It includes powerful functionality from the category of evolutionary programming. Ways to apply this technology to various problems in discrete optimization will be discussed. We will present details of how to code problems so that built-in Mathematica functions can digest them, and will illustrate with a variety of examples. We will also discuss some practical tuning considerations.

Keywords: evolutionary algorithms, discrete optimization, set partitioning, subset covering, quadratic assignment problem.

ACKNOWLEDGEMENTS

Much of the NMinimize code on which this paper rests was developed by Sergei Shebalov (summer intern at Wolfram Research, Inc., 1999, 2000) and Brett Champion of Wolfram Research, Inc.

There is a discussion of the technology behind NMinimize in Brett Champion's paper for this conference [1]. The primary focus there is to describe the general technology and provide examples of continuous optimization problems.

All timings indicated below are for a 1.4 GHz machine running a development Mathematica kernel under the Linux operating system (Mathematica (TM) is a registered trademark of Wolfram Research, Incorporated [7]). Many of these examples were also presented in [3].

INTRODUCTION

Optimization problems that involve discrete parameters are quite common. Examples include task assignments, knapsack problems, and so forth. In Mathematica we now have some powerful global optimization technology, described in [1], and contained in a function called NMinimize. Among its features is a method from the family of evolutionary algorithms (see [2] for a broad discussion of these) based on the "differential evolution" method presented in [6]. That method was designed primarily for continuous optimization problems. It differs from standard "genetic" methods in that vector elements take on values in a continuum; this was motivated by a need to represent continuous variables in a way that discrete-valued genes cannot readily do. In other respects, e.g. mutation and mating, this method resembles a genetic algorithm.

It turns out that differential evolution is a powerful tool for handling constrained optimization, stochastic problems (see [1] for an example), and, of all things, discrete optimization. In this paper we demonstrate with several examples how one might attack discrete problems. It must be stated that these methods are heuristic. We generally do not know that the results returned are optimal (unless shown to be so by other means). What we claim is that the methods to be presented are practical ways to get "near" optimal results without resorting to problem-specific software.

INITIALIZATION

We begin by loading the add-on package that has the NMinimize function.

    Needs["NumericalMath`NMinimize`"]
    Off[General::spell, General::spell1]

In order to do discrete optimizations effectively we will disable various mechanisms that are in place to determine "convergence". This will force the program to work for the full specified maximum iterations.

    SetOptions[NMinimize, AccuracyGoal -> Infinity,
      PrecisionGoal -> Infinity, Compiled -> False]

A SIMPLE EXAMPLE

We start with a basic coin problem. We are given 143267 coins in pennies, nickels, dimes, and quarters, of total value $12563.29, and we are to determine how many coins might be of each type. There are several ways one might set up such a problem in NMinimize. We will try to minimize the sum of squares of differences between actual and desired values of the two linear expressions implied by the information above. For our search space we will impose obvious range constraints on the various coin types. We will want to alter the seeding of the random number generator (this changes the random initial parameters used to seed the optimization code), so we specify the method with this option added. We will do 10 runs of this.
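The coin problem is pure constraint satisfaction: a candidate count (p, n, d, q) is valid exactly when the total-value and total-count conditions both hold, so the sum of squared residuals vanishes precisely at valid answers. As a language-neutral cross-check (a sketch of ours in Python, not part of the paper's Mathematica session), the objective can be written out and evaluated directly:

```python
# Objective for the coin problem: squared residuals of the two
# linear conditions (total value in cents, total number of coins).
def coin_objective(p, n, d, q):
    value_residual = p + 5 * n + 10 * d + 25 * q - 1256329
    count_residual = p + n + d + q - 143267
    return value_residual ** 2 + count_residual ** 2

# One of the solutions NMinimize reports below; the objective is 0.
print(coin_objective(p=50839, n=50489, d=6362, q=35577))  # -> 0
```

A zero objective certifies validity, which is what lets us classify each of the ten runs as either a solution or merely a near solution.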

    Timing[Table[
      {min, sol} = NMinimize[
        {(p + 5 n + 10 d + 25 q - 1256329)^2 +
          (p + n + d + q - 143267)^2,
         Element[{p, n, d, q}, Integers],
         0 < p < 1256329, 0 < n < 1256329/5,
         0 < d < 1256329/10, 0 < q < 1256329/25},
        {p, n, d, q}, MaxIterations -> 1000,
        Method -> {DifferentialEvolution,
          RandomSeed -> Random[Integer, 1000]}],
      {10}]]

    {223.53 Second,
     {{0., {d -> 6362, n -> 50489, p -> 50839, q -> 35577}},
      {0., {d -> 43218, n -> 57917, p -> 21614, q -> 20518}},
      {0., {d -> 99194, n -> 38267, p -> 3004, q -> 2802}},
      {0., {d -> 45794, n -> 43001, p -> 32434, q -> 22038}},
      {0., {d -> 40522, n -> 67331, p -> 15454, q -> 19960}},
      {0., {d -> 51018, n -> 40919, p -> 30904, q -> 20426}},
      {0., {d -> 34674, n -> 62009, p -> 23544, q -> 23040}},
      {0., {d -> 78822, n -> 22550, p -> 28834, q -> 13061}},
      {0., {d -> 65434, n -> 22865, p -> 36939, q -> 18029}},
      {0., {d -> 54378, n -> 38981, p -> 30419, q -> 19489}}}}

We obtained valid solutions each time. Using only, say, 400 iterations we tend to get solutions about half the time and "near" solutions the other half (wherein either the number of coins and/or the total value is off by a very small amount). Note that this type of problem is one of constraint satisfaction. An advantage in these is that we can discern from a proposed solution whether it is valid; those are exactly the cases for which we get an objective value of zero, with all constraints satisfied.

PARTITIONING A SET

We illustrate this sort of problem with an old example from computational folklore. We are to partition the integers from 1 to 100 into two sets of 50, such that the sums of the square roots in each set are as close to equal as possible. There are various ways to set this up as a problem for NMinimize, and we illustrate one such below.

We will utilize a simple way to choose 50 elements from the set of 100. This is an approach we often use for adapting permutation problems for NMinimize. We work with 100 real values from 0 to 1, and their ordering (from the Mathematica Ordering function) determines which is to be regarded as "first", which as "second", and so on. Note that this is different from the last example in that we now work with continuous variables even though the problem itself involves a discrete set.

    splitRange[vec_] := With[
      {newvec = Ordering[vec],
       halflen = Floor[Length[vec]/2]},
      {Take[newvec, halflen], Drop[newvec, halflen]}]

Once we have a way to associate a pair of subsets to a given set of 100 values in the range from 0 to 1, we form our objective function. A convenient choice is simply the absolute value of a difference; this is often the case in optimization problems.

    obfun[vec : {__Real}] := With[
      {vals = splitRange[vec]},
      Abs[Apply[Plus, Sqrt[N[First[vals]]]] -
        Apply[Plus, Sqrt[N[Last[vals]]]]]]

We now put these components together into a function that provides our set partition.

    getHalfSet[n_, opts___Rule] := Module[
      {vars, xx, ranges, nmin, vals},
      vars = Array[xx, n];
      ranges = Map[{#, 0, 1} &, vars];
      {nmin, vals} = NMinimize[obfun[vars], ranges, opts];
      {nmin, Map[Sort, splitRange[vars /. vals]]}]

If we do not specify otherwise, the default behavior will be to use the DifferentialEvolution method; this is a sign of sound automatic behavior of NMinimize [1]. All the same, we explicitly set that method so that we can more readily pass it nondefault method-specific options. Finally, we set this to run many iterations with a lot of search points.

    Timing[{min, {s1, s2}} = getHalfSet[100,
      Method -> {DifferentialEvolution,
        CrossProbability -> 0.9, SearchPoints -> 100,
        PostProcess -> False},
      MaxIterations -> 10000]]

    {916.523 Second,
     {5.30582*10^-6,
      {{1, 4, 6, 8, 9, 13, 17, 20, 23, 24, 28,
        29, 30, 33, 35, 36, 37, 38, 40, 41, 42, 46, 47, 48,
        50, 51, 52, 54, 55, 56, 57, 58, 61, 63, 66, 67, 69,
        71, 72, 73, 74, 75, 77, 78, 81, 83, 92, 94, 97, 99},
       {2, 3, 5, 7, 10, 11, 12, 14, 15, 16, 18, 19, 21,
        22, 25, 26, 27, 31, 32, 34, 39, 43, 44, 45, 49, 53,
        59, 60, 62, 64, 65, 68, 70, 76, 79, 80, 82, 84, 85,
        86, 87, 88, 89, 90, 91, 93, 95, 96, 98, 100}}}}

We obtain a fairly small value for our objective function.

A SUBSET COVERING PROBLEM

The problem below was posed in the news group comp.soft-sys.math.mathematica. We are given a set of sets, each containing integers between 1 and 64. Their union is the set of all integers in that range, and we want to find a set of 12 subsets that covers that entire range. For those interested in trying this problem, the input in electronic form may be found in [3].
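Readers who wish to experiment with the covering problem without retyping the data may find a short Python sketch helpful. It is ours, not part of the paper: the 64 subsets listed below appear to follow a simple XOR pattern (subset i consists of ((i-1) XOR (2^j - 1)) + 1 for j = 0, ..., 6, an observation of ours), and a coverage test for a candidate choice of indices is a one-line set computation.

```python
# Regenerate the 64 subsets from the XOR pattern they appear to
# follow, then check whether a choice of subsets covers 1..64.
def make_subsets():
    return [[((i - 1) ^ (2 ** j - 1)) + 1 for j in range(7)]
            for i in range(1, 65)]

def covers(subsets, indices, universe=range(1, 65)):
    # True when the chosen subsets jointly contain every element.
    chosen = set().union(*(subsets[i - 1] for i in indices))
    return set(universe) <= chosen

subsets = make_subsets()
assert covers(subsets, range(1, 65))  # the full family covers 1..64
# The 12-subset solution found by Method 1 below:
print(covers(subsets, [1, 15, 16, 21, 26, 30, 34, 41, 45, 54, 59, 60]))
# -> True
```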
    subsets = {
      {1, 2, 4, 8, 16, 32, 64}, {2, 1, 3, 7, 15, 31, 63},
      {3, 4, 2, 6, 14, 30, 62}, {4, 3, 1, 5, 13, 29, 61},
      {5, 6, 8, 4, 12, 28, 60}, {6, 5, 7, 3, 11, 27, 59},
      {7, 8, 6, 2, 10, 26, 58}, {8, 7, 5, 1, 9, 25, 57},
      {9, 10, 12, 16, 8, 24, 56}, {10, 9, 11, 15, 7, 23, 55},
      {11, 12, 10, 14, 6, 22, 54}, {12, 11, 9, 13, 5, 21, 53},
      {13, 14, 16, 12, 4, 20, 52}, {14, 13, 15, 11, 3, 19, 51},
      {15, 16, 14, 10, 2, 18, 50}, {16, 15, 13, 9, 1, 17, 49},
      {17, 18, 20, 24, 32, 16, 48}, {18, 17, 19, 23, 31, 15, 47},
      {19, 20, 18, 22, 30, 14, 46}, {20, 19, 17, 21, 29, 13, 45},
      {21, 22, 24, 20, 28, 12, 44}, {22, 21, 23, 19, 27, 11, 43},
      {23, 24, 22, 18, 26, 10, 42}, {24, 23, 21, 17, 25, 9, 41},
      {25, 26, 28, 32, 24, 8, 40}, {26, 25, 27, 31, 23, 7, 39},
      {27, 28, 26, 30, 22, 6, 38}, {28, 27, 25, 29, 21, 5, 37},
      {29, 30, 32, 28, 20, 4, 36}, {30, 29, 31, 27, 19, 3, 35},
      {31, 32, 30, 26, 18, 2, 34}, {32, 31, 29, 25, 17, 1, 33},
      {33, 34, 36, 40, 48, 64, 32}, {34, 33, 35, 39, 47, 63, 31},
      {35, 36, 34, 38, 46, 62, 30}, {36, 35, 33, 37, 45, 61, 29},
      {37, 38, 40, 36, 44, 60, 28}, {38, 37, 39, 35, 43, 59, 27},
      {39, 40, 38, 34, 42, 58, 26}, {40, 39, 37, 33, 41, 57, 25},
      {41, 42, 44, 48, 40, 56, 24}, {42, 41, 43, 47, 39, 55, 23},
      {43, 44, 42, 46, 38, 54, 22}, {44, 43, 41, 45, 37, 53, 21},
      {45, 46, 48, 44, 36, 52, 20}, {46, 45, 47, 43, 35, 51, 19},
      {47, 48, 46, 42, 34, 50, 18}, {48, 47, 45, 41, 33, 49, 17},
      {49, 50, 52, 56, 64, 48, 16}, {50, 49, 51, 55, 63, 47, 15},
      {51, 52, 50, 54, 62, 46, 14}, {52, 51, 49, 53, 61, 45, 13},
      {53, 54, 56, 52, 60, 44, 12}, {54, 53, 55, 51, 59, 43, 11},
      {55, 56, 54, 50, 58, 42, 10}, {56, 55, 53, 49, 57, 41, 9},
      {57, 58, 60, 64, 56, 40, 8}, {58, 57, 59, 63, 55, 39, 7},
      {59, 60, 58, 62, 54, 38, 6}, {60, 59, 57, 61, 53, 37, 5},
      {61, 62, 64, 60, 52, 36, 4}, {62, 61, 63, 59, 51, 35, 3},
      {63, 64, 62, 58, 50, 34, 2}, {64, 63, 61, 57, 49, 33, 1}};

    Union[Flatten[subsets]] == Range[64]

    True

Method 1

We set up our objective function as follows. We represent a set of 12 subsets of this "universe" set by a set of 12 integers in the range from 1 to the number of subsets (which in this example is also 64). This set is allowed to contain repetitions. Our objective function to minimize will be based on how many elements from 1 through 64 are "covered". Specifically it will be 2 raised to the #(elements not covered) power. The code below does this. As it is a problem that may require many iterations and a relatively large search space, we allow for nondefault option values for these parameters.

    f[n : {__Integer}, set_, mx_Integer] :=
      2^Length[Complement[Range[mx],
        Union[Flatten[set[[n]]]]]]

    spanningSets[set_, nsets_, iter_, sp_, cp_] := Module[
      {vars, rnges, max = Length[set], nmin, vals},
      vars = Array[xx, nsets];
      rnges = Map[(1 <= # <= max) &, vars];
      {nmin, vals} = NMinimize[
        {f[vars, set, max],
         Append[rnges, Element[vars, Integers]]},
        vars, MaxIterations -> iter,
        Method -> {DifferentialEvolution,
          SearchPoints -> sp, CrossProbability -> cp}];
      vals = Union[vars /. vals];
      {nmin, vals}]

    Timing[{min, sets} =
      spanningSets[subsets, 12, 700, 200, 0.94]]

    {196.7 Second,
     {1., {1, 15, 16, 21, 26, 30, 34, 41, 45, 54, 59, 60}}}

    Length[Union[Flatten[subsets[[sets]]]]]

    64

This example has an unfortunate drawback. It requires a nondefault value for the CrossProbability option in the DifferentialEvolution method of NMinimize. It is not entirely trivial to find useful values for such options, or even to know which such options to alter from default values. We hope to ameliorate this need in future work by providing an adaptive evolution that modifies its own parameters as it proceeds.

Method 2

This problem may also be tackled in other ways. One is to write a simple "greedy" algorithm. Experience indicates that such an approach will fail for this particular set. Another is to cast it as a standard knapsack problem. We will show how to do this. First we transform our set of subsets into a "bit vector" representation; each subset is represented by a positional list of zeros and ones.

    densevec[spvec_, len_] := Module[
      {vec = Table[0, {len}]},
      Do[vec[[spvec[[j]]]] = 1, {j, Length[spvec]}];
      vec]

    mat = Map[densevec[#, 64] &, subsets];

We now work with 0-1 variables and minimize their sum, subject to the constraint that an appropriate inner product be strictly positive.

    spanningSets2[set_, iter_, sp_, seed_, cp_:0.5] := Module[
      {vars, rnges, max = Length[set], nmin, vals},
      vars = Array[xx, max];
      rnges = Map[(0 <= # <= 1) &, vars];
      {nmin, vals} = NMinimize[
        {Apply[Plus, vars],
         Join[rnges, {Element[vars, Integers]},
           Thread[vars.set >= Table[1, {max}]]]},
        vars, MaxIterations -> iter,
        Method -> {DifferentialEvolution,
          CrossProbability -> cp,
          SearchPoints -> sp, RandomSeed -> seed}];
      vals = vars /. vals;
      {nmin, vals}]

    Timing[{min, sets} = spanningSets2[mat, 2000, 100, 0, 0.9]]

    {1930.4 Second,
     {12., {0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,
       0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0,
       0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0,
       0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1}}}

We have again obtained a result that uses 12 subsets. We check that it covers the entire range.

    Apply[Plus, Map[Min[#, 1] &, sets.mat]]

    64

We see that this method was much slower. Experience indicates that it needs a lot of iterations and careful setting of the CrossProbability option. So at present NMinimize has difficulties with this formulation. All the same it is encouraging to realize that one may readily set this up as a standard knapsack problem; over time NMinimize internals may improve to
the point where it can more efficiently handle the problem in this way.

AN ASSIGNMENT PROBLEM

Our next example is a benchmark from the literature of discrete optimization. We are given two square matrices. We want a permutation of the rows and columns of the second matrix such that, when the permuted matrix is multiplied element-wise with the first and all elements are summed, the result is minimized. The matrices we use have 25 rows; they may be found in [4]. This is known as the "NUG25" example and it is an instance of a "quadratic assignment problem" (QAP). The optimal result is known and was verified by a large parallel computation. The actual matrices are in a hidden cell below. One should recognize that the methods of handling this problem can, with minor modification, be applied to related ones such as the travelling salesman problem. In general, problems that require a "best permutation" may be amenable to the methods described below.

Method 1

Our first problem is to decide how one can make a set of values into a permutation. One approach, as with the set partition problem, is to take their Ordering. In order that this avoid collisions we thus work with real values.

    permuteMatrix[mat_, perm_] := mat[[perm, perm]]

    QAP[mat1_, mat2_, cp_, it_, sp_, sc_] := Module[
      {len = Length[mat1], obfunc, vars, vv, nmin, vals, rnges},
      vars = Array[vv, len];
      rnges = Map[{#, 0, 1} &, vars];
      obfunc[vec : {__Real}] := Apply[Plus,
        Flatten[mat1*permuteMatrix[mat2, Ordering[vec]]]];
      {nmin, vals} = NMinimize[obfunc[vars], rnges,
        MaxIterations -> it, PostProcess -> False,
        Method -> {DifferentialEvolution,
          SearchPoints -> sp, CrossProbability -> cp,
          ScalingFactor -> sc}];
      {nmin, Ordering[vars /. vals]}]

Again we face the issue that this problem requires nonstandard values for options to the DifferentialEvolution method in order to achieve a reasonable result. While this is regrettable, it is clearly better than having no recourse at all. The specific values we use were found by Brett Champion by running many short trials and noticing which parameter values seemed to work best (it was also his idea to try nondefault values in the first place).

The idea behind having CrossProbability relatively small is that we do not want many "genetic crossovers" in mating a pair of vectors. This in turn is because of the way we define a permutation. In particular it is not just values but relative values across the entire vector that give us the permutation, and thus disrupting more than a few, even when mating a pair of "good" vectors, is likely to give a "random" bad vector.

Below is the optimal permutation and the objective function value we obtain therefrom. We also show as a baseline the result from applying no permutation, and also the result of applying several random permutations. This gives some idea of how to gauge the results below.

    p = {5, 11, 20, 15, 22, 2, 25, 8, 9, 1, 18, 16, 3,
      6, 19, 24, 21, 14, 7, 10, 17, 12, 4, 23, 13};

    best = Apply[Plus, Flatten[aa*permuteMatrix[bb, p]]]

    3744

    baseline = Apply[Plus, Flatten[aa*bb]]

    4838

    randomvals = Table[
      perm = Ordering[Table[Random[], {25}]];
      Apply[Plus, Flatten[aa*permuteMatrix[bb, perm]]],
      {10}]

    {5250, 5076, 5050, 4628,
     4858, 5140, 5058, 5250, 4962, 4870}

A much more strenuous run over random permutations may be indicative of how hard it is to get good results by randomized search.

    SeedRandom[1111]

    Timing[randomvals = Table[
      perm = Ordering[Table[Random[], {25}]];
      Apply[Plus, Flatten[aa*permuteMatrix[bb, perm]]],
      {1000000}];]

    {1130.2 Second, Null}

    Min[randomvals]

    4284

We see that the baseline permutation (do nothing) and random permutations tend to be far from optimal, and even a large sampling will get us only about half way from baseline to optimal. A relatively brief run with good values for the algorithm parameters, on the other hand, yields something better still.

    SeedRandom[11111]

    Timing[{min, perm} = QAP[aa, bb, 0.06, 200, 40, 0.6]]

    {17.33 Second,
     {4066, {5, 3, 18, 11, 2, 17, 8, 9, 23, 1, 22, 10, 6,
       19, 13, 12, 20, 14, 7, 25, 24, 4, 21, 16, 15}}}

We now try a longer run.

    SeedRandom[11111]

    Timing[{min, perm} = QAP[aa, bb, 0.06, 5000, 100, 0.6]]

    {1349.22 Second,
     {3836, {5, 2, 25, 11, 24, 12, 17, 4, 18, 21, 20, 3,
       8, 14, 16, 15, 23, 9, 6, 7, 13, 22, 10, 19, 1}}}

Method 2

Once again we have a very different approach to this problem. The idea is to generate a permutation as a "shuffle" of a set of integers. We have for vector a set of integers from 1 to len, the length of the set in question. The range restriction is the only stipulation, and in particular the vector may contain repeats. We associate to it a unique permutation as follows. We initialize a "deck" to contain len zeros. The first element of deck is then set to vector[[1]].
We also have a "marker" set telling us that vector[[1]] is now "used". We iterate over subsequent elements of deck, setting them to the corresponding values in vector provided those values are not yet used. Once done with this iteration, we go through the elements that have no values, assigning them in sequence the values that have not yet been assigned.

This notion of associating a list with repeats to a distinct shuffle has a clear drawback insofar as earlier elements are more likely to be assigned their corresponding values in vector. All the same, this provides a reasonable way to make a "chromosome" vector containing repeats correspond to a permutation. Moreover, one can see that any sensible mating process of two chromosomes will less drastically alter the objective function than would be the case in Method 1, as the corresponding permutation now depends far less on overall ordering in the chromosomes. The advantage is thus that this method will be less in need of intricate crossover-probability parameter tuning.

    getPerm = Compile[{{vec, _Integer, 1}},
      Module[{p1, p2, len = Length[vec], k},
        p1 = p2 = Table[0, {len}];
        Do[k = vec[[j]];
          If[p2[[k]] == 0,
            p2[[k]] = j;
            p1[[j]] = k],
          {j, len}];
        k = 1;
        Do[If[p1[[j]] == 0,
            While[p2[[k]] != 0, k++];
            p1[[j]] = k;
            p2[[k]] = j],
          {j, len}];
        p1]];

    QAP2[mat1_, mat2_, cp_, it_, sp_] := Module[
      {len = Length[mat1], obfunc, vars, vv,
       nmin, vals, constraints},
      vars = Array[vv, len];
      constraints = Prepend[
        Map[(1 <= # <= len) &, vars],
        Element[vars, Integers]];
      obfunc[vec : {__Integer}] := Apply[Plus,
        Flatten[mat1*permuteMatrix[mat2, getPerm[vec]]]];
      {nmin, vals} = NMinimize[
        {obfunc[vars], constraints}, vars,
        Method -> {DifferentialEvolution,
          SearchPoints -> sp, CrossProbability -> cp},
        PostProcess -> False, MaxIterations -> it];
      {nmin, getPerm[vars /. vals]}]

Though we suspect that tuning is less important for this method, we nevertheless show details of a "tuning run". We take several short runs with a given crossover probability parameter value, average them, and see what values seem to work best. We then try a longer run using one such value. This approach is generally useful for problems where default settings may not be optimal.

    SeedRandom[111111]

    Timing[vals = Table[
      {j, Table[
        {min, perm} = QAP2[aa, bb, j/100, 200, 10];
        min, {5}]},
      {j, 5, 95, 5}];]

    {465.17 Second, Null}

    v2 = Map[{#[[1]], Apply[Plus, #[[2]]]/5.} &, vals]

    {{5, 4320.4}, {10, 4362.8},
     {15, 4320.8}, {20, 4292.}, {25, 4333.6},
     {30, 4328.8}, {35, 4353.2},
     {40, 4304.8}, {45, 4351.2}, {50, 4359.2},
     {55, 4342.4}, {60, 4329.2}, {65, 4312.8},
     {70, 4321.2}, {75, 4325.6}, {80, 4262.8},
     {85, 4278.}, {90, 4238.4}, {95, 4225.6}}

From this we see that large values seem to work at least slightly better than others. A similar test run over values in the range 0.9 through 0.98 indicates that 0.96 might be best for our purposes.

    SeedRandom[111111]

    {min, perm} = QAP2[aa, bb, 0.96, 20000, 200]

    {3788, {6, 3, 21, 23, 16, 12, 25, 9, 10, 2, 19, 17,
      15, 7, 20, 22, 1, 4, 5, 8, 11, 13, 18, 24, 14}}

This gets us quite close to the global minimum with a scant two dozen lines of code. While it is mildly more complicated than Method 1 above, it has one clear advantage: both intuition and practical experience indicate that it is less in need of careful tuning of relatively obscure algorithm parameters. Also note that much of the code complexity lies in associating a unique permutation to a given vector in our "search space". This part is likely to be problematic for any algorithm. The fact that we employ a genetic one makes it all the more so, because we then need it to behave well with respect to a mating process, the internals of which are a black box. Hence a method that is relatively stable with respect to tuning parameters is all the more desirable.

Using Method 1 above, Brett Champion has achieved even better results for this problem. He has found a permutation that gives a function value of 3752. But the important point is that we have obtained a quite practical heuristic improvement in the objective function relative to random values, and this idea may apply even in cases where the absolute minimizer is not already known.

What lies beyond

This example comes from a family, the hardest of which is NUG30. The salient features are that the baseline and "typical" random permutations give values around 8000, and the minimizing permutation gives 6124. With some tuning of DifferentialEvolution parameters and reasonably long runs we have obtained values within 100 of that minimum.

OTHER APPROACHES FOR KNAPSACK AND RELATED PROBLEMS

It should be noted that knapsack problems that have exclusively equality constraints can often be done much more readily using lattice reduction methods. See [4] for a simple example of how one might tackle, for example, a "subset-sum" problem using Mathematica. The set cover problem above requires inequalities and hence is not amenable to the lattice approach.

A related application of lattice reduction is to find "small" solutions to a system of linear integer equations. Details may be found in [5].
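Before summarizing, we note that the shuffle decoding of Method 2 in the assignment-problem section is language independent. The following Python transcription (ours, for illustration; the paper's version is the compiled Mathematica getPerm) makes the two-pass logic explicit for readers who wish to experiment outside Mathematica.

```python
def get_perm(vec):
    """Decode an integer vector (entries in 1..n, repeats allowed)
    into a permutation of 1..n, mirroring the two-pass shuffle."""
    n = len(vec)
    perm = [0] * n            # p1: the permutation being built
    claimed = [0] * (n + 1)   # p2: position that claimed each value
    # Pass 1: each value is assigned at its first appearance only.
    for j, k in enumerate(vec, start=1):
        if claimed[k] == 0:
            claimed[k] = j
            perm[j - 1] = k
    # Pass 2: fill the remaining positions with the still-unused
    # values, taken in increasing order.
    k = 1
    for j in range(1, n + 1):
        if perm[j - 1] == 0:
            while claimed[k] != 0:
                k += 1
            perm[j - 1] = k
            claimed[k] = j
    return perm

print(get_perm([3, 3, 1, 3, 2]))  # -> [3, 4, 1, 5, 2]
```

Every allowed chromosome decodes to a genuine permutation; repeated entries are simply deferred to the second pass, which is why mating two chromosomes perturbs the decoded permutation relatively mildly.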
SUMMARY AND FUTURE WORK

We have seen several examples of discrete optimization problems and how one might tackle them in Mathematica. The underlying method we utilize is a member of the evolutionary family of optimization algorithms, and we have seen that it is reasonably well adapted for these problems. We have discussed various ways to cast our examples so that simple Mathematica code using NMinimize may be applied. Indeed, we noticed that there are often very different ways to approach the same combinatorial optimization problem.

This uses technology presently under development, and so it is to be expected that it might improve over time. For example, we hope to make it adaptive so that one need not "tune" the parameters (in essence, they become part of the "chromosomes" and evolve). We also might allow an automated multilevel approach wherein several short runs (similar to tuning runs) are used to initialize values for a longer run. This said, it is clear from the examples that NMinimize is already a powerful tool for handling nontrivial combinatorial optimization problems.

References

[1] B. Champion (2002). Numerical optimization in Mathematica: an insider's view of NMinimize. These proceedings.
[2] C. Jacob (2001). Illustrating Evolutionary Computation with Mathematica. Morgan Kaufmann.
[3] D. Lichtblau (2001). NMinimize in Mathematica 4.2. 2001 Mathematica Developer's Conference notebook. An electronic version may be found at: http://library.wolfram.com/conferences/devconf2001/lichtblau/lichtblau.nb
[4] D. Lichtblau (2002). Usenet news group comp.soft-sys.math.mathematica communication. Archived at: http://library.wolfram.com/mathgroup/archive/2002/Feb/msg00410.html
[5] D. Lichtblau (2002). Revisiting strong Gröbner bases over Euclidean domains. Submitted.
[6] K. Price and R. Storn (1997). Differential evolution. Dr. Dobb's Journal, April 1997, pp. 18–24, 78.
[7] S. Wolfram (1999). The Mathematica Book (4th edition). Wolfram Media/Cambridge University Press.
