
Optimization of Design

Lecturer: Dung-An Wang
Lecture 12

Lecture outline
Reading: Ch. 12 of text
Today's lecture

Constrained nonlinear programming problem

Find x = (x1, . . ., xn), a design variable vector of dimension n, to minimize f = f(x) subject to equality and inequality constraints.
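In the standard form used throughout this chapter, the problem is

\[
\begin{aligned}
\text{minimize}\quad & f(\mathbf{x}) \\
\text{subject to}\quad & h_j(\mathbf{x}) = 0, \quad j = 1, \ldots, p; \\
& g_i(\mathbf{x}) \le 0, \quad i = 1, \ldots, m.
\end{aligned}
\]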

The methods described in this lecture are based on linearization of the problem about the current estimate of the optimum design. Therefore, linearization of the problem is quite important and is discussed in detail. Once the problem has been linearized, it is natural to ask if it can be solved using linear programming methods. The answer is yes, and we first describe a method that is a simple extension of the Simplex method of linear programming. Then we describe the constrained steepest-descent method.

Definitions of the status of a constraint at a design point:

Active constraint: gi(x) = 0
Inactive constraint: gi(x) < 0
Violated constraint: gi(x) > 0
ε-active inequality constraint: gi(x) < 0 but gi(x) + ε ≥ 0 for a small ε > 0
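A minimal sketch of these status checks in Python (the function and its default tolerance are illustrative; ε = 0.01 echoes the value mentioned in the next section):

```python
def constraint_status(g_value, eps=0.01):
    """Classify an inequality constraint g(x) <= 0 at a design point."""
    if g_value > 0:
        return "violated"
    if g_value == 0:
        return "active"
    # g_value < 0: satisfied, but epsilon-active if close to the boundary
    if g_value + eps >= 0:
        return "epsilon-active"
    return "inactive"
```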

12.1.3 Constraint Normalization


Usually, one value of ε (say 0.01) is used in checking the status of all of the constraints for the ε-active condition. Since different constraints involve different orders of magnitude, it is not proper to use the same ε for all of the constraints unless they are normalized.

In some cases, it may be better to use the constraints in their original form, especially the equality constraints. Thus, in numerical calculations, some experimentation with normalization of constraints may be needed for some constraint forms.
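For example (an illustrative stress constraint, not one taken from these slides), a constraint σ ≤ σa can be normalized by dividing through by the allowable value:

\[
g(\mathbf{x}) = \frac{\sigma(\mathbf{x})}{\sigma_a} - 1.0 \le 0,
\]

so that g = 0.01 means the constraint is within 1 percent of its limit regardless of the units of σ.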

12.1.4 The Descent Function


A function used to monitor progress toward the minimum is called the descent, or merit, function. The descent function also has the property that its minimum value is the same as that of the original cost function.

12.1.5 Convergence of an Algorithm


An algorithm is said to be convergent if it reaches a local minimum point starting from an arbitrary point. The feasible set is closed if all of the boundary points are included in the set; that is, there are no strict inequalities in the problem formulation.

12.2 LINEARIZATION OF THE CONSTRAINED PROBLEM


At each iteration, most numerical methods for constrained optimization compute the design change by solving a subproblem obtained by writing linear Taylor's expansions for the cost and constraint functions. Writing Taylor's expansion of the cost and constraint functions about the point x(k), we obtain the linearized subproblem.
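In terms of the design change d = x − x(k), the linearized subproblem has the standard form

\[
\begin{aligned}
\text{minimize}\quad & \bar{f} = f(\mathbf{x}^{(k)}) + \nabla f(\mathbf{x}^{(k)})^T \mathbf{d} \\
\text{subject to}\quad & h_j(\mathbf{x}^{(k)}) + \nabla h_j(\mathbf{x}^{(k)})^T \mathbf{d} = 0, \quad j = 1, \ldots, p; \\
& g_i(\mathbf{x}^{(k)}) + \nabla g_i(\mathbf{x}^{(k)})^T \mathbf{d} \le 0, \quad i = 1, \ldots, m.
\end{aligned}
\]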


NOTATION FOR THE LINEARIZED SUBPROBLEM


DEFINITION OF THE LINEARIZED SUBPROBLEM

Dropping fk from the linearized cost function gives the approximate subproblem. Note that fk is a constant, which does not affect the solution of the linearized subproblem.


EXAMPLE 12.2 DEFINITION OF A LINEARIZED SUBPROBLEM

Linearize the cost and constraint functions about the point x(0) = (1, 1) and write the approximate problem.


Function gradients


Linearized subproblem

Using Taylor's expansion, the linearized cost function at the point (1, 1) is obtained; the constraint functions are linearized in the same way.



Linearization in Terms of Original Variables

The linearized subproblem is in terms of the design changes d1 and d2. We may also write the subproblem in terms of the original variables x1 and x2. To do this, we substitute d = (x − x(0)) into all of the foregoing expressions.
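The slide equations are reproduced as images in the original, so as an illustration, here is a Python sketch of the linearization step; the cost and constraint functions below are stand-ins assumed for demonstration only, and the gradients are approximated by finite differences:

```python
import numpy as np

# Stand-in problem functions (assumed for demonstration only).
def f(x):
    return x[0]**2 + x[1]**2 - 3.0 * x[0] * x[1]

def g(x):
    return (x[0]**2 + x[1]**2) / 6.0 - 1.0   # g(x) <= 0

def grad(fun, x, h=1e-6):
    """Forward-difference approximation of the gradient."""
    f0 = fun(x)
    return np.array([(fun(x + h * e) - f0) / h for e in np.eye(len(x))])

x0 = np.array([1.0, 1.0])
c = grad(f, x0)    # coefficients of the linearized cost
a = grad(g, x0)    # gradient of the constraint

# Linearized subproblem in the design change d = x - x0:
#   minimize   c @ d                  (constant f(x0) dropped)
#   subject to g(x0) + a @ d <= 0
print("linearized cost coefficients:", c)
print("linearized constraint: %.3f + %s @ d <= 0" % (g(x0), a))
```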


Since the linearized cost function is parallel to the linearized first constraint g1, the optimum solution for the linearized subproblem is any point on the line DE.


EXAMPLE 12.3

Linearize the rectangular beam design problem formulated in Section 3.8 at the point (50, 200) mm.


Evaluation of problem functions

The problem functions are evaluated at the given point.


Evaluation of gradients
In the following calculations, we will ignore constraints g4 and g5, assuming that they will remain satisfied; that is, the design will remain in the first quadrant. The gradients of the functions are then evaluated.


Linearized subproblem


The linearized constraint functions

The linearized cost function is parallel to one of the linearized constraints. The optimum solution lies at point H, which is at the intersection of the linearized constraints. At point H, the original constraints g1 and g2 are still violated. Apparently, for nonlinear constraints, iterations are needed to correct constraint violations and reach the feasible set.

12.3 THE SEQUENTIAL LINEAR PROGRAMMING ALGORITHM

The linearized subproblem in the variables di can be solved by linear programming methods. Procedures in which linear programming is used to compute the design change are referred to as sequential linear programming, or SLP for short. The sequential linear programming algorithm is a simple and straightforward approach to solving constrained optimization problems.


12.3.1 Move Limits in SLP

To solve the LP subproblem by the standard Simplex method, the right-side parameters ei and bj must be nonnegative. Moreover, the problem may not have a bounded solution, or the changes in design may become too large, invalidating the linear approximations. Therefore, limits must be imposed on the changes in design. Such constraints are usually called move limits.
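In the notation used on the next slide, the move limits bound each design change from below and above:

\[
-\Delta_{il}^{(k)} \le d_i \le \Delta_{iu}^{(k)}, \qquad i = 1, \ldots, n,
\]

where Δil(k) and Δiu(k) are the maximum permitted decrease and increase in the ith design variable at iteration k.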


Selection of Proper Move Limits


Selecting proper move limits is of critical importance because it can mean success or failure of the SLP algorithm. The user should try different move limits if one specification leads to failure or an improper design. Usually, Δil(k) and Δiu(k) are selected as some fraction of the current design variable values (this may vary from 1 to 100 percent). If the resulting LP problem turns out to be infeasible, the move limits will need to be relaxed (i.e., larger changes in the design must be allowed).


The current values of the design variables can increase or decrease. To allow for such a change, we must treat the LP variables di as free in sign. This can be done by writing each di as the difference of two nonnegative variables, di = di+ − di− with di+ ≥ 0 and di− ≥ 0.


stopping criteria


Sequential linear programming algorithm

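The algorithm statement on this slide is an image in the original. As a sketch of the idea, one SLP iteration might look like the following in Python, built on scipy's LP solver (the arguments f_grad, gs, and g_grads, as well as the move-limit rule, are illustrative placeholders rather than code from the text; equality constraints are omitted for brevity):

```python
import numpy as np
from scipy.optimize import linprog

def slp_step(x, f_grad, gs, g_grads, move_frac=0.15):
    """One SLP iteration: linearize at x, impose move limits,
    and solve the LP for the design change d."""
    c = f_grad(x)                              # linearized cost coefficients
    A = np.array([gr(x) for gr in g_grads])    # constraint gradients
    b = -np.array([g(x) for g in gs])          # from g(x) + A @ d <= 0
    # Move limits: |d_i| <= move_frac * |x_i| (with a small floor)
    lim = move_frac * np.maximum(np.abs(x), 1e-3)
    bounds = [(-l, l) for l in lim]
    res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
    if not res.success:
        raise RuntimeError("LP infeasible; relax the move limits")
    return x + res.x
```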

EXAMPLE 12.4

Consider the problem given in Example 12.2. Define the linearized subproblem at the point (3, 3) and discuss its solution after imposing the proper move limits.


The functions and their gradients are calculated at the given point (3, 3). The given point is in the infeasible region, as the first constraint is violated.


Linearized subproblem


The subproblem has only two variables, so it can be solved using the graphical method. We may choose d1 = −1 and d2 = −1 as a solution that satisfies all of the linearized constraints (note that the linearized change in cost is 6). If 100 percent move limits are selected, then the solution to the LP subproblem must lie in the region ADEF. If the move limits are set at 20 percent of the current design variable values, the solution must satisfy −0.6 ≤ d1, d2 ≤ 0.6 (20 percent of 3).


EXAMPLE 12.5

Consider the problem given in Example 12.2. Perform one iteration of the SLP algorithm, choosing move limits such that a 15 percent design change is permissible.


Solution

The given point represents a feasible solution.


linearized subproblem

The linearized subproblem, with 15 percent move limits on the design changes d1 and d2 at the point x(0), was obtained in Example 12.2.


solve the problem using the graphical solution

Move limits of 15 percent define the solution region as DEFG. The optimum solution for the problem is at point F, where d1 = 0.15 and d2 = 0.15.


solve the problem using the Simplex method

In the linearized subproblem, the design changes d1 and d2 are free in sign. If we wish to solve the problem by the Simplex method, we must define new nonnegative variables A, B, C, and D such that d1 = A − B and d2 = C − D.


The solution to the foregoing LP problem with the Simplex method is obtained as A = 0.15, B = 0, C = 0.15, and D = 0. Therefore, d1 = A − B = 0.15 and d2 = C − D = 0.15. Since the design change (||d|| ≈ 0.21) is larger than the permissible tolerance (0.001), we need to go through more iterations to satisfy the stopping criterion.

The SLP Algorithm: Some Observations


The method should not be used as a black-box approach for engineering design problems. The selection of move limits is a matter of trial and error and is best done in an interactive mode.
The method may not converge to the precise minimum since no descent function is defined, and no line search is performed along the search direction to compute a step size.
The method can cycle between two points if the optimum solution is not a vertex of the feasible set.
The method is quite simple conceptually as well as numerically. Although it may not be possible to reach the precise local minimum point with it, it can be used to obtain improved designs in practice.


12.4 SEQUENTIAL QUADRATIC PROGRAMMING

SLP has some limitations, the major one being its lack of robustness. To overcome SLP's drawbacks, several other derivative-based methods have been developed to solve smooth nonlinear programming problems:
the gradient projection method
the feasible directions method
the generalized reduced gradient method
Sequential quadratic programming (SQP) methods are relatively new and have become quite popular as a result of their generality, robustness, and efficiency.


SQP methods
Step 1. A search direction in the design space is
calculated by utilizing the values and the gradients of
the problem functions; a quadratic programming
subproblem is defined and solved.
Step 2. A step size along the search direction is
calculated to minimize a descent function; a step size
calculation subproblem is defined and solved.


12.5 SEARCH DIRECTION CALCULATION

1. The QP subproblem is strictly convex; therefore, its minimum (if one exists) is global and unique.
2. The cost function represents an equation of a hypersphere with its center at −c (a circle in two dimensions, a sphere in three dimensions).
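The hypersphere interpretation follows by completing the square in the QP cost function (written here in the standard form with unit Hessian):

\[
\bar{f} = \mathbf{c}^T \mathbf{d} + \tfrac{1}{2}\,\mathbf{d}^T \mathbf{d}
        = \tfrac{1}{2}\,\lVert \mathbf{d} + \mathbf{c} \rVert^2 - \tfrac{1}{2}\,\lVert \mathbf{c} \rVert^2.
\]

Since the last term is a constant, the contours of the QP cost are hyperspheres centered at d = −c.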


EXAMPLE 12.6 DEFINITION OF A QP SUBPROBLEM

Linearize the cost and constraint functions about the point (1, 1) and define the QP subproblem.


Solution
The constraints for the problem are already written in normalized form.


Evaluation of functions

The cost and the constraint functions are evaluated at the point (1, 1).


Gradient evaluation

The gradients of the cost and constraint functions are evaluated.


Linearized subproblem
Substituting Eqs. (f) and (i) into Eq. (12.9) gives the linearized cost function; the linearized forms of the constraint functions can be written similarly, which defines the linearized subproblem. The constant 5 has been dropped from the linearized cost function because it does not affect the solution to the subproblem. Also, the constants 3 and 20.25 in the linearized constraints have been transferred to the right side.


QP subproblem


Comparing the LP and QP subproblems

Solution to LP subproblem

Solution to QP subproblem

Solving the QP Subproblem

EXAMPLE 12.7 SOLUTION TO THE QP SUBPROBLEM


Solution

The linearized cost function is modified to a quadratic function.


Graphical solution


Analytical solution
A numerical method must generally be used to solve the subproblem. However, since the present problem is quite simple, it can be solved by writing the KKT necessary conditions of Theorem 4.6.


12.6 THE STEP SIZE CALCULATION SUBPROBLEM


12.6.1 The Descent Function
Recall that in unconstrained optimization methods, the cost function is used as the descent function to monitor the progress of the algorithms toward the optimum point. Although the cost function can be used as a descent function with some constrained optimization methods, it cannot be used for general SQP-type methods. For most methods, the descent function is constructed by adding a penalty for constraint violations to the current value of the cost function. To determine the step size, we minimize the descent function.
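A common choice, and the one used with the CSD method described in Section 12.7, is Pshenichny's descent function: the cost plus a penalty parameter times the maximum constraint violation,

\[
\Phi(\mathbf{x}) = f(\mathbf{x}) + R\,V(\mathbf{x}), \qquad
V(\mathbf{x}) = \max\{\,0;\ |h_1|, \ldots, |h_p|;\ g_1, \ldots, g_m\,\},
\]

where R > 0 is the penalty parameter and V(x) is the maximum constraint violation at x.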


EXAMPLE 12.8 CALCULATION OF DESCENT FUNCTION

Taking the penalty parameter R as 10,000, calculate the value of the descent function at the point x(0) = (40, 0.5).


Solution

The cost and constraint functions are evaluated at the given point x(0) = (40, 0.5), and the maximum constraint violation is determined.


The descent function is then calculated.


Step Size Calculation: Line Search

Once the search direction d(k) is determined at the current point x(k), the design is updated as x = x(k) + αd(k), and the descent function becomes a function of the step size α alone. Thus, the step size calculation subproblem becomes: find α to minimize Φ(α).


EXAMPLE 12.9 CALCULATION OF THE DESCENT FUNCTION (continuing Example 12.8)

Descent function value at the initial point, α = 0:
The necessary condition of Eq. (12.33) is satisfied if we select the penalty parameter R accordingly; with this R, the descent function value at the starting point is calculated.



Descent function value at the first trial point

The maximum constraint violation at this trial point is determined.



The descent function at the trial step size α = 0.1 is evaluated (note that the value of the penalty parameter R is not changed during the step size calculation). We need to continue the process of initial bracketing of the optimum step size.


Descent function value at the second trial point

In the golden section search procedure, the next trial step size has an increment of 1.618 times the previous increment, which gives the next trial design point. The functions are then evaluated at the point (46.70, 0.618).


The minimum of the descent function has not been surpassed yet. Therefore, we need to continue the initial bracketing process; the next trial step size again uses an increment of 1.618 times the previous increment. The value of the penalty parameter R is calculated at the beginning of the line search along the search direction and is then kept fixed during all subsequent calculations for step size determination.
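A short Python sketch of this bracketing phase (phi is a user-supplied descent function Φ(α); the initial increment of 0.1 mirrors the trial step above, and the 1.618 growth factor is the golden section rule just described):

```python
def bracket_minimum(phi, delta0=0.1, growth=1.618):
    """Bracket the minimizer of the descent function phi(alpha).

    Trial step sizes grow by the golden-ratio factor until phi
    increases; assumes phi decreases initially, i.e., the search
    direction is a descent direction for phi.
    """
    alphas = [0.0, delta0]
    values = [phi(0.0), phi(delta0)]
    delta = delta0
    while values[-1] < values[-2]:
        delta *= growth                    # 1.618 x previous increment
        alphas.append(alphas[-1] + delta)
        values.append(phi(alphas[-1]))
    # The minimum lies between the third-to-last and last trial points
    return alphas[-3], alphas[-1]

# Example: bracket_minimum(lambda a: (a - 0.7)**2) returns an
# interval containing the minimizer alpha = 0.7.
```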


12.7 THE CONSTRAINED STEEPEST-DESCENT METHOD


The constrained steepest-descent (CSD) method has been proved to be convergent to a local minimum point starting from any point. It is considered a model algorithm that illustrates how most optimization algorithms work. Since the search direction is a modification of the steepest-descent direction to satisfy constraints, it is called the constrained steepest-descent direction. It is actually the direction obtained by projecting the steepest-descent direction onto the constraint hyperplane.


12.7.1 The CSD Algorithm

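The algorithm steps on this slide are images in the original. The following Python sketch outlines the CSD loop as described above (a QP subproblem for the direction, then a line search on the descent function Φ = f + R·V). It is an illustrative sketch under simplifying assumptions: inequality constraints only, a fixed penalty parameter R, and a step bounded to [0, 1], whereas the text recomputes R at each line search:

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def qp_direction(x, f_grad, gs, g_grads):
    """Solve the QP subproblem: minimize c@d + 0.5*d@d
    subject to the linearized constraints g(x) + grad_g(x)@d <= 0."""
    c = f_grad(x)
    cons = [{"type": "ineq",
             "fun": lambda d, g=g, gr=gr: -(g(x) + gr(x) @ d)}
            for g, gr in zip(gs, g_grads)]
    res = minimize(lambda d: c @ d + 0.5 * d @ d,
                   np.zeros_like(x), constraints=cons, method="SLSQP")
    return res.x

def csd(x, f, f_grad, gs, g_grads, R=10.0, tol=1e-3, max_iter=50):
    """Constrained steepest-descent iterations (illustrative)."""
    def phi(y):  # descent function: cost + penalty on max violation
        V = max(0.0, *[g(y) for g in gs])
        return f(y) + R * V

    for _ in range(max_iter):
        d = qp_direction(x, f_grad, gs, g_grads)
        if np.linalg.norm(d) <= tol:       # stopping criterion: ||d|| small
            break
        res = minimize_scalar(lambda a: phi(x + a * d),
                              bounds=(0.0, 1.0), method="bounded")
        x = x + res.x * d
    return x
```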
