
3.2 DIRECT-SEARCH METHODS

In this and the following sections of this chapter we consider the dynamic question for functions of several variables. That is, we examine methods or algorithms that iteratively produce estimates of x*, the set of design variables that causes f(x) to take on its minimum value. The methods work equally well for maximization by letting the objective be -f(x). The methods that have been devised for the solution of this problem can be classified into three broad categories based on the type of information that must be supplied by the user:

1. Direct-search methods, which use only function values
2. Gradient methods, which require estimates of the first derivative of f(x)
3. Second-order methods, which require estimates of the first and second derivatives of f(x)

We will consider examples from each class, since no one method or class of methods can be expected to solve all problems with uniform efficiency. For instance, in some applications available computer storage is limited; in others, function evaluations are very time consuming; in still others, great accuracy in the final solution is desired. In some applications it is either impossible or else very time consuming to obtain analytical expressions for derivatives. In addition, analytical derivative development is prone to error. Consequently, numerical approximations of the derivatives must be employed if gradient-based techniques are to be used. This in turn may mean considerable experimentation to determine step sizes that strike the proper balance between roundoff and truncation errors, as the sketch below illustrates. Clearly it behooves the engineer to tailor the method used to the characteristics of the problem at hand.
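The roundoff/truncation trade-off is easy to demonstrate numerically. The following Python fragment is our illustration, not part of the text: it estimates the derivative of f(x) = e^x at x = 1 by a forward difference for several step sizes h. The test function and the particular values of h are arbitrary choices made for the demonstration.

    import math

    def forward_diff(f, x, h):
        """Forward-difference estimate of f'(x) with step size h."""
        return (f(x + h) - f(x)) / h

    exact = math.exp(1.0)  # f(x) = e^x, so f'(1) = e

    for h in (1e-1, 1e-4, 1e-8, 1e-12):
        approx = forward_diff(math.exp, 1.0, h)
        print(f"h = {h:.0e}  estimate = {approx:.10f}  error = {abs(approx - exact):.2e}")

The error first decreases as h shrinks (truncation error is proportional to h) and then grows again (roundoff error is proportional to 1/h), so an intermediate step size is best; this is exactly the balance the text refers to.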

Methods for the unconstrained minimization problem are relatively well developed compared with those for the other areas of nonlinear programming. In fact, excellent surveys of the most powerful methods do exist [13]; the books by Murray [1] and Fletcher [3] are good examples. In this section we treat direct methods, which require only values of the objective to proceed, and in the following section we discuss gradient and second-order methods. We assume here that f(x) is continuous and that ∇f(x) may or may not exist but certainly is not available. In addition, we recognize that these direct methods can be used on problems where ∇f does exist, and often are when ∇f is a complex vector function of the design variables. Finally, in this and the following sections we assume that f(x) has a single minimum in the domain of interest. When these methods are applied to multimodal functions, we must be content to locate local minima. Direct-search methods are especially useful when there is noise or uncertainty in the objective function evaluation.

Multivariable methods that employ only function values to guide the search for the optimum can be classified broadly into heuristic techniques and theoretically based techniques. The heuristic techniques, as the name implies, are search methods constructed from geometric intuition, for which no performance guarantees other than empirical results can be stated. The theoretically based techniques, on the other hand, have a mathematical foundation that allows performance guarantees, such as convergence, to be established, at least under restricted conditions. We will examine three direct-search techniques in particular detail:

1. Simplex search, or S², method
2. Hooke-Jeeves pattern search method
3. Powell's conjugate direction method

The first two are heuristic techniques that exhibit fundamentally different strategies: the S² method sequentially employs a regular pattern of points in the design space, whereas the Hooke-Jeeves method employs a fixed set of directions (the coordinate directions) in a recursive manner, as the sketch below illustrates. Powell's method is a theoretically based technique that was devised assuming a quadratic objective.
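To make the Hooke-Jeeves strategy concrete, here is a minimal Python sketch of a pattern search built from its two characteristic moves: exploratory moves along the coordinate directions and pattern moves that extrapolate along the improving direction. The function names, the initial step size, the shrink factor, and the termination rule are our illustrative choices; the book's algorithm statement may differ in detail.

    import numpy as np

    def exploratory_move(f, x, fx, step):
        """Perturb each coordinate by +/- step, keeping any improvement."""
        x = x.copy()
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x, fx

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-6):
        """Minimal Hooke-Jeeves pattern search (illustrative sketch)."""
        base = np.asarray(x0, dtype=float)
        f_base = f(base)
        while step > tol:
            x, fx = exploratory_move(f, base, f_base, step)
            if fx < f_base:
                # Success: repeat pattern moves along the improving direction.
                while True:
                    pattern = x + (x - base)
                    base, f_base = x, fx
                    x, fx = exploratory_move(f, pattern, f(pattern), step)
                    if fx >= f_base:
                        break
            else:
                step *= shrink  # Failure: reduce the step size and retry.
        return base, f_base

    # Usage: minimize a quadratic with minimum at (1, 2).
    xmin, fmin = hooke_jeeves(lambda x: (x[0] - 1)**2 + 2*(x[1] - 2)**2, [0.0, 0.0])
    print(xmin, fmin)

The coordinate directions stay fixed throughout; the pattern move x + (x - base) merely accelerates progress along a promising direction, and the step size is reduced only when no coordinate move yields an improvement.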
