
Multidisciplinary Design Optimization of Aircrafts

1st Assignment May 10, 2012

Ndilokelwa Fernandes Luis, 61017/D

Problem 1
This problem was solved using Mathematica. Refer to the Mathematica file HW1 mathematica.nb.

a.
The gradient of f(x) is

    ∇f(x) = ( 4x_1^3 - 2x_1 x_2 ,  -x_1^2 + 3x_2 ).                (1)

The Hessian of f(x) is

    H(f) = [ 12x_1^2 - 2x_2    -2x_1 ]
           [      -2x_1           3  ].                            (2)
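As a cross-check, the same expressions can be recomputed with the Matlab Symbolic Math Toolbox. The objective below is an assumption, reconstructed by integrating the gradient (1) (the actual definition lives in HW1 mathematica.nb), so the snippet is only a sketch:

    % Assumed objective, reconstructed from the gradient (1); not taken from the assignment files.
    syms x1 x2 real
    f = x1^4 - x1^2*x2 + (3/2)*x2^2;
    g = gradient(f, [x1, x2])     % expected: [4*x1^3 - 2*x1*x2; 3*x2 - x1^2]
    H = hessian(f, [x1, x2])      % expected: [12*x1^2 - 2*x2, -2*x1; -2*x1, 3]
    subs(g, [x1, x2], [0, 0])     % the gradient vanishes at the candidate point (0, 0)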

b.
First, we verify the necessary and sufficient conditions for x* = (0, 0) to be a local minimizer.

First-order necessary condition. f, being a bivariate polynomial, is C^∞ in R^2 and, therefore, C^1 in an open neighbourhood of x*. Moreover, ∇f(x*) = 0, so x* is a stationary point and a candidate local minimizer.

Second-order necessary condition. Let us first check whether H(f(x*)) =: H is positive semi-definite. y^T H y = 3y_2^2 >= 0 for all y = (y_1, y_2) in R^2, so H is positive semi-definite. f is a smooth function and therefore C^2 in an open neighbourhood of x*. Since ∇f(x*) = 0 and H(f(x*)) is positive semi-definite, x* satisfies the second-order necessary conditions for a local minimizer.

Second-order sufficient condition. Let us check whether H(f(x*)) is positive definite. y^T H y = 3y_2^2 > 0 for all y != 0 in R^2, so H is positive definite. f is a smooth function, as stated above. Since ∇f(x*) = 0 and H(f(x*)) is positive definite, x* is a strict local minimizer.

Conclusion. x* is the only local minimizer because it is the only solution to ∇f(x) = 0. Being a strict local minimizer and the only local minimizer, it is, of course, a global minimizer.
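The uniqueness claim in the conclusion can be made explicit by solving the stationarity conditions directly (a short worked derivation, using the gradient (1) above):

    ∂f/∂x_2 = -x_1^2 + 3x_2 = 0                                ⟹  x_2 = x_1^2 / 3,
    ∂f/∂x_1 = 4x_1^3 - 2x_1 x_2 = 4x_1^3 - (2/3)x_1^3
            = (10/3)x_1^3 = 0                                   ⟹  x_1 = 0, hence x_2 = 0,

so x* = (0, 0) is the only stationary point of f.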

c.
The solution to this problem is presented in figure 2a.

Figure 1: Graphical representations of f. (a) Surface plot of f(x). (b) Contour plot of f(x) in the square {(x_1, x_2) : |x_1| <= 1, |x_2| <= 1}.

In figure 1a it is possible to observe a 3-D representation of the function f(x) in the square I_1 = [-1, 1] × [-1, 1]. As we can see from it and from figure 1b, the minimum must be located in a neighbourhood of (x_1, x_2) = (0, 0). Figure 2a presents a zoom in on that neighbourhood where, although still not totally clear, it can be inferred that the minimum is located at the centre of the square that defines the domain of this plot, I_2 = [-0.5, 0.5] × [-0.5, 0.5].

Figure 2: Graphical depictions of f and ∇f. (a) Contour plot of f(x) in the square {(x_1, x_2) : |x_1| <= 0.5, |x_2| <= 0.5}. (b) Gradient of f(x).

Observing the gradient of f depicted in figure 2b, the arrows point outwards from the centre of the square defined by I_2, i.e. in the direction of increase of the function. Thus, unequivocally, the minimum is at (x_1, x_2) = (0, 0).
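Figures 1 and 2 can be reproduced with a short script along the following lines. The plots in the report were produced in Mathematica; this Matlab sketch again uses the reconstructed objective assumed above, so it is for illustration only:

    % Contour plot and gradient field of the assumed f on I2 = [-0.5, 0.5] x [-0.5, 0.5].
    f  = @(x1, x2) x1.^4 - x1.^2.*x2 + 1.5*x2.^2;   % assumed objective
    g1 = @(x1, x2) 4*x1.^3 - 2*x1.*x2;              % df/dx1
    g2 = @(x1, x2) -x1.^2 + 3*x2;                   % df/dx2
    [X1, X2] = meshgrid(linspace(-0.5, 0.5, 21));
    contour(X1, X2, f(X1, X2), 30); hold on
    quiver(X1, X2, g1(X1, X2), g2(X1, X2))          % arrows point in the direction of increase
    plot(0, 0, 'k.'); hold off
    xlabel('x_1'); ylabel('x_2')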

Problem 2
a.
This problem was solved using Mathematica. Refer to the Mathematica file HW1 mathematica.nb. f'(x) = 0 has three solutions: x_1 = -2.3519, x_2 = 0.2532 and x_3 = 2.0987. These are the candidates to be the minimum of the function. In fact, f(x_1) < f(x_3) < f(x_2).

First-order necessary condition. Because f(x) is a polynomial, it is a smooth function and, therefore, C^1 in R. The three points x_i, i = 1, 2, 3 determined above are all stationary points, because f is C^1 in an open neighbourhood of each x_i and

    f'(x_1) = f'(x_2) = f'(x_3) = 0.

So, as stated above, x_i, i = 1, 2, 3 are all candidates to be minimizers of f.

Second-order necessary condition. f(x) is smooth and, therefore, C^2 in R. Let us check whether the second derivative is non-negative at the candidate points, i.e. whether f''(x_i) >= 0:

    f''(x_1) = 4.6378.  f is C^2 in an open neighbourhood of x_1 and f''(x_1) > 0, thus x_1 is a local minimizer.
    f''(x_2) = -1.9230. f is C^2 in an open neighbourhood of x_2, but f''(x_2) < 0, thus x_2 is not a local minimizer.
    f''(x_3) = 3.2853.  f is C^2 in an open neighbourhood of x_3 and f''(x_3) > 0, thus x_3 is a local minimizer.

Second-order sufficient condition. f(x) is smooth and, therefore, C^2 in R. At this point we would check whether f''(x_i) > 0 at the candidate points, but we already know this holds for x_1 and x_3, and x_2 was eliminated as a candidate because it does not satisfy the second-order necessary condition. Thus x_1 and x_3, being local minimizers with positive second derivative, are both strict local minimizers. Moreover, because f(x_1) < f(x_3), x_1 is the global minimizer.

b.
The proposed optimization algorithms were implemented in a single Matlab function, hwork1, which takes no arguments. With the file hwork1.m in the active folder, type hwork1 in the Matlab command window and follow the instructions. Function f(x) was minimized using the proposed algorithms and the results are presented in Tables 1 through 4. In these tables, the methods are Fibonacci (F), Golden Section (G), Bisection (B), Secant (S) and Newton (N). The remaining nomenclature is INIT for the initial conditions, NITER for the number of iterations, and fCALLS for the number of function calls in (F) and (G) and the number of derivative calls for the other methods.

Tables 1 and 2 show the results of the minimization of f with accuracies and tolerances equal to 10^{-4} and 10^{-6}, respectively. The main finding regarding the difference between the two orders of magnitude is that there is no improvement in the solution for any method. There is also no significant increase in the number of iterations or of function calls, except for the Fibonacci and Bisection methods. The times taken by the minimizations were roughly the same, except for some anomalies which we believe are due to other computer processes running concurrently with the minimizations. Most likely, a difference of only two orders of magnitude was not sufficient to produce a significant variation in the results. Tables 3 and 4 present a similar study with the same pair of tolerances and accuracies but with a wider search interval and larger (in absolute value) initial conditions. Again, the difference in orders of magnitude causes no significant variation in the results.
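For reference, the derivative-based updates compared in these tables are the standard ones: Bisection halves a bracket containing a sign change of f'(x), while the Secant and Newton iterations are

    x_{k+1} = x_k - f'(x_k) (x_k - x_{k-1}) / (f'(x_k) - f'(x_{k-1}))    (Secant),
    x_{k+1} = x_k - f'(x_k) / f''(x_k)                                   (Newton),

which is why the tables count derivative calls rather than function evaluations for these three methods.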

Method  INIT            x*       f(x*)    NITER  fCALLS  Time [s]
(F)     [-1,1]          -0.9998  -1.3996   19     20      0.082
(G)     x0 = -1, s = 1  -2.6180  -3.4652    2      7      0.001
(G)     x0 =  0, s = 1   2.6180  -0.8472    2      7      0.009
(G)     x0 =  1, s = 1   2.0000  -1.4000    2      5      0.001
(B)     [-1,1]           0.2533   0.0629   14     16      6.214
(S)     x0 = -1          0.2532   0.0629    4      6      0.043
(S)     x0 =  0          0.2532   0.0629    3      5      0.051
(S)     x0 =  1          0.2532   0.0629    4      6      0.040
(N)     x0 = -1          0.2532   0.0629    4      6      0.041
(N)     x0 =  0          0.2532   0.0629    3      5      0.039
(N)     x0 =  1          0.2532   0.0629    4      6      0.040

Table 1: Minimization of f(x) with the initial condition being either a point in the interval [-1, 1] or the interval itself. Accuracies and tolerances were all equal to 10^{-4} for all methods.

Method  INIT            x*       f(x*)    NITER  fCALLS  Time [s]
(F)     [-1,1]          -1.0000  -1.4000   29     30      0.007
(G)     x0 = -1, s = 1  -2.6180  -3.4652    2      7      0.001
(G)     x0 =  0, s = 1   2.6180  -0.8472    2      7      0.001
(G)     x0 =  1, s = 1   2.0000  -1.4000    2      5      0.001
(B)     [-1,1]           0.2532   0.0629   14     22      0.040
(S)     x0 = -1          0.2532   0.0629    4      6      0.041
(S)     x0 =  0          0.2532   0.0629    4      6      0.041
(S)     x0 =  1          0.2532   0.0629    5      7      0.040
(N)     x0 = -1          0.2532   0.0629    4      6      0.039
(N)     x0 =  0          0.2532   0.0629    4      6      0.039
(N)     x0 =  1          0.2532   0.0629    5      7      0.041

Table 2: Minimization of f(x) with the initial condition being either a point in the interval [-1, 1] or the interval itself. Accuracies and tolerances were all equal to 10^{-6} for all methods.

We now address specific results in Tables 2 and 4 (not because they are preferred, but as representative of each pair of tables). Bisection, Secant and Newton tend to find the stationary point closest to the starting point. Unlike Bisection, Fibonacci finds the global minimum, with a marginal error, provided the search interval is wide enough to encompass it. The Golden Section method is clearly not adequate for the proposed problem or, at this point we can only presume, has some unidentified error in its programming. Regarding performance, the Fibonacci method is the worst, with the highest counts of iterations and function calls, followed by the Bisection method; all other methods perform roughly the same, with very low counts of iterations and function calls.

Method  INIT            x*       f(x*)    NITER  fCALLS  Time [s]
(F)     [-5,5]          -2.3516  -3.6477   19     20      0.002
(G)     x0 = -5, s = 5  -1.9098  -3.2720    3      6      0.003
(G)     x0 =  0, s = 5  -3.0902  -1.9756    2      5      0.000
(G)     x0 =  5, s = 5  -3.0902  -1.9756    2      7      0.001
(B)     [-5,5]          -2.3519  -3.6477   17     19      0.040
(S)     x0 = -5         -2.3519  -3.6477    9     11      0.045
(S)     x0 =  5          2.0987  -1.4150    9     11      0.040
(N)     x0 = -5         -2.3519  -3.6477    9     11      0.040
(N)     x0 =  5          2.0987  -1.4152    9     11      0.051

Table 3: Minimization of f(x) with the initial condition being either a point in the interval [-5, 5] or the interval itself. Accuracies and tolerances were all equal to 10^{-4} for all methods.

Method  INIT            x*       f(x*)    NITER  fCALLS  Time [s]
(F)     [-5,5]          -2.3516  -3.6477   29     30      0.013
(G)     x0 = -5, s = 5  -1.9098  -3.2720    3      6      0.003
(G)     x0 =  0, s = 5  -3.0902  -1.9756    2      5      0.009
(G)     x0 =  5, s = 5  -3.0902  -1.9756    2      7      0.001
(B)     [-5,5]          -2.3519  -3.6477   24     26      0.041
(S)     x0 = -5         -2.3519  -3.6477   10     12      0.045
(S)     x0 =  5          2.0987  -1.4152   10     12      0.041
(N)     x0 = -5         -2.3519  -3.6477   10     12      0.040
(N)     x0 =  5          2.0987  -1.4152   10     12      0.040

Table 4: Minimization of f(x) with the initial condition being either a point in the interval [-5, 5] or the interval itself. Accuracies and tolerances were all equal to 10^{-6} for all methods.
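Given the suspect Golden Section results above, a minimal reference implementation of the method is sketched below. This is a generic textbook version on a fixed bracket [a, b], not the hwork1.m code (which starts from x0 and a step s), so it is only meant as a check of the interval-reduction logic:

    % Generic golden-section minimization on [a, b] -- a reference sketch only.
    function xmin = golden_section(f, a, b, tol)
        tau = (sqrt(5) - 1) / 2;                % inverse golden ratio, ~0.618
        x1 = b - tau*(b - a);  f1 = f(x1);
        x2 = a + tau*(b - a);  f2 = f(x2);
        while (b - a) > tol
            if f1 < f2                          % minimum lies in [a, x2]
                b = x2;  x2 = x1;  f2 = f1;
                x1 = b - tau*(b - a);  f1 = f(x1);
            else                                % minimum lies in [x1, b]
                a = x1;  x1 = x2;  f1 = f2;
                x2 = a + tau*(b - a);  f2 = f(x2);
            end
        end
        xmin = (a + b) / 2;
    end

For example, golden_section(@(x) (x - 2).^2, -5, 5, 1e-4) should return a value close to 2.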

Problem 3
a.
The proposed optimization algorithms were implemented in the same Matlab function as in Problem 2. With the file hwork1.m in the active folder, type hwork1 in the Matlab command window and follow the instructions. Function f(x) was minimized using the proposed algorithms and the results are presented in Table 5. In this table, the methods are Conjugate Gradient (CG), Steepest Descent (SD), quasi-Newton with DFP update (ND), quasi-Newton with BFGS update (NB), and Newton (Nt). The line search methods (LS), where used, are the 1-D methods from Problem 2. The remaining nomenclature is NITER for the number of iterations, as before, fCALLS for the number of function calls and gCALLS for the number of gradient calls.

Method  LS   x*                  f(x*)   NITER  fCALLS  gCALLS  Time [s]
(CG)    (F)  ( 0.0002,  0.0000)  0.0000    4      4       4      0.259
(CG)    (G)  (-0.0009,  0.0000)  0.0000   20     20      15      1.128
(CG)    (B)  (-0.0000,  0.0000)  0.0000   13     13      10     75.330
(CG)    (S)  (-0.0000,  0.0000)  0.0000   12     12      10      1.389
(CG)    (N)  ( 0.0000,  0.0000)  0.0000    3      3       3      0.343
(SD)    (F)  (-0.0000,  0.0000)  0.0000    2      5       3      0.199
(SD)    (G)  ( 0.0022, -0.0022)  0.0000    3      9       4      0.302
(SD)    (B)  (-0.0000, -0.0000)  0.0000    3      9       4      0.552
(SD)    (S)  (-0.0000,  0.0000)  0.0000    3      9       4      0.586
(SD)    (N)  ( 0.0000, -0.0000)  0.0000    1      3       2      0.223
(ND)    (F)  (-0.0001,  0.0000)  0.0000    3      1       4      0.257
(ND)    (G)  (-0.0010,  0.0000)  0.0000   25      1      26      1.521
(ND)    (B)  ( 0.0000,  0.0000)  0.0000    3      1       4      0.464
(ND)    (S)  ( 0.0000,  0.0000)  0.0000    3      1       4      0.440
(ND)    (N)  ( 0.0000,  0.0000)  0.0000    2      1       3      0.398
(NB)    (F)  (-0.0015,  0.0000)  0.0000   16      1      17      1.229
(NB)    (G)  (-0.0005,  0.0000)  0.0000    3      1       4      0.271
(NB)    (B)  ( 0.0000,  0.0000)  0.0000    3      1       4      0.459
(NB)    (S)  ( 0.0000,  0.0000)  0.0000    3      1       4      0.434
(NB)    (N)  ( 0.0000,  0.0000)  0.0000    2      1       3      0.412
(Nt)    --   (-0.0116,  0.0000)  0.0000   13      1      12      0.140

Table 5: Minimization of f(x) with initial condition x_0 = (1, 1). Accuracies and tolerances were all equal to 10^{-6} for the n-D algorithms and 10^{-4} for the 1-D algorithms.

From Table 5 we note that all methods with line search perform very well on the problem at hand. The Newton method, on the other hand, performs worst, mainly because of the behaviour of the function near the minimum (where the Hessian becomes singular). Almost all combinations of n-D method and line search method perform well. The Golden Section line search shows the worst results, with a larger error in x*, a higher number of iterations and a higher number of function calls on average, except in the case of the BFGS update, where the worst performance is observed with the Fibonacci line search.
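To make the pairing of an n-D method with a 1-D line search concrete, the following is a minimal sketch of the outer loop, using the steepest-descent direction for simplicity; it is not the hwork1.m interface. Each outer iteration builds the one-dimensional restriction phi(alpha) = f(x_k + alpha p_k) and hands it to any of the 1-D minimizers from Problem 2, here the golden_section sketch above with an assumed step bracket [0, 1]:

    % Generic "n-D method + 1-D line search" skeleton (steepest-descent direction).
    % f and gradf are function handles; any scalar minimizer can play the line-search role.
    function x = descent_with_linesearch(f, gradf, x, tol, maxit)
        for k = 1:maxit
            g = gradf(x);
            if norm(g) < tol, break, end
            p = -g;                                   % steepest-descent direction
            phi = @(alpha) f(x + alpha*p);            % 1-D restriction of f along p
            alpha = golden_section(phi, 0, 1, 1e-4);  % assumed bracket; any 1-D method fits here
            x = x + alpha*p;
        end
    end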

b.
The attempted solutions to this problem are presented in Tables 6 through 10. The Rosenbrock function has a global minimum at (x_1, x_2) = (1, 1).

LS   x_0          x*                  f(x*)   NITER  fCALLS  gCALLS  Time [s]
(F)  (-1.0, 1.0)  NC
(F)  ( 0.0, 0.0)  NC
(F)  ( 1.0,-1.0)  NC
(G)  (-1.0, 1.0)  (-0.8857, 0.7477)  3.6914   723    719     497     33.030
(G)  ( 0.0, 0.0)  ( 0.9276, 0.8602)  0.0052   738    737     501     34.290
(G)  ( 1.0,-1.0)  ( 1.0416, 1.0853)  0.0017   629    600     472     28.470
(B)  (-1.0, 1.0)  NC
(B)  ( 0.0, 0.0)  NC
(B)  ( 1.0,-1.0)  NC
(S)  (-1.0, 1.0)  (-1.0019, 0.9856)  4.0402   561    511     460     49.980
(S)  ( 0.0, 0.0)  ( 0.1685,-0.0025)  0.7865   561    511     460     50.430
(S)  ( 1.0,-1.0)  (-0.1564,-0.0232)  1.5649   561    511     460     51.140
(N)  (-1.0, 1.0)  NC
(N)  ( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000   150    150     119     21.570
(N)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000   546    546     426     84.620

Table 6: Test with the Rosenbrock function using the Conjugate Gradient method. Accuracies and tolerances were all equal to 10^{-6} for the n-D algorithms and 10^{-4} for the 1-D algorithms.

The Conjugate Gradient method (Table 6) converged to the global minimum only with Newton as the line search method. With Golden Section, a solution near the global minimum was obtained. All other line search methods produced worse solutions or no solution at all (NC stands for no convergence). Steepest Descent (Table 7) minimizes the Rosenbrock function with Bisection and Newton as line searches; remarkably, all other line search methods produce results with only marginal errors. Both quasi-Newton methods, with the DFP (Table 8) and BFGS (Table 9) updates, work very well with all line search methods and reach the global minimum of the Rosenbrock function. The Newton method (Table 10) also works very well in the pursuit of the Rosenbrock global minimum. Note that, although not explicitly stated, the initial condition is an important factor in the solution obtained: sometimes the same method combined with the same line search minimizes the function correctly from one starting point but fails from another.
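For reference, the Rosenbrock function used in Tables 6 through 10, together with the gradient and Hessian the methods above require, is the standard two-dimensional form (a = 1, b = 100); the expressions below are the textbook ones, not copied from hwork1.m:

    % Rosenbrock function, gradient and Hessian (standard form).
    rosen  = @(x) 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
    grosen = @(x) [-400*x(1)*(x(2) - x(1)^2) - 2*(1 - x(1)); 200*(x(2) - x(1)^2)];
    hrosen = @(x) [1200*x(1)^2 - 400*x(2) + 2, -400*x(1); -400*x(1), 200];
    % Sanity check at the global minimum (1, 1):
    rosen([1; 1])    % 0
    grosen([1; 1])   % [0; 0]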

LS   x_0          x*                  f(x*)   NITER  fCALLS  gCALLS  Time [s]
(F)  (-1.0, 1.0)  ( 0.9844, 0.9690)  0.0002  1633   3269    1634     93.460
(F)  ( 0.0, 0.0)  ( 0.9908, 0.9816)  0.0001   826   1655     827     52.560
(F)  ( 1.0,-1.0)  ( 0.9959, 0.9918)  0.0000  2750   5503    2751    159.400
(G)  (-1.0, 1.0)  ( 0.9138, 0.9484)  0.0007   141   4635    2317      8.771
(G)  ( 0.0, 0.0)  ( 1.0271, 1.0551)  0.0007     7     17       8      0.624
(G)  ( 1.0,-1.0)  ( 1.0449, 1.0921)  0.0020   194    391     195     12.070
(B)  (-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000     3      9       4      0.446
(B)  ( 0.0, 0.0)  ( 0.9935, 0.9871)  0.0000   136    275     137     13.270
(B)  ( 1.0,-1.0)  ( 1.0029, 1.0058)  0.0000   883   1769     884    103.300
(S)  (-1.0, 1.0)  ( 0.9979, 0.9958)  0.0000  1185   2373    1186    128.500
(S)  ( 0.0, 0.0)  ( 1.0030, 1.0061)  0.0000   111    225     112     12.210
(S)  ( 1.0,-1.0)  ( 0.9970, 0.9941)  0.0000  1038   2079    1039    122.400
(N)  (-1.0, 1.0)  ( 0.9763, 0.9531)  0.0006  2316   4635    2317    344.900
(N)  ( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000   109    219     110     15.840
(N)  ( 1.0,-1.0)  ( 0.9763, 0.9531)  0.0006  2165   4333    2166    340.000

Table 7: Test with the Rosenbrock function using the Steepest Descent method. Accuracies and tolerances were all equal to 10^{-6} for the n-D algorithms and 10^{-4} for the 1-D algorithms.

LS   x_0          x*                  f(x*)   NITER  gCALLS  Time [s]
(F)  (-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000    58     59      3.368
(F)  ( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000    38     39      2.254
(F)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000  3477   3478    196.400
(G)  (-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000    36     37      1.899
(G)  ( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000    15     16      0.837
(G)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000   433    434     24.260
(B)  (-1.0, 1.0)  NC
(B)  ( 0.0, 0.0)  NC
(B)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000    25     26      3.556
(S)  (-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000     3      4      0.356
(S)  ( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000    16     17      1.608
(S)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000    28     29      2.866
(N)  (-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000     1      2      0.198
(N)  ( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000    14     15      1.982
(N)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000    25     26      3.595

Table 8: Test with the Rosenbrock function using the quasi-Newton DFP method. Accuracies and tolerances were all equal to 10^{-6} for the n-D algorithms and 10^{-4} for the 1-D algorithms.

LS   x_0          x*                  f(x*)   NITER  gCALLS  Time [s]
(F)  (-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000    38     39      2.104
(F)  ( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000    18     19      1.038
(F)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000  3773   3774    228.100
(G)  (-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000     9     10      0.720
(G)  ( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000    22     23      1.626
(G)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000    38     39      2.751
(B)  (-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000     4      5      0.798
(B)  ( 0.0, 0.0)  NC
(B)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000    20     21      2.811
(S)  (-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000     3      4      0.359
(S)  ( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000    18     19      1.792
(S)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000    28     29      2.884
(N)  (-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000     1      2      0.200
(N)  ( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000    14     15      1.975
(N)  ( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000    26     27      3.698

Table 9: Test with the Rosenbrock function using the quasi-Newton BFGS method. Accuracies and tolerances were all equal to 10^{-6} for the n-D algorithms and 10^{-4} for the 1-D algorithms.

x_0          x*                  f(x*)   NITER  gCALLS  Time [s]
(-1.0, 1.0)  ( 1.0000, 1.0000)  0.0000    2      3      0.479
( 0.0, 0.0)  ( 1.0000, 1.0000)  0.0000    2      3      0.215
( 1.0,-1.0)  ( 1.0000, 1.0000)  0.0000    1      2      0.123

Table 10: Test with the Rosenbrock function using the Newton method. Accuracies and tolerances were all equal to 10^{-6} for the n-D algorithms and 10^{-4} for the 1-D algorithms.

Regarding the Golden Section method: when used as a line search it produces mostly very good to excellent results. Thus it is most likely that Problem 2 is simply not well suited to this method, or that other initial conditions should be considered, although we should still not rule out a glitch in its programming which the gradient-based n-D algorithms, however unlikely that is, manage to bypass. We also noticed that some of the methods were implemented with low precision, but we did not pursue that analysis further.
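For completeness, the DFP and BFGS updates compared above are the standard formulas for the approximation B_k to the inverse Hessian, with s_k = x_{k+1} - x_k and y_k = ∇f(x_{k+1}) - ∇f(x_k):

    B_{k+1}^{DFP}  = B_k + (s_k s_k^T)/(s_k^T y_k) - (B_k y_k y_k^T B_k)/(y_k^T B_k y_k),

    B_{k+1}^{BFGS} = (I - (s_k y_k^T)/(y_k^T s_k)) B_k (I - (y_k s_k^T)/(y_k^T s_k)) + (s_k s_k^T)/(y_k^T s_k).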

