
Operations Research II

Spring, 2009

General Form of Nonlinear Programming Problems
Max f(x)
S.T. gi(x) ≤ bi, for i = 1, 2, …, m
x ≥ 0
No algorithm is available that will solve every specific problem fitting this format.

An Example: The Product-Mix Problem with Price Elasticity
The amount of a product that can be sold has an inverse relationship to the price charged; that is, demand falls along a curve as price rises.

The firm's profit from producing and selling x units is the sales revenue xp(x) minus the production costs. That is, P(x) = xp(x) − cx. If each of the firm's n products has a similar profit function, say, Pj(xj) for producing and selling xj units of product j, then the overall objective function is

f(x) = Σ_{j=1}^{n} Pj(xj)
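As a small illustration of one such profit term, the sketch below uses a hypothetical linear demand curve p(x) = 20 − 0.5x and a unit production cost c = 4 (both numbers are illustrative assumptions, not from the lecture):

```python
# Hypothetical demand curve and unit cost (illustrative assumptions)
p = lambda x: 20 - 0.5 * x          # price at which x units can be sold
c = 4.0                             # unit production cost

# Profit: sales revenue x*p(x) minus production cost c*x
P = lambda x: x * p(x) - c * x      # = 16x - 0.5x^2

# P'(x) = 16 - x = 0 gives the profit-maximizing quantity x = 16
best_x = max(range(0, 41), key=P)
print(best_x, P(best_x))            # 16 128.0
```

Note that P(x) is nonlinear in x even though the demand curve is linear, which is exactly why this product-mix problem becomes a nonlinear program.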

Jin Y. Wang

Chap12-1


An Example: The Transportation Problem with Volume Discounts
Determine an optimal plan for shipping goods from various sources to various destinations, given supply and demand constraints. In actuality, the shipping costs may not be fixed: volume discounts sometimes are available for large shipments, which results in a piecewise linear cost function.

Graphical Illustration of Nonlinear Programming Problems
Max Z = 3x1 + 5x2
S.T. x1 ≤ 4
9x1² + 5x2² ≤ 216
x1, x2 ≥ 0

The optimal solution is no longer a CPF solution (sometimes it is; sometimes it isn't), but it still lies on the boundary of the feasible region. Consequently, we no longer have the tremendous simplification used in LP of limiting the search for an optimal solution to just the CPF solutions. What if the constraints are linear, but the objective function is not?


Max Z = 126x1 − 9x1² + 182x2 − 13x2²
S.T. x1 ≤ 4
2x2 ≤ 12
3x1 + 2x2 ≤ 18
x1, x2 ≥ 0

The optimal solution lies inside the feasible region, so we cannot focus only on the boundary; we need to look at the entire feasible region. To complicate matters further, a local optimum need not be a global optimum.


Nonlinear programming algorithms generally are unable to distinguish between a local optimum and a global optimum. It is therefore desirable to know the conditions under which any local optimum is guaranteed to be a global optimum. If a nonlinear programming problem has no constraints, the objective function being concave (convex) guarantees that a local maximum (minimum) is a global maximum (minimum). What is a concave (convex) function? A function that is always curving downward (or not curving at all) is called a concave function.

A function that is always curving upward (or not curving at all) is called a convex function.

Definition of concave and convex functions of a single variable
A function of a single variable f(x) is a convex function if, for each pair of values of x, say x' and x'' (x' < x''),

f[λx'' + (1 − λ)x'] ≤ λf(x'') + (1 − λ)f(x')

for all values of λ such that 0 < λ < 1. It is a strictly convex function if ≤ can be replaced by <. It is a concave function if this statement holds when ≤ is replaced by ≥ (by > for the strictly concave case).


How to judge whether a single-variable function is convex or concave
Consider any function of a single variable f(x) that possesses a second derivative at all possible values of x. Then f(x) is convex if and only if d²f(x)/dx² ≥ 0 for all possible values of x (and concave if and only if d²f(x)/dx² ≤ 0 for all possible values of x).

How to judge whether a two-variable function is convex or concave
If the derivatives exist, the following table can be used to determine whether a two-variable function is convex or concave (each condition must hold for all possible values of x1 and x2).

Quantity | Convex | Concave
(∂²f/∂x1²)(∂²f/∂x2²) − (∂²f/∂x1∂x2)² | ≥ 0 | ≥ 0
∂²f/∂x1² | ≥ 0 | ≤ 0
∂²f/∂x2² | ≥ 0 | ≤ 0


Example: f(x1, x2) = x1² − 2x1x2 + x2²
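A minimal numerical check of the table above for this example; the Hessian of f(x1, x2) = x1² − 2x1x2 + x2² is constant, so the second partials below are exact:

```python
# Second partial derivatives of f(x1, x2) = x1^2 - 2*x1*x2 + x2^2
fxx, fyy, fxy = 2.0, 2.0, -2.0

def is_convex(fxx, fyy, fxy):
    # The two-variable test: both pure second partials nonnegative
    # and fxx*fyy - fxy^2 nonnegative (for all x1, x2; here constant)
    return fxx >= 0 and fyy >= 0 and fxx * fyy - fxy**2 >= 0

print(is_convex(fxx, fyy, fxy))   # True: determinant term is 4 - 4 = 0
```

Because the determinant term is exactly 0, f is convex but not strictly convex (it is constant along the line x1 = x2).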

How to judge whether a function of more variables is convex or concave
The sum of convex functions is a convex function, and the sum of concave functions is a concave function.
Example: f(x1, x2, x3) = 4x1 − x1² − (x2 − x3)² = [4x1 − x1²] + [−(x2 − x3)²]

If there are constraints, then one more condition will provide the guarantee, namely, that the feasible region is a convex set.
Convex set: a convex set is a collection of points such that, for each pair of points in the collection, the entire line segment joining these two points is also in the collection.

In general, the feasible region for a nonlinear programming problem is a convex set whenever all the gi(x) (for the constraints gi(x) ≤ bi) are convex functions.
Max Z = 3x1 + 5x2
S.T. x1 ≤ 4
9x1² + 5x2² ≤ 216
x1, x2 ≥ 0
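A quick numerical sanity check that this feasible region is convex: sample random pairs of feasible points and confirm that points on the segment joining them stay feasible (a demonstration, not a proof):

```python
import random

random.seed(42)

def feasible(x1, x2):
    # Constraints: x1 <= 4, 9x1^2 + 5x2^2 <= 216, x1, x2 >= 0
    return 0 <= x1 <= 4 and x2 >= 0 and 9*x1**2 + 5*x2**2 <= 216

# Collect random feasible points by rejection sampling
pts = []
while len(pts) < 200:
    x1, x2 = random.uniform(0, 4), random.uniform(0, 7)
    if feasible(x1, x2):
        pts.append((x1, x2))

# Every convex combination of two feasible points should be feasible
all_ok = all(
    feasible(lam*a1 + (1-lam)*b1, lam*a2 + (1-lam)*b2)
    for (a1, a2), (b1, b2) in zip(pts[::2], pts[1::2])
    for lam in (0.25, 0.5, 0.75)
)
print(all_ok)   # True
```

The check succeeds because 9x1² + 5x2² is a convex function, so its sublevel set {x : 9x1² + 5x2² ≤ 216} is convex, as the text states.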


What happens when just one of these gi(x) is a concave function instead?
Max Z = 3x1 + 5x2
S.T. x1 ≤ 4
2x2 ≤ 14
8x1 − x1² + 14x2 − x2² ≤ 49
x1, x2 ≥ 0
The feasible region is not a convex set. Under this circumstance, we cannot guarantee that a local maximum is a global maximum.

Condition for local maximum = global maximum (with gi(x) ≤ bi constraints)
To guarantee that a local maximum is a global maximum for a nonlinear programming problem with constraints gi(x) ≤ bi and x ≥ 0, the objective function f(x) must be a concave function and each gi(x) must be a convex function. Such a problem is called a convex programming problem.

One-Variable Unconstrained Optimization
The differentiable function f(x) to be maximized is concave. The necessary and sufficient condition for x = x* to be optimal (a global maximum) is

df/dx = 0 at x = x*.

It is usually not easy to solve this equation analytically. The one-dimensional search procedure finds a sequence of trial solutions that leads toward an optimal solution, using the sign of the derivative to determine in which direction to move: a positive derivative indicates that x* is greater than x, and vice versa.


The Bisection Method
Initialization: Select ε (error tolerance). Find an initial x_L (lower bound on x*) and x_U (upper bound on x*) by inspection. Set the initial trial solution x' = (x_L + x_U)/2.
Iteration: Evaluate df(x)/dx at x = x'.
If df(x)/dx ≥ 0, reset x_L = x'.
If df(x)/dx ≤ 0, reset x_U = x'.
Select a new x' = (x_L + x_U)/2.
Stopping Rule: If x_U − x_L ≤ 2ε, so that the new x' must be within ε of x*, stop. Otherwise, perform another iteration.

Example: Max f(x) = 12x − 3x⁴ − 2x⁶

(Iteration table to be filled in, for iterations 0 through 7, with columns x_L, x_U, df(x)/dx at x', new x', and f(x').)
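The procedure above translates directly into code. This is a sketch of the bisection method applied to the example; the bracket [0, 2] is chosen by inspection, since f'(0) = 12 > 0 and f'(2) < 0:

```python
def bisection_max(df, x_low, x_up, eps=1e-4):
    # Halve the bracketing interval [x_low, x_up] until its width is 2*eps
    while x_up - x_low > 2 * eps:
        x = (x_low + x_up) / 2
        if df(x) >= 0:
            x_low = x          # f still increasing: x* lies to the right
        else:
            x_up = x           # f decreasing: x* lies to the left
    return (x_low + x_up) / 2

f  = lambda x: 12*x - 3*x**4 - 2*x**6
df = lambda x: 12 - 12*x**3 - 12*x**5   # df/dx

x_star = bisection_max(df, 0.0, 2.0)
print(round(x_star, 4))                 # close to 0.8376
```

Each iteration halves the interval, so reaching tolerance ε from an initial width w takes about log2(w/2ε) iterations.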


Newton's Method
The bisection method converges slowly because it takes only first-derivative information into account. The basic idea of Newton's method is to approximate f(x) within the neighborhood of the current trial solution by a quadratic function and then to maximize (or minimize) the approximating function exactly to obtain the new trial solution.

This approximating quadratic function is obtained by truncating the Taylor series after the second-derivative term:

f(x_{i+1}) ≈ f(x_i) + f'(x_i)(x_{i+1} − x_i) + (f''(x_i)/2)(x_{i+1} − x_i)²

This quadratic function can be optimized in the usual way by setting its first derivative to zero and solving for x_{i+1}. Thus,

x_{i+1} = x_i − f'(x_i)/f''(x_i).

Stopping Rule: If |x_{i+1} − x_i| < ε, stop and output x_{i+1}.
Example: Max f(x) = 12x − 3x⁴ − 2x⁶ (same as the bisection example)

x_i | f(x_i) | f'(x_i) | f''(x_i) | x_{i+1} = x_i − f'(x_i)/f''(x_i)
0.84003 | 7.8838 | −0.1325 | −55.279 | 0.83763
0.83763 | 7.8839 | −0.0006 | −54.790 | 0.83762
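A sketch of Newton's method on the same example; starting from x = 1, it passes through the iterates in the table above:

```python
def newton_max(df, d2f, x, eps=1e-8, max_iter=100):
    # Iterate x_{i+1} = x_i - f'(x_i)/f''(x_i) until successive
    # trial solutions are within eps of each other
    for _ in range(max_iter):
        x_next = x - df(x) / d2f(x)
        if abs(x_next - x) < eps:
            return x_next
        x = x_next
    return x

df  = lambda x: 12 - 12*x**3 - 12*x**5     # first derivative
d2f = lambda x: -36*x**2 - 60*x**4         # second derivative

x_star = newton_max(df, d2f, 1.0)
print(round(x_star, 5))                    # 0.83762
```

Note how few iterations this takes compared to bisection: the quadratic approximation gives roughly quadratic convergence near the optimum.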

Multivariable Unconstrained Optimization Usually, there is no analytical method for solving the system of equations given by setting the respective partial derivatives equal to zero. Thus, a numerical search procedure must be used.


The Gradient Search Procedure (for multivariable unconstrained maximization problems)
The goal is to reach a point where all the partial derivatives are 0. A natural approach is to use the values of the partial derivatives to select the specific direction in which to move. The gradient at point x = x' is

∇f(x') = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn), evaluated at x = x'.

The direction of the gradient is interpreted as the direction of the directed line segment from the origin to the point (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn) evaluated at x = x'; this is the direction of changing x that maximizes the rate at which f(x) increases. However, normally it would not be practical to change x continuously in the direction of ∇f(x), because this series of changes would require continuously reevaluating the ∂f/∂xj and changing the direction of the path.

A better approach is to keep moving in a fixed direction from the current trial solution, stopping only when f(x) stops increasing. That stopping point becomes the next trial solution, and the gradient is then recalculated to determine the new direction in which to move. That is, reset x' = x' + t*∇f(x'), where t* is the positive value of t that maximizes f(x' + t∇f(x')). The iterations continue until ∇f(x) = 0, within a small tolerance ε.

Summary of the Gradient Search Procedure
Initialization: Select ε and any initial trial solution x'. Go first to the stopping rule.
Step 1: Express f(x' + t∇f(x')) as a function of t by setting xj = x'j + t(∂f/∂xj) for j = 1, 2, …, n, and then substituting these expressions into f(x).
Step 2: Use the one-dimensional search procedure to find t = t* that maximizes f(x' + t∇f(x')) over t ≥ 0.
Step 3: Reset x' = x' + t*∇f(x'). Then go to the stopping rule.
Stopping Rule: Evaluate ∇f(x') at x = x', and check whether |∂f/∂xj| ≤ ε for all j = 1, 2, …, n.


If so, stop with the current x' as the desired approximation of an optimal solution x*. Otherwise, perform another iteration.

Example for multivariable unconstrained nonlinear programming
Max f(x) = 2x1x2 + 2x2 − x1² − 2x2²

∂f/∂x1 = 2x2 − 2x1, ∂f/∂x2 = 2x1 + 2 − 4x2

We verify that f(x) is concave. Suppose we pick x' = (0, 0) as the initial trial solution.

∇f(0, 0) = (0, 2)

f(x' + t∇f(x')) = f(0, 2t) = 4t − 8t²


(Iteration table to be filled in, for iterations 1 and 2, with columns x', ∇f(x'), x' + t∇f(x'), f(x' + t∇f(x')), t*, and x' + t*∇f(x').)
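A sketch of the gradient search procedure for this example. The exact one-dimensional maximization of f(x' + t∇f(x')) is replaced here by a ternary search over t in [0, 1] (an assumption that suffices for this problem, where every t* happens to fall in that interval); starting from (0, 0), the iterates converge to the optimal solution (1, 1):

```python
def gradient_search(f, grad, x, eps=1e-5):
    g = grad(x)
    while max(abs(c) for c in g) > eps:
        # Line search: maximize phi(t) = f(x + t*g) over t in [0, 1]
        # by ternary search (phi is concave here, hence unimodal)
        lo, hi = 0.0, 1.0
        while hi - lo > 1e-12:
            t1, t2 = lo + (hi - lo)/3, hi - (hi - lo)/3
            if f([xi + t1*gi for xi, gi in zip(x, g)]) < \
               f([xi + t2*gi for xi, gi in zip(x, g)]):
                lo = t1
            else:
                hi = t2
        t_star = (lo + hi) / 2
        x = [xi + t_star*gi for xi, gi in zip(x, g)]   # reset x' = x' + t* grad
        g = grad(x)
    return x

f = lambda x: 2*x[0]*x[1] + 2*x[1] - x[0]**2 - 2*x[1]**2
grad = lambda x: [2*x[1] - 2*x[0], 2*x[0] + 2 - 4*x[1]]

x_star = gradient_search(f, grad, [0.0, 0.0])
print([round(v, 3) for v in x_star])   # [1.0, 1.0]
```

The first step reproduces the hand calculation above: from (0, 0) the gradient is (0, 2), the line search maximizes 4t − 8t² at t* = 1/4, and the next trial solution is (0, 1/2).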

For a minimization problem, we move in the opposite direction; that is, x' = x' − t*∇f(x'). The other change is that t = t* is the value that minimizes f(x' − t∇f(x')) over t ≥ 0.

Necessary and Sufficient Conditions for Optimality (Maximization)
Problem | Necessary condition | Also sufficient if:
One-variable unconstrained | df/dx = 0 | f(x) concave
Multivariable unconstrained | ∂f/∂xj = 0 (j = 1, 2, …, n) | f(x) concave
General constrained problem | KKT conditions | f(x) concave and each gi(x) convex

The Karush-Kuhn-Tucker (KKT) Conditions for Constrained Optimization
Assume that f(x), g1(x), g2(x), …, gm(x) are differentiable functions. Then x* = (x1*, x2*, …, xn*) can be an optimal solution for the nonlinear programming problem only if there exist m numbers u1, u2, …, um such that all the following KKT conditions are satisfied:

(1) ∂f/∂xj − Σ_{i=1}^{m} ui ∂gi/∂xj ≤ 0, at x = x*, for j = 1, 2, …, n
(2) xj* (∂f/∂xj − Σ_{i=1}^{m} ui ∂gi/∂xj) = 0, at x = x*, for j = 1, 2, …, n
(3) gi(x*) − bi ≤ 0, for i = 1, 2, …, m
(4) ui (gi(x*) − bi) = 0, for i = 1, 2, …, m


(5) xj* ≥ 0, for j = 1, 2, …, n
(6) ui ≥ 0, for i = 1, 2, …, m
Note that satisfying these conditions does not by itself guarantee that the solution is optimal.

Corollary of the KKT Theorem (Sufficient Conditions)
Assume that f(x) is a concave function and that g1(x), g2(x), …, gm(x) are convex functions. Then x* = (x1*, x2*, …, xn*) is an optimal solution if and only if all the KKT conditions are satisfied.

An Example
Max f(x) = ln(x1 + 1) + x2
S.T. 2x1 + x2 ≤ 3
x1, x2 ≥ 0
Here n = 2; m = 1; g1(x) = 2x1 + x2 is convex; f(x) is concave. The KKT conditions are:
1. (j = 1) 1/(x1 + 1) − 2u1 ≤ 0
2. (j = 1) x1(1/(x1 + 1) − 2u1) = 0
1. (j = 2) 1 − u1 ≤ 0
2. (j = 2) x2(1 − u1) = 0
3. 2x1 + x2 − 3 ≤ 0
4. u1(2x1 + x2 − 3) = 0
5. x1 ≥ 0, x2 ≥ 0
6. u1 ≥ 0
There exists u1 = 1 such that x1 = 0, x2 = 3, and u1 = 1 satisfy all the KKT conditions, so the optimal solution is (0, 3).

How to solve the KKT conditions
Unfortunately, there is no easy way in general. In the above example, there are 8 combinations of sign cases for x1 (= 0 or > 0), x2 (= 0 or > 0), and u1 (= 0 or > 0); try each one until a consistent one is found. What if there are many variables? Let's look at some easier (special) cases.
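The claimed solution can be checked mechanically. A small sketch that plugs (x1, x2, u1) = (0, 3, 1) into all six KKT conditions for this example:

```python
def kkt_satisfied(x1, x2, u1, tol=1e-9):
    checks = [
        1/(x1 + 1) - 2*u1 <= tol,              # 1. (j = 1)
        abs(x1*(1/(x1 + 1) - 2*u1)) <= tol,    # 2. (j = 1)
        1 - u1 <= tol,                         # 1. (j = 2)
        abs(x2*(1 - u1)) <= tol,               # 2. (j = 2)
        2*x1 + x2 - 3 <= tol,                  # 3.
        abs(u1*(2*x1 + x2 - 3)) <= tol,        # 4.
        x1 >= -tol and x2 >= -tol,             # 5.
        u1 >= -tol,                            # 6.
    ]
    return all(checks)

print(kkt_satisfied(0, 3, 1))      # True: (0, 3) with u1 = 1 passes
print(kkt_satisfied(1, 1, 0))      # False: condition 1 (j = 1) fails
```

Because f is concave and g1 is convex, the corollary upgrades this check from "necessary" to "necessary and sufficient", so passing it certifies optimality.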


Quadratic Programming
Max f(x) = cx − (1/2)xᵀQx
S.T. Ax ≤ b
x ≥ 0
The objective function is

f(x) = cx − (1/2)xᵀQx = Σ_{j=1}^{n} cj xj − (1/2) Σ_{i=1}^{n} Σ_{j=1}^{n} qij xi xj.

The qij are the elements of Q. If i = j, then xi xj = xj², so −(1/2)qjj is the coefficient of xj². If i ≠ j, then −(1/2)(qij xi xj + qji xj xi) = −qij xi xj, so −qij is the coefficient for the product of xi and xj (since qij = qji).

An example
Max f(x1, x2) = 15x1 + 30x2 + 4x1x2 − 2x1² − 4x2²
S.T. x1 + 2x2 ≤ 30
x1, x2 ≥ 0

The KKT conditions for the above quadratic programming problem:
1. (j = 1) 15 + 4x2 − 4x1 − u1 ≤ 0
2. (j = 1) x1(15 + 4x2 − 4x1 − u1) = 0
1. (j = 2) 30 + 4x1 − 8x2 − 2u1 ≤ 0
2. (j = 2) x2(30 + 4x1 − 8x2 − 2u1) = 0
3. x1 + 2x2 − 30 ≤ 0
4. u1(x1 + 2x2 − 30) = 0
5. x1 ≥ 0, x2 ≥ 0
6. u1 ≥ 0


Introduce slack variables (y1, y2, and v1) for conditions 1 (j = 1), 1 (j = 2), and 3:
1. (j = 1) 15 + 4x2 − 4x1 − u1 + y1 = 0
1. (j = 2) 30 + 4x1 − 8x2 − 2u1 + y2 = 0
3. x1 + 2x2 + v1 = 30
Condition 2 (j = 1) can be reexpressed as
2. (j = 1) x1y1 = 0
Similarly, we have
2. (j = 2) x2y2 = 0
4. u1v1 = 0
For each of these pairs, (x1, y1), (x2, y2), (u1, v1), the two variables are called complementary variables, because only one of them can be nonzero. Combining them into one constraint, x1y1 + x2y2 + u1v1 = 0, gives the complementarity constraint.
Rewriting the whole set of conditions:
4x1 − 4x2 + u1 − y1 = 15
−4x1 + 8x2 + 2u1 − y2 = 30
x1 + 2x2 + v1 = 30
x1y1 + x2y2 + u1v1 = 0
x1 ≥ 0, x2 ≥ 0, u1 ≥ 0, y1 ≥ 0, y2 ≥ 0, v1 ≥ 0
Except for the complementarity constraint, these are all linear constraints. For any quadratic programming problem, the KKT conditions have this form:
Qx + Aᵀu − y = cᵀ
Ax + v = b
x ≥ 0, u ≥ 0, y ≥ 0, v ≥ 0
xᵀy + uᵀv = 0
Assume the objective function (of a quadratic programming problem) is concave and the constraints are convex (they are all linear). Then x is optimal if and only if there exist values of y, u, and v such that all four vectors together satisfy all these conditions. The original problem is thereby reduced to the equivalent problem of finding a feasible solution to these constraints. These are really the constraints of an LP, except for the complementarity constraint. Why don't we just modify the simplex method?


The Modified Simplex Method
The complementarity constraint implies that it is not permissible for both complementary variables of any pair to be basic variables. The problem reduces to finding an initial BF solution to a linear programming problem that has these constraints, subject to this additional restriction on the identity of the basic variables. When cᵀ ≤ 0 (unlikely) and b ≥ 0, the initial solution is easy to find: x = 0, u = 0, y = −cᵀ, v = b. Otherwise, introduce an artificial variable into each of the equations where cj > 0 or bi < 0, in order to use these artificial variables as initial basic variables. This choice of initial basic variables sets x = 0 and u = 0 automatically, which satisfies the complementarity constraint. Then use phase 1 of the two-phase method to find a BF solution for the real problem; that is, apply the simplex method (with the zj as the artificial variables) to

Min Z = Σ_j zj

subject to the linear programming constraints obtained from the KKT conditions, but with these artificial variables included.
We still need to modify the simplex method to satisfy the complementarity constraint.
Restricted-Entry Rule: Exclude from consideration as the entering variable any nonbasic variable whose complementary variable is already a basic variable; choose among the other nonbasic variables according to the usual criterion. This rule keeps the complementarity constraint satisfied at all times.
When an optimal solution x*, u*, y*, v*, z1 = 0, …, zn = 0 is obtained for the phase 1 problem, x* is the desired optimal solution for the original quadratic programming problem.


A Quadratic Programming Example
Max 15x1 + 30x2 + 4x1x2 − 2x1² − 4x2²
S.T. x1 + 2x2 ≤ 30
x1, x2 ≥ 0
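On a problem this small, the restricted-entry idea can be mimicked by brute force rather than by the modified simplex method itself: each complementary pair contributes exactly one variable that may be nonzero, so there are only 2³ = 8 cases; solve the three linear KKT equations for each case and keep the nonnegative solution. This sketch (plain Gaussian elimination, no LP library) recovers the optimal solution (x1, x2) = (12, 9):

```python
from itertools import product

def solve3(A, b):
    # Gauss-Jordan elimination with partial pivoting for a 3x3 system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            return None                      # singular: no unique solution
        M[col], M[piv] = M[piv], M[col]
        for r in range(3):
            if r != col:
                factor = M[r][col] / M[col][col]
                M[r] = [a - factor*c for a, c in zip(M[r], M[col])]
    return [M[i][3] / M[i][i] for i in range(3)]

# KKT equations, columns ordered x1, x2, u1, y1, y2, v1:
#   4x1 - 4x2 +  u1 - y1           = 15
#  -4x1 + 8x2 + 2u1      - y2      = 30
#    x1 + 2x2                 + v1 = 30
A_full = [[ 4, -4, 1, -1,  0, 0],
          [-4,  8, 2,  0, -1, 0],
          [ 1,  2, 0,  0,  0, 1]]
b = [15, 30, 30]
pairs = [(0, 3), (1, 4), (2, 5)]    # (x1,y1), (x2,y2), (u1,v1)

solution = None
for basics in product(*pairs):      # pick the member of each pair allowed nonzero
    A = [[A_full[r][c] for c in basics] for r in range(3)]
    sol = solve3(A, b)
    if sol and all(v >= -1e-9 for v in sol):
        solution = dict(zip(basics, sol))
        break

x1, x2 = solution.get(0, 0.0), solution.get(1, 0.0)
print(round(x1, 6), round(x2, 6))   # 12.0 9.0
```

The feasible case turns out to be the one with x1, x2, u1 basic (constraint active, u1 = 3), which is exactly the pattern the restricted-entry rule would terminate with.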


Constrained Optimization with Equality Constraints
Consider the problem of finding the minimum or maximum of the function f(x), subject to the restriction that x must satisfy all the equations
g1(x) = b1
…
gm(x) = bm
Example: Max f(x1, x2) = x1² + 2x2
S.T. g(x1, x2) = x1² + x2² = 1
A classical method is the method of Lagrange multipliers. The Lagrangian function is

h(x, λ) = f(x) − Σ_{i=1}^{m} λi [gi(x) − bi],

where λ = (λ1, λ2, …, λm) are called Lagrange multipliers. For feasible values of x, gi(x) − bi = 0 for all i, so h(x, λ) = f(x). The method reduces to analyzing h(x, λ) by the procedure for unconstrained optimization: set all partial derivatives to zero:

∂h/∂xj = ∂f/∂xj − Σ_{i=1}^{m} λi ∂gi/∂xj = 0, for j = 1, 2, …, n
∂h/∂λi = −gi(x) + bi = 0, for i = 1, 2, …, m

Notice that the last m equations are equivalent to the constraints in the original problem, so only feasible solutions are considered.
Back to our example: h(x1, x2, λ) = x1² + 2x2 − λ(x1² + x2² − 1).
∂h/∂x1 = 2x1 − 2λx1 = 0
∂h/∂x2 = 2 − 2λx2 = 0
∂h/∂λ = −(x1² + x2² − 1) = 0
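The three Lagrangian equations can be solved by cases: either x1 = 0 (so the constraint gives x2 = ±1, with λ = 1/x2), or λ = 1 (so 2 − 2x2 = 0 forces x2 = 1 and then x1 = 0 again). A sketch that evaluates the resulting stationary points:

```python
f = lambda x1, x2: x1**2 + 2*x2

# Stationary points of the Lagrangian:
#   2*x1*(1 - lam) = 0,  2 - 2*lam*x2 = 0,  x1^2 + x2^2 = 1
# Case x1 = 0: constraint gives x2 = +1 or -1 (with lam = 1/x2).
# Case lam = 1: 2 - 2*x2 = 0 gives x2 = 1, hence x1 = 0 again.
candidates = [(0.0, 1.0), (0.0, -1.0)]

best = max(candidates, key=lambda p: f(*p))
print(best, f(*best))    # (0.0, 1.0) 2.0
```

So the constrained maximum is f = 2 at (x1, x2) = (0, 1); the other stationary point, (0, −1), is the constrained minimum.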


Other Types of Nonlinear Programming Problems

Separable Programming
This is a special case of convex programming with one additional assumption: the f(x) and gi(x) functions are separable functions. A separable function is a function in which each term involves just a single variable.
Example: f(x1, x2) = 126x1 − 9x1² + 182x2 − 13x2² = f1(x1) + f2(x2), where
f1(x1) = 126x1 − 9x1²
f2(x2) = 182x2 − 13x2²
Such a problem can be closely approximated by a linear programming problem. Please refer to Section 12.8 for details.

Geometric Programming
The objective and the constraint functions take the form

g(x) = Σ_{i=1}^{N} ci Pi(x), where Pi(x) = x1^{ai1} x2^{ai2} ⋯ xn^{ain}, for i = 1, 2, …, N.

When all the ci are strictly positive and the objective function is to be minimized, this geometric programming problem can be converted to a convex programming problem by setting xj = e^{yj}.

Fractional Programming
Suppose the objective function is in the form of a (linear) fraction:
Maximize f(x) = f1(x)/f2(x) = (cx + c0)/(dx + d0).
Also assume that the constraints are linear: Ax ≤ b, x ≥ 0. We can transform this into an equivalent problem of a standard type for which effective solution procedures are available. Letting y = x/(dx + d0) and t = 1/(dx + d0), so that x = y/t, transforms the original formulation into an equivalent linear programming problem:
Max Z = cy + c0t
S.T. Ay − bt ≤ 0
dy + d0t = 1
y ≥ 0, t ≥ 0
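The transformation can be sanity-checked numerically on a hypothetical one-variable instance (the values c = 3, c0 = 2, d = 1, d0 = 4 are illustrative assumptions): for any feasible x, the transformed variables satisfy dy + d0t = 1 and reproduce the original objective value.

```python
# Hypothetical 1-D fractional objective: f(x) = (3x + 2) / (x + 4)
c, c0, d, d0 = 3.0, 2.0, 1.0, 4.0
f = lambda x: (c*x + c0) / (d*x + d0)

for x in [0.0, 2.5, 6.0]:
    t = 1.0 / (d*x + d0)        # t = 1/(dx + d0)
    y = x * t                   # y = x/(dx + d0), so x = y/t
    # transformed objective Z = c*y + c0*t equals the original f(x)
    assert abs((c*y + c0*t) - f(x)) < 1e-12
    # transformed equality constraint d*y + d0*t = 1 holds by construction
    assert abs(d*y + d0*t - 1.0) < 1e-12

print("transformation verified")
```

This is why maximizing Z over the transformed (linear) constraints is equivalent to maximizing the original fractional objective, provided dx + d0 > 0 over the feasible region.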

