[Figure: contours of decreasing f(x) in the (x1, x2) plane; the feasible region is bounded by the constraints g1(x) and g2(x), with infeasible regions beyond them; the optimum lies on the constraint boundary.]
Equality constraints
• We will develop the optimality conditions for equality constraints and then generalize them for inequality constraints:

$$\min f(\mathbf{x}) \quad \text{such that} \quad h_j(\mathbf{x}) = 0, \qquad g_j(\mathbf{x}) \ge 0, \quad j = 1, 2, \ldots, n_g$$
• Converting each inequality into an equality, $g_j(\mathbf{x}) - t_j^2 = 0$, with a slack variable $t_j$ yields the following Lagrangian:

$$\mathcal{L}(\mathbf{x}, \mathbf{t}, \boldsymbol{\lambda}) = f(\mathbf{x}) - \sum_{j=1}^{n_g} \lambda_j \left( g_j(\mathbf{x}) - t_j^2 \right)$$
• Why is the slack variable squared?
Karush-Kuhn-Tucker conditions
• Conditions for stationary points are then:

$$\frac{\partial \mathcal{L}}{\partial x_i} = \frac{\partial f}{\partial x_i} - \sum_{j=1}^{n_g} \lambda_j \frac{\partial g_j}{\partial x_i} = 0$$

$$\frac{\partial \mathcal{L}}{\partial \lambda_j} = -\left( g_j - t_j^2 \right) = 0$$

$$\frac{\partial \mathcal{L}}{\partial t_j} = 2 \lambda_j t_j = 0$$
• If an inequality constraint is inactive ($t_j \ne 0$), then its Lagrange multiplier $\lambda_j = 0$
• For a minimum, the multipliers must be non-negative ($\lambda_j \ge 0$)
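As a sanity check, the stationarity system above can be solved symbolically for a small instance. The problem below is a made-up illustration, not from the slides:

```python
# Illustrative instance (not from the slides):
#   min f = x1^2 + 10*x2^2   s.t.   g = x1^2 + x2^2 - 1 >= 0
# Solve the stationarity conditions of L = f - lam*(g - t^2).
import sympy as sp

x1, x2, t, lam = sp.symbols('x1 x2 t lam', real=True)
f = x1**2 + 10*x2**2
g = x1**2 + x2**2 - 1
L = f - lam*(g - t**2)

# dL/dx1 = dL/dx2 = dL/dlam = dL/dt = 0
eqs = [sp.diff(L, v) for v in (x1, x2, lam, t)]
sols = sp.solve(eqs, [x1, x2, lam, t], dict=True)

# Keep candidates with non-negative multipliers, pick the lowest f
candidates = [s for s in sols if s[lam] >= 0]
best = min(candidates, key=lambda s: f.subs(s))
print(best)  # x2 = 0, x1 = ±1, lam = 1, t = 0: the constraint is active
```

Note how $t = 0$ in the surviving solution signals an active constraint, while the candidate stationary points with $\lambda = 10$ (on the $x_2$ axis) are rejected because their objective value is higher.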
Convex problems
• Convex optimization problem has
– convex objective function
– convex feasible domain if
• All inequality constraints are convex (for constraints written $g_j(\mathbf{x}) \ge 0$, this means $-g_j$ is convex, i.e. $g_j$ is concave)
• All equality constraints are linear
– only one optimum
• Karush-Kuhn-Tucker conditions are then not only necessary but also sufficient for a global minimum
• Why do the equality constraints have to be
linear?
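The question about linear equality constraints can be answered with a tiny numeric sketch (the circle constraint below is a made-up example): a nonlinear equality $h(\mathbf{x}) = 0$ generally defines a curved feasible set, which is not convex.

```python
# Made-up example: the nonlinear equality h(x) = x1^2 + x2^2 - 1 = 0
# (the unit circle) is satisfied by two points whose midpoint is
# infeasible, so the feasible set is not convex. A linear equality
# a.x = b keeps every convex combination of feasible points feasible.
import numpy as np

def h(x):
    """Nonlinear equality constraint: points on the unit circle."""
    return x[0]**2 + x[1]**2 - 1.0

a = np.array([1.0, 0.0])   # feasible: h(a) = 0
b = np.array([0.0, 1.0])   # feasible: h(b) = 0
mid = 0.5*(a + b)          # convex combination of feasible points

print(h(a), h(b))  # 0.0 0.0
print(h(mid))      # -0.5 -> the midpoint violates h = 0
```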
Example extended to inequality constraints
• Minimize a quadratic objective in a ring:

$$\min f = x_1^2 + 10 x_2^2 \quad \text{s.t.} \quad r_i^2 \le x_1^2 + x_2^2 \le r_o^2$$
• The sensitivity of the optimum objective to a problem parameter $p$ is

$$\frac{df^*}{dp} = \frac{\partial f}{\partial p} - \boldsymbol{\lambda}^T \frac{\partial \mathbf{g}_a}{\partial p}$$

where $\mathbf{g}_a$ collects the active constraints.
Lagrange multipliers are called "shadow prices" because they provide the price of imposing the constraints.
Why do we have an ordinary derivative on the left side and partial derivatives on the right side?
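As a rough numerical sketch of the ring problem (the radii $r_i = 1$, $r_o = 2$ are assumed here, since the slides keep them symbolic), a general-purpose solver lands on the inner circle:

```python
# Sketch with assumed radii r_i = 1, r_o = 2 (the slides keep them symbolic).
# Solve  min x1^2 + 10*x2^2  s.t.  1 <= x1^2 + x2^2 <= 4  numerically.
import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + 10*x[1]**2
cons = [
    {'type': 'ineq', 'fun': lambda x: x[0]**2 + x[1]**2 - 1.0},  # inner, g >= 0
    {'type': 'ineq', 'fun': lambda x: 4.0 - x[0]**2 - x[1]**2},  # outer, g >= 0
]
res = minimize(f, x0=[1.5, 0.5], constraints=cons, method='SLSQP')
print(res.x, res.fun)  # lands near (±1, 0) on the inner circle, f* ≈ 1
```

Only the inner-radius constraint is active at the optimum; the outer one has a zero multiplier, consistent with the complementary slackness condition above.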
Example
• A simpler version of the ring problem, keeping only the inner-radius constraint with parameter $p$:

$$\min f = x_1^2 + 10 x_2^2 \quad \text{s.t.} \quad g \equiv x_1^2 + x_2^2 - p \ge 0$$
• For $p = 100$ we found $\lambda = 1$
• Here it is easy to see that the solution is

$$x_1^* = \sqrt{p}, \quad x_2^* = 0, \quad f^*(p) = p, \quad \frac{df^*}{dp} = 1$$
• Which agrees with

$$\frac{df^*}{dp} = \frac{\partial f}{\partial p} - \lambda \frac{\partial g}{\partial p} = 0 - (1)(-1) = 1$$
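The shadow-price result can also be checked numerically by re-solving the problem at a perturbed $p$ and taking a finite difference (the solver and starting point below are assumptions, not from the slides):

```python
# Finite-difference check of df*/dp = 1 for
#   min x1^2 + 10*x2^2  s.t.  x1^2 + x2^2 - p >= 0
# (solver choice and starting point are assumptions, not from the slides)
from scipy.optimize import minimize

def fstar(p):
    """Optimum objective f*(p), found numerically."""
    res = minimize(lambda x: x[0]**2 + 10*x[1]**2,
                   x0=[p**0.5 + 1.0, 1.0],
                   constraints=[{'type': 'ineq',
                                 'fun': lambda x: x[0]**2 + x[1]**2 - p}],
                   method='SLSQP')
    return res.fun

p, dp = 100.0, 1e-3
print(fstar(p))                        # ≈ 100 = p, matching f*(p) = p
print((fstar(p + dp) - fstar(p))/dp)   # ≈ 1, matching df*/dp = 1
```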