
Advanced Microeconomics WS 09/10                    Prof. Aleksander Berentsen
Mathematical Excursus: Optimization                 Assistant: Daniel Müller

(Based on Jehle, Geoffrey A., and Philip J. Reny (2001): Advanced Microeconomic Theory,
2nd Ed., Addison Wesley, pp. 498-509)

1. Non-linear Optimization and Kuhn-Tucker


For most optimization problems subject to a constraint we assume that the constraint must
be satisfied in equilibrium. In this case we apply Lagrange's well-known method to solve
the maximization or minimization problem.
Let x = (x1, x2). Assume that we want to solve the following problem:

    max_x f(x)  s.t.  g(x) = 1.

The Lagrangian function is:


    L = f(x) + λ[1 − g(x)].

We obtain the solution by setting the partial derivatives of L with respect to the x
variables and the multiplier λ equal to zero.
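As a sketch, this can be checked numerically. The objective f(x1, x2) = x1·x2 and the constraint x1 + x2 = 1 are my own illustrative choices, not from the text; the candidate solution x1 = x2 = λ = 1/2 is worked out by hand, and the code only verifies that the first-order conditions vanish there.

```python
# Pure-Python check of the Lagrangian first-order conditions for an
# illustrative problem (my own choice, not from the text):
#   max x1*x2   s.t.   g(x1, x2) = x1 + x2 = 1.
# With L = x1*x2 + lam*(1 - x1 - x2), solving the FOCs by hand gives
# x1 = x2 = lam = 1/2; the code only verifies that the FOCs vanish there.

def foc(x1, x2, lam):
    dL_dx1 = x2 - lam        # dL/dx1
    dL_dx2 = x1 - lam        # dL/dx2
    dL_dlam = 1 - x1 - x2    # dL/dlam recovers the constraint
    return dL_dx1, dL_dx2, dL_dlam

print(foc(0.5, 0.5, 0.5))  # -> (0.0, 0.0, 0.0)
```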

In many instances, however, we wish to optimize a function subject to an inequality constraint.


A typical constraint condition of this kind is, for example, that the quantity of produced goods
may not be negative. For this kind of optimization the Lagrangian method has to be extended.
In order to gain an intuitive understanding of this extension, we should now consider the
following problem:

    max_x f(x)  s.t.  x ≥ 0.
We now want to derive conditions, similar to the first-order conditions (FOC) of the Lagrangian
(Lagrange function), that characterize the solution.
To represent this problem graphically, one of the three illustrated cases could apply:

Figure 1: Three possibilities for maximization with nonnegativity constraints:
(a) case 1, constraint is binding; (b) case 2, constraint is binding but irrelevant;
(c) case 3, constraint is not binding. Source: Jehle and Reny (2001).


In case 1 we have a (global) maximum at x1. However, as x1 < 0, the global maximum is not
a feasible solution, because the nonnegativity constraint is violated. The solution in the
first case is attained on the boundary of the feasible set at x* = 0, where f′(x*) < 0.
Here, we say that the constraint is binding. In case 2 the global maximum is only just
attainable. The constraint on x is binding here, but does not really matter. The solution
in case 2 is characterized by two conditions: x* = 0 and f′(x*) = 0. In case 3 the
solution lies in the interior of the feasible set (i.e., the problem admits an interior
solution), and the constraint is not binding. Once again, two conditions characterize the
solution in case 3: x* > 0 and f′(x*) = 0.

Case 1: x* = 0 and f′(x*) < 0
Case 2: x* = 0 and f′(x*) = 0
Case 3: x* > 0 and f′(x*) = 0.

From this we can formulate the first condition, namely:

    x* f′(x*) = 0.

This condition alone, however, is not sufficient. Let us reconsider case 3: though the
above condition is satisfied at x̃ = 0, x̃ cannot be a maximum, as f′(x̃) > 0. We can now
formulate a further condition: the function must not increase as x increases; i.e.,

    f′(x*) ≤ 0.

Together with the constraint itself, we have three conditions that characterize the
maximum of the above optimization problem:

1. f′(x*) ≤ 0
2. x* f′(x*) = 0
3. x* ≥ 0.

The respective conditions for a minimum of the same optimization problem would be:

1. f′(x*) ≥ 0
2. x* f′(x*) = 0
3. x* ≥ 0.
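The three maximum conditions can be checked mechanically. The two functions below are my own illustrative examples, not from the text: f(x) = −(x + 1)² peaks at x = −1, so under x ≥ 0 the constraint binds and x* = 0 (case 1); f(x) = −(x − 2)² peaks at x = 2, an interior solution (case 3).

```python
# A mechanical check of the three maximum conditions above, on my own
# illustrative functions. f(x) = -(x + 1)**2 peaks at x = -1, so with
# x >= 0 the constraint binds and x* = 0 (case 1). f(x) = -(x - 2)**2
# peaks at x = 2, an interior solution (case 3).

def satisfies_max_conditions(f_prime, x_star, tol=1e-9):
    return (f_prime(x_star) <= tol                    # 1. f'(x*) <= 0
            and abs(x_star * f_prime(x_star)) <= tol  # 2. x* f'(x*) = 0
            and x_star >= -tol)                       # 3. x* >= 0

case1 = satisfies_max_conditions(lambda x: -2.0 * (x + 1.0), 0.0)
case3 = satisfies_max_conditions(lambda x: -2.0 * (x - 2.0), 2.0)
print(case1, case3)  # -> True True
```

Note that the point x̃ = 0 in case 3 fails the check, since f′(0) > 0 there, exactly as the text argues.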

Next we will look at a somewhat more complicated problem:

    max_x f(x1, x2)  s.t.  g(x1, x2) ≥ 0.

An optimization problem of this kind is commonly called a nonlinear programming or
nonlinear optimization problem. Once again we wish to obtain a set of conditions that
characterize

the solution(s). These conditions are called Kuhn-Tucker conditions.¹ In order to obtain
these conditions, we convert the problem into one with an equality constraint and a
nonnegativity constraint. We define the variable z as the amount by which g exceeds zero.
We can thus rewrite the problem as:

    max_x f(x1, x2)  s.t.  g(x1, x2) − z = 0
                           z ≥ 0.
We can solve the first half of the maximization using the Lagrangian method. We have just
seen how we have to modify the Lagrangian method to account for the second half of this
problem, the nonnegativity constraint. First, we form the Lagrangian:

    L = f(x1, x2) + λ[g(x1, x2) − z].
We now wish to maximize the Lagrangian subject to z ≥ 0. The solution thus has to satisfy
the following first-order conditions:

    L1 = f1 + λg1 = 0          (1)
    L2 = f2 + λg2 = 0          (2)
    Lλ = g(x1, x2) − z = 0     (3)
    Lz = −λ ≤ 0                (4)
    zLz = z(−λ) = 0            (5)
    z ≥ 0.                     (6)
If we consider conditions (3), (5) and (6), we see that we can eliminate the z-terms. If
we multiply condition (5) by −1 and rewrite the other two, we obtain:

    z = g(x1, x2)
    zλ = 0
    z ≥ 0.

If we use the first to substitute for z in the other two, we obtain:

    λg(x1, x2) = 0
    g(x1, x2) ≥ 0.
Hence we obtain the Kuhn-Tucker conditions:
1. f1 + λg1 = 0
2. f2 + λg2 = 0
3. λg (x1 , x2 ) = 0
4. λ ≥ 0, g (x1 , x2 ) ≥ 0.
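As a hedged sketch, the four conditions can be verified at a candidate solution. The problem below is my own illustrative example, not from the text: maximize x1·x2 subject to g(x1, x2) = 1 − x1 − x2 ≥ 0, whose solution x1 = x2 = λ = 1/2 is worked out by hand.

```python
# Hedged sketch: verifying the four Kuhn-Tucker conditions at a candidate
# solution of my own illustrative problem (not from the text):
#   max x1*x2   s.t.   g(x1, x2) = 1 - x1 - x2 >= 0.
# The candidate x1 = x2 = 1/2 with lam = 1/2 is worked out by hand.

def kt_conditions(x1, x2, lam, tol=1e-9):
    f1, f2 = x2, x1          # partials of f(x1, x2) = x1*x2
    g1, g2 = -1.0, -1.0      # partials of g(x1, x2) = 1 - x1 - x2
    g = 1.0 - x1 - x2
    return (abs(f1 + lam * g1) <= tol       # 1. f1 + lam*g1 = 0
            and abs(f2 + lam * g2) <= tol   # 2. f2 + lam*g2 = 0
            and abs(lam * g) <= tol         # 3. lam*g = 0 (compl. slackness)
            and lam >= -tol and g >= -tol)  # 4. lam >= 0, g >= 0

print(kt_conditions(0.5, 0.5, 0.5))  # -> True
```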

Interpretation of the Kuhn-Tucker conditions:


Conditions 1 and 2 are the conditions for an unconstrained maximum of the Lagrangian.
Conditions 3 and 4 are those we would obtain if we chose λ so as to minimize L subject to
the condition that λ ≥ 0.
¹ H. W. Kuhn and A. W. Tucker, "Nonlinear Programming", in J. Neyman (Ed.), Proceedings of the Second
Berkeley Symposium on Mathematical Statistics and Probability, Berkeley: University of California Press, 1951,
pp. 481-492.


2. Value Functions
There are often optimization problems of the following kind:

    max_x f(x, a)  s.t.  g(x, a) = 0 and x > 0,

where x is a choice variable, and a is a parameter that appears in the objective function,
the constraint, or both. Let us assume that a unique solution exists for each a, and
denote it by x*(a). With this we can define a new function v(a) that gives the value of
the objective function when x maximizes f subject to the constraints. v(a) is then the
maximum-value function:

    v(a) ≡ max_x f(x, a)  s.t.  g(x, a) = 0 and x > 0.

Evaluating the objective function f(x, a) at the optimum x*(a) yields this maximum value.
We can therefore also define the maximum-value function as:

    v(a) ≡ f(x*(a), a).

A maximum-value function is thus an objective function in which the choice variables (x)
assume their optimal values. These optimal values are themselves functions of the
parameter (a). The maximum-value function therefore depends, implicitly, only on the
parameters. It is hence also called the indirect objective function.
Analogously to the maximum-value function one can also construct a minimum-value function.
Examples from the lecture are the indirect utility function and the expenditure function.
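A minimal sketch of a maximum-value function, using my own illustrative objective f(x, a) = a·x − x² with no constraint (not an example from the text): the FOC a − 2x = 0 gives the maximizer x*(a) = a/2 by hand, so v(a) = f(x*(a), a) = a²/4.

```python
# A minimal sketch of a maximum-value function, using my own illustrative
# objective f(x, a) = a*x - x**2 (no constraint, not from the text).
# The maximizer x*(a) = a/2 follows by hand from the FOC a - 2x = 0,
# so v(a) = f(x*(a), a) = a**2 / 4.

def f(x, a):
    return a * x - x * x

def x_star(a):
    return a / 2.0

def v(a):
    return f(x_star(a), a)

print(v(4.0))  # -> 4.0  (= 4**2 / 4)
```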

3. The Envelope theorem for unconstrained optimization


The Envelope theorem describes how the maximum (or minimum) value of the objective
function changes when a parameter of the model changes. According to this theorem, the
effect of a change in a parameter on the maximum (or minimum) value equals the partial
derivative of the objective function with respect to that parameter, evaluated at the
optimum.
Formally: if

    v(a) = max_x f(x, a),

then it follows that:

    dv(a)/da = ∂f(x, a)/∂a |_{x = x*(a)},

where x*(a) is the optimally chosen value of the choice variable.

An example:
The following function is given:

    f(x, a1, a2) = x^a1 − a2·x,  where 0 < a1 < 1.

This function has a unique maximum where the following FOC is satisfied:

    a1·x^(a1 − 1) − a2 = 0.


The solution is therefore:

    x*(a) = (a1/a2)^(1/(1 − a1))

and the maximum-value function is thus:

    v(a) = (x*(a))^a1 − a2·x*(a) = (a1/a2)^(a1/(1 − a1)) − a2·(a1/a2)^(1/(1 − a1)).
The Envelope theorem now states that the derivative of v(a) with respect to a1 equals the
partial derivative of f with respect to a1, evaluated at x*(a). Therefore:

    ∂f(x, a)/∂a1 |_{x = x*(a)} = ∂(x^a1 − a2·x)/∂a1 |_{x = x*(a)}
                               = (x*(a))^a1 · ln x*(a)
                               = (a1/a2)^(a1/(1 − a1)) · ln[(a1/a2)^(1/(1 − a1))].

(Reminder: if f(y) = b^y, then f′(y) = b^y · ln b.)
If, instead, v(a) were differentiated directly, one would have to differentiate the term

    (a1/a2)^(a1/(1 − a1)) − a2·(a1/a2)^(1/(1 − a1))

with respect to a1, which is significantly more complicated.
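The envelope result for this example can be checked with a finite difference. The parameter values a1 = 0.5, a2 = 1.0 are my own illustrative choice; the code compares the envelope-theorem expression x*^a1·ln x* against a direct numerical derivative of v with respect to a1.

```python
import math

# Finite-difference check of the envelope result for the example above.
# The parameter values a1 = 0.5, a2 = 1.0 are my own illustrative choice.

def x_star(a1, a2):
    # x*(a) = (a1/a2)^(1/(1-a1)), from the FOC a1*x^(a1-1) - a2 = 0
    return (a1 / a2) ** (1.0 / (1.0 - a1))

def v(a1, a2):
    # maximum-value function v(a) = x*^a1 - a2*x*
    x = x_star(a1, a2)
    return x ** a1 - a2 * x

a1, a2 = 0.5, 1.0

# Envelope theorem: dv/da1 = x^a1 * ln(x), evaluated at x = x*(a)
envelope = x_star(a1, a2) ** a1 * math.log(x_star(a1, a2))

# Direct central-difference derivative of v with respect to a1
h = 1e-6
numeric = (v(a1 + h, a2) - v(a1 - h, a2)) / (2 * h)

print(abs(envelope - numeric) < 1e-4)  # -> True
```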
The proof:

    v(a) ≡ f(x*(a), a).

Differentiating both sides with respect to a gives:

    dv(a)/da = ∂f(x*(a), a)/∂x · ∂x*(a)/∂a + ∂f(x*(a), a)/∂a.

Since x*(a) maximizes f, we know that

    ∂f(x*(a), a)/∂x = 0.

Thus it holds that:

    dv(a)/da = ∂f(x*(a), a)/∂a,

which is the same as

    dv(a)/da = ∂f(x, a)/∂a |_{x = x*(a)}.

4. The Envelope theorem for constrained optimization


The Envelope theorem can also be derived for constrained optimization. Let

    m(a) = max_x f(x, a)  s.t.  g(x, a) = 0 and x ≥ 0.

Let L(x, a, λ) be the associated Lagrangian function, and let x*(a) and λ*(a) be the
respective values that solve the Kuhn-Tucker conditions. It then follows that:

    dm(a)/da = ∂L/∂a |_{x*(a), λ*(a)}.
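This version can also be checked numerically on my own illustrative problem (not from the text): m(a) = max x1·x2 subject to a − x1 − x2 = 0. Solving by hand gives x1* = x2* = a/2 and λ* = a/2, so m(a) = a²/4, and since L = x1·x2 + λ(a − x1 − x2), the theorem predicts dm/da = ∂L/∂a = λ*.

```python
# Illustrative check of the constrained envelope theorem, with my own
# example: m(a) = max x1*x2  s.t.  a - x1 - x2 = 0. Solving by hand:
# x1* = x2* = a/2 and lam* = a/2, so m(a) = a**2 / 4. Since
# L = x1*x2 + lam*(a - x1 - x2), the theorem predicts
# dm/da = dL/da|_(x*, lam*) = lam*(a).

def m(a):
    return a * a / 4.0

a = 2.0
lam_star = a / 2.0                         # multiplier at the optimum

h = 1e-6
dm_da = (m(a + h) - m(a - h)) / (2 * h)    # numerical derivative of m

print(abs(dm_da - lam_star) < 1e-6)  # -> True
```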


Literature:
Most current textbooks have an additional chapter or an appendix on optimization; for
example:
Jehle, Geoffrey A., and Philip J. Reny (2001): Advanced Microeconomic Theory, 2nd Ed.,
Addison Wesley, pp. 498-509, or
Varian, Hal R. (1992): Microeconomic Analysis, 3rd Ed., W. W. Norton & Company (Chapter
27).
A comprehensive treatment of optimization problems can be found in the standard
mathematics texts:
Chiang, Alpha C., and Kevin Wainwright (2005): Fundamental Methods of Mathematical
Economics, 4th Ed., McGraw-Hill.
Simon, Carl P., and Lawrence Blume (1994): Mathematics for Economists, W. W. Norton &
Company, Inc.
An introduction to optimization is also offered by the lecture notes of Sten Nyberg of
Stockholm University at: http://people.su.se/~snybe/aml01.pdf.
