
Economic Decision Making (D0C29a)

0. Mathematics Refresher

Jo Reynaerts
VIVES and CES, Katholieke Universiteit Leuven
Jo.Reynaerts@econ.kuleuven.be

M.Sc. Economics
Katholieke Universiteit Leuven, 2011-2012

Outline
Mathematics
- The Structure of an Optimization Problem
- Solutions to Optimization Problems
- Existence of Solutions
- Local and Global Optima
- Uniqueness of Solutions
- Interior and Boundary Optima
- Constrained Optimization: The Method of Lagrange
- Concave Programming and the Karush-Kuhn-Tucker Conditions
- Second-Order Conditions and Comparative Statics
- The Envelope Theorem
- The Gradient and its Characteristics
- Other Useful Properties
- Constrained Optimization: Example

The Structure of an Optimization Problem

Definition
An optimization problem (minimization or maximization) consists in general of
- choice variables $x$,
- an objective function $f$, and
- a feasible set $S$.

The problem is to choose the preferred alternative in the feasible set, i.e. find the maximum (minimum) of the objective function with respect to the choice variables, subject to constraints.

Solutions to Optimization Problems


Definition
A solution to an optimization problem is the vector of values of the choice variables $x^*$ which is in the feasible set and which yields a maximum or minimum of the objective function over the feasible set:
$f(x^*) \geq f(x), \quad \forall\, x \in S.$   (1)

Fundamental questions
1. Does a solution actually exist?
2. Is the solution local or global?
3. Is the solution unique?
4. Is the solution interior or on the boundary?
5. Is the solution a minimum or a maximum? (Location)

Solutions to Optimization Problems


Local and Global Solutions

Definition
A global solution is a solution which satisfies condition (1).

Definition
A local solution satisfies the condition
$f(x^*) \geq f(x),$   (2)
$\forall\, x \in N \subseteq S$, where $N$ is a set of points in an $\epsilon$-neighborhood of $x^*$:
$N = \{x \in S : \|x - x^*\| < \epsilon\}.$


Solutions to Optimization Problems


Figure B.1

Solutions to Optimization Problems


Location

Definition
A sufficient condition for $x^* \in \mathbb{R}$ to be a local maximum is
1. $f'(x^*) = 0$
2. $f''(x^*) < 0$.

Definition
A sufficient condition for $x^* \in \mathbb{R}$ to be a local minimum is
1. $f'(x^*) = 0$
2. $f''(x^*) > 0$.
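These two conditions are easy to check symbolically. The sketch below is my own illustration (the quadratic function is an assumption, not taken from the slides), using sympy:

```python
import sympy as sp

x = sp.symbols('x')
f = -x**2 + 4*x                      # assumed example function

fp = sp.diff(f, x)                   # f'(x)
fpp = sp.diff(f, x, 2)               # f''(x)

x_star = sp.solve(fp, x)[0]          # stationary point: f'(x*) = 0
print(x_star, fpp.subs(x, x_star))   # 2, -2 < 0  ->  local (here also global) maximum
```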

Solutions to Optimization Problems


Essential Properties of the Objective Function

Definition
A function $f: A \subseteq \mathbb{R}^n \to \mathbb{R}$ is continuous in $a \in A$ if $\forall\, \epsilon > 0$, $\exists\, \delta > 0$, such that $\forall\, x \in A$:
$\|x - a\| < \delta \;\Rightarrow\; |f(x) - f(a)| < \epsilon.$   (3)

Definition
A function $f: \mathbb{R}^n \to \mathbb{R}$ is concave if for any $x^0, x^1 \in S$
$f\big(k\, x^0 + (1-k)\, x^1\big) \geq k\, f(x^0) + (1-k)\, f(x^1),$   (4)
$\forall\, k \in (0,1)$, or $f(\bar{x}) \geq \bar{f}$, where $\bar{x} = k\, x^0 + (1-k)\, x^1$ is a convex combination of $x^0$ and $x^1$ and $\bar{f} = k\, f(x^0) + (1-k)\, f(x^1)$.

Solutions to Optimization Problems


Essential Properties of the Objective Function

Definition
A contour of a function $f: \mathbb{R}^n \to \mathbb{R}: x \mapsto f(x)$ is the set of $x$-values that satisfies the condition
$f(x) = c, \quad \text{with } c \in \mathbb{R}.$   (5)
Examples from economics are indifference curves and isoquants.

Example
Consider $f: \mathbb{R}^2 \to \mathbb{R}: (x_1, x_2)' \mapsto f(x_1, x_2) = x_1^2 + x_2^2$. The contours of $f$ are defined as
$x_1^2 + x_2^2 = c,$   (6)
i.e. concentric circles around the origin.

Solutions to Optimization Problems


Essential Properties of the Objective Function

The following properties are essential:
- Continuity of the objective function $f$ implies continuity of its contours.
- If the conditions of the Implicit Function Theorem are satisfied (in particular $f_2 \neq 0$, so that we can solve for $x_2$), $x_2$ can be written as a function of $x_1$, $x_2 = \tilde{x}_2(x_1)$, and the slope of a contour is computed as
  $\dfrac{dx_2}{dx_1} = -\dfrac{f_1}{f_2},$   (7)
  with $f_i = \dfrac{\partial f(x_1, x_2)}{\partial x_i}$.
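For the contour example (6), equation (7) can be verified symbolically; a minimal sketch (the cross-check against an explicit parameterization is my own addition):

```python
import sympy as sp

x1, x2, c = sp.symbols('x1 x2 c', positive=True)
f = x1**2 + x2**2                          # objective from the contour example (6)

# slope of the contour f(x1, x2) = c via equation (7): dx2/dx1 = -f1/f2
slope = -sp.diff(f, x1) / sp.diff(f, x2)
print(sp.simplify(slope))                  # -x1/x2

# cross-check: solve the contour explicitly for x2 and differentiate
x2_of_x1 = sp.sqrt(c - x1**2)
print(sp.simplify(sp.diff(x2_of_x1, x1) - slope.subs(x2, x2_of_x1)))   # 0
```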

Solutions to Optimization Problems


Essential Properties of the Objective Function

Given two points $x'$ and $x''$ on the same contour, i.e. $f(x') = f(x'') = c$, a contour is concave if a convex combination of $x'$ and $x''$ yields at least as high a value of the function:
$f(\bar{x}) \geq f(x') = f(x'') = c.$

Definition
A function $f: \mathbb{R}^n \to \mathbb{R}: x \mapsto f(x)$ is quasiconcave if
$f(x') \geq f(x'') \;\Rightarrow\; f\big(k\, x' + (1-k)\, x''\big) \geq f(x''), \quad 0 \leq k \leq 1.$   (8)


Solutions to Optimization Problems


Essential Properties of the Objective Function


Figure B.2 (the functions in panels (b) and (c) are discontinuous at $x^0$)

1. Continuity of the objective function
A function $y = f(x)$ is continuous if there are no breaks in its graph, or crudely, if it can be drawn without taking the pen from the paper. In Fig. B.2 the functions drawn in (b) and (c) are not continuous, while that in (a) is continuous. In (b) $f(x)$ becomes arbitrarily large at $x^0$ (tends to infinity) and in (c) $f(x)$ jumps from $y^1$ to $y^2$ at $x^0$. When there is more than one variable in the objective function the intuitive idea of continuity is still valid: there should be no jumps or breaks in the graph of the function.


Solutions to Optimization Problems


Essential Properties of the Objective Function

Figure B.3 (panels: a concave function; a linear function, which is both concave and convex; a convex function; a function that is neither concave nor convex)

Solutions to Optimization Problems


Essential Properties of the Objective Function
Figure B.4

$\dfrac{d f\big(x_1, x_2(x_1)\big)}{d x_1} = f_1 + f_2 \dfrac{dx_2}{dx_1}$   [B.8]

Solutions to Optimization Problems


Essential Properties of the Objective Function
Figure B.5 (annotations: strictly quasi-concave = strictly convex better-than set; quasi-concave = convex better-than set)

When $f(x') \geq f(x'')$, [B.11] implies [B.10], so that quasi-concave functions have concave contours. The functions whose contours are shown in (a) and (b) of Fig. B.5 are quasi-concave, whereas that in (c) is not.

Solutions to Optimization Problems


Essential Properties of the Objective Function


Solutions to Optimization Problems


Essential Properties of the Objective Function

[Figure: a function that is not concave but nevertheless has a global solution; concavity is therefore not a necessary condition for a global solution.]

Solutions to Optimization Problems


Essential Properties of the Feasible Set

Definition
A set is non-empty if it contains at least one element, the empty set being the set with no elements.

Definition
A set is closed if all the points on its boundaries are elements of the set.

Example
The set of numbers $x$ on the interval $0 \leq x \leq 1$ is closed, while the sets $0 < x \leq 1$ and $0 \leq x < 1$ are not.

Solutions to Optimization Problems


Essential Properties of the Feasible Set

Definition
A set is bounded when it is not possible to go off to infinity in any direction while remaining within the set. It is always possible to enclose a bounded set within a sphere (open ball) of sufficiently large finite size.

Example
The set of numbers $x$ on the interval $0 < x < 1$ is bounded, while the set $x \geq 0$ is not.

Exercise
Characterize the following sets in terms of closedness and boundedness:
1. the set defined by $0 < x < 1$ (bounded but not closed)
2. the set defined by $x \geq 0$

Solutions to Optimization Problems


Essential Properties of the Feasible Set

Definition
A set is convex if every pair of points in it can be joined by a straight line which lies entirely within the set. Formally, a set $X$ is convex if for any $x', x'' \in X$
$\bar{x} = k\, x' + (1-k)\, x'' \in X,$   (9)
with $0 \leq k \leq 1$.

The intersection of convex sets is itself a convex set. To prove this, let $x'$, $x''$ be in both $A$ and $B$, which are convex sets. Then the point $\bar{x} = k\, x' + (1-k)\, x''$, $0 \leq k \leq 1$, is in $A$ and in $B$, because both sets are convex, and so is in $A \cap B$. But since $x'$ and $x''$ are any two points in $A \cap B$, this gives the result. The argument extends easily to any number of convex sets.

Solutions to Optimization Problems


Figure B.6

Existence of Solutions

Theorem (Weierstrass (Existence))


An optimization problem always has a solution if
1. the objective function $f$ is continuous, and
2. the feasible set $S$ is
   2.1 non-empty,
   2.2 closed, and
   2.3 bounded.

Existence of Solutions
Figure C.1
The objective function is given by $f(x)$, $x$ a scalar, and the feasible set by the set of values on the interval $0 \leq x \leq \bar{x}$. This feasible set is non-empty, closed and bounded. In (a) of the figure the function is not continuous.

Existence of Solutions

[Figure: $f(x)$ continuous on the closed, bounded feasible set; the maximum is attained at $x^*$.]

Existence of Solutions

[Figure: $f(x)$ with a discontinuity, so a maximum need not be attained.]

Existence of Solutions

[Figure: the feasible set is not bounded, so a maximum need not be attained.]

Existence of Solutions

[Figure: the feasible set is not closed, so a maximum need not be attained.]

Local and Global Optima

Consider a two-variable problem, in which we wish to maximize the function $f(x_1, x_2)$, with $f_1, f_2 > 0$. Given that we can find a local maximum of this function, under what conditions can we be sure that it is also a global maximum?

Maximization of a function
Maximization of a function over a given feasible set is equivalent to finding a point within that set which is on the highest possible contour.

Local and Global Optima


Figure D.1

Sufficient conditions for a local optimum also to be global must depend on the shapes of the feasible set and of the contours of the function. As (a) shows, it is not sufficient that the function be quasi-concave (there are two possible solutions, of which only $x^*$ is global); and (b) shows that quasi-concavity is not necessary (the objective function is not quasi-concave, yet $x^*$ is a global solution).

Local and Global Optima

Maximization of a function
Maximization of a function over a given feasible set is equivalent to finding a point within that set which is on the highest possible contour.

Sufficient conditions
Sufficient conditions for any local optimum also to be global must depend on
- the shapes of the feasible set, and
- the shapes of the contours of the objective function.

Local and Global Optima

Theorem
A local maximum is always a global maximum if
1. the objective function is quasiconcave, and
2. the feasible set is convex.

Figure D.2

Local and Global Optima

(contour of a quasi-concave objective function; convex feasible set)

Uniqueness of Solutions

Definition
A correspondence from $X$ to $Y$ is a functional relationship between an $x$-value and a set of $y$-values (e.g. the segment $ab$ on the previous slide: a set of solutions instead of a unique solution).

Theorem (Uniqueness)

Given an optimization problem in which the feasible set is convex and the objective function is non-constant and quasiconcave, a solution is unique if
- the feasible set $S$ is strictly convex, or
- the objective function $f$ is strictly quasiconcave, or
- both.

Uniqueness of Solutions

Figure E.1: global optima, respectively $x^*$, $x'$ and $x''$. In the first panel the objective function is strictly quasi-concave but the feasible set is not strictly convex (its upper boundary is convex but not strictly convex); in the second panel the feasible set is strictly convex.

Interior and Boundary Optima


Definition
An interior point of a point set is characterized by the existence of a (possibly very small) neighborhood around it which contains only points in the set.

Definition
A boundary point has the property that all neighborhoods around it, however small, contain points which are, and points which are not, in the set. Property: a solution to an optimization problem which is at an interior point of the feasible set is unaffected by small shifts in the boundaries of the set, while a solution at a boundary point will be sensitive to changes in at least one constraint.

Interior and Boundary Optima

Definition
It is simply necessary to assume that at any point in the feasible set it is always possible to find a small change in the value of at least one variable which will increase the value of the objective function. This is the property of local non-satiation.

Interior and Boundary Optima

The optimum is denoted $x^*$. The solution in (a) is unaffected by a small shift in the constraint, e.g. to $ab$; that in (b) is affected; that in (c) is changed by a shift in constraint $cd$ but not by that in $ab$, as illustrated. The absence of response of the solution in (a) is due to the assumed existence of a bliss point at $x^*$ (the peak of the hill whose contours are drawn in the figure).

Figure F.1 (panel (a): interior point / bliss point; panel (b): local non-satiation; panel (c): two constraints, $cd$ and $ab$, of which only $cd$ is binding)

Constrained Optimization: The Method of Lagrange


Maximization of a function
Maximization of a function over a given feasible set is equivalent to finding a point within that set which is on the highest possible contour. If the problem is to maximize $f: \mathbb{R}^2 \to \mathbb{R}: (x_1, x_2)' \mapsto f(x_1, x_2)$ subject to a linear constraint $g: \mathbb{R}^2 \to \mathbb{R}: (x_1, x_2)' \mapsto g(x_1, x_2) \equiv a_1 x_1 + a_2 x_2 = b$, it is intuitively clear that the solution arises at the point $x^* = (x_1^*, x_2^*)'$ where
$\dfrac{dx_2}{dx_1} = -\dfrac{f_1}{f_2} = -\dfrac{a_1}{a_2}.$

Constrained Optimization: The Method of Lagrange


If the constraint is of the more general form $g: \mathbb{R}^2 \to \mathbb{R}: (x_1, x_2)' \mapsto g(x_1, x_2) = b$, and the conditions for the IFT to apply are satisfied, $g$ implicitly defines $x_2$ as a function of $x_1$, and the slope of the constraint at a point $x$ equals
$\dfrac{dx_2}{dx_1} = -\dfrac{g_1}{g_2},$
and the solution arises at the point $x^* = (x_1^*, x_2^*)'$ where
$\dfrac{f_1}{f_2} = \dfrac{g_1}{g_2}$   (10)
$g(x_1^*, x_2^*) = b.$   (11)


Constrained Optimization: The Method of Lagrange


Figure G.1 (the contour of the objective function and the constraint): the essential fact about $x^*$ in the figure is that it is a point of tangency; the slope of the tangent to the contour of the objective function equals the slope of the constraint at the optimal point.

Constrained Optimization: The Method of Lagrange


Rewriting (10) as
$\dfrac{f_1}{g_1} = \dfrac{f_2}{g_2} = \lambda > 0$
implies
$f_1 = \lambda\, g_1$   (12)
$f_2 = \lambda\, g_2,$   (13)
i.e., at the constrained optimum, the gradient of the objective function is a scalar multiple of the gradient of the constraint function:
$\nabla f(x^*) = \lambda\, \nabla g(x^*).$   (14)

Constrained Optimization: The Method of Lagrange

Constrained Optimization: The Method of Lagrange


Definition
A problem of constrained optimization can generally be written as
$\max_x f(x) \quad \text{s.t.} \quad g(x) = b,$
and can be solved by constructing the Lagrangian
$L(x, \lambda) = f(x) - \lambda\,[g(x) - b],$   (15)
where the multiplier $\lambda$ has economic meaning, and computing the FOCs
$\nabla L(x^*, \lambda^*) = 0.$   (16)

Constrained Optimization: The Method of Lagrange


Example
Let $f: \mathbb{R}^2 \to \mathbb{R}: (x_1, x_2)' \mapsto f(x_1, x_2)$ and $g: \mathbb{R}^2 \to \mathbb{R}: (x_1, x_2)' \mapsto g(x_1, x_2) = b$. The Lagrangian for this two-variable, single-constraint optimization problem is
$L(x_1, x_2, \lambda) = f(x_1, x_2) - \lambda\,[g(x_1, x_2) - b],$
with corresponding FOCs $\nabla L(x^*, \lambda^*) = 0$:
$L_1 = f_1(x_1^*, x_2^*) - \lambda^* g_1 = 0$
$L_2 = f_2(x_1^*, x_2^*) - \lambda^* g_2 = 0$
$L_\lambda = -g(x_1^*, x_2^*) + b = 0.$
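A minimal worked instance of exactly these steps (the log objective, the coefficients in the constraint and the use of sympy are my own assumptions for illustration, not part of the slides):

```python
import sympy as sp

x1, x2, lam, b = sp.symbols('x1 x2 lam b', positive=True)

f = sp.log(x1) + sp.log(x2)                      # assumed objective function
g = 2*x1 + x2                                    # assumed constraint function, g(x) = b

L = f - lam*(g - b)                              # Lagrangian (15)
focs = [sp.diff(L, v) for v in (x1, x2, lam)]    # FOCs (16)

sol = sp.solve(focs, [x1, x2, lam], dict=True)[0]
print(sol)                                       # {x1: b/4, x2: b/2, lam: 2/b}
```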

Constrained Optimization: The Method of Lagrange


The FOCs (16) for an optimal solution yield a system of $n+1$ equations in $n+1$ unknowns, implicitly defining the optimal solution $(x_1^*, x_2^*, \lambda^*)'$ as a function of the exogenous parameter $b$. Under certain conditions (those of the IFT!) we can then solve for
$x_1^* = h^1(b)$
$x_2^* = h^2(b)$
$\lambda^* = h^\lambda(b),$
and substitute in the objective function to obtain the value function (in consumer problems, the indirect utility function)
$v(b) = f\big(h^1(b), h^2(b)\big).$

Constrained Optimization: The Method of Lagrange


Theorem (Envelope Theorem for Constrained Optimization)
The marginal impact of a change in a constraint parameter on the optimized value of the objective function is equal to the partial derivative of the Lagrangian with respect to that parameter, evaluated at the optimum:
$\dfrac{dv}{db} = \dfrac{\partial L(x_1^*, x_2^*, \lambda^*)}{\partial b}.$   (17)

Example
$\dfrac{dv}{db} = \dfrac{\partial L(x_1^*, x_2^*, \lambda^*)}{\partial b} = \dfrac{\partial}{\partial b}\Big[ f(x_1^*, x_2^*) - \lambda^*\big(g(x_1^*, x_2^*) - b\big) \Big] = \lambda^*.$
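Continuing the assumed log-utility example introduced earlier, the result $dv/db = \lambda^*$ can be confirmed symbolically (again a sketch under my own assumptions, not the slides' example):

```python
import sympy as sp

x1, x2, lam, b = sp.symbols('x1 x2 lam b', positive=True)
f = sp.log(x1) + sp.log(x2)                   # same assumed objective as before
L = f - lam*(2*x1 + x2 - b)

sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)[0]

v = f.subs({x1: sol[x1], x2: sol[x2]})        # value function v(b)
print(sp.simplify(sp.diff(v, b) - sol[lam]))  # 0  ->  dv/db equals lambda*
```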

Constrained Optimization: The Method of Lagrange

Figure G.2

Constrained Optimization: The Method of Lagrange

Definition
The Lagrange multiplier $\lambda^*$ measures the rate at which the optimized value of the objective function varies with changes in the constraint parameter. The theorem which establishes this result is the Envelope Theorem. Economic interpretations are
- the marginal utility of income (consumer theory)
- the marginal impact on profits of a relaxation of the cost constraint (producer theory).

Constrained Optimization: The Method of Lagrange


In case of $n$ variables and $m > 1$ constraints $g^j$, write the Lagrangian as
$L(x, \lambda) = f(x) - \sum_{j=1}^{m} \lambda_j\, g^j(x).$
The corresponding FOCs are
$L_i = f_i - \sum_{j=1}^{m} \lambda_j\, g_i^j = 0, \quad i = 1, \ldots, n$
$L_{\lambda_j} = -g^j = 0, \quad j = 1, \ldots, m.$
At the optimum, the gradient of the objective function is a linear combination of the gradients of the constraints:
$\nabla f(x^*) = \sum_{j=1}^{m} \lambda_j\, \nabla g^j(x^*).$

Constrained Optimization: The Method of Lagrange


Theorem (Implicit Function Theorem)
Suppose we have a system of $m$ continuously differentiable implicit functions in $n > m$ variables, of the form
$g^1(x_1, \ldots, x_m; x_{m+1}, \ldots, x_n) = 0$
$g^2(x_1, \ldots, x_m; x_{m+1}, \ldots, x_n) = 0$
$\vdots$
$g^m(x_1, \ldots, x_m; x_{m+1}, \ldots, x_n) = 0.$
If the determinant of the Jacobian matrix $D$ of the first $m$ partial derivatives of the $g^i$, $i = 1, \ldots, m$, is non-zero, then we can find $m$ continuous functions $h^j(x_{m+1}, \ldots, x_n)$ such that $x_j = h^j(x_{m+1}, \ldots, x_n)$ and $g^j(h^1, \ldots, h^m; x_{m+1}, \ldots, x_n) \equiv 0$, $j = 1, \ldots, m$.

Constrained Optimization: The Method of Lagrange


Exercise
Completely characterize the solution to the introductory example of a two-variable optimization problem subject to a linear constraint
$\max_{x_1, x_2} f(x_1, x_2) \quad \text{s.t.} \quad a_1 x_1 + a_2 x_2 = b$
and compute the following derivatives:
1. $\dfrac{dv}{da_1}$
2. $\dfrac{dv}{da_2}$,
where $v$ denotes the value function. (Hint: use the Envelope Theorem.)

Constrained Optimization: The Method of Lagrange


Theorem (Nondegenerate Constraint Qualification (NDCQ))
Let $f, g_1, \ldots, g_k$ be $C^1$ functions of $n$ variables and suppose $x^*$ is a local maximizer of $f$ on the constraint set defined by
$g_1(x) \leq b_1, \ldots, g_k(x) \leq b_k.$
Form the Lagrangian
$L(x_1, \ldots, x_n, \lambda_0, \lambda_1, \ldots, \lambda_k) = \lambda_0 f(x) - \sum_{j=1}^{k} \lambda_j \big[ g_j(x) - b_j \big].$   (18)
Suppose that $g_1, \ldots, g_h$ yield binding constraints at $x^*$ and that $g_{h+1}, \ldots, g_k$ are not binding at $x^*$. If the Jacobian matrix
$\left( \dfrac{\partial g_i(x^*)}{\partial x_j} \right)_{i = 1, \ldots, h;\; j = 1, \ldots, n}$
of the binding constraint functions has maximal rank $h$, then we can take $\lambda_0 = 1$ in (18).

Constrained Optimization: The Method of Lagrange

Exercise
Solve the following constrained optimization problem and comment on its solution:
$\max_{x, y} \; f(x, y) = \tfrac{1}{3}x^3 - \tfrac{3}{2}y^2 + 2x \quad \text{s.t.} \quad g(x, y) \equiv x - y = 0.$

Concave Programming and the Karush-Kuhn-Tucker Conditions

In economics, inequality constraints arise more naturally than equality constraints; think e.g. of
- prices $p \geq 0$
- demand $D_i \geq 0$
- inputs $x \in \mathbb{R}^n_+$
- production $q_i \geq 0$
- the budget constraint $B(x, p, m) = \{x \in \mathbb{R}^n_+ \,|\, p'x \leq m\}$
- ...

Concave Programming and the Karush-Kuhn-Tucker Conditions

Consequences
The result is that corner solutions may arise where some of the optimal choice variables are zero, $x_i^* = 0$, and therefore the FOC becomes $\nabla f(x^*) \leq 0$.


Concave Programming and the Karush-Kuhn-Tucker Conditions



Figure H.1
($C_i'(x) \geq R_i'(x)$ when $x_i = 0$ and all other outputs are at their optimal values.) But at $x_i^* = 0$, the slope of the profit function is negative, not zero, implying that [H.1] is not a necessary condition for an optimum, since an optimal point exists at which it is not satisfied. In the case shown the constraint [H.2] is binding, since without it the firm would seek to increase profit by producing negative $x_i$.

Concave Programming and the Karush-Kuhn-Tucker Conditions


The appropriate KKT FOCs for inequality-constrained optimization problems (including nonnegativity constraints) therefore are
$\dfrac{\partial L}{\partial x_i} \leq 0, \quad x_i \geq 0, \quad x_i \dfrac{\partial L}{\partial x_i} = 0, \quad i = 1, \ldots, n$   (19a)
$\dfrac{\partial L}{\partial \lambda_j} \geq 0, \quad \lambda_j \geq 0, \quad \lambda_j \dfrac{\partial L}{\partial \lambda_j} = 0, \quad j = 1, \ldots, m,$   (19b)
where
$\dfrac{\partial L}{\partial x_i} = f_i(x^*) - \sum_{j=1}^{m} \lambda_j\, g_i^j(x^*)$
$\dfrac{\partial L}{\partial \lambda_j} = b_j - g^j(x^*).$
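Conditions (19) can also be explored numerically. The sketch below is my own illustration (the log objective, the single resource constraint and the numbers are assumptions, not from the slides); scipy's SLSQP solver returns a KKT point whose complementary slackness can then be inspected:

```python
import numpy as np
from scipy.optimize import minimize

# assumed example: max ln(1+x1) + ln(1+x2)  s.t.  x1 + 2*x2 <= 4,  x1, x2 >= 0
neg_f = lambda x: -(np.log(1 + x[0]) + np.log(1 + x[1]))
cons = [{'type': 'ineq', 'fun': lambda x: 4 - x[0] - 2*x[1]}]   # b - g(x) >= 0

res = minimize(neg_f, x0=[1.0, 1.0], bounds=[(0, None), (0, None)],
               constraints=cons, method='SLSQP')

print(res.x)                       # approx. [2.5, 0.75]: both choice variables > 0
print(4 - res.x[0] - 2*res.x[1])   # approx. 0: the resource constraint binds
```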

Concave Programming and the Karush-Kuhn-Tucker Conditions

Example
Consider once more our two-variable optimization problem subject to a linear constraint
$\max_{x_1, x_2} f(x_1, x_2) \quad \text{s.t.} \quad a_1 x_1 + a_2 x_2 = b,$
where we now impose the nonnegativity constraints $x_1, x_2 \geq 0$. Assume $f$ is strictly increasing and strictly quasi-concave, and that the contours of $f$ are everywhere steeper than the constraint.

$L(x_1, x_2, \lambda) = f(x_1, x_2) - \lambda\,[a_1 x_1 + a_2 x_2 - b]$   [H.18]

Figure H.4

Concave Programming and the Karush-Kuhn-Tucker Conditions

Concave Programming and the Karush-Kuhn-Tucker Conditions


The Lagrangian reads
$L(x_1, x_2, \lambda) = f(x_1, x_2) - \lambda\,(a_1 x_1 + a_2 x_2 - b),$
with corresponding FOCs (see equations (19))
$L_1 = f_1 - \lambda a_1 \leq 0, \quad x_1 \geq 0, \quad x_1 (f_1 - \lambda a_1) = 0$
$L_2 = f_2 - \lambda a_2 \leq 0, \quad x_2 \geq 0, \quad x_2 (f_2 - \lambda a_2) = 0$
$L_\lambda = -(a_1 x_1 + a_2 x_2 - b) \geq 0, \quad \lambda \geq 0, \quad \lambda (a_1 x_1 + a_2 x_2 - b) = 0.$
At the optimum, we have $x_1^* > 0$, $x_2^* = 0$; hence
$f_1 = \lambda a_1, \quad f_2 \leq \lambda a_2,$
yielding
$\dfrac{f_1}{f_2} \geq \dfrac{a_1}{a_2}.$

Concave Programming and the Karush-Kuhn-Tucker Conditions


$\max_x f(x) \quad \text{s.t.} \quad g(x) \leq b$

WARNING: The sign of the Lagrange multiplier
In inequality-constrained maximization problems, the Lagrange multiplier MUST be positive at the constrained optimum:
$\nabla f(x^*) = \lambda^* \nabla g(x^*), \quad \lambda^* > 0.$
Hence we NEED to construct the Lagrangian as
$L(x, \lambda) = f(x) - \lambda\,[g(x) - b].$

Concave Programming and the Karush-Kuhn-Tucker Conditions


[Figure: the gradient of $f$ always points in the direction of sharpest increase of $f$; at the constrained optimum it is a positive multiple of the gradient of the constraint.]

Concave Programming and the Karush-Kuhn-Tucker Conditions


$\min_x f(x) \quad \text{s.t.} \quad g(x) \geq b$

WARNING: The sign of the Lagrange multiplier
In inequality-constrained minimization problems, the Lagrange multiplier MUST likewise be positive at the constrained optimum:
$\nabla f(x^*) = \lambda^* \nabla g(x^*), \quad \lambda^* > 0.$
Hence we NEED to construct the Lagrangian as
$L(x, \lambda) = f(x) - \lambda\,[g(x) - b].$

Concave Programming and the Karush-Kuhn-Tucker Conditions


solu+on is within binding constraint set gradients point to opposite direc+ons

Concave Programming and the Karush-Kuhn-Tucker Conditions


Exercise
Consider the following inequality-constrained optimization problem
$\max_{x_1, x_2} f(x_1, x_2)$
$\text{s.t.} \quad a_1 x_1 + a_2 x_2 \leq b_1$
$\qquad\; c_1 x_1 + c_2 x_2 \leq b_2$
$\qquad\; x_1, x_2 \geq 0,$
where $f$ is concave and $f_1, f_2 > 0$. It is assumed that the constraints are such that they intersect in the positive quadrant.

Concave Programming and the Karush-Kuhn-Tucker Conditions


Exercise
1. Sketch all possible solutions to this problem. Does an optimum exist? (Hint: use the Weierstrass Theorem.)
2. Write down the Lagrangian and state the appropriate Karush-Kuhn-Tucker conditions.
3. Using the Karush-Kuhn-Tucker conditions and assuming an interior solution $x^* > 0$, characterize the optimum where
   3.1 only constraint 1 is binding
   3.2 only constraint 2 is binding
   3.3 both constraints are binding.

Comparative Statics
Implicit Function Theorem for a System of Equations

Let $F_1, \ldots, F_m : \mathbb{R}^{n+m} \to \mathbb{R}$ be $C^1$ functions. Consider the system of equations
$F_1(y_1, \ldots, y_m, x_1, \ldots, x_n) = c_1$
$\vdots$
$F_m(y_1, \ldots, y_m, x_1, \ldots, x_n) = c_m$   (20)
as possibly defining $y_1, \ldots, y_m$ as implicit functions of $x_1, \ldots, x_n$.

Comparative Statics
Implicit Function Theorem for a System of Equations

Suppose that $(y^*, x^*)$ is a solution of (20). If the determinant of the $m \times m$ Jacobian matrix of $F$,
$J_F(x) = \begin{pmatrix} \frac{\partial F_1}{\partial y_1}(x) & \cdots & \frac{\partial F_1}{\partial y_m}(x) \\ \vdots & \ddots & \vdots \\ \frac{\partial F_m}{\partial y_1}(x) & \cdots & \frac{\partial F_m}{\partial y_m}(x) \end{pmatrix},$   (21)
evaluated at $(y^*, x^*)$ is nonzero, then there exist $C^1$ functions
$y_1 = f_1(x_1, \ldots, x_n)$
$\vdots$
$y_m = f_m(x_1, \ldots, x_n)$   (22)
defined on a ball $B$ about $x^*$ such that

Comparative Statics
Implicit Function Theorem for a System of Equations

$F_1\big(f_1(x), \ldots, f_m(x), x_1, \ldots, x_n\big) = c_1$
$\vdots$
$F_m\big(f_1(x), \ldots, f_m(x), x_1, \ldots, x_n\big) = c_m$   (23)
for all $x = (x_1, \ldots, x_n)' \in B$, and
$y_1^* = f_1(x_1^*, \ldots, x_n^*)$
$\vdots$
$y_m^* = f_m(x_1^*, \ldots, x_n^*).$   (24)

Comparative Statics
Implicit Function Theorem for a System of Equations

Furthermore, one can compute $(\partial f_k / \partial x_h)(y^*, x^*) = (\partial y_k / \partial x_h)(y^*, x^*)$ by (i) totally differentiating (= linearizing) the system (20), (ii) setting $dx_h = 1$ and $dx_j = 0$ for $j \neq h$, and (iii) solving the resulting system for the $dy_k$ by either
- inverting the coefficient matrix (21) to obtain
$\begin{pmatrix} \dfrac{\partial y_1}{\partial x_h} \\ \vdots \\ \dfrac{\partial y_m}{\partial x_h} \end{pmatrix} = - \begin{pmatrix} \dfrac{\partial F_1}{\partial y_1} & \cdots & \dfrac{\partial F_1}{\partial y_m} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial F_m}{\partial y_1} & \cdots & \dfrac{\partial F_m}{\partial y_m} \end{pmatrix}^{-1} \begin{pmatrix} \dfrac{\partial F_1}{\partial x_h} \\ \vdots \\ \dfrac{\partial F_m}{\partial x_h} \end{pmatrix},$   (IFT-2a)

Comparative Statics
Implicit Function Theorem for a System of Equations

- or applying Cramer's Rule and directly computing
$\dfrac{\partial y_k}{\partial x_h} = - \dfrac{\det\!\big(J_F \text{ with its } k\text{-th column replaced by } (\partial F_1/\partial x_h, \ldots, \partial F_m/\partial x_h)'\big)}{\det J_F}.$   (IFT-2b)
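A minimal numeric sketch of (IFT-2a): the two-equation system and the solution point below are my own assumptions, chosen so the comparative-statics derivatives are easy to verify by hand.

```python
import numpy as np

# assumed example system F(y; x) = c:
#   F1: y1*y2   = x   (x is the exogenous parameter)
#   F2: y1 + y2 = 3
# At x = 2 a solution is (y1, y2) = (1, 2).
y1, y2 = 1.0, 2.0

J = np.array([[y2, y1],     # dF1/dy1, dF1/dy2
              [1.0, 1.0]])  # dF2/dy1, dF2/dy2
dF_dx = np.array([-1.0, 0.0])   # writing F1 as y1*y2 - x = 0, F2 as y1 + y2 - 3 = 0

dy_dx = -np.linalg.solve(J, dF_dx)   # comparative statics via (IFT-2a)
print(dy_dx)                         # [ 1. -1.]: dy1/dx = 1, dy2/dx = -1
```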

Second-Order Conditions
For an unconstrained optimization problem, the SOC is that the corresponding Hessian matrix $\nabla^2 f(x^*)$ must be
- negative definite for a maximum
- positive definite for a minimum.

Verification: construct the leading principal minors $|H_k|$ of order $k = 1, \ldots, n$,
$|H_1| = |f_{11}|, \quad |H_2| = \begin{vmatrix} f_{11} & f_{12} \\ f_{21} & f_{22} \end{vmatrix}, \quad \ldots, \quad |H_n| = \begin{vmatrix} f_{11} & \cdots & f_{1n} \\ \vdots & \ddots & \vdots \\ f_{n1} & \cdots & f_{nn} \end{vmatrix},$
and check their signs:
- if $\operatorname{sgn}|H_k| = (-1)^k$ for $k = 1, \ldots, n$, then $\nabla^2 f(x^*)$ is negative definite
- if $\operatorname{sgn}|H_k| > 0$ for all $k$, then $\nabla^2 f(x^*)$ is positive definite.
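These sign checks are easy to automate. A sketch with numpy (the quadratic objective is my own assumed example):

```python
import numpy as np

# Hessian of the assumed example f(x1, x2) = -x1**2 - x2**2 + x1*x2 (constant here)
H = np.array([[-2.0, 1.0],
              [1.0, -2.0]])

minors = [np.linalg.det(H[:k, :k]) for k in range(1, H.shape[0] + 1)]
print(minors)   # [-2.0, 3.0]: signs follow (-1)^k, so H is negative definite (maximum)
```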

Second-Order Conditions
For a constrained optimization problem, the SOCs are a bit more complicated. The intuition, however, is as follows: consider the problem
$\max_{x_1, x_2} f(x_1, x_2) \quad \text{s.t.} \quad g(x_1, x_2) = 0,$
where, as usual, the optimum is characterized by a tangency condition:
$\left.\dfrac{dx_2}{dx_1}\right|_f = \left.\dfrac{dx_2}{dx_1}\right|_g.$
Since $x^*$ at this point is a constrained local maximum, it must not be possible to reach a higher contour of $f$ (assuming $f_i > 0$, $i = 1, 2$) by moving in feasible directions for the $x_i$.

Second-Order Conditions

The geometry of the SOC


Small movements along the $g$-contour away from $x^*$ must reduce the value of the objective function. If $x_1$ increases (decreases) from $x_1^*$, then the slope of the $g$-contour must become greater (smaller) in absolute value, or smaller (greater) algebraically, than the slope of the $f$-contour:
$\dfrac{d}{dx_1}\left( \left.\dfrac{dx_2}{dx_1}\right|_f - \left.\dfrac{dx_2}{dx_1}\right|_g \right) > 0$   (25)
at $x = x^*$.

Second-Order Conditions

Figure I.2 (the constraint contour $g$ has a bigger absolute slope than the objective-function contour away from $x^*$: $f$ is "less concave" than $g$); the figure illustrates condition (25), equation [I.18] in the textbook.

Second-Order Conditions

Second-Order Conditions
In the general case with $n$ variables and $m < n$ equality constraints, verifying the SOCs boils down to constructing the bordered Hessian matrix
$\bar{H} = \begin{pmatrix} L_{11} & \cdots & L_{1n} & g_1^1 & \cdots & g_1^m \\ \vdots & & \vdots & \vdots & & \vdots \\ L_{n1} & \cdots & L_{nn} & g_n^1 & \cdots & g_n^m \\ g_1^1 & \cdots & g_n^1 & 0 & \cdots & 0 \\ \vdots & & \vdots & \vdots & & \vdots \\ g_1^m & \cdots & g_n^m & 0 & \cdots & 0 \end{pmatrix},$
and checking the signs of the corresponding bordered principal minors $|\bar{H}_k|$.

Second-Order Conditions

Easier: construct $\bar{H}$ with the borders first,
$\bar{H} = \begin{pmatrix} 0 & \cdots & 0 & g_1^1 & \cdots & g_n^1 \\ \vdots & & \vdots & \vdots & & \vdots \\ 0 & \cdots & 0 & g_1^m & \cdots & g_n^m \\ g_1^1 & \cdots & g_1^m & L_{11} & \cdots & L_{1n} \\ \vdots & & \vdots & \vdots & & \vdots \\ g_n^1 & \cdots & g_n^m & L_{n1} & \cdots & L_{nn} \end{pmatrix}.$
In this case, we need to verify the sign of the last $n - m$ leading principal minors of $\bar{H}$, and the sign of $|\bar{H}|$ itself.

Second-Order Conditions
- If $\operatorname{sgn}|\bar{H}| = (-1)^n$ and the last $n - m$ leading principal minors alternate in sign, then $\bar{H}$ is negative definite on the nullspace defined by the $m$ constraints, and $x^*$ is a (local) maximum.
- If $\operatorname{sgn}|\bar{H}| = (-1)^m$ and the last $n - m$ leading principal minors all have the same sign, then $\bar{H}$ is positive definite on the nullspace defined by the $m$ constraints, and $x^*$ is a (local) minimum.
- If some of the nonzero leading principal minors violate the conditions stated above, $\bar{H}$ is indefinite and $x^*$ is neither a minimum nor a maximum.

The Envelope Theorem


Duality Theory

Applying the Envelope Theorem greatly simplifies comparative statics analysis; in fact, a number of fundamental economic results derive from duality theory and the repeated application of the Envelope Theorem, e.g.
- Hotelling's Lemma
- Shephard's Lemma
- Roy's Identity

Theorem (The Envelope Theorem for Unconstrained Optimization Problems)


Let $f(x; \alpha)$ be a $C^1$ function of $x \in \mathbb{R}^n$ and the scalar $\alpha$. For each choice of the parameter $\alpha$, consider the unconstrained maximization problem
$\max_x f(x; \alpha).$
Let $x^*(\alpha)$ be a solution of this problem, and suppose $x^*(\alpha)$ is a $C^1$ function of $\alpha$. Then,
$\dfrac{d}{d\alpha} f\big(x^*(\alpha); \alpha\big) = \dfrac{\partial f\big(x^*(\alpha); \alpha\big)}{\partial \alpha} = \left.\dfrac{\partial f(x; \alpha)}{\partial \alpha}\right|_{x = x^*(\alpha)}.$   (ET-1)

Theorem (The Envelope Theorem for Constrained Optimization Problems)


Let $f, h_1, \ldots, h_k : \mathbb{R}^n \times \mathbb{R} \to \mathbb{R}$ be $C^1$ functions. Let $x^*(\alpha) = \big(x_1^*(\alpha), \ldots, x_n^*(\alpha)\big)'$ denote the solution of the problem of maximizing $x \mapsto f(x; \alpha)$ on the constraint set
$h_1(x; \alpha) = 0, \ldots, h_k(x; \alpha) = 0,$
for any choice of the parameter $\alpha$. Suppose that $x^*(\alpha)$ and the Lagrange multipliers $\lambda_1^*(\alpha), \ldots, \lambda_k^*(\alpha)$ are $C^1$ functions of $\alpha$ and that the NDCQ holds. Then,
$\dfrac{d}{d\alpha} f\big(x^*(\alpha); \alpha\big) = \dfrac{\partial L\big(x^*(\alpha), \lambda^*(\alpha); \alpha\big)}{\partial \alpha} = \left.\dfrac{\partial L(x, \lambda; \alpha)}{\partial \alpha}\right|_{x = x^*(\alpha)},$   (ET-2)
where $L$ is the Lagrangian for this problem.

The Envelope Theorem


Duality Theory

In plain words, the Envelope Theorem states that the effect of varying the scalar $\alpha$ on the optimized objective function $v = f\big(x^*(\alpha); \alpha\big)$ is given by the partial derivative of the objective function with respect to $\alpha$, evaluated at the optimal solution point $x^*(\alpha)$. For constrained optimization, replace the objective function with the Lagrangian.

The Directional Derivative


We want to compute the rate of change of a function $f: \mathbb{R}^n \to \mathbb{R}$ at a given point $x^* = (x_1^*, \ldots, x_n^*)'$ in any given direction $h = (h_1, \ldots, h_n)'$. To do so, write the parameterized equation of the line through $x^*$ in the direction $h$ as
$x = x^* + t\, h.$
Evaluating $f$ along this line,
$g(t) \equiv f(x^* + t\, h) = f(x_1^* + t h_1, \ldots, x_n^* + t h_n),$
and taking the derivative of $g$ at $t = 0$ yields
$g'(0) = \dfrac{\partial f}{\partial x_1}(x^*)\, h_1 + \cdots + \dfrac{\partial f}{\partial x_n}(x^*)\, h_n.$

The Directional Derivative

Definition (The Directional Derivative)
Let $f: \mathbb{R}^n \to \mathbb{R}: x \mapsto f(x)$. The directional derivative of $f$ at $x^*$ in the direction $h$ is defined as
$Df_{x^*}(h) \equiv \left( \dfrac{\partial f}{\partial x_1}(x^*), \ldots, \dfrac{\partial f}{\partial x_n}(x^*) \right) \begin{pmatrix} h_1 \\ \vdots \\ h_n \end{pmatrix} = Df(x^*)\, h.$   (26)
To evaluate how $f$ changes at $x^*$ in the direction of $x_1$, set $h = e_1 = (1, 0, \ldots, 0)'$.

The Gradient
The vector $Df(x^*)$ in (26) has special meaning:

Definition (The Gradient)
Let $f: \mathbb{R}^n \to \mathbb{R}: x \mapsto f(x)$. The gradient (vector) of $f$ at $x^*$ is defined as the $n \times 1$ column vector of partial derivatives of $f$ evaluated at $x^*$:
$\nabla f(x^*) = \begin{pmatrix} \dfrac{\partial f}{\partial x_1}(x^*) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(x^*) \end{pmatrix}.$   (27)
The gradient is interpreted as a vector with tail at $x^*$, and has length and direction.
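A small numeric sketch of (26) and (27); the function, the point and the direction are my own assumptions for illustration:

```python
import numpy as np

def f(x):
    return x[0]**2 + x[1]**2             # assumed example function

def grad_f(x):
    return np.array([2*x[0], 2*x[1]])    # analytic gradient (27)

x_star = np.array([1.0, 2.0])
h = np.array([1.0, 0.0])                 # direction of x1, i.e. h = e1

print(grad_f(x_star) @ h)                # directional derivative (26): 2.0

# finite-difference check of the same directional derivative
t = 1e-6
print((f(x_star + t*h) - f(x_star)) / t) # approx. 2.0
```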

The Gradient

Normalizing $\|h\| = 1$, by the property of the dot product,
$Df_{x^*}(h) = \nabla f(x^*) \cdot h = \|\nabla f(x^*)\| \, \|h\| \cos\theta = \|\nabla f(x^*)\| \cos\theta,$
where $\theta$ is the angle between both vectors. Hence, the direction of
- greatest increase in $f$ at the point $x^*$ is in the direction of vectors $h$ that point in the same direction as the gradient
- zero increase in $f$ at the point $x^*$ is in the direction of vectors $h$ that are perpendicular to the gradient.

Theorem (Direction of Greatest Increase in $f$)
Let $f: \mathbb{R}^n \to \mathbb{R}$ be a $C^1$ function. At any point $x$ in the domain of $f$ at which $\nabla f(x) \neq 0$, the gradient vector $\nabla f(x)$ points at $x$ into the direction in which $f$ increases most rapidly.

The Gradient

Theorem (Level Sets and Gradients)


Let $g: \mathbb{R}^n \to \mathbb{R}$ be a $C^1$ function in the neighborhood of $x^0 = (x_1^0, x_2^0)'$. Suppose that $x^0$ is a regular point of $g$. Then, the gradient vector $\nabla g(x^0)$ is perpendicular to the level set of $g$ at $x^0$.

The Gradient
Proof.
By the IFT, the slope of the level curve of $g$ at $x^0$ is
$\dfrac{dx_2}{dx_1}(x_1^0) = - \dfrac{\partial g(x_1^0, x_2^0)/\partial x_1}{\partial g(x_1^0, x_2^0)/\partial x_2},$
realized by the tangent vector
$h = \left( 1, \; - \dfrac{\partial g(x_1^0, x_2^0)/\partial x_1}{\partial g(x_1^0, x_2^0)/\partial x_2} \right)'.$
Hence,
$h \cdot \nabla g(x^0) = \left( 1, \; - \dfrac{\partial g(x_1^0, x_2^0)/\partial x_1}{\partial g(x_1^0, x_2^0)/\partial x_2} \right) \begin{pmatrix} \dfrac{\partial g(x^0)}{\partial x_1} \\[4pt] \dfrac{\partial g(x^0)}{\partial x_2} \end{pmatrix} = 0.$

The Gradient

(The gradient is perpendicular to the tangent of the contour.)

Other Useful Properties

Convex Sets
For a quasiconcave function $f$, the upper contour sets $\{x \in \mathbb{R}^n : f(x) \geq a\}$ are convex sets, for all values of $a$.

Quasiconcave Functions
A function $f: \mathbb{R}^n \to \mathbb{R}$ is quasiconcave if the upper contour sets of the function are convex sets.

Other Useful Properties

Homogeneity
A function $f: \mathbb{R}^n_+ \to \mathbb{R}$ is homogeneous of degree $k$ if
$f(tx) = t^k f(x), \quad \forall\, t > 0.$   (28)

In economics, the most important homogeneous functions are those of the zeroth and first degree:
$f(tx) = f(x)$   (29)
$f(tx) = t f(x).$   (30)

Other Useful Properties


Homotheticity
A homothetic function is a monotonic transformation of a function that is homogeneous of degree 1, or
$f(x) = g\big(h(x)\big),$   (31)
with $h$ homogeneous of degree 1 and $g$ a monotonic function.

Monotonic Transformation
A function $g: D \subseteq \mathbb{R} \to \mathbb{R}$ is a (positive) monotonic transformation if it is a strictly increasing function, or
$x > y \;\Rightarrow\; g(x) > g(y), \quad \forall\, x, y \in D \subseteq \mathbb{R}.$   (32)

Other Useful Properties

Homogeneity.
If $f: \mathbb{R}^n_+ \to \mathbb{R}$ is homogeneous of degree $k \geq 1$, then $\dfrac{\partial f(x)}{\partial x_i}$ is homogeneous of degree $k - 1$.

Proof.
Differentiate the identity $f(tx) = t^k f(x)$ w.r.t. $x_i$ to get
$t\, \dfrac{\partial f(tx)}{\partial (t x_i)} = t^k\, \dfrac{\partial f(x)}{\partial x_i},$
and divide both sides by $t$.

Other Useful Properties

Theorem (Euler's Theorem)
Suppose $f: \mathbb{R}^n_+ \to \mathbb{R}$ is homogeneous of degree $k$. Then
$\nabla f(x) \cdot x = k f(x),$
or, in scalar notation,
$\dfrac{\partial f}{\partial x_1} x_1 + \cdots + \dfrac{\partial f}{\partial x_n} x_n = k f(x_1, \ldots, x_n).$   (33)

Other Useful Properties


Equation (33) is used over and over again to 1. verify the homogeneity of degree 0 property of consumer and rm demand, which yields X fi xi D 0;
i

and 2. characterize CRT (= homogeneous of degree 1) production functions, yielding X fi xi D f .x1 ; : : : ; xn /:


i
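Euler's Theorem (33) is easy to verify symbolically; a sketch for an assumed CRS Cobb-Douglas technology (the specific exponents are my own choice):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
a = sp.Rational(1, 3)
f = x1**a * x2**(1 - a)                # assumed CRS (degree-1 homogeneous) function

euler_lhs = sp.diff(f, x1)*x1 + sp.diff(f, x2)*x2
print(sp.simplify(euler_lhs - f))      # 0  ->  sum_i f_i x_i = f(x), i.e. k = 1
```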

Constrained Optimization: The Method of Lagrange


Example
Consider the problem of maximizing $f: \mathbb{R}^3 \to \mathbb{R}: (x_1, x_2, x_3)' \mapsto f(x_1, x_2, x_3) = x_1^2 x_2^2 x_3^2$ subject to the constraint $g: \mathbb{R}^3 \to \mathbb{R}: (x_1, x_2, x_3)' \mapsto g(x_1, x_2, x_3) \equiv x_1^2 + x_2^2 + x_3^2 = 3$. In verifying NDCQ, note that $\nabla g(x) = (2x_1, 2x_2, 2x_3)' = 0 \Leftrightarrow x = 0$, a point which is not in the constraint set $C_g$. As NDCQ is satisfied, we can use Lagrange's method properly and solve the problem.

Constrained Optimization: The Method of Lagrange


Example

The corresponding Lagrangian is
$L(x_1, x_2, x_3, \lambda) = x_1^2 x_2^2 x_3^2 - \lambda\,(x_1^2 + x_2^2 + x_3^2 - 3),$
with FOCs
$\dfrac{\partial L}{\partial x_1} = 2 x_1 x_2^2 x_3^2 - 2\lambda x_1 = 0$
$\dfrac{\partial L}{\partial x_2} = 2 x_1^2 x_2 x_3^2 - 2\lambda x_2 = 0$
$\dfrac{\partial L}{\partial x_3} = 2 x_1^2 x_2^2 x_3 - 2\lambda x_3 = 0$
$\dfrac{\partial L}{\partial \lambda} = -(x_1^2 + x_2^2 + x_3^2 - 3) = 0,$
and (positive) solution $(x_1^*, x_2^*, x_3^*, \lambda^*)' = (1, 1, 1, 1)'$.
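These FOCs can be reproduced mechanically; a sketch with sympy, restricted (via the positivity assumption on the symbols) to the positive solution reported above:

```python
import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lam', positive=True)

f = x1**2 * x2**2 * x3**2
g = x1**2 + x2**2 + x3**2 - 3
L = f - lam*g

focs = [sp.diff(L, v) for v in (x1, x2, x3, lam)]
print(sp.solve(focs, [x1, x2, x3, lam], dict=True))   # [{x1: 1, x2: 1, x3: 1, lam: 1}]
```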

Constrained Optimization: The Method of Lagrange


Example

The SOC for a constrained maximum states that the bordered Hessian must be negative definite subject to the constraint $g$, i.e. we need to check the last $n - m = 3 - 1 = 2$ leading principal minors of the bordered Hessian
$\bar{H} = \begin{pmatrix} 0 & 2x_1 & 2x_2 & 2x_3 \\ 2x_1 & 2x_2^2 x_3^2 - 2\lambda & 4x_1 x_2 x_3^2 & 4x_1 x_2^2 x_3 \\ 2x_2 & 4x_1 x_2 x_3^2 & 2x_1^2 x_3^2 - 2\lambda & 4x_1^2 x_2 x_3 \\ 2x_3 & 4x_1 x_2^2 x_3 & 4x_1^2 x_2 x_3 & 2x_1^2 x_2^2 - 2\lambda \end{pmatrix}$
at the candidate solution $(1, 1, 1, 1)'$.

Constrained Optimization: The Method of Lagrange


Example

First, at the candidate solution,
$\bar{H}_4 = \begin{pmatrix} 0 & 2 & 2 & 2 \\ 2 & 0 & 4 & 4 \\ 2 & 4 & 0 & 4 \\ 2 & 4 & 4 & 0 \end{pmatrix}.$
For a constrained maximum, it must be the case that
1. $\operatorname{sgn}|\bar{H}| = \operatorname{sgn}|\bar{H}_4| = (-1)^n = (-1)^3 < 0$, and hence
2. $\operatorname{sgn}|\bar{H}_3| > 0$, where $\bar{H}_3$ is found by eliminating the last row and column from $\bar{H}_4$.
We find that $|\bar{H}_4| = -192$ and $|\bar{H}_3| = 32$; therefore $(x_1^*, x_2^*, x_3^*, \lambda^*)' = (1, 1, 1, 1)'$ is a local constrained maximum.
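The two determinants quoted above are easy to confirm numerically; a short check with numpy:

```python
import numpy as np

# bordered Hessian at (x1, x2, x3, lam) = (1, 1, 1, 1), border first
H4 = np.array([[0., 2., 2., 2.],
               [2., 0., 4., 4.],
               [2., 4., 0., 4.],
               [2., 4., 4., 0.]])
H3 = H4[:3, :3]                               # drop the last row and column

print(np.linalg.det(H4), np.linalg.det(H3))   # approx. -192 and 32
```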
