M.Sc. Economics
Katholieke Universiteit Leuven, 2011–2012
Outline
Mathematics
The Structure of an Optimization Problem
Solutions to Optimization Problems
Existence of Solutions
Local and Global Optima
Uniqueness of Solutions
Interior and Boundary Optima
Constrained Optimization: The Method of Lagrange
Concave Programming and the Karush-Kuhn-Tucker Conditions
Second-Order Conditions and Comparative Statics
The Envelope Theorem
The Gradient and its Characteristics
Other Useful Properties
Constrained Optimization: Example
Definition

An optimization problem (minimization or maximization) consists in general of
- choice variables,
- an objective function, and
- a feasible set (defined by the constraints).

The problem is to choose the preferred alternative in the feasible set, i.e. find the maximum (minimum) of the objective function with respect to the choice variables, subject to the constraints.
Fundamental questions
1. Does a solution actually exist?
2. Is the solution local or global?
3. Is the solution unique?
4. Is the solution interior or on the boundary?
5. Is the solution a minimum or a maximum? (Location)
Definition

A global solution x* (of a maximization problem over feasible set S) satisfies the condition

f(x*) ≥ f(x), ∀ x ∈ S. (1)
Definition

A local solution x* satisfies the condition

f(x*) ≥ f(x), (2)

for all x in the set {x ∈ S : ‖x − x*‖ < ε}, for some ε > 0.
Definition
A function f: A ⊆ Rⁿ → R is continuous at a ∈ A if ∀ ε > 0, ∃ δ > 0, such that ∀ x ∈ A:

‖x − a‖ < δ ⇒ |f(x) − f(a)| < ε. (3)
Definition

A function f: Rⁿ → R is concave if for any x0, x1 ∈ S

f(x̄) ≥ k f(x0) + (1 − k) f(x1), ∀ k ∈ (0, 1), (4)

where x̄ = k x0 + (1 − k) x1.
Definition

A contour of a function f: Rⁿ → R: x ↦ f(x) is the set of x-values that satisfies the condition

f(x) = c, with c ∈ R. (5)
Example

Consider f: R² → R: (x1, x2)′ ↦ f(x1, x2) = x1² + x2². The contours of f are defined as

x1² + x2² = c. (6)
Continuity of the objective function f implies continuity of its contours. If the conditions of the Implicit Function Theorem are satisfied, x2 can be written as a function of x1, x2 = x̃2(x1), and the slope of a contour is computed as

dx2/dx1 = − f1/f2, (7)

with fi = ∂f(x1, x2)/∂xi.
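Formula (7) can be checked numerically. Below is a minimal sketch (not from the slides), using central finite differences for the example f(x1, x2) = x1² + x2²; the helper names are my own:

```python
# Numerical check of the contour-slope formula dx2/dx1 = -f1/f2,
# illustrated for f(x1, x2) = x1**2 + x2**2 (illustrative choice).
def f(x1, x2):
    return x1 ** 2 + x2 ** 2

def partials(func, x1, x2, h=1e-6):
    """Central-difference approximations of f1 and f2."""
    f1 = (func(x1 + h, x2) - func(x1 - h, x2)) / (2 * h)
    f2 = (func(x1, x2 + h) - func(x1, x2 - h)) / (2 * h)
    return f1, f2

f1, f2 = partials(f, 1.0, 1.0)
slope = -f1 / f2  # slope of the contour x1**2 + x2**2 = 2 at (1, 1)
print(slope)      # close to -1, matching -x1/x2 for the circle through (1, 1)
```

For this f the exact slope is −x1/x2, so the numerical value at (1, 1) should be very close to −1.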
Given two points x′ and x″ on the same contour, i.e. f(x′) = f(x″) = c, the contour is concave (to the origin) if a convex combination x̄ of x′ and x″ yields at least as high a value of the function:

f(x̄) ≥ f(x′) = f(x″) = c.
Definition

A function f: Rⁿ → R: x ↦ f(x) is quasiconcave if

f(x′) ≥ f(x″) ⇒ f(k x′ + (1 − k) x″) ≥ f(x″), (8)

for 0 ≤ k ≤ 1.
[Figure B.2: a concave function and a convex function, with contour slope dx2/dx1 = −f1/f2 [B.8].]

When f(x′) ≥ f(x″), [B.11] implies [B.10], so that quasi-concave functions have concave contours. The functions whose contours are shown in (a) and (b) of Fig. B.5 are quasi-concave, whereas that in (c) is not.
Definition

A set is non-empty if it contains at least one element; the empty set is the set with no elements.
Definition
A set is closed if all the points on its boundaries are elements of the set.
Example
The set of numbers x on the interval 0 ≤ x ≤ 1 is closed, while the sets 0 < x ≤ 1 and 0 ≤ x < 1 are not.
Definition
A set is bounded when it is not possible to go off to infinity in any direction while remaining within the set. It is always possible to enclose a bounded set within a sphere (open ball) of sufficiently large finite size.
Example
The set of numbers x on the interval 0 < x < 1 is bounded, while the set x ≥ 0 is not.
Exercise

Characterize the following sets in terms of closedness and boundedness:
1. the set defined by 0 < x < 1 (bounded but not closed)
2. the set defined by x ≥ 0
Definition

A set is convex if every pair of points in it can be joined by a straight line which lies entirely within the set. Formally, a set X is convex if for any x′, x″ ∈ X,

x̄ = k x′ + (1 − k) x″ ∈ X, (9)

with 0 ≤ k ≤ 1.
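Definition (9) can be illustrated with a short numerical check; the closed unit disc and the sample points below are assumptions chosen purely for illustration:

```python
# Check convex combinations for the closed unit disc {x in R^2 : x1^2 + x2^2 <= 1},
# which is a convex set; the two sample points are arbitrary choices.
def in_disc(x):
    return x[0] ** 2 + x[1] ** 2 <= 1.0

x_a, x_b = (0.6, 0.0), (0.0, 0.8)  # two points of the disc
for i in range(11):
    k = i / 10
    x_bar = (k * x_a[0] + (1 - k) * x_b[0], k * x_a[1] + (1 - k) * x_b[1])
    assert in_disc(x_bar)  # every sampled convex combination stays in the set
print("all sampled convex combinations lie in the disc")
```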
Consider the intersection A ∩ B, the set of points that are in both A and B. The intersection of convex sets is itself a convex set. To prove this, let x′, x″ be in both A and B, which are convex sets. Then the point x̄ = k x′ + (1 − k) x″, 0 ≤ k ≤ 1, is in A and in B, because both sets are convex, and so is in A ∩ B. But since x′ and x″ are any two points in A ∩ B, this gives the result. The argument extends easily to any number of convex sets.

Essential Properties of the Feasible Set
Existence of Solutions

Figure C.1 illustrates the existence question with an objective function f(x), x a scalar, and the feasible set given by the set of values on the interval 0 ≤ x ≤ x̄. This feasible set is non-empty, closed and bounded. The panels of the figure show how a solution x* can fail to exist when an assumption is violated:
- the objective function f is not continuous (a discontinuity at the would-be optimum),
- the feasible set is not bounded,
- the feasible set is not closed.
Consider a two-variable problem, in which we wish to maximize the function f(x1, x2), with f1, f2 > 0. Given that we can find a local maximum of this function, under what conditions can we be sure that it is also a global maximum?
Maximization of a function

Maximization of a function over a given feasible set is equivalent to finding a point within that set which is on the highest possible contour.
Whether a local optimum is also global must depend on the shapes of the feasible set and of the contours of the function. As panel (a) of the figure shows, it is not sufficient that the function be quasi-concave.
Sufficient conditions

Sufficient conditions for any local optimum also to be global must depend on
- the shape of the feasible set, and
- the shapes of the contours of the objective function.
Theorem
A local maximum is always a global maximum if
1. the objective function is quasiconcave, and
2. the feasible set is convex.
Figure D.2
Uniqueness of Solutions
Definition

A correspondence from X to Y is a functional relationship between an x-value and a set of y-values, e.g. the solution set of the problem on the previous slide: a set of solutions instead of a unique solution.
Theorem (Uniqueness)
Given an optimization problem in which the feasible set is convex and the objective function is non-constant and quasiconcave, a solution is unique if
- the feasible set S is strictly convex, or
- the objective function f is strictly quasiconcave, or
- both.
Uniqueness of Solutions

Figure E.1 shows cases of multiple global optima, respectively x*, x′ and x″. In the first the objective function is strictly quasi-concave but the feasible set is not strictly convex; in the second we have the reverse.
Definition

A boundary point has the property that all neighborhoods around it, however small, contain points which are, and points which are not, in the set.

Property: a solution to an optimization problem which is at an interior point of the feasible set is unaffected by small shifts in the boundaries of the set, while a solution at a boundary point will be sensitive to changes in at least one constraint.
Definition

It is simply necessary to assume that at any point in the feasible set it is always possible to find a small change in the value of at least one variable which will increase the value of the objective function. This is the property of local non-satiation.
The solution is denoted x*. The solution in (a) is unaffected by a small shift in the constraint, e.g. to ab; that in (b) is affected; that in (c) is changed by a shift in constraint cd but not by that in ab, as illustrated. The absence of response of the solution in (a) is due to the assumed existence of a bliss point at x* (the peak of the hill whose contours are drawn in the figure).

[Figure F.1: interior point (bliss point) and local non-satiation.]
At a constrained optimum the first-order conditions are

f1/g1 = f2/g2, (10)
g(x1, x2) = b. (11)

[Figure G.1: contours of the objective function and the constraint.]

Writing λ for the common ratio in (10), we have

λ = f1/g1 = f2/g2 > 0, (12)
f1/f2 = g1/g2, (13)

i.e., at the constrained optimum, the gradient of the objective function is a scalar multiple of the gradient of the constraint function:

∇f(x*) = λ ∇g(x*). (14)
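Condition (14) can be verified numerically on a small example; the problem max x1·x2 s.t. x1 + x2 = 2, with known optimum (1, 1), is an illustrative assumption, not taken from the slides:

```python
# At a constrained optimum, grad f = lambda * grad g.
# Illustration: max x1*x2 s.t. x1 + x2 = 2, whose optimum is (1, 1).
def f(x1, x2):
    return x1 * x2

def g(x1, x2):
    return x1 + x2

def grad(func, x1, x2, h=1e-6):
    """Central-difference gradient of a function of two variables."""
    return ((func(x1 + h, x2) - func(x1 - h, x2)) / (2 * h),
            (func(x1, x2 + h) - func(x1, x2 - h)) / (2 * h))

gf = grad(f, 1.0, 1.0)  # approximately (1, 1)
gg = grad(g, 1.0, 1.0)  # approximately (1, 1)
lam = gf[0] / gg[0]     # the scalar multiple lambda
# the second component must yield the same multiplier
assert abs(gf[1] - lam * gg[1]) < 1e-6
print("grad f =", gf, " lambda =", lam)
```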
The same conditions can be obtained from the Lagrangian

L(x1, x2, λ) = f(x1, x2) − λ [g(x1, x2) − b], (15)

whose first-order conditions are

L1 = f1(x1, x2) − λ g1 = 0,
L2 = f2(x1, x2) − λ g2 = 0, (16)
Lλ = −g(x1, x2) + b = 0.
Solve the first-order conditions for the optimal choices

xi* = hi(b), i = 1, 2,

and substitute in the objective function to obtain the value function (= the indirect utility function in consumer problems)

v(b) = f(h1(b), h2(b)). (17)
Example

dv/db = ∂L(x1*, x2*, λ*)/∂b = ∂[f(x1*, x2*) − λ*(g(x1*, x2*) − b)]/∂b = λ*.
Figure G.2
Definition

The Lagrange multiplier λ measures the rate at which the optimized value of the objective function varies with changes in the constraint parameter. The theorem which establishes this result is the Envelope Theorem. Economic interpretations are
- the marginal utility of income (consumer theory),
- the marginal impact on profits from a decrease in the cost constraint (producer theory).
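The interpretation dv/db = λ can be checked on a concrete problem; the example max x1·x2 s.t. x1 + x2 = b (solution x1 = x2 = b/2, multiplier λ = b/2) is an assumed illustration, not from the slides:

```python
# Envelope Theorem check: dv/db equals the Lagrange multiplier lambda.
# Example problem: max x1*x2 s.t. x1 + x2 = b, with solution x1 = x2 = b/2.
def v(b):
    """Value function: f evaluated at the optimal choices h1(b) = h2(b) = b/2."""
    return (b / 2) * (b / 2)

b = 2.0
h = 1e-6
dv_db = (v(b + h) - v(b - h)) / (2 * h)  # numerical dv/db
lam = b / 2  # from the FOC f1 = lambda * g1, i.e. x2* = lambda
print(dv_db, lam)  # both approximately 1.0
```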
With m constraints g¹, …, gᵐ, the Lagrangian becomes

L(x, λ) = f(x) − Σ_{j=1}^m λj gʲ(x),

with first-order conditions

fi − Σ_{j=1}^m λj giʲ = 0, i = 1, …, n,
gʲ = 0, j = 1, …, m.

At the optimum, the gradient of the objective function is a linear combination of the gradients of the constraints:

∇f(x*) = Σ_{j=1}^m λj* ∇gʲ(x*).
…and compute the following derivatives:
1. dv/da1,
2. dv/da2,
where v denotes the value function. (Hint: use the Envelope Theorem.)
More generally, write the Lagrangian with a multiplier λ0 on the objective:

L(x, λ) = λ0 f(x) − Σ_{j=1}^k λj [gʲ(x) − bj]. (18)

Suppose that g¹, …, gʰ yield binding constraints at x* and that gʰ⁺¹, …, gᵏ are not binding at x*. If the Jacobian matrix

[∂gⁱ(x*)/∂xj], i = 1, …, h, j = 1, …, n,

of the binding constraint functions has maximal rank h, then we can take λ0 = 1 in (18).
Exercise

Solve the following constrained optimization problem and comment on its solution:

max_{x,y} f(x, y) = (1/3)x³ − (3/2)y² + 2x
s.t. g(x, y) ≡ x − y = 0.
In economics, inequality constraints arise more naturally than equality constraints; think e.g. of
- prices p ≥ 0,
- demands Di ≥ 0.
Consequences

The result is that corner solutions may arise where some of the optimal choice variables are zero, xi* = 0, and therefore the first-order condition becomes

∇f(x*) ≤ 0,

with equality in the components i for which xi* > 0.
Figure H.1
(Ci′(x) ≠ Ri′(x) when xi = 0 and all other outputs are at their optimal values). But at xi* = 0, the slope of the profit function is negative, not zero, implying that [H.1] is not a necessary condition for an optimum, since an optimal point exists at which it is not satisfied. In the case shown the constraint [H.2] is binding, since without it the firm would seek to increase profit by producing negative xi.
The Kuhn-Tucker conditions combine nonnegativity and complementary slackness:

xi ≥ 0, ∂L/∂xi ≤ 0, xi (∂L/∂xi) = 0, i = 1, …, n, (19a)
λj ≥ 0, ∂L/∂λj ≥ 0, λj (∂L/∂λj) = 0, j = 1, …, m, (19b)

where L(x, λ) = f(x) − Σ_{j=1}^m λj gʲ(x).
Example
Consider once more our two-variable optimization problem subject to a linear constraint,

max_{x1,x2} f(x1, x2) s.t. a1 x1 + a2 x2 ≤ b,

where we now impose a nonnegativity constraint x1, x2 ≥ 0. Assume f is strictly increasing and strictly quasi-concave, and that the contours of f are everywhere steeper than the constraint.

The Lagrangian is

L(x1, x2, λ) = f(x1, x2) − λ [a1 x1 + a2 x2 − b], [H.18]

[Figure H.4]

with Kuhn-Tucker conditions

f1 − λ a1 ≤ 0, x1 (f1 − λ a1) = 0,
f2 − λ a2 ≤ 0, x2 (f2 − λ a2) = 0,
a1 x1 + a2 x2 − b ≤ 0, λ (a1 x1 + a2 x2 − b) = 0.

An interior solution would require f1 = λ a1 and f2 = λ a2, yielding f1/f2 = a1/a2. Since the contours are everywhere steeper than the constraint, f1/f2 > a1/a2 (the slope of a contour exceeds the gradient of the constraint), so the solution is at the corner x2* = 0, x1* = b/a1.
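The corner-solution logic can be made concrete with a deliberately simple linear objective (an assumption chosen for illustration; the slides assume a strictly quasi-concave f):

```python
# Kuhn-Tucker check of a corner solution for max 2*x1 + x2
# s.t. a1*x1 + a2*x2 <= b, x1, x2 >= 0; here f1/f2 = 2 > a1/a2 = 1,
# so the candidate optimum is the corner x* = (b/a1, 0).
a1, a2, b = 1.0, 1.0, 1.0
f1, f2 = 2.0, 1.0            # gradient of the (linear) objective
x1_star, x2_star = b / a1, 0.0
lam = f1 / a1                # from f1 - lam*a1 = 0 (x1 > 0, so this FOC binds)

assert f1 - lam * a1 == 0.0 and x1_star > 0          # interior condition for x1
assert f2 - lam * a2 <= 0.0 and x2_star == 0.0       # corner condition for x2
assert abs(a1 * x1_star + a2 * x2_star - b) < 1e-12  # constraint binds, lam > 0
print("KKT conditions hold at the corner", (x1_star, x2_star))
```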
Consider next the case of several constraints, where f is concave and f1, f2 > 0. It is assumed that the constraints are such that they intersect in the positive quadrant.
Comparative Statics
Implicit Function Theorem for a System of Equations
Let F1, …, Fm: Rⁿ⁺ᵐ → R be C¹ functions. Consider the system of equations

F1(y1, …, ym, x1, …, xn) = c1
⋮ (20)
Fm(y1, …, ym, x1, …, xn) = cm
Suppose that (y*, x*) is a solution of (20). If the determinant of the m × m Jacobian matrix of F,

J_F = [∂Fi/∂yj], i, j = 1, …, m, (21)

evaluated at (y*, x*) is nonzero, then there exist C¹ functions

y1 = f1(x1, …, xn)
⋮ (22)
ym = fm(x1, …, xn).
Furthermore, one can compute (∂fk/∂xh)(y*, x*) = (∂yk/∂xh)(y*, x*) by (i) totally differentiating (= linearizing) the system (20), (ii) setting dxh = 1 and dxj = 0 for j ≠ h, and (iii) solving the resulting system for dyk, either by inverting the Jacobian,

(dy1, …, dym)′ = − J_F⁻¹ (∂F1/∂xh, …, ∂Fm/∂xh)′, (IFT-2a)
or by applying Cramer's Rule directly:

∂yk/∂xh = − |J_F⁽ᵏ⁾| / |J_F|, (IFT-2b)

where J_F⁽ᵏ⁾ is the Jacobian J_F with its k-th column replaced by (∂F1/∂xh, …, ∂Fm/∂xh)′.
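Steps (i)–(iii) reduce to solving a small linear system. A sketch with an assumed two-equation system F1 = y1 + y2 − x = 0, F2 = y1 − y2 − 1 = 0 (so that y1 = (x+1)/2, y2 = (x−1)/2 in closed form):

```python
# Comparative statics via the linearized system J_F * dy = -F_x * dx.
# Assumed system: F1 = y1 + y2 - x = 0, F2 = y1 - y2 - 1 = 0.
import numpy as np

J = np.array([[1.0, 1.0],
              [1.0, -1.0]])    # Jacobian with respect to (y1, y2)
F_x = np.array([-1.0, 0.0])    # partials of (F1, F2) with respect to x

dy_dx = np.linalg.solve(J, -F_x)   # (IFT-2a): dy = -J^{-1} F_x, with dx = 1
print(dy_dx)                       # approximately [0.5, 0.5]

# (IFT-2b): Cramer's Rule for dy1/dx -- replace column 1 of J by -F_x
J1 = J.copy()
J1[:, 0] = -F_x
print(np.linalg.det(J1) / np.linalg.det(J))  # approximately 0.5 as well
```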
Second-Order Conditions

For an unconstrained optimization problem, the SOC is that the corresponding Hessian matrix ∇²f(x*) must be
- negative definite for a maximum,
- positive definite for a minimum.

Verification: construct the leading principal minors |Hk| of order k = 1, …, n,

|H1| = |f11|, |H2| = | f11 f12 ; f21 f22 |, …, |Hn| = | f11 … f1n ; ⋮ ⋱ ⋮ ; fn1 … fnn |,

and check their signs:
- if sgn |Hk| = (−1)ᵏ for k = 1, …, n, then ∇²f(x*) is negative definite;
- if |Hk| > 0 for all k, then ∇²f(x*) is positive definite.
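The sign checks can be automated. A sketch with an assumed Hessian, that of f = −x1² − x2² + x1 x2 (which is negative definite); the helper name is my own:

```python
# Classify definiteness of a Hessian via its leading principal minors.
import numpy as np

def leading_minors(H):
    """Determinants of the upper-left k-by-k submatrices, k = 1..n."""
    n = H.shape[0]
    return [np.linalg.det(H[:k, :k]) for k in range(1, n + 1)]

# Hessian of f(x1, x2) = -x1**2 - x2**2 + x1*x2 (an illustrative choice)
H = np.array([[-2.0, 1.0],
              [1.0, -2.0]])
minors = leading_minors(H)
print(minors)  # signs (-, +): alternating as (-1)^k, so H is negative definite
```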
Second-Order Conditions
For a constrained optimization problem, the SOC are a bit more complicated. The intuition, however, is as follows: consider the problem

max_{x1,x2} f(x1, x2) s.t. g(x1, x2) = 0,

where, as usual, the optimum is characterized by a tangency condition:

(dx2/dx1)|_f = (dx2/dx1)|_g.

Since x* at this point is a constrained local maximum, it must not be possible to reach a higher contour of f (assuming fi > 0, i = 1, 2) by moving in feasible directions for the xi.
[Figure I.2]

Second-Order Conditions
In the general case with n variables and m < n equality constraints, verifying the SOCs boils down to constructing the bordered Hessian matrix

H̄ = ( L11 … L1n  g1¹ … g1ᵐ
      ⋮  ⋱  ⋮    ⋮      ⋮
      Ln1 … Lnn  gn¹ … gnᵐ
      g1¹ … gn¹  0 … 0
      ⋮      ⋮   ⋮ ⋱ ⋮
      g1ᵐ … gnᵐ  0 … 0 ),

and checking the signs of the corresponding bordered leading principal minors.
Equivalently, the borders may be placed first:

H̄ = ( 0 … 0    g1¹ … gn¹
      ⋮ ⋱ ⋮     ⋮      ⋮
      0 … 0    g1ᵐ … gnᵐ
      g1¹ … g1ᵐ  L11 … L1n
      ⋮      ⋮   ⋮  ⋱  ⋮
      gn¹ … gnᵐ  Ln1 … Lnn ).

In this case, we need to verify the sign of the last n − m leading principal minors of H̄, and the sign of |H̄| itself.
- if sgn |H̄| = (−1)ⁿ and the last n − m leading principal minors alternate in sign, then H̄ is negative definite on the nullspace defined by the m constraints, and x* is a (local) maximum;
- if sgn |H̄| = (−1)ᵐ and the last n − m leading principal minors have the same sign, then H̄ is positive definite on the nullspace defined by the m constraints, and x* is a (local) minimum;
- if some of the nonzero leading principal minors violate the conditions stated above, H̄ is indefinite and x* is neither a minimum nor a maximum.
Applying the Envelope Theorem greatly simplifies comparative statics analysis; in fact, a number of fundamental economic results derive from duality theory and the repeated application of the Envelope Theorem (e.g. Roy's identity, Shephard's lemma, and Hotelling's lemma).
In plain words, the Envelope Theorem states that the effect of varying the scalar α on the optimized objective function v(α) = f(x*(α); α) is given by the partial derivative of the objective function with respect to α, evaluated at the optimal solution point x*(α). For constrained optimization, replace the objective function with the Lagrangian.
Define g(t) = f(x* + t h); taking the derivative of g at t = 0 yields

g′(0) = (∂f(x*)/∂x1) h1 + ⋯ + (∂f(x*)/∂xn) hn.
The Gradient
The vector Df .x / in (26) has special meaning:
The Gradient

Normalizing ‖h‖ = 1, by the property of the dot product,

Df_{x*}(h) = ∇f(x*) · h = ‖∇f(x*)‖ ‖h‖ cos θ = ‖∇f(x*)‖ cos θ,

where θ is the angle between the two vectors. Hence, the direction of
- greatest increase in f at the point x* is the direction of vectors h that point in the same direction as the gradient;
- zero increase in f at the point x* is the direction of vectors h that are perpendicular to the gradient.
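These two properties can be verified by scanning directions; the function f = x1² + 2x2 and the point (1, 1) are illustrative assumptions:

```python
# The directional derivative grad(f) . h is maximized when h points along the
# gradient and is near zero when h is perpendicular to it.
import math

def f(x1, x2):
    return x1 ** 2 + 2 * x2

h = 1e-6
g1 = (f(1 + h, 1) - f(1 - h, 1)) / (2 * h)   # approximately 2
g2 = (f(1, 1 + h) - f(1, 1 - h)) / (2 * h)   # approximately 2

# scan unit vectors h(theta) = (cos theta, sin theta), one per degree
best_deg = max(range(360),
               key=lambda d: g1 * math.cos(math.radians(d)) + g2 * math.sin(math.radians(d)))
grad_deg = math.degrees(math.atan2(g2, g1))  # direction of the gradient, about 45
print(best_deg, grad_deg)

# a perpendicular direction gives a near-zero directional derivative
perp = math.radians(grad_deg + 90)
print(g1 * math.cos(perp) + g2 * math.sin(perp))  # near 0
```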
The Gradient

Proof.
By the IFT, the slope of the level curve of g at x0 is

dx2(x1)/dx1 = − (∂g(x0)/∂x1) / (∂g(x0)/∂x2),

a direction realized by the tangent vector

h = (1, − (∂g(x0)/∂x1) / (∂g(x0)/∂x2))′.

Hence,

h · ∇g(x0) = ∂g(x0)/∂x1 − [(∂g(x0)/∂x1) / (∂g(x0)/∂x2)] ∂g(x0)/∂x2 = 0. ∎
Convex Sets

An upper contour set of f is a set of the form {x ∈ Rⁿ : f(x) ≥ a}, for a given value of a.
Quasiconcave Functions
A function f W Rn ! R is quasiconcave if the upper contour sets of the function are convex sets.
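A quick numerical illustration of this characterization; the function f(x) = min(x1, x2), which is concave and hence quasiconcave, is an assumed example:

```python
# Quasiconcavity check by sampling: f(k*x' + (1-k)*x'') >= min(f(x'), f(x'')).
def f(x):
    return min(x)  # min(x1, x2) is concave, hence quasiconcave

points = [(0.2, 0.9), (1.5, 0.3), (0.7, 0.7)]
for xa in points:
    for xb in points:
        for i in range(11):
            k = i / 10
            xm = (k * xa[0] + (1 - k) * xb[0], k * xa[1] + (1 - k) * xb[1])
            assert f(xm) >= min(f(xa), f(xb)) - 1e-12
print("quasiconcavity inequality holds on the sample")
```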
Homogeneity

A function f: Rⁿ → R is homogeneous of degree k if

f(tx) = tᵏ f(x), ∀ t > 0. (28)

In economics, the most important homogeneous functions are those of the zeroth and first degree:

f(tx) = f(x), (29)
f(tx) = t f(x). (30)
Monotonic Transformation

A function g: R → R is a (positive) monotonic transformation if it is a strictly increasing function, i.e.

x > y ⇒ g(x) > g(y), ∀ x, y ∈ D ⊆ R. (32)
Homogeneity.

If f: Rⁿ → R is homogeneous of degree k ≥ 1, then ∂f(x)/∂xi is homogeneous of degree k − 1.

Proof.
Differentiate the identity f(tx) = tᵏ f(x) w.r.t. xi to get

t ∂f(tx)/∂(txi) = tᵏ ∂f(x)/∂xi,

and divide both sides by t. ∎
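The result can be illustrated numerically; the Cobb-Douglas function f = x1^0.3 x2^0.7, homogeneous of degree 1, is an assumed example:

```python
# If f is homogeneous of degree 1, its partial f1 is homogeneous of degree 0.
def f(x1, x2):
    return x1 ** 0.3 * x2 ** 0.7  # Cobb-Douglas, homogeneous of degree 1

def f1(x1, x2, h=1e-6):
    """Central-difference approximation of the partial derivative w.r.t. x1."""
    return (f(x1 + h, x2) - f(x1 - h, x2)) / (2 * h)

t = 2.0
print(abs(f(t * 1.0, t * 3.0) - t * f(1.0, 3.0)))  # near 0: degree-1 homogeneity
print(abs(f1(t * 1.0, t * 3.0) - f1(1.0, 3.0)))    # near 0: f1 homogeneous of degree 0
```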
Example

Consider the problem of maximizing f: R³ → R: (x1, x2, x3)′ ↦ f(x1, x2, x3) = x1² x2² x3² subject to the constraint g: R³ → R: (x1, x2, x3)′ ↦ g(x1, x2, x3) ≡ x1² + x2² + x3² = 3. In verifying NDCQ, note that ∇g(x) = (2x1, 2x2, 2x3)′ = 0 ⇔ x = 0, a point which is not in the constraint set Cg. As NDCQ is satisfied, we can use Lagrange's method properly and solve the problem.
The Lagrangian is

L = x1² x2² x3² − λ (x1² + x2² + x3² − 3),

with first-order conditions

2 x1 x2² x3² − 2λ x1 = 0,
2 x1² x2 x3² − 2λ x2 = 0,
2 x1² x2² x3 − 2λ x3 = 0,
x1² + x2² + x3² − 3 = 0,

whose (positive) solution is (x1*, x2*, x3*, λ*)′ = (1, 1, 1, 1)′.
The SOC for a constrained maximum states that the bordered Hessian must be negative definite subject to the constraint g, i.e. we need to check the last n − m = 3 − 1 = 2 leading principal minors of the bordered Hessian

H̄ = ( 0    2x1              4x1x2x3² → 2x2   2x3
      2x1  2x2²x3² − 2λ     4x1x2x3²        4x1x2²x3
      2x2  4x1x2x3²         2x1²x3² − 2λ    4x1²x2x3
      2x3  4x1x2²x3         4x1²x2x3        2x1²x2² − 2λ ).

Evaluated at (x1*, x2*, x3*, λ*)′ = (1, 1, 1, 1)′,

H̄4 = ( 0 2 2 2
       2 0 4 4
       2 4 0 4
       2 4 4 0 ).

For a constrained maximum, it must be the case that
1. sgn |H̄| = sgn |H̄4| = (−1)ⁿ = (−1)³ < 0, and
2. sgn |H̄3| > 0, where H̄3 is found by eliminating the last row and column from H̄4.

We find that |H̄4| = −192 and |H̄3| = 32; therefore (x1*, x2*, x3*, λ*)′ = (1, 1, 1, 1)′ is a local constrained maximum.
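The determinants |H̄4| = −192 and |H̄3| = 32 can be reproduced numerically. This sketch rebuilds the bordered Hessian from the Lagrangian by finite differences (the helper names are my own):

```python
# Reproduce the bordered-Hessian check for max x1^2 x2^2 x3^2
# s.t. x1^2 + x2^2 + x3^2 = 3 at (x*, lambda*) = (1, 1, 1, 1).
import numpy as np

def L(v, lam=1.0):
    x1, x2, x3 = v
    return x1**2 * x2**2 * x3**2 - lam * (x1**2 + x2**2 + x3**2 - 3)

x = np.ones(3)
h = 1e-5
H = np.zeros((3, 3))  # Hessian of the Lagrangian, by central differences
for i in range(3):
    for j in range(3):
        ei, ej = np.eye(3)[i] * h, np.eye(3)[j] * h
        H[i, j] = (L(x + ei + ej) - L(x + ei - ej)
                   - L(x - ei + ej) + L(x - ei - ej)) / (4 * h * h)

grad_g = 2 * x  # gradient of the constraint at (1, 1, 1)
Hbar = np.zeros((4, 4))  # border-first bordered Hessian
Hbar[0, 1:] = grad_g
Hbar[1:, 0] = grad_g
Hbar[1:, 1:] = H

d4 = np.linalg.det(Hbar)          # approximately -192
d3 = np.linalg.det(Hbar[:3, :3])  # approximately 32
print(round(d4), round(d3))
```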