
Lagrange Multipliers without Permanent Scarring

Dan Klein

1 Introduction
This tutorial assumes that you want to know what Lagrange multipliers are, but are more interested in getting the intuitions and central ideas. It contains nothing which would qualify as a formal proof, but the key ideas needed to read or reconstruct the relevant formal results are provided. If you don't understand Lagrange multipliers, that's fine. If you don't understand vector calculus at all, in particular gradients of functions and surface normal vectors, the majority of the tutorial is likely to be somewhat unpleasant. Understanding about vector spaces, spanned subspaces, and linear combinations is a bonus (a few sections will be somewhat mysterious if these concepts are unclear).

Lagrange multipliers are a mathematical tool for constrained optimization of differentiable functions. In the basic, unconstrained version, we have some differentiable function f : Rⁿ → R that we want to maximize (or minimize). We can do this by first finding extreme points of f, which are points x where the gradient ∇f(x) is zero or, equivalently, where each of the partial derivatives is zero. If we're lucky, the points we find this way will turn out to be (local) maxima, but they can also be minima or saddle points. We can tell the different cases apart by a variety of means, including checking properties of the second derivatives or simply inspecting the function values. Hopefully this is all familiar from calculus, though maybe it's more concretely clear when dealing with functions of just one variable.

All kinds of practical problems can crop up in unconstrained optimization, which we won't worry about here. One is that f and its derivative can be expensive to compute, causing people to worry about how many evaluations are needed to find a maximum. A second problem is that there can be (infinitely) many local maxima which are not global maxima, causing people to despair. We're going to ignore these issues, which are as big or bigger problems for the constrained case.

In constrained optimization, we have the same function f to maximize as before. However, we also have some restrictions on which points in Rⁿ we are interested in. The points which satisfy our constraints are referred to as the feasible region. A simple constraint on the feasible region is to add boundaries, such as insisting that each xᵢ be positive. Boundaries complicate matters because extreme points on the boundaries will not, in general, meet the zero-derivative criterion, and so must be searched for in other ways. You probably had to deal with boundaries in calculus class. Boundaries correspond to inequality constraints, which we will say relatively little about in this tutorial.

Lagrange multipliers can help deal with both equality constraints and inequality constraints. For the majority of the tutorial, we will be concerned only with equality constraints, which restrict the feasible region to points lying on some surface inside Rⁿ. Each constraint will be given by a function g(x), and we will only be interested in points where g(x) = 0.¹
¹ If you want a g(x) = c constraint, you can just move the c to the left: g(x) − c = 0.

Figure 1: A one-dimensional domain, with a constraint. Maximize the value of f(x) = 2 − x² while satisfying x − 1 = 0.

Figure 2: The paraboloid f(x, y) = 2 − x² − 2y².

2 Trial by Example
Let's do some example maximizations. First, we'll have an example of not using Lagrange multipliers.

2.1 A No-Brainer

Let's say you want to know the maximum value of f(x) = 2 − x² subject to the constraint x − 1 = 0 (see figure 1). Here we can just substitute our value for x (namely 1) into f, and get our maximum value of 2 − 1 = 1. It isn't the most challenging example, but we'll come back to it once the Lagrange multipliers show up. However, it highlights a basic way that we might go about dealing with constraints: substitution.

2.2 Substitution

Let f(x, y) = 2 − x² − 2y². This is the downward-cupping paraboloid shown in figure 2. The unconstrained maximum is clearly at (x, y) = (0, 0), while the unconstrained minimum is not even defined (you can find points with f as low as you like). Now let's say we constrain x and y to lie on the unit circle. To do this, we add the constraint g(x, y) = 0, where g(x, y) = x² + y² − 1. Then, we maximize (or minimize) f by first solving the constraint for one of the variables explicitly:

g(x, y) = x² + y² − 1 = 0    (1)
x² + y² = 1    (2)

x² = 1 − y²    (3)

and substituting into f:

f(x, y) = 2 − x² − 2y²    (4)
= 2 − (1 − y²) − 2y²    (5)
= 1 − y²    (6)

Figure 3: The paraboloid f = 2 − x² − 2y², along with two different constraints. Left is the unit circle x² + y² = 1; right is the line x + y = 1.

Then we are back to a one-dimensional unconstrained problem, which has a maximum at y = 0, where x² = 1 and f = 1. This shouldn't be too surprising; we're stuck on a circle which trades x² for y² linearly, while y² costs twice as much in f as x² does. Finding the constrained minimum here is slightly more complex, and highlights one weakness of this approach: the one-dimensional problem is still actually somewhat constrained, in that y must be in [−1, 1]. The minimum value occurs at both of these boundary points, where y = ±1, x = 0, and f = 0.
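The substitution above is mechanical enough to hand to a computer algebra system. Here is a minimal sketch of the same calculation, assuming the sympy library (the variable names are illustrative, not part of the original problem):

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    f = 2 - x**2 - 2*y**2          # the downward-cupping paraboloid
    g = x**2 + y**2 - 1            # unit-circle constraint, g = 0

    # Solve the constraint for x**2 and substitute into f.
    f_sub = sp.simplify(f.subs(x**2, sp.solve(g, x**2)[0]))
    print(f_sub)                                   # 1 - y**2

    # Interior critical point of the one-dimensional problem...
    print(sp.solve(sp.diff(f_sub, y), y))          # [0], where f = 1
    # ...plus the boundary points y = -1 and y = 1, where f = 0.
    print([f_sub.subs(y, v) for v in (-1, 1)])     # [0, 0]

Note that the boundary points still have to be checked by hand, which is exactly the weakness just described.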

2.3 Inflating Balloons


The main problem with substitution is that, despite our stunning success in the last section, it's usually very hard to do. Rather than inventing a new problem and discovering this the hard way, let's stick with the f from the last section and consider how the Lagrange multiplier method would work.

Figure 3 (left) shows a contour plot of f. The contours, or level curves, are ellipses, which are wide in the x dimension, and which represent points that have the same value of f. The dark circle in the middle is the feasible region satisfying the constraint g = 0. The arrows point in the directions of greatest increase of f. Note that the direction of greatest increase is always perpendicular to the level curves.

Imagine the ellipses as snapshots of an inflating balloon. As the balloon expands, the value of f along the ellipse decreases. The size-zero ellipse has the highest value of f. Consider what happens as the ellipse expands. At first, the values of f are high, but the ellipse does not intersect the feasible circle anywhere. When the long axis of the ellipse finally touches the circle at (±1, 0), as in figure 4 (left), we have found the constrained maximum, f = 1: for any larger ellipse, no point on the level curve will be in the feasible circle. The key thing is that, at (±1, 0), the ellipse is tangent to the circle.²

The ellipse then continues to grow, with f dropping, intersecting the circle at four points, until the ellipse surrounds the circle and only the short-axis endpoints are still touching. This is the minimum (x = 0, y = ±1, f = 0). Again, the two curves are tangent. Beyond this value, the level curves do not intersect the circle.

The curves being tangent at the minimum and maximum should make intuitive sense. Suppose the two curves were not tangent at a point (call it x) where they touch. Then the curves would cross at x, as in figure 4 (right). Since the contour (the light curve) is a level curve, the points to one side of it have greater f value, while the points on the other side have lower f value. Since we may move anywhere along g and still satisfy the constraint, we can nudge x along g to either side of the contour, to either increase or decrease f. So x cannot be an extreme point.
² Differentiable curves which touch but do not cross are tangent, but feel free to verify it by checking derivatives!


Figure 4: Level curves of the paraboloid, intersecting the constraint circle.

This intuition is very important; the entire enterprise of Lagrange multipliers (which are coming soon, really!) rests on it. So here's another, equivalent, way of looking at the tangent requirement, which generalizes better. Consider again the zooms in figure 4, and think about the normal vectors of the contour and constraint curves. The two curves being tangent at a point is equivalent to the normal vectors being parallel at that point. The contour is a level curve, and so the gradient of f, ∇f, is normal to it. But that means that at an extreme point x, the gradient of f will be perpendicular to g as well. This should also make sense: the gradient is the direction of steepest ascent. At a solution x, we must be on g, and, while it is fine for f to have a non-zero gradient, the direction of steepest ascent had better be perpendicular to g. Otherwise, we can project ∇f onto g, get a non-zero direction along g, and nudge x along that direction, increasing f but staying on g. If the directions of steepest increase and decrease take you perpendicularly off of g, then, even if you are not at an unconstrained maximum of f, there is no local move you can make to increase f which does not take you out of the feasible region g. Formally, we can write our claim that the normal vectors are parallel at an extreme point x as:

∇f(x) = λ∇g(x)    (7)
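Before building machinery around this equation, it's worth checking it at the two tangency points found above (a quick verification, not in the original text): at the maximum (1, 0), ∇f = (−2x, −4y) = (−2, 0) while ∇g = (2x, 2y) = (2, 0), so ∇f = −∇g and λ = −1; at the minimum (0, 1), ∇f = (0, −4) while ∇g = (0, 2), so ∇f = −2∇g and λ = −2. The normals are indeed parallel, with a different proportionality constant λ at each extreme point.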

So, our method for finding extreme points³ which satisfy the constraints is to look for points x where the following equations hold true:

∇f(x) = λ∇g(x)    (8)
g(x) = 0    (9)

We can compactly represent both equations at once by writing the Lagrangian:

Λ(x, λ) = f(x) − λg(x)    (10)

and asking for points where

∇Λ(x, λ) = 0    (11)

The partial derivatives with respect to x recover the parallel-normals equations, while the partial derivative with respect to λ recovers the constraint g(x) = 0. The λ is our first Lagrange multiplier. Let's re-solve the circle-paraboloid problem from above using this method. It was so easy to solve with substitution that the Lagrange multiplier method isn't any easier (in fact it's harder), but at least it illustrates the method. The Lagrangian is:

Λ(x, y, λ) = f(x, y) − λg(x, y)    (12)
= 2 − x² − 2y² − λ(x² + y² − 1)    (13)

and we want

∇Λ(x, y, λ) = ∇f(x, y) − λ∇g(x, y)    (14)
= 0    (15)

³ We can sort out after we find them which are minima, maxima, or neither.

which gives the equations

∂Λ/∂x (x, y, λ) = −2x − 2λx = 0    (16)
∂Λ/∂y (x, y, λ) = −4y − 2λy = 0    (17)
∂Λ/∂λ (x, y, λ) = −(x² + y² − 1) = 0    (18)

From the first two equations, we must have either x = 0 or λ = −1. If x = 0, then y = ±1 and λ = −2. If λ = −1, then y = 0 and x = ±1. These are the minimum and maximum, respectively.

Let's say we instead want the constraint that x and y sum to 1 (x + y − 1 = 0). Then we have the situation in figure 3 (right). Before we do anything numeric, convince yourself from the picture that the maximum is going to occur in the (+, +) quadrant, at a point where the line is tangent to a level curve of f. Also convince yourself that the minimum will not be defined; f values get arbitrarily low in both directions along the line away from the maximum. Formally, we have

Λ(x, y, λ) = f(x, y) − λg(x, y)    (19)
= 2 − x² − 2y² − λ(x + y − 1)    (20)

and we want

∇Λ(x, y, λ) = ∇f(x, y) − λ∇g(x, y) = 0    (21)

which gives

∂Λ/∂x (x, y, λ) = −2x − λ = 0    (22)
∂Λ/∂y (x, y, λ) = −4y − λ = 0    (23)
∂Λ/∂λ (x, y, λ) = −(x + y − 1) = 0    (24)

We can see from the first two equations that x = 2y, which, since x and y sum to one, means that y = 1/3 and x = 2/3. At those values, f = 4/3 and

λ = −4/3.    (25)

So what do we have so far? Given a function f and a constraint g, we can write the Lagrangian, differentiate, and solve for zero. Actually solving that system of equations can be hard, but note that the Lagrangian is a function of n + 1 variables (the n components of x, plus λ), and so we do have the right number of equations to hope for unique, existing solutions: n from the partial derivatives with respect to x, plus one from the partial derivative with respect to λ.
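When the system is less friendly than this one, a computer algebra system can carry the load. Here is a minimal sympy sketch (illustrative code, not part of the original text) that reproduces the line-constraint solution just found:

    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)
    f = 2 - x**2 - 2*y**2
    g = x + y - 1                  # the constraint: x and y sum to 1
    L = f - lam*g                  # the Lagrangian

    # Setting all partials of L to zero gives the system (22)-(24).
    eqs = [sp.diff(L, v) for v in (x, y, lam)]
    sol = sp.solve(eqs, [x, y, lam], dict=True)[0]
    print(sol)                     # {x: 2/3, y: 1/3, lambda: -4/3}
    print(f.subs(sol))             # 4/3, the constrained maximum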

2.4 More Dimensions


If we want to have multiple constraints, this method still works perfectly well, though it gets harder to draw the pictures to illustrate it. To generalize, let's think of the parallel-normals idea in a slightly different way. In unconstrained optimization (no constraints), we knew we were at a local extreme because the gradient of f was zero: there was no local direction of motion which increased f. Along came the constraint g and dashed all hopes of the gradient being completely zero at a constrained extreme, because we were confined to g. However, we still wanted there to be no direction of increase inside the feasible region. This occurred whenever the gradient at x, while probably not zero, had no components perpendicular to the normal of g at x. To recap: in the presence of a constraint, ∇f does not have to be zero at a solution x; it just has to be entirely contained in the (one-dimensional) subspace spanned by ∇g.

The last statement generalizes to multiple constraints. With multiple constraints gᵢ(x) = 0, we will insist that a solution x satisfy each gᵢ(x) = 0. We will also want there to be no direction of increase along the directions in which x is free to vary.

Figure 5: A spherical level curve of the function f(x) = ‖x‖, with two constraint planes, x − 1 = 0 and z − 1 = 0.

However, given the constraints, x cannot make any local movement along any vector which has a component along a constraint normal, since such a move leaves the feasible region. Therefore, our condition should again be that ∇f, while not necessarily zero, is entirely contained in the subspace spanned by the constraint normals ∇gᵢ. We can express this with the equation

∇f(x) = Σᵢ λᵢ ∇gᵢ(x)    (26)

This asserts that ∇f(x) is a linear combination of the normals, with weights λᵢ. It turns out that tossing all the constraints into a single Lagrangian accomplishes this:

Λ(x, λ) = f(x) − Σᵢ λᵢ gᵢ(x)    (27)

It should be clear that differentiating Λ with respect to λᵢ and setting the result equal to zero recovers the i-th constraint gᵢ(x) = 0, while differentiating with respect to x recovers the assertion that the gradient of f have no components which aren't spanned by the constraints' normals.

As an example of multiple constraints, consider figure 5. Imagine that f is the distance from the origin. The level surfaces of f are then concentric spheres, with the gradient pointing straight out of the spheres. Let's say we want the minimum of f subject to the constraints x − 1 = 0 and z − 1 = 0, shown as planes in the figure. Again imagine the spheres as expanding from the center until contact is made with the planes. The unconstrained minimum is, of course, at the origin, where ∇f is zero. The sphere grows, and f increases. When the sphere's radius reaches one, the sphere touches both planes individually. At the points of contact, the gradient of f is perpendicular to the touching plane; those points would be solutions if that plane were the only constraint. When the sphere reaches a radius of √2, it is touching both planes along their line of intersection. Note that the gradient is not zero at that point, nor is it perpendicular to either surface. However, it is parallel to an (equal) combination of the two planes' normal vectors, or, equivalently, it lies inside the plane spanned by those vectors (the plane y = 0, [not shown due to my lacking matlab skills]).

A good way to think about the effect of adding constraints is as follows. Before there are any constraints, there are n dimensions for x to vary along when maximizing, and we want to find points where all n dimensions have zero gradient. Every time we add a constraint, we restrict one dimension, so we have less freedom in maximizing. However, that constraint also removes a dimension along which the gradient must be zero. So, in the nice case, we should be able to add as many or as few constraints (up to n) as we wish, and everything should work out.⁴
⁴ In the not-nice cases, all sorts of things can go wrong. Constraints may be unsatisfiable (e.g., x = 0 and x = 1), or subtler situations can prevent the Lagrange multipliers from existing [more].
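Here is the sphere-and-planes example as a small sympy sketch (illustrative code, not part of the original; it uses the squared distance f = x² + y² + z², which has the same constrained minimizer as the distance itself and keeps the derivatives polynomial):

    import sympy as sp

    x, y, z, l1, l2 = sp.symbols('x y z lambda1 lambda2', real=True)
    f = x**2 + y**2 + z**2         # squared distance from the origin
    g1, g2 = x - 1, z - 1          # the two constraint planes
    L = f - l1*g1 - l2*g2

    eqs = [sp.diff(L, v) for v in (x, y, z, l1, l2)]
    sol = sp.solve(eqs, [x, y, z, l1, l2], dict=True)[0]
    print(sol)          # {x: 1, y: 0, z: 1, lambda1: 2, lambda2: 2}

    # The gradient of f at the solution is an equal combination of
    # the plane normals (1, 0, 0) and (0, 0, 1), as in figure 5.
    print([sp.diff(f, v).subs(sol) for v in (x, y, z)])   # [2, 0, 2]

The equal multipliers reflect the symmetry of the picture: the solution (1, 0, 1) sits on the line of intersection, and ∇f there lies in the plane y = 0 spanned by the two normals.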

3 The Lagrangian
The Lagrangian Λ(x, λ) is a function of n + m variables: the n components of x, plus one multiplier λᵢ for each of the m constraints gᵢ. Differentiating gives the corresponding n + m equations, each set to zero, to solve. The n equations from differentiating with respect to each xⱼ recover our gradient conditions. The m equations from differentiating with respect to the λᵢ recover the constraints gᵢ(x) = 0. So the numbers give us some confidence that we have the right number of equations to hope for point solutions.

It's helpful to have an idea of what the Lagrangian actually means. There are two intuitions, described below.

3.1 The Lagrangian as an Encoding


First, we can look at the Lagrangian as an encoding of the problem. This view is easy to understand (but doesn't really get us anywhere). Whenever the constraints are satisfied, the gᵢ(x) are zero, and so at these points, regardless of the value of the multipliers, Λ(x, λ) = f(x). This is a good fact to keep in mind.

You could imagine using the Lagrangian to do constrained maximization in the following way. You move x around looking for a maximum value of Λ. However, you have no control over λ, which gets set in the worst way possible for you. Therefore, whenever you choose x, λ is chosen to minimize Λ. Formally, the problem is to find the x which gives

L* = max_x min_λ Λ(x, λ)    (28)

Now remember that if your x happens to satisfy the constraints, Λ(x, λ) = f(x), regardless of what λ is. However, if x does not satisfy the constraints, then some gᵢ(x) ≠ 0. But then λᵢ can be fiddled to make Λ(x, λ) as small as desired, and the inner minimum will be −∞. So L* will be the maximum value of f subject to the constraints.
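A tiny numeric sketch of this adversarial game, using the no-brainer problem (the grid of multipliers here is illustrative):

    # Lagrangian of f(x) = 2 - x**2 with the constraint g(x) = x - 1.
    def lagrangian(x, lam):
        return (2 - x**2) - lam*(x - 1)

    lams = [-100.0, -10.0, 0.0, 10.0, 100.0]
    for x in (1.0, 1.5):        # a feasible and an infeasible point
        print(x, min(lagrangian(x, lam) for lam in lams))
    # x = 1.0: every lambda gives exactly f(1) = 1.
    # x = 1.5: already -50.25 on this grid, and the minimum heads to
    # -infinity as the range of lambda widens.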

3.2 Reversing the Scope


The problem with the above view of the Lagrangian is that it really doesn't accomplish anything beyond encoding the constraints and handing us back the same problem we started with: find the maximum value of f, ignoring the values of x which are not in the feasible region. More usefully, we can switch the max and the min from the previous section, and the result still holds:

L* = min_λ max_x Λ(x, λ)    (29)

This is part of the full Kuhn-Tucker theorem (cite), which we aren't going to prove rigorously. However, the intuition behind why it's true is important. Before we examine why this reversal should work, let's see what it accomplishes if it is true. We originally had a constrained optimization problem, and we would like very much for it to become an unconstrained one. Once we fix the values of the multipliers λ, the Lagrangian Λ becomes a function of x alone, and we might be able to maximize that function (it's unconstrained!) relatively easily. If so, we would get a solution for each λ; call it x*(λ). But then we can do an unconstrained minimization of Λ(x*(λ), λ) over the space of λ, and we would have our solution.

It might not be clear why that's any different from fixing x and finding a minimizing λ for each x. It's different in two ways. First, unlike min_λ Λ(x, λ), the inner maximum max_x Λ(x, λ) is typically well-behaved. (Remember that min_λ Λ(x, λ) is negative infinity almost everywhere, and jumps to f(x) for the x which satisfy the constraints.) Second, it is often the case that we can find a closed-form solution to max_x Λ(x, λ), while we have nothing useful to say about min_λ Λ(x, λ). This is also a general instance of switching to a dual problem when a primal problem is unpleasant in some way. [cites]
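To make this concrete, here is the paraboloid-and-line problem done dual-style, as a sympy sketch (illustrative code): the inner maximization has a closed form, and the outer minimization over λ is then an easy one-dimensional problem.

    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)
    L = (2 - x**2 - 2*y**2) - lam*(x + y - 1)

    # Inner problem: for fixed lambda, maximize over x and y.
    inner = sp.solve([sp.diff(L, x), sp.diff(L, y)], [x, y])
    dual = sp.expand(L.subs(inner))     # 3*lambda**2/8 + lambda + 2

    # Outer problem: minimize the dual over lambda.
    lam_star = sp.solve(sp.diff(dual, lam), lam)[0]
    print(lam_star)                                   # -4/3
    print({v: e.subs(lam, lam_star) for v, e in inner.items()})
    # {x: 2/3, y: 1/3}
    print(dual.subs(lam, lam_star))                   # 4/3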

3.3 Duality
Let's say we're convinced that it would be a good thing if

max_x min_λ Λ(x, λ) = min_λ max_x Λ(x, λ)    (30)


Figure 6: The Lagrangian of the parabola f(x) = 2 − x² with the constraint x − 1 = 0.

Now we'll argue why this is true: examples first, intuition second, formal proof elsewhere. Recall the no-brainer problem: maximize f(x) = 2 − x² subject to x − 1 = 0. Let's form the Lagrangian:

Λ(x, λ) = 2 − x² − λ(x − 1)    (31)

This surface is plotted in figure 6. The straight dark line is the value of Λ at x = 1. At that value, the constraint is satisfied and so, as promised, λ has no effect: Λ(1, λ) = f(1) = 1 for all λ. (At every other x, min_λ Λ(x, λ) would be −∞.) The curving dark line is the value of max_x Λ(x, λ) at each λ. The minimum value along this line is at λ = −2, where x = 1 and Λ = 1, which is the maximum (and only) value of f among the point(s) satisfying the constraint.
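It may help to carry out the inner maximization by hand (a short supplementary derivation, consistent with the figure): for fixed λ, setting ∂Λ/∂x = −2x − λ = 0 gives x*(λ) = −λ/2, and substituting back in,

max_x Λ(x, λ) = 2 + λ + λ²/4.

This is the curving dark line: a parabola in λ, minimized at λ = −2, where x*(−2) = 1 and the value is 1, exactly the constrained maximum.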

3.4 A sort-of proof of sort-of the Kuhn-Tucker theorem

The Lagrangian is hard to plot when n > 1. However, let's consider what happens in the neighborhood of a point x* which satisfies the constraints, and which is a local maximum among the points satisfying the constraints. Since each gᵢ(x*) = 0, the derivatives of Λ with respect to each λᵢ are zero at x*. ∇f(x*) may not be zero, but if it had any component which was not in the space spanned by the constraint normals ∇gᵢ(x*), we could nudge x* in a direction inside the allowed region and increase f. Since x* is a local maximum inside that region, that isn't possible. So ∇f(x*) is in the space spanned by the constraint normals, and can therefore be written as a (unique) linear combination of them. Let ∇f(x*) = Σᵢ λᵢ* ∇gᵢ(x*) be that combination. Then clearly ∇f(x*) − Σᵢ λᵢ* ∇gᵢ(x*) = 0, which is to say that ∇Λ vanishes at (x*, λ*).

Now consider a vector λ near λ*. The combination ∇f(x*) − Σᵢ λᵢ ∇gᵢ(x*) cannot still be zero, because the weights of the linear combination are unique. Thus, fixing this λ and allowing x to vary, there is some direction (along that residual vector or opposite to it) in which we can nudge x away from x* and increase Λ. Therefore, at λ*, the inner maximum max_x Λ(x, λ) is at a local minimum. Another way to remember this intuitively: λ* is probably not zero, and if we set it to zero (a huge nudge), then Λ = f, and the maximum of Λ becomes the unconstrained maximum of f, which can only be larger than the constrained maximum f(x*).

Let's look at one more example. Recall the paraboloid (figure 3, right) with the constraint that x and y sum to one. The maximum value occurred at (x, y) = (2/3, 1/3), where λ = −4/3 and f = 4/3. Figure 7 shows what happens when we nudge λ up and down slightly. At λ = 0, the Lagrangian is just the original surface f. Its maximum value (2) is at the origin (which obviously doesn't satisfy the constraint). At λ = −4/3, the maximum value of the Lagrangian is at (2/3, 1/3) (which does satisfy the constraint). The gradient of f is not zero there, but it is perpendicular to the constraint line, so the point is a local maximum along that line. Another way of thinking of this is that the gradient of f (the top arrow field in figure 7) is balanced at that point by the scaled gradient of the constraint, λ∇g (the second arrow field down). We can see the effect by adding these two fields, which forms the gradient of the Lagrangian (the third arrow field): this gradient is zero at (2/3, 1/3) with the right λ. If we nudge λ up to −1, the gradient of f is suddenly no longer completely cancelled by λ∇g, and we can increase the Lagrangian by nudging x toward the origin. Similarly, if we nudge λ down to −5/3, the gradient of f is over-cancelled, and we can increase the Lagrangian by nudging x away from the origin.

Figure 7: Lagrangian surfaces for the paraboloid f = 2 − x² − 2y² with the constraint x + y − 1 = 0, at λ = 0, −5/3, −4/3, and −1.
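You can watch the cancellation happen numerically. The sketch below (illustrative code) evaluates the gradient of the Lagrangian at the constrained optimum (2/3, 1/3) for the three non-zero multiplier values shown in figure 7:

    import sympy as sp

    x, y, lam = sp.symbols('x y lambda', real=True)
    L = (2 - x**2 - 2*y**2) - lam*(x + y - 1)
    grad = [sp.diff(L, x), sp.diff(L, y)]
    pt = {x: sp.Rational(2, 3), y: sp.Rational(1, 3)}

    for l in (-1, sp.Rational(-4, 3), sp.Rational(-5, 3)):
        print(l, [g.subs(pt).subs(lam, l) for g in grad])
    # lambda = -4/3: gradient (0, 0) -- the two fields cancel exactly.
    # lambda = -1:   gradient (-1/3, -1/3), pointing toward the origin.
    # lambda = -5/3: gradient (1/3, 1/3), pointing away from it.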

3.5 What do the multipliers mean?


A useful aspect of the Lagrange multiplier method is that the values of the multipliers at solution points often have some significance. Mathematically, a multiplier λᵢ is the partial derivative of Λ with respect to the target of the constraint gᵢ: it is the rate at which we could increase the Lagrangian if we were allowed to raise the target of that constraint (from zero). But remember that at solution points x*, Λ(x*) = f(x*). Therefore, the rate of increase of the Lagrangian with respect to that constraint target is also the rate of increase of the maximum constrained value of f.

In economics, when f is a profit function and the gᵢ are constraints on resource amounts, λᵢ would be the amount (possibly negative!) by which profit would rise if one were allowed one more unit of resource i. This rate is called the shadow price of resource i, which is interpreted as the amount it would be worth to relax that constraint upwards (by R&D, mining, bribery, or whatever means). [Physics example?]
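To see the rate-of-change interpretation numerically, relax the line constraint of the running example to x + y − 1 = t and track the constrained maximum as a function of the target t (a sketch reusing the substitution method; the code is illustrative):

    import sympy as sp

    x, y, t = sp.symbols('x y t', real=True)
    f = 2 - x**2 - 2*y**2

    # Constrained maximum as a function of the relaxed target t.
    f_line = f.subs(x, sp.solve(x + y - 1 - t, x)[0])
    y_star = sp.solve(sp.diff(f_line, y), y)[0]
    f_star = sp.simplify(f_line.subs(y, y_star))
    print(f_star)                           # equals 2 - 2*(t + 1)**2/3

    # Its slope at t = 0 is the multiplier found earlier.
    print(sp.diff(f_star, t).subs(t, 0))    # -4/3

Raising this particular target by a unit lowers the achievable f by 4/3, which is exactly what the negative multiplier λ = −4/3 says.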

4 A bigger example than you probably wanted


This section contains a big example of using the Lagrange multiplier method in practice, as well as another case where the multipliers have an interesting interpretation.

Figure 8: The simplex x + y + z = 1.

4.1 Maximum Entropy Models

5 Extensions
5.1 Inequality Constraints
The Lagrange multiplier method also covers the case of inequality constraints. Constraints of this form are written h(x) ≥ 0. The key observation about how inequality constraints work is that, at any given x, we have either h(x) = 0 or h(x) > 0, and the two cases are qualitatively very different. If h(x) = 0, then h is said to be active at x; otherwise it is inactive. If h is active at x, then h acts a lot like an equality constraint: it allows x to be a maximum if the gradient of f, ∇f(x), is either zero or pointing towards negative values of h (which violate the constraint). However, if the gradient is pointing towards positive values of h, then there is no reason that we cannot move in that direction. Recall that we used to write

∇f(x) = λ∇g(x)    (32)

for a (single) equality constraint. The interpretation was that, at a solution x, ∇f(x) must lie entirely along the direction of the normal to g, ∇g(x). For an inequality constraint, with multiplier α, we write the analogous

∇f(x) = α∇h(x)    (33)

but, if x is a maximum, then a non-zero ∇f(x) not only has to be parallel to ∇h(x), it must actually point in the opposite sense along that direction (i.e., out of the feasible side and towards the forbidden side). We can enforce this very simply, by restricting the multiplier α to be negative (or zero). A positive multiplier would mean that the direction of increasing f is the same as the direction of increasing h; but points in that situation certainly aren't solutions, as we want to increase f and we are allowed to increase h.

If h is inactive at x (h(x) > 0), then we want to be even stricter about which values of α are acceptable at a solution. In fact, in this case, α must be zero at x. (Intuitively, if h is inactive, then nothing should change at x if we drop h.) [better explanation]

In summary, for inequality constraints, we add them to the Lagrangian just as if they were equality constraints, except that we require that α ≤ 0 and that, if α is not zero, then h(x) is. The situation that one or the other can be non-zero, but not both, is referred to as complementary slackness, and can be compactly written as αh(x) = 0. Bundling it all up, complete with multiple constraints of both kinds, we get the general Lagrangian:

Λ(x, λ, α) = f(x) − Σᵢ λᵢ gᵢ(x) − Σₖ αₖ hₖ(x)    (34)

The Kuhn-Tucker theorem (or our intuitive arguments) tells us that, if a point x is a maximum of f subject to the equality constraints gᵢ(x) = 0 and the inequality constraints hₖ(x) ≥ 0, then:

∇Λ(x, λ, α) = ∇f(x) − Σᵢ λᵢ ∇gᵢ(x) − Σₖ αₖ ∇hₖ(x) = 0    (35)
αₖ ≤ 0 for all k    (36)
αₖ hₖ(x) = 0 for all k    (37)

The second condition takes care of the restriction on active inequalities. The third condition is a somewhat cryptic way of insisting that, for each k, either αₖ is zero or hₖ(x) is zero. Now is probably a good time to point out that there is more to the Kuhn-Tucker theorem than the above statement. The above conditions are called the first-order conditions. All (local) maxima will satisfy them. The theorem also gives second-order conditions, on the second-derivative (Hessian) matrices, which distinguish local maxima from other situations that can trigger false alarms with the first-order conditions. However, in many situations one knows in advance that the solution will be a maximum (such as in the maximum entropy example). Caveat about globals?
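As a numeric sanity check on the active/inactive distinction, here is a small sketch using scipy's SLSQP solver, whose 'ineq' constraint type follows exactly the h(x) ≥ 0 convention above (the code is illustrative, not part of the original text):

    from scipy.optimize import minimize

    f = lambda v: -(2 - v[0]**2 - 2*v[1]**2)  # negated: scipy minimizes

    # Active case: h = x + y - 1 >= 0 excludes the unconstrained
    # optimum, so the solution sits on the boundary at (2/3, 1/3).
    r1 = minimize(f, [0.5, 0.5], method='SLSQP',
                  constraints=[{'type': 'ineq',
                                'fun': lambda v: v[0] + v[1] - 1}])
    print(r1.x)                               # approx [0.667, 0.333]

    # Inactive case: h = 1 - x - y >= 0 contains the unconstrained
    # optimum; the constraint does nothing, its multiplier is zero,
    # and we simply recover the maximum of f at the origin.
    r2 = minimize(f, [0.5, 0.5], method='SLSQP',
                  constraints=[{'type': 'ineq',
                                'fun': lambda v: 1 - v[0] - v[1]}])
    print(r2.x)                               # approx [0, 0]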

6 Conclusion
This tutorial only introduces the basic concepts of the Lagrange multiplier method. If you are interested, there are many detailed texts on the subject [cites]. The goal of this tutorial was to supply some intuition behind the central ideas so that other, more comprehensive and formal sources become more accessible. Feedback requested!
