Welcome to Calculus.

I'm Professor Ghrist.


We're about to begin Lecture 7 on Limits.
>> In many respects, calculus can be
defined as the mathematics of limits.
In this lesson, we'll review the concept and
definition of a limit, consider a few
examples, and see that one of the most
effective tools for computing a limit
involves Taylor series.
>> In your previous exposure to
calculus, you have certainly seen limits.
But what does it mean to say that the limit,
as x approaches a, of f of x equals L?
Well, I'm sure you have an image in your
head, that as x gets closer and closer to
a, f of x gets closer and closer to L.
Perhaps you remember that it doesn't
matter whether you approach from the left
or from the right.
Perhaps you remember that it doesn't
matter what the actual value of the
function is at x equals a.
What matters is the limit.
Well this picture is the intuition behind
the limit, but it is not the definition.
The definition is another thing
altogether.
What is it?
The limit of f of x, as x goes to a,
equals L if and only if, for every epsilon
greater than 0, there exists some delta
greater than 0 such that, whenever x, not
equal to a, is within delta of a, then
f of x is within epsilon of L.
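In symbols, this definition reads:

```latex
\lim_{x \to a} f(x) = L
\iff
\forall \epsilon > 0 \;\; \exists \delta > 0 :
\quad 0 < |x - a| < \delta \implies |f(x) - L| < \epsilon .
```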
That is a bit of a mouthful, and a lot of
students have difficulty with it.
Why?
Because as a logical statement, it is
complex.
As a grammatical statement, it is
complex.
How does one make sense of this?
Well, a picture is not a bad way to go.
If we think of L as a target that we are
trying to hit, then we are allowed some
tolerance on the output.
This tolerance is in the form of an
epsilon.
You have to get within epsilon of L.
Using your function, you can set the
input to be as close to a as you like.
But there's going to be some tolerance on
the input.
Some degree of error that is bounded by
delta.
In order to have the limit of f of x, as x
approaches a, equal L, anything
within the input tolerance has to hit the
target within the output tolerance.
This must be true no matter how small the
output tolerance epsilon is: you must be
able to find some sufficiently small
input tolerance to guarantee always
striking within range of the limit.
In the context of an actual function f,
one can visualize this delta epsilon
definition as follows.
You choose an output tolerance, epsilon.
Then, there must be some input tolerance,
delta, so that any input within delta of
a has an output within epsilon of L.
Now, many students get confounded here,
trying to find the optimal delta.
It does not need to be optimal.
You can choose something smaller, that is
not a problem.
The critical part of the definition is,
that as you change epsilon, you need to
be able to update delta.
If you make epsilon smaller still and
decrease your level of acceptable error
on the output, you need to find some
amount of acceptable error on the input.
And this has to continue for every
possible non-zero value of epsilon.
That is what captures what the limit is.
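As an illustration (not from the lecture), here is a minimal numerical sketch of the epsilon-delta game for the made-up example f(x) = 3x + 1 at a = 2, where L = 7 and delta = epsilon/3 works:

```python
def f(x):
    # A sample continuous function; its limit at a = 2 is L = 7.
    return 3 * x + 1

a, L = 2.0, 7.0

# For each output tolerance epsilon, delta = epsilon / 3 suffices,
# since |f(x) - L| = 3 * |x - a|.
for eps in [0.1, 0.01, 0.001]:
    delta = eps / 3
    # Sample inputs within delta of a (excluding a itself).
    for t in [-0.9, -0.5, 0.5, 0.9]:
        x = a + t * delta
        assert abs(f(x) - L) < eps
```

Note that delta = epsilon/3 is not the only choice; anything smaller also works, which echoes the point that the optimal delta is not required.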
This view of the definition is extendable
to other contexts.
Consider the limit as x goes to infinity
of f of x.
What does it mean for that to be equal to
L?
Well, we're going to think about infinity
as something like an end point to the
real line, modifying its topology so that
it looks like a closed interval.
Now, this is a dangerous thing to do if
you don't know what you're doing.
But let's think about it from the
perspective of our interpretation of a
limit.
Given any output tolerance, epsilon,
there must be some tolerance on the input
that guarantees striking within epsilon
of L.
Now, how do we take a neighborhood of
infinity?
How do we talk about a tolerance on that
input?
Well, what it becomes in this context is
some lower bound M, so that, whenever your
input is greater than M, then your output
is within epsilon of L.
As before, this must be true no matter
what epsilon you choose.
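In symbols, this lower-bound version of the definition reads:

```latex
\lim_{x \to \infty} f(x) = L
\iff
\forall \epsilon > 0 \;\; \exists M :
\quad x > M \implies |f(x) - L| < \epsilon .
```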
If you make your tolerances on the
output tighter and tighter, then we can
make the tolerances on the input tighter
and tighter.
In this case, since we are only looking at
a one-sided limit, instead of talking about
being within delta of infinity, we can
speak in terms of an explicit lower bound
on x; the same intuition and picture holds.
To be sure, not all limits exist.
Not all functions are well behaved.
There are several ways in which things
can go wrong.
You could have a discontinuity in the
function; the limit would not exist at
that point.
You could have what is called a blow up,
that is the function goes to infinity as
x gets closer and closer to a.
Or, worse still, the limit can fail to
exist because of an oscillation, where
the function oscillates so badly that the
limit at a does not exist.
On the other hand, most of the time
you're not going to have to worry about
this because most functions are
continuous.
And we say that f is continuous at an
input a, if the limit, as x goes to a, of
f of x exists and equals f at a.
We say that f is continuous everywhere
if this statement is true for all inputs
a in the domain.
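Symbolically, continuity at an input a is the statement:

```latex
f \text{ is continuous at } a
\iff
\lim_{x \to a} f(x) = f(a).
```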
Now most of the functions that we're used
to seeing are continuous functions and
one doesn't have to worry so much about
limits in this case.
There's a bit of a technicality.
One has to be very explicit about which
points are in the domain of the function.
Some functions which look discontinuous
may actually be continuous, if the
discontinuous-looking point is not
actually in the domain.
If, however, the function is defined
there, then the discontinuity presents
itself.
There are certain rules associated with
limits that you may know by heart, even if
you don't remember them explicitly.
If the limit of f of x, as x approaches
a, and the limit of g of x, as x
approaches a, both exist, then the
following rules are in effect.
There's a summation rule that the limit
of the sum of f plus g is, in fact, the
sum of the limits.
There is, likewise, a product rule, that
the limit of the product of f and g is,
in fact, the product of the limits.
There is, likewise, a quotient rule, that
the limit of f divided by g is the limit
of f divided by the limit of g.
Now, at this point, you've got to be a bit
careful: if that denominator's limit is
zero, then this limit may not exist.
There's, likewise, a chain rule, or
composition rule, that says the limit of f
composed with g, as x goes to a, can be
realized as f of the limit of g as x
approaches a.
Now once again, this too has some
conditions.
f, in this case, needs to be continuous
at the appropriate point in order for
this to hold.
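These four rules can be summarized in symbols, assuming both limits on the right-hand sides exist (and, for the quotient, that the denominator's limit is nonzero; for the composition, that f is continuous at the limit of g):

```latex
\lim_{x \to a} \bigl( f(x) + g(x) \bigr)
  = \lim_{x \to a} f(x) + \lim_{x \to a} g(x)
\\
\lim_{x \to a} \bigl( f(x)\, g(x) \bigr)
  = \Bigl( \lim_{x \to a} f(x) \Bigr) \Bigl( \lim_{x \to a} g(x) \Bigr)
\\
\lim_{x \to a} \frac{f(x)}{g(x)}
  = \frac{\lim_{x \to a} f(x)}{\lim_{x \to a} g(x)}
  \quad \text{if } \lim_{x \to a} g(x) \neq 0
\\
\lim_{x \to a} f\bigl( g(x) \bigr)
  = f\Bigl( \lim_{x \to a} g(x) \Bigr)
  \quad \text{if } f \text{ is continuous there.}
```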
Now, at this point, I think we're
going to have a little quiz to test your
knowledge of limits.
What is the limit, as x approaches zero,
of sine of x over x?
And this is a quotient.
Can we apply our quotient rule for
limits?
No, I'm afraid we cannot because the
denominator is going to 0 and so is the
numerator and 0 over 0 presents some
difficulties.
Now, I bet that most of you know the
answer is 1 but why do you know this?
Well, you may say: I remember this.
This is something that I had to memorize
when I took high school calculus; very
useful on exams. I just know it.
Well, that's not a very satisfying answer
is it?
Some of you may say, I wield the mighty
sword of L'Hopital's Rule and I know that
if I differentiate the top and the
bottom, then I get one.
That's great, and I'm glad you remember
L'Hopital's Rule.
But do you know why it works?
Do you have a good reason for your belief
in this rule?
Well, if not, then let's take a method
that we do trust.
Namely, Taylor series.
If we consider the limit, as x goes to 0,
of sine of x over x, we know what sine
of x is: that's x minus x cubed over 3
factorial plus higher order terms.
Now, thinking of this, as we do, as a
long polynomial, what are we tempted
to do?
Well, I look at that and say: hey, we
could factor out an x from the numerator
and cancel that with the x in the
denominator, yielding the limit, as x goes
to 0, of 1 minus x squared over 3
factorial plus higher order terms in x.
Sending x to 0 gives us an answer of 1,
and the limit makes perfect sense.
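One can check this numerically (an illustration, not part of the lecture): for small x, sin(x)/x is close to 1, and it agrees with the truncated series 1 - x^2/3! up to the next order:

```python
import math

x = 1e-3
ratio = math.sin(x) / x
# Taylor series: sin(x)/x = 1 - x^2/3! + higher order terms.
series = 1 - x**2 / math.factorial(3)

assert abs(ratio - 1) < 1e-5        # the limit is 1
assert abs(ratio - series) < 1e-12  # the truncation error is of order x^4
```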
Likewise, you might recall that the
limit, as x goes to zero, of 1 minus cosine
of x over x is... now, what was that?
Oh well, I don't remember, but I do
remember what cosine of x is.
And I note that here, the ones cancel.
And I'm left with the limit of x squared
over two factorial minus x to the fourth
over 4 factorial, plus higher order
terms.
When I divide that by x, I get the limit
of x over 2 factorial, minus x cubed over
4 factorial, plus higher order terms.
There's no 0 over 0 ambiguity any more.
This limit is precisely 0.
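Again as a numerical sanity check (an illustration, not from the lecture), the quantity tracks its leading series term x/2, which goes to 0:

```python
import math

# (1 - cos x) / x = x/2! - x^3/4! + higher order terms -> 0 as x -> 0.
for x in [1e-1, 1e-2, 1e-3]:
    value = (1 - math.cos(x)) / x
    # The leading term of the series is x/2; the error is of order x^3.
    assert abs(value - x / 2) < x**2
```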
Now, one of the wonderful things about
this Taylor series approach to limits is
that it works even in cases where you
might not have memorized the limit and
where the limit is, indeed, not so
obvious.
Well, let's look at the cube root of 1
plus 4x, minus 1, all over the fifth root
of 1 plus 3x, minus 1.
It is clear that evaluating at zero is
not going to work, that yields 0 over 0.
So, what do we do?
Well, rewriting this a little bit allows
us to use the binomial series, with alpha
equal to one third in the numerator and
one fifth in the denominator.
Applying that gives us, in the numerator,
1 plus one third times 4x plus higher
order terms, subtract 1; in the
denominator, 1 plus one fifth times 3x
plus higher order terms, subtract 1.
Those subtractions get rid of the
constant terms; we're left with terms that
all have an x in them.
We factor that out, and then the leading
order terms are four thirds in the
numerator and three fifths in the
denominator, yielding an answer of
twenty ninths.
That is beautiful.
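As one last numerical check (an illustration, not from the lecture), evaluating the expression at a small x lands very close to 20/9:

```python
# Limit of ((1+4x)^(1/3) - 1) / ((1+3x)^(1/5) - 1) as x -> 0.
# The leading binomial terms are (4/3)x over (3/5)x, giving 20/9.
x = 1e-6
value = ((1 + 4 * x) ** (1 / 3) - 1) / ((1 + 3 * x) ** (1 / 5) - 1)
assert abs(value - 20 / 9) < 1e-4
```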
>> There's a vast gulf between knowing
how to compute something and knowing what
that thing is.
The question of evaluating or computing a
limit is subtle.
And it weaves its way throughout this
course.
In our next lesson, we're going to
consider one of the primary tools for
evaluating limits: that of L'Hopital's
Rule.


