
Trapezoid Rule

How to find the area under a curve?

When we want to find the area under a function, we sample the function at regularly spaced points and add up all the little trapezoids.

$$\int_a^b f(t)\,dt \approx \frac{b-a}{2n}\sum_{k=1}^{n}\big[f(x_{k-1}) + f(x_k)\big],$$

where

$$x_k = \Big(1 - \frac{k}{n}\Big)\,a + \frac{k}{n}\,b.$$
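As a minimal sketch of the rule above (the function name `trapezoid` and the test integrand are my own choices, not from the notes):

```python
# Composite trapezoid rule, coded directly from the formula above.
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    xs = [(1 - k / n) * a + (k / n) * b for k in range(n + 1)]  # the x_k grid
    return (b - a) / (2 * n) * sum(f(xs[k - 1]) + f(xs[k]) for k in range(1, n + 1))

# Sanity check: the integral of sin over [0, pi] is exactly 2.
approx = trapezoid(math.sin, 0.0, math.pi, 1000)
print(approx)  # very close to 2
```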

How bad is this error?

If the second derivative is bounded by a constant, $|f^{(2)}(x)| < K_2$, we get a sense of how bad this estimate might be.


$$\left|\int_a^b f(t)\,dt - \frac{b-a}{2n}\sum_{k=1}^{n}\big[f(x_{k-1}) + f(x_k)\big]\right| \le K_2\,\frac{(b-a)^3}{12n^2}$$
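We can sanity-check this bound numerically. The sketch below uses $f(x) = x^3$ on $[0,1]$, where $f''(x) = 6x$ gives $K_2 = 6$; the test function and names are my own choices.

```python
# Check that the trapezoid error never exceeds K2 (b-a)^3 / (12 n^2).
def trap(f, a, b, n):
    xs = [(1 - k / n) * a + (k / n) * b for k in range(n + 1)]
    return (b - a) / (2 * n) * sum(f(xs[k - 1]) + f(xs[k]) for k in range(1, n + 1))

exact = 0.25                                  # integral of x^3 on [0, 1]
for n in (2, 4, 8, 16):
    err = abs(trap(lambda x: x**3, 0.0, 1.0, n) - exact)
    bound = 6 * (1.0 - 0.0) ** 3 / (12 * n**2)   # K2 (b-a)^3 / (12 n^2)
    assert err <= bound
    print(n, err, bound)
```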

This is a wonderful calculus fact. Except: why is it true? What is it good for? How can it be misapplied?
Here is what one article, "Simpson's Rule is Exact for Quintics" [1], has to say:
If our calculus books said anything at all about why these inequalities work, they probably just
referred us to a numerical analysis book. Those of us who pursued the matter found that the
numerical analysis books couched the proofs in terms of Lagrange interpolation in the context of
general Newton-Cotes quadratures. Most who had gotten that far threw up their hands at that
point: the arguments are not accessible to freshmen.
Let's follow up on this and do a calculus derivation.
Proof. For one trapezoid, we need to find an approximation to the error function:

$$E(h) := \int_{-h}^{h} f(t)\,dt - h\,\big[f(-h) + f(h)\big]$$
We define the error as a function of $h \ll 1$ and use Taylor's approximation on it.

$$E(h) = E(0) + \int_0^h E'(t)\,dt = -\int_0^h \big[f'(t) - f'(-t)\big]\,t\,dt$$
This could have been done with integration by parts. Then, since $0 < t < h \ll 1$, we can try rounding:

$$\int_0^h \big[f'(t) - f'(-t)\big]\,t\,dt = \frac{f'(h_1) - f'(-h_1)}{h_1}\int_0^h t^2\,dt = 2f''(h_2)\,\frac{h^3}{3}$$

It works! Both steps are instances of the Mean Value Theorem, and we conclude $E(h) = -\frac{2}{3}\,f''(h_2)\,h^3$.
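We can watch this conclusion appear numerically. The sketch below takes $f = \exp$ (so $f''(0) = 1$, my own choice of test function) and checks that $E(h)/h^3$ approaches $-2/3$ as $h$ shrinks:

```python
# E(h) is the one-trapezoid error on [-h, h]; F is an antiderivative of f.
import math

def E(f, F, h):
    return (F(h) - F(-h)) - h * (f(-h) + f(h))

for h in (0.1, 0.05, 0.025):
    print(h, E(math.exp, math.exp, h) / h**3)   # tends to -2/3
```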

Sometimes we are stuck in a calculation, and we'd like to make a substitution that's not quite kosher. Then we do it anyway, and we're right most of the time. The branch of math dealing with justifying such substitutions is called "Analysis". In many cases it comes down to an additional set of rules, which we build on top of the calculus.

Does this Error Make Sense?

Hopefully, if we use enough trapezoids (a very expensive process) the error tends to zero. Indeed,

$$K_2\,\frac{(b-a)^3}{12n^2} \to 0 \quad \text{as } n \to \infty.$$

We didn't need to know anything about the exact value of $f''(x)$ on $[a,b]$ except that it was smaller than $K_2$.
What about that scary Newton-Cotes derivation?

It's not so bad. According to the textbook "Numerical Analysis" by L. Ridgway Scott [2], they can be derived from Lagrange interpolation with evenly spaced points. Then we can estimate the error of the Lagrange interpolation using the Mean Value Theorem.

Observation # 1

Between any two points $(a, f(a))$ and $(b, f(b))$, we can find a line $(Lf)(x)$ that fits through them.
Proof. Somehow we have to find a system of equations:

$$f_0 + f_1\,a = f(a)$$
$$f_0 + f_1\,b = f(b)$$
$$f_0 + f_1\,x = (Lf)(x)$$

This system of equations is overdetermined, so the determinant is zero:


$$\begin{vmatrix} 1 & a & f(a) \\ 1 & b & f(b) \\ 1 & x & f(x) \end{vmatrix} = f(a)\begin{vmatrix} 1 & b \\ 1 & x \end{vmatrix} - f(b)\begin{vmatrix} 1 & a \\ 1 & x \end{vmatrix} + f(x)\begin{vmatrix} 1 & a \\ 1 & b \end{vmatrix} = 0$$

We recover the Lagrange interpolation formula:

$$(Lf)(x) = f(a)\,\frac{b-x}{b-a} + f(b)\,\frac{x-a}{b-a}$$

This makes sense since $(Lf)(a) = f(a)$, $(Lf)(b) = f(b)$, and it is a line.

Observation # 2

$$f(x) - (Lf)(x) = \frac{(x-a)(x-b)}{2!}\,f^{(2)}(\xi(x))$$

Proof. If we integrate both sides from $a$ to $b$ (or "multiply" by $\int_a^b$):

$$\left|\int_a^b f(x) - (Lf)(x)\,dx\right| \le (b-a)\,\frac{1}{2!}\,\max_{x\in[a,b]}|(x-a)(x-b)|\,\max_{x\in[a,b]}|f^{(2)}(x)| = \frac{(b-a)^3}{8}\,\max_{x\in[a,b]}|f^{(2)}(x)|$$
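Both observations are easy to check numerically. The sketch below uses $f(x) = \sin(x)$ on $[0, 2]$ (my own choice; $\max|f''| \le 1$ there), confirming the endpoint matching and the $(b-a)^3/8$ bound:

```python
# Lagrange line through the endpoints, plus the integrated error bound.
import math

a, b = 0.0, 2.0
f = math.sin

def Lf(x):
    return f(a) * (b - x) / (b - a) + f(b) * (x - a) / (b - a)

assert abs(Lf(a) - f(a)) < 1e-12 and abs(Lf(b) - f(b)) < 1e-12

# crude midpoint Riemann sum for the integrated error
N = 100_000
dx = (b - a) / N
integral_err = abs(sum((f(a + (i + 0.5) * dx) - Lf(a + (i + 0.5) * dx)) * dx
                       for i in range(N)))
bound = (b - a) ** 3 / 8 * 1.0      # (b-a)^3 / 8 * max|f''|, with max|sin| <= 1
print(integral_err, bound)          # the error sits inside the bound
```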

Observation # 3

We can recover the Trapezoid rule using Lagrange interpolation on two points, approximating the area under the curve as a trapezoid.


Proof.

$$\int_a^b (Lf)(x)\,dx = \int_a^b \left[ f(a)\,\frac{b-x}{b-a} + f(b)\,\frac{x-a}{b-a} \right] dx = (b-a)\,\frac{f(a)+f(b)}{2}$$
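A quick numerical confirmation of this computation, with an arbitrary choice of $f(x) = x^2$ on $[1, 3]$ (the test function and helper names are mine):

```python
# Integrate the Lagrange line numerically and compare to (b-a)(f(a)+f(b))/2.
a, b = 1.0, 3.0
f = lambda x: x * x

def Lf(x):
    return f(a) * (b - x) / (b - a) + f(b) * (x - a) / (b - a)

N = 100_000
dx = (b - a) / N
lhs = sum(Lf(a + (i + 0.5) * dx) * dx for i in range(N))  # midpoint sum of Lf
rhs = (b - a) * (f(a) + f(b)) / 2
print(lhs, rhs)   # both close to 10.0
```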

Umm... this is great. Can we use actual formulas?

We are out of time. Problems from signal processing (e.g. the volume of Twitter traffic on the Election) and theoretical physics (e.g. the location of random matrix eigenvalues on the line) have their basis in numerical analysis. And we still haven't tested this out on familiar functions. Hopefully we can tease some of this out next time.

How come you got an 8 instead of a 12 in the denominator?

Leave me alone!

1. Computer science professor at the University of Chicago.

Appendix: Integration By Parts


Calculus classes usually write integration by parts in the following unwieldy form:

$$\int_a^b f(x)\,g'(x)\,dx = f(b)g(b) - f(a)g(a) - \int_a^b f'(x)\,g(x)\,dx$$

Really it breaks down into the product rule and applying the $\int_a^b$ operator to both sides:

$$\frac{d}{dx}\Big[f(x)\,g(x)\Big] = f'(x)\,g(x) + f(x)\,g'(x)$$

$$f(b)g(b) - f(a)g(a) = \int_a^b \frac{d}{dx}\Big[f(x)\,g(x)\Big]\,dx = \int_a^b f'(x)\,g(x)\,dx + \int_a^b f(x)\,g'(x)\,dx$$

The fundamental theorem of calculus lets us cancel the $\int$ against the $\frac{d}{dx}$. We still had to include the endpoints.
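A numeric sanity check of the identity above, taking $f(x) = x$ and $g(x) = \sin(x)$ on $[0, \pi]$ (my own choice of test pair), with quadrature done by a fine midpoint sum:

```python
# Both sides of integration by parts should agree: here each equals -2.
import math

a, b = 0.0, math.pi
f, fp = lambda x: x, lambda x: 1.0          # f and f'
g, gp = math.sin, math.cos                  # g and g'

def integrate(h, N=200_000):
    dx = (b - a) / N
    return sum(h(a + (i + 0.5) * dx) * dx for i in range(N))

lhs = integrate(lambda x: f(x) * gp(x))
rhs = f(b) * g(b) - f(a) * g(a) - integrate(lambda x: fp(x) * g(x))
print(lhs, rhs)   # both sides agree
```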

I like to imagine integration by parts as saying the integral "jumps" from one side to the other, and you multiply by $-1$ (for functions whose product vanishes at the endpoints):

$$\int \frac{df}{dx}\,g(x)\,dx = -\int f(x)\,\frac{dg}{dx}\,dx$$

A little bit fancy: to say the "adjoint" of $\frac{d}{dx}$ is $-\frac{d}{dx}$. The derivative is a kind of matrix here (see footnote 2), and the "adjoint" is behaving like the transpose.

References

[1] Louis Talman, "Simpson's Rule is Exact for Quintics", Amer. Math. Monthly, 113 (2006), 144-155.
http://clem.mscd.edu/~talmanl/PDFs/Misc/Quintics.pdf
http://rowdy.mscd.edu/~talmanl/PDFs/Misc/ExteQuad.pdf

[2] L. Ridgway Scott, "Numerical Analysis", Princeton University Press, 2011.

2. http://en.wikipedia.org/wiki/Differential_operator
