the function at regularly spaced points and add up all the little trapezoids:
$$\int_a^b f(t)\,dt \approx \frac{b-a}{2n}\sum_{k=1}^{n}\bigl[f(x_k) + f(x_{k-1})\bigr], \qquad \text{where } x_k = \Bigl(1-\frac{k}{n}\Bigr)a + \frac{k}{n}\,b.$$
As a bonus, we get a sense of how good the approximation is:
$$\left| \int_a^b f(t)\,dt - \frac{b-a}{2n}\sum_{k=1}^{n}\bigl[f(x_k) + f(x_{k-1})\bigr] \right| \;\le\; K_2\,\frac{(b-a)^3}{12 n^2},$$
where $K_2$ bounds $|f''|$ on $[a,b]$.
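This composite rule is easy to test numerically. Here is a minimal sketch (the function name `trapezoid` and the test integrand $\sin$ on $[0,\pi]$ are my own illustrative choices, not from the notes), checking the error against the $K_2 (b-a)^3/(12 n^2)$ bound:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    h = (b - a) / n
    xs = [a + k * h for k in range(n + 1)]
    # (b - a)/(2n) times the sum of f(x_{k-1}) + f(x_k)
    return (h / 2) * sum(f(xs[k - 1]) + f(xs[k]) for k in range(1, n + 1))

# Integrate sin on [0, pi]; the exact value is 2.
a, b, n = 0.0, math.pi, 100
approx = trapezoid(math.sin, a, b, n)
error = abs(approx - 2.0)
# K2 = max |sin''| = 1 on [0, pi], so the bound is (b-a)^3 / (12 n^2).
bound = (b - a) ** 3 / (12 * n ** 2)
print(approx, error, bound)
```

With $n = 100$ the computed error lands safely below the bound, as predicted.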
This is a wonderful calculus fact. Except: why is it true? What is it good for? How can it be misapplied?
Here is what one article has to say, from "Simpson's Rule is Exact for Quintics" [1]:
If our calculus books said anything at all about why these inequalities work, they probably just
referred us to a numerical analysis book. Those of us who pursued the matter found that the
numerical analysis books couched the proofs in terms of Lagrange interpolation in the context of
general Newton-Cotes quadratures. Most who had gotten that far threw up their hands at that
point: the arguments are not accessible to freshmen.
Let's follow up on these questions and do a calculus-level derivation.
Proof. For one trapezoid, we need an approximation to the error. Work on a symmetric interval $[-h, h]$ with $h \ll 1$, and define the error as a function of $h$:
$$E(h) := \int_{-h}^{h} f(t)\,dt - h\,\bigl[f(-h) + f(h)\bigr].$$
Then $E(0) = 0$ and $E'(h) = -h\,[f'(h) - f'(-h)]$, so by the fundamental theorem of calculus
$$E(h) = E(0) + \int_0^h E'(t)\,dt = -\int_0^h \bigl[f'(t) - f'(-t)\bigr]\,t\,dt.$$
This could also have been done with integration by parts. Then, by the mean value theorem for integrals (with the nonnegative weight $t^2$) and the mean value theorem, there are points $h_1 \in (0, h)$ and $h_2 \in (-h_1, h_1)$ with
$$\int_0^h \bigl[f'(t) - f'(-t)\bigr]\,t\,dt = \frac{f'(h_1) - f'(-h_1)}{h_1}\int_0^h t^2\,dt = 2 f''(h_2)\,\frac{h^3}{3}.$$
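The conclusion $E(h) = -\tfrac{2}{3} f''(h_2)\,h^3$ can be checked exactly on $f(x) = x^2$, where $f''$ is constant so the unknown point $h_2$ doesn't matter. A quick sketch (my own check, not from the notes):

```python
# One-trapezoid error on [-h, h] for f(x) = x^2, where f'' = 2 everywhere.
def E(h):
    exact = 2 * h ** 3 / 3           # integral of x^2 over [-h, h]
    trap = h * ((-h) ** 2 + h ** 2)  # h * [f(-h) + f(h)]
    return exact - trap

h = 0.5
predicted = -(2.0 / 3.0) * h ** 3 * 2  # -(2/3) h^3 f''
print(E(h), predicted)
```

Both values agree to machine precision, since the formula is exact when $f''$ is constant.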
Sometimes we are stuck in a calculation, and we'd like to make a substitution that's not quite kosher. Then we do it anyway, and we're right most of the time. The branch of math dealing with justifying such substitutions is called "Analysis". In many cases it comes down to an additional set of rules, which we build on top of calculus.
So the error from a single trapezoid, of width $\frac{b-a}{n}$, is smaller than
$$\frac{(b-a)^3}{12 n^3}\,K_2, \qquad K_2 := \max_{x \in [a,b]} |f''(x)|.$$
Summing the errors over all $n$ trapezoids gives the total bound $K_2 \frac{(b-a)^3}{12 n^2}$, which tends to $0$ as $n \to \infty$.
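We can also watch the $1/n^2$ rate directly: doubling $n$ should cut the error by roughly a factor of $4$. A small sketch (the setup is my own choice, not from the notes):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    h = (b - a) / n
    return (h / 2) * sum(f(a + (k - 1) * h) + f(a + k * h)
                         for k in range(1, n + 1))

# Error should shrink like 1/n^2, so doubling n divides it by about 4.
exact = 2.0  # integral of sin over [0, pi]
e1 = abs(trapezoid(math.sin, 0.0, math.pi, 50) - exact)
e2 = abs(trapezoid(math.sin, 0.0, math.pi, 100) - exact)
ratio = e1 / e2
print(ratio)  # close to 4
```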
In the textbook "Numerical Analysis" by L. Ridgway Scott, these bounds can be derived from Lagrange interpolation with evenly spaced points. Then we can estimate the error of Lagrange interpolation using the Mean Value Theorem.
Observation # 1
Given two points $(a, f(a))$ and $(b, f(b))$, there is a linear interpolant $(Lf)(x)$ passing through them.
Proof. Somehow we have to find a system of equations:
$$f_0 + f_1\,a = f(a), \qquad f_0 + f_1\,b = f(b), \qquad (Lf)(x) = f_0 + f_1\,x.$$
Equivalently, $(Lf)(x)$ is determined by a vanishing determinant; expanding along the last column,
$$\begin{vmatrix} 1 & a & f(a) \\ 1 & b & f(b) \\ 1 & x & (Lf)(x) \end{vmatrix}
= f(a)\begin{vmatrix} 1 & b \\ 1 & x \end{vmatrix}
- f(b)\begin{vmatrix} 1 & a \\ 1 & x \end{vmatrix}
+ (Lf)(x)\begin{vmatrix} 1 & a \\ 1 & b \end{vmatrix} = 0.$$
Solving for $(Lf)(x)$ gives
$$(Lf)(x) = f(a)\,\frac{b-x}{b-a} + f(b)\,\frac{x-a}{b-a},$$
and it is a line.
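The $2 \times 2$ system can be solved directly and compared against the Lagrange form; a small sketch (the helper name `line_coeffs` and the sample values are hypothetical):

```python
# Solve f0 + f1*a = f(a), f0 + f1*b = f(b) for the line's coefficients,
# then compare with the Lagrange form f(a)(b-x)/(b-a) + f(b)(x-a)/(b-a).
def line_coeffs(a, b, fa, fb):
    f1 = (fb - fa) / (b - a)  # slope
    f0 = fa - f1 * a          # intercept
    return f0, f1

a, b = 1.0, 3.0
fa, fb = 2.0, 8.0
f0, f1 = line_coeffs(a, b, fa, fb)

x = 2.2
lagrange = fa * (b - x) / (b - a) + fb * (x - a) / (b - a)
print(f0 + f1 * x, lagrange)  # the two forms agree
```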
Observation # 2
The interpolation error is analogous to the remainder in Taylor's theorem: for each $x \in [a,b]$ there is a point $\xi(x) \in (a,b)$ such that
$$f(x) - (Lf)(x) = \frac{f^{(2)}(\xi(x))}{2!}\,(x-a)(x-b). \tag{2}$$
Integrating (or "multiplying" by $\int_a^b$) gives
$$\left| \int_a^b f(x) - (Lf)(x)\,dx \right| \le (b-a)\,\frac{1}{2!}\,\max_{x\in[a,b]} |(x-a)(x-b)|\,\max_{x\in[a,b]} |f^{(2)}(x)| = \frac{(b-a)^3}{8}\,\max_{x\in[a,b]} |f^{(2)}(x)|,$$
since $\max_{x\in[a,b]} |(x-a)(x-b)| = \frac{(b-a)^2}{4}$.
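For instance, for $f = e^x$ on $[0,1]$ the bound gives $e/8 \approx 0.34$, while the actual one-trapezoid error is about $0.14$. A check (the example is my own, not from the notes):

```python
import math

# Check |integral of f - Lf| <= (b-a)^3/8 * max|f''| for f = exp on [0, 1].
a, b = 0.0, 1.0
exact = math.e - 1.0                               # integral of e^x over [0, 1]
trap = (b - a) * (math.exp(a) + math.exp(b)) / 2   # integral of Lf (one trapezoid)
err = abs(exact - trap)
bound = (b - a) ** 3 / 8 * math.e                  # max |f''| = e on [0, 1]
print(err, bound)
```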
Observation # 3
We can recover the Trapezoid rule (on a single interval) using Lagrange interpolation on two points, approximating $\int_a^b f$ by $\int_a^b Lf$:
$$\int_a^b (Lf)(x)\,dx = \int_a^b \left[ f(a)\,\frac{b-x}{b-a} + f(b)\,\frac{x-a}{b-a} \right] dx = \frac{f(b) + f(a)}{2}\,(b-a).$$
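Since a midpoint Riemann sum is exact for linear functions, a fine midpoint sum of $Lf$ should reproduce $\frac{f(a)+f(b)}{2}(b-a)$ up to roundoff. A sketch with made-up endpoint values (none of the numbers come from the notes):

```python
# Integrate the linear interpolant Lf numerically (fine midpoint sum) and
# check it matches the trapezoid value (f(a) + f(b))/2 * (b - a).
a, b = 1.0, 4.0
fa, fb = 10.0, -2.0

def Lf(x):
    return fa * (b - x) / (b - a) + fb * (x - a) / (b - a)

N = 100_000
h = (b - a) / N
riemann = sum(Lf(a + (i + 0.5) * h) for i in range(N)) * h  # midpoint sum
trap_value = (fa + fb) / 2 * (b - a)
print(riemann, trap_value)
```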
processing (e.g. the volume of Twitter on the Election) and theoretical physics (e.g. the location of random
matrix eigenvalues on the line) have their basis in numerical analysis. And we still haven't tested this out
on familiar functions. Hopefully we can tease some of this out next time.
Leave me alone!
$$\int_a^b f(x)\,g'(x)\,dx = f(b)g(b) - f(a)g(a) - \int_a^b f'(x)\,g(x)\,dx$$
Really it breaks down into joining the product rule with the fundamental theorem of calculus. Start from
$$\frac{d}{dx}\bigl[f(x)g(x)\bigr] = f'(x)g(x) + f(x)g'(x),$$
then integrate both sides:
$$f(b)g(b) - f(a)g(a) = \int_a^b \frac{d}{dx}\bigl[f(x)g(x)\bigr]\,dx = \int_a^b f'(x)g(x)\,dx + \int_a^b f(x)g'(x)\,dx.$$
That is, we apply $\int_a^b$ to undo $\frac{d}{dx}$. We still had to include the endpoints.
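The identity is easy to check numerically. Here is a sketch with $f(x) = x$, $g(x) = \sin x$ on $[0, \pi]$, where both sides equal $-2$; the fine midpoint sum used as the quadrature is my own choice, not from the notes:

```python
import math

# Check: integral of f g' = f(b)g(b) - f(a)g(a) - integral of f' g,
# with f(x) = x, g(x) = sin(x) on [0, pi].
def integrate(fn, a, b, n=200_000):
    """Fine midpoint-rule quadrature."""
    h = (b - a) / n
    return sum(fn(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 0.0, math.pi
lhs = integrate(lambda x: x * math.cos(x), a, b)                      # f g'
rhs = b * math.sin(b) - a * math.sin(a) - integrate(math.sin, a, b)   # boundary - f' g
print(lhs, rhs)  # both close to -2
```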
I like to imagine integration by parts as saying the integral "jumps" from one side of the product to the other - and you multiply by $-1$:
$$\int_a^b \frac{df}{dx}\,g(x)\,dx = \Bigl[f(x)g(x)\Bigr]_a^b - \int_a^b f(x)\,\frac{dg}{dx}\,dx,$$
where $\frac{d}{dx}$ is a differential operator [2].
References
[1] Louis Talman, "Simpson's Rule is Exact for Quintics", Amer. Math. Monthly, 113 (2006), 144-155.
    http://clem.mscd.edu/~talmanl/PDFs/Misc/Quintics.pdf
    http://rowdy.mscd.edu/~talmanl/PDFs/Misc/ExteQuad.pdf
[2] http://en.wikipedia.org/wiki/Differential_operator