
Journal of Online Mathematics and Its Applications

Unified Treatment of Regula Falsi, Newton–Raphson,
Secant, and Steffensen Methods for Nonlinear Equations
Avram Sidi
Computer Science Department
Technion - Israel Institute of Technology
Haifa 32000, Israel
e-mail: asidi@cs.technion.ac.il
http://www.cs.technion.ac.il/~asidi
May 2006
Abstract
Regula falsi, Newton–Raphson, secant, and Steffensen methods are four very effective numerical procedures used for solving nonlinear equations of the form f(x) = 0. They are derived via linear interpolation procedures. Their analyses can be carried out by making use of interpolation theory through divided differences and Newton's interpolation formula. In this note, we unify these analyses. The analysis of the Steffensen method given here seems to be new and is especially simpler than the standard treatments. The contents of this note should also be a useful exercise/example in the application of polynomial interpolation and divided differences in introductory courses in numerical analysis.
1 Introduction
Let α be the solution to the equation

$$f(x) = 0, \qquad (1)$$

and assume that f(x) is twice continuously differentiable in a closed interval I containing α in its interior. Some iterative methods used for solving (1) and that make direct use of f(x) are the regula falsi method (or false position method), the secant method, the Newton–Raphson method, and the Steffensen method. These methods are discussed in many books on numerical analysis. See, for example, Atkinson [1], Henrici [2], Ralston and Rabinowitz [3], and Stoer and Bulirsch [5].
[Figure omitted: graph of y = f(x) with the iterates x_0, x_1, x_2, x_3, x_4 marked on the x-axis.]

Figure 1: The regula falsi method.
All four methods are derived by a linear interpolation procedure as follows: Assuming that the approximations x_k to the solution α of (1) have been determined for all k ≤ n, and another approximation c is available, the next approximation x_{n+1} is determined as the point of intersection (in the x-y plane) of the straight line through the points (x_n, f(x_n)) and (c, f(c)) with the x-axis. (See Figures 1–3 for the regula falsi, secant, and Newton–Raphson methods.) Since the equation of this straight line is

$$y = f(x_n) + \frac{f(x_n) - f(c)}{x_n - c}\,(x - x_n), \qquad (2)$$
[Figure omitted: graph of y = f(x) with the iterates x_0, x_1, x_2, x_3, x_4 marked on the x-axis.]

Figure 2: The secant method.
[Figure omitted: graph of y = f(x) with the iterates x_0, x_1, x_2, x_3 marked on the x-axis.]

Figure 3: The Newton–Raphson method.
x_{n+1} is given as

$$x_{n+1} = x_n - \frac{f(x_n)}{\dfrac{f(x_n) - f(c)}{x_n - c}}. \qquad (3)$$

Subtracting now both sides of (3) from α, we obtain

$$\alpha - x_{n+1} = \frac{f(x_n) + \dfrac{f(x_n) - f(c)}{x_n - c}\,(\alpha - x_n)}{\dfrac{f(x_n) - f(c)}{x_n - c}}. \qquad (4)$$
It must be noted though that the above is valid also when c = x_n, as is the case in the Newton–Raphson method. In this case, the straight line in question, whose equation is y = f(x_n) + f′(x_n)(x − x_n), is the tangent to the curve y = f(x) at the point (x_n, f(x_n)), and can be obtained by letting c → x_n in (2). Having said this, we would like to emphasize nevertheless that the polynomial p(x) = f(x_n) + f′(x_n)(x − x_n) is the linear interpolant to f(x) (in the Hermite sense) that satisfies p(x_n) = f(x_n) and p′(x_n) = f′(x_n). It can be treated as any polynomial of interpolation via the Newton interpolation formula. We will discuss these issues in the next section in more generality and detail.
Most textbooks in numerical analysis that treat the subject provide the convergence analyses of the four methods mentioned above separately. They use Taylor series for the Newton–Raphson method, while they apply linear interpolation theory to the secant method. They use yet another approach for the Steffensen method.

By stressing the fact that all four methods are obtained via a linear interpolation procedure, in this note, we unify the analyses of all these methods by using the theory of polynomial interpolation via divided differences. As such, the treatment also provides a good example/exercise in the application of the subject of polynomial interpolation via Newton's interpolation formula and divided differences, whether the points of interpolation are distinct or not. Furthermore, the analysis of the Steffensen method presented here turns out to be especially simple and does not seem to have been given in the literature before.

In the next section, we recall the important properties of divided differences and Newton's interpolation formula. We refer the reader to the books mentioned above for detailed treatments of this subject. In Section 3, we express α − x_{n+1} via divided differences in a way that also reveals the order of each method. In Section 4, we complete the convergence proofs of the methods in a unified manner.
2 Divided differences and Newton's interpolation formula

In the sequel, we will denote by f[z_0, z_1, …, z_m] the divided difference of order m of f(x) on the set of points {z_0, z_1, …, z_m} and will recall the following:
1. f[z_i, z_{i+1}, …, z_m] can be defined recursively via

$$f[z_i] = f(z_i); \qquad f[z_i, z_j] = \frac{f[z_i] - f[z_j]}{z_i - z_j}, \quad z_i \neq z_j, \qquad (5)$$

and, for m > i + 1, via

$$f[z_i, z_{i+1}, \ldots, z_m] = \frac{f[z_i, z_{i+1}, \ldots, z_{m-1}] - f[z_{i+1}, z_{i+2}, \ldots, z_m]}{z_i - z_m}, \quad z_i \neq z_m. \qquad (6)$$
In case the z_i are not distinct, the divided differences are defined as limits of the quotients above, provided the limits exist. For example, f[a, a] = f′(a) provided f(z) is differentiable at z = a.
2. f[z_0, z_1, …, z_m] is a symmetric function of its arguments, that is, it has the same value for every ordering of the points z_0, z_1, …, z_m.
3. Provided f ∈ C^m[a, b] and z_i ∈ [a, b], i = 0, 1, …, m, f[z_0, z_1, …, z_m] is a continuous function of its arguments z_0, z_1, …, z_m; thus, lim_{y→y_0} f[x, y, z] = f[x, y_0, z], for example. In addition, whether the z_i are distinct or not,

$$f[z_0, z_1, \ldots, z_m] = \frac{f^{(m)}(\xi)}{m!} \quad \text{for some } \xi \in \mathrm{int}(z_0, z_1, \ldots, z_m), \qquad (7)$$

where

$$\mathrm{int}(z_0, z_1, \ldots, z_m) \equiv \bigl(\min_i\{z_i\}, \max_i\{z_i\}\bigr). \qquad (8)$$

(This is a very convenient notation we use throughout this note.) Consequently,

$$f[\underbrace{z, z, \ldots, z}_{m+1 \text{ times}}] = \frac{f^{(m)}(z)}{m!}, \quad z \in [a, b]. \qquad (9)$$
4. Newton's formula for the polynomial of interpolation p(z) to the function f(z) at the points z_0, z_1, …, z_m, whether these points are distinct or not, is given by

$$p(z) = f(z_0) + \sum_{i=1}^{m} f[z_0, z_1, \ldots, z_i] \prod_{s=0}^{i-1} (z - z_s), \qquad (10)$$

and the corresponding error formula is

$$f(z) - p(z) = f[z_0, z_1, \ldots, z_m, z] \prod_{s=0}^{m} (z - z_s). \qquad (11)$$

Thus, unlike the Lagrange interpolation formula, the Newton interpolation formula allows points of interpolation to coincide, hence is much more flexible.
(i) When the z_i are distinct, p(z) can be determined from f(z_0), f(z_1), …, f(z_m). In this case, the relevant divided difference table is computed via (5), (6) in a straightforward manner.
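As a concrete illustration of (i), the divided difference coefficients and the Newton form (10) can be computed as follows. This is an illustrative Python sketch of ours (names and layout are not from the note), assuming the interpolation points are distinct:

```python
def divided_differences(zs, fs):
    """Return [f[z_0], f[z_0,z_1], ..., f[z_0,...,z_m]] by the
    recursion (5)-(6), updating one column of the table in place."""
    coeffs = list(fs)
    m = len(zs)
    for j in range(1, m):
        # After round j, coeffs[i] holds f[z_{i-j}, ..., z_i].
        for i in range(m - 1, j - 1, -1):
            coeffs[i] = (coeffs[i] - coeffs[i - 1]) / (zs[i] - zs[i - j])
    return coeffs

def newton_eval(zs, coeffs, z):
    """Evaluate the Newton form (10) by Horner-like nesting."""
    p = coeffs[-1]
    for i in range(len(coeffs) - 2, -1, -1):
        p = coeffs[i] + (z - zs[i]) * p
    return p
```

For example, interpolating f(z) = z³ at four points reproduces z³ exactly, and the top coefficient equals f‴(ξ)/3! = 1, in agreement with (7).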
(ii) In case z_0 = z_1 = ⋯ = z_m, by (7), p(z) in (10) becomes the mth partial sum of the Taylor series of f(z) about z_0, and the expression for f(z) − p(z) given in (11) becomes the corresponding remainder.
(iii) In case the z_i are not all distinct, we proceed as follows: We denote the distinct z_i by a_1, a_2, …, a_r. For each i = 1, …, r, we denote the multiplicity of a_i by s_i. Thus, m + 1 = \sum_{i=1}^{r} s_i. We then order the z_i as in

$$z_0 = z_1 = \cdots = z_{s_1 - 1} = a_1,$$
$$z_{s_1} = z_{s_1 + 1} = \cdots = z_{s_1 + s_2 - 1} = a_2,$$
$$z_{s_1 + s_2} = z_{s_1 + s_2 + 1} = \cdots = z_{s_1 + s_2 + s_3 - 1} = a_3,$$

and so on. Then the polynomial p(z) in (10), now called the generalized Hermite interpolant, interpolates f(z) in the following sense:

$$p^{(j)}(a_i) = f^{(j)}(a_i), \quad j = 0, 1, \ldots, s_i - 1, \quad i = 1, \ldots, r.$$

Given f^{(j)}(a_i), j = 0, 1, …, s_i − 1, and i = 1, …, r, the relevant divided difference table can be constructed via (5), (6), and (9), and p(z) determined. See, for example, [5, pp. 51–59].
3 Divided difference formulas for α − x_{n+1}

Going back to (3) and (4), we realize that they can be rewritten in terms of divided differences, respectively, as in

$$x_{n+1} = x_n - \frac{f(x_n)}{f[x_n, c]} \qquad (12)$$

and

$$\alpha - x_{n+1} = \frac{f(x_n) + f[x_n, c](\alpha - x_n)}{f[x_n, c]}. \qquad (13)$$
Clearly, the numerator of the quotient on the right-hand side of (13) is related to the polynomial interpolating f(x) at the points x_n and c, and from (10) and (11), we have

$$f(x) = f(x_n) + f[x_n, c](x - x_n) + f[x_n, c, x](x - x_n)(x - c). \qquad (14)$$

Letting x = α in (14), and recalling that f(α) = 0, we obtain

$$f(x_n) + f[x_n, c](\alpha - x_n) = -f[x_n, c, \alpha](\alpha - x_n)(\alpha - c), \qquad (15)$$

and thus

$$\alpha - x_{n+1} = -\frac{f[x_n, c, \alpha]}{f[x_n, c]}\,(x_n - \alpha)(c - \alpha). \qquad (16)$$

Finally, we also recall that the arguments c and x_n above may coincide by the fact that divided differences are defined as limits in such a case.

We now specialize the result in (16) to the different methods mentioned above. We recall that f ∈ C²(I), where I is some closed interval containing α. No further differentiability properties are assumed for f(x) in our treatment.
3.1 Regula falsi method

In the regula falsi method, we start with two initial points, x_0 = c and x_1, such that f(c)f(x_1) < 0, so that f(x) = 0 has a solution between c and x_1. We assume that α is the unique solution to f(x) = 0 between c and x_1. The point x_2 is determined as in (3); that is, x_2 is the point of intersection of the straight line passing through (c, f(c)) and (x_1, f(x_1)) with the x-axis. If f(x_2) = 0, then α = x_2 and we stop. If f(c)f(x_2) < 0, then we leave c unchanged and continue to the next iteration; otherwise, we set c = x_1 and continue to the next iteration in the same way.

In case f′(x) and f″(x) have fixed signs in an interval containing α, which is the situation of interest to us here, the point c ultimately remains fixed. Therefore, in such a case, the regula falsi method becomes a fixed-point method at some point during the iteration process. Without loss of generality, we will assume that c = x_0 remains fixed.
Provided c, x_n ∈ I, by (7), the formula for x_{n+1} in (12) and the error formula (16) can be expressed as in

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(\eta_n)} \quad \text{and} \quad \alpha - x_{n+1} = -\frac{f''(\xi_n)(c - \alpha)}{2 f'(\eta_n)}\,(x_n - \alpha), \qquad \xi_n \in \mathrm{int}(x_n, c, \alpha), \quad \eta_n \in \mathrm{int}(x_n, c). \qquad (17)$$
If, in addition, lim_{n→∞} x_n = α holds, then (16) gives

$$\lim_{n\to\infty} \frac{\alpha - x_{n+1}}{\alpha - x_n} = \frac{f[c, \alpha, \alpha]}{f[c, \alpha]}\,(c - \alpha) = \frac{f''(\xi)(c - \alpha)}{2 f'(\eta)}, \quad \text{for some } \xi, \eta \in \mathrm{int}(c, \alpha),$$

and, as we show later, also

$$\lim_{n\to\infty} \frac{\alpha - x_{n+1}}{\alpha - x_n} = 1 - \frac{f'(\alpha)}{f[c, \alpha]},$$

which suggests that the convergence of {x_n} may be linear. This needs to be proved rigorously, however.
As already mentioned, the situation described here happens when f′(x) and f″(x) have fixed signs on I, for example, when f′(x) > 0 and f″(x) > 0 on I and f(c) > 0, f(x_1) < 0. We come back to this in Section 4, where we show that the regula falsi method converges strictly linearly in this case.
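The iteration just described can be sketched as follows. This is an illustrative Python fragment of ours (function names and stopping tests are our choices, not from the note); each new point comes from (3), and the endpoint whose sign matches the new point's is replaced:

```python
def regula_falsi(f, a, b, tol=1e-12, max_iter=200):
    """Regula falsi: keep a sign-changing pair (a, b) and cut it
    with the chord through (a, f(a)) and (b, f(b)), as in (3)."""
    fa, fb = f(a), f(b)
    if fa * fb >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    x = a
    for _ in range(max_iter):
        x = b - fb * (b - a) / (fb - fa)  # chord meets the x-axis
        fx = f(x)
        if fx == 0 or abs(fx) < tol:
            return x
        if fa * fx < 0:
            b, fb = x, fx   # root lies in (a, x)
        else:
            a, fa = x, fx   # root lies in (x, b)
    return x
```

For f(x) = x² − 2 on [1, 2], f″ > 0 and f(2) > 0, so the endpoint 2 stays fixed and the iterates increase toward √2, as described above.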
3.2 Secant method

In the secant method, we start with two initial points x_0 and x_1, and for computing x_{n+1} via (3), we set c = x_{n−1}; that is, (3) becomes

$$x_{n+1} = x_n - \frac{f(x_n)}{\dfrac{f(x_n) - f(x_{n-1})}{x_n - x_{n-1}}} = x_n - \frac{f(x_n)}{f[x_n, x_{n-1}]}.$$

The error formula (16) now is

$$\alpha - x_{n+1} = -\frac{f[x_n, x_{n-1}, \alpha]}{f[x_n, x_{n-1}]}\,(x_n - \alpha)(x_{n-1} - \alpha),$$
and, provided x_{n−1}, x_n ∈ I, by (7), it becomes

$$\alpha - x_{n+1} = -\frac{f''(\xi_n)}{2 f'(\eta_n)}\,(x_n - \alpha)(x_{n-1} - \alpha), \qquad \xi_n \in \mathrm{int}(x_n, x_{n-1}, \alpha), \quad \eta_n \in \mathrm{int}(x_n, x_{n-1}). \qquad (18)$$

In case f′(α) ≠ 0 and x_0 and x_1 are sufficiently close to α, we have lim_{n→∞} x_n = α by Section 4, and hence

$$\lim_{n\to\infty} \frac{\alpha - x_{n+1}}{(\alpha - x_n)(\alpha - x_{n-1})} = -\frac{f''(\alpha)}{2 f'(\alpha)}.$$

From this, one derives the conclusion that the order of convergence of the secant method is at least (1 + √5)/2.
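A minimal sketch of the secant iteration above (illustrative Python of ours, with our own naming and stopping rule):

```python
def secant(f, x0, x1, tol=1e-14, max_iter=100):
    """Secant method: x_{n+1} = x_n - f(x_n)/f[x_n, x_{n-1}]."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:          # divided difference undefined; give up
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
        if abs(x1 - x0) < tol:
            break
    return x1
```

Because the order is ≈ 1.618, far fewer iterations are needed than with regula falsi, at the same cost of one new f evaluation per step.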
3.3 Newton–Raphson method

In the Newton–Raphson method, we start with one initial point x_0, and for computing x_{n+1} via (3), we set c = x_n; that is, x_{n+1} is the point at which the tangent line to the function f(x) at x_n intersects the x-axis, and (3) becomes

$$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$

Hence, the error formula (16) now is

$$\alpha - x_{n+1} = -\frac{f[x_n, x_n, \alpha]}{f'(x_n)}\,(x_n - \alpha)^2,$$

which, provided x_n ∈ I, by (7), becomes

$$\alpha - x_{n+1} = -\frac{f''(\xi_n)}{2 f'(x_n)}\,(x_n - \alpha)^2, \qquad \xi_n \in \mathrm{int}(x_n, \alpha). \qquad (19)$$
In case f′(α) ≠ 0 and x_0 is sufficiently close to α, we have, by Section 4, lim_{n→∞} x_n = α, and hence

$$\lim_{n\to\infty} \frac{\alpha - x_{n+1}}{(\alpha - x_n)^2} = -\frac{f''(\alpha)}{2 f'(\alpha)}.$$

From this, one derives the conclusion that the order of convergence of the Newton–Raphson method is at least two.
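The Newton–Raphson iteration above admits an equally short sketch (illustrative Python of ours; the derivative is supplied by the caller):

```python
def newton_raphson(f, fprime, x0, tol=1e-14, max_iter=50):
    """Newton-Raphson: x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:   # quadratic convergence: step sizes collapse fast
            break
    return x
```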
3.4 Steffensen method

In the Steffensen method, we start with one initial point x_0, and for computing x_{n+1} via (3), we set c = x_n + f(x_n); that is, (3) becomes

$$x_{n+1} = x_n - \frac{f(x_n)}{\dfrac{f(x_n + f(x_n)) - f(x_n)}{f(x_n)}} = x_n - \frac{f(x_n)}{f[x_n, x_n + f(x_n)]}.$$
In this case, an error formula is a little more tricky to obtain. First, we note that

$$c - \alpha = x_n + f(x_n) - f(\alpha) - \alpha = \bigl(1 + f[x_n, \alpha]\bigr)(x_n - \alpha).$$

Thus, the error formula in (16) first becomes

$$\alpha - x_{n+1} = -\frac{f[x_n, x_n + f(x_n), \alpha]}{f[x_n, x_n + f(x_n)]}\,\bigl(1 + f[x_n, \alpha]\bigr)(x_n - \alpha)^2.$$

Next, provided x_n, x_n + f(x_n) ∈ I, by (7), this gives

$$\alpha - x_{n+1} = -\frac{f''(\xi_n)}{2 f'(\eta_n)}\,\bigl(1 + f'(\zeta_n)\bigr)(x_n - \alpha)^2, \qquad \xi_n \in \mathrm{int}(x_n, x_n + f(x_n), \alpha), \quad \eta_n \in \mathrm{int}(x_n, x_n + f(x_n)), \quad \zeta_n \in \mathrm{int}(x_n, \alpha). \qquad (20)$$
In case f′(α) ≠ 0 and x_0 is sufficiently close to α, by Section 4, we have lim_{n→∞} x_n = α, and hence

$$\lim_{n\to\infty} \frac{\alpha - x_{n+1}}{(\alpha - x_n)^2} = -\frac{f''(\alpha)}{2 f'(\alpha)}\,\bigl(1 + f'(\alpha)\bigr).$$

From this, one derives the conclusion that the order of convergence of the Steffensen method is at least two.
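A sketch of the Steffensen iteration of this subsection (illustrative Python of ours; note that only values of f are used, no derivatives):

```python
def steffensen(f, x0, tol=1e-14, max_iter=100):
    """Steffensen: the secant formula (3) with c = x_n + f(x_n),
    so the divided difference f[x_n, x_n + f(x_n)] replaces f'."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        denom = f(x + fx) - fx        # f(x_n + f(x_n)) - f(x_n)
        if denom == 0:
            break
        step = fx * fx / denom        # f(x_n) / f[x_n, x_n + f(x_n)]
        x -= step
        if abs(step) < tol:
            break
    return x
```

As with Newton–Raphson, the starting point must be close enough to the root; the method trades the derivative evaluation for a second evaluation of f per step.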
Note. The definition of the Steffensen method we have given here can be found in Atkinson [1], for example. There is another completely different approach to the Steffensen method that goes through fixed-point iteration and the Aitken Δ²-process that is used to accelerate linear convergence. In this approach, we start with the equation x = φ(x), whose solution we denote α again. Starting with x_0, an initial approximation to α, we perform the following steps:

Step 0. Set z_0 = x_0 and n = 0.

Step 1. Compute z_1 and z_2 via z_1 = φ(z_0) and z_2 = φ(z_1).

Step 2. Apply the Aitken Δ²-process to {z_0, z_1, z_2} to obtain x_{n+1}:

$$x_{n+1} = \frac{z_0 z_2 - z_1^2}{z_0 - 2 z_1 + z_2} = z_0 - \frac{(z_1 - z_0)^2}{z_0 - 2 z_1 + z_2}.$$

Step 3. Set z_0 = x_{n+1}, let n ← n + 1, and go to Step 1.
Let us set f(x) = φ(x) − x. Then α is the solution to f(x) = 0. Consequently, in terms of f(z), we have

$$z_1 = z_0 + f(z_0), \quad \text{hence also} \quad z_1 - z_0 = f(z_0),$$

and also

$$z_2 - z_1 = f(z_1) = f(z_0 + f(z_0)).$$

Therefore,

$$z_0 - 2 z_1 + z_2 = (z_2 - z_1) - (z_1 - z_0) = f(z_1) - f(z_0) = f(z_0 + f(z_0)) - f(z_0).$$
Combining all this in Step 2 above, we have

$$x_{n+1} = z_0 - \frac{[f(z_0)]^2}{f(z_0 + f(z_0)) - f(z_0)} = z_0 - \frac{f(z_0)}{\dfrac{f(z_0 + f(z_0)) - f(z_0)}{f(z_0)}}.$$

Recalling that z_0 = x_n in Step 2, we finally have

$$x_{n+1} = x_n - \frac{f(x_n)}{\dfrac{f(x_n + f(x_n)) - f(x_n)}{f(x_n)}},$$

which is what we had in the beginning of this subsection. The approach we have just described can be found in [5], for example. For the derivation and convergence acceleration properties of the Aitken Δ²-process, we refer the reader to Sidi [4, Chapter 15].

Note that, with f(x) = φ(x) − x, the condition that f′(α) ≠ 0 is the same as φ′(α) ≠ 1.
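Steps 0–3 above can be sketched as follows (illustrative Python of ours; the fixed-point map phi and the starting point are the caller's):

```python
import math

def steffensen_aitken(phi, x0, tol=1e-14, max_iter=100):
    """Steffensen via fixed-point iteration x = phi(x) plus the
    Aitken Delta^2 step, following Steps 0-3 above."""
    x = x0
    for _ in range(max_iter):
        z1 = phi(x)               # Step 1
        z2 = phi(z1)
        denom = z2 - 2.0 * z1 + x
        if denom == 0:
            break
        x_new = x - (z1 - x) ** 2 / denom   # Step 2 (Aitken Delta^2)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new                 # Step 3
    return x
```

For example, for φ(x) = cos x the plain iteration x_{n+1} = cos x_n converges only linearly, while this accelerated form reaches the fixed point α ≈ 0.739085 in a handful of steps, consistent with the algebraic equivalence to the Steffensen formula shown above.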
4 Completion of proofs of convergence

In this section, we show how the convergence proofs of the four methods above can be completed. The regula falsi method, with fixed c, has a special proof of its own. The remaining three methods can be shown to converge, in a unified manner, via the following simple and well-known result:

Lemma 4.1 Let the sequence {x_n}_{n=0}^∞ be such that

$$\alpha - x_{n+1} = C_n (\alpha - x_n), \qquad |C_n| \le C < 1 \quad \forall n.$$

Then, (i) |α − x_{n+1}| < |α − x_n|, (ii) x_n ∈ [α − δ, α + δ], where δ = |α − x_0|, (iii) |α − x_n| ≤ C^n |α − x_0|, and (iv) lim_{n→∞} x_n = α.

In relation to this lemma, we recall that if 0 < lim_{n→∞} |C_n| < 1, the sequence {x_n} converges linearly, whereas if lim_{n→∞} C_n = 0, it is said to converge superlinearly.
Let us recall our assumption that f ∈ C²(I), where I is a closed interval containing α in its interior. In the treatment of the secant, Newton–Raphson, and Steffensen methods, we assume further that f′(α) ≠ 0, and choose the interval I as I = [α − ρ, α + ρ] for some ρ > 0 such that f′(x) ≠ 0 on I. Therefore, for all x ∈ I, 0 < K ≤ |f′(x)| ≤ L and |f″(x)| ≤ M for some positive constants K, L, and M.
By (18), (19), and (20), the C_n = (α − x_{n+1})/(α − x_n) in the lemma relevant to the secant, Newton–Raphson, and Steffensen methods are then

$$C_n = \frac{f''(\xi_n)}{2 f'(\eta_n)}\,(x_{n-1} - \alpha), \quad \xi_n, \eta_n \in I, \ \text{if } x_{n-1}, x_n \in I. \quad \text{(secant)}$$

$$C_n = \frac{f''(\xi_n)}{2 f'(x_n)}\,(x_n - \alpha), \quad \xi_n \in I, \ \text{if } x_n \in I. \quad \text{(Newton–Raphson)}$$

$$C_n = \frac{f''(\xi_n)}{2 f'(\eta_n)}\,\bigl[1 + f'(\zeta_n)\bigr](x_n - \alpha), \quad \xi_n, \eta_n, \zeta_n \in I, \ \text{if } x_n, x_n + f(x_n) \in I. \quad \text{(Steffensen)}$$
Thus, by the fact that |f″(x)/f′(y)| ≤ M/K when x, y ∈ I, and letting Q = M/(2K), there holds

$$|C_n| \le Q\,|x_{n-1} - \alpha|, \quad \text{if } x_{n-1}, x_n \in I. \quad \text{(secant)}$$

$$|C_n| \le Q\,|x_n - \alpha|, \quad \text{if } x_n \in I. \quad \text{(Newton–Raphson)}$$

$$|C_n| \le Q\,(1 + L)\,|x_n - \alpha|, \quad \text{if } x_n, x_n + f(x_n) \in I. \quad \text{(Steffensen)}$$

We make use of these in the sequel. It is important to realize that, in order to be able to make use of these bounds on the |C_n|, we must show that x_{n−1}, x_n, x_n + f(x_n) ∈ I for the relevant methods.
1. Regula falsi method. Let us assume that c, x_1 ∈ I, f(c) > 0, f(x_1) < 0, f′(x) > 0, and f″(x) > 0 on I. From these and from (17), it follows that c remains fixed and x_1 < x_2 < ⋯ < α < c. That is, {x_n} is an increasing sequence bounded above by α, thus has a limit. By the continuity of f(x) and f′(x) and by the assumption that f′(x) > 0 on I, it follows from (12) that this limit is α, that is, lim_{n→∞} x_n = α. Now, by the fact that x_n < x_{n+1} < α, we already have 0 < C_n = (α − x_{n+1})/(α − x_n) < 1 for every n. To show that the convergence is linear, we must show that 0 < lim_{n→∞} C_n < 1. Let us recall that

$$C_n = \frac{\alpha - x_{n+1}}{\alpha - x_n} = \frac{f[x_n, c, \alpha]}{f[x_n, c]}\,(c - \alpha).$$

First, because x_n ∈ I for all n and lim_{n→∞} x_n = α ∈ I and because of our assumptions on f(x), we have that

$$\lim_{n\to\infty} C_n = \frac{f[\alpha, \alpha, c]}{f[\alpha, c]}\,(c - \alpha) = \frac{f''(\xi)(c - \alpha)}{2 f'(\eta)} > 0, \quad \text{for some } \xi, \eta \in \mathrm{int}(c, \alpha).$$

Next, by the recursion relations among the divided differences, there holds

$$f[\alpha, \alpha, c] = \frac{f[\alpha, c] - f[\alpha, \alpha]}{c - \alpha}.$$

Therefore,

$$\lim_{n\to\infty} C_n = \frac{f[\alpha, c] - f[\alpha, \alpha]}{f[\alpha, c]} = 1 - \frac{f[\alpha, \alpha]}{f[\alpha, c]} = 1 - \frac{f'(\alpha)}{f[\alpha, c]}.$$

It is now easy to see that the slope f[α, c] of the straight line through (α, f(α)) and (c, f(c)) and the slope f[α, α] = f′(α) of the tangent to f(x) at (α, f(α)) satisfy 0 < f[α, α] < f[α, c]. From this, we conclude that lim_{n→∞} C_n < 1. This completes the proof.

We have assumed in the treatment above that f′(x) > 0 and f″(x) > 0 on I. It is easy to see that the same technique applies to all cases in which f′(x) and f″(x) have fixed signs on I.
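The limit just computed can be checked numerically. The following illustrative Python experiment of ours uses f(x) = x² − 2 on [1, 2], which satisfies f′ > 0, f″ > 0, and f(2) > 0, so the endpoint c = 2 stays fixed:

```python
import math

def f(x):
    return x * x - 2.0

alpha, c = math.sqrt(2.0), 2.0  # the root and the fixed endpoint

x = 1.0
ratios = []
for _ in range(10):
    x_next = x - f(x) * (x - c) / (f(x) - f(c))  # regula falsi step (3)
    ratios.append((alpha - x_next) / (alpha - x))
    x = x_next

# Predicted limit 1 - f'(alpha)/f[alpha, c], with f'(alpha) = 2*alpha
# and f[alpha, c] = (f(c) - f(alpha))/(c - alpha):
predicted = 1.0 - 2.0 * alpha / ((f(c) - f(alpha)) / (c - alpha))
```

Here `predicted` equals 3 − 2√2 ≈ 0.1716, and the observed ratios C_n settle on that value, confirming strictly linear convergence.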
2. Secant method. We choose x_0 and x_1 in I sufficiently close to α to ensure that the inequality

$$C \equiv Q \max\{|x_0 - \alpha|, |x_1 - \alpha|\} < 1$$

holds. Consequently, |C_1| ≤ C. By |x_2 − α| = |C_1| |x_1 − α|, this implies that x_2 ∈ I and |x_2 − α| < |x_1 − α|. In addition,

$$|C_2| \le Q\,|x_2 - \alpha| < Q\,|x_1 - \alpha| \le C,$$

that is, |C_2| ≤ C as well. We can now continue by induction and show that

$$x_n \in I, \quad |x_n - \alpha| < |x_{n-1} - \alpha|, \quad \text{and} \quad |C_n| \le C \quad \forall n.$$

Lemma 4.1 now applies. This completes the proof of convergence for the secant method.
3. Newton–Raphson method. We choose x_0 sufficiently close to α to ensure that the inequality

$$C \equiv Q\,|x_0 - \alpha| < 1$$

holds. Consequently, |C_0| ≤ C. By |x_1 − α| = |C_0| |x_0 − α|, this implies that x_1 ∈ I and |x_1 − α| < |x_0 − α|. In addition,

$$|C_1| \le Q\,|x_1 - \alpha| < Q\,|x_0 - \alpha| = C,$$

that is, |C_1| ≤ C too. We can now continue by induction and show that

$$x_n \in I, \quad |x_n - \alpha| < |x_{n-1} - \alpha|, \quad \text{and} \quad |C_n| \le C \quad \forall n.$$

Lemma 4.1 now applies. This completes the proof of convergence for the Newton–Raphson method.
4. Steffensen method. First, let us observe that

$$x + f(x) - \alpha = x - \alpha + f(x) - f(\alpha) = \bigl[1 + f'(\zeta(x))\bigr](x - \alpha)$$

for some ζ(x) ∈ int(x, α), provided x ∈ I. From this, it is clear that

$$|x + f(x) - \alpha| \le (1 + L)\,|x - \alpha| \quad \text{provided } x \in I.$$

Now choose x_0 ∈ I such that |x_0 − α| ≤ ρ/(1 + L), which guarantees that x_0 + f(x_0) ∈ I because |x_0 + f(x_0) − α| ≤ (1 + L)|x_0 − α| ≤ ρ. Next, let us restrict x_0 further and choose it sufficiently close to α to ensure that the inequality

$$Q\,(1 + L)^2\,|x_0 - \alpha| < 1$$

holds. From this, we also have that the inequalities

$$C \equiv Q\,(1 + L)\,|x_0 - \alpha| < 1 \quad \text{and} \quad C\,(1 + L) < 1$$

hold as well. Now, |C_0| ≤ C, which, by |x_1 − α| = |C_0| |x_0 − α|, implies that |x_1 − α| ≤ C |x_0 − α|, hence x_1 ∈ I and |x_1 − α| < |x_0 − α|. Therefore,

$$|x_1 + f(x_1) - \alpha| \le (1 + L)\,|x_1 - \alpha| \le C\,(1 + L)\,|x_0 - \alpha| < \rho,$$

that is, x_1 + f(x_1) ∈ I too. In addition,

$$|C_1| \le Q\,(1 + L)\,|x_1 - \alpha| < Q\,(1 + L)\,|x_0 - \alpha| = C,$$

that is, |C_1| ≤ C too. Continuing by induction, we can now show that

$$x_n, \ x_n + f(x_n) \in I, \quad |x_n - \alpha| < |x_{n-1} - \alpha|, \quad \text{and} \quad |C_n| \le C \quad \forall n.$$

Lemma 4.1 now applies. This completes the proof of convergence for the Steffensen method.
Acknowledgement
The author wishes to thank Miss Enav Alcobi for producing the graphs in Section 1.
References
[1] K.E. Atkinson. An Introduction to Numerical Analysis. Wiley, New York, second edition,
1989.
[2] P. Henrici. Elements of Numerical Analysis. Wiley, New York, 1964.
[3] A. Ralston and P. Rabinowitz. A First Course in Numerical Analysis. McGraw-Hill,
New York, second edition, 1978.
[4] A. Sidi. Practical Extrapolation Methods: Theory and Applications. Number 10 in Cam-
bridge Monographs on Applied and Computational Mathematics. Cambridge University
Press, Cambridge, 2003.
[5] J. Stoer and R. Bulirsch. Introduction to Numerical Analysis. Springer-Verlag, New
York, third edition, 2002.