
NUMERICAL MATHEMATICS & COMPUTING

7th Edition

© Ward Cheney / David Kincaid
UT Austin
Cengage Learning: Thomson-Brooks/Cole
www.cengage.com
www.ma.utexas.edu/CNA/NMC6

October 25, 2011


Iterative Solutions of Linear Systems

A completely different strategy for solving a nonsingular linear system

    Ax = b        (1)

is explored.

This alternative approach is often used on enormous problems that arise in solving partial differential equations numerically. In that subject, systems having hundreds of thousands of equations arise routinely.


Vector and Matrix Norms

We first present a brief overview of vector and matrix norms because they are useful in the discussion of errors and in the stopping criteria for iterative methods.

Norms can be defined on any vector space, but we usually use R^n or C^n.

A vector norm ||x|| can be thought of as the length or magnitude of a vector x ∈ R^n.


A vector norm is any mapping from R^n to R that obeys these three properties:

    ||x|| > 0 if x ≠ 0
    ||αx|| = |α| ||x||
    ||x + y|| ≤ ||x|| + ||y||        (triangle inequality)

for vectors x, y ∈ R^n and scalars α ∈ R.


Examples of vector norms for the vector x = (x_1, x_2, . . . , x_n)^T ∈ R^n are

    ||x||_1 = Σ_{i=1}^{n} |x_i|                    (ℓ1 vector norm)

    ||x||_2 = ( Σ_{i=1}^{n} x_i^2 )^{1/2}          (Euclidean/ℓ2 vector norm)

    ||x||_∞ = max_{1≤i≤n} |x_i|                    (ℓ∞ vector norm)
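These three norms are easy to check numerically. The following sketch uses NumPy (an assumption on my part, since the text's own examples use MATLAB); the vector is an arbitrary illustration:

```python
import numpy as np

x = np.array([3.0, -4.0])

n1 = np.sum(np.abs(x))        # l1 norm: sum of absolute values
n2 = np.sqrt(np.sum(x**2))    # l2 (Euclidean) norm: root of sum of squares
ninf = np.max(np.abs(x))      # l-infinity norm: largest absolute component

# NumPy's built-in vector norms agree with the definitions
assert n1 == np.linalg.norm(x, 1)
assert n2 == np.linalg.norm(x, 2)
assert ninf == np.linalg.norm(x, np.inf)
print(n1, n2, ninf)   # 7.0 5.0 4.0
```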


For n × n matrices, we can also have matrix norms, subject to the same requirements:

    ||A|| > 0 if A ≠ 0
    ||αA|| = |α| ||A||
    ||A + B|| ≤ ||A|| + ||B||        (triangle inequality)

for matrices A, B and scalars α.


We usually prefer matrix norms that are related to a vector norm.

For a vector norm || · ||, the subordinate matrix norm is defined by

    ||A|| ≡ sup { ||Ax|| : x ∈ R^n and ||x|| = 1 }

Here, A is an n × n matrix.


For a subordinate matrix norm, some additional properties are

    ||I|| = 1
    ||Ax|| ≤ ||A|| ||x||
    ||AB|| ≤ ||A|| ||B||

There are two meanings associated with the notation || · ||_p, one for vectors and another for matrices. The context will determine which one is intended.


Examples of subordinate matrix norms for an n × n matrix A are

    ||A||_1 = max_{1≤j≤n} Σ_{i=1}^{n} |a_ij|       (ℓ1 matrix norm)

    ||A||_2 = max_{1≤i≤n} σ_i                      (spectral/ℓ2 matrix norm)

    ||A||_∞ = max_{1≤i≤n} Σ_{j=1}^{n} |a_ij|       (ℓ∞ matrix norm)

Here, σ_i = √λ_i, where the λ_i are the eigenvalues of A^T A; the σ_i are called the singular values of A.

The largest eigenvalue in absolute value, |λ_max|, is termed the spectral radius of A.
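The ℓ1 norm is the maximum absolute column sum and the ℓ∞ norm the maximum absolute row sum, which a short NumPy check makes concrete (the 2 × 2 matrix is an arbitrary illustration):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

n1 = np.max(np.sum(np.abs(A), axis=0))      # l1: max absolute column sum
ninf = np.max(np.sum(np.abs(A), axis=1))    # l-infinity: max absolute row sum
# spectral (l2) norm: largest singular value,
# i.e. sqrt of the largest eigenvalue of A^T A
n2 = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))

# NumPy's built-in matrix norms agree with the definitions
assert n1 == np.linalg.norm(A, 1)
assert ninf == np.linalg.norm(A, np.inf)
assert abs(n2 - np.linalg.norm(A, 2)) < 1e-12
```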


Condition Number and Ill-Conditioning

An important quantity that has some influence in the numerical solution of a linear system Ax = b is the condition number, which is defined as

    κ(A) = ||A||_2 ||A^{−1}||_2

It turns out that it is not necessary to compute the inverse of A to obtain an estimate of the condition number. Also, it can be shown that the condition number κ(A) gauges the transfer of error from the matrix A and the vector b to the solution x.


The rule of thumb is that if κ(A) = 10^k, then one can expect to lose at least k digits of precision in solving the system Ax = b.

If the linear system is sensitive to perturbations in the elements of A, or to perturbations of the components of b, then this fact is reflected in A having a large condition number. In such a case, the matrix A is said to be ill-conditioned. Briefly, the larger the condition number, the more ill-conditioned the system.


Suppose we want to solve an invertible linear system of equations Ax = b for a given coefficient matrix A and right-hand side b, but there may have been perturbations of the data owing to uncertainty in the measurements and roundoff errors in the calculations.

Suppose that the right-hand side is perturbed by an amount δb and the corresponding solution is perturbed by an amount δx.


Then we have

    A(x + δx) = Ax + A δx = b + δb

where

    A δx = δb

From the original linear system Ax = b and norms, we have

    ||b|| = ||Ax|| ≤ ||A|| ||x||

which gives us

    1/||x|| ≤ ||A|| / ||b||


From the perturbed linear system A δx = δb, we obtain δx = A^{−1} δb and

    ||δx|| ≤ ||A^{−1}|| ||δb||

Combining the two inequalities above, we obtain

    ||δx|| / ||x|| ≤ κ(A) ||δb|| / ||b||

which contains the condition number of the original matrix A.
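This bound can be observed numerically. The sketch below perturbs the right-hand side of a nearly singular system; the particular matrix and perturbation are illustrative choices, not from the text:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])   # nearly singular, so kappa(A) is large
b = np.array([2.0, 2.0001])
db = np.array([0.0, 1e-4])      # small perturbation of the right-hand side

x  = np.linalg.solve(A, b)
dx = np.linalg.solve(A, b + db) - x

kappa = np.linalg.cond(A)       # 2-norm condition number
lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = kappa * np.linalg.norm(db) / np.linalg.norm(b)

# relative change in x never exceeds kappa(A) times the relative change in b,
# and here a tiny db produces a large dx
assert lhs <= rhs
```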


As an example of an ill-conditioned matrix, consider the Hilbert matrix

    H_3 = [  1   1/2  1/3
            1/2  1/3  1/4
            1/3  1/4  1/5 ]

We can use MATLAB commands to generate the matrix and then to compute both the condition number using the 2-norm and the determinant of the matrix. We find the condition number to be

    κ(A) = 524.0568

and the determinant to be

    det(A) = 4.6296 × 10^{−4}
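The text uses MATLAB; the same check can be done in NumPy (the tooling is my substitution, the values are those quoted above):

```python
import numpy as np

# the 3x3 Hilbert matrix: H[i][j] = 1/(i + j + 1) with 0-based indices
H = np.array([[1.0 / (i + j + 1) for j in range(3)] for i in range(3)])

kappa = np.linalg.cond(H, 2)   # condition number in the 2-norm
det = np.linalg.det(H)

print(round(kappa, 4))   # 524.0568
print(det)               # about 4.6296e-04 (the exact value is 1/2160)
```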


In solving linear systems, the condition number of the coefficient matrix measures the sensitivity of the system to errors in the data. When the condition number is large, the computed solution of the system may be dangerously in error! Further checks should be made before accepting the solution as being accurate.

Values of the condition number near 1 indicate a well-conditioned matrix, whereas large values indicate an ill-conditioned matrix. Using the determinant to check for singularity is appropriate only for matrices of modest size. Using mathematical software, one can compute the condition number to check for singular or near-singular matrices.


A goal in the study of numerical methods is to acquire an awareness of whether a numerical result can be trusted or whether it may be suspect (and therefore in need of further analysis). The condition number provides some evidence regarding this question.

With the advent of sophisticated mathematical software systems such as MATLAB and others, an estimate of the condition number is often available, along with an approximate solution, so that one can judge the trustworthiness of the results. In fact, some solution procedures involve advanced features that depend on an estimated condition number and may switch solution techniques based on it.


For example, this criterion may result in a switch of the solution technique from a variant of Gaussian elimination to a least-squares solution for an ill-conditioned system. Unsuspecting users may not realize that this has happened unless they look at all of the results, including the estimate of the condition number.

Condition numbers can also be associated with other numerical problems, such as locating roots of equations.


Basic Iterative Methods

The iterative-method strategy produces a sequence of approximate solution vectors x^(0), x^(1), x^(2), . . . for the system Ax = b. The numerical procedure is designed so that, in principle, the sequence of vectors converges to the actual solution. The process can be stopped when sufficient precision has been attained.

This stands in contrast to the Gaussian elimination algorithm, which has no provision for stopping midway and offering up an approximate solution.


A general iterative algorithm for solving System (1) goes as follows: Select a nonsingular matrix Q, and having chosen an arbitrary starting vector x^(0), generate vectors x^(1), x^(2), . . . recursively from the equation

    Q x^(k) = (Q − A) x^(k−1) + b        (k = 1, 2, . . .)        (2)

To see that this is sensible, suppose that the sequence x^(k) does converge, to a vector x*, say.


Then by taking the limit as k → ∞ in System (2), we get

    Q x* = (Q − A) x* + b

This leads to A x* = b. Thus, if the sequence converges, its limit is a solution to the original System (1).

For example, the Richardson iteration uses Q = I.


An outline of the pseudocode for carrying out the general iterative procedure (2) follows:

    integer k, kmax
    real array (x^(0))_{1:n}, (b)_{1:n}, (c)_{1:n}, (x)_{1:n}, (y)_{1:n}, (A)_{1:n×1:n}, (Q)_{1:n×1:n}
    x ← x^(0)
    for k = 1 to kmax
        y ← x
        c ← (Q − A)x + b
        solve Qx = c
        output k, x
        if ||x − y|| < ε then
            output "convergence"
            stop
        end if
    end for
    output "maximum iteration reached"
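A NumPy rendering of this pseudocode might look like the sketch below. The stopping tolerance and the 3 × 3 test system are illustrative choices (the system is the one used in the worked examples), and Q is taken to be the diagonal of A, i.e. the Jacobi choice:

```python
import numpy as np

def iterative_solve(A, b, Q, x0, kmax=100, eps=1e-10):
    """Generate x(k) from Q x(k) = (Q - A) x(k-1) + b until the
    step size drops below eps; return (x, number of iterations)."""
    x = x0.astype(float)
    for k in range(1, kmax + 1):
        y = x
        c = (Q - A) @ x + b
        x = np.linalg.solve(Q, c)   # the "solve Qx = c" step; no Q^{-1} formed
        if np.linalg.norm(x - y, np.inf) < eps:
            return x, k
    return x, kmax

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 3.0, 1.0],
              [0.0,  1.0, 2.0]])
b = np.array([1.0, 8.0, 5.0])
Q = np.diag(np.diag(A))            # Jacobi choice of Q

x, k = iterative_solve(A, b, Q, np.zeros(3))
print(x)   # close to [2, 3, 1]
```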


In choosing the nonsingular matrix Q, we are influenced by the following considerations:

System (2) should be easy to solve for x^(k) when the right-hand side is known.

Matrix Q should be chosen to ensure that the sequence x^(k) converges, no matter what initial vector is used. Ideally, this convergence will be rapid.


One should not believe that it is necessary to compute the inverse of Q to carry out an iterative procedure. For small systems, we can easily compute the inverse of Q, but in general, this is definitely not to be done! We want to solve a linear system in which Q is the coefficient matrix.

As was mentioned previously, we want to select Q so that a linear system with Q as the coefficient matrix is easy to solve. Examples of such matrices are diagonal, tridiagonal, banded, lower triangular, and upper triangular.


Now, let us view System (1) in its detailed form

    Σ_{j=1}^{n} a_ij x_j = b_i        (1 ≤ i ≤ n)        (3)

Solving the ith equation for the ith unknown, we obtain an equation that describes the Jacobi method:

    x_i^(k) = − Σ_{j=1, j≠i}^{n} (a_ij / a_ii) x_j^(k−1) + (b_i / a_ii)        (1 ≤ i ≤ n)        (4)

Here, we assume that all diagonal elements are nonzero. If this is not the case, we can usually rearrange the equations so that it is.


In the Jacobi method above, the equations are solved in order. As each new value x_j^(k) becomes available, it can be used immediately in place of the old value x_j^(k−1).

If this is done, we have the Gauss-Seidel method:

    x_i^(k) = − Σ_{j=1}^{i−1} (a_ij / a_ii) x_j^(k) − Σ_{j=i+1}^{n} (a_ij / a_ii) x_j^(k−1) + (b_i / a_ii)        (5)


If x^(k−1) is not saved, then we can dispense with the superscripts in the pseudocode as follows:

    integer i, j, k, kmax, n
    real array (a_ij)_{1:n×1:n}, (b_i)_{1:n}, (x_i)_{1:n}
    for k = 1 to kmax
        for i = 1 to n
            x_i ← [ b_i − Σ_{j=1, j≠i}^{n} a_ij x_j ] / a_ii
        end for
    end for


An acceleration of the Gauss-Seidel method is possible by the introduction of a relaxation factor ω, resulting in the successive overrelaxation (SOR) method:

    x_i^(k) = ω { − Σ_{j=1}^{i−1} (a_ij / a_ii) x_j^(k) − Σ_{j=i+1}^{n} (a_ij / a_ii) x_j^(k−1) + (b_i / a_ii) } + (1 − ω) x_i^(k−1)        (6)

The SOR method with ω = 1 reduces to the Gauss-Seidel method.


Jacobi iteration

We now consider numerical examples using iterative methods associated with the names Jacobi, Gauss-Seidel, and successive overrelaxation.

Example
Let

    A = [  2  −1   0
          −1   3   1 ],        b = [ 1, 8, 5 ]^T
           0   1   2

Carry out a number of iterations of the Jacobi iteration, starting with the zero initial vector.


Rewriting the equations, we have the Jacobi method:

    x_1^(k) =  (1/2) x_2^(k−1)                    + 1/2
    x_2^(k) =  (1/3) x_1^(k−1) − (1/3) x_3^(k−1)  + 8/3
    x_3^(k) = −(1/2) x_2^(k−1)                    + 5/2

Taking the initial vector to be x^(0) = [0, 0, 0]^T, we find (with the aid of a computer program or a programmable calculator) that

    x^(0) = [0, 0, 0]^T
    x^(1) = [0.5000, 2.6667, 2.5000]^T
    x^(2) = [1.8333, 2.0000, 1.1667]^T
    ...
    x^(21) = [2.0000, 3.0000, 1.0000]^T

The actual solution (to four decimal places rounded) is obtained.
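The iterates above are easy to reproduce; here is a NumPy sketch of the three update formulas (the matrix and right-hand side are those of the example):

```python
import numpy as np

iterates = [np.zeros(3)]
for k in range(21):
    x1, x2, x3 = iterates[-1]
    iterates.append(np.array([
        0.5 * x2 + 0.5,            # x1 <- (1/2) x2 + 1/2
        x1 / 3 - x3 / 3 + 8 / 3,   # x2 <- (1/3) x1 - (1/3) x3 + 8/3
        -0.5 * x2 + 2.5,           # x3 <- -(1/2) x2 + 5/2
    ]))

print(np.round(iterates[1], 4))    # x(1)  = [0.5, 2.6667, 2.5]
print(np.round(iterates[21], 4))   # x(21) = [2, 3, 1]
```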


In the Jacobi iteration, Q is taken to be the diagonal of A:

    Q = [ 2  0  0
          0  3  0
          0  0  2 ]

Now

    Q^{−1} = [ 1/2   0    0                Q^{−1} A = [   1   −1/2    0
                0   1/3   0  ],                         −1/3    1    1/3
                0    0   1/2 ]                            0    1/2    1  ]

The Jacobi iterative matrix and constant vector are

    B = I − Q^{−1} A = [   0   1/2    0
                          1/3    0  −1/3  ],        h = Q^{−1} b = [ 1/2, 8/3, 5/2 ]^T
                           0  −1/2    0


One can see that Q is close to A, Q^{−1} A is close to I, and I − Q^{−1} A is small.

We write the Jacobi method as

    x^(k) = B x^(k−1) + h
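Convergence of the fixed-point form x(k) = B x(k−1) + h is governed by the spectral radius of B, a point worth checking numerically; a sketch using NumPy (the splitting is the Jacobi one computed above):

```python
import numpy as np

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 3.0, 1.0],
              [0.0,  1.0, 2.0]])
Q = np.diag(np.diag(A))                  # Jacobi splitting
B = np.eye(3) - np.linalg.solve(Q, A)    # B = I - Q^{-1} A, without forming Q^{-1}
h = np.linalg.solve(Q, np.array([1.0, 8.0, 5.0]))

rho = np.max(np.abs(np.linalg.eigvals(B)))
print(rho)   # about 0.5774 = 1/sqrt(3); since rho < 1, the Jacobi iteration converges
```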


Gauss-Seidel iteration

Example
Repeat the preceding example using the Gauss-Seidel iteration.

The idea of the Gauss-Seidel iteration is simply to accelerate the convergence by incorporating each new value as soon as it has been computed. Obviously, it would be more efficient in the Jacobi method to use the updated value x_1^(k) in the second equation instead of the old value x_1^(k−1). Similarly, x_2^(k) could be used in the third equation in place of x_2^(k−1).


Using the new iterates as soon as they become available, we have the Gauss-Seidel method:

    x_1^(k) =  (1/2) x_2^(k−1)                  + 1/2
    x_2^(k) =  (1/3) x_1^(k) − (1/3) x_3^(k−1)  + 8/3
    x_3^(k) = −(1/2) x_2^(k)                    + 5/2

Starting with the initial vector zero, some of the iterates are

    x^(0) = [0, 0, 0]^T
    x^(1) = [0.5000, 2.8333, 1.0833]^T
    x^(2) = [1.9167, 2.9444, 1.0278]^T
    ...
    x^(9) = [2.0000, 3.0000, 1.0000]^T

In this example, the convergence of the Gauss-Seidel method is approximately twice as fast as that of the Jacobi method.
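As with Jacobi, these iterates can be reproduced directly from the update formulas; in the NumPy sketch below, each component is overwritten in place so the newest values are used immediately:

```python
import numpy as np

x = np.zeros(3)
history = [x.copy()]
for k in range(9):
    x[0] = 0.5 * x[1] + 0.5                # uses old x2
    x[1] = x[0] / 3 - x[2] / 3 + 8 / 3     # uses NEW x1, old x3
    x[2] = -0.5 * x[1] + 2.5               # uses NEW x2
    history.append(x.copy())

print(np.round(history[1], 4))   # x(1) = [0.5, 2.8333, 1.0833]
print(np.round(history[9], 4))   # x(9) = [2, 3, 1]
```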


In the iterative algorithm that goes by the name Gauss-Seidel, Q is chosen as the lower triangular part of A, including the diagonal. Using the data from the previous example, we now find that

    Q = [  2  0  0
          −1  3  0
           0  1  2 ]

The usual row operations give us

    Q^{−1} = [   1/2    0    0              Q^{−1} A = [ 1  −1/2    0
                 1/6   1/3   0  ],                       0   5/6   1/3
               −1/12  −1/6  1/2 ]                        0  1/12   5/6 ]


Again, we emphasize that in a practical problem we would not compute Q^{−1}.

The Gauss-Seidel iterative matrix and constant vector are

    L = I − Q^{−1} A = [ 0   1/2     0
                         0   1/6   −1/3  ],        h = Q^{−1} b = [ 1/2, 17/6, 13/12 ]^T
                         0  −1/12   1/6

We write the Gauss-Seidel method as

    x^(k) = L x^(k−1) + h


SOR iteration

Example
Repeat the preceding example using the SOR iteration with ω = 1.1.

Introducing a relaxation factor ω into the Gauss-Seidel method, we have the SOR method:

    x_1^(k) = ω [  (1/2) x_2^(k−1)                  + 1/2 ] + (1 − ω) x_1^(k−1)
    x_2^(k) = ω [  (1/3) x_1^(k) − (1/3) x_3^(k−1)  + 8/3 ] + (1 − ω) x_2^(k−1)
    x_3^(k) = ω [ −(1/2) x_2^(k)                    + 5/2 ] + (1 − ω) x_3^(k−1)


Starting with the initial vector of zeros and with ω = 1.1, some of the iterates are

    x^(0) = [0, 0, 0]^T
    x^(1) = [0.5500, 3.1350, 1.0257]^T
    x^(2) = [2.2193, 3.0574, 0.9658]^T
    ...
    x^(7) = [2.0000, 3.0000, 1.0000]^T

In this example, the convergence of the SOR method is faster than that of the Gauss-Seidel method.


In the iterative algorithm that goes by the name successive overrelaxation (SOR), Q is chosen as the lower triangular part of A including the diagonal, but each diagonal element a_ii is replaced by a_ii/ω, where ω is the so-called relaxation factor.


From the previous example, this means that

    Q = [ 20/11    0     0
           −1    30/11   0
            0      1   20/11 ]

Now

    Q^{−1} = [    11/20       0      0
                 121/600    11/30    0    ],
              −1331/12000 −121/600 11/20

    Q^{−1} A = [  11/10     −11/20       0
                  11/300    539/600    11/30   ]
                −121/6000  671/12000  539/600


The SOR iterative matrix and constant vector are

    L_ω = I − Q^{−1} A = [  −1/10      11/20      0
                           −11/300    61/600   −11/30  ],        h = Q^{−1} b = [ 11/20, 627/200, 4103/4000 ]^T
                           121/6000 −671/12000  61/600

We write the SOR method as

    x^(k) = L_ω x^(k−1) + h


Pseudocode

We can write pseudocode for the Jacobi, Gauss-Seidel, and SOR methods, assuming that the linear system (1) is stored in matrix-vector form:

    procedure Jacobi(A, b, x)
    integer i, j, k, kmax, n;  real diag, sum
    real kmax ← 100, δ ← 10^{−10}, ε ← (1/2) × 10^{−4}
    real array (A)_{1:n×1:n}, (b)_{1:n}, (x)_{1:n}, (y)_{1:n}
    n ← size(A)


    for k = 1 to kmax
        y ← x
        for i = 1 to n
            sum ← b_i
            diag ← a_ii
            if |diag| < δ then
                output "diagonal element too small"
                return
            end if
            for j = 1 to n
                if j ≠ i then
                    sum ← sum − a_ij y_j
                end if
            end for
            x_i ← sum/diag
        end for
        output k, x


        if ||x − y|| < ε then
            output k, x
            return
        end if
    end for
    output "maximum iterations reached"
    return
    end Jacobi


Here, the vector y contains the old iterate values, and the vector x contains the updated ones. The values of kmax, δ, and ε are set either in a parameter statement or as global variables.


The pseudocode for the procedure Gauss_Seidel(A, b, x) would be the same as that for the Jacobi pseudocode above, except that the innermost j-loop would be replaced by the following:

    for j = 1 to i − 1
        sum ← sum − a_ij x_j
    end for
    for j = i + 1 to n
        sum ← sum − a_ij x_j
    end for


The pseudocode for procedure SOR(A, b, x, ω) would be the same as that for the Gauss-Seidel pseudocode, with the statement following the j-loop replaced by the following:

    x_i ← sum/diag
    x_i ← ω x_i + (1 − ω) y_i
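Putting the pseudocode fragments together, a complete SOR routine in NumPy might look like the sketch below (the function name and default tolerances are illustrative; ω = 1 reduces it to Gauss-Seidel):

```python
import numpy as np

def sor(A, b, omega=1.0, kmax=100, delta=1e-10, eps=0.5e-4):
    """SOR iteration on Ax = b; omega = 1 gives Gauss-Seidel."""
    n = len(b)
    x = np.zeros(n)
    for k in range(1, kmax + 1):
        y = x.copy()                      # old iterate, for the update and the test
        for i in range(n):
            diag = A[i, i]
            if abs(diag) < delta:
                raise ZeroDivisionError("diagonal element too small")
            # sum <- b_i minus the off-diagonal terms, using current x values
            s = b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]
            x[i] = omega * (s / diag) + (1 - omega) * y[i]
        if np.linalg.norm(x - y, np.inf) < eps:
            return x, k
    return x, kmax

A = np.array([[2.0, -1.0, 0.0],
              [-1.0, 3.0, 1.0],
              [0.0,  1.0, 2.0]])
b = np.array([1.0, 8.0, 5.0])

x_gs, k_gs = sor(A, b, omega=1.0)
x_sor, k_sor = sor(A, b, omega=1.1)
print(k_gs, k_sor)   # on this system, omega = 1.1 needs fewer sweeps
```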


In the solution of partial differential equations, iterative methods are frequently used to solve large sparse linear systems, which often have special structures. The partial derivatives are approximated by stencils composed of relatively few points, such as 5, 7, or 9. This leads to only a few nonzero entries per row in the linear system.

In such systems, the coefficient matrix A is usually not stored, since the matrix-vector product can be written directly in the code. See Chapter 12 for additional details on this and how it is related to solving elliptic partial differential equations.
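As a small illustration of the matrix-free idea, here is a Jacobi sweep for the 1D three-point stencil (−1, 2, −1) that never forms A; the stencil, problem size, and right-hand side are illustrative choices, not from the text:

```python
import numpy as np

n = 10
b = np.ones(n)

# Jacobi sweep for the tridiagonal system with stencil (-1, 2, -1):
# x_i <- (b_i + x_{i-1} + x_{i+1}) / 2, so row i reads only its two
# neighbors and the matrix is never stored.
x = np.zeros(n)
for k in range(2000):
    left = np.concatenate(([0.0], x[:-1]))    # x[i-1], zero at the boundary
    right = np.concatenate((x[1:], [0.0]))    # x[i+1], zero at the boundary
    x = (b + left + right) / 2.0

# compare against a dense solve of the same system
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
err = np.max(np.abs(x - np.linalg.solve(A, b)))
print(err)   # essentially zero after enough sweeps
```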

