
How to represent a system of linear equations as a matrix

[A]{X} = {C}

where {X} and {C} are both column vectors
0.3x1 + 0.52x2 + x3 = -0.01
0.5x1 + x2 + 1.9x3 = 0.67
0.1x1 + 0.3x2 + 0.5x3 = -0.44

[A]{X} = {C}

[0.3  0.52  1  ] {x1}   {-0.01}
[0.5  1     1.9] {x2} = { 0.67}
[0.1  0.3   0.5] {x3}   {-0.44}
How to solve a system of linear equations
• Graphical method
• Matrix inversion (adjoint formula for A^-1)
• Cramer's rule
• Gauss elimination
• LU decomposition
• Gauss-Seidel
Graphical Method
2 equations, 2 unknowns

a11x1 + a12x2 = c1
a21x1 + a22x2 = c2

Solve each equation for x2:

x2 = -(a11/a12)x1 + c1/a12
x2 = -(a21/a22)x1 + c2/a22

[Figure: both lines plotted in the (x1, x2) plane; the
intersection point (x1, x2) is the solution.]
Example:

3x1 + 2x2 = 18
-x1 + 2x2 = 2

x2 = -(3/2)x1 + 9
x2 = (1/2)x1 + 1

[Figure: the two lines intersect at the solution (4, 3).]

Check: 3(4) + 2(3) = 12 + 6 = 18
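As a numerical cross-check of this example (a minimal numpy sketch added to this summary, not part of the original slides):

import numpy as np

# Coefficient matrix and right-hand side for
#   3*x1 + 2*x2 = 18
#  -1*x1 + 2*x2 = 2
A = np.array([[3.0, 2.0],
              [-1.0, 2.0]])
c = np.array([18.0, 2.0])

x = np.linalg.solve(A, c)
print(x)  # expected: [4. 3.], the intersection point (4, 3)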
Special Cases
• No solution
• Infinite solutions
• Ill-conditioned

[Figure: pairs of lines in the (x1, x2) plane illustrating each case.]
a) No solution - the lines have the same slope

b) Infinite solutions - the equations describe the same line:
   -1/2 x1 + x2 = 1
   -x1 + 2x2 = 2

c) Ill-conditioned - the slopes are so close that the point of
   intersection is difficult to detect visually

[Figure: f(x) vs. x sketch of each case.]
Let's consider how we know whether a system is
ill-conditioned. Start by considering systems
where the slopes are identical.
• If the determinant is zero, the slopes are identical

a11x1 + a12x2 = c1
a21x1 + a22x2 = c2

Rearrange these equations so that we have an
alternative version in the form of a straight
line: i.e. x2 = (slope)x1 + intercept
x2 = -(a11/a12)x1 + c1/a12
x2 = -(a21/a22)x1 + c2/a22

If the slopes are nearly equal (ill-conditioned):

a11/a12 ≈ a21/a22        Isn't this the determinant?

a11a22 ≈ a21a12

a11a22 - a21a12 ≈ 0

            | a11 a12 |
and det A = |         | = a11a22 - a21a12
            | a21 a22 |
If the determinant is zero, the slopes are equal.
This can mean:
- no solution
- infinite number of solutions

If the determinant is close to zero, the system is
ill-conditioned.

So it seems that we should check the determinant
of a system before any further calculations are done.

Let's try an example.
HW 1:

Determine whether the following matrix is ill-conditioned.

[37.2  4.7] {x1}   {22}
[19.2  2.5] {x2} = {12}
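As a hint for acting on the determinant criterion from the previous slide, here is a minimal sketch (written for this summary, not from the slides) that scales each row by its largest coefficient before checking the determinant; the 0.1 cutoff is an assumed, illustrative threshold:

import numpy as np

def near_singular(A, tol=0.1):
    """Scale each row by its largest |element|, then check |det|.

    A determinant much smaller than 1 after scaling suggests the
    system is ill-conditioned; tol is an assumed cutoff.
    """
    A = np.asarray(A, dtype=float)
    scaled = A / np.abs(A).max(axis=1, keepdims=True)
    return abs(np.linalg.det(scaled)) < tol

A = np.array([[37.2, 4.7],
              [19.2, 2.5]])
print(near_singular(A))  # True here: the scaled determinant is tiny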
Matrix Inversion (adjoint formula for A^-1)
If A is any n×n matrix and Cij is the cofactor of
aij, then the matrix

[C11 C12 ... C1n]
[C21 C22 ... C2n]
[ :   :       : ]
[Cn1 Cn2 ... Cnn]

is called the matrix of cofactors of A. The
transpose of this matrix is called the adjoint of A
and is denoted by adj(A).
Example 1

    [ 1  0  1]
A = [-1  3  0]
    [ 1  0  2]

The cofactors of A are
C11 = 6;  C12 = 2;  C13 = -3
C21 = 0;  C22 = 1;  C23 = 0
C31 = -3; C32 = -1; C33 = 3

It follows that the matrix of cofactors of A is

[ 6  2  -3]
[ 0  1   0]
[-3 -1   3]
Matrix of cofactors of A:

[ 6  2  -3]
[ 0  1   0]
[-3 -1   3]

And the adjoint of A is its transpose:

         [ 6  0  -3]
adj(A) = [ 2  1  -1]
         [-3  0   3]
Theorem
If A is an invertible matrix, then

A^-1 = (1/det(A)) adj(A)
Example 2
Use the theorem to find the inverse of the matrix A in the
previous example.

    [ 1  0  1]            [ 6  0  -3]
A = [-1  3  0]   adj(A) = [ 2  1  -1]
    [ 1  0  2]            [-3  0   3]

Expanding (repeating the first two columns):

         | 1  0  1| 1  0
det(A) = |-1  3  0|-1  3   = (1·3·2) - (1·3·1) = 3
         | 1  0  2| 1  0
                        [ 6  0  -3]
det(A) = 3     adj(A) = [ 2  1  -1]
                        [-3  0   3]

And since

A^-1 = (1/det(A)) adj(A)

it follows:

       [  2    0    -1 ]
A^-1 = [ 2/3  1/3  -1/3]
       [ -1    0     1 ]
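The cofactor/adjoint recipe above translates directly into code. A minimal numpy-based sketch (written for this summary, not taken from the slides):

import numpy as np

def adjoint_inverse(A):
    """Invert A via the adjoint formula: A^-1 = adj(A) / det(A)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    C = np.empty_like(A)  # matrix of cofactors
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)  # adj(A) is the transpose of C

A = np.array([[1.0, 0.0, 1.0],
              [-1.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])
print(adjoint_inverse(A))
# expected: [[2, 0, -1], [2/3, 1/3, -1/3], [-1, 0, 1]]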
HW 2:

    [ 1  6  -3]
A = [-2  7   1]
    [ 3 -1   4]

Find:
(a) The matrix of cofactors.
(b) adj(A)
(c) A^-1 using the formula from the theorem
Cramer's Rule
• Not efficient for solving large numbers of
  linear equations
• Useful for explaining some inherent
  problems associated with solving linear
  equations

[a11 a12 a13] {x1}   {b1}
[a21 a22 a23] {x2} = {b2}      [A]{x} = {b}
[a31 a32 a33] {x3}   {b3}
Cramer's Rule

x1 = (1/|A|) | b1 a12 a13 |      x2 = (1/|A|) | a11 b1 a13 |
             | b2 a22 a23 |                   | a21 b2 a23 |
             | b3 a32 a33 |                   | a31 b3 a33 |

x3 = (1/|A|) | a11 a12 b1 |      To solve for xi, place {b}
             | a21 a22 b2 |      in the ith column.
             | a31 a32 b3 |
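A direct transcription of Cramer's rule as a sketch (practical only for small systems, for the efficiency reasons noted above):

import numpy as np

def cramer(A, b):
    """Solve Ax = b by Cramer's rule: xi = det(Ai) / det(A),
    where Ai is A with column i replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    detA = np.linalg.det(A)
    x = np.empty_like(b)
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b  # place {b} in the ith column
        x[i] = np.linalg.det(Ai) / detA
    return x

print(cramer([[3, 2], [-1, 2]], [18, 2]))  # earlier example: [4. 3.]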
Elimination of Unknowns
(algebraic approach)

a11x1 + a12x2 = c1
a21x1 + a22x2 = c2

Multiply the first equation by a21 and the second by a11:

a11x1 + a12x2 = c1     (× a21)
a21x1 + a22x2 = c2     (× a11)

a21a11x1 + a21a12x2 = a21c1      SUBTRACT
a21a11x1 + a11a22x2 = a11c2
Elimination of Unknowns
(algebraic approach)

a21a11x1 + a21a12x2 = a21c1      SUBTRACT
a21a11x1 + a11a22x2 = a11c2

a12a21x2 - a22a11x2 = c1a21 - c2a11

x2 = (a21c1 - a11c2) / (a12a21 - a22a11)      NOTE: same result as
                                              Cramer's Rule
x1 = (a22c1 - a12c2) / (a11a22 - a12a21)
Exercise:
Use Cramer's Rule to solve

2x1 + 3x2 = 5
x1 + x2 = 5

[2  3] {x1}   {5}
[1  1] {x2} = {5}
Gauss Elimination
• One of the earliest methods developed for
solving simultaneous equations
• Important algorithm in use today
• Involves combining equations in order to
eliminate unknowns

Blind (Naive) Gauss Elimination
• Technique for larger matrices
• Same principles of elimination
  - manipulate equations to eliminate an unknown
    from an equation
  - solve directly, then back-substitute into one of
    the original equations
Two Phases of Gauss Elimination

[a11 a12 a13 | c1]
[a21 a22 a23 | c2]      Forward elimination
[a31 a32 a33 | c3]

[a11 a12   a13   | c1  ]      Note: the prime indicates
[ 0  a'22  a'23  | c'2 ]      the number of times the
[ 0   0    a''33 | c''3]      element has changed from
                              the original value.
Two Phases of Gauss Elimination

[a11 a12   a13   | c1  ]
[ 0  a'22  a'23  | c'2 ]
[ 0   0    a''33 | c''3]      Back substitution

x3 = c''3 / a''33

x2 = (c'2 - a'23 x3) / a'22

x1 = (c1 - a12x2 - a13x3) / a11
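These two phases map onto a short routine. A minimal sketch of naive Gauss elimination (no pivoting, so it inherits the division-by-zero pitfall discussed on the following slides):

import numpy as np

def naive_gauss(A, c):
    """Solve Ax = c by forward elimination then back substitution."""
    A = np.asarray(A, dtype=float).copy()
    c = np.asarray(c, dtype=float).copy()
    n = len(c)
    # Forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]  # fails if the pivot is zero
            A[i, k:] -= f * A[k, k:]
            c[i] -= f * c[k]
    # Back substitution: solve from the last equation upward
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x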
Exercise:

2x1 + x2 + 3x3 = 1
4x1 + 4x2 + 7x3 = 1
2x1 + 5x2 + 9x3 = 3
Pitfalls of the Elimination Method
• Division by zero
• Round-off errors
  - magnitude of the pivot element is small compared
    to the other elements
• Ill-conditioned systems
Division by Zero
• When we normalize, i.e. a12/a11, we need to
  make sure we are not dividing by zero
• This may also happen if the coefficient is
  very close to zero

2x2 + 3x3 = 8
4x1 + 6x2 + 7x3 = -3
2x1 + x2 + 6x3 = 5
Techniques for Improving the Solution
• Use of more significant figures
• Pivoting
• Scaling

[a11 a12 a13] {x1}   {b1}
[a21 a22 a23] {x2} = {b2}      [A]{x} = {b}
[a31 a32 a33] {x3}   {b3}
Use of More Significant Figures
• Simplest remedy for ill-conditioning
• Extend precision
  - computational overhead
  - memory overhead
Pivoting
• Problems occur when the pivot element is
  zero - division by zero
• Problems also occur when the pivot element
  is smaller in magnitude than the other
  elements (i.e. round-off errors)
• Prior to normalizing, determine the largest
  available coefficient
Pivoting
• Partial pivoting
  - rows are switched so that the largest element is
    the pivot element
• Complete pivoting
  - columns as well as rows are searched for the
    largest element and switched
  - rarely used because switching columns changes
    the order of the x's, adding unjustified
    complexity to the computer program
Division by Zero - Solution
Pivoting has been developed
to partially avoid these problems:

2x2 + 3x3 = 8            4x1 + 6x2 + 7x3 = -3
4x1 + 6x2 + 7x3 = -3     2x2 + 3x3 = 8
2x1 + x2 + 6x3 = 5       2x1 + x2 + 6x3 = 5
Scaling
• Minimizes round-off errors for cases where some
  of the equations in a system have much larger
  coefficients than others
• In engineering practice, this is often due to the
  widely different units used in the development of
  the simultaneous equations
• As long as each equation is consistent, the system
  will be technically correct and solvable
Scaling

2x1 + 100,000x2 = 100,000    Scale:    0.00002x1 + x2 = 1
x1 + x2 = 2                            x1 + x2 = 2

Pivot rows to put the greatest
value on the diagonal:

x1 + x2 = 2
0.00002x1 + x2 = 1

Without pivoting (3 significant figures): x1 = 0.00, x2 = 1.00
After scaling and pivoting: x1 = x2 = 1.00
EXAMPLE

Use Gauss elimination to solve the following set
of linear equations:

3x2 + 13x3 = -50
2x1 - 6x2 - x3 = 45
4x1 - 8x3 = 4
SOLUTION

3x2 + 13x3 = -50
2x1 - 6x2 - x3 = 45
4x1 - 8x3 = 4

First write in matrix form, employing the shorthand
presented in class.

[0  3  13 | -50]      We will clearly run into
[2 -6  -1 |  45]      problems of division
[4  0  -8 |   4]      by zero.

Use partial pivoting
 0 3 13  50 Pivot with equation
 2 6 1  45  with largest an1
 
 4 0 8  4 

file: matrix.ppt p. 40
 0 3 13  50
 2 6 1  45 
 
 4 0 8  4 

 4 0 8  4 
 2 6 1  45
 
 0 3 13  50

file: matrix.ppt p. 41
 0 3 13  50
 2 6 1  45 
 
 4 0 8  4 

 4 0 8  4 
 2 6 1  45
 
 0 3 13  50

 4 0 8  4  Begin developing
 0 6 3  43  upper triangular matrix
 
 0 3 13  50

file: matrix.ppt p. 42
 4 0 8  4 
 0 6 3  43 
 
 0 3 13  50

 4 0 8  4 
 0 6 3  43 
 
 0 0 14 .5  28.5

28.5 43  31.966
x3   1.966 x2   8149
.
14 .5 6
4  81.966
x1   2 .931
4
CHECK
...end of
3 8149
.   131.966  50 okay
problem

file: matrix.ppt p. 43
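The same procedure with the row swaps automated; a minimal sketch of Gauss elimination with partial pivoting, run on the worked example above:

import numpy as np

def gauss_pivot(A, c):
    """Gauss elimination with partial pivoting, then back substitution."""
    A = np.asarray(A, dtype=float).copy()
    c = np.asarray(c, dtype=float).copy()
    n = len(c)
    for k in range(n - 1):
        # Partial pivoting: move the largest |coefficient| into the pivot row
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]], c[[k, p]] = A[[p, k]], c[[p, k]]
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            c[i] -= f * c[k]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (c[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = [[0, 3, 13], [2, -6, -1], [4, 0, -8]]
c = [-50, 45, 4]
print(gauss_pivot(A, c))  # approx [-2.931, -8.149, -1.966]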
GAUSS-JORDAN
• Variation of Gauss elimination
• The primary motive for introducing this method is that
  it provides a simple and convenient means of
  computing the matrix inverse
• When an unknown is eliminated, it is eliminated
  from all other equations, rather than just the
  subsequent ones
GAUSS-JORDAN
• All rows are normalized by dividing them by their
  pivot elements
• The elimination step results in an identity matrix
  rather than an upper triangular (UT) matrix

Gauss elimination:            Gauss-Jordan:

      [a11 a12 a13]                 [1 0 0]
[A] → [ 0  a22 a23]           [A] → [0 1 0]
      [ 0   0  a33]                 [0 0 1]
Graphical depiction of Gauss-Jordan

[a11 a12 a13 | c1]      [1 0 0 | c1^(n)]
[a21 a22 a23 | c2]  →   [0 1 0 | c2^(n)]
[a31 a32 a33 | c3]      [0 0 1 | c3^(n)]

[1 0 0 | c1^(n)]      x1 = c1^(n)
[0 1 0 | c2^(n)]  →   x2 = c2^(n)
[0 0 1 | c3^(n)]      x3 = c3^(n)
Matrix Inversion
• [A][A]^-1 = [A]^-1[A] = [I]
• One application of the inverse is to solve
  several systems differing only by {c}:
  [A]{x} = {c}
  [A]^-1[A]{x} = [A]^-1{c}
  [I]{x} = {x} = [A]^-1{c}
• One quick method to compute the inverse is
  to augment [A] with [I] instead of {c}
Graphical Depiction of the Gauss-Jordan
Method with Matrix Inversion

[A | I]:

[a11 a12 a13 | 1 0 0]         Note: the superscript
[a21 a22 a23 | 0 1 0]         "-1" denotes that
[a31 a32 a33 | 0 0 1]         the original values
                              have been converted
[1 0 0 | a11^-1 a12^-1 a13^-1]     to the matrix inverse,
[0 1 0 | a21^-1 a22^-1 a23^-1]     not 1/aij
[0 0 1 | a31^-1 a32^-1 a33^-1]

[I | A^-1]
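A minimal sketch of this augment-and-reduce procedure (partial pivoting is added here for robustness; the slide itself does not show it):

import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by reducing the augmented system [A | I] to [I | A^-1]."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A.copy(), np.eye(n)])
    for k in range(n):
        # Partial pivoting (an addition for robustness)
        p = k + np.argmax(np.abs(aug[k:, k]))
        aug[[k, p]] = aug[[p, k]]
        aug[k] /= aug[k, k]          # normalize the pivot row
        for i in range(n):
            if i != k:               # eliminate from ALL other rows
                aug[i] -= aug[i, k] * aug[k]
    return aug[:, n:]

A = np.array([[1.0, 0.0, 1.0],
              [-1.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])
print(gauss_jordan_inverse(A))  # same inverse as the adjoint example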
LU Decomposition Methods

• Elimination methods
  - Gauss elimination
  - Gauss-Jordan
• LU decomposition methods
Naive LU Decomposition
• [A]{x} = {c}
• Suppose this can be rearranged as an upper
  triangular matrix with 1's on the diagonal:
  [U]{x} = {d}
• [A]{x} - {c} = 0      [U]{x} - {d} = 0
• Assume that a lower triangular matrix exists
  that has the property
  [L]{[U]{x} - {d}} = [A]{x} - {c}
Naive LU Decomposition
• [L]{[U]{x} - {d}} = [A]{x} - {c}
• Then, from the rules of matrix multiplication:
  [L][U] = [A]
  [L]{d} = {c}
• [L][U] = [A] is referred to as the LU
  decomposition of [A]. After it is
  accomplished, solutions can be obtained
  very efficiently by a two-step substitution
  procedure
Consider how Gauss elimination can be
formulated as an LU decomposition.

[U] is a direct product of the forward
elimination step if each row is scaled by
its diagonal element:

      [1 a12 a13]
[U] = [0  1  a23]
      [0  0   1 ]
Although not as apparent, the matrix [L] is also
produced during the step. This can be readily
illustrated for a three-equation system:

[a11 a12 a13] {x1}   {c1}
[a21 a22 a23] {x2} = {c2}
[a31 a32 a33] {x3}   {c3}

The first step is to multiply row 1 by the factor

f21 = a21/a11

Subtracting the result from the second row eliminates a21.
[a11 a12 a13] {x1}   {c1}
[a21 a22 a23] {x2} = {c2}
[a31 a32 a33] {x3}   {c3}

Similarly, row 1 is multiplied by

f31 = a31/a11

and the result is subtracted from the third row to eliminate a31.
The final step for a 3 × 3 system is to multiply the modified
second row by

f32 = a'32/a'22

and subtract the result from the third row to eliminate a32.
The values f21, f31, f32 are in fact the elements
of an [L] matrix:

      [ 1   0   0]
[L] = [f21  1   0]
      [f31 f32  1]

CONSIDER HOW THIS RELATES TO THE
LU DECOMPOSITION METHOD TO SOLVE
FOR {X}
[A]{x} = {c}

    decompose into [L][U]

[L]{d} = {c}   →   {d}

[U]{x} = {d}   →   {x}
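A minimal sketch of this flow, building [L] from the elimination factors fij (Doolittle-style, with the 1's on the diagonal of [L]; the slides place the 1's on [U] instead, but the two-step substitution logic is the same):

import numpy as np

def lu_decompose(A):
    """Factor A = L @ U using the Gauss elimination factors f_ij."""
    U = np.asarray(A, dtype=float).copy()
    n = U.shape[0]
    L = np.eye(n)
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]   # the f_ij from the slides
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

def lu_solve(L, U, c):
    """[L]{d} = {c} by forward substitution, then [U]{x} = {d} backward."""
    n = len(c)
    d = np.empty(n)
    for i in range(n):
        d[i] = (c[i] - L[i, :i] @ d[:i]) / L[i, i]
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (d[i] - U[i, i + 1:] @ x[i + 1:]) / U[i, i]
    return x

A = np.array([[4.0, 0, -8], [2, -6, -1], [0, 3, 13]])
L, U = lu_decompose(A)
print(lu_solve(L, U, np.array([4.0, 45, -50])))  # approx [-2.931, -8.149, -1.966]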
Crout Decomposition
• The Gauss elimination method involves two
  major steps:
  - forward elimination
  - back substitution
• Efforts at improvement have focused on the
  development of improved elimination methods
• One such method is Crout decomposition
Crout Decomposition

Represents an efficient algorithm for decomposing [A]
into [L] and [U]:

[l11  0   0 ] [1 u12 u13]   [a11 a12 a13]
[l21 l22  0 ] [0  1  u23] = [a21 a22 a23]
[l31 l32 l33] [0  0   1 ]   [a31 a32 a33]
[l11  0   0 ] [1 u12 u13]   [a11 a12 a13]
[l21 l22  0 ] [0  1  u23] = [a21 a22 a23]
[l31 l32 l33] [0  0   1 ]   [a31 a32 a33]

Recall the rules of matrix multiplication.

The first step is to multiply the rows of [L] by the
first column of [U]:

a11 = l11(1) + 0(0) + 0(0) = l11      Thus the first
a21 = l21                             column of [A]
a31 = l31                             is the first column
                                      of [L]
[l11  0   0 ] [1 u12 u13]   [a11 a12 a13]
[l21 l22  0 ] [0  1  u23] = [a21 a22 a23]
[l31 l32 l33] [0  0   1 ]   [a31 a32 a33]

Next we multiply the first row of [L] by the columns
of [U] to get

l11 = a11
l11u12 = a12
l11u13 = a13
l11 = a11
l11u12 = a12
l11u13 = a13

u12 = a12/l11
u13 = a13/l11

Once the first row of [U] is established,
the operation can be represented concisely:

u1j = a1j/l11      for j = 2, 3, ..., n
[Figure: schematic depicting Crout decomposition]
li1 = ai1                                     for i = 1, 2, ..., n

u1j = a1j / l11                               for j = 2, 3, ..., n

For j = 2, 3, ..., n-1:

lij = aij - Σ(k=1..j-1) lik·ukj               for i = j, j+1, ..., n

ujk = (ajk - Σ(i=1..j-1) lji·uik) / ljj       for k = j+1, j+2, ..., n

lnn = ann - Σ(k=1..n-1) lnk·ukn
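These recurrences translate almost line for line into code. A minimal sketch (0-based indexing replaces the slides' 1-based subscripts):

import numpy as np

def crout(A):
    """Crout decomposition: A = L @ U with 1's on the diagonal of U."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    L = np.zeros((n, n))
    U = np.eye(n)
    for j in range(n):
        # Column j of L: l_ij = a_ij - sum_k l_ik u_kj
        for i in range(j, n):
            L[i, j] = A[i, j] - L[i, :j] @ U[:j, j]
        # Row j of U: u_jk = (a_jk - sum_i l_ji u_ik) / l_jj
        for k in range(j + 1, n):
            U[j, k] = (A[j, k] - L[j, :j] @ U[:j, k]) / L[j, j]
    return L, U

A = np.array([[4.0, 0, -8], [2, -6, -1], [0, 3, 13]])
L, U = crout(A)
print(np.allclose(L @ U, A))  # True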
The Substitution Step
• [L]{[U]{x}-{d}}= [A]{x}-{c}
• [L][U]=[A]
• [L]{d}={c}
• [U]{x}={d}
• Recall our earlier graphical depiction of the
LU decomposition method

[A]{x} = {c}

    decompose into [L][U]

[L]{d} = {c}   →   {d}

[U]{x} = {d}   →   {x}
Forward substitution:

d1 = c1/l11

di = (ci - Σ(j=1..i-1) lij·dj) / lii      for i = 2, 3, ..., n

Back substitution (recall [U]{x} = {d}):

xn = dn

xi = di - Σ(j=i+1..n) uij·xj      for i = n-1, n-2, ..., 1
Parameters used to quantify the
dimensions of a banded system:

BW = band width
HBW = half band width

[Figure: banded matrix with the band of width BW
centered on the diagonal]
Thomas Algorithm
• As with conventional LU decomposition methods, the
  algorithm consists of three steps:
  - decomposition
  - forward substitution
  - back substitution
• Want a scheme that avoids the large, inefficient
  storage of zeros in banded matrices
• Consider the following tridiagonal system
[a11 a12                                    ] {x1}   {c1}
[a21 a22 a23                                ] {x2}   {c2}
[    a32 a33 a34                            ] {x3} = {c3}
[          .   .   .                        ] { .}   { .}
[       a(n-1,n-2) a(n-1,n-1) a(n-1,n)      ] { .}   { .}
[                  a(n,n-1)   a(n,n)        ] {xn}   {cn}

Note how this tridiagonal system requires the storage of a large
number of zero values.
[f1 g1                        ]      {0 }  {f1}  {g1}
[e2 f2 g2                     ]      {e2}  {f2}  {g2}
[   e3 f3 g3                  ]      {e3}  {f3}  {g3}
[       .  .  .               ]      { .}  { .}  { .}
[        e(n-1) f(n-1) g(n-1) ]      { .}  { .}  { .}
[               e(n)   f(n)   ]      {en}  {fn}  {0 }

Note that we have changed our notation from a's to e, f, g.
In addition, we have changed our notation from c's to r.
 0  f 1  g1   0 b12 b13 
e  f g  b b22 b23  Storage can
    
2 2 2 21
 be accomplished
e3  f 3  g3  b31 b32 b33 
 .  .  .   . by either storage
     . . 
      as vector (e,f,g)
 .  .  .   . . .  or as a compact
 .  .  .   . . .  matrix [B]
    
 .  .  .   . . . 
en  f n  0  bn1 bn 2 0

Storage is even further reduced if the matrix is banded and


symmetric. Only elements on the diagonal and in the upper
half need be stored.
file: matrix.ppt p. 71
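A minimal sketch of the Thomas algorithm using the (e, f, g) vector storage just described, with r for the right-hand side as in the slides' renaming; the small test system is an assumed illustration:

import numpy as np

def thomas(e, f, g, r):
    """Solve a tridiagonal system stored as vectors:
    e = sub-diagonal (e[0] unused), f = diagonal,
    g = super-diagonal (g[n-1] unused), r = right-hand side."""
    n = len(f)
    f = np.asarray(f, dtype=float).copy()
    r = np.asarray(r, dtype=float).copy()
    # Decomposition and forward substitution in one sweep
    for i in range(1, n):
        m = e[i] / f[i - 1]
        f[i] -= m * g[i - 1]
        r[i] -= m * r[i - 1]
    # Back substitution
    x = np.empty(n)
    x[-1] = r[-1] / f[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (r[i] - g[i] * x[i + 1]) / f[i]
    return x

e = [0.0, -1.0, -1.0]
f = [2.0, 2.0, 2.0]
g = [-1.0, -1.0, 0.0]
print(thomas(e, f, g, [1.0, 0.0, 1.0]))  # [1. 1. 1.]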
Gauss-Seidel Method
• An iterative approach
• Continue until we converge within some pre-
  specified tolerance of error
• Round-off is no longer an issue, since you
  control the level of error that is acceptable
• Fundamentally different from Gauss elimination:
  - this is an approximate, iterative method
  - particularly good for large numbers of equations
Gauss-Seidel Method
• If the diagonal elements are all nonzero, the
  first equation can be solved for x1:

x1 = (c1 - a12x2 - a13x3 - ... - a1nxn) / a11

• Solve the second equation for x2, etc.

To assure that you understand this, write the equation for x2.
x1 = (c1 - a12x2 - a13x3 - ... - a1nxn) / a11

x2 = (c2 - a21x1 - a23x3 - ... - a2nxn) / a22

x3 = (c3 - a31x1 - a32x2 - ... - a3nxn) / a33

   ...

xn = (cn - an1x1 - an2x2 - ... - a(n,n-1)x(n-1)) / ann
Gauss-Seidel Method
• Start the solution process by guessing
values of x
• A simple way to obtain initial guesses is to
assume that they are all zero
• Calculate new values of xi starting with
x1 = c1/a11
• Progressively substitute through the
equations
• Repeat until tolerance is reached
x1 = (c1 - a12x2 - a13x3) / a11
x2 = (c2 - a21x1 - a23x3) / a22
x3 = (c3 - a31x1 - a32x2) / a33

First iteration, starting from x2 = x3 = 0:

x1 = (c1 - a12(0) - a13(0)) / a11 = c1/a11 = x'1

x2 = (c2 - a21x'1 - a23(0)) / a22 = x'2

x3 = (c3 - a31x'1 - a32x'2) / a33 = x'3
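A minimal sketch of the Gauss-Seidel iteration as just described; the tolerance, iteration cap, and the diagonally dominant test system are assumptions for illustration, not from the slides:

import numpy as np

def gauss_seidel(A, c, tol=1e-8, max_iter=200):
    """Iterate x_i = (c_i - sum_{j!=i} a_ij x_j) / a_ii, reusing each
    new x_i immediately (this reuse is what distinguishes it from Jacobi)."""
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    n = len(c)
    x = np.zeros(n)  # initial guess: all zeros
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x[i] = (c[i] - s) / A[i, i]
        # Stop when the change in every x_i is small relative to x
        if np.max(np.abs(x - x_old)) <= tol * np.max(np.abs(x)):
            return x
    return x

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
c = [7.85, -19.3, 71.4]
print(gauss_seidel(A, c))  # approx [3, -2.5, 7]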
EXAMPLE
Given the following augmented matrix, complete
one iteration of the Gauss-Seidel method.

[2  3  1 | 2]
[4  1  2 | 2]
[3  2  1 | 1]
Gauss-Seidel Method
convergence criterion:

εa,i = | (xi^(j) - xi^(j-1)) / xi^(j) | × 100% < εs

As in previous iterative procedures for finding roots,
we consider the present and previous estimates.

As with the open methods we studied previously with one-
point iteration:

1. The method can diverge
2. It may converge very slowly
Convergence criteria for two
linear equations

u(x1, x2) = c1/a11 - (a12/a11)x2        Class question:
v(x1, x2) = c2/a22 - (a21/a22)x1        where do these
                                        formulas come from?

Consider the partial derivatives of u and v:

∂u/∂x1 = 0             ∂u/∂x2 = -a12/a11

∂v/∂x1 = -a21/a22      ∂v/∂x2 = 0
Convergence criteria for two linear
equations cont.

|∂u/∂x| + |∂u/∂y| < 1      Criterion for convergence
                           presented earlier in class
|∂v/∂x| + |∂v/∂y| < 1      material for nonlinear equations.

Noting that x = x1 and y = x2,
substitute the previous derivatives:
Convergence criteria for two linear
equations cont.

|a21/a22| < 1        |a12/a11| < 1

This states that the absolute values of the slopes must
be less than unity to ensure convergence.

Extended to n equations:

|aii| > Σ|aij|      for j = 1, ..., n excluding j = i
Convergence criteria for two linear
equations cont.

|aii| > Σ|aij|      for j = 1, ..., n excluding j = i

This condition is sufficient but not necessary for convergence.

When it is met, the matrix is said to be diagonally dominant.
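The diagonal-dominance test is easy to automate; a minimal sketch:

import numpy as np

def diagonally_dominant(A):
    """Check |a_ii| > sum of |a_ij| (j != i) for every row."""
    A = np.abs(np.asarray(A, dtype=float))
    diag = np.diag(A)
    off_diag_sum = A.sum(axis=1) - diag
    return bool(np.all(diag > off_diag_sum))

print(diagonally_dominant([[3, -0.1, -0.2],
                           [0.1, 7, -0.3],
                           [0.3, -0.2, 10]]))  # True: Gauss-Seidel is safe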
Review the concepts of divergence and
convergence by graphically illustrating
Gauss-Seidel for two linear equations:

u: 11x1 + 13x2 = 286
v: 11x1 - 9x2 = 99

[Figure: the two lines in the (x1, x2) plane]
v: 11x1 - 9x2 = 99
u: 11x1 + 13x2 = 286

[Figure: iteration path stepping toward the intersection
in the (x1, x2) plane]

Note: we are converging on the solution
u: 11x1 + 13x2 = 286
v: 11x1 - 9x2 = 99

Change the order of the equations: i.e. change
the direction of the initial estimates

[Figure: iteration path moving away from the intersection
in the (x1, x2) plane]

This solution is diverging!
Improvement of Convergence
Using Relaxation
This is a modification that will enhance slow convergence.

After each new value of x is computed, calculate a new value
based on a weighted average of the present and previous
iteration:

xi_new = λ·xi_new + (1 - λ)·xi_old
Improvement of Convergence Using
Relaxation

xi_new = λ·xi_new + (1 - λ)·xi_old

• if λ = 1: unmodified
• if 0 < λ < 1: underrelaxation
  - nonconvergent systems may converge
  - hastens convergence by dampening out oscillations
• if 1 < λ < 2: overrelaxation
  - extra weight is placed on the present value
  - assumes the new value is moving toward the correct
    solution, but too slowly
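The weighted update drops straight into the Gauss-Seidel loop from the earlier sketch; lam plays the role of λ, and the value 1.2 is an arbitrary illustrative overrelaxation factor:

import numpy as np

def gauss_seidel_relaxed(A, c, lam=1.2, tol=1e-8, max_iter=200):
    """Gauss-Seidel with relaxation: blend each new x_i with its old value."""
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    n = len(c)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
            x_new = (c[i] - s) / A[i, i]
            x[i] = lam * x_new + (1 - lam) * x[i]  # the relaxation step
        if np.max(np.abs(x - x_old)) <= tol * np.max(np.abs(x)):
            return x
    return x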
Jacobi Iteration
• Iterative like Gauss-Seidel
• Gauss-Seidel immediately uses each new value of
  xi in the equation for the next unknown, x(i+1)
• Jacobi computes the full set of new xi values from the
  previous iteration's values before using any of them
Graphical depiction of the difference between Gauss-Seidel and Jacobi

FIRST ITERATION
Gauss-Seidel (each new value used at once):    Jacobi (all values from the previous set):
x1 = (c1 - a12x2 - a13x3) / a11                x1 = (c1 - a12x2 - a13x3) / a11
x2 = (c2 - a21x1 - a23x3) / a22                x2 = (c2 - a21x1 - a23x3) / a22
x3 = (c3 - a31x1 - a32x2) / a33                x3 = (c3 - a31x1 - a32x2) / a33

SECOND ITERATION
The same sweep is repeated: Gauss-Seidel substitutes each new value
as soon as it is computed, while Jacobi substitutes the complete set
of values from the first iteration.
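For contrast with the Gauss-Seidel sketch earlier, a minimal Jacobi iteration; note that the whole sweep reads from x_old, never from the partially updated x:

import numpy as np

def jacobi(A, c, tol=1e-8, max_iter=200):
    """Jacobi iteration: every x_i is computed from the PREVIOUS sweep."""
    A = np.asarray(A, dtype=float)
    c = np.asarray(c, dtype=float)
    n = len(c)
    x = np.zeros(n)
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            s = A[i, :i] @ x_old[:i] + A[i, i + 1:] @ x_old[i + 1:]
            x[i] = (c[i] - s) / A[i, i]
        if np.max(np.abs(x - x_old)) <= tol * np.max(np.abs(x)):
            return x
    return x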
EXAMPLE
Given the following augmented matrix, complete
one iteration of the Gauss-Seidel method and the
Jacobi method.

[2  3  1 | 2]
[4  1  2 | 2]
[3  2  1 | 1]

We worked the Gauss-Seidel method earlier.
