
System of Linear Equations

This presentation covers explanations for the two topics shown,
together with worked examples:

Definition

Solving Linear Equations

1

Definition

2

Definition
Linear equation: an equation of the form ax + by = c, where a and b
are not both zero.
Linear system of equations: a pair of equations such as

ax + by = c
dx + ey = f

is a linear system of equations.
Both equations must be considered together.

A linear system can involve two, three, or more variables.

3
Definition
Think back to linear equations. For instance, consider the linear
equation y = 3x – 5. A "solution" to this equation was any (x, y)
point that "worked" in the equation. So (2, 1) was a solution
because, plugging in 2 for x:
 3x – 5 = 3(2) – 5 = 6 – 5 = 1 = y
On the other hand, (1, 2) was not a solution, because, plugging in
1 for x:
 3x – 5 = 3(1) – 5 = 3 – 5 = –2, which does not equal the y-value of 2.

4
Now consider the following two-variable
system of linear equations:
y = 3x – 2
y = –x – 6

Since the two equations above are in a system, we deal with them
together at the same time. In particular, we can graph them together
on the same axis system.

5
Now consider the following two-variable
system of linear equations:
y = 3x – 2
y = –x – 6

A solution for a single equation is any point that lies on the line
for that equation. A solution for a system of equations is any point
that lies on each line in the system. For example, the red point at
right is not a solution to the system, because it is not on either line.

6
Now consider the following two-variable
system of linear equations:
y = 3x – 2
y = –x – 6

The blue point at right is not a solution to the system, because it
lies on only one of the lines, not on both of them.

7
Now consider the following two-variable
system of linear equations:
y = 3x – 2
y = –x – 6

The purple point at right is a solution to the system, because it
lies on both of the lines.

8
Solving a Linear System in Two Variables

• Graphing

• Substitution method

• Addition method (also known as the elimination method)

9
Graphing
Thinking graphically, when we are solving systems, we are
finding intersections. For two-variable systems, there are
three possible types of solutions.

10
Graphing
The first graph shows two distinct non-parallel lines that cross
at exactly one point. This is called an "independent" system of
equations, and the solution is always some x,y-point.

11
Graphing
The second graph shows two distinct lines that are parallel. Since
parallel lines never cross, there can be no intersection; that is,
for parallel lines there can be no solution. This is called an
"inconsistent" system of equations, and it has no solution.

12
Graphing
The third graph appears to show only one line. Actually, it's the
same line drawn twice. These "two" lines, really being the same line,
then "intersect" at every point along their length. This is called a
"dependent" system, and the "solution" is the whole line.

13
Example 1
Solve the following system by graphing.
2x – 3y = –2
4x +  y = 24

Solution

14
Example 1

Solution: the two lines intersect at (5, 4).

15
Example 2
Solve the following system by graphing.
y = 36 – 9x
3x + y/3 = 12
Solution: multiplying the second equation by 3 gives 9x + y = 36, i.e.
y = 36 – 9x, which is the same line as the first equation. The system
is dependent, so there are infinitely many solutions.

16
Example 3
Solve the following system by graphing.
7x + 2y = 16
–21x – 6y = 24
Solution: multiplying the first equation by –3 gives –21x – 6y = –48,
which contradicts –21x – 6y = 24. The lines are parallel, so the system
is inconsistent and has no solution.

17
Summary
To solve a linear system in two variables by graphing:
i. Solve both equations for y
ii. Compare the slopes to decide how many solutions the system has
iii. If the system has one solution – graph the two lines in the same plane
iv. Identify the point of intersection
v. Check the point in both equations

18
Summary
• A linear system in two variables may have one solution,
no solution, or infinitely many solutions.
• We use the slopes and y-intercepts of the given equations to
determine how many solutions a system has (a short sketch of this
check follows below):
• Different slopes – one solution
• Same slopes, different y-intercepts – no solution
• Same slopes, same y-intercept – infinitely many solutions

19
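The check described in the summary can be written as a few lines of code. Below is a minimal Python sketch, assuming each equation is supplied as coefficients (a, b, c) for a*x + b*y = c and that at least one of the y-coefficients is nonzero; the function name classify_system is an illustrative choice, not something from the slides.

# A minimal sketch of the slope/y-intercept check from the summary above.
# Each equation is given as coefficients (a, b, c) meaning a*x + b*y = c.

def classify_system(eq1, eq2):
    """Return 'one solution', 'no solution', or 'infinitely many solutions'."""
    a1, b1, c1 = eq1
    a2, b2, c2 = eq2
    # Solving a*x + b*y = c for y gives slope -a/b and intercept c/b (b != 0);
    # comparing via cross-multiplication avoids dividing by zero.
    same_slope = a1 * b2 == a2 * b1
    same_intercept = c1 * b2 == c2 * b1
    if not same_slope:
        return "one solution"
    return "infinitely many solutions" if same_intercept else "no solution"

# Examples 1-3 from the earlier slides:
print(classify_system((2, -3, -2), (4, 1, 24)))    # one solution
print(classify_system((9, 1, 36), (3, 1/3, 12)))   # infinitely many solutions
print(classify_system((7, 2, 16), (-21, -6, 24)))  # no solution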
Summary

When graphing to determine the solutions to the system:
• First – solve for y
• Second – compare the slopes to determine how many solutions
• Third – if one solution – graph both lines on the same plane
• Fourth – identify the point of intersection
• Lastly – check the solution in both equations

20
Solving by Substitution
This method works by solving one of the equations for one of
the variables, and then plugging this into the other equation,
"substituting" for the chosen variable and solving for the other.
Then back-solve for the first variable.

Example 4
Solve the following system by substitution.

2x – 3y = –2
4x +   y = 24

21
Solution
4x + y = 24
y = –4x + 24
Substitute for "y" in the first equation, and solve for x:
2x – 3(–4x + 24) = –2
2x + 12x – 72 = –2
14x = 70
x=5

y = –4(5) + 24 = –20 + 24 = 4

Then the solution is (x, y) = (5, 4).

22
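As an optional check of Example 4 (not part of the original slides), the same system can be handed to a symbolic solver; a short sketch assuming the sympy library is available:

# Verifying Example 4 with sympy's symbolic solver.
import sympy as sp

x, y = sp.symbols("x y")
solution = sp.solve([sp.Eq(2*x - 3*y, -2), sp.Eq(4*x + y, 24)], [x, y])
print(solution)   # expected: {x: 5, y: 4}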
Solving by Addition
The addition method is also called the method of
elimination. If we had the equation "x + 6 = 11", we would write "–6"
under both sides of the equation, and add down to get "x = 5" as the
solution.

x + 6 = 11
    –6    –6
x       =   5

23
Example 5
Solve the following system using addition.
2x + y = 9
3x – y = 16

Solution

2x + y = 9
3x – y = 16
5x      = 25
Adding the two equations eliminates y and gives x = 5. Then back-solve,
using either of the original equations, to find the value of y. Using
the first equation:
2(5) + y = 9
  10 + y = 9
          y = –1
Then the solution is (x, y) = (5, –1).
24
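A numerical cross-check of Example 5 (an illustrative helper, not from the slides): writing the system in the matrix form Ax = b used later in this presentation and calling numpy's solver.

# Checking Example 5 numerically with numpy.
import numpy as np

A = np.array([[2.0, 1.0],
              [3.0, -1.0]])
b = np.array([9.0, 16.0])
print(np.linalg.solve(A, b))   # expected: [ 5. -1.]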
Exercise 1
Solve the following using addition.

12x – 3y = 6
   4x –   y = 2

25
Solving a Linear System in Three or More Variables
Methods for solving linear systems of three or more variables:
Direct methods: find the exact solution in a finite
number of steps.
Iterative methods: produce a sequence of approximate
solutions that hopefully converges to the exact solution.

Matrix algebra is used to solve a system of linear equations.

26
Linear Systems
a11 x1 + a12 x2 + a13 x3 = b1
a21 x1 + a22 x2 + a23 x3 = b2
a31 x1 + a32 x2 + a33 x3 = b3

In matrix form:

[a11 a12 a13] [x1]   [b1]
[a21 a22 a23] [x2] = [b2]
[a31 a32 a33] [x3]   [b3]

27
Matrix Algebra
System of m linear equations in n unknowns.

Matrix-vector notation: Ax = b. Extended (augmented) coefficient matrix: (A | b).

28
Solving Linear Systems
• Solve Ax = b, where A is an n×n matrix and
b is an n×1 column vector
• Can also talk about non-square systems where
A is m×n, b is m×1, and x is n×1
• Overdetermined if m > n:
"more equations than unknowns"
• Underdetermined if n > m:
"more unknowns than equations"
Can look for the best solution using least squares (see the sketch below)

29
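For the overdetermined case mentioned above, a least-squares solution can be computed with numpy; the 3×2 system below is an invented example for illustration, not one from the slides.

# Least-squares "best solution" for an overdetermined system (m > n).
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])          # m = 3 equations, n = 2 unknowns
b = np.array([1.0, 2.0, 2.0])

x, residuals, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)
print(x)          # least-squares solution minimising ||Ax - b||
print(residuals)  # sum of squared residuals (empty if the fit is exact)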
Solving Linear Systems
Recap from Lecture 2:

1. Inverting matrix
2. Cramer’s Rule

30
Solving Linear Systems
Recap from Lecture 2:

Inverting the matrix
Usually not a good idea to compute x = A⁻¹b
• Inefficient
• Prone to round-off error

31
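A short illustration of that point, with an arbitrary 2×2 example (not from the slides): both calls give the same answer here, but the direct solver is the preferred route.

# Prefer a solver over explicitly forming the inverse.
import numpy as np

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

x_inverse = np.linalg.inv(A) @ b   # works, but inefficient and less accurate
x_solve = np.linalg.solve(A, b)    # preferred: solves Ax = b directly
print(x_inverse, x_solve)          # both approximately [1. 2.]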
Echelon Form of a Matrix
An m×n matrix A is said to be in row echelon form if it satisfies
the following properties:

1. All zero rows, if there are any, appear at the bottom of the matrix.
2. Each leading entry (the first nonzero entry from the left) of a row
is in a column to the right of the leading entry of the row above it.
3. All entries in a column below a leading entry are zeros.

32
Echelon Form of a Matrix
Matrices in echelon form:

[x * *]   [x * *]   [x * *]   [0 x * * * *]
[0 0 x] , [0 x *] , [0 x *] , [0 0 0 0 x *]
          [0 0 x]   [0 0 0]   [0 0 0 0 0 0]

(x) may have any nonzero value and the entries (*) may
have any value, including zero.

33
Reduced Echelon Form of a Matrix
An m×n matrix A is said to be in reduced row echelon form if
it satisfies the following properties:

1. The first nonzero entry from the left of a nonzero row is a 1. This
entry is called the leading one of its row.
2. All entries above and below a leading 1 are zeros.

[1 * 0]   [1 0 0]   [1 0 *]   [0 1 * * 0 *]
[0 0 1] , [0 1 0] , [0 1 *] , [0 0 0 0 1 0]
          [0 0 1]   [0 0 0]   [0 0 0 0 0 0]

34
Elementary Row Operations
An elementary row operation on a matrix A is any one of
the following operations:

1. Type I: Interchange any two rows.
2. Type II: Multiply a row by a nonzero number.
3. Type III: Add a multiple of one row to another.

An m×n matrix B is said to be row equivalent to an m×n
matrix A if B can be obtained by applying a finite sequence
of elementary row operations to A.

35
Echelon Matrix

* * * * * * *
0 * * * * * 
*
A
0 0 0 0 * * *
 
0 0 0 0 0 0 *

Freevariables
Free variables
36
Reduced Row Echelon Matrix

1 0 * * 0 * 0
0 1 * * 0 * 
0
A
0 0 0 0 1 * 0
 
0 0 0 0 0 0 1

Free variables
Free variables
37
Example 6
3 2 1
A   2 2  1 R1↔R2
 2 1 2 
 2 2  1
~  3  1 1  2R2→R2
 2 1 2 
 2 2  1
~  6  2 2  -3R1+R2↔R2
 2 1 2 
 2 2  1
~  0  8 5  R1+R3↔R3
 2 1 2 
2 2  1
~ 0  8 5  A and B are row equivalent
0 3 1 
B
38
Steps in Row Reduction-Pivoting
1. Begin with the leftmost nonzero column. This is called the pivot
column.
2. Select a nonzero entry (preferably one with the smallest absolute value)
in the pivot column as the pivot element. If the pivot element is not in
the pivot position, use a row interchange to move it there.
3. Perform row reduction into row echelon form
(obtain 0 below the pivot element using row replacement operations,
adding suitable multiples of the pivot row to the rows below it),
or row reduction into reduced row echelon form
(obtain 0 both above and below the pivot element in the same way).
4. Repeat (1) to (3) on the submatrix consisting of the remaining rows.
A small computer-algebra sketch of this reduction follows below.

39
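The sketch referred to above, assuming the sympy library is available. Note that sympy chooses its own pivots and scalings, so its echelon form may differ from the one produced by the smallest-absolute-value pivot rule, but the rank and the reduced row echelon form are the same. The matrix used is the one from Example 8 further below.

# Row reduction with sympy (illustrative check, not from the slides).
import sympy as sp

A = sp.Matrix([[3, 2, 1],
               [0, 1, 0],
               [1, 2, 3]])      # the matrix used in Example 8 below

print(A.echelon_form())         # a row echelon form of A
rref_A, pivot_columns = A.rref()
print(rref_A, pivot_columns)    # reduced row echelon form and its pivot columns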
Example 7
Apply elementary row operations to transform the
following matrix into echelon form

3 1 1  3
2 1 3 8 
A 
 1 1 2 3
 
  2 4  3 2 

Solution

1. (Optional) Compute a check-column vector to verify the arithmetic.
2. Follow the aforementioned steps.

40
Example 7
Row echelon form of matrix A

1  1 2 3
0 2 1 8 
A
0 0 1 4
 
0 0 0 0

41
Exercise 2
Find the row echelon form of matrix A
0 2 3 4 1
0 0 2 3 4
A
2 2 5 2 4
 
2 0 6 9 7

 5 
1 1  1 2
2
 3 1 Answer
0 1 2 
C 2 2
0 3
0 1 2
 2 
0 0 0 0 0  42
Rank of a Matrix
The rank is the number of the pivots of A, which is
also the same as the number of nonzero rows of
an echelon form of A. To compute it, we reduce A
to echelon form and count the number of nonzero
rows or the number of pivot columns.

Example 8
Compute the rank of the following matrix

3 2 1 
A  0 1 0
1 2 3
43
Example 8
Solution

    [3  2  1]     [1  2  3]     [1  2  3]     [1  2  3]
A = [0  1  0]  ~  [0  1  0]  ~  [0  1  0]  ~  [0  1  0]
    [1  2  3]     [3  2  1]     [0 -4 -8]     [0  0 -8]

There are 3 nonzero rows, hence rank(A) = 3.
44
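A quick numerical check of Example 8 (optional, not from the slides), using numpy's rank routine.

import numpy as np

A = np.array([[3, 2, 1],
              [0, 1, 0],
              [1, 2, 3]])
print(np.linalg.matrix_rank(A))   # expected: 3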
Exercise 3
Find the rank of matrix A

0 1 2 
A  1 2 1 
0 2 4

45
Solving Linear Systems

Gaussian Elimination

Gauss Jordan Elimination

46
Gauss Elimination Method

• Form the augmented matrix (A│b) corresponding to Ax=b.

• Transform the augmented matrix (A│b) to a row echelon matrix (U│d).

• The solution (if any) x of the system is obtained by solving the
linear system Ux=d corresponding to (U│d) using backward substitution.

A small implementation sketch follows the example systems below.

47
Gauss Elimination Method
Solve the following linear systems using Gauss elimination.

a. x1+ 2x2+3x3=6
2x1-3x2+2x3=14
3x1+x2-x3=-2

b. x1+ 2x2+3x3=6
4x1+5x2+6x3=24
2x1+7x2+12x3=-2

c. 3x1-5x2+2x3=6
x1+2x2-x3=1
-x1+9x2-4x3=-4

48
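A minimal sketch of Gauss elimination with back substitution, applied to system (a) above. This is an illustrative implementation (with the standard largest-magnitude pivot choice rather than the smallest-value rule mentioned earlier), not code from the lecture; the function name gauss_solve is an assumption.

import numpy as np

def gauss_solve(A, b):
    """Solve Ax = b for square, nonsingular A: form (A|b), reduce to (U|d),
    then back-substitute."""
    aug = np.hstack([A.astype(float), b.reshape(-1, 1).astype(float)])
    n = len(b)
    for k in range(n - 1):
        # Partial pivoting: swap in the row with the largest pivot candidate.
        p = k + np.argmax(np.abs(aug[k:, k]))
        aug[[k, p]] = aug[[p, k]]
        # Eliminate the entries below the pivot.
        for i in range(k + 1, n):
            m = aug[i, k] / aug[k, k]
            aug[i, k:] -= m * aug[k, k:]
    # Back substitution on (U|d).
    x = np.zeros(n)
    for j in range(n - 1, -1, -1):
        x[j] = (aug[j, -1] - aug[j, j + 1:n] @ x[j + 1:]) / aug[j, j]
    return x

A = np.array([[1, 2, 3],
              [2, -3, 2],
              [3, 1, -1]])
b = np.array([6, 14, -2])
print(gauss_solve(A, b))   # expected: [ 1. -2.  3.]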
Gauss Elimination Method
Solution:

a. Unique solution (r=n)

b. No solution

c. Infinitely many solutions (r < n)

n = number of unknowns
r = number of nonzero rows

49
Solving Linear Systems

Gaussian Elimination Method for Solving M x = b

• A "direct" method:
finite termination with the exact result (ignoring round-off)
• Produces accurate results for a broad range of matrices
• Computationally expensive

50
Gauss-Jordan Elimination Method

• Form the augmented matrix (A│b) corresponding to Ax=b.

• Transform the augmented matrix (A│b) to a reduced row echelon matrix (U│d)
(i.e. Ax=b is equivalent to Ux=d and has the same solution).

• The solution (if any) x of the system is obtained by solving the
linear system Ux=d corresponding to (U│d).

51
Gauss-Jordan Elimination Method
Solve the following linear systems using Gauss-Jordan elimination.

a. x1+ 2x2+3x3=6
2x1-3x2+2x3=14
3x1+x2-x3=-2

b. x1+ 2x2+3x3=6
4x1+5x2+6x3=24
2x1+7x2+12x3=-2

c. 3x1-5x2+2x3=6
x1+2x2-x3=1
-x1+9x2-4x3=-4

52
Gauss-Jordan Elimination Method
Solution:

a. Unique solution (r=n)


(Also a unique solution if |A| ≠ 0)

b. No solution

c. Infinitely many solutions (r < n)

n = number of unknowns
r = number of nonzero rows

53
Consistency of Solutions
• The linear system of equations Ax=b has a
solution, i.e. is said to be consistent, if and only if
Rank{A} = Rank{A|b}
• A system is inconsistent when
Rank{A} < Rank{A|b}

54
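The rank criterion above can be checked numerically for the three systems (a)–(c) used in the elimination slides; a small sketch (optional, not part of the slides) assuming numpy is available.

# Consistency check: compare rank(A) with rank(A|b) for systems (a)-(c).
import numpy as np

systems = {
    "a": (np.array([[1, 2, 3], [2, -3, 2], [3, 1, -1]]), np.array([6, 14, -2])),
    "b": (np.array([[1, 2, 3], [4, 5, 6], [2, 7, 12]]), np.array([6, 24, -2])),
    "c": (np.array([[3, -5, 2], [1, 2, -1], [-1, 9, -4]]), np.array([6, 1, -4])),
}

for name, (A, b) in systems.items():
    rank_A = np.linalg.matrix_rank(A)
    rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))
    n = A.shape[1]
    if rank_A < rank_Ab:
        outcome = "inconsistent (no solution)"
    elif rank_A == n:
        outcome = "unique solution"
    else:
        outcome = "infinitely many solutions"
    print(name, rank_A, rank_Ab, outcome)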
LU Factorization Method
Variant of Gaussian elimination that decomposes a matrix as a product of a lower
triangular and an upper triangular matrix.

A widely used method on computers for solving linear systems.

When U is an upper triangular matrix all of whose diagonal entries are different from
zero, then the linear system UX=B can be solved without transforming the augmented
matrix [U│B] to reduced row echelon form.

This is the preferred general method for solving linear equations.

55
LU Factorization Method
[u11 u12 u13 ... | b1]
[ 0  u22 u23 ... | b2]
[ .   .   .  ... | . ]
[ 0   0   0  unn | bn]

The solution is obtained by the following algorithm
(this is merely back substitution):

x_n = b_n / u_nn
x_(n-1) = (b_(n-1) - u_(n-1,n) x_n) / u_(n-1,n-1)
...
x_j = (b_j - sum_{k=j+1..n} u_jk x_k) / u_jj,   j = n, n-1, ..., 2, 1.

56
LU Factorization Method
In a similar manner, if L is a lower triangular matrix all of whose diagonal
entries are different from zero, then the linear system LX=B can be solved
by forward substitution.

[l11  0   0  ...  0  | b1]
[l21 l22  0  ...  0  | b2]
[l31 l32 l33 ...  0  | b3]
[ .   .   .  ...  .  | . ]
[ln1 ln2 ln3 ... lnn | bn]

The solution is given by

x_1 = b_1 / l11
x_2 = (b_2 - l21 x_1) / l22
...
x_j = (b_j - sum_{k=1..j-1} l_jk x_k) / l_jj,   j = 2, ..., n.
57
Example 9
Solve the linear system

5x1 =10
4x1-2x2 =28
2x1+3x2+4x3 =26
Solution

x1 = 10/5 = 2
x2 = (28 - 4x1)/(-2) = -10
x3 = (26 - 2x1 - 3x2)/4 = 13

58
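A short forward-substitution sketch (illustrative code, not from the slides), applied to the lower triangular system of Example 9; the function name forward_substitution is an assumption.

import numpy as np

def forward_substitution(L, b):
    """Solve Lx = b for lower triangular L with nonzero diagonal entries."""
    n = len(b)
    x = np.zeros(n)
    for j in range(n):
        x[j] = (b[j] - L[j, :j] @ x[:j]) / L[j, j]
    return x

L = np.array([[5.0, 0.0, 0.0],
              [4.0, -2.0, 0.0],
              [2.0, 3.0, 4.0]])
b = np.array([10.0, 28.0, 26.0])
print(forward_substitution(L, b))   # expected: [  2. -10.  13.]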
LU Factorization Method
An n×n matrix A can be written as a product of a matrix L in lower triangular form and a
matrix U in upper triangular form, i.e.
A = LU
In this case, we say that A has an LU-factorization or an LU-decomposition. To solve a
system AX=B, substitute LU for A:
(LU)X = B
or
L(UX) = B
Letting UX=Z, the system becomes
LZ = B

59
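In practice the factorization is usually obtained from a library. A sketch assuming scipy is available: lu_factor returns the packed LU factors plus pivot indices, and lu_solve reuses them for any right-hand side. (scipy applies partial pivoting, so its L and U may differ from the hand factorization in Example 10 below, but the solution is the same.)

import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[6.0, -2.0, -4.0, 4.0],
              [3.0, -3.0, -6.0, 1.0],
              [-12.0, 8.0, 21.0, -8.0],
              [-6.0, 0.0, -10.0, 7.0]])
b = np.array([2.0, -4.0, 8.0, -43.0])

lu, piv = lu_factor(A)         # one factorization ...
x = lu_solve((lu, piv), b)     # ... reused to solve for this right-hand side
print(x)                       # expected: [ 4.5  6.9 -1.2 -4. ]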
LU Factorization Method
There are infinitely many different ways to decompose A.
Most popular one: U=Gaussian eliminated matrix
L=Multipliers used for elimination

 1 0 0  0 0 a11(1) a12(1) a13(1)  a1(1n) 


m
 2,1 1 0  0 0  0 ( 2)
a22 ( 2)
a23  a2( 2n) 

 m3,1 m3, 2 1  0 0  0 0 ( 3)
a33  a3( 3n) 
A  
      0       
mn 1,1 mn 1, 2 mn 1,3  1   0 0 0 an( n1) n 1 an( n1) n 
  
 mn ,1 mn , 2 mn,3 mn, 4  1  0 (n)
0 0 0 ann 

Compact storage: The diagonal entries of L matrix are all 1’s,


they don’t need to be stored. LU is stored in a single matrix.

60
LU Factorization Method

• Suppose we are given:  A = [2 3]
                             [1 2]

• Then we can write A = LU where:  L = [ 1   0]    U = [2  3 ]
                                       [0.5  1]        [0 0.5]

• Let's check that:

  LU = [ 1   0] [2  3 ] = [ 1*2 + 0*0     1*3 + 0*0.5 ] = [2 3]
       [0.5  1] [0 0.5]   [0.5*2 + 1*0   0.5*3 + 1*0.5]   [1 2]

61
Example 10
Solve the linear system

6x1-2x2-4x3+4x4 =2
3x1-3x2 -6x3+x4 =-4
-12x1+8x2+21x3-8x4 =8
-6x1-10x3+7x4 =-43
Solution:
    [  6  -2  -4   4]
A = [  3  -3  -6   1]
    [-12   8  21  -8]
    [ -6   0 -10   7]

    [  1   0   0   0]       [ 6  -2  -4   4]       [  2]
L = [ 1/2  1   0   0]   U = [ 0  -2  -4  -1]   B = [ -4]
    [ -2  -2   1   0]       [ 0   0   5  -2]       [  8]
    [ -1   1  -2   1]       [ 0   0   0   8]       [-43]

62
Example 10
Then solve AX=B by writing LUX=B. Let UX=Z and solve LZ=B:

[  1   0   0   0] [z1]   [  2]
[ 1/2  1   0   0] [z2] = [ -4]
[ -2  -2   1   0] [z3]   [  8]
[ -1   1  -2   1] [z4]   [-43]

By forward substitution,

z1 = 2
z2 = -4 - (1/2)z1 = -5
z3 = 8 + 2z1 + 2z2 = 2
z4 = -43 + z1 - z2 + 2z3 = -32

63
Example 10
Solve UX=Z:

[ 6  -2  -4   4] [x1]   [  2]
[ 0  -2  -4  -1] [x2] = [ -5]
[ 0   0   5  -2] [x3]   [  2]
[ 0   0   0   8] [x4]   [-32]

Hence,

x4 = -32/8 = -4
x3 = (2 + 2x4)/5 = -1.2
x2 = (-5 + 4x3 + x4)/(-2) = 6.9
x1 = (2 + 2x2 + 4x3 - 4x4)/6 = 4.5

64
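An optional numpy check of Example 10 (not part of the slides): confirm that the L and U found above reproduce the original system and that the computed solution satisfies it.

import numpy as np

L = np.array([[1, 0, 0, 0],
              [0.5, 1, 0, 0],
              [-2, -2, 1, 0],
              [-1, 1, -2, 1]], dtype=float)
U = np.array([[6, -2, -4, 4],
              [0, -2, -4, -1],
              [0, 0, 5, -2],
              [0, 0, 0, 8]], dtype=float)
B = np.array([2, -4, 8, -43], dtype=float)
x = np.array([4.5, 6.9, -1.2, -4.0])

print(np.allclose(L @ U @ x, B))   # True: (LU)x = B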
Decomposition Methods

Doolittle decomposition

Crout decomposition

Cholesky decomposition (for symmetric matrices)

65
Crout Decomposition

    [a11 a12 a13]       [b1]       [x1]
A = [a21 a22 a23],  B = [b2],  X = [x2]
    [a31 a32 a33]       [b3]       [x3]

         [ 1   0   0] [u11 u12 u13]
A = LU = [l21  1   0] [ 0  u22 u23]
         [l31 l32  1] [ 0   0  u33]

  [a11 a12 a13]   [  u11           u12                  u13              ]
  [a21 a22 a23] = [l21*u11   l21*u12 + u22        l21*u13 + u23          ]
  [a31 a32 a33]   [l31*u11   l31*u12 + l32*u22    l31*u13 + l32*u23 + u33]

66
Crout Decomposition

Equating entries gives:

u11 = a11
u12 = a12
u13 = a13

l21 = a21/a11,   u22 = a22 - (a21/a11)a12,   u23 = a23 - (a21/a11)a13

l31 = a31/a11,   l32 = (a32 - (a31/a11)a12) / (a22 - (a21/a11)a12),

u33 = a33 - (a31/a11)a13 - l32*u23

67
Crout Decomposition

Thus the matrices L and U become known. Now AX=B becomes
LUX=B, i.e. LY=B, where Y = UX.

Solve LY=B for Y by forward substitution, then UX=Y for X by back substitution.

68
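The entrywise formulas above translate directly into code. A sketch using the unit-lower-triangular convention shown in these slides (illustrative, not from the lecture; the function name lu_decompose is an assumption), applied to the coefficient matrix of Example 11 below.

import numpy as np

def lu_decompose(A):
    """Return (L, U) with A = L U, L unit lower triangular (no pivoting)."""
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A, dtype=float)
    for i in range(n):
        # Row i of U:  u_ik = a_ik - sum_{j<i} l_ij * u_jk
        for k in range(i, n):
            U[i, k] = A[i, k] - L[i, :i] @ U[:i, k]
        # Column i of L:  l_ki = (a_ki - sum_{j<i} l_kj * u_ji) / u_ii
        for k in range(i + 1, n):
            L[k, i] = (A[k, i] - L[k, :i] @ U[:i, i]) / U[i, i]
    return L, U

A = np.array([[3.0, 2.0, 7.0],
              [2.0, 3.0, 1.0],
              [3.0, 4.0, 1.0]])
L, U = lu_decompose(A)
print(L)                        # l21 = 2/3, l31 = 1, l32 = 6/5
print(U)                        # u22 = 5/3, u23 = -11/3, u33 = -8/5
print(np.allclose(L @ U, A))    # True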
Example 11
Solve the linear system

3x1+2x2+7x3 =4
2x1+3x2 +x3 =5
3x1+4x2+x3 =7

Solution:

         [ 1   0   0] [u11 u12 u13]
A = LU = [l21  1   0] [ 0  u22 u23]
         [l31 l32  1] [ 0   0  u33]

  [3 2 7]   [  u11           u12                  u13              ]
  [2 3 1] = [l21*u11   l21*u12 + u22        l21*u13 + u23          ]
  [3 4 1]   [l31*u11   l31*u12 + l32*u22    l31*u13 + l32*u23 + u33]

69
Example 11

u11 = 3
u12 = 2
u13 = 7

l21 = 2/3,   u22 = 3 - (2/3)(2) = 5/3,   u23 = 1 - (2/3)(7) = -11/3

l31 = 3/3 = 1,   l32 = (4 - (1)(2)) / (5/3) = 6/5,

u33 = 1 - (1)(7) - (6/5)(-11/3) = -8/5

70
Example 11

         [ 1    0   0] [3   2     7  ]
A = LU = [2/3   1   0] [0  5/3 -11/3 ]
         [ 1   6/5  1] [0   0   -8/5 ]

Write UX=Y, which gives LY=B:

[ 1    0   0] [y1]   [4]        [y1]   [ 4 ]
[2/3   1   0] [y2] = [5]   =>   [y2] = [7/3]
[ 1   6/5  1] [y3]   [7]        [y3]   [1/5]
71
Example 11
Hence the original system reduces to UX=Y:

[3   2     7  ] [x1]   [ 4 ]        [x1]   [ 7/8]
[0  5/3 -11/3 ] [x2] = [7/3]   =>   [x2] = [ 9/8]
[0   0   -8/5 ] [x3]   [1/5]        [x3]   [-1/8]
72
Steps in LU Decomposition

73
Iterative Methods
• If systems of linear equations are very large,
the computational effort of direct methods is
prohibitively expensive
• Common classical iterative techniques for
linear systems include:
• The Jacobi method
• The Gauss-Seidel method

74
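A minimal sketch of the Jacobi iteration mentioned above (illustrative only; convergence requires a suitable matrix, e.g. one that is strictly diagonally dominant). The 3×3 system and the function name jacobi are invented for illustration.

import numpy as np

def jacobi(A, b, iterations=50):
    """Iterate x_(m+1) = D^-1 (b - (A - D) x_m), starting from x = 0."""
    D = np.diag(A)                 # diagonal entries of A
    R = A - np.diagflat(D)         # off-diagonal part of A
    x = np.zeros_like(b, dtype=float)
    for _ in range(iterations):
        x = (b - R @ x) / D
    return x

A = np.array([[10.0, 2.0, 1.0],
              [1.0, 8.0, 2.0],
              [2.0, 1.0, 9.0]])    # diagonally dominant example
b = np.array([13.0, 11.0, 12.0])
print(jacobi(A, b))                # converges towards [1. 1. 1.]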
LU Factorization Method
• LU Decomposition
* Based on Gauss elimination
* More efficient
• Decomposition Methods (not unique)
* Doolittle decomposition: lii = 1
* Crout decomposition: uii = 1 (omitted)
* Cholesky decomposition (for symmetric matrices): uii = lii

75
