
Math 308 Lecture Notes

Spring 2015

Section 5.1: Determinants

The determinant is a number we associate to a square matrix A. It will tell us, among
other things, whether or not A is invertible.

Formula for the Determinant

Formula 1. If A is a 1 × 1 matrix, A = [a11], then det(A) = a11.

Formula 2. If A is a 2 × 2 matrix, things get a little bit harder, but are still pretty easy
to compute. If A is the matrix

    [ a11  a12 ]
    [ a21  a22 ]

then the determinant of A is given by

    det(A) = a11 a22 − a12 a21.
Note: we've already talked about the quick formula for the inverse of a 2 × 2 matrix:
if A is the matrix

    [ a11  a12 ]
    [ a21  a22 ]

then

    A⁻¹ = (1 / (a11 a22 − a12 a21)) [  a22  −a12 ]
                                    [ −a21   a11 ] .

So we've already seen the determinant of a 2 × 2 matrix come up, and we know that a 2 × 2
matrix A is invertible if and only if det(A) ≠ 0.
 
Example 1. Find the determinant of the matrix

    A = [ 2   4 ]
        [ 3  −1 ] .

The determinant is (2)(−1) − 4(3) = −14.
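The 2 × 2 formula is simple enough to check by machine. Here's a minimal sketch in
Python (the function name det2 is my own, not notation from the course):

```python
def det2(a11, a12, a21, a22):
    """Determinant of the 2x2 matrix [[a11, a12], [a21, a22]]."""
    return a11 * a22 - a12 * a21

# The matrix from Example 1:
print(det2(2, 4, 3, -1))   # -14, so A is invertible (det != 0)
```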

Formula 3. Once we know the formula for a 2 × 2 determinant, we can begin to compute
determinants recursively. In order to do this, we first introduce some notation. If A is
an n × n matrix, denote the ij-th entry by aij. Let Mij be the (n − 1) × (n − 1) matrix
gotten by crossing out the i-th row and j-th column of A. Mij is called the ij-th minor of
A. Finally, let Cij = (−1)^(i+j) det(Mij). Cij is called the ij-th cofactor of A.

As an example, consider the matrix

    A = [ 2   1  −1 ]
        [ 3  −2   4 ]
        [ 2  −1   0 ] .

Then,

    a13 = −1,

    M13 = [ 3  −2 ]
          [ 2  −1 ]

and

    C13 = (−1)^(1+3) det M13 = (−1)^4 (3(−1) − (−2)(2)) = 1.

We'll use these to compute the determinant of any matrix. Here's the formula: if A is
the n × n matrix

    A = [ a11  a12  ...  a1n ]
        [  :    :         :  ]
        [ an1  an2  ...  ann ]

then

    det(A) = a11 C11 + a12 C12 + ··· + a1n C1n.


Example 2. Find the determinant of the matrix

    A = [ 2   1  −1 ]
        [ 3  −2   4 ]
        [ 2  −1   0 ] .

The formula above tells us det(A) = a11 C11 + a12 C12 + a13 C13. We know

    a11 = 2,  a12 = 1,  a13 = −1

and

    C11 = (−1)^(1+1) det M11 = det [ −2  4 ] = 4
                                   [ −1  0 ]

    C12 = (−1)^(1+2) det M12 = −det [ 3  4 ] = 8
                                    [ 2  0 ]

    C13 = (−1)^(1+3) det M13 = det [ 3  −2 ] = 1
                                   [ 2  −1 ]

so

    det(A) = 2(4) + 1(8) + (−1)(1) = 15.

The formula is recursive, meaning we compute determinants of 3 × 3 matrices by com-
puting determinants of 2 × 2 matrices. We compute determinants of 4 × 4 matrices by
computing determinants of 3 × 3 matrices, which we do by computing determinants of
2 × 2 matrices, and so on.
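The recursive procedure described above translates almost line-for-line into code. A
sketch in plain Python, with matrices as 0-indexed lists of rows (the names det and
minor are my own):

```python
def minor(A, i, j):
    """The matrix M_ij: delete row i and column j (0-indexed here)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row."""
    if len(A) == 1:
        return A[0][0]
    # det(A) = a11 C11 + a12 C12 + ... + a1n C1n
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

A3 = [[2, 1, -1],
      [3, -2, 4],
      [2, -1, 0]]
print(det(A3))   # 15, matching Example 2
```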


Example 3. Find the determinant of the matrix

    A = [ 1   1   2   1 ]
        [ 0   2   1  −1 ]
        [ 0   3  −2   4 ]
        [ 0   2  −1   0 ] .

The formula tells us det(A) = a11 C11 + a12 C12 + a13 C13 + a14 C14. We know

    a11 = 1,  a12 = 1,  a13 = 2,  a14 = 1

and, by what we computed above (Example 2),

    C11 = (−1)^(1+1) det M11 = det [ 2   1  −1 ]
                                   [ 3  −2   4 ] = 15.
                                   [ 2  −1   0 ]

We also need to find

    C12 = (−1)^(1+2) det M12 = −det [ 0   1  −1 ]
                                    [ 0  −2   4 ]
                                    [ 0  −1   0 ]

    C13 = (−1)^(1+3) det M13 = det [ 0   2  −1 ]
                                   [ 0   3   4 ]
                                   [ 0   2   0 ]

    C14 = (−1)^(1+4) det M14 = −det [ 0   2   1 ]
                                    [ 0   3  −2 ]
                                    [ 0   2  −1 ] .

Let's do the first one together:

    C12 = −det [ 0   1  −1 ]
               [ 0  −2   4 ]
               [ 0  −1   0 ]

        = −( 0 · det [ −2  4 ]  −  1 · det [ 0  4 ]  +  (−1) · det [ 0  −2 ] ) = 0.
                     [ −1  0 ]             [ 0  0 ]                [ 0  −1 ]

Similarly, we can compute C13 = 0 and C14 = 0, so

    det(A) = 1(15) + 1(0) + 2(0) + 1(0) = 15.

A lot of the determinants in the above example turned out to be 0, and that's a re-
flection of the following fact: we can compute determinants by expanding along the first
row (what we've been doing in the formula above, using the entries from the first row to
compute the determinant), or we could expand along any row or column of our choice.
This is summarized in the following formulas.
Formula 4. If A is the n × n matrix

    A = [ a11  a12  ...  a1n ]
        [  :    :         :  ]
        [ an1  an2  ...  ann ]

then we can compute the determinant via

1. An expansion across row i: det(A) = ai1 Ci1 + ai2 Ci2 + ··· + ain Cin

2. An expansion across column j: det(A) = a1j C1j + a2j C2j + ··· + anj Cnj

Example 3, revisited. Let's try applying part (2) of the formula to the example above,
with j = 1 (expanding across the first column). The formula tells us det(A) = 1 · C11 +
0 · C21 + 0 · C31 + 0 · C41. So, we just need to find

    C11 = (−1)^(1+1) det M11 = det [ 2   1  −1 ]
                                   [ 3  −2   4 ] = 15
                                   [ 2  −1   0 ]

(we computed this in a previous example). Therefore,

    det(A) = 1(15) = 15.
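We can spot-check Formula 4 in code, too: expanding across any row or any column of
the 4 × 4 matrix from Example 3 gives the same answer. A sketch in pure Python
(helper names are my own):

```python
def minor(A, i, j):
    """Delete row i and column j (0-indexed)."""
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j))
               for j in range(len(A)))

def det_by_row(A, i):
    """Expansion across row i: sum of a_ij * C_ij over j."""
    return sum((-1) ** (i + j) * A[i][j] * det(minor(A, i, j))
               for j in range(len(A)))

def det_by_col(A, j):
    """Expansion across column j: sum of a_ij * C_ij over i."""
    return sum((-1) ** (i + j) * A[i][j] * det(minor(A, i, j))
               for i in range(len(A)))

A4 = [[1, 1, 2, 1],
      [0, 2, 1, -1],
      [0, 3, -2, 4],
      [0, 2, -1, 0]]
# Every row and every column gives 15; column 0 is the cheap one.
print([det_by_row(A4, i) for i in range(4)])   # [15, 15, 15, 15]
print(det_by_col(A4, 0))                       # 15
```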

Facts about the Determinant.

1. If I is the identity matrix (of any size), det(I) = 1.

2. If A is an n × n matrix, det(A) = det(Aᵀ).

3. If A is a triangular (i.e., diagonal, upper triangular, or lower triangular) n × n matrix,
then det(A) is the product of the terms on the diagonal.

4. If A and B are both n × n matrices, det(AB) = det(A) det(B).

5. An n × n matrix A is invertible if and only if det(A) ≠ 0.

We won't prove all of these facts (they're proved in the book, mostly in Sections 5.1
and 5.2), so feel free to check out those explanations. I won't expect you to know how to
prove these things, but I will expect you to at least know these facts. The fourth fact is
particularly useful. For instance, even though AB ≠ BA in general, it tells us

    det(AB) = det(A) det(B) = det(B) det(A) = det(BA).

It also tells us that the matrix AB is singular if and only if either A or B is singular! (Do
you see why?)
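Fact 4 is easy to test numerically. A small sketch for 2 × 2 matrices (the helper names
are mine, and B here is just a matrix I made up for the check):

```python
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2, 4], [3, -1]]
B = [[1, 2], [0, 5]]
print(det2(matmul(A, B)), det2(A) * det2(B))   # both -70
```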
Section 6.1: Eigenvalues and Eigenvectors

The Eigenvalue Problem

If A is an n × n matrix, can we find a scalar λ and a nonzero vector u such that Au = λu?

We call these vectors u eigenvectors of A, and we call the scalars λ eigenvalues of
A. The set of all eigenvectors corresponding to a given λ, together with the zero vector,
is called the eigenspace of λ, and it is a subspace of Rⁿ (which we'll show below).
   
Example 1. If A = [ 3  5 ] , verify that u = [ 5 ] is an eigenvector of A with corresponding
                  [ 4  2 ]                   [ 4 ]
eigenvalue λ = 7.

To verify this, we need to check that Au = 7u. But this is true:

    Au = [ 3  5 ] [ 5 ] = [ 35 ] = 7 [ 5 ] = 7u.
         [ 4  2 ] [ 4 ]   [ 28 ]     [ 4 ]
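This verification is a single matrix-vector product, which we can mirror in code (a
sketch; matvec is my own helper):

```python
def matvec(A, v):
    """Multiply a matrix (a list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[3, 5], [4, 2]]
u = [5, 4]
print(matvec(A, u))          # [35, 28]
print([7 * x for x in u])    # [35, 28] -- so Au = 7u
```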

Finding Eigenvalues and Eigenvectors

We want to find λ and u such that

    Au = λu.

Subtracting λu from both sides, we get

    Au − λu = 0.

Now, factoring out the u (on the right), we get

    (A − λI)u = 0.

Rearranging the eigenvalue problem in this way, we're looking for values of λ such that the
equation (A − λI)x = 0 has a nonzero solution x = u. However, we know this equation
has a nonzero solution if and only if A − λI is a singular matrix, which happens if and
only if det(A − λI) = 0.

By rearranging the problem in this way, we see that we can find eigenvalues of A by
determining when det(A − λI) = 0. This will turn out to be a polynomial in λ, called
the characteristic polynomial of A. Once we know the eigenvalues of A, we can find
eigenvectors as nonzero solutions to (A − λI)x = 0. This also shows that the eigenspace
of λ, which is the set of all solutions to the equation (A − λI)x = 0, is a subspace of Rⁿ,
because the eigenspace is equal to null(A − λI).
Example 2. Find the eigenvalues and a basis for each eigenspace of the matrix

    A = [ 3   3 ]
        [ 6  −4 ] .

We first want to find eigenvalues of A, which we do by finding λ such that det(A − λI) = 0.

    A − λI = [ 3   3 ] − λ [ 1  0 ] = [ 3 − λ     3    ]
             [ 6  −4 ]     [ 0  1 ]   [   6    −4 − λ  ] .

Now we compute

    det(A − λI) = (3 − λ)(−4 − λ) − 3(6) = λ² + λ − 30 = (λ − 5)(λ + 6).

Setting this equal to 0, we find that the eigenvalues are λ1 = 5 and λ2 = −6.

We can find the corresponding eigenvectors (and eigenspaces) by solving the equation
(A − λI)x = 0. For λ1 = 5,

    A − 5I = [ −2   3 ]
             [  6  −9 ]

and we solve the equation by reducing

    [A − 5I | 0] = [ −2   3  0 ]  →  [ 1  −3/2  0 ] .
                   [  6  −9  0 ]     [ 0    0   0 ]

The solutions to this equation are the eigenspace corresponding to λ1 = 5, so the
eigenspace is

    { s [ 3/2 ] : s is any real number } .
        [  1  ]

A basis for this eigenspace would be

    { [ 3/2 ] } .
      [  1  ]

Note that although 0 is in the eigenspace, eigenvectors are required to be nonzero, so any
value s ≠ 0 would give an eigenvector corresponding to λ1 = 5.

Now, we repeat the process for the other eigenvalue λ2 = −6:

    [A + 6I | 0] = [ 9  3  0 ]  →  [ 1  1/3  0 ]
                   [ 6  2  0 ]     [ 0   0   0 ]

meaning the eigenspace corresponding to λ2 = −6 (the solutions to (A + 6I)x = 0) is

    { s [ −1/3 ] : s is any real number } .
        [   1  ]

For any s ≠ 0, such a vector is an eigenvector. A basis for this eigenspace would be

    { [ −1/3 ] } .
      [   1  ]
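We can check both eigenpairs in code. Any nonzero multiple of a basis eigenvector is
again an eigenvector, so the sketch below scales [3/2; 1] and [−1/3; 1] by 2 and 3 to
keep the arithmetic in integers (matvec is my own helper):

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[3, 3], [6, -4]]
# [3, 2] = 2*[3/2, 1] and [-1, 3] = 3*[-1/3, 1]
for lam, u in [(5, [3, 2]), (-6, [-1, 3])]:
    print(matvec(A, u) == [lam * x for x in u])   # True, True
```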

Example 3. Find the eigenvalues and a basis for each eigenspace of

    A = [  1  −2  −1 ]
        [ −1   0  −1 ]
        [  1   2   3 ] .

First we need to find the characteristic polynomial, det(A − λI). But,

    det(A − λI) = det [ 1 − λ   −2     −1   ]
                      [  −1     −λ     −1   ]  = −λ³ + 4λ² − 4λ = −λ(λ − 2)².
                      [   1      2   3 − λ  ]

This equals zero when λ is 0 or 2, so the eigenvalues are λ1 = 0 and λ2 = 2. To find the
corresponding eigenspaces, we want to find all solutions to (A − λI)x = 0.

For λ1 = 0, we solve this by reducing

    [A − 0I | 0] = [  1  −2  −1  0 ]  →  [ 1  0  1  0 ]
                   [ −1   0  −1  0 ]     [ 0  1  1  0 ]
                   [  1   2   3  0 ]     [ 0  0  0  0 ] .

The solutions to this system (the eigenspace corresponding to λ1 = 0) are given by

    { s [ −1 ] : s is any real number }
        [ −1 ]
        [  1 ]

so a basis for the eigenspace of λ1 = 0 is

    { [ −1 ] } .
      [ −1 ]
      [  1 ]

We repeat the process for λ2 = 2, reducing

    [A − 2I | 0] = [ −1  −2  −1  0 ]  →  [ 1  2  1  0 ]
                   [ −1  −2  −1  0 ]     [ 0  0  0  0 ]
                   [  1   2   1  0 ]     [ 0  0  0  0 ] .

The solutions to this system (the eigenspace corresponding to λ2 = 2) are given by

    { s1 [ −1 ] + s2 [ −2 ] : s1, s2 any real numbers }
         [  0 ]      [  1 ]
         [  1 ]      [  0 ]

so a basis for the eigenspace of λ2 = 2 is

    { [ −1 ] , [ −2 ] } .
      [  0 ]   [  1 ]
      [  1 ]   [  0 ]
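As in the previous example, we can verify these eigenspace bases with a few matrix-vector
products (a sketch; matvec is my own helper):

```python
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, -2, -1],
     [-1, 0, -1],
     [1, 2, 3]]
print(matvec(A, [-1, -1, 1]))   # [0, 0, 0]: an eigenvector for lambda = 0
for u in ([-1, 0, 1], [-2, 1, 0]):
    print(matvec(A, u) == [2 * x for x in u])   # True: eigenvectors for lambda = 2
```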

Multiplicity of an Eigenvalue

In the above example, the characteristic polynomial was −λ(λ − 2)². Because of the
square, λ2 = 2 appeared as a root twice. The number of times an eigenvalue λ appears
as a root of the characteristic polynomial is called the multiplicity of λ. The dimension
of the eigenspace associated to λ is always less than or equal to the multiplicity of λ (in
the above example, it was actually equal).

Remarks about Eigenvalues

Things to keep in mind when finding eigenvalues and eigenvectors:


1. If A is an n × n matrix, the characteristic polynomial det(A − λI) should always be
a degree n polynomial (you get n λ's from the n terms on the diagonal).

2. There can be at most n different eigenvalues for an n × n matrix A.

3. If A is a triangular matrix, the eigenvalues of A are just the entries on the diagonal.

4. When you're finding eigenvectors and reducing [A − λI | 0], you should ALWAYS get
free variables and rows of zeros, because we've chosen λ so that A − λI is a singular
matrix. If you don't get free variables and rows of zeros, something went wrong in
your calculation!

5. Because eigenvalues are all scalars λ such that A − λI is a singular matrix, we can
say that A itself is a singular matrix if and only if 0 is an eigenvalue of A. But, this
means we can add something to the Big Theorem!
The Big Theorem: As usual, let S = {a1, ..., an} be a set of n vectors in Rⁿ, and let
A = [a1 ... an]. Let T : Rⁿ → Rⁿ be the linear transformation T(x) = Ax. Then the
following are equivalent:
1. S spans Rⁿ.

2. S is linearly independent.

3. Ax = b has a unique solution for all b in Rⁿ.

4. T is onto.

5. T is one-to-one.

6. ker(T) = {0}.

7. range(T) = Rⁿ.

8. null(A) = {0}.

9. col(A) = Rⁿ.

10. row(A) = Rⁿ.

11. nullity(A) = 0.

12. rank(A) = n.

13. S is a basis for Rⁿ.

14. det(A) ≠ 0.

15. 0 is not an eigenvalue of A.

16. A row reduces to the identity matrix.


Section 6.3: Change of Basis

Before we get into the technical details of this section, let's first revisit the idea of a
basis and what we'd use a basis for. If W is a subspace of Rⁿ, then remember that a basis
is a linearly independent spanning set for W. Why would we want this? Remember the
main theorem about bases: if B = {u1, ..., um} is a basis for W, then any vector w in
W can be written as a linear combination

    w = c1 u1 + ··· + cm um

in a unique way. We call the scalars c1, ..., cm the coordinates of w in the basis B.
In some sense, we're imposing a coordinate system on the subspace W, where each basis
vector acts like a coordinate axis.

We're used to seeing coordinates in the standard basis. For example, if {e1, e2} are the
standard basis vectors for R², then any vector in R² can be written as a linear combination
of these vectors in a unique way, namely

    [ a ] = a [ 1 ] + b [ 0 ] .
    [ b ]     [ 0 ]     [ 1 ]

This tells us the coordinates of the vector [ a ; b ] in the standard basis are just (a, b). This
makes sense: we go a units in the direction of e1 (the x-direction) and b units in the di-
rection of e2 (the y-direction) to get to the point (a, b).

Now, using other bases, we just want to extend this idea to other subspaces. For
example, if I have some plane in R³, can we find some vector that acts like the x-axis
and some vector that acts like the y-axis? Well, if I have a basis for that plane, we will
be able to impose a coordinate system!

We are also interested in extending our ideas of coordinate systems beyond the stan-
dard basis because some bases are much easier to work with than others. For example, if
we have an n × n matrix A and a basis B = {u1, ..., un} of Rⁿ consisting of eigenvectors
of A, then it is very easy to compute Ax for any vector x in Rⁿ. Why? Well, suppose
the eigenvectors {u1, ..., un} correspond to the eigenvalues λ1, ..., λn. Then, we know
Aui = λi ui. So, if I can find scalars c1, ..., cn such that

    x = c1 u1 + ··· + cn un,

then

    Ax = c1 Au1 + ··· + cn Aun = c1 λ1 u1 + ··· + cn λn un.

This becomes very useful when we're trying to find higher powers of A; it says

    A^k x = c1 A^k u1 + ··· + cn A^k un = c1 λ1^k u1 + ··· + cn λn^k un.

So, instead of multiplying x by A k times, we just have to multiply the eigenvectors by
powers of scalars and add them up. This is much easier to do!
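The savings are easy to see in code. The sketch below uses a hypothetical 2 × 2 matrix
A with eigenvalues 1 and 2 (these particular numbers and names are my own choice) and
compares multiplying by A ten times against the one-line eigenvector formula:

```python
def matvec(A, v):
    return [sum(a * xi for a, xi in zip(row, v)) for row in A]

# u1, u2 are eigenvectors of A with eigenvalues 1 and 2.
A = [[3, -1], [2, 0]]
u1, u2 = [1, 2], [1, 1]       # A u1 = 1*u1,  A u2 = 2*u2
c1, c2 = 2, 1
x = [c1 * a + c2 * b for a, b in zip(u1, u2)]   # x = c1 u1 + c2 u2 = [3, 5]

k = 10
# Method 1: multiply x by A, k times.
y = x
for _ in range(k):
    y = matvec(A, y)

# Method 2: A^k x = c1 * 1**k * u1 + c2 * 2**k * u2 -- no matrix products.
z = [c1 * 1 ** k * a + c2 * 2 ** k * b for a, b in zip(u1, u2)]
print(y == z)   # True
```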

We can investigate some of these ideas further in the future, but for now, that's a bit
of motivation for why we'd want to be able to find a basis and then change to a different
basis. Now we'll move on to the computational side of how to change to a new basis. We
restrict our attention to changing between bases of Rⁿ.

Notation

If B = {u1, ..., un} is a basis for Rⁿ, and x is a vector in Rⁿ, we write

    xB = [ c1 ]
         [ :  ]
         [ cn ]B

where c1, ..., cn are the coordinates of x in the basis B, i.e.

    x = c1 u1 + ··· + cn un.

So, when we write

    xB = [ c1 ]
         [ :  ]
         [ cn ]B

we DO NOT mean that x is equal to the vector with components c1, ..., cn, i.e. we DO
NOT mean

    x = [ c1 ]
        [ :  ]
        [ cn ] .

Instead, we mean that x is a linear combination of the basis vectors with coefficients given
by c1, ..., cn.
     
Example 1. Let B be the basis { [ 2 ] , [ −1 ] } for R². If xB = [ 4 ]   , find x.
                                [ 1 ]   [  3 ]                   [ 5 ]B

The notation xB = [ 4 ; 5 ]B means that

    x = 4 [ 2 ] + 5 [ −1 ] = [  3 ] .
          [ 1 ]     [  3 ]   [ 19 ]
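The computation in this example is just "multiply each basis vector by its coordinate
and add," which a short sketch makes concrete (an illustration, not course code):

```python
# x is the linear combination of the basis vectors with coefficients from x_B.
basis = [[2, 1], [-1, 3]]        # B = {u1, u2}, one vector per list
xB = [4, 5]
x = [sum(c * u[i] for c, u in zip(xB, basis)) for i in range(2)]
print(x)   # [3, 19]
```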
Remark. When we write x with no subscript, we always mean that x = xS, where S is
the standard basis for Rⁿ. In other words,

    x = [ c1 ]
        [ :  ]
        [ cn ]

with no subscript actually means that x is the vector whose components are given by
c1, ..., cn.

First Change of Basis Matrices

If B = {u1, ..., un} is a basis for Rⁿ, and x is a vector in Rⁿ, then we can find scalars
c1, ..., cn such that

    x = c1 u1 + ··· + cn un.

However, this says

    xB = [ c1 ]
         [ :  ]
         [ cn ]B .

If U is the matrix whose columns are the basis vectors, U = [u1 ... un], then we can
rewrite the first equation as

    x = U xB.

In words, this is saying that multiplying the coordinate vector xB of x in the basis B by
the matrix U gives us x, the coordinate vector of x in the standard basis S. Hence, we
call U the change of basis matrix from B to the standard basis.

What if we're given a vector in the standard basis, and we want to find xB? Well,
because B is a basis, the matrix U is invertible, so we can solve for xB by multiplying
both sides of the equation above by U⁻¹ to get the equation

    xB = U⁻¹ x.

We call U⁻¹ the change of basis matrix from the standard basis to B.


     
Example 2. Let B = {u1, u2} = { [ 3 ] , [ 5 ] }. Find x if xB = [  2 ]   .
                                [ 1 ]   [ 2 ]                   [ −1 ]B

We can do this problem using the formula x = U xB. We first set U = [u1 u2]. Then,

    x = U xB = [ 3  5 ] [  2 ] = [ 1 ] .
               [ 1  2 ] [ −1 ]   [ 0 ]

 
Example 3. Let B be as in Example 2. Find xB if x = [ 1 ] .
                                                    [ 1 ]

We do this problem using the formula xB = U⁻¹ x. We know U = [ 3  5 ] , so
                                                             [ 1  2 ]

    U⁻¹ = [  2  −5 ] .
          [ −1   3 ]

Then,

    xB = U⁻¹ x = [  2  −5 ] [ 1 ] = [ −3 ]   .
                 [ −1   3 ] [ 1 ]   [  2 ]B
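The same computation in code, reusing the quick 2 × 2 inverse formula from Section 5.1
(a sketch; inv2 and matvec are my own helpers):

```python
def inv2(U):
    """Inverse of a 2x2 matrix via the quick formula from Section 5.1."""
    a, b = U[0]
    c, d = U[1]
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

U = [[3, 5], [1, 2]]             # columns are the basis vectors of B
print(matvec(inv2(U), [1, 1]))   # [-3.0, 2.0], i.e. xB = [-3; 2]
```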

Change of Basis Matrices between Nonstandard Bases

Now we want to find the change of basis matrix from B1 to B2, where B1 and B2 are
any bases for Rⁿ. Let B1 = {u1, ..., un} and B2 = {v1, ..., vn}. Let U be the matrix
U = [u1 ... un] and V be the matrix V = [v1 ... vn]. From the formulas above, we know
x = U xB1 and xB2 = V⁻¹ x. Substituting the first equation into the second, we see that

    xB2 = V⁻¹ U xB1

and we say that V⁻¹ U is the change of basis matrix from B1 to B2.

Similarly, we know x = V xB2 and xB1 = U⁻¹ x, so

    xB1 = U⁻¹ V xB2

and we say that U⁻¹ V is the change of basis matrix from B2 to B1.



Example 4. Let

    B1 = { [ 1 ] , [ 1 ] , [ 2 ] }          B2 = { [ 1 ] , [  1 ] , [ 2 ] }
           [ 1 ]   [ 4 ]   [ 1 ]     and          [ 0 ]   [ −3 ]   [ 1 ] .
           [ 3 ]   [ 2 ]   [ 6 ]                  [ 1 ]   [  0 ]   [ 2 ]

Find xB1 if

    xB2 = [ 3 ]
          [ 2 ]
          [ 1 ]B2 .

First set

    U = [ 1  1  2 ]             [ 1   1  2 ]
        [ 1  4  1 ]     and V = [ 0  −3  1 ] .
        [ 3  2  6 ]             [ 1   0  2 ]

We know xB1 = U⁻¹ V xB2, so we need to find U⁻¹. But, we can find this and will get

    U⁻¹ = [ −22   2   7 ]
          [   3   0  −1 ]
          [  10  −1  −3 ] ,

so

    xB1 = U⁻¹ V xB2 = [ −22   2   7 ] [ 1   1  2 ] [ 3 ]     [ −129 ]
                      [   3   0  −1 ] [ 0  −3  1 ] [ 2 ]   = [   16 ]    .
                      [  10  −1  −3 ] [ 1   0  2 ] [ 1 ]B2   [   60 ]B1
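A program would usually avoid computing U⁻¹ explicitly: since x = V xB2 = U xB1, we
can get xB1 by solving the linear system U xB1 = V xB2. A sketch with a small
Gaussian-elimination solver (my own helper, not a library routine):

```python
def matvec(A, v):
    return [sum(a * xi for a, xi in zip(row, v)) for row in A]

def solve(A, b):
    """Solve Ax = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix [A | b]
    for col in range(n):
        # Swap in the row with the largest pivot, then clear the column.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [mr - f * mc for mr, mc in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

U = [[1, 1, 2], [1, 4, 1], [3, 2, 6]]
V = [[1, 1, 2], [0, -3, 1], [1, 0, 2]]
xB2 = [3, 2, 1]

x = matvec(V, xB2)     # x in the standard basis: [7, -5, 5]
print(solve(U, x))     # approximately [-129, 16, 60]
```

Floating-point elimination introduces tiny rounding errors, so the printed entries may be
off from the exact answer by around 1e-13.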
Section 6.4: Diagonalization

You have no homework on this section, but if you continue to take math (especially
Math 309), diagonalization is a concept that will pop up. It's a nice application of the
ideas in the Change of Basis section.

Suppose A is an n × n matrix with eigenvectors B = {u1, ..., un} that form a basis for
Rⁿ. Let λ1, ..., λn be the corresponding eigenvalues. If x is any vector in Rⁿ, because
these eigenvectors form a basis, we can write

    x = c1 u1 + ··· + cn un

or

    xB = [ c1 ]
         [ :  ]
         [ cn ]B .

Because these are eigenvectors, if we multiply the first equation by A, we get

    Ax = c1 λ1 u1 + ··· + cn λn un

or

    (Ax)B = [ λ1 c1 ]
            [   :   ]
            [ λn cn ]B .

But, if D is the matrix

    D = [ λ1  0   ...  0  ]
        [ 0   λ2       :  ]
        [ :            0  ]
        [ 0   ...  0   λn ] ,

we have just derived the formula

    (Ax)B = [ λ1 c1 ]      [ c1 ]
            [   :   ]  = D [ :  ]  = D xB.
            [ λn cn ]      [ cn ]

If P is the matrix P = [u1 ... un], then

    xB = P⁻¹ x

for any x. But, this implies that

    Ax = P (Ax)B = P (D xB) = P D P⁻¹ x.

This equation holds for any vector x, so it tells us that

    A = P D P⁻¹.

We say that an n × n matrix A is diagonalizable if we can find these matrices P
and D (where D is a diagonal matrix) such that A = P D P⁻¹. In the situation outlined
above, we see that we can find P and D by finding a basis for Rⁿ consisting of eigenvectors
{u1, ..., un} of A with corresponding eigenvalues λ1, ..., λn. Then P = [u1 ... un] and D
is the diagonal matrix with the eigenvalues on the diagonal.
 
Example 1. Find matrices P and D to diagonalize A = [ 3  −1 ] .
                                                    [ 2   0 ]

First, we want to find the eigenvalues and eigenvectors of A. A computation shows
that the eigenvalues are λ1 = 1 and λ2 = 2, and an eigenvector corresponding to λ1 = 1
is the vector u1 = [ 1 ; 2 ]. Similarly, an eigenvector corresponding to λ2 = 2 is the vector
u2 = [ 1 ; 1 ]. Then, the vectors {u1, u2} form a basis for R², so we set

    P = [ 1  1 ]    and    D = [ 1  0 ] .
        [ 2  1 ]               [ 0  2 ]

Then, A = P D P⁻¹.
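We can confirm A = P D P⁻¹ by multiplying the three matrices back together (a sketch;
matmul is my own helper, and P⁻¹ comes from the 2 × 2 inverse formula by hand):

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1, 1], [2, 1]]
D = [[1, 0], [0, 2]]
# det(P) = 1*1 - 1*2 = -1, so by the quick 2x2 inverse formula:
P_inv = [[-1, 1], [2, -1]]
print(matmul(matmul(P, D), P_inv))   # [[3, -1], [2, 0]], which is A
```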
Remark. Not every matrix is diagonalizable! In order to do this computation, we need to
find a basis for Rⁿ consisting of eigenvectors of A, and we cannot always do this. However,
if the eigenvalues of A are distinct (all different), then such a basis of eigenvectors does
exist, and we can diagonalize A. Feel free to read more about diagonalization in Section
6.4 of the book!
