Spring 2015
The determinant is a number we associate to a square matrix A. It will tell us, among
other things, whether or not A is invertible.
Formula 2. If $A$ is a $2 \times 2$ matrix, things get a little bit harder, but are still pretty easy to compute. If $A$ is the matrix
$$A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix},$$
then the determinant of $A$ is given by
$$\det(A) = a_{11}a_{22} - a_{12}a_{21}.$$
Note: we've already talked about the quick formula for the inverse of a $2 \times 2$ matrix: if $A$ is the matrix $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$, then
$$A^{-1} = \frac{1}{a_{11}a_{22} - a_{12}a_{21}} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}.$$
So we've already seen the determinant of a $2 \times 2$ matrix come up, and we know that a $2 \times 2$ matrix $A$ is invertible if and only if $\det(A) \neq 0$.
Example 1. Find the determinant of the matrix $A = \begin{bmatrix} 2 & 4 \\ 3 & -1 \end{bmatrix}$.
The determinant is $(2)(-1) - 4(3) = -14$.
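Computations like this are easy to check on a computer. Here is a minimal sketch (Python with NumPy; not part of the original notes) that applies Formula 2 to the matrix of Example 1 and compares with NumPy's built-in determinant:

    import numpy as np

    # The matrix from Example 1.
    A = np.array([[2.0, 4.0],
                  [3.0, -1.0]])

    # Formula 2: a11*a22 - a12*a21.
    by_hand = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]
    print(by_hand)              # -14.0
    print(np.linalg.det(A))     # -14, up to floating-point rounding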
Example 2. Find the determinant of the matrix $A = \begin{bmatrix} 2 & 1 & -1 \\ 3 & -2 & 4 \\ -2 & 1 & 0 \end{bmatrix}$.
The formula above tells us $\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13}$. We know
$$C_{11} = (-1)^{1+1}\det M_{11} = \det\begin{bmatrix} -2 & 4 \\ 1 & 0 \end{bmatrix} = -4,$$
$$C_{12} = (-1)^{1+2}\det M_{12} = -\det\begin{bmatrix} 3 & 4 \\ -2 & 0 \end{bmatrix} = -8,$$
and
$$C_{13} = (-1)^{1+3}\det M_{13} = \det\begin{bmatrix} 3 & -2 \\ -2 & 1 \end{bmatrix} = -1,$$
so
$$\det(A) = 2(-4) + 1(-8) - 1(-1) = -15.$$
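As a sanity check on this arithmetic, here is a short sketch (Python with NumPy; added here for illustration, not part of the original notes) that carries out the same first-row cofactor expansion and compares it with NumPy's determinant:

    import numpy as np

    A = np.array([[2.0, 1.0, -1.0],
                  [3.0, -2.0, 4.0],
                  [-2.0, 1.0, 0.0]])

    def minor(M, i, j):
        # Delete row i and column j of M.
        return np.delete(np.delete(M, i, axis=0), j, axis=1)

    # Expansion along the first row: a11*C11 + a12*C12 + a13*C13.
    det_by_cofactors = sum(
        (-1) ** j * A[0, j] * np.linalg.det(minor(A, 0, j))
        for j in range(3)
    )
    print(det_by_cofactors)     # -15, up to rounding
    print(np.linalg.det(A))     # -15, up to rounding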
Example 3. For a $4 \times 4$ matrix, the same formula tells us $\det(A) = a_{11}C_{11} + a_{12}C_{12} + a_{13}C_{13} + a_{14}C_{14}$.
A lot of the determinants in the above example turned out to be 0, and that's a reflection of the following fact: we can compute determinants by expanding along the first row (what we've been doing in the formula above, using the entries from the first row to compute the determinant), or we could expand along any row or column of our choice. This is summarized in the following formulas.
Formula 4. If $A$ is the $n \times n$ matrix
$$A = \begin{bmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \cdots & a_{nn} \end{bmatrix},$$
then we can compute $\det(A)$ by either of the following expansions:
1. An expansion across row $i$: $\det(A) = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in}$
2. An expansion across column $j$: $\det(A) = a_{1j}C_{1j} + a_{2j}C_{2j} + \cdots + a_{nj}C_{nj}$
Example 3, revisited. Let's try applying part (2) of the formula to the example above, with $j = 1$ (expanding across the first column). The formula tells us $\det(A) = 1 \cdot C_{11} + 0 \cdot C_{21} + 0 \cdot C_{31} + 0 \cdot C_{41}$. So, we just need to find
$$C_{11} = (-1)^{1+1}\det M_{11} = \det\begin{bmatrix} 2 & 1 & -1 \\ 3 & -2 & 4 \\ -2 & 1 & 0 \end{bmatrix} = -15.$$
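The claim that every choice of row or column gives the same answer is easy to test numerically. Below is a small sketch (Python with NumPy; an illustration added here, not part of the original notes) that expands the $3 \times 3$ matrix from Example 2 along each row and each column:

    import numpy as np

    A = np.array([[2.0, 1.0, -1.0],
                  [3.0, -2.0, 4.0],
                  [-2.0, 1.0, 0.0]])
    n = A.shape[0]

    def cofactor(M, i, j):
        # (-1)^(i+j) times the determinant of the minor M_ij.
        minor = np.delete(np.delete(M, i, axis=0), j, axis=1)
        return (-1) ** (i + j) * np.linalg.det(minor)

    # Each row expansion and each column expansion prints about -15.
    for i in range(n):
        print(sum(A[i, j] * cofactor(A, i, j) for j in range(n)))
    for j in range(n):
        print(sum(A[i, j] * cofactor(A, i, j) for i in range(n)))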
We won't prove all of these facts (they're proved in the book, mostly in Sections 5.1 and 5.2), so feel free to check out those explanations. I won't expect you to know how to prove these things, but I will expect you to at least know these facts. The fourth fact is particularly useful. For instance, even though $AB \neq BA$ in general, it tells us
$$\det(AB) = \det(BA).$$
It also tells us that the matrix $AB$ is singular if and only if either $A$ or $B$ is singular! (Do you see why??)
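Here is a quick numerical illustration of both consequences (a sketch using NumPy with two arbitrary $2 \times 2$ matrices chosen only for this illustration):

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    B = np.array([[0.0, 5.0],
                  [-1.0, 2.0]])

    # AB and BA are different matrices...
    print(np.allclose(A @ B, B @ A))                    # False
    # ...but their determinants agree, since both equal det(A)det(B).
    print(np.linalg.det(A @ B), np.linalg.det(B @ A))   # both about -10
    print(np.linalg.det(A) * np.linalg.det(B))          # about -10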
Section 6.1: Eigenvalues and Eigenvectors
$$Au = \lambda u.$$
$$Au - \lambda u = 0.$$
$$(A - \lambda I)u = 0.$$
Rearranging the eigenvalue problem in this way, we're looking for values of $\lambda$ such that the equation $(A - \lambda I)x = 0$ has a nonzero solution $x = u$. However, we know this equation has a nonzero solution if and only if $A - \lambda I$ is a singular matrix, which happens if and only if $\det(A - \lambda I) = 0$.
By rearranging the problem in this way, we see that we can find eigenvalues of $A$ by determining when $\det(A - \lambda I) = 0$. This will turn out to be a polynomial in $\lambda$, called the characteristic polynomial of $A$. Once we know the eigenvalues of $A$, we can find eigenvectors as nonzero solutions to $(A - \lambda I)x = 0$. This also shows that the eigenspace of $\lambda$, which is the set of all solutions to the equation $(A - \lambda I)x = 0$, is a subspace of $\mathbb{R}^n$ because the eigenspace is equal to $\operatorname{null}(A - \lambda I)$.
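In practice, a computer can find the roots of the characteristic polynomial for you. Here is a minimal sketch (Python with NumPy, using a small illustrative matrix chosen here rather than one from the notes):

    import numpy as np

    A = np.array([[2.0, 1.0],
                  [1.0, 2.0]])

    # Coefficients of the characteristic polynomial det(lambda*I - A):
    # here lambda^2 - 4*lambda + 3.
    coeffs = np.poly(A)
    print(coeffs)               # [ 1. -4.  3.]
    print(np.roots(coeffs))     # the eigenvalues 3 and 1 (in some order)

    # np.linalg.eig returns eigenvalues and eigenvectors directly.
    vals, vecs = np.linalg.eig(A)
    print(vals)                 # 3 and 1 (in some order)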
Example 2. Find the eigenvalues and a basis for each eigenspace of the matrix $A = \begin{bmatrix} 3 & 3 \\ 6 & -4 \end{bmatrix}$.
We first want to find eigenvalues of $A$, which we do by finding $\lambda$ such that $\det(A - \lambda I) = 0$. We have
$$A - \lambda I = \begin{bmatrix} 3 & 3 \\ 6 & -4 \end{bmatrix} - \lambda\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 3 - \lambda & 3 \\ 6 & -4 - \lambda \end{bmatrix}.$$
Now we compute
$$\det(A - \lambda I) = (3 - \lambda)(-4 - \lambda) - 3(6) = \lambda^2 + \lambda - 30 = (\lambda - 5)(\lambda + 6).$$
Setting this equal to 0, we find that the eigenvalues are $\lambda_1 = 5$ and $\lambda_2 = -6$.
We can find the corresponding eigenvectors (and eigenspaces) by solving the equation $(A - \lambda I)x = 0$. For $\lambda_1 = 5$,
$$A - 5I = \begin{bmatrix} -2 & 3 \\ 6 & -9 \end{bmatrix}$$
and we solve the equation by reducing
$$[A - 5I \mid 0] = \begin{bmatrix} -2 & 3 & 0 \\ 6 & -9 & 0 \end{bmatrix} \sim \begin{bmatrix} 1 & -3/2 & 0 \\ 0 & 0 & 0 \end{bmatrix}.$$
The solutions to this equation are the eigenspace corresponding to $\lambda_1 = 5$, so the eigenspace is
$$\left\{ s \begin{bmatrix} 3/2 \\ 1 \end{bmatrix} : s \text{ is any real number} \right\}.$$
A basis for this eigenspace would be
$$\left\{ \begin{bmatrix} 3/2 \\ 1 \end{bmatrix} \right\}.$$
Note that although $0$ is in the eigenspace, eigenvectors are required to be nonzero, so any value $s \neq 0$ would give an eigenvector corresponding to $\lambda_1 = 5$.
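As a check on the arithmetic in this example, here is a short NumPy sketch (not part of the original notes):

    import numpy as np

    A = np.array([[3.0, 3.0],
                  [6.0, -4.0]])

    vals, vecs = np.linalg.eig(A)
    print(vals)                         # 5 and -6 (in some order)

    # Check the basis vector found above for the eigenvalue 5.
    u = np.array([1.5, 1.0])
    print(A @ u)                        # [7.5, 5.0], which is 5*u
    print(np.allclose(A @ u, 5 * u))    # True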
This equals zero when $\lambda$ is $0$ or $2$, so the eigenvalues are $\lambda_1 = 0$ and $\lambda_2 = 2$. To find the corresponding eigenspaces, we want to find all solutions to $(A - \lambda I)x = 0$.
In the above example, the characteristic polynomial had the factor $(\lambda - 2)^2$. Because of the square, $\lambda_2 = 2$ appeared as a root twice. The number of times an eigenvalue $\lambda$ appears as a root of the characteristic polynomial is called the multiplicity of $\lambda$. The dimension of the eigenspace associated to $\lambda$ is always less than or equal to the multiplicity of $\lambda$ (in the above example, it was actually equal).
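The inequality can be strict. In the sketch below (Python with NumPy, using an illustrative matrix chosen here rather than one from the notes), the eigenvalue $2$ has multiplicity $2$ but its eigenspace is only one-dimensional:

    import numpy as np

    # Characteristic polynomial (lambda - 2)^2, so lambda = 2 has multiplicity 2.
    A = np.array([[2.0, 1.0],
                  [0.0, 2.0]])

    # The eigenspace of lambda = 2 is null(A - 2I). Here A - 2I has rank 1,
    # so its null space (the eigenspace) has dimension 2 - 1 = 1, which is
    # strictly less than the multiplicity 2.
    print(np.linalg.matrix_rank(A - 2 * np.eye(2)))   # 1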
3. If A is a triangular matrix, the eigenvalues of A are just the entries on the diagonal.
4. When you're finding eigenvectors and reducing $[A - \lambda I \mid 0]$, you should ALWAYS get free variables and rows of zeros because we've chosen $\lambda$ so that $A - \lambda I$ is a singular matrix. If you don't get free variables and rows of zeros, something went wrong in your calculation!
5. Because eigenvalues are all scalars $\lambda$ such that $A - \lambda I$ is a singular matrix, we can say that $A$ itself is a singular matrix if and only if $0$ is an eigenvalue of $A$. But, this means we can add something to the Big Theorem!
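Here is a quick numerical illustration of fact 5 (a NumPy sketch with an arbitrary singular matrix chosen only for this illustration):

    import numpy as np

    # A singular matrix: the second column is twice the first.
    A = np.array([[1.0, 2.0],
                  [3.0, 6.0]])

    print(np.linalg.det(A))        # 0 (up to rounding), so A is singular
    print(np.linalg.eigvals(A))    # eigenvalues 0 and 7, so 0 is an eigenvalue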
The Big Theorem: As usual, let $S = \{a_1, \ldots, a_n\}$ be a set of $n$ vectors in $\mathbb{R}^n$, and let $A = [a_1 \ \cdots \ a_n]$. Let $T : \mathbb{R}^n \to \mathbb{R}^n$ be the linear transformation $T(x) = Ax$. Then the following are equivalent:
1. $S$ spans $\mathbb{R}^n$.
2. $S$ is linearly independent.
4. $T$ is onto.
5. $T$ is one-to-one.
6. $\ker(T) = \{0\}$.
7. $\operatorname{range}(T) = \mathbb{R}^n$.
8. $\operatorname{null}(A) = \{0\}$.
9. $\operatorname{col}(A) = \mathbb{R}^n$.
10. $\operatorname{row}(A) = \mathbb{R}^n$.
11. $\operatorname{nullity}(A) = 0$.
12. $\operatorname{rank}(A) = n$.
14. $\det(A) \neq 0$.
Before we get into the technical details of this section, let's first revisit the idea of a basis and what we'd use a basis for. If $W$ is a subspace of $\mathbb{R}^n$, then remember that a basis is a linearly independent spanning set for $W$. Why would we want this? Remember the main theorem about bases: if $B = \{u_1, \ldots, u_m\}$ is a basis for $W$, then any vector $w$ in $W$ can be written as a linear combination
$$w = c_1 u_1 + \cdots + c_m u_m$$
in exactly one way.
We're used to seeing coordinates in the standard basis. For example, if $\{e_1, e_2\}$ are the standard basis vectors for $\mathbb{R}^2$, then any vector in $\mathbb{R}^2$ can be written as a linear combination of these vectors in a unique way, namely
$$\begin{bmatrix} a \\ b \end{bmatrix} = a\begin{bmatrix} 1 \\ 0 \end{bmatrix} + b\begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
This tells us the coordinates of the vector $\begin{bmatrix} a \\ b \end{bmatrix}$ in the standard basis are just $(a, b)$. This makes sense: we go $a$ units in the direction of $e_1$ (the $x$-direction) and $b$ units in the direction of $e_2$ (the $y$-direction) to get to the point $(a, b)$.
Now, using other bases, we just want to extend this idea to other subspaces. For example, if I have some plane in $\mathbb{R}^3$, can we find some vector that acts like the $x$-axis and some vector that acts like the $y$-axis? Well, if I have a basis for that plane, we will be able to impose a coordinate system!
We are also interested in extending our ideas of coordinate systems beyond the standard basis because some bases are much easier to work with than others. For example, if we have an $n \times n$ matrix $A$ and a basis $B = \{u_1, \ldots, u_n\}$ of $\mathbb{R}^n$ consisting of eigenvectors of $A$, then it is very easy to compute $Ax$ for any vector $x$ in $\mathbb{R}^n$. Why? Well, suppose the eigenvectors $\{u_1, \ldots, u_n\}$ correspond to the eigenvalues $\lambda_1, \ldots, \lambda_n$. Then, we know $Au_i = \lambda_i u_i$. So, if I can find scalars $c_1, \ldots, c_n$ such that
$$x = c_1 u_1 + \cdots + c_n u_n,$$
then
$$Ax = c_1 Au_1 + \cdots + c_n Au_n = c_1 \lambda_1 u_1 + \cdots + c_n \lambda_n u_n.$$
This becomes very useful when we're trying to find higher powers of $A$; it says
$$A^k x = c_1 A^k u_1 + \cdots + c_n A^k u_n = c_1 \lambda_1^k u_1 + \cdots + c_n \lambda_n^k u_n.$$
So, instead of multiplying $x$ by $A$ $k$ times, we just have to multiply the eigenvectors by powers of scalars and add them up. This is much easier to do!
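The sketch below (Python with NumPy; added here as an illustration, using the matrix from the eigenvalue example above and an arbitrary vector $x$) computes $A^k x$ both ways and checks that the answers agree:

    import numpy as np

    A = np.array([[3.0, 3.0],
                  [6.0, -4.0]])
    x = np.array([1.0, 2.0])     # an arbitrary vector for the check
    k = 6

    # Columns of P are eigenvectors of A; lams holds the eigenvalues 5 and -6.
    lams, P = np.linalg.eig(A)

    # Coordinates c of x in the eigenvector basis: x = c1*u1 + c2*u2.
    c = np.linalg.solve(P, x)

    # The shortcut: A^k x = c1*lam1^k*u1 + c2*lam2^k*u2.
    shortcut = P @ (lams ** k * c)
    direct = np.linalg.matrix_power(A, k) @ x
    print(np.allclose(shortcut, direct))   # True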
We can investigate some of these ideas further in the future, but for now, that's a bit of motivation for why we'd want to be able to find a basis and then change to a different basis. Now we'll move on to the computational side of how to change to a new basis. We restrict our attention to changing between bases of $\mathbb{R}^n$.
Notation
Suppose $B = \{u_1, \ldots, u_n\}$ is a basis for $\mathbb{R}^n$ and
$$x = c_1 u_1 + \cdots + c_n u_n.$$
We write $x_B$ for the coordinate vector of $x$ with respect to $B$, and we let $U$ be the matrix $U = [u_1 \ \cdots \ u_n]$, so that $x = U x_B$.
What if we're given a vector in the standard basis, and we want to find $x_B$? Well, because $B$ is a basis, the matrix $U$ is invertible, so we can solve for $x_B$ by multiplying both sides of the equation above by $U^{-1}$ to get the equation
$$x_B = U^{-1} x.$$
Example 3. Let $B$ be as in Example 2. Find $x_B$ if $x = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$.
We do this problem using the formula $x_B = U^{-1} x$. We know $U = \begin{bmatrix} 3 & 5 \\ 1 & 2 \end{bmatrix}$, so $U^{-1} = \begin{bmatrix} 2 & -5 \\ -1 & 3 \end{bmatrix}$. Then,
$$x_B = U^{-1} x = \begin{bmatrix} 2 & -5 \\ -1 & 3 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} -3 \\ 2 \end{bmatrix}_B.$$
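A quick check of this computation (a NumPy sketch, not part of the original notes):

    import numpy as np

    U = np.array([[3.0, 5.0],
                  [1.0, 2.0]])    # columns are the basis vectors in B
    x = np.array([1.0, 1.0])

    # Solving U * x_B = x is the same as computing U^{-1} x.
    x_B = np.linalg.solve(U, x)
    print(x_B)                     # [-3.  2.]
    print(U @ x_B)                 # [1.  1.], back to x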
Now we want to find the change of basis matrix from $B_1$ to $B_2$, where $B_1$ and $B_2$ are any bases for $\mathbb{R}^n$. Let $B_1 = \{u_1, \ldots, u_n\}$ and $B_2 = \{v_1, \ldots, v_n\}$. Let $U$ be the matrix $U = [u_1 \ \cdots \ u_n]$ and $V$ be the matrix $V = [v_1 \ \cdots \ v_n]$. From the formulas above, we know $x = U x_{B_1}$ and $x_{B_2} = V^{-1} x$. Substituting the first equation into the second, we see that
$$x_{B_2} = V^{-1} U x_{B_1} \qquad \text{and} \qquad x_{B_1} = U^{-1} V x_{B_2}.$$
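Here is the same computation as a NumPy sketch (with two small illustrative bases chosen here, not taken from the notes):

    import numpy as np

    # Columns of U form the basis B1; columns of V form the basis B2.
    U = np.array([[3.0, 5.0],
                  [1.0, 2.0]])
    V = np.array([[1.0, 1.0],
                  [1.0, -1.0]])

    x_B1 = np.array([2.0, -1.0])            # coordinates of some x in B1
    x = U @ x_B1                            # the vector x itself
    x_B2 = np.linalg.solve(V, U @ x_B1)     # x_B2 = V^{-1} U x_B1
    print(x_B2)
    print(np.allclose(V @ x_B2, x))         # True: both describe the same x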
You have no homework on this section, but if you continue to take math (especially Math 309), diagonalization is a concept that will pop up. It's a nice application of the ideas in the Change of Basis section.
Suppose the basis $B = \{u_1, \ldots, u_n\}$ consists of eigenvectors of $A$, with $Au_i = \lambda_i u_i$. Writing a vector $x$ in this basis, we have
$$x = c_1 u_1 + \cdots + c_n u_n$$
or
$$x_B = \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix}_B.$$
Because these are eigenvectors, if we multiply the first equation by $A$, we get
$$Ax = c_1 \lambda_1 u_1 + \cdots + c_n \lambda_n u_n$$
or
$$(Ax)_B = \begin{bmatrix} \lambda_1 c_1 \\ \vdots \\ \lambda_n c_n \end{bmatrix}_B.$$
But, if $D$ is the matrix
$$D = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda_n \end{bmatrix},$$
we have just derived the formula
$$(Ax)_B = \begin{bmatrix} \lambda_1 c_1 \\ \vdots \\ \lambda_n c_n \end{bmatrix} = D \begin{bmatrix} c_1 \\ \vdots \\ c_n \end{bmatrix} = D x_B.$$
Now let $P = [u_1 \ \cdots \ u_n]$ be the matrix whose columns are the eigenvectors. The change of basis formulas above tell us that
$$x_B = P^{-1} x,$$
and that any vector equals $P$ times its coordinate vector in the basis $B$, so
$$Ax = P(Ax)_B = P(D x_B) = P D P^{-1} x.$$
This equation holds for any vector $x$, so it tells us that
$$A = P D P^{-1}.$$
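To see this formula in action, here is a closing sketch (Python with NumPy; an illustration added here) that diagonalizes the matrix from the eigenvalue example above and checks that $A = PDP^{-1}$:

    import numpy as np

    A = np.array([[3.0, 3.0],
                  [6.0, -4.0]])

    # Columns of P are eigenvectors; D is diagonal with the eigenvalues 5 and -6.
    lams, P = np.linalg.eig(A)
    D = np.diag(lams)

    print(np.allclose(A, P @ D @ np.linalg.inv(P)))   # True

    # Powers of A are now easy: A^k = P D^k P^{-1}.
    k = 5
    print(np.allclose(np.linalg.matrix_power(A, k),
                      P @ np.diag(lams ** k) @ np.linalg.inv(P)))   # True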