
Section 5.1

Although a transformation x |-> Ax may move vectors in a variety of directions, it often happens that
there are special vectors on which the action of A is quite simple.

An eigenvector of an nxn matrix A is a nonzero vector x such that Ax = λx for some scalar λ. A
scalar λ is called an eigenvalue of A if there is a nontrivial solution x of Ax = λx; such an x is called
an eigenvector corresponding to λ.

Note that an eigenvector must be nonzero, by definition, but an eigenvalue may be zero. The case
when the number 0 is an eigenvalue is discussed after Example 5 (page 306).

It is easy to determine if a given vector is an eigenvector of a matrix. You are checking whether x is
a nonzero vector such that Ax = λx for some scalar λ. To do this, compute Ax and see if it is a
scalar multiple of x.
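This check is easy to script. Below is a minimal sketch in Python with NumPy; the helper name is_eigenvector is mine, not from the text:

```python
import numpy as np

def is_eigenvector(A, x, tol=1e-10):
    """Check whether nonzero x satisfies Ax = lambda * x for some scalar lambda."""
    x = np.asarray(x, dtype=float)
    if np.allclose(x, 0):
        return False  # eigenvectors must be nonzero by definition
    Ax = A @ x
    i = np.argmax(np.abs(x))      # pick the largest component of x
    lam = Ax[i] / x[i]            # candidate eigenvalue
    return np.allclose(Ax, lam * x, atol=tol)

A = np.array([[1., 6.], [5., 2.]])
print(is_eigenvector(A, [6, -5]))  # True:  A[6,-5]' = [-24,20]' = -4 * [6,-5]'
print(is_eigenvector(A, [3, -2]))  # False: A[3,-2]' = [-9,11]' is not a multiple of [3,-2]'
```

Here [6,-5]' is an eigenvector of this A for λ = -4; this same matrix appears in Example 3 below.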

It is also easy to decide if a specified scalar λ is an eigenvalue. All that has to be done is to check
whether the equation Ax = λx, that is, (A - λI)x = 0, has a nontrivial solution.

Example 3: Show that 7 is an eigenvalue of A, and find the corresponding eigenvectors.


A = [ 1  6 ]
    [ 5  2 ]

Solution: The scalar 7 is an eigenvalue of A if and only if the equation

Ax=7x (1)

has a nontrivial solution. But (1) is equivalent to Ax-7x = 0 or

(A – 7I)x = 0 (2)

To solve the homogeneous equation, form the matrix


A - 7I = [ 1  6 ] - [ 7  0 ] = [ -6   6 ]
         [ 5  2 ]   [ 0  7 ]   [  5  -5 ]

The columns of A - 7I are obviously linearly dependent, so (2) has nontrivial solutions. Thus 7 is
an eigenvalue of A. To find the corresponding eigenvectors, use row operations:
[ A-7I  0 ] = [ -6   6  0 ] ~ [ 1  -1  0 ]
              [  5  -5  0 ]   [ 0   0  0 ]

The general solution has the form x2 [1 1]'. Each vector of this form with x2 =/= 0 (since
eigenvectors must be nonzero) is an eigenvector corresponding to λ = 7.
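A quick numerical check of this result (a sketch with NumPy):

```python
import numpy as np

A = np.array([[1., 6.], [5., 2.]])
v = np.array([1., 1.])   # from the general solution x2 * [1 1]'

print(A @ v)             # [7. 7.], i.e. 7 * v, confirming lambda = 7
print(A @ (3 * v))       # [21. 21.]: any nonzero multiple of v is also an eigenvector
```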

Warning: Although row reduction was used in Example 3 (just above) to find
eigenvectors, it cannot be used to find eigenvalues. An echelon form of a matrix A usually does not
display the eigenvalues of A.

The equivalence of equations (1) and (2) obviously holds for any λ in place of λ = 7. Thus λ is an
eigenvalue of A if and only if the equation

(A - λI)x = 0 (3)

has a nontrivial solution. The set of all solutions of (3) is just the null space of the matrix A - λI.
So this set is a subspace of Rn and is called the eigenspace of A corresponding to λ.

An eigenspace therefore always corresponds to a particular eigenvalue of the matrix.

Example 3 shows that for the A in that example, the eigenspace corresponding to λ = 7 consists of
all multiples of (1,1), which is the line through (1,1) and the origin. From Example 2 (page 303),
one can check that the eigenspace corresponding to λ = -4 is the line through (6,-5) and the origin.
These eigenspaces are shown in the text, and they are easy to visualize, along with the geometric
action of the transformation x |--> Ax on each eigenspace. Look at Figure 2 on page 305.

Example 4: Given an nxn matrix A with an eigenvalue of 2, find a basis for the
corresponding eigenspace.

Solution: Form A-2I and row reduce the augmented matrix for (A – 2I)x=0. At this point, if we
realize that the equation (A-2I)x=0 has free variables, we can then be confident that 2 is indeed an
eigenvalue of A. Once we have the general solution in parametric vector form, we can then use the
vectors which are multiplying the free variables in that solution to form a basis for the eigenspace.
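This procedure can be sketched numerically by computing a null-space basis from the SVD. The 3x3 matrix below is an assumed example with eigenvalue 2, not necessarily the one in the text:

```python
import numpy as np

def null_space_basis(M, tol=1e-10):
    """Orthonormal basis for Nul M: rows of V^T beyond the rank span the null space."""
    _, s, vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return vt[rank:].T

# Assumed sample matrix with eigenvalue 2
A = np.array([[4., -1., 6.],
              [2.,  1., 6.],
              [2., -1., 8.]])

basis = null_space_basis(A - 2 * np.eye(3))
print(basis.shape[1])                       # 2: the eigenspace is two-dimensional
print(np.allclose(A @ basis, 2 * basis))    # True: each basis column is an eigenvector
```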

The following theorem describes one of the few special cases in which eigenvalues can be found
precisely. Calculation of eigenvalues will also be discussed in Section 5.2.

Theorem 1: The eigenvalues of a triangular matrix are the entries on its main diagonal.

Example 5: Let
A = [ 3  6  -8 ]         B = [  4  0  0 ]
    [ 0  0   6 ]   and       [ -2  1  0 ]
    [ 0  0   2 ]             [  5  3  4 ]

The eigenvalues of A are 3, 0, and 2. The eigenvalues of B are 4 and 1.
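Theorem 1 is easy to confirm numerically for these two matrices (a NumPy sketch; the computed values may differ from the exact ones by rounding):

```python
import numpy as np

A = np.array([[3., 6., -8.],
              [0., 0.,  6.],
              [0., 0.,  2.]])
B = np.array([[ 4., 0., 0.],
              [-2., 1., 0.],
              [ 5., 3., 4.]])

# For a triangular matrix, the eigenvalues are the main diagonal entries
print(np.sort(np.linalg.eigvals(A).real))   # 0, 2, 3 (up to rounding)
print(np.sort(np.linalg.eigvals(B).real))   # 1, 4, 4 -- the eigenvalue 4 appears twice
```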

Question: What does it mean for a matrix A to have an eigenvalue of 0, such as in Example 5
(above)?

Answer: This happens if and only if the equation

Ax = 0x (4)

has a nontrivial solution. But (4) is equivalent to Ax = 0, which has a nontrivial solution if and only
if A is not invertible. Thus 0 is an eigenvalue of A if and only if A is not invertible. This fact will be
added to the Invertible Matrix Theorem in Section 5.2.
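A quick check of this equivalence on the matrix A from Example 5, which has 0 as an eigenvalue:

```python
import numpy as np

A = np.array([[3., 6., -8.],
              [0., 0.,  6.],
              [0., 0.,  2.]])

# 0 is an eigenvalue of A, so A is not invertible:
print(np.isclose(np.linalg.det(A), 0.0))   # True: det A = 3 * 0 * 2 = 0
print(np.linalg.matrix_rank(A))            # 2, less than 3, so Ax = 0 has nontrivial solutions
```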

The following important theorem will be needed later.

Theorem 2: If v1, . . . , vr are eigenvectors that correspond to distinct eigenvalues λ1, . . . , λr of an
nxn matrix A, then the set {v1, . . . , vr} is linearly independent.

Look at the PROOF of Theorem 2 on page 307.


Eigenvectors and Difference Equations

We conclude this section by showing how to construct solutions of the first-order difference
equation discussed in the chapter introductory example:

xk+1 = Axk (k = 0, 1, 2, 3, . . .) (8)

If A is an nxn matrix, then (8) is a recursive description of a sequence {xk} in Rn. A solution of (8)
is an explicit description of {xk} whose formula for each xk does not depend directly on A or on
preceding terms in the sequence other than the initial term x0.

The simplest way to build a solution of (8) is to take an eigenvector x0 and its corresponding
eigenvalue λ and let

xk = λ^k x0 (k = 1, 2, 3, . . .) (9)

This sequence works, because

Axk = A(λ^k x0) = λ^k (Ax0) = λ^k (λx0) = λ^(k+1) x0 = xk+1

Linear combinations of solutions of the form (9) are solutions, too! See Exercise 33.
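A numerical sketch of (9), iterating the recursion and comparing with the explicit formula, using the eigenpair from Example 3:

```python
import numpy as np

A = np.array([[1., 6.], [5., 2.]])
lam, x0 = 7.0, np.array([1., 1.])   # eigenpair of A from Example 3

# Recursive description (8): x_{k+1} = A x_k
x = x0.copy()
for _ in range(4):
    x = A @ x

# Explicit solution (9): x_k = lam^k x_0
print(np.allclose(x, lam**4 * x0))  # True: both give 7^4 * [1 1]'
```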

Section 5.2

Useful information about the eigenvalues of a square matrix A is encoded in a special scalar
equation called the characteristic equation of A. A simple example will lead to the general case.

Example 1: Find the eigenvalues of


A = [ 2   3 ]
    [ 3  -6 ]

Solution: We must find all scalars λ such that the matrix equation

(A - λI)x = 0

has a nontrivial solution. By the Invertible Matrix Theorem in Section 2.3, this problem is
equivalent to finding all λ such that the matrix A - λI is not invertible, where

A - λI = [ 2   3 ] - [ λ  0 ] = [ 2-λ    3   ]
         [ 3  -6 ]   [ 0  λ ]   [  3   -6-λ  ]

By Theorem 4 in Section 2.2, this matrix fails to be invertible precisely when its determinant is
zero. So the eigenvalues of A are the solutions of the equation

det(A - λI) = det [ 2-λ    3   ] = 0
                  [  3   -6-λ  ]

Recall that

det [ a  b ] = ad - bc
    [ c  d ]

So

det(A - λI) = (2 - λ)(-6 - λ) - (3)(3)
            = -12 + 6λ - 2λ + λ^2 - 9
            = λ^2 + 4λ - 21

Setting λ^2 + 4λ - 21 = 0, we have (λ - 3)(λ + 7) = 0; so the eigenvalues are 3 and -7. This same
idea works for larger nxn matrices.
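The same computation in NumPy: the roots of the characteristic polynomial agree with the eigenvalues computed directly:

```python
import numpy as np

# Characteristic polynomial lambda^2 + 4*lambda - 21, coefficients in descending powers
roots = np.roots([1, 4, -21])
print(np.sort(roots.real))                  # approximately [-7, 3]

A = np.array([[2., 3.], [3., -6.]])
print(np.sort(np.linalg.eigvals(A).real))   # approximately [-7, 3] as well
```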

Determinants

Suppose a square matrix A has been reduced to an echelon form U by using row replacements and
row interchanges. (This is always possible. See the row reduction algorithm of Section 1.2.) If there
are r interchanges, then Theorem 3 shows that

det A = (-1)^r det U

Since U is in echelon form, it is triangular, and so det U is the product of the diagonal entries
u11, . . . , unn. If A is invertible, the entries uii are all pivots (because A ~ In and the uii have not
been scaled to 1's). Otherwise, at least unn is zero, and the product u11 · · · unn is zero. Look at
Figure 1 on page 194. Thus

        { (-1)^r * (product of pivots in U)   when A is invertible
det A = {                                                              (1)
        { 0                                   when A is not invertible

It is interesting to note that although the echelon form U described above is not unique (because
it is not completely row reduced), and the pivots are not unique, the product of the pivots is unique,
except for a possible minus sign.
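Formula (1) can be turned into a small algorithm. Below is a sketch of computing det A by row reduction with partial pivoting, tracking the r interchanges (the function name is mine, not the book's):

```python
import numpy as np

def det_via_row_reduction(A):
    """det A = (-1)^r * (product of pivots), tracking r row interchanges."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for j in range(n):
        p = j + np.argmax(np.abs(U[j:, j]))     # partial pivoting
        if np.isclose(U[p, j], 0.0):
            return 0.0                          # no pivot in this column: A is not invertible
        if p != j:
            U[[j, p]] = U[[p, j]]               # row interchange flips the sign
            sign = -sign
        U[j+1:] -= np.outer(U[j+1:, j] / U[j, j], U[j])  # row replacements
    return sign * np.prod(np.diag(U))

A = np.array([[2., 3.], [3., -6.]])
print(det_via_row_reduction(A))   # approximately -21
print(np.linalg.det(A))           # approximately -21 as well
```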

Formula (1) not only gives a concrete interpretation of det A but also proves the main theorem of
this section:

Theorem 4: A square matrix A is invertible if and only if det A =/= 0.

Theorem 4 adds the statement "det A =/= 0" to the Invertible Matrix Theorem. A useful corollary is
that det A = 0 when the columns of A are linearly dependent. (Rows of A are columns of A^T, and
linearly dependent columns of A^T make A^T singular. When A^T is singular, so is A, by the
Invertible Matrix Theorem.) In practice, linear dependence is obvious only when two columns or
two rows are the same or a column or a row is zero.

Theorem: The Invertible Matrix Theorem (continued)

Let A be an nxn matrix. Then A is invertible if and only if

s. The number 0 is not an eigenvalue of A.


t. The determinant of A is not zero.

Theorem 3: Properties of Determinants


Let A and B be nxn matrices.

a. A is invertible if and only if det A =/= 0.


b. det AB = (det A)(det B)
c. det A^T = det A
d. If A is triangular, then det A is the product of the entries on the main diagonal of A.
e. A row replacement operation on A does not change the determinant. A row interchange changes
the sign of the determinant. A row scaling also scales the determinant by the same scalar factor.
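These properties are easy to sanity-check numerically; a quick NumPy sketch on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# b. det AB = (det A)(det B)
print(np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B)))  # True
# c. det A^T = det A
print(np.isclose(np.linalg.det(A.T), np.linalg.det(A)))                       # True
# e. a row interchange changes the sign of the determinant
print(np.isclose(np.linalg.det(A[[1, 0, 2, 3]]), -np.linalg.det(A)))          # True
```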

The Characteristic Equation

By virtue of Theorem 3a, we can use a determinant to decide when a matrix A - λI is not invertible.
The scalar equation det(A - λI) = 0 is called the characteristic equation of A, and the argument
in Example 1 (above) justifies the following fact.

A scalar λ is an eigenvalue of an nxn matrix A if and only if λ satisfies the characteristic equation

det(A - λI) = 0

In Examples 1 and 3 (pages 313 to 314), det(A - λI) is a polynomial in λ. It can be shown that if A
is an nxn matrix, then det(A - λI) is a polynomial of degree n called the characteristic
polynomial of A.

The eigenvalue 5 in Example 3 is said to have multiplicity 2 because (λ - 5) occurs two times as a
factor of the characteristic polynomial. In general, the (algebraic) multiplicity of an eigenvalue λ
is its multiplicity as a root of the characteristic equation.

Similarity

The next theorem illustrates one use of the characteristic polynomial, and it provides the foundation
for several iterative methods that approximate eigenvalues. If A and B are nxn matrices, then A is
similar to B if there is an invertible matrix P such that P^-1 A P = B, or equivalently, A = P B P^-1.
Writing Q for P^-1, we have Q^-1 B Q = A. So B is also similar to A, and we simply say that A and
B are similar. Changing A into P^-1 A P is called a similarity transformation.

Theorem 4: If nxn matrices A and B are similar, then they have the same characteristic polynomial
and hence the same eigenvalues (with the same multiplicities).
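A numerical illustration of Theorem 4, building B as P^-1 A P for an arbitrary invertible P:

```python
import numpy as np

A = np.array([[2., 3.], [3., -6.]])
P = np.array([[1., 2.], [0., 1.]])      # any invertible P will do
B = np.linalg.inv(P) @ A @ P            # B is similar to A

# Similar matrices share eigenvalues (with multiplicities)
print(np.sort(np.linalg.eigvals(A).real))   # approximately [-7, 3]
print(np.sort(np.linalg.eigvals(B).real))   # the same values
```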

Warning: Similarity is not the same as row equivalence. (If A is row equivalent to B, then B = EA
for some invertible matrix E.) Row operations on a matrix usually change its eigenvalues.

Application to Dynamical Systems

Eigenvalues and eigenvectors hold the key to the discrete evolution of a dynamical system, as
mentioned in the chapter introduction.

This relates to the material at the end of Section 5.1, but here the text gives a worked example.

Section 5.3

In many cases, the eigenvalue-eigenvector information contained within a matrix A can be
displayed in a useful factorization of the form A = PDP^-1. In this section, the factorization enables
us to compute A^k quickly for large values of k, a fundamental idea in several applications of linear
algebra. Later, in Sections 5.6 and 5.7, the factorization will be used to analyze (and decouple)
dynamical systems.

The D in the factorization stands for diagonal. Powers of such a D are trivial to compute.

In general (from Example 1 in Section 5.3 of the textbook),


D^k = [ 5^k   0  ]   for k >= 1.
      [  0   3^k ]

A square matrix A is said to be diagonalizable if A is similar to a diagonal matrix, that is,
A = PDP^-1 for some invertible matrix P and some diagonal matrix D. The next theorem gives a
characterization of diagonalizable matrices and tells how to construct a suitable factorization.

Theorem 5: The Diagonalization Theorem

An nxn matrix A is diagonalizable if and only if A has n linearly independent eigenvectors.

In fact, A = PDP^-1, with D a diagonal matrix, if and only if the columns of P are n linearly
independent eigenvectors of A. In this case, the diagonal entries of D are eigenvalues of A that
correspond, respectively, to the eigenvectors in P.

In other words, A is diagonalizable if and only if there are enough eigenvectors to form a basis of
Rn. We call such a basis an eigenvector basis.

Diagonalizing Matrices

Step 1: Find the eigenvalues of A.

Step 2: Find n linearly independent eigenvectors of the nxn matrix A.

Step 3: Construct P from the vectors in Step 2.

Step 4: Construct D from the corresponding eigenvalues.

(To avoid computing P^-1, simply verify that AP = PD; however, be sure that P is invertible!)
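The four steps can be sketched with NumPy, which finds the eigenvalues and eigenvectors together; the payoff is cheap powers A^k = P D^k P^-1:

```python
import numpy as np

A = np.array([[1., 6.], [5., 2.]])

# Steps 1-2: eigenvalues and linearly independent eigenvectors
eigvals, eigvecs = np.linalg.eig(A)

# Steps 3-4: P from the eigenvectors, D from the corresponding eigenvalues
P = eigvecs
D = np.diag(eigvals)

print(np.allclose(A @ P, P @ D))   # True: AP = PD, and P is invertible here
k = 5
Ak = P @ np.diag(eigvals**k) @ np.linalg.inv(P)
print(np.allclose(Ak, np.linalg.matrix_power(A, k)))   # True
```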

Matrices Whose Eigenvalues Are Not Distinct

Theorem 7:

Not all matrices are diagonalizable!
