
Homework 9 Solutions

Elizabeth Pannell
Chris Jackson
Brian Longo
Jonathan Siegel
June 9, 2015
Exercise 12.1.21: Prove that a finitely generated module over a P.I.D. is projective if and only
if it is free.
Solution:
Let M be a finitely generated module over a P.I.D. R.
Suppose M is free. Then M is trivially the direct summand of a free module, and therefore projective.
Suppose M is projective. Then M ⊕ K ≅ F(S) for some R-module K and free R-module F(S). Since M is finitely generated over a P.I.D., by Theorem 5 (part 1), M ≅ R^r ⊕ R/(a_1) ⊕ R/(a_2) ⊕ ⋯ ⊕ R/(a_m). Therefore, we have F(S) ≅ R^r ⊕ R/(a_1) ⊕ R/(a_2) ⊕ ⋯ ⊕ R/(a_m) ⊕ K.
Now, F(S) is a free R-module, so by Theorem 5 (part 2), F(S) is torsion free. Therefore, no R/(a_i) term may appear, since such a summand is annihilated by the nonzero element a_i. Therefore M ≅ R^r is free.
Q.E.D.
Exercise 12.1.22: Let R be a P.I.D. that is not a field. Prove that no finitely generated R-module
is injective. [Use Exercise 4, Section 10.5 to consider torsion and free modules separately.]
Solution: We will use the following result from Exercise 4, Section 10.5: if Q_1 and Q_2 are R-modules, then Q_1 ⊕ Q_2 is an injective R-module if and only if both Q_1 and Q_2 are injective.
Let M be any finitely generated R-module. Then, since R is a P.I.D., M ≅ R^r ⊕ R/(a_1) ⊕ R/(a_2) ⊕ ⋯ ⊕ R/(a_m), where a_1, a_2, …, a_m are nonzero non-units.
Therefore, by Exercise 4 of Section 10.5, it suffices to prove that R and the R/(a_i), 1 ≤ i ≤ m, are not injective.
We first show that R is not an injective R-module. Suppose R is injective. Then, by Proposition 36 of Section 10.5, given any s ∈ R with s ≠ 0, we have sR = R. So there exists t ∈ R such that st = 1. This implies that R is a field, contrary to our assumption. Therefore, R is not an injective R-module.
Now, let a ∈ R with a ≠ 0 and a not a unit. Then a·(R/(a)) = 0 ≠ R/(a), so R/(a) is not divisible by a, and by Proposition 36 of Section 10.5, R/(a) is not injective. Therefore, since a_1, a_2, …, a_m are nonzero non-units, R/(a_1), R/(a_2), …, R/(a_m) are not injective.
Therefore, M is not an injective R-module.
Exercise 12.2.12: Find all similarity classes of 3 × 3 matrices A over F_2 satisfying A^6 = 1. Do the same for 4 × 4 matrices B satisfying B^20 = 1.
Solution: Since two similar matrices must have the same rational canonical form, we count the number of distinct matrices in rational canonical form that satisfy the given constraints.
If A ∈ Mat_3(F_2) satisfies A^6 = 1, then A^6 − 1 = 0. Therefore, the polynomial x^6 − 1 annihilates A, so min_A(x) | x^6 − 1, where min_A(x) is the minimal polynomial of A. The complete factorization of x^6 − 1 (= x^6 + 1) in F_2[x] is easy to find using the freshman's binomial theorem:

    x^6 + 1 = (x^3)^2 + 1^2 = (x^3 + 1)^2 = [(x + 1)(x^2 + x + 1)]^2 = (x + 1)^2 (x^2 + x + 1)^2

By the Cayley–Hamilton Theorem, min_A(x) | char_A(x), so deg(min_A(x)) ≤ 3 since char_A(x) has degree 3. The only possible minimal polynomials for such a matrix are therefore:
1. x + 1
2. (x + 1)^2
3. x^2 + x + 1
4. (x + 1)(x^2 + x + 1)
Recall that the invariant factors a_1(x), …, a_n(x) of A satisfy a_1(x) | a_2(x) | ⋯ | a_n(x) = min_A(x), and that their product is char_A(x), which has degree 3. Therefore, the list of possible minimal polynomials of A gives rise to the following list of possible invariant factors:
(a) x + 1, x + 1, x + 1 (from (1))
(b) x + 1, (x + 1)^2 (from (2))
(c) (x + 1)(x^2 + x + 1) = x^3 + 1 (from (4))
Notice that no set of invariant factors arises from min_A(x) = x^2 + x + 1: the remaining invariant factor would have to be a degree-1 divisor of the irreducible polynomial x^2 + x + 1, and there is none. These three sets of invariant factors correspond to the following three matrices in rational canonical form:

    [ 1 0 0 ]     [ 1 0 0 ]     [ 0 0 1 ]
    [ 0 1 0 ]  ,  [ 0 0 1 ]  ,  [ 1 0 0 ]
    [ 0 0 1 ]     [ 0 1 0 ]     [ 0 1 0 ]
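As a quick sanity check (not part of the original solution), SymPy confirms the factorization of x^6 + 1 over F_2 and that each of the three matrices above satisfies A^6 = 1:

```python
# Added check: factor x^6 + 1 over F_2 and verify A^6 = I (entries reduced
# mod 2) for the three rational canonical forms listed above.
from sympy import symbols, factor, Matrix, eye

x = symbols('x')
print(factor(x**6 + 1, modulus=2))   # (x + 1)**2*(x**2 + x + 1)**2

rcfs = [
    Matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),   # invariant factors x+1, x+1, x+1
    Matrix([[1, 0, 0], [0, 0, 1], [0, 1, 0]]),   # invariant factors x+1, (x+1)^2
    Matrix([[0, 0, 1], [1, 0, 0], [0, 1, 0]]),   # invariant factor x^3 + 1
]
print([(A**6).applyfunc(lambda e: e % 2) == eye(3) for A in rcfs])   # [True, True, True]
```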
Now if B ∈ Mat_4(F_2) is a matrix satisfying B^20 = 1, then min_B(x) | x^20 − 1. Again, by the Cayley–Hamilton Theorem, deg(min_B(x)) ≤ 4. The complete factorization of x^20 − 1 = x^20 + 1 in F_2[x] is

    x^20 + 1 = (x^5)^4 + 1^4 = (x^5 + 1)^4 = [(x + 1)(x^4 + x^3 + x^2 + x + 1)]^4 = (x + 1)^4 (x^4 + x^3 + x^2 + x + 1)^4
From this, we obtain the following list of possible candidates for the minimal polynomial of such
a matrix:
1. x + 1
2. (x + 1)^2
3. (x + 1)^3
4. (x + 1)^4
5. x^4 + x^3 + x^2 + x + 1
From these, we get a list of permissible invariant factors:
(a) x + 1, x + 1, x + 1, x + 1
(b) (x + 1)^2, (x + 1)^2
(c) x + 1, x + 1, (x + 1)^2
(d) x + 1, (x + 1)^3
(e) (x + 1)^4
(f) x^4 + x^3 + x^2 + x + 1
Finally, we have the associated matrices in rational canonical form (listed in the same order (a)–(f) as the invariant factor lists above):

    [ 1 0 0 0 ]     [ 0 1 0 0 ]     [ 1 0 0 0 ]
    [ 0 1 0 0 ]     [ 1 0 0 0 ]     [ 0 1 0 0 ]
    [ 0 0 1 0 ]  ,  [ 0 0 0 1 ]  ,  [ 0 0 0 1 ]  ,
    [ 0 0 0 1 ]     [ 0 0 1 0 ]     [ 0 0 1 0 ]

    [ 1 0 0 0 ]     [ 0 0 0 1 ]     [ 0 0 0 1 ]
    [ 0 0 0 1 ]     [ 1 0 0 0 ]     [ 1 0 0 1 ]
    [ 0 1 0 1 ]  ,  [ 0 1 0 0 ]  ,  [ 0 1 0 1 ]
    [ 0 0 1 1 ]     [ 0 0 1 0 ]     [ 0 0 1 1 ]
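The 4 × 4 case can be checked the same way (again an added verification, not part of the original solution); here we factor x^20 + 1 over F_2 and test the last two forms above, the companion matrices of (x + 1)^4 = x^4 + 1 and of x^4 + x^3 + x^2 + x + 1:

```python
# Added check for the 4 x 4 case: factor x^20 + 1 over F_2 and verify B^20 = I
# mod 2 for the companion matrices of x^4 + 1 and x^4 + x^3 + x^2 + x + 1.
from sympy import symbols, factor, Matrix, eye

x = symbols('x')
print(factor(x**20 + 1, modulus=2))   # (x + 1)**4*(x**4 + x**3 + x**2 + x + 1)**4

C_e = Matrix([[0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]])   # (x+1)^4
C_f = Matrix([[0, 0, 0, 1], [1, 0, 0, 1], [0, 1, 0, 1], [0, 0, 1, 1]])   # x^4+x^3+x^2+x+1
print([(B**20).applyfunc(lambda e: e % 2) == eye(4) for B in (C_e, C_f)])   # [True, True]
```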

Exercise 12.2.13: Prove that the number of similarity classes of 3 × 3 matrices over Q with a given characteristic polynomial in Q[x] is the same as the number of similarity classes over any extension field of Q. Give an example to show that this is not true in general for 4 × 4 matrices.
Solution: In order to count the similarity classes of matrices with a given characteristic polynomial c(x), we count the possible matrices in rational canonical form that have c(x) as their characteristic polynomial. To do this, we look at the possible ways c(x) can factor.
If the roots of c(x) in some splitting field of c(x) are distinct, then the minimal polynomial
m(x) of any matrix A that has c(x) as its characteristic polynomial must equal c(x) since m(x) and
c(x) have the same roots. In this case there is only one possible rational canonical form for such a
matrix, namely the companion matrix of c(x).
If c(x) has a double root a, then c(x) must be divisible by (m_a(x))^2, the square of the minimal polynomial of a over Q. Since c(x) has degree 3, the only possibility is that m_a(x) = (x − a), and c(x) = (x − a)^2(x − b) (possibly a = b) with a ∈ Q. By looking at the x^2 coefficient of c(x), we see that 2a + b ∈ Q, so b ∈ Q as well. Now since a, b ∈ Q, every possible rational canonical form already occurs over Q. Therefore the number of similarity classes over Q is the same as over any extension field of Q.
As a counterexample in the 4 × 4 case, consider the characteristic polynomial c(x) = x^4 − 4x^2 + 4. Over Q, c(x) factors as c(x) = (x^2 − 2)^2. The possible minimal polynomials are then (x^2 − 2) and (x^2 − 2)^2, and the list of possible invariant factors is {x^2 − 2, x^2 − 2} and {(x^2 − 2)^2}.
Over Q(√2), c(x) factors completely as c(x) = (x + √2)^2 (x − √2)^2. The list of possible invariant factors for c(x) over Q(√2) includes the possible factors over Q, but we can now include other possibilities for the minimal polynomial. For example, one new possible minimal polynomial would be (x − √2)(x + √2)^2, which would give a new possibility for the invariant factors: {(x − √2), (x − √2)(x + √2)^2}. So in this case, the list of similarity classes over Q(√2) is larger than the list of similarity classes over Q.
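This can be seen concretely with SymPy (an added illustration, not part of the original solution):

```python
# Added illustration: c(x) = x^4 - 4x^2 + 4 over Q versus over Q(sqrt(2)).
from sympy import symbols, factor, sqrt

x = symbols('x')
c = x**4 - 4*x**2 + 4
print(factor(c))                      # (x**2 - 2)**2
print(factor(c, extension=sqrt(2)))   # (x - sqrt(2))**2*(x + sqrt(2))**2
```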

Exercise 12.2.20: Let ℓ be a prime and let Φ_ℓ(x) = x^{ℓ−1} + x^{ℓ−2} + ⋯ + x + 1 ∈ Z[x] be the ℓ-th cyclotomic polynomial, which is irreducible over Q.
(a) Show that if p = ℓ, then Φ_ℓ(x) is divisible by x − 1 in F_ℓ[x].
(b) Suppose p ≠ ℓ and let f denote the order of p in F_ℓ^×, i.e., f is the smallest power of p with p^f ≡ 1 mod ℓ. Show that m = f is the first value of m for which the group GL_m(F_p) contains an element A of order ℓ.
(c) Show that Φ_ℓ(x) is not divisible by any polynomial of degree smaller than f in F_p[x]. Let m_A(x) ∈ F_p[x] denote the minimal polynomial of the matrix A in (b) and conclude that m_A(x) is irreducible of degree f and divides Φ_ℓ(x) in F_p[x].
(d) Prove that Φ_ℓ(x) is irreducible modulo p if and only if ℓ − 1 is the smallest power of p which is congruent to 1 modulo ℓ.
Solution:
(a) Here we can use the freshman's binomial theorem to rewrite Φ_ℓ(x) in F_ℓ[x] as

    Φ_ℓ(x) = (x^ℓ − 1)/(x − 1) = (x − 1)^ℓ/(x − 1) = (x − 1)^{ℓ−1}

In this form, it is clear that x − 1 | Φ_ℓ(x).


(b) Recall that

    |GL_m(F_p)| = ∏_{k=0}^{m−1} (p^m − p^k) = p^{m(m−1)/2} ∏_{k=0}^{m−1} (p^{m−k} − 1).

By Lagrange's theorem, if GL_m(F_p) contains an element A of order ℓ, then ℓ divides |GL_m(F_p)|. Since ℓ is a prime different from p, ℓ must divide one of the factors p^{m−k} − 1, i.e., p^{m−k} ≡ 1 mod ℓ for some k ∈ {0, 1, …, m − 1}. Since f is the order of p mod ℓ, this forces m − k ≥ f, hence m ≥ f. Conversely, for m = f we have ℓ | p^f − 1, so ℓ | |GL_f(F_p)|, and by Cauchy's theorem GL_f(F_p) contains an element of order ℓ. Therefore m = f is the first value of m for which GL_m(F_p) contains an element A of order ℓ.
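A small numerical illustration of this count (an addition to the solution, with p = 2 and ℓ = 7 chosen as sample values):

```python
# Added check: for p = 2 and l = 7 (sample values), the smallest m with
# l | |GL_m(F_p)| equals f, the multiplicative order of p modulo l.
from math import prod

def gl_order(m, p):
    # |GL_m(F_p)| = prod_{k=0}^{m-1} (p^m - p^k)
    return prod(p**m - p**k for k in range(m))

p, l = 2, 7
f = next(j for j in range(1, l) if pow(p, j, l) == 1)          # order of p mod l
m = next(m for m in range(1, l) if gl_order(m, p) % l == 0)    # first m with l | |GL_m|
print(f, m)   # 3 3
```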

(c) Let g(x) be a nonconstant polynomial of degree m less than f, and suppose that g(x) divides Φ_ℓ(x). Let C_g be the companion matrix of g(x). Recall that g(x) is the minimal polynomial of C_g. Note that Φ_ℓ(1) = ℓ ≠ 0 in F_p. Since g(x) divides Φ_ℓ(x), this implies that g(1) ≠ 0. Therefore 1 is not an eigenvalue of C_g, so C_g is not the identity matrix. But

    C_g^ℓ − 1 = (C_g − 1) Φ_ℓ(C_g) = 0,

since g(x) | Φ_ℓ(x) and g(C_g) = 0 together imply Φ_ℓ(C_g) = 0. Because ℓ is prime and C_g ≠ 1, this implies that C_g has order ℓ, which is a contradiction to part (b) since deg(g(x)) = m < f. Therefore no such polynomial g(x) exists.
Let A be the f × f matrix of order ℓ from part (b). Let us first solve the problem assuming that A − 1 is invertible. From the equation

    0 = A^ℓ − 1 = (A − 1) Φ_ℓ(A),

we can cancel the invertible factor A − 1 and conclude that Φ_ℓ(A) = 0. Therefore m_A(x) | Φ_ℓ(x). Since A is an f × f matrix, m_A(x) has degree at most f. However, by the first half of part (c), we know that Φ_ℓ(x) is not divisible by any nonconstant polynomial of degree less than f. Therefore, m_A(x) must have degree exactly f and must be irreducible (any proper nonconstant factor of m_A(x) would be a polynomial of degree less than f dividing Φ_ℓ(x)).
To conclude the problem, it therefore remains to show that A − 1 is invertible, i.e. that 1 is not an eigenvalue of A. Suppose to the contrary that v is an eigenvector of A with eigenvalue 1. The first case is when f = 1, so A is just the 1 × 1 matrix containing the entry 1; but then A has order 1, not ℓ, giving a contradiction. So suppose f > 1. Then A induces a linear transformation Ā on the (f − 1)-dimensional F_p-vector space F_p^f / F_p·v. The linear transformation Ā satisfies Ā^ℓ = 1 since A does, so Ā has order ℓ unless Ā is the identity. From part (b), it is impossible for Ā to have order ℓ (since it acts on a vector space of dimension less than f), so we conclude that Ā is the identity. This implies that the matrix A (written with respect to a basis having v as the first basis vector) has the shape

        [ 1  λ_1  ⋯  λ_{f−1} ]
    A = [ 0   1   ⋯    0     ]
        [ ⋮        ⋱   ⋮     ]
        [ 0   0   ⋯    1     ]

Writing A = 1 + N, where N has the λ_i in its first row and zeroes elsewhere (so N^2 = 0), we get A^ℓ = 1 + ℓN; since ℓ ≠ 0 in F_p, it follows that A^ℓ = 1 only if λ_1 = λ_2 = ⋯ = λ_{f−1} = 0, in which case A is the identity matrix; but the identity matrix has order 1, not ℓ, giving a contradiction. Therefore, A − 1 is invertible as desired.
(d) If Φ_ℓ(x) is irreducible modulo p, then m_A(x) from part (c) must be Φ_ℓ(x) itself, since we already proved that m_A(x) divides Φ_ℓ(x). Therefore f = deg(m_A(x)) = deg(Φ_ℓ(x)) = ℓ − 1.
Conversely, if f = ℓ − 1, then m_A(x) from part (c) is a monic, irreducible polynomial of degree ℓ − 1 dividing Φ_ℓ(x). Therefore m_A(x) must be Φ_ℓ(x) itself, so Φ_ℓ(x) is irreducible.
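The conclusion of parts (c) and (d) can also be observed directly by factoring Φ_ℓ(x) mod p; the snippet below is an added illustration (not part of the original solution) using ℓ = 7 with p = 2 (f = 3) and p = 3 (f = 6 = ℓ − 1):

```python
# Added illustration: every irreducible factor of Phi_l(x) mod p has degree
# f = (order of p mod l), so Phi_l is irreducible mod p exactly when f = l - 1.
from sympy import symbols, factor_list, degree

x = symbols('x')
l = 7
Phi = sum(x**i for i in range(l))       # x^6 + x^5 + ... + x + 1

for p in (2, 3):
    f = next(j for j in range(1, l) if pow(p, j, l) == 1)
    degs = [degree(g, x) for g, _ in factor_list(Phi, x, modulus=p)[1]]
    print(p, f, degs)   # p = 2: f = 3, degrees [3, 3];  p = 3: f = 6, degrees [6]
```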
Exercise 12.3.2 - Prove that if λ_1, …, λ_n are the eigenvalues of the n × n matrix A, then λ_1^k, …, λ_n^k are the eigenvalues of A^k for any k ≥ 0.
Solution: Consider the Jordan canonical form J_A of A (over an algebraic closure of the base field, so that it exists). J_A is the direct sum of Jordan blocks, each upper triangular with an eigenvalue λ_i down the diagonal and 1's along the superdiagonal, i.e.

    J_A = J_{a_1}(λ_1) ⊕ J_{a_2}(λ_2) ⊕ ⋯ ⊕ J_{a_m}(λ_m),

where m ≤ n and a_i is the size of the i-th block. Therefore J_A^k is a block matrix whose blocks are the k-th powers of the blocks of J_A. Each Jordan block remains upper triangular after being raised to the power k, and its diagonal entries become the k-th powers of the original eigenvalues. Therefore J_A^k, the direct sum of these powers, is also an upper triangular matrix whose diagonal entries are the λ_i^k, and since A^k is similar to J_A^k, the eigenvalues of A^k are λ_1^k, λ_2^k, …, λ_n^k.
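As a quick illustration (not part of the original solution, and using a matrix chosen here as an example), SymPy confirms the statement for k = 3:

```python
# Added check of Exercise 12.3.2: the eigenvalues of A**3 are the cubes of
# the eigenvalues of A, with multiplicity.
from sympy import Matrix

A = Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])   # eigenvalues 2, 2, 3
print(A.eigenvals())        # {2: 2, 3: 1}
print((A**3).eigenvals())   # {8: 2, 27: 1}
```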


Exercise 12.3.4 - Prove that the Jordan canonical form for the matrix

        [ 9  -4  -5 ]
    A = [ 4   0  -3 ]
        [ 6  -4  -2 ]

is:

        [ 2  1  0 ]
    J = [ 0  2  0 ]
        [ 0  0  3 ]
Explicitly determine a matrix P which conjugates this matrix to its Jordan canonical form. Explain
why this matrix cannot be diagonalized.
Solution:

We compute

             [ x-9   4    5  ]
    xI - A = [ -4    x    3  ]
             [ -6    4   x+2 ]

and the characteristic polynomial is χ_char(x) = x^3 − 7x^2 + 16x − 12 = (x − 2)^2(x − 3). This implies that the eigenvalues and corresponding eigenvectors are:

    λ_1 = 2,  v_1 = (2, 1, 2)^T
    λ_2 = 2
    λ_3 = 3,  v_3 = (3, 2, 2)^T

To determine the minimal polynomial, we know that χ_min must divide χ_char = (x − 2)^2(x − 3) and must be divisible by (x − 2)(x − 3), since it has the same roots as χ_char. Therefore the options are:
(i) χ_min = (x − 2)(x − 3)
(ii) χ_min = χ_char = (x − 2)^2(x − 3)
As can be seen below, (i) does not annihilate A, so the minimal polynomial is the same as the characteristic polynomial:

    χ_min = χ_char = (x − 2)^2(x − 3)

                       [ -4  4  2 ]
    (A - 2I)(A - 3I) = [ -2  2  1 ]  ≠ 0
                       [ -4  4  2 ]

Since the factor (x − 2) appears in χ_min with multiplicity 2, the Jordan form J is the direct sum of a Jordan block of size 2 with eigenvalue 2 and a Jordan block of size 1 with eigenvalue 3, i.e. J = J_2(2) ⊕ J_1(3):

                          [ 2 1 0 ]
    J = J_2(2) ⊕ J_1(3) = [ 0 2 0 ]
                          [ 0 0 3 ]
In order to find a matrix P such that J = P^{-1}AP, we place into the columns of P a Jordan chain of (generalized) eigenvectors corresponding to each Jordan block.
Since λ_1 = 2 yields v_1 = (2, 1, 2)^T, this will be the first column of P. Since λ_3 = 3 yields v_3 = (3, 2, 2)^T, this will make up the third column of P. To find the second column of P, which corresponds to the size-2 Jordan block for the eigenvalue 2, solve the equation (A − 2I)v_2 = v_1, where v_1 is the eigenvector previously calculated corresponding to λ_1. This gives v_2 = (0, -1/2, 0)^T, which will make up the second column of P.
This means P and P^{-1} take the following form:

        [ 2    0   3 ]              [ -1   0   3/2 ]
    P = [ 1  -1/2  2 ]  ,  P^{-1} = [  2  -2   -1  ]
        [ 2    0   2 ]              [  1   0   -1  ]

A quick computation confirms that this P conjugates our original matrix A to J, i.e.:

               [ -1   0   3/2 ] [ 9  -4  -5 ] [ 2    0   3 ]   [ 2 1 0 ]
    P^{-1}AP = [  2  -2   -1  ] [ 4   0  -3 ] [ 1  -1/2  2 ] = [ 0 2 0 ] = J
               [  1   0   -1  ] [ 6  -4  -2 ] [ 2    0   2 ]   [ 0 0 3 ]

Finally, A cannot be diagonalized: its minimal polynomial (x − 2)^2(x − 3) has a repeated root (equivalently, the Jordan form J contains a block of size 2), so A is not similar to any diagonal matrix.
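The computation can be double-checked with SymPy (an added verification; A, P and J below are the matrices written above):

```python
# Added check of Exercise 12.3.4: characteristic polynomial and P^{-1} A P = J.
from sympy import Matrix, Rational

A = Matrix([[9, -4, -5], [4, 0, -3], [6, -4, -2]])
P = Matrix([[2, 0, 3], [1, Rational(-1, 2), 2], [2, 0, 2]])
J = Matrix([[2, 1, 0], [0, 2, 0], [0, 0, 3]])

print(A.charpoly().as_expr())   # lambda**3 - 7*lambda**2 + 16*lambda - 12
print(P.inv() * A * P == J)     # True
```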
Exercise 12.3.17 - Prove that any matrix A is similar to its transpose A^t.
Solution: It suffices to show that the Jordan canonical form J_A of A is similar to its transpose: since A is similar to J_A, transposing the relation A = Q J_A Q^{-1} shows that A^t is similar to J_A^t. (We may work over an algebraic closure of the base field; two matrices over a field F that are similar over an extension of F are already similar over F, since they have the same rational canonical form.)
First consider a single k × k Jordan block J and let P be the k × k "reversal" permutation matrix, with 1's along the anti-diagonal and 0's elsewhere:

        [ 0 0 ⋯ 0 1 ]
        [ 0 0 ⋯ 1 0 ]
    P = [ ⋮   ⋰   ⋮ ]
        [ 0 1 ⋯ 0 0 ]
        [ 1 0 ⋯ 0 0 ]

Then P = P^{-1}, and (PJP)_{ij} = J_{k+1−i, k+1−j}, so conjugation by P fixes the constant diagonal of J and carries its superdiagonal of 1's to the subdiagonal; that is, P^{-1} J P = J^t. Applying this block by block shows that J_A is similar to J_A^t. Since A is similar to J_A, A^t is similar to J_A^t, which was just shown to be similar to J_A. Since similarity of matrices is an equivalence relation, A and A^t are similar.
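The key step — that conjugating a single Jordan block by the reversal permutation transposes it — can be checked symbolically (an added illustration, with a 3 × 3 block chosen for concreteness):

```python
# Added check: for a 3 x 3 Jordan block J, the reversal permutation P satisfies
# P = P^{-1} and P*J*P = J^t.
from sympy import Matrix, symbols, eye

lam = symbols('lambda')
J = Matrix([[lam, 1, 0], [0, lam, 1], [0, 0, lam]])
P = Matrix([[0, 0, 1], [0, 1, 0], [1, 0, 0]])   # anti-diagonal permutation
print(P * P == eye(3))    # True, so P = P^{-1}
print(P * J * P == J.T)   # True
```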

Exercise 12.3.31 - Let N be an n × n matrix with coefficients in the field F. The matrix N is said to be nilpotent if some power of N is the zero matrix, i.e., N^k = 0 for some k. Prove that any nilpotent matrix is similar to a block diagonal matrix whose blocks are matrices with 1's along the first superdiagonal and 0's elsewhere.
Solution: From Exercise 12.3.2, if a matrix N has eigenvalues λ_1, …, λ_n, then the eigenvalues of N^k are λ_1^k, …, λ_n^k.
So, take N to be a nilpotent matrix, say N^k = 0, and consider an arbitrary eigenvalue λ of N (in an algebraic closure of F), with eigenvector v ≠ 0. Then N^k v = λ^k v, and since N^k = 0 we get 0 = λ^k v, so λ^k = 0 and hence λ = 0 (as v ≠ 0). Since our choice of λ was arbitrary, all eigenvalues of N must be 0. The characteristic polynomial of N is therefore x^n, which splits over F, so N is similar to its Jordan canonical form J, whose blocks all have eigenvalue λ_i = 0. Therefore J is a block diagonal matrix whose blocks are matrices with 1's along the superdiagonal and 0's everywhere else.

Exercise 12.3.32 - Prove that if N is an n × n nilpotent matrix then in fact N^n = 0.
Solution: Following the same notation as in Exercise 12.3.31, N is similar to J, an n × n block diagonal matrix whose entries are 0 except for some 1's along the superdiagonal. Therefore, there is an invertible n × n matrix P such that N = P^{-1}JP, so N^n = P^{-1}J^n P, and it suffices to show that J^n = 0.
It is clear from a direct calculation that the n-th power of a k × k Jordan block with eigenvalue 0 is 0 whenever k ≤ n: the m-th power of such a block has 1's along the m-th superdiagonal and 0's elsewhere, so once m ≥ k there are no longer any non-zero entries. Since every block of J has size at most n, J^n = 0 and hence N^n = 0.
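A tiny sanity check (my addition, not part of the original solution) for the n = 4 case:

```python
# Added check: a 4 x 4 Jordan block with eigenvalue 0 satisfies N^4 = 0
# (its powers push the 1's to higher superdiagonals until they fall off).
from sympy import Matrix, zeros

N = Matrix([[0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
print(N**2, N**3)            # 1's on the 2nd, then 3rd, superdiagonal
print(N**4 == zeros(4, 4))   # True
```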




Exercise 12.3.37:
Let J be a Jordan block of size n with eigenvalue λ over C.
(a) Prove that the Jordan canonical form for the matrix J^2 is the Jordan block of size n with eigenvalue λ^2 if λ ≠ 0.
(b) If λ = 0, prove that the Jordan canonical form for J^2 has two blocks (with eigenvalues 0) of size n/2, n/2 if n is even and of size (n − 1)/2, (n + 1)/2 if n is odd.
Solution:
(a) The matrix J^2 is upper triangular with λ^2 down the diagonal, so it has the single eigenvalue λ^2. In order to show that the Jordan canonical form of J^2 is a single block, it suffices to show that char_{J^2}(x) = min_{J^2}(x). It is easy to check that

          [ λ^2  2λ   1             ]
          [      λ^2  2λ   1        ]
    J^2 = [           ⋱    ⋱    ⋱   ]
          [                λ^2  2λ  ]
          [                     λ^2 ]

where the blank spaces are all zero. So char_{J^2}(x) = (x − λ^2)^n. By the Cayley–Hamilton theorem, min_{J^2}(x) | char_{J^2}(x); that is, min_{J^2}(x) = (x − λ^2)^k, where k is the smallest power such that (J^2 − λ^2)^k = 0. But (J^2 − λ^2)^k = (J + λ)^k (J − λ)^k, and since λ ≠ 0, J + λ·Id_n is invertible (it is upper triangular with nonzero diagonal entries 2λ). This implies (J − λ)^k = 0. Therefore k = n, since otherwise the polynomial (x − λ)^k would be a proper divisor of min_J(x) = (x − λ)^n that annihilates J. Hence min_{J^2}(x) = char_{J^2}(x), and the Jordan form of J^2 is a single block of size n with eigenvalue λ^2.
(b) In this case, we can compute the Jordan canonical form of J^2 directly. The matrix J^2 has 1's along the second superdiagonal and zeroes elsewhere. If we let E := {e_1, e_2, …, e_n} be the standard basis of C^n, then the linear transformation T represented by J^2 with respect to E acts on the basis elements by the rule T(e_1) = T(e_2) = 0 and T(e_i) = e_{i−2} for 3 ≤ i ≤ n. The goal is to choose a new basis B so that the matrix representing T with respect to B (which is conjugate to J^2) is in Jordan form.
Case 1: n is even. Consider the ordered basis

    B := {e_1, e_3, …, e_{n−1}, e_2, e_4, …, e_n}

By computing the image of each basis element of B, it is easy to verify that the matrix representing T with respect to B has two Jordan blocks of size n/2 with eigenvalue 0.
Case 2: n is odd. Here we can do the same thing as above using the basis

    B := {e_1, e_3, …, e_n, e_2, e_4, …, e_{n−1}}

Again it is easy to check that the matrix representation of T with respect to B is a block diagonal matrix with one Jordan block of size (n + 1)/2 and one Jordan block of size (n − 1)/2, both with eigenvalue 0.
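This can be confirmed computationally; the snippet below (an added check, not part of the original solution) computes the Jordan form of J^2 for a nilpotent block of size n = 5:

```python
# Added check of part (b): for a 5 x 5 Jordan block J with eigenvalue 0, the
# Jordan form of J^2 has two nilpotent blocks, of sizes 2 and 3.
from sympy import zeros

n = 5
J = zeros(n, n)
for i in range(n - 1):
    J[i, i + 1] = 1                 # nilpotent Jordan block of size n
P, JF = (J**2).jordan_form()
print(JF)   # block diagonal with nilpotent blocks of sizes 2 and 3
            # (the order in which SymPy lists the blocks may vary)
```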

12.3.38: Determine necessary and sufficient conditions for a matrix A ∈ M_n(C) to have a square root, i.e., for there to exist another matrix B ∈ M_n(C) such that A = B^2.
Solution:
Claim: A ∈ M_n(C) has a square root if and only if the Jordan blocks of A with eigenvalue 0 can be grouped into pairs whose sizes are either equal or differ by one, where a single block of size 1 may stand alone (it is the square of the 1 × 1 zero block). The blocks with nonzero eigenvalues impose no condition.
Proof: From Exercise 37, the Jordan form of a matrix of the form B^2 has exactly the shape described in the claim: a block of B with nonzero eigenvalue λ contributes a single block of the same size with eigenvalue λ^2, and a nilpotent block of B of size m contributes a pair of nilpotent blocks of sizes m/2, m/2 (m even) or (m − 1)/2, (m + 1)/2 (m odd), the latter being a single block of size 1 when m = 1. Furthermore, the steps are reversible, so it is possible to solve A = B^2 whenever A has such a Jordan form: for each block or pair of blocks of A, choose a Jordan block whose square has that Jordan form (every nonzero complex number has a square root), and let B be their direct sum. (Note: since A = P J_A P^{-1}, if J_A = C^2 then A = (P C P^{-1})^2, so it suffices to determine when the Jordan form J_A is a square.)
Exercise 12.3.39: Let J be a Jordan block of size n with eigenvalue λ over a field F of characteristic 2. Determine the Jordan canonical form for the matrix J^2. Determine necessary and sufficient conditions for a matrix A ∈ M_n(F) to have a square root, i.e., for there to exist another matrix B ∈ M_n(F) such that A = B^2.
Solution:
Claim: Let J be a Jordan block of size n with eigenvalue λ. Then the Jordan canonical form of J^2 has two blocks with eigenvalue λ^2, of size n/2, n/2 if n is even and of size (n − 1)/2, (n + 1)/2 if n is odd.
First let J be a Jordan block for λ of even size 2n, so J = λ·I + N, where N has 1's along the first superdiagonal and 0's elsewhere:

        [ λ 1 0 ⋯ 0 ]
        [ 0 λ 1 ⋯ 0 ]
    J = [ ⋮    ⋱ ⋱ ⋮ ]
        [ 0 ⋯    λ 1 ]
        [ 0 0 ⋯  0 λ ]

Since char F = 2, the cross term 2λN vanishes when we square, so J^2 = λ^2·I + N^2:

          [ λ^2  0   1   0   ⋯   0   ]
          [  0  λ^2  0   1   ⋯   0   ]
    J^2 = [  ⋮       ⋱   ⋱   ⋱   ⋮   ]
          [  ⋮           ⋱   ⋱   1   ]
          [  0   ⋯           λ^2  0  ]
          [  0   0   ⋯        0  λ^2 ]

that is, J^2 has λ^2 down the diagonal, 1's along the second superdiagonal, and 0's elsewhere. Therefore the only eigenvalue of J^2 is λ^2.


Let B_1 = {e_1, e_2, …, e_{2n}} be the standard basis, so that J^2 e_i = λ^2 e_i + e_{i−2} (where we set e_{−1} = e_0 = 0). Now re-order the basis as B_2 = {e_2, e_4, …, e_{2n}, e_1, e_3, …, e_{2n−1}}. It is easy to see that the matrix of J^2 with respect to the new basis B_2 is

    H = J_n(λ^2) ⊕ J_n(λ^2),

the direct sum of two Jordan blocks of size n with eigenvalue λ^2: the even-indexed and the odd-indexed basis vectors each span a J^2-invariant subspace on which J^2 acts as a single Jordan block. Therefore there exists a permutation matrix P such that P^{-1} J^2 P = H, and the Jordan form of J^2 is H.
Similarly, if J is a Jordan block for λ of odd size 2n − 1, with basis B_1 = {e_1, e_2, …, e_{2n−1}}, let B_2 = {e_2, e_4, …, e_{2n−2}, e_1, e_3, …, e_{2n−1}}. Then the matrix with respect to B_2 is analogous to the matrix H above, but with one Jordan block of size n − 1 (on the even-indexed basis vectors) and one of size n (on the odd-indexed ones), both with eigenvalue λ^2.
Necessary and sufficient condition: A ∈ M_n(F) has a square root if and only if every eigenvalue of A has a square root, and for each eigenvalue μ the Jordan blocks of A with eigenvalue μ can be grouped into pairs whose sizes are either equal or differ by one, where a single block of size 1 may stand alone (it is the square of a 1 × 1 block).
The proof is similar to the proof of problem 38.

