Book 4
Linear algebra
T. S. BLYTH o E. F. ROBERTSON
University of St Andrews
 r.
The right f ,h,
University of Cambridge
to print and sell
all manner of books
as granted by
Henry Vlll in 1534.
The University has printed
and published continuously
since 1584.
Published in the United States of America by Cambridge University Press, New York
www.cambridge.org
Information on this title: www.cambridge.org/9780521272896
A catalogue record for this publication is available from the British Library
Preface vi
Background reference material vii
1: Direct sums and Jordan forms 1
2: Duality and normal transformations 18
Solutions to Chapter 1 31
Solutions to Chapter 2 67
Test paper 1 96
Test paper 2 98
Test paper 3 100
Test paper 4 102
v
Preface
TSB, EFR
St Andrews
Background reference material
vii
[12] I. D. Macdonald, The Theory of Groups, Oxford University
Press, 1968.
[13] S. MacLane and G. Birkhoff, Algebra, Macmillan, 1968.
[14] N. H. McCoy, Introduction to Modern Algebra, Allyn and
Bacon, 1975.
[15] J. J. Rotman, The Theory of Groups: An Introduction, Allyn
and Bacon, 1973.
[16] I. Stewart, Galois Theory, Chapman and Hall, 1975.
[17] I. Stewart and D. Tall, The Foundations of Mathematics,
Oxford University Press, 1977.
viii
1: Direct sums and Jordan forms
In this chapter we take as a central theme the notion of the direct sum
A ® B of subspaces A, B of a vector space V. Recall that V = A ® B
if and only if every x E V can be expressed uniquely in the form a + b
where a E A and b E B; equivalently, if V = A+B and AnB = {0}. For
every subspace A of V there is a subspace B of V such that V = A® B.
In the case where V is of finite dimension, this is easily seen; take a basis
{V1,  .. , vk} of A, extend it to a basis {vi,. .. , v,,} of V, then note that
spans a subspace B such that V = A ®B.
If f : V * V is a linear transformation then a subspace W of V is
said to be finvariant if f maps W into itself. If W is finvariant then
there is an ordered basis of V with respect to which the matrix of V is
of the form
0 X
where M is of size dim W x dim W.
If f : V + V is such that f o f = f then f is called a projection.
For such a linear transformation we have V = Im f ® Ker f where the
subspace Imf is finvariant (and the subspace Kerf is trivially so). A
vector space V is the direct sum of subspaces W1, .. . , Wk if and only if
there are nonzero projections p1, ... , Pk : V  V such that
k
Epi = idv and pi o pj= 0 for i # j.
i=1
Mk
Of particular importance is the situation where each Ms is of the form
A 1 0 ... 0 0
0 A 1 ... 0 0
0 0 A ... 0 0
0 0 0 ... A l
0 0 0 ... 0 A
X2
xn
1.1 Which of the following statements are true? For those that are false,
give a counterexample.
(i) If {al, a2, a3} is a basis for IR3 and b is a nonzero vector in IR then
{b + a1, a2, a3} is also a basis for IR3.
2
1: Direct sums and Jordan forms
(ii) If A is a finite set of linearly independent vectors then the dimension
of the subspace spanned by A is equal to the number of vectors in
A.
(iii) The subspace {(x, x, x) x E IR} of IR3 has dimension 3.
I
(iv) If A is a linearly dependent set of vectors in IRn then there are more
than n vectors in A.
(v) If A is a linearly dependent subset of IRn then the dimension of the
subspace spanned by A is strictly less than the number of vectors
in A.
(vi) If A is a subset of IRn and the subspace spanned by A is IRn itself
then A contains exactly n vectors.
(vii) If A and B are subspaces of IRn then we can find a basis of IRn
which contains a basis of A and a basis of B.
(viii) An ndimensional vector space contains only finitely many sub
spaces.
(ix) If A is an n x n matrix over Q2 with A3 = I then A is nonsingular.
(x) If A is an n x n matrix over Q with A3 = I then A is nonsingular.
(xi) An isomorphism between two vector spaces can always be repre
sented by a square singular matrix.
(xii) Any two ndimensional vector spaces are isomorphic.
(xiii) If A is an n x n matrix such that A2 = I then A = I.
(xiv) If A, B and C are nonzero matrices such that AC = BC then
A=B.
(xv) The identity map on IRn is represented by the identity matrix with
respect to any basis of IRn.
(xvi) Given any two bases of IRn there is an isomorphism from IRn to
itself that maps one basis onto the other.
(xvii) If A and B represent linear transformations f, g : IRn * IRn with
respect to the same basis then there is a nonsingular matrix P such
that P1 AP = B.
(xviii) There is a bijection between the set of linear transformations from
IRn to itself and the set of n x n matrices over IR.
(xix) The map t : IR2 4 IR2 given by t(x, y) = (y, x+y) can be represented
by the matrix
t1(a,b,c) = (a+b,b+c,c+a);
t2 (a, b, c) = (a  b, b  c, 0);
t3 (a, b, c) = (b, a, c);
t4 (a, b, c) = (a, b, b).
3 1 1
1 5 1
1 1 3
with respect to some basis of V. Find dim Ker t and dim Im t when
(i) F = IR;
(ii) F = 12;
(iii) F = 73.
Is V = Ker t ® Im t in any of cases (i), (ii) or (iii)?
1.4 Let V be a finitedimensional vector space and let s, t E .C (V, V) be such
that sot = idv. Prove that to s = W. Prove also that a subspace of V
is tinvariant if and only if it is sinvariant. Are these results true when
V is infinitedimensional?
1.5 Let V,, be the vector space of polynomials of degree less than n over
the field IR. If D E £(V,,, V,,) is the differentiation map, find Im D and
Ker D. Prove that Im D _ V,,_ 1 and that Ker D = IR. Is it true that
V,y=ImDa) KerD?
Do the same results hold if the ground field IR is replaced by the field
Z2?
4
1: Direct sums and Jordan forms
1.6 Let V be a finitedimensional vector space and let t E C (V, V)  Establish
the chains
V D Im t D Im t2 D ... D Im to D Im to+1 Q
{0} C Ker t C Ker t2 C ... C Ker to C Ker to+1 C ....
Show that there is a positive integer p such that ImtP = ImtP+1 and
deduce that
('wl)...,wr,f(wl),..., f(wr),xl,...,xn2r}
is a basis of V.
Hence show that a nonzero n x n matrix A over F is such that A2 = 0
if and only if A is similar to a matrix of the form
VI = {x E V I x3 = x2 and x4 = XI),
V2 = {x E V I x3 = x2 and x4 = x1 }.
Show that
(1) V1 and V2 are subspaces of V;
5
Book 4 Linear algebra
(2) {b1 + b4, b2 + b3} is a basis of V1 and {b1  b4, b2  b3} is a basis
of V2i
(3) V = V1 ® V2;
(4) with respect to the basis B. and the basis
mij = m5i,5j
for all i, j. If M is centrosymmetric, show that M is similar to a matrix
of the form
Q 0 0
ry S 0 0
0 0 E S
0 0 n tg
xEKerg b x= f(x),
6
1: Direct sums and Jordan forms
and that g o g = 0. Deduce that an n x n matrix A over F is such that
A2 = In if and only if A is similar to a matrix of the form
In2p
V = {(a, a, 0) I a E IR}.
given by
ek(x)= J0 if0<x<k;
1 ifk<x<1.
A piecewise linear function on [0, 1[ is a mapping f : [0,1[> IR for
which there exists a net (ai)o<i<n+1 and sequences (b1)o<i<n, (c,)o<i<n
of elements of IR such that
(Vx E [ai, ai+1 [) f (x) = bix + ci.
Let F be the set of piecewise linear functions on [0, 1[ and let G be
the subset of F consisting of the piecewise linear functions g that are
continuous with g(0) = 0. Show that F, G are vector spaces over IR and
that F=E®G.
Show that a basis of G is the set {gk I k E [0, 1[} of functions given
by
0 if0<x<k;
gk(x) xk ifk<x<1.
Finally, show that the assignment
fi+I(f)= jf(t)dt
describes an isomorphism from E to G.
8
1: Direct sums and Jordan forms
1.16 Let V be a vector space over a field F and let t E ,C(V,V). Let ). and
)2 be distinct eigenvalues of t with associated eigenvectors v1 and v2. Is
it possible for al + .12 to be an eigenvalue of t? What about .11 )2?
1.17 Let t E C((E2, d2) be given by
l 3 1 1 7 1 2
(a) 3J, (b) 1 5 1 , (c) 1 7 2 ,
L' 1 1 3 2 2 10
2 1 1 1 0 1
(d) 0 2 1, (e) 0 2 1.
0 0 1 1 0 3
A bn
[bn+1]
CA = {v E V I t(v) = Av}
that
B = {M(1, 0, 0), M(0,1, 0), M(0, 0, 1)}
is a basis of this subspace.
If f : C3 + C3 represents M(a, /3, y) relative to the canonical basis
{e1, e2, e3 b show that e1 + e2 + e3 is an eigenvector of f. Determine the
matrix of f relative to the basis {el + e2 + e3, e2, e3}. Hence find the
eigenvalues of M(a, Q, y).
11
Book 4 Linear algebra
1.30 Let V be a vector space of dimension n over a field F. A linear trans
formation f : V > V (respectively, an n x n matrix A over F) is said to
be nilpotent of index p if there is an integer p > 1 such that fP1 # 0
and fP = 0 (respectively, AP' 54 0 and AP = 0).
Show that if f is nilpotent of index p and x E V\{0} is such that
fp1(x) # 0 then
{x,f(x),...,f"1(x)}
is a linearly independent subset of V. Hence show that f is nilpotent
of index n if and only if there is an ordered basis of V with respect to
which the matrix of f is
0 0
In1 0'
Ak
where each Ai is real and of the form
cl 0 J.
1.33 Let V be a vector space of dimension 3 over IR and let t E Z (V, V) have
eigenvalues 2,1,2. Use the CayleyHamilton theorem to express ten
as a real quadratic polynomial in t.
1.34 Let V be a vector space of dimension n over a field F and let f E £ (V, V)
be such that all the zeros of the characteristic polynomial of f lie in F.
Let Al be an eigenvalue of f and let bl be an associated eigenvector.
Let W be such that V = Fbl ®W and let (b;)2<i<n be an ordered basis
of W. Show that the matrix of f relative to the basis {bl, b2i ... , bn} is
of the form
[Ai P12 .. Qln
0 M
Observe that in general Qi2...... in are nonzero, so that W is not f
invariant. Let it be the projection of V onto W and let g = it of. Show
that W is ginvariant and that if g' is the linear transformation induced
on W by g then Mat (g', (bf)) = M. Show also that all the zeros of the
characteristic polynomial of g' lie in F.
Deduce that f is triangularisable in the sense that there is a basis B
of V relative to which the matrix of f is upper triangular with diagonal
entries the eigenvalues of f.
13
Book 4 Linear algebra
1 3 2 3 0 1
(c) 0 7 4 , (d) 0 3 0
0 9 5 0 0 3
0 1 1 1 0
Find also a Jordan basis and hence an invertible matrix P such that
P'AP = J.
1.39 For each of the following matrices A find a Jordan normal form J, a
Jordan basis, and an invertible matrix P such that P1AP = J.
13 8
22 2 12 (b) 22 13
1
0
2
3
(a) 20 0 12
5 0 1
30 3 16 22
8
13 5 5
14
1: Direct sums and Jordan forms
1.40 Find a Jordan normal form and a Jordan basis for the matrix
5 1 3 2 5
0 2 0 0 0
1 0 1 1 2
0 1 0 3 1
1 1 1 1 1
1 0 1 1 0
4 1 3 2 1
A = 2 1 0 1 1
3 1 3 4 1
8 2 7 5 4
From only the information given by the minimum polynomial, how many
essentially different Jordan normal forms are possible? How many lin
early independent eigenvectors are there? Does the number of linearly
independent eigenvectors determine the Jordan normal form J? If not,
does the information given by the minimum polynomial together with
the number of linearly independent eigenvectors determine J?
1.42 Find a Jordan normal form of the differentiation map D on the vector
space IR4[X) of polynomials of degree less than or equal to 3 with real
coefficients. Find also a Jordan basis for D on IR4[X].
1.43 If a 3 x 3 real matrix has eigenvalues 3,3,3 what are the possible Jordan
normal forms? Which of these are similar?
1.44 Which of the following are true? If A, B E Mat,,,,(() then AB and BA
have the same Jordan normal form
(i) if A and B are both invertible;
(ii) if one of A, B is invertible;
(iii) if and only if A and B are invertible;
(iv) if and only if one of A, B is invertible.
1.45 Let V be a vector space of dimension n over C. Let t E C(V, V) and let
A be an eigenvalue of t. Let J be a matrix that represents t relative to
some Jordan basis of V. Show that there are
Find also a Jordan basis and an invertible matrix P such that P1 AP =
J.
Hence solve the system of differential equations
x1 0 1 0 1 x1
X1 _ 2 3 0 1 x2
X3 2 1 2 1 x3
x4 2 1 0 3 x4
dx3
xl  x2 + 2x2
dt =
dxj
dIi_5x'+6x2+6x3 = 0 jj=x1+3x22x3
dx2 dx2
(c) + x1  4x2  2x3 = 0 (d) = 7x2  4x3
dt dt
dX3
 3x1 + 6x2 + 4x3 = 0 dt = 9x2  5x3
1.48 dt
Solve the system of differential equations
5dx2+4d3 = Y2
x"'2x"4x'+8x=0
can be written as a firstorder matrix system X' = AX. By using the
method of the Jordan normal form, solve the equation given the initial
conditions
x(0) = 0, x'(0) = 0, x"(0) = 16.
17
2: Duality and normal transformations
1 ifi=j;
vd(vj) 0 ifi#j.
For every x E V we have
If (Vi)n, (w:)n are ordered bases of V and (va)n, (wa)n the correspond
ing dual bases then the transition matrix from (va)n to (wa)n is (p1)t
where P is the transition matrix from (vi)n to (wt)n. In particular,
consider V = IRS. Note that if
is a basis of IRS then the transition matrix from B to the canonical basis
(ei)n of IRS is M = [mj,]nxn where mij = aji. The transition matrix
from Bd to (ed)n is given by (Ml)t. We can therefore usefully denote
the dual basis by
B = {[all, ... , aln], [all, ... , a2n], ... , [anl, ... , an[}
2: Duality and normal transformations
where ain] denotes the ith row of M1, so that
[ail,...,ain](xl) ...,xn) =ailxl + ...+atnxn,
The bidual of an element x is x^ : Vd * F where x^(yd) = yd(x). It
is common practice to write yd(x) as (x, yd) and say that yd annihilates
x if (x, yd) = 0. For every subspace W of V the set
f* _ V1 ° ft o t9w
and is the unique linear transformation such that
2.1 Determine which of the following mappings are linear functionals on the
vector space IR3 [X] of all real polynomials of degree less than or equal
to 2 :
(a) f '' f'; (b) f '+ f f;
1 (c) f '+ f (2);
f2.
(d) f '' f'(2); (e) f '' / 1
2.2 Let C[0,1] be the vector space of continuous functions f : [0, 11 + IR. If
fo is a fixed element of C[O,1], prove that (p : C[0,1] + IR given by
is a linear functional.
2.3 Determine the basis of (IR3)d that is dual to the basis
of 1R3.
2.4 Let A = {x1i x2} be a basis of a vector space V of dimension 2 and let
Ad = {V1i a2) be the corresponding dual basis of Vd. Find, in terms of
epl, (p2 the basis of V d that is dual to the basis A' = {xl +2x2, 3x1 +4x2}
of V.
2.5 Which of the following bases of (IR2)d is dual to the basis {(1, 2), (0, 1)}
of IR2?
(a) {[1, 2], [0, 1]}; (b) {[1, 0], [2, 1]};
(c) {[1, 0], [2,1]}; (d) {[l, 0], [2, 1]}.
2.6 (i) Find a basis that is dual to the basis
of IR4.
(ii) Find a basis of IR4 whose dual basis is
MAX)) = p(ti)
Show that Bd = {fl, f2, f3} is a basis for the dual space (IR3[X])d and
determine a basis B = {pl(X),p2(X),p3(X)} of IR3[X] of which Bd is
the dual.
2.9 Let a = (1, 2) and Q = (5, 6) be elements of IR2 and let cp = [3,4] be an
element of (IR2)d. Determine
Kertd = (Imt)1.
Deduce that if v E V then one of the following holds
(i) there exists u E U such that t(u) = v;
(ii) there exists (p E Vd such that td((p) = 0 and <p(v) = 1.
Translate these results into a theorem on solving systems of linear
equations.
Show that (i) is not satisfied by the system
3x+ y=2
x+2y=1
x+3y= 1.
Find the linear functional 'p whose existence is guaranteed by (ii).
2.11 If s, t : U + V are linear transformations, show that
(sot)d=td0Sd.
Prove that the dual of an injective linear transformation is surjective,
and that the dual of a surjective linear transformation is injective.
21
Book 4 Linear algebra
2.12 Let t E C(IR3, IR3) be given by the prescription
If X = {(1, 0, 0), (1,1, 0), (1,1,1)} and yd = {[1, 0, 0], [1,1, 0], [1,1,1]},
find the matrix of td with respect to the bases yd and Xd.
2.13 Let {al, 02) 013} and {al, a2i a3} be bases of IR3 that differ only in the
third basis element. Suppose that {<pi, ,p2, ,p3 } and {, 'p' , V3 } are the
corresponding dual bases. Prove that V3 is a scalar multiple of V3
2.14 Let CIO, 11 denote the space of continuous functions on the interval [0, 11.
Given g E C[0,1], define Lg : C[0,11 + IR by
Show that f is an isomorphism that does not satisfy (*). [Hint. Take
x = xl, y = t.] If, on the other hand, t 0 Ker S(t) for all t 0 let
22
2: Duality and normal transformations
{z1,...,xn1} be a basis of Kers(t) so that {z1i...,xn_1,t} is a basis
of V. Show that
{xi + x2,x2,x3,...,xn_1,t}
then f is an isomorphism that does not satisfy (*). Conclude from these
observations that we must have n = 2.
Suppose now that F has more than two elements and let A E F be such
that A # 0,1. If there exists t 0 such that t E Ker S(t) observe that {t}
is a basis of Ker S(t) and extend this to a basis It, z} of V. If f : V  V
is the (unique) linear transformation such that f (t) = t, f (z) = Az
show that f is an isomorphism that does not satisfy (*). [Hint. Take
x = z, y = t.] If, on the other hand, t 0 Ker S(t) for all t 0 0 let {z} be
a basis for Kers(t) so that {z,t} is a basis for V. If f : V . V is the
(unique) linear transformation such that f (z) = Az, f (t) = t show that
f is an isomorphism that does not satisfy (*). [Hint. Take x = y = z.]
Conclude from these observations that F must have two elements.
Now examine the vector space F2 where F = {0, 1).
[Hint. (F' )d is the set of linear transformations f : F x F + F. Since
F2 has four elements there are 24 = 16 laws of composition on F. Only
four of these are linear transformations from F2 to F; and each of these
is determined by its action on the natural basis of F2. Compute (F2)d
and determine a canonical isomorphism from F2 onto (F2)a ]
2.16 Let V be an inner product space of dimension k and let U be a subspace
of V of dimension k  1 (a hyperplane). Show that there exists a unit
vector n in V such that
U= {x E V I (n]x) = 0}.
Given v E V, define
v' = v  2(nlv)n.
Show that v  v' is orthogonal to U and that i (v + v') E U, so that v'
is the reflection of v in the hyperplane U. Show also that the mapping
t : V 4V defined by
t(v)=v'
is linear and orthogonal. What can you say about its eigenvalues and
eigenvectors?
23
Book 4 Linear algebra
Ifs IR3 > IR3 and t : IR4 , IR4 are respectively reflections in the
plane 3x  y + z = 0 and in the hyperplane 2x  y + 2z  t = 0, show
that the matrices of s and t are respectively
7 6
1 2 4 2
6 1
2 4 2 1
6 9 2 ,
11 6 2 9
5 4 2 1 2
2 1 2 4
x2 + 6xy  y2 = 1.
fm (A) = MA.
(fM)* = .fM*.
2.20 Let V be a finitedimensional inner product space. Show that for every
f E Vd there is a unique ,0 E V such that
i=1
Show as follows that this result does not necessarily hold for inner
product spaces of infinite dimension. Let V be the vector space of poly
nomials over Q. Show that the mapping
(P, q) H (PIq) = f0
1 p (t) q(t) dt
(VPEV) f(P)=P(z)
Show that there is no q E V such that (Vp E V) f (p) = (pjq).
[Hint. Suppose that such a q exists. Let r E V be given by r(t) = t  z
and show that, for every p E V,
2.21 Let C[O,1] be the inner product space of real continuous functions on
[0, 11 with the integral inner product. Let K : C[O,1] + C[0,1] be the
integral operator defined by
det(I  T  iS) 0 0.
Show also that the matrix
U=(I+T+iS)(ITiS)1
is unitary.
2.24 Let A be a real symmetric matrix and let S be a real skewsymmetric
matrix of the same order. Suppose that A and S commute and that
det(A  S) # 0. Prove that
(A+ S)(A  S)1
is orthogonal.
26
2: Duality and normal transformations
2.25 A complex matrix A is such that A A = A. Show that the eigenvalues
of A are either 0 or 1.
2.26 Let A and B be orthogonal n x n matrices with det A det B. Prove
that A + B is singular.
2.27 Let A be an orthogonal n x n matrix. Prove that
(1) if det A = 1 and n is odd, or if det A = 1 and n is even, then 1 is
an eigenvalue of A;
(2) if det A = 1 then 1 is an eigenvalue of A.
2.28 If A is a skewsymmetric matrix and g(X) is a polynomial such that
g(A) = 0, prove that g(A) = 0. Deduce that the minimum polynomial
of A contains only terms of even degree.
Deduce that if A is skewsymmetric and f (X), g(X) are polynomials
whose terms are respectively odd and even then f (A), g(A) are respec
tively skewsymmetric and symmetric.
2.29 For every complex n x n matrix A let
n
N(A) = tr(A A) = E[A A]ii.
i=1
2.34 Consider the quadratic form q(x) = xtAx on IR". Prove that q(x) > 0
for all x E IR" if and only if the rank of q equals the signature of q.
Prove also that q(x) > 0 for all x E IR" with q(x) = 0 only when x = 0
if and only if the rank and signature of q are each n.
2.35 With respect to the standard basis for IR3, a quadratic form q is repre
sented by the matrix
1 1 1
A= 1 1 0 .
1 0 1
Is q positive definite? Is q positive semidefinite? Find a basis of IR3
with respect to which the matrix representing q is in normal form.
2.36 Let f be the bilinear form on IR2 x IR2 given by
f((xi,x2),(yi,y2)) = xiyi +xiy2+2x2y1 +x2Y2
Find a symmetric bilinear form g and a skewsymmetric bilinear form h
such that f = g + h.
Let q be the quadratic form given by q(x) = f (x, x) where x E IR2.
Find the matrix of q with respect to the standard basis. Find also the
rank and signature of q. Is q positive definite? Is q positive semidefinite?
2.37 Write the quadratic form
4x2 +4y 2 + 4z2  2yz + 2xz  2xy
in matrix notation and show that there is an orthogonal transformation
(x, y, z) H (u, v, w) which transforms the quadratic form to
3u2 + 3v2 + 6w2.
Deduce that the original form is positive definite.
28
2: Duality and normal transformations
2.38 By completing squares, find the rank and signature of the following
quadratic forms :
(1) 2y2  z2 + xy + xz;
(2) 2xy  xz  yz;
(3) yz+xz+xy+xt+yt+zt.
2.39 For each of the following quadratic forms write down the symmetric
matrix A for which the form is expressible as xt Ax. Diagonalise each of
the forms and in each case find a real nonsingular matrix P for which
the matrix Pt AP is diagonal with entries in {1, 1,0}.
(1) x2 + 2ya + 9z2  2xy + 4xz  6yz;
(2) 4xy + 2yz;
(3) x2 + 4y2 + z2  4t2 + 2xy  2xt + 6yz  8yt  14zt.
2.40 Find the rank and signature of the quadratic form
2.41 Show that the rank and signature of the quadratic form
n
E (Ars + r + s)xrx,
r'8=1
are independent of A.
2.42 Let A be the matrix associated with the quadratic form Q(x1, ... , xn)
and let A be an eigenvalue of A. Show that there exist a1, ... , an not all
zero such that
2 a
Q
2.43 If the real square matrix A is such that det A # 0 show that the quadratic
form xtAtAx is positive definite.
2.44 Let f : IRn x IRn + IR be a symmetric bilinear form and let Q f be the
associated quadratic form. Suppose that Qf is positive definite and let
g : IRn X IRn > IR be a symmetric bilinear form with associated quadratic
form Qg. Prove that there is a basis of IRn with respect to which Q f
and Qg are each represented by sums of squares.
For every x E IRn let f, E (IRn)d be given by f,, (y) = f (x, y). Call
f degenerate if there exists x E IRn with ff = 0. Determine the scalars
A E IR such that g  A f is degenerate. Show that such scalars are the
29
Book 4 Linear algebra
roots of the equation det(B  AA) = 0 where A, B represent f, g relative
to some basis of IR'.
By considering the quadratic forms 2xy + 2yz and x2  y2 + 2xz show
that the result in the first paragraph fails if neither f nor g is positive
definite.
2.45 Evaluate
(xz+y'+xa+xN+xz+yz)dx
8 dy dz.
1. 01.0 f 00
30
Solutions to Chapter 1
1.1
(i) False. For example, take b = al.
(ii) True.
(iii) False. {(1,1,1)} is a basis, so the dimension is 1.
(iv) False. For example, take A = {0} or A = {v, 2v}.
(v) True.
(vi) False. For example, take A = IR'.
(vii) True.
(viii) False. {(x, \x) x E IR} is a subspace of IR2 for every A E IR.
I
(ix) True.
(x) True.
(xi) False. An isomorphism is always represented by a nonsingular
matrix.
(xii) False. Consider, for example, IR2 and d2. The statement is true,
however, if the vector spaces have the same ground field.
(xiii) False. 0 0 is a counterexample.
(xiv) False. For example,
1 0 1 1
(xv) True.
(xvi) True.
(xvii) False. Take, for example, f, g : IR" * IR' given by f (x, y) (0, 0)
and g(x, y) = (x, y). Relative to the standard basis of IR' we see
Book 4 Linear algebra
that f is represented by the zero matrix and g is represented by
the identity matrix; and there is no invertible matrix P such that
P142P = 0.
(xviii) True.
(xix) False. The transformation t is nonsingular (an isomorphism), but
r1 21
is singular.
1 2J
r 1
(xx) False. The matrix I 0 is not diagonalisable.
1J
(a,b,c)EKert2 b ab=0,bc=0
b a=b=c
and so Kert2 = {(a, a, a) a E IR}. It is clear from the definition of t2
I
3 1 1 1 1 3 1 1 3
1 5 1 > 1 5 1 > 0 4 2
1 1 3 3 1 1 0 2 8
1 3 1 3
0
1 1
2 8 > 0 2 8 .
0 4 2 0 0 18
Note that we have been careful not to divide by any number that is
divisible by either 2 or 3 (since these will be zero in 74 and 713 respec
tively).
(i) When F = IR the rank of the row echelon matrix is 3, in which case
dim Im t = 3 and hence dim Ker t = 0.
(ii) When F = 712 we have that 2,18, 8 are zero so that the rank is 1,
in which case dim Im t = 1 and dim Ker t = 2.
(iii) When F = 713 we have that 18 is zero so that the rank is 2, in
which case dim Im t = 2 and dim Ker t = 1.
V = Ker t ® Im t holds in cases (i) and (ii), but not in case (iii); for in
case (iii) we have that (1, 1, 1) belongs to both Kert and Imt.
1.4 If s o t = idv then s is surjective, hence bijective (since V is of finite
dimension). Then t = s1 and so t o s = idv.
Suppose that W is tinvariant, so that t(W) C_ W. Since t is an
isomorphism we must have dimt(W) = dim W and so t(W) = W. Hence
W = s[t(W)] = s(W) and W is sinvariant.
The result is false for infinitedimensional spaces. For example, con
sider the real vector space IR[X] of polynomials over IR. Let s be the
differentiation map and t the integration map. We have s o t = id but
toe 54 id.
33
Book 4 Linear algebra
1.5 KerD = {a I a E F} and ImD = {p(X) I degp(X) < n  2}. Clearly,
Im D is isomorphic to Vn_ 1 and Ker D is isomorphic to F. Now Ker D n
ImD # {0} since if a E F with a # 0 then the constant polynomial a
belongs to both.
The same results do not hold when the ground field is Z2. For exam
ple, in this case we see that the polynomial X2 belongs to the kernel of
D.
1.6 Let s, t E £(V, V). Then if w E Im(s o t) we have w = a[t(u)] for some
u E V which shows that w E Im s. Thus Im(s o t) C Im s. The first
chain now follows by taking s = t'.
Similarly, if u E Kertn then s[tn(u)] = s(0) = 0 gives u E Ker(s o tn)
and so Ker t' C_ Ker(s o tn). The second chain now follows by taking
s=t.
Now we cannot have an infinite number of strict inclusions in the first
chain since X C Y implies that dim X < dim Y, and the dimension of V
is finite. Hence the chain is finite. It follows that there exists a positive
integer p such that ImtP = ImtP+k for all positive integers k. Since
dim Im tP + dim Ker tP = dim V the corresponding results for the kernel
chain are easily deduced.
To show that V = Im tP ®Ker tP it suffices, by the dimension argument,
to prove that ImtP n KertP = {0}. Now if x E ImtP n KertP then
tP(x) = 0 and there exists v E V such that x = tP(v). Consequently
0 = tP(x) = t2p(v)
t)LiwiEKerf
1=1
=i' r
EAtwtEKerfnW={0}
:=1
r
> A1w;=0
i=1
(i = 1,...,r) Ai = 0.
{f(wl),...,f(wr),xi,...,xn_2r}
is a basis for Ker f . Since V = W ® Ker f it follows that
1.8 (1) Sums and scalar multiples of elements of V1, V2 are clearly elements
of V1, V2 respectively.
(2) If z E V1 then x = xl(bl + b4) + x2(b2 + b3) shows that V1 is
generated by {b1 + b4, b2 + b3}. Also, if xl (bl + b4) + x2(b2 + b3) = 0
then, since {b1, b2, b3, b4} is a basis of V, we have x1 = x2 = 0. Thus
{b1 + b4i b2 + b3} is a basis of V1. Similarly, {b1  b4i b2  b3} is a basis
of V2
(3) It is clear from the definitions of V1 and V2 that we have V1 n
V2 = {0}. Consequently, the sum V1 + V2 is direct. Since V1,V2 are of
dimension 2 and V is of dimension 4, it follows that V = V1 ® V2.
(4) To find the matrix of idv relative to the bases B = {b1, b2, b3, b4}
and C = {b1 + b4, b2 + b3, b2  b3, b1  b4} we observe that
2 0 0 2
0 12 1 0
A 0 1
2 2 0
2 0 0 2
11 0 0 1
0 1 1 0
A1 =2A=
0 1 1 0
1 0 0 1
a b c d
e f g h
h g f e
d c b a
36
Solutions to Chapter 1
Let f represent M relative to the basis B. Then the matrix of f relative
to the basis C is given by AMA1, which is readily seen to be of the
form
a /3 0 0
_ ry 6 0 0
K
0 0 E S
0 0 $
Thus if M is centrosymmetric it is similar to a matrix of the form K.
1.9 If F is not of characteristic 2 then 1F + 1F # OF. Writing 2 = 1F + 1F
we have that 2 E F. Given x c V we then observe that
Ip
11611P
0  I0n pJ
and A is then similar to this matrix. Conversely, if A is similar to a
matrix of the form AP then there is an invertible matrix Q such that
Q1 AQ = Ap. Then
A2 (QAPQ1)2 = QO2PQ1 = QjnQ1
= = In.
37
Book 4 Linear algebra
0 1
vp =
38
Solutions to Chapter 1
Consequently A is similar to this matrix. Conversely, if A is similar to
a matrix of the form VP then there is an invertible matrix Q such that
Q1 AQ = V p and so
A2= (QVPQ1)2 = QV Q1 = QInQ1 = In.
1.10 If t E £(IR2, IR2) is given by t(a,b) = (b,0) then clearly Imt = Kert
{0}.
If t E C(IR3,IR3) is given by t(a,b,c) = (c,0,0) then Imt C Kert.
If t E £(IR3, IR3) is given by t(a,b, c) = (b,c,0) then Kert C Imt.
If t is a projection then Im t f1 Ker t = {0} and none of the above are
possible.
1.11 Consider the elements of £(IR3, IR3) given by
ti (a, b, c) _ (a, a, 0);
t2 (a, b, c) _ (0, b, 0);
t3 (a, b, c) _ (0, b, c);
t4(a,b,c)=(0,ba,c);
tb(a,b,c) _ (a,0,0).
Each of these transformations is a projection. We have
Ker tb = Ker t 1 but Im tb Im t1 f
Im t3 = Im t4 but Ker t3 ¢ Ker t4.
Also, t1 o t2 = 0 but t2 o ti 96 0. (Note that t2 o t1 is not a projection.)
1.12 Clearly, ei + e2 is a projection if and only if (denoting composites by
juxtaposition) e1e2 + e2e1 = 0. Thus if e1e2 = 0 and e2e1 = 0 then
the property holds. Conversely, suppose that e1 + e2 is a projection.
Then multiplying each side of e1 e2+e2e1 = 0 on the left by e1 we obtain
ele2+e1e2e1 = 0, and multiplying each side on the right by e1 we obtain
ele2el + e2e1 = 0. It follows that e1e2 = e2e1. But e1e2 + e2e1 = 0 also
gives e1e2 = e2e1. Hence we have that each composite is zero.
When ei + e2 is a projection, we have that
Ker(ei + e2) = Kerel fl Kere2i
Im(ei +e2) = Imel ®Ime2.
1.13 Take U = {(0, a, b) a, b E IR}. Then IR3 = V ® U since it is clear that
I
e2 = (?iAjej)2
(k)+ (k)
_ A e + ... + akek + A1A2e1e2 + A2.1e2e1
..+
+ + AkAklekek1
l
sGJ
= e,
and so e is also a projection. To show that Ime = Imel it suffices to
prove that e o el = el and e1 o e = e. Now
(A1e1 + + .kek)el = Ale, + + Akel = el
gives the first of these, and the second is similar.
For the last part, consider e, f E .C(IR2, IR2) given by
e(a, b) = (a, 0), f (a, b) = (0, b).
Then e and f are projections but clearly e + if is not.
a 2
1.15 Since sums and scalar multiples of step functions are step functions it is
clear that E is a subspace of the real vector space of all mappings from
IR to IR. Given 6 E E, the step function t9i whose graph is
ai ai+1
i.e. the function that agrees with t9 on [ai, ai+1 [ and is zero elsewhere,
40
Solutions to Chapter 1
is given by the prescription
Oi(x) = O(ai)[ear(x)  ear+,(x)]
It follows that {ek I k E [0, 1[} generates E since then
n+1
tf = E 9gi.
i=0
Since the functions ek clearly form an independent set, they therefore
form a basis of E.
It is likewise clear that F is a vector space and that G is a subspace
of F. Consider now an element of F, as depicted in the diagram
ai+1 ai
41
Book 4 Linear algebra
Z
ai ai+1
i.e. the function that agrees with µ on [ai, ai+1 [ and is zero elsewhere.
Let the gradient in the interval [ai, ai+i [ be bi, so that di = p(ai) +
bi(ai+l  ai). Then it can be seen that
1+i/ 0
0 1ice
42
Solutions to Chapter 1
1.18 Since t has 0 as an eigenvalue we have t(v) = 0 for some nonzero v E V
and hence t is not injective, so not invertible. Thus if t is invertible
then all its eigenvalues are nonzero. For the converse, suppose that t is
not invertible and hence not injective. Then there is a nonzero vector
V E Ker t, and t(v) = 0 shows that 0 is an eigenvalue of t.
If now t is invertible and t(v) = av with A # 0 then v = t1 [t(v)] _
t1(Av) = Jet1(v) gives t1(v) = A' 1v and so A1 is an eigenvalue of
t1 with the same associated eigenvector. (Remark. Note that we have
assumed that V is finitedimensional (where?)in fact the result is false
for infinitedimensional spaces.)
1.19 Suppose that A is a nonzero eigenvalue of t. Then t(v) = Av for some
nonzero v E V and
0 = tm(v) = tm,[t(v)] = t'' (AV) = ... = Amv,
An
0 1 2 3 ... n
0 0 1 3 ... an(n  1)
0 0 0 0 ... 1
3A 1
1 3A =Az6A+8=(A4)(A2)
so the eigenvalues are 2 and 4, each of geometric multiplicity 1. For the
eigenvectors associated with the eigenvalue 2, solve
1 y1=[01
44
Solutions to Chapter 1
to obtain the eigenspace {[x, z] x E IR}. Since A has distinct eigen
I
1 1 1
P 0 1 2 .
1 1 1
2 2 4 z 0
0 1 1 x 0
0 0 y= 0
1 l1
1
0 0 1 z 0
and we have
An1
[:1=
The characteristic polynomial of A is X2  2X 1 and its eigenvalues are
a1 = 1 + f and )2 = 1  v'2. Corresponding eigenvectors are [f, 1]
and [vi, 1]. The matrix
1 1
P=
is such that P1 AP = diag{A1,A2}.
In the new coordinate system, I i ] becomes P1 { i ] = [1'1]where
L
P2
f+1 f1
P1=
2f , P2= 2/
We then have
an =P1Ai1viP2.A21vi, + P2,\21
bn =p1Ai1
from which we see that
an __ P,An,\/2
I P2 a2
+P2\a1
bn PlAi1
f (P2/P1)(A2/a1)n1vi
1 + (p2/P1)(A2/A1)n1
1  (P2/P1)(A2/A1)n1
_ (1
+ (P2/P1)(A2/)t1)n1
46
Solutions to Chapter 1
Now since 0 < 1A2/1\1I < 1 we deduce that
hm
n.oo
an
b ,,,
=f.
1.25 Since f (X) and g(X) are coprime there are polynomials p(X) and q(X)
such that f (X)p(X) + g(X)q(X) = 1. Let c = p(t) and d = q(t). Then
ac + bd = idv.
Suppose now that v is an eigenvector of ab associated with the eigen
value 0. (Note that ab has 0 as an eigenvalue since t is singular.) Let
u = a(cv) and w = b(dv). Then since a, b, c commute we have
bu = bacv = cabv = 0,
aw = abdv = dabv = 0.
= rsr(s + t)
= pq.
Suppose now that it is true for all r < n where n > 2. Then
qn+1
= qn q = pn1q2 = pn1pq = pnq,
which shows that it holds for n + 1. The second equality is established
in a similar way.
(2) If r is nonsingular then r' exists and consequently from rtr = 0
we obtain the contradiction t = 0. Hence r is singular, so both p and
q are singular and hence have 0 as an eigenvalue. Consequently we see
that p(X) and q(X) are divisible by X.
(3) Let q(X) = a1X+a2X2+ +a,.Xr. Then a1q+a2g2+ +a,.qr' _
0 and so(a1Q + + a,.q'')p = 0, i.e. a1p2 + + arp''}1 = 0 which
shows that p satisfies Xq(X) = 0. Similarly, q satisfies Xp(X) = 0.
By (3), p(X) divides Xq(X), and q(X) divides Xp(X), so we have
1 3a 1
M(a,Q, 7) 1 = 3a = 3a 1
1 3a 1
To compute the matrix of f relative to the basis {el + e2 + e3, e2, e3}
we observe that, by the above,
49
Book 4 Linear algebra
and that
f (e2) = (a  /3 +'y)e1 + ae2 + (a + /3  'y)e3
=(a/.3+7)(ei+e2+e3e2e3)+ae2+(a+pry)e3
= (a  Q + ry)(el + e2 + e3) + (/3 'y)e2 + (2Q  2ry)e3i
f(e3) = (a  ry)el + (a + Q + ry)e2 + (a  /3)e3
=(a'r)(e1+e2+e3e2e3)+(Q+27)e2+ (7Q)e3
=(ary)(e1+e2+e3)+(/3+2ry)e2+(79)e3
The matrix of f relative to {e1 + e2 + e3, e2, e3) is then
3a a+ry a  ry
L= 0 /3i /3+2ry
0 2/3try ry/3
Since L and M(a, /3, y) represent the same linear mapping they are sim
ilar and therefore have the same eigenvalues. It is readily seen that
Bp = {x, f(x),...,fp1(x)}
Suppose that
On applying fP1 to (*) and using the fact that fP = 0, we see that
Ao fP1(x) = 0 whence we deduce that Ao = 0. Deleting the first term
in (*) and applying fp2 to the remainder, we obtain similarly Al = 0.
Repeating this argument, we see that each Al = 0 and hence that Bp is
linearly independent.
It follows from the above that if f is nilpotent of index n = dim V
then
B. = {x, f(x),..., fn1(x)}
50
Solutions to Chapter 1
is a basis of V. The matrix of f relative to Bn is readily seen to be
0 0
I* = In1 0 .
Eajva+E# f(Vj)=0.
Using the given identity, we can rewrite this as the following equation
in the Qvector space V :
E(aj  if1)v, = 0.
{vi,...,vn,f(vl),.... f(vn)}
we deduce immediately from the fact that f o f =  idv that the matrix
of f relative to this basis is
1
0 In
In ].
0
V Y = Z (A  AI).(A  AI)Z = 0
a + ib
t4 = t2 + 4(t2  id);
t6 = t2 +4(1+4) (t2  id);
t2p=t2+4(1+4+42+ ..+4p2)(t2id).
It is easy to see by induction that this is indeed the case. Thus we see
that t2n
= t2 + 4(11 + 4 +  + 4n2) (t2  id)
14(t2id)
=t2+4(1_4n_1)
Thus the matrix of f relative to the basis {b1, b'2, ... , b,n} is of the form
1 Al 012 Pin
A= 0 M
If wEW,say w=w1b1+E1>2w;b; then
0 0 1
54
Solutions to Chapter 1
1.36 We have that
t(1) = 5  8X  5X2,
t2(1) = 5(5  8X  5X2)  8(1 + X + X2)  5(4 + 7X + 4X2),
t3(1) = 0.
40 [00]
[2 5 40][y]
so 1 has geometric multiplicity 1 with [8, 51 as an associated eigenvec
tor. Hence the Jordan normal form is
Take vi = [8, 5]. Then a possible solution for v2 is [5, 3], giving
55
Book 4 Linear algebra
(b) The characteristic polynomial is (X+1)2. The eigenvalues are 1
(twice) with geometric multiplicity 1, and a corresponding eigenvector
is [1, 01. The Jordan normal form is
(A+I2)v1=0, (A+I2)v2=v1
Take v1 = [1, 0] and v2 = [0, 1]; then
P [1O
(Any Jordan basis is of the form {[c, 0], [d, c]} with P =
c cdl
0
L
(c) The characteristic polynomial is (X  1)3, so the only eigenvalue is
1. It has geometric multiplicity 2 with {[I, 0, 01, [0, 2,3]) as a basis for
the eigenspace. The Jordan normal form is then
1 1 0
0 1 0 .
0 0 1
Now (A I3)2 = 0 so choose v2 to be any vector not in ([1, 0, 01, [0, 2, 3]),
for example v2 = [0, 1, 0]. Then v1 = (AI3)v2 = [3, 6, 9]. For v3 choose
any vector in ([1, 0, 0], [0, 2,31) that is independent of [3,6,9], for example
V3 = [1, 0, 0]. This gives
3 0 1
P= 6 1 0 .
9 0 0
56
Solutions to Chapter 1
(d) The Jordan normal form is
3 1 0
0 3 0 .
0 0 3
2
[0 2]
1 1 1 1 0 x 0
0 1 0 0 0 11
0
0 0 1 1 0 z 0
0 0 0 0 1 t 0
0 1 1 1  1 w 0
57
Book 4 Linear algebra
to obtain w = y = x = 0, z + t = 0. Thus the general eigenvector
associated with the eigenvalue 1 is [0, 0, z, z, 0] with z # 0. The Jordan
block associated with the eigenvalue 1 is
1 1 0
0 1 1 .
0 0 1
0 0 0 0 1 t 1
0 1 1 1 1 w 0
0 1 1 1 1 w 1
to obtain y = 0, t + z = 0, w = 1,x = 1, so we consider [1,0,0,0,11.
A Jordan basis is therefore
{[1, 0, 0, 0, o], [0, 1, 1,0,011 10,0, 1,1, o], [1,0,0, 11 1], [1101010, ill
and a suitable matrix is
1 0 0 1 1
0 1 0 0 0
P= 0 1 1 0 0
0 0 1 1 0
0 0 0 1 1
58
Solutions to Chapter 1
1.39 (a) The Jordan form and a suitable (nonunique) matrix P are
2 1 0 2 5 5
J= 0 2 0 , P= 2 3 8 .
0 0 2 3 8 7
(b) The Jordan form and a suitable P are
2 0 0 0 4 3 2 07
0 1 1 0 5 4 3 0
0 0 1 1
P= 2 2 1 0
0 0 0 1 11 6 4 1j
1.40 The Jordan normal form is
2 0 0 0 0
0 2 1 0 0
0 0 2 0 0
0 0 0 3 1
0 0 0 0 3
A Jordan basis is
{[2, 1, 0, 0,1], [1,0,1,0,0], [0, 1, 0,1, 0], [1,0,0,1,0], [2, 0, 0, 0, 1]j.
1.41 The minimum polynomial is (X  2)3. There are two possibilities for
the Jordan normal form, namely
2 1 0 0 0 2 1 0 0 01
0 2 1 0 0 0 2 0 0 0
J1= 0 0 2 0 0 , J2 = 0 0 2 1 0
0 0 0 2 0 0 0 0 2 1
0 0 0 0 2 0 0 0 0 2
0 0 0 0 2
59
Book 4 Linear algebra
have two linearly independent eigenvectors. Both pieces of information
are required in order to determine the Jordan form. For the given matrix
this is J2.
1.42 A basis for IR4[X] is {1,X,X2,X3} and D(1) = 0,D(X) = 1,D(X2) _
2X, D(X3) = 3X2. Hence, relative to the above basis, D is represented
by the matrix
0 1 0 0
0 0 2 0
0 0 0 3
0 0 0 0
0 0 0 0
3 0 0 3 1 0 3 1 0 3 0 0
0 3 0 , 0 3 1 0 3 0 , 0 3 1.
0 0 3 0 0 3 0 0 3 0 0 3
0 1 and 1 0
[0 0][0 0] [0 0][0 01
(tAidv)ivi=(tAidv)i1vt_i = ...=0.
Thus there is one eigenvector associated with each block, and so there
are
dim Ker(t  A idv )
blocks.
Consider Ker(t  A idv)j. For every 1 x 1 block there corresponds
a single basis element which is an eigenvector in Ker(t  A idv)j. For
every 2 x 2 block there correspond two basis elements in Ker(t  A idv)j
if j > 2 and 1 basis element if j < 2. In general, to each i x i block
there correspond i basis elements in Ker(t  A idv)' if j > i and j basis
elements if j < i.
It follows that
dj=n1+2n2+..+(j1)nj1+9(nj+nj+l+...)
and a simple calculation shows that 2di  dt_1  di+1 = ni.
1.46 The characteristic polynomial of A is (X  2)4, and the minimum poly
nomial is (X  2)2. A has a single eigenvalue and is not diagonalisable.
The possible Jordan normal forms are
2 1 0 01 2 1 0 0
0 2 0 0 0 2 0 0
0 0 2 0 0 0 2 1
0 0 0 2 0 0 0 2
61
Book 4 Linear algebra
Now Ker(A  214) = {[x, y, z, t] I 2x  y + t = 0}, and we must choose
v2 such that (A  214)2v2 = 0 but v2 0 Ker(A  214). So we take
v2 = [1,0,0,0], and then v1 = (A  214)v2 = [2,2, 2,21. We now
wish to choose v3 and v4 such that {vi, v3i v4} is a basis for Ker(A214).
So we take v3 = [0, 1, 0, 1] and v4 = [0, 0, 1, 0]. Then we have
2 1 0 0
P= 2
2
0
0
1
0
0
1
2 0 1 0
To solve the system X' = AX we first solve the system Y' = JY,
namely
yi = 2yi + Y2
y2' = 2y2
ys=2y3
y4 = 2y4
The solution to this is clearly
Zt
y4 = c4e
2t
y3=c3e
2t
y2 = c2e
cle2t.
Yi = c2te2t +
Since now
2 1 0 0 c2te2t + cie2t
0 0 c2e2t
X=PY= 2
2 0
1
0 1 c3e2t
L 2 0 1 0 c4e2t
we deduce that
xi = 2c2te2t  2c1e2t + c2e2t
C3e2t
x2 = 2c2te2t  2c1e2t +
62
Solutions to Chapter 1
1.47 (a) The system is X' = AX where
`4 = [1 0].
xl = aet + 4be4t
x2 = aet  befit
4 1 1
A= 1 2 1 .
1 1 2
so that
xi = ae2t + (b + c)e3t
x2 = ae2t + be 3t
x3 = ae2t + ce3t.
5 6 6
A= 1 4 2 .
3 6 4
0 9 5
P= 6 1 0 .
9 0 0
0 9 7
e2t
([1,1 + V31e f3t + [1, 1 + V3]e t ).
2Nf3
1.50 Let x = xl,xi = x2,4' = x2 = X3, X'1" = x3 = 2x3 + 4x2  8x1. Then
the system can be written in the form X' = AX where
0 1 0
A= 0 0 1 .
8 4 2
0 0 2
P= 2 1 2 .
4 4 4
65
Book 4 Linear algebra
Now solve the system Y' = JY to get
1 0 1 cle2t + c2te2t
X=PY= 2 1 2 c2 e2t
4 4 4 c3e2t
66
Solutions to Chapter 2
2.1 (a) f H f' does not define a linear functional since f' 0 IR in general.
(b),(c),(d) These are linear functionals.
(e) 99 : f F+ J 1f 2 is not a linear mapping; for example, we have
0 = t9[ f + (f)] whereas in general
,O(,f)+0(f)=2 f f2#0.
0
1
<p(a f +'8g) = f fo
1 (t) [a.f (t) + Q9(t)] dt
0
= c (f) + &M.
2.3 The transition matrix from the given basis to the standard basis is
1 1 0
P= 0 1 1
1 0 1
P1 = _ 12 2
1
2 1
2 2 2
1 1 1
2 2 2
Book 4 Linear algebra
Hence the dual basis is
2.4 The transition matrix from the basis A' to the basis A is
PI3 4].
Its inverse is
p1 = [2 2 ].
P=
Its inverse is
P1 2
1]
so the dual basis is (b), namely { [1, 0], [2, 1] }.
2.6 (i) {[2, 1, 1,0j,[7,3,1,1], [10,5,2, 1], [8,3,3, 1]};
(ii) {(4,5,2,11),(3,4,2,6), (2, 3, 1,4), (0, 0,0, 1)}.
2.7 Since V = A (D B we have
Consequently, Vd = Al ® B1.
The answer to the question is `no' : Ad is the set of linear functionals
f : A > F so if A $ V we have that Ad is not a subset of Vd. What is
true is : if V = A ® B then V d = A' ® B' where A', B' are subspaces
of Vd with A' Ad and B' Bd. To see this, let f E Ad and define
f : V + F by f (v) = f (a) where v = a + b. Then V : Ad > Vd
given by o(f) = f is an injective linear transformation and V(Ad) is a
68
Solutions to Chapter 2
subspace of Vd that is isomorphic to Ad. Define similarly µ : Bd + Vd
by µ(g) = g where g(v) = g(b). Then we have
V d = co(Ad) ® µ(Bd).
2.8 { fl, f2i f3} is linearly independent. For, if Al f1 + A2f2 + )t3f3 = 0 then
we have
Since the coefficient matrix is the Vandermonde matrix and since the t;
are given to be distinct, the only solution is Al = A2 = A3 = 0. Hence
{ fl, f2i f3} is linearly independent and so forms a basis for (IR3[X])d
If {p1i p2i p3} is a basis of V of which {fl, f2i f3} is the dual then we
must have fg(p3) = Sgt; i.e.
pi(ts) = Std.
p3(X)  ( t2)
{V1,...,vk,vk+1,.... vn}
from which the result follows. The final statements are immediate from
the fact that Imt = (Kertd)1 (see question 2.10).
2.12 To find Y we find the dual of {[1, 0, 0], [1, 1, 0], [1,1,1]}. The transition
matrix and its inverse are
1 1 1 1 1 0
P= 0 1 1, P'1= 0 1 1 .
0 0 1 0 0 1
70
Solutions to Chapter 2
Thus Y = {(1, 1, 0), (0, 1, 1), (0, 0,1)}.
The matrix of t with respect to the standard basis is
2 1 0
1 1 1
0 0 1
and the transition matrices relative to X, Y are respectively
1 0 0 1 1 1
1 1 0 , 0 1 1.
0 1 1 0 0 1
1 0 0 2 1 0 1 1 1 2 3 3
1 1 0 1 1 1 0 1 1 = 3 5 6.
1 1 1 0 0 1 0 0 1 3 5 5
which gives
1 7 6 6
mats =11 6 9 2 .
_6 2 9
Lx yJ[ 3 11[y]= 1.
[1/v, 1/V
1/vf2
1/,42
P1/f 1//
1// 1/f
73
Book 4 Linear algebra
xl Pt 1 = Ptx.
 [ X, I Y1 X
yI
_ [xi yi ]I 0
yi,
O4lIx
= 2z2  4y1.
Thus the principal axes are given by xl = 0 and yl = 0, i.e. y = x
and y = x.
The ellipsoid is represented by the equation
7 2 0
[x y zJ
0
6
2 5
][;].
z
7 2 0
The eigenvalues of A = 2 6 2 a re 3, 6, 9 with associated nor
0 2 5
malised eigenvectors
2 2 1
xl x
xi = Yi = y = Ptx.
zl z
pt
74
Solutions to Chapter 2
Then we have x = Pxj and the original equation becomes (with a
calculation similar to the above)
X2 2 = 1.
+ 6y12 + 9z1
Since
x1 = (x + 2y + 2z),
y1 = s (2x  y + 2z),
z1 = s (2x + 2y  z),
s
the xlaxis is given by y1 = z1 = 0 and has direction numbers (1, 2,2);
the ylaxis is given by x1 = z1 = 0 and has direction numbers (2,1,2);
the z1axis is given by x1 = y1 = 0 and has direction numbers (2, 2, 1).
2.18 Let {v1,.. . , vn} be an orthonormal basis of V. Then for every x E V
we have x = >'=1(xIvk ) vk so in particular
= n
(i = i,...,n) f(vi) E(f(vi)Ivk)vk.
k=1
2.19 The first part is a routine check of the axioms. Using the fact that
tr(AB) = tr(BA) we have
1
Ir(t)I2 Iq(t)I2 dt = 0
fo
whence we have rq = 0. Since r 54 0 we must therefore have q = 0, a
contradiction.
For the next part of the question note that
(fp(q)Ir) = (pglr)
4
= f0 1 p(t) q(t) r(t) dt
=
f0
1 q(t) p(t) r(t) dt
_ (glpr)
_ (ql fp(r) )
so (fP)* = f.
Integration by parts gives
f
2
0 xy (yn n+2) dy
1 1
= f xyn+1 dy  n 2 2 Jo2
xydy
xyn+2 Y1 2
xy2I!1
a= (x2 2, x
(x2 ,x2)33 3
Since
e2(x) = x2  x + g.
If K(f) = Af then
1
Xf(x) = x fo yf(y) dy
0
(idt)1(id+t) _ (id+t)(idt)1.
Hence s* = (id+t)(idt)1 and so
We have that
det(T + iS  I) # 0.
U = (I + T + iS)(I  T  iS)1
_ [I + (T + iS)] [I  (T + iS)]i.
The fact that U is unitary now follows from the previous question.
78
Solutions to Chapter 2
2.24 It is given that At = A, St = S, AS = SA,det(A  S) # 0. Let
B = (A + S)(A  S)1. Then we have
BtB = [(A + S)(A  S)1]t(A+ S)(A  S)1
= [(A  S)'It (A+S)t(A+S)(A S)1
= (At  St)' (At + St)(A + S)(A  S)1
= (A+S)1(A S)(A + S)(A  S)1
_ (A + S)1(A + S)(A  S)(A  S)1,
the last equality following from the fact that since A, S commute so do
A + S and A  S. Hence Bt B = I and B is orthogonal.
2.25 Since
(1) AA+A=0
we have, taking transposes,
AtA+At=0
and hence, taking complex conjugates,
r1.
81
Book 4 Linear algebra
Since A1,...,A,. are distinct the (Vandermonde) coefficient matrix has
nonzero determinant and so the system has a unique solution. We then
have
D=aoI+a1D+a2D2+ +a,._1D''1
and consequently
A =P1DP=P1(aoI+a1D+ +a,._1D''1)P
=aoI+ajA+ +a,._1A''1.
2.31 Suppose that A is normal and let B = g(A). There is a unitary matrix
P and a diagonal matrix D such that
A=P DP=P1DP.
Consequently we have
A 2 2
det(A  AI) = det 2 A 1 = A(A2 + 9),
2 1 A
1 1
U=3 2 .
83
Book 4 Linear algebra
Then we have
4
3z=Ay= 1 1
1
which gives
f
Relative to the basis {u, y, z} the representing matrix is now
0 0 0
0 0 3 .
0 3 0
1/3 0 4/3f
P= 2/3 1/f 1/3f
2/3 1/f 1/3f
2.84 Let Q be the matrix that represents the change to a new basis with
respect to which q is in normal form. Then xtAx becomes ytBy where
x = Qy and B = QtAQ. Now
q(x)=9(y)=yz
1+...+ypz yp+21...ypa+m
Now if q has the same rank and signature then clearly m = 0. Hence
ytBy > 0 for all y E IRn since it is a sum of squares. Consequently
xtAx > 0 for all x E IRn.
Conversely, if xtAx > 0 for all x c IRn then ytBy > 0 for all y E IRn.
Choose y = (0, ... , 0, yj, 0, ... , 0). Now the coefficient of yE must be 0 or
1, but not 1. Therefore there are no terms of the form y?, so m = 0
and q has the same rank and signature.
84
Solutions to Chapter 2
If the rank and signature are both equal to n then m = 0 and p = n.
Hence
ytBy=yi+...+yn,
But a sum of squares is zero if and only if each term is zero, so xtAx > 0
and is equal to 0 only when x = 0.
Conversely, if xtAx _> 0 for x E IR' then ytBy > 0 for y E IR" so
m = 0, for otherwise we can choose
with yp+1 = 1 to obtain ytBy < 0. Also, xtAx = 0 only for x = 0 gives
ytBy = 0 only for y = 0. If p < n then, since we have m = 0, choose
y = (0,..., 0, 1) to get yt By = 0 with y 0 0. Hence p = n as required.
2.35 The quadratic form q can be reduced to normal form either by complet
ing squares or by row and column operations. We solve the problem by
completing squares. We have
1 0 0
0 1 0 .
0 0 1
85
Book 4 Linear algebra
2.86 Take
9((xi, x2), (y1, y2)) = 2 [f ((xi, z2), (y1, y2)) + f ((yi, y2), (z1, z2))]
= x1y1 + 2(x1y2 + x2y1) + X2!2
and
h((z1, x2), (y1, y2)) = a [f ((xi, x2), (yi, y2))  f ((yl, y2), (x1, x2))I
=ZX1Y2+ZX2Yl.
We have &1, z2) = f ((xii x2), (x1i x2)) = zi + 3x1 x2 + z2 and so the
matrix of q relative to the standard basis is
3
1
2
3
1
2
and the rank is 2. The form is neither positive definite nor positive
semidefinite.
2.87 In matrix notation, the quadratic form is
4 1 1 x
xtAx=[x y z, 1 4 1 y .
1 1 4 z
11
V =Pt y
w
transforms the original quadratic form to
z
X
2y2z2+xy+xz=2(y+4x)2$x2+xzz2
= 2(y + 4x)2  '(x  4z)2 + z2.
Thus the rank is 3 and the signature is 1.
(2) In 2xyxzyz put x=X+Y,y=XY,z=Ztoobtain
2(X2  Y2)  (X+ Y)Z  (X_ Y)Z
= 2X2  2Y2  2XZ
= 2(X  Z)2  2 Z2  W.
1 1 2
A= 1 2 3 .
2 3 9
Now
x2 + 2y2 + 9z2  2xy + 4xz  6yz
=(xy+2z)2+y2+5z22yz
= (Cx y+2z)2 +(yz)2 +4z2
= [2 + Q2 + S2
87
Book 4 Linear algebra
where E=xy+2z,, =yz,s=2z. Then
z=1
y=rI+2S
X+q  ZS
1 1
21
P= 0 1
z
0 0 2
0 2 0
A= 2 0 1 .
0 1 0
Now
4xy+2yz=(x+y)2(xy)2+2yz
=X2Y2+(XY)z [X=x+y,Y=xy]
=(X+Zz)2Y2Yz4z2
=(X+Zz)2_(Y+Zz)2
= r2  n2,
where e=x+y+Zz,tl=xy+Zzand S=z, say. Then
x= W+4S)
y = 2(  n)
z=S
so if we let
1 1 1
2 2 2
P= 2 2 0
0 0 1
88
Solutions to Chapter 2
then we have
and so
1 1/f 1/f 2
P= 0 1/\ 1/' 3
o o 1/ 2 2
o 0 0 1
gives
x
y
z
=P 71
t Ti
+ (XI  X2)'
= (n  1)(x2 + ... +.. xn)
2(xlx2+ ...+xlxn+x2x3+ ...+x2xn+ ...+.xn_lxn)
= xt Ax
where
n1 1 1 ... 1
1 n1 1 ... 1
A= 1 1 n  1 ... 1
1 1 1 ... n  1
1
Now, by adding n 1 1 times the first column to columns 2,... , n and
adding n 1 times the first row to rows 2, ... , n, then multiplying rows
nn l , we see that A is congruent to
2, ... , n and columns 2, ... , n by
the matrix
n 1 0 0 ... 0
0 n2 1 ... 1
0 1 n  2 ... 1
0 1 1 ... n2
Repeating this process we can show that A is congruent to the matrix
n1 0 0 0 ... 0
0 n2 0 0 ... 0
0 0 n3 1 ... 1
0 0 1 n  3 ... 1
L 0 0 1 1 ... n3
90
Solutions to Chapter 2
Continuing in this way, we see that A is congruent to the diagonal matrix
diag{n1,n2,n3,...,2,1,0}.
Consequently the rank is n  1 and the signature is n  1.
2.41 We have
n
E (Ars+r+s)xrx,
r,e= 1
n n n
_ E (rxr)(sx,) + >2 (rxr)x. + E xr(sxe)
r,e=1 r,8=1 r,8=1
_ .1 I
n 12
rxr) +2 (n
l(n
> xr) I > rxr
r,s=1 r=1 r1
_ A(x1 + 2x2 + + nxn)2
+ 2(x1 + + xn)(x1 + 2x2 + + nxn).
Now let
y1 =x1+2x2+...+nxn
y2=xl+x2+ +xn
y3 = x3
yn = xn
Then the form is Ayi + 2y1112 which can be written as
A(yl + *y2)2  1 y2 if A 0;
I(yl + 112)2  4(y,  y2)2 if A = 0.
Hence in either case the rank is 2 and the signature is 0.
2.42 Since A is an eigenvalue of A there exist a1, ... , an not all zero such
that Ax = Ax where x = [a1 ... an]t. Then xtAx = Axtx and so, if
Q = xtAx then we have
Q(a1,... , an) = A(ai + ... + an),
2.44 Let Q(x,x) = xtAtAx = (Ax)tAx. Since detA $ 0 we may apply
the nonsingular linear transformation described by y = Ax so that
Q(x, x)  Q(y, y) where
Q(yfY) =VY=yl + "'+yn
Thus Q is positive definite.
91
Book 4 Linear algebra
2.44 Let {u1, ... , un} be an orthonormal basis of the real inner product space
IRn under the inner product given by (x1y) = f (x, y). Let x = E 1 xiu1
and y = ni_1 yiuiThen
n n n
/
f (x) y) _ (xIy) = (E xiui I > yiui) = E xiyi
i=1 i=1 i=1
x= y=
Lxn yn
xl y l
x  Y=
xn yn
0 0 1
A= 1 101, B= 0 1 0.
0 1 0 1 0 0
Since
1 A 1
det(B  AA) = det A 1 A =A2+1
1 A 0
A= 2 1 2
1 1 1
93
Book 4 Linear algebra
x2+y2+z2+zy+xz+yz=(x+2y+az)2+4y2+4z2+ayz
=(x+2y+az)2+4(y+3z)2+ 3z2
which is greater than 0 for all x # 0. So the integral converges to
ir3/2/ det A, i.e. to 2P.
94
Test paper 1
by the matrix
1 8 6 4 1
0 1 0 0 0
0 1 2 1 0
0 1 1 0 1
0 5 4 3 2
Find a basis of Q5 with respect to which the matrix of t is in Jordan
normal form.
S Let rP1, ... , rPn E (IRn)d. Prove that the solution set C of the linear
inequalities
rP1(x)>0, rai(x)>0, ... ,rpn(x)>0
satisfies
(a) a,QEC=a+QEC;
(b) aEC,tEIR,t>O==> taEC.
Show that if VI, ... , rPn form a basis of (IR')d then
C = {t1 al +  + tnan I t1 E IR, t{ > 0}
where {a1, ... , an} is the basis of IRn dual to the basis {(pl,... , rPn }.
Hence write down the solution of the system of inequalities
(P1 (X) > 0, rP2(x) >_ 0, S03 (X) > 0, 94 (X) > 0
where rP1 = [4,5,2, 111, rP2 = [3, 4, 2,6], ro3 = 12,3,1,4] and rP4 =
[0, 0, 0, 1].
4 Let A be a real orthogonal matrix. If (A  AI)2x = 0 and y = (A  AI)x
show, by considering yty, that y = 0. Hence prove that an orthogonal
matrix satisfies an equation without repeated roots.
Prove that a real orthogonal matrix with all its eigenvalues real is
necessarily symmetric.
5 Prove that if a real quadratic form in n variables is reduced by a real
nonsingular linear transformation to a form
97
Test paper 2
_ J1 ifs=r+
ura = l 0 otherwise,
11 ifr+s= n+1;
jr°  0 otherwise,
3 6 5
3 Let V be a vector space of dimension n over a field F. Suppose that W
is a subspace of V with dim W = m. Show that
(a) dim W' = n  m;
(b) (Wl)1 = W.
If f, g E Vd are such that there is no A E F \ {0} with f = Ag, show
that Ker f n Ker g is of dimension n  2.
4 Let V be a finitedimensional complex inner product space and let f :
V + V be a normal transformation. Prove that
f2(x) = 0 f(x) = 0
0 0 x33 0 0 a
A= 2 2 2 2
1
12 2i 12 2
1 1 1 1
2 2 2 2
Show that
(k  1)Q(k, r) = kQ(k  1, r  1) + yr
where Yr is a homogeneous linear function of z1,.. . , xr.
Hence find the rank and signature of
101
Test paper 4
zk
B = {eaz I k = 0,...,n  1}
k!
a
2 Given A = I b use the CayleyHamilton theorem and euclidean
division to show that every positive power of A can be written in the
form
An=th12+Q2A.
If the eigenvalues of A are Al, A2 show that
n  A2A1  A1A2
I2 +
\n \n

lA if Al # A2,
A AZ  Al A2  Al
(1n)AnI2+nAi1A if Al=A2.
Hence solve the system of difference equations
xn+1 = xn + 2yn
yn+1 = 2xn + yn
where x1=0andy1=1.
3 Suppose that f E C(C , Qn) and that every eigenvalue of f is 0. Show
that f is nilpotent and explain how to find dimKer f from the Jordan
normal form of f.
Let f, g E C((6, Q8) be nilpotent with the same minimum polynomial
and dim Ker f = dim Ker g. Show that f, g have the same Jordan normal
form. By means of an example show that this fails in general for f, g E
C(d', (7).
Deduce that if s, t E C(Cn, Qn) have the same characteristic polyno
mial
(X  a1)k'(X  a2)k' ... (X  ar)kr
and the same minimum polynomial, and if
dim Ker(s  at id) = dim Ker(t  ai id)
for 1 < i < r, then s and t have the same Jordan normal form provided
ki < 6 for 1 < i < r.
4 Let V be a vector space of dimension n over a field F.
(i) If s E C (V, V) show that s o s = 0 if and only if Im s C Ker s, in
in.
which case dim Im s <
(ii) Let p E C(VV)
a be such that p' = 0 and pn1 # 0. Show
that there is a basis B = {z1,...,xn} of V such that p(xj) = xj+1 for
j = 1,...,n  1 and p(xn) = 0.
Show that if t = > 1 Aipi1 where each Ai E F then t commutes
with p. Conversely, suppose that t E C (V, V) commutes with p and is
represented relative to the basis B by the matrix [a=j]nxn Prove by
induction that
nj+l
(9 = 1, ... , n) t(xj) _ ail xi+jl
i=1
103
b Show that each of the quadratic forms
6x?+5x2+7x34fx2x3i
7yi + 6y2 + 5y3 + 4yi y2 + 4y2Y3
104