
ELEMENTARY

LINEAR ALGEBRA
K. R. MATTHEWS
DEPARTMENT OF MATHEMATICS
UNIVERSITY OF QUEENSLAND
Second Online Version, December 1998
Comments to the author at krm@maths.uq.edu.au
Contents

1 LINEAR EQUATIONS 1
1.1 Introduction to linear equations . . . 1
1.2 Solving linear equations . . . 6
1.3 The Gauss–Jordan algorithm . . . 8
1.4 Systematic solution of linear systems . . . 9
1.5 Homogeneous systems . . . 16
1.6 PROBLEMS . . . 17

2 MATRICES 23
2.1 Matrix arithmetic . . . 23
2.2 Linear transformations . . . 27
2.3 Recurrence relations . . . 31
2.4 PROBLEMS . . . 33
2.5 Non-singular matrices . . . 36
2.6 Least squares solution of equations . . . 47
2.7 PROBLEMS . . . 49

3 SUBSPACES 55
3.1 Introduction . . . 55
3.2 Subspaces of F^n . . . 55
3.3 Linear dependence . . . 58
3.4 Basis of a subspace . . . 61
3.5 Rank and nullity of a matrix . . . 64
3.6 PROBLEMS . . . 67

4 DETERMINANTS 71
4.1 PROBLEMS . . . 85

5 COMPLEX NUMBERS 89
5.1 Constructing the complex numbers . . . 89
5.2 Calculating with complex numbers . . . 91
5.3 Geometric representation of C . . . 95
5.4 Complex conjugate . . . 96
5.5 Modulus of a complex number . . . 99
5.6 Argument of a complex number . . . 103
5.7 De Moivre's theorem . . . 107
5.8 PROBLEMS . . . 111

6 EIGENVALUES AND EIGENVECTORS 115
6.1 Motivation . . . 115
6.2 Definitions and examples . . . 118
6.3 PROBLEMS . . . 124

7 Identifying second degree equations 129
7.1 The eigenvalue method . . . 129
7.2 A classification algorithm . . . 141
7.3 PROBLEMS . . . 147

8 THREE-DIMENSIONAL GEOMETRY 149
8.1 Introduction . . . 149
8.2 Three-dimensional space . . . 154
8.3 Dot product . . . 156
8.4 Lines . . . 161
8.5 The angle between two vectors . . . 166
8.6 The cross-product of two vectors . . . 172
8.7 Planes . . . 176
8.8 PROBLEMS . . . 185

9 FURTHER READING 189
List of Figures

1.1 Gauss–Jordan algorithm . . . 10
2.1 Reflection in a line . . . 29
2.2 Projection on a line . . . 30
4.1 Area of triangle OPQ . . . 72
5.1 Complex addition and subtraction . . . 96
5.2 Complex conjugate . . . 97
5.3 Modulus of a complex number . . . 99
5.4 Apollonius circles . . . 101
5.5 Argument of a complex number . . . 104
5.6 Argument examples . . . 105
5.7 The nth roots of unity . . . 108
5.8 The roots of z^n = a . . . 109
6.1 Rotating the axes . . . 116
7.1 An ellipse example . . . 135
7.2 ellipse: standard form . . . 137
7.3 hyperbola: standard forms . . . 138
7.4 parabola: standard forms (i) and (ii) . . . 138
7.5 parabola: standard forms (iii) and (iv) . . . 139
7.6 1st parabola example . . . 140
7.7 2nd parabola example . . . 141
8.1 Equality and addition of vectors . . . 150
8.2 Scalar multiplication of vectors . . . 151
8.3 Representation of three-dimensional space . . . 155
8.4 The vector AB . . . 155
8.5 The negative of a vector . . . 157
8.6 (a) Equality of vectors; (b) Addition and subtraction of vectors . . . 157
8.7 Position vector as a linear combination of i, j and k . . . 158
8.8 Representation of a line . . . 162
8.9 The line AB . . . 162
8.10 The cosine rule for a triangle . . . 167
8.11 Pythagoras' theorem for a right-angled triangle . . . 168
8.12 Distance from a point to a line . . . 169
8.13 Projecting a segment onto a line . . . 171
8.14 The vector cross-product . . . 174
8.15 Vector equation for the plane ABC . . . 177
8.16 Normal equation of the plane ABC . . . 178
8.17 The plane ax + by + cz = d . . . 179
8.18 Line of intersection of two planes . . . 182
8.19 Distance from a point to the plane ax + by + cz = d . . . 184
Chapter 1
LINEAR EQUATIONS
1.1 Introduction to linear equations
A linear equation in n unknowns x_1, x_2, ..., x_n is an equation of the form

a_1 x_1 + a_2 x_2 + ... + a_n x_n = b,

where a_1, a_2, ..., a_n, b are given real numbers.

For example, with x and y instead of x_1 and x_2, the linear equation 2x + 3y = 6 describes the line passing through the points (3, 0) and (0, 2).

Similarly, with x, y and z instead of x_1, x_2 and x_3, the linear equation 2x + 3y + 4z = 12 describes the plane passing through the points (6, 0, 0), (0, 4, 0), (0, 0, 3).
A system of m linear equations in n unknowns x_1, x_2, ..., x_n is a family of linear equations

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
        ...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m.

We wish to determine if such a system has a solution, that is to find out if there exist numbers x_1, x_2, ..., x_n which satisfy each of the equations simultaneously. We say that the system is consistent if it has a solution. Otherwise the system is called inconsistent.

Note that the above system can be written concisely as

sum_{j=1}^{n} a_ij x_j = b_i,   i = 1, 2, ..., m.
The matrix

[a_11 a_12 ... a_1n; a_21 a_22 ... a_2n; ... ; a_m1 a_m2 ... a_mn]

is called the coefficient matrix of the system, while the matrix

[a_11 a_12 ... a_1n b_1; a_21 a_22 ... a_2n b_2; ... ; a_m1 a_m2 ... a_mn b_m]

is called the augmented matrix of the system.

Geometrically, solving a system of linear equations in two (or three) unknowns is equivalent to determining whether or not a family of lines (or planes) has a common point of intersection.
EXAMPLE 1.1.1 Solve the equation

2x + 3y = 6.

Solution. The equation 2x + 3y = 6 is equivalent to 2x = 6 - 3y, or x = 3 - (3/2)y, where y is arbitrary. So there are infinitely many solutions.

EXAMPLE 1.1.2 Solve the system

x + y + z = 1
x - y + z = 0.

Solution. We subtract the second equation from the first, to get 2y = 1 and y = 1/2. Then x = y - z = 1/2 - z, where z is arbitrary. Again there are infinitely many solutions.
EXAMPLE 1.1.3 Find a polynomial of the form y = a_0 + a_1 x + a_2 x^2 + a_3 x^3 which passes through the points (-3, -2), (-1, 2), (1, 5), (2, 1).

Solution. When x has the values -3, -1, 1, 2, then y takes corresponding values -2, 2, 5, 1 and we get four equations in the unknowns a_0, a_1, a_2, a_3:

a_0 - 3a_1 + 9a_2 - 27a_3 = -2
a_0 -  a_1 +  a_2 -   a_3 =  2
a_0 +  a_1 +  a_2 +   a_3 =  5
a_0 + 2a_1 + 4a_2 +  8a_3 =  1.

This system has the unique solution a_0 = 93/20, a_1 = 221/120, a_2 = -23/20, a_3 = -41/120. So the required polynomial is

y = 93/20 + (221/120)x - (23/20)x^2 - (41/120)x^3.

In [26, pages 33–35] there are examples of systems of linear equations which arise from simple electrical networks using Kirchhoff's laws for electrical circuits.
Solving a system consisting of a single linear equation is easy. However if we are dealing with two or more equations, it is desirable to have a systematic method of determining if the system is consistent and to find all solutions.

Instead of restricting ourselves to linear equations with rational or real coefficients, our theory goes over to the more general case where the coefficients belong to an arbitrary field. A field F is a set F which possesses operations of addition and multiplication which satisfy the familiar rules of rational arithmetic. There are ten basic properties that a field must have:
THE FIELD AXIOMS.

1. (a + b) + c = a + (b + c) for all a, b, c in F;

2. (ab)c = a(bc) for all a, b, c in F;

3. a + b = b + a for all a, b in F;

4. ab = ba for all a, b in F;

5. there exists an element 0 in F such that 0 + a = a for all a in F;

6. there exists an element 1 in F such that 1a = a for all a in F;

7. to every a in F, there corresponds an additive inverse -a in F, satisfying

a + (-a) = 0;

8. to every non-zero a in F, there corresponds a multiplicative inverse a^{-1} in F, satisfying

aa^{-1} = 1;

9. a(b + c) = ab + ac for all a, b, c in F;

10. 0 ≠ 1.
With standard definitions such as a - b = a + (-b) and a/b = ab^{-1} for b ≠ 0, we have the following familiar rules:

-(a + b) = (-a) + (-b),   (ab)^{-1} = a^{-1} b^{-1};
-(-a) = a,   (a^{-1})^{-1} = a;
-(a - b) = b - a,   (a/b)^{-1} = b/a;
a/b + c/d = (ad + bc)/(bd);
(a/b)(c/d) = (ac)/(bd);
(ab)/(ac) = b/c,   a/(b/c) = (ac)/b;
-(ab) = (-a)b = a(-b);
-(a/b) = (-a)/b = a/(-b);
0a = 0;
(-a)^{-1} = -(a^{-1}).

Fields which have only finitely many elements are of great interest in many parts of mathematics and its applications, for example to coding theory. It is easy to construct fields containing exactly p elements, where p is a prime number. First we must explain the idea of modular addition and modular multiplication. If a is an integer, we define a (mod p) to be the least remainder on dividing a by p: that is, if a = bp + r, where b and r are integers and 0 ≤ r < p, then a (mod p) = r.

For example, -1 (mod 2) = 1, 3 (mod 3) = 0, 5 (mod 3) = 2.
Then addition and multiplication mod p are defined by

a ⊕ b = (a + b) (mod p)
a ⊗ b = (ab) (mod p).

For example, with p = 7, we have 3 ⊕ 4 = 7 (mod 7) = 0 and 3 ⊗ 5 = 15 (mod 7) = 1. Here are the complete addition and multiplication tables mod 7:

⊕ | 0 1 2 3 4 5 6
--+--------------
0 | 0 1 2 3 4 5 6
1 | 1 2 3 4 5 6 0
2 | 2 3 4 5 6 0 1
3 | 3 4 5 6 0 1 2
4 | 4 5 6 0 1 2 3
5 | 5 6 0 1 2 3 4
6 | 6 0 1 2 3 4 5

⊗ | 0 1 2 3 4 5 6
--+--------------
0 | 0 0 0 0 0 0 0
1 | 0 1 2 3 4 5 6
2 | 0 2 4 6 1 3 5
3 | 0 3 6 2 5 1 4
4 | 0 4 1 5 2 6 3
5 | 0 5 3 1 6 4 2
6 | 0 6 5 4 3 2 1
If we now let Z_p = {0, 1, ..., p - 1}, then it can be proved that Z_p forms a field under the operations of modular addition and multiplication mod p. For example, the additive inverse of 3 in Z_7 is 4, so we write -3 = 4 when calculating in Z_7. Also the multiplicative inverse of 3 in Z_7 is 5, so we write 3^{-1} = 5 when calculating in Z_7.

In practice, we write a ⊕ b and a ⊗ b as a + b and a × b or ab when dealing with linear equations over Z_p.

The simplest field is Z_2, which consists of two elements 0, 1 with addition satisfying 1 + 1 = 0. So in Z_2, -1 = 1 and the arithmetic involved in solving equations over Z_2 is very simple.

EXAMPLE 1.1.4 Solve the following system over Z_2:

x + y + z = 0
x     + z = 1.

Solution. We add the first equation to the second to get y = 1. Then x = 1 - z = 1 + z, with z arbitrary. Hence the solutions are (x, y, z) = (1, 1, 0) and (0, 1, 1).

We use Q and R to denote the fields of rational and real numbers, respectively. Unless otherwise stated, the field used will be Q.
1.2 Solving linear equations

We show how to solve any system of linear equations over an arbitrary field, using the GAUSS–JORDAN algorithm. We first need to define some terms.
DEFINITION 1.2.1 (Row-echelon form) A matrix is in row-echelon form if

(i) all zero rows (if any) are at the bottom of the matrix and

(ii) if two successive rows are non-zero, the second row starts with more zeros than the first (moving from left to right).

For example, the matrix

[0 1 0 0; 0 0 1 0; 0 0 0 0; 0 0 0 0]

is in row-echelon form, whereas the matrix

[0 1 0 0; 0 1 0 0; 0 0 0 0; 0 0 0 0]

is not in row-echelon form.

The zero matrix of any size is always in row-echelon form.

DEFINITION 1.2.2 (Reduced row-echelon form) A matrix is in reduced row-echelon form if

1. it is in row-echelon form,

2. the leading (leftmost non-zero) entry in each non-zero row is 1,

3. all other elements of the column in which the leading entry 1 occurs are zeros.

For example the matrices

[1 0; 0 1]  and  [0 1 2 0 0 2; 0 0 0 1 0 3; 0 0 0 0 1 4; 0 0 0 0 0 0]

are in reduced row-echelon form, whereas the matrices

[1 0 0; 0 1 0; 0 0 2]  and  [1 2 0; 0 1 0; 0 0 0]

are not in reduced row-echelon form, but are in row-echelon form.

The zero matrix of any size is always in reduced row-echelon form.
Notation. If a matrix is in reduced row-echelon form, it is useful to denote the column numbers in which the leading entries 1 occur, by c_1, c_2, ..., c_r, with the remaining column numbers being denoted by c_{r+1}, ..., c_n, where r is the number of non-zero rows. For example, in the 4 × 6 matrix above, we have r = 3, c_1 = 2, c_2 = 4, c_3 = 5, c_4 = 1, c_5 = 3, c_6 = 6.
The following operations are the ones used on systems of linear equations and do not change the solutions.

DEFINITION 1.2.3 (Elementary row operations) There are three types of elementary row operations that can be performed on matrices:

1. Interchanging two rows: R_i ↔ R_j interchanges rows i and j.

2. Multiplying a row by a non-zero scalar: R_i → tR_i multiplies row i by the non-zero scalar t.

3. Adding a multiple of one row to another row: R_j → R_j + tR_i adds t times row i to row j.

DEFINITION 1.2.4 (Row equivalence) Matrix A is row-equivalent to matrix B if B is obtained from A by a sequence of elementary row operations.
EXAMPLE 1.2.1 Working from left to right,

A = [1 2 0; 2 1 1; 1 -1 2]
  R_2 → R_2 + 2R_3  [1 2 0; 4 -1 5; 1 -1 2]
  R_2 ↔ R_3          [1 2 0; 1 -1 2; 4 -1 5]
  R_1 → 2R_1         [2 4 0; 1 -1 2; 4 -1 5] = B.

Thus A is row-equivalent to B. Clearly B is also row-equivalent to A, by performing the inverse row operations R_1 → (1/2)R_1, R_2 ↔ R_3, R_2 → R_2 - 2R_3 on B.

It is not difficult to prove that if A and B are row-equivalent augmented matrices of two systems of linear equations, then the two systems have the same solution sets — a solution of the one system is a solution of the other. For example the systems whose augmented matrices are A and B in the above example are respectively

x + 2y = 0          2x + 4y = 0
2x + y = 1    and    x -  y = 2
x -  y = 2          4x -  y = 5

and these systems have precisely the same solutions.
1.3 The Gauss–Jordan algorithm

We now describe the GAUSS–JORDAN ALGORITHM. This is a process which starts with a given matrix A and produces a matrix B in reduced row-echelon form, which is row-equivalent to A. If A is the augmented matrix of a system of linear equations, then B will be a much simpler matrix than A from which the consistency or inconsistency of the corresponding system is immediately apparent and in fact the complete solution of the system can be read off.

STEP 1. Find the first non-zero column moving from left to right (column c_1) and select a non-zero entry from this column. By interchanging rows, if necessary, ensure that the first entry in this column is non-zero. Multiply row 1 by the multiplicative inverse of a_{1 c_1}, thereby converting a_{1 c_1} to 1. For each non-zero element a_{i c_1}, i > 1, (if any) in column c_1, add -a_{i c_1} times row 1 to row i, thereby ensuring that all elements in column c_1, apart from the first, are zero.

STEP 2. If the matrix obtained at Step 1 has its 2nd, ..., mth rows all zero, the matrix is in reduced row-echelon form. Otherwise suppose that the first column which has a non-zero element in the rows below the first is column c_2. Then c_1 < c_2. By interchanging rows below the first, if necessary, ensure that a_{2 c_2} is non-zero. Then convert a_{2 c_2} to 1 and by adding suitable multiples of row 2 to the remaining rows, where necessary, ensure that all remaining elements in column c_2 are zero.

The process is repeated and will eventually stop after r steps, either because we run out of rows, or because we run out of non-zero columns. In general, the final matrix will be in reduced row-echelon form and will have r non-zero rows, with leading entries 1 in columns c_1, ..., c_r, respectively.
EXAMPLE 1.3.1

[0 0 4 0; 2 2 -2 5; 5 5 -1 5]
  R_1 ↔ R_2               [2 2 -2 5; 0 0 4 0; 5 5 -1 5]
  R_1 → (1/2)R_1          [1 1 -1 5/2; 0 0 4 0; 5 5 -1 5]
  R_3 → R_3 - 5R_1        [1 1 -1 5/2; 0 0 4 0; 0 0 4 -15/2]
  R_2 → (1/4)R_2          [1 1 -1 5/2; 0 0 1 0; 0 0 4 -15/2]
  R_1 → R_1 + R_2,
  R_3 → R_3 - 4R_2        [1 1 0 5/2; 0 0 1 0; 0 0 0 -15/2]
  R_3 → (-2/15)R_3        [1 1 0 5/2; 0 0 1 0; 0 0 0 1]
  R_1 → R_1 - (5/2)R_3    [1 1 0 0; 0 0 1 0; 0 0 0 1]

The last matrix is in reduced row-echelon form.
REMARK 1.3.1 It is possible to show that a given matrix over an arbitrary field is row-equivalent to precisely one matrix which is in reduced row-echelon form.

A flowchart for the Gauss–Jordan algorithm, based on [1, page 83], is presented in figure 1.1 below.
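For readers who want to experiment, here is a short Python sketch of the algorithm just described (an addition to the text, not part of the original). It works over Q using exact rational arithmetic from the standard fractions module, so no rounding occurs:

```python
from fractions import Fraction

def gauss_jordan(mat):
    """Reduce a matrix (list of rows) to reduced row-echelon form,
    following STEP 1 and STEP 2 above with exact rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in mat]
    m, n = len(A), len(A[0])
    pivot_row = 0
    for j in range(n):                       # scan columns left to right
        # find a non-zero entry in column j, on or below row pivot_row
        p = next((i for i in range(pivot_row, m) if A[i][j] != 0), None)
        if p is None:
            continue                         # column has no usable pivot
        A[pivot_row], A[p] = A[p], A[pivot_row]          # interchange rows
        piv = A[pivot_row][j]
        A[pivot_row] = [x / piv for x in A[pivot_row]]   # make the pivot 1
        for i in range(m):                   # clear the rest of column j
            if i != pivot_row and A[i][j] != 0:
                t = A[i][j]
                A[i] = [x - t * y for x, y in zip(A[i], A[pivot_row])]
        pivot_row += 1
        if pivot_row == m:
            break
    return A

# The matrix of Example 1.3.1:
B = gauss_jordan([[0, 0, 4, 0], [2, 2, -2, 5], [5, 5, -1, 5]])
for row in B:
    print([str(x) for x in row])   # rows of [1 1 0 0; 0 0 1 0; 0 0 0 1]
```

Running this on the matrix of Example 1.3.1 reproduces the reduced row-echelon form obtained there by hand.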
1.4 Systematic solution of linear systems.

Suppose a system of m linear equations in n unknowns x_1, ..., x_n has augmented matrix A and that A is row-equivalent to a matrix B which is in reduced row-echelon form, via the Gauss–Jordan algorithm. Then A and B are m × (n + 1). Suppose that B has r non-zero rows and that the leading entry 1 in row i occurs in column number c_i, for 1 ≤ i ≤ r. Then

1 ≤ c_1 < c_2 < ... < c_r ≤ n + 1.
[Figure 1.1 (flowchart, reproduced here in outline only): Input A, m, n; start with i = 1, j = 1. If the elements in the j-th column on and below the i-th row are all zero, move to column j + 1 (stopping if j = n). Otherwise let a_{pj} be the first non-zero element in column j on or below the i-th row; if p ≠ i, interchange the p-th and i-th rows; divide the i-th row by a_{ij}; subtract a_{qj} times the i-th row from the q-th row for q = 1, ..., m (q ≠ i); set c_i = j. If i = m or j = n, print A, c_1, ..., c_i and stop; otherwise increase i and j by 1 and repeat.]

Figure 1.1: Gauss–Jordan algorithm.
Also assume that the remaining column numbers are c_{r+1}, ..., c_{n+1}, where

1 ≤ c_{r+1} < c_{r+2} < ... < c_{n+1} ≤ n + 1.

Case 1: c_r = n + 1. The system is inconsistent. For the last non-zero row of B is [0, 0, ..., 1] and the corresponding equation is

0x_1 + 0x_2 + ... + 0x_n = 1,

which has no solutions. Consequently the original system has no solutions.
Case 2: c_r ≤ n. The system of equations corresponding to the non-zero rows of B is consistent. First notice that r ≤ n here.

If r = n, then c_1 = 1, c_2 = 2, ..., c_n = n and

B = [1 0 ... 0 d_1; 0 1 ... 0 d_2; ... ; 0 0 ... 1 d_n; 0 0 ... 0 0; ... ; 0 0 ... 0 0].

There is a unique solution x_1 = d_1, x_2 = d_2, ..., x_n = d_n.
If r < n, there will be more than one solution (infinitely many if the field is infinite). For all solutions are obtained by taking the unknowns x_{c_1}, ..., x_{c_r} as dependent unknowns and using the r equations corresponding to the non-zero rows of B to express these unknowns in terms of the remaining independent unknowns x_{c_{r+1}}, ..., x_{c_n}, which can take on arbitrary values:

x_{c_1} = b_{1,n+1} - b_{1,c_{r+1}} x_{c_{r+1}} - ... - b_{1,c_n} x_{c_n}
        ...
x_{c_r} = b_{r,n+1} - b_{r,c_{r+1}} x_{c_{r+1}} - ... - b_{r,c_n} x_{c_n}.

In particular, taking x_{c_{r+1}} = 0, ..., x_{c_{n-1}} = 0 and x_{c_n} = 0, 1 respectively, produces at least two solutions.
EXAMPLE 1.4.1 Solve the system

x + y = 0
x - y = 1
4x + 2y = 1.

Solution. The augmented matrix of the system is

A = [1 1 0; 1 -1 1; 4 2 1]

which is row-equivalent to

B = [1 0 1/2; 0 1 -1/2; 0 0 0].

We read off the unique solution x = 1/2, y = -1/2.

(Here n = 2, r = 2, c_1 = 1, c_2 = 2. Also c_r = c_2 = 2 < 3 = n + 1 and r = n.)
EXAMPLE 1.4.2 Solve the system

2x_1 + 2x_2 - 2x_3 = 5
7x_1 + 7x_2 +  x_3 = 10
5x_1 + 5x_2 -  x_3 = 5.

Solution. The augmented matrix is

A = [2 2 -2 5; 7 7 1 10; 5 5 -1 5]

which is row-equivalent to

B = [1 1 0 0; 0 0 1 0; 0 0 0 1].

We read off inconsistency for the original system.

(Here n = 3, r = 3, c_1 = 1, c_2 = 3, c_3 = 4. Also c_r = c_3 = 4 = n + 1.)
EXAMPLE 1.4.3 Solve the system

x_1 - x_2 + x_3 = 1
x_1 + x_2 - x_3 = 2.

Solution. The augmented matrix is

A = [1 -1 1 1; 1 1 -1 2]

which is row-equivalent to

B = [1 0 0 3/2; 0 1 -1 1/2].

The complete solution is x_1 = 3/2, x_2 = 1/2 + x_3, with x_3 arbitrary.

(Here n = 3, r = 2, c_1 = 1, c_2 = 2. Also c_r = c_2 = 2 < 4 = n + 1 and r < n.)
EXAMPLE 1.4.4 Solve the system

                6x_3 + 2x_4 - 4x_5 - 8x_6 = 8
                3x_3 +  x_4 - 2x_5 - 4x_6 = 4
2x_1 - 3x_2 +  x_3 + 4x_4 - 7x_5 +  x_6 = 2
6x_1 - 9x_2       + 11x_4 - 19x_5 + 3x_6 = 1.

Solution. The augmented matrix is

A = [0 0 6 2 -4 -8 8; 0 0 3 1 -2 -4 4; 2 -3 1 4 -7 1 2; 6 -9 0 11 -19 3 1]

which is row-equivalent to

B = [1 -3/2 0 11/6 -19/6 0 1/24; 0 0 1 1/3 -2/3 0 5/3; 0 0 0 0 0 1 1/4; 0 0 0 0 0 0 0].

The complete solution is

x_1 = 1/24 + (3/2)x_2 - (11/6)x_4 + (19/6)x_5,
x_3 = 5/3 - (1/3)x_4 + (2/3)x_5,
x_6 = 1/4,

with x_2, x_4, x_5 arbitrary.

(Here n = 6, r = 3, c_1 = 1, c_2 = 3, c_3 = 6; c_r = c_3 = 6 < 7 = n + 1; r < n.)
EXAMPLE 1.4.5 Find the rational number t for which the following system is consistent and solve the system for this value of t.

x + y = 2
x - y = 0
3x - y = t.

Solution. The augmented matrix of the system is

A = [1 1 2; 1 -1 0; 3 -1 t]

which is row-equivalent to the simpler matrix

B = [1 1 2; 0 1 1; 0 0 t - 2].

Hence if t ≠ 2 the system is inconsistent. If t = 2 the system is consistent and

B = [1 1 2; 0 1 1; 0 0 0]  →  [1 0 1; 0 1 1; 0 0 0].

We read off the solution x = 1, y = 1.
EXAMPLE 1.4.6 For which rationals a and b does the following system have (i) no solution, (ii) a unique solution, (iii) infinitely many solutions?

x - 2y + 3z = 4
2x - 3y + az = 5
3x - 4y + 5z = b.

Solution. The augmented matrix of the system is

A = [1 -2 3 4; 2 -3 a 5; 3 -4 5 b]

R_2 → R_2 - 2R_1, R_3 → R_3 - 3R_1:  [1 -2 3 4; 0 1 a - 6 -3; 0 2 -4 b - 12]

R_3 → R_3 - 2R_2:  [1 -2 3 4; 0 1 a - 6 -3; 0 0 -2a + 8 b - 6] = B.

Case 1. a ≠ 4. Then -2a + 8 ≠ 0 and we see that B can be reduced to a matrix of the form

[1 0 0 u; 0 1 0 v; 0 0 1 (b - 6)/(-2a + 8)]

and we have the unique solution x = u, y = v, z = (b - 6)/(-2a + 8).

Case 2. a = 4. Then

B = [1 -2 3 4; 0 1 -2 -3; 0 0 0 b - 6].

If b ≠ 6 we get no solution, whereas if b = 6 then

B = [1 -2 3 4; 0 1 -2 -3; 0 0 0 0]  R_1 → R_1 + 2R_2  [1 0 -1 -2; 0 1 -2 -3; 0 0 0 0].

We read off the complete solution x = -2 + z, y = -3 + 2z, with z arbitrary.
EXAMPLE 1.4.7 Find the reduced row-echelon form of the following matrix over Z_3:

[2 1 2 1; 2 2 1 0].

Hence solve the system

2x + y + 2z = 1
2x + 2y + z = 0

over Z_3.

Solution.

[2 1 2 1; 2 2 1 0]
  R_2 → R_2 - R_1   [2 1 2 1; 0 1 -1 -1] = [2 1 2 1; 0 1 2 2]
  R_1 → 2R_1        [1 2 1 2; 0 1 2 2]
  R_1 → R_1 + R_2   [1 0 0 1; 0 1 2 2].

The last matrix is in reduced row-echelon form.

To solve the system of equations whose augmented matrix is the given matrix over Z_3, we see from the reduced row-echelon form that x = 1 and y = 2 - 2z = 2 + z, where z = 0, 1, 2. Hence there are three solutions to the given system of linear equations: (x, y, z) = (1, 2, 0), (1, 0, 1) and (1, 1, 2).
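Because Z_3 is finite, the solution set can also be found by exhaustive search, which makes a convenient check. A small Python sketch (an addition to the text):

```python
# Brute-force check of Example 1.4.7 over Z_3: try all 27 triples.
p = 3
sols = [(x, y, z)
        for x in range(p) for y in range(p) for z in range(p)
        if (2*x + y + 2*z) % p == 1 and (2*x + 2*y + z) % p == 0]
print(sols)   # [(1, 0, 1), (1, 1, 2), (1, 2, 0)] -- the three solutions above
```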
1.5 Homogeneous systems

A system of homogeneous linear equations is a system of the form

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = 0
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = 0
        ...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = 0.

Such a system is always consistent as x_1 = 0, ..., x_n = 0 is a solution. This solution is called the trivial solution. Any other solution is called a non-trivial solution.

For example the homogeneous system

x - y = 0
x + y = 0

has only the trivial solution, whereas the homogeneous system

x - y + z = 0
x + y + z = 0

has the complete solution x = -z, y = 0, z arbitrary. In particular, taking z = 1 gives the non-trivial solution x = -1, y = 0, z = 1.

There is a simple but fundamental theorem concerning homogeneous systems.

THEOREM 1.5.1 A homogeneous system of m linear equations in n unknowns always has a non-trivial solution if m < n.
Proof. Suppose that m < n and that the coefficient matrix of the system is row-equivalent to B, a matrix in reduced row-echelon form. Let r be the number of non-zero rows in B. Then r ≤ m < n and hence n - r > 0 and so the number n - r of arbitrary unknowns is in fact positive. Taking one of these unknowns to be 1 gives a non-trivial solution.

REMARK 1.5.1 Let two systems of homogeneous equations in n unknowns have coefficient matrices A and B, respectively. If each row of B is a linear combination of the rows of A (i.e. a sum of multiples of the rows of A) and each row of A is a linear combination of the rows of B, then it is easy to prove that the two systems have identical solutions. The converse is true, but is not easy to prove. Similarly if A and B have the same reduced row-echelon form, apart from possibly zero rows, then the two systems have identical solutions and conversely.

There is a similar situation in the case of two systems of linear equations (not necessarily homogeneous), with the proviso that in the statement of the converse, the extra condition that both the systems are consistent, is needed.
1.6 PROBLEMS

1. Which of the following matrices of rationals is in reduced row-echelon form?

(a) [1 0 0 0 3; 0 0 1 0 4; 0 0 0 1 2]

(b) [0 1 0 0 5; 0 0 1 0 -4; 0 0 0 -1 3]

(c) [0 1 0 0; 0 0 1 0; 0 1 0 2]

(d) [0 1 0 0 2; 0 0 0 0 -1; 0 0 0 1 4; 0 0 0 0 0]

(e) [1 2 0 0 0; 0 0 1 0 0; 0 0 0 0 1; 0 0 0 0 0]

(f) [0 0 0 0; 0 0 1 2; 0 0 0 1; 0 0 0 0]

(g) [1 0 0 0 1; 0 1 0 0 2; 0 0 0 1 -1; 0 0 0 0 0].

[Answers: (a), (e), (g)]
2. Find reduced row-echelon forms which are row-equivalent to the following matrices:

(a) [0 0 0; 2 4 0]   (b) [0 1 3; 1 2 4]   (c) [1 1 1; 1 1 0; 1 0 0]   (d) [2 0 0; 0 0 0; -4 0 0].

[Answers:

(a) [1 2 0; 0 0 0]   (b) [1 0 -2; 0 1 3]   (c) [1 0 0; 0 1 0; 0 0 1]   (d) [1 0 0; 0 0 0; 0 0 0].]
3. Solve the following systems of linear equations by reducing the augmented matrix to reduced row-echelon form:

(a) x + y + z = 2
    2x + 3y - z = 8
    x - y - z = -8

(b) x_1 + x_2 - x_3 + 2x_4 = 10
    3x_1 - x_2 + 7x_3 + 4x_4 = 1
    -5x_1 + 3x_2 - 15x_3 - 6x_4 = 9

(c) 3x - y + 7z = 0
    2x - y + 4z = 1/2
    x - y + z = 1
    6x - 4y + 10z = 3

(d) 2x_2 + 3x_3 - 4x_4 = 1
    2x_3 + 3x_4 = 4
    2x_1 + 2x_2 - 5x_3 + 2x_4 = 4
    2x_1 - 6x_3 + 9x_4 = 7

[Answers: (a) x = -3, y = 19/4, z = 1/4; (b) inconsistent;
(c) x = -1/2 - 3z, y = -3/2 - 2z, with z arbitrary;
(d) x_1 = 19/2 - 9x_4, x_2 = -5/2 + (17/4)x_4, x_3 = 2 - (3/2)x_4, with x_4 arbitrary.]
4. Show that the following system is consistent if and only if c = 2a - 3b and solve the system in this case.

2x - y + 3z = a
3x + y - 5z = b
-5x - 5y + 21z = c.

[Answer: x = (a + b)/5 + (2/5)z, y = (-3a + 2b)/5 + (19/5)z, with z arbitrary.]
5. Find the value of t for which the following system is consistent and solve the system for this value of t.

x + y = 1
tx + y = t
(1 + t)x + 2y = 3.

[Answer: t = 2; x = 1, y = 0.]
6. Solve the homogeneous system

-3x_1 + x_2 + x_3 + x_4 = 0
x_1 - 3x_2 + x_3 + x_4 = 0
x_1 + x_2 - 3x_3 + x_4 = 0
x_1 + x_2 + x_3 - 3x_4 = 0.

[Answer: x_1 = x_2 = x_3 = x_4, with x_4 arbitrary.]
7. For which rational numbers λ does the homogeneous system

x + (λ - 3)y = 0
(λ - 3)x + y = 0

have a non-trivial solution?

[Answer: λ = 2, 4.]
8. Solve the homogeneous system

3x_1 + x_2 + x_3 + x_4 = 0
5x_1 - x_2 + x_3 - x_4 = 0.

[Answer: x_1 = -(1/4)x_3, x_2 = -(1/4)x_3 - x_4, with x_3 and x_4 arbitrary.]
9. Let A be the coefficient matrix of the following homogeneous system of n equations in n unknowns:

(1 - n)x_1 + x_2 + ... + x_n = 0
x_1 + (1 - n)x_2 + ... + x_n = 0
        ...
x_1 + x_2 + ... + (1 - n)x_n = 0.

Find the reduced row-echelon form of A and hence, or otherwise, prove that the solution of the above system is x_1 = x_2 = ... = x_n, with x_n arbitrary.
10. Let A = [a b; c d] be a matrix over a field F. Prove that A is row-equivalent to [1 0; 0 1] if ad - bc ≠ 0, but is row-equivalent to a matrix whose second row is zero, if ad - bc = 0.
11. For which rational numbers a does the following system have (i) no solutions (ii) exactly one solution (iii) infinitely many solutions?

x + 2y - 3z = 4
3x - y + 5z = 2
4x + y + (a^2 - 14)z = a + 2.

[Answer: a = -4, no solution; a = 4, infinitely many solutions; a ≠ ±4, exactly one solution.]
12. Solve the following system of homogeneous equations over Z_2:

x_1 + x_3 + x_5 = 0
x_2 + x_4 + x_5 = 0
x_1 + x_2 + x_3 + x_4 = 0
x_3 + x_4 = 0.

[Answer: x_1 = x_2 = x_4 + x_5, x_3 = x_4, with x_4 and x_5 arbitrary elements of Z_2.]
13. Solve the following systems of linear equations over Z_5:

(a) 2x + y + 3z = 4     (b) 2x + y + 3z = 4
    4x + y + 4z = 1         4x + y + 4z = 1
    3x + y + 2z = 0         x + y = 3.

[Answer: (a) x = 1, y = 2, z = 0; (b) x = 1 + 2z, y = 2 + 3z, with z an arbitrary element of Z_5.]
14. If (α_1, ..., α_n) and (β_1, ..., β_n) are solutions of a system of linear equations, prove that

((1 - t)α_1 + tβ_1, ..., (1 - t)α_n + tβ_n)

is also a solution.
15. If (α_1, ..., α_n) is a solution of a system of linear equations, prove that the complete solution is given by x_1 = α_1 + y_1, ..., x_n = α_n + y_n, where (y_1, ..., y_n) is the general solution of the associated homogeneous system.
16. Find the values of a and b for which the following system is consistent. Also find the complete solution when a = b = 2.

x + y - z + w = 1
ax + y + z + w = b
3x + 2y + aw = 1 + a.

[Answer: a ≠ 2 or a = 2 = b; x = 1 - 2z, y = 3z - w, with z, w arbitrary.]
17. Let F = {0, 1, a, b} be a field consisting of 4 elements.

(a) Determine the addition and multiplication tables of F. (Hint: Prove that the elements 1 + 0, 1 + 1, 1 + a, 1 + b are distinct and deduce that 1 + 1 + 1 + 1 = 0; then deduce that 1 + 1 = 0.)

(b) A matrix A, whose elements belong to F, is defined by

A = [1 a b a; a b b 1; 1 1 1 a],

prove that the reduced row-echelon form of A is given by the matrix

B = [1 0 0 0; 0 1 0 b; 0 0 1 1].
Chapter 2
MATRICES
2.1 Matrix arithmetic

A matrix over a field F is a rectangular array of elements from F. The symbol M_{m×n}(F) denotes the collection of all m × n matrices over F. Matrices will usually be denoted by capital letters and the equation A = [a_ij] means that the element in the i-th row and j-th column of the matrix A equals a_ij. It is also occasionally convenient to write a_ij = (A)_ij. For the present, all matrices will have rational entries, unless otherwise stated.

EXAMPLE 2.1.1 The formula a_ij = 1/(i + j) for 1 ≤ i ≤ 3, 1 ≤ j ≤ 4 defines a 3 × 4 matrix A = [a_ij], namely

A = [1/2 1/3 1/4 1/5; 1/3 1/4 1/5 1/6; 1/4 1/5 1/6 1/7].
DEFINITION 2.1.1 (Equality of matrices) Matrices A and B are said to be equal if A and B have the same size and corresponding elements are equal; that is, A and B ∈ M_{m×n}(F) and A = [a_ij], B = [b_ij], with a_ij = b_ij for 1 ≤ i ≤ m, 1 ≤ j ≤ n.

DEFINITION 2.1.2 (Addition of matrices) Let A = [a_ij] and B = [b_ij] be of the same size. Then A + B is the matrix obtained by adding corresponding elements of A and B; that is

A + B = [a_ij] + [b_ij] = [a_ij + b_ij].
DEFINITION 2.1.3 (Scalar multiple of a matrix) Let A = [a_ij] and t ∈ F (that is, t is a scalar). Then tA is the matrix obtained by multiplying all elements of A by t; that is

tA = t[a_ij] = [ta_ij].

DEFINITION 2.1.4 (Additive inverse of a matrix) Let A = [a_ij]. Then -A is the matrix obtained by replacing the elements of A by their additive inverses; that is

-A = -[a_ij] = [-a_ij].

DEFINITION 2.1.5 (Subtraction of matrices) Matrix subtraction is defined for two matrices A = [a_ij] and B = [b_ij] of the same size, in the usual way; that is

A - B = [a_ij] - [b_ij] = [a_ij - b_ij].

DEFINITION 2.1.6 (The zero matrix) For each m, n the matrix in M_{m×n}(F), all of whose elements are zero, is called the zero matrix (of size m × n) and is denoted by the symbol 0.
The matrix operations of addition, scalar multiplication, additive inverse and subtraction satisfy the usual laws of arithmetic. (In what follows, s and t will be arbitrary scalars and A, B, C are matrices of the same size.)

1. (A + B) + C = A + (B + C);

2. A + B = B + A;

3. 0 + A = A;

4. A + (-A) = 0;

5. (s + t)A = sA + tA, (s - t)A = sA - tA;

6. t(A + B) = tA + tB, t(A - B) = tA - tB;

7. s(tA) = (st)A;

8. 1A = A, 0A = 0, (-1)A = -A;

9. tA = 0 ⇒ t = 0 or A = 0.

Other similar properties will be used when needed.
DEFINITION 2.1.7 (Matrix product) Let A = [a_ij] be a matrix of size m × n and B = [b_jk] be a matrix of size n × p (that is, the number of columns of A equals the number of rows of B). Then AB is the m × p matrix C = [c_ik] whose (i, k)-th element is defined by the formula

c_ik = sum_{j=1}^{n} a_ij b_jk = a_i1 b_1k + ... + a_in b_nk.
EXAMPLE 2.1.2

1. [1 2; 3 4][5 6; 7 8] = [1·5 + 2·7  1·6 + 2·8; 3·5 + 4·7  3·6 + 4·8] = [19 22; 43 50];

2. [5 6; 7 8][1 2; 3 4] = [23 34; 31 46] ≠ [1 2; 3 4][5 6; 7 8];

3. [1; 2][3 4] = [3 4; 6 8];

4. [3 4][1; 2] = [11];

5. [1 -1; -1 1][1 1; 1 1] = [0 0; 0 0].
Matrix multiplication obeys many of the familiar laws of arithmetic apart from the commutative law.

1. (AB)C = A(BC) if A, B, C are m × n, n × p, p × q, respectively;

2. t(AB) = (tA)B = A(tB), A(-B) = (-A)B = -(AB);

3. (A + B)C = AC + BC if A and B are m × n and C is n × p;

4. D(A + B) = DA + DB if A and B are m × n and D is p × m.

We prove the associative law only:

First observe that (AB)C and A(BC) are both of size m × q.

Let A = [a_ij], B = [b_jk], C = [c_kl]. Then

((AB)C)_il = sum_{k=1}^{p} (AB)_ik c_kl = sum_{k=1}^{p} ( sum_{j=1}^{n} a_ij b_jk ) c_kl
           = sum_{k=1}^{p} sum_{j=1}^{n} a_ij b_jk c_kl.

Similarly

(A(BC))_il = sum_{j=1}^{n} sum_{k=1}^{p} a_ij b_jk c_kl.

However the double summations are equal. For sums of the form

sum_{j=1}^{n} sum_{k=1}^{p} d_jk   and   sum_{k=1}^{p} sum_{j=1}^{n} d_jk

represent the sum of the np elements of the rectangular array [d_jk], by rows and by columns, respectively. Consequently

((AB)C)_il = (A(BC))_il

for 1 ≤ i ≤ m, 1 ≤ l ≤ q. Hence (AB)C = A(BC).
The system of m linear equations in n unknowns

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
        ...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

is equivalent to a single matrix equation

[a_11 a_12 ... a_1n; a_21 a_22 ... a_2n; ... ; a_m1 a_m2 ... a_mn] [x_1; ... ; x_n] = [b_1; ... ; b_m],

that is, AX = B, where A = [a_ij] is the coefficient matrix of the system, X = [x_1; ... ; x_n] is the vector of unknowns and B = [b_1; ... ; b_m] is the vector of constants.

Another useful matrix equation equivalent to the above system of linear equations is

x_1 [a_11; a_21; ... ; a_m1] + x_2 [a_12; a_22; ... ; a_m2] + ... + x_n [a_1n; a_2n; ... ; a_mn] = [b_1; b_2; ... ; b_m].
EXAMPLE 2.1.3 The system

x + y + z = 1
x - y + z = 0

is equivalent to the matrix equation

[1 1 1; 1 -1 1] [x; y; z] = [1; 0]

and to the equation

x [1; 1] + y [1; -1] + z [1; 1] = [1; 0].
2.2 Linear transformations

An n-dimensional column vector is an n × 1 matrix over F. The collection of all n-dimensional column vectors is denoted by F^n.

Every matrix is associated with an important type of function called a linear transformation.

DEFINITION 2.2.1 (Linear transformation) With A ∈ M_{m×n}(F), we associate the function T_A : F^n → F^m defined by T_A(X) = AX for all X ∈ F^n. More explicitly, using components, the above function takes the form

y_1 = a_11 x_1 + a_12 x_2 + ... + a_1n x_n
y_2 = a_21 x_1 + a_22 x_2 + ... + a_2n x_n
        ...
y_m = a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n,

where y_1, y_2, ..., y_m are the components of the column vector T_A(X).

The function just defined has the property that

T_A(sX + tY) = sT_A(X) + tT_A(Y)        (2.1)

for all s, t ∈ F and all n-dimensional column vectors X, Y. For

T_A(sX + tY) = A(sX + tY) = s(AX) + t(AY) = sT_A(X) + tT_A(Y).
REMARK 2.2.1 It is easy to prove that if T : F^n → F^m is a function satisfying equation 2.1, then T = T_A, where A is the m × n matrix whose columns are T(E_1), ..., T(E_n), respectively, where E_1, ..., E_n are the n-dimensional unit vectors defined by

E_1 = [1; 0; ... ; 0], ..., E_n = [0; 0; ... ; 1].
One well-known example of a linear transformation arises from rotating the (x, y)-plane in 2-dimensional Euclidean space, anticlockwise through θ radians. Here a point (x, y) will be transformed into the point (x_1, y_1), where

x_1 = x cos θ - y sin θ
y_1 = x sin θ + y cos θ.

In 3-dimensional Euclidean space, the equations

x_1 = x cos θ - y sin θ,  y_1 = x sin θ + y cos θ,  z_1 = z;
x_1 = x,  y_1 = y cos φ - z sin φ,  z_1 = y sin φ + z cos φ;
x_1 = x cos ψ - z sin ψ,  y_1 = y,  z_1 = x sin ψ + z cos ψ;

correspond to rotations about the positive z, x, y axes, anticlockwise through θ, φ, ψ radians, respectively.
The product of two matrices is related to the product of the corresponding linear transformations:

If A is m × n and B is n × p, then the function T_A ∘ T_B : F^p → F^m, obtained by first performing T_B, then T_A, is in fact equal to the linear transformation T_{AB}. For if X ∈ F^p, we have

T_A ∘ T_B(X) = A(BX) = (AB)X = T_{AB}(X).
The following example is useful for producing rotations in 3-dimensional animated design. (See [27, pages 97–112].)

EXAMPLE 2.2.1 The linear transformation resulting from successively rotating 3-dimensional space about the positive z, x, y axes, anticlockwise through θ, φ, ψ radians respectively, is equal to T_{ABC}, where

[Figure 2.1: Reflection in a line — the line l through O at angle θ to the positive x-axis, with a point (x, y) and its mirror image (x_1, y_1); see Example 2.2.2.]

C = [cos θ  -sin θ  0; sin θ  cos θ  0; 0  0  1],
B = [1  0  0; 0  cos φ  -sin φ; 0  sin φ  cos φ],
A = [cos ψ  0  -sin ψ; 0  1  0; sin ψ  0  cos ψ].

The matrix ABC is quite complicated:

A(BC) = [cos ψ  0  -sin ψ; 0  1  0; sin ψ  0  cos ψ] [cos θ  -sin θ  0; cos φ sin θ  cos φ cos θ  -sin φ; sin φ sin θ  sin φ cos θ  cos φ]

= [cos ψ cos θ - sin ψ sin φ sin θ   -cos ψ sin θ - sin ψ sin φ cos θ   -sin ψ cos φ;
   cos φ sin θ                        cos φ cos θ                        -sin φ;
   sin ψ cos θ + cos ψ sin φ sin θ   -sin ψ sin θ + cos ψ sin φ cos θ    cos ψ cos φ].
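The entries of A(BC) are tedious to verify by hand. The following Python/NumPy sketch (an addition to the text, using the same sign conventions as the matrices displayed above) compares two entries of the numerically computed product against the displayed formulas:

```python
import numpy as np

def C(theta):  # rotation about the positive z-axis
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def B(phi):    # rotation about the positive x-axis
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def A(psi):    # rotation about the positive y-axis, sign convention as above
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, 0, -s], [0, 1, 0], [s, 0, c]])

theta, phi, psi = 0.3, 0.5, 0.7        # arbitrary sample angles
M = A(psi) @ B(phi) @ C(theta)

# First row of the displayed product A(BC), entries (1,1) and (1,3):
r11 = np.cos(psi)*np.cos(theta) - np.sin(psi)*np.sin(phi)*np.sin(theta)
r13 = -np.sin(psi)*np.cos(phi)
print(np.isclose(M[0, 0], r11), np.isclose(M[0, 2], r13))  # True True
```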
EXAMPLE 2.2.2 Another example of a linear transformation arising from geometry is reflection of the plane in a line l inclined at an angle θ to the positive x-axis.

We reduce the problem to the simpler case θ = 0, where the equations of transformation are x_1 = x, y_1 = -y. First rotate the plane clockwise through θ radians, thereby taking l into the x-axis; next reflect the plane in the x-axis; then rotate the plane anticlockwise through θ radians, thereby restoring l to its original position.

[Figure 2.2: Projection on a line — the line l through O at angle θ, a point (x, y) and its projection (x_1, y_1); see Example 2.2.3.]

In terms of matrices, we get transformation equations

[x_1; y_1] = [cos θ  -sin θ; sin θ  cos θ] [1  0; 0  -1] [cos(-θ)  -sin(-θ); sin(-θ)  cos(-θ)] [x; y]
           = [cos θ  sin θ; sin θ  -cos θ] [cos θ  sin θ; -sin θ  cos θ] [x; y]
           = [cos 2θ  sin 2θ; sin 2θ  -cos 2θ] [x; y].

The more general transformation

[x_1; y_1] = a [cos θ  -sin θ; sin θ  cos θ] [x; y] + [u; v],   a > 0,

represents a rotation, followed by a scaling and then by a translation. Such transformations are important in computer graphics. See [23, 24].
EXAMPLE 2.2.3 Our last example of a geometrical linear transformation arises from projecting the plane onto a line l through the origin, inclined at angle θ to the positive x-axis. Again we reduce the problem to the simpler case where l is the x-axis and the equations of transformation are x_1 = x, y_1 = 0.

In terms of matrices, we get transformation equations

[x_1; y_1] = [cos θ  -sin θ; sin θ  cos θ] [1  0; 0  0] [cos(-θ)  -sin(-θ); sin(-θ)  cos(-θ)] [x; y]
           = [cos θ  0; sin θ  0] [cos θ  sin θ; -sin θ  cos θ] [x; y]
           = [cos²θ  cos θ sin θ; sin θ cos θ  sin²θ] [x; y].
2.3 Recurrence relations

DEFINITION 2.3.1 (The identity matrix) The n × n matrix I_n = [δ_ij], defined by δ_ij = 1 if i = j, δ_ij = 0 if i ≠ j, is called the identity matrix of order n. In other words, the columns of the identity matrix of order n are the unit vectors E_1, ..., E_n, respectively.

For example, I_2 = [1 0; 0 1].

THEOREM 2.3.1 If A is m × n, then I_m A = A = A I_n.

DEFINITION 2.3.2 (k-th power of a matrix) If A is an n × n matrix, we define A^k recursively as follows: A^0 = I_n and A^{k+1} = A^k A for k ≥ 0.

For example A^1 = A^0 A = I_n A = A and hence A^2 = A^1 A = AA.
The usual index laws hold provided AB = BA:

1. A^m A^n = A^{m+n}, (A^m)^n = A^{mn};

2. (AB)^n = A^n B^n;

3. A^m B^n = B^n A^m;

4. (A + B)^2 = A^2 + 2AB + B^2;

5. (A + B)^n = sum_{i=0}^{n} C(n, i) A^i B^{n-i};

6. (A + B)(A - B) = A^2 - B^2.
We now state a basic property of the natural numbers.

AXIOM 2.3.1 (PRINCIPLE OF MATHEMATICAL INDUCTION) If for each n ≥ 1, P_n denotes a mathematical statement and

(i) P_1 is true,

(ii) the truth of P_n implies that of P_{n+1} for each n ≥ 1,

then P_n is true for all n ≥ 1.
EXAMPLE 2.3.1 Let A = [7 4; -9 -5]. Prove that

A^n = [1 + 6n  4n; -9n  1 - 6n]   if n ≥ 1.

Solution. We use the principle of mathematical induction.

Take P_n to be the statement

A^n = [1 + 6n  4n; -9n  1 - 6n].

Then P_1 asserts that

A^1 = [1 + 6·1  4·1; -9·1  1 - 6·1] = [7 4; -9 -5],

which is true. Now let n ≥ 1 and assume that P_n is true. We have to deduce that

A^{n+1} = [1 + 6(n + 1)  4(n + 1); -9(n + 1)  1 - 6(n + 1)] = [7 + 6n  4n + 4; -9n - 9  -5 - 6n].

Now

A^{n+1} = A^n A = [1 + 6n  4n; -9n  1 - 6n] [7 4; -9 -5]
= [(1 + 6n)7 + (4n)(-9)   (1 + 6n)4 + (4n)(-5); (-9n)7 + (1 - 6n)(-9)   (-9n)4 + (1 - 6n)(-5)]
= [7 + 6n  4n + 4; -9n - 9  -5 - 6n],

and the induction goes through.
The last example has an application to the solution of a system of recurrence relations:

EXAMPLE 2.3.2 The following system of recurrence relations holds for all n ≥ 0:

x_{n+1} = 7x_n + 4y_n
y_{n+1} = -9x_n - 5y_n.

Solve the system for x_n and y_n in terms of x_0 and y_0.

Solution. Combine the above equations into a single matrix equation

[x_{n+1}; y_{n+1}] = [7 4; -9 -5] [x_n; y_n],

or X_{n+1} = AX_n, where A = [7 4; -9 -5] and X_n = [x_n; y_n].

We see that

X_1 = AX_0
X_2 = AX_1 = A(AX_0) = A^2 X_0
    ...
X_n = A^n X_0.

(The truth of the equation X_n = A^n X_0 for n ≥ 1, strictly speaking, follows by mathematical induction; however for simple cases such as the above, it is customary to omit the strict proof and supply instead a few lines of motivation for the inductive statement.)

Hence the previous example gives

[x_n; y_n] = X_n = A^n X_0 = [1 + 6n  4n; -9n  1 - 6n] [x_0; y_0]
= [(1 + 6n)x_0 + (4n)y_0; (-9n)x_0 + (1 - 6n)y_0],

and hence x_n = (1 + 6n)x_0 + 4n y_0 and y_n = (-9n)x_0 + (1 - 6n)y_0, for n ≥ 1.
2.4 PROBLEMS

1. Let A, B, C, D be matrices defined by

A = [3 0; -1 2; 1 1],  B = [1 5 2; -1 1 0; -4 1 3],  C = [-3 -1; 2 1; 4 3],  D = [4 -1; 2 0].

Which of the following matrices are defined? Compute those matrices which are defined.

A + B, A + C, AB, BA, CD, DC, D^2.

[Answers: A + C, BA, CD, D^2;

[0 -1; 1 3; 5 4],  [0 12; -4 2; -10 5],  [-14 3; 10 -2; 22 -4],  [14 -4; 8 -2].]
2. Let A = [-1 0 1; 0 1 1]. Show that if B is a 3 × 2 matrix such that AB = I_2, then

B = [a  b; -a - 1  1 - b; a + 1  b]

for suitable numbers a and b. Use the associative law to show that (BA)^2 B = B.
3. If A = [a b; c d], prove that A^2 - (a + d)A + (ad - bc)I_2 = 0.
4. If A = [4 -3; 1 0], use the fact A^2 = 4A - 3I_2 and mathematical induction, to prove that

A^n = ((3^n - 1)/2) A + ((3 - 3^n)/2) I_2   if n ≥ 1.
5. A sequence of numbers x_1, x_2, ..., x_n, ... satisfies the recurrence relation x_{n+1} = ax_n + bx_{n-1} for n ≥ 1, where a and b are constants. Prove that

[x_{n+1}; x_n] = A [x_n; x_{n-1}],

where A = [a b; 1 0], and hence express [x_{n+1}; x_n] in terms of [x_1; x_0]. If a = 4 and b = -3, use the previous question to find a formula for x_n in terms of x_1 and x_0.

[Answer:

x_n = ((3^n - 1)/2) x_1 + ((3 - 3^n)/2) x_0.]
6. Let A = [2a  -a^2; 1  0].

(a) Prove that

A^n = [(n + 1)a^n  -na^{n+1}; na^{n-1}  (1 - n)a^n]   if n ≥ 1.

(b) A sequence x_0, x_1, ..., x_n, ... satisfies the recurrence relation x_{n+1} = 2ax_n - a^2 x_{n-1} for n ≥ 1. Use part (a) and the previous question to prove that x_n = na^{n-1} x_1 + (1 - n)a^n x_0 for n ≥ 1.
7. Let A = [a b; c d] and suppose that λ_1 and λ_2 are the roots of the quadratic polynomial x^2 - (a + d)x + ad - bc. (λ_1 and λ_2 may be equal.) Let k_n be defined by k_0 = 0, k_1 = 1 and for n ≥ 2

k_n = sum_{i=1}^{n} λ_1^{n-i} λ_2^{i-1}.

Prove that

k_{n+1} = (λ_1 + λ_2)k_n - λ_1 λ_2 k_{n-1},

if n ≥ 1. Also prove that

k_n = (λ_1^n - λ_2^n)/(λ_1 - λ_2)  if λ_1 ≠ λ_2;   k_n = nλ_1^{n-1}  if λ_1 = λ_2.

Use mathematical induction to prove that if n ≥ 1,

A^n = k_n A - λ_1 λ_2 k_{n-1} I_2.

[Hint: Use the equation A^2 = (a + d)A - (ad - bc)I_2.]
8. Use Question 7 to prove that if A = [1 2; 2 1], then

A^n = (3^n/2) [1 1; 1 1] + ((-1)^n/2) [1 -1; -1 1]   if n ≥ 1.
9. The Fibonacci numbers are defined by the equations F_0 = 0, F_1 = 1 and F_{n+1} = F_n + F_{n-1} if n ≥ 1. Prove that

F_n = (1/√5) [ ((1 + √5)/2)^n - ((1 - √5)/2)^n ]

if n ≥ 0.
10. Let r > 1 be an integer. Let a and b be arbitrary positive integers. Sequences x_n and y_n of positive integers are defined in terms of a and b by the recurrence relations

x_{n+1} = x_n + r y_n
y_{n+1} = x_n + y_n,

for n ≥ 0, where x_0 = a and y_0 = b.

Use Question 7 to prove that

x_n / y_n → √r  as n → ∞.
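A quick numerical illustration of the limit in Question 10 — this does not replace the requested proof — can be run in a few lines of Python (an addition to the text, with sample values r = 2, a = b = 1):

```python
# Numerical illustration of Question 10: x_n / y_n approaches sqrt(r).
r, a, b = 2, 1, 1
x, y = a, b
for n in range(10):
    x, y = x + r*y, x + y      # simultaneous update of the two sequences
print(x / y, 2 ** 0.5)          # after 10 steps the ratio is close to sqrt(2)
```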
2.5 Non-singular matrices

DEFINITION 2.5.1 (Non-singular matrix) A square matrix A ∈ M_{n×n}(F) is called non-singular or invertible if there exists a matrix B ∈ M_{n×n}(F) such that

AB = I_n = BA.

Any matrix B with the above property is called an inverse of A. If A does not have an inverse, A is called singular.
THEOREM 2.5.1 (Inverses are unique) If A has inverses B and C, then B = C.

Proof. Let B and C be inverses of A. Then AB = I_n = BA and AC = I_n = CA. Then B(AC) = BI_n = B and (BA)C = I_n C = C. Hence because B(AC) = (BA)C, we deduce that B = C.

REMARK 2.5.1 If A has an inverse, it is denoted by A^{-1}. So

AA^{-1} = I_n = A^{-1}A.

Also if A is non-singular, it follows that A^{-1} is also non-singular and

(A^{-1})^{-1} = A.
THEOREM 2.5.2 If A and B are non-singular matrices of the same size, then so is AB. Moreover

(AB)^{-1} = B^{-1}A^{-1}.

Proof.

(AB)(B^{-1}A^{-1}) = A(BB^{-1})A^{-1} = AI_n A^{-1} = AA^{-1} = I_n.

Similarly

(B^{-1}A^{-1})(AB) = I_n.

REMARK 2.5.2 The above result generalizes to a product of m non-singular matrices: if A_1, ..., A_m are non-singular n × n matrices, then the product A_1 ... A_m is also non-singular. Moreover

(A_1 ... A_m)^{-1} = A_m^{-1} ... A_1^{-1}.

(Thus the inverse of the product equals the product of the inverses in the reverse order.)
EXAMPLE 2.5.1 If A and B are n × n matrices satisfying A^2 = B^2 = (AB)^2 = I_n, prove that AB = BA.

Solution. Assume A^2 = B^2 = (AB)^2 = I_n. Then A, B, AB are non-singular and

A^{-1} = A,  B^{-1} = B,  (AB)^{-1} = AB.

But (AB)^{-1} = B^{-1}A^{-1} and hence AB = BA.
EXAMPLE 2.5.2 A = [1 2; 4 8] is singular. For suppose B = [a b; c d] is an inverse of A. Then the equation AB = I_2 gives

[1 2; 4 8] [a b; c d] = [1 0; 0 1]

and equating the corresponding elements of column 1 of both sides gives the system

a + 2c = 1
4a + 8c = 0

which is clearly inconsistent.
THEOREM 2.5.3 Let A = [a b; c d] and Δ = ad - bc ≠ 0. Then A is non-singular. Also

A^{-1} = Δ^{-1} [d -b; -c a].

REMARK 2.5.3 The expression ad - bc is called the determinant of A and is denoted by the symbols det A or |a b; c d|.

Proof. Verify that the matrix B = Δ^{-1} [d -b; -c a] satisfies the equation AB = I_2 = BA.
EXAMPLE 2.5.3 Let

A = [0 1 0; 0 0 1; 5 0 0].

Verify that A^3 = 5I_3, deduce that A is non-singular and find A^{-1}.

Solution. After verifying that A^3 = 5I_3, we notice that

A((1/5)A^2) = I_3 = ((1/5)A^2)A.

Hence A is non-singular and A^{-1} = (1/5)A^2.
THEOREM 2.5.4 If the coefficient matrix A of a system of n equations in n unknowns is non-singular, then the system AX = B has the unique solution X = A^{-1}B.

Proof. Assume that A^{-1} exists.

1. (Uniqueness.) Assume that AX = B. Then

(A^{-1}A)X = A^{-1}B,
I_n X = A^{-1}B,
X = A^{-1}B.

2. (Existence.) Let X = A^{-1}B. Then

AX = A(A^{-1}B) = (AA^{-1})B = I_n B = B.
THEOREM 2.5.5 (Cramer's rule for 2 equations in 2 unknowns) The system

ax + by = e
cx + dy = f

has a unique solution if Δ = |a b; c d| ≠ 0, namely

x = Δ_1/Δ,   y = Δ_2/Δ,

where

Δ_1 = |e b; f d|   and   Δ_2 = |a e; c f|.

Proof. Suppose Δ ≠ 0. Then A = [a b; c d] has inverse

A^{-1} = Δ^{-1} [d -b; -c a]

and we know that the system

A [x; y] = [e; f]

has the unique solution

[x; y] = A^{-1} [e; f] = (1/Δ) [d -b; -c a] [e; f] = (1/Δ) [de - bf; -ce + af] = (1/Δ) [Δ_1; Δ_2] = [Δ_1/Δ; Δ_2/Δ].

Hence x = Δ_1/Δ, y = Δ_2/Δ.
COROLLARY 2.5.1 The homogeneous system

ax + by = 0
cx + dy = 0

has only the trivial solution if Δ = |a b; c d| ≠ 0.

EXAMPLE 2.5.4 The system

7x + 8y = 100
2x - 9y = 10

has the unique solution x = Δ_1/Δ, y = Δ_2/Δ, where

Δ = |7 8; 2 -9| = -79,  Δ_1 = |100 8; 10 -9| = -980,  Δ_2 = |7 100; 2 10| = -130.

So x = 980/79 and y = 130/79.
THEOREM 2.5.6 Let A be a square matrix. If A is non-singular, the homogeneous system AX = 0 has only the trivial solution. Equivalently, if the homogeneous system AX = 0 has a non-trivial solution, then A is singular.

Proof. If A is non-singular and AX = 0, then X = A^{-1}0 = 0.

REMARK 2.5.4 If A_{*1}, ..., A_{*n} denote the columns of A, then the equation

AX = x_1 A_{*1} + ... + x_n A_{*n}

holds. Consequently theorem 2.5.6 tells us that if there exist scalars x_1, ..., x_n, not all zero, such that

x_1 A_{*1} + ... + x_n A_{*n} = 0,

that is, if the columns of A are linearly dependent, then A is singular. An equivalent way of saying that the columns of A are linearly dependent is that one of the columns of A is expressible as a sum of certain scalar multiples of the remaining columns of A; that is, one column is a linear combination of the remaining columns.
EXAMPLE 2.5.5

A = [1 2 3; 1 0 1; 3 4 7]

is singular. For it can be verified that A has reduced row-echelon form

[1 0 1; 0 1 1; 0 0 0]

and consequently AX = 0 has a non-trivial solution x = -1, y = -1, z = 1.

REMARK 2.5.5 More generally, if A is row-equivalent to a matrix containing a zero row, then A is singular. For then the homogeneous system AX = 0 has a non-trivial solution.
An important class of non-singular matrices is that of the elementary row matrices.

DEFINITION 2.5.2 (Elementary row matrices) There are three types, E_ij, E_i(t), E_ij(t), corresponding to the three kinds of elementary row operation:

1. E_ij, (i ≠ j) is obtained from the identity matrix I_n by interchanging rows i and j.

2. E_i(t), (t ≠ 0) is obtained by multiplying the i-th row of I_n by t.

3. E_ij(t), (i ≠ j) is obtained from I_n by adding t times the j-th row of I_n to the i-th row.
EXAMPLE 2.5.6 (n = 3.)

E_23 = [1 0 0; 0 0 1; 0 1 0],  E_2(-1) = [1 0 0; 0 -1 0; 0 0 1],  E_23(-1) = [1 0 0; 0 1 -1; 0 0 1].
The elementary row matrices have the following distinguishing property:

THEOREM 2.5.7 If a matrix A is pre-multiplied by an elementary row matrix, the resulting matrix is the one obtained by performing the corresponding elementary row operation on A.

EXAMPLE 2.5.7

E_23 [a b; c d; e f] = [1 0 0; 0 0 1; 0 1 0] [a b; c d; e f] = [a b; e f; c d].
COROLLARY 2.5.2 The three types of elementary row matrices are non-singular. Indeed

1. E_ij^{-1} = E_ij;

2. E_i^{-1}(t) = E_i(t^{-1});

3. (E_ij(t))^{-1} = E_ij(-t).

Proof. Taking A = I_n in the above theorem, we deduce the following equations:

E_ij E_ij = I_n
E_i(t) E_i(t^{-1}) = I_n = E_i(t^{-1}) E_i(t)   if t ≠ 0
E_ij(t) E_ij(-t) = I_n = E_ij(-t) E_ij(t).
EXAMPLE 2.5.8 Find the 3 × 3 matrix A = E_3(5)E_23(2)E_12 explicitly. Also find A^{-1}.

Solution.

A = E_3(5)E_23(2) [0 1 0; 1 0 0; 0 0 1] = E_3(5) [0 1 0; 1 0 2; 0 0 1] = [0 1 0; 1 0 2; 0 0 5].

To find A^{-1}, we have

A^{-1} = (E_3(5)E_23(2)E_12)^{-1}
       = E_12^{-1} (E_23(2))^{-1} (E_3(5))^{-1}
       = E_12 E_23(-2) E_3(5^{-1})
       = E_12 E_23(-2) [1 0 0; 0 1 0; 0 0 1/5]
       = E_12 [1 0 0; 0 1 -2/5; 0 0 1/5]
       = [0 1 -2/5; 1 0 0; 0 0 1/5].
REMARK 2.5.6 Recall that A and B are row-equivalent if B is obtained from A by a sequence of elementary row operations. If E_1, ..., E_r are the respective corresponding elementary row matrices, then

B = E_r(...(E_2(E_1 A))...) = (E_r ... E_1)A = PA,

where P = E_r ... E_1 is non-singular. Conversely if B = PA, where P is non-singular, then A is row-equivalent to B. For as we shall now see, P is in fact a product of elementary row matrices.
THEOREM 2.5.8 Let A be a non-singular n × n matrix. Then

(i) A is row-equivalent to I_n,

(ii) A is a product of elementary row matrices.

Proof. Assume that A is non-singular and let B be the reduced row-echelon form of A. Then B has no zero rows, for otherwise the equation AX = 0 would have a non-trivial solution. Consequently B = I_n.

It follows that there exist elementary row matrices E_1, ..., E_r such that E_r(...(E_1 A)...) = B = I_n and hence A = E_1^{-1} ... E_r^{-1}, a product of elementary row matrices.
THEOREM 2.5.9 Let A be n × n and suppose that A is row-equivalent to I_n. Then A is non-singular and A^{-1} can be found by performing the same sequence of elementary row operations on I_n as were used to convert A to I_n.

Proof. Suppose that E_r ... E_1 A = I_n. In other words BA = I_n, where B = E_r ... E_1 is non-singular. Then B^{-1}(BA) = B^{-1}I_n and so A = B^{-1}, which is non-singular.

Also A^{-1} = (B^{-1})^{-1} = B = E_r(...(E_1 I_n)...), which shows that A^{-1} is obtained from I_n by performing the same sequence of elementary row operations as were used to convert A to I_n.
REMARK 2.5.7 It follows from theorem 2.5.9 that if A is singular, then A is row-equivalent to a matrix whose last row is zero.

EXAMPLE 2.5.9 Show that A = [1 2; 1 1] is non-singular, find A^{-1} and express A as a product of elementary row matrices.

Solution. We form the partitioned matrix [A|I_2] which consists of A followed by I_2. Then any sequence of elementary row operations which reduces A to I_2 will reduce I_2 to A^{-1}. Here

[A|I_2] = [1 2 | 1 0; 1 1 | 0 1]
  R_2 → R_2 - R_1    [1 2 | 1 0; 0 -1 | -1 1]
  R_2 → (-1)R_2      [1 2 | 1 0; 0 1 | 1 -1]
  R_1 → R_1 - 2R_2   [1 0 | -1 2; 0 1 | 1 -1].

Hence A is row-equivalent to I_2 and A is non-singular. Also

A^{-1} = [-1 2; 1 -1].

We also observe that

E_12(-2)E_2(-1)E_21(-1)A = I_2.

Hence

A^{-1} = E_12(-2)E_2(-1)E_21(-1)
A = E_21(1)E_2(-1)E_12(2).
The next result is the converse of Theorem 2.5.6 and is useful for proving the non-singularity of certain types of matrices.

THEOREM 2.5.10 Let A be an n × n matrix with the property that the homogeneous system AX = 0 has only the trivial solution. Then A is non-singular. Equivalently, if A is singular, then the homogeneous system AX = 0 has a non-trivial solution.

Proof. If A is n × n and the homogeneous system AX = 0 has only the trivial solution, then it follows that the reduced row-echelon form B of A cannot have zero rows and must therefore be I_n. Hence A is non-singular.

COROLLARY 2.5.3 Suppose that A and B are n × n and AB = I_n. Then BA = I_n.

Proof. Let AB = I_n, where A and B are n × n. We first show that B is non-singular. Assume BX = 0. Then A(BX) = A0 = 0, so (AB)X = 0, I_n X = 0 and hence X = 0.

Then from AB = I_n we deduce (AB)B^{-1} = I_n B^{-1} and hence A = B^{-1}. The equation BB^{-1} = I_n then gives BA = I_n.
Before we give the next example of the above criterion for non-singularity, we introduce an important matrix operation.

DEFINITION 2.5.3 (The transpose of a matrix) Let A be an m × n matrix. Then A^t, the transpose of A, is the matrix obtained by interchanging the rows and columns of A. In other words if A = [a_ij], then (A^t)_ji = a_ij. Consequently A^t is n × m.

The transpose operation has the following properties:

1. (A^t)^t = A;

2. (A ± B)^t = A^t ± B^t if A and B are m × n;

3. (sA)^t = sA^t if s is a scalar;

4. (AB)^t = B^t A^t if A is m × n and B is n × p;

5. If A is non-singular, then A^t is also non-singular and (A^t)^{-1} = (A^{-1})^t;

6. X^t X = x_1^2 + ... + x_n^2 if X = [x_1, ..., x_n]^t is a column vector.
We prove only the fourth property. First check that both (AB)^t and B^t A^t have the same size (p × m). Moreover, corresponding elements of both matrices are equal. For if A = [a_ij] and B = [b_jk], we have

((AB)^t)_ki = (AB)_ik = sum_{j=1}^{n} a_ij b_jk = sum_{j=1}^{n} (B^t)_kj (A^t)_ji = (B^t A^t)_ki.
There are two important classes of matrices that can be defined concisely in terms of the transpose operation.

DEFINITION 2.5.4 (Symmetric matrix) A real matrix A is called symmetric if A^t = A. In other words A is square (n × n say) and a_ji = a_ij for all 1 ≤ i ≤ n, 1 ≤ j ≤ n. Hence

A = [a b; b c]

is a general 2 × 2 symmetric matrix.

DEFINITION 2.5.5 (Skew-symmetric matrix) A real matrix A is called skew-symmetric if A^t = -A. In other words A is square (n × n say) and a_ji = -a_ij for all 1 ≤ i ≤ n, 1 ≤ j ≤ n.

REMARK 2.5.8 Taking i = j in the definition of skew-symmetric matrix gives a_ii = -a_ii and so a_ii = 0. Hence

A = [0 b; -b 0]

is a general 2 × 2 skew-symmetric matrix.
We can now state a second application of the above criterion for non-singularity.

COROLLARY 2.5.4 Let B be an n × n skew-symmetric matrix. Then A = I_n - B is non-singular.

Proof. Let A = I_n - B, where B^t = -B. By Theorem 2.5.10 it suffices to show that AX = 0 implies X = 0.

We have (I_n - B)X = 0, so X = BX. Hence X^t X = X^t BX.

Taking transposes of both sides gives

(X^t BX)^t = (X^t X)^t
X^t B^t (X^t)^t = X^t (X^t)^t
X^t (-B)X = X^t X
-X^t BX = X^t X = X^t BX.

Hence X^t X = -X^t X and X^t X = 0. But if X = [x_1, ..., x_n]^t, then X^t X = x_1^2 + ... + x_n^2 = 0 and hence x_1 = 0, ..., x_n = 0.
2.6 Least squares solution of equations

Suppose AX = B represents a system of linear equations with real coefficients which may be inconsistent, because of the possibility of experimental errors in determining A or B. For example, the system

x = 1
y = 2
x + y = 3.001

is inconsistent.

It can be proved that the associated system A^t AX = A^t B is always consistent and that any solution of this system minimizes the sum r_1^2 + ... + r_m^2, where r_1, ..., r_m (the residuals) are defined by

r_i = a_i1 x_1 + ... + a_in x_n - b_i,

for i = 1, ..., m. The equations represented by A^t AX = A^t B are called the normal equations corresponding to the system AX = B and any solution of the system of normal equations is called a least squares solution of the original system.
EXAMPLE 2.6.1 Find a least squares solution of the above inconsistent
system.
Solution. Here =
_
_
1 0
0 1
1 1
_
_
, A =
_
r
j
_
, 1 =
_
_
1
2
3.001
_
_
.
Then
|
=
_
1 0 1
0 1 1
_
_
_
1 0
0 1
1 1
_
_
=
_
2 1
1 2
_
.
Also
|
1 =
_
1 0 1
0 1 1
_
_
_
1
2
3.001
_
_
=
_
4.001
5.001
_
.
So the normal equations are
2r + j = 4.001
r + 2j = 5.001
which have the unique solution
r =
3.001
3
, j =
6.001
3
.
48 CHAPTER 2. MATRICES
EXAMPLE 2.6.2 Points (r

, j

), . . . , (r
a
, j
a
) are experimentally deter-
mined and should lie on a line j = :r +c. Find a least squares solution to
the problem.
Solution. The points have to satisfy
:r

+ c = j

.
.
.
:r
a
+ c = j
a
,
or r = 1, where
=
_

_
r

1
.
.
.
.
.
.
r
a
1
_

_, A =
_
:
c
_
, 1 =
_

_
j

.
.
.
j
a
_

_.
The normal equations are given by (
|
)A =
|
1. Here

|
=
_
r

. . . r
a
1 . . . 1
_
_

_
r

1
.
.
.
.
.
.
r
a
1
_

_ =
_
r

+ . . . + r

a
r

+ . . . + r
a
r

+ . . . + r
a
n
_
Also

|
1 =
_
r

. . . r
a
1 . . . 1
_
_

_
j

.
.
.
j
a
_

_ =
_
r

+ . . . + r
a
j
a
j

+ . . . + j
a
_
.
It is not dicult to prove that
= det (
|
) =

_i<)a
(r
i
r
)
)
2
,
which is positive unless r

= . . . = r
a
. Hence if not all of r

, . . . , r
a
are
equal,
|
is nonsingular and the normal equations have a unique solution.
This can be shown to be
: =
1

_i<)a
(r
i
r
)
)(j
i
j
)
), c =
1

_i<)a
(r
i
j
)
r
)
j
i
)(r
i
r
)
).
REMARK 2.6.1 The matrix
|
is symmetric.
2.7. PROBLEMS 49
2.7 PROBLEMS
1. Let =
_
1 4
3 1
_
. Prove that is nonsingular, nd

and
express as a product of elementary row matrices.
[Answer:

=
_
_


_
,
= 1

(3)1

(13)1

(4) is one such decomposition.]
2. A square matrix 1 = [d
i)
] is called diagonal if d
i)
= 0 for i ,= ,. (That
is the odiagonal elements are zero.) Prove that premultiplication
of a matrix by a diagonal matrix 1 results in matrix 1 whose
rows are the rows of multiplied by the respective diagonal elements
of 1. State and prove a similar result for postmultiplication by a
diagonal matrix.
Let diag (o

, . . . , o
a
) denote the diagonal matrix whose diagonal ele-
ments d
ii
are o

, . . . , o
a
, respectively. Show that
diag (o

, . . . , o
a
)diag (/

, . . . , /
a
) = diag (o

, . . . , o
a
/
a
)
and deduce that if o

. . . o
a
,= 0, then diag (o

, . . . , o
a
) is nonsingular
and
(diag (o

, . . . , o
a
))

= diag (o

, . . . , o

a
).
Also prove that diag (o

, . . . , o
a
) is singular if o
i
= 0 for some i.
3. Let =
_
_
0 0 2
1 2 6
3 7 9
_
_
. Prove that is nonsingular, nd

and
express as a product of elementary row matrices.
[Answers:

=
_
_
12 7 2
9
3 1
1
0 0
_
_
,
= 1

1

(3)1

1

(2)1

(2)1

(24)1

(9) is one such decompo-
sition.]
50 CHAPTER 2. MATRICES
4. Find the rational number / for which the matrix =
_
_
1 2 /
3 1 1
5 3 5
_
_
is singular. [Answer: / = 3.]
5. Prove that =
_
1 2
2 4
_
is singular and nd a nonsingular matrix
1 such that 1 has last row zero.
6. If =
_
1 4
3 1
_
, verify that

2 + 131

= 0 and deduce that

=

(21

).
7. Let =
_
_
1 1 1
0 0 1
2 1 2
_
_
.
(i) Verify that

= 3

3 + 1

.
(ii) Express

in terms of

, and 1

and hence calculate

explicitly.
(iii) Use (i) to prove that is nonsingular and nd

explicitly.
[Answers: (ii)

= 6

8 + 31

=
_
_
11 8 4
12 9 4
20 16 5
_
_
;
(iii)

3 + 31

=
_
_
1 3 1
2 4 1
0 1 0
_
_
.]
8. (i) Let 1 be an nn matrix such that 1

= 0. If = 1
a
1, prove
that is nonsingular and

= 1
a
+ 1 + 1

.
Show that the system of linear equations A = / has the solution
A = / + 1/ + 1

/.
(ii) If 1 =
_
_
0 : :
0 0 t
0 0 0
_
_
, verify that 1

= 0 and use (i) to determine


(1

1)

explicitly.
2.7. PROBLEMS 51
[Answer:
_
_
1 : : + :t
0 1 t
0 0 1
_
_
.]
9. Let be n n.
(i) If

= 0, prove that is singular.


(ii) If

= and ,= 1
a
, prove that is singular.
10. Use Question 7 to solve the system of equations
r + j . = o
. = /
2r + j + 2. = c
where o, /, c are given rationals. Check your answer using the Gauss
Jordan algorithm.
[Answer: r = o 3/ + c, j = 2o + 4/ c, . = /.]
11. Determine explicitly the following products of 3 3 elementary row
matrices.
(i) 1

1

(ii) 1

(5)1

(iii) 1

(3)1

(3) (iv) (1

(100))

(v) 1


(vi) (1

(7))

(vii) (1

(7)1

(1))

.
[Answers: (i)
_
_
0 0 1
1 0 0
0 1 0
_
_
(ii)
_
_
0 5 0
1 0 0
0 0 1
_
_
(iii)
_
_
8 3 0
3 1 0
0 0 1
_
_
(iv)
_
_
_
0 0
0 1 0
0 0 1
_
_
(v)
_
_
0 1 0
1 0 0
0 0 1
_
_
(vi)
_
_
1 7 0
0 1 0
0 0 1
_
_
(vii)
_
_
1 7 0
0 1 0
1 7 1
_
_
.]
12. Let be the following product of 4 4 elementary row matrices:
= 1

(2)1

1

(3).
Find and

explicitly.
[Answers: =
_

_
0 3 0 1
0 1 0 0
0 0 2 0
1 0 0 0
_

_
,

=
_

_
0 0 0 1
0 1 0 0
0 0
1
0
1 3 0 0
_

_
.]
52 CHAPTER 2. MATRICES
13. Determine which of the following matrices over Z

are nonsingular
and nd the inverse, where possible.
(a)
_

_
1 1 0 1
0 0 1 1
1 1 1 1
1 0 0 1
_

_
(b)
_

_
1 1 0 1
0 1 1 1
1 0 1 0
1 1 0 1
_

_
.
[Answer: (a)
_

_
1 1 1 1
1 0 0 1
1 0 1 0
1 1 1 0
_

_
.]
14. Determine which of the following matrices are nonsingular and nd
the inverse, where possible.
(a)
_
_
1 1 1
1 1 0
2 0 0
_
_
(b)
_
_
2 2 4
1 0 1
0 1 0
_
_
(c)
_
_
4 6 3
0 0 7
0 0 5
_
_
(d)
_
_
2 0 0
0 5 0
0 0 7
_
_
(e)
_

_
1 2 4 6
0 1 2 0
0 0 1 2
0 0 0 2
_

_
(f)
_
_
1 2 3
4 5 6
5 7 9
_
_
.
[Answers: (a)
_
_
0 0
1
0 1
1
1 1 1
_
_
(b)
_
_

2 1
0 0 1
1
1 1
_
_
(d)
_
_
_
0 0
0

0
0 0
1
_
_
(e)
_

_
1 2 0 3
0 1 2 2
0 0 1 1
0 0 0
1
_

_
.]
15. Let be a nonsingular n n matrix. Prove that
|
is nonsingular
and that (
|
)

= (

)
|
.
16. Prove that =
_
o /
c d
_
has no inverse if od /c = 0.
[Hint: Use the equation

(o + d) + (od /c)1

= 0.]
2.7. PROBLEMS 53
17. Prove that the real matrix =
_
_
1 o /
o 1 c
/ c 1
_
_
is nonsingular by
proving that is rowequivalent to 1

.
18. If 1

1 = 1, prove that 1


a
1 = 1
a
for n 1.
19. Let =
_
_
_
, 1 =
_
1 3
1 4
_
. Verify that 1

1 =
_
_
0
0 1
_
and deduce that

a
=
1
7
_
3 3
4 4
_
+
1
7
_
5
12
_
a
_
4 3
4 3
_
.
20. Let =
_
o /
c d
_
be a Markov matrix; that is a matrix whose elements
are nonnegative and satisfy o+c = 1 = /+d. Also let 1 =
_
/ 1
c 1
_
.
Prove that if ,= 1

then
(i) 1 is nonsingular and 1

1 =
_
1 0
0 o + d 1
_
,
(ii)
a

1
/ + c
_
/ /
c c
_
as n , if ,=
_
0 1
1 0
_
.
21. If A =
_
_
1 2
3 4
5 6
_
_
and Y =
_
_
1
3
4
_
_
, nd AA
|
, A
|
A, Y Y
|
, Y
|
Y .
[Answers:
_
_
5 11 17
11 25 39
17 39 61
_
_
,
_
35 44
44 56
_
,
_
_
1 3 4
3 9 12
4 12 16
_
_
, 26.]
22. Prove that the system of linear equations
r + 2j = 4
r + j = 5
3r + 5j = 12
is inconsistent and nd a least squares solution of the system.
[Answer: r = 6, j = 7,6.]
54 CHAPTER 2. MATRICES
23. The points (0, 0), (1, 0), (2, 1), (3, 4), (4, 8) are required to lie on a
parabola j = o + /r + cr

. Find a least squares solution for o, /, c.


Also prove that no parabola passes through these points.
[Answer: o =
1
, / = 2, c = 1.]
24. If is a symmetric nn real matrix and 1 is n:, prove that 1
|
1
is a symmetric :: matrix.
25. If is :n and 1 is n :, prove that 1 is singular if : n.
26. Let and 1 be n n. If or 1 is singular, prove that 1 is also
singular.
Chapter 3
SUBSPACES
3.1 Introduction
Throughout this chapter, we will be studying 1
a
, the set of all ndimensional
column vectors with components from a eld 1. We continue our study of
matrices by considering an important class of subsets of 1
a
called subspaces.
These arise naturally for example, when we solve a system of : linear ho-
mogeneous equations in n unknowns.
We also study the concept of linear dependence of a family of vectors.
This was introduced briey in Chapter 2, Remark 2.5.4. Other topics dis-
cussed are the row space, column space and null space of a matrix over 1,
the dimension of a subspace, particular examples of the latter being the rank
and nullity of a matrix.
3.2 Subspaces of F
n
DEFINITION 3.2.1 A subset o of 1
a
is called a subspace of 1
a
if
1. The zero vector belongs to o; (that is, 0 o);
2. If n o and o, then n + o; (o is said to be closed under
vector addition);
3. If n o and t 1, then tn o; (o is said to be closed under scalar
multiplication).
EXAMPLE 3.2.1 Let `
na
(1). Then the set of vectors A 1
a
satisfying A = 0 is a subspace of 1
a
called the null space of and is
denoted here by (). (It is sometimes called the solution space of .)
55
56 CHAPTER 3. SUBSPACES
Proof. (1) 0 = 0, so 0 (); (2) If A, Y (), then A = 0 and
Y = 0, so (A + Y ) = A + Y = 0 + 0 = 0 and so A + Y (); (3)
If A () and t 1, then (tA) = t(A) = t0 = 0, so tA ().
For example, if =
_
1 0
0 1
_
, then () = 0, the set consisting of
just the zero vector. If =
_
1 2
2 4
_
, then () is the set of all scalar
multiples of [2, 1]
|
.
EXAMPLE 3.2.2 Let A

, . . . , A
n
1
a
. Then the set consisting of all
linear combinations r

+ + r
n
A
n
, where r

, . . . , r
n
1, is a sub-
space of 1
a
. This subspace is called the subspace spanned or generated by
A

, . . . , A
n
and is denoted here by A

, . . . , A
n
). We also call A

, . . . , A
n
a spanning family for o = A

, . . . , A
n
).
Proof. (1) 0 = 0A

+ + 0A
n
, so 0 A

, . . . , A
n
); (2) If A, Y
A

, . . . , A
n
), then A = r

+ + r
n
A
n
and Y = j

+ + j
n
A
n
,
so
A + Y = (r

+ + r
n
A
n
) + (j

+ + j
n
A
n
)
= (r

+ j

)A

+ + (r
n
+ j
n
)A
n
A

, . . . , A
n
).
(3) If A A

, . . . , A
n
) and t 1, then
A = r

+ + r
n
A
n
tA = t(r

+ + r
n
A
n
)
= (tr

)A

+ + (tr
n
)A
n
A

, . . . , A
n
).
For example, if `
na
(1), the subspace generated by the columns of
is an important subspace of 1
n
and is called the column space of . The
column space of is denoted here by C(). Also the subspace generated
by the rows of is a subspace of 1
a
and is called the row space of and is
denoted by 1().
EXAMPLE 3.2.3 For example 1
a
= 1

, . . . , 1
a
), where 1

, . . . , 1
a
are
the ndimensional unit vectors. For if A = [r

, . . . , r
a
]
|
1
a
, then A =
r

+ + r
a
1
a
.
EXAMPLE 3.2.4 Find a spanning family for the subspace o of 1

dened
by the equation 2r 3j + 5. = 0.
3.2. SUBSPACES OF 1
.
57
Solution. (o is in fact the null space of [2, 3, 5], so o is indeed a subspace
of 1

.)
If [r, j, .]
|
o, then r =
3
j

.. Then
_
_
r
j
.
_
_
=
_
_
_
j

.
j
.
_
_
= j
_
_
_
1
0
_
_
+ .
_
_

0
1
_
_
and conversely. Hence [
3
, 1, 0]
|
and [

, 0, 1]
|
form a spanning family for
o.
The following result is easy to prove:
LEMMA 3.2.1 Suppose each of A

, . . . , A

is a linear combination of
Y

, . . . , Y
-
. Then any linear combination of A

, . . . , A

is a linear combi-
nation of Y

, . . . , Y
-
.
As a corollary we have
THEOREM 3.2.1 Subspaces A

, . . . , A

) and Y

, . . . , Y
-
) are equal if
each of A

, . . . , A

is a linear combination of Y

, . . . , Y
-
and each of Y

, . . . , Y
-
is a linear combination of A

, . . . , A

.
COROLLARY 3.2.1 Subspaces A

, . . . , A

, 7

, . . . , 7
|
) and A

, . . . , A

)
are equal if each of 7

, . . . , 7
|
is a linear combination of A

, . . . , A

.
EXAMPLE 3.2.5 If A and Y are vectors in 1
a
, then
A, Y ) = A + Y, A Y ).
Solution. Each of A + Y and A Y is a linear combination of A and Y .
Also
A =
1
2
(A + Y ) +
1
2
(A Y ) and Y =
1
2
(A + Y )
1
2
(A Y ),
so each of A and Y is a linear combination of A + Y and A Y .
There is an important application of Theorem 3.2.1 to row equivalent
matrices (see Denition 1.2.4):
THEOREM 3.2.2 If is row equivalent to 1, then 1() = 1(1).
Proof. Suppose that 1 is obtained from by a sequence of elementary row
operations. Then it is easy to see that each row of 1 is a linear combination
of the rows of . But can be obtained from 1 by a sequence of elementary
operations, so each row of is a linear combination of the rows of 1. Hence
by Theorem 3.2.1, 1() = 1(1).
58 CHAPTER 3. SUBSPACES
REMARK 3.2.1 If is row equivalent to 1, it is not always true that
C() = C(1).
For example, if =
_
1 1
1 1
_
and 1 =
_
1 1
0 0
_
, then 1 is in fact the
reduced rowechelon form of . However we see that
C() =
__
1
1
_
,
_
1
1
__
=
__
1
1
__
and similarly C(1) =
__
1
0
__
.
Consequently C() ,= C(1), as
_
1
1
_
C() but
_
1
1
_
, C(1).
3.3 Linear dependence
We now recall the denition of linear dependence and independence of a
family of vectors in 1
a
given in Chapter 2.
DEFINITION 3.3.1 Vectors A

, . . . , A
n
in 1
a
are said to be linearly
dependent if there exist scalars r

, . . . , r
n
, not all zero, such that
r

+ + r
n
A
n
= 0.
In other words, A

, . . . , A
n
are linearly dependent if some A
i
is expressible
as a linear combination of the remaining vectors.
A

, . . . , A
n
are called linearly independent if they are not linearly depen-
dent. Hence A

, . . . , A
n
are linearly independent if and only if the equation
r

+ + r
n
A
n
= 0
has only the trivial solution r

= 0, . . . , r
n
= 0.
EXAMPLE 3.3.1 The following three vectors in 1

=
_
_
1
2
3
_
_
, A

=
_
_
1
1
2
_
_
, A

=
_
_
1
7
12
_
_
are linearly dependent, as 2A

+ 3A

+ (1)A

= 0.
3.3. LINEAR DEPENDENCE 59
REMARK 3.3.1 If A

, . . . , A
n
are linearly independent and
r

+ + r
n
A
n
= j

+ + j
n
A
n
,
then r

= j

, . . . , r
n
= j
n
. For the equation can be rewritten as
(r

)A

+ + (r
n
j
n
)A
n
= 0
and so r

= 0, . . . , r
n
j
n
= 0.
THEOREM 3.3.1 A family of : vectors in 1
a
will be linearly dependent
if : n. Equivalently, any linearly independent family of : vectors in 1
a
must satisfy : n.
Proof. The equation
r

+ + r
n
A
n
= 0
is equivalent to n homogeneous equations in : unknowns. By Theorem 1.5.1,
such a system has a nontrivial solution if : n.
The following theorem is an important generalization of the last result
and is left as an exercise for the interested student:
THEOREM 3.3.2 A family of : vectors in A

, . . . , A

) will be linearly
dependent if : :. Equivalently, a linearly independent family of : vectors
in A

, . . . , A

) must have : :.
Here is a useful criterion for linear independence which is sometimes
called the lefttoright test:
THEOREM 3.3.3 Vectors A

, . . . , A
n
in 1
a
are linearly independent if
(a) A

,= 0;
(b) For each / with 1 < / :, A
I
is not a linear combination of
A

, . . . , A
I
.
One application of this criterion is the following result:
THEOREM 3.3.4 Every subspace o of 1
a
can be represented in the form
o = A

, . . . , A
n
), where : n.
60 CHAPTER 3. SUBSPACES
Proof. If o = 0, there is nothing to prove we take A

= 0 and : = 1.
So we assume o contains a nonzero vector A

; then A

) o as o is a
subspace. If o = A

), we are nished. If not, o will contain a vector A

,
not a linear combination of A

; then A

, A

) o as o is a subspace. If
o = A

, A

), we are nished. If not, o will contain a vector A

which is
not a linear combination of A

and A

. This process must eventually stop,


for at stage / we have constructed a family of / linearly independent vectors
A

, . . . , A
I
, all lying in 1
a
and hence / n.
There is an important relationship between the columns of and 1, if
is rowequivalent to 1.
THEOREM 3.3.5 Suppose that is row equivalent to 1 and let c

, . . . , c

be distinct integers satisfying 1 c


i
n. Then
(a) Columns
c
1
, . . . ,
cr
of are linearly dependent if and only if the
corresponding columns of 1 are linearly dependent; indeed more is
true:
r

c
1
+ + r

cr
= 0 r

1
c
1
+ + r

1
cr
= 0.
(b) Columns
c
1
, . . . ,
cr
of are linearly independent if and only if the
corresponding columns of 1 are linearly independent.
(c) If 1 c

n and c

is distinct from c

, . . . , c

, then

c
r+1
= .

c
1
+ + .

cr
1
c
r+1
= .

1
c
1
+ + .

1
cr
.
Proof. First observe that if Y = [j

, . . . , j
a
]
|
is an ndimensional column
vector and is :n, then
Y = j

+ + j
a

a
.
Also Y = 0 1Y = 0, if 1 is row equivalent to . Then (a) follows by
taking j
i
= r
c
j
if i = c
)
and j
i
= 0 otherwise.
(b) is logically equivalent to (a), while (c) follows from (a) as

c
r+1
= .

c
1
+ + .

cr
.

c
1
+ + .

cr
+ (1)
c
r+1
= 0
.

1
c
1
+ + .

1
cr
+ (1)1
c
r+1
= 0
1
c
r+1
= .

1
c
1
+ + .

1
cr
.
3.4. BASIS OF A SUBSPACE 61
EXAMPLE 3.3.2 The matrix
=
_
_
1 1 5 1 4
2 1 1 2 2
3 0 6 0 3
_
_
has reduced rowechelon form equal to
1 =
_
_
1 0 2 0 1
0 1 3 0 2
0 0 0 1 3
_
_
.
We notice that 1

, 1

and 1

are linearly independent and hence so are

and

. Also
1

= 21

+ 31

= (1)1

+ 21

+ 31

,
so consequently

= 2

+ 3

= (1)

+ 2

+ 3

.
3.4 Basis of a subspace
We now come to the important concept of basis of a vector subspace.
DEFINITION 3.4.1 Vectors A

, . . . , A
n
belonging to a subspace o are
said to form a basis of o if
(a) Every vector in o is a linear combination of A

, . . . , A
n
;
(b) A

, . . . , A
n
are linearly independent.
Note that (a) is equivalent to the statement that o = A

, . . . , A
n
) as we
automatically have A

, . . . , A
n
) o. Also, in view of Remark 3.3.1 above,
(a) and (b) are equivalent to the statement that every vector in o is uniquely
expressible as a linear combination of A

, . . . , A
n
.
EXAMPLE 3.4.1 The unit vectors 1

, . . . , 1
a
form a basis for 1
a
.
62 CHAPTER 3. SUBSPACES
REMARK 3.4.1 The subspace 0, consisting of the zero vector alone,
does not have a basis. For every vector in a linearly independent family
must necessarily be nonzero. (For example, if A

= 0, then we have the


nontrivial linear relation
1A

+ 0A

+ + 0A
n
= 0
and A

, . . . , A
n
would be linearly dependent.)
However if we exclude this case, every other subspace of 1
a
has a basis:
THEOREM 3.4.1 A subspace of the form A

, . . . , A
n
), where at least
one of A

, . . . , A
n
is nonzero, has a basis A
c
1
, . . . , A
cr
, where 1 c

<
< c

:.
Proof. (The lefttoright algorithm). Let c

be the least index / for which


A
I
is nonzero. If c

= : or if all the vectors A


I
with / c

are linear
combinations of A
c
1
, terminate the algorithm and let : = 1. Otherwise let
c

be the least integer / c

such that A
I
is not a linear combination of
A
c
1
.
If c

= : or if all the vectors A


I
with / c

are linear combinations


of A
c
1
and A
c
2
, terminate the algorithm and let : = 2. Eventually the
algorithm will terminate at the :th stage, either because c

= :, or because
all vectors A
I
with / c

are linear combinations of A


c
1
, . . . , A
cr
.
Then it is clear by the construction of A
c
1
, . . . , A
cr
, using Corollary 3.2.1
that
(a) A
c
1
, . . . , A
cr
) = A

, . . . , A
n
);
(b) the vectors A
c
1
, . . . , A
cr
are linearly independent by the lefttoright
test.
Consequently A
c
1
, . . . , A
cr
form a basis (called the lefttoright basis) for
the subspace A

, . . . , A
n
).
EXAMPLE 3.4.2 Let A and Y be linearly independent vectors in 1
a
.
Then the subspace 0, 2A, A, Y, A +Y ) has lefttoright basis consisting
of 2A, Y .
A subspace o will in general have more than one basis. For example, any
permutation of the vectors in a basis will yield another basis. Given one
particular basis, one can determine all bases for o using a simple formula.
This is left as one of the problems at the end of this chapter.
We settle for the following important fact about bases:
3.4. BASIS OF A SUBSPACE 63
THEOREM 3.4.2 Any two bases for a subspace o must contain the same
number of elements.
Proof. For if A

, . . . , A

and Y

, . . . , Y
-
are bases for o, then Y

, . . . , Y
-
form a linearly independent family in o = A

, . . . , A

) and hence : : by
Theorem 3.3.2. Similarly : : and hence : = :.
DEFINITION 3.4.2 This number is called the dimension of o and is
written dimo. Naturally we dene dim0 = 0.
It follows from Theorem 3.3.1 that for any subspace o of 1
a
, we must have
dimo n.
EXAMPLE 3.4.3 If 1

, . . . , 1
a
denote the ndimensional unit vectors in
1
a
, then dim1

, . . . , 1
i
) = i for 1 i n.
The following result gives a useful way of exhibiting a basis.
THEOREM 3.4.3 A linearly independent family of : vectors in a sub-
space o, with dimo = :, must be a basis for o.
Proof. Let A

, . . . , A
n
be a linearly independent family of vectors in a
subspace o, where dimo = :. We have to show that every vector A o is
expressible as a linear combination of A

, . . . , A
n
. We consider the following
family of vectors in o: A

, . . . , A
n
, A. This family contains :+1 elements
and is consequently linearly dependent by Theorem 3.3.2. Hence we have
r

+ + r
n
A
n
+ r
n
A = 0, (3.1)
where not all of r

, . . . , r
n
are zero. Now if r
n
= 0, we would have
r

+ + r
n
A
n
= 0,
with not all of r

, . . . , r
n
zero, contradictiong the assumption that A

. . . , A
n
are linearly independent. Hence r
n
,= 0 and we can use equation 3.1 to
express A as a linear combination of A

, . . . , A
n
:
A =
r

r
n
A

+ +
r
n
r
n
A
n
.
64 CHAPTER 3. SUBSPACES
3.5 Rank and nullity of a matrix
We can now dene three important integers associated with a matrix.
DEFINITION 3.5.1 Let `
na
(1). Then
(a) column rank =dimC();
(b) row rank =dim1();
(c) nullity =dim().
We will now see that the reduced rowechelon form 1 of a matrix allows
us to exhibit bases for the row space, column space and null space of .
Moreover, an examination of the number of elements in each of these bases
will immediately result in the following theorem:
THEOREM 3.5.1 Let `
na
(1). Then
(a) column rank =row rank ;
(b) column rank +nullity = n.
Finding a basis for 1(): The : nonzero rows of 1 form a basis for 1()
and hence row rank = :.
For we have seen earlier that 1() = 1(1). Also
1(1) = 1

, . . . , 1
n
)
= 1

, . . . , 1

, 0 . . . , 0)
= 1

, . . . , 1

).
The linear independence of the nonzero rows of 1 is proved as follows: Let
the leading entries of rows 1, . . . , : of 1 occur in columns c

, . . . , c

. Suppose
that
r

+ + r

= 0.
Then equating components c

, . . . , c

of both sides of the last equation, gives


r

= 0, . . . , r

= 0, in view of the fact that 1 is in reduced row echelon


form.
Finding a basis for C(): The : columns
c
1
, . . . ,
cr
form a basis for
C() and hence column rank = :. For it is clear that columns c

, . . . , c

of 1 form the lefttoright basis for C(1) and consequently from parts (b)
and (c) of Theorem 3.3.5, it follows that columns c

, . . . , c

of form the
lefttoright basis for C().
3.5. RANK AND NULLITY OF A MATRIX 65
Finding a basis for (): For notational simplicity, let us suppose that c

=
1, . . . , c

= :. Then 1 has the form


1 =
_

_
1 0 0 /

/
a
0 1 0 /

/
a
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 1 /

/
a
0 0 0 0 0
.
.
.
.
.
.
.
.
.
.
.
.
.
.
.
0 0 0 0 0
_

_
.
Then (1) and hence () are determined by the equations
r

= (/

)r

+ + (/
a
)r
a
.
.
.
r

= (/

)r

+ + (/
a
)r
a
,
where r

, . . . , r
a
are arbitrary elements of 1. Hence the general vector A
in () is given by
_

_
r

.
.
.
r

r

.
.
.
r
a
_

_
= r

_

_
/

.
.
.
/

1
.
.
.
0
_

_
+ + r
a
_

_
/
a
.
.
.
/
a
0
.
.
.
1
_

_
(3.2)
= r

A

+ + r
a
A
a
.
Hence () is spanned by A

, . . . , A
a
, as r

, . . . , r
a
are arbitrary. Also
A

, . . . , A
a
are linearly independent. For equating the right hand side of
equation 3.2 to 0 and then equating components : + 1, . . . , n of both sides
of the resulting equation, gives r

= 0, . . . , r
a
= 0.
Consequently A

, . . . , A
a
form a basis for ().
Theorem 3.5.1 now follows. For we have
row rank = dim1() = :
column rank = dimC() = :.
Hence
row rank = column rank .
66 CHAPTER 3. SUBSPACES
Also
column rank + nullity = : + dim() = : + (n :) = n.
DEFINITION 3.5.2 The common value of column rank and row rank
is called the rank of and is denoted by rank .
EXAMPLE 3.5.1 Given that the reduced rowechelon form of
=
_
_
1 1 5 1 4
2 1 1 2 2
3 0 6 0 3
_
_
equal to
1 =
_
_
1 0 2 0 1
0 1 3 0 2
0 0 0 1 3
_
_
,
nd bases for 1(), C() and ().
Solution. [1, 0, 2, 0, 1], [0, 1, 3, 0, 2] and [0, 0, 0, 1, 3] form a basis for
1(). Also

=
_
_
1
2
3
_
_
,

=
_
_
1
1
0
_
_
,

=
_
_
1
2
0
_
_
form a basis for C().
Finally () is given by
_

_
r

_
=
_

_
2r

+ r

3r

2r

3r

_
= r

_
2
3
1
0
0
_

_
+ r

_
1
2
0
3
1
_

_
= r

+ r

,
where r

and r

are arbitrary. Hence A

and A

form a basis for ().


Here rank = 3 and nullity = 2.
EXAMPLE 3.5.2 Let =
_
1 2
2 4
_
. Then 1 =
_
1 2
0 0
_
is the reduced
rowechelon form of .
3.6. PROBLEMS 67
Hence [1, 2] is a basis for 1() and
_
1
2
_
is a basis for C(). Also ()
is given by the equation r

= 2r

, where r

is arbitrary. Then
_
r

_
=
_
2r

_
= r

_
2
1
_
and hence
_
2
1
_
is a basis for ().
Here rank = 1 and nullity = 1.
EXAMPLE 3.5.3 Let =
_
1 2
3 4
_
. Then 1 =
_
1 0
0 1
_
is the reduced
rowechelon form of .
Hence [1, 0], [0, 1] form a basis for 1() while [1, 3], [2, 4] form a basis
for C(). Also () = 0.
Here rank = 2 and nullity = 0.
We conclude this introduction to vector spaces with a result of great
theoretical importance.
THEOREM 3.5.2 Every linearly independent family of vectors in a sub-
space o can be extended to a basis of o.
Proof. Suppose o has basis A

, . . . , A
n
and that Y

, . . . , Y

is a linearly
independent family of vectors in o. Then
o = A

, . . . , A
n
) = Y

, . . . , Y

, A

, . . . , A
n
),
as each of Y

, . . . , Y

is a linear combination of A

, . . . , A
n
.
Then applying the lefttoright algorithm to the second spanning family
for o will yield a basis for o which includes Y

, . . . , Y

.
3.6 PROBLEMS
1. Which of the following subsets of 1

are subspaces?
(a) [r, j] satisfying r = 2j;
(b) [r, j] satisfying r = 2j and 2r = j;
(c) [r, j] satisfying r = 2j + 1;
(d) [r, j] satisfying rj = 0;
68 CHAPTER 3. SUBSPACES
(e) [r, j] satisfying r 0 and j 0.
[Answer: (a) and (b).]
2. If A, Y, 7 are vectors in 1
a
, prove that
A, Y, 7) = A + Y, A + 7, Y + 7).
3. Determine if A

=
_

_
1
0
1
2
_

_
, A

=
_

_
0
1
1
2
_

_
and A

=
_

_
1
1
1
3
_

_
are linearly
independent in 1

.
4. For which real numbers are the following vectors linearly independent
in 1

?
A

=
_
_

1
1
_
_
, A

=
_
_
1

1
_
_
, A

=
_
_
1
1

_
_
.
5. Find bases for the row, column and null spaces of the following matrix
over :
=
_

_
1 1 2 0 1
2 2 5 0 3
0 0 0 1 3
8 11 19 0 11
_

_
.
6. Find bases for the row, column and null spaces of the following matrix
over Z

:
=
_

_
1 0 1 0 1
0 1 0 1 1
1 1 1 1 0
0 0 1 1 0
_

_
.
7. Find bases for the row, column and null spaces of the following matrix
over Z

:
=
_

_
1 1 2 0 1 3
2 1 4 0 3 2
0 0 0 1 3 0
3 0 2 4 3 2
_

_
.
3.6. PROBLEMS 69
8. Find bases for the row, column and null spaces of the matrix dened
in section 1.6, Problem 17. (Note: In this question, 1 is a eld of four
elements.)
9. If A

, . . . , A
n
form a basis for a subspace o, prove that
A

, A

+ A

, . . . , A

+ + A
n
also form a basis for o.
10. Let =
_
o / c
1 1 1
_
. Find conditions on o, /, c such that (a) rank =
1; (b) rank = 2.
[Answer: (a) o = / = c; (b) at least two of o, /, c are distinct.]
11. Let o be a subspace of 1
a
with dimo = :. If A

, . . . , A
n
are vectors
in o with the property that o = A

, . . . , A
n
), prove that A

. . . , A
n
form a basis for o.
12. Find a basis for the subspace o of 1

dened by the equation


r + 2j + 3. = 0.
Verify that Y

= [1, 1, 1]
|
o and nd a basis for o which includes
Y

.
13. Let A

, . . . , A
n
be vectors in 1
a
. If A
i
= A
)
, where i < ,, prove that
A

, . . . A
n
are linearly dependent.
14. Let A

, . . . , A
n
be vectors in 1
a
. Prove that
dimA

, . . . , A
n
) = dimA

, . . . , A
n
)
if A
n
is a linear combination of A

, . . . , A
n
, but
dimA

, . . . , A
n
) = dimA

, . . . , A
n
) + 1
if A
n
is not a linear combination of A

, . . . , A
n
.
Deduce that the system of linear equations A = 1 is consistent, if
and only if
rank [[1] = rank .
70 CHAPTER 3. SUBSPACES
15. Let o

, . . . , o
a
be elements of 1, not all zero. Prove that the set of
vectors [r

, . . . , r
a
]
|
where r

, . . . , r
a
satisfy
o

+ + o
a
r
a
= 0
is a subspace of 1
a
with dimension equal to n 1.
16. Prove Lemma 3.2.1, Theorem 3.2.1, Corollary 3.2.1 and Theorem 3.3.2.
17. Let 1 and o be subspaces of 1
a
, with 1 o. Prove that
dim1 dimo
and that equality implies 1 = o. (This is a very useful way of proving
equality of subspaces.)
18. Let 1 and o be subspaces of 1
a
. If 1 o is a subspace of 1
a
, prove
that 1 o or o 1.
19. Let A

, . . . , A

be a basis for a subspace o. Prove that all bases for o


are given by the family Y

, . . . , Y

, where
Y
i
=

)
o
i)
A
)
,
and where = [o
i)
] `

(1) is a nonsingular matrix.


Chapter 4
DETERMINANTS
DEFINITION 4.0.1 If =
_
o

o

o

o

_
, we dene the determinant of
, (also denoted by det ,) to be the scalar
det = o

o

o

o

.
The notation

o

o

o

o

is also used for the determinant of .


If is a real matrix, there is a geometrical interpretation of det . If
1 = (r

, j

) and Q = (r

, j

) are points in the plane, forming a triangle


with the origin O = (0, 0), then apart from sign,
1

is the area
of the triangle O1Q. For, using polar coordinates, let r

= :

cos

and
j

= :

sin

, where :

= O1 and

is the angle made by the ray



O1 with
the positive raxis. Then triangle O1Q has area
1
O1 OQsin , where
= 1OQ. If triangle O1Q has anticlockwise orientation, then the ray

OQ makes angle

+ with the positive raxis. (See Figure 4.1.)


Also r

= :

cos

and j

= :

sin

. Hence
Area O1Q =
1
2
O1 OQsin
=
1
2
O1 OQsin (

)
=
1
2
O1 OQ(sin

cos

cos

sin

)
=
1
2
(OQsin

O1 cos

OQcos

O1 sin

)
71
72 CHAPTER 4. DETERMINANTS

`
r
j
>
>
>
>
>
>
>
>
Q
1
O

Figure 4.1: Area of triangle O1Q.


=
1
2
(j

)
=
1
2

.
Similarly, if triangle O1Q has clockwise orientation, then its area equals

.
For a general triangle 1

, with 1
i
= (r
i
, j
i
), i = 1, 2, 3, we can
take 1

as the origin. Then the above formula gives


1
2

or
1
2

,
according as vertices 1

are anticlockwise or clockwise oriented.


We now give a recursive denition of the determinant of an nn matrix
= [o
i)
], n 3.
DEFINITION 4.0.2 (Minor) Let `
i)
() (or simply `
i)
if there is no
ambiguity) denote the determinant of the (n 1) (n 1) submatrix of
formed by deleting the ith row and ,th column of . (`
i)
() is called
the (i, ,) minor of .)
Assume that the determinant function has been dened for matrices of
size (n1)(n1). Then det is dened by the socalled rstrow Laplace
73
expansion:
det = o

`

() o

`

() + . . . + (1)
1+a
`
a
()
=
a

)
(1)
1+)
o
)
`
)
().
For example, if = [o
i)
] is a 3 3 matrix, the Laplace expansion gives
det = o

`

() o

`

() + o

`

()
= o

o

o

o

o

o

o

o

o

+ o

o

o

o

o

= o

(o

o

o

o

) o

(o

o

o

o

) + o

(o

o

o

o

)
= o

o

o

o

o

o

o

o

o

+ o

o

o

+ o

o

o

o

o

o

.
(The recursive denition also works for 2 2 determinants, if we dene the
determinant of a 1 1 matrix [t] to be the scalar t:
det = o

`

() o

`

() = o

o

o

o

.)
EXAMPLE 4.0.1 If 1

is a triangle with 1
i
= (r
i
, j
i
), i = 1, 2, 3,
then the area of triangle 1

is
1
2

1
r

1
r

or
1
2

1
r

1
r

,
according as the orientation of 1

is anticlockwise or clockwise.
For from the denition of 3 3 determinants, we have
1
2

1
r

1
r

=
1
2
_
r

1
j

1
r

_
=
1
2

.
One property of determinants that follows immediately from the deni-
tion is the following:
THEOREM 4.0.1 If a row of a matrix is zero, then the value of the de-
terminant is zero.
74 CHAPTER 4. DETERMINANTS
(The corresponding result for columns also holds, but here a simple proof
by induction is needed.)
One of the simplest determinants to evaluate is that of a lower triangular
matrix.
THEOREM 4.0.2 Let = [o
i)
], where o
i)
= 0 if i < ,. Then
det A = o

o

. . . o
aa
. (4.1)
An important special case is when is a diagonal matrix.
If =diag (o

, . . . , o
a
) then det = o

. . . o
a
. In particular, for a scalar
matrix t1
a
, we have det (t1
a
) = t
a
.
Proof. Use induction on the size n of the matrix.
The result is true for n = 2. Now let n 2 and assume the result true
for matrices of size n 1. If is n n, then expanding det along row 1
gives
det = o

o

0 . . . 0
o

o

. . . 0
.
.
.
o
a
o
a
. . . o
aa

= o

(o

. . . o
aa
)
by the induction hypothesis.
If is upper triangular, equation 4.1 remains true and the proof is again
an exercise in induction, with the slight dierence that the column version
of theorem 4.0.1 is needed.
REMARK 4.0.1 It can be shown that the expanded form of the determi-
nant of an n n matrix consists of n! signed products o
i
1
o
i
2
. . . o
ain
,
where (i

, i

, . . . , i
a
) is a permutation of (1, 2, . . . , n), the sign being 1 or
1, according as the number of inversions of (i

, i

, . . . , i
a
) is even or odd.
An inversion occurs when i

i
-
but : < :. (The proof is not easy and is
omitted.)
The denition of the determinant of an n n matrix was given in terms
of the rstrow expansion. The next theorem says that we can expand
the determinant along any row or column. (The proof is not easy and is
omitted.)
75
THEOREM 4.0.3
det =
a

)
(1)
i )
o
i)
`
i)
()
for i = 1, . . . , n (the socalled ith row expansion) and
det =
a

i
(1)
i )
o
i)
`
i)
()
for , = 1, . . . , n (the socalled ,th column expansion).
REMARK 4.0.2 The expression (1)
i )
obeys the chessboard pattern
of signs:
_

_
+ + . . .
+ . . .
+ + . . .
.
.
.
_

_
.
The following theorems can be proved by straightforward inductions on
the size of the matrix:
THEOREM 4.0.4 A matrix and its transpose have equal determinants;
that is
det
|
= det .
THEOREM 4.0.5 If two rows of a matrix are equal, the determinant is
zero. Similarly for columns.
THEOREM 4.0.6 If two rows of a matrix are interchanged, the determi-
nant changes sign.
EXAMPLE 4.0.2 If 1

= (r

, j

) and 1

= (r

, j

) are distinct points,


then the line through 1

and 1

has equation

r j 1
r

1
r

= 0.
76 CHAPTER 4. DETERMINANTS
For, expanding the determinant along row 1, the equation becomes
or + /j + c = 0,
where
o =

1
j

= j

and / =

1
r

= r

.
This represents a line, as not both o and / can be zero. Also this line passes
through 1
i
, i = 1, 2. For the determinant has its rst and ith rows equal
if r = r
i
and j = j
i
and is consequently zero.
There is a corresponding formula in threedimensional geometry. If
1

, 1

, 1

are noncollinear points in threedimensional space, with 1


i
=
(r
i
, j
i
, .
i
), i = 1, 2, 3, then the equation

r j . 1
r

1
r

1
r

= 0
represents the plane through 1

, 1

, 1

. For, expanding the determinant


along row 1, the equation becomes or + /j + c. + d = 0, where
o =

1
j

1
j

, / =

1
r

1
r

, c =

1
r

1
r

.
As we shall see in chapter 6, this represents a plane if at least one of o, /, c
is nonzero. However, apart from sign and a factor
1
, the determinant
expressions for o, /, c give the values of the areas of projections of triangle
1

on the (j, .), (r, .) and (r, j) planes, respectively. Geometrically,


it is then clear that at least one of o, /, c is nonzero. It is also possible to
give an algebraic proof of this fact.
Finally, the plane passes through 1
i
, i = 1, 2, 3 as the determinant has
its rst and ith rows equal if r = r
i
, j = j
i
, . = .
i
and is consequently
zero. We now work towards proving that a matrix is nonsingular if its
determinant is nonzero.
DEFINITION 4.0.3 (Cofactor) The (i, ,) cofactor of , denoted by
C
i)
() (or C
i)
if there is no ambiguity) is dened by
C
i)
() = (1)
i )
`
i)
().
77
REMARK 4.0.3 It is important to notice that C
i)
(), like `
i)
(), does
not depend on o
i)
. Use will be made of this observation presently.
In terms of the cofactor notation, Theorem 4.0.3 takes the form
THEOREM 4.0.7
det =
a

)
o
i)
C
i)
()
for i = 1, . . . , n and
det =
a

i
o
i)
C
i)
()
for , = 1, . . . , n.
Another result involving cofactors is
THEOREM 4.0.8 Let be an n n matrix. Then
(o)
a

)
o
i)
C
I)
() = 0 if i ,= /.
Also
(/)
a

i
o
i)
C
iI
() = 0 if , ,= /.
Proof.
If is nn and i ,= /, let 1 be the matrix obtained from by replacing
row / by row i. Then det 1 = 0 as 1 has two identical rows.
Now expand det 1 along row /. We get
0 = det 1 =
a

)
/
I)
C
I)
(1)
=
a

)
o
i)
C
I)
(),
in view of Remark 4.0.3.
78 CHAPTER 4. DETERMINANTS
DEFINITION 4.0.4 (Adjoint) If = [o
i)
] is an n n matrix, the ad-
joint of , denoted by adj , is the transpose of the matrix of cofactors.
Hence
adj =
_

_
C

C

C
a
C

C

C
a
.
.
.
.
.
.
C
a
C
a
C
aa
_

_
.
Theorems 4.0.7 and 4.0.8 may be combined to give
THEOREM 4.0.9 Let be an n n matrix. Then
(adj ) = (det )1
a
= (adj ).
Proof.
(adj )
iI
=
a

)
o
i)
(adj )
)I
=
a

)
o
i)
C
I)
()
=
iI
det
= ((det )1
a
)
iI
.
Hence (adj ) = (det )1
a
. The other equation is proved similarly.
COROLLARY 4.0.1 (Formula for the inverse) If det ,= 0, then
is nonsingular and

=
1
det
adj .
EXAMPLE 4.0.3 The matrix
=
_
_
1 2 3
4 5 6
8 8 9
_
_
is nonsingular. For
det =

5 6
8 9

4 6
8 9

+ 3

4 5
8 8

= 3 + 24 24
= 3 ,= 0.
79
Also

=
1
3
_
_
C

C

C

C

C

C

C

C

C

_
_
=
1
3
_

5 6
8 9

2 3
8 9

2 3
5 6

4 6
8 9

1 3
8 9

1 3
4 6

4 5
8 8

1 2
8 8

1 2
4 5

_
=
1
3
_
_
3 6 3
12 15 6
8 8 3
_
_
.
The following theorem is useful for simplifying and numerically evaluating
a determinant. Proofs are obtained by expanding along the corresponding
row or column.
THEOREM 4.0.10 The determinant is a linear function of each row and
column.
For example
(o)

o

+ o
t

o

+ o
t

o

+ o
t

o

o

o

o

o

o

o

o

o

o

o

o

o

o

o

o
t

o
t

o
t

o

o

o

o

o

o

(/)

to

to

to

o

o

o

o

o

o

= t

o

o

o

o

o

o

o

o

o

.
COROLLARY 4.0.2 If a multiple of a row is added to another row, the
value of the determinant is unchanged. Similarly for columns.
Proof. We illustrate with a 3 3 example, but the proof is really quite
general.

o

+ to

o

+ to

o

+ to

o

o

o

o

o

o

o

o

o

o

o

o

o

o

o

to

to

to

o

o

o

o

o

o

80 CHAPTER 4. DETERMINANTS
=

o

o

o

o

o

o

o

o

o

+ t

o

o

o

o

o

o

o

o

o

o

o

o

o

o

o

o

o

o

+ t 0
=

o

o

o

o

o

o

o

o

o

.
To evaluate a determinant numerically, it is advisable to reduce the matrix
to rowechelon form, recording any sign changes caused by row interchanges,
together with any factors taken out of a row, as in the following examples.
EXAMPLE 4.0.4 Evaluate the determinant

1 2 3
4 5 6
8 8 9

.
Solution. Using row operations 1

41

and 1

81

and
then expanding along the rst column, gives

1 2 3
4 5 6
8 8 9

1 2 3
0 3 6
0 8 15

3 6
8 15

= 3

1 2
8 15

= 3

1 2
0 1

= 3.
EXAMPLE 4.0.5 Evaluate the determinant

1 1 2 1
3 1 4 5
7 6 1 2
1 1 3 4

.
Solution.

1 1 2 1
3 1 4 5
7 6 1 2
1 1 3 4

1 1 2 1
0 2 2 2
0 1 13 5
0 0 1 3

81
= 2

1 1 2 1
0 1 1 1
0 1 13 5
0 0 1 3

= 2

1 1 2 1
0 1 1 1
0 0 12 6
0 0 1 3

= 2

1 1 2 1
0 1 1 1
0 0 1 3
0 0 12 6

= 2

1 1 2 1
0 1 1 1
0 0 1 3
0 0 0 30

= 60.
EXAMPLE 4.0.6 (Vandermonde determinant) Prove that

1 1 1
o / c
o

= (/ o)(c o)(c /).


Solution. Subtracting column 1 from columns 2 and 3 , then expanding
along row 1, gives

1 1 1
o / c
o

1 0 0
o / o c o
o

/ o c o
/

= (/ o)(c o)

1 1
/ + o c + o

= (/ o)(c o)(c /).


REMARK 4.0.4 From theorems 4.0.6, 4.0.10 and corollary 4.0.2, we de-
duce
(a) det (1
i)
) = det ,
(b) det (1
i
(t)) = t det , if t ,= 0,
82 CHAPTER 4. DETERMINANTS
(c) det (1
i)
(t)) =det .
It follows that if is rowequivalent to 1, then det 1 = c det , where c ,= 0.
Hence det 1 ,= 0 det ,= 0 and det 1 = 0 det = 0. Consequently
from theorem 2.5.8 and remark 2.5.7, we have the following important result:
THEOREM 4.0.11 Let be an n n matrix. Then
(i) is nonsingular if and only if det ,= 0;
(ii) is singular if and only if det = 0;
(iii) the homogeneous system A = 0 has a nontrivial solution if and
only if det = 0.
EXAMPLE 4.0.7 Find the rational numbers o for which the following
homogeneous system has a nontrivial solution and solve the system for
these values of o:
r 2j + 3. = 0
or + 3j + 2. = 0
6r + j + o. = 0.
Solution. The coecient determinant of the system is
=

1 2 3
o 3 2
6 1 o

1 2 3
0 3 + 2o 2 3o
0 13 o 18

3 + 2o 2 3o
13 o 18

= (3 + 2o)(o 18) 13(2 3o)


= 2o

+ 6o 80 = 2(o + 8)(o 5).


So = 0 o = 8 or o = 5 and these values of o are the only values for
which the given homogeneous system has a nontrivial solution.
If o = 8, the coecient matrix has reduced rowechelon form equal to
_
_
1 0 1
0 1 2
0 0 0
_
_
83
and so the complete solution is r = ., j = 2., with . arbitrary. If o = 5,
the coecient matrix has reduced rowechelon form equal to
_
_
1 0 1
0 1 1
0 0 0
_
_
and so the complete solution is r = ., j = ., with . arbitrary.
EXAMPLE 4.0.8 Find the values of t for which the following system is
consistent and solve the system in each case:
r + j = 1
tr + j = t
(1 + t)r + 2j = 3.
Solution. Suppose that the given system has a solution (r

, j

). Then the
following homogeneous system
r + j + . = 0
tr + j + t. = 0
(1 + t)r + 2j + 3. = 0
will have a nontrivial solution
r = r

, j = j

, . = 1.
Hence the coecient determinant is zero. However
=

1 1 1
t 1 t
1 + t 2 3

1 0 0
t 1 t 0
1 + t 1 t 2 t

1 t 0
1 t 2 t

= (1t)(2t).
Hence t = 1 or t = 2. If t = 1, the given system becomes
r + j = 1
r + j = 1
2r + 2j = 3
which is clearly inconsistent. If t = 2, the given system becomes
r + j = 1
2r + j = 2
3r + 2j = 3
84 CHAPTER 4. DETERMINANTS
which has the unique solution r = 1, j = 0.
To nish this section, we present an old (1750) method of solving a
system of n equations in n unknowns called Cramers rule . The method is
not used in practice. However it has a theoretical use as it reveals explicitly
how the solution depends on the coecients of the augmented matrix.
THEOREM 4.0.12 (Cramers rule) The system of n linear equations
in n unknowns r

, . . . , r
a
o

r

+ o

r

+ + o
a
r
a
= /

o

r

+ o

r

+ + o
a
r
a
= /

.
.
.
o
a
r

+ o
a
r

+ + o
aa
r
a
= /
a
has a unique solution if = det [o
i)
] ,= 0, namely
r

=

1

, r

=

2

, . . . , r
a
=

a

,
where
i
is the determinant of the matrix formed by replacing the ith
column of the coecient matrix by the entries /

, /

, . . . , /
a
.
Proof. Suppose the coecient determinant ,= 0. Then by corollary 4.0.1,

exists and is given by

=
1
adj and the system has the unique
solution
_

_
r

.
.
.
r
a
_

_
=

_
/

.
.
.
/
a
_

_
=
1

_
C

C

C
a
C

C

C
a
.
.
.
.
.
.
C
a
C
a
C
aa
_

_
_

_
/

.
.
.
/
a
_

_
=
1

_
/

C

+ /

C

+ . . . + /
a
C
a
/

C

+ /

C

+ . . . + /
a
C
a
.
.
.
/
a
C
a
+ /

C
a
+ . . . + /
a
C
aa
_

_
.
However the ith component of the last vector is the expansion of
i
along
column i. Hence
_

_
r

.
.
.
r
a
_

_
=
1

2
.
.
.

a
_

_
=
_

1
,

2
,
.
.
.

a
,
_

_
.
4.1. PROBLEMS 85
4.1 PROBLEMS
.
1. If the points 1
i
= (r
i
, j
i
), i = 1, 2, 3, 4 form a quadrilateral with ver-
tices in anticlockwise orientation, prove that the area of the quadri-
lateral equals
1
2
_

_
.
(This formula generalizes to a simple polygon and is known as the
Surveyors formula.)
2. Prove that the following identity holds by expressing the lefthand
side as the sum of 8 determinants:

o + r / + j c + .
r + n j + . + n
n + o + / n + c

= 2

o / c
r j .
n n

.
3. Prove that

(n + 1)
2
(n + 2)
2
(n + 1)
2
(n + 2)
2
(n + 3)
2
(n + 2)
2
(n + 3)
2
(n + 4)
2

= 8.
4. Evaluate the following determinants:
(a)

246 427 327


1014 543 443
342 721 621

(b)

1 2 3 4
2 1 4 3
3 4 1 2
4 3 2 1

.
[Answers: (a) 29400000; (b) 900.]
5. Compute the inverse of the matrix
=
_
_
1 0 2
3 1 4
5 2 3
_
_
by rst computing the adjoint matrix.
[Answer:

=

_
_
11 4 2
29 7 10
1 2 1
_
_
.]
86 CHAPTER 4. DETERMINANTS
6. Prove that the following identities hold:
(i)

2o 2/ / c
2/ 2o o + c
o + / o + / /

= 2(o /)
2
(o + /),
(ii)

/ + c / c
c c + o o
/ o o + /

= 2o(/

+ c

).
7. Let 1
i
= (r
i
, j
i
), i = 1, 2, 3. If r

, r

, r

are distinct, prove that there


is precisely one curve of the form j = or

+ /r + c passing through
1

, 1

and 1

.
8. Let
=
_
_
1 1 1
2 3 /
1 / 3
_
_
.
Find the values of / for which det = 0 and hence, or otherwise,
determine the value of / for which the following system has more than
one solution:
r + j . = 1
2r + 3j + /. = 3
r + /j + 3. = 2.
Solve the system for this value of / and determine the solution for
which r

+ j

+ .

has least value.


[Answer: / = 2; r = 10,21, j = 13,21, . = 2,21.]
9. By considering the coecient determinant, nd all rational numbers o
and b for which the following system has (i) no solutions, (ii) exactly
one solution, (iii) innitely many solutions:
r 2j + /. = 3
or + 2. = 2
5r + 2j = 1.
Solve the system in case (iii).
[Answer: (i) o/ = 12 and o ,= 3, no solution; o/ ,= 12, unique solution;
o = 3, / = 4, innitely many solutions; r =

. +
2
, j =
5
.

, with
. arbitrary.]
4.1. PROBLEMS 87
10. Express the determinant of the matrix
1 =
_

_
1 1 2 1
1 2 3 4
2 4 7 2t + 6
2 2 6 t t
_

_
as as polynomial in t and hence determine the rational values of t for
which 1

exists.
[Answer: det 1 = (t 2)(2t 1); t ,= 2 and t ,=
1
.]
11. If is a 3 3 matrix over a eld and det ,= 0, prove that
(i) det (adj ) = (det )
2
,
(ii) (adj )

=
1
det
= adj (

).
12. Suppose that is a real 3 3 matrix such that
|
= 1

.
(i) Prove that
|
(1

) = (1

)
|
.
(ii) Prove that det = 1.
(iii) Use (i) to prove that if det = 1, then det (1

) = 0.
13. If is a square matrix such that one column is a linear combination of
the remaining columns, prove that det = 0. Prove that the converse
also holds.
14. Use Cramers rule to solve the system
2r + 3j . = 1
r + 2j . = 4
2r j + . = 3.
[Answer: r = 2, j = 3, . = 4.]
15. Use remark 4.0.4 to deduce that
det 1
i)
= 1, det 1
i
(t) = t, det 1
i)
(t) = 1
and use theorem 2.5.8 and induction, to prove that
det (1) = det 1 det ,
if 1 is nonsingular. Also prove that the formula holds when 1 is
singular.
88 CHAPTER 4. DETERMINANTS
16. Prove that

o + / + c o + / o o
o + / o + / + c o o
o o o + / + c o + /
o o o + / o + / + c

= c

(2/+c)(4o+2/+c).
17. Prove that

1 + n

1 + n

1 + n

1 + n

= 1 + n

+ n

+ n

+ n

.
18. Let `
aa
(1). If
|
= , prove that det = 0 if n is odd and
1 + 1 ,= 0 in 1.
19. Prove that

1 1 1 1
: 1 1 1
: : 1 1
: : : 1

= (1 :)
3
.
20. Express the determinant

1 o

/c o

1 /

co /

1 c

o/ c

as the product of one quadratic and four linear factors.


[Answer: (/ o)(c o)(c /)(o + / + c)(/

+ /c + c

+ oc + o/ + o

).]
Chapter 5
COMPLEX NUMBERS
5.1 Constructing the complex numbers
One way of introducing the eld C of complex numbers is via the arithmetic
of 2 2 matrices.
DEFINITION 5.1.1 A complex number is a matrix of the form
_
r j
j r
_
,
where r and j are real numbers.
Complex numbers of the form
_
r 0
0 r
_
are scalar matrices and are called
real complex numbers and are denoted by the symbol r.
The real complex numbers r and j are respectively called the real
part and imaginary part of the complex number
_
r j
j r
_
.
The complex number
_
0 1
1 0
_
is denoted by the symbol i.
We have the identities
_
r j
j r
_
=
_
r 0
0 r
_
+
_
0 j
j 0
_
=
_
r 0
0 r
_
+
_
0 1
1 0
_ _
j 0
0 j
_
= r + ij,
i

=
_
0 1
1 0
_ _
0 1
1 0
_
=
_
1 0
0 1
_
= 1.
89
90 CHAPTER 5. COMPLEX NUMBERS
Complex numbers of the form ij, where j is a nonzero real number, are
called imaginary numbers.
If two complex numbers are equal, we can equate their real and imaginary
parts:
r

+ ij

= r

+ ij

= r

and j

= j

,
if r

, r

, j

, j

are real numbers. Noting that 0 + i0 = 0, gives the


useful special case is
r + ij = 0 r = 0 and j = 0,
if r and j are real numbers.
The sum and product of two real complex numbers are also real complex
numbers:
r +j = r + j, rj = rj.
Also, as real complex numbers are scalar matrices, their arithmetic is very
simple. They form a eld under the operations of matrix addition and
multiplication. The additive identity is 0, the additive inverse of r is
r, the multiplicative identity is 1 and the multiplicative inverse of r
is r

. Consequently
r j = r + (j) = r +j = r j,
r
j
= rj

= rj

= rj

=
_
r
j
_
.
It is customary to blur the distinction between the real complex number
r and the real number r and write r as r. Thus we write the complex
number r + ij simply as r + ij.
More generally, the sum of two complex numbers is a complex number:
(r

+ ij

) + (r

+ ij

) = (r

+ r

) + i(j

+ j

); (5.1)
and (using the fact that scalar matrices commute with all matrices under
matrix multiplication and 1 = if is a matrix), the product of
two complex numbers is a complex number:
(r

+ ij

)(r

+ ij

) = r

(r

+ ij

) + (ij

)(r

+ ij

)
= r

+ r

(ij

) + (ij

)r

+ (ij

)(ij

)
= r

+ ir

+ ij

+ i

= (r

+1j

) + i(r

+ j

)
= (r

) + i(r

+ j

), (5.2)
5.2. CALCULATING WITH COMPLEX NUMBERS 91
The set C of complex numbers forms a eld under the operations of
matrix addition and multiplication. The additive identity is 0, the additive
inverse of r + ij is the complex number (r) + i(j), the multiplicative
identity is 1 and the multiplicative inverse of the nonzero complex number
r + ij is the complex number n + i, where
n =
r
r

+ j

and =
j
r

+ j

.
(If r + ij ,= 0, then r ,= 0 or j ,= 0, so r

+ j

,= 0.)
From equations 5.1 and 5.2, we observe that addition and multiplication
of complex numbers is performed just as for real numbers, replacing i

by
1, whenever it occurs.
A useful identity satised by complex numbers is
:

+ :

= (: + i:)(: i:).
This leads to a method of expressing the ratio of two complex numbers in
the form r + ij, where r and j are real complex numbers.
r

+ ij

+ ij

=
(r

+ ij

)(r

ij

)
(r

+ ij

)(r

ij

)
=
(r

+ j

) + i(r

+ j

)
r

+ j

.
The process is known as rationalization of the denominator.
5.2 Calculating with complex numbers
We can now do all the standard linear algebra calculations over the eld of
complex numbers nd the reduced rowechelon form of an matrix whose el-
ements are complex numbers, solve systems of linear equations, nd inverses
and calculate determinants.
For example,

1 + i 2 i
7 8 2i

= (1 + i)(8 2i) 7(2 i)


= (8 2i) + i(8 2i) 14 + 7i
= 4 + 13i ,= 0.
92 CHAPTER 5. COMPLEX NUMBERS
Then by Cramers rule, the linear system
(1 + i). + (2 i)n = 2 + 7i
7. + (8 2i)n = 4 9i
has the unique solution
. =

2 + 7i 2 i
4 9i 8 2i

4 + 13i
=
(2 + 7i)(8 2i) (4 9i)(2 i)
4 + 13i
=
2(8 2i) + (7i)(8 2i) (4(2 i) 9i(2 i)
4 + 13i
=
16 4i + 56i 14i

8 4i 18i + 9i

4 + 13i
=
31 + 74i
4 + 13i
=
(31 + 74i)(4 13i)
(4)
2
+ 13
2
=
838 699i
(4)
2
+ 13
2
=
838
185

699
185
i
and similarly n =
698
185
+
229
185
i.
An important property enjoyed by complex numbers is that every com-
plex number has a square root:
THEOREM 5.2.1
If n is a nonzero complex number, then the equation .

= n has a so-
lution . C.
Proof. Let n = o + i/, o, / 1.
Case 1. Suppose / = 0. Then if o 0, . =

o is a solution, while if
o < 0, i

o is a solution.
Case 2. Suppose / ,= 0. Let . = r + ij, r, j 1. Then the equation
.

= n becomes
(r + ij)
2
= r

+ 2rji = o + i/,
5.2. CALCULATING WITH COMPLEX NUMBERS 93
so equating real and imaginary parts gives
r

= o and 2rj = /.
Hence r ,= 0 and j = /,(2r). Consequently
r

_
/
2r
_
_
= o,
so 4r

4or

= 0 and 4(r

)
2
4o(r

) /

= 0. Hence
r

=
4o

16o

+ 16/

8
=
o

+ /

2
.
However r

0, so we must take the + sign, as o

+ /

< 0. Hence
r

=
o +

+ /

2
, r =

o +

+ /

2
.
Then j is determined by j = /,(2r).
EXAMPLE 5.2.1 Solve the equation .

= 1 + i.
Solution. Put . = r + ij. Then the equation becomes
(r + ij)
2
= r

+ 2rji = 1 + i,
so equating real and imaginary parts gives
r

= 1 and 2rj = 1.
Hence r ,= 0 and j = /,(2r). Consequently
r

_
1
2r
_
_
= 1,
so 4r

4r

1 = 0. Hence
r

=
4

16 + 16
8
=
1

2
2
.
Hence
r

=
1 +

2
2
and r =

1 +

2
2
.
94 CHAPTER 5. COMPLEX NUMBERS
Then
j =
1
2r
=
1

2
_
1 +

2
.
Hence the solutions are
. =
_
_

1 +

2
2
+
i

2
_
1 +

2
_
_
.
EXAMPLE 5.2.2 Solve the equation .

+ (

3 + i). + 1 = 0.
Solution. Because every complex number has a square root, the familiar
formula
. =
/

4oc
2o
for the solution of the general quadratic equation o.

+ /. + c = 0 can be
used, where now o(,= 0), /, c C. Hence
. =
(

3 + i)
_
(

3 + i)
2
4
2
=
(

3 + i)
_
(3 + 2

3i 1) 4
2
=
(

3 + i)
_
2 + 2

3i
2
.
Now we have to solve n

= 2 + 2

3i. Put n = r + ij. Then n

=
r

+ 2rji = 2 + 2

3i and equating real and imaginary parts gives


r

= 2 and 2rj = 2

3. Hence j =

3,r and so r

3,r

= 2. So
r

+ 2r

3 = 0 and (r

+ 3)(r

1) = 0. Hence r

1 = 0 and r = 1.
Then j =

3. Hence (1 +

3i)
2
= 2 + 2

3i and the formula for . now


becomes
. =

3 i (1 +

3i)
2
=
1

3 + (1 +

3)i
2
or
1

3 (1 +

3)i
2
.
EXAMPLE 5.2.3 Find the cube roots of 1.
5.3. GEOMETRIC REPRESENTATION OF C 95
Solution. We have to solve the equation .

= 1, or .

1 = 0. Now
.

1 = (. 1)(.

+ . + 1). So .

1 = 0 . 1 = 0 or .

+ . + 1 = 0.
But
.

+ . + 1 = 0 . =
1

1
2
4
2
=
1

3i
2
.
So there are 3 cube roots of 1, namely 1 and (1

3i),2.
We state the next theorem without proof. It states that every non
constant polynomial with complex number coecients has a root in the
eld of complex numbers.
THEOREM 5.2.2 (Gauss) If )(.) = o
a
.
a
+ o
a
.
a
+ + o

. + o

,
where o
a
,= 0 and n 1, then )(.) = 0 for some . C.
It follows that in view of the factor theorem, which states that if o 1 is
a root of a polynomial )(.) with coecients from a eld 1, then . o is a
factor of )(.), that is )(.) = (. o)p(.), where the coecients of p(.) also
belong to 1. By repeated application of this result, we can factorize any
polynomial with complex coecients into a product of linear factors with
complex coecients:
)(.) = o
a
(. .

)(. .

) (. .
a
).
There are available a number of computational algorithms for nding good
approximations to the roots of a polynomial with complex coecients.
5.3 Geometric representation of C
Complex numbers can be represented as points in the plane, using the cor-
respondence r + ij (r, j). The representation is known as the Argand
diagram or complex plane. The real complex numbers lie on the raxis,
which is then called the real axis, while the imaginary numbers lie on the
jaxis, which is known as the imaginary axis. The complex numbers with
positive imaginary part lie in the upper half plane, while those with negative
imaginary part lie in the lower half plane.
Because of the equation
(r

+ ij

) + (r

+ ij

) = (r

+ r

) + i(j

+ j

),
complex numbers add vectorially, using the parallellogram law. Similarly,
the complex number .

can be represented by the vector from (r

, j

)
to (r

, j

), where .

= r

+ ij

and .

= r

+ ij

. (See Figure 5.1.)


96 CHAPTER 5. COMPLEX NUMBERS

`

+ .

Z
Z
Z
Z
Z
Z
Z
Z
Z
Z
Z
Z
.
.
.
.
.
.
.
.
/
/
/
/
/
/
/
/
`
`
`
`
`
`
`
`
.
.
.
.
.
.
.
.
/
/
/
/
/
/
/
/
Figure 5.1: Complex addition and subraction.
The geometrical representation of complex numbers can be very useful
when complex number methods are used to investigate properties of triangles
and circles. It is very important in the branch of calculus known as Complex
Function theory, where geometric methods play an important role.
We mention that the line through two distinct points 1

= (r

, j

) and
1

= (r

, j

) has the form . = (1 t).

+ t.

, t 1, where . = r + ij is
any point on the line and .
i
= r
i
+ij
i
, i = 1, 2. For the line has parametric
equations
r = (1 t)r

+ tr

, j = (1 t)j

+ tj

and these can be combined into a single equation . = (1 t).

+ t.

.
Circles have various equation representations in terms of complex num-
bers, as will be seen later.
5.4 Complex conjugate
DEFINITION 5.4.1 (Complex conjugate) If . = r + ij, the complex
conjugate of . is the complex number dened by . = r ij. Geometrically,
the complex conjugate of . is obtained by reecting . in the real axis (see
Figure 5.2).
The following properties of the complex conjugate are easy to verify:
5.4. COMPLEX CONJUGATE 97

`

r
j
.
.
.
.
.
.
`
`
`
`
Figure 5.2: The complex conjugate of .: ..
1. .

+ .

= .

+ .

;
2. . = ..
3. .

= .

;
4. .

= .

;
5. (1,.) = 1,.;
6. (.

,.

) = .

,.

;
7. . is real if and only if . = .;
8. With the standard convention that the real and imaginary parts are
denoted by Re . and Im., we have
Re . =
. + .
2
, Im. =
. .
2i
;
9. If . = r + ij, then .. = r

+ j

.
THEOREM 5.4.1 If )(.) is a polynomial with real coecients, then its
nonreal roots occur in complexconjugate pairs, i.e. if )(.) = 0, then
)(.) = 0.
Proof. Suppose )(.) = o
a
.
a
+ o
a
.
a
+ + o

. + o

= 0, where
o
a
, . . . , o

are real. Then
0 = 0 = )(.) = o
a
.
a
+ o
a
.
a
+ + o

. + o

= o
a
.
a
+ o
a
.
a
+ + o

. + o

= o
a
.
a
+ o
a
.
a
+ + o

. + o

= )(.).
98 CHAPTER 5. COMPLEX NUMBERS
EXAMPLE 5.4.1 Discuss the position of the roots of the equation
.

= 1
in the complex plane.
Solution. The equation .

= 1 has real coecients and so its roots come


in complex conjugate pairs. Also if . is a root, so is .. Also there are
clearly no real roots and no imaginary roots. So there must be one root n
in the rst quadrant, with all remaining roots being given by n, n and
n. In fact, as we shall soon see, the roots lie evenly spaced on the unit
circle.
The following theorem is useful in deciding if a polynomial )(.) has a
multiple root o; that is if (. o)
n
divides )(.) for some : 2. (The proof
is left as an exercise.)
THEOREM 5.4.2 If )(.) = (. o)
n
p(.), where : 2 and p(.) is a
polynomial, then )
t
(o) = 0 and the polynomial and its derivative have a
common root.
From theorem 5.4.1 we obtain a result which is very useful in the explicit integration of rational functions (i.e. ratios of polynomials) with real coefficients.

THEOREM 5.4.3 If f(z) is a nonconstant polynomial with real coefficients, then f(z) can be factorized as a product of real linear factors and real quadratic factors.

Proof. In general f(z) will have t real roots z₁, …, z_t and 2s nonreal roots z_{t+1}, \overline{z_{t+1}}, …, z_{t+s}, \overline{z_{t+s}}, occurring in complex-conjugate pairs by theorem 5.4.1. Then if aₙ is the coefficient of highest degree in f(z), we have the factorization

f(z) = aₙ(z − z₁) ⋯ (z − z_t)(z − z_{t+1})(z − \overline{z_{t+1}}) ⋯ (z − z_{t+s})(z − \overline{z_{t+s}}).

We then use the following identity for j = t + 1, …, t + s, which in turn shows that paired terms give rise to real quadratic factors:

(z − z_j)(z − \overline{z_j}) = z² − (z_j + \overline{z_j})z + z_j\overline{z_j}
                              = z² − 2Re(z_j) z + (x_j² + y_j²),

where z_j = x_j + iy_j.

A well-known example of such a factorization is the following:
Figure 5.3: The modulus of z: |z|.
EXAMPLE 5.4.2 Find a factorization of z⁴ + 1 into real linear and quadratic factors.

Solution. Clearly there are no real roots. Also we have the preliminary factorization z⁴ + 1 = (z² − i)(z² + i). Now the roots of z² − i are easily verified to be ±(1 + i)/√2, so the roots of z² + i must be ±(1 − i)/√2. In other words the roots are w = (1 + i)/√2 and \overline{w}, −w, −\overline{w}. Grouping conjugate-complex terms gives the factorization

z⁴ + 1 = (z − w)(z − \overline{w})(z + w)(z + \overline{w})
       = (z² − 2zRe(w) + w\overline{w})(z² + 2zRe(w) + w\overline{w})
       = (z² − √2 z + 1)(z² + √2 z + 1).
5.5 Modulus of a complex number

DEFINITION 5.5.1 (Modulus) If z = x + iy, the modulus of z is the nonnegative real number |z| defined by |z| = √(x² + y²). Geometrically, the modulus of z is the distance from z to 0 (see Figure 5.3).

More generally, |z₁ − z₂| is the distance between z₁ and z₂ in the complex plane. For

|z₁ − z₂| = |(x₁ + iy₁) − (x₂ + iy₂)| = |(x₁ − x₂) + i(y₁ − y₂)|
          = √((x₁ − x₂)² + (y₁ − y₂)²).
The following properties of the modulus are easy to verify, using the identity |z|² = z\overline{z}:

(i) |z₁z₂| = |z₁| |z₂|;

(ii) |z⁻¹| = |z|⁻¹;

(iii) |z₁/z₂| = |z₁|/|z₂|.

For example, to prove (i):

|z₁z₂|² = (z₁z₂)\overline{(z₁z₂)} = (z₁z₂)(\overline{z₁} \overline{z₂})
        = (z₁\overline{z₁})(z₂\overline{z₂}) = |z₁|²|z₂|² = (|z₁| |z₂|)².

Hence |z₁z₂| = |z₁| |z₂|.
EXAMPLE 5.5.1 Find |z| when z = (1 + i)⁴/((1 + 6i)(2 − 7i)).

Solution.

|z| = |1 + i|⁴/(|1 + 6i| |2 − 7i|)
    = (√(1² + 1²))⁴/(√(1² + 6²) √(2² + (−7)²))
    = 4/(√37 √53).
THEOREM 5.5.1 (Ratio formulae) If z lies on the line through z₁ and z₂:

z = (1 − t)z₁ + tz₂, t ∈ ℝ,

we have the useful ratio formulae:

(i) |(z − z₁)/(z − z₂)| = |t/(1 − t)| if z ≠ z₂,

(ii) |z − z₁| = |t| |z₂ − z₁|.
Circle equations. The equation |z − z₀| = r, where z₀ ∈ ℂ and r > 0, represents the circle with centre z₀ and radius r. For example the equation |z − (1 + 2i)| = 3 represents the circle (x − 1)² + (y − 2)² = 9.

Another useful circle equation is the circle of Apollonius:

|(z − a)/(z − b)| = λ,
Figure 5.4: Apollonius circles: |z − i|/|z + i| = λ for various values of λ.
where a and b are distinct complex numbers and λ is a positive real number, λ ≠ 1. (If λ = 1, the above equation represents the perpendicular bisector of the segment joining a and b.)

An algebraic proof that the above equation represents a circle runs as follows. We use the following identities:

(i) |z − a|² = |z|² − 2Re(\overline{z}a) + |a|²

(ii) Re(z₁ ± z₂) = Re z₁ ± Re z₂

(iii) Re(tz) = tRe z if t ∈ ℝ.
We have

|(z − a)/(z − b)| = λ ⇔ |z − a|² = λ²|z − b|²
⇔ |z|² − 2Re(\overline{z}a) + |a|² = λ²(|z|² − 2Re(\overline{z}b) + |b|²)
⇔ (1 − λ²)|z|² − 2Re(\overline{z}(a − λ²b)) = λ²|b|² − |a|²
⇔ |z|² − 2Re(\overline{z}(a − λ²b)/(1 − λ²)) = (λ²|b|² − |a|²)/(1 − λ²)
⇔ |z|² − 2Re(\overline{z}(a − λ²b)/(1 − λ²)) + |(a − λ²b)/(1 − λ²)|²
   = (λ²|b|² − |a|²)/(1 − λ²) + |(a − λ²b)/(1 − λ²)|².

Now it is easily verified that

|a − λ²b|² + (1 − λ²)(λ²|b|² − |a|²) = λ²|a − b|².
So we obtain

|(z − a)/(z − b)| = λ ⇔ |z − (a − λ²b)/(1 − λ²)|² = λ²|a − b|²/(1 − λ²)²
⇔ |z − (a − λ²b)/(1 − λ²)| = λ|a − b|/|1 − λ²|.

The last equation represents a circle with centre z₀ and radius r, where

z₀ = (a − λ²b)/(1 − λ²) and r = λ|a − b|/|1 − λ²|.
There are two special points on the circle of Apollonius, the points z₁ and z₂ defined by

(z₁ − a)/(z₁ − b) = λ and (z₂ − a)/(z₂ − b) = −λ,

or

z₁ = (a − λb)/(1 − λ) and z₂ = (a + λb)/(1 + λ).   (5.3)

It is easy to verify that z₁ and z₂ are distinct points on the line through a and b and that z₀ = (z₁ + z₂)/2. Hence the circle of Apollonius is the circle based on the segment z₁, z₂ as diameter.
EXAMPLE 5.5.2 Find the centre and radius of the circle

|z − 1 − i| = 2|z − 5 − 2i|.

Solution. Method 1. Proceed algebraically and simplify the equation

|x + iy − 1 − i| = 2|x + iy − 5 − 2i|,

or

|x − 1 + i(y − 1)| = 2|x − 5 + i(y − 2)|.

Squaring both sides gives

(x − 1)² + (y − 1)² = 4((x − 5)² + (y − 2)²),

which reduces to the circle equation

x² + y² − (38/3)x − (14/3)y + 38 = 0.
Completing the square gives

(x − 19/3)² + (y − 7/3)² = (19/3)² + (7/3)² − 38 = 68/9,

so the centre is (19/3, 7/3) and the radius is √68/3.
Method 2. Calculate the diametrical points z₁ and z₂ defined above by equations 5.3:

z₁ − 1 − i = 2(z₁ − 5 − 2i),
z₂ − 1 − i = −2(z₂ − 5 − 2i).

We find z₁ = 9 + 3i and z₂ = (11 + 5i)/3. Hence the centre z₀ is given by

z₀ = (z₁ + z₂)/2 = 19/3 + (7/3)i

and the radius r is given by

r = |z₀ − z₁| = |(19/3 + (7/3)i) − (9 + 3i)| = |−8/3 − (2/3)i| = √68/3.
5.6 Argument of a complex number

Let z = x + iy be a nonzero complex number, r = |z| = √(x² + y²). Then we have x = r cos θ, y = r sin θ, where θ is the angle made by z with the positive x-axis. So θ is unique up to addition of a multiple of 2π radians.

DEFINITION 5.6.1 (Argument) Any number θ satisfying the above pair of equations is called an argument of z and is denoted by arg z. The particular argument of z lying in the range −π < θ ≤ π is called the principal argument of z and is denoted by Arg z (see Figure 5.5).

We have z = r cos θ + ir sin θ = r(cos θ + i sin θ) and this representation of z is called the polar representation or modulus-argument form of z.

EXAMPLE 5.6.1 Arg 1 = 0, Arg (−1) = π, Arg i = π/2, Arg (−i) = −π/2.

We note that y/x = tan θ if x ≠ 0, so θ is determined by this equation up to a multiple of π. In fact

Arg z = tan⁻¹(y/x) + kπ,
Figure 5.5: The argument of z: arg z = θ.
where k = 0 if x > 0; k = 1 if x < 0, y ≥ 0; k = −1 if x < 0, y < 0.

To determine Arg z graphically, it is simplest to draw the triangle formed by the points 0, x, z on the complex plane, mark in the positive acute angle α between the rays 0, x and 0, z and determine Arg z geometrically, using the fact that α = tan⁻¹(|y|/|x|), as in the following examples:
EXAMPLE 5.6.2 Determine the principal argument of z for the following complex numbers:

z = 4 + 3i, −4 + 3i, −4 − 3i, 4 − 3i.

Solution. Referring to Figure 5.6, we see that Arg z has the values

α, π − α, −π + α, −α,

where α = tan⁻¹(3/4).
An important property of the argument of a complex number states that the sum of the arguments of two nonzero complex numbers is an argument of their product:

THEOREM 5.6.1 If θ₁ and θ₂ are arguments of z₁ and z₂, then θ₁ + θ₂ is an argument of z₁z₂.

Proof. Let z₁ and z₂ have polar representations z₁ = r₁(cos θ₁ + i sin θ₁) and z₂ = r₂(cos θ₂ + i sin θ₂). Then

z₁z₂ = r₁(cos θ₁ + i sin θ₁) r₂(cos θ₂ + i sin θ₂)
     = r₁r₂(cos θ₁ cos θ₂ − sin θ₁ sin θ₂ + i(cos θ₁ sin θ₂ + sin θ₁ cos θ₂))
     = r₁r₂(cos (θ₁ + θ₂) + i sin (θ₁ + θ₂)),
Figure 5.6: Argument examples.
which is the polar representation of z₁z₂, as r₁r₂ = |z₁| |z₂| = |z₁z₂|. Hence θ₁ + θ₂ is an argument of z₁z₂.
An easy induction gives the following generalization to a product of n complex numbers:

COROLLARY 5.6.1 If θ₁, …, θₙ are arguments for z₁, …, zₙ respectively, then θ₁ + ⋯ + θₙ is an argument for z₁ ⋯ zₙ.

Taking θ₁ = ⋯ = θₙ = θ in the previous corollary gives

COROLLARY 5.6.2 If θ is an argument of z, then nθ is an argument for zⁿ.

THEOREM 5.6.2 If θ is an argument of the nonzero complex number z, then −θ is an argument of z⁻¹.
Proof. Let θ be an argument of z. Then z = r(cos θ + i sin θ), where r = |z|. Hence

z⁻¹ = r⁻¹(cos θ + i sin θ)⁻¹ = r⁻¹(cos θ − i sin θ) = r⁻¹(cos (−θ) + i sin (−θ)).

Now r⁻¹ = |z|⁻¹ = |z⁻¹|, so −θ is an argument of z⁻¹.
COROLLARY 5.6.3 If θ₁ and θ₂ are arguments of z₁ and z₂, then θ₁ − θ₂ is an argument of z₁/z₂.

In terms of principal arguments, we have the following equations:

(i) Arg (z₁z₂) = Arg z₁ + Arg z₂ + 2k₁π,
(ii) Arg (z⁻¹) = −Arg z + 2k₂π,
(iii) Arg (z₁/z₂) = Arg z₁ − Arg z₂ + 2k₃π,
(iv) Arg (z₁ ⋯ zₙ) = Arg z₁ + ⋯ + Arg zₙ + 2k₄π,
(v) Arg (zⁿ) = n Arg z + 2k₅π,

where k₁, k₂, k₃, k₄, k₅ are integers.

In numerical examples, we can write (i), for example, as

Arg (z₁z₂) ≡ Arg z₁ + Arg z₂.
EXAMPLE 5.6.3 Find the modulus and principal argument of

z = ((√3 + i)/(1 + i))¹⁷

and hence express z in modulus-argument form.

Solution. |z| = |√3 + i|¹⁷/|1 + i|¹⁷ = 2¹⁷/(√2)¹⁷ = 2^{17/2}.

Arg z ≡ 17 Arg ((√3 + i)/(1 + i)) = 17(Arg (√3 + i) − Arg (1 + i))
      = 17(π/6 − π/4) = −17π/12.

Hence Arg z = −17π/12 + 2kπ, where k is an integer. We see that k = 1 and hence Arg z = 7π/12. Consequently z = 2^{17/2}(cos 7π/12 + i sin 7π/12).
DEFINITION 5.6.2 If θ is a real number, then we define e^{iθ} by

e^{iθ} = cos θ + i sin θ.

More generally, if z = x + iy, then we define e^z by

e^z = e^x e^{iy}.

For example,

e^{iπ/2} = i, e^{iπ} = −1, e^{−iπ/2} = −i.
The following properties of the complex exponential function are left as exercises:

THEOREM 5.6.3 (i) e^{z₁}e^{z₂} = e^{z₁+z₂},
(ii) e^{z₁} ⋯ e^{zₙ} = e^{z₁+⋯+zₙ},
(iii) e^z ≠ 0,
(iv) (e^z)⁻¹ = e^{−z},
(v) e^{z₁}/e^{z₂} = e^{z₁−z₂},
(vi) \overline{e^z} = e^{\overline{z}}.
THEOREM 5.6.4 The equation

e^z = 1

has the complete solution z = 2kπi, k ∈ ℤ.

Proof. First we observe that

e^{2kπi} = cos (2kπ) + i sin (2kπ) = 1.

Conversely, suppose e^z = 1, z = x + iy. Then e^x(cos y + i sin y) = 1. Hence e^x cos y = 1 and e^x sin y = 0. Hence sin y = 0 and so y = nπ, n ∈ ℤ. Then e^x cos (nπ) = 1, so e^x(−1)ⁿ = 1, from which follows (−1)ⁿ = 1 as e^x > 0. Hence n = 2k, k ∈ ℤ and e^x = 1. Hence x = 0 and z = 2kπi.
5.7 De Moivre's theorem

The next theorem has many uses and is a special case of theorem 5.6.3(ii). Alternatively it can be proved directly by induction on n.

THEOREM 5.7.1 (De Moivre) If n is a positive integer, then

(cos θ + i sin θ)ⁿ = cos nθ + i sin nθ.

As a first application, we consider the equation zⁿ = 1.

THEOREM 5.7.2 The equation zⁿ = 1 has n distinct solutions, namely the complex numbers ζₖ = e^{2kπi/n}, k = 0, 1, …, n − 1. These lie equally spaced on the unit circle |z| = 1 and are obtained by starting at 1, moving round the circle anticlockwise, incrementing the argument in steps of 2π/n. (See Figure 5.7.)

We notice that the roots ζₖ are the powers of the special root ω = e^{2πi/n}.
Figure 5.7: The nth roots of unity.
Proof. With ζₖ defined as above,

ζₖⁿ = (e^{2kπi/n})ⁿ = e^{2kπi} = 1,

by De Moivre's theorem. However |ζₖ| = 1 and arg ζₖ = 2kπ/n, so the complex numbers ζₖ, k = 0, 1, …, n − 1, lie equally spaced on the unit circle. Consequently these numbers must be precisely all the roots of zⁿ − 1. For the polynomial zⁿ − 1, being of degree n over a field, can have at most n distinct roots in that field.
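The roots ζₖ of theorem 5.7.2 are generated directly from their exponential form; this sketch assumes standard Python 3:

import cmath, math

def roots_of_unity(n):
    # zeta_k = e^{2 k pi i / n}, k = 0, 1, ..., n - 1
    return [cmath.exp(2j * math.pi * k / n) for k in range(n)]

for z in roots_of_unity(5):
    print(z, abs(z**5 - 1) < 1e-12)   # each root satisfies z^5 = 1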
The more general equation zⁿ = a, where a ∈ ℂ, a ≠ 0, can be reduced to the previous case:

Let α = Arg a, so that a = |a|e^{iα}. Then if w = |a|^{1/n} e^{iα/n}, we have

wⁿ = (|a|^{1/n} e^{iα/n})ⁿ = (|a|^{1/n})ⁿ (e^{iα/n})ⁿ = |a|e^{iα} = a.

So w is a particular solution. Substituting for a in the original equation, we get zⁿ = wⁿ, or (z/w)ⁿ = 1. Hence the complete solution is
Figure 5.8: The roots of zⁿ = a.
z/w = e^{2kπi/n}, k = 0, 1, …, n − 1, or

zₖ = |a|^{1/n} e^{iα/n} e^{2kπi/n} = |a|^{1/n} e^{i(α+2kπ)/n},   (5.4)

k = 0, 1, …, n − 1. So the roots are equally spaced on the circle |z| = |a|^{1/n} and are generated from the special solution having argument equal to (arg a)/n, by incrementing the argument in steps of 2π/n. (See Figure 5.8.)
EXAMPLE 5.7.1 Factorize the polynomial z⁵ − 1 as a product of real linear and quadratic factors.

Solution. The roots are 1, e^{2πi/5}, e^{−2πi/5}, e^{4πi/5}, e^{−4πi/5}, using the fact that nonreal roots come in conjugate-complex pairs. Hence

z⁵ − 1 = (z − 1)(z − e^{2πi/5})(z − e^{−2πi/5})(z − e^{4πi/5})(z − e^{−4πi/5}).

Now

(z − e^{2πi/5})(z − e^{−2πi/5}) = z² − z(e^{2πi/5} + e^{−2πi/5}) + 1
                                = z² − 2z cos (2π/5) + 1.

Similarly

(z − e^{4πi/5})(z − e^{−4πi/5}) = z² − 2z cos (4π/5) + 1.

This gives the desired factorization.
EXAMPLE 5.7.2 Solve z³ = i.

Solution. |i| = 1 and Arg i = π/2 = α. So by equation 5.4, the solutions are

zₖ = |i|^{1/3} e^{i(π/2+2kπ)/3}, k = 0, 1, 2.

First, k = 0 gives

z₀ = e^{iπ/6} = cos π/6 + i sin π/6 = √3/2 + i/2.

Next, k = 1 gives

z₁ = e^{5iπ/6} = cos 5π/6 + i sin 5π/6 = −√3/2 + i/2.

Finally, k = 2 gives

z₂ = e^{9iπ/6} = cos 9π/6 + i sin 9π/6 = −i.
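Equation 5.4 translates line for line into code. The following sketch assumes standard Python 3; the helper name nth_roots is an illustration:

import cmath, math

def nth_roots(a, n):
    """All solutions of z^n = a via equation 5.4."""
    r, alpha = abs(a), cmath.phase(a)      # a = r e^{i alpha}
    return [r**(1/n) * cmath.exp(1j * (alpha + 2*math.pi*k) / n)
            for k in range(n)]

for z in nth_roots(1j, 3):
    print(z, abs(z**3 - 1j) < 1e-12)
# Expected roots: sqrt(3)/2 + i/2, -sqrt(3)/2 + i/2, -i.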
We finish this chapter with two more examples of De Moivre's theorem.

EXAMPLE 5.7.3 If

C = 1 + cos θ + ⋯ + cos (n − 1)θ,
S = sin θ + ⋯ + sin (n − 1)θ,

prove that

C = (sin (nθ/2)/sin (θ/2)) cos ((n − 1)θ/2) and S = (sin (nθ/2)/sin (θ/2)) sin ((n − 1)θ/2),

if θ ≠ 2kπ, k ∈ ℤ.
Solution.

C + iS = 1 + (cos θ + i sin θ) + ⋯ + (cos (n − 1)θ + i sin (n − 1)θ)
       = 1 + e^{iθ} + ⋯ + e^{i(n−1)θ}
       = 1 + z + ⋯ + z^{n−1}, where z = e^{iθ}
       = (1 − zⁿ)/(1 − z), if z ≠ 1, i.e. θ ≠ 2kπ
       = (1 − e^{inθ})/(1 − e^{iθ})
       = (e^{inθ/2}(e^{−inθ/2} − e^{inθ/2}))/(e^{iθ/2}(e^{−iθ/2} − e^{iθ/2}))
       = e^{i(n−1)θ/2} (sin (nθ/2)/sin (θ/2))
       = (cos ((n − 1)θ/2) + i sin ((n − 1)θ/2)) (sin (nθ/2)/sin (θ/2)).

The result follows by equating real and imaginary parts.
EXAMPLE 5.7.4 Express cos nθ and sin nθ in terms of cos θ and sin θ, using the equation cos nθ + i sin nθ = (cos θ + i sin θ)ⁿ.

Solution. The binomial theorem gives

(cos θ + i sin θ)ⁿ = cosⁿ θ + \binom{n}{1} cos^{n−1} θ (i sin θ) + \binom{n}{2} cos^{n−2} θ (i sin θ)² + ⋯ + (i sin θ)ⁿ.

Equating real and imaginary parts gives

cos nθ = cosⁿ θ − \binom{n}{2} cos^{n−2} θ sin² θ + ⋯
sin nθ = \binom{n}{1} cos^{n−1} θ sin θ − \binom{n}{3} cos^{n−3} θ sin³ θ + ⋯.
5.8 PROBLEMS

1. Express the following complex numbers in the form x + iy, x, y real:

(i) (−3 + i)(14 − 2i); (ii) (2 + 3i)/(1 − 4i); (iii) (1 + 2i)²/(1 − i).

[Answers: (i) −40 + 20i; (ii) −10/17 + (11/17)i; (iii) −7/2 + i/2.]
2. Solve the following equations:

(i) iz + (2 − 10i)z = 3z + 2i,

(ii) (1 + i)z + (2 − i)w = −3i
     (1 + 2i)z + (3 + i)w = 2 + 2i.

[Answers: (i) z = −9/41 − (1/41)i; (ii) z = −1 + 5i, w = 19/5 − (8/5)i.]
3. Express 1 + (1 + i) + (1 + i)² + ⋯ + (1 + i)⁹⁹ in the form x + iy, x, y real. [Answer: (1 + 2⁵⁰)i.]
4. Solve the equations: (i) z² = −8 − 6i; (ii) z² − (3 + i)z + 4 + 3i = 0.

[Answers: (i) z = ±(1 − 3i); (ii) z = 2 − i, 1 + 2i.]
5. Find the modulus and principal argument of each of the following complex numbers:

(i) 4 + i; (ii) −3/2 − i/2; (iii) −1 + 2i; (iv) (1/2)(−1 + i√3).

[Answers: (i) √17, tan⁻¹(1/4); (ii) √10/2, −π + tan⁻¹(1/3); (iii) √5, π − tan⁻¹ 2; (iv) 1, 2π/3.]
6. Express the following complex numbers in modulus-argument form:

(i) z = (1 + i)(1 + i√3)(√3 − i);

(ii) z = (1 + i)⁵(1 − i√3)⁵/(√3 + i)⁴.

[Answers:
(i) z = 4√2(cos 5π/12 + i sin 5π/12); (ii) z = 2^{7/2}(cos 11π/12 + i sin 11π/12).]
7. (i) If z = 2(cos π/4 + i sin π/4) and w = 3(cos π/6 + i sin π/6), find the polar form of

(a) zw; (b) z/w; (c) w/z; (d) z⁵/w².

(ii) Express the following complex numbers in the form x + iy:

(a) (1 + i)¹²; (b) ((1 − i)/√2)⁻⁶.

[Answers: (i): (a) 6(cos 5π/12 + i sin 5π/12); (b) (2/3)(cos π/12 + i sin π/12);
(c) (3/2)(cos (−π/12) + i sin (−π/12)); (d) (32/9)(cos 11π/12 + i sin 11π/12);
(ii): (a) −64; (b) −i.]
8. Solve the equations:

(i) z² = 1 + i√3; (ii) z⁴ = i; (iii) z³ = −8i; (iv) z⁴ = 2 − 2i.

[Answers: (i) z = ±(√3 + i)/√2; (ii) z = iᵏ(cos π/8 + i sin π/8), k = 0, 1, 2, 3; (iii) z = 2i, ±√3 − i; (iv) z = iᵏ 2^{3/8}(cos π/16 − i sin π/16), k = 0, 1, 2, 3.]
9. Find the reduced row-echelon form of the complex matrix

[ 2 + i    −1 + 2i   2     ]
[ 1 + i    −1 + i    1     ]
[ 1 + 2i   −2 + i    1 + i ].

[Answer:
[ 1 i 0 ]
[ 0 0 1 ]
[ 0 0 0 ].]
10. (i) Prove that the line equation lx + my = n is equivalent to

\overline{p}z + p\overline{z} = 2n,

where p = l + im.

(ii) Use (i) to deduce that reflection in the straight line

\overline{p}z + p\overline{z} = n

is described by the equation

\overline{p}w + p\overline{z} = n.

[Hint: The complex number l + im is perpendicular to the given line.]

(iii) Prove that the line |z − a| = |z − b| may be written as \overline{p}z + p\overline{z} = n, where p = b − a and n = |b|² − |a|². Deduce that if z lies on the Apollonius circle |z − a|/|z − b| = λ, then w, the reflection of z in the line |z − a| = |z − b|, lies on the Apollonius circle |z − a|/|z − b| = 1/λ.
11. Let a and b be distinct complex numbers and 0 < θ < π.

(i) Prove that each of the following sets in the complex plane represents a circular arc and sketch the circular arcs on the same diagram:

Arg ((z − a)/(z − b)) = θ, −θ, π − θ, θ − π.

Also show that Arg ((z − a)/(z − b)) = π represents the line segment joining a and b, while Arg ((z − a)/(z − b)) = 0 represents the remaining portion of the line through a and b.

(ii) Use (i) to prove that four distinct points z₁, z₂, z₃, z₄ are concyclic or collinear, if and only if the cross-ratio

((z₄ − z₁)/(z₄ − z₂)) / ((z₃ − z₁)/(z₃ − z₂))

is real.

(iii) Use (ii) to derive Ptolemy's Theorem: Four distinct points A, B, C, D are concyclic or collinear, if and only if one of the following holds:

AB · CD + BC · AD = AC · BD
BD · AC + AD · BC = AB · CD
BD · AC + AB · CD = AD · BC.
Chapter 6

EIGENVALUES AND EIGENVECTORS

6.1 Motivation

We motivate the chapter on eigenvalues by discussing the equation

ax² + 2hxy + by² = c,

where not all of a, h, b are zero. The expression ax² + 2hxy + by² is called a quadratic form in x and y and we have the identity

ax² + 2hxy + by² = [x y] [ a h ; h b ] [x, y]ᵗ = XᵗAX,

where X = [x, y]ᵗ and A = [ a h ; h b ]. A is called the matrix of the quadratic form.

We now rotate the x, y axes anticlockwise through θ radians to new x₁, y₁ axes. The equations describing the rotation of axes are derived as follows:

Let P have coordinates (x, y) relative to the x, y axes and coordinates (x₁, y₁) relative to the x₁, y₁ axes. Then referring to Figure 6.1:
Figure 6.1: Rotating the axes.
x = OQ = OP cos (θ + α)
  = OP(cos θ cos α − sin θ sin α)
  = (OP cos α) cos θ − (OP sin α) sin θ
  = OR cos θ − PR sin θ
  = x₁ cos θ − y₁ sin θ.

Similarly y = x₁ sin θ + y₁ cos θ.
We can combine these transformation equations into the single matrix equation:

[ x ]   [ cos θ  −sin θ ] [ x₁ ]
[ y ] = [ sin θ   cos θ ] [ y₁ ],

or X = PY, where X = [x, y]ᵗ, Y = [x₁, y₁]ᵗ and P = [ cos θ  −sin θ ; sin θ  cos θ ].

We note that the columns of P give the directions of the positive x₁ and y₁ axes. Also P is an orthogonal matrix: we have PPᵗ = I₂ and so P⁻¹ = Pᵗ. The matrix P has the special property that det P = 1.
A matrix of the type P = [ cos θ  −sin θ ; sin θ  cos θ ] is called a rotation matrix. We shall show soon that any 2 × 2 real orthogonal matrix with determinant equal to 1 is a rotation matrix.

We can also solve for the new coordinates in terms of the old ones:

[ x₁ ]             [  cos θ  sin θ ] [ x ]
[ y₁ ] = Y = PᵗX = [ −sin θ  cos θ ] [ y ],

so x₁ = x cos θ + y sin θ and y₁ = −x sin θ + y cos θ. Then

XᵗAX = (PY)ᵗA(PY) = Yᵗ(PᵗAP)Y.

Now suppose, as we later show, that it is possible to choose an angle θ so that PᵗAP is a diagonal matrix, say diag(λ₁, λ₂). Then

XᵗAX = [x₁ y₁] [ λ₁ 0 ; 0 λ₂ ] [x₁, y₁]ᵗ = λ₁x₁² + λ₂y₁²   (6.1)

and relative to the new axes, the equation ax² + 2hxy + by² = c becomes λ₁x₁² + λ₂y₁² = c, which is quite easy to sketch. This curve is symmetrical about the x₁ and y₁ axes, with P₁ and P₂, the respective columns of P, giving the directions of the axes of symmetry.

Also it can be verified that P₁ and P₂ satisfy the equations

AP₁ = λ₁P₁ and AP₂ = λ₂P₂.
These equations force a restriction on λ₁ and λ₂. For if P₁ = [u₁, v₁]ᵗ, the first equation becomes

[ a h ; h b ] [u₁, v₁]ᵗ = λ₁ [u₁, v₁]ᵗ, or [ a − λ₁  h ; h  b − λ₁ ] [u₁, v₁]ᵗ = [0, 0]ᵗ.

Hence we are dealing with a homogeneous system of two linear equations in two unknowns, having a nontrivial solution (u₁, v₁). Hence

| a − λ₁   h     |
|   h     b − λ₁ | = 0.

Similarly, λ₂ satisfies the same equation. In expanded form, λ₁ and λ₂ satisfy

λ² − (a + b)λ + ab − h² = 0.

This equation has real roots

λ = ((a + b) ± √((a + b)² − 4(ab − h²)))/2 = ((a + b) ± √((a − b)² + 4h²))/2.   (6.2)

(The roots are distinct if a ≠ b or h ≠ 0. The case a = b and h = 0 needs no investigation, as it gives an equation of a circle.)

The equation λ² − (a + b)λ + ab − h² = 0 is called the eigenvalue equation of the matrix A.
6.2 Definitions and examples

DEFINITION 6.2.1 (Eigenvalue, eigenvector)
Let A be a complex square matrix. Then if λ is a complex number and X a nonzero complex column vector satisfying AX = λX, we call X an eigenvector of A, while λ is called an eigenvalue of A. We also say that X is an eigenvector corresponding to the eigenvalue λ.

So in the above example P₁ and P₂ are eigenvectors corresponding to λ₁ and λ₂, respectively. We shall give an algorithm which starts from the eigenvalues of A = [ a h ; h b ] and constructs a rotation matrix P such that PᵗAP is diagonal.
As noted above, if λ is an eigenvalue of an n × n matrix A, with corresponding eigenvector X, then (A − λIₙ)X = 0, with X ≠ 0, so det(A − λIₙ) = 0 and there are at most n distinct eigenvalues of A. Conversely if det(A − λIₙ) = 0, then (A − λIₙ)X = 0 has a nontrivial solution X and so λ is an eigenvalue of A with X a corresponding eigenvector.

DEFINITION 6.2.2 (Characteristic equation, polynomial)
The equation det(A − λIₙ) = 0 is called the characteristic equation of A, while the polynomial det(A − λIₙ) is called the characteristic polynomial of A. The characteristic polynomial of A is often denoted by ch_A(λ).

Hence the eigenvalues of A are the roots of the characteristic polynomial of A.

For a 2 × 2 matrix A = [ a b ; c d ], it is easily verified that the characteristic polynomial is λ² − (trace A)λ + det A, where trace A = a + d is the sum of the diagonal elements of A.
EXAMPLE 6.2.1 Find the eigenvalues of A = [ 2 1 ; 1 2 ] and find all eigenvectors.

Solution. The characteristic equation of A is λ² − 4λ + 3 = 0, or

(λ − 1)(λ − 3) = 0.

Hence λ = 1 or 3. The eigenvector equation (A − λI₂)X = 0 reduces to

[ 2 − λ   1    ] [ x ]   [ 0 ]
[   1    2 − λ ] [ y ] = [ 0 ],

or

(2 − λ)x + y = 0
x + (2 − λ)y = 0.

Taking λ = 1 gives

x + y = 0
x + y = 0,

which has solution x = −y, y arbitrary. Consequently the eigenvectors corresponding to λ = 1 are the vectors [−y, y]ᵗ, with y ≠ 0.

Taking λ = 3 gives

−x + y = 0
x − y = 0,

which has solution x = y, y arbitrary. Consequently the eigenvectors corresponding to λ = 3 are the vectors [y, y]ᵗ, with y ≠ 0.
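Such hand computations can be checked with a numerical eigensolver. The sketch below assumes NumPy; note that the routine returns unit-length eigenvectors, i.e. particular members of the one-parameter families found above:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
evals, evecs = np.linalg.eig(A)   # columns of evecs are unit eigenvectors
print(evals)                      # approximately [3., 1.]
print(evecs)                      # multiples of [1, 1] and [-1, 1]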
Our next result has wide applicability:

THEOREM 6.2.1 Let A be a 2 × 2 matrix having distinct eigenvalues λ₁ and λ₂ and corresponding eigenvectors X₁ and X₂. Let P be the matrix whose columns are X₁ and X₂, respectively. Then P is nonsingular and

P⁻¹AP = [ λ₁ 0 ; 0 λ₂ ].

Proof. Suppose AX₁ = λ₁X₁ and AX₂ = λ₂X₂. We show that the system of homogeneous equations

xX₁ + yX₂ = 0

has only the trivial solution. Then by theorem 2.5.10 the matrix P = [X₁|X₂] is nonsingular. So assume

xX₁ + yX₂ = 0.   (6.3)

Then A(xX₁ + yX₂) = A0 = 0, so x(AX₁) + y(AX₂) = 0. Hence

xλ₁X₁ + yλ₂X₂ = 0.   (6.4)
Multiplying equation 6.3 by λ₁ and subtracting from equation 6.4 gives

(λ₂ − λ₁)yX₂ = 0.

Hence y = 0, as (λ₂ − λ₁) ≠ 0 and X₂ ≠ 0. Then from equation 6.3, xX₁ = 0 and hence x = 0.

Then the equations AX₁ = λ₁X₁ and AX₂ = λ₂X₂ give

AP = A[X₁|X₂] = [AX₁|AX₂] = [λ₁X₁|λ₂X₂] = [X₁|X₂] [ λ₁ 0 ; 0 λ₂ ] = P [ λ₁ 0 ; 0 λ₂ ],

so

P⁻¹AP = [ λ₁ 0 ; 0 λ₂ ].
EXAMPLE 6.2.2 Let A = [ 2 1 ; 1 2 ] be the matrix of example 6.2.1. Then X₁ = [−1, 1]ᵗ and X₂ = [1, 1]ᵗ are eigenvectors corresponding to eigenvalues 1 and 3, respectively. Hence if P = [ −1 1 ; 1 1 ], we have

P⁻¹AP = [ 1 0 ; 0 3 ].
There are two immediate applications of theorem 6.2.1. The first is to the calculation of Aⁿ: if P⁻¹AP = diag(λ₁, λ₂), then A = P diag(λ₁, λ₂) P⁻¹ and

Aⁿ = (P [ λ₁ 0 ; 0 λ₂ ] P⁻¹)ⁿ = P [ λ₁ 0 ; 0 λ₂ ]ⁿ P⁻¹ = P [ λ₁ⁿ 0 ; 0 λ₂ⁿ ] P⁻¹.
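The formula Aⁿ = P diag(λ₁ⁿ, λ₂ⁿ) P⁻¹ is straightforward to implement; the sketch below assumes NumPy and a matrix with distinct eigenvalues:

import numpy as np

def matrix_power_by_eig(A, n):
    # A = P diag(evals) P^{-1}, so A^n = P diag(evals^n) P^{-1}
    evals, P = np.linalg.eig(A)
    return P @ np.diag(evals**n) @ np.linalg.inv(P)

A = np.array([[2.0, 1.0], [1.0, 2.0]])
print(matrix_power_by_eig(A, 5))
print(np.linalg.matrix_power(A, 5))   # direct computation; agrees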
The second application is to solving a system of linear differential equations

dx/dt = ax + by
dy/dt = cx + dy,

where A = [ a b ; c d ] is a matrix of real or complex numbers and x and y are functions of t. The system can be written in matrix form as Ẋ = AX, where

X = [x, y]ᵗ and Ẋ = [dx/dt, dy/dt]ᵗ.
We make the substitution X = PY, where Y = [x₁, y₁]ᵗ. Then x₁ and y₁ are also functions of t and Ẋ = PẎ = AX = A(PY), so

Ẏ = (P⁻¹AP)Y = [ λ₁ 0 ; 0 λ₂ ] Y.

Hence ẋ₁ = λ₁x₁ and ẏ₁ = λ₂y₁.

These differential equations are well-known to have the solutions x₁ = x₁(0)e^{λ₁t} and y₁ = y₁(0)e^{λ₂t}, where x₁(0) is the value of x₁ when t = 0.

[If dx/dt = kx, where k is a constant, then

d/dt (e^{−kt}x) = −ke^{−kt}x + e^{−kt} dx/dt = −ke^{−kt}x + e^{−kt}kx = 0.

Hence e^{−kt}x is constant, so e^{−kt}x = e^{−k·0}x(0) = x(0). Hence x = x(0)e^{kt}.]

However [x₁(0), y₁(0)]ᵗ = P⁻¹[x(0), y(0)]ᵗ, so this determines x₁(0) and y₁(0) in terms of x(0) and y(0). Hence ultimately x and y are determined as explicit functions of t, using the equation X = PY.
EXAMPLE 6.2.3 Let A = [ 2 −3 ; 4 −5 ]. Use the eigenvalue method to derive an explicit formula for Aⁿ and also solve the system of differential equations

dx/dt = 2x − 3y
dy/dt = 4x − 5y,

given x = 7 and y = 13 when t = 0.

Solution. The characteristic polynomial of A is λ² + 3λ + 2, which has distinct roots λ₁ = −1 and λ₂ = −2. We find corresponding eigenvectors X₁ = [1, 1]ᵗ and X₂ = [3, 4]ᵗ. Hence if P = [ 1 3 ; 1 4 ], we have P⁻¹AP = diag(−1, −2). Hence

Aⁿ = (P diag(−1, −2) P⁻¹)ⁿ = P diag((−1)ⁿ, (−2)ⁿ) P⁻¹
   = [ 1 3 ; 1 4 ] [ (−1)ⁿ 0 ; 0 (−2)ⁿ ] [ 4 −3 ; −1 1 ]
   = (−1)ⁿ [ 1 3 ; 1 4 ] [ 1 0 ; 0 2ⁿ ] [ 4 −3 ; −1 1 ]
   = (−1)ⁿ [ 1  3·2ⁿ ; 1  4·2ⁿ ] [ 4 −3 ; −1 1 ]
   = (−1)ⁿ [ 4 − 3·2ⁿ   −3 + 3·2ⁿ ; 4 − 4·2ⁿ   −3 + 4·2ⁿ ].
To solve the differential equation system, make the substitution X = PY. Then x = x₁ + 3y₁, y = x₁ + 4y₁. The system then becomes

ẋ₁ = −x₁, ẏ₁ = −2y₁,

so x₁ = x₁(0)e^{−t}, y₁ = y₁(0)e^{−2t}. Now

[ x₁(0), y₁(0) ]ᵗ = P⁻¹ [ x(0), y(0) ]ᵗ = [ 4 −3 ; −1 1 ] [ 7, 13 ]ᵗ = [ −11, 6 ]ᵗ,

so x₁ = −11e^{−t} and y₁ = 6e^{−2t}. Hence

x = −11e^{−t} + 3(6e^{−2t}) = −11e^{−t} + 18e^{−2t},
y = −11e^{−t} + 4(6e^{−2t}) = −11e^{−t} + 24e^{−2t}.
For a more complicated example we solve a system of inhomogeneous recurrence relations.

EXAMPLE 6.2.4 Solve the system of recurrence relations

x_{n+1} = 2xₙ − yₙ − 1
y_{n+1} = −xₙ + 2yₙ + 2,

given that x₀ = 0 and y₀ = −1.

Solution. The system can be written in matrix form as

X_{n+1} = AXₙ + B,

where

A = [ 2 −1 ; −1 2 ] and B = [ −1, 2 ]ᵗ.

It is then an easy induction to prove that

Xₙ = AⁿX₀ + (A^{n−1} + ⋯ + A + I₂)B.   (6.5)
Also it is easy to verify by the eigenvalue method that

Aⁿ = (1/2) [ 1 + 3ⁿ   1 − 3ⁿ ; 1 − 3ⁿ   1 + 3ⁿ ] = (1/2)U + (3ⁿ/2)V,

where U = [ 1 1 ; 1 1 ] and V = [ 1 −1 ; −1 1 ]. Hence

A^{n−1} + ⋯ + A + I₂ = (n/2)U + ((3^{n−1} + ⋯ + 3 + 1)/2)V = (n/2)U + ((3ⁿ − 1)/4)V.

Then equation 6.5 gives

Xₙ = ((1/2)U + (3ⁿ/2)V) [ 0, −1 ]ᵗ + ((n/2)U + ((3ⁿ − 1)/4)V) [ −1, 2 ]ᵗ,

which simplifies to

[ xₙ, yₙ ]ᵗ = [ (2n + 1 − 3ⁿ)/4, (2n − 5 + 3ⁿ)/4 ]ᵗ.

Hence xₙ = (2n + 1 − 3ⁿ)/4 and yₙ = (2n − 5 + 3ⁿ)/4.
REMARK 6.2.1 If (A − I₂)⁻¹ existed (that is, if det(A − I₂) ≠ 0, or equivalently, if 1 is not an eigenvalue of A), then we could have used the formula

A^{n−1} + ⋯ + A + I₂ = (Aⁿ − I₂)(A − I₂)⁻¹.   (6.6)

However the eigenvalues of A are 1 and 3 in the above problem, so formula 6.6 cannot be used there.

Our discussion of eigenvalues and eigenvectors has been limited to 2 × 2 matrices. The discussion is more complicated for matrices of size greater than two and is best left to a second course in linear algebra. Nevertheless the following result is a useful generalization of theorem 6.2.1. The reader is referred to [28, page 350] for a proof.
THEOREM 6.2.2 Let A be an n × n matrix having distinct eigenvalues λ₁, …, λₙ and corresponding eigenvectors X₁, …, Xₙ. Let P be the matrix whose columns are respectively X₁, …, Xₙ. Then P is nonsingular and

P⁻¹AP = diag(λ₁, λ₂, …, λₙ).
Another useful result, which covers the case where there are multiple eigenvalues, is the following (the reader is referred to [28, pages 351-352] for a proof):

THEOREM 6.2.3 Suppose the characteristic polynomial of A has the factorization

det(A − λIₙ) = (λ − c₁)^{n₁} ⋯ (λ − c_t)^{n_t},

where c₁, …, c_t are the distinct eigenvalues of A. Suppose that for i = 1, …, t, we have nullity(A − c_iIₙ) = n_i. For each i, choose a basis X_{i1}, …, X_{in_i} for the eigenspace N(A − c_iIₙ). Then the matrix

P = [X₁₁| ⋯ |X_{1n₁}| ⋯ |X_{t1}| ⋯ |X_{tn_t}]

is nonsingular and P⁻¹AP is the diagonal matrix

diag(c₁, …, c₁, c₂, …, c₂, …, c_t, …, c_t).

(The notation means that on the diagonal there are n₁ elements c₁, followed by n₂ elements c₂, …, followed by n_t elements c_t.)
6.3 PROBLEMS

1. Let A = [ 4 −3 ; 1 0 ]. Find a nonsingular matrix P such that P⁻¹AP = diag(1, 3) and hence prove that

Aⁿ = ((3ⁿ − 1)/2) A + ((3 − 3ⁿ)/2) I₂.

2. If A = [ 0.6 0.8 ; 0.4 0.2 ], prove that Aⁿ tends to a limiting matrix

[ 2/3 2/3 ; 1/3 1/3 ]

as n → ∞.
3. Solve the system of differential equations

dx/dt = 3x − 2y
dy/dt = 5x − 4y,

given x = 13 and y = 22 when t = 0.

[Answer: x = 7eᵗ + 6e^{−2t}, y = 7eᵗ + 15e^{−2t}.]

4. Solve the system of recurrence relations

x_{n+1} = 3xₙ − yₙ
y_{n+1} = −xₙ + 3yₙ,

given that x₀ = 1 and y₀ = 2.

[Answer: xₙ = 2^{n−1}(3 − 2ⁿ), yₙ = 2^{n−1}(3 + 2ⁿ).]
5. Let A = [ a b ; c d ] be a real or complex matrix with distinct eigenvalues λ₁, λ₂ and corresponding eigenvectors X₁, X₂. Also let P = [X₁|X₂].

(a) Prove that the system of recurrence relations

x_{n+1} = axₙ + byₙ
y_{n+1} = cxₙ + dyₙ

has the solution

[ xₙ, yₙ ]ᵗ = αλ₁ⁿX₁ + βλ₂ⁿX₂,

where α and β are determined by the equation

[ α, β ]ᵗ = P⁻¹ [ x₀, y₀ ]ᵗ.

(b) Prove that the system of differential equations

dx/dt = ax + by
dy/dt = cx + dy

has the solution

[ x, y ]ᵗ = αe^{λ₁t}X₁ + βe^{λ₂t}X₂,

where α and β are determined by the equation

[ α, β ]ᵗ = P⁻¹ [ x(0), y(0) ]ᵗ.
6. Let A = [ a₁₁ a₁₂ ; a₂₁ a₂₂ ] be a real matrix with nonreal eigenvalues λ = a + ib and \overline{λ} = a − ib, with corresponding eigenvectors X = U + iV and \overline{X} = U − iV, where U and V are real vectors. Also let P be the real matrix defined by P = [U|V]. Finally let a + ib = re^{iθ}, where r > 0 and θ is real.

(a) Prove that

AU = aU − bV
AV = bU + aV.

(b) Deduce that

P⁻¹AP = [ a b ; −b a ].

(c) Prove that the system of recurrence relations

x_{n+1} = a₁₁xₙ + a₁₂yₙ
y_{n+1} = a₂₁xₙ + a₂₂yₙ

has the solution

[ xₙ, yₙ ]ᵗ = rⁿ((αU + βV) cos nθ + (βU − αV) sin nθ),

where α and β are determined by the equation

[ α, β ]ᵗ = P⁻¹ [ x₀, y₀ ]ᵗ.

(d) Prove that the system of differential equations

dx/dt = a₁₁x + a₁₂y
dy/dt = a₂₁x + a₂₂y

has the solution

[ x, y ]ᵗ = e^{at}((αU + βV) cos bt + (βU − αV) sin bt),

where α and β are determined by the equation

[ α, β ]ᵗ = P⁻¹ [ x(0), y(0) ]ᵗ.

[Hint: Let [ x, y ]ᵗ = P [ x₁, y₁ ]ᵗ. Also let z = x₁ + iy₁. Prove that

ż = (a − ib)z

and deduce that

x₁ + iy₁ = e^{at}(α + iβ)(cos bt − i sin bt).

Then equate real and imaginary parts to solve for x₁, y₁ and hence x, y.]
7. (The case of repeated eigenvalues.) Let A = [ a b ; c d ] and suppose that the characteristic polynomial of A, λ² − (a + d)λ + (ad − bc), has a repeated root λ. Also assume that A ≠ λI₂. Let B = A − λI₂.

(i) Prove that (a − d)² + 4bc = 0.

(ii) Prove that B² = 0.

(iii) Prove that BX₂ ≠ 0 for some vector X₂; indeed, show that X₂ can be taken to be [1, 0]ᵗ or [0, 1]ᵗ.

(iv) Let X₁ = BX₂. Prove that P = [X₁|X₂] is nonsingular,

AX₁ = λX₁ and AX₂ = X₁ + λX₂,

and deduce that

P⁻¹AP = [ λ 1 ; 0 λ ].
8. Use the previous result to solve the system of differential equations

dx/dt = 4x − y
dy/dt = 4x + 8y,

given that x = 1 = y when t = 0.

[To solve the differential equation dx/dt − kx = f(t), k a constant, multiply throughout by e^{−kt}, thereby converting the left-hand side to d/dt (e^{−kt}x).]

[Answer: x = (1 − 3t)e^{6t}, y = (1 + 6t)e^{6t}.]
9. Let

A = [ 1/2 1/2 0 ]
    [ 1/4 1/4 1/2 ]
    [ 1/4 1/4 1/2 ].

(a) Verify that det(A − λI₃), the characteristic polynomial of A, is given by

−λ(λ − 1)(λ − 1/4).

(b) Find a nonsingular matrix P such that P⁻¹AP = diag(1, 0, 1/4).

(c) Prove that

Aⁿ = (1/3) [ 1 1 1 ; 1 1 1 ; 1 1 1 ] + (1/(3·4ⁿ)) [ 2 2 −4 ; −1 −1 2 ; −1 −1 2 ]

if n ≥ 1.
10. Let

A = [ 5 2 −2 ]
    [ 2 5 −2 ]
    [ −2 −2 5 ].

(a) Verify that det(A − λI₃), the characteristic polynomial of A, is given by

−(λ − 3)²(λ − 9).

(b) Find a nonsingular matrix P such that P⁻¹AP = diag(3, 3, 9).
Chapter 7

Identifying second degree equations

7.1 The eigenvalue method

In this section we apply eigenvalue methods to determine the geometrical nature of the second degree equation

ax² + 2hxy + by² + 2gx + 2fy + c = 0,   (7.1)

where not all of a, h, b are zero.

Let A = [ a h ; h b ] be the matrix of the quadratic form ax² + 2hxy + by². We saw in section 6.1, equation 6.2, that A has real eigenvalues λ₁ and λ₂, given by

λ₁ = ((a + b) − √((a − b)² + 4h²))/2, λ₂ = ((a + b) + √((a − b)² + 4h²))/2.

We show that it is always possible to rotate the x, y axes to x₁, y₁ axes whose positive directions are determined by eigenvectors X₁ and X₂ corresponding to λ₁ and λ₂, in such a way that relative to the x₁, y₁ axes, equation 7.1 takes the form

a′x₁² + b′y₁² + 2g′x₁ + 2f′y₁ + c = 0.   (7.2)

Then by completing the square and suitably translating the x₁, y₁ axes to new x₂, y₂ axes, equation 7.2 can be reduced to one of several standard forms, each of which is easy to sketch. We need some preliminary definitions.
DEFINITION 7.1.1 (Orthogonal matrix) An n × n real matrix P is called orthogonal if

PᵗP = Iₙ.

It follows that if P is orthogonal, then det P = ±1. For

det(PᵗP) = det Pᵗ det P = (det P)²,

so (det P)² = det Iₙ = 1. Hence det P = ±1.

If P is an orthogonal matrix with det P = 1, then P is called a proper orthogonal matrix.
THEOREM 7.1.1 If P is a 2 × 2 orthogonal matrix with det P = 1, then

P = [ cos θ  −sin θ ; sin θ  cos θ ]

for some θ.

REMARK 7.1.1 Hence, by the discussion at the beginning of Chapter 6, if P is a proper orthogonal matrix, the coordinate transformation

[ x, y ]ᵗ = P [ x₁, y₁ ]ᵗ

represents a rotation of the axes, with new x₁ and y₁ axes given by the respective columns of P.

Proof. Suppose that PᵗP = I₂, where Δ = det P = 1. Let

P = [ a b ; c d ].
Then the equation

Pᵗ = P⁻¹ = (1/Δ) adj P

gives

[ a c ; b d ] = [ d −b ; −c a ].

Hence a = d, b = −c and so

P = [ a −c ; c a ],

where a² + c² = 1. But then the point (a, c) lies on the unit circle, so a = cos θ and c = sin θ, where θ is uniquely determined up to multiples of 2π.
DEFINITION 7.1.2 (Dot product) If X = [ a, b ]ᵗ and Y = [ c, d ]ᵗ, then X · Y, the dot product of X and Y, is defined by

X · Y = ac + bd.

The dot product has the following properties:

(i) X · (Y + Z) = X · Y + X · Z;

(ii) X · Y = Y · X;

(iii) (tX) · Y = t(X · Y);

(iv) X · X = a² + b² if X = [ a, b ]ᵗ;

(v) X · Y = XᵗY.

The length of X is defined by

||X|| = √(a² + b²) = (X · X)^{1/2}.

We see that ||X|| is the distance between the origin O = (0, 0) and the point (a, b).
THEOREM 7.1.2 (Geometrical interpretation of the dot product)
Let A = (x₁, y₁) and B = (x₂, y₂) be points, each distinct from the origin O = (0, 0). Then if X = [ x₁, y₁ ]ᵗ and Y = [ x₂, y₂ ]ᵗ, we have

X · Y = OA · OB cos θ,

where θ is the angle between the rays OA and OB.

Proof. By the cosine law applied to triangle OAB, we have

AB² = OA² + OB² − 2OA · OB cos θ.   (7.3)

Now AB² = (x₂ − x₁)² + (y₂ − y₁)², OA² = x₁² + y₁², OB² = x₂² + y₂².

Substituting in equation 7.3 then gives

(x₂ − x₁)² + (y₂ − y₁)² = (x₁² + y₁²) + (x₂² + y₂²) − 2OA · OB cos θ,
which simplifies to give

OA · OB cos θ = x₁x₂ + y₁y₂ = X · Y.

It follows from theorem 7.1.2 that if A = (x₁, y₁) and B = (x₂, y₂) are points distinct from O = (0, 0) and X = [ x₁, y₁ ]ᵗ and Y = [ x₂, y₂ ]ᵗ, then X · Y = 0 means that the rays OA and OB are perpendicular. This is the reason for the following definition:

DEFINITION 7.1.3 (Orthogonal vectors) Vectors X and Y are called orthogonal if

X · Y = 0.
There is also a connection with orthogonal matrices:

THEOREM 7.1.3 Let P be a 2 × 2 real matrix. Then P is an orthogonal matrix if and only if the columns of P are orthogonal and have unit length.

Proof. P is orthogonal if and only if PᵗP = I₂. Now if P = [X₁|X₂], the matrix PᵗP is an important matrix called the Gram matrix of the column vectors X₁ and X₂. It is easy to prove that

PᵗP = [Xᵢ · Xⱼ] = [ X₁·X₁  X₁·X₂ ; X₂·X₁  X₂·X₂ ].

Hence the equation PᵗP = I₂ is equivalent to

[ X₁·X₁  X₁·X₂ ; X₂·X₁  X₂·X₂ ] = [ 1 0 ; 0 1 ],

or, equating corresponding elements of both sides:

X₁·X₁ = 1, X₁·X₂ = 0, X₂·X₂ = 1,

which says that the columns of P are orthogonal and of unit length.
The next theorem describes a fundamental property of real symmetric matrices and the proof generalizes to symmetric matrices of any size.

THEOREM 7.1.4 If X₁ and X₂ are eigenvectors corresponding to distinct eigenvalues λ₁ and λ₂ of a real symmetric matrix A, then X₁ and X₂ are orthogonal vectors.
Proof. Suppose

AX₁ = λ₁X₁, AX₂ = λ₂X₂,   (7.4)

where X₁ and X₂ are nonzero column vectors, Aᵗ = A and λ₁ ≠ λ₂.

We have to prove that X₁ᵗX₂ = 0. From equation 7.4,

X₂ᵗAX₁ = λ₁X₂ᵗX₁   (7.5)

and

X₁ᵗAX₂ = λ₂X₁ᵗX₂.   (7.6)

From equation 7.5, taking transposes,

(X₂ᵗAX₁)ᵗ = (λ₁X₂ᵗX₁)ᵗ, so X₁ᵗAᵗX₂ = λ₁X₁ᵗX₂.

Hence

X₁ᵗAX₂ = λ₁X₁ᵗX₂.   (7.7)

Finally, subtracting equation 7.6 from equation 7.7, we have

(λ₁ − λ₂)X₁ᵗX₂ = 0

and hence, since λ₁ ≠ λ₂,

X₁ᵗX₂ = 0.
THEOREM 7.1.5 Let A be a real 2 × 2 symmetric matrix with distinct eigenvalues λ₁ and λ₂. Then a proper orthogonal 2 × 2 matrix P exists such that

PᵗAP = diag(λ₁, λ₂).

Also the rotation of axes

[ x, y ]ᵗ = P [ x₁, y₁ ]ᵗ

diagonalizes the quadratic form corresponding to A:

XᵗAX = λ₁x₁² + λ₂y₁².

Proof. Let X₁ and X₂ be eigenvectors corresponding to λ₁ and λ₂. Then by theorem 7.1.4, X₁ and X₂ are orthogonal. By dividing X₁ and X₂ by their lengths (i.e. normalizing X₁ and X₂) if necessary, we can assume that X₁ and X₂ have unit length. Then by theorem 7.1.3, P = [X₁|X₂] is an orthogonal matrix. By replacing X₁ by −X₁, if necessary, we can assume that det P = 1. Then by theorem 6.2.1, we have

PᵗAP = P⁻¹AP = [ λ₁ 0 ; 0 λ₂ ].

Also under the rotation X = PY,

XᵗAX = (PY)ᵗA(PY) = Yᵗ(PᵗAP)Y = Yᵗ diag(λ₁, λ₂) Y = λ₁x₁² + λ₂y₁².
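The construction in this proof is exactly what a numerical routine delivers. The sketch below assumes NumPy; np.linalg.eigh already returns orthonormal eigenvectors for a symmetric matrix, so only the sign adjustment of the proof is needed:

import numpy as np

def proper_orthogonal_diagonalizer(A):
    evals, P = np.linalg.eigh(A)   # columns: orthonormal eigenvectors
    if np.linalg.det(P) < 0:       # force det P = +1, i.e. a rotation
        P[:, 0] = -P[:, 0]
    return P, evals

A = np.array([[12.0, -6.0], [-6.0, 7.0]])   # matrix of example 7.1.1
P, evals = proper_orthogonal_diagonalizer(A)
print(evals)         # 3 and 16 (eigh lists eigenvalues in ascending order)
print(P.T @ A @ P)   # the diagonal matrix diag(3, 16)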
EXAMPLE 7.1.1 Let A be the symmetric matrix

A = [ 12 −6 ; −6 7 ].

Find a proper orthogonal matrix P such that PᵗAP is diagonal.

Solution. The characteristic equation of A is λ² − 19λ + 48 = 0, or

(λ − 16)(λ − 3) = 0.

Hence A has distinct eigenvalues λ₁ = 16 and λ₂ = 3. We find corresponding eigenvectors

X₁ = [ −3, 2 ]ᵗ and X₂ = [ 2, 3 ]ᵗ.

Now ||X₁|| = ||X₂|| = √13. So we take

X₁ = (1/√13) [ −3, 2 ]ᵗ and X₂ = (1/√13) [ 2, 3 ]ᵗ.

Then if P = [X₁|X₂], the proof of theorem 7.1.5 shows that

PᵗAP = [ 16 0 ; 0 3 ].

However det P = −1, so replacing X₁ by −X₁ will give det P = 1.
Figure 7.1: 12x² − 12xy + 7y² + 60x − 38y + 31 = 0.
REMARK 7.1.2 (A shortcut) Once we have determined one eigenvector X₁ = [ a, b ]ᵗ, the other can be taken to be [ −b, a ]ᵗ, as these vectors are always orthogonal. Also P = [X₁|X₂] will then have det P = a² + b² > 0.
We now apply the above ideas to determine the geometric nature of second degree equations in x and y.

EXAMPLE 7.1.2 Sketch the curve determined by the equation

12x² − 12xy + 7y² + 60x − 38y + 31 = 0.

Solution. With P taken to be the proper orthogonal matrix defined in the previous example by

P = [ 3/√13  2/√13 ; −2/√13  3/√13 ],

then as theorem 7.1.1 predicts, P is a rotation matrix and the transformation

X = [ x, y ]ᵗ = PY = P [ x₁, y₁ ]ᵗ,

or more explicitly

x = (3x₁ + 2y₁)/√13, y = (−2x₁ + 3y₁)/√13,   (7.8)

will rotate the x, y axes to positions given by the respective columns of P. (More generally, we can always arrange for the x₁ axis to point either into the first or fourth quadrant.)
Now A = [ 12 −6 ; −6 7 ] is the matrix of the quadratic form 12x² − 12xy + 7y², so we have, by theorem 7.1.5,

12x² − 12xy + 7y² = 16x₁² + 3y₁².

Then under the rotation X = PY, our original quadratic equation becomes

16x₁² + 3y₁² + (60/√13)(3x₁ + 2y₁) − (38/√13)(−2x₁ + 3y₁) + 31 = 0,

or

16x₁² + 3y₁² + (256/√13)x₁ + (6/√13)y₁ + 31 = 0.

Now complete the square in x₁ and y₁:

16(x₁² + (16/√13)x₁) + 3(y₁² + (2/√13)y₁) + 31 = 0,

16(x₁ + 8/√13)² + 3(y₁ + 1/√13)² = 16(8/√13)² + 3(1/√13)² − 31
                                  = 48.   (7.9)

Then if we perform a translation of axes to the new origin (x₁, y₁) = (−8/√13, −1/√13):

x₂ = x₁ + 8/√13, y₂ = y₁ + 1/√13,

equation 7.9 reduces to

16x₂² + 3y₂² = 48,

or

x₂²/3 + y₂²/16 = 1.
Figure 7.2: x²/a² + y²/b² = 1, 0 < b < a: an ellipse.
This equation is now in one of the standard forms listed below as Figure 7.2 and is that of an ellipse whose centre is at (x₂, y₂) = (0, 0) and whose axes of symmetry lie along the x₂, y₂ axes. In terms of the original x, y coordinates, we find that the centre is (x, y) = (−2, 1). Also Y = PᵗX, so equations 7.8 can be solved to give

x₁ = (3x − 2y)/√13, y₁ = (2x + 3y)/√13.

Hence the y₂ axis is given by

0 = x₂ = x₁ + 8/√13 = (3x − 2y)/√13 + 8/√13,

or 3x − 2y + 8 = 0. Similarly the x₂ axis is given by 2x + 3y + 1 = 0.

This ellipse is sketched in Figure 7.1.
Figures 7.2, 7.3, 7.4 and 7.5 are a collection of standard second degree equations: Figure 7.2 is an ellipse; Figures 7.3 are hyperbolas (in both these examples, the asymptotes are the lines y = ±(b/a)x); Figures 7.4 and 7.5 represent parabolas.
EXAMPLE 7.1.3 Sketch y² − 4x − 10y − 7 = 0.
Figure 7.3: (i) x²/a² − y²/b² = 1; (ii) x²/a² − y²/b² = −1, 0 < b, 0 < a.
Figure 7.4: (i) y² = 4ax, a > 0; (ii) y² = 4ax, a < 0.
Figure 7.5: (iii) x² = 4ay, a > 0; (iv) x² = 4ay, a < 0.
Solution. Complete the square:

y² − 10y + 25 − 4x − 32 = 0,
(y − 5)² = 4x + 32 = 4(x + 8),

or y₁² = 4x₁, under the translation of axes x₁ = x + 8, y₁ = y − 5. Hence we get a parabola with vertex at the new origin (x₁, y₁) = (0, 0), i.e. (x, y) = (−8, 5).

The parabola is sketched in Figure 7.6.
EXAMPLE 7.1.4 Sketch the curve x² − 4xy + 4y² + 5y − 9 = 0.

Solution. We have x² − 4xy + 4y² = XᵗAX, where

A = [ 1 −2 ; −2 4 ].

The characteristic equation of A is λ² − 5λ = 0, so A has distinct eigenvalues λ₁ = 5 and λ₂ = 0. We find corresponding unit length eigenvectors

X₁ = (1/√5) [ 1, −2 ]ᵗ, X₂ = (1/√5) [ 2, 1 ]ᵗ.

Then P = [X₁|X₂] is a proper orthogonal matrix and under the rotation of axes X = PY, or

x = (x₁ + 2y₁)/√5
y = (−2x₁ + y₁)/√5,
Figure 7.6: y² − 4x − 10y − 7 = 0.
we have

x² − 4xy + 4y² = 5x₁².

The original quadratic equation becomes

5x₁² + √5(−2x₁ + y₁) − 9 = 0,
5(x₁² − (2/√5)x₁) + √5 y₁ − 9 = 0,
5(x₁ − 1/√5)² = 10 − √5 y₁ = −√5(y₁ − 2√5),

or 5x₂² = −√5 y₂, where the x₁, y₁ axes have been translated to x₂, y₂ axes using the transformation

x₂ = x₁ − 1/√5, y₂ = y₁ − 2√5.

Hence the vertex of the parabola is at (x₂, y₂) = (0, 0), i.e. (x₁, y₁) = (1/√5, 2√5), or (x, y) = (21/5, 8/5). The axis of symmetry of the parabola is the line x₂ = 0, i.e. x₁ = 1/√5. Using the rotation equations in the form

x₁ = (x − 2y)/√5,
Figure 7.7: x² − 4xy + 4y² + 5y − 9 = 0.
y₁ = (2x + y)/√5,

we have

(x − 2y)/√5 = 1/√5, or x − 2y = 1.

The parabola is sketched in Figure 7.7.
7.2 A classification algorithm

There are several possible degenerate cases that can arise from the general second degree equation. For example x² + y² = 0 represents the point (0, 0); x² + y² = −1 defines the empty set, as does x² = −1 or y² = −1; x² = 0 defines the line x = 0; (x + y)² = 0 defines the line x + y = 0; x² − y² = 0 defines the lines x − y = 0, x + y = 0; x² = 1 defines the parallel lines x = ±1; (x + y)² = 1 likewise defines two parallel lines x + y = ±1.

We state without proof a complete classification¹ of the various cases that can possibly arise for the general second degree equation

ax² + 2hxy + by² + 2gx + 2fy + c = 0.   (7.10)

¹This classification forms the basis of a computer program which was used to produce the diagrams in this chapter. I am grateful to Peter Adams for his programming assistance.
It turns out to be more convenient to first perform a suitable translation of axes, before rotating the axes. Let

Δ = | a h g |
    | h b f |
    | g f c |,   C = ab − h²,   A = bc − f²,   B = ca − g².

If C ≠ 0, let

α = −| g h ; f b |/C, β = −| a g ; h f |/C.   (7.11)
CASE 1. Δ = 0.

(1.1) C ≠ 0. Translate axes to the new origin (α, β), where α and β are given by equations 7.11:

x = x₁ + α, y = y₁ + β.

Then equation 7.10 reduces to

ax₁² + 2hx₁y₁ + by₁² = 0.

(a) C > 0: Single point (x, y) = (α, β).

(b) C < 0: Two nonparallel lines intersecting in (x, y) = (α, β). The lines are

(y − β)/(x − α) = (−h ± √(−C))/b if b ≠ 0;

x = α and (y − β)/(x − α) = −a/(2h), if b = 0.
(1.2) C = 0.

(a) h = 0.

(i) a = g = 0.
(A) A > 0: Empty set.
(B) A = 0: Single line y = −f/b.
(C) A < 0: Two parallel lines y = (−f ± √(−A))/b.

(ii) b = f = 0.
(A) B > 0: Empty set.
(B) B = 0: Single line x = −g/a.
(C) B < 0: Two parallel lines x = (−g ± √(−B))/a.

(b) h ≠ 0.

(i) B > 0: Empty set.
(ii) B = 0: Single line ax + hy = −g.
(iii) B < 0: Two parallel lines ax + hy = −g ± √(−B).
CASE 2. Δ ≠ 0.

(2.1) C ≠ 0. Translate axes to the new origin (α, β), where α and β are given by equations 7.11:

x = x₁ + α, y = y₁ + β.

Equation 7.10 becomes

ax₁² + 2hx₁y₁ + by₁² = −Δ/C.   (7.12)

CASE 2.1(i) h = 0. Equation 7.12 becomes ax₁² + by₁² = −Δ/C.

(a) C < 0: Hyperbola.

(b) C > 0 and aΔ > 0: Empty set.

(c) C > 0 and aΔ < 0:

(i) a = b: Circle, centre (α, β), radius √((g² + f² − ac)/a²).
(ii) a ≠ b: Ellipse.
CASE 2.1(ii) h ≠ 0.

Rotate the (x₁, y₁) axes with the new positive x₂ axis in the direction of

[(b − a + R)/2, −h],

where R = √((a − b)² + 4h²).

Then equation 7.12 becomes

λ₁x₂² + λ₂y₂² = −Δ/C,   (7.13)

where λ₁ = (a + b − R)/2 and λ₂ = (a + b + R)/2. Here λ₁λ₂ = C.

(a) C < 0: Hyperbola.

Here λ₁ < 0 < λ₂ and equation 7.13 becomes

x₂²/u² − y₂²/v² = −Δ/|Δ|,

where

u = √|Δ/(Cλ₁)|, v = √|Δ/(Cλ₂)|.

(b) C > 0 and aΔ > 0: Empty set.

(c) C > 0 and aΔ < 0: Ellipse.

Here λ₁, λ₂, a, b all have the same sign and λ₁ ≠ λ₂ and equation 7.13 becomes

x₂²/u² + y₂²/v² = 1,

where

u = √(−Δ/(Cλ₁)), v = √(−Δ/(Cλ₂)).
(2.2) C = 0.

(a) h = 0.

(i) a = 0: Then b ≠ 0 and g ≠ 0. Parabola with vertex

((f² − bc)/(2gb), −f/b).

Translate axes to (x₁, y₁) axes:

y₁² = −(2g/b) x₁.

(ii) b = 0: Then a ≠ 0 and f ≠ 0. Parabola with vertex

(−g/a, (g² − ac)/(2fa)).

Translate axes to (x₁, y₁) axes:

x₁² = −(2f/a) y₁.
(b) h ≠ 0: Parabola. Let

k = (ga + hf)/(a + b) and t = (af − gh)/(a + b).

The vertex of the parabola is

( (h(ac − k²) − 2akt)/(2at(a + b)), (a(k² − ac) − 2hkt)/(2at(a + b)) ).

Now translate to the vertex as the new origin, then rotate to (x₁, y₁) axes with the positive x₁ axis along [sa, sh], where s = sign(a). (The positive x₁ axis points into the first or fourth quadrant.) Then the parabola has equation

x₁² = −(2st/√(a² + h²)) y₁.
REMARK 7.2.1 If Δ = 0, it is not necessary to rotate the axes. Instead it is always possible to translate the axes suitably so that the coefficients of the terms of the first degree vanish.
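The coarse branches of the algorithm depend only on the invariants Δ and C and on the sign of aΔ, so they mechanize directly. The following sketch assumes NumPy; the helper name classify_conic and the tolerance are illustrative choices, and the degenerate sub-cases would be resolved as in Case 1 above:

import numpy as np

def classify_conic(a, h, b, g, f, c):
    """Coarse type of a x^2 + 2h xy + b y^2 + 2g x + 2f y + c = 0."""
    Delta = np.linalg.det(np.array([[a, h, g],
                                    [h, b, f],
                                    [g, f, c]]))
    C = a*b - h*h
    if abs(Delta) < 1e-12:
        return "degenerate: point, line(s) or empty (Case 1)"
    if C < 0:
        return "hyperbola"
    if C > 0:
        return "ellipse (circle if a = b)" if a*Delta < 0 else "empty set"
    return "parabola"

print(classify_conic(12, -6, 7, 30, -19, 31))   # ellipse (example 7.1.2)
print(classify_conic(2, 0.5, -1, 0, 3, -8))     # degenerate (example 7.2.1)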
EXAMPLE 7.2.1 Identify the curve

2x² + xy − y² + 6y − 8 = 0.   (7.14)
Solution. Here

Δ = | 2 1/2 0 ; 1/2 −1 3 ; 0 3 −8 | = 0.

Let x = x₁ + α, y = y₁ + β and substitute in equation 7.14 to get

2(x₁ + α)² + (x₁ + α)(y₁ + β) − (y₁ + β)² + 6(y₁ + β) − 8 = 0.   (7.15)

Then equating the coefficients of x₁ and y₁ to 0 gives

4α + β = 0
α − 2β + 6 = 0,

which has the unique solution α = −2/3, β = 8/3. Then equation 7.15 simplifies to

2x₁² + x₁y₁ − y₁² = 0 = (2x₁ − y₁)(x₁ + y₁),

so relative to the x₁, y₁ coordinates, equation 7.14 describes two lines: 2x₁ − y₁ = 0 or x₁ + y₁ = 0. In terms of the original x, y coordinates, these lines become 2(x + 2/3) − (y − 8/3) = 0 and (x + 2/3) + (y − 8/3) = 0, i.e. 2x − y + 4 = 0 and x + y − 2 = 0, which intersect in the point

(x, y) = (α, β) = (−2/3, 8/3).
EXAMPLE 7.2.2 Identify the curve

x² + 2xy + y² + 2x + 2y + 1 = 0.   (7.16)

Solution. Here

Δ = | 1 1 1 ; 1 1 1 ; 1 1 1 | = 0.

Let x = x₁ + α, y = y₁ + β and substitute in equation 7.16 to get

(x₁ + α)² + 2(x₁ + α)(y₁ + β) + (y₁ + β)² + 2(x₁ + α) + 2(y₁ + β) + 1 = 0.   (7.17)

Then equating the coefficients of x₁ and y₁ to 0 gives the same equation

2α + 2β + 2 = 0.

Take α = 0, β = −1. Then equation 7.17 simplifies to

x₁² + 2x₁y₁ + y₁² = 0 = (x₁ + y₁)²,

and in terms of x, y coordinates, equation 7.16 becomes

(x + y + 1)² = 0, or x + y + 1 = 0.
7.3 PROBLEMS

1. Sketch the curves

(i) x² − 8x + 8y + 8 = 0;
(ii) y² − 12x + 2y + 25 = 0.

2. Sketch the hyperbola

4xy − 3y² = 8

and find the equations of the asymptotes.

[Answer: y = 0 and y = (4/3)x.]
3. Sketch the ellipse

8x² − 4xy + 5y² = 36

and find the equations of the axes of symmetry.

[Answer: y = 2x and x = −2y.]
4. Sketch the conics defined by the following equations. Find the centre when the conic is an ellipse or hyperbola, asymptotes if an hyperbola, the vertex and axis of symmetry if a parabola:

(i) 4x² − 9y² − 24x − 36y − 36 = 0;
(ii) 5x² − 4xy + 8y² + 4√5x − 16√5y + 4 = 0;
(iii) 4x² + y² − 4xy − 10y − 19 = 0;
(iv) 77x² + 78xy − 27y² + 70x − 30y + 29 = 0.

[Answers: (i) hyperbola, centre (3, −2), asymptotes 2x − 3y − 12 = 0, 2x + 3y = 0;
(ii) ellipse, centre (0, √5);
(iii) parabola, vertex (−7/5, −9/5), axis of symmetry 2x − y + 1 = 0;
(iv) hyperbola, centre (−1/10, −7/10), asymptotes 7x + 9y + 7 = 0 and 11x − 3y − 1 = 0.]
5. Identify the lines determined by the equations:

(i) 2x² + y² + 3xy − 5x − 4y + 3 = 0;
(ii) 9x² + y² − 6xy + 6x − 2y + 1 = 0;
(iii) x² + 4xy + 4y² − x − 2y − 2 = 0.

[Answers: (i) 2x + y − 3 = 0 and x + y − 1 = 0; (ii) 3x − y + 1 = 0;
(iii) x + 2y + 1 = 0 and x + 2y − 2 = 0.]
Chapter 8

THREE-DIMENSIONAL GEOMETRY

8.1 Introduction

In this chapter we present a vector-algebra approach to three-dimensional geometry. The aim is to present standard properties of lines and planes, with minimum use of complicated three-dimensional diagrams such as those involving similar triangles. We summarize the chapter:

Points are defined as ordered triples of real numbers and the distance between points P₁ = (x₁, y₁, z₁) and P₂ = (x₂, y₂, z₂) is defined by the formula

P₁P₂ = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²).
Directed line segments \vec{AB} are introduced as three-dimensional column vectors: If A = (x₁, y₁, z₁) and B = (x₂, y₂, z₂), then

\vec{AB} = [ x₂ − x₁, y₂ − y₁, z₂ − z₁ ]ᵗ.

If P is a point, we let P = \vec{OP} and call P the position vector of P.

With suitable definitions of lines and parallel lines, there are important geometrical interpretations of equality, addition and scalar multiplication of vectors.

(i) Equality of vectors: Suppose A, B, C, D are distinct points such that no three are collinear. Then \vec{AB} = \vec{CD} if and only if AB ∥ CD and AC ∥ BD (see Figure 8.1).
Figure 8.1: Equality and addition of vectors.
(ii) Addition of vectors obeys the parallelogram law: Let A, B, C be non-collinear. Then

\vec{AB} + \vec{AC} = \vec{AD},

where D is the point such that AB ∥ CD and AC ∥ BD (see Figure 8.1).

(iii) Scalar multiplication of vectors: Let \vec{AP} = t\vec{AB}, where A and B are distinct points. Then P is on the line AB,

AP/AB = |t|

and

(a) P = A if t = 0, P = B if t = 1;
(b) P is between A and B if 0 < t < 1;
(c) B is between A and P if 1 < t;
(d) A is between P and B if t < 0.

(See Figure 8.2.)
Figure 8.2: Scalar multiplication of vectors.
The dot product X·Y of vectors X = [ a₁, b₁, c₁ ]ᵗ and Y = [ a₂, b₂, c₂ ]ᵗ is defined by

X · Y = a₁a₂ + b₁b₂ + c₁c₂.

The length ||X|| of a vector X is defined by

||X|| = (X · X)^{1/2}

and the Cauchy-Schwarz inequality holds:

|X · Y| ≤ ||X|| ||Y||.

The triangle inequality for vector length now follows as a simple deduction:

||X + Y|| ≤ ||X|| + ||Y||.

Using the equation

AB = ||\vec{AB}||,

we deduce the corresponding familiar triangle inequality for distance:

AC ≤ AB + BC.
The angle θ between two nonzero vectors X and Y is then defined by

cos θ = (X · Y)/(||X|| ||Y||), 0 ≤ θ ≤ π.

This definition makes sense. For by the Cauchy-Schwarz inequality,

−1 ≤ (X · Y)/(||X|| ||Y||) ≤ 1.

Vectors X and Y are said to be perpendicular or orthogonal if X · Y = 0. Vectors of unit length are called unit vectors. The vectors

i = [ 1, 0, 0 ]ᵗ, j = [ 0, 1, 0 ]ᵗ, k = [ 0, 0, 1 ]ᵗ

are unit vectors and every vector is a linear combination of i, j and k:

[ a, b, c ]ᵗ = ai + bj + ck.

Nonzero vectors X and Y are parallel or proportional if the angle between X and Y equals 0 or π; equivalently if X = tY for some real number t. Vectors X and Y are then said to have the same or opposite direction, according as t > 0 or t < 0.
We are then led to study straight lines. If A and B are distinct points, it is easy to show that AP + PB = AB holds if and only if

\vec{AP} = t\vec{AB}, where 0 ≤ t ≤ 1.

A line is defined as a set consisting of all points P satisfying

P = P₀ + tX, t ∈ ℝ, or equivalently \vec{P₀P} = tX,

for some fixed point P₀ and fixed nonzero vector X called a direction vector for the line.

Equivalently, in terms of coordinates,

x = x₀ + ta, y = y₀ + tb, z = z₀ + tc,

where P₀ = (x₀, y₀, z₀) and not all of a, b, c are zero.
There is then one and only one line passing through two distinct points A and B. It consists of the points P satisfying

\vec{AP} = t\vec{AB},

where t is a real number.

The cross-product X × Y provides us with a vector which is perpendicular to both X and Y. It is defined in terms of the components of X and Y: Let X = a₁i + b₁j + c₁k and Y = a₂i + b₂j + c₂k. Then

X × Y = ai + bj + ck,

where

a = | b₁ c₁ ; b₂ c₂ |, b = −| a₁ c₁ ; a₂ c₂ |, c = | a₁ b₁ ; a₂ b₂ |.

The cross-product enables us to derive elegant formulae for the distance from a point to a line, the area of a triangle and the distance between two skew lines.
Finally we turn to the geometrical concept of a plane in three-dimensional space.

A plane is a set of points P satisfying an equation of the form

P = P₀ + sX + tY, s, t ∈ ℝ,   (8.1)

where X and Y are nonzero, nonparallel vectors.

In terms of coordinates, equation 8.1 takes the form

x = x₀ + sa₁ + ta₂
y = y₀ + sb₁ + tb₂
z = z₀ + sc₁ + tc₂,

where P₀ = (x₀, y₀, z₀).

There is then one and only one plane passing through three non-collinear points A, B, C. It consists of the points P satisfying

\vec{AP} = s\vec{AB} + t\vec{AC},

where s and t are real numbers.

The cross-product enables us to derive a concise equation for the plane through three non-collinear points A, B, C, namely

\vec{AP} · (\vec{AB} × \vec{AC}) = 0.
When expanded, this equation has the form

ax + by + cz = d,

where ai + bj + ck is a nonzero vector which is perpendicular to \vec{P₁P₂} for all points P₁, P₂ lying in the plane. Any vector with this property is said to be a normal to the plane.

It is then easy to prove that two planes with non-parallel normal vectors must intersect in a line.

We conclude the chapter by deriving a formula for the distance from a point to a plane.
8.2 Three-dimensional space

DEFINITION 8.2.1 Three-dimensional space is the set E³ of ordered triples (x, y, z), where x, y, z are real numbers. The triple (x, y, z) is called a point P in E³ and we write P = (x, y, z). The numbers x, y, z are called, respectively, the x, y, z coordinates of P.

The coordinate axes are the sets of points:

{(x, 0, 0)} (x-axis), {(0, y, 0)} (y-axis), {(0, 0, z)} (z-axis).

The only point common to all three axes is the origin O = (0, 0, 0).

The coordinate planes are the sets of points:

{(x, y, 0)} (xy-plane), {(0, y, z)} (yz-plane), {(x, 0, z)} (xz-plane).

The positive octant consists of the points (x, y, z), where x > 0, y > 0, z > 0.

We think of the points (x, y, z) with z > 0 as lying above the xy-plane, and those with z < 0 as lying beneath the xy-plane. A point P = (x, y, z) will be represented as in Figure 8.3. The point illustrated lies in the positive octant.
DEFINITION 8.2.2 The distance P₁P₂ between points P₁ = (x₁, y₁, z₁) and P₂ = (x₂, y₂, z₂) is defined by the formula

P₁P₂ = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²).

For example, if P = (x, y, z),

OP = √(x² + y² + z²).
Figure 8.3: Representation of three-dimensional space.

Figure 8.4: The vector \vec{AB}.
DEFINITION 8.2.3 If A = (x₁, y₁, z₁) and B = (x₂, y₂, z₂), we define the symbol \vec{AB} to be the column vector

\vec{AB} = [ x₂ − x₁, y₂ − y₁, z₂ − z₁ ]ᵗ.

We let P = \vec{OP} and call P the position vector of P.

The components of \vec{AB} are the coordinates of B when the axes are translated to A as origin of coordinates.

We think of \vec{AB} as being represented by the directed line segment from A to B and think of it as an arrow whose tail is at A and whose head is at B (see Figure 8.4).
Some mathematicians think of \vec{AB} as representing the translation of space which takes A into B.

The following simple properties of \vec{AB} are easily verified and correspond to how we intuitively think of directed line segments:

(i) \vec{AB} = 0 if and only if A = B;

(ii) \vec{BA} = −\vec{AB};

(iii) \vec{AB} + \vec{BC} = \vec{AC} (the triangle law);

(iv) \vec{BC} = \vec{AC} − \vec{AB} = C − B;

(v) if X is a vector and A a point, there is exactly one point B such that \vec{AB} = X, namely that defined by B = A + X.

To derive properties of the distance function and the vector function \vec{P₁P₂}, we need to introduce the dot product of two vectors in E³.
8.3 Dot product

DEFINITION 8.3.1 If X = [ a₁, b₁, c₁ ]ᵗ and Y = [ a₂, b₂, c₂ ]ᵗ, then X · Y, the dot product of X and Y, is defined by

X · Y = a₁a₂ + b₁b₂ + c₁c₂.
Figure 8.5: The negative of a vector.

Figure 8.6: (a) Equality of vectors; (b) Addition and subtraction of vectors.
The dot product has the following properties:

(i) X · (Y + Z) = X · Y + X · Z;

(ii) X · Y = Y · X;

(iii) (tX) · Y = t(X · Y);

(iv) X · X = a₁² + b₁² + c₁² if X = [ a₁, b₁, c₁ ]ᵗ;

(v) X · Y = XᵗY;

(vi) X · X = 0 if and only if X = 0.

The length of X is defined by

||X|| = √(a₁² + b₁² + c₁²) = (X · X)^{1/2}.

We see that ||P|| = OP and more generally ||\vec{P₁P₂}|| = P₁P₂, the distance between P₁ and P₂.
Figure 8.7: Position vector as a linear combination of i, j and k.
Vectors having unit length are called unit vectors.

The vectors

i = [ 1, 0, 0 ]ᵗ, j = [ 0, 1, 0 ]ᵗ, k = [ 0, 0, 1 ]ᵗ

are unit vectors. Every vector is a linear combination of i, j and k:

[ a, b, c ]ᵗ = ai + bj + ck.

(See Figure 8.7.)

It is easy to prove that

||tX|| = |t| ||X||,

if t is a real number. Hence if X is a nonzero vector, the vectors

±(1/||X||) X

are unit vectors.

A useful property of the length of a vector is

||X ± Y||² = ||X||² ± 2X · Y + ||Y||².   (8.2)
The following important property of the dot product is widely used in mathematics:

THEOREM 8.3.1 (The Cauchy-Schwarz inequality) If $X$ and $Y$ are vectors in $\mathbb{R}^3$, then
$$|X \cdot Y| \le \|X\|\,\|Y\|. \tag{8.3}$$
Moreover if $X \ne 0$ and $Y \ne 0$, then
$$X \cdot Y = \|X\|\,\|Y\| \Leftrightarrow Y = tX,\ t > 0,$$
$$X \cdot Y = -\|X\|\,\|Y\| \Leftrightarrow Y = tX,\ t < 0.$$
Proof. If $X = 0$, then inequality 8.3 is trivially true. So assume $X \ne 0$. Now if $t$ is any real number, by equation 8.2,
$$0 \le \|tX - Y\|^2 = \|tX\|^2 - 2(tX) \cdot Y + \|Y\|^2 = t^2\|X\|^2 - 2(X \cdot Y)t + \|Y\|^2 = at^2 - 2bt + c,$$
where $a = \|X\|^2 > 0$, $b = X \cdot Y$, $c = \|Y\|^2$. Hence
$$a\left(t^2 - \frac{2b}{a}t + \frac{c}{a}\right) \ge 0,$$
$$\left(t - \frac{b}{a}\right)^2 + \frac{ca - b^2}{a^2} \ge 0.$$
Substituting $t = b/a$ in the last inequality then gives
$$\frac{ac - b^2}{a^2} \ge 0,$$
so
$$|b| \le \sqrt{ac} = \sqrt{a}\sqrt{c},$$
and hence inequality 8.3 follows.

To discuss equality in the Cauchy-Schwarz inequality, assume $X \ne 0$ and $Y \ne 0$. Then if $X \cdot Y = \|X\|\,\|Y\|$, we have for all $t$
$$\|tX - Y\|^2 = t^2\|X\|^2 - 2tX \cdot Y + \|Y\|^2 = t^2\|X\|^2 - 2t\|X\|\,\|Y\| + \|Y\|^2 = (t\|X\| - \|Y\|)^2.$$
Taking $t = \|Y\|/\|X\|$ then gives $\|tX - Y\|^2 = 0$ and hence $tX - Y = 0$. Hence $Y = tX$, where $t > 0$. The case $X \cdot Y = -\|X\|\,\|Y\|$ is proved similarly.
COROLLARY 8.3.1 (The triangle inequality for vectors) If $X$ and $Y$ are vectors, then
$$\|X + Y\| \le \|X\| + \|Y\|. \tag{8.4}$$
Moreover if $X \ne 0$ and $Y \ne 0$, then equality occurs in inequality 8.4 if and only if $Y = tX$, where $t > 0$.

Proof.
$$\|X + Y\|^2 = \|X\|^2 + 2X \cdot Y + \|Y\|^2 \le \|X\|^2 + 2\|X\|\,\|Y\| + \|Y\|^2 = (\|X\| + \|Y\|)^2,$$
and inequality 8.4 follows.

If $\|X + Y\| = \|X\| + \|Y\|$, then the above proof shows that
$$X \cdot Y = \|X\|\,\|Y\|.$$
Hence if $X \ne 0$ and $Y \ne 0$, the first case of equality in the Cauchy-Schwarz inequality shows that $Y = tX$ with $t > 0$.
The triangle inequality for vectors gives rise to a corresponding inequality for the distance function:

THEOREM 8.3.2 (The triangle inequality for distance) If $A$, $B$, $C$ are points, then
$$AC \le AB + BC. \tag{8.5}$$
Moreover if $B \ne A$ and $B \ne C$, then equality occurs in inequality 8.5 if and only if $\vec{AB} = r\vec{AC}$, where $0 < r < 1$.

Proof.
$$AC = \|\vec{AC}\| = \|\vec{AB} + \vec{BC}\| \le \|\vec{AB}\| + \|\vec{BC}\| = AB + BC.$$
Moreover if equality occurs in inequality 8.5 and $B \ne A$ and $B \ne C$, then $X = \vec{AB} \ne 0$ and $Y = \vec{BC} \ne 0$ and the equation $AC = AB + BC$ becomes $\|X + Y\| = \|X\| + \|Y\|$. Hence the case of equality in the vector triangle inequality gives
$$Y = \vec{BC} = tX = t\vec{AB},\quad \text{where}\ t > 0.$$
Then
$$\vec{AC} = \vec{AB} + \vec{BC} = (1 + t)\vec{AB},\quad \text{so}\quad \vec{AB} = r\vec{AC},$$
where $r = 1/(t + 1)$ satisfies $0 < r < 1$.
8.4 Lines

DEFINITION 8.4.1 A line in $\mathbb{R}^3$ is the set $\mathcal{L}(P_0, X)$ consisting of all points $P$ satisfying
$$\mathbf{P} = \mathbf{P_0} + tX,\ t \in \mathbb{R},\quad \text{or equivalently}\quad \vec{P_0P} = tX, \tag{8.6}$$
for some fixed point $P_0$ and fixed nonzero vector $X$. (See Figure 8.8.)

Equivalently, in terms of coordinates, equation 8.6 becomes
$$x = x_0 + ta,\quad y = y_0 + tb,\quad z = z_0 + tc,$$
where not all of $a$, $b$, $c$ are zero.
The following familiar property of straight lines is easily verified.

THEOREM 8.4.1 If $A$ and $B$ are distinct points, there is one and only one line containing $A$ and $B$, namely $\mathcal{L}(A, \vec{AB})$ or, more explicitly, the line defined by $\vec{AP} = t\vec{AB}$, or equivalently, in terms of position vectors:
$$\mathbf{P} = (1 - t)\mathbf{A} + t\mathbf{B}\quad \text{or}\quad \mathbf{P} = \mathbf{A} + t\vec{AB}. \tag{8.7}$$
Equations 8.7 may be expressed in terms of coordinates: if $A = (x_1, y_1, z_1)$ and $B = (x_2, y_2, z_2)$, then
$$x = (1 - t)x_1 + tx_2,\quad y = (1 - t)y_1 + ty_2,\quad z = (1 - t)z_1 + tz_2.$$
[Figure 8.8: Representation of a line.]
[Figure 8.9: The line segment AB.]
There is an important geometric significance in the number $t$ of the above equation of the line through $A$ and $B$. The proof is left as an exercise:

THEOREM 8.4.2 (Joachimsthal's ratio formulae) If $t$ is the parameter occurring in theorem 8.4.1, then

(i) $|t| = \dfrac{AP}{AB}$;  (ii) $\left|\dfrac{t}{1 - t}\right| = \dfrac{AP}{PB}$ if $P \ne B$.

Also

(iii) $P$ is between $A$ and $B$ if $0 < t < 1$;

(iv) $B$ is between $A$ and $P$ if $1 < t$;

(v) $A$ is between $P$ and $B$ if $t < 0$.

(See Figure 8.9.)

For example, $t = \frac{1}{2}$ gives the midpoint $P$ of the segment $AB$:
$$\mathbf{P} = \frac{1}{2}(\mathbf{A} + \mathbf{B}).$$
EXAMPLE 8.4.1 $\mathcal{L}$ is the line $AB$, where $A = (-4, 3, 1)$, $B = (1, 1, 0)$; $\mathcal{M}$ is the line $CD$, where $C = (2, 0, 2)$, $D = (-1, 3, -2)$; $\mathcal{N}$ is the line $EF$, where $E = (1, 4, 7)$, $F = (-4, -3, -13)$. Find which pairs of lines intersect and also the points of intersection.

Solution. In fact only $\mathcal{L}$ and $\mathcal{N}$ intersect, in the point $\left(-\frac{2}{3}, \frac{5}{3}, \frac{1}{3}\right)$. For example, to determine if $\mathcal{L}$ and $\mathcal{N}$ meet, we start with vector equations for $\mathcal{L}$ and $\mathcal{N}$:
$$\mathbf{P} = \mathbf{A} + t\vec{AB},\quad \mathbf{Q} = \mathbf{E} + s\vec{EF},$$
equate $\mathbf{P}$ and $\mathbf{Q}$ and solve for $s$ and $t$:
$$(-4\mathbf{i} + 3\mathbf{j} + \mathbf{k}) + t(5\mathbf{i} - 2\mathbf{j} - \mathbf{k}) = (\mathbf{i} + 4\mathbf{j} + 7\mathbf{k}) + s(-5\mathbf{i} - 7\mathbf{j} - 20\mathbf{k}),$$
which on simplifying, gives
$$5t + 5s = 5$$
$$-2t + 7s = 1$$
$$-t + 20s = 6.$$
This system has the unique solution $t = \frac{2}{3}$, $s = \frac{1}{3}$ and this determines a corresponding point $P$ where the lines meet, namely $P = \left(-\frac{2}{3}, \frac{5}{3}, \frac{1}{3}\right)$.

The same method yields inconsistent systems when applied to the other pairs of lines.
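The elimination carried out above can be automated: equating P and Q gives three linear equations in the two unknowns t and s, and the lines meet exactly when that overdetermined system is consistent. A rough numpy sketch (the function line_intersection is our own construction, not from the text):

    import numpy as np

    def line_intersection(A, B, C, D):
        # Solve A + t*(B - A) = C + s*(D - C), a 3x2 linear system,
        # and accept the solution only if it is consistent.
        A, B, C, D = map(np.asarray, (A, B, C, D), [float] * 4)
        M = np.column_stack([B - A, -(D - C)])
        rhs = C - A
        (t, s), *_ = np.linalg.lstsq(M, rhs, rcond=None)
        if not np.allclose(M @ np.array([t, s]), rhs):
            return None          # inconsistent: the lines do not meet
        return t, s, A + t * (B - A)

    # Lines L (through A, B) and N (through E, F) of Example 8.4.1:
    print(line_intersection((-4, 3, 1), (1, 1, 0), (1, 4, 7), (-4, -3, -13)))
    # t = 2/3, s = 1/3, intersection point (-2/3, 5/3, 1/3)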
EXAMPLE 8.4.2 If $A = (5, 0, 7)$ and $B = (2, -3, 6)$, find the points $P$ on the line $AB$ which satisfy $AP/PB = 3$.

Solution. Use the formulae
$$\mathbf{P} = \mathbf{A} + t\vec{AB}\quad \text{and}\quad \left|\frac{t}{1 - t}\right| = \frac{AP}{PB} = 3.$$
Then
$$\frac{t}{1 - t} = 3\ \text{or}\ -3,$$
so $t = \frac{3}{4}$ or $t = \frac{3}{2}$. The corresponding points are $\left(\frac{11}{4}, -\frac{9}{4}, \frac{25}{4}\right)$ and $\left(\frac{1}{2}, -\frac{9}{2}, \frac{11}{2}\right)$.
DEFINITION 8.4.2 Let $X$ and $Y$ be nonzero vectors. Then $X$ is parallel or proportional to $Y$ if $X = tY$ for some $t \in \mathbb{R}$. We write $X \parallel Y$ if $X$ is parallel to $Y$. If $X = tY$, we say that $X$ and $Y$ have the same or opposite direction, according as $t > 0$ or $t < 0$.

DEFINITION 8.4.3 If $A$ and $B$ are distinct points on a line $\mathcal{L}$, the nonzero vector $\vec{AB}$ is called a direction vector for $\mathcal{L}$.

It is easy to prove that any two direction vectors for a line are parallel.

DEFINITION 8.4.4 Let $\mathcal{L}$ and $\mathcal{M}$ be lines having direction vectors $X$ and $Y$, respectively. Then $\mathcal{L}$ is parallel to $\mathcal{M}$ if $X$ is parallel to $Y$. Clearly any line is parallel to itself.

It is easy to prove that the line through a given point $A$ and parallel to a given line $CD$ has an equation $\mathbf{P} = \mathbf{A} + t\vec{CD}$.

THEOREM 8.4.3 Let $X = a_1\mathbf{i} + b_1\mathbf{j} + c_1\mathbf{k}$ and $Y = a_2\mathbf{i} + b_2\mathbf{j} + c_2\mathbf{k}$ be nonzero vectors. Then $X$ is parallel to $Y$ if and only if
$$a_1b_2 - a_2b_1 = 0,\quad b_1c_2 - b_2c_1 = 0,\quad a_1c_2 - a_2c_1 = 0. \tag{8.8}$$
Proof. The case of equality in the Cauchy-Schwarz inequality (theorem 8.3.1) shows that $X$ and $Y$ are parallel if and only if
$$|X \cdot Y| = \|X\|\,\|Y\|.$$
Squaring gives the equivalent equality
$$(a_1a_2 + b_1b_2 + c_1c_2)^2 = (a_1^2 + b_1^2 + c_1^2)(a_2^2 + b_2^2 + c_2^2),$$
which simplifies to
$$(a_1b_2 - a_2b_1)^2 + (b_1c_2 - b_2c_1)^2 + (a_1c_2 - a_2c_1)^2 = 0,$$
which is equivalent to
$$a_1b_2 - a_2b_1 = 0,\quad b_1c_2 - b_2c_1 = 0,\quad a_1c_2 - a_2c_1 = 0,$$
which is equation 8.8.
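Equation 8.8 yields a division-free parallelism test that needs no special-casing of zero components. A small Python sketch (the function name and tolerance are our choices):

    def are_parallel(X, Y, eps=1e-12):
        # X = (a1, b1, c1) and Y = (a2, b2, c2) are parallel iff the
        # three 2x2 determinants of equation 8.8 all vanish.
        a1, b1, c1 = X
        a2, b2, c2 = Y
        return (abs(a1 * b2 - a2 * b1) < eps and
                abs(b1 * c2 - b2 * c1) < eps and
                abs(a1 * c2 - a2 * c1) < eps)

    print(are_parallel((1, -2, 3), (-2, 4, -6)))  # True, since Y = -2X
    print(are_parallel((1, -2, 3), (0, 1, 0)))    # False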
Equality of geometrical vectors has a fundamental geometrical interpretation:

THEOREM 8.4.4 Suppose $A$, $B$, $C$, $D$ are distinct points such that no three are collinear. Then $\vec{AB} = \vec{CD}$ if and only if $\vec{AB} \parallel \vec{CD}$ and $\vec{AC} \parallel \vec{BD}$. (See Figure 8.1.)

Proof. If $\vec{AB} = \vec{CD}$ then
$$\mathbf{B} - \mathbf{A} = \mathbf{D} - \mathbf{C},\quad \text{so}\quad \mathbf{C} - \mathbf{A} = \mathbf{D} - \mathbf{B},$$
and so $\vec{AC} = \vec{BD}$. Hence $\vec{AB} \parallel \vec{CD}$ and $\vec{AC} \parallel \vec{BD}$.

Conversely, suppose that $\vec{AB} \parallel \vec{CD}$ and $\vec{AC} \parallel \vec{BD}$. Then
$$\vec{AB} = s\vec{CD}\quad \text{and}\quad \vec{AC} = t\vec{BD},$$
or
$$\mathbf{B} - \mathbf{A} = s(\mathbf{D} - \mathbf{C})\quad \text{and}\quad \mathbf{C} - \mathbf{A} = t(\mathbf{D} - \mathbf{B}).$$
We have to prove $s = 1$ or, equivalently, $t = 1$.

Now subtracting the second equation above from the first gives
$$\mathbf{B} - \mathbf{C} = s(\mathbf{D} - \mathbf{C}) - t(\mathbf{D} - \mathbf{B}),$$
so
$$(1 - t)\mathbf{B} = (1 - s)\mathbf{C} + (s - t)\mathbf{D}.$$
If $t \ne 1$, then
$$\mathbf{B} = \frac{1 - s}{1 - t}\mathbf{C} + \frac{s - t}{1 - t}\mathbf{D}$$
and $B$ would lie on the line $CD$. Hence $t = 1$.
8.5 The angle between two vectors

DEFINITION 8.5.1 Let $X$ and $Y$ be nonzero vectors. Then the angle between $X$ and $Y$ is the unique value of $\theta$ defined by
$$\cos\theta = \frac{X \cdot Y}{\|X\|\,\|Y\|},\quad 0 \le \theta \le \pi.$$

REMARK 8.5.1 By Cauchy's inequality, we have
$$-1 \le \frac{X \cdot Y}{\|X\|\,\|Y\|} \le 1,$$
so the above equation does define an angle $\theta$.

In terms of components, if $X = [a_1, b_1, c_1]^t$ and $Y = [a_2, b_2, c_2]^t$, then
$$\cos\theta = \frac{a_1a_2 + b_1b_2 + c_1c_2}{\sqrt{a_1^2 + b_1^2 + c_1^2}\,\sqrt{a_2^2 + b_2^2 + c_2^2}}. \tag{8.9}$$
The next result is the well-known cosine rule for a triangle.

THEOREM 8.5.1 (Cosine rule) If $A$, $B$, $C$ are points with $A \ne B$ and $A \ne C$, then the angle $\theta$ between vectors $\vec{AB}$ and $\vec{AC}$ satisfies
$$\cos\theta = \frac{AB^2 + AC^2 - BC^2}{2\,AB \cdot AC}, \tag{8.10}$$
or equivalently
$$BC^2 = AB^2 + AC^2 - 2\,AB \cdot AC\,\cos\theta.$$
(See Figure 8.10.)

Proof. Let $A = (x_1, y_1, z_1)$, $B = (x_2, y_2, z_2)$, $C = (x_3, y_3, z_3)$. Then
$$\vec{AB} = a_1\mathbf{i} + b_1\mathbf{j} + c_1\mathbf{k},$$
$$\vec{AC} = a_2\mathbf{i} + b_2\mathbf{j} + c_2\mathbf{k},$$
$$\vec{BC} = (a_2 - a_1)\mathbf{i} + (b_2 - b_1)\mathbf{j} + (c_2 - c_1)\mathbf{k},$$
where
$$a_i = x_{i+1} - x_1,\quad b_i = y_{i+1} - y_1,\quad c_i = z_{i+1} - z_1,\quad i = 1, 2.$$
[Figure 8.10: The cosine rule for a triangle.]
Now by equation 8.9,
$$\cos\theta = \frac{a_1a_2 + b_1b_2 + c_1c_2}{AB \cdot AC}.$$
Also
$$AB^2 + AC^2 - BC^2 = (a_1^2 + b_1^2 + c_1^2) + (a_2^2 + b_2^2 + c_2^2) - \left((a_2 - a_1)^2 + (b_2 - b_1)^2 + (c_2 - c_1)^2\right)$$
$$= 2a_1a_2 + 2b_1b_2 + 2c_1c_2.$$
Equation 8.10 now follows, since
$$\vec{AB} \cdot \vec{AC} = a_1a_2 + b_1b_2 + c_1c_2.$$
EXAMPLE 8.5.1 Let $A = (2, 1, 0)$, $B = (3, 2, 0)$, $C = (5, 0, 1)$. Find the angle $\theta$ between vectors $\vec{AB}$ and $\vec{AC}$.

Solution.
$$\cos\theta = \frac{\vec{AB} \cdot \vec{AC}}{AB \cdot AC}.$$
Now $\vec{AB} = \mathbf{i} + \mathbf{j}$ and $\vec{AC} = 3\mathbf{i} - \mathbf{j} + \mathbf{k}$.
[Figure 8.11: Pythagoras' theorem for a right-angled triangle.]
Hence
$$\cos\theta = \frac{1 \cdot 3 + 1 \cdot (-1) + 0 \cdot 1}{\sqrt{1^2 + 1^2 + 0^2}\,\sqrt{3^2 + (-1)^2 + 1^2}} = \frac{2}{\sqrt{2}\sqrt{11}} = \frac{\sqrt{2}}{\sqrt{11}}.$$
Hence $\theta = \cos^{-1}\sqrt{\frac{2}{11}}$.
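Equation 8.9 translates directly into code; the following sketch (our helper, with math.acos returning the unique theta in [0, pi]) recomputes Example 8.5.1:

    import math

    def angle(X, Y):
        # cos(theta) = X . Y / (||X|| ||Y||), 0 <= theta <= pi  (equation 8.9)
        dot = sum(x * y for x, y in zip(X, Y))
        norms = math.sqrt(sum(x * x for x in X)) * math.sqrt(sum(y * y for y in Y))
        return math.acos(dot / norms)

    # Example 8.5.1: AB = i + j, AC = 3i - j + k
    theta = angle((1, 1, 0), (3, -1, 1))
    print(math.degrees(theta))  # about 64.76 degrees; cos(theta) = sqrt(2/11)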
DEFINITION 8.5.2 If $X$ and $Y$ are vectors satisfying $X \cdot Y = 0$, we say $X$ is orthogonal or perpendicular to $Y$.

REMARK 8.5.2 If $A$, $B$, $C$ are points forming a triangle and $\vec{AB}$ is orthogonal to $\vec{AC}$, then the angle $\theta$ between $\vec{AB}$ and $\vec{AC}$ satisfies $\cos\theta = 0$ and hence $\theta = \frac{\pi}{2}$ and the triangle is right-angled at $A$.

Then we have Pythagoras' theorem:
$$BC^2 = AB^2 + AC^2. \tag{8.11}$$
We also note that $BC \ge AB$ and $BC \ge AC$ follow from equation 8.11. (See Figure 8.11.)

EXAMPLE 8.5.2 Let $A = (2, 9, 8)$, $B = (6, 4, -2)$, $C = (7, 15, 7)$. Show that $\vec{AB}$ and $\vec{AC}$ are perpendicular and find the point $D$ such that $ABDC$ forms a rectangle.
[Figure 8.12: Distance from a point to a line.]
Solution.
$$\vec{AB} \cdot \vec{AC} = (4\mathbf{i} - 5\mathbf{j} - 10\mathbf{k}) \cdot (5\mathbf{i} + 6\mathbf{j} - \mathbf{k}) = 20 - 30 + 10 = 0.$$
Hence $\vec{AB}$ and $\vec{AC}$ are perpendicular. Also, the required fourth point $D$ clearly has to satisfy the equation
$$\vec{BD} = \vec{AC},\quad \text{or equivalently}\quad \mathbf{D} - \mathbf{B} = \vec{AC}.$$
Hence
$$\mathbf{D} = \mathbf{B} + \vec{AC} = (6\mathbf{i} + 4\mathbf{j} - 2\mathbf{k}) + (5\mathbf{i} + 6\mathbf{j} - \mathbf{k}) = 11\mathbf{i} + 10\mathbf{j} - 3\mathbf{k},$$
so $D = (11, 10, -3)$.
THEOREM 8.5.2 (Distance from a point to a line) If $C$ is a point and $\mathcal{L}$ is the line through $A$ and $B$, then there is exactly one point $P$ on $\mathcal{L}$ such that $\vec{CP}$ is perpendicular to $\vec{AB}$, namely
$$\mathbf{P} = \mathbf{A} + t\vec{AB},\quad t = \frac{\vec{AC} \cdot \vec{AB}}{AB^2}. \tag{8.12}$$
Moreover if $Q$ is any point on $\mathcal{L}$, then $CQ \ge CP$ and hence $P$ is the point on $\mathcal{L}$ closest to $C$.

The shortest distance $CP$ is given by
$$CP = \sqrt{CA^2 - \frac{(\vec{AC} \cdot \vec{AB})^2}{AB^2}}. \tag{8.13}$$
(See Figure 8.12.)
Proof. Let $\mathbf{P} = \mathbf{A} + t\vec{AB}$ and assume that $\vec{CP}$ is perpendicular to $\vec{AB}$. Then
$$\vec{CP} \cdot \vec{AB} = 0$$
$$(\mathbf{P} - \mathbf{C}) \cdot \vec{AB} = 0$$
$$(\mathbf{A} + t\vec{AB} - \mathbf{C}) \cdot \vec{AB} = 0$$
$$(\vec{CA} + t\vec{AB}) \cdot \vec{AB} = 0$$
$$\vec{CA} \cdot \vec{AB} + t(\vec{AB} \cdot \vec{AB}) = 0$$
$$-\vec{AC} \cdot \vec{AB} + t\,AB^2 = 0,$$
so equation 8.12 follows.

The inequality $CQ \ge CP$, where $Q$ is any point on $\mathcal{L}$, is a consequence of Pythagoras' theorem.

Finally, as $\vec{CP}$ and $\vec{PA}$ are perpendicular, Pythagoras' theorem gives
$$CP^2 = CA^2 - PA^2 = CA^2 - \|t\vec{AB}\|^2 = CA^2 - \left(\frac{\vec{AC} \cdot \vec{AB}}{AB^2}\right)^2 AB^2 = CA^2 - \frac{(\vec{AC} \cdot \vec{AB})^2}{AB^2},$$
as required.
EXAMPLE 8.5.3 The closest point on the line through $A = (1, 2, 1)$ and $B = (2, -1, 3)$ to the origin is $P = \left(\frac{17}{14}, \frac{19}{14}, \frac{10}{7}\right)$ and the corresponding shortest distance equals $\frac{5\sqrt{42}}{14}$.
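Equations 8.12 and 8.13 are easy to verify numerically. A minimal numpy sketch (our helper) reproduces Example 8.5.3:

    import numpy as np

    def closest_point_on_line(A, B, C):
        # Foot of the perpendicular from C to the line AB (equation 8.12)
        A, B, C = map(np.asarray, (A, B, C), [float] * 3)
        AB, AC = B - A, C - A
        t = (AC @ AB) / (AB @ AB)        # t = (AC . AB) / AB^2
        P = A + t * AB
        return P, np.linalg.norm(C - P)  # P and the shortest distance CP

    P, d = closest_point_on_line((1, 2, 1), (2, -1, 3), (0, 0, 0))
    print(P, d)  # [17/14, 19/14, 10/7] and 5*sqrt(42)/14, about 2.3146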
Another application of theorem 8.5.2 is to the projection of a line segment
on another line:
[Figure 8.13: Projecting the segment $C_1C_2$ onto the line $AB$.]
THEOREM 8.5.3 (The projection of a line segment onto a line) Let $C_1$, $C_2$ be points and $P_1$, $P_2$ be the feet of the perpendiculars from $C_1$ and $C_2$ to the line $AB$. Then
$$P_1P_2 = |\vec{C_1C_2} \cdot \mathbf{n}|,$$
where
$$\mathbf{n} = \frac{1}{AB}\vec{AB}.$$
Also
$$P_1P_2 \le C_1C_2. \tag{8.14}$$
(See Figure 8.13.)

Proof. Using equation 8.12, we have
$$\mathbf{P_1} = \mathbf{A} + t_1\vec{AB},\quad \mathbf{P_2} = \mathbf{A} + t_2\vec{AB},$$
where
$$t_1 = \frac{\vec{AC_1} \cdot \vec{AB}}{AB^2},\quad t_2 = \frac{\vec{AC_2} \cdot \vec{AB}}{AB^2}.$$
Hence
$$\vec{P_1P_2} = (\mathbf{A} + t_2\vec{AB}) - (\mathbf{A} + t_1\vec{AB}) = (t_2 - t_1)\vec{AB},$$
so
$$P_1P_2 = \|\vec{P_1P_2}\| = |t_2 - t_1|\,AB = \left|\frac{\vec{AC_2} \cdot \vec{AB} - \vec{AC_1} \cdot \vec{AB}}{AB^2}\right| AB = \frac{|\vec{C_1C_2} \cdot \vec{AB}|}{AB} = |\vec{C_1C_2} \cdot \mathbf{n}|,$$
where $\mathbf{n}$ is the unit vector
$$\mathbf{n} = \frac{1}{AB}\vec{AB}.$$
Inequality 8.14 then follows from the Cauchy-Schwarz inequality 8.3.
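In coordinates, the projection length of theorem 8.5.3 is simply an absolute dot product with a unit direction vector; a minimal sketch (our helper, applied here to the data of problem 10 at the end of the chapter):

    import numpy as np

    def projection_length(C1, C2, A, B):
        # |C1C2 . n|, where n = AB/||AB|| is a unit vector along the line AB
        C1, C2, A, B = map(np.asarray, (C1, C2, A, B), [float] * 4)
        n = (B - A) / np.linalg.norm(B - A)
        return abs((C2 - C1) @ n)

    # Projecting the segment from (1, 2, 3) to (5, -2, 6) onto the line
    # through (7, 1, 9) and (-1, 5, 8) gives 51/9 = 17/3:
    print(projection_length((1, 2, 3), (5, -2, 6), (7, 1, 9), (-1, 5, 8)))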
DEFINITION 8.5.3 Two non-intersecting lines are called skew if they have non-parallel direction vectors.

Theorem 8.5.3 has an application to the problem of showing that two skew lines have a shortest distance between them. (The reader is referred to problem 16 at the end of the chapter.)

Before we turn to the study of planes, it is convenient to introduce the cross-product of two vectors.
8.6 The cross-product of two vectors

DEFINITION 8.6.1 Let $X = a_1\mathbf{i} + b_1\mathbf{j} + c_1\mathbf{k}$ and $Y = a_2\mathbf{i} + b_2\mathbf{j} + c_2\mathbf{k}$. Then $X \times Y$, the cross-product of $X$ and $Y$, is defined by
$$X \times Y = a\mathbf{i} + b\mathbf{j} + c\mathbf{k},$$
where
$$a = \begin{vmatrix} b_1 & c_1 \\ b_2 & c_2 \end{vmatrix},\quad b = -\begin{vmatrix} a_1 & c_1 \\ a_2 & c_2 \end{vmatrix},\quad c = \begin{vmatrix} a_1 & b_1 \\ a_2 & b_2 \end{vmatrix}.$$
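Written out, the three components are $b_1c_2 - b_2c_1$, $-(a_1c_2 - a_2c_1)$ and $a_1b_2 - a_2b_1$, which is exactly what the following sketch computes (numpy users can call np.cross instead):

    def cross(X, Y):
        # components are the 2x2 determinants of Definition 8.6.1
        a1, b1, c1 = X
        a2, b2, c2 = Y
        return (b1 * c2 - b2 * c1,
                -(a1 * c2 - a2 * c1),
                a1 * b2 - a2 * b1)

    print(cross((1, 0, 0), (0, 1, 0)))  # (0, 0, 1), i.e. i x j = k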
The vector cross-product has the following properties which follow from properties of $2 \times 2$ and $3 \times 3$ determinants:

(i) $\mathbf{i} \times \mathbf{j} = \mathbf{k}$, $\mathbf{j} \times \mathbf{k} = \mathbf{i}$, $\mathbf{k} \times \mathbf{i} = \mathbf{j}$;

(ii) $X \times X = 0$;

(iii) $Y \times X = -X \times Y$;

(iv) $X \times (Y + Z) = X \times Y + X \times Z$;

(v) $(tX) \times Y = t(X \times Y)$;

(vi) (Scalar triple product formula) if $Z = a_3\mathbf{i} + b_3\mathbf{j} + c_3\mathbf{k}$, then
$$X \cdot (Y \times Z) = \begin{vmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ a_3 & b_3 & c_3 \end{vmatrix} = (X \times Y) \cdot Z;$$

(vii) $X \cdot (X \times Y) = 0 = Y \cdot (X \times Y)$;

(viii) $\|X \times Y\| = \sqrt{\|X\|^2\|Y\|^2 - (X \cdot Y)^2}$;

(ix) if $X$ and $Y$ are nonzero vectors and $\theta$ is the angle between $X$ and $Y$, then
$$\|X \times Y\| = \|X\|\,\|Y\|\sin\theta.$$
(See Figure 8.14.)

From theorem 8.4.3 and the definition of cross-product, it follows that nonzero vectors $X$ and $Y$ are parallel if and only if $X \times Y = 0$; hence by (vii), the cross-product of two non-parallel, nonzero vectors $X$ and $Y$ is a nonzero vector perpendicular to both $X$ and $Y$.
LEMMA 8.6.1 Let $X$ and $Y$ be nonzero, non-parallel vectors.

(i) $Z$ is a linear combination of $X$ and $Y$, if and only if $Z$ is perpendicular to $X \times Y$;

(ii) $Z$ is perpendicular to $X$ and $Y$, if and only if $Z$ is parallel to $X \times Y$.

Proof. Let $X$ and $Y$ be nonzero, non-parallel vectors. Then
$$X \times Y \ne 0.$$
Then if $X \times Y = a\mathbf{i} + b\mathbf{j} + c\mathbf{k}$, we have
$$\det [X \times Y|X|Y]^t = \begin{vmatrix} a & b & c \\ a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \end{vmatrix} = (X \times Y) \cdot (X \times Y) > 0.$$
[Figure 8.14: The vector cross-product.]
Hence the matrix $[X \times Y|X|Y]$ is non-singular. Consequently the linear system
$$r(X \times Y) + sX + tY = Z \tag{8.15}$$
has a unique solution $r$, $s$, $t$.

(i) Suppose $Z = sX + tY$. Then
$$Z \cdot (X \times Y) = sX \cdot (X \times Y) + tY \cdot (X \times Y) = s0 + t0 = 0.$$
Conversely, suppose that
$$Z \cdot (X \times Y) = 0. \tag{8.16}$$
Now from equation 8.15, $r$, $s$, $t$ exist satisfying
$$Z = r(X \times Y) + sX + tY.$$
Then equation 8.16 gives
$$0 = (r(X \times Y) + sX + tY) \cdot (X \times Y) = r\|X \times Y\|^2 + sX \cdot (X \times Y) + tY \cdot (X \times Y) = r\|X \times Y\|^2.$$
Hence $r = 0$ and $Z = sX + tY$, as required.

(ii) Suppose $Z = r(X \times Y)$. Then clearly $Z$ is perpendicular to $X$ and $Y$.

Conversely suppose that $Z$ is perpendicular to $X$ and $Y$. Now from equation 8.15, $r$, $s$, $t$ exist satisfying
$$Z = r(X \times Y) + sX + tY.$$
Then
$$sX \cdot X + tX \cdot Y = X \cdot Z = 0,$$
$$sY \cdot X + tY \cdot Y = Y \cdot Z = 0,$$
from which it follows that
$$(sX + tY) \cdot (sX + tY) = 0.$$
Hence $sX + tY = 0$ and so $s = 0$, $t = 0$. Consequently $Z = r(X \times Y)$, as required.
The cross-product gives a compact formula for the distance from a point to a line, as well as the area of a triangle.

THEOREM 8.6.1 (Area of a triangle) If $A$, $B$, $C$ are distinct non-collinear points, then

(i) the distance $d$ from $C$ to the line $AB$ is given by
$$d = \frac{\|\vec{AB} \times \vec{AC}\|}{AB}; \tag{8.17}$$

(ii) the area of the triangle $ABC$ equals
$$\frac{\|\vec{AB} \times \vec{AC}\|}{2} = \frac{\|\mathbf{A} \times \mathbf{B} + \mathbf{B} \times \mathbf{C} + \mathbf{C} \times \mathbf{A}\|}{2}. \tag{8.18}$$
Proof. The area $\Delta$ of triangle $ABC$ is given by
$$\Delta = \frac{AB \cdot CP}{2},$$
where $P$ is the foot of the perpendicular from $C$ to the line $AB$. Now by formula 8.13, we have
$$CP = \sqrt{CA^2 - \frac{(\vec{AC} \cdot \vec{AB})^2}{AB^2}} = \frac{\|\vec{AB} \times \vec{AC}\|}{AB},$$
which, by property (viii) of the cross-product, gives formula 8.17. The second formula of equation 8.18 follows from the equations
$$\vec{AB} \times \vec{AC} = (\mathbf{B} - \mathbf{A}) \times (\mathbf{C} - \mathbf{A})$$
$$= (\mathbf{B} - \mathbf{A}) \times \mathbf{C} - (\mathbf{B} - \mathbf{A}) \times \mathbf{A}$$
$$= (\mathbf{B} \times \mathbf{C} - \mathbf{A} \times \mathbf{C}) - (\mathbf{B} \times \mathbf{A} - \mathbf{A} \times \mathbf{A})$$
$$= \mathbf{B} \times \mathbf{C} - \mathbf{A} \times \mathbf{C} - \mathbf{B} \times \mathbf{A}$$
$$= \mathbf{B} \times \mathbf{C} + \mathbf{C} \times \mathbf{A} + \mathbf{A} \times \mathbf{B},$$
as required.
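Formula 8.18 makes the area of a triangle a one-liner. A sketch (our helper, checked against problem 14 at the end of the chapter, whose triangle has area sqrt(333)/2):

    import numpy as np

    def triangle_area(A, B, C):
        # area = ||AB x AC|| / 2   (equation 8.18)
        A, B, C = map(np.asarray, (A, B, C), [float] * 3)
        return np.linalg.norm(np.cross(B - A, C - A)) / 2

    print(triangle_area((-3, 0, 2), (6, 1, 4), (-5, 1, 0)))  # about 9.1241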
8.7 Planes

DEFINITION 8.7.1 A plane is a set of points $P$ satisfying an equation of the form
$$\mathbf{P} = \mathbf{P_0} + sX + tY,\quad s, t \in \mathbb{R}, \tag{8.19}$$
where $X$ and $Y$ are nonzero, non-parallel vectors.

For example, the $xy$-plane consists of the points $P = (x, y, 0)$ and corresponds to the plane equation
$$\mathbf{P} = x\mathbf{i} + y\mathbf{j} = \mathbf{O} + x\mathbf{i} + y\mathbf{j}.$$
In terms of coordinates, equation 8.19 takes the form
$$x = x_0 + sa_1 + ta_2,$$
$$y = y_0 + sb_1 + tb_2,$$
$$z = z_0 + sc_1 + tc_2,$$
where $P_0 = (x_0, y_0, z_0)$ and $(a_1, b_1, c_1)$ and $(a_2, b_2, c_2)$ are nonzero and non-proportional.
THEOREM 8.7.1 Let $A$, $B$, $C$ be three non-collinear points. Then there is one and only one plane through these points, namely the plane given by the equation
$$\mathbf{P} = \mathbf{A} + s\vec{AB} + t\vec{AC}, \tag{8.20}$$
or equivalently
$$\vec{AP} = s\vec{AB} + t\vec{AC}. \tag{8.21}$$
(See Figure 8.15.)
[Figure 8.15: Vector equation for the plane $ABC$.]
Proof. First note that equation 8.20 is indeed the equation of a plane through $A$, $B$ and $C$, as $\vec{AB}$ and $\vec{AC}$ are nonzero and non-parallel and $(s, t) = (0, 0)$, $(1, 0)$ and $(0, 1)$ give $P = A$, $B$ and $C$, respectively. Call this plane $\mathcal{P}$.

Conversely, suppose $\mathbf{P} = \mathbf{P_0} + sX + tY$ is the equation of a plane $\mathcal{Q}$ passing through $A$, $B$, $C$. Then $\mathbf{A} = \mathbf{P_0} + s_0X + t_0Y$, so the equation for $\mathcal{Q}$ may be written
$$\mathbf{P} = \mathbf{A} + (s - s_0)X + (t - t_0)Y = \mathbf{A} + s'X + t'Y;$$
so in effect we can take $P_0 = A$ in the equation of $\mathcal{Q}$. Then the fact that $B$ and $C$ lie on $\mathcal{Q}$ gives equations
$$\mathbf{B} = \mathbf{A} + s_1X + t_1Y,\quad \mathbf{C} = \mathbf{A} + s_2X + t_2Y,$$
or
$$\vec{AB} = s_1X + t_1Y,\quad \vec{AC} = s_2X + t_2Y. \tag{8.22}$$
Then equations 8.22 and equation 8.20 show that
$$\mathcal{P} \subseteq \mathcal{Q}.$$
Conversely, it is straightforward to show that because $\vec{AB}$ and $\vec{AC}$ are not parallel, we have
$$s_1t_2 - s_2t_1 \ne 0.$$
[Figure 8.16: Normal equation of the plane $ABC$.]
Hence equations 8.22 can be solved for $X$ and $Y$ as linear combinations of $\vec{AB}$ and $\vec{AC}$, allowing us to deduce that
$$\mathcal{Q} \subseteq \mathcal{P}.$$
Hence
$$\mathcal{Q} = \mathcal{P}.$$
THEOREM 8.7.2 (Normal equation for a plane) Let
$$A = (x_1, y_1, z_1),\quad B = (x_2, y_2, z_2),\quad C = (x_3, y_3, z_3)$$
be three non-collinear points. Then the plane through $A$, $B$, $C$ is given by
$$\vec{AP} \cdot (\vec{AB} \times \vec{AC}) = 0, \tag{8.23}$$
or equivalently,
$$\begin{vmatrix} x - x_1 & y - y_1 & z - z_1 \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} = 0, \tag{8.24}$$
where $P = (x, y, z)$. (See Figure 8.16.)
[Figure 8.17: The plane $ax + by + cz = d$.]
REMARK 8.7.1 Equation 8.24 can be written in more symmetrical form as
$$\begin{vmatrix} x & y & z & 1 \\ x_1 & y_1 & z_1 & 1 \\ x_2 & y_2 & z_2 & 1 \\ x_3 & y_3 & z_3 & 1 \end{vmatrix} = 0. \tag{8.25}$$
Proof. Let $\mathcal{P}$ be the plane through $A$, $B$, $C$. Then by equation 8.21, we have $P \in \mathcal{P}$ if and only if $\vec{AP}$ is a linear combination of $\vec{AB}$ and $\vec{AC}$, and so by lemma 8.6.1(i), using the fact that $\vec{AB} \times \vec{AC} \ne 0$ here, if and only if $\vec{AP}$ is perpendicular to $\vec{AB} \times \vec{AC}$. This gives equation 8.23.

Equation 8.24 is the scalar triple product version of equation 8.23, taking into account the equations
$$\vec{AP} = (x - x_1)\mathbf{i} + (y - y_1)\mathbf{j} + (z - z_1)\mathbf{k},$$
$$\vec{AB} = (x_2 - x_1)\mathbf{i} + (y_2 - y_1)\mathbf{j} + (z_2 - z_1)\mathbf{k},$$
$$\vec{AC} = (x_3 - x_1)\mathbf{i} + (y_3 - y_1)\mathbf{j} + (z_3 - z_1)\mathbf{k}.$$
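Since $\vec{AB} \times \vec{AC}$ is a normal to the plane, equation 8.23 also gives a quick computational recipe: the coefficients $(a, b, c)$ are the components of $\vec{AB} \times \vec{AC}$ and $d = (a, b, c) \cdot \mathbf{A}$. A minimal numpy sketch (the function name is ours):

    import numpy as np

    def plane_through(A, B, C):
        # Return (a, b, c, d) with ax + by + cz = d the plane through A, B, C
        A, B, C = map(np.asarray, (A, B, C), [float] * 3)
        n = np.cross(B - A, C - A)        # normal vector AB x AC
        return (n[0], n[1], n[2], n @ A)  # d = n . A, as A lies on the plane

    # The points of problem 15 below give the plane 2x - 7y + 6z = 21:
    print(plane_through((2, 1, 4), (1, -1, 2), (4, -1, 1)))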
REMARK 8.7.2 Equation 8.24 gives rise to a linear equation in $x$, $y$ and $z$:
$$ax + by + cz = d,$$
where $a\mathbf{i} + b\mathbf{j} + c\mathbf{k} \ne 0$. For
$$\begin{vmatrix} x - x_1 & y - y_1 & z - z_1 \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} = \begin{vmatrix} x & y & z \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} - \begin{vmatrix} x_1 & y_1 & z_1 \\ x_2 - x_1 & y_2 - y_1 & z_2 - z_1 \\ x_3 - x_1 & y_3 - y_1 & z_3 - z_1 \end{vmatrix} \tag{8.26}$$
and expanding the first determinant on the right-hand side of equation 8.26 along row 1 gives an expression
$$ax + by + cz,$$
where
$$a = \begin{vmatrix} y_2 - y_1 & z_2 - z_1 \\ y_3 - y_1 & z_3 - z_1 \end{vmatrix},\quad b = -\begin{vmatrix} x_2 - x_1 & z_2 - z_1 \\ x_3 - x_1 & z_3 - z_1 \end{vmatrix},\quad c = \begin{vmatrix} x_2 - x_1 & y_2 - y_1 \\ x_3 - x_1 & y_3 - y_1 \end{vmatrix}.$$
But $a$, $b$, $c$ are the components of $\vec{AB} \times \vec{AC}$, which in turn is nonzero, as $A$, $B$, $C$ are non-collinear here.
Conversely if $a\mathbf{i} + b\mathbf{j} + c\mathbf{k} \ne 0$, the equation
$$ax + by + cz = d$$
does indeed represent a plane. For if say $a \ne 0$, the equation can be solved for $x$ in terms of $y$ and $z$:
$$\begin{bmatrix} x \\ y \\ z \end{bmatrix} = \begin{bmatrix} \frac{d}{a} \\ 0 \\ 0 \end{bmatrix} + y\begin{bmatrix} -\frac{b}{a} \\ 1 \\ 0 \end{bmatrix} + z\begin{bmatrix} -\frac{c}{a} \\ 0 \\ 1 \end{bmatrix},$$
which gives the plane
$$\mathbf{P} = \mathbf{P_0} + yX + zY,$$
where $P_0 = \left(\frac{d}{a}, 0, 0\right)$ and $X = -\frac{b}{a}\mathbf{i} + \mathbf{j}$ and $Y = -\frac{c}{a}\mathbf{i} + \mathbf{k}$ are evidently non-parallel vectors.

REMARK 8.7.3 The plane equation $ax + by + cz = d$ is called the normal form, as it is easy to prove that if $P_1$ and $P_2$ are two points in the plane, then $a\mathbf{i} + b\mathbf{j} + c\mathbf{k}$ is perpendicular to $\vec{P_1P_2}$. Any nonzero vector with this property is called a normal to the plane. (See Figure 8.17.)

By lemma 8.6.1(ii), it follows that every vector $X$ normal to a plane through three non-collinear points $A$, $B$, $C$ is parallel to $\vec{AB} \times \vec{AC}$, since $X$ is perpendicular to $\vec{AB}$ and $\vec{AC}$.
EXAMPLE 8.7.1 Show that the planes
$$x + y - 2z = 1\quad \text{and}\quad x + 3y - z = 4$$
intersect in a line and find the distance from the point $C = (1, 0, 1)$ to this line.

Solution. Solving the two equations simultaneously gives
$$x = -\frac{1}{2} + \frac{5}{2}z,\quad y = \frac{3}{2} - \frac{1}{2}z, \tag{8.27}$$
where $z$ is arbitrary. Hence
$$x\mathbf{i} + y\mathbf{j} + z\mathbf{k} = -\frac{1}{2}\mathbf{i} + \frac{3}{2}\mathbf{j} + z\left(\frac{5}{2}\mathbf{i} - \frac{1}{2}\mathbf{j} + \mathbf{k}\right),$$
which is the equation of a line $\mathcal{L}$ through $A = \left(-\frac{1}{2}, \frac{3}{2}, 0\right)$ and having direction vector $\frac{5}{2}\mathbf{i} - \frac{1}{2}\mathbf{j} + \mathbf{k}$.
We can now proceed in one of three ways to find the closest point on $\mathcal{L}$ to $C$.

One way is to use equation 8.17 with $B$ defined by
$$\vec{AB} = \frac{5}{2}\mathbf{i} - \frac{1}{2}\mathbf{j} + \mathbf{k}.$$
Another method minimizes the distance $CP$, where $P$ ranges over $\mathcal{L}$.

A third way is to find an equation for the plane through $C$, having $\frac{5}{2}\mathbf{i} - \frac{1}{2}\mathbf{j} + \mathbf{k}$ as a normal. Such a plane has equation
$$5x - y + 2z = d,$$
where $d$ is found by substituting the coordinates of $C$ in the last equation:
$$d = 5 \cdot 1 - 0 + 2 \cdot 1 = 7.$$
We now find the point $P$ where the plane intersects the line $\mathcal{L}$. Then $\vec{CP}$ will be perpendicular to $\mathcal{L}$ and $CP$ will be the required shortest distance from $C$ to $\mathcal{L}$. We find using equations 8.27 that
$$5\left(-\frac{1}{2} + \frac{5}{2}z\right) - \left(\frac{3}{2} - \frac{1}{2}z\right) + 2z = 7,$$
[Figure 8.18: Line of intersection of two planes.]
so $z = \frac{11}{15}$. Hence $P = \left(\frac{4}{3}, \frac{17}{15}, \frac{11}{15}\right)$, and the required shortest distance is $CP = \frac{\sqrt{330}}{15}$.
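The third method above is also the easiest to code: take the cross-product of the two normals for the direction, find one point on the line (here by setting z = 0), and apply equation 8.12. A rough numpy sketch (our helper; it assumes, as in this example, that the z = 0 slice meets the line):

    import numpy as np

    def closest_on_plane_intersection(n1, d1, n2, d2, C):
        # Closest point to C on the line where n1 . P = d1 meets n2 . P = d2
        n1, n2, C = map(np.asarray, (n1, n2, C), [float] * 3)
        X = np.cross(n1, n2)                 # direction of the line
        xy = np.linalg.solve(np.array([n1[:2], n2[:2]]), np.array([d1, d2]))
        A = np.array([xy[0], xy[1], 0.0])    # a point on the line, with z = 0
        t = ((C - A) @ X) / (X @ X)          # equation 8.12
        P = A + t * X
        return P, np.linalg.norm(C - P)

    # Example 8.7.1: P = (4/3, 17/15, 11/15), distance sqrt(330)/15 ~ 1.2111
    print(closest_on_plane_intersection((1, 1, -2), 1, (1, 3, -1), 4, (1, 0, 1)))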
It is clear that through a given line and a point not on that line, there passes exactly one plane. If the line is given as the intersection of two planes, each in normal form, there is a simple way of finding an equation for this plane. More explicitly we have the following result:

THEOREM 8.7.3 Suppose the planes
$$a_1x + b_1y + c_1z = d_1 \tag{8.28}$$
$$a_2x + b_2y + c_2z = d_2 \tag{8.29}$$
have non-parallel normals. Then the planes intersect in a line $\mathcal{L}$.

Moreover the equation
$$\lambda(a_1x + b_1y + c_1z - d_1) + \mu(a_2x + b_2y + c_2z - d_2) = 0, \tag{8.30}$$
where $\lambda$ and $\mu$ are not both zero, gives all planes through $\mathcal{L}$.

(See Figure 8.18.)
Proof. Assume that the normals $a_1\mathbf{i} + b_1\mathbf{j} + c_1\mathbf{k}$ and $a_2\mathbf{i} + b_2\mathbf{j} + c_2\mathbf{k}$ are non-parallel. Then by theorem 8.4.3, not all of
$$\Delta_1 = a_1b_2 - a_2b_1,\quad \Delta_2 = b_1c_2 - b_2c_1,\quad \Delta_3 = a_1c_2 - a_2c_1 \tag{8.31}$$
are zero. If say $\Delta_1 \ne 0$, we can solve equations 8.28 and 8.29 for $x$ and $y$ in terms of $z$, as we did in the previous example, to show that the intersection forms a line $\mathcal{L}$.

We next have to check that if $\lambda$ and $\mu$ are not both zero, then equation 8.30 represents a plane. (Whatever set of points equation 8.30 represents, this set certainly contains $\mathcal{L}$.) Rewriting equation 8.30 gives
$$(\lambda a_1 + \mu a_2)x + (\lambda b_1 + \mu b_2)y + (\lambda c_1 + \mu c_2)z - (\lambda d_1 + \mu d_2) = 0.$$
Then we clearly cannot have all the coefficients
$$\lambda a_1 + \mu a_2,\quad \lambda b_1 + \mu b_2,\quad \lambda c_1 + \mu c_2$$
zero, as otherwise the vectors $a_1\mathbf{i} + b_1\mathbf{j} + c_1\mathbf{k}$ and $a_2\mathbf{i} + b_2\mathbf{j} + c_2\mathbf{k}$ would be parallel.
Finally, if $\mathcal{P}$ is a plane containing $\mathcal{L}$, let $P_0 = (x_0, y_0, z_0)$ be a point of $\mathcal{P}$ not on $\mathcal{L}$. Then if we define $\lambda$ and $\mu$ by
$$\lambda = -(a_2x_0 + b_2y_0 + c_2z_0 - d_2),\quad \mu = a_1x_0 + b_1y_0 + c_1z_0 - d_1,$$
then at least one of $\lambda$ and $\mu$ is nonzero. Then the coordinates of $P_0$ satisfy equation 8.30, which therefore represents a plane passing through $\mathcal{L}$ and $P_0$ and hence identical with $\mathcal{P}$.
EXAMPLE 8.7.2 Find an equation for the plane through $P_0 = (1, 0, 1)$ and passing through the line of intersection of the planes
$$x + y - 2z = 1\quad \text{and}\quad x + 3y - z = 4.$$
Solution. The required plane has the form
$$\lambda(x + y - 2z - 1) + \mu(x + 3y - z - 4) = 0,$$
where not both of $\lambda$ and $\mu$ are zero. Substituting the coordinates of $P_0$ into this equation gives
$$-2\lambda + \mu(-4) = 0,\quad \lambda = -2\mu.$$
So the required equation is
$$-2\mu(x + y - 2z - 1) + \mu(x + 3y - z - 4) = 0,$$
or
$$-x + y + 3z - 2 = 0.$$
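The lambda, mu recipe from the proof of theorem 8.7.3 can be packaged directly; the sketch below (our helper, with planes encoded as coefficient 4-tuples) reproduces Example 8.7.2 up to a scalar multiple:

    import numpy as np

    def plane_through_point_and_line(p1, p2, P0):
        # p1 = (a1, b1, c1, d1), p2 = (a2, b2, c2, d2) encode the planes
        # a*x + b*y + c*z = d; returns coefficients of the plane through
        # P0 and the line of intersection of p1 and p2.
        p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
        x0, y0, z0 = P0
        e1 = p1[0] * x0 + p1[1] * y0 + p1[2] * z0 - p1[3]
        e2 = p2[0] * x0 + p2[1] * y0 + p2[2] * z0 - p2[3]
        lam, mu = -e2, e1          # then lam*e1 + mu*e2 = 0 at P0
        return lam * p1 + mu * p2

    # Example 8.7.2: returns (2, -2, -6, -4), i.e. 2x - 2y - 6z = -4,
    # a scalar multiple of -x + y + 3z - 2 = 0.
    print(plane_through_point_and_line((1, 1, -2, 1), (1, 3, -1, 4), (1, 0, 1)))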
Our nal result is a formula for the distance from a point to a plane.
[Figure 8.19: Distance from a point $P_0$ to the plane $ax + by + cz = d$.]
THEOREM 8.7.4 (Distance from a point to a plane) Let $P_0 = (x_0, y_0, z_0)$ and $\mathcal{P}$ be the plane
$$ax + by + cz = d. \tag{8.32}$$
Then there is a unique point $P$ on $\mathcal{P}$ such that $\vec{P_0P}$ is normal to $\mathcal{P}$. Moreover
$$P_0P = \frac{|ax_0 + by_0 + cz_0 - d|}{\sqrt{a^2 + b^2 + c^2}}.$$
(See Figure 8.19.)
Proof. The line through $P_0$ normal to $\mathcal{P}$ is given by
$$\mathbf{P} = \mathbf{P_0} + t(a\mathbf{i} + b\mathbf{j} + c\mathbf{k}),$$
or in terms of coordinates
$$x = x_0 + at,\quad y = y_0 + bt,\quad z = z_0 + ct.$$
Substituting these formulae in equation 8.32 gives
$$a(x_0 + at) + b(y_0 + bt) + c(z_0 + ct) = d,$$
$$t(a^2 + b^2 + c^2) = -(ax_0 + by_0 + cz_0 - d),$$
so
$$t = -\left(\frac{ax_0 + by_0 + cz_0 - d}{a^2 + b^2 + c^2}\right).$$
Then
$$P_0P = \|\vec{P_0P}\| = \|t(a\mathbf{i} + b\mathbf{j} + c\mathbf{k})\| = |t|\sqrt{a^2 + b^2 + c^2} = \frac{|ax_0 + by_0 + cz_0 - d|}{a^2 + b^2 + c^2}\sqrt{a^2 + b^2 + c^2} = \frac{|ax_0 + by_0 + cz_0 - d|}{\sqrt{a^2 + b^2 + c^2}}.$$
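Theorem 8.7.4 is a one-line computation; a minimal sketch (our helper, checked against problem 13 at the end of the chapter):

    import math

    def point_plane_distance(P0, a, b, c, d):
        # |a*x0 + b*y0 + c*z0 - d| / sqrt(a^2 + b^2 + c^2)   (theorem 8.7.4)
        x0, y0, z0 = P0
        return abs(a * x0 + b * y0 + c * z0 - d) / math.sqrt(a*a + b*b + c*c)

    # Distance from (6, -1, 11) to the plane 3x + 4y + 5z = 10 is 59/sqrt(50):
    print(point_plane_distance((6, -1, 11), 3, 4, 5, 10))  # about 8.3439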
Other interesting geometrical facts about lines and planes are left to the
problems at the end of this chapter.
8.8 PROBLEMS
1. Find the point where the line through $A = (3, -2, 7)$ and $B = (13, 3, -8)$ meets the $xz$-plane.

[Ans: $(7, 0, 1)$.]
2. Let $A$, $B$, $C$ be non-collinear points. If $E$ is the midpoint of the segment $BC$ and $F$ is the point on the segment $EA$ satisfying $\frac{AF}{FE} = 2$, prove that
$$\mathbf{F} = \frac{1}{3}(\mathbf{A} + \mathbf{B} + \mathbf{C}).$$
($F$ is called the centroid of triangle $ABC$.)
3. Prove that the points $(2, 1, 4)$, $(1, -1, 2)$, $(3, 3, 6)$ are collinear.
4. If $A = (2, 3, -1)$ and $B = (3, 7, 4)$, find the points $P$ on the line $AB$ satisfying $AP/PB = 2/5$.

[Ans: $\left(\frac{16}{7}, \frac{29}{7}, \frac{3}{7}\right)$ and $\left(\frac{4}{3}, \frac{1}{3}, -\frac{13}{3}\right)$.]
5. Let $\mathcal{M}$ be the line through $A = (1, 2, 3)$ parallel to the line joining $B = (-2, 2, 0)$ and $C = (4, -1, 7)$. Also $\mathcal{N}$ is the line joining $E = (1, -1, 8)$ and $F = (10, -1, 11)$. Prove that $\mathcal{M}$ and $\mathcal{N}$ intersect and find the point of intersection.

[Ans: $(7, -1, 10)$.]
6. Prove that the triangle formed by the points $(-3, 5, 6)$, $(-2, 7, 9)$ and $(2, 1, 7)$ is a $30^\circ$, $60^\circ$, $90^\circ$ triangle.
7. Find the point on the line $AB$ closest to the origin, where $A = (-2, 1, 3)$ and $B = (1, 2, 4)$. Also find this shortest distance.

[Ans: $\left(-\frac{16}{11}, \frac{13}{11}, \frac{35}{11}\right)$ and $\frac{5\sqrt{66}}{11}$.]
8. A line $\mathcal{N}$ is determined by the two planes
$$x + y - 2z = 1\quad \text{and}\quad x + 3y - z = 4.$$
Find the point $P$ on $\mathcal{N}$ closest to the point $C = (1, 0, 1)$ and find the distance $PC$.

[Ans: $\left(\frac{4}{3}, \frac{17}{15}, \frac{11}{15}\right)$ and $\frac{\sqrt{330}}{15}$.]
9. Find a linear equation describing the plane perpendicular to the line of intersection of the planes $x + y - 2z = 4$ and $3x - 2y + z = 1$ and which passes through $(6, 0, 2)$.

[Ans: $3x + 7y + 5z = 28$.]
10. Find the length of the projection of the segment $AB$ on the line $\mathcal{L}$, where $A = (1, 2, 3)$, $B = (5, -2, 6)$ and $\mathcal{L}$ is the line $CD$, where $C = (7, 1, 9)$ and $D = (-1, 5, 8)$.

[Ans: $\frac{17}{3}$.]
11. Find a linear equation for the plane through $A = (3, -1, 2)$, perpendicular to the line $\mathcal{L}$ joining $B = (2, 1, 4)$ and $C = (-3, -1, 7)$. Also find the point of intersection of $\mathcal{L}$ and the plane and hence determine the distance from $A$ to $\mathcal{L}$.

[Ans: $5x + 2y - 3z = 7$, $\left(\frac{111}{38}, \frac{26}{19}, \frac{131}{38}\right)$, $\sqrt{\frac{293}{38}}$.]
12. If $P$ is a point inside the triangle $ABC$, prove that
$$\mathbf{P} = r\mathbf{A} + s\mathbf{B} + t\mathbf{C},$$
where $r + s + t = 1$ and $r > 0$, $s > 0$, $t > 0$.
13. If $B$ is the point where the perpendicular from $A = (6, -1, 11)$ meets the plane $3x + 4y + 5z = 10$, find $B$ and the distance $AB$.

[Ans: $B = \left(\frac{123}{50}, -\frac{143}{25}, \frac{51}{10}\right)$ and $AB = \frac{59}{\sqrt{50}}$.]
14. Prove that the triangle with vertices $(-3, 0, 2)$, $(6, 1, 4)$, $(-5, 1, 0)$ has area $\frac{1}{2}\sqrt{333}$.
15. Find an equation for the plane through $(2, 1, 4)$, $(1, -1, 2)$, $(4, -1, 1)$.

[Ans: $2x - 7y + 6z = 21$.]
16. Lines $\mathcal{L}$ and $\mathcal{M}$ are non-parallel in 3-dimensional space and are given by equations
$$\mathbf{P} = \mathbf{A} + sX,\quad \mathbf{Q} = \mathbf{B} + tY.$$
(i) Prove that there is precisely one pair of points $P$ and $Q$ such that $\vec{PQ}$ is perpendicular to $X$ and $Y$.

(ii) Explain why $PQ$ is the shortest distance between lines $\mathcal{L}$ and $\mathcal{M}$. Also prove that
$$PQ = \frac{|(X \times Y) \cdot \vec{AB}|}{\|X \times Y\|}.$$
17. If $\mathcal{L}$ is the line through $A = (1, 2, 1)$ and $C = (-3, 1, 2)$, while $\mathcal{M}$ is the line through $B = (1, 0, 2)$ and $D = (2, 1, 3)$, prove that the shortest distance between $\mathcal{L}$ and $\mathcal{M}$ equals $\frac{13}{\sqrt{38}}$.
18. Prove that the volume of the tetrahedron formed by four non-coplanar points $A_i = (x_i, y_i, z_i)$, $1 \le i \le 4$, is equal to
$$\frac{1}{6}\left|(\vec{A_1A_2} \times \vec{A_1A_3}) \cdot \vec{A_1A_4}\right|,$$
which in turn equals the absolute value of the determinant
$$\frac{1}{6}\begin{vmatrix} 1 & x_1 & y_1 & z_1 \\ 1 & x_2 & y_2 & z_2 \\ 1 & x_3 & y_3 & z_3 \\ 1 & x_4 & y_4 & z_4 \end{vmatrix}.$$
19. The points $A = (1, 1, 5)$, $B = (2, 2, 1)$, $C = (1, -2, 2)$ and $D = (-2, 1, 2)$ are the vertices of a tetrahedron. Find the equation of the line through $A$ perpendicular to the face $BCD$ and the distance of $A$ from this face. Also find the shortest distance between the skew lines $AD$ and $BC$.

[Ans: $\mathbf{P} = (1 + t)(\mathbf{i} + \mathbf{j} + 5\mathbf{k})$; $2\sqrt{3}$; $3$.]
Chapter 9
FURTHER READING
Matrix theory has many applications to science, mathematics, economics
and engineering. Some of these applications can be found in the books
[2, 3, 4, 5, 11, 13, 16, 20, 26, 28].
For the numerical side of matrix theory, [6] is recommended. Its bibliography
is also useful as a source of further references.
For applications to:
1. Graph theory, see [7, 13];
2. Coding theory, see [8, 15];
3. Game theory, see [13];
4. Statistics, see [9];
5. Economics, see [10];
6. Biological systems, see [12];
7. Markov and non-negative matrices, see [11, 13, 14, 17];
8. The general equation of the second degree in three variables, see [18];
9. Affine and projective geometry, see [19, 21, 22];
10. Computer graphics, see [23, 24].
Bibliography
[1] B. Noble. Applied Linear Algebra, 1969. Prentice Hall, NJ.
[2] B. Noble and J.W. Daniel. Applied Linear Algebra, third edition, 1988.
Prentice Hall, NJ.
[3] R.P. Yantis and R.J. Painter. Elementary Matrix Algebra with Appli-
cation, second edition, 1977. Prindle, Weber and Schmidt, Inc. Boston,
Massachusetts.
[4] T.J. Fletcher. Linear Algebra through its Applications, 1972. Van Nos-
trand Reinhold Company, New York.
[5] A.R. Magid. Applied Matrix Models, 1984. John Wiley and Sons, New
York.
[6] D.R. Hill and C.B. Moler. Experiments in Computational Matrix Alge-
bra, 1988. Random House, New York.
[7] N. Deo. Graph Theory with Applications to Engineering and Computer
Science, 1974. Prentice-Hall, N.J.
[8] V. Pless. Introduction to the Theory of Error-Correcting Codes, 1982.
John Wiley and Sons, New York.
[9] F.A. Graybill. Matrices with Applications in Statistics, 1983.
Wadsworth, Belmont Ca.
[10] A.C. Chiang. Fundamental Methods of Mathematical Economics, second edition, 1974. McGraw-Hill Book Company, New York.
[11] N.J. Pullman. Matrix Theory and its Applications, 1976. Marcel Dekker
Inc. New York.
[12] J.M. Geramita and N.J. Pullman. An Introduction to the Application
of Nonnegative Matrices to Biological Systems, 1984. Queen's Papers
in Pure and Applied Mathematics 68. Queen's University, Kingston,
Canada.
[13] M. Pearl. Matrix Theory and Finite Mathematics, 1973. McGraw-Hill
Book Company, New York.
[14] J.G. Kemeny and J.L. Snell. Finite Markov Chains, 1967. Van Nostrand
Reinhold, N.J.
[15] E.R. Berlekamp. Algebraic Coding Theory, 1968. McGraw-Hill Book
Company, New York.
[16] G. Strang. Linear Algebra and its Applications, 1988. Harcourt Brace
Jovanovich, San Diego.
[17] H. Minc. Nonnegative Matrices, 1988. John Wiley and Sons, New York.
[18] G.C. Preston and A.R. Lovaglia. Modern Analytic Geometry, 1971.
Harper and Row, New York.
[19] J.A. Murtha and E.R. Willard. Linear Algebra and Geometry, 1969.
Holt, Rinehart and Winston, Inc. New York.
[20] L.A. Pipes. Matrix Methods for Engineering, 1963. Prentice-Hall, Inc.
N. J.
[21] D. Gans. Transformations and Geometries, 1969. Appleton-Century-Crofts, New York.
[22] J.N. Kapur. Transformation Geometry, 1976. Affiliated East-West Press, New Delhi.
[23] G.C. Reid. PostScript Language Tutorial and Cookbook, 1988. Addison-Wesley Publishing Company, New York.
[24] D. Hearn and M.P. Baker. Computer Graphics, 1989. Prentice-Hall, Inc., N.J.
[25] C.G. Cullen. Linear Algebra with Applications, 1988. Scott, Foresman
and Company, Glenview, Illinois.
[26] R.E. Larson and B.H. Edwards. Elementary Linear Algebra, 1988. D.C.
Heath and Company, Lexington, Massachusetts Toronto.
[27] N. Magnenat-Thalmann and D. Thalmann. State-of-the-art in Computer Animation, 1989. Springer-Verlag, Tokyo.
[28] W.K. Nicholson. Elementary Linear Algebra, 1990. PWS-Kent, Boston.
Index

2 x 2 determinant, 71
algorithm, Gauss-Jordan, 8
angle between vectors, 166
asymptotes, 137
basis, left-to-right algorithm, 62
Cauchy-Schwarz inequality, 159
centroid, 185
column space, 56
complex number, 89
complex number, imaginary number, 90
complex number, imaginary part, 89
complex number, rationalization, 91
complex number, real, 89
complex number, real part, 89
complex numbers, Apollonius' circle, 100
complex numbers, Argand diagram, 95
complex numbers, argument, 103
complex numbers, complex conjugate, 96
complex numbers, complex exponential, 107
complex numbers, complex plane, 95
complex numbers, cross-ratio, 114
complex numbers, De Moivre, 107
complex numbers, lower half plane, 95
complex numbers, modulus, 99
complex numbers, modulus-argument form, 103
complex numbers, polar representation, 103
complex numbers, ratio formulae, 100
complex numbers, square root, 92
complex numbers, upper half plane, 95
coordinate axes, 154
coordinate planes, 154
cosine rule, 166
determinant, 38
determinant, cofactor, 76
determinant, diagonal matrix, 74
determinant, Laplace expansion, 73
determinant, lower triangular, 74
determinant, minor, 72
determinant, recursive definition, 72
determinant, scalar matrix, 74
determinant, Surveyor's formula, 85
determinant, upper triangular, 74
differential equations, 120
direction of a vector, 164
distance, 154
distance to a plane, 184
dot product, 131, 156
eigenvalue, 118
eigenvalues, characteristic equation, 118
eigenvector, 118
ellipse, 137
equation, linear, 1
equations, consistent system of, 1, 11
equations, Cramer's rule, 39
equations, dependent unknowns, 11
equations, homogeneous system of, 16
equations, homogeneous, nontrivial solution, 16
equations, homogeneous, trivial solution, 16
equations, inconsistent system of, 1
equations, independent unknowns, 11
equations, system of linear, 1
factor theorem, 95
field, 3
field, additive inverse, 4
field, multiplicative inverse, 4
Gauss' theorem, 95
hyperbola, 137
imaginary axis, 95
independence, left-to-right test, 59
inversion, 74
Joachimsthal, 163
least squares, 47
least squares, normal equations, 47
least squares, residuals, 47
length of a vector, 131, 157
linear combination, 17
linear dependence, 58
linear equations, Cramer's rule, 84
linear transformation, 27
linearly independent, 41
mathematical induction, 31
matrices, row-equivalence of, 7
matrix, 23
matrix, addition, 23
matrix, additive inverse, 24
matrix, adjoint, 78
matrix, augmented, 2
matrix, coefficient, 2, 26
matrix, diagonal, 49
matrix, elementary row, 41
matrix, elementary row operations, 7
matrix, equality, 23
matrix, Gram, 132
matrix, identity, 31
matrix, inverse, 36
matrix, invertible, 36
matrix, Markov, 53
matrix, non-singular, 36
matrix, non-singular diagonal, 49
matrix, orthogonal, 130
matrix, power, 31
matrix, product, 25
matrix, proper orthogonal, 130
matrix, reduced row-echelon form, 6
matrix, row-echelon form, 6
matrix, scalar multiple, 24
matrix, singular, 36
matrix, skew-symmetric, 46
matrix, subtraction, 24
matrix, symmetric, 46
matrix, transpose, 45
matrix, unit vectors, 28
matrix, zero, 24
modular addition, 4
modular multiplication, 4
normal form, 180
orthogonal matrix, 116
orthogonal vectors, 168
parabola, 137
parallel lines, 164
parallelogram law, 150
perpendicular vectors, 168
plane, 176
plane through 3 points, 176, 178
position vector, 156
positive octant, 154
projection on a line, 171
rank, 66
real axis, 95
recurrence relations, 32
reflection equations, 29
rotation equations, 28
row space, 56
scalar multiplication of vectors, 150
scalar triple product, 173
skew lines, 172
subspace, 55
subspace, basis, 61
subspace, dimension, 63
subspace, generated, 56
subspace, null space, 55
three-dimensional space, 154
triangle inequality, 160
unit vectors, 158
vector cross-product, 172
vector equality, 149, 165
vector, column, 27
vector, of constants, 26
vector, of unknowns, 26
vectors, parallel vectors, 164