
Chapter 6

Linear transformations

Together with the fundamental notion of vector space, the linear transformation is another basic concept of linear algebra: it carries the linearity structure from one vector space to another.
The widespread use of this notion in geometry justifies a detailed treatment of the subject.

§1. Definition and General Properties


Let V and W be two vector spaces over the commutative field K.

1.1 Definition. A function T: V → W with the following properties:

1) T(x + y) = T(x) + T(y), ∀ x, y ∈ V
2) T(αx) = αT(x), ∀ x ∈ V, ∀ α ∈ K

is called a linear transformation (linear application, linear operator or vector space morphism).

For the image T(x) under a linear transformation T, the notation Tx is also used sometimes.

1.2 Corollary. The application T: V → W is a linear transformation if and only if

T(αx + βy) = αT(x) + βT(y), ∀ x, y ∈ V, ∀ α, β ∈ K (1.1)

The proof is immediate. Condition (1.1) says that T: V → W is a linear transformation if and only if the image of a linear combination of vectors is the same linear combination of the images of these vectors.

Examples
1° The application T: Rⁿ → Rᵐ, T(x) = AX, where A ∈ M_{m×n}(R) and X = ᵗx is the column of coordinates of x, is a linear transformation. In the particular case n = m = 1, the application defined by T(x) = ax, a ∈ R, is linear.
2° If U ⊂ V is a vector subspace, then the application T: U → V defined by T(x) = x is a linear transformation, named the inclusion application. In general, the restriction of a linear transformation to a subset S ⊂ V is not a linear transformation; linearity is inherited only by vector subspaces.
3° The application T: C¹(a, b) → C⁰(a, b), T(f) = f′ is linear.
4° The application T: C⁰(a, b) → R, T(f) = ∫ₐᵇ f(x) dx is linear.
5° If T: V → W is a bijective linear transformation, then T⁻¹: W → V is a linear transformation.
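
As a concrete illustration of example 1°, the following minimal sketch (assuming NumPy is available; the matrix A is an arbitrary choice) checks condition (1.1) numerically for T(x) = AX:

```python
import numpy as np

# An arbitrary matrix A in M_{2x3}(R), so that T: R^3 -> R^2
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

def T(x):
    return A @ x  # the application x -> AX

x = np.array([1.0, -1.0, 2.0])
y = np.array([0.0, 4.0, 1.0])
alpha, beta = 2.0, -3.0

# Condition (1.1): the image of a linear combination is the same
# linear combination of the images.
print(np.allclose(T(alpha * x + beta * y), alpha * T(x) + beta * T(y)))  # True
```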

The set of linear transformations from the vector space V to the vector space W is denoted by L(V, W).
If we define on the set L(V, W) the operations:

(T1 + T2)(x) := T1(x) + T2(x), ∀ x ∈ V
(αT)(x) := αT(x), ∀ x ∈ V, ∀ α ∈ K

then L(V, W) acquires a K-vector space structure.


A vector space isomorphism is a bijective linear transformation T: V → W.
If W = V, then a linear application T: V → V is named an endomorphism of the vector space V, and the set of endomorphisms is denoted by End(V).
For two endomorphisms T1, T2 ∈ End(V) we define

(T1 ∘ T2)(x) := T1(T2(x)), ∀ x ∈ V

This operation is named the product of the transformations T1 and T2, written shortly T1T2.
A bijective endomorphism T: V → V is named an automorphism of the vector space V, and the set of automorphisms is denoted by Aut(V).
The automorphisms of a vector space form a stable part with regard to the product of endomorphisms: they form a group GL(V) ⊂ End(V), also called the linear group of the vector space V.

If W = K, then a linear application T: V → K is named a linear form, and the set V* = L(V, K) of all linear forms on V is a K-vector space named the dual of the vector space V.
If V is a Euclidean vector space of finite dimension, then its dual V* has the same dimension and is identified with V.
1.3 Theorem. If T: V → W is a linear transformation, then:
a) T(0V) = 0W and T(−x) = −T(x), ∀ x ∈ V
b) The image T(U) ⊂ W of a vector subspace U ⊂ V is also a vector subspace.
c) The inverse image T⁻¹(W′) ⊂ V of a vector subspace W′ ⊂ W is also a vector subspace.
d) If the vectors x1, x2, …, xn ∈ V are linearly dependent, then the vectors T(x1), T(x2), …, T(xn) ∈ W are also linearly dependent.
Proof. a) Setting α = 0 and α = −1 in the equation T(αx) = αT(x), we obtain T(0V) = 0W and T(−x) = −T(x). From now on we drop the indexes of the two null vectors.
b) For any u, v ∈ T(U) there exist x, y ∈ U such that u = T(x) and v = T(y). Since U ⊂ V is a vector subspace, for x, y ∈ U and α, β ∈ K we have αx + βy ∈ U, and with relation (1.1) we obtain
αu + βv = αT(x) + βT(y) = T(αx + βy) ∈ T(U).
c) If x, y ∈ T⁻¹(W′), then T(x), T(y) ∈ W′ and for any α, β ∈ K we have αT(x) + βT(y) = T(αx + βy) ∈ W′ (W′ being a vector subspace), hence αx + βy ∈ T⁻¹(W′).
d) We apply the transformation T to a nontrivial relation of linear dependence λ1x1 + λ2x2 + … + λnxn = 0 and, using a), we obtain the linear dependence relation λ1T(x1) + λ2T(x2) + … + λnT(xn) = 0.
1.4 Consequence. If T: V → W is a linear transformation, then:
a) The set Ker T = T⁻¹{0} = {x ∈ V | T(x) = 0} ⊂ V, called the kernel of the linear transformation T, is a vector subspace.
b) The image of the linear transformation T, Im T = T(V) ⊂ W, is a vector subspace.
c) If T(x1), T(x2), …, T(xn) ∈ W are linearly independent, then the vectors x1, x2, …, xn ∈ V are also linearly independent.

1.5 Theorem. A linear transformation T: V → W is injective if and only if Ker T = {0}.

Proof. The injectivity of the linear transformation T, combined with the general property T(0) = 0, implies Ker T = {0}.
Conversely, if Ker T = {0} and T(x) = T(y), then using linearity we obtain T(x − y) = 0, that is x − y ∈ Ker T = {0}, hence x = y, so T is injective.
The nullity of the operator T is the dimension of the kernel Ker T.
The rank of the operator T is the dimension of the image Im T.

1.6 Theorem. (rank theorem) If the vector space V is finite-dimensional, then the vector space Im T is also finite-dimensional, and we have the relation:

dim Ker T + dim Im T = dim V (1.2)

Proof. Let us denote n = dim V and s = dim Ker T. For s ≥ 1 let us consider a base {e1, e2, …, es} of Ker T and complete it to a base B = {e1, e2, …, es, es+1, …, en} of the entire vector space V. The vectors es+1, es+2, …, en represent a base of a supplementary subspace of the subspace Ker T.
For any y ∈ Im T there is x = ∑_{i=1}^{n} xi ei ∈ V such that y = T(x).
Since T(e1) = T(e2) = … = T(es) = 0, we obtain

y = T(x) = ∑_{i=1}^{n} xi T(ei) = xs+1 T(es+1) + … + xn T(en),

which means that T(es+1), T(es+2), …, T(en) generate the subspace Im T.
We must prove that the vectors T(es+1), …, T(en) are linearly independent. Indeed,

λs+1 T(es+1) + … + λn T(en) = 0 ⇔ T(λs+1 es+1 + … + λn en) = 0,

which means that λs+1 es+1 + … + λn en ∈ Ker T. Moreover, as the subspace Ker T has only the null vector in common with the supplementary subspace, we obtain

λs+1 es+1 + … + λn en = 0 ⇒ λs+1 = … = λn = 0,

that is, T(es+1), …, T(en) are linearly independent. Therefore the image subspace Im T is finite-dimensional, and moreover

dim Im T = n − s = dim V − dim Ker T.

For s = 0 (Ker T = {0} and dim Ker T = 0) we consider a base B = {e1, e2, …, en} of the vector space V and, by the same reasoning, we obtain that T(e1), T(e2), …, T(en) represents a base of the space Im T, meaning dim Im T = dim V. (q.e.d.)
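
For a numerical illustration of relation (1.2), the following sketch (assuming NumPy) computes the rank and the nullity of the transformation T(x) = AX for a deliberately rank-deficient matrix A:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],   # twice the first row, so rank A < 3
              [1.0, 0.0, 1.0]])

n = A.shape[1]                         # n = dim V
rank = np.linalg.matrix_rank(A)        # dim Im T, the rank of T
nullity = n - rank                     # dim Ker T, the nullity of T
print(rank, nullity, rank + nullity == n)   # 2 1 True
```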

The property of linear dependence of a system of vectors is preserved by a linear transformation, whereas linear independence is generally not preserved. The conditions under which the linear independence of a system of vectors is preserved are given by the following theorem:

1.7 Theorem. If V is an n-dimensional vector space and T: V → W a linear transformation, then the following statements are equivalent:

1) T is injective;
2) The image of any linearly independent system of vectors e1, e2, …, ep ∈ V (p ≤ n) is a system of vectors T(e1), T(e2), …, T(ep) which is also linearly independent.

Proof. 1) ⇒ 2) Let T be an injective linear transformation, e1, e2, …, ep linearly independent vectors and T(e1), T(e2), …, T(ep) the images of these vectors.
For λi ∈ K, i = 1, …, p, we get

λ1 T(e1) + λ2 T(e2) + … + λp T(ep) = 0 ⇔
⇔ T(λ1e1 + λ2e2 + … + λpep) = 0 ⇔
⇔ λ1e1 + λ2e2 + … + λpep ∈ Ker T = {0} (T injective)
⇔ λ1e1 + λ2e2 + … + λpep = 0 ⇔ λ1 = λ2 = … = λp = 0,

therefore the vectors T(e1), …, T(ep) are linearly independent.
2) ⇒ 1) Suppose there is a vector x ≠ 0 whose image is T(x) = 0. If B = {e1, e2, …, en} ⊂ V is a base, then there exist xi ∈ K, not all null, such that x = ∑_{i=1}^{n} xi ei. Since T(e1), T(e2), …, T(en) are linearly independent, from the relation T(x) = x1T(e1) + x2T(e2) + … + xnT(en) = 0 it results that x1 = x2 = … = xn = 0, i.e. x = 0, a contradiction. We obtain Ker T = {0}, hence T is injective.

1.8 Consequence. If V and W are two finite-dimensional vector spaces and T: V → W is a linear transformation, then:
1) If T is injective and {e1, e2, …, en} is a base of V, then {T(e1), T(e2), …, T(en)} is a base of Im T.
2) Two isomorphic vector spaces have the same dimension.

The proof follows directly from theorems 1.6 and 1.7.

§2. The Matrix of a Linear Transformation

Let V and W be two vector spaces over the field K.

2.1 Theorem. If B = {e1, e2, …, en} is a base of the vector space V and w1, w2, …, wn are n arbitrary vectors of W, then there is a unique linear transformation T: V → W with the property T(ei) = wi, i = 1, …, n.

Proof. Let x ∈ V be a vector, written in base B as x = ∑_{i=1}^{n} xi ei. The correspondence x ↦ T(x) = ∑_{i=1}^{n} xi wi defines an application T: V → W with the property T(ei) = wi, i = 1, …, n. This application is linear. Indeed, if y = ∑_{i=1}^{n} yi ei is another arbitrary vector of V, then the linear combination

αx + βy = ∑_{i=1}^{n} (αxi + βyi) ei, ∀ α, β ∈ K,

has the image

T(αx + βy) = ∑_{i=1}^{n} (αxi + βyi) wi = α ∑_{i=1}^{n} xi wi + β ∑_{i=1}^{n} yi wi = αT(x) + βT(y),

hence T is linear.
Let us now suppose that there is T′: V → W with the property T′(ei) = wi, i = 1, …, n. Then, for any x ∈ V, x = ∑_{i=1}^{n} xi ei, we obtain

T′(x) = T′(∑_{i=1}^{n} xi ei) = ∑_{i=1}^{n} xi T′(ei) = ∑_{i=1}^{n} xi wi = ∑_{i=1}^{n} xi T(ei) = T(∑_{i=1}^{n} xi ei) = T(x),

which proves the uniqueness of the linear transformation T.
If {w1, w2, …, wn} ⊂ W is linearly independent, then the linear transformation T defined in theorem 2.1 is injective.
Theorem 2.1 states that a linear transformation T: V → W, dim V = n, is completely determined once its values {T(e1), T(e2), …, T(en)} on the vectors of a base B = {e1, e2, …, en} ⊂ V are known.
Let Vn and Wm be two K-vector spaces of dimensions n and m respectively, and let T: Vn → Wm be a linear transformation. If B = {e1, e2, …, en} is a fixed base of Vn and B′ = {f1, f2, …, fm} is a fixed base of Wm, then the linear transformation T is uniquely determined by the values T(ej) ∈ Wm. For every j = 1, …, n we associate to the image T(ej) the m-tuple (a1j, a2j, …, amj), that is,

T(ej) = ∑_{i=1}^{m} aij fi, ∀ j = 1, …, n (2.1)

The coefficients aij ∈ K, i = 1, …, m, j = 1, …, n, define uniquely the matrix A = (aij) ∈ M_{m×n}(K). Conversely, if we consider the vector spaces Vn and Wm and the bases B and B′ fixed, then the matrix A ∈ M_{m×n}(K) determines the linear transformation T uniquely.

2.2 Definition. The matrix A ∈ M_{m×n}(K) whose elements are given by relation (2.1) is called the matrix associated to the linear transformation T with regard to the pair of bases B and B′.
2.3 Theorem. If x = ∑_{j=1}^{n} xj ej has the image y = T(x) = ∑_{i=1}^{m} yi fi, then

yi = ∑_{j=1}^{n} aij xj, ∀ i = 1, …, m (2.2)

Indeed,

T(x) = T(∑_{j=1}^{n} xj ej) = ∑_{j=1}^{n} xj T(ej) = ∑_{j=1}^{n} xj (∑_{i=1}^{m} aij fi) = ∑_{i=1}^{m} (∑_{j=1}^{n} aij xj) fi = ∑_{i=1}^{m} yi fi,

which yields relation (2.2).


If we denote X = ᵗ(x1, x2, …, xn), Y = ᵗ(y1, y2, …, ym), then relation (2.2) can also be written as a matrix equation of the form:

Y = AX (2.3)

Equation (2.2), or (2.3), is called the equation of the linear transformation T with regard to the considered bases.
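
The following sketch (NumPy assumed; the map T is an arbitrary example) builds the associated matrix A column by column from relation (2.1), taking the canonical bases of R³ and R², and then checks equation (2.3):

```python
import numpy as np

def T(x):
    # an example linear transformation T: R^3 -> R^2
    x1, x2, x3 = x
    return np.array([x1 - x2, x2 + 2.0 * x3])

E = np.eye(3)                               # canonical base e1, e2, e3 of R^3
A = np.column_stack([T(e) for e in E])      # column j = coordinates of T(ej)

x = np.array([1.0, 2.0, 3.0])
print(np.allclose(A @ x, T(x)))             # True: Y = AX reproduces T(x)
```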

Remarks:
1° If L(Vn, Wm) is the set of all linear transformations from Vn to Wm and M_{m×n}(K) is the set of all matrices of type m × n, and B and B′ are two fixed bases of Vn and Wm respectively, then the correspondence

Ψ: L(Vn, Wm) → M_{m×n}(K), Ψ(T) = A,

which associates to a linear transformation T its matrix A relative to the two fixed bases, is a vector space isomorphism. Consequently dim L(Vn, Wm) = m · n.
2° This isomorphism has the following properties:
- Ψ(T1 ∘ T2) = Ψ(T1) · Ψ(T2), whenever the composition T1 ∘ T2 is defined;
- T: Vn → Vn is invertible if and only if the associated matrix A, with regard to some base of Vn, is invertible.

Let Vn be a K-vector space and T ∈ End(Vn). Considering different bases of Vn, we can associate different square matrices to the same linear transformation T. Naturally, one question arises: when do two square matrices represent the same endomorphism? The answer is given by the following theorem:

2.4 Theorem. Two matrices A, A′ ∈ Mn(K), relative to the bases B, B′ ⊂ Vn, represent the same linear transformation T: Vn → Vn if and only if A′ = Ω⁻¹AΩ, where Ω is the passing matrix from base B to base B′.

Proof. Let B = {e1, e2, …, en} and B′ = {e′1, e′2, …, e′n} be two bases of Vn and Ω = (ωij) the passing matrix from base B to base B′, hence

e′j = ∑_{i=1}^{n} ωij ei, ∀ j = 1, …, n.

Let A = (aij) be the matrix associated to T relative to the base B, hence T(ej) = ∑_{i=1}^{n} aij ei, j = 1, …, n, and A′ = (a′ij) the matrix associated to T relative to the base B′, hence T(e′j) = ∑_{i=1}^{n} a′ij e′i, j = 1, …, n. Then the images T(e′j) can be written in two ways:

T(e′j) = ∑_{i=1}^{n} a′ij (∑_{k=1}^{n} ωki ek) = ∑_{k=1}^{n} (∑_{i=1}^{n} ωki a′ij) ek, and respectively

T(e′j) = T(∑_{i=1}^{n} ωij ei) = ∑_{i=1}^{n} ωij T(ei) = ∑_{i=1}^{n} ωij (∑_{k=1}^{n} aki ek) = ∑_{k=1}^{n} (∑_{i=1}^{n} aki ωij) ek.

From the two expressions we obtain ∑_{i=1}^{n} ωki a′ij = ∑_{i=1}^{n} aki ωij, that is, ΩA′ = AΩ.
Ω being a nondegenerate matrix, the relation A′ = Ω⁻¹AΩ follows.

2.5 Definition. Two matrices A, B ∈ Mn(K) are called similar if there is a nondegenerate matrix C ∈ Mn(K) such that B = C⁻¹ · A · C.

Remarks:
1° The similarity relation is an equivalence relation on the set Mn(K). Every equivalence class corresponds to an endomorphism T ∈ End(Vn) and contains all the matrices associated to T relative to the bases of the vector space Vn.

2° Two similar matrices A and B have the same determinant:

det B = det(C⁻¹) · det A · det C = det A.

Any two similar matrices also have the same rank, a number which represents the rank of the endomorphism T. Therefore the rank of an endomorphism does not depend on the chosen base of the vector space V (the rank of an endomorphism is invariant to base changes).
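
A small numerical sketch of theorem 2.4 and of these remarks (NumPy assumed; A and Ω are arbitrary choices, Ω invertible): the matrix A′ = Ω⁻¹AΩ has the same determinant and the same rank as A.

```python
import numpy as np

A = np.array([[3.0, 2.0],
              [1.0, 4.0]])          # matrix of T in the base B
Omega = np.array([[1.0, 1.0],
                  [0.0, 1.0]])      # passing matrix from B to B'

A_prime = np.linalg.inv(Omega) @ A @ Omega   # matrix of T in the base B'

print(np.isclose(np.linalg.det(A_prime), np.linalg.det(A)))        # True
print(np.linalg.matrix_rank(A_prime) == np.linalg.matrix_rank(A))  # True
```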

§3. Eigenvectors and Eigenvalues

Let V be an n-dimensional K-vector space and T ∈ End(V) an endomorphism.
If we consider different bases of the vector space V, then to the same endomorphism T ∈ End(V) there correspond several similar matrices. Therefore, we are interested in finding the base of V relative to which the matrix associated to the endomorphism T has the simplest form, the canonical form. In this case the relations that define the endomorphism T, yi = ∑_{j=1}^{n} aij xj, have the simplest expressions. We will solve this problem with the help of the eigenvalues and the eigenvectors of the endomorphism T.

3.1 Definition. Let V be a K-vector space and T ∈ End(V) an endomorphism.
A vector x ∈ V, x ≠ 0, is called an eigenvector of the endomorphism T: V → V if there is a λ ∈ K such that

T(x) = λx (3.1)

The scalar λ ∈ K is called the eigenvalue of T corresponding to the eigenvector x.

The set of all eigenvalues of T is called the spectrum of the operator T and is denoted by σ(T).
The equation T(x) = λx with x ≠ 0 is equivalent to x ∈ Ker(T − λI) \ {0}, where I is the identity endomorphism.
If x is an eigenvector of T, then the vectors kx, k ∈ K \ {0}, are also eigenvectors.

3.2 Theorem. If V is a K-vector space and T ∈ End(V), then:
1) To any eigenvector of T there corresponds a single eigenvalue λ ∈ σ(T).
2) Eigenvectors corresponding to distinct eigenvalues are linearly independent.
3) The set Sλ = {x ∈ V | T(x) = λx}, λ ∈ σ(T), is a vector subspace of V, invariant with regard to T, i.e. T(Sλ) ⊆ Sλ. This vector subspace is named the eigensubspace corresponding to the eigenvalue λ ∈ σ(T).
Proof. 1) Let x ≠ 0 be an eigenvector corresponding to the eigenvalue λ ∈ K. Suppose there is another eigenvalue λ′ ∈ K corresponding to the same eigenvector, T(x) = λ′x; then λx = λ′x ⇔ (λ − λ′)x = 0 ⇔ λ = λ′.
2) Let x1, x2, …, xp be eigenvectors corresponding to the distinct eigenvalues λ1, λ2, …, λp. We prove the linear independence of these vectors by induction on p. For p = 1 and x1 ≠ 0 (being an eigenvector), the set {x1} is linearly independent. Supposing that the property is true for p − 1 vectors, we show that it is true for p eigenvectors. Applying the endomorphism T to the relation k1x1 + k2x2 + … + kpxp = 0 we obtain k1λ1x1 + k2λ2x2 + … + kpλpxp = 0. Subtracting the first relation multiplied by λp from the second one, we obtain

k1(λ1 − λp)x1 + … + kp−1(λp−1 − λp)xp−1 = 0.

The inductive hypothesis gives k1 = k2 = … = kp−1 = 0, and using this in the relation k1x1 + … + kp−1xp−1 + kpxp = 0 we obtain kpxp = 0 ⇔ kp = 0, that is, x1, x2, …, xp are linearly independent.
3) For any x, y ∈ Sλ and any α, β ∈ K we have

T(αx + βy) = αT(x) + βT(y) = αλx + βλy = λ(αx + βy),

which means that Sλ is a vector subspace of V.
For any x ∈ Sλ we have T(x) = λx ∈ Sλ, hence T(Sλ) ⊆ Sλ.
3.3 Theorem. The eigensubspaces Sλ1, Sλ2 corresponding to distinct eigenvalues λ1 ≠ λ2 have only the null vector in common.

Proof. Let λ1, λ2 ∈ σ(T), λ1 ≠ λ2. Suppose there exists a vector x ∈ Sλ1 ∩ Sλ2 different from 0, for which we can write the relations T(x) = λ1x and T(x) = λ2x. We obtain (λ1 − λ2)x = 0 ⇔ λ1 = λ2, which is a contradiction. Therefore Sλ1 ∩ Sλ2 = {0}.

3.4 Definition. A non-zero column matrix X ∈ M_{n×1}(K) is called an eigenvector of the matrix A ∈ Mn(K) if there is a λ ∈ K such that AX = λX. The scalar λ ∈ K is called an eigenvalue of the matrix A.

The matrix equation AX = λX can be written in the form (A − λI)X = 0 and is equivalent to the homogeneous linear system:

(a11 − λ)x1 + a12x2 + … + a1nxn = 0
a21x1 + (a22 − λ)x2 + … + a2nxn = 0
…………………………………………………
an1x1 + an2x2 + … + (ann − λ)xn = 0 (3.2)

which admits solutions different from the trivial one if and only if

P(λ) = det(A − λI) = | a11 − λ   a12    …   a1n    |
                     | a21       a22 − λ …  a2n    |
                     | …        …      …   …     |
                     | an1       an2    …   ann − λ | = 0 (3.3)
3.5 Definition. The polynomial P(λ) = det(A − λI) is called the characteristic polynomial of the matrix A, and the equation P(λ) = 0 is called the characteristic equation of the matrix A.

It can be proven that the characteristic polynomial can also be written as

P(λ) = (−1)ⁿ [λⁿ − δ1λⁿ⁻¹ + … + (−1)ⁿδn], (3.4)

where δi is the sum of the principal minors of order i of the matrix A.
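
Numerically, the characteristic polynomial and the spectrum can be computed as in the following sketch (NumPy assumed; note that np.poly returns the coefficients of det(λI − A), which differs from det(A − λI) only by the factor (−1)ⁿ):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

coeffs = np.poly(A)            # [1. -4. 3.], i.e. lambda^2 - 4*lambda + 3
eigenvalues, eigenvectors = np.linalg.eig(A)

print(coeffs)
print(eigenvalues)             # [3. 1.], the roots of P(lambda) = 0
print(eigenvectors[:, 0])      # an eigenvector corresponding to lambda = 3
```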

Remarks

1° The solutions of the characteristic equation det(A − λI) = 0 are the eigenvalues of the matrix A.
2° If the field K is algebraically closed, then all roots of the characteristic equation lie in K, and the corresponding eigenvectors are in the K-vector space M_{n×1}(K).
If K is not algebraically closed, e.g. K = R, the characteristic equation may also have complex roots, and the corresponding eigenvectors will lie in the complexified real vector space.
For any real and symmetrical matrix it can be proven that the eigenvalues are real.
3° Two similar matrices have the same characteristic polynomial.
Indeed, if A and A′ are similar, A′ = C⁻¹AC with C nondegenerate, then

P′(λ) = det(A′ − λI) = det(C⁻¹AC − λI) = det[C⁻¹(A − λI)C] =
= det(C⁻¹) det(A − λI) det C = det(A − λI) = P(λ).

If A ∈ Mn(K) and P(x) = a0xⁿ + a1xⁿ⁻¹ + … + an ∈ K[X], then the matrix P(A) = a0Aⁿ + a1Aⁿ⁻¹ + … + anI is named a matrix polynomial.

3.6 Theorem. (Hamilton–Cayley) If P(λ) is the characteristic polynomial of the matrix A, then P(A) = 0.

Proof. Let P(λ) = det(A − λI) = a0λⁿ + a1λⁿ⁻¹ + … + an.
The adjugate of the matrix A − λI is a matrix polynomial of the form

(A − λI)* = Bn−1λⁿ⁻¹ + Bn−2λⁿ⁻² + … + B1λ + B0, Bi ∈ Mn(K),

and satisfies the relation (A − λI) · (A − λI)* = P(λ) · I, hence

(A − λI)(Bn−1λⁿ⁻¹ + Bn−2λⁿ⁻² + … + B1λ + B0) = (a0λⁿ + a1λⁿ⁻¹ + … + an)I.

Identifying the coefficients of the like powers of λ, we obtain

a0I = −Bn−1
a1I = ABn−1 − Bn−2
a2I = ABn−2 − Bn−3
……………………
an−1I = AB1 − B0
anI = AB0

Multiplying these equalities by Aⁿ, Aⁿ⁻¹, Aⁿ⁻², …, A, I respectively and adding them, the right-hand sides cancel in pairs and we obtain

a0Aⁿ + a1Aⁿ⁻¹ + … + anI = 0, q.e.d.

3.7 Consequence. Any polynomial in a matrix A ∈ Mn(K) of degree ≥ n can be written as a matrix polynomial of degree at most n − 1.

3.8 Consequence. If A is invertible, then the inverse matrix A⁻¹ can be expressed in terms of powers of A lower than its order.
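
Both the theorem and consequence 3.8 can be checked numerically; the sketch below (NumPy assumed) uses the 2 × 2 case, where P(λ) = λ² − (tr A)λ + det A:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
I = np.eye(2)
tr, det = np.trace(A), np.linalg.det(A)

# Hamilton-Cayley: A^2 - (tr A) A + (det A) I = 0
print(np.allclose(A @ A - tr * A + det * I, 0))        # True

# Consequence 3.8: from the relation above, A^{-1} = (tr A * I - A) / det A
A_inv = (tr * I - A) / det
print(np.allclose(A_inv, np.linalg.inv(A)))            # True
```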

Now let us consider an n-dimensional K-vector space Vn, a base B, and let us denote by A ∈ Mn(K) the matrix associated to the endomorphism T with regard to this base. The equation T(x) = λx is equivalent to (A − λI)X = 0.
The eigenvalues of the endomorphism T, if they exist, are the roots of the polynomial P(λ) in the field K, and the eigenvectors of T are the solutions of the matrix equation (A − λI)X = 0. Because of the invariance of the characteristic polynomial under a base change in Vn, P(λ) depends only on the endomorphism T and not on the matrix representation of T in a given base. Therefore, speaking of the characteristic polynomial and of the characteristic equation of the endomorphism T, meaning the characteristic polynomial P(λ) of the matrix A and the equation P(λ) = 0, is well justified.

§4. An Endomorphism’s Canonical Form

Let us consider the endomorphism T: Vn → Vn defined on the n-dimensional K-vector space Vn.
If we consider two bases B and B′ of the vector space Vn and denote by A and A′ the matrices associated to the endomorphism T with respect to these bases, then A′ = Ω⁻¹AΩ, where Ω represents the passing matrix from base B to base B′. Knowing that the matrix associated to an endomorphism depends on the chosen base of the vector space Vn, we will determine that particular base with regard to which the endomorphism has the simplest form, meaning that the associated matrix has diagonal form.

4.1 Definition. The endomorphism T: Vn → Vn is diagonalizable if there exists a base B = {e1, e2, …, en} of the vector space Vn such that the matrix corresponding to T in this base has diagonal form.

4.2 Theorem. The endomorphism T: Vn → Vn is diagonalizable if and only if there exists a base of the vector space Vn formed only of eigenvectors of the endomorphism T.

Proof. If T is diagonalizable, then there exists a base B = {e1, e2, …, en} with respect to which the associated matrix A = (aij) has diagonal form, meaning aij = 0 for all i ≠ j. The action of T on the elements of base B is given by the relations T(ei) = aii ei, i = 1, …, n; consequently ei, i = 1, …, n, are eigenvectors of T.
Conversely, let {v1, v2, …, vn} be a base of Vn formed only of eigenvectors, that is, T(vi) = λivi, i = 1, …, n. From these equations we can construct the matrix associated to T in this base:

D = ( λ1  0  …  0  )
    ( 0   λ2 …  0  )
    ( …  …  …  … )
    ( 0   0  …  λn )

where the scalars λi ∈ K are not necessarily distinct.
In the context of the previous theorem, the matrices in the similarity class corresponding to a diagonalizable endomorphism T, written with respect to the various bases of the vector space Vn, are called diagonalizable matrices.

4.3 Consequence. If the endomorphism T has n distinct eigenvalues, then the corresponding eigenvectors determine a base of Vn, and the matrix associated to T in this base is a diagonal matrix having on the main diagonal the eigenvalues of T.

4.4 Consequence. If A ∈ Mn(K) is diagonalizable, then det A = λ1 · λ2 · … · λn.

An eigenvalue λ ∈ K, as a root of the characteristic equation P(λ) = 0, has a multiplicity order named its algebraic multiplicity, while the dimension dim Sλ of the corresponding eigensubspace is named the geometric multiplicity of the eigenvalue λ.

4.5 Theorem. The dimension of an eigensubspace of the endomorphism T is at most equal to the multiplicity order of the respective eigenvalue (the geometric multiplicity is at most equal to the algebraic one).

Proof. Let the eigenvalue λ0 ∈ K have algebraic multiplicity m ≤ n, and let the corresponding eigensubspace Sλ0 have dimension dim Sλ0 = p ≤ n. Considering a base B0 = {e1, e2, …, ep} of Sλ0, we distinguish two cases.
If p = n, then we have n linearly independent eigenvectors, therefore a base of Vn with respect to which the matrix associated to the endomorphism T has diagonal form, with the eigenvalue λ0 on the main diagonal. In this case the characteristic polynomial is P(λ) = (−1)ⁿ(λ − λ0)ⁿ, resulting m = n = p.
If p < n, we complete the base of the vector subspace Sλ0 to a base B = {e1, e2, …, ep, ep+1, …, en} of Vn.
The action of the operator T on the elements of this base is given by

T(ei) = λ0 ei, ∀ i = 1, …, p, and
T(ej) = ∑_{k=1}^{n} akj ek, ∀ j = p+1, …, n.

With regard to the base B, the endomorphism T has the matrix

A = ( λ0  0  …  0   a1,p+1     …  a1n    )
    ( 0   λ0 …  0   a2,p+1     …  a2n    )
    ( …  …  …  …  …         …  …     )
    ( 0   0  …  λ0  ap,p+1     …  apn    )
    ( 0   0  …  0   ap+1,p+1   …  ap+1,n )
    ( …  …  …  …  …         …  …     )
    ( 0   0  …  0   an,p+1     …  ann    )

hence the characteristic polynomial of T has the form P(λ) = (λ0 − λ)ᵖ Q(λ), meaning that (λ0 − λ)ᵖ divides P(λ); therefore p ≤ m, q.e.d.
4.6 Theorem. The endomorphism T: Vn → Vn is diagonalizable if and only if the characteristic polynomial has all its roots in the field K and the dimension of every eigensubspace is equal to the multiplicity order of the corresponding eigenvalue.

Proof. If T is diagonalizable, then there is a base B = {e1, e2, …, en} ⊂ Vn formed only of eigenvectors, with regard to which the associated matrix has diagonal form. In these conditions the characteristic polynomial is

P(λ) = (−1)ⁿ (λ − λ1)^{m1} (λ − λ2)^{m2} … (λ − λp)^{mp},

with λi ∈ K the eigenvalues of T, of multiplicity orders mi, ∑_{i=1}^{p} mi = n. Without loss of generality, we can assume that the first m1 vectors of the base B = {e1, e2, …, en} are the eigenvectors corresponding to the eigenvalue λ1, the next m2 correspond to λ2, and so on. Hence {e1, e2, …, e_{m1}} ⊂ Sλ1, therefore m1 ≤ dim Sλ1. But dim Sλ1 ≤ m1 (theorem 4.5), so dim Sλ1 = m1, and likewise for the other eigenvalues.
Conversely, if λi ∈ K and dim Sλi = mi, i = 1, …, p, with ∑_{i=1}^{p} mi = n, then we can consider the set B = {e1, e2, …, e_{m1}, e_{m1+1}, …, e_{m1+m2}, …}, with the convention that the first m1 vectors form a base of Sλ1, the following m2 vectors form a base of Sλ2, and so on.
Because Sλi ∩ Sλj = {0} for i ≠ j and ∑_{i=1}^{p} mi = n, the set B is a base of Vn, with regard to which the matrix associated to T is

A = diag(λ1, …, λ1, λ2, …, λ2, …, λp, …, λp),

with each λi appearing mi times on the main diagonal. A being a diagonal matrix, we conclude that the endomorphism T is diagonalizable.

4.7 Consequence. If T: Vn → Vn is a diagonalizable endomorphism, then the vector space Vn can be represented as the direct sum

Vn = Sλ1 ⊕ Sλ2 ⊕ … ⊕ Sλp.

Practically, the steps required for diagonalizing an endomorphism T are the following:
1° We write the matrix A associated to the endomorphism T with regard to a given base of the vector space Vn.
2° We solve the characteristic equation det(A − λI) = 0, determining the eigenvalues λ1, λ2, …, λp with their corresponding multiplicity orders m1, m2, …, mp.
3° We use the result of theorem 4.6 and distinguish the following cases:
I) If λi ∈ K, ∀ i = 1, …, p, we determine the dimensions of the eigensubspaces Sλi. The dimension of the eigensubspace Sλi, representing the vector space of the solutions of the homogeneous system (A − λiI)X = 0, is given by dim Sλi = n − rank(A − λiI). The dimension of the subspace Sλi can also be found by determining the subspace Sλi itself.
a) If dim Sλi = mi, ∀ i = 1, …, p, then T is diagonalizable. The matrix associated to T with regard to the base formed of eigenvectors is a diagonal matrix having on the main diagonal the eigenvalues, each written as many times as its multiplicity order requires.
We can check this result by constructing the matrix T = (ᵗv1, ᵗv2, …, ᵗvn), having as columns the coordinates of the eigenvectors (the diagonalizing matrix); it represents the passing matrix from the base considered initially to the base formed of eigenvectors, and the matrix associated to T with regard to the latter base is the diagonal matrix

D = T⁻¹AT = diag(λ1, …, λp),

each eigenvalue being repeated according to its multiplicity, as shown in the sketch after this list.
b) If there exists λi ∈ K such that dim Sλi < mi, then T is not diagonalizable. The following paragraph analyzes this case.
II) If there exists λi ∉ K, then T is not diagonalizable. The problem of the diagonalization of T can then be considered only over an extension of the field K.
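
The whole procedure can be mirrored numerically, as in the sketch below (NumPy assumed; the symmetric matrix A is an arbitrary diagonalizable example, and T denotes the diagonalizing matrix of step a)):

```python
import numpy as np

A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])

eigenvalues, T = np.linalg.eig(A)     # columns of T are eigenvectors of A
D = np.linalg.inv(T) @ A @ T          # step a): D = T^{-1} A T

print(np.round(eigenvalues, 6))                  # 4 and the double eigenvalue 1
print(np.allclose(D, np.diag(eigenvalues)))      # True: A is diagonalizable
```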

Let us consider a K-vector space Vn and T ∈ End(Vn) an endomorphism defined on Vn.
If A ∈ Mn(K) is the matrix associated to T with regard to a base of Vn, then A can be diagonalized only if the conditions of theorem 4.6 are fulfilled.
If the eigenvalues of T belong to the field K, λi ∈ K, but the geometric multiplicity differs from the algebraic multiplicity, dim Sλi < mi, for at least one eigenvalue λi ∈ K, then the endomorphism T is not diagonalizable; we can still determine a base of the vector space Vn with respect to which the endomorphism T has a more general canonical form, named the Jordan form.

For λ ∈ K, the matrices of the form

(λ),  ( λ  1 )   ( λ  1  0 )         ( λ  1  0  …  0 )
      ( 0  λ ),  ( 0  λ  1 ),  … ,   ( 0  λ  1  …  0 )
                 ( 0  0  λ )         ( …  …  …  …  1 )
                                     ( 0  0  0  …  λ )     (4.1)

are named Jordan cells attached to the scalar λ, of orders 1, 2, 3, …, n.
4.8 Definition. The endomorphism T: Vn → Vn is called jordanizable if there exists a base of the vector space Vn with regard to which the associated matrix has the form

J = ( J1  0  …  0  )
    ( 0   J2 …  0  )
    ( …  …  …  … )
    ( 0   0  …  Jp )

where Ji, i = 1, …, p, are Jordan cells of various orders, attached to the eigenvalues λi.

A Jordan cell of order p attached to the eigenvalue λ ∈ K, of algebraic multiplicity m ≥ p, corresponds to the linearly independent vectors e1, e2, …, ep which satisfy the relations:

T(e1) = λe1
T(e2) = λe2 + e1
T(e3) = λe3 + e2
……………………
T(ep) = λep + ep−1

The vector e1 is an eigenvector, and the vectors e2, e3, …, ep are called main vectors.

Remarks
1° The diagonal form of a diagonalizable endomorphism is a particular case of the Jordan canonical form, in which all Jordan cells have order one.
2° The Jordan canonical form is not unique. The order of the Jordan cells on the main diagonal depends on the chosen order of the main vectors and of the eigenvectors within the given base.
3° The number of Jordan cells, equal to the number of linearly independent eigenvectors, as well as their orders, are uniquely determined.
The following theorem can be proven:

4.9 Theorem. (Jordan) If the endomorphism T ∈ End(Vn) has all its eigenvalues in the field K, then there exists a base of the vector space Vn with regard to which the matrix associated to T has Jordan form.

Practically, for determining the Jordan canonical form of an endomorphism we will consider the following steps:

1° We write the matrix A associated to the endomorphism T with regard to the given base.
2° We solve the characteristic equation det(A − λI) = 0, determining the eigenvalues λ1, λ2, …, λp with their multiplicity orders m1, m2, …, mp.
3° We find the eigensubspaces Sλi for each eigenvalue λi.
4° We calculate the number of Jordan cells, separately for each eigenvalue λi; it is given by dim Sλi = n − rank(A − λiI). In other words, for each eigenvalue the number of linearly independent eigenvectors gives the number of the corresponding Jordan cells.
5° The main vectors are determined for those eigenvalues for which dim Sλi < mi; their number is given by mi − dim Sλi. If v ∈ Sλi is some eigenvector of Sλi, we check the compatibility conditions and solve, one by one, the linear systems

(A − λiI)X1 = v, (A − λiI)X2 = X1, …, (A − λiI)Xs = Xs−1.

Keeping in mind the compatibility conditions and the general form of the eigenvectors and of the main vectors, by giving arbitrary values to the parameters we determine the linearly independent eigenvectors of Sλi and the main vectors associated to each of them.
6° We write the base of the vector space Vn by uniting the systems of mi linearly independent vectors, i = 1, …, p, formed of eigenvectors and main vectors.
Using the matrix T having as columns the coordinates of the vectors of the base constructed above, taking the eigenvectors and their associated main vectors in this order, we obtain the matrix J = T⁻¹AT, the Jordan canonical form, which contains on the main diagonal the Jordan cells, in the order in which the corresponding systems of eigenvectors and main vectors (if any) appear in the constructed base. Each Jordan cell has order equal to the number of vectors in the system formed of an eigenvector and its associated main vectors.
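
For exact computations, the Jordan canonical form can also be obtained with a computer algebra system; below is a sketch assuming SymPy is available (its jordan_form() returns a matrix P with A = PJP⁻¹, P playing the role of the matrix T above):

```python
import sympy as sp

# A matrix with the triple eigenvalue 1 and only one independent eigenvector
A = sp.Matrix([[2, 3, -5],
               [2, 4, -7],
               [1, 2, -3]])

P, J = A.jordan_form()                       # A = P J P^{-1}
sp.pprint(J)                                 # a single Jordan cell of order 3
print(sp.simplify(P * J * P.inv() - A) == sp.zeros(3, 3))   # True
```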

§5. Linear Transformations on Euclidean Vector Spaces

The properties of linear transformations defined on an arbitrary vector space apply to Euclidean vector spaces as well. The scalar product that defines the Euclidean structure permits the introduction of particular classes of linear transformations.

5.1. Orthogonal transformations

Let V and W be two Euclidean R-vector spaces. Without danger of confusion, we denote the scalar products on the two vector spaces by the same symbol < , >.

5.1 Definition. A linear transformation T: V → W is named an orthogonal transformation if it preserves the scalar product, meaning

< T x, T y > = < x, y >, ∀ x, y ∈ V (5.1)

Examples.
1° The identity transformation T: V → V, T(x) = x, is an orthogonal transformation.
2° The transformation T: V → V that associates to a vector x ∈ V its opposite, T(x) = −x, is an orthogonal transformation.

5.2 Theorem. The linear transformation T: V → W is orthogonal if and only if it preserves the norm, meaning

|| T x || = || x ||, ∀ x ∈ V (5.2)

Proof. If T is orthogonal then < T x, T y > = < x, y >, which for x = y becomes
< T x, T x > = < x, x > ⇔ || T x ||² = || x ||² ⇔ || T x || = || x ||.
Conversely, using the relation < a, b > = ¼ [ || a + b ||² − || a − b ||² ], we have

< T x, T y > = ¼ [ || T x + T y ||² − || T x − T y ||² ] =
= ¼ [ || T(x + y) ||² − || T(x − y) ||² ] =
= ¼ [ || x + y ||² − || x − y ||² ] = < x, y >, q.e.d.
5.3 Consequence. An orthogonal transformation T: V → W is an injective linear transformation.

Proof. If T is an orthogonal transformation, then || T x || = || x ||, and the hypothesis T x = 0 yields || x || = 0 ⇔ x = 0. Consequently the kernel is Ker T = {0}, meaning T is injective.
Using theorem 1.7 we find that an orthogonal transformation carries a linearly independent system into a linearly independent system.
5.4 Consequence. An orthogonal transformation T: V → V preserves the Euclidean distance and has the origin as a fixed point, T(0) = 0.

Indeed, d(T x, T y) = || T x − T y || = || T(x − y) || = || x − y || = d(x, y).
An orthogonal transformation T different from the identity transformation has 0 as a fixed point; if T fixes every vector, then T coincides with the identity transformation.
If W = V and T1, T2 are two orthogonal transformations on V, then their composition is also an orthogonal transformation.
If T: V → V is also surjective, then T is invertible and, moreover, T⁻¹ is an orthogonal transformation.
In these conditions, the set of all bijective orthogonal transformations of the Euclidean vector space V forms a group together with the operation of composition (product) of orthogonal transformations, named the orthogonal group of the Euclidean vector space V, denoted GO(V); it represents a subgroup of the linear group GL(V).

Let us consider, in the finite-dimensional Euclidean vector spaces Vn and Wm, the orthonormal bases B = {e1, e2, …, en} and B′ = {f1, f2, …, fm} respectively, and an orthogonal linear transformation T: Vn → Wm.
With regard to the orthonormal bases B and B′, the linear transformation T is characterized by the matrix A ∈ M_{m×n}(R), through the relations

T(ej) = ∑_{k=1}^{m} akj fk, ∀ j = 1, …, n.

The bases B and B′ being orthonormal, we have

< ei, ej > = δij, i, j = 1, …, n and < fk, fh > = δkh, k, h = 1, …, m.

Let us evaluate the scalar products of the images of the vectors of B:

< T(ei), T(ej) > = < ∑_{k=1}^{m} aki fk, ∑_{h=1}^{m} ahj fh > = ∑_{k,h=1}^{m} aki ahj < fk, fh > = ∑_{k,h=1}^{m} aki ahj δkh = ∑_{k=1}^{m} aki akj.

Using the orthogonality property of T, < T(ei), T(ej) > = δij, we have

∑_{k=1}^{m} aki akj = δij, i, j = 1, …, n (5.3)

Relation (5.3) can also be written in the form

ᵗA · A = In (5.3)′
Conversely, if T is a linear transformation characterized, with regard to the orthonormal bases B and B′, by a matrix A ∈ M_{m×n}(R) which satisfies condition (5.3), then T is an orthogonal transformation.
Indeed, let x = (x1, x2, …, xn) and y = (y1, y2, …, yn) be two vectors of the space Vn. Calculating the scalar product of the images of these vectors, we get

< T(x), T(y) > = < T(∑_{i=1}^{n} xi ei), T(∑_{j=1}^{n} yj ej) > = ∑_{i,j=1}^{n} xi yj < T(ei), T(ej) > =
= ∑_{i,j=1}^{n} xi yj (∑_{k=1}^{m} aki akj) = ∑_{i,j=1}^{n} xi yj δij = ∑_{i=1}^{n} xi yi = < x, y >,

meaning < T(x), T(y) > = < x, y >, so T is an orthogonal transformation.
We have thus proven the following theorem:

5.5 Theorem. With regard to the orthonormal bases B ⊂ Vn and B′ ⊂ Wm, the linear transformation T: Vn → Wm is orthogonal if and only if the associated matrix satisfies the condition ᵗA · A = In.

5.6 Consequence. An orthogonal transformation T: Vn → Vn is characterized, with regard to an orthonormal base B ⊂ Vn, by an orthogonal matrix, A⁻¹ = ᵗA.

Taking into consideration that det ᵗA = det A, it results that if A is an orthogonal matrix, then det A = ±1.
The subset of the orthogonal matrices with the property det A = 1 forms a subgroup, denoted SO(n; R), of the orthogonal matrix group of order n, named the special orthogonal group. An orthogonal transformation characterized by a matrix A ∈ SO(n; R) is also called a rotation.
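
A short numerical sketch of theorem 5.5 and of these notions (NumPy assumed): a plane rotation matrix satisfies ᵗA · A = I₂, has det A = 1 and preserves the scalar product.

```python
import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # A in SO(2; R)

print(np.allclose(A.T @ A, np.eye(2)))            # True: tA * A = I_2
print(np.isclose(np.linalg.det(A), 1.0))          # True: A is a rotation

x = np.array([1.0, 2.0])
y = np.array([-3.0, 0.5])
print(np.isclose((A @ x) @ (A @ y), x @ y))       # True: <Tx, Ty> = <x, y>
```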

Remarks
1° Since an orthogonal transformation T: Vn → Wm is injective, if B ⊂ Vn is a base, then its image through T is a base of Im T ⊂ Wm. Consequently n ≤ m.
2° Any orthogonal transformation between two Euclidean vector spaces of the same dimension is an isomorphism of Euclidean vector spaces.
3° The matrix associated to the composition of two orthogonal transformations T1, T2: Vn → Vn with regard to an orthonormal base is given by the product of the matrices associated to the transformations T1 and T2. Thus the orthogonal group of the Euclidean vector space Vn, the group GO(Vn), is isomorphic to the multiplicative group GO(n; R) of the orthogonal matrices of order n.

5.2. Symmetrical linear transformations

Let V and W be two Euclidean R-vector spaces and T: V → W a linear transformation.

5.7 Definition. The linear transformation ᵗT: W → V defined by the relation

< ᵗT y, x >_V = < y, T x >_W, ∀ x ∈ V, ∀ y ∈ W (5.4)

is called the transpose of the linear transformation T.

5.8 Definition. A linear transformation T: V → V is called symmetrical (antisymmetrical) if ᵗT = T (ᵗT = −T).

Let Vn be a finite-dimensional Euclidean R-vector space and B ⊂ Vn an orthonormal base. If the endomorphism T: Vn → Vn is characterized by the real matrix A ∈ Mn(R), one can easily prove that to any symmetrical (antisymmetrical) endomorphism there corresponds a symmetrical (antisymmetrical) matrix with regard to an orthonormal base.

5.9 Theorem. The eigenvalues of a symmetrical linear transformation T: Vn → Vn are real.

Proof. Let λ0 be an arbitrary root of the characteristic equation det(A − λI) = 0 and X = ᵗ(x1, x2, …, xn) an eigenvector corresponding to the eigenvalue λ0.
Denoting by X̄ the conjugate of X and multiplying the equation (A − λ0I)X = 0 on the left by ᵗX̄, we obtain ᵗX̄AX = λ0 ᵗX̄X.
Since A is real and symmetrical, the first member of the equality is real, because the conjugate of the number ᵗX̄AX equals ᵗXAX̄ = ᵗ(ᵗXAX̄) = ᵗX̄ ᵗA X = ᵗX̄AX itself.
Also, ᵗX̄X ≠ 0 is a real number, so λ0 is the quotient of two real numbers, hence it is real.
Let now the eigenvector v1 ∈ Vn correspond to the eigenvalue λ1 ∈ R, let S1 be the subspace generated by v1 and Vn−1 ⊂ Vn its orthogonal complement, Vn = S1 ⊕ Vn−1.

5.10 Theorem. The subspace Vn−1 ⊂ Vn is invariant with regard to the symmetrical linear transformation T: Vn → Vn.

Proof. If v1 ∈ Vn is an eigenvector corresponding to the eigenvalue λ1, then T v1 = λ1v1. For any v ∈ Vn−1 the relation < v1, v > = 0 is satisfied. Let us prove that T(v) ∈ Vn−1:

< v1, T v > = < T v1, v > = < λ1v1, v > = λ1 < v1, v > = 0.

Based on this result, the next theorem can be easily proven.

5.11 Theorem. For a symmetrical endomorphism T: Vn → Vn, the geometric multiplicity of any eigenvalue λi is equal to its algebraic multiplicity, dim Sλi = mi.

5.12 Proposition. The eigensubspaces of a symmetrical endomorphism T: Vn → Vn corresponding to distinct eigenvalues are orthogonal.

Proof. Let Sλi, Sλj be the eigensubspaces corresponding to the distinct eigenvalues λi and λj, of algebraic multiplicity orders mi and mj respectively. For any v ∈ Sλi and any w ∈ Sλj, with T(v) = λiv and T(w) = λjw, we obtain

< w, T(v) > = λi < w, v > and < T(w), v > = λj < w, v >.

Since T is symmetrical we have (λj − λi) < w, v > = < T(w), v > − < w, T(v) > = 0. Thus λj ≠ λi ⇒ < w, v > = 0, q.e.d.

5.13 Proposition. Any symmetrical linear transformation of a Euclidean vector space Vn determines an orthonormal base of Vn formed of eigenvectors.

Indeed, the first mi vectors can be chosen as an orthonormal base of the eigensubspace Sλi corresponding to the characteristic root λi of multiplicity mi, and so on, until all eigenvalues are used up, m1 + m2 + … + mp = n.
With regard to the orthonormal base formed of these eigenvectors, the endomorphism T: Vn → Vn has canonical form. The matrix associated to the endomorphism T in this base is diagonal, having on the main diagonal the eigenvalues, each written as many times as its multiplicity order requires; it is expressed in terms of the matrix A associated to the endomorphism T in the orthonormal base B through the relation D = ᵗΩAΩ, where Ω is the orthogonal passing matrix from the base B to the base formed of eigenvectors.
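
The sketch below illustrates these results numerically (np.linalg.eigh is assumed, a routine designed for symmetric matrices): the eigenvalues come out real, the eigenvector columns of Ω are orthonormal, and ᵗΩAΩ is diagonal.

```python
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])               # a real symmetrical matrix

eigenvalues, Omega = np.linalg.eigh(A)        # Omega: orthonormal eigenvectors

print(eigenvalues)                                        # all real
print(np.allclose(Omega.T @ Omega, np.eye(3)))            # True: Omega orthogonal
print(np.allclose(Omega.T @ A @ Omega, np.diag(eigenvalues)))  # True: D = tOmega A Omega
```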

5.3. Isometric transformations on punctual Euclidean spaces

Let E = (E, V, ϕ) be a punctual Euclidean space, E being the support set, V the director vector space and ϕ the affine structure function.

5.14 Definition. A bijective correspondence f: E → E is called a transformation of the set E, or a permutation of the set E.

If E is endowed with a certain geometrical structure and f satisfies certain conditions referring to this structure, then f is named a geometrical transformation.
Denoting by (σ(E), ∘) the group of transformations of the set E with regard to the operation of composition of functions, and by G a subgroup of it, the pair (E, G) is named a geometric space, or a space with a fundamental group.
A subset of points F ⊂ E is named a figure of the geometric space (E, G). Two figures F1, F2 ⊂ E are called congruent if there is f ∈ G such that f(F1) = F2.
A property or magnitude referring to the figures of the geometric space (E, G) is called geometric if it is invariant under the transformations of the group G.
We call the geometry of the space (E, G) the theory obtained by studying the notions, properties and geometric magnitudes referring to the figures of the set E.
If E = (E, V, ϕ) and E′ = (E′, V′, ϕ′) are two affine spaces, then an affine transformation t: E → E′ is uniquely determined by a pair of points A ∈ E, A′ ∈ E′ and by a linear transformation T: V → V′.
We shall consider the punctual Euclidean space of free vectors E3 = (E3, V3, ϕ) and a Cartesian frame R = (O, ī, j̄, k̄) in this space.
An affine transformation t: E3 → E3, t(M) = M′, realizes the correspondence M(x1, x2, x3) → M′(x1′, x2′, x3′) characterized by the relations

xi′ = ∑_{j=1}^{3} aij xj + xi⁰, det(aij) ≠ 0 (5.5)

and written in matrix form

X′ = AX + X⁰, det A ≠ 0 (5.5)′

An affine transformation can also be interpreted as a change of affine frames. The set of affine transformations forms a group with regard to the operation of composition, named the affine group.
When studying the properties of geometrical spaces, a very special interest attaches to those transformations of the space that do not deform the figures. For that matter, we present some examples of affine transformations having the above mentioned property.
Examples:
1. The application sO: E3 → E3 defined by sO(O) = O, O ∈ E3 fixed, and sO(P) = P′ with the property O̅P̅′ = −O̅P̅, is an affine transformation called the symmetry of center O. The associated linear transformation T: V3 → V3 is defined by the relation T(v̄) = −v̄.
2. If d ⊂ E3 is a straight line and P a point not contained in the line d, then there exists one and only one point P′ ∈ E3 with the property PP′ ⊥ d and with the middle point of the segment PP′ on the line d.
The application sd: E3 → E3, sd(P) = P′, with P′ defined above, is called the axial symmetry of axis d. If P0 is the orthogonal projection of the point P on the line d, then we have the affine combination P′ = 2P0 − P. The associated linear transformation T: V3 → V3 is given by the relation T(v̄) = 2 pr_d v̄ − v̄.
3. The application t: E3 → E3 given by the correspondence t(P) = P′ with the property P̅P̅′ = v̄0, v̄0 ∈ V3 being a given vector, is an affine transformation on E3 named the translation of vector v̄0.
The associated linear transformation T: V3 → V3 is the identity application T(v̄) = v̄.
4. Let E2 = (E2, V2, ϕ) be a two-dimensional punctual Euclidean space.
The application r_{O,α}: E2 → E2, r_{O,α}(P) = P′, with the properties

δ(O, P′) = δ(O, P) and ∠POP′ = α,

is called the rotation of center O and angle α.
The associated linear transformation T: V2 → V2, T(v̄) = v̄′, is characterized by an orthogonal matrix.
In the geometry of Euclidean spaces we are foremost interested in those geometrical transformations which preserve certain properties of the figures of the considered space. In other words, we will consider certain subgroups of the affine group of the Euclidean space which govern these transformations.

5.15 Definition. We call an isometry of the punctual Euclidean space E3 = (E3, V3, ϕ) an application f: E3 → E3 with the property

δ(f(A), f(B)) = δ(A, B), ∀ A, B ∈ E3 (5.6)

If we consider the representative of the vector ū at the point A ∈ E3, then there is a unique point B ∈ E3 such that A̅B̅ ∈ ū and, besides that, δ(A, B) = ||A̅B̅|| = ||ū||. Therefore relation (5.6), for the linear transformation T: V → V associated to f, is equivalent to

δ(T(ū), T(v̄)) = δ(ū, v̄). (5.6)′

If the punctual space E3 is the space of Euclidean geometry, then an isometry f: E3 → E3 is a bijective application, hence a geometrical transformation, named an isometric transformation.
The set of isometric transformations, with regard to the operation of composition of applications, forms a subgroup Izo E of the group of transformations of E3, called the group of isometries.
5.16 Theorem. The affine application f: E → E is an isometric transformation if and only if the associated linear application T: V → V is an orthogonal transformation.

The geometrical transformations given in the previous examples (central symmetry, axial symmetry, translation and rotation) are isometric transformations.

5.17 Theorem. Any isometry f: E → E is the product of a translation with an isometry having a fixed point, f = t ∘ g.

 
Let us consider the Euclidean plane PE = (E2, IzoE2), with R = (O, ī, j̄) a Cartesian orthonormal frame in the punctual Euclidean space E2 = (E2, V2, ϕ), and g: E2 → E2 an isometry with O as a fixed point. Consider the point A(1, 0) and an arbitrary point M(x, y), having the images A′(a, b) and M′(x′, y′) respectively.
From the conditions δ(O, A′) = δ(O, A), δ(O, M′) = δ(O, M) and δ(A′, M′) = δ(A, M) we obtain the system of equations

a² + b² = 1
x² + y² = x′² + y′²
(x − 1)² + y² = (x′ − a)² + (y′ − b)² (5.7)

The general solution is

x′ = ax − εby
y′ = bx + εay, with a² + b² = 1, ε = ±1 (5.8)

Conversely, the formulae (5.8) represent an isometry. Indeed, if M1 and M2 are two arbitrary points with images M′1 and M′2 respectively, then

δ(M′1, M′2)² = (x′2 − x′1)² + (y′2 − y′1)² = (a² + b²)[(x2 − x1)² + (y2 − y1)²] = δ(M1, M2)²,

hence δ(M′1, M′2) = δ(M1, M2).
The fixed points of the isometry characterized by equations (5.8) are obtained by setting x′ = x and y′ = y in (5.8), hence

(a − 1)x − εby = 0
bx + (εa − 1)y = 0 (5.9)

The system (5.9) is a homogeneous system of linear equations, with determinant

Δ = (ε + 1)(1 − a).

If ε = −1 the system (5.9) admits an infinity of fixed points, hence the equations (5.8) represent an axial symmetry.

If ε = 1 and a ≠ 1 the isometry (5.8) admits only the origin as a fixed point and represents a rotation; for ε = 1 and a = 1 the application (5.8) becomes the identity transformation.
The equations (5.8) can be written in matrix form as follows:

( x′ )   ( a  −εb ) ( x )
( y′ ) = ( b   εa ) ( y ),  a² + b² = 1, ε = ±1 (5.8)′

The associated matrix A = ( a −εb ; b εa ), under the conditions a² + b² = 1, ε = ±1, is an orthogonal matrix, which means that the subgroup of the isometries of the Euclidean plane having the origin as a fixed point is isomorphic to the orthogonal group GO(2; R).
For ε = 1 (considering the identity application as the rotation of angle α = 0) the orthogonal matrix A has the property det A = 1, which means that the subgroup of the rotations of the Euclidean plane is isomorphic to the special orthogonal group SO(2; R).
By composing the isometries having the origin as a fixed point with the translations which carry the origin into the point (x0, y0), we obtain the isometries of the Euclidean plane PE = (E2, IzoE2), characterized analytically by the equations:

x′ = ax − εby + x0
y′ = bx + εay + y0, with a² + b² = 1, ε = ±1 (5.10)

Using trigonometric functions, with a = cos ϕ and b = sin ϕ, ϕ ∈ R, the equations (5.10) take the form:

x′ = x cos ϕ − εy sin ϕ + x0
y′ = x sin ϕ + εy cos ϕ + y0, ε = ±1 (5.11)

For ε = 1, the transformations characterized by the equations

x′ = x cos ϕ − y sin ϕ + x0
y′ = x sin ϕ + y cos ϕ + y0 (5.12)

meaning those isometries obtained by composing a rotation with a translation (the movements of the plane), form a subgroup called the subgroup of movements.
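
As a small numerical sketch of equations (5.12) (NumPy assumed; the angle and the translation are arbitrary choices), a movement of the plane preserves the distance between any two points:

```python
import numpy as np

phi, x0, y0 = 0.9, 2.0, -1.0
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

def movement(p):
    # M(x, y) -> M'(x', y') as in equations (5.12)
    return R @ p + np.array([x0, y0])

M1 = np.array([1.0, 3.0])
M2 = np.array([-2.0, 0.5])
print(np.isclose(np.linalg.norm(M2 - M1),
                 np.linalg.norm(movement(M2) - movement(M1))))   # True
```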
In the three-dimensional punctual Euclidean space E3 = (E3, V3, ϕ), the isometries T: E3 → E3, T(x1, x2, x3) = (y1, y2, y3), are the transformations characterized by the equations

y1 = a11x1 + a12x2 + a13x3 + b1
y2 = a21x1 + a22x2 + a23x3 + b2
y3 = a31x1 + a32x2 + a33x3 + b3 (5.13)

where the matrix A = (aij), i, j = 1, 2, 3, is an orthogonal matrix and (b1, b2, b3) are the coordinates of the translated origin of the orthonormal Cartesian frame R(O; ī, j̄, k̄) of E3.

Problems
1. Find out which of the following applications are linear transformations:
T: R² → R², T(x1, x2) = (x1 − x2, x1 + x2)
T: R² → R², T(x1, x2) = (x1 cos α − x2 sin α, x1 sin α + x2 cos α), α ∈ R
T: R³ → R³, T(x1, x2, x3) = (x1 − 2x2 + 3x3, 2x3, x1 + x2)
T: R² → R³, T(x1, x2) = (x1 − x2, x1 + x2, 0)
T: R² → R³, T(x1, x2) = (x1x2, x1, x2).
2. Prove that the following transformations are linear:
a) T: V3 → R, T(v̄) = < ā, v̄ >, with ā ∈ V3 fixed
b) T: Cⁿ[0,1] → C⁰[0,1], Tf = ∑_{i=1}^{n} αi f⁽ⁱ⁾, αi ∈ R
c) T: C⁰[0,1] → C¹[0,1], (Tf)(x) = ∫ₐˣ f(t) dt

3. Determine the linear transformation T: R³ → R³, T(vi) = wi, where
v1 = (1,1,0), v2 = (1,0,1), v3 = (0,1,1) and w1 = (2,1,0), w2 = (−1,0,1), w3 = (1,1,1).
Determine the transformations for which T³ = T.

4. Show that the transformation T: R² → R³, T(x1, x2) = (x1 − x2, x1 + x2, 2x1 + 3x2) is injective but not surjective.

5. Determine Ker T and Im T for the transformation T: R³ → R⁴,
T(x1, x2, x3) = (−x1 + x2 + x3, x1 − x2 + x3, 2x3, x1 − x2), and verify the relation dim Ker T + dim Im T = 3.

6. Consider the transformation T: R³ → R³ given in the canonical base by the matrix

A = (  3   2  −1 )
    ( −1  −1   1 )
    (  0  −1   2 )

a) Determine the vector subspace T⁻¹(W) for the subspace
W = { (x1, x2, x3) ∈ R³ | x1 + x2 + x3 = 0 }.
b) Determine a base for each of the subspaces Ker T and Im T.


7. Show that the application T: V3 → V3, T(v̄) = ā × v̄, with ā ∈ V3 fixed, is linear. Determine Ker T and Im T and verify the rank theorem.

8. On the space R³ consider the projections Ti, i = 1, 2, 3, onto the three coordinate axes.
Show that R³ = Im T1 ⊕ Im T2 ⊕ Im T3 and Ti ∘ Tj = δij Ti, i, j = 1, 2, 3.

9. Show that the endomorphism T: R³ → R³ given by the matrix

A = ( −4  −7  −5 )
    (  2   3   3 )
    (  1   2   1 )

is nilpotent of index two (T² = 0).
A transformation with this property is called a tangent structure.

10. Show that the transformation T: R²ⁿ → R²ⁿ defined by the relation T(x) = (xn+1, xn+2, …, x2n, −x1, −x2, …, −xn) has the property T² = −Id_{R²ⁿ}.
A transformation with this property is called a complex structure.
11. Determine the linear transformation which carries a point (x1, x2, x3) ∈ R³ into its symmetric with respect to the plane x1 + x2 + x3 = 0, and show that this transformation is orthogonal.
12. Determine the eigenvectors and the eigenvalues of the endomorphisms T: R³ → R³ characterized by the matrices

2 1 1 0 1 1 2 0 0 − 3 −7 −5 
       
A= 1 2 1 , A= 1 0 1  , A= 0 1 0  , A=  2 4 3 .
1 1 2 1 1 0 0 1 1  1 2 2 
       

13. Study whether the following endomorphisms can be reduced to diagonal form; if so, find the diagonalizing matrix:

A = ( 5 4 )    A = ( 4 0 0 )    A = ( 2 −2  3 )    A = ( 1 0  0  1 )
    ( 4 5 ),       ( 0 0 1 ),       ( 1  1  1 ),       ( 0 1  0  0 )
                   ( 0 1 2 )        ( 1  3 −1 )        ( 0 0  1 −2 )
                                                       ( 1 0 −2  5 )

14. Determine the Jordan canonical form for the following matrices:

A = ( 2 0 0 )    A = ( 2 3 −5 )    A = ( 2  0  0  0 )
    ( 0 4 1 ),       ( 2 4 −7 ),       ( 1  3  1  1 )
    ( 0 0 4 )        ( 1 2 −3 )        ( 0  0 −1  0 )
                                       ( 0 −4  0  2 )

15. Determine an orthonormal base of the space R³ in which the endomorphism T: R³ → R³, T(x) = (−x1 + 2x2 − 4x3, 2x1 − 4x2 − 2x3, −4x1 − 2x2 − x3), admits canonical form.

16. Using the Hamilton–Cayley theorem, calculate A⁻¹ and P(A), where P(x) = x⁴ + x³ + x² + x + 1, for the following matrices:

A = (  1 0 )    A = ( 2 0 0 )    A = ( 4 1 1 )    A = ( 0 2 0 −1 )
    ( −1 1 ),       ( 0 1 0 ),       ( 2 4 1 ),       ( 1 0 0  0 )
                    ( 0 0 1 )        ( 0 1 4 )        ( 0 1 0  0 )
                                                      ( 0 0 1  0 )

