Linear transformations
The proof can be written directly; condition (1.1) shows that an application T: V → W is a linear transformation if and only if the image of a linear combination of vectors is the same linear combination of the images of these vectors.
Examples
1° The application T: Rⁿ → Rᵐ, T(x) = AX, with A ∈ M_{m×n}(R) and X = ᵗx, is a linear transformation. In the particular case n = m = 1, the application defined by T(x) = ax, a ∈ R, is linear.
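As a quick numerical sketch (NumPy; the matrix and vectors below are arbitrary choices, not from the text), condition (1.1) can be checked directly for T(x) = AX:

```python
import numpy as np

# Illustrative check of T(a*x + b*y) = a*T(x) + b*T(y) for T(x) = A x.
# A, x, y, a, b are arbitrary sample data, not taken from the text.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))   # A in M_{3x2}(R), so T: R^2 -> R^3
x = rng.standard_normal(2)
y = rng.standard_normal(2)
a, b = 2.0, -3.0

lhs = A @ (a * x + b * y)          # T(a x + b y)
rhs = a * (A @ x) + b * (A @ y)    # a T(x) + b T(y)
assert np.allclose(lhs, rhs)
```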
2° If U ⊂ V is a vector subspace, then the application T: U → V defined by T(x) = x is a linear transformation, called the inclusion application. In general, the restriction of a linear transformation to a subset S ⊂ V is not a linear transformation; linearity is inherited only by vector subspaces.
3° The application T: C¹(a, b) → C⁰(a, b), T(f) = f′, is linear.
4° The application T: C⁰(a, b) → R, T(f) = ∫_a^b f(x) dx, is linear.
5° If T: V → W is a bijective linear transformation, then T⁻¹: W → V is a linear transformation.
If W = K, then a linear application T: V → K is called a linear form, and the set V* = L(V, K) of all linear forms on V is a K-vector space called the dual of V.
If V is a Euclidean vector space of finite dimension, then its dual V* has the same dimension and is identified with V.
1.3 Theorem. If T: V → W is a linear transformation, then:
a) T(0_V) = 0_W and T(−x) = −T(x), ∀ x ∈ V;
b) the image T(U) ⊂ W of a vector subspace U ⊂ V is also a vector subspace;
c) the inverse image T⁻¹(W′) ⊂ V of a vector subspace W′ ⊂ W is also a vector subspace;
d) if the vectors x1, x2, ..., xn ∈ V are linearly dependent, then the vectors T(x1), T(x2), ..., T(xn) ∈ W are also linearly dependent.
Proof. a) Setting α = 0 and α = −1 in the equation T(αx) = αT(x), we obtain T(0_V) = 0_W and T(−x) = −T(x). From now on we will drop the subscripts of the two zero vectors.
b) For ∀ u, v ∈ T (U) , ∃ x, y ∈ U such that u = T (x) and v =
T (y). According to the hypothesis that U ⊂ V is a vector subspace, for x, y
∈ U, α , β ∈ K we have α x + β y ∈ U and with relation (1.1), we obtain
α u + β v = α T (x) + β T (y) = T(α x + β y) ∈ T (U).
c) If x, y ∈ T⁻¹(W′), then T(x), T(y) ∈ W′, and for any α, β ∈ K we have αT(x) + βT(y) = T(αx + βy) ∈ W′ (W′ being a vector subspace), so αx + βy ∈ T⁻¹(W′).
d) We apply the transformation T to the relation of linear dependence λ1x1 + λ2x2 + ... + λnxn = 0 and, using a), obtain the linear dependence relation λ1T(x1) + λ2T(x2) + ... + λnT(xn) = 0.
1.4 Consequence. If T: V → W is a linear transformation, then:
a) The set Ker T = T⁻¹({0}) = {x ∈ V | T(x) = 0} ⊂ V is called the kernel of the linear transformation T, and it is a vector subspace.
b) The image Im T = T(V) ⊂ W of the linear transformation T is a vector subspace.
c) If T(x1), T(x2), ..., T(xn) ∈ W are linearly independent, then the vectors x1, x2, ..., xn ∈ V are also linearly independent.
1.5 Theorem. A linear transformation T: V → W is injective if and only
if Ker T = {0}.
Proof. Let n = dim V and s = dim Ker T. For s ≥ 1, let us consider a base {e1, e2, ..., es} in Ker T and complete it to B = {e1, e2, ..., es, es+1, ..., en}, a base of the entire vector space V. The vectors es+1, es+2, ..., en form a base of a complementary subspace of Ker T.
For s = 0 (Ker T = {0} and dim Ker T = 0), we consider a base B = {e1, e2, ..., en} of the vector space V and, following the same reasoning, obtain that T(e1), T(e2), ..., T(en) form a base of the space Im T, meaning dim Im T = dim V. (q.e.d.)
1) T is injective;
2) the image of any linearly independent system of vectors e1, e2, ..., ep ∈ V (p ≤ n) is a system of vectors T(e1), T(e2), ..., T(ep) which is also linearly independent.
1.8 Consequence. If V and W are two finite-dimensional vector spaces and T: V → W is a linear transformation, then:
1) if T is injective and {e1, e2, ..., en} is a base of V, then {T(e1), T(e2), ..., T(en)} is a base of Im T;
2) two isomorphic vector spaces have the same dimension.
The proof can be written directly using the results of Theorems 1.6 and 1.7.
2.1 Theorem. If B = {e1, e2, ..., en} is a base of the vector space V and {w1, w2, ..., wn} are n arbitrary vectors of W, then there exists a unique linear transformation T: V → W with the property T(ei) = wi, i = 1, n.
For x = ∑_{i=1}^{n} xi ei define T(x) = ∑_{i=1}^{n} xi wi. Since αx + βy = ∑_{i=1}^{n} (αxi + βyi) ei for all α, β ∈ K, the application T so defined is linear. If T′ is another linear transformation with T′(ei) = wi, i = 1, n, then for any x ∈ V, x = ∑_{i=1}^{n} xi ei, we obtain
T′(x) = ∑_{i=1}^{n} xi T′(ei) = ∑_{i=1}^{n} xi wi = T(∑_{i=1}^{n} xi ei) = T(x),
that is, the uniqueness of the linear transformation T.
If {w1, w2, ..., wn} ⊂ W are linearly independent, then the linear transformation T defined in Theorem 2.1 is injective.
Theorem 2.1 states that a linear transformation T: V → W, dim V = n, is completely determined once the images {T(e1), T(e2), ..., T(en)} of the vectors of a base B = {e1, e2, ..., en} ⊂ V are known.
Let Vn and Wm be two K-vector spaces of dimensions n and m respectively, and let T: Vn → Wm be a linear transformation. If B = {e1, e2, ..., en} is a fixed base in Vn and B′ = {f1, f2, ..., fm} is a fixed base in Wm, then the linear transformation T is uniquely determined by the values T(ej) ∈ Wm. For every j = 1, n we associate to the image T(ej) the m-tuple (a1j, a2j, ..., amj), that is
T(ej) = ∑_{i=1}^{m} aij fi , ∀ j = 1, n   (2.1)
The coefficients aij ∈ K, i = 1, m, j = 1, n, uniquely define the matrix A = (aij) ∈ M_{m×n}(K). If we consider the vector spaces Vn and Wm and the bases B and B′ fixed, then the matrix A ∈ M_{m×n}(K) determines the linear transformation T uniquely.
2.2 Definition. The matrix A ∈ M_{m×n}(K) whose elements are given by relation (2.1) is called the matrix associated to the linear transformation T with regard to the pair of bases B and B′.
If x = ∑_{j=1}^{n} xj ej and T(x) = ∑_{i=1}^{m} yi fi, then the coordinates are related by
yi = ∑_{j=1}^{n} aij xj , ∀ i = 1, m   (2.2)
Indeed,
T(x) = T(∑_{j=1}^{n} xj ej) = ∑_{j=1}^{n} xj T(ej) = ∑_{j=1}^{n} xj ∑_{i=1}^{m} aij fi = ∑_{i=1}^{m} (∑_{j=1}^{n} aij xj) fi = ∑_{i=1}^{m} yi fi ,
which in matrix form reads
Y = AX   (2.3)
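Relations (2.1)–(2.3) can be illustrated numerically: the j-th column of A consists of the coordinates of T(ej), and then Y = AX. The particular transformation below is an illustrative choice, not one from the text.

```python
import numpy as np

# The matrix of a linear transformation (relation (2.1)): its j-th
# column holds the coordinates of T(e_j); then Y = A X is (2.3).
def T(x):
    x1, x2, x3 = x
    return np.array([x1 - x2, x1 + 2 * x3])   # sample T: R^3 -> R^2

E = np.eye(3)                                        # canonical base of R^3
A = np.column_stack([T(E[:, j]) for j in range(3)])  # A in M_{2x3}(R)

x = np.array([1.0, 4.0, -2.0])
assert np.allclose(A @ x, T(x))                      # Y = A X
```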
Remarks:
1° If L(Vn, Wm) is the set of all linear transformations from Vn with values in Wm, M_{m×n}(K) is the set of all matrices of type m × n, and B and B′ are two fixed bases in Vn and Wm respectively, then the correspondence
Ψ : L(Vn, Wm) → M_{m×n}(K) , Ψ(T) = A,
which associates to a linear transformation T its matrix A relative to the two fixed bases, is a vector space isomorphism. Consequently dim L(Vn, Wm) = m · n.
2° This isomorphism has the following properties:
- Ψ(T1 ∘ T2) = Ψ(T1) · Ψ(T2), whenever the composition T1 ∘ T2 is defined;
- T: Vn → Vn is invertible iff its associated matrix A, with regard to some base of Vn, is invertible.
iff A′ = Ω⁻¹ A Ω. The matrix Ω is the passing matrix from the base B to the base B′.
Proof. Let B = {e1, e2, ..., en} and B′ = {e′1, e′2, ..., e′n} be two bases in Vn and Ω = (ωij) the passing matrix from the base B to the base B′, hence
e′j = ∑_{i=1}^{n} ωij ei , ∀ j = 1, n.
If A = (aij) is the matrix associated to T relative to the base B, hence T(ej) = ∑_{i=1}^{n} aij ei, j = 1, n, and A′ = (a′ij) is the matrix associated to T relative to the base B′, then on one hand
T(e′j) = ∑_{i=1}^{n} a′ij e′i = ∑_{i=1}^{n} a′ij ∑_{k=1}^{n} ωki ek = ∑_{k=1}^{n} (∑_{i=1}^{n} ωki a′ij) ek ,
and on the other hand
T(e′j) = T(∑_{i=1}^{n} ωij ei) = ∑_{i=1}^{n} ωij T(ei) = ∑_{i=1}^{n} ωij ∑_{k=1}^{n} aki ek = ∑_{k=1}^{n} (∑_{i=1}^{n} aki ωij) ek .
From the two expressions we obtain ∑_{i=1}^{n} ωki a′ij = ∑_{i=1}^{n} aki ωij ⇔ Ω A′ = A Ω.
Ω being a nondegenerate matrix, the relation A′ = Ω⁻¹ A Ω follows.
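A small numerical check of the change-of-base relation A′ = Ω⁻¹AΩ (both matrices below are arbitrary illustrative choices, not from the text):

```python
import numpy as np

# A is the matrix of an endomorphism of R^2 in the old base; the
# columns of Omega give the new base vectors in old coordinates.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
Omega = np.array([[1.0, 1.0],
                  [1.0, 2.0]])        # passing matrix, det = 1 ≠ 0

A_prime = np.linalg.inv(Omega) @ A @ Omega

# Same endomorphism in both bases: mapping new coordinates to old,
# applying A, must agree with applying A' first and converting after.
x_new = np.array([3.0, -1.0])
assert np.allclose(Omega @ (A_prime @ x_new), A @ (Omega @ x_new))
```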
Remarks:
1° The similarity relation is an equivalence relation on the set Mn(K). Every equivalence class corresponds to an endomorphism T ∈ End(Vn) and contains all the matrices associated to T relative to the bases of the vector space Vn.
2° Two similar matrices have the same determinant.
T, yi = ∑_{j=1}^{n} aij xj, will have the simplest expression. We will solve this problem with the help of the eigenvalues and the eigenvectors of the endomorphism T.
1) to any eigenvector of T there corresponds a single eigenvalue λ ∈ σ(T);
2) eigenvectors corresponding to distinct eigenvalues are linearly independent;
3) the set Sλ = {x ∈ V | Tx = λx, λ ∈ σ(T)} ⊂ V is a vector subspace invariant with regard to T, that is, T(Sλ) ⊆ Sλ. This vector subspace is called the eigensubspace corresponding to the eigenvalue λ ∈ σ(T).
Proof. 1) Let x ≠ 0 be an eigenvector corresponding to the eigenvalue λ ∈ K. Suppose there is another eigenvalue λ′ ∈ K corresponding to the same eigenvector, T(x) = λ′x; then λx = λ′x ⇔ (λ − λ′)x = 0, and since x ≠ 0 it follows that λ = λ′.
2) Let x1, x2, ..., xp be eigenvectors corresponding to the distinct eigenvalues λ1, λ2, ..., λp. We prove the linear independence of these vectors by induction on p. For p = 1, since x1 ≠ 0 as an eigenvector, the set {x1} is linearly independent. Supposing the property true for p − 1, we prove it for p eigenvectors. Applying the endomorphism T to the relation k1x1 + k2x2 + ... + kpxp = 0 we obtain k1λ1x1 + k2λ2x2 + ... + kpλpxp = 0. Subtracting the first relation multiplied by λp from the second, we obtain:
k1(λ1 − λp)x1 + ... + kp−1(λp−1 − λp)xp−1 = 0.
The inductive hypothesis gives k1 = k2 = ... = kp−1 = 0 (the differences λi − λp being nonzero), and using this in the relation k1x1 + ... + kp−1xp−1 + kpxp = 0 we obtain kpxp = 0 ⇔ kp = 0, that is, x1, x2, ..., xp are linearly independent.
3) For any x, y ∈ Sλ and ∀ α , β ∈ K we have :
T (α x + β y) = α T (x) + β T (y) = α λ x + β λ y = λ (α x + β y),
which means that Sλ is a vector subspace of V.
For any x ∈ Sλ we have T(x) = λx ∈ Sλ, hence T(Sλ) ⊆ Sλ.
3.3 Theorem. The eigensubspaces Sλ1, Sλ2 corresponding to distinct eigenvalues λ1 ≠ λ2 have only the null vector in common.
contradiction. Therefore Sλ1 ∩ Sλ2 = {0}.
Remarks
1° The solutions of the characteristic equation det(A − λI) = 0 are the eigenvalues of the matrix A.
2° If the field K is algebraically closed, then all roots of the characteristic equation are in K, and therefore the corresponding eigenvectors are also in the K-vector space M_{n×1}(K).
If K is not algebraically closed, e.g. K = R, the characteristic equation may also have complex roots, and the corresponding eigenvectors will lie in the complexified real vector space.
For any real and symmetric matrix, it can be proven that the eigenvalues are real.
3° Two similar matrices have the same characteristic polynomial.
Indeed, if A and A′ are similar, A′ = C⁻¹AC with C nondegenerate, then det(A′ − λI) = det(C⁻¹(A − λI)C) = det C⁻¹ · det(A − λI) · det C = det(A − λI).
If det(A − λI) = a0λⁿ + a1λⁿ⁻¹ + ... + an is the characteristic polynomial of A, the adjugate of A − λI is a matrix polynomial of degree n − 1 and satisfies
(A − λI)(Bn−1λⁿ⁻¹ + Bn−2λⁿ⁻² + ... + B1λ + B0) = (a0λⁿ + a1λⁿ⁻¹ + ... + an)I.
Identifying the coefficients of the powers of λ and multiplying the relations, in order, by Aⁿ, Aⁿ⁻¹, ..., A, I:
a0I = −Bn−1 (multiplied by Aⁿ)
a1I = ABn−1 − Bn−2 (multiplied by Aⁿ⁻¹)
a2I = ABn−2 − Bn−3 (multiplied by Aⁿ⁻²)
...............................
an−1I = AB1 − B0 (multiplied by A)
anI = AB0 (multiplied by I)
then, summing, we obtain a0Aⁿ + a1Aⁿ⁻¹ + ... + anI = 0, q.e.d.
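The identity a0Aⁿ + a1Aⁿ⁻¹ + ... + anI = 0 can be verified numerically for a sample matrix. Note that np.poly returns the coefficients of det(λI − A), which differ from those of det(A − λI) only by the factor (−1)ⁿ, so the same identity holds:

```python
import numpy as np

# Verify the Cayley-Hamilton identity p(A) = 0 for a sample matrix.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
coeffs = np.poly(A)                 # [1, -trace(A), det(A)] for n = 2

P = np.zeros_like(A)
for c in coeffs:                    # Horner evaluation of p(A)
    P = P @ A + c * np.eye(2)
assert np.allclose(P, 0.0)
```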
matrix to an endomorphism depends on the chosen base of the vector space Vn, we will determine that particular base with regard to which the endomorphism has the simplest form, that is, the associated matrix has a diagonal form.
eigenvalues of T .
T(ei) = λ0 ei , ∀ i = 1, p and
T(ej) = ∑_{k=1}^{n} akj ek , ∀ j = p+1, n
      | λ0  0  ...  0    a1,p+1   ...  a1n   |
      | 0   λ0 ...  0    a2,p+1   ...  a2n   |
      | .   .  ...  .    .        ...  .     |
A =   | 0   0  ...  λ0   ap,p+1   ...  apn   |
      | 0   0  ...  0    ap+1,p+1 ...  ap+1,n|
      | .   .  ...  .    .        ...  .     |
      | 0   0  ...  0    an,p+1   ...  ann   |
∑_{i=1}^{p} mi = n. Without restricting generality, we can assume that the first m1 vectors of the base B = {e1, e2, ..., en} are the eigenvectors corresponding to the eigenvalue λ1, the next m2 correspond to λ2, etc. Hence {e1, e2, ..., em1} ⊂ Sλ1, therefore m1 ≤ dim Sλ1. But then we can consider the set B = {e1, e2, ..., em1, em1+1, ..., em1+m2, ...}, with the convention that the first m1 vectors form a base in Sλ1, the next m2 vectors a base in Sλ2, and so on.
Because Sλi ∩ Sλj = {0} (i ≠ j) and ∑_{i=1}^{p} mi = n, B is a base in Vn, with
      | λ1 ... 0               |
      | .  ... .               |
      | 0  ... λ1              |
A =   |           ...          |
      |              λp ... 0  |
      |              .  ... .  |
      |              0  ... λp |
each eigenvalue λi appearing mi times on the main diagonal.
dimension of the subspace Sλi can be found by identifying the subspace Sλi itself.
a) If dim Sλi = mi, ∀ i = 1, p, then T is diagonalizable. The matrix associated to T with regard to the base formed of eigenvectors is a diagonal matrix having on the main diagonal the eigenvalues, each written as many times as its multiplicity order.
We can check this result by constructing the matrix T = [v1, v2, ..., vn], having as columns the coordinates of the eigenvectors, and computing
D = T⁻¹AT = diag(λ1, ..., λp),
each eigenvalue appearing according to its multiplicity.
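As a numerical sketch of the verification D = T⁻¹AT (the sample matrix is an arbitrary choice with eigenvalues 3 and 1, not from the text):

```python
import numpy as np

# Build T from eigenvector columns and check D = T^{-1} A T.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, T = np.linalg.eig(A)       # columns of T are eigenvectors

D = np.linalg.inv(T) @ A @ T
assert np.allclose(D, np.diag(eigvals))
```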
If dim Sλi < mi for at least one eigenvalue λi ∈ K, the endomorphism T is not diagonalizable, but we can still determine a base of the vector space Vn with respect to which the endomorphism T has a more general canonical form, called the Jordan form.
The vector e1 is an eigenvector and the vectors e2, e3, ..., ep are called
main vectors.
Remarks
1° The diagonal form of a diagonalizable endomorphism is a particular case of the Jordan canonical form, having all Jordan cells of order one.
2° The Jordan canonical form is not unique. The order of the Jordan cells on the main diagonal depends on the chosen order of the main vectors and the eigenvectors with regard to the given base.
3° The number of the Jordan cells, equal to the number of linearly independent eigenvectors, as well as their orders, are uniquely determined.
The following theorem can be proven:
21
Using the matrix T having as columns the coordinates of the vectors of the constructed base (the eigenvectors and their associated main vectors, in this order), we obtain the matrix J = T⁻¹AT, the Jordan canonical form, which contains on the main diagonal the Jordan cells, in the order in which they appear in the constructed base. Each Jordan cell has order equal to the number of vectors in the system formed by the corresponding eigenvector and its associated main vectors.
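Remark 3° above can be checked numerically: the number of linearly independent eigenvectors for λ is the geometric multiplicity n − rank(A − λI). For the classic 2×2 cell below (an illustrative matrix, not from the text) there is a double eigenvalue 2 but only one independent eigenvector, hence a single Jordan cell of order 2:

```python
import numpy as np

# Number of Jordan cells for λ = geometric multiplicity of λ.
A = np.array([[2.0, 1.0],
              [0.0, 2.0]])
lam = 2.0
n = A.shape[0]
geometric_multiplicity = n - np.linalg.matrix_rank(A - lam * np.eye(n))
assert geometric_multiplicity == 1   # one Jordan cell of order 2
```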
Examples.
1° The identity transformation T: V → V, T(x) = x, is an orthogonal transformation.
2° The transformation T: V → V that associates to each vector x ∈ V its opposite, T(x) = −x, is an orthogonal transformation.
Proof. If T is orthogonal, then <Tx, Ty> = <x, y>, which for x = y becomes
<Tx, Tx> = <x, x> ⇔ ||Tx||² = ||x||² ⇔ ||Tx|| = ||x||.
Conversely, using the relation <a, b> = ¼ [||a + b||² − ||a − b||²], we have
<Tx, Ty> = ¼ [||Tx + Ty||² − ||Tx − Ty||²] =
= ¼ [||T(x + y)||² − ||T(x − y)||²] =
= ¼ [||x + y||² − ||x − y||²] = <x, y> , q.e.d.
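The equivalence between preserving the inner product and preserving the norm can be illustrated with a rotation matrix, which is orthogonal (the angle and vectors are arbitrary choices):

```python
import numpy as np

# A rotation matrix Q is orthogonal: it preserves <.,.> and ||.||.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, -2.0])
y = np.array([3.0,  0.5])
assert np.isclose((Q @ x) @ (Q @ y), x @ y)                  # <Tx,Ty> = <x,y>
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))  # ||Tx|| = ||x||
```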
5.3 Consequence. An orthogonal transformation T: V → W is an injective linear transformation.
Let us consider, in the finite-dimensional Euclidean vector spaces Vn and Wm, the orthonormal bases B = {e1, e2, …, en} and B′ = {f1, f2, …, fm} respectively, and the orthogonal linear transformation T: Vn → Wm.
With regard to the orthonormal bases B and B′, the linear transformation is characterized by the matrix A ∈ M_{m×n}(R), through the following relations:
T(ej) = ∑_{k=1}^{m} akj fk , ∀ j = 1, n
∑_{k=1}^{m} aki akj = δij , i, j = 1, n   (5.3)
24
Indeed, for x = ∑_{i=1}^{n} xi ei and y = ∑_{j=1}^{n} yj ej we have
<T(x), T(y)> = ∑_{i,j=1}^{n} xi yj <T(ei), T(ej)> = ∑_{i,j=1}^{n} xi yj ∑_{k=1}^{m} aki akj = ∑_{i,j=1}^{n} xi yj δij = ∑_{i=1}^{n} xi yi = <x, y>,
meaning <T(x), T(y)> = <x, y>, so T is an orthogonal transformation.
We have also proven the following theorem:
Remarks
1° Since an orthogonal transformation is injective, if B ⊂ Vn is a base, then its image through the orthogonal transformation T: Vn → Wm is a base of Im T ⊂ Wm. Consequently n ≤ m.
2° Any orthogonal transformation between two Euclidean vector spaces of the same dimension is an isomorphism of Euclidean vector spaces.
3° The matrix associated to the composition of two orthogonal transformations T1, T2: Vn → Vn with regard to an orthonormal base is the product of the matrices associated to T1 and T2. Thus, the orthogonal group of the Euclidean vector space Vn, the group GO(Vn), is isomorphic to the multiplicative group GO(n; R) of the orthogonal matrices of order n.
Proof. Let λ0 be an arbitrary root of the characteristic equation det(A − λI) = 0 and X = ᵗ(x1, x2, …, xn) an eigenvector corresponding to the eigenvalue λ0.
Denoting by X̄ the conjugate of X and multiplying the equation (A − λ0I)X = 0 on the left by ᵗX̄, we obtain ᵗX̄AX = λ0 ᵗX̄X.
Since A is real and symmetric, the left member of the equality is real: its conjugate is ᵗXAX̄, and transposing this 1 × 1 matrix gives ᵗ(ᵗXAX̄) = ᵗX̄ ᵗAX = ᵗX̄AX, so the number equals its own conjugate.
Also, ᵗX̄X ≠ 0 is a real number, and λ0 is the quotient of two real numbers, hence it is real.
Let the eigenvector v1 correspond to the eigenvalue λ1 ∈ R, let S1 be the subspace generated by v1 and Vn−1 ⊂ Vn its orthogonal complement, Vn = S1 ⊕ Vn−1.
we obtain:
<w, T(v)> = λi <w, v> and <T(w), v> = λj <w, v>. Since T is symmetric, we have (λj − λi)<w, v> = <T(w), v> − <w, T(v)> = 0. Thus λj ≠ λi ⇒ <w, v> = 0. q.e.d.
Indeed, the first m1 vectors belong to the eigensubspace Sλ1 of the characteristic root λ1 of multiplicity m1, and so on, until all eigenvalues are used up: m1 + m2 + ... + mp = n.
With regard to the orthonormal base of these eigenvectors, the endomorphism T: Vn → Vn has canonical form. The matrix associated to the endomorphism T in this base is diagonal, having on the main diagonal the eigenvalues, each written as many times as its multiplicity order, and it is expressed in terms of the matrix A associated to the endomorphism T in the orthonormal base B through the relation D = ᵗΩ A Ω, where Ω is the orthogonal passing matrix from the base B to the base formed of eigenvectors.
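A numerical sketch of D = ᵗΩAΩ for a sample real symmetric matrix, using np.linalg.eigh, which returns real eigenvalues and an orthogonal matrix of eigenvectors:

```python
import numpy as np

# For a real symmetric A, the eigenvector matrix Omega is orthogonal
# and D = Omega^T A Omega is diagonal.
A = np.array([[4.0, 1.0],
              [1.0, 4.0]])
eigvals, Omega = np.linalg.eigh(A)

assert np.allclose(Omega.T @ Omega, np.eye(2))   # Omega is orthogonal
D = Omega.T @ A @ Omega
assert np.allclose(D, np.diag(eigvals))
```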
A ∈ E and A′ ∈ E′ respectively, and by a linear transformation T: V → V′.
We shall consider the Euclidean punctual space of free vectors E3 = (E3, V3, φ) and the Cartesian reference frame R = (O, i, j, k) in this space.
An affine transformation f: E3 → E3, f(M) = M′, realizes the correspondence M(x1, x2, x3) → M′(x1′, x2′, x3′), characterized by the relations
xi = ∑_{j=1}^{3} aij xj′ + xi⁰ , det(aij) ≠ 0   (5.5)
and written in matrix form
X = AX′ + X⁰ , det A ≠ 0   (5.5)′
An affine transformation can also be interpreted as a change of affine reference frames. The set of affine transformations forms a group with regard to the composition of applications, called the affinity group.
When studying the properties of geometrical spaces, a special interest exists for those transformations of the space that do not deform figures. For that matter, we will present some examples of linear affine transformations having the above-mentioned property.
Examples:
1. The application sO: E3 → E3 defined through sO(O) = O, with O ∈ E3 fixed, and sO(P) = P′ with the property OP′ = −OP, is an affine transformation called the symmetry of center O. The associated linear transformation T: V3 → V3 is defined by the relation T(v) = −v.
2. If d ⊂ E3 is a straight line and P a point not contained in the line d, then there exists one and only one point P′ ∈ E3 with the property PP′ ⊥ d such that the middle point of the segment PP′ is on the line d.
The application sd: E3 → E3, sd(P) = P′, with P′ defined above, is called the axial symmetry of axis d. If P0 is the orthogonal projection of the point P on the line d, then we have the affine combination P′ = 2P0 − P. The associated linear transformation T: V3 → V3 is given by the relation T(v) = 2 pr_d v − v.
identical application T ( v ) = v .
4. Let E2 = (E2, V2, φ) be a two-dimensional punctual Euclidean space.
The application rO,α: E2 → E2, rO,α(P) = P′, with the properties δ(O, P′) = δ(O, P) and ∠POP′ = α, is called the rotation of center O and angle α.
The associated linear transformation T: V2 → V2, T(v) = v′, is characterized by an orthogonal matrix.
In the geometry of Euclidean spaces we are foremost interested in those geometrical transformations which preserve certain properties of the figures in the considered space. In other words, we will consider certain subgroups of the affinity group of the Euclidean space which govern these transformations.
is an orthogonal transformation.
The geometrical transformations given in the previous examples
(central symmetry, axial symmetry, translation and rotation) are isometrical
transformations.
Let us consider the Euclidean plane PE = (E2, IzoE2), with R = (O, i, j) a Cartesian orthonormal reference frame in the punctual Euclidean space E2 = (E2, V2, φ), and g: E2 → E2 an isometry with O as a fixed point. Consider the point A(1, 0) and an arbitrary point M(x, y), having the images A′(a, b) and M′(x′, y′) respectively.
With the conditions δ(O, A′) = δ(O, A), δ(O, M′) = δ(O, M) and δ(A′, M′) = δ(A, M), we obtain the system of equations
a² + b² = 1
x² + y² = x′² + y′²   (5.7)
(x − 1)² + y² = (x′ − a)² + (y′ − b)²
The general solution is:
x′ = ax − εby
y′ = bx + εay , with a² + b² = 1, ε = ±1   (5.8)
If ε = 1 and a ≠ 1, the isometry (5.8) admits only the origin as a fixed point and represents a rotation; for ε = 1 and a = 1, the application (5.8) becomes the identity transformation.
The equations (5.8) can be written in matrix form as follows:
| x′ |   | a  −εb | | x |
| y′ | = | b   εa | | y | , a² + b² = 1, ε = ±1   (5.8)′
The associated matrix
A = | a  −εb |
    | b   εa | ,
under the conditions a² + b² = 1, ε = ±1, is an orthogonal matrix, which means that the subgroup of isometries of the Euclidean plane having the origin as a fixed point is isomorphic to the orthogonal group GO(2; R).
For ε = 1 (considering the identity application as the rotation of angle α = 0), the orthogonal matrix A has the property det A = 1, which means that the subgroup of rotations of the Euclidean plane is isomorphic to the special orthogonal subgroup SO(2; R).
By composing the isometries having the origin as a fixed point with the translations which transform the origin into the point (x0, y0), we obtain the isometries of the Euclidean plane PE = (E2, IzoE2), characterized analytically by the equations:
x′ = ax − εby + x0
y′ = bx + εay + y0 , with a² + b² = 1, ε = ±1   (5.10)
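Equations (5.10) with ε = 1 describe a rotation followed by a translation; such a map preserves distances, which can be checked numerically (the parameters a, b, x0, y0 and the points are illustrative choices):

```python
import numpy as np

# A planar isometry of form (5.10) with eps = 1 (rotation + translation).
a, b = np.cos(0.5), np.sin(0.5)       # a^2 + b^2 = 1
x0, y0 = 2.0, -1.0

def iso(p):
    x, y = p
    return np.array([a * x - b * y + x0, b * x + a * y + y0])

P = np.array([1.0, 2.0])
Q = np.array([-3.0, 0.5])
# Distances are preserved: delta(f(P), f(Q)) = delta(P, Q).
assert np.isclose(np.linalg.norm(iso(P) - iso(Q)), np.linalg.norm(P - Q))
```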
The isometries T: E3 → E3, T(x1, x2, x3) = (y1, y2, y3), are actually the transformations characterized by the equations
y1 = a11x1 + a12x2 + a13x3 + b1
y2 = a21x1 + a22x2 + a23x3 + b2
y3 = a31x1 + a32x2 + a33x3 + b3 ,   (5.13)
where the matrix A = (aij), i, j = 1, 3, is an orthogonal matrix, and (b1, b2, b3) are the coordinates of the translation of the origin of the orthonormal Cartesian reference frame R(O; i, j, k) of E3.
Problems
1. Find out which of the following applications are linear transformations:
T: R2→ R2 , T(x1,x2) = (x1-x2,x1+x2)
T: R2→ R2, T(x1,x2) = (x1cosα - x2sinα , x1sinα + x2cosα ), α∈ R
T: R3→ R3, T(x1,x2,x3) = (x1-2x2+3x3, 2x3 , x1+x2 )
T: R2 → R3, T(x1, x2) = (x1 − x2, x1 + x2, 0)
T: R2 → R3, T(x1, x2) = (x1x2, x1, x2).
2. Prove that the following transformations are linear:
a) T: V3 → R, T(v) = a · v, with a ∈ V3 fixed
b) T: C⁽ⁿ⁾[0,1] → C⁽⁰⁾[0,1], Tf = ∑_{i=1}^{n} αi f⁽ⁱ⁾, αi ∈ R
c) T: C[0,1] → C[0,1], (Tf)(x) = ∫_a^x f(t) dt
T(x1, x2, x3) = (−x1 + x2 + x3, x1 − x2 + x3, 2x3, x1 − x2) and verify the relation dim Ker T + dim Im T = 3.
7. Show that the application T: V3 → V3, T(v) = a × v, with a ∈ V3 fixed, is a linear transformation.
10. Show that the transformation T: R²ⁿ → R²ⁿ defined by the relation T(x) = (xn+1, xn+2, …, x2n, −x1, −x2, …, −xn) has the property T² = −Id_{R²ⁿ}.
A transformation with this property is called a complex structure.
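In the canonical base, the transformation of problem 10 is given by the block matrix J = [[0, I], [−I, 0]]; a numerical check for n = 2:

```python
import numpy as np

# Complex structure on R^{2n}: J = [[0, I], [-I, 0]], so J^2 = -Id.
n = 2
I = np.eye(n)
Z = np.zeros((n, n))
J = np.block([[Z, I],
              [-I, Z]])
assert np.allclose(J @ J, -np.eye(2 * n))    # T^2 = -Id
```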
11. Determine the linear transformation which transforms the point (x1, x2, x3) ∈ R³ into its symmetric point with respect to the plane x1 + x2 + x3 = 0, and show that the transformation so determined is orthogonal.
11. Determine the eigenvectors and the eigenvalues of the endomorphism T: R³ → R³ characterized by each of the matrices
    | 2 1 1 |       | 0 1 1 |       | 2 0 0 |       | −3 −7 −5 |
A = | 1 2 1 | , A = | 1 0 1 | , A = | 0 1 0 | , A = |  2  4  3 | .
    | 1 1 2 |       | 1 1 0 |       | 0 1 1 |       |  1  2  2 |
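For the first matrix of the problem (which is symmetric, so np.linalg.eigh applies), note that A = I + J with J the all-ones matrix, hence the eigenvalues are 4, 1, 1; a numerical check:

```python
import numpy as np

# Eigenvalues/eigenvectors of the first matrix of the problem.
A = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])
eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
assert np.allclose(eigvals, [1.0, 1.0, 4.0])
assert np.allclose(eigvecs @ np.diag(eigvals) @ eigvecs.T, A)
```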
12. Study the possibility of reduction to the canonical form and, if possible, find the diagonalizing matrix for the endomorphisms with the following associated matrices:
    | 5 4 |       | 4 0 0 |       | 2 −2 3 |       | 1 0  0  1 |
A = | 4 5 | , A = | 0 0 1 | , A = | 1  1 1 | , A = | 0 1  0  0 | .
                  | 0 1 2 |       | 1  3 1 |       | 0 0  1 −2 |
                                                   | 1 0 −2  5 |
13. Determine the Jordan canonical form for the following matrices:
    | 2 0 0 |       | 2 3 −5 |       |  2  0  0  0 |
A = | 0 4 1 | , A = | 2 4 −7 | , A = |  1  3  1  1 | .
    | 0 0 0 |       | 1 2 −3 |       |  0 −1 −4  0 |
                                     | −1 −1  0  2 |
14. Determine an orthonormal base of the space R³ with regard to which the endomorphism T: R³ → R³, T(x) = (−x1 + x2 − 4x3, 2x1 − 4x2 − 2x3, −4x1 − 2x2 − x3), admits the canonical form.