
#1.3.19
Suppose W1 ∪ W2 is a subspace of V and W1 ⊈ W2. Then there exists u ∈ W1 \ W2. Given any v ∈ W2, since W1 ∪ W2
is a subspace, u + v ∈ W1 ∪ W2. If u + v ∈ W2, then u = (u + v) − v ∈ W2, a contradiction. Therefore,
u + v ∈ W1 and v = (u + v) − u ∈ W1, so we obtain W2 ⊆ W1.
The converse is obvious: if one subspace contains the other, the union is the larger subspace.
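The failure of closure can be seen concretely. A minimal sketch (my own example, not part of the exercise): in ℝ², take W1 the x-axis and W2 the y-axis; neither contains the other, and the union is not closed under addition.

```python
# Hypothetical example: W1 = x-axis, W2 = y-axis in R^2.
# Neither subspace contains the other, so by the result above
# W1 ∪ W2 cannot be a subspace -- closure under addition fails.

def in_w1(v):
    return v[1] == 0  # x-axis: second coordinate zero

def in_w2(v):
    return v[0] == 0  # y-axis: first coordinate zero

def in_union(v):
    return in_w1(v) or in_w2(v)

u = (1, 0)                       # u ∈ W1 \ W2
v = (0, 1)                       # v ∈ W2
s = (u[0] + v[0], u[1] + v[1])   # u + v = (1, 1)

assert in_union(u) and in_union(v)
assert not in_union(s)           # the union misses u + v
```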

#1.3.30
Suppose V = W1 ⊕ W2 and u ∈ V. Since V = W1 + W2, there exist w1 ∈ W1, w2 ∈ W2 such that
u = w1 + w2. If there is another pair w1′ ∈ W1, w2′ ∈ W2 such that u = w1′ + w2′, then w1 + w2 = w1′ + w2′, so
w1 − w1′ = w2′ − w2 ∈ W1 ∩ W2 = {0}. Hence w1 = w1′ and w2 = w2′, which means that the representation
must be unique.
Conversely, since each vector in V can be expressed as a sum of elements of W1 and W2, V = W1 + W2. It
remains to show W1 ∩ W2 = {0}. Suppose u ∈ W1 ∩ W2. Then u has the two representations u + 0 and 0 + u,
where u in the former is regarded as an element of W1 and u in the latter as an element of W2. Since the
representation is unique, u must be 0.

#1.4.17
We shall prove: W has only finitely many distinct generating sets if and only if W has finitely many elements.
Suppose W is finite. Note that any generating set of W is a subset of W and there are only 2^|W| subsets of
W. Hence, the number of distinct generating sets of W must be finite (at most 2^|W|).
Conversely, assume that W has infinitely many elements. For each u ∈ W, put Wu = W \ {u}. We shall
show that Wu is a generating set of W. Choose a nonzero v ∈ W with v ≠ u; then u + v ∈ Wu, v ∈ Wu, and
u = (u + v) − v ∈ span(Wu), which means W = span(Wu). Since W has infinitely many elements, the sets Wu
give infinitely many distinct generating sets of W.

#1.5.20
Suppose there exist a, b ∈ F satisfying af + bg = 0. Then 0 = (af + bg)(0) = a + b and 0 = (af + bg)(1) = ae^r + be^s.
Combining these two equations (b = −a), we obtain ae^r = ae^s. The condition r ≠ s implies that a must be zero, and hence so is b.
Therefore f, g are linearly independent.
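Here f(t) = e^{rt} and g(t) = e^{st}. The two evaluations form a linear system in (a, b) that collapses as in the proof; a quick numeric sketch (the sample values r = 1, s = 2 are my own choice):

```python
import math

# Evaluating af + bg at t = 0 gives a + b = 0, so b = -a.
# Evaluating at t = 1 then gives a*(e^r - e^s) = 0.
r, s = 1.0, 2.0
factor = math.exp(r) - math.exp(s)
assert factor != 0   # nonzero precisely because r != s
# Hence a = 0, and then b = -a = 0: f and g are linearly independent.
```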

#1.6.29
(a) It suffices to show dim(W1 + W2) = dim(W1) + dim(W2) − dim(W1 ∩ W2).
Since W1 is finite-dimensional, W1 ∩ W2 is finite-dimensional. Choose a basis β = {u1, . . . , uk} of W1 ∩ W2.
By the Replacement Theorem, β can be extended to a basis β1 = {u1, . . . , uk, v1, . . . , vm} for W1 and a basis
β2 = {u1, . . . , uk, w1, . . . , wp} for W2. We shall show γ = {u1, . . . , uk, v1, . . . , vm, w1, . . . , wp} is a basis for
W1 + W2.
Given x1 ∈ W1, x2 ∈ W2, write

x1 = (a1 u1 + . . . + ak uk) + (ak+1 v1 + . . . + ak+m vm)

and

x2 = (b1 u1 + . . . + bk uk) + (bk+1 w1 + . . . + bk+p wp).

Then

x1 + x2 = [(a1 + b1) u1 + . . . + (ak + bk) uk] + (ak+1 v1 + . . . + ak+m vm) + (bk+1 w1 + . . . + bk+p wp) ∈ span(γ).

It remains to show γ is linearly independent.
Suppose there exist a1, . . . , ak, b1, . . . , bm, c1, . . . , cp ∈ F such that

(a1 u1 + . . . + ak uk) + (b1 v1 + . . . + bm vm) + (c1 w1 + . . . + cp wp) = 0.   (1)

Then

(a1 u1 + . . . + ak uk) + (b1 v1 + . . . + bm vm) = −(c1 w1 + . . . + cp wp).

The element on the left-hand side lies in W1 and the element on the right-hand side lies in W2. It follows that both
elements are actually in W1 ∩ W2, and thus can be written as linear combinations of elements of β.
Write

(a1 u1 + . . . + ak uk) + (b1 v1 + . . . + bm vm) = d1 u1 + . . . + dk uk;

we obtain

[(a1 − d1) u1 + . . . + (ak − dk) uk] + (b1 v1 + . . . + bm vm) = 0.

Since β1 is a basis, b1 = b2 = . . . = bm = 0, and (1) becomes

(a1 u1 + . . . + ak uk) + (c1 w1 + . . . + cp wp) = 0.

Since β2 is a basis, a1 = a2 = . . . = ak = 0 = c1 = . . . = cp. This completes the proof.

(b) Since V = W1 + W2, by (a), dim(V) = dim(W1) + dim(W2) − dim(W1 ∩ W2). Then

V = W1 ⊕ W2 ⟺ W1 ∩ W2 = {0}
            ⟺ dim(W1 ∩ W2) = 0
            ⟺ dim(V) = dim(W1) + dim(W2).
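The formula in (a) can be checked on a small numeric example (the subspaces below are my own illustration): with W1 = span{e1, e2} and W2 = span{e2, e3} in ℝ³, W1 ∩ W2 = span{e2}, and each dimension is the rank of a matrix of spanning vectors, computed by exact Gaussian elimination.

```python
from fractions import Fraction

def rank(rows):
    """Rank of a matrix (list of rows) via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

W1 = [[1, 0, 0], [0, 1, 0]]   # span{e1, e2}
W2 = [[0, 1, 0], [0, 0, 1]]   # span{e2, e3}
intersection = [[0, 1, 0]]    # W1 ∩ W2 = span{e2}

# dim(W1 + W2) = dim(W1) + dim(W2) - dim(W1 ∩ W2):  3 = 2 + 2 - 1
assert rank(W1 + W2) == rank(W1) + rank(W2) - rank(intersection) == 3
```

Stacking the two spanning sets (`W1 + W2` as list concatenation) spans the sum, so its rank is dim(W1 + W2).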

#1.7.2
Let V be the set of all convergent sequences in ℝ. By the Replacement Theorem and its consequences, it suffices to
show that V contains an infinite linearly independent subset S. Put

S = {(1, 0, 0, 0, . . .), (0, 1, 0, 0, . . .), (0, 0, 1, 0, . . .), . . .};

each of these sequences converges (to 0), and S is infinite and linearly independent, so S is the desired set.

#2.1.14
(a) Suppose T is one-to-one and S is a linearly independent subset of V. We shall show T(S) is linearly
independent. Let a1, . . . , an ∈ F and T(u1), . . . , T(un) ∈ T(S), with uj ∈ S, be such that a1 T(u1) + . . . + an T(un) = 0.
Then T(a1 u1 + . . . + an un) = 0. Since T is one-to-one, a1 u1 + . . . + an un = 0. Since S is linearly independent,
a1 = . . . = an = 0, which implies T(S) is linearly independent.

Conversely, suppose u ∈ N(T) with u ≠ 0. Then {u} is linearly independent, and so is {T(u)}. But {T(u)} = {0}, a
contradiction. Thus N(T) = {0} and T is one-to-one.

(b) Suppose S is linearly independent. Since T is one-to-one, by (a), T(S) is linearly independent.
Conversely, suppose a1, . . . , an ∈ F and u1, . . . , un ∈ S satisfy a1 u1 + . . . + an un = 0. Then 0 = T(a1 u1 +
. . . + an un) = a1 T(u1) + . . . + an T(un). Note that T(uj) ∈ T(S) and T(S) is linearly independent. Hence
a1 = a2 = . . . = an = 0, which implies S is linearly independent.

(c) Since T is one-to-one, T(β) is linearly independent. Since T is onto, W = R(T) = span(T(β)). Thus
T(β) is a basis for W.

#2.1.27
(a) It suffices to show that there exists a subspace W′ of V such that V = W ⊕ W′.
Since V is finite-dimensional, W is also finite-dimensional. Choose a basis γ = {w1, . . . , wk} for W.
By the Replacement Theorem, γ can be extended to a basis β = {w1, . . . , wk, u1, . . . , un} for V. Put W′ =
span({u1, . . . , un}); we shall show V = W ⊕ W′.
Given v ∈ V, there exist a1, . . . , ak, b1, . . . , bn ∈ F such that v = (a1 w1 + . . . + ak wk) + (b1 u1 + . . . + bn un).
Since a1 w1 + . . . + ak wk ∈ W and b1 u1 + . . . + bn un ∈ W′, v ∈ W + W′. Hence V = W + W′.
If v′ ∈ W ∩ W′, then there exist a1, . . . , ak, b1, . . . , bn ∈ F such that (a1 w1 + . . . + ak wk) = v′ = (b1 u1 + . . . + bn un).
From this equation we obtain (a1 w1 + . . . + ak wk) − (b1 u1 + . . . + bn un) = 0. Since β is a basis, a1 = . . . = ak =
0 = b1 = . . . = bn, which implies v′ = 0. Hence W ∩ W′ = {0} and V = W ⊕ W′.

(b) Let V = ℝ² and let W be the x-axis. Put W1 = {(t, t) : t ∈ ℝ} and W2 = {(0, t) : t ∈ ℝ}. Then W ⊕ W1 = V =
W ⊕ W2. The projections on W along these two subspaces are distinct.

#2.1.31
(a) Given w ∈ W. Since W is T-invariant, T(w) ∈ W; also T(w) ∈ R(T). Since R(T) ∩ W = {0},
T(w) = 0.

(b) Since V is finite-dimensional, by the Dimension Theorem, dim V = dim R(T) + dim N(T). Since
V = R(T) ⊕ W, by #1.6.29(a), dim V = dim R(T) + dim W. Combining these equations, we obtain dim N(T) = dim W.
By (a), W ⊆ N(T); since W and N(T) have the same finite dimension, W = N(T).

(c) Consider the left-shift operator (see #2.1.21) on the space of sequences introduced in Example 1.2.5.

#2.1.35
(a) Since V is finite-dimensional, by the Dimension Theorem, dim V = dim R(T) + dim N(T). Also, since
V = R(T) + N(T), by #1.6.29(a), dim V = dim(R(T) + N(T)) = dim R(T) + dim N(T) − dim(R(T) ∩ N(T)). Combining these
two equations, we get dim(R(T) ∩ N(T)) = 0. Hence, R(T) ∩ N(T) = {0} and V = R(T) ⊕ N(T).

(b) Since R(T) ∩ N(T) = {0}, dim(R(T) + N(T)) = dim R(T) + dim N(T) − dim(R(T) ∩ N(T)) = dim R(T) + dim N(T) = dim V.
Hence, V = R(T) + N(T), and therefore V = R(T) ⊕ N(T).

#2.2.11
Choose a basis γ = {u1, . . . , uk} for W and extend it to a basis β = {u1, . . . , uk, uk+1, . . . , un} for V. Let
[T]β = [aij]; then T(uj) = a1j u1 + . . . + anj un. Note that since W is T-invariant, we have T(uj) ∈ W, for all
1 ≤ j ≤ k. Therefore, T(uj) can be written as a linear combination of elements of γ, for all 1 ≤ j ≤ k. Since
each element can be written uniquely as a linear combination of elements of β, T(uj) = a1j u1 + . . . + akj uk +
0 uk+1 + . . . + 0 un, for all 1 ≤ j ≤ k, which leads to

        [ A  B ]
[T]β =  [ O  C ] .
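Concretely (a hypothetical operator of my own choosing): for T on ℝ³ whose matrix in a basis {u1, u2, u3} is the one below, with W = span{u1, u2} T-invariant and k = 2, the zero block O occupies rows k+1, . . . , n of the first k columns.

```python
# Hypothetical matrix of T in a basis whose first k = 2 vectors span a
# T-invariant subspace W: column j holds the coordinates of T(u_j).
M = [[1, 2, 5],
     [3, 4, 6],
     [0, 0, 7]]
k, n = 2, 3

# Invariance of W is exactly the zero block: M[i][j] = 0 for i >= k, j < k.
assert all(M[i][j] == 0 for i in range(k, n) for j in range(k))

def apply(M, v):
    """Apply T to a coordinate vector v."""
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

# A vector of W (last coordinate 0) is mapped back into W.
w = [1, -1, 0]
assert apply(M, w)[2] == 0
```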

#2.2.13
Suppose there exist a, b ∈ F such that aT + bU = 0. Note first that if a ≠ 0, b must also be nonzero, since T
is a nonzero transformation; similarly, if a = 0, then b must be zero, since U is nonzero. Assume a ≠ 0 and choose
u ∈ V \ N(T); then T(u) = −(b/a) U(u) = U(−(b/a) u) ∈ R(T) ∩ R(U) = {0}. This implies T(u) = 0, which is a
contradiction. Therefore, a = 0 = b, and {U, T} is linearly independent.

#2.2.15
(a) 0 ∈ S⁰ is clear. For T, U ∈ S⁰ and c ∈ F, (cT + U)(x) = cT(x) + U(x) = 0, for all x ∈ S. Therefore,
cT + U ∈ S⁰.

(b) Let T ∈ S2⁰. For any x ∈ S1, since S1 ⊆ S2, x lies in S2. Hence T(x) = 0, and so T ∈ S1⁰.

(c) Since V1 ⊆ V1 + V2 and V2 ⊆ V1 + V2, by (b), (V1 + V2)⁰ ⊆ V1⁰ ∩ V2⁰. For any T ∈ V1⁰ ∩ V2⁰ and any
x1 + x2 ∈ V1 + V2 with x1 ∈ V1, x2 ∈ V2, T(x1 + x2) = T(x1) + T(x2) = 0. Therefore, T ∈ (V1 + V2)⁰. By the
argument above, we obtain (V1 + V2)⁰ = V1⁰ ∩ V2⁰.

#2.2.16
Let α = {u1, . . . , uk} be a basis of N(T) and extend it to a basis β = {u1, . . . , uk, uk+1, . . . , un} of V. Then

R(T) = span(T(β))
     = span({T(u1), . . . , T(uk), T(uk+1), . . . , T(un)})
     = span({0, . . . , 0, T(uk+1), . . . , T(un)})
     = span({T(uk+1), . . . , T(un)}).

By the Dimension Theorem, dim R(T) = dim V − dim N(T) = n − k. Therefore, by the Replacement Theorem,
γ′ = {T(uk+1), . . . , T(un)} is a basis for R(T). Extend γ′ to a basis γ = {w1, . . . , wk, T(uk+1), . . . , T(un)} for
W. Then, with respect to β and γ,

       [ O      O    ]
[T] =  [ O  I_{n−k}  ]

is a diagonal matrix.

#2.3.11
Suppose T² = 0. For any y ∈ R(T), there exists x ∈ V such that y = T(x). Then T(y) = T(T(x)) = T²(x) = 0, which
implies y ∈ N(T); hence R(T) ⊆ N(T).
Conversely, suppose R(T) ⊆ N(T). For any x ∈ V, T(x) ∈ R(T) ⊆ N(T), so T²(x) = T(T(x)) = 0. Hence, T² = 0.
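A minimal sketch with a hypothetical operator of my choosing: T(x, y) = (y, 0) on ℝ² has R(T) = x-axis = N(T), so R(T) ⊆ N(T) and, equivalently, T² = 0.

```python
# T(x, y) = (y, 0): the nilpotent matrix [[0, 1], [0, 0]] acting on R^2.
def T(v):
    return (v[1], 0)

for v in [(1, 2), (-3, 5), (0, 7)]:
    y = T(v)                  # y ∈ R(T): always of the form (t, 0)
    assert T(y) == (0, 0)     # so y ∈ N(T), i.e. R(T) ⊆ N(T)
    assert T(T(v)) == (0, 0)  # equivalently, T^2 = 0
```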

#2.3.12
(a) Suppose x ∈ V satisfies T(x) = 0; then U(T(x)) = (UT)(x) = 0. Since UT is one-to-one, x = 0, which
shows that T is one-to-one.
Consider V = Z = ℝ and W = ℝ². Define T(x) = (x, 0) and U(x, y) = x. Then UT is one-to-one, but U is
NOT one-to-one.

(b) Given z ∈ Z, we shall show that there exists y ∈ W such that U(y) = z. Since UT is onto, there exists
x ∈ V such that UT(x) = z. Put y = T(x) ∈ W; then U(y) = U(T(x)) = UT(x) = z. Hence U is onto.
In the example from (a), UT is onto, but T is not onto.

(c) Suppose x ∈ V satisfies UT(x) = 0. Then 0 = UT(x) = U(T(x)). Since U is one-to-one, T(x) = 0. Since
T is one-to-one, x = 0. Hence UT is one-to-one. For any z ∈ Z, since U is onto, there exists y ∈ W such that
U(y) = z. Since T is onto, there exists x ∈ V such that T(x) = y. Therefore, z = U(y) = U(T(x)) = UT(x), so
UT is onto.

#2.3.12 (general proof, valid for arbitrary functions; one should compare this to the proof above)

(a) Suppose x1, x2 ∈ V satisfy T(x1) = T(x2); then UT(x1) = U(T(x1)) = U(T(x2)) = UT(x2). Since UT is
one-to-one, x1 = x2, which shows that T is one-to-one.

(b) Given z ∈ Z, we shall show that there exists y ∈ W such that U(y) = z. Since UT is onto, there exists
x ∈ V such that UT(x) = z. Put y = T(x) ∈ W; then U(y) = U(T(x)) = UT(x) = z. Hence U is onto.

(c) Suppose x1, x2 ∈ V satisfy UT(x1) = UT(x2); then U(T(x1)) = U(T(x2)). Since U is one-to-one,
T(x1) = T(x2). Since T is one-to-one, x1 = x2. Hence UT is one-to-one. For any z ∈ Z, since U is onto,
there exists y ∈ W such that U(y) = z. Since T is onto, there exists x ∈ V such that T(x) = y. Therefore,
z = U(y) = U(T(x)) = UT(x), so UT is onto.

#2.3.16
(a) Since V is finite-dimensional and both T, T² are defined on V, by the Dimension Theorem, dim V =
dim R(T) + dim N(T) and dim V = dim R(T²) + dim N(T²). Since rank(T) = rank(T²), we get dim N(T) =
dim N(T²). Observe that N(T) ⊆ N(T²), which implies N(T) = N(T²). Suppose y ∈ R(T) ∩ N(T); then there
exists x ∈ V such that y = T(x). It follows that 0 = T(y) = T(T(x)) = T²(x), so x ∈ N(T²) = N(T). Hence
y = T(x) = 0, and R(T) ∩ N(T) = {0}. By #2.1.35, we conclude V = R(T) ⊕ N(T).
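The hypothesis rank(T) = rank(T²) is essential; two hypothetical 2×2 operators of my own contrast the cases:

```python
def matmul(A, B):
    """Product of two 2x2 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def rank2(A):
    """Rank of a 2x2 matrix via the determinant."""
    (a, b), (c, d) = A
    if a == b == c == d == 0:
        return 0
    return 2 if a * d - b * c != 0 else 1

T1 = [[2, 0], [0, 0]]   # rank(T1) = rank(T1^2) = 1
T2 = [[0, 1], [0, 0]]   # rank(T2) = 1 but rank(T2^2) = 0

assert rank2(T1) == rank2(matmul(T1, T1)) == 1
# Here R(T1) = x-axis, N(T1) = y-axis: V = R(T1) ⊕ N(T1), as (a) predicts.

assert rank2(T2) == 1 and rank2(matmul(T2, T2)) == 0
# Here R(T2) = N(T2) = x-axis: the hypothesis fails and the sum is not direct.
```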

(b) Observe that

R(T) ⊇ R(T²) ⊇ R(T³) ⊇ . . .

and

dim R(T) ≥ dim R(T²) ≥ dim R(T³) ≥ . . . .

Since V is finite-dimensional, say dim V = n, there exists 1 ≤ k ≤ n + 1 such that dim R(T^k) =
dim R(T^{k+1}), which implies R(T^k) = R(T^{k+1}). We shall show that R(T^k) = R(T^j), for all j ≥ k. It
suffices to show R(T^{k+2}) = R(T^{k+1}), since the same argument then applies inductively:

R(T^{k+2}) = {T^{k+2}(x) : x ∈ V}
           = {T(T^{k+1}(x)) : x ∈ V}
           = {T(y) : y ∈ R(T^{k+1})}
           = {T(y) : y ∈ R(T^k)}
           = R(T^{k+1}) = R(T^k).

Therefore, R(T^{2k}) = R(T^k), i.e. rank(T^k) = rank((T^k)²). By (a) applied to T^k, V = R(T^k) ⊕ N(T^k), for some 1 ≤ k ≤ n + 1.

#2.3.17
Suppose T² = T. Then T(x − T(x)) = T(x) − T²(x) = T(x) − T(x) = 0, for all x ∈ V. Hence x =
T(x) + (x − T(x)) ∈ R(T) + N(T), which implies V = R(T) + N(T). If y ∈ R(T) ∩ N(T), then there exists x ∈ V
such that y = T(x). It follows that 0 = T(y) = T(T(x)) = T²(x) = T(x) = y. Hence V = R(T) ⊕ N(T).
Note that for any y ∈ R(T), y = T(x) for some x ∈ V, and T(y) = T²(x) = T(x) = y. Conversely, if y ∈ V satisfies
T(y) = y, then y ∈ R(T). From the above, we conclude R(T) = {y ∈ V : T(y) = y}. Hence, T is exactly the
projection on R(T) along N(T).
Conversely, if V = W1 ⊕ W2 and T is the projection on W1 along W2, then T² = T.
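As an illustration (a hypothetical idempotent of my own): P(x, y) = (x + y, 0) on ℝ² satisfies P² = P, with R(P) the x-axis and N(P) = {(t, −t)}, and the decomposition v = P(v) + (v − P(v)) realizes V = R(P) ⊕ N(P).

```python
# P(x, y) = (x + y, 0): the idempotent matrix [[1, 1], [0, 0]] on R^2.
def P(v):
    return (v[0] + v[1], 0)

for v in [(3, 4), (-1, 2), (0, 0)]:
    assert P(P(v)) == P(v)                   # P^2 = P
    r = P(v)                                 # component in R(P)
    n = (v[0] - r[0], v[1] - r[1])           # component v - P(v) in N(P)
    assert P(r) == r                         # P fixes R(P): R(P) = {y : P(y) = y}
    assert P(n) == (0, 0)                    # P kills N(P)
    assert (r[0] + n[0], r[1] + n[1]) == v   # v = r + n, a direct-sum split
```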

#2.4.9
We first show that L_AB = L_A L_B. Let {e1, . . . , en} be the standard basis of F^n. It suffices to show L_AB(ej) =
L_A(L_B(ej)), for all 1 ≤ j ≤ n.
Write A = [aij], B = [bij] and AB = [cij]. Then

L_AB(ej) = Σ_{i=1}^n c_ij e_i = Σ_{i=1}^n (Σ_{k=1}^n a_ik b_kj) e_i = Σ_{k=1}^n b_kj (Σ_{i=1}^n a_ik e_i)

and

L_A(L_B(ej)) = L_A(Σ_{k=1}^n b_kj e_k) = Σ_{k=1}^n b_kj L_A(e_k) = Σ_{k=1}^n b_kj (Σ_{i=1}^n a_ik e_i).

Hence, L_AB = L_A L_B.
Note that AB is invertible if and only if L_AB is an isomorphism. Since L_AB = L_A L_B is one-to-one and onto,
by #2.3.12, L_B is one-to-one and L_A is onto. By the Dimension Theorem, both L_A and L_B are isomorphisms.
Hence, both A and B are invertible.
Consider

A = [ 1 0 0 ]          [ 1 0 ]
    [ 0 1 0 ]  and B = [ 0 1 ] ;
                       [ 0 0 ]

then AB = I_2, which is invertible. But A and B are not square matrices, so A⁻¹ and B⁻¹ do not exist.
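The non-square counterexample is easy to verify by direct multiplication; note also that BA ≠ I_3, so B is only a one-sided inverse of A.

```python
def matmul(A, B):
    """Product of matrices given as lists of rows (sizes must be compatible)."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 0, 0],
     [0, 1, 0]]     # 2x3
B = [[1, 0],
     [0, 1],
     [0, 0]]        # 3x2

assert matmul(A, B) == [[1, 0], [0, 1]]   # AB = I_2, invertible
assert matmul(B, A) == [[1, 0, 0],
                        [0, 1, 0],
                        [0, 0, 0]]        # BA != I_3: no two-sided inverse
```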

#2.4.10
(a) This follows from #2.4.9 applied to square matrices.

(b) By (a), A is invertible. Since AB = I_n = AA⁻¹, we have B = A⁻¹(AB) = A⁻¹(AA⁻¹) = A⁻¹.

(c) Analogously: suppose T, U are linear operators on a finite-dimensional vector space V. If UT = I_V, then both
U and T are isomorphisms and T = U⁻¹; the proof parallels (a) and (b).

#2.4.20
Suppose β = {u1, . . . , un} and γ = {w1, . . . , wm} are bases for V and W. Let A = [aij] = [T] (with respect to β and γ)
and let {e1, . . . , en}, {f1, . . . , fm} be the standard bases for F^n and F^m, respectively. Then

T(uj) = a1j w1 + . . . + amj wm  and  L_A(ej) = a1j f1 + . . . + amj fm.   (2)

Note that φβ : V → F^n, defined by φβ(uj) = ej for all j, is an isomorphism. By (2), it is clear
that φβ(N(T)) = N(L_A). Therefore, N(T) is isomorphic to N(L_A). By the Dimension Theorem, dim V =
dim N(T) + dim R(T) and dim F^n = dim N(L_A) + dim R(L_A). Since dim V = dim F^n and dim N(T) =
dim N(L_A), we conclude dim R(T) = dim R(L_A), i.e. rank(T) = rank(L_A).
