
Basis and Dimension

Finite-dimensional spaces
Recall we defined a finite-dimensional space as one that
is spanned by a finite number of vectors.
Today we will see how questions about a finite-dimensional
vector space can be translated into questions about
Euclidean space.

Then we can use all the techniques we have developed
for Euclidean space to solve them.

Basis
Let V be a vector space. Vectors v1 , . . . , vr are a finite
basis for V if they are linearly independent and
span({v1 , . . . , vr }) = V.

Note: Every vector v ∈ V can be written as a linear
combination of v1 , . . . , vr in a unique way.

Example
Consider the vector space P2 of polynomials of degree at most 2.
A basis for this space is given by {1, x, x^2 }.


We saw yesterday that these functions are linearly
independent and span P2 .
More generally, a basis for Pn , the vector space of
polynomials of degree at most n, is given by {1, x, x^2 , . . . , x^n }.
This is the canonical basis for Pn .

Example
Consider the vector space P2 of polynomials of degree at most 2.
Another basis for this space is given by {1 + x, x, x^2 − x} .
These functions span P2 because their span contains
the functions 1, x, x^2 :
1 = (1 + x) + (−1) x
x = x
x^2 = (x^2 − x) + x

Example
Consider the vector space P2 of polynomials of degree at most 2.
Another basis for this space is given by {1 + x, x, x^2 − x} .
These functions are linearly independent. Suppose
a(1 + x) + bx + c(x^2 − x) = 0 .
Then
a · 1 + (a + b − c)x + c x^2 = 0 .
By linear independence of 1, x, x^2 this means
a = 0, a + b − c = 0, c = 0
and hence
a = 0, b = 0, c = 0 .
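This independence check can also be done numerically: three polynomials in P2 form a basis exactly when their coefficient vectors in the canonical basis {1, x, x^2} give an invertible 3-by-3 matrix. A small NumPy sketch (not part of the lecture, just an illustration):

```python
import numpy as np

# Coefficient vectors in the canonical basis {1, x, x^2}:
# entry i holds the coefficient of x^i.
candidates = np.array([
    [1, 1, 0],   # 1 + x
    [0, 1, 0],   # x
    [0, -1, 1],  # x^2 - x
])

# Three vectors in R^3 are linearly independent (hence a basis)
# exactly when the matrix with them as rows has rank 3.
print("is a basis:", np.linalg.matrix_rank(candidates) == 3)
```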

Example
Consider the vector space of 2-by-3 matrices with real
entries.
A basis for this space is given by the matrices

[1 0 0]   [0 1 0]   [0 0 1]
[0 0 0] , [0 0 0] , [0 0 0] ,

[0 0 0]   [0 0 0]   [0 0 0]
[1 0 0] , [0 1 0] , [0 0 1] .

This is the canonical basis for M (2, 3).

Construction of a basis
Theorem: Every finite-dimensional vector space V has a
finite basis.
Proof: Let v1 , . . . , vr be such that span({v1 , . . . , vr }) = V .

We know that such vectors exist as V is finite-dimensional.


If v1 , . . . , vr are linearly independent, then we have found a
basis.
Otherwise, some vi can be expressed as a linear
combination of the others.
vi = a1 v1 + . . . + ai−1 vi−1 + ai+1 vi+1 + . . . + ar vr

Construction of a basis
Theorem: Every finite-dimensional vector space V has a
finite basis.
Proof:
Otherwise, some vi can be expressed as a linear
combination of the others.
vi = a1 v1 + . . . + ai−1 vi−1 + ai+1 vi+1 + . . . + ar vr

This means
span({v1 , . . . , vr }) = span({v1 , . . . , vi−1 , vi+1 , . . . , vr })

We can throw away vi without changing the span.

Construction of a basis
Theorem: Every finite-dimensional vector space V has a
finite basis.
Proof:

We can throw away vi without changing the span
(the "throw away" theorem).


If the vectors v1 , . . . , vi 1 , vi+1 , . . . , vr are linearly
independent, then we are done.
Otherwise, we repeat this process.
This process must terminate, since each step removes one
vector from a finite list; when it does, we have found a basis.

Coordinates
We discussed the coordinates of a vector with respect to
a given basis in Euclidean space.
Example:
Let B = {(1, 0, 0), (1, 1, 0), (1, 1, 1)} be a basis for R^3 .
(0, −1, 1) = 1 · (1, 0, 0) + (−2) · (1, 1, 0) + 1 · (1, 1, 1)
The coordinates of v = (0, −1, 1) with respect to the
basis B are
(v)B = (1, −2, 1).
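Finding such coordinates amounts to solving a linear system whose columns are the basis vectors. A short NumPy sketch of this computation:

```python
import numpy as np

# Columns of M are the basis vectors of B = {(1,0,0), (1,1,0), (1,1,1)}.
M = np.array([[1, 1, 1],
              [0, 1, 1],
              [0, 0, 1]], dtype=float)
v = np.array([0, -1, 1], dtype=float)

# The coordinates are the unique solution of M @ coords = v.
coords = np.linalg.solve(M, v)
print(coords)  # [ 1. -2.  1.]
```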

Coordinates
We can do the same thing in a general finite-dimensional
vector space V .
Fix some basis v1 , . . . , vr for V . We have already seen
such a basis exists.
Every vector u ∈ V can be written as a unique linear
combination of the basis vectors v1 , . . . , vr .
u = a1 v1 + . . . + ar vr

The coordinate vector of u with respect to the
basis v1 , . . . , vr is (a1 , . . . , ar ) ∈ R^r .

Example
Consider the vector space P2 of polynomials of degree at most 2.
A basis for this space is given by {1 + x, x, x^2 − x} .
What is the coordinate vector of x^2 with respect to this
basis?
It is (0, 1, 1) because
x^2 = 0 · (1 + x) + 1 · x + 1 · (x^2 − x).

Coordinate Mapping
Fix some basis v1 , . . . , vr for V .
If
u = a1 v1 + . . . + ar vr
then the coordinate vector of u with respect to the
basis v1 , . . . , vr is (a1 , . . . , ar ) ∈ R^r .
The basis B = {v1 , . . . , vr } defines a mapping
coordB : V → R^r
where coordB (u) = (a1 , . . . , ar ) if u = a1 v1 + . . . + ar vr .
This is a function, because the representation is unique.

Coordinate Mapping
The basis B = {v1 , . . . , vr } defines a mapping
coordB : V → R^r
where coordB (u) = (a1 , . . . , ar ) if u = a1 v1 + . . . + ar vr .
This is a function, because the representation is unique.
Claim: The coordinate mapping is one-to-one.
If coordB (u1 ) = coordB (u2 ) then u1 = u2 .
Claim: The coordinate mapping is onto.
a1 v1 + . . . + ar vr ∈ V for any (a1 , . . . , ar ) ∈ R^r

Coordinate Mapping
The basis B = {v1 , . . . , vr } defines a mapping
coordB : V → R^r
where coordB (u) = (a1 , . . . , ar ) if u = a1 v1 + . . . + ar vr .
This mapping is one-to-one and onto. It is a bijection
between V and R^r .
This mapping has even more nice properties.
coordB (u + v) = coordB (u) + coordB (v)
u = a1 v1 + . . . + ar vr
v = b1 v1 + . . . + br vr
u + v = (a1 + b1 )v1 + . . . + (ar + br )vr

Coordinate Mapping
The basis B = {v1 , . . . , vr } defines a mapping
coordB : V → R^r
where coordB (u) = (a1 , . . . , ar ) if u = a1 v1 + . . . + ar vr .
This mapping is a bijection between V and R^r .
coordB (u + v) = coordB (u) + coordB (v)
coordB (c u) = c coordB (u) for any c ∈ R
If u = a1 v1 + . . . + ar vr then c u = c(a1 v1 + . . . + ar vr )
= ca1 v1 + . . . + car vr

Example
Consider the vector space P2 of polynomials of degree at most 2.
Fix the basis B = {1, x, x^2 }.
coordB (3 − 2x + 5x^2 ) = (3, −2, 5)
coordB (−1 + x + x^2 ) = (−1, 1, 1)
(3 − 2x + 5x^2 ) + (−1 + x + x^2 ) = 2 − x + 6x^2
(3, −2, 5) + (−1, 1, 1) = (2, −1, 6)
Adding polynomials in P2 is just like adding vectors in R^3 .
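This correspondence is easy to see computationally: once polynomials are replaced by their coordinate vectors, polynomial addition is ordinary vector addition. An illustrative NumPy sketch:

```python
import numpy as np

# coordB for the basis B = {1, x, x^2}:
# a + b*x + c*x^2 is represented by (a, b, c).
p = np.array([3, -2, 5])   # 3 - 2x + 5x^2
q = np.array([-1, 1, 1])   # -1 + x + x^2

total = p + q              # coordinates of the sum polynomial
print(total)               # [ 2 -1  6]  i.e. 2 - x + 6x^2
```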

Isomorphism
Let V and W be two vector spaces. A function f : V → W
is an isomorphism if
f is a bijection between V and W .
f (u + v) = f (u) + f (v) for all u, v ∈ V
f (c v) = c f (v) for all c ∈ R, v ∈ V
Two spaces are isomorphic if there is an isomorphism
between them.
Isomorphic spaces are essentially the same.

Linear combinations
Let V and W be two vector spaces. If f : V → W is
an isomorphism then
f (c1 v1 + c2 v2 + . . . + cr vr ) = c1 f (v1 ) + c2 f (v2 ) + . . . + cr f (vr )
This follows by progressively applying the properties of an
isomorphism.
f ((c1 v1 ) + (c2 v2 + . . . + cr vr )) = f (c1 v1 ) + f (c2 v2 + . . . + cr vr )
= c1 f (v1 ) + f (c2 v2 + . . . + cr vr )
...
= c1 f (v1 ) + c2 f (v2 ) + . . . + cr f (vr )

Coordinate Mapping
The basis B = {v1 , . . . , vr } defines a mapping
coordB : V → R^r
where coordB (u) = (a1 , . . . , ar ) if u = a1 v1 + . . . + ar vr .
This mapping is a bijection between V and R^r .
coordB (u + v) = coordB (u) + coordB (v)
coordB (c u) = c coordB (u) for any c ∈ R
If V has a basis with r many elements then V is
isomorphic to R^r .
Reason: The coordinate mapping is an isomorphism.

Consequences
Example: The vector space P2 of polynomials of degree at most 2
is isomorphic to R^3 .
An isomorphism is given by the coordinate mapping with
respect to the basis {1, x, x^2 }.
The coordinate mapping lets us transfer questions about
a general finite-dimensional vector space to questions about
Euclidean space.
We are already familiar with Euclidean space, and know how
to answer the questions there!

Consequences
Theorem: Let V be a vector space with zero element 0 and
basis B = {v1 , . . . , vr } . Then
coordB (0) = 0 ∈ R^r
Proof:
coordB (0) = coordB (0 · 0)
= 0 · coordB (0)
= 0
Question: Does the same hold for any isomorphism?

Linear Independence
Theorem: Let V be a finite-dimensional vector space and fix
a basis B = {v1 , . . . , vr } for V . Then vectors u1 , . . . , uk ∈ V
are linearly dependent if and only if
coordB (u1 ), . . . , coordB (uk ) ∈ R^r
are linearly dependent.
Proof:
Suppose u1 , . . . , uk ∈ V are linearly dependent, thus
c1 u1 + . . . + ck uk = 0
where some ci ≠ 0 .
Apply the function coordB to both sides.
coordB (0) = 0

Theorem: Let V be a finite-dimensional vector space and fix
a basis B = {v1 , . . . , vr } for V . Then vectors u1 , . . . , uk ∈ V
are linearly dependent if and only if
coordB (u1 ), . . . , coordB (uk ) ∈ R^r
are linearly dependent.
Proof:
c1 u1 + . . . + ck uk = 0 where some ci ≠ 0 .
Apply the function coordB to both sides.
coordB (0) = 0
Simplifying the left hand side:
coordB (c1 u1 + . . . + ck uk ) = coordB (c1 u1 ) + coordB (c2 u2 + . . . + ck uk )
= c1 coordB (u1 ) + coordB (c2 u2 + . . . + ck uk )
= c1 coordB (u1 ) + . . . + ck coordB (uk )

This shows coordB (u1 ), . . . , coordB (uk ) are lin. dependent.

Theorem: Let V be a finite-dimensional vector space and fix
a basis B = {v1 , . . . , vr } for V . Then vectors u1 , . . . , uk ∈ V
are linearly dependent if and only if
coordB (u1 ), . . . , coordB (uk ) ∈ R^r
are linearly dependent.
Proof:
Suppose coordB (u1 ), . . . , coordB (uk ) are lin. dependent.
0 = c1 coordB (u1 ) + . . . + ck coordB (uk )
= coordB (c1 u1 ) + . . . + coordB (ck uk )
= coordB (c1 u1 + . . . + ck uk )
We know that coordB (0) = 0 , and as coordB : V → R^r is
one-to-one this means
c1 u1 + . . . + ck uk = 0
with some ci ≠ 0 , so u1 , . . . , uk are linearly dependent.
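The theorem turns dependence questions in a space like P2 into rank computations in R^3. An illustrative NumPy sketch (the polynomials here are invented for the example, not taken from the lecture):

```python
import numpy as np

# In P2 with basis B = {1, x, x^2}, take
#   u1 = 1 + x, u2 = x^2, u3 = 1 + x + 2x^2 = u1 + 2*u2  (dependent).
U = np.array([
    [1, 1, 0],  # coordB(u1)
    [0, 0, 1],  # coordB(u2)
    [1, 1, 2],  # coordB(u3)
])

# Rank < number of vectors  <=>  the coordinate vectors are
# dependent, and by the theorem, so are u1, u2, u3 themselves.
print(np.linalg.matrix_rank(U))  # 2
```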

Comment
Notice that in the last proof we only used the fact that coordB
is an isomorphism.

We did not use any specific details about the action of coordB .
The last theorem actually holds with respect to any
isomorphism.

Span
Theorem: Let V be a finite-dimensional vector space and fix
a basis B = {v1 , . . . , vr } for V . Then v ∈ span({u1 , . . . , uk })
if and only if
coordB (v) ∈ span({coordB (u1 ), . . . , coordB (uk )}),
for any v, u1 , . . . , uk ∈ V .
Proof: If v ∈ span({u1 , . . . , uk }) then v = c1 u1 + . . . + ck uk
coordB (v) = coordB (c1 u1 + . . . + ck uk )
= c1 coordB (u1 ) + . . . + ck coordB (uk )
and so coordB (v) ∈ span({coordB (u1 ), . . . , coordB (uk )}) .

Span
Theorem: Let V be a finite-dimensional vector space and fix
a basis B = {v1 , . . . , vr } for V . Then v ∈ span({u1 , . . . , uk })
if and only if
coordB (v) ∈ span({coordB (u1 ), . . . , coordB (uk )}),
for any v, u1 , . . . , uk ∈ V .
Proof: If coordB (v) ∈ span({coordB (u1 ), . . . , coordB (uk )})
coordB (v) = c1 coordB (u1 ) + . . . + ck coordB (uk )
= coordB (c1 u1 + . . . + ck uk )
Again as coordB is one-to-one, this means
v = c1 u1 + . . . + ck uk
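In practice this means a span-membership question in V becomes a solvable-linear-system question in R^r. A NumPy sketch, using polynomials invented for illustration:

```python
import numpy as np

# Is v = 2 + 3x in span({1 + x, 1 - x}) inside P2?
# Work with coordinates in the basis B = {1, x, x^2}.
A = np.array([[1, 1],
              [1, -1],
              [0, 0]], dtype=float)   # columns: coordB(1+x), coordB(1-x)
v = np.array([2, 3, 0], dtype=float)  # coordB(2 + 3x)

# Least squares finds the best combination; zero residual
# means v really is in the span.
coeffs, residual, *_ = np.linalg.lstsq(A, v, rcond=None)
in_span = np.allclose(A @ coeffs, v)
print(in_span, coeffs)  # True [ 2.5 -0.5]
```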

Dimension Theorem
Theorem: Let V be a finite-dimensional vector space and
let B1 = {u1 , . . . , ur } and B2 = {v1 , . . . , vs } be two bases
for V . Then r = s .
Proof: Suppose that r < s. From the dimension theorem
in Euclidean space we know that the largest linearly
independent set in R^r is of size r .
Consider coordB1 : V → R^r . As v1 , . . . , vs are linearly
independent, so are
coordB1 (v1 ), . . . , coordB1 (vs ) ∈ R^r .
This is a contradiction as r < s.

Dimension Theorem
Theorem: Let V be a finite-dimensional vector space and
let B1 = {u1 , . . . , ur } and B2 = {v1 , . . . , vs } be two bases
for V . Then r = s .
Proof: We can make an analogous argument if s < r. This
completes the proof.

Dimension
Definition: Let V be a finite-dimensional vector space. The
dimension of V is the number of elements in a basis for V.

This definition makes sense by the dimension theorem.

Remark: If V is a vector space of dimension d then it is
isomorphic to R^d .

Example: Polynomials
Consider Pn the vector space of polynomials of degree
at most n.
A basis is given by {1, x, x^2 , . . . , x^n }.

The dimension of Pn is n + 1 .

Example: Matrices
Let M (m, n) be the vector space of m-by-n matrices
with real entries.
A basis for this space is given by the matrices Eij whose
(k, ℓ) entry is
Eij (k, ℓ) = 1 if k = i, ℓ = j, and 0 otherwise,
where 1 ≤ i ≤ m, 1 ≤ j ≤ n.
In total, there are mn elements in this basis.
The dimension of M (m, n) is mn.
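The canonical basis Eij can be generated mechanically; a small NumPy sketch (the function name here is ours, not a standard library routine):

```python
import numpy as np

def canonical_basis(m, n):
    """The mn matrices E_ij with a single 1 in position (i, j)."""
    basis = []
    for i in range(m):
        for j in range(n):
            E = np.zeros((m, n))
            E[i, j] = 1.0
            basis.append(E)
    return basis

B = canonical_basis(2, 3)
print(len(B))  # 6 = 2 * 3, the dimension of M(2, 3)
```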

Subspaces
Theorem: Let V be a vector space of dimension r with a
basis B = {v1 , . . . , vr }, and let U ⊆ V. If
S = {coordB (u) : u ∈ U } ⊆ R^r
is a subspace of R^r then U is a subspace of V.
Proof: As S is a subspace of R^r it contains 0 .
Thus 0 ∈ U as this is the only vector satisfying coordB (v) = 0.

Subspaces
Theorem: Let V be a vector space of dimension r with a
basis B = {v1 , . . . , vr }, and let U ⊆ V. If
S = {coordB (u) : u ∈ U } ⊆ R^r
is a subspace of R^r then U is a subspace of V.
Proof: Now we see that U is closed under scalar mult.
Let u ∈ U, c ∈ R and s = coordB (u) ∈ S. As S is a subspace,
also c s ∈ S.
Note that coordB (c u) = c coordB (u) = c s , and c u is the
only vector with this property as coordB is one-to-one.
Thus c u ∈ U.

Subspaces
Theorem: Let V be a vector space of dimension r with a
basis B = {v1 , . . . , vr }, and let U ⊆ V. If
S = {coordB (u) : u ∈ U } ⊆ R^r
is a subspace of R^r then U is a subspace of V.
Proof: Now we see that U is closed under vector addition.
Take u1 , u2 ∈ U and let s1 = coordB (u1 ), s2 = coordB (u2 ) ∈ S.
As S is a subspace also s1 + s2 ∈ S.
Note that coordB (u1 + u2 ) = coordB (u1 ) + coordB (u2 ) = s1 + s2
and u1 + u2 is the only vector with this property (as coordB
is one-to-one).
Thus u1 + u2 ∈ U.

Example
Consider the vector space M (2, 2) of 2-by-2 matrices.
Let
S = { [a11 a12]
      [a21 a22] : a11 + a22 = 0 }

be the subset of M (2, 2) of matrices whose diagonal


elements sum to zero.
Is S a subspace?

We could show this directly and verify that S contains
the all-zero matrix and satisfies the closure conditions.

Consider the vector space M (2, 2) of 2-by-2 matrices.
Let
S = { [a11 a12]
      [a21 a22] : a11 + a22 = 0 }
be the subset of M (2, 2) of matrices whose diagonal
elements sum to zero.
A basis for M (2, 2) is given by the set of matrices

E = { [1 0]   [0 1]   [0 0]   [0 0]
      [0 0] , [0 0] , [1 0] , [0 1] }

Note that
coordE ( [a11 a12]
         [a21 a22] ) = (a11 , a12 , a21 , a22 )

Consider the vector space M (2, 2) of 2-by-2 matrices.
Let
S = { [a11 a12]
      [a21 a22] : a11 + a22 = 0 }
be the subset of M (2, 2) of matrices whose diagonal
elements sum to zero.
Consider the set T = {coordE (A) : A ∈ S } ⊆ R^4 .
By the previous theorem, if T is a subspace then so is S .
But T is simply the null space of the matrix [1 0 0 1]
and therefore is a subspace.
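This null-space computation can be checked numerically; one common approach (a sketch using the SVD rather than row reduction) is:

```python
import numpy as np

# T is the null space of the 1-by-4 matrix [1 0 0 1].
A = np.array([[1.0, 0.0, 0.0, 1.0]])

# Right singular vectors belonging to zero singular values
# span null(A), so slice them off the SVD.
_, s, Vt = np.linalg.svd(A)
rank = np.sum(s > 1e-12)
null_basis = Vt[rank:]          # rows form a basis of null(A)

print(null_basis.shape[0])      # 3 = dim T, so dim S = 3 as well
```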

Example continued
Consider the vector space M (2, 2) of 2-by-2 matrices.
Let

S = { [a11 a12]
      [a21 a22] : a11 + a22 = 0 }

be the subset of M (2, 2) of matrices whose diagonal


elements sum to zero.
What is a basis for S?
To do this, we can first find a basis for
T = {coordE (A) : A 2 S}

= {(a11 , a12 , a21 , a22 ) : a11 + a22 = 0}

To do this, we can first find a basis for
T = {coordE (A) : A ∈ S }
  = {(a11 , a12 , a21 , a22 ) : a11 + a22 = 0}
We can do this by finding the special solutions:
{(−1, 0, 0, 1), (0, 1, 0, 0), (0, 0, 1, 0)}
Now we look for vectors in S which map to these special
solutions under coordE .
coordE ( [−1 0]
         [ 0 1] ) = (−1, 0, 0, 1)
coordE ( [0 1]
         [0 0] ) = (0, 1, 0, 0)
coordE ( [0 0]
         [1 0] ) = (0, 0, 1, 0)

To do this, we can first find a basis for
T = {coordE (A) : A ∈ S }
  = {(a11 , a12 , a21 , a22 ) : a11 + a22 = 0}
The matrices
[−1 0]   [0 1]   [0 0]
[ 0 1] , [0 0] , [1 0]
form a basis for S .
They are linearly independent as their coordinate images
are.
They span S as their coordinate images span T .
The dimension of S is 3.
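The whole computation can be verified in a few lines; a NumPy sketch of the check:

```python
import numpy as np

# The three basis matrices for S found above, lifted from the
# special solutions of a11 + a22 = 0.
basis_S = [
    np.array([[-1.0, 0.0], [0.0, 1.0]]),
    np.array([[0.0, 1.0], [0.0, 0.0]]),
    np.array([[0.0, 0.0], [1.0, 0.0]]),
]

# Each lies in S (diagonal entries sum to zero) ...
assert all(np.isclose(np.trace(M), 0.0) for M in basis_S)

# ... and their coordinate images in R^4 are linearly independent,
# so dim S = 3.
coords = np.array([M.flatten() for M in basis_S])
print(np.linalg.matrix_rank(coords))  # 3
```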
