The Final Exam for Professor Gary Glaze's Linear Algebra Class

Nicholas Cemenenkoff
(here it is right here)

1. Express the standard matrix representation of the invertible transformation $T$ of $\mathbb{R}^2$ into itself as a product of elementary matrices. Use this expression to describe the transformation as a product of one or more reflections, horizontal or vertical expansions or contractions, and shears.

For $T:\mathbb{R}^2\to\mathbb{R}^2$ defined by
\[ T\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}2x+y\\-3x+2y\end{bmatrix}, \]
note $T[\mathbf e_1] = \begin{bmatrix}2\\-3\end{bmatrix}$ and $T[\mathbf e_2] = \begin{bmatrix}1\\2\end{bmatrix}$. So for the linear transformation $T(\mathbf x) = A\mathbf x$,
\[ A = \begin{bmatrix}T(\mathbf e_1) & T(\mathbf e_2)\end{bmatrix} = \begin{bmatrix}2&1\\-3&2\end{bmatrix}. \]

Now what we'll do is transform the relation $I_2A = \begin{bmatrix}2&1\\-3&2\end{bmatrix}$ through a series of elementary row operations until the right-hand side becomes $I_2$; each operation multiplies the left-hand side by the corresponding elementary matrix.
\[ \tfrac12 R_1\to R_1:\qquad \begin{bmatrix}\tfrac12&0\\0&1\end{bmatrix}A = \begin{bmatrix}1&\tfrac12\\-3&2\end{bmatrix} \]
\[ 3R_1+R_2\to R_2:\qquad \begin{bmatrix}1&0\\3&1\end{bmatrix}\begin{bmatrix}\tfrac12&0\\0&1\end{bmatrix}A = \begin{bmatrix}1&\tfrac12\\0&\tfrac72\end{bmatrix} \]
\[ \tfrac27 R_2\to R_2:\qquad \begin{bmatrix}1&0\\0&\tfrac27\end{bmatrix}\begin{bmatrix}1&0\\3&1\end{bmatrix}\begin{bmatrix}\tfrac12&0\\0&1\end{bmatrix}A = \begin{bmatrix}1&\tfrac12\\0&1\end{bmatrix} \]
\[ -\tfrac12 R_2+R_1\to R_1:\qquad \underbrace{\begin{bmatrix}1&-\tfrac12\\0&1\end{bmatrix}}_{E_4}\underbrace{\begin{bmatrix}1&0\\0&\tfrac27\end{bmatrix}}_{E_3}\underbrace{\begin{bmatrix}1&0\\3&1\end{bmatrix}}_{E_2}\underbrace{\begin{bmatrix}\tfrac12&0\\0&1\end{bmatrix}}_{E_1}A = \underbrace{\begin{bmatrix}1&0\\0&1\end{bmatrix}}_{I_2} \]

So $E_4E_3E_2E_1A = I_2$, and elementary row operations are reversible, so
\[ A = E_1^{-1}E_2^{-1}E_3^{-1}E_4^{-1} = \begin{bmatrix}2&0\\0&1\end{bmatrix}\begin{bmatrix}1&0\\-3&1\end{bmatrix}\begin{bmatrix}1&0\\0&\tfrac72\end{bmatrix}\begin{bmatrix}1&\tfrac12\\0&1\end{bmatrix}. \]
Reading the factors against a table of elementary matrices: $E_1^{-1}$ is a horizontal expansion, $E_2^{-1}$ is a negative vertical shear, $E_3^{-1}$ is a vertical expansion, and $E_4^{-1}$ is a positive horizontal shear; $T$ is the composition of these four maps.
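The factorization can be spot-checked numerically. A minimal sketch, assuming Python with the stdlib `fractions` module and the four elementary matrices read off the row reduction above:

```python
from fractions import Fraction as F

# The four elementary matrices from the row reduction
# (each performs the corresponding row operation on I2).
E1 = [[F(1, 2), 0], [0, 1]]   # (1/2)R1 -> R1
E2 = [[1, 0], [3, 1]]         # 3R1 + R2 -> R2
E3 = [[1, 0], [0, F(2, 7)]]   # (2/7)R2 -> R2
E4 = [[1, F(-1, 2)], [0, 1]]  # -(1/2)R2 + R1 -> R1

def matmul(X, Y):
    """2x2 matrix product with exact rational arithmetic."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(M):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

# A = E1^-1 E2^-1 E3^-1 E4^-1 should reproduce the standard matrix of T.
A = matmul(matmul(inv2(E1), inv2(E2)), matmul(inv2(E3), inv2(E4)))
print([[int(v) for v in row] for row in A])  # [[2, 1], [-3, 2]]
```

Because the arithmetic is done in exact fractions, the product recovers the standard matrix with no rounding.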

2. Find an orthogonal basis for the column space of the matrix
\[ A = \begin{bmatrix}1&3&5\\1&3&-1\\0&2&3\\1&5&2\\1&5&8\end{bmatrix} = \begin{bmatrix}\mathbf v_1 & \mathbf v_2 & \mathbf v_3\end{bmatrix} \]
and prove the basis is orthogonal.

Row reduction puts a pivot in every column of $A$, so the columns are independent and
\[ \text{column space} = C = \operatorname{span}\{\mathbf v_1, \mathbf v_2, \mathbf v_3\}. \]

Now I'll normalize the first vector:
\[ \|\mathbf v_1\| = \sqrt{1^2+1^2+0^2+1^2+1^2} = \sqrt4 = 2, \qquad \mathbf u_1 = \frac{\mathbf v_1}{\|\mathbf v_1\|} = \tfrac12(1,1,0,1,1). \]
The space spanned by $\mathbf u_1$ is $V_1 = \operatorname{span}\{\mathbf u_1\}$. Then
\[ \mathbf y_2 = \mathbf v_2 - \operatorname{proj}_{V_1}\mathbf v_2, \qquad \operatorname{proj}_{V_1}\mathbf v_2 = (\mathbf v_2\cdot\mathbf u_1)\,\mathbf u_1 = 8\mathbf u_1 = (4,4,0,4,4), \]
\[ \mathbf y_2 = (3,3,2,5,5) - (4,4,0,4,4) = (-1,-1,2,1,1). \]

Now I'll normalize the second vector:
\[ \|\mathbf y_2\| = \sqrt{(-1)^2+(-1)^2+2^2+1^2+1^2} = \sqrt8 = 2\sqrt2, \qquad \mathbf u_2 = \frac{\mathbf y_2}{\|\mathbf y_2\|} = \frac{1}{2\sqrt2}(-1,-1,2,1,1). \]
Check: $\mathbf v_1\cdot\mathbf y_2 = -1-1+0+1+1 = 0$, so the two are orthogonal. Next,
\[ \mathbf y_3 = \mathbf v_3 - (\mathbf v_3\cdot\mathbf u_1)\,\mathbf u_1 - (\mathbf v_3\cdot\mathbf u_2)\,\mathbf u_2, \]
because $\mathbf u_1$ and $\mathbf u_2$ are normalized, $\mathbf u_1\cdot\mathbf u_1 = 1 = \mathbf u_2\cdot\mathbf u_2$. So
\[ \mathbf y_3 = (5,-1,3,2,8) - \tfrac72(1,1,0,1,1) - \tfrac32(-1,-1,2,1,1) = (3,-3,0,-3,3) = 3\,(1,-1,0,-1,1), \]
\[ \|\mathbf y_3\| = \sqrt{4\cdot 9} = 6, \qquad \mathbf u_3 = \frac{\mathbf y_3}{6} = \tfrac12(1,-1,0,-1,1) = \mathbf z. \]
Check: $\mathbf v_1\cdot\mathbf z = \tfrac12(1-1+0-1+1) = 0$ and $\mathbf y_2\cdot\mathbf z = \tfrac12(-1+1+0-1+1) = 0$, so the vectors are pairwise orthogonal, and
\[ \text{orthonormal basis} = \{\mathbf u_1, \mathbf u_2, \mathbf u_3\} = \left\{\frac12\begin{bmatrix}1\\1\\0\\1\\1\end{bmatrix},\ \frac{1}{2\sqrt2}\begin{bmatrix}-1\\-1\\2\\1\\1\end{bmatrix},\ \frac12\begin{bmatrix}1\\-1\\0\\-1\\1\end{bmatrix}\right\}. \]
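The orthonormality and membership in the column space can both be checked numerically. A small sketch, assuming NumPy is available and using the column entries as reconstructed from the worked arithmetic above:

```python
import numpy as np

# Columns of A and the orthonormal set produced by Gram-Schmidt above.
v1 = np.array([1., 1., 0., 1., 1.])
v2 = np.array([3., 3., 2., 5., 5.])
v3 = np.array([5., -1., 3., 2., 8.])

u1 = np.array([1, 1, 0, 1, 1]) / 2
u2 = np.array([-1, -1, 2, 1, 1]) / (2 * np.sqrt(2))
u3 = np.array([1, -1, 0, -1, 1]) / 2

# U^T U = I3 exactly when the three vectors are orthonormal.
U = np.column_stack([u1, u2, u3])
assert np.allclose(U.T @ U, np.eye(3))

# Each u_i lies in Col(A): the least-squares solve of A c = u_i is exact.
A = np.column_stack([v1, v2, v3])
for u in (u1, u2, u3):
    c, res, *_ = np.linalg.lstsq(A, u, rcond=None)
    assert np.allclose(A @ c, u)
```

The `lstsq` check works because a least-squares solution reproduces the target vector exactly if and only if that vector already lies in the column space.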

3. Prove that for every positive integer $n$ and every $a\in\mathbb R$, the set
\[ B = \left\{(x-a)^n,\ (x-a)^{n-1},\ (x-a)^{n-2},\ \ldots,\ (x-a)^2,\ (x-a),\ 1\right\} \]
is a basis for the vector space $P_n$ of polynomials of degree at most $n$. Then find the coordinate vector of $p(x) = x^3+x^2-x-1$ relative to the ordered basis $\left((x-a)^3, (x-a)^2, (x-a), 1\right)$.

For $B$ to be a basis of the vector space $P_n$, $B$ must span $P_n$ and the elements of $B$ must be linearly independent. $B$ is a spanning set of $P_n$ if every polynomial $p(x)\in P_n$ can be written as a linear combination of the vectors in $B$. Taylor's theorem shows that every polynomial (indeed, every analytic function) can be represented by a series about a point $a$. The Taylor series for $f(x)$ about $a$ is
\[ f(x) = \sum_{k=0}^{\infty}\frac{f^{(k)}(a)}{k!}(x-a)^k = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2 + \frac{f^{(3)}(a)}{3!}(x-a)^3 + \cdots \]

Since $f(x)$ is a polynomial, we'll write this in the form
\[ p(x) = c_1(1) + c_2(x-a) + c_3(x-a)^2 + \cdots + c_{n+1}(x-a)^n. \]
Notice that if $p(x)$ is a polynomial of finite degree, the number of nonzero terms in its associated Taylor series expansion will also be finite. For example, if $f(x) = x^2+x+1$, then the Taylor series expansion about zero is
\[ f(x) = f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + \frac{f^{(3)}(0)}{3!}x^3 + \frac{f^{(4)}(0)}{4!}x^4 + \cdots \]
Notice though that every derivative of $f(x)$ from the third onward is zero. Hence
\[ f(x) = 1 + x + x^2 + 0 + 0 + \cdots + 0. \]
It can be shown through induction that this principle applies to any finite polynomial. So what we've found here is that for a polynomial of degree $n$, its Taylor series expansion about any point $a$ has a finite number of terms, and moreover, none of the terms exceed degree $n$. Since $p(x) = c_1(1) + c_2(x-a) + c_3(x-a)^2 + \cdots + c_{n+1}(x-a)^n$, it is clear that $p(x)$ is a linear combination of $B$ for all polynomials $p(x)$ of degree $n$ or less. (Note that if a polynomial has degree $p < n$, then all terms of degree $k$ with $p < k \le n$ have a coefficient of zero.)

The elements of $B = \{v_1, v_2, \ldots, v_p\}$ are themselves elements of the finite-dimensional space $P_n$. They are independent if $c_1v_1 + c_2v_2 + \cdots + c_pv_p = 0$ has only the trivial solution. Applying a coordinate map to this equation gives
\[ [c_1v_1 + c_2v_2 + \cdots + c_pv_p]_B = [0]_B \quad\Longrightarrow\quad c_1[v_1]_B + c_2[v_2]_B + \cdots + c_p[v_p]_B = \mathbf 0. \]
This relation implies that if the coordinate vectors are independent, then so are the elements of $B$. Listing the elements $1, (x-a), (x-a)^2, \ldots, (x-a)^n$ in increasing degree, the associated coordinate vectors are
\[ 1 \mapsto (1,0,0,\ldots,0), \qquad (x-a) \mapsto (0,1,0,\ldots,0), \qquad (x-a)^2 \mapsto (0,0,1,\ldots,0), \qquad \ldots, \qquad (x-a)^n \mapsto (0,0,0,\ldots,1). \]
Changing these row vectors into columns and grouping them into a matrix gives the $(n+1)\times(n+1)$ identity matrix
\[ \begin{bmatrix}1&0&\cdots&0\\0&1&\cdots&0\\\vdots&\vdots&\ddots&\vdots\\0&0&\cdots&1\end{bmatrix}. \]
Since this matrix is the identity, the only solution to the homogeneous system is the trivial solution, so we may conclude that the elements of $B$ are linearly independent, and $B$ is a basis for $P_n$.

Next, we'll find the coordinate vector of $p(x) = x^3+x^2-x-1$ relative to the ordered basis $B = \left((x-a)^3, (x-a)^2, (x-a), 1\right)$. The Taylor series expansion of $p(x)$ about $a$ gives
\[ p(x) = p(a) + \frac{p'(a)}{1!}(x-a) + \frac{p''(a)}{2!}(x-a)^2 + \frac{p^{(3)}(a)}{3!}(x-a)^3 + \cdots \]
\[ \begin{aligned} p(x) &= x^3+x^2-x-1, & p(a) &= a^3+a^2-a-1 = (a+1)^2(a-1),\\ p'(x) &= 3x^2+2x-1, & p'(a) &= (a+1)(3a-1),\\ p''(x) &= 6x+2, & \frac{p''(a)}{2!} &= 3a+1,\\ p^{(3)}(x) &= 6, & \frac{p^{(3)}(a)}{3!} &= 1,\\ p^{(k)}(x) &= 0 \text{ for } k\ge 4. \end{aligned} \]
Plugging these terms into the expansion relation gives
\[ p(x) = (a+1)^2(a-1) + (a+1)(3a-1)(x-a) + (3a+1)(x-a)^2 + (x-a)^3. \]
So finally, the coordinate vector is
\[ [p(x)]_B = \left(1,\ 3a+1,\ (a+1)(3a-1),\ (a+1)^2(a-1)\right). \]
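The coordinate vector can be verified symbolically, since each coordinate is a Taylor coefficient $p^{(k)}(a)/k!$. A short sketch, assuming SymPy is available:

```python
import sympy as sp

x, a = sp.symbols('x a')
p = x**3 + x**2 - x - 1

# Coordinates relative to ((x-a)^3, (x-a)^2, (x-a), 1): Taylor
# coefficients p^(k)(a)/k! listed from degree 3 down to degree 0.
coords = [sp.diff(p, x, k).subs(x, a) / sp.factorial(k) for k in (3, 2, 1, 0)]
expected = [1, 3*a + 1, (a + 1)*(3*a - 1), (a + 1)**2 * (a - 1)]

# Each computed coefficient must equal the factored form found above.
assert all(sp.simplify(c - e) == 0 for c, e in zip(coords, expected))
```

Using `simplify` on the difference, rather than comparing expressions directly, avoids false mismatches between expanded and factored forms.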

4. If $T\begin{bmatrix}3\\2\end{bmatrix} = \begin{bmatrix}-4\\5\\9\end{bmatrix}$ and $T\begin{bmatrix}-2\\1\end{bmatrix} = \begin{bmatrix}5\\-1\\1\end{bmatrix}$, find $T\begin{bmatrix}x\\y\end{bmatrix}$.

Let $\mathbf v_1 = \begin{bmatrix}3\\2\end{bmatrix}$ and $\mathbf v_2 = \begin{bmatrix}-2\\1\end{bmatrix}$. Now we'll find $T(\mathbf e_1)$ and $T(\mathbf e_2)$ using the fact that $\mathbf e_1 = \begin{bmatrix}1\\0\end{bmatrix}$ and $\mathbf e_2 = \begin{bmatrix}0\\1\end{bmatrix}$ are linear combinations of $\mathbf v_1, \mathbf v_2$:
\[ \mathbf e_1 = \tfrac17\mathbf v_1 - \tfrac27\mathbf v_2, \qquad \mathbf e_2 = \tfrac27\mathbf v_1 + \tfrac37\mathbf v_2. \]
By linearity,
\[ T(\mathbf e_1) = \tfrac17\,T(\mathbf v_1) - \tfrac27\,T(\mathbf v_2) = \tfrac17\begin{bmatrix}-4\\5\\9\end{bmatrix} - \tfrac27\begin{bmatrix}5\\-1\\1\end{bmatrix} = \begin{bmatrix}-2\\1\\1\end{bmatrix}, \]
\[ T(\mathbf e_2) = \tfrac27\,T(\mathbf v_1) + \tfrac37\,T(\mathbf v_2) = \tfrac27\begin{bmatrix}-4\\5\\9\end{bmatrix} + \tfrac37\begin{bmatrix}5\\-1\\1\end{bmatrix} = \begin{bmatrix}1\\1\\3\end{bmatrix}. \]
Theorem 10 of the text states that for a linear transformation $T(\mathbf x) = A\mathbf x$, the standard matrix is $A = \begin{bmatrix}T(\mathbf e_1)&\cdots&T(\mathbf e_n)\end{bmatrix}$. Therefore, for the current problem,
\[ A = \begin{bmatrix}T(\mathbf e_1) & T(\mathbf e_2)\end{bmatrix} = \begin{bmatrix}-2&1\\1&1\\1&3\end{bmatrix}. \qquad\text{Hence}\qquad T\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}-2&1\\1&1\\1&3\end{bmatrix}\begin{bmatrix}x\\y\end{bmatrix} = \begin{bmatrix}-2x+y\\x+y\\x+3y\end{bmatrix}. \]
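The recovered standard matrix can be verified against the two given values of $T$. A quick numerical check, assuming NumPy is available:

```python
import numpy as np

# Standard matrix recovered from T(e1), T(e2) above.
A = np.array([[-2., 1.],
              [ 1., 1.],
              [ 1., 3.]])
v1 = np.array([3., 2.])
v2 = np.array([-2., 1.])

# A must reproduce both given values of T.
assert np.allclose(A @ v1, [-4., 5., 9.])
assert np.allclose(A @ v2, [5., -1., 1.])
```

Since $\{\mathbf v_1, \mathbf v_2\}$ is a basis for $\mathbb R^2$, agreement on these two vectors determines the linear map completely.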

5. Determine whether the set of all positive real numbers with addition defined by $x\oplus y = xy$ and scalar multiplication defined by $c\odot x = x^c$ is a vector space.

We check the vector-space axioms for $V = \{x\in\mathbb R : x > 0\}$. Closure holds: a product of positive reals is positive, and $x^c > 0$ for $x > 0$. Addition is commutative and associative because real multiplication is. The zero vector is $1$, since $x\oplus 1 = x\cdot 1 = x$, and the additive inverse of $x$ is $1/x$, since $x\oplus\tfrac1x = 1$. For the scalar axioms, with $x, y\in V$ and $c, k\in\mathbb R$:
\[ c\odot(k\odot x) = \left(x^k\right)^c = x^{kc} = (ck)\odot x \]
\[ c\odot(x\oplus y) = (xy)^c = x^c y^c = (c\odot x)\oplus(c\odot y) \]
\[ (c+k)\odot x = x^{c+k} = x^c x^k = (c\odot x)\oplus(k\odot x) \]
\[ 1\odot x = x^1 = x \]
Note in particular that $\left(x^k\right)^c = x^{kc}$, so associativity of scalar multiplication does hold. Every axiom is satisfied, so the set is a vector space. (In fact, $\ln : V\to\mathbb R$ carries $\oplus$ to ordinary addition and $\odot$ to ordinary scalar multiplication, so $V$ is isomorphic to $\mathbb R$.)
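The scalar-multiplication axioms can be spot-checked numerically over random positive vectors and real scalars; this is only a sanity check on the identity $(x^k)^c = x^{kc}$, not a proof:

```python
import math
import random

# x (+) y = x*y and c (.) x = x**c on the positive reals.
add = lambda x, y: x * y
smul = lambda c, x: x ** c

random.seed(0)
for _ in range(100):
    x, y = random.uniform(0.1, 10), random.uniform(0.1, 10)
    c, k = random.uniform(-3, 3), random.uniform(-3, 3)
    # Associativity of scalar multiplication: c.(k.x) = (ck).x
    assert math.isclose(smul(c, smul(k, x)), smul(c * k, x))
    # Distributivity over vector addition: c.(x (+) y) = (c.x) (+) (c.y)
    assert math.isclose(smul(c, add(x, y)), add(smul(c, x), smul(c, y)))
    # Zero vector is 1; additive inverse of x is 1/x.
    assert math.isclose(add(x, 1), x) and math.isclose(add(x, 1 / x), 1)
```

Every trial passes, consistent with the conclusion that the axioms hold.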

6. Start by showing the set of vectors $\left\{e^{3x}, xe^{3x}, x^2e^{3x}\right\}$ is linearly independent, then show that the differential operator $D$, restricted to the subspace $W = \operatorname{span}\left\{e^{3x}, xe^{3x}, x^2e^{3x}\right\}$, is invertible, and use this fact to find the integral $\int x^2e^{3x}\,dx$.

In problem 3 we showed that a set of vectors is linearly independent if its coordinate vectors are linearly independent. Write $W = \operatorname{span}\left\{e^{3x}, xe^{3x}, x^2e^{3x}\right\} = \operatorname{span}\{v_1, v_2, v_3\}$, so the elements span $W$ by definition. Relative to the ordered set $B = \left(e^{3x}, xe^{3x}, x^2e^{3x}\right)$,
\[ [v_1]_B = (1,0,0), \qquad [v_2]_B = (0,1,0), \qquad [v_3]_B = (0,0,1). \]
Changing these row vectors to column vectors and grouping them in a matrix gives $I_3$, so the vectors are linearly independent and $B$ is a basis for $W$. Now let's describe the differential operator on $W$ as a linear transformation:
\[ \frac{d}{dx}[v_1] = 3e^{3x}, \qquad \frac{d}{dx}[v_2] = e^{3x} + 3xe^{3x}, \qquad \frac{d}{dx}[v_3] = 2xe^{3x} + 3x^2e^{3x}. \]
As done previously,
\[ [T(v_1)]_B = (3,0,0), \quad [T(v_2)]_B = (1,3,0), \quad [T(v_3)]_B = (0,2,3), \qquad A = \begin{bmatrix}3&1&0\\0&3&2\\0&0&3\end{bmatrix}. \]
Hence differentiation acts on coordinates by $\left[\frac{d}{dx}\,\mathbf x\right]_B = A[\mathbf x]_B$. Since $A$ is triangular with $\det A = 27 \ne 0$, $A$ is invertible, and antidifferentiation within $W$ (up to the constant of integration) acts by
\[ \left[\int \mathbf x\,dx\right]_B = A^{-1}[\mathbf x]_B. \]
Now we'll find $A^{-1}$ by row reduction, $\left[\,A \mid I_3\,\right] \to \left[\,I_3 \mid A^{-1}\,\right]$:
\[ A^{-1} = \begin{bmatrix}\frac13 & -\frac19 & \frac{2}{27}\\[2pt] 0 & \frac13 & -\frac29\\[2pt] 0 & 0 & \frac13\end{bmatrix}. \]
For the given $\mathbf x = x^2e^{3x}$, $[\mathbf x]_B = (0,0,1)$, so
\[ \left[\int x^2e^{3x}\,dx\right]_B = A^{-1}\begin{bmatrix}0\\0\\1\end{bmatrix} = \begin{bmatrix}\frac{2}{27}\\[2pt]-\frac29\\[2pt]\frac13\end{bmatrix} \quad\Longrightarrow\quad \int x^2e^{3x}\,dx = \frac{2}{27}e^{3x} - \frac29\,xe^{3x} + \frac13\,x^2e^{3x} + C. \]
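Both the matrix inverse and the resulting antiderivative can be checked symbolically. A sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')

# Matrix of D on the basis {e^{3x}, x e^{3x}, x^2 e^{3x}}.
A = sp.Matrix([[3, 1, 0],
               [0, 3, 2],
               [0, 0, 3]])

# Coordinates of the antiderivative of x^2 e^{3x}: A^{-1} (0, 0, 1)^T.
coeffs = A.inv() * sp.Matrix([0, 0, 1])
basis = [sp.exp(3*x), x*sp.exp(3*x), x**2*sp.exp(3*x)]
F = sum(c * b for c, b in zip(coeffs, basis))

# Differentiating the result must return the original integrand.
assert sp.simplify(sp.diff(F, x) - x**2 * sp.exp(3*x)) == 0
print(list(coeffs))  # [2/27, -2/9, 1/3]
```

The derivative check is the decisive one: it confirms the coordinate calculation without trusting any intermediate row reduction.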

7. Find an orthogonal basis for $\mathbb R^3$ that contains the vector $\mathbf a_1 = \frac13\begin{bmatrix}1\\1\\1\end{bmatrix}$. Show the vectors form an orthogonal basis.

A convenient basis for $\mathbb R^3$ is $B = \left\{\begin{bmatrix}1\\1\\1\end{bmatrix}, \begin{bmatrix}1\\1\\0\end{bmatrix}, \begin{bmatrix}1\\0\\0\end{bmatrix}\right\} = \{\mathbf b_1, \mathbf b_2, \mathbf b_3\}$. We see that $\mathbf a_1$ is a scalar multiple of $\mathbf b_1$, so we can replace $\mathbf b_1$ with $\mathbf a_1$ to obtain a new basis for $\mathbb R^3$ containing $\mathbf a_1$: $B' = \{\mathbf a_1, \mathbf b_2, \mathbf b_3\}$. The next step will be to build an orthogonal basis $B_O$ from the elements of $B'$. Note $\mathbf a_1 = \frac13(1,1,1) = c_1\mathbf x_1$ with $\mathbf x_1 = (1,1,1)$. Projecting $\mathbf b_2$ onto $\operatorname{span}\{\mathbf a_1\}$ and subtracting gives a vector orthogonal to $\mathbf a_1$:
\[ \mathbf a_2 = \mathbf b_2 - \frac{\mathbf b_2\cdot\mathbf a_1}{\mathbf a_1\cdot\mathbf a_1}\,\mathbf a_1 = \begin{bmatrix}1\\1\\0\end{bmatrix} - \frac{2/3}{1/3}\cdot\frac13\begin{bmatrix}1\\1\\1\end{bmatrix} = \frac13\begin{bmatrix}1\\1\\-2\end{bmatrix} = c_2\mathbf x_2. \]
By Gram-Schmidt,
\[ \mathbf a_3 = \mathbf b_3 - \left(\frac{\mathbf b_3\cdot\mathbf a_1}{\mathbf a_1\cdot\mathbf a_1}\,\mathbf a_1 + \frac{\mathbf b_3\cdot\mathbf a_2}{\mathbf a_2\cdot\mathbf a_2}\,\mathbf a_2\right) = \begin{bmatrix}1\\0\\0\end{bmatrix} - \frac{1/3}{1/3}\cdot\frac13\begin{bmatrix}1\\1\\1\end{bmatrix} - \frac{1/3}{2/3}\cdot\frac13\begin{bmatrix}1\\1\\-2\end{bmatrix} = \frac12\begin{bmatrix}1\\-1\\0\end{bmatrix} = c_3\mathbf x_3. \]
Notice $\mathbf a_1 = c_1\mathbf x_1$, $\mathbf a_2 = c_2\mathbf x_2$, and $\mathbf a_3 = c_3\mathbf x_3$. Since $\mathbf a_{1,2,3}$ are scalar multiples of $\mathbf x_{1,2,3}$ respectively, if $\mathbf x_{1,2,3}$ are pairwise orthogonal, then $\mathbf a_{1,2,3}$ are also pairwise orthogonal:
\[ \mathbf x_1\cdot\mathbf x_2 = (1,1,1)\cdot(1,1,-2) = 1+1-2 = 0 \]
\[ \mathbf x_1\cdot\mathbf x_3 = (1,1,1)\cdot(1,-1,0) = 1-1+0 = 0 \]
\[ \mathbf x_2\cdot\mathbf x_3 = (1,1,-2)\cdot(1,-1,0) = 1-1+0 = 0 \]
$B_O = \{\mathbf a_1, \mathbf a_2, \mathbf a_3\}$ is therefore an orthogonal basis.
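Orthogonality and independence of the constructed basis can be verified in a few lines, assuming NumPy is available:

```python
import numpy as np

# The orthogonal basis constructed above.
a1 = np.array([1., 1., 1.]) / 3
a2 = np.array([1., 1., -2.]) / 3
a3 = np.array([1., -1., 0.]) / 2

# Pairwise orthogonality.
assert abs(a1 @ a2) < 1e-12
assert abs(a1 @ a3) < 1e-12
assert abs(a2 @ a3) < 1e-12

# Nonzero determinant => the three vectors are independent, hence a basis.
assert abs(np.linalg.det(np.column_stack([a1, a2, a3]))) > 1e-12
```

The determinant test stands in for the independence argument: three pairwise-orthogonal nonzero vectors in $\mathbb R^3$ are automatically a basis.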


8. Determine whether the space $P_{[a,b]}$ of all polynomial functions with real coefficients and domain $a\le x\le b$ is an inner-product space if for $p, q\in P_{[a,b]}$, $\langle p,q\rangle = \int_a^b p(x)q(x)\,dx$. Then find the magnitude of the polynomial $p(x) = x+1$ and compute the distance from $x^2$ to $x$.

To determine whether $P_{[a,b]}$ is an inner-product space, we check the four axioms of an inner product (assuming $a < b$).

Symmetry, $\langle p,q\rangle = \langle q,p\rangle$:
\[ \langle p,q\rangle = \int_a^b p(x)q(x)\,dx = \int_a^b q(x)p(x)\,dx = \langle q,p\rangle. \]
Additivity, $\langle p+q, r\rangle = \langle p,r\rangle + \langle q,r\rangle$:
\[ \langle p+q, r\rangle = \int_a^b \bigl(p(x)+q(x)\bigr)r(x)\,dx = \int_a^b p(x)r(x)\,dx + \int_a^b q(x)r(x)\,dx = \langle p,r\rangle + \langle q,r\rangle. \]
Homogeneity, $\langle cp,q\rangle = c\langle p,q\rangle$ for scalars $c\in\mathbb R$:
\[ \langle cp,q\rangle = \int_a^b cp(x)q(x)\,dx = c\int_a^b p(x)q(x)\,dx = c\langle p,q\rangle. \]
Positivity, $\langle p,p\rangle \ge 0$ with $\langle p,p\rangle = 0 \iff p = 0$:
\[ \langle p,p\rangle = \int_a^b p(x)^2\,dx. \]
Regardless of whether $p(x)$ is negative or positive at a given $x\in[a,b]$, $p(x)^2 \ge 0$ for all $x\in[a,b]$, and the integral of a nonnegative function over $[a,b]$ is nonnegative, so $\langle p,p\rangle \ge 0$. If $p = 0$, then $\langle p,p\rangle = \int_a^b 0\,dx = 0$. Conversely, if $p$ is not the zero polynomial, then $p^2$ is a continuous nonnegative function that is strictly positive except at the finitely many roots of $p$, so $\int_a^b p(x)^2\,dx > 0$. Hence $\langle p,p\rangle = 0$ only for $p = 0$. All four axioms hold, so $P_{[a,b]}$ is an inner-product space.

Now to find the magnitude of $p(x) = x+1$:
\[ \|p\|^2 = \langle p,p\rangle = \int_a^b (x+1)^2\,dx = \left[\frac{(x+1)^3}{3}\right]_a^b = \frac{(b+1)^3 - (a+1)^3}{3}, \]
\[ \|x+1\| = \sqrt{\frac{(b+1)^3 - (a+1)^3}{3}}. \]
(Here the antiderivative of $(x+1)^2$ is $\frac13(x+1)^3$ because $\frac{d}{dx}(x+1) = 1$; this shortcut does not carry over to an arbitrary $p$.)

Next, let's compute the distance from $x^2$ to $x$: $\operatorname{dist}(p,q) = \|p-q\|$. So if $p(x) = x^2$ and $q(x) = x$, then $r(x) = p(x) - q(x) = x^2 - x$, and working directly from the definition,
\[ \|r\|^2 = \int_a^b (x^2-x)^2\,dx = \int_a^b \left(x^4 - 2x^3 + x^2\right)dx = \left[\frac{x^5}{5} - \frac{x^4}{2} + \frac{x^3}{3}\right]_a^b, \]
\[ \operatorname{dist}\left(x^2, x\right) = \sqrt{\frac{b^5}{5} - \frac{b^4}{2} + \frac{b^3}{3} - \frac{a^5}{5} + \frac{a^4}{2} - \frac{a^3}{3}}. \]

9. The field of play is $C_{[-1,1]}$, the space of all functions that are continuous on the interval $[-1,1]$. If $f, g\in C_{[-1,1]}$, we'll define the inner product as $\langle f,g\rangle = \int_{-1}^{1} f(x)g(x)\,dx$. Find the orthogonal projection of $e^x$ onto $P_2$ and graph both functions on $[-2,2]$.

The standard basis for $P_2$ is $\left\{1, x, x^2\right\} = \{a_1, a_2, a_3\}$. We'll transform it into an orthogonal basis $B_O = \{v_1, v_2, v_3\}$. To start, let $v_1 = a_1 = 1$. By Gram-Schmidt,
\[ v_2 = a_2 - \frac{\langle a_2, v_1\rangle}{\langle v_1, v_1\rangle}\,v_1 = x - \frac{\int_{-1}^1 x\,dx}{\int_{-1}^1 1\,dx}\,(1) = x - \frac{0}{2}\,(1) = x. \]
Similarly,
\[ v_3 = a_3 - \frac{\langle a_3, v_1\rangle}{\langle v_1, v_1\rangle}\,v_1 - \frac{\langle a_3, v_2\rangle}{\langle v_2, v_2\rangle}\,v_2 = x^2 - \frac{\int_{-1}^1 x^2\,dx}{2}\,(1) - \frac{\int_{-1}^1 x^3\,dx}{\int_{-1}^1 x^2\,dx}\,(x) = x^2 - \frac{2/3}{2}\,(1) - 0 = x^2 - \frac13. \]
Therefore, $B_O = \{v_1, v_2, v_3\} = \left\{1,\ x,\ x^2 - \tfrac13\right\}$. Now we'll project $f(x) = e^x$ onto $P_2 = \operatorname{span}\{B_O\}$:
\[ \operatorname{proj}_{P_2}e^x = \frac{\langle e^x, 1\rangle}{\langle 1,1\rangle}\,(1) + \frac{\langle e^x, x\rangle}{\langle x,x\rangle}\,(x) + \frac{\left\langle e^x, x^2-\tfrac13\right\rangle}{\left\langle x^2-\tfrac13,\ x^2-\tfrac13\right\rangle}\left(x^2 - \frac13\right). \]
With $\langle e^x, 1\rangle = e - e^{-1}$, $\langle 1,1\rangle = 2$, $\langle e^x, x\rangle = 2e^{-1}$, $\langle x,x\rangle = \tfrac23$, $\left\langle e^x, x^2-\tfrac13\right\rangle = \tfrac23 e - \tfrac{14}{3}e^{-1}$, and $\left\langle x^2-\tfrac13, x^2-\tfrac13\right\rangle = \tfrac{8}{45}$, collecting powers of $x$ gives
\[ \operatorname{proj}_{P_2}e^x = \frac{15\left(e^2-7\right)}{4e}\,x^2 + \frac{3}{e}\,x + \frac{3\left(11-e^2\right)}{4e}. \]

Figure 1: $e^x$ and $\operatorname{proj}_{P_2}e^x$ on $[-2,2]$
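The closed-form coefficients can be cross-checked by numerical quadrature, recomputing the projection onto the orthogonal basis $\{1, x, x^2 - \tfrac13\}$ from scratch. A sketch, assuming NumPy is available:

```python
import numpy as np
from math import e

# Midpoint-rule quadrature on [-1, 1]; fine enough for ~1e-9 accuracy.
n = 400000
dx = 2 / n
xs = -1 + dx * (np.arange(n) + 0.5)
inner = lambda g, h: np.sum(g * h) * dx

f = np.exp(xs)
basis = [np.ones_like(xs), xs, xs**2 - 1/3]  # the orthogonal basis built above
proj = sum(inner(f, v) / inner(v, v) * v for v in basis)

# Closed-form coefficients derived above.
c2 = 15 * (e**2 - 7) / (4 * e)
c1 = 3 / e
c0 = 3 * (11 - e**2) / (4 * e)
assert np.allclose(proj, c2 * xs**2 + c1 * xs + c0, atol=1e-6)
```

Summing the three one-dimensional projections is valid here precisely because the basis is orthogonal with respect to this inner product.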


10. On a cold winter night when the outside temperature is 0°F at 9:00 PM, the furnace in a two-story home fails. Suppose the rates of heat flow between the first floor, second floor, and outside are as shown in Figure 2. Further suppose the temperature of the first floor is 70°F and that of the second floor is 60°F when the furnace fails. Set up the differential equations that model the heat flow and solve the system. Then determine the temperature of each room at 10:00 PM.

Figure 2: Heat Flow for a Two-Story Home

Let $y_0(t), y_1(t), y_2(t)$ represent the temperatures of the outside, first story, and second story after $t$ hours, so $y_0', y_1', y_2'$ represent the rates of change of the temperatures in all three spaces. To develop a system of equations to model this, note that rate of change = rate in $-$ rate out:
\[ \begin{aligned} \text{outside:}&\quad y_0' = -\tfrac{3}{10}y_0 + \tfrac15 y_1 + \tfrac1{10}y_2, & y_0(0) &= 0,\\ \text{story 1:}&\quad y_1' = \tfrac15 y_0 - \tfrac7{10}y_1 + \tfrac12 y_2, & y_1(0) &= 70,\\ \text{story 2:}&\quad y_2' = \tfrac1{10}y_0 + \tfrac12 y_1 - \tfrac35 y_2, & y_2(0) &= 60. \end{aligned} \]
Putting this system of coupled differential equations into matrix form gives
\[ \mathbf y' = A\mathbf y, \qquad A = \begin{bmatrix}-\tfrac3{10} & \tfrac15 & \tfrac1{10}\\[2pt] \tfrac15 & -\tfrac7{10} & \tfrac12\\[2pt] \tfrac1{10} & \tfrac12 & -\tfrac35\end{bmatrix}, \qquad \mathbf y(0) = \begin{bmatrix}0\\70\\60\end{bmatrix}. \]
The eigenvalues associated with the matrix $A$ are $\lambda_1 \approx -1.16056$, $\lambda_2 \approx -0.439445$, $\lambda_3 = 0$. The corresponding eigenvectors are
\[ \mathbf v_1 \approx \begin{bmatrix}0.1514\\-1.1514\\1\end{bmatrix}, \qquad \mathbf v_2 \approx \begin{bmatrix}-1.6514\\0.6514\\1\end{bmatrix}, \qquad \mathbf v_3 = \begin{bmatrix}1\\1\\1\end{bmatrix}. \]
Next, we'll use the theory of diagonalization to uncouple this system. Because this system models empirical data which is assumed to be approximate, we'll use decimal approximations throughout.
\[ P = \begin{bmatrix}\mathbf v_1 & \mathbf v_2 & \mathbf v_3\end{bmatrix} \approx \begin{bmatrix}0.1514 & -1.6514 & 1\\-1.1514 & 0.6514 & 1\\1 & 1 & 1\end{bmatrix}, \qquad P^{-1} \approx \begin{bmatrix}0.06446 & -0.4902 & 0.4258\\-0.3978 & 0.1569 & 0.2409\\ \tfrac13 & \tfrac13 & \tfrac13\end{bmatrix}. \]
The related uncoupled system then becomes
\[ \mathbf w' = P^{-1}AP\,\mathbf w = D\mathbf w, \qquad D = \begin{bmatrix}\lambda_1&0&0\\0&\lambda_2&0\\0&0&0\end{bmatrix}, \qquad \mathbf w = P^{-1}\mathbf y. \]
Since each uncoupled equation is first-order with constant coefficient, $w_k(t) = w_k(0)e^{\lambda_k t}$, i.e.
\[ \mathbf w(t) = \begin{bmatrix}e^{\lambda_1 t}&0&0\\0&e^{\lambda_2 t}&0\\0&0&1\end{bmatrix}\mathbf w(0) = e^{Dt}\,\mathbf w(0). \]
Therefore, the solution for the original system is $\mathbf y(t) = Pe^{Dt}P^{-1}\mathbf y(0)$. Crunching the numbers, with $\mathbf w(0) = P^{-1}(0, 70, 60)^T \approx (-8.770,\ 25.437,\ 43.333)^T$, we get
\[ \mathbf y(t) \approx \begin{bmatrix}-1.33\,e^{-1.161t} - 42.01\,e^{-0.439t} + 43.33\\ 10.09\,e^{-1.161t} + 16.57\,e^{-0.439t} + 43.33\\ -8.77\,e^{-1.161t} + 25.44\,e^{-0.439t} + 43.33\end{bmatrix}. \]
(As a check, each column of $A$ sums to zero, so $y_0 + y_1 + y_2$ is conserved at $130$ and every temperature tends to $130/3 \approx 43.3$°F.)

Since time is measured in hours and zero is defined to be 9:00 PM, we'll evaluate $\mathbf y(1)$ to see the temperature in each room at 10:00 PM:
\[ \mathbf y(1) \approx \begin{bmatrix}15.8\\57.2\\57.0\end{bmatrix}\text{°F}, \]
so at 10:00 PM the first story has cooled to about 57.2°F and the second story to about 57.0°F.
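The whole diagonalization can be redone numerically in a few lines, which also applies the conservation sanity check (the column sums of $A$ are zero, so the total must stay at 130). A sketch, assuming NumPy is available:

```python
import numpy as np

# Coefficient matrix of the heat-flow system (symmetric, columns sum to zero).
A = np.array([[-0.3,  0.2,  0.1],
              [ 0.2, -0.7,  0.5],
              [ 0.1,  0.5, -0.6]])
y0 = np.array([0.0, 70.0, 60.0])

lam, P = np.linalg.eigh(A)  # A is symmetric, so eigh gives orthonormal P
# y(t) = P e^{Dt} P^T y(0)
y = lambda t: P @ (np.exp(lam * t) * (P.T @ y0))

assert np.allclose(y(0), y0)            # initial condition recovered
assert np.isclose(y(1).sum(), 130.0)    # total "heat" is conserved
print(np.round(y(1), 1))                # roughly [15.8, 57.2, 57.0]
```

Because $A$ is symmetric, `eigh` returns an orthonormal eigenvector matrix, so $P^{-1} = P^T$ and no explicit inverse is needed.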

