
MA265 FINAL EXAM STUDY GUIDE

See Exam 1 Study Guide and Exam 2 Study Guide and...

Theorem. (Gram-Schmidt Theorem) Let V be an inner product space and let W ≠ {0} be
an m-dimensional subspace of V. Then, there exists an orthonormal basis T = {w1, w2, ..., wm}
for W.

Gram-Schmidt Process
Let S = {u1, u2, ..., um} be a basis for W. Set
v1 = u1
v2 = u2 - (⟨u2, v1⟩/⟨v1, v1⟩) v1
v3 = u3 - (⟨u3, v1⟩/⟨v1, v1⟩) v1 - (⟨u3, v2⟩/⟨v2, v2⟩) v2
...
vm = um - Σ (j = 1 to m-1) (⟨um, vj⟩/⟨vj, vj⟩) vj

Then, S′ = {v1, v2, ..., vm} is an orthogonal basis for W, and
T = {w1 = v1/||v1||, w2 = v2/||v2||, ..., wm = vm/||vm||} is an orthonormal basis for W.
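The process above can be sketched in a few lines of NumPy. This is a minimal illustration assuming the standard dot product on Rⁿ as the inner product; `gram_schmidt` is a hypothetical helper written for this guide, not a library function.

```python
import numpy as np

def gram_schmidt(basis):
    """Orthonormalize linearly independent vectors u1, ..., um
    using the Gram-Schmidt process with the standard dot product."""
    orthogonal = []
    for u in basis:
        u = np.asarray(u, dtype=float)
        # v_k = u_k - sum over j of (<u_k, v_j> / <v_j, v_j>) v_j
        v = u - sum((np.dot(u, w) / np.dot(w, w)) * w for w in orthogonal)
        orthogonal.append(v)
    # normalize each v_k to obtain the orthonormal basis T
    return [v / np.linalg.norm(v) for v in orthogonal]

T = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
```

The resulting vectors are pairwise orthogonal with unit length, as the theorem guarantees.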

Theorem. Let T = {w1, w2, ..., wn} be an orthonormal basis for a Euclidean space V and
let v ∈ V. Then,
v = c1w1 + c2w2 + ... + cnwn where cj = ⟨v, wj⟩
  = ⟨v, w1⟩w1 + ⟨v, w2⟩w2 + ... + ⟨v, wn⟩wn

Definition. Let S = {v1, v2, ..., vn} be an ordered basis for a vector space V. If v ∈ V with
v = a1v1 + a2v2 + ... + anvn, then [v]S = (a1, a2, ..., an)ᵀ is the coordinate vector of v with
respect to S.

Theorem. Let V be an n-dimensional Euclidean space and let S = {u1, u2, ..., un} be an
orthonormal basis for V. If v = a1u1 + a2u2 + ... + anun and w = b1u1 + b2u2 + ... + bnun,
then
⟨v, w⟩ = a1b1 + a2b2 + ... + anbn.

Corollary. ||v|| = ||[v]S ||
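The two facts above can be checked numerically. A minimal sketch, assuming the standard dot product on R² and a rotated standard basis as the orthonormal basis:

```python
import numpy as np

# An orthonormal basis for R^2 (the standard basis rotated by 0.3 rad)
w1 = np.array([np.cos(0.3), np.sin(0.3)])
w2 = np.array([-np.sin(0.3), np.cos(0.3)])

v = np.array([2.0, -1.0])

# c_j = <v, w_j> recovers the expansion v = c1 w1 + c2 w2
c = np.array([np.dot(v, w1), np.dot(v, w2)])
assert np.allclose(c[0] * w1 + c[1] * w2, v)

# ||v|| = ||[v]_S||: orthonormal coordinates preserve the norm
assert np.isclose(np.linalg.norm(v), np.linalg.norm(c))
```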


Definition. Let W be a subspace of an inner product space V. A vector u in V is said to
be orthogonal to W if it is orthogonal to every vector in W. The set of all vectors in V that
are orthogonal to all the vectors in W is called the orthogonal complement of W in V and
is denoted by W⊥.

Theorem. Let W be a subspace of an inner product space V. Then,
(a) W⊥ is a subspace of V.
(b) W ∩ W⊥ = {0}

Theorem. If W is a finite-dimensional subspace of an inner product space V, then
(W⊥)⊥ = W.

Theorem. If A is an m × n matrix, then
(a) the null space of A is the orthogonal complement of the row space of A.
(b) the null space of Aᵀ is the orthogonal complement of the column space of A.

Remark. The four fundamental vector spaces associated with an m × n matrix A are
rowsp(A), whose orthogonal complement is nullsp(A), and
colsp(A), whose orthogonal complement is nullsp(Aᵀ).
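The orthogonality of the row space and the null space can be verified directly. A small sketch: the example matrix and the SVD-based null-space computation are illustrative choices, not part of the original text.

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])  # rank 1, so nullsp(A) is 2-dimensional

# A basis for nullsp(A): right singular vectors for the zero singular values
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
N = Vt[rank:]  # each row of N lies in nullsp(A)

# Every row of A (spanning the row space) is orthogonal to every null-space vector
assert np.allclose(A @ N.T, 0)
```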

Theorem. Let W be a finite-dimensional subspace of an inner product space V. Then,
V = W ⊕ W⊥, i.e., for every v ∈ V, there exist w ∈ W and u ∈ W⊥ such that v = w + u.

Definition. If W is a finite-dimensional subspace of an inner product space V with orthonormal
basis {w1, w2, ..., wm} and v ∈ V, then w = ⟨v, w1⟩w1 + ⟨v, w2⟩w2 + ... + ⟨v, wm⟩wm is
the orthogonal projection of v on W, denoted w = projW v.

Theorem. Let W be a finite-dimensional subspace of an inner product space V. Then, for
v ∈ V, the vector in W closest to v is projW v, i.e., ||v - w|| for w ∈ W is minimized when
w = projW v, and dist(v, W) = ||v - projW v||.
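A concrete projection, using the formula from the definition above. The subspace W and vector v here are illustrative choices:

```python
import numpy as np

# W spanned by an orthonormal pair in R^3; v a vector outside W
w1 = np.array([1.0, 0.0, 0.0])
w2 = np.array([0.0, 1.0, 0.0])
v = np.array([3.0, 4.0, 5.0])

# proj_W v = <v, w1> w1 + <v, w2> w2
proj = np.dot(v, w1) * w1 + np.dot(v, w2) * w2

# The residual v - proj_W v is orthogonal to W, and its length is dist(v, W)
residual = v - proj
assert np.isclose(np.dot(residual, w1), 0)
assert np.isclose(np.dot(residual, w2), 0)
dist = np.linalg.norm(residual)
```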

Definition. A solution x̂ of AᵀAx̂ = Aᵀb is a least squares solution to Ax = b.
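Solving the normal equations gives the same answer as NumPy's built-in least squares routine. A minimal sketch with an illustrative overdetermined system:

```python
import numpy as np

# Overdetermined system Ax = b with no exact solution
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 0.0])

# Least squares solution from the normal equations A^T A x = A^T b
x_hat = np.linalg.solve(A.T @ A, A.T @ b)

# Agrees with NumPy's built-in least squares routine
x_ref, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(x_hat, x_ref)
```

Solving via `lstsq` is preferred in practice since forming AᵀA can worsen conditioning, but the normal equations match this definition directly.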

Definition. Let V and W be vector spaces. A function L : V → W is called a linear transformation
of V into W if
(a) L(u + v) = L(u) + L(v) for all u, v ∈ V, and
(b) L(cu) = cL(u) for all u ∈ V, c ∈ R.
If V = W, L is called a linear operator on V.

Theorem. Let L : V → W be a linear transformation. Then,
(a) L(0_V) = 0_W
(b) L(u - v) = L(u) - L(v)
Theorem. Let L : V → W be a linear transformation of an n-dimensional vector space
V into a vector space W. Let S = {v1, v2, ..., vn} be a basis for V. If v is any vector
in V, then L(v) is completely determined by {L(v1), L(v2), ..., L(vn)}. In particular, if
v = c1v1 + c2v2 + ... + cnvn, then
L(v) = c1L(v1) + c2L(v2) + ... + cnL(vn).

Theorem. Let L : Rⁿ → Rᵐ be a linear transformation and consider the standard basis
{e1, e2, ..., en} for Rⁿ. Let A be the m × n matrix whose j-th column is L(ej). Then,
L(x) = Ax for every x ∈ Rⁿ. A is the standard matrix representing L.
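Building the standard matrix column by column, as the theorem describes. The transformation `L` here (rotation by 90 degrees) is an illustrative choice:

```python
import numpy as np

# A linear transformation L : R^2 -> R^2 (rotation by 90 degrees)
def L(x):
    return np.array([-x[1], x[0]])

# The standard matrix has L(e_j) as its j-th column
e = np.eye(2)
A = np.column_stack([L(e[:, j]) for j in range(2)])

# L(x) = Ax for every x
x = np.array([3.0, 4.0])
assert np.allclose(A @ x, L(x))
```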

Definition. Let L : V → V be a linear operator on an n-dimensional vector space V.
The number λ is an eigenvalue of L if there exists a nonzero vector x in V such that
L(x) = λx. Every nonzero x satisfying this equation is an eigenvector of L associated
with the eigenvalue λ. The set of eigenvectors of L associated with λ, together with the
zero vector, is the eigenspace of L associated with the eigenvalue λ.

Definition. Let A be an n × n matrix (i.e., a linear operator on Rⁿ). The number λ is an
eigenvalue of A if there exists a nonzero vector x in Rⁿ such that Ax = λx. Every nonzero
x satisfying this equation is an eigenvector of A associated with the eigenvalue λ. The set
of eigenvectors of A associated with λ, together with the zero vector, is the eigenspace of A
associated with the eigenvalue λ.

Definition. The characteristic polynomial of A is p(λ) = det(A - λIn) and p(λ) = 0 is the
characteristic equation of A.

Theorem. The eigenvalues of A are the roots of the characteristic polynomial of A.

Theorem. The eigenspace of A associated with the eigenvalue λ is the null space of A - λIn.

Theorem. The eigenvalues of a diagonal or triangular matrix are the entries on the diagonal.
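The two theorems above can be seen on a small upper triangular example. The matrix here is an illustrative choice:

```python
import numpy as np

# Upper triangular matrix: eigenvalues are the diagonal entries
A = np.array([[2.0, 5.0, 1.0],
              [0.0, 3.0, 4.0],
              [0.0, 0.0, 7.0]])
assert np.allclose(np.sort(np.linalg.eigvals(A).real), [2.0, 3.0, 7.0])

# The eigenspace for lambda = 2 is the null space of A - 2I;
# here e_1 lies in that null space
assert np.allclose((A - 2.0 * np.eye(3)) @ np.array([1.0, 0.0, 0.0]), 0)
```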

Definition. If A and B are n × n matrices, then B is similar to A if there exists a nonsingular
matrix P such that B = P⁻¹AP.

Definition. An n × n matrix A is diagonalizable if there exists a nonsingular matrix P such
that P⁻¹AP is diagonal, i.e., D = P⁻¹AP is a diagonal matrix, or A = PDP⁻¹.

Theorem. Similar matrices have the same eigenvalues.

Corollary. If A is a diagonalizable n × n matrix where D = P⁻¹AP is diagonal, then A
and D have the same eigenvalues.

Theorem. An n × n matrix A is similar to a diagonal matrix D if and only if A has n
linearly independent eigenvectors. Moreover, the entries on the main diagonal of D are the
eigenvalues of A.

Theorem. If the roots of the characteristic polynomial (i.e., the eigenvalues) of an n × n
matrix A are distinct, then A is diagonalizable. Moreover, the associated eigenvectors are
linearly independent and hence form a basis for Rⁿ.
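Diagonalization via eigenvectors can be checked numerically. A minimal sketch; the matrix is an illustrative choice with distinct eigenvalues:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])  # distinct eigenvalues 5 and 2

lam, P = np.linalg.eig(A)  # columns of P are eigenvectors

# Distinct eigenvalues => P is nonsingular and P^{-1} A P = D
D = np.linalg.inv(P) @ A @ P
assert np.allclose(D, np.diag(lam))
assert np.allclose(np.sort(lam.real), [2.0, 5.0])
```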

Definition. An n × n matrix A is defective if A is not diagonalizable.

Definition. A complex number c is of the form c = a + bi where a and b are real numbers
and i = √(-1). a is the real part of c and b is the imaginary part of c.

Operations on Complex Numbers C: c1 = a1 + b1i, c2 = a2 + b2i
addition: c1 + c2 = (a1 + a2) + (b1 + b2)i
subtraction: c1 - c2 = (a1 - a2) + (b1 - b2)i
multiplication: c1c2 = (a1a2 - b1b2) + (a1b2 + a2b1)i
(complex) conjugation: c̄1 = a1 - b1i
division: c1/c2 = (a1a2 + b1b2)/(a2² + b2²) + ((a2b1 - a1b2)/(a2² + b2²))i
modulus: |c| = √(cc̄) = √(a² + b²)
Remark. (Properties of Complex Numbers) For c, d ∈ C:
(c̄)‾ = c
(c + d)‾ = c̄ + d̄
(cd)‾ = c̄ d̄
c is a real number ⟺ c = c̄
cc̄ is a nonnegative real number
cc̄ = 0 ⟺ c = 0
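Python's built-in complex type implements all of the operations above, so the formulas can be checked directly. The particular numbers are illustrative:

```python
# Python's built-in complex type implements these operations directly
c1, c2 = 3 + 4j, 1 - 2j

assert c1 * c2 == (3*1 - 4*(-2)) + (3*(-2) + 1*4) * 1j  # (a1a2 - b1b2) + (a1b2 + a2b1)i
assert c1.conjugate() == 3 - 4j                          # conjugation flips the sign of b
assert abs(c1) == 5.0                                    # |c| = sqrt(a^2 + b^2)
assert (c1 * c2).conjugate() == c1.conjugate() * c2.conjugate()
assert abs(c1 / c2 - (-1 + 2j)) < 1e-12                  # division formula
```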

 
Definition. If A = [aij] is an m × n matrix with complex entries, the conjugate of A is
Ā = [āij].

Remark. (Properties of Complex Matrices)
(Ā)‾ = A
(A + B)‾ = Ā + B̄
(AB)‾ = Ā B̄
(cA)‾ = c̄ Ā for any complex number c ∈ C
(Ā)ᵀ = (Aᵀ)‾
If A is nonsingular, then (Ā)⁻¹ = (A⁻¹)‾

Theorem. All of the roots of the characteristic polynomial (i.e., the eigenvalues) of a
symmetric matrix are real numbers.

Theorem. If A is a symmetric matrix, then the eigenvectors that belong to distinct eigenvalues
of A are orthogonal.

Definition. A real square matrix A is orthogonal if A⁻¹ = Aᵀ, or AᵀA = In = AAᵀ.

Theorem. An n × n matrix A is orthogonal if and only if the columns of A form an
orthonormal set.

Theorem. If A is a symmetric n × n matrix, then there exists an orthogonal matrix P such
that D = P⁻¹AP = PᵀAP is a diagonal matrix. The eigenvalues of A lie on the main
diagonal of D.
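NumPy's symmetric eigensolver `np.linalg.eigh` produces exactly this orthogonal diagonalization. A minimal sketch with an illustrative symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])  # symmetric

# eigh is NumPy's symmetric eigensolver: the eigenvalues are real
# and the returned eigenvector matrix P is orthogonal
lam, P = np.linalg.eigh(A)

assert np.allclose(P.T @ P, np.eye(2))          # P^T P = I, so P^{-1} = P^T
assert np.allclose(P.T @ A @ P, np.diag(lam))   # P^T A P = D
assert np.allclose(lam, [1.0, 3.0])             # eigenvalues in ascending order
```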

Definition. A differential equation is an equation that involves an unknown function and


its derivatives.

Definition. A first-order homogeneous linear system of differential equations has the form
x1′(t) = a11x1(t) + a12x2(t) + ... + a1nxn(t)
x2′(t) = a21x1(t) + a22x2(t) + ... + a2nxn(t)
...
xn′(t) = an1x1(t) + an2x2(t) + ... + annxn(t)
or x′ = Ax, where A = [aij] is the n × n coefficient matrix and x = (x1(t), x2(t), ..., xn(t))ᵀ.
A vector function x(t) satisfying x′ = Ax is a solution.

Definition. If there are n linearly independent solutions x(1), x(2), ..., x(n), then they form
a fundamental set of solutions to x′ = Ax. The general solution to x′ = Ax has the form
x = c1x(1) + c2x(2) + ... + cnx(n) for arbitrary constants c1, c2, ..., cn. For specific values of
c1, c2, ..., cn determined by an initial condition x(0) = x0, x = c1x(1) + c2x(2) + ... + cnx(n)
is the particular solution to the initial value problem x′ = Ax, x(0) = x0.

Theorem. If an n × n matrix A has n linearly independent eigenvectors x1, x2, ..., xn
associated with the eigenvalues λ1, λ2, ..., λn, respectively, then the general solution to
x′ = Ax is
x = c1 e^(λ1 t) x1 + c2 e^(λ2 t) x2 + ... + cn e^(λn t) xn.
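The eigenvector solution formula can be verified numerically by comparing a finite-difference derivative of x(t) against Ax(t). A minimal sketch; the matrix, constants, and step size are illustrative choices:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])  # real, distinct eigenvalues 1 and 2

lam, X = np.linalg.eig(A)   # columns of X are eigenvectors
c = np.array([3.0, -1.0])   # arbitrary constants c1, c2

def x(t):
    # x(t) = c1 e^(lam1 t) x1 + c2 e^(lam2 t) x2
    return sum(c[i] * np.exp(lam[i] * t) * X[:, i] for i in range(2))

# Check x' = Ax with a central finite difference
t, h = 0.5, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t), atol=1e-4)
```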

Definition. A point in the phase plane (the xy-plane) at which both x′(t) and y′(t) are zero is
an equilibrium point of the dynamical system x′ = Ax.

Classifying the Equilibrium Point at the Origin
λ1 < λ2 < 0:
The origin is an attractor and all trajectories approach the origin as t → ∞.
λ1 > λ2 > 0:
The origin is unstable and all trajectories diverge from the origin as t → ∞.
λ1 = λ2 < 0:
The origin is an attractor.
(a) If A has two linearly independent eigenvectors, then all trajectories are lines.
(b) If A has only one eigenvector, then all trajectories align themselves to be tangent
to the eigenvector at the origin.
λ1 = λ2 > 0:
The origin is unstable.
(a) If A has two linearly independent eigenvectors, then all trajectories are lines.
(b) If A has only one eigenvector, then all trajectories align themselves to be tangent
to the eigenvector at the origin.
λ1 < 0 < λ2:
The origin is a saddle point; trajectories approach the origin along the
eigenvector associated with λ1 but diverge along the eigenvector associated with λ2
as t → ∞.
λ = α ± βi, α = 0, β ≠ 0:
The origin is marginally stable and all trajectories are orbits.
λ = α ± βi, α ≠ 0, β ≠ 0:
(a) If α < 0, the origin is a stable spiral and all trajectories approach the origin as
t → ∞.
(b) If α > 0, the origin is an unstable spiral and all trajectories diverge from the
origin as t → ∞.
