
1 Vector spaces and matrices

1.1 Basics

Vector spaces

Let an array of n numbers be denoted

$$
a = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{bmatrix} \tag{1.1}
$$

Some examples to think about: Euclidean coordinates, P-V-T, health indices, a grocery shopping list. The elements form an ordered list. Let R represent the set of real numbers; the column n-vector a then is in the real vector space R^n. Simply put, vectors are objects which can be added or scaled. We do not need the school-level concepts of direction, magnitude, angle between vectors, etc. It is possible to take vectors which obey these properties and visualize them as points in n-dimensional Euclidean space (R^n).

For convenience we use row vectors: a^T = [a1, a2, ..., an], so that a = [a1, a2, ..., an]^T. We will not use the school notation ~v to denote vectors, but either v or |v⟩.

Some examples include (a short numerical sketch follows this list):
1. Note that the ‘vectors’ we have studied in mechanics at school do add and scale.
2. The eigenvectors ui of Au = λu add and scale.
3. Polynomials in x can be added and scaled to get new polynomials.
4. Similarly, matrices can be added and scaled to get new matrices. This (confusingly) means that matrices can be vectors as per the definition above! (The other way round is also possible: a vector may be an ordered list with many 'rows' and 'columns', and hence it can be a matrix.)
5. Also note that if T1(x, t) and T2(x, t) are solutions to the transient conduction PDE
$$ \frac{\partial^2 T}{\partial x^2} = \frac{1}{\alpha} \frac{\partial T}{\partial t} $$
then the two solutions also add and scale to provide new solutions.
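A minimal NumPy sketch of the "add and scale" behaviour, with arrays standing in for vectors and 2 × 2 arrays for matrices (the specific values are illustrative):

```python
import numpy as np

# Two "vectors" in R^3: they add and scale, as the definition requires.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(a + b)     # addition: [5. 7. 9.]
print(2.5 * a)   # scalar multiplication: [2.5 5.  7.5]

# Matrices add and scale too, so they are also "vectors" in this sense.
A = np.eye(2)
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(A + 3.0 * B)
```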

Definition 1.1. A vector space V is a set of elements (vectors) which are manipulated using
two operations: addition, and multiplication by a scalar.

A vector space V is not simply a collection of points, but a structure which connects V with a field F through the operations of addition and scalar multiplication. The field we normally work with is either the real numbers (R) or the complex numbers (C). The following axioms hold, assuming x and y are two vectors in V:

1. x + y is also a vector in V (closure under addition).


2. x + y = y + x (commutative law)
3. (x + y) + z = x + (y + z) (associative law)
4. There is a zero vector such that x + 0 = 0 + x = x.
5. For every x ∈ V, there is a y = −x such that x + y = 0.
6. Given a scalar a, ax belongs to V.

7. a(bx) = (ab)x. (associative law)


8. (a + b)x = ax + bx and a(x + y) = ax + ay. (distributivity)
9. For every x in V, 1x = x.

Definition 1.2. Let a and b be vectors in a subset V of R^n. Then V is called a subspace if it is closed under the operations of vector addition and scalar multiplication (i.e. for every scalar α, both a + b and αa are in V). Note that 0 is in every subspace, since a − a = 0.

Definition 1.3. Let a1, ..., ak be arbitrary vectors in R^n. The set of all their linear combinations is called the span of a1, ..., ak.
$$ \mathrm{span}[a_1, \ldots, a_k] = \left\{ \sum_{i=1}^{k} \alpha_i a_i : \alpha_1, \ldots, \alpha_k \in \mathbb{R} \right\} $$

If a = α1 a1 + ··· + αk ak, then span[a1, ..., ak, a] = span[a1, ..., ak]. The span of any set of vectors is a subspace.
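Span membership can be tested numerically by comparing matrix ranks: a vector lies in the span exactly when appending it as a column does not raise the rank. A minimal NumPy sketch; the helper name `in_span` is illustrative, not from the notes:

```python
import numpy as np

def in_span(vectors, a, tol=1e-10):
    """a is in span[a1, ..., ak] iff appending it as a column keeps the rank unchanged."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(np.column_stack([A, a]), tol=tol) == \
        np.linalg.matrix_rank(A, tol=tol)

a1 = np.array([1.0, 0.0, 1.0])
a2 = np.array([0.0, 1.0, 1.0])
print(in_span([a1, a2], a1 + 2 * a2))                 # True: a linear combination
print(in_span([a1, a2], np.array([0.0, 0.0, 1.0])))   # False: outside the span
```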
Definition 1.4. A set of n vectors {xi} is linearly dependent if there exist scalars ai, not all zero, such that $\sum_{i=1}^{n} a_i x_i = 0$. Then at least one vector can be expressed as a linear combination of the others. Conversely, if $\sum_{i=1}^{n} a_i x_i = 0$ only when every ai = 0, then {xi} are linearly independent. A vector a is a linear combination of the vectors a1, a2, ..., ak if there are scalars α1, ..., αk such that a = α1 a1 + ··· + αk ak.
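The same rank idea gives a numerical test for linear independence. A minimal NumPy sketch (`linearly_independent` is an illustrative name):

```python
import numpy as np

def linearly_independent(vectors):
    """Independent iff the matrix with the vectors as columns has full column rank."""
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == A.shape[1]

x1 = np.array([1.0, 2.0, 0.0])
x2 = np.array([0.0, 1.0, 1.0])
print(linearly_independent([x1, x2]))                  # True
print(linearly_independent([x1, x2, x1 - 3.0 * x2]))   # False: third is a combination
```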

Definition 1.5. A basis of a vector space is a linearly independent subset of vectors that spans the space.

In general a set may contain more vectors than necessary to span a space. However, all possible bases of a subspace V contain the same number of vectors. That number is the dimension of V, written dim V. A real vector space of dimension n, R^n, has a basis of n vectors which are linearly independent; conversely, a set of n linearly independent vectors can form the basis of R^n. An arbitrary vector u in R^n can be expressed in terms of a basis {ui}, i = 1, ..., n, as

$$ u = \sum_{i=1}^{n} c_i u_i \tag{1.2} $$

Equating the coordinates on both sides would give n equations in the n unknowns ci. It is also possible for a vector space to be infinite dimensional; this will be of use when describing an arbitrary function, for example using a summation of cosines (as a Fourier series).
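Solving for the coordinates ci in (1.2) amounts to solving a linear system. A minimal NumPy sketch using a concrete (illustrative) basis of R^2:

```python
import numpy as np

# Basis vectors [1,1] and [1,-1] of R^2 as the columns of U, and a target vector u.
U = np.array([[1.0,  1.0],
              [1.0, -1.0]])
u = np.array([3.0, 1.0])

# Equating coordinates in u = c1*u1 + c2*u2 gives the linear system U @ c = u.
c = np.linalg.solve(U, u)
print(c)       # coordinates in this basis: [2. 1.]
print(U @ c)   # reconstructs u: [3. 1.]
```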
Given a basis {a1, ..., ak} and a vector a = α1 a1 + ··· + αk ak, the coefficients αi are often called the coordinates of a.

Examples of vector spaces:

• All vectors in R^3 satisfying v1 − 3v2 + 2v3 = 0. Dimension = 2, basis = [3, 1, 0]^T and [2, 0, −1]^T (verified numerically in the sketch after this list).
• The set of points satisfying x + y = 1 cannot be used to create a vector space because (0, 0) is not in it.
• All polynomials in x of degree not exceeding 3. Dimension = 4, basis = {1, x, x², x³}.
• All skew-symmetric 2 × 2 matrices. Dimension = 1, basis = $\begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$.
• Real 2 × 2 matrices form a 4-dimensional vector space with basis $\begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}$.
• All 2 × 2 matrices such that a11 + a22 = 0. Dimension = 3, basis = $\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$.
• All m × n matrices with positive entries. Not a vector space: it is not closed under multiplication by negative scalars, and it contains no zero matrix.
• Second-order homogeneous linear differential equations. The solutions form a vector space: given y″ + p(x)y′ + q(x)y = 0, if y1 and y2 are solutions, then c1 y1 + c2 y2 is also a solution. For example, e^x and e^{−x} are solutions of y″ − y = 0, and so is y = −3e^x + 0.4e^{−x}.
• Find three different bases for R^2: {[1, 0], [0, 1]}, {[1, 1], [1, −1]}, and {[1, 0], [0, −1]}.
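The sketch referred to in the first example above: a quick NumPy check that the proposed basis vectors lie in the subspace v1 − 3v2 + 2v3 = 0 and are independent.

```python
import numpy as np

# b1 and b2 satisfy v1 - 3*v2 + 2*v3 = 0 and are independent,
# so they form a basis of the two-dimensional subspace.
b1 = np.array([3.0, 1.0, 0.0])
b2 = np.array([2.0, 0.0, -1.0])
normal = np.array([1.0, -3.0, 2.0])
print(normal @ b1, normal @ b2)                           # 0.0 0.0: both lie in the subspace
print(np.linalg.matrix_rank(np.column_stack([b1, b2])))   # 2: linearly independent
```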

Inner product spaces

Definition 1.6. A real vector space V is called a real inner product space if for every pair of vectors a and b there is a real number, denoted ⟨a, b⟩ and called the inner product, which satisfies the following axioms.

$$ \langle a, a \rangle > 0 \ \text{for} \ a \neq 0, \qquad \langle a, a \rangle = 0 \ \text{for} \ a = 0 \tag{1.3} $$
$$ \langle a, b \rangle = \langle b, a \rangle \tag{1.4} $$
$$ \langle a, \alpha b \rangle = \langle \alpha a, b \rangle = \alpha \langle a, b \rangle \tag{1.5} $$
$$ \langle c, a + b \rangle = \langle c, a \rangle + \langle c, b \rangle \tag{1.6} $$

Definition 1.7. The norm (length) of a vector is defined by
$$ \|a\| = \sqrt{\langle a, a \rangle} \quad (\geq 0) $$
The norm serves as a generalization of the notion of the magnitude of a vector.

Definition 1.8. A vector space is a normed vector space if

1. ‖v‖ ≥ 0 for any vector v.
2. ‖v‖ = 0 if and only if v = 0.
3. ‖αv‖ = |α| ‖v‖.
4. ‖u + v‖ ≤ ‖u‖ + ‖v‖.

A normed space is a specific case of a vector space. Unit vectors have norm 1.
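These axioms are easy to check numerically with the Euclidean norm. A minimal NumPy sketch (the vectors are illustrative; the first reuses the √33 example that appears later in these notes):

```python
import numpy as np

u = np.array([3.0, 2.0, -2.0, 4.0, 0.0])
v = np.array([1.0, -1.0, 0.0, 2.0, 5.0])

print(np.linalg.norm(u))                                     # sqrt(<u,u>) = sqrt(33)
print(np.isclose(np.linalg.norm(-2.0 * u),
                 2.0 * np.linalg.norm(u)))                   # axiom 3: True
print(np.linalg.norm(u + v) <= np.linalg.norm(u) + np.linalg.norm(v))  # axiom 4: True
```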


Projections, lines and planes
Consider two orthonormal vectors e and e*, such that ‖e‖ = ‖e*‖ = 1 and e · e* = 0. Then a vector a may be written as a = αe + βe*, which implies that the projections onto the two unit vectors are α = a · e and β = a · e*.

Fig. 1.1: Projecting onto unit vectors.

Definition 1.9. The projection of a onto a non-unit vector f is
$$ \frac{\langle a, f \rangle}{\|f\|^2} \, f \tag{1.7} $$
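A minimal NumPy sketch of the projection formula (1.7); the helper name `project` is illustrative:

```python
import numpy as np

def project(a, f):
    """Projection of a onto a (not necessarily unit) vector f: (<a,f>/||f||^2) f."""
    return (a @ f) / (f @ f) * f

a = np.array([2.0, 3.0])
f = np.array([4.0, 0.0])
p = project(a, f)
print(p)             # [2. 0.]
print((a - p) @ f)   # 0.0: the residual is orthogonal to f
```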

Definition 1.10. A point along a line through a in the direction u is given by p = a + su. If n denotes the normal to the line, then u and n are orthogonal, which implies
$$ (p - a) \cdot n = 0 \ \Leftrightarrow \ p \cdot n = a \cdot n \tag{1.8} $$

Fig. 1.2: A line (left) and a plane (right).



Definition 1.11. Given two vectors u and v that lie in a plane (u and v need not be mutually orthogonal), along with a position vector a, any vector p on the plane may be written as
$$ p = a + su + tv, \qquad s, t \in \mathbb{R} \tag{1.9} $$

Alternately, since any vector in the plane must be orthogonal to the normal to the plane,

$$ (p - a) \cdot n = 0 \ \Leftrightarrow \ p \cdot n = a \cdot n \tag{1.10} $$

This second form is more convenient because (a) it has the same form as the equation of a line and can therefore be seen to generalize to hyperplanes of any dimension, and (b) it needs only two vectors (a point in the plane and a normal).
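A minimal NumPy sketch comparing the two representations (the specific vectors are illustrative; the normal is obtained here as a cross product of the in-plane vectors):

```python
import numpy as np

# Parametric form p = a + s*u + t*v versus normal form (p - a) . n = 0.
a = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 1.0, 0.0])
v = np.array([0.0, 1.0, 1.0])
n = np.cross(u, v)                    # a normal to the plane spanned by u and v

p = a + 2.0 * u - 1.5 * v             # any point from the parametric form...
print(np.isclose((p - a) @ n, 0.0))   # ...passes the normal-form test: True
```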

If u and v are orthogonal, then ⟨u, v⟩ = ⟨v, u⟩ = 0, and
$$ \|u + v\|^2 = \langle u + v, u + v \rangle = \langle u, u \rangle + \langle u, v \rangle + \langle v, u \rangle + \langle v, v \rangle = \langle u, u \rangle + \langle v, v \rangle $$
which is the Pythagoras theorem. The following three (in)equalities then hold:

1. Cauchy-Schwarz inequality: ‖a‖ ‖b‖ ≥ |⟨a, b⟩|. By definition this holds when a or b is 0. If all three terms in this inequality are nonzero (and by definition, positive), we first let
$$ w = b - \frac{\langle a, b \rangle}{\langle a, a \rangle} \, a \ \Rightarrow \ \langle a, w \rangle = 0 $$
which implies that a and w are orthogonal to each other. w would also be orthogonal to any scalar multiple of a, such as (⟨a, b⟩/⟨a, a⟩) a. Using the Pythagoras theorem,
$$ \left\| w + \frac{\langle a, b \rangle}{\langle a, a \rangle} \, a \right\|^2 = \|b\|^2 = \langle w, w \rangle + \left( \frac{\langle a, b \rangle}{\langle a, a \rangle} \right)^2 \langle a, a \rangle \ \geq \ \frac{\langle a, b \rangle^2}{\langle a, a \rangle} $$
since ⟨w, w⟩ ≥ 0. Multiplying by ⟨a, a⟩, which is > 0, gives ⟨a, a⟩⟨b, b⟩ ≥ ⟨a, b⟩², which is the Cauchy-Schwarz inequality.
2. Triangle inequality: ‖a‖ + ‖b‖ ≥ ‖a + b‖.
3. Parallelogram equality: ‖a + b‖² + ‖a − b‖² = 2(‖a‖² + ‖b‖²).
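All three (in)equalities are easy to verify numerically for arbitrary vectors. A minimal NumPy sketch using random vectors (the seed and dimension are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.standard_normal(4), rng.standard_normal(4)
na, nb = np.linalg.norm(a), np.linalg.norm(b)

print(na * nb >= abs(a @ b))                     # Cauchy-Schwarz: True
print(na + nb >= np.linalg.norm(a + b))          # triangle inequality: True
print(np.isclose(np.linalg.norm(a + b)**2 + np.linalg.norm(a - b)**2,
                 2.0 * (na**2 + nb**2)))         # parallelogram equality: True
```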

Examples:

• Find the Euclidean norm of [3, 2, −2, 4, 0]^T. Answer: √33.
• Does the triangle inequality hold for a = [0.4, 1.3, −2.2]^T and b = [2, 3, −5]^T? Yes: ‖[2.4, 4.3, −7.2]^T‖ = √76.09 = 8.72 ≤ √6.69 + √38 = 8.75.

In n-dimensional Euclidean space R^n,
$$ \langle a, b \rangle = a^\top b = a_1 b_1 + \cdots + a_n b_n \tag{1.11} $$
If θ is the angle between the two vectors whose inner product is real,
$$ a \cdot b = \|a\| \, \|b\| \cos\theta \ \Rightarrow \ \cos\theta = \frac{\langle a, b \rangle}{\sqrt{\langle a, a \rangle \langle b, b \rangle}} \tag{1.12} $$
and |cos θ| ≤ 1 by the Cauchy-Schwarz inequality. The Euclidean norm is
$$ \|a\| = \sqrt{\langle a, a \rangle} = \sqrt{a^\top a} = \sqrt{a_1^2 + \cdots + a_n^2} \tag{1.13} $$
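A minimal NumPy sketch of the angle computation in (1.12), for two illustrative vectors in R^2:

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])
cos_theta = (a @ b) / np.sqrt((a @ a) * (b @ b))   # eq. (1.12)
print(np.degrees(np.arccos(cos_theta)))           # 45.0
```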

For a distance d(u, v) to qualify as a metric, the following axioms need to be satisfied (normed spaces with such distance metrics are called metric spaces):
$$ d(u, v) \geq 0 \tag{1.14} $$
$$ d(u, u) = 0 \tag{1.15} $$
$$ d(u, v) = d(v, u) \tag{1.16} $$
$$ d(u, w) \leq d(u, v) + d(v, w) \tag{1.17} $$
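The Euclidean norm induces one such metric, d(u, v) = ‖u − v‖. A minimal NumPy sketch checking the four axioms for sample vectors (the vectors and helper name `d` are illustrative):

```python
import numpy as np

def d(u, v):
    """The metric induced by the Euclidean norm: d(u, v) = ||u - v||."""
    return np.linalg.norm(u - v)

u, v, w = np.array([1.0, 2.0]), np.array([4.0, 6.0]), np.array([0.0, -1.0])
print(d(u, v) >= 0.0, d(u, u) == 0.0)    # (1.14) and (1.15): True True
print(d(u, v) == d(v, u))                 # (1.16) symmetry: True
print(d(u, w) <= d(u, v) + d(v, w))       # (1.17) triangle inequality: True
```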
