
Inner-Products, Norms, Orthogonality

Inner-Products
Definition: An inner-product is an operation on two vectors, producing a scalar result. The inner product
of two vectors x and y is denoted $\langle x, y \rangle$ and must have the following properties:
1. $\langle x, y \rangle = \overline{\langle y, x \rangle}$, where the overbar indicates the complex conjugate.
2. $\langle x, \alpha y_1 + \beta y_2 \rangle = \alpha \langle x, y_1 \rangle + \beta \langle x, y_2 \rangle$.
3. $\langle x, x \rangle \geq 0$ for all $x$, and $\langle x, x \rangle = 0$ if and only if $x = 0$.
There are two other properties that can be derived from this definition.
Vector spaces with inner products defined for them are referred to as inner product spaces.
For our purposes, we use the dot product of two vectors as an inner product, i.e.:

$$\langle x_1, x_2 \rangle = \sum_{i=1}^{n} x_{1i}\, x_{2i}$$

There are several other useful inner products. For example, for functions of time defined on the interval
$[a, b]$, of the form $x(t)$, we define:

$$\langle x, y \rangle = \int_{a}^{b} x(t)\, y(t)\, dt$$
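As a quick Matlab sketch of both forms (the example vectors, the interval, and the function handles here are chosen arbitrarily for illustration):

% Dot-product inner product of two real vectors
x = [2; -1; 0];
y = [1;  3; 4];
ip_vec = x.' * y;                          % same as sum(x .* y); equals -1 here

% Inner product of two functions of time on the interval [a, b]
a = 0; b = 1;                              % example interval
xf = @(t) sin(2*pi*t);                     % example signals
yf = @(t) cos(2*pi*t);
ip_fun = integral(@(t) xf(t) .* yf(t), a, b);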

Norms
A norm generalizes our concept of length or magnitude of a vector.
Definition: A norm is a function of a single vector that produces a scalar result. For a vector x, it is
denoted $\|x\|$ and must satisfy the following rules:
1. $\|x\| \geq 0$, and $\|x\| = 0$ if and only if $x = 0$.
2. $\|\alpha x\| = |\alpha|\, \|x\|$ for any scalar $\alpha$.
3. $\|x + y\| \leq \|x\| + \|y\|$ (triangle inequality).
4. $|\langle x, y \rangle| \leq \|x\|\, \|y\|$ (Cauchy-Schwarz inequality).

Vector spaces with such norms defined for them are referred to as normed linear spaces.
Norms are sometimes induced from the given inner product. For example, the Euclidean norm is given
using:

$$\|x\| = \langle x, x \rangle^{1/2}$$


A unit vector is a vector whose norm is equal to one. Every nonzero vector can be converted to unit length
by dividing it by its norm. This will not change its direction, just its scale.

Commonly used norms:


1. 1-norm: $\|x\|_1 = \sum_{i=1}^{n} |x_i|$. Also called the Manhattan norm.
2. 2-norm: $\|x\|_2 = \sqrt{\sum_{i=1}^{n} |x_i|^2}$. This is what we usually call the Euclidean norm.
3. If the vector is a signal, $x(t)$, the 2-norm becomes:
   $$\|x\|_2 = \sqrt{\int_{a}^{b} x(t)\, x(t)\, dt}$$
4. $\infty$-norm: $\|x\|_\infty = \max_i |x_i|$. This is the infinity norm, which we use a lot in robust control. In
   many cases, we try to minimize the magnitude of the largest component of the vector, which in turn
   bounds the size of all the other components.
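A short Matlab sketch of these norms (the example vector is arbitrary; the built-in norm function is shown for comparison):

x = [2; -1; 0];                  % example vector

n1   = sum(abs(x));              % 1-norm (Manhattan): 3
n2   = sqrt(sum(abs(x).^2));     % 2-norm (Euclidean): sqrt(5)
ninf = max(abs(x));              % infinity norm: 2

% Built-in equivalents: norm(x,1), norm(x,2), norm(x,Inf)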

Orthogonality and Orthonormality


Angle: the angle between two vectors x and y is denoted by $\theta(x, y)$ and satisfies the following equation:

$$\langle x, y \rangle = \|x\|\, \|y\| \cos \theta(x, y)$$
Definition of orthogonality: Two vectors are said to be orthogonal if and only if their inner product is
equal to zero.
In mathematical terms: a set of vectors $\{x_i\}$ is orthogonal if:
$$\langle x_i, x_j \rangle = 0 \quad \text{for } i \neq j, \qquad \langle x_i, x_i \rangle \neq 0 \quad \text{for } i = j$$
A set of vectors is orthonormal if:
$$\langle x_i, x_j \rangle = 0 \quad \text{for } i \neq j, \qquad \langle x_i, x_i \rangle = 1 \quad \text{for } i = j$$
Orthogonality, and in particular orthonormality, is very important in various engineering applications.
It is especially important when constructing a basis for a vector space, because if the basis is composed
of orthogonal vectors, it is easy to resolve any vector in the vector space into its components along each
one of the orthogonal vectors in the basis.
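As a small Matlab sketch of the angle formula and the orthogonality test (the two vectors are arbitrary examples):

x = [1; 0; 1];
y = [0; 2; 0];

% Angle from <x,y> = ||x|| ||y|| cos(theta)
theta = acos((x.' * y) / (norm(x) * norm(y)));   % pi/2 here

% Orthogonality: inner product equal to zero (within round-off)
isOrthogonal = abs(x.' * y) < 1e-12;             % true here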

Checking for Linear Dependency and Gram-Schmidt Orthonormalization


There are several ways to check for linear dependency of a set of vectors.

By The Definition of Linear Dependency


One option is to use the definition of linear dependency directly. Here we look for scalars, not all zero,
that make a linear combination of the vectors equal to the zero vector. This is a good approach when the
problem is small, but it becomes cumbersome when the set of vectors is large.
For example, suppose there are three vectors in the set:

$$x_1 = \begin{bmatrix} 2 \\ -1 \\ 0 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}, \quad x_3 = \begin{bmatrix} 0 \\ 7 \\ 8 \end{bmatrix}$$
To find if this set is linearly dependent, we look for scalars $\alpha_1, \alpha_2, \alpha_3$, not all zero, such that:
$$\alpha_1 x_1 + \alpha_2 x_2 + \alpha_3 x_3 = 0$$
which gives us the following three equations:
$$2\alpha_1 + \alpha_2 = 0$$
$$-\alpha_1 + 3\alpha_2 + 7\alpha_3 = 0$$
$$4\alpha_2 + 8\alpha_3 = 0$$
From the first equation, $\alpha_2 = -2\alpha_1$; from the third equation, $\alpha_2 = -2\alpha_3$, so $\alpha_1 = \alpha_3$. The second
equation is then satisfied automatically, so choosing $\alpha_1 = 1$, $\alpha_2 = -2$, $\alpha_3 = 1$ gives a zero linear
combination, thus the three vectors are linearly dependent.
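The same coefficients can also be found numerically from the null space of the matrix whose columns are the vectors; a minimal Matlab sketch (null is the built-in null-space function):

A = [2 1 0; -1 3 7; 0 4 8];   % the three example vectors as columns

alpha = null(A);              % basis for the null space of A
alpha = alpha / alpha(1);     % scale so the first coefficient is 1: [1; -2; 1]
% This confirms x1 - 2*x2 + x3 = 0. An empty null space would have meant
% the vectors are linearly independent.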

By Matrix Methods
If we organize each of the given vectors in the set as column vectors in a matrix, we can check the rank
of this matrix (see the document about matrices). If the matrix rank is equal to the number of vectors in
the set, the vectors are linearly independent. Otherwise, they are linearly dependent.
For the example above:
$$A = \begin{bmatrix} 2 & 1 & 0 \\ -1 & 3 & 7 \\ 0 & 4 & 8 \end{bmatrix}$$
We can use Gaussian elimination to check how many rows or columns are linearly independent by
reducing the matrix to row echelon form:

$$\begin{bmatrix} 2 & 1 & 0 \\ -1 & 3 & 7 \\ 0 & 4 & 8 \end{bmatrix} \rightarrow \begin{bmatrix} 2 & 1 & 0 \\ 0 & 7 & 14 \\ 0 & 4 & 8 \end{bmatrix} \rightarrow \begin{bmatrix} 2 & 1 & 0 \\ 0 & 1 & 2 \\ 0 & 4 & 8 \end{bmatrix} \rightarrow \begin{bmatrix} 2 & 1 & 0 \\ 0 & 1 & 2 \\ 0 & 0 & 0 \end{bmatrix}$$

So, only two vectors are linearly independent and the third one depends on them.
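In Matlab, the same conclusion follows from rref (which performs the full reduction) or directly from rank; a short sketch:

A = [2 1 0; -1 3 7; 0 4 8];

R = rref(A);                        % reduced row echelon form
numIndependent = nnz(any(R, 2));    % number of nonzero rows: 2 here
% rank(A) returns the same number directly.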
Another method, which works only for square matrices, is to calculate the determinant of the matrix. If
the determinant is zero, the vectors are linearly dependent. Otherwise, they are linearly independent.
For the example above, the determinant is:
$$\det(A) = 2 \cdot 3 \cdot 8 - (-1) \cdot 1 \cdot 8 - 4 \cdot 7 \cdot 2 = 48 + 8 - 56 = 0$$
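The same check in Matlab (for larger matrices, round-off means the determinant of a dependent set is only approximately zero):

A = [2 1 0; -1 3 7; 0 4 8];
d = det(A);    % 0 (up to round-off), so the columns are linearly dependent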
A third method is to calculate the Gramian, which is a matrix that is calculated as:

$$G = \begin{bmatrix} \langle x_1, x_1 \rangle & \cdots & \langle x_1, x_n \rangle \\ \vdots & \ddots & \vdots \\ \langle x_n, x_1 \rangle & \cdots & \langle x_n, x_n \rangle \end{bmatrix}$$

If the determinant of G is zero, the vectors are linearly dependent. Otherwise, they are linearly
independent. Obviously, this method works for any number of vectors of any size, even if they don't
form a square matrix, because the Gramian itself is always a square matrix.
For the example above:

$$G = \begin{bmatrix} 5 & -1 & -7 \\ -1 & 26 & 53 \\ -7 & 53 & 113 \end{bmatrix}, \qquad |G| = 0$$
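When the vectors are stacked as the columns of a matrix A and the inner product is the dot product, the Gramian is simply A'A; a short Matlab sketch:

A = [2 1 0; -1 3 7; 0 4 8];   % example vectors as columns

G    = A.' * A;               % Gramian: [5 -1 -7; -1 26 53; -7 53 113]
detG = det(G);                % 0 (up to round-off) => linearly dependent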
Using Singular Value Decomposition (see notes on SVD):
When we use SVD, we get three matrices: U, S, and V. The matrix S, which contains the singular values
along its diagonal, shows whether the vectors are linearly independent. If the number of nonzero
singular values is equal to the number of vectors in the set, they are linearly independent. Otherwise,
they are linearly dependent.
For our example, the Matlab computation of the SVD is given below.
>> [U,S,V]=svd(A)
U=
0.0276 -0.9355 -0.3522
0.6504 0.2844 -0.7044
0.7591 -0.2096 0.6163
S =
   11.7647         0         0
         0    2.3648         0
         0         0    0.0000
V=
-0.0506 -0.9115 -0.4082
0.4263 -0.3894 0.8165
0.9032 0.1327 -0.4082

It is obvious from the result that the third singular value is zero. Thus, there are only two linearly
independent vectors in the given set of three vectors.
The command rank in Matlab will also give us this result.
>> rank(A)
ans =
2

Gram-Schmidt Orthonormalization
It is computationally convenient to use basis sets that are orthonormal. However, if we are provided with
an arbitrary set of vectors, we first need to check that they are linearly independent and can serve as a
basis; if they are, we can use the Gram-Schmidt orthonormalization procedure to create an orthonormal
basis out of them.
The process works in the following way:
Given: a set of vectors $\{x_i\}$, $i = 1, \ldots, n$.
1. Let $k = 1$ and
   $$v_1 = \frac{x_1}{\|x_1\|}$$
2. If $k < n$, set $k = k + 1$. Else: stop.
3. Compute:
   $$w_k = x_k - \sum_{i=1}^{k-1} \langle v_i, x_k \rangle\, v_i$$
   This, essentially, strips all the directions that already exist in the orthonormal set $\{v_i\}$, $i = 1, \ldots, k-1$,
   from the vector $x_k$, leaving us with only a new direction.
4. If $w_k \neq 0$ then we discovered a new orthogonal direction, which needs to be normalized:
   $$v_k = \frac{w_k}{\|w_k\|}$$
   Else: we now know how $x_k$ is linearly dependent on the previous vectors in the set $\{x_i\}$, $i = 1, \ldots, k-1$,
   and thus it cannot be used in the basis.
5. Go back to step 2.
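A minimal Matlab sketch of this procedure is given below; the function name gram_schmidt and the tolerance tol are choices made for this illustration, not part of the notes:

function V = gram_schmidt(X, tol)
% GRAM_SCHMIDT  Orthonormalize the columns of X using the steps above.
%   V gets one column per new orthogonal direction found; columns of X that
%   are (numerically) dependent on the previous ones are skipped.
if nargin < 2, tol = 1e-10; end
V = [];
for k = 1:size(X, 2)
    w = X(:, k);
    % Step 3: strip the directions already present in the orthonormal set
    for i = 1:size(V, 2)
        w = w - (V(:, i).' * X(:, k)) * V(:, i);
    end
    % Step 4: a new direction was found only if w is not (numerically) zero
    if norm(w) > tol
        V = [V, w / norm(w)];          %#ok<AGROW>
    end
end
end

For the example that follows, gram_schmidt([2 1 0; -1 3 7; 0 4 8]) returns the same two orthonormal columns obtained by hand.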
Applying this process to the vectors in the example, copied here for convenience:

$$x_1 = \begin{bmatrix} 2 \\ -1 \\ 0 \end{bmatrix}, \quad x_2 = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix}, \quad x_3 = \begin{bmatrix} 0 \\ 7 \\ 8 \end{bmatrix}$$
Step 1: $k = 1$ and

$$v_1 = \frac{x_1}{\|x_1\|} = \frac{1}{\sqrt{5}} \begin{bmatrix} 2 \\ -1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0.8944 \\ -0.4472 \\ 0 \end{bmatrix}$$

Step 2: $k = 2$.
Step 3:

$$w_2 = x_2 - \langle v_1, x_2 \rangle\, v_1 = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix} - \left\langle \frac{1}{\sqrt{5}}\begin{bmatrix} 2 \\ -1 \\ 0 \end{bmatrix}, \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix} \right\rangle \frac{1}{\sqrt{5}}\begin{bmatrix} 2 \\ -1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 3 \\ 4 \end{bmatrix} + \frac{1}{5}\begin{bmatrix} 2 \\ -1 \\ 0 \end{bmatrix} = \begin{bmatrix} 1.4 \\ 2.8 \\ 4 \end{bmatrix}$$

Step 4: Obviously, $w_2 \neq 0$, so we can divide by its norm:

$$v_2 = \frac{w_2}{\|w_2\|} = \begin{bmatrix} 0.2756 \\ 0.5512 \\ 0.7875 \end{bmatrix}$$
Step 5: since $k = 2 < 3$ (the number of vectors in the set), we increase $k = k + 1 = 3$.


Step 6: we now calculate:

$$w_3 = x_3 - \langle v_1, x_3 \rangle\, v_1 - \langle v_2, x_3 \rangle\, v_2 = \begin{bmatrix} 0 \\ 7 \\ 8 \end{bmatrix} - \left(-\frac{7}{\sqrt{5}}\right)\frac{1}{\sqrt{5}}\begin{bmatrix} 2 \\ -1 \\ 0 \end{bmatrix} - 10.1587\begin{bmatrix} 0.2756 \\ 0.5512 \\ 0.7875 \end{bmatrix} = \begin{bmatrix} 0 \\ 7 \\ 8 \end{bmatrix} + \begin{bmatrix} 2.8 \\ -1.4 \\ 0 \end{bmatrix} - \begin{bmatrix} 2.8 \\ 5.6 \\ 8 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \end{bmatrix}$$
So, we got a zero vector, which means that $x_3$ depends on the first two vectors, as we already knew. The
orthonormal basis for this set of vectors is just the first two vectors:

$$\left\{ \begin{bmatrix} 0.8944 \\ -0.4472 \\ 0 \end{bmatrix}, \begin{bmatrix} 0.2756 \\ 0.5512 \\ 0.7875 \end{bmatrix} \right\}$$

We already know that each vector in this set is normalized, by the way we got them. Are they really
orthogonal?

$$\langle v_1, v_2 \rangle = 0.8944 \cdot 0.2756 - 0.4472 \cdot 0.5512 = 0$$
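The same check can be done numerically by stacking the two vectors as the columns of a matrix Q and verifying that Q'Q is (approximately) the identity; a small Matlab sketch using the rounded values above:

Q = [0.8944  0.2756;
    -0.4472  0.5512;
     0       0.7875];

QtQ = Q.' * Q;    % approximately the 2x2 identity matrix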

It should be noted that if we process the vectors in the set in a different order, we get a different
orthonormal basis. That is still fine, because a vector space can have any number of different
orthonormal bases; they are just rotated relative to each other.
