2.1.3 Example
The addition and multiplication of two quantities, each with its own magnitude
and phase, are common operations in signal processing and system analysis.
The rectangular representation is more convenient for addition (Equation 2.8)
whereas the exponential notation facilitates multiplication (Equation 2.12). The
multiplication of two complex quantities is demonstrated in Figure 2.2 using
complex, polar, and exponential forms. Note the efficient implementation using
exponentials.
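The two routes can be sketched in Python with the standard cmath module (the magnitudes and phases below are illustrative, not taken from Figure 2.2): addition is done by summing real and imaginary parts in rectangular form, multiplication by multiplying magnitudes and adding phases in exponential form.

```python
import cmath

# Two complex quantities given as (magnitude, phase) pairs
z1 = cmath.rect(2.0, cmath.pi / 6)   # magnitude 2, phase 30 degrees
z2 = cmath.rect(3.0, cmath.pi / 3)   # magnitude 3, phase 60 degrees

# Addition is easiest in rectangular form: add real and imaginary parts
z_sum = complex(z1.real + z2.real, z1.imag + z2.imag)

# Multiplication is easiest in exponential form:
# multiply the magnitudes and add the phases
mag = abs(z1) * abs(z2)
phase = cmath.phase(z1) + cmath.phase(z2)
z_prod = cmath.rect(mag, phase)

print(z_sum)    # equals z1 + z2
print(z_prod)   # equals z1 * z2; here magnitude 6 at 90 degrees, i.e. ~6j
```

Both routes give the same numbers; the exponential route makes the magnitude-and-phase bookkeeping of the product explicit.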
2.2
MATRIX ALGEBRA
2.2.1
MATHEMATICAL CONCEPTS
A matrix a [M x N] is a rectangular array of entries a_{i,k}, where the index i refers to the row number and varies from 1 to M, and k indicates the column number and varies from 1 to N, where M and N are integers. Sometimes it is more convenient to vary i from 0 to M − 1 and k from 0 to N − 1; for example, this is the case when the first entry refers to zero time or zero frequency.
A matrix a is
- square if M = N
- real when all its elements are real numbers
- complex if one or more of its elements are complex numbers
- nonnegative if all a_{i,k} ≥ 0
- positive if all a_{i,k} > 0

Negative and nonpositive matrices are similarly defined.
The trace of a square matrix is the sum of the elements in the main diagonal, tr(a) = Σ_i a_{i,i}. The identity matrix I is a square matrix where all its elements are zeros, except for the elements in the main diagonal, which are ones: I_{i,k} = 1 if i = k, else I_{i,k} = 0. Typical operations with matrices include the following:
- addition: c_{i,k} = a_{i,k} + b_{i,k}
- subtraction: c_{i,k} = a_{i,k} − b_{i,k}
- scalar multiplication: c_{i,k} = α · a_{i,k}
- matrix multiplication: c_{i,k} = Σ_j a_{i,j} · b_{j,k}

Note that matrix multiplication is a summation of binary products; this type of expression is frequently encountered in signal processing (Chapter 4).
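The summation of binary products can be spelled out in a short plain-Python sketch (the function name and example matrices are our own, for illustration only):

```python
def mat_mul(a, b):
    """Matrix product: c[i][k] = sum over j of a[i][j] * b[j][k]."""
    M, J, N = len(a), len(b), len(b[0])
    assert len(a[0]) == J, "inner dimensions must agree"
    return [[sum(a[i][j] * b[j][k] for j in range(J)) for k in range(N)]
            for i in range(M)]

a = [[1, 2],
     [3, 4]]
b = [[5, 6],
     [7, 8]]
print(mat_mul(a, b))  # [[19, 22], [43, 50]]
```

Each entry c_{i,k} is the dot product of row i of a with column k of b, exactly the summation of binary products noted above.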
The transpose a^T of the matrix a is obtained by switching columns and rows: (a^T)_{i,k} = a_{k,i}.
The matrix a is symmetric if a = a^T. For complex matrices, the Hermitian adjoint a^H is obtained by transposing the matrix and conjugating its elements; the matrix a is Hermitian if a = a^H. Finally, a matrix is called unitary if the Hermitian adjoint is equal to the inverse, a^H = a^{-1}.
The determinant of the square matrix a, denoted as |a|, is the number whose computation can be defined in recursive form as

|a| = Σ_k (−1)^{i+k} · a_{i,k} · |minor_{i,k}|   (for any fixed row i)

where the minor_{i,k} is the submatrix obtained by suppressing row i and column k. The determinant of a single element is the value of the element itself. If the determinant of the matrix is zero, the matrix is singular and noninvertible. Conversely, if |a| ≠ 0 the matrix is invertible.
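The recursive definition lends itself to a direct plain-Python sketch (function names are our own; this Laplace expansion is illustrative, not efficient for large matrices):

```python
def minor(a, i, k):
    """Submatrix obtained by suppressing row i and column k."""
    return [row[:k] + row[k + 1:] for r, row in enumerate(a) if r != i]

def det(a):
    """Determinant by recursive expansion along the first row (i = 0)."""
    n = len(a)
    if n == 1:                  # determinant of a single element
        return a[0][0]
    return sum((-1) ** k * a[0][k] * det(minor(a, 0, k)) for k in range(n))

a = [[4, 3],
     [6, 3]]
print(det(a))        # 4*3 - 3*6 = -6
print(det(a) != 0)   # nonzero determinant, so a is invertible
```

A singular example: det([[1, 2], [2, 4]]) evaluates to 0, since the second row is twice the first.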
The following relations hold: |a^T| = |a|, |a · b| = |a| · |b|, and |a^{-1}| = 1/|a|.
Matrices as Transformations
A matrix a [N x N] can be seen as a transformation that maps a vector x onto a vector y = a · x. The null space of a is the set of vectors x that are mapped onto the zero vector, a · x = 0; the range of a is the set of all vectors y that can be obtained as y = a · x (Figure 2.3). It follows from these definitions that the sum of the dimensions of the null space and the range is N:

dim(null space) + dim(range) = N
Figure 2.3 Definition of the null space and range of a transformation a [N x N]. The dimension of the range is equal to the rank of a. The dimension of the range plus the dimension of the null space is equal to N.
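The rank-plus-nullity relation can be checked numerically with NumPy (the example matrix and the null-space vector are our own choices):

```python
import numpy as np

# A 3 x 3 matrix of rank 2: the third row is the sum of the first two
a = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [1.0, 1.0, 2.0]])

N = a.shape[1]
rank = np.linalg.matrix_rank(a)   # dimension of the range
nullity = N - rank                # dimension of the null space

print(rank, nullity)              # 2 1

# x = (1, 1, -1) lies in the null space: a maps it onto the zero vector
x = np.array([1.0, 1.0, -1.0])
print(np.allclose(a @ x, 0))      # True
```

Here the range is a 2-dimensional subspace, the null space is 1-dimensional, and their dimensions sum to N = 3, as stated above.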
If the transformation of a vector x by a square matrix a returns a scaled version of the same vector,

a · x = λ · x   (2.42)

then x is an eigenvector of a and λ is its corresponding eigenvalue. The eigenvalues of a are obtained by solving the polynomial

|a − λ · I| = 0

where I is the identity matrix. For each eigenvalue λ_p, the corresponding eigenvector x_p is computed by replacing λ_p in Equation 2.42,

(a − λ_p · I) · x_p = 0

where 0 is an array of zeros. The eigenvectors corresponding to distinct eigenvalues of a Hermitian or symmetric matrix are orthogonal vectors; that is, their dot product is equal to zero. The eigenvalues of a Hermitian or symmetric matrix are real. The eigenvalues of a symmetric, positive-definite matrix are real and positive, λ_i > 0, and the matrix is invertible. Last, for any given matrix a [M x N], the eigenvalues of (a^T · a) and (a · a^T) are nonnegative and their nonzero values are equal.
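These properties of symmetric matrices can be verified with a small NumPy sketch (the 2 x 2 example matrix is our own; np.linalg.eigh is NumPy's solver for symmetric or Hermitian matrices):

```python
import numpy as np

# A symmetric (hence Hermitian) 2 x 2 matrix
a = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues and eigenvectors of a symmetric matrix
eigvals, eigvecs = np.linalg.eigh(a)

print(eigvals)                                   # real eigenvalues: [1. 3.]

# Each column p satisfies a x_p = lambda_p x_p
for p in range(2):
    x = eigvecs[:, p]
    print(np.allclose(a @ x, eigvals[p] * x))    # True

# Eigenvectors of distinct eigenvalues are orthogonal: dot product ~ 0
print(np.isclose(eigvecs[:, 0] @ eigvecs[:, 1], 0.0))  # True
```

Both eigenvalues are real and positive (the matrix is symmetric and positive definite), and the eigenvectors are orthogonal, as the text states.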
2.2.4
Matrix Decomposition
Eigen Decomposition
An invertible square matrix a with distinct eigenvalues can be expressed as the multiplication of three matrices,

a = V · Λ · V^{-1}

where the columns of V are the eigenvectors of a, and Λ is a diagonal matrix that contains the corresponding eigenvalues in the main diagonal.
Singular Value Decomposition
Any matrix a [M x N] can be decomposed into the multiplication of three matrices,

a = U · Λ · V^T

where
U [M x M]: orthogonal matrix. Its columns are eigenvectors of a · a^T (in the same order as the singular values in Λ). Vectors u_1 ... u_r span the range of a.

Λ [M x N]: diagonal matrix. Its entries are the singular values of a, the nonnegative square roots of the eigenvalues of a^T · a.

V [N x N]: orthogonal matrix. Its columns are eigenvectors of a^T · a (in the same order as the singular values in Λ). The null space of a is spanned by vectors v_{r+1} ... v_N.
For a real matrix a, the resulting three matrices are also real. The SVD is generalized to complex matrices by using the Hermitian adjoint instead of the transpose. The method is equally applicable when the size of the matrix is M < N, with proper changes in indexes.
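A short NumPy sketch can check these properties (the example matrix is our own; note that np.linalg.svd returns V^T directly and the singular values as a vector):

```python
import numpy as np

a = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, s, Vt = np.linalg.svd(a)      # a = U @ diag(s) @ Vt

# The three factors reconstruct a
print(np.allclose(U @ np.diag(s) @ Vt, a))        # True

# U and V are orthogonal: their transposes are their inverses
print(np.allclose(U.T @ U, np.eye(2)))            # True
print(np.allclose(Vt @ Vt.T, np.eye(2)))          # True

# Squared singular values equal the eigenvalues of a^T a
eigvals = np.linalg.eigvalsh(a.T @ a)             # ascending order
print(np.allclose(np.sort(s ** 2), eigvals))      # True
```

For this real matrix all three factors are real, consistent with the remark above.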
Other Decompositions
Two efficient algorithms are used to solve systems of equations that involve square matrices a [N x N]. The LU decomposition converts a into the multiplication of a lower triangular matrix L (L_{i,k} = 0 if i < k) and an upper triangular matrix U (U_{i,k} = 0 if i > k), such that a = L · U. Furthermore, if the matrix a is symmetric and positive definite, the Cholesky decomposition results in a = U^T · U, where U is upper triangular.
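The Cholesky case can be sketched with NumPy (the example matrix and right-hand side are our own; np.linalg.cholesky returns the lower-triangular factor L, so the text's upper-triangular U is simply L^T):

```python
import numpy as np

# A symmetric, positive-definite matrix
a = np.array([[4.0, 2.0],
              [2.0, 3.0]])

L = np.linalg.cholesky(a)         # a = L @ L.T, with L lower triangular
U = L.T                           # so a = U^T @ U, with U upper triangular

print(np.allclose(np.triu(U), U)) # U is upper triangular: True
print(np.allclose(U.T @ U, a))    # a = U^T U: True

# Solving a x = b with the factorization: forward then back substitution
b = np.array([2.0, 1.0])
y = np.linalg.solve(L, b)         # L y = b  (forward substitution)
x = np.linalg.solve(U, y)         # U x = y  (back substitution)
print(np.allclose(a @ x, b))      # True
```

The two triangular solves are what makes these factorizations efficient for repeated right-hand sides: the factorization is computed once and reused.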
2.3
(Note that for a family of functions f_1 ... f_M, the array a becomes matrix a.) Likewise, the partial derivatives ∂f/∂x_i are organized into an array,

∂f/∂x = [∂f/∂x_1  ∂f/∂x_2  ...  ∂f/∂x_N]^T

For example, ∂f/∂x = 2 · a · x for a symmetric a when f = x^T · a · x. In each case, the function is written in explicit form, partial derivatives are computed, and the result is once again expressed in matrix form.
Given M measurements y_i that depend on N parameters x_k, the partial derivative ∂y_i/∂x_k indicates the sensitivity of the i-th measurement to the k-th parameter. The Jacobian matrix is the arrangement of the M x N partial derivatives in matrix form, J_{i,k} = ∂y_i/∂x_k. The Jacobian matrix is useful to identify extrema and to guide optimization algorithms.
The extremum of a function is tested for minimum or maximum with the Hessian matrix Hes formed with the second derivatives of f(x):

Hes_{i,k} = ∂²f / (∂x_i ∂x_k)
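The test can be sketched numerically (central finite differences; the helper name and the quadratic test function are our own): form the Hessian at a candidate extremum and inspect the signs of its eigenvalues.

```python
import numpy as np

def hessian(f, x, h=1e-5):
    """Numerical Hessian: H[i, k] ~ d2f / (dx_i dx_k), central differences."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for k in range(n):
            e_i = np.zeros(n); e_i[i] = h
            e_k = np.zeros(n); e_k[k] = h
            H[i, k] = (f(x + e_i + e_k) - f(x + e_i - e_k)
                       - f(x - e_i + e_k) + f(x - e_i - e_k)) / (4 * h * h)
    return H

# f has an extremum at the origin; the Hessian decides its type
f = lambda x: x[0] ** 2 + 3 * x[1] ** 2        # minimum at (0, 0)
H = hessian(f, np.array([0.0, 0.0]))
print(np.round(H, 3))                           # [[2. 0.] [0. 6.]]

# All eigenvalues positive -> positive-definite Hessian -> a minimum
print(np.all(np.linalg.eigvalsh(H) > 0))        # True
```

A positive-definite Hessian indicates a minimum, a negative-definite one a maximum, and mixed-sign eigenvalues a saddle point.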