$$\varepsilon(k) = d(k) - \mathbf{w}^H(k)\,\mathbf{x}(k) \qquad (7.32)$$
$$J(\mathbf{w}) = E\!\left[\,|\varepsilon(k)|^2\,\right] = E\!\left[\,\big|d(k) - \mathbf{w}^H\mathbf{x}(k)\big|^2\,\right] \qquad (7.33)$$
$$J(\mathbf{w}) = D - 2\,\mathbf{w}^H\mathbf{r} + \mathbf{w}^H\mathbf{R}_{xx}\,\mathbf{w} \qquad (7.34)$$
where
$$D = E\!\left[\,|d(k)|^2\,\right]$$
We may employ the gradient method to locate the minimum of Eq. (7.34). Thus
$$\nabla_{\mathbf{w}} J(\mathbf{w}) = 2\,\mathbf{R}_{xx}\,\mathbf{w} - 2\,\mathbf{r} \qquad (7.35)$$
The minimum occurs when the gradient is zero. Thus, the solution for the weights is the optimum
Wiener solution, given by
$$\mathbf{w}_{\mathrm{opt}} = \mathbf{R}_{xx}^{-1}\,\mathbf{r} \qquad (7.36)$$
The solution in Eq. (7.36) is predicated on knowledge of all signal statistics, which allows exact
calculation of the correlation matrix.
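As a concrete illustration, the Wiener solution of Eq. (7.36) can be computed numerically. The sketch below assumes a hypothetical 4-element array whose statistics are estimated from simulated snapshots; the signal values and array size are illustrative, not from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-element array, 1000 complex snapshots (illustrative values).
N, K = 4, 1000
x = rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K))

# Assumed desired signal: first element's signal plus a little noise.
d = x[0] + 0.1 * rng.standard_normal(K)

# Sample estimates standing in for the true statistics E[x x^H] and E[d* x].
Rxx = x @ x.conj().T / K
r = x @ d.conj() / K

# Optimum Wiener weights, Eq. (7.36): w_opt = Rxx^{-1} r.
# Solving the linear system is preferable to forming the explicit inverse.
w_opt = np.linalg.solve(Rxx, r)
```

Solving `Rxx w = r` directly avoids the numerical cost and conditioning issues of computing the matrix inverse.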
In general, we do not know the signal statistics and thus must resort to estimating the array
correlation matrix ( Rxx ) and the signal correlation vector ( r ) over a range of snapshots or for
each instant in time. The instantaneous estimates of these values are given as
$$\hat{\mathbf{R}}_{xx}(k) = \mathbf{x}(k)\,\mathbf{x}^H(k) \qquad (7.37)$$
$$\hat{\mathbf{r}}(k) = d^*(k)\,\mathbf{x}(k) \qquad (7.38)$$
We can employ an iterative technique called the method of steepest descent to approach the
minimum of the cost function without inverting the correlation matrix. The method of steepest
descent can be approximated in terms of the weights using the LMS method advocated by Widrow.
The steepest-descent iteration is given by:
$$\mathbf{w}(k+1) = \mathbf{w}(k) - \frac{1}{2}\,\mu\,\nabla_{\mathbf{w}} J\big(\mathbf{w}(k)\big) \qquad (7.39)$$
where $\mu$ is the step-size parameter and $\nabla_{\mathbf{w}}$ denotes the gradient of the performance surface.
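The steepest-descent iteration of Eq. (7.39) can be sketched for the case where the true statistics are known. The matrix and vector values below are hypothetical, chosen only to make the example concrete:

```python
import numpy as np

# Toy known statistics (hypothetical values for illustration).
Rxx = np.array([[2.0, 0.5],
                [0.5, 1.0]])      # array correlation matrix
r = np.array([1.0, 0.3])          # signal correlation vector

w = np.zeros(2)                   # initial weight vector
mu = 0.1                          # step-size parameter

# Steepest descent, Eq. (7.39): w(k+1) = w(k) - (1/2) mu * grad J(w(k)),
# with the gradient from Eq. (7.35): grad J = 2 Rxx w - 2 r.
for _ in range(500):
    grad = 2 * Rxx @ w - 2 * r
    w = w - 0.5 * mu * grad

# w approaches the Wiener solution Rxx^{-1} r as the iteration proceeds.
```

With known statistics, the iteration converges to the same optimum as Eq. (7.36), provided the step size satisfies the stability bound discussed below.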
The gradient of the performance surface is given in Eq. (7.35). If we substitute the instantaneous
correlation estimates of Eqs. (7.37) and (7.38), we have the LMS solution:
$$\mathbf{w}(k+1) = \mathbf{w}(k) - \mu\left[\hat{\mathbf{R}}_{xx}(k)\,\mathbf{w}(k) - \hat{\mathbf{r}}(k)\right] = \mathbf{w}(k) + \mu\,e^*(k)\,\mathbf{x}(k) \qquad (7.40)$$
where
$$e(k) = d(k) - \mathbf{w}^H(k)\,\mathbf{x}(k)$$
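The LMS update of Eq. (7.40) is straightforward to implement. The following sketch runs the recursion over a stream of snapshots; the array size, reference signal, and step size are assumed for illustration:

```python
import numpy as np

def lms(x, d, mu):
    """LMS recursion of Eq. (7.40): w(k+1) = w(k) + mu * e*(k) * x(k),
    where e(k) = d(k) - w^H(k) x(k).

    x  : (N, K) complex snapshots, one column per time instant
    d  : (K,) desired (reference) signal
    mu : step-size parameter
    """
    N, K = x.shape
    w = np.zeros(N, dtype=complex)
    for k in range(K):
        e = d[k] - w.conj() @ x[:, k]        # error signal e(k)
        w = w + mu * np.conj(e) * x[:, k]    # weight update, Eq. (7.40)
    return w

# Hypothetical test: desired signal is the first element's signal plus noise,
# so the weights should converge toward [1, 0, 0, 0].
rng = np.random.default_rng(1)
x = rng.standard_normal((4, 2000)) + 1j * rng.standard_normal((4, 2000))
d = x[0] + 0.01 * (rng.standard_normal(2000) + 1j * rng.standard_normal(2000))
w = lms(x, d, mu=0.01)
```

Note that no matrix inversion is required: each update costs only a few vector operations per snapshot, which is what makes LMS attractive for real-time adaptation.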
The convergence speed of the LMS algorithm in Eq. (7.40) is directly proportional to the step-size
parameter $\mu$. If the step-size is too small, convergence is slow and we have the
overdamped case. If the convergence is slower than the changing angles of arrival, it is possible
that the adaptive array cannot acquire the signal of interest fast enough to track the changing
signal. If the step-size is too large, the LMS algorithm overshoots the optimum weights of
interest. This is called the underdamped case. If the attempted convergence is too fast, the weights
oscillate about the optimum weights but do not accurately track the desired solution. It is
therefore imperative to choose a step-size in a range that ensures convergence. It can be shown
that stability is ensured provided that the following condition is met:
$$0 \le \mu \le \frac{1}{2\,\lambda_{\max}} \qquad (7.41)$$
where $\lambda_{\max}$ is the largest eigenvalue of $\hat{\mathbf{R}}_{xx}$. Since the trace of $\hat{\mathbf{R}}_{xx}$ bounds $\lambda_{\max}$ from above, stability is also ensured by the more easily computed condition
$$0 \le \mu \le \frac{1}{2\,\operatorname{trace}\!\big(\hat{\mathbf{R}}_{xx}\big)} \qquad (7.42)$$
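The bound of Eq. (7.42) is convenient in practice because the trace can be computed directly from a snapshot without an eigendecomposition. A minimal sketch, using an assumed 4-element snapshot:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical instantaneous snapshot for a 4-element array.
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# Instantaneous correlation estimate, Eq. (7.37): R_xx(k) = x(k) x^H(k).
Rxx_hat = np.outer(x, x.conj())

# Conservative stability bound, Eq. (7.42): 0 <= mu <= 1 / (2 trace(R_xx)).
mu_max = 1.0 / (2.0 * np.real(np.trace(Rxx_hat)))
```

Because the trace is the sum of the eigenvalues, the trace-based bound is never looser than the eigenvalue bound of Eq. (7.41), so any step size chosen under Eq. (7.42) also satisfies Eq. (7.41).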