
Lecture-5

7.7 Adaptive Beamforming


The fixed beamforming approaches are usually applied to emitters with fixed arrival angles. If the
arrival angles do not change with time, the optimum array weights do not need to be adjusted.
However, if the desired arrival angles change with time, it is necessary to devise an optimization
scheme that operates on-the-fly, continuously recalculating the optimum array weights. In this
scenario of an ever-changing electromagnetic environment, a receiver signal-processing algorithm is
needed to continuously update the weight vector, so that the spatial filtering beam
adaptively steers to the target's time-varying DOA, resulting in optimal
transmission/reception of the desired signal. The adaptive algorithm takes the fixed beamforming
process one step further and allows for the calculation of continuously updated weights. The
adaptation process must satisfy a specified optimization criterion. Popular
optimization techniques include least mean squares (LMS), sample matrix inversion (SMI),
recursive least squares (RLS), the constant modulus algorithm (CMA), conjugate gradient,
and waveform-diverse algorithms. We discuss some of these algorithms in the following sections.

7.7.1 Least Mean Squares Algorithm


The least mean squares (LMS) algorithm is the most widely used adaptive filtering algorithm,
employed in several communication systems. It has gained popularity due to its low
computational complexity and proven robustness. It incorporates new observations and
iteratively minimizes the mean square error. The LMS algorithm changes the weight
vector along the direction of the estimated gradient, following the method of steepest
descent. We can establish the performance surface (cost function) by again finding the MSE.
The error, as indicated in Fig. 7.10, is

e(k) = d(k) - \mathbf{w}^{H}(k)\,\mathbf{x}(k) \qquad (7.32)

The squared error is given by


|e(k)|^{2} = \left| d(k) - \mathbf{w}^{H}(k)\,\mathbf{x}(k) \right|^{2} \qquad (7.33)

For the moment, we suppress the time dependence. As calculated in the minimum mean-square-error
development, the cost function is given as

J(\mathbf{w}) = D - 2\mathbf{w}^{H}\mathbf{r} + \mathbf{w}^{H}\mathbf{R}_{xx}\mathbf{w} \qquad (7.34)

where

D = E\left[\,|d|^{2}\,\right]

We may employ the gradient method to locate the minimum of Eq. (7.34). Thus

\nabla_{\mathbf{w}} J(\mathbf{w}) = 2\mathbf{R}_{xx}\mathbf{w} - 2\mathbf{r} \qquad (7.35)

The minimum occurs when the gradient is zero. Thus, the solution for the weights is the optimum
Wiener solution, given by

\mathbf{w}_{\text{opt}} = \mathbf{R}_{xx}^{-1}\,\mathbf{r} \qquad (7.36)

The solution in Eq. (7.36) is predicated on knowledge of all the signal statistics, and thus on an
exact calculation of the correlation matrix.
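As a concrete illustration of Eq. (7.36), the Python sketch below forms sample estimates of R_xx and r for a simulated uniform linear array and solves for the Wiener weights. The array size, arrival angle, noise level, and variable names are illustrative assumptions, not part of the lecture.

```python
import numpy as np

# Assumed setup: N-element uniform linear array with half-wavelength
# spacing, desired signal arriving from angle theta_d.
N = 8
theta_d = np.deg2rad(30.0)
a_d = np.exp(1j * np.pi * np.arange(N) * np.sin(theta_d))  # steering vector

rng = np.random.default_rng(0)
K = 1000                                                   # snapshots
s = rng.standard_normal(K) + 1j * rng.standard_normal(K)   # desired signal d(k)
noise = 0.1 * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))
X = np.outer(a_d, s) + noise          # snapshot matrix: columns are x(k)

# Sample estimates of Rxx = E[x x^H] and r = E[d* x]
Rxx = X @ X.conj().T / K
r = X @ s.conj() / K

# Optimum Wiener weights, Eq. (7.36)
w_opt = np.linalg.solve(Rxx, r)
```

Using np.linalg.solve rather than an explicit matrix inverse is the standard, numerically safer way to apply R_xx^{-1}.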
In general, we do not know the signal statistics and must therefore resort to estimating the array
correlation matrix (R_xx) and the signal correlation vector (r) over a range of snapshots or at
each instant in time. The instantaneous estimates of these values are given as
\hat{\mathbf{R}}_{xx}(k) = \mathbf{x}(k)\,\mathbf{x}^{H}(k) \qquad (7.37)

\hat{\mathbf{r}}(k) = d^{*}(k)\,\mathbf{x}(k) \qquad (7.38)
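In code, these rank-one estimates are a single outer product and a scaled snapshot; a minimal sketch, with the function name and argument types assumed for illustration:

```python
import numpy as np

def instantaneous_estimates(x_k: np.ndarray, d_k: complex):
    """Rank-one snapshot estimates from Eqs. (7.37)-(7.38)."""
    Rxx_hat = np.outer(x_k, x_k.conj())  # x(k) x^H(k)
    r_hat = np.conj(d_k) * x_k           # d*(k) x(k)
    return Rxx_hat, r_hat
```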

We can employ an iterative technique, the method of steepest descent, to approach the minimum of
the cost function. The method of steepest descent can be approximated in terms of the
weights using the LMS method advocated by Widrow. The steepest-descent iterative
update is given by

\mathbf{w}(k+1) = \mathbf{w}(k) - \frac{1}{2}\,\mu\,\nabla_{\mathbf{w}} J\big(\mathbf{w}(k)\big) \qquad (7.39)

where μ is the step-size parameter and ∇_w J is the gradient of the performance surface.
The performance surface itself is given in Eq. (7.34) and its gradient in Eq. (7.35). Substituting
the instantaneous correlation estimates into the gradient, we obtain the LMS solution:
\mathbf{w}(k+1) = \mathbf{w}(k) - \mu\left[\hat{\mathbf{R}}_{xx}\,\mathbf{w}(k) - \hat{\mathbf{r}}\right] = \mathbf{w}(k) + \mu\,e^{*}(k)\,\mathbf{x}(k) \qquad (7.40)

where

e(k) = d(k) - \mathbf{w}^{H}(k)\,\mathbf{x}(k)

is the error signal.
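Putting Eqs. (7.37)-(7.40) together, a minimal LMS beamforming loop might look like the sketch below; the function name, zero initialization, and snapshot conventions are illustrative assumptions, not a prescribed implementation.

```python
import numpy as np

def lms_beamformer(X: np.ndarray, d: np.ndarray, mu: float) -> np.ndarray:
    """LMS weight adaptation, Eq. (7.40): w(k+1) = w(k) + mu e*(k) x(k).

    X  : N x K matrix whose columns are the snapshots x(k)
    d  : length-K reference (training) signal d(k)
    mu : step-size parameter
    """
    N, K = X.shape
    w = np.zeros(N, dtype=complex)       # common (assumed) initialization
    for k in range(K):
        x_k = X[:, k]
        e_k = d[k] - np.vdot(w, x_k)     # error e(k) = d(k) - w^H(k) x(k)
        w = w + mu * np.conj(e_k) * x_k  # steepest-descent correction
    return w
```

Applied to the simulated X and s from the earlier Wiener sketch, lms_beamformer(X, s, mu) drives the weights toward w_opt for a suitably small μ.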

The convergence of the LMS algorithm in Eq. (7.40) is governed by the step-size parameter μ. If
the step size is too small, convergence is slow and we have the overdamped case. If convergence is
slower than the changing angles of arrival, the adaptive array may not acquire the signal of
interest fast enough to track the changing signal. If the step size is too large, the LMS
algorithm overshoots the optimum weights of interest; this is called the underdamped case. If the
attempted convergence is too fast, the weights oscillate about the optimum weights and do not
accurately track the desired solution. It is therefore imperative to choose a step size in a range
that ensures convergence. It can be shown that stability is ensured provided that the following
condition is met:
0 \le \mu \le \frac{1}{2\lambda_{\max}} \qquad (7.41)

where λ_max is the largest eigenvalue of the correlation matrix estimate R̂_xx.


Since the correlation matrix is positive definite, all eigenvalues are positive. If all the interfering
signals are noise and there is only one signal of interest, we can approximate the condition in Eq.
(7.41) as:
0 \le \mu \le \frac{1}{2\,\mathrm{trace}\!\left(\hat{\mathbf{R}}_{xx}\right)} \qquad (7.42)
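Because the trace of R̂_xx is the sum of its (positive) eigenvalues, it upper-bounds λ_max, so Eq. (7.42) is a sufficient condition that avoids an eigendecomposition. A minimal sketch, assuming a sample correlation estimate such as the Rxx formed earlier:

```python
import numpy as np

def max_step_size(Rxx_hat: np.ndarray) -> float:
    """Conservative LMS step-size bound from Eq. (7.42)."""
    return 1.0 / (2.0 * np.real(np.trace(Rxx_hat)))

# A typical (assumed) practical choice is a small fraction of the bound:
# mu = 0.1 * max_step_size(Rxx)
```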
