PART A
7. Obtain the necessary and sufficient condition for optimality of the linear filter.
8. Show that the backward prediction-error filter is maximum phase, i.e., its zeros lie
on or outside the unit circle in the z-plane.
10. What are the approaches to the development of linear adaptive filters?
16. Prove that the necessary and sufficient condition for stability of the steepest-descent
algorithm is that the step-size parameter μ satisfy the double inequality
0 < μ < 2/λmax, where λmax is the largest eigenvalue of the correlation matrix R.
17. Explain the steepest-descent algorithm applied to the Wiener filter with the help of
a signal flow graph.
18. Explain how LMS algorithm is used for estimating the frequency content of a narrow
band signal characterized by a rapidly varying power spectrum.
24. Explain the method to overcome the gradient noise amplification problem experi-
enced by the LMS algorithm.
31. Explain the RLS algorithm with the help of a signal flow graph.
34. What are the properties of least squares estimates?
35. Explain the pseudoinverse for overdetermined and underdetermined systems.
37. What are the properties of the innovations in recursive MMSE estimation for scalar
random variables?
40. Briefly explain the stalling problem that arises in the digital implementation of the
LMS algorithm.
42. Explain the equation-error method used for adaptive filters with long impulse responses.
PART B
10 Marks Each
49. State the orthogonality principle and obtain a solution of Wiener- Hopf equations
for FIR filters.
50. Derive the Wiener-Hopf equations by examining the dependence of the cost function J
on the tap weights of the transversal filter. What is the significance of the cost
function J? Also evaluate the minimum mean-square error produced by the transversal filter.
51. Explain the properties of prediction error filter.
52. State the orthogonality principle and derive the expression of Wiener-Hopf equation.
53. Obtain the I/O relation of the backward prediction-error filter. Can you convert a
forward prediction-error filter into a backward prediction-error filter? Justify.
54. (a) Explain Levinson-Durbin algorithm.
(b) Explain stochastic gradient approach of linear adaptive filtering algorithm.
55. What is meant by adaptive filters and what are the approaches to the development
of linear adaptive filters?
56. (a) Show that the forward prediction error filter is minimum phase.
(b) Explain stochastic gradient approach of linear adaptive filtering algorithm.
57. (a) Derive the augmented Wiener-Hopf equations for forward and backward prediction
filters.
(b) Derive the inverse Levinson-Durbin algorithm.
58. Show that the minimum mean-square error produced by a transversal filter is
J_min = σ_d^2 − p^H w_o, where σ_d^2 is the variance of the desired response, p is the
cross-correlation vector, and w_o is the optimum tap-weight vector.
59. (a) Briefly discuss about joint process estimation.
(b) Write short note on Linear optimum filtering.
60. Obtain the prediction error power equation for Levinson- Durbin Algorithm.
61. Derive the Wiener-Hopf equations.
62. Consider the linear prediction of a stationary autoregressive process u(n) generated
from the first order difference equation
u(n)=0.9u(n-1)+v(n)
where v(n) is white noise of zero mean and unit variance. Determine the tap weights
a2,1 and a2,2 of the forward prediction error filter and the reflection coefficients k1
and k2 of the lattice predictor.
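The Levinson-Durbin recursion asked for here can be sanity-checked numerically. A minimal sketch, assuming the prediction-error-filter sign convention A_m(z) = 1 + a_{m,1}z^{-1} + … + a_{m,m}z^{-m} (texts using the opposite convention flip the signs of a and k); for unit-variance white v(n), the AR(1) autocorrelation is r(k) = 0.9^k / (1 − 0.81):

```python
import numpy as np

def levinson_durbin(r, order):
    """Levinson-Durbin recursion on autocorrelation lags r[0..order].
    Returns the prediction-error filter taps a (without the leading 1),
    the reflection coefficients k, and the per-stage error powers."""
    a = np.zeros(order + 1); a[0] = 1.0
    k = np.zeros(order)
    P = r[0]
    powers = []
    for m in range(1, order + 1):
        acc = sum(a[i] * r[m - i] for i in range(m))
        k[m - 1] = -acc / P                       # reflection coefficient
        a_new = a.copy()
        for i in range(1, m + 1):
            a_new[i] = a[i] + k[m - 1] * a[m - i]  # order-update of taps
        a = a_new
        P *= (1.0 - k[m - 1] ** 2)                 # prediction-error power
        powers.append(P)
    return a[1:], k, powers

# AR(1) autocorrelation: r(m) = 0.9**m / (1 - 0.9**2) for unit-variance v(n)
r = np.array([0.9 ** m for m in range(3)]) / (1 - 0.81)
a, k, P = levinson_durbin(r, 2)
print(a)  # a2,1 = -0.9, a2,2 = 0 (the order-2 predictor degenerates to order 1)
print(k)  # k1 = -0.9, k2 = 0
```

Because the process is AR(1), the order-2 stage adds nothing: a2,2 = k2 = 0.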
64. Consider a Wiener filtering problem with correlation matrix R = [1, 0.5; 0.5, 1]
and cross-correlation vector p = [0.5, 0.25]^T. Obtain the tap weights and the
minimum mean-square error produced by the Wiener filter. Formulate a representation
of the Wiener filter in terms of the eigenvalues of matrix R and the associated
eigenvectors.
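As a numerical cross-check for this problem: the optimum weights satisfy w_o = R^{-1}p, J_min = σ_d^2 − p^T w_o (σ_d^2 is not specified in the question, so only the term p^T w_o is evaluated), and the eigen-representation expands w_o over the eigenvectors of R. A minimal NumPy sketch:

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])

w_o = np.linalg.solve(R, p)          # optimum tap weights: [0.5, 0.0]
print(w_o)

# J_min = sigma_d^2 - p^T w_o; the subtracted term p^T w_o evaluates to 0.25
print(p @ w_o)

# Eigen-representation: w_o = sum_i (q_i^T p / lambda_i) q_i
lam, Q = np.linalg.eigh(R)           # eigenvalues [0.5, 1.5]
w_eig = sum((Q[:, i] @ p) / lam[i] * Q[:, i] for i in range(2))
print(np.allclose(w_eig, w_o))       # True
```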
65. Give the backward prediction error filter coefficients defined in terms of the tap
weights of the corresponding forward prediction error filter. Consider the linear
prediction of a stationary autoregressive process u(n) generated from the first order
difference equation
u(n) = 0.9u(n-1) + v(n), where v(n) is white noise of zero mean and unit variance.
Determine the tap weights a2,1 and a2,2 of the forward prediction error filter and
draw the prediction error filter representation of the process.
66. Consider a WSS process u(n) whose autocorrelation function has the following val-
ues for different lags:
r(0)=1;
r(1)=0.8;
r(2)=0.6;
r(3)=0.4;
Use Levinson-Durbin recursion to evaluate the reflection coefficients k1 ,k2 and k3
and set up the three stage lattice predictor for this process. Evaluate the average
power of the prediction error produced at the output of each of the three stages of
the lattice predictor.
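The recursion asked for in this question can be checked step by step. A minimal sketch in plain Python, assuming the prediction-error-filter convention A_m(z) = 1 + a_{m,1}z^{-1} + … (other sign conventions negate the reflection coefficients):

```python
r = [1.0, 0.8, 0.6, 0.4]   # given autocorrelation lags r(0)..r(3)
a = [1.0]                  # prediction-error filter, leading coefficient 1
P = r[0]                   # stage-0 prediction-error power
ks, powers = [], []
for m in range(1, 4):
    # reflection coefficient from the previous-stage taps and powers
    k = -sum(a[i] * r[m - i] for i in range(m)) / P
    # Levinson order-update: a_m,i = a_{m-1},i + k * a_{m-1},(m-i); a_m,m = k
    a = [a[i] + k * a[m - i] if 0 < i < m else a[i] for i in range(m)] + [k]
    P *= 1.0 - k * k       # power update per stage
    ks.append(k); powers.append(P)

print(ks)      # k1 = -0.8, k2 = 1/9 ≈ 0.111, k3 = 0.125
print(powers)  # stage powers: 0.36, 0.3556, 0.35
```

The three stage powers printed at the end are the average prediction-error powers at the outputs of the three lattice stages.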
67. State and explain the orthogonality principle and derive the Wiener-Hopf equations
in matrix form.
68. Explain backward linear prediction with the necessary equations. Give the relations
between backward and forward predictors.
71. Describe a direct method for computing the prediction-error filter coefficients and
the prediction-error power by solving the augmented Wiener-Hopf equations.
75. What is the need for normalization in the LMS algorithm? Describe the NLMS
algorithm with the necessary equations.
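The point of the normalization is that the step is divided by the tap-input energy, so the update is insensitive to the input power. A minimal identification sketch; the unknown system h, the step size μ, and the regularizer δ are illustrative assumptions, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.5, -0.3, 0.2])       # hypothetical unknown FIR system
M, mu, delta = 3, 0.5, 1e-6          # filter length, step size, regularizer

w = np.zeros(M)
u_buf = np.zeros(M)                  # tap-input vector u(n)
for n in range(5000):
    u_buf = np.roll(u_buf, 1); u_buf[0] = rng.standard_normal()
    d = h @ u_buf                    # desired response (noise-free here)
    e = d - w @ u_buf                # a-priori estimation error
    # NLMS: step normalized by ||u||^2 (delta guards against division by ~0)
    w += (mu / (delta + u_buf @ u_buf)) * e * u_buf

print(w)   # approaches h
```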
76. Explain the fast block LMS algorithm and compare its computational complexity with
that of the conventional LMS algorithm.
77. Explain the stability analysis of the normalized LMS filter.
79. Explain the LMS adaptation algorithm and discuss the robustness of the LMS algorithm.
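The LMS recursion itself is two lines: an a-priori error and a stochastic-gradient weight update. A minimal sketch on a hypothetical system-identification setup (the unknown system h, step size μ, and noise level are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
h = np.array([1.0, 0.5, -0.25])      # hypothetical unknown FIR system
M, mu = 3, 0.05                      # filter length and (small) step size

w = np.zeros(M)
u_buf = np.zeros(M)                  # tap-input vector u(n)
for n in range(20000):
    u_buf = np.roll(u_buf, 1); u_buf[0] = rng.standard_normal()
    d = h @ u_buf + 0.01 * rng.standard_normal()   # desired + measurement noise
    e = d - w @ u_buf                # a-priori estimation error e(n)
    w += mu * e * u_buf              # LMS stochastic-gradient update

print(w)   # hovers near h, within the LMS misadjustment
```

The small residual fluctuation of w around h is the misadjustment that the robustness discussion concerns.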
80. Consider a Wiener filtering problem with correlation matrix R = [1, 0.5; 0.5, 1]
and cross-correlation vector p = [0.5, 0.25]^T. Obtain a suitable value for the
step-size parameter μ that would ensure convergence of the method of steepest descent
and, using that value, determine the recursions for computing the elements w1(n) and
w2(n) of the tap-weight vector w(n). For this computation assume the initial values
w1(0) = w2(0) = 0.
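For this R, λmax = 1.5, so any step size 0 < μ < 2/1.5 ≈ 1.33 ensures convergence, and the recursion is w(n+1) = w(n) + μ[p − R w(n)]. A minimal sketch with the illustrative choice μ = 0.5:

```python
import numpy as np

R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])

lam_max = np.linalg.eigvalsh(R).max()   # 1.5, so mu must satisfy mu < 2/1.5
mu = 0.5                                # illustrative choice inside the bound

w = np.zeros(2)                         # initial values w1(0) = w2(0) = 0
for n in range(200):
    w = w + mu * (p - R @ w)            # steepest-descent recursion

print(w)   # converges to the Wiener solution [0.5, 0.0]
```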
84. Explain in detail the stability of the steepest-descent algorithm.
85. Explain the LMS adaptation algorithm and discuss the robustness of the LMS algorithm.
86. Discuss step-size control for acoustic echo cancellation. Also explain how the step
size can be controlled.
87. Explain the normalized LMS filter as the solution to a constrained optimization problem.
88. Explain the least-mean-square adaptation algorithm with the help of any one
application.
89. Consider a Wiener filtering problem with correlation matrix R = [1, 0.5; 0.5, 1]
and cross-correlation vector p = [0.5, 0.25]^T. Obtain a suitable value for the
step-size parameter μ that would ensure convergence of the method of steepest descent
and, using that value, determine the recursions for computing the elements w1(n) and
w2(n) of the tap-weight vector w(n). For this computation assume the initial values
w1(0) = w2(0) = 0.
90. Briefly explain the block LMS algorithm that can be applied to block adaptive filters.
Also explain its convergence properties.
91. (a) Explain the summary of the fast block LMS algorithm based on overlap-save
sectioning (assuming real-valued data).
(b) Write down the expressions for the time constants and the misadjustment in the
block LMS algorithm.
93. With the help of a block diagram explain step size control for acoustic echo cancel-
lation.
94. What are the equations that define the operation of the canonical model of the
complex LMS algorithm?
95. Obtain the stability condition of the steepest-descent algorithm and explain how the
transient behaviour of the MSE can be generated.
96. (a) Using the modified Newton's method, show that the transient behavior of Newton's
algorithm is characterized by a single exponential whose time constant τ is defined by
(1 − μ)^(2k) = e^(−k/τ).
(b) Draw the signal-flow-graph representation of the steepest-descent algorithm.
99. With the help of an overdetermined system and an underdetermined system, prove the
singular value decomposition theorem.
100. Define RLS algorithm. Obtain the ensemble average learning curve of the RLS
algorithm
101. Briefly explain the properties of least square estimates of RLS algorithm.
102. Explain the SVD theorem for the numerical solution of the least-squares problem.
Calculate the singular values and singular vectors of the two-by-two real matrix
A = [1, 1; 0.5, 2]. Perform the eigendecomposition of the matrix products A^T A and
A A^T. Hence, find the pseudoinverse of matrix A.
103. Explain briefly the method of least squares. Derive the exponentially weighted
recursive least-squares algorithm.
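The exponentially weighted RLS recursions (gain vector, a-priori error, weight update, Riccati update of P(n) = Φ^{-1}(n)) can be sketched in a few lines. A minimal sketch on a hypothetical noise-free identification problem; the forgetting factor λ = 0.99 and the initialization P(0) = δ^{-1}I are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
h = np.array([0.7, -0.2])                 # hypothetical unknown FIR system
M, lam, delta = 2, 0.99, 1e-2             # length, forgetting factor, init

w = np.zeros(M)
P = np.eye(M) / delta                     # P(0) = delta^{-1} I
u_buf = np.zeros(M)                       # tap-input vector u(n)
for n in range(500):
    u_buf = np.roll(u_buf, 1); u_buf[0] = rng.standard_normal()
    d = h @ u_buf                         # desired response (noise-free)
    Pu = P @ u_buf
    k = Pu / (lam + u_buf @ Pu)           # gain vector k(n)
    e = d - w @ u_buf                     # a-priori estimation error
    w = w + k * e                         # weight update
    P = (P - np.outer(k, Pu)) / lam       # Riccati update of P(n)

print(w)   # converges to h within a few times M iterations
```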
104. Give the three assumptions for the convergence analysis of the RLS algorithm and
derive the convergence analysis in the mean value and the mean-square deviation.
105. Explain singular value decomposition for underdetermined and overdetermined systems
with the necessary equations.
106. Describe MVDR spectrum estimation and derive the equation for MVDR spectrum
estimate.
107. What are the two methods of describing the least-squares condition of the linear
transversal filters?
108. Derive the matrix form of the normal equations for linear least-squares filters.
Explain the properties of the time-average correlation matrix.
110. Write a detailed note on exponentially weighted recursive least square algorithm.
112. Explain the SVD theorem for the numerical solution of the least-squares problem.
Calculate the singular values and singular vectors of the two-by-two real matrix
A = [1, 1; 0.5, 2]. Perform the eigendecomposition of the matrix products A^T A and
A A^T. Hence, find the pseudoinverse of matrix A.
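A numerical cross-check for this calculation: the eigenvalues of A^T A and A A^T are the squared singular values, and the pseudoinverse follows from A^+ = V diag(1/s) U^T. A minimal NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.5, 2.0]])

# Singular values/vectors: A = U diag(s) V^T
U, s, Vt = np.linalg.svd(A)
print(s)                                # singular values of A

# Cross-check: eigenvalues of A^T A equal the squared singular values
evals_AtA = np.linalg.eigvalsh(A.T @ A)
print(np.allclose(sorted(evals_AtA), sorted(s ** 2)))   # True

# Pseudoinverse A^+ = V diag(1/s) U^T (here A is invertible, so A^+ = A^-1)
A_pinv = Vt.T @ np.diag(1.0 / s) @ U.T
print(np.allclose(A_pinv, np.linalg.pinv(A)))           # True
```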
113. Derive the Normal equations and Linear least square filters.
114. Give the matrix inversion lemma to compute the least-squares estimate.
Consider the correlation matrix Φ(n) = u(n)u^H(n) + δI, where u(n) is a tap-input
vector and δ is a small positive constant. Use the matrix inversion lemma to evaluate
P(n) = Φ^{-1}(n).
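This evaluation can be sanity-checked numerically: with Φ(n) = u(n)u^H(n) + δI, the matrix inversion lemma (in its rank-one, Sherman-Morrison form) gives P(n) = δ^{-1}I − δ^{-2} u u^H / (1 + δ^{-1} u^H u). A minimal sketch with illustrative real-valued u and δ:

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal(4)        # illustrative tap-input vector (real-valued)
delta = 0.1                       # small positive constant

Phi = np.outer(u, u) + delta * np.eye(4)

# Matrix inversion lemma with A = delta*I and the rank-one term u u^T:
# P = Phi^{-1} = (1/delta) I - u u^T / (delta^2 (1 + u^T u / delta))
P = np.eye(4) / delta - np.outer(u, u) / (delta ** 2 * (1 + u @ u / delta))

print(np.allclose(P, np.linalg.inv(Phi)))   # True
```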
116. (a) Prove that the least-squares estimate ŵ is unbiased provided that the
measurement error process e_o has zero mean.
(b) Derive the normal equations in expanded form and in matrix form.
119. With the help of an overdetermined system and an underdetermined system, prove the
singular value decomposition theorem.
120. Describe the robustness of the RLS algorithm.
123. Discuss system identification using the IIR adaptive filter output-error method.
126. Explain system identification using the IIR adaptive filter equation-error method.
127. Substantiate the computational efficiency of RLS algorithm in dealing with finite
precision effects.
129. Solve the recursive minimum mean-square estimation problem for a scalar random
variable.
131. Explain system identification using an IIR adaptive filter, by using the
output-error method.
133. Explain the statement of the Kalman filtering problem with neat sketches and equa-
tions.
134. Discuss the finite-precision LMS algorithm with a neat block diagram and equations.
138. Draw and explain the block diagram of system identification using adaptive IIR
filter using output error method.
141. (a) Evaluate the tracking performance of the LMS algorithm based on the mean-square
deviation.
(b) Explain tracking of a time-varying system.
142. Analyze the output error method for IIR adaptive filters.
144. Briefly explain the Kalman filter by solving the recursive minimum mean-square
estimation problem for scalar random variables.
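In the scalar case, the recursive MMSE solution reduces to a one-dimensional Kalman filter: predict, form the innovation, weight it by the Kalman gain, and correct. A minimal sketch for a hypothetical scalar state-space model x(n) = a x(n−1) + w(n), y(n) = x(n) + v(n); the parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
a, q, r = 0.95, 0.1, 1.0          # state transition, process/measurement noise

# Simulate the hypothetical scalar state-space model
N = 2000
x = np.zeros(N); y = np.zeros(N)
for n in range(1, N):
    x[n] = a * x[n - 1] + np.sqrt(q) * rng.standard_normal()
    y[n] = x[n] + np.sqrt(r) * rng.standard_normal()

# Scalar Kalman filter: recursive MMSE estimate of x(n) from y(1..n)
x_hat, P = 0.0, 1.0
est = np.zeros(N)
for n in range(1, N):
    x_pred = a * x_hat                 # predicted state
    P_pred = a * a * P + q             # predicted error variance
    alpha = y[n] - x_pred              # innovation
    K = P_pred / (P_pred + r)          # Kalman gain
    x_hat = x_pred + K * alpha         # correction by weighted innovation
    P = (1.0 - K) * P_pred             # updated error variance
    est[n] = x_hat

# The filtered estimate has smaller error variance than the raw measurement
print(np.mean((est - x) ** 2) < np.mean((y - x) ** 2))   # True
```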