
06EC6032

A P J ABDUL KALAM TECHNOLOGICAL UNIVERSITY


M.TECH DEGREE
SECOND SEMESTER
Communication Engineering
Adaptive Signal Processing
Question Bank

PART A

1. Write briefly about the Linear Regression model.

2. Explain the Joint Process Estimation.

3. Compare the Forward and Backward Prediction Error Filters.

4. Derive the augmented Wiener-Hopf equations for forward prediction.

5. Mention the applications of the Levinson-Durbin algorithm.

6. Define linear optimum filters

7. Obtain the necessary and sufficient condition for optimality of the linear filter.

8. Show that the backward prediction error filter is maximum phase, with its zeros lying on or outside the unit circle in the z-plane.

9. Derive the augmented Wiener-Hopf equations for backward prediction.

10. What are the approaches to the development of linear adaptive filters?

11. Write short notes on error performance surface.

12. What are the properties of prediction error filters?

13. What is the basic idea of the steepest descent algorithm?

14. Compare the LMS algorithm with the steepest descent algorithm.

15. Mention any two applications of LMS algorithm.

16. Prove that the necessary and sufficient condition for stability of the steepest descent algorithm depends on the step-size parameter μ, which must satisfy the double inequality 0 < μ < 2/λ_max.
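The bound is easy to illustrate numerically. A minimal NumPy sketch, with an assumed example correlation matrix (not one given in the question):

```python
import numpy as np

# Steepest-descent stability: the step size must satisfy 0 < mu < 2/lambda_max.
R = np.array([[1.0, 0.5],      # assumed example correlation matrix
              [0.5, 1.0]])

lam_max = np.linalg.eigvalsh(R).max()      # largest eigenvalue of R
print("stable step sizes: 0 < mu <", 2.0 / lam_max)   # 2/1.5 = 1.333...
```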

17. Explain the steepest descent algorithm applied to the Wiener filter with the help of a signal flow graph.

18. Explain how LMS algorithm is used for estimating the frequency content of a narrow
band signal characterized by a rapidly varying power spectrum.

19. Compare and contrast LMS algorithm with Steepest-Descent algorithm

20. Write short note on Block LMS Algorithm

21. Write short notes on Normalized LMS Algorithm

22. Compare LMS algorithm with Steepest Descent algorithm.

23. Give the virtues and limitations of the steepest descent algorithm.

24. Explain the method to overcome the gradient noise amplification problem experienced by the LMS algorithm.

25. Give the significance of the pseudoinverse matrix.

26. Give the properties of the time-average correlation matrix.

27. Explain data windowing in method of least squares.

28. Discuss the properties of the time-average correlation matrix.

29. State and explain the Singular Value Decomposition Theorem

30. Define RLS Algorithm

31. Explain RLS algorithm with the help of signal flow graph

32. What are the properties of the time-average correlation matrix?

33. State matrix inversion lemma

34. What are the properties of least squares estimates?

35. Explain the pseudoinverse for overdetermined and underdetermined systems.

36. Explain different data windowing methods.

37. What are the properties of innovation in recursive MMSE for scalar random variables?

38. Compare the tracking performance of the LMS and RLS algorithms.

39. What is meant by the sigmoid neuronal model?

40. Briefly explain the stalling problem that arises in digital implementations of the LMS algorithm.

41. Explain briefly the Kalman filter.

42. Explain the equation error method used for adaptive filters with long impulse responses.

43. Write short notes on Stalling

44. Briefly describe multilayer perceptron

45. What are the two types of quantization error?

46. Explain the statement of Kalman Filtering Problem.

47. Give a short note on multi-layer perceptron.

48. Give short note on finite precision effects.

PART B
10 Marks Each

49. State the orthogonality principle and obtain a solution of the Wiener-Hopf equations for FIR filters.

50. Derive the Wiener-Hopf equations by examining the dependence of the cost function J on the tap weights of the transversal filter. What is the significance of the cost function J? Also evaluate the minimum mean-square error produced by the transversal filter.

51. Explain the properties of prediction error filter.
52. State the orthogonality principle and derive the expression of Wiener-Hopf equation.
53. Obtain the I/O relation of the backward prediction error filter. Can you convert a forward prediction error filter into a backward prediction error filter? Justify.
54. (a) Explain Levinson-Durbin algorithm.
(b) Explain stochastic gradient approach of linear adaptive filtering algorithm.
55. What is meant by adaptive filters and what are the approaches to the development
of linear adaptive filters?
56. (a) Show that the forward prediction error filter is minimum phase.
(b) Explain stochastic gradient approach of linear adaptive filtering algorithm.
57. (a) Derive the augmented Wiener-Hopf equations for forward and backward prediction filters.
(b) Derive the inverse Levinson-Durbin algorithm.
58. Show that the minimum mean-square value produced by a transversal filter is J_min = σ_d² − p^H w_o.
59. (a) Briefly discuss joint process estimation.
(b) Write a short note on linear optimum filtering.
60. Obtain the prediction error power equation for the Levinson-Durbin algorithm.
61. Derive the Wiener-Hopf equations.

62. Consider the linear prediction of a stationary autoregressive process u(n) generated
from the first order difference equation

u(n)=0.9u(n-1)+v(n)

where v(n) is white noise of zero mean and unit variance. Determine the tap weights
a2,1 and a2,2 of the forward prediction error filter and the reflection coefficients k1
and k2 of the lattice predictor.
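A short NumPy sketch for checking this computation by hand; it assumes the prediction-error-filter sign convention e_f(n) = u(n) + a_1 u(n−1) + a_2 u(n−2), so signs may flip under other conventions:

```python
import numpy as np

# AR(1) process u(n) = 0.9 u(n-1) + v(n) with unit-variance white noise v(n).
a, var_v = 0.9, 1.0
r0 = var_v / (1 - a**2)                  # r(0) of the AR(1) process
r = np.array([r0, a * r0, a**2 * r0])    # r(0), r(1), r(2)

# Two steps of the Levinson-Durbin recursion
k1 = -r[1] / r[0]                        # first reflection coefficient
P1 = r[0] * (1 - k1**2)                  # first-order error power
k2 = -(r[2] + k1 * r[1]) / P1            # second reflection coefficient
a21 = k1 + k2 * k1                       # tap weight a_{2,1}
a22 = k2                                 # tap weight a_{2,2}
print(k1, k2, a21, a22)                  # -0.9, ~0, -0.9, ~0
```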

63. Derive the Levinson-Durbin algorithm.

64. Consider a Wiener filtering problem with R = [1 0.5; 0.5 1] and p = [0.5; 0.25]. Obtain the tap weights and the minimum mean-square error produced by the Wiener filter. Formulate a representation of the Wiener filter in terms of the eigenvalues of matrix R and the associated eigenvectors.
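A minimal NumPy sketch for this problem; since σ_d² is not given, J_min is left in terms of it:

```python
import numpy as np

# Solve the normal equations w_o = R^{-1} p.
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])

w_o = np.linalg.solve(R, p)              # optimum tap weights: [0.5, 0.0]
print("w_o =", w_o)
print("J_min = sigma_d^2 -", p @ w_o)    # J_min = sigma_d^2 - 0.25

# Representation via the eigenstructure of R: w_o = Q diag(1/lam) Q^T p
lam, Q = np.linalg.eigh(R)               # eigenvalues [0.5, 1.5]
print(Q @ np.diag(1.0 / lam) @ Q.T @ p)  # same result
```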

65. Give the backward prediction error filter coefficients defined in terms of the tap
weights of the corresponding forward prediction error filter. Consider the linear
prediction of a stationary autoregressive process u(n) generated from the first-order difference equation u(n) = 0.9u(n−1) + v(n), where v(n) is white noise of zero mean and unit variance.
Determine the tap weights a2,1 and a2,2 of the forward prediction error filter and
draw the prediction error filter representation of the process.
66. Consider a WSS process u(n) whose autocorrelation function has the following values for different lags:
r(0)=1;
r(1)=0.8;
r(2)=0.6;
r(3)=0.4;
Use the Levinson-Durbin recursion to evaluate the reflection coefficients k1, k2, and k3, and set up the three-stage lattice predictor for this process. Evaluate the average
power of the prediction error produced at the output of each of the three stages of
the lattice predictor.
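A general Levinson-Durbin sketch applied to these values, using the same prediction-error-filter convention noted earlier (e_f(n) = u(n) + Σ a_i u(n−i)):

```python
import numpy as np

# Levinson-Durbin recursion for r(0..3) = 1, 0.8, 0.6, 0.4.
r = np.array([1.0, 0.8, 0.6, 0.4])

a = np.zeros(4)                          # a[1..m]: current predictor taps
P = r[0]                                 # zeroth-order error power
for m in range(1, 4):
    delta = r[m] + a[1:m] @ r[1:m][::-1]
    k = -delta / P                       # reflection coefficient k_m
    a[1:m] = a[1:m] + k * a[1:m][::-1]   # Levinson order update
    a[m] = k
    P *= 1 - k**2                        # prediction error power P_m
    print(f"k{m} = {k:.4f}, P{m} = {P:.4f}")
# k1 = -0.8000 P1 = 0.3600; k2 = 0.1111 P2 = 0.3556; k3 = 0.1250 P3 = 0.3500
```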

67. State and explain the orthogonality principle and derive the Wiener-Hopf equations in matrix form.
68. Explain backward linear prediction with the necessary equations. Give the relations between backward and forward predictors.

69. Explain forward linear prediction with neat sketches.

70. Derive the equations for the MMSE.

71. Describe a direct method for computing the prediction error filter coefficients and the prediction error power by solving the augmented Wiener-Hopf equations.

72. Explain joint process estimation with neat diagrams.

73. Derive the conditions for stability of the steepest descent algorithm.

74. Describe the convergence analysis of the LMS algorithm.

75. What is the need for normalization in the LMS algorithm? Describe the NLMS algorithm with the necessary equations.
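A minimal sketch of the update in question; the normalized step size mu_tilde, the regularization constant delta, and the toy identification setup are all assumed for illustration:

```python
import numpy as np

def nlms_update(w, u, d, mu_tilde=0.5, delta=1e-6):
    """One NLMS iteration. u: tap-input vector, d: desired response."""
    e = d - w @ u                                 # a priori estimation error
    w = w + (mu_tilde / (delta + u @ u)) * e * u  # energy-normalized step
    return w, e

# Toy usage: identify an assumed 2-tap system [1.0, -0.5]
rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(500):
    u = rng.standard_normal(2)
    d = np.array([1.0, -0.5]) @ u
    w, _ = nlms_update(w, u, d)
print(w)                                          # approaches [1.0, -0.5]
```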

76. Explain the fast block LMS algorithm and compare the computational complexity of the fast block LMS algorithm with that of the conventional LMS.

77. Explain the stability analysis of the normalized LMS filter.

78. Explain frequency domain adaptive filters. What are their advantages?

79. Explain the LMS adaptation algorithm and discuss the robustness of the LMS algorithm.

80. Consider a Wiener filtering problem with R = [1 0.5; 0.5 1] and p = [0.5; 0.25]. Obtain a suitable value for the step-size parameter μ that would ensure convergence of the method of steepest descent, and using that value determine the recursions for computing the elements w1(n) and w2(n) of the tap-weight vector w(n). For this computation assume the initial values w1(0) = w2(0) = 0.
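A numerical sketch of the resulting recursion; μ = 1.0 is one admissible choice here, since λ_max = 1.5 gives the bound 2/λ_max ≈ 1.33:

```python
import numpy as np

# Steepest descent: w(n+1) = w(n) + mu (p - R w(n)).
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])
p = np.array([0.5, 0.25])
mu = 1.0                                 # any 0 < mu < 2/1.5 converges

w = np.zeros(2)                          # w1(0) = w2(0) = 0
for n in range(20):
    w = w + mu * (p - R @ w)             # gradient step on J(w)
print(w)                                 # approaches w_o = [0.5, 0.0]
```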

81. Explain Fast LMS Algorithm.

82. Give a description of the frequency domain adaptive filter.

83. Give any two applications of the LMS algorithm.

84. Explain in detail the stability of the steepest descent algorithm.

85. Explain the LMS adaptation algorithm and discuss the robustness of the LMS algorithm.

86. Discuss the step-size control for acoustic echo cancellation. Also explain how the step size can be controlled.

87. Explain the normalized LMS filter as the solution to a constrained optimization problem.

88. Explain the least mean square adaptation algorithm with the help of any one application.

89. Consider a Wiener filtering problem with R = [1 0.5; 0.5 1] and p = [0.5; 0.25]. Obtain a suitable value for the step-size parameter μ that would ensure convergence of the method of steepest descent, and using that value determine the recursions for computing the elements w1(n) and w2(n) of the tap-weight vector w(n). For this computation assume the initial values w1(0) = w2(0) = 0.

90. Briefly explain the block LMS algorithm that can be applied to block adaptive filters. Also explain the convergence properties of the same.

91. (a) Explain the summary of the fast block LMS algorithm based on overlap-save sectioning (assuming real-valued data).
(b) Write down the expressions for the time constants and misadjustment in the block LMS algorithm.

92. Derive the normalized least-mean-square adaptation (NLMS) algorithm.

93. With the help of a block diagram, explain step-size control for acoustic echo cancellation.

94. What are the equations that define the operation of the canonical model of the complex LMS algorithm?

95. Obtain the stability condition of the steepest descent algorithm and explain how the transient behaviour of the MSE can be generated.

96. (a) Using the modified Newton's method, show that the transient behavior of Newton's algorithm is characterized by a single exponential whose time constant τ is defined by (1 − μ)^(2k) = e^(−k/τ).
(b) Draw the signal flow representation of the steepest descent algorithm.

97. Explain MVDR spectrum estimation.

98. Discuss the convergence analysis of the RLS algorithm.

99. With the help of overdetermined and underdetermined systems, prove the singular value decomposition theorem.

100. Define the RLS algorithm. Obtain the ensemble-average learning curve of the RLS algorithm.

101. Briefly explain the properties of the least-squares estimates of the RLS algorithm.

102. Explain the SVD theorem for the numerical solution of the least-squares problem. Calculate the singular values and singular vectors of the two-by-two real matrix A = [1 1; 0.5 2]. Do the eigendecomposition of the matrix products A^T A and A A^T. Hence, find the pseudoinverse of matrix A.
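A NumPy sketch for cross-checking the hand computation:

```python
import numpy as np

# SVD, eigendecompositions, and pseudoinverse of A = [[1, 1], [0.5, 2]].
A = np.array([[1.0, 1.0],
              [0.5, 2.0]])

U, s, Vt = np.linalg.svd(A)              # A = U diag(s) V^T
print("singular values:", s)

# The squared singular values are the eigenvalues of A^T A and A A^T.
print("eig(A^T A):", np.linalg.eigvalsh(A.T @ A))
print("eig(A A^T):", np.linalg.eigvalsh(A @ A.T))

print("pseudoinverse:\n", np.linalg.pinv(A))   # V diag(1/s) U^T
```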

103. Explain briefly the method of least squares. Derive the exponentially weighted recursive least-squares algorithm.
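For orientation, the recursion this derivation arrives at can be sketched as follows (real-valued case; the forgetting factor lam = 0.99 is an assumed illustrative value, and P is customarily initialized to δ⁻¹I):

```python
import numpy as np

def rls_update(w, P, u, d, lam=0.99):
    """One exponentially weighted RLS iteration (real-valued data).
    w: weights, P: inverse correlation matrix, u: tap input, d: desired."""
    Pu = P @ u
    k = Pu / (lam + u @ Pu)              # gain vector k(n)
    xi = d - w @ u                       # a priori estimation error
    w = w + k * xi                       # weight update
    P = (P - np.outer(k, Pu)) / lam      # Riccati update of P(n)
    return w, P
```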

104. Give the three assumptions for the convergence analysis of the RLS algorithm and derive the convergence analysis in the mean value and the mean-square deviation.

105. Explain the singular value decomposition for underdetermined and overdetermined systems with the necessary equations.

106. Describe MVDR spectrum estimation and derive the equation for the MVDR spectrum estimate.

107. What are the two methods of describing the least-squares conditions of linear transversal filters?

108. Derive the matrix form of the normal equations for linear least-squares filters. Explain the properties of the time-average correlation matrix.

109. Discuss the convergence analysis and robustness of the RLS algorithm.

110. Write a detailed note on the exponentially weighted recursive least-squares algorithm.

111. Explain MVDR spectrum estimation.

112. Explain the SVD theorem for the numerical solution of the least-squares problem. Calculate the singular values and singular vectors of the two-by-two real matrix A = [1 1; 0.5 2]. Do the eigendecomposition of the matrix products A^T A and A A^T. Hence, find the pseudoinverse of matrix A.

113. Derive the normal equations and explain linear least-squares filters.

114. Give the matrix inversion lemma to compute the least-squares estimate.
Consider the correlation matrix Φ(n) = u(n)u^H(n) + δI, where u(n) is a tap-input vector and δ is a small positive constant. Use the matrix inversion lemma to evaluate P(n) = Φ⁻¹(n).
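A numeric check of the closed form the lemma yields here, P(n) = (1/δ)(I − u u^H / (δ + u^H u)); the vector u and constant δ below are assumed illustrative values:

```python
import numpy as np

# Verify the matrix-inversion-lemma result for Phi = u u^T + delta I (real case).
u = np.array([1.0, 2.0, 3.0])            # assumed tap-input vector
delta = 0.01                             # small positive constant
I = np.eye(3)

Phi = np.outer(u, u) + delta * I
P = (I - np.outer(u, u) / (delta + u @ u)) / delta

print(np.allclose(P, np.linalg.inv(Phi)))    # True
```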

115. State the exponentially weighted recursive least-squares algorithm.

116. (a) Prove that the least-squares estimate ŵ is unbiased, provided that the measurement error process e_o has zero mean.
(b) Derive the normal equations in expanded form and in matrix form.

117. State the properties of least-squares estimates.

118. (a) List the properties of the correlation matrix Φ.
(b) Prove the matrix inversion lemma.

119. With the help of overdetermined and underdetermined systems, prove the singular value decomposition theorem.

120. Describe the robustness of the RLS algorithm.

121. Compare the tracking performance of LMS and RLS algorithms.

122. Give the criteria for tracking assessment.

123. Discuss system identification using the IIR adaptive filter output error method.

124. Discuss the quantization errors in the LMS algorithm.

125. Give a brief description of the models of neurons.

126. Explain system identification using the IIR adaptive filter equation error method.

127. Substantiate the computational efficiency of the RLS algorithm in dealing with finite precision effects.

128. Compare the tracking performance of the LMS and RLS algorithms.

129. Solve the recursive minimum mean-square estimation problem for a scalar random variable.

130. Discuss the finite precision effects of the RLS algorithm.

131. Explain system identification using the IIR adaptive filter with the output error method.

132. Discuss the finite precision effects of the LMS algorithm.

133. Explain the statement of the Kalman filtering problem with neat sketches and equations.

134. Discuss the finite precision LMS algorithm with a neat block diagram and equations.

135. Describe the tracking performance of the LMS algorithm. Compare it with the RLS algorithm.

136. Discuss the finite precision effects of the RLS algorithm.

137. Describe the tracking performance of the RLS algorithm. Compare it with the LMS algorithm.

138. Draw and explain the block diagram of system identification using an adaptive IIR filter with the output error method.

139. Explain the error propagation model in the RLS algorithm.

140. (a) Discuss the sigmoid neuronal model.
(b) Explain the criteria for tracking assessment.

141. (a) Evaluate the tracking performance of the LMS algorithm based on the mean-square deviation.
(b) Explain the tracking of a time-varying system.

142. Analyze the output error method for IIR adaptive filters.

143. (a) Compare stalling effects in the LMS and RLS algorithms.
(b) Discuss parameter drift in the LMS algorithm.

144. Briefly explain the Kalman filter by solving the recursive minimum mean-square estimation problem for scalar random variables.

