
DSP Lab Report

Kurian Abraham B070027EC

Shanas P. Shoukath B070059EC

T. Venkateswarlu B070427EC

V. A. Amarnath B070032EC
Creative Commons License
DSP Lab Report by Kurian Abraham, Shanas P. Shoukath,
T. Venkateswarlu and V. A. Amarnath is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 2.5 India License.
Contents
I Lab Report 1 1
1 z Plane and Pole - Zero Plots 1
1.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 z-Plane . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Pole-Zero Plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2 FIR filters 5
2.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.2 Responses of FIR filters . . . . . . . . . . . . . . . . . . . . . . . 6
2.2.1 First Degree Transfer Function . . . . . . . . . . . . . . . 6
2.2.2 Linear Phase Filters . . . . . . . . . . . . . . . . . . . . . 8
2.2.3 Minimum Phase Filters . . . . . . . . . . . . . . . . . . . 9

3 Linear Convolution 11
3.1 Concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.2 Convolution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.3 Illustrations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

4 Exercise 15

II Lab Report 2 16
1 Overlap Save Method 16
1.1 Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
1.2 Overlap Save Routine . . . . . . . . . . . . . . . . . . . . . . . . 17

2 Overlap Add Method 18


2.1 Concept . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2 Overlap Add Routine . . . . . . . . . . . . . . . . . . . . . . . . . 19

3 Inference 20

III Lab Report 3 21


1 Low Pass Filter Design using Window Method 21
1.1 Window method for FIR filter design . . . . . . . . . . . . . . . . 21
1.2 Window Functions . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.3 Designing a Lowpass Filter . . . . . . . . . . . . . . . . . . . . . 23
1.3.1 Specifications . . . . . . . . . . . . . . . . . . . . . . . 23
1.3.2 Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
1.3.3 Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
1.3.4 LPF Frequency Response . . . . . . . . . . . . . . . . . . 25
1.3.5 LPF in action . . . . . . . . . . . . . . . . . . . . . . . . . 26
1.4 Designing a Bandpass Filter . . . . . . . . . . . . . . . . . . . . . 30
1.4.1 Specifications . . . . . . . . . . . . . . . . . . . . . . . 30
1.4.2 Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.4.3 Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
1.4.4 BPF Frequency Response . . . . . . . . . . . . . . . . . . 32
1.4.5 BPF in action . . . . . . . . . . . . . . . . . . . . . . . . . 33

IV Lab Report 4 37
1 Fast Fourier Transform 37
2 Decimation In Time FFT Algorithm 38
3 Decimation In Frequency FFT Algorithm 39
4 Spectrum Analysis 41
5 FIR Filter Design using Frequency Sampling Method 55

V Lab Report 5 61
1 IIR Filters 61
2 IIR Design through Analog Filters 62
2.1 Butterworth Approximation . . . . . . . . . . . . . . . . . . . . . 62
2.2 Chebyshev Approximation . . . . . . . . . . . . . . . . . . . . . . 64

3 Transforming Analog Filter to Digital Filter 68


3.1 Impulse Invariant Technique . . . . . . . . . . . . . . . . . . . . . 68
3.2 Bilinear Transformation . . . . . . . . . . . . . . . . . . . . . . . 69

4 Design of Filters 72
4.1 Butterworth Filter Design . . . . . . . . . . . . . . . . . . . . . . 72
4.1.1 Specifications . . . . . . . . . . . . . . . . . . . . . . 72
4.1.2 Design using Butterworth approximation . . . . . . . . . 72
4.1.3 IIT Method . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.1.4 BLT Method . . . . . . . . . . . . . . . . . . . . . . . . . 74
4.2 Chebyshev Filter Design . . . . . . . . . . . . . . . . . . . . . . . 75
4.2.1 Specifications . . . . . . . . . . . . . . . . . . . . . . 75
4.2.2 Design using Chebyshev approximation . . . . . . . . . . 75
4.2.3 IIT Method . . . . . . . . . . . . . . . . . . . . . . . . . . 77
4.2.4 BLT Method . . . . . . . . . . . . . . . . . . . . . . . . . 77

4.3 Integrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.3.1 IIT Method . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.3.2 BLT Method . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.4 Comb Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

5 Observations 80
5.1 Butterworth Filter . . . . . . . . . . . . . . . . . . . . . . . . . . 80
5.2 Chebyshev Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
5.3 Integrator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.4 Comb Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

6 Code 96
6.1 Butterworth Filter . . . . . . . . . . . . . . . . . . . . . . . . . . 96
6.2 Chebyshev Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
6.3 Comb Filter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98

VI Lab Report 6 100


1 Lattice Realization of FIR Filters 100
1.1 Filter of order 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
1.2 Filter of order 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
1.3 Filter of order M . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
1.4 Design Procedure . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
1.5 Code . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
1.6 Observations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
1.7 Inference . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

VII Lab Report 7 111


1 Introduction 111
2 Levinson Durbin Recursion Algorithm 111
3 Inverse Filter 117
4 Observations 118
5 Code 126
6 Inference 130

VIII Lab Report 8 131


1 Introduction 131

2 Quantisation Technique 131
2.1 Uniform Quantization . . . . . . . . . . . . . . . . . . . . . . . . 131
2.2 Non Uniform Quantization . . . . . . . . . . . . . . . . . . . . . 132
2.3 PDF Optimized Non Uniform Quantizer . . . . . . . . . . . . . . 132
2.3.1 Determination of optimum ∆k . . . . . . . . . . . . . . . 133

3 Logarithmic Companding 135


3.1 A-law Companding . . . . . . . . . . . . . . . . . . . . . . . . . . 136
3.2 µ-law Companding . . . . . . . . . . . . . . . . . . . . . . . . . . 137

4 Digital Companding:
Segmentation of Companding Characteristics 137
4.1 Segmentation of A-law Companding Characteristics . . . . . . . 138
4.2 Segmentation of µ-law Companding Characteristics . . . . . . . . 138

5 Observations 140
6 Code 143
Part I
Lab Report 1
Date: 31st December, 2009

1 z Plane and Pole - Zero Plots


1.1 Concepts
Introduction
The Laplace transform is used for the analysis of continuous time signals,
whereas the z-transform is used to analyse discrete time signals. The
z-transform is defined as

X(z) = Σ_{n=−∞}^{∞} x(n) z^-n    (Eq. I.1.1)

where z is a complex variable.

Laplace domain to z domain


X(s) = ∫_{−∞}^{∞} x(t) e^-st dt    (Eq. I.1.2)

where s = σ + jω.
When x(t) is discretized to x(n),

t = nT

where T is the sampling period. Substituting

z = e^sT    (Eq. I.1.3)

⇒ X(z) = Σ_{n=−∞}^{∞} x(n) z^-n

Mapping from s-plane to z-plane


From (Eq. I.1.3),

z = e^σ e^jω = r e^jω    (Eq. I.1.4)

When the s-plane is mapped to the z-plane, the jω axis maps onto the unit
circle. Points in the LHP of the s-plane map onto the interior of this circle,
and points in the RHP map onto the exterior.

Pole-Zero Plot
Let

H(z) = N(z)/D(z)

Solving N(z) = 0 and D(z) = 0, we obtain the zeros and poles of H(z).

Stability Criterion: All poles of H(z) should lie within the unit circle.
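This criterion is easy to check numerically. A minimal sketch in NumPy (Python used here as a stand-in for the MATLAB listings in this report):

```python
import numpy as np

def is_stable(d):
    """All poles (roots of the denominator polynomial d,
    highest power of z first) must lie inside the unit circle."""
    poles = np.roots(d)
    return bool(np.all(np.abs(poles) < 1))

# H(z) = (1 + 2z^-1 + 3z^-2)/1: both poles at the origin, hence stable
print(is_stable([1, 0, 0]))   # True
# A pole at z = 2 lies outside the unit circle, hence unstable
print(is_stable([1, -2]))     # False
```

The denominator [1, 0, 0] corresponds to z², whose roots both lie at the origin, mirroring the d = [1 0 0] used in the MATLAB code below.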

1.2 z-Plane
Code

clf;
w = linspace(0,2*pi,100);
r = 1; %plot z plane
z = r*exp(1j*w);
plot (z);
axis equal;
hold on;
n = [1 2 3];
d = [1 0 0];
ze = roots(n); %finding zeros and poles
po = roots(d);
plot (real(ze), imag(ze), 'or');
hold on;
plot (real(po), imag(po), 'xr');
xlabel('real z');
ylabel('imag z');
title('z-Plane');
grid on;

Figure

Figure 1: z-Plane

1.3 Pole-Zero Plot


Code

clf;
w = linspace(-pi,pi,1000);
r = 1;
z = r*exp(1j*w);
plot (z); %plotting z plane
axis equal;
hold on;
z = exp(1j*w);
n = [1, 2, 3, 4];
d = [1, 0, 0, 0];
ze = roots(n);
po = roots(d);
plot (real(ze), imag(ze), 'or'); %plot zeros and poles
hold on;
plot (real(po), imag(po), 'xr');
xlabel('real z');
ylabel('imag z');
title('Pole Zero Plot');
grid on;

Figure

Figure 2: Pole Zero Plot

1.4 Inference
For the given transfer function, the pole-zero plot is obtained (see figure 2).
Here we observe that all three zeros are outside the unit circle and that there
are multiple poles at the origin. Hence, the given transfer function is stable,
whereas the inverse transfer function is not.

2 FIR filters
2.1 Concepts
Introduction
Filters with finite impulse responses are called FIR filters. A filter with
transfer function H(z) is stable if all its poles lie within the unit circle.
A causal FIR filter can be represented as

H(z) = Σ_{n=0}^{N−1} h(n) z^-n    (Eq. I.2.1)

where h(n) is its impulse response. H(z) has (N − 1) zeros, and (N − 1) poles
which all lie at z = 0. Therefore, FIR filters are always stable.

Linear Phase Filters

A linear phase filter does not alter the shape of the signal given to it;
it simply introduces a constant delay. Hence, linear phase filters do not
cause phase distortion to the input signal.

Condition for linear phase

h(n) should be of finite duration N    (Eq. I.2.2)

h(n) = h(N − 1 − n)    (Eq. I.2.3)

Solving,

H(e^jω) = e^{−jω(N−1)/2} [ 2 Σ_{n=0}^{N/2−1} h(n) cos(ω(n − (N−1)/2)) ],                N even
        = e^{−jω(N−1)/2} [ h((N−1)/2) + 2 Σ_{n=0}^{(N−3)/2} h(n) cos(ω(n − (N−1)/2)) ], N odd
                                                                           (Eq. I.2.4)

From (Eq. I.2.4), the exponential factor carries the phase

φ = −ω (N − 1)/2

Hence, any filter with a transfer function satisfying (Eq. I.2.2) and
(Eq. I.2.3) is linear phase.
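The symmetry condition (Eq. I.2.3) can be verified numerically: for symmetric h(n), rotating H(e^jω) by e^{+jω(N−1)/2} must leave a purely real function of ω, i.e. the phase is −ω(N−1)/2 up to sign flips of the real amplitude. A NumPy sketch, using the taps of the example in section 2.2.2:

```python
import numpy as np

# Symmetric taps h(n) = h(N-1-n), the example filter of section 2.2.2
h = np.array([1., 2., -3., 2., 1.])
N = len(h)
n = np.arange(N)
w = np.linspace(-np.pi, np.pi, 101)
H = np.array([np.sum(h * np.exp(-1j * wi * n)) for wi in w])
# If the phase is -w(N-1)/2 (up to sign flips of the real amplitude),
# rotating back by +w(N-1)/2 must leave a purely real function of w.
rotated = H * np.exp(1j * w * (N - 1) / 2)
print(np.max(np.abs(rotated.imag)) < 1e-9)   # True
```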

Minimum Phase Filters

A filter is called minimum phase when both its transfer function and its
inverse transfer function are stable and causal. A stable filter has all its
poles within the unit circle, whereas its zeros can be anywhere on the z-plane.
Thus, for a minimum phase filter, all zeros and poles lie within the unit
circle.
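This definition can also be checked numerically (a NumPy sketch; the helper name is ours):

```python
import numpy as np

def is_minimum_phase(num, den):
    """N(z)/D(z) with coefficients in descending powers of z: minimum
    phase requires every zero and every pole inside the unit circle."""
    zeros = np.roots(num)
    poles = np.roots(den)
    return bool(np.all(np.abs(zeros) < 1) and np.all(np.abs(poles) < 1))

# H(z) = 1 + z^-1 + z^-2/4 (section 2.2.3): double zero at z = -1/2
print(is_minimum_phase([1, 1, 0.25], [1, 0, 0]))   # True
# H(z) = 1 + 2z^-1: zero at z = -2, outside the unit circle
print(is_minimum_phase([1, 2], [1]))               # False
```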

2.2 Responses of FIR filters
2.2.1 First Degree Transfer Function

H(z) = 1 + z^-1

Code
Here we have plotted the real and imaginary parts of H(e^jω) versus ω.

clf;
w = linspace(-3*pi,3*pi,1000);
r = 1;
z = r*exp(1j*w);
figure(1);
hold on;
plot (real(z), imag(z)); %plotting z plane
axis ([-2,2,-2,2]);
axis equal;
z = exp(1j*w);
n = [1, 1];
d = [1, 0];
ze = roots(n);
po = roots(d);
plot (real(ze), imag(ze), 'or'); %pole zero plot
hold on;
plot (real(po), imag(po), 'xr');
xlabel('real z');
ylabel('imag z');
title('Pole Zero Plot');
grid on;
figure(2);
h = 1 + 1*z.^(-1); %finding DTFT
subplot (2,1,1);
plot (w,real(h));
grid on;
title ('real part of H(e^{j\omega})'); %real part
xlabel ('w');
ylabel ('real');
subplot (2,1,2);
plot (w,imag(h));
grid on;
title ('imaginary part of H(e^{j\omega})'); %imaginary part
xlabel ('w');
ylabel ('imag');

Figures

Figure 3: Pole Zero Plot

Figure 4: Real and Imaginary

Inference

H(e^jω) = 1 + e^-jω

The real part of H(e^jω) is cos(ω) with a DC offset of 1, and the imaginary
part is −sin(ω).
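This can be confirmed numerically (NumPy sketch):

```python
import numpy as np

# H(e^{jw}) = 1 + e^{-jw}: real part 1 + cos(w), imaginary part -sin(w)
w = np.linspace(-np.pi, np.pi, 201)
H = 1 + np.exp(-1j * w)
print(np.allclose(H.real, 1 + np.cos(w)))   # True
print(np.allclose(H.imag, -np.sin(w)))      # True
```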

2.2.2 Linear Phase Filters
H(z) = 1 + 2z^-1 − 3z^-2 + 2z^-3 + z^-4

Code
Using the code on page 6, replacing
n = [1, 2, -3, 2, 1];
d = [1, 0, 0, 0, 0];
and then plotting the magnitude and phase responses.

Figures

Figure 5: Pole Zero Plot

Figure 6: Magnitude and Phase Response

Inference
The frequency response is obtained by taking the DTFT of the transfer
function, that is, by evaluating the transfer function once along the unit
circle. Zeros pull the magnitude response towards zero and poles push it
towards infinity. As ω is varied from −π to π, wherever the product of the
Euclidean distances from the zeros is minimum, the magnitude response has a
local minimum. Similarly, near poles the response has a local maximum. A zero
lying on the unit circle makes the response zero, and a pole on the unit
circle makes the response infinite, at that ω.

2.2.3 Minimum Phase Filters

H(z) = 1 + z^-1 + z^-2/4

Code
Using the code on page 6, replacing
n = [1, 1, .25];
d = [1, 0, 0];
and then plotting the magnitude and phase responses.

Figures

Figure 7: Pole Zero Plot

Figure 8: Magnitude and Phase Response

3 Linear Convolution
3.1 Concepts

y(n) = Σ_{k=−∞}^{∞} x(n − k) h(k) = Σ_{k=−∞}^{∞} x(k) h(n − k)

where x(n) is the input to the LTI system with impulse response h(n) and
y(n) is its output. In short,

y(n) = x(n) ∗ h(n)    (Eq. I.3.1)


In z-domain,
Y (z) = X(z)H(z) (Eq. I.3.2)

3.2 Convolution
Code
x = [1 2 3];
y = [3 2 1];
xn = length (x);
yn = length (y);
zn = xn+yn-1;
z = zeros(1, zn);
x1 = [x, zeros(1, yn-1)]; %zero padding
y1 = [y, zeros(1, xn-1)];
for i = 1:zn %convolving
for j = 1:i
z(i) = z(i) + x1(j)*y1(i-j+1);
end
end
z

Result
z = 3 8 14 8 3
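The same nested-loop convolution, sketched in NumPy and checked against the library routine np.convolve:

```python
import numpy as np

def linear_conv(x, y):
    """Direct linear convolution with the same zero-padding scheme
    as the MATLAB routine above."""
    zn = len(x) + len(y) - 1
    x1 = np.concatenate([x, np.zeros(len(y) - 1)])   # zero padding
    y1 = np.concatenate([y, np.zeros(len(x) - 1)])
    z = np.zeros(zn)
    for i in range(zn):                              # convolving
        for j in range(i + 1):
            z[i] += x1[j] * y1[i - j]
    return z

z = linear_conv(np.array([1., 2., 3.]), np.array([3., 2., 1.]))
print(z.astype(int))                                     # [ 3  8 14  8  3]
print(np.allclose(z, np.convolve([1, 2, 3], [3, 2, 1])))  # True
```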

3.3 Illustrations
Unit impulse response (rect(n) ∗ δ(n))
Code
x = 0:1:22;
y = [1,zeros(1,20)];
H = [ 1, 1, 1 ];
xn = length (H); %convolution
yn = length (y);

zn = xn+yn-1;
z = zeros(1, zn);
x1 = [H, zeros(1, yn-1)];
y1 = [y, zeros(1, xn-1)];
for m = 1:zn
for n = 1:m
z(m) = z(m) + x1(n)*y1(m-n+1);
end
end
clf;
plot(x,z);
axis([0,10,0,4]);
grid on;

Figure

Figure 9: Impulse Response

Unit step response (rect(n) ∗ u(n))
Using the code on page 11 with
y = ones(1,21);

Figure

Figure 10: Step response

Exponential Input (rect(n) ∗ exp(n))


Using the code on page 11 with
a = 0:1:7;
y = exp(a);

Figure

Figure 11: rect(n) ∗ exp(n)

Square input
Using the code on page 11 with
a = 0:1:30;
y = square(a);

Figure

Figure 12: Square input

Sinusoidal input
Using the code on page 11 with
a = 0:1:30;
y = sin(a);

Figure

Figure 13: Sinusoidal input

4 Exercise
Are all linear phase filters minimum phase?
Consider a linear phase filter with transfer function

H(z) = a_0 + a_1 z^-1 + a_2 z^-2 + ... + a_n z^-n    (Eq. I.4.1)

where a_0 = 1 for FIR filters. Let the zeros of Eq. I.4.1 be r_i, where
i = 1, 2, ..., n. Hence, we have

Π_i r_i = −1,   n odd
        =  1,   n even                               (Eq. I.4.2)
Assuming here that n is even, we may have 2k zeros which are complex
conjugates and (n − 2k) real zeros. Now consider the different cases for the
absolute values of these zeros and their corresponding locations on the
z-plane.

1. The absolute value of every zero is one ⇐⇒ all zeros lie on the unit
circle. This violates the condition specified in 2.1 for minimum phase
filters.

2. The absolute value of each complex zero is one and the real zeros occur in
pairs α and 1/α with α < 1 ⇐⇒ the zero with absolute value 1/α lies outside
the unit circle, violating the minimum phase condition.

3. The absolute value of each real zero is one and the complex zeros have
absolute value greater than one ⇐⇒ both complex zeros lie outside the unit
circle, violating the minimum phase condition.

4. The absolute values of the zeros are not all equal to one; this is a
combination of cases 1 to 3 above ⇐⇒ this also violates the minimum phase
condition.

Corollary: For a minimum phase filter to be linear phase, the transfer
function should be symmetric and should satisfy Eq. I.4.2. From cases 1 to 4
above, this is never satisfied. Hence, linear phase filters are not minimum
phase.

Part II
Lab Report 2
Date: 7th January, 2010

1 Overlap Save Method


1.1 Concept
The sequence x(n) of length n is the input to an LTI system with impulse
response h(n) of length m (m ≪ n). x(n) is partitioned into blocks of length
l, with the first (m − 1) terms of each block taken from the previous block.
The product of the DFT of each block and the DFT of h(n) is computed, and the
IDFT of this product gives the circularly convolved result. The first (m − 1)
terms of each resultant block are discarded, and the remaining terms give the
result.

Figure 14: Overlap Save

1.2 Overlap Save Routine
Code
%routine for h*x
h=[1,1,1];
x=[3,-1,0,1,3,2,0,1,2,1];
l=3;
%linear convolution result
conv(x,h)
m=length(h);
L=length(x);
%creating blocks of length l
y=[zeros(1,m-1),x];
r=rem(length(y),(l-(m-1)));
y=[y,zeros(1,r)];
if r==0
y=[y,zeros(1,m-1)];
end
res = zeros(1,L+m-1);
k=1;
pos=0;
H=fft(h,l);
while k <= length(res)
temp=zeros(1,l);
for p=0:l-1
temp(p+1)=y(k+p);
end
temp=fft(temp,l);
temp=temp.*H;
temp=ifft(temp,l);
p=1;
while p <= l-m+1
res(p+pos)=temp(p+m-1);
p=p+1;
end
pos=pos+l-m+1;
k=k+l-m+1;
end
res

Output
ans =
3 2 2 0 4 6 5 3 3 4 3 1
res =
3 2 2 0 4 6 5 3 3 4 3 1
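The same overlap-save scheme can be sketched in NumPy (Python as a stand-in for the report's MATLAB), using FFTs for the circular convolution of each block and checking the result against direct linear convolution:

```python
import numpy as np

def overlap_save(x, h, L):
    """Overlap-save with DFT length L (> len(h)-1): each block carries the
    last m-1 samples of the previous one, and the first m-1 samples of each
    circular (IDFT) result are discarded."""
    m = len(h)
    step = L - (m - 1)
    H = np.fft.fft(h, L)
    n_out = len(x) + m - 1
    buf = np.concatenate([np.zeros(m - 1), x, np.zeros(L)])  # generous tail pad
    out = np.empty(0)
    k = 0
    while out.size < n_out:
        circ = np.fft.ifft(np.fft.fft(buf[k:k + L]) * H).real
        out = np.concatenate([out, circ[m - 1:]])   # keep only valid samples
        k += step
    return out[:n_out]

x = np.array([3., -1., 0., 1., 3., 2., 0., 1., 2., 1.])
h = np.array([1., 1., 1.])
print(np.allclose(overlap_save(x, h, 8), np.convolve(x, h)))   # True
```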

2 Overlap Add Method
2.1 Concept
The sequence x(n) of length n is the input to an LTI system with impulse
response h(n) of length m (m ≪ n). x(n) is partitioned into blocks of finite
length l. These blocks are linearly convolved with h(n) one by one, giving
sequences of length (l + m − 1). Because the original sequence x(n) was
partitioned into blocks, each such result overlaps its neighbours: the last
(m − 1) terms of a block are added to the first (m − 1) terms of the
succeeding block, with all other terms retained as such.

Figure 15: Overlap Add

2.2 Overlap Add Routine
Code
%routine for h*x
h=[1,1,1];
x=[3,-1,0,1,3,2,0,1,2,1];
l=3;
%linear convolution result
conv(x,h)
m=length(h);
n=length(x);
%creating blocks of length l
x=[x,zeros(1,l-rem(n,l))];
xn = length(x);
k=1;
res=zeros(1,n+m-1);
pos=1;
while k <= xn
temp = zeros(1,l+m-1);
for p=1:l
temp(p)=x(p+k-1);
end
temp=conv(temp,h);
k=k+l;
for p=1:l+m-1
res(pos)=res(pos)+temp(p);
if (pos>=n+m-1)
break;
end;
pos=pos+1;
end;
pos=pos-m+1;
end;
res

Output
ans =
3 2 2 0 4 6 5 3 3 4 3 1
res =
3 2 2 0 4 6 5 3 3 4 3 1
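An equivalent NumPy sketch of the overlap-add scheme, with each block convolved separately and its tail added into the succeeding block:

```python
import numpy as np

def overlap_add(x, h, l):
    """Overlap-add: cut x into length-l blocks, convolve each with h,
    and add the overlapping m-1 tail samples into the next block."""
    m = len(h)
    n_out = len(x) + m - 1
    out = np.zeros(n_out + l + m)            # slack tail, trimmed below
    for k in range(0, len(x), l):
        block = x[k:k + l]
        out[k:k + len(block) + m - 1] += np.convolve(block, h)
    return out[:n_out]

x = np.array([3., -1., 0., 1., 3., 2., 0., 1., 2., 1.])
h = np.array([1., 1., 1.])
print(np.rint(overlap_add(x, h, 3)).astype(int))
# -> 3 2 2 0 4 6 5 3 3 4 3 1 (matches the MATLAB result above)
print(np.allclose(overlap_add(x, h, 3), np.convolve(x, h)))   # True
```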

3 Inference
Both the overlap save and overlap add methods are efficient ways of computing
the linear convolution of long sequences, and both reduce the computational
complexity.


Part III
Lab Report 3
Date: 14th January, 2010

1 Low Pass Filter Design using Window Method

1.1 Window method for FIR filter design
As we know, the impulse response hd(n) and the frequency response Hd(ω) of a
filter are related as

hd(n) = (1/2π) ∫_{−π}^{π} Hd(ω) e^jωn dω,   −∞ < n < ∞    (Eq. III.1.1)

(Plots: the ideal lowpass filter frequency response Hd(ω) and the ideal
lowpass filter impulse response hd(n).)

Figure 16: Ideal Low Pass Filter

Evaluating hd(n) for the ideal low pass filter Hd(ω) shown in figure 16,

hd(n) = 2fc sin(nωc)/(nωc),   n ≠ 0, −∞ < n < ∞
      = 2fc,                  n = 0                  (Eq. III.1.2)

where fc is the cutoff frequency.
The impulse response plotted above reveals that hd(n) is symmetrical about
n = 0; hence the filter will have a linear phase response. But the practical
problem is that hd(n) extends over −∞ < n < ∞, i.e. the filter is not FIR.
Moreover, the filter derived in Eq. III.1.2 is non-causal. If we instead take
the frequency response

Hd(ω) = e^-jωα,   |ω| ≤ ωc
      = 0,        otherwise

where −α is the slope of the phase response of Hd(ω), then in the time domain
every frequency component undergoes a delay of α. From Eq. III.1.1 we have

hd(n) = sin(ωc(n − α)) / (π(n − α)),   n ≠ α
      = ωc/π,                          n = α         (Eq. III.1.3)
The causal FIR filter is obtained by windowing Eq. III.1.3 with a window w(n)
(which exists from 0 to N − 1):

h(n) = hd(n) w(n)

where hd(n) is symmetric about α and w(n) is symmetric about (N − 1)/2.
Hence, for h(n) to be symmetric,

α = (N − 1)/2

Using a rectangular window introduces ripples and overshoots near the
transition region in the frequency response. This phenomenon is called the
Gibbs phenomenon.
To minimize the ripples we need to use windows with a smoother transition, and
to get a close approximation to Hd(ω) we need to retain as many coefficients
of hd(n) as possible. Several window functions are available for obtaining an
FIR filter from hd(n). The selection of the window is based on the required
stopband attenuation and the order of the filter.

1.2 Window Functions

Name       Time domain sequence [0 ≤ n < N]                        k    Min. stopband att.
Bartlett   1 − 2|n − (N−1)/2| / (N−1)                              4    −25 dB
Hanning    (1/2) (1 − cos(2πn/(N−1)))                              4    −44 dB
Hamming    0.54 − 0.46 cos(2πn/(N−1))                              4    −53 dB
Blackman   0.42 − 0.5 cos(2πn/(N−1)) + 0.08 cos(4πn/(N−1))         6    −74 dB

1.3 Designing a Lowpass Filter

1.3.1 Specifications

Cutoff frequency, fc = 1000 Hz
Stopband edge frequency, fs = 1500 Hz
Passband attenuation, Ap ≈ 0 dB
Stopband attenuation, As ≥ 50 dB
Sampling frequency, fsam = 10000 Hz

For a stopband attenuation of 50 dB, the Hamming window can be used. The
required filter order N is given by

N = k / (transition width)

where k = 4 for the Hamming window and the transition width (fs − fp)/fsam is
normalized. Substituting values, we obtain N = 80, which is rounded up to the
odd value N = 81. (Here we have assumed fc = fp.)

Note: h(n) is designed using ωc = ωp. Verify that h(n) meets the design
specifications. If the passband attenuation is not satisfied, choose a
different ωc value and repeat the design steps. Also try to minimize N,
maintaining the specifications, for ease of implementation.

1.3.2 Design

The Hamming window is given by

w(n) = 0.54 − 0.46 cos(2πn/(N − 1)),   0 ≤ n < N    (Eq. III.1.4)

The windowed impulse response is given by

h(n) = hd(n) w(n),   0 ≤ n < N
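The design steps above (order estimate, ideal impulse response, Hamming window) can be sketched in NumPy, and the stopband specification checked numerically against the computed response:

```python
import numpy as np

# Specifications above: fp = 1000 Hz, fs = 1500 Hz, fsam = 10000 Hz,
# Hamming window (k = 4, about 53 dB stopband attenuation).
fsam, fp_hz, fs_hz, k = 10000, 1000, 1500, 4
N = int(np.ceil(k * fsam / (fs_hz - fp_hz)))    # 80
if N % 2 == 0:
    N += 1                                      # take N odd -> N = 81
a = (N - 1) // 2                                # alpha = 40
fc = fp_hz / fsam                               # normalized cutoff (fc = fp)

n = np.arange(N)
with np.errstate(invalid='ignore', divide='ignore'):
    hd = np.sin(2 * np.pi * fc * (n - a)) / (np.pi * (n - a))
hd[a] = 2 * fc                                  # limiting value at n = alpha
w = 0.54 - 0.46 * np.cos(2 * np.pi * n / (N - 1))   # Hamming window
h = hd * w

H = np.fft.fft(h, 4096)
f = np.arange(4096) / 4096                      # normalized frequency axis
mag_db = 20 * np.log10(np.abs(H) + 1e-12)
stop_att = mag_db[(f >= fs_hz / fsam) & (f <= 0.5)].max()
print(N, a)                                     # 81 40
print(stop_att < -50)                           # True: spec As >= 50 dB met
```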

1.3.3 Code

%Low Pass Filter Design


sam = 10000;
fp = 1000/sam;
fs = 1500/sam;
att = -50;
fc = fp;
%selecting Hamming window
k = 4;
N = ceil(k/(fs-fp));
a = (N-1)/2;
%windowing
n = 1:1:81;
hd = (sin(2*pi*fc.*(n-1-a))./(pi*(n-1-a))).*(.54-(.46*cos(2*pi*(n-1)./80)));
hd(a+1) = 2*fc;
%Frequency Transformation
H=fft(hd,1800);
x = linspace(-pi,pi,1800);
%plotting magnitude response in dB
plot(x,fftshift(20*log10(abs(H))));
grid on;
xlabel('Frequency (\omega) \rightarrow');
ylabel('|H(e^{j\omega})| (dB) \rightarrow');
title('Low Pass Filter Response');
%impulse as input
x=[zeros(1,1000),1,zeros(1,1000)];
l=100;
m=length(hd);
n=length(x);
x=[x,zeros(1,l-rem(n,l))];
subplot(2,1,1);
plot(x);
title('Unit Impulse Input');
xlabel('Time\rightarrow');
ylabel('Amp\rightarrow');
%convolution through overlap add method on page 18
xn = length(x);
k=1;
res=zeros(1,n+m-1);
pos=1;
while k <= xn
temp = zeros(1,l);
for p=1:l
temp(p)=x(p+k-1);

end
temp=conv(temp,hd);
k=k+l;
for p=1:l+m-1
res(pos)=res(pos)+temp(p);
if (pos>=n+m-1)
break;
end;
pos=pos+1;

end;
pos=pos-m+1;
end;
%plotting
subplot(2,1,2);
plot(res)
title('Unit Impulse Response');
xlabel('Time\rightarrow');
ylabel('Amp\rightarrow');
axis([950,1150,-.1,.3]);

1.3.4 LPF Frequency Response

(Plots: the low pass filter magnitude response |H(e^jω)| in dB and the phase
response, both versus frequency ω.)

Figure 17: Frequency Response

1.3.5 LPF in action

(Plots: unit impulse input and the corresponding unit impulse response of the
LPF.)

Figure 18: Impulse Response

Unit Step Input

(Plots: unit step input and the corresponding step response of the LPF.)

Figure 19: Step Response

Exponential Input

(Plots: exponential input and the corresponding response of the LPF.)

Figure 20: Exponential Response

Square Input

(Plots: square wave input and the corresponding response of the LPF.)

Figure 21: Square Response

From the Fourier series of a square wave, we know that it contains its
fundamental frequency and its odd harmonics. When we pass it through a low
pass filter, all frequency components beyond the cutoff frequency are
attenuated, so we obtain a distorted square wave as output.

Sinusoidal Input

(Plots: (a) a 500 Hz sinusoid, with fc in the pass band, and its response;
(b) a 1.8 kHz sinusoid, with fc in the stop band, and its attenuated
response.)

Figure 22: Sinusoidal Input

Sinusoidal Input

(Plots: a composite sinusoid with components fc1 = 300 Hz and fc2 = 2000 Hz,
and the frequency spectrum of the output.)

Figure 23: Composite Sinusoidal Input

Let us give a composite sinusoid as in figure 23. According to our design,
the cutoff frequency is 1000 Hz.

x(t) = sin(600πt) + sin(4000πt)

x(t) has frequency components of 300 Hz and 2 kHz. When we pass x(t) through
the designed LPF, we expect it to remove the 2 kHz component. The output and
its frequency spectrum are shown in figure 23; we see only the 300 Hz
component in the output, as expected.
We also observe that all the outputs are delayed by 40 samples, as α = 40.

1.4 Designing a Bandpass Filter
1.4.1 Specifications

Passband: −1 dB ≤ |H(e^jω)| ≤ 1 dB for 0.42π ≤ ω ≤ 0.61π

Stopband: |H(e^jω)| < −60 dB for 0 ≤ ω ≤ 0.16π and 0.87π ≤ ω ≤ π

1.4.2 Design
The bandpass filter is obtained as the difference of two low pass filters
having cutoff frequencies ωc1 and ωc2. ωc1 and ωc2 are selected such that the
ripple in the passband is as specified. The window function is selected
depending on the stopband attenuation.

ωc1 = 0.3π,  ωc2 = 0.7π
ωs1 = 0.16π, ωs2 = 0.87π
ωp1 = 0.42π, ωp2 = 0.61π

For a stopband attenuation of at least 60 dB, the window selected is the
Blackman window, for which k = 6:

w(n) = 0.42 − 0.5 cos(2πn/(N − 1)) + 0.08 cos(4πn/(N − 1))

N = max( 2πk/(ωs2 − ωp2), 2πk/(ωp1 − ωs1) ) = 47

α = (N − 1)/2 = 23

hd(n) = sin(ωc2(n − α))/(π(n − α)) − sin(ωc1(n − α))/(π(n − α))

∴ The required response is

h(n) = hd(n) w(n)
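A NumPy sketch of this bandpass design (difference of two windowed low-pass responses), checking the two stopband specifications numerically:

```python
import numpy as np

# Bandpass as the difference of two low-pass prototypes, Blackman window;
# values follow the design above: wc1 = 0.3*pi, wc2 = 0.7*pi, N = 47.
wc1, wc2, N = 0.3 * np.pi, 0.7 * np.pi, 47
a = (N - 1) // 2                                # alpha = 23
n = np.arange(N)
with np.errstate(invalid='ignore', divide='ignore'):
    hd = (np.sin(wc2 * (n - a)) - np.sin(wc1 * (n - a))) / (np.pi * (n - a))
hd[a] = (wc2 - wc1) / np.pi                     # limiting value at n = alpha
w = (0.42 - 0.5 * np.cos(2 * np.pi * n / (N - 1))
     + 0.08 * np.cos(4 * np.pi * n / (N - 1)))  # Blackman window
h = hd * w

H = np.fft.fft(h, 4096)
omega = 2 * np.pi * np.arange(4096) / 4096
mag_db = 20 * np.log10(np.abs(H) + 1e-12)
low_stop = mag_db[omega <= 0.16 * np.pi].max()
high_stop = mag_db[(omega >= 0.87 * np.pi) & (omega <= np.pi)].max()
print(low_stop < -60, high_stop < -60)          # True True
```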

1.4.3 Code

%band pass filter design


%specifications
wc2=.61*pi;
wc1=.42*pi;
ws1=.16*pi;
ws2=.87*pi;

N=min((wc1-ws1),(ws2-wc2));
N=ceil(12*pi/N); %N = 2*pi*k/(transition width), k=6 for Blackman
if rem(N,2)~=1
N=N+1;
end;
a=(N-1)/2;
%selecting Blackman Window
k=1:N;
hd =(sin(wc2*(k-1-a))./(pi*(k-1-a))-sin(wc1*(k-1-a))./(pi*(k-1-a)));
W=(.42-(.5*cos(2*pi*(k-1)./(N-1)))+(.08*cos(4*pi*(k-1)./(N-1))));
h(k)=hd .*W;
h((N+1)/2)=(wc2-wc1)/pi;
x=linspace(-pi,pi,1800);
H=fft(h,1800);
%plotting magnitude response in dB
plot(x,fftshift(20*log10(abs(H))));
grid on;
xlabel('Frequency \rightarrow');
ylabel('|H(e^{j\omega})| (dB) \rightarrow');
title ('Band Pass Filter Response');
%unit impulse input
x=[1,zeros(1,100)];
%plotting input
subplot(2,1,1);
plot(x)
axis([-100,100,0,2])
xlabel('Time \rightarrow')
ylabel('Amp \rightarrow')
title('Unit Impulse Input')
%convolution through overlap add method (see Part II)
l=100;
m=length(h);
n=length(x);
x=[x,zeros(1,l-rem(n,l))];
xn = length(x);
k=1;
res=zeros(1,n+m-1);
pos=1;
while k <= xn
temp = zeros(1,l);
for p=1:l
temp(p)=x(p+k-1);
end
temp=conv(temp,h);
k=k+l;
for p=1:l+m-1

res(pos)=res(pos)+temp(p);
if (pos>=n+m-1)
break;
end;
pos=pos+1;
end;
pos=pos-m+1;
end;
%plotting output
subplot(2,1,2);
kl = linspace(-pi,pi,n+m-1);
plot(res);
xlabel('Time \rightarrow');
ylabel('Amp \rightarrow');
title('Unit Impulse Response');

1.4.4 BPF Frequency Response

(Plot: the bandpass filter magnitude response |H(e^jω)| in dB versus
frequency.)

Figure 24: Frequency Response

1.4.5 BPF in action

(Plots: unit impulse input and the corresponding unit impulse response of the
BPF.)

Figure 25: Impulse Response

Step Input

(Plots: unit step input and the corresponding step response of the BPF.)

Figure 26: Step Response

For a step input to a band pass filter, we observe that the dc component of
the input is removed, as low frequency components are eliminated.
Exponential Input

(Plots: exponential input and the corresponding response of the BPF.)

Figure 27: Exponential Response

Square Input

(Plots: square wave input with fundamental frequency 0.45π and the
corresponding response of the BPF.)

Figure 28: Square Response

Sinusoidal Input

(Plots: (a) sin(0.45πn), in the passband, and its response; (b) sin(0.91πn),
in the stop band, and its attenuated response.)

Figure 29: Sinusoidal Response

Sinusoidal Input

(Plots: the composite sinusoid sin(0.91πn) + sin(0.45πn) and the frequency
spectrum of the output.)

Figure 30: Composite Sinusoidal Input

When a composite sinusoid is given as input, we observe that only sinusoids
of frequency fp1 < f < fp2 are not attenuated.
We also observe that all the outputs are delayed by 23 samples, as α = 23.


Part IV
Lab Report 4
Date: 28th January, 2010

1 Fast Fourier Transform


Concepts

Circular convolution of two sequences can be done by computing the discrete
Fourier transforms (DFTs) of the two sequences, multiplying them, and taking
the inverse DFT of the product. But when the sequences are very long, finding
their DFTs in this direct way takes much time and computation, because it
requires a number of multiplications proportional to the square of the
sequence length N (after padding with a sufficient number of zeros). To
compute the discrete Fourier transform faster, we need an algorithm which
reduces the number of multiplications; one such algorithm is the fast Fourier
transform (FFT).
In this algorithm, the periodicity and symmetry properties of the twiddle
factor, ω_N^kn = e^{−j(2π/N)kn}, are exploited to reduce the total number of
computations.
- Symmetry property: ω_N^(k + N/2) = −ω_N^k

- Periodicity property: ω_N^(k + N) = ω_N^k

In the fast Fourier transform, the sequence is split into two subsequences of
length N/2, the N/2-point DFTs of those two subsequences are calculated, and
they are combined to obtain the DFT of the original sequence. This process is
repeated by splitting the N/2-point DFTs into two subsequences each, and so
on, until 2-point DFTs are reached. In this process the total number of
multiplications reduces to about N log2 N from a value proportional to N^2.

Types of Fast Fourier Transforms

There are two types of fast Fourier transform, depending upon whether the
decimation is done in the time domain or in the frequency domain:
1. Decimation In Time FFT
2. Decimation In Frequency FFT

2 Decimation In Time FFT Algorithm
Here the original sequence is split into two subsequences depending upon
sample position: all the samples in odd places are grouped into one sequence
and all the samples in even places into another. Then the N/2-point DFTs of
both sequences are found. Let g(n) represent the even-numbered sample
sequence and h(n) the odd-numbered sample sequence. These DFTs are combined
using the symmetry and periodicity of the twiddle factor ω_N. In this way an
8-point DFT is reduced to two 4-point DFTs, and the number of multiplications
can be reduced further by computing two 2-point DFTs for each of these
4-point DFTs.

Properties
1. Input is in bit reversed order
2. Output is in normal order
3. In-place computation is possible

Figure 31: DIT FFT

X[k] = Σ_{n=0}^{N−1} x[n] ω_N^kn,   k = 0, 1, ..., N − 1

Separating even and odd terms, substituting n = 2r for n even and n = 2r + 1
for n odd,

X[k] = Σ_{r=0}^{N/2−1} x[2r] (ω_N^2)^kr + ω_N^k Σ_{r=0}^{N/2−1} x[2r+1] (ω_N^2)^kr

and, as ω_N^2 = ω_{N/2},

X[k] = Σ_{r=0}^{N/2−1} x[2r] ω_{N/2}^kr + ω_N^k Σ_{r=0}^{N/2−1} x[2r+1] ω_{N/2}^kr

     = G[k] + ω_N^k H[k],   k = 0, 1, ..., N − 1

The first term is the N/2-point DFT of the even samples and the second term
is the N/2-point DFT of the odd samples. This procedure can be repeated until
2-point DFTs are reached. Then, making use of the symmetry and periodicity of
the twiddle factor, the required N-point DFT is obtained. The flow graph of
an 8-point DFT using DIT FFT is shown in figure 31.
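The recursion above can be sketched directly in NumPy and checked against the library FFT (a didactic radix-2 implementation, not the optimized in-place butterfly structure of figure 31):

```python
import numpy as np

def dit_fft(x):
    """Radix-2 decimation-in-time FFT: split into even- and odd-indexed
    samples, recurse, then combine with twiddle factors (len(x) must be
    a power of two)."""
    N = len(x)
    if N == 1:
        return np.asarray(x, dtype=complex)
    G = dit_fft(x[0::2])                  # N/2-point DFT of even samples
    H = dit_fft(x[1::2])                  # N/2-point DFT of odd samples
    W = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors
    # X[k] = G[k] + W^k H[k];  X[k + N/2] = G[k] - W^k H[k]
    return np.concatenate([G + W * H, G - W * H])

x = np.arange(8, dtype=float)
print(np.allclose(dit_fft(x), np.fft.fft(x)))   # True
```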

3 Decimation In Frequency FFT Algorithm

Here the sequence of length N is split into two sequences of length N/2. A
sequence g(n) is made by adding each element of the first half to the
corresponding element of the second half. Similarly, another sequence h(n) is
made by subtracting each element of the second half from the corresponding
element of the first half and multiplying by the twiddle factor ω_N^n. The
N/2-point DFTs of g(n) and h(n) then give the N-point DFT of the original
sequence. These sequences can be further split into subsequences in the same
manner, and the procedure applied again, so that the total number of
multiplications reduces.

Properties
1. Input is in normal order
2. Output is in bit reversed order
3. In-place computation is possible

Figure 32: DIF FFT

X[k] = Σ_{n=0}^{N−1} x[n] ω_N^{kn},   k = 0, 1, ..., N − 1
For even-numbered samples,

X[2r] = Σ_{n=0}^{N−1} x[n] ω_N^{2nr},   r = 0, 1, ..., (N/2) − 1

      = Σ_{n=0}^{(N/2)−1} x[n] ω_N^{2nr} + Σ_{n=N/2}^{N−1} x[n] ω_N^{2nr}

      = Σ_{n=0}^{(N/2)−1} x[n] ω_N^{2nr} + Σ_{n=0}^{(N/2)−1} x[n + N/2] ω_N^{2(n+N/2)r}

using the periodicity of the twiddle factor,

X[2r] = Σ_{n=0}^{(N/2)−1} (x[n] + x[n + N/2]) ω_{N/2}^{nr},   r = 0, 1, ..., (N/2) − 1   (Eq. IV.3.1)

Similarly, for odd-numbered samples,

X[2r + 1] = Σ_{n=0}^{(N/2)−1} (x[n] − x[n + N/2]) ω_N^n ω_{N/2}^{nr},   r = 0, 1, ..., (N/2) − 1   (Eq. IV.3.2)

Eq. IV.3.1 is the N/2 point DFT of the sequence obtained by adding the first
half and the last half of the input sequence. Eq. IV.3.2 is the N/2 point DFT
of the sequence obtained by subtracting the second half of the input sequence
from the first half and multiplying the resulting sequence by ω_N^n.
Thus from Eq. IV.3.1 and Eq. IV.3.2,

g[n] = x[n] + x[n + N/2]

h[n] = x[n] − x[n + N/2]

The DFT is computed by first forming g[n] and h[n], then computing ω_N^n h[n],
and finally computing the N/2 point DFTs of these sequences to obtain the even
and odd samples respectively. The flow graph of the 8 point DFT using DIF FFT
is illustrated in figure 32.
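(Eq. IV.3.1) and (Eq. IV.3.2) can likewise be sketched as a recursive routine (an illustrative Python/NumPy sketch, not the lab's MATLAB implementation):

```python
import numpy as np

def dif_fft(x):
    """Radix-2 decimation-in-frequency FFT; len(x) must be a power of 2."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    if N == 1:
        return x
    g = x[:N // 2] + x[N // 2:]                        # g[n] = x[n] + x[n+N/2]
    wn = np.exp(-2j * np.pi * np.arange(N // 2) / N)   # twiddle factors w_N^n
    h = (x[:N // 2] - x[N // 2:]) * wn                 # h[n] * w_N^n
    X = np.empty(N, dtype=complex)
    X[0::2] = dif_fft(g)        # even-indexed outputs, (Eq. IV.3.1)
    X[1::2] = dif_fft(h)        # odd-indexed outputs, (Eq. IV.3.2)
    return X

x = np.arange(8, dtype=float)
print(np.allclose(dif_fft(x), np.fft.fft(x)))  # → True
```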

4 Spectrum Analysis
Sinusoidal Signal

[time plot of the 500 Hz sinusoid and its magnitude spectrum, with spikes of height 0.5 at ω = ±π/10]

Figure 33: Sinusoidal Signal

The DFT of a sine or cosine wave gives only two spikes, at conjugate
frequencies, and no other components. This is because a pure sine or cosine
function contains only one frequency at all times.

Observations From figure 33, we observe frequency components on both sides of
the origin at ω = ±2πf/f_sam, where f is the analog frequency of the given
signal and f_sam is the sampling frequency. These two spikes have an amplitude
equal to half the amplitude of the original signal.
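This observation can be checked numerically (a NumPy sketch, separate from the MATLAB listing; the frame length is chosen so the frequency falls exactly on a DFT bin and there is no leakage):

```python
import numpy as np

fsam, f, N, A = 10000, 500, 2000, 1.0    # f/fsam * N = 100, an exact DFT bin
n = np.arange(N)
x = A * np.sin(2 * np.pi * f * n / fsam)
X = np.abs(np.fft.fft(x)) / N            # normalised magnitude spectrum
k = np.argmax(X[:N // 2])                # positive-frequency spike
print(k * fsam / N)                      # spike sits at the analog frequency f
print(round(2 * np.pi * k / N, 4))       # omega = 2*pi*f/fsam = pi/10
print(round(float(X[k]), 3))             # height is half the amplitude A
```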

Pulse Input

[time plot of the pulse input and its sinc-shaped magnitude spectrum]

Figure 34: Pulse Input

Observations The DFT of a pulse function gives a sinc function centred at the
origin, whose amplitude corresponds to that of the pulse. If the width of the
pulse is small, the sinc is more spread out, and if the pulse is of longer
duration, the sinc is sharper.
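The inverse relation between pulse width and main-lobe width can be checked numerically (a NumPy sketch; the frame length N = 1000 is an assumption chosen so that the spectral nulls fall exactly on DFT bins):

```python
import numpy as np

def pulse_spectrum(L, N=1000):
    """|DFT| of a width-L pulse placed in an N-point frame."""
    x = np.concatenate([np.ones(L), np.zeros(N - L)])
    return np.abs(np.fft.fft(x))

for L in (100, 50):
    X = pulse_spectrum(L)
    # the peak X[0] equals the pulse area L; the first null falls at bin N/L,
    # so halving the pulse width doubles the width of the main lobe
    print(L, round(float(X[0]), 6), float(X[1000 // L]) < 1e-8)
```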

Square Input

[time plot of the 500 Hz square wave and its magnitude spectrum, showing the fundamental and its odd harmonics]

Figure 35: Square Input

Concepts When a square wave is applied as input, the frequency spectrum
indicates that frequency components are present on both sides of the origin at
the digital frequency of the applied square wave and at all its odd harmonics.
Consider a square wave of period 2L; on [0, 2L] it can be represented as

f(x) = 2{u(x) − u(x − L)} − 1,   x ∈ [0, 2L]

Its Fourier series representation is

f(x) = (4/π) Σ_{n=1,3,5,...} (1/n) sin(nπx/L)

From the Fourier series representation of the square wave, we observe that it
has nonzero coefficients only at the fundamental frequency and at its odd
harmonics.
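The odd-harmonic structure can be confirmed numerically (a NumPy sketch; note that for a sampled square wave the harmonic amplitudes follow the 4/nπ law only approximately):

```python
import numpy as np

N, P = 2000, 20                      # 100 periods of a 20-sample square wave
n = np.arange(N)
x = np.where(n % P < P // 2, 1.0, -1.0)   # 50% duty square wave
X = np.abs(np.fft.fft(x)) / N        # normalised magnitude spectrum
f0 = N // P                          # fundamental bin
# even harmonics vanish; odd harmonic amplitudes fall off roughly as 1/n
print(round(2 * X[f0], 3), round(2 * X[3 * f0], 3), round(2 * X[2 * f0], 12))
```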

Observations From figure 35, we observe that the frequency components at the
fundamental frequency on either side of the origin have an amplitude equal to
half the amplitude of the square pulse, and that the amplitudes of the harmonic
components decrease as we move farther from the centre.

Frequency Modulated Signals


Concepts Frequency modulation is the angle modulation in which the
instantaneous frequency of the signal is varied in proportion to the amplitude
of the message signal.
If f_c is the frequency of the carrier and m(t) is the message signal, then the
instantaneous frequency of the modulated signal is

f(t) = f_c + k m(t)   (Eq. IV.4.1)

where k is the frequency sensitivity.
The angle θ of the FM signal is

θ = 2π ∫_0^t f(τ) dτ   (Eq. IV.4.2)

Linear FM (Chirp Signal)


For the chirp function, m(t) = t; using (Eq. IV.4.1),

f(t) = f_c + kt

and from (Eq. IV.4.2),

θ = 2πf_c t + 2πk (t²/2)

[time plot and magnitude spectrum of the chirp; slope = 0.4, f = 200 Hz, fs = 500 Hz]

Figure 36: Chirp Signal (smaller slope)
[time plot and magnitude spectrum of the chirp; slope = 24, f = 200 Hz, fs = 500 Hz]

Figure 37: Chirp Signal (larger slope)

Observations From figure 36, we observe that when k is small there is less
frequency deviation and hence the frequency spectrum shows less spread. As k
increases, the deviation from f_c increases (see figure 37). The high frequency
components become dominant, suppressing the lower frequencies. Since the energy
of the signal is constant, the amplitude of the spectrum decreases as the
spectrum spreads out.

Sinusoidal Modulation
For a sinusoidal modulating signal, m(t) = A cos(2πf_m t); from (Eq. IV.4.1),

f = f_c + kA cos(2πf_m t) = f_c + Δf cos(2πf_m t)

where Δf is the frequency deviation.
The modulation index β is

β = Δf / f_m

The modulated wave is

s(t) = cos(2πf_c t + β sin(2πf_m t))

When β ≪ 1,

s(t) ≈ cos(2πf_c t) − β sin(2πf_c t) sin(2πf_m t)

which resembles an amplitude modulated signal. This is narrowband FM.

For other β,

s(t) = Re[exp(j2πf_c t + jβ sin(2πf_m t))]   (Eq. IV.4.3)

and using the complex Fourier series,

s(t) = Σ_{n=−∞}^{∞} J_n(β) cos[2π(f_c + nf_m)t]   (Eq. IV.4.4)

where J_n(β) is the nth order Bessel function of the first kind,

J_n(β) = (1/2π) ∫_{−π}^{π} exp(j(β sin x − nx)) dx
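(Eq. IV.4.4) can be verified numerically by computing J_n(β) from its integral form and summing a handful of sidebands (an illustrative NumPy sketch; the grid sizes are assumptions):

```python
import numpy as np

def J(n, beta, M=4096):
    """J_n(beta) via its integral form, (1/2pi) * int exp(j(beta sin x - n x)) dx,
    evaluated as a mean over a uniform grid (exact for periodic integrands)."""
    x = np.linspace(-np.pi, np.pi, M, endpoint=False)
    return np.mean(np.exp(1j * (beta * np.sin(x) - n * x))).real

beta, fc, fm = 2.0, 50.0, 5.0
t = np.linspace(0, 1, 2000, endpoint=False)
s = np.cos(2 * np.pi * fc * t + beta * np.sin(2 * np.pi * fm * t))
# partial sum of (Eq. IV.4.4): a few sidebands reconstruct the FM wave
approx = sum(J(n, beta) * np.cos(2 * np.pi * (fc + n * fm) * t)
             for n in range(-10, 11))
print(np.allclose(s, approx, atol=1e-5))  # → True
```

The carrier term of the sum carries the factor J_0(β), which is the β-dependent carrier amplitude discussed below.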

[time plot and magnitude spectrum of the sine modulated FM wave; β = 0.2, fc = 50 Hz, fm = 5]

Figure 38: Narrowband FM (β = 0.2)

Observations From (Eq. IV.4.3), we expect frequency components at ±ω, where

ω = 2π f_c / f_sam

and this is verified (see figure 38). Here β is small and so is the frequency
deviation, so the frequency spectrum has a component at f_c only.

[time plots and magnitude spectra of the sine modulated FM waves; β = 2, fc = 50 Hz, fs = 200 Hz: (a) fm = 3, (b) fm = 5]

Figure 39: Wideband FM (β = 2)

As expected from (Eq. IV.4.4), we observe frequency components at f_c ± nf_m.
The components are centred at f_c and the separation between adjacent
components is f_m. In this case β is high, so there is a spread of frequency
components located on both sides of the carrier frequency at distances of f_m,
2f_m, 3f_m, etc. Here the amplitude of the carrier component is not constant
for all modulation indices, unlike in AM and narrowband FM: it varies with β as
J_0(β). The physical explanation for this is that the envelope of an FM signal
is constant, so the average power developed across a load resistor is constant.
When the carrier is modulated to generate the FM signal, power in the side
frequencies appears only at the expense of power originally generated in the
carrier, thereby making the amplitude of the carrier wave dependent on β. When
f_m is increased from 3 to 5, we observe that the spectrum spreads out more
(see figure 39).

Kernel of DFT

[the pulse input and the signals obtained by applying the DFT to it one, two, three and four times, alternating between time and frequency domain plots]

Figure 40

Consider that the DFT of a signal x(t) is X(ω). If we take the DFT of X(ω) and
normalise, the resultant signal is the same as x(−t). Taking the DFT of x(−t)
once more gives X(−ω), and applying the DFT a fourth time and normalising gives
back x(t). This is because in finding the DFT we are rotating the samples on a
unit circle: performing the DFT twice gives a signal 180° out of phase, and
performing it four times gives the original signal back. So here, when a pulse
function is applied as input, its DFT gives a sinc. Applying the DFT once again
and normalising gives a mirror image of the original pulse. Applying the DFT
two more times and normalising gives back the original rectangular signal (see
figure 40).

Inference Taking the DFT each time may be visualised as rotating the sequence
anti-clockwise by π/2 in the time-frequency plane. So the eigenvalues of the
DFT are j, −1, −j, 1 and the kernel is j.

F(x(t)) = X(ω)

F²(x(t)) = F(X(ω)) = x(−t)

F³(x(t)) = X(−ω)

F⁴(x(t)) = x(t)
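This behaviour is easy to reproduce (a NumPy sketch; np.fft stands in for the lab's own FFT routine, and the normalisation by N follows the section below):

```python
import numpy as np

N = 512
x = np.concatenate([np.ones(100), np.zeros(N - 100)])  # rectangular pulse
F2 = np.fft.fft(np.fft.fft(x)) / N   # two DFTs, normalised: the mirror image
F4 = np.fft.fft(np.fft.fft(F2)) / N  # four DFTs: the original signal again
print(np.allclose(F2, x[(-np.arange(N)) % N]))  # → True  (x(-n) mod N)
print(np.allclose(F4, x))                       # → True
```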

Code
DIT FFT
clc;
x = [1 2 1 1 1 2 3 1 4 5];
n = length(x);
N = 2^(ceil(log2(n)));          % next power of two
z = [x, zeros(1, N-n)];         % zero-pad the input to length N
fft(z, N)                       % built-in FFT, for comparison
y = bitrevorder(z);             % DIT input must be in bit-reversed order
r = log2(N);
for p = 1:r                     % r = log2(N) butterfly stages
    pow = 2^p;                  % butterfly span at this stage
    wn = exp(-j*2*pi/pow);      % twiddle factor for this stage
    o = 1;
    while (o <= N)              % process each block of length pow
        for q = 1:pow/2         % apply twiddles to the lower half
            y(o+pow/2+(q-1)) = y(o+pow/2+(q-1))*wn^(q-1);
        end
        for q = 1:pow/2         % in-place butterflies: g = a+b, h = a-b
            g = y(o+(q-1)) + y(o+pow/2+(q-1));
            h = y(o+(q-1)) - y(o+pow/2+(q-1));
            y(o+(q-1)) = g;
            y(o+pow/2+(q-1)) = h;
        end
        o = o + pow;
    end
end
y

DIF FFT (the function jan28_2() is also used for computing the DFT in the other programs)
function [y] = jan28_2(x)
clc;
n = length(x);
N = 2^(ceil(log2(n)));          % next power of two
z = [x, zeros(1, N-n)];         % zero-pad the input to length N
a = log2(N);
y = z;
for p = 1:a                     % a = log2(N) stages
    temp = [];
    w = exp(-j*2*pi/(2^(a-p+1)));   % twiddle factor for this stage
    for q = 1:(2^(p-1))
        for m = 1:2^(a-p)       % g[n] = x[n] + x[n+N/2]
            temp = [temp y((2^(a-p))*(q-1)*2+m)+y((2^(a-p))*(q-1)*2+m+2^(a-p))];
        end
        for m = 1:2^(a-p)       % h[n] = (x[n] - x[n+N/2]) * w^n
            temp = [temp (w^(m-1))*(y((2^(a-p))*(q-1)*2+m)-y((2^(a-p))*(q-1)*2+m+2^(a-p)))];
        end
    end
    y = temp;
end
y = bitrevorder(y);             % DIF output is in bit-reversed order
end

Spectrum Analysis
% sine
figure(1);
pt = 2048;
n = linspace(0,pt-1,pt);
y = sin(2*pi*500*n/10000);
u=length(y)
subplot(2,1,1);
plot(y);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Sinusoidal Input')
text(80,1.5,'f=500Hz')
axis([0,100,-2,2]);
res = jan28_2(y)./pt;
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Sinusoidal Response')
text(2.7,.35,'\omega=\pi/10')
figure(2);
%pulse
pt = 1024;
n = linspace(0,pt-1,pt);
y = [ones(1,100),zeros(1,500)];
subplot(2,1,1);
plot(y);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Pulse Input')
res = jan28_2(y)./(100);
subplot(2,1,2);

w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)))
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
figure(3);
% square
pt = 2048;
n = linspace(0,pt-1,pt);
y = square(2*pi*500*n/10000);
subplot(2,1,1);
plot(y);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('SquareWave Input')
text(80,1.5,'f=500Hz')
axis([0,100,-2,2]);
res = jan28_2(y)./(pt);
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)))
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
% linear fm
figure(4);
pt = 1024;
n = linspace(0,2,1000);
y = sin(2*pi*(200*n+12*n.^2));
subplot(2,1,1);
plot(y);
axis([0,100,-2,2]);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Linear FM')
text(60,1.5,'Slope=24, f=200Hz, f_s=500Hz')
res = jan28_2(y)./(pt);
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
y = sin(2*pi*(200*n+.2*n.^2));
res = jan28_2(y)./(pt);
figure(5)

subplot(2,1,1);
plot(y);
axis([0,100,-2,2]);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Linear FM')
text(60,1.5,'Slope=.4, f=200Hz, f_s=500Hz')
subplot(2,1,2);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
figure(6);
% narrow fm
pt = 2048;
n = linspace(0,10,2000);
y = cos(2*pi*50*n+.2*sin(2*pi*5*n));
subplot(2,1,1);
plot(y);
axis([0,100,-2,2]);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Sine Modulated FM')
text(60,1.5,'\beta=.2,f_c=50Hz,f_m=5')
res = jan28_2(y)./(pt);
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
% wideband fm
figure(7);
pt = 2048;
n = linspace(0,10,2000);
y = cos(2*pi*50*n+2*sin(2*pi*5*n));
subplot(2,1,1);
plot(y);
axis([0,100,-2,2]);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Sine modulated FM')
text(60,1.5,'\beta=2, f_c=50Hz, f_m=5, f_s=200Hz')
res = jan28_2(y)./(pt);
subplot(2,1,2);
w=linspace(-pi,pi,pt);

plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
% wideband fm
figure(8);
pt = 2048;
n = linspace(0,10,2000);
y = cos(2*pi*50*n+2*sin(2*pi*3*n));
subplot(2,1,1);
plot(y);
axis([0,100,-2,2]);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Sine modulated FM')
text(60,1.5,'\beta=2, f_c=50Hz, f_m=3, f_s=200Hz')
res = jan28_2(y)./(pt);
subplot(2,1,2);
w=linspace(-pi,pi,pt);
plot(w,fftshift(abs(res)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
title('Response')
% fft
figure(9);
pt = 1024;
n = linspace(0,pt-1,pt);
y = [ones(1,100),zeros(1,500)];
subplot(3,2,1);
plot(y);
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
res = jan28_2(y);
res1 = jan28_2(res)./pt;
res2 = jan28_2(res1);
res3 = jan28_2(res2)./pt;
subplot(3,2,2);
plot(n,fftshift(abs(res/100)));
xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
subplot(3,2,3);
plot(n,abs(res1));
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')
subplot(3,2,4);
plot(n,fftshift(abs(res2/100)));

xlabel('Frequency(\omega) \rightarrow')
ylabel('Magnitude \rightarrow')
subplot(3,2,5);
plot(n,abs(res3));
xlabel('Time(n) \rightarrow')
ylabel('Magnitude \rightarrow')

Normalisation of FFT
Let's consider a rectangular pulse of length N in the time domain:

x[n] = 1 for 0 ≤ n < N, and 0 otherwise.

The DTFT of x[n] gives

X(ω) = Σ_n x[n] e^{−jωn} = Σ_{n=0}^{N−1} e^{−jωn} = (1 − e^{−jωN}) / (1 − e^{−jω})

Using L'Hôpital's rule to find X(0),

X(0) = [jN e^{−jωN} / (j e^{−jω})]_{ω=0} = N

From Parseval's theorem, the energies of any signal x[n] in the time domain and
in the frequency domain are equal:

Σ_{n=0}^{N−1} x²[n] = (1/N) Σ_{k=0}^{N−1} |X[k]|²

Hence, after taking the Fourier transform, the result is normalised by N.
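Both results can be confirmed numerically (a NumPy sketch):

```python
import numpy as np

N = 64
x = np.ones(N)                            # rectangular pulse of length N
X = np.fft.fft(x)
print(np.isclose(X[0].real, N))           # X(0) = N, as derived above
# Parseval: sum |x[n]|^2 equals (1/N) * sum |X[k]|^2
print(np.isclose(np.sum(x**2), np.sum(np.abs(X)**2) / N))
```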


Date: 4th February 2010

5 FIR Filter Design using Frequency Sampling Method
Concepts
In the frequency sampling method, the ideal frequency response of the required
filter, say H(ω), is given. N samples of H(ω) are taken by sampling at
kF_sam/N, k = 0, 1, 2, ..., N − 1. The filter coefficients h(n) are obtained by
taking the inverse DFT of the sampled values,

h(n) = (1/N) Σ_{k=0}^{N−1} H(k) e^{j2πnk/N}   (Eq. IV.5.1)

where H(k), k = 0, 1, 2, ..., N − 1, are samples of the ideal frequency
response.


From (Eq. IV.5.1), with the linear phase H(k) = |H(k)| e^{−j2παk/N},

h(n) = (1/N) Σ_{k=0}^{N−1} H(k) e^{j2πnk/N}

     = (1/N) Σ_{k=0}^{N−1} |H(k)| e^{−j2παk/N} e^{j2πnk/N}

     = (1/N) Σ_{k=0}^{N−1} |H(k)| e^{j2π(n−α)k/N}

     = (1/N) Σ_{k=0}^{N−1} |H(k)| [cos(2π(n − α)k/N) + j sin(2π(n − α)k/N)]

     = (1/N) Σ_{k=0}^{N−1} |H(k)| cos(2π(n − α)k/N)

since h(n) is real, so the imaginary part of h(n) is zero. For h(n) to be a
linear phase filter, h(n) needs to be symmetrical, with α = (N − 1)/2. Using
the symmetry of the samples |H(k)|, for N odd,

h(n) = (1/N) [ H(0) + 2 Σ_{k=1}^{(N−1)/2} |H(k)| cos(2π(n − α)k/N) ]   (Eq. IV.5.2)

and for N even, the upper limit of the sum is N/2 − 1.
(Eq. IV.5.2) is implemented to obtain the FIR filter coefficients.
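The construction can be sketched directly with an inverse FFT (illustrative NumPy; the length N = 33 and cutoff used here are assumptions for the demonstration, not the N = 255 design in the Code below):

```python
import numpy as np

N = 33
alpha = (N - 1) / 2
k = np.arange(N)
# ideal low pass magnitude samples, symmetric about k = N/2
mag = np.where((k <= 4) | (k >= N - 4), 1.0, 0.0)
H = mag * np.exp(-2j * np.pi * alpha * k / N)   # impose linear phase
h = np.fft.ifft(H)                              # inverse DFT, (Eq. IV.5.1)
print(np.allclose(h.imag, 0, atol=1e-12))       # coefficients come out real
print(np.allclose(h.real, h.real[::-1]))        # and symmetric: linear phase
```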

Ideal Low Pass Filter
Here samples are taken from the ideal low pass filter as shown in figure 41. As
we observe, the designed FIR filter has ripples in its response wherever there
is a sudden transition in the frequency response of the ideal filter. This is
because there is a sudden transition in the frequency domain, and we have taken
few or no samples (as in this case) in the transition band of the response.

[magnitude responses of the ideal low pass filter and of the designed FIR filter]

Figure 41: Ideal Low Pass Filter with less number of samples in transition band

Now let's compare the resultant FIR filter with the ideal filter by placing the
responses one over the other, as in figure 42.

[overlaid magnitude responses of the ideal and designed filters]

Figure 42: Comparison of ideal versus designed FIR filters

Let's now redesign the FIR filter by including more samples from the transition
band of the response. As we observe in figure 43, the designed filter has a
smoother transition band and fewer ripples.

[magnitude responses of the ideal low pass filter with a smoother transition band and of the resulting FIR filter]

Figure 43: Ideal Low Pass Filter with more number of samples in transition band

Arbitrary Frequency Response

For an arbitrary frequency response as in figure 44, the filter obtained is
shown below.

[the arbitrary frequency response and the magnitude response of the designed FIR filter]

Figure 44: Arbitrary Frequency Response

The frequency sampling method can be used to obtain an FIR filter from any
arbitrary frequency response satisfying the symmetry property.

Code
N = 255;
alpha = (N-1)/2;
H = [ones(1, 50), zeros(1,156), ones(1,50)];
figure(1);
subplot(2,1,1);
w = linspace(-pi, pi, N+1);
plot(w,H);
xlabel('frequency (\omega) \rightarrow');
ylabel('magnitude \rightarrow');
title('Ideal Low Pass Filter');
h = zeros(1, N);
for k = 0 : alpha
    for m = 1 : alpha
        h(k+1) = h(k+1) + 2*H(m+1)*cos(2*pi*m*(k-alpha)/N);
    end;
    h(k+1) = (h(k+1)+H(1))/N;
    h(N-k) = h(k+1);
end;
pt = 1024;
subplot(2,1,2);
w = linspace(-pi, pi, pt);
z = exp(j*w);
freq = 0;
for k = 0 : N-1
freq = freq + h(k+1)*z.^(-k);
end
plot(w, fftshift(abs(freq)));
xlabel('frequency (\omega) \rightarrow');
ylabel('magnitude \rightarrow');

Part V
Lab Report 5
Date: 4th & 11th February, 2010

1 IIR Filters
Basic Features
A realizable IIR digital filter is characterised by the recursive equation

y[n] = Σ_{k=0}^{∞} h[k] x[n − k]   (Eq. V.1.1)

     = Σ_{k=0}^{N} b_k x[n − k] − Σ_{k=1}^{M} a_k y[n − k]

where h[k] is the impulse response of the filter, b_k and a_k are the filter
coefficients, and x[n] and y[n] are the input and output of the filter.
The transfer function of the IIR filter is

H(z) = (b_0 + b_1 z^{−1} + ... + b_N z^{−N}) / (1 + a_1 z^{−1} + ... + a_M z^{−M})   (Eq. V.1.2)

     = Σ_{k=0}^{N} b_k z^{−k} / (1 + Σ_{k=1}^{M} a_k z^{−k})

From (Eq. V.1.1), it can be noted that the output y[n] depends on past outputs
y[n − k] as well as present and past input samples x[n − k]; that is, the IIR
filter is a feedback system, and hence the name Infinite Impulse Response (IIR)
filter.
The transfer function of the IIR filter, H(z), given by (Eq. V.1.2), can also
be represented as

H(z) = K(z − z_1)(z − z_2)···(z − z_N) / [(z − p_1)(z − p_2)···(z − p_M)]

where z_1, z_2, ... are the zeros and p_1, p_2, ... are the poles of the
transfer function H(z).
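A direct implementation of (Eq. V.1.1) can be sketched as follows (illustrative Python; the one-pole example and its coefficients are chosen here for the demonstration and are not from the report's designs):

```python
import numpy as np

def iir_filter(b, a, x):
    """Direct-form IIR filter: y[n] = sum b_k x[n-k] - sum a_k y[n-k]."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
        y[n] = acc / a[0]
    return y

# one-pole example H(z) = 1/(1 - 0.5 z^-1): impulse response is 0.5^n
delta = np.zeros(10); delta[0] = 1.0
y = iir_filter([1.0], [1.0, -0.5], delta)
print(np.allclose(y, 0.5 ** np.arange(10)))  # → True
```

The feedback terms a_k are what make the impulse response infinitely long, as the text notes.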

2 IIR Design through Analog Filters
IIR filters are generally designed by first designing their analog counterparts
and then transforming them into digital filters. This method is preferred as
analog filter design procedures are highly advanced and the classical filters
like Butterworth, Chebyshev and elliptic are readily available.
In IIR filter design, the most common practice is to convert the digital filter
specifications into analog low pass prototype filter specifications, to
determine the analog low pass filter transfer function H(s) meeting these
specifications, and then to transform it into the desired digital filter
transfer function. This approach is widely used because
1. Analog approximations are highly advanced.
2. They usually yield closed form solutions.
3. Extensive tables are available for analog filter design.

2.1 Butterworth Approximation

The low pass Butterworth filter is characterised by the following magnitude
squared frequency response,

|H(jΩ)|² = 1 / [1 + (Ω/Ω_C)^{2N}]   (Eq. V.2.1)

where N is the order of the filter and Ω_C is the 3 dB cutoff frequency of the
low pass filter.

Figure 45: Classical Butterworth Low Pass Filter Response

The response of a typical Butterworth low pass filter is depicted in Figure 45.
It can be seen that the response is monotonic in the pass band and stop band
regions. The response is also said to be maximally flat due to its initial
flatness, with a slope of almost zero at DC.
Normalising (Eq. V.2.1),

|H(jΩ)|² = 1 / (1 + Ω^{2N})

⇒ H(jΩ)H(−jΩ) = H(s)H(−s)|_{s=jΩ}

H(s)H(−s) = 1 / [1 + (s/j)^{2N}] = 1 / [1 + (−1)^N s^{2N}] = 1 / [1 + (−s²)^N]
Now, the poles of the transfer function are obtained by solving

1 + (−s²)^N = 0

Solving, the poles s_k are obtained as

s_k = e^{j2πk/2N} for N odd;   s_k = e^{j(2k−1)π/2N} for N even   (Eq. V.2.2)

Here, the poles are placed on the circumference of the unit circle (as the
function is normalised).
F = f / f_s   (Eq. V.2.3)

where F is the digital frequency, f is the analog frequency and f_s is the
sampling frequency.
To find n:

α = 10 log_{10} [1 + (Ω/Ω_c)^{2n}] dB   (Eq. V.2.4)

The attenuation in the passband should not exceed α_max and that in the
stopband should be at least α_min.

(Eq. V.2.4) ⇒ Ω_c = Ω / (10^{0.1α} − 1)^{1/2n}   (Eq. V.2.5)

Solving for n in (Eq. V.2.4) by substituting for α and Ω in the passband and
stopband,

n = (1/2) · log[(10^{0.1α_min} − 1) / (10^{0.1α_max} − 1)] / log(Ω_s/Ω_p)   (Eq. V.2.6)

63
Maximal Flatness All derivatives of |H(jΩ)|² up to, but not including, the
2N-th derivative are zero at Ω = 0, resulting in "maximal flatness". If the
requirement to be monotonic is limited to the passband only and ripples are
allowed in the stopband, then it is possible to design a filter of the same
order that is flatter in the passband than the "maximally flat" Butterworth.
Such a filter is the inverse Chebyshev, or Type II Chebyshev, filter.

2.2 Chebyshev Approximation


The Butterworth approximation, which is monotonic in both the pass band and the
stop band, gives a magnitude response which exceeds the specification for the
pass band ripple; the problem is that the order of the filter becomes high.
Usually, we can tolerate some amount of ripple, and it is advantageous to trade
off ripple against filter order, so that we obtain a filter which meets the
specifications with a lower order than its Butterworth counterpart. For this we
use the Chebyshev approximation. The characteristic of the Chebyshev
approximation is that it minimises the error between the idealised and the
actual filter over the range of the filter, allowing some ripple in the pass
band.

- Type I, with equal ripple in the pass band, monotonic in the stop band.

- Type II, with equal ripple in the stop band, monotonic in the pass band.

The Type I Chebyshev filter is characterised by the magnitude squared response
given by

|H(Ω)|² = 1 / [1 + ε² C_N²(Ω/Ω_p)]

where C_N(Ω/Ω_p) is a Chebyshev polynomial which exhibits equal ripple in the
pass band, N is the order of the polynomial as well as that of the filter, and
ε determines the pass band ripple, which in decibels is given by

passband ripple ≤ 10 log_{10}(1 + ε²)

The Chebyshev polynomial is given by

C_N(x) = cos(N cos⁻¹ x) for |x| ≤ 1;   C_N(x) = cosh(N cosh⁻¹ x) for |x| > 1

Figure 46: Classical Chebyshev Low Pass Filter Response

A typical response of a Type I Chebyshev characteristic is shown in Figure 46.
Here

α = 10 log_{10} [1 + ε² C_n²(Ω)]   (Eq. V.2.7)

Pole Calculation

1 + ε² C_n²(Ω) = 0   (Eq. V.2.8)

Substituting for C_n and solving (Eq. V.2.8) gives

cos(nu) cosh(nv) − j sin(nu) sinh(nv) = ±j/ε   (Eq. V.2.9)

where w = u + jv = cos⁻¹(s/j).
Equating the real part of (Eq. V.2.9) to zero,

cos(nu) cosh(nv) = 0 ⇒ cos(nu) = 0

∴ u = (2m + 1)π/2n,   m = 0, 1, 2, ..., n − 1

and equating the imaginary parts,

sin(nu) sinh(nv) = ±1/ε

⇒ v = ±(1/n) sinh⁻¹(1/ε) = ±a
Then,

s = j cos w

Denormalising by replacing s → s/ω_p,

⇒ s = jω_p cos[(2m + 1)π/2n ± j(1/n) sinh⁻¹(1/ε)],   m = 0, 1, 2, ..., n − 1   (Eq. V.2.10)

Guillemin's Algorithm
Guillemin's algorithm is used to transform a Butterworth filter design into a
Chebyshev design in a simple and easy way. The steps to obtain the Chebyshev
poles from the Butterworth poles are given below.
The Butterworth angles are

φ_k = [(2k + 1)/n] (π/2),   k = 0, 1, 2, ..., (2n − 1)

From (Eq. V.2.10),

−u_k = sinh a sin φ_k

±v_k = cosh a cos φ_k

The complementary angle to φ_k is ψ_k:

ψ_k = π/2 − [(2k + 1)/n] (π/2)

so that

sin φ_k = cos ψ_k
cos φ_k = sin ψ_k

Hence,

−u_k = sinh a cos ψ_k

±v_k = cosh a sin ψ_k

3 Transforming Analog Filter to Digital Filter
After the analog filter design is completed as per the specifications, the
design needs to be converted into the digital domain. The commonly used
techniques are

1. Impulse Invariance

2. Bilinear Transformation

3.1 Impulse Invariant Technique


The impulse invariant technique is used to obtain a digital filter whose
impulse response is as close as possible to the analog filter's impulse
response. In the impulse invariant method, starting with the analog filter
transfer function H(s), the impulse response h(t) is obtained using the inverse
Laplace transform. h(t) is then suitably sampled to produce h(nT), and the
desired transfer function H(z) is obtained by z-transforming h(nT), where T is
the sampling interval.
The analog filter impulse response h_a(t) is given by

h_a(t) = L⁻¹(H(s))

Now, sampling h_a(t) at t = nT,

h[n] = h_a(nT)
and then H(z) is obtained by taking the z-transform of h[n].
Consider the case when no pole has multiplicity more than one; then H(z) can be
obtained directly from H(s).

H(s) = Σ_{k=1}^{N} A_k / (s − p_k)   (Eq. V.3.1)

h_a(t) = L⁻¹(H(s))

⇒ h_a(t) = Σ_{k=1}^{N} A_k e^{p_k t} for t ≥ 0, and 0 for t < 0

h[n] = T h_a(nT),   n ≥ 0

⇒ h[n] = Σ_{k=1}^{N} A_k T e^{p_k nT}   (Eq. V.3.2)

⇒ H(z) = Σ_{k=1}^{N} A_k T / (1 − e^{p_k T} z^{−1})

       = Σ_{k=1}^{N} A_k T z / (z − e^{p_k T})   (Eq. V.3.3)

68
Ignoring the zero at z = 0 in (Eq. V.3.3) merely delays the transfer function:

H(z) = Σ_{k=1}^{N} A_k T / (z − e^{p_k T})   (Eq. V.3.4)

From (Eq. V.3.4), it can be inferred that the analog transfer function H(s) is
transformed to the digital transfer function H(z) by replacing p_k in
(Eq. V.3.1) with e^{p_k T}. The transformations are given below:

s → z
H(s) → T H(z)
p_k → e^{p_k T}
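As a numerical illustration of the mapping (a Python sketch; the first-order section H(s) = a/(s + a) with a = 1263 rad/s and fs = 10 kHz anticipates the Butterworth design later in this report):

```python
import numpy as np

T = 1e-4                     # sampling interval (fs = 10 kHz)
p = -1263.0                  # analog pole of H(s) = 1263/(s + 1263)
A = 1263.0                   # residue at that pole
# impulse invariant mapping: pole p -> e^{pT}, gain scaled by T
pole_z = np.exp(p * T)
gain = A * T
print(round(float(pole_z), 4), round(gain, 4))  # → 0.8814 0.1263
```

These are the coefficients that reappear in (Eq. V.4.1) below.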

Relation between Analog and Digital Frequencies Let Ω be the analog
frequency and ω the digital frequency. With

s = jΩ,   z = e^{jω}

the mapping z = e^{sT} gives jΩT = jω, so

Ω = ω / T   (Eq. V.3.5)

Hence, from (Eq. V.3.5) it can be seen that the analog and digital frequencies
are linearly related.

3.2 Bilinear Transformation


The bilinear transformation is more important and more useful than the impulse
invariant method. It is derived by approximating a first order differential
equation with a difference equation. This transformation maps the analog
transfer function H(s) from the s-plane onto the discrete transfer function
H(z) in the z-plane. The entire jΩ axis in the s-plane is mapped onto the unit
circle, the left half s-plane is mapped inside the unit circle, and the right
half s-plane is mapped outside the unit circle in the z-plane.

z = e^{sT}

⇒ s = (1/T) ln z = (2/T) [ (z−1)/(z+1) + (1/3)((z−1)/(z+1))³ + (1/5)((z−1)/(z+1))⁵ + ... ]

⇒ s ≈ (2/T) (z − 1)/(z + 1)

∴ s = (2/T) (1 − z^{−1})/(1 + z^{−1})   (Eq. V.3.6)

⇒ z = (1 + sT/2) / (1 − sT/2)

Hence,

H(s) → H(z)

s → (2/T) (1 − z^{−1})/(1 + z^{−1})

Relation between Analog and Digital Frequencies Let Ω be the analog
frequency and ω the digital frequency. With

s = jΩ,   z = e^{jω}

s = (2/T) (1 − z^{−1})/(1 + z^{−1})

jΩ = (2/T) (1 − e^{−jω})/(1 + e^{−jω})

⇒ Ω = (2/T) tan(ω/2)   (Eq. V.3.7)

⇒ ω = 2 tan⁻¹(TΩ/2)   (Eq. V.3.8)

Hence, from (Eq. V.3.7) it can be seen that the analog and digital frequencies
are not linearly related (see Figure 47). This deviation of the Ω–ω relation
from linearity is known as frequency warping.
To compensate for this effect, the analog frequencies are prewarped before
applying the bilinear transformation, using (Eq. V.3.7).
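Prewarping can be checked numerically (a Python sketch reproducing the band-edge values quoted in the designs below):

```python
import numpy as np

fs = 10000.0                          # sampling frequency
T = 1 / fs
for Omega in (1000.0, 2000.0):        # analog band edges, rad/s
    w = Omega * T                     # corresponding digital frequency
    Omega_pre = (2 / T) * np.tan(w / 2)   # prewarped edge, (Eq. V.3.7)
    print(round(float(Omega_pre), 2))     # → 1000.83, then 2006.69
```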

Figure 47: Ω − ω plot

4 Design of Filters
4.1 Butterworth Filter Design
4.1.1 Specifications
αmax = 0.5 dB
αmin = 20 dB
Ωp = 1000 rad/sec
Ωs = 2000 rad/sec
fsampling = 10000Hz

4.1.2 Design using Butterworth approximation


Now Ω_c is calculated using α and Ω in the passband and in the stopband, with n
from (Eq. V.2.6). Of the two Ω_c values, the one giving the better attenuation
is selected.

(Eq. V.2.6) ⇒ n = 4.83, so n is taken as 5.

(Eq. V.2.5) ⇒ Ω_c = 1234 when Ω = Ω_p, α = α_max;   Ω_c = 1263 when Ω = Ω_s, α = α_min

Ω_c = 1234 ⟹ stopband attenuation α = 21 dB, i.e. α_min is exceeded by 5%.

Ω_c = 1263 ⟹ passband attenuation α = 0.4 dB, i.e. α_max is undercut by 20%.

So Ω_c is chosen as 1263 rad/sec.
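These design numbers can be checked with a few lines (a NumPy sketch of (Eq. V.2.5) and (Eq. V.2.6)):

```python
import numpy as np

amax, amin = 0.5, 20.0                   # dB
wp, ws = 1000.0, 2000.0                  # rad/s
# (Eq. V.2.6): required order
n = 0.5 * np.log10((10**(0.1*amin) - 1) / (10**(0.1*amax) - 1)) / np.log10(ws / wp)
N = int(np.ceil(n))
# (Eq. V.2.5): cutoff computed from each band edge
wc_p = wp / (10**(0.1*amax) - 1)**(1 / (2 * N))
wc_s = ws / (10**(0.1*amin) - 1)**(1 / (2 * N))
print(f"{n:.2f}", N)                       # → 4.83 5
print(int(round(wc_p)), int(round(wc_s)))  # → 1234 1263
```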

As n is odd, (Eq. V.2.2) gives the poles as

s_k = e^{jπk/5},   k = 0, 1, ..., 9

From these poles, the ones lying in the left half plane are chosen for
stability. The selected poles as per this criterion are:

s_3 = −0.309 + j0.951
s_4 = −0.809 + j0.587
s_5 = −1
s_6 = −0.809 − j0.587
s_7 = −0.309 − j0.951

The analog transfer function is given by,

1
H(s) =
(s − s3 ) (s − s4 ) (s − s5 ) (s − s6 ) (s − s7 )
1
=
(s + 1) (s2 + 0.618s + 1) (s2 + 1.62s + 1)

Replacing s → s/Ω_c with Ω_c = 1263,

H(s) = 1263⁵ / [(s + 1263)(s² + 783s + 1263²)(s² + 2046s + 1263²)]

Considering it as a cascade of one first order and two second order systems
gives

H(s) = H1(s) H2(s) H3(s)

where

H1(s) = 1263 / (s + 1263)

H2(s) = 1263² / (s² + 783s + 1263²)

H3(s) = 1263² / (s² + 2046s + 1263²)

4.1.3 IIT Method

H1(s) = 1263 / (s + 1263)

∴ H1(z) = 0.1263 / (1 − 0.8814z^{−1})   (Eq. V.4.1)

H2(s) = 1263² / (s² + 783s + 1263²)
      = j664 / (s + 390 + j1201) − j664 / (s + 390 − j1201)

∴ H2(z) = j0.0664 / [1 − (0.955 − j0.115)z^{−1}] − j0.0664 / [1 − (0.955 + j0.115)z^{−1}]
        = 0.0153z^{−1} / (1 − 1.9z^{−1} + 0.925z^{−2})   (Eq. V.4.2)

H3(s) = 1263² / (s² + 2046s + 1263²)
      = j1076.9 / (s + 1023 + j740) − j1076.9 / (s + 1023 − j740)

∴ H3(z) = j0.108 / [1 − (0.9 − j0.0667)z^{−1}] − j0.108 / [1 − (0.9 + j0.0667)z^{−1}]
        = 0.0144z^{−1} / (1 − 1.804z^{−1} + 0.815z^{−2})   (Eq. V.4.3)

H(z) = H1(z) H2(z) H3(z)
where

H1(z) = Y1(z) / X(z)   (Eq. V.4.4)

H2(z) = Y2(z) / Y1(z)   (Eq. V.4.5)

H3(z) = Y(z) / Y2(z)   (Eq. V.4.6)

(Eq. V.4.1) and (Eq. V.4.4) give
y1[n] = 0.1263x[n] + 0.8814y1[n − 1]
(Eq. V.4.2) and (Eq. V.4.5) give
y2[n] = 0.0153y1[n − 1] + 1.9y2[n − 1] − 0.925y2[n − 2]
(Eq. V.4.3) and (Eq. V.4.6) give
y[n] = 0.0144y2[n − 1] + 1.804y[n − 1] − 0.815y[n − 2]

Figure 48: Signal Flow Graph

4.1.4 BLT Method


Prewarping the frequencies gives
Ωp = 1000.83 rad/sec
Ωs = 2006.69 rad/sec

1 + z −1
H1 (z) = (Eq. V.4.7)
16.83 − 14.83z −1
1 + 2z −1 + z −2
H2 (z) = (Eq. V.4.8)
242z −2 − 500z −1 + 262
1 + 2z −1 + z −2
H3 (z) = (Eq. V.4.9)
226z −2 − 500z −1 + 277

(Eq. V.4.7) and (Eq. V.4.4) give

y1[n] = (1/16.83) (x[n] + x[n − 1] + 14.83y1[n − 1])

(Eq. V.4.8) and (Eq. V.4.5) give

y2[n] = (1/262) (y1[n] + 2y1[n − 1] + y1[n − 2] + 500y2[n − 1] − 242y2[n − 2])

(Eq. V.4.9) and (Eq. V.4.6) give

y[n] = (1/277) (y2[n] + 2y2[n − 1] + y2[n − 2] + 500y[n − 1] − 226y[n − 2])

Figure 49: Signal Flow Graph

4.2 Chebyshev Filter Design


4.2.1 Specifications
αmax = 0.5 dB
αmin = 20 dB
Ωp = 1000 rad/sec
Ωs = 2000 rad/sec
fsampling = 10000Hz

4.2.2 Design using Chebyshev approximation


To find ε, put α = αmax and ω = ωp in (Eq. V.2.7)

ε = 0.3493
∵ Cn = 1 when ω = ωp
To find n, put α = αmin and ω = ωs in (Eq. V.2.7)

n = 4
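These values follow from the standard Chebyshev design formulas; a quick Python check (a sketch, assuming the usual textbook expressions for ε and the order):

```python
import math

# Specifications from above.
amax, amin = 0.5, 20.0            # passband/stopband attenuation in dB
wp, ws = 1000.0, 2000.0           # edge frequencies in rad/sec

eps = math.sqrt(10**(0.1*amax) - 1)                                # ripple parameter
n = math.acosh(math.sqrt(10**(0.1*amin) - 1) / eps) / math.acosh(ws/wp)
order = math.ceil(n)              # smallest integer order meeting the specs
```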

75
∴ The poles are obtained as

s = jωp cos((2m + 1)π/8 ± j0.4435),  m = 0, 1, 2, 3

s1 = −175.338 + j1016.238
s2 = −423.305 + j419.464
s3 = −423.305 − j419.464
s4 = −175.338 − j1016.238

H1(s) = (1031.25)^2 / (s^2 + 350.676s + (1031.25)^2)

H2(s) = (595.93)^2 / (s^2 + 846.61s + (595.93)^2)

Figure 50: Pole Zero Plot

76
4.2.3 IIT Method
H1(s) = 523.249j / (s + 175.338 + 1016.238j) − 523.249j / (s + 175.338 − 1016.238j)

H1(z) = 0.05232j / (1 − e^(−0.01753−0.10162j) z^-1) − 0.05232j / (1 − e^(−0.01753+0.10162j) z^-1)
      = 0.01043z^-1 / (1 − 1.9549z^-1 + 0.9655z^-2)
⇒ y1[n] = 0.01043x[n − 1] + 1.9549y1[n − 1] − 0.9655y1[n − 2]

H2(s) = 423.303j / (s + 423.303 + 420.9j) − 423.303j / (s + 423.303 − 420.9j)

H2(z) = 0.04233j / (1 − e^(−0.04233−0.04209j) z^-1) − 0.04233j / (1 − e^(−0.04233+0.04209j) z^-1)
      = 0.0034z^-1 / (1 − 1.915z^-1 + 0.9187z^-2)
⇒ y[n] = 0.0034y1[n − 1] + 1.915y[n − 1] − 0.9187y[n − 2]

4.2.4 BLT Method


Prewarping the frequencies gives

Ωp = 1000.83 rad/sec
Ωs = 2006.67 rad/sec
∴ The poles are

s1 = −175.484 + j1017.073
s2 = −423.654 + j421.28
s3 = −423.654 − j421.28
s4 = −175.484 − j1017.073

H1(s) = (1032.1)^2 / (s^2 + 351.49s + (1032.1)^2)

∴ H1(z) = 0.00251(1 + 2z^-1 + z^-2) / (1.0202 − 1.99468z^-1 + 0.985z^-2)
⇒ y1[n] = 0.00246(x[n] + 2x[n − 1] + x[n − 2]) + 1.9552y1[n − 1] − 0.9654y1[n − 2]

H2(s) = (597.44)^2 / (s^2 + 847.35s + (597.44)^2)

∴ H2(z) = 0.00089(1 + 2z^-1 + z^-2) / (1.043 − 1.998z^-1 + 0.9585z^-2)
⇒ y[n] = 0.000855(y1[n] + 2y1[n − 1] + y1[n − 2]) + 1.9156y[n − 1] − 0.919y[n − 2]

77
4.3 Integrator
The analog transfer function of the integrator is given by

H(s) = 1/s

4.3.1 IIT Method


H(z) = 1 / (1 − z^-1)
⇒ y(n) = x(n) + y(n − 1)

Here the digital filter obtained has a zero at z = 0. To remove this, a delay
is given to the input. So

H(z) = z^-1 / (1 − z^-1)
⇒ y[n] = x[n − 1] + y[n − 1]

4.3.2 BLT Method


Taking T = 1

H(z) = (1 + z^-1) / (2(1 − z^-1))
⇒ y[n] = 0.5x[n] + 0.5x[n − 1] + y[n − 1]
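Both integrators can be sketched in a few lines of Python (illustrative, not from the report); on a constant input the IIT version accumulates past samples, while the BLT version implements the trapezoidal rule:

```python
import numpy as np

def iit_integrator(x):
    # y[n] = x[n-1] + y[n-1] : running sum of past samples
    y = np.zeros(len(x))
    for n in range(1, len(x)):
        y[n] = x[n-1] + y[n-1]
    return y

def blt_integrator(x):
    # y[n] = 0.5 x[n] + 0.5 x[n-1] + y[n-1] : trapezoidal rule with T = 1
    y = np.zeros(len(x))
    y[0] = 0.5*x[0]
    for n in range(1, len(x)):
        y[n] = 0.5*x[n] + 0.5*x[n-1] + y[n-1]
    return y

x = np.ones(10)
ramp_iit = iit_integrator(x)     # 0, 1, 2, ..., 9
ramp_blt = blt_integrator(x)     # 0.5, 1.5, ..., 9.5
```

Both outputs ramp linearly, as expected when integrating a constant; they differ only by the half-sample offset of the trapezoidal rule.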

4.4 Comb Filter


The poles and zeros are selected in such a manner that the frequency response of the filter looks like a comb. Here, the zeros are placed on the jω axis of the s-plane at equally spaced locations, and the poles are placed in the left half plane, not far from the jω axis, as shown in the figure. The poles try to push the response towards large values, whereas the zeros placed at regular intervals cause the response to have dips. So the analog transfer function H(s) is chosen as

H(s) = s(s + ja)(s − ja) / [(s + b + jc)(s + b − jc)(s + b + jd)(s + b − jd)]

Taking a = 2, b = 1, c = 1, d = 3,

78

H(s) = s(s^2 + 4) / [(s^2 + 2s + 2)(s^2 + 2s + 10)]

H1(s) = s / (s^2 + 2s + 2)

H2(s) = (s^2 + 4) / (s^2 + 2s + 10)

Using the BLT method with T = 1,

H1(z) = (1 − z^-2) / (5 − 2z^-1 + z^-2)

H2(z) = 4(1 + z^-2) / (9 + 6z^-1 + 5z^-2)

y1[n] = (1/5)(x[n] − x[n − 2] + 2y1[n − 1] − y1[n − 2])

y[n] = (1/9)(4y1[n] + 4y1[n − 2] − 6y[n − 1] − 5y[n − 2])
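A Python sketch of this cascade (illustrative; coefficients taken from H1(z) and H2(z) above) confirms the comb behaviour, with nulls at ω = 0, π/2 and π coming from the zeros at z = ±1 and z = ±j:

```python
import numpy as np

# Cascade of the two BLT sections of the comb filter.
def comb(x):
    n = len(x)
    y1, y = np.zeros(n), np.zeros(n)
    for i in range(n):
        x2 = x[i-2]  if i >= 2 else 0.0
        a1 = y1[i-1] if i >= 1 else 0.0
        a2 = y1[i-2] if i >= 2 else 0.0
        y1[i] = (x[i] - x2 + 2*a1 - a2) / 5
        b1 = y[i-1]  if i >= 1 else 0.0
        b2 = y[i-2]  if i >= 2 else 0.0
        y[i] = (4*y1[i] + 4*a2 - 6*b1 - 5*b2) / 9
    return y

h = comb(np.r_[1.0, np.zeros(1023)])   # impulse response (poles well inside |z| = 1)
H = np.abs(np.fft.fft(h))              # bin k corresponds to ω = 2πk/1024
```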
9

Figure 51: Signal Flow Graph

79
5 Observations
5.1 Butterworth Filter
IIT Method

Figure 52: Filter Response (amplitude vs n; magnitude and phase vs ω)

80
BLT Method

Figure 53: Filter Response (amplitude vs n; magnitude and phase vs ω)

81
Figure 54: Frequency Response (magnitude in dB vs ω)

Figure 55: Impulse Response

82
Figure 56: Pulse Response

Figure 57: Sinusoidal Response (pass band frequency)

83
Figure 58: Sinusoidal Response (stop band frequency)

84
Figure 59: Sinusoidal Response (mixed sinusoids)

85
Figure 60: Pulse Response (including all poles)

Figure 61: Pulse Response (excluding all poles other than the dominant pole)

86
5.2 Chebyshev Filter
IIT Method

Figure 62: Frequency Response

Figure 63: Frequency Response (zoomed view of pass band)

87
BLT Method

Figure 64: Frequency Response

Figure 65: Frequency Response (zoomed view of pass band)

Figure 66: Impulse Response

88
Figure 67: Pulse Response (including all poles)

The rising and falling edges of the pulse excite the system, producing a damped sinusoid at the output, as seen in the impulse response of the system. The frequency of the sinusoid is expected to be the dominant pole frequency. Here, the dominant pole frequency is 1016 rad/sec, and from Figure 67 on page 89 the frequency of the sinusoid is

f = ((166 − 107)/10000)^-1 × 2π = 1064.9 rad/sec

We observe that the frequency deviates from what is expected. This is because of the effect of the non-dominant poles. So, let us take the filter with only the dominant pole, all other poles removed. The response of such a system is shown in Figure 68 on page 90.

89
Figure 68: Pulse Response (excluding all poles except the dominant pole)

With only the dominant pole retained, the frequency of the sinusoid is

f = ((142 − 80)/10000)^-1 × 2π = 1014 rad/sec

and, as expected, f equals the dominant pole frequency.

Figure 69: Sinusoidal Response (pass band frequency, f = 0.075 rad/sample)

90
Figure 70: Sinusoidal Response (stop band frequency, f = 0.75 rad/sample)

Figure 71: Sinusoidal Response (mixed sinusoids)

91
Figure 72: Preprocessing of input signal

Suppose we have a preprocessing block, having a zero at z = 1, through which the input signal is fed before it is given to our system. This compensates (nullifies) the pole of the pulse input. The z-transform of the input signal is

X(z) = (1 − z^-N) / (1 − z^-1)

and the transfer function of the preprocessing system is

P(z) = 1 − z^-1

So the output of the preprocessing unit is

Y1(z) = 1 − z^-N

y1[n] = δ[n] − δ[n − N]

So we observe two impulses, as in Figure 72 on page 92. Once this is fed into our Chebyshev filter, we obtain two exponentially decaying sinusoids excited by these impulses.
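The effect of the preprocessor on a pulse can be seen directly (a minimal numpy check, with an assumed pulse length N = 5):

```python
import numpy as np

# A length-N pulse through the preprocessor P(z) = 1 - z^-1
# reduces to the two impulses delta[n] - delta[n - N].
N = 5
pulse = np.ones(N)
y1 = np.convolve(pulse, [1.0, -1.0])   # preprocessor output
# y1 = [1, 0, 0, 0, 0, -1]
```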

92
5.3 Integrator
IIT Method

Figure 73: Impulse Response

Figure 74: Delayed Impulse Input

93
BLT Method

Figure 75: Impulse Response

Figure 76: Sinusoidal Response

94
Figure 77: Square Input

Figure 78: Ramp Input

95
5.4 Comb Filter

Figure 79: Filter Response (magnitude and phase vs ω)

6 Code
6.1 Butterworth Filter
clc;
x=[1,zeros(1,10000)];
%x=[zeros(1,50),ones(1,10),zeros(1,1000)];
%t=linspace(0,999,1000);
%x=sin(2*pi*1.2*t/100)+sin(2*pi*12*t/100);
figure(5);
subplot(3,1,1)
plot(x);
xlabel('n \rightarrow')
ylabel('Amplitude \rightarrow')
title('Frequency Response ')
axis([0,700,-1,1])
y=[];
pt=700;
for i=1:pt
if(i==1)
y(i)=.1263*x(1);

else
y(i)=.88*y(i-1)+.1263*x(i);
end
end
z(1)=0;
for i=2:pt
if(i==2)
z(i)=.0152*y(1);
else
z(i)=1.908*z(i-1)-.925*z(i-2)+.0152*y(i-1);
end
end
w=[];
w(1)=0;
for i=2:pt
if(i==2)
w(i)=.01436*z(1);
else
w(i)=1.804*w(i-1)-.818*w(i-2)+.01436*z(i-1);
end
end
w1=-pi:2*pi/(pt-1):pi;
l1=length(w);
n=0:l1-1;
subplot(3,1,2)
plot(w1,(fftshift(20*log((abs(fft(w)))))))
%plot(n,(w))
xlabel('n \rightarrow')
ylabel('Magnitude \rightarrow')
%figure(3);
subplot(3,1,3)
plot(w1,(fftshift(angle(fft(w)))))
xlabel('\omega \rightarrow')
ylabel('Phase \rightarrow')
%xlabel('\omega \rightarrow')
%ylabel('Magnitude(db) \rightarrow')
%title('Frequency Response \rightarrow')

6.2 Chebyshev Filter


clc
x=[1 zeros(1,999)];
%x=[zeros(1,50),ones(1,300),zeros(1,1000)];
%t=linspace(0,999,1000);
%x=sin(2*pi*1.2*t/100)+sin(2*pi*12*t/100);
figure(1);

pt=700;
y(1)=0;
y(2)=0.01043*0.944*x(1)+1.9549*y(1);

for i=3:pt
y(i)=1.9549*y(i-1)-0.9655*y(i-2)+0.01043*0.944*x(i-1);
end
w(1)=0;
w(2)=(3.4154*10^(-3))*y(1)+1.915*w(1);
for i=3:pt
w(i)=1.915*w(i-1)-0.9187*w(i-2)+(3.4154*10^(-3))*y(i-1);
end
w1=-pi:2*pi/(pt-1):pi;
l1=length(w);
n=0:l1-1;
%plot(n,(w))
%axis([0,700,-1,1.5])
plot(w1,20*log10(fftshift(abs(fft(w,pt)))))
grid on
xlabel('Frequency in radians per sample')
ylabel('Gain in dB')
title('Frequency response')

6.3 Comb Filter


clc;
x=[1,zeros(1,10000)];
figure(5);
%subplot(3,1,1)
%plot(x);
%axis([0,700,-1,1])
y=[];
pt=1024;
for i=1:pt
if(i==1)
y(i)=4*x(i)/17;
elseif(i==2)
y(i)=(-2*y(i-1)+4*x(i))/17;

else
y(i)=(-2*y(i-1)-1*y(i-2)+4*x(i)-4*x(i-2))/17;
end
end
z=[];
for i=1:pt
if(i==1)

z(i)=(8*y(i))/25;
elseif(i==2)
z(i)=(-18*z(i-1)+8*y(i))/25;
else
z(i)=(-18*z(i-1)-9*z(i-2)+8*y(i)+8*y(i-2))/25;
end
end
w=z;
w1=-pi:2*pi/(pt-1):pi;
l1=length(w);
n=0:l1-1;
subplot(3,1,2)
plot(w1,(fftshift(20*log10(abs(fft(w))))))
subplot(3,1,1)
plot(w1,abs(fft(w)))
%plot(n,(w))
%figure(3);
subplot(3,1,3)
plot(w1,(fftshift(angle(fft(w)))))

References
[1] Alan V. Oppenheim, Ronald W. Schafer and John R. Buck, Discrete-Time Signal Processing, 2nd Ed., Pearson Education, 2002

[2] Emmanuel C. Ifeachor and Barrie W. Jervis, Digital Signal Processing: A Practical Approach, 2nd Ed., Pearson Education, 2003

99
Part VI
Lab Report 6
Date: 18th February, 2010

1 Lattice Realization of FIR Filters


Consider a FIR filter with system function Hm(z),

Hm(z) = Am(z),  m = 0, 1, 2, ..., M − 1

where Am(z) is the polynomial

Am(z) = 1 + Σ_{k=1}^{m} αm(k) z^-k,  m ≥ 1

with A0(z) = 1. The unit impulse response of the mth order filter is hm(0) = 1 and hm(k) = αm(k), k = 1, 2, ..., m, where m is the degree of the polynomial Am(z). Let αm(0) = 1.
If {x(n)} is the input sequence to the filter Am(z) and {y(n)} is the output sequence,

y(n) = x(n) + Σ_{k=1}^{m} αm(k) x(n − k)        (Eq. VI.1.1)

Figure 80: Direct form realization of (Eq. VI.1.1)

1.1 Filter of order 1

Suppose m = 1

y(n) = x(n) + α1 (1)x(n − 1) (Eq. VI.1.2)


This filter can be realized using the lattice structure shown in Figure 81 on page 101. Here both inputs are x(n) and the output is taken from the upper branch. The output is the same as (Eq. VI.1.2) if K1 = α1(1), where K1 is a lattice parameter called the reflection coefficient.

100
Figure 81: First order Lattice Filter

1.2 Filter of order 2

For m = 2,

y(n) = x(n) + α2 (1)x(n − 1) + α2 (2)x(n − 2) (Eq. VI.1.3)


By cascading two lattice stages, (Eq. VI.1.3) can be realized. The output of the first stage is

f1 (n) = x(n) + K1 x(n − 1) (Eq. VI.1.4)


g1 (n) = K1 x(n) + x(n − 1)

and output of second stage is

f2 (n) = f1 (n) + K2 g1 (n − 1) (Eq. VI.1.5)


g2 (n) = K2 f1 (n) + g1 (n − 1)

From (Eq. VI.1.4) and (Eq. VI.1.5),

y(n) = f2(n) = x(n) + K1 x(n − 1) + K2 [K1 x(n − 1) + x(n − 2)]

y(n) = x(n) + K1 (1 + K2) x(n − 1) + K2 x(n − 2)        (Eq. VI.1.6)

Comparing (Eq. VI.1.3) and (Eq. VI.1.6),

α2(2) = K2,  α2(1) = K1 (1 + K2)

or

K2 = α2(2),  K1 = α2(1) / (1 + α2(2))
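As a quick check, the following Python sketch (with illustrative coefficients α2(1) = 0.5, α2(2) = 0.2, not taken from the report) converts the direct-form coefficients to reflection coefficients and verifies that the two-stage lattice reproduces the direct-form impulse response:

```python
import numpy as np

def lattice_fir(K, x):
    # One pass per lattice stage (Eq. VI.1.7); output is the upper branch f.
    f = np.array(x, dtype=float)
    g = np.array(x, dtype=float)
    for k in K:
        g_del = np.r_[0.0, g[:-1]]      # g_{m-1}(n - 1)
        f, g = f + k*g_del, k*f + g_del
    return f

alpha1, alpha2 = 0.5, 0.2               # illustrative alpha2(1), alpha2(2)
K2 = alpha2
K1 = alpha1 / (1 + alpha2)
h = lattice_fir([K1, K2], [1.0, 0, 0, 0, 0])
# h = [1, alpha2(1), alpha2(2), 0, 0] : the lattice matches the direct form
```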

101
1.3 Filter of order M

As the order 2 filter was developed in 1.2, a filter of any order, say M, can be realized by cascading lattice stages. Such a filter is described by

f0 (n) = g0 (n) = x(n)


fm (n) = fm−1 (n) + Km gm−1 (n − 1) (Eq. VI.1.7)
gm (n) = Km fm−1 (n) + gm−1 (n − 1)

where m = 1, 2, ..., M − 1. The recursive equations in z-domain are

F0 (z) = G0 (z) = X(z)


Fm(z) = Fm−1(z) + Km z^-1 Gm−1(z)        (Eq. VI.1.8)
Gm(z) = Km Fm−1(z) + z^-1 Gm−1(z)

And the output of the (M − 1)th lattice stage is the output of the (M − 1)th order FIR filter,

y(n) = fM−1(n)

As the mth order FIR filter output can be expressed using the output of the m-stage lattice fm(n),

fm(n) = Σ_{k=0}^{m} αm(k) x(n − k),  αm(0) = 1        (Eq. VI.1.9)

The z-transform of (Eq. VI.1.9) gives

Fm(z) = Am(z)X(z)

⇒ Am(z) = Fm(z) / F0(z)

Similarly, gm(n) can be expressed like fm(n) in (Eq. VI.1.9), but it is seen that the coefficients are in the reverse order:

gm(n) = Σ_{k=0}^{m} βm(k) x(n − k)

where

βm(k) = αm(m − k), k = 0, 1, 2, ..., m  and  βm(m) = 1


or in z domain,

102
Gm(z) = Bm(z)X(z)

⇒ Bm(z) = Gm(z) / X(z)

Now let us see the relation between Am(z) and Bm(z):

Bm(z) = Σ_{k=0}^{m} βm(k) z^-k
      = Σ_{k=0}^{m} αm(m − k) z^-k
      = z^-m Σ_{l=0}^{m} αm(l) z^l
      = z^-m Am(z^-1)

From the above relation, the zeros of the FIR filter with system function Am(z) are the reciprocals of the zeros of Bm(z). Thus, Bm(z) is called the reverse polynomial of Am(z).
From (Eq. VI.1.8), and dividing each equation with X(z),

A0 (z) = B0 (z) = 1
Am (z) = Am−1 (z) + Km z −1 Bm−1 (z) (Eq. VI.1.10)
Bm (z) = Km Am−1 (z) + z −1 Bm−1 (z)

where m = 1, 2, ..., M − 1. Therefore, a stage of the lattice filter can be expressed as

[ Am(z) ]   [ 1    Km ] [ Am−1(z)      ]
[ Bm(z) ] = [ Km   1  ] [ z^-1 Bm−1(z) ]

and, inverting the stage,

[ Am−1(z)      ]                      [ 1    −Km ] [ Am(z) ]
[ z^-1 Bm−1(z) ] = (1 / (1 − Km^2)) * [ −Km   1  ] [ Bm(z) ]

1.4 Design Procedure

Given the FIR filter coefficients for the direct form realization, the corresponding lattice filter parameters Ki can be determined. For a filter of m stages, Km = αm(m), and to compute Km−1 the polynomial Am−1 is required. From (Eq. VI.1.10),

Am−1(z) = (Am(z) − Km Bm(z)) / (1 − Km^2),  m = M − 1, M − 2, ..., 1

Hence all Ki are determined by stepping down from m = M − 1 to m = 1.
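This step-down procedure can be sketched in Python (the MATLAB listing in 1.5 implements the same idea); it assumes |Km| ≠ 1 at every step:

```python
import numpy as np

def step_down(coeffs):
    """Direct-form FIR coefficients [1, a(1), ..., a(M)] -> reflection coeffs."""
    a = np.asarray(coeffs, dtype=float)
    K = []
    while len(a) > 1:
        km = a[-1]                            # K_m = alpha_m(m)
        K.append(km)
        a = (a - km*a[::-1]) / (1 - km**2)    # A_{m-1}(z) from (Eq. VI.1.10)
        a = a[:-1]                            # highest-order term vanishes
    return K[::-1]                            # [K_1, ..., K_M]

K = step_down([1, 2, 3, 4, 5])
# K = [0.5, 0.3333..., 0.25, 5.0]
```

For the report's coefficients [1 2 3 4 5] this yields K = [0.5, 0.3333, 0.25, 5], matching the observation in 1.6.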

103
1.5 Code
clc;
p = [2 3 4 5];
a = [];
b = [];
H = [1, p];
l = length(H);
k = p(l-1);
c = [k];

% determining the K values
for n = 1:l-2
    l = length(H);
    a = H;
    for m = 1:l
        b(m) = a(l-m+1);
    end
    b = b(1:l);
    H = (a - k*b)/(1 - k^2);
    H = H(1:l-1);
    k = H(l-1);
    c = [k, c];
end
c
k = c;

% exciting the lattice structure with an impulse
l = 100;
x = zeros(1, l);
f1 = zeros(1, l);
g1 = zeros(1, l);
x(1) = 1;
f0 = x;
g0 = x;
n = 1;
for n = 1:4
    for i = 1:l
        if i == 1
            f1(i) = f0(i);
            g1(i) = k(n)*f0(i);
        else
            f1(i) = k(n)*g0(i-1) + f0(i);
            g1(i) = k(n)*f0(i) + g0(i-1);
        end
    end
    n = n + 1;
    f0 = f1;
    g0 = g1;
end
stem(f1);

1.6 Observations
Impulse Response

Figure 82: Impulse Response

For the direct filter coefficients [1 2 3 4 5], the corresponding lattice parameters are computed as
k =
1 0.50000 0.33333 0.25000 5.00000
When the lattice filter is excited with an impulse input, the output obtained is equal to the direct filter coefficients, as expected.

105
Frequency Response
Figure 83: Frequency Response

Pulse Response
Figure 84: Pulse Response

106
Sinusoidal Response

Figure 85: Sinusoidal Response (f = 1 Hz)

When White Gaussian Noise is given as input to the lattice structure, it is observed that the lattice structure introduces correlation into the signal; the lattice structure therefore behaves like a correlator. Here, when WGN is given as input, the lattice, being an FIR filter, causes truncation of the signal and thereby produces a more correlated output.
When the output is observed after each lattice stage, the signal gets more and more correlated; each stage of the lattice contributes to the introduction of correlation into the input WGN.

Figure 86: Output after stage 1 of lattice filter

107
Figure 87: Output of stage 2 of lattice filter

Figure 88: Output of stage 3 of lattice filter

108
Figure 89: Output of stage 4 of lattice filter

Figure 90: Output of stage 5 of lattice filter

1.7 Inference

Lattice structures are easy and simple to implement. Addition or deletion of a part of the lattice filter does not affect the other stages of the filter, so lattice filters are used in linear prediction applications. For an mth order filter there are only m reflection coefficients, and it is possible to use this filter as a filter of any order from 1 to m without changing the previously computed coefficients. But if we had used direct form realizations, we would need m(m + 1)/2 filter coefficients to implement filters of every order between 1 and m.

109
References
[1] John G. Proakis and Dimitris G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applications, 3rd Ed., Prentice Hall International, Inc.

110
Part VII
Lab Report 7
Date: 25th February & 11th March, 2010

1 Introduction
At the transmitting side, the message signal is passed through a decorrelator to give a signal which is approximately white. Then only the coefficients of the filter need to be transmitted, along with the compressed error signal.

y(n) −→ A(z) −→ ŵ(n)

By transmitting the filter coefficients A(z) and the error signal ŵ(n), the message signal can be reconstructed. It is also possible to reconstruct the message signal without transmitting the error signal ŵ(n): as the error signal is approximately white, it is sufficient to generate locally a white noise of the same power as the error.
At the receiver side,

w(n) −→ 1/A(z) −→ ŷ(n)

where ŷ(n) is the predicted message signal.

2 Levinson-Durbin Recursion Algorithm


Consider an ideal sampled signal y(n). Each sample of the signal can be represented as a linear combination of previous samples. Let us represent the nth sample of y(n) as

ŷ(n) = a1 yn−1 + a2 yn−2 + . . . + am yn−m

en = y(n) − ŷ(n)        (Eq. VII.2.1)

(n − m) depends on the extent of correlation of y(n), m gives the order of the filter, and en is the prediction error.

Determining the order of the filter


Let us assume the order of the filter m = 1. That is,

ŷ(n) = −a1 yn−1

en = yn + a1 yn−1        (Eq. VII.2.2)

111
Now, to obtain a more accurate prediction, the error should be minimized, i.e. minimize Σ en^2. Taking the z-transform of (Eq. VII.2.2),

En(z) = Y(z) + a1 z^-1 Y(z)

En(z) / Y(z) = A(z) = 1 + a1 z^-1

The error energy is to be minimized. Error energy ξ:

ξ = E[en^2]        (Eq. VII.2.3)

Minimizing (Eq. VII.2.3),

⇒ ∂ξ/∂a1 = 0
This can be interpreted in vector space as the projection of a vector onto one dimension; the error is minimum when the projection is perpendicular.

ξ = E[en^2]
⇒ ∂ξ/∂a1 = 2E[en ∂en/∂a1] = 0

Now, from (Eq. VII.2.2),

∂en/∂a1 = yn−1
⇒ E[en yn−1] = 0        (Eq. VII.2.4)
E[(yn + a1 yn−1) yn−1] = 0
E[yn yn−1] + a1 E[yn−1 yn−1] = 0
⇒ RYY(1) + a1 RYY(0) = 0        (Eq. VII.2.5)
a1 = −RYY(1) / RYY(0)

As RYY(0) is the maximum value of the autocorrelation, |a1| < 1, so the zero is within the unit circle and hence the filter is minimum phase and is invertible. (Eq. VII.2.4) is known as the orthogonality relation.

yn −→ A(z) −→ en
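The first-order solution can be illustrated on a synthetic AR(1) process (a sketch; the 0.9 pole and the sample size are assumptions, not values from the report):

```python
import numpy as np

# Estimate the optimum first-order coefficient a1 = -R_YY(1)/R_YY(0)
# on y(n) = 0.9 y(n-1) + w(n); the estimate should come out near -0.9,
# with |a1| < 1 as argued above.
rng = np.random.default_rng(0)
w = rng.standard_normal(50000)
y = np.empty_like(w)
y[0] = w[0]
for n in range(1, len(w)):
    y[n] = 0.9*y[n-1] + w[n]

R0 = np.dot(y, y) / len(y)          # sample R_YY(0)
R1 = np.dot(y[1:], y[:-1]) / len(y) # sample R_YY(1)
a1 = -R1 / R0
```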

112
Figure 91: Direct Form Realization

Let us compute the energy left in the error sequence en when the error is minimum:

ξ(a1)min = E[en^2]
         = E[en (yn + a1 yn−1)]
         = E[en yn] + a1 E[en yn−1]
         = E[(yn + a1 yn−1) yn]
ξ(a1)min = RYY(0) + a1 RYY(1)        (Eq. VII.2.6)
         = RYY(0) (1 − a1^2)

From (Eq. VII.2.5) and (Eq. VII.2.6),

[ RYY(0)  RYY(1) ] [ 1  ]   [ ξ ]
[ RYY(1)  RYY(0) ] [ a1 ] = [ 0 ]

i.e. Ra = ξ

The matrix R is symmetric


But the error en obtained in a one-dimensional space need not be the minimal projection error possible; a two-dimensional, or some other higher-dimensional, predictor may result in a much smaller error, i.e. a less correlated residual.
Let m = 2,

ŷn = −(a′1 yn−1 + a′2 yn−2)

en = yn − ŷn

Following a similar procedure as described above,

∂ξ(a′1, a′2)/∂a′1 = 0
∂ξ(a′1, a′2)/∂a′2 = 0

Here R is a 3 × 3 matrix and

[ RYY(0)  RYY(1)  RYY(2) ] [ 1   ]   [ ξ ]
[ RYY(1)  RYY(0)  RYY(1) ] [ a′1 ] = [ 0 ]
[ RYY(2)  RYY(1)  RYY(0) ] [ a′2 ]   [ 0 ]

Hence en also decreases, and it can be further minimized by adding more dimensions. It is required to obtain a completely white error signal so that the error signal need not be transmitted.

en = yn + a′1 yn−1 + a′2 yn−2

A(z) = 1 + a′1 z^-1 + a′2 z^-2

Figure 92: Direct Form Realization

The filter A(z) generates the prediction error en, so A(z) is called the prediction error filter. The drawback of the direct form realization is that whenever the filter order is changed or updated, all coefficients computed earlier also have to be redefined, so it is computationally intensive.
So we require a filter whose coefficient values do not change when the dimension of the filter changes, except that new coefficients are introduced. The gap function method is utilized to make such a filter.

114
Let g(1) = E[en yn−1]; for the first order filter, g(1) = 0.

g(1) = E[( Σ_{m=0}^{1} am yn−m ) yn−1]
     = Σ_{m=0}^{1} am E[yn−m yn−1]
     = Σ_{m=0}^{1} am RYY(1 − m)

Now, for the kth lag,

g(k) = Σ_{m=0}^{k} am RYY(k − m)
     = ak ∗ RYY(k)

The gap function is thus a convolution:

RYY(k) −→ {1, a1, a2, . . .} −→ g(k)

g(k) = E[en yn−k] = 0 for lags inside the gap


For the first order filter we have a gap (zero) at k = 1, and for a second order filter we should have gaps at k = 1, 2. Let us assume that our prediction error filter is of order two, i.e. we have a gap length of two, and that we are extending an already designed first order filter to second order without affecting the existing filter coefficients.

g(k) = ak ∗ R(k)        (Eq. VII.2.7)

g(1) = E[en yn−1] = 0

For the optimal first order predictor, en = yn + a1 yn−1. Let us define a new parameter called the reflection coefficient; for the first order predictor, γ1 = −a1. To extend to the second order filter, we create a gap function of second order:

g′(k) = g(k) + γ2 g(2 − k)        (Eq. VII.2.8)

g′(k) = a′(k) ∗ R(k)        (Eq. VII.2.9)

Already we have g′(1) = 0; we need g′(2) = 0

⇒ 0 = g(2) + γ2 g(0)

∴ γ2 = −g(2) / g(0)

115
Relation between γ and R:

g(0) = E[en yn] = E[en^2] = εmin

and

g(2) = a0 R(2) + a1 R(1)
g(0) = a0 R(0) + a1 R(1)

∴ γ2 = −(a0 R(2) + a1 R(1)) / (a0 R(0) + a1 R(1))
     = −(R(2) − γ1 R(1)) / εmin

similarly

γ3 = −(a0 R(3) + a1 R(2) + a2 R(1)) / (a2 R(2) + a1 R(1) + a0 R(0))

in general

γm+1 = −(am R(1) + am−1 R(2) + . . . + a0 R(m + 1)) / (am R(m) + am−1 R(m − 1) + . . . + a0 R(0))

z-domain relationships: Taking the z-transform of (Eq. VII.2.7) and (Eq. VII.2.8) gives

G′(z) = G(z) + γ2 z^-2 G(z^-1)

and

G(z) = A(z) SYY(z)
G′(z) = A′(z) SYY(z)

From the above,

A′(z) SYY(z) = A(z) SYY(z) + γ2 z^-2 A(z^-1) SYY(z^-1)

⇒ A′(z) = A(z) + γ2 z^-2 A(z^-1)
⇒ A′(z) = A(z) + γ2 z^-1 AR(z)        (Eq. VII.2.10)

where AR(z) represents the z-transform of the reversed sequence of ak,

AR(z) = z^-1 A(z^-1)

Also,

A′(z^-1) = A(z^-1) + γ2 z AR(z^-1)
⇒ A′R(z) = z^-1 AR(z) + γ2 A(z)        (Eq. VII.2.11)

116
Representing (Eq. VII.2.10) and (Eq. VII.2.11) in matrix form,

[ A′(z)  ]   [ 1    γ2 z^-1 ] [ A(z)  ]
[ A′R(z) ] = [ γ2   z^-1    ] [ AR(z) ]

In general,

[ AM+1(z)  ]   [ 1      γM+1 z^-1 ] [ AM(z)  ]
[ ARM+1(z) ] = [ γM+1   z^-1      ] [ ARM(z) ]        (Eq. VII.2.12)

(Eq. VII.2.12) is called the forward Levinson-Durbin recursion and can be easily implemented using lattice structures. The order of the filter can be determined by prediction error analysis: the error decreases when the order of the filter is increased, and the filter order is fixed by fixing a threshold value of the error. Once the threshold is satisfied, the order of the filter need not be increased further.
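A minimal Python sketch of the recursion (using the report's 13-sample test sequence from Section 4) shows the gap property g(k) = 0 for k = 1, ..., order:

```python
import numpy as np

def levinson(r, order):
    """Levinson-Durbin: autocorrelation r(0..order) -> A(z), reflection coeffs."""
    a = np.array([1.0])
    gammas, err = [], r[0]
    for m in range(1, order + 1):
        gm = -np.dot(a, r[m:0:-1]) / err        # gamma_m = -g(m)/g(0)
        a = np.r_[a, 0.0] + gm*np.r_[0.0, a[::-1]]
        err *= (1.0 - gm**2)                    # remaining error energy
        gammas.append(gm)
    return a, gammas, err

x = np.array([1, 2, 3, 4, 5, 3, 2, 3, 1, 5, 6, 6, 3], dtype=float)
r = np.correlate(x, x, 'full')[len(x)-1:]       # R_YY(0), R_YY(1), ...
a, gammas, err = levinson(r, 4)
# gap property: g(k) = sum_m a(m) R(k - m) vanishes for k = 1..4
gaps = [np.dot(a, r[np.abs(k - np.arange(len(a)))]) for k in range(1, 5)]
```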

3 Inverse Filter

Figure 93: Inverse Filter

For transmitting a given message signal, it is passed through a decorrelator or analyser, which computes the reflection coefficients and filter coefficients for optimum error energy. These filter coefficients are transmitted over the channel to the receiver. The receiver uses this set of coefficients for predicting the message signal from a locally generated WGN with the energy of the error signal generated at the transmitter. At the receiver, the filter is called the synthesiser (inverse filter). Linear prediction can be implemented by Residual Excited Linear Prediction or by Code Excited Linear Prediction.
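The analysis/synthesis round trip can be sketched as follows (illustrative minimum-phase coefficients, not values computed in the report); exciting 1/A(z) with the exact residual reconstructs the message exactly, which is the residual-excited case:

```python
import numpy as np

def analyse(a, y):
    # e(n) = sum_k a(k) y(n-k) : FIR prediction-error filter A(z)
    e = np.zeros_like(y)
    for n in range(len(y)):
        e[n] = sum(a[k]*y[n-k] for k in range(len(a)) if n - k >= 0)
    return e

def synthesise(a, e):
    # y(n) = e(n) - sum_{k>=1} a(k) y(n-k) : all-pole inverse filter 1/A(z)
    y = np.zeros_like(e)
    for n in range(len(e)):
        y[n] = e[n] - sum(a[k]*y[n-k] for k in range(1, len(a)) if n - k >= 0)
    return y

a = [1.0, -0.9, 0.2]          # A(z) = 1 - 0.9 z^-1 + 0.2 z^-2 (zeros at 0.4, 0.5)
y = np.sin(0.1*np.arange(200))   # stand-in for the message signal
e = analyse(a, y)                # decorrelated residual
y_hat = synthesise(a, e)         # receiver-side reconstruction
```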

117
4 Observations
Gap Function

Figure 94: Gap Function

Here, each time a gap is added, a new dimension is actually added to the filter. The new component is orthogonal to the rest, so a zero component is obtained.

Linear Prediction

When x = [1 2 3 4 5 3 2 3 1 5 6 6 3] is given as input to the prediction error filter, the reflection coefficients and filter coefficients are found as

118
Figure 95

And the following figures illustrate the decorrelator in action.

Figure 96: Output after each stage of decorrelator

119
Figure 97: Output of decorrelator

Predicted signal

Figure 98: Predicted Signal (input ⇒ red; predicted with error signal ⇒ blue; predicted with WGN ⇒ green)

Speech Signal Processing

When a speech signal is to be transmitted, the signal is passed through the decorrelator, and at the receiver either the error signal or WGN is used to excite the inverse filter. The input sequence is given as
x = [-120 186 -348 -517 555 -5 -434 17 595 -225 -473 189 237 -3
-188 117 -249 12 617 -932 -92 970 -497 -592 873 -175 -314 178 -249
149 481 -884 -166 1087 -497 -991 1034 606 -1205 523 644 -706 205 320
-642 512 294 -892 3 688 -525 -298 412 -9 -331 9 -79 11 80 -505 -200

120
449 -196 -538 144 363 -341 -203 373 106 -98 86 227 54 88 37 -136 337
359 -281 -4 772 317 -34 427 610 296 318 90 -192 -17 -583 -1108 -863
-946 -1505 -1381 -899 -1012 -637 -73 -1 594 1112 867 1369 1585 968
1164 1420 709 403 402 181 105 220 540 882 693 538 503 304 -296 -1051
-1640 -2066 -2706 -3267 -3150 -3013 -2460 -2001 -1129 -5 1023 1795
2596 3491 3482 3114 2508 2256 1581 231 -612 -666 -1222 -1628 -1490
-980 -747 -402 284 827 1530 2143 2147 2117 1361 1078 418 -603 -2202
-3100 -3740 -4572 -5140 -4530 -3996 -3005 -1731 -308 1260 2898 4169
5079 5698 5376 4730 3589 2403 760 -658 -1734 -2531 -3103 -3249 -2651
-1758 -870 -17 659 1860 2621 2790 3551 3473 1997 837 380 -1169 -3620
-4224 -5309 -6060 -5987 -5841 -4288 -3035 -1301 703 2439 4205 5564
6520 6693 5896 4825 3262 1413 -429 -1887 -2974 -3409 -3690 -3565 -2645
-1439 -79 1097 1682 2373 3352 3431 3091 3414 2477 450 -891 -1903 -3887
-5615 -5664 -6152 -6432 -5684 -4636 -2713 -569 1284 3138 4577 5609
6757 7069 6248];
The signal is segmented into eight segments of 32 samples each, and each segment is processed separately.

Figure 99: Input Sequence (speech signal)

121
Figure 100: Speech Segment 1; (a) predicted using error signal, (b) predicted using WGN

Figure 101: Speech Segment 2; (a) predicted using error signal, (b) predicted using WGN

122
Figure 102: Speech Segment 3; (a) predicted using error signal, (b) predicted using WGN

Figure 103: Speech Segment 4; (a) predicted using error signal, (b) predicted using WGN

123
Figure 104: Speech Segment 5; (a) predicted using error signal, (b) predicted using WGN

Figure 105: Speech Segment 6; (a) predicted using error signal, (b) predicted using WGN

124
Figure 106: Speech Segment 7; (a) predicted using error signal, (b) predicted using WGN

[Plots: (a) Predicted using error signal (b) Predicted using WGN]

Figure 107: Speech Segment 8
5 Code
Decorrelator

% decorrelator (Levinson's recursion)

clear
clc
x = [1 2 3 4 5 3 2 3 1 5 6 6 3];
% calculating reflection coefficients
a1 = length(x);
r1 = [];
p1 = xcorr(x, x);
ord = 6;
for i = a1:length(p1)
    r1(i-a1+1) = p1(i);
end
e = [];
a = [];
ar = [];
gam = [];
g = [];
a(1) = 1;
a(2) = -r1(2)/r1(1);
gam(1) = a(2);

ar(1) = a(2);
ar(2) = a(1);
e(1) = p1((length(p1)+1)/2);

for i = 2:ord
    g = conv(a, p1);
    figure(i)
    stem(g);
    gam(i) = -g(a1+i)/g(a1);
    b = a;
    a = [a 0] + gam(i)*[0 ar];
    ar = gam(i)*[b 0] + [0 ar];
    e(i) = e(i-1)*(1 - gam(i-1)^2);
end
i = i + 1;
e(i) = e(i-1)*(1 - gam(i-1)^2);
gam
a
e
fn = x;
gn = x;

% Levinson's recursion
for i = 1:ord
    ft = fn;
    fn = [ft 0] + gam(i)*[0 gn];
    gn = [0 gn] + gam(i)*[ft 0];
end

Speech Processing

% speech processing

clc
clf
clear;
% (the full 256-sample input sequence is listed in the companding code
% later in this report; this line is truncated in the original listing)
x = [-120 186 -348 -517 555 -5 -434 17 595 -225 -473 189 237 -3 -188 117 -249 12
figure(1)
plot(x)
grid on
title('Speech signal')
sgmt = zeros(8, 32);

for i = 1:8
    for j = 1:32
        sgmt(i, j) = x(j + (i-1)*32);
    end
end

% filter coefficients computation

ab = x;

l = length(ab);
ord = 50;

ryy1 = xcorr(ab);
ryy = [];

for i = l:length(ryy1)
    ryy(i-l+1) = ryy1(i);
end

a = [];
arev = [];

a(1) = 1;
a(2) = -ryy(2)/ryy(1);

arev(2) = a(1);
arev(1) = a(2);
gamma = [];
g = [];

gamma(1) = a(2);

for j = 2:ord
    g = conv(ryy1, a);
    gamma(j) = -g(l+j)/g(l);
    b = a;
    a = cat(2, a, zeros(1,1)) + gamma(j)*cat(2, zeros(1,1), arev);
    arev = gamma(j)*cat(2, b, zeros(1,1)) + cat(2, zeros(1,1), arev);
end

gamma

f0 = ab;
g0 = ab;

for j = 1:ord
    for i = 1:l
        if (i == 1)
            f1(i) = f0(i);
            g1(i) = gamma(j)*f0(i);
        else
            f1(i) = f0(i) + gamma(j)*g0(i-1);
            g1(i) = gamma(j)*f0(i) + g0(i-1);
        end
    end
    f0 = f1;
    g0 = g1;
end

% prediction here using the error signal

varin = var(f1);
e = f1;

o = ord;

z = [zeros(o+1, l)];
t = [zeros(o+1, l)];
kr = fliplr(gamma);

for i = 1:l
    z(1, i) = e(i);
end

for i = 1:l
    if i == 1
        for j = 1:o
            z(j+1, i) = z(j, i);
            t(j, i) = kr(j)*z(j+1, i);
        end
        t(o+1, i) = z(o+1, i);
    else
        for j = 1:o
            z(j+1, i) = z(j, i) - kr(j)*t(j+1, i-1);
            t(j, i) = kr(j)*z(j+1, i) + t(j+1, i-1);
        end
        t(o+1, i) = z(o+1, i);
    end
end
for j = 1:l
    pred(j) = z(o+1, j);
end

figure(1)
clf
plot(ab, 'b')
grid on
hold on
plot(pred, 'ro')
grid on

6 Inference
In speech transmission, the Levinson-Durbin algorithm helps in reducing the bandwidth and power required for transmission. Here, the speech is analysed, and the filter coefficients, along with the error variance, are transmitted instead of the entire signal. At the receiver, the inverse filter synthesises the original signal using these filter coefficients.
This algorithm can be used in various applications such as vocal tract modelling and speech synthesis.

Part VIII
Lab Report 8
Date: 18th March, 2010

1 Introduction
Most real-world signals are continuous-amplitude, time-varying signals. So, whenever digital signal processing is needed, analog-to-digital conversion is essential. The signal is first sampled and then passed to a quantizer. The output of the sampler can be visualised as a weighted impulse train whose weights are determined by the signal amplitude at each specific instant. The quantizer then performs amplitude quantization: it maps the sample amplitudes to a finite set of quantization values.

2 Quantisation Technique
The signal amplitude range is divided into l segments, the k-th of which is the interval Ik:

Ik = {x : xk ≤ x ≤ xk+1},   1 ≤ k ≤ l

Each of these segments Ik is then assigned a representative amplitude value yk. Either the amplitude levels are encoded and transmitted, or the index k itself is transmitted. At the receiver, the encoded signal is decoded and mapped back to the corresponding amplitude level. This process inevitably introduces error, since the message signal is approximated by a finite number of quantization levels; this error is called quantization error. Thus, for efficient signal reconstruction, the quantization error should be minimized.

2.1 Uniform Quantization

Uniform quantization uses a uniform step size: the difference between adjacent decision levels is fixed. The step size of the k-th interval is

∆k = xk+1 − xk

and in uniform quantization

∆1 = ∆2 = . . . = ∆l = ∆
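As a concrete illustration of this mapping, here is a minimal uniform-quantizer sketch (in Python rather than the report's MATLAB; the bound `x_max` and level count `L` are as defined above):

```python
import numpy as np

def uniform_quantize(x, x_max, L):
    """Map each sample to the midpoint of one of L equal-width
    intervals covering [-x_max, x_max]."""
    delta = 2.0 * x_max / L                               # fixed step size
    k = np.clip(np.floor((x + x_max) / delta), 0, L - 1)  # interval index
    return -x_max + (k + 0.5) * delta                     # midpoint level y_k

x = np.array([-0.9, -0.2, 0.05, 0.7])
y = uniform_quantize(x, x_max=1.0, L=8)   # delta = 0.25, so error <= 0.125
```

Every sample lands within half a step (∆/2) of its reconstruction level, which is the quantization error bounded in the next subsections.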

2.2 Non Uniform Quantization

Even though uniform quantizers are simple to design and implement, they are not used for speech quantization. Speech signals have a higher probability of low-amplitude components than of large-amplitude components, so a uniform quantizer would introduce considerably more quantization error.
In such cases, non-uniform quantizers are used instead. A non-uniform quantizer uses different step sizes in different amplitude regions, thereby minimizing the error: in the small-amplitude region, the step size is much smaller than in the high-amplitude regions.

2.3 PDF Optimized Non Uniform Quantizer

The assumptions for a PDF-optimized non-uniform quantizer are:

• The number of quantization levels L is large (as a result of a high bit rate), so that the step size ∆k is small for every interval. It can then be assumed that pX(x) is uniform within each interval Ik, for k = 1, 2, . . . , L.

• The input signal is bounded, |x| ≤ xmax. The input pdf typically decays, so the tail probability is low and the overload error is small.

• The pdf is symmetric.
The error variance is

σ²Qe = ∫ q² pQ(q) dq
     = Σ_{k=1}^{L} ∫_{xk}^{xk+1} (x − yk)² pX(x) dx        (Eq. VIII.2.1)

In the interval Ik (xk ≤ x ≤ xk+1), pX(x) is assumed to be uniform: pX(x) = pX(yk), where yk lies between xk and xk+1. The probability of x lying in Ik is

Pk = P(X ∈ Ik) = pX(yk) ∆k        (Eq. VIII.2.2)

Here ∆k is the length of the interval Ik, i.e., ∆k = xk+1 − xk. Substituting (Eq. VIII.2.2) in (Eq. VIII.2.1),

σ²Qe = Σ_{k=1}^{L} (Pk/∆k) ∫_{xk}^{xk+1} (x − yk)² dx        (Eq. VIII.2.3)
To find the optimum reconstruction levels, i.e. those that minimize the error variance, set

∂σ²Qe/∂yk = 0,   k = 1, 2, . . . , L

−2 ∫_{xk}^{xk+1} (x − yk) dx = 0

[x²/2 − yk x]_{xk}^{xk+1} = 0

⇒ yk = (xk + xk+1)/2        (Eq. VIII.2.4)

Hence, the reconstruction level should lie at the middle of Ik for minimum error variance. Since pX(x) is constant over Ik, this means that yk is the centroid of Ik.
Now substitute (Eq. VIII.2.4) in (Eq. VIII.2.3):

σ²Qe(min) = Σ_{k=1}^{L} (Pk/∆k) ∫_{xk}^{xk+1} (x − yk)² dx
          = Σ_{k=1}^{L} (Pk/∆k) [(x − yk)³/3]_{xk}^{xk+1}
          = Σ_{k=1}^{L} (∆k²/12) Pk        (Eq. VIII.2.5)

Here, ∆k²/12 is the error variance contribution of the interval Ik; that is, the error variance depends only on the length of the interval Ik. Thus, ∆k is the parameter to be optimized. For a uniform quantizer, σ²Qe(min) = ∆²/12, where ∆ = ∆k for all k.
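The ∆²/12 result is easy to check numerically: for a uniformly distributed input and midpoint reconstruction levels, the measured error variance should match ∆²/12 closely. A Python sketch (not from the report):

```python
import numpy as np

rng = np.random.default_rng(0)
x_max, L = 1.0, 64
delta = 2.0 * x_max / L                   # uniform step size

x = rng.uniform(-x_max, x_max, 200_000)   # uniform pdf over the full range
k = np.clip(np.floor((x + x_max) / delta), 0, L - 1)
y = -x_max + (k + 0.5) * delta            # midpoint reconstruction levels

err_var = np.mean((x - y) ** 2)           # empirical sigma^2_Qe, ~ delta^2/12
```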

2.3.1 Determination of optimum ∆k

We have

σ²Qe(min) = (1/12) Σ_{k=1}^{L} pX(yk) ∆k³   and   Pk = pX(yk) ∆k

Let us define αk = (pX(yk))^{1/3} ∆k. Then

σ²Qe(min) = (1/12) Σ_{k=1}^{L} αk³

with the constraint Σ_{k=1}^{L} αk = constant. That is,

Σ_{k=1}^{L} αk = Σ_{k=1}^{L} (pX(yk))^{1/3} ∆k = Σ_{k=1}^{L} ∫_{xk}^{xk+1} (pX(x))^{1/3} dx

We need to find the optimum value of αk under the above constraint. The Lagrange multiplier technique is used to incorporate the constraint:

∂/∂αk [σ²Qe + λ Σ_{k=1}^{L} αk] = 0   for k = 1, 2, . . . , L

where λ is the Lagrange multiplier:

∂/∂αk [(1/12) Σ_{k=1}^{L} αk³ + λ Σ_{k=1}^{L} αk] = 0

Summing the resulting L equations,

(1/4) Σ_{k=1}^{L} αk² + λL = 0

⇒ λ = −(1/4L) Σ_{k=1}^{L} αk²

Thus,

σ²Qe + λ Σ_{k=1}^{L} αk = σ²Qe − (1/4L) [Σ_{k=1}^{L} αk²] [Σ_{j=1}^{L} αj]
                        = σ²Qe − (1/4L) Σ_{k=1}^{L} (pX(yk))^{2/3} ∆k² Σ_{j=1}^{L} (pX(yj))^{1/3} ∆j
                        = σ²Qe − (1/4L) Σ_{k=1}^{L} pX(yk) ∆k³

as pX(yk) pX(yj) = 0 for j ≠ k. Now

Σ_{k=1}^{L} αk = Σ_{k=1}^{L} {pX(yk)}^{1/3} ∆k
              ≃ ∫_{−xmax}^{xmax} {pX(x)}^{1/3} dx
              ≃ constant

⇒ pX(yk) ∆k³ = constant
⇒ Pk ∆k² = constant
From the above result, we can infer that an interval with a higher probability of occurrence must have a smaller step size. This agrees with our earlier observation that smaller amplitudes require smaller step sizes. We have

σ²Qe = (1/12) Σ_{k=1}^{L} Pk ∆k²   and   Pk ∆k² = constant

Hence, all intervals produce identical contributions to the quantization error variance. We would therefore need a non-uniform quantizer, which is practically difficult to implement. Instead, we use the companding technique, which in general transforms an arbitrary pdf into a uniform pdf. The uniform pdf can then be quantized easily with a uniform quantizer, and the expander recovers the original pdf from the uniform pdf.
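The compress → uniform-quantize → expand pipeline can be sketched end-to-end. The cube-root compressor below is only an illustrative invertible characteristic (not one of the standard laws discussed next), chosen to show that the effective step size becomes finer near zero:

```python
import numpy as np

def compress(x):        # illustrative compressor (stand-in for a companding law)
    return np.sign(x) * np.abs(x) ** (1.0 / 3.0)

def expand(y):          # its exact inverse, applied at the receiver
    return np.sign(y) * np.abs(y) ** 3

def quantize(y, L=32):  # uniform quantizer acting on the compressed domain
    delta = 2.0 / L
    k = np.clip(np.floor((y + 1.0) / delta), 0, L - 1)
    return -1.0 + (k + 0.5) * delta

x = np.linspace(-1.0, 1.0, 2001)
x_hat = expand(quantize(compress(x)))     # compand, quantize, expand

err = np.abs(x - x_hat)
err_small = err[np.abs(x) < 0.1].max()    # fine effective steps near zero
err_large = err[np.abs(x) > 0.9].max()    # coarse effective steps near the bound
```

Even though the quantizer itself is uniform, the reconstruction error near zero is much smaller than near full scale, which is exactly the non-uniform behaviour derived above.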

3 Logarithmic Companding
The companding technique to be used depends on the signal distribution. Speech signals are exponentially distributed, so logarithmic compression leads to a uniform distribution. To maintain the transmission quality of both loud and soft sounds, voice signals require a relatively constant signal-to-quantization-noise ratio over a wide range of amplitude levels; this calls for logarithmic compression.
We have

σ²Qe = (x²max / 3L²) ∫_{−xmax}^{xmax} pX(x) [dC(x)/dx]^{−2} dx

⇒ SNR = σ²X / σ²Qe
      = 3L² ∫_{−xmax}^{xmax} x² pX(x) dx / { x²max ∫_{−∞}^{∞} pX(x) [dC(x)/dx]^{−2} dx }

For a constant SNR, dC(x)/dx should be inversely proportional to x:

dC(x)/dx = (kx)^{−1}

where k is a constant. Thus,

SNR = 3L² / (k² x²max)

⇒ C(x) = (1/k) ln x + d ;   x > 0

where d is the constant of integration. Thus, logarithmic companding results in a constant SNR. Using the condition C(xmax) = xmax,

C(x) = [(1/k) ln(|x|/xmax) + xmax] sgn(x)

But C(x) → −∞ as x → 0, which is not practically implementable. We therefore require C(x) → 0 as x → 0. Two companding laws which incorporate this correction are:

1. A-law companding

2. µ-law companding

3.1 A-law Companding

The companding characteristic is linear near the origin, i.e. for |x| ≤ xmax/A, and logarithmic for large |x|. The logarithmic part is expressed as

C(x) = k′ (1 + ln(A|x|/xmax))

and using the condition C(xmax) = xmax,

k′ = xmax / (1 + ln A)

The A-law companding characteristic is therefore

C(x) = (A|x| / (1 + ln A)) sgn(x) ;   0 ≤ |x|/xmax ≤ 1/A
C(x) = xmax ((1 + ln(A|x|/xmax)) / (1 + ln A)) sgn(x) ;   1/A ≤ |x|/xmax ≤ 1

Note: The parameter A controls the degree of compression and may be chosen so that large amplitude changes in the input are not reflected at the output. The ratio of the maximum to minimum slope is A. The practical value of A is 87.6.
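The two-branch characteristic translates directly into code. A Python sketch (the report's own code is MATLAB) with A = 87.6 and `x_max` normalised to 1:

```python
import numpy as np

def a_law_compress(x, x_max, A=87.6):
    """A-law compressor: linear for |x| <= x_max/A, logarithmic above."""
    ax = np.abs(x) / x_max
    y = np.where(ax <= 1.0 / A,
                 A * ax,                                  # linear branch
                 1.0 + np.log(np.maximum(A * ax, 1.0)))   # logarithmic branch
    return x_max * np.sign(x) * y / (1.0 + np.log(A))

# C(0) = 0 and C(x_max) = x_max, as required
ends = a_law_compress(np.array([0.0, 1.0]), 1.0)
```

The `np.maximum(..., 1.0)` guard only avoids evaluating log(0) on samples that take the linear branch; it does not change any output value.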

3.2 µ-law Companding

The µ-law characteristic is neither strictly logarithmic nor strictly linear anywhere. It is approximately linear at low levels, i.e. µ|x| << xmax or |x| << µ^{−1} xmax, and approximately logarithmic at high levels, i.e. |x| >> µ^{−1} xmax.

C(x) = k′ loge(1 + µ|x|/xmax)

Applying the condition C(xmax) = xmax,

k′ = xmax / loge(1 + µ)

Hence, the µ-law characteristic is given by

C(x) = (xmax loge(1 + µ|x|/xmax) / loge(1 + µ)) sgn(x)

with the limiting approximations

C(x) ≈ (µ|x| / loge(1 + µ)) sgn(x) ;   µ|x| << xmax
C(x) ≈ (xmax loge(µ|x|/xmax) / loge(1 + µ)) sgn(x) ;   µ|x| >> xmax
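The µ-law characteristic and its inverse (the expander) follow directly from the formula. A Python sketch with µ = 255, the value used in the report's code:

```python
import numpy as np

def mu_law_compress(x, x_max, mu=255.0):
    """C(x) = x_max * sgn(x) * ln(1 + mu|x|/x_max) / ln(1 + mu)."""
    return x_max * np.sign(x) * np.log1p(mu * np.abs(x) / x_max) / np.log1p(mu)

def mu_law_expand(y, x_max, mu=255.0):
    # inverse: |x| = (x_max/mu) * ((1 + mu)^(|y|/x_max) - 1)
    return (x_max / mu) * np.sign(y) * np.expm1(np.abs(y) / x_max * np.log1p(mu))

x = np.linspace(-1.0, 1.0, 401)
y = mu_law_compress(x, 1.0)
x_back = mu_law_expand(y, 1.0)       # the expander undoes the compressor
```

`log1p`/`expm1` keep the formulas numerically stable for small |x|, where the characteristic is nearly linear.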

4 Digital Companding:
Segmentation of Companding Characteristics
The logarithmic form of the companding laws is slow to compute. In order to implement companding in a digital transmission system, we need a digital form of the companding laws. Digital companding involves sampling an analog signal, converting it to a PCM code, and digitally compressing that code. At the receiver end, the compressed PCM code is received, expanded, and decoded. Digital companding is obtained by approximating the companding characteristic with linear segments; this process is called segmentation.

4.1 Segmentation of A-law Companding Characteristics

In A-law companding, we compress a 13-bit code to an 8-bit code by approximating the curve with 8 straight-line segments. The slope of each successive segment is half that of the previous segment, except segments 0 and 1, which have the same slope:

Segment    0   1   2   3    4    5    6     7
Step size  32  32  64  128  256  512  1024  2048

Algorithm for A-law companding

Encoding:
1. Determine the bit position of the most significant 1-bit among bits 5 through 11 of the input.

2. If such a 1-bit is found, the segment code becomes that position minus 4. Otherwise, the segment code is 0.

3. The 4 bits following this 1-bit position determine the quantization code. If the segment code is 0, the quantization code is half the input value.

4. The sign bit is the same as the input sign bit.

Decoding:
1. If the segment code is non-zero, multiply the quantization code by 2 and add 33. Multiply the result by 2 raised to the power of the segment code minus 1.

2. If the segment code is zero, multiply the quantization code by 2 and add 1.

3. Append the sign bit.
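The encoding and decoding steps above translate almost line-for-line into code. This Python sketch operates on the 12-bit magnitude (bit positions counted from 0) and leaves the sign bit to be handled separately:

```python
def alaw_encode(m):
    """Encode a 12-bit magnitude (0..4095) into (segment, quantization) codes."""
    seg = 0
    for pos in range(11, 4, -1):       # most significant 1-bit among bits 5..11
        if m & (1 << pos):
            seg = pos - 4              # segment code = position - 4
            break
    if seg == 0:
        q = m >> 1                     # quantization code = half the input
    else:
        q = (m >> seg) & 0xF           # the 4 bits following the leading 1
    return seg, q

def alaw_decode(seg, q):
    if seg == 0:
        return 2 * q + 1
    return (2 * q + 33) << (seg - 1)   # (2q + 33) * 2^(seg - 1)
```

For every 12-bit magnitude, decoding the (segment, quantization) pair lands within half a step of the input, which is the best a midpoint decoder can do.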

4.2 Segmentation of µ-law Companding Characteristics

In µ-law encoding, we compress a 14-bit code to an 8-bit code. The curve is segmented and approximated by straight lines, with the slope of each successive segment half that of the previous segment, so that smaller amplitudes get smaller step sizes. If the first segment has a step size of 32, we have

Segment    0   1   2    3    4    5     6     7
Step size  32  64  128  256  512  1024  2048  4096

The output levels are equispaced; hence, the output step size is 128/8 = 16. This leads to the segmentation of the curve shown in the figure. The same procedure can be applied to the negative half of the characteristic. In order to make the segment end points multiples of 2, we add a bias of 33; the motivation behind this step is ease of computation. Thus, the maximum allowable input amplitude reduces to 2¹³ − 33 = 8159.
Each segment is further sub-divided into 16 equally spaced quantization intervals. In this way, we form an 8-bit compressed code consisting of a sign bit, a 3-bit segment identifier and a 4-bit quantization interval identifier. The 4-bit quantization code identifies the quantization interval within the segment. Since each segment change implies an output change of 16, the format of the 8-bit code is

P S2 S1 S0 Q3 Q2 Q1 Q0

where P is the sign bit, S2-S0 is the segment code, and Q3-Q0 is the quantization code.

Consider any one segment, say segment 1 (64-128), with its 16 quantization levels. Any 14-bit number falling in this segment can be represented as 00 0000 01ab cdef. Each quantization interval has an input step size of 4 and an output step size of 1; equivalently, a change of 4 in the input elicits a change of 1 in the output. Since 4 = 2², changes in the lower 2 bits (bits 1 and 0) are ignored by the compressor. Thus, the compressed version of 00 0000 01ab cdef is 0 001 abcd (after ignoring bits 1 and 0). As the segment number increases, the number of lower bits ignored increases.

Algorithm for µ-law companding:

Encoding:
1. Add 33 to the absolute value of the input.

2. Determine the bit position of the most significant 1-bit among bits 5 through 12 of the result.

3. Subtract 5 from that position. The resulting number gives the segment code.

4. The 4 bits following the bit position determined in step 2 form the 4-bit quantization code.

5. The sign bit is the same as the sign bit of the input.

Decoding:
1. Shift the quantization code left by 1 position and add 33.

2. Shift left by the amount of the segment code.

3. Decrement the result by 33.

4. Append the sign bit.
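The µ-law steps can be sketched the same way; the bias of 33 aligns every segment boundary with a power of two, so the same leading-1 search applies (Python, with the sign bit again handled separately):

```python
def mulaw_encode(m):
    """Encode a magnitude 0..8158 into (segment, quantization) codes."""
    v = m + 33                          # step 1: add the bias
    seg = 0
    for pos in range(12, 4, -1):        # most significant 1-bit among bits 5..12
        if v & (1 << pos):
            seg = pos - 5               # step 3: segment code = position - 5
            break
    q = (v >> (seg + 1)) & 0xF          # step 4: the 4 bits after the leading 1
    return seg, q

def mulaw_decode(seg, q):
    # ((2q + 33) << seg) - 33, exactly as in the decoding steps above
    return (((q << 1) + 33) << seg) - 33
```

Note that 0 encodes to segment 0, quantization 0, and decodes back to 0 exactly; all other magnitudes decode to within half a segment step.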

5 Observations
A-law Companding
PDF Transformation

[Plots: (a) Input Histogram (b) Companded Histogram — frequency vs. amplitude]

Figure 108: A-law Companding
Companding

[Plot: companded output (A-law), amplitude vs. t]

Figure 109: Companded Speech Signal

µ-law Companding
PDF Transformation

[Plots: (a) Input Histogram (b) Companded Histogram — frequency vs. amplitude]

Figure 110: µ-law Companding
Companding

[Plot: companded output (µ-law), amplitude vs. t]

Figure 111: Companded Speech Signal

In both companded outputs, we observe that the low-amplitude components are amplified more than the higher-amplitude components, so the companded signal has an almost uniform PDF, as required.

Reconstruction
Here, a mixed sinusoid is taken as input to the compander (black) and companded by a µ-law compander (red). The companded output is then used to reconstruct the original input; this expanded signal is shown in blue.
[Plot: companding and reconstruction, amplitude (−1 to 1) vs. t]

Figure 112: Compression and Expansion

6 Code
In MATLAB, it is possible to create a uniform PDF directly, but here we have a speech sequence which is exponentially distributed, so we need to transform it to a uniform PDF.

A-law Companding
%Alaw s p e e c h companding
clc
A=87.6
x=[ − 120 186 −348 −517 555 −5 −434 17 595 −225 −473
189 237 −3 −188 117 −249 12 617 −932 −92 970 −497
−592 873 −175 −314 178 −249 149 481 −884 −166 1087
−497 −991 1034 606 − 1205 523 644 −706 205 320 −642
512 294 −892 3 688 −525 −298 412 −9 −331 9 −79 11 80
−505 −200 449 −196 −538 144 363 −341 −203 373 106 −98
86 227 54 88 37 −136 337 359 −281 −4 772 317 −34 427
610 296 318 90 −192 −17 −583 − 1108 −863 −946 − 1505
− 1381 −899 − 1012 −637 −73 −1 594 1112 867 1369 1585
968 1164 1420 709 403 402 181 105 220 540 882 693 538

143
503 304 −296 − 1051 − 1640 − 2066 − 2706 − 3267 − 3150
− 3013 − 2460 − 2001 − 1129 −5 1023 1795 2596 3491 3482
3114 2508 2256 1581 231 −612 −666 − 1222 − 1628 − 1490
−980 −747 −402 284 827 1530 2143 2147 2117 1361 1078
418 −603 − 2202 − 3100 − 3740 − 4572 − 5140 − 4530 − 3996
− 3005 − 1731 −308 1260 2898 4169 5079 5698 5376 4730
3589 2403 760 −658 − 1734 − 2531 − 3103 − 3249 − 2651 − 1758
−870 −17 659 1860 2621 2790 3551 3473 1997 837 380 − 1169
− 3620 − 4224 − 5309 − 6060 − 5987 − 5841 − 4288 − 3035 − 1301
703 2439 4205 5564 6520 6693 5896 4825 3262 1413 −429
− 1887 − 2974 − 3409 − 3690 − 3565 − 2645 − 1439 −79 1097 1682
2373 3352 3431 3091 3414 2477 450 −891 − 1903 − 3887 − 5615
− 5664 − 6152 − 6432 − 5684 − 4636 − 2713 −569 1284 3138 4577
5609 6757 7069 6 2 4 8 ] ;
ymax=max( x ) ;

figure (4)
y=x ;
n=l e n g t h ( y ) ;
i =1:n ;
plot ( i , y ) ;
g r i d on ;
h o l d on ;
C=0;
f o r l =1: l e n g t h ( y )
i f abs ( y ( l )) <(ymax/A)
C( l )=A. * abs ( y ( l ) ) . / ( 1 + l o g (A) ) . * s i g n ( y ( l ) ) ;
else
C( l )=ymax . * ( ( 1 + l o g ( (A. * y ( l ) ) / ymax))/(1+ l o g (A ) ) ) . * s i g n ( y ( l ) ) ;
end
end

n=l e n g t h (C ) ;
i =1:n ;
p l o t ( i , C, ' r ' ) ;
g r i d on ;
t i t l e ( ' Companded output −Alaw ' )
xlabel ( ' t \ rightarrow ' )
y l a b e l ( ' Amplitude \ r i g h t a r r o w ' )

144
µ-law Companding
% ulaw speech companding
clc
clf
u = 255;
y = [-120 186 -348 -517 555 -5 -434 17 595 -225 -473 ...
189 237 -3 -188 117 -249 12 617 -932 -92 970 -497 ...
-592 873 -175 -314 178 -249 149 481 -884 -166 1087 ...
-497 -991 1034 606 -1205 523 644 -706 205 320 -642 ...
512 294 -892 3 688 -525 -298 412 -9 -331 9 -79 11 ...
80 -505 -200 449 -196 -538 144 363 -341 -203 373 106 ...
-98 86 227 54 88 37 -136 337 359 -281 -4 772 317 ...
-34 427 610 296 318 90 -192 -17 -583 -1108 -863 -946 ...
-1505 -1381 -899 -1012 -637 -73 -1 594 1112 867 1369 ...
1585 968 1164 1420 709 403 402 181 105 220 540 882 ...
693 538 503 304 -296 -1051 -1640 -2066 -2706 -3267 ...
-3150 -3013 -2460 -2001 -1129 -5 1023 1795 2596 3491 ...
3482 3114 2508 2256 1581 231 -612 -666 -1222 -1628 ...
-1490 -980 -747 -402 284 827 1530 2143 2147 2117 1361 ...
1078 418 -603 -2202 -3100 -3740 -4572 -5140 -4530 ...
-3996 -3005 -1731 -308 1260 2898 4169 5079 5698 5376 ...
4730 3589 2403 760 -658 -1734 -2531 -3103 -3249 -2651 ...
-1758 -870 -17 659 1860 2621 2790 3551 3473 1997 837 ...
380 -1169 -3620 -4224 -5309 -6060 -5987 -5841 -4288 ...
-3035 -1301 703 2439 4205 5564 6520 6693 5896 4825 ...
3262 1413 -429 -1887 -2974 -3409 -3690 -3565 -2645 ...
-1439 -79 1097 1682 2373 3352 3431 3091 3414 2477 450 ...
-891 -1903 -3887 -5615 -5664 -6152 -6432 -5684 -4636 ...
-2713 -569 1284 3138 4577 5609 6757 7069 6248];
ymax = max(y);

figure(1)
n = length(y)
i = 1:n
plot(i, y, 'b')
hold on
C = ymax*((log(1 + (u.*(abs(y)./ymax))))./log(1 + u)).*sign(y);

n = length(C)
i = 1:n
plot(i, C, 'r')
grid on
title('Companded output-ulaw')
xlabel('t \rightarrow')
ylabel('Amplitude \rightarrow')
Reconstruction
t = 0:0.01:4;
x = 8158*(sin(2*pi*2.*t) + sin(2*pi*3*t))./2;
x = x + 33;
len = length(x);
x1 = x./max(x);
plot(x1, 'black')
y = zeros(len, 14);

for i = 1:len
    if x(i) >= 0
        x(i) = ceil(x(i));
        s = 0
        y(i,:) = [de2bi(x(i), 13) s];
    elseif x(i) < 0
        s = 1;
        x(i) = floor(x(i));
        y(i,:) = [de2bi(-x(i), 13) s];
    end
end
d = y(:, 14);
trial1 = 0;
a = zeros(len, 4);
b = zeros(len, 3);
z = zeros(len, 8);

for j = 1:len
    trial1 = 0;

    for i = 0:13
        if y(j, 13-i) == 1
            trial1 = 13 - i - 1
            break;
        end
    end

    trial2 = trial1 - 5;

    if trial2 == 8
        break;
    end

    b(j,:) = de2bi(trial2, 3);

    a(j,:) = y(j, 13-i-3:13-i);
end
z = [a b d];

final = zeros(1, len);
trial3 = zeros(1, 7)

for k = 1:len
    if z(k, 8) == 1
        trial3 = z(k, 1:7);
        final(k) = -1*bi2de(trial3);
    elseif z(k, 8) == 0
        trial3 = z(k, 1:7);
        final(k) = bi2de(trial3);
    end
end

hold on
plot(final/max(final), 'r')
hold off
sign = z(:, 8);
segment = z(:, 5:7);
q = z(:, 1:4);
q = [zeros(len, 1) q];
p = bi2de(q)
p = p + 33;
segment = bi2de(segment);
p = p.*2.^segment
p = p - 33;
new = de2bi(p, 13);
new = [new sign];
trial4 = zeros(1, len)
trial5 = zeros(1, 14);

for i = 1:len
    if new(i, 14) == 0
        trial5 = new(i, 1:13);
        trial4(i) = bi2de(trial5);
    elseif new(i, 14) == 1
        trial5 = new(i, 1:13);
        trial4(i) = -1.*bi2de(trial5);
    end
end
op = trial4/max(trial4);
hold on
plot(op, 'b')
title('Companding And Reconstruction')
xlabel('t \rightarrow')
ylabel('Amplitude \rightarrow')
hold off

