
Agni College of Technology, Chennai – 130
Prepared by the Department of Mathematics
B.E./B.Tech. DEGREE EXAMINATIONS, NOV/DEC 2011
Fourth Semester
Common to ECE/BIOMEDICAL
MA6453 – PROBABILITY AND RANDOM PROCESSES

(Regulations 2008)
Time: Three hours Maximum: 100 marks
Answer ALL questions

Part A – (10 x 2 = 20 marks)

1. The CDF of a continuous random variable X is given by
   F(x) = 0,                x < 0
        = 1 − e^{−x/5},     0 ≤ x < ∞.
Find the PDF and the mean of X.

Solution: Given the CDF of the continuous random variable X is
   F(x) = 1 − e^{−x/5}, x ≥ 0;   F(x) = 0, x < 0.

The PDF of X is f(x) = F′(x)
   = (1/5) e^{−x/5},  x ≥ 0
   = 0,               otherwise.

Mean of X = E(X) = ∫_{−∞}^{∞} x f(x) dx = ∫_{0}^{∞} (x/5) e^{−x/5} dx
   = (1/5) [ x · e^{−x/5}/(−1/5) − 1 · e^{−x/5}/(1/25) ]_{0}^{∞}
   = −[ e^{−x/5} (x + 5) ]_{0}^{∞}
   = −[ 0 − 5 ]
   = 5
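As a quick numerical cross-check (not part of the original solution), the short Python sketch below estimates E(X) for this distribution both by simulation and by direct numerical integration; the sample size and integration grid are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
# F(x) = 1 - e^(-x/5) is the exponential distribution with mean 5.
samples = rng.exponential(scale=5.0, size=1_000_000)
print(samples.mean())                          # ≈ 5

# Direct numerical check of E(X) = ∫ x f(x) dx with f(x) = (1/5) e^(-x/5).
x = np.linspace(0, 200, 200_001)
print(np.trapz(x * np.exp(-x / 5) / 5, x))     # ≈ 5
```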

2. If X is a normal random variable with mean zero and variance σ², find the pdf of Y = e^X.

Solution: Out of syllabus.

3. If the joint pdf of (X, Y) is
   f(x, y) = e^{−(x+y)},  x > 0, y > 0
           = 0,            otherwise,
check whether X and Y are independent.

Solution: Given
   f(x, y) = e^{−(x+y)},  x > 0, y > 0
           = 0,            otherwise.


The marginal density function of X is
   f(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_{0}^{∞} e^{−(x+y)} dy
        = e^{−x} ∫_{0}^{∞} e^{−y} dy = e^{−x} [−e^{−y}]_{0}^{∞} = e^{−x} [−e^{−∞} + e^{0}] = e^{−x} [0 + 1] = e^{−x},  x > 0.

The marginal density function of Y is
   f(y) = ∫_{−∞}^{∞} f(x, y) dx = ∫_{0}^{∞} e^{−(x+y)} dx
        = e^{−y} ∫_{0}^{∞} e^{−x} dx = e^{−y} [−e^{−x}]_{0}^{∞} = e^{−y} [0 + 1] = e^{−y},  y > 0.

Now f(x) · f(y) = e^{−x} · e^{−y} = e^{−(x+y)} = f(x, y).

Hence X and Y are independent.

4. The regression equations are 3x + 2y = 26 and 6x + y = 31. Find the correlation coefficient between X and Y.

Solution:

Given 3x + 2y = 26   (1)
      6x + y = 31    (2)

Treating (1) as the regression line of Y on X:
   (1) ⇒ y = 13 − (3/2)x, so the regression coefficient of Y on X is b_yx = −3/2.   (3)

Treating (2) as the regression line of X on Y:
   (2) ⇒ x = 31/6 − (1/6)y, so the regression coefficient of X on Y is b_xy = −1/6.   (4)

From (3) & (4): b_xy · b_yx = (−1/6)(−3/2) = 1/4 < 1, so the assumption is valid.

   r² = b_xy · b_yx = 1/4 ⇒ r = −√(1/4) = −1/2
(the negative sign is taken because both regression coefficients are negative).

5. When is a random process said to be mean ergodic?

Solution: Let {X(t)} be a random process with constant mean µ and let X̄_T = (1/2T) ∫_{−T}^{T} X(t) dt be its time average. Then {X(t)} is mean-ergodic if X̄_T → µ as T → ∞, i.e. if lim_{T→∞} Var(X̄_T) = 0.

6. If {X(t)} is a normal process with µ(t) = 10 and C(t₁, t₂) = 16 e^{−|t₁−t₂|}, find the variance of X(10) − X(6).

Solution: Out of syllabus.

7. Given that the autocorrelation function for a stationary ergodic process with no periodic components is R_XX(τ) = 25 + 4/(1 + 6τ²), find the mean and the variance of the process {X(t)}.

Solution: By the property of the autocorrelation function,
   µ_X² = lim_{τ→∞} R_XX(τ) = 25,  ∴ µ_X = 5.
   E{X²(t)} = R_XX(0) = 25 + 4 = 29.
   Var{X(t)} = E{X²(t)} − (E{X(t)})² = 29 − 25 = 4.

8. Prove that for a WSS process {X(t)}, R_XX(t, t + τ) is an even function of τ.

Solution: To prove that R_XX(τ) = R_XX(−τ).

   R_XX(τ) = E[X(t) X(t + τ)]          (1)
   R_XX(−τ) = E[X(t) X(t − τ)]

Put t − τ = t₁ ⇒ t = t₁ + τ.
   ∴ R_XX(−τ) = E[X(t₁ + τ) X(t₁)] = E[X(t₁) X(t₁ + τ)] = R_XX(τ).

Hence R_XX(τ) = R_XX(−τ).

9. State any two properties of a linear time-invariant system.

Solution:

Property 1: If the input to a time-invariant, stable linear system is a WSS process, then the output is also a WSS process; i.e. if {X(t)} is a WSS process then the output {Y(t)} is a WSS process.

Property 2: If the input X(t) and the output Y(t) are related by Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then the system is a linear time-invariant system.



10. If {X(t)} and {Y(t)} in the system Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du are WSS processes, how are their autocorrelation functions related?

Solution: R_YY(τ) = R_XY(τ) * h(−τ), where R_XY(τ) = R_XX(τ) * h(τ);
equivalently, R_YY(τ) = R_XX(τ) * h(τ) * h(−τ).

Part B – (5 x 16 = 80 marks)

11(a)(i) The probability function of an infinite discrete distribution is given by
   P(X = x) = 1/2^x,  x = 1, 2, …, ∞.
Find (1) the mean of X, (2) P(X is even), (3) P(X is divisible by 3).

Solution:

   M_X(t) = Σ_{x=1}^{∞} e^{tx} P(x) = Σ_{x=1}^{∞} e^{tx} (1/2^x) = Σ_{x=1}^{∞} (e^t/2)^x
          = (e^t/2) + (e^t/2)² + …
          = (e^t/2) [1 + (e^t/2) + (e^t/2)² + …]
          = (e^t/2) (1 − e^t/2)^{−1} = e^t/(2 − e^t) = e^t (2 − e^t)^{−1}          (1)

   M_X′(t) = e^t (2 − e^t)^{−1} + e^t (−1)(2 − e^t)^{−2} (−e^t)
           = e^t (2 − e^t)^{−1} + e^{2t} (2 − e^t)^{−2}                             (2)

(1) Mean = µ₁′ = M_X′(0) = 1 + 1 = 2.

(2) P(X is even) = P(X = 2) + P(X = 4) + …
      = (1/2)² + (1/2)⁴ + … = (1/4)/(1 − 1/4) = 1/3.

(3) P(X is divisible by 3) = P(X = 3) + P(X = 6) + … ∞
      = (1/2)³ + (1/2)⁶ + … = (1/8)/(1 − 1/8) = (1/8)(8/7) = 1/7.

11(a)(ii) A continuous R.V. X has the pdf
   f(x) = k/(1 + x²),  −∞ < x < ∞
        = 0,            otherwise.
Find (1) the value of k, (2) the distribution function of X, (3) P(X > 0).

Solution: Since f(x) is a p.d.f., we have
   ∫_{−∞}^{∞} f(x) dx = 1 ⇒ k ∫_{−∞}^{∞} dx/(1 + x²) = 1 ⇒ k [tan⁻¹x]_{−∞}^{∞} = 1
   ⇒ k [tan⁻¹(∞) − tan⁻¹(−∞)] = 1 ⇒ k (π/2 + π/2) = 1 ⇒ kπ = 1 ⇒ k = 1/π.

   ∴ f(x) = (1/π) · 1/(1 + x²),  −∞ < x < ∞
          = 0,                    otherwise.

To find F(x):
   F(x) = ∫_{−∞}^{x} f(x) dx = (1/π) ∫_{−∞}^{x} dx/(1 + x²) = (1/π) [tan⁻¹x]_{−∞}^{x}
        = (1/π) [tan⁻¹x − tan⁻¹(−∞)]
        = (1/π) [tan⁻¹x + π/2]          (A)

To find P(X > 0):
   P(X > 0) = 1 − F(0) = 1 − (1/π)[0 + π/2] = 1 − 1/2 = 1/2.   [In (A) put x = 0]
11(b)(i) Let X and Y be independent normal variates with means 45 and 44 and standard deviations 2 and 1.5 respectively. What is the probability that randomly chosen values of X and Y differ by 1.5 or more?

Solution:
X is N(45, 2) and Y is N(44, 1.5).
By the additive property of the normal distribution, U = X − Y follows the distribution
   N(1, √(4 + 2.25)), i.e. N(1, 2.5).

P[X and Y differ by 1.5 or more]
   = P[|X − Y| ≥ 1.5] = P[|U| ≥ 1.5]
   = 1 − P[|U| ≤ 1.5] = 1 − P[−1.5 ≤ U ≤ 1.5]
   = 1 − P[(−1.5 − 1)/2.5 ≤ (U − 1)/2.5 ≤ (1.5 − 1)/2.5]
   = 1 − P[−1 ≤ Z ≤ 0.2] = 1 − [P(0 ≤ Z ≤ 1) + P(0 ≤ Z ≤ 0.2)]
   = 1 − [0.3413 + 0.0793] = 0.5794.

11(b)(ii) If X is a uniform random variable in the interval (−2, 2), find the probability density function of Y = |X| and E[Y].

Solution: Since X is uniformly distributed in (−2, 2), the pdf of X is
   f_X(x) = 1/(2 − (−2)) = 1/4,  −2 < x < 2
          = 0,                    otherwise.   (1)

Y = |X|, so for y > 0 the equation y = |x| has the two roots
   x₁ = y   and   x₂ = −y,   with |dx₁/dy| = |dx₂/dy| = 1.

The pdf of Y is
   f_Y(y) = f_X(x₁) |dx₁/dy| + f_X(x₂) |dx₂/dy| = 1·(1/4) + 1·(1/4) = 2/4 = 1/2,  0 < y < 2.

Thus Y is uniform on (0, 2), so
   E[Y] = (a + b)/2 = (0 + 2)/2 = 1.
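A rough simulation check of the derived density and mean, assuming only that X is uniform on (−2, 2); the sample size and bin count below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=1_000_000)
y = np.abs(x)

print(y.mean())                                        # ≈ 1 = E[Y]
hist, _ = np.histogram(y, bins=20, range=(0, 2), density=True)
print(hist.round(2))                                   # heights ≈ 0.5 = f_Y(y)
```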

12(a)(i) The joint p.d.f. of the two-dimensional random variable (X, Y) is given by
   f(x, y) = 8xy/9,  1 ≤ x ≤ y ≤ 2
           = 0,       otherwise.
Find
(i) the marginal densities of X and Y,
(ii) the conditional density functions f(x/y) and f(y/x).

Solution:
(i) The marginal density functions of X and Y are given by

   f_X(x) = ∫_x^2 f(x, y) dy = ∫_x^2 (8xy/9) dy = (8/9) [x y²/2]_x^2
          = (8x/9) [(4 − x²)/2] = (4x/9)(4 − x²),  1 ≤ x ≤ 2.

   f_Y(y) = ∫_1^y f(x, y) dx = ∫_1^y (8xy/9) dx = (8/9) [y x²/2]_1^y
          = (8y/9) [(y² − 1)/2] = (4y/9)(y² − 1),  1 ≤ y ≤ 2.

(ii) The conditional density functions are given by

   f(x/y) = f(x, y)/f_Y(y) = (8xy/9) / [(4y/9)(y² − 1)] = 2x/(y² − 1),  1 ≤ x ≤ y.

   f(y/x) = f(x, y)/f_X(x) = (8xy/9) / [(4x/9)(4 − x²)] = 2y/(4 − x²),  x ≤ y ≤ 2.

12(a)(ii) The joint probability density function of the two-dimensional random variable (X, Y) is
   f(x, y) = 2 − x − y,  0 ≤ x ≤ 1, 0 ≤ y ≤ 1
           = 0,           otherwise.
Find the correlation coefficient between X and Y.

Solution:

The marginal density function of X is
   f(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_0^1 (2 − x − y) dy = [2y − xy − y²/2]_0^1
        = 2 − x − 1/2 = 3/2 − x.                                      (1)

The marginal density function of Y is
   f(y) = ∫_{−∞}^{∞} f(x, y) dx = ∫_0^1 (2 − x − y) dx = [2x − x²/2 − xy]_0^1
        = 2 − 1/2 − y = 3/2 − y.                                      (2)

   E(X) = ∫_0^1 x f(x) dx = ∫_0^1 x(3/2 − x) dx = ∫_0^1 (3x/2 − x²) dx
        = [(3/2)(x²/2) − x³/3]_0^1 = 3/4 − 1/3 = (9 − 4)/12 = 5/12.   (3)

Similarly E(Y) = 5/12.                                                (4)

   E(X²) = ∫_0^1 x² f(x) dx = ∫_0^1 x²(3/2 − x) dx = ∫_0^1 (3x²/2 − x³) dx
         = [(3/2)(x³/3) − x⁴/4]_0^1 = 1/2 − 1/4 = 1/4.                (5)

Similarly E(Y²) = 1/4.                                                (6)

   Var(X) = E[X²] − [E(X)]² = 1/4 − (5/12)² = 1/4 − 25/144 = (36 − 25)/144 = 11/144 = σ_X²
   ∴ σ_X = √11/12.                                                    (7)
Similarly σ_Y = √11/12.                                               (8)

   E(XY) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} xy f(x, y) dx dy = ∫_0^1 ∫_0^1 xy(2 − x − y) dx dy
         = ∫_0^1 ∫_0^1 (2xy − x²y − xy²) dx dy
         = ∫_0^1 [x²y − x³y/3 − x²y²/2]_0^1 dy
         = ∫_0^1 (y − y/3 − y²/2) dy
         = [y²/2 − y²/6 − y³/6]_0^1
         = 1/2 − 1/6 − 1/6 = (3 − 2)/6 = 1/6.

   Cov(X, Y) = E[XY] − E[X]E[Y] = 1/6 − (5/12)(5/12) = 1/6 − 25/144 = −1/144.

   ρ(X, Y) = Cov(X, Y)/√(Var(X) Var(Y)) = (−1/144) / [(√11/12)(√11/12)] = (−1/144)/(11/144) = −1/11.
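The value ρ = −1/11 can be checked by numerical double integration over the unit square; the grid size below is an arbitrary choice and the result is approximate.

```python
import numpy as np

n = 1001
x = np.linspace(0, 1, n)
y = np.linspace(0, 1, n)
X, Y = np.meshgrid(x, y, indexing="ij")
f = 2 - X - Y                       # joint density on the unit square

def integrate(g):
    # integrate over y (axis 1), then over x
    return np.trapz(np.trapz(g, y, axis=1), x)

EX, EY = integrate(X * f), integrate(Y * f)
EXY = integrate(X * Y * f)
EX2, EY2 = integrate(X**2 * f), integrate(Y**2 * f)
rho = (EXY - EX * EY) / np.sqrt((EX2 - EX**2) * (EY2 - EY**2))
print(rho)   # ≈ -1/11 ≈ -0.0909
```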

12(b)(i) If X₁, X₂, …, Xₙ are uniform variates with mean 2.5 and variance 3/4, use the central limit theorem to estimate

Solution: Out of syllabus.

12(b)(ii) X and Y are independent with common PDF (exponential):
   f(x) = e^{−x}, x ≥ 0;  0, x < 0     and     f(y) = e^{−y}, y ≥ 0;  0, y < 0.
Find the PDF of X − Y.

Solution:

The pdfs of X and Y are
   f(x) = e^{−x}, x ≥ 0   and   f(y) = e^{−y}, y ≥ 0.

Since X and Y are independent, the joint pdf of X and Y is
   f(x, y) = e^{−x} e^{−y} = e^{−(x+y)},  x, y ≥ 0.

Take u = x − y, v = y, so that x = u + v, y = v.

   J = | ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v | = | 1  1 ; 0  1 | = 1.

Hence the joint pdf of U and V is
   f(u, v) = f(x, y) |J| = e^{−(x+y)} (1) = e^{−(u+v+v)} = e^{−(u+2v)}.

Range space:
Given y ≥ 0 ⇒ v ≥ 0, and x ≥ 0 ⇒ u + v ≥ 0 ⇒ v ≥ −u.
For the region u < 0, v varies from −u to ∞, i.e. −u < v < ∞.
For the region u > 0, v varies from 0 to ∞, i.e. 0 < v < ∞.

To find the density of U = X − Y, we find f_U(u) for the regions
(i) u < 0, −u < v < ∞ and (ii) u > 0, v > 0:

   f_U(u) = ∫_{−u}^{∞} e^{−(u+2v)} dv = e^{−u} [e^{−2v}/(−2)]_{−u}^{∞} = e^{u}/2,     for u < 0,
   f_U(u) = ∫_{0}^{∞} e^{−(u+2v)} dv  = e^{−u} [e^{−2v}/(−2)]_{0}^{∞}  = e^{−u}/2,    for u > 0.

Hence the pdf is f_U(u) = e^{−|u|}/2 for −∞ < u < ∞.
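A short Monte Carlo sketch comparing the empirical density of X − Y with the derived Laplace density e^(−|u|)/2; sample size and binning are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.exponential(1.0, size=1_000_000)
y = rng.exponential(1.0, size=1_000_000)
u = x - y

hist, edges = np.histogram(u, bins=np.linspace(-3, 3, 61), density=True)
centres = 0.5 * (edges[:-1] + edges[1:])
# Maximum deviation from the theoretical pdf e^(-|u|)/2 should be small.
print(np.max(np.abs(hist - 0.5 * np.exp(-np.abs(centres)))))
```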

13(a)(i) Show that the random process X(t) = A cos(ωt + θ) is wide-sense stationary, where A and ω are constants and θ is uniformly distributed on the interval (0, 2π).

Solution:

Given X(t) = A cos(ωt + θ), where θ is uniformly distributed in (0, 2π),
   ∴ f(θ) = 1/(2π),  0 ≤ θ ≤ 2π.

We have to prove that X(t) is a WSS process, i.e.
(i) E[X(t)] = constant,
(ii) R_XX(t, t + τ) is a function of τ only.

   E[X(t)] = E[A cos(ωt + θ)]
           = A ∫_0^{2π} cos(ωt + θ) f(θ) dθ
           = A ∫_0^{2π} cos(ωt + θ) (1/2π) dθ
           = (A/2π) [sin(ωt + θ)]_0^{2π} = (A/2π) [sin ωt − sin ωt]
           = 0 = constant.

   R_XX(t, t + τ) = E[X(t) X(t + τ)]
                  = E[A cos(ωt + θ) · A cos(ω(t + τ) + θ)]
                  = A² E[cos(ωt + θ) cos(ω(t + τ) + θ)]
                  = (A²/2) E[cos(2ωt + ωτ + 2θ) + cos(ωτ)]
                  = (A²/2) E[cos(2ωt + ωτ + 2θ)] + (A²/2) cos ωτ
                  = (A²/2) ∫_0^{2π} cos(2ωt + ωτ + 2θ) (1/2π) dθ + (A²/2) cos ωτ
                  = (A²/4π) [sin(2ωt + ωτ + 2θ)/2]_0^{2π} + (A²/2) cos ωτ
                  = (A²/8π) [sin(2ωt + ωτ) − sin(2ωt + ωτ)] + (A²/2) cos ωτ
                  = 0 + (A²/2) cos ωτ
                  = (A²/2) cos ωτ, a function of τ only.

Since E[X(t)] = 0 (a constant) and R_XX(t, t + τ) is a function of τ only, X(t) is WSS.
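The two WSS properties can be illustrated numerically by averaging over many independent draws of θ; the constants A, ω and the time points in the sketch below are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(3)
A, w = 2.0, 5.0                                   # illustrative constants
theta = rng.uniform(0, 2 * np.pi, size=200_000)   # one θ per realisation

t, tau = 0.7, 0.3                                 # arbitrary time points
x_t = A * np.cos(w * t + theta)
x_t_tau = A * np.cos(w * (t + tau) + theta)

print(x_t.mean())                      # ≈ 0 (constant mean)
print((x_t * x_t_tau).mean())          # ≈ (A²/2) cos(ωτ)
print(0.5 * A**2 * np.cos(w * tau))    # theoretical value
```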

13(a)(ii) The process {X(t)} has the probability distribution (under certain conditions)
   P{X(t) = n} = (at)^{n−1}/(1 + at)^{n+1},  n = 1, 2, 3, …
               = at/(1 + at),                 n = 0.
Find the mean and variance of the process. Is the process first-order stationary?

Solution: Given

   X(t) = n:       0            1             2              3            …
   P{X(t) = n}:  at/(1+at)   1/(1+at)²    at/(1+at)³    (at)²/(1+at)⁴     …

Mean E{X(t)} = Σ_{n=0}^{∞} n p(x_n)
   = 0·at/(1+at) + 1·1/(1+at)² + 2·at/(1+at)³ + 3·(at)²/(1+at)⁴ + …
   = [1/(1+at)²] [1 + 2(at/(1+at)) + 3(at/(1+at))² + …]
   = [1/(1+at)²] [1 − at/(1+at)]^{−2}
   = [1/(1+at)²] [(1 + at − at)/(1+at)]^{−2}
   = [1/(1+at)²] (1+at)²
   = 1, which is a constant.

Now
   E(X²(t)) = Σ_{n=0}^{∞} n² p(x_n)
            = Σ_{n=0}^{∞} [n(n+1) − n] p(x_n)
            = Σ_{n=0}^{∞} n(n+1) p(x_n) − Σ_{n=0}^{∞} n p(x_n)
            = [0 + 1·2/(1+at)² + 2·3·at/(1+at)³ + 3·4·(at)²/(1+at)⁴ + …] − E{X(t)}
            = [2/(1+at)²] [1 + 3(at/(1+at)) + 6(at/(1+at))² + …] − 1
            = [2/(1+at)²] [1 − at/(1+at)]^{−3} − 1
            = [2/(1+at)²] (1+at)³ − 1
            = 2(1 + at) − 1
            = 1 + 2at.

   Var{X(t)} = E(X²(t)) − [E(X(t))]² = 1 + 2at − 1 = 2at, which depends on t.

∴ {X(t)} is not stationary.

13(b) State the postulates of the Poisson process and derive its probability distribution. Also prove that the sum of two independent Poisson processes is again a Poisson process.

Solution: If X(t) represents the number of occurrences of a certain event in (0, t), then the discrete random process {X(t)} is called a Poisson process, provided the following postulates are satisfied:

1. P[1 occurrence in (t, t + Δt)] = λΔt + O(Δt)

2. P[0 occurrences in (t, t + Δt)] = 1 − λΔt + O(Δt)

3. P[2 or more occurrences in (t, t + Δt)] = O(Δt)

4. X(t) is independent of the number of occurrences of the event in any interval before and after the interval (0, t).

5. The probability that the event occurs a specified number of times in (t₀, t₀ + τ) depends only on τ and not on t₀.

To prove that the sum of two independent Poisson processes is a Poisson process:

Let X(t) = X₁(t) + X₂(t), where
   P(X₁(t) = n) = e^{−λ₁t} (λ₁t)ⁿ / n!,  n = 0, 1, 2, …
   P(X₂(t) = n) = e^{−λ₂t} (λ₂t)ⁿ / n!,  n = 0, 1, 2, …

By the independence of X₁(t) and X₂(t),
   P(X(t) = n) = Σ_{r=0}^{n} P(X₁(t) = r) · P(X₂(t) = n − r)
               = Σ_{r=0}^{n} [e^{−λ₁t} (λ₁t)^r / r!] · [e^{−λ₂t} (λ₂t)^{n−r} / (n − r)!]
               = e^{−λ₁t} e^{−λ₂t} Σ_{r=0}^{n} nC_r (λ₁t)^r (λ₂t)^{n−r} / n!
               = e^{−t(λ₁+λ₂)} (λ₁t + λ₂t)ⁿ / n!
               = e^{−t(λ₁+λ₂)} ((λ₁ + λ₂)t)ⁿ / n!

Hence X₁(t) + X₂(t) is a Poisson process with parameter (λ₁ + λ₂)t.
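A simulation sketch of this result: counts of two independent Poisson processes at a fixed time t are added and the empirical distribution is compared with the Poisson((λ₁+λ₂)t) pmf. The rates and t are illustrative choices.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(4)
lam1, lam2, t = 1.5, 2.5, 3.0                     # illustrative rates and time

n1 = rng.poisson(lam1 * t, size=500_000)          # X1(t)
n2 = rng.poisson(lam2 * t, size=500_000)          # X2(t)
s = n1 + n2

lam = (lam1 + lam2) * t
for n in range(5):
    emp = np.mean(s == n)                         # empirical P(X(t) = n)
    pmf = exp(-lam) * lam**n / factorial(n)       # Poisson((λ1 + λ2)t) pmf
    print(n, round(float(emp), 4), round(pmf, 4))
```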

14(a)(i) The autocorrelation function of a random process is given by
   R(τ) = λ²,                       |τ| > ε
        = λ² + (λ/ε)(1 − |τ|/ε),    |τ| ≤ ε.
Find the power spectral density of the process.

Solution:

   S(ω) = ∫_{−∞}^{∞} R(τ) e^{−iωτ} dτ
        = ∫_{−ε}^{ε} [λ² + (λ/ε)(1 − |τ|/ε)] e^{−iωτ} dτ + ∫_{−∞}^{−ε} λ² e^{−iωτ} dτ + ∫_{ε}^{∞} λ² e^{−iωτ} dτ
        = ∫_{−∞}^{∞} λ² e^{−iωτ} dτ + (λ/ε) ∫_{−ε}^{ε} (1 − |τ|/ε) e^{−iωτ} dτ
        = F(λ²) + (2λ/ε) ∫_0^{ε} (1 − τ/ε) cos ωτ dτ
        = F(λ²) + (2λ/ε) [ (1 − τ/ε)(sin ωτ/ω) + (1/ε)(−cos ωτ/ω²) ]_0^{ε}
        = F(λ²) + (2λ/ε²) [ (1 − cos ωε)/ω² ]
        = F(λ²) + (4λ/ε²) sin²(ωε/2)/ω²
        = 2πλ² δ(ω) + (4λ/ε²) sin²(ωε/2)/ω².


14(a)(ii) Given the power spectral density of a continuous process as
   S_XX(ω) = (ω² + 9)/(ω⁴ + 5ω² + 4),
find the mean square value of the process.

Solution: Given
   S_XX(ω) = (ω² + 9)/(ω⁴ + 5ω² + 4) = (ω² + 9)/[(ω² + 1)(ω² + 4)].

   R_XX(τ) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{iωτ} dω.

The mean square value is given by
   E[X²(t)] = R_XX(0) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) dω
            = 2 · (1/2π) ∫_0^{∞} S_XX(ω) dω            [∵ S_XX(ω) is even]
            = (1/π) ∫_0^{∞} (ω² + 9)/[(ω² + 1)(ω² + 4)] dω.

To evaluate this integral, use partial fractions. Put ω² = u:
   (u + 9)/[(u + 1)(u + 4)] = A/(u + 1) + B/(u + 4)
   u + 9 = A(u + 4) + B(u + 1).
Solving for A and B we get A = 8/3, B = −5/3.

   ∴ (ω² + 9)/[(ω² + 1)(ω² + 4)] = (8/3)/(ω² + 1) − (5/3)/(ω² + 4)

   E[X²(t)] = (1/π) [ (8/3) ∫_0^{∞} dω/(ω² + 1) − (5/3) ∫_0^{∞} dω/(ω² + 4) ]
            = (1/π) [ (8/3) [tan⁻¹ω]_0^{∞} − (5/3)(1/2) [tan⁻¹(ω/2)]_0^{∞} ]
            = (1/π) [ (8/3)(π/2) − (5/6)(π/2) ]
            = (1/2)(8/3 − 5/6)
            = 11/12.
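A numerical check of E[X²(t)] = 11/12 by integrating the given spectral density on a finite grid; the cut-off frequency is an arbitrary choice, so the computed value sits slightly below the exact answer.

```python
import numpy as np

w = np.linspace(0, 1000, 2_000_001)
S = (w**2 + 9) / (w**4 + 5 * w**2 + 4)
# E[X²(t)] = (1/π) ∫_0^∞ S(ω) dω; the finite upper limit truncates a small tail.
print(np.trapz(S, w) / np.pi)   # ≈ 0.916, close to 11/12 ≈ 0.9167
```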

14(b)(i) State and prove the Wiener-Khintchine theorem.

Statement:

Let X(t) be a real WSS process with power density spectrum S_XX(ω). Let X_T(t) be a portion of the process X(t) in the time interval −T to T, i.e.
   X_T(t) = X(t),  −T < t < T
          = 0,      elsewhere.
Let X_T(ω) be the Fourier transform of X_T(t). Then
   S_XX(ω) = lim_{T→∞} (1/2T) E{ |X_T(ω)|² }.

Proof: Given X_T(ω) is the Fourier transform of X_T(t),
   X_T(ω) = ∫_{−∞}^{∞} X_T(t) e^{−iωt} dt = ∫_{−T}^{T} X_T(t) e^{−iωt} dt = ∫_{−T}^{T} X(t) e^{−iωt} dt.

   |X_T(ω)|² = X_T*(ω) X_T(ω)                                  [where * denotes the complex conjugate]
             = ∫_{−T}^{T} X(t) e^{iωt} dt · ∫_{−T}^{T} X(t) e^{−iωt} dt      [X(t) is real]
             = ∫_{−T}^{T} X(t₁) e^{iωt₁} dt₁ · ∫_{−T}^{T} X(t₂) e^{−iωt₂} dt₂
             = ∫_{−T}^{T} ∫_{−T}^{T} X(t₁) X(t₂) e^{−iω(t₂−t₁)} dt₁ dt₂

   lim_{T→∞} (1/2T) E[|X_T(ω)|²] = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} E[X(t₁) X(t₂)] e^{−iω(t₂−t₁)} dt₁ dt₂

But E[X(t₁) X(t₂)] = R_XX(t₁, t₂) if −T < t₁, t₂ < T, so

   lim_{T→∞} (1/2T) E[|X_T(ω)|²] = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t₁, t₂) e^{−iω(t₂−t₁)} dt₁ dt₂

We now make a change of variables:
Put t₁ = t and t₂ − t₁ = τ ⇒ t₂ = τ + t.
The Jacobian of the transformation is
   J = | ∂t₁/∂t  ∂t₁/∂τ ; ∂t₂/∂t  ∂t₂/∂τ | = | 1  0 ; 1  1 | = 1,
so dt₁ dt₂ = |J| dt dτ.
The limits of t are −T and T; when t₂ = −T, τ = −T − t, and when t₂ = T, τ = T − t.

   lim_{T→∞} (1/2T) E[|X_T(ω)|²] = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T−t}^{T−t} R_XX(t, t + τ) e^{−iωτ} dτ dt

Since X(t) is a WSS process, R_XX(t, t + τ) = R_XX(τ):

   lim_{T→∞} (1/2T) E[|X_T(ω)|²] = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T−t}^{T−t} R_XX(τ) e^{−iωτ} dτ dt
        = lim_{T→∞} (1/2T) ∫_{−T−t}^{T−t} R_XX(τ) e^{−iωτ} dτ · ∫_{−T}^{T} dt
        = lim_{T→∞} (1/2T) ∫_{−T−t}^{T−t} R_XX(τ) e^{−iωτ} dτ · 2T
        = lim_{T→∞} ∫_{−T−t}^{T−t} R_XX(τ) e^{−iωτ} dτ
        = ∫_{−∞}^{∞} R_XX(τ) e^{−iωτ} dτ = S_XX(ω), by definition.

   ∴ S_XX(ω) = lim_{T→∞} (1/2T) E[|X_T(ω)|²].
Hence proved.
14(b)(ii) The cross power spectrum of real random processes X(t) and Y(t) is given by
   S_XY(ω) = a + jbω,  |ω| < 1
           = 0,         elsewhere.
Find the cross-correlation function.

Solution: Given
   S_XY(ω) = a + jbω, |ω| < 1; 0, elsewhere.

   R_XY(τ) = (1/2π) ∫_{−∞}^{∞} S_XY(ω) e^{iωτ} dω = (1/2π) ∫_{−1}^{1} (a + jbω) e^{iωτ} dω
           = (1/2π) ∫_{−1}^{1} a e^{iωτ} dω + (1/2π) ∫_{−1}^{1} jbω e^{iωτ} dω
           = (a/2π) ∫_{−1}^{1} [cos ωτ + j sin ωτ] dω + (jb/2π) ∫_{−1}^{1} ω [cos ωτ + j sin ωτ] dω
           = (a/2π) · 2 ∫_0^{1} cos ωτ dω + (jb/2π) · 2j ∫_0^{1} ω sin ωτ dω
           = (a/π) [sin ωτ/τ]_0^{1} − (b/π) [ −ω cos ωτ/τ + sin ωτ/τ² ]_0^{1}
           = (a/π)(sin τ/τ) − (b/π) [ −cos τ/τ + sin τ/τ² ]
           = (a/(πτ)) sin τ + (b/(πτ)) cos τ − (b/(πτ²)) sin τ.

15(a)(i) Show that if the input X(t) to a linear system is a WSS process, then the output Y(t) is a WSS process.

Solution: Let X(t) be a WSS process input to a linear time-invariant stable system with Y(t) as the output process. Then
   Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du,
where h(t) is the weighting function or unit impulse response.

   ∴ E[Y(t)] = ∫_{−∞}^{∞} E[h(u) X(t − u)] du = ∫_{−∞}^{∞} h(u) E[X(t − u)] du.

Since X(t) is a WSS process, E[X(t)] is a constant µ_X for any t, so E[X(t − u)] = µ_X.

   ∴ E[Y(t)] = ∫_{−∞}^{∞} h(u) µ_X du = µ_X ∫_{−∞}^{∞} h(u) du.

Since the system is stable, ∫_{−∞}^{∞} h(u) du is finite.
   ∴ E[Y(t)] is a constant.

Now R_YY(t, t + τ) = E[Y(t) Y(t + τ)]
   = E[ ∫_{−∞}^{∞} h(u₁) X(t − u₁) du₁ · ∫_{−∞}^{∞} h(u₂) X(t + τ − u₂) du₂ ]
   = E[ ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(u₁) h(u₂) X(t − u₁) X(t + τ − u₂) du₁ du₂ ]
   = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(u₁) h(u₂) E[X(t − u₁) X(t + τ − u₂)] du₁ du₂.

Since X(t) is a WSS process, its autocorrelation function is a function of the time difference only:

   ∴ R_YY(t, t + τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(u₁) h(u₂) R_XX(τ + u₁ − u₂) du₁ du₂.

When this double integral is evaluated by integrating with respect to u₁ and u₂, the RHS is a function of τ only. Hence Y(t) is a WSS process.

15(a)(ii) Let X(t) be a wide-sense stationary process which is the input to a linear time-invariant system with unit impulse response h(t) and output Y(t). Prove that
   S_YY(ω) = |H(ω)|² S_XX(ω),
where H(ω) is the Fourier transform of h(t).

Solution: Let
   Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du = ∫_{−∞}^{∞} X(t − α) h(α) dα.

   ∴ X(t + τ) Y(t) = ∫_{−∞}^{∞} X(t + τ) X(t − α) h(α) dα.

Hence
   E[X(t + τ) Y(t)] = ∫_{−∞}^{∞} E{X(t + τ) X(t − α)} h(α) dα
                    = ∫_{−∞}^{∞} R_XX(τ + α) h(α) dα
                    = ∫_{−∞}^{∞} R_XX(τ − β) h(−β) dβ        [putting α = −β]

i.e.  R_XY(τ) = R_XX(τ) * h(−τ)                               (1)
      R_YX(τ) = R_XX(τ) * h(τ)                                (1a)

   Y(t) Y(t − τ) = ∫_{−∞}^{∞} X(t − α) Y(t − τ) h(α) dα
   E{Y(t) Y(t − τ)} = ∫_{−∞}^{∞} R_XY(τ − α) h(α) dα

Assuming that X(t) and Y(t) are jointly WSS,
i.e.  R_YY(τ) = R_XY(τ) * h(τ)                                (2)

Taking Fourier transforms of (1) & (2) we get
   S_XY(ω) = S_XX(ω) H*(ω)                                    (3)
where H*(ω) is the conjugate of H(ω), and
   S_YY(ω) = S_XY(ω) H(ω)                                     (4)

Inserting (3) in (4): S_YY(ω) = |H(ω)|² S_XX(ω).

15(b)(i) For an input-output linear system (X(t), h(t), Y(t)), derive the cross-correlation functions R_XY(τ) and R_YX(τ).

Solution: The cross-correlation functions between the input X(t) and the output Y(t) are given by
   (i)  R_XY(τ) = R_XX(τ) * h(τ)
   (ii) R_YX(τ) = R_XX(τ) * h(−τ)

Proof of (i): R_XY(t, t + τ) = E{X(t) · Y(t + τ)}                          (1)

   Y(t + τ) = h(t) * X(t + τ) = ∫_{−∞}^{∞} h(ε) X(t + τ − ε) dε            (2)

Substituting (2) in (1),
   R_XY(t, t + τ) = E[ X(t) ∫_{−∞}^{∞} h(ε) X(t + τ − ε) dε ]
                  = ∫_{−∞}^{∞} E{X(t) X(t + τ − ε)} h(ε) dε                (3)

Since X(t) is wide-sense stationary, equation (3) reduces to
   R_XY(τ) = ∫_{−∞}^{∞} R_XX(τ − ε) h(ε) dε, which is the convolution R_XY(τ) = R_XX(τ) * h(τ).

A similar development shows that
   R_YX(τ) = ∫_{−∞}^{∞} R_XX(τ − ε) h(−ε) dε = R_XX(τ) * h(−τ).

From the above it is clear that the cross-correlation functions depend on τ and not on the absolute time t.
15(b)(ii) A white Gaussian noise X(t) with zero mean and spectral density N₀/2 is applied to a low-pass filter. Determine the autocorrelation of the output Y(t).

Solution: Out of syllabus.


B.E./B.Tech. DEGREE EXAMINATIONS, NOV/DEC 2012
Fourth Semester
Common to ECE/BIOMEDICAL
MA2261 – PROBABILITY AND RANDOM PROCESSES
(Regulations 2008)
Time: Three hours                Maximum: 100 marks
Answer ALL questions

Part A – (10 x 2 = 20 marks)

1. The moment generating function of a random variable X is given by M(t) = e^{3(e^t − 1)}. What is P[X = 0]?

Solution: Given
   M(t) = e^{3(e^t − 1)} = e^{3e^t} e^{−3} = (1/e³) [1 + (3e^t)/1! + (3e^t)²/2! + …].

But E[e^{tX}] = M(t), i.e. Σ_{x=0}^{∞} e^{tx} p(x) = M(t):
   p(0) + e^t p(1) + e^{2t} p(2) + … = (1/e³) [1 + 3e^t/1! + 9e^{2t}/2! + …].

Equating the constant terms, P[X = 0] = p(0) = e^{−3}.

2. An experiment succeeds twice as often as it fails. Find the chance that in the next 4 trials there shall be at least one success.

Solution: p = 2/3, q = 1/3, n = 4.
   P(X ≥ 1) = 1 − P(X < 1) = 1 − P(X = 0)
            = 1 − 4C₀ (2/3)⁰ (1/3)⁴ = 1 − 1/81 = 80/81.


6
f ( x, y=
)  ( x + y 2 ), 0 ≤ x ≤ 1, o ≤ y ≤ 1
5
3. Find the marginal density functions of X and Y if =0 , otherwise

6
f ( x, y=
)  ( x + y 2 ), 0 ≤ x ≤ 1, o ≤ y ≤ 1
5
=0 , otherwise

that
The marginal density fu ction of X is
∞ y =1
6 y3  6 1
1
6
f ( x) = ∫ f ( x, y )dy = ∫ ( x + y 2 )dy =  xy +  =  x + , 0 ≤ x ≤ 1
−∞
50 5 3  y =0 5  3 

The marginal density function of Y is


∞ x =1
6 x2  6 1
1
6
f ( y ) = ∫ f ( x, y )dx = ∫ ( x + y )dx =  + xy 2  =  y 2 + , 0 ≤ y ≤ 1
2

−∞
50 5 2  x =0 5  2

4. Find the acute angle between the two lines of regression.

Solution: If θ is the acute angle between the two regression lines, then
   tan θ = [(1 − r²)/|r|] · σ_x σ_y / (σ_x² + σ_y²).

5. Define a strictly stationary random process.

Solution: A random process X(t) is said to be stationary in the strict sense if its statistical characteristics do not change with respect to time, i.e. the random variables X(t₁) and X(t₂), where t₂ = t₁ + Δ, have all statistical properties the same.

6. Prove that the sum of two independent Poisson processes is again a Poisson process.

Solution: Let X(t) = X₁(t) + X₂(t), where
   P(X₁(t) = n) = e^{−λ₁t} (λ₁t)ⁿ / n!,  n = 0, 1, 2, …
   P(X₂(t) = n) = e^{−λ₂t} (λ₂t)ⁿ / n!,  n = 0, 1, 2, …

By the independence of X₁(t) and X₂(t),
   P(X(t) = n) = Σ_{r=0}^{n} P(X₁(t) = r) · P(X₂(t) = n − r)
               = Σ_{r=0}^{n} [e^{−λ₁t} (λ₁t)^r / r!] · [e^{−λ₂t} (λ₂t)^{n−r} / (n − r)!]
               = e^{−λ₁t} e^{−λ₂t} Σ_{r=0}^{n} nC_r (λ₁t)^r (λ₂t)^{n−r} / n!
               = e^{−t(λ₁+λ₂)} (λ₁t + λ₂t)ⁿ / n!
               = e^{−t(λ₁+λ₂)} ((λ₁ + λ₂)t)ⁿ / n!

Hence X₁(t) + X₂(t) is a Poisson process with parameter (λ₁ + λ₂)t.

7. Find the variance of the stationary process {X(t)} whose autocorrelation function is given by R_XX(τ) = 2 + 4 e^{−2|τ|}.

Solution: Given R_XX(τ) = 2 + 4 e^{−2|τ|}.
   µ_x² = lim_{τ→∞} R(τ) = R(∞) = 2 + 4 e^{−∞} = 2 + 0 = 2.
   ∴ Mean = E[X(t)] = µ_x = √2.
   E[X²(t)] = R(0) = 2 + 4 e⁰ = 2 + 4 = 6.
   Var[X(t)] = R(0) − R(∞) = 6 − 2 = 4.


8. Prove that for a WSS process {X(t)}, R_XX(t, t + τ) is an even function of τ.

Solution:
   R_XX(τ) = E[X(t) X(t + τ)]
   R_XX(−τ) = E[X(t) X(t − τ)].
Put t − τ = P ⇒ t = P + τ:
   R_XX(−τ) = E[X(P + τ) X(P)] = E[X(P) X(P + τ)] = R_XX(τ).
9. Find the system transfer function if a linear time-invariant system has an impulse response
   h(t) = 1/(2c),  |t| ≤ c
        = 0,        otherwise.

Solution:
   H(ω) = ∫_{−∞}^{∞} h(t) e^{−iωt} dt
        = ∫_{−c}^{c} (1/2c) e^{−iωt} dt = (1/2c) [e^{−iωt}/(−iω)]_{−c}^{c}
        = (1/2c) [ (e^{iωc} − e^{−iωc}) / (iω) ]
        = (e^{iωc} − e^{−iωc}) / (2icω) = sin(ωc)/(ωc).

10. Define band-limited white noise.

Solution: Noise having a non-zero, constant power spectral density over a finite frequency band and zero elsewhere is called band-limited white noise. The PSD of band-limited white noise is given by
   S_NN(ω) = N₀/2,  |ω| ≤ ω_B
           = 0,      elsewhere.

Part B – (5 x 16 = 80 marks)

11(a)(i) If the probability density of X is given by
   f(x) = 2(1 − x),  0 < x < 1
        = 0,          otherwise,
find its rth moment. Hence evaluate E[(2X + 1)²].

Solution:
   E[X^r] = ∫_0^1 x^r · 2(1 − x) dx = 2 ∫_0^1 (x^r − x^{r+1}) dx = 2 [ x^{r+1}/(r+1) − x^{r+2}/(r+2) ]_0^1
          = 2 [ 1/(r+1) − 1/(r+2) ] = 2 [ (r + 2 − r − 1)/((r+1)(r+2)) ]
          = 2/((r+1)(r+2)).

   E[X]  = 2/((1+1)(1+2)) = 2/6 = 1/3
   E[X²] = 2/((2+1)(2+2)) = 2/12 = 1/6

   E[(2X + 1)²] = E[4X² + 4X + 1] = 4 E[X²] + 4 E[X] + E[1]
                = 4(1/6) + 4(1/3) + 1
                = 2/3 + 4/3 + 1 = 9/3 = 3.
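Noting that f(x) = 2(1 − x) on (0, 1) is the Beta(1, 2) density, the moments and E[(2X + 1)²] = 3 can be checked by simulation; the sample size below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.beta(1, 2, size=1_000_000)     # Beta(1, 2) has density 2(1 - x) on (0, 1)

print(x.mean())                        # ≈ 1/3
print((x**2).mean())                   # ≈ 1/6
print(((2 * x + 1) ** 2).mean())       # ≈ 3
```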
11(a)(ii) Find the MGF corresponding to the distribution
   f(θ) = (1/2) e^{−θ/2},  θ > 0
        = 0,                otherwise,
and hence find its mean and variance.

Solution: The m.g.f. is given by
   M(t) = E[e^{tθ}] = ∫_{−∞}^{∞} e^{tθ} f(θ) dθ = ∫_0^{∞} e^{tθ} (1/2) e^{−θ/2} dθ
        = (1/2) ∫_0^{∞} e^{−(1/2 − t)θ} dθ
        = (1/2) [ e^{−(1/2 − t)θ} / (−(1/2 − t)) ]_0^{∞}
        = (1/2) · 1/(1/2 − t) = 1/(1 − 2t),  t < 1/2.

   M(t) = (1 − 2t)^{−1} = 1 + 2t + (2t)² + … = 1 + 2t + 4t² + …
        = 1 + 2(t/1!) + 8(t²/2!) + …

   Mean = E[θ] = coefficient of t/1! = 2.
   E[θ²] = coefficient of t²/2! = 8.
   Variance = E[θ²] − (E[θ])² = 8 − 4 = 4.

(iii) Show that for the probability function
   P(x) = P(X = x) = 1/(x(x + 1)),  x = 1, 2, 3, …
        = 0,                         otherwise,
E(X) does not exist.

Solution: Given
   P(x) = 1/(x(x + 1)),  x = 1, 2, 3, …

   E[X] = Σ_{x=1}^{∞} x P(x) = Σ_{x=1}^{∞} x · 1/(x(x + 1)) = Σ_{x=1}^{∞} 1/(x + 1)
        = 1/2 + 1/3 + 1/4 + …
        = Σ_{n=1}^{∞} 1/n − 1, which is a divergent series.

Hence E[X] does not exist.


(b)(i) Assume that the reduction of a person's oxygen consumption during a period of Transcendental Meditation (T.M.) is a continuous random variable X normally distributed with mean 37.6 cc/min and S.D. 4.6 cc/min. Determine the probability that during a period of T.M. a person's oxygen consumption will be reduced by
(1) at least 44.5 cc/min
(2) at most 35.0 cc/min
(3) anywhere from 30.0 to 40.0 cc/min.

Solution: Given µ = 37.6, σ = 4.6, Z = (X − µ)/σ = (X − 37.6)/4.6.

(1) For at least 44.5 cc/min, i.e. X = 44.5: Z = (44.5 − 37.6)/4.6 = 1.5
   P[X ≥ 44.5] = P[Z ≥ 1.5] = 0.5 − P[0 < Z < 1.5] = 0.5 − 0.4332 = 0.0668.

(2) For at most 35 cc/min, i.e. X = 35: Z = (35 − 37.6)/4.6 = −0.5652
   P[X ≤ 35] = P[Z ≤ −0.5652] = 0.5 − 0.2157 = 0.2843.

(3) Anywhere from 30 to 40 cc/min:
   X₁ = 30, Z₁ = (30 − 37.6)/4.6 = −1.6521
   X₂ = 40, Z₂ = (40 − 37.6)/4.6 = 0.5217
   P[30 ≤ X ≤ 40] = P[−1.6521 ≤ Z ≤ 0.5217] = 0.4505 + 0.1985 = 0.6490.
(b)(ii) The random variable X has an exponential distribution with
   f(x) = e^{−x},  0 < x < ∞
        = 0,        otherwise.
Find the density functions of the variables given by (1) Y = 3X + 5, (2) Y = X².

Solution: Out of syllabus.

12(a)(i) The joint pdf of a two-dimensional random variable (X, Y) is given by
   f(x, y) = xy² + x²/8,  0 ≤ x ≤ 2, 0 ≤ y ≤ 1.
Compute P(Y < 1/2), P(X > 1 / Y < 1/2) and P(X + Y ≤ 1).

Solution: Given f(x, y) = xy² + x²/8, 0 ≤ x ≤ 2, 0 ≤ y ≤ 1.

The marginal density of X is given by
   f(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_0^1 (xy² + x²/8) dy = [x y³/3 + x² y/8]_0^1
        = x/3 + x²/8,  0 ≤ x ≤ 2.

The marginal density of Y is given by
   f(y) = ∫_{−∞}^{∞} f(x, y) dx = ∫_0^2 (xy² + x²/8) dx = [x² y²/2 + x³/24]_0^2
        = 2y² + 1/3,  0 ≤ y ≤ 1.

(i) P(Y < 1/2) = ∫_0^{1/2} f(y) dy = ∫_0^{1/2} (2y² + 1/3) dy = [2y³/3 + y/3]_0^{1/2}
      = (2/3)(1/8) + (1/3)(1/2) = 1/12 + 1/6 = (1 + 2)/12 = 3/12 = 1/4.

(ii) P(X > 1 / Y < 1/2) = P(X > 1, Y < 1/2) / P(Y < 1/2)

     P(X > 1, Y < 1/2) = ∫_1^2 ∫_0^{1/2} (xy² + x²/8) dy dx = ∫_1^2 [x y³/3 + x² y/8]_0^{1/2} dx
                       = ∫_1^2 (x/24 + x²/16) dx = [x²/48 + x³/48]_1^2
                       = [(4 + 8) − (1 + 1)]/48 = 10/48 = 5/24.

     ∴ P(X > 1 / Y < 1/2) = (5/24)/(1/4) = 5/6.

(iii) P(X + Y ≤ 1) = ∫_0^1 ∫_0^{1−y} (xy² + x²/8) dx dy = ∫_0^1 [x² y²/2 + x³/24]_{x=0}^{x=1−y} dy
      = ∫_0^1 [ (1 − y)² y²/2 + (1 − y)³/24 ] dy
      = ∫_0^1 [ (y² − 2y³ + y⁴)/2 + (1 − y)³/24 ] dy
      = [ (1/2)(y³/3 − y⁴/2 + y⁵/5) − (1/96)(1 − y)⁴ ]_0^1
      = (1/2)(1/3 − 1/2 + 1/5) + 1/96
      = 1/60 + 1/96 = 13/480.

12(a)(ii) If the independent random variables X and Y have variances 36 and 16 respectively, find the correlation coefficient between (X + Y) and (X − Y).

Solution: Given V(X) = 36, V(Y) = 16 and, since X and Y are independent, E[XY] = E[X]E[Y].

Let U = X + Y, V = X − Y.

   r = Cov(U, V)/(σ_U σ_V) = (E[UV] − E[U]E[V]) / (σ_U σ_V)
     = ( E[(X + Y)(X − Y)] − E[X + Y] E[X − Y] ) / (σ_U σ_V)
     = ( E[X² − Y²] − {E[X] + E[Y]}{E[X] − E[Y]} ) / (σ_U σ_V)
     = ( E[X²] − E[Y²] − {E[X]}² + {E[Y]}² ) / (σ_U σ_V)
     = ( Var(X) − Var(Y) ) / (σ_U σ_V).

   σ_U² = Var(U) = Var(X + Y) = Var(X) + Var(Y) = 36 + 16 = 52 ⇒ σ_U = √52
   σ_V² = Var(V) = Var(X − Y) = Var(X) + Var(Y) = 36 + 16 = 52 ⇒ σ_V = √52

   ∴ r = (36 − 16)/(√52 · √52) = 20/52 = 5/13.

12(b) If X and Y are independent random variables with probability density functions
   f_X(x) = 4e^{−4x}, x ≥ 0;   f_Y(y) = 2e^{−2y}, y ≥ 0,
(i) find the density functions of U = X/(X + Y) and V = X + Y,
(ii) are U and V independent?
(iii) what is P(U > 0.5)?

Solution:
(i) Since X and Y are independent,
   f_XY(x, y) = f_X(x) f_Y(y) = 8 e^{−4x} e^{−2y},  x ≥ 0, y ≥ 0.

Solving the equations u = x/(x + y), v = x + y, we get x = uv, y = v − uv.
   J = | ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v | = | v  u ; −v  1 − u | = v ⇒ |J| = v.

The joint pdf of (U, V) is
   f_UV(u, v) = |J| f_XY(x, y) = 8v e^{−4uv − 2v(1−u)} = 8v e^{−2v(1+u)},
with range space 0 ≤ u ≤ 1 and v ≥ 0.

The pdf of U is given by
   f_U(u) = ∫_0^{∞} 8v e^{−2v(1+u)} dv = 8/[2(1 + u)]² = 2/(1 + u)²,  0 ≤ u ≤ 1.

The pdf of V is given by
   f_V(v) = ∫_0^{1} 8v e^{−2v(1+u)} du = 4 e^{−2v} (1 − e^{−2v}),  v ≥ 0.

(ii) f_U(u) f_V(v) = 8 e^{−2v}(1 − e^{−2v})/(1 + u)² ≠ f_UV(u, v) ⇒ U and V are not independent.

(iii) P(U > 0.5) = ∫_{0.5}^{1} 2/(1 + u)² du = 2 [ −1/(1 + u) ]_{0.5}^{1} = 2 (2/3 − 1/2) = 1/3.
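A Monte Carlo sketch of U = X/(X + Y) for these rates, checking P(U > 0.5) = 1/3, the shape of f_U, and the dependence of U and V; the sample sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
x = rng.exponential(scale=1 / 4, size=1_000_000)   # f_X(x) = 4 e^(-4x)
y = rng.exponential(scale=1 / 2, size=1_000_000)   # f_Y(y) = 2 e^(-2y)
u = x / (x + y)
v = x + y

print(np.mean(u > 0.5))                            # ≈ 1/3
# Empirical density of U near u = 0.25 vs f_U(u) = 2/(1+u)².
print(np.mean((u > 0.2) & (u < 0.3)) / 0.1, 2 / 1.25**2)
print(np.corrcoef(u, v)[0, 1])                     # non-zero, so U and V are not independent
```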

13(a)(i) The probability distribution of the process {X(t)} is given by
   P(X(t) = n) = (at)^{n−1}/(1 + at)^{n+1},  n = 1, 2, …
               = at/(1 + at),                 n = 0.
Find the mean and variance of the process. Is the process first-order stationary?

Solution: Given

   X(t) = n:       0            1             2              3            …
   P{X(t) = n}:  at/(1+at)   1/(1+at)²    at/(1+at)³    (at)²/(1+at)⁴     …

Mean E{X(t)} = Σ_{n=0}^{∞} n p(x_n)
   = 0·at/(1+at) + 1·1/(1+at)² + 2·at/(1+at)³ + 3·(at)²/(1+at)⁴ + …
   = [1/(1+at)²] [1 + 2(at/(1+at)) + 3(at/(1+at))² + …]
   = [1/(1+at)²] [1 − at/(1+at)]^{−2}
   = [1/(1+at)²] [(1 + at − at)/(1+at)]^{−2}
   = [1/(1+at)²] (1+at)²
   = 1, which is a constant.

Now
   E(X²(t)) = Σ_{n=0}^{∞} n² p(x_n)
            = Σ_{n=0}^{∞} [n(n+1) − n] p(x_n)
            = Σ_{n=0}^{∞} n(n+1) p(x_n) − Σ_{n=0}^{∞} n p(x_n)
            = [0 + 1·2/(1+at)² + 2·3·at/(1+at)³ + 3·4·(at)²/(1+at)⁴ + …] − E{X(t)}
            = [2/(1+at)²] [1 + 3(at/(1+at)) + 6(at/(1+at))² + …] − 1
            = [2/(1+at)²] [1 − at/(1+at)]^{−3} − 1
            = [2/(1+at)²] (1+at)³ − 1
            = 2(1 + at) − 1
            = 1 + 2at.

   Var{X(t)} = E(X²(t)) − [E(X(t))]² = 1 + 2at − 1 = 2at, which depends on t.

∴ {X(t)} is not stationary.
13(a)(ii) If the WSS process {X(t)} is given by X(t) = 10 cos(100t + θ), where θ is uniformly distributed over (−π, π), prove that {X(t)} is correlation ergodic.

Solution: We know that
   R_XX(τ) = E[X(t) X(t + τ)]
           = E[100 cos(100t + θ) cos(100t + 100τ + θ)]
           = 50 E[cos(200t + 2θ + 100τ) + cos(100τ)]
           = 50 cos 100τ + 50 E[cos(200t + 2θ + 100τ)]          → (1)

Now E[cos(200t + 2θ + 100τ)] = (1/2π) ∫_{−π}^{π} cos(200t + 2θ + 100τ) dθ
   = (1/2π) [ sin(200t + 2θ + 100τ)/2 ]_{−π}^{π}
   = (1/4π) [ sin(200t + 2π + 100τ) − sin(200t − 2π + 100τ) ] = 0.

Substituting in equation (1),
   R_XX(τ) = E[X(t) X(t + τ)] = 50 cos 100τ                      → (2)

Consider the time average of X(t) X(t + τ):
   Z_T = (1/2T) ∫_{−T}^{T} X(t) X(t + τ) dt
       = (1/2T) ∫_{−T}^{T} 100 cos(100t + θ) cos(100t + 100τ + θ) dt
       = (50/2T) ∫_{−T}^{T} [cos(200t + 2θ + 100τ) + cos(100τ)] dt
       = 50 cos(100τ) + (50/2T) [ sin(200t + 2θ + 100τ)/200 ]_{−T}^{T}
       = 50 cos(100τ) + (1/8T) [ sin(200T + 2θ + 100τ) − sin(−200T + 2θ + 100τ) ].

Now lim_{T→∞} Z_T = 50 cos(100τ) = R_XX(τ).

Therefore {X(t)} is correlation ergodic.
13(b)(i) If the process {X(t); t ≥ 0} is a Poisson process with parameter λ, obtain P{X(t) = n}. Is the process first-order stationary?

Solution: For a Poisson process, P{X(t) = n} = P_n(t) = e^{−λt} (λt)ⁿ/n!, n = 0, 1, 2, …

   Mean = E[X(t)] = Σ_{n=0}^{∞} n P_n(t)
        = Σ_{n=0}^{∞} n (λt)ⁿ e^{−λt}/n!
        = e^{−λt} Σ_{n=1}^{∞} n (λt)ⁿ/[n(n − 1)!]
        = e^{−λt} [ (λt)¹/0! + (λt)²/1! + (λt)³/2! + … ]
        = e^{−λt} λt [ 1 + (λt)/1! + (λt)²/2! + … ]
        = e^{−λt} λt e^{λt}
   ∴ Mean = λt.

   E[X²(t)] = Σ_{n=0}^{∞} n² P_n(t)
            = Σ_{n=0}^{∞} [n(n − 1) + n] (λt)ⁿ e^{−λt}/n!
            = Σ_{n=0}^{∞} n(n − 1) (λt)ⁿ e^{−λt}/n! + Σ_{n=0}^{∞} n (λt)ⁿ e^{−λt}/n!
            = e^{−λt} Σ_{n=2}^{∞} (λt)ⁿ/(n − 2)! + λt
            = e^{−λt} [ (λt)²/0! + (λt)³/1! + (λt)⁴/2! + … ] + λt
            = e^{−λt} (λt)² [ 1 + (λt)/1! + (λt)²/2! + … ] + λt
            = e^{−λt} (λt)² e^{λt} + λt
            = (λt)² + λt.

   Var[X(t)] = E[X²(t)] − (E[X(t)])² = (λt)² + λt − (λt)² = λt.

Since Var[X(t)] (and the mean) are functions of t, the Poisson process is not first-order stationary.

13(b)(ii) Prove that the random telegraph signal process Y(t) = α X(t) is a wide-sense stationary process, where α is a random variable which is independent of X(t) and assumes the values −1 and +1 with equal probability, and R_XX(t₁, t₂) = e^{−2λ|t₁ − t₂|}.

Solution: Let Y(t) = α X(t), with P(α = 1) = P(α = −1) = 1/2.
By definition E(α) = 0 and E(α²) = 1.

To prove Y(t) is a WSS process, we show
(i) E[Y(t)] = a constant,
(ii) R(t₁, t₂) = a function of t₁ − t₂.

(i) E[Y(t)] = E[α X(t)] = E(α) E[X(t)]        [since α and X(t) are independent]
            = 0 = constant.

(ii) R(t₁, t₂) = E[Y(t₁) Y(t₂)] = E[α X(t₁) α X(t₂)]
               = E[α²] E[X(t₁) X(t₂)]
               = 1 × e^{−2λ|t₁ − t₂|}
               = a function of t₁ − t₂.

∴ Y(t) is a WSS process.

14(a)(i) If {X(t)} and {Y(t)} are two random processes with autocorrelation functions R_XX(τ) and R_YY(τ), prove that |R_XY(τ)| ≤ √(R_XX(0) R_YY(0)). Establish any two properties of the autocorrelation function R_XX(τ).

Solution: By the Cauchy-Schwarz inequality,
   {E[X(t) Y(t + τ)]}² ≤ E[X²(t)] E[Y²(t + τ)]
   ⇒ [R_XY(τ)]² ≤ R_XX(0) R_YY(0)        [E[X²(t)] = R_XX(0) by the A.C.F. property]
   ⇒ |R_XY(τ)| ≤ √(R_XX(0) R_YY(0)).

14(a)(ii) Find the power spectral density of the random process whose autocorrelation function is
   R(τ) = 1 − |τ|,  |τ| ≤ 1
        = 0,         elsewhere.

Solution: Given R(τ) = 1 − |τ| for |τ| ≤ 1 and 0 elsewhere.

   S(ω) = F[R(τ)] = ∫_{−∞}^{∞} R(τ) e^{−iωτ} dτ
        = ∫_{−1}^{1} (1 − |τ|) e^{−iωτ} dτ
        = ∫_{−1}^{1} (1 − |τ|)(cos ωτ − i sin ωτ) dτ
        = ∫_{−1}^{1} (1 − |τ|) cos ωτ dτ − i ∫_{−1}^{1} (1 − |τ|) sin ωτ dτ
        = 2 ∫_0^{1} (1 − τ) cos ωτ dτ          [the second integrand is odd]
        = 2 [ (1 − τ)(sin ωτ/ω) − (−1)(−cos ωτ/ω²) ]_0^{1}
        = 2 [ (0 − cos ω/ω²) − (0 − 1/ω²) ]
        = 2 (1 − cos ω)/ω²
        = (4/ω²) sin²(ω/2).

   ∴ S_XX(ω) = (4/ω²) sin²(ω/2).

14(b) State and prove the Wiener-Khintchine theorem and hence find the power spectral density of a WSS process X(t) which has the autocorrelation R_XX(τ) = A₀[1 − |τ|/T], −T ≤ τ ≤ T.

Statement:

Let X(t) be a real WSS process with power density spectrum S_XX(ω). Let X_T(t) be a portion of the process X(t) in the time interval −T to T, i.e.
   X_T(t) = X(t),  −T < t < T
          = 0,      elsewhere.
Let X_T(ω) be the Fourier transform of X_T(t). Then
   S_XX(ω) = lim_{T→∞} (1/2T) E{ |X_T(ω)|² }.

Proof: Given X_T(ω) is the Fourier transform of X_T(t),
   X_T(ω) = ∫_{−∞}^{∞} X_T(t) e^{−iωt} dt = ∫_{−T}^{T} X_T(t) e^{−iωt} dt = ∫_{−T}^{T} X(t) e^{−iωt} dt.

   |X_T(ω)|² = X_T*(ω) X_T(ω)                                  [where * denotes the complex conjugate]
             = ∫_{−T}^{T} X(t) e^{iωt} dt · ∫_{−T}^{T} X(t) e^{−iωt} dt      [X(t) is real]
             = ∫_{−T}^{T} X(t₁) e^{iωt₁} dt₁ · ∫_{−T}^{T} X(t₂) e^{−iωt₂} dt₂
             = ∫_{−T}^{T} ∫_{−T}^{T} X(t₁) X(t₂) e^{−iω(t₂−t₁)} dt₁ dt₂

   lim_{T→∞} (1/2T) E[|X_T(ω)|²] = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} E[X(t₁) X(t₂)] e^{−iω(t₂−t₁)} dt₁ dt₂

But E[X(t₁) X(t₂)] = R_XX(t₁, t₂) if −T < t₁, t₂ < T, so

   lim_{T→∞} (1/2T) E[|X_T(ω)|²] = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T}^{T} R_XX(t₁, t₂) e^{−iω(t₂−t₁)} dt₁ dt₂

We now make a change of variables:
Put t₁ = t and t₂ − t₁ = τ ⇒ t₂ = τ + t.
The Jacobian of the transformation is
   J = | ∂t₁/∂t  ∂t₁/∂τ ; ∂t₂/∂t  ∂t₂/∂τ | = | 1  0 ; 1  1 | = 1,
so dt₁ dt₂ = |J| dt dτ.
The limits of t are −T and T; when t₂ = −T, τ = −T − t, and when t₂ = T, τ = T − t.

   lim_{T→∞} (1/2T) E[|X_T(ω)|²] = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T−t}^{T−t} R_XX(t, t + τ) e^{−iωτ} dτ dt

Since X(t) is a WSS process, R_XX(t, t + τ) = R_XX(τ):

   lim_{T→∞} (1/2T) E[|X_T(ω)|²] = lim_{T→∞} (1/2T) ∫_{−T}^{T} ∫_{−T−t}^{T−t} R_XX(τ) e^{−iωτ} dτ dt
        = lim_{T→∞} (1/2T) ∫_{−T−t}^{T−t} R_XX(τ) e^{−iωτ} dτ · ∫_{−T}^{T} dt
        = lim_{T→∞} (1/2T) ∫_{−T−t}^{T−t} R_XX(τ) e^{−iωτ} dτ · 2T
        = lim_{T→∞} ∫_{−T−t}^{T−t} R_XX(τ) e^{−iωτ} dτ
        = ∫_{−∞}^{∞} R_XX(τ) e^{−iωτ} dτ = S_XX(ω), by definition.

   ∴ S_XX(ω) = lim_{T→∞} (1/2T) E[|X_T(ω)|²].
Hence proved.

15(a)(i) Show that if the input X(t) to a linear system is a WSS process, then the output Y(t) is a WSS process.

Solution: Let X(t) be a WSS process input to a linear time-invariant stable system with Y(t) as the output process. Then
   Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du,
where h(t) is the weighting function or unit impulse response.

   ∴ E[Y(t)] = ∫_{−∞}^{∞} E[h(u) X(t − u)] du = ∫_{−∞}^{∞} h(u) E[X(t − u)] du.

Since X(t) is a WSS process, E[X(t)] is a constant µ_X for any t, so E[X(t − u)] = µ_X.

   ∴ E[Y(t)] = ∫_{−∞}^{∞} h(u) µ_X du = µ_X ∫_{−∞}^{∞} h(u) du.

Since the system is stable, ∫_{−∞}^{∞} h(u) du is finite.
   ∴ E[Y(t)] is a constant.

Now R_YY(t, t + τ) = E[Y(t) Y(t + τ)]
   = E[ ∫_{−∞}^{∞} h(u₁) X(t − u₁) du₁ · ∫_{−∞}^{∞} h(u₂) X(t + τ − u₂) du₂ ]
   = E[ ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(u₁) h(u₂) X(t − u₁) X(t + τ − u₂) du₁ du₂ ]
   = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(u₁) h(u₂) E[X(t − u₁) X(t + τ − u₂)] du₁ du₂.

Since X(t) is a WSS process, its autocorrelation function is a function of the time difference only:

   ∴ R_YY(t, t + τ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} h(u₁) h(u₂) R_XX(τ + u₁ − u₂) du₁ du₂.

When this double integral is evaluated by integrating with respect to u₁ and u₂, the RHS is a function of τ only. Hence Y(t) is a WSS process.

15(a)(ii) If X(t) is the input voltage to a circuit and Y(t) is the output voltage, {X(t)} is a stationary process with µ_X = 0 and R_XX(τ) = e^{−2|τ|}. Find the mean µ_Y and the power spectrum S_YY(ω) of the output if the system transfer function is given by H(ω) = 1/(ω + 2i).

Solution: Given:
(i) µ_X = 0, i.e. E[X(t)] = 0
(ii) R_XX(τ) = e^{−2|τ|}
(iii) H(ω) = 1/(ω + 2i).

To find: (1) the mean of Y(t), i.e. µ_Y = E[Y(t)], and (2) S_YY(ω).

(1) E[Y(t)] = E[X(t)] H(0) = 0 · H(0) = 0.

(2) S_YY(ω) = |H(ω)|² S_XX(ω)                                  (A)
    (iii) ⇒ |H(ω)|² = 1/(ω² + 4)
    S_XX(ω) = F[R_XX(τ)] = F[e^{−2|τ|}] = 4/(ω² + 4)           [by (ii)]
    (A) ⇒ S_YY(ω) = [1/(ω² + 4)] · [4/(ω² + 4)] = 4/(ω² + 4)².

15(b)(i) If Y(t) = A cos(ω₀t + θ) + N(t), where A is a constant, θ is a random variable with a uniform distribution on (−π, π), and {N(t)} is a band-limited Gaussian white noise with
   S_NN(ω) = N₀/2,  |ω − ω₀| < ω_B
           = 0,      elsewhere,
find the power spectral density of {Y(t)}. Assume that {N(t)} and θ are independent.

Solution:
   Y(t₁) Y(t₂) = {A cos(ω₀t₁ + θ) + N(t₁)}{A cos(ω₀t₂ + θ) + N(t₂)}
               = A² cos(ω₀t₁ + θ) cos(ω₀t₂ + θ) + N(t₁) N(t₂)
                 + A cos(ω₀t₁ + θ) N(t₂) + A cos(ω₀t₂ + θ) N(t₁)

   R_YY(t₁, t₂) = A² E[cos(ω₀t₁ + θ) cos(ω₀t₂ + θ)] + R_NN(t₁, t₂)
                  + A E[cos(ω₀t₁ + θ)] E[N(t₂)] + A E[cos(ω₀t₂ + θ)] E[N(t₁)]

15(b)(ii) A system has an impulse response h(t) = e^{−βt} U(t). Find the power spectral density of the output Y(t) corresponding to the input X(t).

Solution: Given h(t) = e^{−βt} U(t) ⇒ H(ω) = F[h(t)] = F[e^{−βt} U(t)] = 1/(β + iω)
   ∴ |H(ω)|² = 1/(β² + ω²).

We know that S_YY(ω) = |H(ω)|² S_XX(ω)
   ⇒ S_YY(ω) = S_XX(ω)/(β² + ω²).


B.E./B.Tech. DEGREE EXAMINATIONS, MAY/JUNE 2012
Fourth Semester
Common to ECE/BIOMEDICAL
MA6453 – PROBABILITY AND RANDOM PROCESSES
(Regulations 2008)
Time: Three hours                Maximum: 100 marks
Answer ALL questions

Part A – (10 x 2 = 20 marks)


1. Find C, if P[X = n] = C (2/3)ⁿ, n = 1, 2, 3, …

Solution:
   Σ P[X = n] = 1 ⇒ Σ_{n=1}^{∞} C (2/3)ⁿ = 1
   C [ (2/3) + (2/3)² + … ] = 1
   C (2/3) [1 − 2/3]^{−1} = 1        [since 1 + x + x² + … = (1 − x)^{−1}]
   C (2/3)(3) = 1 ⇒ 2C = 1 ⇒ C = 1/2.

2. The probability that a man shooting a target is 1/4. How many times must he fire so that the probability of hitting the target at least once is more than 2/3?

Solution: Let p be the probability of hitting the target, i.e. p = 1/4; then q = 1 − p = 3/4.
To find n such that
   P[X ≥ 1] > 2/3
   1 − P(X < 1) > 2/3 ⇒ 1 − P(X = 0) > 2/3
   ⇒ 1 − qⁿ > 2/3 ⇒ (3/4)ⁿ < 1/3.
This is true for n = 4. Hence n = 4.


3. Let X and Y be two discrete random variables with joint probability mass function
   P[X = x, Y = y] = (2x + y)/18,  x = 1, 2 and y = 1, 2
                   = 0,             otherwise.
Find the marginal probability mass functions of X and Y.

Solution:
   P_X(x) = Σ_y P_XY(x, y) = Σ_{y=1}^{2} (2x + y)/18 = (4x + 3)/18,  x = 1, 2
   P_Y(y) = Σ_x P_XY(x, y) = Σ_{x=1}^{2} (2x + y)/18 = (2y + 6)/18,  y = 1, 2.

4. State the central limit theorem for iid random variables.

Solution: Out of syllabus.

5. Define a wide-sense stationary process.

Solution: A random process {X(t)} is called wide-sense stationary if its mean is constant and its autocorrelation function depends only on the time difference, i.e.
   (i) E[X(t)] is a constant,
   (ii) E[X(t) X(t + τ)] = R_XX(τ).
6. If {X(t)} is a normal process with µ(t) = 10 and C(t₁, t₂) = 16 e^{−|t₁−t₂|}, find the variance of X(10) − X(6).

Solution: Since {X(t)} is a normal process, any member of {X(t)} is a normal random variable.
By definition C(t₁, t₂) = R(t₁, t₂) − E[X(t₁)] E[X(t₂)], and C(t, t) = Var[X(t)].

   Var[X(10) − X(6)] = Var[X(10)] + Var[X(6)] − 2 Cov[X(10), X(6)]
                     = C(10, 10) + C(6, 6) − 2 C(10, 6)
                     = 16 e^{−|10−10|} + 16 e^{−|6−6|} − 2 × 16 e^{−|10−6|}
                     = 16 + 16 − 32 e^{−4}
                     = 31.4139.

   σ_U = √31.4139 = 5.6048.
7. The autocorrelation function of a stationary random process is R(τ) = 16 + 9/(1 + 6τ²). Find the mean and variance of the process.

Solution: Given R(τ) = 16 + 9/(1 + 6τ²).
   [E(X(t))]² = lim_{τ→∞} R(τ) = lim_{τ→∞} [16 + 9/(1 + 6τ²)] = 16
   ∴ E[X(t)] = 4.
   E[X²(t)] = R(0) = 16 + 9 = 25.
   Variance σ² = E[X²(t)] − [E(X(t))]² = 25 − 16 = 9.

8. Prove that S_XY(ω) = S_YX(−ω).

Solution:
   S_XY(ω) = ∫_{−∞}^{∞} R_XY(τ) e^{−iωτ} dτ = ∫_{−∞}^{∞} R_YX(−τ) e^{−iωτ} dτ     [since R_XY(τ) = R_YX(−τ)]

Put τ₁ = −τ, dτ₁ = −dτ; when τ = ∞, τ₁ = −∞ and when τ = −∞, τ₁ = ∞.

   S_XY(ω) = ∫_{∞}^{−∞} R_YX(τ₁) e^{iωτ₁} (−dτ₁) = ∫_{−∞}^{∞} R_YX(τ₁) e^{iωτ₁} dτ₁ = S_YX(−ω).

9. Prove that the system y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du is a linear time-invariant system.

Solution: If X(t) = a₁X₁(t) + a₂X₂(t), then
   y(t) = ∫_{−∞}^{∞} h(u)[a₁X₁(t − u) + a₂X₂(t − u)] du = a₁y₁(t) + a₂y₂(t),
so the system is linear.

A system is time-invariant if y(t + h) = f[X(t + h)], where y(t) = f[X(t)]. Here
   ∫_{−∞}^{∞} h(u) X[(t + h) − u] du = y(t + h).

Hence the system is time-invariant.

10. What is the unit impulse response of a system? Why is it called so?

Solution: If a system is of the form y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then the system weighting function h(t) is also called the unit impulse response of the system. It is called so because Y(t) = h(t) when the input X(t) is the unit impulse function δ(t).

Part B – (5 x 16 = 80 marks)

11(a)(i) A random variable X has the following probability distribution:

   X:    0    1    2    3    4    5    6     7
   P(x): 0    K    2K   2K   3K   K²   2K²   7K² + K

Find
(1) the value of K,
(2) P(1.5 < X < 4.5 / X > 2), and
(3) the smallest value of n for which P(X ≤ n) ≥ 1/2.

Solution:
(i) Σ_{x=0}^{7} p(x) = 1
   0 + K + 2K + 2K + 3K + K² + 2K² + 7K² + K = 1
   10K² + 9K = 1 ⇒ 10K² + 9K − 1 = 0
   K = 1/10 or K = −1.
Since P(x) cannot be negative, K = −1 is rejected. Hence K = 1/10.

   X:          0    1      2      3      4      5       6       7
   P(X = x):   0    1/10   2/10   2/10   3/10   1/100   2/100   17/100

(ii) P(1.5 < X < 4.5 / X > 2) = P(X = 3 or 4)/P(X > 2)
   = (2/10 + 3/10)/(1 − 0 − 1/10 − 2/10) = (5/10)/(7/10) = 5/7.

Also,
   P(X < 6) = P(X = 0) + P(X = 1) + … + P(X = 5) = 1/10 + 2/10 + 2/10 + 3/10 + 1/100 = 81/100,
   P(X ≥ 6) = 1 − P(X < 6) = 1 − 81/100 = 19/100,
   P(0 < X < 5) = P(X = 1) + … + P(X = 4) = 1/10 + 2/10 + 2/10 + 3/10 = 8/10 = 4/5.

(iii) P(X ≤ 3) = 5/10 = 1/2 and P(X ≤ 4) = 8/10 > 1/2; therefore n = 4.

(iv) The distribution function F_X(x) of X is given by

   X:                    0    1      2      3      4      5        6        7
   F_X(x) = P(X ≤ x):    0    1/10   3/10   5/10   8/10   81/100   83/100   1
11(a)(ii) Find the M.G.F. of the random variable X having the probability density function
   f(x) = (x/4) e^{−x/2},  x > 0
        = 0,                elsewhere.
Also deduce the first four moments about the origin.

Solution: Given f(x) = (x/4) e^{−x/2}, x > 0.

   M_X(t) = E[e^{tX}] = ∫_0^{∞} e^{tx} f(x) dx = ∫_0^{∞} e^{tx} (x/4) e^{−x/2} dx = (1/4) ∫_0^{∞} x e^{−(1/2 − t)x} dx
          = (1/4) [ x · e^{−(1/2 − t)x}/(−(1/2 − t)) − (1) · e^{−(1/2 − t)x}/(1/2 − t)² ]_0^{∞}
          = (1/4) [ (0 − 0) − (0 − 1/(1/2 − t)²) ]
          = (1/4) · 1/(1/2 − t)² = 1/(1 − 2t)².

   M_X(t) = (1 − 2t)^{−2} = 1 + 2(2t) + 3(2t)² + 4(2t)³ + 5(2t)⁴ + …
          = 1 + 4t + 12t² + 32t³ + 80t⁴ + …
          = 1 + 4(t/1!) + 24(t²/2!) + 192(t³/3!) + 1920(t⁴/4!) + …

   ∴ µ₁′ = coefficient of t/1!  = 4
     µ₂′ = coefficient of t²/2! = 24
     µ₃′ = coefficient of t³/3! = 192
     µ₄′ = coefficient of t⁴/4! = 1920.


(b)(i) Given that X is distributed normally, if P[X < 45] = 0.31 and P[X > 64] = 0.08, find the mean and standard deviation of the distribution.

Solution:

P[X < 45] = 0.31, so the area between z and 0 is 0.19; the value of z corresponding to the area 0.19 is 0.5 (taken with a negative sign).
   ∴ (45 − µ)/σ = −0.5 ⇒ 45 − µ = −0.5σ     (1)

P[X > 64] = 0.08, so the area between 0 and z is 0.42; the value of z corresponding to the area 0.42 is 1.4.
   ∴ (64 − µ)/σ = 1.4 ⇒ 64 − µ = 1.4σ       (2)

Solving (1) and (2): mean µ = 50, S.D. σ = 10.

(ii) The time in hours required to repair a machine is exponentially distributed with parameter λ = 1/2.

A. What is the probability that the repair time exceeds 2 hours?

B. What is the conditional probability that a repair takes at least 10 hours given that its duration exceeds 9 hours?

Solution: Let X be the random variable denoting the time to repair the machine.
Given X is exponentially distributed with λ = 1/2,
   f(x) = (1/2) e^{−x/2},  x > 0.

A. The probability that the repair time exceeds 2 hours:
   P(X > 2) = ∫_2^{∞} (1/2) e^{−x/2} dx = [−e^{−x/2}]_2^{∞} = e^{−1} = 0.3679.

B. The probability that the repair takes at least 10 hours given that it exceeds 9 hours: by the memoryless property of the exponential distribution,
   P(X ≥ 10 / X > 9) = P(X > 1) = ∫_1^{∞} (1/2) e^{−x/2} dx = e^{−1/2} = 0.6065.

12(a)(i) The joint p.d.f of the random variable (X,Y) is given b y 𝑓𝑓(𝑥𝑥, 𝑦𝑦) =
2 2
𝑘𝑘𝑘𝑘𝑘𝑘𝑒𝑒 −�𝑥𝑥 +𝑦𝑦 � ,
𝑥𝑥 > 0, 𝑦𝑦 > 0. Find the value of k and also prove that X and Y are independent.
Solution: Here the range space is the entire first quadrant of the XY-plane.

∫₀^∞ ∫₀^∞ k x y e^{-(x²+y²)} dx dy = 1 ……….(1)

k [∫₀^∞ x e^{-x²} dx] [∫₀^∞ y e^{-y²} dy] = 1

𝑑𝑑𝑑𝑑
Put 𝑥𝑥 2 = 𝑡𝑡, 2𝑥𝑥𝑥𝑥𝑥𝑥 = 𝑑𝑑𝑑𝑑. 𝑖𝑖. 𝑒𝑒. , 𝑥𝑥𝑥𝑥𝑥𝑥 = .
2

When 𝑥𝑥 → 0 ⇒ 𝑡𝑡 ⇢ 0, 𝑥𝑥 ⇢ ∞ ⇒ 𝑡𝑡 ⟶ ∞
𝑑𝑑𝑑𝑑
Put 𝑦𝑦 2 = 𝑣𝑣 , 2𝑦𝑦𝑦𝑦𝑦𝑦 = 𝑑𝑑𝑑𝑑 , 𝑖𝑖𝑖𝑖. , 𝑦𝑦𝑦𝑦𝑦𝑦 =
2
As 𝑦𝑦 → 0 ⇒ 𝑣𝑣 ⇢ 0, 𝑦𝑦 ⇢ ∞ ⇒ 𝑣𝑣 ⇢ ∞
(1) ⇒ k [∫₀^∞ e^{-t} (dt/2)] [∫₀^∞ e^{-v} (dv/2)] = 1

k [−e^{-t}/2]₀^∞ [−e^{-v}/2]₀^∞ = 1

k (1/2)(1/2) = 1

k/4 = 1 ⇒ k = 4.
The marginal density of X is given by
f_X(x) = ∫₀^∞ f(x, y) dy = ∫₀^∞ 4xy e^{-(x²+y²)} dy = ∫₀^∞ 4xy e^{-x²} e^{-y²} dy



       = 4x e^{-x²} ∫₀^∞ y e^{-y²} dy
       = 4x e^{-x²} (1/2)
       = 2x e^{-x²}, x > 0
The marginal density of Y is given by
f_Y(y) = ∫₀^∞ f(x, y) dx = ∫₀^∞ 4xy e^{-(x²+y²)} dx
       = ∫₀^∞ 4xy e^{-x²} e^{-y²} dx
       = 4y e^{-y²} ∫₀^∞ x e^{-x²} dx
       = 4y e^{-y²} (1/2)
       = 2y e^{-y²}, y > 0.
If f_X(x) f_Y(y) = f(x, y), then X and Y are independent.
f_X(x) f_Y(y) = (2x e^{-x²})(2y e^{-y²}) = 4xy e^{-(x²+y²)} = f(x, y). Hence X and Y
are independent.
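The value k = 4 and the factorisation f(x, y) = f_X(x) f_Y(y) can also be checked numerically; the sketch below (an illustration, not part of the solution) assumes numpy and scipy.

import numpy as np
from scipy.integrate import dblquad

f = lambda y, x: 4 * x * y * np.exp(-(x**2 + y**2))
total, _ = dblquad(f, 0, np.inf, lambda x: 0, lambda x: np.inf)
print(total)                                   # ~1.0, so k = 4 normalises the joint pdf

fx = lambda x: 2 * x * np.exp(-x**2)
fy = lambda y: 2 * y * np.exp(-y**2)
x0, y0 = 0.7, 1.3                              # an arbitrary test point
print(f(y0, x0), fx(x0) * fy(y0))              # equal, so f(x, y) = f_X(x) f_Y(y)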

(ii) If X and Y are uncorrelated random variables with variances 16 and 9. Find the
correlation coefficient between X+Y and X-Y.

Solution:

Let U = X + Y and V = X − Y. Then E(U) = E(X) + E(Y) and E(V) = E(X) − E(Y).

∴ Cov(U, V) = E(UV) − E(U)E(V) = E(X² − Y²) − [E(X)² − E(Y)²] = Var(X) − Var(Y) = σ_x² − σ_y²

Since X and Y are uncorrelated, Cov(X, Y) = 0, so
Var(U) = Var(X) + Var(Y) + 2Cov(X, Y) = σ_x² + σ_y², and similarly Var(V) = σ_x² + σ_y².

If r is the correlation coefficient between U and V,

r_uv = Cov(U, V) / (σ_u σ_v) = (σ_x² − σ_y²) / (σ_x² + σ_y²)

Given σ_x² = 16 and σ_y² = 9, ∴ r_uv = (16 − 9)/(16 + 9) = 7/25.
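A simulation sketch (illustrative only; any uncorrelated X, Y with variances 16 and 9 would do, taken here as independent normals) confirms that the correlation of (X+Y, X−Y) is close to 7/25 = 0.28.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 4.0, 200_000)        # Var(X) = 16
Y = rng.normal(0.0, 3.0, 200_000)        # Var(Y) = 9, independent of X
print(np.corrcoef(X + Y, X - Y)[0, 1])   # ~0.28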

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130

12(b)(i) If the p.d.f of a two dimensional random variable (X,Y) is given by 𝑓𝑓(𝑥𝑥, 𝑦𝑦) =
𝑥𝑥 + 𝑦𝑦 , 0 ≤ 𝑥𝑥, 𝑦𝑦 ≤ 1. Find the p.d.f of U=XY.
Solution: u = xy, v = y. Hence x = u/v and y = v.

J = |∂x/∂u  ∂x/∂v; ∂y/∂u  ∂y/∂v| = |1/v  −u/v²; 0  1| = 1/v

The joint p.d.f. of 𝑢𝑢 and 𝑣𝑣 is given by


g(u, v) = f(x, y)|J| = (x + y)(1/v) = (u/v + v)(1/v) = u/v² + 1 = 1 + u/v²

Since , 0 ≤ 𝑦𝑦 ≤ 1 , 0 ≤ 𝑣𝑣 ≤ 1, , 0 ≤ 𝑥𝑥 ≤ 1 ⇒ , 0 ≤ 𝑢𝑢 ≤ 𝑣𝑣

𝑣𝑣varies from 𝑣𝑣 = 𝑢𝑢 to 𝑣𝑣 = 1.

Hence the p.d.f of U is given by



f(u) = ∫_{-∞}^{∞} g(u, v) dv
     = ∫_u^1 (1 + u/v²) dv
     = [v]_u^1 + u[−1/v]_u^1 = (1 − u) + (1 − u)

f(u) = 2(1 − u), 0 ≤ u ≤ 1.
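The stated density can be checked by integrating g(u, v) numerically over u < v < 1 for a few values of u (a verification sketch only, assuming scipy):

from scipy.integrate import quad

for u in (0.1, 0.5, 0.9):
    val, _ = quad(lambda v: 1 + u / v**2, u, 1)
    print(u, round(val, 6), 2 * (1 - u))    # the last two columns agree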

(ii) Let X 1 , X 2 ,..., X n be poisson variates with


parameter λ = 2 & S n = X 1 + X 2 + ... X n where n=75.
Find P[120 ≤ S n ≤ 160] using central limit theorem.
Solution: Out of syllabus.
13(a)(i) If {X(t)} is WSS process with
autocorrelation R(𝝉𝝉) = 𝑨𝑨𝒆𝒆−𝜶𝜶|𝝉𝝉| ,determine the
second order moment of the RV {X(8)-X(5)}
Solution: E[X(8)-X(5)]2=E[X2(8)+x2(5)-2X(8)X(5)]
=E[X2(8)]+E[x2(5)]-2E[X(8)X(5)] → (1)
Now E[x2(t)]=R xx (0)
E[X2(8)]=𝐴𝐴𝑒𝑒 −𝛼𝛼|0| =E[X2(5)]=A


E(X(8)X(5)) =R xx (8-5)=R xx (3)=A𝑒𝑒 −3𝛼𝛼


Substituting in (1)
E[X(8)-X(5)]2=A+A-2A𝑒𝑒 −3𝛼𝛼
=2A(1-𝑒𝑒 −3𝛼𝛼 )
(ii) Suppose X(t) is a normal process with mean
µ (t ) = 3 & C (t1 , t 2 ) = 4e −0.2 t −t , find the probability that
1 2

(i ) X (5) ≤ 2 (ii ) X (8) − X (5) ≤ 1


Solution: Out of syllabus.
13(a)(ii) If the Wss process {X(t)} is given by X(t)=10cos(100t+𝜽𝜽),where 𝜽𝜽 is
uniformly distributed over (-π,π).Prove that {X(t)} is correlation ergodic
Solution: We Know that
R xx (𝜏𝜏)= E(X(t).X(t+ 𝜏𝜏))
          = E(100 cos(100t + θ) cos(100t + 100τ + θ))
          = 50 E(cos(200t + 2θ + 100τ) + cos(100τ))
          = 50 cos(100τ) + 50 E(cos(200t + 2θ + 100τ))   → (1)
Now E(cos(200t + 2θ + 100τ)) = (1/2π) ∫_{-π}^{π} cos(200t + 2θ + 100τ) dθ
                             = (1/2π) [sin(200t + 2θ + 100τ)/2]_{-π}^{π}
                             = (1/4π) [sin(200t + 2π + 100τ) − sin(200t − 2π + 100τ)]
                             = 0
Substituting in Equation (1),we have
E(X(t).X(t+ 𝜏𝜏))=50cos100 𝜏𝜏 → (2)
Therefore Z_T = (1/2T) ∫_{-T}^{T} X(t) X(t + τ) dt
             = (1/2T) ∫_{-T}^{T} 100 cos(100t + θ) cos(100t + 100τ + θ) dt
             = (50/2T) ∫_{-T}^{T} [cos(200t + 2θ + 100τ) + cos(100τ)] dt
             = 50 cos(100τ) + (25/T) ∫_{-T}^{T} cos(200t + 2θ + 100τ) dt
             = 50 cos(100τ) + (25/T) [sin(200t + 2θ + 100τ)/200]_{-T}^{T}
             = 50 cos(100τ) + (1/8T) [sin(200T + 2θ + 100τ) − sin(−200T + 2θ + 100τ)]
Now lim 𝑇𝑇→∞ (𝑍𝑍𝑇𝑇 ) = 50cos(100 𝜏𝜏)


=R(𝜏𝜏)
Therefore {X(t)} is correlation ergodic
.

13(b)(i) If customers arrive at a counter in accordance with poisson process


with a mean rate of 2 per minute, find the probability that the interval
between 2 consecutive arrivals is

(a)more than 1 minute (b) between 1 minute and 2 minute and (c) 4
minute or less
Solution: Let T be the random variable denoting the interarrival time.
For a Poisson process with rate λ, the interarrival time is exponentially distributed:
f_T(t) = λe^{-λt} and P(T > t) = e^{-λt}
Here λ = 2 per minute.
Therefore f_T(t) = 2e^{-2t}

(a) P(T > 1) = ∫₁^∞ 2e^{-2t} dt = e^{-2} = 0.1353
(b) P(1 ≤ T ≤ 2) = ∫₁² 2e^{-2t} dt = e^{-2} − e^{-4} = 0.117
(c) P(T ≤ 4) = ∫₀⁴ 2e^{-2t} dt = 1 − e^{-8} = 0.99967
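These three values follow directly from P(T > t) = e^{-2t}; a one-line check of each (not part of the required working):

import numpy as np

lam = 2.0
print(np.exp(-lam * 1))                       # P(T > 1)       ~ 0.1353
print(np.exp(-lam * 1) - np.exp(-lam * 2))    # P(1 <= T <= 2) ~ 0.117
print(1 - np.exp(-lam * 4))                   # P(T <= 4)      ~ 0.9997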
13(b)(ii) Suppose that X(t) is a Gaussian process with μ_x = 2, R_XX(τ) = 5e^{-0.2|τ|}.
Find the probability that X(4) ≤ 1.
Solution: Out of syllabus.
14(a)(i) A stationary random process X(t) with mean 2 has the autocorrelation
R_xx(τ) = 4 + e^{-|τ|/10}. Find the mean and variance of Y = ∫₀¹ X(t) dt.

Solution:

E[Y] = E[∫₀¹ X(t) dt] = ∫₀¹ E[X(t)] dt = 2 ∫₀¹ dt = 2

For the time average over an interval of length T = 1,

E[Y²] = ∫_{-1}^{1} (1 − |τ|) R_xx(τ) dτ = 2 ∫₀¹ (1 − τ)(4 + e^{-τ/10}) dτ = 200e^{-0.1} − 176

Var(Y) = E(Y²) − [E(Y)]² = 200e^{-0.1} − 176 − 4 = 20[10e^{-0.1} − 9]
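Numerical check (a sketch only, assuming R_xx(τ) = 4 + e^{-|τ|/10} as read above): evaluating E[Y²] = ∫_{-1}^{1}(1 − |τ|)R_xx(τ)dτ reproduces the closed form.

import numpy as np
from scipy.integrate import quad

R = lambda tau: 4 + np.exp(-abs(tau) / 10)
EY2, _ = quad(lambda tau: (1 - abs(tau)) * R(tau), -1, 1)
print(EY2, 200 * np.exp(-0.1) - 176)             # both ~4.967
print(EY2 - 4, 20 * (10 * np.exp(-0.1) - 9))     # Var(Y), both ~0.967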

14(a)(ii) Find the power spectral density function whose autocorrelation function is given by R_xx(τ) = (A²/2) cos(ω₀τ).


Solution:
The power spectral density is

S_XX(ω) = ∫_{-∞}^{∞} R_XX(τ) e^{-iωτ} dτ

        = ∫_{-∞}^{∞} (A²/2) cos(ω₀τ) e^{-iωτ} dτ

        = (A²/2) ∫_{-∞}^{∞} cos(ω₀τ)(cos ωτ − i sin ωτ) dτ

        = (A²/4) ∫_{-∞}^{∞} [cos(ω − ω₀)τ + cos(ω + ω₀)τ] dτ − i (A²/4) ∫_{-∞}^{∞} [sin(ω + ω₀)τ + sin(ω − ω₀)τ] dτ

          {using 2 cos A cos B = cos(A − B) + cos(A + B); the sine integrals vanish since their integrands are odd}

S_XX(ω) = (A²/4) ∫_{-∞}^{∞} e^{-i(ω+ω₀)τ} dτ + (A²/4) ∫_{-∞}^{∞} e^{-i(ω−ω₀)τ} dτ   ...(**)

By the definition of the Dirac delta function,

δ(ω) = (1/2π) ∫_{-∞}^{∞} e^{-iωτ} dτ  such that  ∫_{-∞}^{∞} δ(ω) dω = 1

∴ ∫_{-∞}^{∞} e^{-i(ω+ω₀)τ} dτ = 2π δ(ω + ω₀)

∴ (**) becomes

S_XX(ω) = (πA²/2) [δ(ω + ω₀) + δ(ω − ω₀)]
14(b)(i) The cross correlation function of two processes X(t) and Y(t) is given by
AB
R XY (t , t + τ ) = {sin(ω 0τ ) + cos ω 0 (2t + τ )} where A,B and ω 0 are constants. Find
2
the cross power spectrum S XY (ω ).
Solution:
T
1
= lim
τ →∞ 2T ∫R
−T
XY (t , t + τ )dt

T
1 AB
Time average = lim
τ →∞ 2T ∫
−T
2
{sin(ω 0τ ) + cos ω 0 ( 2t + τ )}dt

AB  
T T
1 1
= lim
2 τ →∞ 2T −T
∫ {sin(ω τ )dt + τlim 2T ∫ {cos(ω
0
→∞
−T
0 ( 2t + τ )dt 

T
AB  1 
= sin(ω 0τ ) + lim sin(ω 0 ( 2t + τ ) 
2  τ →∞ 2T
 −T
AB
= sin(ω 0τ )
2



AB
S XY (ω ) = ∫
−∞
2
sin(ω 0τ )e −iωτ dτ


sin(ω 0τ )[cos ωτ − i sin ωτ ]dτ
AB
2 −∫ℵ
=

AB  

=  ∫ sin(ω 0τ )[cos ωτ − i sin ωτ ]dτ 
2 −ℵ 
AB  
∞ ∞
= ∫ sin(ω 0τ ) cos ωτdτ − i ∫ sin(ω 0τ ) sin ωτdτ 
2 −ℵ −∞ 
AB  

=  ∫ [sin(ω 0 + ω )τ ) + sin(ω 0 − ω )τ − i cos(ω − ω 0 )τ + i cos(ω + ω 0τ ]dτ 
4 −∞ 
AB  
 − i ∫ [e
−i (ω −ω0 )τ
= − e −i (ω +ω0 )τ ]dτ 
4  
− iAB
.
= [2πδ (ω − ω 0 ) − 2πδ (ω + ω 0 )]
4

− iπAB
Hence S XY (ω ) = [δ (ω − ω 0 ) − δ (ω + ω 0 )]
2
14(b)(ii) Let X(t) and Y(t) be both zero-mean and WSS random processes.Consider
the random process Z(t) defined by Z(t)=X(t)+Y(t). Find
A. The autocorrelation function and the power spectrum of Z(t) if X(t) and Y(t) are
jointly WSS.
B. The power spectrum of Z(t) if X(t) and Y(t) are orthogonal.
Solution:
The autocorrelation function and the power spectrum of Z(t) if X(t) and Y(t)
are jointly WSS.
Autocorrelation function of Z(t) is given by
RZZ (t1 , t 2 ) = E[ Z (t1 ) Z (t 2 )]
= E{( X (t1 ) + Y (t1 )}{ X (t 2 ) + Y (t 2 )}
RZZ (t1 , t 2 ) = R XX (t1 , t 2 ) + R XY (t1 , t 2 ) + RYX (t1 , t 2 ) + RYY (t1 , t 2 )
If X(t) and Y(t) are jointly WSS, then
RZZ (τ ) = R XX (τ ) + R XY (τ ) + RYY (τ ) whereτ = (t 2 − t1 )
Taking Fourier transform on both sides, we obtain
S ZZ (ω ) = S XX (ω ) + S XY (ω ) + S YX (ω ) + S YY (ω )
If X(t) and Y(t) are orthogonal, RXX (τ ) = RYX (τ ) = 0 .
Then RZZ (τ ) = R XX (τ ) + RYY (τ ) and the Fourier transform of this result gives,
S ZZ (ω ) = S XX (ω ) + S YY (ω )

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130

1
15(a)(i) Consider a system with transfer function . An input signal with
1 + jω
autocorrelation function mδ (τ ) + m 2 is fed as input to the system.Find the mean-
square value of the output.
Solution:
1
H (ω ) = . & R XX (τ ) = mδ (τ ) + m 2
1 + iω
S X (ω ) = m + 2πm 2δ (ω )
2
1
Given S Y (ω ) = H (ω ) S X (ω ) = {m + 2πm 2δ (ω )}
2

1 + iω
1
= {m + 2πm 2δ (ω )}
1+ ω 2

m −τ
RYY (τ ) is the Fourier inverse transform of S Y (ω ) . So RYY (τ ) = e
2
Hence mean of the output = RYY (∞) = m
m
Mean square value of the output = y 2 = RYY (0) = + m2
2
15(a)(ii) A stationary random process X(t) having the autocorrelation function
R XX (τ ) = Aδ (t )
is applied to a linear system at time t=0 where f (τ ) represent the impulse
function.The linear function has the impulse response of h(t ) = e − bt u (t ) where u(t)
represents the unit step function. Find RYY (τ ) .Also find the mean and variance of
Y(t).
Solution:
R XX (τ ) = Aδ (t )

RYY (τ ) = ∫R
−∞
XX (t − τ ) h(t ) dt

∫ Aδ (t − τ )e
− bt
= u (t ) dt
−∞

= A ∫ δ (t − τ )e −bt u (t ) dt
−∞

= Ae −bt
2
Y = RYY (∞) = 0
2
Y = RYY (0) = A ⇒ Variance (Y (t )) = A

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130


15(b)(b)(i) If {X(t)} is a WSS process and if Y(t)= Y (t ) = ∫ h(ξ ) X (t − ξ )dξ
−∞

then prove that


A..R XY (τ ) = R XY (τ ) * h(t )
.
B. S XY (ω ) = S XX (ω ) H * (ω )
Solution:
Cross correlation function of input X(t) and outputvY(t) is
R XY (t + τ ) = E[ X (t )Y (t + τ )]
 ∞

= E  X (t ) ∫ h(ξ ) X (t + τ − ξ )dξ 
 −∞ 

= ∫ E[ X (t ) X (t + τ − ξ )]h(ξ )dξ
−∞

If X(t) is a wide-sense stationary,



R XY (τ ) = ∫R
−∞
XX (τ − ξ )h(ξ )dξ

R XX (τ ) with h(τ )
which is the convolution of
R XY (τ ) = R XX (τ ) * h(τ )
Taking Fourier transform, S XY (ω ) = S XX (ω ) H * (ω )
15(b)(ii) If {N(t)} is a band limited white noise centered at a carrier frequency ω 0
such that
N
S NN (ω ) =  0 , for ω − ω 0 < ω B
 2
=0 elsewhere
Find the autocorrelation of {N(t)}.
Solution:
N
S NN (ω ) =  0 , for ω − ω 0 < ω B
 2
=0 elsewhere



1
R NN (τ ) =
2π ∫S
−∞
NN (ω )e iωτ dω

ω B +ω 0
1 N0
∫e
iωτ
= dω
2π 2 −ω B +ω0

1 N 0  e ( ω B + ω 0 ) iτ − e ( − ω B + ω 0 ) iτ 
=  
2π τ  2i 
N ω  sin ω Bτ 
= 0 B (cos ω 0τ + i sin ω 0τ )
π 2  ω Bτ 
Since R NN (τ ) is a real function,
N 0 ω B  sin ω Bτ 
R NN (τ ) = cos ω 0τ
π 2  ω Bτ 

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
B.E./B.Tech. DEGREE EXAMINATIONS, NOV/DEC 2013
Fourth Semester
Common to ECE/BIOMEDICAL
MA6453 – PROBABILITY AND RANDOM PROCESSES

(Regulations 2008)
Time: Three hours Maximum: 100 marks
Part A

Answer all questions

1. Define Random variable.

Solution: A real variable X whose value is determined by the outcome of a random experiment
is called a random variable.

2. Define Geometric distribution.

Solution: A random variable X is said to have a geometric distribution with parameter p if the
probability mass function is given by

P ( X = x ) = q x − 1 p, x=1,2,3.... where q=1-p.

3. The joint pdf of the RV (x,y) is given by

2)
− (x2 + y
f ( x, y ) = kxye , x > 0, y > o. Find the value of k.

Solution: By the property of the joint pdf,

∫∫_R f(x, y) dx dy = 1 ⇒ k ∫₀^∞ ∫₀^∞ xy e^{-(x²+y²)} dx dy = 1

⇒ k [∫₀^∞ x e^{-x²} dx] [∫₀^∞ y e^{-y²} dy] = 1 ⇒ k [∫₀^∞ x e^{-x²} dx]² = 1

Put x² = t ⇒ 2x dx = dt. When x = ∞, t = ∞ and when x = 0, t = 0.

k [∫₀^∞ e^{-t} (dt/2)]² = 1 ⇒ (k/4) ([−e^{-t}]₀^∞)² = 1 ⇒ k/4 = 1 ⇒ k = 4
 

2 x , 0 < x <1
f ( x) =  Find the pdf of y = 8x 3
0, elsewhere
4. Given the RV X with density function

Solution: Out of syllabus.

5.Define random process.

Solution: A random process is a collection of random variables {X(s,t)} which are functions of a real variable t (time). Here s ∈ S (the sample space), t ∈ T (the index set), and each X(s,t) is a real valued function.

6. Define Markov process.

Solution: If for t₁ < t₂ < ⋯ < tₙ < t we have

P{X(t) ≤ x / X(t₁) = x₁, X(t₂) = x₂, …, X(tₙ) = xₙ} = P{X(t) ≤ x / X(tₙ) = xₙ},

then the process {X(t)} is called a Markov process.

7. Define power spectral density function.

Solution: If {X(t)} is a stationary process with autocorrelation function R(τ), then the Fourier transform of R(τ) is called the power spectral density function of {X(t)} and is given by

S(ω) = ∫_{-∞}^{∞} R(τ) e^{-iωτ} dτ

8. State Wiener-Khinchine theorem.

Solution: Let X(t) be a real WSS process with power density spectrum S XX (ω ) . Let X T (t ) be a
portion of the process X(t) in time interval –T to T.

i.e.,

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
 X (t ) ,−T < t < T
X T (t ) = 
 0 , elsewhere

Let X T (ω ) be the Fourier transform of X T (t ) , then


S XX (ω ) =
lim 1
T → ∞ 2T
E X T (ω )
2
{ }
9. Define white noise process.

Solution: A sample function X(t) of a WSS noise random process {X(t)} is called white noise if
the power spectral density of {X(t)} is a constant at all frequencies. We denote the power
N
spectral density of white noise w(t) as S w ( f ) = 0
2

10. Define linear time invariant system.

A linear system given by y (t ) = f [ x(t )] is said to be time-invariant if


Solution:
y (t + h) = f [ x(t + h)] for any h ∈ (−∞, ∞) i.e., any time shift h in the input results in the same
shift in time of the output.

PART B

11(a)(i) Derive poisson distribution from Binomial distribution.

Solution:

Suppose in a binomial distribution,

1. The number of trials is indefinitely large, i.e., n → ∞.

2. p,the probability of success in each trial is very small i.e., p → 0.


3. np (= λ, say) is finite, so that p = λ/n and q = 1 − p = 1 − λ/n.
Now for binomial distribution
) nCx p x q n − x ,
P( X= x= =
x 0,1, 2,.....n
n(n − 1)(n − 2)...(n − x + 1) p x q n − x
=
x!
n− x
n(n − 1)(n − 2)...(n − x + 1)  λ   λ
x

=   1 − 
x! n  n
λ x   λ 
−x
1  2   x − 1  λ 
n

= 1 − 1 −  ... 1 − 1 −   1 −  


x !  n  n   n  n   n  
Taking limit on both sides,

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
λ  1  2   x − 1   λ  n  λ  − x 
x
lim p=
( x) lim 1 − 1 −  ... 1 −  1 −  1 −  
n →∞ x ! n →∞  n  n   n   n   n  
λx
= = e−λ for x 0,1, 2...
x!
which is the p.m.f. of the Poisson distribution.
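The limiting process can also be seen numerically: with np = 2 held fixed, the binomial pmf approaches the Poisson(2) pmf as n grows (an illustrative sketch, assuming scipy).

from scipy.stats import binom, poisson

lam = 2.0
for n in (10, 100, 10_000):
    p = lam / n                                               # keep np = lam fixed
    print(n, [round(binom.pmf(k, n, p), 4) for k in range(5)])
print("Poisson", [round(poisson.pmf(k, lam), 4) for k in range(5)])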
(ii) Find mean and variance of Gamma distribution.

Solution:

To find the M.G.F. of Gamma distribution



M_X(t) = E[e^{tX}] = ∫_{-∞}^{∞} e^{tx} f(x) dx

       = ∫₀^∞ e^{tx} e^{-x} x^{λ−1} / Γ(λ) dx

       = (1/Γ(λ)) ∫₀^∞ e^{-(1−t)x} x^{λ−1} dx

       = (1/Γ(λ)) · Γ(λ)/(1 − t)^λ = 1/(1 − t)^λ

To find the mean and variance of Gamma distribution;

μ₁′ = E[X] = [d/dt M_X(t)]_{t=0} = [d/dt (1 − t)^{-λ}]_{t=0} = [−λ(1 − t)^{-λ-1}(−1)]_{t=0} = λ

Mean = λ

μ₂′ = E[X²] = [d²/dt² M_X(t)]_{t=0} = [d/dt λ(1 − t)^{-λ-1}]_{t=0} = [λ(λ + 1)(1 − t)^{-(λ+2)}]_{t=0} = λ(λ + 1)

Variance(X) = E[X²] − [E(X)]² = λ(λ + 1) − λ² = λ
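Quick check (not part of the derivation): scipy's gamma distribution with shape λ and scale 1 has mean λ and variance λ, in agreement with the result above.

from scipy.stats import gamma

lam = 3.5                                  # an arbitrary shape value for the check
print(gamma.mean(lam), gamma.var(lam))     # both equal lam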
(OR)

(b)(i) Suppose that a customer arrive at a bank according to poisson process with a mean rate of
3 per minute.Find the probability that during a time interval of 2 minutes.

A. Exactly 4 customers arrive.

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
B.More than 4 customers arrive.

Solution:
λ = 3 per minute, t = 2, so λt = 6.

P(X(t) = n) = e^{-λt} (λt)ⁿ / n!

(1) P(X(2) = 4) = e^{-6} 6⁴ / 4! = 0.1338

(2) P(X(2) > 4) = 1 − [P(X(2) ≤ 3) + P(X(2) = 4)] = 1 − (0.1512 + 0.1338) = 0.715
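Verification sketch with λt = 6 (not part of the required working, assuming scipy):

from scipy.stats import poisson

mu = 6                                # lambda * t
print(poisson.pmf(4, mu))             # P(X(2) = 4) ~ 0.1338
print(poisson.sf(4, mu))              # P(X(2) > 4) ~ 0.715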

(ii) If X and Y are independent RVs each normally distributed with mean zero and variance σ 2 .

Y 
Find the pdf of R= X 2 + Y 2 & φ=
tan −1  
X

Solution: The joint pdf is


 x2 + y2 
− 
1  2σ 2
f ( x, y ) e  
− ∞ < x, y < ∞
2πσ 2

Y 
Given R =
X 2 +Y2 & φ=
tan −1  
X

⇒ x = R cos θ, y = R sin θ
Range space: −∞ < x < ∞, −∞ < y < ∞ ⇒ 0 ≤ R < ∞, 0 ≤ θ ≤ 2π

J = |∂x/∂R  ∂x/∂θ; ∂y/∂R  ∂y/∂θ| = |cos θ  −R sin θ; sin θ  R cos θ| = R
Now the joint pdf of R and θ is g(R, θ) = f(x, y)|J| = (R / 2πσ²) e^{-R²/2σ²}, R ≥ 0, 0 ≤ θ ≤ 2π.

To find the marginal density function of R


g(R) = ∫₀^{2π} (R / 2πσ²) e^{-R²/2σ²} dθ = (R / 2πσ²) e^{-R²/2σ²} [θ]₀^{2π} = (R/σ²) e^{-R²/2σ²}, R ≥ 0

Which is the pdf of Rayleigh distribution.

The marginal pdf of θ is


g(θ) = ∫₀^∞ (R / 2πσ²) e^{-R²/2σ²} dR

Let R²/2σ² = t. When R = 0, t = 0; when R = ∞, t = ∞; and (R/σ²) dR = dt.

g(θ) = (1/2π) ∫₀^∞ e^{-t} dt = (1/2π) [−e^{-t}]₀^∞ = −(1/2π)[0 − 1] = 1/2π, 0 ≤ θ ≤ 2π

Which is the pdf of uniform distribution in (0, 2π )
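A simulation sketch (illustration only) of the two marginals: R behaves like a Rayleigh variable and the angle is uniform over an interval of length 2π (numpy's arctan2 returns it on (−π, π] rather than (0, 2π)).

import numpy as np

rng = np.random.default_rng(1)
sigma = 2.0
X = rng.normal(0, sigma, 500_000)
Y = rng.normal(0, sigma, 500_000)
R = np.hypot(X, Y)                            # sqrt(X^2 + Y^2)
phi = np.arctan2(Y, X)

print(R.mean(), sigma * np.sqrt(np.pi / 2))   # Rayleigh mean is sigma*sqrt(pi/2)
print(np.histogram(phi, bins=8)[0])           # roughly equal counts -> uniform angle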

(ii) If X and Y are independent RV’s with pdf’s e − x ; x ≥ 0 & e − y ; y ≥ 0, respectively.Find the
X
pdf’s of U = & V = X + Y . Are U and V independent?
X +Y

Solution:

Given f ( x) = e − x ; x ≥ 0 & f ( y ) = e − y ; y ≥ 0,

The joint pdf of X and Y is f ( x, y ) = e − ( x + y ) ; x ≥ 0 & y ≥ 0,

X
Given U = &V = X + Y.
X +Y

x = u( x + y) v= x+ y
Hence x = uv y = v(1 − u )
∂x ∂x
J = ∂u ∂v = v u
= v(1 − u ) + uv = v
∂y ∂y − v 1 − u
∂u ∂v

The joint pdf of (U,V) is given by

g (u , v) = f ( x, y ) J = e − ( x + y ) v = ve − (uv + v (1−u ) = ve − v

Range space:

Given x ≥ 0 & y ≥ 0 ⇒ uv ≥ 0 & u − uv ≥ 0 ⇒ uv ≥ 0, v > uv ⇒ v ≥ 0

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
v(1 − u ) ≥ 0 ⇒ 1 − u ≥ 0 ⇒ u ≤ 1
uv ≥ 0, v ≥ 0 ⇒ u ≥ 0
∴ 0 ≤ u ≤1 & v ≥ 0
∴ g (u , v) = ve −v , 0 ≤ u ≤ 1 & v ≥ 0

The pdf of U is
∞ ∞
g u (u ) = ∫ g (u , v)dv = ∫ ve −v dv
0 0

 ve −v e −v 
= − 2 
 (−1) (−1)  0
[
= − ve −v − e −v ]

0 = −[0 − 1] = 1
g (u ) = 1, 0 ≤ u ≤1

The pdf of V is
1 1
g v (v) = ∫ g (u , v)du = ∫ ve −v du
0 0

= ve −v [u ]0
1

= ve −v , v ≥ 0

g (u , v) = g (u ).g (v).

Hence U and V are independent.

(OR)

(b) The joint probability mass function of (X,Y) is given by p( x, y ) =k (2 x + 3 y ), x =0,1, 2; y =1, 2,3.
Find all the marginal and conditional probability distributions. Also find the probability
distribution of (X+Y)

Solution:

x/y 1 2 3 P(x)
0 3k 6k 9k 18k
1 5k 8k 11k 24k
2 7k 10k 13k 30k
P(y) 15k 24k 33k 72k

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
1
We know that ∑p ij =1 ⇒ 72k =1 ⇒ k =
72

Hence the joint probability function is given by

x/y 1 2 3 P(x)
0 3 6 9 18
72 72 72 72
1 5 8 11 24
72 72 72 72
2 7 10 13 30
72 72 72 72
P(y) 15 24 33 1
72 72 72

(i) The marginal distributions of X and Y


X=x 0 1 2
P(X=x) 18 24 30
72 72 72

Y=y 1 2 3
P(Y=y) 15 24 33
72 72 72

(ii) To find the probability distribution of X+Y


X+Y p
1 3
(0,1) 72
2 6
+
5 11
=
(0,2),(1,1) 72 72 72
3 9
+
8
+
7 24
=
(0,3),(1,2),(2,1) 72 72 72 72
4 11 10 21
+ =
(1,3),(2,2) 72 72 72
5 13
(2,3) 72
Total 1
(iii) The conditional distribution of X given Y: P  X = x Y = y 
 

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
P(= X 0,= Y 1) 3 15 1
P( X= 0 Y= 1)= = =
P() 72 72 5
P(=X 1,= Y 1) 5 15 1
P( X= 1 Y= 1)= = =
P(Y = 1) 72 72 3
P(= X 2,= Y 1) 7 15 7
P( X= 2 Y= 1)= = =
P(Y = 1) 72 72 15
P(= X 0,= Y 2) 6 24 1
P( X= 0 Y= 2)
= = =
P(Y = 2) 72 72 4
P(= X 1,= Y 2) 8 24 1
P( X= 1 Y= 2)
= = =
P (Y = 2) 72 72 3
P ( X = 2, Y = 2) 10 24 5
P ( X= 2 Y= 2)
= = =
P(Y = 2) 72 72 12
P(= X 0,= Y 3) 9 33 9
P( X= 0 Y= 3)= = =
P (Y = 3) 72 72 33
P (=X 1,= Y 3) 11 33 1
P ( X= 1 Y= 3)= = =
P (Y = 3) 72 72 3
P (=X 2,= Y 3) 13 33 13
P ( X= 2 Y= 3)= = =
P (Y = 3) 72 72 33

The conditional distribution of Y given X is P Y = y X = x 


 

P= (Y 1,= X 0) 3 18 1
P(Y= 1 X= 0)
= = =
P( X = 0) 72 72 6
P= (Y 2,= X 0) 6 18 1
P(Y= 2 X= 0)
= = =
P( X = 0) 72 72 3
P=(Y 3,= X 0) 9 18 1
P(Y= 3 X= 0)
= = =
P( X = 0) 72 72 2
P=(Y 1,= X 1) 5 24 5
P(Y= 1 X= 1)= = =
P ( X = 1) 72 72 24
P= (Y 2,= X 1) 8 24 1
P (Y= 2 X= 1)= = =
P ( X = 1) 72 72 3
P (Y = 3, X = 1) 11 24 11
P (Y= 3 X= 1)= = =
P( X = 1) 72 72 24
P= (Y 1,= X 2) 7 30 7
P(Y= 1 X= 2)
= = =
P( X = 2) 72 72 30
P= (Y 2,= X 2) 10 30 1
P(Y= 2 X= 2)
= = =
P( X = 2) 72 72 3
P=(Y 3,= X 2) 13 30 13
P(Y= 3 X= 2)
= = =
P( X = 2) 72 72 30

The conditional distributions can be tabulated as below:

X 0 1 2
Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130
Agni college of Technology
Chennai – 130
P=
( X x=
Y 1) 3 15 1 5 15 1 7 15 7
= = =
72 72 5 72 72 3 72 72 15
Conditional distribution of X Y = 1

The conditional distribution of X given Y = 2

X 0 1 2
P=
( X x=
Y 2) 6 24 1 8 24 1 10 24 5
= = =
72 72 4 72 72 3 72 72 12

The conditional distribution of X given Y = 3

X 0 1 2
P=
( X x=
Y 3) 9 1 13
33 3 33

The conditional distribution of Y given X = 0

Y 1 2 3
P=
(Y y=
X 0) 3 18 1 6 18 1 9 18 1
= = =
72 72 6 72 72 3 72 72 2

The conditional distribution of Y given X = 1

Y 1 2 3
P=
(Y y=
X 1) 5 1 11
24 3 24

The conditional distribution of Y given X = 2

Y 1 2 3
P=
(Y y=
X 2) 7 1 13
30 3 30
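The tables above can be reproduced mechanically; a short sketch with exact fractions (illustrative only):

from fractions import Fraction

p = {(x, y): Fraction(2 * x + 3 * y) for x in range(3) for y in range(1, 4)}
total = sum(p.values())                                   # 72, so k = 1/72
p = {xy: v / total for xy, v in p.items()}

px = {x: sum(p[(x, y)] for y in range(1, 4)) for x in range(3)}   # marginal of X: 18/72, 24/72, 30/72
py = {y: sum(p[(x, y)] for x in range(3)) for y in range(1, 4)}   # marginal of Y: 15/72, 24/72, 33/72
print(px, py)
print({x: p[(x, 1)] / py[1] for x in range(3)})           # conditional distribution of X given Y = 1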

13(a)(i) If the two random variables Ar and Br are uncorrelated with zero mean and
( Ar2 ) E=
E= ( Br2 ) σ rs

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
n
=
Show that the process x(t ) ∑ ( A cos ω t + B
r =1
r r r sin ωr t ) is wide sense stationary.

E{ X (t )} E ( A cos ωt + B sin ωt )
=
Solution: Mean == cos ωtE ( A) + sin ωtE ( B)
= cos ωt (0) + sin ωt (0) = 0

Hence mean of the process is a constant.

Auto correlation

= R=
XX (t1 , t 2 ) E ( X (t1 ) X (t2 )
E [ ( A cos ωt1 + B sin ωt1 )( A cos ωt2 + B sin ωt2 ) ]
=
= E ( A2 cos ωt1 cos ωt2 + AB cos ωt1 sin ωt2 + BA sin ωt1 cos ωt2 + B 2 sin ωt1 sin ωt2 )
σ 2 (cos ωt1 cos ωt2 + sin ωt1 sin ωt2 )
= since E(A 2 ) = σ 2 & E ( AB) =
E(B2 ) = 0
= σ 2 cos ω (t1 − t2 )

Auto correlation is a function of t1 − t2 .

Hence {X(t) is a WSS process.

13(a)(ii)

If{𝑋𝑋(𝑡𝑡)} is Gaussian process with 𝜇𝜇(𝑡𝑡) = 10 and C(𝑡𝑡1 , 𝑡𝑡2 ) = 16𝑒𝑒 − |𝑡𝑡 1 −𝑡𝑡 2 | . Find the probability that
(1) 𝑋𝑋(10) ≤ 8 and (2) |𝑋𝑋(10) − 𝑋𝑋(6)| ≤ 4
Since {X(t)} is a Guassian process, any member of {X(t)} is a normal random
variable
By definition C(t 1 ,t 2 )= R(t 1 ,t 2 )-E[X(t 1 )].E[X(t 2 )]
C(t 1 ,t 2 )= Var(X(t 1 ))
C(t 1 ,t 2 )= Var {X(t)}
Now X (10) is a normal random variable with mean µ(10)=10 and variance
C(10,10) =16
(a). To find P(x(10)≤8),we have
𝑋𝑋(10)−10 8−10
P(X(10)≤8)=P� ≤ �
4 4
=P[z≤-0.5]
=0.5-P[0≤X≤0.5]
=0.5-0.1915(from normal tables)
=0.3085
(b).To find P(|𝑋𝑋(10) − 𝑋𝑋(6)| ≤ 4)let U=X(10)-X(6).
Here U is also a Random variable and we have
E(U)=E[X(10)-X(6)]
=10-10
=0
Var(U)= Var[X(10)-X(6)]

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
=Var[X(10)] + Var[X(6)] − 2Cov[X(10), X(6)]
=Cov(10,10)+Cov(6,6)-2Cov(10,6)
=16 𝑒𝑒 −|10−10| + 16𝑒𝑒 −|6−6| − 2 × 16𝑒𝑒 −|10−6|
=16+16-2×16e-4
= 31.4139
𝜎𝜎𝑢𝑢=√31.4139
=5.6048
Now P(|X(10) − X(6)| ≤ 4)=P(|𝑈𝑈| ≤ 4)
= P(-4≤U≤4)
𝑈𝑈−𝐸𝐸(𝑈𝑈)
Where Z=
𝜎𝜎𝜎𝜎
−4−0 4−0
= P� ≤ 𝑍𝑍 ≤ �
5.6048 5.6048
= 2× P[0≤Z≤0.7137]
=2×0.2611
=0.5222

13(b)(i) Define Random telegraph signal process and prove that it is wide sense stationary.

Solution: Let {N (t ), t ≥ 0}, denote a poisson process, and let X 0 be independent of this process
1
and be such that P{ X 0 =1} =P{ X 0 = (t ) X 0 (−1) N ( t ) then { X (t ), t ≥ 0} is called
−1} = . Defining X =
2
random telegraph signal process.

Let 𝑌𝑌(𝑡𝑡) = 𝛼𝛼 𝑋𝑋(𝑡𝑡)

1
𝑃𝑃(𝛼𝛼 = 1) = 𝑃𝑃(𝛼𝛼 = −1) =
2
By the definition 𝐸𝐸(𝛼𝛼) = 0, 𝐸𝐸(𝛼𝛼 2 ) = 1

To prove Y(t) is a WSS process

To Prove (i) 𝐸𝐸[𝑌𝑌(𝑡𝑡)] = 𝑎𝑎 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐

(ii) 𝑅𝑅(𝑡𝑡1 , 𝑡𝑡2 ) = 𝑎𝑎 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 𝑜𝑜𝑜𝑜 𝑡𝑡1 − 𝑡𝑡2

(i) 𝐸𝐸[𝑌𝑌(𝑡𝑡)] = 𝐸𝐸[𝛼𝛼𝛼𝛼(𝑡𝑡)]

= 𝐸𝐸(𝛼𝛼)𝐸𝐸[𝑋𝑋(𝑡𝑡)] since 𝛼𝛼 𝑎𝑎𝑎𝑎𝑎𝑎 𝑋𝑋(𝑡𝑡)𝑎𝑎𝑎𝑎𝑎𝑎 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖

= 0 = 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐

(ii) 𝑅𝑅(𝑡𝑡1 , 𝑡𝑡2 ) = 𝐸𝐸[𝑌𝑌(𝑡𝑡1 )𝑌𝑌(𝑡𝑡2 )] = 𝐸𝐸[𝛼𝛼𝛼𝛼(𝑡𝑡1 )𝛼𝛼𝛼𝛼(𝑡𝑡2 )]

= 𝐸𝐸[𝛼𝛼 2 ]𝐸𝐸[𝑋𝑋(𝑡𝑡1 )𝑋𝑋(𝑡𝑡2 )]

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
−2𝜆𝜆|𝑡𝑡 1 −𝑡𝑡 2 |
= 1 × 𝑒𝑒

= 𝑎𝑎 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 𝑜𝑜𝑜𝑜 𝑡𝑡1 − 𝑡𝑡2

∴ 𝑌𝑌(𝑡𝑡) 𝑖𝑖𝑖𝑖 𝑎𝑎 𝑊𝑊𝑊𝑊𝑊𝑊

13(b)(ii) Prove that the sum of two independent poisson process is a poisson process.

To prove that the sum of two poisson processes is a poisson process.

Let X(t)=𝑋𝑋1 (𝑡𝑡) + 𝑋𝑋2 (𝑡𝑡)

𝑒𝑒 −⋋1𝑡𝑡 (⋋1 𝑡𝑡)𝑛𝑛


where 𝑃𝑃(𝑋𝑋1 (𝑡𝑡) = 𝑛𝑛) = , 𝑛𝑛 = 0,1,2 …
𝑛𝑛 !

𝑒𝑒 −⋋2 𝑡𝑡 (⋋2 𝑡𝑡)𝑛𝑛


𝑃𝑃(𝑋𝑋2 (𝑡𝑡) = 𝑛𝑛) = , 𝑛𝑛 = 0,1,2 …
𝑛𝑛!
𝑛𝑛

𝑃𝑃(𝑋𝑋(𝑡𝑡) = 𝑛𝑛) = � 𝑃𝑃(𝑋𝑋1 (𝑡𝑡) = 𝑟𝑟). 𝑃𝑃(𝑋𝑋2 (𝑡𝑡) = 𝑛𝑛 − 𝑟𝑟)


𝑟𝑟=0

By independence of 𝑋𝑋1 (𝑡𝑡) & 𝑋𝑋2 (𝑡𝑡)

𝑒𝑒 −⋋1 𝑡𝑡 (⋋1 𝑡𝑡)𝑟𝑟 𝑒𝑒 −⋋2 𝑡𝑡 (⋋2 𝑡𝑡)𝑛𝑛 −𝑟𝑟


= ∑𝑛𝑛𝑟𝑟=0 .
𝑟𝑟! (𝑛𝑛−𝑟𝑟)!

𝒏𝒏
(⋋1 𝑡𝑡)𝒓𝒓 (⋋2 𝑡𝑡)𝒏𝒏−𝒓𝒓
= 𝑒𝑒 −⋋1 𝑡𝑡 𝑒𝑒 −⋋2 𝑡𝑡 � 𝒏𝒏!
𝒓𝒓=𝟎𝟎 𝒏𝒏𝒄𝒄𝒓𝒓

𝒏𝒏
−𝒕𝒕(⋋𝟏𝟏 +⋋𝟐𝟐 )
(⋋1 𝑡𝑡)𝒓𝒓 (⋋2 𝑡𝑡)𝒏𝒏−𝒓𝒓
= 𝒆𝒆 � 𝒏𝒏𝒄𝒄𝒓𝒓
𝒏𝒏!
𝒓𝒓=𝟎𝟎

𝒆𝒆−𝒕𝒕(⋋𝟏𝟏 +⋋𝟐𝟐 ) (⋋𝟏𝟏 𝒕𝒕 +⋋𝟐𝟐 𝒕𝒕)𝒏𝒏


=
𝒏𝒏!

𝒆𝒆−𝒕𝒕(⋋𝟏𝟏+⋋𝟐𝟐) ((⋋𝟏𝟏 +⋋𝟐𝟐 )𝒕𝒕)𝒏𝒏


=
𝒏𝒏!

Hence 𝑋𝑋1 (𝑡𝑡) + 𝑋𝑋2 (𝑡𝑡) is a poisson process with parameter (⋋𝟏𝟏 +⋋𝟐𝟐 )

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
14(a)(i) The autocorrelation function of the random telegraph signal process is given by
−2 τ
R(τ ) = a 2 e . Determine the power density spectrum of the random telegraph signal.

Solution:

Power spectral density is


∫ R(τ )e
−iωτ
S (ω ) = dτ
−∞

τ
= a2 ∫ e
−2
e −iωτ dτ .
−∇

βτ
∫e
−2
= a2 (cos ωτ − i sin ωτ )dτ
−∞
∞∞

∫e
− 2σ
= 2a 2 cos τdατ
0

 e − 4ατ 
= 2a  2
(−2α cos ωτ + ω sin ωτ ) =
 4α + ω
2 2
0
 1 
= 2a 2  − (−2α )
 4α + αω 
2 2

4a 2 ε
=
4α 2 + ω 2

(ii) The autocorrelation function of the poisson increment process is given by

{
R(τ ) = λ2 for τ >∈
λ  τ 
= λ2 + 1 −  for τ ≤∈
∈  ∈ 

Prove that its spectral density is

4λ sin 2 (ω ∈ 2)
S (ω ) = 2πλ2δ (ω ) +
∈2 ω 2

Solution:

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130

S (ω ) = ∫ R(τ )e − iωτ
−∞
∈  λ  τ  −∈ ∞
= ∫ λ2 + 1 − e − iωτ dτ + ∫ λ2 e − iωτ dτ + ∫ λ2 e − iωτ dτ
− ∈ ∈  ∈  −∞ ∈
∞ λ ∉  τ  − iωτ
= ∫ λ2 e − iωτ dτ + ∫ 1− e dτ
−∞ ∈ − ∈ ∈ 

2λ ∞  τ 
= F (λ 2 ) + ∫ 1 −  cos ωτdτ
∈ 0  ∈

2 2λ  τ  sin ωτ 1  − cos ωτ 
= F (λ ) + 1 −  +  
∈  ∈  ω ω  ω 2 
0
2λ 1 − cos ω ∈
= F (λ 2 ) +  
∈  ω 2 
 ω ∈
4λ sin 2  
2
= F (λ ) +  2 
∈2 ω 2
4λ sin 2 (ω ∈ 2)
= 2πλ2δ (ω ) +
∈2 ω 2

(b)(i) If the power spectral density of a WSS process is given by

b
 (a − ω ),
S (ω ) = ω ≤a
a
= 0, ω >a

Find the autocorrelation function of the process.

Solution:

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130

1
∫ S (ω )e
iωτ
R(τ ) = dω
2π −∞
a

∫ a ( a − ω ) ( cos ωτ + i sin ωτ ) dω
1 b
=
2π −a
a
2 b
=
2π ∫ a ( a − ω ) cos ωτ dω
0
a
b b
=
aπ ∫ a ( a − ω ) cos ωτ dω
0
a
b   sin ωτ  cos ωτ 
=( a − ω )  −
aπ   τ  τ 2  0
b
= (1 − cos aτ )
aπτ 2
2b  aτ 
= sin 2  
aπτ 2
 2 

(ii) If the process {X(t)}=Y(t)Z(t) where {Y(t)} and {Z(t)} are independent WSS processes,
prove that

A. Rxx (τ ) = Ryy (τ ) Rzz (τ )



1
B.Sxx (ω )
= ∫S (α ) S zz (ω − α ) dα

yy
−∞

Solution:

RXX (τ ) E [Y (t ).Z (t ).Y (t + τ ) Z (t + τ ) ]


=
= E [Y (t ).Y (t + τ ).Z(t) Z (t + τ ) ]
=E [Y (t ).Y (t + τ )]E[Z(t) Z (t + τ ) ]
= RYY (τ ) RZZ (τ )


S XX (ω ) = ∫R
−∞
XX (τ )e − iωτ dτ


1
RXX (τ ) =
2π ∫S
−∞
XX (ω )eiωτ d ω

∞ 
F −1  ∫ SYY (α ) S ZZ (ω − α )eiωτ dα 
 −∞ 
Consider ∞ ∞
1
=
2π ∫ ∫S
−∞ −∞
YY (α ) S ZZ (ω − α )eiωτ dα d ω

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
α − y & ω −α = z
α y αz 1 0
dα d ω =
= dydz dydz
ω y ωz 1 1
Putting ∞ 
∴ F −1  ∫ SYY (α ) S ZZ (ω − α )dα  = 2π RYY (τ ) RZZ (τ )
 −∞ 
1  

F [ RYY (τ ) RZZ (τ )]
=  ∫ SYY (α ) S ZZ (ω − α )dα 
2π  −∞ 

From the above relation, we get S ZZ (ω ) in the above form,

15(a)(i)

If 𝑌𝑌(𝑡𝑡) = 𝐴𝐴 cos(𝜔𝜔0 𝑡𝑡 + 𝜃𝜃) + 𝑁𝑁(𝑡𝑡), where A is a constant, 𝜃𝜃 is a random variable with uniform
distribution in (−𝜋𝜋, 𝜋𝜋)and N(t) is a band-limited Gaussian white noise with a power spectral
𝑁𝑁0
𝑓𝑓𝑓𝑓𝑓𝑓 |𝜔𝜔 − 𝜔𝜔0 | < 𝜔𝜔𝐵𝐵
density 𝑆𝑆𝑁𝑁𝑁𝑁 (𝜔𝜔) = � 2 Fnd the power spectral density of Y(t). Assume that
0, 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒ℎ𝑒𝑒𝑒𝑒𝑒𝑒
N(t) and 𝜃𝜃 are independent.
𝑌𝑌(𝑡𝑡)𝑌𝑌(𝑡𝑡 + 𝜏𝜏) = [𝐴𝐴 cos(𝜔𝜔0 𝑡𝑡 + 𝜃𝜃) + 𝑁𝑁(𝑡𝑡)][𝐴𝐴 cos(𝜔𝜔0 (𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃) + 𝑁𝑁(𝑡𝑡 + 𝜏𝜏)]
2
= 𝐴𝐴 cos(𝜔𝜔𝜔𝜔 + 𝜃𝜃) cos(𝜔𝜔0 (𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃) + 𝑁𝑁(𝑡𝑡)𝑁𝑁(𝑡𝑡 + 𝜏𝜏) + 𝐴𝐴 cos(𝜔𝜔0 𝑡𝑡 + 𝜃𝜃)𝑁𝑁(𝑡𝑡 + 𝜏𝜏)
+ 𝐴𝐴 cos(𝜔𝜔0 (𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃)𝑁𝑁(𝑡𝑡)
𝑅𝑅𝑌𝑌𝑌𝑌 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐴𝐴2 E[cos(𝜔𝜔0 𝑡𝑡 + 𝜃𝜃) cos(𝜔𝜔0 (𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃)] + 𝐸𝐸[𝑁𝑁(𝑡𝑡)𝑁𝑁(𝑡𝑡 + 𝜏𝜏)]
+ 𝐴𝐴 𝐸𝐸[cos(𝜔𝜔0 𝑡𝑡 + 𝜃𝜃)] 𝐸𝐸[𝑁𝑁(𝑡𝑡 + 𝜏𝜏)] + 𝐴𝐴 𝐸𝐸[cos(𝜔𝜔0 (𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃)] 𝐸𝐸[𝑁𝑁(𝑡𝑡)]
Since 𝑁𝑁(𝑡𝑡) 𝑎𝑎𝑎𝑎𝑎𝑎 𝜃𝜃 are independent.
By hypothesis 𝐸𝐸[𝑁𝑁(𝑡𝑡)] = 0, 𝐸𝐸[𝑁𝑁(𝑡𝑡 + 𝜏𝜏) = 0
𝐴𝐴2
∴ 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝑐𝑐𝑐𝑐𝑐𝑐𝜔𝜔0 𝜏𝜏 + 𝑅𝑅𝑁𝑁𝑁𝑁 (𝜏𝜏)
2
𝐴𝐴2 ∞
𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = � 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑 + 𝑆𝑆𝑁𝑁𝑁𝑁 (𝜔𝜔)
2 −∞
𝜋𝜋𝐴𝐴2
= [𝛿𝛿(𝜔𝜔 − 𝜔𝜔0 ) + 𝛿𝛿(𝜔𝜔 − 𝜔𝜔0 )] + 𝑆𝑆𝑁𝑁𝑁𝑁 (𝜔𝜔)
2

15(a)(ii)Prove that the spectral density of two WSS process is non-negative.

Solution: Since the mean square value is always positive, PSD is also positive.

5. 15(b) If X(t) is the input voltage to a circuit (system) and Y(t) is the output
−α τ
voltage.{X(t)} is a stationary process with µ x = 0 and Rxx (τ ) = e . Find
R
µ y , S yy and Ryy (τ ) , if the power transfer function is H (ω ) = .
R + iLω

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130

Y (t ) =
−∞
∫ h(u ) X (t − u )du
Sol: (i) We know that

∴ E [Y (t )] = ∫ h(u ) E[ X (t − u )]du
−∞

Since X(t) is stationary with mean 0, E[ X (t )] = 0 for all t ∴ E[ X (t − u )] = 0 ∴ E[Y (t )] = 0


S XX (ω ) = ∫R
−∞
XX (τ )e −iωτ dτ
(ii)We know that

∫e
−α τ
= e −iωτ dτ
−∞
0 ∞
= ∫e e ατ −iωτ
dτ + ∫ e −ατ e −iωτ dτ
−∞ 0
0 ∞
= ∫ e (α −iω )τ dτ + ∫ e (α +iω )τ dτ
−∞ 0
0 ∞
 e (α −iω )τ   e − (α +iω )τ 
=  +  
 α − iω  −∞  − (α + iω )  0
1 1
= [1 − 0] − [0 − 1]
α − iω α + iω
1 1 α + iω + α − iω 2α
= + = = 2
α − iω α + iω (α − iω )(α + iω ) α + ω 2

R
Given H (ω ) = .
R + iLω

We know that S yy (ω ) = H (ω ) S xx (ω )
2

2
R2 2α
=
R + iLω α + ω 2
2

R2 2α
= . 2
R + L ω α +ω2
2 2 2

(iii) The autocorrelation function of Y(t) is

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130

1
RYY (τ ) =
2π −∞
∫S YY (ω )e iωτ dω


1 2αR 2 e iωτ
=
2π ∫ 2 2 2 2 2 dω
.
−∞ ( R + L ω ) α + ω

αR 2 1 e iωτ
=
π ∫ 2 2 2 2 2 dω
.
−∞ ( R + L ω ) α + ω

1
First we shall write ( R + L ω )(α + ω ) as partial fraction, treating
2 2
2 2 2

ω2 as u. We shall write the special partial fraction as


1 A B
= 2 + 2
( R + L ω )(α + ω ) R + L ω
2 2 2 2 2 2 2
α +ω2

R2 L2
Put u = − we get A =
L2 α 2 L2 − R 2
1
Put u = −α 2 we get B = 2
R − L2α 2
L2 1
∴ 2
1
= α L − R + R − L2α 2
2 2 2 2

( R + L2ω 2 )(α 2 + ω 2 ) R 2 + L2ω 2 α 2 +ω2


L2 1 1 1
= + 2
α L −R R +Lω
2 2 2 2 2 2
R − L α α +ω2
2 2 2

∞ ∞
αR 2 L2 1 αR 2 1
∫ ∫
iτω
∴ RYY (τ ) = .e dω − .e iτω dω
π (α L − R ) −∞ ( R + L ω )
2 2 2 2 2 2
π (α L − R ) −∞α + ω 2
2 2 2 2

∞ ∞
αR 2 L2 1 1 αR 2 1
∫ ) ∫α
iτω
= .e dω − .e iτω dω
π (α L − R ) L2
2 2 2
R 2
π (α L − R
2 2 2 2
+ω 2
+ω2) −∞
( −∞
L2
By Contour integration, we know that

e imz π − ma
∫−∞ z 2 + a 2 dz = a e , m > 0

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
αR L π −τ
2 2
αR 2 π −α τ
∴ RYY (τ ) = e − e
π (α L − R )  R 
2 2 2
π (α L − R ) α
2 2 2
 
L
αLR −τ R R2 −α τ
= e  − 2 2 e
α 2 L2 − R 2  
L α L − R 2

αLR −τ R R2 −α τ
= e  − e
L
2 2
R R
L2 (α 2 − ) L2 (α 2 − )
L2 L2
2
R R
α   
L −τ  R 
 e  −   2 e
L −α τ
=
L
2
R R
α2 −  α2 − 
L L

R
   − R  τ  R  −α τ 
L
= αe  L  −  e 
L
2
R  
α2 − 
L

______________________________________________________________________________

Prepared by Department of Mathematics, Agni College of Technology, Chennai - 130


Agni college of Technology
Chennai – 130
B.E./B.Tech. DEGREE EXAMINATIONS, MAY/JUNE 2013
Fourth Semester
Common to ECE/BIOMEDICAL
MA6453 – PROBABILITY AND RANDOM PROCESSES
(Regulations 2013)
Time: Three hours Maximum: 100 marks
Answer ALL Questions
Part A – (10x2=20 marks)
1. A random variable X has cdf
0 ∶ 𝑥𝑥 < 1
1
𝐹𝐹𝑥𝑥 (𝑥𝑥) = � (𝑥𝑥 − 1) ∶ 1 ≤ 𝑥𝑥 < 3{
2
1 ∶ 𝑥𝑥 ≥ 3
Find the pdf of X and the expected value of X.
1
, 1 ≤ 𝑥𝑥 ≤ 3
Solution: 𝑓𝑓(𝑥𝑥) = � 2
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒 ,
The expected value of X =

𝐸𝐸(𝑋𝑋) = � 𝑥𝑥𝑥𝑥(𝑥𝑥)𝑑𝑑𝑑𝑑
𝑥𝑥
1 3 1 𝑥𝑥 2
=
∫ 𝑥𝑥𝑥𝑥𝑥𝑥 = � �
2 1 2 2
1 9 1 1 8
= � − �= � �=2
2 2 2 2 2
2. Find the moment generationg function of the Binomial distribution.
Solution: 𝑀𝑀𝑥𝑥 (𝑡𝑡) = 𝐸𝐸(𝑒𝑒 𝑡𝑡𝑡𝑡 ) = ∑𝑛𝑛𝑥𝑥=0 𝑒𝑒 𝑡𝑡𝑡𝑡 𝑝𝑝(𝑥𝑥)
= ∑𝑛𝑛𝑥𝑥=0 𝑒𝑒 𝑡𝑡𝑡𝑡 𝑛𝑛𝑐𝑐𝑥𝑥 𝑝𝑝 𝑥𝑥 𝑞𝑞 𝑛𝑛−𝑥𝑥

𝑛𝑛

= � 𝑛𝑛𝑐𝑐𝑥𝑥 (𝑒𝑒 𝑡𝑡 𝑝𝑝)𝑥𝑥 𝑞𝑞 𝑛𝑛−𝑥𝑥 = (𝑝𝑝𝑒𝑒 𝑡𝑡 + 𝑞𝑞)𝑛𝑛


𝑥𝑥=0
3. The joint pmf of two random variables X and Y is given by
𝑘𝑘𝑘𝑘𝑘𝑘, 𝑥𝑥 = 1,2,3; 𝑦𝑦 = 1,2,3
𝑃𝑃𝑋𝑋,𝑌𝑌 (𝑥𝑥, 𝑦𝑦) = �
0 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
Solution:
𝑦𝑦/𝑥𝑥 1 2 3 𝑝𝑝𝑌𝑌 (𝑦𝑦)

1 k 2k 3k 6k
2 2k 4k 6k 12k
3 3k 6k 9k 18k
𝑝𝑝𝑋𝑋 (𝑥𝑥) 6k 12k 18k 36k

Solution: Here 𝑝𝑝(𝑥𝑥, 𝑦𝑦) ≥ 0, ∀𝑥𝑥, 𝑦𝑦 & ∑ 𝑝𝑝(𝑥𝑥, 𝑦𝑦) = 1,


Agni college of Technology
Chennai – 130
36𝑘𝑘 = 1
1
𝑘𝑘 =
36
4. The joint pdf of a random variable (X,Y) is
𝑥𝑥 2
𝑓𝑓𝑥𝑥𝑥𝑥 (𝑥𝑥, 𝑦𝑦) = 𝑥𝑥𝑦𝑦 2 + , 0 ≤ 𝑥𝑥 ≤ 2,0 ≤ 𝑦𝑦 ≤ 1.
8
Find 𝑃𝑃(𝑋𝑋 < 𝑌𝑌).
Solution:
𝑃𝑃(𝑋𝑋 < 𝑌𝑌) = ∬𝑅𝑅 𝑓𝑓𝑥𝑥𝑥𝑥 (𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑
1 𝑦𝑦 𝑥𝑥 2
= ∫0 ∫0 �𝑥𝑥𝑦𝑦 2 + � 𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑
8
1 𝑥𝑥 2 𝑥𝑥 3
= ∫0 � 𝑦𝑦 2 + � 𝑑𝑑𝑑𝑑
2 24
𝑦𝑦
1 𝑥𝑥 2 2 𝑥𝑥 3
= ∫0 � 2 𝑦𝑦 + � 𝑑𝑑𝑑𝑑
24 0
1
𝑦𝑦 4 𝑦𝑦 3 53
=� � + � 𝑑𝑑𝑑𝑑 =
0 2 24 480

53
∴ 𝑃𝑃(𝑋𝑋 < 𝑌𝑌) =
480
5. Define wide sense stationary process.
Solution: A random process {𝑋𝑋(𝑡𝑡)} is called wide sense stationary process if its mean is
constant and autocorrelation function depends only on the time difference 𝜏𝜏.
i.e.,
(i) 𝐸𝐸{𝑋𝑋(𝑡𝑡)} is always a constant.
(ii) 𝐸𝐸(𝑋𝑋(𝑡𝑡1 ). 𝑋𝑋(𝑡𝑡2 ) = 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏), where 𝜏𝜏 = 𝑡𝑡2 − 𝑡𝑡1.
6. Show that a binomial process is Markov.
Solution:
𝑆𝑆𝑛𝑛 = 𝑋𝑋1 + 𝑋𝑋2 + ⋯ + 𝑋𝑋𝑛𝑛 = 𝑆𝑆𝑛𝑛 + 𝑥𝑥𝑛𝑛
𝑃𝑃(𝑆𝑆𝑛𝑛 = 𝑚𝑚⁄𝑆𝑆𝑛𝑛−1 = 𝑚𝑚) = 𝑃𝑃(𝑥𝑥𝑛𝑛 = 0) = 1 − 𝑃𝑃
𝑃𝑃(𝑆𝑆𝑛𝑛 = 𝑚𝑚⁄𝑆𝑆𝑛𝑛−1 = 𝑚𝑚 − 1) = 𝑃𝑃(𝑥𝑥𝑛𝑛 = 0) = 𝑃𝑃
The probability distribution of 𝑆𝑆𝑛𝑛 depends only on 𝑆𝑆𝑛𝑛−1 .
Hence the binomial process is Markov.
7. A random process 𝑋𝑋(𝑡𝑡) is defined by 𝑋𝑋(𝑡𝑡) = 𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾, 𝑡𝑡 ≥ 0 where 𝜔𝜔 is a constant and 𝐾𝐾
is uniformly distributed over (0,2). Find the autocorrelation of 𝑋𝑋(𝑡𝑡).
Solution:
Given 𝑋𝑋(𝑡𝑡) = 𝑋𝑋(𝑡𝑡) = 𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾 and 𝐾𝐾 is uniformly distributed in (0,2).
Hence the pdf of 𝐾𝐾 is 𝑓𝑓𝑘𝑘 (𝑘𝑘), 0 < 𝑘𝑘 < 2.
𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐸𝐸[𝑋𝑋(𝑡𝑡) + 𝑋𝑋(𝑡𝑡 + 𝜏𝜏)]
= 𝐸𝐸[𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾. 𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾𝐾(𝑡𝑡 + 𝜏𝜏)]
= 𝐸𝐸[𝐾𝐾 2 ][𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑐𝑐𝑐𝑐𝑐𝑐(𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔)]
2
= � 𝐾𝐾 2 𝑓𝑓𝐾𝐾 (𝑘𝑘)𝑑𝑑𝑑𝑑. [𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑐𝑐𝑐𝑐𝑐𝑐(𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔)]
0
Agni college of Technology
Chennai – 130
2
1 𝑘𝑘 3
= � � 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔)
2 3 0
1 8
= � � 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑐𝑐𝑐𝑐𝑐𝑐(𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔)
2 3
4
= [𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑐𝑐𝑐𝑐𝑐𝑐(𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔)]
3
8. Define cross correlation function of X(t) and Y(t). When do you say that they are
independent?
Solution: The cross correlation of the two processes X(t) and Y(t) is defined as
𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2 ) = 𝐸𝐸[𝑋𝑋(𝑡𝑡1 ). 𝑌𝑌(𝑡𝑡2 )]. X(t) and Y(t) are independent if autocovariance
is zero
9. Define linear time invariant.
Solution: A general linear system is said to be time-invariant if the input X(t) is time shifted by
an amount h, the corresponding output Y(t) will also be time shifted by the same amount.
i.e., If Y(t+h)=F(X(t+h)), where Y(t)=F(X(t))
F is called a time invariant system, or X(t) and Y(t) are said to form a time invariant system.
10.State the convolution form of the output of a linear time invariant system.
Solution: Let X(t) be a WSS random input process to linear time-invariant system with unit
impulse response h(t) and let Y(t) be the corresponding output process, then
𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = ℎ(𝜏𝜏) ∗ 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)
𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = ℎ(−𝜏𝜏) ∗ 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)
𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = ℎ(𝜔𝜔) ∗ 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)
𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = ℎ(𝜔𝜔) ∗ 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)
Where * denotes the convolution.

PART B ( 5X16=80 marks)


11 (a)(i) A random variable X has pdf
2 −𝑥𝑥
𝑓𝑓𝑋𝑋 (𝑥𝑥) = � 𝑘𝑘𝑥𝑥 𝑒𝑒 𝑥𝑥 > 0
0 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
Find the rth moment of X about origin. Hence find the mean and variances.
2 −𝑥𝑥
Solution:Given pdf of X is 𝑓𝑓𝑋𝑋 (𝑥𝑥) = � 𝑘𝑘𝑥𝑥 𝑒𝑒 𝑥𝑥 > 0
0 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
∞ ∞
Hence ∫−∞ 𝑓𝑓(𝑥𝑥)𝑑𝑑𝑑𝑑 = 1 ⟹ ∫0 𝑘𝑘𝑥𝑥 2 𝑒𝑒 −𝑥𝑥 𝑑𝑑𝑑𝑑 = 1
Agni college of Technology
Chennai – 130

⟹ 𝑘𝑘 � 𝑒𝑒 −𝑥𝑥 𝑥𝑥 3−1 𝑑𝑑𝑑𝑑 = 1 ⟹ 𝑘𝑘2! = 1 ⟹ 𝑘𝑘 = 1.
−∞
∞ ∞
1 1 ∞
𝜇𝜇𝑟𝑟′ = 𝐸𝐸(𝑋𝑋 𝑟𝑟 ) = � 𝑥𝑥 𝑟𝑟 𝐹𝐹(𝑋𝑋)𝑑𝑑𝑑𝑑 = � 𝑥𝑥 𝑟𝑟 𝑥𝑥 2 𝑒𝑒 −𝑥𝑥 𝑑𝑑𝑑𝑑 = � 𝑒𝑒 −𝑥𝑥 𝑥𝑥 𝑟𝑟+3−1 𝑑𝑑𝑑𝑑
−∞ 0 2 2 0
1
= (𝑟𝑟 + 2)!
2
1 1
Hence 𝜇𝜇1′, = (3!) = 3; 𝜇𝜇2′, = (4!) = 12
2 2

Hence mean of X =3
2
Variance of X = 𝜇𝜇2′ − 𝜇𝜇1′ = 12 − 32 = 3.
(ii) A random variable X is uniformly distributed over (0,10). Find
(1) 𝑃𝑃(𝑋𝑋 < 3), 𝑃𝑃(𝑋𝑋 > 7)𝑎𝑎𝑎𝑎𝑎𝑎 𝑃𝑃(2 < 𝑋𝑋 < 5) (2)𝑃𝑃(𝑋𝑋 = 7).
Solution: X is uniformly distributed over (0,10).
1
, 0 < 𝑥𝑥 < 10
Hence pdf is 𝑓𝑓(𝑥𝑥) = �10
0 , 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
3 1 1 3
(1) 𝑃𝑃(𝑋𝑋 < 3) = ∫0 𝑑𝑑𝑑𝑑 = [𝑥𝑥]30 =
10 10 10
10 1 1 3
(2) 𝑃𝑃(𝑋𝑋 > 7) = ∫7 𝑑𝑑𝑑𝑑 = [𝑥𝑥]10
7 =
10 10 10
5 1 1 5 3
(3) 𝑃𝑃(2 < 𝑥𝑥 < 5) = ∫2 𝑑𝑑𝑑𝑑 = (𝑥𝑥)2 =
10 10 10
(4) Since X is a continuous random variable, 𝑃𝑃(𝑋𝑋 = 7) = 0.
(OR)
(b)(i) An office has four phone lines.Each is busy about 10% of the time. Assume that the phone
lines act independently.
(1) What is the probability that all four phones are busy?
(2) What is the probability that atleast two of them are busy?
Solution: Using Geometric distribution with 𝑥𝑥 = 4 & 𝑝𝑝 = 0.1, 𝑞𝑞 = 0.9
𝑃𝑃[𝑋𝑋 = 𝑥𝑥] = 𝑝𝑝𝑞𝑞 𝑥𝑥−1 , ⋋= 1 − 𝑃𝑃[𝑋𝑋 = 1]
= 1 − (0.1)(0.9)1−1 = 1 − 0.1 = 0.9.
11(b)(ii) Describe Gamma distribution. Obtain its moment generating function.Hence compute
its mean and variance.
Agni college of Technology
Chennai – 130
Solution: The moment generating function of a random variable X defined as 𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝐸𝐸[𝑒𝑒 𝑡𝑡𝑡𝑡 ] =
∑∞𝑖𝑖=1 𝑒𝑒
𝑡𝑡𝑥𝑥 𝑖𝑖
𝑃𝑃(𝑥𝑥𝑖𝑖 ) 𝑖𝑖𝑖𝑖 𝑋𝑋 𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 𝑅𝑅𝑅𝑅
� ∞
∫−∞ 𝑓𝑓(𝑥𝑥)𝑒𝑒 𝑡𝑡𝑡𝑡 𝑑𝑑𝑑𝑑 𝑖𝑖𝑖𝑖 𝑋𝑋 𝑖𝑖𝑖𝑖 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑅𝑅𝑅𝑅

Moment Generating Function of Gamma distribution


The Gamma distribution is
∝−1
𝜆𝜆 𝑒𝑒 − 𝜆𝜆𝜆𝜆 ( 𝜆𝜆𝜆𝜆 )
𝑓𝑓(𝑥𝑥) = 𝑥𝑥 ≥ 0
Γ𝛼𝛼

𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝐸𝐸[𝑒𝑒 𝑡𝑡𝑡𝑡 ]



= � 𝑓𝑓(𝑥𝑥)𝑒𝑒 𝑡𝑡𝑡𝑡 𝑑𝑑𝑑𝑑
−∞

∞ ∝−1
𝑡𝑡𝑡𝑡
𝜆𝜆 𝑒𝑒 − 𝜆𝜆𝜆𝜆 ( 𝜆𝜆𝜆𝜆 )
= � 𝑒𝑒 𝑑𝑑𝑑𝑑
0 Γ𝛼𝛼
𝜆𝜆∝ ∞ − (𝜆𝜆−𝑡𝑡)𝑥𝑥 ( 𝜆𝜆𝜆𝜆 )∝−1
= � 𝑒𝑒 𝑑𝑑𝑑𝑑
Γ𝛼𝛼 0
Put 𝑢𝑢 = (𝜆𝜆 − 𝑡𝑡)𝑥𝑥
𝑑𝑑𝑑𝑑 = (𝜆𝜆 − 𝑡𝑡)𝑑𝑑𝑑𝑑
𝑥𝑥 → 0 ⟹ 𝑢𝑢 → 0
𝑥𝑥 → ∞ ⟹ 𝑢𝑢 → ∞

∞ ∝−1
𝜆𝜆∝ ∫0 𝑒𝑒 −𝑢𝑢 ( 𝑢𝑢)
∴ 𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝑑𝑑𝑑𝑑
Γ𝛼𝛼 (𝜆𝜆 − 𝑡𝑡)𝛼𝛼
𝜆𝜆∝
= Γ𝛼𝛼
Γ𝛼𝛼(𝜆𝜆 − 𝑡𝑡)𝛼𝛼
𝜆𝜆 𝛼𝛼 𝑡𝑡 − 𝛼𝛼
=� � = �1 − �
𝜆𝜆 − 𝑡𝑡 𝜆𝜆

′(𝑡𝑡) 𝑡𝑡 − 𝛼𝛼−1 1
𝑀𝑀𝑋𝑋 = −𝛼𝛼 �1 − � �− ��
𝜆𝜆 𝜆𝜆 𝑡𝑡=0
𝛼𝛼
𝜇𝜇′1 =
𝜆𝜆
Agni college of Technology
Chennai – 130
′′(𝑡𝑡) 𝑡𝑡 − 𝛼𝛼−2 1 2
𝑀𝑀𝑋𝑋 = 𝛼𝛼[−(∝ +1)] �1 − � � � �
𝜆𝜆 𝜆𝜆 𝑡𝑡=0

𝛼𝛼(∝ +1)
𝜇𝜇′2 =
𝜆𝜆2
2
𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉 (𝑋𝑋) = 𝜇𝜇′2 − �𝜇𝜇′1 �
𝛼𝛼(∝ +1) 𝛼𝛼 2
= − � �
𝜆𝜆2 𝜆𝜆

𝛼𝛼 2 +∝ −𝛼𝛼 2
=
𝜆𝜆2

∴ 𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉(𝑋𝑋) =
𝜆𝜆2
12(a)(i) Two independent random variables X and Y are defined by
4𝑎𝑎𝑎𝑎 ∶ 0 < 𝑥𝑥 < 1
𝑓𝑓𝑋𝑋 (𝑥𝑥) = � and
0 ∶ 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
4𝑏𝑏𝑏𝑏 ∶ 0 < 𝑦𝑦 < 1
𝑓𝑓𝑌𝑌 (𝑦𝑦) = � 𝑎𝑎
0 ∶ 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
Show that 𝑈𝑈 = 𝑋𝑋 + 𝑌𝑌 & 𝑉𝑉 = 𝑋𝑋 − 𝑌𝑌 are uncorrelated.
Solution: First we find a and b in the p.d.f’s.
From the property of pdf we have,

� 𝑓𝑓(𝑥𝑥)𝑑𝑑𝑑𝑑 = 1.
−∞

1 1
𝑥𝑥 2 1
� 4𝑎𝑎𝑎𝑎𝑎𝑎𝑎𝑎 = 1 ⟹ 4𝑎𝑎 � � = 1 ⟹ 2𝑎𝑎 = 1 ⟹ 𝑎𝑎 =
0 2 0 2

1 1
𝑦𝑦 2 1
� 4𝑏𝑏𝑏𝑏𝑏𝑏𝑏𝑏 = 1 ⟹ 4𝑏𝑏 � � = 1 ⟹ 2𝑏𝑏 = 1 ⟹ 𝑏𝑏 =
0 2 0 2

2𝑥𝑥 ,0 < 𝑥𝑥 < 1 2𝑦𝑦 ,0 < 𝑦𝑦 < 1


Hence 𝑓𝑓(𝑥𝑥) = � and 𝑓𝑓(𝑦𝑦) = �
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒 0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
Since X and Y are independent, Cov(X,Y)=0.
Now, 𝐶𝐶𝐶𝐶𝐶𝐶(𝑈𝑈, 𝑉𝑉) = 𝐶𝐶𝐶𝐶𝐶𝐶[𝑋𝑋 + 𝑌𝑌, 𝑋𝑋 − 𝑌𝑌]
= 𝐸𝐸[(𝑋𝑋 + 𝑌𝑌)(𝑋𝑋 − 𝑌𝑌)] − 𝐸𝐸[𝑋𝑋 + 𝑌𝑌]𝐸𝐸[𝑋𝑋 − 𝑌𝑌]
Agni college of Technology
Chennai – 130
= 𝐸𝐸[𝑋𝑋 2 − 𝑌𝑌 2 ] − �𝐸𝐸[𝑋𝑋] + 𝐸𝐸[𝑌𝑌]��𝐸𝐸[𝑋𝑋] − 𝐸𝐸[𝑌𝑌]�
2 2
= 𝐸𝐸[𝑋𝑋 2 ] − 𝐸𝐸[𝑌𝑌 2 ] − [�𝐸𝐸[𝑋𝑋]� − �𝐸𝐸[𝑌𝑌]� ]
2 2
= 𝐸𝐸[𝑋𝑋 2 ] − �𝐸𝐸[𝑋𝑋]� − [𝐸𝐸[𝑌𝑌 2 ] − �𝐸𝐸[𝑌𝑌]� ]

= 𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋] − 𝑉𝑉𝑉𝑉𝑉𝑉[𝑌𝑌] = 𝜎𝜎𝑥𝑥2 − 𝜎𝜎𝑦𝑦2

𝑉𝑉𝑉𝑉𝑉𝑉[𝑈𝑈] = 𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋 + 𝑌𝑌] = 𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋] + 𝑉𝑉𝑉𝑉𝑉𝑉[𝑌𝑌] = 𝜎𝜎𝑥𝑥2 + 𝜎𝜎𝑦𝑦2

𝑉𝑉𝑉𝑉𝑉𝑉[𝑉𝑉] = 𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋 − 𝑌𝑌] = 𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋] + 𝑉𝑉𝑉𝑉𝑉𝑉[𝑌𝑌]−= 𝜎𝜎𝑥𝑥2 + 𝜎𝜎𝑦𝑦2

𝐶𝐶𝐶𝐶𝐶𝐶(𝑈𝑈, 𝑉𝑉) 𝜎𝜎𝑥𝑥2 − 𝜎𝜎𝑦𝑦2 𝜎𝜎𝑥𝑥2 − 𝜎𝜎𝑦𝑦2


𝑟𝑟(𝑈𝑈, 𝑉𝑉) = = = 2 2
(1)
𝜎𝜎𝑢𝑢 𝜎𝜎𝑣𝑣 �(𝜎𝜎𝑥𝑥2 + 𝜎𝜎𝑦𝑦2 )(𝜎𝜎𝑥𝑥2 + 𝜎𝜎𝑦𝑦2 ) 𝜎𝜎𝑥𝑥 + 𝜎𝜎𝑦𝑦
1 1 1
𝑥𝑥 3 2 2
𝐸𝐸[𝑋𝑋] = � 𝑥𝑥𝑥𝑥(𝑥𝑥)𝑑𝑑𝑑𝑑 = 2 � 𝑥𝑥 𝑑𝑑𝑑𝑑 = 2 � � =
0 0 3 0 3

1 1 1
2] 2
𝑥𝑥 4 2 1 2
𝐸𝐸[𝑋𝑋 = � 𝑥𝑥 𝑓𝑓(𝑥𝑥)𝑑𝑑𝑑𝑑 = � 2𝑥𝑥𝑥𝑥 𝑑𝑑𝑑𝑑 = 2 � � = =
0 0 4 0 4 2

1 2 22 1 4 1
𝜎𝜎𝑥𝑥2 = 𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋] = 𝐸𝐸[𝑋𝑋 2]
− �𝐸𝐸[𝑋𝑋]� = − � � = − =
2 3 2 9 18
1
Similarly 𝜎𝜎𝑦𝑦2 = 𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉 =
18
1 1

Hence by equation (1) 𝑟𝑟(𝑈𝑈𝑈𝑈) = 18 18
1 1 = 0.
+
18 18

12(a)(ii) State and prove central liit theorem in the case of two dimensional random variables.
Solution: Out of syllabus.
(OR)
(b)(i) The equations of two regression lines are 3𝑥𝑥 + 12𝑦𝑦 = 19 & 3𝑦𝑦 + 9𝑥𝑥 = 46.
Find 𝑥𝑥̅ &𝑦𝑦� and the correlation coefficient between X and Y.
Solution: (i) Let the regression line of Y on X be 3𝑥𝑥 + 12𝑦𝑦 = 19 ⟹
19 1
12𝑦𝑦 = 19 − 3𝑥𝑥 ⟹ 𝑦𝑦 = − 𝑥𝑥.
12 4

The regression line of Y on X be 3𝑦𝑦 − 9𝑥𝑥 = 46 ⟹ 9𝑥𝑥 = 46 − 3𝑦𝑦 ⟹


46 1 1 1
𝑥𝑥 = − 𝑦𝑦. Hence 𝑏𝑏𝑦𝑦𝑦𝑦 = − &𝑏𝑏𝑥𝑥𝑥𝑥 = −
9 3 4 3

1 1 1 1
Hence 𝑟𝑟 2 = �𝑏𝑏𝑦𝑦𝑦𝑦 ��𝑏𝑏𝑥𝑥𝑥𝑥 � = �− � �− � = ⇒ 𝑟𝑟 = ±
4 3 12 2√3
Agni college of Technology
Chennai – 130
(ii) To find the mean of X and Y.
Both the regression lines passes through (𝑥𝑥̅ , 𝑦𝑦�)
1
3𝑥𝑥̅ + 12𝑦𝑦� = 19 & 3𝑦𝑦� + 9𝑥𝑥̅ = 46. Solving, 𝑥𝑥̅ = 5 &𝑦𝑦� =
3

(b)(ii) Given the joint pdf of X and Y


𝐶𝐶𝐶𝐶(𝑥𝑥 − 𝑦𝑦): 0 < 𝑥𝑥 < 2, −𝑥𝑥 < 𝑦𝑦 < 𝑥𝑥
𝑓𝑓𝑋𝑋,𝑌𝑌 (𝑥𝑥, 𝑦𝑦) = �
0 ∶ 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
(1)Evaluate C (2) Find marginal pdf of X (3) Find the conditional density of 𝑌𝑌⁄𝑋𝑋
2 𝑥𝑥
Solution: (1) ∫𝑥𝑥=0 ∫𝑦𝑦 =−𝑥𝑥 𝐶𝐶𝐶𝐶(𝑥𝑥 − 𝑦𝑦)𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 = 1 ⟹
2 𝑥𝑥
𝐶𝐶 � � (𝑥𝑥 2 − 𝑥𝑥𝑥𝑥)𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 = 1 ⟹
𝑥𝑥=0 𝑦𝑦 =−𝑥𝑥

2
𝑥𝑥 3 33
𝑥𝑥 3
𝐶𝐶 � [�𝑥𝑥 − � − (−𝑥𝑥 − )]𝑑𝑑𝑑𝑑 = 1 ⟹
𝑥𝑥=0 2 2

2 2
3
2𝑥𝑥 4 1
𝐶𝐶 � 2𝑥𝑥 𝑑𝑑𝑑𝑑 = 1 ⟹ 𝐶𝐶 � � ⟹ 𝐶𝐶 =
0 4 0 8

1
𝑓𝑓𝑋𝑋,𝑌𝑌 (𝑥𝑥, 𝑦𝑦) = �8 𝑥𝑥(𝑥𝑥 − 𝑦𝑦): 0 < 𝑥𝑥 < 2, −𝑥𝑥 < 𝑦𝑦 < 𝑥𝑥
0 ∶ 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
𝑥𝑥 1 1 𝑥𝑥
(ii) 𝑓𝑓𝑥𝑥 (𝑥𝑥) = ∫−𝑥𝑥 𝑥𝑥(𝑥𝑥 − 𝑦𝑦)𝑑𝑑𝑑𝑑 = ∫−𝑥𝑥 (𝑥𝑥 2 − 𝑥𝑥𝑥𝑥) 𝑑𝑑𝑑𝑑 =
8 8
𝑥𝑥
1 2 𝑥𝑥𝑦𝑦 2 𝑥𝑥 3 𝑥𝑥 3 1
�𝑥𝑥 𝑦𝑦 − � = ��𝑥𝑥 3 − � − �−𝑥𝑥 3 − �� = 𝑥𝑥 3 ; 0 < 𝑥𝑥 < 2.
8 2 −𝑥𝑥 2 2 4

1
𝑓𝑓(𝑥𝑥,𝑦𝑦) 𝑥𝑥(𝑥𝑥−𝑦𝑦) 1
(iii)𝑓𝑓𝑌𝑌 ⁄𝑋𝑋 (𝑦𝑦⁄𝑥𝑥) = = 8
𝑥𝑥 3
= (𝑥𝑥 − 𝑦𝑦); −𝑥𝑥 < 𝑦𝑦 < 𝑥𝑥.
𝑓𝑓 𝑋𝑋 (𝑥𝑥) 2𝑥𝑥 2
4

13(a) (i) Define a semi random telegraph signal process and prove that it

is evolutionary.

Solution: If N(t) represents the number of occurences of a specified event


Agni college of Technology
Chennai – 130
in (0,t) and 𝑋𝑋(𝑡𝑡) = (−1)𝑁𝑁(𝑡𝑡) then {X(t)} is called a semi telegraph process.

𝑒𝑒 −⋋𝑡𝑡 (⋋𝑡𝑡)𝑛𝑛
{N(t)} is a process with 𝑃𝑃{𝑁𝑁(𝑡𝑡) = 𝑟𝑟} = , 𝑛𝑛 = 𝑜𝑜, 1,2 …
𝑛𝑛!

To prove that {X(t)} is evolutionary.

Now {X(t)} can take values +1 and -1 only.

𝑃𝑃{𝑋𝑋(𝑡𝑡)} = 1} = 𝑃𝑃(𝑁𝑁(𝑡𝑡)𝑖𝑖𝑖𝑖 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒) = 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 0,2,4 … )

= 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 0) = +𝑃𝑃(𝑁𝑁(𝑡𝑡) = 2) + 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 4) + ⋯

𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)0 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)2 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)4


= + + + ⋯…
0! 2! 4!

𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)0 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)2 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)4


= + + +⋯
0! 2! 4!
[⋋𝑡𝑡]2 [⋋𝑡𝑡]4
= 𝑒𝑒 −⋋𝑡𝑡 �1 + + + ⋯�
2! 4!

= 𝑒𝑒 −⋋𝑡𝑡 𝑐𝑐𝑐𝑐𝑐𝑐ℎ ⋋ 𝑡𝑡

𝑃𝑃{𝑋𝑋(𝑡𝑡)} = −1} = 𝑃𝑃(𝑁𝑁(𝑡𝑡)𝑖𝑖𝑖𝑖 𝑜𝑜𝑜𝑜𝑜𝑜) = 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 1,3,5 … )

= 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 1) = +𝑃𝑃(𝑁𝑁(𝑡𝑡) = 3) + 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 5) + ⋯

𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)1 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)3 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)5


= + + +⋯
1! 3! 5!
[⋋𝑡𝑡]1 [⋋𝑡𝑡]3 [⋋𝑡𝑡]5
= 𝑒𝑒 −⋋𝑡𝑡 � + + …�
1! 3! 5!

= 𝑒𝑒 −⋋𝑡𝑡 𝑠𝑠𝑠𝑠𝑠𝑠ℎ ⋋ 𝑡𝑡

Hence 𝐸𝐸{𝑋𝑋(𝑡𝑡)} = [(1)𝑃𝑃(𝑋𝑋 = 1) + (−1)𝑃𝑃(𝑋𝑋 = −1)

= (1)𝑒𝑒 −⋋𝑡𝑡 𝑐𝑐𝑐𝑐𝑐𝑐ℎ ⋋ 𝑡𝑡 + (−1)𝑒𝑒 −⋋𝑡𝑡 𝑠𝑠𝑠𝑠𝑠𝑠ℎ ⋋ 𝑡𝑡

−⋋𝑡𝑡 −⋋𝑡𝑡
𝑒𝑒 ⋋𝑡𝑡 + 𝑒𝑒 −⋋𝑡𝑡 𝑒𝑒 ⋋𝑡𝑡 − 𝑒𝑒 −⋋𝑡𝑡
= 𝑒𝑒 [𝑐𝑐𝑐𝑐𝑐𝑐ℎ ⋋ 𝑡𝑡 − 𝑠𝑠𝑠𝑠𝑠𝑠ℎ ⋋ 𝑡𝑡 = 𝑒𝑒 � − �
2 2

= 𝑒𝑒 −⋋𝑡𝑡 𝑒𝑒 −⋋𝑡𝑡 = 𝑒𝑒 −2⋋𝑡𝑡


Agni college of Technology
Chennai – 130
Hence E{X(t)} is not a constant.

So {X(t)} is evolutionary.

(ii) Mention any three properties each of auto correlation and of cross correlation

functions of a wide sense stationary process.

Solution: Properties of auto correlation: Let X(t) be a WSS process. Then the auto correlation
function 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) is a function of time difference 𝜏𝜏 only.It

is denoted by 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏).

Thus 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐸𝐸(𝑋𝑋(𝑡𝑡)𝑋𝑋(𝑡𝑡 + 𝜏𝜏) = 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏).

(1) 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑅𝑅𝑋𝑋𝑋𝑋 (−𝜏𝜏). (i.e., autocorrelation function is an even function)

(2) |𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)| ≤ 𝑅𝑅𝑋𝑋𝑋𝑋 (0). (i.e., Max. value of 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) is 𝑅𝑅𝑋𝑋𝑋𝑋 (0)).

(3) If the process X(t) contains a periodic component, then 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) will also

contain periodic component of the same period.

Properties of cross correlation:

(1) 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)= 𝑅𝑅𝑋𝑋𝑋𝑋 (−𝜏𝜏).

(2) If the random process X(t) and Y(t) are independent, then

𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝐸𝐸(𝑋𝑋). 𝐸𝐸(𝑌𝑌)

(3) If the R.P X(t) & Y(t) are of zero mean,

lim𝜏𝜏→∞ 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = lim𝜏𝜏→∞ 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 0

(OR)

(b)(i) A random process X(t) definded by 𝑋𝑋(𝑡𝑡) = 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵; −∞ < 𝑥𝑥 < ∞ where A and B
are independent random variables each of which has a value -2 with probability 1/3 and a value 1
with probability 2/3. Show that X(t) is a wide sense stationary process.

Solution: A&B are discrete RV which assumes values

A; -2 1

P(A): 1/3 2/3


Agni college of Technology
Chennai – 130
B: -2 1

P(B): 1/3 2/3

1 2 2 2
𝐸𝐸[𝐴𝐴] = � 𝑎𝑎𝑖𝑖 𝑝𝑝(𝑎𝑎𝑖𝑖 ) = (−2) � � + (1) � � = − + = 0.
3 3 3 3
1 2 4 2
𝐸𝐸[𝐴𝐴2 ] = � 𝑎𝑎𝑖𝑖2 𝑝𝑝(𝑎𝑎𝑖𝑖 ) = (−2)2 � � + (1)2 � � = + = 2.
3 3 3 3
1 2 2 2
𝐸𝐸[𝐵𝐵] = ∑ 𝑏𝑏𝑖𝑖 𝑝𝑝(𝑏𝑏𝑖𝑖 ) = (−2) � � + (1) � � = − + = 0
3 3 3 3

1 2 4 2
𝐸𝐸[𝐵𝐵2 ] = � 𝑏𝑏𝑖𝑖2 𝑝𝑝(𝑏𝑏𝑖𝑖 ) = (−2)2 � � + (1)2 � � = + = 2.
3 3 3 3

Since A & B are independent RV's, E[AB ]= E[A] E[B]=0.

(i) E[X(t)] = E[Acost+Bsin t] = E[A]cost+E[B]sint = 0+0+0 = 0= constant.

(ii) 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐸𝐸[𝑋𝑋(𝑡𝑡)𝑋𝑋(𝑡𝑡 + 𝜏𝜏)]

= 𝐸𝐸[(𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵)((𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏) + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵(𝑡𝑡 + 𝜏𝜏

= 𝐸𝐸[𝐴𝐴2 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏)

+𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏) + 𝐵𝐵2 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)]

= 𝐸𝐸[𝐴𝐴2 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝐸𝐸[𝐴𝐴𝐴𝐴] 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏)

𝐸𝐸[𝐴𝐴𝐴𝐴}𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏) + 𝐸𝐸[𝐵𝐵2 ]𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)

= 2𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 0 + 0 + 2𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)

= 2[𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 2𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)]

= 2𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏 − 𝑡𝑡) = 2𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐, which depends on 𝜏𝜏

Hence by (i) & (ii) X(t) is a WSS process.

(ii) Define a poisson process.Show that the sum of two poisson processes is a poisson process.

Solution: If X(t) represents the number of occurences of a certain event in (0,t) then the discrete
random process {X(t)} is called the poisson process, provided the following postulates are
satisfied.

1. 𝑃𝑃�1 𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜 𝑖𝑖𝑖𝑖 (𝑡𝑡, 𝑡𝑡 + ∆𝑡𝑡)� =⋋ ∆𝑡𝑡 + 𝑂𝑂(∆𝑡𝑡)


Agni college of Technology
Chennai – 130
2. 𝑃𝑃�0 𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜 𝑖𝑖𝑖𝑖 (𝑡𝑡, 𝑡𝑡 + ∆𝑡𝑡)� = 1 −⋋ ∆𝑡𝑡 + 𝑂𝑂(∆𝑡𝑡)

3. 𝑃𝑃�2 𝑜𝑜𝑜𝑜 𝑚𝑚𝑚𝑚𝑚𝑚𝑚𝑚 𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜 𝑖𝑖𝑖𝑖 (𝑡𝑡, 𝑡𝑡 + ∆𝑡𝑡)� = 𝑂𝑂(∆𝑡𝑡)

4. 𝑋𝑋(𝑡𝑡) is independent of the number of occurences of the event in any interval before and after
interval (0,t).

5. The probability that an event occurs a specified number of times in (𝑡𝑡0 , 𝑡𝑡0 + 𝜏𝜏) depends only
on 𝜏𝜏 and not on 𝑡𝑡0

To prove that the sum of two poisson processes is a poisson process.

Let X(t)=𝑋𝑋1 (𝑡𝑡) + 𝑋𝑋2 (𝑡𝑡)

𝑒𝑒 −⋋1𝑡𝑡 (⋋1 𝑡𝑡)𝑛𝑛


where 𝑃𝑃(𝑋𝑋1 (𝑡𝑡) = 𝑛𝑛) = , 𝑛𝑛 = 0,1,2 …
𝑛𝑛 !

𝑒𝑒 −⋋2 𝑡𝑡 (⋋2 𝑡𝑡)𝑛𝑛


𝑃𝑃(𝑋𝑋2 (𝑡𝑡) = 𝑛𝑛) = , 𝑛𝑛 = 0,1,2 …
𝑛𝑛!
𝑛𝑛

𝑃𝑃(𝑋𝑋(𝑡𝑡) = 𝑛𝑛) = � 𝑃𝑃(𝑋𝑋1 (𝑡𝑡) = 𝑟𝑟). 𝑃𝑃(𝑋𝑋2 (𝑡𝑡) = 𝑛𝑛 − 𝑟𝑟)


𝑟𝑟=0

By independence of 𝑋𝑋1 (𝑡𝑡) & 𝑋𝑋2 (𝑡𝑡)

𝑒𝑒 −⋋1 𝑡𝑡 (⋋1 𝑡𝑡)𝑟𝑟 𝑒𝑒 −⋋2 𝑡𝑡 (⋋2 𝑡𝑡)𝑛𝑛 −𝑟𝑟


= ∑𝑛𝑛𝑟𝑟=0 .
𝑟𝑟! (𝑛𝑛−𝑟𝑟)!

𝒏𝒏
−⋋1 𝑡𝑡 −⋋2 𝑡𝑡
(⋋1 𝑡𝑡)𝒓𝒓 (⋋2 𝑡𝑡)𝒏𝒏−𝒓𝒓
= 𝑒𝑒 𝑒𝑒 � 𝒏𝒏!
𝒓𝒓=𝟎𝟎 𝒏𝒏𝒄𝒄𝒓𝒓

𝒏𝒏
−𝒕𝒕(⋋𝟏𝟏 +⋋𝟐𝟐 )
(⋋1 𝑡𝑡)𝒓𝒓 (⋋2 𝑡𝑡)𝒏𝒏−𝒓𝒓
= 𝒆𝒆 � 𝒏𝒏𝒄𝒄𝒓𝒓
𝒏𝒏!
𝒓𝒓=𝟎𝟎

𝒆𝒆−𝒕𝒕(⋋𝟏𝟏 +⋋𝟐𝟐 ) (⋋𝟏𝟏 𝒕𝒕 +⋋𝟐𝟐 𝒕𝒕)𝒏𝒏


=
𝒏𝒏!

𝒆𝒆−𝒕𝒕(⋋𝟏𝟏+⋋𝟐𝟐) ((⋋𝟏𝟏 +⋋𝟐𝟐 )𝒕𝒕)𝒏𝒏


=
𝒏𝒏!

Hence 𝑋𝑋1 (𝑡𝑡) + 𝑋𝑋2 (𝑡𝑡) is a poisson process with parameter (⋋𝟏𝟏 +⋋𝟐𝟐 )𝒕𝒕
Agni college of Technology
Chennai – 130
14(a)(i) Define spectral density of a stationary random process X(t).Prove that for

a real random process X(t) the power spectral density is an even function.

Solution: The power spectral density of 𝑆𝑆𝑋𝑋 (𝜔𝜔) of a continuous random process X(t) is defined as
Fourier Transform of 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏):

To prove that the spectral density function of a real random process is an even function.

S XX (ω ) = ∫ R (τ )e−iωτ dτ
−∞ XX
∞ iωτ dτ
S XX (−ω ) = ∫ RXX (τ )e
−∞

Letτ = −u τ → −∞ ⇒ u → ∞
dτ = −du τ → ∞ ⇒ u → −∞

∞ iω (−u ) (−du )
(−ω )
S XX=∫ RXX (− u)e
−∞

=− ∫ R (− u)eiω (−u ) (du )
−∞ XX

= ∫ R (− u)eiω (−u ) (−du )
−∞ XX

∞ iω (−τ ) (dτ )
= ∫ RXX (−τ )e
−∞

Since τ is a dummy variable,

=R (−τ )
XX

Since R = (τ ) R (−τ )
XX XX

(1) ⇒ S (−ω ) ∫ R (−τ )eiω (−τ ) (dτ ) = S XX (ω )
=
XX −∞ XX

(ii) Two random processes X(t) and Y(t) are defined as follows:

X (t ) =A cos(ωt + θ ) & Y (t ) =B sin(ωt + θ ) where A,B and ω are constants; θ is a uniform


random variable over (0,2π ) . Find the cross correlation function of X(t) & Y(t).

Solution:
Agni college of Technology
Chennai – 130
t + τ ) E[X(t) Y(t + τ )
(t,=
R
XY
= A 2 E[cos(ωt + θ )sin(ωt + ωτ + θ )]
A2
= E[sin(2ω t + ωτ + 2θ )cos(ωt + θ )]
2
A2
= E[sin(2ωt + ωτ + 2θ ) + sin ωτ ]
2
A2 A2
= E[sin ωτ ] + E[sin(2ω t + ωτ + 2θ )]
2 2

A2 A2
sin(2ω t + ωτ + 2θ )dθ
4π ∫0
= sin ωτ +
2
A2 A2
= sin ωτ
= +0 sin ωτ
2 2

(b)(i) State and prove Wiener-Khinchine therem.

Soltion:
Statement:

Let X(t) be a real WSS process with power density spectrum S XX (ω ) . Let X T (t ) be a
 X (t ) ,−T < t < T
X T (t ) = 
portion of the process X(t) in time interval –T to T. i.e.,  0 , elsewhere
Let X T (ω ) be the Fourier transform of X T (t ) , then

S XX (ω ) =
lim 1
T → ∞ 2T
E X T (ω )
2
{ }
Proof: Given X T (ω ) is the Fourier transorm of X T (t )

∴ X T (ω ) = ∫X
−∞
T (t )e −iωt dt

T
= ∫X
−T
T (t )e −iωt dt

∫ X (t )e
−iωt
= dt
−T

X T (ω ) = X T* (ω ) X T (ω ) [where * denotes complex conjugate]


2

T T

∫ X (t )e dt. ∫ X (t )e −iωt dt
i ωt
=
−T −T [ X (t ) is real]
Agni college of Technology
Chennai – 130
T iω t T − i ωt
= ∫ X (t )e 1 dt . ∫ X (t )e 2 dt
1 1 2 2
−T −T

T T
= ∫ ∫ X (t ) X (t
−T −T
1 2 )e −iω ( t2 −t2 ) dt1dt 2

T T

∫ ∫ X (t ) X (t )e −iω ( t2 −t2 ) dt1dt 2



lim
T →∞
E X T (ω ) =
2
[ lim 1
T → ∞ 2T
] −T −T
1 2

But E [X ((t1 )(t 2 )] = R XX (t1 , t 2 ) if − T < t1 , t 2 < T


T T

1 ∫ ∫ XX 1 2
R (t , t )e −iω ( t2 −t2 ) dt1dt 2

lim
T →∞
[
E X T (ω ) =
2
]
T → ∞ 2T
lim
−T −T

We shall now make a change of variables as below


Put t1 = t and t 2 − t1 = τ ⇒ t 2 = τ + t
∴ thejacobian of transformation is

∂t1 ∂t1 1 0
= =1
J= ∂ t ∂τ 1 1
∂t 2 ∂t 2
∂t ∂τ
 dt1 dt 2 = J dtdτ
The limits of t and –T and T
When t 2 = −T ,τ = −T − t and t 2 = T ,τ = T − t

lim 1
T → ∞ 2T
[
E X T (ω ) =
2 lim 1
T → ∞ 2T
] T −t T

∫ ∫R
−T −t −T
XX (t , t + τ )e −iωτ dtdτ

Since X(t) is WSS Process, R XX (t , t + τ ) = R XX (τ )


lim 1
T → ∞ 2T
[
E X T (ω ) =
2 lim 1
T → ∞ 2T
] T −t T

∫ ∫
R (τ )e −iωτ dtdτ
−T −t −T
XX

lim 1
= T −t T
T → ∞ 2T
∫ R XX (τ )e dτ . ∫ dt
−iωτ

−T −t −T

lim 1
= T −t
T → ∞ 2T
∫R
−T −t
XX (τ )e −iωτ dτ .2T
Agni college of Technology
Chennai – 130
lim
= T −t
T →∞
∫R
−T −t
XX (τ )e −iωτ dτ


= ∫R
−∞
XX (τ )e −iωτ dτ = S XX (ω )
, by definition.
∴ S XX (ω ) =lim E X T (ω ) [ 2
]
T →∞ 2T
Hence proved.

 ibω
a + − a < ω < a, a > 0
S XY ω ) =  a
(ii) If the cross power spectral density of X(t) ad Y(t) is o otherwise

Where a and b are constants.Find the cross correlation function.

Solution : The cross correlation function is given by



1
RXY (τ ) =
2π ∫S
−∞
XY (ω )eiωτ dτ

1 a ibω iωτ
= ∫ (a + )e dω
2π −a a
a

1  ibω  eiωτ  ib  eiωτ 


=  a +  −   
2π  a  iτ  a  −τ 2   −a
iaτ − iaτ
+ 2 {eiaτ − e − iaτ }
1 e e ib
= ( a + ib )  − (a − ib)
2π iτ iτ aτ
1  a iaτ − iaτ b iaτ − iaτ ib 
=
2π  iτ (e − e ) + τ (e − e ) + ωτ 2 (2isina τ ) 
 
1 a b b 
=  (2sin aτ ) + (2 cos aτ ) − 2 (2sin ωτ ) 
2π  τ τ ωτ 
1  a sina τ b cos aτ b sin aτ 
= + −
π  τ τ aτ 2 

15(a)(i) A random process X(t) is the input to a linear system whose impulse function is
−2 τ
h(t ) = 2e − t , t ≥ 0. The auto correlation function of the process is R XX (τ ) = e . Find the power
spectral density of the output process Y(t).
Agni college of Technology
Chennai – 130
Solution: Given h(t ) = 2e − t , t ≥ 0.

∫ h(t )e
− i ωt
H (ω ) = dt
−∞

= ∫ 2e −t e −iωt dt
0

= 22 ∫ e −(1+iωt ) dt
0

e −(1+iωt )
= 2[ ]
− (1 + iωt )
2
=
(1 + iω )
4
H (ω ) =
1 + w2

4
PDS of input X (t ) = FT [ R XX (τ )] = FT [e − 2 τ =
4 + w2

PDS of the output of the systembe S YY (ω )

S YY (ω ) = [ H (ω ) 2 [PDS of the input]


4 4
=( )( )
1 + w 4 + w2
2

16
= 2
(ω + 1)(ω 2 + 4)

(ii) A wide sense stationary noise process N(t) has an autocorrelation function RNN (τ ) = Pe−3 τ

Where P is a constant. Find its power spectrum.

Solution: Given RNN (τ ) = Pe−3 τ


Agni college of Technology
Chennai – 130

∫ R(τ )e
− iωτ
(ω ) S=
S XX= (ω ) dτ
−∞

= P∫ e
−3 τ
(cos ωτ − i sin ωτ )dτ
−∞

= 2P ∫ e
−3 τ
(cos ωτ )dτ
−∞

= 2 P ∫ e −3τ (cos ωτ )dτ
0

 e−3τ (−3cos ωτ +ω sin ωτ ) 
= 2P  
 9 +ω 2  0

2P
= (3)
9 + ω2
6P
=
9 + ω2

15(b)(i)If the input to a time invariant stable, linear system is a wide sense stationary
process,prove that the output will also be q wide sense stationary process.

Solution:
Sol: Let X(t) be a WSS process for a linear time invariant stable system with Y(t) as the output
process.

Y (t ) = ∫ h(u ) X (t − u )du
−∞
Then where h(t ) is weighting function or unit impulse response.

∴ E [Y (t )] = ∫ E[h(u ) X (t − u )]du
−∞

= ∫ h(u ) E[ X (t − u )]du
−∞

Since X(t) is a WSS process, E [ X (t )] is a constant µ X for any t.

∴ E[ X (t − u )] = µ X
∞ ∞
∴ E [Y (t )] = ∫ h(u )µ X du = µ X ∫ h(u )du
−∞ −∞

∫ h(u )du
Since the system is stable , −∞ is finite
Agni college of Technology
Chennai – 130
∴ E [Y (t )] is a constant.

Now RYY (t , t + τ ) = E[Y (t )Y (t + τ )]


∞ ∞
= E[ ∫ h(u1 ) E[ X (t − u1 )]du1 ∫ h(u 2 ) E[ X (t + τ − u 2 )]du 2 ]
−∞ −∞
∞ ∞
= E[ ∫ ∫ h(u1 )h(u 2 ) X (t − u1 ) X (t + τ − u 2 )du1 du 2 ]
− ∞− ∞
∞ ∞
= ∫ ∫ h(u )h(u
− ∞− ∞
1 2 ) E[ X (t − u1 ) X (t + τ − u 2 )]du1 du 2

Since X(t) is a WSS process, auto correlation function is only a function of time
difference
∞ ∞
∴ RYY (t , t + τ ) = ∫ ∫ h(u )h(u
− ∞− ∞
1 2 ) R XX (τ + u1 − u 2 )du1 du 2

When this double integral is evaluated by integrating with respect to


u1 and u 2 , the RHS is only a function of τ . Hence Y(t) is a WSS process.

15(b)(ii) Let X(t) be a wide sense stationary process which is the input to a linear time
Invariant system with unit impulse h(t) and output Y(t), then prove that

SYY (ω ) = H (ω ) S XX (ω ) where H (ω ) is
2
Fourier transform of h(t).

Solution: =
Let Y (t )
−∞
∫ h(u ) X (t − u )du

=
Y (t ) ∫ X (t − α )h(α )dα
−∞

∴ X (t + τ )Y (t ) = ∫ X (t + τ ) X (t − α )h(α )dα
−∞


E[ X (t + τ )Y (t )]= ∫ E{ X (t + τ ) X (t − α )}h(α )dα
−∞
Hence ∞
= ∫R
−∞
XX (τ + α )h(α )dα


= ∫R
−∞
XX (τ − β )h(− β )d β

RXY (τ ) RXX (τ ) * h(−τ )


i.e.,= (1)
RYX (τ ) = RXX (τ ) * h(τ ) (1a )
Agni college of Technology
Chennai – 130

Y (t )Y (t − τ=
) ∫ X (t − α )Y (t − τ )h(α )dα
−∞


E{Y (t )Y (t − τ=
)} ∫R
−∞
XY (t − α )h(α )dα

Assuming that {X(t) & Y(t) are jointly WSS


i.e., RYY (τ ) = RXY (τ ) * h(τ ) ( 2)
Taking FT’s of (1) & (2) we get
S XY (ω ) = S XX (ω ) H* (ω ) (3)

Where H*(ω ) is the conjugate of H(ω ) & SYY (ω ) = S XY (ω ) H(ω ) (4)

Inserting (3) In (4) SYY (ω ) = H (ω ) 2 S XX (ω ) .


Agni college of Technology
Chennai – 130

B.E./B.Tech. DEGREE EXAMINATIONS, NOVEMBER/DECEMBER 2014


Fourth Semester
Common to ECE/BIOMEDICAL
MA6453 – PROBABILITY AND RANDOM PROCESSES
(Regulations 2013)
Time: Three hours Maximum: 100 marks
Answer ALL Questions

PART – A (10 X 2 = 20 marks)


𝑐𝑐
1. Find C, if a continuous random variable X has the density function 𝑓𝑓(𝑥𝑥) = ,
1+𝑥𝑥 2
−∞ < 𝑥𝑥 < ∞.

Ans:
� 𝑓𝑓(𝑥𝑥)𝑑𝑑𝑑𝑑 = 1
−∞

𝑐𝑐
� 2
𝑑𝑑𝑑𝑑 = 1
−∞ 1 + 𝑥𝑥

𝑑𝑑𝑑𝑑
2𝑐𝑐 � 2
=1
0 1 + 𝑥𝑥
2𝑐𝑐 [𝑡𝑡𝑡𝑡𝑡𝑡−1 𝑥𝑥]∞ 0 =1
𝜋𝜋
2𝑐𝑐 � − 0� = 1
2
𝑐𝑐𝑐𝑐 = 1
1
∴ 𝑐𝑐 =
𝜋𝜋
2. Find the moment generating function of the Poisson distribution.
Ans: M_X(t) = E[e^{tX}] = Σ_{x=0}^{∞} e^{tx} p(x)
            = Σ_{x=0}^{∞} e^{tx} e^{−λ} λ^x / x!
            = e^{−λ} Σ_{x=0}^{∞} (λe^t)^x / x!
            = e^{−λ} [1 + λe^t/1! + (λe^t)²/2! + ...]
            = e^{−λ} e^{λe^t}
     ∴ M_X(t) = e^{λ(e^t − 1)}
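A symbolic cross-check of this MGF (an illustrative sketch only; it assumes sympy is available):

import sympy as sp

t = sp.Symbol('t')
lam = sp.Symbol('lambda', positive=True)
x = sp.Symbol('x', integer=True, nonnegative=True)

pmf = sp.exp(-lam) * lam**x / sp.factorial(x)
M = sp.summation(sp.exp(t * x) * pmf, (x, 0, sp.oo))
print(sp.simplify(M))   # should be equivalent to exp(lambda*(exp(t) - 1))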

3. Given the random variable X with density function f(x) = 2x for 0 < x < 1 and f(x) = 0 elsewhere, find the pdf of Y = 8X³.
Ans: Out of syllabus.
4. Define the joint pmf of a two dimensional discrete random variable.
Ans: The function 𝑃𝑃(𝑥𝑥, 𝑦𝑦) is the jpmf of the discrete random variable (X,Y) if
(𝑖𝑖) 𝑃𝑃(𝑋𝑋 = 𝑥𝑥𝑖𝑖 , 𝑌𝑌 = 𝑦𝑦𝑗𝑗 ) ≥ 0
(𝑖𝑖𝑖𝑖) � � 𝑃𝑃�𝑋𝑋 = 𝑥𝑥𝑖𝑖 , 𝑌𝑌 = 𝑦𝑦𝑗𝑗 � = 1
𝑖𝑖 𝑗𝑗

Where 𝑖𝑖 = 1,2, … . , 𝑛𝑛 and 𝑗𝑗 = 1,2, … . , 𝑚𝑚


We denote 𝑃𝑃�𝑋𝑋 = 𝑥𝑥𝑖𝑖 , 𝑌𝑌 = 𝑦𝑦𝑗𝑗 � = 𝑃𝑃�𝑥𝑥𝑖𝑖 , 𝑦𝑦𝑗𝑗 � = 𝑝𝑝𝑖𝑖𝑖𝑖
5. Define stochastic processes.
Ans: A stochastic process is a collection of random variables {𝑋𝑋(𝑠𝑠, 𝑡𝑡)} which are the
functions of a real variable t. Here 𝑠𝑠 ∈ 𝑆𝑆 (sample space) and 𝑡𝑡 ∈ 𝑇𝑇(index set)
and each {𝑋𝑋(𝑠𝑠, 𝑡𝑡)} is a real valued function.
6. Define Markov process
Ans: If for 𝑡𝑡1 < 𝑡𝑡2 < 𝑡𝑡3 … < 𝑡𝑡𝑛𝑛 we have
𝑃𝑃[𝑋𝑋(𝑡𝑡) ≤ 𝑥𝑥⁄𝑋𝑋(𝑡𝑡1 ) = 𝑥𝑥1 , 𝑋𝑋(𝑡𝑡2 ) = 𝑥𝑥2 , … , 𝑋𝑋(𝑡𝑡𝑛𝑛 ) = 𝑥𝑥𝑛𝑛 ]
= 𝑃𝑃[𝑋𝑋(𝑡𝑡) ≤ 𝑥𝑥⁄𝑋𝑋(𝑡𝑡1 ) = 𝑥𝑥1 ]
7. Write any two properties of auto correlation.
Ans:
𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑅𝑅𝑋𝑋𝑋𝑋 (−𝜏𝜏). (i.e., autocorrelation function is an even function)

(2) |𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)| ≤ 𝑅𝑅𝑋𝑋𝑋𝑋 (0). (i.e., Max. value of 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)is 𝑅𝑅𝑋𝑋𝑋𝑋 (0)).

8. Write the Wiener-Khintchine relation.


Ans: Let X(t) be a real WSS process with power density spectrum S_XX(ω), and let X_T(t) be the portion of X(t) in the time interval −T < t < T, i.e.
X_T(t) = X(t) for −T < t < T, and 0 elsewhere.
If X_T(ω) is the Fourier transform of X_T(t), then
S_XX(ω) = lim_{T→∞} (1/2T) E{|X_T(ω)|²}
9. Define white noise
Ans: A sample function X(t) of a WSS noise random process {X(t)} is called white
noise if the power spectral density of {X(t)} is a constant at all frequencies.
N
We denote the power spectral density of white noise w(t) as S w ( f ) = 0
2
10. The autocorrelation function for a stationary ergodic process with no periodic components is R_XX(τ) = 25 + 4/(1 + 6τ²). Find the mean and variance of the process {X(t)}.
Ans: E[X(t)] = μ_X = √(lim_{τ→∞} R_XX(τ)) = √(lim_{τ→∞} [25 + 4/(1 + 6τ²)]) = √25 = 5
     E[X²(t)] = R_XX(0) = 25 + 4 = 29
     Var[X(t)] = E[X²(t)] − (E[X(t)])² = 29 − 5² = 4
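The same two numbers can be read off symbolically (a small sketch assuming sympy is available):

import sympy as sp

tau = sp.Symbol('tau', real=True)
R = 25 + 4 / (1 + 6 * tau**2)

mean_sq = sp.limit(R, tau, sp.oo)          # mu_X^2 = 25, so mu_X = 5
second_moment = R.subs(tau, 0)             # E[X^2(t)] = 29
print(sp.sqrt(mean_sq), second_moment - mean_sq)   # 5 and 4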
Part - B
11. (a) (i) Find the nth moment about the mean of the normal distribution.
Ans: Central moments of normal distribution 𝑁𝑁(𝜇𝜇, 𝜎𝜎)

Central moments 𝜇𝜇𝑟𝑟 of 𝑁𝑁(𝜇𝜇, 𝜎𝜎) are given by 𝜇𝜇𝑟𝑟 = 𝐸𝐸(𝑋𝑋 − 𝜇𝜇)𝑟𝑟

1 (𝑥𝑥−𝜇𝜇 )2
𝑟𝑟 − 2𝜎𝜎 2
= �(𝑋𝑋 − 𝜇𝜇) 𝑒𝑒 𝑑𝑑𝑑𝑑
𝜎𝜎√2𝜋𝜋
−∞
(𝑥𝑥−𝜇𝜇 )
Put 𝑡𝑡 =
√2𝜎𝜎
⇒ 𝑥𝑥 − 𝜇𝜇 = 𝑡𝑡√2𝜎𝜎
𝑑𝑑𝑑𝑑 = √2𝜎𝜎𝑑𝑑𝑑𝑑

1 2
= �(√2𝜎𝜎𝜎𝜎)𝑟𝑟 𝑒𝑒 − 𝑡𝑡 √2𝜎𝜎𝑑𝑑𝑑𝑑
𝜎𝜎√2𝜋𝜋
−∞

1 2
= �(√2𝜎𝜎𝜎𝜎)𝑟𝑟 𝑒𝑒 − 𝑡𝑡 𝑑𝑑𝑑𝑑
√𝜋𝜋
−∞

𝑟𝑟/2 𝑟𝑟
2 𝜎𝜎 2
= � 𝑡𝑡 𝑟𝑟 𝑒𝑒 − 𝑡𝑡 𝑑𝑑𝑑𝑑
√𝜋𝜋
−∞
Case: (i) r is an odd integer i.e., 𝑟𝑟 = 2𝑛𝑛 + 1

2(2𝑛𝑛+1)/2 𝜎𝜎 2𝑛𝑛+1 2
𝜇𝜇2𝑛𝑛+1 = � 𝑡𝑡 2𝑛𝑛+1 𝑒𝑒 − 𝑡𝑡 𝑑𝑑𝑑𝑑
√𝜋𝜋
−∞
= 0 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑡𝑡ℎ𝑒𝑒 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖 𝑖𝑖𝑖𝑖 𝑎𝑎𝑎𝑎 𝑜𝑜𝑜𝑜𝑜𝑜 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 𝑜𝑜𝑜𝑜 𝑡𝑡

Case: (ii) r is an even integer i.e., 𝑟𝑟 = 2𝑛𝑛



2𝑛𝑛 𝜎𝜎 2𝑛𝑛 2
𝜇𝜇2𝑛𝑛 = � 𝑡𝑡 2𝑛𝑛 𝑒𝑒 − 𝑡𝑡 𝑑𝑑𝑑𝑑
√𝜋𝜋
−∞
2𝑛𝑛 𝜎𝜎 2𝑛𝑛 ∞ 2𝑛𝑛 − 𝑡𝑡 2
= 2 ∫0 𝑡𝑡 𝑒𝑒 𝑑𝑑𝑑𝑑 (since the integrand is an even function of t)
√𝜋𝜋

2𝑛𝑛 𝜎𝜎 2𝑛𝑛 1
= � 𝑢𝑢𝑛𝑛−2 𝑒𝑒 − 𝑢𝑢 𝑑𝑑𝑑𝑑 𝑜𝑜𝑜𝑜 𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝 𝑢𝑢 = 𝑡𝑡 2
√𝜋𝜋
0

2𝑛𝑛 𝜎𝜎 2𝑛𝑛 1
= Γ �𝑛𝑛 + � ---------------------------------------(1)
√𝜋𝜋 2
𝑛𝑛 2𝑛𝑛
2 𝜎𝜎 2𝑛𝑛 − 1 2𝑛𝑛 − 1
= � �Γ� �
√𝜋𝜋 2 2
2𝑛𝑛 𝜎𝜎 2𝑛𝑛 2𝑛𝑛 − 1 2𝑛𝑛 − 3 2𝑛𝑛 − 3
= � �� �� �
√𝜋𝜋 2 2 2
2𝑛𝑛 𝜎𝜎 2𝑛𝑛 2𝑛𝑛 − 1 2𝑛𝑛 − 3 1
= � �� �…Γ� �
√𝜋𝜋 2 2 2
= 1.3.5 … (2𝑛𝑛 − 1)𝜎𝜎 2𝑛𝑛
From (1) we get
2𝑛𝑛 −1 𝜎𝜎 2𝑛𝑛 −2 1
𝜇𝜇2𝑛𝑛−2 = Γ �𝑛𝑛 − � -----------------------------(2)
√𝜋𝜋 2
From (1) and (2) we get
𝜇𝜇2𝑛𝑛 1
= 2𝜎𝜎 2 �𝑛𝑛 − �
𝜇𝜇2𝑛𝑛−2 2

𝜇𝜇2𝑛𝑛 = 𝜎𝜎 2 (2𝑛𝑛 − 1)𝜇𝜇2𝑛𝑛−2 -----------------------------(3)


(3) gives a recurrence relation for the even order central moments of the
normal distribution 𝑁𝑁(𝜇𝜇, 𝜎𝜎)
(ii) Derive the Poisson distribution from the Binomial distribution.
Ans: Suppose that, in a binomial distribution,
1. the number of trials is indefinitely large, i.e., n → ∞;
2. p, the probability of success in each trial, is very small, i.e., p → 0;
3. np (= λ) is finite, so that p = λ/n and q = 1 − p = 1 − λ/n.
Now for binomial distribution
) nCx p x q n − x ,
P( X= x= =
x 0,1, 2,.....n
n(n − 1)(n − 2)...(n − x + 1) p x q n − x
=
x!
n− x
n(n − 1)(n − 2)...(n − x + 1)  λ   λ
x

=   1 − 
x! n  n
λ x   λ 
−x
1  2   x − 1  λ 
n

= 1 − 1 −  ... 1 − 1 −   1 −  


x !  n  n   n  n   n  
Taking limit on both sides,
λx  1   2   x − 1   λ  n  λ  − x 
lim p=
( x) lim 1 −  1 −  ... 1 −  1 −  1 −  
n →∞ x ! n →∞  n   n   n   n   n  
λx
= = e−λ for x 0,1, 2...
x!
Which is the p.m.f. of the Poisson distribution
(OR)
(b) (i) Find the mean and variance of the Gamma distribution.
Ans: The moment generating function of a random variable X defined as 𝑀𝑀𝑋𝑋 (𝑡𝑡) =
∑∞ 𝑖𝑖=1 𝑒𝑒
𝑡𝑡𝑥𝑥 𝑖𝑖
𝑃𝑃(𝑥𝑥𝑖𝑖 ) 𝑖𝑖𝑖𝑖 𝑋𝑋 𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 𝑅𝑅𝑅𝑅
𝑡𝑡𝑡𝑡 ]
𝐸𝐸[𝑒𝑒 =� ∞
∫−∞ 𝑓𝑓(𝑥𝑥)𝑒𝑒 𝑡𝑡𝑡𝑡 𝑑𝑑𝑑𝑑 𝑖𝑖𝑖𝑖 𝑋𝑋 𝑖𝑖𝑖𝑖 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑅𝑅𝑅𝑅
Moment Generating Function of Gamma distribution
The Gamma distribution is
∝−1
𝜆𝜆 𝑒𝑒 − 𝜆𝜆𝜆𝜆 ( 𝜆𝜆𝜆𝜆 )
𝑓𝑓(𝑥𝑥) = 𝑥𝑥 ≥ 0
Γ𝛼𝛼

𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝐸𝐸[𝑒𝑒 𝑡𝑡𝑡𝑡 ]



= � 𝑓𝑓(𝑥𝑥)𝑒𝑒 𝑡𝑡𝑡𝑡 𝑑𝑑𝑑𝑑
−∞
∞ ∝−1
𝑡𝑡𝑡𝑡
𝜆𝜆 𝑒𝑒 − 𝜆𝜆𝜆𝜆 ( 𝜆𝜆𝜆𝜆 )
= � 𝑒𝑒 𝑑𝑑𝑑𝑑
0 Γ𝛼𝛼
∝ ∞
𝜆𝜆 ∝−1
= � 𝑒𝑒 − (𝜆𝜆−𝑡𝑡)𝑥𝑥 ( 𝜆𝜆𝜆𝜆 ) 𝑑𝑑𝑑𝑑
Γ𝛼𝛼 0
Put 𝑢𝑢 = (𝜆𝜆 − 𝑡𝑡)𝑥𝑥
𝑑𝑑𝑑𝑑 = (𝜆𝜆 − 𝑡𝑡)𝑑𝑑𝑑𝑑
𝑥𝑥 → 0 ⟹ 𝑢𝑢 → 0
𝑥𝑥 → ∞ ⟹ 𝑢𝑢 → ∞

∞ ∝−1
𝜆𝜆∝ ∫0 𝑒𝑒 −𝑢𝑢 ( 𝑢𝑢)
∴ 𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝑑𝑑𝑑𝑑
Γ𝛼𝛼 (𝜆𝜆 − 𝑡𝑡)𝛼𝛼

𝜆𝜆
= Γ𝛼𝛼
Γ𝛼𝛼(𝜆𝜆 − 𝑡𝑡)𝛼𝛼
𝜆𝜆 𝛼𝛼 𝑡𝑡 − 𝛼𝛼
=� � = �1 − �
𝜆𝜆 − 𝑡𝑡 𝜆𝜆

′(𝑡𝑡) 𝑡𝑡 − 𝛼𝛼−1 1
𝑀𝑀𝑋𝑋 = −𝛼𝛼 �1 − � �− ��
𝜆𝜆 𝜆𝜆 𝑡𝑡=0
𝛼𝛼
𝜇𝜇′1 =
𝜆𝜆
′′(𝑡𝑡) 𝑡𝑡 − 𝛼𝛼−2 1 2
𝑀𝑀𝑋𝑋 = 𝛼𝛼[−(∝ +1)] �1 − � � � �
𝜆𝜆 𝜆𝜆 𝑡𝑡=0
𝛼𝛼(∝ +1)
𝜇𝜇′2 =
𝜆𝜆2
2
𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉 (𝑋𝑋) = 𝜇𝜇′2 − �𝜇𝜇′1 �
𝛼𝛼(∝ +1) 𝛼𝛼 2
= − � �
𝜆𝜆2 𝜆𝜆

= (α² + α − α²)/λ²

∴ Variance(X) = α/λ²
(ii) 2𝑒𝑒 − 2𝑥𝑥 , 𝑥𝑥 ≥ 0
A random variable X has the pdf 𝑓𝑓(𝑥𝑥) = �
0, 𝑥𝑥 < 0
Obtain the MGF and first four moments about the origin. Find the mean and variance.
Ans: 𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝐸𝐸[𝑒𝑒 𝑡𝑡𝑡𝑡 ]

= � 𝑓𝑓(𝑥𝑥)𝑒𝑒 𝑡𝑡𝑡𝑡 𝑑𝑑𝑑𝑑
−∞


= � 2𝑒𝑒 −2𝑥𝑥 𝑒𝑒 𝑡𝑡𝑡𝑡 𝑑𝑑𝑑𝑑
0

= 2 � 𝑒𝑒 −(2−𝑡𝑡)𝑥𝑥 𝑑𝑑𝑑𝑑
0

𝑒𝑒 −(2−𝑡𝑡)𝑥𝑥 1
= 2� � = 2 �0 − �
−(2 − 𝑡𝑡) 0 −(2 − 𝑡𝑡)
2
𝑀𝑀𝑋𝑋 (𝑡𝑡) =
2 − 𝑡𝑡
2 𝑡𝑡 −1
= �1 − �
2 2
𝑡𝑡 𝑡𝑡 2 𝑡𝑡 3 𝑡𝑡 4
=1+ +� � +� � +� � +⋯
2 2 2 2
1 2 1
𝜇𝜇′1 = [𝑀𝑀′𝑋𝑋 (𝑡𝑡)]𝑡𝑡=0 = , 𝜇𝜇′2 = [𝑀𝑀′′𝑋𝑋 (𝑡𝑡)]𝑡𝑡=0 = =
2 4 2


μ′₃ = [M′′′_X(t)]_{t=0} = 6/8 = 3/4,    μ′₄ = [M_X^(iv)(t)]_{t=0} = 24/16 = 3/2

Mean = μ′₁ = 1/2
Variance = μ′₂ − (μ′₁)² = 1/2 − (1/2)² = 1/4
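An optional symbolic verification of the MGF and the first two moments (assumes sympy; the conds='none' flag simply suppresses the convergence condition t < 2):

import sympy as sp

x, t = sp.symbols('x t')
M = sp.integrate(2 * sp.exp(-2 * x) * sp.exp(t * x), (x, 0, sp.oo), conds='none')  # 2/(2 - t)
mu1 = sp.diff(M, t).subs(t, 0)                 # 1/2
mu2 = sp.diff(M, t, 2).subs(t, 0)              # 1/2
print(sp.simplify(M), mu1, mu2 - mu1**2)       # mean 1/2, variance 1/4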
12. (a) The joint probability mass function of (X,Y) is given by 𝑃𝑃(𝑥𝑥, 𝑦𝑦) = 𝑘𝑘(2𝑥𝑥 + 3𝑦𝑦), 𝑥𝑥 =
0,1,2 𝑎𝑎𝑎𝑎𝑎𝑎 𝑦𝑦 = 1,2,3. Find K and all the marginal and conditional probability
distributions. Also find the probability distribution of (𝑋𝑋 + 𝑌𝑌).
Ans:
The JPMF of (X,Y) is
x/y 1 2 3 P(x)
0 3k 6k 9k 18k
1 5k 8k 11k 24k
2 7k 10k 13k 30k
P(y) 15k 24k 33k 72k

1
We know that ∑p ij =1 ⇒ 72k =1 ⇒ k =
72
Hence the joint probability function is given by
x/y 1 2 3 P(x)
0 3 6 9 18
72 72 72 72
1 5 8 11 24
72 72 72 72
2 7 10 13 30
72 72 72 72
P(y) 15 24 33 1
72 72 72

(i) The marginal distributions of X and Y


X=x 0 1 2
P(X=x) 18 24 30
72 72 72

Y=y 1 2 3
P(Y=y) 15 24 33
72 72 72

(ii) To find the probability distribution of X+Y


X+Y p
1 3
(0,1) 72
2 6
+
5 11
=
(0,2),(1,1) 72 72 72


3 9
+
8
+
7 24
=
(0,3),(1,2),(2,1) 72 72 72 72
4 11 10 21
+ =
(1,3),(2,2) 72 72 72
5 13
(2,3) 72
Total 1
(iii) The conditional distribution of X given Y: P  X = x Y = y 
 
P (=X 0,= Y 1) 3 15 1
P ( X= 0 Y= 1)= = =
P () 72 72 5
P (=X 1,= Y 1) 5 15 1
P ( X= 1 Y= 1)= = =
P (Y = 1) 72 72 3
P (=X 2,= Y 1) 7 15 7
P ( X= 2 Y= 1)= = =
P (Y = 1) 72 72 15
P (=X 0,= Y 2) 6 24 1
P ( X= 0 Y= 2)
= = =
P (Y = 2) 72 72 4
P (=X 1,= Y 2) 8 24 1
P ( X= 1 Y= 2)
= = =
P (Y = 2) 72 72 3
P ( X = 2, Y = 2) 10 24 5
P ( X= 2 Y= 2)
= = =
P(Y = 2) 72 72 12
P(= X 0,= Y 3) 9 33 9
P( X= 0 Y= 3)= = =
P (Y = 3) 72 72 33
P (=X 1,= Y 3) 11 33 1
P ( X= 1 Y= 3)= = =
P (Y = 3) 72 72 3
P (=X 2,= Y 3) 13 33 13
P ( X= 2 Y= 3)= = =
P (Y = 3) 72 72 33

The conditional distribution of Y given X is P Y = y X = x 


 

P= (Y 1,= X 0) 3 18 1
P(Y= 1 X= 0)
= = =
P( X = 0) 72 72 6
P= (Y 2,= X 0) 6 18 1
P(Y= 2 X= 0)
= = =
P( X = 0) 72 72 3
P=(Y 3,= X 0) 9 18 1
P(Y= 3 X= 0)
= = =
P( X = 0) 72 72 2
P=(Y 1,= X 1) 5 24 5
P(Y= 1 X= 1)= = =
P ( X = 1) 72 72 24
P= (Y 2,= X 1) 8 24 1
P (Y= 2 X= 1)= = =
P ( X = 1) 72 72 3
P (Y = 3, X = 1) 11 24 11
P (Y= 3 X= 1)= = =
P( X = 1) 72 72 24
P= (Y 1,= X 2) 7 30 7
P(Y= 1 X= 2)
= = =
P( X = 2) 72 72 30
P= (Y 2,= X 2) 10 30 1
P(Y= 2 X= 2)
= = =
P ( X = 2) 72 72 3
P=(Y 3,= X 2) 13 30 13
P(Y= 3 X= 2)
= = =
P( X = 2) 72 72 30
The conditional distributions can be tabulated as below:
X 0 1 2
P=
( X x=
Y 1) 3 15 1 5 15 1 7 15 7
= = =
72 72 5 72 72 3 72 72 15
The conditional distribution of X given Y = 1

The conditional distribution of X given Y = 2
X 0 1 2
P=
( X x=
Y 2) 6 24 1 8 24 1 10 24 5
= = =
72 72 4 72 72 3 72 72 12

The conditional distribution of X given Y = 3
X 0 1 2
P=
( X x=
Y 3) 9 1 13
33 3 33

The conditional distribution of Y given X = 0
Y 1 2 3
P=
(Y y=
X 0) 3 18 1 6 18 1 9 18 1
= = =
72 72 6 72 72 3 72 72 2

The conditional distribution of Y given X = 1
Y 1 2 3
P=
(Y y=
X 1) 5 1 11
24 3 24

The conditional distribution of Y given X = 2
Y 1 2 3
P=
(Y y=
X 2) 7 1 13
30 3 30
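The value of k and all the tables above can be reproduced with a few lines of code (an illustrative sketch, assuming numpy is available):

import numpy as np

xs, ys = np.arange(3), np.arange(1, 4)
raw = np.array([[2 * x + 3 * y for y in ys] for x in xs], dtype=float)
k = 1 / raw.sum()                                 # 1/72
P = k * raw                                       # joint pmf, rows = x, columns = y

print(P.sum(axis=1))                              # marginal of X: 18/72, 24/72, 30/72
print(P.sum(axis=0))                              # marginal of Y: 15/72, 24/72, 33/72
print(P / P.sum(axis=0, keepdims=True))           # conditional P(X = x | Y = y)
print(P / P.sum(axis=1, keepdims=True))           # conditional P(Y = y | X = x)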

(OR)
(b) (i) State and prove central limit theorem .

Ans: Out of syllabus for regulation 2013.


(ii) The life time of a certain brand of an electric bulb may be considered a RV with mean
1200h and standard deviation 250h. Find the probability, using central limit theorem
that the average lifetime of 60 bulbs exceed 1250h.

Ans: Out of syllabus for regulation 2013.


13. (a) (i) The process {𝑋𝑋(𝑡𝑡)} whose probability distribution under certain condition is given
(𝑎𝑎𝑎𝑎 )𝑛𝑛 −1
𝑛𝑛 −1 , 𝑛𝑛 = 1,2,3, …
by 𝑃𝑃{𝑋𝑋(𝑡𝑡) = 𝑛𝑛} = �(1+𝑎𝑎𝑎𝑎 ) 𝑎𝑎𝑎𝑎
, 𝑛𝑛 = 0
1+𝑎𝑎𝑎𝑎
Show that it is not stationary.
Ans: Given

P{X(t)}=p(x n ) 𝑎𝑎𝑎𝑎 1 𝑎𝑎𝑎𝑎 (𝑎𝑎𝑎𝑎)2 ...


1 + 𝑎𝑎𝑎𝑎 3
(1 + 𝑎𝑎𝑎𝑎) (1 + 𝑎𝑎𝑎𝑎) (1 + 𝑎𝑎𝑎𝑎)4
2

Mean 𝐸𝐸{𝑋𝑋(𝑡𝑡)} = ∑∞ 𝑛𝑛=0 𝑛𝑛𝑛𝑛(𝑥𝑥𝑛𝑛 )


𝑎𝑎𝑎𝑎 1 𝑎𝑎𝑎𝑎 (𝑎𝑎𝑎𝑎)2
=0 +1 + 2 + 3 +⋯
1 + 𝑎𝑎𝑎𝑎 (1 + 𝑎𝑎𝑎𝑎)2 (1 + 𝑎𝑎𝑎𝑎)3 (1 + 𝑎𝑎𝑎𝑎)4
1 𝑎𝑎𝑎𝑎 𝑎𝑎𝑎𝑎 2
= �1 + 2 + 3� � + ⋯�
(1 + 𝑎𝑎𝑎𝑎)2 1 + 𝑎𝑎𝑎𝑎 1 + 𝑎𝑎𝑎𝑎
1 𝑎𝑎𝑎𝑎 −2
= �1 − �
(1 + 𝑎𝑎𝑎𝑎)2 1 + 𝑎𝑎𝑎𝑎
1 1 + 𝑎𝑎𝑎𝑎 − 𝑎𝑎𝑎𝑎 −2
= � �
(1 + 𝑎𝑎𝑎𝑎)2 1 + 𝑎𝑎𝑎𝑎
1 1
= 2
×
(1 + 𝑎𝑎𝑎𝑎) (1 + 𝑎𝑎𝑎𝑎)−2
1
= (1 + 𝑎𝑎𝑎𝑎)2
(1 + 𝑎𝑎𝑎𝑎)2
= 1, 𝑤𝑤ℎ𝑖𝑖𝑖𝑖ℎ 𝑖𝑖𝑖𝑖 𝑎𝑎 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐
Now

𝐸𝐸(𝑋𝑋 2 (𝑡𝑡)) = � 𝑛𝑛2 𝑝𝑝(𝑥𝑥𝑛𝑛 )


𝑛𝑛=0

= �[𝑛𝑛(𝑛𝑛 + 1) − 𝑛𝑛]𝑝𝑝(𝑥𝑥𝑛𝑛 )
𝑛𝑛=0
∞ ∞

= � 𝑛𝑛(𝑛𝑛 + 1)𝑝𝑝(𝑥𝑥𝑛𝑛 ) − � 𝑛𝑛𝑛𝑛(𝑥𝑥𝑛𝑛 )


𝑛𝑛=0 𝑛𝑛=0

1 𝑎𝑎𝑎𝑎 (𝑎𝑎𝑎𝑎)2
= �0 + 1.2 + 2.3 + 3.4 + ⋯ � − 𝐸𝐸{𝑋𝑋(𝑡𝑡)}
(1 + 𝑎𝑎𝑎𝑎)2 (1 + 𝑎𝑎𝑎𝑎)3 (1 + 𝑎𝑎𝑎𝑎)4
2 𝑎𝑎𝑎𝑎 𝑎𝑎𝑡𝑡 2
= �1 + 3. + 6 � � + ⋯� − 1
(1 + 𝑎𝑎𝑎𝑎)2 (1 + 𝑎𝑎𝑎𝑎) 1 + 𝑎𝑎𝑎𝑎
2 𝑎𝑎𝑎𝑎 −3
= �1 − � −1
(1 + 𝑎𝑎𝑎𝑎)2 1 + 𝑎𝑎𝑎𝑎
2
= 2
. (1 + 𝑎𝑎𝑎𝑎)3 − 1
(1 + 𝑎𝑎𝑎𝑎)
= 2(1 + 𝑎𝑎𝑎𝑎) − 1
= 2 + 2𝑎𝑎𝑎𝑎 − 1
= 1 + 2𝑎𝑎𝑎𝑎
𝑉𝑉𝑉𝑉𝑉𝑉{𝑋𝑋(𝑡𝑡)} = 𝐸𝐸(𝑋𝑋 2 (𝑡𝑡)) − [𝐸𝐸(𝑥𝑥(𝑡𝑡))]2
= 1 + 2𝑎𝑎𝑎𝑎 − 1
= 2𝑎𝑎𝑎𝑎 𝑊𝑊ℎ𝑖𝑖𝑖𝑖ℎ 𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 𝑜𝑜𝑜𝑜 𝑡𝑡
∴ {𝑥𝑥(𝑡𝑡)} 𝑖𝑖𝑖𝑖 𝑛𝑛𝑛𝑛𝑛𝑛 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠

(ii) If the two random variables 𝐴𝐴𝑟𝑟 𝑎𝑎𝑎𝑎𝑎𝑎 𝐵𝐵𝑟𝑟 are uncorrelated with zero mean and
𝐸𝐸(𝐴𝐴2𝑟𝑟 ) = 𝐸𝐸(𝐵𝐵𝑟𝑟2 ) = 𝜎𝜎𝑟𝑟2 , show that the process 𝑋𝑋(𝑡𝑡) = ∑𝑛𝑛𝑟𝑟=1(𝐴𝐴𝑟𝑟 cos 𝜔𝜔𝑟𝑟 𝑡𝑡 + 𝐵𝐵𝑟𝑟 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡 )
is wide sense stationary. What are the mean and autocorrelation of X(t) ?
Ans: Given 𝑋𝑋(𝑡𝑡) = ∑𝑛𝑛𝑟𝑟=1(𝐴𝐴𝑟𝑟 cos 𝜔𝜔𝑟𝑟 𝑡𝑡 + 𝐵𝐵𝑟𝑟 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡 )
𝐸𝐸(𝐴𝐴𝑟𝑟) = 𝐸𝐸(𝐵𝐵𝑟𝑟 ) = 0 -----------------(1)
𝐸𝐸(𝐴𝐴2𝑟𝑟 ) = 𝐸𝐸(𝐵𝐵𝑟𝑟2 ) = 𝜎𝜎𝑟𝑟2 ---------------(2)
𝐸𝐸(𝐴𝐴𝑟𝑟 𝐵𝐵𝑟𝑟 ) = 0 ---------------------------(3)
𝑛𝑛

𝐸𝐸[𝑋𝑋(𝑡𝑡)] = �(𝐸𝐸(𝐴𝐴𝑟𝑟 ) cos 𝜔𝜔𝑟𝑟 𝑡𝑡 + 𝐸𝐸(𝐵𝐵𝑟𝑟 ) 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡 ) = 0 = 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑏𝑏𝑏𝑏(1)


𝑟𝑟=1
𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2 ) = 𝐸𝐸[𝑋𝑋(𝑡𝑡1 )𝑋𝑋(𝑡𝑡2 )]
𝑛𝑛

= 𝐸𝐸 ��(𝐴𝐴𝑟𝑟 cos 𝜔𝜔𝑟𝑟 𝑡𝑡1 + 𝐵𝐵𝑟𝑟 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡1 )(𝐴𝐴𝑟𝑟 cos 𝜔𝜔𝑟𝑟 𝑡𝑡2 + 𝐵𝐵𝑟𝑟 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡2 )�
𝑟𝑟=1

𝑛𝑛

= 𝐸𝐸 �� 𝐴𝐴𝑟𝑟 2 cos 𝜔𝜔𝑟𝑟 𝑡𝑡1 cos 𝜔𝜔𝑟𝑟 𝑡𝑡2


𝑟𝑟=1
+ 𝐴𝐴𝑟𝑟 𝐵𝐵𝑟𝑟 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡1 cos 𝜔𝜔𝑟𝑟 𝑡𝑡2 + 𝐴𝐴𝑟𝑟 𝐵𝐵𝑟𝑟 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡1 cos 𝜔𝜔𝑟𝑟 𝑡𝑡2

+ 𝐵𝐵𝑟𝑟 2 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡1 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡2 �


𝑛𝑛

� 𝐸𝐸 (𝐴𝐴𝑟𝑟 2 ) cos 𝜔𝜔𝑟𝑟 𝑡𝑡1 cos 𝜔𝜔𝑟𝑟 𝑡𝑡2


𝑟𝑟=1
+ 𝐸𝐸(𝐴𝐴𝑟𝑟 𝐵𝐵𝑟𝑟 ) 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡1 cos 𝜔𝜔𝑟𝑟 𝑡𝑡2 + 𝐸𝐸( 𝐴𝐴𝑟𝑟 𝐵𝐵𝑟𝑟 )𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡1 cos 𝜔𝜔𝑟𝑟 𝑡𝑡2
+ 𝐸𝐸�𝐵𝐵𝑟𝑟 2 �𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡1 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡2
= ∑𝑛𝑛𝑟𝑟=1 𝜎𝜎𝑟𝑟2 cos 𝜔𝜔𝑟𝑟 𝑡𝑡1 cos 𝜔𝜔𝑟𝑟 𝑡𝑡2 + 0 + 0 + 𝜎𝜎𝑟𝑟2 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡1 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡2 by (2) & (3)

𝑛𝑛

= � 𝜎𝜎𝑟𝑟2 cos 𝜔𝜔𝑟𝑟 𝑡𝑡1 cos 𝜔𝜔𝑟𝑟 𝑡𝑡2 + 𝜎𝜎𝑟𝑟2 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡1 𝑠𝑠𝑠𝑠𝑠𝑠𝜔𝜔𝑟𝑟 𝑡𝑡2
𝑟𝑟=1
𝑛𝑛

= � 𝜎𝜎𝑟𝑟2 cos 𝜔𝜔𝑟𝑟 (𝑡𝑡1 − 𝑡𝑡2 ) = 𝑎𝑎 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 𝑜𝑜𝑜𝑜 (𝑡𝑡1 − 𝑡𝑡2 )


𝑟𝑟=1

Hence mean of the process is a constant


Auto correlation is a function of t1 − t2 .
Hence {X(t) is a WSS process.

(OR)
(b) (i) Define semi-random telegraph signal process and random telegraph signal process
and prove also that the former is evolutionary and the later is wide sense stationary.
Ans:
Semi-random telegraph signal process

If N(t) represents the number of occurrences of a specified event

in (0,t) and 𝑋𝑋(𝑡𝑡) = (−1)𝑁𝑁(𝑡𝑡) then {X(t)} is called a semi telegraph process.

𝑒𝑒 −⋋𝑡𝑡 (⋋𝑡𝑡)𝑛𝑛
{N(t)} is a process with 𝑃𝑃{𝑁𝑁(𝑡𝑡) = 𝑟𝑟} = , 𝑛𝑛 = 𝑜𝑜, 1,2 …
𝑛𝑛!

Random telegraph signal process:

It is defined as a discrete state continuous parameter process


{𝑋𝑋(𝑡𝑡)⁄−∞ < 𝑡𝑡 < ∞} with the state space (-1,1). Assume that these two values
are equally likely

1
𝑖𝑖. 𝑒𝑒. , 𝑃𝑃[𝑋𝑋(𝑡𝑡) = 1] = = 𝑃𝑃[𝑋𝑋(𝑡𝑡) = −1]∞ < 𝑡𝑡 < ∞
2
To prove that {X(t)} is evolutionary.

Now {X(t)} can take values +1 and -1 only.

𝑃𝑃{𝑋𝑋(𝑡𝑡)} = 1} = 𝑃𝑃(𝑁𝑁(𝑡𝑡)𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖) = 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 0,2,4 … )

= 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 0) = +𝑃𝑃(𝑁𝑁(𝑡𝑡) = 2) + 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 4) + ⋯

𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)0 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)2 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)4


= + + + ⋯…
0! 2! 4!
𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)0 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)2 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)4
= + + +⋯
0! 2! 4!
[⋋ 𝑡𝑡]2 [⋋ 𝑡𝑡]4
= 𝑒𝑒 −⋋𝑡𝑡 �1 + + + ⋯�
2! 4!


= 𝑒𝑒 −⋋𝑡𝑡 𝑐𝑐𝑐𝑐𝑐𝑐ℎ ⋋ 𝑡𝑡

𝑃𝑃{𝑋𝑋(𝑡𝑡)} = −1} = 𝑃𝑃(𝑁𝑁(𝑡𝑡)𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖) = 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 1,3,5 … )

= 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 1) = +𝑃𝑃(𝑁𝑁(𝑡𝑡) = 3) + 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 5) + ⋯

𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)1 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)3 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)5


= + + +⋯
1! 3! 5!
[⋋ 𝑡𝑡]1 [⋋ 𝑡𝑡]3 [⋋ 𝑡𝑡]5
= 𝑒𝑒 −⋋𝑡𝑡 � + + …�
1! 3! 5!

= 𝑒𝑒 −⋋𝑡𝑡 𝑠𝑠𝑠𝑠𝑠𝑠ℎ ⋋ 𝑡𝑡

Hence 𝐸𝐸{𝑋𝑋(𝑡𝑡)} = [(1)𝑃𝑃(𝑋𝑋 = 1) + (−1)𝑃𝑃(𝑋𝑋 = −1)

= (1)𝑒𝑒 −⋋𝑡𝑡 𝑐𝑐𝑐𝑐𝑐𝑐ℎ ⋋ 𝑡𝑡 + (−1)𝑒𝑒 −⋋𝑡𝑡 𝑠𝑠𝑠𝑠𝑠𝑠ℎ ⋋ 𝑡𝑡

−⋋𝑡𝑡 −⋋𝑡𝑡
𝑒𝑒 ⋋𝑡𝑡 + 𝑒𝑒 −⋋𝑡𝑡 𝑒𝑒 ⋋𝑡𝑡 − 𝑒𝑒 −⋋𝑡𝑡
= 𝑒𝑒 [𝑐𝑐𝑐𝑐𝑐𝑐ℎ ⋋ 𝑡𝑡 − 𝑠𝑠𝑠𝑠𝑠𝑠ℎ ⋋ 𝑡𝑡] = 𝑒𝑒 � − �
2 2

= 𝑒𝑒 −⋋𝑡𝑡 𝑒𝑒 −⋋𝑡𝑡 = 𝑒𝑒 −2⋋𝑡𝑡

Hence E{X(t)} is not a constant.

So {X(t)} is evolutionary.

To prove Random telegraph signal process X(t) is WSS

1 1
𝐸𝐸{𝑋𝑋(𝑡𝑡)} = (−1) + (1) = 0 = 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐
2 2
𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2 ) = 𝐸𝐸[𝑋𝑋(𝑡𝑡1 )𝑋𝑋(𝑡𝑡2 )]
= 𝑃𝑃[𝑋𝑋(𝑡𝑡1 ) = 1, 𝑋𝑋(𝑡𝑡2 ) = 1] − 𝑃𝑃[𝑋𝑋(𝑡𝑡1 ) = 1, 𝑋𝑋(𝑡𝑡2 ) = −1]
− 𝑃𝑃[𝑋𝑋(𝑡𝑡1 ) = −1, 𝑋𝑋(𝑡𝑡2 ) = 1] + 2𝑃𝑃[𝑋𝑋(𝑡𝑡1 ) = −1, 𝑋𝑋(𝑡𝑡2 ) = −1]
𝑁𝑁𝑁𝑁𝑁𝑁 𝑡𝑡ℎ𝑒𝑒 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒 [𝑋𝑋(𝑡𝑡1 ) = 1, 𝑋𝑋(𝑡𝑡2 ) = 1] = [𝑋𝑋(𝑡𝑡1 ) = −1, 𝑋𝑋(𝑡𝑡2 ) = −1] and
[𝑋𝑋(𝑡𝑡1 ) = −1, 𝑋𝑋(𝑡𝑡2 ) = 1] = [𝑋𝑋(𝑡𝑡1 ) = 1, 𝑋𝑋(𝑡𝑡2 ) = −1]
𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2 ) = 2𝑃𝑃[𝑋𝑋(𝑡𝑡1 ) = 1, 𝑋𝑋(𝑡𝑡2 ) = 1] − 2𝑃𝑃[𝑋𝑋(𝑡𝑡1 ) = 1, 𝑋𝑋(𝑡𝑡2 ) = −1]
= 2𝑃𝑃[𝑋𝑋(𝑡𝑡2 ) = 1⁄(𝑡𝑡1 ) = 1] 𝑃𝑃[𝑋𝑋(𝑡𝑡1 ) = 1]
− 2𝑃𝑃[𝑋𝑋(𝑡𝑡2 ) = −1⁄(𝑡𝑡1 ) = 1] 𝑃𝑃[𝑋𝑋(𝑡𝑡1 ) = 1]
= 𝑃𝑃[𝑋𝑋(𝑡𝑡2 ) = 1⁄(𝑡𝑡1 ) = 1] − 𝑃𝑃[𝑋𝑋(𝑡𝑡2 ) = −1⁄(𝑡𝑡1 ) = 1]
Let 𝜏𝜏 = 𝑡𝑡2 − 𝑡𝑡1 then
𝑃𝑃[𝑋𝑋(𝑡𝑡2 ) = 1⁄(𝑡𝑡1 ) = 1] = 𝑃𝑃[𝑁𝑁(𝜏𝜏) = 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒]


𝑒𝑒 −⋋𝜏𝜏 (⋋ 𝜏𝜏)𝑘𝑘
= �
𝑘𝑘!
𝑘𝑘=𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
= 𝑃𝑃[𝑁𝑁(𝜏𝜏) = 0] + 𝑃𝑃[𝑁𝑁(𝜏𝜏) = 2] + 𝑃𝑃[𝑁𝑁(𝜏𝜏) = 4] + ⋯
−⋋𝜏𝜏
(⋋ 𝜏𝜏)2 (⋋ 𝜏𝜏)4
= 𝑒𝑒 �1 + + + ⋯�
2! 4!
= 𝑒𝑒 −⋋𝜏𝜏 𝑐𝑐𝑐𝑐𝑐𝑐ℎ ⋋ 𝜏𝜏
𝑒𝑒 ⋋𝜏𝜏 + 𝑒𝑒 −⋋𝜏𝜏
= 𝑒𝑒 −⋋𝜏𝜏 � �
2
1 + 𝑒𝑒 −⋋𝜏𝜏
=
2
𝑃𝑃[𝑋𝑋(𝑡𝑡2 ) = 1⁄(𝑡𝑡1 ) = −1] = 𝑃𝑃[𝑁𝑁(𝜏𝜏) = 0𝑑𝑑𝑑𝑑]
= 𝑃𝑃[𝑁𝑁(𝜏𝜏) = 1] + 𝑃𝑃[𝑁𝑁(𝜏𝜏) = 3] + 𝑃𝑃[𝑁𝑁(𝜏𝜏) = 5] + ⋯

⋋ 𝜏𝜏 (⋋ 𝜏𝜏)3 (⋋ 𝜏𝜏)5
= 𝑒𝑒 −⋋𝜏𝜏 � + + + ⋯�
1! 2! 4!
= 𝑒𝑒 −⋋𝜏𝜏 𝑠𝑠𝑠𝑠𝑠𝑠 ℎ ⋋ 𝜏𝜏
−⋋𝜏𝜏
𝑒𝑒 ⋋𝜏𝜏 − 𝑒𝑒 −⋋𝜏𝜏
= 𝑒𝑒 � �
2

1 − 𝑒𝑒 −⋋𝜏𝜏
=
2
1 + 𝑒𝑒 −⋋𝜏𝜏 1 − 𝑒𝑒 −⋋𝜏𝜏
𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2 ) = 2 � � − 2� � = 𝑒𝑒 −2⋋𝜏𝜏
2 2
𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆 𝐸𝐸[𝑋𝑋(𝑡𝑡)] = 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑎𝑎𝑎𝑎𝑎𝑎𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2 ) = 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 𝑜𝑜𝑜𝑜 𝜏𝜏
Hence Random telegraph signal process is WSS.

(ii) If {X(t)} is a Gaussian process with μ(t) = 10 and C(t₁, t₂) = 16e^{−|t₁−t₂|}, find the probability that (1) X(10) ≤ 8 and (2) |X(10) − X(6)| ≤ 4.
Ans: Since {X(t)} is a Gaussian process, any member of {X(t)} is a normal random variable.
By definition C(t₁, t₂) = R(t₁, t₂) − E[X(t₁)]·E[X(t₂)]
In particular C(t, t) = Var{X(t)}.
Now X (10) is a normal random variable with mean µ(10)=10
and variance C(10,10) =16
(a). To find P(x(10)≤8),we have
𝑋𝑋(10)−10 8−10
P(X(10)≤8)=P� ≤ �
4 4
=P[z≤-0.5]
=0.5-P[0≤X≤0.5]
=0.5-0.1915(from normal tables)
=0.3085
(b).To find P(|𝑋𝑋(10) − 𝑋𝑋(6)| ≤ 4)let U=X(10)-X(6).
Here U is also a Random variable and we have
E(U)=E[X(10)-X(6)]
=10-10

=0
Var(U) = Var[X(10) − X(6)]
       = Var[X(10)] + Var[X(6)] − 2Cov[X(10), X(6)]
       = C(10,10) + C(6,6) − 2C(10,6)
=16 𝑒𝑒 −|10−10| + 16𝑒𝑒 −|6−6| − 2 × 16𝑒𝑒 −|10−6|
=16+16-2×16e-4
= 31.4139
𝜎𝜎𝑢𝑢=√31.4139
=5.6048
Now P(|X(10) − X(6)| ≤ 4)=P(|𝑈𝑈| ≤ 4)
= P(-4≤U≤4)
𝑈𝑈−𝐸𝐸(𝑈𝑈)
Where Z=
𝜎𝜎𝜎𝜎
−4−0 4−0
= P� ≤ 𝑍𝑍 ≤ �
5.6048 5.6048
= 2× P[0≤Z≤0.7137]
=2×0.2611
=0.5222
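Both probabilities can be checked numerically (a sketch assuming scipy is available; the exact values differ slightly from the rounded table look-ups used above):

from scipy.stats import norm
import numpy as np

# (a) X(10) ~ N(10, 16)
print(norm.cdf(8, loc=10, scale=4))            # ≈ 0.3085

# (b) U = X(10) - X(6) has mean 0 and variance 16 + 16 - 2*16*e^{-4}
var_u = 32 - 32 * np.exp(-4)
sd_u = np.sqrt(var_u)
print(norm.cdf(4, scale=sd_u) - norm.cdf(-4, scale=sd_u))   # ≈ 0.52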

14. (a) (i) The random binary transmission process {𝑋𝑋(𝑡𝑡)} is a WSS with zero mean and auto
|𝜏𝜏|
correlation function 𝑅𝑅(𝜏𝜏) = 1 − , where T is a constant. Find the mean and
𝑇𝑇
variance of the time average of {𝑋𝑋(𝑡𝑡)} over (0, 𝑇𝑇). Is {𝑋𝑋(𝑡𝑡)} mean ergodic?
Ans: By definition
1 𝑇𝑇
����
𝑋𝑋𝑇𝑇 = � 𝑋𝑋(𝑡𝑡) 𝑑𝑑𝑑𝑑
𝑇𝑇 0
𝐸𝐸[𝑋𝑋𝑇𝑇 ] = 𝐸𝐸[𝑋𝑋(𝑡𝑡)] = 0
1 𝑇𝑇 |𝜏𝜏|
����
𝑉𝑉(𝑋𝑋𝑇𝑇 ) = � �1 − � 𝑅𝑅(𝜏𝜏) 𝑑𝑑𝑑𝑑
𝑇𝑇 −𝑇𝑇 𝑇𝑇
𝑇𝑇 |𝜏𝜏| |𝜏𝜏|
1
= � �1 − � �1 − � 𝑑𝑑𝑑𝑑
𝑇𝑇 −𝑇𝑇 𝑇𝑇 𝑇𝑇
𝑇𝑇 2
2 |𝜏𝜏|
= � �1 − � 𝑑𝑑𝑑𝑑
𝑇𝑇 0 𝑇𝑇
𝑇𝑇 2
2 𝜏𝜏 2𝜏𝜏
= � �1 + 2 − � 𝑑𝑑𝑑𝑑
𝑇𝑇 0 𝑇𝑇 𝑇𝑇
3 𝑇𝑇
2 𝜏𝜏 2𝜏𝜏 2
= �𝜏𝜏 + 2 − �
𝑇𝑇 3𝑇𝑇 2𝑇𝑇 0
2 𝑇𝑇 2 𝑇𝑇
= �𝑇𝑇 + − 𝑇𝑇� = ×
𝑇𝑇 3 𝑇𝑇 3
2
∴ lim 𝑉𝑉(𝑋𝑋 ����𝑇𝑇 ) = ≠ 0
𝑇𝑇→∞ 3
i.e., The condition for mean ergodic of X(t) is not satisfied. Therefore X(t) is
not mean ergodic.

(ii) Find the power spectral density of a WSS process with auto correlation function
2
𝑅𝑅(𝜏𝜏) = 𝑒𝑒 − 𝛼𝛼𝜏𝜏 .

Ans:
S_XX(ω) = ∫_{-∞}^{∞} R_XX(τ) e^{-iωτ} dτ
        = ∫_{-∞}^{∞} e^{-ατ²} e^{-iωτ} dτ
        = ∫_{-∞}^{∞} e^{-α(τ² + iωτ/α)} dτ
        = ∫_{-∞}^{∞} e^{-α[(τ + iω/2α)² + ω²/4α²]} dτ        (completing the square)
        = e^{-ω²/4α} ∫_{-∞}^{∞} e^{-α(τ + iω/2α)²} dτ
Put u = τ + iω/2α, so du = dτ:
        = e^{-ω²/4α} ∫_{-∞}^{∞} e^{-αu²} du
        = e^{-ω²/4α} ∫_{-∞}^{∞} e^{-(√α u)²} du
        = √(π/α) e^{-ω²/4α}
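A numerical spot-check of this transform pair (a sketch assuming scipy; α and ω are arbitrary test values, not from the question):

import numpy as np
from scipy.integrate import quad

alpha, w = 1.3, 2.0    # assumed test values

# Only the cosine part contributes; the sine part is odd in tau.
integrand = lambda tau: np.exp(-alpha * tau**2) * np.cos(w * tau)
numeric, _ = quad(integrand, -np.inf, np.inf)
print(numeric, np.sqrt(np.pi / alpha) * np.exp(-w**2 / (4 * alpha)))   # both ≈ 0.72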

(OR)
(b) (i) A random process {𝑋𝑋(𝑡𝑡)} is given by 𝑋𝑋(𝑡𝑡) = 𝐴𝐴 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 + 𝐵𝐵 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠, where A and B are
independent random variables such that 𝐸𝐸(𝐴𝐴) = 𝐸𝐸(𝐵𝐵) = 0 𝑎𝑎𝑎𝑎𝑎𝑎 𝐸𝐸(𝐴𝐴2 ) = 𝐸𝐸(𝐵𝐵2 ) =
𝜎𝜎 2 . Find the power spectral density of the process.
Ans: Given 𝑋𝑋(𝑡𝑡) = 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵
And the random variables A and B satisfy
𝐸𝐸(𝐴𝐴) = 𝐸𝐸(𝐵𝐵) = 0, 𝐸𝐸(𝐴𝐴𝐴𝐴) = 0, 𝐸𝐸(𝐴𝐴2 ) = 𝐸𝐸(𝐵𝐵2 ) = 𝜎𝜎 2
We have to prove
(𝑖𝑖)𝐸𝐸[𝑋𝑋(𝑡𝑡)] = 0
(𝑖𝑖𝑖𝑖)𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) 𝑖𝑖𝑖𝑖 𝑎𝑎 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 𝑜𝑜𝑜𝑜 𝜏𝜏

(𝑖𝑖) 𝐸𝐸[𝑋𝑋(𝑡𝑡)] = 𝐸𝐸[𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵 ] = 𝐸𝐸[𝐴𝐴]𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 + 𝐸𝐸[𝐵𝐵]𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠


= 0 + 0 + 0 = 0 = 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐.

(𝑖𝑖𝑖𝑖)𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐸𝐸[𝑋𝑋(𝑡𝑡)𝑋𝑋(𝑡𝑡 + 𝜏𝜏)]

= 𝐸𝐸[(𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵)(�𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏) + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵(𝑡𝑡 + 𝜏𝜏)�]


= 𝐸𝐸[𝐴𝐴2 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏) + 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏)


+ 𝐵𝐵2 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)]

= 𝐸𝐸[𝐴𝐴2 ]𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝐸𝐸[𝐴𝐴𝐴𝐴]𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)


+ 𝐸𝐸[𝐴𝐴𝐴𝐴]𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝐸𝐸[𝐵𝐵2 ] 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)

= 𝜎𝜎 2 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 0 + 0 + 𝜎𝜎 2 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)

= 𝜎𝜎 2 [𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)]

= 𝜎𝜎 2 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 − 𝜏𝜏 − 𝑡𝑡) = 𝜎𝜎 2 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(−𝜏𝜏) = 𝜎𝜎 2 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐, which depends on 𝜏𝜏

Hence by (i) & (ii) X(t) is a WSS process.

(ii) If the power spectral density of a WSS process is given by


𝑏𝑏
(𝑎𝑎 − |𝜔𝜔|), |𝜔𝜔| ≤ 𝑎𝑎
𝑆𝑆(𝜔𝜔) = �𝑎𝑎
0, |𝜔𝜔| > 𝑎𝑎
Find the autocorrelation function of the process.

Ans: 1
∫ S (ω )e
iωτ
R(τ ) = dω
2π −∞
a

∫ a ( a − ω ) ( cos ωτ + i sin ωτ ) dω
1 b
=
2π −a
a
2 b
=
2π ∫ a ( a − ω ) cos ωτ dω
0
a
b b
=
aπ ∫ a ( a − ω ) cos ωτ dω
0
a
b   sin ωτ  cos ωτ 
=( a − ω )  −
aπ   τ  τ 2  0
b
= (1 − cos aτ )
aπτ 2
2b  aτ 
= sin 2  
aπτ 2
 2 
15. (a) (i) Check whether the following systems are linear (1) 𝑌𝑌(𝑡𝑡) = 𝑡𝑡𝑡𝑡(𝑡𝑡) (2) 𝑌𝑌(𝑡𝑡) = 𝑋𝑋 2 (𝑡𝑡).
Ans: (1) If 𝑋𝑋(𝑡𝑡) = 𝑎𝑎1 𝑋𝑋1 (𝑡𝑡) + 𝑎𝑎2 𝑋𝑋2 (𝑡𝑡)

𝑌𝑌(𝑡𝑡) = � ℎ(𝛼𝛼)𝑋𝑋(𝑡𝑡 − 𝛼𝛼) 𝑑𝑑𝑑𝑑
−∞
𝑌𝑌(𝑡𝑡) = 𝑡𝑡𝑡𝑡(𝑡𝑡)
= 𝑡𝑡[𝑎𝑎1 𝑋𝑋1 (𝑡𝑡) + 𝑎𝑎2 𝑋𝑋2 (𝑡𝑡)]

= 𝑡𝑡 � ℎ(𝛼𝛼)[𝑎𝑎1 𝑋𝑋1 (𝑡𝑡 − 𝛼𝛼) + 𝑎𝑎2 𝑋𝑋2 (𝑡𝑡 − 𝛼𝛼)] 𝑑𝑑𝑑𝑑
−∞

= 𝑡𝑡 � ℎ(𝛼𝛼)𝑎𝑎1 𝑋𝑋1 (𝑡𝑡 − 𝛼𝛼) + ℎ(𝛼𝛼)𝑎𝑎2 𝑋𝑋2 (𝑡𝑡 − 𝛼𝛼) 𝑑𝑑𝑑𝑑
−∞

∞ ∞
= 𝑡𝑡 � ℎ(𝛼𝛼)𝑎𝑎1 𝑋𝑋1 (𝑡𝑡 − 𝛼𝛼)𝑑𝑑𝑑𝑑 + 𝑡𝑡 � ℎ(𝛼𝛼)𝑎𝑎2 𝑋𝑋2 (𝑡𝑡 − 𝛼𝛼) 𝑑𝑑𝑑𝑑
−∞ −∞
= 𝑡𝑡[𝑎𝑎1 𝑌𝑌1 (𝑡𝑡) + 𝑎𝑎2 𝑌𝑌2 (𝑡𝑡)]
(2)𝑌𝑌(𝑡𝑡) = 𝑋𝑋 2 (𝑡𝑡)
𝑋𝑋 2 (𝑡𝑡) = 𝑋𝑋(𝑡𝑡)𝑋𝑋(𝑡𝑡)
= [𝑎𝑎1 𝑋𝑋1 (𝑡𝑡) + 𝑎𝑎2 𝑋𝑋2 (𝑡𝑡)]2
= 𝑎𝑎12 (𝑡𝑡)𝑋𝑋12 + 𝑎𝑎22 𝑋𝑋22 (𝑡𝑡) +2𝑎𝑎1 𝑎𝑎2 𝑋𝑋1 (𝑡𝑡)𝑋𝑋2 (𝑡𝑡)
≠ 𝑎𝑎12 (𝑡𝑡)𝑋𝑋12 + 𝑎𝑎22 𝑋𝑋22 (𝑡𝑡)
∴ 𝑌𝑌(𝑡𝑡)𝑖𝑖𝑖𝑖 𝑛𝑛𝑛𝑛𝑛𝑛 𝑙𝑙𝑙𝑙𝑙𝑙𝑙𝑙𝑙𝑙𝑙𝑙.

(ii) The power spectral density of a signal X(t) is𝑆𝑆𝑋𝑋 (𝜔𝜔) and its power is P. Find the
power of the signal 𝑏𝑏𝑏𝑏(𝑡𝑡).
Ans: Let 𝑌𝑌(𝑡𝑡) = 𝑏𝑏𝑏𝑏(𝑡𝑡)
𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝐸𝐸[𝑌𝑌(𝑡𝑡)𝑌𝑌(𝑡𝑡 + 𝜏𝜏)]
= 𝐸𝐸[𝑏𝑏𝑏𝑏(𝑡𝑡)𝑏𝑏𝑏𝑏(𝑡𝑡 + 𝜏𝜏)]
𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝑏𝑏 2 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)

Taking Fourier transform on both sides


𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = 𝑏𝑏 2 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)

Power of Y(t)= ∫−∞ 𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) 𝑑𝑑𝑑𝑑

= � 𝑏𝑏 2 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) 𝑑𝑑𝑑𝑑
−∞
= 𝑏𝑏 2 𝑃𝑃
(OR)
1
(b) 1
A linear system is described by the impulse response ℎ(𝑡𝑡) = 𝑒𝑒 − �𝑅𝑅𝑅𝑅 � . Assume an
𝑅𝑅𝑅𝑅
input signal whose autocorrelation function is 𝐵𝐵𝐵𝐵(𝑡𝑡). Find the autocorrelation, mean
and power of output.
Ans: 1 𝑡𝑡
𝛽𝛽𝑒𝑒 −𝛽𝛽𝛽𝛽 , 𝑡𝑡 ≥ 0 1
ℎ(𝑡𝑡) = 𝑒𝑒 −𝑅𝑅𝑅𝑅 = � where 𝛽𝛽 =
𝑅𝑅𝑅𝑅 0, 𝑡𝑡 < 0 𝑅𝑅𝑅𝑅

𝐻𝐻(𝜔𝜔) = � ℎ(𝑡𝑡)𝑒𝑒 − 𝑗𝑗𝑗𝑗𝑗𝑗 𝑑𝑑𝑑𝑑
−∞

= � 𝛽𝛽𝑒𝑒 −𝛽𝛽𝛽𝛽 𝑒𝑒 − 𝑗𝑗𝑗𝑗𝑗𝑗 𝑑𝑑𝑑𝑑
0

= 𝛽𝛽 � 𝑒𝑒 − (𝛽𝛽 +𝑗𝑗𝑗𝑗 )𝑡𝑡 𝑑𝑑𝑑𝑑
0

𝑒𝑒 − (𝛽𝛽 +𝑗𝑗𝑗𝑗 )𝑡𝑡
= 𝛽𝛽 � �
− (𝛽𝛽 + 𝑗𝑗𝑗𝑗) 0
𝛽𝛽
=
𝛽𝛽 + 𝑗𝑗𝑗𝑗
2
𝛽𝛽2
|𝐻𝐻(𝜔𝜔)| = 2
𝛽𝛽 + 𝜔𝜔 2
If Y(t) is the output process then

𝑌𝑌(𝑡𝑡) = � ℎ(𝛼𝛼)𝑋𝑋(𝑡𝑡 − 𝛼𝛼)𝑑𝑑𝑑𝑑
−∞


𝐸𝐸[𝑌𝑌(𝑡𝑡)] = � ℎ(𝛼𝛼)𝐸𝐸[𝑋𝑋(𝑡𝑡 − 𝛼𝛼)] 𝑑𝑑𝑑𝑑
−∞
=0 ∵ 𝐸𝐸[𝑋𝑋(𝑡𝑡 − 𝛼𝛼)] = 0
𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝐵𝐵𝐵𝐵(𝜏𝜏)
𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = 𝐵𝐵𝐵𝐵[𝛿𝛿(𝜏𝜏)] = 𝐵𝐵
∵ 𝑡𝑡ℎ𝑒𝑒 𝐹𝐹𝐹𝐹𝐹𝐹𝐹𝐹𝐹𝐹𝐹𝐹𝐹𝐹 𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇 𝑜𝑜𝑜𝑜 𝛿𝛿(𝜏𝜏) = 1
𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)|𝐻𝐻(𝜔𝜔)|2
𝛽𝛽2
= 𝐵𝐵 2
𝛽𝛽 + 𝜔𝜔 2
Auto correlation of the output Y(t) is
𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝐹𝐹 −1 [𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔)]
𝛽𝛽2
= 𝐹𝐹 −1 �𝐵𝐵 2 �
𝛽𝛽 + 𝜔𝜔 2
𝐵𝐵𝛽𝛽 −1 2𝛽𝛽
= 𝐹𝐹 � 2 �
2 𝛽𝛽 + 𝜔𝜔 2
𝐵𝐵𝛽𝛽 −𝛽𝛽 |𝜏𝜏|
= 𝑒𝑒
2
1
𝐵𝐵 𝑅𝑅𝑅𝑅 − 1 |𝜏𝜏|
𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝑒𝑒 𝑅𝑅𝑅𝑅
2
𝐵𝐵 − 1 |𝜏𝜏|
∴ 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝑒𝑒 𝑅𝑅𝑅𝑅
2𝑅𝑅𝑅𝑅



Agni college of Technology
Chennai – 130

B.E./B.Tech. DEGREE EXAMINATIONS, MAY/JUNE 2012


Fourth Semester
Common to ECE/BIOMEDICAL
MA6453 – PROBABILITY AND RANDOM PROCESSES
(Regulations 2013)
Time: Three hours Maximum: 100 marks
Answer ALL Questions

Part – A (10X2 = 20 marks)

1. X and Y are independent random variables with variance 2 and 3. Find the variance of
3𝑋𝑋 + 4𝑌𝑌.
Ans: Given 𝑉𝑉(𝑋𝑋) = 2 𝑎𝑎𝑎𝑎𝑎𝑎 𝑉𝑉(𝑌𝑌) = 3.
V(3X + 4Y) = 3²V(X) + 4²V(Y) = 9×2 + 16×3 = 18 + 48 = 66
2. A continuous random variable X has probability density function 𝑓𝑓(𝑥𝑥) =
3𝑥𝑥 2 , 0 ≤ 𝑥𝑥 ≤ 1
� Find K such that 𝑃𝑃(𝑋𝑋 > 𝑘𝑘) = 0.5
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
Ans: 𝑃𝑃(𝑋𝑋 > 𝑘𝑘) = 0.5
1
� 3𝑥𝑥 2 𝑑𝑑𝑑𝑑 = 0.5
𝑘𝑘
1
𝑥𝑥 3
3 � � = 0.5
3 𝑘𝑘
1 − 𝑘𝑘 3 = 0.5
−𝑘𝑘 3 = 0.5 − 1
𝑘𝑘 3 = 0.5
𝑘𝑘 = (0.5)1/3
3. State central Limit Theorem for iid random variables.
Ans: out of syllabus for 2013 regulation
4. State the basic properties of joint distribution of (X,Y) when X and Y are random
variables.
Ans: (𝑖𝑖) 0 ≤ 𝐹𝐹(𝑥𝑥, 𝑦𝑦) ≤ 1
(𝑖𝑖𝑖𝑖) 𝑃𝑃[𝑎𝑎 < 𝑋𝑋 < 𝑏𝑏, 𝑌𝑌 ≤ 𝑦𝑦] = 𝐹𝐹(𝑏𝑏, 𝑦𝑦) − 𝐹𝐹(𝑎𝑎, 𝑦𝑦)
(𝑖𝑖𝑖𝑖𝑖𝑖) 𝑃𝑃[𝑋𝑋 ≤ 𝑥𝑥, 𝑐𝑐 < 𝑌𝑌 < 𝑑𝑑] = 𝐹𝐹(𝑥𝑥, 𝑑𝑑) − 𝐹𝐹(𝑥𝑥, 𝑐𝑐)
(𝑖𝑖𝑖𝑖) 𝑃𝑃[𝑎𝑎 < 𝑋𝑋 < 𝑏𝑏, 𝑐𝑐 < 𝑌𝑌 < 𝑑𝑑] = 𝐹𝐹(𝑏𝑏, 𝑑𝑑) − 𝐹𝐹(𝑎𝑎, 𝑑𝑑) − 𝐹𝐹(𝑏𝑏, 𝑐𝑐) + 𝐹𝐹(𝑎𝑎, 𝑐𝑐)
(𝑣𝑣) 𝐹𝐹(𝑥𝑥, 𝑦𝑦) is non- decreasing function.
(𝑣𝑣𝑣𝑣) 𝐹𝐹(−∞, 𝑦𝑦) = 0, 𝐹𝐹(𝑥𝑥, −∞) = 0, 𝐹𝐹(∞, ∞) = 1
𝜕𝜕 2 𝐹𝐹(𝑥𝑥, 𝑦𝑦)
(𝑣𝑣𝑣𝑣𝑣𝑣) 𝐴𝐴𝐴𝐴 𝑡𝑡ℎ𝑒𝑒 𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝𝑝 𝑜𝑜𝑜𝑜 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑜𝑜𝑜𝑜 𝑓𝑓(𝑥𝑥, 𝑦𝑦) = 𝑓𝑓(𝑥𝑥, 𝑦𝑦)
𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕
5. State the properties of an ergodic process.
Ans: A random process X(t) is said to be ergodic if the ensemble averages are equal
to the corresponding time averages.
6. Explain any two application of binomial process.
Ans: out of syllabus for 2013 regulation
7. Define cross correlation function and state any two of its properties.
Ans: The cross correlation of X(t) and Y(t) is denoted by 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2) and is defined


as 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐸𝐸[𝑋𝑋(𝑡𝑡)𝑌𝑌(𝑡𝑡 + 𝜏𝜏)] 𝑓𝑓𝑓𝑓𝑓𝑓 𝑎𝑎𝑎𝑎𝑎𝑎 𝑡𝑡𝑡𝑡𝑡𝑡𝑡𝑡𝑡𝑡 𝑡𝑡, 𝑡𝑡 + 𝜏𝜏


Properties
(1) 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑅𝑅𝑌𝑌𝑌𝑌 (−𝜏𝜏)
(2) |𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)| ≤ �𝑅𝑅𝑋𝑋𝑋𝑋 (0)𝑅𝑅𝑌𝑌𝑌𝑌 (0)
8. Find the variance of the stationary ergodic process {𝑋𝑋(𝑡𝑡)} whose auto correlation
4
function is given by {𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)} = 25 + .
1+6𝜏𝜏 2
Ans:
𝐸𝐸�𝑋𝑋(𝑡𝑡)� = 𝜇𝜇𝑋𝑋 = � lim 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)
𝜏𝜏→∞

4
= � lim 25 + = √25 = 5
𝜏𝜏→∞ 1 + 6𝜏𝜏 2
2 (𝑡𝑡)�
𝐸𝐸�𝑋𝑋 = 𝑅𝑅𝑋𝑋𝑋𝑋 (0)
= 25 + 4 = 29
2 (𝑡𝑡)�
𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋(𝑡𝑡)] = 𝐸𝐸�𝑋𝑋 − 𝐸𝐸(𝑋𝑋(𝑡𝑡))
29 − 52 = 4
9. Define a system. When is it called a linear system?
Ans: Mathematically, a system is a functional relation between input X(t) and
output Y(t). Symbolically, 𝑌𝑌(𝑡𝑡) = 𝑓𝑓[𝑋𝑋(𝑡𝑡)], −∞ < 𝑡𝑡 < ∞
The system said to be linear if for any two inputs 𝑋𝑋1 (𝑡𝑡) 𝑎𝑎𝑎𝑎𝑎𝑎 𝑋𝑋2 (𝑡𝑡)and
constants 𝑎𝑎1 , 𝑎𝑎2
𝑓𝑓[𝑎𝑎1 𝑋𝑋1 (𝑡𝑡) + 𝑎𝑎2 𝑋𝑋2 (𝑡𝑡)] = 𝑎𝑎1 𝑓𝑓[𝑋𝑋1 (𝑡𝑡)] + 𝑎𝑎2 𝑓𝑓[𝑋𝑋2 (𝑡𝑡)]
10. Define Band-Limited white noise.
Ans: Noise having a non-zero and constant power spectral density over a finite
frequency abnd and zero elsewhere is called band-limited white noise.
∴ the PSD of the Band limited white noise is given by
𝑁𝑁0
|𝜔𝜔| ≤ 𝑊𝑊𝐵𝐵
𝑆𝑆𝑁𝑁𝑁𝑁 (𝜔𝜔) = � 2 , 𝑓𝑓𝑓𝑓𝑓𝑓
0, 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒ℎ𝑒𝑒𝑒𝑒𝑒𝑒

Part – B
11. (a) (i) Define the moment generating function of a random variable? Derive the MGF ,
mean, variance and the first four moments of a Gamma distribution.
Ans: The moment generating function of a random variable X defined as 𝑀𝑀𝑋𝑋 (𝑡𝑡) =
∑∞ 𝑖𝑖=1 𝑒𝑒
𝑡𝑡𝑥𝑥 𝑖𝑖
𝑃𝑃(𝑥𝑥𝑖𝑖 ) 𝑖𝑖𝑖𝑖 𝑋𝑋 𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 𝑅𝑅𝑅𝑅
𝑡𝑡𝑡𝑡 ]
𝐸𝐸[𝑒𝑒 =� ∞ 𝑡𝑡𝑡𝑡
∫−∞ 𝑓𝑓(𝑥𝑥)𝑒𝑒 𝑑𝑑𝑑𝑑 𝑖𝑖𝑖𝑖 𝑋𝑋 𝑖𝑖𝑖𝑖 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑅𝑅𝑅𝑅
Moment Generating Function of Gamma distribution
The Gamma distribution is
∝−1
𝜆𝜆 𝑒𝑒 − 𝜆𝜆𝜆𝜆 ( 𝜆𝜆𝜆𝜆 )
𝑓𝑓(𝑥𝑥) = 𝑥𝑥 ≥ 0
Γ𝛼𝛼
𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝐸𝐸[𝑒𝑒 𝑡𝑡𝑡𝑡 ]

= � 𝑓𝑓(𝑥𝑥)𝑒𝑒 𝑡𝑡𝑡𝑡 𝑑𝑑𝑑𝑑
−∞
∞ ∝−1
𝑡𝑡𝑡𝑡
𝜆𝜆 𝑒𝑒 − 𝜆𝜆𝜆𝜆 ( 𝜆𝜆𝜆𝜆 )
= � 𝑒𝑒 𝑑𝑑𝑑𝑑
0 Γ𝛼𝛼

𝜆𝜆∝ ∞ − (𝜆𝜆−𝑡𝑡)𝑥𝑥 ( 𝜆𝜆𝜆𝜆 )∝−1
= � 𝑒𝑒 𝑑𝑑𝑑𝑑
Γ𝛼𝛼 0
Put 𝑢𝑢 = (𝜆𝜆 − 𝑡𝑡)𝑥𝑥
𝑑𝑑𝑑𝑑 = (𝜆𝜆 − 𝑡𝑡)𝑑𝑑𝑑𝑑
𝑥𝑥 → 0 ⟹ 𝑢𝑢 → 0
𝑥𝑥 → ∞ ⟹ 𝑢𝑢 → ∞
∞ ∝−1
𝜆𝜆∝ ∫0 𝑒𝑒 −𝑢𝑢 ( 𝑢𝑢)
∴ 𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝑑𝑑𝑑𝑑
Γ𝛼𝛼 (𝜆𝜆 − 𝑡𝑡)𝛼𝛼

𝜆𝜆
= Γ𝛼𝛼
Γ𝛼𝛼(𝜆𝜆 − 𝑡𝑡)𝛼𝛼
𝜆𝜆 𝛼𝛼 𝑡𝑡 − 𝛼𝛼
=� � = �1 − �
𝜆𝜆 − 𝑡𝑡 𝜆𝜆

′(𝑡𝑡) 𝑡𝑡 − 𝛼𝛼−1 1
𝑀𝑀𝑋𝑋 = −𝛼𝛼 �1 − � �− ��
𝜆𝜆 𝜆𝜆 𝑡𝑡=0
𝛼𝛼
𝜇𝜇′1 =
𝜆𝜆
′′(𝑡𝑡) 𝑡𝑡 − 𝛼𝛼−2 1 2
𝑀𝑀𝑋𝑋 = 𝛼𝛼[−(∝ +1)] �1 − � � � �
𝜆𝜆 𝜆𝜆 𝑡𝑡=0
𝛼𝛼(∝ +1)
𝜇𝜇′2 =
𝜆𝜆2
2
𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉 (𝑋𝑋) = 𝜇𝜇′2 − �𝜇𝜇′1 �
𝛼𝛼(∝ +1) 𝛼𝛼 2
= − � �
𝜆𝜆2 𝜆𝜆

𝛼𝛼 2 +∝ −𝛼𝛼 2
=
𝜆𝜆2

∴ 𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉(𝑋𝑋) = 2
𝜆𝜆

(ii) Describe Binomial 𝐵𝐵(𝑛𝑛, 𝑝𝑝) distribution and obtain the moment generating function.
Hence compute (1) the first four moments and (2) the recursion relation for the central
moments.
Ans: Binomial Distribution
A random variable X is said to follow a Binomial distribution if it assumes
only no-negative values with probability mass function
𝑃𝑃(𝑋𝑋 = 𝑥𝑥) = 𝑛𝑛𝑛𝑛𝑥𝑥 𝑝𝑝 𝑥𝑥 𝑞𝑞 𝑛𝑛−𝑥𝑥 , 𝑥𝑥 = 0,1,2,3, … , 𝑛𝑛 𝑎𝑎𝑎𝑎𝑎𝑎 𝑞𝑞 = 1 − 𝑝𝑝
Moment generating function of Binomial distribution
𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝐸𝐸(𝑒𝑒 𝑡𝑡𝑡𝑡 )
𝑛𝑛

= � 𝑒𝑒 𝑡𝑡𝑡𝑡 𝑝𝑝(𝑥𝑥)
𝑥𝑥=0

𝑛𝑛

= � 𝑒𝑒 𝑡𝑡𝑡𝑡 𝑛𝑛𝑛𝑛𝑥𝑥 𝑝𝑝 𝑥𝑥 𝑞𝑞 𝑛𝑛−𝑥𝑥


𝑥𝑥=0
𝑛𝑛

= � 𝑛𝑛𝑛𝑛𝑥𝑥 (𝑒𝑒 𝑡𝑡 𝑝𝑝)𝑥𝑥 𝑞𝑞 𝑛𝑛−𝑥𝑥


𝑥𝑥=0
𝑀𝑀𝑋𝑋 (𝑡𝑡) = (𝑝𝑝𝑒𝑒 𝑡𝑡 + 𝑞𝑞)𝑛𝑛
𝑑𝑑𝑀𝑀𝑋𝑋 (𝑡𝑡) 𝑑𝑑(𝑝𝑝𝑒𝑒 𝑡𝑡 + 𝑞𝑞)𝑛𝑛
𝑀𝑀𝑀𝑀𝑀𝑀𝑀𝑀 = 𝐸𝐸(𝑋𝑋) = � � =� �
𝑑𝑑𝑑𝑑 𝑡𝑡=0
𝑑𝑑𝑑𝑑 𝑡𝑡=0
= [𝑛𝑛(𝑝𝑝𝑒𝑒 𝑡𝑡 + 𝑞𝑞)𝑛𝑛−1 𝑝𝑝𝑒𝑒 𝑡𝑡 ]𝑡𝑡=0
= 𝑛𝑛𝑛𝑛(𝑝𝑝 + 𝑞𝑞)𝑛𝑛−1 = 𝑛𝑛𝑛𝑛 ∵ 𝑝𝑝 + 𝑞𝑞 = 1
∴ 𝑚𝑚𝑚𝑚𝑚𝑚𝑚𝑚 = 𝐸𝐸(𝑋𝑋) = 𝑛𝑛𝑛𝑛
𝑑𝑑2 𝑀𝑀𝑋𝑋 (𝑡𝑡) 𝑑𝑑 2 (𝑝𝑝𝑒𝑒 𝑡𝑡 + 𝑞𝑞)𝑛𝑛
𝐸𝐸(𝑋𝑋 2 ) = � � = � �
𝑑𝑑𝑡𝑡 2 𝑡𝑡=0
𝑑𝑑𝑡𝑡 2
𝑡𝑡=0
𝑑𝑑[𝑛𝑛(𝑝𝑝𝑒𝑒 𝑡𝑡 + 𝑞𝑞)𝑛𝑛−1 𝑝𝑝𝑒𝑒 𝑡𝑡 ]
=� �
𝑑𝑑𝑑𝑑 𝑡𝑡=0
= 𝑛𝑛𝑛𝑛[(𝑛𝑛 − 1)(𝑝𝑝𝑒𝑒 𝑡𝑡 + 𝑞𝑞)𝑛𝑛−2 𝑝𝑝𝑒𝑒 𝑡𝑡 + (𝑝𝑝𝑒𝑒 𝑡𝑡 + 𝑞𝑞)𝑛𝑛−1 𝑒𝑒 𝑡𝑡 ]𝑡𝑡=0
= 𝑛𝑛𝑛𝑛[(𝑛𝑛 − 1)(𝑝𝑝 + 𝑞𝑞)𝑛𝑛−2 𝑝𝑝 + (𝑝𝑝 + 𝑞𝑞)𝑛𝑛−1 ]
= 𝑛𝑛𝑛𝑛[(𝑛𝑛 − 1)𝑝𝑝 + 1]
= 𝑛𝑛𝑛𝑛(𝑛𝑛𝑛𝑛 − 𝑝𝑝 + 1) = 𝑛𝑛𝑛𝑛(𝑛𝑛𝑛𝑛 + 𝑞𝑞)
𝐸𝐸(𝑋𝑋 2 ) = 𝑛𝑛2 𝑝𝑝2 + 𝑛𝑛𝑛𝑛𝑛𝑛
𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉(𝑋𝑋) = 𝐸𝐸(𝑋𝑋 2 ) − [𝐸𝐸(𝑋𝑋)]2
= 𝑛𝑛2 𝑝𝑝2 + 𝑛𝑛𝑛𝑛𝑛𝑛 − (𝑛𝑛𝑛𝑛)2
𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉 (𝑋𝑋) = 𝑛𝑛𝑛𝑛𝑛𝑛
(OR)
(b) (i) A random variable X has the following probability distribution
X: 0 1 2 3 4 5 6 7
P(x): 0 K 2K 2K 3 𝐾𝐾 2 2𝐾𝐾 2 7𝐾𝐾 2 + 𝐾𝐾
K
Find
(1) The value of K
(2) 𝑃𝑃(1.5 < 𝑋𝑋 < 4.5⁄𝑋𝑋 > 2) and
(3) The smallest value of n for which 𝑃𝑃(𝑋𝑋 ≥ 𝑛𝑛) ≥ 1/2
Ans: Solution:(𝑖𝑖) ∑7𝑥𝑥=0 𝑝𝑝(𝑥𝑥) = 1
0 + 𝐾𝐾 + 2𝐾𝐾 + 2𝐾𝐾 + 3𝐾𝐾 + 𝐾𝐾 2 + 2𝑘𝑘 2 + 7𝑘𝑘 2 + 𝐾𝐾 = 1
10𝑘𝑘 2 + 9𝐾𝐾 = 1
10𝑘𝑘 2 + 9𝐾𝐾 − 1 = 0
1
𝐾𝐾 = 𝑜𝑜𝑜𝑜 𝐾𝐾 = −1
10
since 𝑃𝑃(𝑥𝑥) cannot de negative ,𝐾𝐾 = −1 is rejected
1
Hence 𝐾𝐾 =
10
X 0 1 2 3 4 5 6 7
0 1 2 2 3 1 2 17
𝑃𝑃(𝑋𝑋 = 𝑥𝑥) 10 10 10 10 100 100 100
(𝑖𝑖𝑖𝑖) 𝑃𝑃(𝑋𝑋 < 6) = 𝑃𝑃(𝑋𝑋 = 0) + 𝑃𝑃(𝑋𝑋 = 1) + ⋯ . +𝑃𝑃(𝑋𝑋 = 5)


1 2 2 3 1 81
= + + + + =
10 10 10 10 100 100
81 19
𝑃𝑃(𝑋𝑋 ≥ 6) = 1 − 𝑃𝑃(𝑋𝑋 < 6) = 1 − =
100 100
𝑃𝑃(0 < 𝑋𝑋 < 5) = 𝑃𝑃(𝑋𝑋 = 0) + 𝑃𝑃(𝑋𝑋 = 1) + ⋯ . +𝑃𝑃(𝑋𝑋 = 4)
1 2 2 3
= + + +
10 10 10 10
8 4
= =
10 5
1
(𝑖𝑖𝑖𝑖𝑖𝑖) 𝑃𝑃(𝑋𝑋 ≤ 3) =
2
8 1
𝑃𝑃(𝑋𝑋 ≤ 4) = >
10 2
𝑇𝑇ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒 𝑛𝑛 = 4
(𝑖𝑖𝑖𝑖)The distribution function 𝐹𝐹𝑋𝑋 (𝑥𝑥) of X is given by
X 0 1 2 3 4 5 6 7
0 1 3 5 8 81 83 100
=1
𝐹𝐹𝑋𝑋 (𝑥𝑥) = 𝑃𝑃(𝑋𝑋 10 10 10 10 100 100 100
≤ 𝑥𝑥)
𝑥𝑥
(ii) , 𝑥𝑥 > 0
Find the MGF of a random variable X having the pdf 𝑓𝑓(𝑥𝑥) = � 4𝑒𝑒 𝑥𝑥 /2
0, 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒ℎ𝑒𝑒𝑒𝑒𝑒𝑒
Also deduce the first four moments about the origin.
𝑥𝑥
Ans: , 𝑥𝑥 > 0
Given 𝑓𝑓(𝑥𝑥) = � 𝑥𝑥 /2 4𝑒𝑒
0, 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒ℎ𝑒𝑒𝑒𝑒𝑒𝑒
𝑥𝑥 − 𝑥𝑥
𝑒𝑒 2 , 𝑥𝑥 > 0
= �4
0, 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒ℎ𝑒𝑒𝑒𝑒𝑒𝑒
𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝐸𝐸[𝑒𝑒 𝑡𝑡𝑡𝑡 ]

= � 𝑓𝑓(𝑥𝑥)𝑒𝑒 𝑡𝑡𝑡𝑡 𝑑𝑑𝑑𝑑
−∞

𝑥𝑥 − 𝑥𝑥
= � 𝑒𝑒 𝑡𝑡𝑡𝑡 𝑒𝑒 2 𝑑𝑑𝑑𝑑
0 4

𝑥𝑥 − �1−𝑡𝑡�𝑥𝑥
=� 𝑒𝑒 2 𝑑𝑑𝑑𝑑
0 4
1 1 ∞
− �2 −𝑡𝑡�𝑥𝑥 − �2 −𝑡𝑡�𝑥𝑥
𝑥𝑥 𝑒𝑒 1 𝑒𝑒
=� � �− � 2 ��
4 − � − 𝑡𝑡�
1 4 1
� − 𝑡𝑡�
2 2 0

1 1
= �0 − 0 + 0 + 2�
4 1
� − 𝑡𝑡�
2
1 1 1 4 1
= 2 = =
4 1 4 (1 − 2𝑡𝑡)2 (1 − 2𝑡𝑡)2
� − 𝑡𝑡�
2

1
∴ 𝑀𝑀𝑋𝑋 (𝑡𝑡) =
(1 − 2𝑡𝑡)2

= (1 − 2𝑡𝑡)−2
= 1 + 2(2𝑡𝑡) + 3(2𝑡𝑡)2 + 4(2𝑡𝑡)3 + 5(2𝑡𝑡)4 + ⋯
= 1 + 4𝑡𝑡 + 12𝑡𝑡 2 + 32𝑡𝑡 3 + 80𝑡𝑡 4 + ⋯
𝑡𝑡 2 𝑡𝑡 3 𝑡𝑡 4
= 1 + 4 𝑡𝑡 + (24) + (192) + (1920) + ⋯
2! 3! 4!
𝑡𝑡 𝑟𝑟
Now 𝜇𝜇′𝑟𝑟 = 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑜𝑜𝑜𝑜
𝑟𝑟!
The first four moments are
𝜇𝜇′1 = 4, 𝜇𝜇′2 = 24, 𝜇𝜇′3 = 192, 𝜇𝜇′4 = 1920
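These four moments follow directly by differentiating the MGF; a short symbolic check (assuming sympy is available):

import sympy as sp

t = sp.Symbol('t')
M = (1 - 2 * t)**(-2)
print([sp.diff(M, t, r).subs(t, 0) for r in range(1, 5)])   # [4, 24, 192, 1920]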
12 (a) (i) If the joint pdf of two dimensional random variable (X,Y) is given by 𝑓𝑓(𝑥𝑥, 𝑦𝑦) =
𝑥𝑥𝑥𝑥
𝑥𝑥 2 + , 0 < 𝑥𝑥 < 1; 0 < 𝑦𝑦 < 2
� 3
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
Find
1
(1) 𝑃𝑃 �𝑋𝑋 > �
2
(2) 𝑃𝑃(𝑌𝑌 < 𝑋𝑋) and
(3) Find the conditional density function.
𝑥𝑥𝑥𝑥
Ans: 𝑥𝑥 2 + , 0 < 𝑥𝑥 < 1; 0 < 𝑦𝑦 < 2
Given 𝑓𝑓(𝑥𝑥, 𝑦𝑦) = � 3
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
The marginal density of X

𝑓𝑓𝑋𝑋 (𝑥𝑥) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑
−∞
2
𝑥𝑥𝑥𝑥
= � 𝑥𝑥 2 + 𝑑𝑑𝑑𝑑
0 3
2
2
𝑥𝑥𝑦𝑦 2
= �𝑥𝑥 𝑦𝑦 + �
3×2 0
4𝑥𝑥
= 2𝑥𝑥 2 +
3×2
2𝑥𝑥
= 2𝑥𝑥 2 + , 0 < 𝑥𝑥 < 1
3
The marginal density of X

𝑓𝑓𝑌𝑌 (𝑦𝑦) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑
−∞
1
𝑥𝑥𝑥𝑥
= � 𝑥𝑥 2 + 𝑑𝑑𝑑𝑑
0 3
1
𝑥𝑥 2 𝑥𝑥 2 𝑦𝑦
=� + �
3 6 0
1 𝑦𝑦
=� + �
3 6
𝑦𝑦 + 2
=� �, 0 < 𝑦𝑦 < 2
6

1
1
(1) 𝑃𝑃 �𝑋𝑋 > � = � 𝑓𝑓𝑋𝑋 (𝑥𝑥)𝑑𝑑𝑑𝑑
2 1/2
1
2𝑥𝑥
= � �2𝑥𝑥 2 + � 𝑑𝑑𝑑𝑑
1/2 3
1
2𝑥𝑥 3 2𝑥𝑥 2
=� + �
3 3(2) 1/2
2 1 1 1
=� + − − �
3 3 12 12
5
=
6

(2)𝑃𝑃(𝑌𝑌 < 𝑋𝑋) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑


𝑅𝑅
1 𝑥𝑥
𝑥𝑥𝑥𝑥
= � � �𝑥𝑥 2 + � 𝑑𝑑𝑑𝑑 𝑑𝑑𝑑𝑑
0 0 3
1 𝑥𝑥
2
𝑥𝑥𝑦𝑦 2
= � �𝑥𝑥 𝑦𝑦 + � 𝑑𝑑𝑑𝑑
0 3(2) 0

1
3
𝑥𝑥 3
= � �𝑥𝑥 + � 𝑑𝑑𝑑𝑑
0 6
1
7
= � 𝑥𝑥 3 𝑑𝑑𝑑𝑑
0 6
1
7 𝑥𝑥 4
= � �
6 4 0
7
=
24
Conditional density function
𝑥𝑥𝑥𝑥
𝑓𝑓(𝑥𝑥, 𝑦𝑦) 𝑥𝑥 2 +
3
𝑓𝑓(𝑦𝑦⁄𝑥𝑥) = = 2𝑥𝑥
𝑓𝑓(𝑥𝑥) 2
2𝑥𝑥 +
3
3𝑥𝑥 2 + 𝑥𝑥𝑥𝑥
= 2 , 0 < 𝑥𝑥 < 1, 0 < 𝑦𝑦 < 2
6𝑥𝑥 + 2𝑥𝑥
2 𝑥𝑥𝑥𝑥
𝑓𝑓(𝑥𝑥, 𝑦𝑦) 𝑥𝑥 + 3
𝑓𝑓(𝑥𝑥 ⁄𝑦𝑦) = = 𝑦𝑦+2
𝑓𝑓(𝑦𝑦)
6
2(3𝑥𝑥 2 + 𝑥𝑥𝑥𝑥)
= , 0 < 𝑥𝑥 < 1, 0 < 𝑦𝑦 < 2
𝑦𝑦 + 2

(OR)
(b) (i) The joint pdf of the random variables (X,Y) is 𝑓𝑓(𝑥𝑥, 𝑦𝑦)3(𝑥𝑥 + 𝑦𝑦), 0 ≤ 𝑥𝑥 ≤ 1, 0 ≤ 𝑦𝑦 ≤
1, 𝑥𝑥 + 𝑦𝑦 ≤ 1. Find 𝐶𝐶𝐶𝐶𝐶𝐶(𝑋𝑋, 𝑌𝑌).


Ans:

The marginal density of X


1−𝑥𝑥
𝑓𝑓𝑋𝑋 (𝑥𝑥) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑
0
1−𝑥𝑥
=� 3(𝑥𝑥 + 𝑦𝑦) 𝑑𝑑𝑑𝑑
0
1−𝑥𝑥
𝑦𝑦 2
= 3 �𝑥𝑥𝑥𝑥 + �
2 0
(1 − 𝑥𝑥)2
= 3 �𝑥𝑥(1 − 𝑥𝑥) + �
2
3
= [2𝑥𝑥 − 2𝑥𝑥 2 + 1 − 2𝑥𝑥 + 𝑥𝑥 2 ]
2
3
= [1 − 𝑥𝑥 2 ], 0 ≤ 𝑥𝑥 ≤ 1
2

The marginal density of X


1−𝑦𝑦
𝑓𝑓𝑌𝑌 (𝑦𝑦) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑
0
1
= � 3(𝑥𝑥 + 𝑦𝑦)𝑑𝑑𝑑𝑑
0
1−𝑦𝑦
𝑥𝑥 2
= 3 � + 𝑥𝑥𝑥𝑥�
2 0
(1 − 𝑦𝑦)2
= 3� + (1 − 𝑦𝑦)𝑦𝑦�
2

3
= [1 − 𝑦𝑦 2 ], 0 ≤ 𝑦𝑦 ≤ 1
2
1
𝐸𝐸(𝑋𝑋) = � 𝑥𝑥 𝑓𝑓𝑋𝑋 (𝑥𝑥) 𝑑𝑑𝑑𝑑
0
1
3
= � 𝑥𝑥[1 − 𝑥𝑥 2 ] 𝑑𝑑𝑑𝑑
0 2
3 1
= � (𝑥𝑥 − 𝑥𝑥 3 )𝑑𝑑𝑑𝑑
2 0

1
3 𝑥𝑥 2 𝑥𝑥 4
= � − �
2 2 4 0
3 1 1
= � − �
2 2 4
3
=
8
1
𝐸𝐸(𝑌𝑌) = � 𝑦𝑦𝑓𝑓𝑌𝑌 (𝑦𝑦)𝑑𝑑𝑑𝑑
0
1
3 3
= � [1 − 𝑦𝑦 2 ] 𝑑𝑑𝑑𝑑 =
0 2 8
∞ ∞
𝐸𝐸(𝑋𝑋𝑋𝑋) = � � 𝑥𝑥𝑥𝑥 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑
−∞ −∞
1 1−𝑦𝑦
= 3� � 𝑥𝑥𝑥𝑥 (𝑥𝑥 + 𝑦𝑦)𝑑𝑑𝑑𝑑 𝑑𝑑𝑑𝑑
0 0
1 1−𝑦𝑦
= 3� � (𝑥𝑥 2 𝑦𝑦 + 𝑥𝑥𝑥𝑥 2 )𝑑𝑑𝑑𝑑 𝑑𝑑𝑑𝑑
0 0

1 1−𝑦𝑦
𝑥𝑥 3 𝑦𝑦 𝑥𝑥 2 𝑦𝑦 2
= 3� � + � 𝑑𝑑𝑑𝑑
0 3 2 0
1
(1 − 𝑦𝑦)3 𝑦𝑦 (1 − 𝑦𝑦)2 𝑦𝑦 2
= 3� � + � 𝑑𝑑𝑑𝑑
0 3 2
1
1 1
= 3 � [𝑦𝑦 − 3𝑦𝑦 2 + 3𝑦𝑦 3 − 𝑦𝑦 4 ] + [𝑦𝑦 2 − 2𝑦𝑦 3 + 𝑦𝑦 4 ] 𝑑𝑑𝑑𝑑
0 3 2
1
1 𝑦𝑦 2 3𝑦𝑦 3 3𝑦𝑦 4 𝑦𝑦 5 1 𝑦𝑦 3 2𝑦𝑦 4 𝑦𝑦 5
= 3� � − + − �+ � − + ��
3 2 3 4 5 2 3 4 5 0
1 1 3 1 1 1 1 1 1
= 3 � � − 1 + − � + � − + �� =
3 2 4 5 2 3 2 5 10
𝑐𝑐𝑐𝑐𝑐𝑐(𝑋𝑋, 𝑌𝑌) = 𝐸𝐸(𝑋𝑋𝑋𝑋) − 𝐸𝐸(𝑋𝑋) − 𝐸𝐸(𝑌𝑌)
1 3 3
= −� × �
10 8 8
13
=−
320

(ii) Marks obtained by 10 students in Mathematics (X) and Statistics (Y) are given below:

X 60 34 40 50 45 40 22 43 42 64
Y 75 32 33 40 45 33 12 30 34 51
Find the two regression lines. Also find Y when X=55.


Ans:

𝑥𝑥 𝑦𝑦 𝑥𝑥 2 𝑦𝑦 2 𝑥𝑥𝑥𝑥
60 75 3600 5625 4500
34 32 1156 1024 1088
40 33 1600 1089 1320
50 40 2500 1600 2000
45 45 2025 2025 2025
40 33 1600 1089 1320
22 12 484 144 264
43 30 1849 900 1290
42 34 1764 1156 1428
64 51 4096 2601 3264
Total 440 385 20674 17253 18499
∑ 𝑥𝑥 440 ∑ 𝑦𝑦 385
𝑥𝑥̅ = = = 44 𝑎𝑎𝑎𝑎𝑎𝑎 𝑦𝑦� = = = 38.5
𝑛𝑛 10 𝑛𝑛 10
𝑛𝑛 ∑ 𝑥𝑥𝑥𝑥 − ∑ 𝑥𝑥 ∑ 𝑦𝑦 (10 × 18499) − (440 × 385)
𝑏𝑏𝑦𝑦𝑦𝑦 = = = 1.1865
𝑛𝑛 ∑ 𝑥𝑥 2 − (∑ 𝑥𝑥)2 (10 × 20674) − (440)2
𝑛𝑛 ∑ 𝑥𝑥𝑥𝑥 − ∑ 𝑥𝑥 ∑ 𝑦𝑦 (10 × 18499) − (440 × 385)
𝑏𝑏𝑥𝑥𝑥𝑥 = = = 0.6414
𝑛𝑛 ∑ 𝑦𝑦 2 − (∑ 𝑦𝑦)2 (10 × 17253) − (385)2
Regression line of y on x is
𝑦𝑦 − 𝑦𝑦� = 𝑏𝑏𝑦𝑦𝑦𝑦 (𝑥𝑥 − 𝑥𝑥̅ )
𝑦𝑦 − 38.5 = 1.1865(𝑥𝑥 − 44)
⟹ 𝑦𝑦 = 1.1865𝑥𝑥 − 13.706
𝑤𝑤ℎ𝑒𝑒𝑒𝑒 𝑥𝑥 = 55, 𝑦𝑦 = (1.1865 × 55) − 13.706 = 51.55
Regression line of x on y is
𝑥𝑥 − 𝑥𝑥̅ = 𝑏𝑏𝑥𝑥𝑥𝑥 (𝑦𝑦 − 𝑦𝑦�)
𝑥𝑥 − 44 = 0.6414(𝑦𝑦 − 38.5)
⟹ 𝑥𝑥 = 0.6414𝑦𝑦 − 19.306
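The regression coefficients and the prediction at X = 55 can be reproduced numerically (an illustrative sketch, assuming numpy is available):

import numpy as np

x = np.array([60, 34, 40, 50, 45, 40, 22, 43, 42, 64], dtype=float)
y = np.array([75, 32, 33, 40, 45, 33, 12, 30, 34, 51], dtype=float)
n = len(x)

sxy = n * np.sum(x * y) - x.sum() * y.sum()
byx = sxy / (n * np.sum(x**2) - x.sum()**2)     # ≈ 1.1865
bxy = sxy / (n * np.sum(y**2) - y.sum()**2)     # ≈ 0.6414
print(byx, bxy)
print(y.mean() + byx * (55 - x.mean()))         # predicted Y at X = 55, ≈ 51.55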
13 (a) (i) The process {𝑋𝑋(𝑡𝑡)} whose probability distribution under certain condition is given by
(𝑎𝑎𝑎𝑎 )𝑛𝑛 −1
𝑛𝑛 −1 , 𝑛𝑛 = 1,2,3, …
𝑃𝑃{𝑋𝑋(𝑡𝑡) = 𝑛𝑛} = �(1+𝑎𝑎𝑎𝑎 ) 𝑎𝑎𝑎𝑎
, 𝑛𝑛 = 0
1+𝑎𝑎𝑎𝑎
Mean and variance of the process. Is the process first order stationary?

Ans: Given

X(t)=n 0 1 2 3 ...
P{X(t)}=p(x 𝑎𝑎𝑎𝑎 1 𝑎𝑎𝑎𝑎 (𝑎𝑎𝑎𝑎) 2 ...

n) 1 + 𝑎𝑎𝑎𝑎 (1 + 𝑎𝑎𝑎𝑎)2 (1 + 𝑎𝑎𝑎𝑎)3 (1 + 𝑎𝑎𝑎𝑎)4


Mean
𝐸𝐸{𝑋𝑋(𝑡𝑡)} = ∑∞𝑛𝑛=0 𝑛𝑛𝑛𝑛(𝑥𝑥𝑛𝑛 )
𝑎𝑎𝑎𝑎 1 𝑎𝑎𝑎𝑎 (𝑎𝑎𝑎𝑎)2
=0 +1 + 2 + 3 +⋯
1 + 𝑎𝑎𝑎𝑎 (1 + 𝑎𝑎𝑎𝑎)2 (1 + 𝑎𝑎𝑎𝑎)3 (1 + 𝑎𝑎𝑎𝑎)4


1 𝑎𝑎𝑎𝑎 𝑎𝑎𝑎𝑎 2
= �1 + 2 + 3 � � + ⋯�
(1 + 𝑎𝑎𝑎𝑎)2 1 + 𝑎𝑎𝑎𝑎 1 + 𝑎𝑎𝑎𝑎
1 𝑎𝑎𝑎𝑎 −2
= �1 − �
(1 + 𝑎𝑎𝑎𝑎)2 1 + 𝑎𝑎𝑎𝑎
1 1 + 𝑎𝑎𝑎𝑎 − 𝑎𝑎𝑎𝑎 −2
= � �
(1 + 𝑎𝑎𝑎𝑎)2 1 + 𝑎𝑎𝑎𝑎
1 1
= ×
(1 + 𝑎𝑎𝑎𝑎)2 (1 + 𝑎𝑎𝑎𝑎)−2
1
= (1 + 𝑎𝑎𝑎𝑎)2
(1 + 𝑎𝑎𝑎𝑎)2
= 1, 𝑤𝑤ℎ𝑖𝑖𝑖𝑖ℎ 𝑖𝑖𝑖𝑖 𝑎𝑎 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐
Now

𝐸𝐸(𝑋𝑋 (𝑡𝑡)) = � 𝑛𝑛2 𝑝𝑝(𝑥𝑥𝑛𝑛 )


2

𝑛𝑛=0

= �[𝑛𝑛(𝑛𝑛 + 1) − 𝑛𝑛]𝑝𝑝(𝑥𝑥𝑛𝑛 )
𝑛𝑛=0
∞ ∞

= � 𝑛𝑛(𝑛𝑛 + 1)𝑝𝑝(𝑥𝑥𝑛𝑛 ) − � 𝑛𝑛𝑛𝑛(𝑥𝑥𝑛𝑛 )


𝑛𝑛=0 𝑛𝑛=0
1 𝑎𝑎𝑎𝑎 (𝑎𝑎𝑎𝑎)2
= �0 + 1.2 + 2.3 + 3.4 + ⋯ � − 𝐸𝐸{𝑋𝑋(𝑡𝑡)}
(1 + 𝑎𝑎𝑎𝑎)2 (1 + 𝑎𝑎𝑎𝑎)3 (1 + 𝑎𝑎𝑎𝑎)4
2 𝑎𝑎𝑎𝑎 𝑎𝑎𝑎𝑎 2
= �1 + 3. + 6� � + ⋯� − 1
(1 + 𝑎𝑎𝑎𝑎)2 (1 + 𝑎𝑎𝑎𝑎) 1 + 𝑎𝑎𝑎𝑎
2 𝑎𝑎𝑎𝑎 −3
= �1 − � −1
(1 + 𝑎𝑎𝑎𝑎)2 1 + 𝑎𝑎𝑎𝑎
2
= . (1 + 𝑎𝑎𝑎𝑎)3 − 1
(1 + 𝑎𝑎𝑎𝑎)2
= 2(1 + 𝑎𝑎𝑎𝑎) − 1
= 2 + 2𝑎𝑎𝑎𝑎 − 1
= 1 + 2𝑎𝑎𝑎𝑎
𝑉𝑉𝑉𝑉𝑉𝑉{𝑋𝑋(𝑡𝑡)} = 𝐸𝐸(𝑋𝑋 2 (𝑡𝑡)) − [𝐸𝐸(𝑥𝑥(𝑡𝑡))]2
= 1 + 2𝑎𝑎𝑎𝑎 − 1
= 2𝑎𝑎𝑎𝑎 𝑊𝑊ℎ𝑖𝑖𝑖𝑖ℎ 𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 𝑜𝑜𝑜𝑜 𝑡𝑡
∴ {𝑥𝑥(𝑡𝑡)} 𝑖𝑖𝑖𝑖 𝑛𝑛𝑛𝑛𝑛𝑛 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠

(ii) If the WSS process {𝑋𝑋(𝑡𝑡)} is given by 𝑋𝑋(𝑡𝑡) = 10 cos(100𝑡𝑡 + 𝜃𝜃), where 𝜃𝜃 is
uniformly distributed over(−𝜋𝜋, 𝜋𝜋), prove that {𝑋𝑋(𝑡𝑡)} is correlation ergodic.
Ans: We Know that
𝑅𝑅𝑥𝑥𝑥𝑥(𝜏𝜏) = 𝐸𝐸(𝑋𝑋(𝑡𝑡)𝑋𝑋(𝑡𝑡 + 𝜏𝜏))
= 𝐸𝐸(100𝑐𝑐𝑐𝑐𝑐𝑐(100𝑡𝑡 + 𝜃𝜃)𝑐𝑐𝑐𝑐𝑐𝑐(100𝑡𝑡 + 100 𝜏𝜏 + 𝜃𝜃))
= 100𝐸𝐸(𝑐𝑐𝑐𝑐𝑐𝑐(100𝑡𝑡 + 𝜃𝜃 + 100𝑡𝑡 + 100 𝜏𝜏 + 𝜃𝜃) + 𝑐𝑐𝑐𝑐𝑐𝑐(100𝑡𝑡 + 𝜃𝜃 − 100𝑡𝑡
− 100 𝜏𝜏 − 𝜃𝜃))

100
= 𝐸𝐸(𝑐𝑐𝑐𝑐𝑐𝑐(200𝑡𝑡 + 2𝜃𝜃 + 100 𝜏𝜏) + 𝑐𝑐𝑐𝑐𝑐𝑐(−100 𝜏𝜏))
2
= 50 E(cos(200t+2θ+100 τ)+cos100 τ)
= 50cos100 τ+50E(cos(200t+2θ+100 τ)) → (1)
1 𝜋𝜋
𝑁𝑁𝑁𝑁𝑁𝑁 𝐸𝐸(𝑐𝑐𝑐𝑐𝑐𝑐(200𝑡𝑡 + 2𝜃𝜃 + 100 𝜏𝜏)) = � 𝑐𝑐𝑐𝑐𝑐𝑐(200𝑡𝑡 + 2𝜃𝜃 + 100 𝜏𝜏)𝑑𝑑𝑑𝑑
2𝜋𝜋 −𝜋𝜋
1 𝜋𝜋
= � cos(200t + 2𝜃𝜃 + 100 𝜏𝜏)𝑑𝑑𝑑𝑑
𝜋𝜋 0
𝑠𝑠𝑠𝑠𝑠𝑠(200t + 2𝜃𝜃 + 100 𝜏𝜏) 𝜋𝜋
=� �
2 0
1
= [ 𝑠𝑠𝑠𝑠𝑠𝑠(200𝑡𝑡 + 2𝜋𝜋 + 100 𝜏𝜏) − 𝑠𝑠𝑠𝑠𝑠𝑠(200𝑡𝑡 + 100 𝜏𝜏)] = 0
2𝜋𝜋
Substituting in Equation (1),we have
𝐸𝐸(𝑋𝑋(𝑡𝑡). 𝑋𝑋(𝑡𝑡 + 𝜏𝜏)) = 50𝑐𝑐𝑐𝑐𝑐𝑐100 𝜏𝜏 → (2)
1 𝑇𝑇
Therefore Z T = ∫−𝑇𝑇 𝑋𝑋(𝑡𝑡). 𝑋𝑋(𝑡𝑡 + 𝜏𝜏)𝑑𝑑𝑑𝑑
2𝑇𝑇
1 𝑇𝑇
= � 100 cos(100𝑡𝑡 + 𝜃𝜃) cos(100𝑡𝑡 + 100𝜏𝜏 + 𝜃𝜃) 𝑑𝑑𝑑𝑑
2𝑇𝑇 −𝑇𝑇
50 𝑇𝑇
= � [𝐶𝐶𝐶𝐶𝐶𝐶𝐶𝐶(200𝑡𝑡 + 2𝜃𝜃 + 100𝜏𝜏) + cos(−100𝜏𝜏) 𝑑𝑑𝑑𝑑
2𝑇𝑇 −𝑇𝑇
25 𝑇𝑇 25 𝑇𝑇
= � cos(100𝜏𝜏) 𝑑𝑑𝑑𝑑 + � 𝐶𝐶𝐶𝐶𝐶𝐶(200𝑡𝑡 + 100𝜏𝜏 + 2𝜃𝜃)𝑑𝑑𝑑𝑑
𝑇𝑇 −𝑇𝑇 𝑇𝑇 𝑇𝑇
25 𝑇𝑇
= 50𝑐𝑐𝑐𝑐𝑐𝑐(100 𝜏𝜏)+ ∫𝑇𝑇 𝐶𝐶𝐶𝐶𝐶𝐶(200𝑡𝑡 + 100𝜏𝜏 + 2𝜃𝜃)𝑑𝑑𝑑𝑑
𝑇𝑇
25 1
= 50𝑐𝑐𝑐𝑐𝑐𝑐(100 𝜏𝜏) + � (sin(200𝑡𝑡 + 2𝜃𝜃 + 100𝜏𝜏) − (𝑠𝑠𝑠𝑠𝑠𝑠200𝑡𝑡 + 2𝜃𝜃))�
𝑇𝑇 200
1
= 50𝑐𝑐𝑐𝑐𝑐𝑐(100 𝜏𝜏) + [sin(200𝑡𝑡 + 2𝜃𝜃 + 100𝜏𝜏) − (𝑠𝑠𝑠𝑠𝑠𝑠200𝑡𝑡 + 2𝜃𝜃)]
4𝑇𝑇
Now lim 𝑇𝑇→∞ (𝑍𝑍𝑇𝑇 ) = 50cos(100 𝜏𝜏)
= 𝑅𝑅(𝜏𝜏)
Therefore {X(t)} is correlation ergodic

(OR)
(b) (i) If the process {𝑋𝑋(𝑡𝑡) ≥ 0}is a poisson process with parameter 𝜆𝜆, obtain 𝑃𝑃(𝑋𝑋(𝑡𝑡) =
𝑛𝑛). Is the process first order stationary?
Ans:
Probability Law for the Poisson Process {𝑋𝑋(𝑡𝑡)}
𝑀𝑀𝑀𝑀𝑀𝑀𝑀𝑀 = 𝐸𝐸[𝑋𝑋(𝑡𝑡)]
Let 𝜆𝜆, be the number of occurrences of the event in unit time.
𝑃𝑃𝑛𝑛 (𝑡𝑡) = 𝑃𝑃[𝑋𝑋(𝑡𝑡) = 𝑛𝑛]
Let 𝑃𝑃𝑛𝑛 (𝑡𝑡 + ∆𝑡𝑡) = 𝑃𝑃[𝑋𝑋(𝑡𝑡 + ∆𝑡𝑡) = 𝑛𝑛]
= 𝑃𝑃[(𝑛𝑛 − 1)𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑖𝑖𝑖𝑖 (0, 𝑡𝑡) 𝑎𝑎𝑎𝑎𝑎𝑎 1 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑖𝑖𝑖𝑖 (𝑡𝑡, 𝑡𝑡 + ∆𝑡𝑡)]
+ 𝑃𝑃[𝑛𝑛 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑖𝑖𝑖𝑖 (0, 𝑡𝑡)𝑎𝑎𝑎𝑎𝑎𝑎 𝑛𝑛𝑛𝑛 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑖𝑖𝑖𝑖 (𝑡𝑡, 𝑡𝑡 + ∆𝑡𝑡)]
= 𝑃𝑃𝑛𝑛−1 (𝑡𝑡)𝜆𝜆∆𝑡𝑡 + 𝑃𝑃𝑛𝑛 (𝑡𝑡)(1 − 𝜆𝜆∆𝑡𝑡)

𝑃𝑃𝑛𝑛 (𝑡𝑡 + ∆𝑡𝑡) − 𝑃𝑃𝑛𝑛 (𝑡𝑡)
= 𝜆𝜆[𝑃𝑃𝑛𝑛−1 (𝑡𝑡) − 𝑃𝑃𝑛𝑛 (𝑡𝑡)]
∆𝑡𝑡
Taking the limits as ∆𝑡𝑡 → 0
𝑃𝑃𝑛𝑛 (𝑡𝑡 + ∆𝑡𝑡) − 𝑃𝑃𝑛𝑛 (𝑡𝑡)
lim = lim 𝜆𝜆[𝑃𝑃𝑛𝑛−1 (𝑡𝑡) − 𝑃𝑃𝑛𝑛 (𝑡𝑡)]
∆𝑡𝑡→0 ∆𝑡𝑡 ∆𝑡𝑡→0

𝑃𝑃′𝑛𝑛 (𝑡𝑡) = 𝜆𝜆[𝑃𝑃𝑛𝑛−1 (𝑡𝑡) + 𝑃𝑃𝑛𝑛 (𝑡𝑡)]-------------------(1)


Let the solution of the equation (1) be
(𝜆𝜆𝜆𝜆 )𝑛𝑛
𝑃𝑃𝑛𝑛 (𝑡𝑡) = 𝑓𝑓(𝑡𝑡) -------------------(2)
𝑛𝑛!
Differentiating (2) w.r.to t
𝜆𝜆 𝑛𝑛
𝑃𝑃′𝑛𝑛 (𝑡𝑡) = [𝑛𝑛𝑡𝑡 𝑛𝑛−1 𝑓𝑓(𝑡𝑡) − 𝑓𝑓′(𝑡𝑡)] --------------------(3)
𝑛𝑛!
Using (2) and (3) in (1)
𝜆𝜆𝑛𝑛 (𝜆𝜆𝜆𝜆)𝑛𝑛−1 (𝜆𝜆𝜆𝜆)𝑛𝑛
[𝑛𝑛𝑡𝑡 𝑛𝑛−1 𝑓𝑓(𝑡𝑡) + 𝑡𝑡 𝑛𝑛 𝑓𝑓′(𝑡𝑡)] = 𝜆𝜆 � 𝑓𝑓(𝑡𝑡) − 𝑓𝑓(𝑡𝑡)�
𝑛𝑛! (𝑛𝑛 − 1)! 𝑛𝑛!

𝜆𝜆𝑛𝑛 𝑛𝑛−1 𝜆𝜆𝑛𝑛 𝜆𝜆𝑛𝑛 𝑡𝑡 𝑛𝑛−1 𝜆𝜆𝑛𝑛 𝑡𝑡 𝑛𝑛


𝑛𝑛𝑡𝑡 𝑓𝑓(𝑡𝑡) + 𝑡𝑡 𝑛𝑛 𝑓𝑓 ′ (𝑡𝑡) = 𝑓𝑓(𝑡𝑡) − 𝜆𝜆 𝑓𝑓(𝑡𝑡)
𝑛𝑛! 𝑛𝑛! (𝑛𝑛 − 1)! 𝑛𝑛!

𝜆𝜆𝑛𝑛 𝑛𝑛 ′ 𝜆𝜆𝑛𝑛 𝑡𝑡 𝑛𝑛
𝑡𝑡 𝑓𝑓 (𝑡𝑡) = − 𝜆𝜆 𝑓𝑓(𝑡𝑡)
𝑛𝑛! 𝑛𝑛!

𝑖𝑖. 𝑒𝑒., 𝑓𝑓 ′ (𝑡𝑡) = − 𝜆𝜆 𝑓𝑓(𝑡𝑡)


𝑓𝑓(𝑡𝑡) = 𝑘𝑘𝑒𝑒 −𝜆𝜆𝜆𝜆 ---------------------(4)
From (2)
𝑓𝑓(0) = 𝑃𝑃𝑛𝑛 (𝑡𝑡) = 𝑃𝑃[𝑋𝑋(0) = 0]
= 𝑃𝑃[𝑁𝑁𝑁𝑁 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒 𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜𝑜 𝑖𝑖𝑖𝑖 (0,0)]
=1 ------------(5)

Using (5) in (4), we get k=1 and hence

𝑓𝑓(𝑡𝑡) = 𝑒𝑒 −𝜆𝜆𝜆𝜆 ---------------(6)


Using (6) in (2)
(𝜆𝜆𝜆𝜆)𝑛𝑛 −𝜆𝜆𝜆𝜆
𝑃𝑃𝑛𝑛 (𝑡𝑡) =
𝑒𝑒 , 𝑛𝑛 = 0,1,2, ….
𝑛𝑛!
Thus the probability distribution of X(t) is the Poisson distribution with
parameter 𝜆𝜆𝜆𝜆

Mean of Poisson Process


𝐸𝐸[𝑋𝑋(𝑡𝑡) = � 𝑛𝑛𝑃𝑃𝑛𝑛 (𝑡𝑡)


𝑛𝑛=0


(𝜆𝜆𝜆𝜆)𝑛𝑛 𝑒𝑒 −𝜆𝜆𝜆𝜆
= � 𝑛𝑛
𝑛𝑛!
𝑛𝑛=0


−𝜆𝜆𝜆𝜆
(𝜆𝜆𝜆𝜆)𝑛𝑛
= 𝑒𝑒 � 𝑛𝑛
𝑛𝑛(𝑛𝑛 − 1)!
𝑛𝑛=1
−𝜆𝜆𝜆𝜆
(𝜆𝜆𝜆𝜆)1 (𝜆𝜆𝜆𝜆)2 (𝜆𝜆𝜆𝜆)3
= 𝑒𝑒 � + + + ⋯�
0! 1! 2!
−𝜆𝜆𝜆𝜆
(𝜆𝜆𝜆𝜆)1 (𝜆𝜆𝜆𝜆)2
= 𝑒𝑒 𝜆𝜆𝜆𝜆 �1 + + + ⋯�
1! 2!
= 𝑒𝑒 −𝜆𝜆𝜆𝜆 𝜆𝜆𝜆𝜆 𝑒𝑒 𝜆𝜆𝜆𝜆
𝑀𝑀𝑀𝑀𝑀𝑀𝑀𝑀 = 𝜆𝜆𝜆𝜆

𝐸𝐸[𝑋𝑋 2 (𝑡𝑡)] = � 𝑛𝑛2 𝑃𝑃𝑛𝑛 (𝑡𝑡)


𝑛𝑛=0

(𝜆𝜆𝜆𝜆)𝑛𝑛 𝑒𝑒 −𝜆𝜆𝜆𝜆
= �[𝑛𝑛(𝑛𝑛 − 1) + 𝑛𝑛]
𝑛𝑛!
𝑛𝑛=0
∞ ∞
(𝜆𝜆𝜆𝜆)𝑛𝑛 𝑒𝑒 −𝜆𝜆𝜆𝜆 (𝜆𝜆𝜆𝜆)𝑛𝑛 𝑒𝑒 −𝜆𝜆𝜆𝜆
= �[𝑛𝑛(𝑛𝑛 − 1)] + � 𝑛𝑛
𝑛𝑛! 𝑛𝑛!
𝑛𝑛=0 𝑛𝑛=0

𝑛𝑛
(𝜆𝜆𝜆𝜆)
= 𝑒𝑒 −𝜆𝜆𝜆𝜆 � 𝑛𝑛 (𝑛𝑛 − 1) + 𝜆𝜆𝜆𝜆
𝑛𝑛(𝑛𝑛 − 1)(𝑛𝑛 − 2)!
𝑛𝑛=2

−𝜆𝜆𝜆𝜆
(𝜆𝜆𝜆𝜆)𝑛𝑛
= 𝑒𝑒 � + 𝜆𝜆𝜆𝜆
(𝑛𝑛 − 2)!
𝑛𝑛=2
−𝜆𝜆𝜆𝜆
(𝜆𝜆𝜆𝜆)2 (𝜆𝜆𝜆𝜆)3 (𝜆𝜆𝜆𝜆)4
= 𝑒𝑒 � + + + ⋯ � + 𝜆𝜆𝜆𝜆
0! 1! 2!
−𝜆𝜆𝜆𝜆 (𝜆𝜆𝜆𝜆)2
(𝜆𝜆𝜆𝜆)1 (𝜆𝜆𝜆𝜆)2
= 𝑒𝑒 �1 + + + ⋯ � + 𝜆𝜆𝜆𝜆
1! 2!
= 𝑒𝑒 −𝜆𝜆𝜆𝜆 (𝜆𝜆𝜆𝜆)2 𝑒𝑒 𝜆𝜆𝜆𝜆 + 𝜆𝜆𝜆𝜆
= (𝜆𝜆𝜆𝜆)2 + 𝜆𝜆𝜆𝜆
𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋(𝑡𝑡)] = 𝐸𝐸[𝑋𝑋 2 (𝑡𝑡)] − (𝐸𝐸[𝑋𝑋(𝑡𝑡)])2
= (𝜆𝜆𝜆𝜆)2 + 𝜆𝜆𝜆𝜆 − (𝜆𝜆𝜆𝜆)2
= 𝜆𝜆𝜆𝜆
Since 𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋(𝑡𝑡)] is a function of t. Hence Poisson process is not first order
stationary.
(ii) Prove that a random telegraph signal process 𝑌𝑌(𝑡𝑡) = 𝛼𝛼 𝑋𝑋(𝑡𝑡) is a Wide Sense
Stationary process where 𝛼𝛼 is a random variable which is independent of X(t) and
assumes value -1 and +1 with equal probability and 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2 ) = 𝑒𝑒 − 2𝜆𝜆|𝑡𝑡 1 −𝑡𝑡 2 |
Ans: Let 𝑌𝑌(𝑡𝑡) = 𝛼𝛼 𝑋𝑋(𝑡𝑡)
1
𝑃𝑃(𝛼𝛼 = 1) = 𝑃𝑃(𝛼𝛼 = −1) =
2
By the definition 𝐸𝐸(𝛼𝛼) = 0, 𝐸𝐸(𝛼𝛼 2 ) = 1
To prove Y(t) is a WSS process
To Prove (i) 𝐸𝐸[𝑌𝑌(𝑡𝑡)] = 𝑎𝑎 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐
(ii) 𝑅𝑅(𝑡𝑡1 , 𝑡𝑡2 ) = 𝑎𝑎 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 𝑜𝑜𝑜𝑜 𝑡𝑡1 − 𝑡𝑡2
(i) 𝐸𝐸[𝑌𝑌(𝑡𝑡)] = 𝐸𝐸[𝛼𝛼𝛼𝛼(𝑡𝑡)]


= 𝐸𝐸(𝛼𝛼)𝐸𝐸[𝑋𝑋(𝑡𝑡)] since 𝛼𝛼 𝑎𝑎𝑎𝑎𝑎𝑎 𝑋𝑋(𝑡𝑡)𝑎𝑎𝑎𝑎𝑎𝑎 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖


= 0 = 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐
(ii) 𝑅𝑅(𝑡𝑡1 , 𝑡𝑡2 ) = 𝐸𝐸[𝑌𝑌(𝑡𝑡1 )𝑌𝑌(𝑡𝑡2 )] = 𝐸𝐸[𝛼𝛼𝛼𝛼(𝑡𝑡1 )𝛼𝛼𝛼𝛼(𝑡𝑡2 )]
= 𝐸𝐸[𝛼𝛼 2 ]𝐸𝐸[𝑋𝑋(𝑡𝑡1 )𝑋𝑋(𝑡𝑡2 )]
= 1 × 𝑒𝑒 −2𝜆𝜆|𝑡𝑡 1 −𝑡𝑡2 |
= 𝑎𝑎 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 𝑜𝑜𝑜𝑜 𝑡𝑡1 − 𝑡𝑡2
∴ 𝑌𝑌(𝑡𝑡) 𝑖𝑖𝑖𝑖 𝑎𝑎 𝑊𝑊𝑊𝑊𝑊𝑊
14 (a) (i) Find the mean and auto correlation of the Poisson process
Ans: Mean of the Poisson process
𝑀𝑀𝑀𝑀𝑀𝑀𝑀𝑀 = 𝐸𝐸[𝑋𝑋(𝑡𝑡)]

= � 𝑛𝑛𝑃𝑃𝑛𝑛 (𝑡𝑡)
𝑛𝑛=0

(𝜆𝜆𝜆𝜆)𝑛𝑛 𝑒𝑒 −𝜆𝜆𝜆𝜆
= � 𝑛𝑛
𝑛𝑛!
𝑛𝑛=0

−𝜆𝜆𝜆𝜆
(𝜆𝜆𝜆𝜆)𝑛𝑛
= 𝑒𝑒 � 𝑛𝑛
𝑛𝑛(𝑛𝑛 − 1)!
𝑛𝑛=1
−𝜆𝜆𝜆𝜆
(𝜆𝜆𝜆𝜆)1 (𝜆𝜆𝜆𝜆)2 (𝜆𝜆𝜆𝜆)3
= 𝑒𝑒 � + + + ⋯�
0! 1! 2!
−𝜆𝜆𝜆𝜆
(𝜆𝜆𝜆𝜆)1 (𝜆𝜆𝜆𝜆)2
= 𝑒𝑒 𝜆𝜆𝜆𝜆 �1 + + + ⋯�
1! 2!
= 𝑒𝑒 −𝜆𝜆𝜆𝜆 𝜆𝜆𝜆𝜆 𝑒𝑒 𝜆𝜆𝜆𝜆
𝑀𝑀𝑀𝑀𝑀𝑀𝑀𝑀 = 𝜆𝜆𝜆𝜆

𝐸𝐸[𝑋𝑋 2 (𝑡𝑡)] = � 𝑛𝑛2 𝑃𝑃𝑛𝑛 (𝑡𝑡)


𝑛𝑛=0

(𝜆𝜆𝜆𝜆)𝑛𝑛 𝑒𝑒 −𝜆𝜆𝜆𝜆
= �[𝑛𝑛(𝑛𝑛 − 1) + 𝑛𝑛]
𝑛𝑛!
𝑛𝑛=0
∞ ∞
(𝜆𝜆𝜆𝜆)𝑛𝑛 𝑒𝑒 −𝜆𝜆𝜆𝜆 (𝜆𝜆𝜆𝜆)𝑛𝑛 𝑒𝑒 −𝜆𝜆𝜆𝜆
= �[𝑛𝑛(𝑛𝑛 − 1)] + � 𝑛𝑛
𝑛𝑛! 𝑛𝑛!
𝑛𝑛=0 𝑛𝑛=0

𝑛𝑛
(𝜆𝜆𝜆𝜆)
= 𝑒𝑒 −𝜆𝜆𝜆𝜆 � 𝑛𝑛 (𝑛𝑛 − 1) + 𝜆𝜆𝜆𝜆
𝑛𝑛(𝑛𝑛 − 1)(𝑛𝑛 − 2)!
𝑛𝑛=2

−𝜆𝜆𝜆𝜆
(𝜆𝜆𝜆𝜆)𝑛𝑛
= 𝑒𝑒 � + 𝜆𝜆𝜆𝜆
(𝑛𝑛 − 2)!
𝑛𝑛=2
−𝜆𝜆𝜆𝜆
(𝜆𝜆𝜆𝜆)2 (𝜆𝜆𝜆𝜆)3 (𝜆𝜆𝜆𝜆)4
= 𝑒𝑒 � + + + ⋯ � + 𝜆𝜆𝜆𝜆
0! 1! 2!
−𝜆𝜆𝜆𝜆 (𝜆𝜆𝜆𝜆)2
(𝜆𝜆𝜆𝜆)1 (𝜆𝜆𝜆𝜆)2
= 𝑒𝑒 �1 + + + ⋯ � + 𝜆𝜆𝜆𝜆
1! 2!
= 𝑒𝑒 −𝜆𝜆𝜆𝜆 (𝜆𝜆𝜆𝜆)2 𝑒𝑒 𝜆𝜆𝜆𝜆 + 𝜆𝜆𝜆𝜆
= (𝜆𝜆𝜆𝜆)2 + 𝜆𝜆𝜆𝜆
𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋(𝑡𝑡)] = 𝐸𝐸[𝑋𝑋 2 (𝑡𝑡)] − (𝐸𝐸[𝑋𝑋(𝑡𝑡)])2


= (𝜆𝜆𝜆𝜆)2 + 𝜆𝜆𝜆𝜆 − (𝜆𝜆𝜆𝜆)2


= 𝜆𝜆𝜆𝜆

(ii) The random processes X(t) and Y(t) are defined by X(t) = A cos ωt + B sin ωt and Y(t) = B cos ωt − A sin ωt, where ω is a constant and A and B are independent random variables both having zero mean and variance σ². Prove that X(t) and Y(t) are jointly wide sense stationary.
Ans: The cross correlation of 𝑋𝑋(𝑡𝑡) 𝑎𝑎𝑎𝑎𝑎𝑎 𝑌𝑌(𝑡𝑡) is
𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐸𝐸[𝑋𝑋(𝑡𝑡)𝑌𝑌(𝑡𝑡 + 𝜏𝜏)]
= 𝐸𝐸[(𝐴𝐴 cos 𝜔𝜔𝜔𝜔 + 𝐵𝐵 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠)(𝐵𝐵 cos 𝜔𝜔(𝑡𝑡 + 𝜏𝜏) − 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏))]
= 𝐸𝐸(𝐴𝐴𝐴𝐴)[cos 𝜔𝜔𝜔𝜔 cos 𝜔𝜔(𝑡𝑡 + 𝜏𝜏) − 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)]
− 𝐸𝐸(𝐴𝐴2 )𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏) + 𝐸𝐸(𝐵𝐵2 )𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏)
𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆 𝐸𝐸(𝐴𝐴𝐴𝐴) = 𝐸𝐸(𝐴𝐴)𝐸𝐸(𝐵𝐵) = 0, 𝐸𝐸(𝐴𝐴2 ) = 𝐸𝐸(𝐵𝐵2 ) = 𝜎𝜎 2
𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝜎𝜎 2 [𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) − 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)]
= 𝜎𝜎 2 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 − 𝑡𝑡 + 𝜏𝜏)
= 𝜎𝜎 2 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝜏𝜏)
Which implies that 𝑋𝑋(𝑡𝑡) 𝑎𝑎𝑎𝑎𝑎𝑎 𝑌𝑌(𝑡𝑡) are jointly WSS processes.
(OR)
(b) State and prove Weiner-Khinchine Theorem.
Ans: Statement:
Let X(t) be a real WSS process with power density spectrum S XX (ω ) .
Let X T (t ) be a portion of the process X(t) in time interval –T to T. i.e.,
 X (t ) ,−T < t < T
X T (t ) = 
 0 , elsewhere Let X (ω ) be the Fourier transform of
T
X T (t ) , then

S XX (ω ) =
lim 1
T → ∞ 2T
E X T (ω )
2
{ }
Proof: Given X T (ω ) is the Fourier transform of X T (t )

∴ X T (ω ) = ∫X
−∞
T (t )e −iωt dt

T
= ∫X
−T
T (t )e −iωt dt

∫ X (t )e
−iωt
= dt
−T

X T (ω ) = X T* (ω ) X T (ω ) [where * denotes complex conjugate]


2

T T

∫ X (t )e dt. ∫ X (t )e −iωt dt
i ωt
=
−T −T [ X (t ) is real]


T iω t T − i ωt
= ∫ X (t )e 1 dt . ∫ X (t )e 2 dt
1 1 2 2
−T −T
T T
= ∫ ∫ X (t ) X (t
−T −T
1 2 )e −iω ( t2 −t2 ) dt1dt 2


lim
T →∞
E X T (ω ) =
2
[lim 1
T → ∞ 2T
]
T T

∫ ∫ X (t ) X (t
−T −T
1 2 )e −iω ( t2 −t2 ) dt1dt 2

But E [X ((t1 )(t 2 )] = R XX (t1 , t 2 ) if − T < t1 , t 2 < T


T T

∫ ∫ R XX (t1 , t 2 )e −iω ( t2 −t2 ) dt1dt 2



lim
T →∞
[
E X T (ω ) =
2 lim
T → ∞ 2T
1
] −T −T

We shall now make a change of variables as below


Put t1 = t and t 2 − t1 = τ ⇒ t 2 = τ + t
∴ the jacobian of transformation is

∂t1 ∂t1 1 0
= =1
J = ∂t ∂τ 1 1
∂t 2 ∂t 2
∂t ∂τ
 dt1 dt 2 = J dtdτ
The limits of t and –T and T
When t 2 = −T ,τ = −T − t and t 2 = T ,τ = T − t

lim 1
T → ∞ 2T
[
E X T (ω ) =
2 lim 1
T → ∞ 2T
] T −t T

∫ ∫R
−T −t −T
XX (t , t + τ )e −iωτ dtdτ

Since X(t) is WSS Process, R XX (t , t + τ ) = R XX (τ )


lim 1
T → ∞ 2T
[
E X T (ω ) =
2 lim 1
T → ∞ 2T
]T −t T
R (τ )e −iωτ dtdτ∫ ∫
−T −t −T
XX

lim 1
= T −t T
T → ∞ 2T
∫R (τ )e dτ . ∫ dt
−iωτ
XX
−T −t −T

lim 1
= T −t
T → ∞ 2T
∫R
−T −t
XX (τ )e −iωτ dτ .2T


lim
= T −t
T →∞
∫R
−T −t
XX (τ )e −iωτ dτ


= ∫R
−∞
XX (τ )e −iωτ dτ = S XX (ω )
, by definition.
∴ S XX (ω ) =lim E X T (ω ) [ 2
]
T →∞ 2T
Hence proved.

15 (a) (i) Show that the input {𝑋𝑋(𝑡𝑡)} is a WSS process for a linear system then output {𝑌𝑌(𝑡𝑡)} is
a WSS process. Also find 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏).
Ans: Let X(t) be a WSS process for a linear time invariant stable system with Y(t)
as the output process.

Y (t ) = ∫ h(u ) X (t − u )du
Then −∞ where h(t ) is weighting function or unit impulse
response.

∴ E [Y (t )] = ∫ E[h(u ) X (t − u )]du
−∞

= ∫ h(u ) E[ X (t − u )]du
−∞

Since X(t) is a WSS process, E [ X (t )] is a constant µ X for any t.


∴ E[ X (t − u )] = µ X
∞ ∞
∴ E [Y (t )] = ∫ h(u )µ X du = µ X ∫ h(u )du
−∞ −∞

∫ h(u )du
Since the system is stable , −∞ is finite
∴ E [Y (t )] is a constant.
Now RYY (t , t + τ ) = E[Y (t )Y (t + τ )]
∞ ∞
= E[ ∫ h(u1 ) E[ X (t − u1 )]du1 ∫ h(u 2 ) E[ X (t + τ − u 2 )]du 2 ]
−∞ −∞
∞ ∞
= E[ ∫ ∫ h(u1 )h(u 2 ) X (t − u1 ) X (t + τ − u 2 )du1 du 2 ]
− ∞− ∞
∞ ∞
= ∫ ∫ h(u )h(u
− ∞− ∞
1 2 ) E[ X (t − u1 ) X (t + τ − u 2 )]du1 du 2

Since X(t) is a WSS process, auto correlation function is only a


function of time difference

∞ ∞
∴ RYY (t , t + τ ) = ∫ ∫ h(u )h(u
− ∞− ∞
1 2 ) R XX (τ + u1 − u 2 )du1 du 2

When this double integral is evaluated by integrating with respect to


u1 and u 2 , the RHS is only a function of τ . Hence Y(t) is a WSS
process.

(ii) If {𝑋𝑋(𝑡𝑡)} is the input voltage to a circuit and {𝑌𝑌(𝑡𝑡)} is the output voltage, {𝑋𝑋(𝑡𝑡)} is a
stationary random process with 𝜇𝜇𝑋𝑋 = 0 and 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑒𝑒 − 𝛼𝛼 |𝜏𝜏| .Find the mean 𝜇𝜇𝑌𝑌 and
the power spectrum 𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) of the output if the power transfer function is given by
𝑅𝑅
𝐻𝐻(𝜔𝜔) =
𝑅𝑅+𝑖𝑖𝑖𝑖𝑖𝑖
Ans: ∞
Y (t ) = ∫ h(u ) X (t − u )du
(i) We know that −∞

∴ E [Y (t )] = ∫ h(u ) E[ X (t − u )]du
−∞
Since X(t) is stationary with mean 0, E[ X (t )] = 0 for all t
∴ E[ X (t − u )] = 0 ∴ E[Y (t )] = 0

S XX (ω ) = ∫R
−∞
XX (τ )e −iωτ dτ
(i) We know that

∫e
−α τ
= e −iωτ dτ
−∞
0 ∞
= ∫ e ατ e −iωτ dτ + ∫ e −ατ e −iωτ dτ
−∞ 0
0 ∞
= ∫ e (α −iω )τ dτ + ∫ e (α +iω )τ dτ
−∞ 0
0 ∞
 e (α −iω )τ   e − (α +iω )τ 
=  +  
 α − iω  −∞  − (α + iω )  0
1 1
= [1 − 0] − [0 − 1]
α − iω α + iω
1 1 α + iω + α − iω
= + =
α − iω α + iω (α − iω )(α + iω )

=
α +ω2
2

R
Given H (ω ) = .
R + iLω
We know that S yy (ω ) = H (ω ) S xx (ω )
2

2
R2 2α
=
R + iLω α + ω 2
2

R2 2α
= . 2
R + L ω α +ω2
2 2 2

(iii) The autocorrelation function of Y(t) is



1
2π −∫∞
RYY (τ ) = S YY (ω )e iωτ dω

1 2αR 2 e iωτ
=
2π ∫ 2 2 2 2 2 dω
.
−∞ ( R + L ω ) α + ω

αR 2 1 e iωτ
=
π∫−∞ ( R 2 + L2ω 2 ) α 2 + ω 2 dω
.

1
First we shall write ( R + L ω )(α + ω ) as partial fraction, treating
2 2 2 2 2

ω2 as u. We shall write the special partial fraction as


1 A B
= 2 + 2
( R + L ω )(α + ω ) R + L ω
2 2 2 2 2 2 2
α +ω2
R2 L2
Put u = − 2 we get A = 2 2
L α L − R2
1
Put u = −α 2 we get B = 2
R − L2α 2
L2 1
∴ 2
1
= α L − R + R − L2α 2
2 2 2 2

( R + L2ω 2 )(α 2 + ω 2 ) R 2 + L2ω 2 α 2 +ω2


L2 1 1 1
= + 2
α L −R R +Lω
2 2 2 2 2 2
R − L α α +ω2
2 2 2

∞ ∞
αR 2 L2 1 αR 2 1
∫ ∫
iτω
∴ RYY (τ ) = .e dω −
π (α L − R ) −∞ ( R + L ω )
2 2 2 2 2 2
π (α L − R ) −∞α +
2 2 2 2

∞ ∞
αR 2 L2 1 1 αR 2 1
=
π (α L − R ) L2
2 2 2 ∫ R2
.e iτω dω −
π (α L − R
2 2 2
) ∫α 2
+ω2
.e iτω d
−∞
( 2
+ω2) −∞
L

By Contour integration, we know that



e imz π − ma
∫−∞ z 2 + a 2 dz = a e , m > 0


αR 2 L2 π −τ αR 2 π −α τ
∴ RYY (τ ) = e − e
π (α L − R )  R 
2 2 2
π (α L − R ) α
2 2 2
 
L
αLR −τ R R2 −α τ
= e  − 2 2 e
α L −R
2 2 2
L α L −R
2

αLR −τ R R2 −α τ
= e  − e
R2 L
R2
L (α − 2 )
2 2
L (α − 2 )
2 2

L L
2
R R
α   
L −τ  R 
 e  −   2 e
L −α τ
=
L
2
R R
α2 −  α2 − 
L L

R
   − R  τ  R  −α τ 
 L
= αe  L  −  e 
L
2
 R  
α − 
2 
L
(OR)
(b) (i) If 𝑌𝑌(𝑡𝑡) = 𝐴𝐴 cos(𝜔𝜔𝜔𝜔 + 𝜃𝜃) + 𝑁𝑁(𝑡𝑡), where A is a constant, 𝜃𝜃 is a random variable with
a uniform distribution (−𝜋𝜋, 𝜋𝜋), and {𝑁𝑁(𝑡𝑡)} is a band limited Gaussian white noise
𝑁𝑁0
𝑓𝑓𝑓𝑓𝑓𝑓 |𝜔𝜔 − 𝜔𝜔0 | < 𝜔𝜔𝐵𝐵
with power spectral density 𝑆𝑆𝑁𝑁𝑁𝑁 (𝜔𝜔) = � 2
0, 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒ℎ𝑒𝑒𝑒𝑒𝑒𝑒
Find the power spectral density of Y(t). Assume that {𝑁𝑁(𝑡𝑡)} and 𝜃𝜃 are independent.
Ans: 𝑌𝑌(𝑡𝑡)𝑌𝑌(𝑡𝑡 + 𝜏𝜏) = [𝐴𝐴 cos(𝜔𝜔0 𝑡𝑡 + 𝜃𝜃) + 𝑁𝑁(𝑡𝑡)][𝐴𝐴 cos(𝜔𝜔0 (𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃) + 𝑁𝑁(𝑡𝑡 + 𝜏𝜏)]
= 𝐴𝐴2 cos(𝜔𝜔𝜔𝜔 + 𝜃𝜃) cos(𝜔𝜔0 (𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃) + 𝑁𝑁(𝑡𝑡)𝑁𝑁(𝑡𝑡 + 𝜏𝜏)
+ 𝐴𝐴 cos(𝜔𝜔0 𝑡𝑡 + 𝜃𝜃)𝑁𝑁(𝑡𝑡 + 𝜏𝜏) + 𝐴𝐴 cos(𝜔𝜔0 (𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃)𝑁𝑁(𝑡𝑡)
𝑅𝑅𝑌𝑌𝑌𝑌 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐴𝐴2 E[cos(𝜔𝜔0 𝑡𝑡 + 𝜃𝜃) cos(𝜔𝜔0 (𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃)] + 𝐸𝐸[𝑁𝑁(𝑡𝑡)𝑁𝑁(𝑡𝑡 + 𝜏𝜏)]
+ 𝐴𝐴 𝐸𝐸[cos(𝜔𝜔0 𝑡𝑡 + 𝜃𝜃)] 𝐸𝐸[𝑁𝑁(𝑡𝑡 + 𝜏𝜏)]
+ 𝐴𝐴 𝐸𝐸[cos(𝜔𝜔0 (𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃)] 𝐸𝐸[𝑁𝑁(𝑡𝑡)]
Since 𝑁𝑁(𝑡𝑡) 𝑎𝑎𝑎𝑎𝑎𝑎 𝜃𝜃 are independent.
By hypothesis 𝐸𝐸[𝑁𝑁(𝑡𝑡)] = 0, 𝐸𝐸[𝑁𝑁(𝑡𝑡 + 𝜏𝜏) = 0
𝐴𝐴2
∴ 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝑐𝑐𝑐𝑐𝑐𝑐𝜔𝜔0 𝜏𝜏 + 𝑅𝑅𝑁𝑁𝑁𝑁 (𝜏𝜏)
2
𝐴𝐴2 ∞
𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = � 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑 + 𝑆𝑆𝑁𝑁𝑁𝑁 (𝜔𝜔)
2 −∞
𝜋𝜋𝐴𝐴2
= [𝛿𝛿(𝜔𝜔 − 𝜔𝜔0 ) + 𝛿𝛿(𝜔𝜔 − 𝜔𝜔0 )] + 𝑆𝑆𝑁𝑁𝑁𝑁 (𝜔𝜔)
2
(ii) A system has an impulse response ℎ(𝑡𝑡) = 𝑒𝑒 − 𝛽𝛽𝛽𝛽 𝑈𝑈(𝑡𝑡), find the power spectral density
of the output Y(t) corresponding to the input X(t).
Ans: Given X (t ) is the input process and Y (t ) is the output process of a linear
system.


S (ω ) = H (ω ) S YY (ω )
2

We know that YY
Where H (ω ) = Fourier transform of the function h(t ) . But the unit step
function
0 , t < 0
U (t ) = 
1 , t ≥ 0
 0 ,t < 0
∴ h(t ) =  − βt
e ,t ≥ 0

∫ h(t )e
− i ωt
∴ H (ω ) = F [h(t )] = dt
−∞

= ∫ e − βt e −iωt dt
0  h(t ) = 0 if t<0

= ∫ e −( β +iω )t dt
0

 e − ( β + iω ) t 
= 
 − ( β + iω )  0
1
=− [0 − 1]
( β + iω )

1
=
( β + iω )
1 1
∴ H (ω ) = =
β + iω β 2 +ω2
1
∴ S YY (ω ) = .S XX (ω )
β +ω2
2



Agni college of Technology
Chennai – 130

B.E./B.Tech. DEGREE EXAMINATIONS, NOV/DEC 2015


Fourth Semester
Common to ECE/BIOMEDICAL
MA6453 – PROBABILITY AND RANDOM PROCESSES
(Regulations 2008)
Time: Three hours Maximum: 100 marks
Answer ALL Questions

Part – A (10X2 = 20) marks

1 Assume that X is a continuous random variable with the probability


3
(2𝑥𝑥 − 𝑥𝑥 2 ), 0 < 𝑥𝑥 < 2
density function 𝑓𝑓(𝑥𝑥) = �4 . Find 𝑃𝑃(𝑋𝑋 > 1)
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
3
Ans: (2𝑥𝑥 − 𝑥𝑥 2 ), 0 < 𝑥𝑥 < 2
Given 𝑓𝑓(𝑥𝑥) = �4
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
2
3
𝑃𝑃(𝑋𝑋 > 1) = � (2𝑥𝑥 − 𝑥𝑥 2 ) 𝑑𝑑𝑑𝑑
1 4
2 3 2
3 𝑥𝑥 𝑥𝑥 3 8 1 3 2 1
= �2 − � = ��4 − � − �1 − �� = × =
4 2 3 1 4 3 3 4 3 2
2 A random variable X is uniformly distributed between 3 and 15, Find the
variance of X.
Ans: Given a=3 and b=15
(𝑏𝑏 − 𝑎𝑎)2 (15 − 3)2 144
𝑉𝑉𝑉𝑉𝑉𝑉(𝑋𝑋) = = = = 12
12 12 12
3 The joint probability mass function of a two dimensional random variable
(𝑋𝑋, 𝑌𝑌) is given by 𝑃𝑃(𝑥𝑥, 𝑦𝑦) = 𝑘𝑘(2𝑥𝑥 + 𝑦𝑦), 𝑥𝑥 = 1,2 𝑎𝑎𝑎𝑎𝑎𝑎 𝑦𝑦 = 1,2 where k is
a constant. Find the value of k.
Ans: The PMF is
X
Y 1 2 P(x)
1 3k 4k 7k
2 5k 6k 11k
P(y) 8k 10k 18k

Since ∑ ∑ 𝑝𝑝(𝑥𝑥, 𝑦𝑦) = 1


3𝑘𝑘 + 4𝑘𝑘 + 5𝑘𝑘 + 6𝑘𝑘 = 1
18𝑘𝑘 = 1
1
𝑘𝑘 =
18
4 The two regression equations of two random variables X and Y are
4𝑥𝑥 − 5𝑦𝑦 + 33 = 0 𝑎𝑎𝑎𝑎𝑎𝑎 20𝑥𝑥 − 9𝑦𝑦 = 107. Find the mean values of X and
Y.
Ans: 4𝑥𝑥 − 5𝑦𝑦 + 33 = 0 𝑎𝑎𝑎𝑎𝑎𝑎 20𝑥𝑥 − 9𝑦𝑦 = 107

5 Define SSS process.



Ans: A random process is called a SSS process if all its finite


dimensional distribution are invariant under translation of the time
parameter. i.e., if the joint distribution of 𝑋𝑋(𝑡𝑡1 ), 𝑋𝑋(𝑡𝑡2 ), … , 𝑋𝑋(𝑡𝑡𝑛𝑛 )is
the same as that of 𝑋𝑋(𝑡𝑡1 + ℎ), 𝑋𝑋(𝑡𝑡2 + ℎ), … , 𝑋𝑋(𝑡𝑡𝑛𝑛 + ℎ)for all
𝑡𝑡1 , 𝑡𝑡2 ,… 𝑡𝑡𝑛𝑛 and ℎ > 0 𝑎𝑎𝑎𝑎𝑎𝑎 for all 𝑛𝑛 ≥ 1.
6 Write down any two properties of Poisson process.
Ans:
(1) Sum of two Poisson processes is a Poisson process.

(2) Difference of two Poisson processes is not a Poisson process

7 Compute the mean value of the random process {𝑋𝑋(𝑡𝑡)} whose


4
autocorrelation function is given by {𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)} = 25 +
1+6𝜏𝜏 2
Ans:
𝐸𝐸�𝑋𝑋(𝑡𝑡)� = 𝜇𝜇𝑋𝑋 = � lim 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)
𝜏𝜏→∞

4
= � lim 25 + = √25 = 5
𝜏𝜏→∞ 1 + 6𝜏𝜏 2
𝐸𝐸�𝑋𝑋 2 (𝑡𝑡)� = 𝑅𝑅𝑋𝑋𝑋𝑋 (0)
= 25 + 4 = 29
2 (𝑡𝑡)�
𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋(𝑡𝑡)] = 𝐸𝐸�𝑋𝑋 − 𝐸𝐸(𝑋𝑋(𝑡𝑡))
29 − 52 = 4
8 Prove that the spectral density of a real random process is an even
function.
Ans: The spectral density function of a real random process is an even
function. 𝑖𝑖. 𝑒𝑒, 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = 𝑆𝑆𝑋𝑋𝑋𝑋 (−𝜔𝜔)
9 Define causal system.
Ans: A causal system (also known as a physical or nonanticipative system) is a system whose output depends only on past and current inputs, not on future inputs.
10 Define transfer function of a system.
Ans: The output Y(t) is a linear time invariant system is fully determined
by the impulse response h(t). The Fourier Transform of the impulse

response h(t) is defined by 𝐻𝐻(𝜔𝜔) = ∫−∞ ℎ(𝑡𝑡)𝑒𝑒 − 𝑗𝑗𝑗𝑗𝑗𝑗 𝑑𝑑𝑑𝑑. Here 𝐻𝐻(𝜔𝜔)
is called the transfer function of the system.
PART – B
11. (a) (i) 2𝑥𝑥, 0 ≤ 𝑥𝑥 ≤ 𝑏𝑏
The pdf of a random variable X is given by 𝑓𝑓(𝑥𝑥) = � for
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
what value of b is 𝑓𝑓(𝑥𝑥) a valid pdf?. Also find the cdf of the random
variable X with the above pdf.
Ans: Since ∫∞ 𝑓𝑓(𝑥𝑥)𝑑𝑑𝑑𝑑 = 1
−∞
𝑏𝑏
� 2𝑥𝑥 𝑑𝑑𝑑𝑑 = 1
0
𝑏𝑏
2𝑥𝑥 2
� � =1
2 0


b² − 0 = 1, so b = ±1
Take b = 1, since b must be positive (0 < b).
𝑥𝑥
The cdf of X is 𝐹𝐹(𝑥𝑥) = 𝑃𝑃(𝑋𝑋 ≤ 𝑥𝑥) = ∫−∞ 𝑓𝑓(𝑥𝑥) 𝑑𝑑𝑑𝑑
(𝑖𝑖) 𝐼𝐼𝐼𝐼 𝑥𝑥 < 0
𝑥𝑥
𝐹𝐹(𝑥𝑥) = � 𝑓𝑓(𝑥𝑥) 𝑑𝑑𝑑𝑑 = 0
−∞
(𝑖𝑖𝑖𝑖) 𝐼𝐼𝐼𝐼 0 < 𝑥𝑥 < 1
0 𝑥𝑥
𝐹𝐹(𝑥𝑥) = � 𝑓𝑓(𝑥𝑥) 𝑑𝑑𝑑𝑑 + � 𝑓𝑓(𝑥𝑥) 𝑑𝑑𝑑𝑑
−∞ 0
𝑥𝑥
= 0 + � 2𝑥𝑥 𝑑𝑑𝑑𝑑
0
2 𝑥𝑥
2𝑥𝑥
=� � = 𝑥𝑥 2
2 0
(𝑖𝑖𝑖𝑖𝑖𝑖) 𝐼𝐼𝐼𝐼 𝑥𝑥 > 1
0 1 𝑥𝑥
𝐹𝐹(𝑥𝑥) = � 𝑓𝑓(𝑥𝑥) 𝑑𝑑𝑑𝑑 + � 𝑓𝑓(𝑥𝑥) 𝑑𝑑𝑑𝑑 + � 𝑓𝑓(𝑥𝑥) 𝑑𝑑𝑑𝑑
−∞ 0 1
2 1
2𝑥𝑥
=0+� � +0
2 0
=1
0, 𝑥𝑥 < 0
2
The cdf of X is 𝐹𝐹(𝑥𝑥) = �𝑥𝑥 , 0 < 𝑥𝑥 < 1
1, 𝑥𝑥 > 1

(ii) State and prove memory less property of Geometric distribution.


Ans: Definition of Geometric Distribution
The random variable X is said to have a geometric distribution with parameter p if the probability mass function is given by P(X = x) = q^{x−1} p, x = 1, 2, 3, …
Memory less Property of Geometric Distribution
If X follows a geometric distribution with parameter p, then for any two positive integers s and t,
𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡/𝑋𝑋 > 𝑠𝑠] = 𝑃𝑃[𝑋𝑋 > 𝑡𝑡]
Proof:
𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡 ∩ 𝑋𝑋 > 𝑠𝑠]
𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡/𝑋𝑋 > 𝑠𝑠] =
𝑃𝑃(𝑋𝑋 > 𝑠𝑠)
𝑥𝑥−1
Now 𝑃𝑃(𝑋𝑋 = 𝑥𝑥) = 𝑞𝑞 𝑝𝑝, 𝑥𝑥 = 1,2,3, …

𝑃𝑃[𝑋𝑋 > 𝑘𝑘] = � 𝑞𝑞 𝑥𝑥−1 𝑝𝑝


𝑥𝑥=𝑘𝑘+1
= 𝑞𝑞 𝑘𝑘 𝑝𝑝 + 𝑞𝑞 𝑘𝑘+1 𝑝𝑝 + 𝑞𝑞 𝑘𝑘+2 𝑝𝑝 + ⋯
= 𝑞𝑞 𝑘𝑘 𝑝𝑝[1 + 𝑞𝑞 + 𝑞𝑞 2 + ⋯ ]
= 𝑞𝑞 𝑘𝑘 𝑝𝑝(1 − 𝑞𝑞)−1


𝑞𝑞 𝑘𝑘 𝑝𝑝 𝑞𝑞 𝑘𝑘 𝑝𝑝
𝑃𝑃[𝑋𝑋 > 𝑘𝑘] = = = 𝑞𝑞 𝑘𝑘
1 − 𝑞𝑞 𝑝𝑝
Hence 𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡] = 𝑞𝑞 𝑠𝑠+𝑡𝑡

𝑃𝑃[𝑋𝑋 > 𝑠𝑠] = 𝑞𝑞 𝑠𝑠


𝑃𝑃[𝑋𝑋 > 𝑡𝑡] = 𝑞𝑞 𝑡𝑡
𝑞𝑞 𝑠𝑠+𝑡𝑡
∴ 𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡⁄𝑋𝑋 > 𝑠𝑠] = 𝑠𝑠 = 𝑞𝑞 𝑡𝑡 = 𝑃𝑃[𝑋𝑋 > 𝑡𝑡]
𝑞𝑞
𝑖𝑖. 𝑒𝑒. , 𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡/𝑋𝑋 > 𝑠𝑠] = 𝑃𝑃[𝑋𝑋 > 𝑡𝑡]

(OR)

(b) (i) Find the moment generating function of Poisson distribution and hence
find its mean and variance.
Ans: M_X(t) = E[e^{tX}] = Σ_{x=0}^{∞} e^{tx} p(x)
= Σ_{x=0}^{∞} e^{tx} e^{−λ} λ^x / x!
= e^{−λ} Σ_{x=0}^{∞} (λe^t)^x / x!
= e^{−λ} [ 1 + λe^t/1! + (λe^t)²/2! + ⋯ ]
= e^{−λ} e^{λe^t}
∴ M_X(t) = e^{λ(e^t − 1)}
Mean: μ′₁ = [d/dt M_X(t)]_{t=0} = [e^{λ(e^t − 1)} λe^t]_{t=0} = λ
∴ Mean = λ
μ′₂ = [d²/dt² M_X(t)]_{t=0} = [e^{λ(e^t − 1)} (λe^t)² + e^{λ(e^t − 1)} λe^t]_{t=0} = λ² + λ
∴ E[X²] = λ² + λ
Variance(X) = E[X²] − [E(X)]² = λ² + λ − λ² = λ
∴ Var(X) = λ

(ii) In a normal distribution, 31% of items are under 45 and 8% of items over
64. Find the mean and the standard deviation of the distribution.
Ans: Let mean be 𝜇𝜇 and the standard deviation 𝜎𝜎
𝑋𝑋 − 𝜇𝜇
𝑍𝑍 =
𝜎𝜎

Here 31% of the items are under 45.



𝑃𝑃(𝑋𝑋 < 45) = 0.31


45 − 𝜇𝜇
𝑃𝑃 �𝑍𝑍 < � = 0.31
𝜎𝜎
(45−𝜇𝜇 )
The area lying to the left of the ordinate 𝑎𝑎𝑎𝑎 𝑍𝑍 = 𝑖𝑖𝑖𝑖 0.31
𝜎𝜎
and therefore the area lying to the right of the ordinate upto mean is
0.5 − 0.31 = 0.19
The value of Z, corresponding to this area is 0.5
45−𝜇𝜇
Hence 𝑍𝑍 = = −0.5 -------------------(1)
𝜎𝜎
8% of the items are above 64.

P(X > 64) = 0.08
P(Z > (64 − μ)/σ) = 0.08
𝜎𝜎
64−𝜇𝜇
Area to the left of the ordinate at 𝑍𝑍 = upto the mean is 0.5-
𝜎𝜎
0.08=0.42 and the value of Z corresponding to this area is 1.4.
64−𝜇𝜇
Hence 𝑍𝑍 = = 1.4 ------------------------(2)
𝜎𝜎
From (1) and (2)

−𝜇𝜇 + 0.5𝜎𝜎 = −45


−μ − 1.4σ = −64
Solving for 𝜇𝜇 and 𝜎𝜎 we get 𝜇𝜇 = 50 and 𝜎𝜎 = 10
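A quick numerical cross-check of the fitted parameters against the stated percentages (an illustrative sketch, assuming SciPy is available; not part of the original solution):

```python
# Check mu = 50, sigma = 10 against P(X < 45) ~ 0.31 and P(X > 64) ~ 0.08
from scipy.stats import norm

mu, sigma = 50, 10
print(norm.cdf(45, mu, sigma))  # ~ 0.3085, the table value behind "31%"
print(norm.sf(64, mu, sigma))   # ~ 0.0808, the table value behind "8%"
```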
12. (a) (i) The joint probability mass function of (X,Y) is given by 𝑃𝑃(𝑥𝑥, 𝑦𝑦) =
1
(2𝑥𝑥 + 3𝑦𝑦), 𝑥𝑥 = 0,1,2 𝑎𝑎𝑎𝑎𝑎𝑎 𝑦𝑦 = 1,2,3. Find the marginal and
72
conditional probability function of X and y.
Ans:
Y 1 2 3 � 𝑝𝑝(𝑥𝑥)
X
0 3 6 9 18
72 72 72 72
1 5 8 11 24
72 72 72 72
2 7 10 13 30
72 72 72 72
15 24 33 1
� 𝑝𝑝(𝑦𝑦)
72 72 72

The Marginal distribution of X


𝑋𝑋 = 𝑥𝑥 0 1 2
18 24 30
� 𝑝𝑝(𝑥𝑥)
72 72 72

The Marginal distribution of Y


𝑌𝑌 = 𝑦𝑦 1 2 3
15 24 33
� 𝑝𝑝(𝑦𝑦)
72 72 72

The conditional distribution of X given Y: P  X = x Y = y 


 

P(= X 0,= Y 1) 3 15 1
P( X= 0 Y= 1)= = =
P() 72 72 5
P(= X 1,= Y 1) 5 15 1
P( X= 1 Y= 1)= = =
P(Y = 1) 72 72 3
P(= X 2,= Y 1) 7 15 7
P( X= 2 Y= 1)= = =
P (Y = 1) 72 72 15
P (=X 0,= Y 2) 6 24 1
P ( X= 0 Y= 2)
= = =
P (Y = 2) 72 72 4
P (=X 1,= Y 2) 8 24 1
P ( X= 1 Y= 2)
= = =
P (Y = 2) 72 72 3
P ( X = 2, Y = 2) 10 24 5
P ( X= 2 Y= 2)
= = =
P(Y = 2) 72 72 12
P(= X 0,= Y 3) 9 33 9
P( X= 0 Y= 3)= = =
P (Y = 3) 72 72 33
P (=X 1,= Y 3) 11 33 1
P ( X= 1 Y= 3)= = =
P (Y = 3) 72 72 3
P (= X 2,= Y 3) 13 33 13
P ( X= 2 Y= 3)= = =
P (Y = 3) 72 72 33

The conditional distribution of Y given X is P Y = y X = x 


 
P= (Y 1,= X 0) 3 18 1
P(Y= 1 X= 0)
= = =
P( X = 0) 72 72 6
P= (Y 2,= X 0) 6 18 1
P(Y= 2 X= 0)
= = =
P( X = 0) 72 72 3
P=(Y 3,= X 0) 9 18 1
P(Y= 3 X= 0)
= = =
P( X = 0) 72 72 2
P=(Y 1,= X 1) 5 24 5
P(Y= 1 X= 1)= = =
P ( X = 1) 72 72 24
P= (Y 2,= X 1) 8 24 1
P (Y= 2 X= 1)= = =
P ( X = 1) 72 72 3
P (Y = 3, X = 1) 11 24 11
P (Y= 3 X= 1)= = =
P( X = 1) 72 72 24
P= (Y 1,= X 2) 7 30 7
P(Y= 1 X= 2)
= = =
P( X = 2) 72 72 30
P= (Y 2,= X 2) 10 30 1
P(Y= 2 X= 2)
= = =
P ( X = 2) 72 72 3
P=(Y 3,= X 2) 13 30 13
P(Y= 3 X= 2)
= = =
P( X = 2) 72 72 30
The conditional distributions can be tabulated as below:
X 0 1 2

P=
( X x=
Y 1) 3 15 1 5 15 1 7 15 7
= = =
72 72 5 72 72 3 72 72 15
Conditional distribution of X Y = 1

The conditional distribution of X given Y = 2
X 0 1 2
P=
( X x=
Y 2) 6 24 1 8 24 1 10 24 5
= = =
72 72 4 72 72 3 72 72 12

The conditional distribution of X given Y = 3
X 0 1 2
P=
( X x=
Y 3) 9 1 13
33 3 33

The conditional distribution of Y given X = 0
Y 1 2 3
P=
(Y y=
X 0) 3 18 1 6 18 1 9 18 1
= = =
72 72 6 72 72 3 72 72 2

The conditional distribution of Y X = 1


Y 1 2 3
P=
(Y y=
X 1) 5 1 11
24 3 24

The conditional distribution of Y X = 2


Y 1 2 3
P=
(Y y=
X 2) 7 1 13
30 3 30
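The marginal and conditional tables above can be reproduced numerically; the snippet below is an illustrative verification sketch (assuming NumPy is available), not part of the original solution:

```python
# Joint pmf P(x, y) = (2x + 3y)/72 for x = 0,1,2 and y = 1,2,3
import numpy as np

x_vals, y_vals = np.arange(3), np.arange(1, 4)
P = np.array([[2*x + 3*y for y in y_vals] for x in x_vals]) / 72.0

print(P.sum())                   # 1.0 -> valid joint pmf
print(P.sum(axis=1))             # marginal of X: [18/72, 24/72, 30/72]
print(P.sum(axis=0))             # marginal of Y: [15/72, 24/72, 33/72]
print(P[:, 0] / P[:, 0].sum())   # P(X = x | Y = 1): [1/5, 1/3, 7/15]
```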
(ii) The joint pdf of (X,Y) is 𝑓𝑓(𝑥𝑥, 𝑦𝑦) = 𝑒𝑒 − (𝑥𝑥+𝑦𝑦) ; 𝑥𝑥, 𝑦𝑦 ≥ 0. Are X and y
independent?
Ans: Given 𝑓𝑓(𝑥𝑥, 𝑦𝑦) = 𝑒𝑒 − (𝑥𝑥+𝑦𝑦) , 𝑥𝑥 ≥ 0, 𝑦𝑦 ≥ 0
The marginal density function of X is

𝑓𝑓𝑋𝑋 (𝑥𝑥) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑
−∞

= � 𝑒𝑒 − (𝑥𝑥+𝑦𝑦) 𝑑𝑑𝑑𝑑
0
𝑒𝑒 − 𝑦𝑦 ∞
= 𝑒𝑒 − 𝑥𝑥 � �
−1 0
= 𝑒𝑒 − 𝑥𝑥 , 𝑥𝑥 > 0
The marginal density function of Y is

𝑓𝑓𝑌𝑌 (𝑦𝑦) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦) 𝑑𝑑𝑑𝑑
−∞


= � 𝑒𝑒 − (𝑥𝑥+𝑦𝑦) 𝑑𝑑𝑑𝑑
0
− 𝑦𝑦
𝑒𝑒 −𝑥𝑥 ∞
= 𝑒𝑒 � �
−1 0
= 𝑒𝑒 − 𝑦𝑦 , 𝑦𝑦 > 0
𝑁𝑁𝑁𝑁𝑁𝑁 𝑓𝑓𝑋𝑋 (𝑥𝑥)𝑓𝑓𝑌𝑌 (𝑦𝑦) = 𝑒𝑒 − 𝑥𝑥 𝑒𝑒 − 𝑦𝑦 = 𝑒𝑒 − (𝑥𝑥+𝑦𝑦) = 𝑓𝑓(𝑥𝑥, 𝑦𝑦)
∴ 𝑋𝑋 𝑎𝑎𝑎𝑎𝑎𝑎 𝑌𝑌 𝑎𝑎𝑎𝑎𝑎𝑎 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖.

(OR)
(b) (i) The joint pdf of a random variable (X,Y) is 𝑓𝑓(𝑥𝑥, 𝑦𝑦) = 25𝑒𝑒 − 5𝑦𝑦 ; 0 < 𝑥𝑥 <
0.2, 𝑦𝑦 > 0. Find the covariance of X and Y..
Ans: The marginal density of X

𝑓𝑓𝑋𝑋 (𝑥𝑥) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑
−∞

= � 25 𝑒𝑒 − 5𝑦𝑦 𝑑𝑑𝑑𝑑
0

𝑒𝑒 − 5𝑦𝑦
= 25 � � =5
−5 0
The marginal density of Y

𝑓𝑓𝑌𝑌 (𝑦𝑦) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑
−∞
0.2
= � 25𝑒𝑒 − 5𝑦𝑦 𝑑𝑑𝑑𝑑
0
= 25𝑒𝑒 − 5𝑦𝑦 [𝑥𝑥]0.2
0 = 25𝑒𝑒
− 5𝑦𝑦
(0.2 − 0)
− 5𝑦𝑦
= 5𝑒𝑒
Cov(X, Y) = E(XY) − E(X)E(Y)
E(XY) = ∫_0^∞ ∫_0^{0.2} xy · 25e^{−5y} dx dy
= ∫_0^∞ 25y e^{−5y} [x²/2]_0^{0.2} dy
= 0.02 × 25 ∫_0^∞ y e^{−5y} dy
= 0.5 × (1/25) = 0.02
0.2 0.2 0.2
𝑥𝑥 2
𝐸𝐸(𝑋𝑋) = � 𝑥𝑥𝑥𝑥(𝑥𝑥)𝑑𝑑𝑑𝑑 = � 𝑥𝑥5𝑑𝑑𝑑𝑑 = 5 � � = 0.1
2 0
0 0

∞ ∞ ∞
− 5𝑦𝑦
𝑒𝑒 − 5𝑦𝑦 𝑒𝑒 − 5𝑦𝑦
𝐸𝐸(𝑦𝑦) = � 𝑦𝑦𝑦𝑦(𝑦𝑦)𝑑𝑑𝑑𝑑 = � 𝑦𝑦 5𝑒𝑒 𝑑𝑑𝑑𝑑 = 5 �𝑦𝑦 −1× �
−5 (−5)2
0 0 0
1
=5× = 0.2
25
∴ Cov(X, Y) = 0.02 − (0.1)(0.2) = 0.02 − 0.02 = 0. (This is expected: f(x, y) = f_X(x) f_Y(y), so X and Y are independent and their covariance is zero.)
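A Monte Carlo sketch of the same conclusion (assuming NumPy; the marginals 5 on (0, 0.2) and 5e^{−5y} correspond to Uniform(0, 0.2) and Exp(rate 5)):

```python
# Empirical covariance of X ~ Uniform(0, 0.2), Y ~ Exp(5), drawn independently
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
x = rng.uniform(0, 0.2, n)              # marginal f_X(x) = 5 on (0, 0.2)
y = rng.exponential(scale=1/5, size=n)  # marginal f_Y(y) = 5 e^(-5y)

print(np.mean(x * y) - np.mean(x) * np.mean(y))  # ~ 0 (E(XY) = 0.02 = E(X)E(Y))
```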
(ii) the random variables X and Y each follow exponential distribution with
parameter 1 and are independent. Find the pdf of 𝑈𝑈 = 𝑋𝑋 − 𝑌𝑌.
Ans: The pdf of X and Y are
𝑓𝑓(𝑥𝑥) = 𝑒𝑒 −𝑥𝑥 𝑥𝑥 ≥ 0
𝑓𝑓(𝑦𝑦) = 𝑒𝑒 −𝑦𝑦 𝑦𝑦 ≥ 0

The joint pdf of X and Y is


−𝑥𝑥 −𝑦𝑦 − (𝑥𝑥+𝑦𝑦)
𝑓𝑓(𝑥𝑥, 𝑦𝑦) = 𝑒𝑒 𝑒𝑒 = 𝑒𝑒 ; 𝑥𝑥, 𝑦𝑦 ≥ 0
Take 𝑢𝑢 = 𝑥𝑥 − 𝑦𝑦, 𝑣𝑣 = 𝑦𝑦
𝑥𝑥 = 𝑢𝑢 + 𝑣𝑣, 𝑦𝑦 = 𝑣𝑣
𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕
1 1
𝐽𝐽 = �𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕 � = � �=1
𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕 0 1
𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕
Hence the jpdf of U and V is
𝑓𝑓(𝑢𝑢, 𝑣𝑣) = 𝑓𝑓(𝑥𝑥, 𝑦𝑦)|𝐽𝐽|
= 𝑒𝑒 − (𝑥𝑥+𝑦𝑦) (1) = 𝑒𝑒 − (𝑢𝑢+𝑣𝑣+𝑣𝑣) = 𝑒𝑒 − (𝑢𝑢+2𝑣𝑣)
Range space:
Given 𝑦𝑦 ≥ 0 ⇒ 𝑣𝑣 ≥ 0
𝑥𝑥 ≥ 0 ⇒ 𝑢𝑢 + 𝑣𝑣 ≥ 0
⇒ v ≥ −u
For the region 𝑢𝑢 < 0, 𝑣𝑣 𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 − 𝑢𝑢 𝑡𝑡𝑡𝑡 ∞. 𝑖𝑖. 𝑒𝑒. , −𝑢𝑢 < 𝑣𝑣 < ∞
For the region 𝑢𝑢 > 0, 𝑣𝑣 𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 0 𝑡𝑡𝑡𝑡 ∞. 𝑖𝑖. 𝑒𝑒. , 0 < 𝑣𝑣 < ∞
To find the density of 𝑈𝑈 = 𝑋𝑋 − 𝑌𝑌, we have to find 𝑓𝑓𝑈𝑈 (𝑢𝑢) for the
regions:
(𝑖𝑖) 𝑢𝑢 < 0, −𝑢𝑢 < 𝑣𝑣 < ∞
(𝑖𝑖𝑖𝑖) 𝑢𝑢 > 0, 𝑣𝑣 > 0
∞ ∞
⎧ � 𝑔𝑔(𝑢𝑢, 𝑣𝑣) 𝑑𝑑𝑑𝑑 = � 𝑒𝑒 − (𝑢𝑢+2𝑣𝑣) 𝑑𝑑𝑑𝑑 𝑓𝑓𝑓𝑓𝑓𝑓 𝑢𝑢 < 0
⎪ −𝑢𝑢 −𝑢𝑢
𝑓𝑓𝑈𝑈 (𝑢𝑢) = ∞
⎨ = � 𝑒𝑒 − (𝑢𝑢+2𝑣𝑣) 𝑑𝑑𝑑𝑑 𝑓𝑓𝑓𝑓𝑓𝑓 𝑢𝑢 > 0

⎩ ∞
0
𝑒𝑒 −2𝑣𝑣 𝑒𝑒 𝑢𝑢
⎧𝑒𝑒 −𝑢𝑢 � � = 𝑓𝑓𝑓𝑓𝑓𝑓 𝑢𝑢 < 0
⎪ −2 −𝑢𝑢 2
𝑓𝑓𝑈𝑈 (𝑢𝑢) = ∞
⎨ −𝑢𝑢 𝑒𝑒 −2𝑣𝑣 𝑒𝑒 −𝑢𝑢
⎪𝑒𝑒 � � = 𝑓𝑓𝑓𝑓𝑓𝑓 𝑢𝑢 > 0
⎩ −2 0 2
Hence the pdf is f_U(u) = e^{−|u|} / 2, for −∞ < u < ∞.
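A short simulation sketch (assuming NumPy; not part of the original solution) confirming that X − Y with independent unit exponentials follows this Laplace density:

```python
# Compare an empirical probability for U = X - Y with the closed form e^(-|u|)/2
import numpy as np

rng = np.random.default_rng(1)
u = rng.exponential(size=500_000) - rng.exponential(size=500_000)

emp = np.mean((u > 0.5) & (u < 1.5))
exact = 0.5 * (np.exp(-0.5) - np.exp(-1.5))   # integral of e^(-u)/2 over (0.5, 1.5)
print(emp, exact)                              # both ~ 0.19
```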
13. (a) (i) A random process {𝑋𝑋(𝑡𝑡)} is defined by 𝑋𝑋(𝑡𝑡) = 𝐴𝐴 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 + 𝐵𝐵 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠, −∞ <
𝑡𝑡 < ∞ where A and B are independent random variables each of which
has a value –2 with probability 1/3 and a value 1 with probability 2/3.

Show that {𝑋𝑋(𝑡𝑡)} is a wide sense stationary process.


Ans:
Given A&B are discrete RV which assumes values

A; -2 1

P(A): 1/3 2/3

B: -2 1

P(B): 1/3 2/3

1 2 2 2
𝐸𝐸[𝐴𝐴] = � 𝑎𝑎𝑖𝑖 𝑝𝑝(𝑎𝑎𝑖𝑖 ) = (−2) � � + (1) � � = − + = 0.
3 3 3 3
1 2 4 2
𝐸𝐸[𝐴𝐴2 ] = � 𝑎𝑎𝑖𝑖2 𝑝𝑝(𝑎𝑎𝑖𝑖 ) = (−2)2 � � + (1)2 � � = + = 2.
3 3 3 3
1 2 2 2
𝐸𝐸[𝐵𝐵] = � 𝑏𝑏𝑖𝑖 𝑝𝑝(𝑏𝑏𝑖𝑖 ) = (−2) � � + (1) � � = − + = 0
3 3 3 3
1 2 4 2
𝐸𝐸[𝐵𝐵2 ] = � 𝑏𝑏𝑖𝑖2 𝑝𝑝(𝑏𝑏𝑖𝑖 ) = (−2)2 � � + (1)2 � � = + = 2.
3 3 3 3
Since A & B are independent RV's, E[AB ]= E[A] E[B]=0.

(i) 𝐸𝐸[𝑋𝑋(𝑡𝑡)] = 𝐸𝐸[𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵 𝑡𝑡] = 𝐸𝐸[𝐴𝐴]𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 + 𝐸𝐸[𝐵𝐵]𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 =


0 + 0 + 0 = 0 = 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐.

(ii) 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐸𝐸[𝑋𝑋(𝑡𝑡)𝑋𝑋(𝑡𝑡 + 𝜏𝜏)]

= 𝐸𝐸[(𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵)((𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏) + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵(𝑡𝑡 + 𝜏𝜏

= 𝐸𝐸[𝐴𝐴2 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏)

+𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏) + 𝐵𝐵2 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)]

= 𝐸𝐸[𝐴𝐴2 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝐸𝐸[𝐴𝐴𝐴𝐴] 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏)

𝐸𝐸[𝐴𝐴𝐴𝐴}𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏) + 𝐸𝐸[𝐵𝐵2 ]𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)

= 2 cos t cos(t + τ) + 0 + 0 + 2 sin t sin(t + τ)
= 2[cos t cos(t + τ) + sin t sin(t + τ)]
= 2 cos(t + τ − t) = 2 cos τ, which depends only on τ.

Hence by (i) & (ii) X(t) is a WSS process.

(ii) Suppose the customer arrive at a bank according to Poisson process with
mean rate of 3 per minute. Find the probability that during a time interval

of two minutes (1) exactly four customers arrive (2) greater than 4
customers arrive (3) fewer than 4 customers arrive.
Ans: Given 𝜆𝜆 = 3⁄𝑚𝑚𝑚𝑚𝑚𝑚, 𝑡𝑡 = 2
𝑒𝑒 − 𝜆𝜆𝜆𝜆 (𝜆𝜆𝜆𝜆)𝑛𝑛
𝑃𝑃[𝑋𝑋(𝑡𝑡) = 𝑛𝑛] = 𝑛𝑛 = 0,1,23, …
𝑛𝑛!
− 3×2 (3 𝑛𝑛
𝑒𝑒 × 2) 𝑒𝑒 −6 6𝑛𝑛
= =
𝑛𝑛! 𝑛𝑛!
(1) P[X(2) = 4] = e^{−6} 6⁴ / 4! = 0.1338
(3) P[X(2) < 4] = e^{−6}[6⁰/0! + 6¹/1! + 6²/2! + 6³/3!] = e^{−6}[1 + 6 + 18 + 36] = 0.1512
(2) P[X(2) > 4] = 1 − [P(X(2) < 4) + P(X(2) = 4)] = 1 − (0.1512 + 0.1338) = 0.7150
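A quick check of the three values with SciPy (an illustrative sketch, X(2) ~ Poisson(6)):

```python
from scipy.stats import poisson

print(poisson.pmf(4, 6))   # exactly 4 customers   ~ 0.1339
print(poisson.sf(4, 6))    # more than 4 customers ~ 0.7149
print(poisson.cdf(3, 6))   # fewer than 4 customers ~ 0.1512
```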

(OR)
(b) (i) A man either drives a car or catches a train to go to office each day. He never goes two days in a row by train; but if he drives one day, then the next day he is just as likely to drive again as he is to travel by train. Suppose that on the first day of the week the man tossed a fair die and drove to work if and only if a 6 appeared. Find the probability that he takes a train on the third day, and the probability that he drives to work in the long run.
Ans: Travel Pattern is a Markov chain with state space = (train, car)
The TPM of the chain is
(states in the order T, C)
P = [  0     1
      1/2   1/2 ]
The initial state probability distribution is P^(1) = [5/6   1/6], since
P(travelling by car on day 1) = P(getting a 6 in the toss of the die) = 1/6
and P(travelling by train on day 1) = 5/6.
P^(2) = P^(1) P = [5/6  1/6] [ 0  1 ; 1/2  1/2 ] = [1/12   11/12]
P^(3) = P^(2) P = [1/12  11/12] [ 0  1 ; 1/2  1/2 ] = [11/24   13/24]
P(the man travels by train on the third day) = 11/24
The long run probability is limiting probability

Π = [𝜋𝜋0 𝜋𝜋1 ] where 𝜋𝜋0 + 𝜋𝜋1 = 1 − − − − − −(1)

And ΠP = Π ⇒ [π₀  π₁] [ 0  1 ; 1/2  1/2 ] = [π₀  π₁]
π₁/2 = π₀ ---------(2)
π₀ + π₁/2 = π₁ -----------(3)

Equation (2) and (3) are one and the same. Solve (1) and (2)
π₀ = 1/3, π₁ = 2/3
P(the man travels by car in the long run) = 2/3
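The step-by-step distributions and the stationary distribution can be verified numerically; this is an illustrative sketch (assuming NumPy), not part of the original answer:

```python
import numpy as np

P = np.array([[0.0, 1.0],        # from Train: never train twice in a row
              [0.5, 0.5]])       # from Car: train or car equally likely
p1 = np.array([5/6, 1/6])        # day-1 distribution [train, car]

print(p1 @ np.linalg.matrix_power(P, 2))   # day-3 distribution: [11/24, 13/24]

# stationary distribution: solve pi P = pi together with sum(pi) = 1
A = np.vstack([P.T - np.eye(2), np.ones(2)])
pi = np.linalg.lstsq(A, np.array([0, 0, 1]), rcond=None)[0]
print(pi)                                   # [1/3, 2/3]
```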

(ii) Define semi-random telegraph signal process and prove that it is an


evolutionary process.
Ans:
If N(t) represents the number of occurrences of a specified event

in (0, t) and X(t) = (−1)^{N(t)}, then {X(t)} is called a semi-random telegraph signal


process.

𝑒𝑒 −⋋𝑡𝑡 (⋋𝑡𝑡)𝑛𝑛
{N(t)} is a process with 𝑃𝑃{𝑁𝑁(𝑡𝑡) = 𝑟𝑟} = , 𝑛𝑛 = 𝑜𝑜, 1,2 …
𝑛𝑛!

To prove that {X(t)} is evolutionary.

Now {X(t)} can take values +1 and -1 only.

𝑃𝑃{𝑋𝑋(𝑡𝑡)} = 1} = 𝑃𝑃(𝑁𝑁(𝑡𝑡)𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖) = 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 0,2,4 … )

= 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 0) = +𝑃𝑃(𝑁𝑁(𝑡𝑡) = 2) + 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 4) + ⋯

𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)0 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)2 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)4


= + + + ⋯…
0! 2! 4!
𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)0 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)2 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)4
= + + +⋯
0! 2! 4!
[⋋ 𝑡𝑡]2 [⋋ 𝑡𝑡]4
= 𝑒𝑒 −⋋𝑡𝑡 �1 + + + ⋯�
2! 4!

= 𝑒𝑒 −⋋𝑡𝑡 𝑐𝑐𝑐𝑐𝑐𝑐ℎ ⋋ 𝑡𝑡

𝑃𝑃{𝑋𝑋(𝑡𝑡)} = −1} = 𝑃𝑃(𝑁𝑁(𝑡𝑡)𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖) = 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 1,3,5 … )

= 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 1) = +𝑃𝑃(𝑁𝑁(𝑡𝑡) = 3) + 𝑃𝑃(𝑁𝑁(𝑡𝑡) = 5) + ⋯



𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)1 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)3 𝑒𝑒 −⋋𝑡𝑡 (⋋ 𝑡𝑡)5


= + + +⋯
1! 3! 5!

−⋋𝑡𝑡
[⋋ 𝑡𝑡]1 [⋋ 𝑡𝑡]3 [⋋ 𝑡𝑡]5
= 𝑒𝑒 � + + …�
1! 3! 5!

= 𝑒𝑒 −⋋𝑡𝑡 𝑠𝑠𝑠𝑠𝑠𝑠ℎ ⋋ 𝑡𝑡

Hence 𝐸𝐸{𝑋𝑋(𝑡𝑡)} = [(1)𝑃𝑃(𝑋𝑋 = 1) + (−1)𝑃𝑃(𝑋𝑋 = −1)

= (1)𝑒𝑒 −⋋𝑡𝑡 𝑐𝑐𝑐𝑐𝑐𝑐ℎ ⋋ 𝑡𝑡 + (−1)𝑒𝑒 −⋋𝑡𝑡 𝑠𝑠𝑠𝑠𝑠𝑠ℎ ⋋ 𝑡𝑡

𝑒𝑒 ⋋𝑡𝑡 + 𝑒𝑒 −⋋𝑡𝑡 𝑒𝑒 ⋋𝑡𝑡 − 𝑒𝑒 −⋋𝑡𝑡


= 𝑒𝑒 −⋋𝑡𝑡 [𝑐𝑐𝑐𝑐𝑐𝑐ℎ ⋋ 𝑡𝑡 − 𝑠𝑠𝑠𝑠𝑠𝑠ℎ ⋋ 𝑡𝑡 = 𝑒𝑒 −⋋𝑡𝑡 � − �
2 2

= 𝑒𝑒 −⋋𝑡𝑡 𝑒𝑒 −⋋𝑡𝑡 = 𝑒𝑒 −2⋋𝑡𝑡

Hence E{X(t)} is not a constant.

So {X(t)} is evolutionary.

14. (a) (i) Two random process {𝑋𝑋(𝑡𝑡)} and {𝑌𝑌(𝑡𝑡)} are defines as 𝑋𝑋(𝑡𝑡) =
𝐴𝐴 𝑐𝑐𝑐𝑐𝑐𝑐(𝜔𝜔𝜔𝜔 + 𝜃𝜃) and 𝑌𝑌(𝑡𝑡) = 𝐵𝐵 𝑠𝑠𝑠𝑠𝑠𝑠 (𝜔𝜔𝜔𝜔 + 𝜃𝜃) where A, B and 𝜔𝜔 are
constants and 𝜃𝜃 is uniformly distributed random variable over (0, 2𝜋𝜋).
Find the cross correlation function of {𝑋𝑋(𝑡𝑡)}and {𝑌𝑌(𝑡𝑡)}.
Ans: The cross correlation of 𝑋𝑋(𝑡𝑡) 𝑎𝑎𝑎𝑎𝑎𝑎 𝑌𝑌(𝑡𝑡) is
𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐸𝐸[𝑋𝑋(𝑡𝑡)𝑌𝑌(𝑡𝑡 + 𝜏𝜏)]
= 𝐴𝐴𝐴𝐴 𝐸𝐸[cos(𝜔𝜔𝜔𝜔 + 𝜃𝜃) sin(𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 𝜃𝜃)]
𝐴𝐴𝐴𝐴
= 𝐸𝐸[sin(𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 𝜃𝜃)cos⁡ (𝜔𝜔𝜔𝜔 + 𝜃𝜃)]
2
𝐴𝐴𝐴𝐴
= 𝐸𝐸[sin(2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 2𝜃𝜃) + 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠]
2
𝐴𝐴𝐴𝐴 𝐴𝐴𝐴𝐴
= 𝐸𝐸 �sin(2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 2𝜃𝜃)] + 𝐸𝐸[𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠�
2 2
𝐴𝐴𝐴𝐴 𝐴𝐴𝐴𝐴 2𝜋𝜋
= 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + � 𝑠𝑠𝑠𝑠𝑠𝑠 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 2𝜃𝜃) 𝑓𝑓(𝜃𝜃)𝑑𝑑𝑑𝑑
2 2 0
𝐴𝐴𝐴𝐴 𝐴𝐴𝐴𝐴 2𝜋𝜋 1
= 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + � 𝑠𝑠𝑠𝑠𝑠𝑠 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 2𝜃𝜃) 𝑑𝑑𝑑𝑑
2 2 0 2𝜋𝜋

2𝜋𝜋
𝐴𝐴𝐴𝐴 𝐴𝐴𝐴𝐴 [𝑐𝑐𝑐𝑐𝑐𝑐 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 2𝜃𝜃)]
= 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + � �
2 4𝜋𝜋 2 0
𝐴𝐴𝐴𝐴 𝐴𝐴𝐴𝐴
= 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + [𝑐𝑐𝑐𝑐𝑐𝑐 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 4𝜋𝜋) − 𝑐𝑐𝑐𝑐𝑐𝑐 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔)]
2 8𝜋𝜋

𝐴𝐴𝐴𝐴 𝐴𝐴𝐴𝐴
= 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + [𝑐𝑐𝑐𝑐𝑐𝑐 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔) − 𝑐𝑐𝑐𝑐𝑐𝑐 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔)]
2 8𝜋𝜋
𝐴𝐴𝐴𝐴 𝐴𝐴𝐴𝐴
= 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + (0)
2 8𝜋𝜋
𝐴𝐴𝐴𝐴
∴ 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠
2
(ii) The power spectrum of a WSS process {𝑋𝑋(𝑡𝑡)} is given by 𝑆𝑆(𝜔𝜔) =
1
(1+𝜔𝜔 2 )2
. Find its auto correlation function 𝑅𝑅(𝜏𝜏).

We know that S XX (ω ) is the Fourier transform of R XX (τ )


Ans:

∴ R XX (τ ) = inverse Fourier transform of S XX (ω )


2
1 1 
S XX (ω ) = = 
(1 + ω )  (1 + iω )(1 − iω ) 
2 2
Given
2
 1 (1 + iω ) + (1 − iω ) 
= 
 2 (1 + iω )(1 − iω ) 
2
1 1 1 
=  +
4  (1 − iω ) (1 + iω ) 

1 1 1 2 
=  + +
4  (1 − iω ) 2
(1 + iω ) 2
(1 + iω )(1 − iω )
∴ R XX (τ )
1  2  
= F −1 [S XX (ω )] = F −1  
1 1
+ +
 4 (1 − iω ) 2 (1 + iω ) 2 (1 + ω )2  
  
 1 
We know that F −1   = u (τ )τe −ατ , where u (t ) is
2 
 (α + iω ) 
unit step function, α > 0

  2α 
 = u (τ )τeατ , F −1  2
1
Also F −1 
−α τ
2  2 
=e
 (α − iω )  α +ω 
1
4
[
∴ R XX (τ ) = u (τ )τe ατ + u (τ )τe −ατ + e
−α τ
]
1
[
= u (τ )τ (e ατ + e −ατ ) + e
4
−α τ
]
Since X(t) is WSS, average power PXX = E X (t )
2
[ ]
= R XX (0)
∴ Average power
= R XX (0)


1
= = 0.25
4

(OR)
(b) (i) If Y(t) = X(t + a) − X(t − a), prove that R_YY(τ) = 2R_XX(τ) − R_XX(τ + 2a) − R_XX(τ − 2a). Hence prove that S_YY(ω) = 4 sin²(aω) S_XX(ω).
Ans: Auto correlation function of Y(t) is given by
𝑌𝑌(𝑡𝑡) = 𝑋𝑋(𝑡𝑡 + 𝑎𝑎) − 𝑋𝑋(𝑡𝑡 − 𝑎𝑎)
𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝐸𝐸[𝑌𝑌(𝑡𝑡)𝑌𝑌(𝑡𝑡 + 𝜏𝜏)]
= 𝐸𝐸[�𝑋𝑋(𝑡𝑡 + 𝑎𝑎) − 𝑌𝑌(𝑡𝑡 − 𝑎𝑎)��𝑋𝑋(𝑡𝑡 + 𝑎𝑎 + 𝜏𝜏) − 𝑋𝑋(𝑡𝑡 − 𝑎𝑎 + 𝜏𝜏)�]
= 𝐸𝐸��𝑋𝑋(𝑡𝑡 + 𝑎𝑎)𝑋𝑋(𝑡𝑡 + 𝑎𝑎 + 𝜏𝜏)] − 𝐸𝐸[𝑋𝑋(𝑡𝑡 + 𝑎𝑎)��𝑋𝑋(𝑡𝑡 − 𝑎𝑎 + 𝜏𝜏)]
− 𝐸𝐸[𝑋𝑋(𝑡𝑡 − 𝑎𝑎)𝑋𝑋(𝑡𝑡 − 𝑎𝑎 + 𝜏𝜏)��
+ 𝐸𝐸[𝑋𝑋(𝑡𝑡 − 𝑎𝑎)𝑋𝑋(𝑡𝑡 − 𝑎𝑎 + 𝜏𝜏)]
= 2𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) − 𝐸𝐸(𝑋𝑋(𝑡𝑡 + 𝑎𝑎)𝑋𝑋(𝑡𝑡 + 𝑎𝑎 + 𝜏𝜏 − 2𝑎𝑎))
− 𝐸𝐸(𝑋𝑋(𝑡𝑡 − 𝑎𝑎)𝑋𝑋(𝑡𝑡 − 𝑎𝑎 + 𝜏𝜏 + 2𝑎𝑎))
𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 2𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) − 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏 − 2𝑎𝑎) − 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏 + 2𝑎𝑎)
The power spectral density of Y(t) is

𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = � 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏)𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞
∞ ∞
= 2 � 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑 − � 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏 − 2𝑎𝑎)𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞ −∞

− � 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏 + 2𝑎𝑎)𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞

= 2𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) − � 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏 − 2𝑎𝑎)𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞

− � 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏 + 2𝑎𝑎)𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞
Let 𝜏𝜏 − 2𝑎𝑎 = 𝑢𝑢
⇒ 𝜏𝜏 = 𝑢𝑢 + 2𝑎𝑎
𝑑𝑑𝑑𝑑 = 𝑑𝑑𝑑𝑑
and τ + 2a = v
⇒ τ = v − 2a
dτ = dv

= 2S_XX(ω) − ∫_{−∞}^{∞} R_XX(u) e^{−iω(u+2a)} du − ∫_{−∞}^{∞} R_XX(v) e^{−iω(v−2a)} dv
−∞
= 2𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) − 𝑒𝑒 −𝑖𝑖𝑖𝑖 2𝑎𝑎 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) − 𝑒𝑒 +𝑖𝑖𝑖𝑖 2𝑎𝑎 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)
= 2𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) − 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)�𝑒𝑒 𝑖𝑖𝑖𝑖 2𝑎𝑎 + 𝑒𝑒 −𝑖𝑖𝑖𝑖 2𝑎𝑎 �
= 2𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) − 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)2𝑐𝑐𝑐𝑐𝑐𝑐2𝑎𝑎𝑎𝑎
= 2𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)(1 − 𝑐𝑐𝑐𝑐𝑐𝑐2𝑎𝑎𝑎𝑎)
= 2𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)2𝑠𝑠𝑠𝑠𝑠𝑠2 𝑎𝑎𝑎𝑎
∴ 𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = 4𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)𝑠𝑠𝑠𝑠𝑠𝑠2 𝑎𝑎𝑎𝑎
(ii) The auto correlation function of the random telegraph signal process is
given by 𝑅𝑅(𝜏𝜏) = 𝑎𝑎2 𝑒𝑒 − 2𝛾𝛾|𝜏𝜏| . Determine the power density spectrum of the

random telegraph signal.



Ans:
𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = � 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞

= � 𝑎𝑎2 𝑒𝑒 − 2𝛾𝛾|𝜏𝜏| 𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞

= 𝑎𝑎 � 𝑒𝑒 − 2𝛾𝛾|𝜏𝜏| (𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 − 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖) 𝑑𝑑𝑑𝑑
2
−∞

= 2𝑎𝑎2 � 𝑒𝑒 − 2𝛾𝛾𝛾𝛾 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑑𝑑𝑑𝑑
0

= 2a² [ e^{−2γτ} (−2γ cos ωτ + ω sin ωτ) / (4γ² + ω²) ]₀^∞
= 2a² [ 2γ / (4γ² + ω²) ]
= 4a²γ / (4γ² + ω²)
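The closed form can be checked by numerically Fourier-transforming the autocorrelation (a sketch with arbitrary sample values a = 1, γ = 0.5, ω = 2; assuming SciPy is available):

```python
# Since R(tau) is even, only the cosine part of e^(-i*omega*tau) contributes.
import numpy as np
from scipy.integrate import quad

a, gamma, omega = 1.0, 0.5, 2.0
integrand = lambda t: a**2 * np.exp(-2*gamma*abs(t)) * np.cos(omega*t)
numeric, _ = quad(integrand, -50, 50)
closed_form = 4*a**2*gamma / (4*gamma**2 + omega**2)
print(numeric, closed_form)    # both ~ 0.4
```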

15. (a) If {𝑋𝑋(𝑡𝑡)} is a WSS process and if 𝑌𝑌(𝑡𝑡) = ∫− ∞ ℎ(𝑢𝑢)𝑋𝑋(𝑡𝑡 − 𝑢𝑢)𝑑𝑑𝑑𝑑, Prove
that
(𝑖𝑖)𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) ∗ ℎ(−𝑡𝑡)
(𝑖𝑖𝑖𝑖) 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) ∗ ℎ(𝑡𝑡)where * denotes convolution.
(𝑖𝑖𝑖𝑖𝑖𝑖)𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)𝐻𝐻 ∗ (𝜔𝜔) where 𝐻𝐻 ∗ (𝜔𝜔) is the complex conjugate of
𝐻𝐻(𝜔𝜔).
(𝑖𝑖𝑖𝑖)𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)|𝐻𝐻(𝜔𝜔)|2
Ans: We know that

(a) 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2 ) ∫−∞ ℎ(𝛽𝛽)𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2 − 𝛽𝛽) 𝑑𝑑𝑑𝑑


𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 , 𝑡𝑡2 ) ∫−∞ ℎ(𝛽𝛽)𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 − 𝑡𝑡2 − 𝛽𝛽) 𝑑𝑑𝑑𝑑since X(t) is a WSS
process.
And 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 − 𝑡𝑡2 − 𝛽𝛽) = 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏 − 𝛽𝛽) 𝑤𝑤ℎ𝑒𝑒𝑒𝑒𝑒𝑒 𝜏𝜏 = 𝑡𝑡2 − 𝑡𝑡1
Hence

𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = � ℎ(𝛽𝛽)𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏 − 𝛽𝛽) 𝑑𝑑𝑑𝑑
−∞
⇒ 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = ℎ(𝜏𝜏) ∗ 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)-------------------(1)
(b) we know

𝑅𝑅𝑌𝑌𝑌𝑌 (𝑡𝑡1 , 𝑡𝑡2 ) = � ℎ(𝛼𝛼)𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡1 − 𝑡𝑡2 − 𝛼𝛼) 𝑑𝑑𝑑𝑑
−∞

= ∫−∞ ℎ(𝛼𝛼)𝑅𝑅𝑋𝑋𝑋𝑋 (−𝜏𝜏 − 𝛼𝛼) 𝑑𝑑𝑑𝑑since X(t) is a WSS process.


𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = � ℎ(𝛼𝛼)𝑅𝑅𝑋𝑋𝑋𝑋 (−𝜏𝜏 − 𝛼𝛼) 𝑑𝑑𝑑𝑑
−∞
⇒ 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = ℎ(−𝜏𝜏) ∗ 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)---------------------(2)

(c) taking the Fourier transform of both sides of (2), we obtain


𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)𝐻𝐻 ∗ (𝜔𝜔)



(d) =
Let Y (t ) ∫ h(u ) X (t − u )du
−∞

=
Y (t ) ∫ X (t − α )h(α )dα
−∞

∴ X (t + τ )Y (t ) = ∫ X (t + τ ) X (t − α )h(α )dα
−∞

E[ X (t + τ )Y (t )]= ∫ E{ X (t + τ ) X (t − α )}h(α )dα
−∞
Hence ∞
= ∫R
−∞
XX (τ + α )h(α )dα

= ∫R
−∞
XX (τ − β )h(− β )d β

RXY (τ ) RXX (τ ) * h(−τ )


i.e.,= (1)
RYX (τ ) = RXX (τ ) * h(τ ) (1a )

Y (t )Y (t − τ=
) ∫ X (t − α )Y (t − τ )h(α )dα
−∞

E{Y (t )Y (t − τ=
)} ∫R
−∞
XY (t − α )h(α )dα

Assuming that {X(t) & Y(t) are jointly WSS


i.e., RYY (τ ) = RXY (τ ) * h(τ ) ( 2)
Taking FT’s of (1) & (2) we get
S XY (ω ) = S XX (ω ) H* (ω ) (3)
Where H*(ω ) is the conjugate of H(ω ) &
SYY (ω ) = S XY (ω ) H(ω ) (4)
Inserting (3) In (4) SYY (ω ) = H (ω ) S XX (ω ) . 2

(OR)
(b) A random process 𝑋𝑋(𝑡𝑡) is the input to a linear system whose impulse
response is ℎ(𝑡𝑡) = 2𝑒𝑒 − 𝑡𝑡 , 𝑡𝑡 ≥ 0. If the autocorrelation function of the
process is 𝑅𝑅(𝜏𝜏) = 𝑒𝑒 − 2|𝜏𝜏| , determine the cross correlation function 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)
between the input process 𝑋𝑋(𝑡𝑡) and the output 𝑌𝑌(𝑡𝑡) and the cross
correlation function 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) between the output process 𝑌𝑌(𝑡𝑡) and the
input process 𝑋𝑋(𝑡𝑡).


Ans: The cross correlation between input X(t) and output Y(t) to a linear
system is
𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)ℎ(𝜏𝜏)
Taking Fourier transform we get
𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)𝐻𝐻(𝜔𝜔)
Given 𝑅𝑅(𝜏𝜏) = 𝑒𝑒 − 2|𝜏𝜏|

𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = � 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞

− 2|𝜏𝜏| −𝑖𝑖𝑖𝑖𝑖𝑖
= � 𝑒𝑒 𝑒𝑒 𝑑𝑑𝑑𝑑
−∞

= � 𝑒𝑒 − 2|𝜏𝜏| (𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 − 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖) 𝑑𝑑𝑑𝑑
−∞
∞ ∞
− 2|𝜏𝜏|
= � 𝑒𝑒 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑑𝑑𝑑𝑑 − 𝑖𝑖 � 𝑒𝑒 − 2|𝜏𝜏| 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑑𝑑𝑑𝑑
−∞ −∞

− 2𝜏𝜏
= 2 � 𝑒𝑒 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑑𝑑𝑑𝑑
0
𝑒𝑒 − 2𝜏𝜏
=2 [−2𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 + 𝜔𝜔𝜔𝜔𝜔𝜔𝜔𝜔𝜔𝜔𝜔𝜔]∞
0
4 + 𝜔𝜔 2
2
= [0 − (−2 + 0)]
4 + 𝜔𝜔 2
4
=
4 + 𝜔𝜔 2 ∞
𝐻𝐻(𝜔𝜔) = 𝐹𝐹[ℎ(𝑡𝑡)] = � ℎ(𝑡𝑡) 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞
∞ ∞
− 𝑡𝑡 − 𝑖𝑖𝑖𝑖𝑖𝑖
= � 2𝑒𝑒 𝑒𝑒 𝑑𝑑𝑑𝑑 = 2 � 𝑒𝑒 − (1+𝑖𝑖𝑖𝑖 )𝑡𝑡 𝑑𝑑𝑑𝑑
0 0


𝑒𝑒 − (1+𝑖𝑖𝑖𝑖 )𝑡𝑡
= 2� �
−(1 + 𝑖𝑖𝑖𝑖) 0
2
=
1 + 𝑖𝑖𝑖𝑖
4 2 8
∴ 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = =
(4 + 𝜔𝜔 ) (1 + 𝑖𝑖𝑖𝑖) (2 + 𝑖𝑖𝑖𝑖)(2 − 𝑖𝑖𝑖𝑖)(1 + 𝑖𝑖𝑖𝑖)
2
Let
8 𝐴𝐴 𝐵𝐵 𝐶𝐶
= + +
(2 + 𝑖𝑖𝑖𝑖)(2 − 𝑖𝑖𝑖𝑖)(1 + 𝑖𝑖𝑖𝑖) (2 + 𝑖𝑖𝑖𝑖) (2 − 𝑖𝑖𝑖𝑖) (1 + 𝑖𝑖𝑖𝑖)

8 = 𝐴𝐴(2 − 𝑖𝑖𝑖𝑖)(1 + 𝑖𝑖𝑖𝑖) + 𝐵𝐵(2 + 𝑖𝑖𝑖𝑖)(1 + 𝑖𝑖𝑖𝑖)


+ 𝐶𝐶(2 + 𝑖𝑖𝑖𝑖)(2 − 𝑖𝑖𝑖𝑖)

Put 𝜔𝜔 = 𝑖𝑖2,8 = 𝐴𝐴(4)(−1) ⇒ 𝐴𝐴 = −2


8
𝜔𝜔 = 𝑖𝑖, 8 = 𝐶𝐶(1)(3) ⇒ 𝐶𝐶 =
3

ω = −2i: 8 = B(4)(3) ⇒ B = 2/3
∴ S_XY(ω) = −2/(2 + iω) + (2/3)/(2 − iω) + (8/3)/(1 + iω)
Taking the inverse Fourier transform,
R_XY(τ) = −2 F⁻¹[1/(2 + iω)] + (2/3) F⁻¹[1/(2 − iω)] + (8/3) F⁻¹[1/(1 + iω)]
= −2e^{−2τ}u(τ) + (2/3)e^{2τ}u(−τ) + (8/3)e^{−τ}u(τ)
3 3



Agni college of Technology
Chennai – 130

B.E./B.Tech. DEGREE EXAMINATIONS, APRIL/MAY 2015


Fourth Semester
Common to all branches
MA6453 – Probability and Random Processes
(Regulations 2013)
Time: Three hours Maximum: 100 marks
PART A

𝑒𝑒 − 𝑥𝑥 𝑥𝑥 ≥ 0
1. Show that the function 𝑓𝑓(𝑥𝑥) = � is the probability density function (pdf)
0 𝑥𝑥 < 0
of a random variable X.
Solution: Given 𝑓𝑓(𝑥𝑥) ≥ 0 and
∞ ∞
� 𝑓𝑓(𝑥𝑥)𝑑𝑑𝑑𝑑 = � 𝑒𝑒 − 𝑥𝑥 𝑑𝑑𝑑𝑑
−∞ 0
𝑒𝑒 − 𝑥𝑥 ∞
=� � = [0 + 1] = 1
−1 0

Since 𝑓𝑓(𝑥𝑥) ≥ 0and ∫−∞ 𝑓𝑓(𝑥𝑥)𝑑𝑑𝑑𝑑 = 1. Hence the given 𝑓𝑓(𝑥𝑥) is pdf.
2. The mean and the variance of binomial distribution are 5 and 4. Determine the
distribution.
Solution: Let x be a binomial random variable with parameters n and p.
P(X=x)=n𝐶𝐶𝑥𝑥 𝑝𝑝 𝑥𝑥 𝑞𝑞 𝑛𝑛−𝑥𝑥 𝑥𝑥 = 0,1,2, …n
Mean=np, variance=npq
Given mean=5 and variance=4
𝑛𝑛𝑛𝑛 = 5 → (1), 𝑛𝑛𝑛𝑛𝑛𝑛 = 4 → (2)
(2) 𝑛𝑛𝑛𝑛𝑛𝑛 4
→ =
(1) 𝑛𝑛𝑛𝑛 5
4 4 1
𝑞𝑞 = , 𝑝𝑝 = 1 − 𝑞𝑞 = 1 − =
5 5 5
𝑛𝑛𝑛𝑛 = 5
1
𝑛𝑛 = 5
5

𝑛𝑛 = 25
1 𝑥𝑥 4 25−𝑥𝑥,
𝑃𝑃(𝑋𝑋 = 𝑥𝑥) = 25𝐶𝐶𝑥𝑥 � � � � 𝑥𝑥 = 0,1,2, … 25
5 5

2 +𝑦𝑦 2 �
3. Find the value of k, if 𝑓𝑓(𝑥𝑥, 𝑦𝑦) = 𝑘𝑘𝑘𝑘𝑘𝑘𝑒𝑒 − �𝑥𝑥 ; 𝑥𝑥 ≥ 0, 𝑦𝑦 ≥ 0 is to be a joint probability
density function.
Solution: Here the range space is the entire first quadrant of the XY-place.
∞ ∞
2 +𝑦𝑦 2 �
� � 𝑘𝑘𝑘𝑘𝑘𝑘𝑒𝑒 −�𝑥𝑥 𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 = 1 … … … … . (1)
0 0
∞ ∞
2 +𝑦𝑦 2 �
𝑘𝑘 � � 𝑥𝑥𝑥𝑥𝑒𝑒 −�𝑥𝑥 𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 = 1
0 0
∞ ∞
2 2
𝑘𝑘 �� 𝑥𝑥𝑒𝑒 −𝑥𝑥 𝑑𝑑𝑑𝑑� �� 𝑦𝑦𝑒𝑒 −𝑦𝑦 𝑑𝑑𝑑𝑑� = 1
0 0
𝑑𝑑𝑑𝑑
Put 𝑥𝑥 2 = 𝑡𝑡, 2𝑥𝑥𝑥𝑥𝑥𝑥 = 𝑑𝑑𝑑𝑑. 𝑖𝑖. 𝑒𝑒. , 𝑥𝑥𝑥𝑥𝑥𝑥 =
2

When 𝑥𝑥 → 0 ⇒ 𝑡𝑡 ⇢ 0, 𝑥𝑥 ⇢ ∞ ⇒ 𝑡𝑡 ⟶ ∞
𝑑𝑑𝑑𝑑
Put 𝑦𝑦 2 = 𝑣𝑣 , 2𝑦𝑦𝑦𝑦𝑦𝑦 = 𝑑𝑑𝑑𝑑 , 𝑖𝑖𝑖𝑖. , 𝑦𝑦𝑦𝑦𝑦𝑦 =
2

As 𝑦𝑦 → 0 ⇒ 𝑣𝑣 ⇢ 0, 𝑦𝑦 ⇢ ∞ ⇒ 𝑣𝑣 ⇢ ∞
∞ 𝑑𝑑𝑑𝑑 ∞ 𝑑𝑑𝑑𝑑
(1) ⇒ 𝑘𝑘 �∫0 𝑒𝑒 −𝑡𝑡 � �∫0 𝑒𝑒 −𝑣𝑣 �=1
2 2

𝑒𝑒 −𝑡𝑡 𝑒𝑒 −𝑣𝑣
𝑘𝑘 � �� �=1
2 2
−1 1
𝑘𝑘 �0 − � �� �0 − � �� = 1
2 −2
1 1
𝑘𝑘 � � � � = 1
2 2
𝑘𝑘
=1
4
∴ 𝑘𝑘 = 4
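A numerical check that k = 4 normalises the joint density (an illustrative sketch, assuming SciPy is available):

```python
# Integrate 4*x*y*exp(-(x^2 + y^2)) over the first quadrant; should return ~1.
from scipy.integrate import dblquad
import numpy as np

val, _ = dblquad(lambda y, x: 4*x*y*np.exp(-(x**2 + y**2)), 0, np.inf, 0, np.inf)
print(val)   # ~ 1.0
```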
4. What is the angle between the two regression lines?
Solution: If 𝜃𝜃 is the angle between two regression lines , then
1− 𝑟𝑟 2 𝜎𝜎𝑥𝑥 𝜎𝜎𝑦𝑦
𝑡𝑡𝑡𝑡𝑡𝑡𝑡𝑡 = � � .
𝑟𝑟 𝜎𝜎𝑥𝑥 2 + 𝜎𝜎𝑦𝑦 2

5. Give an example of evolutionary random process.


Solution: Poisson Process.
6. Define a semi-random telegraph signal process.
Solution: If N(t) represents the number of occurrences of a specified event


in (0, t) and X(t) = (−1)^{N(t)}, then {X(t)} is called a semi-random telegraph signal process.
𝑒𝑒 −⋋𝑡𝑡 (⋋𝑡𝑡)𝑛𝑛
{N(t)} is a process with 𝑃𝑃{𝑁𝑁(𝑡𝑡) = 𝑟𝑟} = , 𝑛𝑛 = 0,1,2 …
𝑛𝑛!

7. State any two properties of cross correlation function.


Solution: (1) 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑅𝑅𝑌𝑌𝑌𝑌 (− 𝜏𝜏) (2)
1
| 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)| ≤ � 𝑅𝑅𝑋𝑋𝑋𝑋 (0) 𝑅𝑅𝑌𝑌𝑌𝑌 (0) ≤ [ 𝑅𝑅𝑋𝑋𝑋𝑋 (0)+ 𝑅𝑅𝑌𝑌𝑌𝑌 (0)]
2

𝜋𝜋, |𝜔𝜔| ≤ 1
8. Find the auto correlation function whose spectral density is 𝑆𝑆(𝜔𝜔) = �
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
1 ∞
Solution: 𝑅𝑅(𝜏𝜏) = ∫ 𝑆𝑆(𝜔𝜔)𝑒𝑒 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
2𝜋𝜋 −∞
1 1
= ∫ 𝜋𝜋 𝑒𝑒 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
2𝜋𝜋 −1
1
𝜋𝜋 𝑒𝑒 𝑖𝑖𝑖𝑖𝑖𝑖
= � �
2𝜋𝜋 𝑖𝑖𝑖𝑖 −1
1 𝑒𝑒 𝑖𝑖𝑖𝑖 −𝑒𝑒 − 𝑖𝑖𝑖𝑖
= � �
𝜏𝜏 2𝑖𝑖
1
= 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠
𝜏𝜏

9. Prove that 𝑌𝑌(𝑡𝑡) = 2𝑋𝑋(𝑡𝑡) is linear.


Solution: Here L[X(t)] = 2X(t). For any constants a₁, a₂ and processes X₁(t), X₂(t): L[a₁X₁(t) + a₂X₂(t)] = 2[a₁X₁(t) + a₂X₂(t)] = a₁L[X₁(t)] + a₂L[X₂(t)]. Hence the system Y(t) = 2X(t) is linear.
10. State the relation between input and output of a linear time invariant system.
Solution: In an electrical communication systems usually the output Y(t) is expressed as
the convolution of input X(t) with a system impulse response function h(t).

𝑌𝑌(𝑡𝑡) = ℎ(𝑡𝑡) ∗ 𝑋𝑋(𝑡𝑡) = � ℎ(𝑢𝑢)𝑋𝑋(𝑡𝑡 − 𝑢𝑢)𝑑𝑑𝑑𝑑
−∞

PART B – (5 X 16 = 80)
11. (a) (i) A continuous random variable X that can assume any value between 𝑋𝑋 =
2 𝑎𝑎𝑎𝑎𝑎𝑎 𝑋𝑋 = 5 has a probability density function given by 𝑓𝑓(𝑥𝑥) = 𝑘𝑘(1 + 𝑥𝑥). Find
𝑃𝑃(𝑋𝑋 < 4).

Solution: Since ∫−∞ 𝑓𝑓(𝑥𝑥) 𝑑𝑑𝑑𝑑 = 1
5
∫2 𝑘𝑘(1 + 𝑥𝑥) 𝑑𝑑𝑑𝑑 = 1
5
(1+𝑥𝑥)2
𝑘𝑘 � � =1
2 2
36−9
𝑘𝑘 � �=1
2

27
𝑘𝑘 � � = 1
2
2
𝑘𝑘 =
27
4 2 2 4
𝑃𝑃(𝑋𝑋 < 4) = ∫2 (1 + 𝑥𝑥) 𝑑𝑑𝑑𝑑 = ∫ (1 + 𝑥𝑥) 𝑑𝑑𝑑𝑑
27 27 2
4
2 (1+𝑥𝑥)2 2 25−9 16
= � � = � �=
27 2 2 27 2 27

(ii) If the probability that an applicant for a driver’s license will pass the road test on any
given trial is 0.8. What is the probability that he will finally pass the test on the 4thtrails.
Also find the probability that he will finally pass the test in less than 4 trials.
Solution: Let X denote the number of trials required to achieve the first success. Hence
X follows geometric distribution.
𝑃𝑃(𝑋𝑋 = 𝑥𝑥) = 𝑞𝑞 𝑥𝑥−1 𝑝𝑝, 𝑥𝑥 = 1,2,3, …
Given 𝑝𝑝 = 0.8, 𝑞𝑞 = 0.2
(i) 𝑃𝑃(𝑋𝑋 = 4) = (0.2)4−1 (0.8) = 0.8(0.008) = 0.0064
(ii) 𝑃𝑃(𝑋𝑋 < 4) = 𝑃𝑃(𝑋𝑋 = 1) + 𝑃𝑃(𝑋𝑋 = 2) + 𝑃𝑃(𝑋𝑋 = 3)
= (0.2)1−1 (0.8) + (0.2)2−1 (0.8) + (0.2)3−1 (0.8)
= 0.8 (1 + 0.2 + (0.2)2 ) = 0.992
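A quick check with SciPy's geometric distribution (number of trials up to the first success; an illustrative sketch):

```python
from scipy.stats import geom

print(geom.pmf(4, 0.8))   # P(X = 4) = 0.2^3 * 0.8 = 0.0064
print(geom.cdf(3, 0.8))   # P(X < 4) = P(X <= 3) = 0.992
```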
(OR)
(b) (i) Find the moment generating function of exponential distribution and hence find the
mean and variance of exponential distribution.

Solution: 𝑀𝑀𝑋𝑋 (𝑡𝑡) = 𝐸𝐸[𝑒𝑒 𝑡𝑡𝑡𝑡 ] = ∫−∞ 𝑒𝑒 𝑡𝑡𝑡𝑡 𝑓𝑓(𝑥𝑥) 𝑑𝑑𝑑𝑑

= ∫0 𝑒𝑒 𝑡𝑡𝑡𝑡 𝜆𝜆𝑒𝑒 − 𝜆𝜆𝜆𝜆 𝑑𝑑𝑑𝑑

= 𝜆𝜆 ∫0 𝑒𝑒 − (𝜆𝜆−𝑡𝑡)𝑥𝑥 𝑑𝑑𝑑𝑑

𝑒𝑒 − (𝜆𝜆 −𝑡𝑡)𝑥𝑥 1 𝜆𝜆
= 𝜆𝜆 � � = 𝜆𝜆 �0 − �=
−(𝜆𝜆−𝑡𝑡) 0 −(𝜆𝜆−𝑡𝑡) 𝜆𝜆−𝑡𝑡

𝑑𝑑 𝑑𝑑 𝜆𝜆 𝑑𝑑 𝜆𝜆
[𝑀𝑀𝑋𝑋 (𝑡𝑡)] = � � = 𝜆𝜆 (𝜆𝜆 − 𝑡𝑡)−1 = 𝜆𝜆(−1) (𝜆𝜆 − 𝑡𝑡)−2 (−1) =
𝑑𝑑𝑑𝑑 𝑑𝑑𝑑𝑑 𝜆𝜆−𝑡𝑡 𝑑𝑑𝑑𝑑 (𝜆𝜆−𝑡𝑡)2
𝑑𝑑 𝜆𝜆 𝜆𝜆 1
∴ 𝜇𝜇1′ = [𝑀𝑀𝑋𝑋 (𝑡𝑡)]� = �
(𝜆𝜆−𝑡𝑡)2 𝑡𝑡=0
= =
𝑑𝑑𝑑𝑑 𝑡𝑡=0 𝜆𝜆 2 𝜆𝜆

𝑑𝑑2 𝑑𝑑 𝜆𝜆 𝑑𝑑 2𝜆𝜆
[𝑀𝑀𝑋𝑋 (𝑡𝑡)] = � � = 𝜆𝜆 (𝜆𝜆 − 𝑡𝑡)−2 = 𝜆𝜆(−2) (𝜆𝜆 − 𝑡𝑡)−3 (−1) =
𝑑𝑑𝑡𝑡 2 𝑑𝑑𝑑𝑑 (𝜆𝜆 − 𝑡𝑡) 2 𝑑𝑑𝑑𝑑 (𝜆𝜆 − 𝑡𝑡)3


𝑑𝑑 2 2𝜆𝜆 2𝜆𝜆 2
∴ 𝜇𝜇2′ = [𝑀𝑀𝑋𝑋 (𝑡𝑡)]� = � = 2 = 2
𝑑𝑑𝑡𝑡 2
𝑡𝑡=0
(𝜆𝜆 − 𝑡𝑡) 𝑡𝑡=0 𝜆𝜆
3 𝜆𝜆
1
𝑀𝑀𝑀𝑀𝑀𝑀𝑀𝑀 = 𝜇𝜇1′ =
𝜆𝜆
2 2 1 2 1
𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉 = 𝜇𝜇2′ − 𝜇𝜇1′ = −� � =
𝜆𝜆 2 𝜆𝜆 𝜆𝜆 2

(ii) If the probability mass function of the random variable X is given by 𝑃𝑃[𝑋𝑋 = 𝑥𝑥] =
1 5
𝑘𝑘𝑥𝑥 3 , 𝑥𝑥 = 1,2,3,4. Find the value of 𝑘𝑘, 𝑃𝑃 �� < 𝑋𝑋 < �⁄𝑋𝑋 > 1�, mean and variance of
2 2

X.
Solution:
x 1 2 3 4
P(x) k 8k 27k 64k

𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆 � 𝑝𝑝(𝑥𝑥) = 1
𝑥𝑥=1

𝑘𝑘 + 8𝑘𝑘 + 27𝑘𝑘 + 64𝑘𝑘 = 1


100𝑘𝑘 = 1
1
∴ 𝑘𝑘 =
100
x 1 2 3 4
P(x) 1 8 27 64
100 100 100 100

P(1/2 < X < 5/2 | X > 1) = P({1/2 < X < 5/2} ∩ {X > 1}) / P(X > 1)
= P(X = 2) / [P(X = 2) + P(X = 3) + P(X = 4)]
= (8/100) / (99/100) = 8/99

Mean = E(X) = Σ_{x=1}^{4} x p(x)
= 1 × (1/100) + 2 × (8/100) + 3 × (27/100) + 4 × (64/100)
= (1 + 16 + 81 + 256)/100 = 354/100 = 3.54
E(X²) = Σ_{x=1}^{4} x² p(x)
= 1 × (1/100) + 4 × (8/100) + 9 × (27/100) + 16 × (64/100)
= (1 + 32 + 243 + 1024)/100 = 1300/100 = 13
Var(X) = E(X²) − [E(X)]² = 13 − (3.54)² = 13 − 12.5316 = 0.4684
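The same numbers can be obtained directly from the pmf table (a verification sketch, assuming NumPy):

```python
# P(X = x) = x^3/100 for x = 1, 2, 3, 4
import numpy as np

x = np.arange(1, 5)
p = x**3 / 100.0
mean = np.sum(x * p)                 # 3.54
var = np.sum(x**2 * p) - mean**2     # 13 - 3.54^2 = 0.4684
cond = p[1] / p[1:].sum()            # P(X = 2)/P(X > 1) = 8/99
print(mean, var, cond)
```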
12. (a) (i) If the joint probability distribution function of a two dimensional random variable
(1 − 𝑒𝑒 − 𝑥𝑥 )(1 − 𝑒𝑒 − 𝑦𝑦 )
(X,Y) is given by 𝐹𝐹(𝑥𝑥, 𝑦𝑦) = � ; 𝑥𝑥 > 0, 𝑦𝑦 > 0. Find the marginal
0 ; 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
densities of X and Y. Are X and Y independent? Find 𝑃𝑃[1 < 𝑋𝑋 < 3, 1 < 𝑌𝑌 < 2].
Solution: Given the cumulative distribution function
𝐹𝐹(𝑥𝑥, 𝑦𝑦) = (1 − 𝑒𝑒 − 𝑥𝑥 )(1 − 𝑒𝑒 − 𝑦𝑦 ), ; 𝑥𝑥 > 0, 𝑦𝑦 > 0
𝜕𝜕 2 𝐹𝐹 𝜕𝜕
The joint pdf is 𝑓𝑓(𝑥𝑥, 𝑦𝑦) = = [(1 − 𝑒𝑒 − 𝑥𝑥 )𝑒𝑒 − 𝑦𝑦 ] = 𝑒𝑒 − 𝑥𝑥 𝑒𝑒 − 𝑦𝑦
𝜕𝜕𝜕𝜕𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕

𝑓𝑓(𝑥𝑥, 𝑦𝑦) = 𝑒𝑒 − (𝑥𝑥+𝑦𝑦) , 𝑥𝑥 > 0, 𝑦𝑦 > 0


The marginal density function of X is

𝑓𝑓𝑋𝑋 (𝑥𝑥) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑
−∞

= � 𝑒𝑒 − (𝑥𝑥+𝑦𝑦) 𝑑𝑑𝑑𝑑
0

𝑒𝑒 − 𝑦𝑦 ∞
= 𝑒𝑒 − 𝑥𝑥 � �
−1 0
= 𝑒𝑒 − 𝑥𝑥 , 𝑥𝑥 > 0

The marginal density function of Y is

𝑓𝑓𝑌𝑌 (𝑦𝑦) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦) 𝑑𝑑𝑑𝑑
−∞

= � 𝑒𝑒 − (𝑥𝑥+𝑦𝑦) 𝑑𝑑𝑑𝑑
0

− 𝑦𝑦
𝑒𝑒 −𝑥𝑥 ∞
= 𝑒𝑒 � �
−1 0
= 𝑒𝑒 − 𝑦𝑦 , 𝑦𝑦 > 0
𝑁𝑁𝑁𝑁𝑁𝑁 𝑓𝑓𝑋𝑋 (𝑥𝑥)𝑓𝑓𝑌𝑌 (𝑦𝑦) = 𝑒𝑒 − 𝑥𝑥 𝑒𝑒 − 𝑦𝑦 = 𝑒𝑒 − (𝑥𝑥+𝑦𝑦) = 𝑓𝑓(𝑥𝑥, 𝑦𝑦)
∴ 𝑋𝑋 𝑎𝑎𝑎𝑎𝑎𝑎 𝑌𝑌 𝑎𝑎𝑎𝑎𝑎𝑎 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖.
𝑃𝑃[1 < 𝑋𝑋 < 3, 1 < 𝑌𝑌 < 2] = 𝑃𝑃(1 < 𝑋𝑋 < 3)𝑃𝑃(1 < 𝑌𝑌 < 2)
Since X and Y are independent.
= ∫_1^3 f_X(x) dx · ∫_1^2 f_Y(y) dy
= ∫_1^3 e^{−x} dx · ∫_1^2 e^{−y} dy
= [−e^{−x}]_1^3 [−e^{−y}]_1^2
= (e^{−1} − e^{−3})(e^{−1} − e^{−2}) ≈ 0.318 × 0.233 ≈ 0.074
(ii) Find the coefficient of correlation between X and Y from the data given below.
X: 65 66 67 67 68 69 70 72
Y: 67 68 65 68 72 72 69 71
Solution:
𝑥𝑥 𝑦𝑦 𝑥𝑥 2 𝑦𝑦 2 𝑥𝑥𝑥𝑥
65 67 4225 4489 4355
66 68 4356 4624 4488
67 65 4489 4225 4355
67 68 4489 4624 4556
68 72 4624 5184 4896
69 72 4761 5184 4968
70 69 4900 4761 4830


72 71 5184 5041 5112


Total 544 552 37028 38132 37560
∑ 𝑋𝑋 544
𝑋𝑋� = = = 68
𝑛𝑛 8
∑ 𝑌𝑌 552
𝑌𝑌� = = = 69
𝑛𝑛 8
σ_x = √( (1/n) ΣX² − X̄² ) = √( 37028/8 − 68² ) = √4.5 = 2.1213
σ_y = √( (1/n) ΣY² − Ȳ² ) = √( 38132/8 − 69² ) = √5.5 = 2.3452
𝑛𝑛 8

∑ 𝑥𝑥𝑥𝑥 37560
𝐶𝐶𝐶𝐶𝐶𝐶(𝑋𝑋, 𝑌𝑌) = − 𝑋𝑋�𝑌𝑌� = − 68 × 69
𝑛𝑛 8
= 4695 − 4692 = 3
Coefficient of correlation is
𝐶𝐶𝐶𝐶𝐶𝐶(𝑋𝑋, 𝑌𝑌) 3
𝑟𝑟𝑋𝑋𝑋𝑋 = = = 0.603
𝜎𝜎𝑥𝑥 𝜎𝜎𝑦𝑦 2.12132 × 2.345208
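The coefficient can be cross-checked directly from the raw data (an illustrative sketch, assuming NumPy; np.corrcoef uses the same population formula, with the 1/n factors cancelling):

```python
import numpy as np

x = np.array([65, 66, 67, 67, 68, 69, 70, 72])
y = np.array([67, 68, 65, 68, 72, 72, 69, 71])
print(np.corrcoef(x, y)[0, 1])   # ~ 0.603
```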
(OR)
(b) (i) The two lines of regression are 8𝑋𝑋 − 10𝑌𝑌 + 66 = 0, 40𝑋𝑋 − 18𝑌𝑌 − 214 = 0.
The variance of X is 9. Find the mean values of X and Y. Also find the coefficient of
correlation between the variables X and Y.
Solution: Since both the regression lines pass through the point (𝑥𝑥̅ , 𝑦𝑦�)
8𝑥𝑥̅ − 10𝑦𝑦� + 66 = 0 … … … … … … (1)
40𝑥𝑥̅ − 18𝑦𝑦� − 214 = 0 … … … … … … (2)
Solving (1) and (2) we get
𝑥𝑥̅ = 13, 𝑦𝑦� = 17
Given the variance of X = Var(x) = 9 ⟹ 𝜎𝜎𝑥𝑥 = 3
The equations of regression lines can be written as
𝑦𝑦 = 0.8𝑥𝑥 + 6.6, 𝑥𝑥 = 0.45𝑦𝑦 + 5.35
Hence the regression coefficient of Y on X is
𝑟𝑟𝜎𝜎𝑦𝑦
𝑏𝑏𝑦𝑦𝑦𝑦 = = 0.8 … … … … … (3)
𝜎𝜎𝑥𝑥

The regression coefficient of X on Y is

𝑟𝑟𝜎𝜎𝑥𝑥
𝑏𝑏𝑥𝑥𝑥𝑥 = = 0.45 … … … … … (4)
𝜎𝜎𝑦𝑦

Multiplying (3) and (4)


𝑟𝑟 2 = 0.8 × 0.45 = 0.36

𝑟𝑟 = �𝑏𝑏𝑦𝑦𝑦𝑦 × �𝑏𝑏𝑥𝑥𝑥𝑥 ⟹ 𝑟𝑟 = 0.6

0.8 𝜎𝜎𝑥𝑥 0.8 ×3


Now 𝜎𝜎𝑦𝑦 = = =4
𝑟𝑟 0.6

𝜎𝜎𝑥𝑥 = 3 , 𝜎𝜎𝑦𝑦 = 4
The correlation coefficient is 𝑟𝑟 = 0.6.

(ii) Two random variables X and Y have the following joint probability density function.
𝑥𝑥 + 𝑦𝑦 ; 0 ≤ 𝑥𝑥 ≤ 1, 0 ≤ 𝑦𝑦 ≤ 1
𝑓𝑓(𝑥𝑥, 𝑦𝑦) = � Find the probability density function of the
0 ; 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
random variable 𝑈𝑈 = 𝑋𝑋𝑋𝑋.
𝑢𝑢
Solution: 𝑢𝑢 = 𝑥𝑥𝑥𝑥 , 𝑣𝑣 = 𝑦𝑦. Hence 𝑥𝑥 = and 𝑦𝑦 = 𝑣𝑣.
𝑣𝑣

𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕 1
0 1
𝐽𝐽 = �𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕 � = � 𝑣𝑣
−𝑢𝑢 �=
𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕 𝑣𝑣
1
𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕 𝑣𝑣 2

The joint p.d.f. of 𝑢𝑢 and 𝑣𝑣 is given by


1 𝑢𝑢 1 𝑢𝑢 𝑢𝑢
𝑓𝑓(𝑢𝑢, 𝑣𝑣) = 𝑓𝑓(𝑥𝑥, 𝑦𝑦)|𝐽𝐽| = (𝑥𝑥 + 𝑦𝑦) = � + 𝑣𝑣� = 2 + 1 = 1 + 2
𝑣𝑣 𝑣𝑣 𝑣𝑣 𝑣𝑣 𝑣𝑣
Since , 0 ≤ 𝑦𝑦 ≤ 1 , 0 ≤ 𝑣𝑣 ≤ 1, , 0 ≤ 𝑥𝑥 ≤ 1 ⇒ , 0 ≤ 𝑢𝑢 ≤ 𝑣𝑣
𝑣𝑣varies from 𝑣𝑣 = 𝑢𝑢 to 𝑣𝑣 = 1.
Hence the p.d.f of U is given by

𝑓𝑓(𝑢𝑢) = � 𝑔𝑔(𝑢𝑢, 𝑣𝑣)𝑑𝑑𝑑𝑑
−∞
1
𝑢𝑢
= � �1 + � 𝑑𝑑𝑑𝑑
𝑢𝑢 𝑣𝑣 2
𝑢𝑢 1
= [𝑣𝑣]1𝑢𝑢 − � � = 1 − 𝑢𝑢 − 𝑢𝑢 + 1
𝑣𝑣 𝑢𝑢
f_U(u) = 2(1 − u), 0 ≤ u ≤ 1.
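A simulation sketch of the result (assuming NumPy; rejection sampling is used only as a convenient way to draw from f(x, y) = x + y, whose maximum on the unit square is 2):

```python
# Draw (X, Y) from f(x, y) = x + y on (0,1)^2, then test U = XY against 2(1 - u)
import numpy as np

rng = np.random.default_rng(2)
N = 2_000_000
x, y = rng.uniform(size=N), rng.uniform(size=N)
keep = rng.uniform(size=N) < (x + y) / 2.0   # accept with probability f/2
u = x[keep] * y[keep]

emp = np.mean(u < 0.25)
exact = 2 * 0.25 - 0.25**2     # integral of 2(1 - u) from 0 to 0.25
print(emp, exact)              # both ~ 0.4375
```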


13. (a) (i) Show that the process 𝑋𝑋(𝑡𝑡) = 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵 where A and B are random
variables, is wide sense stationary process if 𝐸𝐸(𝐴𝐴) = 𝐸𝐸(𝐵𝐵) = 𝐸𝐸(𝐴𝐴𝐴𝐴) = 0, 𝐸𝐸(𝐴𝐴2 ) =
𝐸𝐸(𝐵𝐵2 ).
Solution: Given 𝑋𝑋(𝑡𝑡) = 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵
And the random variables A and B satisfy
𝐸𝐸(𝐴𝐴) = 𝐸𝐸(𝐵𝐵) = 0, 𝐸𝐸(𝐴𝐴𝐴𝐴) = 0, 𝐸𝐸(𝐴𝐴2 ) = 𝐸𝐸(𝐵𝐵2 ) = 𝑘𝑘
We have to prove
(𝑖𝑖)𝐸𝐸[𝑋𝑋(𝑡𝑡)] = 0
(𝑖𝑖𝑖𝑖)𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) 𝑖𝑖𝑖𝑖 𝑎𝑎 𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓𝑓 𝑜𝑜𝑜𝑜 𝜏𝜏
(𝑖𝑖) 𝐸𝐸[𝑋𝑋(𝑡𝑡)] = 𝐸𝐸[𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵 ] = 𝐸𝐸[𝐴𝐴]𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 + 𝐸𝐸[𝐵𝐵]𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 = 0 + 0 + 0 = 0
= 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐.
(𝑖𝑖𝑖𝑖)𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐸𝐸[𝑋𝑋(𝑡𝑡)𝑋𝑋(𝑡𝑡 + 𝜏𝜏)]
= 𝐸𝐸[(𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴 + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵)(�𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏) + 𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵𝐵(𝑡𝑡 + 𝜏𝜏)�]
= 𝐸𝐸[𝐴𝐴2 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏) + 𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴𝐴(𝑡𝑡 + 𝜏𝜏) + 𝐵𝐵2 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡
+ 𝜏𝜏)]
= 𝐸𝐸[𝐴𝐴2 ]𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝐸𝐸[𝐴𝐴𝐴𝐴]𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏) + 𝐸𝐸[𝐴𝐴𝐴𝐴]𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏)
+ 𝐸𝐸[𝐵𝐵2 ] 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)
= 𝑘𝑘 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 0 + 0 + 𝑘𝑘 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)
= 𝑘𝑘[𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 + 𝜏𝜏) + 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠(𝑡𝑡 + 𝜏𝜏)]
= 𝑘𝑘𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(𝑡𝑡 − 𝜏𝜏 − 𝑡𝑡) = 𝑘𝑘𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐(−𝜏𝜏) = 𝑘𝑘𝑘𝑘𝑘𝑘𝑘𝑘𝑘𝑘𝑘𝑘, which depends on 𝜏𝜏
Hence by (i) & (ii) X(t) is a WSS process.
(ii) There are 2 white marbles in Urn A and 3 red marbles in Urn B. At each step of
the process, a marble is selected from each urn and the 2 marbles selected are
interchanged. The state of the related Markov chain is the number of red marbles
in Urn A after the interchange. What is the probability that there are 2 red marbles
in Urn A after 3 steps? In the long run, what the probability that there are 2 red
marbles in Urn A?
Ans: The Markov chain{𝑋𝑋𝑛𝑛 } has state space 0,1,2 since the number of marbles in
Urn A is always 2 and the number of red marbles may be 0,1,2.

(i) The TPM of the chain is
S𝑡𝑡𝑡𝑡𝑡𝑡𝑡𝑡𝑡𝑡 𝑜𝑜𝑜𝑜 𝑋𝑋𝑛𝑛+1
0 1 2
0 𝑃𝑃00 𝑃𝑃01 𝑃𝑃02
𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆 𝑜𝑜𝑜𝑜 𝑋𝑋𝑛𝑛 𝑃𝑃 = 1 �𝑃𝑃10 𝑃𝑃11 𝑃𝑃12 �
2 𝑃𝑃20 𝑃𝑃21 𝑃𝑃22
If 𝑋𝑋𝑛𝑛 = 0 i.e., if the system is at state 0.
This means Urn I has no red marbles after interchanges, which is not true. Then
𝑃𝑃00 = 0.
After one interchange, Urn I definitely contains 1 red marbles. ∴ 𝑃𝑃01 = 1
After one interchange, Urn I cannot contain 2 red marbles. ∴ P02 = 0
If 𝑋𝑋𝑛𝑛 = 1 i.e., if the system is at state 1.
Urn I will contain 1 red marble and 1 white marble.
So Urn II will contain 1 White and 2 Red marbles.
1 1 1
𝑃𝑃10 = 𝑃𝑃(𝑋𝑋𝑛𝑛+1 = 0⁄𝑋𝑋𝑛𝑛 = 1) = × =
2 3 6
2 1 1
𝑃𝑃12 = 𝑃𝑃(𝑋𝑋𝑛𝑛+1 = 2⁄𝑋𝑋𝑛𝑛 = 1) = × =
3 2 3
But 𝑃𝑃10 + 𝑃𝑃11 + 𝑃𝑃12 = 1
1 1 1
⇒ 𝑃𝑃11 = 1 − − =
6 3 2

If 𝑋𝑋𝑛𝑛 = 2 i.e., if the system is at state 2.


𝑃𝑃20 = 0
2 2
𝑃𝑃21 = 𝑃𝑃(𝑋𝑋𝑛𝑛+1 = 1⁄𝑋𝑋𝑛𝑛 = 2) = 1 × =
3 3
But 𝑃𝑃22 = 1 − 𝑃𝑃20 − 𝑃𝑃21
2 1
⇒ 𝑃𝑃22 = 1 − =
3 3

∴ 𝑇𝑇ℎ𝑒𝑒 𝑇𝑇𝑇𝑇𝑇𝑇 𝑖𝑖𝑖𝑖

0 1 0
⎡1 1 1⎤
⎢ ⎥
𝑃𝑃 = ⎢6 2 3⎥
⎢ 2 1⎥
⎣0 3 3⎦


The initial distribution 𝑃𝑃(0) = [1 0 0], since there is no red marble and so the
probability of 0 red marble is 1.
0 1 0
1 1 1
(ii) 𝑃𝑃(1) = 𝑃𝑃(0) 𝑃𝑃 = [1 0 0] � 6 2 3� = [0 1 0]
2 1
0
3 3
0 1 0
⎡1 1 1⎤
⎢ ⎥ 1 1 1
𝑃𝑃(2) = 𝑃𝑃(1) 𝑃𝑃 = [0 1 0] ⎢6 2 3⎥ = � �
⎢ 2 1⎥ 6 2 3
⎣0 3 3⎦
0 1 0
1 1 1
P^(3) = P^(2) P = [1/6   1/2   1/3] P = [1/12   23/36   5/18]
P(2 red marbles in Urn I after 3 steps) = 5/18

(iii) The long run probability is limiting probability


Π = [𝜋𝜋0 𝜋𝜋1 𝜋𝜋2 ]where𝜋𝜋0 + 𝜋𝜋1 + 𝜋𝜋2 = 1 − − − − − −(1)

0 1 0
1 1 1
And ΠP = Π ⇒ [𝜋𝜋0 𝜋𝜋1 𝜋𝜋2 ] � 6 2 3� = [𝜋𝜋0 𝜋𝜋1 𝜋𝜋2 ]
2 1
0
3 3
𝜋𝜋1 𝜋𝜋1 2𝜋𝜋2 𝜋𝜋1 𝜋𝜋2
⇒� 𝜋𝜋0 + + + � = [𝜋𝜋0 𝜋𝜋1 𝜋𝜋2 ]
6 2 3 3 3
𝜋𝜋1 𝜋𝜋1 𝜋𝜋2 𝜋𝜋1
⇒ 𝜋𝜋0 = + = 𝜋𝜋2 ⇒ 𝜋𝜋2 =
6 3 3 2
Sub in (1)
𝜋𝜋 1 𝜋𝜋 1
+ 𝜋𝜋1 + =1
6 2
10 𝜋𝜋 1 6
⇒ = 1 ⇒ 𝜋𝜋1 =
6 10
1 6 1 1 6 3
𝜋𝜋0 = × = ; 𝜋𝜋2 = × =
6 10 10 2 10 10
Π = [1/10   6/10   3/10]
∴ P(2 red marbles in Urn I in the long run) = 3/10
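Both answers can be verified numerically; the following is an illustrative sketch (assuming NumPy), not part of the original solution:

```python
import numpy as np

P = np.array([[0,   1,   0  ],
              [1/6, 1/2, 1/3],
              [0,   2/3, 1/3]])
p0 = np.array([1.0, 0.0, 0.0])            # start with 0 red marbles in Urn A

print(p0 @ np.linalg.matrix_power(P, 3))  # [1/12, 23/36, 5/18]; last entry ~ 0.278

# stationary distribution: pi P = pi with sum(pi) = 1
A = np.vstack([P.T - np.eye(3), np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0, 0, 0, 1]), rcond=None)[0]
print(pi)                                  # [0.1, 0.6, 0.3]
```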

(OR)

(b) (i) A radioactive source emits particles of a rate of 5 per minute in accordance with
Poisson process. Each particle emitted has a probability 0.6 of being recorded. Find the
probability that 10 particles are recorded in 4 minute period.
Solution: By property of Poisson process with rate 𝜆𝜆 the number of recorded particles
𝑁𝑁(𝑡𝑡) is a Poisson process with parameter 𝜆𝜆𝜆𝜆 where p is the probability of recording
each.
𝜆𝜆 = 5⁄𝑚𝑚𝑚𝑚𝑚𝑚
𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃 𝑜𝑜𝑜𝑜 𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟𝑟 = 𝑝𝑝 = 0.6
𝑡𝑡 = 4, 𝑛𝑛 = 10
𝑒𝑒 − 𝜆𝜆𝜆𝜆𝜆𝜆 (𝜆𝜆𝜆𝜆𝜆𝜆)𝑛𝑛
𝑃𝑃[𝑋𝑋(𝑡𝑡) = 𝑛𝑛] = 𝑛𝑛 = 0,1,23, …
𝑛𝑛!
Here 𝜆𝜆𝜆𝜆𝜆𝜆 = 5 × 0.6 × 4 = 12
P[X(4) = 10] = e^{−12} (12)^{10} / 10! = 0.1048
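A one-line check of the thinned-Poisson result (illustrative sketch, assuming SciPy): the recorded count in 4 minutes is Poisson with mean λpt = 12.

```python
from scipy.stats import poisson

print(poisson.pmf(10, 12))   # ~ 0.1048
```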
(ii) Check if a random telegraph signal process is wide sense stationary.
(ii) Mention any three properties each of auto correlation and of cross correlation function
of a wide sense stationary process.
Solution: Properties of auto correlation: Let X(t) be a WSS process. Then the auto correlation
function 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) is a function of time difference 𝜏𝜏 only.Itis denoted by 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏).
Thus 𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = 𝐸𝐸(𝑋𝑋(𝑡𝑡)𝑋𝑋(𝑡𝑡 + 𝜏𝜏) = 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏).
(1) 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑅𝑅𝑋𝑋𝑋𝑋 (−𝜏𝜏). (i.e., autocorrelation function is an even function)
(2) |𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)| ≤ 𝑅𝑅𝑋𝑋𝑋𝑋 (0). (i.e., Max. value of 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)is 𝑅𝑅𝑋𝑋𝑋𝑋 (0)).
(3) If the process X(t) contains a periodic component, then 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) will also
contain periodic component of the same period.
Properties of cross correlation:
(1) 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)=𝑅𝑅𝑌𝑌𝑌𝑌 (−𝜏𝜏).
(2) If the random process X(t) and Y(t) are independent, then
𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝐸𝐸(𝑋𝑋). 𝐸𝐸(𝑌𝑌)
(3) If the R.P X(t) & Y(t) are of zero mean,
lim 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = lim 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 0
𝜏𝜏→∞ 𝜏𝜏→∞


14. (a)(i) Consider the two random processes 𝑋𝑋(𝑡𝑡) = 3cos⁡


(𝜔𝜔𝜔𝜔 + 𝜃𝜃) and 𝑌𝑌(𝑡𝑡) =
𝜋𝜋
2 cos(𝜔𝜔𝜔𝜔 + ∅) 𝑤𝑤ℎ𝑒𝑒𝑒𝑒𝑒𝑒 ∅ = 𝜃𝜃 − 𝑎𝑎𝑎𝑎𝑎𝑎 𝜃𝜃 is uniformly distributed over
2

(0,2𝜋𝜋). 𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉 |𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)| ≤ �𝑅𝑅𝑋𝑋𝑋𝑋 (0)𝑅𝑅𝑌𝑌𝑌𝑌 (0).


Solution:
Given 𝑋𝑋(𝑡𝑡) = 3cos⁡
(𝜔𝜔𝜔𝜔 + 𝜃𝜃)
𝜋𝜋
𝑌𝑌(𝑡𝑡) = 2 cos �𝜔𝜔𝜔𝜔 + 𝜃𝜃 − � = 2sin⁡
(𝜔𝜔𝜔𝜔 + 𝜃𝜃)
2
32
𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐
2
22
𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐
2
9
𝑅𝑅𝑋𝑋𝑋𝑋 (0) = 𝑅𝑅𝑌𝑌𝑌𝑌 (0) = 2
2
9
Now �𝑅𝑅𝑋𝑋𝑋𝑋 (0)𝑅𝑅𝑌𝑌𝑌𝑌 (0) = � × 2 = 3
2

𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝐸𝐸[𝑋𝑋(𝑡𝑡)𝑌𝑌(𝑡𝑡 + 𝜏𝜏)]


(𝜔𝜔(𝑡𝑡 + 𝜏𝜏) + 𝜃𝜃)]
= 𝐸𝐸[3 cos(𝜔𝜔𝜔𝜔 + 𝜃𝜃) 2𝑠𝑠𝑠𝑠𝑠𝑠⁡
1
= 6𝐸𝐸 � (sin(2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 2𝜃𝜃) + 𝑆𝑆𝑆𝑆𝑆𝑆(−𝜔𝜔𝜔𝜔))�
2
= 3𝐸𝐸[sin(2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 2𝜃𝜃)] − 3𝐸𝐸[𝑆𝑆𝑆𝑆𝑆𝑆(𝜔𝜔𝜔𝜔)]
2𝜋𝜋
= −3 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + 3 � 𝑠𝑠𝑠𝑠𝑠𝑠 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 2𝜃𝜃) 𝑓𝑓(𝜃𝜃)𝑑𝑑𝑑𝑑
0
2𝜋𝜋
1
= −3 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + 3 � 𝑠𝑠𝑠𝑠𝑠𝑠 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 2𝜃𝜃) 𝑑𝑑𝑑𝑑
0 2𝜋𝜋
2𝜋𝜋
3 [𝑐𝑐𝑐𝑐𝑐𝑐 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 2𝜃𝜃)]
= −3 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + � �
2𝜋𝜋 2 0
3
= −3 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + [𝑐𝑐𝑐𝑐𝑐𝑐 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔 + 4𝜋𝜋) − 𝑐𝑐𝑐𝑐𝑐𝑐 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔)]
4𝜋𝜋
3
= −3 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + [𝑐𝑐𝑐𝑐𝑐𝑐 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔) − 𝑐𝑐𝑐𝑐𝑐𝑐 (2𝜔𝜔𝜔𝜔 + 𝜔𝜔𝜔𝜔)]
4𝜋𝜋
3
𝑅𝑅𝑋𝑋𝑋𝑋 (𝑡𝑡, 𝑡𝑡 + 𝜏𝜏) = −3 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 + (0)
4𝜋𝜋
|𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)| = |−3 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠| ≤ 3 𝑓𝑓𝑓𝑓𝑓𝑓 𝑎𝑎𝑎𝑎𝑎𝑎 𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣 𝑜𝑜𝑜𝑜 𝜏𝜏


|𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)| ≤ �𝑅𝑅𝑋𝑋𝑋𝑋 (0)𝑅𝑅𝑌𝑌𝑌𝑌 (0).

(ii) Find the power spectral density of a random binary transmission process where
|𝜏𝜏|
autocorrelation function is 𝑅𝑅(𝜏𝜏) = �1 − ; |𝜏𝜏| ≤ 𝑇𝑇.
𝑇𝑇
|𝜏𝜏|
Solution: Given 𝑅𝑅(𝜏𝜏) = �1 − ; |𝜏𝜏| ≤ 𝑇𝑇
𝑇𝑇

We know that

𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = � 𝑅𝑅(𝜏𝜏)𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞
𝑇𝑇 |𝜏𝜏|
= � �1 − � 𝑒𝑒 −𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−𝑇𝑇 𝑇𝑇
𝑇𝑇 |𝜏𝜏|
= � �1 − � (𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 − 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖) 𝑑𝑑𝑑𝑑
−𝑇𝑇 𝑇𝑇
𝑇𝑇 |𝜏𝜏|
= � �1 − � (𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 − 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖) 𝑑𝑑𝑑𝑑
−𝑇𝑇 𝑇𝑇
𝑇𝑇
𝜏𝜏
= 2 � �1 − � 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑑𝑑𝑑𝑑
0 𝑇𝑇
𝜏𝜏 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 1 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑇𝑇
= 2 ��1 − � � � − �− � �− ��
𝑇𝑇 𝜔𝜔 𝑇𝑇 𝜔𝜔 2 0
= 2[ (0 − cos ωT/(Tω²)) − (0 − 1/(Tω²)) ]
= (2/(Tω²)) (1 − cos ωT)
= 4 sin²(ωT/2) / (Tω²)
𝑇𝑇𝜔𝜔 2

(OR)
𝜔𝜔 2 +9
(b)(i) If the power spectral density of a continuous process is 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = Find
𝜔𝜔 4 +5𝜔𝜔 2 +4

the mean square value of the process.


𝜔𝜔 2 +9
Solution: Given 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) =
𝜔𝜔 4 +5𝜔𝜔 2 +4


𝜔𝜔2 + 9
=
(𝜔𝜔 2 + 1)(𝜔𝜔 2 + 4)
1 ∞
𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = � 𝑆𝑆 (𝜔𝜔)𝑒𝑒 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
2𝜋𝜋 −∞ 𝑋𝑋𝑋𝑋
The mean square value is given by
𝐸𝐸[𝑋𝑋 2 (𝑡𝑡)] = 𝑅𝑅𝑋𝑋𝑋𝑋 (0)

1 ∞
= � 𝑆𝑆 (𝜔𝜔) 𝑑𝑑𝑑𝑑
2𝜋𝜋 −∞ 𝑋𝑋𝑋𝑋

1
= 2 � 𝑆𝑆 (𝜔𝜔) 𝑑𝑑𝑑𝑑 ∵ 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) 𝑖𝑖𝑖𝑖 𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
2𝜋𝜋 0 𝑋𝑋𝑋𝑋
1 ∞ 𝜔𝜔2 + 9
= � 𝑑𝑑𝑑𝑑
𝜋𝜋 0 (𝜔𝜔 2 + 1)(𝜔𝜔 2 + 4)

To evaluate this integral by using partial fraction


Put 𝜔𝜔2 = 𝑢𝑢 we have
𝑢𝑢 + 9 𝐴𝐴 𝐵𝐵
= +
(𝑢𝑢 + 1)(𝑢𝑢 + 4) (𝑢𝑢 + 1) (𝑢𝑢 + 4)

𝑢𝑢 + 9 = 𝐴𝐴(𝑢𝑢 + 4) + 𝐵𝐵(𝑢𝑢 + 1)
𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆𝑆 𝑓𝑓𝑓𝑓𝑓𝑓 𝐴𝐴 𝑎𝑎𝑎𝑎𝑎𝑎 𝐵𝐵, 𝑤𝑤𝑤𝑤 𝑔𝑔𝑔𝑔𝑔𝑔
8 5
𝐴𝐴 = , 𝐵𝐵 = −
3 3
8 5
𝜔𝜔2 + 9 −3
∴ = 3 +
(𝜔𝜔 2 + 1)(𝜔𝜔 2 + 4) (𝑢𝑢 + 1) (𝑢𝑢 + 4)

8 5
2 (𝑡𝑡)]
1 ∞ 3

3
𝐸𝐸[𝑋𝑋 = �� 𝑑𝑑𝑑𝑑 − � 𝑑𝑑𝑑𝑑�
𝜋𝜋 0 (𝜔𝜔 2 + 1) 0 (𝜔𝜔 2 + 4)

1 8 5 1 𝜔𝜔 ∞
= � [𝑡𝑡𝑡𝑡𝑡𝑡−1 𝜔𝜔]∞
0 − × �𝑡𝑡𝑡𝑡𝑡𝑡
−1
� �
𝜋𝜋 3 3 2 2 0

1 8 5
= � [𝑡𝑡𝑡𝑡𝑡𝑡−1 (∞) − 𝑡𝑡𝑡𝑡𝑡𝑡−1 (0)] − [𝑡𝑡𝑡𝑡𝑡𝑡−1 (∞) − 𝑡𝑡𝑡𝑡𝑡𝑡−1 (0)]�
𝜋𝜋 3 6
1 8 𝜋𝜋 5 𝜋𝜋 1 𝜋𝜋 8 5
= � × − × �= × � − �
𝜋𝜋 3 2 6 2 𝜋𝜋 2 3 6
11
=
12
25𝜏𝜏 2 +36
(ii) A stationary process has an autocorrelation function given by 𝑅𝑅(𝜏𝜏) = Find
6.25𝜏𝜏 2 +4

the mean value mean-square value and variance of the process.


25𝜏𝜏 2 +36
Solution: Given 𝑅𝑅(𝜏𝜏) =
6.25𝜏𝜏 2 +4

𝐸𝐸�𝑋𝑋(𝑡𝑡)� = 𝜇𝜇𝑋𝑋 = � lim 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)


𝜏𝜏→∞

36
25𝜏𝜏 2 + 36 𝜏𝜏 2 �25 + 2 � 25
= � lim = �lim 𝜏𝜏 =� = √4 = 2
2
𝜏𝜏→∞ 6.25𝜏𝜏 + 4 𝜏𝜏→∞ 2 4 6.25
𝜏𝜏 �6.25 + 2 �
𝜏𝜏
𝑀𝑀𝑀𝑀𝑀𝑀𝑀𝑀 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣𝑣 = 𝐸𝐸�𝑋𝑋 2 (𝑡𝑡)� = 𝑅𝑅𝑋𝑋𝑋𝑋 (0)
0 + 36
= =9
0+4
𝑉𝑉𝑉𝑉𝑉𝑉[𝑋𝑋(𝑡𝑡)] = 𝐸𝐸�𝑋𝑋 2 (𝑡𝑡)� − 𝐸𝐸(𝑋𝑋(𝑡𝑡))
= 9 − 22 = 5
15. (a)(i) If the input to a time invariant stable line system is a wide sense stationary
process. Prove that the output will also be a wide sense stationary process.
Solution: Let X(t) be a WSS process for a linear time invariant stable system with Y(t) as the
output process.

Y (t ) = ∫ h(u ) X (t − u )du
−∞
Then where h(t ) is weighting function or unit impulse response.

∴ E [Y (t )] = ∫ E[h(u ) X (t − u )]du
−∞


= ∫ h(u ) E[ X (t − u )]du
−∞

Since X(t) is a WSS process, E [ X (t )] is a constant µ X for any t.


∴ E[ X (t − u )] = µ X
∞ ∞
∴ E [Y (t )] = ∫ h(u )µ X du = µ X ∫ h(u )du
−∞ −∞

∫ h(u )du
−∞
Since the system is stable , is finite
∴ E [Y (t )] is a constant.
Now RYY (t , t + τ ) = E[Y (t )Y (t + τ )]
∞ ∞
= E[ ∫ h(u1 ) E[ X (t − u1 )]du1 ∫ h(u 2 ) E[ X (t + τ − u 2 )]du 2 ]
−∞ −∞

∞ ∞
= E[ ∫ ∫ h(u1 )h(u 2 ) X (t − u1 ) X (t + τ − u 2 )du1 du 2 ]
− ∞− ∞

∞ ∞
= ∫ ∫ h(u )h(u
− ∞− ∞
1 2 ) E[ X (t − u1 ) X (t + τ − u 2 )]du1 du 2

Since X(t) is a WSS process, auto correlation function is only a function of time
difference
∞ ∞
∴ RYY (t , t + τ ) = ∫ ∫ h(u )h(u
− ∞− ∞
1 2 ) R XX (τ + u1 − u 2 )du1 du 2

When this double integral is evaluated by integrating with respect to


u1 and u 2 , the RHS is only a functionof τ . Hence Y(t) is a WSS process.
(ii) Show that 𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = |𝐻𝐻(𝜔𝜔)|2 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) where 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) and 𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) are the power
spectral densities of the input X(t) and the output Y(t) respectively and 𝐻𝐻(𝜔𝜔) is the
system transfer function.

Solution: =
Let Y (t ) ∫ h(u ) X (t − u )du
−∞


=
Y (t ) ∫ X (t − α )h(α )dα
−∞

∴ X (t + τ )Y (t ) = ∫ X (t + τ ) X (t − α )h(α )dα
−∞


E[ X (t + τ )Y (t )]= ∫ E{ X (t + τ ) X (t − α )}h(α )dα
−∞
Hence ∞
= ∫R
−∞
XX (τ + α )h(α )dα


= ∫R
−∞
XX (τ − β )h(− β )d β

=
i.e., RXY (τ ) RXX (τ ) * h(−τ ) (1)
RYX (τ ) = RXX (τ ) * h(τ ) (1a )

Y (t )Y (t − τ=
) ∫ X (t − α )Y (t − τ )h(α )dα
−∞


E{Y (t )Y (t − τ=
)} ∫R
−∞
XY (t − α )h(α )dα

Assuming that {X(t) & Y(t) are jointly WSS


i.e., RYY (τ ) = RXY (τ ) * h(τ ) ( 2)
Taking FT’s of (1) & (2) we get
S XY (ω ) = S XX (ω ) H* (ω ) (3)

Where H*(ω ) is the conjugate of H(ω ) & SYY (ω ) = S XY (ω ) H(ω ) (4)

Inserting (3) In (4) SYY (ω ) = H (ω ) 2 S XX (ω ) .


(OR)
1
(b) (i) A circuit has an impulse response given by ℎ(𝑡𝑡) = � ; 0 ≤ 𝑡𝑡 ≤ 𝑇𝑇 , Express
𝑇𝑇

𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔)
In terms of 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔).

Solution: 𝐻𝐻(𝜔𝜔) = 𝐹𝐹[ℎ(𝑡𝑡)] == ∫−∞ ℎ(𝑡𝑡) 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
𝑇𝑇 1 1 𝑇𝑇
= ∫0 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑 = ∫0 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
𝑇𝑇 𝑇𝑇
𝑇𝑇
1 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 1 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 1
= � � = � − �
𝑇𝑇 −𝑖𝑖𝑖𝑖 0 𝑇𝑇 −𝑖𝑖𝑖𝑖 −𝑖𝑖𝑖𝑖
1
=− �𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 − 1�
𝑖𝑖𝑖𝑖𝑖𝑖
1
= �1 − 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 �
𝑖𝑖𝑖𝑖𝑖𝑖
1
= [1 − (𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 − 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖)]
𝑖𝑖𝑖𝑖𝑖𝑖

1
= [1 − 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 + 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖]
𝑖𝑖𝑖𝑖𝑖𝑖
1 𝜔𝜔𝜔𝜔 𝜔𝜔𝜔𝜔 𝜔𝜔𝜔𝜔
= �2𝑠𝑠𝑠𝑠𝑠𝑠2 + 𝑖𝑖2𝑠𝑠𝑠𝑠𝑠𝑠 𝑐𝑐𝑐𝑐𝑐𝑐 �
𝑖𝑖𝑖𝑖𝑖𝑖 2 2 2
−2𝑖𝑖 𝜔𝜔𝜔𝜔 𝜔𝜔𝜔𝜔 𝜔𝜔𝜔𝜔
= × 𝑆𝑆𝑆𝑆𝑆𝑆 �𝑆𝑆𝑆𝑆𝑆𝑆 + 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖 �
𝜔𝜔𝜔𝜔 2 2 2
𝜔𝜔𝜔𝜔
2𝑆𝑆𝑆𝑆𝑆𝑆
= 2 �𝐶𝐶𝐶𝐶𝐶𝐶 𝜔𝜔𝜔𝜔 − 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖 𝜔𝜔𝜔𝜔�
𝜔𝜔𝜔𝜔 2 2
|𝐻𝐻(𝜔𝜔)|2 = 𝐻𝐻(𝜔𝜔)𝐻𝐻 ∗ (𝜔𝜔)
𝜔𝜔𝜔𝜔 𝜔𝜔𝜔𝜔
2𝑆𝑆𝑆𝑆𝑆𝑆 𝜔𝜔𝜔𝜔 𝜔𝜔𝜔𝜔 2𝑆𝑆𝑆𝑆𝑆𝑆
= 2 �𝐶𝐶𝐶𝐶𝐶𝐶 − 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖 �× 2 �𝐶𝐶𝐶𝐶𝐶𝐶 𝜔𝜔𝜔𝜔 + 𝑖𝑖𝑖𝑖𝑖𝑖𝑖𝑖 𝜔𝜔𝜔𝜔�
𝜔𝜔𝜔𝜔 2 2 𝜔𝜔𝜔𝜔 2 2
𝜔𝜔𝜔𝜔 2 𝜔𝜔𝜔𝜔 2
2𝑆𝑆𝑆𝑆𝑆𝑆 2𝑆𝑆𝑆𝑆𝑆𝑆
=� 2 � ×1=� 2�
𝜔𝜔𝜔𝜔 𝜔𝜔𝜔𝜔

𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = |𝐻𝐻(𝜔𝜔)|2 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)


𝜔𝜔𝜔𝜔 2
2𝑆𝑆𝑆𝑆𝑆𝑆
∴ 𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = � 2 � 𝑆𝑆 (𝜔𝜔)
𝑋𝑋𝑋𝑋
𝜔𝜔𝜔𝜔

(ii) Given 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝐴𝐴𝑒𝑒 − 𝑎𝑎|𝜏𝜏| and ℎ(𝑡𝑡) = 𝑒𝑒 − 𝛽𝛽𝛽𝛽 𝑢𝑢(𝑡𝑡) where 𝑢𝑢(𝑡𝑡) = {1; 𝑡𝑡 ≥ 0 . Find the
power spectral density of the output y(t).


Solution: 𝐻𝐻(𝜔𝜔) = 𝐹𝐹[ℎ(𝑡𝑡)] = ∫−∞ ℎ(𝑡𝑡) 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑


= � 𝑒𝑒 − 𝛽𝛽𝛽𝛽 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
0

= � 𝑒𝑒 − (𝛽𝛽 +𝑖𝑖𝑖𝑖 )𝑡𝑡 𝑑𝑑𝑑𝑑
0

𝑒𝑒 − (𝛽𝛽 +𝑖𝑖𝑖𝑖 )𝑡𝑡
=� �
− (𝛽𝛽 + 𝑖𝑖𝑖𝑖) 0
1
=
(𝛽𝛽 + 𝑖𝑖𝑖𝑖)

1 1
|𝐻𝐻(𝜔𝜔)|2 = � �= 2
(𝛽𝛽 + 𝑖𝑖𝑖𝑖) (𝛽𝛽 + 𝜔𝜔 2 )
Power spectral density of X(t) is 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = 𝐹𝐹[𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏)]
𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) = 𝐴𝐴𝐴𝐴�𝑒𝑒 − 𝑎𝑎|𝜏𝜏| �
2𝐴𝐴𝐴𝐴
=
𝑎𝑎2 + 𝜔𝜔 2
The power spectral density of Y(t) is 𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = |𝐻𝐻(𝜔𝜔)|2 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)
1 2𝐴𝐴𝐴𝐴
=
(𝛽𝛽2 + 𝜔𝜔 ) 𝑎𝑎 + 𝜔𝜔 2
2 2

2𝐴𝐴𝐴𝐴 1 1
= � 2 − 2 �
𝛽𝛽2 2
− 𝑎𝑎 𝑎𝑎 + 𝜔𝜔 2 (𝛽𝛽 + 𝜔𝜔 2 )

𝐴𝐴 2𝐴𝐴𝐴𝐴 𝑎𝑎𝑎𝑎 2𝛽𝛽


= � 2 �−
𝛽𝛽2 2
− 𝑎𝑎 𝑎𝑎 + 𝜔𝜔 2 𝛽𝛽(𝛽𝛽 − 𝑎𝑎 ) (𝛽𝛽 + 𝜔𝜔 2 )
2 2 2

𝑎𝑎𝑎𝑎 2𝛽𝛽 𝐴𝐴 2𝑎𝑎


𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = − � �
𝛽𝛽(𝑎𝑎2 − 𝛽𝛽2 ) (𝛽𝛽2 + 𝜔𝜔 2 ) 𝛽𝛽(𝑎𝑎2 − 𝛽𝛽2 ) 𝑎𝑎2 + 𝜔𝜔 2

Since 𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝐹𝐹 −1 [𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔)], the inverse Fourier transform of 𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔)
𝑎𝑎𝑎𝑎 2𝛽𝛽 𝐴𝐴 2𝑎𝑎
𝑅𝑅𝑌𝑌𝑌𝑌 (𝜏𝜏) = 𝐹𝐹 −1 � 2 �− 2 𝐹𝐹 −1 � 2 �
𝛽𝛽(𝑎𝑎2 − 𝛽𝛽 )
2 2
(𝛽𝛽 + 𝜔𝜔 ) (𝑎𝑎 − 𝛽𝛽 )
2 𝑎𝑎 + 𝜔𝜔 2
𝑎𝑎𝑎𝑎 𝐴𝐴 2𝑎𝑎
= 𝑒𝑒 − 𝛽𝛽 |𝜏𝜏| − (𝑎𝑎 2 𝑒𝑒 − 𝑎𝑎|𝜏𝜏| since𝐹𝐹�𝑒𝑒 − 𝑎𝑎|𝜏𝜏| � =
𝛽𝛽 (𝑎𝑎 2 −𝛽𝛽 2 ) −𝛽𝛽 2 ) 𝑎𝑎 2 +𝜔𝜔 2



Agni college of Technology
Chennai – 130

B.E./B.Tech. DEGREE EXAMINATIONS, NOVEMBER/DECEMBER 2016


Fourth Semester
Common to ECE/BIOMEDICAL
MA6453 – PROBABILITY AND RANDOM PROCESSES
(Regulations 2013)
Time: Three hours Maximum: 100 marks
Answer ALL Questions

PART – A (10 X 2 = 20 marks)

1. 𝐾𝐾𝑒𝑒 −𝑥𝑥 , 𝑥𝑥 > 0


If 𝑓𝑓(𝑥𝑥) = � is the pdf of a random variable X, then find the value of
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
K.

Ans: Since∫−∞ 𝑓𝑓(𝑥𝑥)𝑑𝑑𝑑𝑑 = 1

� 𝐾𝐾 𝑒𝑒 − 𝑥𝑥 𝑑𝑑𝑑𝑑 = 1
0
𝑒𝑒 −𝑥𝑥 ∞
𝐾𝐾 � � =1
−1 0
𝐾𝐾[0 − (−1)] = 1
∴ 𝐾𝐾 = 1
4
2. �2𝑒𝑒 𝑡𝑡 +1�
Let X be a random variable with moment generating function 𝑀𝑀𝑋𝑋 (𝑡𝑡) =
81
Then find its mean and variance.
Ans: 4(2𝑒𝑒 𝑡𝑡 + 1)3 2𝑒𝑒 𝑡𝑡 8 × 33 8
𝜇𝜇1′ = 𝑀𝑀′𝑋𝑋 (𝑡𝑡) = � � = =
81 𝑡𝑡=0
81 3
8
𝜇𝜇2′ = 𝑀𝑀′′ 𝑋𝑋 (𝑡𝑡) = [(2𝑒𝑒 𝑡𝑡 + 1)3 𝑒𝑒 𝑡𝑡 ]𝑡𝑡=0
81
8
= [3𝑒𝑒 𝑡𝑡 (2𝑒𝑒 𝑡𝑡 + 1)2 2𝑒𝑒 𝑡𝑡 + 𝑒𝑒 𝑡𝑡 (2𝑒𝑒 𝑡𝑡 + 1)3 ]𝑡𝑡=0
81
6 × 32 + 33
= =8
81
8
𝜇𝜇1′ = 𝜇𝜇2′ = 8
3
2
2 8 8
𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉𝑉 = 𝜇𝜇2′ − �𝜇𝜇1′ � = 8 − � � =
3 9

3. Let (X,Y) be a two dimensional random variable. Define covariance of (X,Y). If


X and Y are independent, what will be the covariance of (X,Y)?
Ans: If X and Y are independent then the covariance of (X,Y)=0.
1
4. , 0 < 𝑥𝑥, 𝑦𝑦 < 2
If the joint pdf of (X,Y) is 𝑓𝑓(𝑥𝑥, 𝑦𝑦) = �4 Find 𝑃𝑃(𝑋𝑋 + 𝑌𝑌 ≤ 1)
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒


Ans:
𝑃𝑃(𝑋𝑋 + 𝑌𝑌 ≤ 1) = � 𝑓𝑓(𝑥𝑥, 𝑦𝑦) 𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑
𝑅𝑅
1 1−𝑦𝑦
1
=�� 𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑
4
0 0
1
1 1−𝑦𝑦
= �[𝑥𝑥]0 𝑑𝑑𝑑𝑑
4
0
1
1
= �(1 − 𝑦𝑦) 𝑑𝑑𝑑𝑑
4
0
1
1 𝑦𝑦 2
= �𝑦𝑦 − �
4 2 0
1 1
= �1 − �
4 2
1 1
= ×
4 2
1
=
8

5. Define a stationary process.


Ans: A random process X(t) is said to be stationary if all its statistical averages
are invariant under time. This means that X(t) and 𝑋𝑋(𝑡𝑡 + 𝜏𝜏) have the same
statistical averages for any 𝜏𝜏.
6. What is Markov Process?
Ans: If for 𝑡𝑡1 < 𝑡𝑡2 < 𝑡𝑡3 … < 𝑡𝑡𝑛𝑛 we have
𝑃𝑃[𝑋𝑋(𝑡𝑡) ≤ 𝑥𝑥⁄𝑋𝑋(𝑡𝑡1 ) = 𝑥𝑥1 , 𝑋𝑋(𝑡𝑡2 ) = 𝑥𝑥2 , … , 𝑋𝑋(𝑡𝑡𝑛𝑛 ) = 𝑥𝑥𝑛𝑛 ]
= 𝑃𝑃[𝑋𝑋(𝑡𝑡) ≤ 𝑥𝑥 ⁄𝑋𝑋(𝑡𝑡1 ) = 𝑥𝑥1 ]
7. The Power Spectral density of a random process X(t) is given by 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔) =
𝜋𝜋 𝑖𝑖𝑖𝑖 |𝜔𝜔| < 1
� find its autocorrelation
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒

Ans: 1
𝑅𝑅(𝜏𝜏) = � 𝑆𝑆(𝜔𝜔)𝑒𝑒 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
2𝜋𝜋
−∞
1 1
= ∫ 𝜋𝜋 𝑒𝑒 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
2𝜋𝜋 −1
1
𝜋𝜋 𝑒𝑒 𝑖𝑖𝑖𝑖𝑖𝑖
= � �
2𝜋𝜋 𝑖𝑖𝑖𝑖 −1
1 𝑒𝑒 𝑖𝑖𝑖𝑖 −𝑒𝑒 − 𝑖𝑖𝑖𝑖
= � �
𝜏𝜏 2𝑖𝑖
1
= 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠
𝜏𝜏

8. Prove that 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑅𝑅𝑌𝑌𝑌𝑌 (−𝜏𝜏)


Ans: Proof: 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝐸𝐸[𝑋𝑋(𝑡𝑡)𝑌𝑌(𝑡𝑡 + 𝜏𝜏)]


= 𝐸𝐸[𝑌𝑌(𝑡𝑡 + 𝜏𝜏)𝑋𝑋(𝑡𝑡)]
= 𝐸𝐸[𝑌𝑌(𝑢𝑢)𝑋𝑋(𝑢𝑢 − 𝜏𝜏)] Put 𝑢𝑢 = 𝑡𝑡 + 𝜏𝜏 ⟹ 𝑡𝑡 = 𝑢𝑢 − 𝜏𝜏
= 𝑅𝑅𝑌𝑌𝑌𝑌 (−𝜏𝜏)
∴ 𝑅𝑅𝑋𝑋𝑋𝑋 (𝜏𝜏) = 𝑅𝑅𝑌𝑌𝑌𝑌 (−𝜏𝜏)

9. Find the autocorrelation function of the white noise.


Ans: The autocorrelation function of N(t) is
𝑁𝑁
𝑅𝑅𝑁𝑁𝑁𝑁 (𝜏𝜏) = 0 𝛿𝛿(𝜏𝜏), 𝑤𝑤ℎ𝑒𝑒𝑒𝑒𝑒𝑒 𝛿𝛿(𝜏𝜏)is unit impulse
2
function.
1
10. 𝑖𝑖𝑖𝑖 |𝑡𝑡| ≤ 𝑐𝑐
If the system has the impulse response ℎ(𝑡𝑡) = � 2𝐶𝐶 write down the
0, |𝑡𝑡| > 𝑐𝑐
relation between the spectrums of input X(t) and output Y(t)
Ans: 𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = |𝐻𝐻(𝜔𝜔)|2 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)

𝐻𝐻(𝜔𝜔) = 𝐹𝐹[ℎ(𝑡𝑡)] = � ℎ(𝑡𝑡) 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 𝑑𝑑𝑑𝑑
−∞
𝑐𝑐
1 − 𝑖𝑖𝑖𝑖𝑖𝑖 1 𝑐𝑐 − 𝑖𝑖𝑖𝑖𝑖𝑖
= � 𝑒𝑒 𝑑𝑑𝑑𝑑 = � 𝑒𝑒 𝑑𝑑𝑑𝑑
−𝑐𝑐 2𝐶𝐶 2𝐶𝐶 −𝑐𝑐
𝑐𝑐
1 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 1 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖 𝑒𝑒 𝑖𝑖𝑖𝑖𝑖𝑖
= � � = � − �
2𝐶𝐶 −𝑖𝑖𝑖𝑖 −𝑐𝑐 2𝐶𝐶 −𝑖𝑖𝑖𝑖 −𝑖𝑖𝑖𝑖
1 𝑒𝑒 𝑖𝑖𝑖𝑖𝑖𝑖 − 𝑒𝑒 − 𝑖𝑖𝑖𝑖𝑖𝑖
= � �
𝐶𝐶𝐶𝐶 2𝑖𝑖
𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠
=
𝑐𝑐𝑐𝑐
𝑠𝑠𝑠𝑠𝑠𝑠2 𝑐𝑐𝑐𝑐
|𝐻𝐻(𝜔𝜔)|2 = 2 2
𝑐𝑐 𝜔𝜔
𝑠𝑠𝑠𝑠𝑠𝑠2 𝑐𝑐𝑐𝑐
𝑆𝑆𝑌𝑌𝑌𝑌 (𝜔𝜔) = 2 2 𝑆𝑆𝑋𝑋𝑋𝑋 (𝜔𝜔)
𝑐𝑐 𝜔𝜔
Part - B
11. (a) (i) State and prove memoryless property for geometric distribution.
Ans: Memory less Property of Geometric Distribution
If X follows a geometric distribution with parameter p, then for any two positive integers s and t,
𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡/𝑋𝑋 > 𝑠𝑠] = 𝑃𝑃[𝑋𝑋 > 𝑡𝑡]
Proof:
𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡 ∩ 𝑋𝑋 > 𝑠𝑠]
𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡/𝑋𝑋 > 𝑠𝑠] =
𝑃𝑃(𝑋𝑋 > 𝑠𝑠)
Now 𝑃𝑃(𝑋𝑋 = 𝑥𝑥) = 𝑞𝑞 𝑥𝑥−1 𝑝𝑝, 𝑥𝑥 = 1,2,3, …

𝑃𝑃[𝑋𝑋 > 𝑘𝑘] = � 𝑞𝑞 𝑥𝑥−1 𝑝𝑝


𝑥𝑥=𝑘𝑘+1
= 𝑞𝑞 𝑘𝑘 𝑝𝑝 + 𝑞𝑞 𝑘𝑘+1 𝑝𝑝 + 𝑞𝑞 𝑘𝑘+2 𝑝𝑝 + ⋯


= 𝑞𝑞 𝑘𝑘 𝑝𝑝[1 + 𝑞𝑞 + 𝑞𝑞 2 + ⋯ ]
= 𝑞𝑞 𝑘𝑘 𝑝𝑝(1 − 𝑞𝑞)−1
𝑞𝑞 𝑘𝑘 𝑝𝑝 𝑞𝑞 𝑘𝑘 𝑝𝑝
𝑃𝑃[𝑋𝑋 > 𝑘𝑘] = = = 𝑞𝑞 𝑘𝑘
1 − 𝑞𝑞 𝑝𝑝
Hence 𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡] = 𝑞𝑞 𝑠𝑠+𝑡𝑡

𝑃𝑃[𝑋𝑋 > 𝑠𝑠] = 𝑞𝑞 𝑠𝑠


𝑃𝑃[𝑋𝑋 > 𝑡𝑡] = 𝑞𝑞 𝑡𝑡
𝑞𝑞 𝑠𝑠+𝑡𝑡
∴ 𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡⁄𝑋𝑋 > 𝑠𝑠] = = 𝑞𝑞 𝑡𝑡 = 𝑃𝑃[𝑋𝑋 > 𝑡𝑡]
𝑞𝑞 𝑠𝑠
𝑖𝑖. 𝑒𝑒. , 𝑃𝑃[𝑋𝑋 > 𝑠𝑠 + 𝑡𝑡/𝑋𝑋 > 𝑠𝑠] = 𝑃𝑃[𝑋𝑋 > 𝑡𝑡]

(ii) In a certain city, the daily consumption of electric power in millions of Kilowatt-
hours can be considered as a random variable following gamma distribution with
1
parameters 𝜆𝜆 = , 𝛼𝛼 = 3. If the power plant in this city has a daily capacity of 12
2
million Kilowatt-hours, what is the probability that this supply of power will be
insufficient on any given day?
Ans: Let X denote the daily consumption of power in millions of kilowatt-hours.
1
Given X follows gamma distribution with parameter 𝜆𝜆 = , 𝛼𝛼 = 3.
2
The pdf of X is given by
∝−1
𝜆𝜆 𝑒𝑒 − 𝜆𝜆𝜆𝜆 ( 𝜆𝜆𝜆𝜆 )
𝑓𝑓(𝑥𝑥) = 𝑥𝑥 ≥ 0
Γ𝛼𝛼
1 − 𝑥𝑥2 𝑥𝑥 2
𝑒𝑒 � �
=2 2 = 1 𝑒𝑒 − 𝑥𝑥2 𝑥𝑥 2
Γ3 16
Probability for insufficient supply = 𝑃𝑃(𝑋𝑋 > 12)
∞ ∞
1 − 𝑥𝑥 2
= � 𝑓𝑓(𝑥𝑥)𝑑𝑑𝑑𝑑 = � 𝑒𝑒 2 𝑥𝑥 𝑑𝑑𝑑𝑑
16
12 12

𝑥𝑥 𝑥𝑥 𝑥𝑥
1 2 𝑒𝑒 −2 𝑒𝑒 −2 𝑒𝑒 −2
= �𝑥𝑥 − (2𝑥𝑥) +2× �
16 1 1 2 1 3
− �− � �− �
2 2 2 12
1
= [(0 − 0 + 0) − (−144 × 2 × 𝑒𝑒 −6 ) − 24 × 4 × 𝑒𝑒 −6 − (8 × 2
16
× 𝑒𝑒 −6 )]
𝑒𝑒 −6
= [288 + 96 + 16]
16
= 𝑒𝑒 −6 × 25
= 0.06195
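A cross-check of the gamma tail probability (illustrative sketch, assuming SciPy; shape α = 3 and scale 1/λ = 2):

```python
from scipy.stats import gamma
import numpy as np

print(gamma.sf(12, a=3, scale=2))   # P(X > 12) ~ 0.0620
print(25 * np.exp(-6))              # closed form from the integration above
```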

(OR)
(b) (i) A coin is biased so that a head is twice as likely to appear as a tail. If the coin
tossed 6 times, find the probabilities of getting (1) exactly 2 heads (2) at least 3
heads (3) at most 4 heads


Ans: Probability of appearing head = 𝑝𝑝 = 2 , 𝑞𝑞 = 1 , 𝑛𝑛 = 6


3 3
P(X=x)=n𝐶𝐶𝑥𝑥 𝑝𝑝 𝑥𝑥 𝑞𝑞 𝑛𝑛−𝑥𝑥 𝑥𝑥 = 0,1,2, …n
2 𝑥𝑥 1 6−𝑥𝑥
= 6𝐶𝐶𝑥𝑥 � � � �
3 3
2 2 1 6−2
𝑃𝑃[𝑋𝑋 = 2] = 6𝐶𝐶2 � � � �
3 3
4 1
= 15 × × = 0.0823
9 81

𝑃𝑃[𝑋𝑋 ≥ 3] = 1 − [𝑃𝑃(𝑋𝑋 = 0) + 𝑃𝑃(𝑋𝑋 = 1) + 𝑃𝑃(𝑋𝑋 = 2)]


2 0 1 6 2 1 1 5 2 2 1 4
= 1 − �6𝐶𝐶0 � � � � + 6𝐶𝐶1 � � � � + 6𝐶𝐶2 � � � � �
3 3 3 3 3 3
6 5 2 4
1 2 1 2 1
= 1 − �� � + 6 � � � � + 15 � � � � �
3 3 3 3 3
= 1 − [0.00137 + 0.01646 + 0.08230]
= 1 − 0.1001
= 0.8999
P(X ≤ 4) = P(X = 0) + P(X = 1) + P(X = 2) + P(X = 3) + P(X = 4)
= 6C₀(2/3)⁰(1/3)⁶ + 6C₁(2/3)¹(1/3)⁵ + 6C₂(2/3)²(1/3)⁴ + 6C₃(2/3)³(1/3)³ + 6C₄(2/3)⁴(1/3)²
= (1 + 12 + 60 + 160 + 240)/729 = 473/729 ≈ 0.6488
(ii) The length of time a person speaks over phone follows exponential distribution
with mean 6 mins. What is the probability that the person will talk for (1) more
than 8 mins (2) between 4 and 8 mins
Ans: Let X denote the length of time (in minutes) a person speaks over the phone. X follows an exponential distribution with mean 6 minutes, so λ = 1/6 per minute.
The pdf of X is f(x) = λ e^{−λx} = (1/6) e^{−x/6}, x ≥ 0
(1) P(X > 8) = ∫_8^∞ (1/6) e^{−x/6} dx = [−e^{−x/6}]_8^∞ = e^{−8/6} = e^{−4/3} ≈ 0.2636
(2) P(4 < X < 8) = ∫_4^8 (1/6) e^{−x/6} dx = e^{−4/6} − e^{−8/6} = e^{−2/3} − e^{−4/3} ≈ 0.2498
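A quick check with SciPy (illustrative sketch; mean 6 corresponds to scale = 6):

```python
from scipy.stats import expon

print(expon.sf(8, scale=6))                            # P(X > 8)     ~ 0.2636
print(expon.cdf(8, scale=6) - expon.cdf(4, scale=6))   # P(4 < X < 8) ~ 0.2498
```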

12. (a) (i) Let (X, Y) be a two-dimensional non-negative continuous random variable having the joint density f(x, y) = 4xy e^{−(x²+y²)} for x ≥ 0, y ≥ 0 (and 0 otherwise). Find the density function of U = √(X² + Y²).

−�𝑥𝑥 +𝑦𝑦 � 2 2
Given 𝑓𝑓(𝑥𝑥, 𝑦𝑦) = � 4𝑥𝑥𝑥𝑥 𝑒𝑒
Ans:
𝑥𝑥 ≥ 0, 𝑦𝑦 ≥ 0
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
𝑈𝑈 = �(𝑥𝑥 2 + 𝑦𝑦 2 )
The transformation function is
𝑢𝑢 = �(𝑥𝑥 2 + 𝑦𝑦 2 ) 𝑢𝑢 ≥ 0
Consider the transformation 𝑥𝑥 = 𝑢𝑢 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐, 𝑦𝑦 = 𝑢𝑢 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠
𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕
𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 −𝑢𝑢 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠
Then 𝐽𝐽 = �𝜕𝜕𝜕𝜕 � =� �
𝜕𝜕𝜕𝜕 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑢𝑢 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐
𝜕𝜕𝜕𝜕 𝜕𝜕𝜕𝜕
= 𝑢𝑢𝑐𝑐𝑐𝑐𝑐𝑐 2 𝜃𝜃 + 𝑢𝑢𝑠𝑠𝑠𝑠𝑠𝑠2 𝜃𝜃 = 𝑢𝑢
So (X,Y) is transformed to (𝑈𝑈, ∅)
∴ the joint pdf of (𝑈𝑈, ∅) is
𝑓𝑓𝑈𝑈,∅ (𝑢𝑢, 𝜃𝜃) = |𝐽𝐽|𝑓𝑓𝑋𝑋𝑋𝑋 (𝑥𝑥, 𝑦𝑦)
2 2
= 𝑢𝑢 4𝑥𝑥𝑥𝑥𝑒𝑒 −�𝑥𝑥 +𝑦𝑦 �
2
= 4𝑢𝑢3 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐 𝑒𝑒 −𝑢𝑢
2
𝑓𝑓(𝑢𝑢, 𝜃𝜃) = 2𝑢𝑢3 𝑒𝑒 −𝑢𝑢 𝑠𝑠𝑠𝑠𝑠𝑠2𝜃𝜃
The range space of (𝑈𝑈, ∅) is obtained from the range space of (X,Y)
using the transformations
𝑥𝑥 = 𝑢𝑢 𝑐𝑐𝑐𝑐𝑐𝑐𝑐𝑐, 𝑦𝑦 = 𝑢𝑢 𝑠𝑠𝑠𝑠𝑠𝑠𝑠𝑠
𝜋𝜋
As 𝑥𝑥 ≥ 0, 𝑦𝑦 ≥ 0 ⇒ 𝑢𝑢 ≥ 0 𝑎𝑎𝑎𝑎𝑎𝑎 0 ≤ 𝜃𝜃 ≤
2
3 −𝑢𝑢 2
𝜋𝜋
∴ 𝑓𝑓(𝑢𝑢, 𝜃𝜃) = 2𝑢𝑢 𝑒𝑒 𝑠𝑠𝑠𝑠𝑠𝑠2𝜃𝜃, 𝑢𝑢 ≥ 0 𝑎𝑎𝑎𝑎𝑎𝑎 0 ≤ 𝜃𝜃 ≤
2
The pdf of U is

𝑓𝑓𝑈𝑈 (𝑢𝑢) = � 𝑓𝑓(𝑢𝑢, 𝜃𝜃) 𝑑𝑑𝑑𝑑


−∞
𝜋𝜋
2
2
= � 2𝑢𝑢3 𝑒𝑒 −𝑢𝑢 𝑠𝑠𝑠𝑠𝑠𝑠2𝜃𝜃 𝑑𝑑𝑑𝑑
0
𝜋𝜋
−𝑢𝑢 2
𝑐𝑐𝑐𝑐𝑐𝑐2𝜃𝜃 2
= 2𝑢𝑢3 𝑒𝑒 �− �
2 0
2
= −𝑢𝑢3 𝑒𝑒 −𝑢𝑢 (−1 − 1)
2
∴ 𝑓𝑓𝑈𝑈 (𝑢𝑢) = 2𝑢𝑢3 𝑒𝑒 −𝑢𝑢 , 𝑢𝑢 ≥ 0
(ii) 𝐶𝐶𝐶𝐶(𝑥𝑥 − 𝑦𝑦), 0 < 𝑥𝑥 < 2, −𝑥𝑥 < 𝑦𝑦 < 𝑥𝑥
Given 𝑓𝑓(𝑥𝑥, 𝑦𝑦) = �
0, 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
(1) Evaluate C
(2) Find 𝑓𝑓𝑋𝑋 (𝑥𝑥)
(3) 𝑓𝑓𝑌𝑌 ⁄𝑋𝑋 (𝑦𝑦⁄𝑥𝑥)
(4) 𝑓𝑓𝑌𝑌 (𝑦𝑦)
Ans: (1) ∫2 ∫𝑥𝑥 𝐶𝐶𝐶𝐶(𝑥𝑥 − 𝑦𝑦)𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 = 1
𝑥𝑥=0 𝑦𝑦 =−𝑥𝑥
2 𝑥𝑥
𝐶𝐶 � � (𝑥𝑥 2 − 𝑥𝑥𝑥𝑥)𝑑𝑑𝑑𝑑𝑑𝑑𝑑𝑑 = 1
𝑥𝑥=0 𝑦𝑦=−𝑥𝑥


2
𝑥𝑥 3 𝑥𝑥 3
𝐶𝐶 � [�𝑥𝑥 3 − � − (−𝑥𝑥 3 − )]𝑑𝑑𝑑𝑑 = 1
𝑥𝑥=0 2 2

2 2
3
2𝑥𝑥 4 1
𝐶𝐶 � 2𝑥𝑥 𝑑𝑑𝑑𝑑 = 1 ⟹ 𝐶𝐶 � � ⟹ 𝐶𝐶 =
0 4 0 8
1
𝑓𝑓𝑋𝑋,𝑌𝑌 𝑦𝑦) = �8 𝑥𝑥(𝑥𝑥 − 𝑦𝑦): 0 < 𝑥𝑥 < 2, −𝑥𝑥 < 𝑦𝑦 < 𝑥𝑥
(𝑥𝑥,
0 ∶ 𝑜𝑜𝑜𝑜ℎ𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒𝑒
𝑥𝑥 1 1 𝑥𝑥 2
(2) 𝑓𝑓𝑥𝑥 (𝑥𝑥) = ∫−𝑥𝑥 𝑥𝑥(𝑥𝑥 − 𝑦𝑦)𝑑𝑑𝑑𝑑 = ∫−𝑥𝑥 (𝑥𝑥 − 𝑥𝑥𝑥𝑥) 𝑑𝑑𝑑𝑑 =
8 8
2 𝑥𝑥 3
1 2 𝑥𝑥𝑦𝑦 𝑥𝑥 𝑥𝑥 3 1
�𝑥𝑥 𝑦𝑦 − � = ��𝑥𝑥 3 − � − �−𝑥𝑥 3 − �� = 𝑥𝑥 3 ; 0 < 𝑥𝑥 < 2.
8 2 −𝑥𝑥 2 2 4
1
𝑓𝑓(𝑥𝑥,𝑦𝑦) 𝑥𝑥(𝑥𝑥−𝑦𝑦) 1
(3)𝑓𝑓𝑌𝑌 ⁄𝑋𝑋 (𝑦𝑦⁄𝑥𝑥) = = 8
𝑥𝑥 3
= (𝑥𝑥 − 𝑦𝑦); −𝑥𝑥 < 𝑦𝑦 < 𝑥𝑥.
𝑓𝑓 𝑋𝑋 (𝑥𝑥) 2𝑥𝑥 2
4


(4)𝑓𝑓𝑌𝑌 (𝑦𝑦) = ∫−∞ 𝑓𝑓(𝑥𝑥, 𝑦𝑦)𝑑𝑑𝑑𝑑
2
⎧� 1 𝑥𝑥(𝑥𝑥 − 𝑦𝑦)𝑑𝑑𝑑𝑑 𝑖𝑖𝑖𝑖 − 2 ≤ 𝑦𝑦 ≤ 0
⎪ 8
= −𝑦𝑦𝑥𝑥
⎨ 1
⎪ � 𝑥𝑥(𝑥𝑥 − 𝑦𝑦) 𝑑𝑑𝑑𝑑 𝑖𝑖𝑖𝑖 0 ≤ 𝑦𝑦 ≤ 2
⎩ −𝑥𝑥 8
3 2 2
⎧1 �𝑥𝑥 − 𝑥𝑥 𝑦𝑦� 𝑖𝑖𝑖𝑖 − 2 ≤ 𝑦𝑦 ≤ 0
⎪8 3 2 −𝑦𝑦
= 2
⎨ 1 𝑥𝑥 3 𝑥𝑥 2
⎪ � − 𝑦𝑦� 𝑖𝑖𝑖𝑖 0 ≤ 𝑦𝑦 ≤ 2
⎩ 8 3 2 𝑦𝑦
3 3
⎧1 �8 − 2𝑦𝑦 − �− 𝑦𝑦 − 𝑦𝑦 �� 𝑖𝑖𝑖𝑖 − 2 ≤ 𝑦𝑦 ≤ 0
⎪8 3 3 2
=
⎨ 1 8 𝑦𝑦 3 𝑦𝑦 3
⎪ � − 2𝑦𝑦 − � − �� 𝑖𝑖𝑖𝑖 0 ≤ 𝑦𝑦 ≤ 2
⎩ 8 3 3 2
∴ f_Y(y) = { 1/3 − y/4 + (5/48)y³   if −2 ≤ y ≤ 0
             1/3 − y/4 + (1/48)y³   if 0 ≤ y ≤ 2 }
(OR)
(b) (i) Three balls are drawn at random without replacement from a box containing 2
white, 3 red and 4 black balls. If X denotes the number of white balls drawn and
Y denotes the number of red balls drawn, find the joint probability distribution of
(X,Y).
Ans: Given that X denotes the number of white balls drawn and Y denotes the


number of red balls drawn.


So values of X are 0,1,2 and values of Y are 0,1,2,3. The jpmf of (X,Y) is
required.
When three balls are drawn at a time without replacement.
P(0,0) = P(X = 0, Y = 0) = P(no white and no red balls)
       = P(all three balls are black)
       = 4C3 / 9C3 = 4/84 = 1/21
P(0,1) = P(X = 0, Y = 1) = P(1 red and 2 black balls)
       = (3C1 × 4C2) / 9C3 = 18/84 = 3/14
P(0,2) = P(X = 0, Y = 2) = P(2 red and 1 black ball)
       = (3C2 × 4C1) / 9C3 = 12/84 = 1/7
P(0,3) = P(X = 0, Y = 3) = P(3 red balls)
       = 3C3 / 9C3 = 1/84
P(1,0) = P(X = 1, Y = 0) = P(1 white and 2 black balls)
       = (2C1 × 4C2) / 9C3 = 12/84 = 1/7
P(1,1) = P(X = 1, Y = 1) = P(1 white, 1 red and 1 black ball)
       = (2C1 × 3C1 × 4C1) / 9C3 = 24/84 = 2/7
P(1,2) = P(X = 1, Y = 2) = P(1 white and 2 red balls)
       = (2C1 × 3C2) / 9C3 = 6/84 = 1/14
P(1,3) = P(X = 1, Y = 3) = 0 (since this event is impossible)
P(2,0) = P(X = 2, Y = 0) = P(2 white, 0 red and 1 black ball)
       = (2C2 × 4C1) / 9C3 = 4/84 = 1/21
P(2,1) = P(X = 2, Y = 1) = P(2 white and 1 red ball)
       = (2C2 × 3C1) / 9C3 = 3/84 = 1/28
P(2,2) = P(X = 2, Y = 2) = 0
P(2,3) = P(X = 2, Y = 3) = 0
The joint pmf is
          X = 0     X = 1     X = 2
Y = 0      1/21      1/7      1/21
Y = 1      3/14      2/7      1/28
Y = 2      1/7       1/14      0
Y = 3      1/84       0        0
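The table can be verified by brute-force enumeration of all 9C3 = 84 equally likely draws; a possible sketch in Python (a verification aid only, not part of the exam answer):

from itertools import combinations
from fractions import Fraction
from collections import Counter

balls = ['W'] * 2 + ['R'] * 3 + ['B'] * 4      # 2 white, 3 red, 4 black
counts = Counter()
for draw in combinations(range(9), 3):          # all 84 equally likely draws
    colours = [balls[i] for i in draw]
    counts[(colours.count('W'), colours.count('R'))] += 1

total = sum(counts.values())                    # 84
for (x, y), c in sorted(counts.items()):
    print(f"P(X={x}, Y={y}) = {Fraction(c, total)}")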
(ii) In a partially destroyed laboratory record, only the lines of regression and the
variance of X are available. The regression equations are 8X - 10Y + 66 = 0 and
40X - 18Y = 214, and the variance of X is 9. Find (1) the correlation coefficient
between X and Y, (2) the mean values of X and Y, (3) the variance of Y.
Ans: Since both the regression lines pass through the point (x̄, ȳ),
8x̄ - 10ȳ + 66 = 0 ……………(1)
40x̄ - 18ȳ - 214 = 0 ……………(2)
Solving (1) and (2) we get
x̄ = 13, ȳ = 17
Given the variance of X = Var(X) = 9 ⟹ σ_x = 3
The equations of the regression lines can be written as
y = 0.8x + 6.6 and x = 0.45y + 5.35
Hence the regression coefficient of Y on X is
b_yx = r σ_y/σ_x = 0.8 ……………(3)
The regression coefficient of X on Y is
b_xy = r σ_x/σ_y = 0.45 ……………(4)
Multiplying (3) and (4),
r² = b_yx × b_xy = 0.8 × 0.45 = 0.36
r = √(b_yx × b_xy) ⟹ r = 0.6
Now σ_y = 0.8 σ_x / r = (0.8 × 3)/0.6 = 4
σ_x = 3, σ_y = 4, so Var(Y) = σ_y² = 16.
The correlation coefficient is r = 0.6.
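A quick numerical confirmation of the means and of r and σ_y (an illustrative sketch only, assuming NumPy):

import numpy as np

# Both regression lines pass through (x_bar, y_bar): solve the 2x2 system.
A = np.array([[8.0, -10.0],
              [40.0, -18.0]])
b = np.array([-66.0, 214.0])
x_bar, y_bar = np.linalg.solve(A, b)
print(x_bar, y_bar)                  # 13.0, 17.0

# Regression coefficients read off the rewritten lines y = 0.8x + 6.6, x = 0.45y + 5.35.
b_yx, b_xy = 0.8, 0.45
r = np.sqrt(b_yx * b_xy)             # 0.6
sigma_x = 3.0
sigma_y = b_yx * sigma_x / r         # 4.0
print(r, sigma_y, sigma_y**2)        # 0.6, 4.0, 16.0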

13. (a) (i) Show that the random process X(t) = A cos(ωt + θ) is a wide-sense stationary
process, where A and ω are constants and θ is uniformly distributed on the interval (0, 2π).

Ans: Given X(t) = A cos(ωt + θ), where θ is uniformly distributed in (0, 2π).
∴ f(θ) = 1/(2π), 0 ≤ θ ≤ 2π
We have to prove X(t) is a WSS process by showing
(i) E[X(t)] = constant
(ii) R_XX(t, t + τ) is a function of τ only.

E[X(t)] = E[A cos(ωt + θ)]
        = A ∫_0^(2π) cos(ωt + θ) f(θ) dθ
        = A ∫_0^(2π) cos(ωt + θ) (1/2π) dθ
        = (A/2π) [sin(ωt + θ)]_0^(2π) = (A/2π) [sin ωt - sin ωt]
        = 0 = constant

R_XX(t, t + τ) = E[X(t) X(t + τ)]
 = E[A cos(ωt + θ) A cos(ω(t + τ) + θ)]
 = A² E[cos(ωt + θ) cos(ω(t + τ) + θ)]
 = (A²/2) E[cos(2ωt + ωτ + 2θ) + cos(ωτ)]
 = (A²/2) E[cos(2ωt + ωτ + 2θ)] + (A²/2) E[cos ωτ]
 = (A²/2) ∫_0^(2π) cos(2ωt + ωτ + 2θ) f(θ) dθ + (A²/2) cos ωτ
 = (A²/2)(1/2π) ∫_0^(2π) cos(2ωt + ωτ + 2θ) dθ + (A²/2) cos ωτ
 = (A²/4π) [sin(2ωt + ωτ + 2θ)/2]_0^(2π) + (A²/2) cos ωτ
 = (A²/8π) [sin(2ωt + ωτ) - sin(2ωt + ωτ)] + (A²/2) cos ωτ
 = (A²/8π)(0) + (A²/2) cos ωτ
 = (A²/2) cos ωτ = a function of τ only
Since E[X(t)] = 0, a constant, and R_XX(t, t + τ) is a function of τ only,
X(t) is WSS.
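Both WSS conditions can also be illustrated numerically by averaging over many random phases; a rough sketch in Python (assuming NumPy, with A = 2 and ω = 3 chosen arbitrarily for illustration):

import numpy as np

rng = np.random.default_rng(1)
A, w = 2.0, 3.0                         # arbitrary illustrative constants
theta = rng.uniform(0, 2 * np.pi, 100_000)

def X(t):
    return A * np.cos(w * t + theta)    # one sample of X(t) per sampled phase

tau = 0.4
for t in (0.0, 0.7, 1.5):
    mean = X(t).mean()                  # ~0 for every t
    corr = (X(t) * X(t + tau)).mean()   # ~ (A**2 / 2) * cos(w * tau), independent of t
    print(t, round(mean, 3), round(corr, 3))
print("theory:", A**2 / 2 * np.cos(w * tau))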
(ii) A man either drives a car or catches a train to go to office each day. He never goes
two days in a row by train, but if he drives one day, then the next day he is just as likely
to drive again as he is to travel by train. Suppose that on the first day of the week the
man tossed a fair die and drove to work if and only if a 6 appeared. Find the probability
that he takes a train on the fourth day and the probability that he drives to work on the
third day.

Ans: The travel pattern is a Markov chain with state space {train (T), car (C)}.
The TPM of the chain is
            T      C
      T (   0      1  )
P =
      C (  1/2    1/2 )
Since P(travelling by car on day 1) = P(getting a 6 in the toss of the die) = 1/6
and P(travelling by train on day 1) = 5/6,
the initial state probability distribution is P(1) = (5/6  1/6).
P(2) = P(1) P = (5/6  1/6) P = (1/12  11/12)
P(3) = P(2) P = (1/12  11/12) P = (11/24  13/24)
P(4) = P(3) P = (11/24  13/24) P = (13/48  35/48)
∴ P(the man travels by train on the third day) = 11/24,
P(the man drives to work on the third day) = 13/24,
and P(the man takes a train on the fourth day) = 13/48.

The long-run probability is the limiting distribution
Π = [π_0  π_1], where π_0 + π_1 = 1 ----------(1)
and ΠP = Π ⇒ [π_0  π_1] P = [π_0  π_1], i.e.
π_1/2 = π_0 ----------(2)
π_0 + π_1/2 = π_1 ----------(3)
Equations (2) and (3) are one and the same. Solving (1) and (2):
π_0 = 1/3, π_1 = 2/3
P(the man travels by car in the long run) = 2/3
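The day-by-day distributions and the limiting distribution are easy to confirm with a few matrix-vector products; an illustrative sketch in Python (assuming NumPy):

import numpy as np

P = np.array([[0.0, 1.0],      # state order: [train, car]
              [0.5, 0.5]])
p = np.array([5/6, 1/6])       # day-1 distribution

for day in range(2, 5):
    p = p @ P
    print(f"day {day}: P(train) = {p[0]:.6f}, P(car) = {p[1]:.6f}")
# day 3 gives 11/24 and 13/24; day 4 gives 13/48 and 35/48.

# Long-run (stationary) distribution: left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(pi)                       # [1/3, 2/3]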

(OR)
(b) (i) Prove that the difference of two independent Poisson processes is not a Poisson
process.
Ans: Let X_1(t) and X_2(t) be two independent Poisson processes with parameters
λ_1 and λ_2, and let X(t) = X_1(t) - X_2(t).
E[X(t)] = E[X_1(t) - X_2(t)]
        = λ_1 t - λ_2 t
        = (λ_1 - λ_2) t
Now
E[X²(t)] = E{[X_1(t) - X_2(t)]²}
         = E[X_1²(t) + X_2²(t) - 2 X_1(t) X_2(t)]
         = E[X_1²(t)] + E[X_2²(t)] - 2 E[X_1(t) X_2(t)]
         = E[X_1²(t)] + E[X_2²(t)] - 2 E[X_1(t)] E[X_2(t)]   (since X_1(t) and X_2(t) are independent)
         = (λ_1² t² + λ_1 t) + (λ_2² t² + λ_2 t) - 2 λ_1 t λ_2 t
         = (λ_1² + λ_2² - 2 λ_1 λ_2) t² + (λ_1 + λ_2) t
         = (λ_1 - λ_2)² t² + (λ_1 + λ_2) t
         ≠ (λ_1 - λ_2)² t² + (λ_1 - λ_2) t,
which is what the second moment would have to be if X(t) were a Poisson process with
parameter λ_1 - λ_2.
∴ X_1(t) - X_2(t) is not a Poisson process.
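A simulation also makes the mismatch visible: the difference has mean (λ_1 - λ_2)t, but its variance is (λ_1 + λ_2)t rather than (λ_1 - λ_2)t, and it takes negative values, which a Poisson variable cannot. A rough sketch in Python (assuming NumPy; λ_1 = 3, λ_2 = 1, t = 2 are arbitrary illustrative values):

import numpy as np

rng = np.random.default_rng(2)
lam1, lam2, t, n = 3.0, 1.0, 2.0, 200_000    # arbitrary illustrative values

x1 = rng.poisson(lam1 * t, size=n)           # samples of X1(t)
x2 = rng.poisson(lam2 * t, size=n)           # samples of X2(t), independent of X1(t)
d = x1 - x2                                  # samples of X(t) = X1(t) - X2(t)

print(d.mean())          # ~ (lam1 - lam2) * t = 4
print(d.var())           # ~ (lam1 + lam2) * t = 8, not (lam1 - lam2) * t = 4
print((d < 0).mean())    # a positive fraction: a Poisson variable is never negative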
(ii) Prove that the random telegraph signal process Y(t) = α X(t) is a WSS process, where
α is a random variable independent of X(t) that assumes the values -1 and +1 with equal
probability, and R_XX(t_1, t_2) = e^(-2λ|t_1 - t_2|).
Ans: Let Y(t) = α X(t), with P(α = 1) = P(α = -1) = 1/2.
Then E(α) = 1(1/2) + (-1)(1/2) = 0 and E(α²) = 1.
To prove Y(t) is a WSS process we show
(i) E[Y(t)] = a constant
(ii) R_YY(t_1, t_2) is a function of t_1 - t_2 only.
(i) E[Y(t)] = E[α X(t)]
            = E(α) E[X(t)]   (since α and X(t) are independent)
            = 0 = constant
(ii) R_YY(t_1, t_2) = E[Y(t_1) Y(t_2)] = E[α X(t_1) α X(t_2)]
                    = E[α²] E[X(t_1) X(t_2)]
                    = 1 × e^(-2λ|t_1 - t_2|)
                    = a function of t_1 - t_2 only
∴ Y(t) is a WSS process.
14. (a) (i) The autocorrelation function of an ergodic process X(t) is
R_XX(τ) = 1 - |τ| if |τ| ≤ 1; 0 otherwise. Obtain the spectral density of X(t).
Ans: Given R_XX(τ) = 1 - |τ| if |τ| ≤ 1; 0 otherwise.
S_XX(ω) = ∫_{-∞}^{∞} R_XX(τ) e^(-iωτ) dτ
        = ∫_{-1}^{1} (1 - |τ|) e^(-iωτ) dτ
        = ∫_{-1}^{1} (1 - |τ|) (cos ωτ - i sin ωτ) dτ
        = ∫_{-1}^{1} (1 - |τ|) cos ωτ dτ - i ∫_{-1}^{1} (1 - |τ|) sin ωτ dτ
        = 2 ∫_{0}^{1} (1 - τ) cos ωτ dτ      (the second integrand is odd, so that integral is 0)
= 2 [(1 - τ)(sin ωτ/ω) - (-1)(-cos ωτ/ω²)]_0^1
= 2 [(0 - cos ω/ω²) - (0 - 1/ω²)]
= 2 [1/ω² - cos ω/ω²]
= (2/ω²)(1 - cos ω)
∴ S_XX(ω) = (4/ω²) sin²(ω/2)

(ii) The cross power spectrum of the real processes X(t) and Y(t) is given by
S_XY(ω) = a + ibω for |ω| < 1; 0 elsewhere. Find the cross-correlation function.
Ans: Given S_XY(ω) = a + ibω, |ω| < 1; 0 elsewhere.
∴ R_XY(τ) = (1/2π) ∫_{-∞}^{∞} S_XY(ω) e^(iωτ) dω
          = (1/2π) ∫_{-1}^{1} (a + ibω) e^(iωτ) dω
          = (1/2π) [ a ∫_{-1}^{1} e^(iωτ) dω + ib ∫_{-1}^{1} ω e^(iωτ) dω ]
          = (1/2π) { a [e^(iωτ)/(iτ)]_{-1}^{1} + ib [ω e^(iωτ)/(iτ) - e^(iωτ)/(iτ)²]_{-1}^{1} }
          = (1/π) { a (e^(iτ) - e^(-iτ))/(2iτ) + b (e^(iτ) + e^(-iτ))/(2τ) + ib (e^(iτ) - e^(-iτ))/(2τ²) }
          = (1/π) [ (a/τ) sin τ + (b/τ) cos τ - (b/τ²) sin τ ]
∴ R_XY(τ) = (1/(πτ²)) [aτ sin τ + bτ cos τ - b sin τ]
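This result can be sanity-checked numerically for sample values of a, b and τ; a sketch in Python (assuming SciPy; a = 2 and b = 3 are arbitrary):

import numpy as np
from scipy.integrate import quad

a, b = 2.0, 3.0                        # arbitrary illustrative constants

def R_numeric(tau):
    # Real part of (1/2pi) * integral_{-1}^{1} (a + i*b*w) e^{i*w*tau} dw
    # (the imaginary part integrates to zero over the symmetric interval).
    re, _ = quad(lambda w: a * np.cos(w * tau) - b * w * np.sin(w * tau), -1, 1)
    return re / (2 * np.pi)

def R_closed(tau):
    return (a * tau * np.sin(tau) + b * tau * np.cos(tau) - b * np.sin(tau)) / (np.pi * tau**2)

for tau in (0.3, 1.0, 2.5):
    print(tau, R_numeric(tau), R_closed(tau))   # the two columns agree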
(OR)
(b) (i) If {X(t)} and {Y(t)} are two jointly WSS random processes with autocorrelation
functions R_XX(τ) and R_YY(τ) respectively, prove that |R_XY(τ)| ≤ √(R_XX(0) R_YY(0)).
Establish any two properties of the autocorrelation function R_XX(τ).
Ans: To prove: |R_XY(τ)| ≤ √(R_XX(0) R_YY(0))
Proof: By the Cauchy-Schwarz inequality,
{E[X(t) Y(t + τ)]}² ≤ E[X²(t)] E[Y²(t + τ)]
⇒ [R_XY(τ)]² ≤ R_XX(0) R_YY(0)
⇒ |R_XY(τ)| ≤ √(R_XX(0) R_YY(0))

(1) R_XX(τ) = R_XX(-τ)  (i.e., the autocorrelation function is an even function).
Proof: We know that
R_XX(τ) = E[X(t) X(t + τ)]
R_XX(-τ) = E[X(t) X(t - τ)]
Put t - τ = p ⇒ t = p + τ
R_XX(-τ) = E[X(p + τ) X(p)] = R_XX(τ)

(2) The mean square value of the random process may be obtained from the
autocorrelation function R_XX(τ) by putting τ = 0.
Proof: We know that
R_XX(τ) = E[X(t) X(t + τ)]
R_XX(0) = E[X(t) X(t)] = E[X²(t)]
∴ R_XX(0) is the mean square value of the process.
(ii) Given the power spectral density of a continuous process as
S_XX(ω) = (ω² + 9)/(ω⁴ + 5ω² + 4), find the mean square value of the process.
Ans: Given S_XX(ω) = (ω² + 9)/(ω⁴ + 5ω² + 4)
                   = (ω² + 9)/[(ω² + 1)(ω² + 4)]
R_XX(τ) = (1/2π) ∫_{-∞}^{∞} S_XX(ω) e^(iωτ) dω
The mean square value is given by
E[X²(t)] = R_XX(0)
         = (1/2π) ∫_{-∞}^{∞} S_XX(ω) dω
         = (2/2π) ∫_{0}^{∞} S_XX(ω) dω      (∵ S_XX(ω) is even)
         = (1/π) ∫_{0}^{∞} (ω² + 9)/[(ω² + 1)(ω² + 4)] dω
To evaluate this integral, use partial fractions. Putting ω² = u, we have
(u + 9)/[(u + 1)(u + 4)] = A/(u + 1) + B/(u + 4)
u + 9 = A(u + 4) + B(u + 1)
Solving for A and B, we get
A = 8/3, B = -5/3
∴ (ω² + 9)/[(ω² + 1)(ω² + 4)] = (8/3)/(ω² + 1) - (5/3)/(ω² + 4)

E[X²(t)] = (1/π) [ ∫_0^∞ (8/3)/(ω² + 1) dω - ∫_0^∞ (5/3)/(ω² + 4) dω ]
         = (1/π) [ (8/3)[tan⁻¹ ω]_0^∞ - (5/3)(1/2)[tan⁻¹(ω/2)]_0^∞ ]
         = (1/π) [ (8/3)(tan⁻¹(∞) - tan⁻¹(0)) - (5/6)(tan⁻¹(∞) - tan⁻¹(0)) ]
         = (1/π) [ (8/3)(π/2) - (5/6)(π/2) ] = (1/π)(π/2)(8/3 - 5/6)
         = 11/12
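A one-line numerical check of the final value (a sketch only, assuming SciPy):

import numpy as np
from scipy.integrate import quad

S = lambda w: (w**2 + 9) / ((w**2 + 1) * (w**2 + 4))
val, _ = quad(S, 0, np.inf)
print(val / np.pi, 11 / 12)    # both ~0.91666...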

15. (a) (i) If Y(t) = A cos(ω_0 t + θ) + N(t), where A is a constant, θ is a random variable
with uniform distribution in (-π, π) and N(t) is a band-limited Gaussian white noise with
power spectral density S_NN(ω) = N_0/2 for |ω - ω_0| < ω_B; 0 elsewhere, find the power
spectral density of Y(t). Assume that N(t) and θ are independent.
Ans: Y(t) Y(t + τ) = [A cos(ω_0 t + θ) + N(t)] [A cos(ω_0 (t + τ) + θ) + N(t + τ)]
 = A² cos(ω_0 t + θ) cos(ω_0 (t + τ) + θ) + N(t) N(t + τ)
   + A cos(ω_0 t + θ) N(t + τ) + A cos(ω_0 (t + τ) + θ) N(t)
Since N(t) and θ are independent,
R_YY(t, t + τ) = A² E[cos(ω_0 t + θ) cos(ω_0 (t + τ) + θ)] + E[N(t) N(t + τ)]
   + A E[cos(ω_0 t + θ)] E[N(t + τ)] + A E[cos(ω_0 (t + τ) + θ)] E[N(t)]
By hypothesis E[N(t)] = 0 and E[N(t + τ)] = 0.
∴ R_YY(τ) = (A²/2) cos ω_0 τ + R_NN(τ)
S_YY(ω) = (A²/2) ∫_{-∞}^{∞} cos ω_0 τ e^(-iωτ) dτ + S_NN(ω)
        = (πA²/2) [δ(ω - ω_0) + δ(ω + ω_0)] + S_NN(ω)
(ii) A linear time-invariant system has an impulse response h(t) = e^(-βt) U(t). Find the
power spectral density of the output Y(t) corresponding to the input X(t).
Ans: H(ω) = F[h(t)] = ∫_{-∞}^{∞} h(t) e^(-iωt) dt
= ∫_0^∞ e^(-βt) e^(-iωt) dt
= ∫_0^∞ e^(-(β + iω)t) dt
= [e^(-(β + iω)t) / (-(β + iω))]_0^∞
= 1/(β + iω)
|H(ω)|² = |1/(β + iω)|² = 1/(β² + ω²)
The power spectral density of X(t) is S_XX(ω) = F[R_XX(τ)].
∴ S_YY(ω) = |H(ω)|² S_XX(ω)
          = S_XX(ω)/(β² + ω²)
(OR)
(b) (i) Assume a random process X(t) is given as input to a system with transfer function
H(ω) = 1 for -ω_0 < ω < ω_0. If the autocorrelation function of the input is (N_0/2) δ(τ),
find the autocorrelation function of the output process.
Ans: Given X(t) is the input process to a system with transfer function H(ω) = 1,
-ω_0 < ω < ω_0, and R_XX(τ) = (N_0/2) δ(τ).
S_XX(ω) = ∫_{-∞}^{∞} (N_0/2) δ(τ) e^(-iωτ) dτ
        = (N_0/2) ∫_{-∞}^{∞} δ(τ) e^(-iωτ) dτ = (N_0/2) × 1,  since F[δ(τ)] = 1
S_XX(ω) = N_0/2
If the output process is Y(t), then S_YY(ω) = |H(ω)|² S_XX(ω)
∴ S_YY(ω) = 1² × N_0/2 = N_0/2, -ω_0 < ω < ω_0
R_YY(τ) = (1/2π) ∫_{-∞}^{∞} S_YY(ω) e^(iωτ) dω
        = (1/2π) ∫_{-ω_0}^{ω_0} (N_0/2) e^(iωτ) dω
        = (N_0/4π) [e^(iωτ)/(iτ)]_{-ω_0}^{ω_0} = (N_0/4π) [(e^(iω_0 τ) - e^(-iω_0 τ))/(iτ)]
        = (N_0/(2πτ)) [(e^(iω_0 τ) - e^(-iω_0 τ))/(2i)]
∴ R_YY(τ) = (N_0/(2πτ)) sin(ω_0 τ)
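Numerically, this output autocorrelation is just the inverse transform of a rectangular spectrum; a sketch in Python (assuming SciPy, with N_0 = 2 and ω_0 = 5 chosen arbitrarily):

import numpy as np
from scipy.integrate import quad

N0, w0 = 2.0, 5.0                      # arbitrary illustrative constants

def R_numeric(tau):
    # (1/2pi) * integral_{-w0}^{w0} (N0/2) cos(w*tau) dw  (the sine part cancels)
    val, _ = quad(lambda w: (N0 / 2) * np.cos(w * tau), -w0, w0)
    return val / (2 * np.pi)

def R_closed(tau):
    return N0 * np.sin(w0 * tau) / (2 * np.pi * tau)

for tau in (0.1, 0.5, 1.3):
    print(tau, R_numeric(tau), R_closed(tau))   # the two columns agree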

(ii) A circuit has unit impulse response given by h(t) = 1/T if 0 ≤ t ≤ T; 0 otherwise.
Evaluate S_YY(ω) in terms of S_XX(ω).
Ans: H(ω) = F[h(t)] = ∫_{-∞}^{∞} h(t) e^(-iωt) dt
= ∫_0^T (1/T) e^(-iωt) dt = (1/T) ∫_0^T e^(-iωt) dt
= (1/T) [e^(-iωt)/(-iω)]_0^T = (1/T) [e^(-iωT)/(-iω) - 1/(-iω)]
= -(1/(iωT)) (e^(-iωT) - 1)
= (1/(iωT)) (1 - e^(-iωT))
= (1/(iωT)) [1 - (cos ωT - i sin ωT)]
= (1/(iωT)) [1 - cos ωT + i sin ωT]
= (1/(iωT)) [2 sin²(ωT/2) + 2i sin(ωT/2) cos(ωT/2)]
= (-2i/(ωT)) sin(ωT/2) [sin(ωT/2) + i cos(ωT/2)]
= (2 sin(ωT/2)/(ωT)) [cos(ωT/2) - i sin(ωT/2)]
|H(ω)|² = H(ω) H*(ω)
= (2 sin(ωT/2)/(ωT)) [cos(ωT/2) - i sin(ωT/2)] × (2 sin(ωT/2)/(ωT)) [cos(ωT/2) + i sin(ωT/2)]
= [2 sin(ωT/2)/(ωT)]² × 1 = [2 sin(ωT/2)/(ωT)]²
S_YY(ω) = |H(ω)|² S_XX(ω)
∴ S_YY(ω) = [2 sin(ωT/2)/(ωT)]² S_XX(ω)
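The magnitude-squared transfer function can be verified by numerically integrating the impulse response; a sketch in Python (assuming NumPy/SciPy, with T = 2 chosen arbitrarily):

import numpy as np
from scipy.integrate import quad

T = 2.0                                    # arbitrary illustrative value

def H_sq_numeric(w):
    # |H(w)|^2 with H(w) = integral_0^T (1/T) e^{-i w t} dt
    re = quad(lambda t: np.cos(w * t) / T, 0, T)[0]
    im = quad(lambda t: -np.sin(w * t) / T, 0, T)[0]
    return re**2 + im**2

def H_sq_closed(w):
    return (2 * np.sin(w * T / 2) / (w * T)) ** 2

for w in (0.5, 1.0, 4.0):
    print(w, H_sq_numeric(w), H_sq_closed(w))   # the two columns agree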
