11 Gamma distribution
UNIT II
TWO-DIMENSIONAL RANDOM VARIABLES
11 Introduction
12 Joint distribution
13 Marginal and Conditional Distribution
14 Covariance
15 Correlation Coefficient
16 Problems
17 Linear Regression
18 Transformation of random variables
19 Problems
UNIT III
RANDOM PROCESSES
20 Introduction
21 Classification
22 Stationary processes
23 Markov processes
24 Poisson processes
25 Random Telegraph processes
UNIT IV
CORRELATION AND SPECTRAL DENSITIES
26 Introduction
27 Auto Correlation functions
28 Properties
29 Cross Correlation functions
30 Properties
31 Power spectral density
32 Properties
33 Cross spectral density
34 Properties
UNIT V
LINEAR SYSTEMS WITH RANDOM INPUTS
35 Introduction
36 Linear time invariant systems
37 Problems
38 Linear systems with random inputs
39 Auto Correlation and Cross Correlation functions of inputs and outputs
40 System transfer function
41 Problems
OBJECTIVES: To provide necessary basic concepts in probability and random processes for
applications such as random signals, linear systems etc in communication engineering.
UNIT I RANDOM VARIABLES 9+3 Discrete and continuous random variables – Moments –
Moment generating functions – Binomial, Poisson, Geometric, Uniform, Exponential, Gamma
and Normal distributions.
UNIT II TWO - DIMENSIONAL RANDOM VARIABLES 9+3 Joint distributions – Marginal and
conditional distributions – Covariance – Correlation and Linear regression – Transformation of
random variables.
UNIT III RANDOM PROCESSES 9+3 Classification – Stationary process – Markov process -
Poisson process – Random telegraph process.
UNIT IV CORRELATION AND SPECTRAL DENSITIES 9+3 Auto correlation functions – Cross correlation functions – Properties – Power spectral density – Cross spectral density – Properties.
UNIT V LINEAR SYSTEMS WITH RANDOM INPUTS 9+3
Linear time invariant system – System transfer function – Linear systems with random inputs – Auto correlation and Cross correlation functions of input and output.
TOTAL (L:45+T:15): 60 PERIODS
OUTCOMES:
The students will gain an exposure to various distribution functions, acquire skills in handling situations involving more than one random variable, and be able to analyze the response of linear time invariant systems to random inputs.
TEXT BOOKS:
1. Ibe.O.C., “Fundamentals of Applied Probability and Random Processes", Elsevier, 1st Indian
Reprint, 2007.
2. Peebles. P.Z., "Probability, Random Variables and Random Signal Principles", Tata McGraw
Hill, 4th Edition, New Delhi, 2002.
REFERENCES:
1. Yates. R.D. and Goodman.D.J., "Probability and Stochastic Processes", 2nd Edition, Wiley
India Pvt. Ltd., Bangalore, 2012.
2. Stark. H., and Woods. J.W., "Probability and Random Processes with Applications to Signal
Processing", 3rd Edition,Pearson Education, Asia, 2002.
3. Miller. S.L. and Childers.D.G., "Probability and Random Processes with Applications to Signal
Processing and Communications", Academic Press, 2004.
4. Hwei Hsu, "Schaum's Outline of Theory and Problems of Probability, Random Variables and
Random Processes", Tata McGraw Hill Edition, New Delhi, 2004.
5. Cooper. G.R., McGillem. C.D., "Probabilistic Methods of Signal and System Analysis", 3rd
Indian Edition, Oxford University Press, New Delhi, 2012.
UNIT - I
RANDOM VARIABLES
Introduction
Thus a random variable X can be considered as a function that maps all elements of the sample space S into points on the real line. The notation X(s) = x means that x is the value associated with the outcome s by the random variable X.
1.1 SAMPLE SPACE
Consider the experiment of tossing a coin twice. The outcomes
S = {HH, HT, TH, TT} constitute the sample space.
E.g.: X = number of heads is a random variable on this sample space.
We denote a random variable by a capital letter (X, Y, etc.) and any particular value of the random variable by x or y.
Distribution function of the random variable X or cumulative distribution of the random
variable X
Def :
The distribution function of a random variable X defined in (-∞, ∞) is given by
F(x) = P(X ≤ x) = P{s : X(s) ≤ x}
Note
Let the random variable X take the values x1, x2, ….., xn with probabilities P1, P2, ….., Pn, and let x1 < x2 < ….. < xn. Then we have
F(x) = P(X < x1) = 0, for −∞ < x < x1
The function p(x) satisfying the above two conditions is called the probability mass
function (or) probability distribution of the R.V.X. The probability distribution {x i , p i } can be
displayed in the form of table as shown below.
X = xi :        x1   x2   …….   xi   …….
P(X = xi) = pi : p1   p2   …….   pi   …….
Notation
Let S be a sample space. The set of all outcomes s in S such that X(s) = x is denoted by writing X = x.
P(X = x) = P{s : X(s) = x}
Similarly, P(X ≤ a) = P{s : X(s) ∈ (−∞, a]}
and P(a < X ≤ b) = P{s : X(s) ∈ (a, b]}
P(X = a or X = b) = P{(X = a) ∪ (X = b)}
P(X = a and X = b) = P{(X = a) ∩ (X = b)}
and so on.
Theorem 1: If X1 and X2 are random variables and K is a constant, then KX1, X1 + X2, X1X2, K1X1 + K2X2 and X1 − X2 are also random variables.
Theorem 2: If X is a random variable and f(·) is a continuous function, then f(X) is a random variable.
Note
Table 1
Values of X : 0   1   2   3   4   5    6    7    8
p(x) :        a   3a  5a  7a  9a  11a  13a  15a  17a

(i) Since Σ p(xi) = 1, we get 81a = 1, i.e. a = 1/81. Hence

X = x : 0     1     2     3     4     5      6      7      8
P(x) :  1/81  3/81  5/81  7/81  9/81  11/81  13/81  15/81  17/81

(ii) P(X ≥ 3) = 1 − P(X < 3)
P(X < 3) = p(0) + p(1) + p(2) = 1/81 + 3/81 + 5/81 = 9/81
∴ P(X ≥ 3) = 1 − 9/81 = 72/81

(iii) P(0 < X < 5) = p(1) + p(2) + p(3) + p(4)   (here 0 and 5 are not included)
= 3/81 + 5/81 + 7/81 + 9/81 = 24/81

(iv) To find the distribution function of X, using the table above we get

X = x    F(x) = P(X ≤ x)
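The probabilities in this example can be checked exactly with a short script; this is a sketch using only the standard library, with the pmf p(x) = (2x + 1)a taken from the table above:

```python
from fractions import Fraction

# pmf from the table: p(x) = (2x + 1)a for x = 0..8, with a = 1/81
a = Fraction(1, 81)
p = {x: (2 * x + 1) * a for x in range(9)}

assert sum(p.values()) == 1                      # total probability is 1

p_ge_3 = 1 - (p[0] + p[1] + p[2])                # P(X >= 3) = 1 - P(X < 3)
p_mid = p[1] + p[2] + p[3] + p[4]                # P(0 < X < 5)

print(p_ge_3)   # 8/9  (= 72/81)
print(p_mid)    # 8/27 (= 24/81)
```

Using `Fraction` keeps every probability exact, so the check does not depend on floating-point rounding.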
A random variable X is said to be continuous if it takes values in some interval of the real line.
Example: Age, height and weight are continuous R.V.'s.
1.3.1 PROBABILITY DENSITY FUNCTION
If X is a continuous random variable, a function f(x) with f(x) ≥ 0 and ∫_{−∞}^{∞} f(x) dx = 1 such that P(a < X < b) = ∫_a^b f(x) dx is termed the probability density function (or simply the density function) of the R.V. X.
It is also called the frequency function, distribution density or probability density function.
The curve y = f(x) is called the probability curve or the distribution curve.
Remark
If f(x) is the p.d.f. of the R.V. X, then the probability that a value of X falls in the interval (a, b) equals the definite integral of f(x) from a to b:
P(a < X < b) = ∫_a^b f(x) dx   (or)   P(a ≤ X ≤ b) = ∫_a^b f(x) dx
Remark
1. In the case of discrete R.V. the probability at a point say at x = c is not zero. But in the case
of a continuous R.V.X the probability at a point is always zero.
P(X = c) = ∫_c^c f(x) dx = 0
For a continuous R.V. X with p.d.f. f(x) defined in (a, b), the following are defined analogously:
(ii) Harmonic mean H: 1/H = ∫_a^b (1/x) f(x) dx
(iii) Geometric mean G: log G = ∫_a^b log x · f(x) dx
(vi) rth moment about the mean: μr = ∫_a^b (x − mean)^r f(x) dx
(vii) Variance: μ2 = ∫_a^b (x − mean)² f(x) dx
(viii) Mean deviation about the mean: ∫_a^b |x − mean| f(x) dx
* rth moment about the origin
It is denoted by
μ'r = ∫_{−∞}^{∞} x^r f(x) dx
Thus
μ'1 = E(X)    (first moment about the origin)
μ'2 = E(X²)   (second moment about the origin)
∴ Mean = X̄ = μ'1 = E(X)
and
Variance = μ'2 − (μ'1)²
         = E(X²) − [E(X)]²   (a)
* rth moment (about the mean)
Now
E{X − E(X)}^r = ∫_{−∞}^{∞} {x − E(X)}^r f(x) dx
             = ∫_{−∞}^{∞} {x − X̄}^r f(x) dx
Thus
μr = ∫_{−∞}^{∞} {x − X̄}^r f(x) dx,  where μr = E[{X − E(X)}^r]   (b)
This gives the rth moment about the mean, and it is denoted by μr.
Put r = 1 in (b); we get
μ1 = ∫_{−∞}^{∞} (x − X̄) f(x) dx
   = ∫_{−∞}^{∞} x f(x) dx − X̄ ∫_{−∞}^{∞} f(x) dx
   = X̄ − X̄ · 1        [since ∫_{−∞}^{∞} f(x) dx = 1]
∴ μ1 = 0
Put r = 2 in (b); we get
μ2 = ∫_{−∞}^{∞} (x − X̄)² f(x) dx
∴ Variance = μ2 = E[X − E(X)]²,
which gives the variance in terms of expectations.
Note
Let g(x) = K (Constant), then
E[g(X)] = E(K) = ∫_{−∞}^{∞} K f(x) dx
        = K ∫_{−∞}^{∞} f(x) dx = K · 1 = K      [since ∫_{−∞}^{∞} f(x) dx = 1]
Thus E(K) = K, i.e. E[a constant] = that constant.
1.3.4 EXPECTATIONS (Discrete R.V.’s)
Let ‘X’ be a discrete random variable with P.M.F p(x)
Then
E(X) = Σx x p(x)
For a discrete random variable X,
E(X^r) = Σx x^r p(x)   (by definition)
If we denote E(X^r) = μ'r, then
μ'r = E[X^r] = Σx x^r p(x)
Put r = 1: μ'1 = E(X) = Σx x p(x) = Mean.
Put r = 2: μ'2 = E[X²] = Σx x² p(x).
∴ μ2 = μ'2 − (μ'1)² = E(X²) − {E(X)}²
Theorem 3
If ‘X’ is a random variable with pdf f(x) and ‘a’ is a constant, then
(i) E[a G(X)] = a E[G(X)]
(ii) E[G(X) + a] = E[G(X)] + a
where G(X) is a function of X which is also a random variable.
Theorem 4
If ‘X’ is a random variable with p.d.f. f(x) and ‘a’ and ‘b’ are constants, then
E[ax + b] = a E(X) + b
Cor 1:
If we take a = 1 and b = −E(X) = −X̄, then we get
E(X − X̄) = E(X) − E(X) = 0
Note
E(1/X) ≠ 1/E(X)
E[log X] ≠ log E(X)
E(X²) ≠ [E(X)]²
1.3.7 EXPECTATION OF A LINEAR COMBINATION OF RANDOM VARIABLES
Let X1, X2, ……, Xn be any n random variables and let a1, a2, ……, an be constants; then
E[a1X1 + a2X2 + …… + anXn] = a1E(X1) + a2E(X2) + …… + anE(Xn)
Result
If X and Y are independent, then Cov (X, Y) = 0.
Note
(i) Cov(aX, bY) = ab Cov(X, Y)
(ii) Cov(X+a, Y+b) = Cov(X, Y)
(iii) Cov(aX+b, cY+d) = ac Cov(X, Y)
(iv) Var (X 1 + X 2 ) = Var(X 1 ) + Var(X 2 ) + 2 Cov(X 1 , X 2 )
If X 1 , X 2 are independent
Var (X 1 + X 2 ) = Var(X 1 ) + Var(X 2 )
EXPECTATION TABLE
Discrete R.V.'s:
1. E(X) = Σx x p(x)
2. E(X^r) = μ'r = Σx x^r p(x)
3. Mean = μ'1 = Σx x p(x)
4. μ'2 = Σx x² p(x)
5. Variance = μ'2 − (μ'1)² = E(X²) − {E(X)}²
Continuous R.V.'s:
1. E(X) = ∫_{−∞}^{∞} x f(x) dx
2. E(X^r) = μ'r = ∫_{−∞}^{∞} x^r f(x) dx
3. Mean = μ'1 = ∫_{−∞}^{∞} x f(x) dx
4. μ'2 = ∫_{−∞}^{∞} x² f(x) dx
5. Variance = μ'2 − (μ'1)² = E(X²) − {E(X)}²
When a die is thrown, X denotes the number that turns up. Find E(X), E(X²) and Var(X).
Solution
Let X be the R.V. denoting the number that turns up in a die. X takes the values 1, 2, 3, 4, 5, 6, each with probability 1/6.
Now
E(X) = Σ xi p(xi)
     = 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6)
     = 21/6 = 7/2    (1)
E(X²) = Σ xi² p(xi)
      = 1(1/6) + 4(1/6) + 9(1/6) + 16(1/6) + 25(1/6) + 36(1/6)
      = (1 + 4 + 9 + 16 + 25 + 36)/6 = 91/6    (2)
Variance(X) = Var(X) = E(X²) − [E(X)]²
            = 91/6 − (7/2)² = 91/6 − 49/4 = 35/12
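The arithmetic above can be checked directly with exact fractions (stdlib only):

```python
from fractions import Fraction

# Fair die: X takes the values 1..6, each with probability 1/6
p = Fraction(1, 6)
ex = sum(x * p for x in range(1, 7))          # E(X)
ex2 = sum(x * x * p for x in range(1, 7))     # E(X^2)
var = ex2 - ex ** 2                           # Var(X) = E(X^2) - [E(X)]^2

assert ex == Fraction(7, 2)
assert ex2 == Fraction(91, 6)
assert var == Fraction(35, 12)
```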
Example: 2
Find (i) the value of C and (ii) the mean of the following distribution:
f(x) = C(x − x²), 0 < x < 1; 0 otherwise.
Solution
Given f(x) = C(x − x²), 0 < x < 1; 0 otherwise.   (1)
Since f is a density, ∫_{−∞}^{∞} f(x) dx = 1:
∫_0^1 C(x − x²) dx = 1    [using (1); f vanishes outside 0 < x < 1]
C [x²/2 − x³/3]_0^1 = 1
C (1/2 − 1/3) = 1
C (3 − 2)/6 = 1
C/6 = 1 ⇒ C = 6    (2)
Substituting (2) in (1): f(x) = 6(x − x²), 0 < x < 1.   (3)
Mean = E(X) = ∫_{−∞}^{∞} x f(x) dx
= ∫_0^1 x · 6(x − x²) dx    [from (3)]
= ∫_0^1 (6x² − 6x³) dx
= [6x³/3 − 6x⁴/4]_0^1 = 2 − 3/2 = 1/2
∴ Mean = 1/2,  C = 6
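Both results can be verified numerically with a midpoint Riemann sum; this sketch uses only the standard library:

```python
# Numerical check that C = 6 normalises f(x) = C(x - x^2) on (0, 1)
# and that the mean is 1/2 (midpoint Riemann sum over a fine grid).
N = 100_000
h = 1.0 / N
xs = [(k + 0.5) * h for k in range(N)]   # midpoints of the subintervals

C = 6.0
total = sum(C * (x - x * x) for x in xs) * h       # ~ integral of f(x)
mean = sum(x * C * (x - x * x) for x in xs) * h    # ~ integral of x f(x)

assert abs(total - 1.0) < 1e-6
assert abs(mean - 0.5) < 1e-6
```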
* PROPERTIES OF THE CDF OF A R.V. X
(i) 0 ≤ F(x) ≤ 1, −∞ < x < ∞
(ii) Lt_{x→−∞} F(x) = 0 ;  Lt_{x→∞} F(x) = 1
(iii) P(a ≤ X ≤ b) = ∫_a^b f(x) dx = F(b) − F(a)
(iv) F′(x) = dF(x)/dx = f(x) ≥ 0
Example
Given that the p.d.f. of a continuous random variable X is
f(x) = 6x(1 − x), 0 < x < 1; 0 otherwise,
find the c.d.f. of X.
Solution
The c.d.f. is F(x) = ∫_{−∞}^{x} f(t) dt, −∞ < x < ∞.
(i) When x < 0:
F(x) = ∫_{−∞}^{x} 0 dt = 0
(ii) When 0 < x < 1:
F(x) = ∫_{−∞}^{0} f(t) dt + ∫_0^x f(t) dt
     = 0 + ∫_0^x 6t(1 − t) dt = 6[t²/2 − t³/3]_0^x
     = 3x² − 2x³
(iii) When x > 1:
F(x) = ∫_{−∞}^{0} 0 dt + ∫_0^1 6t(1 − t) dt + ∫_1^x 0 dt = 6∫_0^1 (t − t²) dt = 1
Using (i), (ii) and (iii):
F(x) = 0, x < 0
     = 3x² − 2x³, 0 < x < 1
     = 1, x > 1
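The closed-form c.d.f. can be spot-checked against direct numerical integration of the density (stdlib only):

```python
# Check F(x) = 3x^2 - 2x^3 against a midpoint Riemann sum of
# f(t) = 6t(1 - t) from 0 to x, for a few x in (0, 1).
def F_closed(x):
    return 3 * x**2 - 2 * x**3

def F_numeric(x, n=20000):
    h = x / n
    mids = [(k + 0.5) * h for k in range(n)]
    return sum(6 * t * (1 - t) for t in mids) * h

for x in (0.25, 0.5, 0.9):
    assert abs(F_closed(x) - F_numeric(x)) < 1e-6
```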
Example: 1.4.2
(i) Is the function defined as follows a density function?
f(x) = e^{−x}, x ≥ 0; 0, x < 0.
(ii) If so, determine the probability that a variate having this density falls in the interval (1, 2).
Solution
Given f(x) = e^{−x}, x ≥ 0; 0, x < 0.
(a) In (0, ∞), e^{−x} > 0, so f(x) ≥ 0 everywhere.
(b) ∫_{−∞}^{∞} f(x) dx = ∫_{−∞}^{0} 0 dx + ∫_0^{∞} e^{−x} dx
   = [−e^{−x}]_0^{∞} = 0 + 1 = 1
Hence f(x) is a p.d.f.
(ii) We know that P(a ≤ X ≤ b) = ∫_a^b f(x) dx, so
P(1 ≤ X ≤ 2) = ∫_1^2 e^{−x} dx = [−e^{−x}]_1^2
             = −e^{−2} + e^{−1} = −0.135 + 0.368 = 0.233
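The closed form e^{−1} − e^{−2} is easy to evaluate directly (the text's 0.233 comes from rounding the two exponentials first):

```python
import math

# P(1 <= X <= 2) for f(x) = e^{-x}, x >= 0: closed form e^{-1} - e^{-2}
p = math.exp(-1) - math.exp(-2)
print(round(p, 4))   # 0.2325
```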
Example: 1.4.3
A probability curve y = f(x) has a range from 0 to ∞. If f(x) = e^{−x}, find the mean, the variance and the third moment about the mean.
Solution
Mean = ∫_0^{∞} x f(x) dx = ∫_0^{∞} x e^{−x} dx = [x(−e^{−x}) − e^{−x}]_0^{∞} = 1
∴ Mean = 1
Variance μ2 = ∫_0^{∞} (x − Mean)² f(x) dx = ∫_0^{∞} (x − 1)² e^{−x} dx = 1
∴ μ2 = 1
Third moment about the mean:
μ3 = ∫_0^{∞} (x − Mean)³ f(x) dx = ∫_0^{∞} (x − 1)³ e^{−x} dx
   = [(x − 1)³(−e^{−x}) − 3(x − 1)²(e^{−x}) + 6(x − 1)(−e^{−x}) − 6(e^{−x})]_0^{∞}
   = −1 + 3 − 6 + 6 = 2
∴ μ3 = 2
1.5 MOMENT GENERATING FUNCTION
Def: The moment generating function (MGF) of a random variable X (about the origin), whose probability function is f(x), is given by
M_X(t) = E[e^{tX}]
Expanding the exponential,
M_X(t) = E[1] + t E(X) + (t²/2!) E(X²) + ….. + (t^r/r!) E(X^r) + …..
       = 1 + t μ'1 + (t²/2!) μ'2 + (t³/3!) μ'3 + ….. + (t^r/r!) μ'r + …..
Note
1. The above result gives the MGF in terms of moments.
2. Since M_X(t) generates moments, it is known as the moment generating function.
Example: 1.5.2
Find μ'1 and μ'2 from M_X(t).
Proof
We know that M_X(t) = Σ_{r=0}^{∞} (t^r/r!) μ'r, i.e.
M_X(t) = μ'0 + (t/1!) μ'1 + (t²/2!) μ'2 + ….. + (t^r/r!) μ'r + …..   (A)
Differentiating (A) with respect to t, we get
M′_X(t) = μ'1 + (2t/2!) μ'2 + (3t²/3!) μ'3 + …..   (B)
Put t = 0 in (B): M′_X(0) = μ'1 = Mean
∴ Mean = M′_X(0), i.e. [d/dt M_X(t)] at t = 0.
* MGF about a point x = a: M_X(t) = E[e^{t(X−a)}]
Formula: e^x = 1 + x/1! + x²/2! + …
E[e^{t(X−a)}] = E(1) + E[t(X−a)] + E[(t²/2!)(X−a)²] + …. + E[(t^r/r!)(X−a)^r] + ….
= 1 + t E(X−a) + (t²/2!) E(X−a)² + …. + (t^r/r!) E(X−a)^r + ….
= 1 + t μ'1 + (t²/2!) μ'2 + …. + (t^r/r!) μ'r + ….,  where μ'r = E[(X−a)^r]
∴ [M_X(t)] about x = a  =  1 + t μ'1 + (t²/2!) μ'2 + ….. + (t^r/r!) μ'r + …..
Result:
Example: 1.5.5
Prove that if U = (X − a)/h, then M_U(t) = e^{−at/h} · M_X(t/h), where a and h are constants.
Proof
By definition, M_U(t) = E[e^{tU}]
= E[e^{t(X−a)/h}]
= E[e^{tX/h} · e^{−ta/h}]
= e^{−ta/h} E[e^{tX/h}]
= e^{−ta/h} · M_X(t/h)     [by the definition of the MGF]
∴ M_U(t) = e^{−at/h} · M_X(t/h), where U = (X − a)/h and M_X(t) is the MGF about the origin.
Example: 1.5.6
Find the MGF of the R.V. X whose probability function is
f(x) = 2/3 at x = 1; 1/3 at x = 2; 0 otherwise.
Solution
Given f(1) = 2/3, f(2) = 1/3, and f(x) = 0 for all other x.
The MGF of the R.V. X is given by
M_X(t) = E[e^{tX}] = Σ_{x=0}^{∞} e^{tx} f(x)
= e^0 f(0) + e^t f(1) + e^{2t} f(2) + …….
= 0 + (2/3)e^t + (1/3)e^{2t}
∴ MGF is M_X(t) = (e^t/3)(2 + e^t)
1.6 Discrete Distributions
The important discrete distribution of a random variable ‘X’ are
1. Binomial Distribution
2. Poisson Distribution
3. Geometric Distribution
1.6.1 BINOMIAL DISTRIBUTION
Def : A random variable X is said to follow binomial distribution if its probability law is given
by
P(x) = P(X = x successes) = nC_x p^x q^{n−x},
where x = 0, 1, 2, ……., n and p + q = 1.
Note
Assumptions in Binomial distribution
i) There are only two possible outcomes for each trial (success or failure).
ii) The probability of a success is the same for each trial.
iii) There are n trials, where n is a constant.
Example: 1.6.1
Find the Moment Generating Function (MGF) of the binomial distribution about the origin.
Solution
Let X be a random variable following the binomial distribution. Then the MGF about the origin is given by
M_X(t) = E[e^{tX}] = Σ_{x=0}^{n} e^{tx} p(x),   where p(x) = nC_x p^x q^{n−x}
= Σ_{x=0}^{n} e^{tx} nC_x p^x q^{n−x}
= Σ_{x=0}^{n} (e^t)^x p^x nC_x q^{n−x}
= Σ_{x=0}^{n} (pe^t)^x nC_x q^{n−x}
∴ M_X(t) = (q + pe^t)^n    [by the binomial theorem]
Example: 1.6.2
Find the mean and variance of the binomial distribution.
Solution
M_X(t) = (q + pe^t)^n
∴ M′_X(t) = n(q + pe^t)^{n−1} · pe^t
Put t = 0: M′_X(0) = n(q + p)^{n−1} · p
Mean = E(X) = np     [since q + p = 1 and Mean = M′_X(0)]
M″_X(t) = np[(q + pe^t)^{n−1} · e^t + e^t (n − 1)(q + pe^t)^{n−2} · pe^t]
Put t = 0:
M″_X(0) = np[(q + p)^{n−1} + (n − 1)(q + p)^{n−2} · p]
        = np[1 + (n − 1)p]
        = np + n²p² − np²
        = n²p² + np(1 − p)
        = n²p² + npq      [since 1 − p = q]
∴ M″_X(0) = E(X²) = n²p² + npq
Var(X) = E(X²) − [E(X)]² = n²p² + npq − n²p² = npq
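The closed forms np and npq can be checked against a direct computation from the pmf; n = 10 and p = 0.3 below are illustrative values, not from the text:

```python
import math

# Direct check of E(X) = np and Var(X) = npq for a binomial pmf.
n, p = 10, 0.3        # illustrative parameters
q = 1 - p
pmf = [math.comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]

mean = sum(x * pmf[x] for x in range(n + 1))
var = sum(x * x * pmf[x] for x in range(n + 1)) - mean**2

assert abs(mean - n * p) < 1e-9
assert abs(var - n * p * q) < 1e-9
```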
Example: 1.6.3
Find the Moment Generating Function (MGF) of the binomial distribution about the mean (np).
Solution
We know that the MGF of a random variable X about any point a is E[e^{t(X−a)}]; here a = np.
Example: 1.6.4
Additive property of the binomial distribution.
Solution
∴ M_{X+Y}(t) = M_X(t) · M_Y(t)      [X and Y are independent R.V.'s]
             = (q1 + p1e^t)^{n1} · (q2 + p2e^t)^{n2}
Example: 1.6.5
If M_X(t) = (q + pe^t)^{n1} and M_Y(t) = (q + pe^t)^{n2}, then
M_{X+Y}(t) = (q + pe^t)^{n1+n2},
so X + Y is binomial with parameters (n1 + n2, p).
Dividing the given relations, (2)/(1) gives q = 4/3, which is > 1.
Since q > 1 is not possible (0 < q < 1), the given data do not follow a binomial distribution.
Example: 1.6.6
The mean and SD of a binomial distribution are 5 and 2; determine the distribution.
Solution
Given Mean = np = 5    (1)
SD = √(npq) = 2, so npq = 4    (2)
(2)/(1) ⇒ npq/np = 4/5 ⇒ q = 4/5
∴ p = 1 − 4/5 = 1/5    (3)
Substituting (3) in (1): n × 1/5 = 5 ⇒ n = 25
∴ The binomial distribution is
P(X = x) = p(x) = nC_x p^x q^{n−x} = 25C_x (1/5)^x (4/5)^{25−x}, x = 0, 1, 2, ….., 25
The Poisson distribution is a limiting case of the binomial distribution under the conditions:
1. The number of trials n is indefinitely large, i.e. n → ∞.
2. The probability of success p for each trial is infinitely small.
3. np = λ is finite, where λ is a constant.
* To find the MGF
M_X(t) = E(e^{tX}) = Σ_{x=0}^{∞} e^{tx} p(x)
= Σ_{x=0}^{∞} e^{tx} e^{−λ}λ^x / x!
= Σ_{x=0}^{∞} e^{−λ}(λe^t)^x / x!
= e^{−λ} Σ_{x=0}^{∞} (λe^t)^x / x!
= e^{−λ} [1 + λe^t + (λe^t)²/2! + ……]
= e^{−λ} e^{λe^t}
Hence
M_X(t) = e^{λ(e^t − 1)}
Mean: μ'1 = E[X] = Σ_{x=0}^{∞} x · e^{−λ}λ^x/x!
= 0 + e^{−λ} λ Σ_{x=1}^{∞} λ^{x−1}/(x−1)!
= λ e^{−λ} [1 + λ + λ²/2! + …..]
= λ e^{−λ} e^{λ} = λ
∴ Mean = λ
μ'2 = E[X²] = Σ_{x=0}^{∞} x² · e^{−λ}λ^x/x!
= Σ_{x=0}^{∞} {x(x−1) + x} · e^{−λ}λ^x/x!
= Σ_{x=2}^{∞} x(x−1) e^{−λ}λ^x/x! + Σ_{x=0}^{∞} x e^{−λ}λ^x/x!
= e^{−λ}λ² Σ_{x=2}^{∞} λ^{x−2}/(x−2)! + λ
= e^{−λ}λ² [1 + λ/1! + λ²/2! + ….] + λ
= λ² + λ
∴ Variance = μ'2 − (μ'1)² = λ² + λ − λ² = λ
Example: 1.7.1
If X is a Poisson variate such that P(X = 1) = 3/10 and P(X = 2) = 1/5, find P(X = 0) and P(X = 3).
Solution
P(X = 1) = λe^{−λ} = 3/10    (1)
P(X = 2) = e^{−λ}λ²/2! = 1/5, i.e. e^{−λ}λ² = 2/5    (2)
(1)/(2) ⇒ 1/λ = (3/10)/(2/5) = 3/4
∴ λ = 4/3
P(X = 0) = e^{−λ}λ⁰/0! = e^{−4/3}
P(X = 3) = e^{−λ}λ³/3! = e^{−4/3}(4/3)³/3!
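A quick numerical evaluation of these answers (note that λ = 4/3 is fixed by the ratio of the two given probabilities):

```python
import math

# Poisson pmf with λ = 4/3, found from P(X=1)/P(X=2) = (3/10)/(1/5) = 3/2
lam = 4 / 3
def pois(x):
    return math.exp(-lam) * lam**x / math.factorial(x)

assert abs(pois(1) / pois(2) - 3 / 2) < 1e-9   # the ratio that determined λ
print(round(pois(0), 4))   # e^{-4/3} ≈ 0.2636
print(round(pois(3), 4))   # ≈ 0.1041
```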
Example: 1.7.2
If X is a Poisson variable such that P(X = 2) = 9 P(X = 4) + 90 P(X = 6), find the standard deviation.
Solution
P(X = x) = e^{−λ}λ^x/x!, x = 0, 1, 2, …..
Given P(X = 2) = 9 P(X = 4) + 90 P(X = 6):
e^{−λ}λ²/2! = 9 e^{−λ}λ⁴/4! + 90 e^{−λ}λ⁶/6!
1/2 = 9λ²/4! + 90λ⁴/6!
1/2 = 3λ²/8 + λ⁴/8
1 = 3λ²/4 + λ⁴/4
λ⁴ + 3λ² − 4 = 0
λ² = 1 or λ² = −4
λ = ±1 or λ = ±2i
Taking the admissible value λ = 1:
∴ Mean = λ = 1, Variance = λ = 1
∴ Standard deviation = 1
1.7.3 Derivation of the probability mass function of the Poisson distribution as a limiting case of the binomial distribution
Solution
We know that the binomial distribution is
P(X = x) = nC_x p^x q^{n−x} = [n!/((n−x)! x!)] p^x (1−p)^{n−x}
= [n(n−1)(n−2)……(n−x+1)/x!] p^x (1−p)^{n−x}
Putting p = λ/n,
P(X = x) = [n(n−1)(n−2)……(n−x+1)/x!] (λ/n)^x (1 − λ/n)^{n−x}
= (λ^x/x!) · 1(1 − 1/n)(1 − 2/n)……(1 − (x−1)/n) · (1 − λ/n)^n (1 − λ/n)^{−x}
As n → ∞,
lt_{n→∞} (1 − λ/n)^n = e^{−λ},   lt_{n→∞} (1 − λ/n)^{−x} = 1
and lt_{n→∞} (1 − 1/n) = lt_{n→∞} (1 − 2/n) = …… = lt_{n→∞} (1 − (x−1)/n) = 1
∴ P(X = x) = λ^x e^{−λ}/x!,  x = 0, 1, 2, …… ∞
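The limiting behaviour can be seen numerically: the binomial(n, λ/n) pmf converges to the Poisson(λ) pmf as n grows. The values λ = 2 and x = 3 below are illustrative:

```python
import math

# Binomial(n, p = λ/n) pmf approaching the Poisson(λ) pmf as n grows.
lam, x = 2.0, 3
poisson = math.exp(-lam) * lam**x / math.factorial(x)

diffs = []
for n in (10, 100, 10000):
    p = lam / n
    binom = math.comb(n, x) * p**x * (1 - p)**(n - x)
    diffs.append(abs(binom - poisson))

assert diffs[0] > diffs[1] > diffs[2]   # the error shrinks as n grows
assert diffs[2] < 1e-3
```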
1.8 GEOMETRIC DISTRIBUTION
For the geometric distribution, P(X = x) = q^{x−1}p, x = 1, 2, 3, …
* To find the MGF
M_X(t) = E[e^{tX}] = Σ e^{tx} p(x)
= Σ_{x=1}^{∞} e^{tx} q^{x−1} p
= (p/q) Σ_{x=1}^{∞} (qe^t)^x
= (p/q) qe^t [1 + qe^t + (qe^t)² + ….]
= (p/q) qe^t (1 − qe^t)^{−1}
∴ M_X(t) = pe^t / (1 − qe^t)
Mean and variance:
M′_X(t) = [(1 − qe^t)pe^t − pe^t(−qe^t)] / (1 − qe^t)² = pe^t / (1 − qe^t)²
∴ E(X) = M′_X(0) = p/(1 − q)² = 1/p
∴ Mean = 1/p
M″_X(t) = d/dt [pe^t/(1 − qe^t)²]
= [(1 − qe^t)² pe^t + 2pe^t qe^t (1 − qe^t)] / (1 − qe^t)⁴
M″_X(0) = (1 + q)/p²
Var(X) = E(X²) − [E(X)]² = (1 + q)/p² − 1/p²
∴ Var(X) = q/p²
Note:
Another form of the geometric distribution:
P[X = x] = q^x p ; x = 0, 1, 2, ….
M_X(t) = p/(1 − qe^t)
Example: 1.8.2
If the MGF of X is (5 − 4e^t)^{−1}, find the distribution of X and P(X = 5).
Solution
Let the geometric distribution be P(X = x) = q^x p, x = 0, 1, 2, …., whose MGF is p/(1 − qe^t).
Now (5 − 4e^t)^{−1} = (1/5)/(1 − (4/5)e^t), so p = 1/5 and q = 4/5.
∴ P(X = x) = (1/5)(4/5)^x, x = 0, 1, 2, ….
P(X = 5) = (1/5)(4/5)⁵ = 4⁵/5⁶
1.9 CONTINUOUS DISTRIBUTIONS
If X is a continuous random variable, the important distributions are:
1. Uniform (Rectangular) Distribution
2. Exponential Distribution
3. Gamma Distribution
4. Normal Distribution
1.9.1 Uniform Distribution (Rectangular Distribution)
Def: A random variable X is said to follow the uniform distribution over (a, b) if its p.d.f. is
f(x) = 1/(b − a), a < x < b; 0, otherwise.
.n
* To find MGF
M X (t)
∞
= ∫ e tx f (x)dx
−∞ pz
ee
b 1
= ∫ e tx dx
a b−a
a
1 e tx
ad
=
b − a t b
.p
1
= e bx − eat
(b − a)t
w
M X (t) =
(b − a)t
w
Putting r = 1, Mean μ'1 = (a + b)/2
Putting r = 2 in (A), we get
μ'2 = ∫_a^b x² f(x) dx = ∫_a^b x²/(b − a) dx = (a² + ab + b²)/3
∴ Variance = μ'2 − (μ'1)²
= (a² + ab + b²)/3 − ((b + a)/2)² = (b − a)²/12
Variance = (b − a)²/12
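These moment formulas can be checked exactly for any concrete interval; a = 2 and b = 5 below are illustrative values:

```python
from fractions import Fraction

# Exact moments of the uniform density f(x) = 1/(b - a) on (a, b).
a, b = Fraction(2), Fraction(5)     # illustrative endpoints
mean = (a + b) / 2
m2 = (a**2 + a * b + b**2) / 3      # E(X^2)
var = m2 - mean**2

assert var == (b - a)**2 / 12
print(mean, var)   # 7/2 3/4
```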
PROBLEMS ON UNIFORM DISTRIBUTION
Example: 1.9.1
If X is uniformly distributed over (−α, α), α > 0, find α so that
(i) P(X > 1) = 1/3
(ii) P(|X| < 1) = P(|X| > 1)
Solution
If X is uniformly distributed in (−α, α), then its p.d.f. is
f(x) = 1/(2α), −α < x < α; 0 otherwise.
(i) P(X > 1) = 1/3:
∫_1^α f(x) dx = 1/3
∫_1^α (1/(2α)) dx = 1/3
(1/(2α))(α − 1) = 1/3 ⇒ 3α − 3 = 2α
∴ α = 3
(ii) P(|X| < 1) = P(|X| > 1) = 1 − P(|X| < 1)
⇒ 2 P(|X| < 1) = 1
⇒ 2 P(−1 < X < 1) = 1
⇒ 2 ∫_{−1}^{1} (1/(2α)) dx = 1
⇒ 2/α = 1
∴ α = 2
Note:
1. The distribution function of the uniform variate on (a, b) is
F(x) = 0, −∞ < x < a
     = (x − a)/(b − a), a ≤ x ≤ b
     = 1, b < x < ∞
2. The p.d.f. of a uniform variate X in (−a, a) is
f(x) = 1/(2a), −a < x < a; 0 otherwise.
1.10 THE EXPONENTIAL DISTRIBUTION
Def: A continuous random variable X is said to follow an exponential distribution with parameter λ > 0 if its probability density function is
f(x) = λe^{−λx}, x > 0; 0 otherwise.
* To find the MGF
M_X(t) = ∫_{−∞}^{∞} e^{tx} f(x) dx
= ∫_0^{∞} e^{tx} λe^{−λx} dx = λ ∫_0^{∞} e^{−(λ−t)x} dx
= λ [e^{−(λ−t)x}/(−(λ − t))]_0^{∞}
= (λ/(λ − t))(0 + 1)
∴ MGF of X: M_X(t) = λ/(λ − t), λ > t
Expanding,
M_X(t) = λ/(λ − t) = 1/(1 − t/λ) = (1 − t/λ)^{−1}
= 1 + t/λ + t²/λ² + ….. + t^r/λ^r + …..
= 1 + (t/1!)(1/λ) + (t²/2!)(2!/λ²) + ….. + (t^r/r!)(r!/λ^r) + …..
∴ Mean μ'1 = coefficient of t/1! = 1/λ
μ'2 = coefficient of t²/2! = 2/λ²
Variance μ2 = μ'2 − (μ'1)² = 2/λ² − 1/λ² = 1/λ²
∴ Mean = 1/λ,  Variance = 1/λ²
Example: 1.10.1
Let X be a random variable with p.d.f.
f(x) = (1/3)e^{−x/3}, x > 0; 0 otherwise.
Find (1) P(X > 3) and (2) the MGF of X.
Solution
This is the exponential distribution with λ = 1/3.
P(X > 3) = ∫_3^{∞} f(x) dx = ∫_3^{∞} (1/3)e^{−x/3} dx = [−e^{−x/3}]_3^{∞} = e^{−1}
∴ P(X > 3) = e^{−1}
MGF: M_X(t) = λ/(λ − t) = (1/3)/((1/3) − t) = 1/(1 − 3t)
∴ M_X(t) = 1/(1 − 3t)
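The tail probability e^{−1} can be confirmed by integrating the density numerically; the cutoff at x = 60 is an assumption that truncates a negligible tail (about e^{−20}):

```python
import math

# f(x) = (1/3)e^{-x/3}: check P(X > 3) = e^{-1} by a midpoint Riemann sum
lam = 1 / 3
def f(x):
    return lam * math.exp(-lam * x)

n, lo, hi = 200000, 3.0, 60.0   # tail beyond 60 is ~ e^{-20}, negligible
h = (hi - lo) / n
tail = sum(f(lo + (k + 0.5) * h) for k in range(n)) * h

assert abs(tail - math.exp(-1)) < 1e-6
```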
Note (memoryless property)
If X is exponentially distributed, then
P(X > s + t | X > s) = P(X > t), for any s, t > 0.
et
, α>0, 0 < x < ∞
.n
f(x) =
=0, elsewhere
and = dx pz
ee
=0, elsewhere
ad
+ ….. + λ k
.
w
TUTORIAL QUESTIONS
1. It is known that the probability of an item produced by a certain machine being defective is 0.05. If the produced items are sent to the
daily stock of 35,000 gallons. What is the probability that, of two days selected at random, the stock is insufficient for both days?
3. The density function of a random variable X is given by f(x) = Kx(2 − x), 0 ≤ x ≤ 2. Find K, the mean, the variance and the rth moment.
4. A binomial variable X satisfies the relation 9P(X = 4) = P(X = 2) when n = 6. Find the parameter p of the binomial distribution.
5. Find the M.G.F. of the Poisson distribution.
6. If X and Y are independent Poisson variates such that P(X = 1) = P(X = 2) and P(Y = 2) = P(Y = 3), find V(X − 2Y).
7. A discrete random variable has the following probability distribution:
X:    0  1   2   3   4   5    6    7    8
P(X): a  3a  5a  7a  9a  11a  13a  15a  17a
Find the value of a, P(X < 3) and the c.d.f. of X.
Example: 1
Given that the p.d.f. of a continuous random variable X is
f(x) = 6x(1 − x), 0 < x < 1; 0 otherwise,
find the c.d.f. of X.
Solution
The c.d.f. is F(x) = ∫_{−∞}^{x} f(t) dt, −∞ < x < ∞.
(i) When x < 0: F(x) = ∫_{−∞}^{x} 0 dt = 0
(ii) When 0 < x < 1:
F(x) = ∫_{−∞}^{0} f(t) dt + ∫_0^x f(t) dt = 0 + ∫_0^x 6t(1 − t) dt = 6[t²/2 − t³/3]_0^x = 3x² − 2x³
(iii) When x > 1:
F(x) = ∫_{−∞}^{0} 0 dt + ∫_0^1 6t(1 − t) dt + ∫_1^x 0 dt = 6∫_0^1 (t − t²) dt = 1
Hence
F(x) = 0, x < 0
     = 3x² − 2x³, 0 < x < 1
     = 1, x > 1
Example: 2
A random variable X has the following probability function:
Values of X:      0  1   2   3   4   5    6    7    8
Probability P(X): a  3a  5a  7a  9a  11a  13a  15a  17a
Determine the value of a and find the distribution function of X.
Solution
Since Σ p(xi) = 1: 81a = 1, so a = 1/81. Hence
X = x: 0     1     2     3     4     5      6      7      8
P(x):  1/81  3/81  5/81  7/81  9/81  11/81  13/81  15/81  17/81
The distribution function F(x) = P(X ≤ x):
F(0) = 1/81
F(1) = 4/81
F(2) = 9/81
F(3) = 16/81
F(4) = 25/81
F(5) = P(X ≤ 5) = p(0) + p(1) + ….. + p(5) = 25/81 + 11/81 = 36/81
F(6) = P(X ≤ 6) = p(0) + p(1) + ….. + p(6) = 36/81 + 13/81 = 49/81
F(7) = P(X ≤ 7) = p(0) + p(1) + ….. + p(7) = 49/81 + 15/81 = 64/81
F(8) = P(X ≤ 8) = p(0) + p(1) + ….. + p(8) = 64/81 + 17/81 = 81/81 = 1
Example: 3
The mean and SD of a binomial distribution are 5 and 2; determine the distribution.
Solution
Given Mean = np = 5    (1)
SD = √(npq) = 2, so npq = 4    (2)
(2)/(1) ⇒ q = 4/5
∴ p = 1 − 4/5 = 1/5    (3)
Substituting (3) in (1): n × 1/5 = 5 ⇒ n = 25
∴ The binomial distribution is
P(X = x) = p(x) = nC_x p^x q^{n−x} = 25C_x (1/5)^x (4/5)^{25−x}, x = 0, 1, 2, ….., 25
Example: 4
If X is a Poisson variable such that P(X = 2) = 9 P(X = 4) + 90 P(X = 6), find the standard deviation.
Solution
The given relation leads to
λ⁴ + 3λ² − 4 = 0
λ² = 1 or λ² = −4
λ = ±1 or λ = ±2i
Taking the admissible value λ = 1:
∴ Mean = λ = 1, Variance = λ = 1
∴ Standard deviation = 1
UNIT – II
Introduction
In the previous chapter we studied various aspects of the theory of a single R.V. In this chapter we extend our theory to two R.V.'s, one for each coordinate axis X and Y of the XY plane.
DEFINITION: Let S be the sample space. Let X = X(s) and Y = Y(s) be two functions, each assigning a real number to each outcome s ∈ S. Then (X, Y) is a two dimensional random variable.
2.1 Types of random variables
1. Discrete R.V.’s
2. Continuous R.V.’s
Discrete R.V.’s (Two Dimensional Discrete R.V.’s)
If the possible values of (X, Y) are finite, then (X, Y) is called a two dimensional discrete R.V. and it can be represented by (xi, yj), i = 1, 2, …., m; j = 1, 2, …., n.
In the study of two dimensional discrete R.V.’s we have the following
5 important terms.
• Joint Probability Function (JPF) (or) Joint Probability Mass Function.
• Joint Probability Distribution.
• Marginal Probability Function of X.
ad
The function P(X = xi, Y = yj) = P(xi, yj) is called the joint probability function for the discrete random variables X and Y, and is denoted by pij.
Note
1. P(X = xi , Y = yj ) = P[(X = xi )∩(Y = yj )] = p ij
Example
From the following joint distribution of (X, Y), find the marginal distribution of Y.

Y \ X   0      1      2
0       3/28   9/28   3/28
1       3/14   3/14   0
2       1/28   0      0

Solution
Adding the entries row-wise:
P_Y(0) = 3/28 + 9/28 + 3/28 = 15/28
P_Y(1) = 3/14 + 3/14 + 0 = 6/14 = 3/7
P_Y(2) = 1/28 + 0 + 0 = 1/28
∴ The marginal distribution of Y is
P_Y(y) = 15/28, y = 0
       = 3/7,   y = 1
       = 1/28,  y = 2
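The marginal sums can be checked with exact fractions; the entry P(X = 1, Y = 0) = 9/28 is inferred here from the row total 15/28, since that cell is hard to read in the source:

```python
from fractions import Fraction

# Joint pmf: keys are (x, y); the (1, 0) entry is inferred from P_Y(0) = 15/28
F = Fraction
table = {
    (0, 0): F(3, 28), (1, 0): F(9, 28), (2, 0): F(3, 28),
    (0, 1): F(3, 14), (1, 1): F(3, 14), (2, 1): F(0),
    (0, 2): F(1, 28), (1, 2): F(0),     (2, 2): F(0),
}
assert sum(table.values()) == 1          # a valid joint pmf

# marginal of Y: sum over x for each y
PY = {y: sum(table[(x, y)] for x in range(3)) for y in range(3)}
assert PY == {0: F(15, 28), 1: F(3, 7), 2: F(1, 28)}
```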
Continuous R.V.'s (Two Dimensional Continuous R.V.'s)
If (X, Y) can take all values in a region of the XY plane, then (X, Y) is called a two dimensional continuous random variable.
• Joint probability density function:
(i) f_XY(x, y) ≥ 0 ;  (ii) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_XY(x, y) dy dx = 1
• Joint probability distribution function:
F(x, y) = P[X ≤ x, Y ≤ y] = ∫_{−∞}^{y} ∫_{−∞}^{x} f(x, y) dx dy
• Marginal probability density functions:
f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy   (marginal p.d.f. of X)
f_Y(y) = ∫_{−∞}^{∞} f_XY(x, y) dx   (marginal p.d.f. of Y)
• Conditional probability density functions:
(i) f(y/x) = f(x, y)/f(x), f(x) > 0
(ii) f(x/y) = f(x, y)/f(y), f(y) > 0
Example: 2.3.1
Show that the function
f(x, y) = (2/5)(2x + 3y), 0 < x < 1, 0 < y < 1; 0 otherwise
is a joint density function of X and Y.
Solution
(i) f(x, y) ≥ 0 in the given region 0 < x < 1, 0 < y < 1.
(ii) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = ∫_0^1 ∫_0^1 (2/5)(2x + 3y) dx dy
= (2/5) ∫_0^1 [x² + 3xy] (x from 0 to 1) dy
= (2/5) ∫_0^1 (1 + 3y) dy = (2/5)[y + 3y²/2]_0^1 = (2/5)(1 + 3/2)
= (2/5)(5/2) = 1
Since f(x, y) satisfies both conditions, it is a joint density function.
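The normalisation can also be checked numerically with a midpoint double Riemann sum (the integrand is linear in each variable, so the midpoint rule is essentially exact):

```python
# Midpoint double sum: integral of (2/5)(2x + 3y) over (0,1) x (0,1) ~ 1
n = 400
h = 1.0 / n
total = sum((2 / 5) * (2 * ((i + 0.5) * h) + 3 * ((j + 0.5) * h))
            for i in range(n) for j in range(n)) * h * h

assert abs(total - 1.0) < 1e-9
```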
Example: 2.3.2
The joint p.d.f. of the random variables X and Y is given by
f(x, y) = 8xy, 0 < y < x < 1; 0 otherwise.
Find (i) f_X(x), (ii) f_Y(y), (iii) f(y/x).
Solution
(i) f_X(x) = ∫_0^x 8xy dy = 8x[y²/2]_0^x = 4x³, 0 < x < 1
(ii) f_Y(y) = ∫_y^1 8xy dx = 8y[x²/2] (x from y to 1) = 4y(1 − y²), 0 < y < 1
(iii) f(y/x) = f(x, y)/f_X(x) = 8xy/(4x³) = 2y/x², 0 < y < x, 0 < x < 1
Result
Marginal p.d.f. of X: 4x³, 0 < x < 1
Marginal p.d.f. of Y: 4y(1 − y²), 0 < y < 1
f(y/x): 2y/x², 0 < y < x, 0 < x < 1
2.4 REGRESSION
* Lines of regression
The line of regression of X on Y is given by
x − x̄ = r·(σx/σy)(y − ȳ)
The line of regression of Y on X is given by
y − ȳ = r·(σy/σx)(x − x̄)
* Angle between the two lines of regression:
tan θ = [(1 − r²)/r] · σxσy/(σx² + σy²)
* Regression coefficients
Regression coefficient of Y on X: b_YX = r·(σy/σx)
Regression coefficient of X on Y: b_XY = r·(σx/σy)
∴ Correlation coefficient r = ± √(b_XY × b_YX)
Example: 2.4.1
From the following data, find the two regression equations, the coefficient of correlation, and the most likely marks in Statistics when the marks in Economics are 30.
Marks in Economics: 25 28 35 32 31 36 29 38 34 32
Marks in Statistics: 43 46 49 41 36 32 31 30 33 39
Solution

X    Y    X − X̄   Y − Ȳ   (X − X̄)²  (Y − Ȳ)²  (X − X̄)(Y − Ȳ)
25   43   −7      5       49        25        −35
28   46   −4      8       16        64        −32
35   49   3       11      9         121       33
32   41   0       3       0         9         0
31   36   −1      −2      1         4         2
36   32   4       −6      16        36        −24
29   31   −3      −7      9         49        21
38   30   6       −8      36        64        −48
34   33   2       −5      4         25        −10
32   39   0       1       0         1         0
320  380  0       0       140       398       −93

Here X̄ = ΣX/n = 320/10 = 32 and Ȳ = ΣY/n = 380/10 = 38.
Coefficient of regression of Y on X:
b_YX = Σ(X − X̄)(Y − Ȳ) / Σ(X − X̄)² = −93/140 = −0.6643
Coefficient of regression of X on Y:
b_XY = Σ(X − X̄)(Y − Ȳ) / Σ(Y − Ȳ)² = −93/398 = −0.2337
Equation of the line of regression of X on Y:
X − 32 = −0.2337(Y − 38)
X = −0.2337Y + 0.2337 × 38 + 32 = −0.2337Y + 40.8806
Equation of the line of regression of Y on X:
Y − Ȳ = b_YX (X − X̄)
Y − 38 = −0.6643(X − 32)
Y = −0.6643X + 38 + 0.6643 × 32 = −0.6643X + 59.2576
Coefficient of correlation:
r² = b_YX × b_XY = (−0.6643)(−0.2337) = 0.1552
r = ±0.394; since both regression coefficients are negative, r = −0.394.
The most likely marks in Statistics (Y) when the marks in Economics (X) are 30:
Y = −0.6643 × 30 + 59.2576 = 39.33 ≈ 39
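The regression coefficients can be recomputed from the raw marks; the first Statistics mark is taken as 43 here, consistent with ΣY = 380 and the deviation column in the worked table:

```python
# Regression coefficients from the raw marks (Economics X, Statistics Y)
X = [25, 28, 35, 32, 31, 36, 29, 38, 34, 32]
Y = [43, 46, 49, 41, 36, 32, 31, 30, 33, 39]   # first mark read as 43
n = len(X)
xb, yb = sum(X) / n, sum(Y) / n                 # 32.0, 38.0

sxy = sum((x - xb) * (y - yb) for x, y in zip(X, Y))   # -93
sxx = sum((x - xb) ** 2 for x in X)                    # 140
syy = sum((y - yb) ** 2 for y in Y)                    # 398

b_yx, b_xy = sxy / sxx, sxy / syy
r = -((b_yx * b_xy) ** 0.5)    # sign follows the (negative) covariance

print(round(b_yx, 4), round(b_xy, 4), round(r, 3))   # -0.6643 -0.2337 -0.394
```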
2.5 COVARIANCE
Def : If X and Y are random variables, then Covariance between X and Y is defined as
Cov (X, Y) = E(XY) – E(X) . E(Y)
Cov (X, Y) = 0 [If X & Y are independent]
2.6 CORRELATION
Types of Correlation
• Positive Correlation
(If two variables deviate in same direction)
• Negative Correlation
(If two variables constantly deviate in opposite direction)
2.7 KARL PEARSON'S COEFFICIENT OF CORRELATION
The correlation coefficient between two random variables X and Y, usually denoted by r(X, Y), is a numerical measure of the linear relationship between them and is defined as
r(X, Y) = Cov(X, Y) / (σX · σY),
where Cov(X, Y) = (1/n) ΣXY − X̄Ȳ,
σX = √((1/n)ΣX² − X̄²),   σY = √((1/n)ΣY² − Ȳ²).
* Limits of the correlation coefficient: −1 ≤ r ≤ 1.
Example: 2.7.1
Calculate the correlation coefficient for the following data:
X: 65 66 67 67 68 69 70 72
Y: 67 68 65 68 72 72 69 71
Solution
X     Y     U = X − 68   V = Y − 68   UV    U²    V²
65    67       −3           −1         3     9     1
66    68       −2            0         0     4     0
67    65       −1           −3         3     1     9
67    68       −1            0         0     1     0
68    72        0            4         0     0    16
69    72        1            4         4     1    16
70    69        2            1         2     4     1
72    71        4            3        12    16     9
ΣU = 0   ΣV = 8   ΣUV = 24   ΣU² = 36   ΣV² = 52
Now

Ū = ΣU/n = 0/8 = 0

V̄ = ΣV/n = 8/8 = 1

Cov (X, Y) = Cov (U, V) = ΣUV/n − Ū V̄ = 24/8 − 0 = 3    (1)

σU = √(ΣU²/n − Ū²) = √(36/8 − 0) = 2.121    (2)

σV = √(ΣV²/n − V̄²) = √(52/8 − 1) = 2.345    (3)

∴ r(X, Y) = r(U, V) = Cov(U, V) / (σU σV) = 3 / (2.121 × 2.345) = 0.6031    (by (1), (2), (3))
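The change-of-origin shortcut used above can be verified numerically: shifting X and Y by a constant changes neither the covariance nor the standard deviations, so r(X, Y) = r(U, V). A small illustrative sketch:

```python
# Verifying that r(X, Y) = r(U, V) when U = X - 68 and V = Y - 68:
# a shift of origin leaves Cov and the standard deviations unchanged.
X = [65, 66, 67, 67, 68, 69, 70, 72]
Y = [67, 68, 65, 68, 72, 72, 69, 71]

def corr(A, B):
    n = len(A)
    abar, bbar = sum(A) / n, sum(B) / n
    cov = sum(a * b for a, b in zip(A, B)) / n - abar * bbar
    sa = (sum(a * a for a in A) / n - abar ** 2) ** 0.5
    sb = (sum(b * b for b in B) / n - bbar ** 2) ** 0.5
    return cov / (sa * sb)

U = [x - 68 for x in X]
V = [y - 68 for y in Y]
print(round(corr(X, Y), 4), round(corr(U, V), 4))  # both about 0.603
```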
Example: 2.6.2
Let X be a random variable with p.d.f. f(x) = 1/2, −1 ≤ x ≤ 1, and let Y = X². Find the correlation coefficient between X and Y.
Solution

E(X) = ∫_{−∞}^{∞} x f(x) dx = ∫_{−1}^{1} x (1/2) dx = (1/2) [x²/2]_{−1}^{1} = (1/2)(1/2 − 1/2) = 0

∴ E(X) = 0

E(Y) = E(X²) = ∫_{−∞}^{∞} x² f(x) dx = ∫_{−1}^{1} x² (1/2) dx = (1/2) [x³/3]_{−1}^{1} = (1/2)(1/3 + 1/3) = 1/3

E(XY) = E(X · X²) = E(X³) = ∫_{−∞}^{∞} x³ f(x) dx = (1/2) [x⁴/4]_{−1}^{1} = 0

∴ E(XY) = 0

Cov(X, Y) = E(XY) − E(X) E(Y) = 0

∴ r(X, Y) = ρ(X, Y) = Cov(X, Y) / (σX σY) = 0

ρ = 0.
Note: Since E(X) and E(XY) are both zero, Cov(X, Y) = 0, and we need not compute σX and σY.
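The example shows that zero correlation does not imply independence: Y = X² is completely determined by X, yet ρ = 0. A seeded Monte Carlo sketch illustrates this:

```python
# Monte Carlo illustration: X uniform on (-1, 1) and Y = X**2 are totally
# dependent, yet their covariance (hence correlation) is essentially zero.
import random

random.seed(1)
n = 200_000
xs = [random.uniform(-1, 1) for _ in range(n)]
ys = [x * x for x in xs]
mx, my = sum(xs) / n, sum(ys) / n
cov = sum(x * y for x, y in zip(xs, ys)) / n - mx * my
print(abs(cov) < 0.01)   # the covariance estimate is near the exact value 0
```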
2.8 TRANSFORMATION OF RANDOM VARIABLES
If (X, Y) is transformed to (U, V), the joint p.d.f. of (U, V) is given by

fUV(u, v) = fXY(x, y) |∂(x, y)/∂(u, v)|
Example: 1
If the joint pdf of (X, Y) is given by fXY(x, y) = x + y, 0 ≤ x, y ≤ 1, find the pdf of U = XY.
Solution
Given fXY(x, y) = x + y
Given U = XY. Let V = Y.
Then x = u/v and y = v.
∂x/∂u = 1/v ; ∂x/∂v = −u/v² ; ∂y/∂u = 0 ; ∂y/∂v = 1    (1)

∴ J = ∂(x, y)/∂(u, v) = | ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v | = | 1/v  −u/v² ; 0  1 | = 1/v

⇒ |J| = 1/|v|    (2)
The joint p.d.f. of (U, V) is given by

fUV(u, v) = fXY(x, y) |J| = (x + y) (1/|v|) = (1/v)(u/v + v) = u/v² + 1    (3)
The range of v: since 0 ≤ y ≤ 1 and v = y, we have 0 ≤ v ≤ 1.
The range of u: given 0 ≤ x ≤ 1 ⇒ 0 ≤ u/v ≤ 1 ⇒ 0 ≤ u ≤ v.
Hence the p.d.f. of (U, V) is given by

fUV(u, v) = u/v² + 1 , 0 ≤ u ≤ v, 0 ≤ v ≤ 1
Now the p.d.f. of U is the marginal

fU(u) = ∫_{−∞}^{∞} fUV(u, v) dv = ∫_{u}^{1} (u/v² + 1) dv = [−u/v + v]_{u}^{1} = (−u + 1) − (−1 + u)

∴ fU(u) = 2(1 − u), 0 < u < 1

Summary:
p.d.f. of (U, V): fUV(u, v) = u/v² + 1, 0 ≤ u ≤ v, 0 ≤ v ≤ 1
p.d.f. of U = XY: fU(u) = 2(1 − u), 0 < u < 1
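The derived density fU(u) = 2(1 − u) can be checked by simulation. The sketch below rejection-samples (X, Y) from f(x, y) = x + y on the unit square (f is bounded by 2) and compares the empirical CDF of U = XY with the theoretical CDF F_U(u) = 2u − u²; the sampler itself is an illustrative assumption, not part of the example:

```python
# Rejection sampling from f(x, y) = x + y on the unit square, then checking
# U = XY against the CDF obtained by integrating f_U(u) = 2(1 - u).
import random

random.seed(0)
samples = []
while len(samples) < 50_000:
    x, y = random.random(), random.random()
    if random.random() < (x + y) / 2:     # accept with probability f(x, y) / 2
        samples.append(x * y)

for u in (0.25, 0.5, 0.75):
    empirical = sum(s <= u for s in samples) / len(samples)
    theoretical = 2 * u - u * u           # F_U(u) = 2u - u**2
    print(u, round(empirical, 2), round(theoretical, 2))
```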
TUTORIAL QUESTIONS

Find P(120 ≤ Sn ≤ 160), where Sn = X1 + X2 + … + Xn and n = 75.
5. If the joint probability density function of a two-dimensional random variable (X, Y) is given by f(x, y) = x² + , 0 < x < 1, 0 < y < 2; = 0, elsewhere, find (i) P(X > 1/2), (ii) P(Y < X) and (iii) P(Y < 1/2 / X < 1/2).
6. Two random variables X and Y have joint density
Find Cov (X, Y).
7. If the equations of the two lines of regression of y on x and x on y are respectively
WORKEDOUT EXAMPLES

Example 1
The j.d.f. of the random variables X and Y is given by
f(x, y) = 8xy, 0 < y < x < 1
        = 0, otherwise
Find (i) fX(x) (ii) fY(y) (iii) f(y/x)
Solution
We know that
(i) The marginal pdf of 'X' is

fX(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_{0}^{x} 8xy dy = 4x³

∴ fX(x) = 4x³, 0 < x < 1
(ii) The marginal pdf of 'Y' is

fY(y) = ∫_{y}^{1} 8xy dx = 4y(1 − y²), 0 < y < 1

(iii) The conditional pdf of Y given X is

f(y/x) = f(x, y) / fX(x) = 8xy / 4x³ = 2y/x², 0 < y < x, 0 < x < 1

Result
Marginal pdf of X: fX(x) = 4x³, 0 < x < 1
Marginal pdf of Y: fY(y) = 4y(1 − y²), 0 < y < 1
Conditional pdf: f(y/x) = 2y/x², 0 < y < x, 0 < x < 1
UNIT - III
RANDOM PROCESSES
Introduction
In Unit I, we discussed random variables. A random variable is a function of the possible outcomes of an experiment, but it does not involve the concept of time. In real situations we come across many time-varying functions which are random in nature. In electrical and electronics engineering, we study signals.
Generally, signals are classified into two types:
(i) Deterministic
(ii) Random
Both deterministic and random signals are functions of time. For a deterministic signal it is possible to determine the value at any given time, but this is not possible for a random signal, since some element of uncertainty is always associated with it. The probability model used for characterizing a random signal is called a random process or stochastic process.
3.1 RANDOM PROCESS CONCEPT
A random process is a collection (ensemble) of random variables {X(s, t)} that are functions of a real variable t, where s ∈ S (S is the sample space) and t ∈ T (T is an index set).
NOTATION
Hereafter we denote the random process {X(s, t)} by {X(t)}, where the index set T is assumed to be continuous. A discrete-parameter random process is denoted by {X(n)} or {Xn}.
3.2 CLASSIFICATION OF RANDOM PROCESSES
Depending on whether the state space S and the index set T are discrete or continuous, random processes are classified into four types.
3.2.1 CONTINUOUS RANDOM PROCESS
If 'S' is continuous and t takes any value, then {X(t)} is a continuous random process.
Example
Let X(t) = maximum temperature of a particular place in (0, t). Here 'S' is continuous and t takes all values in (0, ∞), so {X(t)} is a continuous random process.
3.2.2 DISCRETE RANDOM PROCESS
If 'S' is discrete and t is continuous, then {X(t)} is a discrete random process.
Example
Let X(t) be the number of telephone calls received in the interval (0, t).
Here, S = {0, 1, 2, 3, …}
T = {t, t ≥ 0}
∴ {X(t)} is a discrete random process.
Deterministic Process
A process is called deterministic if the future values of any sample function can be predicted from its past values.
1. 1st Order Distribution Function of {X(t)}
For a specific t, X(t) is a random variable, as observed earlier.
F(x, t) = P{X(t) ≤ x} is called the first order distribution of the process {X(t)}.
Definition
A random process is called stationary to order one, or first order stationary, if its first order density function does not change with a shift in the time origin. In other words,
fX(x1, t1) = fX(x1, t1 + C)
must be true for any t1 and any real number C if {X(t)} is to be a first order stationary process.
Example: 3.3.1
Show that a first order stationary process has a constant mean.
Solution
Let us consider the random process {X(t)} at two different times t1 and t2.

E[X(t1)] = ∫_{−∞}^{∞} x f(x, t1) dx    [f(x, t1) is the density of the random process at time t1]

E[X(t2)] = ∫_{−∞}^{∞} x f(x, t2) dx    [f(x, t2) is the density of the random process at time t2]

Let t2 = t1 + C. Then

E[X(t2)] = ∫_{−∞}^{∞} x f(x, t1 + C) dx = ∫_{−∞}^{∞} x f(x, t1) dx    [by first order stationarity]

= E[X(t1)]

Thus E[X(t2)] = E[X(t1)], i.e. the mean of the random process is the same at every instant.
Definition 2:
If the process is first order stationary, then
Mean = E[X(t)] = constant
3.3.4 Second Order Stationary Process
A random process is said to be second order stationary if its second order density function is invariant under a shift of the time origin:
f(x1, x2; t1, t2) = f(x1, x2; t1 + C, t2 + C) for all x1, x2 and C.
Then E(X1²), E(X2²) and E(X1X2) do not change with time, where X1 = X(t1) and X2 = X(t2).
3.3.5 Strongly Stationary Process
A random process is called strongly stationary (strict sense stationary, SSS) if all its finite dimensional distributions are invariant under a shift of the time origin 't':
fX(x1, x2; t1, t2) = fX(x1, x2; t1 + C, t2 + C)
fX(x1, x2, …, xn; t1, t2, …, tn) = fX(x1, x2, …, xn; t1 + C, t2 + C, …, tn + C)
for any t1, …, tn and any real number C.
Let X(t1) and X(t2) be two given members of the random process {X(t)}. The auto correlation is
RXX(t1, t2) = E{X(t1) X(t2)}    (1)
Mean Square Value
Putting t1 = t2 = t in (1), we get
RXX(t, t) = E[X(t) X(t)]
⇒ RXX(t, t) = E[X²(t)], which is the mean square value of the random process.
3.3.8 Auto Covariance of a Random Process
CXX(t1, t2) = E{[X(t1) − E(X(t1))] [X(t2) − E(X(t2))]}
= RXX(t1, t2) − E[X(t1)] E[X(t2)]
Correlation Coefficient
The correlation coefficient of the random process {X(t)} is defined as

ρXX(t1, t2) = CXX(t1, t2) / √(Var X(t1) × Var X(t2))

where CXX(t1, t2) denotes the auto covariance.
Wide Sense Stationary (WSS) Process
A random process {X(t)} is called wide sense stationary if
i) E{X(t)} = constant
ii) E[X(t) X(t + τ)] = RXX(τ) depends only on τ, where τ = t2 − t1.
REMARKS:
An SSS process of order two is a WSS process, but not conversely.
A random process that is not stationary in any sense is called an evolutionary process.
Example: 3.3.2
Consider the random process X(t) = A cos(ωt + θ), where A and ω are constants and 'θ' is a uniformly distributed random variable in the interval (0, 2π). Show that {X(t)} is stationary.
Since 'θ' is uniformly distributed in (0, 2π), we have
f(θ) = 1/2π , 0 < θ < 2π
     = 0 , otherwise

∴ E[X(t)] = ∫_{−∞}^{∞} X(t) f(θ) dθ
= ∫_{0}^{2π} A cos(ωt + θ) (1/2π) dθ
= (A/2π) [sin(ωt + θ)]_{0}^{2π}
= (A/2π) [sin(2π + ωt) − sin(ωt)]
= (A/2π) [sin ωt − sin ωt]
= 0, a constant
Since E[X(t)] = a constant, the process X(t) is a stationary random process.
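The zero mean can also be confirmed numerically by averaging A cos(ωt + θ) over a fine grid of θ values covering one full period; the constants A = 2 and ω = 3 below are arbitrary illustrative choices:

```python
# Numerical check that E[X(t)] = 0 for X(t) = A*cos(w*t + theta), with theta
# uniform on (0, 2*pi); the average over a full period of theta vanishes.
import math

A, w = 2.0, 3.0
N = 100_000                      # grid points for theta over (0, 2*pi)
for t in (0.0, 0.7, 5.0):
    mean = sum(A * math.cos(w * t + 2 * math.pi * k / N) for k in range(N)) / N
    assert abs(mean) < 1e-9      # zero at every t, so the mean is constant
```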
Example: 3.3.3
Examine whether the Poisson process {X(t)}, given by the probability law P{X(t) = n} = e^(−λt) (λt)^n / n!, n = 0, 1, 2, …, is stationary.
Solution
We know that the mean is given by

E[X(t)] = Σ_{n=0}^{∞} n Pn(t)
= Σ_{n=0}^{∞} n e^(−λt) (λt)^n / n!
= Σ_{n=1}^{∞} e^(−λt) (λt)^n / (n − 1)!
= e^(−λt) (λt) Σ_{n=1}^{∞} (λt)^(n−1) / (n − 1)!
= (λt) e^(−λt) [1 + λt/1! + (λt)²/2! + …]
= (λt) e^(−λt) e^(λt)
= λt , which depends on t.
Since the mean depends on t, the Poisson process is not a stationary process.
3.7 ERGODIC RANDOM PROCESS
Time Average
The time average of a random process {X(t)} is defined as

X̄T = (1/2T) ∫_{−T}^{T} X(t) dt

Ensemble Average
The ensemble average of a random process {X(t)} is the expected value of the random variable X at time t:
Ensemble Average = E[X(t)]
Ergodic Random Process
{X(t)} is said to be mean ergodic if

lim_{T→∞} X̄T = µ , i.e. lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) dt = µ

Mean Ergodic Theorem
Let {X(t)} be a random process with constant mean µ and let X̄T be its time average. Then {X(t)} is mean ergodic if

lim_{T→∞} Var(X̄T) = 0
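As an illustration of mean ergodicity, take X(t) = A cos(ωt + θ) from the earlier example. Integrating directly, its time average over (−T, T) is A[sin(ωT + θ) − sin(−ωT + θ)]/(2Tω), which is bounded by A/(Tω) and hence tends to E[X(t)] = 0 as T → ∞. The constants below are illustrative assumptions:

```python
# The time average of X(t) = A*cos(w*t + theta) over (-T, T), in closed form,
# shrinks like 1/T and converges to the ensemble mean 0: mean ergodicity.
import math

A, w, theta = 1.0, 2.0, 0.4

def time_average(T):
    return A * (math.sin(w * T + theta) - math.sin(-w * T + theta)) / (2 * T * w)

for T in (10.0, 100.0, 1000.0):
    print(T, abs(time_average(T)))   # bounded by A/(T*w), tending to 0
```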
Correlation Ergodic Process
The stationary process {X(t)} is said to be correlation ergodic if the process {Y(t)}, where Y(t) = X(t + λ) X(t), is mean ergodic.

3.8 MARKOV PROCESS
Examples:
1. The probability of rain today depends only on the weather conditions of the last two days and not on earlier weather conditions.
2. A difference equation is Markovian.
Markov Process
A process {X(t)} is called a Markov process if, for t1 < t2 < … < tn < t,
P{X(t) ≤ x / X(t1) = x1, X(t2) = x2, …, X(tn) = xn} = P{X(t) ≤ x / X(tn) = xn}
3.9 MARKOV CHAIN
Definition
We define the Markov chain as follows:
If P{Xn = an / Xn−1 = an−1, Xn−2 = an−2, …, X0 = a0} = P{Xn = an / Xn−1 = an−1} for all n, then the process {Xn}, n = 0, 1, 2, …, is called a Markov chain.
1. a1, a2, …, an are called the states of the Markov chain.
2. P{Xn = aj / Xn−1 = ai} = Pij(n − 1, n) is called the one-step transition probability from state ai to state aj at the nth step.

3.10 POISSON PROCESS
The Poisson process is a suitable model for many practical situations. It describes the number of times an event has occurred when an experiment is conducted as a function of time.
Probability Law for the Poisson Process
Let λ be the rate of occurrence (the number of occurrences per unit time) and Pn(t) be the probability of n occurrences of the event in the interval (0, t). Then Pn(t) follows a Poisson distribution with parameter λt:

Pn(t) = e^(−λt) (λt)^n / n! , n = 0, 1, 2, …
P[X(t1) = n1, X(t2) = n2], t2 > t1
= P[X(t1) = n1] · P[the event occurs n2 − n1 times in the interval (t2 − t1)]

= [e^(−λt1) (λt1)^(n1) / n1!] · [e^(−λ(t2 − t1)) {λ(t2 − t1)}^(n2 − n1) / (n2 − n1)!] , n2 ≥ n1

= e^(−λt2) λ^(n2) t1^(n1) (t2 − t1)^(n2 − n1) / [n1! (n2 − n1)!] , n2 ≥ n1
= 0 , otherwise
3.11 SEMI-RANDOM TELEGRAPH SIGNAL PROCESS
If N(t) represents the number of occurrences of a specified event in (0, t) and X(t) = (−1)^N(t), then {X(t)} is called a semi-random telegraph signal process.
3.11.1 RANDOM TELEGRAPH SIGNAL PROCESS
Definition
A random telegraph process is a discrete random process X(t) satisfying the following:
i. X(t) assumes only one of the two possible values 1 or −1 at any time 't'
ii. X(0) = 1 or −1 with equal probability 1/2
iii. The number of transitions N(t) from one value to another occurring in any interval of length 't' is a Poisson process with rate λ, so that the probability of exactly 'r' transitions is

P[N(t) = r] = e^(−λt) (λt)^r / r! , r = 0, 1, 2, …
TUTORIAL QUESTIONS

4. A man either drives a car or catches a train to go to office each day. He never goes two days in a row by train, but if he drives one day, then the next day he is just as likely to drive again as he is to travel by train. Suppose that on the first day of the week the man tossed a fair die and drove to work if and only if a 6 appeared. Find (1) the probability that he takes a train on the third day, and (2) the probability that he drives to work in the long run.
Here S = {1, 2, 3, 4, 5, 6}
T = {1, 2, 3, …}
∴ {Xn, n = 1, 2, 3, …} is a discrete random sequence.
Example: 2
Give an example of a stationary random process and justify your claim.
Solution:
Let us consider a random process X(t) = A cos(ωt + θ), where A and ω are constants and 'θ' is a uniformly distributed random variable in the interval (0, 2π).
Since 'θ' is uniformly distributed in (0, 2π), we have
f(θ) = 1/2π , 0 < θ < 2π
     = 0 , otherwise

∴ E[X(t)] = ∫_{−∞}^{∞} X(t) f(θ) dθ
= ∫_{0}^{2π} A cos(ωt + θ) (1/2π) dθ
= (A/2π) [sin(ωt + θ)]_{0}^{2π}
= (A/2π) [sin(2π + ωt) − sin(ωt)]
= (A/2π) [sin ωt − sin ωt]
= 0, a constant
Since E[X(t)] = a constant, the process X(t) is a stationary random process.
Example: 3
Examine whether the Poisson process {X(t)}, given by the probability law P{X(t) = n} = e^(−λt) (λt)^n / n!, n = 0, 1, 2, …, is stationary.
Solution
We know that the mean is given by

E[X(t)] = Σ_{n=0}^{∞} n Pn(t)
= Σ_{n=0}^{∞} n e^(−λt) (λt)^n / n!
= Σ_{n=1}^{∞} e^(−λt) (λt)^n / (n − 1)!
= e^(−λt) (λt) Σ_{n=1}^{∞} (λt)^(n−1) / (n − 1)!
= (λt) e^(−λt) [1 + λt/1! + (λt)²/2! + …]
= (λt) e^(−λt) e^(λt)
= λt , which depends on t.
Hence the Poisson process is not a stationary process.
UNIT - 4
CORRELATION AND SPECTRAL DENSITY
Introduction
The power spectrum of a time series x(t) describes how the variance of the data x(t) is
distributed over the frequency components into which x(t) may be decomposed. This
distribution of the variance may be described either by a measure µ or by a statistical
cumulative distribution function S(f) = the power contributed by frequencies from 0 up to
f. Given a band of frequencies [a, b) the amount of variance contributed to x(t) by
frequencies lying within the interval [a,b) is given by S(b) - S(a). Then S is called the
spectral distribution function of x.
The spectral density at a frequency f gives the rate of variance contributed by
frequencies in the immediate neighbourhood of f to the variance of x per unit frequency.
4.1 Auto Correlation of a Random Process
Let X(t1) and X(t2) be the two given random variables. Then the auto correlation is
RXX(t1, t2) = E[X(t1) X(t2)]    (1)
Mean Square Value
Putting t1 = t2 = t in (1),
RXX(t, t) = E[X(t) X(t)]
⇒ RXX(t, t) = E[X²(t)]
which is called the mean square value of the random process.
PROPERTY: 1
The mean square value of the random process may be obtained from the auto correlation function RXX(τ) by putting τ = 0: E[X²(t)] = RXX(0).
PROPERTY: 3
If the process X(t) contains a periodic component, then its auto correlation function RXX(τ) also contains a periodic component of the same period.
PROPERTY: 4
If a random process {X(t)} has no periodic components, and E[X(t)] = X̄, then

lim_{|τ|→∞} RXX(τ) = X̄² , (or) X̄ = √(lim_{|τ|→∞} RXX(τ))

i.e., when τ → ∞, the auto correlation function approaches the square of the mean of the random process.
PROPERTY: 5
The auto correlation function of a random process cannot have an arbitrary shape.
Example: 1
Check whether RXX(τ) = 5 sin nπ is a valid auto correlation function.
Solution:
Given RXX(τ) = 5 sin nπ
RXX(−τ) = 5 sin n(−π) = −5 sin nπ
Since RXX(τ) ≠ RXX(−τ), the given function is not an auto correlation function.
Example: 2
Find the mean and variance of a stationary random process whose auto correlation function is given by

RXX(τ) = 18 + 2/(6 + τ²)

Solution
Given RXX(τ) = 18 + 2/(6 + τ²)

X̄² = lim_{|τ|→∞} RXX(τ) = lim_{|τ|→∞} [18 + 2/(6 + τ²)] = 18 + 0 = 18

∴ X̄ = √18, i.e. E[X(t)] = √18 = 4.2426
We know that
E[X²(t)] = RXX(0) = 18 + 2/(6 + 0) = 55/3
Var{X(t)} = E[X²(t)] − {E[X(t)]}² = 55/3 − 18 = 1/3
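The limit and the value at zero used in Example 2 can be checked with a couple of lines of Python (the very large τ below simply stands in for the limit):

```python
# Reading the mean and variance off R_XX(tau) = 18 + 2/(6 + tau**2):
# the squared mean is the tau -> infinity limit and E[X^2] = R_XX(0).
def R(tau):
    return 18 + 2 / (6 + tau ** 2)

mean = R(1e12) ** 0.5        # limit is 18, so the mean is sqrt(18) ≈ 4.2426
var = R(0) - R(1e12)         # 55/3 - 18 = 1/3
print(round(mean, 4), round(var, 4))
```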
Example: 3
Express the auto correlation function of the process {X′(t)} in terms of the auto correlation function of the process {X(t)}.
Solution
Consider RXX′(t1, t2) = E{X(t1) X′(t2)}

= E[ X(t1) lim_{h→0} (X(t2 + h) − X(t2))/h ]

= lim_{h→0} E[ (X(t1) X(t2 + h) − X(t1) X(t2))/h ]

= lim_{h→0} [RXX(t1, t2 + h) − RXX(t1, t2)]/h

⇒ RXX′(t1, t2) = ∂RXX(t1, t2)/∂t2    (1)

Similarly, RX′X′(t1, t2) = ∂RXX′(t1, t2)/∂t1

⇒ RX′X′(t1, t2) = ∂²RXX(t1, t2)/∂t1∂t2    by (1)

Auto Covariance
The auto covariance of the process {X(t)}, denoted by CXX(t1, t2) or C(t1, t2), is defined as
4.2 CORRELATION COEFFICIENT
C (t ,t )
ρXX ( t1 , t 2 ) =XX 1 2
Var X ( t1 ) x Var X ( t 2 )
Where C XX(t 1 , t 2 ) denotes the auto covariance.
et
CXY ( t1 ,=
t2 ) { ( )}
E X ( t1 ) − E ( Y ( t1 ) ) X t 2 − E Y ( t 2 )
.n
The relation between Mean Cross Correlation and cross covariance is as follows:
C XY ( t1 , t 2 =
) R XY ( t1 , t 2 ) − E X ( t1 ) E Y ( t 2 )
Definition pz
ee
Two random process {X(t)} and {Y(t)} are said to be uncorrelated if
CXY ( t1 , t 2 ) 0, ∀ t1 , t 2
Hence from the above remark we have,
ad
R XY (t 1 , t 2 ) = E[X(t 1 ) Y(t 2 )]
4.4.1 CROSS CORRELATION COEFFICIENT
c XY ( t1 , t 2 )
.p
ρXY ( t1 , t 2 ) =
Var ( X ( t1 ) ) Var ( X ( t 2 ) )
w
w
defined as
R XY (t, t+τ) = E X ( t ) Y ( t + τ )
= R XY (τ)
PROPERTY: 1
RXY(τ) = RYX(−τ)
PROPERTY: 2
If {X(t)} and {Y(t)} are two random processes, then |RXY(τ)| ≤ √(RXX(0) RYY(0)), where RXX(τ) and RYY(τ) are their respective auto correlation functions.
PROPERTY: 3
If {X(t)} and {Y(t)} are two random processes, then |RXY(τ)| ≤ (1/2) [RXX(0) + RYY(0)]
Example: 4
Two random processes {X(t)} and {Y(t)} are given by X(t) = A cos(ωt + θ), Y(t) = A sin(ωt + θ), where A and ω are constants and 'θ' is a uniform random variable over (0, 2π). Find the cross correlation function.
Solution
By definition, we have
RXY(τ) = RXY(t, t + τ)
Now, RXY(t, t + τ) = E[X(t) Y(t + τ)]
= E[A cos(ωt + θ) · A sin(ω(t + τ) + θ)]
= A² E[sin(ω(t + τ) + θ) cos(ωt + θ)]    (1)

E[sin(ω(t + τ) + θ) cos(ωt + θ)] = ∫_{−∞}^{∞} sin(ωt + ωτ + θ) cos(ωt + θ) f(θ) dθ

= (1/2π) ∫_{0}^{2π} sin(ωt + ωτ + θ) cos(ωt + θ) dθ

= (1/2π) ∫_{0}^{2π} (1/2) {sin(ωt + ωτ + θ + ωt + θ) + sin(ωt + ωτ + θ − ωt − θ)} dθ

= (1/4π) ∫_{0}^{2π} [sin(2ωt + ωτ + 2θ) + sin ωτ] dθ

= (1/4π) [−cos(2ωt + ωτ + 2θ)/2 + (sin ωτ) θ]_{0}^{2π}

= (1/4π) [−cos(2ωt + ωτ)/2 + cos(2ωt + ωτ)/2 + 2π sin ωτ]

= (1/4π) [0 + 2π sin ωτ]

= (1/2) sin ωτ    (2)

Substituting (2) in (1), we get

RXY(t, t + τ) = (A²/2) sin ωτ
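The result RXY(τ) = (A²/2) sin ωτ, independent of t, can be verified by averaging X(t)·Y(t + τ) over a fine grid of θ values; the constants A = 2, ω = 1.5, t = 0.3 below are illustrative assumptions:

```python
# Averaging X(t)*Y(t + tau) over theta uniform on (0, 2*pi) and comparing
# with (A**2 / 2)*sin(w*tau); note that t itself drops out of the result.
import math

A, w, t = 2.0, 1.5, 0.3
N = 100_000
for tau in (0.0, 0.5, 2.0):
    acc = 0.0
    for k in range(N):
        th = 2 * math.pi * k / N
        acc += A * math.cos(w * t + th) * A * math.sin(w * (t + tau) + th)
    assert abs(acc / N - (A * A / 2) * math.sin(w * tau)) < 1e-9
```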
Definition
The Fourier transform of x(t) is

F[x(t)] = X(ω) = ∫_{−∞}^{∞} x(t) e^(−iωt) dt

Definition
The average power P(T) of x(t) over the interval (−T, T) is given by

P(T) = (1/2T) ∫_{−T}^{T} x²(t) dt
In terms of the Fourier transform XT(ω) of the truncated signal,

P(T) = (1/2π) ∫_{−∞}^{∞} (|XT(ω)|²/2T) dω    (1)

Definition
The average power PXX for the random process {X(t)} is given by

PXX = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X²(t)] dt

= (1/2π) ∫_{−∞}^{∞} lim_{T→∞} (E[|XT(ω)|²]/2T) dω    (2)
If {X(t)} is a stationary process (either in the strict sense or wide sense) with auto correlation function RXX(τ), then the Fourier transform of RXX(τ) is called the power spectral density function of {X(t)}, denoted by SXX(ω) or S(ω) or SX(ω).

SXX(ω) = Fourier transform of RXX(τ) = ∫_{−∞}^{∞} RXX(τ) e^(−iωτ) dτ

In terms of the frequency f (with ω = 2πf),

SXX(f) = ∫_{−∞}^{∞} RXX(τ) e^(−i2πfτ) dτ
The value of the spectral density function at zero frequency is equal to the total area under the graph of the auto correlation function:

SXX(f) = ∫_{−∞}^{∞} RXX(τ) e^(−i2πfτ) dτ

Taking f = 0, we get

SXX(0) = ∫_{−∞}^{∞} RXX(τ) dτ
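This zero-frequency property is easy to check numerically. The sketch below uses the sample ACF R(τ) = e^(−|τ|), whose total area is 2, as an assumed illustration; the trapezoidal rule recovers that area, which is exactly what SXX(0) must equal:

```python
# Numerical check of S_XX(0) = total area under R_XX(tau), using the sample
# ACF R(tau) = exp(-|tau|), whose exact area is 2.
import math

N, L = 200_000, 40.0                 # fine grid on (-L, L)
h = 2 * L / N
vals = [math.exp(-abs(-L + k * h)) for k in range(N + 1)]
area = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))   # trapezoidal rule
print(round(area, 4))                # close to the exact area 2
```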
TUTORIAL QUESTIONS
1. Find the ACF of {Y(t)} = A X(t) cos(w0t + θ), where X(t) is a zero mean stationary random process with ACF RXX(τ), A and w0 are constants, and θ is uniformly distributed over (0, 2π) and independent of X(t).
2. Find the ACF of the periodic time function X(t) = A sin wt.
3. If X(t) is a WSS process and if Y(t) = X(t + a) − X(t − a), prove that RYY(τ) = 2RXX(τ) − RXX(τ + 2a) − RXX(τ − 2a).
4. If X(t) = A sin(ωt + θ), where A and ω are constants and θ is a random variable uniformly distributed over (−π, π), find the ACF of {Y(t)}, where Y(t) = X²(t).
5. Let X(t) and Y(t) be defined by X(t) = A cos ωt + B sin ωt and Y(t) = B cos ωt − A sin ωt, where ω is a constant and A and B are independent random variables both having zero mean and variance σ². Find the cross correlation of X(t) and Y(t). Are X(t) and Y(t) jointly W.S.S. processes?
6. Two random processes X(t) and Y(t) are given by X(t) = A cos(ωt + θ), Y(t) = A sin(ωt + θ), where A and ω are constants and θ is uniformly distributed over (0, 2π). Find the cross correlation function.
7. U(t) and V(t) are defined in terms of random variables X and Y such that E(X) = 0 = E(Y), E[X²] = E[Y²] = 1. Show that U(t) and V(t) are not jointly W.S.S. but that they are individually stationary in the wide sense.
8. Random processes X(t) and Y(t) are defined by X(t) = A cos(ωt + θ), Y(t) = B cos(ωt + θ), where A, B and ω are constants and θ is uniformly distributed over (0, 2π). Find the cross correlation and show that X(t) and Y(t) are jointly W.S.S.
WORKEDOUT EXAMPLES
Example 1. Check whether the following functions are valid auto correlation functions: (i) 5 sin nπ (ii) 1/(1 + 9τ²)
Solution:
(i) Given RXX(τ) = 5 sin nπ
RXX(−τ) = 5 sin n(−π) = −5 sin nπ
Since RXX(τ) ≠ RXX(−τ), the given function is not an auto correlation function.
(ii) Given RXX(τ) = 1/(1 + 9τ²)
RXX(−τ) = 1/(1 + 9(−τ)²) = RXX(τ)
Since RXX(−τ) = RXX(τ), the given function is a valid auto correlation function.
Example: 2
Find the mean and variance of a stationary random process whose auto correlation function is given by

RXX(τ) = 18 + 2/(6 + τ²)

Solution
Given RXX(τ) = 18 + 2/(6 + τ²)

X̄² = lim_{|τ|→∞} RXX(τ) = lim_{|τ|→∞} [18 + 2/(6 + τ²)] = 18 + 0 = 18

∴ X̄ = √18, i.e. E[X(t)] = √18 = 4.2426
E[X²(t)] = RXX(0) = 18 + 2/6 = 55/3
Var{X(t)} = E[X²(t)] − {E[X(t)]}² = 55/3 − 18 = 1/3
Example: 3
Express the auto correlation function of the process {X′(t)} in terms of the auto correlation function of the process {X(t)}.
Solution
Consider RXX′(t1, t2) = E{X(t1) X′(t2)}

= E[ X(t1) lim_{h→0} (X(t2 + h) − X(t2))/h ]

= lim_{h→0} E[ (X(t1) X(t2 + h) − X(t1) X(t2))/h ]

= lim_{h→0} [RXX(t1, t2 + h) − RXX(t1, t2)]/h

⇒ RXX′(t1, t2) = ∂RXX(t1, t2)/∂t2    (1)

Similarly, RX′X′(t1, t2) = ∂RXX′(t1, t2)/∂t1 = ∂²RXX(t1, t2)/∂t1∂t2    by (1)
Example: 4
Two random processes {X(t)} and {Y(t)} are given by
X(t) = A cos(ωt + θ), Y(t) = A sin(ωt + θ), where A and ω are constants and 'θ' is a uniform random variable over (0, 2π). Find the cross correlation function.
Solution
By definition, we have
RXY(τ) = RXY(t, t + τ)
Now, RXY(t, t + τ) = E[X(t) Y(t + τ)]
= A² E[sin(ω(t + τ) + θ) cos(ωt + θ)]    (1)

E[sin(ω(t + τ) + θ) cos(ωt + θ)] = (1/2π) ∫_{0}^{2π} sin(ωt + ωτ + θ) cos(ωt + θ) dθ

= (1/2π) ∫_{0}^{2π} (1/2) {sin(ωt + ωτ + θ + ωt + θ) + sin(ωt + ωτ + θ − ωt − θ)} dθ

= (1/4π) ∫_{0}^{2π} [sin(2ωt + ωτ + 2θ) + sin ωτ] dθ

= (1/4π) [−cos(2ωt + ωτ + 2θ)/2 + (sin ωτ) θ]_{0}^{2π}

= (1/4π) [−cos(2ωt + ωτ)/2 + cos(2ωt + ωτ)/2 + 2π sin ωτ]

= (1/4π) [0 + 2π sin ωτ]

= (1/2) sin ωτ    (2)

Substituting (2) in (1), we get

RXY(t, t + τ) = (A²/2) sin ωτ
UNIT - 5
LINEAR SYSTEMS WITH RANDOM INPUTS
5.1 INTRODUCTION
Mathematically, a "system" is a functional relationship between the input x(t) and the output y(t). We can write the relationship as
y(t) = f[x(t)] , −∞ < t < ∞
5.2 CLASSIFICATION OF SYSTEM
1. Linear System: f is called a linear system if it satisfies
f[a1X1(t) ± a2X2(t)] = a1 f[X1(t)] ± a2 f[X2(t)]
2. Time Invariant System:
Let Y(t) = f[X(t)]. If Y(t + h) = f[X(t + h)], then f is called a time invariant system, or X(t) and Y(t) are said to form a time invariant system.
3. Causal System:
Suppose the value of the output Y(t) at t = t0 depends only on the past values of the input X(t), t ≤ t0. In other words, if Y(t0) = f[X(t): t ≤ t0], then such a system is called a causal system.
Input X(t) → [LTI System, h(t)] → Output Y(t)
(a) shows a general single input - output linear system
(b) shows a linear time invariant system
For a linear time invariant system, the output is the convolution of the input with the system weighting function h(t):

Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du = ∫_{−∞}^{∞} h(t − u) X(u) du
5.4 UNIT IMPULSE RESPONSE OF THE SYSTEM
If the input of the system is the unit impulse function, then the output or response is the system weighting function:
Y(t) = h(t)
which is the system weight function.
Property 1:
If the system is of the form Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then the system is a linear time-invariant system.
Property 2:
If the input to a time-invariant, stable linear system is a WSS process, then the output will also be a WSS process, i.e. if {X(t)} is a WSS process, then the output {Y(t)} is a WSS process.
Property 3:
If {X(t)} is a WSS process and if Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then RXY(τ) = RXX(τ) * h(τ)
Property 4:
If {X(t)} is a WSS process and if Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then RYY(τ) = RXY(τ) * h(−τ)
Property 5:
If {X(t)} is a WSS process and if Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then RYY(τ) = RXX(τ) * h(τ) * h(−τ)
Property 6:
If {X(t)} is a WSS process and if Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then SXY(ω) = SXX(ω) H(ω)
Property 7:
If {X(t)} is a WSS process and if Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then SYY(ω) = SXX(ω) |H(ω)|²
(Here * denotes convolution.)
Note:
Instead of taking RXY(τ) = E[X(t) Y(t + τ)] in properties (3), (4) & (5), if we start with RXY(τ) = E[X(t − τ) Y(t)], then the above properties can also be stated as
a) RXY(τ) = RXX(τ) * h(−τ)
b) RYY(τ) = RXY(τ) * h(τ)
c) RYY(τ) = RXX(τ) * h(−τ) * h(τ)
REMARK:
(i) We have written H(ω) H*(ω) = |H(ω)|² because H(ω) = F[h(τ)] and H*(ω) = F[h(−τ)], the complex conjugate of F[h(τ)] for real h.
(ii) Equation (c) gives a relationship between the spectral densities of the input and output processes in the system.
(iii) System transfer function: We call H(ω) = F{h(τ)} the power transfer function or system transfer function.
SOLVED PROBLEMS ON AUTO CROSS CORRELATION FUNCTIONS OF INPUT
AND OUTPUT
Example: 5.4.1
Find the power spectral density of the random telegraph signal.
Solution
We know that the auto correlation of the random telegraph signal process X(t) is

RXX(τ) = e^(−2λ|τ|)

SXX(ω) = ∫_{−∞}^{∞} RXX(τ) e^(−iωτ) dτ

= ∫_{−∞}^{0} e^(2λτ) e^(−iωτ) dτ + ∫_{0}^{∞} e^(−2λτ) e^(−iωτ) dτ    [|τ| = −τ when τ < 0; |τ| = τ when τ > 0]

= ∫_{−∞}^{0} e^((2λ−iω)τ) dτ + ∫_{0}^{∞} e^(−(2λ+iω)τ) dτ

= [e^((2λ−iω)τ)/(2λ − iω)]_{−∞}^{0} + [e^(−(2λ+iω)τ)/−(2λ + iω)]_{0}^{∞}

= (1/(2λ − iω)) (1 − 0) − (1/(2λ + iω)) (0 − 1)

= 1/(2λ − iω) + 1/(2λ + iω)

= (2λ + iω + 2λ − iω) / ((2λ − iω)(2λ + iω))

SXX(ω) = 4λ/(4λ² + ω²)
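The closed form SXX(ω) = 4λ/(4λ² + ω²) can be spot-checked by numerically Fourier-transforming the ACF; since RXX(τ) is even, only the cosine part contributes. The choice λ = 1 below is an illustrative assumption:

```python
# Numerically Fourier transforming R_XX(tau) = exp(-2*lam*|tau|) and
# comparing against the closed form 4*lam / (4*lam**2 + w**2).
import math

lam = 1.0

def S_numeric(w, L=40.0, N=200_000):
    h = 2 * L / N
    vals = [math.exp(-2 * lam * abs(-L + k * h)) * math.cos(w * (-L + k * h))
            for k in range(N + 1)]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))   # trapezoidal rule

for w in (0.0, 1.0, 3.0):
    closed_form = 4 * lam / (4 * lam ** 2 + w ** 2)
    print(w, round(S_numeric(w), 4), round(closed_form, 4))
```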
Example: 5.4.2
A linear time invariant system has an impulse response h(t) = e^(−βt) U(t). Find the power spectral density of the output Y(t) corresponding to the input X(t).
Solution:
Given X(t) - input, Y(t) - output.

SYY(ω) = |H(ω)|² SXX(ω)

Now H(ω) = ∫_{−∞}^{∞} h(t) e^(−iωt) dt

= ∫_{0}^{∞} e^(−βt) e^(−iωt) dt

= ∫_{0}^{∞} e^(−(β+iω)t) dt

= [e^(−(β+iω)t)/−(β + iω)]_{0}^{∞}

= −(1/(β + iω)) (0 − 1)

= 1/(β + iω)

|H(ω)| = 1/|β + iω| = 1/√(β² + ω²)

∴ SYY(ω) = SXX(ω)/(β² + ω²)
TUTORIAL QUESTIONS
2. Suppose that X(t) is the input to an LTI system with impulse response h1(t) and that Y(t) is the input to another LTI system with impulse response h2(t). It is assumed that X(t) and Y(t) are jointly wide sense stationary. Let V(t) and Z(t) denote the random processes at the respective system outputs. Find the cross correlation of V(t) and Z(t).
3. The input to the RC filter is a white noise process with ACF . If the frequency response , find the auto correlation and the mean square value of the output process Y(t).
4. A random process X(t) having ACF , where P and are real positive constants, is applied to the input of the system with impulse response
h(t) = λ e^(−λt) , t > 0
     = 0 , t < 0
where λ is a real positive constant. Find the ACF of the network's response Y(t). Find the cross correlation.
WORKEDOUT EXAMPLES
Example: 1
Find the power spectral density of the random telegraph signal.
Solution
We know that the auto correlation of the random telegraph signal process X(t) is

RXX(τ) = e^(−2λ|τ|)

SXX(ω) = ∫_{−∞}^{∞} RXX(τ) e^(−iωτ) dτ

= ∫_{−∞}^{0} e^(2λτ) e^(−iωτ) dτ + ∫_{0}^{∞} e^(−2λτ) e^(−iωτ) dτ    [|τ| = −τ when τ < 0; |τ| = τ when τ > 0]

= ∫_{−∞}^{0} e^((2λ−iω)τ) dτ + ∫_{0}^{∞} e^(−(2λ+iω)τ) dτ

= [e^((2λ−iω)τ)/(2λ − iω)]_{−∞}^{0} + [e^(−(2λ+iω)τ)/−(2λ + iω)]_{0}^{∞}

= (1/(2λ − iω)) (1 − 0) − (1/(2λ + iω)) (0 − 1)

= 1/(2λ − iω) + 1/(2λ + iω)

= (2λ + iω + 2λ − iω) / ((2λ − iω)(2λ + iω))

SXX(ω) = 4λ/(4λ² + ω²)