
www.padeepz.net

S.NO CONTENTS

UNIT I RANDOM VARIABLES
1 Introduction
2 Discrete Random Variables
3 Continuous Random Variables
4 Moments
5 Moment Generating Functions
6 Binomial Distribution
7 Poisson Distribution
8 Geometric Distribution
9 Uniform Distribution
10 Exponential Distribution
11 Gamma Distribution

UNIT II TWO-DIMENSIONAL RANDOM VARIABLES
12 Introduction
13 Joint Distribution
14 Marginal and Conditional Distribution
15 Covariance
16 Correlation Coefficient
17 Problems
18 Linear Regression
19 Transformation of Random Variables
20 Problems

UNIT III RANDOM PROCESSES
21 Introduction
22 Classification
23 Stationary Processes
24 Markov Processes
25 Poisson Processes
26 Random Telegraph Processes

UNIT IV CORRELATION AND SPECTRAL DENSITIES
27 Introduction
28 Auto Correlation Functions
29 Properties
30 Cross Correlation Functions
31 Properties
32 Power Spectral Density
33 Properties
34 Cross Spectral Density
35 Properties

UNIT V LINEAR SYSTEMS WITH RANDOM INPUTS
36 Introduction
37 Linear Time Invariant Systems
38 Problems
39 Linear Systems with Random Inputs
40 Auto Correlation and Cross Correlation Functions of Inputs and Outputs
41 System Transfer Function
42 Problems


MA6451 PROBABILITY AND RANDOM PROCESSES L T P C 3 1 0 4

OBJECTIVES: To provide the necessary basic concepts in probability and random processes for
applications such as random signals, linear systems, etc. in communication engineering.

UNIT I RANDOM VARIABLES 9+3 Discrete and continuous random variables – Moments –
Moment generating functions – Binomial, Poisson, Geometric, Uniform, Exponential, Gamma
and Normal distributions.

UNIT II TWO - DIMENSIONAL RANDOM VARIABLES 9+3 Joint distributions – Marginal and
conditional distributions – Covariance – Correlation and Linear regression – Transformation of
random variables.
UNIT III RANDOM PROCESSES 9+3 Classification – Stationary process – Markov process –
Poisson process – Random telegraph process.
UNIT IV CORRELATION AND SPECTRAL DENSITIES 9+3 Auto correlation functions – Cross
correlation functions – Properties – Power spectral density – Cross spectral density –
Properties.
UNIT V LINEAR SYSTEMS WITH RANDOM INPUTS 9+3
Linear time invariant system – System transfer function – Linear systems with random inputs –
Auto correlation and Cross correlation functions of input and output.
TOTAL (L:45+T:15): 60 PERIODS
OUTCOMES:
 The students will have an exposure to various distribution functions, helping them acquire
skills in handling situations involving more than one variable, and will be able to analyze the
response of linear time invariant systems to random inputs.

TEXT BOOKS:
1. Ibe.O.C., “Fundamentals of Applied Probability and Random Processes", Elsevier, 1st Indian
Reprint, 2007.

2. Peebles. P.Z., "Probability, Random Variables and Random Signal Principles", Tata McGraw
Hill, 4th Edition, New Delhi, 2002.

REFERENCES:

1. Yates. R.D. and Goodman.D.J., "Probability and Stochastic Processes", 2nd Edition, Wiley
India Pvt. Ltd., Bangalore, 2012.

2. Stark. H., and Woods. J.W., "Probability and Random Processes with Applications to Signal
Processing", 3rd Edition,Pearson Education, Asia, 2002.
3. Miller. S.L. and Childers.D.G., "Probability and Random Processes with Applications to Signal
Processing and Communications", Academic Press, 2004.
4. Hwei Hsu, "Schaum's Outline of Theory and Problems of Probability, Random Variables and
Random Processes", Tata McGraw Hill Edition, New Delhi, 2004.
5. Cooper. G.R., McGillem. C.D., "Probabilistic Methods of Signal and System Analysis", 3rd
Indian Edition, Oxford University Press, New Delhi, 2012.


UNIT - I
RANDOM VARIABLES

Introduction

Consider an experiment of tossing a coin twice. The outcomes {HH, HT, TH, TT} constitute
the sample space. Each of these outcomes can be associated with a number by specifying a rule
of association (e.g. the number of heads). Such a rule of association is called a random
variable. We denote a random variable by a capital letter (X, Y, etc.) and any particular
value of the random variable by x or y.

Thus a random variable X can be considered as a function that maps all elements in
the sample space S into points on the real line. The notation X(s) = x means that x is the
value associated with the outcome s by the random variable X.

1.1 SAMPLE SPACE
Consider an experiment of throwing a coin twice. The outcomes

S = {HH, HT, TH, TT} constitute the sample space.

1.2 RANDOM VARIABLE


In this sample space each of these outcomes can be associated with a number by
specifying a rule of association. Such a rule of association is called a random variable.

Eg: the number of heads.
We denote a random variable by a capital letter (X, Y, etc.) and any particular value of the
random variable by x or y.

S = {HH, HT, TH, TT}
X(S) = {2, 1, 1, 0}

Thus a random variable X can be considered as a function that maps all elements in the sample
space S into points on the real line. The notation X(s) = x means that x is the value
associated with the outcome s by the R.V. X.


Example
In the experiment of throwing a coin twice, the sample space is S = {HH, HT, TH, TT}.
Let X be the random variable given by X(s) = x, the number of heads in the outcome s.
Note
Any random variable whose only possible values are 0 and 1 is called a Bernoulli random
variable.
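The mapping from outcomes to numbers described above can be made concrete with a short sketch (Python is used purely for illustration; the helper name count_heads is our own):

```python
# Sample space for tossing a coin twice.
S = ["HH", "HT", "TH", "TT"]

# Rule of association: X(s) = number of heads in the outcome s.
def count_heads(outcome):
    return outcome.count("H")

# X maps each outcome in S to a point on the real line.
X = {s: count_heads(s) for s in S}

# P(X <= 1) = P{s : X(s) <= 1}; each outcome has probability 1/4.
p_at_most_one_head = sum(1 for s in S if X[s] <= 1) / len(S)
```

Here `X` comes out as {"HH": 2, "HT": 1, "TH": 1, "TT": 0}, matching X(S) = {2, 1, 1, 0} above.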

1.2.1 DISCRETE RANDOM VARIABLE


Definition: A discrete random variable is a R.V. X whose possible values constitute a finite
set or a countably infinite set of values.
Examples


All the R.V.'s from Example 1 are discrete R.V.'s.

Remark
The meaning of P(X ≤ a): P(X ≤ a) is simply the probability of the set of outcomes s in the
sample space for which X(s) ≤ a, i.e.
P(X ≤ a) = P{s : X(s) ≤ a}
In the above Example 1 we should write
P(X ≤ 1) = P(HH, HT, TH) = 3/4
Here P(X ≤ 1) = 3/4 means that the probability that the R.V. X (the number of heads) is less
than or equal to 1 is 3/4.

Distribution function (cumulative distribution function) of the random variable X

Def:
The distribution function of a random variable X, defined on (−∞, ∞), is given by
F(x) = P(X ≤ x) = P{s : X(s) ≤ x}

Note
Let the random variable X take the values x1, x2, ....., xn with probabilities
p1, p2, ....., pn, and let x1 < x2 < ..... < xn. Then we have
F(x) = 0,                                           for −∞ < x < x1
F(x) = P(X < x1) + P(X = x1) = 0 + p1 = p1,         for x1 ≤ x < x2
F(x) = P(X = x1) + P(X = x2) = p1 + p2,             for x2 ≤ x < x3
......
F(x) = P(X = x1) + P(X = x2) + ..... + P(X = xn)
     = p1 + p2 + .......... + pn = 1,               for x ≥ xn

1.2.2 PROPERTIES OF DISTRIBUTION FUNCTIONS

Property 1: P(a < X ≤ b) = F(b) − F(a), where F(x) = P(X ≤ x)

Property 2: P(a ≤ X ≤ b) = P(X = a) + F(b) − F(a)

Property 3: P(a < X < b) = P(a < X ≤ b) − P(X = b)
                         = F(b) − F(a) − P(X = b), by Property 1

1.2.3 PROBABILITY MASS FUNCTION (OR) PROBABILITY FUNCTION

Let X be a one dimensional discrete R.V. which takes the values x1, x2, ..... To each
possible outcome xi we can associate a number pi, i.e., P(X = xi) = p(xi) = pi, called the
probability of xi. The numbers pi = p(xi) satisfy the following conditions:

(i) p(xi) ≥ 0, ∀i    (ii) ∑i p(xi) = 1

The function p(x) satisfying the above two conditions is called the probability mass
function (or) probability distribution of the R.V. X. The probability distribution {xi, pi}
can be displayed in the form of a table as shown below.

X = xi          x1    x2    .......    xi
P(X = xi) = pi  p1    p2    .......    pi

Notation
Let S be a sample space. The set of all outcomes s in S such that X(s) = x is denoted by
writing X = x.
P(X = x) = P{s : X(s) = x}
Similarly P(X ≤ a) = P{s : X(s) ∈ (−∞, a]}
and P(a < X ≤ b) = P{s : X(s) ∈ (a, b]}
P(X = a or X = b) = P{(X = a) ∪ (X = b)}
P(X = a and X = b) = P{(X = a) ∩ (X = b)}
and so on.

Theorem 1: If X1 and X2 are random variables and K is a constant, then KX1, X1 + X2, X1X2,
K1X1 + K2X2, X1 − X2 are also random variables.

Theorem 2: If X is a random variable and f(•) is a continuous function, then f(X) is a random
variable.

Note
If F(x) is the distribution function of a one dimensional random variable, then
I. 0 ≤ F(x) ≤ 1
II. If x < y, then F(x) ≤ F(y)
III. F(−∞) = lim x→−∞ F(x) = 0
IV. F(∞) = lim x→∞ F(x) = 1
V. If X is a discrete R.V. taking values x1, x2, x3, ....., where x1 < x2 < ..... < xi−1 < xi < ....., then
P(X = xi) = F(xi) − F(xi−1)


Example 1.2.1
A random variable X has the following probability function:
Values of X       0   1    2    3    4    5     6     7     8
Probability p(x)  a   3a   5a   7a   9a   11a   13a   15a   17a

(i) Determine the value of a.
(ii) Find P(X < 3), P(X ≥ 3), P(0 < X < 5).
(iii) Find the distribution function of X.
Solution


Table 1
Values of X  0   1    2    3    4    5     6     7     8
p(x)         a   3a   5a   7a   9a   11a   13a   15a   17a

(i) We know that if p(x) is a probability mass function, then
∑ p(xi) = 1, i = 0 to 8
p(0) + p(1) + p(2) + p(3) + p(4) + p(5) + p(6) + p(7) + p(8) = 1
a + 3a + 5a + 7a + 9a + 11a + 13a + 15a + 17a = 1
81a = 1
a = 1/81
Putting a = 1/81 in Table 1, we get Table 2.

Table 2
X = x  0      1      2      3      4      5       6       7       8
p(x)   1/81   3/81   5/81   7/81   9/81   11/81   13/81   15/81   17/81

(ii) P(X < 3) = p(0) + p(1) + p(2)
             = 1/81 + 3/81 + 5/81 = 9/81

     P(X ≥ 3) = 1 − P(X < 3)
             = 1 − 9/81 = 72/81

     P(0 < X < 5) = p(1) + p(2) + p(3) + p(4)   [here 0 and 5 are not included]
                  = 3/81 + 5/81 + 7/81 + 9/81 = 24/81

(iii) To find the distribution function of X, using Table 2 we get

X = x   F(x) = P(X ≤ x)
0       F(0) = p(0) = 1/81
1       F(1) = P(X ≤ 1) = p(0) + p(1) = 1/81 + 3/81 = 4/81
2       F(2) = P(X ≤ 2) = p(0) + p(1) + p(2) = 4/81 + 5/81 = 9/81
3       F(3) = P(X ≤ 3) = p(0) + p(1) + p(2) + p(3) = 9/81 + 7/81 = 16/81
4       F(4) = P(X ≤ 4) = p(0) + p(1) + ..... + p(4) = 16/81 + 9/81 = 25/81
5       F(5) = P(X ≤ 5) = p(0) + p(1) + ..... + p(5) = 25/81 + 11/81 = 36/81
6       F(6) = P(X ≤ 6) = p(0) + p(1) + ..... + p(6) = 36/81 + 13/81 = 49/81
7       F(7) = P(X ≤ 7) = p(0) + p(1) + ..... + p(7) = 49/81 + 15/81 = 64/81
8       F(8) = P(X ≤ 8) = p(0) + p(1) + ..... + p(8) = 64/81 + 17/81 = 81/81 = 1
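As a quick numeric check of this example (an illustrative sketch; the standard-library Fraction type keeps the 1/81 arithmetic exact):

```python
from fractions import Fraction

# p(x) = (2x + 1) * a for x = 0..8, with a chosen so that the pmf sums to 1.
coeffs = [1, 3, 5, 7, 9, 11, 13, 15, 17]
a = Fraction(1, sum(coeffs))                    # a = 1/81

pmf = {x: c * a for x, c in enumerate(coeffs)}

p_less_3 = pmf[0] + pmf[1] + pmf[2]             # P(X < 3)
p_ge_3 = 1 - p_less_3                           # P(X >= 3)
p_between = sum(pmf[x] for x in (1, 2, 3, 4))   # P(0 < X < 5)

# Distribution function F(x) = P(X <= x) as running sums of the pmf.
F = {}
running = Fraction(0)
for x in range(9):
    running += pmf[x]
    F[x] = running
```

Running this reproduces a = 1/81, P(X < 3) = 9/81, P(X ≥ 3) = 72/81, P(0 < X < 5) = 24/81, and the F(x) column above.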

1.3 CONTINUOUS RANDOM VARIABLE


Def: A R.V. X which takes all possible values in a given interval is called a continuous
random variable.
Example: Age, height and weight are continuous R.V.'s.

1.3.1 PROBABILITY DENSITY FUNCTION

Consider a continuous R.V. X specified on a certain interval (a, b) (which can also be an
infinite interval (−∞, ∞)). If there is a function y = f(x) such that

lim Δx→0 [P(x < X < x + Δx) / Δx] = f(x)

then this function f(x) is termed the probability density function (or) simply the density
function of the R.V. X.

It is also called the frequency function, distribution density or probability density
function. The curve y = f(x) is called the probability curve or the distribution curve.

Remark
If f(x) is the p.d.f of the R.V. X, then the probability that a value of the R.V. X will
fall in some interval (a, b) is equal to the definite integral of the function f(x) from a
to b:

P(a < X < b) = ∫a^b f(x) dx   (or)   P(a ≤ X ≤ b) = ∫a^b f(x) dx

1.3.2 PROPERTIES OF P.D.F

The p.d.f f(x) of a R.V. X has the following properties:

(i) f(x) ≥ 0, −∞ < x < ∞    (ii) ∫−∞^∞ f(x) dx = 1

Remark
1. In the case of a discrete R.V., the probability at a point, say x = c, need not be zero.
But in the case of a continuous R.V. X, the probability at a point is always zero:

P(X = c) = ∫c^c f(x) dx = 0

2. If X is a continuous R.V., then we have
P(a ≤ X ≤ b) = P(a ≤ X < b) = P(a < X ≤ b) = P(a < X < b)

IMPORTANT DEFINITIONS IN TERMS OF P.D.F

If f(x) is the p.d.f of a random variable X which is defined in the interval (a, b), then:

(i)    Arithmetic mean = ∫a^b x f(x) dx
(ii)   Harmonic mean H is given by 1/H = ∫a^b (1/x) f(x) dx
(iii)  Geometric mean G is given by log G = ∫a^b log x f(x) dx
(iv)   Moments about the origin = ∫a^b x^r f(x) dx
(v)    Moments about any point A = ∫a^b (x − A)^r f(x) dx
(vi)   Moment about the mean, µr = ∫a^b (x − mean)^r f(x) dx
(vii)  Variance, µ2 = ∫a^b (x − mean)² f(x) dx
(viii) Mean deviation about the mean, M.D. = ∫a^b |x − mean| f(x) dx

1.3.3 Mathematical Expectations

Def: Let X be a continuous random variable with probability density function f(x). Then the
mathematical expectation of X is denoted by E(X) and is given by

E(X) = ∫−∞^∞ x f(x) dx

The rth raw moment is denoted by

µr' = ∫−∞^∞ x^r f(x) dx

Thus
µ1' = E(X)     (µ1' about the origin)
µ2' = E(X²)    (µ2' about the origin)
∴ Mean = X̄ = µ1' = E(X)
and
Variance = µ2' − µ1'²
Variance = E(X²) − [E(X)]²   (a)

* rth moment (about the mean)
Now
E{X − E(X)}^r = ∫−∞^∞ {x − E(X)}^r f(x) dx
             = ∫−∞^∞ {x − X̄}^r f(x) dx
Thus
µr = ∫−∞^∞ {x − X̄}^r f(x) dx, where µr = E[{X − E(X)}^r]   (b)

This gives the rth moment about the mean and it is denoted by µr.

Put r = 1 in (b), we get
µ1 = ∫−∞^∞ (x − X̄) f(x) dx
   = ∫−∞^∞ x f(x) dx − X̄ ∫−∞^∞ f(x) dx
   = X̄ − X̄        [since ∫−∞^∞ f(x) dx = 1]
∴ µ1 = 0

Put r = 2 in (b), we get
µ2 = ∫−∞^∞ (x − X̄)² f(x) dx
∴ Variance = µ2 = E[X − E(X)]²
which gives the variance in terms of expectations.

Note
Let g(x) = K (a constant). Then
E[g(X)] = E(K) = ∫−∞^∞ K f(x) dx
       = K ∫−∞^∞ f(x) dx       [since ∫−∞^∞ f(x) dx = 1]
       = K · 1 = K
Thus E(K) = K ⇒ E[a constant] = that constant.
1.3.4 EXPECTATIONS (Discrete R.V.'s)

Let X be a discrete random variable with p.m.f p(x). Then

E(X) = ∑x x p(x)

For a discrete random variable X,
E(X^r) = ∑x x^r p(x)   (by definition)

If we denote E(X^r) = µr', then
µr' = E[X^r] = ∑x x^r p(x)

Put r = 1, we get
Mean = µ1' = ∑x x p(x)

Put r = 2, we get
µ2' = E[X²] = ∑x x² p(x)
∴ µ2 = µ2' − µ1'² = E(X²) − {E(X)}²

The rth moment about the mean:
µr = E[{X − E(X)}^r] = ∑x (x − X̄)^r p(x), where E(X) = X̄

Put r = 2, we get
Variance = µ2 = ∑x (x − X̄)² p(x)

1.3.5 ADDITION THEOREM (EXPECTATION)

Theorem 1
If X and Y are two continuous random variables with pdf fX(x) and fY(y), then
E(X + Y) = E(X) + E(Y)


1.3.6 MULTIPLICATION THEOREM OF EXPECTATION

Theorem 2
If X and Y are independent random variables, then E(XY) = E(X) · E(Y).
Note:
If X1, X2, ......, Xn are n independent random variables, then
E[X1 X2 ...... Xn] = E(X1) E(X2) ...... E(Xn)

Theorem 3
If X is a random variable with pdf f(x) and a is a constant, then
(i) E[a G(X)] = a E[G(X)]
(ii) E[G(X) + a] = E[G(X)] + a
where G(X) is a function of X which is also a random variable.

Theorem 4
If X is a random variable with p.d.f. f(x) and a and b are constants, then
E[aX + b] = a E(X) + b

Cor 1:
If we take a = 1 and b = −E(X) = −X̄, then we get
E(X − X̄) = E(X) − E(X) = 0
Note
E(1/X) ≠ 1/E(X)
E[log X] ≠ log E(X)
E(X²) ≠ [E(X)]²

1.3.7 EXPECTATION OF A LINEAR COMBINATION OF RANDOM VARIABLES

Let X1, X2, ......, Xn be any n random variables and let a1, a2, ......, an be constants. Then
E[a1X1 + a2X2 + ...... + anXn] = a1E(X1) + a2E(X2) + ...... + anE(Xn)

Result
If X is a random variable, then Var(aX + b) = a² Var(X), where a and b are constants.

Covariance:
If X and Y are random variables, then the covariance between them is defined as
Cov(X, Y) = E{[X − E(X)][Y − E(Y)]}
          = E{XY − X E(Y) − E(X) Y + E(X) E(Y)}
Cov(X, Y) = E(XY) − E(X) · E(Y)   (A)
If X and Y are independent, then
E(XY) = E(X) E(Y)   (B)
Substituting (B) in (A), we get
Cov(X, Y) = 0
∴ If X and Y are independent, then Cov(X, Y) = 0.
Note
(i) Cov(aX, bY) = ab Cov(X, Y)
(ii) Cov(X + a, Y + b) = Cov(X, Y)
(iii) Cov(aX + b, cY + d) = ac Cov(X, Y)
(iv) Var(X1 + X2) = Var(X1) + Var(X2) + 2 Cov(X1, X2)
If X1, X2 are independent,
Var(X1 + X2) = Var(X1) + Var(X2)
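The covariance and variance identities above can be sanity-checked by simulation (an illustrative sketch using two independent variates; exact equalities become approximate with a finite sample):

```python
import random

random.seed(0)
n = 100_000
# Two independent random variables: a fair die and a fair coin.
x = [random.randint(1, 6) for _ in range(n)]
y = [random.randint(0, 1) for _ in range(n)]

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    # Sample version of Cov(U, V) = E{[U - E(U)][V - E(V)]}
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

def var(v):
    return cov(v, v)

# Cov(X, Y) is near 0 for independent X and Y.
cov_xy = cov(x, y)

# Var(X1 + X2) = Var(X1) + Var(X2) + 2 Cov(X1, X2) holds exactly for sample moments.
lhs = var([a + b for a, b in zip(x, y)])
rhs = var(x) + var(y) + 2 * cov_xy
```

The variance identity is algebraic, so `lhs` and `rhs` agree to floating-point precision, while `cov_xy` is only close to zero because the sample is finite.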
EXPECTATION TABLE

Discrete R.V.'s                                Continuous R.V.'s
1. E(X) = ∑x x p(x)                            1. E(X) = ∫−∞^∞ x f(x) dx
2. E(X^r) = µr' = ∑x x^r p(x)                  2. E(X^r) = µr' = ∫−∞^∞ x^r f(x) dx
3. Mean = µ1' = ∑x x p(x)                      3. Mean = µ1' = ∫−∞^∞ x f(x) dx
4. µ2' = ∑x x² p(x)                            4. µ2' = ∫−∞^∞ x² f(x) dx
5. Variance = µ2' − µ1'² = E(X²) − {E(X)}²     5. Variance = µ2' − µ1'² = E(X²) − {E(X)}²

SOLVED PROBLEMS ON DISCRETE R.V.'S

Example 1
When a die is thrown, X denotes the number that turns up. Find E(X), E(X²) and Var(X).
Solution
Let X be the R.V. denoting the number that turns up in a die. X takes the values
1, 2, 3, 4, 5, 6, each with probability 1/6:

X = x   1     2     3     4     5     6
p(x)    1/6   1/6   1/6   1/6   1/6   1/6

Now
E(X) = ∑i xi p(xi)
     = 1(1/6) + 2(1/6) + 3(1/6) + 4(1/6) + 5(1/6) + 6(1/6)
     = 21/6 = 7/2   (1)

E(X²) = ∑i xi² p(xi)
      = 1(1/6) + 4(1/6) + 9(1/6) + 16(1/6) + 25(1/6) + 36(1/6)
      = (1 + 4 + 9 + 16 + 25 + 36)/6 = 91/6   (2)

Var(X) = E(X²) − [E(X)]²
       = 91/6 − (7/2)² = 91/6 − 49/4 = 35/12
Example 2
Find (i) the constant C and (ii) the mean of the distribution given by
f(x) = C(x − x²), 0 < x < 1
     = 0,         otherwise
Solution
Given f(x) = C(x − x²), 0 < x < 1   (1)

Since f is a p.d.f, ∫−∞^∞ f(x) dx = 1
∫0^1 C(x − x²) dx = 1    [using (1), since 0 < x < 1]
C [x²/2 − x³/3]0^1 = 1
C (1/2 − 1/3) = 1
C (3 − 2)/6 = 1
C/6 = 1 ⇒ C = 6   (2)

Substituting (2) in (1): f(x) = 6(x − x²), 0 < x < 1   (3)

Mean = E(X) = ∫−∞^∞ x f(x) dx
            = ∫0^1 x · 6(x − x²) dx   [from (3)]
            = ∫0^1 (6x² − 6x³) dx
            = [6x³/3 − 6x⁴/4]0^1 = 2 − 3/2 = 1/2
∴ Mean = 1/2, C = 6
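A numeric cross-check of C = 6 and mean = 1/2 (an illustrative sketch using a simple midpoint rule rather than any particular integration library):

```python
def integrate(g, a, b, n=100_000):
    # Midpoint rule: adequate for a smooth integrand on [a, b].
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# Normalizing constant: C = 1 / integral of (x - x^2) over (0, 1).
C = 1.0 / integrate(lambda x: x - x**2, 0.0, 1.0)

f = lambda x: C * (x - x**2)                    # the p.d.f.
mean = integrate(lambda x: x * f(x), 0.0, 1.0)  # E(X)
```

The computed values agree with C = 6 and mean = 0.5 to high accuracy.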

1.4 CONTINUOUS DISTRIBUTION FUNCTION

Def:
If f(x) is a p.d.f. of a continuous random variable X, then the function
F_X(x) = F(x) = P(X ≤ x) = ∫−∞^x f(t) dt, −∞ < x < ∞
is called the distribution function or cumulative distribution function of the random
variable.

* PROPERTIES OF CDF OF A R.V. X
(i) 0 ≤ F(x) ≤ 1, −∞ < x < ∞
(ii) lim x→−∞ F(x) = 0, lim x→∞ F(x) = 1
(iii) P(a ≤ X ≤ b) = ∫a^b f(x) dx = F(b) − F(a)
(iv) F′(x) = dF(x)/dx = f(x) ≥ 0
(v) P(X = xi) = F(xi) − F(xi−1)

Example 1.4.1
Given that the p.d.f. of a continuous random variable X is
f(x) = 6x(1 − x), 0 < x < 1
     = 0,         otherwise
find the c.d.f. of X.
Solution
Given f(x) = 6x(1 − x), 0 < x < 1
The c.d.f is F(x) = ∫−∞^x f(t) dt, −∞ < x < ∞

(i) When x < 0:
F(x) = ∫−∞^x f(t) dt = ∫−∞^x 0 dt = 0   (1)

(ii) When 0 ≤ x ≤ 1:
F(x) = ∫−∞^x f(t) dt = ∫−∞^0 f(t) dt + ∫0^x f(t) dt
     = 0 + ∫0^x 6t(1 − t) dt = 6 [t²/2 − t³/3]0^x
     = 3x² − 2x³   (2)

(iii) When x > 1:
F(x) = ∫−∞^0 0 dt + ∫0^1 6t(1 − t) dt + ∫1^x 0 dt
     = 6 ∫0^1 (t − t²) dt = 1   (3)

Using (1), (2) and (3) we get
F(x) = 0,           x < 0
     = 3x² − 2x³,   0 ≤ x ≤ 1
     = 1,           x > 1

Example 1.4.2
(i) Does
f(x) = e^(−x), x ≥ 0
     = 0,      x < 0
define a density function?
(ii) If so, determine the probability that a variate having this density falls in the
interval (1, 2).
Solution
Given f(x) = e^(−x), x ≥ 0
           = 0,      x < 0
(a) In (0, ∞), e^(−x) is positive, so f(x) ≥ 0 in (0, ∞).
(b) ∫−∞^∞ f(x) dx = ∫−∞^0 f(x) dx + ∫0^∞ f(x) dx
                  = ∫−∞^0 0 dx + ∫0^∞ e^(−x) dx
                  = [−e^(−x)]0^∞ = −e^(−∞) + 1 = 1
Hence f(x) is a p.d.f.
(ii) We know that
P(a ≤ X ≤ b) = ∫a^b f(x) dx
P(1 ≤ X ≤ 2) = ∫1^2 e^(−x) dx = [−e^(−x)]1^2
             = −e^(−2) + e^(−1) = −0.135 + 0.368 = 0.233
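The probability in part (ii) can be confirmed in closed form, since for this density F(x) = 1 − e^(−x) (an illustrative sketch using only the standard library):

```python
import math

# For f(x) = e^(-x) on x >= 0, the c.d.f. is F(x) = 1 - e^(-x),
# so P(1 <= X <= 2) = F(2) - F(1) = e^(-1) - e^(-2).
F = lambda x: 1.0 - math.exp(-x)
prob = F(2.0) - F(1.0)
```

This gives 0.23254..., which rounds to the 0.233 computed above.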

Example 1.4.3
A probability curve y = f(x) has range from 0 to ∞. If f(x) = e^(−x), find the mean, the
variance and the third moment about the mean.
Solution
Mean = ∫0^∞ x f(x) dx = ∫0^∞ x e^(−x) dx
     = [x(−e^(−x)) − (e^(−x))]0^∞ = 1
∴ Mean = 1

Variance: µ2 = ∫0^∞ (x − Mean)² f(x) dx
            = ∫0^∞ (x − 1)² e^(−x) dx = 1
∴ µ2 = 1

Third moment about the mean:
µ3 = ∫0^∞ (x − Mean)³ f(x) dx = ∫0^∞ (x − 1)³ e^(−x) dx
   = [(x − 1)³(−e^(−x)) − 3(x − 1)²(e^(−x)) + 6(x − 1)(−e^(−x)) − 6(e^(−x))]0^∞
   = −1 + 3 − 6 + 6 = 2
∴ µ3 = 2
1.5 MOMENT GENERATING FUNCTION
Def: The moment generating function (MGF) of a random variable X (about the origin) is
M_X(t) = E[e^(tX)]
       = ∫−∞^∞ e^(tx) f(x) dx, for a continuous random variable
       = ∑x e^(tx) p(x),       for a discrete random variable
where t is a real parameter and the integration or summation extends over the entire range
of x.
Example 1.5.1
Prove that the rth moment of the R.V. X about the origin is the coefficient of t^r/r! in
M_X(t), i.e. M_X(t) = ∑_{r=0}^∞ (t^r/r!) µr'.
Proof
We know that M_X(t) = E(e^(tX))
= E[1 + tX/1! + (tX)²/2! + (tX)³/3! + ..... + (tX)^r/r! + .....]
= E[1] + t E(X) + (t²/2!) E(X²) + ..... + (t^r/r!) E(X^r) + ........
M_X(t) = 1 + t µ1' + (t²/2!) µ2' + (t³/3!) µ3' + ..... + (t^r/r!) µr' + ........
   [using µr' = E(X^r)]
Thus the rth moment about the origin = coefficient of t^r/r! in M_X(t).

Note
1. The above result gives the MGF in terms of moments.
2. Since M_X(t) generates moments, it is known as the moment generating function.

Example 1.5.2
Find µ1' and µ2' from M_X(t).
Proof
We know that M_X(t) = ∑_{r=0}^∞ (t^r/r!) µr'
M_X(t) = µ0' + (t/1!) µ1' + (t²/2!) µ2' + ..... + (t^r/r!) µr' + .....   (A)
Differentiating (A) with respect to t, we get
M_X'(t) = µ1' + t µ2' + (t²/2!) µ3' + .....   (B)
Put t = 0 in (B): M_X'(0) = µ1' = Mean
Mean = M_X'(0)   (or)   [d/dt (M_X(t))] at t = 0
Differentiating again,
M_X''(t) = µ2' + t µ3' + .......
Put t = 0: M_X''(0) = µ2'   (or)   [d²/dt² (M_X(t))] at t = 0
In general, µr' = [d^r/dt^r (M_X(t))] at t = 0
Example 1.5.3
Obtain the MGF of X about the point X = a.
Proof
The moment generating function of X about the point X = a is
M_X(t) (about X = a) = E[e^(t(X−a))]
= E[1 + t(X − a) + (t²/2!)(X − a)² + ..... + (t^r/r!)(X − a)^r + .....]
   [using the formula e^x = 1 + x/1! + x²/2! + ...]
= E(1) + t E(X − a) + (t²/2!) E[(X − a)²] + ..... + (t^r/r!) E[(X − a)^r] + .....
= 1 + t µ1' + (t²/2!) µ2' + ..... + (t^r/r!) µr' + .....,  where µr' = E[(X − a)^r]
∴ [M_X(t)] about X = a is 1 + t µ1' + (t²/2!) µ2' + ..... + (t^r/r!) µr' + .....

Result:
M_cX(t) = E[e^(tcX)]   (1)
M_X(ct) = E[e^(ctX)]   (2)
From (1) and (2) we get M_cX(t) = M_X(ct).

Example 1.5.4
If X1, X2, ....., Xn are independent random variables, then prove that
M_{X1+X2+.....+Xn}(t) = E[e^(t(X1+X2+.....+Xn))]
= E[e^(tX1) · e^(tX2) ..... e^(tXn)]
= E(e^(tX1)) · E(e^(tX2)) ..... E(e^(tXn))   [since X1, X2, ....., Xn are independent]
= M_X1(t) · M_X2(t) .......... M_Xn(t)

Example 1.5.5
Prove that if U = (X − a)/h, then M_U(t) = e^(−at/h) · M_X(t/h), where a and h are constants.
Proof
By definition
M_U(t) = E[e^(tU)]            [since M_X(t) = E[e^(tX)]]
       = E[e^(t(X−a)/h)]
       = E[e^(tX/h) · e^(−ta/h)]
       = e^(−ta/h) E[e^((t/h)X)]
       = e^(−ta/h) M_X(t/h)   [by definition]
∴ M_U(t) = e^(−at/h) · M_X(t/h), where U = (X − a)/h and M_X(t) is the MGF about the origin.

Example 1.5.6
Find the MGF of the distribution
f(x) = 2/3 at x = 1
     = 1/3 at x = 2
     = 0   otherwise
Solution
Given f(1) = 2/3, f(2) = 1/3, f(3) = f(4) = ...... = 0
The MGF of a R.V. X is given by
M_X(t) = E[e^(tX)] = ∑x e^(tx) f(x)
       = e^t f(1) + e^(2t) f(2) + .......
       = (2/3)e^t + (1/3)e^(2t)
∴ the MGF is M_X(t) = (e^t/3)(2 + e^t)
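That the sum E[e^(tX)] really collapses to (e^t/3)(2 + e^t) can be spot-checked numerically at a few values of t (an illustrative sketch):

```python
import math

pmf = {1: 2/3, 2: 1/3}                  # the two-point distribution above

def mgf_direct(t):
    # M_X(t) = E[e^(tX)] = sum over x of e^(tx) p(x)
    return sum(math.exp(t * x) * p for x, p in pmf.items())

def mgf_formula(t):
    # Closed form derived above: (e^t / 3)(2 + e^t)
    return (math.exp(t) / 3.0) * (2.0 + math.exp(t))

checks = [abs(mgf_direct(t) - mgf_formula(t)) for t in (-1.0, 0.0, 0.5, 1.0)]
```

The two agree to floating-point precision, and M_X(0) = 1 as any MGF must satisfy.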
1.6 Discrete Distributions
The important discrete distributions of a random variable X are:
1. Binomial distribution
2. Poisson distribution
3. Geometric distribution

1.6.1 BINOMIAL DISTRIBUTION
Def: A random variable X is said to follow the binomial distribution if its probability law
is given by
P(X = x) = p(x) = nCx p^x q^(n−x), x = 0, 1, 2, ......., n, where p + q = 1.
Note
Assumptions in the binomial distribution:
i) There are only two possible outcomes for each trial (success or failure).
ii) The probability of a success is the same for each trial.
iii) There are n trials, where n is a constant.
iv) The n trials are independent.



Example 1.6.1
Find the moment generating function (MGF) of the binomial distribution about the origin.
Solution
We know that M_X(t) = ∑_{x=0}^n e^(tx) p(x)
Let X be a random variable which follows the binomial distribution. Then the MGF about the
origin is given by
M_X(t) = E[e^(tX)] = ∑_{x=0}^n e^(tx) p(x)
       = ∑_{x=0}^n e^(tx) nCx p^x q^(n−x)   [since p(x) = nCx p^x q^(n−x)]
       = ∑_{x=0}^n (e^t)^x p^x nCx q^(n−x)
       = ∑_{x=0}^n (pe^t)^x nCx q^(n−x)
∴ M_X(t) = (q + pe^t)^n
Example 1.6.2
Find the mean and variance of the binomial distribution.
Solution
M_X(t) = (q + pe^t)^n
∴ M_X'(t) = n(q + pe^t)^(n−1) · pe^t
Put t = 0: M_X'(0) = n(q + p)^(n−1) p
Mean = E(X) = np        [since q + p = 1 and Mean = M_X'(0)]
M_X''(t) = np[(q + pe^t)^(n−1) e^t + e^t (n − 1)(q + pe^t)^(n−2) pe^t]
Put t = 0:
M_X''(0) = np[(q + p)^(n−1) + (n − 1)(q + p)^(n−2) p]
         = np[1 + (n − 1)p]
         = np + n²p² − np²
         = n²p² + np(1 − p)
         = n²p² + npq     [since 1 − p = q]
M_X''(0) = E(X²) = n²p² + npq
Var(X) = E(X²) − [E(X)]² = n²p² + npq − n²p² = npq
∴ Var(X) = npq
S.D. = √(npq)
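Mean np and variance npq can be confirmed directly from the pmf for a concrete case (an illustrative sketch; n = 10 and p = 0.3 are arbitrary choices):

```python
from math import comb

n, p = 10, 0.3
q = 1.0 - p

# Binomial pmf: p(x) = nCx p^x q^(n-x), x = 0..n
pmf = [comb(n, x) * p**x * q**(n - x) for x in range(n + 1)]

mean = sum(x * pmf[x] for x in range(n + 1))        # should equal np
second = sum(x * x * pmf[x] for x in range(n + 1))  # E(X^2)
variance = second - mean**2                          # should equal npq
```

The computed mean is 3.0 = np and the variance is 2.1 = npq, as derived above.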

Example 1.6.3
Find the moment generating function (MGF) of the binomial distribution about its mean (np).
Solution
We know that the MGF of a random variable X about any point a is
M_X(t) (about X = a) = E[e^(t(X−a))]
Here a is the mean np of the binomial distribution.
M_X(t) (about X = np) = E[e^(t(X−np))]
= E[e^(tX) · e^(−tnp)]
= e^(−tnp) · E[e^(tX)]
= e^(−tnp) · (q + pe^t)^n
= (e^(−tp))^n (q + pe^t)^n
∴ MGF about the mean = (e^(−tp))^n (q + pe^t)^n

Example 1.6.4
Additive property of the binomial distribution.
Solution
The sum of two binomial variates is not, in general, a binomial variate.
Let X and Y be two independent binomial variates with parameters (n1, p1) and (n2, p2)
respectively. Then
M_X(t) = (q1 + p1 e^t)^n1,  M_Y(t) = (q2 + p2 e^t)^n2
∴ M_{X+Y}(t) = M_X(t) · M_Y(t)   [since X and Y are independent R.V.'s]
             = (q1 + p1 e^t)^n1 · (q2 + p2 e^t)^n2
The RHS cannot be expressed in the form (q + pe^t)^n. Hence, by the uniqueness theorem of
MGFs, X + Y is not a binomial variate. Hence, in general, the sum of two binomial variates
is not a binomial variate.

Example 1.6.5
If M_X(t) = (q + pe^t)^n1 and M_Y(t) = (q + pe^t)^n2, then
M_{X+Y}(t) = (q + pe^t)^(n1+n2), i.e. when both variates have the same p, X + Y is binomial
with parameters (n1 + n2, p).

Problems on the Binomial Distribution

1. Check whether the following data can come from a binomial distribution: Mean = 3,
variance = 4.
Solution
Given Mean: np = 3   (1)
Variance: npq = 4    (2)
(2)/(1) ⇒ npq/np = 4/3
⇒ q = 4/3, which is > 1.
Since q > 1 is not possible (0 < q < 1), the given data cannot follow a binomial
distribution.

Example 1.6.6
The mean and SD of a binomial distribution are 5 and 2. Determine the distribution.
Solution
Given Mean: np = 5             (1)
SD: √(npq) = 2 ⇒ npq = 4       (2)
(2)/(1) ⇒ npq/np = 4/5 ⇒ q = 4/5
∴ p = 1 − 4/5 = 1/5            (3)
Substituting (3) in (1): n × 1/5 = 5 ⇒ n = 25
∴ The binomial distribution is
P(X = x) = p(x) = nCx p^x q^(n−x)
         = 25Cx (1/5)^x (4/5)^(25−x), x = 0, 1, 2, ....., 25

1.7 Poisson Distribution

Def:
A random variable X is said to follow the Poisson distribution if its probability law is
given by
P(X = x) = p(x) = e^(−λ) λ^x / x!, x = 0, 1, 2, ....., ∞

The Poisson distribution is a limiting case of the binomial distribution under the following
conditions or assumptions:
1. The number of trials n is infinitely large, i.e. n → ∞.
2. The probability of success p for each trial is infinitely small.
3. np = λ is finite, where λ is a constant.

* To find the MGF
M_X(t) = E(e^(tX)) = ∑_{x=0}^∞ e^(tx) p(x)
       = ∑_{x=0}^∞ e^(tx) e^(−λ) λ^x / x!
       = e^(−λ) ∑_{x=0}^∞ (λe^t)^x / x!
       = e^(−λ) [1 + λe^t + (λe^t)²/2! + ......]
       = e^(−λ) e^(λe^t) = e^(λ(e^t − 1))
Hence M_X(t) = e^(λ(e^t − 1))

* To find the mean and variance
From the MGF: M_X'(t) = e^(λ(e^t − 1)) · λe^t, so M_X'(0) = λ = Mean.
Directly,
µ1' = E(X) = ∑_{x=0}^∞ x p(x)
    = ∑_{x=0}^∞ x e^(−λ) λ^x / x!
    = e^(−λ) λ ∑_{x=1}^∞ λ^(x−1) / (x − 1)!
    = λ e^(−λ) [1 + λ + λ²/2! + .....]
    = λ e^(−λ) e^λ = λ
∴ Mean = λ

µ2' = E[X²] = ∑_{x=0}^∞ x² p(x) = ∑_{x=0}^∞ x² e^(−λ) λ^x / x!
    = ∑_{x=0}^∞ {x(x − 1) + x} e^(−λ) λ^x / x!
    = ∑_{x=0}^∞ x(x − 1) e^(−λ) λ^x / x! + ∑_{x=0}^∞ x e^(−λ) λ^x / x!
    = e^(−λ) λ² ∑_{x=2}^∞ λ^(x−2) / (x − 2)! + λ
    = e^(−λ) λ² [1 + λ/1! + λ²/2! + .....] + λ
    = e^(−λ) λ² e^λ + λ
    = λ² + λ
Variance: µ2 = E(X²) − [E(X)]² = λ² + λ − λ² = λ
∴ Variance = λ
Hence Mean = Variance = λ

Note: The sum of independent Poisson variates is also a Poisson variate.
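The result Mean = Variance = λ can be checked by truncating the Poisson series far into its tail (an illustrative sketch; λ = 2.5 is an arbitrary choice and terms beyond x = 60 are negligible for it):

```python
import math

lam = 2.5
# Poisson pmf: p(x) = e^(-lam) lam^x / x!, truncated at x = 60.
pmf = [math.exp(-lam) * lam**x / math.factorial(x) for x in range(61)]

mean = sum(x * p for x, p in enumerate(pmf))
second = sum(x * x * p for x, p in enumerate(pmf))
variance = second - mean**2
```

Both `mean` and `variance` come out equal to λ to floating-point precision, and the truncated pmf sums to 1.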

PROBLEMS ON THE POISSON DISTRIBUTION

Example 1.7.1
If X is a Poisson variate such that P(X = 1) = 3/10 and P(X = 2) = 1/5, find P(X = 0)
and P(X = 3).
Solution
P(X = x) = e^(−λ) λ^x / x!
∴ P(X = 1) = e^(−λ) λ = 3/10                           (1)   [Given]
P(X = 2) = e^(−λ) λ²/2! = 1/5, i.e. e^(−λ) λ² = 2/5    (2)   [Given]
(1)/(2) ⇒ 1/λ = (3/10)/(2/5) = 3/4
∴ λ = 4/3
∴ P(X = 0) = e^(−λ) λ⁰/0! = e^(−4/3)
P(X = 3) = e^(−λ) λ³/3! = e^(−4/3) (4/3)³ / 3!

Example 1.7.2
If X is a Poisson variate such that
P(X = 2) = 9 P(X = 4) + 90 P(X = 6),
find (i) the mean of X and (ii) the variance of X.
Solution
P(X = x) = e^(−λ) λ^x / x!, x = 0, 1, 2, .....
Given P(X = 2) = 9 P(X = 4) + 90 P(X = 6):
e^(−λ) λ²/2! = 9 e^(−λ) λ⁴/4! + 90 e^(−λ) λ⁶/6!
1/2 = 9λ²/4! + 90λ⁴/6!
1/2 = 3λ²/8 + λ⁴/8
1 = (3λ² + λ⁴)/4
λ⁴ + 3λ² − 4 = 0
λ² = 1 or λ² = −4
λ = ±1 or λ = ±2i
Since λ must be a positive real number, λ = 1.
∴ Mean = λ = 1, Variance = λ = 1
∴ Standard deviation = 1

1.7.3 Derivation of the probability mass function of the Poisson distribution as a limiting
case of the binomial distribution
Solution
We know that the binomial distribution is
P(X = x) = nCx p^x q^(n−x)
         = [n! / ((n − x)! x!)] p^x (1 − p)^(n−x)
Putting p = λ/n:
         = [n(n − 1)(n − 2)......(n − x + 1) / x!] (λ/n)^x (1 − λ/n)^(n−x)
         = (λ^x/x!) [1(1 − 1/n)(1 − 2/n)......(1 − (x−1)/n)] (1 − λ/n)^(n−x)
When n → ∞,
P(X = x) = (λ^x/x!) lim n→∞ [(1 − 1/n)(1 − 2/n)......(1 − (x−1)/n)] (1 − λ/n)^(n−x)
We know that
lim n→∞ (1 − λ/n)^(n−x) = e^(−λ)
and lim n→∞ (1 − 1/n) = lim n→∞ (1 − 2/n) = ..... = lim n→∞ (1 − (x−1)/n) = 1
∴ P(X = x) = (λ^x/x!) e^(−λ), x = 0, 1, 2, ......, ∞

1.8 GEOMETRIC DISTRIBUTION

Def: A discrete random variable X is said to follow the geometric distribution if it assumes
only positive integer values and its probability mass function is given by
P(X = x) = p(x) = q^(x−1) p ; x = 1, 2, ......., 0 < p < 1, where q = 1 − p.

Example 1.8.1
To find the MGF:
M_X(t) = E[e^(tX)]
       = ∑_{x=1}^∞ e^(tx) q^(x−1) p
       = (p/q) ∑_{x=1}^∞ (e^t q)^x
       = (p/q) [(e^t q) + (e^t q)² + (e^t q)³ + ....]
Let y = e^t q. Then
       = (p/q) [y + y² + y³ + ....]
       = (p/q) y [1 + y + y² + ....] = (p/q) y (1 − y)^(−1)
       = (p/q) qe^t [1 − qe^t]^(−1) = pe^t [1 − qe^t]^(−1)
∴ M_X(t) = pe^t / (1 − qe^t)

* To find the Mean & Variance
M′_X(t) = [(1 − q e^t) p e^t − p e^t (−q e^t)] / (1 − q e^t)^2 = p e^t / (1 − q e^t)^2
∴ E(X) = M′_X(0) = p / (1 − q)^2 = 1/p
∴ Mean = 1/p
M″_X(t) = d/dt [ p e^t / (1 − q e^t)^2 ]
        = [(1 − q e^t)^2 p e^t − p e^t · 2(1 − q e^t)(−q e^t)] / (1 − q e^t)^4
        = [(1 − q e^t)^2 p e^t + 2 p q e^(2t) (1 − q e^t)] / (1 − q e^t)^4
M″_X(0) = (1 + q) / p^2
Var (X) = E(X^2) − [E(X)]^2 = (1 + q)/p^2 − 1/p^2
∴ Var (X) = q / p^2
Note:
Another form of the geometric distribution is
P[X = x] = q^x p ; x = 0, 1, 2, ….
M_X(t) = p / (1 − q e^t)
Mean = q/p, Variance = q/p^2
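The formulas Mean = 1/p and Var(X) = q/p^2 can be verified by summing the pmf directly. A minimal sketch in Python (the value p = 0.3 and the truncation point 2000 are arbitrary choices; the geometric tail beyond that is negligibly small):

```python
from math import isclose

p = 0.3
q = 1 - p

# Direct summation of the pmf P(X=x) = q^(x-1) p, x = 1, 2, ...,
# truncated far enough into the tail that the remainder is negligible.
mean = sum(x * q**(x - 1) * p for x in range(1, 2000))
ex2  = sum(x * x * q**(x - 1) * p for x in range(1, 2000))
var  = ex2 - mean**2

print(mean, 1 / p)        # both ≈ 3.3333
print(var, q / p**2)      # both ≈ 7.7778
```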



Example:1.8.2
If the MGF of X is (5 − 4e^t)^(−1), find the distribution of X and P(X = 5)
Solution
Let the geometric distribution be
P(X = x) = q^x p, x = 0, 1, 2, .....
The MGF of this geometric distribution is given by
M_X(t) = p / (1 − q e^t)                                      (1)
Here M_X(t) = (5 − 4e^t)^(−1) = (1/5) [1 − (4/5) e^t]^(−1)    (2)
Comparing (1) & (2) we get q = 4/5 ; p = 1/5
∴ P(X = x) = p q^x = (1/5)(4/5)^x, x = 0, 1, 2, 3, .......
P(X = 5) = (1/5)(4/5)^5 = 4^5 / 5^6
1.9 CONTINUOUS DISTRIBUTIONS
If ‘X’ is a continuous random variable then we have the following distributions
1. Uniform (Rectangular) Distribution
2. Exponential Distribution
3. Gamma Distribution
4. Normal Distribution
1.9.1 Uniform Distribution (Rectangular Distribution)
Def: A random variable X is said to follow a uniform distribution if its density function is
f(x) = 1/(b − a), a < x < b
     = 0,         otherwise
* To find MGF
M_X(t) = ∫_{−∞}^{∞} e^(tx) f(x) dx
       = ∫_a^b e^(tx) · [1/(b − a)] dx
       = [1/(b − a)] [e^(tx)/t]_a^b
       = (e^(bt) − e^(at)) / (b − a)t
∴ The MGF of the uniform distribution is
M_X(t) = (e^(bt) − e^(at)) / (b − a)t

* To find Mean and Variance
E(X) = ∫_{−∞}^{∞} x f(x) dx
     = ∫_a^b [x/(b − a)] dx = [1/(b − a)] [x^2/2]_a^b
     = (b^2 − a^2) / 2(b − a) = (a + b)/2
∴ Mean μ₁′ = (a + b)/2
Similarly, the second moment is
μ₂′ = ∫_a^b x^2 f(x) dx = ∫_a^b [x^2/(b − a)] dx = (a^2 + ab + b^2)/3
∴ Variance = μ₂′ − μ₁′^2
           = (a^2 + ab + b^2)/3 − [(a + b)/2]^2 = (b − a)^2/12
Variance = (b − a)^2/12
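These two formulas are easy to confirm by simulation. A Monte Carlo sketch in Python (the interval (2, 10), the seed, and the sample size are arbitrary choices):

```python
import random
from statistics import fmean

# Monte Carlo check of Mean = (a+b)/2 and Variance = (b-a)^2/12
# for the uniform distribution on (a, b).
random.seed(1)
a, b = 2.0, 10.0
xs = [random.uniform(a, b) for _ in range(200_000)]

mean = fmean(xs)
var = fmean([(x - mean) ** 2 for x in xs])

print(mean, (a + b) / 2)          # both ≈ 6.0
print(var, (b - a) ** 2 / 12)     # both ≈ 5.333
```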

PROBLEMS ON UNIFORM DISTRIBUTION

Example 1.9.1
If X is uniformly distributed over (−α, α), α > 0, find α so that
(i)  P(X > 1) = 1/3
(ii) P(|X| < 1) = P(|X| > 1)
Solution
If X is uniformly distributed in (−α, α), then its p.d.f. is
f(x) = 1/2α, −α < x < α
     = 0,    otherwise
(i) P(X > 1) = 1/3
∫_1^α f(x) dx = 1/3
∫_1^α (1/2α) dx = 1/3
(1/2α)[x]_1^α = 1/3  ⇒  (α − 1)/2α = 1/3
3α − 3 = 2α
α = 3
(ii) P(|X| < 1) = P(|X| > 1) = 1 − P(|X| < 1)
2 P(|X| < 1) = 1
2 P(−1 < X < 1) = 1
2 ∫_{−1}^{1} (1/2α) dx = 1
2 · (2/2α) = 1  ⇒  2/α = 1
⇒ α = 2

Note:
1. The distribution function F(x) is given by
F(x) = 0,               x < a
     = (x − a)/(b − a), a ≤ x ≤ b
     = 1,               b < x < ∞
2. The p.d.f. of a uniform variate ‘X’ in (−a, a) is given by
f(x) = 1/2a, −a < x < a
     = 0,    otherwise

1.10 THE EXPONENTIAL DISTRIBUTION

Def: A continuous random variable ‘X’ is said to follow an exponential distribution with
parameter λ > 0 if its probability density function is given by
f(x) = λ e^(−λx), x > 0
     = 0,         otherwise

To find MGF
Solution
M_X(t) = ∫_{−∞}^{∞} e^(tx) f(x) dx
       = ∫_0^∞ e^(tx) λ e^(−λx) dx = λ ∫_0^∞ e^(−(λ−t)x) dx
       = λ [e^(−(λ−t)x) / −(λ − t)]_0^∞
       = [λ / −(λ − t)] [e^(−∞) − e^0] = λ/(λ − t)
∴ MGF of X is M_X(t) = λ/(λ − t), λ > t

* To find Mean and Variance
We know that the MGF is
M_X(t) = λ/(λ − t) = 1/(1 − t/λ) = (1 − t/λ)^(−1)
       = 1 + t/λ + t^2/λ^2 + ..... + t^r/λ^r + .....
       = 1 + (t/1!)(1/λ) + (t^2/2!)(2!/λ^2) + ..... + (t^r/r!)(r!/λ^r) + .....
∴ Mean μ₁′ = coefficient of t/1! = 1/λ
  μ₂′ = coefficient of t^2/2! = 2/λ^2
Variance μ₂ = μ₂′ − μ₁′^2 = 2/λ^2 − 1/λ^2 = 1/λ^2
Mean = 1/λ, Variance = 1/λ^2
Example: 1.10.1
Let ‘X’ be a random variable with p.d.f
f(x) = (1/3) e^(−x/3), x > 0
     = 0,              otherwise
Find 1) P(X > 3)  2) MGF of ‘X’
Solution
WKT the exponential distribution is
f(x) = λ e^(−λx), x > 0
Here λ = 1/3
P(X > 3) = ∫_3^∞ f(x) dx = ∫_3^∞ (1/3) e^(−x/3) dx = [−e^(−x/3)]_3^∞
P(X > 3) = e^(−1)
MGF is M_X(t) = λ/(λ − t)
             = (1/3) / ((1/3) − t) = (1/3) / ((1 − 3t)/3)
M_X(t) = 1/(1 − 3t)
Note
If X is exponentially distributed, then
P(X > s + t | X > s) = P(X > t), for any s, t > 0 (the memoryless property).
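The memoryless property can be seen in simulation. A Monte Carlo sketch in Python (λ = 1/3 to match the example above; s, t, the seed and sample size are arbitrary choices):

```python
import random

# Monte Carlo illustration of P(X > s+t | X > s) = P(X > t)
# for an exponential random variable with rate lam.
random.seed(7)
lam, s, t = 1 / 3, 2.0, 3.0
xs = [random.expovariate(lam) for _ in range(500_000)]

p_t = sum(x > t for x in xs) / len(xs)
beyond_s = [x for x in xs if x > s]
p_cond = sum(x > s + t for x in beyond_s) / len(beyond_s)

print(p_cond, p_t)   # both ≈ e^(-t/3) = e^(-1) ≈ 0.368
```

Both estimates agree with e^(−λt): having already survived past s does not change the distribution of the remaining waiting time.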

1.11 GAMMA DISTRIBUTION

Definition
A continuous random variable X taking non-negative values is said to follow the gamma
distribution if its probability density function is given by
f(x) = e^(−x) x^(α−1) / Γ(α), α > 0, 0 < x < ∞
     = 0, elsewhere
where Γ(α) = ∫_0^∞ e^(−x) x^(α−1) dx
and α is the parameter of the distribution.

Additive property of Gamma Variates


.p

If X 1 ,X 2 , X 3,.... X k are independent gamma variates with parameters


λ 1, λ 2,….. λ k respectively then X1+X2 + X 3+.... +X k is also a gamma variates with parameter λ 1 + λ 2
w

+ ….. + λ k
.
w

Example :1.11.1
Customer demand for milk in a certain locality, per month, is known to be a general
gamma R.V. If the average demand is a litres and the most likely demand is b litres
(b < a), what is the variance of the demand?
Solution:
Let X represent the monthly customer demand for milk, a general (two-parameter)
gamma variate with density
f(x) = (λ^k / Γ(k)) x^(k−1) e^(−λx), x > 0
Average demand is the value of E(X).
Most likely demand is the value of the mode of X, i.e. the value of x for which the density
f(x) is maximum.
f′(x) = (λ^k / Γ(k)) [(k − 1) x^(k−2) e^(−λx) − λ x^(k−1) e^(−λx)]
      = 0, when x = 0 or x = (k − 1)/λ
and f″(x) < 0 when x = (k − 1)/λ
Therefore f(x) is maximum when x = (k − 1)/λ
i.e., most likely demand = (k − 1)/λ = b        ....(1)
and E(X) = k/λ = a                              ....(2)
Now V(X) = k/λ^2 = (k/λ)(1/λ)
         = a(a − b), from (1) and (2)  [since a − b = k/λ − (k − 1)/λ = 1/λ]
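The identity V(X) = a(a − b) can be checked by simulation. A sketch in Python (shape k = 5 and rate λ = 2 are arbitrary choices; note `random.gammavariate` takes the scale 1/λ as its second argument):

```python
import random
from statistics import fmean

# Numerical check of V(X) = a(a-b) for a gamma r.v. with shape k and rate lam:
# a = E(X) = k/lam (mean), b = mode = (k-1)/lam, so a(a-b) = k/lam^2 = Var(X).
random.seed(3)
k, lam = 5.0, 2.0
a = k / lam           # average demand
b = (k - 1) / lam     # most likely demand (mode)

xs = [random.gammavariate(k, 1 / lam) for _ in range(300_000)]
m = fmean(xs)
v = fmean([(x - m) ** 2 for x in xs])

print(v, a * (a - b))   # both ≈ k/lam^2 = 1.25
```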

TUTORIAL QUESTIONS

1. It is known that the probability of an item produced by a certain machine being
defective is 0.05. If the produced items are sent to the market in packets of 20, find the
number of packets containing at least, exactly and at most 2 defective items in a
consignment of 1000 packets using (i) the Binomial distribution (ii) the Poisson
approximation to the Binomial distribution.
2. The daily consumption of milk in excess of 20,000 gallons is approximately
exponentially distributed with θ = 3000. The city has a daily stock of 35,000 gallons.
What is the probability that of two days selected at random, the stock is insufficient for
both days?
3. The density function of a random variable X is given by f(x) = Kx(2 − x), 0 ≤ x ≤ 2.
Find K, mean, variance and the rth moment.
4. A binomial variable X satisfies the relation 9P(X=4) = P(X=2) when n = 6. Find the
parameter p of the Binomial distribution.
5. Find the M.G.F of the Poisson distribution.
6. If X and Y are independent Poisson variates such that P(X=1) = P(X=2) and
P(Y=2) = P(Y=3), find V(X − 2Y).
7. A discrete random variable has the following probability distribution
X:    0   1    2    3    4    5     6     7     8
P(X): a   3a   5a   7a   9a   11a   13a   15a   17a
Find the value of a, P(X < 3) and the c.d.f of X.
8. In a component manufacturing industry, there is a small probability of 1/500 for any
component to be defective. The components are supplied in packets of 10. Use the
Poisson distribution to calculate the approximate number of packets containing (1) no
defective (2) two defective components in a consignment of 10,000 packets.

WORKED OUT EXAMPLES

Example :1
Given that the p.d.f. of a continuous random variable ‘X’ is
f(x) = 6x(1 − x), 0 < x < 1
     = 0,         otherwise
find the c.d.f. for ‘X’.
Solution
Given f(x) = 6x(1 − x), 0 < x < 1; 0 otherwise
The c.d.f is F(x) = ∫_{−∞}^{x} f(t) dt, −∞ < x < ∞
(i) When x < 0, then
F(x) = ∫_{−∞}^{x} f(t) dt = ∫_{−∞}^{x} 0 dt = 0                 (1)
(ii) When 0 < x < 1, then
F(x) = ∫_{−∞}^{0} f(t) dt + ∫_{0}^{x} f(t) dt
     = 0 + ∫_{0}^{x} 6t(1 − t) dt = 6 [t^2/2 − t^3/3]_0^x
     = 3x^2 − 2x^3                                              (2)
(iii) When x > 1, then
F(x) = ∫_{−∞}^{0} 0 dt + ∫_{0}^{1} 6t(1 − t) dt + ∫_{1}^{x} 0 dt
     = 6 ∫_{0}^{1} (t − t^2) dt = 1                             (3)
Using (1), (2) & (3) we get
F(x) = 0,            x < 0
     = 3x^2 − 2x^3,  0 < x < 1
     = 1,            x > 1

Example :2
A random variable X has the following probability function
Values of X       0   1    2    3    4    5     6     7     8
Probability P(X)  a   3a   5a   7a   9a   11a   13a   15a   17a

(i) Determine the value of ‘a’
(ii) Find P(X < 3), P(X ≥ 3), P(0 < X < 5)
(iii) Find the distribution function of X.
Solution
Table 1
Values of X   0   1    2    3    4    5     6     7     8
p(x)          a   3a   5a   7a   9a   11a   13a   15a   17a

(i) We know that if p(x) is the probability mass function then
∑_{i=0}^{8} p(x_i) = 1
p(0) + p(1) + p(2) + p(3) + p(4) + p(5) + p(6) + p(7) + p(8) = 1
a + 3a + 5a + 7a + 9a + 11a + 13a + 15a + 17a = 1
81a = 1
a = 1/81
Put a = 1/81 in Table 1, we get Table 2

Table 2
X = x   0      1      2      3      4      5       6       7       8
P(x)    1/81   3/81   5/81   7/81   9/81   11/81   13/81   15/81   17/81

(ii) P(X < 3) = p(0) + p(1) + p(2)
             = 1/81 + 3/81 + 5/81 = 9/81
     P(X ≥ 3) = 1 − P(X < 3)
             = 1 − 9/81 = 72/81
     P(0 < X < 5) = p(1) + p(2) + p(3) + p(4)   (here 0 & 5 are not included)
             = 3/81 + 5/81 + 7/81 + 9/81 = 24/81
(iii) To find the distribution function of X using Table 2, we get

X = x   F(x) = P(X ≤ x)


0 F(0) = p(0) = 1/81
F(1) = P(X ≤ 1) = p(0) + p(1)
1
= 1/81 + 3/81 = 4/81
F(2) = P(X ≤ 2) = p(0) + p(1) + p(2)
2
= 4/81 + 5/81 = 9/81
F(3) = P(X ≤ 3) = p(0) + p(1) + p(2) + p(3)
3
= 9/81 + 7/81 = 16/81
F(4) = P(X ≤ 4) = p(0) + p(1) + …. + p(4)
4
= 16/81 + 9/81 = 25/81
F(5) = P(X ≤ 5) = p(0) + p(1) + ….. + p(4) + p(5)
5
= 25/81 + 11/81 = 36/81

F(6) = P(X ≤ 6) = p(0) + p(1) + ….. + p(6)
6
= 36/81 + 13/81 = 49/81

7
F(7)
= 49/81 + 15/81 = 64/81 pz
= P(X ≤ 7) = p(0) + p(1) + …. + p(6) + p(7)
ee
F(8) = P(X ≤ 8) = p(0) + p(1) + ….. + p(6) + p(7) + p(8)
8
= 64/81 + 17/81 = 81/81 = 1
ad

Example :3
The mean and SD of a binomial distribution are 5 and 2; determine the distribution.
Solution
Given Mean = np = 5                  (1)
SD = √(npq) = 2  ⇒  npq = 4          (2)
(2)/(1)  ⇒  npq/np = 4/5  ⇒  q = 4/5
∴ p = 1 − 4/5 = 1/5                  (3)
Sub (3) in (1) we get
n × 1/5 = 5  ⇒  n = 25
∴ The binomial distribution is
P(X = x) = p(x) = nCx p^x q^(n−x)
         = 25Cx (1/5)^x (4/5)^(25−x),  x = 0, 1, 2, ....., 25
Example :4
If X is a Poisson variable with
P(X = 2) = 9 P(X = 4) + 90 P(X = 6)
find (i) Mean of X (ii) Variance of X
Solution
P(X = x) = e^(−λ) λ^x / x!, x = 0, 1, 2, .....
Given P(X = 2) = 9 P(X = 4) + 90 P(X = 6)
e^(−λ) λ^2 / 2! = 9 e^(−λ) λ^4 / 4! + 90 e^(−λ) λ^6 / 6!
Dividing both sides by e^(−λ) λ^2:
1/2 = 9λ^2/4! + 90λ^4/6!
1/2 = 3λ^2/8 + λ^4/8
1 = 3λ^2/4 + λ^4/4
λ^4 + 3λ^2 − 4 = 0
λ^2 = 1 or λ^2 = −4
λ = ±1 or λ = ±2i
Since the Poisson parameter λ must be real and positive, λ = 1.
∴ Mean = λ = 1, Variance = λ = 1
∴ Standard Deviation = 1

UNIT – II

TWO DIMENSIONAL RANDOM VARIABLES

Introduction

In the previous chapter we studied various aspects of the theory of a single R.V. In this
chapter we extend our theory to include two R.V's, one for each coordinate axis X and Y
of the XY plane.

DEFINITION : Let S be the sample space. Let X = X(s) & Y = Y(s) be two functions each
assigning a real number to each outcome s ∈ S. Then (X, Y) is a two dimensional random
variable.
2.1 Types of random variables
1. Discrete R.V.’s

2. Continuous R.V.’s
Discrete R.V.’s (Two Dimensional Discrete R.V.’s)

If the possible values of (X, Y) are finite, then (X, Y) is called a two dimensional discrete
R.V. and it can be represented by (x_i, y_j), i = 1, 2, …., m; j = 1, 2, …., n.
In the study of two dimensional discrete R.V.’s we have the following
5 important terms.
ee
• Joint Probability Function (JPF) (or) Joint Probability Mass Function.
• Joint Probability Distribution.
• Marginal Probability Function of X.

• Marginal Probability Function of Y.


• Conditional Probability Function.

2.1.1 Joint Probability Function of discrete R.V.’s X and Y



The function P(X = x_i, Y = y_j) = P(x_i, y_j) is called the joint probability function for
discrete random variables X and Y, and is denoted by p_ij.

Note
1. P(X = xi , Y = yj ) = P[(X = xi )∩(Y = yj )] = p ij

2. It should satisfy the following conditions
(i) p_ij ≥ 0 ∀ i, j    (ii) Σ_j Σ_i p_ij = 1
2.1.2 Marginal Probability Function of X
If the joint probability distribution of two random variables X and Y is given, then the
marginal probability function of X is given by
P_X(x_i) = Σ_j p_ij = p_i.
Similarly, the marginal probability function of Y is P_Y(y_j) = Σ_i p_ij = p_.j
Conditional Probabilities
The conditional probability function of X given Y = y_j is given by
P[X = x_i / Y = y_j] = P[X = x_i, Y = y_j] / P[Y = y_j] = p_ij / p_.j
The set {x_i, p_ij / p_.j}, i = 1, 2, 3, ….. is called the conditional probability distribution of X
given Y = y_j.
The conditional probability function of Y given X = x_i is given by
P[Y = y_j / X = x_i] = P[X = x_i, Y = y_j] / P[X = x_i] = p_ij / p_i.
The set {y_j, p_ij / p_i.}, j = 1, 2, 3, ….. is called the conditional probability distribution of Y
given X = x_i.

SOLVED PROBLEMS ON MARGINAL DISTRIBUTION


Example:2.1.1
From the following joint distribution of X and Y find the marginal distributions.
     X     0      1      2
Y
0        3/28   9/28   3/28
1        3/14   3/14   0
2        1/28   0      0
Solution
      X        0       1       2     P_Y(y) = P(Y = y)
Y
0            3/28    9/28    3/28    15/28 = P_Y(0)
1            3/14    3/14    0       6/14  = P_Y(1)
2            1/28    0       0       1/28  = P_Y(2)
P_X(x) =     10/28   15/28   3/28    1
P(X = x)     = 5/14

The marginal distribution of X
P_X(0) = P(X = 0) = p(0,0) + p(0,1) + p(0,2) = 5/14
P_X(1) = P(X = 1) = p(1,0) + p(1,1) + p(1,2) = 15/28
P_X(2) = P(X = 2) = p(2,0) + p(2,1) + p(2,2) = 3/28

Marginal probability function of X is
P_X(x) = 5/14,  x = 0
       = 15/28, x = 1
       = 3/28,  x = 2

The marginal distribution of Y
P Y (0) = P(Y = 0) = p(0,0) + p(1,0) + p(2,0) = 15/28
P Y (1) = P(Y = 1) = p(0,1) + p(2,1) + p(1,1) = 3/7


P Y (2) = P(Y = 2) = p(0,2) + p(1,2) + p(2,2) = 1/28


Marginal probability function of Y is
P_Y(y) = 15/28, y = 0
       = 3/7,   y = 1
       = 1/28,  y = 2
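Marginal sums like these are easy to automate. A sketch in Python using exact fractions, with the joint table of Example 2.1.1 stored as a dictionary keyed by (x, y):

```python
from fractions import Fraction as F

# Joint distribution of Example 2.1.1 as a dict {(x, y): probability}
joint = {
    (0, 0): F(3, 28), (1, 0): F(9, 28), (2, 0): F(3, 28),
    (0, 1): F(3, 14), (1, 1): F(3, 14), (2, 1): F(0),
    (0, 2): F(1, 28), (1, 2): F(0),     (2, 2): F(0),
}

# Marginal of X: sum over y; marginal of Y: sum over x.
p_x = {x: sum(p for (i, j), p in joint.items() if i == x) for x in range(3)}
p_y = {y: sum(p for (i, j), p in joint.items() if j == y) for y in range(3)}

print(p_x)   # marginal of X: 5/14, 15/28, 3/28
print(p_y)   # marginal of Y: 15/28, 3/7, 1/28
```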

2.3 CONTINUOUS RANDOM VARIABLES


• Two dimensional continuous R.V.’s
If (X, Y) can take all the values in a region R in the XY plane, then (X, Y) is called a
two-dimensional continuous random variable.
• Joint probability density function :

.n
(i) f_XY(x, y) ≥ 0 ;  (ii) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f_XY(x, y) dy dx = 1
Joint probability distribution function
F(x, y) = P[X ≤ x, Y ≤ y]
        = ∫_{−∞}^{y} [ ∫_{−∞}^{x} f(x, y) dx ] dy
ad

• Marginal probability density function
f(x) = f_X(x) = ∫_{−∞}^{∞} f_XY(x, y) dy   (Marginal pdf of X)
f(y) = f_Y(y) = ∫_{−∞}^{∞} f_XY(x, y) dx   (Marginal pdf of Y)
• Conditional probability density function
(i)  P(Y = y / X = x) = f(y/x) = f(x, y)/f(x), f(x) > 0
(ii) P(X = x / Y = y) = f(x/y) = f(x, y)/f(y), f(y) > 0

Example :2.3.1
Show that the function
f(x, y) = (2/5)(2x + 3y), 0 < x < 1, 0 < y < 1
        = 0,              otherwise
is a joint density function of X and Y.
Solution
We know that if f(x, y) satisfies the conditions
(i) f(x, y) ≥ 0  (ii) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1, then f(x, y) is a j.d.f.
(i) f(x, y) ≥ 0 in the given region 0 < x < 1, 0 < y < 1
(ii) ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = ∫_0^1 ∫_0^1 (2/5)(2x + 3y) dx dy
    = (2/5) ∫_0^1 [2 · x^2/2 + 3xy]_0^1 dy
    = (2/5) ∫_0^1 (1 + 3y) dy = (2/5) [y + 3y^2/2]_0^1 = (2/5)(1 + 3/2)
    = (2/5)(5/2) = 1
Since f(x, y) satisfies the two conditions, it is a j.d.f.
Example :2.3.2
The j.d.f of the random variables X and Y is given by
f(x, y) = 8xy, 0 < x < 1, 0 < y < x
        = 0,   otherwise
Find (i) f_X(x) (ii) f_Y(y) (iii) f(y/x)
Solution
We know that
(i) The marginal pdf of ‘X’ is
f_X(x) = f(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_0^x 8xy dy = 4x^3
f(x) = 4x^3, 0 < x < 1
(ii) The marginal pdf of ‘Y’ is (in the region, y < x < 1)
f_Y(y) = f(y) = ∫_{−∞}^{∞} f(x, y) dx = ∫_y^1 8xy dx = 4y(1 − y^2)
f(y) = 4y(1 − y^2), 0 < y < 1
(iii) We know that
f(y/x) = f(x, y)/f(x)
       = 8xy/4x^3 = 2y/x^2, 0 < y < x, 0 < x < 1
Result
Marginal pdf of X     Marginal pdf of Y          f(y/x)
4x^3, 0 < x < 1       4y(1 − y^2), 0 < y < 1     2y/x^2, 0 < y < x, 0 < x < 1

2.4 REGRESSION
* Lines of regression
The line of regression of X on Y is given by
x − x̄ = r (σ_x/σ_y)(y − ȳ)
The line of regression of Y on X is given by
y − ȳ = r (σ_y/σ_x)(x − x̄)
* Angle between the two lines of regression
tan θ = [(1 − r^2)/r] · [σ_x σ_y / (σ_x^2 + σ_y^2)]
* Regression coefficients
Regression coefficient of Y on X:  b_YX = r σ_y/σ_x
Regression coefficient of X on Y:  b_XY = r σ_x/σ_y
∴ Correlation coefficient r = ± √(b_XY × b_YX)

Example:2.4.1
From the following data, find
(i) The two regression equations
(ii) The coefficient of correlation between the marks in Economics and Statistics
(iii) The most likely marks in Statistics when marks in Economics are 30.

Marks in Economics  25 28 35 32 31 36 29 38 34 32
Marks in Statistics 43 46 49 41 36 32 31 30 33 39

Solution
 X     Y    X − X̄ = X − 32   Y − Ȳ = Y − 38   (X − X̄)^2   (Y − Ȳ)^2   (X − X̄)(Y − Ȳ)
 25    43        −7                5              49           25           −35
 28    46        −4                8              16           64           −32
 35    49         3               11               9          121            33
 32    41         0                3               0            9             0
 31    36        −1               −2               1            4             2
 36    32         4               −6              16           36           −24
 29    31        −3               −7               9           49            21
 38    30         6               −8              36           64           −48
 34    33         2               −5               4           25           −10
 32    39         0                1               0            1             0
320   380         0                0             140          398           −93

Here X̄ = ∑X/n = 320/10 = 32 and Ȳ = ∑Y/n = 380/10 = 38
Coefficient of regression of Y on X is
b_YX = ∑(X − X̄)(Y − Ȳ) / ∑(X − X̄)^2 = −93/140 = −0.6643
Coefficient of regression of X on Y is
b_XY = ∑(X − X̄)(Y − Ȳ) / ∑(Y − Ȳ)^2 = −93/398 = −0.2337
Equation of the line of regression of X on Y is
X − X̄ = b_XY (Y − Ȳ)
X − 32 = −0.2337 (Y − 38)
X = −0.2337 Y + 0.2337 × 38 + 32
X = −0.2337 Y + 40.8806
Equation of the line of regression of Y on X is
Y − Ȳ = b_YX (X − X̄)
Y − 38 = −0.6643 (X − 32)
Y = −0.6643 X + 38 + 0.6643 × 32
  = −0.6643 X + 59.2576
Coefficient of correlation
r^2 = b_YX × b_XY = (−0.6643) × (−0.2337) = 0.1552
r = ± 0.394; since both regression coefficients are negative, r = −0.394
Now we have to find the most likely marks in Statistics (Y) when marks in Economics (X) are 30.
Y = −0.6643 X + 59.2576
Put X = 30, we get
Y = −0.6643 × 30 + 59.2576 = 39.3286
Y ≈ 39
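The hand computation above can be reproduced in a few lines. A sketch in Python working directly from the raw marks:

```python
from statistics import fmean

# Recomputing Example 2.4.1: regression coefficients from the deviations.
X = [25, 28, 35, 32, 31, 36, 29, 38, 34, 32]
Y = [43, 46, 49, 41, 36, 32, 31, 30, 33, 39]

xbar, ybar = fmean(X), fmean(Y)                            # 32, 38
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(X, Y))   # -93
sxx = sum((x - xbar) ** 2 for x in X)                      # 140
syy = sum((y - ybar) ** 2 for y in Y)                      # 398

b_yx = sxy / sxx                  # slope of Y on X
b_xy = sxy / syy                  # slope of X on Y
r = -((b_yx * b_xy) ** 0.5)       # sign follows the (negative) regression coefficients

print(round(b_yx, 4), round(b_xy, 4), round(r, 3))   # -0.6643 -0.2337 -0.394
print(round(b_yx * (30 - xbar) + ybar, 1))           # most likely Y at X = 30: 39.3
```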

2.5 COVARIANCE
Def : If X and Y are random variables, then Covariance between X and Y is defined as
Cov (X, Y) = E(XY) – E(X) . E(Y)
Cov (X, Y) = 0 [If X & Y are independent]

2.6 CORRELATION
Types of Correlation
• Positive Correlation
(If two variables deviate in same direction)
• Negative Correlation

et
(If two variables constantly deviate in opposite direction)

.n
2.7 KARL-PEARSON’S COEFFICIENT OF CORRELATION
The correlation coefficient between two random variables X and Y, usually denoted by
r(X, Y), is a numerical measure of the linear relationship between them and is defined as
r(X, Y) = Cov(X, Y) / (σ_X σ_Y)
where Cov(X, Y) = (1/n) ∑XY − X̄ Ȳ
σ_X = √((1/n)∑X^2 − X̄^2) ;  σ_Y = √((1/n)∑Y^2 − Ȳ^2)
* Limits of the correlation coefficient: −1 ≤ r ≤ 1.
If X & Y are independent, then r(X, Y) = 0 (but the converse need not be true).



Note :Types of correlation based on ‘r’.


Values of ‘r’ Correlation is said to be

r=1 perfect and positive


0<r<1 positive
-1<r<0 negative
r=0 Uncorrelated

SOLVED PROBLEMS ON CORRELATION


Example :2.6.1
Calculate the correlation coefficient for the following heights of fathers X and their sons Y.
X 65 66 67 67 68 69 70 72
Y 67 68 65 68 72 72 69 71


Solution
X Y U = X – 68 V = Y – 68 UV U2 V2
65 67 -3 -1 3 9 1
66 68 -2 0 0 4 0
67 65 -1 -3 3 1 9
67 68 -1 0 0 1 0
68 72 0 4 0 0 16
69 72 1 4 4 1 16
70 69 2 1 2 4 1
72 71 4 3 12 16 9

et
        ∑U = 0   ∑V = 8   ∑UV = 24   ∑U^2 = 36   ∑V^2 = 52

.n
Now
Ū = ∑U/n = 0/8 = 0
V̄ = ∑V/n = 8/8 = 1
Cov (X, Y) = Cov (U, V)
           = ∑UV/n − Ū V̄ = 24/8 − 0 = 3             (1)
σ_U = √(∑U^2/n − Ū^2) = √(36/8 − 0) = 2.121          (2)
σ_V = √(∑V^2/n − V̄^2) = √(52/8 − 1) = 2.345          (3)
∴ r(X, Y) = r(U, V) = Cov(U, V) / (σ_U σ_V) = 3 / (2.121 × 2.345)
          = 0.6031    (by 1, 2, 3)
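The same coefficient can be computed directly from the raw heights (the change of variables U = X − 68, V = Y − 68 does not affect r). A sketch in Python; note the exact answer is 0.6030, which differs in the last digit from the 0.6031 above only because the hand computation rounds σ_U and σ_V before dividing:

```python
from math import sqrt

# Recomputing Example 2.6.1 directly from the raw data.
X = [65, 66, 67, 67, 68, 69, 70, 72]
Y = [67, 68, 65, 68, 72, 72, 69, 71]
n = len(X)

xbar, ybar = sum(X) / n, sum(Y) / n
cov = sum(x * y for x, y in zip(X, Y)) / n - xbar * ybar
sx = sqrt(sum(x * x for x in X) / n - xbar**2)
sy = sqrt(sum(y * y for y in Y) / n - ybar**2)

r = cov / (sx * sy)
print(round(r, 4))   # ≈ 0.603
```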

Example :2.6.2
Let X be a random variable with p.d.f. f(x) = 1/2, −1 ≤ x ≤ 1 and let Y = X^2.
Find the correlation coefficient between X and Y.
Solution
E(X) = ∫_{−∞}^{∞} x f(x) dx
     = ∫_{−1}^{1} x (1/2) dx = (1/2) [x^2/2]_{−1}^{1} = (1/2)(1/2 − 1/2) = 0
E(X) = 0
E(Y) = ∫_{−∞}^{∞} x^2 f(x) dx
     = ∫_{−1}^{1} x^2 (1/2) dx = (1/2) [x^3/3]_{−1}^{1} = (1/2)(1/3 + 1/3) = 1/3
E(XY) = E(X · X^2) = E(X^3) = ∫_{−1}^{1} x^3 (1/2) dx = (1/2) [x^4/4]_{−1}^{1} = 0
E(XY) = 0
∴ r(X, Y) = ρ(X, Y) = Cov(X, Y)/(σ_X σ_Y) = 0
ρ = 0
Note : Since E(X) and E(XY) are both zero, Cov(X, Y) = 0 and we need not find σ_X and σ_Y.
ad

2.8 TRANSFORMS OF TWO DIMENSIONAL RANDOM VARIABLES

Formulae:
f_U(u) = ∫_{−∞}^{∞} f_UV(u, v) dv
f_V(v) = ∫_{−∞}^{∞} f_UV(u, v) du
f_UV(u, v) = f_XY(x, y) |∂(x, y)/∂(u, v)|

Example : 1
If the joint pdf of (X, Y) is given by f_XY(x, y) = x + y, 0 ≤ x, y ≤ 1, find the pdf of U = XY.

Solution
Given f_XY(x, y) = x + y and U = XY.
Let V = Y. Then x = u/v & y = v
∂x/∂u = 1/v ; ∂x/∂v = −u/v^2 ; ∂y/∂u = 0 ; ∂y/∂v = 1           (1)
∴ J = ∂(x, y)/∂(u, v) = | 1/v   −u/v^2 |
                        | 0      1     | = 1/v
⇒ |J| = 1/v                                                    (2)
The joint p.d.f. of (U, V) is given by
f_UV(u, v) = f_XY(x, y) |J| = (x + y)(1/v) = (1/v)(u/v + v)    (3)
The range of v : since 0 ≤ y ≤ 1, we have 0 ≤ v ≤ 1   (∵ V = Y)
The range of u : given 0 ≤ x ≤ 1  ⇒  0 ≤ u/v ≤ 1  ⇒  0 ≤ u ≤ v
Hence the p.d.f. of (U, V) is given by
f_UV(u, v) = (1/v)(u/v + v), 0 ≤ u ≤ v, 0 ≤ v ≤ 1
Now
f_U(u) = ∫_{−∞}^{∞} f_UV(u, v) dv = ∫_u^1 (u/v^2 + 1) dv
       = [−u/v + v]_u^1 = (1 − u) + (1 − u)
∴ f_U(u) = 2(1 − u), 0 < u < 1

p.d.f of (U, V)                               p.d.f of U = XY
f_UV(u, v) = (1/v)(u/v + v),                  f_U(u) = 2(1 − u), 0 < u < 1
0 ≤ u ≤ v, 0 ≤ v ≤ 1
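The result f_U(u) = 2(1 − u) can be checked by Monte Carlo, sampling (X, Y) from the density x + y by rejection (a sketch; the seed and sample size are arbitrary). Under f_U(u) = 2(1 − u), the c.d.f. is 2u − u^2, so P(U ≤ 1/2) = 0.75:

```python
import random

# Sample (X, Y) with joint density x + y on the unit square by rejection:
# (x + y) <= 2 there, so accept (x, y) when a uniform z in (0, 2) falls below x + y.
random.seed(11)
us = []
while len(us) < 100_000:
    x, y, z = random.random(), random.random(), 2 * random.random()
    if z <= x + y:
        us.append(x * y)          # a draw of U = XY

p_half = sum(u <= 0.5 for u in us) / len(us)
print(p_half)   # ≈ 0.75, matching the c.d.f. 2u - u^2 at u = 1/2
```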

TUTORIAL QUESTIONS

1. The jpdf of r.v.'s X and Y is given by f(x,y) = 3(x+y), 0<x<1, 0<y<1, x+y<1 and 0
otherwise. Find (i) the marginal pdfs of X and Y and (ii) Cov(X,Y).
2. Obtain the correlation coefficient for the following data:
X: 68 64 75 50 64 80 75 40 55 64
Y: 62 58 68 45 81 60 48 48 50 70
3.The two lines of regression are 8X-10Y+66=0, 4X-18Y-214=0.The variance of x is 9 find i)
The mean value of x and y. ii) Correlation coefficient between x and y.
4. If X 1 ,X 2 ,…X n are Poisson variates with parameter λ=2, use the central limit theorem to find

P(120≤S n ≤160) where Sn=X 1 +X 2 +…X n and n=75.

5. If the joint probability density function of a two dimensional random variable (X,Y) is
given by f(x, y) = x2 + , 0<x<1,0<y<2= 0, elsewhere Find (i) P(X>1/2)(ii) P(Y<X) and (iii)
P(Y<1/2/ X<1/2).
6. Two random variables X and Y have joint density pz
Find Cov (X,Y).
7. If the equations of the two lines of regression of y on x and x on y are respectively

7x-16y+9=0; 5y-4x-3=0, calculate the coefficient of correlation.

WORKED OUT EXAMPLES

Example 1
The j.d.f of the random variables X and Y is given by
f(x, y) = 8xy, 0 < x < 1, 0 < y < x
        = 0,   otherwise
Find (i) f_X(x) (ii) f_Y(y) (iii) f(y/x)
Solution
We know that
(i) The marginal pdf of ‘X’ is
f_X(x) = f(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_0^x 8xy dy = 4x^3
f(x) = 4x^3, 0 < x < 1
(ii) The marginal pdf of ‘Y’ is (in the region, y < x < 1)
f_Y(y) = f(y) = ∫_{−∞}^{∞} f(x, y) dx = ∫_y^1 8xy dx = 4y(1 − y^2)
f(y) = 4y(1 − y^2), 0 < y < 1
(iii) We know that
f(y/x) = f(x, y)/f(x)
       = 8xy/4x^3 = 2y/x^2, 0 < y < x, 0 < x < 1
Example 2
Let X be a random variable with p.d.f. f(x) = 1/2, −1 ≤ x ≤ 1 and let Y = X^2.
Find the correlation coefficient between X and Y.
Solution
E(X) = ∫_{−∞}^{∞} x f(x) dx
     = ∫_{−1}^{1} x (1/2) dx = (1/2) [x^2/2]_{−1}^{1} = 0
E(X) = 0
E(Y) = ∫_{−∞}^{∞} x^2 f(x) dx
     = ∫_{−1}^{1} x^2 (1/2) dx = (1/2) [x^3/3]_{−1}^{1} = 1/3
E(XY) = E(X · X^2) = E(X^3) = ∫_{−1}^{1} x^3 (1/2) dx = (1/2) [x^4/4]_{−1}^{1} = 0
E(XY) = 0
∴ r(X, Y) = ρ(X, Y) = Cov(X, Y)/(σ_X σ_Y) = 0
ρ = 0
Note : Since E(X) and E(XY) are both zero, Cov(X, Y) = 0 and we need not find σ_X and σ_Y.

Result (for Example 1)
Marginal pdf of X     Marginal pdf of Y          f(y/x)
4x^3, 0 < x < 1       4y(1 − y^2), 0 < y < 1     2y/x^2, 0 < y < x, 0 < x < 1


UNIT - III
RANDOM PROCESSES

Introduction
In Chapter 1 we discussed random variables. A random variable is a function of the
possible outcomes of an experiment, but it does not include the concept of time. In
real situations, we come across many time-varying functions which are random in
nature. In electrical and electronics engineering, we studied about signals.
Generally, signals are classified into two types.
(i) Deterministic
(ii) Random
Here both deterministic and random signals are functions of time. Hence it is
possible for us to determine the value of a signal at any given time. But this is not
possible in the case of a random signal, since uncertainty of some element is always

associated with it. The probability model used for characterizing a random signal is called
a random process or stochastic process.

3.1 RANDOM PROCESS CONCEPT

A random process is a collection (ensemble) of real variables {X(s, t)} that are functions
of a real variable t, where s ∈ S (S is the sample space) and t ∈ T (T is an index set).

REMARK

i) If t is fixed, then {X(s, t)} is a random variable.


ii) If S and t are fixed {X(s, t)} is a number.
iii) If S is fixed, {X(s, t)} is a single time function.

NOTATION

Hereafter we denote the random process {X(s, t)} by {X(t)}, where the index set T is
assumed to be continuous. A discrete-parameter process is denoted by {X(n)} or {X_n}.

A comparison between random variable and random process

Random Variable Random Process


A function of the possible outcomes of A function of the possible outcomes of
an experiment is X(s) an experiment and also time i.e, X(s, t)
Outcome is mapped into a number x.      Outcomes are mapped into wave forms which
                                        are functions of time 't'.


3.2 CLASSIFICATION OF RANDOM PROCESSES


We can classify a random process according to the characteristics of time t and the
random variable X = X(t); t & x have values in the ranges −∞ < t < ∞ and −∞ < x < ∞.

Random process is a function of

    Random variable X               Time t
    Discrete | Continuous           Discrete | Continuous
3.2.1 CONTINUOUS RANDOM PROCESS

If 'S' is continuous and t takes any value, then X(t) is a continuous random variable.
Example
Let X(t) = Maximum temperature of a particular place in (0, t). Here 'S' is a continuous

set and t ≥ 0 (takes all values), {X(t)} is a continuous random process.



3.2.2 DISCRETE RANDOM PROCESS


If 'S' assumes only discrete values and t is continuous, then we call such a random
process {X(t)} a discrete random process.


w

Example
Let X(t) be the number of telephone calls received in the interval (0, t).
Here, S = {1, 2, 3, …}
T = {t, t ≥ 0}
∴ {X(t)} is a discrete random process.

3.2.3 CONTINUOUS RANDOM SEQUENCE

If 'S' is continuous but time 't' takes only discrete values, then {X(t)} is called a
continuous random sequence.
Example: Let X_n denote the temperature at the end of the nth hour of a day,
1 ≤ n ≤ 24. Then {X_n} is a continuous random sequence.

3.2.4 DISCRETE RANDOM SEQUENCE

If both 'S' and 'T' are discrete, then {X(t)} is called a discrete random sequence.
Example: Let X_n denote the outcome of the nth toss of a fair die.
Here S = {1, 2, 3, 4, 5, 6}
T = {1, 2, 3, …}
∴ {X_n, n = 1, 2, 3, …} is a discrete random sequence.


3.3 CLASSIFICATION OF RANDOM PROCESSES BASED ON ITS SAMPLE


FUNCTIONS
Non-Deterministic Process
A Process is called non-deterministic process if the future values of any sample function
cannot be predicted exactly from observed values.

Deterministic Process
A process is called deterministic if future value of any sample function can be predicted
from past values.

3.3.1 STATIONARY PROCESS


A random process is said to be stationary if its mean, variance, moments etc are constant.
Other processes are called non stationary.

et
1. 1st Order Distribution Function of {X(t)}
For a specific t, X(t) is a random variable as it was observed earlier.

.n
F(x, t) = P{X(t) ≤ x} is called the first order distribution of the process {X(t)}.

1st Order Density Function of {X(t)}
f(x, t) = ∂F(x, t)/∂x is called the first order density of {X(t)}.

2nd Order Distribution Function of {X(t)}
F(x₁, x₂; t₁, t₂) = P{X(t₁) ≤ x₁; X(t₂) ≤ x₂} is the joint distribution of the random
variables X(t₁) and X(t₂) and is called the second order distribution of the process {X(t)}.

2nd Order Density Function of {X(t)}
f(x₁, x₂; t₁, t₂) = ∂²F(x₁, x₂; t₁, t₂) / (∂x₁ ∂x₂) is called the second order density of {X(t)}.
w

3.3.2 First - Order Stationary Process


w

Definition
A random process is called stationary to order one, or first order stationary, if its 1st order
density function does not change with a shift in time origin.
In other words,
f_X(x₁, t₁) = f_X(x₁, t₁ + C) must be true for any t₁ and any real number C if {X(t)} is to
be a first order stationary process.

Example :3.3.1
Show that a first order stationary process has a constant mean.
Solution
Let us consider a random process {X(t)} at two different times t₁ and t₂.
∴ E[X(t₁)] = ∫_{−∞}^{∞} x f(x, t₁) dx
[f(x, t₁) is the density function of the random process at time t₁]
and E[X(t₂)] = ∫_{−∞}^{∞} x f(x, t₂) dx
Let t₂ = t₁ + C. Then, by first order stationarity, f(x, t₁ + C) = f(x, t₁), so
E[X(t₂)] = ∫_{−∞}^{∞} x f(x, t₁ + C) dx = ∫_{−∞}^{∞} x f(x, t₁) dx = E[X(t₁)]
Thus E[X(t₂)] = E[X(t₁)]
Mean of the process at t₁ = mean of the process at t₂, for every shift C.
Definition 2:
If the process is first order stationary, then Mean = E[X(t)] = constant.
3.3.4 Second Order Stationary Process
A random process is said to be second order stationary if its second order density
function is stationary, i.e.
f(x₁, x₂; t₁, t₂) = f(x₁, x₂; t₁ + C, t₂ + C) ∀ x₁, x₂ and C.
Then E(X₁²), E(X₂²), E(X₁X₂) do not change with time, where
X₁ = X(t₁); X₂ = X(t₂).
3.3.5 Strongly Stationary Process
.p

A random process is called a strongly stationary process or Strict Sense Stationary
Process (SSS Process) if all its finite dimensional distributions are invariant under
translation of time 't'.
f X (x 1 , x2 ; t 1 , t 2 ) = fX(x 1 , x2 ; t 1 +C, t 2 +C)
w

f X (x 1 , x2 , x3 ; t 1 , t 2 , t 3 ) = fX (x 1 , x2 , x 3 ; t 1 +C, t 2 +C, t 3 +C)


In general
w

f X (x 1 , x 2 ..x n ; t 1 , t 2 …t n) = f X(x1 , x2 ..x n ; t 1 +C, t 2 +C..t n +C) for any t 1 and any real number
C.

3.3.6 Jointly - Stationary in the Strict Sense


{X(t)} and Y{(t)} are said to be jointly stationary in the strict sense, if the joint
distribution of X(t) and Y(t) are invariant under translation of time.
Definition Mean:
μ_X(t) = E[X(t)], −∞ < t < ∞
μ_X(t) is also called the mean function or ensemble average of the random process.
3.3.7 Auto Correlation of a Random Process


Let X(t₁) and X(t₂) be two given members of the random process {X(t)}. The auto
correlation is
R_XX(t₁, t₂) = E{X(t₁) X(t₂)}          (1)
Mean Square Value
Putting t₁ = t₂ = t in (1), we get
R_XX(t, t) = E[X(t) X(t)]
⇒ R_XX(t, t) = E[X²(t)] is the mean square value of the random process.
3.3.8 Auto Covariance of a Random Process
C_XX(t₁, t₂) = E{[X(t₁) − E(X(t₁))][X(t₂) − E(X(t₂))]}
             = R_XX(t₁, t₂) − E[X(t₁)] E[X(t₂)]
Correlation Coefficient
The correlation coefficient of the random process {X(t)} is defined as
ρ_XX(t₁, t₂) = C_XX(t₁, t₂) / √(Var X(t₁) × Var X(t₂))
where C_XX(t₁, t₂) denotes the auto covariance.

3.4 CROSS CORRELATION pz


The cross correlation of the two random process {X(t)} and {Y(t)} is defined by
R XY (t 1 , t 2 ) = E[X(t 1 ) Y (t 2 )]

3.5 WIDE - SENSE STATIONARY (WSS)



A random process {X(t)} is called a weakly stationary process or covariance stationary


process or wide-sense stationary process if

i) E{X(t)} = constant
ii) E[X(t) X(t + τ)] = R_XX(τ) depends only on τ, where τ = t₂ − t₁.
w

REMARKS :
SSS Process of order two is a WSS Process and not conversely.
w

3.6 EVOLUTIONARY PROCESS


w

A random process that is not stationary in any sense is called as evolutionary process.

SOLVED PROBLEMS ON WIDE SENSE STATIONARY PROCESS

Example 3.6.1
Give an example of a stationary random process and justify your claim.
Solution:
Consider the random process X(t) = A cos(ωt + θ), where A and ω are constants and θ is a random variable uniformly distributed in the interval (0, 2π).
Since θ is uniformly distributed in (0, 2π), we have
f(θ) = 1/(2π), 0 < θ < 2π
     = 0, otherwise
∴ E[X(t)] = ∫ X(t) f(θ) dθ
          = ∫₀^{2π} A cos(ωt + θ) (1/2π) dθ
          = (A/2π) [sin(ωt + θ)]₀^{2π}
          = (A/2π) [sin(2π + ωt) − sin(ωt + 0)]
          = (A/2π) [sin ωt − sin ωt]
          = 0, a constant.
Since E[X(t)] is a constant (and the autocorrelation can likewise be shown to depend only on the time difference), the process X(t) is a stationary random process.
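The constant ensemble mean can be checked by simulation. A minimal sketch, where the values of A, ω and the sample times are arbitrary illustrative choices, not from the text:

```python
import numpy as np

# Monte Carlo check that E[X(t)] = 0 for X(t) = A*cos(w*t + theta),
# with theta ~ Uniform(0, 2*pi).  A, w and the time points are arbitrary.
rng = np.random.default_rng(0)
A, w = 2.0, 3.0
theta = rng.uniform(0.0, 2.0 * np.pi, size=200_000)

# Ensemble-mean estimates at several fixed times; each should be near 0.
means = [np.mean(A * np.cos(w * t + theta)) for t in (0.0, 0.5, 1.7)]
print(means)
```

Every estimate stays near zero regardless of the chosen t, matching the derivation above.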

Example 3.6.2
Examine whether the Poisson process {X(t)}, given by the probability law
P[X(t) = n] = e^{−λt} (λt)ⁿ / n!, n = 0, 1, 2, ...,
is stationary.
Solution
The mean is given by
E[X(t)] = Σ_{n=0}^{∞} n Pₙ(t)
        = Σ_{n=0}^{∞} n e^{−λt} (λt)ⁿ / n!
        = Σ_{n=1}^{∞} e^{−λt} (λt)ⁿ / (n−1)!
        = e^{−λt} [λt/0! + (λt)²/1! + ...]
        = (λt) e^{−λt} [1 + λt/1! + (λt)²/2! + ...]
        = (λt) e^{−λt} e^{λt}
        = λt, which depends on t.
Hence the Poisson process is not a stationary process.
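The growing mean is easy to see numerically. A sketch with an illustrative rate λ and two sample times:

```python
import numpy as np

# Simulation sketch: the mean of the Poisson process grows like lam*t,
# so the process cannot be stationary.  lam and the times are illustrative.
rng = np.random.default_rng(1)
lam, n = 2.0, 100_000

mean_t1 = np.mean(rng.poisson(lam * 1.0, size=n))  # estimates E[X(1)] = 2
mean_t4 = np.mean(rng.poisson(lam * 4.0, size=n))  # estimates E[X(4)] = 8
print(mean_t1, mean_t4)
```

The two sample means differ by roughly a factor of four, in line with E[X(t)] = λt.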
3.7 ERGODIC RANDOM PROCESS
Time Average
The time average of a random process {X(t)} is defined as
X̄_T = (1/2T) ∫_{−T}^{T} X(t) dt
Ensemble Average
The ensemble average of a random process {X(t)} is the expected value of the random variable X at time t:
Ensemble average = E[X(t)]
Ergodic Random Process
{X(t)} is said to be mean ergodic if
lim_{T→∞} X̄_T = µ
⇒ lim_{T→∞} (1/2T) ∫_{−T}^{T} X(t) dt = µ
Mean Ergodic Theorem
Let {X(t)} be a random process with constant mean µ and let X̄_T be its time average. Then {X(t)} is mean ergodic if
lim_{T→∞} Var(X̄_T) = 0
Correlation Ergodic Process
The stationary process {X(t)} is said to be correlation ergodic if the process {Y(t)} is mean ergodic, where
Y(t) = X(t) X(t+λ)
i.e. E[Y(t)] = lim_{T→∞} Ȳ_T, where Ȳ_T is the time average of Y(t).
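Mean ergodicity can be illustrated for the cosine process of Example 3.6.1: the time average of a single realization approaches the ensemble mean 0 as T grows. A sketch with arbitrary numeric values:

```python
import numpy as np

# Time average of ONE realization of X(t) = A*cos(w*t + theta) over (-T, T).
# For large T this approaches the ensemble mean 0 (mean ergodicity).
rng = np.random.default_rng(2)
A, w = 2.0, 3.0
theta = rng.uniform(0.0, 2.0 * np.pi)   # one fixed realization

T = 500.0
t = np.linspace(-T, T, 2_000_001)
x = A * np.cos(w * t + theta)
time_avg = x.mean()                     # approximates (1/2T) * integral of X(t) dt
print(time_avg)
```

Here the exact time average is bounded by A/(ωT), so it shrinks as T increases.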

3.8 MARKOV PROCESS
Definition
A random process {X(t)} is said to be Markovian if
P[X(t_{n+1}) ≤ x_{n+1} | X(t_n) = x_n, X(t_{n−1}) = x_{n−1}, ..., X(t_0) = x_0]
    = P[X(t_{n+1}) ≤ x_{n+1} | X(t_n) = x_n]
where t_0 ≤ t_1 ≤ t_2 ≤ ... ≤ t_n ≤ t_{n+1}.
Examples of Markov Process
1. The probability of rain today depends only on the weather conditions that existed during the last two days and not on earlier weather conditions.
2. A difference equation is Markovian.

Classification of Markov Process
Depending on the parameter (time) set and the state space, a Markov process falls into one of four types:
1. Continuous-parameter Markov process
2. Discrete-parameter Markov process
3. Discrete-parameter Markov chain
4. Continuous-parameter Markov chain
3.9 MARKOV CHAIN
Definition
We define the Markov chain as follows:
If P{X_n = a_n | X_{n−1} = a_{n−1}, X_{n−2} = a_{n−2}, ..., X_0 = a_0}
    = P{X_n = a_n | X_{n−1} = a_{n−1}} for all n,
then the process {X_n}, n = 0, 1, 2, ... is called a Markov chain.
1. a_1, a_2, ..., a_n are called the states of the Markov chain.
2. The conditional probability P{X_n = a_j | X_{n−1} = a_i} = P_ij(n−1, n) is called the one-step transition probability from state a_i to state a_j at the nth step.
3. The t.p.m. (transition probability matrix) of a Markov chain is a stochastic matrix:
i) P_ij ≥ 0
ii) Σ_j P_ij = 1 [the sum of the elements of any row is 1]
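The stochastic-matrix conditions above are easy to check in code. A sketch using a made-up 3-state matrix (not one from the text):

```python
import numpy as np

# A one-step transition probability matrix (t.p.m.) must be a stochastic
# matrix: nonnegative entries with each row summing to 1.
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])

row_ok = bool((P >= 0).all()) and bool(np.allclose(P.sum(axis=1), 1.0))

# n-step transition probabilities are matrix powers of P
# (Chapman-Kolmogorov); their rows also sum to 1.
P2 = P @ P
print(row_ok, P2.sum(axis=1))
```

The same row-sum check applies to any power of P, which is why multi-step transition matrices remain stochastic.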
3.10 Poisson Process
The Poisson process is a continuous-parameter, discrete-state process which is a very useful model for many practical situations. It describes the number of times a specified event has occurred, as a function of time, when an experiment is conducted.
Probability Law for the Poisson Process
Let λ be the rate of occurrence (the number of occurrences per unit time) and let Pₙ(t) be the probability of n occurrences of the event in the interval (0, t). The number of occurrences in (0, t) follows a Poisson distribution with parameter λt:
P[X(t) = n] = e^{−λt} (λt)ⁿ / n!, n = 0, 1, 2, ...
i.e. Pₙ(t) = e^{−λt} (λt)ⁿ / n!
Second Order Probability Function of a Homogeneous Poisson Process
P[X(t1) = n1, X(t2) = n2] = P[X(t1) = n1] · P[X(t2) = n2 | X(t1) = n1], t2 > t1
= P[X(t1) = n1] · P[the event occurs n2 − n1 times in the interval (t1, t2)]
= [e^{−λt1} (λt1)^{n1} / n1!] · [e^{−λ(t2−t1)} {λ(t2 − t1)}^{n2−n1} / (n2 − n1)!], n2 ≥ n1
= e^{−λt2} λ^{n2} t1^{n1} (t2 − t1)^{n2−n1} / [n1! (n2 − n1)!], n2 ≥ n1
= 0, otherwise
3.11 SEMI-RANDOM TELEGRAPH SIGNAL PROCESS
If N(t) represents the number of occurrences of a specified event in (0, t) and X(t) = (−1)^{N(t)}, then {X(t)} is called a semi-random telegraph signal process.
3.11.1 RANDOM TELEGRAPH SIGNAL PROCESS
Definition
A random telegraph process is a discrete random process X(t) satisfying the following:
i. X(t) assumes only one of the two possible values 1 or −1 at any time 't'.
ii. X(0) = 1 or −1 with equal probability 1/2.
iii. The number of transitions N(t) from one value to the other occurring in any interval of length 't' is a Poisson process with rate λ, so that the probability of exactly 'r' transitions is
P[N(t) = r] = e^{−λt} (λt)ʳ / r!, r = 0, 1, 2, ...
[Figure: a typical sample function of the telegraph process, switching between the levels +1 and −1.]
Note: The process is an example of a discrete random process.
The mean and auto correlation are obtained from P{X(t) = 1} and P{X(t) = −1} for any t.


TUTORIAL QUESTIONS
1. The t.p.m of a Markov chain with three states 0, 1, 2 is P = … and the initial state distribution is … Find (i) P[X₂ = 3] (ii) P[X₃ = 2, X₂ = 3, X₁ = 3, X₀ = 2].
2. Three boys A, B, C are throwing a ball to each other. A always throws the ball to B and B always throws the ball to C, but C is just as likely to throw the ball to B as to A. Show that the process is Markovian. Find the transition matrix and classify the states.
3. A housewife buys 3 kinds of cereals A, B, C. She never buys the same cereal in successive weeks. If she buys cereal A, the next week she buys cereal B. However, if she buys B or C, the next week she is 3 times as likely to buy A as the other cereal. How often does she buy each of the cereals?
4. A man either drives a car or catches a train to go to office each day. He never goes 2 days in a row by train, but if he drives one day, then the next day he is just as likely to drive again as he is to travel by train. Now suppose that on the first day of the week, the man tossed a fair die and drove to work if a 6 appeared. Find (1) the probability that he takes a train on the third day and (2) the probability that he drives to work in the long run.

WORKED OUT EXAMPLES

Example 1: Let X_n denote the outcome of the nth toss of a fair die.
Here S = {1, 2, 3, 4, 5, 6}
T = {1, 2, 3, ...}
∴ {X_n, n = 1, 2, 3, ...} is a discrete random sequence.

Example 2: Give an example of a stationary random process and justify your claim.
Solution:
Consider the random process X(t) = A cos(ωt + θ), where A and ω are constants and θ is a random variable uniformly distributed in the interval (0, 2π).
Since θ is uniformly distributed in (0, 2π), we have
f(θ) = 1/(2π), 0 < θ < 2π
     = 0, otherwise
∴ E[X(t)] = ∫ X(t) f(θ) dθ
          = ∫₀^{2π} A cos(ωt + θ) (1/2π) dθ
          = (A/2π) [sin(ωt + θ)]₀^{2π}
          = (A/2π) [sin(2π + ωt) − sin(ωt + 0)]
          = (A/2π) [sin ωt − sin ωt]
          = 0, a constant.
Since E[X(t)] is a constant, the process X(t) is a stationary random process.

Example 3: Examine whether the Poisson process {X(t)}, given by the probability law
P[X(t) = n] = e^{−λt} (λt)ⁿ / n!, n = 0, 1, 2, ...,
is stationary.
Solution
The mean is given by
E[X(t)] = Σ_{n=0}^{∞} n Pₙ(t)
        = Σ_{n=0}^{∞} n e^{−λt} (λt)ⁿ / n!
        = Σ_{n=1}^{∞} e^{−λt} (λt)ⁿ / (n−1)!
        = (λt) e^{−λt} [1 + λt/1! + (λt)²/2! + ...]
        = (λt) e^{−λt} e^{λt}
        = λt, which depends on t.
Hence the Poisson process is not a stationary process.


UNIT - 4
CORRELATION AND SPECTRAL DENSITY

Introduction
The power spectrum of a time series x(t) describes how the variance of the data x(t) is distributed over the frequency components into which x(t) may be decomposed. This distribution of the variance may be described either by a measure µ or by a statistical cumulative distribution function S(f), the power contributed by frequencies from 0 up to f. Given a band of frequencies [a, b), the amount of variance contributed to x(t) by frequencies lying within the interval [a, b) is given by S(b) − S(a). S is then called the spectral distribution function of x.
The spectral density at a frequency f gives the rate of variance contributed by frequencies in the immediate neighbourhood of f to the variance of x, per unit frequency.

et
4.1 Auto Correlation of a Random Process
Let X(t 1 ) and X(t 2 ) be the two given random variables. Then auto correlation is

.n
R XX (t 1 , t 2 ) = E[X(t 1 ) X(t 2 )]
Mean Square Value
Putting t 1 = t 2 = t in (1)


R XX (t, t) = E[X(t) X(t)]
RXX (t, t) = E[X2(t)]
pz
ee
Which is called the mean square value of the random process.
ad

Auto Correlation Function


Definition: Auto Correlation Function of the random process {X(t)} is
R XX = (τ) = E{(t) X(t+τ)}
.p

Note: R XX (τ) = R(τ) = R X (τ)


w

PROPERTY: 1
The mean square value of the Random process may be obtained from the auto correlation
w

function.
R XX(τ), by putting τ = 0.
w

is known as Average power of the random process {X(t)}.


PROPERTY 2:
R_XX(τ) is an even function of τ:
R_XX(τ) = R_XX(−τ)

PROPERTY 3:
If the process {X(t)} contains a periodic component, then R_XX(τ) also contains a periodic component of the same period.

PROPERTY 4:
If a random process {X(t)} has no periodic components, and E[X(t)] = X̄, then
lim_{|τ|→∞} R_XX(τ) = X̄²   (or)   X̄ = √(lim_{|τ|→∞} R_XX(τ))
i.e., as τ → ∞, the auto correlation function approaches the square of the mean of the random process.

PROPERTY 5:
The auto correlation function of a random process cannot have an arbitrary shape.

SOLVED PROBLEMS ON AUTO CORRELATION

Example 1
Check whether the following functions are valid auto correlation functions: (i) 5 sin nπτ (ii) 1/(1 + 9τ²)
Solution:
(i) Given R_XX(τ) = 5 sin nπτ
R_XX(−τ) = 5 sin n(−πτ) = −5 sin nπτ
Since R_XX(τ) ≠ R_XX(−τ), the given function is not an auto correlation function.
(ii) Given R_XX(τ) = 1/(1 + 9τ²)
R_XX(−τ) = 1/(1 + 9(−τ)²) = R_XX(τ)
∴ The given function is a valid auto correlation function.
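The evenness test used above can be checked numerically. A sketch, reading the first candidate as a function of τ (i.e. 5 sin nπτ, as the worked solution implicitly does) and taking n = 1:

```python
import numpy as np

# Evenness check R(tau) == R(-tau) on a symmetric grid; reversing the
# grid evaluates each candidate at -tau.
tau = np.linspace(-5.0, 5.0, 1001)

r1 = 5.0 * np.sin(np.pi * tau)      # candidate (i) with n = 1: an odd function
r2 = 1.0 / (1.0 + 9.0 * tau ** 2)   # candidate (ii): even in tau

even1 = bool(np.allclose(r1, r1[::-1]))
even2 = bool(np.allclose(r2, r2[::-1]))
print(even1, even2)
```

Only the second candidate passes the evenness test, consistent with the solution.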
Example 2
Find the mean and variance of a stationary random process whose auto correlation function is given by
R_XX(τ) = 18 + 2/(6 + τ²)
Solution
Given R_XX(τ) = 18 + 2/(6 + τ²)
X̄² = lim_{|τ|→∞} R_XX(τ)
   = lim_{|τ|→∞} [18 + 2/(6 + τ²)]
   = 18 + 0
   = 18
∴ X̄ = √18, i.e. E[X(t)] = √18
We know that E[X²(t)] = R_XX(0) = 18 + 2/(6 + 0) = 55/3
Var{X(t)} = E[X²(t)] − {E[X(t)]}²
          = 55/3 − 18 = 1/3
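The two quantities used above, the limit of R(τ) and R(0), can be rechecked symbolically:

```python
import sympy as sp

# Symbolic recheck of Example 2: mean from the limit of R(tau) as
# |tau| -> oo, mean square from R(0).
tau = sp.symbols('tau', real=True)
R = 18 + 2 / (6 + tau**2)

mean_sq = sp.limit(R, tau, sp.oo)   # X_bar**2 = 18, so the mean is sqrt(18)
E_X2 = R.subs(tau, 0)               # E[X**2] = R(0) = 55/3
var = sp.simplify(E_X2 - mean_sq)   # variance = 55/3 - 18 = 1/3
print(mean_sq, E_X2, var)
```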
Example 3
Express the auto correlation function of the derivative process {X'(t)} in terms of the auto correlation function of the process {X(t)}.
Solution
Consider R_XX'(t1, t2) = E{X(t1) X'(t2)}
= E{X(t1) lim_{h→0} [X(t2 + h) − X(t2)]/h}
= lim_{h→0} E{[X(t1) X(t2 + h) − X(t1) X(t2)]/h}
= lim_{h→0} [R_XX(t1, t2 + h) − R_XX(t1, t2)]/h
⇒ R_XX'(t1, t2) = ∂R_XX(t1, t2)/∂t2    (1)
Similarly, R_X'X'(t1, t2) = ∂R_XX'(t1, t2)/∂t1
⇒ R_X'X'(t1, t2) = ∂²R_XX(t1, t2)/∂t1∂t2, by (1)

Auto Covariance
The auto covariance of the process {X(t)}, denoted by C_XX(t1, t2) or C(t1, t2), is defined as
C_XX(t1, t2) = E{[X(t1) − E(X(t1))][X(t2) − E(X(t2))]}

4.2 CORRELATION COEFFICIENT
ρ_XX(t1, t2) = C_XX(t1, t2) / √(Var X(t1) · Var X(t2))
where C_XX(t1, t2) denotes the auto covariance.

4.3 CROSS CORRELATION
Cross correlation between the two random processes {X(t)} and {Y(t)} is defined as
R_XY(t1, t2) = E[X(t1) Y(t2)], where X(t1), Y(t2) are random variables.

4.4 CROSS COVARIANCE
Let {X(t)} and {Y(t)} be any two random processes. Then the cross covariance is defined as
C_XY(t1, t2) = E{[X(t1) − E(X(t1))][Y(t2) − E(Y(t2))]}
The relation between the cross correlation and the cross covariance is as follows:
C_XY(t1, t2) = R_XY(t1, t2) − E[X(t1)] E[Y(t2)]
Definition
Two random processes {X(t)} and {Y(t)} are said to be uncorrelated if
C_XY(t1, t2) = 0, ∀ t1, t2
Hence, from the above relation, we have
R_XY(t1, t2) = E[X(t1)] E[Y(t2)]
4.4.1 CROSS CORRELATION COEFFICIENT
ρ_XY(t1, t2) = C_XY(t1, t2) / √(Var(X(t1)) · Var(Y(t2)))

4.4.2 CROSS CORRELATION AND ITS PROPERTIES
Let {X(t)} and {Y(t)} be two random processes. Then the cross correlation between them is also defined as
R_XY(t, t+τ) = E[X(t) Y(t+τ)] = R_XY(τ)

PROPERTY 1:
R_XY(τ) = R_YX(−τ)

PROPERTY 2:
If {X(t)} and {Y(t)} are two random processes, then |R_XY(τ)| ≤ √(R_XX(0) R_YY(0)), where R_XX(τ) and R_YY(τ) are their respective auto correlation functions.

PROPERTY 3:
If {X(t)} and {Y(t)} are two random processes, then
|R_XY(τ)| ≤ ½ [R_XX(0) + R_YY(0)]

SOLVED PROBLEMS ON CROSS CORRELATION

Example 4.4.1
Two random processes {X(t)} and {Y(t)} are given by
X(t) = A cos(ωt + θ), Y(t) = A sin(ωt + θ), where A and ω are constants and θ is a uniform random variable over 0 to 2π. Find the cross correlation function.

Solution
By definition, we have
R_XY(τ) = R_XY(t, t+τ)
Now, R_XY(t, t+τ) = E[X(t) Y(t+τ)]
= E[A cos(ωt + θ) · A sin(ω(t+τ) + θ)]
= A² E[sin(ω(t+τ) + θ) cos(ωt + θ)]    (2)
Since θ is a uniformly distributed random variable, we have
f(θ) = 1/(2π), 0 ≤ θ ≤ 2π
Now E[sin{ω(t+τ) + θ} cos(ωt + θ)]
= ∫ sin(ωt + ωτ + θ) cos(ωt + θ) f(θ) dθ
= ∫₀^{2π} sin(ωt + ωτ + θ) cos(ωt + θ) (1/2π) dθ
= (1/2π) ∫₀^{2π} ½ {sin(ωt + ωτ + θ + ωt + θ) + sin(ωt + ωτ + θ − ωt − θ)} dθ
= (1/2π) ∫₀^{2π} ½ {sin(2ωt + ωτ + 2θ) + sin(ωτ)} dθ
= (1/4π) [−cos(2ωt + ωτ + 2θ)/2 + θ sin ωτ]₀^{2π}
= (1/4π) [−cos(2ωt + ωτ + 4π)/2 + cos(2ωt + ωτ)/2 + 2π sin ωτ]
= (1/4π) [−cos(2ωt + ωτ)/2 + cos(2ωt + ωτ)/2 + 2π sin ωτ]
= (1/4π) [0 + 2π sin ωτ]
= ½ sin ωτ    (3)
Substituting (3) in (2), we get
R_XY(t, t+τ) = (A²/2) sin ωτ
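The closed form can be verified by Monte Carlo simulation. A sketch, where A, ω, t and τ are arbitrary illustrative values:

```python
import numpy as np

# Monte Carlo check of R_XY(t, t+tau) = (A**2/2) * sin(w*tau) for
# X(t) = A*cos(w*t + theta), Y(t) = A*sin(w*t + theta), theta ~ U(0, 2*pi).
rng = np.random.default_rng(3)
A, w, t, tau = 2.0, 3.0, 0.7, 0.4
theta = rng.uniform(0.0, 2.0 * np.pi, size=500_000)

r_hat = np.mean(A * np.cos(w * t + theta) * A * np.sin(w * (t + tau) + theta))
r_theory = (A ** 2 / 2.0) * np.sin(w * tau)
print(r_hat, r_theory)
```

The estimate depends only on τ, not on t, which is also the reason X(t) and Y(t) here are jointly WSS.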

4.5 SPECTRAL DENSITIES (POWER SPECTRAL DENSITY)
INTRODUCTION
Prerequisites:
(i) Fourier transform
(ii) Inverse Fourier transform
(iii) Properties of the auto correlation function
(iv) Basic trigonometric formulae
(v) Basic integration

4.5.1 SPECTRAL REPRESENTATION
Let x(t) be a deterministic signal. The Fourier transform of x(t) is defined as
F[x(t)] = X(ω) = ∫_{−∞}^{∞} x(t) e^{−iωt} dt
Here X(ω) is called the "spectrum of x(t)".
Hence x(t) = inverse Fourier transform of X(ω)
= (1/2π) ∫_{−∞}^{∞} X(ω) e^{iωt} dω.

Definition
The average power P(T) of x(t) over the interval (−T, T) is given by
P(T) = (1/2T) ∫_{−T}^{T} x²(t) dt
     = (1/2π) ∫_{−∞}^{∞} [|X_T(ω)|² / 2T] dω    (1)

Definition
The average power P_XX for the random process {X(t)} is given by
P_XX = lim_{T→∞} (1/2T) ∫_{−T}^{T} E[X²(t)] dt
     = (1/2π) ∫_{−∞}^{∞} lim_{T→∞} [E|X_T(ω)|² / 2T] dω    (2)

4.6 POWER SPECTRAL DENSITY FUNCTION
Definition
If {X(t)} is a stationary process (either in the strict sense or wide sense) with auto correlation function R_XX(τ), then the Fourier transform of R_XX(τ) is called the power spectral density function of {X(t)} and is denoted by S_XX(ω), S(ω) or S_X(ω).
S_XX(ω) = Fourier transform of R_XX(τ)
        = ∫_{−∞}^{∞} R_XX(τ) e^{−iωτ} dτ
Thus, in terms of the frequency f,
S_XX(f) = ∫_{−∞}^{∞} R_XX(τ) e^{−i2πfτ} dτ

4.6.1 WIENER-KHINCHINE RELATION
S_XX(ω) = ∫_{−∞}^{∞} R_XX(τ) e^{−iωτ} dτ
S_XX(f) = ∫_{−∞}^{∞} R_XX(τ) e^{−i2πfτ} dτ
To find R_XX(τ) when S_XX(ω) or S_XX(f) is given:
R_XX(τ) = (1/2π) ∫_{−∞}^{∞} S_XX(ω) e^{iωτ} dω  [inverse Fourier transform of S_XX(ω)]
(or) R_XX(τ) = ∫_{−∞}^{∞} S_XX(f) e^{i2πfτ} df  [inverse Fourier transform of S_XX(f)]

4.7 PROPERTIES OF POWER SPECTRAL DENSITY FUNCTION
Property 1:
The value of the spectral density function at zero frequency is equal to the total area under the graph of the auto correlation function:
S_XX(f) = ∫_{−∞}^{∞} R_XX(τ) e^{−i2πfτ} dτ
Taking f = 0, we get
S_XX(0) = ∫_{−∞}^{∞} R_XX(τ) dτ
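Property 1 can be illustrated numerically. A sketch using R(τ) = e^{−2λ|τ|} (the telegraph-signal autocorrelation treated later in this unit), whose spectrum is 4λ/(4λ² + ω²), so that S(0) = 1/λ:

```python
import numpy as np

# Numeric check of S(0) = integral of R(tau) d tau for
# R(tau) = exp(-2*lam*|tau|); the closed-form spectrum gives S(0) = 1/lam.
lam = 1.5
tau = np.linspace(-40.0, 40.0, 400_001)
R = np.exp(-2.0 * lam * np.abs(tau))

area = np.sum(R) * (tau[1] - tau[0])   # Riemann-sum approximation of the area
s_zero = 1.0 / lam                     # S(0) from the closed form
print(area, s_zero)
```

The numerically integrated area under R(τ) matches the zero-frequency spectral value.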
TUTORIAL QUESTIONS
1. Find the ACF of {Y(t)} = A X(t) cos(ω₀t + θ), where X(t) is a zero mean stationary random process with a given ACF, A and ω₀ are constants, and θ is uniformly distributed over (0, 2π) and independent of X(t).
2. Find the ACF of the periodic time function X(t) = A sin ωt.
3. If X(t) is a WSS process and if Y(t) = X(t + a) – X(t – a), prove that …
4. If X(t) = A sin(ωt + θ), where A and ω are constants and θ is a random variable uniformly distributed over (−π, π), find the ACF of {Y(t)} where Y(t) = X²(t).
5. Let X(t) and Y(t) be defined by X(t) = A cos ωt + B sin ωt and Y(t) = B cos ωt – A sin ωt, where ω is a constant and A and B are independent random variables both having zero mean and equal variance. Find the cross correlation of X(t) and Y(t). Are X(t) and Y(t) jointly W.S.S processes?
6. Two random processes X(t) and Y(t) are given by X(t) = A cos(ωt + θ), Y(t) = A sin(ωt + θ), where A and ω are constants and θ is uniformly distributed over (0, 2π). Find the cross correlation of X(t) and Y(t) and verify that …
7. If U(t) = X cos t + Y sin t and V(t) = Y cos t + X sin t, where X and Y are independent random variables such that E(X) = 0 = E(Y), E[X²] = E[Y²] = 1, show that U(t) and V(t) are not jointly W.S.S but are individually stationary in the wide sense.
8. Random processes X(t) and Y(t) are defined by X(t) = A cos(ωt + θ), Y(t) = B cos(ωt + θ), where A, B and ω are constants and θ is uniformly distributed over (0, 2π). Find the cross correlation and show that X(t) and Y(t) are jointly W.S.S.

WORKED OUT EXAMPLES

Example 1. Check whether the following functions are valid auto correlation functions: (i) 5 sin nπτ (ii) 1/(1 + 9τ²)
Solution:
(i) Given R_XX(τ) = 5 sin nπτ
R_XX(−τ) = 5 sin n(−πτ) = −5 sin nπτ
Since R_XX(τ) ≠ R_XX(−τ), the given function is not an auto correlation function.
(ii) Given R_XX(τ) = 1/(1 + 9τ²)
R_XX(−τ) = 1/(1 + 9(−τ)²) = R_XX(τ)
∴ The given function is a valid auto correlation function.

Example 2. Find the mean and variance of a stationary random process whose auto correlation function is given by
R_XX(τ) = 18 + 2/(6 + τ²)
Solution
X̄² = lim_{|τ|→∞} R_XX(τ) = lim_{|τ|→∞} [18 + 2/(6 + τ²)] = 18
∴ X̄ = √18, i.e. E[X(t)] = √18
E[X²(t)] = R_XX(0) = 18 + 2/(6 + 0) = 55/3
Var{X(t)} = E[X²(t)] − {E[X(t)]}² = 55/3 − 18 = 1/3

Example 3. Express the auto correlation function of the derivative process {X'(t)} in terms of the auto correlation function of the process {X(t)}.
Solution
Consider R_XX'(t1, t2) = E{X(t1) X'(t2)}
= E{X(t1) lim_{h→0} [X(t2 + h) − X(t2)]/h}
= lim_{h→0} E{[X(t1) X(t2 + h) − X(t1) X(t2)]/h}
= lim_{h→0} [R_XX(t1, t2 + h) − R_XX(t1, t2)]/h
⇒ R_XX'(t1, t2) = ∂R_XX(t1, t2)/∂t2    (1)
Similarly, R_X'X'(t1, t2) = ∂R_XX'(t1, t2)/∂t1
⇒ R_X'X'(t1, t2) = ∂²R_XX(t1, t2)/∂t1∂t2, by (1)

Example 4. Two random processes {X(t)} and {Y(t)} are given by
X(t) = A cos(ωt + θ), Y(t) = A sin(ωt + θ), where A and ω are constants and θ is a uniform random variable over 0 to 2π. Find the cross correlation function.
Solution
R_XY(t, t+τ) = E[X(t) Y(t+τ)] = A² E[sin(ω(t+τ) + θ) cos(ωt + θ)]
With f(θ) = 1/(2π), 0 ≤ θ ≤ 2π,
E[sin{ω(t+τ) + θ} cos(ωt + θ)]
= (1/2π) ∫₀^{2π} ½ {sin(2ωt + ωτ + 2θ) + sin(ωτ)} dθ
= (1/4π) [−cos(2ωt + ωτ + 2θ)/2 + θ sin ωτ]₀^{2π}
= (1/4π) [0 + 2π sin ωτ]
= ½ sin ωτ
Hence R_XY(t, t+τ) = (A²/2) sin ωτ
UNIT – 5
LINEAR SYSTEMS WITH RANDOM INPUTS

Introduction
Mathematically, a "system" is a functional relationship between the input x(t) and the output y(t). We can write the relationship as
y(t) = f[x(t) : −∞ < t < ∞]
Let x(t) represent a sample function of a random process {X(t)}. Suppose the system produces an output or response y(t), and the ensemble of the output functions forms a random process {Y(t)}. Then the process {Y(t)} can be considered as the output of the system or transformation 'f' with {X(t)} as the input, and the system is completely specified by the operator 'f'.

5.1 LINEAR TIME INVARIANT SYSTEM
Mathematically, a "system" is a functional relationship between the input x(t) and output y(t):
y(t) = f[x(t) : −∞ < t < ∞]

5.2 CLASSIFICATION OF SYSTEMS
1. Linear System: f is called a linear system if it satisfies
f[a₁X₁(t) ± a₂X₂(t)] = a₁f[X₁(t)] ± a₂f[X₂(t)]
2. Time Invariant System: Let Y(t) = f[X(t)]. If Y(t + h) = f[X(t + h)], then f is called a time invariant system, or X(t) and Y(t) are said to form a time invariant system.
3. Causal System: Suppose the value of the output Y(t) at t = t₀ depends only on the past values of the input X(t), t ≤ t₀. In other words, if Y(t₀) = f[X(t) : t ≤ t₀], then such a system is called a causal system.
4. Memoryless System: If the output Y(t) at a given time t = t₀ depends only on X(t₀) and not on any other past or future values of X(t), then the system f is called a memoryless system.
5. Stable System: A linear time invariant system is said to be stable if its response to any bounded input is bounded.
REMARK:
i) Note that when we write X(t) we mean X(s, t), where s ∈ S, S being the sample space. If the system operates only on the variable t, treating s as a parameter, it is called a deterministic system.
[Figure: (a) a general single input-output linear system, and (b) a linear time invariant (LTI) system, each with impulse response h(t), mapping input X(t) to output Y(t).]
5.3 REPRESENTATION OF A SYSTEM IN THE FORM OF CONVOLUTION
Y(t) = h(t) * X(t)
     = ∫_{−∞}^{∞} h(u) X(t − u) du
     = ∫_{−∞}^{∞} h(t − u) X(u) du

5.4 UNIT IMPULSE RESPONSE OF THE SYSTEM
If the input of the system is the unit impulse function, then the output or response is the system weighting function:
Y(t) = h(t)
which is the system weighting function.
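The convolution integral above has a direct discrete-time counterpart: sampling h and X and replacing the integral by a sum. A sketch with illustrative signals (the step dt, h and x are arbitrary choices):

```python
import numpy as np

# Discrete approximation of Y(t) = integral h(u) X(t - u) du.
dt = 0.01
u = np.arange(0.0, 5.0, dt)
h = np.exp(-2.0 * u)                    # e.g. h(t) = e^{-beta*t} U(t) with beta = 2
x = np.sin(2.0 * np.pi * u)             # an illustrative input signal

y = np.convolve(x, h)[: len(u)] * dt    # Riemann-sum approximation of the integral
print(y[:3])
```

Multiplying the discrete convolution by dt turns the sum into an approximation of the integral; without that factor the result would be off by a factor of 1/dt.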
5.4.1 PROPERTIES OF LINEAR SYSTEMS WITH RANDOM INPUT
Property 1:
If the input X(t) and its output Y(t) are related by
Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du,
then the system is a linear time-invariant system.
Property 2:
If the input to a time-invariant, stable linear system is a WSS process, then the output is also a WSS process; i.e., if {X(t)} is a WSS process, then the output {Y(t)} is a WSS process.
Property 3:
If {X(t)} is a WSS process and Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then
R_XY(τ) = R_XX(τ) * h(τ)
Property 4:
If {X(t)} is a WSS process and Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then
R_YY(τ) = R_XY(τ) * h(−τ)
Property 5:
If {X(t)} is a WSS process and Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then
R_YY(τ) = R_XX(τ) * h(τ) * h(−τ)
Property 6:
If {X(t)} is a WSS process and Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then
S_XY(ω) = S_XX(ω) H(ω)
Property 7:
If {X(t)} is a WSS process and Y(t) = ∫_{−∞}^{∞} h(u) X(t − u) du, then
S_YY(ω) = S_XX(ω) |H(ω)|²
Note:
Instead of taking R_XY(τ) = E[X(t) Y(t+τ)] in properties (3), (4) and (5), if we start with R_XY(τ) = E[X(t − τ) Y(t)], then the above properties can also be stated as
a) R_XY(τ) = R_XX(τ) * h(−τ)
b) R_YY(τ) = R_XY(τ) * h(τ)
c) R_YY(τ) = R_XX(τ) * h(−τ) * h(τ)
REMARK:
(i) We have written H(ω) H*(ω) = |H(ω)|² because
H(ω) = F[h(τ)] and H*(ω) = F[h(−τ)] = (F[h(τ)])*, the conjugate of H(ω).
(ii) Equation (c) gives a relationship between the spectral densities of the input and output processes of the system.
(iii) System transfer function: We call H(ω) = F{h(τ)} the system transfer function or power transfer function.
SOLVED PROBLEMS ON AUTO AND CROSS CORRELATION FUNCTIONS OF INPUT AND OUTPUT

Example 5.4.1
Find the power spectral density of the random telegraph signal.
Solution
We know that the auto correlation of the telegraph signal process X(t) is
R_XX(τ) = e^{−2λ|τ|}
∴ The power spectral density is
S_XX(ω) = ∫_{−∞}^{∞} R_XX(τ) e^{−iωτ} dτ
= ∫_{−∞}^{0} e^{2λτ} e^{−iωτ} dτ + ∫_{0}^{∞} e^{−2λτ} e^{−iωτ} dτ
[since |τ| = −τ when τ < 0 and |τ| = τ when τ > 0]
= ∫_{−∞}^{0} e^{(2λ−iω)τ} dτ + ∫_{0}^{∞} e^{−(2λ+iω)τ} dτ
= [e^{(2λ−iω)τ} / (2λ − iω)]_{−∞}^{0} + [e^{−(2λ+iω)τ} / −(2λ + iω)]_{0}^{∞}
= (e⁰ − e^{−∞})/(2λ − iω) − (e^{−∞} − e⁰)/(2λ + iω)
= (1 − 0)/(2λ − iω) − (0 − 1)/(2λ + iω)
= 1/(2λ − iω) + 1/(2λ + iω)
= (2λ + iω + 2λ − iω) / [(2λ − iω)(2λ + iω)]
S_XX(ω) = 4λ / (4λ² + ω²)
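The derived transform pair R(τ) = e^{−2λ|τ|} ↔ S(ω) = 4λ/(4λ² + ω²) can be verified through the inverse relation R(τ) = (1/2π) ∫ S(ω) e^{iωτ} dω. A numeric sketch with illustrative grid limits:

```python
import numpy as np

# Recover R(tau) = exp(-2*lam*|tau|) from S(w) = 4*lam/(4*lam**2 + w**2)
# by numerically evaluating (1/2*pi) * integral S(w)*cos(w*tau) dw
# (the sine part vanishes because S is even).
lam = 1.0
w = np.linspace(-1000.0, 1000.0, 1_000_001)
dw = w[1] - w[0]
S = 4.0 * lam / (4.0 * lam ** 2 + w ** 2)

errs = []
for tau in (0.0, 0.5, 1.0):
    r_num = np.sum(S * np.cos(w * tau)) * dw / (2.0 * np.pi)
    errs.append(abs(r_num - np.exp(-2.0 * lam * abs(tau))))
print(max(errs))
```

The small residual error comes from truncating the infinite frequency range, which can be tightened by widening the grid.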
Example 5.4.2
A linear time invariant system has impulse response h(t) = e^{−βt} U(t). Find the power spectral density of the output Y(t) corresponding to the input X(t).
Solution:
Given: X(t) is the input, Y(t) is the output, and
S_YY(ω) = |H(ω)|² S_XX(ω)
Now H(ω) = ∫_{−∞}^{∞} h(t) e^{−iωt} dt
= ∫_{0}^{∞} e^{−βt} e^{−iωt} dt
= ∫_{0}^{∞} e^{−(β+iω)t} dt
= [e^{−(β+iω)t} / −(β + iω)]_{0}^{∞}
= −(e^{−∞} − e⁰)/(β + iω)
= 1/(β + iω)
|H(ω)| = 1/|β + iω| = 1/√(β² + ω²)
∴ S_YY(ω) = [1/(β² + ω²)] S_XX(ω)
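The transfer-function magnitude derived above can be checked by numerically evaluating the (truncated) Fourier integral of h(t). A sketch with an illustrative β and a few frequencies:

```python
import numpy as np

# Numeric check of |H(w)|**2 = 1/(beta**2 + w**2) for h(t) = exp(-beta*t)U(t),
# using H(w) = integral_0^inf h(t) exp(-i*w*t) dt on a truncated grid.
beta = 2.0
t = np.linspace(0.0, 20.0, 200_001)
dt = t[1] - t[0]
h = np.exp(-beta * t)

errs = []
for wv in (0.0, 1.0, 3.0):
    H = np.sum(h * np.exp(-1j * wv * t)) * dt   # Riemann-sum approximation
    errs.append(abs(abs(H) ** 2 - 1.0 / (beta ** 2 + wv ** 2)))
print(max(errs))
```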
TUTORIAL QUESTIONS
1. State and prove the power spectral density of system response theorem.
2. Suppose that X(t) is the input to an LTI system with impulse response h₁(t) and that Y(t) is the input to another LTI system with impulse response h₂(t). It is assumed that X(t) and Y(t) are jointly wide sense stationary. Let V(t) and Z(t) denote the random processes at the respective system outputs. Find the cross correlation of X(t) and Y(t).
3. The input to the RC filter is a white noise process with a given ACF. Given the frequency response, find the auto correlation and the mean square value of the output process Y(t).
4. A random process X(t), having a given ACF where P and ω are real positive constants, is applied to the input of the system with impulse response
h(t) = λ e^{−λt}, t > 0
     = 0, t < 0
where λ is a real positive constant. Find the ACF of the network's response Y(t), and find the cross correlation.

WORKED OUT EXAMPLES
Example 1. Find the power spectral density of the random telegraph signal.
Solution
We know that the auto correlation of the telegraph signal process X(t) is
R_XX(τ) = e^{−2λ|τ|}
∴ The power spectral density is
S_XX(ω) = ∫_{−∞}^{∞} R_XX(τ) e^{−iωτ} dτ
= ∫_{−∞}^{0} e^{2λτ} e^{−iωτ} dτ + ∫_{0}^{∞} e^{−2λτ} e^{−iωτ} dτ
[since |τ| = −τ when τ < 0 and |τ| = τ when τ > 0]
= ∫_{−∞}^{0} e^{(2λ−iω)τ} dτ + ∫_{0}^{∞} e^{−(2λ+iω)τ} dτ
= (1 − 0)/(2λ − iω) − (0 − 1)/(2λ + iω)
= 1/(2λ − iω) + 1/(2λ + iω)
S_XX(ω) = 4λ / (4λ² + ω²)