
Random Variables and Processes

Prof. B B Tiwari

Introduction
Signals may be deterministic or random. If uncertainty exists in a signal, it is a random signal. Random signals are not fully predictable, but neither are they completely unpredictable: the probability of a correct prediction can be estimated to a certain extent.

Probability
When the possible outcomes of an experiment are not always the same, we deal with it using probability theory. For example, suppose an experiment is repeated N times and a possible outcome A occurs NA times. The relative frequency of occurrence of A is NA/N, and the probability of A is defined as the limit of this relative frequency:

P(A) = lim (N -> infinity) NA/N
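The relative-frequency definition above can be illustrated with a short simulation. This is a minimal sketch, assuming a fair six-sided die as the hypothetical experiment, with event A being "a 6 is rolled" (so P(A) = 1/6):

```python
import random

# Hypothetical experiment: roll a fair six-sided die N times.
# Event A is "a 6 is rolled"; the relative frequency N_A / N
# approaches P(A) = 1/6 as N grows.
random.seed(0)

N = 100_000
N_A = sum(1 for _ in range(N) if random.randint(1, 6) == 6)

p_estimate = N_A / N
print(round(p_estimate, 3))   # close to 1/6 ~ 0.167
```

Increasing N makes the estimate converge, which is exactly the limiting behavior the definition relies on.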

Mutually exclusive events A1 and A2, with probabilities P(A1) and P(A2), satisfy

P(A1 or A2) = P(A1) + P(A2)

In general, for L mutually exclusive events,

P(A1 or A2 or ... or AL) = sum_{i=1}^{L} P(Ai)

and if the events are also exhaustive (one of them must occur), we know that

sum_{i=1}^{L} P(Ai) = 1

If there are two events A and B, one event may affect the other. In this case the conditional probability is

P(B|A) = P(A|B) P(B) / P(A)

This result is known as Bayes' theorem. If A and B are totally independent, then P(B|A) = P(B) and the joint probability is P(A,B) = P(A) P(B).
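Bayes' theorem can be checked numerically. This is a minimal sketch with an assumed experiment: toss two fair coins, let A = "first coin is heads" and B = "at least one coin is heads". Then P(B|A) = 1, P(A) = 1/2, P(B) = 3/4, so Bayes' theorem predicts P(A|B) = (1)(1/2)/(3/4) = 2/3.

```python
import random

# Simulate two fair coin tosses and estimate P(A|B) directly,
# then compare with the value predicted by Bayes' theorem.
random.seed(1)

N = 200_000
n_B = n_AB = 0
for _ in range(N):
    c1, c2 = random.randint(0, 1), random.randint(0, 1)
    A = (c1 == 1)                 # first coin is heads
    B = (c1 == 1 or c2 == 1)      # at least one coin is heads
    n_B += B
    n_AB += (A and B)

P_A_given_B = n_AB / n_B          # simulated conditional probability
bayes_prediction = 1 * (1/2) / (3/4)   # P(B|A) P(A) / P(B)
print(round(P_A_given_B, 3), round(bayes_prediction, 3))
```

The simulated conditional probability agrees with the value computed from the theorem.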

Cumulative Distribution Function

The cumulative distribution function of a random variable X is

F(x) = P(X <= x) = integral_{-infinity}^{x} f(u) du

and the probability density function is its derivative,

f(x) = dF(x)/dx

The PDF has the following properties:

f(x) >= 0 for all x

integral_{-infinity}^{infinity} f(x) dx = 1

F(x) is non-decreasing, with F(-infinity) = 0 and F(infinity) = 1.

The probability of the outcome X being less than or equal to x1 is

P(X <= x1) = F(x1) = integral_{-infinity}^{x1} f(x) dx

and similarly P(X <= x2) = F(x2) = integral_{-infinity}^{x2} f(x) dx. The probability that the outcome lies in the range x1 <= X <= x2 is

P(x1 <= X <= x2) = P(X <= x2) - P(X < x1) = F(x2) - F(x1) = integral_{x1}^{x2} f(x) dx

For an infinitesimal interval,

P(x <= X <= x + dx) = f(x) dx
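The relation P(x1 <= X <= x2) = integral of f(x) over [x1, x2] can be computed numerically. This is a sketch with an assumed exponential density f(x) = exp(-x) for x >= 0, for which the closed-form answer is F(x2) - F(x1) = e^{-x1} - e^{-x2}:

```python
import math

# Assumed density: exponential, f(x) = exp(-x) for x >= 0.
def f(x):
    return math.exp(-x)

def prob(x1, x2, steps=100_000):
    # Midpoint Riemann sum of the density over [x1, x2].
    dx = (x2 - x1) / steps
    return sum(f(x1 + (i + 0.5) * dx) for i in range(steps)) * dx

p = prob(1.0, 2.0)
exact = math.exp(-1) - math.exp(-2)   # F(2) - F(1) in closed form
print(round(p, 4), round(exact, 4))
```

The numerical integral matches the closed-form difference of the distribution function, illustrating the PDF/CDF relationship.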

For two random variables X and Y, the probability that x <= X <= x+dx while at the same time y <= Y <= y+dy is given by the joint density fXY:

P(x <= X <= x+dx, y <= Y <= y+dy) = fXY(x,y) dx dy

This is the volume enclosed by fXY(x,y) over the element dx dy, and over a finite region

P(x1 <= X <= x2, y1 <= Y <= y2) = integral_{y1}^{y2} integral_{x1}^{x2} fXY(x,y) dx dy

The joint distribution function is

FXY(x,y) = P(X <= x, Y <= y) = integral_{-infinity}^{y} integral_{-infinity}^{x} fXY(u,v) du dv

and the marginal density of X is

fX(x) = integral_{-infinity}^{infinity} fXY(x,y) dy

If X and Y are independent, then

P(x <= X <= x+dx, y <= Y <= y+dy) = [fX(x) dx][fY(y) dy]

so that fXY(x,y) = fX(x) fY(y) and

P(x1 <= X <= x2, y1 <= Y <= y2) = [integral_{x1}^{x2} fX(x) dx][integral_{y1}^{y2} fY(y) dy]
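The factoring of the joint probability for independent variables can be checked empirically. This is a minimal sketch assuming two independent fair dice as X and Y, comparing P(X=3, Y=5) against P(X=3) P(Y=5) (both should be near 1/36):

```python
import random

# Two independent fair dice; check that the joint probability of a
# pair of outcomes factors into the product of the marginals.
random.seed(2)

N = 300_000
nx = ny = nxy = 0
for _ in range(N):
    x, y = random.randint(1, 6), random.randint(1, 6)
    nx += (x == 3)
    ny += (y == 5)
    nxy += (x == 3 and y == 5)

joint = nxy / N              # estimate of P(X=3, Y=5)
product = (nx / N) * (ny / N)  # estimate of P(X=3) P(Y=5)
print(round(joint, 4), round(product, 4))
```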

A communication example
We want to transmit one of two possible messages: the message m0, that the bit 0 is intended, or the message m1, that the bit 1 is intended. When m0 is received, it generates some voltage, say r0, which may be as simple as a dc voltage, while m1, when received, generates a voltage r1.

P(r0|m0) = probability that r0 is received given that m0 is sent
P(r1|m0) = probability that r1 is received given that m0 is sent
P(r0|m1) = probability that r0 is received given that m1 is sent
P(r1|m1) = probability that r1 is received given that m1 is sent

The messages allow for the general case that m1 and m0 do not occur with equal frequency, and we introduce the probabilities P(m1) and P(m0). These probabilities P(m1) and P(m0) are called the a priori probabilities. Now:

P(m0|r0) = probability that m0 is the message given that r0 is received
P(m1|r0) = probability that m1 is the message given that r0 is received

Clearly, if P(m0|r0) > P(m1|r0) then we should decide that m0 is intended, and if the inequality is reversed we should decide for m1. Altogether, our algorithm should be:

If r0 is received: choose m0 if P(m0|r0) > P(m1|r0); choose m1 if P(m1|r0) > P(m0|r0).
If r1 is received: choose m0 if P(m0|r1) > P(m1|r1); choose m1 if P(m1|r1) > P(m0|r1).

A receiver which operates in accordance with this algorithm is said to maximize the a posteriori probability (MAP) of a correct decision and is called an optimum receiver. Applying Bayes' theorem, the decision rules become:

Choose m0 on receiving r0 if P(r0|m0) P(m0) > P(r0|m1) P(m1)
Choose m1 on receiving r1 if P(r1|m1) P(m1) > P(r1|m0) P(m0)
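The MAP decision rule above can be sketched in a few lines. The channel transition probabilities and priors below are illustrative assumptions, not values from the text:

```python
# Assumed a priori probabilities and channel transition probabilities.
P_m0, P_m1 = 0.6, 0.4
P_r_given_m = {                      # P(r | m), a hypothetical channel
    ('r0', 'm0'): 0.9, ('r1', 'm0'): 0.1,
    ('r0', 'm1'): 0.2, ('r1', 'm1'): 0.8,
}

def map_decide(r):
    # Choose the message with the larger value of P(r|m) P(m),
    # which is proportional to the a posteriori probability P(m|r).
    score_m0 = P_r_given_m[(r, 'm0')] * P_m0
    score_m1 = P_r_given_m[(r, 'm1')] * P_m1
    return 'm0' if score_m0 > score_m1 else 'm1'

print(map_decide('r0'), map_decide('r1'))
```

With these assumed numbers, receiving r0 leads to choosing m0 (0.54 > 0.08) and receiving r1 leads to choosing m1 (0.32 > 0.06).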

Average value of random variable


The possible numerical values of the random variable X are x1, x2, x3, ..., with probabilities of occurrence P(x1), P(x2), P(x3), .... In N measurements, the sum of all observed values is approximately

x1 P(x1) N + x2 P(x2) N + ... = N sum_i xi P(xi)

The mean or average value of all these measurements, and hence the average value of the random variable, is calculated by dividing the sum shown above by the number of measurements N:

X-bar = E(X) = m = sum_i xi P(xi)
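The weighted sum for E(X) can be sketched directly. A fair six-sided die is the assumed example, with its values and equal probabilities:

```python
# E(X) = sum_i x_i P(x_i) for a discrete random variable.
values = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6              # equal probabilities for a fair die

mean = sum(x * p for x, p in zip(values, probs))
print(mean)                    # expected value of a fair die is 3.5
```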

Tchebycheff's Inequality

Tchebycheff's inequality states that

P(|X - m| >= epsilon) <= sigma^2 / epsilon^2

Proof: To prove this inequality we start with the equation of variance shown in the last slide. Assuming m = 0, we have

sigma^2 = integral_{-infinity}^{infinity} x^2 f(x) dx
        = integral_{-infinity}^{-epsilon} x^2 f(x) dx + integral_{-epsilon}^{epsilon} x^2 f(x) dx + integral_{epsilon}^{infinity} x^2 f(x) dx
Variance of a random variable


The variance of a random variable X is defined as

sigma^2 = E[(X - m)^2] = integral_{-infinity}^{infinity} (x - m)^2 f(x) dx

Writing (x - m)^2 = x^2 - 2mx + m^2 in the integral of the above equation and integrating term by term,

sigma^2 = E(X^2) - 2m^2 + m^2 = E(X^2) - m^2

The quantity sigma itself is called the standard deviation and is the root mean square (rms) value of (X - m). If the average value m = 0, then sigma^2 = E(X^2).
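The two equivalent forms of the variance can be checked numerically. A fair six-sided die is the assumed example; both expressions should give 35/12:

```python
# Verify sigma^2 = E[(X - m)^2] = E(X^2) - m^2 for a fair die.
values = [1, 2, 3, 4, 5, 6]
probs = [1/6] * 6

m = sum(x * p for x, p in zip(values, probs))                    # mean
var_def = sum((x - m)**2 * p for x, p in zip(values, probs))     # E[(X-m)^2]
var_alt = sum(x**2 * p for x, p in zip(values, probs)) - m**2    # E(X^2) - m^2
print(round(var_def, 4), round(var_alt, 4))
```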

Since x^2 >= 0 and f(x) >= 0 for all x, we have sigma^2 >= 0, and the variance (with m = 0) can be written as

sigma^2 = integral_{-infinity}^{-epsilon} x^2 f(x) dx + integral_{-epsilon}^{epsilon} x^2 f(x) dx + integral_{epsilon}^{infinity} x^2 f(x) dx
        >= integral_{-infinity}^{-epsilon} x^2 f(x) dx + integral_{epsilon}^{infinity} x^2 f(x) dx

In the ranges -infinity <= x <= -epsilon and epsilon <= x <= infinity we have x^2 >= epsilon^2. Replacing x^2 by epsilon^2,

sigma^2 >= epsilon^2 [integral_{-infinity}^{-epsilon} f(x) dx + integral_{epsilon}^{infinity} f(x) dx]

But P(X <= -epsilon) = integral_{-infinity}^{-epsilon} f(x) dx and P(X >= epsilon) = integral_{epsilon}^{infinity} f(x) dx, so

sigma^2 >= epsilon^2 P(|X| >= epsilon), that is, P(|X| >= epsilon) <= sigma^2 / epsilon^2

Hence proved.
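Tchebycheff's inequality can be checked empirically. This sketch assumes a zero-mean uniform random variable on [-1, 1], whose variance is 1/3, and compares the tail probability against the bound:

```python
import random

# Check P(|X| >= eps) <= sigma^2 / eps^2 for X ~ Uniform(-1, 1), m = 0.
random.seed(3)

N = 200_000
eps = 0.9
sigma2 = 1/3                      # variance of Uniform(-1, 1)

count = sum(1 for _ in range(N) if abs(random.uniform(-1, 1)) >= eps)
lhs = count / N                   # estimated P(|X| >= eps)
bound = sigma2 / eps**2
print(lhs, "<=", bound)
```

The true tail probability here is 0.1, well under the bound of about 0.41; the bound is loose, as Tchebycheff's inequality generally is, but it always holds.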

Gaussian Probability Density


The Gaussian probability density is

f(x) = (1 / sqrt(2 pi sigma^2)) exp(-(x - m)^2 / (2 sigma^2))

Its mean is

X-bar = E(X) = integral_{-infinity}^{infinity} x (1 / sqrt(2 pi sigma^2)) exp(-(x - m)^2 / (2 sigma^2)) dx = m

and its variance is

E[(X - m)^2] = integral_{-infinity}^{infinity} (x - m)^2 (1 / sqrt(2 pi sigma^2)) exp(-(x - m)^2 / (2 sigma^2)) dx = sigma^2
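That the parameters m and sigma^2 of the Gaussian density really are its mean and variance can be checked by sampling. The values m = 2, sigma = 3 below are arbitrary assumptions for illustration:

```python
import random
import statistics

# Draw Gaussian samples and confirm sample mean ~ m, sample variance ~ sigma^2.
random.seed(4)

m, sigma = 2.0, 3.0
samples = [random.gauss(m, sigma) for _ in range(200_000)]

print(round(statistics.mean(samples), 2),
      round(statistics.pvariance(samples), 2))
```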

Error Function
