Unit 2 Random Variables

Structure:
2.1 Introduction
2.2 One-dimensional Random Variable
Discrete and Continuous random variables
2.3 Distribution Function
Distribution Function of discrete random variables
Distribution Function of continuous random variables
2.4 Two-dimensional random variables
Discrete and Continuous random variables
Joint Density Function
2.5 Marginal and Conditional probability distribution
2.6 Mathematical expectation
2.7 Summary
2.8 Terminal Questions
2.9 Answers

2.1 Introduction
In the previous unit we studied sample spaces, different types of events, and probability. In this unit we introduce some important mathematical concepts that have many applications to the probabilistic models we are considering.
Objectives:
At the end of this unit students should be able to:
• explain the concept of random variables, and of discrete and continuous random variables
• explain the concept of the distribution function, two-dimensional random variables, the joint density function and its properties

2.2 One Dimensional Random Variable

Definition:
Let S be the sample space of a random experiment. Suppose with each element s of S, a unique real number X(s) is associated according to some rule. Then, X is called a random variable on S.

In other words, if f is a function (mapping) from S into the set R of all real numbers and X = f(s), s ∈ S, then X is called a random variable on S.
Roughly speaking, a random variable is a variable whose values depend on
chance.

For example, consider the random experiment of tossing a coin. For this experiment, the sample space is S = {H, T}. Let us define a mapping f : S → R by

    f(s) = 1 if s = H
           0 if s = T
Then, X = f(s) is a random variable on S. For the outcome H, the value of
this random variable is 1 and for the outcome T, its value is zero.

As another example, consider the random experiment of tossing 3 coins together. The corresponding sample space is
S = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}
which consists of 8 possible outcomes.
Suppose we define the mapping f : S → R by f(s) = number of heads in the outcome s.
i.e. f(HHH) = 3, f(HHT) = 2, f(HTH) = 2, f(THH) = 2, f(HTT) = 1, f(THT) = 1,
f(TTH) = 1, f(TTT) = 0
The random variable for this experiment is X, which has the value 3 for the element s1 = HHH of S, the value 2 for the elements s2 = HHT, s3 = HTH and s4 = THH, the value 1 for the elements s5 = HTT, s6 = THT and s7 = TTH, and the value 0 for the element s8 = TTT. As s varies over the set S, X varies over the set {0, 1, 2, 3} ⊂ R.
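
Because a random variable is simply a function from S into R, this construction is easy to mirror in code. The following Python sketch (an illustration added here, not part of the original text) enumerates the sample space of the three-coin experiment and applies X(s) = number of heads:

    from itertools import product

    # Sample space of tossing 3 coins together: all length-3 strings of H and T
    S = ["".join(outcome) for outcome in product("HT", repeat=3)]

    # The random variable X maps each outcome s to the number of heads in s
    def X(s):
        return s.count("H")

    for s in S:
        print(s, X(s))                  # HHH 3, HHT 2, ..., TTT 0

    print(sorted({X(s) for s in S}))    # range space Rx = [0, 1, 2, 3]
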
2.2.1 Discrete and Continuous Random Variables
Definition:
Let X be a random variable. If the number of possible values of X (that is,
Rx, the range space) is finite or countably infinite, we call X a discrete
random variable. That is, the possible values of X may be listed as x1, x2, …, xn, …. In the finite case the list terminates, and in the countably infinite case the list continues indefinitely.

Example: A radioactive source is emitting α-particles. The emission of these particles is observed on a counting device during a specified period of time. The following random variable is of interest:
X = number of particles observed.

What are the possible values of X? We shall assume that these values consist of all non-negative integers; that is, the range space of X is Rx = {0, 1, 2, …, n, …}.

It could be argued that during a specified (finite) time interval it is impossible to observe more than, say, N particles, where N may be a very large positive integer. Hence the possible values for X should really be 0, 1, 2, …, N.
Definition:
Let X be a discrete random variable. Then Rx, the range space of X, consists of at most a countably infinite number of values x1, x2, …. With each possible value xi we associate a number p(xi) = P[X = xi], called the probability of xi. The numbers p(xi), i = 1, 2, …, must satisfy the following conditions:

a) p(xi) ≥ 0 for all i

b) ∑_{i=1}^{∞} p(xi) = 1

The function p defined above is called the probability function (or point
probability function) of the random variable X. The collection of pairs
(xi, p(xi)), i = 1, 2, …, is sometimes called the probability distribution of X.
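
As a quick illustration (a minimal sketch, not from the original text), both conditions can be checked mechanically for any finite probability distribution stored as pairs (xi, p(xi)):

    # Probability distribution of X = number of heads in 3 coin tosses
    distribution = {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}

    # Condition (a): p(xi) >= 0 for all i
    assert all(p >= 0 for p in distribution.values())

    # Condition (b): the probabilities sum to 1 (use a tolerance for floats)
    assert abs(sum(distribution.values()) - 1.0) < 1e-12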

Definition:

Let X be a continuous random variable. A function f is called the probability density function (pdf) of X if it satisfies the following conditions:

(a) f(x) ≥ 0 for all x

(b) ∫_{−∞}^{∞} f(x) dx = 1

(c) For any a, b with −∞ < a < b < +∞, we have

    P(a ≤ X ≤ b) = ∫_a^b f(x) dx
Definition:
Let X be a random variable, discrete or continuous. We define F to be the cumulative distribution function of the random variable (abbreviated as cdf), where F(x) = P[X ≤ x].
(a) If X is a discrete random variable, then the cdf is F(x) = ∑_j p(xj), where the sum is taken over all indices j satisfying xj ≤ x.
(b) If X is a continuous random variable with pdf f, then the cdf is

    F(x) = ∫_{−∞}^{x} f(s) ds

Note: If F(x) is the cdf, then the pdf f(x) can be obtained by using the formula

    f(x) = (d/dx) F(x) = F′(x)
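
For a discrete random variable the cdf is just a running total of the point probabilities. A small sketch (again illustrative, reusing the distribution above):

    def cdf(distribution, x):
        """F(x) = P[X <= x]: sum of p(xj) over all xj <= x."""
        return sum(p for xj, p in distribution.items() if xj <= x)

    distribution = {0: 1/8, 1: 3/8, 2: 3/8, 3: 1/8}
    print(cdf(distribution, 1))      # 0.5
    print(cdf(distribution, 2.7))    # 0.875; F is a step function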

2.3 Distribution Function


Definition: Let X be a random variable. The function F : R → R defined by F(x) = P(X ≤ x) is called the distribution function of the random variable X.
Note 1. F(−∞) = 0 and F(+∞) = 1.
Note 2. If F is the distribution function of a random variable X and a < b, then P(a < X ≤ b) = F(b) − F(a).
Note 3. P(a ≤ X ≤ b) = P(X = a) + F(b) − F(a).
2.3.1 Distribution Function of discrete random variables
Definition: Let X be a random variable whose range space is {x1, x2, …, xn}. Let P(X = xi) = pi. Then the distribution function F : R → R of the discrete random variable X is given by

    F(x) = P(X ≤ x) = ∑_{xi ≤ x} pi

and the function p(xi) = pi is called the probability mass function (pmf) of the discrete random variable X.

Note: If P(X = xi) = pi, then from the definition of probability it follows that ∑_i pi = 1.

2.3.2 Distribution Function of continuous random variables


Definition: Let X be a continuous random variable with probability density function f(x). The function F : R → R given by

    F(x) = ∫_{−∞}^{x} f(t) dt

is called the distribution function of the continuous random variable X.

Note: From the definition of f(x) we have the following properties:
(i) F(−∞) = 0
(ii) F(+∞) = 1
(iii) P(a ≤ x ≤ b) = F(b) − F(a) = ∫_{−∞}^{b} f(x) dx − ∫_{−∞}^{a} f(x) dx = ∫_a^b f(x) dx

Example: Verify that p(x) defined by

    p(x) = e^(−x)   for x ≥ 0
           0        for x < 0

is a probability density function. Find the probability that the variate having this density will fall in the interval [1.5, 2.5]. Also, evaluate the cumulative distribution function F(2.5).

Solution:
Evidently, the given p(x) ≥ 0. Further,

    P(−∞ < x < ∞) = ∫_{−∞}^{∞} p(x) dx = ∫_{−∞}^{0} p(x) dx + ∫_0^∞ p(x) dx
                  = ∫_0^∞ e^(−x) dx = 1

The two conditions verified above show that p(x) is a probability density function.
Next, we find that

    P(1.5 < x < 2.5) = ∫_{1.5}^{2.5} p(x) dx = ∫_{1.5}^{2.5} e^(−x) dx
                     = e^(−1.5) − e^(−2.5) = 0.1410

Lastly, we find that

    F(2.5) = ∫_{−∞}^{2.5} p(x) dx = ∫_0^{2.5} e^(−x) dx
           = 1 − e^(−2.5)
           = 0.9179.
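
These computations can be confirmed numerically, for instance with scipy (a sketch assuming scipy is installed; the figures above are reproduced up to rounding):

    from math import exp, inf
    from scipy.integrate import quad

    p = lambda x: exp(-x) if x >= 0 else 0.0

    total, _ = quad(p, 0, inf)     # integral of p over its support
    prob, _ = quad(p, 1.5, 2.5)    # P(1.5 < x < 2.5)
    F25, _ = quad(p, 0, 2.5)       # F(2.5)

    print(round(total, 4), round(prob, 4), round(F25, 4))   # 1.0 0.141 0.9179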

Example: For the distribution given by the cumulative distribution function

    F(t) = 0    for t < 0
           t²   for 0 ≤ t ≤ 1
           1    for t > 1

find the density function. Also, evaluate

(i) P(0.5 < X < 0.75)
(ii) P(X ≤ 0.5)
(iii) P(X > 0.75)

Solution:
Recalling that p(t) = F′(t), we find that

    p(t) = 0    for t < 0
           2t   for 0 ≤ t ≤ 1
           0    for t > 1

Also,

    P(0.5 < X < 0.75) = F(0.75) − F(0.5)
                      = (0.75)² − (0.5)²     (since F(t) = ∫_{−∞}^{t} p(x) dx)
                      = 0.3125
    P(X ≤ 0.5) = F(0.5) = (0.5)² = 0.25
    P(X > 0.75) = 1 − P(X ≤ 0.75) = 1 − F(0.75) = 1 − (0.75)²
                = 0.4375
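
The same example can be checked symbolically; the sketch below (assuming sympy is available) differentiates F on [0, 1] and evaluates the three probabilities:

    import sympy as sp

    t = sp.symbols('t')
    F = t**2                                   # F(t) on the interval [0, 1]
    p = sp.diff(F, t)                          # density p(t) = 2t on [0, 1]

    print(p)                                   # 2*t
    print(F.subs(t, 0.75) - F.subs(t, 0.5))    # P(0.5 < X < 0.75) = 0.3125
    print(F.subs(t, 0.5))                      # P(X <= 0.5) = 0.25
    print(1 - F.subs(t, 0.75))                 # P(X > 0.75) = 0.4375
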
Example: The p.d.f. of a random variable X is given by

    f(x) = x       for 0 ≤ x ≤ 1
           2 − x   for 1 < x ≤ 2
           0       elsewhere

Find (i) the cumulative distribution function, and (ii) P(X ≥ 1.5).

Solution: By using the formula

    F(t) = P[X ≤ t] = ∫_{−∞}^{t} f(x) dx

For −∞ < t < 0, we get

    F(t) = ∫_{−∞}^{t} f(x) dx = ∫_{−∞}^{t} 0 dx = 0

For 0 ≤ t ≤ 1, we get

    F(t) = ∫_{−∞}^{0} f(x) dx + ∫_0^t f(x) dx = ∫_{−∞}^{0} 0 dx + ∫_0^t x dx = t²/2

For 1 < t ≤ 2, we get

    F(t) = ∫_{−∞}^{0} 0 dx + ∫_0^1 x dx + ∫_1^t (2 − x) dx = 1 − (1/2)(t − 2)²

For t > 2, we get

    F(t) = ∫_{−∞}^{0} 0 dx + ∫_0^1 x dx + ∫_1^2 (2 − x) dx + ∫_2^t 0 dx = 1/2 + 1/2 = 1

Thus, the required c.d.f. is

    F(t) = 0                  for −∞ < t < 0
           t²/2               for 0 ≤ t ≤ 1
           1 − (1/2)(t − 2)²  for 1 < t ≤ 2
           1                  for t > 2

Now P(X ≥ 1.5) = ∫_{1.5}^{∞} f(x) dx = ∫_{1.5}^{2} (2 − x) dx + ∫_2^{∞} 0 dx = 1/8
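
A direct numerical check of this c.d.f. and of P(X ≥ 1.5) (an illustrative sketch, not part of the original solution):

    from scipy.integrate import quad

    def f(x):
        if 0 <= x <= 1:
            return x
        if 1 < x <= 2:
            return 2 - x
        return 0.0

    def F(t):
        return quad(f, 0, t)[0] if t > 0 else 0.0

    print(F(1.0))        # 0.5, i.e. t**2/2 at t = 1
    print(F(1.5))        # 0.875, i.e. 1 - (1/2)(1.5 - 2)**2
    print(1 - F(1.5))    # P(X >= 1.5) = 0.125 = 1/8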

Example: A random variable X has the density function

    f(x) = kx²   for −3 ≤ x ≤ 3
           0     elsewhere

Evaluate k, and find
(i) P(1 ≤ X ≤ 2)
(ii) P(X ≤ 2)
(iii) P(X > 1)

Solution: Since f(x) is a pdf of X, we have

    ∫_{−∞}^{∞} f(x) dx = 1, i.e. ∫_{−3}^{3} kx² dx = k [x³/3]_{−3}^{3} = 18k = 1

    ⇒ k = 1/18

(i) P(1 ≤ X ≤ 2) = ∫_1^2 f(x) dx = (1/18) ∫_1^2 x² dx = 7/54
(ii) P(X ≤ 2) = ∫_{−3}^{2} f(x) dx = (1/18) ∫_{−3}^{2} x² dx = 35/54
(iii) P(X > 1) = ∫_1^3 f(x) dx = (1/18) ∫_1^3 x² dx = 13/27
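
The normalization that fixes k can be reproduced symbolically (a sketch using sympy):

    import sympy as sp

    x, k = sp.symbols('x k')

    # Solve: integral of k*x**2 over [-3, 3] equals 1
    k_val = sp.solve(sp.integrate(k * x**2, (x, -3, 3)) - 1, k)[0]
    print(k_val)                          # 1/18

    f = k_val * x**2
    print(sp.integrate(f, (x, 1, 2)))     # P(1 <= X <= 2) = 7/54
    print(sp.integrate(f, (x, -3, 2)))    # P(X <= 2) = 35/54
    print(sp.integrate(f, (x, 1, 3)))     # P(X > 1) = 13/27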

SAQ 1: A random variable X has the following probability function:

    x          0    1    2    3    4
    P(X = x)   c    3c   5c   7c   9c

i) Find the value of c.
ii) Find P(X < 4), P(X ≥ 4) and P(0 < X < 4).
iii) Find the distribution function of X.

SAQ 2: Is the function defined as follows a density function?

    f(x) = x e^(−x²/2)   for x ≥ 0
           0             otherwise

2.4 Two-Dimensional Random Variables


Definition: Let E be an experiment and S a sample space associated with E. Let X = X(s) and Y = Y(s) be two functions, each assigning a real number to each outcome s ∈ S. We call (X, Y) a two-dimensional random variable.

If X1 = X1(s), X2 = X2(s), …, Xn = Xn(s) are n functions, each assigning a real number to every outcome s ∈ S, we call (X1, …, Xn) an n-dimensional random variable.

2.4.1 Discrete and Continuous Random Variables

Definition: Let (X, Y) be a two-dimensional discrete random variable; that is, the possible values of (X, Y) are finite or countably infinite, and may be represented as (xi, yj), i = 1, 2, …, n; j = 1, 2, …, m.

With each possible outcome (xi, yj) we associate a number p(xi, yj) representing P[X = xi, Y = yj] and satisfying the following conditions:

(1) p(xi, yj) ≥ 0 for all (xi, yj)

(2) ∑_{j=1}^{∞} ∑_{i=1}^{∞} p(xi, yj) = 1

The function p defined for all (xi, yj) in the range space of (X, Y) is called the probability function of (X, Y). The set of triples (xi, yj; p(xi, yj)), i, j = 1, 2, …, is called the probability distribution of (X, Y).

2.4.2 Joint Density Function

Definition: Let (X, Y) be a continuous random variable assuming all values in some region R of the Euclidean plane. The joint probability density function f(x, y) is a function satisfying the following conditions:
(i) f(x, y) ≥ 0 for all (x, y) ∈ R
(ii) ∬_R f(x, y) dx dy = 1

Definition: Let (X, Y) be a two-dimensional random variable. The cumulative distribution function (cdf) F of the two-dimensional random variable (X, Y) is defined by F(x, y) = P[X ≤ x, Y ≤ y].

2.5 Marginal and Conditional Probability Distribution

Definition: With each two-dimensional random variable (X, Y) we associate two one-dimensional random variables, namely X and Y, individually. That is, we may be interested in the probability distribution of X or in the probability distribution of Y.
(a) Let (X, Y) be a discrete random variable with probability distribution p(xi, yj), i, j = 1, 2, 3, …. The marginal probability distribution of X is defined as

    p(xi) = P[X = xi] = ∑_{j=1}^{∞} p(xi, yj)

Similarly, the marginal probability distribution of Y is defined as

    q(yj) = P[Y = yj] = ∑_{i=1}^{∞} p(xi, yj)

(b) Let (X, Y) be a two-dimensional continuous random variable with joint pdf f(x, y). The marginal probability density function of X can be defined as

    g(x) = ∫_{−∞}^{∞} f(x, y) dy

The marginal probability density function of Y can be defined as

    h(y) = ∫_{−∞}^{∞} f(x, y) dx

Note:

    P(c ≤ X ≤ d) = P(c ≤ X ≤ d, −∞ < Y < ∞) = ∫_c^d ∫_{−∞}^{∞} f(x, y) dy dx

i.e. P(c ≤ X ≤ d) = ∫_c^d g(x) dx.

Similarly, P(a ≤ Y ≤ b) = ∫_a^b h(y) dy.
Definition: Let (X, Y) be a discrete two-dimensional random variable with probability distribution p(xi, yj). Let p(xi) and q(yj) be the marginal distributions of X and Y, respectively.

The conditional distribution of X given Y = yj is defined by

    p(xi | yj) = P[X = xi | Y = yj] = p(xi, yj) / q(yj),   if q(yj) > 0

Similarly, the conditional distribution of Y given X = xi is defined as

    q(yj | xi) = P[Y = yj | X = xi] = p(xi, yj) / p(xi),   if p(xi) > 0

Definition: Let (X, Y) be a continuous two-dimensional random variable with joint pdf f. Let g and h be the marginal pdfs of X and Y, respectively. The conditional pdf of X given Y = y is defined by

    g(x | y) = f(x, y) / h(y),   h(y) > 0

The conditional pdf of Y given X = x is defined by

    h(y | x) = f(x, y) / g(x),   g(x) > 0
Definition:
(a) Let (X, Y) be a two-dimensional discrete random variable. We say that X and Y are independent random variables if and only if p(xi, yj) = p(xi) q(yj) for all i and j, that is, P(X = xi, Y = yj) = P(X = xi) P(Y = yj) for all i and j.
(b) Let (X, Y) be a two-dimensional continuous random variable. We say that X and Y are independent random variables if and only if f(x, y) = g(x) h(y) for all (x, y), where f is the joint pdf, and g and h are the marginal pdfs of X and Y, respectively.
Note:
(i) Let (X, Y) be a two-dimensional discrete random variable. Then X and Y are independent if and only if p(xi | yj) = p(xi) for all i and j or, equivalently, if and only if q(yj | xi) = q(yj) for all i and j.
(ii) Let (X, Y) be a two-dimensional continuous random variable. Then X and Y are independent if and only if g(x | y) = g(x) or, equivalently, if and only if h(y | x) = h(y), for all (x, y).

2.6 Mathematical Expectation


Definition: Let (X, Y) be a two-dimensional random variable, and let Z = H(X, Y) be a real-valued function of (X, Y). Then Z is a one-dimensional random variable, and we define E(Z) as follows.
(a) If Z is a discrete random variable with possible values z1, z2, … and with p(zi) = P(Z = zi), then

    E(Z) = ∑_{i=1}^{∞} zi p(zi),  or equivalently  E(H(X, Y)) = ∑_i ∑_j H(xi, yj) p(xi, yj)

(b) If Z is a continuous random variable with pdf f, we have

    E(Z) = ∫_{−∞}^{∞} z f(z) dz,  or  E(H(X, Y)) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} H(x, y) f(x, y) dx dy

Note:
(1) The concepts discussed for the one-dimensional case also hold good for two-dimensional random variables.
(2) Let (X, Y) be a two-dimensional random variable with a joint probability distribution. Let Z = H1(X, Y) and W = H2(X, Y). Then E(Z + W) = E(Z) + E(W).
(3) Let (X, Y) be a two-dimensional random variable and suppose that X and Y are independent. Then E(XY) = E(X) E(Y).

Proof: Since X and Y are independent, f(x, y) = g(x) h(y). Hence

    E(XY) = ∫∫ xy f(x, y) dx dy = ∫∫ xy g(x) h(y) dx dy
          = ∫ x g(x) dx · ∫ y h(y) dy = E(X) E(Y)

(4) If (X, Y) is a two-dimensional random variable and if X and Y are independent, then V(X + Y) = V(X) + V(Y).

Proof:
    V(X + Y) = E[(X + Y)²] − [E(X + Y)]²
             = E(X² + Y² + 2XY) − (E(X))² − 2 E(X) E(Y) − (E(Y))²
             = E(X²) − [E(X)]² + E(Y²) − [E(Y)]²    (since E(XY) = E(X) E(Y), the cross terms cancel)
             = V(X) + V(Y)
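
Properties (3) and (4) are easy to see empirically: simulate two independent random variables and compare the sample moments (a Monte Carlo sketch; the two numbers in each line agree up to sampling error):

    import random

    random.seed(1)
    N = 200_000
    xs = [random.expovariate(1.0) for _ in range(N)]   # X ~ Exp(1)
    ys = [random.expovariate(2.0) for _ in range(N)]   # Y ~ Exp(2), independent of X

    mean = lambda v: sum(v) / len(v)
    var = lambda v: mean([t * t for t in v]) - mean(v) ** 2

    print(mean([x * y for x, y in zip(xs, ys)]), mean(xs) * mean(ys))  # E(XY) vs E(X)E(Y)
    print(var([x + y for x, y in zip(xs, ys)]), var(xs) + var(ys))     # V(X+Y) vs V(X)+V(Y)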

Example: The distributions of two stochastically independent random variables X and Y are given in the following tables:

    X      0     1            Y      1     2     3
    P(X)   0.2   0.8          P(Y)   0.1   0.4   0.5

Find the joint distribution of X and Y.

Solution: Since X and Y are stochastically independent, the joint probabilities pij are given by

    pij = p(xi, yj) = p(xi) q(yj) for all i, j.

From the given tables, we note that
p(x1) = 0.2, p(x2) = 0.8
q(y1) = 0.1, q(y2) = 0.4, q(y3) = 0.5

Hence
p11 = p(x1, y1) = p(x1) q(y1) = (0.2)(0.1) = 0.02
p12 = p(x1, y2) = p(x1) q(y2) = (0.2)(0.4) = 0.08
p13 = p(x1, y3) = p(x1) q(y3) = (0.2)(0.5) = 0.10
p21 = p(x2, y1) = p(x2) q(y1) = (0.8)(0.1) = 0.08
p22 = p(x2, y2) = p(x2) q(y2) = (0.8)(0.4) = 0.32
p23 = p(x2, y3) = p(x2) q(y3) = (0.8)(0.5) = 0.40

Accordingly, the joint probability distribution of X and Y is given by the following table:

    X/Y    1             2             3             Sum
    0      0.02          0.08          0.10          p(x1) = 0.2
    1      0.08          0.32          0.40          p(x2) = 0.8
    Sum    q(y1) = 0.1   q(y2) = 0.4   q(y3) = 0.5   1
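
Under independence the whole joint table is just the outer product of the two marginal rows, so the computation above collapses to one line per cell; a small sketch:

    px = {0: 0.2, 1: 0.8}             # marginal distribution of X
    qy = {1: 0.1, 2: 0.4, 3: 0.5}     # marginal distribution of Y

    # pij = p(xi) q(yj) for every pair (xi, yj)
    joint = {(x, y): px[x] * qy[y] for x in px for y in qy}

    for x in px:
        print([round(joint[(x, y)], 2) for y in qy])   # [0.02, 0.08, 0.1] and [0.08, 0.32, 0.4]
    print(sum(joint.values()))                         # 1.0 (up to float rounding)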

Example: For jointly distributed random variables X and Y and constants a, b, c, d, prove that
i) Cov(aX + b, cY + d) = ac Cov(X, Y)
ii) Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y)

Solution:
i) Cov(aX + b, cY + d)
   = E[{(aX + b) − E(aX + b)} {(cY + d) − E(cY + d)}]
   = E[{aX + b − aE(X) − b} {cY + d − cE(Y) − d}]
   = E[ac {X − E(X)} {Y − E(Y)}]
   = ac E[{X − E(X)} {Y − E(Y)}]
   = ac Cov(X, Y)

ii) The definition of the variance of a random variable yields:
   Var(X + Y) = E[{(X + Y) − E(X + Y)}²]
              = E[{X − E(X) + Y − E(Y)}²]
              = E[(X − E(X))² + (Y − E(Y))² + 2(X − E(X))(Y − E(Y))]
              = Var(X) + Var(Y) + 2 Cov(X, Y)
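
Both identities can be sanity-checked on simulated data (a sketch using numpy; population moments, hence the mean-based cov helper and numpy's default ddof=0 variance):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(size=100_000)
    y = 0.5 * x + rng.normal(size=100_000)   # deliberately correlated with x

    a, b, c, d = 2.0, 1.0, -3.0, 4.0
    cov = lambda u, v: np.mean((u - u.mean()) * (v - v.mean()))

    print(cov(a * x + b, c * y + d), a * c * cov(x, y))             # identity (i)
    print(np.var(x + y), np.var(x) + np.var(y) + 2 * cov(x, y))     # identity (ii)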

Example: Find the constant k so that

    f(x, y) = k(x + 1) e^(−y)   for 0 < x < 1, y > 0
              0                 elsewhere

is a joint probability density function. Are X and Y independent?

Solution: We observe that f(x, y) ≥ 0 for all x, y if k ≥ 0. Further,

    ∫∫ f(x, y) dx dy = ∫_{y=0}^{∞} ∫_{x=0}^{1} k(x + 1) e^(−y) dx dy
                     = k ∫_0^1 (x + 1) dx · ∫_0^∞ e^(−y) dy
                     = k [(x + 1)²/2]_0^1 = k (2² − 1²)/2 = (3/2) k

Accordingly, f(x, y) is a joint probability density function if k = 2/3. With k = 2/3, we find that the marginal density functions are

    g(x) = ∫_{−∞}^{∞} f(x, y) dy = (2/3)(x + 1) ∫_0^∞ e^(−y) dy = (2/3)(x + 1),   0 < x < 1

and

    h(y) = ∫_{−∞}^{∞} f(x, y) dx = (2/3) e^(−y) ∫_0^1 (x + 1) dx = e^(−y),   y > 0

We observe that g(x) h(y) = f(x, y); hence X and Y are independent random variables.
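
A numerical double integral confirms k = 2/3 and the factorization of the marginals (a sketch assuming scipy; dblquad integrates over the inner variable y first):

    from math import exp, inf
    from scipy.integrate import dblquad

    f = lambda y, x: (2 / 3) * (x + 1) * exp(-y)   # note dblquad's (y, x) argument order

    # y over (0, inf), x over (0, 1); the total mass should be 1 with k = 2/3
    total, _ = dblquad(f, 0, 1, 0, inf)
    print(round(total, 6))                          # 1.0

    g = lambda x: (2 / 3) * (x + 1)                 # marginal of X
    h = lambda y: exp(-y)                           # marginal of Y
    print(g(0.3) * h(1.2), f(1.2, 0.3))             # equal, so X and Y are independent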

Example: Verify that

    f(x, y) = e^(−(x + y))   for x ≥ 0, y ≥ 0
              0              otherwise

is a density function of a two-dimensional probability distribution. Then evaluate the following:
i) P(1/2 < X < 2, 0 < Y < 4)
ii) P(X < 1)
iii) P(X > Y)
iv) P(X + Y ≤ 1)
v) P(0 < X < 1 | Y = 2)

Solution: We note that the given f(x, y) ≥ 0 for all x and y, and

    ∫∫ f(x, y) dx dy = ∫_0^∞ ∫_0^∞ e^(−(x + y)) dx dy
                     = ∫_0^∞ e^(−x) dx · ∫_0^∞ e^(−y) dy
                     = (1)(1) = 1

Accordingly, f(x, y) is a density function.

(i) P(1/2 < X < 2, 0 < Y < 4) = ∫_{1/2}^{2} ∫_0^4 f(x, y) dy dx
                              = ∫_{1/2}^{2} e^(−x) dx · ∫_0^4 e^(−y) dy
                              = (e^(−1/2) − e^(−2)) (1 − e^(−4))

(ii) The marginal density function of X is

    g(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_0^∞ e^(−(x + y)) dy = e^(−x) ∫_0^∞ e^(−y) dy = e^(−x)

Therefore P[X < 1] = ∫_0^1 g(x) dx = ∫_0^1 e^(−x) dx = 1 − 1/e.

(iii) The event {X ≤ Y} corresponds to the region of the first quadrant lying above the line y = x (bounded by x = 0, y = x and y = 0). We note that

    P[X ≤ Y] = ∫_0^∞ ∫_0^y f(x, y) dx dy
             = ∫_0^∞ e^(−y) ( ∫_0^y e^(−x) dx ) dy
             = ∫_0^∞ e^(−y) (1 − e^(−y)) dy
             = ∫_0^∞ e^(−y) dy − ∫_0^∞ e^(−2y) dy
             = 1 − 1/2 = 1/2

Therefore

    P[X > Y] = 1 − P[X ≤ Y] = 1 − 1/2 = 1/2

(iv) The event {X + Y ≤ 1} with x ≥ 0, y ≥ 0 corresponds to the triangle with vertices (0, 0), (1, 0) and (0, 1), bounded by x = 0, y = 0 and the line x + y = 1. Hence

    P[X + Y ≤ 1] = ∫_{x=0}^{1} ∫_{y=0}^{1−x} e^(−(x + y)) dy dx
                 = ∫_0^1 e^(−x) ( ∫_0^{1−x} e^(−y) dy ) dx
                 = ∫_0^1 e^(−x) (1 − e^(−(1−x))) dx
                 = ∫_0^1 (e^(−x) − e^(−1)) dx = 1 − 2/e

(v) By using the formula for conditional probability, we note that

    P(0 < X < 1 | Y = 2) = P(0 < X < 1, Y = 2) / P(Y = 2)

Now, P((0 < X < 1) ∩ (Y = 2)) = ∫_{x=0}^{1} e^(−(x + 2)) dx = e^(−2) (1 − e^(−1))


and P[Y = 2] = ∫_0^∞ e^(−(x + y)) dx evaluated at y = 2, i.e.

    P[Y = 2] = ∫_0^∞ e^(−(x + 2)) dx = e^(−2)

Therefore

    P[0 < X < 1 | Y = 2] = e^(−2) (1 − e^(−1)) / e^(−2) = 1 − e^(−1)
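
All the probabilities computed above can be verified numerically; a sketch with scipy (dblquad takes the integrand as f(y, x), and the y-limits may depend on x):

    from math import e, exp, inf
    from scipy.integrate import dblquad

    f = lambda y, x: exp(-(x + y))

    print(dblquad(f, 0.5, 2, 0, 4)[0])                         # (i): (e**-0.5 - e**-2)(1 - e**-4)
    print(dblquad(f, 0, 1, 0, inf)[0], 1 - 1 / e)              # (ii): P(X < 1)
    print(dblquad(f, 0, inf, 0, lambda x: x)[0])               # (iii): P(X > Y) = 1/2
    print(dblquad(f, 0, 1, 0, lambda x: 1 - x)[0], 1 - 2 / e)  # (iv): P(X + Y <= 1)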

Example: The lifetime X and brightness Y of a light bulb are modeled as continuous random variables with joint density function

    f(x, y) = λμ e^(−(λx + μy))   for 0 < x < ∞, 0 < y < ∞
              0                   elsewhere

where λ and μ are constants. Find (i) the marginal density functions of X and Y, and (ii) the compound cumulative distribution function.

Solution: The marginal density function of X is

    g(x) = ∫_{−∞}^{∞} f(x, y) dy = ∫_0^∞ λμ e^(−(λx + μy)) dy
         = λ e^(−λx) ∫_0^∞ μ e^(−μy) dy
         = λ e^(−λx),   0 < x < ∞   … (i)

Similarly, the marginal density function of Y is

    h(y) = μ e^(−μy),   0 < y < ∞   … (ii)

The compound cumulative distribution function is

    F(x, y) = ∫_{−∞}^{x} ∫_{−∞}^{y} f(s, t) dt ds

            = ∫_0^x λ e^(−λs) ds · ∫_0^y μ e^(−μt) dt = (1 − e^(−λx)) (1 − e^(−μy))

Thus

    F(x, y) = (1 − e^(−λx)) (1 − e^(−μy))   for 0 < x < ∞, 0 < y < ∞
              0                             elsewhere

Note: From (i) and (ii), it is evident that g(x) h(y) = f(x, y). Hence X and Y are stochastically independent.


SAQ 3: The joint distribution of the two-dimensional random variable (X, Y) is given by the following table:
X/Y 2 3 4
1 0.06 0.15 0.09
2 0.14 0.35 0.21
Determine the individual distributions of X and Y. Also, verify that X and Y
are stochastically independent.

SAQ 4: A coin is tossed three times. Let X denote 0 or 1 according as a tail or a head occurs on the first toss. Let Y denote the number of tails which
occur. Determine
(i) The distributions of X and Y and
(ii) The joint distribution of X and Y. Also, find the expected value of
X+Y and XY.
SAQ 5: The joint distribution of X and Y is given by

    f(x, y) = (x + y)/21,   x = 1, 2, 3;  y = 1, 2.

Find the marginal distributions.

SAQ 6: Find k, if the joint probability density function of the bivariate random variable (X, Y) is given by

    f(x, y) = k(1 − x)(1 − y)   if 0 < x < 4, 1 < y < 5
              0                 otherwise
SAQ 7: Find the marginal density functions of (X, Y), if

    f(x, y) = (2/5)(2x + 5y)   if 0 ≤ x ≤ 1, 0 ≤ y ≤ 1
              0                otherwise

2.7 Summary
In this unit we covered discrete and continuous random variables, one- and two-dimensional random variables, distribution functions, marginal and conditional distributions, mathematical expectation and covariance.

2.8 Terminal Questions


1. A random variable X has the following mass function

x 0 1 2 3 4 5 6 7 8

P(x) a 3a 5a 7a 9a 11a 13a 15a 17a

i) Determine the value of a.
ii) Find P(X < 3), P(X ≥ 3) and P(0 < X < 5).
iii) Find the distribution function of X.

2. The joint density function of the random variables X and Y is given by

    f(x, y) = 8xy   for 0 < x < 1, 0 < y < x
              0     otherwise

Find (i) f_X(x), (ii) f_Y(y), (iii) f(y | x).

3. If the joint p.d.f. of X and Y is given by

    f(x, y) = (1/8)(6 − x − y)   for 0 < x < 2, 2 < y < 4
              0                  otherwise

find (i) P(X < 1 ∩ Y < 3), (ii) P(X < 1 | Y < 3), (iii) P(X + Y < 3).

2.9 Answers
Self Assessment Questions:
1. c = 1/25; P(X < 4) = 16/25, P(X ≥ 4) = 9/25, P(0 < X < 4) = 3/5
2. Yes
3. From the given table, we note that X takes values 1, 2 and Y takes values 2, 3, 4. Also p11 = 0.06, p12 = 0.15, p13 = 0.09, p21 = 0.14, p22 = 0.35, p23 = 0.21, where pij = p(xi, yj).

Then the distribution of X, i.e. p(xi), i = 1, 2, can be obtained as follows:

    p(xi) = ∑_{j=1}^{3} p(xi, yj)

    p(x1) = p(x1, y1) + p(x1, y2) + p(x1, y3) = p11 + p12 + p13 = 0.3
    p(x2) = p(x2, y1) + p(x2, y2) + p(x2, y3) = p21 + p22 + p23 = 0.7

Similarly, the distribution of Y, i.e. q(yj), can be obtained as follows:

    q(yj) = ∑_{i=1}^{2} p(xi, yj)

    q(y1) = p(x1, y1) + p(x2, y1) = p11 + p21 = 0.2

    q(y2) = p(x1, y2) + p(x2, y2) = p12 + p22 = 0.5
    q(y3) = p(x1, y3) + p(x2, y3) = p13 + p23 = 0.3

Thus the distribution of X is given by:

    xi    p(xi)
    1     0.3
    2     0.7

and the distribution of Y is given by:

    yj    q(yj)
    2     0.2
    3     0.5
    4     0.3

Finally, p(xi) q(yj) = p(xi, yj) for every i and j (for example, p(x1) q(y1) = 0.3 × 0.2 = 0.06 = p11), so X and Y are stochastically independent.

4. For the given random experiment, the sample space is given by

    S = {HHH, HHT, HTH, THH, HTT, THT, TTH, TTT}

This sample space contains 8 outcomes, and the probability of each of these outcomes is 1/8. From what is given, X = 0 for the outcomes THH, THT, TTH, TTT, which are four in number, and X = 1 for the remaining ones, which are also four in number.

Therefore, the distribution of X is given by the following table:

    X      0     1
    P(X)   4/8   4/8

Further, we note that
Y = 0 for the outcome HHH,
Y = 1 for the three outcomes HHT, HTH, THH,
Y = 2 for the three outcomes HTT, THT, TTH,
Y = 3 for the outcome TTT.
Therefore, the distribution of Y is given by the following table:

    Y      0     1     2     3
    P(Y)   1/8   3/8   3/8   1/8

Now, we compute the joint probabilities pij, noting that pij = P[X = xi, Y = yj]. We find that

p11 = P[X = 0, Y = 0] = 0, because there is no outcome for which X = 0 and Y = 0
p12 = P[X = 0, Y = 1] = 1/8, because there is one outcome, THH, for which X = 0 and Y = 1
p13 = P[X = 0, Y = 2] = 2/8
p14 = P[X = 0, Y = 3] = 1/8
p21 = P[X = 1, Y = 0] = 1/8
p22 = P[X = 1, Y = 1] = 2/8
p23 = P[X = 1, Y = 2] = 1/8
p24 = P[X = 1, Y = 3] = 0
Accordingly, the joint distribution of X and Y is given by the following table:

    X/Y    0     1     2     3     Sum
    0      0     1/8   2/8   1/8   4/8
    1      1/8   2/8   1/8   0     4/8
    Sum    1/8   3/8   3/8   1/8   1

Now,

    E(X + Y) = ∑_i ∑_j pij (xi + yj)
             = ∑_j [p1j (0 + yj) + p2j (1 + yj)]
             = (p11·0 + p12·1 + p13·2 + p14·3) + (p21·1 + p22·2 + p23·3 + p24·4)
             = (0 + 1/8 + 4/8 + 3/8) + (1/8 + 4/8 + 3/8 + 0) = 2

and, since x1 = 0, only the terms with x2 = 1 contribute to E(XY):

    E(XY) = ∑_i ∑_j pij xi yj = p21·0 + p22·1 + p23·2 + p24·3
          = 0 + 2/8 + 2/8 + 0 = 1/2

5. For X: 5/21, 7/21, 9/21; for Y: 9/21, 12/21
6. k = 1/32
7. For X: g(x) = (2/5)(2x + 5/2), 0 ≤ x ≤ 1; for Y: h(y) = (2/5)(1 + 5y), 0 ≤ y ≤ 1

Terminal Questions:
1. (i) a = 1/81  (ii) 1/9, 8/9, 8/27  (iii) F(x) = 1/81, 4/81, 9/81, …, 64/81, 1 for x = 0, 1, 2, …, 8
2. (i) f_X(x) = 4x³, 0 < x < 1  (ii) f_Y(y) = 4y(1 − y²), 0 < y < 1  (iii) f(y | x) = 2y/x², 0 < y < x
3. (i) 3/8  (ii) 3/5  (iii) 5/24
