
ECON509 Probability and Statistics

Slides 3

Bilkent

This Version: 7 Nov 2014


Preliminaries

In this part of the lectures, we will discuss multiple random variables.


This mostly involves extending the concepts we have learned so far to the case of
more than one variable. This might sound straightforward but, as always, we have to
be careful.
This part is heavily based on Casella & Berger, Chapter 4.


Joint and Marginal Distribution

So far, our interest has been on events involving a single random variable only. In other words, we have only considered univariate models.
Multivariate models, on the other hand, involve more than one variable.
Consider an experiment about health characteristics of the population. Would we be interested in one characteristic only, say weight? Not really. There are many important characteristics.
Definition (4.1.1): An n-dimensional random vector is a function from a sample space into R^n, the n-dimensional Euclidean space.
Suppose, for example, that with each point in a sample space we associate an ordered pair of numbers, that is, a point (x, y) ∈ R^2, where R^2 denotes the plane. Then we have defined a two-dimensional (or bivariate) random vector (X, Y).

Example (4.1.2): Consider the experiment of tossing two fair dice. The sample space has 36 equally likely points. For example, (3, 3) means both dice show a 3, (4, 1) means the first die shows a 4 and the second die a 1, and so on.
Now, let

    X = sum of the two dice   and   Y = |difference of the two dice|.

Then,

    (3, 3):  X = 6 and Y = 0,
    (4, 1):  X = 5 and Y = 3,

and so we can define the bivariate random vector (X, Y).

What is, say, P(X = 5 and Y = 3)? One can verify that the only two relevant sample points are (4, 1) and (1, 4). Therefore, the event {X = 5 and Y = 3} will occur only if the event {(4, 1), (1, 4)} occurs. Since each of these sample points is equally likely,

    P({(4, 1), (1, 4)}) = 2/36 = 1/18.

Thus,

    P(X = 5 and Y = 3) = 1/18.

For another example, can you see why P(X = 7, Y ≤ 4) = 1/9? This is because the only sample points that yield this event are (4, 3), (3, 4), (5, 2) and (2, 5).
Note that from now on we will use P(event a, event b) rather than P(event a and event b).

Definition (4.1.3): Let (X, Y) be a discrete bivariate random vector. Then the function f(x, y) from R^2 into R defined by f(x, y) = P(X = x, Y = y) is called the joint probability mass function, or joint pmf, of (X, Y). If it is necessary to stress the fact that f is the joint pmf of the vector (X, Y) rather than of some other vector, the notation f_{X,Y}(x, y) will be used.

The joint pmf of (X, Y) completely defines the probability distribution of the random vector (X, Y), just as the pmf does in the univariate discrete case.
The value of f(x, y) for each of the 21 possible values is given in the following table (blank entries are 0).

    y\x    2     3     4     5     6     7     8     9    10    11    12
     0   1/36        1/36        1/36        1/36        1/36        1/36
     1         1/18        1/18        1/18        1/18        1/18
     2               1/18        1/18        1/18        1/18
     3                     1/18        1/18        1/18
     4                           1/18        1/18
     5                                 1/18

Table: Probability table for Example (4.1.2)

The joint pmf is defined for all (x, y) ∈ R^2, not just the 21 pairs in the above table. For any other (x, y), f(x, y) = P(X = x, Y = y) = 0.

As before, we can use the joint pmf to calculate the probability of any event defined in terms of (X, Y). For A ⊂ R^2,

    P((X, Y) ∈ A) = Σ_{(x,y) ∈ A} f(x, y).

We could, for example, take A = {(x, y) : x = 7 and y ≤ 4}. Then,

    P((X, Y) ∈ A) = P(X = 7, Y ≤ 4) = f(7, 1) + f(7, 3) = 1/18 + 1/18 = 1/9.

Expectations are also dealt with in the same way as before. Let g(x, y) be a real-valued function defined for all possible values (x, y) of the discrete random vector (X, Y). Then g(X, Y) is itself a random variable and its expected value is

    E[g(X, Y)] = Σ_{(x,y) ∈ R^2} g(x, y) f(x, y).

Example (4.1.4): For the (X, Y) whose joint pmf is given in the above table, what is the expected value of XY? Letting g(x, y) = xy and working through the 21 nonzero terms of the table, we have

    E[XY] = Σ_{(x,y)} xy f(x, y) = 245/18 = 13 11/18.

As before,

    E[a g_1(X, Y) + b g_2(X, Y) + c] = a E[g_1(X, Y)] + b E[g_2(X, Y)] + c.

One very useful result is that any nonnegative function from R^2 into R that is nonzero for at most a countable number of (x, y) pairs and sums to 1 is the joint pmf for some bivariate discrete random vector (X, Y).

Example (4.1.5): Define f(x, y) by

    f(0, 0) = f(0, 1) = 1/6,
    f(1, 0) = f(1, 1) = 1/3,
    f(x, y) = 0   for any other (x, y).

Then f(x, y) is nonnegative and sums to one, so f(x, y) is the joint pmf for some bivariate random vector (X, Y).

Suppose we have a bivariate random vector (X, Y) but are concerned with, say, P(X = 2) only. We know the joint pmf f_{X,Y}(x, y), but what we need in this case is the marginal pmf f_X(x).
Theorem (4.1.6): Let (X, Y) be a discrete bivariate random vector with joint pmf f_{X,Y}(x, y). Then the marginal pmfs of X and Y, f_X(x) = P(X = x) and f_Y(y) = P(Y = y), are given by

    f_X(x) = Σ_{y ∈ R} f_{X,Y}(x, y)   and   f_Y(y) = Σ_{x ∈ R} f_{X,Y}(x, y).

Proof: For any x ∈ R, let A_x = {(x, y) : -∞ < y < ∞}. That is, A_x is the line in the plane with first coordinate equal to x. Then, for any x ∈ R,

    f_X(x) = P(X = x) = P(X = x, -∞ < Y < ∞)
           = P((X, Y) ∈ A_x) = Σ_{(x,y) ∈ A_x} f_{X,Y}(x, y)
           = Σ_{y ∈ R} f_{X,Y}(x, y).

Example (4.1.7): We can now compute the marginal distributions of X and Y from the joint distribution given in the above table. For example,

    f_Y(0) = f_{X,Y}(2, 0) + f_{X,Y}(4, 0) + f_{X,Y}(6, 0) + f_{X,Y}(8, 0) + f_{X,Y}(10, 0) + f_{X,Y}(12, 0) = 6·(1/36) = 1/6.

As an exercise, you can check that

    f_Y(1) = 5/18,  f_Y(2) = 2/9,  f_Y(3) = 1/6,  f_Y(4) = 1/9,  f_Y(5) = 1/18.

Notice that Σ_{y=0}^{5} f_Y(y) = 1, as expected, since these are the only six possible values of Y.
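These finite discrete calculations are easy to verify by brute-force enumeration. The short Python sketch below (not part of the original slides) rebuilds the joint pmf of Example (4.1.2) and the marginal pmf of Y from Example (4.1.7) with exact fractions.

```python
from fractions import Fraction
from collections import defaultdict

# Enumerate the 36 equally likely outcomes of two fair dice and build the
# joint pmf of X = sum and Y = |difference| from Example (4.1.2).
joint = defaultdict(Fraction)
for d1 in range(1, 7):
    for d2 in range(1, 7):
        joint[(d1 + d2, abs(d1 - d2))] += Fraction(1, 36)

print(joint[(5, 3)])                                               # 1/18
print(sum(p for (x, y), p in joint.items() if x == 7 and y <= 4))  # 1/9

# Marginal pmf of Y, as in Example (4.1.7).
marg_y = defaultdict(Fraction)
for (x, y), p in joint.items():
    marg_y[y] += p
print(dict(marg_y))  # {0: 1/6, 1: 5/18, 2: 2/9, 3: 1/6, 4: 1/9, 5: 1/18}
```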

Now, it is crucial to understand that the marginal distributions of X and Y, described by the marginal pmfs f_X(x) and f_Y(y), do not completely describe the joint distribution of X and Y.
There are, in fact, many different joint distributions that have the same marginal distributions.
Knowledge of the marginal distributions alone does not allow us to determine the joint distribution (except under certain assumptions).

Example (4.1.9): Define a joint pmf by

    f(0, 0) = 1/12,  f(1, 0) = 5/12,  f(0, 1) = f(1, 1) = 3/12,
    f(x, y) = 0 for all other values.

Then,

    f_Y(0) = f(0, 0) + f(1, 0) = 1/2,
    f_Y(1) = f(0, 1) + f(1, 1) = 1/2,
    f_X(0) = f(0, 0) + f(0, 1) = 1/3,
    f_X(1) = f(1, 0) + f(1, 1) = 2/3.

Now consider the marginal pmfs for the distribution in Example (4.1.5):

    f_Y(0) = f(0, 0) + f(1, 0) = 1/6 + 1/3 = 1/2,
    f_Y(1) = f(0, 1) + f(1, 1) = 1/6 + 1/3 = 1/2,
    f_X(0) = f(0, 0) + f(0, 1) = 1/6 + 1/6 = 1/3,
    f_X(1) = f(1, 0) + f(1, 1) = 1/3 + 1/3 = 2/3.

We have the same marginal pmfs, but the joint distributions are different!

Consider now the corresponding definition for continuous random variables.
Definition (4.1.10): A function f(x, y) from R^2 into R is called a joint probability density function, or joint pdf, of the continuous bivariate random vector (X, Y) if, for every A ⊂ R^2,

    P((X, Y) ∈ A) = ∫∫_A f(x, y) dx dy.

The notation ∫∫_A means that the integral is evaluated over all (x, y) ∈ A.
Naturally, for real-valued functions g(x, y),

    E[g(X, Y)] = ∫∫_{R^2} g(x, y) f(x, y) dx dy.

It is important to realise that the joint pdf is defined for all (x, y) ∈ R^2. The pdf may equal 0 on a large set A if P((X, Y) ∈ A) = 0, but the pdf is still defined for the points in A.

Again, naturally,

    f_X(x) = ∫_{-∞}^{∞} f(x, y) dy,   -∞ < x < ∞,
    f_Y(y) = ∫_{-∞}^{∞} f(x, y) dx,   -∞ < y < ∞.

As before, a useful result is that any function f(x, y) satisfying f(x, y) ≥ 0 for all (x, y) ∈ R^2 and

    1 = ∫_{-∞}^{∞} ∫_{-∞}^{∞} f(x, y) dx dy

is the joint pdf of some continuous bivariate random vector (X, Y).

Example (4.1.11): Let

    f(x, y) = 6xy^2   for 0 < x < 1 and 0 < y < 1,   and 0 otherwise.

Is f(x, y) a joint pdf? Clearly f(x, y) ≥ 0 for all (x, y) in the defined range. Moreover,

    ∫∫ 6xy^2 dx dy = ∫_0^1 ∫_0^1 6xy^2 dx dy = ∫_0^1 [3x^2 y^2]_{x=0}^{1} dy = ∫_0^1 3y^2 dy = [y^3]_{y=0}^{1} = 1.

Now consider calculating P(X + Y ≥ 1). Let A = {(x, y) : x + y ≥ 1}, in which case we are interested in calculating P((X, Y) ∈ A). Now, f(x, y) = 0 everywhere except where 0 < x < 1 and 0 < y < 1. Then the region we have in mind is bounded by the lines

    x = 1,   y = 1   and   x + y = 1.

Therefore,

    A = {(x, y) : x + y ≥ 1,  0 < x < 1,  0 < y < 1}
      = {(x, y) : x ≥ 1 - y,  0 < x < 1,  0 < y < 1}
      = {(x, y) : 1 - y ≤ x < 1,  0 < y < 1}.

Hence,

    P(X + Y ≥ 1) = ∫∫_A f(x, y) dx dy = ∫_0^1 ∫_{1-y}^{1} 6xy^2 dx dy
                 = ∫_0^1 [3x^2 y^2]_{x=1-y}^{1} dy = ∫_0^1 [3y^2 - 3(1 - y)^2 y^2] dy
                 = ∫_0^1 (6y^3 - 3y^4) dy = [(6/4) y^4 - (3/5) y^5]_{y=0}^{1} = 3/2 - 3/5 = 9/10.

Moreover, the marginal pdf of X is

    f_X(x) = ∫ f(x, y) dy = ∫_0^1 6xy^2 dy = [2xy^3]_{y=0}^{1} = 2x,   0 < x < 1.

Then, for example,

    P(1/2 < X < 3/4) = ∫_{1/2}^{3/4} 2x dx = 5/16.
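A quick Monte Carlo sketch (not part of the slides) confirms these two numbers. It uses the observation that f(x, y) = 6xy^2 factors as (2x)(3y^2), so X and Y are independent with cdfs x^2 and y^3, which makes inverse-cdf sampling easy; the factorisation itself is only formally justified later, in Lemma (4.2.7).

```python
import numpy as np

rng = np.random.default_rng(0)

# f(x, y) = 6 x y^2 on the unit square = (2x) * (3y^2), i.e. X and Y independent
# with F_X(x) = x^2 and F_Y(y) = y^3, so sample via the inverse cdfs.
u, v = rng.random(1_000_000), rng.random(1_000_000)
x = np.sqrt(u)       # X = sqrt(U)
y = v ** (1 / 3)     # Y = V^(1/3)

print((x + y >= 1).mean())              # ~0.9    (exact value 9/10)
print(((x > 0.5) & (x < 0.75)).mean())  # ~0.3125 (exact value 5/16)
```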

The joint probability distribution of (X, Y) can also be completely described with the joint cdf (cumulative distribution function), rather than with the joint pmf or joint pdf.
The joint cdf is the function F(x, y) defined by

    F(x, y) = P(X ≤ x, Y ≤ y)   for all (x, y) ∈ R^2.

Although for discrete random vectors the joint cdf might not be convenient to work with, for continuous random vectors the following relationship makes it very useful:

    F(x, y) = ∫_{-∞}^{x} ∫_{-∞}^{y} f(s, t) dt ds.

From the bivariate Fundamental Theorem of Calculus,

    ∂^2 F(x, y) / (∂x ∂y) = f(x, y)

at continuity points of f(x, y). This relationship is very important.

Conditional Distributions and Independence

We have talked a little bit about conditional probabilities before. Now we will consider conditional distributions.
The idea is the same. If we have some extra information about the sample, we can use that information to make better inference.
Suppose we are sampling from a population where X is the weight (in kgs) and Y is the height (in cms). What is P(X > 95)? Would we have a better/more relevant answer if we knew that the person in question has Y = 202 cms? Usually, P(X > 95 | Y = 202) should be much larger than P(X > 95 | Y = 165).
Once we have the joint distribution of (X, Y), we can calculate the conditional distributions as well.
Notice that we now have three distribution concepts: the marginal distribution, the conditional distribution and the joint distribution.

Definition (4.2.1): Let (X, Y) be a discrete bivariate random vector with joint pmf f(x, y) and marginal pmfs f_X(x) and f_Y(y). For any x such that P(X = x) = f_X(x) > 0, the conditional pmf of Y given that X = x is the function of y denoted by f(y|x) and defined by

    f(y|x) = P(Y = y | X = x) = f(x, y) / f_X(x).

For any y such that P(Y = y) = f_Y(y) > 0, the conditional pmf of X given that Y = y is the function of x denoted by f(x|y) and defined by

    f(x|y) = P(X = x | Y = y) = f(x, y) / f_Y(y).

Can we verify that, say, f(y|x) is a pmf? First, since f(x, y) ≥ 0 and f_X(x) > 0, f(y|x) ≥ 0 for every y. Then,

    Σ_y f(y|x) = Σ_y f(x, y) / f_X(x) = f_X(x) / f_X(x) = 1.

Example (4.2.2): Define the joint pmf of (X, Y) by

    f(0, 10) = f(0, 20) = 2/18,   f(1, 10) = f(1, 30) = 3/18,
    f(1, 20) = 4/18   and   f(2, 30) = 4/18,

while f(x, y) = 0 for all other combinations of (x, y).
Then,

    f_X(0) = f(0, 10) + f(0, 20) = 4/18,
    f_X(1) = f(1, 10) + f(1, 20) + f(1, 30) = 10/18,
    f_X(2) = f(2, 30) = 4/18.

Moreover,

    f(10|0) = f(0, 10) / f_X(0) = (2/18) / (4/18) = 1/2,
    f(20|0) = f(0, 20) / f_X(0) = 1/2.

Therefore, given the knowledge that X = 0, Y is equal to either 10 or 20, with equal probability.

In addition,

    f(10|1) = (3/18) / (10/18) = 3/10,
    f(20|1) = (4/18) / (10/18) = 4/10,
    f(30|1) = 3/10,
    f(30|2) = (4/18) / (4/18) = 1.

Interestingly, when X = 2, we know for sure that Y will be equal to 30.
Finally,

    P(Y > 10 | X = 1) = f(20|1) + f(30|1) = 7/10,
    P(Y > 10 | X = 0) = f(20|0) = 1/2,

and so on.
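The discrete conditioning recipe is mechanical, so it is easy to automate. The sketch below (an illustration, not from the slides) computes f(y|x) for Example (4.2.2) directly from a joint-pmf dictionary.

```python
from fractions import Fraction

# Joint pmf from Example (4.2.2), stored as {(x, y): probability}.
joint = {(0, 10): Fraction(2, 18), (0, 20): Fraction(2, 18),
         (1, 10): Fraction(3, 18), (1, 30): Fraction(3, 18),
         (1, 20): Fraction(4, 18), (2, 30): Fraction(4, 18)}

def cond_pmf_y_given_x(joint, x):
    """f(y|x) = f(x, y) / f_X(x), defined whenever f_X(x) > 0."""
    fx = sum(p for (xx, _), p in joint.items() if xx == x)
    return {y: p / fx for (xx, y), p in joint.items() if xx == x}

print(cond_pmf_y_given_x(joint, 1))               # {10: 3/10, 30: 3/10, 20: 2/5}
print(sum(cond_pmf_y_given_x(joint, 1).values())) # 1, as a conditional pmf must
```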

The analogous definition for continuous random variables is given next.
Definition (4.2.3): Let (X, Y) be a continuous bivariate random vector with joint pdf f(x, y) and marginal pdfs f_X(x) and f_Y(y). For any x such that f_X(x) > 0, the conditional pdf of Y given that X = x is the function of y denoted by f(y|x) and defined by

    f(y|x) = f(x, y) / f_X(x).

For any y such that f_Y(y) > 0, the conditional pdf of X given that Y = y is the function of x denoted by f(x|y) and defined by

    f(x|y) = f(x, y) / f_Y(y).

Note that for discrete random variables, P(X = x) = f_X(x) and P(X = x, Y = y) = f(x, y), so Definition (4.2.1) exactly parallels the definition of P(Y = y | X = x) in Definition (1.3.2). The same interpretation is not valid for continuous random variables, since P(X = x) = 0 for every x. However, replacing pmfs with pdfs leads to Definition (4.2.3).

The conditional expected value of g(Y) given X = x is given by

    E[g(Y)|x] = Σ_y g(y) f(y|x)   and   E[g(Y)|x] = ∫ g(y) f(y|x) dy,

in the discrete and continuous cases, respectively.
The conditional expected value has all of the properties of the usual expected value listed in Theorem (2.2.5).

Example (2.2.6): Suppose we measure the distance between a random variable X and a constant b by (X - b)^2. What is

    b* = arg min_b E[(X - b)^2],

i.e. the best predictor of X in the mean-squared sense?
Now,

    E[(X - b)^2] = E[({X - E[X]} + {E[X] - b})^2]
                 = E[{X - E[X]}^2] + {E[X] - b}^2 + 2 E({X - E[X]}{E[X] - b}),

where

    E({X - E[X]}{E[X] - b}) = {E[X] - b} E{X - E[X]} = 0.

Then,

    E[(X - b)^2] = E[{X - E[X]}^2] + {E[X] - b}^2.

We have no control over the first term, but the second term is always nonnegative and is minimised by setting b = E[X]. Therefore, b = E[X] is the value that minimises the prediction error and is the best predictor of X.
In other words, the expectation of a random variable is its best (mean-squared) predictor.
Can you show this for the conditional expectation as well? See Exercise 4.13, which you will be asked to solve as homework.

Example (4.2.4): Let (X, Y) have joint pdf f(x, y) = e^{-y}, 0 < x < y < ∞. Suppose we wish to compute the conditional pdf of Y given X = x.
Now, if x ≤ 0, f(x, y) = 0 for all y, so f_X(x) = 0.
If x > 0, f(x, y) > 0 only if y > x. Thus

    f_X(x) = ∫ f(x, y) dy = ∫_x^{∞} e^{-y} dy = e^{-x}.

So the marginal distribution of X is the exponential distribution.
Likewise, for any x > 0, f(y|x) can be computed as follows:

    f(y|x) = f(x, y) / f_X(x) = e^{-y} / e^{-x} = e^{-(y - x)}   if y > x,

and

    f(y|x) = f(x, y) / f_X(x) = 0 / e^{-x} = 0   if y ≤ x.

Thus, given X = x, Y has an exponential distribution, where x is the location parameter in the distribution of Y and the scale parameter equals 1. Notice that the conditional distribution of Y is different for every value of x. Hence,

    E[Y | X = x] = ∫_x^{∞} y e^{-(y - x)} dy = 1 + x.

This is obtained by integration by parts:

    ∫_x^{∞} y e^{-(y - x)} dy = [-y e^{-(y - x)}]_{y=x}^{∞} + ∫_x^{∞} e^{-(y - x)} dy
                              = x e^{-(x - x)} + [-e^{-(y - x)}]_{y=x}^{∞} = x + 0 + e^{-(x - x)} = x + 1.

What about the conditional variance? Using the standard definition,

    Var(Y|x) = E[Y^2 | x] - {E[Y | x]}^2.

Hence,

    Var(Y|x) = ∫_x^{∞} y^2 e^{-(y - x)} dy - ( ∫_x^{∞} y e^{-(y - x)} dy )^2 = 1.

Again, you can obtain the first part of this result by integration by parts.
What is the implication of this? One can show that the marginal distribution of Y is gamma(2, 1), and so Var(Y) = 2. Hence, the knowledge that X = x has reduced the variability of Y by 50%!

The conditional distribution of Y given X = x is possibly a different probability distribution for each value of x. Thus we really have a family of probability distributions for Y, one for each x. When we wish to describe this entire family, we will use the phrase "the distribution of Y|X".
For example, if X ≥ 0 is an integer-valued random variable and the conditional distribution of Y given X = x is binomial(x, p), then we might say that the distribution of Y|X is binomial(X, p), or write Y|X ~ binomial(X, p).
In addition, E[g(Y)|x] is a function of x. Hence, the conditional expectation is a random variable whose value depends on the value of X. Returning to Example (4.2.4), we can then write

    E[Y|X] = 1 + X.

Yet in some other cases the conditional distribution might not depend on the conditioning variable.
Say the conditional distribution of Y given X = x does not differ across values of x. In other words, knowledge of the value of X does not provide any additional information. This situation is defined as independence.
Definition (4.2.5): Let (X, Y) be a bivariate random vector with joint pdf or pmf f(x, y) and marginal pdfs or pmfs f_X(x) and f_Y(y). Then X and Y are called independent random variables if, for every x ∈ R and y ∈ R,

    f(x, y) = f_X(x) f_Y(y).    (1)

Now, in the case of independence, clearly,

    f(y|x) = f(x, y) / f_X(x) = f_X(x) f_Y(y) / f_X(x) = f_Y(y).

We can either start with the joint distribution and check independence for each possible value of x and y, or start with the assumption that X and Y are independent and model the joint distribution accordingly. In the latter direction, our economic intuition might have to play an important role: would information on the value of X really increase our information about the likely value of Y?

Example (4.2.6): Consider the discrete bivariate random vector (X, Y) with joint pmf given by

    f(10, 1) = f(20, 1) = f(20, 2) = 1/10,
    f(10, 2) = f(10, 3) = 1/5   and   f(20, 3) = 3/10.

The marginal pmfs are then given by

    f_X(10) = f_X(20) = .5   and   f_Y(1) = .2, f_Y(2) = .3, f_Y(3) = .5.

Now, for example,

    f(10, 3) = 1/5 ≠ (1/2)(1/2) = f_X(10) f_Y(3),

although

    f(10, 1) = 1/10 = (1/2)(1/5) = f_X(10) f_Y(1).

So X and Y are not independent. But do we always have to check all possible pairs, one by one?

Lemma (4.2.7): Let (X, Y) be a bivariate random vector with joint pdf or pmf f(x, y). Then X and Y are independent random variables if and only if there exist functions g(x) and h(y) such that, for every x ∈ R and y ∈ R,

    f(x, y) = g(x) h(y).

Proof: First we deal with the "only if" part. Define

    g(x) = f_X(x)   and   h(y) = f_Y(y).

Then, by (1),

    f(x, y) = f_X(x) f_Y(y) = g(x) h(y).

For the "if" part, suppose f(x, y) = g(x) h(y). Define

    ∫ g(x) dx = c   and   ∫ h(y) dy = d,

where

    cd = [∫ g(x) dx][∫ h(y) dy] = ∫∫ g(x) h(y) dy dx = ∫∫ f(x, y) dy dx = 1.

Moreover,

    f_X(x) = ∫ g(x) h(y) dy = g(x) d   and   f_Y(y) = ∫ g(x) h(y) dx = h(y) c.

Then,

    f(x, y) = g(x) h(y) = g(x) h(y)·cd = [g(x) d][h(y) c] = f_X(x) f_Y(y)   (using cd = 1),

proving that X and Y are independent.
To prove the Lemma for discrete random vectors, replace integrals with summations.

It might not be clear at first sight, but this is a powerful result. It implies that we do not have to calculate the marginal distributions first and then check whether their product gives the joint distribution.
Instead, it is enough to check whether the joint distribution is equal to the product of some function of x and some function of y, for all values of (x, y) ∈ R^2.
Example (4.2.8): Consider the joint pdf

    f(x, y) = (1/384) x^2 y^4 e^{-y - (x/2)},   x > 0 and y > 0.

If we define

    g(x) = x^2 e^{-x/2}   and   h(y) = y^4 e^{-y} / 384

for x > 0 and y > 0, and g(x) = h(y) = 0 otherwise, then clearly

    f(x, y) = g(x) h(y)

for all (x, y) ∈ R^2. By Lemma (4.2.7), X and Y are independently distributed!

Example (4.2.9): Let X be the number of living parents of a student randomly selected from an elementary school in Kansas City, and Y be the number of living parents of a retiree randomly selected from Sun City. Suppose, furthermore, that we have

    f_X(0) = .01,  f_X(1) = .09,  f_X(2) = .90,
    f_Y(0) = .70,  f_Y(1) = .25,  f_Y(2) = .05.

It seems reasonable that X and Y will be independent: knowledge of the number of parents of the student does not give us any information on the number of parents of the retiree, and vice versa. Therefore, we should have

    f_{X,Y}(x, y) = f_X(x) f_Y(y).

Then, for example,

    f_{X,Y}(0, 0) = .0070,   f_{X,Y}(0, 1) = .0025,   etc.

We can thus calculate quantities such as

    P(X = Y) = f(0, 0) + f(1, 1) + f(2, 2) = (.01)(.70) + (.09)(.25) + (.90)(.05) = .0745.

Theorem (4.2.10): Let X and Y be independent random variables.
(1) For any A ⊂ R and B ⊂ R, P(X ∈ A, Y ∈ B) = P(X ∈ A) P(Y ∈ B); that is, the events {X ∈ A} and {Y ∈ B} are independent events.
(2) Let g(x) be a function only of x and h(y) be a function only of y. Then

    E[g(X) h(Y)] = E[g(X)] E[h(Y)].

Proof: Start with (2) and consider the continuous case. Now,

    E[g(X) h(Y)] = ∫∫ g(x) h(y) f(x, y) dx dy
                 = ∫∫ g(x) h(y) f_X(x) f_Y(y) dx dy
                 = [∫ g(x) f_X(x) dx][∫ h(y) f_Y(y) dy]
                 = E[g(X)] E[h(Y)].

As before, the discrete case is proved by replacing integrals with sums.

Now consider (1). Remember that for some set A ⊂ R, the indicator function I_A(x) is given by

    I_A(x) = 1 if x ∈ A,   and 0 otherwise.

Let g(x) and h(y) be the indicator functions of the sets A and B, respectively. Then g(x) h(y) is the indicator function of the set C ⊂ R^2, where

    C = {(x, y) : x ∈ A, y ∈ B}.

Note that for an indicator function such as g(x), E[g(X)] = P(X ∈ A), since

    P(X ∈ A) = ∫_{x ∈ A} f(x) dx = ∫ I_A(x) f(x) dx = E[I_A(X)].

Then, using the result we proved on the previous slide,

    P(X ∈ A, Y ∈ B) = P((X, Y) ∈ C) = E[g(X) h(Y)] = E[g(X)] E[h(Y)] = P(X ∈ A) P(Y ∈ B).

These results make life a lot easier when calculating expectations of certain random variables.

Example (4.2.11): Let X and Y be independent exponential(1) random variables. Then, from the previous theorem, we have that

    P(X ≥ 4, Y < 3) = P(X ≥ 4) P(Y < 3) = e^{-4}(1 - e^{-3}).

Letting g(x) = x^2 and h(y) = y, we see that

    E[X^2 Y] = E[X^2] E[Y] = (Var(X) + {E[X]}^2) E[Y] = (1 + 1^2)·1 = 2.

The moment generating function is back!
Theorem (4.2.12): Let X and Y be independent random variables with moment generating functions M_X(t) and M_Y(t). Then the moment generating function of the random variable Z = X + Y is given by

    M_Z(t) = M_X(t) M_Y(t).

Proof: We know from before that M_Z(t) = E[e^{tZ}]. Then,

    E[e^{tZ}] = E[e^{t(X+Y)}] = E[e^{tX} e^{tY}] = E[e^{tX}] E[e^{tY}] = M_X(t) M_Y(t),

where we have used the result that, for two independent random variables X and Y, E[g(X) h(Y)] = E[g(X)] E[h(Y)].
Of course, if independence does not hold, life gets pretty tough! But we will not deal with that here.

We will now introduce a special case of Theorem (4.2.12).
Theorem (4.2.14): Let X ~ N(μ_X, σ_X^2) and Y ~ N(μ_Y, σ_Y^2) be independent normal random variables. Then the random variable Z = X + Y has a N(μ_X + μ_Y, σ_X^2 + σ_Y^2) distribution.
Proof: The mgfs of X and Y are

    M_X(t) = exp(μ_X t + σ_X^2 t^2 / 2)   and   M_Y(t) = exp(μ_Y t + σ_Y^2 t^2 / 2).

Then, from Theorem (4.2.12),

    M_Z(t) = M_X(t) M_Y(t) = exp[ (μ_X + μ_Y) t + (σ_X^2 + σ_Y^2) t^2 / 2 ],

which is the mgf of a normal random variable with mean μ_X + μ_Y and variance σ_X^2 + σ_Y^2.

Bivariate Transformations

We now consider transformations involving two random variables rather than only one.
Let (X, Y) be a random vector and consider (U, V), where

    U = g_1(X, Y)   and   V = g_2(X, Y),

for some pre-specified functions g_1(·) and g_2(·).
For any B ⊂ R^2, notice that

    (U, V) ∈ B   if and only if   (X, Y) ∈ A,   where   A = {(x, y) : (g_1(x, y), g_2(x, y)) ∈ B}.

Hence,

    P((U, V) ∈ B) = P((X, Y) ∈ A).

This implies that the probability distribution of (U, V) is completely determined by the probability distribution of (X, Y).

Consider the discrete case first.
If (X, Y) is a discrete random vector, then there is only a countable set of values for which the joint pmf of (X, Y) is positive. Call this set A.
Moreover, let

    B = {(u, v) : u = g_1(x, y) and v = g_2(x, y) for some (x, y) ∈ A}.

Therefore, B is the countable set of all possible values of (U, V).
Finally, define, for any (u, v) ∈ B,

    A_{uv} = {(x, y) ∈ A : g_1(x, y) = u and g_2(x, y) = v}.

Then the joint pmf of (U, V) is given by

    f_{U,V}(u, v) = P(U = u, V = v) = P((X, Y) ∈ A_{uv}) = Σ_{(x,y) ∈ A_{uv}} f_{X,Y}(x, y).

Example (4.3.1): Let X ~ Poisson(θ) and Y ~ Poisson(λ) be independent. Due to independence, the joint pmf is given by

    f_{X,Y}(x, y) = (θ^x e^{-θ} / x!)(λ^y e^{-λ} / y!),   x = 0, 1, 2, ... and y = 0, 1, 2, ... .

Obviously,

    A = {(x, y) : x = 0, 1, 2, ... and y = 0, 1, 2, ...}.

Define

    U = X + Y   and   V = Y,

implying that

    g_1(x, y) = x + y   and   g_2(x, y) = y.

What is B? Since y = v, for any given v,

    u = x + y = x + v.

Hence u = v, v + 1, v + 2, v + 3, ... . Therefore,

    B = {(u, v) : v = 0, 1, 2, ... and u = v, v + 1, v + 2, ...}.

Moreover, for any (u, v), the only (x, y) satisfying u = x + y and v = y is given by x = u - v and y = v. Therefore, we always have

    A_{uv} = {(u - v, v)}.

As such,

    f_{U,V}(u, v) = Σ_{(x,y) ∈ A_{uv}} f_{X,Y}(x, y) = f_{X,Y}(u - v, v)
                  = (θ^{u-v} e^{-θ} / (u - v)!)(λ^v e^{-λ} / v!),   v = 0, 1, 2, ... and u = v, v + 1, v + 2, ... .

What is the marginal pmf of U?
For any fixed non-negative integer u, f_{U,V}(u, v) > 0 only for v = 0, 1, ..., u. Then,

    f_U(u) = Σ_{v=0}^{u} (θ^{u-v} e^{-θ} / (u - v)!)(λ^v e^{-λ} / v!)
           = e^{-(θ+λ)} Σ_{v=0}^{u} θ^{u-v} λ^v / [(u - v)! v!]
           = (e^{-(θ+λ)} / u!) Σ_{v=0}^{u} (u choose v) θ^{u-v} λ^v
           = (e^{-(θ+λ)} / u!) (θ + λ)^u,   u = 0, 1, 2, ... ,

which follows from the binomial formula, (a + b)^n = Σ_{i=0}^{n} (n choose i) a^i b^{n-i}.
This is the pmf of a Poisson random variable with parameter θ + λ. A theorem follows.
Theorem (4.3.2): If X ~ Poisson(θ), Y ~ Poisson(λ) and X and Y are independent, then X + Y ~ Poisson(θ + λ).

Consider the continuous case now.
Let (X, Y) ~ f_{X,Y}(x, y) be a continuous random vector and

    A = {(x, y) : f_{X,Y}(x, y) > 0},
    B = {(u, v) : u = g_1(x, y) and v = g_2(x, y) for some (x, y) ∈ A}.

The joint pdf f_{U,V}(u, v) will be positive on the set B.
Assume now that the transformation u = g_1(x, y), v = g_2(x, y) is a one-to-one transformation from A onto B.
Why onto? Because we have defined B in such a way that any (u, v) ∈ B corresponds to some (x, y) ∈ A.
The one-to-one part is a maintained assumption: for each (u, v) ∈ B there is only one (x, y) ∈ A such that (u, v) = (g_1(x, y), g_2(x, y)).
When the transformation is one-to-one and onto, we can solve the equations

    u = g_1(x, y)   and   v = g_2(x, y)

for x and y, and obtain

    x = h_1(u, v)   and   y = h_2(u, v).

The last remaining ingredient is the Jacobian of the transformation, which is the determinant of the matrix of partial derivatives:

    J = det [ ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v ] = (∂x/∂u)(∂y/∂v) - (∂x/∂v)(∂y/∂u).

These partial derivatives are given by

    ∂x/∂u = ∂h_1(u, v)/∂u,  ∂x/∂v = ∂h_1(u, v)/∂v,  ∂y/∂u = ∂h_2(u, v)/∂u,  ∂y/∂v = ∂h_2(u, v)/∂v.

Then the transformation formula is given by

    f_{U,V}(u, v) = f_{X,Y}(h_1(u, v), h_2(u, v)) |J|,

while f_{U,V}(u, v) = 0 for (u, v) ∉ B.
Clearly, we need J ≠ 0 on B.

The next example is based on the beta distribution, which is related to the gamma distribution.
The beta(α, β) pdf is given by

    f_X(x | α, β) = [Γ(α + β) / (Γ(α) Γ(β))] x^{α-1} (1 - x)^{β-1},

where 0 < x < 1, α > 0 and β > 0.

Example (4.3.3): Let X ~ beta(α, β), Y ~ beta(α + β, γ) and X ⊥⊥ Y.
The symbol ⊥⊥ is used to denote independence.
Due to independence, the joint pdf is given by

    f_{X,Y}(x, y) = [Γ(α + β) / (Γ(α) Γ(β))] x^{α-1} (1 - x)^{β-1} · [Γ(α + β + γ) / (Γ(α + β) Γ(γ))] y^{α+β-1} (1 - y)^{γ-1},

where 0 < x < 1 and 0 < y < 1.
We consider

    U = XY   and   V = X.

Let's define A and B first. As usual,

    A = {(x, y) : f_{X,Y}(x, y) > 0}.

Now, we know that V = X and the set of possible values for X is 0 < x < 1. Hence the set of possible values for V is 0 < v < 1.
Since U = XY = VY, for any given value of V = v, U will vary between 0 and v, as the set of possible values for Y is 0 < y < 1. Hence 0 < u < v. Therefore, the given transformation maps A onto

    B = {(u, v) : 0 < u < v < 1}.

For any (u, v) ∈ B, we can uniquely solve for (x, y). That is,

    x = h_1(u, v) = v   and   y = h_2(u, v) = u/v.

An aside: of course, this transformation is one-to-one only on A. It is not one-to-one on R^2 because, for any value of y, the point (0, y) is mapped into (0, 0).

For

    x = h_1(u, v) = v   and   y = h_2(u, v) = u/v,

we have

    J = det [ ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v ] = det [ 0  1 ; 1/v  -u/v^2 ] = -1/v,   so |J| = 1/v.

Then the transformation formula gives

    f_{U,V}(u, v) = [Γ(α + β + γ) / (Γ(α) Γ(β) Γ(γ))] v^{α-1} (1 - v)^{β-1} (u/v)^{α+β-1} (1 - u/v)^{γ-1} (1/v),

where 0 < u < v < 1.
Obviously, since V = X, V ~ beta(α, β). What about U?

Define K = Γ(α + β + γ) / [Γ(α) Γ(β) Γ(γ)]. Then the marginal pdf of U is

    f_U(u) = ∫_u^1 f_{U,V}(u, v) dv
           = K ∫_u^1 v^{α-1} (1 - v)^{β-1} (u/v)^{α+β-1} (1 - u/v)^{γ-1} (1/v) dv
           = K u^{α+β-1} ∫_u^1 (1 - v)^{β-1} (1 - u/v)^{γ-1} v^{-β-1} dv.

Now substitute

    y = (u/v - u) / (1 - u),   so that   1 - y = (1 - u/v) / (1 - u)   and   dy = -[u/v^2][1/(1 - u)] dv,

or, equivalently, dv = -(v^2/u)(1 - u) dy.
Under this substitution, 1 - v = (1 - u) y (v/u), so the powers of v cancel, and the limits change as follows: for v = u, y = 1 and for v = 1, y = 0. Therefore,

    f_U(u) = K u^{α-1} (1 - u)^{β+γ-1} ∫_0^1 y^{β-1} (1 - y)^{γ-1} dy.

The remaining integrand is the kernel of the pdf of Y ~ beta(β, γ),

    f_Y(y | β, γ) = [Γ(β + γ) / (Γ(β) Γ(γ))] y^{β-1} (1 - y)^{γ-1},   0 < y < 1,  β > 0,  γ > 0,

and it is straightforward to show that

    ∫_0^1 y^{β-1} (1 - y)^{γ-1} dy = Γ(β) Γ(γ) / Γ(β + γ).

Hence,

    f_U(u) = K u^{α-1} (1 - u)^{β+γ-1} ∫_0^1 y^{β-1} (1 - y)^{γ-1} dy
           = [Γ(α + β + γ) / (Γ(α) Γ(β) Γ(γ))] [Γ(β) Γ(γ) / Γ(β + γ)] u^{α-1} (1 - u)^{β+γ-1}
           = [Γ(α + β + γ) / (Γ(α) Γ(β + γ))] u^{α-1} (1 - u)^{β+γ-1},

where 0 < u < 1.
This shows that the marginal distribution of U is beta(α, β + γ).

Example (4.3.4): Let X ~ N(0, 1), Y ~ N(0, 1) and X ⊥⊥ Y. The transformation is given by

    U = g_1(X, Y) = X + Y   and   V = g_2(X, Y) = X - Y.

Now, the pdf of (X, Y) is given by

    f_{X,Y}(x, y) = [1/√(2π)] exp(-x^2/2) · [1/√(2π)] exp(-y^2/2) = [1/(2π)] exp[-(x^2 + y^2)/2],

where -∞ < x < ∞ and -∞ < y < ∞. Therefore,

    A = {(x, y) : f_{X,Y}(x, y) > 0} = R^2.

We have

    u = x + y   and   v = x - y,

and to determine B we need to find all the values taken on by (u, v) as we choose different (x, y) ∈ A.
Thankfully, when these equations are solved for (x, y), they yield unique solutions:

    x = h_1(u, v) = (u + v)/2   and   y = h_2(u, v) = (u - v)/2.

Now, the reasoning is as follows: A = R^2 and, for every (u, v) ∈ B, there is a unique (x, y). Therefore B = R^2 as well. The mapping is one-to-one and, by definition, onto.

The Jacobian is given by

    J = det [ ∂x/∂u  ∂x/∂v ; ∂y/∂u  ∂y/∂v ] = det [ 1/2  1/2 ; 1/2  -1/2 ] = -1/4 - 1/4 = -1/2.

Therefore,

    f_{U,V}(u, v) = f_{X,Y}(h_1(u, v), h_2(u, v)) |J|
                  = [1/(2π)] exp{ -[(u + v)/2]^2 / 2 } exp{ -[(u - v)/2]^2 / 2 } (1/2),

where -∞ < u < ∞ and -∞ < v < ∞. Rearranging gives

    f_{U,V}(u, v) = [ 1/(√(2π) √2) exp(-u^2/4) ] · [ 1/(√(2π) √2) exp(-v^2/4) ].

Since the joint density factors into a function of u and a function of v, by Lemma (4.2.7), U and V are independent.

Remember Theorem (4.2.14): if X ~ N(μ_X, σ_X^2) and Y ~ N(μ_Y, σ_Y^2) are independent, then Z = X + Y ~ N(μ_X + μ_Y, σ_X^2 + σ_Y^2).
Then,

    U = X + Y ~ N(0, 2).

What about V? Define Z = -Y and notice that Z ~ N(0, 1). Then, by Theorem (4.2.14),

    V = X - Y = X + Z ~ N(0, 2)

as well.
That the sums and differences of independent normal random variables are independent normal random variables is true regardless of the means of X and Y, as long as Var(X) = Var(Y). See Exercise 4.27.
Note that the more difficult bit here is to prove that U and V are indeed independent.

Theorem (4.3.5): Let X ⊥⊥ Y be two random variables. Define U = g(X) and V = h(Y), where g(x) is a function only of x and h(y) is a function only of y. Then U ⊥⊥ V.
Proof: Consider the proof for continuous random variables U and V. Define, for any u ∈ R and v ∈ R,

    A_u = {x : g(x) ≤ u}   and   B_v = {y : h(y) ≤ v}.

Then,

    F_{U,V}(u, v) = P(U ≤ u, V ≤ v) = P(X ∈ A_u, Y ∈ B_v) = P(X ∈ A_u) P(Y ∈ B_v),

where the last equality follows by Theorem (4.2.10). Then,

    f_{U,V}(u, v) = ∂^2 F_{U,V}(u, v) / (∂u ∂v) = [d P(X ∈ A_u)/du][d P(Y ∈ B_v)/dv],

and clearly the joint pdf factorises into a function only of u and a function only of v. Therefore, by Lemma (4.2.7), U and V are independent.

What if we start with a bivariate random vector (X, Y) but are only interested in U = g(X, Y) and not in the pair (U, V)?
We can then choose a convenient V = h(X, Y) such that the resulting transformation from (X, Y) to (U, V) is one-to-one on A. The joint pdf of (U, V) can then be calculated as usual, and we can obtain the marginal pdf of U from the joint pdf of (U, V).
Which V is convenient will generally depend on what U is.

What if the transformation of interest is not one-to-one?
In a similar fashion to Theorem (2.1.8), we can define

    A = {(x, y) : f_{X,Y}(x, y) > 0},

and then divide A into a partition on each block of which the transformation is one-to-one.
Specifically, let A_0, A_1, ..., A_k be a partition of A which satisfies the following properties:
(1) P((X, Y) ∈ A_0) = 0;
(2) the transformation U = g_1(X, Y), V = g_2(X, Y) is a one-to-one transformation from A_i onto B for each i = 1, ..., k.

Then we can calculate the inverse functions from B to A_i for each i. Let

    x = h_{1i}(u, v)   and   y = h_{2i}(u, v)

for the i-th partition. For (u, v) ∈ B, this inverse gives the unique (x, y) ∈ A_i such that (u, v) = (g_1(x, y), g_2(x, y)).
Let J_i be the Jacobian of the i-th inverse. Then,

    f_{U,V}(u, v) = Σ_{i=1}^{k} f_{X,Y}(h_{1i}(u, v), h_{2i}(u, v)) |J_i|.

Hierarchical Models and Mixture Distributions

As an introduction to the idea of hierarchical models, consider the following example.
Example (4.4.1): An insect lays a large number of eggs, each surviving with probability p. On average, how many eggs will survive?
Often, the large number of eggs is taken to be Poisson(λ). Now assume that the survival of each egg is an independent event, so that survival of the eggs constitutes a sequence of Bernoulli trials.
Let's put this into a statistical framework:

    X | Y ~ binomial(Y, p),
    Y ~ Poisson(λ).

Therefore, survival (X) is a random variable whose distribution in turn depends on another random variable (Y).
This framework is convenient in the sense that complex models can be analysed as a collection of simpler models.

Let's get back to the original question.
Example (4.4.2): Now,

    P(X = x) = Σ_{y=0}^{∞} P(X = x, Y = y)
             = Σ_{y=0}^{∞} P(X = x | Y = y) P(Y = y)
             = Σ_{y=x}^{∞} [ (y choose x) p^x (1 - p)^{y-x} ] [ λ^y e^{-λ} / y! ],

where the summation in the third line starts from y = x rather than y = 0, since for y < x the conditional probability is equal to zero: clearly, the number of surviving eggs cannot be larger than the number of eggs laid.

Let t = y - x. This gives

    P(X = x) = Σ_{y=x}^{∞} [ y! / ((y - x)! x!) ] p^x (1 - p)^{y-x} λ^y e^{-λ} / y!
             = [ (pλ)^x e^{-λ} / x! ] Σ_{y=x}^{∞} [(1 - p) λ]^{y-x} / (y - x)!
             = [ (pλ)^x e^{-λ} / x! ] Σ_{t=0}^{∞} [(1 - p) λ]^t / t! .

But the summation in the final term is the kernel of the Poisson((1 - p)λ) pmf. Remember that

    Σ_{t=0}^{∞} [(1 - p) λ]^t / t! = e^{(1 - p)λ}.

Therefore,

    P(X = x) = (pλ)^x e^{-λ} e^{(1 - p)λ} / x! = (pλ)^x e^{-pλ} / x!,

which implies that X ~ Poisson(pλ).
The answer to the original question then is E[X] = pλ: on average, pλ eggs survive.

Now comes a very useful theorem, which you will most likely use frequently in the future.
Remember that E[X|y] is a function of y, and that E[X|Y] is a random variable whose value depends on the value of Y.
Theorem (4.4.3): If X and Y are two random variables, then

    E_X[X] = E_Y{ E_{X|Y}[X|Y] },

provided that the expectations exist.
It is important to notice that the expectations involved are taken with respect to different distributions: f_X(·) on the left-hand side, and f_Y(·) and f_{X|Y}(·|Y = y) on the right-hand side.
This result is widely known as the Law of Iterated Expectations.

Proof: Let f_{X,Y}(x, y) denote the joint pdf of (X, Y). Moreover, let f_Y(y) and f_{X|Y}(x|y) denote the marginal pdf of Y and the conditional pdf of X given Y = y, respectively. By definition,

    E_X[X] = ∫∫ x f(x, y) dx dy = ∫ [ ∫ x f_{X|Y}(x|y) dx ] f_Y(y) dy
           = ∫ E_{X|Y}[X|Y = y] f_Y(y) dy
           = E_Y{ E_{X|Y}[X|Y] }.

The corresponding proof for the discrete case is obtained by replacing integrals with sums.
How does this help us? Consider calculating the expected number of survivors again:

    E_X[X] = E_Y{ E_{X|Y}[X|Y] } = E_Y[pY] = p E_Y[Y] = pλ.

The textbook gives the following definition for mixture distributions.
Definition (4.4.4): A random variable X is said to have a mixture distribution if the distribution of X depends on a quantity that also has a distribution.
Therefore, a mixture distribution is a distribution that is generated through a hierarchical mechanism.
Hence, as far as Example (4.4.1) is concerned, the Poisson(pλ) distribution, which is the result of combining a binomial(Y, p) and a Poisson(λ) distribution, is a mixture distribution.

Example (4.4.5): Now consider the following three-stage hierarchical model:

    X | Y ~ binomial(Y, p),
    Y | Λ ~ Poisson(Λ),
    Λ ~ exponential(β).

Then,

    E_X[X] = E_Y{ E_{X|Y}[X|Y] } = E_Y[pY]
           = E_Λ{ E_{Y|Λ}[pY|Λ] } = p E_Λ{ E_{Y|Λ}[Y|Λ] }
           = p E_Λ[Λ] = pβ,

which is obtained by successive application of the Law of Iterated Expectations.
Note that in this example we considered both discrete and continuous random variables. This is fine.

The three-stage hierarchy of the previous model can also be considered as a two-stage hierarchy by combining the last two stages.
Now,

    P(Y = y) = P(Y = y, 0 < Λ < ∞) = ∫_0^{∞} f(y, λ) dλ = ∫_0^{∞} f(y|λ) f(λ) dλ
             = ∫_0^{∞} [ λ^y e^{-λ} / y! ] (1/β) e^{-λ/β} dλ
             = [1/(y! β)] ∫_0^{∞} λ^y e^{-λ(1 + 1/β)} dλ
             = [1/(y! β)] Γ(y + 1) [ 1/(1 + 1/β) ]^{y+1}
             = [1/(1 + β)] [ β/(1 + β) ]^y,   y = 0, 1, 2, ... .

To see that

    ∫_0^{∞} λ^y e^{-λ(1 + 1/β)} dλ = Γ(y + 1) [ 1/(1 + 1/β) ]^{y+1},

just use the pdf of a Gamma(y + 1, (1 + 1/β)^{-1}) random variable.

The final expression is the pmf of a negative binomial random variable. Therefore, the two-stage hierarchy is given by

    X | Y ~ binomial(Y, p),
    Y ~ negative binomial( r = 1, p = 1/(1 + β) ).

Another useful result is given next.
Theorem (4.4.7): For any two random variables X and Y,

    Var_X(X) = E_Y[ Var_{X|Y}(X|Y) ] + Var_Y{ E_{X|Y}[X|Y] }.

Proof: Remember that

    Var_X(X) = E_X[X^2] - {E_X[X]}^2
             = E_Y{ E_{X|Y}[X^2|Y] } - ( E_Y{ E_{X|Y}[X|Y] } )^2
             = E_Y{ E_{X|Y}[X^2|Y] } - E_Y[ {E_{X|Y}[X|Y]}^2 ] + E_Y[ {E_{X|Y}[X|Y]}^2 ] - ( E_Y{ E_{X|Y}[X|Y] } )^2
             = E_Y{ E_{X|Y}[X^2|Y] - {E_{X|Y}[X|Y]}^2 } + ( E_Y[ {E_{X|Y}[X|Y]}^2 ] - ( E_Y{ E_{X|Y}[X|Y] } )^2 )
             = E_Y[ Var_{X|Y}(X|Y) ] + Var_Y{ E_{X|Y}[X|Y] },

which yields the desired result.

Example (4.4.6): Consider the following generalisation of the binomial distribution, where the probability of success varies according to a distribution. Specifically,

    X | P ~ binomial(n, P),
    P ~ beta(α, β).

Then

    E_X[X] = E_P{ E_{X|P}[X|P] } = E_P[nP] = n α/(α + β),

where the last equality follows from the fact that, for P ~ beta(α, β), E[P] = α/(α + β).
Example (4.4.8): Now let's calculate the variance of X. By Theorem (4.4.7),

    Var_X(X) = Var_P{ E_{X|P}[X|P] } + E_P[ Var_{X|P}(X|P) ].

Now, E_{X|P}[X|P] = nP and, since P ~ beta(α, β),

    Var_P( E_{X|P}[X|P] ) = Var_P(nP) = n^2 αβ / [ (α + β)^2 (α + β + 1) ].

Moreover, Var_{X|P}(X|P) = nP(1 - P), due to X|P being a binomial random variable.

What about E_P[ Var_{X|P}(X|P) ]? Remember that the beta(α, β) pdf is given by

    f_P(p | α, β) = [Γ(α + β) / (Γ(α) Γ(β))] p^{α-1} (1 - p)^{β-1},

where 0 < p < 1, α > 0 and β > 0.
Then,

    E_P[ Var_{X|P}(X|P) ] = E_P[ nP(1 - P) ] = n [Γ(α + β) / (Γ(α) Γ(β))] ∫_0^1 p^{α} (1 - p)^{β} dp.

The integrand is the kernel of another beta pdf, with parameters α + 1 and β + 1, since the pdf of P ~ beta(α + 1, β + 1) is given by

    [Γ(α + β + 2) / (Γ(α + 1) Γ(β + 1))] p^{α} (1 - p)^{β}.

Therefore,

    E_P[ Var_{X|P}(X|P) ] = n [Γ(α + β) / (Γ(α) Γ(β))] [Γ(α + 1) Γ(β + 1) / Γ(α + β + 2)]
                          = n [Γ(α + β) / (Γ(α) Γ(β))] [ αβ Γ(α) Γ(β) / ((α + β + 1)(α + β) Γ(α + β)) ]
                          = n αβ / [ (α + β + 1)(α + β) ].

Thus,

    Var_X(X) = n αβ / [ (α + β + 1)(α + β) ] + n^2 αβ / [ (α + β)^2 (α + β + 1) ]
             = n αβ (α + β + n) / [ (α + β)^2 (α + β + 1) ].

Covariance and Correlation


Let X and Y be two random variables. To keep notation concise, we will use the
following notation.
E [X ] = X ,

E [Y ] = Y ,

Var (X ) = 2X

and

Var (Y ) = 2Y .

Denition (4.5.1): The covariance of X and Y is the number dened by


Cov (X , Y ) = E [(X

X ) (Y

Y )].

Denition (4.5.2): The correlation of X and Y is the number dened by


XY =

Cov (X , Y )
,
X Y

which is also called the correlation coe cient.


If large (small) values of X tend to be observed with large (small) values of Y , then
Cov (X , Y ) will be positive.
Why so? Within the above setting, when X > X then Y > Y is likely to be true
whereas when X < X then Y < Y is likely to be true. Hence
E [(X X ) (Y Y )] > 0.
Similarly, if large (small) values of X tend to be observed with small (large) values of
Y , then Cov (X , Y ) will be negative.
(Bilkent)

ECON509

This Version: 7 Nov 2014

78 / 125

Correlation normalises the covariance by the standard deviations and is therefore a more informative measure.
If Cov(X, Y) = 50 while Cov(W, Z) = 0.9, this does not necessarily mean that there is a much stronger relationship between X and Y. For example, if Var(X) = Var(Y) = 100 while Var(W) = Var(Z) = 1, then

    ρ_XY = 0.5   and   ρ_WZ = 0.9.

Theorem (4.5.3): For any random variables X and Y,

    Cov(X, Y) = E[XY] - μ_X μ_Y.

Proof:

    Cov(X, Y) = E[(X - μ_X)(Y - μ_Y)] = E[XY - μ_X Y - μ_Y X + μ_X μ_Y]
              = E[XY] - μ_X E[Y] - μ_Y E[X] + μ_X μ_Y
              = E[XY] - μ_X μ_Y.

Example (4.5.4): Let (X, Y) be a bivariate random vector with

    f_{X,Y}(x, y) = 1,   0 < x < 1,   x < y < x + 1.

Now,

    f_X(x) = ∫_x^{x+1} f_{X,Y}(x, y) dy = ∫_x^{x+1} 1 dy = [y]_{y=x}^{x+1} = 1,

implying that X ~ Uniform(0, 1). Therefore E[X] = 1/2 and Var(X) = 1/12.
Now, f_Y(y) is a bit more complicated to calculate. Considering the region where f_{X,Y}(x, y) > 0, we observe that 0 < x < y when 0 < y < 1, and y - 1 < x < 1 when 1 ≤ y < 2. Therefore,

    f_Y(y) = ∫_0^y f_{X,Y}(x, y) dx = ∫_0^y 1 dx = y,               when 0 < y < 1,
    f_Y(y) = ∫_{y-1}^1 f_{X,Y}(x, y) dx = ∫_{y-1}^1 1 dx = 2 - y,   when 1 ≤ y < 2.

As an exercise, you can show that

    μ_Y = 1   and   σ_Y^2 = 1/6.

Moreover,

    E_{X,Y}[XY] = ∫_0^1 ∫_x^{x+1} xy dy dx = ∫_0^1 [xy^2/2]_{y=x}^{x+1} dx = ∫_0^1 (x^2 + x/2) dx = 7/12.

Therefore,

    Cov(X, Y) = 7/12 - (1/2)(1) = 1/12   and   ρ_XY = 1/√2.

Theorem (4.5.5): If X ⊥⊥ Y, then Cov(X, Y) = ρ_XY = 0.
Proof: Since X ⊥⊥ Y, by Theorem (4.2.10),

    E[XY] = E[X] E[Y].

Then

    Cov(X, Y) = E[XY] - μ_X μ_Y = μ_X μ_Y - μ_X μ_Y = 0,

and consequently,

    ρ_XY = Cov(X, Y) / (σ_X σ_Y) = 0.

It is crucial to note that although X ⊥⊥ Y implies Cov(X, Y) = ρ_XY = 0, the relationship does not necessarily hold in the reverse direction.

Theorem (4.5.6): If X and Y are any two random variables and a and b are any two constants, then

    Var(aX + bY) = a^2 Var(X) + b^2 Var(Y) + 2ab Cov(X, Y).

If X and Y are independent random variables, then

    Var(aX + bY) = a^2 Var(X) + b^2 Var(Y).

You will be asked to prove this as a homework assignment.
Note that if two random variables, X and Y, are positively correlated, then

    Var(X + Y) > Var(X) + Var(Y),

whereas if X and Y are negatively correlated, then

    Var(X + Y) < Var(X) + Var(Y).

For positively correlated random variables, large values in one tend to be accompanied by large values in the other, so the total variance is magnified. Similarly, for negatively correlated random variables, large values in one tend to be accompanied by small values in the other, so the variance of the sum is dampened.

Example (4.5.8): Let X ~ Uniform(0, 1), Z ~ Uniform(0, 1/10) and Z ⊥⊥ X. Let

    Y = X + Z,

and consider (X, Y).
Consider the following intuitive derivation of the distribution of (X, Y). We are given X = x and Y = x + Z for a particular value of X. Now, the conditional distribution of Z given X is

    Z | X ~ Uniform(0, 1/10),

since Z and X are independent! Therefore, one can interpret x as a location parameter in the conditional distribution of Y given X = x: this conditional distribution is Uniform(x, x + 1/10); we simply shift the distribution of Z by x units.
Now,

    f_{X,Y}(x, y) = f_{Y|X}(y|x) f_X(x) = [1/(1/10)]·1 = 10,

where 0 < x < 1 and x < y < x + 1/10.

Based on these findings, it can be shown that

    E[X] = 1/2   and   E[Y] = E[X + Z] = 1/2 + 1/20 = 11/20.

Then,

    Cov(X, Y) = E[XY] - E[X] E[Y]
              = E[X(X + Z)] - E[X] E[X + Z]
              = E[X^2] + E[XZ] - {E[X]}^2 - E[X] E[Z]
              = E[X^2] + E[X] E[Z] - {E[X]}^2 - E[X] E[Z]
              = E[X^2] - {E[X]}^2 = Var(X) = 1/12.

By Theorem (4.5.6),

    Var(Y) = Var(X) + Var(Z) = 1/12 + 1/1200.

Then,

    ρ_XY = (1/12) / √[ (1/12)(1/12 + 1/1200) ] = √(100/101).

A quick comparison of Examples (4.5.4) and (4.5.8) reveals an interesting result.
In the first example, one can show that Y|X ~ Uniform(x, x + 1), while in the latter example we have Y|X ~ Uniform(x, x + 1/10).
Therefore, in the latter case, knowledge of the value of X gives us a more precise idea about the possible values that Y can take on. Seen from a different perspective, in the second example Y is conditionally more tightly distributed around the particular value of X.
In relation to this observation, ρ_XY for the first example is equal to √(1/2), which is much smaller than ρ_XY = √(100/101) from the second example.
Another important result, provided without proof at this point, is that

    -1 ≤ ρ_XY ≤ 1;

see part (1) of Theorem (4.5.7) in Casella & Berger (p. 172). We will provide an alternative proof of this result when we deal with inequalities.

Bivariate Normal Distribution

We now introduce the bivariate normal distribution.
Definition (4.5.10): Let -∞ < μ_X < ∞, -∞ < μ_Y < ∞, σ_X > 0, σ_Y > 0 and -1 < ρ < 1. The bivariate normal pdf with means μ_X and μ_Y, variances σ_X^2 and σ_Y^2, and correlation ρ is the bivariate pdf given by

    f_{X,Y}(x, y) = [ 1 / (2π σ_X σ_Y √(1 - ρ^2)) ] exp( -[ u^2 - 2ρuv + v^2 ] / [ 2(1 - ρ^2) ] ),

where u = (y - μ_Y)/σ_Y and v = (x - μ_X)/σ_X, while -∞ < x < ∞ and -∞ < y < ∞.

More concisely, this would be written as

    (X, Y)' ~ N( (μ_X, μ_Y)',  [ σ_X^2  ρσ_Xσ_Y ; ρσ_Xσ_Y  σ_Y^2 ] ).

In addition, starting from the bivariate distribution, one can show that

    Y | X = x ~ N( μ_Y + ρ σ_Y (x - μ_X)/σ_X ,  σ_Y^2 (1 - ρ^2) )

and, likewise,

    X | Y = y ~ N( μ_X + ρ σ_X (y - μ_Y)/σ_Y ,  σ_X^2 (1 - ρ^2) ).

Finally, again starting from the bivariate distribution, it can be shown that

    X ~ N(μ_X, σ_X^2)   and   Y ~ N(μ_Y, σ_Y^2).

Therefore, joint normality implies conditional and marginal normality. However, this does not go in the opposite direction: marginal or conditional normality does not necessarily imply joint normality.

The normal distribution has another interesting property.
Remember that although independence implies zero covariance, the reverse is not necessarily true.
The normal distribution is an exception to this: if two jointly normally distributed random variables have zero correlation (or, equivalently, zero covariance), then they are independent.
Why? Remember that independence is a property that governs all moments, not just the second-order ones (such as variances or covariances).
However, as the preceding discussion reveals, the distribution of a bivariate normal random vector is entirely determined by its mean vector and covariance matrix. In other words, the first- and second-order moments are sufficient to characterise the distribution.
Therefore, we do not have to worry about any higher-order moments. Hence, zero covariance implies independence in this particular case.

The following is based on Section 3.6 of Gallant (1997).
Start with

    f_{X,Y}(x, y) = [ 1 / (2π σ_X σ_Y √(1 - ρ^2)) ] exp( -[ u^2 - 2ρuv + v^2 ] / [ 2(1 - ρ^2) ] ).

Now,

    u^2 - 2ρuv + v^2 = u^2 - 2ρuv + ρ^2 v^2 + v^2 - ρ^2 v^2 = (u - ρv)^2 + (1 - ρ^2) v^2.

Then,

    f_{X,Y}(x, y) = [ 1 / (2π σ_X σ_Y √(1 - ρ^2)) ] exp( -(u - ρv)^2 / [2(1 - ρ^2)] ) exp( -v^2 / 2 )
                  = [ 1 / (σ_Y √(2π) √(1 - ρ^2)) ] exp( -(u - ρv)^2 / [2(1 - ρ^2)] ) · [ 1 / (σ_X √(2π)) ] exp( -v^2 / 2 ).

Then the bivariate pdf factors into

    [ 1 / (σ_Y √(2π(1 - ρ^2))) ] exp( -(u - ρv)^2 / [2(1 - ρ^2)] )
        = [ 1 / (σ_Y √(2π(1 - ρ^2))) ] exp( -[ y - μ_Y - ρ (σ_Y/σ_X)(x - μ_X) ]^2 / [ 2(1 - ρ^2) σ_Y^2 ] )

and

    [ 1 / (σ_X √(2π)) ] exp( -(x - μ_X)^2 / (2 σ_X^2) ).

The first of these can be considered as the conditional density f_{Y|X}(y|x), which is normal with mean and variance

    μ_{Y|X} = μ_Y + ρ (σ_Y/σ_X)(x - μ_X)   and   σ_{Y|X}^2 = (1 - ρ^2) σ_Y^2.

The second, on the other hand, is the unconditional (marginal) density f_X(x), which is normal with mean μ_X and variance σ_X^2.

Now, the covariance is given by

    σ_YX = ∫∫ (y - μ_Y)(x - μ_X) f_{X,Y}(x, y) dx dy
         = ∫∫ (y - μ_Y)(x - μ_X) f_{Y|X}(y|x) f_X(x) dx dy
         = ∫ [ ∫ (y - μ_Y) f_{Y|X}(y|x) dy ] (x - μ_X) f_X(x) dx
         = ∫ [ μ_Y + ρ (σ_Y/σ_X)(x - μ_X) - μ_Y ] (x - μ_X) f_X(x) dx
         = ρ (σ_Y/σ_X) ∫ (x - μ_X)^2 f_X(x) dx
         = ρ (σ_Y/σ_X) Var(X) = ρ (σ_Y/σ_X) σ_X^2 = ρ σ_Y σ_X.

This shows that the correlation is indeed given by ρ, since

    ρ_XY = σ_XY / (σ_X σ_Y) = ρ σ_X σ_Y / (σ_X σ_Y) = ρ.

Multivariate Distributions

So far, we have discussed multivariate random vectors consisting of two random variables only. Now we extend these ideas to general multivariate random vectors.
For example, we might have the random vector (X_1, X_2, X_3, X_4), where X_1 is temperature, X_2 is weight, X_3 is height and X_4 is blood pressure.
The ideas and concepts we have discussed so far extend directly to such random vectors.

Multivariate Distributions
Let X = (X1, ..., Xn). Then the sample space for X is a subset of Rⁿ, the n-dimensional Euclidean space.
If this sample space is countable, then X is a discrete random vector and its joint pmf is given by

f(x) = f(x1, ..., xn) = P(X1 = x1, X2 = x2, ..., Xn = xn)    for each (x1, ..., xn) ∈ Rⁿ.

For any A ⊂ Rⁿ,

P(X ∈ A) = Σ_{x ∈ A} f(x).

Similarly, for a continuous random vector, we have the joint pdf f(x) = f(x1, ..., xn), which satisfies

P(X ∈ A) = ∫...∫_A f(x) dx = ∫...∫_A f(x1, ..., xn) dx1 ... dxn.

Note that ∫...∫_A is an n-fold integration, where the limits of integration are such that the integral is calculated over all points x ∈ A.



Let g(x) = g(x1, ..., xn) be a real-valued function defined on the sample space of X. Then, for the random variable g(X),

(discrete):      E[g(X)] = Σ_{x ∈ Rⁿ} g(x) f(x),
(continuous):    E[g(X)] = ∫...∫ g(x) f(x) dx.

The marginal pdf or pmf of (X1, ..., Xk), the first k coordinates of (X1, ..., Xn), is given by

(discrete):      f(x1, ..., xk) = Σ_{(x_{k+1}, ..., x_n) ∈ R^{n−k}} f(x1, ..., xn),
(continuous):    f(x1, ..., xk) = ∫_{−∞}^{∞} ... ∫_{−∞}^{∞} f(x1, ..., xn) dx_{k+1} ... dx_n,

for every (x1, ..., xk) ∈ R^k.



As you would expect, if f(x1, ..., xk) > 0, then

f(x_{k+1}, ..., x_n | x1, ..., xk) = f(x1, ..., xn) / f(x1, ..., xk)

gives the conditional pmf or pdf of (X_{k+1}, ..., Xn) given (X1 = x1, X2 = x2, ..., Xk = xk), which is a function of (x1, ..., xk).
Note that you can pick your k coordinates as you like. For example, if you have (X1, X2, X3, X4, X5) and want to condition on X2 and X5, then simply consider the reordered random vector (X2, X5, X1, X3, X4), where one would condition on the first two coordinates.
Let's illustrate these concepts using the following example.


Example (4.6.1): Let n = 4 and

f(x1, x2, x3, x4) = (3/4)(x1² + x2² + x3² + x4²)    for 0 < xi < 1, i = 1, 2, 3, 4,

and f(x1, x2, x3, x4) = 0 otherwise.
Clearly this is a nonnegative function, and it can be shown that

∫∫∫∫ f(x1, x2, x3, x4) dx1 dx2 dx3 dx4 = 1.

For example, you can check that P(X1 < 1/2, X2 < 3/4, X4 > 1/2) is given by

∫_{1/2}^{1} ∫_{0}^{1} ∫_{0}^{3/4} ∫_{0}^{1/2} (3/4)(x1² + x2² + x3² + x4²) dx1 dx2 dx3 dx4 = 171/1024.

Now, consider

f(x1, x2) = ∫_{0}^{1} ∫_{0}^{1} f(x1, x2, x3, x4) dx3 dx4 = ∫_{0}^{1} ∫_{0}^{1} (3/4)(x1² + x2² + x3² + x4²) dx3 dx4 = (3/4)(x1² + x2²) + 1/2,

where 0 < x1 < 1 and 0 < x2 < 1.
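The sketch below is an added numerical check of these two calculations; scipy's generic integrators are used, and nothing in the code is specific to the example beyond the density above.

```python
# Numerical check of Example (4.6.1): the probability and the marginal of (X1, X2).
from scipy import integrate

f = lambda x1, x2, x3, x4: 0.75 * (x1**2 + x2**2 + x3**2 + x4**2)

# P(X1 < 1/2, X2 < 3/4, X4 > 1/2); X3 is unrestricted on (0, 1).
prob, _ = integrate.nquad(f, [(0, 0.5), (0, 0.75), (0, 1), (0.5, 1)])
print(prob, 171 / 1024)                   # both approximately 0.16699

# Marginal of (X1, X2) at an arbitrary point vs. (3/4)(x1^2 + x2^2) + 1/2.
x1, x2 = 0.3, 0.8
marg, _ = integrate.dblquad(lambda x4, x3: f(x1, x2, x3, x4), 0, 1, 0, 1)
print(marg, 0.75 * (x1**2 + x2**2) + 0.5)
```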




This marginal pdf can be used to compute any probability or expectation involving X1 and X2. For instance,

E[X1 X2] = ∫∫ x1 x2 f(x1, x2) dx1 dx2 = ∫_{0}^{1} ∫_{0}^{1} x1 x2 [ (3/4)(x1² + x2²) + 1/2 ] dx1 dx2 = 5/16.

Now let's find the conditional distribution of X3 and X4 given X1 = x1 and X2 = x2. For any (x1, x2) with 0 < x1 < 1 and 0 < x2 < 1,

f(x3, x4 | x1, x2) = f(x1, x2, x3, x4) / f(x1, x2)
                   = [ (3/4)(x1² + x2² + x3² + x4²) ] / [ (3/4)(x1² + x2²) + 1/2 ]
                   = (x1² + x2² + x3² + x4²) / (x1² + x2² + 2/3).



Then, for example,

P( X3 > 3/4, X4 < 1/2 | X1 = 1/3, X2 = 2/3 )

  = ∫_{0}^{1/2} ∫_{3/4}^{1} [ (1/3)² + (2/3)² + x3² + x4² ] / [ (1/3)² + (2/3)² + 2/3 ] dx3 dx4

  = ∫_{0}^{1/2} ∫_{3/4}^{1} [ 5/11 + (9/11) x3² + (9/11) x4² ] dx3 dx4

  = ∫_{0}^{1/2} [ 5/44 + 111/704 + (9/44) x4² ] dx4

  = 5/88 + 111/1408 + 3/352 = 203/1408.
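As an added illustration (not part of the slides), the snippet below verifies the last two calculations numerically with scipy's two-dimensional integrator.

```python
# Numerical verification of E[X1*X2] and the conditional probability above.
from scipy import integrate

# E[X1*X2] using the marginal f(x1, x2) = (3/4)(x1^2 + x2^2) + 1/2.
ex1x2, _ = integrate.dblquad(
    lambda x2, x1: x1 * x2 * (0.75 * (x1**2 + x2**2) + 0.5), 0, 1, 0, 1)
print(ex1x2, 5 / 16)

# P(X3 > 3/4, X4 < 1/2 | X1 = 1/3, X2 = 2/3) using the conditional density.
c = (1/3)**2 + (2/3)**2
cond_prob, _ = integrate.dblquad(
    lambda x4, x3: (c + x3**2 + x4**2) / (c + 2/3), 0.75, 1, 0, 0.5)
print(cond_prob, 203 / 1408)
```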


Let's introduce some other useful extensions of the concepts we covered for univariate and bivariate random variables.
Definition (4.6.5): Let X1, ..., Xn be random vectors with joint pdf or pmf f(x1, ..., xn). Let f_{Xi}(xi) denote the marginal pdf or pmf of Xi. Then X1, ..., Xn are called mutually independent random vectors if, for every (x1, ..., xn),

f(x1, ..., xn) = f_{X1}(x1) × ... × f_{Xn}(xn) = ∏_{i=1}^{n} f_{Xi}(xi).

If the Xi's are all one-dimensional, then X1, ..., Xn are called mutually independent random variables.
Clearly, if X1, ..., Xn are mutually independent, then knowledge about the values of some coordinates gives us no information about the values of the other coordinates.
Remember that mutual independence implies pairwise independence, but the converse does not hold. For example, it is possible to specify a probability distribution for (X1, ..., Xn) with the property that each pair (Xi, Xj) is pairwise independent but X1, ..., Xn are not mutually independent; a standard construction of such a distribution is sketched below.
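The construction used in the sketch (an addition of ours, not from the slides) takes X1, X2 as independent fair coin flips and X3 = X1 XOR X2: every pair is independent, but the three variables are not mutually independent since X3 is determined by (X1, X2).

```python
# Pairwise independence without mutual independence: X3 = X1 XOR X2.
import itertools
import numpy as np

# Joint pmf over (x1, x2, x3): mass 1/4 on each point with x3 = x1 ^ x2.
support = [(x1, x2, x1 ^ x2) for x1, x2 in itertools.product([0, 1], repeat=2)]
pmf = {point: 0.25 for point in support}

def marginal(pmf, coords):
    """Marginal pmf over the listed coordinate indices."""
    out = {}
    for point, p in pmf.items():
        key = tuple(point[i] for i in coords)
        out[key] = out.get(key, 0.0) + p
    return out

# Pairwise: P(Xi = a, Xj = b) = P(Xi = a)P(Xj = b) for every pair and every (a, b).
for i, j in [(0, 1), (0, 2), (1, 2)]:
    pair, mi, mj = marginal(pmf, [i, j]), marginal(pmf, [i]), marginal(pmf, [j])
    ok = all(np.isclose(pair.get((a, b), 0.0), mi[(a,)] * mj[(b,)])
             for a in [0, 1] for b in [0, 1])
    print(f"X{i+1}, X{j+1} pairwise independent:", ok)      # True for all pairs

# Mutual independence fails: P(X1=1, X2=1, X3=1) = 0, but the product is 1/8.
print(pmf.get((1, 1, 1), 0.0), 0.5 * 0.5 * 0.5)
```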
Theorem (4.6.6) (Generalisation of Theorem 4.2.10): Let X1 , ..., Xn be mutually
independent random variables. Let g1 , ..., gn be real-valued functions such that
gi (xi ) is a function only of xi , i = 1, ..., n. Then
E [g1 (X1 ) ... gn (Xn )] = E [g1 (X1 )] E [g2 (X2 )] ... E [gn (Xn )].

Theorem (4.6.7) (Generalisation of Theorem 4.2.12): Let X1, ..., Xn be mutually independent random variables with mgfs M_{X1}(t), ..., M_{Xn}(t). Let Z = X1 + ... + Xn. Then, the mgf of Z is

M_Z(t) = M_{X1}(t) × ... × M_{Xn}(t).

In particular, if X1, ..., Xn all have the same distribution with mgf M_X(t), then M_Z(t) = [M_X(t)]ⁿ.
Example (4.6.8): Suppose X1, ..., Xn are mutually independent random variables and the distribution of Xi is gamma(αi, β). The mgf of a gamma(α, β) distribution is M(t) = (1 − βt)^{−α}. Thus, if Z = X1 + ... + Xn, the mgf of Z is

M_Z(t) = M_{X1}(t) × ... × M_{Xn}(t) = (1 − βt)^{−α1} × ... × (1 − βt)^{−αn} = (1 − βt)^{−(α1 + ... + αn)}.

This is the mgf of a gamma(α1 + ... + αn, β) distribution. Thus, the sum of independent gamma random variables that have a common scale parameter β also has a gamma distribution.
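A quick simulation check of this example, added for illustration (the shape values and seed are arbitrary; numpy's "scale" argument plays the role of β here):

```python
# Sums of independent gamma(alpha_i, beta) draws vs. gamma(sum alpha_i, beta).
import numpy as np

rng = np.random.default_rng(2)
alphas, beta, n = [0.5, 1.0, 2.5], 2.0, 500_000

# Sum of independent gammas with a common scale parameter beta.
z_sum = sum(rng.gamma(shape=a, scale=beta, size=n) for a in alphas)
# Direct draws from gamma(alpha_1 + ... + alpha_n, beta).
z_direct = rng.gamma(shape=sum(alphas), scale=beta, size=n)

# Compare moments and a few quantiles; they should agree closely.
print(z_sum.mean(), z_direct.mean(), sum(alphas) * beta)      # mean = alpha*beta
print(z_sum.var(), z_direct.var(), sum(alphas) * beta**2)     # var  = alpha*beta^2
print(np.quantile(z_sum, [0.25, 0.5, 0.9]))
print(np.quantile(z_direct, [0.25, 0.5, 0.9]))
```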


Corollary (4.6.9): Let X1, ..., Xn be mutually independent random variables with mgfs M_{X1}(t), ..., M_{Xn}(t). Let a1, ..., an and b1, ..., bn be fixed constants. Let

Z = (a1 X1 + b1) + ... + (an Xn + bn).

Then, the mgf of Z is

M_Z(t) = exp( t Σ_{i=1}^{n} bi ) M_{X1}(a1 t) × ... × M_{Xn}(an t).

Proof: From the definition, the mgf of Z is

M_Z(t) = E[exp(tZ)] = E[ exp( t Σ_{i=1}^{n} (ai Xi + bi) ) ]

       = exp( t Σ_{i=1}^{n} bi ) E[ exp(t a1 X1) × ... × exp(t an Xn) ]

       = exp( t Σ_{i=1}^{n} bi ) M_{X1}(a1 t) × ... × M_{Xn}(an t),

which yields the desired result; the final equality uses the mutual independence of X1, ..., Xn (Theorem (4.6.6)).



Corollary (4.6.10): Let X1, ..., Xn be mutually independent random variables with Xi ~ N(μi, σ²i). Let a1, ..., an and b1, ..., bn be fixed constants. Then,

Z = Σ_{i=1}^{n} (ai Xi + bi) ~ N( Σ_{i=1}^{n} (ai μi + bi), Σ_{i=1}^{n} a²i σ²i ).

Proof: Recall that the mgf of a N(μ, σ²) random variable is M(t) = exp(μt + σ²t²/2). Substituting into the expression in Corollary (4.6.9) yields

M_Z(t) = exp( t Σ_{i=1}^{n} bi ) exp( μ1 a1 t + σ²1 a²1 t²/2 ) × ... × exp( μn an t + σ²n a²n t²/2 )

       = exp( [ Σ_{i=1}^{n} (ai μi + bi) ] t + [ Σ_{i=1}^{n} a²i σ²i ] t²/2 ),

which is the mgf of the normal distribution given in Corollary (4.6.10).
This is an important result: a linear combination of independent normal random variables is normally distributed.
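A small simulation check of this corollary, added as an illustration (all parameter values are arbitrary choices):

```python
# Linear combination of independent normals vs. the predicted normal law.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
mus, sigmas = np.array([0.0, 1.0, -2.0]), np.array([1.0, 0.5, 2.0])
a, b = np.array([2.0, -1.0, 0.5]), np.array([1.0, 0.0, 3.0])
n = 200_000

x = rng.normal(mus, sigmas, size=(n, 3))        # independent N(mu_i, sigma_i^2)
z = (a * x + b).sum(axis=1)                     # Z = sum_i (a_i X_i + b_i)

mean_z = np.sum(a * mus + b)
var_z = np.sum(a**2 * sigmas**2)
print(z.mean(), mean_z)                         # close to 2.0
print(z.var(), var_z)                           # close to 5.25
# Kolmogorov-Smirnov distance to the predicted N(mean_z, var_z); should be tiny.
print(stats.kstest(z, "norm", args=(mean_z, np.sqrt(var_z))).statistic)
```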



Theorem (4.6.11) (Generalisation of Lemma 4.2.7): Let X1, ..., Xn be random vectors. Then X1, ..., Xn are mutually independent random vectors if and only if there exist functions gi(xi), i = 1, ..., n, such that the joint pdf or pmf of (X1, ..., Xn) can be written as

f(x1, ..., xn) = g1(x1) × ... × gn(xn).

Theorem (4.6.12) (Generalisation of Theorem 4.3.5): Let X1, ..., Xn be independent random vectors. Let gi(xi) be a function only of xi, i = 1, ..., n. Then, the random variables Ui = gi(Xi), i = 1, ..., n, are mutually independent.


We can also calculate the distribution of a transformation of a random vector, in the same fashion as before.
Let (X1, ..., Xn) be a random vector with pdf f_X(x1, ..., xn). Let A = {x : f_X(x) > 0}. Consider a new random vector (U1, ..., Un), defined by

U1 = g1(X1, ..., Xn),  U2 = g2(X1, ..., Xn),  ...,  Un = gn(X1, ..., Xn).

Suppose that A0, A1, ..., Ak form a partition of A with the following properties. The set A0, which may be empty, satisfies

P((X1, ..., Xn) ∈ A0) = 0.

The transformation

(U1, ..., Un) = (g1(X), g2(X), ..., gn(X))

is a one-to-one transformation from Ai onto B for each i = 1, 2, ..., k. Then, for each i, the inverse functions from B to Ai can be found. Denote the ith inverse by

x1 = h_{1i}(u1, ..., un),  x2 = h_{2i}(u1, ..., un),  ...,  xn = h_{ni}(u1, ..., un).

This ith inverse gives, for (u1, ..., un) ∈ B, the unique (x1, ..., xn) ∈ Ai such that

(u1, ..., un) = (g1(x1, ..., xn), g2(x1, ..., xn), ..., gn(x1, ..., xn)).


Let Ji denote the Jacobian computed from the ith inverse,

Ji = det [ ∂h_{1i}(u)/∂u1   ∂h_{1i}(u)/∂u2   ...   ∂h_{1i}(u)/∂un ;
           ∂h_{2i}(u)/∂u1   ∂h_{2i}(u)/∂u2   ...   ∂h_{2i}(u)/∂un ;
           ... ;
           ∂h_{ni}(u)/∂u1   ∂h_{ni}(u)/∂u2   ...   ∂h_{ni}(u)/∂un ],

which is the determinant of an n × n matrix.
Then, assuming that these Jacobians do not vanish on B,

f_U(u1, ..., un) = Σ_{i=1}^{k} f_X( h_{1i}(u1, ..., un), ..., h_{ni}(u1, ..., un) ) |Ji|.
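A worked two-dimensional illustration, added by us (the example with U1 = X1 + X2, U2 = X1 − X2 for independent standard normals is our own choice, with k = 1): the inverse is x1 = (u1 + u2)/2, x2 = (u1 − u2)/2, so |J| = 1/2, and the formula yields the density of two independent N(0, 2) variables.

```python
# Transformation formula in two dimensions with a single inverse (k = 1).
import numpy as np
from scipy.stats import norm

def f_X(x1, x2):
    """Joint pdf of two independent standard normals."""
    return norm.pdf(x1) * norm.pdf(x2)

def f_U(u1, u2):
    """Density of (U1, U2) = (X1 + X2, X1 - X2) via the formula, |J| = 1/2."""
    x1, x2 = (u1 + u2) / 2, (u1 - u2) / 2
    return f_X(x1, x2) * 0.5

u1, u2 = 0.7, -1.3                                 # arbitrary evaluation point
direct = norm.pdf(u1, scale=np.sqrt(2)) * norm.pdf(u2, scale=np.sqrt(2))
print(f_U(u1, u2), direct)                         # the two values should agree
```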


Inequalities

We will now cover some basic inequalities used in statistics and econometrics.
Most of the time, more complicated expressions can be written in terms of simpler
expressions. Inequalities on these simpler expressions can then be used to obtain an
inequality, or often a bound, on the original complicated term.
This part is based on Sections 3.6 and 4.7 in Casella & Berger.


We start with one of the most famous probability inequalities.
Theorem (3.6.1) (Chebychev's Inequality): Let X be a random variable and let g(x) be a nonnegative function. Then, for any r > 0,

P(g(X) ≥ r) ≤ E[g(X)] / r.

Proof: Using the definition of the expected value,

E[g(X)] = ∫ g(x) f_X(x) dx ≥ ∫_{ {x: g(x) ≥ r} } g(x) f_X(x) dx ≥ r ∫_{ {x: g(x) ≥ r} } f_X(x) dx = r P(g(X) ≥ r).

Hence,

E[g(X)] ≥ r P(g(X) ≥ r),

implying

P(g(X) ≥ r) ≤ E[g(X)] / r.

This result comes in very handy when we want to turn a probability statement into an expectation statement. For example, this would be useful if we already have some moment existence assumptions and we want to prove a result involving a probability statement.
Example (3.6.2): Let g(x) = (x − μ)²/σ², where μ = E[X] and σ² = Var(X). Let, for convenience, r = t². Then,

P( (X − μ)²/σ² ≥ t² ) ≤ (1/t²) E[ (X − μ)²/σ² ] = 1/t².

This implies that

P( |X − μ| ≥ tσ ) ≤ 1/t²,

and, consequently,

P( |X − μ| < tσ ) ≥ 1 − 1/t².

Therefore, for instance for t = 2, we get

P( |X − μ| ≥ 2σ ) ≤ 0.25    or    P( |X − μ| < 2σ ) ≥ 0.75.

This says that there is at least a 75% chance that a random variable will be within 2σ of its mean, regardless of the distribution of X.

This information is useful. However, it is often possible to obtain an even tighter bound, in the following sense.
Example (3.6.3): Let Z ~ N(0, 1). Then,

P(Z ≥ t) = (1/√(2π)) ∫_t^∞ e^{−x²/2} dx ≤ (1/√(2π)) ∫_t^∞ (x/t) e^{−x²/2} dx = (1/√(2π)) e^{−t²/2} / t.

The inequality follows from the fact that x/t > 1 for x > t. Now, since Z has a symmetric distribution, P(|Z| ≥ t) = 2P(Z ≥ t). Hence,

P(|Z| ≥ t) ≤ √(2/π) e^{−t²/2} / t.

Set t = 2 and observe that

P(|Z| ≥ 2) ≤ √(2/π) e^{−2} / 2 = 0.054.

This is a much tighter bound than the one given by Chebychev's Inequality, which is generally more conservative.
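The comparison below is an added illustration: it tabulates the exact two-sided normal tail P(|Z| ≥ t) against the Chebychev bound 1/t² and the tighter bound √(2/π) e^{−t²/2}/t derived above, for a few arbitrary values of t.

```python
# Exact normal tail vs. Chebychev bound vs. the normal tail bound of Example (3.6.3).
import numpy as np
from scipy.stats import norm

for t in [1.5, 2.0, 3.0]:
    exact = 2 * norm.sf(t)                           # P(|Z| >= t)
    chebychev = 1 / t**2
    tail_bound = np.sqrt(2 / np.pi) * np.exp(-t**2 / 2) / t
    print(f"t={t}: exact={exact:.4f}, Chebychev={chebychev:.4f}, "
          f"tail bound={tail_bound:.4f}")
```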

A related inequality is Markov's Inequality.
Lemma (3.8.3): If P(Y ≥ 0) = 1 and P(Y = 0) < 1, then, for any r > 0,

P(Y ≥ r) ≤ E[Y] / r,

and the relationship holds with equality if and only if P(Y = r) = p = 1 − P(Y = 0), where 0 < p ≤ 1.
The more general form of Chebychev's Inequality, provided in White (2001), is as follows.
Proposition 2.41 (White 2001): Let X be a random variable such that E[|X|^r] < ∞, r > 0. Then, for every t > 0,

P(|X| ≥ t) ≤ E[|X|^r] / t^r.

Setting r = 2, and re-arranging, gives the usual Chebychev's Inequality. If we let r = 1, then we obtain Markov's Inequality. See White (2001, pp. 29-30).



We continue with inequalities involving expectations. However, first we present a useful lemma.
Lemma (4.7.1): Let a and b be any positive numbers, and let p and q be any positive numbers (necessarily greater than 1) satisfying

1/p + 1/q = 1.

Then,

(1/p) a^p + (1/q) b^q ≥ ab,        (2)

with equality if and only if a^p = b^q.


Proof: Fix b, and consider the function

g(a) = (1/p) a^p + (1/q) b^q − ab.

To minimise g(a), differentiate and set equal to zero:

(d/da) g(a) = 0  ⟹  a^{p−1} − b = 0  ⟹  a = b^{1/(p−1)}.

A check of the second derivative will establish that this is indeed a minimum. The value of the function at the minimum is

(1/p) b^{p/(p−1)} + (1/q) b^q − b^{1/(p−1)} b = (1/p) b^q + (1/q) b^q − b^{p/(p−1)} = b^q − b^q = 0.

Note that p/(p − 1) = q, since 1/p + 1/q = 1.
Hence, the unique minimum is zero and (2) is established. Since the minimum is unique, equality holds only if a = b^{1/(p−1)}, which is equivalent to a^p = b^q, again from 1/p + 1/q = 1.


Theorem (4.7.2) (Hölder's Inequality): Let X and Y be any two random variables, and let p and q be such that

1/p + 1/q = 1.

Then,

E[|XY|] ≤ { E[|X|^p] }^{1/p} { E[|Y|^q] }^{1/q}.

Proof: Define

a = |X| / { E[|X|^p] }^{1/p}    and    b = |Y| / { E[|Y|^q] }^{1/q}.

Applying Lemma (4.7.1), we get

(1/p) |X|^p / E[|X|^p] + (1/q) |Y|^q / E[|Y|^q]  ≥  |XY| / ( { E[|X|^p] }^{1/p} { E[|Y|^q] }^{1/q} ).        (3)

Observe that

E[ (1/p) |X|^p / E[|X|^p] + (1/q) |Y|^q / E[|Y|^q] ] = 1/p + 1/q = 1.

Using this result and taking expectations of both sides of (3) yields

E[|XY|] ≤ { E[|X|^p] }^{1/p} { E[|Y|^q] }^{1/q}.

Now consider a common special case of Hölder's Inequality.
Theorem (4.7.3) (Cauchy-Schwarz Inequality): For any two random variables X and Y,

E[|XY|] ≤ { E[|X|²] }^{1/2} { E[|Y|²] }^{1/2}.

Proof: Set p = q = 2 in Hölder's Inequality.
Example (4.7.4): If X and Y have means μ_X and μ_Y and variances σ²_X and σ²_Y, respectively, we can apply the Cauchy-Schwarz Inequality to get

E[ |(X − μ_X)(Y − μ_Y)| ] ≤ { E[(X − μ_X)²] }^{1/2} { E[(Y − μ_Y)²] }^{1/2}.        (4)

Now, observe that for any two random variables W and Z,

−|WZ| ≤ WZ ≤ |WZ|,

which implies that

−E[|WZ|] ≤ E[WZ] ≤ E[|WZ|],

which, in turn, implies that

{ E[WZ] }² ≤ { E[|WZ|] }².



Therefore,

{ E[(X − μ_X)(Y − μ_Y)] }² = [Cov(X, Y)]² ≤ { E[ |(X − μ_X)(Y − μ_Y)| ] }²,

and (4) implies that

[Cov(X, Y)]² ≤ σ²_X σ²_Y.

A particular implication of the previous result is that

−1 ≤ ρ_{XY} ≤ 1,

where ρ_{XY} is the correlation between X and Y. One can show this without using the Cauchy-Schwarz Inequality as well, but the corresponding calculations would be a lot more tedious. See the proof of Theorem 4.5.7 in Casella & Berger (2001, pp. 172-173).



Theorem (4.7.5) (Minkowski's Inequality): Let X and Y be any two random variables. Then, for 1 ≤ p < ∞,

{ E[|X + Y|^p] }^{1/p} ≤ { E[|X|^p] }^{1/p} + { E[|Y|^p] }^{1/p}.

Proof: We will use the triangle inequality, which says that

|X + Y| ≤ |X| + |Y|.

Now,

E[|X + Y|^p] = E[ |X + Y| |X + Y|^{p−1} ] ≤ E[ |X| |X + Y|^{p−1} ] + E[ |Y| |X + Y|^{p−1} ],        (5)

where we have used the triangle inequality.

Next, we apply Hölder's Inequality to each expectation on the right-hand side of (5) and obtain

E[|X + Y|^p] ≤ { E[|X|^p] }^{1/p} { E[ |X + Y|^{q(p−1)} ] }^{1/q} + { E[|Y|^p] }^{1/p} { E[ |X + Y|^{q(p−1)} ] }^{1/q},

where q is such that 1/p + 1/q = 1. Notice that q(p − 1) = p and 1 − 1/q = 1/p.
Then,

E[|X + Y|^p] / { E[ |X + Y|^{q(p−1)} ] }^{1/q} = E[|X + Y|^p] / { E[|X + Y|^p] }^{1/q} = { E[|X + Y|^p] }^{1 − 1/q} = { E[|X + Y|^p] }^{1/p},

and therefore

{ E[|X + Y|^p] }^{1/p} ≤ { E[|X|^p] }^{1/p} + { E[|Y|^p] }^{1/p}.


The previous results can be used for the case of numerical sums as well.
For example, let a1, ..., an and b1, ..., bn be positive nonrandom values. Let X be a random variable with range a1, ..., an and P(X = ai) = 1/n, i = 1, ..., n. Similarly, let Y be a random variable taking on values b1, ..., bn with probability P(Y = bi) = 1/n. Moreover, let p and q be such that 1/p + 1/q = 1. Suppose also that P(X = ai, Y = bj) is equal to 0 whenever i ≠ j and is equal to 1/n whenever i = j.
Then,

E[|XY|] = (1/n) Σ_{i=1}^{n} |ai bi|,

and

{ E[|X|^p] }^{1/p} = [ (1/n) Σ_{i=1}^{n} |ai|^p ]^{1/p}    and    { E[|Y|^q] }^{1/q} = [ (1/n) Σ_{i=1}^{n} |bi|^q ]^{1/q}.


Hence, by Hölder's Inequality,

(1/n) Σ_{i=1}^{n} |ai bi| ≤ [ (1/n) Σ_{i=1}^{n} |ai|^p ]^{1/p} [ (1/n) Σ_{i=1}^{n} |bi|^q ]^{1/q} = (1/n)^{1/p + 1/q} [ Σ_{i=1}^{n} |ai|^p ]^{1/p} [ Σ_{i=1}^{n} |bi|^q ]^{1/q},

and so, since 1/p + 1/q = 1,

Σ_{i=1}^{n} |ai bi| ≤ [ Σ_{i=1}^{n} |ai|^p ]^{1/p} [ Σ_{i=1}^{n} |bi|^q ]^{1/q}.

A special case of this result is obtained by letting bi = 1 for all i and setting p = q = 2. Then,

Σ_{i=1}^{n} |ai| ≤ [ Σ_{i=1}^{n} |ai|² ]^{1/2} n^{1/2},

and so,

(1/n) ( Σ_{i=1}^{n} |ai| )² ≤ Σ_{i=1}^{n} a²i.
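A quick numerical check of these sum inequalities, added as an illustration (the vectors and the value of p are arbitrary choices):

```python
# Hoelder's inequality for sums and its Cauchy-Schwarz special case (b_i = 1, p = q = 2).
import numpy as np

rng = np.random.default_rng(4)
a, b = rng.uniform(0.1, 5.0, size=50), rng.uniform(0.1, 5.0, size=50)
p = 3.0
q = p / (p - 1)                                  # so that 1/p + 1/q = 1

lhs = np.sum(np.abs(a * b))
rhs = np.sum(np.abs(a)**p)**(1 / p) * np.sum(np.abs(b)**q)**(1 / q)
print(lhs <= rhs, lhs, rhs)                      # Hoelder for sums

n = len(a)
print((1 / n) * np.sum(np.abs(a))**2 <= np.sum(a**2))   # special case
```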



Next, we turn to functional inequalities. These are inequalities that rely on properties of real-valued functions rather than on any statistical properties.
Let's start by defining a convex function.
Definition (4.7.6): A function g(x) is convex if

g( λx + (1 − λ)y ) ≤ λ g(x) + (1 − λ) g(y),    for all x and y, and 0 < λ < 1.

The function g(x) is concave if −g(x) is convex.
Now we can introduce Jensen's Inequality.
Theorem (4.7.7): For any random variable X, if g(x) is a convex function, then

E[g(X)] ≥ g(E[X]).


Proof: Let ℓ(x) be a tangent line to g(x) at the point (E[X], g(E[X])). Write ℓ(x) = a + bx for some a and b.
By the convexity of g, we have g(x) ≥ a + bx. Since expectations preserve inequalities,

E[g(X)] ≥ E[a + bX] = a + bE[X] = ℓ(E[X]) = g(E[X]).

Note, for example, that g(x) = x² is a convex function. Then,

E[X²] ≥ { E[X] }².

Moreover, if X is positive, then 1/x is convex and

E[1/X] ≥ 1/E[X].

Note that, for concave functions, the inequality is reversed:

E[g(X)] ≤ g(E[X]).
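A short numerical illustration of Jensen's Inequality, added by us (the choice of a gamma-distributed X is arbitrary; any positive random variable would do):

```python
# Jensen's Inequality: convex functions x^2 and 1/x, concave function log(x).
import numpy as np

rng = np.random.default_rng(5)
x = rng.gamma(shape=2.0, scale=1.5, size=1_000_000)   # an arbitrary positive r.v.

print(np.mean(x**2), np.mean(x)**2)            # convex:  E[X^2]   >= (E[X])^2
print(np.mean(1 / x), 1 / np.mean(x))          # convex:  E[1/X]   >= 1/E[X]
print(np.mean(np.log(x)), np.log(np.mean(x)))  # concave: E[log X] <= log E[X]
```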



We finish by providing the following theorem.
Theorem (4.7.9): Let X be any random variable and g(x) and h(x) any functions such that E[g(X)], E[h(X)] and E[g(X)h(X)] exist.
1. If g(x) is a nondecreasing function and h(x) is a nonincreasing function, then

E[g(X)h(X)] ≤ E[g(X)] E[h(X)].

2. If g(x) and h(x) are either both nondecreasing or both nonincreasing, then

E[g(X)h(X)] ≥ E[g(X)] E[h(X)].

Unlike, say, Hölder's Inequality or the Cauchy-Schwarz Inequality, this inequality allows us to bound an expectation without using higher-order moments.
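A simulation check of both parts of the theorem, added as an illustration (the distribution of X and the functions g and h are arbitrary choices: g(x) = x is nondecreasing, exp(−x) is nonincreasing, and x³ is nondecreasing):

```python
# Theorem (4.7.9): opposite-monotone functions give a <=, co-monotone give a >=.
import numpy as np

rng = np.random.default_rng(6)
x = rng.standard_normal(1_000_000)

g, h_dec, h_inc = x, np.exp(-x), x**3            # x**3 is nondecreasing

print(np.mean(g * h_dec), np.mean(g) * np.mean(h_dec))   # lhs <= rhs (part 1)
print(np.mean(g * h_inc), np.mean(g) * np.mean(h_inc))   # lhs >= rhs (part 2)
```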



Let us prove the first part of this theorem for the easier case where E[g(X)] = E[h(X)] = 0.
Now,

E[g(X)h(X)] = ∫ g(x)h(x) f_X(x) dx

            = ∫_{ {x: h(x) ≤ 0} } g(x)h(x) f_X(x) dx + ∫_{ {x: h(x) ≥ 0} } g(x)h(x) f_X(x) dx.

Let x0 be such that h(x0) = 0. Then, since h(x) is nonincreasing,

{x : h(x) ≤ 0} = {x : x0 ≤ x < ∞}    and    {x : h(x) ≥ 0} = {x : −∞ < x ≤ x0}.

Moreover, since g(x) is nondecreasing,

g(x0) = min_{ {x: h(x) ≤ 0} } g(x)    and    g(x0) = max_{ {x: h(x) ≥ 0} } g(x).



Hence,

∫_{ {x: h(x) ≤ 0} } g(x)h(x) f_X(x) dx ≤ g(x0) ∫_{ {x: h(x) ≤ 0} } h(x) f_X(x) dx,

and

∫_{ {x: h(x) ≥ 0} } g(x)h(x) f_X(x) dx ≤ g(x0) ∫_{ {x: h(x) ≥ 0} } h(x) f_X(x) dx.

Thus,

E[g(X)h(X)] = ∫_{ {x: h(x) ≤ 0} } g(x)h(x) f_X(x) dx + ∫_{ {x: h(x) ≥ 0} } g(x)h(x) f_X(x) dx

            ≤ g(x0) ∫_{ {x: h(x) ≤ 0} } h(x) f_X(x) dx + g(x0) ∫_{ {x: h(x) ≥ 0} } h(x) f_X(x) dx

            = g(x0) ∫ h(x) f_X(x) dx = g(x0) E[h(X)] = 0.

You can try to prove the second part, following the same method.

