
Statistics 351 (Fall 2015)

Prof. Michael Kozdron

September 18, 2015

Lecture #5: The Transformation Theorem


Reference. 1.2.1, pages 20–23.
Last class we discussed how to perform a change-of-variables in the one-dimensional case.
In multiple dimensions, the idea is the same, although things get notationally messier.
Recall. The one-dimensional change-of-variables formula is usually written as
$$\int_a^b f(h(x))\, h'(x)\, dx = \int_{h(a)}^{h(b)} f(u)\, du,$$
i.e., $u = h(x)$, $du = h'(x)\, dx$.


We can write this as
$$\int_{h(a)}^{h(b)} f(x)\, dx = \int_a^b f(h(y))\, h'(y)\, dy. \qquad (*)$$
It is in this form that there is a resemblance to the multidimensional formula.
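As a quick check of $(*)$, take $f(x) = 1$, $h(y) = y^2$, $a = 0$, and $b = 2$:
$$\int_{h(0)}^{h(2)} 1\, dx = \int_0^4 dx = 4 \quad\text{and}\quad \int_0^2 1 \cdot h'(y)\, dy = \int_0^2 2y\, dy = 4,$$
so the two sides agree, as they must.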


Let $\mathbf{X} = (X_1, \ldots, X_n)'$ be a random vector, and let $g : \mathbb{R}^n \to \mathbb{R}^n$ be a sufficiently smooth function. That is, we write $g = (g_1, \ldots, g_n)$ with $g_i : \mathbb{R}^n \to \mathbb{R}$. Set $\mathbf{Y} = g(\mathbf{X})$ so that $(Y_1, \ldots, Y_n)' = (g_1(\mathbf{X}), \ldots, g_n(\mathbf{X}))'$ where $g_i(\mathbf{X}) = g_i(X_1, \ldots, X_n)$. Writing things out explicitly gives
$$Y_1 = g_1(X_1, \ldots, X_n), \quad \ldots, \quad Y_n = g_n(X_1, \ldots, X_n).$$
Let $h = g^{-1}$ so that $\mathbf{X} = h(\mathbf{Y})$, and let
$$J = \frac{d(\mathbf{x})}{d(\mathbf{y})} = \begin{vmatrix} \dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} & \cdots & \dfrac{\partial x_1}{\partial y_n} \\ \dfrac{\partial x_2}{\partial y_1} & \dfrac{\partial x_2}{\partial y_2} & \cdots & \dfrac{\partial x_2}{\partial y_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial x_n}{\partial y_1} & \dfrac{\partial x_n}{\partial y_2} & \cdots & \dfrac{\partial x_n}{\partial y_n} \end{vmatrix}$$
be the Jacobian.
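In particular, if $h$ is linear, say $\mathbf{x} = A\mathbf{y}$ for an invertible $n \times n$ matrix $A = (a_{ij})$, then $\partial x_i/\partial y_j = a_{ij}$ and the Jacobian is simply $J = \det A$.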
Let $B \subseteq \mathbb{R}^n$. Our goal is to compute $P\{\mathbf{Y} \in B\}$, which we will do in two ways.

The first way is as follows. Let $f_{\mathbf{Y}}(\mathbf{y})$ denote the density function for $\mathbf{Y}$ so that
$$P\{\mathbf{Y} \in B\} = \int_B f_{\mathbf{Y}}(\mathbf{y})\, d\mathbf{y}. \qquad (**)$$

The second way is by changing variables. Let $h = g^{-1}$ so that $h(B) = \{\mathbf{x} : g(\mathbf{x}) \in B\}$. Then, in analogy with the one-dimensional case, we have
$$P\{\mathbf{Y} \in B\} = P\{g(\mathbf{X}) \in B\} = P\{\mathbf{X} \in g^{-1}(B)\} = P\{\mathbf{X} \in h(B)\} = \int_{h(B)} f_{\mathbf{X}}(\mathbf{x})\, d\mathbf{x},$$
and a change-of-variables $\mathbf{y} = g(\mathbf{x})$ (i.e., $\mathbf{x} = h(\mathbf{y})$, $d\mathbf{x} = |J|\, d\mathbf{y}$) gives
$$\int_{h(B)} f_{\mathbf{X}}(\mathbf{x})\, d\mathbf{x} = \int_B f_{\mathbf{X}}(h(\mathbf{y}))\, |J|\, d\mathbf{y}. \qquad (***)$$
Note that if we are in one dimension, then $(***)$ reduces to $(*)$.

We now see that $(**)$ and $(***)$ give two distinct expressions for $P\{\mathbf{Y} \in B\}$, namely
$$P\{\mathbf{Y} \in B\} = \int_B f_{\mathbf{Y}}(\mathbf{y})\, d\mathbf{y} = \int_B f_{\mathbf{X}}(h(\mathbf{y}))\, |J|\, d\mathbf{y}.$$
Since this is true for any $B \subseteq \mathbb{R}^n$, we conclude that the integrands must be equal; that is,
$$f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}(h(\mathbf{y}))\, |J|. \qquad (****)$$

In other words, this gives us a formula for the density of the random vector $\mathbf{Y} = g(\mathbf{X})$ in terms of the density of the random vector $\mathbf{X} = g^{-1}(\mathbf{Y}) = h(\mathbf{Y})$.
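As a numerical illustration of $(****)$, here is a minimal simulation sketch (not part of the original notes; it assumes NumPy and SciPy are available, and the function names are ours). It uses the map $g(x_1, x_2) = (x_1 + x_2, x_1 - x_2)$ from the example below, for which $h(u, v) = ((u+v)/2, (u-v)/2)$ and $|J| = 1/2$, and compares a Monte Carlo estimate of $P\{\mathbf{Y} \in B\}$ with the integral over $B$ of the density predicted by $(****)$.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def f_X(x1, x2):
    # joint density of two independent N(0,1) coordinates
    return norm.pdf(x1) * norm.pdf(x2)

def f_Y(u, v):
    # density predicted by the transformation theorem:
    # f_Y(u, v) = f_X(h(u, v)) * |J| with |J| = 1/2
    return f_X((u + v) / 2, (u - v) / 2) * 0.5

# Monte Carlo estimate of P{Y in B} for the box B = [0,1] x [0,1]
x1, x2 = rng.standard_normal(10**6), rng.standard_normal(10**6)
u, v = x1 + x2, x1 - x2
mc = np.mean((u >= 0) & (u <= 1) & (v >= 0) & (v <= 1))

# Riemann-sum estimate of the integral of f_Y over B (B has area 1,
# so the mean of f_Y over a uniform grid approximates the integral)
grid = np.linspace(0, 1, 201)
uu, vv = np.meshgrid(grid, grid)
riemann = f_Y(uu, vv).mean()

print(mc, riemann)  # both approximately 0.068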
Example. Let $X, Y \in N(0,1)$ be independent. Prove that $X+Y$ and $X-Y$ are independent $N(0,2)$ random variables.

Solution. The fact that $X+Y$ and $X-Y$ each have a $N(0,2)$ distribution can be proved using moment generating functions as done in Stat 251. It is the independence of $X+Y$ and $X-Y$ that is newly proved.
Let $U = X + Y$ and $V = X - Y$ so that
$$X = \frac{U+V}{2} \quad\text{and}\quad Y = \frac{U-V}{2}.$$
That is, $\mathbf{X} = (X, Y)'$ and $\mathbf{Y} = (U, V)'$. We also find


$$g(\mathbf{X}) = g(X, Y) = (X + Y, X - Y) = (U, V) = \mathbf{Y}, \quad\text{and}$$
$$h(\mathbf{Y}) = h(U, V) = \left(\frac{U+V}{2}, \frac{U-V}{2}\right) = (X, Y) = \mathbf{X}.$$

We compute the Jacobian as
$$J = \begin{vmatrix} \dfrac{\partial x}{\partial u} & \dfrac{\partial x}{\partial v} \\[1ex] \dfrac{\partial y}{\partial u} & \dfrac{\partial y}{\partial v} \end{vmatrix} = \begin{vmatrix} 1/2 & 1/2 \\ 1/2 & -1/2 \end{vmatrix} = -\frac{1}{2}.$$

By the previous result $(****)$, we conclude
$$f_{U,V}(u,v) = f_{X,Y}(h(u,v))\, |J| = f_{X,Y}\left(\frac{u+v}{2}, \frac{u-v}{2}\right) \cdot \frac{1}{2}$$
$$= \frac{1}{2}\, f_X\left(\frac{u+v}{2}\right) f_Y\left(\frac{u-v}{2}\right) \quad\text{by the assumed independence of } X \text{ and } Y$$
$$= \frac{1}{2} \cdot \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\frac{(u+v)^2}{4}} \cdot \frac{1}{\sqrt{2\pi}}\, e^{-\frac{1}{2}\frac{(u-v)^2}{4}} = \frac{1}{\sqrt{2\pi \cdot 2}}\, e^{-\frac{u^2}{2 \cdot 2}} \cdot \frac{1}{\sqrt{2\pi \cdot 2}}\, e^{-\frac{v^2}{2 \cdot 2}}$$
for $-\infty < u, v < \infty$. In other words, since $f_{U,V}(u,v) = f_U(u)\, f_V(v)$, we conclude that $U = X + Y$ and $V = X - Y$ are independent $N(0,2)$ random variables.
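A quick simulation check of this conclusion (a sketch assuming NumPy is available, not part of the original notes): the sample variances of $U$ and $V$ should be close to 2, and their sample covariance close to 0; since $(U, V)$ is jointly normal, zero covariance is equivalent to independence.

import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(10**6)  # X ~ N(0,1)
y = rng.standard_normal(10**6)  # Y ~ N(0,1), independent of X
u, v = x + y, x - y

print(np.var(u), np.var(v))  # each approximately 2, consistent with N(0,2)
print(np.cov(u, v)[0, 1])    # approximately 0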

