
APM 504 - PS4 Solutions

6.4) (a) Let $(\Omega, \mathcal{F}, P)$ be a probability space and let $m(\mathcal{F})$ be the collection of random variables $X : \Omega \to \mathbb{R}$. Define the mapping $d : m(\mathcal{F}) \times m(\mathcal{F}) \to [0, 1]$ by setting
\[ d(X, Y) = E\left[ \frac{|X - Y|}{1 + |X - Y|} \right]. \]
Since $d(X, Y) \le E|X - Y|$, it is clear that $d(X, Y) = 0$ whenever $X = Y$ almost surely. Conversely, if $d(X, Y) = 0$, then $|X - Y|/(1 + |X - Y|) = 0$ almost surely, which in turn implies that $X = Y$ almost surely. (To verify these claims, recall that if $f \ge 0$, then $\int f \, d\mu = 0$ if and only if $\mu(\{x : f(x) > 0\}) = 0$.) Symmetry of $d$ follows immediately from the definition.
Finally, to prove the triangle inequality, let $\varphi(a) = a/(1 + a)$ and observe that if $a = b + c$, where $a, b, c \ge 0$, then
\[ \varphi(a) = \frac{b + c}{1 + b + c} \le \frac{b}{1 + b} + \frac{c}{1 + c} = \varphi(b) + \varphi(c). \]
Furthermore, since $\varphi(a)$ is an increasing function of $a \ge 0$, it follows that $\varphi(a) \le \varphi(b) + \varphi(c)$ whenever $a \le b + c$. Since $|x - z| \le |x - y| + |y - z|$, these two observations imply that
\[ \frac{|x - z|}{1 + |x - z|} \le \frac{|x - y|}{1 + |x - y|} + \frac{|y - z|}{1 + |y - z|} \]
for all $x, y, z \in \mathbb{R}$. In particular, the inequality
\[ \frac{|X - Z|}{1 + |X - Z|} \le \frac{|X - Y|}{1 + |X - Y|} + \frac{|Y - Z|}{1 + |Y - Z|} \]
holds almost surely whenever $X, Y, Z \in m(\mathcal{F})$, and taking expectations then gives
\[ d(X, Z) \le d(X, Y) + d(Y, Z). \]
Remark: Notice that in general $d$ is a pseudo-metric rather than a metric on $m(\mathcal{F})$, since there may be random variables $X \ne Y$ for which $X = Y$ almost surely. In this case, we can define an equivalence relation on $m(\mathcal{F})$ by stipulating that $X \sim Y$ whenever $X = Y$ almost surely. If we then let $V = m(\mathcal{F})/{\sim}$ be the set of equivalence classes under $\sim$ and we define $\tilde{d}([X], [Y]) = d(X, Y)$, where $[X]$ denotes the equivalence class of $X$, then $\tilde{d}$ is a metric on $V$.
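Remark (numerical sketch, not part of the solution): the pointwise bound above can be spot-checked by simulation. Below is a minimal sketch assuming NumPy (the distributions and seed are arbitrary choices) that estimates $d$ from paired samples; since the pointwise inequality holds for every sample, the Monte Carlo averages satisfy the triangle inequality exactly.

    import numpy as np

    rng = np.random.default_rng(0)

    def d_hat(X, Y):
        """Monte Carlo estimate of d(X, Y) = E[|X - Y| / (1 + |X - Y|)]."""
        diff = np.abs(X - Y)
        return np.mean(diff / (1.0 + diff))

    n = 10**6
    X = rng.normal(size=n)
    Y = X + rng.normal(scale=0.1, size=n)  # Y close to X, so d_hat(X, Y) is small
    Z = rng.normal(size=n)                 # Z independent of X

    print(d_hat(X, Y))                               # small, roughly E[0.1|N|/(1 + 0.1|N|)]
    print(d_hat(X, Z) <= d_hat(X, Y) + d_hat(Y, Z))  # True: the pointwise bound averages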
(b) Suppose that $X_n \to X$ in probability. Using the fact that
\[ E\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right] \le \varepsilon + P(|X_n - X| > \varepsilon) \]
whenever $\varepsilon \in (0, 1)$, it follows that $\limsup_n d(X_n, X) \le \varepsilon$ for every $\varepsilon > 0$ and so $d(X_n, X) \to 0$. To prove the converse, observe that
\[ E\left[ \frac{|X_n - X|}{1 + |X_n - X|} \right] \ge \frac{\varepsilon}{2} \, P(|X_n - X| > \varepsilon) \]
for any $\varepsilon \in (0, 1)$, since $a/(1 + a) \ge \varepsilon/(1 + \varepsilon) \ge \varepsilon/2$ on the event $\{|X_n - X| > \varepsilon\}$. Thus, if $d(X_n, X) \to 0$, it follows that $P(|X_n - X| > \varepsilon) \to 0$, and since this holds for any $\varepsilon > 0$, $X_n \to X$ in probability.
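Remark (numerical sketch, not part of the solution): the metric $d$ detects convergence in probability even when $E|X_n - X|$ does not tend to $0$. A minimal sketch assuming NumPy (the particular sequence is an arbitrary choice): take $X_n - X = n$ with probability $1/n$ and $0$ otherwise, so $X_n \to X$ in probability while $E|X_n - X| = 1$ for every $n$.

    import numpy as np

    rng = np.random.default_rng(1)
    m = 10**6

    def d_hat(diff):
        """Monte Carlo estimate of E[|diff| / (1 + |diff|)]."""
        a = np.abs(diff)
        return np.mean(a / (1.0 + a))

    for n in [10, 100, 1000, 10000]:
        diff = n * (rng.random(m) < 1.0 / n)        # = n with prob 1/n, else 0
        print(n, d_hat(diff), np.abs(diff).mean())  # d -> 0 while E|diff| stays near 1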
6.5) Suppose that $(X_n ; n \ge 1) \subset m(\mathcal{F})$ has the property that
\[ \lim_{n, m \to \infty} d(X_n, X_m) = 0. \]
Then, arguing as in (b) of the previous problem, it follows that for every $\varepsilon > 0$,
\[ (*) \qquad \lim_{n, m \to \infty} P(|X_n - X_m| > \varepsilon) = 0, \]
i.e., $(X_n ; n \ge 1)$ is a Cauchy sequence in probability. We will show that there is a subsequence $(X_{n_k} ; k \ge 1)$ which is almost surely Cauchy. Let $n_0 = 1$ and for each $k \ge 1$, use $(*)$ to choose $n_k > n_{k-1}$ such that
\[ P(|X_{n_k} - X_m| > 2^{-k}) < 2^{-k} \quad \text{for all } m \ge n_k. \]
In particular, since $P(|X_{n_k} - X_{n_{k+1}}| > 2^{-k}) < 2^{-k}$, we have
\[ \sum_{k=1}^{\infty} P(|X_{n_k} - X_{n_{k+1}}| > 2^{-k}) < \infty, \]
and so the first Borel-Cantelli lemma implies that $P(|X_{n_k} - X_{n_{k+1}}| > 2^{-k} \text{ i.o.}) = 0$. Thus, if we define
\[ \Omega_c = \{\omega : |X_{n_k}(\omega) - X_{n_{k+1}}(\omega)| \le 2^{-k} \text{ for all } k \text{ sufficiently large}\}, \]
then $P(\Omega_c) = 1$ and $(X_{n_k}(\omega) : k \ge 1)$ is a Cauchy sequence whenever $\omega \in \Omega_c$. Indeed, this last claim follows from the observation that if $j > i$, with $i$ sufficiently large (depending on $\omega$), then
\[ |X_{n_i}(\omega) - X_{n_j}(\omega)| \le \sum_{k=i}^{j-1} |X_{n_k}(\omega) - X_{n_{k+1}}(\omega)| \le \sum_{k=i}^{\infty} |X_{n_k}(\omega) - X_{n_{k+1}}(\omega)| \le \sum_{k=i}^{\infty} 2^{-k} = 2^{1-i}, \]
which tends to $0$ as $i \to \infty$.
Next we can define a random variable $X$ by setting
\[ X(\omega) = \begin{cases} \lim_{k \to \infty} X_{n_k}(\omega) & \text{if } \omega \in \Omega_c \\ 0 & \text{otherwise.} \end{cases} \]
Since $X_{n_k} \to X$ almost surely and $\Omega_c$ is a measurable set, it follows that $X$ is measurable and also that $X_{n_k} \to X$ in probability. However, by the preceding problem, this implies that $d(X_{n_k}, X) \to 0$ as $k \to \infty$, and so by using the triangle inequality we have
\[ \lim_{n \to \infty} d(X_n, X) \le \lim_{n, k \to \infty} d(X_n, X_{n_k}) + \lim_{k \to \infty} d(X_{n_k}, X) = 0. \]
Thus, applying problem (6.4) once more, we can conclude that $X_n \to X$ in probability.
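Remark (numerical sketch, not part of the solution): the subsequence phenomenon is visible for the classical "typewriter" sequence of indicators on $[0, 1)$, which converges to $0$ in probability (hence is Cauchy in probability) but converges at no point, while the subsequence $n_k = 2^k$ converges almost surely. A minimal sketch in plain Python/NumPy:

    import numpy as np

    def X(n, omega):
        """Typewriter sequence: for 2**j <= n < 2**(j+1), the indicator of
        [(n - 2**j)/2**j, (n - 2**j + 1)/2**j) evaluated at omega in [0, 1)."""
        j = int(np.log2(n))
        lo = (n - 2**j) / 2**j
        hi = (n - 2**j + 1) / 2**j
        return float(lo <= omega < hi)

    omega = 0.3                                    # a fixed sample point
    full = [X(n, omega) for n in range(1, 2**12)]
    sub = [X(2**k, omega) for k in range(12)]
    print(max(full[2**10:]))   # 1.0: the full sequence hits 1 infinitely often
    print(sub[4:])             # all 0.0: X_{2^k}(omega) -> 0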
6.8) Suppose that $X_1, X_2, \dots$ are independent Poisson random variables with $EX_n = \lambda_n$ and $S_n = X_1 + \cdots + X_n$, and assume that $\sum_{n=1}^{\infty} \lambda_n = \infty$. We wish to show that $S_n/ES_n \to 1$ almost surely. We first observe that we may assume without loss of generality that each $\lambda_n \le 1$. To see why, recall that if $Z_1, \dots, Z_n$ are independent Poisson random variables with parameters $\lambda_1, \dots, \lambda_n$, then $Z_1 + \cdots + Z_n$ is a Poisson random variable with parameter $\lambda_1 + \cdots + \lambda_n$. It follows that if $\lambda_n > 1$, then we may replace $X_n$ by a block of independent Poisson random variables $Z_1^{(n)}, \dots, Z_{m(n)}^{(n)}$ where $X_n \stackrel{d}{=} Z_1^{(n)} + \cdots + Z_{m(n)}^{(n)}$ and $EZ_i^{(n)} \le 1$ for each $i = 1, \dots, m(n)$. If $\lambda_n \le 1$, then we simply keep $X_n$ and set $m(n) = 1$. Let us denote the variables in this new sequence $Y_1, Y_2, \dots$ and write $I_n = Y_1 + \cdots + Y_n$ for the new partial sums. Letting $N(n) = m(1) + \cdots + m(n)$, we have $I_{N(n)} \stackrel{d}{=} S_n$ for every $n \ge 1$ and, in fact, the infinite sequences $(I_{N(n)} ; n \ge 1)$ and $(S_n ; n \ge 1)$ are identical in distribution. Since all of the subsequences of a convergent sequence converge to the same limit, it follows that if we can show that $I_n/EI_n \to 1$ almost surely, then we can also conclude that $S_n/ES_n \to 1$ almost surely. With this in mind, we will assume that $\lambda_n \le 1$ for the remainder of the proof. (This will be used below.)
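Remark (numerical sketch, not part of the solution): the splitting step only uses additivity of independent Poissons. A minimal sketch assuming NumPy ($\lambda = 3.7$ and the equal-sized blocks are arbitrary choices), comparing a Poisson$(\lambda)$ sample with a sum of $\lceil \lambda \rceil$ independent Poissons with means $\le 1$:

    import numpy as np

    rng = np.random.default_rng(2)
    lam = 3.7                         # any mean > 1
    m = int(np.ceil(lam))             # number of blocks, each with mean <= 1
    means = [lam / m] * m             # here: four independent Poisson(0.925)

    n = 10**6
    block_sum = sum(rng.poisson(mu, size=n) for mu in means)
    direct = rng.poisson(lam, size=n)
    print(block_sum.mean(), direct.mean())  # both close to 3.7
    print(block_sum.var(), direct.var())    # both close to 3.7 (Poisson: var = mean)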
We first show that $S_n/ES_n \to 1$ in probability. Indeed, since $\mathrm{Var}(X_n) = EX_n$ for each $n \ge 1$, we have $\mathrm{Var}(S_n) = ES_n$, and so Chebyshev's inequality implies that
\[ P\left( \left| S_n/ES_n - 1 \right| > \varepsilon \right) = P\left( |S_n - ES_n| > \varepsilon \, ES_n \right) \le \frac{\mathrm{Var}(S_n)}{\varepsilon^2 (ES_n)^2} = \frac{1}{\varepsilon^2 \, ES_n} \to 0 \]
for any $\varepsilon > 0$. Thus $S_n/ES_n \to 1$ in probability.
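Remark (numerical sketch, not part of the solution): the $1/(\varepsilon^2 ES_n)$ rate is easy to see by simulation. A minimal sketch assuming NumPy, with the arbitrary choices $\lambda_j \equiv 1$ (so that $S_n \sim$ Poisson$(n)$) and $\varepsilon = 0.1$:

    import numpy as np

    rng = np.random.default_rng(3)
    eps, reps = 0.1, 20000
    for n in [100, 1000, 10000]:
        S = rng.poisson(n, size=reps)        # S_n when lambda_j = 1 for all j <= n
        freq = np.mean(np.abs(S / n - 1) > eps)
        print(n, freq, 1 / (eps**2 * n))     # empirical probability vs Chebyshev bound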
Next, proceeding as in the proof of Theorem (1.6.8), let $n_k = \inf\{n : ES_n \ge k^2\}$. Let $T_k = S_{n_k}$ and note that since each $EX_m = \lambda_m \le 1$, we have $k^2 \le ET_k \le k^2 + 1$. In particular,
\[ P\left( |T_k - ET_k| > \varepsilon \, ET_k \right) \le \frac{\mathrm{Var}(T_k)}{\varepsilon^2 (ET_k)^2} = \frac{1}{\varepsilon^2 \, ET_k} \le \frac{1}{\varepsilon^2 k^2} \]
for any $\varepsilon > 0$, and since $k^{-2}$ is summable, the first Borel-Cantelli lemma implies that $P(|T_k - ET_k| > \varepsilon \, ET_k \text{ i.o.}) = 0$. Since $\varepsilon > 0$ is arbitrary, we have shown that $T_k/ET_k \to 1$ almost surely.
Continuing as in the book, we know that if $n_k \le n \le n_{k+1}$, then
\[ \frac{ET_k}{ET_{k+1}} \cdot \frac{T_k}{ET_k} \le \frac{S_n}{ES_n} \le \frac{ET_{k+1}}{ET_k} \cdot \frac{T_{k+1}}{ET_{k+1}}. \]
Thus it suffices to show that $ET_{k+1}/ET_k \to 1$, which follows from the inequalities
\[ k^2 \le ET_k \le ET_{k+1} \le (k + 1)^2 + 1. \]
Notice that it is in this final step that we make essential use of the assumption that $EX_n \le 1$ for each $n \ge 1$. Were this not true, then it could happen that the ratio $ET_{k+1}/ET_k$ would fail to converge.
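Remark (numerical sketch, not part of the solution): the blocking can be checked directly. A minimal sketch assuming NumPy, with $\lambda_n$ drawn arbitrarily from $(0, 1]$; it computes $n_k = \inf\{n : ES_n \ge k^2\}$ and confirms that $ET_{k+1}/ET_k \to 1$.

    import numpy as np

    rng = np.random.default_rng(4)
    lam = rng.uniform(0.0, 1.0, size=10**6)  # means lambda_n <= 1
    ES = np.cumsum(lam)                      # ES_n = lambda_1 + ... + lambda_n

    ks = np.arange(1, 701)                   # k^2 stays below ES[-1] ~ 5 * 10**5
    n_k = np.searchsorted(ES, ks**2)         # n_k = inf{n : ES_n >= k^2}
    ET = ES[n_k]                             # ET_k, with k^2 <= ET_k <= k^2 + 1
    ratios = ET[1:] / ET[:-1]                # ET_{k+1} / ET_k
    print(ratios[[0, 9, 99, 499]])           # ~4, ~1.2, ~1.02, ~1.004: tends to 1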
6.13) Suppose that $X_1, X_2, \dots$ are independent. If $\sum_{n \ge 1} P(X_n > A) < \infty$ for some $A$, then by the first Borel-Cantelli lemma we have that $P(X_n > A \text{ i.o.}) = 0$. It follows that if $N_A(\omega) = \sup\{n \ge 1 : X_n(\omega) > A\}$, then $P(N_A < \infty) = 1$ and
\[ \sup\{X_n ; n \ge 1\} \le A \vee \sup\{X_1, \dots, X_{N_A}\}. \]
Thus
\[ P(\sup X_n = \infty) \le P(N_A = \infty) + P(X_n = \infty \text{ for some } n \ge 1) \le 0 + \sum_{n \ge 1} P(X_n = \infty) = 0. \]
Conversely, suppose that $\sum_{n \ge 1} P(X_n > A) = \infty$ for every $A < \infty$. In this case, the second Borel-Cantelli lemma implies that for every $N \ge 1$, $P(X_n > N \text{ i.o.}) = 1$, whence $P(\sup X_n > N) = 1$ as well. Since
\[ \{\sup X_n = \infty\} = \bigcap_{N=1}^{\infty} \{\sup X_n > N\}, \]
the continuity properties of probability measures imply that $P(\sup X_n = \infty) = 1$.
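Remark (numerical sketch, not part of the solution): a minimal sketch assuming NumPy, contrasting the two regimes with arbitrarily chosen distributions. For $X_n = E_n/n$ with $E_n$ i.i.d. exponential, $\sum_n P(X_n > A) = \sum_n e^{-An} < \infty$ for every $A > 0$, so the supremum is finite; for $X_n = E_n$, the tail sums diverge for every $A$ and the running supremum grows like $\log n$.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 10**6
    E = rng.exponential(size=n)
    idx = np.arange(1, n + 1)

    print((E / idx).max())        # finite sup, typically attained at a small index
    print(E.max(), np.log(n))     # sup over n <= 10**6 is near log(10**6) ~ 13.8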
6.14) Let $X_1, X_2, \dots$ be independent Bernoulli random variables with $P(X_n = 1) = p_n$.
(i) Since for any $\varepsilon \in (0, 1)$ we have
\[ P(X_n > \varepsilon) = P(X_n = 1) = p_n, \]
it follows that $X_n \to 0$ in probability if and only if $p_n \to 0$.
(ii) We first observe that $X_n(\omega) \to 0$ if and only if there exists $N(\omega) < \infty$ such that $X_n(\omega) = 0$ for all $n \ge N(\omega)$. Suppose that $\sum_{n=1}^{\infty} p_n < \infty$. Using the first Borel-Cantelli lemma, this implies that $P(X_n = 1 \text{ i.o.}) = 0$ and so $X_n \to 0$ almost surely. Conversely, if $\sum_{n=1}^{\infty} p_n = \infty$, then the second Borel-Cantelli lemma implies that $P(X_n = 1 \text{ i.o.}) = 1$ and so $P(X_n \to 0) = 0$.
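Remark (numerical sketch, not part of the solution): a minimal sketch assuming NumPy, contrasting $p_n = 1/n$ (divergent sum: ones keep arriving, so $X_n \not\to 0$ almost surely even though $X_n \to 0$ in probability) with $p_n = 1/n^2$ (convergent sum: the last one appears early).

    import numpy as np

    rng = np.random.default_rng(6)
    n = 10**6
    idx = np.arange(1, n + 1)
    U = rng.random(n)                                 # one U_n per trial (a coupling)

    ones_1n = np.nonzero(U < 1.0 / idx)[0] + 1        # indices with X_n = 1, p_n = 1/n
    ones_1n2 = np.nonzero(U < 1.0 / idx**2)[0] + 1    # same, with p_n = 1/n^2

    print(ones_1n[-5:])    # ones occur arbitrarily late (infinitely often, a.s.)
    print(ones_1n2[-5:])   # only finitely many ones, all with small indices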
6.18) Let $X_1, X_2, \dots$ be independent exponential random variables with mean 1, i.e., $P(X_i > x) = e^{-x}$.
(i) Since for any $\varepsilon > 0$ we have
\[ \sum_{n=1}^{\infty} P(X_n > (1 + \varepsilon) \log(n)) = \sum_{n=1}^{\infty} \frac{1}{n^{1+\varepsilon}} < \infty, \]
the first Borel-Cantelli lemma implies that $P(X_n/\log(n) > 1 + \varepsilon \text{ i.o.}) = 0$. This shows that $P(\limsup_n X_n/\log(n) \le 1 + \varepsilon) = 1$ for every $\varepsilon > 0$ and so (by taking a countable sequence $\varepsilon_k \downarrow 0$) it follows that $\limsup X_n/\log(n) \le 1$ almost surely. Similarly,
\[ \sum_{n=1}^{\infty} P(X_n > \log(n)) = \sum_{n=1}^{\infty} \frac{1}{n} = \infty, \]
and so the second Borel-Cantelli lemma implies that $P(X_n/\log(n) > 1 \text{ i.o.}) = 1$. Since this shows that $P(\limsup_n X_n/\log(n) \ge 1) = 1$, we can conclude that $\limsup_n X_n/\log(n) = 1$ almost surely.
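Remark (numerical sketch, not part of the solution): a minimal sketch assuming NumPy; block maxima of $X_n/\log(n)$ stay near 1 and approach it slowly as the block location grows (the block boundaries are arbitrary choices).

    import numpy as np

    rng = np.random.default_rng(7)
    n = 10**6
    X = rng.exponential(size=n)
    ratio = X[1:] / np.log(np.arange(2, n + 1))   # X_n / log(n) for n >= 2
    for a in [10**2, 10**4, 10**6 - 1]:
        print(a, ratio[a // 10 : a].max())        # block maxima drift toward 1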
(ii) Now let $M_n = \max\{X_i : 1 \le i \le n\}$ and observe that the independence of the $X_i$'s implies that
\[ P(M_n \le x) = P(X_1 \le x, \dots, X_n \le x) = \prod_{i=1}^{n} P(X_i \le x) = (1 - e^{-x})^n. \]
On the one hand, since $1 - t \le e^{-t}$ implies
\[ \sum_{n=1}^{\infty} P(M_n < (1 - \varepsilon) \log(n)) = \sum_{n=1}^{\infty} \left( 1 - \frac{1}{n^{1-\varepsilon}} \right)^n \le \sum_{n=1}^{\infty} e^{-n^{\varepsilon}} < \infty \]
whenever $\varepsilon \in (0, 1)$ (the last series converges since $n^{\varepsilon} \ge 2\log(n)$ for all $n$ sufficiently large), the first Borel-Cantelli lemma implies that $P(M_n/\log(n) < 1 - \varepsilon \text{ i.o.}) = 0$. Taking $\varepsilon_k \downarrow 0$, it follows that $\liminf_n M_n/\log(n) \ge 1$ almost surely.
To obtain an upper bound, let $\varepsilon > 0$ be given and define
\[ \Omega_{\varepsilon} = \{\omega : \limsup_n M_n(\omega)/\log(n) > 1 + \varepsilon\}. \]
If $\omega \in \Omega_{\varepsilon}$, then there is an increasing sequence $n_k(\omega)$ such that $M_{n_k(\omega)}(\omega)/\log(n_k(\omega)) > 1 + \varepsilon/2$ for every $k \ge 1$. However, since each $X_n < \infty$, this implies that there is a second increasing sequence $m_k(\omega)$ with $m_k(\omega) \le n_k(\omega)$ and such that $X_{m_k(\omega)}(\omega)/\log(n_k(\omega)) > 1 + \varepsilon/2$ for every $k \ge 1$. Since $\log(m_k(\omega)) \le \log(n_k(\omega))$, this implies that $\limsup_n X_n(\omega)/\log(n) \ge 1 + \varepsilon/2$, and so it follows from (i) that $P(\Omega_{\varepsilon}) = 0$. Noting that this holds for a countable sequence $\varepsilon_k \downarrow 0$, we can conclude that $\limsup_n M_n/\log(n) \le 1$ almost surely, and then this fact, combined with the conclusion of the preceding paragraph, shows that $\lim_{n \to \infty} M_n/\log(n) = 1$ almost surely.
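Remark (numerical sketch, not part of the solution): a minimal sketch assuming NumPy of the conclusion $M_n/\log(n) \to 1$; the running maximum of i.i.d. mean-1 exponentials tracks $\log(n)$ closely.

    import numpy as np

    rng = np.random.default_rng(8)
    n = 10**6
    X = rng.exponential(size=n)
    M = np.maximum.accumulate(X)             # running maxima M_n
    for a in [10**2, 10**3, 10**4, 10**5, 10**6]:
        print(a, M[a - 1] / np.log(a))       # M_n / log(n) -> 1 (slowly)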