
Chapter 3

Gradient and Divergence Operators


In this chapter we construct an abstract framework for stochastic analysis in continuous time with respect to a normal martingale $(M_t)_{t\in\mathbb{R}_+}$, using the construction of stochastic calculus presented in Section 2. In particular we identify some minimal properties that should be satisfied in order to connect a gradient and a divergence operator to stochastic integration, and to apply them to the predictable representation of random variables. Some applications, such as logarithmic Sobolev and deviation inequalities, are formulated in this general setting. In the next chapters we will examine concrete examples of operators that can be included in this framework, in particular when $(M_t)_{t\in\mathbb{R}_+}$ is a Brownian motion or a compensated Poisson process.
3.1 Definition and Closability
In this chapter, $(M_t)_{t\in\mathbb{R}_+}$ denotes a normal martingale as considered in Chapter 2. We let $\mathcal{S}$, $\mathcal{U}$, and $\mathcal{P}$ denote the spaces of random variables, simple processes and simple predictable processes introduced in Definition 2.5.2, and we note that $\mathcal{S}$ is dense in $L^2(\Omega)$ by Definition 2.5.2 and that $\mathcal{U}$, $\mathcal{P}$ are respectively dense in $L^2(\Omega\times\mathbb{R}_+)$ from Proposition 2.5.3.
Let now
\[
D : L^2(\Omega, dP) \longrightarrow L^2(\Omega\times\mathbb{R}_+, dP\times dt)
\]
and
\[
\delta : L^2(\Omega\times\mathbb{R}_+, dP\times dt) \longrightarrow L^2(\Omega, dP)
\]
be linear operators defined respectively on $\mathcal{S}$ and $\mathcal{U}$. We assume that the following duality relation holds.
Assumption 3.1.1. (Duality relation) The operators $D$ and $\delta$ satisfy the relation
\[
\mathbb{E}\big[\langle DF, u\rangle_{L^2(\mathbb{R}_+)}\big] = \mathbb{E}[F\delta(u)], \qquad F\in\mathcal{S},\ u\in\mathcal{U}. \tag{3.1.1}
\]
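As a quick check of how (3.1.1) operates, the equivalence noted below between $D1 = 0$ and the centering of $\delta$ follows by taking $F = 1$; the display is a sketch of that one-line argument:
\[
0 = \mathbb{E}\big[\langle D1, u\rangle_{L^2(\mathbb{R}_+)}\big] = \mathbb{E}[1\cdot\delta(u)] = \mathbb{E}[\delta(u)], \qquad u\in\mathcal{U},
\]
and conversely, if $\mathbb{E}[\delta(u)] = 0$ for all $u\in\mathcal{U}$, then $\mathbb{E}[\langle D1, u\rangle_{L^2(\mathbb{R}_+)}] = 0$, $u\in\mathcal{U}$, hence $D1 = 0$ by density of $\mathcal{U}$ in $L^2(\Omega\times\mathbb{R}_+)$.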
Note that $D1 = 0$ is equivalent to $\mathbb{E}[\delta(u)] = 0$ for all $u\in\mathcal{U}$. In the next proposition we use the notion of closability for operators in normed linear
spaces, whose definition is recalled in Section 9.8 of the Appendix. The next proposition is actually a general result on the closability of the adjoint of a densely defined operator.
Proposition 3.1.2. The duality Assumption 3.1.1 implies that $D$ and $\delta$ are closable.
Proof. If $(F_n)_{n\in\mathbb{N}}$ converges to $0$ in $L^2(\Omega)$ and $(DF_n)_{n\in\mathbb{N}}$ converges to $U\in L^2(\Omega\times\mathbb{R}_+)$, the relation
\[
\mathbb{E}\big[\langle DF_n, u\rangle_{L^2(\mathbb{R}_+)}\big] = \mathbb{E}[F_n\delta(u)], \qquad u\in\mathcal{U},
\]
implies
\begin{align*}
\big|\mathbb{E}\big[\langle U, u\rangle_{L^2(\mathbb{R}_+)}\big]\big|
&\le \big|\mathbb{E}\big[\langle DF_n, u\rangle_{L^2(\mathbb{R}_+)}\big] - \mathbb{E}\big[\langle U, u\rangle_{L^2(\mathbb{R}_+)}\big]\big| + \big|\mathbb{E}\big[\langle DF_n, u\rangle_{L^2(\mathbb{R}_+)}\big]\big| \\
&= \big|\mathbb{E}\big[\langle DF_n - U, u\rangle_{L^2(\mathbb{R}_+)}\big]\big| + \big|\mathbb{E}[F_n\delta(u)]\big| \\
&\le \|DF_n - U\|_{L^2(\Omega\times\mathbb{R}_+)}\|u\|_{L^2(\Omega\times\mathbb{R}_+)} + \|F_n\|_{L^2(\Omega)}\|\delta(u)\|_{L^2(\Omega)},
\end{align*}
hence as $n$ goes to infinity we get $\mathbb{E}[\langle U, u\rangle_{L^2(\mathbb{R}_+)}] = 0$, $u\in\mathcal{U}$, i.e. $U = 0$ since $\mathcal{U}$ is dense in $L^2(\Omega\times\mathbb{R}_+)$. The proof of closability of $\delta$ is similar: if $(u_n)_{n\in\mathbb{N}}$ converges to $0$ in $L^2(\Omega\times\mathbb{R}_+)$ and $(\delta(u_n))_{n\in\mathbb{N}}$ converges to $F\in L^2(\Omega)$, we have for all $G\in\mathcal{S}$:
\begin{align*}
\big|\mathbb{E}[FG]\big|
&\le \big|\mathbb{E}\big[\langle DG, u_n\rangle_{L^2(\mathbb{R}_+)}\big] - \mathbb{E}[FG]\big| + \big|\mathbb{E}\big[\langle DG, u_n\rangle_{L^2(\mathbb{R}_+)}\big]\big| \\
&= \big|\mathbb{E}[G(\delta(u_n) - F)]\big| + \big|\mathbb{E}\big[\langle DG, u_n\rangle_{L^2(\mathbb{R}_+)}\big]\big| \\
&\le \|\delta(u_n) - F\|_{L^2(\Omega)}\|G\|_{L^2(\Omega)} + \|u_n\|_{L^2(\Omega\times\mathbb{R}_+)}\|DG\|_{L^2(\Omega\times\mathbb{R}_+)},
\end{align*}
hence $\mathbb{E}[FG] = 0$, $G\in\mathcal{S}$, i.e. $F = 0$ since $\mathcal{S}$ is dense in $L^2(\Omega)$. $\square$
From the above proposition these operators are respectively extended to their closed domains $\mathrm{Dom}(D)$ and $\mathrm{Dom}(\delta)$, and for simplicity their extensions will still be denoted by $D$ and $\delta$.
3.2 Clark Formula and Predictable Representation
In this section we study the connection between $D$, $\delta$, and the predictable representation of random variables using stochastic integrals.
Assumption 3.2.1. (Clark formula) Every $F\in\mathcal{S}$ can be represented as
\[
F = \mathbb{E}[F] + \int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\, dM_t. \tag{3.2.1}
\]
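To make (3.2.1) concrete, here is a sketch anticipating the Brownian setting of Chapter 5, where $(M_t)_{t\in\mathbb{R}_+} = (B_t)_{t\in\mathbb{R}_+}$ and one may take $D$ such that $D_tB_T = \mathbf{1}_{[0,T]}(t)$, together with the derivation rule of Assumption 3.6.1 below; these properties are assumptions at this stage. For $F = B_T^2$,
\[
D_tF = 2B_T\mathbf{1}_{[0,T]}(t), \qquad \mathbb{E}[D_tF\mid\mathcal{F}_t] = 2B_t\mathbf{1}_{[0,T]}(t),
\]
and (3.2.1) recovers the elementary Itô formula
\[
B_T^2 = T + 2\int_0^T B_t\, dB_t.
\]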
This assumption is connected to the predictable representation property for the martingale $(M_t)_{t\in\mathbb{R}_+}$, cf. Corollary 3.2.8 and Proposition 3.2.6 below.
Definition 3.2.2. Given $k\ge 1$, let $\mathbb{D}_{2,k}([a,\infty))$, $a > 0$, denote the completion of $\mathcal{S}$ under the norm
\[
\|F\|_{\mathbb{D}_{2,k}([a,\infty))} = \|F\|_{L^2(\Omega)} + \sum_{i=1}^k \left( \mathbb{E}\left[ \int_a^\infty |D_t^iF|^2\, dt \right] \right)^{1/2},
\]
where $D_t^i = D_t\cdots D_t$ denotes the $i$-th iterated power of $D_t$, $i\ge 1$.
In other words, for any $F\in\mathbb{D}_{2,k}([a,\infty))$, the process $(D_tF)_{t\in[a,\infty)}$ exists in $L^2(\Omega\times[a,\infty))$. Clearly we have $\mathrm{Dom}(D) = \mathbb{D}_{2,1}([0,\infty))$. Under the Clark formula Assumption 3.2.1, a representation result for $F\in\mathbb{D}_{2,1}([a,\infty))$ can be stated as a consequence of the Clark formula:
Proposition 3.2.3. For all $t\in\mathbb{R}_+$ and $F\in\mathbb{D}_{2,1}([t,\infty))$ we have
\[
\mathbb{E}[F\mid\mathcal{F}_t] = \mathbb{E}[F] + \int_0^t \mathbb{E}[D_sF\mid\mathcal{F}_s]\, dM_s, \tag{3.2.2}
\]
and
\[
F = \mathbb{E}[F\mid\mathcal{F}_t] + \int_t^\infty \mathbb{E}[D_sF\mid\mathcal{F}_s]\, dM_s, \qquad t\in\mathbb{R}_+. \tag{3.2.3}
\]
Proof. This is a direct consequence of (3.2.1) and Proposition 2.5.7. $\square$
By uniqueness of the predictable representation of $F\in L^2(\Omega)$, an expression of the form
\[
F = c + \int_0^\infty u_t\, dM_t,
\]
where $c\in\mathbb{R}$ and $(u_t)_{t\in\mathbb{R}_+}$ is adapted and square-integrable, implies
\[
u_t = \mathbb{E}[D_tF\mid\mathcal{F}_t], \qquad dt\times dP\text{-a.e.}
\]
The covariance identity proved in the next lemma is a consequence of Proposition 3.2.3 and the Itô isometry (2.5.5).
Lemma 3.2.4. For all $t\in\mathbb{R}_+$ and $F\in\mathbb{D}_{2,1}([t,\infty))$ we have
\begin{align}
\mathbb{E}\big[(\mathbb{E}[F\mid\mathcal{F}_t])^2\big] &= (\mathbb{E}[F])^2 + \mathbb{E}\left[\int_0^t (\mathbb{E}[D_sF\mid\mathcal{F}_s])^2\, ds\right] \tag{3.2.4} \\
&= \mathbb{E}[F^2] - \mathbb{E}\left[\int_t^\infty (\mathbb{E}[D_sF\mid\mathcal{F}_s])^2\, ds\right]. \tag{3.2.5}
\end{align}
Proof. From the Itô isometry (2.5.4) and Relation (3.2.2) we have
\begin{align*}
\mathbb{E}\big[(\mathbb{E}[F\mid\mathcal{F}_t])^2\big]
&= \mathbb{E}\left[\left(\mathbb{E}[F] + \int_0^t \mathbb{E}[D_sF\mid\mathcal{F}_s]\, dM_s\right)^2\right] \\
&= (\mathbb{E}[F])^2 + \mathbb{E}\left[\left(\int_0^t \mathbb{E}[D_sF\mid\mathcal{F}_s]\, dM_s\right)^2\right] \\
&= (\mathbb{E}[F])^2 + \mathbb{E}\left[\int_0^t (\mathbb{E}[D_sF\mid\mathcal{F}_s])^2\, ds\right], \qquad t\in\mathbb{R}_+,
\end{align*}
which shows (3.2.4). Next, concerning (3.2.5) we have
\begin{align*}
\mathbb{E}[F^2]
&= \mathbb{E}\left[\left(\mathbb{E}[F\mid\mathcal{F}_t] + \int_t^\infty \mathbb{E}[D_sF\mid\mathcal{F}_s]\, dM_s\right)^2\right] \\
&= \mathbb{E}\big[(\mathbb{E}[F\mid\mathcal{F}_t])^2\big] + 2\,\mathbb{E}\left[\mathbb{E}[F\mid\mathcal{F}_t]\int_t^\infty \mathbb{E}[D_sF\mid\mathcal{F}_s]\, dM_s\right] + \mathbb{E}\left[\left(\int_t^\infty \mathbb{E}[D_sF\mid\mathcal{F}_s]\, dM_s\right)^2\right] \\
&= \mathbb{E}\big[(\mathbb{E}[F\mid\mathcal{F}_t])^2\big] + 2\,\mathbb{E}\left[\int_t^\infty \mathbb{E}[F\mid\mathcal{F}_t]\,\mathbb{E}[D_sF\mid\mathcal{F}_s]\, dM_s\right] + \mathbb{E}\left[\int_t^\infty (\mathbb{E}[D_sF\mid\mathcal{F}_s])^2\, ds\right] \\
&= \mathbb{E}\big[(\mathbb{E}[F\mid\mathcal{F}_t])^2\big] + \mathbb{E}\left[\int_t^\infty (\mathbb{E}[D_sF\mid\mathcal{F}_s])^2\, ds\right], \qquad t\in\mathbb{R}_+,
\end{align*}
since from (2.5.7) the Itô stochastic integral has expectation $0$, which shows (3.2.5). $\square$
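Continuing the Brownian sketch used after (3.2.1) (same assumptions on $D$), identity (3.2.4) can be checked by hand for $F = B_T^2$ and $t\le T$, using $\mathbb{E}[F\mid\mathcal{F}_t] = B_t^2 + (T - t)$ and $\mathbb{E}[D_sF\mid\mathcal{F}_s] = 2B_s$:
\[
\mathbb{E}\big[(B_t^2 + T - t)^2\big] = 3t^2 + 2t(T - t) + (T - t)^2 = T^2 + 2t^2 = (\mathbb{E}[F])^2 + \mathbb{E}\left[\int_0^t 4B_s^2\, ds\right],
\]
since $(\mathbb{E}[F])^2 = T^2$ and $\mathbb{E}\big[\int_0^t 4B_s^2\, ds\big] = 4\int_0^t s\, ds = 2t^2$.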
The next remark applies in general to any mapping sending a random variable
to the process involved in its predictable representation with respect to a
normal martingale.
Lemma 3.2.5. The operator
\[
F \longmapsto (\mathbb{E}[D_tF\mid\mathcal{F}_t])_{t\in\mathbb{R}_+}
\]
defined on $\mathcal{S}$ extends to a continuous operator from $L^2(\Omega)$ into $L^2(\Omega\times\mathbb{R}_+)$.
Proof. This follows from the bound
\[
\big\|\mathbb{E}[D_\cdot F\mid\mathcal{F}_\cdot]\big\|^2_{L^2(\Omega\times\mathbb{R}_+)} = \|F\|^2_{L^2(\Omega)} - (\mathbb{E}[F])^2 \le \|F\|^2_{L^2(\Omega)},
\]
which follows from Relation (3.2.5) with $t = 0$. $\square$
As a consequence of Lemma 3.2.5, the Clark formula can be extended in Proposition 3.2.6 below as in the discrete case, cf. Proposition 1.7.2.

Proposition 3.2.6. The Clark formula
\[
F = \mathbb{E}[F] + \int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\, dM_t
\]
can be extended to all $F$ in $L^2(\Omega)$.
Similarly, the results of Proposition 3.2.3 and Lemma 3.2.4 also extend to $F\in L^2(\Omega)$.
The Clark representation formula naturally implies a Poincaré type inequality.
Proposition 3.2.7. For all $F\in\mathrm{Dom}(D)$ we have
\[
\mathrm{Var}(F) \le \|DF\|^2_{L^2(\Omega\times\mathbb{R}_+)}.
\]
Proof. From Lemma 3.2.4 we have
\begin{align*}
\mathrm{Var}(F) &= \mathbb{E}\big[|F - \mathbb{E}[F]|^2\big] \\
&= \mathbb{E}\left[\left(\int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\, dM_t\right)^2\right] \\
&= \mathbb{E}\left[\int_0^\infty (\mathbb{E}[D_tF\mid\mathcal{F}_t])^2\, dt\right] \\
&\le \mathbb{E}\left[\int_0^\infty \mathbb{E}[|D_tF|^2\mid\mathcal{F}_t]\, dt\right] \\
&= \int_0^\infty \mathbb{E}\big[\mathbb{E}[|D_tF|^2\mid\mathcal{F}_t]\big]\, dt \\
&= \int_0^\infty \mathbb{E}\big[|D_tF|^2\big]\, dt \\
&= \mathbb{E}\left[\int_0^\infty |D_tF|^2\, dt\right],
\end{align*}
where the inequality follows from Jensen's inequality for conditional expectations, hence the conclusion. $\square$
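The inequality is attained on first order stochastic integrals. The following sketch assumes, as in the concrete models of the next chapters, that $D_t\int_0^\infty f(s)\, dM_s = f(t)$ for deterministic $f\in L^2(\mathbb{R}_+)$: taking
\[
F = \int_0^\infty f(t)\, dM_t, \qquad D_tF = f(t),
\]
the Itô isometry yields
\[
\mathrm{Var}(F) = \int_0^\infty |f(t)|^2\, dt = \|DF\|^2_{L^2(\Omega\times\mathbb{R}_+)},
\]
so that equality holds in Proposition 3.2.7.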
Since the space $\mathcal{S}$ is dense in $L^2(\Omega)$ by Definition 2.5.2, the Clark formula Assumption 3.2.1 implies the predictable representation property of Definition 2.6.1 for $(M_t)_{t\in\mathbb{R}_+}$, as shown in the next corollary.
Corollary 3.2.8. Under the Clark formula Assumption 3.2.1 the martingale $(M_t)_{t\in\mathbb{R}_+}$ has the predictable representation property.
Proof. Definition 2.6.1 is satisfied because $\mathcal{S}$ is dense in $L^2(\Omega)$ and the process $(\mathbb{E}[D_tF\mid\mathcal{F}_t])_{t\in\mathbb{R}_+}$ in (3.2.1) can be approximated by a sequence in $\mathcal{P}$ from Proposition 2.5.3. $\square$
Alternatively, one may use Proposition 2.6.2 and proceed as follows. Consider a square-integrable martingale $(X_t)_{t\in\mathbb{R}_+}$ with respect to $(\mathcal{F}_t)_{t\in\mathbb{R}_+}$ and let
\[
u_s = \mathbb{E}[D_sX_{n+1}\mid\mathcal{F}_s], \qquad n\le s < n+1, \quad n\in\mathbb{N}.
\]
Then $(u_t)_{t\in\mathbb{R}_+}$ is an adapted process such that $u\mathbf{1}_{[0,T]}\in L^2(\Omega\times\mathbb{R}_+)$ for all $T > 0$, and the Clark formula Assumption 3.2.1 and Proposition 3.2.6 imply
\begin{align*}
X_t &= \mathbb{E}[X_{n+1}\mid\mathcal{F}_t] \\
&= \mathbb{E}\left[X_0 + \int_0^{n+1} \mathbb{E}[D_sX_{n+1}\mid\mathcal{F}_s]\, dM_s \,\Big|\, \mathcal{F}_t\right] \\
&= X_0 + \int_0^t \mathbb{E}[D_sX_{n+1}\mid\mathcal{F}_s]\, dM_s \\
&= X_0 + \int_0^n \mathbb{E}[D_sX_{n+1}\mid\mathcal{F}_s]\, dM_s + \int_n^t \mathbb{E}[D_sX_{n+1}\mid\mathcal{F}_s]\, dM_s \\
&= X_n + \int_n^t \mathbb{E}[D_sX_{n+1}\mid\mathcal{F}_s]\, dM_s \\
&= X_n + \int_n^t u_s\, dM_s, \qquad n\le t < n+1, \quad n\in\mathbb{N},
\end{align*}
where we used the Chasles relation (2.5.6). Hence
\[
X_t = X_0 + \int_0^t u_s\, dM_s, \qquad t\in\mathbb{R}_+, \tag{3.2.6}
\]
and from Proposition 2.6.2, $(M_t)_{t\in\mathbb{R}_+}$ has the predictable representation property. $\square$
In particular, the Clark formula Assumption 3.2.1 and Relation (3.2.3) of
Proposition 3.2.3 imply the following proposition.
Proposition 3.2.9. For any $\mathcal{F}_T$-measurable $F\in L^2(\Omega)$ we have
\[
\mathbb{E}[D_tF\mid\mathcal{F}_T] = 0, \qquad 0\le T\le t. \tag{3.2.7}
\]
Proof. From Relation (3.2.3) we have $F = \mathbb{E}[F\mid\mathcal{F}_T]$ if and only if
\[
\int_T^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\, dM_t = 0,
\]
which implies $\mathbb{E}[D_tF\mid\mathcal{F}_t] = 0$, $t\ge T$, by the Itô isometry (2.5.4), hence (3.2.7) holds as
\[
\mathbb{E}[D_tF\mid\mathcal{F}_T] = \mathbb{E}\big[\mathbb{E}[D_tF\mid\mathcal{F}_t]\mid\mathcal{F}_T\big] = 0, \qquad t\ge T,
\]
by the tower property of conditional expectations stated in Section 9.3. $\square$
The next assumption is a stability property for the gradient operator D.
Assumption 3.2.10. (Stability property) For all $\mathcal{F}_T$-measurable $F\in\mathcal{S}$, $D_tF$ is $\mathcal{F}_T$-measurable for all $t\ge T$.
Proposition 3.2.11. Let $T > 0$. Under the stability Assumption 3.2.10, for any $\mathcal{F}_T$-measurable random variable $F\in L^2(\Omega)$ we have $F\in\mathbb{D}_{2,1}([T,\infty))$ and
\[
D_tF = 0, \qquad t\ge T.
\]
Proof. Since $F$ is $\mathcal{F}_T$-measurable, $D_tF$ is $\mathcal{F}_T$-measurable, $t\ge T$, by the stability Assumption 3.2.10, and from Proposition 3.2.9 we have
\[
D_tF = \mathbb{E}[D_tF\mid\mathcal{F}_T] = 0, \qquad 0\le T\le t. \qquad\square
\]
3.3 Divergence and Stochastic Integrals

In this section we are interested in the connection between the operator $\delta$ and the stochastic integral with respect to $(M_t)_{t\in\mathbb{R}_+}$.
Proposition 3.3.1. Under the duality Assumption 3.1.1 and the Clark formula Assumption 3.2.1, the operator $\delta$ applied to any square-integrable adapted process $(u_t)_{t\in\mathbb{R}_+}\in L^2_{\mathrm{ad}}(\Omega\times\mathbb{R}_+)$ coincides with the stochastic integral
\[
\delta(u) = \int_0^\infty u_t\, dM_t, \qquad u\in L^2_{\mathrm{ad}}(\Omega\times\mathbb{R}_+), \tag{3.3.1}
\]
of $(u_t)_{t\in\mathbb{R}_+}$ with respect to $(M_t)_{t\in\mathbb{R}_+}$, and the domain $\mathrm{Dom}(\delta)$ of $\delta$ contains $L^2_{\mathrm{ad}}(\Omega\times\mathbb{R}_+)$.
Proof. Let $u\in\mathcal{P}$ be a simple $\mathcal{F}_t$-predictable process. From the duality Assumption 3.1.1 and the fact (2.5.7) that
\[
\mathbb{E}\left[\int_0^\infty u_t\, dM_t\right] = 0,
\]
we have:
\begin{align*}
\mathbb{E}\left[F\int_0^\infty u_t\, dM_t\right]
&= \mathbb{E}[F]\,\mathbb{E}\left[\int_0^\infty u_t\, dM_t\right] + \mathbb{E}\left[(F - \mathbb{E}[F])\int_0^\infty u_t\, dM_t\right] \\
&= \mathbb{E}\left[(F - \mathbb{E}[F])\int_0^\infty u_t\, dM_t\right] \\
&= \mathbb{E}\left[\int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\, dM_t \int_0^\infty u_t\, dM_t\right] \\
&= \mathbb{E}\left[\int_0^\infty u_t\,\mathbb{E}[D_tF\mid\mathcal{F}_t]\, dt\right] \\
&= \mathbb{E}\left[\int_0^\infty \mathbb{E}[u_tD_tF\mid\mathcal{F}_t]\, dt\right] \\
&= \mathbb{E}\left[\int_0^\infty u_tD_tF\, dt\right] \\
&= \mathbb{E}\big[\langle DF, u\rangle_{L^2(\mathbb{R}_+)}\big] \\
&= \mathbb{E}[F\delta(u)],
\end{align*}
for all $F\in\mathcal{S}$, hence by density of $\mathcal{S}$ in $L^2(\Omega)$ we have
\[
\delta(u) = \int_0^\infty u_t\, dM_t
\]
for all $\mathcal{F}_t$-predictable $u\in\mathcal{P}$. In the general case, from Proposition 2.5.3 we approximate $u\in L^2_{\mathrm{ad}}(\Omega\times\mathbb{R}_+)$ by a sequence $(u^n)_{n\in\mathbb{N}}\subset\mathcal{P}$ of simple $\mathcal{F}_t$-predictable processes converging to $u$ in $L^2(\Omega\times\mathbb{R}_+)$ and use the Itô isometry (2.5.4). $\square$
As a consequence of the proof of Proposition 3.3.1 we have the isometry
\[
\|\delta(u)\|_{L^2(\Omega)} = \|u\|_{L^2(\Omega\times\mathbb{R}_+)}, \qquad u\in L^2_{\mathrm{ad}}(\Omega\times\mathbb{R}_+). \tag{3.3.2}
\]
We also have the following partial converse to Proposition 3.3.1.
Proposition 3.3.2. Assume that
i) $(M_t)_{t\in\mathbb{R}_+}$ has the predictable representation property, and
ii) the operator $\delta$ coincides with the stochastic integral with respect to $(M_t)_{t\in\mathbb{R}_+}$ on the space $L^2_{\mathrm{ad}}(\Omega\times\mathbb{R}_+)$ of square-integrable adapted processes.
Then the Clark formula Assumption 3.2.1 holds for the adjoint $D$ of $\delta$.
Proof. For all $F\in\mathrm{Dom}(D)$ and square-integrable adapted processes $u$ we have:
\begin{align*}
\mathbb{E}[(F - \mathbb{E}[F])\delta(u)] &= \mathbb{E}[F\delta(u)] \\
&= \mathbb{E}\big[\langle DF, u\rangle_{L^2(\mathbb{R}_+)}\big] \\
&= \mathbb{E}\left[\int_0^\infty u_t\,\mathbb{E}[D_tF\mid\mathcal{F}_t]\, dt\right] \\
&= \mathbb{E}\left[\int_0^\infty u_t\, dM_t \int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\, dM_t\right] \\
&= \mathbb{E}\left[\delta(u)\int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\, dM_t\right],
\end{align*}
hence
\[
F - \mathbb{E}[F] = \int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\, dM_t,
\]
since by (ii) we have
\[
\big\{\delta(u) : u\in L^2_{\mathrm{ad}}(\Omega\times\mathbb{R}_+)\big\} = \left\{\int_0^\infty u_t\, dM_t : u\in L^2_{\mathrm{ad}}(\Omega\times\mathbb{R}_+)\right\},
\]
which is dense in $\{F\in L^2(\Omega) : \mathbb{E}[F] = 0\}$ by (i) and Definition 2.6.1. $\square$
3.4 Covariance Identities
Covariance identities will be useful in the proof of concentration and deviation inequalities. The Clark formula and the Itô isometry imply the following covariance identity, which uses the $L^2$ extension of the Clark formula, cf. Proposition 3.2.6.
Proposition 3.4.1. For any $F, G\in L^2(\Omega)$ we have
\[
\mathrm{Cov}(F, G) = \mathbb{E}\left[\int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\,\mathbb{E}[D_tG\mid\mathcal{F}_t]\, dt\right]. \tag{3.4.1}
\]
Proof. We have
\begin{align*}
\mathrm{Cov}(F, G) &= \mathbb{E}[(F - \mathbb{E}[F])(G - \mathbb{E}[G])] \\
&= \mathbb{E}\left[\int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\, dM_t \int_0^\infty \mathbb{E}[D_tG\mid\mathcal{F}_t]\, dM_t\right] \\
&= \mathbb{E}\left[\int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\,\mathbb{E}[D_tG\mid\mathcal{F}_t]\, dt\right]. \qquad\square
\end{align*}
The identity (3.4.1) can be rewritten as
\begin{align*}
\mathrm{Cov}(F, G) &= \mathbb{E}\left[\int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\,\mathbb{E}[D_tG\mid\mathcal{F}_t]\, dt\right] \\
&= \mathbb{E}\left[\int_0^\infty \mathbb{E}\big[\mathbb{E}[D_tF\mid\mathcal{F}_t]\, D_tG\mid\mathcal{F}_t\big]\, dt\right] \\
&= \mathbb{E}\left[\int_0^\infty \mathbb{E}[D_tF\mid\mathcal{F}_t]\, D_tG\, dt\right],
\end{align*}
provided $G\in\mathrm{Dom}(D)$.
As is well known, if $X$ is a real random variable and $f$, $g$ are monotone functions then $f(X)$ and $g(X)$ are non-negatively correlated. Lemma 3.4.2, which is an immediate consequence of (3.4.1), provides an analog of this result for normal martingales, replacing the ordinary derivative with the adapted process $(\mathbb{E}[D_tF\mid\mathcal{F}_t])_{t\in\mathbb{R}_+}$.
Lemma 3.4.2. Let $F, G\in L^2(\Omega)$ such that
\[
\mathbb{E}[D_tF\mid\mathcal{F}_t]\cdot\mathbb{E}[D_tG\mid\mathcal{F}_t] \ge 0, \qquad dt\times dP\text{-a.e.}
\]
Then $F$ and $G$ are non-negatively correlated:
\[
\mathrm{Cov}(F, G) \ge 0.
\]
If $G\in\mathrm{Dom}(D)$, resp. $F, G\in\mathrm{Dom}(D)$, the above condition can be replaced by
\[
\mathbb{E}[D_tF\mid\mathcal{F}_t] \ge 0 \ \text{ and } \ D_tG \ge 0, \qquad dt\times dP\text{-a.e.},
\]
resp.
\[
D_tF \ge 0 \ \text{ and } \ D_tG \ge 0, \qquad dt\times dP\text{-a.e.}
\]
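As an illustration of (3.4.1), in the Brownian sketch used above (with $D_tB_T = \mathbf{1}_{[0,T]}(t)$ as a standing assumption), the choice $F = B_T$, $G = B_T^2$ gives
\[
\mathrm{Cov}(B_T, B_T^2) = \mathbb{E}\left[\int_0^T 1\cdot 2B_t\, dt\right] = 2\int_0^T \mathbb{E}[B_t]\, dt = 0,
\]
in agreement with the vanishing of the third moment $\mathbb{E}[B_T^3] = 0$.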
Iterated versions of Lemma 3.2.4 can also be proved. Let
\[
\Delta_n = \big\{(t_1, \ldots, t_n)\in\mathbb{R}_+^n : 0\le t_1 < \cdots < t_n\big\},
\]
and assume further that:
Assumption 3.4.3. (Domain condition) For all $F\in\mathcal{S}$ we have
\[
D_{t_n}\cdots D_{t_1}F \in \mathbb{D}_{2,1}([t_n,\infty)), \qquad \text{a.e. } (t_1, \ldots, t_n)\in\Delta_n.
\]
We denote by $\mathbb{D}_{2,k}(\Delta_k)$ the $L^2$ domain of $D^k$, i.e. the completion of $\mathcal{S}$ under the norm
\[
\|F\|^2_{\mathbb{D}_{2,k}(\Delta_k)} = \mathbb{E}\big[F^2\big] + \mathbb{E}\left[\int_{\Delta_k} |D_{t_k}\cdots D_{t_1}F|^2\, dt_1\cdots dt_k\right].
\]
Note the inclusion $\mathbb{D}_{2,k}(\Delta_k)\subset\mathbb{D}_{2,1}(\Delta_k)$, $k\ge 1$.
Next we prove an extension of the covariance identity of [56], with a shortened
proof.
Theorem 3.4.4. Let $n\in\mathbb{N}$ and $F, G\in\bigcap_{k=1}^{n+1}\mathbb{D}_{2,k}(\Delta_k)$. We have
\begin{align}
\mathrm{Cov}(F, G) &= \sum_{k=1}^n (-1)^{k+1}\,\mathbb{E}\left[\int_{\Delta_k} (D_{t_k}\cdots D_{t_1}F)(D_{t_k}\cdots D_{t_1}G)\, dt_1\cdots dt_k\right] \tag{3.4.2} \\
&\quad + (-1)^n\,\mathbb{E}\left[\int_{\Delta_{n+1}} D_{t_{n+1}}\cdots D_{t_1}F\;\mathbb{E}\big[D_{t_{n+1}}\cdots D_{t_1}G\mid\mathcal{F}_{t_{n+1}}\big]\, dt_1\cdots dt_{n+1}\right]. \notag
\end{align}
Proof. By polarization we may take $F = G$. For $n = 0$, (3.4.2) is a consequence of the Clark formula. Let $n\ge 1$. Applying Lemma 3.2.4 to $D_{t_n}\cdots D_{t_1}F$ with $t = t_n$ and $ds = dt_{n+1}$, and integrating on $(t_1, \ldots, t_n)\in\Delta_n$, we obtain
\begin{align*}
\mathbb{E}\left[\int_{\Delta_n} \big(\mathbb{E}[D_{t_n}\cdots D_{t_1}F\mid\mathcal{F}_{t_n}]\big)^2\, dt_1\cdots dt_n\right]
&= \mathbb{E}\left[\int_{\Delta_n} |D_{t_n}\cdots D_{t_1}F|^2\, dt_1\cdots dt_n\right] \\
&\quad - \mathbb{E}\left[\int_{\Delta_{n+1}} \big(\mathbb{E}\big[D_{t_{n+1}}\cdots D_{t_1}F\mid\mathcal{F}_{t_{n+1}}\big]\big)^2\, dt_1\cdots dt_{n+1}\right],
\end{align*}
which concludes the proof by induction. $\square$
The variance inequality
\[
\sum_{k=1}^{2n} (-1)^{k+1}\|D^kF\|^2_{L^2(\Omega\times\Delta_k)} \le \mathrm{Var}(F) \le \sum_{k=1}^{2n-1} (-1)^{k+1}\|D^kF\|^2_{L^2(\Omega\times\Delta_k)},
\]
for $F\in\bigcap_{k=1}^{2n}\mathbb{D}_{2,k}(\Delta_k)$, is a consequence of Theorem 3.4.4, and extends (2.15) in [56]. It also recovers the Poincaré inequality Proposition 3.2.7 when $n = 1$.
3.5 Logarithmic Sobolev Inequalities
The logarithmic Sobolev inequalities on Gaussian space provide an infinite dimensional analog of Sobolev inequalities, cf. e.g. [77]. In this section logarithmic Sobolev inequalities for normal martingales are proved as an application of the Itô and Clark formulas. Recall that the entropy of a sufficiently integrable random variable $F > 0$ is defined by
\[
\mathrm{Ent}[F] = \mathbb{E}[F\log F] - \mathbb{E}[F]\log\mathbb{E}[F].
\]
Proposition 3.5.1. Let $F\in\mathrm{Dom}(D)$ be lower bounded with $F > \eta$ a.s. for some $\eta > 0$. We have
\[
\mathrm{Ent}[F] \le \frac{1}{2}\,\mathbb{E}\left[\frac{1}{F}\int_0^\infty \big(2 - \mathbf{1}_{\{\phi_t = 0\}}\big)|D_tF|^2\, dt\right]. \tag{3.5.1}
\]
Proof. Let us assume that $F$ is bounded and $\mathcal{F}_T$-measurable, and let
\[
X_t = \mathbb{E}[F\mid\mathcal{F}_t] = X_0 + \int_0^t u_s\, dM_s, \qquad t\in\mathbb{R}_+,
\]
with $u_s = \mathbb{E}[D_sF\mid\mathcal{F}_s]$, $s\in\mathbb{R}_+$. The change of variable formula Proposition 2.12.1 applied to $f(x) = x\log x$ shows that
\begin{align*}
F\log F - \mathbb{E}[F]\log\mathbb{E}[F] &= f(X_T) - f(X_0) \\
&= \int_0^T \frac{f(X_{t^-} + \phi_tu_t) - f(X_{t^-})}{\phi_t}\, dM_t + \int_0^T i_tu_tf'(X_t)\, dM_t \\
&\quad + \int_0^T \frac{j_t}{\phi_t^2}\,\Psi(X_{t^-}, \phi_tu_t)\, dt + \frac{1}{2}\int_0^T i_t\frac{u_t^2}{X_t}\, dt,
\end{align*}
with the convention $0/0 = 0$, and
\[
\Psi(u, v) = (u + v)\log(u + v) - u\log u - v(1 + \log u), \qquad u, u + v > 0.
\]
Using the inequality
\[
\Psi(u, v) \le v^2/u, \qquad u > 0, \quad u + v > 0,
\]
and applying Jensen's inequality (9.3.1) to the convex function $(u, v)\mapsto v^2/u$ on $(0,\infty)\times\mathbb{R}$ we obtain
\begin{align*}
\mathrm{Ent}[F] &= \mathbb{E}\left[\int_0^T \frac{j_t}{\phi_t^2}\,\Psi(X_{t^-}, \phi_tu_t)\, dt + \frac{1}{2}\int_0^T i_t\frac{u_t^2}{X_t}\, dt\right] \\
&\le \frac{1}{2}\,\mathbb{E}\left[\int_0^T (2 - i_t)\frac{u_t^2}{X_t}\, dt\right] \\
&\le \frac{1}{2}\,\mathbb{E}\left[\int_0^T \mathbb{E}\left[(2 - i_t)\frac{|D_tF|^2}{F}\,\Big|\,\mathcal{F}_t\right] dt\right] \\
&= \frac{1}{2}\,\mathbb{E}\left[\frac{1}{F}\int_0^T (2 - i_t)|D_tF|^2\, dt\right].
\end{align*}
Finally we apply the above to the approximating sequence $F_n = F\wedge n$, $n\in\mathbb{N}$, and let $n$ go to infinity. $\square$
If $\phi_t = 0$, i.e. $i_t = 1$, $t\in\mathbb{R}_+$, then $(M_t)_{t\in\mathbb{R}_+}$ is a Brownian motion and we obtain the classical modified Sobolev inequality
\[
\mathrm{Ent}[F] \le \frac{1}{2}\,\mathbb{E}\left[\frac{1}{F}\|DF\|^2_{L^2([0,T])}\right]. \tag{3.5.2}
\]
If $\phi_t = 1$, $t\in\mathbb{R}_+$, then $i_t = 0$, $t\in\mathbb{R}_+$, $(M_t)_{t\in\mathbb{R}_+}$ is a standard compensated Poisson process and we obtain the modified Sobolev inequality
\[
\mathrm{Ent}[F] \le \mathbb{E}\left[\frac{1}{F}\|DF\|^2_{L^2([0,T])}\right]. \tag{3.5.3}
\]
More generally, the logarithmic Sobolev inequality (3.5.2) can be proved for any gradient operator $D$ satisfying both the derivation rule Assumption 3.6.1 below and the Clark formula Assumption 3.2.1; see Chapter 7 for another example on the Poisson space.
3.6 Deviation Inequalities
In this section we assume that D is a gradient operator satisfying both the
Clark formula Assumption 3.2.1 and the derivation rule Assumption 3.6.1
below. Examples of such operators will be provided in the Wiener and Poisson
cases in Chapters 5 and 7.
Assumption 3.6.1. (Derivation rule) For all $F, G\in\mathcal{S}$ we have
\[
D_t(FG) = FD_tG + GD_tF, \qquad t\in\mathbb{R}_+. \tag{3.6.1}
\]
Note that by polynomial approximation, Relation (3.6.1) extends as
\[
D_tf(F) = f'(F)D_tF, \qquad t\in\mathbb{R}_+, \tag{3.6.2}
\]
for $f\in\mathcal{C}^1_b(\mathbb{R})$.
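The polynomial step behind (3.6.2) is the induction sketched below, which uses only (3.6.1):
\[
D_tF^n = D_t(F\cdot F^{n-1}) = F^{n-1}D_tF + FD_tF^{n-1} = nF^{n-1}D_tF, \qquad n\ge 2,
\]
hence $D_tp(F) = p'(F)D_tF$ for every polynomial $p$, and (3.6.2) follows by approximation of $f\in\mathcal{C}^1_b(\mathbb{R})$ by polynomials.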
Under the derivation rule Assumption 3.6.1 we get the following deviation
bound.
Proposition 3.6.2. Let $F\in\mathrm{Dom}(D)$. If $\|DF\|_{L^2(\mathbb{R}_+, L^\infty(\Omega))}\le C$ for some $C > 0$, then
\[
P(F - \mathbb{E}[F]\ge x) \le \exp\left(-\frac{x^2}{2C\|DF\|_{L^2(\mathbb{R}_+, L^\infty(\Omega))}}\right), \qquad x\ge 0. \tag{3.6.3}
\]
In particular we have
\[
\mathbb{E}\big[e^{\alpha F^2}\big] < \infty, \qquad \alpha < \frac{1}{2C\|DF\|_{L^2(\mathbb{R}_+, L^\infty(\Omega))}}. \tag{3.6.4}
\]
Proof. We first consider a bounded random variable $F\in\mathrm{Dom}(D)$. The general case follows by approximating $F\in\mathrm{Dom}(D)$ by the sequence $(\max(-n, \min(F, n)))_{n\ge 1}$. Let
\[
\eta_F(t) = \mathbb{E}[D_tF\mid\mathcal{F}_t], \qquad t\in[0, T].
\]
Since $F$ is bounded, the derivation rule (3.6.2) shows that
\[
D_te^{sF} = se^{sF}D_tF, \qquad s, t\in\mathbb{R}_+,
\]
hence, assuming first that $\mathbb{E}[F] = 0$, we get
\begin{align*}
\mathbb{E}[Fe^{sF}] &= \mathbb{E}\left[\int_0^T D_ue^{sF}\,\eta_F(u)\, du\right] \\
&= s\,\mathbb{E}\left[e^{sF}\int_0^T D_uF\,\eta_F(u)\, du\right] \\
&\le s\,\mathbb{E}\big[e^{sF}\|DF\|_H\|\eta_F\|_H\big] \\
&\le s\,\mathbb{E}\big[e^{sF}\big]\,\|\eta_F\|_{L^\infty(\Omega, H)}\,\|DF\|_{L^2(\mathbb{R}_+, L^\infty(\Omega))} \\
&\le sC\,\mathbb{E}\big[e^{sF}\big]\,\|DF\|_{L^2(\mathbb{R}_+, L^\infty(\Omega))},
\end{align*}
where $H = L^2(\mathbb{R}_+)$. In the general case, letting
\[
L(s) = \mathbb{E}[\exp(s(F - \mathbb{E}[F]))], \qquad s\in\mathbb{R}_+,
\]
we obtain:
\begin{align*}
\log\big(\mathbb{E}[\exp(t(F - \mathbb{E}[F]))]\big) &= \int_0^t \frac{L'(s)}{L(s)}\, ds \\
&= \int_0^t \frac{\mathbb{E}[(F - \mathbb{E}[F])\exp(s(F - \mathbb{E}[F]))]}{\mathbb{E}[\exp(s(F - \mathbb{E}[F]))]}\, ds \\
&\le \frac{1}{2}\, t^2C\|DF\|_{L^2(\mathbb{R}_+, L^\infty(\Omega))}, \qquad t\in\mathbb{R}_+.
\end{align*}
We now have, for all $x\in\mathbb{R}_+$ and $t\in\mathbb{R}_+$:
\begin{align*}
P(F - \mathbb{E}[F]\ge x) &\le e^{-tx}\,\mathbb{E}[\exp(t(F - \mathbb{E}[F]))] \\
&\le \exp\left(\frac{1}{2}\, t^2C\|DF\|_{L^2(\mathbb{R}_+, L^\infty(\Omega))} - tx\right),
\end{align*}
which yields (3.6.3) after minimization in $t\in\mathbb{R}_+$. The proof of (3.6.4) is completed as in Proposition 1.11.3. $\square$
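As a sketch of (3.6.3) in the Brownian model (again assuming $D_tB_T = \mathbf{1}_{[0,T]}(t)$), take $F = B_T$, so that
\[
\|DF\|_{L^2(\mathbb{R}_+, L^\infty(\Omega))} = \left(\int_0^T 1\, dt\right)^{1/2} = \sqrt{T},
\]
and with $C = \sqrt{T}$ the bound (3.6.3) recovers the Gaussian tail estimate
\[
P(B_T\ge x) \le \exp\left(-\frac{x^2}{2T}\right), \qquad x\ge 0.
\]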
3.7 Markovian Representation
This section presents a predictable representation method that can be used to compute $\mathbb{E}[D_tF\mid\mathcal{F}_t]$, based on the Itô formula and the Markov property, cf. Section 9.6 in the appendix. It can be applied to Delta hedging in mathematical finance, cf. Proposition 8.2.2 in Chapter 8, and [120].
Let $(X_t)_{t\in[0,T]}$ be an $\mathbb{R}^n$-valued Markov (not necessarily time homogeneous) process defined on $\Omega$, generating a filtration $(\mathcal{F}_t)_{t\in\mathbb{R}_+}$ and satisfying a change of variable formula of the form
\[
f(X_t) = f(X_0) + \int_0^t L_sf(X_s)\, dM_s + \int_0^t U_sf(X_s)\, ds, \qquad t\in[0, T], \tag{3.7.1}
\]
where $L_s$, $U_s$ are operators defined on $f\in\mathcal{C}^2(\mathbb{R}^n)$.
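For instance (a sketch not taken from the text, assuming the Brownian case $\phi\equiv 0$, so that $dM_t = dB_t$, with $n = 1$ and $dX_t = \sigma(t, X_t)\, dB_t + \mu(t, X_t)\, dt$), the classical Itô formula gives (3.7.1) with
\[
L_sf(x) = \sigma(s, x)f'(x), \qquad U_sf(x) = \mu(s, x)f'(x) + \frac{1}{2}\sigma^2(s, x)f''(x).
\]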
Let the (non homogeneous) semi-group $(P_{s,t})_{0\le s\le t\le T}$ associated to $(X_t)_{t\in[0,T]}$ be defined on $\mathcal{C}^2_b(\mathbb{R}^n)$ functions by
\[
P_{s,t}f(X_s) = \mathbb{E}[f(X_t)\mid X_s] = \mathbb{E}[f(X_t)\mid\mathcal{F}_s], \qquad 0\le s\le t\le T,
\]
with
\[
P_{s,t}P_{t,u} = P_{s,u}, \qquad 0\le s\le t\le u\le T.
\]
Proposition 3.7.1. For any $f\in\mathcal{C}^2_b(\mathbb{R}^n)$, the process $(P_{t,T}f(X_t))_{t\in[0,T]}$ is an $\mathcal{F}_t$-martingale.
Proof. By the tower property of conditional expectations, cf. Section 9.3, we have
\begin{align*}
\mathbb{E}[P_{t,T}f(X_t)\mid\mathcal{F}_s] &= \mathbb{E}\big[\mathbb{E}[f(X_T)\mid\mathcal{F}_t]\mid\mathcal{F}_s\big] \\
&= \mathbb{E}[f(X_T)\mid\mathcal{F}_s] \\
&= P_{s,T}f(X_s), \qquad 0\le s\le t\le T. \qquad\square
\end{align*}
Next we use the above framework with application to the Clark formula. When $(\phi_t)_{t\in[0,T]}$ is random, the probabilistic interpretation of $D$ is unknown in general; nevertheless it is possible to explicitly compute the predictable representation of $f(X_T)$ using (3.7.1) and the Markov property.
Lemma 3.7.2. Let $f\in\mathcal{C}^2_b(\mathbb{R}^n)$. We have
\[
\mathbb{E}[D_tf(X_T)\mid\mathcal{F}_t] = (L_t(P_{t,T}f))(X_t), \qquad t\in[0, T]. \tag{3.7.2}
\]
Proof. We apply the change of variable formula (3.7.1) to $t\mapsto P_{t,T}f(X_t) = \mathbb{E}[f(X_T)\mid\mathcal{F}_t]$, since $P_{t,T}f$ is $\mathcal{C}^2$. Using the fact that the finite variation term vanishes since $(P_{t,T}f(X_t))_{t\in[0,T]}$ is a martingale (see e.g. Corollary 1, p. 64 of [119]), we obtain:
\[
P_{t,T}f(X_t) = P_{0,T}f(X_0) + \int_0^t (L_s(P_{s,T}f))(X_s)\, dM_s, \qquad t\in[0, T],
\]
with $P_{0,T}f(X_0) = \mathbb{E}[f(X_T)]$. Letting $t = T$, we obtain (3.7.2) by uniqueness of the representation (4.2.2) applied to $F = f(X_T)$. $\square$
In practice we can use Proposition 3.2.6 to extend $(\mathbb{E}[D_tf(X_T)\mid\mathcal{F}_t])_{t\in[0,T]}$ to a less regular function $f : \mathbb{R}^n\to\mathbb{R}$.
As an example, if $\phi_t$ is written as $\phi_t = \phi(t, M_t)$, and
\[
dS_t = \sigma(t, S_t)\, dM_t + \mu(t, S_t)\, dt,
\]
we can apply Proposition 2.12.2, with $(X_t)_{t\in[0,T]} = ((S_t, M_t))_{t\in[0,T]}$ and
\begin{align*}
L_tf(S_t, M_t) &= i_t\sigma(t, S_t)\partial_1f(S_t, M_t) + i_t\partial_2f(S_t, M_t) \\
&\quad + \frac{j_t}{\phi(t, M_t)}\big(f(S_t + \phi(t, M_t)\sigma(t, S_t), M_t + \phi(t, M_t)) - f(S_t, M_t)\big),
\end{align*}
where $j_t = \mathbf{1}_{\{\phi_t\neq 0\}}$, $t\in\mathbb{R}_+$, since the eventual jump of $(M_t)_{t\in[0,T]}$ at time $t$ is $\phi(t, M_t)$. Here, $\partial_1$, resp. $\partial_2$, denotes the partial derivative with respect to the first, resp. second, variable. Hence
\begin{align*}
\mathbb{E}[D_tf(S_T, M_T)\mid\mathcal{F}_t] &= i_t\sigma(t, S_t)(\partial_1P_{t,T}f)(S_t, M_t) + i_t(\partial_2P_{t,T}f)(S_t, M_t) \\
&\quad + \frac{j_t}{\phi(t, M_t)}(P_{t,T}f)(S_t + \phi(t, M_t)\sigma(t, S_t), M_t + \phi(t, M_t)) \\
&\quad - \frac{j_t}{\phi(t, M_t)}(P_{t,T}f)(S_t, M_t).
\end{align*}
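In the purely Brownian regime ($\phi\equiv 0$, hence $i_t = 1$ and $j_t = 0$), and for $f$ depending only on its first variable, the above reduces to the sketch
\[
\mathbb{E}[D_tf(S_T)\mid\mathcal{F}_t] = \sigma(t, S_t)\,(\partial_1P_{t,T}f)(S_t),
\]
which identifies the integrand of the predictable representation of $f(S_T)$ with the Delta of mathematical finance, cf. Chapter 8.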
When $(\phi_t)_{t\in\mathbb{R}_+}$ and $\sigma(t, x) = x\sigma_t$ are deterministic functions of time, and $\mu(t, x) = 0$, $t\in\mathbb{R}_+$, the semi-group $P_{t,T}$ can be explicitly computed as follows.
In this case, from (2.10.4), the martingale $(M_t)_{t\in\mathbb{R}_+}$ can be represented as
\[
dM_t = i_t\, dB_t + \phi_t(dN_t - \lambda_t\, dt), \qquad t\in\mathbb{R}_+, \quad M_0 = 0,
\]
with $\lambda_t = j_t/\phi_t^2$, $t\in\mathbb{R}_+$, where $(N_t)_{t\in\mathbb{R}_+}$ is an independent Poisson process with intensity $\lambda_t$, $t\in\mathbb{R}_+$. Let
\[
\Gamma_t(T) = \int_t^T \mathbf{1}_{\{\phi_s = 0\}}\sigma_s^2\, ds, \qquad 0\le t\le T,
\]
denote the variance of $\int_t^T i_s\sigma_s\, dB_s = \int_t^T \mathbf{1}_{\{\phi_s = 0\}}\sigma_s\, dB_s$, $0\le t\le T$, and let
\[
\lambda_t(T) = \int_t^T \lambda_s\, ds, \qquad 0\le t\le T,
\]
denote the intensity parameter of the Poisson random variable $N_T - N_t$.
Proposition 3.7.3. We have, for $f\in\mathcal{C}_b(\mathbb{R})$,
\[
P_{t,T}f(x) = \frac{1}{\sqrt{2\pi}}\sum_{k=0}^\infty \frac{e^{-\lambda_t(T)}}{k!}\int_{-\infty}^\infty e^{-t_0^2/2}\int_{[t,T]^k} \lambda_{t_1}\cdots\lambda_{t_k}\, f\left(xe^{t_0\sqrt{\Gamma_t(T)} - \Gamma_t(T)/2 - \int_t^T\lambda_s\phi_s\sigma_s\, ds}\prod_{i=1}^k (1 + \phi_{t_i}\sigma_{t_i})\right) dt_1\cdots dt_k\, dt_0.
\]
Proof. We have $P_{t,T}f(x) = \mathbb{E}[f(S_T)\mid S_t = x] = \mathbb{E}[f(S^x_{t,T})]$, and
\[
P_{t,T}f(x) = \exp(-\lambda_t(T))\sum_{k=0}^\infty \frac{(\lambda_t(T))^k}{k!}\,\mathbb{E}\big[f(S^x_{t,T})\mid N_T - N_t = k\big],
\]
$k\in\mathbb{N}$. It can be shown (see e.g. Proposition 6.1.8 below) that the time changed process $\big(N_{\lambda_t^{-1}(s)} - N_t\big)_{s\in\mathbb{R}_+}$ is a standard Poisson process with jump times $(\tilde{T}_k)_{k\ge 1} = (\lambda_t(T_{k+N_t}))_{k\ge 1}$. Hence from Proposition 2.3.7, conditionally to $N_T - N_t = k$, the jump times $(\tilde{T}_1, \ldots, \tilde{T}_k)$ have the law
\[
\frac{k!}{(\lambda_t(T))^k}\,\mathbf{1}_{\{0 < t_1 < \cdots < t_k < \lambda_t(T)\}}\, dt_1\cdots dt_k
\]
over $[0, \lambda_t(T)]^k$. Consequently, conditionally to $N_T - N_t = k$, the $k$ first jump times $(T_1, \ldots, T_k)$ of $(N_s)_{s\in[t,T]}$ have the distribution
\[
\frac{k!}{(\lambda_t(T))^k}\,\mathbf{1}_{\{t < t_1 < \cdots < t_k < T\}}\,\lambda_{t_1}\cdots\lambda_{t_k}\, dt_1\cdots dt_k.
\]
We then use the identity in law between $S^x_{t,T}$ and
\[
xX_{t,T}\exp\left(-\int_t^T \lambda_s\phi_s\sigma_s\, ds\right)\prod_{k=1+N_t}^{N_T} (1 + \phi_{T_k}\sigma_{T_k}),
\]
where $X_{t,T}$ has same distribution as
\[
\exp\left(W\sqrt{\Gamma_t(T)} - \Gamma_t(T)/2\right),
\]
and $W$ a standard Gaussian random variable, independent of $(N_t)_{t\in[0,T]}$, which holds because $(B_t)_{t\in[0,T]}$ is a standard Brownian motion, independent of $(N_t)_{t\in[0,T]}$. $\square$
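As a consistency check, under the assumption $\phi\equiv 0$ (so that $\lambda\equiv 0$ and only the $k = 0$ term survives), Proposition 3.7.3 reduces to the lognormal expectation
\[
P_{t,T}f(x) = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-t_0^2/2}\, f\Big(xe^{t_0\sqrt{\Gamma_t(T)} - \Gamma_t(T)/2}\Big)\, dt_0, \qquad \Gamma_t(T) = \int_t^T \sigma_s^2\, ds,
\]
which is the familiar Gaussian pricing formula of the Black-Scholes setting.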
3.8 Notes and References
Several examples of gradient operators satisfying the hypotheses of this chapter will be provided in Chapters 4, 5, 6, and 7, on the Wiener and Poisson space and also on Riemannian path space. The Itô formula has been used for the proof of logarithmic Sobolev inequalities in [4], [6], [151] for the Poisson process, and in [22] on Riemannian path space, and Proposition 3.5.1 can be found in [111]. The probabilistic interpretations of $D$ as a derivation operator and as a finite difference operator have been studied in [116] and will be presented in more detail in the sequel. The extension of the Clark formula presented in Proposition 3.2.6 is related to the approach of [88] and [142]. The covariance identity (3.4.1) can be found in Proposition 2.1 of [59]. See also [7] for a unified presentation of the Malliavin calculus based on the Fock space.