
Malliavin Calculus in Finance

Till Schröter


St Hugh's College, University of Oxford. A Special Topic Essay, Trinity 2007.

Contents
1 Malliavin Calculus
1.1 The Wiener Chaos decomposition
1.2 The Malliavin Derivative
1.3 The Divergence Operator

2 Some Applications of Malliavin Calculus to Finance
2.1 Monte Carlo Simulations
2.1.1 Greeks
2.1.2 Conditional Expectation
2.2 Hedging
2.3 Insider Trading

Bibliography

Chapter 1 Malliavin Calculus


In this chapter we review the basic ideas of Malliavin Calculus. Malliavin Calculus is an infinite-dimensional differential calculus on the Wiener space. The theory originated from attempts to describe the probability law of functionals on the Wiener space and was initiated by Paul Malliavin (see [Nua06] and the references therein for the development of the theory). In the first section we develop the analysis on the Wiener space. Its first element is the Wiener Chaos Decomposition, a decomposition of an $L^2$-space that, in a less abstract setting, yields a fundamental representation theorem for square integrable random variables. In the second section we introduce the Malliavin derivative and deduce some basic properties of the derivative that are useful when the theory is later applied in a financial setting. The third and final section of this chapter introduces the adjoint of the derivative operator. This adjoint operator turns out to be an integral that, in the case of adapted processes as integrands, coincides with the well-known Ito integral. The adjoint operator yields a stochastic integration by parts formula that turns out to be extremely useful in applying the theory to finance. In this introduction to Malliavin Calculus we follow [Nua06] and [Øks96].

1.1 The Wiener Chaos decomposition

For a real, separable Hilbert space $H$ we define:

Definition 1.1.1. A stochastic process $W = \{W(h), h \in H\}$ on a probability space $(\Omega, \mathcal{F}, P)$ is an isonormal Gaussian process if $W$ is centred and Gaussian such that
$$E(W(h)W(g)) = \langle h, g \rangle_H \qquad \text{for all } h, g \in H.$$

Here the mapping $h \mapsto W(h)$ is a linear mapping, as the above relationship between scalar products yields $E\big(W(h+g) - W(h) - W(g)\big)^2 = 0$. Therefore the mapping provides a linear isometry between $H$ and some space $H_1$ that contains the isonormal random variables, where $H_1$ is a subspace of $L^2(\Omega, \mathcal{F}, P)$ (in short $L^2(\mathcal{F})$), due to the observation that
$$\|W(h)\|^2_{L^2(P)} = E\big(W(h)^2\big) = \|h\|^2_H.$$
We call $H_1$ the space of Gaussian zero-mean random variables. As we will see later on, given a certain structure of $H$, $H_1$ is in this case the space induced by the Ito integrals $\{\int h_t\, dW_t,\ h \in H\}$.

Denoting by $\mathcal{G}$ the $\sigma$-field generated by the random variables $\{W(h), h \in H\}$, the goal of this section is to find a decomposition of $L^2(\Omega, \mathcal{G}, P)$ (in short $L^2(\mathcal{G})$). To do this we give some results concerning the so-called Hermite polynomials. These polynomials are given by
$$H_n(x) := \frac{(-1)^n}{n!}\, e^{x^2/2}\, \frac{d^n}{dx^n}\Big(e^{-x^2/2}\Big), \qquad H_0(x) = 1.$$
The Hermite polynomials are the coefficients of the power expansion in $t$ of the function $F(t,x) = \exp\big(tx - \frac{t^2}{2}\big)$, as can easily be seen by rewriting $F(t,x) = \exp\big(\frac{x^2}{2} - \frac{1}{2}(x-t)^2\big)$ and expanding the function around $t = 0$. The power expansion combined with some particular properties of $F$, i.e.
$$\frac{\partial F}{\partial x} = tF, \qquad \frac{\partial F}{\partial t} = (x-t)F, \qquad F(-t,-x) = F(t,x),$$
leads to the corresponding behaviour of the Hermite polynomials for $n \geq 1$:
$$H_n'(x) = H_{n-1}(x), \tag{1.1}$$
$$(n+1)H_{n+1}(x) = xH_n(x) - H_{n-1}(x), \tag{1.2}$$
$$H_n(-x) = (-1)^n H_n(x). \tag{1.3}$$
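As a quick numerical sanity check (not part of the original essay), the recurrence (1.2) can be used to generate these normalised Hermite polynomials; the helper names `hermite` and `evaluate` below are ad hoc choices for this sketch.

```python
from fractions import Fraction

def hermite(n):
    """Coefficient lists (ascending powers of x) of the normalised Hermite
    polynomials used here, built from the recurrence
    (k+1) H_{k+1}(x) = x H_k(x) - H_{k-1}(x), with H_0 = 1, H_1 = x."""
    polys = [[Fraction(1)], [Fraction(0), Fraction(1)]]
    for k in range(1, n):
        prev, cur = polys[k - 1], polys[k]
        nxt = [Fraction(0)] + cur[:]      # multiply H_k by x (shift powers up)
        for i, c in enumerate(prev):
            nxt[i] -= c                   # subtract H_{k-1}
        polys.append([c / (k + 1) for c in nxt])
    return polys[n]

def evaluate(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

# H_2(x) = (x^2 - 1)/2 and H_3(x) = (x^3 - 3x)/6, matching the Taylor
# coefficients in t of F(t, x) = exp(tx - t^2/2).
print(hermite(2))
print(evaluate(hermite(3), 2))
```

The coefficients agree with a direct expansion of the generating function, which is the content of the remark preceding (1.1).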

We are able to establish the following orthogonality relationship between the Hermite polynomials:

Lemma 1.1.2. Let $X$, $Y$ be two random variables with joint Gaussian distribution such that $E(X) = E(Y) = 0$ and $E(X^2) = E(Y^2) = 1$. Then for all $n, m \geq 0$ we have
$$E\big(H_n(X)H_m(Y)\big) = \begin{cases} 0 & \text{if } n \neq m, \\ \frac{1}{n!}\big(E(XY)\big)^n & \text{if } n = m. \end{cases} \tag{1.4}$$

Proof. For all $s, t \in \mathbb{R}$ the multivariate moment generating function leads to the equality
$$E\Big[\exp\Big(sX - \frac{s^2}{2}\Big)\exp\Big(tY - \frac{t^2}{2}\Big)\Big] = \exp\big(st\, E(XY)\big).$$
Taking on both sides the partial derivative $\frac{\partial^{n+m}}{\partial s^n \partial t^m}$ at $s = t = 0$, and taking into account that
$$\frac{\partial^n}{\partial s^n} F(s, X) = n!\, H_n(X) + \sum_{i=1}^{\infty} \frac{(n+i)!}{i!}\, s^i H_{n+i}(X),$$
yields
$$E\big(n!\, m!\, H_n(X)H_m(Y)\big) = \begin{cases} 0 & \text{if } n \neq m, \\ n!\,\big(E(XY)\big)^n & \text{if } n = m. \end{cases}$$
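Relation (1.4) is easy to probe by simulation, a check that is not in the original essay: for jointly Gaussian $X, Y$ with correlation $\rho$, the product $H_2(X)H_2(Y)$ should average to $\rho^2/2!$, while products of different orders should average to zero. A minimal sketch, assuming the normalised Hermite recurrence from (1.2):

```python
import random, math

def hermite_val(n, x):
    # (k+1) H_{k+1}(x) = x H_k(x) - H_{k-1}(x), H_0 = 1, H_1 = x
    if n == 0:
        return 1.0
    h_prev, h = 1.0, x
    for k in range(1, n):
        h_prev, h = h, (x * h - h_prev) / (k + 1)
    return h

rng = random.Random(0)
rho = 0.6
n_samples = 200_000
acc22 = acc12 = 0.0
for _ in range(n_samples):
    z1, z2 = rng.gauss(0, 1), rng.gauss(0, 1)
    x = z1
    y = rho * z1 + math.sqrt(1 - rho ** 2) * z2   # corr(X, Y) = rho
    acc22 += hermite_val(2, x) * hermite_val(2, y)
    acc12 += hermite_val(1, x) * hermite_val(2, y)

est22 = acc22 / n_samples   # should be near rho^2 / 2! = 0.18
est12 = acc12 / n_samples   # different orders: should be near 0
print(est22, est12)
```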

Lemma 1.1.3. The random variables $\{e^{W(h)}, h \in H\}$ form a total subset¹ of $L^2(\mathcal{G})$.

Proof. We choose $X \in L^2(\mathcal{G})$ such that $E(Xe^{W(h)}) = 0$ for all $h \in H$. By the linearity of $h \mapsto W(h)$, for such an $X$ it holds that
$$E\Big[X \exp\Big(\sum_{i=1}^m t_i W(h_i)\Big)\Big] = 0, \qquad t_i \in \mathbb{R},\ h_i \in H,\ m \geq 1.$$
This equation states that the Laplace transform² of the measure $\nu$ is zero, where $\nu$ is given by $\nu(B) = E\big(X\, 1_B(W(h_1), \ldots, W(h_m))\big)$ for a Borel set $B$ in $\mathbb{R}^m$. As the transform is zero, the measure itself must be zero, and hence $E(X 1_G) = 0$ for every set $G \in \mathcal{G}$. Thus $X$ must be zero.

The linear subspace of $L^2(\mathcal{G})$ generated by $\{H_n(W(h)),\ h \in H \text{ such that } \|h\|_H = 1\}$ for $n \geq 1$ is denoted by $\mathcal{H}_n$. For $n \neq m$, Lemma 1.1.2 yields that the spaces $\mathcal{H}_n$ and $\mathcal{H}_m$ are orthogonal. This fact leads to an orthogonal decomposition of the space $L^2(\mathcal{G})$:

¹We recall briefly the definition of a total subset: for some vector space $E$ and its dual $E^*$, a set $\Phi \subset E^*$ is a total subset over $E$ if $\Phi^\perp := \{x \in E :\ \langle f, x \rangle = 0 \text{ for all } f \in \Phi\} = \{0\}$.

²For a (signed) measure $\nu$ on $\mathbb{R}^m$ the Laplace transform is given for $t \in \mathbb{R}^m$ by
$$L\{\nu\}(t) = \int_{\mathbb{R}^m} \exp(\langle t, x \rangle)\, d\nu(x).$$

Theorem 1.1.4. The space $L^2(\mathcal{G})$ can be decomposed into the infinite sum of orthogonal subspaces:
$$L^2(\mathcal{G}) = \bigoplus_{n=0}^{\infty} \mathcal{H}_n. \tag{1.5}$$

Proof. We prove this theorem by taking an arbitrary element $X \in L^2(\mathcal{G})$ that is orthogonal to $\mathcal{H}_n$ for all $n \geq 0$, i.e. an $X \in L^2(\mathcal{G})$ that satisfies
$$E\big(XH_n(W(h))\big) = 0 \qquad \text{for all } h \in H \text{ with } \|h\|_H = 1,\ n \geq 0.$$
We show that the only $X \in L^2(\mathcal{G})$ fulfilling this condition is the zero element, thereby establishing that all elements are contained in the right-hand side of (1.5). Expressing $x^n$ as a linear combination of the Hermite polynomials $H_i(x)$, $0 \leq i \leq n$, we get $E(XW(h)^n) = 0$ for all $n \geq 0$; this, by a power expansion of the exponential, leads to $E(X \exp(tW(h))) = 0$ for all $t \in \mathbb{R}$. By the previous lemma, $X = 0$ follows.

We have seen in Theorem 1.1.4 that $L^2(\mathcal{G})$ can be decomposed into orthogonal subspaces. This property, of course, should be reflected by the elements of $L^2(\mathcal{G})$: the next goal is to decompose a given random variable into a sum of suitable orthogonal random variables. To do so we leave the abstract setting and consider the Hilbert space $H = L^2(T, \mathcal{B}, \mu) = L^2(T)$, where $(T, \mathcal{B})$ is a measurable space and $\mu$ is a $\sigma$-finite measure without atoms. The Gaussian process $W$ is characterised by the family $\{W(1_B),\ B \in \mathcal{B} \text{ and } \mu(B) < \infty\}$, as every element in the given Hilbert space can be approximated by linear combinations of suitable indicator functions. For brevity of notation we will also write $W(B)$ for $W(1_B)$. By definition $W(A)$ has distribution $N(0, \mu(A))$ if $\mu(A) < \infty$.

Next we define the multiple stochastic integral $I_m(f)$, as it plays a key role in establishing an orthogonal decomposition of a random variable. Defining $\mathcal{B}_0 := \{A \in \mathcal{B},\ \mu(A) < \infty\}$, we want to define a stochastic integral for a function $f \in L^2(T^m, \mathcal{B}^m, \mu^m)$ (for $m \geq 1$, $T^m$ denotes the $m$-fold product of the space $T$ and $\mu^m$ the corresponding product measure) and denote by $\mathcal{E}_m$ the set of simple functions
$$f(t_1, \ldots, t_m) = \sum_{i_1, \ldots, i_m = 1}^{n} a_{i_1 \ldots i_m}\, 1_{A_{i_1} \times \cdots \times A_{i_m}}(t_1, \ldots, t_m), \tag{1.6}$$
where $A_1, \ldots, A_n$ are pairwise disjoint sets in $\mathcal{B}_0$ and the coefficients $a_{i_1 \ldots i_m}$ vanish if any two indices $i_j$ are equal. The integral of a simple function is defined as
$$I_m(f) := \sum_{i_1, \ldots, i_m = 1}^{n} a_{i_1 \ldots i_m}\, W(A_{i_1}) \cdots W(A_{i_m}).$$

As any two simple functions can be rewritten with respect to a common set of indicator functions, the linearity of the integral is obvious. Two other important properties hold as well:

Lemma 1.1.5. For the integral we find:

1. $I_m(f) = I_m(\tilde{f})$, where $\tilde{f}$ denotes the symmetrization of $f$ given by
$$\tilde{f}(t_1, \ldots, t_m) = \frac{1}{m!} \sum_{\sigma} f(t_{\sigma(1)}, \ldots, t_{\sigma(m)}),$$
with $\sigma$ running over all permutations of $\{1, \ldots, m\}$.

2. $$E\big(I_m(f)I_q(g)\big) = \begin{cases} 0 & \text{if } q \neq m, \\ m!\, \langle \tilde{f}, \tilde{g} \rangle_{L^2(T^m)} & \text{if } q = m. \end{cases} \tag{1.7}$$

Proof. The first item can easily be checked for a function of the kind $f(t_1, \ldots, t_m) = 1_{A_{i_1} \times \cdots \times A_{i_m}}(t_1, \ldots, t_m)$; due to the linearity of the integral it is sufficient to consider this case alone. For the second item we consider two symmetric functions $f \in \mathcal{E}_m$ and $g \in \mathcal{E}_q$. If $m \neq q$ the expectation is always zero, as there is always one random variable independent of the rest with expectation zero. For $m = q$ and a symmetric function the coefficients satisfy $b_{i_1 \ldots i_m} = b_{\sigma(i_1) \ldots \sigma(i_m)}$ for all permutations $\sigma$ of $\{i_1, \ldots, i_m\}$. Therefore
$$g(t_1, \ldots, t_m) = \sum_{i_1, \ldots, i_m = 1}^{n} b_{i_1 \ldots i_m}\, 1_{A_{i_1} \times \cdots \times A_{i_m}}(t_1, \ldots, t_m) = m! \sum_{i_1 < \cdots < i_m} b_{i_1 \ldots i_m}\, \widetilde{1_{A_{i_1} \times \cdots \times A_{i_m}}}(t_1, \ldots, t_m),$$
and this leads to
$$E\big(I_m(f)I_q(g)\big) = m! \sum_{i_1 < \cdots < i_m} m!\, a_{i_1 \ldots i_m} b_{i_1 \ldots i_m}\, \mu(A_{i_1}) \cdots \mu(A_{i_m}) = m!\, \langle f, g \rangle_{L^2(T^m)}.$$

These results along with the density of $\mathcal{E}_m$ in $L^2(T^m)$ enable us to extend the integral to an arbitrary function in $L^2(T^m)$. We choose a sequence $(f_n)_{n \in \mathbb{N}} \subset \mathcal{E}_m$ such that $f_n \to f \in L^2(T^m)$. $(f_n)_{n \in \mathbb{N}}$ is obviously a Cauchy sequence and we obtain, by letting $f = g$ in the second item of the previous lemma,
$$E\big(I_m(f_n) - I_m(f_k)\big)^2 = E\big(I_m(f_n - f_k)\big)^2 = m!\, \|\widetilde{f_n - f_k}\|^2_{L^2(T^m)} \leq m!\, \|f_n - f_k\|^2_{L^2(T^m)} \to 0.$$
Therefore $I_m(f_n)$ is a Cauchy sequence in $L^2(\mathcal{F})$ and we denote the limit $I_m(f)$ by
$$I_m(f) =: \int_{T^m} f(t_1, \ldots, t_m)\, dW(t_1) \cdots dW(t_m).$$

This integral, however, is not yet the standard Ito integral. For a simple function $h = \sum_{i=1}^{n} a_i 1_{A_i}$ we have
$$W(h) = W\Big(\sum_{i=1}^{n} a_i 1_{A_i}\Big) = \sum_{i=1}^{n} a_i W(1_{A_i}) = \sum_{i=1}^{n} a_i W(A_i) = \int_T h\, dW(t).$$
Obviously, by the density of $\mathcal{E}_1$, this property extends to the entire $L^2(T) = H$, and the isonormal Gaussian process is given by $\{W(h) = \int_T h_s\, dW_s,\ h \in H\}$. This integral defined with respect to the Gaussian process is closely related to the Hermite polynomials:

Proposition 1.1.6. Let $H_m(x)$ be the $m$-th Hermite polynomial and let $h \in H = L^2(T)$ be such that $\|h\|_H = 1$. Then
$$m!\, H_m(W(h)) = \int_{T^m} h(t_1) \cdots h(t_m)\, dW(t_1) \cdots dW(t_m)$$
holds and, denoting by $L^2_S(T^m)$ the closed subspace of $L^2(T^m)$ generated by the symmetric functions,
$$I_m\big(L^2(T^m)\big) = I_m\big(L^2_S(T^m)\big) = \mathcal{H}_m.$$

Proof. For $m = 1$ and simple functions this relationship surely holds true. Due to the density of $\mathcal{E}_1$ in $L^2(T)$, it extends to $L^2(T)$. Next we assume that the relationship holds for $m$ and, using the notation $h^{\otimes m} = h(t_1) \cdots h(t_m)$, we obtain:
$$I_{m+1}\big(h^{\otimes(m+1)}\big) = I_m\big(h^{\otimes m}\big) I_1(h) - m\, I_{m-1}\Big(h^{\otimes(m-1)} \int_T h(t)^2\, \mu(dt)\Big)$$
$$= m!\, H_m(W(h))\, W(h) - m\,(m-1)!\, H_{m-1}(W(h)) = m!\,(m+1) H_{m+1}(W(h)) = (m+1)!\, H_{m+1}(W(h)).$$
Here the first equality is proved in [Nua06] (Proposition 1.1.2). The second equality stems from the induction assumption and the fact that $\int_T h(t)^2\, \mu(dt) = \|h\|^2_H = 1$. Finally we use (1.2) to obtain the result.

For the second part of the statement note that $E\big(I_m(f)^2\big) = m!\, \|f\|^2_{L^2(T^m)}$ holds on $L^2_S(T^m)$. Therefore $I_m\big(L^2_S(T^m)\big)$ is a closed subspace of $L^2(\mathcal{F})$. Due to the first part of this proposition, $I_m\big(L^2_S(T^m)\big)$ also contains the random variables $H_m(W(h))$, $h \in H$, that satisfy $\|h\| = 1$. Thus the $m$-th Wiener chaos is contained in $I_m\big(L^2_S(T^m)\big)$, i.e. $\mathcal{H}_m \subset I_m\big(L^2_S(T^m)\big)$. Due to (1.7), the orthogonality of integrals of different order, $\mathcal{H}_n$ and $I_m\big(L^2_S(T^m)\big)$ are orthogonal for $m \neq n$. As $I_m\big(L^2_S(T^m)\big) \subset L^2(\mathcal{G})$, this establishes $I_m\big(L^2_S(T^m)\big) = \mathcal{H}_m$.

This proposition establishes the chaos expansion theorem (or chaos decomposition) of a square integrable random variable:

Theorem 1.1.7. A random variable $F \in L^2(\mathcal{G})$ can be expanded into a series of multiple stochastic integrals:
$$F = \sum_{n=0}^{\infty} I_n(f_n).$$
As non-degenerate integrals have expectation zero, $f_0$ must equal $E(F)$ and $I_0$ is defined as the identity mapping. If the $f_n$ are required to be symmetric, they are uniquely determined by $F$.

We close this section by considering the relation between the stochastic integral with respect to the isonormal Gaussian process and the standard Ito integral. The probabilistic setting is given by the probability space $(\Omega, \mathcal{F}, P)$. Setting $T = \mathbb{R}_+$ and $\mu = \lambda$, $\lambda$ being the Lebesgue measure, we are able to define a Brownian motion $W_t$ via the isonormal Gaussian process $W(h)$ by setting
$$W_t := W([0, t]) = W(1_{[0,t]}), \qquad t \in T.$$
Then for a symmetric function $f: T^n \to \mathbb{R}$ the following relationship can easily be checked for simple processes:
$$I_n(f) = n! \int_0^{\infty} \int_0^{t_n} \cdots \int_0^{t_2} f(t_1, \ldots, t_n)\, dW_{t_1} \cdots dW_{t_n}.$$
The multiple Ito integral has this iterated structure because the integration limits must ensure that the adaptedness of the integrand is preserved. Taking into account the density of the simple functions, the above equality extends to the general case.
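The iterated-integral representation has a clean discrete analogue that can be verified pathwise; this sketch (not part of the original essay) checks the algebraic identity behind $I_2(1) = W_T^2 - T$ on a partition, where the sum of squared increments plays the role of $T$:

```python
import random

rng = random.Random(42)
n_steps, T = 1000, 1.0
dt = T / n_steps
dW = [rng.gauss(0, dt ** 0.5) for _ in range(n_steps)]

# Discrete version of I_2(1) = 2 * (iterated integral over {s < t}):
iterated = 0.0
running = 0.0                   # W at the left endpoint of the current step
for dw in dW:
    iterated += running * dw    # adds sum_{j < i} dW_j * dW_i
    running += dw
I2 = 2 * iterated

W_T = sum(dW)
# Algebraic identity: 2 * sum_{j<i} dW_j dW_i = W_T^2 - sum(dW^2).
# As the mesh shrinks, sum(dW^2) -> T, recovering I_2(1) = W_T^2 - T.
print(I2, W_T ** 2 - sum(dw * dw for dw in dW))
```

Dividing by $T$ and comparing with $2!\, H_2(W_T/\sqrt{T})$ for the normalised kernel recovers the Hermite connection of Proposition 1.1.6.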

1.2 The Malliavin Derivative

In this section we will introduce the derivative $DF$ of a square integrable random variable $F: \Omega \to \mathbb{R}$. The goal is to define a derivative with respect to the chance parameter $\omega$. In order to achieve this we consider the set $C_p^{\infty}(\mathbb{R}^n)$ of all infinitely continuously differentiable functions $f: \mathbb{R}^n \to \mathbb{R}$ such that $f$ and all of its derivatives have at most polynomial growth. For $n \geq 1$ and $f \in C_p^{\infty}(\mathbb{R}^n)$ we denote by $\mathcal{S}$ the set of all random variables of the form
$$F = f\big(W(h_1), \ldots, W(h_n)\big), \tag{1.8}$$
where $h_1, \ldots, h_n \in H$. An $F \in \mathcal{S}$ will be called a smooth random variable. The derivative of a smooth random variable is defined by:

Definition 1.2.1. The derivative of a random variable $F \in \mathcal{S}$ is given by
$$DF = \sum_{i=1}^{n} \partial_i f\big(W(h_1), \ldots, W(h_n)\big)\, h_i. \tag{1.9}$$

By this definition the derivative is a mapping $DF: \Omega \to H$. Why this can indeed be understood as a derivative with respect to the chance parameter will become clear later on. Next we prove an important integration by parts formula:

Lemma 1.2.2. Let $F \in \mathcal{S}$ and $h \in H$. Then
$$E\big(\langle DF, h \rangle_H\big) = E\big(F\, W(h)\big). \tag{1.10}$$

Proof. Due to the linearity of the scalar product and of $W$, we are able to normalise (1.10) such that $h$ is of norm one. Setting $h = e_1$, there exist orthonormal elements $e_1, \ldots, e_n$ in $H$ such that $F$ can be rewritten as $F = f\big(W(e_1), \ldots, W(e_n)\big)$ for a suitable function $f \in C_p^{\infty}(\mathbb{R}^n)$. Denoting by $\phi(x)$ the multivariate density of the standard normal distribution, we obtain by the classical integration by parts formula
$$E\big(\langle DF, h \rangle_H\big) = \int_{\mathbb{R}^n} \partial_1 f(x)\, \phi(x)\, dx = \int_{\mathbb{R}^n} f(x)\, \phi(x)\, x_1\, dx = E\big(F\, W(e_1)\big) = E\big(F\, W(h)\big).$$
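Formula (1.10) is easy to test by simulation; the following sketch (an addition, not from the original essay) takes $F = e^{W(h)}$ with $\|h\|_H = 1$, so that $\langle DF, h \rangle_H = e^{W(h)}$ and both sides of (1.10) equal $e^{1/2}$:

```python
import random, math

rng = random.Random(7)
n = 500_000
lhs = rhs = 0.0
for _ in range(n):
    z = rng.gauss(0, 1)     # Z = W(h) ~ N(0, 1) since ||h||_H = 1
    f = math.exp(z)         # F = exp(W(h)); here <DF, h>_H = exp(W(h))
    lhs += f                # accumulates E(<DF, h>_H)
    rhs += f * z            # accumulates E(F W(h))
lhs /= n
rhs /= n
print(lhs, rhs, math.exp(0.5))   # both estimates should approach e^{1/2}
```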

By applying the previous lemma to $FG$ we obtain:

Lemma 1.2.3. Let $F, G \in \mathcal{S}$ and $h \in H$. Then
$$E\big(G\, \langle DF, h \rangle_H\big) = -E\big(F\, \langle DG, h \rangle_H\big) + E\big(FG\, W(h)\big). \tag{1.11}$$

For any $p \geq 1$ we denote the domain of the derivative operator in $L^p(\mathcal{F})$ by $\mathbb{D}^{1,p}$, where $\mathbb{D}^{1,p} = \overline{\mathcal{S}}$ and the closure is taken with respect to the norm
$$\|F\|_{1,p} = \Big(E\big(|F|^p\big) + E\big(\|DF\|_H^p\big)\Big)^{1/p}.$$
This definition of $\mathbb{D}^{1,p}$ is sensible, as it is shown in [Nua06] that $D$ is a closable operator.

Remark 1.2.4. For a real separable Hilbert space $V$, we are able to define a (semi-)norm for the family $\mathcal{S}_V$ of $V$-valued smooth random variables of the form
$$F = \sum_{j=1}^{n} F_j v_j, \qquad v_j \in V, \quad F_j \in \mathcal{S},$$
by setting
$$\|F\|_{1,p,V} = \Big(E\big(\|F\|_V^p\big) + E\big(\|DF\|_{H \otimes V}^p\big)\Big)^{1/p},$$
where $DF = \sum_{j=1}^{n} DF_j \otimes v_j$. The completion of $\mathcal{S}_V$ with respect to the above norm is denoted by $\mathbb{D}^{1,p}(V)$.

We define an auxiliary operator $D^h$ on the set of smooth random variables by $D^h F := \langle DF, h \rangle_H$. This is an operator from $L^p(\mathcal{F})$ to $L^p(\mathcal{F})$ for $p \geq 1$ and its domain will be denoted by $\mathbb{D}^{h,p}$.

The derivative operator also satisfies a certain chain rule (Proposition 1.2.3 in [Nua06]):

Lemma 1.2.5. Let $\phi: \mathbb{R}^m \to \mathbb{R}$ be a continuously differentiable function with bounded partial derivatives. Suppose $F = (F^1, \ldots, F^m)$ is a random vector with components in $\mathbb{D}^{1,p}$. Then $\phi(F)$ is in $\mathbb{D}^{1,p}$ and
$$D\big(\phi(F)\big) = \sum_{i=1}^{m} \partial_i \phi(F)\, DF^i.$$

Similar to the previous section we will now abandon the setting of an arbitrary Hilbert space $H$ and specify $H = L^2(T, \mathcal{B}, \mu)$. Again $(T, \mathcal{B})$ is a measurable space and $\mu$ is a $\sigma$-finite atomless measure. In this setting the derivative of a random variable $F \in \mathbb{D}^{1,2}$ will be a stochastic process $\{D_t F,\ t \in T\}$, due to the fact that we are able to identify $L^2(\Omega; H)$ with $L^2(T \times \Omega)$ by identifying each $h \in H$ with $(h_t)_{t \in T}$. The next example will show why we consider the operator $D^h F$ as a derivative with respect to the chance parameter $\omega$.

Example 1.2.6. We consider the Wiener space with $\Omega = C_0([0,1])$, a Brownian motion given by $W_t(\omega) = \omega(t)$, and the subspace $H^1$ of $\Omega$ containing the functions of the form $x(t) = \int_0^t \dot{x}(s)\, ds$ such that $\dot{x} \in H = L^2([0,1], \lambda)$. This space is called the Cameron-Martin space and we are able to obtain a Hilbert space structure on it by setting
$$\langle x, y \rangle_{H^1} = \langle \dot{x}, \dot{y} \rangle_H = \int_0^1 \dot{x}(s)\, \dot{y}(s)\, ds.$$
We consider a random variable $F \in \mathcal{S}$ of the particular form $F = f\big(W(t_1), \ldots, W(t_n)\big)$, $0 \leq t_1 < \cdots < t_n \leq 1$. Here $W(1_{[0,t_i]}) = W(t_i)$ and, by the given choice of $H$, $(W(t))_{t \in [0,1]}$ is a Brownian motion. Therefore $F(\omega) = f\big(\omega(t_1), \ldots, \omega(t_n)\big)$. In this setting $D^h F$ yields:
$$D^h F = \langle DF, h \rangle = \sum_{i=1}^{n} \partial_i f\big(W(t_1), \ldots, W(t_n)\big)\, \langle 1_{[0,t_i]}, h \rangle = \sum_{i=1}^{n} \partial_i f\big(W(t_1), \ldots, W(t_n)\big) \int_0^{t_i} h(s)\, ds = \frac{d}{d\epsilon} F\Big(\omega + \epsilon \int_0^{\cdot} h(s)\, ds\Big)\Big|_{\epsilon = 0}.$$
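The last equality of the example can be checked on a single realisation by a finite difference; the sketch below (an addition, with an arbitrarily chosen test function $f(x, y) = x y^2$ and direction $h \equiv 1$) compares the Malliavin-side sum with the Cameron-Martin shift:

```python
import random

# F = f(W_{t1}, W_{t2}) with f(x, y) = x * y**2 (a hypothetical choice)
def F(w1, w2):
    return w1 * w2 ** 2

t1, t2 = 0.5, 1.0
rng = random.Random(1)
w1 = rng.gauss(0, t1 ** 0.5)                   # W_{t1}
w2 = w1 + rng.gauss(0, (t2 - t1) ** 0.5)       # W_{t2}

I1, I2 = t1, t2          # int_0^{t_i} h(s) ds for h == 1 on [0, 1]

# Malliavin side: sum_i d_i f(W_{t1}, W_{t2}) * int_0^{t_i} h(s) ds
dhF = (w2 ** 2) * I1 + (2 * w1 * w2) * I2

# Shift side: finite difference in the Cameron-Martin direction int_0^. h
eps = 1e-6
fd = (F(w1 + eps * I1, w2 + eps * I2) - F(w1, w2)) / eps
print(dhF, fd)
```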

Making use of the Wiener chaos decomposition of a square integrable random variable, i.e.
$$F = \sum_{n=0}^{\infty} I_n(f_n), \tag{1.12}$$
with symmetric functions $f_n$, we can easily compute the derivative of any random variable:

Proposition 1.2.7. Let $F \in \mathbb{D}^{1,2}$ be a square integrable random variable with the decomposition given above. Then we have
$$D_t F = \sum_{n=1}^{\infty} n\, I_{n-1}\big(f_n(\cdot, t)\big). \tag{1.13}$$

Proof. We show the statement for a simple function; the general case then results from the density of the simple functions. Let $F = I_m(f_m)$ for a symmetric function $f_m \in \mathcal{E}_m$. Then, by applying (1.9) to $g\big(W(A_{i_1}), \ldots, W(A_{i_m})\big)$ with $g(x_1, \ldots, x_m) = x_1 \cdots x_m$, we obtain
$$D_t F = \sum_{j=1}^{m} \sum_{i_1, \ldots, i_m = 1}^{n} a_{i_1 \ldots i_m}\, W(A_{i_1}) \cdots 1_{A_{i_j}}(t) \cdots W(A_{i_m}) = m\, I_{m-1}\big(f_m(\cdot, t)\big).$$

Using the chaos decomposition also yields a representation for the conditional expectation of a square integrable random variable $F$:

Proposition 1.2.8. Suppose $F$ is a square integrable random variable with the representation (1.12). Let $A \in \mathcal{B}$. Then
$$E\big(F\, \big|\, \mathcal{F}_A\big) = \sum_{n=0}^{\infty} I_n\big(f_n\, 1_A^{\otimes n}\big). \tag{1.14}$$

Proof. It is enough to assume that $F = I_n(f_n)$ where $f_n$ is a function in $\mathcal{E}_n$. By linearity we may also assume that the kernel $f_n$ is of the form $1_{B_1 \times \cdots \times B_n}$ with $B_1, \ldots, B_n$ mutually disjoint sets of finite measure. The linearity of $W$ and the properties of the conditional expectation then lead to
$$E\big(F\, \big|\, \mathcal{F}_A\big) = E\big(W(B_1) \cdots W(B_n)\, \big|\, \mathcal{F}_A\big) = E\Big(\prod_{i=1}^{n} \big(W(B_i \cap A) + W(B_i \cap A^c)\big)\, \Big|\, \mathcal{F}_A\Big) = I_n\big(1_{(B_1 \cap A) \times \cdots \times (B_n \cap A)}\big).$$

This result finally enables us to calculate the derivative of a conditional expectation.

Proposition 1.2.9. Assume $F$ is a member of $\mathbb{D}^{1,2}$ and $A \in \mathcal{B}$. Then the conditional expectation $E[F | \mathcal{F}_A]$ belongs to $\mathbb{D}^{1,2}$ and its derivative is a.s. given by
$$D_t\big(E[F | \mathcal{F}_A]\big) = E\big[D_t F\, \big|\, \mathcal{F}_A\big]\, 1_A(t).$$

Proof. By Propositions 1.2.7 and 1.2.8 we obtain
$$D_t\big(E[F | \mathcal{F}_A]\big) = \sum_{n=1}^{\infty} n\, I_{n-1}\big(f_n(\cdot, t)\, 1_A^{\otimes(n-1)}\big)\, 1_A(t) = E\big[D_t F\, \big|\, \mathcal{F}_A\big]\, 1_A(t).$$

1.3 The Divergence Operator

In this section we introduce the divergence operator, defined as the adjoint of the derivative operator. If the underlying Hilbert space is of the form $H = L^2(T, \mathcal{B}, \mu)$, with $(T, \mathcal{B})$ and $\mu$ as before, we will see that the divergence operator can be understood as an integral. We therefore define:

Definition 1.3.1. We denote by $\delta$ the adjoint of the operator $D$. That is, $\delta$ is an unbounded operator on $L^2(\Omega; H)$ with values in $L^2(\Omega)$ such that: the domain of $\delta$, denoted by $\mathrm{Dom}\,\delta$, is the set of $H$-valued square integrable random variables $u \in L^2(\Omega; H)$ such that
$$\big|E\big(\langle DF, u \rangle_H\big)\big| \leq c\, \|F\|_2$$
for all $F \in \mathbb{D}^{1,2}$, where $c$ is some constant depending on $u$. If $u$ belongs to $\mathrm{Dom}\,\delta$, then $\delta(u)$ is the element of $L^2(\Omega)$ characterized by
$$E\big(F\, \delta(u)\big) = E\big(\langle DF, u \rangle_H\big) \tag{1.15}$$
for any $F \in \mathbb{D}^{1,2}$.

Denote by $\mathcal{S}_H$ the class of smooth elementary processes of the form
$$u = \sum_{j=1}^{n} F_j h_j, \tag{1.16}$$

where the $F_j$ are smooth random variables and the $h_j$ are elements of $H$. We deduce from Lemma 1.2.3, keeping in mind $E[\langle DG, u \rangle_H] = \sum_{j=1}^{n} E\big[F_j \langle DG, h_j \rangle_H\big]$, that
$$\delta(u) = \sum_{j=1}^{n} F_j\, W(h_j) - \sum_{j=1}^{n} \langle DF_j, h_j \rangle_H. \tag{1.17}$$

For a random variable $F \in \mathbb{D}^{1,2}$ the term $\delta(Fu)$ can be calculated with the help of the next proposition.

Proposition 1.3.2. Let $F \in \mathbb{D}^{1,2}$ and let $u$ be in the domain of $\delta$ such that $Fu \in L^2(\Omega; H)$. Then $Fu$ belongs to the domain of $\delta$ and we obtain the equality
$$\delta(Fu) = F\, \delta(u) - \langle DF, u \rangle_H,$$
provided the right-hand side is square integrable.

Proof. For any smooth random variable $G$ we have
$$E\big[G\, \delta(Fu)\big] = E\big[\langle DG, Fu \rangle_H\big] = E\big[\langle u, D(FG) - G\, DF \rangle_H\big] = E\big[\big(\delta(u) F - \langle u, DF \rangle_H\big) G\big].$$

Defining $D^h(u) := \sum_{j=1}^{n} D^h(F_j)\, h_j$ for $u \in \mathcal{S}_H$, $F_j \in \mathcal{S}$ and $h \in H$, the following commutativity relationship holds true:
$$D^h\big(\delta(u)\big) = \langle u, h \rangle_H + \delta\big(D^h u\big). \tag{1.18}$$
Proof: (1.17) yields
$$D^h\big(\delta(u)\big) = \sum_{j=1}^{n} D^h\big(F_j W(h_j)\big) - \sum_{j=1}^{n} D^h \langle DF_j, h_j \rangle_H = \sum_{j=1}^{n} F_j \langle h, h_j \rangle_H + \sum_{j=1}^{n} \Big(D^h F_j\, W(h_j) - \big\langle D(D^h F_j), h_j \big\rangle_H\Big) = \langle u, h \rangle_H + \delta\big(D^h u\big).$$

Remark 1.3.3. This commutativity relationship can be extended to $\mathbb{D}^{1,2}(H)$, a completion of $\mathcal{S}_H$. See [Nua06] for details.

Next we are going to consider the special Hilbert space $H = L^2(T)$ again. In this case $\mathrm{Dom}\,\delta \subset L^2(T \times \Omega)$ due to the definition of the adjoint operator. The operator $\delta(u)$ is called the Skorohod integral of the process $u$. The following notation will be used:
$$\delta(u) = \int_T u_t\, \delta W_t.$$
To obtain the Wiener chaos decomposition of the Skorohod integral we first note that any $u \in L^2(T \times \Omega)$ has a Wiener chaos expansion of the form
$$u(t) = \sum_{n=0}^{\infty} I_n\big(f_n(\cdot, t)\big). \tag{1.19}$$
For each $n \geq 1$, $f_n \in L^2(T^{n+1})$ is a symmetric function in the first $n$ variables. We have the following result:

Proposition 1.3.4. Let $u \in L^2(T \times \Omega)$ have the chaos expansion (1.19). Then $u \in \mathrm{Dom}\,\delta$ if and only if the series
$$\delta(u) = \sum_{n=0}^{\infty} I_{n+1}\big(\tilde{f}_n\big) \tag{1.20}$$
converges in $L^2(\Omega)$.

For a proof see [Nua06]. Here the functions of $n+1$ variables are symmetric only in the first $n$ variables, and the symmetrization of $f_n$ in all its variables is given by
$$\tilde{f}_n(t_1, \ldots, t_n, t) = \frac{1}{n+1}\Big(f_n(t_1, \ldots, t_n, t) + \sum_{i=1}^{n} f_n(t_1, \ldots, t_{i-1}, t, t_{i+1}, \ldots, t_n, t_i)\Big).$$
As a consequence of Proposition 1.3.4, $\mathrm{Dom}\,\delta$ is formed by the elements of $L^2(T \times \Omega)$ that satisfy
$$E\big[\delta(u)^2\big] = \sum_{j=0}^{\infty} \sum_{i=0}^{\infty} E\big[I_{j+1}(\tilde{f}_j)\, I_{i+1}(\tilde{f}_i)\big] = \sum_{j=0}^{\infty} E\big[I_{j+1}(\tilde{f}_j)^2\big] = \sum_{j=0}^{\infty} (j+1)!\, \|\tilde{f}_j\|^2_{L^2(T^{j+1})} < \infty.$$

In what sense can the Skorohod integral be understood as an integral? In our current setting with $H = L^2(T)$ the random variable $W(h_j)$ can be written as the integral $\int_T h_j(t)\, dW_t$, and (1.17) becomes
$$\int_T u_t\, \delta W_t = \sum_{j=1}^{n} F_j \int_T h_j(t)\, dW_t - \sum_{j=1}^{n} \int_T D_t F_j\, h_j(t)\, \mu(dt).$$

Therefore, in this special Hilbert space, the Skorohod integral can be seen as an Ito integral plus an additional term involving the Malliavin derivative. The derivative of the Skorohod integral is given by:

Proposition 1.3.5. Suppose that $u \in \mathbb{D}^{1,2}(L^2(T))$. Assume that for almost all $t$ the process $(D_t u_s)_{s \in T}$ is Skorohod integrable and that $\big(\int_T D_t u_s\, \delta W_s\big)_{t \in T}$ has a version in $L^2(T \times \Omega)$. Then
$$D_t\big(\delta(u)\big) = u_t + \int_T D_t u_s\, \delta W_s.$$

Proof. (1.18) yields the commutation relation, and Remark 1.3.3 its validity on $\mathbb{D}^{1,2}(L^2(T))$:
$$D^h\big(\delta(u)\big) = \langle u, h \rangle_H + \delta\big(D^h u\big).$$
In the given Hilbert space this becomes
$$\int_T D_t\big(\delta(u)\big)\, h_t\, dt = \int_T u_t h_t\, dt + \int_T \Big(\int_T D_t u_s\, h_t\, dt\Big)\, \delta W_s = \int_T \Big(u_t + \int_T D_t u_s\, \delta W_s\Big) h_t\, dt.$$

Next, we are going to relate the Skorohod and Ito integrals by observing that the Skorohod integral of an adapted process coincides with the Ito integral of this process. In order to establish this result we recall that $\mathcal{B}_0 = \{A \in \mathcal{B},\ \mu(A) < \infty\}$ and quote the following lemma from [Nua06] without proof:

Lemma 1.3.6. Let $A \in \mathcal{B}_0$ and let $F$ be a square integrable random variable that is measurable with respect to the $\sigma$-field $\mathcal{F}_{A^c}$. Then the process $F 1_A$ is Skorohod integrable and
$$\delta(F 1_A) = F\, W(A).$$

This lemma establishes the connection to the Ito integral in the next proposition. Denoting by $L^2_a$ the closed subspace of $L^2([0,1] \times \Omega)$ generated by the adapted processes, we obtain:

Proposition 1.3.7. $L^2_a$ is a subset of $\mathrm{Dom}\,\delta$ and the operator $\delta$ restricted to $L^2_a$ coincides with the Ito integral, that is
$$\delta(u) = \int_0^1 u_t\, dW_t.$$

Proof. Suppose $u$ is an elementary adapted process of the form
$$u_t = \sum_{j=1}^{n} F_j\, 1_{(t_j, t_{j+1}]}(t),$$
where $F_j \in L^2(\Omega, \mathcal{F}_{t_j}, P)$ and $0 \leq t_1 < \cdots < t_{n+1} \leq 1$. Then the above lemma yields that $u \in \mathrm{Dom}\,\delta$, and for the Skorohod integral we have the representation
$$\delta(u) = \sum_{j=1}^{n} F_j\, \big(W(t_{j+1}) - W(t_j)\big).$$
The elementary processes above are dense in $L^2_a$ and a limit argument yields the desired result.

We close this section by pointing out a representation theorem in terms of the Malliavin derivative, the Clark-Ocone representation formula. It is well known that any square integrable random variable can be written as
$$F = E(F) + \int_0^1 \phi(t)\, dW_t$$
for an adapted integrand $\phi(t)$. Next we show that $\phi$ can be written in terms of a Malliavin derivative:

Proposition 1.3.8 (Clark-Ocone). Let $F \in \mathbb{D}^{1,2}$ and suppose that $W$ is a one-dimensional Brownian motion on $[0,1]$. Then
$$F = E(F) + \int_0^1 E\big(D_t F\, \big|\, \mathcal{F}_t\big)\, dW_t. \tag{1.21}$$

Proof. Suppose that $F = \sum_{n=0}^{\infty} I_n(f_n)$. From (1.13) and (1.14) we obtain
$$E\big(D_t F\, \big|\, \mathcal{F}_t\big) = \sum_{n=1}^{\infty} n\, E\big(I_{n-1}(f_n(\cdot, t))\, \big|\, \mathcal{F}_t\big) = \sum_{n=1}^{\infty} n\, I_{n-1}\big(f_n(t_1, \ldots, t_{n-1}, t)\, 1_{\{t_1 \vee \cdots \vee t_{n-1} < t\}}\big).$$
Now, setting $u_t = E(D_t F | \mathcal{F}_t)$ and keeping in mind that the symmetrization of $f_n(t_1, \ldots, t_{n-1}, t)\, 1_{\{t_1 \vee \cdots \vee t_{n-1} < t\}}$ in all $n$ variables equals $\frac{1}{n} f_n(t_1, \ldots, t_{n-1}, t)$ if $u$ is not trivially zero, we obtain by Proposition 1.3.4:
$$\delta(u) = \sum_{n=1}^{\infty} I_n(f_n) = F - E(F).$$
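The Clark-Ocone formula can be illustrated pathwise for a concrete random variable; this sketch (an addition to the text) takes $F = W_1^2$, for which $D_t F = 2W_1$, $E[D_t F | \mathcal{F}_t] = 2W_t$ and $E(F) = 1$:

```python
import random

# Pathwise check of F = E(F) + int_0^1 E(D_t F | F_t) dW_t for F = W_1^2:
# the integrand is 2 W_t, evaluated at left endpoints (adapted version).
rng = random.Random(3)
n = 2000
dt = 1.0 / n
dW = [rng.gauss(0, dt ** 0.5) for _ in range(n)]

ito = 0.0
W = 0.0
for dw in dW:
    ito += 2 * W * dw   # left-point evaluation of the integrand 2 W_t
    W += dw             # after the loop, W is the terminal value W_1

qv = sum(dw * dw for dw in dW)   # quadratic variation, close to 1
print(1.0 + ito, W ** 2)
# Discretely, ito = W_1^2 - qv exactly, and qv -> 1 = E(F) as the mesh
# shrinks, so E(F) + ito -> F.
```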

In the next chapter we look at applications of the theory developed so far to finance.


Chapter 2 Some Applications of Malliavin Calculus to Finance


The use of Malliavin Calculus in finance was introduced in two strands of literature. One strand is concerned with the use of Malliavin Calculus in Monte Carlo simulations and was introduced in [FLL+] and [FLLL01]. The other strand of literature centres on the use of Malliavin Calculus in hedging and was introduced in [KO91]. Another application of Malliavin Calculus can be found in insider trading models, which we also review in this chapter. In all applications the Hilbert space $H$ will be given by $H = L^2([0,T], \mathcal{B}, \lambda)$ and the market setting is a Black-Scholes economy.

2.1 Monte Carlo Simulations

2.1.1 Greeks

The key idea of using Malliavin Calculus for the numerical computation of the so-called Greeks (the sensitivities of a derivative with respect to different price-relevant parameters) is an extension of the integration by parts formula developed in the previous chapter:

Proposition 2.1.1. Let $F$, $G$ be two random variables such that $F \in \mathbb{D}^{1,2}$. Consider an $H$-valued random variable $u$ such that $D^u F = \langle DF, u \rangle_H \neq 0$ a.s. and $Gu(D^u F)^{-1} \in \mathrm{Dom}\,\delta$. Then for any continuously differentiable function $f$ with bounded derivative we have
$$E\big[f'(F) G\big] = E\big[f(F)\, H(F, G)\big], \qquad \text{where} \qquad H(F, G) = \delta\big(Gu\, (D^u F)^{-1}\big).$$

Proof. By Lemma 1.2.5 a chain rule holds and we obtain $D^u\big(f(F)\big) = f'(F)\, D^u F$. This and the duality relationship (1.15) yield
$$E\big[f'(F) G\big] = E\big[D^u(f(F))\, (D^u F)^{-1} G\big] = E\big[\big\langle D(f(F)),\ u\, (D^u F)^{-1} G \big\rangle_H\big] = E\big[f(F)\, \delta\big(Gu\, (D^u F)^{-1}\big)\big].$$
If we choose $u = DF$ we obtain $H(F, G) = \delta\big(G\, DF\, \|DF\|_H^{-2}\big)$.

Considering an option with pay-off $B$ at $T$, the price of the option at time $t = 0$ is given by the expectation under an equivalent martingale measure $Q$:
$$V_0 = e^{-rT} E_Q(B),$$
provided that $E_Q(B^2) < \infty$. Here $r$ denotes the constant interest rate. Suppose we are able to write $B = f(F^{\alpha})$, with $f$ continuously differentiable and $\alpha$ one of the parameters of the problem. Then
$$\frac{\partial V_0}{\partial \alpha} = e^{-rT} E_Q\Big[f'(F^{\alpha})\, \frac{dF^{\alpha}}{d\alpha}\Big]. \tag{2.1}$$
By the above theorem this can be rewritten as
$$\frac{\partial V_0}{\partial \alpha} = e^{-rT} E_Q\Big[f(F^{\alpha})\, H\Big(F^{\alpha}, \frac{dF^{\alpha}}{d\alpha}\Big)\Big]. \tag{2.2}$$
This is the main idea of using Malliavin Calculus to derive efficient Monte Carlo simulation formulas. [FLL+] have shown that the above relations also hold if $f$ is no longer continuously differentiable, as in the case of European options for example. Then (2.1) is hard to handle by Monte Carlo methods due to the derivative inside the expectation operator.


Example 2.1.2 (Delta and Gamma in a Black-Scholes World). We give an example of the computation of the Delta and the Gamma of an option in a Black-Scholes world with constant parameters. It is well known that in this setting the value of the stock is given by
$$S_t = S_0 \exp\Big(\Big(r - \frac{\sigma^2}{2}\Big)t + \sigma W_t\Big) =: v(W_t)$$
for $t \in [0, T]$. Then the Delta of an option with pay-off function $\Phi(S_T)$ is given by
$$\Delta = \frac{\partial V_0}{\partial S_0} = E_Q\Big[e^{-rT}\, \Phi'(S_T)\, \frac{\partial S_T}{\partial S_0}\Big] = \frac{e^{-rT}}{S_0}\, E_Q\big[\Phi'(S_T)\, S_T\big].$$
Applying Proposition 2.1.1 with $u = 1$, $F = S_T$, $G = S_T$, and applying the chain rule to $D_t S_T = D_t v(W_T) = v'(W_T)\, D_t W_T = \sigma S_T$, we obtain by Proposition 1.3.4 (yielding $\delta(1) = I_1(1) = W_T$):
$$\delta\Big(S_T \Big(\int_0^T D_t S_T\, dt\Big)^{-1}\Big) = \delta\Big(\frac{1}{\sigma T}\Big) = \frac{W_T}{\sigma T}.$$
Therefore the Delta of the option can be expressed as
$$\Delta = \frac{e^{-rT}}{S_0\, \sigma T}\, E_Q\big[\Phi(S_T)\, W_T\big].$$
This term can easily be simulated by Monte Carlo methods.

The Gamma of an option is given by
$$\Gamma = \frac{\partial^2 V_0}{\partial S_0^2} = \frac{e^{-rT}}{S_0^2}\, E_Q\big[\Phi''(S_T)\, S_T^2\big].$$
Taking in Proposition 2.1.1 $u = 1$, $F = S_T$ and $G = S_T^2$, we obtain by Proposition 1.3.2
$$\delta\Big(S_T^2 \Big(\int_0^T D_t S_T\, dt\Big)^{-1}\Big) = \delta\Big(\frac{S_T}{\sigma T}\Big) = S_T\Big(\frac{W_T}{\sigma T} - 1\Big).$$
This in turn yields
$$E_Q\big[\Phi''(S_T)\, S_T^2\big] = E_Q\Big[\Phi'(S_T)\, S_T\Big(\frac{W_T}{\sigma T} - 1\Big)\Big].$$
Applying Proposition 2.1.1 again, now with $u = 1$, $F = S_T$ and $G = S_T\big(\frac{W_T}{\sigma T} - 1\big)$, we obtain first
$$\delta\Big(S_T\Big(\frac{W_T}{\sigma T} - 1\Big)\Big(\int_0^T D_t S_T\, dt\Big)^{-1}\Big) = \delta\Big(\frac{1}{\sigma T}\Big(\frac{W_T}{\sigma T} - 1\Big)\Big) = \frac{W_T^2}{\sigma^2 T^2} - \frac{W_T}{\sigma T} - \frac{1}{\sigma^2 T}$$
and finally
$$E_Q\Big[\Phi'(S_T)\, S_T\Big(\frac{W_T}{\sigma T} - 1\Big)\Big] = E_Q\Big[\Phi(S_T)\Big(\frac{W_T^2}{\sigma^2 T^2} - \frac{W_T}{\sigma T} - \frac{1}{\sigma^2 T}\Big)\Big].$$
Therefore the Gamma is given by
$$\Gamma = \frac{\partial^2 V_0}{\partial S_0^2} = \frac{e^{-rT}}{S_0^2\, \sigma T}\, E_Q\Big[\Phi(S_T)\Big(\frac{W_T^2}{\sigma T} - W_T - \frac{1}{\sigma}\Big)\Big].$$
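The Delta formula above is straightforward to simulate; the following sketch (an addition to the text, with arbitrarily chosen market parameters) estimates the Malliavin-weight Delta of a European call and compares it with the closed-form Black-Scholes Delta $N(d_1)$:

```python
import random, math

# Delta = e^{-rT}/(S0 sigma T) * E_Q[ Phi(S_T) W_T ] for Phi(x) = (x - K)^+
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
rng = random.Random(2024)
n = 400_000
acc = 0.0
for _ in range(n):
    w = rng.gauss(0, math.sqrt(T))                             # W_T
    s_T = S0 * math.exp((r - 0.5 * sigma ** 2) * T + sigma * w)
    acc += max(s_T - K, 0.0) * w                               # Phi(S_T) W_T
delta_mc = math.exp(-r * T) * acc / (n * S0 * sigma * T)

# Closed-form Black-Scholes Delta for comparison
d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
delta_bs = 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))
print(delta_mc, delta_bs)
```

Note that the weight $W_T/(S_0 \sigma T)$ multiplies the undifferentiated pay-off, so the estimator works unchanged for discontinuous pay-offs such as digitals.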

Remark 2.1.3. In Proposition 2.1.1 the weight $H(F, G)$ depends on the choice of $u$; therefore different weights are conceivable. In [FLLL01] the variance-optimal structure for a Monte Carlo estimator of such a weight is derived. However, the variance-optimal weights are difficult to implement in an estimation procedure. The methods demonstrated above can also be used to obtain estimators for exotic options (e.g. see [GKH03]). A comprehensive overview of the use of Malliavin Calculus in finance for practical purposes is given in [KHM04].

2.1.2 Conditional Expectation

The use of Malliavin Calculus to compute conditional expectations was introduced by [FLLL01]. They derived a representation formula that can be used in Monte Carlo simulations with little effort. The main theorem of their work is:

Proposition 2.1.4. Let $F, G \in \mathbb{D}^{1,2}$ and assume there exists a process $u \in H$ such that $E\big[\langle DG, u \rangle_H\, \big|\, \sigma(F, G)\big] = 1$. Moreover let $\Phi$ be a continuously differentiable function of polynomial growth. Writing $H(y) = 1_{\{y > 0\}}$ we obtain
$$E\big[\Phi(F)\, \big|\, G = 0\big] = \frac{E\big[\Phi(F)\, H(G)\, \delta(u) - \Phi'(F)\, H(G)\, D^u F\big]}{E\big[H(G)\, \delta(u)\big]}.$$

Proof. First we note that
$$E\big[\Phi(F)\, \big|\, G = 0\big] = \lim_{\epsilon \to 0^+} \frac{E\big[\Phi(F)\, 1_{(-\epsilon,\epsilon)}(G)\big]}{E\big[1_{(-\epsilon,\epsilon)}(G)\big]},$$
and introduce
$$H_{\epsilon}(y) = \begin{cases} 0 & \text{if } y \leq -\epsilon, \\ \frac{y + \epsilon}{2} & \text{if } y \in [-\epsilon, \epsilon], \\ \epsilon & \text{if } y \geq \epsilon, \end{cases}$$
so that $2\, D\big(H_{\epsilon}(G)\big) = 1_{(-\epsilon,\epsilon)}(G)\, DG$. Next, by conditioning on $\sigma(F, G)$ and using the chain rule and the duality relation, we obtain for the numerator
$$E\big[\Phi(F)\, 1_{(-\epsilon,\epsilon)}(G)\big] = E\big[\Phi(F)\, 1_{(-\epsilon,\epsilon)}(G)\, \langle DG, u \rangle_H\big] = 2\, E\big[\Phi(F)\, H_{\epsilon}(G)\, \delta(u)\big] - 2\, E\big[H_{\epsilon}(G)\, \Phi'(F)\, \langle DF, u \rangle_H\big]$$
and for the denominator
$$E\big[1_{(-\epsilon,\epsilon)}(G)\big] = E\big[1_{(-\epsilon,\epsilon)}(G)\, \langle DG, u \rangle_H\big] = 2\, E\big[\langle D(H_{\epsilon}(G)), u \rangle_H\big] = 2\, E\big[H_{\epsilon}(G)\, \delta(u)\big].$$
Noting that $\frac{1}{\epsilon} H_{\epsilon}(G)$ converges a.s. to $H(G)$ yields the desired result.

Corollary 2.1.5. If the process $u$ additionally satisfies $E\big[\langle DF, u \rangle_H\, \big|\, \sigma(F, G)\big] = 0$, we obtain, keeping everything else as in the above proposition,
$$E\big[\Phi(F)\, \big|\, G = 0\big] = \frac{E\big[\Phi(F)\, H(G)\, \delta(u)\big]}{E\big[H(G)\, \delta(u)\big]}.$$

Example 2.1.6. In this example we derive a formula for the price of an option with pay-off $\varphi(S_T)$ at time $T$ that is suitable for Monte Carlo simulations. The price is given by $V_0(S,t)$, meaning today's price of an option yielding the amount $\varphi(S_T)$ at time $T$ under the condition that $S_t = S$, where
$$V_0(S,t) = e^{-rT} E_Q^{S_0}[\varphi(S_T)\,|\,S_t = S].$$
In order to handle the conditional expectation in this formula we make use of Corollary 2.1.5 and obtain
$$E_Q^{S_0}[\varphi(S_T)\,|\,S_t = S] = \frac{E_Q^{S_0}[\varphi(S_T)H(S_t - S)\delta(u)]}{E_Q^{S_0}[H(S_t - S)\delta(u)]}$$
for a suitable $u$ satisfying the conditions of Proposition 2.1.4 and Corollary 2.1.5. Such a $u$ is given by
$$u_s = \frac{1}{\sigma S_t}\,\tilde u_s, \qquad \tilde u_s = \frac{1}{t}\,1_{(0,t)}(s) - \frac{1}{T-t}\,1_{(t,T)}(s).$$
The only unknown term left is $\delta(u)$. We calculate the Skorohod integral with the help of Proposition 1.3.2:
$$\delta(u) = \frac{1}{\sigma}\,\delta(S_t^{-1}\tilde u) = \frac{1}{\sigma}\left(S_t^{-1}\delta(\tilde u) - \int_0^T D_s S_t^{-1}\,\tilde u_s\, ds\right).$$
Calculating the derivative $D_s S_t^{-1}$ and the Skorohod integral of the adapted integrand $\tilde u_s$, we get
$$\frac{1}{\sigma}\left(S_t^{-1}\delta(\tilde u) - \int_0^T D_s S_t^{-1}\,\tilde u_s\, ds\right) = S_t^{-1}\left(\frac{W_t}{\sigma t} - \frac{W_T - W_t}{\sigma(T-t)}\right) + S_t^{-1}.$$
This yields the pricing formula suitable for Monte Carlo methods:
$$V_0(S,t) = e^{-rT}\, \frac{E_Q^{S_0}\!\left[\varphi(S_T)\,H(S_t - S)\,S_t^{-1}\!\left(\dfrac{W_t}{\sigma t} - \dfrac{W_T - W_t}{\sigma(T-t)} + 1\right)\right]}{E_Q^{S_0}\!\left[H(S_t - S)\,S_t^{-1}\!\left(\dfrac{W_t}{\sigma t} - \dfrac{W_T - W_t}{\sigma(T-t)} + 1\right)\right]}.$$
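The pricing formula can be implemented in a few lines. The following Python sketch evaluates it for a call pay-off under a Black-Scholes model; all parameter values are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
S0, r, sigma = 100.0, 0.05, 0.2
t, T, K, S = 0.5, 1.0, 100.0, 100.0
n = 500_000

# Brownian motion under Q at times t and T
Wt = rng.normal(0.0, np.sqrt(t), n)
WT = Wt + rng.normal(0.0, np.sqrt(T - t), n)

# Geometric Brownian motion under the risk-neutral measure
St = S0 * np.exp((r - 0.5 * sigma**2) * t + sigma * Wt)
ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * WT)

payoff = np.maximum(ST - K, 0.0)   # call pay-off phi(S_T)

# Malliavin weight H(S_t - S) S_t^{-1} (W_t/(sigma t) - (W_T - W_t)/(sigma (T-t)) + 1)
weight = (St > S) / St * (Wt / (sigma * t) - (WT - Wt) / (sigma * (T - t)) + 1.0)

V0 = np.exp(-r * T) * np.mean(payoff * weight) / np.mean(weight)
print(V0)
```

Every simulated path contributes to the estimate of the conditional expectation, which is the practical advantage over naive kernel-based conditioning.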
Remark 2.1.7. The formula representing the conditional expectation may be used to calibrate stochastic volatility models, as is also shown in [FLLL01]. Another frequent application of the formula can be found in the literature on the pricing of American options (e.g. XXX). Depending on the choice of $u$, estimators with smaller variance are possible; see [BET03] for details.

2.2 Hedging

In this section we extend the Clark-Ocone formula from Proposition 1.3.8 to the case of an equivalent measure $Q$ under which the Brownian motion is given by
$$\tilde W_t = W_t + \int_0^t \theta_s\, ds.$$
Here $(\theta_s)_{s\in[0,T]}$ is a measurable adapted process such that $\int_0^T \theta_s^2\, ds < \infty$ holds a.s. The measure $Q$ is given by the process $Z_T$,
$$Z_t = \frac{dQ}{dP}\Big|_{\mathcal F_t}, \qquad \text{where} \quad Z_t = \exp\left(-\int_0^t \theta_s\, dW_s - \frac{1}{2}\int_0^t \theta_s^2\, ds\right).$$
In order to prove the generalized Clark-Ocone formula we first need two results that are given by the next lemmata.

Lemma 2.2.1 (Generalized Bayes Formula). Suppose $G \in L^1(Q)$. Then
$$E_Q[G\,|\,\mathcal F_t] = \frac{E[Z_T G\,|\,\mathcal F_t]}{Z_t}.$$

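For $t = 0$ and constant $\theta$ the lemma reduces to $E_Q[G] = E[Z_T G]$, which is easy to verify by simulation (a sketch with our own parameter choices): for $G = W_T$ we have, by Girsanov, $E_Q[W_T] = E_Q[\tilde W_T] - \theta T = -\theta T$.

```python
import numpy as np

rng = np.random.default_rng(2)
theta, T, n = 0.5, 1.0, 1_000_000

WT = rng.normal(0.0, np.sqrt(T), n)              # W_T under P
ZT = np.exp(-theta * WT - 0.5 * theta**2 * T)    # density dQ/dP on F_T

lhs = np.mean(ZT * WT)   # Bayes representation of E_Q[W_T]
print(lhs)               # should be close to -theta * T = -0.5
```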
Lemma 2.2.2. Let $F \in \mathbb D^{1,2}$ be an $\mathcal F_T$-measurable random variable and let $\theta \in \mathbb D^{1,2}(L^2([0,T]))$. Assume
$$E[Z_T^2 F^2] + E\!\left[\int_0^T (Z_T D_t F)^2\, dt\right] < \infty$$
and
$$E\!\left[\int_0^T \left\{Z_T F\left(\theta_t + \int_t^T D_t\theta_s\, dW_s + \int_t^T \theta_s D_t\theta_s\, ds\right)\right\}^2 dt\right] < \infty.$$
Then $Z_T F \in \mathbb D^{1,2}$ and
$$D_t(Z_T F) = Z_T D_t F - Z_T F\left(\theta_t + \int_t^T D_t\theta_s\, dW_s + \int_t^T \theta_s D_t\theta_s\, ds\right).$$

Proof. First, by the product and chain rules we obtain
$$D_t(Z_T F) = Z_T D_t F + F D_t Z_T = Z_T D_t F - F Z_T\, D_t\!\left(\int_0^T \theta_s\, dW_s + \frac{1}{2}\int_0^T \theta_s^2\, ds\right).$$
Proposition 1.3.5 and $D_t\theta_s^2 = 2\theta_s D_t\theta_s$ yield
$$D_t\!\left(\int_0^T \theta_s\, dW_s + \frac{1}{2}\int_0^T \theta_s^2\, ds\right) = \theta_t + \int_0^T D_t\theta_s\, dW_s + \int_0^T \theta_s D_t\theta_s\, ds.$$
Finally, the following observation leads to the desired result:
$$D_t\theta_s = D_t(E[\theta_s\,|\,\mathcal F_s]) = E[D_t\theta_s\,|\,\mathcal F_s]\,1_{[0,s]}(t),$$
so that $D_t\theta_s = 0$ for $t > s$ and both integrals may be taken from $t$ to $T$. The last equality above is due to Proposition 1.2.9.

Now we are able to derive the generalized Clark-Ocone representation formula.

Proposition 2.2.3. Let the assumptions of Lemma 2.2.2 hold true. Then
$$F = E_Q[F] + \int_0^T E_Q\!\left[D_t F - F\int_t^T D_t\theta_s\, d\tilde W_s \,\Big|\, \mathcal F_t\right] d\tilde W_t. \qquad (2.3)$$

Proof. We define $Y_t = E_Q[F\,|\,\mathcal F_t]$. With the help of Lemma 2.2.1 we rewrite this as $Y_t = Z_t^{-1} E[Z_T F\,|\,\mathcal F_t]$. The Clark-Ocone formula yields
$$E[Z_T F\,|\,\mathcal F_t] = E[Z_T F] + \int_0^t E[D_s(Z_T F)\,|\,\mathcal F_s]\, dW_s.$$
Combining the last two steps we obtain
$$Y_t = Z_t^{-1} E_Q[F] + Z_t^{-1}\int_0^t E[D_s(Z_T F)\,|\,\mathcal F_s]\, dW_s. \qquad (2.4)$$
From the previous lemma we get
$$E[D_t(Z_T F)\,|\,\mathcal F_t] = E\!\left[Z_T\!\left(D_t F - F\Big(\theta_t + \int_t^T D_t\theta_s\, d\tilde W_s\Big)\right)\Big|\,\mathcal F_t\right] \qquad (2.5)$$
$$= Z_t\, E_Q\!\left[D_t F - F\Big(\theta_t + \int_t^T D_t\theta_s\, d\tilde W_s\Big)\Big|\,\mathcal F_t\right] \qquad (2.6)$$
$$= Z_t\gamma_t - Z_t E_Q[F\theta_t\,|\,\mathcal F_t] = Z_t\gamma_t - Z_t Y_t\theta_t. \qquad (2.7)$$
Here $\gamma_t = E_Q[D_t F - F\int_t^T D_t\theta_s\, d\tilde W_s\,|\,\mathcal F_t]$ and the last equality stems from the adaptedness of $\theta$. Substituting (2.7) into (2.4) yields
$$Y_t = Z_t^{-1} E_Q[F] + Z_t^{-1}\int_0^t Z_s\gamma_s\, dW_s - Z_t^{-1}\int_0^t Z_s Y_s\theta_s\, dW_s.$$
Applying Ito's product formula to the single terms in the above equation, thereby using that $d(Z_t^{-1}) = Z_t^{-1}(\theta_t\, dW_t + \theta_t^2\, dt)$, we obtain
$$dY_t = Y_t(\theta_t\, dW_t + \theta_t^2\, dt) + \gamma_t\, dW_t - Y_t\theta_t\, dW_t + \gamma_t\theta_t\, dt - Y_t\theta_t^2\, dt = \gamma_t\, d\tilde W_t.$$
Since $Y_0 = E_Q[F]$ and $Y_T = F$, this establishes the result.

Next we are going to look at an example where we use the generalized Clark-Ocone representation to obtain a hedging portfolio.

Example 2.2.4. Let $B$ be the pay-off of an option. We are again in a Black-Scholes world with constant coefficients. In this complete market the pay-off is replicable by a suitable hedging strategy $\xi_t$, and the value of the discounted replicating portfolio at time $t$ is given by
$$\tilde V_t(\xi) := e^{-rT} E_Q[B] + \int_0^t \sigma \tilde S_s \xi_s\, d\tilde W_s,$$
where $\tilde S_s = e^{-rs}S_s$ denotes the discounted stock price. Let us assume that $B$, the pay-off, and $\theta$, the process that induces $Q$, satisfy the assumptions of Lemma 2.2.2 ($\theta_t$ does this trivially as it is a constant in the present setting). Then by the generalized Clark-Ocone representation formula the following equality must hold true:
$$\sigma \tilde S_t \xi_t = e^{-rT} E_Q\!\left[D_t B - B\int_t^T D_t\theta_s\, d\tilde W_s\,\Big|\,\mathcal F_t\right].$$
Therefore the portfolio is given by
$$\xi_t = \frac{e^{-r(T-t)}}{\sigma S_t}\, E_Q\!\left[D_t B - B\int_t^T D_t\theta_s\, d\tilde W_s\,\Big|\,\mathcal F_t\right].$$
As $\theta$ is a constant this reduces to
$$\xi_t = \frac{e^{-r(T-t)}\, E_Q[D_t B\,|\,\mathcal F_t]}{\sigma S_t}.$$
If we assume that the pay-off $B$ is given by $\varphi(S_T)$ for a suitable function $\varphi$, then $D_t B = \varphi'(S_T)\sigma S_T$ and we obtain
$$\xi_t = \frac{e^{-r(T-t)}\, E_Q[\varphi'(S_T) S_T\,|\,\mathcal F_t]}{S_t}.$$
For $t = 0$ this recovers exactly the $\Delta$ of Example 2.1.2, i.e.
$$\xi_0 = \Delta = \frac{e^{-rT}\, E_Q[\varphi'(S_T) S_T]}{S_0}.$$
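For a call pay-off $\varphi(S_T) = (S_T - K)^+$, so that $\varphi'(x) = 1_{\{x > K\}}$, the hedge ratio above equals the Black-Scholes delta $N(d_1)$. The following Monte Carlo sketch confirms this (parameter values are our own illustrative choices):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
S, K, r, sigma, tau = 100.0, 100.0, 0.05, 0.2, 1.0  # tau = T - t
n = 1_000_000

# Simulate S_T given S_t = S under the risk-neutral measure Q
Z = rng.normal(0.0, 1.0, n)
ST = S * np.exp((r - 0.5 * sigma**2) * tau + sigma * math.sqrt(tau) * Z)

# xi_t = e^{-r tau} E_Q[phi'(S_T) S_T | F_t] / S_t with phi'(x) = 1_{x > K}
xi = math.exp(-r * tau) * np.mean((ST > K) * ST) / S

# Black-Scholes delta N(d1) for comparison
d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * math.sqrt(tau))
delta_bs = 0.5 * (1.0 + math.erf(d1 / math.sqrt(2.0)))

print(xi, delta_bs)   # the two values should agree closely
```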

Remark 2.2.5. We have seen in the above example that the Malliavin approach leads to the $\Delta$-hedging formula for an optimal portfolio. In [Ber02] and [Ber03] Bermin analyses how the classical approach and the Malliavin Calculus based approach compare. In [Ber02] he shows that the generalized Clark-Ocone representation formula holds indeed for any square integrable random variable, not only for members of $\mathbb D^{1,2}$. Using this result, [Ber03] establishes that the Malliavin approach yields the $\Delta$-hedging formula under slightly weaker conditions than the traditional approach.

2.3 Insider Trading

Insider trading is a natural application of Malliavin Calculus and the related theory of Skorohod integration. The use of these tools arises from the need to model information that is larger than the market knowledge. Such larger information sets lead to processes that are no longer adapted to the market filtration and are therefore not treatable by Ito integration theory. For this reason we will consider Skorohod integrals, or more exactly forward integrals, which are closely related to Skorohod integrals, as they allow anticipative processes. We proceed by first developing a market set-up, then giving some pertinent results from [Nua06], and finally obtaining an optimal portfolio along the lines of [A04].

Let $(B_t)_{t\in[0,T]}$ be a standard Brownian motion on a filtered probability space $(\Omega, \mathcal F, \mathcal F_t, P)$. An insider is a person who is informed of the information contained in a filtration $\mathbb G = (\mathcal G_t)_{t\in[0,T]}$ strictly bigger than the filtration $\mathbb F = (\mathcal F_t)_{t\in[0,T]}$. As pointed out, in this case the question arises how to understand an integral
$$\int_0^T \varphi_t\, dB_t \qquad (2.8)$$
when the process $\varphi_t$ is $\mathcal G_t$-adapted. One common approach is to assume that $B_t$ is still a semi-martingale with respect to $\mathbb G$ and allows a decomposition $B_t = \tilde B_t + A_t$, where $\tilde B_t$ is a Brownian motion under $\mathbb G$ and $A_t$ a continuous, finite variation process. In case such a decomposition exists, the above integral can then be defined by
$$\int_0^T \varphi_t\, dB_t := \int_0^T \varphi_t\, d\tilde B_t + \int_0^T \varphi_t\, dA_t.$$

However, such a decomposition does not always exist and we therefore look for a more general approach to defining the integral. One possible way to proceed is by considering forward integrals.

Definition 2.3.1. Let $\varphi = \{\varphi_t,\ t\in[0,T]\}$ be a stochastic process. The forward stochastic integral is then given as the limit over all partitions $0 \le t_0 \le \dots \le t_n \le T$,
$$\int_0^T \varphi_t\, d^-B_t := \lim_{\Delta t_i \to 0} \sum_i \varphi_{t_i}(B_{t_{i+1}} - B_{t_i}),$$
where the limit is understood as a limit in probability. If the limit exists in $L^2(P)$ then we write $\varphi \in \mathrm{Dom}_2$.

This integral is closely related to the Skorohod integral, as we will see later on, and therefore also allows non-adapted integrands. In the sequel we will always consider an integral of the form (2.8) as a forward integral. The motivation for using forward integrals in the market model is given by the following observations.

Considering a buy and hold strategy $\varphi_t = 1_{[t_1,t_2]}(t)$ for $0 \le t_1 \le t_2 \le T$ and the Brownian motion $B_t$ as the price process of an asset, we obtain
$$\int_0^T \varphi_t\, d^-B_t = \int_{t_1}^{t_2} d^-B_t = \lim_{\Delta t_i \to 0} \sum_i (B_{t_{i+1}} - B_{t_i}) = \int_{t_1}^{t_2} dB_t = B_{t_2} - B_{t_1}.$$
Therefore, just as in standard Ito integration theory, the forward integral gives just the amount gained over the period $[t_1, t_2]$.
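Numerically, the buy-and-hold forward Riemann sum telescopes to the path increment on every partition, as a short simulation illustrates (discretization choices are our own):

```python
import numpy as np

rng = np.random.default_rng(4)
T, n = 1.0, 10_000
dt = T / n

# One Brownian path B_0, ..., B_{t_n} on a uniform partition
B = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))))

# Buy-and-hold strategy 1_[t1,t2] with t1 = t_2500 and t2 = t_7500
i1, i2 = 2500, 7500
phi = np.zeros(n)
phi[i1:i2] = 1.0

# Forward Riemann sum: sum_i phi_{t_i} (B_{t_{i+1}} - B_{t_i})
forward_sum = np.sum(phi * np.diff(B))

print(forward_sum, B[i2] - B[i1])   # the sum telescopes to B_{t2} - B_{t1}
```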

Assume we are in the situation pointed out previously, i.e. $B_t$ is still a semi-martingale with respect to $\mathbb G$ and allows a decomposition $B_t = \tilde B_t + A_t$, where $\tilde B_t$ is a Brownian motion under $\mathbb G$ and $A_t$ a continuous, finite variation process. Then, for a forward integrable, $\mathbb G$-adapted process $\varphi_t$,
$$\int_0^T \varphi_t\, d^-B_t = \int_0^T \varphi_t\, dB_t = \int_0^T \varphi_t\, d\tilde B_t + \int_0^T \varphi_t\, dA_t.$$
Hence, the setting where $B_t$ is still a semi-martingale is just a special instance of the more general forward integral. The above observation can easily be confirmed by considering
$$\int_0^T \varphi_t\, d\tilde B_t + \int_0^T \varphi_t\, dA_t = \lim_{\Delta t_i \to 0} \sum_i \varphi_{t_i}\big(\Delta\tilde B_{t_i} + \Delta A_{t_i}\big) = \lim_{\Delta t_i \to 0} \sum_i \varphi_{t_i}\,\Delta B_{t_i} = \int_0^T \varphi_t\, d^-B_t.$$

The market modelled by forward integrals has dynamics given by the equations
$$dS_t^0 = \rho_t S_t^0\, dt, \qquad S_0^0 = 1, \qquad (2.9)$$
$$dS_t = S_t[\mu_t\, dt + \sigma_t\, d^-B_t], \qquad S_0 > 0. \qquad (2.10)$$
The differential equations are, as always, understood as the corresponding integrals, and the coefficients satisfy the following conditions: $\rho_t$, $\mu_t$ and $\sigma_t$ are $\mathbb G$-adapted; $E[\int_0^T \{|\rho_t| + |\mu_t| + \sigma_t^2\}\, dt] < \infty$; $\sigma_t$ is Malliavin differentiable and $(D^-\sigma)_s$ exists (for a definition see below); the equation (2.10) has a unique $\mathbb G$-adapted solution.

Moreover, we consider three filtrations (some of them already introduced) that are related by
$$\mathcal H_t \subseteq \mathcal F_t \subseteq \mathcal G_t \subseteq \mathcal F \quad \text{for any } t \in [0,T].$$
In the present setting we consider a portfolio $\pi_t$ as an $\mathcal H_t$-adapted process giving the fraction of the total wealth invested into the risky asset $S_t$.

Definition 2.3.2. The set $\mathcal A_{\mathbb H}$ of admissible portfolios consists of all $\mathcal H_t$-adapted processes $\pi_t$ that are Malliavin differentiable and satisfy the conditions: $\sigma_t\pi_t$ is Skorohod integrable;
$$E\left[\int_0^T |\pi_t (D^-\sigma)_t|\, dt\right] < \infty; \qquad E\left[\int_0^T |\sigma_t\pi_t|\,|\mu_t|\, dt\right] < \infty.$$

In this model a large investor with inside information might have access to the larger filtration $\mathcal G_t$ and influence the market by his actions. An agent, trying to optimise his wealth in some sense, has only the partial information contained in $\mathcal H_t$ and behaves accordingly. We are now trying to solve the logarithmic utility maximisation problem given by
$$\Phi(x) = \sup_{\pi \in \mathcal A_{\mathbb H}} E^x[\log(X_T^\pi)], \qquad (2.11)$$
where the wealth process is given through the dynamics
$$dX_t = \big(\rho_t + [\mu_t - \rho_t]\pi_t\big)X_t\, dt + \pi_t\sigma_t X_t\, d^-B_t, \qquad X_0 = x, \qquad (2.12)$$
if the investor has an initial capital of $x$. An optimal portfolio is denoted by $\hat\pi_t$ and satisfies $\Phi(x) = E^x[\log(X_T^{\hat\pi})]$.

Next, before we solve the above problem, we review some pertinent results from Malliavin Calculus and point out the relation between the Skorohod and the forward integral. First we define a random variable that can be understood as a left Malliavin derivative.

Definition 2.3.3. Assume that $X \in \mathbb L^{1,2} := \mathbb D^{1,2}(L^2([0,T]))$ and $p \in [1,2]$. Then we denote by $D^-X$ the element of $L^p([0,T]\times\Omega)$ satisfying
$$\lim_{n\to\infty} \int_0^T \sup_{(s-1/n)\vee 0 \le t < s} E\big[|D_s X_t - (D^-X)_s|^p\big]\, ds = 0.$$
We denote by $\mathbb L^{1,2}_{p-}$ the class of processes in $\mathbb L^{1,2}$ that satisfy the above equation.

With the help of this definition we are now able to state the relation between the forward and the Skorohod integral.

Proposition 2.3.4. Let $\varphi = \{\varphi_t,\ t\in[0,T]\}$ be a stochastic process which is continuous in the norm of the space $\mathbb D^{1,2}$. Suppose that $\varphi \in \mathbb L^{1,2}_{1-}$. Then $\varphi$ is forward integrable and
$$\delta(\varphi) = \int_0^T \varphi_t\, d^-B_t - \int_0^T (D^-\varphi)_s\, ds.$$

Proof. Using equation (1.17) we obtain
$$\delta\Big(\sum_{i=0}^{n-1} \varphi_{t_i}\, 1_{(t_i,t_{i+1}]}\Big) = \sum_{i=0}^{n-1} \varphi_{t_i}(B_{t_{i+1}} - B_{t_i}) - \sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}} D_s\varphi_{t_i}\, ds. \qquad (2.13)$$
Due to the continuity of $\varphi$ in the $\mathbb D^{1,2}$ norm, the process
$$\varphi^n = \sum_{i=0}^{n-1} \varphi_{t_i}\, 1_{(t_i,t_{i+1}]}(t)$$
converges in the norm of $\mathbb L^{1,2}$ to the process $\varphi$. Therefore $\delta(\varphi^n)$ converges in $L^2(\Omega)$ to $\delta(\varphi)$. The first term on the r.h.s. of (2.13) converges by definition of the forward integral towards $\int_0^T \varphi_t\, d^-B_t$, and for the second term on the r.h.s. we have the following approximation:
$$E\Big[\Big|\sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}} D_s\varphi_{t_i}\, ds - \int_0^T (D^-\varphi)_s\, ds\Big|\Big] \le \sum_{i=0}^{n-1}\int_{t_i}^{t_{i+1}} E\big[|D_s\varphi_{t_i} - (D^-\varphi)_s|\big]\, ds \le \int_0^T \sup_{(s-\Delta t_i)\vee 0 \le t < s} E\big[|D_s\varphi_t - (D^-\varphi)_s|\big]\, ds.$$
By the definition of the space $\mathbb L^{1,2}_{1-}$ the last term converges to zero as the partition gets finer, i.e. $\Delta t_i$ tends to zero.

As the expectation of the Skorohod integral is zero, we obtain the following lemma.

Lemma 2.3.5. Let $\varphi$ be as in the above proposition. Then
$$E\Big[\int_0^T \varphi_t\, d^-B_t\Big] = E\Big[\int_0^T (D^-\varphi)_s\, ds\Big],$$
provided the expectation exists.
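A classical illustration of Proposition 2.3.4 (a toy example of our own): for the anticipating integrand $\varphi_t \equiv B_T$ the forward Riemann sum telescopes to $B_T^2$, while $D_s\varphi_t = 1$ gives $\int_0^T (D^-\varphi)_s\, ds = T$, so $\delta(\varphi) = B_T^2 - T$, whose expectation is zero as Lemma 2.3.5 requires. The sketch below checks both facts by simulation.

```python
import numpy as np

rng = np.random.default_rng(5)
T, n, n_paths = 1.0, 100, 50_000
dt = T / n

# Brownian increments for many paths
dB = rng.normal(0.0, np.sqrt(dt), (n_paths, n))
BT = dB.sum(axis=1)                          # B_T for each path

# Forward Riemann sum of the anticipating integrand phi_t = B_T
forward = np.sum(BT[:, None] * dB, axis=1)   # telescopes to B_T^2 path by path

# Skorohod integral via Proposition 2.3.4: delta(phi) = forward integral - trace term T
skorohod = forward - T

print(np.max(np.abs(forward - BT**2)), np.mean(skorohod))
```

The trace correction $\int_0^T (D^-\varphi)_s\, ds$ is exactly what restores the zero-mean property lost by the anticipating Riemann sum.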
An Ito type formula for forward integrals exists ([Nua06], Theorem 3.2.7), yielding a solution to the wealth process (2.12):
$$X_t = X_0\exp\left(\int_0^t \Big\{\rho_s + (\mu_s - \rho_s)\pi_s - \frac{1}{2}\sigma_s^2\pi_s^2\Big\}\, ds + \int_0^t \pi_s\sigma_s\, d^-B_s\right). \qquad (2.14)$$

This finally enables us to solve the logarithmic utility maximisation problem. By using Lemma 2.3.5 we obtain
$$E[\log X_T^\pi] - \log x = E\left[\int_0^T \Big\{\rho_t + (\mu_t - \rho_t)\pi_t - \frac{1}{2}\sigma_t^2\pi_t^2\Big\}\, dt + \int_0^T \pi_t\sigma_t\, d^-B_t\right]$$
$$= E\left[\int_0^T \Big\{\rho_t + (\mu_t - \rho_t)\pi_t - \frac{1}{2}\sigma_t^2\pi_t^2 + D^-(\sigma\pi)_t\Big\}\, dt\right].$$
As $\pi_t$ is $\mathcal H_t$-adapted and $\mathcal H_t \subseteq \mathcal F_t$, we obtain by the same reasoning as in (2.3) that $D_s\pi_t = 0$ for $s > t$. Applying the Malliavin chain rule yields
$$D^-(\sigma\pi)_t = \pi_t D^-(\sigma)_t + \sigma_t D^-(\pi)_t = \pi_t D^-(\sigma)_t.$$
Therefore the above equation turns into
$$E[\log X_T^\pi] - \log x = E\left[\int_0^T \Big\{\rho_t + \beta_t\pi_t - \frac{1}{2}\sigma_t^2\pi_t^2\Big\}\, dt\right],$$
where $\beta_t := \mu_t - \rho_t + D^-(\sigma)_t$. We basically could maximise under the integral now if there were not a restriction due to the different measurabilities of the coefficients. We bypass the problem by defining $\hat\beta_t = E[\beta_t\,|\,\mathcal H_t]$ and similarly for the other coefficients, and obtain
$$E[\log X_T^\pi] - \log x = E\left[\int_0^T \Big\{\hat\rho_t + \hat\beta_t\pi_t - \frac{1}{2}\hat\sigma_t^2\pi_t^2\Big\}\, dt\right].$$
We maximise pointwise and obtain the optimal portfolio
$$\hat\pi_t = \frac{\hat\beta_t}{\hat\sigma_t^2}.$$
Summarizing the above we get the following result:

Proposition 2.3.6. Suppose that $\hat\sigma_t \ne 0$ a.e. and that for $\hat\beta_t$ as defined above
$$E\left[\int_0^T \frac{\hat\beta_t^2}{\hat\sigma_t^2}\, dt\right] < \infty.$$
Then
$$\Phi(x) = \log x + E\left[\int_0^T \Big(\hat\rho_t + \frac{\hat\beta_t^2}{2\hat\sigma_t^2}\Big)\, dt\right] < \infty.$$
If
$$\hat\pi_t = \frac{E[\beta_t\,|\,\mathcal H_t]}{E[\sigma_t^2\,|\,\mathcal H_t]} \in \mathcal A_{\mathbb H}$$
is an admissible portfolio, then $\hat\pi$ is the optimal control.

As a consequence, the famous Merton solution can be recovered easily from this result.

Example 2.3.7. Assume that $\mathcal H_t = \mathcal F_t = \mathcal G_t$. Then $(D^-\sigma)_s = 0$ and the optimal portfolio for this problem is given by
$$\hat\pi_t = \frac{E[\beta_t\,|\,\mathcal H_t]}{E[\sigma_t^2\,|\,\mathcal H_t]} = \frac{\mu_t - \rho_t}{\sigma_t^2}.$$

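The pointwise maximisation behind $\hat\pi_t = \hat\beta_t/\hat\sigma_t^2$ is elementary and can be checked numerically. The sketch below (with our own illustrative numbers $\mu = 0.10$, $\rho = 0.03$, $\sigma = 0.2$ and $D^-\sigma = 0$) compares the closed-form Merton fraction with a brute-force grid search over the concave objective $\rho + \beta\pi - \tfrac12\sigma^2\pi^2$.

```python
import numpy as np

mu, rho, sigma = 0.10, 0.03, 0.2
beta = mu - rho            # D^- sigma = 0 in the Merton case
pi_hat = beta / sigma**2   # closed-form optimal fraction of wealth in the risky asset

# Brute-force check: the integrand rho + beta*pi - 0.5*sigma^2*pi^2 is concave in pi
grid = np.linspace(-5.0, 5.0, 100_001)
objective = rho + beta * grid - 0.5 * sigma**2 * grid**2
pi_grid = grid[np.argmax(objective)]

print(pi_hat, pi_grid)   # both should be 1.75, up to the grid spacing
```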
Remark 2.3.8. The use of anticipative calculus in insider problems has been developed in [LNN03], where a problem was treated in which the insider information consists of knowing the terminal value of some random variable. [Bk05] generalises the results of this section (mainly taken from [A04]) to other utility functions and a more general setting. In [KHS06] results from [A04] are also extended. A totally different approach to the use of Malliavin Calculus is taken by [Imk03]. He assumes a set-up where the Brownian motion is a semi-martingale under the enlarged filtration and tries to characterise the additional drift in terms of results from Malliavin Calculus.


Bibliography

[Ber02] H. Bermin. A general approach to hedging options: Applications to barrier and partial barrier options. Mathematical Finance, 12(3):199–218, 2002.

[Ber03] H. Bermin. Hedging options: The Malliavin calculus approach versus the Δ-hedging approach. Mathematical Finance, 13(1):73–84, 2003.

[BET03] B. Bouchard, I. Ekeland, and N. Touzi. On the Malliavin approach to Monte Carlo approximation of conditional expectations. Working Paper, 2003.

[Bk05] F. Biagini and B. Øksendal. A general stochastic calculus approach to insider trading. Applied Mathematics and Optimization, 52(2):167–181, 2005.

[FLL+99] E. Fournié, J. Lasry, J. Lebuchoux, P. Lions, and N. Touzi. Applications of Malliavin calculus to Monte Carlo methods in finance. Finance and Stochastics, 3:391–412, 1999.

[FLLL01] E. Fournié, J. Lasry, P. Lions, and J. Lebuchoux. Applications of Malliavin calculus to Monte Carlo methods in finance. II. Finance and Stochastics, 5(2):201–236, 2001.

[GKH03] E. Gobet and A. Kohatsu-Higa. Computation of Greeks for barrier and lookback options using Malliavin calculus. Elect. Comm. in Probab., (8):51–62, 2003.

[Imk03] P. Imkeller. Malliavin's calculus in insider models: Additional utility and free lunches. Mathematical Finance, 13(1):153–169, 2003.

[KHM04] A. Kohatsu-Higa and M. Montero. Malliavin calculus in finance. Handbook of Computational and Numerical Methods in Finance, pages 111–174, 2004.

[KHS06] A. Kohatsu-Higa and A. Sulem. Utility maximization in an insider influenced market. Mathematical Finance, 16(1):153–179, 2006.

[KO91] I. Karatzas and D. Ocone. A generalized Clark representation formula, with applications to optimal portfolios. Stoch. Stoch. Rep., 34:187–220, 1991.

[LNN03] J. León, R. Navarro, and D. Nualart. An anticipating calculus approach to the utility maximization of an insider. Mathematical Finance, 13(1):171–185, 2003.

[Nua06] D. Nualart. The Malliavin Calculus and Related Topics. Probability and Its Applications. Springer, 2nd edition, 2006.

[A04] B. Øksendal and A. Sulem. Partial observation control in an anticipating environment. Russian Math. Surveys, 59(2):355–375, 2004.

[ks96] B. Øksendal. An introduction to Malliavin calculus with applications to economics. Lecture Notes, 1996.
