Explicit form and robustness of martingale representations
Jean Jacod*, Sylvie Méléard† and Philip Protter‡

ABSTRACT
Stochastic integral representation of martingales has been undergoing a renaissance due to questions motivated by stochastic finance theory. In the Brownian case one usually has formulas (of differing degrees of exactness) for the predictable integrands. We extend some of these to Markov cases where one does not necessarily have stochastic integral representation of all martingales. Moreover we study various convergence questions that arise naturally from (for example) approximations of "price processes" via Euler schemes for solutions of stochastic differential equations. We obtain general results of the following type: let $U$, $U^n$ be random variables with decompositions
$$U = \xi + \int_0^\infty \eta_s\,dX_s + N_\infty,\qquad U^n = \xi^n + \int_0^\infty \eta^n_s\,dX^n_s + N^n_\infty,$$
where $X$, $N$, $X^n$, $N^n$ are martingales. If $X^n \to X$ and $U^n \to U$, when and how does $\eta^n \to \eta$?

1 Introduction
1)
Consider a sequence $X^n$ of square-integrable martingales which converge to another square-integrable martingale $X$: this convergence may hold in a strong sense (as in $L^2$), all the $X^n$'s and $X$ being on the same probability space, or it may hold in the weak sense (convergence in law), each $X^n$ being defined on its own probability space. Let also $\Phi$ be a bounded continuous functional (say, on the Skorokhod space of all right continuous functions with left limits), and set $U^n = \Phi(X^n)$ and $U = \Phi(X)$, so that $U^n$ converges to $U$. Suppose in addition that we have the martingale representation property for each $X^n$ and for $X$, so we can write $U^n$ and $U$ as stochastic integrals as follows:
$$U^n = \xi^n + \int_0^\infty \eta^n_s\,dX^n_s,\qquad U = \xi + \int_0^\infty \eta_s\,dX_s, \eqno(1.1)$$
where $\xi^n$ and $\xi$ are random variables measurable w.r.t. the relevant initial $\sigma$-fields, and $\eta^n$ and $\eta$ are predictable processes. Then an important theoretical problem is to find whether the sequence $\eta^n$ converges in law, for a suitable topology, to $\eta$.

* Laboratoire de Probabilités, Université Paris 6, 4 place Jussieu, 75252 Paris, France (CNRS UMR 7599)
† Université Paris 10, MODAL'X, JE 421, 200 av. de la République, 92000 Nanterre, France
‡ Mathematics and Statistics Departments, Purdue University, W. Lafayette, IN 47907-1395, USA.
This problem also has much practical relevance. For example in financial mathematics, suppose that $X$ models the price of a stock, $U$ is a claim based upon this stock, and for simplicity the riskless bond has constant price 1. If the model is complete and without arbitrage opportunity, then $X$ is a martingale under the unique risk neutral equivalent measure, the price of the claim is the expectation $E(U) = \xi$ under this measure, and we have the martingale representation property w.r.t. $X$: then the process $\eta$ in (1.1) is the so-called hedging strategy. Now, for computational purposes we might want to take a discrete time approximation of $X$: e.g. a binomial approximation $X^n$, which thus converges in law to $X$, or an Euler approximation $X^n$, which converges strongly to $X$ when this process is the solution of a stochastic differential equation. If one also has the martingale representation property for the discrete time models (as is the case for the binomial approximation), then it is important to know whether the "approximate" hedging strategies $\eta^n$ converge in some sense to $\eta$. Such questions have been touched upon in [6] for example.
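The binomial computation just mentioned can be made concrete in a one-period model. The sketch below is our illustration, not taken from the paper: the up/down factors and the strike are arbitrary choices, and the riskless rate is 0. It computes the integrand as the ratio $E(\Delta X\,U)/E((\Delta X)^2)$ under the risk neutral measure and checks that the resulting portfolio replicates the claim exactly, as it must in a complete model.

```python
# One-period binomial model under the risk-neutral measure:
# the stock moves from x0 to x0*u or x0*d, and the martingale
# condition E[X_1] = x0 fixes the risk-neutral probability p.
x0, u, d = 100.0, 1.2, 0.9          # arbitrary illustration parameters
p = (1.0 - d) / (u - d)             # risk-neutral up-probability (rate 0)

payoff = lambda x: max(x - 100.0, 0.0)   # call with strike 100

up, dn = x0 * u, x0 * d
xi = p * payoff(up) + (1 - p) * payoff(dn)          # price E[U]

# Integrand of the representation U = xi + eta * (X_1 - X_0):
# eta = E[dX * U] / E[dX^2], which here is the usual delta.
dX_up, dX_dn = up - x0, dn - x0
eta = (p * dX_up * payoff(up) + (1 - p) * dX_dn * payoff(dn)) / \
      (p * dX_up**2 + (1 - p) * dX_dn**2)

# In the (complete) binomial model the representation is exact:
err_up = xi + eta * dX_up - payoff(up)
err_dn = xi + eta * dX_dn - payoff(dn)
print(eta, err_up, err_dn)
```

In the two-state model this ratio reduces to the familiar delta $(U_u - U_d)/(X_u - X_d)$; with more than two states the same ratio is still well defined but the replication error no longer vanishes.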
The above brief description immediately gives rise to two kinds of problems. The first one comes from the fact that the martingale representation property quite often does not hold: it holds under reasonably general conditions when the basic martingale $X$ is continuous, but it is usually lost as soon as $X$ has jumps, and in particular in the discrete time setting (except for the binomial model). The second problem is to find an adequate topology for which the $\eta^n$'s might converge. This is not obvious, because these processes have a priori no regularity in time (they are predictable, but otherwise neither right continuous nor left continuous in general).
2) To begin with, let us consider the first problem described above. Let $X$ be a locally square-integrable martingale on a filtered space $(\Omega,\mathcal F,(\mathcal F_t)_{t\ge0},P)$ having $\mathcal F = \bigvee_t\mathcal F_t$, and let $U$ be a square-integrable random variable. Using the theory of "stable" subspaces generated by a martingale (see Dellacherie and Meyer [5], or [15] or [9] for this fact, as well as for all results on martingales and stochastic integrals), we have the decomposition
$$U = \xi + \int_0^\infty \eta_s\,dX_s + N_\infty, \eqno(1.2)$$
where $\xi = E(U\mid\mathcal F_0)$, $N$ is a square-integrable martingale (i.e. a martingale such that $\sup_t|N_t|$ is square-integrable) orthogonal to $X$, and $\eta$ is a predictable process; this decomposition is unique up to null sets. It comes in fact from the (unique) decomposition of the square-integrable martingale $M_t = E(U\mid\mathcal F_t)$ as a stochastic integral w.r.t. $X$, plus an orthogonal term. Recall also that two locally square-integrable martingales $M$ and $N$ are orthogonal if their product $MN$ is a local martingale; this is denoted by $M\perp N$. Observe that $\xi$ and $N$ are defined uniquely up to a $P$-null set, while $\eta$ is defined uniquely up to a null set w.r.t. the following measure on $\tilde\Omega = \Omega\times\mathbb R_+$:
$$Q_X(d\omega,dt) = P(d\omega)\,d\langle X,X\rangle_t(\omega). \eqno(1.3)$$
Here $\langle X,X\rangle$ denotes the "angle" (or predictable) bracket. We will denote the process $\eta$ by $\eta(X,U)$; it is square-integrable w.r.t. $Q_X$.

Section 2 of this paper is devoted to finding an "explicit" expression for the process $\eta$ above: first in the discrete time setting, where it is very simple; next in some Markovian situations, when $U$ has the form $U = f(Y_T)$ for a fixed time $T$ and an underlying Markov process $Y$, and $X$ is a locally square-integrable martingale on this Markov process. We thus extend the well known Clark-Haussmann formula, usually given for Brownian motion, in two directions: the Brownian motion is replaced by a rather general Markov process, and we do not assume the martingale representation property. But of course we are limited to variables $U$ of the form $U = f(Y_T)$, or more generally of the form $U = f(Y_{T_1},\dots,Y_{T_k})$ for fixed times $T_1 < \dots < T_k$.

Let us come back to the financial interpretation of (1.2): if the martingale representation property w.r.t. $X$ does not hold, the variable $N_\infty$ in (1.2) is in general not equal to 0. We are in the incomplete model case, and the process $\eta$ is shown to be a risk minimizing strategy for hedging the claim $U$: see Föllmer and Sondermann [7].

3)
Let us now turn to convergence results. To get an idea of what to expect, here is a trivial special case: we have a sequence $U^n$ of random variables tending to a limit $U$ in $L^2(P)$, and a fixed locally square-integrable martingale $X$. Writing $M^n$, $\xi^n$, $\eta^n$ and $N^n$ for the terms associated with $U^n$ and $X$ in (1.2), the three variables $\xi^n - \xi$, $\int_0^\infty(\eta^n_s - \eta_s)\,dX_s$ and $N^n_\infty - N_\infty$ are orthogonal in $L^2(P)$ and add up to $U^n - U$, so they all go to 0 in $L^2(P)$. Since the expected value of $\bigl(\int_0^\infty \eta_s\,dX_s\bigr)^2$ is $Q_X(\eta^2)$, we deduce in particular that
$$U^n \to U \text{ in } L^2(P)\quad\Longrightarrow\quad \eta(X,U^n)\to\eta(X,U) \text{ in } L^2(Q_X). \eqno(1.4)$$
This leads us to consider first the case where all locally square-integrable martingales $X^n$ and $X$ are defined on the same space $(\Omega,\mathcal F,(\mathcal F_t)_{t\ge0},P)$ with $\mathcal F = \bigvee_t\mathcal F_t$. The simplest result one can state in this direction is as follows:
Theorem A. Assume that $X^n$ and $X$ are locally square-integrable martingales on a filtered space, such that $\langle X^n - X, X^n - X\rangle_t \to 0$ in probability for all $t\in\mathbb R_+$, and that $U^n$ converges to $U$ in $L^2(P)$. Then $\eta^n$ converges to $\eta$ in $Q_X$-measure.
We also give a series of other results, which are more difficult to state, and which mainly concern discrete time approximations of a given martingale $X$, of various kinds: stepwise approximations, or Euler schemes when $X$ is the solution of a stochastic differential equation. All these results are proved in Section 3.

4)
Section 4 is devoted to weak convergence results. First, we take advantage of the explicit results of Section 2 in the Markov case to show that if $X^n$ is the solution of the equation $dX^n_t = g_n(X^n_{t-})\,dZ^n_t$ and $X$ is the solution of a similar equation with $g$ and $Z$, where $Z^n$ and $Z$ are Lévy processes, and if $g_n\to g$ and $Z^n$ converges in law to $Z$, then $\eta^n$ converges in law to $\eta$, for a suitable topology, when $U^n = f(X^n_T)$ and $U = f(X_T)$ and $f$ is a differentiable function (typically $Z$ is a Brownian motion, but the $Z^n$'s are not, so we have the martingale representation property w.r.t. $X$, but not w.r.t. $X^n$). We also give a discrete time version of this result.

Finally, we give an analogous convergence result when $U^n = \Phi(X^n)$ and $U = \Phi(X)$ for a continuous bounded function $\Phi$ on the Skorokhod space, when $X$ is the solution of an equation as above with $Z$ a Brownian motion, and the $X^n$'s are discrete time solutions of difference equations converging to $X$. As an example, particularly relevant in financial applications, let us mention the case where
$$X^n_{i+1} = X^n_i + g(X^n_i)\,Y^n_{i+1},$$
where for each $n$ the $(Y^n_i)_{i\ge1}$ are i.i.d. bounded variables, centered with variance $1/n$. Then the processes $X^n_t = X^n_{[nt]}$ converge in law to the solution of $dX_t = g(X_t)\,dW_t$, where $W$ is a Brownian motion, as soon as $g$ is Lipschitz. In this situation, with $U^n = \Phi(X^n)$ and $U = \Phi(X)$ with $\Phi$ as above, the processes $\eta^n$ (naturally defined as some sort of interpolations of the discrete time processes $\gamma(X^n,U^n)$) do converge in a suitable sense to $\eta$, in law.

2 Explicit representations of the integrand
In this section our aim is to give an "explicit" form for the integrand $\eta(X,U)$ in essentially two specific cases: one is the discrete-time case, with an extension to the discretization of a continuous-time process; the other is a Markov situation. It seems hopeless to obtain such an explicit form in general, but other cases are found in the literature, essentially on the Wiener space and using Malliavin calculus: see e.g. the book of Nualart [14] and the references therein.

Before starting we wish to make precise the various notions of (locally) square-integrable martingales used in this paper, since they play a crucial role. As said in the introduction, a process $X$ given on a stochastic basis, either with discrete or with continuous time, is called a square-integrable martingale if it is a martingale and if the supremum of $|X|$ over all time is square-integrable: then the limit $X_\infty$ exists and is a square-integrable variable. $X$ is called a locally square-integrable martingale if there is a sequence $R_n$ of stopping times increasing to $+\infty$, such that the process $X$ stopped at any $R_n$ is a square-integrable martingale. In between, we say that $X$ is a martingale square-integrable on compacts if the process $X$ stopped at any finite deterministic time is a square-integrable martingale: for example the Wiener process is a martingale square-integrable on compacts in this sense.

2.1 The discrete-time case
In this subsection, time is discrete: we have the basis $(\Omega,\mathcal F,(\mathcal F_i)_{i\in\mathbb N},P)$ with $\mathcal F = \bigvee\mathcal F_i$ and with a given locally square-integrable martingale $X$. We also have a square-integrable variable $U$. For any process $Y$ we write $\Delta Y_i = Y_i - Y_{i-1}$. In this discrete-time case, (1.2) becomes
$$U = \xi + \sum_{i=1}^\infty \gamma_i\,\Delta X_i + N_\infty, \eqno(2.1)$$
where the series converges in $L^2$, each $\gamma_i$ is $\mathcal F_{i-1}$-measurable, and $N$ is a square-integrable martingale orthogonal to $X$. Here the orthogonality of $X$ and $N$ amounts to saying that
$$E(\Delta X_i\,\Delta N_i\mid\mathcal F_{i-1}) = 0\qquad \forall i\ge1. \eqno(2.2)$$
The above conditional expectation is to be understood in the generalized sense, since the variable $\Delta X_i\,\Delta N_i$ might not be integrable: it is however integrable on each $\mathcal F_{i-1}$-measurable set $\{R_n\ge i\}$ (where $R_n$ is as above), while $\bigcup_n\{R_n\ge i\} = \Omega$. The same comment applies below.
Proposition 2.1. Assume that $X$ is a locally square-integrable martingale, and let $M_i = E(U\mid\mathcal F_i)$. Then a version of $\gamma = \gamma(X,U)$ is given by
$$\gamma_i = \frac{E(\Delta X_i\,U\mid\mathcal F_{i-1})}{E((\Delta X_i)^2\mid\mathcal F_{i-1})} = \frac{E(\Delta X_i\,\Delta M_i\mid\mathcal F_{i-1})}{E((\Delta X_i)^2\mid\mathcal F_{i-1})}. \eqno(2.3)$$

Proof. By definition of $M$ and by the property $E(\Delta X_i\mid\mathcal F_{i-1}) = 0$ (where again the conditional expectation is in the generalized sense), the last equality in (2.3) is obvious. Define $\gamma$ by (2.3). The measurability condition is obviously met. Set
$$\Delta N_i = E(U\mid\mathcal F_i) - E(U\mid\mathcal F_{i-1}) - \gamma_i\,\Delta X_i = \Delta M_i - \gamma_i\,\Delta X_i$$
and $N_i = \sum_{j=1}^i\Delta N_j$. Then $N$ is a square-integrable martingale satisfying (2.2). That (2.1) holds is then obvious. $\Box$
2.2 Discretization in time
Here we have a basis $(\Omega,\mathcal F,(\mathcal F_t)_{t\ge0},P)$ such that $\mathcal F = \bigvee_t\mathcal F_t$. We consider a square-integrable martingale $X$. We also consider a locally finite subdivision of $\mathbb R_+$, consisting of an increasing sequence $\tau = (T_i : i\in\mathbb N)$ of stopping times such that
$$T_0 = 0,\qquad T_i < \infty\ \Rightarrow\ T_i < T_{i+1},\qquad \lim_i T_i = \infty\ \text{a.s.} \eqno(2.4)$$
The discretized process is then
$$X^\tau_i = X_{T_i},\qquad i\in\mathbb N, \eqno(2.5)$$
which makes sense even on the set $\{T_i = \infty\}$. Then the sequence $(X^\tau_i)_{i\in\mathbb N}$ is a square-integrable martingale w.r.t. the discrete-time filtration $(\mathcal F_{T_i})_{i\in\mathbb N}$. If $U\in L^2$, we then have the two decompositions (1.2) and (2.1), namely
$$U = \begin{cases} \xi + \int_0^\infty \eta_s\,dX_s + N_\infty,\\[2pt] \xi + \sum_{i=1}^\infty \gamma_i\,\Delta X^\tau_i + N^\tau_\infty, \end{cases} \eqno(2.6)$$
where $N$ is a square-integrable martingale w.r.t. $(\mathcal F_t)_{t\ge0}$ orthogonal to $X$, $N^\tau$ is a square-integrable martingale w.r.t. $(\mathcal F_{T_i})_{i\in\mathbb N}$ orthogonal to $X^\tau$, and $\xi = E(U\mid\mathcal F_0)$. It is natural to call the discretized version of the integrand $\eta$ the following continuous-time process:
$$\eta'_t = \gamma_i\quad\text{if } T_{i-1} < t\le T_i,\ i\ge1. \eqno(2.7)$$
In a sense, this process $\eta'$ naturally occurs if we replace $X$ by the discretized version along the subdivision $\tau$.

Our aim here is to compute $\eta'$ in terms of $\eta$. This is simple, after recalling that the process $\eta$ is square-integrable w.r.t. the finite measure $Q_X$ defined by (1.3), and after introducing the $\sigma$-field $\mathcal P'$ on $\tilde\Omega = \Omega\times\mathbb R_+$ which is generated by the sets $D\times(T_i,T_{i+1}]$, where $i\in\mathbb N$ and $D\in\mathcal F_{T_i}$.

Proposition 2.2
Assume that $X$ is a square-integrable martingale. With the above notation we have $\eta' = Q_X(\eta\mid\mathcal P')$ (the conditional expectation of $\eta$ w.r.t. $\mathcal P'$ for the finite measure $Q_X$).

Proof. Set $A = \langle X,X\rangle$ and $B_t = \int_0^t\eta_s\,dA_s$. If $M_t = E(U\mid\mathcal F_t)$, then (1.2) yields $M_t = \xi + \int_0^t\eta_s\,dX_s + N_t$, hence $\langle X,M\rangle = B + \langle X,N\rangle = B$ because $X$ and $N$ are orthogonal. So an application of Proposition 2.1 yields the following explicit form for $\gamma$:
$$\gamma_i = \frac{E(B_{T_i} - B_{T_{i-1}}\mid\mathcal F_{T_{i-1}})}{E(A_{T_i} - A_{T_{i-1}}\mid\mathcal F_{T_{i-1}})}. \eqno(2.8)$$
For $D\in\mathcal F_{T_{i-1}}$ we have
$$Q_X\bigl(1_{D\times(T_{i-1},T_i]}\,\eta\bigr) = E\bigl(1_D(B_{T_i} - B_{T_{i-1}})\bigr) = E\bigl(1_D\,E(B_{T_i} - B_{T_{i-1}}\mid\mathcal F_{T_{i-1}})\bigr).$$
By (2.8) and (2.7) this is equal to
$$E\bigl(1_D\,\gamma_i(A_{T_i} - A_{T_{i-1}})\bigr) = Q_X\bigl(1_{D\times(T_{i-1},T_i]}\,\eta'\bigr).$$
Since $\eta'$ is obviously $\mathcal P'$-measurable, this implies the result. $\Box$
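When $X$ has deterministic bracket $\langle X,X\rangle_t = t$ (e.g. Brownian motion) and the subdivision is deterministic, Proposition 2.2 simply says that on each interval $(T_{i-1},T_i]$ the discretized integrand is the time average of $\eta$ weighted by $d\langle X,X\rangle$. A small sketch of this special case (our illustration, with an arbitrary deterministic $\eta$ and grid):

```python
# Deterministic special case of Proposition 2.2: <X,X>_t = t, so Q_X
# restricted to a path is Lebesgue measure, and eta' on (T_{i-1}, T_i]
# is the average of eta over that interval.
import math

eta = lambda t: math.sin(t)                  # arbitrary deterministic integrand
T = [0.0, 0.5, 1.5, 2.0]                     # deterministic subdivision

def eta_prime(t, n_quad=10000):
    """Piecewise-constant conditional expectation of eta given the
    sigma-field generated by the subdivision intervals."""
    for a, b in zip(T, T[1:]):
        if a < t <= b:
            # average of eta over (a, b] w.r.t. d<X,X>_s = ds (midpoint rule)
            h = (b - a) / n_quad
            return sum(eta(a + (j + 0.5) * h) for j in range(n_quad)) * h / (b - a)
    raise ValueError("t outside the subdivision")

# On (0.5, 1.5] the average of sin is cos(0.5) - cos(1.5)
avg = eta_prime(1.0)
exact = (math.cos(0.5) - math.cos(1.5)) / 1.0
print(avg, exact)
```

The same piecewise-averaging picture is what Theorem 3.1 below exploits: as the mesh shrinks, these averages converge to $\eta$ in $L^2(Q_X)$.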
Remark 2.3. Exactly the same result (with the same proof) holds if we assume that $X$ is a locally square-integrable martingale such that each stopped process $(X^{T_i}_t = X_{T_i\wedge t})_{t\ge0}$ is a square-integrable martingale. $\Box$
2.3 A Clark-Haussmann formula for Markov processes
In this subsection we give an alternative form of the Clark-Haussmann formula giving the integrand $\eta(X,U)$: see Nualart [14] for a general form of this formula.

The setting is as follows: we have a quasi-left continuous $\mathbb R^d$-valued strong Markov process $Y$ on $(\Omega,\mathcal F,(\mathcal F_t),P_x)$, where $P_x$ is the probability measure under which $Y_0 = x$ a.s., and we assume also that $Y$ is a semimartingale under each $P_x$. Let $\mu$ be the jump measure of $Y$, and $(B,C,\nu)$ its characteristics: we refer for this to [9], and also to [3] for the following structural results, showing in particular that $(B,C,\nu)$ do not depend on the starting point: there exist a continuous increasing additive functional $A$, a Borel transition measure $F$ from $\mathbb R^d$ into itself integrating $z\mapsto|z|^2\wedge1$, $\dots$ in $y$, with all partial derivatives being continuous.

Next, our basic locally square-integrable martingale $X$ is of the form $X_t = X_0 + \dots$, where $Y^c$ denotes the continuous martingale part of $Y$, "T" denotes the transpose, and $\dots$ Observe that under (2.11), $X$ is well defined and is a locally square-integrable martingale under each $P_x$ and $P_\mu$, with angle bracket $\langle X,X\rangle_t = \dots$ We use here the traditional convention $\frac00 = 0$; when $a_s = 0$ the numerator in the right side of (2.12) is also 0. Observe that the process $\eta$ does not depend on the measure $m$ in $P_m := P = \int\dots$

By hypothesis, $g$ is once differentiable in $t$ and twice differentiable in $y$ with continuous partial derivatives. By Itô's formula, $\dots$ The first three integrals above are predictable processes of finite variation. The last integral may be rewritten as the sum of the stochastic integral w.r.t. the martingale measure $\mu-\nu$, plus the integral w.r.t. $\nu$, which again is a predictable process of finite variation. Since $M$ is a martingale, the sum of all predictable processes of finite variation must equal 0, and after a simple transformation we get $M_t = \dots$ In view of (2.11), this becomes $\langle N,X\rangle_t = \dots$ A version of $\eta$ is thus given by (2.12), and we have proved the claim. $\Box$

The class $\mathcal D_T$ of functions for which (2.12) holds is rather restrictive. It might be of interest to enlarge this class. To this effect, for each $x\in\mathbb R^d$ we introduce the set $\mathcal D'_T$ of all functions $f$ for which there is a sequence $f_n\in\mathcal D_T$ (called an "approximating sequence") $\dots$ The measure $Q_X$ associated by (1.3) with $X$ (and relative to $P$) is here
$$Q_X(d\omega,dt) = P(d\omega)\,dA_t(\omega)\,a_t(\omega).$$
Finally, let $\mathcal D''_T$ be the subset of all $f\in\mathcal D'_T$ such that $y\mapsto P_tf(y)$ is differentiable for $0 < t\le T$, and for which there is an approximating sequence $f_n$ in $\mathcal D_T$ such that for all $t\in(0,T]$ and $y\in\mathbb R^d$ we have
$$P_tf_n(y)\to P_tf(y),\qquad \frac{\partial}{\partial y^i}P_tf_n(y)\to\frac{\partial}{\partial y^i}P_tf(y), \eqno(2.13)$$
and that for $Q_X$-almost all $(\omega,t)$ with $t\le T$ we have
$$\int F(Y_{t-}(\omega),dz)\,\delta(\omega,t,z)\bigl(P_{T-t}f_n(Y_{t-}(\omega)+z) - P_{T-t}f_n(Y_{t-}(\omega))\bigr)\ \to\ \int F(Y_{t-}(\omega),dz)\,\delta(\omega,t,z)\bigl(P_{T-t}f(Y_{t-}(\omega)+z) - P_{T-t}f(Y_{t-}(\omega))\bigr). \eqno(2.14)$$
Observe that (2.13) implies (2.14) as soon as $f$, the $f_n$'s and the $\frac{\partial}{\partial y^i}P_tf_n$'s are uniformly bounded for each $t$, by virtue of (2.11) and of the fact that $\int F(y,dz)(|z|^2\wedge1) < \infty$.

Corollary 2.5
a) If $f\in\mathcal D'_T$ with the approximating sequence $f_n$, then a version of the process $\eta(X,f(Y_T))$ is the limit of $\eta(X,f_n(Y_T))$ in $L^2(Q_X)$.
b) If further $f\in\mathcal D''_T$, then a version of the process $\eta(X,f(Y_T))$ is given by (2.12) for $s\le T$, and by 0 for $s > T$.

Proof. The claim a) readily follows from (1.4). Assume now that $f\in\mathcal D''_T$, with the approximating sequence $f_n$. Then if $\eta'$ is given by (2.12), on the one hand $\eta(X,f_n(Y_T))_s(\omega)\to\eta'_s(\omega)$ for $Q_X$-almost all $(\omega,s)$, and on the other hand (1.4) holds: hence $\eta' = \eta(X,f(Y_T))$ $Q_X$-a.s., and we have b). $\Box$

2.4 A particular case
Theorem 2.4 and its corollary are not quite satisfactory, because they give $\eta(X,U)$ for a variable $U$ of the form $U = f(Y_T)$, while one would like to have it for $U = f(X_T)$. The situation becomes more satisfactory when $X$ itself is Markov. We give in some detail a simple case of this situation, namely when $X$ is the solution of the equation $dX = g(X_-)\,dZ$, where $Z$ is a 1-dimensional Lévy process and $g$ a smooth enough coefficient.

Since we wish $X$ to be a locally square-integrable martingale, it is natural to assume first that the Lévy process $Z$, which is defined on some space $(\Omega,\mathcal F,(\mathcal F_t)_{t\ge0},P)$, is a locally square-integrable martingale itself. This in fact implies that it is then a martingale square-integrable on compacts, and its characteristic function has the form
$$E\,e^{iuZ_t} = \exp t\Bigl(-\frac{cu^2}2 + \int F(dx)\,(e^{iux} - 1 - iux)\Bigr), \eqno(2.15)$$
where $c\ge0$ and the Lévy measure $F$ integrates $x^2$. We set
$$\tilde c = c + \int F(dx)\,x^2, \eqno(2.16)$$
so $\langle Z,Z\rangle_t = \tilde c\,t$, and of course we assume that $\tilde c > 0$ (otherwise $Z = 0$ and what follows is empty). Next we have a continuously differentiable function $g$ with bounded derivative $g'$, and for any $x$ we consider the solution $X^x$ of the following stochastic differential equation:
$$X^x_t = x + \int_0^t g(X^x_{s-})\,dZ_s. \eqno(2.17)$$
A classical argument (see (5.2) in the Appendix) yields that $X^x$ is a square-integrable martingale over each finite interval $[0,T]$. Further, the solution of the following linear equation
$$X'^x_t = 1 + \int_0^t g'(X^x_{s-})\,X'^x_{s-}\,dZ_s \eqno(2.18)$$
is also a square-integrable martingale over each finite interval $[0,T]$. So for each measurable function $f$ with at most linear growth we can set
$$P_tf(x) = E\bigl(f(X^x_t)\bigr),\qquad Q_tf(x) = E\bigl(f(X^x_t)\,X'^x_t\bigr). \eqno(2.19)$$
Observe that $(P_t)$ is the semi-group of $X^x$, which is a Markov process. We then have:

Theorem 2.6. Assume that $g$ has a continuous and bounded derivative.
a) For any $T\in\mathbb R_+$ and any differentiable function $f$ with bounded derivative $f'$, the variable $f(X^x_T)$ is square-integrable, and a version of $\eta(X^x,f(X^x_T))$ is given by
$$\eta_s(X^x,f(X^x_T)) = \psi(s,X^x_{s-})\,1_{[0,T]}(s), \eqno(2.20)$$
where
$$\psi(s,y) = Q_{T-s}f'(y) + \frac1{\tilde c}\int F(dz)\,z^2\int_0^1\bigl(Q_{T-s}f'(y + g(y)zu) - Q_{T-s}f'(y)\bigr)\,du. \eqno(2.21)$$
b) The same holds when $f$ is the difference of two convex functions with a bounded right derivative $f'_r$, with $f'$ above replaced by $f'_r$, provided the measure $P_t(y,\cdot)$ has no atom for all $t\in(0,T]$, $y\in\mathbb R$.

In the last claim one can of course replace the right derivative $f'_r$ by the left derivative $f'_l$. The last condition is obviously satisfied when $P_t(x,\cdot)$ has a density: this is the case when $c > 0$, as soon as $g$ does not vanish (or does not vanish on the set in which the process $X^x$ takes its values). When $c = 0$, one can find conditions implying the existence of a density in e.g. [2].
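In the purely Brownian case ($F = 0$, $c = 1$, so $Z = W$) the correction integral in (2.21) disappears and $\psi(s,y) = Q_{T-s}f'(y)$. The sketch below is our own Monte Carlo illustration of (2.19), not part of the paper: it estimates $Q_tf'(x) = E\bigl(f'(X^x_t)X'^x_t\bigr)$ by an Euler scheme for (2.17) and (2.18). For the test choices $g(x) = x$ and $f(x) = x^2$ one has $X'^x = X^x/x$, hence $Q_tf'(x) = 2E\bigl((X^x_t)^2\bigr)/x = 2x\,e^t$, which the estimate should roughly reproduce (up to Euler bias and Monte Carlo noise).

```python
# Monte Carlo estimate of Q_t f'(x) = E(f'(X_t) X'_t) where
#   dX  = g(X) dW,      X_0  = x   (equation (2.17) with Z = W)
#   dX' = g'(X) X' dW,  X'_0 = 1   (the linear equation (2.18))
# Test case: g(x) = x, f(x) = x**2, so f'(x) = 2x and Q_t f'(x) = 2*x*exp(t).
import math, random

random.seed(0)
x, t, steps, paths = 1.0, 1.0, 50, 50000
h = t / steps

acc = 0.0
for _ in range(paths):
    X, Xp = x, 1.0
    for _ in range(steps):
        dW = random.gauss(0.0, math.sqrt(h))
        X, Xp = X + X * dW, Xp + Xp * dW     # Euler step; g(x)=x, g'(x)=1
    acc += 2.0 * X * Xp                      # f'(X_t) * X'_t
est = acc / paths
print(est, 2 * x * math.exp(t))
```

In the hedging interpretation, this $Q_{T-s}f'$ is exactly the delta of the claim $f(X_T)$, computed by the "pathwise differentiation" of the flow rather than by finite differences.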
Proof. 1) Our assumptions always imply that $f$ and $g$ have at most linear growth. Then the property $f(X^x_T)\in L^2(P)$ follows from (5.2) in the Appendix.

2) By virtue of Theorem 2.4 applied to $Y = X^x$, it suffices to prove that $f\in\mathcal D_T$ and that (2.12) reduces to (2.21) in our situation. The first property is proved in Lemma 5.1 of the Appendix. So it remains to identify (2.12) with (2.21). With $Y = X^x$ we have $b = 0$, $c(y) = c\,g(y)^2$, $A_t = t$, and $F(y,\cdot)$ is the image of the measure $F$ under the map $z\mapsto g(y)z$, while in (2.10) we must take $\sigma_s = 1$ and $\delta(s,z) = z$. So if $a(y) = c\,g(y)^2 + \int F(dz)\,z^2g(y)^2$, then (2.11) becomes $a_s = a(X_{s-})$, while $\nabla P_tf = Q_tf'$ by (5.3). Then (2.12) yields that we have (2.20) with $\psi'$ instead of $\psi$, where
$$\psi'(s,y) = \frac{g(y)^2c\,Q_{T-s}f'(y) + \int F(dz)\,z\,g(y)\bigl(P_{T-s}f(y+g(y)z) - P_{T-s}f(y)\bigr)}{g(y)^2\bigl(c + \int F(dz)\,z^2\bigr)}$$
if $g(y)\ne0$, and $\psi'(s,y) = 0$ if $g(y) = 0$. By Taylor's formula and again the property $\nabla P_tf = Q_tf'$ and (2.16), we note that $\psi'(s,y)$ equals $\psi(s,y)$ as given by (2.21) when $g(y)\ne0$. Finally the process $\eta(X^x,f(X^x_T))$ is unique up to a $Q_{X^x}$-null set, and the set $\{(\omega,t) : g(X^x_{t-}(\omega)) = 0\}$ is $Q_{X^x}$-negligible: hence (2.20) gives a version of $\eta(X^x,f(X^x_T))$.

3) Second, we prove the result when $f$ and $g$ are once continuously differentiable with bounded derivatives $f'$ and $g'$, and further $f$ is bounded, and when $Z$ is a locally square-integrable martingale.

First we replace $Z$ by a sequence of Lévy processes $Z^n$ obtained by truncating the jumps of $Z$ of size bigger than $n$. More precisely, we may write
$$Z_t = Z^c_t + \int_0^t\int_{\mathbb R} z\,(\mu-\nu)(ds,dz),$$
where $Z^c$ is the continuous martingale part of $Z$ (of the form $\sqrt c\,W$, where $W$ is a Wiener process), $\mu$ is the jump measure of $Z$, and $\nu(ds,dz) = ds\,F(dz)$. Then we set
$$Z^n_t = Z^c_t + \int_0^t\int_{\mathbb R} z\,1_{\{|z|\le n\}}\,(\mu-\nu)(ds,dz).$$
We have $\langle Z,Z\rangle_t = \tilde c\,t$ where $\tilde c = c + \int F(dz)\,z^2$, and $\langle Z^n,Z^n\rangle_t = \tilde c_n\,t$ where $\tilde c_n = c + \int F(dz)\,z^21_{\{|z|\le n\}}$, and obviously
$$\langle Z - Z^n, Z - Z^n\rangle_t \to 0. \eqno(2.22)$$
Next, let $\phi$ be an infinitely differentiable function with compact support and integral equal to 1. Then we replace $g$ by $g_n(x) = \int n\phi(ny)\,g(x-y)\,dy$ and $f$ by $f_n(x) = \int n\phi(ny)\,f(x-y)\,dy$. Thus $g_n$ and $f_n$ are infinitely differentiable with bounded derivatives of all orders, and there exists $K$ such that for all $n$:
$$|g_n(0)|\le K,\qquad |g'_n(x)|\le K,\qquad |f_n(x)|\le K,\qquad |f'_n(x)|\le K, \eqno(2.23)$$
and moreover
$$g_n\to g,\quad g'_n\to g',\quad f_n\to f,\quad f'_n\to f'\qquad\text{locally uniformly}. \eqno(2.24)$$
Now, denote by $X^{n,x}$ and $X'^{n,x}$ the solutions of (2.17) and (2.18), with $Z$ and $g$ replaced by $Z^n$ and $g_n$. Since $g_n$ and $f_n$ satisfy the conditions of Step 2, $\eta^n = \eta(X^{n,x},f_n(X^{n,x}_T))$ is given by (2.20), with $\psi_n$ given by (2.21). By virtue of (2.22) and (2.24), stability results for stochastic differential equations (see [10]) imply that $(X^{n,y_n},X'^{n,y_n})$ converges locally uniformly (in time) in probability to $(X^y,X'^y)$ for any sequence $y_n$ converging to $y$. $\dots$ We use a theorem of the next section (not based upon the present theorem, of course), namely Theorem 3.3: this theorem asserts that $\eta^n$ converges to $\eta(X^x,f(X^x_T))$ in $Q_X$-measure. Then, since $\eta^n_s = \psi_n(s,X^{n,x}_{s-})\,1_{[0,T)}(s)$ and since $X^{n,x}$ converges locally uniformly in time, in probability, to $X^x$, we deduce (2.20) from (2.26).

4) For (a) it remains to consider the case where $f$ has a continuous bounded derivative but is not bounded itself. We can find a sequence $(f_n)$ of bounded continuously differentiable functions such that $|f'_n(x)|\le K$ for some constant $K$, $f_n(x) = f(x)$ for all $|x|\le n$, and $|f_n|\le|f|$. Then $\eta^n = \eta(X^x,f_n(X^x_T))$ is given by (2.20) with $\psi_n$ given by (2.21), where $f$ is substituted with $f_n$. That $f_n(X^x_T)\to f(X^x_T)$ in $L^2(P)$ and that (2.25) holds with $Q_t$ instead of $Q^n_t$ are obvious by the previous estimates on $f_n$ and $f'_n$ and (5.2) of the Appendix. It follows as above that $\psi_n\to\psi$ pointwise, where $\psi$ is given by (2.21). Then by (1.4) we have (2.20).

5) It remains to prove (b). We set $f_n(x) = \int n\phi(ny)\,f(x-y)\,dy$ as in Step 3. Then $f_n\to f$ locally uniformly, while $f'_n\to f'_r$ everywhere except on an at most countable set $D$, and we still have $|f'_n|\le K$. $\dots$ As in Step 3, we have $f_n(X^y_t)\to f(X^y_t)$, and the proof follows as in Step 4. $\Box$
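The mollification used in Steps 3 and 5 is the standard convolution smoothing $f_n(x) = \int n\phi(ny)f(x-y)\,dy$ with a compactly supported bump $\phi$ of integral 1. A quick numerical sketch (our illustration, using a triangular kernel for simplicity) for $f(x) = |x|$, whose smoothed derivatives $f'_n$ converge to $f'_r = \operatorname{sign}$ everywhere except at the single point 0:

```python
# Mollify f(x) = |x| by a triangular kernel phi supported on [-1, 1]
# (integral 1); f_n = f * phi_n with phi_n(y) = n * phi(n * y).
import math

phi = lambda y: max(0.0, 1.0 - abs(y))        # bump, integral 1

def f_n(x, n, quad=4000):
    # f_n(x) = int n*phi(n*y) * |x - y| dy over y in [-1/n, 1/n]  (midpoint rule)
    a, b = -1.0 / n, 1.0 / n
    h = (b - a) / quad
    return sum(n * phi(n * (a + (j + 0.5) * h)) * abs(x - (a + (j + 0.5) * h))
               for j in range(quad)) * h

def f_n_prime(x, n, eps=1e-6):
    return (f_n(x + eps, n) - f_n(x - eps, n)) / (2 * eps)

# Away from 0, f_n' approaches sign(x); at x = 0 it stays 0 by symmetry.
print(f_n(0.3, 50), f_n_prime(0.3, 50), f_n_prime(-0.3, 50), f_n_prime(0.0, 50))
```

The countable exceptional set $D$ of Step 5 is here just $\{0\}$: the smoothed derivative interpolates between $-1$ and $+1$ on the shrinking window $[-1/n,1/n]$.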
Remark 2.7. When $c = 0$ and $F$ is a finite measure, the process $Z$ is a compensated compound Poisson process, and the situation is much simpler. One can show with the same methods as in Steps 3 and 4 above that the result holds without any differentiability condition: we need $f$ to be continuous, and both $f$ and $g$ with linear growth, and (2.21) takes the following simple form (with $\frac00 = 0$): $\dots$ Of course in this simple situation we could also write an "elementary" proof which looks like the proof of Proposition 2.1. This is no surprise, since the compound Poisson case has a "discrete" structure.
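For this compound Poisson setting, the driving martingale can be simulated directly: $Z_t = \sum_{i\le N_t}\xi_i - t\lambda E\xi$ with $N$ a Poisson process of rate $\lambda$ and i.i.d. jumps $\xi_i$ (so $F = \lambda\times$ the jump law). The sketch below (our illustration, with arbitrary parameters) checks the two facts used throughout: $EZ_t = 0$ and $EZ_t^2 = \langle Z,Z\rangle_t = t\int F(dx)\,x^2$.

```python
# Compensated compound Poisson: rate lam, jump law uniform on [0.5, 1.5]
# (so F = lam * Uniform[0.5, 1.5]); compensator drift is t * lam * E[jump].
import random

random.seed(1)
lam, t, paths = 2.0, 1.0, 100000
jump = lambda: random.uniform(0.5, 1.5)
mean_jump, second_moment = 1.0, 1.0 + 1.0 / 12.0   # E xi, E xi^2 for U[0.5,1.5]

vals = []
for _ in range(paths):
    z = -t * lam * mean_jump                # compensator part
    # sample the jump times by exponential waiting times
    s = random.expovariate(lam)
    while s <= t:
        z += jump()
        s += random.expovariate(lam)
    vals.append(z)

m1 = sum(vals) / paths
m2 = sum(v * v for v in vals) / paths
print(m1, m2, t * lam * second_moment)      # m1 ~ 0, m2 ~ t * int F(dx) x^2
```

Since the paths are piecewise constant, the "discrete" structure invoked in the remark is literal: between jumps nothing happens, and an elementary conditional-expectation argument replaces the mollification machinery.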
Remark 2.8. We have considered above the "homogeneous" situation, where the coefficient $g$ does not depend on time. Similar formulas would obviously hold when the coefficient depends on time, and also when the process $Z$ is a non-homogeneous process with independent increments.

Remark 2.9. Let $f$ be a function on $\mathbb R^k$ (say, with linear growth) and $0 < T_1 < \dots < T_k$. By iteration of the previous result we then get that $\eta_s = \dots$ Of course we need some smoothness conditions on $f$ to do that: that $f$ is continuously differentiable with all partial derivatives bounded is enough, in which case we need to reproduce the proof of the previous theorem.

We do not have an explicit form for $\eta(X,U)$ when $U$ is a function of the whole path of $X^x$ over $[0,T]$. But the variables of the form above are dense in the set of $\dots$ Let us compare this to the Clark-Haussmann formula in the Wiener case (see e.g. Nualart [14]): in this case one has an "explicit" form for $\eta(X^x,U)$ for variables $U$ that are smooth in the Malliavin sense (and thus include the variables $f(X^x_{T_1},\dots,X^x_{T_k})$ for smooth $f$'s). But this approach is limited to the Wiener space, and the explicit form involves the not so explicit Malliavin derivatives and predictable projections of such derivatives.
3 Strong convergence results
3.1 Discretization of a process
Here we consider the situation of Section 2.2, and we look at what happens when we have a sequence of subdivisions whose meshes go to 0. More precisely, we have a square-integrable martingale $X$ on a basis $(\Omega,\mathcal F,(\mathcal F_t)_{t\ge0},P)$ with $\mathcal F = \bigvee_t\mathcal F_t$, and for each $n$ a subdivision $\tau_n = (T(n,i) : i\in\mathbb N)$ satisfying (2.4). The sequence $(\tau_n)$ satisfies
$$\sup_{i\ge1}\bigl(T(n,i)\wedge t - T(n,i-1)\wedge t\bigr)\to 0\quad\text{a.s. as } n\to\infty. \eqno(3.1)$$
Then set $X^n_i = X_{T(n,i)}$: for each $n$ the sequence $(X^n_i)_{i\in\mathbb N}$ is a square-integrable martingale w.r.t. $(\mathcal F_{T(n,i)})_{i\in\mathbb N}$. Finally, let $U\in L^2(P)$ be fixed. We have the first decomposition (2.6), and the second one for each $n$ with the process $\gamma^n$, and we associate with $\gamma^n$ and $\tau_n$ the process $\eta'_n$ by (2.7). Then we have:

Theorem 3.1. Under (3.1), and if $X$ is a square-integrable martingale and $U\in L^2(P)$, the functions $\eta'_n$ tend to $\eta$ in $L^2(Q_X)$.

Proof
. For each $n$ we endow the space $\tilde\Omega$ with the $\sigma$-field $\mathcal P'_n$ generated by the sets $D\times(T(n,i-1),T(n,i)] := \{(\omega,t) : \omega\in D,\ T(n,i-1)(\omega) < t\le T(n,i)(\omega)\}$, where $i\ge1$ and $D\in\mathcal F_{T(n,i-1)}$. By virtue of Proposition 2.2, we have $\eta'_n = Q_X(\eta\mid\mathcal P'_n)$. Recall that here $Q_X$ is a finite measure.

The sequence $(\eta'_n)$ is bounded in $L^2(Q_X)$, and thus is in a compact set for the weak topology in $L^2(Q_X)$. So there exists a subsequence, again denoted by $\eta'_n$ for simplicity, which converges weakly to a variable $\eta'$ in $L^2(Q_X)$.

Let us first show that $\eta' = \eta$ $Q_X$-a.s. Take $\zeta = 1_D1_{(s,t]}$, where $D$ is $\mathcal F_s$-measurable. Then $Q_X(\eta'_n\zeta)\to Q_X(\eta'\zeta)$. Consider the two stopping times $S_n = \inf\{T(n,i) : i\in\mathbb N,\ T(n,i)\ge s\}$ and $T_n = \inf\{T(n,i) : i\in\mathbb N,\ T(n,i)\ge t\}$. Then
$$\left.\begin{aligned} Q_X(\eta'_n\zeta) &= Q_X\bigl(\eta'_n1_{D\times(S_n,T_n]}\bigr) + Q_X\bigl(\eta'_n1_{D\times(s,S_n]}\bigr) - Q_X\bigl(\eta'_n1_{D\times(t,T_n]}\bigr),\\ Q_X(\eta\zeta) &= Q_X\bigl(\eta1_{D\times(S_n,T_n]}\bigr) + Q_X\bigl(\eta1_{D\times(s,S_n]}\bigr) - Q_X\bigl(\eta1_{D\times(t,T_n]}\bigr).\end{aligned}\right\} \eqno(3.2)$$
If $A(n,s,\varepsilon) = \{S_n > s+\varepsilon\}$, we have
$$Q_X\bigl(D\times(s,S_n]\bigr) \le Q_X\bigl(A(n,s,\varepsilon)\times\mathbb R_+\bigr) + Q_X\bigl(\Omega\times(s,s+\varepsilon]\bigr).$$
Since $Q_X(\cdot\times\mathbb R_+)$ is absolutely continuous w.r.t. $P$, and since (3.1) implies $P(A(n,s,\varepsilon))\to0$ as $n\to\infty$, we also have $Q_X(A(n,s,\varepsilon)\times\mathbb R_+)\to0$. On the other hand, $Q_X(\Omega\times(s,s+\varepsilon])\to0$ as $\varepsilon\to0$ because $Q_X$ is a finite measure, so we deduce that $Q_X(D\times(s,S_n])\to0$. The variables $\eta'_n$ being $Q_X$-uniformly integrable, we deduce that the second and third terms on both lines of (3.2) vanish as $n\to\infty$; since the set $D\times(S_n,T_n]$ is $\mathcal P'_n$-measurable, the first terms on the two lines of (3.2) are equal, and we conclude that $Q_X(\eta'\zeta) = Q_X(\eta\zeta)$. Then by a monotone class argument, this relation holds for all bounded predictable $\zeta$, which yields $\eta = \eta'$ $Q_X$-a.s. Moreover $\|\eta'_n\|_{L^2(Q_X)}\le\|\eta\|_{L^2(Q_X)}$, while the weak convergence gives $\|\eta\|_{L^2(Q_X)}\le\liminf_n\|\eta'_n\|_{L^2(Q_X)}$: the norms converge, so the weak convergence is indeed strong convergence. Thus the $L^2(Q_X)$-convergence of the original sequence $\eta'_n$ to $\eta$ follows. $\Box$

Remark 3.2. When the subdivisions $(\tau_n)$ are finer and finer, the sequence of $\sigma$-fields $\mathcal P'_n$ is increasing; then $(\eta'_n)$ is a $Q_X$-square-integrable martingale and the convergence to $\eta$ readily follows from the fact that $\mathcal P'_n$ increases to $\mathcal P$ up to $P$-null sets (by (3.1)). $\Box$

3.2 A general continuous time convergence theorem
We have again a stochastic basis $(\Omega,\mathcal F,(\mathcal F_t)_{t\ge0},P)$ with $\mathcal F = \bigvee_t\mathcal F_t$, supporting locally square-integrable martingales $X$ and $X^n$ and square-integrable variables $U^n$ and $U$. We have the following (unique) decompositions, as in (1.2):
$$U = \xi + \int_0^\infty\eta_s\,dX_s + N_\infty,\qquad U^n = \xi^n + \int_0^\infty\eta^n_s\,dX^n_s + N^n_\infty, \eqno(3.3)$$
where $N$ (resp. $N^n$) is a square-integrable martingale orthogonal to $X$ (resp. to $X^n$). As in Theorem A, the basic assumption on the martingales is
$$\langle X^n - X, X^n - X\rangle_t\to 0\ \text{ in probability, for all } t\in\mathbb R_+. \eqno(3.4)$$
Below, we consider again the measure $Q_X$ associated with $X$ by (1.3). It is not necessarily finite, so we recall that $\eta^n\to\eta$ in $Q_X$-measure means that $\eta^n\to\eta$ in $R$-measure for one (hence for all) finite measure $R$ equivalent to $Q_X$. Our main result is the following:

Theorem 3.3. Assume that $U^n\to U$ in $L^2(P)$ and that (3.4) holds. Then $\eta^n\to\eta$ in $Q_X$-measure.
Proof. 1) To begin with, we introduce the following orthogonal decompositions for the locally square-integrable martingales $X^n$ and the square-integrable martingales $N^n$ (recall (3.3)); below, the processes $L^n$ are locally square-integrable martingales and the $T^n$ are square-integrable martingales (recall also that the orthogonality between local martingales is denoted by $\perp$):
$$X^n_t = X^n_0 + \int_0^t h^n_s\,dX_s + L^n_t,\qquad L^n\perp X, \eqno(3.5)$$
$$N^n_t = \int_0^t k^n_s\,dX_s + T^n_t,\qquad T^n\perp X. \eqno(3.6)$$
In what follows we prove a bit more than is strictly necessary for the present theorem, but the following facts will also be used in the subsequent results. The orthogonality of $X^n$ and $N^n$ yields
$$\int_0^t h^n_sk^n_s\,d\langle X,X\rangle_s + \langle L^n,T^n\rangle_t = 0\qquad\forall t,\ \text{a.s.} \eqno(3.7)$$
We also have
$$\langle X^n - X, X^n - X\rangle_t = \int_0^t(h^n_s - 1)^2\,d\langle X,X\rangle_s + \langle L^n,L^n\rangle_t, \eqno(3.8)$$
$$Q_X\bigl((\eta^nh^n + k^n - \eta)^2\bigr)\le E\bigl((U^n - U)^2\bigr)\to 0, \eqno(3.9)$$
$$\left.\begin{aligned} &E\bigl(\langle T^n,T^n\rangle_\infty\bigr)\le E\bigl(\langle N^n,N^n\rangle_\infty\bigr)\\ &E\Bigl(\int_0^\infty(\eta^n_s)^2\,d\langle L^n,L^n\rangle_s\Bigr)\le E\Bigl(\Bigl(\int_0^\infty\eta^n_s\,dX^n_s\Bigr)^2\Bigr)\end{aligned}\right\}\ \le E\bigl((U^n)^2\bigr)\le K \eqno(3.10)$$
for some constant $K$, where we have used that $U^n\to U$ in $L^2(P)$ for the last two properties.

2) After these preliminaries, we can go to the proof of our claim. First, we can write the (pathwise) Lebesgue decomposition of the process $\langle L^n,T^n\rangle$, which is of locally bounded variation, w.r.t. the increasing process $\langle X,X\rangle$, as $\langle L^n,T^n\rangle_t = \int_0^t\alpha^n_s\,d\langle X,X\rangle_s + A^n_t$, where $A^n$ is a process of locally bounded variation which is singular w.r.t. $\langle X,X\rangle$. Then (3.7) yields
$$h^nk^n + \alpha^n = 0\qquad Q_X\text{-a.s.} \eqno(3.11)$$
But it is well known by the Kunita-Watanabe inequality that the variation of the process $\langle L^n,T^n\rangle$ over $[0,t]$ is smaller than or equal to $\sqrt{\langle L^n,L^n\rangle_t}\,\sqrt{\langle T^n,T^n\rangle_t}$, while by the above Lebesgue decomposition it is bigger than $\int_0^t|\alpha^n_s|\,d\langle X,X\rangle_s$. Then we readily deduce from (3.4), (3.8) and (3.10) that $\int_0^t|\alpha^n_s|\,d\langle X,X\rangle_s\to 0$ in probability for all $t$, so in view of (3.11) we get
$$h^nk^n\to 0\quad\text{in } Q_X\text{-measure}. \eqno(3.12)$$
Next, (3.4) and (3.8) on the one hand, (3.9) on the other hand, give us:
$$\eta^nh^n + k^n\to\eta,\qquad h^n\to 1,\quad\text{in } Q_X\text{-measure}. \eqno(3.13)$$
Now, combining (3.12) and (3.13) readily gives us $\eta^n\to\eta$ in $Q_X$-measure. $\Box$
Associated with this theorem, we have a result about the rate of convergence:

Theorem 3.4. Assume that $U^n\to U$ in $L^2(P)$ and that $X$ and $X^n$ are locally square-integrable martingales satisfying (3.4). Assume further that there is a sequence $(a_n)$ in $\mathbb R_+$ going to $+\infty$ such that the sequence $(a_n(U^n - U) : n\ge1)$ is bounded in $L^2(P)$, and that for each $t$ the sequence of variables $(a_n^2\langle X^n - X, X^n - X\rangle_t : n\in\mathbb N)$ is uniformly tight. Then the sequence $(a_n(\eta^n - \eta) : n\in\mathbb N)$ is uniformly tight with respect to any finite measure equivalent to $Q_X$.

Proof
. 1) We choose a finite measure $R$ equivalent to $Q_X$. Let us first recall that if $(u^n)$ is a sequence of processes such that for all $t$ the sequence of random variables $(\int_0^t|u^n_s|\,d\langle X,X\rangle_s : n\ge1)$ is tight, then the sequence $(u^n)$ is $R$-tight. Applying this to (3.8) and (3.9) multiplied by $a_n^2$ gives that
$$\text{the two sequences } a_n(h^n - 1),\ a_n(\eta^nh^n + k^n - \eta)\ \text{ are } R\text{-tight}. \eqno(3.14)$$
We also deduce from (3.8) that for each $t$ the sequence $(\langle L^n,L^n\rangle_t : n\ge1)$ is tight. Exactly as in the last step of the previous proof, we deduce that the sequences $(\int_0^t a_n|\alpha^n_s|\,d\langle X,X\rangle_s : n\ge1)$ are tight, hence the sequence $(a_n\alpha^n)$ is $R$-tight. In view of (3.11) we deduce that
$$\text{the sequence } a_nh^nk^n\ \text{ is } R\text{-tight}. \eqno(3.15)$$
Now, we can write
$$a_n(\eta^n - \eta) = a_n\eta^n(1 - h^n)(1 + h^n) + a_nh^n(\eta^nh^n + k^n - \eta) - a_nh^nk^n + a_n\eta(h^n - 1).$$
We also know that the sequences $(h^n)$ and $(\eta^n)$ are $R$-tight (by (3.13) and the previous theorem). Then the result readily follows from (3.14) and (3.15). $\Box$
3.3 A discrete version of Subsection 3.2

Now we consider for each n a subdivision τ_n = (T(n,i) : i ∈ ℕ) of stopping times on the basis (Ω, F, (F_t), P) with F = ⋁_t F_t, satisfying (2.4), and we suppose that the sequence (τ_n) satisfies (3.1). For each n we have a square-integrable martingale X^n and a square-integrable variable U^n. Analogously to (2.5), we set X^n_i = X^n_{T(n,i)}. As in (2.6) we have (3.3), as well as the decomposition

U^n = α^n + Σ_{i≥1} ξ^n_i ΔX^n_i + N^n_∞.   (3.16)

Then, as in (2.7), we set

ξ'^n_t = ξ^n_i   if T(n,i−1) < t ≤ T(n,i).   (3.17)

Theorem 3.5.
Assume that U^n → U in L²(P), that X^n and X are square-integrable martingales, and that

E(⟨X^n − X, X^n − X⟩_∞) → 0.   (3.18)

Then the sequence ξ'^n converges to ξ = ξ(X,U) in Q_X-measure.
Proof. In view of Proposition 2.2 we have ξ'^n = Q_{X^n}(ξ^n | P'_n), where P'_n is the σ-field on Ω̃ defined in the proof of Theorem 3.1. Let also P be the predictable σ-field on Ω̃. We can find a probability measure R on (Ω̃, P) which dominates all the finite measures Q_X and Q_{L^n}, and such that Q_X ≤ aR for some constant a. We can thus find nonnegative R-integrable and predictable functions V, V^n such that V ≤ a and Q_X = V·R, Q_{L^n} = V^n·R.

Now, (3.18) and (3.8), then (3.9), then (3.10), yield the analogues of the convergences in the proof of Theorem 3.3, and in view of (3.19), (3.21) and (3.11), we obtain ξ^n V^n → ξV in L¹(R). Putting all these results together yields W^n → W in L¹(R). On the other hand, Bayes' rule yields ξ'^n = R(W^n | P'_n) / R(V^n | P'_n).

Now let us apply the proof of Theorem 3.1 to R instead of Q_X: first with V instead of ξ, which, since V is bounded, yields R(V | P'_n) → R(V | P) in L²(R). Combining this with (3.23) yields R(W^n | P'_n) → R(W | P) in R-measure, and ξ'^n → ξ in Q_X-measure follows. □

In the previous theorem, we would like to replace (3.18) by (3.4), with X and X^n locally square-integrable martingales.

3.4 Application to the Euler scheme
n3.4 Application to the Euler scheme
We apply the previous results to the Euler approximation scheme for a stochastic dier-ential equation. The setting, similar to that of Subsection 2.4, is as follows: we have a locally square-integrable martingale
Z
on a space (F(Ft)t0
P
) with F =W
Ftand a
locally Lipschitz continuous function with linear growth
g
, andX
is the (unique) solution of the following stochastic equation (whereX
0 is a givenF
0-measurable square-integrable
variable):
X
t =X
0+ Z t0
g
(X
s;)dZ
s:
(3.25)In comparison with Subsection 2.4, we relax the assumptions on
g
andZ
and allow an arbitrary initial conditionX
0. We also consider subdivisionsn= (T
(ni
) :i
2
IN
) ofstopping times satisfying (2.4), such that (3.1) holds. With
n0 = 0 and
nt=T
(ni
;1)for
T
(ni
;1)< t
T
(ni
), we have the \continuous" Euler approximation at stagen
,which is the solution of
X
nt =X
0+ Z t0
g
(X
nns)
dZ
s:
(3.26)Let
U
andU
n be square-integrable variables such thatU
n !U
inIL
2: typically
U
=f
(X
t) andU
n =f
(X
nt) for somet
, wheref
is a bounded continuous function. Inthis case, since by a well known result (see e.g. 11])
X
n goes in probability toX
, locallyuniformly in time, we do indeed have
U
n!U
inIL
2.Note that
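For orientation (this illustration is ours, not part of the paper), the scheme (3.26) is straightforward to simulate when the driving martingale Z is a standard Brownian motion and the grid is the deterministic one T(n,i) = i/n. The sketch below, with an illustrative coefficient g, compares a fine-grid proxy for the solution X of (3.25) with the stage-n Euler approximation X^n driven by the same path of Z:

```python
import numpy as np

rng = np.random.default_rng(0)

def g(x):
    # illustrative coefficient: Lipschitz, linear growth (our choice, not the paper's)
    return 1.0 + 0.5 * x

def euler_paths(n_coarse, n_fine=10_000, T=1.0, x0=1.0):
    """Return a fine-grid proxy for X of (3.25) and the stage-n Euler
    approximation X^n of (3.26), driven by the same Brownian path Z."""
    dt = T / n_fine
    dZ = rng.normal(0.0, np.sqrt(dt), n_fine)   # increments of Z (Brownian motion)
    X = np.empty(n_fine + 1)
    Xn = np.empty(n_fine + 1)
    X[0] = Xn[0] = x0
    step = n_fine // n_coarse                   # fine steps per coarse interval
    for k in range(n_fine):
        X[k + 1] = X[k] + g(X[k]) * dZ[k]       # fine-grid solution of (3.25)
        frozen = Xn[(k // step) * step]         # X^n at theta_n(s): last coarse point
        Xn[k + 1] = Xn[k] + g(frozen) * dZ[k]   # coefficient frozen as in (3.26)
    return X, Xn

X, Xn = euler_paths(n_coarse=50)
err = float(np.max(np.abs(X - Xn)))             # sup-norm distance on this path
```

On a fixed simulated path, err is typically small and shrinks as the subdivision is refined, in line with the locally uniform convergence X^n → X invoked above.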
Note that X and X^n are locally square-integrable martingales. Recall also (1.3). Then as a corollary of Theorem 3.3 we get:
Theorem 3.6. Let U^n → U in L²(P), and let ξ = ξ(X,U) and ξ^n = ξ(X^n,U^n). Then ξ^n → ξ in Q_X-measure.

Proof. It is enough to prove that (3.4) holds. Note that

⟨X^n − X, X^n − X⟩_t = ∫_0^t (g(X^n_{θ_n(s)}) − g(X_{s−}))² d⟨Z,Z⟩_s
  ≤ 2 ∫_0^t (g(X^n_{θ_n(s)}) − g(X_{θ_n(s)}))² d⟨Z,Z⟩_s + 2 ∫_0^t (g(X_{θ_n(s)}) − g(X_{s−}))² d⟨Z,Z⟩_s.

We have already mentioned that X^n goes to X uniformly in time, in P-measure. Thus the sequence sup_{s≤t} (|X_s| + |X^n_s|) is bounded in probability and, since g is continuous and locally bounded, it follows that the first term on the right side of the above inequality goes to 0 in probability for each t. On the other hand θ_n(s) → s and θ_n(s) < s for all s > 0: thus for all ω and all s > 0 we have X_{θ_n(s)}(ω) → X_{s−}(ω). Thus, by the continuity of g again, the second term on the right side of the above inequality goes to 0 for all ω: hence (3.4) is proved. □

Let us now pass to the "discrete" Euler approximation.
Here we have some problems of integrability, because in order to apply the previous results we need each X^n to be a discrete-time locally square-integrable martingale. On the other hand we do not wish to assume that X, X^n and Z are square-integrable up to infinity. In order to resolve this problem, we suppose that Z is a martingale square-integrable on compacts, and also that there is a constant K such that for all i, n:

T(n,i) − T(n,i−1) ≤ K.   (3.28)

Then the process Z stopped at any time T(n,i) (which is bounded, by (3.28)) is a square-integrable martingale, and Z^n_i = Z_{T(n,i)} is a martingale square-integrable on compacts w.r.t. (F_{T(n,i)})_{i≥0}. Due to the linear growth of g, and similarly to (5.2) of the Appendix, one also checks easily that X^n_i = X^n_{T(n,i)} is also a martingale square-integrable on compacts w.r.t. the filtration (F_{T(n,i)})_{i≥0}.

As soon as U^n is square-integrable, analogously to (3.16), we may thus write

U^n = α^n + Σ_{i≥1} ξ^n_i ΔX^n_i + N^n_∞ = α'^n + Σ_{i≥1} ζ^n_i ΔZ^n_i + N'^n_∞,   (3.29)

where N^n (resp. N'^n) is orthogonal to the discrete-time locally square-integrable martingale (X^n_i)_{i≥0} (resp. (Z^n_i)_{i≥0}), and ξ^n_i and ζ^n_i are F_{T(n,i−1)}-measurable. Further, we set ξ'^n_t = ξ^n_i and ζ'^n_t = ζ^n_i for T(n,i−1) < t ≤ T(n,i).

Theorem 3.7.
Assume (3.28) and that Z is a martingale square-integrable on compacts. If U^n → U in L²(P) and the variables U^n and U are F_T-measurable for some T ∈ ℝ₊, then the sequence ξ'^n converges to ξ = ξ(X,U) in Q_X-measure.

Proof. Nothing is changed if we replace Z by the stopped process Z^{T'} in (3.29), and also ζ = ζ(Z^{T'}, U). So we can assume that Z = Z^T is square-integrable. Applying Theorem 3.5 with X^n = X = Z then yields that ζ'^n → ζ in Q_Z-measure. The last three terms above are orthogonal martingales, and thus by identification with the previous expression we get that a.s.:

ξ^n_i g(X^n_{i−1}) = ζ^n_i 1_{{g(X^n_{i−1}) ≠ 0}}.   (3.31)

This yields

ξ'^n_s g(X^n_{θ_n(s)}) = ζ'^n_s 1_{{g(X^n_{θ_n(s)}) ≠ 0}},   Q_Z-a.s.   (3.32)

A similar argument shows that

ξ_s g(X_{s−}) = ζ_s 1_{{g(X_{s−}) ≠ 0}},   Q_Z-a.s.   (3.33)

As seen in the proof of Theorem 3.6, g(X^n_{θ_n(s)}) → g(X_{s−}) in probability for all s. Then one deduces from the fact that ζ'^n → ζ in Q_Z-measure and from (3.32) and (3.33) that ξ'^n → ξ in Q_Z-measure on the set {(ω,t) : |g(X_{t−}(ω))| ≥ ε}, for every ε > 0. Hence the same convergence holds also on the set A = {(ω,t) : g(X_{t−}(ω)) ≠ 0}, and since Q_X is absolutely continuous w.r.t. Q_Z and does not charge the complement of A, we deduce that ξ'^n → ξ in Q_X-measure. □

Remark 3.8.
The same proof as above would also work for Theorem 3.6: we have ζ^n = ζ(Z, U^n) → ζ = ζ(Z, U) by (1.4), while the relation (3.32) holds between ξ^n and ζ^n.

4 Weak convergence results
In this section we consider the weak convergence of integrands: we have a sequence X^n of locally square-integrable martingales, each defined on its own probability space (Ω^n, F^n, (F^n_t), P^n), and for each n a square-integrable variable U^n on the relevant space. The aim is to prove that if (X^n, U^n) converges in law to (X, U), with X a locally square-integrable martingale and U a square-integrable variable on the space (Ω, F, (F_t), P), then (X^n, ξ(X^n, U^n)) converges in law to (X, ξ(X, U)) in some sense. It seems impossible to solve such a general problem, so we will concentrate on some particular cases.
4.1 Application of the Clark-Haussmann formula
Here we consider a sequence of processes of the form studied in Subsection 2.4. More precisely, we have Z, g and X^x as in that subsection, given on (Ω, F, (F_t), P). For each n, we also have a Lévy process Z^n which is a martingale square-integrable on compacts on a space (Ω^n, F^n, (F^n_t), P^n), satisfying (2.15) with c^n and F^n, and as before, we assume that the numbers

c̃ = c + ∫ F(dz) z²,   c̃^n = c^n + ∫ F^n(dz) z²

are finite and strictly positive. Then we have differentiable functions g^n, and we consider the equations (2.17) and (2.18) w.r.t. Z^n and g^n, whose solutions are denoted by X^{n,x} and X'^{n,x}. We make the following assumptions. First, on g^n and g:

|g^n(0)| ≤ K,   |g'^n(x)| ≤ K,   |g'^n(x) − g'^n(y)| ≤ K |x − y|;   (4.1)

g^n → g and g'^n → g'.   (4.2)
Next, on Z^n and Z: we basically assume that Z^n converges in law to Z, plus a slightly stronger assumption which is reminiscent of the Lindeberg condition; more precisely we assume that

c̃^n → c̃, and ∫ F^n(dz) h(z) → ∫ F(dz) h(z) for h continuous, bounded and vanishing in a neighborhood of 0;   (4.3)

A(x) = sup_n ∫ F^n(dz) z² 1_{|z|≥x} → 0 as x → ∞.   (4.4)

These two conditions imply that the second convergence in (4.3) also holds when h is continuous, h(x) = O(x²) at infinity, and h(x) = o(x²) at 0. They imply the convergence in law of Z^n to Z (see e.g. [9]). Then we can state:
Theorem 4.1. Assume (4.1), (4.2), (4.3) and (4.4). Let f be a differentiable function with a bounded and Lipschitz derivative, and let T > 0. The processes ξ = ξ(X^x, f(X^x_T)) and ξ^n = ξ(X^{n,x}, f(X^{n,x}_T)) have versions which are left continuous with right limits, and if we set ξ(+)_s = lim_{t↓s, t>s} ξ_t and ξ(+)^n_s = lim_{t↓s, t>s} ξ^n_t, the processes (X^{n,x}, ξ(+)^n) converge in law, for the Skorokhod topology on ℝ²-valued paths, to (X^x, ξ(+)).

Proof.
1) A version of ξ is given by (2.20), with φ given by (2.21). We wish to prove here that this version is left continuous with right limits. We can rewrite φ as

k(s,y,z) = ∫_0^1 (Q_{T−s} f'(y + u z g(y)) − Q_{T−s} f'(y)) du,
φ(s,y) = Q_{T−s} f'(y) + (1/c̃) ∫ F(dz) z² k(s,y,z).   (4.5)

In view of (5.9) of the appendix and of the properties of f, we have for 0 ≤ s ≤ t ≤ T:

|k(s,y,z)| ≤ C,   |k(s,y,z) − k(t,y,z)| ≤ C (1 + |y|(1 + |z|)) √(t−s)   (4.6)

for a constant C. Now, (4.3) and (4.4) yield that ∫ F(dz) z² 1_{|z|≥x} ≤ A(x), so the above estimates and (5.9) again yield that for all N > 1 ∨ T^{−1/4} and for two other constants C', C'':

|φ(s,y) − φ(t,y)| ≤ C'(1 + N|y|) √(t−s) + C A(N) ≤ C''(1 + |y|) ((t−s)^{1/4} + A((t−s)^{−1/4}))   (4.7)

(take N = (t−s)^{−1/4} to get the last estimate). On the other hand, as in the proof of Theorem 2.6 we have (2.25) with Q_t instead of Q^n_t, hence it is clear from (4.5) and (4.7) and another application of (5.9) that (s,y) ↦ φ(s,y) is continuous: hence (2.20) readily yields that ξ is left continuous with right limits.

2) Similarly, for each n we associate with Z^n, F^n, c̃^n, g^n the functions k^n and φ^n given by (4.5). Exactly as before, we obtain that ξ^n is given by (2.20) with φ^n and X^{n,x} instead of φ and X^x, and in view of (4.1), (4.3) and (4.4), it is clear that the estimates (4.6) and (4.7) hold for all k^n and φ^n with constants C, C'' independent of n.
3) Now we apply again the stability results of [10]: by (4.1), (4.2) and (4.3), for any sequence y_n → y, the processes (X^{n,y_n}, X'^{n,y_n}) converge in law to (X^y, X'^y), and further the estimate (5.2) of the Appendix yields that each sequence (X'^{n,y_n}_t)_{n≥1} is uniformly integrable. Hence if Q^n_t is associated with (X^{n,x}, X'^{n,x}) by (2.19), we readily deduce that (2.25) holds.

We will deduce that if y_n → y and s_n → s we have

φ^n(s_n, y_n) → φ(s, y).   (4.8)

Indeed, by (2.25) and (5.9) of the Appendix, we have Q^n_{T−s_n} f'(y_n) → Q_{T−s} f'(y), hence also k^n(s_n, y_n, z_n) → k(s, y, z) as soon as z_n → z, because of (4.2). Hence for (4.8) it remains to prove that, if h_n(z) = k^n(s_n, y_n, z) and h(z) = k(s, y, z),

(1/c̃^n) ∫ F^n(dz) z² h_n(z) → (1/c̃) ∫ F(dz) z² h(z),   (4.9)

knowing that h_n(z_n) → h(z) if z_n → z, that h is continuous, and that |h_n| ≤ C for a constant C. Now, consider the probability measures G^n(dz) = (1/c̃^n)(F^n(dz) z² + c^n ε_0(dz)) and G(dz) = (1/c̃)(F(dz) z² + c ε_0(dz)). Since h_n(0) = h(0) = 0, (4.9) reads as G^n(h_n) → G(h). Furthermore (4.3) and (4.4) imply that G^n converges weakly to G.

By the Skorokhod representation theorem we can find random variables V_n, V on a suitable probability space, such that V_n and V have laws G^n and G, and V_n → V everywhere. Then G^n(h_n) = E(h_n(V_n)) and G(h) = E(h(V)), and the fact that h_n(z_n) → h(z) if z_n → z yields that h_n(V_n) → h(V) everywhere. Since further |h_n| ≤ C, it follows that G^n(h_n) → G(h): hence (4.9) and (4.8) are proved.

4) Observe that ξ(+)^n_s = φ^n(s, X^{n,x}_s) 1_{[0,T)}(s) and ξ(+)_s = φ(s, X^x_s) 1_{[0,T)}(s). Further, (4.8) implies that φ^n → φ locally uniformly. Since X^{n,x} converges in law to X^x and since X^x has no fixed time of discontinuity, an application of the continuous mapping theorem yields that (X^{n,x}, ξ(+)^n) converges in law for the Skorokhod topology to (X^x, ξ(+)). □
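The key step 3) above — if G^n → G weakly, |h_n| ≤ C and h_n(z_n) → h(z) whenever z_n → z, then G^n(h_n) → G(h) — is easy to check numerically. The sketch below is only an illustration with choices of our own (G = N(0,1), G^n = N(0, 1 + 1/n), h_n(z) = arctan(z + 1/n)); the coupling V_n = √(1 + 1/n)·V mimics the Skorokhod representation, since V_n has law G^n and V_n → V pointwise:

```python
import numpy as np

rng = np.random.default_rng(2)
V = rng.normal(size=100_000)        # sample of V with law G = N(0, 1)

def h(z):
    return np.arctan(z)             # bounded continuous limit function

def h_n(z, n):
    return np.arctan(z + 1.0 / n)   # h_n(z_n) -> h(z) whenever z_n -> z; |h_n| <= pi/2

gaps = []
for n in (1, 10, 100, 1000):
    # Skorokhod-style coupling: V_n has law G^n = N(0, 1 + 1/n) and V_n -> V pointwise
    Vn = np.sqrt(1.0 + 1.0 / n) * V
    gaps.append(float(abs(np.mean(h_n(Vn, n)) - np.mean(h(V)))))
```

The estimated gaps |G^n(h_n) − G(h)| decrease toward 0 as n grows, as the lemma predicts.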
Remark 4.2. Suppose now that f is a continuously differentiable function on ℝ^k with all partial derivatives bounded and Lipschitz, and let 0 < T_1 < ... < T_k. Set ξ = ξ(X^x, f(X^x_{T_1}, ..., X^x_{T_k})) and ξ^n = ξ(X^{n,x}, f(X^{n,x}_{T_1}, ..., X^{n,x}_{T_k})), as in Remark 2.9. Then the statement of Theorem 4.1 holds, with exactly the same proof.
4.2 A discrete time version
Here we consider a "discrete time" version of the previous results. The setting is as follows, and will also be the same in the next subsection.

For each n we have a sequence (Y^n_i)_{i≥1} of i.i.d. variables on a given space (Ω^n, F^n, P^n), with

E^n(Y^n_i) = 0,   E^n((Y^n_i)²) = 1/n,   E^n((Y^n_i)⁴) ≤ ε_n/n,   (4.10)

where ε_n → 0. These conditions imply that the partial sums processes

Z^n_t = Σ_{i=1}^{[nt]} Y^n_i   (4.11)

converge weakly to a standard Wiener process Z = W, defined on a (possibly different) filtered space (Ω, F, (F_t)_{t≥0}, P). We also have a function g on ℝ which is differentiable with a bounded Lipschitz derivative, and we consider the difference equation

X^{n,x}_0 = x,   X^{n,x}_i = X^{n,x}_{i−1} + g(X^{n,x}_{i−1}) Y^n_i,   (4.12)

whose solution is a square-integrable martingale w.r.t. the discrete-time filtration F^n_i = σ(Y^n_j : j ≤ i). We also consider the associated continuous-time martingale w.r.t. the filtration (F^n_{[nt]})_{t≥0}:

X^{n,x}_t = X^{n,x}_{[nt]}.   (4.13)

This process X^{n,x} can be viewed as the solution of the stochastic differential equation

X^{n,x}_t = x + ∫_0^t g(X^{n,x}_{s−}) dZ^n_s,   (4.14)

and by stability theorems (see [10]) it converges weakly to the unique strong solution of the following equation:

X^x_t = x + ∫_0^t g(X^x_s) dZ_s.   (4.15)

We even have that the pair (Z^n, X^{n,x}) weakly converges to (Z, X^x). Further, the discrete- and continuous-time versions of X^{n,x} are related by (2.5) with T_i = i/n, and X^{n,x} is a locally square-integrable martingale.
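As a concrete instance of (4.10)-(4.12) (an illustration of ours, not from the paper): Bernoulli variables Y^n_i = ±1/√n satisfy (4.10), since E^n(Y^n_i) = 0, E^n((Y^n_i)²) = 1/n and E^n((Y^n_i)⁴) = 1/n². The sketch below simulates the difference equation (4.12) with an illustrative coefficient g and checks the martingale property E(X^{n,x}_{[nT]}) = x empirically:

```python
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    # illustrative coefficient with bounded Lipschitz derivative (our choice)
    return 1.0 + 0.25 * np.sin(x)

def discrete_scheme(n, T=1.0, x0=0.0, n_paths=20_000):
    """Simulate X^{n,x}_{[nT]} from the recursion (4.12) with Y^n_i = +-1/sqrt(n)."""
    steps = int(n * T)
    X = np.full(n_paths, x0)
    for _ in range(steps):
        Y = rng.choice([-1.0, 1.0], size=n_paths) / np.sqrt(n)  # one step of Y^n_i
        X = X + g(X) * Y                                        # recursion (4.12)
    return X

XT = discrete_scheme(n=100)
mean_XT = float(XT.mean())   # close to x0 = 0: the scheme is a martingale
```

For large n the law of XT approximates that of X^x_T from (4.15), in line with the stability results quoted above.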
Now we let T > 0 and f be a differentiable function with a bounded and Lipschitz derivative. Then U^n = f(X^{n,x}_T) is square-integrable. We can consider the decomposition (3.16), which gives ξ^n_i, and we associate ξ'^n as in (3.17) with T(n,i) = i/n. On the other hand U = f(X^x_T) is also square-integrable, and we set ξ = ξ(X^x, U).

Here again, by construction ξ'^n is left continuous with right limits, and we set ξ(+)'^n_s = lim_{t↓s, t>s} ξ'^n_t. On the other hand, the version of ξ given by Theorem 2.6 is not only left continuous, but even continuous except at time T: this is because the function φ of (2.21) is continuous, and the process X^x also is continuous; then the process ξ̄_s = φ(s, X^x_s) 1_{[0,T)}(s) is another version of ξ, which is right continuous with left limits (and also continuous except at T) and differs from the first version at time T only.

Theorem 4.3.
Assume (4.10), (4.11), (4.12), (4.14) and (4.15), with g differentiable with a bounded and Lipschitz derivative. Let f be a differentiable function with a bounded and Lipschitz derivative and let T > 0. Then the processes (X^{n,x}, ξ(+)'^n) converge in law, for the Skorokhod topology on ℝ²-valued paths, to (X^x, ξ̄).

Proof.
The explicit form of ξ is given by (2.20), with φ taking the simple form φ(s,y) = Q_{T−s} f'(y). Now, if P^n_t f(x) = E(f(X^{n,x}_t)), we readily deduce from Proposition 2.1 and from (4.10), (4.12) and (4.13) that a version of ξ^n_i is given by

ξ^n_i = (n / g(X^{n,x}_{i−1})) ∫ ρ^n_i(dy) y P^n_{T−i/n} f(X^{n,x}_{i−1} + g(X^{n,x}_{i−1}) y)   if i/n < T and g(X^{n,x}_{i−1}) ≠ 0,

where ρ^n_i denotes the law of Y^n_i. In view of the properties of g, we readily deduce by induction on i that x ↦ X^{n,x}_i is differentiable (for all ω), hence x ↦ X^{n,x}_t is also differentiable and its derivative satisfies

X'^{n,x}_t = 1 + ∫_0^t g'(X^{n,x}_{s−}) X'^{n,x}_{s−} dZ^n_s,

and we set Q^n_t f(x) = E(f(X^{n,x}_t) X'^{n,x}_t). Then by virtue of (4.10) and of the properties of g again, one easily checks that Q^n_t f'(y) is bounded in (n, y) and continuous in y, and that (∂/∂y) P^n_t f(y) = Q^n_t f'(y); since further the Y^n_i's are centered, we get

∫ ρ^n_i(dy) y P^n_{T−i/n} f(X^{n,x}_{i−1} + g(X^{n,x}_{i−1}) y) = (g(X^{n,x}_{i−1}) / n) Q^n_{T−i/n} f'(X^{n,x}_{i−1}) + ε^n_i,

where sup_i |ε^n_i| → 0. Therefore, if α_n(s) = i/n when i/n < s ≤ (i+1)/n, we deduce that a version of ξ(+)'^n is given by

ξ(+)'^n_s = Q^n_{T−α_n(s)} f'(X^{n,x}_{α_n(s)}) 1_{[0, α_n(T)]}(s) + ε'^n_s,

where sup_s |ε'^n_s| → 0. By the same argument as in Theorem 4.1 one has Q^n_{s_n} f'(y_n) → Q_s f'(y) when s_n → s and y_n → y. Since X^{n,x} converges in law to X^x, the result then follows as in Theorem 4.1 again. □
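In the Bernoulli case Y^n_i = ±1/√n the integrand ξ^n_i of (3.16) can be computed exactly by backward induction, since it is then the conditional least-squares coefficient E(U^n ΔX^n_i | F^n_{i−1}) / E((ΔX^n_i)² | F^n_{i−1}). The sketch below is an illustration of ours (the helper names value and integrand are hypothetical, not from the paper): value computes v_i(x) = E(f(X^n_steps) | X^n_i = x) and integrand the resulting ξ^n_i:

```python
import math

def g(x):
    # illustrative coefficient (our choice, not the paper's)
    return 1.0 + 0.25 * math.sin(x)

def value(i, x, n, steps, f):
    """v_i(x) = E[f(X^n_steps) | X^n_i = x] for scheme (4.12) with Y = +-1/sqrt(n)."""
    if i == steps:
        return f(x)
    d = g(x) / math.sqrt(n)
    return 0.5 * (value(i + 1, x + d, n, steps, f) +
                  value(i + 1, x - d, n, steps, f))

def integrand(i, x, n, steps, f):
    """xi^n_i given X^n_{i-1} = x: conditional covariance of U^n = f(X^n_steps) with
    the i-th increment, divided by its conditional variance (a discrete 'delta')."""
    d = g(x) / math.sqrt(n)
    return (value(i, x + d, n, steps, f) - value(i, x - d, n, steps, f)) / (2.0 * d)

f = lambda y: math.tanh(y)
xi_1 = integrand(1, 0.5, 16, 4, f)   # xi^n_1 when X^n_0 = 0.5, n = 16, 4 steps
```

For linear f one gets ξ^n_i ≡ 1, and the one-step identity v_{i−1}(x) + ξ^n_i·(±d) = v_i(x ± d) expresses the exact representation (3.16) in this complete binomial model.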
Remark 4.4. Exactly as in Remark 4.2, the same result holds when instead of f(X^x_T) and f(X^{n,x}_T) we consider the variables f(X^x_{T_1}, ..., X^x_{T_k}) and f(X^{n,x}_{T_1}, ..., X^{n,x}_{T_k}), where f is a continuously differentiable function on ℝ^k with all partial derivatives bounded and Lipschitz, and 0 < T_1 < ... < T_k.

4.3 Another discrete time version
Here we consider exactly the same setting as in the previous subsection: we have (4.10), (4.11), (4.12), (4.13), (4.14) and (4.15). The only two differences are that we only assume g to be locally Lipschitz with at most linear growth, and that we will prove a convergence theorem for more general variables than f(X^x_T), but in a much weaker sense.

More precisely, we consider a function Φ on the Skorokhod space D of all right continuous functions with left limits on ℝ₊, which is bounded, continuous for the local uniform topology, and measurable w.r.t. the σ-field D_T generated by the coordinates on D up to some time T > 0 (recall that if Φ is continuous for the Skorokhod topology, it is a fortiori continuous for the local uniform topology). Then we take U = Φ(X^x) and U^n = Φ(X^{n,x}).