## Level Sets of Additive Lévy Processes

### Davar Khoshnevisan
Department of Mathematics, University of Utah
155 South 1400 East JWB 233, Salt Lake City, UT 84112–0090

### Yimin Xiao
Department of Mathematics, University of Utah
155 South 1400 East JWB 233, Salt Lake City, UT 84112–0090
and Microsoft Corporation, One Microsoft Way, Redmond, WA 98052

### January 18, 2001

Research partially supported by grants from NSF and NATO.

**Abstract**

We provide a probabilistic interpretation of a class of natural capacities on Euclidean space in terms of the level sets of a suitably chosen multiparameter additive Lévy process $X$. We also present several probabilistic applications of the aforementioned potential-theoretic connections. They include intersections of Lévy processes and of their level sets, as well as Hausdorff dimension computations.

*Keywords and Phrases:* Additive Stable Processes, Potential Theory

**Contents**

**1 Introduction**

**2 The Main Results**

**3 Proof of Theorem 2.9**

3.1 Proof of the First Part of Proposition 3.1

3.2 Proof of the Second Part of Proposition 3.1

3.3 Proof of Proposition 3.2

**4 Proof of Theorem 2.10**

**5 Proof of Theorem 2.12**

5.1 Lebesgue's Measure of Stochastic Images

5.2 Proof of Theorem 5.1: Lower Bound

5.3 Proof of Theorem 5.1: Upper Bound

5.4 Conclusion of the Proof of Theorem 2.12

**6 Consequences**

6.1 Intersections of Zero Sets

6.2 Intersections of the Sample Paths

**1 Introduction**

An $N$-parameter, $\mathbb{R}^d$-valued *additive Lévy process* $X = \{X(t);\ t \in \mathbb{R}^N_+\}$ is a multiparameter stochastic process that has the decomposition
$$X = X_1 \oplus \cdots \oplus X_N,$$
where $X_1, \ldots, X_N$ denote independent Lévy processes that take their values in $\mathbb{R}^d$. To put it more plainly,
$$X(t) = \sum_{j=1}^N X_j(t_j), \qquad t \in \mathbb{R}^N_+, \tag{1.1}$$
where $t_i$ denotes the $i$th coordinate of $t \in \mathbb{R}^N_+$ ($i = 1, \ldots, N$). These random fields naturally arise in the analysis of multiparameter processes such as Lévy's sheets. For example, see Dalang and Mountford [7, 8], Dalang and Walsh [9, 10], Kendall [33], Khoshnevisan [34], Khoshnevisan and Shi [35], Mountford [40] and Walsh [51], to cite only some of the references.
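The decomposition (1.1) is easy to exercise numerically. The following sketch is our own illustration, not part of the paper: it uses Brownian motions (the simplest symmetric Lévy processes), with $N = 2$ and $d = 1$, and builds the additive field $X(t) = X_1(t_1) + X_2(t_2)$ on a discrete grid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent one-parameter Brownian paths on [0, 1], on an Euler grid.
n = 500
dt = 1.0 / n
B1 = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
B2 = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])

# The additive field of Eq. (1.1): X[i, j] = X_1(i*dt) + X_2(j*dt).
X = B1[:, None] + B2[None, :]
```

Restricting the field to a coordinate axis recovers the corresponding one-parameter process; for instance, `X[:, 0]` coincides with `B1`.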

Our interest is in finding connections between the level sets of $X$ and capacity in Euclidean spaces. In order to be concise, we shall next recall some formalism from geometric probability. See Matheron [38] and Stoyan [48] for further information and precise details. To any random set $\mathcal{K} \subset \mathbb{R}^d$, we assign a set function $\mu_{\mathcal{K}}$ on $\mathbb{R}^d$ as follows:
$$\mu_{\mathcal{K}}(E) = \mathbb{P}\{\mathcal{K} \cap E \neq \varnothing\}, \qquad E \subset \mathbb{R}^d \text{ Borel}, \tag{1.2}$$
and think of $\mu_{\mathcal{K}}$ as the *distribution* of the random set $\mathcal{K}$.
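For concreteness, such a distribution can be estimated by Monte Carlo. In the sketch below (our own illustration, not from the paper), $\mathcal{K}$ is the range of a discretized Brownian path on $[0,1]$, and $\mu_{\mathcal{K}}([a,b]) = \mathbb{P}\{\mathcal{K} \cap [a,b] \neq \varnothing\}$ is estimated for two intervals.

```python
import numpy as np

rng = np.random.default_rng(1)

def mu_K(a, b, n_steps=400, n_paths=4000):
    """Monte Carlo estimate of mu_K([a, b]) = P{K meets [a, b]},
    where K is the range of a (discretized) Brownian path on [0, 1]."""
    dt = 1.0 / n_steps
    paths = np.cumsum(rng.normal(0.0, np.sqrt(dt), (n_paths, n_steps)), axis=1)
    return (((paths >= a) & (paths <= b)).any(axis=1)).mean()

p_far = mu_K(1.5, 1.6)    # a short interval far from the starting point
p_wide = mu_K(0.5, 1.6)   # a wide interval containing it
```

Monotonicity of $\mu_{\mathcal{K}}$ in the target set is visible in the two estimates.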
Let $X^{-1}(a) = \{t \in \mathbb{R}^N_+ \setminus \{0\} : X(t) = a\}$ denote the level set of $X$ at $a \in \mathbb{R}^d$. If $a = 0$, $X^{-1}(0)$ is also called the zero set of $X$. Our intention is to show that, under some technical conditions on $X$, $\mu_{X^{-1}(a)}$ is mutually absolutely continuous with respect to $\mathcal{C}(E)$, where $\mathcal{C}(E)$ is a natural Choquet capacity of $E$ that is explicitly determined by the dynamics of the stochastic process $X$. Our considerations also determine the Hausdorff dimension $\dim_{\mathrm{H}} X^{-1}(0)$ of the zero set of $X$, under very mild conditions.

In the one-parameter setting (i.e., when $d = N = 1$), the closure of $X^{-1}(a)$ is the range of a subordinator $S = \{S(t);\ t \geq 0\}$; cf. Fristedt [21]. Consequently, in the one-parameter setting, $\mu_{X^{-1}(a)}$ is nothing but the *hitting probability* for $S$. In particular, methods of probabilistic potential theory can be used to establish capacitary interpretations of the distribution of the level sets of $X$; see Bertoin [3], Fitzsimmons, Fristedt and Maisonneuve [18], Fristedt [20], and Hawkes [26] for a treatment of this and much more. Unfortunately, these one-parameter methods do not carry over directly to the multiparameter setting; for the requisite multiparameter potential theory, see Fitzsimmons and Salisbury [19], Hawkes [23, 25], Hirsch [27], Hirsch and Song [28, 29], Khoshnevisan [34], Khoshnevisan and Shi [35], Peres [43], Ren [45] and Salisbury [46].

We conclude the Introduction with the following consequence of our main results, Theorems 2.9, 2.10 and 2.12 of Section 2.

**Theorem 1.1** *Suppose $X_1, \ldots, X_N$ are independent isotropic stable Lévy processes in $\mathbb{R}^d$ with index $\alpha \in\ ]0,2]$ and $X = X_1 \oplus \cdots \oplus X_N$. Then,*

*(i) $\mathbb{P}\{X^{-1}(0) \neq \varnothing\} > 0$ if and only if $N\alpha > d$; and*

*(ii) if $N\alpha > d$, then $\mathbb{P}\big\{\dim_{\mathrm{H}} X^{-1}(0) = N - \frac{d}{\alpha}\big\} > 0$.*

*Furthermore, for each $M > 1$, there exists a constant $A > 1$ such that, simultaneously for all compact sets $E \subset [M^{-1}, M]^N$ and for all $a \in [-M, M]^d$,*
$$\frac{1}{A}\,\mathrm{Cap}_{d/\alpha}(E) \leq \mu_{X^{-1}(a)}(E) \leq A\,\mathrm{Cap}_{d/\alpha}(E),$$
*where $\mathrm{Cap}_\beta(E)$ denotes the Riesz–Bessel capacity of $E$, of index $\beta$.*

We recall that for all $\beta > 0$,
$$\mathrm{Cap}_\beta(E) = \Big\{\inf_{\mu \in \mathcal{P}(E)} \int\!\!\int |s-t|^{-\beta}\, \mu(ds)\,\mu(dt)\Big\}^{-1}, \tag{1.3}$$
where $\mathcal{P}(E)$ denotes the collection of all probability measures on the Borel set $E \subset \mathbb{R}^N_+$ and $|t| = \max_{1 \leq j \leq N} |t_j|$ denotes the $\ell^\infty$-norm on $\mathbb{R}^N$. We shall prove this theorem in Section 2 below.

Figure 1: The Zero Set of Additive Brownian Motion
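The energy integral in (1.3) can be approximated by discretizing $\mu$. The sketch below is an illustration we add, not part of the original text: it takes $E = [0,1]$, $\beta = 1/2$ and $\mu =$ Lebesgue measure on $E$, for which the double integral has the closed form $2/((1-\beta)(2-\beta)) = 8/3$, so that its reciprocal bounds $\mathrm{Cap}_{1/2}([0,1])$ from below.

```python
import numpy as np

beta = 0.5
m = 2000
s = (np.arange(m) + 0.5) / m                 # midpoint grid on [0, 1]
diff = np.abs(s[:, None] - s[None, :])
np.fill_diagonal(diff, np.inf)               # drop the singular diagonal cells
energy = (diff ** (-beta)).sum() / m**2      # discretized energy of (1.3)
exact = 2.0 / ((1 - beta) * (2 - beta))      # = 8/3 for the uniform measure
cap_lower = 1.0 / energy                     # approximate lower bound for Cap_beta([0,1])
```

Using the uniform measure only bounds the capacity from below; the true capacity is a supremum over all probability measures on $E$.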

To illustrate some of the irregular features of the level sets in question, we include a simulation of the zero set of $X = X_1 \oplus X_2$, where $X_1$ and $X_2$ are independent, linear Brownian motions; cf. Figure 1. In this simulation, the darkest shade of gray represents the set $\{(s,t) : X_1(s) + X_2(t) < 0\}$, while the medium shade of gray represents the collection $\{(s,t) : X_1(s) + X_2(t) > 0\}$.
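A figure of this kind can be reproduced in a few lines. The sketch below, our addition, assumes (as in the figure) two independent linear Brownian motions; it classifies grid points by the sign of $X_1(s) + X_2(t)$ and flags the grid cells that straddle the zero set.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
dt = 1.0 / n
B1 = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
B2 = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
X = B1[:, None] + B2[None, :]       # X(s, t) = X_1(s) + X_2(t)

neg = X < 0                         # darkest shade of gray in Figure 1
pos = X > 0                         # medium shade of gray

# Cells whose corners carry both signs meet the zero level set.
sgn = np.sign(X)
corner_sum = sgn[:-1, :-1] + sgn[1:, :-1] + sgn[:-1, 1:] + sgn[1:, 1:]
zero_frac = (np.abs(corner_sum) < 4).mean()
```

Plotting `neg` (e.g. with `matplotlib.pyplot.imshow`) reproduces the two-shade picture; the zero set is the interface flagged by `corner_sum`.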

Throughout this paper, for any $c \in \mathbb{R}_+$, $\mathbf{c}$ denotes the $N$-dimensional vector $(c, \ldots, c)$, and for any integer $k \geq 1$ and any $x \in \mathbb{R}^k$, $|x| = \max_{1 \leq \ell \leq k} |x_\ell|$ and $\|x\| = \{\sum_{\ell=1}^k x_\ell^2\}^{1/2}$ denote the $\ell^\infty$ and $\ell^2$ norms on $\mathbb{R}^k$, respectively.

The remainder of this paper is organized as follows. In Section 2, after presenting some preliminary results, we state our main Theorems 2.9, 2.10 and 2.12. We then prove the announced Theorem 1.1. Theorem 2.9 is proved in Section 3, while the proof of Theorem 2.10 is given in Section 4. In Section 5 we prove Theorem 2.12. Our main arguments depend heavily upon tools from multiparameter martingale theory. In Section 6, we establish some of the other consequences of Theorems 2.9, 2.12 and 2.10 together with their further connection to the existing literature.

**2 Preliminaries and the Statement of the Main Results**

Throughout, $d$ and $N$ represent the spatial and temporal dimensions, respectively. The $N$-dimensional "time" space $\mathbb{R}^N_+$ can be partially ordered in various ways. The most commonly used partial order on $\mathbb{R}^N_+$ is $\preceq$, where $s \preceq t$ if and only if $s_i \leq t_i$ for all $1 \leq i \leq N$. This partial order induces a minimum operation: $s \curlywedge t$ denotes the element of $\mathbb{R}^N_+$ whose $i$th coordinate is $s_i \wedge t_i$, for all $1 \leq i \leq N$. For $s, t \in \mathbb{R}^N_+$ with $s \preceq t$, we write $[s,t] = [s_1,t_1] \times \cdots \times [s_N,t_N]$.

Concerning the source of randomness, we let $X_1, \ldots, X_N$ denote $N$ independent $\mathbb{R}^d$-valued Lévy processes and define $X = X_1 \oplus \cdots \oplus X_N$; see Eq. (1.1) for the precise definition. Recall that for each $1 \leq j \leq N$, the Lévy process $X_j$ is said to be *symmetric* if $-X_j$ and $X_j$ have the same finite-dimensional distributions. In such a case, by the Lévy–Khintchine formula, there exists a *nonnegative* function (called the *Lévy exponent* of $X_j$) $\Psi_j : \mathbb{R}^d \to \mathbb{R}_+$, such that for all $t \geq 0$,
$$\mathbb{E}\big[\exp\{i\xi \cdot X_j(t)\}\big] = \exp\{-t\Psi_j(\xi)\}, \qquad \xi \in \mathbb{R}^d.$$
In particular, if $\Psi_j(\xi) = \chi_j \|\xi\|^\alpha$ for some constant $\chi_j > 0$, $X_j$ is said to be an *isotropic* stable process with index $\alpha$.
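The defining identity $\mathbb{E}\exp\{i\xi \cdot X_j(t)\} = \exp\{-t\Psi_j(\xi)\}$ can be checked by simulation in a case where stable samples are readily available: the Cauchy process ($\alpha = 1$, $d = 1$, $\chi_j = 1$), for which $X_j(t)$ is a scale-$t$ Cauchy variable. This numerical check is our addition.

```python
import numpy as np

rng = np.random.default_rng(3)
t, xi, n = 1.5, 0.7, 400_000

samples = t * rng.standard_cauchy(n)     # X_j(t) for the Cauchy process
emp = np.cos(xi * samples).mean()        # E cos(xi X_j(t)); the law is symmetric
theory = np.exp(-t * abs(xi))            # exp(-t Psi(xi)) with Psi(xi) = |xi|
```

The cosine replaces the complex exponential because the law is symmetric, so the imaginary part vanishes.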

We say that the process $X_j$ is *absolutely continuous* if, for all $t > 0$, the function $\xi \mapsto e^{-t\Psi_j(\xi)}$ is in $L^1(\mathbb{R}^d)$. In this case, by the inversion formula, the random vector $X_j(t)$ has the following probability density function:
$$p_j(t;x) = (2\pi)^{-d} \int_{\mathbb{R}^d} e^{-i\xi \cdot x} \exp\{-t\Psi_j(\xi)\}\, d\xi, \qquad t > 0,\ x \in \mathbb{R}^d.$$
In all but a very few special cases, there are no known explicit formulæ for $p_j(t;x)$. The following folklore lemma gives some information about the behavior of $p_j(t;x)$ and follows immediately from the above representation.
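One of the few explicit cases is again the Cauchy process ($\Psi_j(\xi) = |\xi|$, $d = 1$), where the inversion formula can be evaluated by quadrature and compared with the known density $t/(\pi(t^2 + x^2))$. This check is our addition, not part of the paper.

```python
import numpy as np

t, x = 1.0, 0.8
xi = np.linspace(0.0, 60.0, 600_001)
dxi = xi[1] - xi[0]
# Real part of the inversion integral; the imaginary part vanishes by symmetry,
# so p(t; x) = (1/pi) * integral over [0, infinity) of cos(xi x) e^{-t xi} dxi.
f = np.cos(xi * x) * np.exp(-t * xi)
p_num = (f.sum() - 0.5 * (f[0] + f[-1])) * dxi / np.pi   # trapezoid rule
p_exact = t / (np.pi * (t * t + x * x))                  # closed-form Cauchy density
```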

**Lemma 2.1** *Suppose $X_j$ is symmetric and absolutely continuous, and let $\Psi_j$ denote its Lévy exponent. Then, (i) for all $t > 0$ and all $x \in \mathbb{R}^d$, $p_j(t;x) \leq p_j(t;0)$; and (ii) $t \mapsto p_j(t;0)$ is nonincreasing.*

We call $(\Psi_1, \ldots, \Psi_N)$ the *characteristic exponent* of the additive Lévy process $X$, and say that the additive Lévy process $X$ is *absolutely continuous* if for each $t \in \mathbb{R}^N_+ \setminus \partial\mathbb{R}^N_+$ (where $\partial\mathbb{R}^N_+$ denotes the boundary of $\mathbb{R}^N_+$), the function $\xi \mapsto \exp\{-\sum_{j=1}^N t_j \Psi_j(\xi)\}$ is in $L^1(\mathbb{R}^d)$. In this case, $X(t)$ has a probability density function given by the formula
$$p(t;x) = (2\pi)^{-d} \int_{\mathbb{R}^d} e^{-i\xi \cdot x} \exp\Big\{-\sum_{j=1}^N t_j \Psi_j(\xi)\Big\}\, d\xi. \tag{2.1}$$

Clearly, if at least one of the $X_j$'s is symmetric and absolutely continuous, then the additive Lévy process $X$ is also absolutely continuous. However, there are many examples of non-absolutely continuous Lévy processes $X_1, \ldots, X_N$ such that the associated $N$-parameter additive Lévy process $X = X_1 \oplus \cdots \oplus X_N$ *is* absolutely continuous. Below, we record the additive analogue of Lemma 2.1 (Lemma 2.2).

An $\mathbb{R}^d$-valued random variable $Y$ is $\kappa$-*weakly unimodal* if there exists a positive constant $\kappa$ such that for all $a \in \mathbb{R}^d$ and all $r > 0$,
$$\mathbb{P}\{|Y - a| \leq r\} \leq \kappa\, \mathbb{P}\{|Y| \leq r\}. \tag{2.2}$$
Throughout much of this paper, we will assume the existence of a fixed $\kappa$ such that the distribution of $X(t)$ is $\kappa$-weakly unimodal for all $t \in \mathbb{R}^N_+ \setminus \partial\mathbb{R}^N_+$. If and when this is so, we say that the process $X$ is *weakly unimodal*, for brevity. We now state some remarks in order to shed some light on this weak unimodality property.

**Remark 2.3** By a well-known result of Anderson (cf. [1, Th. 1]), if the density function $p(t,x)$ of $X(t)$ ($t \in \mathbb{R}^N_+$) is symmetric unimodal, in the sense that (i) $p(t,x) = p(t,-x)$, and (ii) $\{x : p(t,x) \geq u\}$ is convex for every $u$ ($0 < u < \infty$), then the inequality (2.2) holds with $Y = X(t)$ and $\kappa = 1$. In particular, any nondegenerate, centered Gaussian random vector satisfies these conditions. Using this fact, together with Bochner's subordination, we can deduce that whenever $X = \{X(t);\ t \in \mathbb{R}^N_+\}$ is an additive isotropic stable Lévy process of index $\alpha \in\ ]0,2]$ (i.e., whenever each $X_j$ is an isotropic stable Lévy process), the density function of $X(t)$ is symmetric unimodal for each $t \in \mathbb{R}^N_+ \setminus \{0\}$. In particular, when $X$ is an isotropic stable Lévy process, Eq. (2.2) holds with $\kappa = 1$.
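In the Gaussian case, the $\kappa = 1$ inequality (2.2) can be verified directly from the error function. The following check is our addition, for $Y \sim N(0,1)$ (so $d = 1$); it confirms that centering the interval at $a = 0$ maximizes the probability.

```python
import math

def interval_prob(a, r):
    """P{|Y - a| <= r} for Y ~ N(0, 1), computed via the error function."""
    cdf = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return cdf(a + r) - cdf(a - r)

r = 0.7
centered = interval_prob(0.0, r)
shifted = [interval_prob(a, r) for a in (-2.0, -0.5, 0.3, 1.0, 4.0)]
```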

**Remark 2.4** Our definition of weak unimodality is closely related to that of unimodal distribution functions. Recall that a distribution function $F(x)$ on $\mathbb{R}$ is said to be *unimodal* with mode $m$ if $F(x)$ is convex on $(-\infty, m)$ and concave on $(m, \infty)$. For a multivariate distribution function $F(x)$ ($x \in \mathbb{R}^d$), there are several different ways of defining unimodality of $F(x)$, such as symmetric unimodality in the sense of Anderson given above and symmetric unimodality in the sense of Kanter; see Kanter [32] or Wolfe [53]. We refer to Wolfe [54] for a survey of the various definitions of unimodality and related results.

**Remark 2.5** Some general conditions for the unimodality of infinitely divisible distributions are known. In this and the next remark (Remark 2.6 below), we cite two of them for the class of self-decomposable distributions.

Recall that a $d$-dimensional distribution function $F(x)$ is called *self-decomposable*, or of *class $L$*, if there exists a sequence of independent $\mathbb{R}^d$-valued random variables $\{Y_n\}$ such that for suitably chosen positive numbers $\{a_n\}$ and vectors $\{b_n\}$ in $\mathbb{R}^d$, the distribution functions of the random variables $a_n \sum_{i=1}^n Y_i + b_n$ converge weakly to $F(x)$ and, for every $\varepsilon > 0$,
$$\lim_{n \to \infty}\ \max_{1 \leq i \leq n} \mathbb{P}\{a_n |Y_i| \geq \varepsilon\} = 0.$$

It is well known that $F(x)$ is self-decomposable if and only if for every $a \in (0,1)$ there exists a distribution function $G_a(x)$ on $\mathbb{R}^d$ such that $\widehat{F}(\xi) = \widehat{F}(a\xi)\, \widehat{G}_a(\xi)$, where $\widehat{H}$ denotes the Fourier transform of $H$. This result, for $d = 1$, is due to Lévy [36]. It is extended to higher dimensions in Sato [47]; see also Wolfe [53]. From this it follows readily that convolutions of self-decomposable distribution functions are also self-decomposable. Sato [47] also proves, in his Theorem 2.3, that all stable distributions on $\mathbb{R}^d$ are self-decomposable.

**Remark 2.6** Yamazato [55] proves that all self-decomposable distribution functions on $\mathbb{R}$ are unimodal. For $d > 1$, Wolfe [53] proves that every $d$-dimensional symmetric self-decomposable distribution function is unimodal in the sense of Kanter [32]. In particular, every symmetric (though not necessarily isotropic) stable distribution on $\mathbb{R}^d$ is symmetric unimodal. We should also mention that Medgyessy [39] and Wolfe [52] give a necessary and sufficient condition, in terms of their Lévy measures, for symmetric infinitely divisible distributions on $\mathbb{R}$ to be unimodal. Their class is strictly larger than the class of self-decomposable distributions.

We now apply these results to derive weak unimodality for the distribution of a symmetric additive Lévy process $X = \{X(t);\ t \in \mathbb{R}^N_+\}$ in $\mathbb{R}^d$.

**Remark 2.7** Suppose that for all $t \in \mathbb{R}^N_+ \setminus \partial\mathbb{R}^N_+$ the distribution of $X(t)$ is self-decomposable; e.g., this holds whenever the distribution of $X_j(t_j)$ is self-decomposable for every $j \geq 1$ and for all $t_j > 0$. According to Remarks 2.4 and 2.6, the distribution of $X(t)$ is also symmetric unimodal, in the usual sense, when $d = 1$. Furthermore, when $d > 1$, the distribution of $X(t)$ is symmetric unimodal in the sense of Kanter [32]. Now, by the proof of Theorem 1 of Anderson [1], we can see that for all $t \in \mathbb{R}^N_+ \setminus \partial\mathbb{R}^N_+$, all $a \in \mathbb{R}^d$ and all $r > 0$, (2.2) holds with $\kappa = 1$. In particular, every symmetric (though not necessarily isotropic) additive stable Lévy process $X = X_1 \oplus \cdots \oplus X_N$ satisfies weak unimodality (2.2) with $\kappa = 1$.

In all cases known to us, weak unimodality holds with $\kappa = 1$; cf. Eq. (2.2). However, it seems plausible that in some cases Eq. (2.2) holds only for some $\kappa > 1$; this might happen when the distribution of the process $X$ is not symmetric unimodal. As we have been unable to resolve whether $\kappa > 1$ is ever necessary, our formulation of weak unimodality is stated in its current form for maximum generality.

Under the condition of weak unimodality, we can prove the following useful technical lemma.

**Lemma 2.8** *Let $X = X_1 \oplus \cdots \oplus X_N$ be an additive, weakly unimodal Lévy process. Then,*

*(i) [Weak Regularity] for all $t \in \mathbb{R}^N_+ \setminus \partial\mathbb{R}^N_+$ and all $r > 0$,*
$$\mathbb{P}\{|X(t)| \leq 2r\} \leq \kappa 2^d\, \mathbb{P}\{|X(t)| \leq r\};$$

*(ii) [Weak Monotonicity] for all $s, t \in \mathbb{R}^N_+ \setminus \partial\mathbb{R}^N_+$ with $s \preceq t$,*
$$\mathbb{P}\{|X(t)| \leq r\} \leq \kappa\, \mathbb{P}\{|X(s)| \leq r\}.$$

In the analysis literature, our notion of weak regularity is typically known as *volume doubling* for the law of $X(t)$.

**Proof** To prove weak regularity, let $B(x;r) = \{y \in \mathbb{R}^d : |y - x| \leq r\}$ and find points $a_1, \ldots, a_{2^d} \in \mathbb{R}^d$ such that:

(i) the interiors of the $B(a_\ell; r)$'s are disjoint, as $\ell$ varies in $\{1, \ldots, 2^d\}$; and

(ii) $\cup_{\ell=1}^{2^d} B(a_\ell; r) = B(0; 2r)$.

Applying weak unimodality,
$$\mathbb{P}\{|X(t)| \leq 2r\} \leq \sum_{\ell=1}^{2^d} \mathbb{P}\{|X(t) - a_\ell| \leq r\} \leq \kappa 2^d\, \mathbb{P}\{|X(t)| \leq r\}.$$

To prove weak monotonicity, we fix $s, t \in \mathbb{R}^N_+$ with $s \preceq t$. Then,
$$\mathbb{P}\{|X(t)| \leq r\} = \mathbb{P}\{|X(s) + (X(t) - X(s))| \leq r\} \leq \kappa\, \mathbb{P}\{|X(s)| \leq r\},$$
where the inequality follows from the independence of $X(s)$ and $X(t) - X(s)$, and weak unimodality. This concludes our proof.
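The covering step in this proof is concrete: in the $\ell^\infty$ metric, $B(0;2r)$ is exactly the union of the $2^d$ balls $B(a_\ell;r)$ with centers $a_\ell = (\pm r, \ldots, \pm r)$. The sketch below (our illustration, with $d = 3$) tests this covering on random points.

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
d, r = 3, 1.0

# The 2^d centers a_ell = (+-r, ..., +-r).
centers = np.array(list(itertools.product((-r, r), repeat=d)))

# Random points of B(0; 2r) in the ell-infinity norm.
pts = rng.uniform(-2 * r, 2 * r, size=(10_000, d))

# ell-infinity distance from each point to each center.
dist = np.abs(pts[:, None, :] - centers[None, :, :]).max(axis=2)
covered = (dist <= r).any(axis=1)
```

Each point $x$ is covered by the center whose coordinate signs match those of $x$, and distinct centers differ by $2r$ in some coordinate, so the interiors of the balls are disjoint.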

The following function $\Phi$ plays a central rôle in our analysis of the process $X$:
$$\Phi(s) = p(\bar{s}; 0), \qquad s \in \mathbb{R}^N, \tag{2.3}$$
where $\bar{s}$ is the element of $\mathbb{R}^N_+$ whose $i$th coordinate is $|s_i|$. Clearly, $s \mapsto \Phi(s)$ is nonincreasing in each $|s_i|$ and, equally clearly, $\Phi(0) = +\infty$. We will say that $\Phi$ is the *gauge function* for the multiparameter process $X$. Corresponding to the gauge function $\Phi$, we may define the $\Phi$-*capacity* of a Borel set $E \subset \mathbb{R}^N_+$ as
$$\mathcal{C}_\Phi(E) = \Big\{\inf_{\mu \in \mathcal{P}(E)} \int\!\!\int \Phi(s-t)\, \mu(ds)\,\mu(dt)\Big\}^{-1}, \tag{2.4}$$
where $\mathcal{P}(E)$ denotes the collection of all probability measures on $E$. For any $\mu \in \mathcal{P}(E)$, we define the $\Phi$-*energy* of $\mu$ by
$$\mathcal{E}_\Phi(\mu) = \int\!\!\int \Phi(s-t)\, \mu(ds)\,\mu(dt). \tag{2.5}$$
Thus, the $\Phi$-capacity of $E$ is defined by the principle of minimum energy:
$$\mathcal{C}_\Phi(E) = \Big\{\inf_{\mu \in \mathcal{P}(E)} \mathcal{E}_\Phi(\mu)\Big\}^{-1}.$$
It is not hard to see that $\mathcal{C}_\Phi$ is a capacity in the sense of G. Choquet; cf. Bass [2] and Dellacherie and Meyer [11].

We are ready to state the main results of this paper. We denote by $\mathrm{Leb}(A)$ the $d$-dimensional Lebesgue measure of the Lebesgue measurable set $A \subset \mathbb{R}^d$.

**Theorem 2.9** *Let $X_1, \ldots, X_N$ be $N$ independent symmetric Lévy processes on $\mathbb{R}^d$ and let $X = X_1 \oplus \cdots \oplus X_N$. Suppose $X$ is absolutely continuous and weakly unimodal. Then, the following are equivalent:*

*(i) $\Phi \in L^1_{\mathrm{loc}}(\mathbb{R}^N)$;*

*(ii) $\mathbb{P}\{\mathrm{Leb}\{X([c,\infty[^N)\} > 0\} = 1$, for all $c > 0$;*

*(iii) $\mathbb{P}\{\mathrm{Leb}\{X([c,\infty[^N)\} > 0\} > 0$, for all $c > 0$;*

*(iv) $\mathbb{P}\{\mathrm{Leb}\{X([c,\infty[^N)\} > 0\} > 0$, for some $c > 0$;*

*(v) $\mathbb{P}\{X^{-1}(0) \cap [c,\infty[^N \neq \varnothing\} > 0$, for all $c > 0$;*

*(vi) $\mathbb{P}\{X^{-1}(0) \cap [c,\infty[^N \neq \varnothing\} > 0$, for some $c > 0$.*

When $X^{-1}(0) \neq \varnothing$, it is of interest to determine its Hausdorff dimension. Our next theorem provides upper and lower bounds for $\dim_{\mathrm{H}} X^{-1}(0)$ in terms of the following two indices associated to the gauge function $\Phi$:
$$\bar{\gamma} = \inf\Big\{\beta > 0 : \liminf_{s \to 0} \|s\|^{N-\beta}\, \Phi(s) > 0\Big\},$$
$$\underline{\gamma} = \sup\Big\{\beta > 0 : \int_{[0,1]^N} \frac{1}{\|s\|^\beta}\, \Phi(s)\, ds < \infty\Big\}.$$
It is easy to verify that $0 \leq \underline{\gamma} \leq \bar{\gamma} \leq N$.

Henceforth, $\mathbf{\|s\|}$ designates the $N$-dimensional vector $(\|s\|, \ldots, \|s\|)$.
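For the isotropic stable setting of Theorem 1.1, where (2.9) gives $\Phi(s) \asymp |s|^{-d/\alpha}$, the two indices coincide. The following short computation is our addition, written for a Riesz-type gauge $\Phi(s) \asymp \|s\|^{-d/\alpha}$ (the $\ell^\infty$ and $\ell^2$ norms are comparable, so the choice of norm is immaterial).

```latex
% Upper index: \|s\|^{N-\beta}\Phi(s) \asymp \|s\|^{N-\beta-d/\alpha}, whose liminf
% as s \to 0 is positive iff N-\beta-d/\alpha \le 0, i.e. \beta \ge N - d/\alpha:
\bar{\gamma}
  = \inf\Big\{\beta>0 :\ \liminf_{s\to 0}\|s\|^{N-\beta-d/\alpha} > 0\Big\}
  = N - \tfrac{d}{\alpha}.
% Lower index: in polar coordinates, \int_{[0,1]^N}\|s\|^{-\beta-d/\alpha}\,ds
% behaves like \int_0^1 \rho^{\,N-1-\beta-d/\alpha}\,d\rho, finite iff \beta < N - d/\alpha:
\underline{\gamma}
  = \sup\Big\{\beta>0 :\ \int_{[0,1]^N}\|s\|^{-\beta-d/\alpha}\,ds < \infty\Big\}
  = N - \tfrac{d}{\alpha}.
```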

**Theorem 2.10** *Given the conditions of Theorem 2.9, for any $0 < c < C < \infty$,*
$$\mathbb{P}\big\{\underline{\gamma} \leq \dim_{\mathrm{H}}\big(X^{-1}(0) \cap [c,C]^N\big) \leq \bar{\gamma}\big\} > 0. \tag{2.6}$$
*Moreover, if there exists a constant $K_1 > 0$ such that*
$$\Phi(s) \leq \Phi(K_1 \mathbf{\|s\|}) \qquad \text{for all } s \in [0,1]^N, \tag{2.7}$$
*then $\mathbb{P}\{\dim_{\mathrm{H}}(X^{-1}(0) \cap [c,C]^N) = \underline{\gamma}\} > 0$.*

**Remark 2.11** Clearly, if $X_1, \ldots, X_N$ have the same Lévy exponent, then (2.7) holds.

Our next theorem further relates the distribution of the level sets of an additive Lévy process to $\Phi$-capacity.

**Theorem 2.12** *Given the conditions of Theorem 2.9, for every $c > 0$, all compact sets $E \subset [c,\infty[^N$ and for all $a \in \mathbb{R}^d$,*
$$A_1 \sup_{\mu \in \mathcal{P}(E)} \frac{\big[\int p(s;a)\, \mu(ds)\big]^2}{\mathcal{E}_\Phi(\mu)} \leq \mu_{X^{-1}\{a\}}(E) \leq A_2\, \mathcal{C}_\Phi(E), \tag{2.8}$$
*where $A_1 = \kappa^{-2} 2^{-d} \{\Phi(\mathbf{c})\}^{-1}$ and $A_2 = \kappa^3 2^{5d+3N}\, \Phi(\mathbf{c})$.*

**Corollary 2.13** *Given the conditions and the notation of Theorem 2.12, for all $a \in \mathbb{R}^d$ and for all compact sets $E \subset [c,\infty[^N$,*
$$A_1\, \mathcal{C}_\Phi(E) \leq \mu_{X^{-1}(a)}(E) \leq A_2\, \mathcal{C}_\Phi(E),$$
*where $A_1 = \kappa^{-2} 2^{-d} \{\Phi(\mathbf{c})\}^{-1} I_E^2(a)$, $A_2 = \kappa^3 2^{5d+3N}\, \Phi(\mathbf{c})$ and $I_E(a) = \inf_{s \in E} p(s;a)$.*

Applying Lemma 2.2(iii), we can deduce that there exists an open neighborhood $G$ of $0$ (that may depend on $E$) such that for all $a \in G$, $I_E(a) > 0$. In particular, $\mu_{X^{-1}(0)}(E)$ is bounded above and below by nontrivial multiples of $\mathcal{C}_\Phi(E)$.

We can now use Theorems 2.9, 2.10 and 2.12 to prove Theorem 1.1.

**Proof of Theorem 1.1** Note that for all $t \in \mathbb{R}^N_+$ and all $\xi \in \mathbb{R}^d$,
$$\mathbb{E}\big[\exp\{i\xi \cdot X(t)\}\big] = \exp\Big\{-\sum_{j=1}^N t_j \chi_j \|\xi\|^\alpha\Big\}.$$
By (2.1), for all $t \in \mathbb{R}^N$,
$$\Phi(t) = (2\pi)^{-d} \int_{\mathbb{R}^d} \exp\Big\{-\sum_{j=1}^N |t_j| \chi_j \|\xi\|^\alpha\Big\}\, d\xi = \lambda \Big\{\sum_{j=1}^N |t_j| \chi_j\Big\}^{-d/\alpha},$$
where $\lambda = (2\pi)^{-d} \int_{\mathbb{R}^d} e^{-\|\zeta\|^\alpha}\, d\zeta$. In particular,
$$\lambda N^{-d/\alpha} \bar{\chi}^{-d/\alpha}\, |t|^{-d/\alpha} \leq \Phi(t) \leq \lambda\, \underline{\chi}^{-d/\alpha}\, |t|^{-d/\alpha}, \tag{2.9}$$
where $\bar{\chi} = \max\{\chi_1, \ldots, \chi_N\}$ and $\underline{\chi} = \min\{\chi_1, \ldots, \chi_N\}$, respectively. Consequently, $\Phi \in L^1_{\mathrm{loc}}(\mathbb{R}^N)$ if and only if $N\alpha > d$; and $\underline{\gamma} = \bar{\gamma} = N - d/\alpha$. Hence, the first two assertions of Theorem 1.1 follow from Theorems 2.9 and 2.10, respectively. We also have
$$\lambda^{-1} \underline{\chi}^{d/\alpha}\, \mathrm{Cap}_{d/\alpha}(E) \leq \mathcal{C}_\Phi(E) \leq \lambda^{-1} N^{d/\alpha} \bar{\chi}^{d/\alpha}\, \mathrm{Cap}_{d/\alpha}(E).$$
In light of Corollary 2.13, it remains to show that
$$\inf_{a \in [-M,M]^d}\ \inf_{s \in [M^{-1},M]^N} p(s;a) > 0.$$
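The scaling identity for $\Phi$ can be spot-checked numerically in the Brownian case $\alpha = 2$, $d = 1$. This check is our addition; here $\lambda = (2\pi)^{-1}\int_{\mathbb{R}} e^{-\zeta^2}\,d\zeta = 1/(2\sqrt{\pi})$.

```python
import math

import numpy as np

chi = np.array([0.5, 2.0])                  # chi_1, chi_2 (N = 2)
t = np.array([0.8, 1.3])
c = float(np.abs(t) @ chi)                  # sum_j |t_j| chi_j

# Numerical evaluation of Phi(t) = (2 pi)^{-1} * integral of exp(-c xi^2) d xi.
xi = np.linspace(-40.0, 40.0, 400_001)
dxi = xi[1] - xi[0]
phi_num = np.exp(-c * xi**2).sum() * dxi / (2.0 * math.pi)

lam = 1.0 / (2.0 * math.sqrt(math.pi))      # (2 pi)^{-1} * integral of e^{-zeta^2}
phi_exact = lam * c ** (-0.5)               # lambda * (sum_j |t_j| chi_j)^{-d/alpha}
```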

**3 Proof of Theorem 2.9**

We prove Theorem 2.9 by demonstrating the following Propositions 3.1 and 3.2.

**Proposition 3.1** *Under the conditions of Theorem 2.9, the following are equivalent:*

*(i) $\mathcal{C}_\Phi([0,1]^N) > 0$;*

*(ii) $\mathbb{P}\{X^{-1}(0) \cap [c,\infty[^N \neq \varnothing\} > 0$, for all $c > 0$;*

*(iii) $\mathbb{P}\{X^{-1}(0) \cap [c,\infty[^N \neq \varnothing\} > 0$, for some $c > 0$;*

*(iv) $\Phi \in L^1_{\mathrm{loc}}(\mathbb{R}^N)$.*

*Moreover, given any constants $0 < c < C < \infty$, for all $a \in \mathbb{R}^d$,*
$$\kappa^{-2} 2^{-d} \{\Phi(\mathbf{c})\}^{-1}\, \frac{\big[\int_{[c,C]^N} p(s;a)\, ds\big]^2}{\mathcal{E}_\Phi(\mathrm{Leb})} \leq \mu_{X^{-1}\{a\}}([c,C]^N) \leq \kappa^5 2^{3d+6N}\, \Phi(\mathbf{c})\, \mathcal{C}_\Phi([c,C]^N). \tag{3.1}$$

Proposition 3.1 rigorously verifies the folklore statement that the "equilibrium measure" corresponding to sets of the form $[c,C[^N$ is, in fact, the normalized Lebesgue measure. It is sometimes possible to find direct analytical proofs of this statement. For example, suppose that the gauge function $\Phi$ is a radial function of the form $f(|s-t|)$, where $f$ is decreasing. Then, the analytical method of Pemantle et al. [42] can be used to give an alternative proof that the equilibrium measure is Lebesgue's measure. In general, we know of only one probabilistic proof of this fact.

**Proposition 3.2** *Under the conditions of Theorem 2.9, the following are equivalent:*

*(i) $\mathcal{C}_\Phi([0,1]^N) > 0$;*

*(ii) $\mathbb{P}\{\mathrm{Leb}\{X([c,\infty[^N)\} > 0\} = 1$, for all $c > 0$;*

*(iii) $\mathbb{P}\{\mathrm{Leb}\{X([c,\infty[^N)\} > 0\} > 0$, for all $c > 0$;*

*(iv) $\mathbb{P}\{\mathrm{Leb}\{X([c,\infty[^N)\} > 0\} > 0$, for some $c > 0$.*

We shall have need for some notation. For all $i = 1, \ldots, N$, $\mathcal{F}_i = \{\mathcal{F}_i(t);\ t \geq 0\}$ denotes the complete, right-continuous filtration generated by the process $X_i$. We can define the $N$-parameter filtration $\mathcal{F} = \{\mathcal{F}(t);\ t \in \mathbb{R}^N_+\}$ by $\mathcal{F}(t) = \mathcal{F}_1(t_1) \vee \cdots \vee \mathcal{F}_N(t_N)$; it satisfies Condition (F4) of Cairoli and Walsh (cf. [6, 51]).

**3.1 Proof of the First Part of Proposition 3.1**

We now start our proof of the first part by demonstrating that assertion (i) of Proposition 3.1 implies (ii). We first note that $\mathcal{C}_\Phi([0,1]^N) > 0$ implies $\mathcal{C}_\Phi([0,T]^N) > 0$ for all $T > 1$. To see this directly, we assume that $\sigma$ is a probability measure on $[0,1]^N$ with $\mathcal{E}_\Phi(\sigma) < \infty$.

For each $\varepsilon > 0$, $T > 1$ and for the probability measure $\mu$ of Eq. (3.3), we define a random measure $\mathcal{J}_{\varepsilon,T}$ on $[1,T]^N$ by
$$\mathcal{J}_{\varepsilon,T}(B) = (2\varepsilon)^{-d} \int_B \mathbf{1}_{\{|X(s)| \leq \varepsilon\}}\, \mu(ds), \qquad B \subseteq [1,T]^N \text{ Borel}. \tag{3.6}$$
We will denote the total mass $\mathcal{J}_{\varepsilon,T}([1,T]^N)$ of this random measure by $\|\mathcal{J}_{\varepsilon,T}\|$.

The following lemma is an immediate consequence of Lemma 2.2 and the dominated convergence theorem.

Next, we consider the energy of $\mathcal{J}_{\varepsilon,T}$ with respect to the kernel $\bar{\varrho}$, and state a second moment bound for $\|\mathcal{J}_{\varepsilon,T}\|$.

We define
$$Z_1 = X(s) - X(s \curlywedge t), \qquad Z_2 = X(t) - X(s \curlywedge t).$$
Clearly,
$$\mathbb{P}\{|X(s)| \leq \varepsilon,\ |X(t)| \leq \varepsilon\} = \mathbb{P}\{|X(s \curlywedge t) + Z_1| \leq \varepsilon,\ |X(s \curlywedge t) + Z_2| \leq \varepsilon\} \leq \mathbb{P}\{|X(s \curlywedge t) + Z_1| \leq \varepsilon,\ |Z_1 - Z_2| \leq 2\varepsilon\}. \tag{3.9}$$
Elementary properties of Lévy processes imply that $X(s \curlywedge t)$, $Z_1$ and $Z_2$ are three independent random vectors in $\mathbb{R}^d$. Moreover, by the weak unimodality of the distribution of $X(s \curlywedge t)$ and by Lemma 2.2,
$$\mathbb{P}\{|X(s \curlywedge t) + Z_1| \leq \varepsilon \mid Z_1, Z_2\} \leq \kappa\, \mathbb{P}\{|X(s \curlywedge t)| \leq \varepsilon\} \leq \kappa (2\varepsilon)^d\, \Phi(s \curlywedge t) \leq \kappa (2\varepsilon)^d\, \Phi(\mathbf{1}).$$
Since $Z_1 - Z_2 = X(s) - X(t)$, Eq. (3.9) implies
$$\mathbb{P}\{|X(s)| \leq \varepsilon,\ |X(t)| \leq \varepsilon\} \leq \kappa (2\varepsilon)^d\, \Phi(\mathbf{1})\, \mathbb{P}\{|X(s) - X(t)| \leq 2\varepsilon\}.$$
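The inequality just derived can be tested by simulation. The sketch below is our own check, for additive Brownian motion with $N = 2$, $d = 1$ (where $\kappa = 1$ and $\Phi(\mathbf{1})$ is the $N(0,2)$ density at $0$, i.e. $1/(2\sqrt{\pi})$); it compares the two sides at $s = (1, 1.5)$ and $t = (1.5, 1)$.

```python
import numpy as np

rng = np.random.default_rng(5)
n, eps = 600_000, 0.1

# Brownian coordinates at times 1 and 1.5.
b1_1 = rng.normal(0.0, 1.0, n)
b1_15 = b1_1 + rng.normal(0.0, np.sqrt(0.5), n)
b2_1 = rng.normal(0.0, 1.0, n)
b2_15 = b2_1 + rng.normal(0.0, np.sqrt(0.5), n)

Xs = b1_1 + b2_15                   # X(s), s = (1.0, 1.5)
Xt = b1_15 + b2_1                   # X(t), t = (1.5, 1.0)

lhs = ((np.abs(Xs) <= eps) & (np.abs(Xt) <= eps)).mean()
phi_one = 1.0 / np.sqrt(4.0 * np.pi)          # density of X(1,1) ~ N(0, 2) at 0
rhs = (2 * eps) * phi_one * (np.abs(Xs - Xt) <= 2 * eps).mean()
```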

Thus, we have demonstrated that
$$\mathbb{E}\Big\{\int_{[1,T]^N}\!\int_{[1,T]^N} K(s,t)\, \mathcal{J}_{\varepsilon,T}(ds)\, \mathcal{J}_{\varepsilon,T}(dt)\Big\} \leq \kappa (2\varepsilon)^{-d}\, \Phi(\mathbf{1}) \int_{[1,T]^N}\!\int_{[1,T]^N} K(s,t)\, \mathbb{P}\{|X(s) - X(t)| \leq 2\varepsilon\}\, \mu(ds)\, \mu(dt).$$
The lemma follows from this and weak regularity; cf. Lemma 2.8.

**Remark 3.5** A little thought shows that we can apply Lemma 3.4 with $K(s,t) \equiv 1$ to obtain
$$\mathbb{E}\{\|\mathcal{J}_{\varepsilon,T}\|^2\} \leq \kappa^2\, \Phi(\mathbf{1})\, \varepsilon^{-d} \int_{[1,T]^N}\!\int_{[1,T]^N} \mathbb{P}\{|X(s) - X(t)| \leq \varepsilon\}\, \mu(ds)\, \mu(dt). \tag{3.10}$$
In particular,
$$\mathbb{E}\{\|\mathcal{J}_{\varepsilon,T}\|^2\} \leq \kappa^2 2^d\, \Phi(\mathbf{1}) \int_{[1,T]^N}\!\int_{[1,T]^N} \Phi(s-t)\, \mu(ds)\, \mu(dt). \tag{3.11}$$
If $\mu$ is chosen to be the $N$-dimensional Lebesgue measure on $[1,T]^N$, then, by the symmetry of the Lévy processes $X_j$ ($j = 1, \ldots, N$), Eq. (3.10) becomes
$$\mathbb{E}\{\|\mathcal{J}_{\varepsilon,T}\|^2\} \leq \kappa^2 2^N (T-1)^N\, \Phi(\mathbf{1})\, \varepsilon^{-d} \int_{[0,T-1]^N} \mathbb{P}\{|X(s)| \leq \varepsilon\}\, ds. \tag{3.12}$$
That is, Lemma 3.4 implies an energy estimate.

We can now prove

**Lemma 3.6** *In Proposition 3.1, (i) $\Rightarrow$ (ii).*

**Proof** Upon changing the notation of the forthcoming proof only slightly, we see that it is sufficient to prove that
$$\mathbb{P}\{0 \in X([1,2]^N)\} > 0.$$

We will prove this by constructing a random Borel measure on the zero set $X^{-1}(0) \cap [1,2]^N$. Let $\{\mathcal{J}_{\varepsilon,2}\}$ be the family of random measures on $[1,2]^N$ defined by (3.6). Lemmas 3.3 and 3.4 with $K(s,t) = \bar{\varrho}(s-t)$ and Eq. (3.11), together with a second moment argument (see Kahane [31, pp. 204–206], or LeGall, Rosen and Shieh [37, pp. 506–507]), imply that there exists a subsequence $\{\mathcal{J}_{\varepsilon_n,2}\}$ that converges weakly to a random measure $\mathcal{J}_2$ such that
$$\mathbb{E}\Big\{\int_{[1,2]^N}\!\int_{[1,2]^N} \bar{\varrho}(s-t)\, \mathcal{J}_2(ds)\, \mathcal{J}_2(dt)\Big\} \leq \kappa^2 2^d\, \Phi(\mathbf{1}) \int_{[1,2]^N}\!\int_{[1,2]^N} \bar{\varrho}(s-t)\, \Phi(s-t)\, \mu(ds)\, \mu(dt). \tag{3.13}$$
Moreover, letting $A = \kappa^{-2} 2^{-d} \{\Phi(\mathbf{1})\}^{-1}$, we have
$$\mathbb{P}\{\|\mathcal{J}_2\| > 0\} \geq A \cdot \Big\{\int_{[1,2]^N} \Phi(s)\, \mu(ds)\Big\}^2 \times \Big\{\int_{[1,2]^N}\!\int_{[1,2]^N} \Phi(s-t)\, \mu(ds)\, \mu(dt)\Big\}^{-1}, \tag{3.14}$$
which is positive: the first integral is clearly positive and the second is finite, thanks to Eq. (3.3).

It remains to prove that the random measure $\mathcal{J}_2$ is supported on $X^{-1}(0) \cap [1,2]^N$. To this end, it is sufficient to show that for each $\delta > 0$, $\mathcal{J}_2(D(\delta)) = 0$, a.s., where $D(\delta) = \{s \in [1,2]^N : |X(s)| > \delta\}$. We employ an argument that is similar, in spirit, to that used by LeGall, Rosen and Shieh [37, pp. 507–508].

Since the sample functions of each Lévy process $X_j$ ($j = 1, \ldots, N$) are right continuous with left limits everywhere, the limit $\lim_{t \to (A) s-} X(t)$ exists for every $A \in \Pi$ and $s \in \mathbb{R}^N_+$, where $\Pi$ denotes the collection of all subsets of $\{1, \ldots, N\}$ and $t \to (A)\, s-$ means $t_j \uparrow s_j$ for $j \in A$ and $t_j \downarrow s_j$ for $j \in A^\complement$. Note that $\lim_{t \to (\varnothing) s-} X(t) = X(s)$. Let
$$D_1(\delta) = \Big\{s \in [1,2]^N : \big|\lim_{t \to (A) s-} X(t)\big| > \delta \text{ for all } A \in \Pi\Big\}$$
and
$$D_2(\delta) = \Big\{s \in [1,2]^N : |X(s)| > \delta \text{ and } \big|\lim_{t \to (A) s-} X(t)\big| \leq \delta \text{ for some } A \in \Pi\Big\}.$$
Then, we have the inclusion: for all $\delta > 0$,
$$D(\delta) \setminus D_1(\delta) \subseteq D_2(\delta).$$
We observe that $D_1(\delta)$ is open in $[1,2]^N$, and $D_2(\delta)$ is contained in a countable union of hyperplanes of the form $\{t \in [1,2]^N : t_j = a \text{ for some } j\}$, for various values of $a$. These hyperplanes are determined solely by the discontinuities of the $X_j$'s.

Directly from the definition of $\mathcal{J}_{\varepsilon,2}$, we can deduce that for all $\varepsilon > 0$ small enough, $\mathcal{J}_{\varepsilon,2}(D_1(\delta)) = 0$. Hence, $\mathcal{J}_2(D_1(\delta)) = 0$, almost surely. On the other hand, Eq.'s (3.5) and (3.13) together imply that the following holds with probability one:
$$\mathcal{J}_2\{t \in [1,2]^N : t_j = a \text{ for some } j\} = 0, \qquad \forall a \in \mathbb{R}_+.$$
Consequently, $\mathcal{J}_2(D_2(\delta)) = 0$, a.s., for each $\delta > 0$. We have proved that, with positive probability, $0 \in X([1,2]^N)$, which verifies the lemma.

where $\mathcal{J}_{\varepsilon,3}$ is described in Eq. (3.6) with $\mu$ replaced by the $N$-dimensional Lebesgue measure. Clearly, $M$ is an $N$-parameter martingale in the sense of Cairoli [6]. We shall tacitly work with Doob's separable modification of $M$.

The lemma follows from weak regularity; cf. Lemma 2.8.

**Lemma 3.8** *In Proposition 3.1, (iii) $\Rightarrow$ (iv).*

**Proof** Upon squaring both sides of the inequality of Lemma 3.7, taking the supremum over $[1,2]^N \cap \mathbb{Q}^N$, and then taking expectations, we obtain a second moment bound, which we control by Cairoli's maximal inequality (Cairoli [6, Th. 1]). We then apply (3.12); the last inequality in that application uses the weak monotonicity property given by Lemma 2.8. By the general theory of Lévy processes, we can assume $t \mapsto X(t)$ to be right-continuous with respect to the partial order $\preceq$; cf. Bertoin [4]. Thus, by the mentioned sample right-continuity, we obtain the desired hitting probability estimate. In fact, this proof shows that for any $c > 0$, $u \in [c,\infty)^N$ and $h > 0$, there is a finite constant $K$, depending only on $N$, $d$, $\kappa$ and $c$, such that
$$\mathbb{P}\big\{X^{-1}(0) \cap [u, u + h\mathbf{1}] \neq \varnothing\big\} \leq K\, h^N \Big\{\int_{[0,h]^N} \Phi(s)\, ds\Big\}^{-1}. \tag{3.16}$$
This proves (iii) $\Rightarrow$ (iv), and concludes our proof of the first part of Proposition 3.1.

**Remark 3.9** Proposition 3.1 implies that if $\mathcal{C}_\Phi([0,1]^N) > 0$, then $\mathcal{C}_\Phi([0,T]^N) > 0$ for all $T > 0$.

**3.2 Proof of the Second Part of Proposition 3.1**

The arguments leading to the second part of our proof are similar to those of the first part. As such, we only sketch a proof.

For any $\varepsilon > 0$, $a \in \mathbb{R}^d$ and $T > 1$, define a random measure $\mathcal{I}_{a;\varepsilon,T}$ on $[1,T]^N$ as in Eq. (3.6), but with $X(s)$ replaced by $X(s) - a$:
$$\mathcal{I}_{a;\varepsilon,T}(B) = (2\varepsilon)^{-d} \int_B \mathbf{1}_{\{|X(s) - a| \leq \varepsilon\}}\, \mu(ds),$$
where $B \subseteq [1,T]^N$ designates an arbitrary Borel set. Arguments similar to those that lead to Lemmas 3.3 and 3.4 can be used to deduce the following.

**Lemma 3.10** *For any $a \in \mathbb{R}^d$,*

We are ready for

**Proof of Eq. (3.1)** Without loss of generality, we may and will assume that $c = 1$ and $C = 2$. The lower bound in (3.1) then follows from the argument of Lemma 3.6, using Eq.'s (3.18), (3.19) and (3.20) of Lemma 3.10 with $T = 2$; see Eq. (3.14).

To prove the upper bound in (3.1), we follow the lines of the proof of Lemma 3.8 and define $M_{a;\varepsilon,3}$ as the analogue of (3.15); it is always an $N$-parameter martingale. As in Lemma 3.7, the corresponding bound holds for all $t \in [1,2]^N$ and $\varepsilon > 0$. The presented proof of Lemma 3.8 can then be adapted, using Eq. (3.20) with $T = 3$ in place of Eq. (3.12), to yield the upper bound in (3.1).

**3.3 Proof of Proposition 3.2**

In order to prove (i) $\Rightarrow$ (ii), we use the Fourier-analytic ideas of Kahane [31]. Hence,
$$\mathbb{E}\int_{\mathbb{R}^d} |\widehat{\sigma}(u)|^2\, du \leq (2c)^N \int_{[0,c]^N} \int_{\mathbb{R}^d} \exp\Big\{-\sum_{j=1}^N t_j \Psi_j(u)\Big\}\, du\, dt = 2^{N+d} \pi^d c^N \int_{[0,c]^N} \Phi(t)\, dt,$$
which is finite, since $\Phi \in L^1_{\mathrm{loc}}(\mathbb{R}^N)$. Consequently, $\widehat{\sigma} \in L^2(\mathbb{R}^d)$, a.s. By the Riesz–Fischer theorem and/or Plancherel's theorem, $\sigma$ is a.s. absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}^d$, and its density is a.s. in $L^2(\mathbb{R}^d)$. This proves Eq. (3.21); assertion (ii) of Proposition 3.2 follows suit.

The implications (ii) $\Rightarrow$ (iii) $\Rightarrow$ (iv) being immediate, we will show (iv) $\Rightarrow$ (i). Assuming that (iv) holds, there exist a constant $c_0 > c$ and an open set $G \subset \mathbb{R}^d$ such that
$$\mathbb{P}\big\{\mathrm{Leb}\{X([c,C]^N) \cap G\} > 0\big\} > 0, \qquad \text{for all } C \geq c_0.$$
It follows from Eq. (3.1) that for all $a \in \mathbb{R}^d$,
$$\mathbb{P}\{a \in X([c,C]^N)\} \leq \kappa^5 2^{3d+6N}\, \Phi(\mathbf{c})\, \mathcal{C}_\Phi([c,C]^N).$$
By Fubini's theorem, we obtain
$$\mathbb{E}\big[\mathrm{Leb}\{X([c,C]^N) \cap G\}\big] \leq \kappa^5 2^{3d+6N}\, \Phi(\mathbf{c})\, \mathrm{Leb}(G)\, \mathcal{C}_\Phi([c,C]^N).$$
Hence, $\mathcal{C}_\Phi([c,C]^N) > 0$, which implies $\mathcal{C}_\Phi([0,1]^N) > 0$. This completes our proof of Proposition 3.2.

**4 Proof of Theorem 2.10**

Without loss of generality, we will assume that $c = 1$ and $C = 2$. With this in mind, we first prove the lower bound in (2.6). For any $\beta < \underline{\gamma}$,
$$\int_{[0,1]^N} \|s\|^{-\beta}\, \Phi(s)\, ds < \infty.$$
This implies that for any $T > 0$,
$$\int_{[0,T]^N} \int_{[0,T]^N} \|s-t\|^{-\beta}\, \Phi(s-t)\, ds\, dt < \infty. \tag{4.1}$$
We let $\mathcal{J}_2$ denote the random measure constructed in the presented proof of Lemma 3.6, with $\mu$ being Lebesgue's measure on $[1,2]^N$. We have already proved that $\mathcal{J}_2$ is supported on $X^{-1}(0) \cap [1,2]^N$, and that $\|\mathcal{J}_2\| > 0$ with probability $\eta > 0$, which is independent of $\beta$; see Eq. (3.14). On the other hand, Lemma 3.4 with $K(s,t) = \|s-t\|^{-\beta}$ and Eq. (4.1) together imply
$$\mathbb{E}\Big\{\int_{[1,2]^N} \int_{[1,2]^N} \|s-t\|^{-\beta}\, \mathcal{J}_2(ds)\, \mathcal{J}_2(dt)\Big\} < \infty.$$
Hence, $\mathbb{P}\{\dim_{\mathrm{H}}(X^{-1}(0) \cap [1,2]^N) \geq \beta\} \geq \eta$. Letting $\beta \uparrow \underline{\gamma}$ along a rational sequence, we obtain the lower bound in (2.6).

Next, we will use the hitting probability estimate (3.16) and a covering argument to prove the upper bound in (2.6). For any $\beta_0 > \gamma$, we choose $\beta \in (\gamma, \beta_0)$. Then

$$\Phi(s) \ge \frac{1}{\|s\|^{N-\beta}} \qquad \text{for all } s \text{ near } 0.$$

Hence, there exists a constant $K_2 > 0$ such that for all $h > 0$ small enough,

$$\int_{[0,h]^N} \Phi(s)\, ds \ge K_2\, h^\beta. \tag{4.2}$$
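The scaling computation behind (4.2) is short enough to record; substituting $s = h u$ (a sketch, with a particular choice of the constant $K_2$):

```latex
\int_{[0,h]^N} \Phi(s)\,ds
  \;\ge\; \int_{[0,h]^N} \frac{ds}{\|s\|^{N-\beta}}
  \;=\; h^{\beta} \int_{[0,1]^N} \frac{du}{\|u\|^{N-\beta}}
  \;=:\; K_2\, h^{\beta}.
```

The first inequality uses the bound on $\Phi$, valid once $h$ is small, and the last integral converges because the exponent $N - \beta$ is strictly smaller than $N$; hence $K_2 \in\, ]0, \infty[$.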

Now, we can take $n$ large enough and divide $[1,2]^N$ into $n^N$ subcubes $\{ C_{n,i} \}_{i=1}^{n^N}$, each of which has side $1/n$. Let us now define a covering $\mathbf{C}_{n,1}, \ldots, \mathbf{C}_{n,n^N}$ of $X^{-1}(0) \cap [1,2]^N$ by

$$\mathbf{C}_{n,i} = \begin{cases} C_{n,i} & \text{if } X^{-1}(0) \cap C_{n,i} \neq \varnothing \\ \varnothing & \text{otherwise} \end{cases}.$$

It follows from (3.16) and (4.2) that for each $C_{n,i}$,

$$\mathbb{P}\big\{ X^{-1}(0) \cap C_{n,i} \neq \varnothing \big\} \le K_3 \Big( \frac{1}{n} \Big)^{N-\beta},$$

where $K_3$ is a positive and finite constant. Hence, with the covering $\{ \mathbf{C}_{n,i} \}_{i=1}^{n^N}$ in mind,

$$\mathbb{E}\Big[ \mathcal{H}_{\beta_0}\big( X^{-1}(0) \cap [1,2]^N \big) \Big] \;\le\; \liminf_{n \to \infty} \sum_{i=1}^{n^N} \big( \sqrt{N}\, n^{-1} \big)^{\beta_0}\, \mathbb{P}\big\{ X^{-1}(0) \cap C_{n,i} \neq \varnothing \big\} \;\le\; \liminf_{n \to \infty} K_3\, \sqrt{N}^{\,\beta_0}\, n^{\beta - \beta_0} = 0,$$

where $\mathcal{H}_{\beta_0}(E)$ denotes the $\beta_0$-dimensional Hausdorff measure of $E$. This proves $\dim_H (X^{-1}(0) \cap [1,2]^N) \le \beta_0$ a.s., and hence the upper bound in (2.6).
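For intuition, the covering scheme above can be tried numerically in the simplest setting $N = d = 1$, where $X$ is approximated by a simple random walk and the zero set is classically of dimension $1/2$. The sketch below (all names and parameters are ours, purely illustrative, not from the paper) counts the blocks of a time grid that meet $X^{-1}(0)$ and reads off a box-dimension estimate:

```python
import math
import random

def zero_set_box_count(n_steps: int, block: int, rng: random.Random) -> int:
    """Count time-blocks of `block` steps in which a simple random walk
    (a discrete stand-in for X) meets zero, i.e. the blocks C_{n,i}
    with X^{-1}(0) intersecting C_{n,i}."""
    s, hits = 0, 0
    for _ in range(n_steps // block):
        hit = False
        for _ in range(block):
            prev = s
            s += rng.choice((-1, 1))
            # the walk meets 0 in this block if it is at 0 or changes sign
            if s == 0 or prev == 0 or (prev > 0) != (s > 0):
                hit = True
        if hit:
            hits += 1
    return hits

def dim_estimate(n_steps: int = 16384, block: int = 64,
                 n_walks: int = 50, seed: int = 0) -> float:
    """Average the block count N(delta) over independent walks and return
    log N(delta) / log(1/delta), where delta = block / n_steps."""
    rng = random.Random(seed)
    avg = sum(zero_set_box_count(n_steps, block, rng)
              for _ in range(n_walks)) / n_walks
    return math.log(avg) / math.log(n_steps / block)
```

For these (arbitrary) grid parameters the estimate lands near the true value $1/2$, in line with the first moment bound used in the covering argument.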

To prove the second assertion of Theorem 2.10, it suffices to show that under Condition (2.7), $\dim_H (X^{-1}(0) \cap [1,2]^N) \le \gamma$, a.s. This can be done by combining the above first moment argument and the following lemma. We omit the details.

**Lemma 4.1** *Under Condition (2.7), for any $\beta > 0$,*

$$\int_{[0,1]^N} \|s\|^{-\beta}\, \Phi(s)\, ds = \infty \tag{4.3}$$

*implies that for any $u \in [1,2]^N$ and any $\beta_0 > \beta$,*

$$\liminf_{h \to 0}\, h^{\beta_0 - N}\, \mathbb{P}\big\{ X^{-1}(0) \cap [u, u + \mathbf{h}] \neq \varnothing \big\} = 0. \tag{4.4}$$
**Proof** Under Condition (4.3), for any $\varepsilon > 0$ we must have

$$\limsup_{s \to 0} \|s\|^{N - \beta - \varepsilon}\, \Phi(s) = \infty.$$

This and Eq. (2.7) together imply that

$$\limsup_{h \to 0^+} h^{N - \beta - \varepsilon}\, \Phi(\mathbf{h}) = \infty. \tag{4.5}$$

On the other hand, it is not hard to see that $\Phi(s) \ge \Phi(\mathbf{\|s\|})$ and that $\Phi(\mathbf{h})$ is nonincreasing in $h$. Hence, for $h > 0$,

$$\int_{[0,h]^N} \Phi(s)\, ds \ge K_4\, \Phi(\mathbf{h})\, h^N, \tag{4.6}$$

for some positive constant $K_4$. It follows from (3.16) and (4.6) that

$$\mathbb{P}\big\{ X^{-1}(0) \cap [u, u + \mathbf{h}] \neq \varnothing \big\} \le K_5\, \frac{1}{\Phi(\mathbf{h})}. \tag{4.7}$$

Eq. (4.4) follows from Eqs. (4.5) and (4.7), upon taking $\varepsilon \in (0, \beta_0 - \beta)$. This finishes our proof of the Lemma.
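For the record, the arithmetic that combines (4.5) and (4.7) is the following: by (4.7),

```latex
h^{\beta_0 - N}\,
\mathbb{P}\big\{ X^{-1}(0)\cap[u,u+\mathbf{h}]\neq\varnothing \big\}
  \;\le\; K_5\,\frac{h^{\beta_0 - N}}{\Phi(\mathbf{h})}
  \;=\; K_5\,\frac{h^{\beta_0 - \beta - \varepsilon}}
               {h^{N-\beta-\varepsilon}\,\Phi(\mathbf{h})}.
```

Along the sequence $h \to 0^+$ produced by (4.5), the denominator tends to infinity while the numerator tends to $0$, since $\beta_0 - \beta - \varepsilon > 0$; this yields the vanishing liminf asserted in (4.4).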

**5 Proof of Theorem 2.12**

Theorem 2.12 is divided into two parts: an upper bound (on the hitting probability), as well as a corresponding lower bound. The latter is simple enough to prove: the proof of the lower bound in Eq. (2.8) uses Lemma 3.10 and follows the second moment argument of Lemma 3.6 closely; we omit the details.

**5.1 Lebesgue's Measure of Stochastic Images**

We now intend to demonstrate the following result on Lebesgue's measure of the image of a compact set under the random function $X$. Throughout this section, Leb denotes Lebesgue's measure on $\mathbb{R}^d$.

**Theorem 5.1** *Let $X_1, \ldots, X_N$ be $N$ independent symmetric Lévy processes on $\mathbb{R}^d$ and let $X = X_1 \oplus \cdots \oplus X_N$. Suppose that $X$ is absolutely continuous, weakly unimodal, and has gauge function $\Phi$. Then, for any compact set $E \subset \mathbb{R}^N_+$,*

$$\kappa^{-1} 2^{-d}\, \mathcal{C}_\Phi(E) \;\le\; \mathbb{E}\big\{ \mathrm{Leb}[X(E)] \big\} \;\le\; 2^{5d+3N} \kappa^3\, \mathcal{C}_\Phi(E).$$

The following is an immediate corollary.

**Corollary 5.2** *In the setting of Theorem 5.1, for any compact set $E \subset \mathbb{R}^N_+$,*

$$\mathbb{E}\big\{ \mathrm{Leb}[X(E)] \big\} > 0 \iff \mathcal{C}_\Phi(E) > 0.$$

**Remark 5.3** To the knowledge of the authors, this result is new at this level of generality, even for Lévy processes, i.e. $N = 1$. Special cases of this one-parameter problem have been treated in Hawkes [24, Th. 5] (for Brownian motion); see also Kahane [31, Ch. 16, 17].

Now suppose $X_1, \ldots, X_N$ are i.i.d. isotropic stable Lévy processes, all with index $\alpha \in\, ]0,2]$. In this case, the above completes a program initiated by J.-P. Kahane, who has shown that for $N = 1, 2$,

$$\mathrm{Cap}_{d/\alpha}(E) > 0 \;\Longrightarrow\; \mathbb{E}\big\{ \mathrm{Leb}[X(E)] \big\} > 0 \;\Longrightarrow\; \mathcal{H}_{d/\alpha}(E) > 0, \tag{5.1}$$

where $\mathcal{H}_\beta$ denotes the $\beta$-dimensional Hausdorff measure on $\mathbb{R}^N_+$. See Kahane [30, 31] for this and for a discussion of the history of this result, together with interesting applications to harmonic analysis. A combination of Corollary 5.2 and Eq. (2.9) yields the following, which completes Eq. (5.1) by essentially closing the "hard half."

**Corollary 5.4** *Suppose $X_1, \ldots, X_N$ are i.i.d. isotropic stable Lévy processes, all with the same index $\alpha \in\, ]0,2]$. If $X = X_1 \oplus \cdots \oplus X_N$ and if $E \subset \mathbb{R}^N_+$ is compact,*

$$\mathbb{E}\big\{ \mathrm{Leb}[X(E)] \big\} > 0 \iff \mathrm{Cap}_{d/\alpha}(E) > 0.$$
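In the simplest instance of Corollary 5.4, $N = d = 1$ and $\alpha = 2$, $X$ is a one-dimensional Brownian motion, and $\mathrm{Leb}[X([0,1])]$ is simply the range $\max X - \min X$ (the image of $[0,1]$ under the continuous path $X$ is an interval), whose mean is classically $2\sqrt{2/\pi} \approx 1.60$. The Monte Carlo sketch below (function name and parameters are ours, purely illustrative) estimates $\mathbb{E}\{\mathrm{Leb}[X(E)]\}$ for $E = [0,1]$:

```python
import math
import random

def mean_image_measure(n_paths: int = 400, n_steps: int = 1000,
                       seed: int = 1) -> float:
    """Estimate E{Leb[X([0,1])]} for a 1-d Brownian motion X by
    averaging the sample range max X - min X over simulated paths."""
    rng = random.Random(seed)
    step_sd = math.sqrt(1.0 / n_steps)
    total = 0.0
    for _ in range(n_paths):
        x = lo = hi = 0.0
        for _ in range(n_steps):
            x += rng.gauss(0.0, step_sd)  # Brownian increment over dt
            lo = min(lo, x)
            hi = max(hi, x)
        total += hi - lo  # Leb of the (interval) image X([0,1])
    return total / n_paths
```

Discretization slightly undershoots the true range, but with the defaults above the estimate lands within a few percent of $2\sqrt{2/\pi}$; both sides of Corollary 5.2 are, of course, positive here.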

Once again, the proof of Theorem 5.1 is divided into two main parts: an *upper bound* (on $\mathbb{E}\{\cdots\}$) and a *lower bound* (on $\mathbb{E}\{\cdots\}$). The latter is more or less straightforward.

**5.2 Proof of Theorem 5.1: Lower Bound**

For the purposes of exposition, it is beneficial to work on a canonical probability space. Recall the space $D(\mathbb{R}_+)$ of all functions $f : \mathbb{R}_+ \to \mathbb{R}^d$ that are right continuous and have left limits. As usual, $D(\mathbb{R}_+)$ is endowed with Skorohod's topology. Define $\Omega = D(\mathbb{R}_+) \oplus \cdots \oplus D(\mathbb{R}_+)$, and let it inherit the topology from $D(\mathbb{R}_+)$. That is, $f \in \Omega$ if and only if there are $f_1, \ldots, f_N \in D(\mathbb{R}_+)$ such that $f = f_1 \oplus \cdots \oplus f_N$. Moreover, as $n \to \infty$, $f^n \to f^\infty$ in $\Omega$ if and only if for all $\ell = 1, \ldots, N$, $\lim_n f^n_\ell = f^\infty_\ell$ in $D(\mathbb{R}_+)$, where $f^n = f^n_1 \oplus \cdots \oplus f^n_N$ for all $1 \le n \le \infty$.
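The $\oplus$ notation can be made concrete: an element $f = f_1 \oplus \cdots \oplus f_N$ of $\Omega$ is the function $t \mapsto \sum_{j=1}^N f_j(t_j)$ on $\mathbb{R}^N_+$. A minimal sketch (scalar-valued, i.e. $d = 1$; the helper name is ours):

```python
from typing import Callable, Sequence

Path = Callable[[float], float]

def oplus(paths: Sequence[Path]) -> Callable[[Sequence[float]], float]:
    """Return f = f_1 (+) ... (+) f_N, evaluated as f(t) = sum_j f_j(t_j)."""
    return lambda t: sum(f_j(t_j) for f_j, t_j in zip(paths, t))
```

For instance, with $f_1(u) = 2u$ and $f_2(u) = u^2$, the value at $t = (1, 2)$ is $2 + 4 = 6$.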

Let $X = \{ X(t);\ t \in \mathbb{R}^N_+ \}$ denote the canonical coordinate process on $\Omega$. That is, for all $\omega \in \Omega$ and all $t \in \mathbb{R}^N_+$, $X(t)(\omega) = \omega(t)$. Also, let $\mathcal{F}$ denote the collection of all Borel subsets of $\Omega$. In a completely standard way, one can construct a probability measure $\mathbb{P}$ on $(\Omega, \mathcal{F})$, such that under the measure $\mathbb{P}$, $X$ has the same finite-dimensional distributions as the process of Theorem 5.1. In fact, one can do more and define for all $x \in \mathbb{R}^d$ a probability measure $\mathbb{P}_x$ on $(\Omega, \mathcal{F})$ as follows: for all $G \in \mathcal{F}$,

$$\mathbb{P}_x\{G\} = \mathbb{P}_x\{ \omega \in \Omega : \omega \in G \} = \mathbb{P}\{ \omega \in \Omega : x + \omega \in G \},$$

where the function $x + \omega$ is, as usual, defined pointwise by $(x + \omega)(t) = x + \omega(t)$ for all $t \in \mathbb{R}^N_+$. The corresponding expectation operator is denoted by $\mathbb{E}_x$. Moreover, $\mathbb{P}_{\mathrm{Leb}}$ ($\mathbb{E}_{\mathrm{Leb}}$, resp.) refers to the $\sigma$-finite measure $\int \mathbb{P}_x(\bullet)\, dx$ (the linear operator $\int \mathbb{E}_x(\bullet)\, dx$, resp.).

It is easy to see that the $\sigma$-finite measure $\mathbb{P}_{\mathrm{Leb}}$ has a similar structure to $\mathbb{P}$; one can define conditional expectations, (multi-)parameter martingales, etc. We will use the (probability) martingale theory that is typically developed for $\mathbb{P}$, and apply it to $\mathbb{P}_{\mathrm{Leb}}$. It is completely elementary to see that the theory extends easily and naturally. In a one-parameter, discrete setting, the details can be found in Dellacherie and Meyer [12, Eq. (40.2), p. 34]. One generalizes this development to our present multiparameter setting by applying the arguments of R. Cairoli; cf. Walsh [51].

The above notation is part of the standard notation of the theory of Markov processes and will be used throughout the remainder of this section. In order to handle the measurability issues, the $\sigma$-field $\mathcal{F}$ will be assumed to be complete with respect to the measure $\mathbb{P}_{\mathrm{Leb}}$. This can be assumed without any loss in generality, for otherwise $(\Omega, \mathcal{F})$ will be replaced by its $\mathbb{P}_{\mathrm{Leb}}$-completion throughout, with no further changes.

Our proof of Theorem 5.1 relies on the following technical lemma.

**Lemma 5.5** *Under the $\sigma$-finite measure $\mathbb{P}_{\mathrm{Leb}}$, for each $t \in \mathbb{R}^N_+$ the law of $X(t)$ is Lebesgue's measure on $\mathbb{R}^d$. Moreover, for all $n \ge 1$, all $\varphi_j \in L^1(\mathbb{R}^d) \cap L^\infty(\mathbb{R}^d)$, all $s_j, t \in \mathbb{R}^N_+$ ($j = 1, \cdots, n$), and for* Leb*-almost all $z \in \mathbb{R}^d$,*

$$\mathbb{E}_{\mathrm{Leb}}\Big[ \prod_{j=1}^n \varphi_j\big( X(s_j) \big)\ \Big|\ X(t) = z \Big] = \mathbb{E}\Big[ \prod_{j=1}^n \varphi_j\big( X(s_j) - X(t) + z \big) \Big]. \tag{5.2}$$

**Proof** The condition that $\varphi_j \in L^1(\mathbb{R}^d) \cap L^\infty(\mathbb{R}^d)$ for all $j = 1, \ldots, n$ implies that $\prod_{j=1}^n \varphi_j(X(s_j)) \in L^1(\mathbb{P}_{\mathrm{Leb}})$. Moreover, for any bounded measurable function $g : \mathbb{R}^d \to \mathbb{R}$,

$$\mathbb{E}_{\mathrm{Leb}}\Big[ g\big( X(t) \big) \prod_{j=1}^n \varphi_j\big( X(s_j) \big) \Big] = \int_{\mathbb{R}^d} \mathbb{E}\Big[ g\big( X(t) + x \big) \cdot \prod_{j=1}^n \varphi_j\big( X(s_j) + x \big) \Big]\, dx = \int_{\mathbb{R}^d} g(y)\, \mathbb{E}\Big[ \prod_{j=1}^n \varphi_j\big( X(s_j) - X(t) + y \big) \Big]\, dy.$$

Set $\varphi_1 = \varphi_2 = \cdots \equiv 1$ to see that the $\mathbb{P}_{\mathrm{Leb}}$ distribution of $X(t)$ is Leb. Since the displayed equation above holds true for all measurable $g$, we have verified Eq. (5.2).

**Remarks**

(i) Equality (5.2) can also be established using regular conditional ($\sigma$-finite) probabilities.

(ii) There are no conditions imposed on $s_j$ ($j = 1, \cdots, n$) and $t$.

The second, and final, lemma used in our proof of the lower bound is a joint density function estimate.

**Lemma 5.6** *For all $\varepsilon > 0$ and all $s, t \in \mathbb{R}^N_+$,*

$$\kappa^{-1} 2^{-d} \varepsilon^d \;\le\; \frac{ \mathbb{P}_{\mathrm{Leb}}\big\{ |X(s)| \le \varepsilon,\ |X(t)| \le \varepsilon \big\} }{ \mathbb{P}\big\{ |X(t) - X(s)| \le \varepsilon \big\} } \;\le\; \kappa (4\varepsilon)^d,$$

*where $0 \div 0 = 1$.*

**Proof** We will verify the asserted lower bound on the probability; the upper bound is proved by similar arguments that we omit. We have

$$\mathbb{P}_{\mathrm{Leb}}\big\{ |X(s)| \le \varepsilon,\ |X(t)| \le \varepsilon \big\} \;\ge\; \mathbb{P}_{\mathrm{Leb}}\Big\{ |X(s)| \le \varepsilon,\ |X(t)| \le \frac{\varepsilon}{2} \Big\} \;\ge\; \mathbb{P}_{\mathrm{Leb}}\Big\{ |X(t)| \le \frac{\varepsilon}{2} \Big\} \cdot \inf_{z \in \mathbb{R}^d :\ |z| \le \varepsilon/2} \mathbb{P}_{\mathrm{Leb}}\big\{ |X(s)| \le \varepsilon\ \big|\ X(t) = z \big\}.$$

By Lemma 5.5, the first term equals $\varepsilon^d$, and the second is bounded below by $\mathbb{P}\{ |X(t) - X(s)| \le \frac{1}{2}\varepsilon \}$. The lower bound on the probability follows from weak regularity; cf. Lemma 2.8.

**Proof of Theorem 5.1: Lower Bound** For any $\mu \in \mathcal{P}(E)$ and all $\varepsilon > 0$, define

$$J = (2\varepsilon)^{-d} \int \mathbf{1}_{\{ |X(s)| \le \varepsilon \}}\, \mu(ds). \tag{5.3}$$

By Lemmas 5.5 and 5.6,

$$\mathbb{E}_{\mathrm{Leb}}\{J\} = 1, \qquad \mathbb{E}_{\mathrm{Leb}}\{J^2\} \le \kappa \varepsilon^{-d} \int\!\!\int \mathbb{P}\big\{ |X(t) - X(s)| \le \varepsilon \big\}\, \mu(ds)\, \mu(dt). \tag{5.4}$$
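The first identity in (5.4) is worth unpacking. Reading $|\cdot|$ as the max norm, so that the ball $\{ |z| \le \varepsilon \}$ has volume $(2\varepsilon)^d$ (an assumption on our part, consistent with the normalization $(2\varepsilon)^{-d}$ in (5.3)), Lemma 5.5 and Fubini's theorem give

```latex
\mathbb{E}_{\mathrm{Leb}}\{J\}
  = (2\varepsilon)^{-d}\int \mathbb{P}_{\mathrm{Leb}}\{|X(s)|\le\varepsilon\}\,\mu(ds)
  = (2\varepsilon)^{-d}\int \mathrm{Leb}\{z : |z|\le\varepsilon\}\,\mu(ds)
  = (2\varepsilon)^{-d}\,(2\varepsilon)^{d}\,\mu(E) = 1,
```

since the $\mathbb{P}_{\mathrm{Leb}}$-law of each $X(s)$ is Leb and $\mu \in \mathcal{P}(E)$ is a probability measure.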

Thus, by the Paley–Zygmund inequality applied to the $\sigma$-finite measure $\mathbb{P}_{\mathrm{Leb}}$,

$$\mathbb{P}_{\mathrm{Leb}}\big\{ \exists s \in E :\ |X(s)| \le \varepsilon \big\} \;\ge\; \mathbb{P}_{\mathrm{Leb}}\{ J > 0 \} \;\ge\; \Big[ \kappa \varepsilon^{-d} \int\!\!\int \mathbb{P}\big\{ |X(t) - X(s)| \le \varepsilon \big\}\, \mu(ds)\, \mu(dt) \Big]^{-1};$$

cf. Kahane [31] for the latter inequality. Let $\varepsilon \to 0^+$ and use Fatou's lemma to conclude that

$$\mathbb{P}_{\mathrm{Leb}}\big\{ 0 \in X(E) \big\} \ge \kappa^{-1} 2^{-d} \big[ \mathcal{E}_\Phi(\mu) \big]^{-1}.$$

On the other hand,

$$\mathbb{P}_{\mathrm{Leb}}\big\{ 0 \in X(E) \big\} = \int \mathbb{P}\big\{ x \in X(E) \big\}\, dx = \mathbb{E}\big\{ \mathrm{Leb}[X(E)] \big\}.$$

Since $\mu \in \mathcal{P}(E)$ is arbitrary, the lower bound follows.
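The Paley–Zygmund step used above is the second-moment inequality, which remains valid for the $\sigma$-finite measure $\mathbb{P}_{\mathrm{Leb}}$ (it is just the Cauchy–Schwarz inequality applied on the event $\{J > 0\}$): for $J \ge 0$ with $\mathbb{E}_{\mathrm{Leb}}\{J\} = 1$,

```latex
\mathbb{P}_{\mathrm{Leb}}\{J > 0\}
  \;\ge\; \frac{\big(\mathbb{E}_{\mathrm{Leb}}\{J\}\big)^{2}}
               {\mathbb{E}_{\mathrm{Leb}}\{J^{2}\}}
  \;=\; \frac{1}{\mathbb{E}_{\mathrm{Leb}}\{J^{2}\}},
```

which, combined with the second moment bound (5.4), gives the displayed lower bound.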

**5.3 Proof of Theorem 5.1: Upper Bound**

The verification of the upper bound of Theorem 5.1 is made particularly difficult by the classical fact that the parameter space $\mathbb{R}^N_+$ cannot be well ordered in such a way that the ordering respects the Markovian structure of $\mathbb{R}^N_+$. (Of course, $\mathbb{R}^N_+$ can *always* be well ordered under the influence of the axiom of choice, thanks to a classical theorem of Zermelo.) This difficulty is circumvented by the introduction of $2^N$ partial orders that are conveniently indexed by the power set of $\{1, \ldots, N\}$, as follows: let $\Pi$ denote the collection of all subsets of $\{1, \ldots, N\}$ and, for all $A \in \Pi$, define the partial order $\preceq^{(A)}$ on $\mathbb{R}^N$ by

$$s \preceq^{(A)} t \iff \begin{cases} s_i \le t_i, & \text{for all } i \in A \\ s_i \ge t_i, & \text{for all } i \notin A \end{cases}.$$

The key idea behind this definition is that the collection $\{ \preceq^{(A)};\ A \in \Pi \}$ of partial orders totally orders $\mathbb{R}^N$, in the sense that given any two points $s, t \in \mathbb{R}^N$, there exists $A \in \Pi$ such that $s \preceq^{(A)} t$. By convention, $s \preceq^{(A)} t$ and $t \succeq^{(A)} s$ are written interchangeably throughout.¹ Corresponding to each $A \in \Pi$, one defines an $N$-parameter filtration $\mathcal{F}^A = \{ \mathcal{F}^A_t;\ t \in \mathbb{R}^N_+ \}$, where $\mathcal{F}^A_t$ denotes the $\sigma$-field generated by $\{ X(s);\ s \preceq^{(A)} t \}$. The following lemma can be proved along the lines of Khoshnevisan and Shi [35, Lemma 2.1]; see also Khoshnevisan [34, Lemma 4.1].
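The total-ordering property of the family $\{ \preceq^{(A)};\ A \in \Pi \}$ is easy to check mechanically: given $s$ and $t$, the set $A = \{ i : s_i \le t_i \}$ always works. The sketch below (names are ours, indices $0$-based) enumerates the $2^N$ partial orders and exhibits such an $A$:

```python
from itertools import chain, combinations
from typing import Sequence

def partial_order_holds(A: frozenset, s: Sequence[float],
                        t: Sequence[float]) -> bool:
    """s <=_(A) t  iff  s_i <= t_i for i in A and s_i >= t_i for i not in A."""
    return all(s[i] <= t[i] if i in A else s[i] >= t[i]
               for i in range(len(s)))

def ordering_set(s: Sequence[float], t: Sequence[float]) -> frozenset:
    """Return some A in Pi (the power set of {0,...,N-1}) with s <=_(A) t."""
    N = len(s)
    subsets = chain.from_iterable(combinations(range(N), r)
                                  for r in range(N + 1))
    for A in map(frozenset, subsets):
        if partial_order_holds(A, s, t):
            return A
    raise AssertionError("unreachable: the family {<=_(A)} totally orders R^N")
```

The exhaustive search never raises, which is precisely the total-ordering claim; in the paper's notation the witness is $A = \{ i : s_i \le t_i \}$.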

**Lemma 5.7** *For each $A \in \Pi$, $\mathcal{F}^A$ is a commuting $N$-parameter filtration. In other words, when $s \preceq^{(A)} t$ are both in $\mathbb{R}^N_+$, $\mathcal{F}^A$ satisfies condition (F4) of Cairoli and Walsh; see [51].*

The following important proposition is an analogue of the Markov property for additive Lévy processes, with respect to the $\sigma$-finite measure $\mathbb{P}_{\mathrm{Leb}}$.

**Proposition 5.8 (The Markov Property)** *For each fixed $A \in \Pi$, all $s, t \in \mathbb{R}^N_+$ with $t \preceq^{(A)} s$, and every bounded measurable function $f : \mathbb{R}^d \to \mathbb{R}$,*

$$\mathbb{E}_{\mathrm{Leb}}\big[ f\big( X(s) \big)\ \big|\ \mathcal{F}^A_t \big] = \mathbb{E}_{\mathrm{Leb}}\big[ f\big( X(s) \big)\ \big|\ X(t) \big], \tag{5.5}$$

*$\mathbb{P}_{\mathrm{Leb}}$ almost surely.*

**Proof** To this end, consider any bounded measurable function $g : \mathbb{R}^d \to \mathbb{R}$.

¹It is worth noting that there are some redundancies in this definition. While $\Pi$ has $2^N$ elements, one only needs $2^{N-1}$ partial orders to totally order $\mathbb{R}^N$. This distinction will not be important in what follows.
In the last step, we have used Fubini's theorem, together with the independence of $X(s) - X(t)$ and $\{ X(r_j) - X(t);\ j = 1, \ldots, n \}$ under $\mathbb{P}$. By Lemma 5.5, the $\mathbb{P}_{\mathrm{Leb}}$-distribution of $X(t)$ is Leb. This proves (5.5) and, hence, the proposition.

The last important step in the proof of the upper bound of Theorem 5.1 is the following proposition. Roughly speaking, it states that for each $t \in \mathbb{R}^N_+$, $\mathbf{1}_{\{ X(t) = 0 \}}$ is comparable to a collection of reasonably nice $N$-parameter martingales, not with respect to the probability measure $\mathbb{P}$, but with respect to the $\sigma$-finite measure $\mathbb{P}_{\mathrm{Leb}}$.

**Proposition 5.9** *Let $\varepsilon > 0$ and $\mu \in \mathcal{P}(E)$ be fixed, and recall $J$ from (5.3). Then, for every $A \in \Pi$ and for all $t \in \mathbb{R}^N_+$,*

$$\mathbb{E}_{\mathrm{Leb}}\big\{ J\ \big|\ \mathcal{F}^A_t \big\} \;\ge\; (4\varepsilon)^{-d} \kappa^{-1} \int_{s \succeq^{(A)} t} \mathbb{P}\big\{ |X(t) - X(s)| \le \varepsilon \big\}\, \mu(ds) \cdot \mathbf{1}_{\{ |X(t)| \le \varepsilon/2 \}},$$

*$\mathbb{P}_{\mathrm{Leb}}$-almost surely.*

It is *very* important to note that the conditional expectation on the left-hand side is computed under the $\sigma$-finite measure $\mathbb{P}_{\mathrm{Leb}}$, under which the above holds a.s., while the probability term in the integral is computed with respect to the measure $\mathbb{P}$.

**Proof** Clearly, for all fixed $t \in \mathbb{R}^N_+$,

$$\mathbb{E}_{\mathrm{Leb}}\big\{ J\ \big|\ \mathcal{F}^A_t \big\} \;\ge\; (2\varepsilon)^{-d}\, \mathbb{E}_{\mathrm{Leb}}\Big[ \int_{s \succeq^{(A)} t} \mathbf{1}_{\{ |X(s)| \le \varepsilon \}}\, \mu(ds)\ \Big|\ \mathcal{F}^A_t \Big] \;=\; (2\varepsilon)^{-d} \int_{s \succeq^{(A)} t} \mathbb{P}_{\mathrm{Leb}}\big\{ |X(s)| \le \varepsilon\ \big|\ \mathcal{F}^A_t \big\}\, \mu(ds),$$

$\mathbb{P}_{\mathrm{Leb}}$-almost surely. It follows from Proposition 5.8 that

$$\mathbb{E}_{\mathrm{Leb}}\big\{ J\ \big|\ \mathcal{F}^A_t \big\} \;\ge\; (2\varepsilon)^{-d} \int_{s \succeq^{(A)} t} \mathbb{P}_{\mathrm{Leb}}\big\{ |X(s)| \le \varepsilon\ \big|\ X(t) \big\}\, \mu(ds) \;\ge\; (2\varepsilon)^{-d} \int_{s \succeq^{(A)} t} \mathbb{P}_{\mathrm{Leb}}\big\{ |X(s)| \le \varepsilon\ \big|\ X(t) \big\}\, \mu(ds) \cdot \mathbf{1}_{\{ |X(t)| \le \varepsilon/2 \}},$$

$\mathbb{P}_{\mathrm{Leb}}$-almost surely. On the other hand, for almost all $z \in \mathbb{R}^d$ with $|z| \le \varepsilon/2$,

$$\mathbb{P}_{\mathrm{Leb}}\big\{ |X(s)| \le \varepsilon\ \big|\ X(t) = z \big\} \;=\; \mathbb{P}\big\{ |X(t) - X(s) + z| \le \varepsilon \big\} \;\ge\; \mathbb{P}\Big\{ |X(t) - X(s)| \le \frac{1}{2}\varepsilon \Big\} \;\ge\; \kappa^{-1} 2^{-d}\, \mathbb{P}\big\{ |X(t) - X(s)| \le \varepsilon \big\}.$$

The first line follows from Lemma 5.5 and the last from weak unimodality. This proves the proposition.

**Proof of Theorem 5.1: Upper Bound** Without loss of generality, we may assume that $\mathbb{E}\{ \mathrm{Leb}[X(E)] \} > 0$, for, otherwise, there is nothing to prove. Equivalently, we may assume that

$$\mathbb{P}_{\mathrm{Leb}}\big\{ 0 \in X(E) \big\} > 0;$$

cf. the proof of the lower bound of Theorem 5.1.

Since $E$ is compact, it has a countable dense subset, which, to keep our notation from becoming overtaxing, we take to lie in $\mathbb{Q}^N_+$. Fix $\varepsilon > 0$ and let $\mathrm{T}_\varepsilon$ denote any measurable selection of $t \in E \cap \mathbb{Q}^N_+$ for which $|X(t)| \le \varepsilon/2$. If such a $t$ does not exist, define $\mathrm{T}_\varepsilon = \Delta$, where $\Delta \in \mathbb{Q}^N_+ \setminus E$ but is otherwise chosen quite arbitrarily. It is clear that $\mathrm{T}_\varepsilon$ is a random vector in $\mathbb{Q}^N_+$, and that $\mu_\varepsilon \in \mathcal{P}(E)$. It is clear that Proposition 5.9 holds simultaneously for all $t \in \mathbb{Q}^N_+$, $\mathbb{P}_{\mathrm{Leb}}$-almost surely. Consequently, since $\mathrm{T}_\varepsilon \in \mathbb{Q}^N_+$, Proposition 5.9 can be applied with $t = \mathrm{T}_\varepsilon$ and $\mu = \mu_\varepsilon$; the desired estimate then follows by the Cauchy–Schwarz inequality. By Lemma 5.7, the $N$-parameter process $t \mapsto \mathbb{E}_{\mathrm{Leb}}\{ J \mid \mathcal{F}^A_t \}$ is an $N$-parameter martingale with respect to the $N$-parameter, commuting filtration $\mathcal{F}^A$. As such, by the $L^2(\mathbb{P}_{\mathrm{Leb}})$-maximal inequality of Cairoli (cf. Walsh [51]),