Shearlet approximations to the inverse of a
family of linear operators
Lin Hu^{1,2} and Youming Liu^{1*}

*Correspondence: liuym@bjut.edu.cn
^1 Department of Applied Mathematics, Beijing University of Technology, Beijing, 100124, P.R. China
Abstract
The Radon transform plays an important role in applied mathematics. It is a fundamental problem to reconstruct images from noisy observations of Radon data. Compared with traditional methods, Colonna, Easley et al. apply shearlets to deal with the inverse problem of the Radon transform and obtain a more effective reconstruction. This paper extends their work to a class of linear operators, which contains the Radon, Bessel and Riesz fractional integration transforms as special examples.

MSC: 42C15; 42C40
Keywords: inverse problems; shearlets; approximation; Radon transform; noise
1 Introduction and preliminary
The Radon transform is an important tool in medical imaging. Although $f\in L^2(\mathbb{R}^2)$ can be recovered analytically from the Radon data $Rf(\theta,t)$, the solution is unstable and those data are corrupted by noise in practice [1, 2]. In order to recover the object $f$ stably and to control the amplification of noise in the reconstruction, many regularization methods were introduced, including the Fourier method, singular value decomposition, etc. [1, 2]. However, those methods produce a blurred version of the original image.
Curvelets and shearlets were then proposed, which proved to be efficient in dealing with edges [3-7]. In 2002, Candès and Donoho applied curvelets [5] to the inverse problem

$$Y = Rf + \varepsilon W, \qquad (1.1)$$

where the recovered function $f$ is compactly supported and twice continuously differentiable away from a smooth edge; $W$ denotes a Wiener sheet; $\varepsilon$ is a noise level. Because curvelets have a complicated structure, Colonna, Easley et al. used shearlets to deal with problem (1.1) in 2010 and obtained an effective reconstruction algorithm [8].
Note that the Bessel transform and the Riesz fractional integration transform arise in many scientific areas ranging from physical chemistry to extragalactic astronomy. Then this paper considers a more general problem,

$$Y = Kf + \varepsilon W, \qquad (1.2)$$

where $K$ stands for a linear operator mapping the Hilbert space $L^2(\mathbb{R}^2)$ to another Hilbert space $\mathcal{Y}$ and satisfying

$$(K^*Kf)^\wedge(\xi) = (b+|\xi|^2)^{-\alpha}\hat f(\xi) \qquad (1.3)$$

with $b\ge 0$, $\alpha>0$ ($K^*$ is the adjoint operator of $K$). Here and in what follows, $\hat f$ denotes the Fourier transform of $f$. The next section shows that the Radon, Bessel and Riesz fractional integration transforms satisfy condition (1.3).
The current paper is organized as follows. Section 2 presents three examples for (1.3) and several lemmas. An approximation result is proved in the last section, which contains the main theorem of [8] as a special case.
At the end of this section, we introduce some basic knowledge of shearlets, which will be used in our discussions. The Fourier transform of a function $f\in L^1(\mathbb{R}^2)\cap L^2(\mathbb{R}^2)$ is defined by

$$\hat f(\xi) = \int_{\mathbb{R}^2} f(x)\,e^{-2\pi i x\cdot\xi}\,dx.$$

The classical method extends that definition to $L^2(\mathbb{R}^2)$ functions.
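With this normalization the Gaussian $e^{-\pi x^2}$ is its own Fourier transform, which gives a cheap numerical sanity check of the definition. The following sketch (a simple midpoint-rule quadrature; step size and truncation length are arbitrary choices, not from the paper) approximates the one-dimensional integral and compares it with the closed form $e^{-\pi\xi^2}$:

```python
import cmath, math

def ft_gauss(xi, h=0.001, L=6.0):
    # Midpoint-rule approximation of \int exp(-pi x^2) exp(-2 pi i x xi) dx.
    n = int(2 * L / h)
    s = 0.0 + 0.0j
    for i in range(n):
        x = -L + (i + 0.5) * h
        s += math.exp(-math.pi * x * x) * cmath.exp(-2j * math.pi * x * xi)
    return s * h

# The Gaussian exp(-pi x^2) is a fixed point of this Fourier transform.
for xi in (0.0, 0.5, 1.0):
    assert abs(ft_gauss(xi) - math.exp(-math.pi * xi * xi)) < 1e-9
```

The quadrature error is negligible here because the integrand is smooth and decays faster than any polynomial.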
There exist many different constructions for discrete shearlets. We introduce the construction of [6] by taking two functions $\psi_1,\psi_2$ of one variable such that $\hat\psi_1,\hat\psi_2\in C^\infty(\mathbb{R})$ with supports $\operatorname{supp}\hat\psi_1\subset[-\frac12,-\frac1{16}]\cup[\frac1{16},\frac12]$, $\operatorname{supp}\hat\psi_2\subset[-1,1]$ and

$$\sum_{j\ge 0}\big|\hat\psi_1\big(2^{-2j}\omega\big)\big|^2 = 1 \quad\text{for } |\omega|\ge\frac18, \qquad \sum_{l=-2^j}^{2^j-1}\big|\hat\psi_2\big(2^j\omega-l\big)\big|^2 = 1 \quad\text{for } |\omega|\le 1.$$

Here, $C^\infty(\mathbb{R}^n)$ stands for the infinitely differentiable functions on the Euclidean space $\mathbb{R}^n$. Then two shearlet functions $\psi^{(1)},\psi^{(2)}$ are defined by

$$\hat\psi^{(1)}(\xi) := \hat\psi_1(\xi_1)\,\hat\psi_2\Big(\frac{\xi_2}{\xi_1}\Big) \quad\text{and}\quad \hat\psi^{(2)}(\xi) := \hat\psi_1(\xi_2)\,\hat\psi_2\Big(\frac{\xi_1}{\xi_2}\Big),$$

respectively.
To introduce discrete shearlets, we need two shear matrices

$$B_1 = \begin{pmatrix}1&1\\0&1\end{pmatrix}, \qquad B_2 = \begin{pmatrix}1&0\\1&1\end{pmatrix}$$

and two dilation matrices

$$A_1 = \begin{pmatrix}4&0\\0&2\end{pmatrix}, \qquad A_2 = \begin{pmatrix}2&0\\0&4\end{pmatrix}.$$

Define discrete shearlets $\psi^{(d)}_{j,l,k}(x) := 2^{3j/2}\psi^{(d)}(B_d^lA_d^jx-k)$ for $j\ge0$, $-2^j\le l\le 2^j-1$, $k\in\mathbb{Z}^2$ and $d=1,2$. Then there exists $\hat\varphi\in C^\infty(\mathbb{R}^2)$ such that

$$\big\{\varphi_{j_0,k}(x),\ \psi^{(d)}_{j,l,k}(x):\ j\ge j_0\ge0,\ -2^j\le l\le 2^j-1,\ k\in\mathbb{Z}^2,\ d=1,2\big\}$$

forms a Parseval frame of $L^2(\mathbb{R}^2)$, where $\varphi_{j_0,k}(x) := 2^{j_0}\varphi(2^{j_0}x-k)$. More precisely, for $f\in L^2(\mathbb{R}^2)$,

$$f(x) = \sum_{k\in\mathbb{Z}^2}\langle f,\varphi_{j_0,k}\rangle\varphi_{j_0,k}(x) + \sum_{d=1}^{2}\sum_{j\ge j_0}\sum_{l=-2^j}^{2^j-1}\sum_{k\in\mathbb{Z}^2}\langle f,\psi^{(d)}_{j,l,k}\rangle\psi^{(d)}_{j,l,k}(x)$$

holds in $L^2(\mathbb{R}^2)$. It should be pointed out that the $\psi^{(d)}_{j,l,k}(x)$ are modified for $l=-2^j$ and $2^j-1$, as seen in [7].
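The reconstruction formula above is exactly the defining property of a Parseval frame: analysis followed by synthesis returns $f$, even though the system is redundant. A minimal finite-dimensional sketch of the same property (the classical three-vector "Mercedes-Benz" frame for $\mathbb{R}^2$, standing in for the shearlet system; this example is not from the paper) is:

```python
import math

# Three unit directions at angles 0, 120, 240 degrees, scaled by sqrt(2/3):
# a redundant Parseval frame for R^2, so f = sum_k <f, v_k> v_k exactly.
frame = [
    (math.sqrt(2.0 / 3.0) * math.cos(2 * math.pi * k / 3),
     math.sqrt(2.0 / 3.0) * math.sin(2 * math.pi * k / 3))
    for k in range(3)
]

def reconstruct(f):
    out = [0.0, 0.0]
    for v in frame:
        c = f[0] * v[0] + f[1] * v[1]   # analysis: c_k = <f, v_k>
        out[0] += c * v[0]              # synthesis: sum of c_k v_k
        out[1] += c * v[1]
    return out

f = (1.25, -0.5)
g = reconstruct(f)
assert abs(g[0] - f[0]) < 1e-12 and abs(g[1] - f[1]) < 1e-12
```

The scaling $\sqrt{2/3}$ makes the frame operator the identity; without it the sum would reproduce $\frac32 f$.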
2 Examples and lemmas
In this section, we provide three important examples of a linear operator $K$ satisfying $(K^*Kf)^\wedge(\xi) = (b+|\xi|^2)^{-\alpha}\hat f(\xi)$ and present some lemmas which will be used in the next section. To introduce the first one, define a subspace of $L^2(\mathbb{R}^2)$,

$$D(R) := \big\{f\in L^2(\mathbb{R}^2):\ f \text{ is bounded with compact support}\big\} \subseteq \Big\{f\in L^2(\mathbb{R}^2):\ \int_{\mathbb{R}^2}|\xi|^{-1}\big|\hat f(\xi)\big|^2\,d\xi < +\infty\Big\}$$

and a Hilbert space

$$L^2\big([0,\pi)\times\mathbb{R}\big) := \Big\{f(\theta,t):\ \int_0^{\pi}\!\!\int_{\mathbb{R}}\big|f(\theta,t)\big|^2\,dt\,d\theta < +\infty\Big\}$$

with the inner product $\langle f,g\rangle := \int_0^{\pi}\int_{\mathbb{R}}f(\theta,t)\overline{g(\theta,t)}\,dt\,d\theta$.
Example 2.1 Let $L_{\theta,t} := \{(x,y):\ x\cos\theta+y\sin\theta=t\}\subseteq\mathbb{R}^2$ and let $ds(x,y)$ be the Euclidean measure on the line $L_{\theta,t}$. Then the classical Radon transform $R: D(R)\to L^2([0,\pi)\times\mathbb{R})$ defined by

$$Rf(\theta,t) = \int_{L_{\theta,t}} f(x,y)\,ds(x,y)$$

satisfies $(R^*Rf)^\wedge(\xi) = |\xi|^{-1}\hat f(\xi)$.
Proof By the definition of $D(R)$, $\int_{\mathbb{R}^2}|\xi|^{-1}|\hat f(\xi)\hat g(\xi)|\,d\xi < +\infty$ for $f,g\in D(R)$. It is easy to see that $\int_0^{2\pi}d\theta\int_0^{+\infty}\hat f(\omega\cos\theta,\omega\sin\theta)\overline{\hat g(\omega\cos\theta,\omega\sin\theta)}\,d\omega = \int_0^{\pi}d\theta\int_{\mathbb{R}}\hat f(\omega\cos\theta,\omega\sin\theta)\overline{\hat g(\omega\cos\theta,\omega\sin\theta)}\,d\omega$. This with the Fourier slice theorem ([1, 2]) and the Plancherel formula leads to

$$\int_{\mathbb{R}^2}|\xi|^{-1}\hat f(\xi)\overline{\hat g(\xi)}\,d\xi = \int_0^{2\pi}d\theta\int_0^{+\infty}\hat f(\omega\cos\theta,\omega\sin\theta)\overline{\hat g(\omega\cos\theta,\omega\sin\theta)}\,d\omega$$
$$= \int_0^{\pi}d\theta\int_{\mathbb{R}}(R_\theta f)^\wedge(\omega)\overline{(R_\theta g)^\wedge(\omega)}\,d\omega$$
$$= \int_0^{\pi}d\theta\int_{\mathbb{R}}Rf(\theta,t)\overline{Rg(\theta,t)}\,dt = \langle Rf,Rg\rangle,$$

where $R_\theta f(t) := Rf(\theta,t)$. Moreover, $\langle(R^*Rf)^\wedge,\hat g\rangle = \langle R^*Rf,g\rangle = \langle Rf,Rg\rangle = \langle|\xi|^{-1}\hat f(\xi),\hat g(\xi)\rangle$ for $f,g\in D(R)$. Because $D(R)$ is dense in $L^2(\mathbb{R}^2)$, one receives the desired conclusion $(R^*Rf)^\wedge(\xi) = |\xi|^{-1}\hat f(\xi)$.

Here, $R^*Rf\in L^2(\mathbb{R}^2)$ for $f\in D(R)$. In fact, $R^*Rf = 2\pi I_1 f$ by [10], where $I_1$ is the Riesz fractional integration transform defined by

$$I_1(f)(x) := C_1\int_{\mathbb{R}^2}\frac{f(x-y)}{|y|}\,dy$$
with some normalizing constant $C_1$ [11]. We rewrite (dropping the harmless constant $C_1$)

$$I_1(f)(x) = \int_{|y|\le r}\frac{f(x-y)}{|y|}\,dy + \int_{|y|>r}\frac{f(x-y)}{|y|}\,dy =: J_1+J_2.$$

Let $h(y) = \frac{1}{|y|}\chi_{B(0,1)}(y)$ and $h_r(y) = \frac{1}{r^2}h(\frac{y}{r})$, where $B(0,1)$ stands for the unit ball of $\mathbb{R}^2$ and $\chi_A$ represents the indicator function of the set $A$. Then

$$J_1 = \int_{|y|\le r}\frac{f(x-y)}{|y|}\,dy = r\int_{|y|\le r}h_r(y)f(x-y)\,dy \le Cr\,Mf(x)$$

by Theorem 2 of reference [13], where $Mf$ is the Hardy-Littlewood maximal function of $f$.

On the other hand, the Hölder inequality implies

$$J_2 \le \|f\|_{\frac{p}{p-1}}\Big(\int_{|y|>r}|y|^{-p}\,dy\Big)^{\frac1p} \le Cr^{\frac{2-p}{p}}\|f\|_{\frac{p}{p-1}}$$

with $p>2$. Taking $r = [Mf(x)]^{-\frac{p}{2(p-1)}}$, one gets $I_1(f)(x)\lesssim[Mf(x)]^{\frac{p-2}{2(p-1)}}(1+\|f\|_{\frac{p}{p-1}})$ and

$$\|I_1(f)\|_2 \lesssim \big(1+\|f\|_{\frac{p}{p-1}}\big)\,\|Mf\|_{\frac{p-2}{p-1}}^{\frac{p-2}{2(p-1)}} \lesssim \big(1+\|f\|_{\frac{p}{p-1}}\big)\,\|f\|_{\frac{p-2}{p-1}}^{\frac{p-2}{2(p-1)}} < +\infty,$$

since $f\in L^{\frac{p}{p-1}}(\mathbb{R}^2)\cap L^{\frac{p-2}{p-1}}(\mathbb{R}^2)$ due to the assumption $f\in D(R)$ (note that $\frac{p-2}{p-1}>0$ and $\frac{p}{p-1}>1$). $\square$
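A concrete feel for the Radon transform in Example 2.1: for the radial Gaussian $f(x,y)=e^{-\pi(x^2+y^2)}$, the integral along any line at signed distance $t$ from the origin equals $e^{-\pi t^2}$, because the one-dimensional Gaussian marginal integrates to one. The sketch below (a discretized line integral; step size and truncation are arbitrary choices, not from the paper) checks this for several angles:

```python
import math

def radon_gauss(theta, t, h=0.01, L=6.0):
    # Discretized integral of f(x, y) = exp(-pi (x^2 + y^2)) over the line
    # x cos(theta) + y sin(theta) = t, parametrized by arc length s.
    n = int(2 * L / h)
    total = 0.0
    for i in range(n):
        s = -L + (i + 0.5) * h
        x = t * math.cos(theta) - s * math.sin(theta)
        y = t * math.sin(theta) + s * math.cos(theta)
        total += math.exp(-math.pi * (x * x + y * y))
    return total * h

# Rf(theta, t) = exp(-pi t^2) for every angle, since x^2 + y^2 = t^2 + s^2
# on the line and the 1-D Gaussian integrates to one.
for theta in (0.0, 0.7, 1.9):
    for t in (0.0, 0.5, 1.0):
        assert abs(radon_gauss(theta, t) - math.exp(-math.pi * t * t)) < 1e-9
```

This angle-independence of the projections is the radial symmetry that the Fourier slice theorem turns into a statement about slices of $\hat f$.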
In order to introduce the next example, we use $f*g$ to denote the convolution of $f$ and $g$.

Example 2.2 The Bessel operator $B_\alpha: L^2(\mathbb{R}^2)\to L^2(\mathbb{R}^2)$ defined by $B_\alpha f = b_\alpha * f$ with $\hat b_\alpha(\xi) = (1+|\xi|^2)^{-\alpha/2}$ and $\alpha>0$ satisfies

$$\big(B_\alpha^*B_\alpha f\big)^\wedge(\xi) = (1+|\xi|^2)^{-\alpha}\hat f(\xi).$$

Proof It is known that $b_\alpha(x)\in L^1(\mathbb{R}^2)$ for $\alpha>0$ [11]. Hence, $(B_\alpha f)^\wedge(\xi) = \hat b_\alpha(\xi)\hat f(\xi) = (1+|\xi|^2)^{-\alpha/2}\hat f(\xi)$. For $f,g\in L^2(\mathbb{R}^2)$, $\langle(B_\alpha^*B_\alpha f)^\wedge,\hat g\rangle = \langle B_\alpha^*B_\alpha f,g\rangle = \langle B_\alpha f,B_\alpha g\rangle = \langle(B_\alpha f)^\wedge,(B_\alpha g)^\wedge\rangle = \langle(1+|\xi|^2)^{-\alpha}\hat f(\xi),\hat g(\xi)\rangle$. Thus,

$$\big(B_\alpha^*B_\alpha f\big)^\wedge(\xi) = (1+|\xi|^2)^{-\alpha}\hat f(\xi). \qquad\square$$
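The content of Example 2.2 is that $B_\alpha^*B_\alpha$ is a Fourier multiplier: applying the symbol $(1+|\xi|^2)^{-\alpha/2}$ twice is the same as applying $(1+|\xi|^2)^{-\alpha}$ once. A discrete periodic analogue (a 1-D length-8 DFT toy, not the continuous operator of the paper) makes this checkable:

```python
import cmath

N, alpha = 8, 0.7

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def idft(X):
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def bessel_filter(x, power):
    # Discrete analogue of the multiplier (1 + |xi|^2)^(-power); xi runs over
    # the centered integer frequencies of the length-N DFT.
    X = dft(x)
    for k in range(N):
        xi = k if k <= N // 2 else k - N
        X[k] *= (1.0 + xi * xi) ** (-power)
    return idft(X)

signal = [1.0, -2.0, 0.5, 3.0, 0.0, 1.5, -1.0, 2.0]
twice = bessel_filter(bessel_filter(signal, alpha / 2), alpha / 2)
once = bessel_filter(signal, alpha)
assert all(abs(a - b) < 1e-9 for a, b in zip(twice, once))
```

Because the symbol is real, the operator is self-adjoint, which is why $B_\alpha^*B_\alpha$ simply squares it.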
To introduce the Riesz fractional integration transform, we define

$$D := \big\{f\in L^2(\mathbb{R}^2):\ f \text{ has compact support}\big\} \subseteq L^1(\mathbb{R}^2)\cap L^2(\mathbb{R}^2).$$

Then $D\subseteq L^s(\mathbb{R}^2)$ ($1\le s\le 2$). For $f\in D$ and $0<\alpha<1$, the Riesz fractional integration transform is defined by

$$I_\alpha(f)(x) := C_\alpha\int_{\mathbb{R}^2}\frac{f(y)}{|x-y|^{2-\alpha}}\,dy \in L^2(\mathbb{R}^2), \qquad (2.1)$$

where $C_\alpha$ is a normalizing constant [11].
Lemma 2.1 Let $S(\mathbb{R}^2)$ be the Schwartz space and $\Psi = \{\psi\in S(\mathbb{R}^2):\ \frac{\partial^\beta}{\partial x^\beta}\psi(0) = 0,\ \beta\in\mathbb{Z}_+\times\mathbb{Z}_+\}$ with $\mathbb{Z}_+$ being the set of non-negative integers. Define $\Phi := \{\varphi=\hat\psi:\ \psi\in\Psi\}$. Then with $\alpha>0$,

$$(I_\alpha f)^\wedge(\xi) = |\xi|^{-\alpha}\hat f(\xi)$$

holds for each $f\in\Phi$.
Example 2.3 The transform $I_\alpha$ defined by (2.1) satisfies $(I_\alpha^*I_\alpha f)^\wedge(\xi) = |\xi|^{-2\alpha}\hat f(\xi)$ for $f\in D$ and $0<\alpha<1$.
Proof As proved in Examples 2.1 and 2.2, it is sufficient to show that for $f\in D$,

$$(I_\alpha f)^\wedge(\xi) = |\xi|^{-\alpha}\hat f(\xi). \qquad (2.2)$$

One proves (2.2) firstly for $f\in C_0^\infty(\mathbb{R}^2)$. Take $\mu(r)\in C^\infty([0,\infty))$ with $0\le\mu(r)\le1$ and

$$\mu(r) = \begin{cases}1, & r\ge 2;\\ 0, & 0\le r\le 1.\end{cases}$$

Define $\psi_N(\xi) := \mu(N|\xi|)\hat f(\xi)$. Then $\psi_N(\xi)\in\Psi$ and $f_N(x) := \check\psi_N(x) = \hat\psi_N(-x)\in\Phi$. By Lemma 2.1,

$$(I_\alpha f_N)^\wedge(\xi) = |\xi|^{-\alpha}\hat f_N(\xi). \qquad (2.3)$$

Let $k(x)$ be the inverse Fourier transform of the function $1-\mu(|x|)$ and $k_N(x) := N^{-2}k(N^{-1}x)$. Then $\int_{\mathbb{R}^2}k(x)\,dx = 1$ and $f_N(x) = f(x)-k_N*f(x)$. Moreover, the classical approximation theorem [13] tells

$$\lim_{N\to\infty}\|f_N-f\|_p = 0$$

for $p>1$. On the other hand, $\|I_\alpha f_N - I_\alpha f\|_2 = \|I_\alpha(f_N-f)\|_2 \le C\|f_N-f\|_{\frac{2}{1+\alpha}}$ due to the Hardy-Littlewood-Sobolev theorem [13]. Hence, $\lim_{N\to\infty}\|(I_\alpha f_N)^\wedge(\xi)-(I_\alpha f)^\wedge(\xi)\|_2 = \lim_{N\to\infty}\|I_\alpha f_N - I_\alpha f\|_2 = 0$. That is,

$$\lim_{N\to\infty}(I_\alpha f_N)^\wedge(\xi) = (I_\alpha f)^\wedge(\xi) \qquad (2.4)$$

in the $L^2(\mathbb{R}^2)$ sense. Note that $\||\xi|^{-\alpha}\hat f_N(\xi) - |\xi|^{-\alpha}\hat f(\xi)\|_2^2 = \int_{\mathbb{R}^2}|\xi|^{-2\alpha}|\hat f(\xi)|^2\big[1-\mu(N|\xi|)\big]^2\,d\xi$; $|\xi|^{-2\alpha}|\hat f(\xi)|^2\in L^1(\mathbb{R}^2)$ with $0<\alpha<1$ and $\lim_{N\to\infty}[1-\mu(N|\xi|)] = 0$ for $\xi\ne0$. Then

$$\lim_{N\to\infty}\big\||\xi|^{-\alpha}\hat f_N(\xi) - |\xi|^{-\alpha}\hat f(\xi)\big\|_2 = 0 \qquad (2.5)$$

thanks to the Lebesgue dominated convergence theorem, which means $\lim_{N\to\infty}|\xi|^{-\alpha}\hat f_N(\xi) = |\xi|^{-\alpha}\hat f(\xi)$ in the $L^2(\mathbb{R}^2)$ sense. This with (2.3) and (2.4) shows (2.2) for $f\in C_0^\infty(\mathbb{R}^2)$.
In order to show (2.2) for $f\in D$, one can find $g\in C_0^\infty(\mathbb{R}^2)$ such that $\int_{\mathbb{R}^2}g(x)\,dx = 1$; set $g_N(x) := N^2g(Nx)$. Since $f*g_N\in C_0^\infty(\mathbb{R}^2)$, the above proved fact says

$$\big(I_\alpha(f*g_N)\big)^\wedge(\xi) = |\xi|^{-\alpha}(f*g_N)^\wedge(\xi). \qquad (2.6)$$

The same arguments as for (2.4) and (2.5) show that $\lim_{N\to\infty}(I_\alpha(f*g_N))^\wedge(\xi) = (I_\alpha f)^\wedge(\xi)$ and $\lim_{N\to\infty}|\xi|^{-\alpha}(f*g_N)^\wedge(\xi) = |\xi|^{-\alpha}\hat f(\xi)$. Hence,

$$(I_\alpha f)^\wedge(\xi) = |\xi|^{-\alpha}\hat f(\xi).$$

This completes the proof of (2.2) for $f\in D$. $\square$
Next, we prove a lemma which will be used in the next section. For convenience, here and in what follows, we define $\mathcal M = \mathcal N\cup\mathcal M'$ with $\mathcal N = \mathbb{Z}^2$ and $\mathcal M' := \{\mu=(j,l,k,d):\ j\ge j_0,\ -2^j\le l\le 2^j-1,\ k\in\mathbb{Z}^2,\ d=1,2\}$. Then the shearlet system (introduced in Section 1) can be represented as $\{s_\mu:\ \mu\in\mathcal M\}$, where $s_\mu = \psi_\mu = \psi^{(d)}_{j,l,k}$ if $\mu\in\mathcal M'$, and $s_\mu = \varphi_\mu = \varphi_{j_0,k}$ if $\mu\in\mathcal N$.

Lemma 2.2 Let $K$ satisfy $(K^*Kf)^\wedge(\xi) = (b+|\xi|^2)^{-\alpha}\hat f(\xi)$ and let $\{s_\mu,\mu\in\mathcal M\}$ be the shearlets introduced in the first section. Define $\hat\sigma_\mu(\xi) = (b+|\xi|^2)^{\alpha}\hat s_\mu(\xi)$ and $U_\mu := 2^{-2\alpha j}K\sigma_\mu$. Then $\|U_\mu\|\le C$ and for $\mu\in\mathcal M$,

$$\langle f,s_\mu\rangle = 2^{2\alpha j}\langle Kf,U_\mu\rangle.$$
Proof By the Plancherel formula and the assumption $\hat\sigma_\mu(\xi) = (b+|\xi|^2)^{\alpha}\hat s_\mu(\xi)$, one knows that $\langle f,s_\mu\rangle = \langle\hat f,\hat s_\mu\rangle = \langle\hat f(\xi),(b+|\xi|^2)^{-\alpha}\hat\sigma_\mu(\xi)\rangle$. Moreover,

$$\langle f,s_\mu\rangle = \big\langle\hat f(\xi),(K^*K\sigma_\mu)^\wedge(\xi)\big\rangle = \langle f,K^*K\sigma_\mu\rangle = \langle Kf,K\sigma_\mu\rangle = 2^{2\alpha j}\langle Kf,U_\mu\rangle$$

due to $(K^*Kf)^\wedge(\xi) = (b+|\xi|^2)^{-\alpha}\hat f(\xi)$ and $U_\mu := 2^{-2\alpha j}K\sigma_\mu$.

Next, one shows $\|U_\mu\|\le C$. Note that $\|K\sigma_\mu\|^2 = \langle K\sigma_\mu,K\sigma_\mu\rangle = \langle K^*K\sigma_\mu,\sigma_\mu\rangle = \langle(K^*K\sigma_\mu)^\wedge,\hat\sigma_\mu\rangle$, $(K^*K\sigma_\mu)^\wedge = (b+|\xi|^2)^{-\alpha}\hat\sigma_\mu(\xi)$ and $\hat\sigma_\mu(\xi) = (b+|\xi|^2)^{\alpha}\hat s_\mu(\xi)$. Then $\|K\sigma_\mu\|^2 = \langle\hat s_\mu(\xi),(b+|\xi|^2)^{\alpha}\hat s_\mu(\xi)\rangle$ and

$$\|U_\mu\|^2 = 2^{-4\alpha j}\|K\sigma_\mu\|^2 = 2^{-4\alpha j}\int_{\mathbb{R}^2}\big(b+|\xi|^2\big)^{\alpha}\big|\hat s_\mu(\xi)\big|^2\,d\xi.$$

Because $\operatorname{supp}\hat s_\mu\subseteq C_j := [-2^{2j-1},2^{2j-1}]^2\setminus[-2^{2j-4},2^{2j-4}]^2$, one receives $\|U_\mu\|^2 = 2^{-4\alpha j}\int_{C_j}(b+|\xi|^2)^{\alpha}|\hat s_\mu(\xi)|^2\,d\xi\le C$. This completes the proof of Lemma 2.2. $\square$
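The chain $\langle f,s_\mu\rangle = \langle Kf,K\sigma_\mu\rangle$ in the proof only uses that $K^*K$ is the multiplier $(b+|\xi|^2)^{-\alpha}$ and that $\hat\sigma_\mu$ inverts it on $\hat s_\mu$. A discrete 1-D sketch (hand-rolled DFT; taking $K$ itself to be the multiplier $(b+\xi^2)^{-\alpha/2}$ is just one concrete choice consistent with the assumption, not the paper's operator):

```python
import cmath

N, b, alpha = 8, 1.0, 0.6

def dft(x):
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) for k in range(N)]

def mult(x, power):
    # Fourier multiplier (b + xi^2)^power on length-N periodic signals.
    X = dft(x)
    Y = []
    for k in range(N):
        xi = k if k <= N // 2 else k - N
        Y.append(X[k] * (b + xi * xi) ** power)
    return [sum(Y[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)) / N for n in range(N)]

def inner(u, v):
    return sum(a * w.conjugate() for a, w in zip(u, v))

f = [1.0, 0.5, -1.0, 2.0, 0.0, -0.5, 1.5, -2.0]
s = [0.2, -1.0, 0.7, 0.0, 1.0, 0.3, -0.4, 0.6]
K = lambda x: mult(x, -alpha / 2)   # one choice with K*K = (b + xi^2)^(-alpha)
sigma = mult(s, alpha)              # sigma^ = (b + xi^2)^alpha * s^
assert abs(inner(f, s) - inner(K(f), K(sigma))) < 1e-9
```

The exponents cancel in the frequency domain exactly as in the proof: $(b+\xi^2)^{-\alpha/2}\cdot(b+\xi^2)^{-\alpha/2}\cdot(b+\xi^2)^{\alpha}=1$.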
At the end of this section, we introduce two theorems which are important for our discussions. As in [8], we use $\mathrm{STAR}^2(A)$ to denote all sets $B\subseteq[0,1]^2$ with $C^2$ boundary $\partial B$ given in polar coordinates by

$$\beta(\theta) = \big(\rho(\theta)\cos\theta,\ \rho(\theta)\sin\theta\big),$$

where the radius function satisfies $\sup|\rho''(\theta)|\le A$ and $\rho\le\rho_0<1$; and $\mathcal E^2(A)$ denotes the class of functions of the form $f = f_0+f_1\chi_B$ with $B\in\mathrm{STAR}^2(A)$ and $f_0,f_1\in C_0^2([0,1]^2)$. Let

$$c_\mu := \langle f,s_\mu\rangle, \qquad M_j := \big\{(j,l,k,d):\ |k|\le 2^{2j+2},\ -2^j\le l\le 2^j-1,\ d=1,2\big\}$$

and

$$R(j,\varepsilon) = \big\{\mu\in M_j:\ |c_\mu|>\varepsilon\big\}.$$

Then with $\#R(j,\varepsilon)$ standing for the cardinality of $R(j,\varepsilon)$, the following conclusion holds [8].

Theorem 2.1 For $f\in\mathcal E^2(A)$, $\#R(j,\varepsilon)\le C\varepsilon^{-2/3}$ and

$$\sum_{\mu\in M_j}|c_\mu|^2\le C2^{-2j}.$$
Theorem 2.2 [14] Let $X\sim N(u,1)$ and $t = \sqrt{2\log(\eta^{-1})}$ with $0<\eta\le1$. Then

$$E\big|T_s(X,t)-u\big|^2 \le \big(2\log\eta^{-1}+1\big)\big(\eta+\min\{u^2,1\}\big),$$

where $N(u,1)$ denotes the normal distribution with mean $u$ and variance 1, while $T_s(y,t) := \operatorname{sgn}(y)(|y|-t)_+$ is the soft thresholding function.
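The soft thresholding rule $T_s(y,t)=\operatorname{sgn}(y)(|y|-t)_+$ is a three-line function, and the oracle bound of Theorem 2.2 can be eyeballed by Monte Carlo (the parameter values below are arbitrary illustrations, not from the paper):

```python
import math, random

def soft_threshold(y, t):
    # Ts(y, t) = sgn(y) * (|y| - t)_+ : shrink toward zero by t, clip at zero.
    if y > t:
        return y - t
    if y < -t:
        return y + t
    return 0.0

assert soft_threshold(3.0, 1.0) == 2.0
assert soft_threshold(-3.0, 1.0) == -2.0
assert soft_threshold(0.4, 1.0) == 0.0

# Monte Carlo check of the risk bound with eta = 0.01, u = 0.5:
# E|Ts(X, t) - u|^2 <= (2 log(1/eta) + 1) (eta + min(u^2, 1)).
random.seed(0)
u, eta = 0.5, 0.01
t = math.sqrt(2 * math.log(1 / eta))
risk = sum((soft_threshold(u + random.gauss(0, 1), t) - u) ** 2
           for _ in range(20000)) / 20000
bound = (2 * math.log(1 / eta) + 1) * (eta + min(u * u, 1.0))
assert risk < bound
```

The bound is loose here (the empirical risk is close to $u^2$), which is typical: its strength is that it holds uniformly in $u$.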
3 Main theorem
In this section, we give an approximation result, which extends the main theorem of [8] from the Radon transform to a family of linear operators. To do that, we introduce a set $N(\varepsilon)$ of significant shearlet coefficients as follows. Let

$$s_0 = \frac{1}{3+4\alpha}\log_2\varepsilon^{-1}, \qquad s_1 = \frac{2}{3+4\alpha}\log_2\varepsilon^{-1},$$

and $j_0 = \lceil s_0\rceil$, $j_1 = \lceil s_1\rceil$. Define $N(\varepsilon) := M(\varepsilon)\cup N_0(\varepsilon)\subseteq\mathcal M$, where

$$N_0(\varepsilon) = \big\{\mu = k\in\mathbb{Z}^2:\ |k|\le 2^{2j_0+2}\big\};$$
$$M(\varepsilon) = \big\{\mu = (j,l,k,d):\ j_0\le j\le j_1,\ -2^j\le l\le 2^j-1,\ |k|\le 2^{2j+2},\ d=1,2\big\}.$$
Consider the model $Y = Kf+\varepsilon W$ with $(K^*Kf)^\wedge(\xi) = (b+|\xi|^2)^{-\alpha}\hat f(\xi)$. Lemma 2.2 tells that $y_\mu := 2^{2\alpha j}\langle Y,U_\mu\rangle = \langle f,s_\mu\rangle + \varepsilon 2^{2\alpha j}n_\mu$, where $n_\mu$ is Gaussian noise with zero mean and bounded variance $\sigma_\mu^2 = \|U_\mu\|^2\le C$ [15]. Let $c_\mu = \langle f,s_\mu\rangle$ and $\tilde f = \sum_{\mu\in N(\varepsilon)}\tilde c_\mu s_\mu$ with

$$\tilde c_\mu = \begin{cases}T_s\big(y_\mu,\ \varepsilon\sqrt{2\log(\#N(\varepsilon))}\,2^{2\alpha j}\sigma_\mu\big), & \mu\in N(\varepsilon);\\ 0, & \text{otherwise},\end{cases}$$

where $T_s(y,t)$ is the soft thresholding function. Then the following result holds.
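The estimator above has three steps: observe noisy coefficients, soft-threshold at $\varepsilon\sqrt{2\log(\#N(\varepsilon))}$ times the per-coefficient noise scale, reconstruct. A toy version of the thresholding step alone (identity operator, canonical basis standing in for the shearlet frame; all numbers below are hypothetical) shows why it helps:

```python
import math, random

# Sparse "cartoon-like" coefficient sequence plus white noise of level eps.
random.seed(1)
n, eps = 1000, 0.1
c = [0.0] * n
for i in (3, 77, 250, 512, 900):
    c[i] = 5.0
y = [ci + eps * random.gauss(0, 1) for ci in c]

t = eps * math.sqrt(2 * math.log(n))   # universal threshold for #N = n coefficients

def soft(v):
    return math.copysign(max(abs(v) - t, 0.0), v)

err_raw = sum((yi - ci) ** 2 for yi, ci in zip(y, c))
err_thr = sum((soft(yi) - ci) ** 2 for yi, ci in zip(y, c))
assert err_thr < err_raw   # thresholding removes most of the noise energy
```

The raw error grows with the number of kept coefficients, while the thresholded error pays only a small bias on the few large coefficients; this is the mechanism behind the $\min\{c_\mu^2,\cdot\}$ terms in the proof below. The operator inversion step (the factor $2^{2\alpha j}$ and $U_\mu$) is not modeled here.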
Theorem 3.1 Let $f\in\mathcal E^2(A)$ be the solution to $Y = Kf+\varepsilon W$ with $(K^*Kf)^\wedge(\xi) = (b+|\xi|^2)^{-\alpha}\hat f(\xi)$, and let $\tilde f$ be defined as above. Then

$$\sup_{f\in\mathcal E^2(A)}E\|\tilde f-f\|^2 \le C\log(\varepsilon^{-1})\,\varepsilon^{\frac{4}{3+4\alpha}} \qquad (\varepsilon\to0).$$
Proof Since $\{s_\mu,\mu\in\mathcal M\}$ is a Parseval frame, $f = \sum_{\mu\in\mathcal M}c_\mu s_\mu$ and $\tilde f = \sum_{\mu\in N(\varepsilon)}\tilde c_\mu s_\mu$, one has $\tilde f-f = \sum_{\mu\in\mathcal M}(\tilde c_\mu-c_\mu)s_\mu$. Moreover, $\|\tilde f-f\|^2\le\sum_{\mu\in\mathcal M}|\tilde c_\mu-c_\mu|^2$ and

$$E\|\tilde f-f\|^2 \le \sum_{\mu\in N(\varepsilon)}E|\tilde c_\mu-c_\mu|^2 + \sum_{\mu\in N(\varepsilon)^C}|c_\mu|^2. \qquad (3.1)$$
In order to estimate $\sum_{\mu\in N(\varepsilon)^C}|c_\mu|^2$, one observes $\sum_{\mu\in M_j}|c_\mu|^2\le C2^{-2j}$ due to Theorem 2.1. Then $\sum_{j>j_1}\sum_{\mu\in M_j}|c_\mu|^2 \le C\sum_{j>j_1}2^{-2j}\le C2^{-2j_1}$. By $2^{j_1}\simeq\varepsilon^{-\frac{2}{3+4\alpha}}$,

$$\sum_{j>j_1}\sum_{\mu\in M_j}|c_\mu|^2 \lesssim \varepsilon^{\frac{4}{3+4\alpha}}. \qquad (3.2)$$

(Here and in what follows, $A\lesssim B$ denotes $A\le CB$ for some constant $C>0$.)
Next, one considers $c_\mu$ for $j_0\le j\le j_1$ and $|k|\ge 2^{2j+2}$. Note that $|\psi^{(d)}(x)|\le C_m(1+|x|)^{-m}$ ($d=1,2$, $m=1,2,\ldots$). Then $|\psi^{(d)}_{j,l,k}(x)|\le C_m2^{3j/2}(1+|B_d^lA_d^jx-k|)^{-m}$. Since $f\in\mathcal E^2(A)$, $\operatorname{supp}f\subseteq Q:=[0,1]^2$ and

$$\big|\langle f,\psi^{(d)}_{j,l,k}\rangle\big| \le C_m2^{3j/2}\|f\|_\infty\int_Q\big(1+|B_d^lA_d^jx-k|\big)^{-m}\,dx.$$

On the other hand, $|B_d^lA_d^jx|\le\|B_d^lA_d^j\|\,|x|\le 2^{2j+1}|x|\le\sqrt2\,2^{2j+1}$ for $x\in Q$. Hence $(1+|B_d^lA_d^jx-k|)^{-m}\le(1+|k|-|B_d^lA_d^jx|)^{-m}\le(|k|-\sqrt2\,2^{2j+1})^{-m}$ for $|k|\ge2^{2j+2}$. Moreover,

$$\sum_{|k|\ge2^{2j+2}}|c_\mu|^2 \lesssim 2^{3j}\sum_{|k|\ge2^{2j+2}}\big(|k|-\sqrt2\,2^{2j+1}\big)^{-2m} = 2^{3j}\sum_{n=1}^{\infty}\ \sum_{2^{2j+n+1}\le|k|\le2^{2j+n+2}}\big(|k|-\sqrt2\,2^{2j+1}\big)^{-2m} \lesssim 2^{3j}\sum_{n=1}^{\infty}2^{2(2j+n+2)}\big(2^{2j+n}\big)^{-2m} \lesssim 2^{-(4m-7)j},$$

since $m$ can be chosen big enough. Therefore,

$$\sum_{j=j_0}^{j_1}\sum_{l=-2^j}^{2^j-1}\sum_{|k|\ge2^{2j+2}}|c_\mu|^2 \le C_m\sum_{j=j_0}^{\infty}2^{j}\,2^{-(4m-7)j} \lesssim 2^{-(4m-8)j_0} \lesssim \varepsilon^{\frac{4}{3+4\alpha}} \qquad (3.3)$$

due to the choice of $j_0$. The similar (even simpler) arguments show $\sum_{|k|\ge2^{2j_0+2}}|\langle f,\varphi_{j_0,k}\rangle|^2\lesssim\varepsilon^{\frac{4}{3+4\alpha}}$ with $\varphi_{j_0,k}(x) = 2^{j_0}\varphi(2^{j_0}x-k)$. This with (3.2) and (3.3) leads to

$$\sum_{\mu\in N(\varepsilon)^C}|c_\mu|^2 \lesssim \varepsilon^{\frac{4}{3+4\alpha}}. \qquad (3.4)$$
Finally, one estimates $\sum_{\mu\in N(\varepsilon)}E|\tilde c_\mu-c_\mu|^2$. By the definition of $y_\mu$, $\varepsilon^{-1}2^{-2\alpha j}\sigma_\mu^{-1}y_\mu\sim N(\varepsilon^{-1}2^{-2\alpha j}\sigma_\mu^{-1}c_\mu,1)$. Applying Theorem 2.2 with $\eta^{-1} = \#N(\varepsilon)$, one obtains that

$$E\Big|T_s\big(\varepsilon^{-1}2^{-2\alpha j}\sigma_\mu^{-1}y_\mu,\ \sqrt{2\log(\#N(\varepsilon))}\big)-\varepsilon^{-1}2^{-2\alpha j}\sigma_\mu^{-1}c_\mu\Big|^2 \le \big(2\log(\#N(\varepsilon))+1\big)\Big(\frac{1}{\#N(\varepsilon)}+\min\big\{\varepsilon^{-2}2^{-4\alpha j}\sigma_\mu^{-2}c_\mu^2,\ 1\big\}\Big).$$

Hence, $E|T_s(y_\mu,\varepsilon2^{2\alpha j}\sigma_\mu\sqrt{2\log(\#N(\varepsilon))})-c_\mu|^2 \lesssim [\log(\#N(\varepsilon))+1]\big[\frac{\varepsilon^22^{4\alpha j}\sigma_\mu^2}{\#N(\varepsilon)}+\min\{c_\mu^2,\varepsilon^22^{4\alpha j}\sigma_\mu^2\}\big]$. By $\tilde c_\mu = T_s(y_\mu,\varepsilon2^{2\alpha j}\sigma_\mu\sqrt{2\log(\#N(\varepsilon))})$ for $\mu\in N(\varepsilon)$, one knows that

$$E\sum_{\mu\in N(\varepsilon)}|\tilde c_\mu-c_\mu|^2 \lesssim \big[\log(\#N(\varepsilon))+1\big]\times\Big(\varepsilon^2\sum_{\mu\in N(\varepsilon)}\frac{2^{4\alpha j}\sigma_\mu^2}{\#N(\varepsilon)}+\sum_{\mu\in N(\varepsilon)}\min\big\{c_\mu^2,\varepsilon^22^{4\alpha j}\sigma_\mu^2\big\}\Big). \qquad (3.5)$$
Note that $N(\varepsilon)\cap M_j\subset\{(j,l,k,d):\ |k|\le2^{2j+2},\ |l|\le2^j\}$. Then $\#N(\varepsilon)\le C\sum_{j_0\le j\le j_1}2^{5j}\lesssim 2^{5j_1}\lesssim\varepsilon^{-\frac{10}{3+4\alpha}}$, and $\log(\#N(\varepsilon))\lesssim\log(\varepsilon^{-1})$. Since $\{\sigma_\mu:\mu\in\mathcal M\}$ is uniformly bounded, $[\log(\#N(\varepsilon))+1]\varepsilon^2\sum_{\mu\in N(\varepsilon)}\frac{2^{4\alpha j}\sigma_\mu^2}{\#N(\varepsilon)}\lesssim\log(\varepsilon^{-1})\varepsilon^22^{4\alpha j_1}$. This with the choice of $j_1$ shows that

$$\big[\log(\#N(\varepsilon))+1\big]\varepsilon^2\sum_{\mu\in N(\varepsilon)}\frac{2^{4\alpha j}\sigma_\mu^2}{\#N(\varepsilon)} \lesssim \log(\varepsilon^{-1})\,\varepsilon^{\frac{6}{3+4\alpha}} \lesssim \log(\varepsilon^{-1})\,\varepsilon^{\frac{4}{3+4\alpha}}. \qquad (3.6)$$
It remains to estimate $[\log(\#N(\varepsilon))+1]\sum_{\mu\in N(\varepsilon)}\min\{c_\mu^2,\varepsilon^22^{4\alpha j}\sigma_\mu^2\}$. Since the $\sigma_\mu$ are uniformly bounded, it suffices to consider $\min\{c_\mu^2,\varepsilon^22^{4\alpha j}\}$. Clearly,

$$\sum_{\mu\in N(\varepsilon)}\min\big\{c_\mu^2,\varepsilon^22^{4\alpha j}\big\} = \sum_{\{\mu\in N(\varepsilon):\ |c_\mu|\ge2^{2\alpha j}\varepsilon\}}2^{4\alpha j}\varepsilon^2 + \sum_{\{\mu\in N(\varepsilon):\ |c_\mu|<2^{2\alpha j}\varepsilon\}}|c_\mu|^2. \qquad (3.7)$$

By Theorem 2.1, $\sum_{\{\mu\in M_j:\ |c_\mu|\ge2^{2\alpha j}\varepsilon\}}2^{4\alpha j}\varepsilon^2 \lesssim (2^{2\alpha j}\varepsilon)^{-2/3}(2^{2\alpha j}\varepsilon)^2 = (2^{2\alpha j}\varepsilon)^{4/3}$. Hence,

$$\sum_{\{\mu\in N(\varepsilon):\ |c_\mu|\ge2^{2\alpha j}\varepsilon\}}2^{4\alpha j}\varepsilon^2 = \sum_{j=j_0}^{j_1}\ \sum_{\{\mu\in M_j:\ |c_\mu|\ge2^{2\alpha j}\varepsilon\}}2^{4\alpha j}\varepsilon^2 \lesssim \big(2^{2\alpha j_1}\varepsilon\big)^{4/3}. \qquad (3.8)$$
On the other hand, $\sum_{\{\mu\in N(\varepsilon):\ |c_\mu|<2^{2\alpha j}\varepsilon\}}|c_\mu|^2 = \sum_{j=j_0}^{j_1}\sum_{n=0}^{\infty}\sum_{\{2^{2\alpha j-n-1}\varepsilon<|c_\mu|\le2^{2\alpha j-n}\varepsilon\}}|c_\mu|^2$. According to Theorem 2.1, $\#R(j,2^{2\alpha j-n-1}\varepsilon)\lesssim(2^{2\alpha j-n-1}\varepsilon)^{-2/3}$ and

$$\sum_{\{2^{2\alpha j-n-1}\varepsilon<|c_\mu|\le2^{2\alpha j-n}\varepsilon\}}|c_\mu|^2 \lesssim \big(2^{2\alpha j-n-1}\varepsilon\big)^{-2/3}\big(2^{2\alpha j-n}\varepsilon\big)^2 \lesssim \big(2^{2\alpha j-n}\varepsilon\big)^{4/3}.$$

Therefore,

$$\sum_{\{\mu\in N(\varepsilon):\ |c_\mu|<2^{2\alpha j}\varepsilon\}}|c_\mu|^2 \lesssim \sum_{j=j_0}^{j_1}\sum_{n=0}^{\infty}\big(2^{2\alpha j-n}\varepsilon\big)^{4/3} \lesssim \big(2^{2\alpha j_1}\varepsilon\big)^{4/3}.$$
Combining this with (3.7) and (3.8), one knows that $\sum_{\mu\in N(\varepsilon)}\min\{c_\mu^2,\varepsilon^22^{4\alpha j}\}\lesssim(2^{2\alpha j_1}\varepsilon)^{4/3}$. Furthermore,

$$\sum_{\mu\in N(\varepsilon)}\min\big\{c_\mu^2,\varepsilon^22^{4\alpha j}\big\} \lesssim \varepsilon^{\frac{4}{4\alpha+3}}$$

thanks to $2^{j_1}\simeq\varepsilon^{-\frac{2}{4\alpha+3}}$. Now, it follows from (3.5), (3.6) and (3.7) that

$$\sum_{\mu\in N(\varepsilon)}E|\tilde c_\mu-c_\mu|^2 \lesssim \log(\varepsilon^{-1})\,\varepsilon^{\frac{4}{4\alpha+3}}.$$

This with (3.1) and (3.4) leads to the desired conclusion $\sup_{f\in\mathcal E^2(A)}E\|\tilde f-f\|^2\le C\log(\varepsilon^{-1})\,\varepsilon^{\frac{4}{3+4\alpha}}$. The proof is completed. $\square$
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
LH and YL finished this work together. Both authors read and approved the final manuscript.
Author details
^1 Department of Applied Mathematics, Beijing University of Technology, Beijing, 100124, P.R. China. ^2 Department of Basic Courses, Beijing Union University, Beijing, 100101, P.R. China.
Acknowledgements
This work is supported by the National Natural Science Foundation of China (No. 11271038) and the Natural Science Foundation of Beijing (No. 1082003).
Received: 14 August 2012 Accepted: 18 December 2012 Published: 7 January 2013

References
1. Natterer, F: The Mathematics of Computerized Tomography. Wiley, New York (1986)
2. Natterer, F, Wübbeling, F: Mathematical Methods in Image Reconstruction. SIAM Monographs on Mathematical Modeling and Computation. SIAM, Philadelphia (2001)
3. Candès, EJ, Donoho, DL: Continuous curvelet transform: II. Discretization and frames. Appl. Comput. Harmon. Anal. 19, 198-222 (2005)
4. Candès, EJ, Donoho, DL: New tight frames of curvelets and optimal representations of objects with C^2 singularities. Commun. Pure Appl. Math. 57, 219-266 (2004)
5. Candès, EJ, Donoho, DL: Recovering edges in ill-posed inverse problems: optimality of curvelet frames. Ann. Stat. 30, 784-842 (2002)
6. Easley, G, Labate, D, Lim, WQ: Sparse directional image representations using the discrete shearlet transform. Appl. Comput. Harmon. Anal. 25, 25-46 (2008)
7. Guo, K, Labate, D: Optimally sparse multidimensional representation using shearlets. SIAM J. Math. Anal. 39, 298-318 (2007)
8. Colonna, F, Easley, G, Guo, K, Labate, D: Radon transform inversion using the shearlet representation. Appl. Comput. Harmon. Anal. 29, 232-250 (2010)
9. Lee, NY, Lucier, BJ: Wavelet methods for inverting the Radon transform with noisy data. IEEE Trans. Image Process. 10, 1079-1094 (2001)
10. Strichartz, RS: Radon inversion - variations on a theme. Am. Math. Mon. 89, 377-384 (1982)
11. Samko, SG: Hypersingular Integrals and Their Applications. Taylor & Francis, London (2002)
12. Zhou, MQ: Harmonic Analysis. Beijing Normal University Press, Beijing (1999)
13. Lu, SZ, Wang, KY: Real Analysis. Beijing Normal University Press, Beijing (2005)
14. Donoho, DL, Johnstone, IM: Ideal spatial adaptation via wavelet shrinkage. Biometrika 81, 425-455 (1994)
15. Johnstone, IM: Gaussian Estimation: Sequence and Wavelet Models. http://www-stat.stanford.edu/imj/GE12-27-11 (1999)
doi:10.1186/1029-242X-2013-11