An Adaptively Accelerated Bayesian Deblurring Method with Entropy Prior



Volume 2008, Article ID 674038, 13 pages, doi:10.1155/2008/674038

Research Article

An Adaptively Accelerated Bayesian Deblurring Method with Entropy Prior

Manoj Kumar Singh,1 Uma Shanker Tiwary,2 and Yong-Hoon Kim1

1Sensor System Laboratory, Department of Mechatronics, Gwangju Institute of Science and Technology (GIST),

1 Oryong-dong, Buk-gu, Gwangju-500 712, South Korea

2Indian Institute of Information Technology, Allahabad 211-012, India

Correspondence should be addressed to Yong-Hoon Kim, yhkim@gist.ac.kr

Received 28 August 2007; Revised 15 February 2008; Accepted 4 April 2008

Recommended by C. Charrier

The development of an efficient adaptively accelerated iterative deblurring algorithm based on Bayesian statistical concepts is reported. The entropy of the image is used as the "prior" distribution and, instead of the additive form used in conventional acceleration methods, an exponent form of the relaxation constant is used for acceleration. The proposed method is hereafter called adaptively accelerated maximum a posteriori with entropy prior (AAMAPE). Based on empirical observations in different experiments, the exponent is computed adaptively using first-order derivatives of the deblurred image from the previous two iterations. This exponent improves the speed of the AAMAPE method in the early stages and ensures stability at the later stages of iteration. The AAMAPE method also incorporates the constraints of nonnegativity and flux conservation. The paper discusses the fundamental idea of Bayesian image deblurring with entropy as the prior, and presents an analytical study of the superresolution and noise amplification characteristics of the proposed method. The experimental results show that the proposed AAMAPE method gives lower RMSE and higher SNR in 44% fewer iterations than the nonaccelerated maximum a posteriori with entropy prior (MAPE) method. Moreover, AAMAPE followed by wavelet Wiener filtering gives better results than state-of-the-art methods.

Copyright © 2008 Manoj Kumar Singh et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

1. INTRODUCTION

Image deblurring, the process of restoring an image from its blurred and noisy version, is an enduring linear inverse problem encountered in many application areas such as remote sensing, medical imaging, seismology, and astronomy [1-3]. Generally, many linear inverse problems are ill-conditioned, since the inverse of the linear operator either does not exist or is nearly singular, yielding highly noise-sensitive solutions. The methods for solving ill-conditioned linear inverse problems can be classified into the following two general categories: (a) methods based on regularization [2-4] and (b) methods based on Bayesian theory [2, 4, 5].

The main idea of the regularization and Bayesian approaches is the use of a priori information expressed by a prior term. The prior term gives a higher score to the most likely images and hence helps in selecting a single image from the many images that fit the noisy and blurred observation. However, modeling a prior for real-world images is not trivial and is a subjective matter. Many directions for prior modeling have been proposed, such as derivative energy in the Wiener filter [3], the compound Gauss-Markov random field [6, 7], Markov random fields (MRFs) with nonquadratic potentials [2, 8, 9], entropy [1, 3, 4, 10-12], and heavy-tailed densities of images in the wavelet domain [13]. An excellent review of image deblurring methods is available in [14].


In this paper, we describe the maximum a posteriori with entropy prior (MAPE) method for image deblurring in the Bayesian framework. This method is nonlinear and is solved iteratively; however, it has the drawbacks of slow convergence and heavy computation. Many techniques for accelerating iterative methods for faster convergence have been proposed [1, 16-21]. These acceleration methods can also be used to accelerate the MAPE method. All of these acceleration techniques use a correction term which is computed in each iteration and added to the result obtained in the previous iteration. In most of these acceleration methods, the correction term is obtained by multiplying the gradient of the objective function by an acceleration parameter. The acceleration methods given in [17-19] use a line search to find the acceleration parameter that maximizes the objective function (the likelihood or log-likelihood) at each iteration. This speeds up the iterative method by a factor of 2-5, but requires an a priori limit on the acceleration parameter to prevent divergence. Maximizing a function in the direction of the gradient is termed steepest ascent, and minimizing a function (in the negative gradient direction) is called steepest descent. The main problem with gradient-based methods is the selection of the optimal acceleration step. A large acceleration step speeds up the algorithm, but it may introduce error; if the error is amplified during the iterations, it can lead to instability. Thus, gradient-based methods require an acceleration step followed by a correction step to ensure stability, and this correction step reduces the gain obtained by the acceleration step.

A gradient search method proposed in [20], known as the conjugate gradient (CG) method, is better than the methods discussed above. This approach has also been proposed by the authors of [22] as well as [17, 21]. The CG method requires the gradient of the objective function and an efficient line search technique. A drawback of the CG method is that several function evaluations are needed in order to accurately maximize or minimize the objective function, and in many cases the objective function and its gradient are not trivial to evaluate.

One of our objectives in this paper is to give a simple and efficient method which overcomes the difficulties of previously proposed methods. In order to cope with the problems of earlier accelerated methods, the proposed AAMAPE method requires minimal information about the iterative process. Our method uses a multiplicative correction term instead of an additive correction term. The multiplicative correction term is obtained from the derivative of the conditional log-likelihood. We use an exponent on the multiplicative correction as an acceleration parameter, which is computed adaptively in each iteration using first-order derivatives of the deblurred image from the previous two iterations. The positivity of the pixel intensities in the proposed acceleration method is automatically ensured, since the multiplicative correction term is always positive, while in other acceleration methods based on an additive correction term the positivity has to be enforced manually at the end of each iteration.

Another important objective of this paper is to analyze the superresolution and the nature of the noise amplification in the proposed AAMAPE method. Superresolution means restoring frequencies beyond the diffraction limit. It is often claimed that nonlinear methods have superresolution capability, but very limited insight into superresolution is available. In [23], an analysis of superresolution is performed assuming that the point spread function (PSF) of the system and the object intensity are Gaussian functions. In this paper, we present a general analytical interpretation of the superresolving capability of the proposed AAMAPE method and confirm it experimentally.

It is a well-known fact about nonlinear methods based on maximum likelihood that the restored images begin to deteriorate after a certain number of iterations. This deterioration is due to the noise amplification in successive iterations. Owing to the nonlinearity, an analytical analysis of the noise amplification for a nonlinear method is difficult. In this paper, we investigate the process of noise amplification qualitatively for the proposed AAMAPE method.

The rest of the paper is organized as follows. Section 2 describes the observation model and the proposed AAMAPE method. Section 3 presents an analytical analysis of the superresolution and noise amplification for the proposed method. Experimental results and discussion are given in Section 4. The conclusion is presented in Section 5, which is followed by the references.

2. BAYESIAN FRAMEWORK FOR IMAGE DEBLURRING

2.1. Observation model

Consider an original image x of size M × N, blurred by a shift-invariant point spread function (PSF) h and corrupted by Poisson noise. The observation model for blurring in the presence of Poisson noise is given as

y(z) = P((h ⊗ x)(z)),  (1)

where P denotes the Poisson distribution, ⊗ is the convolution operator, and z is defined on the regular M × N lattice Z = {(m1, m2) : m1 = 1, 2, ..., M; m2 = 1, 2, ..., N}. Alternatively, observation model (1) can be expressed as

y(z)=(h⊗x)(z) +n(z), (2)

where n is zero-mean with variance σ²_n(z) = var{n(z)} = (h ⊗ x)(z). The blurred and noisy image y has mean E{y(z)} = (h ⊗ x)(z) and variance σ²_y(z) = (h ⊗ x)(z). Thus, the observation variance σ²_y(z) is signal dependent and, consequently, spatially variant. For mathematical simplicity, observation model (2) can be expressed in matrix-vector form as follows:

y=Hx+n, (3)

where H is the blurring operator of size MN × MN, and x, y, and n are the MN × 1 vectors obtained by lexicographically ordering the corresponding M × N arrays.
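To make the observation model concrete, the following Python sketch (not part of the original paper) simulates (1): it blurs a test image with a shift-invariant PSF and draws each pixel of y from a Poisson distribution with that mean. The helper name, the flat test image, and the 5×5 box-car PSF are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def blur_and_poisson(x, h, seed=0):
    """Simulate observation model (1): blur x with the PSF h and draw each
    pixel of y from a Poisson law whose mean is the blurred value (h (*) x)(z)."""
    rng = np.random.default_rng(seed)
    hx = fftconvolve(x, h, mode="same")        # (h (*) x)(z), shift-invariant blur
    hx = np.clip(hx, 1e-12, None)              # Poisson means must be nonnegative
    return rng.poisson(hx).astype(np.float64)  # one Poisson realization per pixel

# Hypothetical usage: a flat 256x256 image and a 5x5 box-car PSF as in Table 1.
x = np.full((256, 256), 100.0)
h = np.ones((5, 5)) / 25.0
y = blur_and_poisson(x, h)
```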


2.2. Entropy as a prior distribution

The basic idea of the Bayesian framework is to incorporate prior information about the desired image. The prior information is included using an a priori distribution. The a priori distribution p(x) is defined using entropy as [12]

p(x) = exp(λE(x)),  λ > 0,  (4)

where E(x) is the entropy of the original image x. The role of entropy in defining the a priori distribution has been discussed for four decades [24-26], and researchers have proposed many entropy functions [27]. Frieden first used the Shannon form of entropy in the context of image reconstruction [25]. We also use the Shannon entropy, which is given as follows:

E(x) = −Σ_i x_i log x_i.  (5)

It is important to note that, in spite of the large volume of literature justifying the choice of entropy as an a priori distribution, the entropy of an image does not have the same firm conceptual foundation as in statistical mechanics [14]. Trussell [28] found a relationship between the use of entropy as a prior in maximum a posteriori (MAP) image restoration and a simple maximum entropy image restoration with constraints for the Gaussian noise case [3]. In the case of Gaussian noise, a first-order approximation of MAP with Shannon entropy gives the well-known Tikhonov method of image restoration [3].
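As a small illustration (a sketch under the definitions above, not the authors' code), the Shannon entropy (5) and the corresponding log-prior of (4) can be computed as follows; the flattening and the small floor inside the logarithm are implementation choices.

```python
import numpy as np

def shannon_entropy(x, eps=1e-12):
    """Shannon entropy (5): E(x) = -sum_i x_i log x_i over all pixel intensities."""
    xf = np.asarray(x, dtype=np.float64).ravel()
    return -np.sum(xf * np.log(xf + eps))

def log_prior(x, lam):
    """Log of the entropy prior (4), up to an additive normalizing constant:
    log p(x) = lambda * E(x), with lambda > 0."""
    return lam * shannon_entropy(x)
```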

2.3. Accelerated MAP with entropy prior

When n is zero in (3), that is, when we consider only blurring, the expected value at the ith pixel of the blurred image is Σ_j h_ij x_j, where h_ij is the (i, j)th element of H and x_j is the jth element of x. Because of the Poisson noise, the actual ith pixel value y_i in y is one realization of a Poisson distribution with mean Σ_j h_ij x_j. Thus, we have the following relation:

p(y_i/x) = (Σ_j h_ij x_j)^{y_i} exp(−Σ_j h_ij x_j) / y_i! .  (6)

Each pixel in the blurred and noisy image y is realized by an independent Poisson process. Thus, the likelihood of obtaining the noisy and blurred image y is given by

p(y/x) = Π_i [ (Σ_j h_ij x_j)^{y_i} exp(−Σ_j h_ij x_j) / y_i! ].  (7)

From Bayes' theorem, we get the a posteriori probability p(x/y) of x for given y as

p(x/y) = p(y/x) p(x) / p(y).  (8)

The MAPE method with flux conservation for image deblurring seeks an approximate solution of (1) that maximizes the a posteriori probability p(x/y), or log p(x/y), subject to the constraint of flux conservation, Σ_j x_j = Σ_j y_j = N. We consider the maximization of the following function:

L(x, μ) = log p(x/y) − μ(Σ_j x_j − N),  (9)

where μ is the Lagrange multiplier for flux conservation. On substitution of p(x/y) from (8) into (9), we get

L(x, μ) = log p(y/x) + log p(x) − log p(y) − μ(Σ_j x_j − N).  (10)

Substituting p(x) from (4) and (5), and p(y/x) from (7), into (10), we get the following:

L(x, μ) = Σ_i [ −Σ_j h_ij x_j + y_i log(Σ_l h_il x_l) ] − λ Σ_j x_j log x_j − log p(y) − μ(Σ_j x_j − N).  (11)

From ∂L/∂x_j = 0, we get the following relation:

1 + ρμ = ρ Σ_i h_ij [ y_i / (Σ_l h_il x_l) ] − ρ − log x_j,  (12)

where ρ = λ^{−1}. Adding a positive constant C and introducing an exponent q on both sides of (12), we get the following relation:

(1 + ρμ + C)^q = [ ρ Σ_i h_ij ( y_i / (Σ_l h_il x_l) ) − ρ − log x_j + C ]^q.  (13)

Equation (13) is nonlinear and is solved iteratively. Multiplying both sides of (13) by x_j, we arrive at the following iterative procedure:

x_j^{k+1} = A x_j^k [ ρ Σ_i h_ij ( y_i / (Σ_l h_il x_l^k) ) − ρ − log x_j^k + C ]^q,  (14)

where A = (1 + ρμ + C)^{−q}. To ensure the nonnegativity of x_j^k and the computation of log(x_j^k) in the next iteration, a constant C > ρ + log(x_j^k) is added to both sides of (14). Generalizing (14), we get the following:

x^{k+1} = A x^k [ ρ H^T(y/Hx^k) − ρ − log x^k + C ]^q,  (15)

where the superscript T denotes the matrix transpose, and (y/Hx^k) denotes the vector obtained by component-wise division of y by Hx^k. Likewise, the logarithm and the exponent q in (15) are applied component-wise.


In further discussion, we use the term "correction factor" to refer to the expression in the square brackets of (15). The constant A is known if ρ and μ are known, and their values must be such that Σ_j x_j = N holds at convergence. Accordingly, in iteration (14), the constant A is recalculated at each iteration so that Σ_j x_j^k = N is satisfied.

Summing both sides of (14) over all pixel values, we get

Σ_j x_j^{k+1} = A Σ_j x_j^k [ ρ Σ_i h_ij ( y_i / (Σ_l h_il x_l^k) ) − ρ − log x_j^k + C ]^q.  (16)

Using the flux conservation constraint, Σ_j x_j^{k+1} = N, in (16), we get the following expression for A:

A = A(k) = N { Σ_j x_j^k [ ρ Σ_i h_ij ( y_i / (Σ_l h_il x_l^k) ) − ρ − log x_j^k + C ]^q }^{−1}.  (17)

From (17), we see that the constant A does not depend on the Lagrange multiplier μ. Thus, the iterative procedures (14) and (15) do not depend on μ.

We observed that the iteration given in (15) converges only for values of q lying between 1 and 3. Large values of q (≈ 3) may give faster convergence but with an increased risk of instability. Small values of q (≈ 1) lead to slow convergence with a reduced risk of instability. Between these two extremes, the adaptive selection of the exponent q provides a means of achieving faster convergence while ensuring stability. Thus, (15) with an adaptive selection of the exponent q leads to the AAMAPE method. An empirical method for the adaptive selection of q is discussed in Section 2.5. Putting q = 1 in (15), we get the following equation:

x^{k+1} = A x^k [ ρ H^T(y/Hx^k) − ρ − log x^k + C ].  (18)

Equation (18) is the nonaccelerated maximum a posteriori with entropy prior (MAPE) method for image deblurring; it can also be derived directly from (12).

As λ → 0 in (11), maximization of L(x, μ) amounts to maximizing the first term of (11), equivalently log p(y/x), with respect to x. This leads to the well-known Lucy-Richardson method, in which flux conservation and nonnegativity are automatic. Hence, for large ρ, this method behaves like the Lucy-Richardson method and can be expected to be unstable in the presence of noise. For large values of λ, the prior probability term becomes dominant in L(x, μ), and the prior probability is maximized when x is a uniform image. Thus, a small value of ρ imposes a high degree of uniformity in the restored image and, hence, smoothing of high-frequency components. The optimal value of ρ is obtained by trial and error: starting with a small value for ρ and iterating to convergence, the procedure is repeated with larger values of ρ until a value is found which sufficiently sharpens the image at convergence without undue noise degradation.
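For concreteness, a minimal sketch of one pass of the update (15) is given below, with A computed via the flux-conservation expression (17); setting q = 1 recovers the MAPE update (18). The helper ht_ratio, which must return H^T(y/Hx^k), and the parameter values are placeholders rather than the authors' implementation (an FFT-based realization of ht_ratio is sketched in Section 2.7).

```python
import numpy as np

def aamape_update(x, y, ht_ratio, rho=1.0e4, C=1.1e4, q=1.0):
    """One iteration of (15): multiply x^k by the correction factor
    [rho*H^T(y/Hx^k) - rho - log x^k + C]^q, then rescale by A from (17)
    so that the total flux sum(x^{k+1}) = sum(y) = N is preserved."""
    corr = (rho * ht_ratio(x) - rho - np.log(np.maximum(x, 1e-12)) + C) ** q
    x_new = x * corr
    return x_new * (y.sum() / x_new.sum())   # A(k) chosen by flux conservation, cf. (17)
```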

2.4. Nonnegativity in AAMAPE

It has been reported that disallowing negative pixel intensities strongly suppresses artifacts, increases the resolution, and may speed up the convergence of the algorithm [14]. Nonnegativity is a necessary restriction for almost all kinds of images. In many algorithms, the nonnegativity constraint is imposed either by a change of variable or by replacing negative values with some positive value at the end of each iteration.

In the accelerated MAPE iteration (15), for x^k > 0 the first term, ρH^T(y/Hx^k), is nonnegative, and the choice of the constant C > ρ + log(x^k) makes the correction factor positive; the constant A, given by (17), is also always positive. Thus, in the accelerated MAPE the nonnegativity of the intensity, x^{k+1} > 0, is automatically guaranteed.

2.5. Adaptive selection of q

It is found that the choice of q in (15) mainly depends on the noise n and its amplification during the iterations. If the noise is high, a smaller value of q is selected, and vice versa. The convergence speed of the proposed method depends on the choice of the parameter q. A drawback of this accelerated form of MAPE is that the selection of the exponent q has to be done manually by trial and error. We overcome this serious limitation by proposing a method by which q is computed adaptively as the iterations proceed. We propose the following expression for q, based on an empirical observation:

q(k+1) = exp( ⟨∇x^k, ∇x^{k−1}⟩ / ( ‖∇x^k‖₂ ‖∇x^{k−1}‖₂ ) ),  (19)

where ∇x^k denotes the first-order derivative (gradient) of the deblurred image at iteration k, and ⟨·,·⟩ and ‖·‖₂ denote the inner product and the Euclidean norm taken over all pixels.
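A minimal sketch of the adaptive exponent of (19) is given below; np.gradient stands in for a generic first-order derivative operator, and the small constant added to the denominator is a numerical safeguard that is not part of the formula.

```python
import numpy as np

def adaptive_q(x_k, x_km1):
    """Exponent q(k+1) of (19): the exponential of the normalized correlation
    between the first-order derivatives of the deblurred images from the
    previous two iterations."""
    gk = np.stack(np.gradient(x_k))       # first-order derivatives of x^k
    gkm1 = np.stack(np.gradient(x_km1))   # first-order derivatives of x^{k-1}
    num = np.sum(gk * gkm1)
    den = np.linalg.norm(gk) * np.linalg.norm(gkm1) + 1e-12
    return float(np.exp(num / den))       # roughly between exp(-1) and exp(1)
```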


2.6. Contraction mapping for AAMAPE

In order to show that the iterative procedure (15) gives the solution of (13), we prove that (15) provides a contraction mapping [29, 30]. A function f on a complete metric space (X, ψ) is said to have the contracting property if, for any x, x′ in X, the inequality

ψ(f(x), f(x′)) ≤ c ψ(x, x′)  (20)

holds for some fixed 0 < c < 1.

Equation (15) can be considered as a function from R^n to R^n, where n = MN and each component takes values in [0, G], G being the maximum gray-level value. For (15), the contracting property holds if, for any x^{k1}, x^{k2} in R^n, there exists a constant 0 < c < 1 such that the following inequality holds:

d(x^{k2+1}, x^{k1+1}) ≤ c d(x^{k2}, x^{k1}),  (21)

where d is the Euclidean distance in R^n.

In order to make mathematical steps simple and tractable, we rewrite (15) using (A.1) as follows:

x^{k+1} = A(k) x^k CF^k,  (22)

where CF^k = [ρH^T u^k − log(x^k) + C]^{q(k)} and u^k = (y − Hx^k)/Hx^k. The index k is used with the exponent q in order to cover the AAMAPE case, in which the exponent varies with iteration. For any x^{k1}, x^{k2} in R^n, we get the following relations:

x^{k1+1} = A(k1) x^{k1} CF^{k1},
x^{k2+1} = A(k2) x^{k2} CF^{k2}.  (23)

Using (23), we get the Euclidean distance between x^{k1+1} and x^{k2+1} as follows:

d(x^{k2+1}, x^{k1+1}) = Σ_j ( x_j^{k2+1} − x_j^{k1+1} )² = Σ_j ( p_j^{k2} x_j^{k2} − p_j^{k1} x_j^{k1} )²,  (24)

where p_j^k = A(k) CF_j^k. Substituting the value of A(k) from (17) and using Σ_j x_j^k = N, we get the following relation for p_j^k:

p_j^k = (Σ_l x_l^k) [ ρ(H^T u^k)_j − log x_j^k + C ]^{q(k)} / ( Σ_l x_l^k [ ρ(H^T u^k)_l − log x_l^k + C ]^{q(k)} ).  (25)

Since C > (ρ + log(x^k)), |H^T u^k| ≪ 1, and x^k > 0, it follows that p_j^k > 0. It is also evident from (25) that p_j^k ≤ 1. Replacing p_j^k in (24) by the minimum of all p_j^k, we get the following inequality:

d(x^{k2+1}, x^{k1+1}) ≤ p² Σ_j ( x_j^{k2} − x_j^{k1} )² = p² d(x^{k2}, x^{k1}),  (26)

where p = min{p_1^{k1}, p_2^{k1}, ..., p_n^{k1}, p_1^{k2}, p_2^{k2}, ..., p_n^{k2}} lies between 0 and 1. Thus, for any x^{k1}, x^{k2} in R^n, there exists a constant 0 < c (= p²) < 1 such that inequality (21) holds; therefore, the contraction mapping property is satisfied for (15).

2.7. Implementation and computational considerations

In the implementation of MAPE and AAMAPE, we use the shift-invariant property of the PSF. For a linear shift-invariant system, convolution in the spatial domain is equivalent to pointwise multiplication in the Fourier domain [31]. In MAPE and AAMAPE, evaluation of the array H^T(y/Hx^k) involves the heaviest computation in each iteration. It is accomplished using the Fast Fourier Transforms (FFTs) h(ξ, η) and x^k(ξ, η) of the PSF h and the image corresponding to x^k, in four steps as follows: (1) form Hx^k by taking the inverse FFT of the product h(ξ, η)x^k(ξ, η); (2) replace all elements less than 1 by 1 in Hx^k, and form the ratio y/Hx^k in the spatial domain; (3) compute the FFT of the result obtained in step 2 and multiply it by the complex conjugate of h(ξ, η); (4) take the inverse FFT of the result of step 3 and replace all negative entries by zero.

The FFT is the heaviest computation in each iteration of the MAPE and AAMAPE methods. Thus, the overall computational complexity of these methods is O(MN log MN).
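The four steps above can be sketched as follows; the zero-padding and circular centering of the PSF are implementation assumptions, and the returned closure plugs directly into the update sketch of Section 2.3 as ht_ratio.

```python
import numpy as np

def make_ht_ratio(y, psf):
    """Build an FFT-based evaluator of H^T(y/Hx) following the four steps of
    Section 2.7. The PSF is assumed smaller than the image; it is zero-padded
    to the image size and circularly shifted so that its center sits at (0, 0)."""
    h_pad = np.zeros_like(y, dtype=np.float64)
    h_pad[:psf.shape[0], :psf.shape[1]] = psf
    h_pad = np.roll(h_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    H = np.fft.fft2(h_pad)                                        # OTF h(xi, eta)

    def ht_ratio(x):
        Hx = np.real(np.fft.ifft2(H * np.fft.fft2(x)))            # step 1: Hx
        Hx = np.maximum(Hx, 1.0)                                  # step 2: floor at 1,
        r = y / Hx                                                #         form y/Hx
        out = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(r)))  # step 3: conj OTF
        return np.maximum(out, 0.0)                               # step 4: clip negatives
    return ht_ratio
```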

3. SUPERRESOLUTION AND NOISE AMPLIFICATION IN AAMAPE

3.1. Superresolution

It is often mentioned, without rigorous mathematical support, that nonlinear methods have superresolution capability, that is, that they restore frequencies beyond the diffraction limit. In spite of the highly nonlinear nature of the AAMAPE method, we explain its superresolution characteristic qualitatively by using the simplified form of (15) given as

x^{k+1} = AK^q x^k + qρAK^{q−1} x^k H^T u^k,  (27)

where u^k = (y − Hx^k)/Hx^k and K = C − log(x^k). The derivation of (27) is given in Appendix A. An equivalent expression of (27) in the Fourier domain is obtained by using the convolution and correlation theorems as [31]

X^{k+1}(f) = AK^q X^k(f) + (qρAK^{q−1}/MN) X^k(f) ⊗ [H^∗(f)U^k(f)],  (28)

where the superscript ∗ denotes the conjugate transpose of a matrix; X^{k+1}, X^k, and U^k are the discrete Fourier transforms of size M × N corresponding to the variables in lowercase letters; and f is the 2-D frequency index. H is the Fourier transform of the PSF h and is known as the optical transfer function (OTF). The OTF is band limited; say its upper cutoff frequency is f_C, that is, H(f) = 0 for |f| > f_C. In order to make the explanation of superresolution easy, we rewrite (28) as follows:

X^{k+1}(f) = AK^q X^k(f) + (qρAK^{q−1}/MN) Σ_v X^k(f − v) H^∗(v) U^k(v).  (29)

At any iteration, the product H^∗U^k in (29) is also band limited.


Table 1: Blurring PSF, BSNR, and SNR.

Experiment   Image       Blurring PSF           BSNR [dB]   SNR [dB]
Exp. 1       Cameraman   5×5 Box-car            40          17.35
Exp. 2       Lenna       5×5 Box-car            32          20.27
Exp. 3       Anuska      5×5 Gauss, scale = 3   38.60       22.95

Due to the multiplication of H^∗U^k by X^k and the summation over all available frequency indexes, the second term of (29) is never zero. Indeed, the in-band frequency components of X^k are spread out of the band. Thus, the restored image spectrum, X^{k+1}(f), has frequencies beyond the cutoff frequency f_C. It is important to note that the observation y is noisy, and the noise has components at high frequencies. Thus, the restored frequencies are contaminated, and spurious high frequencies are also present in the restored image. How to ensure the fidelity of the restored frequencies and the rejection of spurious high frequencies, without knowing the frequencies present in the true data x, is challenging and an open problem.
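As a simple numerical check of this effect (an illustration, not the authors' procedure), one can compute the range-compressed spectrum log10(1 + |·|²) used for Figure 8 and the fraction of spectral energy lying where the OTF is essentially zero; the threshold eps is an assumed tolerance.

```python
import numpy as np

def range_compressed_spectrum(img):
    """Centered spectrum compressed as log10(1 + |X|^2), as in Figure 8."""
    X = np.fft.fftshift(np.fft.fft2(img))
    return np.log10(1.0 + np.abs(X) ** 2)

def out_of_band_energy(img, otf, eps=1e-3):
    """Fraction of the spectral energy of `img` at frequencies where the OTF
    magnitude is below eps times its maximum, i.e. (nominally) beyond f_C."""
    X = np.fft.fft2(img)
    out_band = np.abs(otf) < eps * np.abs(otf).max()
    power = np.abs(X) ** 2
    return power[out_band].sum() / power.sum()
```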

3.2. Noise amplification

In this section, the signal-dependent characteristic of the noise amplification is investigated qualitatively. It is worth noting that the complete recovery of the frequencies present in the true image from the observed image requires a large number of iterations. But due to the noisy observation, the noise also amplifies as the iterations increase. Hence, the restored image may become unacceptably noisy and unreliable at a large number of iterations. The noise in the (k+1)th iteration is estimated by finding the correlation of the deviation of the restored spectrum X^{k+1}(f) from its expected value E[X^{k+1}(f)]. This correlation is the measure of noise and is given as follows:

μ_X^{k+1}(f, f′) = E{ [X^{k+1}(f) − E(X^{k+1}(f))] [X^{k+1}(f′) − E(X^{k+1}(f′))]^∗ },  (30)

where ∗ denotes the complex conjugate. In order to get a simple and compact expression for the amplified noise (30), we assume that the correlation at two different spatial frequencies vanishes, that is, two different spatial frequencies are independent. Substituting X^{k+1} from (29) into (30) and using this assumption, we get the following relation:

N_X^{k+1}(f) − N_X^k(f) = (α² − 1) N_X^k(f) + β² Σ_v |H(v)|² |U^k(v)|² N_X^k(f − v) + 2αβ Σ_v Re{H^∗(v)U^k(v)} E{|X^k(f)|²} − 2αβ Σ_v Re{H^∗(v)U^k(v)} |E{X^k(f)}|²,  (31)


Figure 1: "Cameraman" image. (a) Original image, (b) noisy and blurred image; PSF 5×5 Box-Car, BSNR = 40 dB, (c) restored image by MAPE corresponding to maximum SNR (348 iterations), (d) restored image by AAMAPE corresponding to maximum SNR (201 iterations), (e) residual corresponding to (c), and (f) residual corresponding to (d).

where α = AK^q, β = qρAK^{q−1}/MN, and N_X^k(f) = μ_X^k(f, f) represents the noise in X^k at frequency f. The derivation of (31) is given in Appendix B. From the third and fourth terms of (31), it is clear that in AAMAPE the amplified noise is signal dependent. Moreover, the noise from one iteration to the next is cumulative. Thus, increasing the number of iterations does not guarantee that the quality of the restored image will become acceptable. We could find the total amplified noise by summing (31) over all MN frequencies, provided that the statistics of the restored spatial frequencies were known at each iteration. But in practice, there is no way to estimate the statistics of the spatial frequencies, E(X^k(f)) and E(|X^k(f)|²), at each iteration. Moreover, due to the intricate



Figure 2: "Cameraman" image. (a) SNR of the MAPE (solid line) and AAMAPE (dotted line). (b) RMSE of the MAPE (solid line) and AAMAPE (dotted line).

restoration process, it is difficult to give a reliable and exact noise amplification formula. In the derivation of (31), we used the assumption that one spatial frequency is independent of the other, that is, the correlation term at two different spatial frequencies is zero. Thus, Monte Carlo simulations are the only realistic way to assess the amplified noise. Equation (31) serves to explain the well-known fact that the amplified noise is signal dependent in this nonlinear method.

4. EXPERIMENTAL RESULTS AND DISCUSSIONS

In this section, we describe three experiments demonstrating the performance of the AAMAPE method in comparison


Figure 3: "Lenna" image. (a) Original image, (b) noisy and blurred image; PSF 5×5 Box-Car, BSNR = 32 dB, (c) restored image by MAPE corresponding to maximum SNR (79 iterations), (d) restored image by AAMAPE corresponding to maximum SNR (44 iterations), (e) residual corresponding to (c), and (f) residual corresponding to (d).

with the MAPE method. The test images used in these experiments are Cameraman (Experiment 1), Lenna (Experiment 2), and Anuska (Experiment 3); all are 256×256, 8-bit grayscale images. The corrupting noise is of Poisson type in all experiments. Table 1 displays the blurring PSF, BSNR, and SNR for all three experiments. The level of noise in the observed image is characterized in decibels by the blurred SNR (BSNR), defined as

BSNR = 10 log10[ ‖Hx − (1/MN) Σ(Hx)‖² / (MNσ²) ] ≈ 10 log10[ ‖Hx − (1/MN) Σ(Hx)‖² / ‖y − Hx‖² ],  (32)


Table 2: SNR, iterations, and time for MAPE, AAMAPE, WaveGSM TI [32], ForWaRD [33], and RI [34].

Method        SNR [dB]                  No. of iterations        Time [sec.]
              Exp.1   Exp.2   Exp.3     Exp.1   Exp.2   Exp.3    Exp.1    Exp.2    Exp.3
MAPE          24.46   24.54   28.63     348     79      124      39.1     9.2      14.6
AAMAPE        24.46   24.54   28.65     201     44      71       22.5     5.0      7.8
WaveGSM TI    21.63   21.81   27.84     504     164     105      2349.4   772.57   601.5
ForWaRD       25.17   25.64   26.58     —       —       —        —        —        —
RI            25.50   25.74   27.94     —       —       —        20.0     —        —


Figure 4: "Lenna" image. (a) SNR of the MAPE (solid line) and AAMAPE (dotted line). (b) RMSE of the MAPE (solid line) and AAMAPE (dotted line).

where σ is the standard deviation of the noise. The following standard imaging performance criteria are used for the comparison of the AAMAPE and MAPE methods:

RMSE = [ (1/MN) ‖x − x^k‖² ]^{1/2},
SNR = 10 log10( ‖x‖² / ‖x − x^k‖² ).  (33)

Most of these criteria actually define the accuracy of approximation of the image intensity function; there is no one-to-one link between image quality and the above criteria. Another criterion, based on the residual r = y − Hx^k, is discussed in [14]. The deblurred image x^k is acceptable if the residual is consistent with the statistical distribution of the noise in the observed image y. If any systematic structure is observed in the residual, or if the residual distribution is inconsistent with the noise statistics of y, there is something wrong with the deblurred image. In most deblurring methods, particularly in linear methods, there is a tradeoff between the residual and the artifacts appearing in the deblurred image [14]. An excellent residual does not guarantee that the deblurred image has few artifacts or is free from them. Thus, a visual inspection, which of course is quite subjective, continues to be the most important final performance criterion.
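For reference, the criteria of (32) and (33) can be computed as in the following sketch (a straightforward transcription, with ‖y − Hx‖² used as the estimate of MNσ², as in (32)); the function names are ours.

```python
import numpy as np

def rmse(x_true, x_est):
    """RMSE of (33): square root of the mean squared error over the MN pixels."""
    return np.sqrt(np.mean((x_true - x_est) ** 2))

def snr_db(x_true, x_est):
    """SNR of (33) in dB: 10*log10(||x||^2 / ||x - x^k||^2)."""
    return 10.0 * np.log10(np.sum(x_true ** 2) / np.sum((x_true - x_est) ** 2))

def bsnr_db(Hx, y):
    """BSNR of (32) in dB, approximating MN*sigma^2 by ||y - Hx||^2."""
    signal = np.sum((Hx - Hx.mean()) ** 2)
    return 10.0 * np.log10(signal / np.sum((y - Hx) ** 2))
```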

Figures 1(c)-1(d), 3(c)-3(d), and 5(c)-5(d) show the restored images, corresponding to the maximum SNR, for Experiments 1, 2, and 3. It is clear from these figures that AAMAPE gives almost the same visual results in a smaller number of iterations than the MAPE method for all experiments. Figures 2, 4, and 6 show the variation of SNR and RMSE versus iterations for the experiments. It is observed that AAMAPE has a faster increase in SNR and a faster decrease in RMSE in comparison to the MAPE method, for all three experiments. It is also clear from Figures 2, 4, and 6 that, beyond a certain number of iterations, the SNR decreases



Figure 5: "Anuska" image. (a) Original image, (b) noisy and blurred image; PSF 5×5 Gauss, BSNR = 38.6 dB, (c) restored image by MAPE corresponding to maximum SNR (124 iterations), (d) restored image by AAMAPE corresponding to maximum SNR (71 iterations), (e) residual corresponding to (c), and (f) residual corresponding to (d).

and the RMSE increases. This is due to the fact that the noise amplification is signal dependent, as discussed in Section 3. Thus, increasing the number of iterations does not necessarily improve the quality of the restored image. In order to terminate the iterations at the best result, some stopping criterion must be used [14, 35].

Table 2 shows the SNR, number of iterations, and computation time of MAPE, the proposed AAMAPE, WaveGSM TI [32], ForWaRD [33], and RI [34] for Experiments 1-3. A Matlab implementation of ForWaRD is available at http://www.dsp.rice.edu/software/, and of RI at http://www.cs.tut.fi/lasip/re software.

The proposed AAMAPE method gives the same SNR as MAPE in fewer iterations and with less computation time. The performance of AAMAPE is better, with higher SNR and fewer iterations.


Figure 6: "Anuska" image. (a) SNR of the MAPE (solid line) and AAMAPE (dotted line). (b) RMSE of the MAPE (solid line) and AAMAPE (dotted line).



Figure 7: Iteration versus q (a) Cameraman image, (b) Lenna image, and (c) Anuska image.


Figure 8: Spectra of images from Figure 5. All spectra are range compressed with log10(1 + |·|²): (a) original image in Figure 5(a), (b) blurred and noisy image in Figure 5(b), (c) Figure 5(c), and (d) Figure 5(d).

We also tested the methods on other types of images (e.g., ultrasound) with different blurs and noise levels; we found that the proposed AAMAPE performs better than the state-of-the-art methods [32-34].

In order to illustrate the superresolution capability of MAPE and AAMAPE, we present the spectra of the original, blurred, and restored images for the third experiment in Figure 8. It is evident that the restored spectra, shown in Figures 8(c)-8(d), have frequency components that are not present in the observed spectrum of Figure 8(b). But the restored spectra are not identical to the original image spectrum shown in Figure 8(a). In principle, an infinite number of iterations is required to recover the true spectrum from the observed spectrum using any nonlinear method. But due to the noisy observation, the noise also gets amplified as the number of iterations increases, and the quality of the restored image degrades.

5. CONCLUSIONS


The experimental results suggest that the AAMAPE method gives better results in terms of lower RMSE and higher SNR, even with 44% fewer iterations than the MAPE method. This adaptive method has a simple form and is very easy to implement. Moreover, the computations required per iteration in AAMAPE are almost the same as those in MAPE. AAMAPE yields better results, in terms of SNR, than recently published state-of-the-art methods. The noise amplification and the superresolution capability of the AAMAPE method are not evident because of the extremely intricate restoration process. We explained the superresolution property of the proposed accelerated method analytically and verified it experimentally. To interpret the noise amplification of the proposed method, we also performed an analytical analysis, which confirms the signal-dependent nature of the amplified noise.

In AAMAPE, we assumed that the PSF is known and shift-invariant. However, in many cases the PSF is unknown. In such blind deblurring problems, the PSF and the true image must be estimated simultaneously from the noisy and blurred observation. The extension of AAMAPE to shift-variant PSFs and blind deblurring is a subject of future research.

APPENDICES

A. DERIVATION OF (27)

The term in the square brackets of (15) can be expressed as follows:

ρH^T(y/Hx^k) − ρ − log x^k + C = ρH^T[(y/Hx^k) − 1 + 1] − ρ − log x^k + C
  = ρH^T[(y − Hx^k)/Hx^k] − log x^k + C
  = ρH^T u^k − log x^k + C,  (A.1)

where u^k is the relative fitting error, given as follows:

u^k = (y − Hx^k)/Hx^k.  (A.2)

Raising both sides of (A.1) to the power q, we get

[ ρH^T(y/Hx^k) − ρ − log x^k + C ]^q = K^q [ 1 + (ρ/K) H^T u^k ]^q,  (A.3)

where K = C − log(x^k). It is observed that |H^T u^k| ≪ 1 for sufficiently large k. Moreover, by the Riemann-Lebesgue lemma it is possible to show that the sum H^T u^k has a value very close to zero [4]. It is obvious that ρ/K < 1 and, hence, |(ρ/K)H^T u^k| ≪ 1. Expanding the square bracket on the right-hand side of (A.3) using a Taylor series expansion:

K^q [ 1 + (ρ/K) H^T u^k ]^q = K^q [ 1 + q(ρ/K) H^T u^k + (q(q−1)/2!) ((ρ/K) H^T u^k)² + ··· ]
  = K^q + qρK^{q−1} H^T u^k + (q(q−1)/2!) K^q ((ρ/K) H^T u^k)² + ··· .  (A.4)

In the experiments we use ρ = 10000 and C = 11000, and we found that at higher iterations most of the values of |H^T u^k| are between 10^{-10} and 10^{-8}. Moreover, we observed that |H^T u^k| falls below 10^{-2} in just four iterations. In order to calculate the maximum possible value of the third term of (A.4) at higher iterations, we take the image intensity value 255 and q = 3. From these we find that the third term of (A.4) is between 10^{-8} and 10^{-4}. Thus, the third- and higher-order terms in (A.4) are very small. Retaining only the first-order term, we arrive at the following relation:

[ ρH^T(y/Hx^k) − ρ − log x^k + C ]^q = K^q + qρK^{q−1} H^T u^k.  (A.5)

Using (A.5) in (15), we get the following:

x^{k+1} = AK^q x^k + qρAK^{q−1} x^k H^T u^k.  (A.6)

In order to validate this approximation, we performed the simulation for Experiment 1. The results are shown in Figure 9. From Figures 9(a)-9(b) and Figures 1(d), 1(f), we see that the simplified form (A.6) gives almost the same restored image and residual as AAMAPE. Moreover, from Figure 9(c), we see that the RMSE variation of the simplified form (A.6) agrees with the RMSE of AAMAPE.

B. DERIVATION OF (31)

In order to make the mathematical steps understandable, we rewrite (29) as follows:

X^{k+1}(f) = αX^k(f) + βY^k(f),  (B.1)

where

Y^k(f) = Σ_v X^k(f − v) H^∗(v) U^k(v).  (B.2)

For estimating the noise amplification during the iterations, we use covariance analysis. Using the covariance, we find the rule for the evolution of the spatial frequencies from one iteration to the next. Using (30) and (B.1), the covariance of X^{k+1} for two different spatial frequencies is given as

μ_X^{k+1}(f, f′) = α² μ_X^k(f, f′) + β² μ_Y^k(f, f′) + αβ μ_{XY∗}^k(f, f′) + αβ μ_{X∗Y}^k(f, f′),  (B.3)



Figure 9: Result for Experiment 1 using (27): (a) restored image by the simplified form (27) of AAMAPE (15) corresponding to maximum SNR (203 iterations), (b) residual corresponding to (a), and (c) RMSE of the MAPE (solid line with x), AAMAPE (solid line), and the simplified form of AAMAPE given by (27) (dotted line).

where

μ_{XY∗}^k(f, f′) = E{ [X^k(f) − E(X^k(f))] [Y^k(f′) − E(Y^k(f′))]^∗ }.  (B.4)

Using (30) and (B.2), we get

μ_Y^k(f, f′) = Σ_v Σ_{v′} H^∗(v) H(v′) U^k(v) U^{k∗}(v′) E{X^k(f − v) X^{k∗}(f′ − v′)}
      − Σ_v Σ_{v′} H^∗(v) H(v′) U^k(v) U^{k∗}(v′) E{X^k(f − v)} E{X^{k∗}(f′ − v′)},

μ_Y^k(f, f′) = Σ_v Σ_{v′} H^∗(v) H(v′) U^k(v) U^{k∗}(v′) μ_X^k(f − v, f′ − v′).  (B.5)

Using (B.2) in (B.4), we get μ_{XY∗}^k as

μ_{XY∗}^k(f, f′) = Σ_v H(v) U^{k∗}(v) E{X^k(f) X^{k∗}(f′ − v)} − Σ_v H(v) U^{k∗}(v) E{X^k(f)} E{X^{k∗}(f′ − v)}.  (B.6)

Similarly,

μ_{X∗Y}^k(f, f′) = Σ_v H^∗(v) U^k(v) E{X^{k∗}(f) X^k(f′ − v)} − Σ_v H^∗(v) U^k(v) E{X^{k∗}(f)} E{X^k(f′ − v)}.  (B.7)

It is evident from (B.6) and (B.7) that μ_{XY∗} = [μ_{X∗Y}]^∗. We use the assumption that one spatial frequency is independent of the other, that is, the correlation term at two different spatial frequencies is zero. Using (B.7), (B.6), and (B.5) in (B.3), we have

μ_X^{k+1}(f, f) = α² μ_X^k(f, f) + β² Σ_v |H(v)|² |U^k(v)|² μ_X^k(f − v, f − v) + 2αβ Σ_v Re{H^∗(v)U^k(v)} E{|X^k(f)|²} − 2αβ Σ_v Re{H^∗(v)U^k(v)} |E{X^k(f)}|²,  (B.8)

where Re denotes the real part of a complex quantity.

ABBREVIATIONS

MAPE: maximum a posteriori with entropy prior;
AAMAPE: adaptively accelerated MAPE;
RMSE: root mean square error;
SNR: signal-to-noise ratio;
PSF: point spread function.

ACKNOWLEDGMENTS

This work was supported by the Dual Use Center through the contract at Gwangju Institute of Science and Technology and by the BK21 program in the Republic of Korea.

REFERENCES

[1] P. A. Jansson, Deconvolution of Images and Spectra, Academic Press, New York, NY, USA, 1997.

[2] A. K. Katsaggelos, Digital Image Restoration, Springer, New York, NY, USA, 1989.


[4] A. S. Carasso, "Linear and nonlinear image deblurring: a documented study," SIAM Journal on Numerical Analysis, vol. 36, no. 6, pp. 1659–1689, 1999.

[5] A. K. Katsaggelos, J. Biemond, R. W. Schafer, and R. M. Mersereau, "A regularized iterative image restoration algorithm," IEEE Transactions on Signal Processing, vol. 39, no. 4, pp. 914–929, 1991.

[6] F.-C. Jeng and J. W. Woods, "Compound Gauss-Markov random fields for image estimation," IEEE Transactions on Signal Processing, vol. 39, no. 3, pp. 683–697, 1991.

[7] R. Molina, A. K. Katsaggelos, J. Mateos, A. Hermoso, and C. A. Segall, "Restoration of severely blurred high range images using stochastic and deterministic relaxation algorithms in compound Gauss-Markov random fields," Pattern Recognition, vol. 33, no. 4, pp. 555–571, 2000.

[8] J. Zerubia, A. Jalobeanu, and Z. Kato, "Markov random fields in image processing application to remote sensing and astrophysics," Journal de Physique IV, vol. 12, no. 1, pp. 117–136, 2002.

[9] M. Nikolova, "Thresholding implied by truncated quadratic regularization," IEEE Transactions on Signal Processing, vol. 48, no. 12, pp. 3437–3450, 2000.

[10] E. S. Meinel, "Maximum-entropy image restoration: Lagrange and recursive techniques," Journal of the Optical Society of America A, vol. 5, no. 1, pp. 25–29, 1988.

[11] R. Narayan and R. Nityananda, "Maximum entropy image restoration in astronomy," Annual Review of Astronomy and Astrophysics, vol. 24, pp. 127–170, 1986.

[12] J. L. Starck, E. Pantin, and F. Murtagh, "Deconvolution in astronomy: a review," Publications of the Astronomical Society of the Pacific, vol. 114, no. 800, pp. 1051–1069, 2002.

[13] J. M. Bioucas-Dias, "Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors," IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 937–951, 2006.

[14] R. C. Puetter, T. R. Gosnell, and A. Yahil, "Digital image reconstruction: deblurring and denoising," Annual Review of Astronomy and Astrophysics, vol. 43, pp. 139–194, 2005.

[15] M. Nikolova, "Local strong homogeneity of a regularized estimator," SIAM Journal on Applied Mathematics, vol. 61, no. 2, pp. 633–658, 2000.

[16] D. S. C. Biggs and M. Andrews, "Acceleration of iterative image restoration algorithms," Applied Optics, vol. 36, no. 8, pp. 1766–1775, 1997.

[17] L. Kaufman, "Implementing and accelerating the EM algorithm for positron emission tomography," IEEE Transactions on Medical Imaging, vol. 6, no. 1, pp. 37–51, 1987.

[18] H. M. Adorf, R. N. Hook, L. B. Lucy, and F. D. Murtagh, "Accelerating the Richardson-Lucy restoration algorithm," in Proceedings of the 4th ESO/ST-ECF Data Analysis Workshop, pp. 99–103, Garching, Germany, May 1992.

[19] T. J. Holmes and Y.-H. Liu, "Acceleration of maximum-likelihood image restoration for fluorescence microscopy and other noncoherent imagery," Journal of the Optical Society of America A, vol. 8, no. 6, pp. 893–907, 1991.

[20] D. S. C. Biggs and M. Andrews, "Conjugate gradient acceleration of maximum-likelihood image restoration," Electronics Letters, vol. 31, no. 23, pp. 1985–1986, 1995.

[21] R. G. Lane, "Methods for maximum-likelihood deconvolution," Journal of the Optical Society of America A, vol. 13, no. 6, pp. 1992–1998, 1996.

[22] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, Numerical Recipes in C, Cambridge University Press, Cambridge, UK, 2nd edition, 1992.

[23] J.-A. Conchello, "Superresolution and convergence properties of the expectation-maximization algorithm for maximum-likelihood deconvolution of incoherent images," Journal of the Optical Society of America A, vol. 15, no. 10, pp. 2609–2619, 1998.

[24] E. T. Jaynes, "On the rationale of maximum-entropy methods," Proceedings of the IEEE, vol. 70, no. 9, pp. 939–952, 1982.

[25] B. R. Frieden, "Restoring with maximum likelihood and maximum entropy," Journal of the Optical Society of America, vol. 62, no. 4, pp. 511–518, 1972.

[26] J. Skilling and S. F. Gull, "The entropy of an image," Proceedings of the American Mathematical Society, vol. 14, pp. 167–189, 1984.

[27] J.-L. Starck, F. Murtagh, P. Querre, and F. Bonnarel, "Entropy and astronomical data analysis: perspectives from multiresolution analysis," Astronomy & Astrophysics, vol. 368, no. 2, pp. 730–746, 2001.

[28] H. J. Trussell, "The relationship between image reconstruction by the maximum a posteriori method and maximum entropy method," IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 28, no. 1, pp. 114–117, 1980.

[29] N. Vincent, A. Seropian, and G. Stamon, "Synthesis for handwriting analysis," Pattern Recognition Letters, vol. 26, no. 3, pp. 267–275, 2005.

[30] K. Goebel and W. A. Kirk, Topics in Metric Fixed Point Theory, Cambridge University Press, Cambridge, UK, 1990.

[31] J. G. Proakis and D. G. Manolakis, Digital Signal Processing, Pearson Education, Delhi, India, 3rd edition, 2004.

[32] J. M. Bioucas-Dias, "Bayesian wavelet-based image deconvolution: a GEM algorithm exploiting a class of heavy-tailed priors," IEEE Transactions on Image Processing, vol. 15, no. 4, pp. 937–951, 2006.

[33] R. Neelamani, H. Choi, and R. Baraniuk, "ForWaRD: Fourier-wavelet regularized deconvolution for ill-conditioned systems," IEEE Transactions on Signal Processing, vol. 52, no. 2, pp. 418–433, 2004.

[34] V. Katkovnik, K. Egiazarian, and J. Astola, "A spatially adaptive nonparametric regression image deblurring," IEEE Transactions on Image Processing, vol. 14, no. 10, pp. 1469–1478, 2005.

[35] E. Veklerov and J. Llacer, "Stopping rule for the MLE algorithm based on statistical hypothesis testing," IEEE Transactions on Medical Imaging, vol. 6, no. 4, pp. 313–319, 1987.

[36] S. Ghael, A. M. Sayeed, and R. G. Baraniuk, "Improved
