
RESEARCH    Open Access

Strong convergence of a relaxed CQ algorithm for the split feasibility problem

Songnian He 1,2,* and Ziyi Zhao 1

*Correspondence: hesongnian2003@yahoo.com.cn
1 College of Science, Civil Aviation University of China, Tianjin, 300300, China
2 Tianjin Key Laboratory for Advanced Signal Processing, Civil Aviation University of China, Tianjin, 300300, China

Abstract

The split feasibility problem (SFP) is the problem of finding a point in a given closed convex subset of a Hilbert space such that its image under a bounded linear operator belongs to a given closed convex subset of another Hilbert space. The most popular iterative method for the SFP is Byrne's CQ algorithm. López et al. proposed a relaxed CQ algorithm for solving the SFP in which the two closed convex sets are both level sets of convex functions. Their algorithm is easy to implement, since it computes projections onto half-spaces and does not require a priori knowledge of the norm of the bounded linear operator. However, it converges only weakly in the setting of infinite-dimensional Hilbert spaces. In this paper, we introduce a new relaxed CQ algorithm for which strong convergence is guaranteed. Our result extends and improves the corresponding results of López et al. and some others.

MSC: 90C25; 90C30; 47J25

Keywords: split feasibility problem; relaxed CQ algorithm; Hilbert space; strong convergence; bounded linear operator

1 Introduction

The split feasibility problem (SFP) was proposed by Censor and Elfving in [1]. It can be formulated mathematically as the problem of finding a point x with the property

x ∈ C,  Ax ∈ Q,  (1.1)

where A is a given M × N real matrix (A∗ denotes the transpose of A), and C and Q are nonempty, closed and convex subsets of R^N and R^M, respectively. This problem has received much attention [] due to its applications in signal processing and image reconstruction, with particular progress in intensity-modulated radiation therapy [–], and many other applied fields.

We assume the SFP (.) is consistent, and letSbe the solution set,i.e.,

S={xC:AxQ}.

It is easily seen that S is closed and convex. Many iterative methods can be used to solve the SFP (1.1); see [–]. Byrne [6] was among the first to propose the so-called CQ algorithm, which is defined as follows:

xn+1 = PC(xn − τnA∗(I − PQ)Axn),  (1.2)

where τn ∈ (0, 2/‖A‖²), and PC and PQ are the orthogonal projections onto the sets C and Q, respectively. Compared with Censor and Elfving's algorithm [1], Byrne's algorithm is easily executed since it involves only orthogonal projections and does not require the computation of matrix inverses.
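To make the iteration concrete, the following is a minimal finite-dimensional sketch of the CQ iteration (1.2). The helper names project_C and project_Q, the stopping rule, and the example data are illustrative assumptions, not part of the original algorithm description.

```python
import numpy as np

def cq_algorithm(A, project_C, project_Q, x0, tau, n_iter=1000, tol=1e-8):
    """Byrne's CQ iteration (1.2): x_{n+1} = P_C(x_n - tau * A^T (I - P_Q) A x_n).

    project_C, project_Q : callables returning the orthogonal projections onto C and Q
    tau                  : constant stepsize, assumed to lie in (0, 2/||A||^2)
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        residual = A @ x - project_Q(A @ x)      # (I - P_Q)Ax
        grad = A.T @ residual                    # gradient of f in (1.3)
        x_new = project_C(x - tau * grad)
        if np.linalg.norm(x_new - x) <= tol:     # simple illustrative stopping rule
            return x_new
        x = x_new
    return x

# Example usage with C and Q taken to be boxes (so projections are clipping):
# A = np.array([[1.0, 2.0], [0.5, -1.0]])
# tau = 1.0 / np.linalg.norm(A, 2) ** 2          # safely inside (0, 2/||A||^2)
# x = cq_algorithm(A, lambda v: np.clip(v, 0, 1), lambda v: np.clip(v, -1, 1),
#                  x0=np.zeros(2), tau=tau)
```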

The CQ algorithm (.) can be obtained from optimization. In fact, if we introduce the (convex) objective function

f(x) = 

(I–PQ)Ax

, (.)

and analyze the minimization problem

min

xCf(x), (.)

then the CQ algorithm (.) comes immediately as a special case of the gradient projection algorithm (GPA)(For more details about the GPA, the reader is referred to []). Since the convex objectivef(x) is differentiable, and has a Lipschitz gradient, which is given by

f(x) =A∗(I–PQ)Ax, (.)

the GPA for solving the minimization problem (.) generates a sequence (xn) recursively

as

xn+=PC

xnτnf(xn)

, (.)

where the stepsize τnis chosen in the interval (,L), whereLis the Lipschitz constant of∇f.

Observe that, in order to implement the algorithms (1.2) and (1.6) above, one has to compute the operator (matrix) norm ‖A‖, which is in general not an easy task in practice. To overcome this difficulty, some authors have proposed different adaptive ways of selecting the stepsize τn (see [, , ]). For instance, very recently, López et al. [19] introduced the following choice of stepsize:

τn := ρn f(xn)/‖∇f(xn)‖²,  0 < ρn < 4.  (1.7)
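The point of (1.7) is that only f(xn) and ∇f(xn) are evaluated, so no knowledge of ‖A‖ is needed. A small sketch of the computation follows; the guard for a vanishing gradient and the default ρ = 2 are added assumptions.

```python
import numpy as np

def adaptive_stepsize(A, x, project_Q, rho=2.0):
    """Stepsize (1.7): tau_n = rho_n * f(x_n) / ||grad f(x_n)||^2, with 0 < rho_n < 4."""
    residual = A @ x - project_Q(A @ x)          # (I - P_Q)Ax
    f_val = 0.5 * np.dot(residual, residual)     # f(x) = (1/2)||(I - P_Q)Ax||^2
    grad = A.T @ residual                        # grad f(x) = A^T (I - P_Q)Ax
    grad_sq = np.dot(grad, grad)
    if grad_sq == 0.0:                           # gradient vanishes: no step is taken
        return 0.0
    return rho * f_val / grad_sq
```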

The computation of a projection onto a general closed convex subset is usually difficult. To overcome this difficulty, Fukushima [20] suggested a so-called relaxed projection method, which calculates the projection onto a level set of a convex function by computing a sequence of projections onto half-spaces containing the original level set. In the setting of finite-dimensional Hilbert spaces, this idea was followed by Yang [13], who introduced relaxed CQ algorithms for solving the SFP (1.1) in which the closed convex subsets C and Q are level sets of convex functions.

Recently, for the purpose of generality, the SFP (1.1) has been studied in a more general setting. For instance, Xu [12] and López et al. [] considered the SFP (1.1) in infinite-dimensional Hilbert spaces (i.e., the finite-dimensional Euclidean spaces R^N and R^M are replaced with general Hilbert spaces). Very recently, López et al. [19] proposed a relaxed CQ algorithm with a new adaptive way of determining the stepsize sequence (τn) for solving the SFP (1.1) in which the two closed convex sets are level sets of convex functions. This algorithm can be implemented easily since it computes projections onto half-spaces and has no need to know a priori the norm of the bounded linear operator. However, their algorithm has only weak convergence in the setting of infinite-dimensional Hilbert spaces. In this paper, we introduce a new relaxed CQ algorithm such that strong convergence is guaranteed in infinite-dimensional Hilbert spaces. Our result extends and improves the corresponding results of López et al. and some others.

The rest of this paper is organized as follows. Some useful lemmas are listed in Section 2. In Section 3, the strong convergence of the new relaxed CQ algorithm of this paper is proved.

2 Preliminaries

Throughout the rest of this paper, H and K denote Hilbert spaces, A is a bounded linear operator from H to K, and I is the identity operator on H or K. If f : H → R is a differentiable function, then ∇f denotes the gradient of f. We will also use the following notations:
• → denotes strong convergence.
• ⇀ denotes weak convergence.
• ωw(xn) = {x | ∃{xnk} ⊂ {xn} such that xnk ⇀ x} denotes the weak ω-limit set of {xn}.

Recall that a mapping T : H → H is said to be nonexpansive if

‖Tx − Ty‖ ≤ ‖x − y‖,  ∀x, y ∈ H.

T : H → H is said to be firmly nonexpansive if, for all x, y ∈ H,

‖Tx − Ty‖² ≤ ‖x − y‖² − ‖(I − T)x − (I − T)y‖².

Recall that a function f : H → R is called convex if

f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y),  ∀λ ∈ (0, 1), ∀x, y ∈ H.

A differentiable function f is convex if and only if there holds the inequality

f(z) ≥ f(x) + ⟨∇f(x), z − x⟩,  ∀z ∈ H.

Recall that an element g ∈ H is said to be a subgradient of f : H → R at x if

f(z) ≥ f(x) + ⟨g, z − x⟩,  ∀z ∈ H.

This relation is called the subdifferential inequality.

A function f : H → R is said to be subdifferentiable at x if it has at least one subgradient at x. The set of subgradients of f at the point x is called the subdifferential of f at x, and it is denoted by ∂f(x). A function f is called subdifferentiable if it is subdifferentiable at all x ∈ H. If a function f is differentiable and convex, then its gradient and subgradient coincide.
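As a simple one-dimensional illustration (added here as an example, not taken from the original text), consider f(x) = |x| on H = R:

```latex
% The subdifferential inequality f(z) >= f(x) + g(z - x), for all z, yields
\[
  \partial f(x) \;=\;
  \begin{cases}
    \{\operatorname{sign}(x)\}, & x \neq 0,\\[2pt]
    [-1,\,1],                   & x = 0,
  \end{cases}
  \qquad f(x) = |x| ,\; H = \mathbb{R},
\]
% so f is subdifferentiable everywhere, and at points of differentiability the
% subgradient reduces to the ordinary derivative, as stated above.
```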

A function f : H → R is said to be weakly lower semi-continuous (w-lsc) at x if xn ⇀ x implies

f(x) ≤ lim infn→∞ f(xn).

We know that the orthogonal projection PC from H onto a nonempty closed convex subset C ⊆ H is a typical example of a firmly nonexpansive mapping. It is defined by

PCx := arg min_{y∈C} ‖x − y‖²,  x ∈ H.  (2.1)

It is well known that PC is characterized by the inequality (for x ∈ H)

⟨x − PCx, y − PCx⟩ ≤ 0,  ∀y ∈ C.  (2.2)

The following lemma is not hard to prove (see [, ]).

Lemma 2.1 Let f be given as in (1.3). Then
(i) f is convex and differentiable;
(ii) ∇f(x) = A∗(I − PQ)Ax, x ∈ H;
(iii) f is w-lsc on H;
(iv) ∇f is ‖A‖²-Lipschitz: ‖∇f(x) − ∇f(y)‖ ≤ ‖A‖²‖x − y‖, ∀x, y ∈ H.
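The following short calculation, included only as a reading aid, indicates why item (iv) holds; it uses the fact that I − PQ is nonexpansive (PQ is firmly nonexpansive, hence so is I − PQ by Lemma 2.2 below).

```latex
% Sketch of Lemma 2.1(iv): for all x, y in H,
\[
  \|\nabla f(x) - \nabla f(y)\|
    = \bigl\| A^{*}\bigl[ (I - P_Q)Ax - (I - P_Q)Ay \bigr] \bigr\|
    \le \|A\| \, \| (I - P_Q)Ax - (I - P_Q)Ay \|
    \le \|A\| \, \| Ax - Ay \|
    \le \|A\|^{2} \, \| x - y \| .
\]
```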

The following lemma gives characterizations of firmly nonexpansive mappings (see []).

Lemma 2.2 Let T : H → H be an operator. The following statements are equivalent:
(i) T is firmly nonexpansive;
(ii) ‖Tx − Ty‖² ≤ ⟨x − y, Tx − Ty⟩, ∀x, y ∈ H;
(iii) I − T is firmly nonexpansive.

Lemma .[] Assume(αn)is a sequence of nonnegative real numbers such that

αn+≤( –γn)αn+γnσn, n= , , , . . . ,

where(γn)is a sequence in(, ),and(σn)is a sequence inRsuch that

(i) ∞n=γn=∞.

(ii) lim supn→∞σn≤,or

n=|γnσn|<∞.

Thenlimn→∞αn= .

3 Iterative algorithm

In this section, we consider a new relaxed CQ algorithm, in the setting of infinite-dimensional Hilbert spaces, for solving the SFP (1.1) in which the closed convex subsets C and Q have the particular structure of level sets of convex functions, i.e.,

C = {x ∈ H : c(x) ≤ 0} and Q = {y ∈ K : q(y) ≤ 0},  (3.1)

where c : H → R and q : K → R are convex, subdifferentiable and w-lsc functions whose subdifferentials ∂c and ∂q are bounded on bounded sets. Set

Cn = {x ∈ H : c(xn) ≤ ⟨ξn, xn − x⟩},  (3.2)

where ξn ∈ ∂c(xn), and

Qn = {y ∈ K : q(Axn) ≤ ⟨ζn, Axn − y⟩},  (3.3)

where ζn ∈ ∂q(Axn).

Obviously, Cn and Qn are half-spaces, and it is easy to verify from the subdifferential inequality that C ⊆ Cn and Q ⊆ Qn hold for every n ≥ 0 (a short verification is sketched below).

In what follows, we define

fn(x) = ½‖(I − PQn)Ax‖²,  n ≥ 0,

where Qn is given as in (3.3). We then have

∇fn(x) = A∗(I − PQn)Ax.

Firstly, we recall the relaxed CQ algorithm of López et al. [19] for solving the SFP (1.1), where C and Q are given in (3.1).

Algorithm 3.1 Choose an arbitrary initial guess x0. Assume that xn has been constructed. If ∇fn(xn) = 0, then stop; otherwise, continue and construct xn+1 via the formula

xn+1 = PCn(xn − τn∇fn(xn)),

where Cn is given in (3.2), and

τn = ρn fn(xn)/‖∇fn(xn)‖²,  0 < ρn < 4.  (3.4)

López et al. proved that, under certain conditions, the sequence (xn) generated by Algorithm 3.1 converges weakly to a solution of the SFP (1.1). Since the projections onto the half-spaces Cn and Qn have closed forms (a sketch is given below) and τn is obtained adaptively via the formula (3.4), with no need to know a priori the norm of the operator A, the relaxed CQ Algorithm 3.1 is easy to implement. Its weak convergence, however, is a drawback. To overcome this drawback, and inspired by Algorithm 3.1, we introduce a new relaxed CQ algorithm for solving the SFP (1.1), where C and Q are given in (3.1), for which strong convergence is guaranteed.
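The closed form referred to above is the standard projection onto a half-space. The following minimal sketch (with illustrative names) records it, together with the resulting projection onto Cn from (3.2).

```python
import numpy as np

def project_halfspace(v, a, b):
    """Projection of v onto the half-space {x : <a, x> <= b} (closed form).

    If a = 0 the half-space is the whole space (assuming b >= 0), so v is returned.
    """
    a = np.asarray(a, dtype=float)
    v = np.asarray(v, dtype=float)
    a_sq = np.dot(a, a)
    if a_sq == 0.0:
        return v
    violation = np.dot(a, v) - b
    if violation <= 0.0:                     # v already lies in the half-space
        return v
    return v - (violation / a_sq) * a

def project_Cn(v, x_n, c_xn, xi_n):
    """Projection onto C_n = {x : c(x_n) <= <xi_n, x_n - x>} from (3.2),
    rewritten as {x : <xi_n, x> <= <xi_n, x_n> - c(x_n)}."""
    return project_halfspace(v, xi_n, np.dot(xi_n, x_n) - c_xn)
```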

It is well known that Halpern's algorithm converges strongly to a fixed point of a nonexpansive mapping [27]. We are now in a position to give our algorithm. The algorithm given below is referred to as a Halpern-type algorithm [].

Algorithm . LetuH, and start an initial guessx∈H arbitrarily. Assume that the

nth iteratexnhas been constructed. If∇fn(xn) = , then stop (xnis a approximate solution

of SFP (.)). Otherwise, continue and calculate the (n+ )th iteratexn+viathe formula:

xn+=PCn

αnu+ ( –αn)

xnτnfn(xn)

, (.)


The convergence result for Algorithm 3.2 is stated in the next theorem.

Theorem 3.3 Assume that (αn) and (ρn) satisfy the following conditions:
(a1) limn→∞ αn = 0 and Σ_{n=0}^∞ αn = ∞;
(a2) infn ρn(4 − ρn) > 0.
Then the sequence (xn) generated by Algorithm 3.2 converges in norm to PSu.

Proof We may assume that the sequence (xn) is infinite, that is, Algorithm 3.2 does not terminate after finitely many iterations; thus ∇fn(xn) ≠ 0 for all n ≥ 0. Recall that S is the solution set of the SFP (1.1),

S = {x ∈ C : Ax ∈ Q}.

In the consistent case of the SFP (1.1), S is nonempty, closed and convex, so the metric projection PS is well defined. Set z = PSu. Since z ∈ S ⊆ Cn and the projection operator PCn is nonexpansive, we obtain

‖xn+1 − z‖² = ‖PCn(αnu + (1 − αn)(xn − τn∇fn(xn))) − z‖²
            ≤ ‖αn(u − z) + (1 − αn)(xn − τn∇fn(xn) − z)‖²
            ≤ (1 − αn)‖xn − τn∇fn(xn) − z‖² + 2αn⟨u − z, xn+1 − z⟩.

Noting that I − PQn is firmly nonexpansive and that ∇fn(z) = 0, it is deduced from Lemma 2.2 that

⟨∇fn(xn), xn − z⟩ = ⟨(I − PQn)Axn, Axn − Az⟩ ≥ ‖(I − PQn)Axn‖² = 2fn(xn),

which implies that

‖xn − τn∇fn(xn) − z‖² = ‖xn − z‖² + τn²‖∇fn(xn)‖² − 2τn⟨∇fn(xn), xn − z⟩
                       ≤ ‖xn − z‖² + τn²‖∇fn(xn)‖² − 4τn fn(xn)
                       = ‖xn − z‖² − ρn(4 − ρn) fn(xn)²/‖∇fn(xn)‖².

Thus, we have

‖xn+1 − z‖² ≤ (1 − αn)‖xn − τn∇fn(xn) − z‖² + 2αn⟨u − z, xn+1 − z⟩
            ≤ (1 − αn)‖xn − z‖² + 2αn⟨u − z, xn+1 − z⟩
              − (1 − αn)ρn(4 − ρn) fn(xn)²/‖∇fn(xn)‖².  (3.6)

Now we prove that (xn) is bounded. Indeed, we have from (3.6) that

‖xn+1 − z‖² ≤ (1 − αn)‖xn − z‖² + 2αn⟨u − z, xn+1 − z⟩
            ≤ (1 − αn)‖xn − z‖² + ½αn‖xn+1 − z‖² + 2αn‖u − z‖²,

and consequently

‖xn+1 − z‖² ≤ [(1 − αn)/(1 − ½αn)]‖xn − z‖² + [(½αn)/(1 − ½αn)]·4‖u − z‖².

It turns out that

‖xn+1 − z‖ ≤ max{‖xn − z‖, 2‖u − z‖},

and inductively

‖xn − z‖ ≤ max{‖x0 − z‖, 2‖u − z‖},

which means that (xn) is bounded. Since αn → 0, with no loss of generality we may assume that there is σ > 0 such that ρn(4 − ρn)(1 − αn) ≥ σ for all n. Setting sn = ‖xn − z‖², we get from the last inequality of (3.6) the following inequality:

sn+1 − sn + αnsn + σ fn(xn)²/‖∇fn(xn)‖² ≤ 2αn⟨u − z, xn+1 − z⟩.  (3.7)

Now, following an idea in [], we provesn→ by distinguishing two cases.

Case : (sn) is eventually decreasing (i.e.there existsk≥ such thatsn>sn+holds for

allnk). In this case, (sn) must be convergent, and from (.) it follows that σf

n(xn)

fn(xn) ≤

Mαn+ (snsn+), (.)

whereM>  is a constant such that xn+–zuzMfor alln∈N. Using the

condi-tion (a), we have from (.) that fn(xn)

fn(xn) →. Thus, to verify thatfn(xn)→, it suffices

to show that (∇f(xn)) is bounded. In fact, it follows from Lemma . that (noting that

fn(z) =  due tozS)

fn(xn)=∇fn(xn) –∇fn(z)≤ Axnz.

This implies that ∇fn(xn) is bounded and it yields fn(xn) → , namely (I –

PQn)Axn →.

Since ∂q is bounded on bounded sets, there exists a constant η > 0 such that ‖ζn‖ ≤ η for all n ≥ 0. From (3.3) and the trivial fact that PQn(Axn) ∈ Qn, it follows that

q(Axn) ≤ ⟨ζn, Axn − PQn(Axn)⟩ ≤ η‖(I − PQn)Axn‖ → 0.  (3.9)

If x∗ ∈ ωw(xn) and (xnk) is a subsequence of (xn) such that xnk ⇀ x∗, then the w-lsc of q and (3.9) imply that

q(Ax∗) ≤ lim infk→∞ q(Axnk) ≤ 0.

It turns out that Ax∗ ∈ Q. Next, we turn to prove that x∗ ∈ C. For convenience, we set yn := αnu + (1 − αn)(xn − τn∇fn(xn)). Since PCn is firmly nonexpansive, we conclude that

‖PCnxn − PCnz‖² ≤ ‖xn − z‖² − ‖(I − PCn)xn‖².  (3.10)

On the other hand, we have

‖xn+1 − z‖² = ‖PCnyn − PCnz‖²
            = ‖PCnyn − PCnxn + PCnxn − PCnz‖²
            ≤ ‖PCnxn − PCnz‖² + 2⟨PCnyn − PCnxn, xn+1 − z⟩,  (3.11)

and

‖PCnyn − PCnxn‖ ≤ ‖yn − xn‖
                = ‖αnu + (1 − αn)(xn − τn∇fn(xn)) − xn‖
                ≤ αn‖u − xn‖ + τn‖∇fn(xn)‖.  (3.12)

Noting that (xn) is bounded, we have from (3.10)-(3.12) that

‖xn+1 − z‖² ≤ ‖xn − z‖² − ‖(I − PCn)xn‖² + 2(αn‖u − xn‖ + τn‖∇fn(xn)‖)‖xn+1 − z‖
            ≤ ‖xn − z‖² − ‖(I − PCn)xn‖² + (αn + fn(xn)/‖∇fn(xn)‖)M1,  (3.13)

where M1 is some positive constant. Clearly, it follows from (3.13) that

‖(I − PCn)xn‖² ≤ sn − sn+1 + (αn + fn(xn)/‖∇fn(xn)‖)M1.  (3.14)

Thus, we assert that ‖(I − PCn)xn‖ → 0, due to the fact that sn − sn+1 + (αn + fn(xn)/‖∇fn(xn)‖)M1 → 0.

Moreover, by the definition of Cn, we obtain

c(xn) ≤ ⟨ξn, xn − PCn(xn)⟩ ≤ δ‖xn − PCnxn‖ → 0  (n → ∞),

where δ is a constant such that ‖ξn‖ ≤ δ for all n ≥ 0. The w-lsc of c then implies that

c(x∗) ≤ lim infk→∞ c(xnk) ≤ 0.

Consequently, x∗ ∈ C, and hence ωw(xn) ⊂ S. Furthermore, due to (2.2), we get

lim supn→∞ ⟨u − z, xn − z⟩ = max{⟨u − PSu, w − PSu⟩ : w ∈ ωw(xn)} ≤ 0.  (3.15)

On the other hand, from (3.6) we have

sn+1 ≤ (1 − αn)sn + 2αn⟨u − z, xn+1 − z⟩.  (3.16)

Since ‖xn+1 − xn‖ ≤ ‖(I − PCn)xn‖ + ‖yn − xn‖ → 0, (3.15) also gives lim supn→∞ ⟨u − z, xn+1 − z⟩ ≤ 0; applying Lemma 2.3 to (3.16) (with γn = αn and σn = 2⟨u − z, xn+1 − z⟩) therefore yields sn → 0 in this case.

Case : (sn) is not eventually decreasing, that is, we can find an integernsuch that

sn≤sn+. Now we define

Vn:={n≤kn:sksk+}, n>n.

It is easy to see thatVnis nonempty and satisfiesVnVn+. Let ψ(n) :=maxVn, n>n.

It is clear thatψ(n)→ ∞asn→ ∞(otherwise, (sn) is eventually decreasing). It is also

clear thatsψ(n)≤sψ(n)+for alln>n. Moreover,

snsψ(n)+, n>n. (.)

In fact, ifψ(n) =n, then the inequity (.) is trivial; ifψ(n) <n, from the definition of

ψ(n), there exists somei∈Nsuch thatψ(n) +i=n, we deduce that (n)+>(n)+>· · ·>sψ(n)+i=sn,

and the inequity (.) holds again. Sincesψ(n)≤sψ(n)+for alln>n, it follows from (.)

that

σf

ψ(n)(xψ(n))

fψ(n)(xψ(n)) ≤

Mαψ(n)→,

so thatfψ(n)(xψ(n))→ asn→ ∞(noting that∇(n)(xψ(n))is bounded). By the same

argument to the proof in case , we haveωw(xψ(n))⊂S. On the other hand, notingsψ(n)≤

sψ(n)+again, we have from (.) and (.) that

xψ(n)–(n)+ ≤ xψ(n)–PCψ(n)xψ(n)+PCψ(n)xψ(n)–PCψ(n)(n)

αψ(n)+

fψ(n)(xψ(n))

fψ(n)(xψ(n))

×

αψ(n)+

fψ(n)(xψ(n))

fψ(n)(xψ(n))

+ 

M,

whereMis a positive constant. Lettingn→ ∞yields that

xψ(n)–(n)+ →, (.)

from which one can deduce that

lim sup

n→∞ uz,xψ(n)+–z=lim supn→∞ uz,xψ(n)–z

= max

wωw((n))

uPSu,wPSu ≤. (.)

Sincesψ(n)≤(n)+, it follows from (.) that

(10)

Combining (.) and (.) yields

lim sup

n→∞ (n)≤, (.)

and hence(n)→, which together with (.) implies that

sψ(n)+≤(xψ(n)–z) + (xψ(n)+–xψ(n))

≤ √sψ(n)+xψ(n)+–xψ(n) →, (.)

which, together with (.), in turn implies thatsn→, that is,xnz.

Remark . Sinceucan be chosen inHarbitrarily, one can compute the minimum-norm solution of SFP (.) whereCandQare given in (.) by takingu=  in Algorithm . whether ∈Cor  /∈C.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All the authors read and approved the final manuscript.

Acknowledgements

The authors wish to thank the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (ZXH2012K001) and in part by the Foundation of Tianjin Key Lab for Advanced Signal Processing.

Received: 15 November 2012 Accepted: 9 April 2013 Published: 22 April 2013

References

1. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8(2-4), 221-239 (1994)

2. López, G, Martín-Márquez, V, Xu, HK: Iterative algorithms for the multiple-sets split feasibility problem. In: Censor, Y, Jiang, M, Wang, G (eds.) Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, pp. 243-279. Medical Physics Publishing, Madison, WI (2010)

3. Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)

4. Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071-2084 (2005)

5. López, G, Martín-Márquez, V, Xu, HK: Perturbation techniques for nonexpansive mappings with applications. Nonlinear Anal., Real World Appl.10, 2369-2383 (2009)

6. Byrne, C: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441-453 (2002)

7. Dang, Y, Gao, Y: The strong convergence of a KM-CQ-like algorithm for a split feasibility problem. Inverse Probl.27, 015007 (2011)

8. Qu, B, Xiu, N: A note on the CQ algorithm for the split feasibility problem. Inverse Probl. 21, 1655-1665 (2005)
9. Schöpfer, F, Schuster, T, Louis, AK: An iterative regularization method for the solution of the split feasibility problem in Banach spaces. Inverse Probl. 24, 055008 (2008)

10. Wang, F, Xu, HK: Approximating curve and strong convergence of the CQ algorithm for the split feasibility problem. J. Inequal. Appl.2010, 102085 (2010)

11. Xu, HK: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22, 2021-2034 (2006)

12. Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl.26, 105018 (2010)

13. Yang, Q: The relaxed CQ algorithm for solving the split feasibility problem. Inverse Probl. 20, 1261-1266 (2004)
14. Yang, Q: On variable-set relaxed projection algorithm for variational inequalities. J. Math. Anal. Appl. 302, 166-179 (2005)

15. Zhao, J, Yang, Q: Generalized KM theorems and their applications. Inverse Probl.22(3), 833-844 (2006)

16. Zhao, J, Yang, Q: Self-adaptive projection methods for the multiple-sets split feasibility problem. Inverse Probl.27, 035009 (2011)


18. Figueiredo, MA, Nowak, RD, Wright, SJ: Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems. IEEE J. Sel. Top. Signal Process. 1, 586-598 (2007)

19. López, G, Martín-Márquez, V, Wang, FH, Xu, HK: Solving the split feasibility problem without prior knowledge of matrix norms. Inverse Probl. (2012). doi:10.1088/0266-5611/28/8/085004

20. Fukushima, M: A relaxed projection method for variational inequalities. Math. Program. 35, 58-70 (1986)
21. Aubin, JP: Optima and Equilibria: An Introduction to Nonlinear Analysis. Springer, Berlin (1993)
22. Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge University Press, Cambridge (1990)
23. Xu, HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66, 240-256 (2002)

24. Bauschke, HH, Borwein, JM: On projection algorithms for solving convex feasibility problems. SIAM Rev. 38, 367-426 (1996)

25. Suzuki, T: A sufficient and necessary condition for Halpern-type strong convergence to fixed points of nonexpansive mappings. Proc. Am. Math. Soc.135, 99-106 (2007)

26. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
27. Halpern, B: Fixed points of nonexpanding maps. Bull. Am. Math. Soc. 73, 957-961 (1967)

28. Maingé, PE: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim.47, 1499-1515 (2008)

doi:10.1186/1029-242X-2013-197

Cite this article as: He, S, Zhao, Z: Strong convergence of a relaxed CQ algorithm for the split feasibility problem. J. Inequal. Appl. 2013, 197 (2013)
