RESEARCH   Open Access

General iterative scheme based on the regularization for solving a constrained convex minimization problem

Ming Tian*

*Correspondence: tianming1963@126.com. College of Science, Civil Aviation University of China, Tianjin, 300300, P.R. China

Abstract

It is well known that the regularization method plays an important role in solving a constrained convex minimization problem. In this article, we introduce implicit and explicit iterative schemes based on the regularization for solving a constrained convex minimization problem. We establish results on the strong convergence of the sequences generated by the proposed schemes to a solution of the minimization problem. Such a point is also a solution of a variational inequality. We also apply the algorithm to solve a split feasibility problem.

MSC: 47H09; 47H05; 47H06; 47J25; 47J05

Keywords: averaged mapping; gradient-projection algorithm; constrained convex minimization; regularization; split feasibility problem; variational inequality

1 Introduction

The gradient-projection algorithm is a classical and powerful method for solving constrained convex optimization problems and has been studied by many authors (see [1-14] and the references therein). The method has recently been applied to solve split feasibility problems, which find applications in image reconstruction and intensity-modulated radiation therapy (see [15-22]).

Consider the problem of minimizing $f$ over the constraint set $C$ (assuming that $C$ is a nonempty closed and convex subset of a real Hilbert space $H$). If $f : H \to \mathbb{R}$ is a convex and continuously Fréchet differentiable functional, the gradient-projection algorithm generates a sequence $\{x_n\}_{n=0}^{\infty}$ determined by the gradient of $f$ and the metric projection onto $C$. Under the condition that $f$ has a Lipschitz continuous and strongly monotone gradient, the sequence $\{x_n\}_{n=0}^{\infty}$ converges strongly to a minimizer of $f$ in $C$. If the gradient of $f$ is only assumed to be inverse strongly monotone, then $\{x_n\}_{n=0}^{\infty}$ can only converge weakly if $H$ is infinite-dimensional.

Recently, Xu [23] gave an operator-oriented approach as an alternative to the gradient-projection method and to the relaxed gradient-projection algorithm, namely an averaged mapping approach. He also presented two modifications of gradient-projection algorithms which are shown to have strong convergence.

On the other hand, regularization, in particular the traditional Tikhonov regularization, is usually used to solve ill-posed optimization problems [24, 25]. Under some conditions, we know that the regularization method is weakly convergent.


The purpose of this paper is to present a general iterative method combining the regularization method and the averaged mapping approach. We first propose implicit and explicit iterative schemes for solving a constrained convex minimization problem and prove that the methods converge strongly to a solution of the minimization problem, which is also a solution of a variational inequality. Furthermore, we use the above method to solve a split feasibility problem.

2 Preliminaries

Throughout the paper, we assume that $H$ is a real Hilbert space whose inner product and norm are denoted by $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$, respectively, and that $C$ is a nonempty closed convex subset of $H$. The set of fixed points of a mapping $T$ is denoted by $\mathrm{Fix}(T)$, that is, $\mathrm{Fix}(T) = \{x \in H : Tx = x\}$. We write $x_n \rightharpoonup x$ to indicate that the sequence $\{x_n\}$ converges weakly to $x$. The fact that the sequence $\{x_n\}$ converges strongly to $x$ is denoted by $x_n \to x$. The following definitions and results are needed in the subsequent sections.

Recall that a mapping $T : H \to H$ is said to be $L$-Lipschitzian if
\[ \|Tx - Ty\| \le L\|x - y\|, \quad \forall x, y \in H, \tag{2.1} \]
where $L > 0$ is a constant. In particular, if $L \in [0, 1)$, then $T$ is called a contraction on $H$; if $L = 1$, then $T$ is called a nonexpansive mapping on $H$. $T$ is called firmly nonexpansive if $2T - I$ is nonexpansive, or equivalently, $\langle x - y, Tx - Ty\rangle \ge \|Tx - Ty\|^2$, $\forall x, y \in H$. Alternatively, $T$ is firmly nonexpansive if and only if $T$ can be expressed as $T = \frac{1}{2}(I + W)$, where $W : H \to H$ is nonexpansive.

Definition . A mappingT:HHis said to be an averaged mapping if it can be written

as the average of the identityIand a nonexpansive mapping; that is,

T= ( –α)I+αW, ()

whereαis a number in (, ) andW:HHis nonexpansive. More precisely, when ()

holds, we say thatTisα-averaged. Clearly, a firmly nonexpansive mapping (in particular,

projection) is a -averaged map.

Proposition .[, ] For given operators W,T,V:HH:

(i) IfT= ( –α)W+αVfor someα∈(, )and ifWis averaged andVis

nonexpansive,thenTis averaged.

(ii) Tis firmly nonexpansive if and only if the complementIT is firmly nonexpansive. (iii) IfT= ( –α)W+αVfor someα∈(, )and ifWis firmly nonexpansive andVis

nonexpansive,thenTis averaged.

(iv) The composite of finitely many averaged mappings is averaged.That is,if each of the mappings{Ti}N

i=is averaged,then so is the compositeT· · ·TN.In particular,ifTis α-averaged andTisα-averaged,then the compositeTTisα-averaged,where α=α+α–αα.

Recall that the metric (or nearest point) projection from $H$ onto $C$ is the mapping $P_C : H \to C$ which assigns to each point $x \in H$ the unique point $P_C x \in C$ satisfying the property
\[ \|x - P_C x\| = \inf_{y \in C}\|x - y\|. \]

Lemma . For given xH:

(i) z=PCxif and only if

xz,yz ≤, ∀yC;

(ii) z=PCxif and only if

xz≤ xy–yz, ∀yC;

(iii)

PCxPCy,xyPCxPCy, ∀x,yH.

Consequently,PCis nonexpansive.

Lemma . The following inequality holds in a Hilbert space X:

x+y≤ x+ y,x+y, ∀x,yX.

Lemma .[] In a Hilbert space H,we have

λx+ ( –λ)y=λx+ ( –λ)y–λ( –λ)xy, ∀x,yH andλ∈[, ].

Lemma .(Demiclosedness principle []) Let C be a closed and convex subset of a Hilbert space H,and let T:CC be a nonexpansive mapping withFix(T)=∅.If{xn}n= is a sequence in C weakly converging to x and if{(IT)xn}n=converges strongly to y,then (IT)x=y.In particular,if y= ,then x∈Fix(T).

Definition . A nonlinear operatorGwith domainD(G)⊆Hand rangeR(G)⊆His said to be:

(i) monotone if

xy,GxGy ≥, ∀x,yD(G),

(ii) β-strongly monotone if there existsβ> such that

xy,GxGyβxy, ∀x,yD(G),

(iii) ν-inverse strongly monotone (for short,ν-ism) if there existsν> such that

xy,GxGyνGxGy, ∀x,yD(G).

Proposition .[] Let T:HH be an operator from H to itself.

(i) Tis nonexpansive if and only if the complementIT is  -ism. (ii) IfT isν-ism,then forγ > ,γTis ν

γ-ism.

(4)
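It is worth spelling out how these facts are combined below (a short worked chain; the numbering follows this article): if $\nabla f$ is $\frac{1}{L}$-ism and $0 < \gamma < \frac{2}{L}$, then by (ii) $\gamma\nabla f$ is $\frac{1}{\gamma L}$-ism with $\frac{1}{\gamma L} > \frac{1}{2}$, so by (iii) the mapping $I - \gamma\nabla f$ is $\frac{\gamma L}{2}$-averaged. Composing with the $\frac{1}{2}$-averaged projection $P_C$ and applying Proposition 2.2(iv) then shows that $P_C(I - \gamma\nabla f)$ is
\[ \tfrac{1}{2} + \tfrac{\gamma L}{2} - \tfrac{1}{2}\cdot\tfrac{\gamma L}{2} = \tfrac{2 + \gamma L}{4}\text{-averaged}. \]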

Lemma .[] Assume that{an}is a sequence of nonnegative real numbers such that

an+≤( –γn)an+γnδn, n≥,

where{γn}is a sequence in(, )and{δn}is a sequence inRsuch that (i) ∞n=γn=∞;

(ii) lim supn→∞δn≤or

n=γn|δn|<∞. Thenlimn→∞an= .

3 Main results

We now look at the constrained convex minimization problem:
\[ \min_{x \in C} f(x), \tag{3.1} \]
where $C$ is a closed and convex subset of a Hilbert space $H$ and $f : C \to \mathbb{R}$ is a real-valued convex function. Assume that problem (3.1) is consistent, and let $S$ denote its solution set. If $f$ is Fréchet differentiable, then the gradient-projection algorithm (GPA) generates a sequence $\{x_n\}_{n=0}^\infty$ according to the recursive formula
\[ x_{n+1} = \mathrm{Proj}_C(I - \gamma\nabla f)(x_n), \quad n \ge 0, \tag{3.2} \]
or, more generally,
\[ x_{n+1} = \mathrm{Proj}_C(I - \gamma_n\nabla f)(x_n), \quad n \ge 0, \tag{3.3} \]
where, in both (3.2) and (3.3), the initial guess $x_0$ is taken from $C$ arbitrarily and the parameters $\gamma$ or $\gamma_n$ are positive real numbers.

As a matter of fact, it is known that if $\nabla f$ fails to be strongly monotone and is only $\frac{1}{L}$-ism, namely, there is a constant $L > 0$ such that
\[ \langle x - y, \nabla f(x) - \nabla f(y)\rangle \ge \frac{1}{L}\bigl\|\nabla f(x) - \nabla f(y)\bigr\|^2, \quad \forall x, y \in C, \]
then, under suitable assumptions on $\gamma$ or $\gamma_n$, algorithms (3.2) and (3.3) can still converge, but only in the weak topology.
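For readers who want to experiment, here is a minimal numerical sketch of scheme (3.2) in Python. It is not part of the paper: the objective $f(x) = \frac{1}{2}\|x - b\|^2$, the constraint set $C$ (a Euclidean ball), and the names `proj_C` and `grad_f` are illustrative assumptions. Here $\nabla f$ is $1$-Lipschitz, hence $1$-ism, so any $0 < \gamma < 2$ is admissible.

```python
import numpy as np

def proj_C(x, radius=1.0):
    # Projection onto the closed ball C = {x : ||x|| <= radius}.
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def grad_f(x, b):
    # Gradient of f(x) = 0.5*||x - b||^2, which is 1-Lipschitz (hence 1-ism).
    return x - b

def gpa(x0, b, gamma=1.0, iters=200):
    # Gradient-projection algorithm (3.2): x_{n+1} = P_C (I - gamma*grad f)(x_n).
    x = x0
    for _ in range(iters):
        x = proj_C(x - gamma * grad_f(x, b))
    return x

if __name__ == "__main__":
    b = np.array([2.0, 0.0])
    print(gpa(np.zeros(2), b))  # approaches P_C(b) = (1, 0), the minimizer over C
```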

Now consider the regularized minimization problem
\[ \min_{x \in C} f_\alpha(x) := \min_{x \in C}\Bigl( f(x) + \frac{\alpha}{2}\|x\|^2 \Bigr), \]
where $\alpha > 0$ is the regularization parameter, and again $f$ is convex with a $\frac{1}{L}$-ism gradient $\nabla f$.

It is known that the regularization method is defined as follows:
\[ x_{n+1} = \mathrm{Proj}_C(I - \gamma\nabla f_{\alpha_n})(x_n). \]
We also know that $x_n \rightharpoonup \tilde{x}$, where $\tilde{x}$ is a solution of constrained convex minimization problem (3.1).
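Continuing the sketch above, the regularized iteration replaces $\nabla f$ by $\nabla f_{\alpha_n} = \nabla f + \alpha_n I$ with $\alpha_n \downarrow 0$. Again this is only an illustration, reusing `proj_C` and `grad_f` from the previous sketch; the choice $\alpha_n = 1/(n+2)$ is an assumption, not taken from the paper.

```python
def regularized_gpa(x0, b, gamma=1.0, iters=200):
    # Regularization method: x_{n+1} = P_C (I - gamma * grad f_{alpha_n})(x_n),
    # where grad f_alpha(x) = grad f(x) + alpha * x and alpha_n -> 0.
    x = x0
    for n in range(iters):
        alpha_n = 1.0 / (n + 2)          # illustrative vanishing regularization
        x = proj_C(x - gamma * (grad_f(x, b) + alpha_n * x))
    return x
```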

Let $h : C \to H$ be a contraction with coefficient $\rho \in (0, 1)$. In this section, we introduce the following implicit scheme generating a net $\{x_{s,t_s}\}$ in an implicit way:
\[ x_{s,t_s} = P_C\bigl[s\,h(x_{s,t_s}) + (1 - s)T_{t_s}x_{s,t_s}\bigr], \tag{3.4} \]
where $0 < \gamma < \frac{2}{L}$ and $t_s \in (0, \frac{2}{\gamma} - L)$. Let $T_{t_s}$ and $s$ satisfy the following conditions:
(i) $\lambda := \lambda(t_s) = \frac{2 - \gamma(L + t_s)}{4}$;
(ii) $P_C(I - \gamma\nabla f_{t_s}) = \lambda I + (1 - \lambda)T_{t_s}$.

Consider a mapping
\[ Q_s x = P_C\bigl[s\,h(x) + (1 - s)T_{t_s}x\bigr], \quad \forall x \in C. \tag{3.5} \]
It is easy to see that $Q_s$ is a contraction. Indeed, we have
\begin{align*}
\|Q_s x - Q_s y\| &\le \bigl\|s\bigl[h(x) - h(y)\bigr] + (1 - s)(T_{t_s}x - T_{t_s}y)\bigr\| \\
&\le \rho s\|x - y\| + (1 - s)\|x - y\| = \bigl[1 - (1 - \rho)s\bigr]\|x - y\|.
\end{align*}
Hence, $Q_s$ has a unique fixed point in $C$, denoted by $x_{s,t_s}$, which uniquely solves fixed point equation (3.4).

We prove the strong convergence of $\{x_{s,t_s}\}$, $t_s \in (0, \frac{2}{\gamma} - L)$, to a solution $x^*$ of the minimization problem, which is also a solution of the variational inequality
\[ \langle (I - h)x^*, x^* - z\rangle \le 0, \quad \forall z \in S. \tag{3.6} \]

For an arbitrary initial guess $x_0 \in C$ and a sequence $\{\alpha_n\} \subset (0, \frac{2}{\gamma} - L)$, we also propose the following explicit scheme that generates a sequence $\{x_n\}$ in an explicit way:
\[ x_{n+1} = P_C\bigl[\theta_n h(x_n) + (1 - \theta_n)T_n x_n\bigr], \quad n \ge 0, \tag{3.7} \]
where $0 < \gamma < \frac{2}{L}$, $\lambda_n = \frac{2 - \gamma(L + \alpha_n)}{4}$ and $P_C(I - \gamma\nabla f_{\alpha_n}) = \lambda_n I + (1 - \lambda_n)T_n$ for each $n \ge 0$. It is proven that this sequence $\{x_n\}$ converges strongly to a minimizer $x^* \in S$ of (3.1).
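An analogous sketch of explicit scheme (3.7) follows, again illustrative and reusing the helpers above. The parameter sequences $\theta_n = 1/(n+2)$ and $\alpha_n = 1/(n+2)^2$ are assumptions chosen so that the conditions of Theorem 3.4 below hold: $\alpha_n = o(\theta_n)$, $\sum\theta_n = \infty$, and both sequences have summable successive differences.

```python
def explicit_scheme(b, iters=500):
    # Hybrid scheme (3.7): x_{n+1} = P_C[theta_n*h(x_n) + (1-theta_n)*T_n x_n],
    # where T_n = T_{alpha_n} comes from
    # P_C(I - gamma*grad f_{alpha_n}) = lambda_n*I + (1-lambda_n)*T_n.
    h = lambda x: 0.5 * x                 # hypothetical rho-contraction
    x = np.zeros_like(b)
    for n in range(iters):
        theta_n = 1.0 / (n + 2)           # theta_n -> 0, sum theta_n = infinity
        alpha_n = 1.0 / (n + 2) ** 2      # alpha_n = o(theta_n)
        x = proj_C(theta_n * h(x) + (1.0 - theta_n) * T_t(x, alpha_n, b))
    return x
```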

3.1 Convergence of the implicit scheme

Proposition 3.1 If $0 < \gamma < \frac{2}{L}$, $\alpha \in (0, \frac{2}{\gamma} - L)$, and $\nabla f$ is $\frac{1}{L}$-ism, then
\[ \mathrm{Proj}_C(I - \gamma\nabla f_\alpha) = (1 - \mu_\alpha)I + \mu_\alpha T_\alpha, \qquad \mathrm{Proj}_C(I - \gamma\nabla f) = (1 - \mu)I + \mu T, \]
where $\mu_\alpha = \frac{2 + \gamma(L + \alpha)}{4}$, $\mu = \frac{2 + \gamma L}{4}$.

In addition, for $x \in C$,
\[ \|T_\alpha x - Tx\| \le \alpha M(x), \]
where $M(x) = \frac{\gamma(5\|x\| + \|Tx\|)}{2}$.

Proof Since $\nabla f$ is $\frac{1}{L}$-ism, $\nabla f_\alpha$ is $\frac{1}{L + \alpha}$-ism, so $\gamma\nabla f_\alpha$ is $\frac{1}{\gamma(L + \alpha)}$-ism. By Proposition 2.8, $I - \gamma\nabla f_\alpha$ is $\frac{\gamma(L + \alpha)}{2}$-averaged; because $\mathrm{Proj}_C$ is $\frac{1}{2}$-averaged, by Proposition 2.2(iv), $\mathrm{Proj}_C(I - \gamma\nabla f_\alpha)$ is $\mu_\alpha$-averaged, i.e.,
\[ \mathrm{Proj}_C(I - \gamma\nabla f_\alpha) = (1 - \mu_\alpha)I + \mu_\alpha T_\alpha, \]
where $\mu_\alpha = \frac{1}{2} + \frac{\gamma(L + \alpha)}{2} - \frac{1}{2}\cdot\frac{\gamma(L + \alpha)}{2} = \frac{2 + \gamma(L + \alpha)}{4}$. The same holds for $\mathrm{Proj}_C(I - \gamma\nabla f)$, with $\mu = \frac{2 + \gamma L}{4}$.

Hence,
\[ \bigl\|\mathrm{Proj}_C(I - \gamma\nabla f_\alpha)x - \mathrm{Proj}_C(I - \gamma\nabla f)x\bigr\| = \bigl\|(\mu - \mu_\alpha)x + \mu_\alpha T_\alpha x - \mu Tx\bigr\| \le \bigl\|(I - \gamma\nabla f_\alpha)x - (I - \gamma\nabla f)x\bigr\| = \gamma\bigl\|\nabla f_\alpha(x) - \nabla f(x)\bigr\| = \alpha\gamma\|x\|, \]
then
\[ \|\mu_\alpha T_\alpha x - \mu Tx\| \le |\mu - \mu_\alpha|\,\|x\| + \alpha\gamma\|x\|, \]
and, since $|\mu - \mu_\alpha| = \frac{\alpha\gamma}{4}$,
\[ \|T_\alpha x - Tx\| \le \frac{\alpha\gamma(5\|x\| + \|Tx\|)}{2 + \gamma(L + \alpha)} \le \alpha M(x), \]
where $M(x) = \frac{\gamma(5\|x\| + \|Tx\|)}{2}$. □

Proposition . Let h:CH be a contraction with <ρ< and <γ < L,let tsbe

continuous with respect to s,ts=o(s).Suppose that problem()is consistent,let S denote the

solution set for each s∈(, ),and let xs,tsdenote a unique solution of fixed point equation

().Then the following properties hold for the net{xs,ts}:

(i) {xs,ts}ts∈(,γ–L)is bounded; (ii) lims→xs,tsTtsxs,ts= ;

(iii) xs,tsdefines a continuous curve from(, )intoC.

Proof (i) Take anypS, then

xs,tsp=PC

sh(xs,ts) + ( –s)Ttsxs,tsPCp.

Therefore,

xs,tspsρxs,tsp+sh(p) –p+ ( –s)Ttsxs,tsTp

sρxs,tsp+sh(p) –p+ ( –s)

Ttsxs,tsTtsp+TtspTp

≤ – ( –ρ)s xs,tsp+sh(p) –p+ ( –s)tsM(p),

hence,

xs,tsp

h(p) –p

 –ρ + ( –s)

ts

s( –ρ)M(p). ()

(7)

(ii)

xs,tsTtsxs,tssh(xs,ts) –sTtsxs,ts→.

(iii) Takes,s∈(, ), and calculate

xs,tsxs,tssh(xs,ts) + ( –s)Ttsxs,ts

sh(xs,ts) + ( –s)Ttsxs,ts

=sh(xs,ts) –h(xs,ts)

+ (ss)h(xs,ts) –Ttsxs,ts

+ ( –s)[Ttsxs,tsTtsxs,ts] + ( –s)[Ttsxs,tsTtsxs,ts]

sρxs,tsxs,ts+ ( –s)xs,tsxs,ts

+ ( –s)Ttsxs,ts–Ttsxs,ts+|ss|h(xs,ts) –Ttsxs,ts, ()

Ttsxs,tsTtsxs,ts

=PC(Iγfts) – [ –γ(L+ts)]I  +γ(L+ts)

xs,ts

–PC(Iγfts) – [ –γ(L+ts)]I

 +γ(L+ts)

xs,ts

≤PC(Iγfts)

 +γ(L+ts)

xs,ts–

PC(Iγfts)

 +γ(L+ts)

xs,ts

+–[ –γ(L+ts)]  +γ(L+ts)

xs,ts+

[ –γ(L+ts)]

 +γ(L+ts)

xs,ts

=[ +γ(L+ts)]PC(Iγfts)xs,ts– [ +γ(L+ts)]PC(Iγfts)xs,ts

[ +γ(L+ts)][ +γ(L+ts)]

+ γ|tsts|xs,ts

[ +γ(L+ts)][ +γ(L+ts)]

=γ(tsts)PC(Iγfts)xs,ts

[ +γ(L+ts)][ +γ(L+ts)]

+( +γ(L+ts))[PC(Iγfts)xs,ts–PC(Iγfts)xs,ts]

[ +γ(L+ts)][ +γ(L+ts)]

+ γ|tsts|xs,ts

[ +γ(L+ts)][ +γ(L+ts)]

M|tsts|. ()

So, by () and (),

xs,tsxs,ts → (ss).

Theorem . Assume that minimization problem()is consistent,and let S denote the solution set.Assume that the gradientf is

L-ism.Let h:CH be aρ-contraction with

ρ∈[, ),

xs,ts=PC

(8)

where <γ<L,ts∈(,γ –L),ts=o(s).Let Ttssatisfy the following conditions:

(i) λ:=λ(ts) = –γ(L+ts);

(ii) PC(Iγfts) =λI+ ( –λ)Tts.

Then the net{xs,ts}converges strongly as s→to a minimizer of problem(),which is also

the unique solution of the variational inequality

x∗∈S, (Ih)x∗,xx∗≥, ∀xS.

Proof Set $\mathrm{Proj}_C(I - \gamma\nabla f) = (1 - \tau)I + \tau T$ with $\tau = \frac{2 + \gamma L}{4}$ (Proposition 3.1). Let $y_{s,t_s} = s\,h(x_{s,t_s}) + (1 - s)T_{t_s}x_{s,t_s}$, $t_s \in (0, \frac{2}{\gamma} - L)$. We then have $x_{s,t_s} = P_C y_{s,t_s}$. For any given $z \in S$, $z = P_C(I - \gamma\nabla f)z = Tz$, and we obtain
\begin{align*}
x_{s,t_s} - z &= P_C y_{s,t_s} - y_{s,t_s} + y_{s,t_s} - z \\
&= P_C y_{s,t_s} - y_{s,t_s} + s\,h(x_{s,t_s}) + (1 - s)T_{t_s}x_{s,t_s} - Tz \\
&= P_C y_{s,t_s} - y_{s,t_s} + s\bigl[h(x_{s,t_s}) - h(z)\bigr] + s\bigl[h(z) - Tz\bigr] + (1 - s)\bigl[T_{t_s}x_{s,t_s} - Tz\bigr].
\end{align*}
Next we prove that $\{x_{s,t_s}\} \to x^* \in S$, which is also the unique solution of the variational inequality. We have
\begin{align*}
\|x_{s,t_s} - z\|^2 &= \bigl\langle P_C y_{s,t_s} - y_{s,t_s}, P_C y_{s,t_s} - z\bigr\rangle + s\bigl\langle h(x_{s,t_s}) - h(z), x_{s,t_s} - z\bigr\rangle \\
&\qquad + s\bigl\langle h(z) - Tz, x_{s,t_s} - z\bigr\rangle + (1 - s)\bigl\langle T_{t_s}x_{s,t_s} - Tz, x_{s,t_s} - z\bigr\rangle \\
&\le s\rho\|x_{s,t_s} - z\|^2 + s\bigl\langle h(z) - Tz, x_{s,t_s} - z\bigr\rangle + (1 - s)\bigl\langle T_{t_s}x_{s,t_s} - Tz, x_{s,t_s} - z\bigr\rangle \\
&\le \bigl[1 - (1 - \rho)s\bigr]\|x_{s,t_s} - z\|^2 + s\bigl\langle h(z) - Tz, x_{s,t_s} - z\bigr\rangle + (1 - s)t_s M(z)\|x_{s,t_s} - z\|,
\end{align*}
where the first inner product is nonpositive by Lemma 2.3(i). So
\[ \|x_{s,t_s} - z\|^2 \le \frac{\langle h(z) - Tz, x_{s,t_s} - z\rangle}{1 - \rho} + \frac{(1 - s)t_s M(z)\|x_{s,t_s} - z\|}{s(1 - \rho)}. \tag{3.11} \]
Consequently, if $x_{s_n,t_{s_n}} \rightharpoonup p$ for some $p \in S$ (with $s_n \to 0$), then taking $z = p$ in (3.11) shows $x_{s_n,t_{s_n}} \to p$, since $t_{s_n}/s_n \to 0$. Next, we prove that
\[ \bigl\|x_{s,t_s} - \mathrm{Proj}_C(I - \gamma\nabla f)x_{s,t_s}\bigr\| \to 0. \]
Since
\[ \bigl\|\mathrm{Proj}_C(I - \gamma\nabla f_{t_s})x_{s,t_s} - x_{s,t_s}\bigr\| = \bigl\|\lambda x_{s,t_s} + (1 - \lambda)T_{t_s}x_{s,t_s} - x_{s,t_s}\bigr\| \le \bigl\|T_{t_s}x_{s,t_s} - x_{s,t_s}\bigr\|, \]
we get
\[ \bigl\|x_{s,t_s} - \mathrm{Proj}_C(I - \gamma\nabla f)x_{s,t_s}\bigr\| \le \gamma t_s\|x_{s,t_s}\| + \bigl\|T_{t_s}x_{s,t_s} - x_{s,t_s}\bigr\| \to 0 \]
by Proposition 3.2(ii).

Finally, we prove that $\{x_{s,t_s}\} \to x^* \in S$, which is also the unique solution of the variational inequality. We only need to prove that if $x_{s_n,t_{s_n}} \rightharpoonup \tilde{x}$, then $\langle (I - h)\tilde{x}, \tilde{x} - z\rangle \le 0$ for all $z \in S$.

Suppose that $x_{s_n,t_{s_n}} \rightharpoonup \tilde{x}$. By Lemma 2.6 and $\|x_{s,t_s} - \mathrm{Proj}_C(I - \gamma\nabla f)x_{s,t_s}\| \to 0$, we have $\tilde{x} = \mathrm{Proj}_C(I - \gamma\nabla f)\tilde{x}$; it follows that $\tilde{x} \in S$. Note that $x_{s_n,t_{s_n}} \to \tilde{x}$ by (3.11). From the definition
\[ x_{s,t_s} = P_C\bigl[s\,h(x_{s,t_s}) + (1 - s)T_{t_s}x_{s,t_s}\bigr], \]
we have
\[ (I - h)(x_{s_n,t_{s_n}}) = \frac{1}{s_n}\bigl(P_C y_{s_n,t_{s_n}} - y_{s_n,t_{s_n}}\bigr) - \frac{1}{s_n}(I - T_{t_{s_n}})x_{s_n,t_{s_n}} + \bigl[x_{s_n,t_{s_n}} - T_{t_{s_n}}x_{s_n,t_{s_n}}\bigr]. \]
So
\begin{align*}
\bigl\langle (I - h)(x_{s_n,t_{s_n}}), x_{s_n,t_{s_n}} - z\bigr\rangle &= \frac{1}{s_n}\bigl\langle P_C y_{s_n,t_{s_n}} - y_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\bigr\rangle \\
&\qquad - \frac{1}{s_n}\bigl\langle x_{s_n,t_{s_n}} - T_{t_{s_n}}x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\bigr\rangle + \bigl\langle x_{s_n,t_{s_n}} - T_{t_{s_n}}x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\bigr\rangle \\
&\le -\frac{1}{s_n}\bigl\langle (I - T_{t_{s_n}})x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\bigr\rangle + \bigl\langle (I - T_{t_{s_n}})x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\bigr\rangle \\
&= -\frac{1}{s_n}\bigl\langle (I - T)x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\bigr\rangle - \frac{1}{s_n}\bigl\langle (T - T_{t_{s_n}})x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\bigr\rangle \\
&\qquad + \bigl\langle x_{s_n,t_{s_n}} - T_{t_{s_n}}x_{s_n,t_{s_n}}, x_{s_n,t_{s_n}} - z\bigr\rangle,
\end{align*}
where the first step uses Lemma 2.3(i). Here the first term of the last bound is nonpositive, since $I - T$ is monotone and $(I - T)z = 0$; the second tends to $0$ because $\|(T - T_{t_{s_n}})x_{s_n,t_{s_n}}\| \le t_{s_n}M(x_{s_n,t_{s_n}})$ and $t_{s_n} = o(s_n)$; and the third tends to $0$ by Proposition 3.2(ii). Then
\[ \bigl\langle (I - h)\tilde{x}, \tilde{x} - z\bigr\rangle = \lim_{n\to\infty}\bigl\langle (I - h)(x_{s_n,t_{s_n}}), x_{s_n,t_{s_n}} - z\bigr\rangle \le 0. \tag{3.12} \]
So $\{x_{s,t_s}\} \to x^* \in S$, which is also the unique solution of the variational inequality. □

3.2 Convergence of the explicit scheme

Theorem 3.4 Assume that minimization problem (3.1) is consistent, and let $S$ denote its solution set. Assume that the gradient $\nabla f$ is $\frac{1}{L}$-ism. Let $h : C \to H$ be a $\rho$-contraction with $\rho \in [0, 1)$. Let a sequence $\{x_n\}_{n=0}^\infty$ be generated by the following hybrid gradient-projection algorithm:
\[ x_{n+1} = P_C\bigl[\theta_n h(x_n) + (1 - \theta_n)T_n x_n\bigr], \quad n = 0, 1, 2, \ldots, \tag{3.7} \]
where $0 < \gamma < \frac{2}{L}$, $P_C(I - \gamma\nabla f_{\alpha_n}) = \lambda_n I + (1 - \lambda_n)T_n$ and $\lambda_n = \frac{2 - \gamma(L + \alpha_n)}{4}$, and, in addition, assume that the following conditions are satisfied for $\{\theta_n\}_{n=0}^\infty$ and $\{\alpha_n\}_{n=0}^\infty$:
(i) $\theta_n \to 0$; $\alpha_n = o(\theta_n)$;
(ii) $\sum_{n=0}^\infty\theta_n = \infty$;
(iii) $\sum_{n=0}^\infty|\theta_{n+1} - \theta_n| < \infty$;
(iv) $\sum_{n=0}^\infty|\alpha_{n+1} - \alpha_n| < \infty$.
Then the sequence $\{x_n\}_{n=0}^\infty$ converges in norm to a minimizer of (3.1), which is also the unique solution of the variational inequality (VI)
\[ x^* \in S, \qquad \langle (I - h)x^*, x - x^*\rangle \ge 0, \quad \forall x \in S. \]
In other words, $x^*$ is the unique fixed point of the contraction $\mathrm{Proj}_S h$: $x^* = \mathrm{Proj}_S h\,x^*$.

Proof (◦) We first prove that{xn}n=is bounded. SetProjC(Iγf) = ( –τ)I+τT,τ= –γL

 . Indeed, we have, forx˜∈S,

xn+–x˜

=PC

θnh(xn) + ( –θn)Tn(xn) –PCx˜

≤θnh(xn) + ( –θn)Tn(xn) –x˜

n

h(xn) –h(x˜)

+θn

h(x˜) –x˜+ ( –θn)

Tn(xn) –x˜

θnρxnx˜+θnh(x˜) –x˜+ ( –θn)

xnx˜+Tn(x˜) –T(x˜) ≤ – ( –ρ)θn

xnx˜+θnh(x˜) –x˜+αnM(x˜)

≤max

xnx˜,

 –ρh(x˜) –x˜+M(x˜)

.

So,{xn}is bounded.

(◦) Next we prove thatxn+–xn → asn→ ∞.

xn+–xn

=PC

θnh(xn) + ( –θn)TnxnPC

θn–h(xn–) + ( –θn–)Tn–xn–

θnh(xn) + ( –θn)Tnxn

θn–h(xn–) + ( –θn–)Tn–xn–

=θn

h(xn) –h(xn–)

+ ( –θn)(TnxnTnxn–)

+ (θnθn–)

h(xn–) –Tnxn–

+ ( –θn–)(Tnxn––Tn–xn–)

≤ – ( –ρ)θn xnxn–+M|θnθn–|+ ( –θn–)Tnxn––Tn–xn–,

but

Tnxn––Tn–xn–

=PC(Iγfαn) – [ –γ(L+αn)]I  +γ(L+αn)

xn–

–PC(Iγfαn–) – [ –γ(L+αn–)]I

 +γ(L+αn–)

xn–

≤PC(Iγfαn)

 +γ(L+αn)

xn––

PC(Iγfαn–)

 +γ(L+αn–) xn–

+–[ –γ(L+αn)]  +γ(L+αn)

xn–+

[ –γ(L+αn–)]  +γ(L+αn–)

xn–

=[ +γ(L+αn–)]PC(Iγfαn)xn–– [ +γ(L+αn)]PC(Iγfαn–)xn–

[ +γ(L+αn)][ +γ(L+αn–)]

+ γ|αnαn–|xn– [ +γ(L+αn)][ +γ(L+αn–)]

=γ(αn––αn)PC(Iγfαn)xn–

(11)

+( +γ(L+αn))[PC(Iγfαn)xn––PC(Iγfαn–)xn–]

[ +γ(L+αn)][ +γ(L+αn–)]

+ γ|αnαn–|xn– [ +γ(L+αn)][ +γ(L+αn–)]

≤γ|αn––αn|PC(Iγfαn)xn–

[ +γ(L+αn)][ +γ(L+αn–)]

+γ|αn––αn|[ +γ(L+αn)]xn– [ +γ(L+αn)][ +γ(L+αn–)]

+ γ|αnαn–|xn– [ +γ(L+αn)][ +γ(L+αn–)]

≤ |αn––αn|

γPC(Iγfαn)xn–+ γxn–

M|αn––αn|.

So,

xn+–xn ≤

 – ( –ρ)θn xnxn–+M|θnθn–|+M|αn––αn|,

by Lemma .,

xn+–xn →.

(◦) Next we show thatxnTnxn →.

Indeed, it follows that

xnTnxn ≤ xnxn++xn+–Tnxn

=xnxn++Pc

θnh(xn) + ( –θn)Tn(xn) –PCTn(xn)

xnxn++θnh(xn) –Tnxn→.

Now we show that

lim sup n→∞

hx∗–x∗,xnx

≤.

Letxnkx˜, observe that

PC(Iγfαn)xnxnnI+ ( –λn)Tnxnxn

= ( –λn)Tnxnxn

Tnxnxn,

hence we have

PC(Iγf)xnxn

PC(Iγf)xnPC(Iγfαn)xn+PC(Iγfαn)xnxn

(12)

So

lim

n→∞PC(Iγf)xnxn= ,

by Lemma . andlimn→∞PC(Iγf)xnxn= , then

˜

x=PC(Iγf)(x˜).

This shows that

lim sup n→∞

hx∗–x∗,xnx

≤.

It follows that xn+–x∗

=Pc

θnh(xn) + ( –θn)Tn(xn) –Pcx∗

≤θn

h(xn) –x

+ ( –θn)

TnxnTx∗

n

h(xn) –h

x∗+ ( –θn)

TnxnTx

+θn

hx∗–x∗

≤θn

h(xn) –h

x∗+ ( –θn)

TnxnTx

 + θn

hx∗–x∗,xn+–x

θnh(xn) –h

x∗+ ( –θn)Tnxnx

 + θn

hx∗–x∗,xn+–x

θnρxnx

+ ( –θn)TnxnTnx∗+Tnx∗–Tx

+ θn

hx∗–x∗,xn+–x

θnρxnx

+ ( –θn)xnx∗+αnM

x∗ + θn

hx∗–x∗,xn+–x

≤ – ( –ρ)θn xnx∗+ ( –θn)

αnM

xxnx∗+αn

Mx∗

+ θn

hx∗–x∗,xn+–x

.

Hence,

xn+–x∗= ( –βn)xnx∗+βnδn, ()

βn= ( –ρ)θn,

δn=

 –ρ

αn

θn

( –θn)M

xxnx∗+ ( –θn)

Mx∗α

n

θn

+ hx∗–x∗,xn+–x

,

by Lemma . andlimn→∞βn= ,

n=βn=∞,lim supn→∞δn≤, we havexnx∗.

4 Application of the iterative method

Next, we give an application of Theorem . to the split feasibility problem (say SFP, for short) which was introduced by Censor and Elfving [].

(13)

whereCandQare nonempty closed convex subsets of Hilbert spacesHandH,

respec-tively.A:HHis a bounded linear operator.

It is clear that $x^*$ is a solution of split feasibility problem (4.1) if and only if $x^* \in C$ and $Ax^* - P_Q Ax^* = 0$.

We define the proximity function $f$ by
\[ f(x) = \frac{1}{2}\|Ax - P_Q Ax\|^2, \]
and consider the convex optimization problem
\[ \min_{x \in C} f(x) = \min_{x \in C}\frac{1}{2}\|Ax - P_Q Ax\|^2. \tag{4.2} \]

Then $x^*$ solves split feasibility problem (4.1) if and only if $x^*$ solves minimization problem (4.2) with minimum value equal to $0$. Byrne [16] introduced the so-called CQ algorithm to solve the (SFP):
\[ x_{n+1} = P_C\bigl(I - \mu A^*(I - P_Q)A\bigr)x_n, \quad n \ge 0, \tag{4.3} \]
where $0 < \mu < \frac{2}{\|A^*A\|} = \frac{2}{\|A\|^2}$. He obtained that the sequence $\{x_n\}$ generated by (4.3) converges weakly to a solution of the (SFP).
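A minimal sketch of CQ algorithm (4.3), assuming a toy finite-dimensional SFP where $C$ and $Q$ are Euclidean unit balls and $A$ is a matrix; the helper names and problem data are illustrative, not from the paper.

```python
import numpy as np

def proj_ball(x, radius=1.0):
    # Projection onto a closed ball centered at the origin.
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def cq_algorithm(A, x0, iters=500):
    # CQ algorithm (4.3): x_{n+1} = P_C( x_n - mu * A^T (I - P_Q) A x_n ),
    # with 0 < mu < 2/||A||^2.
    mu = 1.0 / np.linalg.norm(A, 2) ** 2   # safe choice: mu = 1/||A||^2 < 2/||A||^2
    x = x0
    for _ in range(iters):
        Ax = A @ x
        x = proj_ball(x - mu * A.T @ (Ax - proj_ball(Ax)))
    return x

if __name__ == "__main__":
    A = np.array([[2.0, 0.0], [0.0, 1.0]])
    x = cq_algorithm(A, np.array([1.0, 1.0]))
    print(x, np.linalg.norm(A @ x))  # Ax should end up (near) inside Q
```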

Now we consider the regularization technique. Let
\[ f_\alpha(x) = \frac{1}{2}\|Ax - P_Q Ax\|^2 + \frac{\alpha}{2}\|x\|^2; \]
then we establish the iterative scheme as follows:
\[ x_{n+1} = P_C\bigl[\theta_n h(x_n) + (1 - \theta_n)T_n x_n\bigr], \quad n \ge 0, \]
where $h : C \to H_1$ is a contraction with coefficient $0 < \rho < 1$,
\[ P_C\bigl(I - \mu\bigl[A^*(I - P_Q)A + \alpha_n I\bigr]\bigr) = \lambda_n I + (1 - \lambda_n)T_n, \qquad \lambda_n = \frac{2 - \mu(\|A\|^2 + \alpha_n)}{4}. \]

Applying Theorem 3.4, we obtain the following result.

Theorem . Assume that the split problem is consistent,let the sequence{xn}be generated by

xn+=PC

θnh(xn) + ( –θn)Tnxn , n≥,

where

PC

IμA∗(IPQ)A+αnI =λnI+ ( –λn)Tn,

λn=

 –μ(A+αn)

 ,

(14)

(i) θn→; <μ<A∗A=A;

(ii) ∞n=θn=∞;

(iii) ∞n=|θn+–θn|<∞; (iv) ∞n=|αn+–αn|<∞;

(v) αn=o(θn).

Then the sequence{xn}converges strongly to the solution of split feasibility problem().
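For completeness, here is a sketch of the scheme of Theorem 4.1 on the same toy SFP as in the CQ sketch (illustrative assumptions throughout, reusing `proj_ball`; the contraction `h` and the parameter sequences are hypothetical choices satisfying conditions (i)-(v)). $T_n$ is recovered from the defining relation with $\lambda_n = \frac{2 - \mu(\|A\|^2 + \alpha_n)}{4}$.

```python
def regularized_sfp(A, x0, iters=500):
    # Scheme of Theorem 4.1: x_{n+1} = P_C[theta_n*h(x_n) + (1-theta_n)*T_n x_n],
    # with P_C(I - mu*[A^T(I-P_Q)A + alpha_n*I]) = lambda_n*I + (1-lambda_n)*T_n.
    L = np.linalg.norm(A, 2) ** 2
    mu = 1.0 / L                               # 0 < mu < 2/||A||^2
    h = lambda x: 0.5 * x                      # hypothetical rho-contraction
    x = x0
    for n in range(iters):
        theta_n = 1.0 / (n + 2)
        alpha_n = 1.0 / (n + 2) ** 2           # alpha_n = o(theta_n)
        lam = (2.0 - mu * (L + alpha_n)) / 4.0
        Ax = A @ x
        grad = A.T @ (Ax - proj_ball(Ax)) + alpha_n * x
        y = proj_ball(x - mu * grad)           # P_C(I - mu*grad f_{alpha_n}) x
        Tn_x = (y - lam * x) / (1.0 - lam)     # recover T_n x
        x = proj_ball(theta_n * h(x) + (1.0 - theta_n) * Tn_x)
    return x
```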

Proof By the definition of the proximity function $f$, we have
\[ \nabla f(x) = A^*(I - \mathrm{Proj}_Q)Ax, \]
and $\nabla f$ is $\frac{1}{\|A\|^2}$-ism. Indeed, since $\mathrm{Proj}_Q$ is a $\frac{1}{2}$-averaged mapping, $I - \mathrm{Proj}_Q$ is $1$-ism, and therefore
\begin{align*}
&\bigl\langle \nabla f(x) - \nabla f(y), x - y\bigr\rangle - \frac{1}{\|A\|^2}\bigl\|\nabla f(x) - \nabla f(y)\bigr\|^2 \\
&\quad= \bigl\langle A^*(I - \mathrm{Proj}_Q)Ax - A^*(I - \mathrm{Proj}_Q)Ay, x - y\bigr\rangle - \frac{1}{\|A\|^2}\bigl\|A^*\bigl[(I - \mathrm{Proj}_Q)Ax - (I - \mathrm{Proj}_Q)Ay\bigr]\bigr\|^2 \\
&\quad= \bigl\langle (I - \mathrm{Proj}_Q)Ax - (I - \mathrm{Proj}_Q)Ay, Ax - Ay\bigr\rangle - \frac{1}{\|A\|^2}\bigl\|A^*\bigl[(I - \mathrm{Proj}_Q)Ax - (I - \mathrm{Proj}_Q)Ay\bigr]\bigr\|^2 \\
&\quad\ge \bigl\|(I - \mathrm{Proj}_Q)Ax - (I - \mathrm{Proj}_Q)Ay\bigr\|^2 - \bigl\|(I - \mathrm{Proj}_Q)Ax - (I - \mathrm{Proj}_Q)Ay\bigr\|^2 = 0.
\end{align*}
Hence $\langle\nabla f(x) - \nabla f(y), x - y\rangle \ge \frac{1}{\|A\|^2}\|\nabla f(x) - \nabla f(y)\|^2$. Set $f_{\alpha_n}(x) = f(x) + \frac{\alpha_n}{2}\|x\|^2$; consequently,
\[ \nabla f_{\alpha_n}(x) = \nabla f(x) + \alpha_n x = A^*(I - \mathrm{Proj}_Q)Ax + \alpha_n x. \]
Let $\gamma = \mu$ and $L = \|A\|^2$; then the iterative scheme is equivalent to
\[ x_{n+1} = P_C\bigl[\theta_n h(x_n) + (1 - \theta_n)T_n x_n\bigr], \quad n \ge 0, \]
where $0 < \gamma < \frac{2}{L}$, $P_C(I - \gamma\nabla f_{\alpha_n}) = \lambda_n I + (1 - \lambda_n)T_n$ and $\lambda_n = \frac{2 - \gamma(L + \alpha_n)}{4}$. Due to Theorem 3.4, we have the conclusion immediately. □

Competing interests

The author declares that they have no competing interests.

Author’s contributions


Acknowledgements

The author thanks the referees for their helpful comments, which notably improved the presentation of this manuscript. This work was supported by the Fundamental Research Funds for the Central Universities (the Special Fund of Science in Civil Aviation University of China: No. 3122013K004).

Received: 2 January 2013. Accepted: 10 October 2013. Published: 22 Nov 2013

References

1. Levitin, ES, Polyak, BT: Constrained minimization methods. Zh. Vychisl. Mat. Mat. Fiz. 6, 787-823 (1966)
2. Calamai, PH, Moré, JJ: Projected gradient methods for linearly constrained problems. Math. Program. 39, 93-116 (1987)
3. Polyak, BT: Introduction to Optimization. Optimization Software, New York (1987)
4. Su, M, Xu, HK: Remarks on the gradient-projection algorithm. J. Nonlinear Anal. Optim. 1, 35-43 (2010)
5. Moudafi, A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 241, 46-55 (2000)
6. Xu, HK: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279-291 (2004)
7. Ceng, LC, Ansari, QH, Yao, JC: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. TMA 74, 5286-5302 (2011)
8. Marino, G, Xu, HK: Convergence of generalized proximal point algorithm. Commun. Pure Appl. Anal. 3, 791-808 (2004)
9. Marino, G, Xu, HK: A general iterative method for nonexpansive mappings in Hilbert spaces. J. Math. Anal. Appl. 318, 43-52 (2006)
10. Tian, M: A general iterative algorithm for nonexpansive mappings in Hilbert spaces. Nonlinear Anal. TMA 73, 689-694 (2010)
11. Yao, Y, Liou, YC, Chen, CP: Algorithms construction for nonexpansive mappings and inverse-strongly monotone mappings. Taiwan. J. Math. 15, 1979-1998 (2011)
12. Yao, Y, Chen, R, Liou, YC: A unified implicit algorithm for solving the triple-hierarchical constrained optimization problem. Math. Comput. Model. 55, 1506-1515 (2012)
13. Yao, Y, Liou, YC, Kang, SM: Two-step projection methods for a system of variational inequality problems in Banach spaces. J. Glob. Optim. (2013). doi:10.1007/s10898-011-9804-0
14. Kumam, W, Junlouchai, P, Kumam, P: Generalized systems of variational inequalities and projection methods for inverse-strongly monotone mappings. Discrete Dyn. Nat. Soc. (2011). doi:10.1155/2011/976505
15. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221-239 (1994)
16. Byrne, C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103-120 (2004)
17. Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21, 2071-2084 (2005)
18. Censor, Y, Bortfeld, T, Martin, B, Trofimov, A: A unified approach for inversion problems in intensity-modulated radiation therapy. Phys. Med. Biol. 51, 2353-2365 (2006)
19. Xu, HK: A variable Krasnosel'skii-Mann algorithm and the multiple-set split feasibility problem. Inverse Probl. 22, 2021-2034 (2006)
20. Xu, HK: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
21. Lopez, G, Martin, V, Xu, HK: Perturbation techniques for nonexpansive mappings with applications. Nonlinear Anal., Real World Appl. 10, 2369-2383 (2009)
22. Lopez, G, Martin, V, Xu, HK: Iterative algorithms for the multiple-sets split feasibility problem. In: Censor, Y, Jiang, M, Wang, G (eds.) Biomedical Mathematics: Promising Directions in Imaging, Therapy Planning and Inverse Problems, pp. 243-279. Medical Physics Publishing, Madison (2009)
23. Xu, HK: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360-378 (2011). doi:10.1007/s10957-011-9837-z
24. Xu, HK: A regularization method for the proximal point algorithm. J. Glob. Optim. 36, 115-125 (2006)
25. Cho, YJ, Petrot, N: Regularization and iterative method for general variational inequality problem in Hilbert spaces. J. Inequal. Appl. 2011, 21 (2011)
26. Combettes, PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 53(5-6), 475-504 (2004)
27. Goebel, K, Kirk, WA: Topics in Metric Fixed Point Theory. Cambridge Studies in Advanced Mathematics, vol. 28. Cambridge University Press, Cambridge (1990)

10.1186/1029-242X-2013-550
