
Bounded perturbation resilience of extragradient-type methods and their applications

Q-L Dong¹, A Gibali²*, D Jiang¹ and Y Tang³

*Correspondence: avivg@braude.ac.il
²Department of Mathematics, ORT Braude College, Karmiel, 2161002, Israel
Full list of author information is available at the end of the article

Abstract

In this paper we study the bounded perturbation resilience of the extragradient and the subgradient extragradient methods for solving a variational inequality (VI) problem in real Hilbert spaces. This is an important property of algorithms which guarantees the convergence of the scheme under summable errors, meaning that an inexact version of the methods can also be considered. Moreover, once an algorithm is proved to be bounded perturbation resilient, superiorization can be used, and this allows flexibility in choosing the bounded perturbations in order to obtain a superior solution, as is well explained in the paper. We also discuss some inertial extragradient methods. Under mild and standard assumptions of monotonicity and Lipschitz continuity of the VI's associated mapping, convergence of the perturbed extragradient and subgradient extragradient methods is proved. In addition we show that the perturbed algorithms converge at the rate of O(1/t). Numerical illustrations are given to demonstrate the performances of the algorithms.

MSC: 49J35; 58E35; 65K15; 90C47

Keywords: inertial-type method; bounded perturbation resilience; extragradient method; subgradient extragradient method; variational inequality

1 Introduction

In this paper we are concerned with the variational inequality (VI) problem of finding a point x* such that

⟨F(x*), x − x*⟩ ≥ 0 for all x ∈ C, (.)

where C ⊆ H is a nonempty, closed and convex set in a real Hilbert space H, ⟨·,·⟩ denotes the inner product in H, and F : H → H is a given mapping. This problem is a fundamental problem in optimization theory, and it captures various applications such as partial differential equations, optimal control and mathematical programming; for the theory and applications of VIs and related problems, the reader is referred, for example, to the works of Ceng et al. [], Zegeye et al. [], the papers of Yao et al. [–] and the many references therein.

Many algorithms for solving VI (.) are projection algorithms that employ projections onto the feasible set C of VI (.), or onto some related set, in order to reach a solution iteratively. Korpelevich [] and Antipin [] proposed an algorithm for solving (.), known as the extragradient method; see also Facchinei and Pang [, Chapter ]. In each iteration of the algorithm, in order to get the next iterate x^{k+1}, two orthogonal projections onto C are calculated according to the following iterative step. Given the current iterate x^k, calculate

y^k = P_C(x^k − γ_k F(x^k)),
x^{k+1} = P_C(x^k − γ_k F(y^k)), (.)

where γ_k ∈ (0, 1/L) and L is the Lipschitz constant of F, or γ_k is updated by the following adaptive procedure:

γ_k‖F(x^k) − F(y^k)‖ ≤ μ‖x^k − y^k‖, μ ∈ (0, 1). (.)

In the extragradient method there is the need to calculate the orthogonal projection onto C twice in each iteration. In case the set C is simple enough so that projections onto it can be easily computed, this method is particularly useful; but if C is a general closed and convex set, a minimal distance problem has to be solved (twice) in order to obtain the next iterate. This might seriously affect the efficiency of the extragradient method. Hence, Censor et al. in [–] presented a method called the subgradient extragradient method, in which the second projection (.) onto C is replaced by a specific subgradient projection which can be easily calculated. The iterative step has the following form:

y^k = P_C(x^k − γF(x^k)),
x^{k+1} = P_{T_k}(x^k − γF(y^k)), (.)

where T_k is the set defined as

T_k := {w ∈ H | ⟨x^k − γF(x^k) − y^k, w − y^k⟩ ≤ 0}, (.)

and γ ∈ (0, 1/L).
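For illustration only, the following is a minimal numerical sketch in R^n (not taken from the paper) of one iteration of both schemes with a fixed step size γ ∈ (0, 1/L). The names proj_C and F are assumptions: proj_C stands for a user-supplied Euclidean projection onto C, and F for the VI mapping; the projection onto the half-space T_k is available in closed form.

```python
import numpy as np

def extragradient_step(x, F, proj_C, gamma):
    """One iteration of the extragradient method: two projections onto C."""
    y = proj_C(x - gamma * F(x))
    return proj_C(x - gamma * F(y))

def subgradient_extragradient_step(x, F, proj_C, gamma):
    """One iteration of the subgradient extragradient method: the second
    projection onto C is replaced by the projection onto the half-space
    T = {w : <x - gamma*F(x) - y, w - y> <= 0}, computable in closed form."""
    y = proj_C(x - gamma * F(x))
    a = x - gamma * F(x) - y          # normal vector of the half-space T
    z = x - gamma * F(y)              # point to be projected onto T
    viol = a @ (z - y)
    if viol > 0:                      # z lies outside T: project onto its boundary
        z = z - (viol / (a @ a)) * a
    return z
```

The second routine makes concrete why the method is cheaper: only the first step requires the (possibly expensive) projection onto C, while the second step is a simple algebraic correction.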

In this manuscript we prove that the above methods, the extragradient and the subgradient extragradient methods, are bounded perturbation resilient, and the perturbed methods have the convergence rate of O(1/t). This means that an inexact version of the algorithms, which incorporates summable errors, also converges to a solution of VI (.) and, moreover, a superiorized version can be introduced by choosing the perturbations. In order to obtain a superior solution with respect to some new objective function, for example by choosing the norm, we can obtain a solution to VI (.) which is closer to the origin.

Our paper is organized as follows. In Section 2 we present the preliminaries. In Section 3 we study the convergence of the extragradient method with outer perturbations. Later, in Section 4, the bounded perturbation resilience of the extragradient method as well as the construction of the inertial extragradient methods are presented.

2 Preliminaries

Let H be a real Hilbert space with the inner product ⟨·,·⟩ and the induced norm ‖·‖, and let D be a nonempty, closed and convex subset of H. We write x^k ⇀ x to indicate that the sequence {x^k}_{k=0}^∞ converges weakly to x and x^k → x to indicate that the sequence {x^k}_{k=0}^∞ converges strongly to x. Given a sequence {x^k}_{k=0}^∞, denote by ω_w(x^k) its weak ω-limit set, that is, x ∈ ω_w(x^k) if there exists a subsequence {x^{k_j}}_{j=0}^∞ of {x^k}_{k=0}^∞ which converges weakly to x.

For each point x ∈ H, there exists a unique nearest point in D, denoted by P_D(x). That is,

‖x − P_D(x)‖ ≤ ‖x − y‖ for all y ∈ D. (.)

The mapping P_D : H → D is called the metric projection of H onto D. It is well known that P_D is a nonexpansive mapping of H onto D, and even a firmly nonexpansive mapping. This is captured in the next lemma.

Lemma . For any x, y ∈ H and z ∈ D, it holds that
(i) ‖P_D(x) − P_D(y)‖ ≤ ‖x − y‖;
(ii) ‖P_D(x) − z‖² ≤ ‖x − z‖² − ‖P_D(x) − x‖².

The characterization of the metric projection P_D [, Section ] is given by the two properties in the following lemma.

Lemma . Given x ∈ H and z ∈ D. Then z = P_D(x) if and only if

P_D(x) ∈ D (.)

and

⟨x − P_D(x), P_D(x) − y⟩ ≥ 0 for all x ∈ H, y ∈ D. (.)

Definition . Thenormal coneofDatvDdenoted byND(v) is defined as

ND(v) :=

dH| d,yv ≤ for allyD . (.)

Definition . Let B:H⇒H be a point-to-set operator defined on a real Hilbert

spaceH. The operatorBis called amaximal monotone operatorifBismonotone,

i.e.,

uv,xy ≥ for alluB(x) andvB(y), (.)

and the graphG(B) ofB,

G(B) :=(x,u)∈H×H|uB(x) , (.)

is not properly contained in the graph of any other monotone operator.

(4)

Definition . Thesubdifferential setof a convex functioncat a pointxis de-fined as

∂c(x) :=ξH|c(y)≥c(x) +ξ,yxfor allyH . (.)

ForzH, take anyξ∂c(z) and define

T(z) :=wH|c(z) +ξ,wz ≤ . (.)

This is a half-space, the bounding hyperplane of which separates the setDfrom the point

zifξ= ; otherwiseT(z) =H; see,e.g., [, Lemma .].
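For concreteness, here is a small sketch (not part of the paper) of the half-space T(z) built from a subgradient and of its closed-form Euclidean projection in R^n. The argument names c_z (the value c(z)), xi (a subgradient of c at z) and z are illustrative assumptions.

```python
import numpy as np

def halfspace_projector(c_z, xi, z):
    """Return a function projecting onto T(z) = {w : c(z) + <xi, w - z> <= 0},
    the half-space generated by a subgradient xi of c at z; T(z) is the whole
    space when xi = 0."""
    def proj_T(w):
        if not np.any(xi):                      # xi == 0  =>  T(z) = H
            return w
        viol = c_z + xi @ (w - z)               # amount by which w violates the inequality
        if viol <= 0:
            return w                            # w already lies in T(z)
        return w - (viol / (xi @ xi)) * xi      # projection onto the bounding hyperplane
    return proj_T
```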

Lemma .([]) Let D be a nonempty,closed and convex subset of a Hilbert spaceH.Let

{xk}

k=be a bounded sequence which satisfies the following properties:

every limit point of{xk}

k=lies inD;

• limn→∞xkxexists for everyxD.

Then{xk}

k=converges to a point in D.

Lemma . Assume that{ak}∞k=is a sequence of nonnegative real numbers such that

ak+≤( +γk)ak+δk, ∀k≥, (.)

where the nonnegative sequences{γk}∞k=and{δk}∞k=satisfy

k=γk< +∞and

k=δk< +∞,respectively.Thenlimk→∞akexists.

3 The extragradient method with outer perturbations

In order to discuss the convergence of the extragradient method with outer perturbations, we make the following assumptions.

Condition . The solution set of (.), denoted bySOL(C,F), is nonempty.

Condition . The mappingFismonotoneonC,i.e.,

F(x) –F(y),xy≥, ∀x,yC. (.)

Condition . The mappingFisLipschitz continuousonCwith the Lipschitz constant

L> ,i.e.,

F(x) –F(y)≤Lxy, ∀x,yC. (.)

(5)

Theorem . Assume that Conditions.-.hold.Then any sequence{xk}

k=generated

by the extragradient method(.)with the adaptive rule(.)weakly converges to a solution of the variational inequality(.).

Denoteeki :=ei(xk),i= , . The sequences of perturbations{eki}∞k=,i= , , are assumed

to be summable,i.e.,

k=

eki< +∞, i= , . (.)

Now we consider the extragradient method with outer perturbations.

Algorithm . The extragradient method with outer perturbations:

Step : Select a starting pointxCand setk= .

Step : Given the current iteratexk, compute

yk=PC

xkγkF

xk+e

xk, (.)

whereγk=σρmk,σ> ,ρ∈(, ) andmkis the smallest nonnegative integer such that (see [])

γkF

xkFykμxkyk, μ∈(, ). (.)

Calculate the next iterate

xk+=PC

xkγkF

yk+e

xk. (.)

Step : Ifxk=yk, then stop. Otherwise, setk←(k+ ) and return toStep .

3.1 Convergence analysis

Lemma .([]) The Armijo-like search rule(.)is well defined.Besides,γγkσ,

whereγ =min{σ,μρL}.

Theorem . Assume that Conditions.-.hold.Then the sequence{xk}

k=generated

by Algorithm.converges weakly to a solution of the variational inequality(.).

Proof Take x* ∈ SOL(C, F). From (.) and Lemma .(ii), we have

‖x^{k+1} − x*‖² ≤ ‖x^k − γ_k F(y^k) + e_2^k − x*‖² − ‖x^k − γ_k F(y^k) + e_2^k − x^{k+1}‖²
 = ‖x^k − x*‖² − ‖x^k − x^{k+1}‖² + 2γ_k⟨F(y^k), x* − x^{k+1}⟩ − 2⟨e_2^k, x* − x^{k+1}⟩. (.)

From the Cauchy-Schwarz inequality and the mean value inequality, it follows that

−2⟨e_2^k, x* − x^{k+1}⟩ ≤ 2‖e_2^k‖‖x^{k+1} − x*‖ ≤ ‖e_2^k‖(1 + ‖x^{k+1} − x*‖²). (.)

Using x* ∈ SOL(C, F) and the monotone property of F, we have ⟨y^k − x*, F(y^k)⟩ ≥ 0 and, consequently, we get

2γ_k⟨F(y^k), x* − x^{k+1}⟩ ≤ 2γ_k⟨F(y^k), y^k − x^{k+1}⟩. (.)

Thus, we have

−‖x^k − x^{k+1}‖² + 2γ_k⟨F(y^k), x* − x^{k+1}⟩
 ≤ −‖x^k − x^{k+1}‖² + 2γ_k⟨F(y^k), y^k − x^{k+1}⟩
 = −‖x^k − y^k‖² − ‖y^k − x^{k+1}‖² + 2⟨x^k − γ_k F(y^k) − y^k, x^{k+1} − y^k⟩, (.)

where the equality comes from

−‖x^k − x^{k+1}‖² = −‖x^k − y^k‖² − ‖y^k − x^{k+1}‖² − 2⟨x^k − y^k, y^k − x^{k+1}⟩. (.)

Using x^{k+1} ∈ C, the definition of y^k and Lemma ., we have

⟨y^k − x^k + γ_k F(x^k) − e_1^k, x^{k+1} − y^k⟩ ≥ 0. (.)

So, we obtain

⟨x^k − γ_k F(y^k) − y^k, x^{k+1} − y^k⟩
 ≤ γ_k⟨F(x^k) − F(y^k), x^{k+1} − y^k⟩ − ⟨e_1^k, x^{k+1} − y^k⟩
 ≤ γ_k‖F(x^k) − F(y^k)‖‖x^{k+1} − y^k‖ + ‖e_1^k‖‖x^{k+1} − y^k‖
 ≤ μ‖x^k − y^k‖‖x^{k+1} − y^k‖ + (1/2)‖e_1^k‖ + (1/2)‖e_1^k‖‖x^{k+1} − y^k‖²
 ≤ (μ/2)‖x^k − y^k‖² + (μ/2)‖x^{k+1} − y^k‖² + (1/2)‖e_1^k‖ + (1/2)‖e_1^k‖‖x^{k+1} − y^k‖²
 = (μ/2)‖x^k − y^k‖² + (1/2)(μ + ‖e_1^k‖)‖x^{k+1} − y^k‖² + (1/2)‖e_1^k‖. (.)

From (.), it follows

lim

k→∞e

k

i= , i= , . (.)

Therefor, we assumeek

 ∈[,  –μν) andek ∈[, /),k≥, whereν∈(,  –μ).

So, using (.), we get

xkγkF

ykyk,xk+–ykμxkyk+ ( –ν)xk+–yk+ek. (.) Combining (.)-(.) and (.), we obtain

xk+–x∗≤xkx∗– ( –μ)xkyk–νxk+–yk

(7)

where

ek:=ek+ek. (.)

From (.), it follows

xk+–x∗≤ 

 –ek

xkx∗–  –μ  –ek

xkyk

ν

 –ekx

k+yk

+ e

k

 –ek. (.)

Sinceek

 ∈[, /),k≥, we get

≤ 

 –ek

≤ + ek< . (.)

So, from (.), we have

xk+–x∗≤ + ekxkx∗– ( –μ)xkyk

νxk+–yk+ ek

≤ + ekxkx∗+ ek. (.)

Using (.) and Lemma ., we get the existence oflimk→∞xkxand then the

bound-edness of{xk}

k=. From (.), it follows

( –μ)xkyk+νxk+–yk≤ + ekxkx∗–xk+–x∗+ ek, (.)

which means that

k=

xkyk< +∞ and

k=

xk+–yk< +∞. (.)

Thus, we obtain

lim

k→∞x

kyk=  and lim

k→∞x

k+yk= , (.)

and consequently,

lim

k→∞x

k+xk= . (.)

Now, we are to show ω_w(x^k) ⊆ SOL(C, F). Due to the boundedness of {x^k}_{k=0}^∞, it has at least one weak accumulation point. Let x̂ ∈ ω_w(x^k). Then there exists a subsequence {x^{k_i}}_{i=0}^∞ of {x^k}_{k=0}^∞ which converges weakly to x̂. From (.), it follows that {y^{k_i}}_{i=0}^∞ also converges weakly to x̂.

We will show that x̂ is a solution of the variational inequality (.). Let

A(v) = F(v) + N_C(v) if v ∈ C, and A(v) = ∅ if v ∉ C, (.)

where N_C(v) is the normal cone of C at v ∈ C. It is known that A is a maximal monotone operator and A^{−1}(0) = SOL(C, F). If (v, w) ∈ G(A), then we have w − F(v) ∈ N_C(v) since w ∈ A(v) = F(v) + N_C(v). Thus it follows that

⟨w − F(v), v − y⟩ ≥ 0, ∀y ∈ C. (.)

Since y^{k_i} ∈ C, we have

⟨w − F(v), v − y^{k_i}⟩ ≥ 0. (.)

On the other hand, by the definition of y^k and Lemma ., it follows that

⟨x^k − γ_k F(x^k) + e_1^k − y^k, y^k − v⟩ ≥ 0, (.)

and consequently,

⟨(y^k − x^k)/γ_k + F(x^k), v − y^k⟩ − (1/γ_k)⟨e_1^k, v − y^k⟩ ≥ 0. (.)

Hence we have

⟨w, v − y^{k_i}⟩ ≥ ⟨F(v), v − y^{k_i}⟩
 ≥ ⟨F(v), v − y^{k_i}⟩ − ⟨(y^{k_i} − x^{k_i})/γ_{k_i} + F(x^{k_i}), v − y^{k_i}⟩ + (1/γ_{k_i})⟨e_1^{k_i}, v − y^{k_i}⟩
 = ⟨F(v) − F(y^{k_i}), v − y^{k_i}⟩ + ⟨F(y^{k_i}) − F(x^{k_i}), v − y^{k_i}⟩ − ⟨(y^{k_i} − x^{k_i})/γ_{k_i}, v − y^{k_i}⟩ + (1/γ_{k_i})⟨e_1^{k_i}, v − y^{k_i}⟩
 ≥ ⟨F(y^{k_i}) − F(x^{k_i}), v − y^{k_i}⟩ − ⟨(y^{k_i} − x^{k_i})/γ_{k_i}, v − y^{k_i}⟩ + (1/γ_{k_i})⟨e_1^{k_i}, v − y^{k_i}⟩, (.)

which implies

⟨w, v − y^{k_i}⟩ ≥ ⟨F(y^{k_i}) − F(x^{k_i}), v − y^{k_i}⟩ − ⟨(y^{k_i} − x^{k_i})/γ_{k_i}, v − y^{k_i}⟩ + (1/γ_{k_i})⟨e_1^{k_i}, v − y^{k_i}⟩. (.)

Taking the limit as i → ∞ in the above inequality and using Lemma ., we obtain

⟨w, v − x̂⟩ ≥ 0. (.)

Since A is a maximal monotone operator, it follows that x̂ ∈ A^{−1}(0) = SOL(C, F). So, ω_w(x^k) ⊆ SOL(C, F).

Since lim_{k→∞} ‖x^k − x*‖ exists and ω_w(x^k) ⊆ SOL(C, F), using Lemma ., we conclude that {x^k}_{k=0}^∞ weakly converges to a solution of the variational inequality (.). This completes the proof. □

3.2 Convergence rate

Nemirovski [] and Tseng [] proved theO(/t) convergence rate of the extragradient method. In this subsection, we present the convergence rate of Algorithm ..

Theorem . Assume that Conditions.-.hold.Let the sequences{xk}∞k=and{yk}∞k= be generated by Algorithm..For any integer t> ,we have ytC which satisfies

F(x),ytx

≤  ϒt

xx+M(x), ∀xC, (.)

where

yt= 

ϒt t

k=

γkyk, ϒt= t

k=

γk (.)

and

M(x) =sup

k

maxxk+–yk,xk+–x

k=

ek. (.)

Proof Take arbitrarily x ∈ C. From Conditions . and ., we have

−‖x^k − x^{k+1}‖² + 2γ_k⟨F(y^k), x − x^{k+1}⟩
 = −‖x^k − x^{k+1}‖² + 2γ_k(⟨F(y^k) − F(x), x − y^k⟩ + ⟨F(x), x − y^k⟩ + ⟨F(y^k), y^k − x^{k+1}⟩)
 ≤ −‖x^k − x^{k+1}‖² + 2γ_k(⟨F(x), x − y^k⟩ + ⟨F(y^k), y^k − x^{k+1}⟩)
 = −‖x^k − y^k‖² − ‖y^k − x^{k+1}‖² + 2γ_k⟨F(x), x − y^k⟩ + 2⟨x^k − γ_k F(y^k) − y^k, x^{k+1} − y^k⟩. (.)

By (.) and Lemma ., we get

⟨x^k − γ_k F(y^k) − y^k, x^{k+1} − y^k⟩
 = ⟨x^k − γ_k F(x^k) + e_1^k − y^k, x^{k+1} − y^k⟩ − ⟨e_1^k, x^{k+1} − y^k⟩ + γ_k⟨F(x^k) − F(y^k), x^{k+1} − y^k⟩
 ≤ −⟨e_1^k, x^{k+1} − y^k⟩ + γ_k⟨F(x^k) − F(y^k), x^{k+1} − y^k⟩
 ≤ ‖e_1^k‖‖x^{k+1} − y^k‖ + μ‖x^k − y^k‖‖x^{k+1} − y^k‖
 ≤ ‖e_1^k‖‖x^{k+1} − y^k‖ + (μ/2)‖x^k − y^k‖² + (μ/2)‖x^{k+1} − y^k‖². (.)

Identifying x* with x in (.) and (.), and combining (.) and (.), we get

‖x^{k+1} − x‖² ≤ ‖x^k − x‖² + 2‖e_1^k‖‖x^{k+1} − y^k‖ − (1 − μ)‖x^k − y^k‖² + 2‖e_2^k‖‖x^{k+1} − x‖ + 2γ_k⟨F(x), x − y^k⟩
 ≤ ‖x^k − x‖² + 2‖e_1^k‖‖x^{k+1} − y^k‖ + 2‖e_2^k‖‖x^{k+1} − x‖ + 2γ_k⟨F(x), x − y^k⟩. (.)

Thus, we have

2γ_k⟨F(x), y^k − x⟩ ≤ ‖x^k − x‖² − ‖x^{k+1} − x‖² + 2‖e_1^k‖‖x^{k+1} − y^k‖ + 2‖e_2^k‖‖x^{k+1} − x‖
 ≤ ‖x^k − x‖² − ‖x^{k+1} − x‖² + M₁(x)e^k, (.)

where M₁(x) = sup_k{max{2‖x^{k+1} − y^k‖, 2‖x^{k+1} − x‖}} < +∞. Summing inequality (.) over k = 0, ..., t, we obtain

2⟨F(x), Σ_{k=0}^t γ_k y^k − (Σ_{k=0}^t γ_k)x⟩ ≤ ‖x^0 − x‖² + M₁(x) Σ_{k=0}^t e^k ≤ ‖x^0 − x‖² + M(x). (.)

Using the notations of Υ_t and y_t in the above inequality, we derive

⟨F(x), y_t − x⟩ ≤ (1/(2Υ_t))(‖x^0 − x‖² + M(x)), ∀x ∈ C. (.)

The proof is complete. □

Remark . From Lemma ., it follows

ϒt≥(t+ )γ, (.)

thus Algorithm . hasO(/t) convergence rate. In fact, for any bounded subsetDC

and given accuracy> , our algorithm achieves

F(x),ytx

, ∀xD (.)

in at most

t=

m

γ

(.)

(11)
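As a small illustration (not from the paper) of the rate certificate above, the averaged point y_t and the weight Υ_t can be formed directly from the stored step sizes and iterates; the routine below is a sketch under the assumption that the γ_k and y^k have been collected during the run.

```python
import numpy as np

def ergodic_point(gammas, ys):
    """Weighted average y_t = (1/Upsilon_t) * sum_k gamma_k * y^k with
    Upsilon_t = sum_k gamma_k; this is the point for which the O(1/t)
    bound of the theorem above is stated."""
    gammas = np.asarray(gammas, dtype=float)   # step sizes gamma_0, ..., gamma_t
    Y = np.asarray(ys, dtype=float)            # iterates y^0, ..., y^t, one per row
    upsilon = gammas.sum()
    return (gammas[:, None] * Y).sum(axis=0) / upsilon
```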

4 The bounded perturbation resilience of the extragradient method

In this section, we prove the bounded perturbation resilience (BPR) of the extragradient method. This property is fundamental for the application of the superiorization methodology (SM) to it.

The superiorization methodology of [–], which originates in the papers by Butnariu, Reich and Zaslavski [–], is intended for constrained minimization (CM) problems of the form

min{φ(x) | x ∈ Ω}, (.)

where φ : H → R is an objective function and Ω ⊆ H is the solution set of another problem. Here, we assume Ω ≠ ∅ throughout this paper. Assuming that the set Ω is a closed and convex subset of a Hilbert space H, the minimization problem (.) becomes a standard CM problem. Here, we are interested in the case wherein Ω is the solution set of another CM of the form

min{f(x) | x ∈ C}, (.)

i.e., we wish to look at

Ω := {x* ∈ C | f(x*) ≤ f(x) for all x ∈ C}, (.)

provided that Ω is nonempty. If f is differentiable and we let F = ∇f, then CM (.) equals the following variational inequality: to find a point x* ∈ C such that

⟨F(x*), x − x*⟩ ≥ 0, ∀x ∈ C. (.)

The superiorization methodology (SM) strives not to solve (.); rather, the task is to find a point in Ω which is superior, i.e., has a lower, but not necessarily minimal, value of the objective function φ. This is done in the SM by first investigating the bounded perturbation resilience of an algorithm designed to solve (.) and then proactively using such permitted perturbations in order to steer the iterates of such an algorithm toward lower values of the φ objective function while not losing the overall convergence to a point in Ω.

In this paper, we do not investigate superiorization of the extragradient method. We prepare for such an application by proving the bounded perturbation resilience that is needed in order to do superiorization.

Algorithm . The basic algorithm:

Initialization:xis arbitrary;

Iterative step: Given the current iterate vectorxk, calculate the next iteratexk+via

xk+= Axk. (.)

(12)

Definition . An algorithmic operator A:His said to bebounded pertur-bations resilient if the following is true. If Algorithm . generates sequences {xk}

k=withx∈that converge to points in, then any sequence{yk}∞k=starting from

anyy, generated by

yk+= Ayk+λkvk

for allk≥, (.)

also converges to a point in, provided that (i) the sequence{vk}

k= is bounded, and

(ii) the scalars {λk}∞k= are such that λk ≥  for all k≥ , and

k=λk < +∞, and (iii)yk+λkvkfor allk≥.

Definition . is nontrivial only if=H, in which condition (iii) is enforced in the su-periorized version of the basic algorithm, see step (xiv) in the ‘Susu-periorized Version of Algorithm P’ in ([], p.) and step () in ‘Superiorized Version of the ML-EM Algo-rithm’ in ([], Subsection II.B). This will be the case in the present work.

Treating the extragradient method as the Basic Algorithm A, our strategy is to first prove convergence of the iterative step (.) with bounded perturbations. We show next how the convergence of this yields BPR according to Definition ..

A superiorized version of any Basic Algorithm employs the perturbed version of the Ba-sic Algorithm as in (.). A certificate to do so in the superiorization method, see [], is gained by showing that the Basic Algorithm is BPR. Therefore, proving the BPR of an al-gorithm is the first step toward superiorizing it. This is done for the extragradient method in the next subsection.
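To make the perturbed iteration y^{k+1} = A(y^k + λ_k v^k) concrete, here is an illustrative sketch (not taken from the paper) of how bounded perturbations could be used proactively in the spirit of superiorization. All names are assumptions: basic_step plays the role of the algorithmic operator A, phi_grad of a gradient of the target function φ, λ_k = a^k is a summable sequence, and condition (iii) is taken to hold automatically (e.g., the additional set is the whole space).

```python
import numpy as np

def superiorized_run(y0, basic_step, phi_grad, a=0.99, n_iter=200):
    """Sketch: before each application of the basic operator A (basic_step),
    nudge the iterate along a normalized nonascending direction of phi.
    The directions v^k are bounded (unit norm) and the steps a**k are summable,
    matching conditions (i) and (ii) of the BPR definition."""
    y = np.asarray(y0, dtype=float)
    for k in range(n_iter):
        g = phi_grad(y)
        gnorm = np.linalg.norm(g)
        v = -g / gnorm if gnorm > 0 else np.zeros_like(y)   # bounded direction v^k
        y = basic_step(y + (a ** k) * v)                     # y^{k+1} = A(y^k + lam_k v^k)
    return y
```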

4.1 The BPR of the extragradient method

In this subsection, we investigate the bounded perturbation resilience of the extragradient method whose iterative step is given by (.).

To this end, we treat the right-hand side of (.) as the algorithmic operator A of Definition ., namely, we define, for all k ≥ 0,

A(x^k) = P_C(x^k − γ_k F(P_C(x^k − γ_k F(x^k)))), (.)

and identify the solution set Ω with the solution set of the variational inequality (.) and identify the additional set Θ with C.

According to Definition ., we need to show the convergence of the sequence {x^k}_{k=0}^∞ that, starting from any x^0 ∈ C, is generated by

x^{k+1} = P_C((x^k + λ_k v^k) − γ_k F(P_C((x^k + λ_k v^k) − γ_k F(x^k + λ_k v^k)))), (.)

which can be rewritten as

y^k = P_C((x^k + λ_k v^k) − γ_k F(x^k + λ_k v^k)),
x^{k+1} = P_C((x^k + λ_k v^k) − γ_k F(y^k)), (.)

where γ_k = σρ^{m_k}, σ > 0, ρ ∈ (0, 1) and m_k is the smallest nonnegative integer such that

γ_k‖F(x^k + λ_k v^k) − F(y^k)‖ ≤ μ‖x^k + λ_k v^k − y^k‖, μ ∈ (0, 1). (.)

The sequences {v^k}_{k=0}^∞ and {λ_k}_{k=0}^∞ obey conditions (i) and (ii) in Definition ., respectively, and also (iii) in Definition . is satisfied.

The next theorem establishes the bounded perturbation resilience of the extragradient method. The proof idea is to build a relationship between BPR and the convergence of the iterative step (.).

Theorem . Assume that Conditions .-. hold. Assume that the sequence {v^k}_{k=0}^∞ is bounded, and the scalars {λ_k}_{k=0}^∞ are such that λ_k ≥ 0 for all k ≥ 0, and Σ_{k=0}^∞ λ_k < +∞. Then the sequence {x^k}_{k=0}^∞ generated by (.) and (.) converges weakly to a solution of the variational inequality (.).

Proof Take x* ∈ SOL(C, F). From Σ_{k=0}^∞ λ_k < +∞ and the boundedness of {v^k}_{k=0}^∞, we have

Σ_{k=0}^∞ λ_k‖v^k‖ < +∞, (.)

which means

lim_{k→∞} λ_k‖v^k‖ = 0. (.)

So, we may assume λ_k‖v^k‖ ∈ [0, (1 − μ − ν)/2), where ν ∈ [0, 1 − μ). Identifying e_1^k and e_2^k with λ_k v^k in (.) and (.) and using (.), we get

‖x^{k+1} − x*‖² ≤ ‖x^k − x*‖² + 2λ_k‖v^k‖‖x^{k+1} − x*‖ − ‖x^k − y^k‖² − ‖y^k − x^{k+1}‖² + 2⟨x^k − γ_k F(y^k) − y^k, x^{k+1} − y^k⟩. (.)

From x^{k+1} ∈ C, the definition of y^k and Lemma ., we have

⟨y^k − x^k − λ_k v^k + γ_k F(x^k + λ_k v^k), x^{k+1} − y^k⟩ ≥ 0.

So, we obtain

⟨x^k − γ_k F(y^k) − y^k, x^{k+1} − y^k⟩ ≤ γ_k⟨F(x^k + λ_k v^k) − F(y^k), x^{k+1} − y^k⟩ − λ_k⟨v^k, x^{k+1} − y^k⟩. (.)

We have

γ_k⟨F(x^k + λ_k v^k) − F(y^k), x^{k+1} − y^k⟩
 ≤ γ_k‖F(x^k + λ_k v^k) − F(y^k)‖‖x^{k+1} − y^k‖
 ≤ μ‖x^k + λ_k v^k − y^k‖‖x^{k+1} − y^k‖
 ≤ μ(‖x^k − y^k‖ + λ_k‖v^k‖)‖x^{k+1} − y^k‖
 ≤ (μ/2)‖x^k − y^k‖² + (1/2)(μ + λ_k‖v^k‖)‖x^{k+1} − y^k‖² + (μ/2)λ_k‖v^k‖. (.)

Similar to (.), we can show

–λk

vk,xk+–ykλkvk+λkvkxk+–yk

. (.)

Combining (.)-(.), we get

xkγkF

ykyk,xk+–yk

μxkyk+μ+ λkvkxk+–yk+

 +μλkvk

μxkyk+ ( –ν)xk+–yk+ λkvk, (.)

where the last inequality comes fromλkvk< ( –μ)/ andμ< . Substituting (.) into (.), we get

xk+–x∗≤xkx∗– ( –μ)xkyk–νxk+–yk+ λkvk

+xk+–x∗. (.)

Following the proof line of Theorem ., we get{xk}

k=weakly converges to a solution of

the variational equality (.).

By using Theorems . and ., we obtain the convergence rate of the extragradient method with bounded perturbations.

Theorem . Assume that Conditions .-. hold. Assume that the sequence {v^k}_{k=0}^∞ is bounded, and the scalars {λ_k}_{k=0}^∞ are such that λ_k ≥ 0 for all k ≥ 0, and Σ_{k=0}^∞ λ_k < +∞. Let the sequences {x^k}_{k=0}^∞ and {y^k}_{k=0}^∞ be generated by (.) and (.). For any integer t > 0, we have y_t ∈ C which satisfies

⟨F(x), y_t − x⟩ ≤ (1/(2Υ_t))(‖x^0 − x‖² + M(x)), ∀x ∈ C, (.)

where

y_t = (1/Υ_t) Σ_{k=0}^t γ_k y^k, Υ_t = Σ_{k=0}^t γ_k, (.)

and

M(x) = sup_k max{2‖x^{k+1} − y^k‖, 2‖x^{k+1} − x‖} · Σ_{k=0}^∞ λ_k‖v^k‖. (.)

4.2 Construction of the inertial extragradient methods by BPR

In this subsection, we construct two classes of inertial extragradient methods by using BPR, i.e., identifying e_i^k, i = 1, 2, and λ_k, v^k with special values.

Since inertial-type methods can speed up the original algorithms without the inertial effects, recently there has been increasing interest in studying inertial-type algorithms (see, e.g., [–]). The authors of [] introduced an inertial extragradient method as follows:

w^k = x^k + α_k(x^k − x^{k−1}),
y^k = P_C(w^k − γF(w^k)),
x^{k+1} = (1 − λ_k)w^k + λ_k P_C(w^k − γF(y^k)) (.)

for each k ≥ 1, where γ ∈ (0, 1/L), {α_k} is nondecreasing with α_1 = 0 and 0 ≤ α_k ≤ α < 1 for each k ≥ 1, and λ, σ, δ > 0 are such that

δ > α[(1 + γL)α(1 + α) + (1 − γL)ασ + σ(1 + γL)]/(1 − γL) (.)

and

0 < λ ≤ λ_k ≤ [δ(1 − γL) − α[(1 + γL)α(1 + α) + (1 − γL)ασ + σ(1 + γL)]] / [δ[(1 + γL)α(1 + α) + (1 − γL)ασ + σ(1 + γL)]],

where L is the Lipschitz constant of F.

Based on the iterative step (.), we construct the following inertial extragradient method:

y^k = P_C(x^k − γ_k F(x^k) + α_k^{(1)}(x^k − x^{k−1})),
x^{k+1} = P_C(x^k − γ_k F(y^k) + α_k^{(2)}(x^k − x^{k−1})), (.)

where

α_k^{(i)} = β_k^{(i)}/‖x^k − x^{k−1}‖ if ‖x^k − x^{k−1}‖ > 1, and α_k^{(i)} = β_k^{(i)} if ‖x^k − x^{k−1}‖ ≤ 1, i = 1, 2. (.)

Theorem . Assume that Conditions.-.hold.Assume that the sequences{βk(i)}∞k=,

i= , ,satisfyk=βk(i)<∞,i= , .Then the sequence{xk}

k=generated by the inertial

ex-tragradient method(.)converges weakly to a solution of the variational inequality(.).

Proof Leteki =βk(i)vk,i= , , where

vk=

xkxk–

xkxk–, ifxkxk–> ,i= , ,

xkxk–, ifxkxk– ≤. (.)

It is obvious thatvk. So, it follows that{ek

i},i= , , satisfy (.) from the condition

on{βk(i)}. Using Theorem ., we complete the proof.

Remark . From (.), we havexkxk– for big enoughk, that is,α(i)

k =β

(i)

(16)

Using the extragradient method with bounded perturbations (.), we construct the following inertial extragradient method:

y^k = P_C(x^k + α_k(x^k − x^{k−1}) − γ_k F(x^k + α_k(x^k − x^{k−1}))),
x^{k+1} = P_C(x^k + α_k(x^k − x^{k−1}) − γ_k F(y^k)), (.)

where

α_k = β_k/‖x^k − x^{k−1}‖ if ‖x^k − x^{k−1}‖ > 1, and α_k = β_k if ‖x^k − x^{k−1}‖ ≤ 1. (.)

We extend Theorem . to the convergence of the inertial extragradient method (.).

Theorem . Assume that Conditions .-. hold. Assume that the sequence {β_k}_{k=0}^∞ satisfies Σ_{k=0}^∞ β_k < ∞. Then the sequence {x^k}_{k=0}^∞ generated by the inertial extragradient method (.) converges weakly to a solution of the variational inequality (.).

Remark . The inertial parameter α_k in the inertial extragradient method (.) is bigger than that of the inertial extragradient method (.). The inertial extragradient method (.) becomes the inertial extragradient method (.) when λ_k = 1.

5 The extension to the subgradient extragradient method

In this section, we generalize the results on the extragradient method proposed in the previous sections to the subgradient extragradient method.

Censoret al. [] presented the subgradient extragradient method (.). In their method the step size is fixedγ ∈(, /L), whereL is a Lipschitz constant ofF. So, in order to determine the stepsizeγk, one needs first calculate (or estimate)L, which might be difficult or even impossible in general. So, in order to overcome this, the Armijo-like search rule can be used:

γkF

xkFykμxkyk, μ∈(, ). (.)

To discuss the convergence of the subgradient extragradient method, we make the fol-lowing assumptions.

Condition . The mappingFis monotone onH,i.e.,

f(x) –f(y),xy≥, ∀x,yH. (.)

Condition . The mappingFis Lipschitz continuous onHwith the Lipschitz constant

L> ,i.e.,

F(x) –F(y)≤Lxy, ∀x,yH. (.)

(17)

Theorem . Assume that Conditions., .and.hold. Then the sequence{xk}k=

generated by the subgradient extragradient methods(.)and(.)weakly converges to a solution of the variational inequality(.).

5.1 The subgradient extragradient method with outer perturbations

In this subsection, we present the subgradient extragradient method with outer perturbations.

Algorithm . The subgradient extragradient method with outer perturbations:

Step 0: Select a starting point x^0 ∈ H and set k = 0.
Step 1: Given the current iterate x^k, compute

y^k = P_C(x^k − γ_k F(x^k) + e_1(x^k)), (.)

where γ_k = σρ^{m_k}, σ > 0, ρ ∈ (0, 1) and m_k is the smallest nonnegative integer such that (see [])

γ_k‖F(x^k) − F(y^k)‖ ≤ μ‖x^k − y^k‖, μ ∈ (0, 1). (.)

Construct the set

T_k := {w ∈ H | ⟨x^k − γ_k F(x^k) + e_1(x^k) − y^k, w − y^k⟩ ≤ 0}, (.)

and calculate

x^{k+1} = P_{T_k}(x^k − γ_k F(y^k) + e_2(x^k)). (.)

Step 2: If x^k = y^k, then stop. Otherwise, set k ← k + 1 and return to Step 1.

Denote e_i^k := e_i(x^k), i = 1, 2. The sequences of perturbations {e_i^k}_{k=0}^∞, i = 1, 2, are assumed to be summable, i.e.,

Σ_{k=0}^∞ ‖e_i^k‖ < +∞, i = 1, 2. (.)
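For illustration, here is a sketch (not from the paper) of one way the perturbed subgradient extragradient algorithm could be realized in R^n, using the closed-form projection onto the half-space T_k. As before, proj_C, e1 and e2 are assumed interfaces for the projection onto C and the summable perturbation terms.

```python
import numpy as np

def perturbed_subgradient_extragradient(x0, F, proj_C, e1, e2, sigma=1.0,
                                        rho=0.5, mu=0.9, n_iter=500):
    """Sketch: the second projection is onto the half-space
    T_k = {w : <x^k - gamma_k F(x^k) + e_1(x^k) - y^k, w - y^k> <= 0}."""
    x = np.asarray(x0, dtype=float)
    for k in range(n_iter):
        gamma = sigma
        while True:                                   # Armijo-like backtracking
            y = proj_C(x - gamma * F(x) + e1(x, k))
            if gamma * np.linalg.norm(F(x) - F(y)) <= mu * np.linalg.norm(x - y):
                break
            gamma *= rho
        a = (x - gamma * F(x) + e1(x, k)) - y         # normal vector of T_k
        z = x - gamma * F(y) + e2(x, k)
        viol = a @ (z - y)
        x = z - (viol / (a @ a)) * a if viol > 0 else z
    return x
```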

Following the proof of Theorems . and ., we get the convergence analysis and convergence rate of Algorithm ..

Theorem . Assume that Conditions ., . and . hold. Then the sequence {x^k}_{k=0}^∞ generated by Algorithm . converges weakly to a solution of the variational inequality (.).

Theorem . Assume that Conditions ., . and . hold. Let the sequences {x^k}_{k=0}^∞ and {y^k}_{k=0}^∞ be generated by Algorithm .. For any integer t > 0, we have y_t ∈ C which satisfies

⟨F(x), y_t − x⟩ ≤ (1/(2Υ_t))(‖x^0 − x‖² + M(x)), ∀x ∈ C, (.)

where

y_t = (1/Υ_t) Σ_{k=0}^t γ_k y^k, Υ_t = Σ_{k=0}^t γ_k, (.)

and

M(x) = sup_k max{2‖x^{k+1} − y^k‖, 2‖x^{k+1} − x‖} · Σ_{k=0}^∞ e^k. (.)

5.2 The BPR of the subgradient extragradient method

In this subsection, we investigate the bounded perturbation resilience of the subgradient extragradient method (.).

To this end, we treat the right-hand side of (.) as the algorithmic operator A of Definition ., namely, we define, for all k ≥ 0,

A(x^k) = P_{T(x^k)}(x^k − γ_k F(P_C(x^k − γ_k F(x^k)))), (.)

where γ_k satisfies (.) and

T(x^k) = {w ∈ H | ⟨x^k − γ_k F(x^k) − y^k, w − y^k⟩ ≤ 0}. (.)

Identify the solution set Ω with the solution set of the variational inequality (.) and identify the additional set Θ with C.

According to Definition ., we need to show the convergence of the sequence {x^k}_{k=0}^∞ that, starting from any x^0 ∈ H, is generated by

x^{k+1} = P_{T(x^k+λ_k v^k)}((x^k + λ_k v^k) − γ_k F(P_C((x^k + λ_k v^k) − γ_k F(x^k + λ_k v^k)))), (.)

which can be rewritten as

y^k = P_C((x^k + λ_k v^k) − γ_k F(x^k + λ_k v^k)),
T(x^k + λ_k v^k) = {w ∈ H | ⟨(x^k + λ_k v^k) − γ_k F(x^k + λ_k v^k) − y^k, w − y^k⟩ ≤ 0},
x^{k+1} = P_{T(x^k+λ_k v^k)}((x^k + λ_k v^k) − γ_k F(y^k)), (.)

where γ_k = σρ^{m_k}, σ > 0, ρ ∈ (0, 1) and m_k is the smallest nonnegative integer such that

γ_k‖F(x^k + λ_k v^k) − F(y^k)‖ ≤ μ‖x^k + λ_k v^k − y^k‖, μ ∈ (0, 1). (.)

The sequences {v^k}_{k=0}^∞ and {λ_k}_{k=0}^∞ obey conditions (i) and (ii) in Definition ., respectively, and also (iii) in Definition . is satisfied.

The next theorem establishes the bounded perturbation resilience of the subgradient extragradient method. Since its proof is similar to that of Theorem ., we omit it.

Theorem . Assume that Conditions ., . and . hold. Assume that the sequence {v^k}_{k=0}^∞ is bounded, and the scalars {λ_k}_{k=0}^∞ are such that λ_k ≥ 0 for all k ≥ 0, and Σ_{k=0}^∞ λ_k < +∞. Then the sequence {x^k}_{k=0}^∞ generated by (.) and (.) converges weakly to a solution of the variational inequality (.).

We also get the convergence rate of the subgradient extragradient methods with BP (.) and (.).

Theorem . Assume that Conditions ., . and . hold. Assume that the sequence {v^k}_{k=0}^∞ is bounded, and the scalars {λ_k}_{k=0}^∞ are such that λ_k ≥ 0 for all k ≥ 0, and Σ_{k=0}^∞ λ_k < +∞. Let the sequences {x^k}_{k=0}^∞ and {y^k}_{k=0}^∞ be generated by (.) and (.). For any integer t > 0, we have y_t ∈ C which satisfies

⟨F(x), y_t − x⟩ ≤ (1/(2Υ_t))(‖x^0 − x‖² + M(x)), ∀x ∈ C, (.)

where

y_t = (1/Υ_t) Σ_{k=0}^t γ_k y^k, Υ_t = Σ_{k=0}^t γ_k, (.)

and

M(x) = sup_k max{2‖x^{k+1} − y^k‖, 2‖x^{k+1} − x‖} · Σ_{k=0}^∞ λ_k‖v^k‖. (.)

5.3 Construction of the inertial subgradient extragradient methods by BPR

In this subsection, we construct two classes of inertial subgradient extragradient methods by using BPR, i.e., identifying e_i^k, i = 1, 2, and λ_k, v^k with special values.

Based on Algorithm ., we construct the following inertial subgradient extragradient method:

y^k = P_C(x^k − γ_k F(x^k) + α_k^{(1)}(x^k − x^{k−1})),
T_k := {w ∈ H | ⟨x^k − γ_k F(x^k) + α_k^{(1)}(x^k − x^{k−1}) − y^k, w − y^k⟩ ≤ 0},
x^{k+1} = P_{T_k}(x^k − γ_k F(y^k) + α_k^{(2)}(x^k − x^{k−1})), (.)

where γ_k satisfies (.) and

α_k^{(i)} = β_k^{(i)}/‖x^k − x^{k−1}‖ if ‖x^k − x^{k−1}‖ > 1, and α_k^{(i)} = β_k^{(i)} if ‖x^k − x^{k−1}‖ ≤ 1, i = 1, 2. (.)

Similar to the proof of Theorem ., we get the convergence of the inertial subgradient extragradient method (.).

Theorem . Assume that Conditions ., . and . hold. Assume that the sequences {β_k^{(i)}}_{k=0}^∞, i = 1, 2, satisfy Σ_{k=0}^∞ β_k^{(i)} < ∞, i = 1, 2. Then the sequence {x^k}_{k=0}^∞ generated by the inertial subgradient extragradient method (.) converges weakly to a solution of the variational inequality (.).

Using the subgradient extragradient method with bounded perturbations (.), we construct the following inertial subgradient extragradient method:

w^k = x^k + α_k(x^k − x^{k−1}),
y^k = P_C(w^k − γ_k F(w^k)),
T_k := {w ∈ H | ⟨w^k − γ_k F(w^k) − y^k, w − y^k⟩ ≤ 0},
x^{k+1} = P_{T_k}(w^k − γ_k F(y^k)), (.)

where γ_k = σρ^{m_k}, σ > 0, ρ ∈ (0, 1) and m_k is the smallest nonnegative integer such that

γ_k‖F(w^k) − F(y^k)‖ ≤ μ‖w^k − y^k‖, μ ∈ (0, 1), (.)

and

α_k = β_k/‖x^k − x^{k−1}‖ if ‖x^k − x^{k−1}‖ > 1, and α_k = β_k if ‖x^k − x^{k−1}‖ ≤ 1. (.)

We extend Theorem . to the convergence of the inertial subgradient extragradient method (.).

Theorem . Assume that Conditions ., . and . hold. Assume that the sequence {β_k}_{k=0}^∞ satisfies Σ_{k=0}^∞ β_k < ∞. Then the sequence {x^k}_{k=0}^∞ generated by the inertial subgradient extragradient method (.) converges weakly to a solution of the variational inequality (.).

6 Numerical experiments

In this section, we provide three examples to compare the inertial extragradient method (.) (iEG), the inertial extragradient method (.) (iEG1), the inertial extragradient method (.) (iEG2), the extragradient method (.) (EG), the inertial subgradient extragradient method (.) (iSEG1), the inertial subgradient extragradient method (.) (iSEG2) and the subgradient extragradient method (.) (SEG).

In the first example, we consider a typical sparse signal recovery problem. We choose the following set of parameters. Take σ = , ρ = 0. and μ = 0.. Set

α_k = α_k^{(i)} = 1/k² if ‖x^k − x^{k−1}‖ ≤ 1 (.)

in the inertial extragradient methods (.) and (.), and the inertial subgradient extragradient methods (.) and (.). Choose α_k = 0. and λ_k = 0. in the inertial extragradient method (.).

Example . Letx∈Rnbe aK-sparse signal,Kn. The sampling matrixARm×n

(m<n) is stimulated by the standard Gaussian distribution and a vectorb=Ax+e, where

(21)

It is well known that the sparse signal x can be recovered by solving the following

LASSO problem []:

min

xRn

Axb

 

s.t. x≤t, (.)

wheret> . It is easy to see that the optimization problem (.) is a special case of the vari-ational inequality problem (.), whereF(x) =AT(Axb) andC={x| x

≤t}. We can

use the proposed iterative algorithms to solve the optimization problem (.). Although the orthogonal projection onto the closed convex setCdoes not have a closed-form so-lution, the projection operatorPC can be precisely computed in a polynomial time. We include the details of computing PC in Appendix. We conduct plenty of simulations to compare the performances of the proposed iterative algorithms. The following inequality was defined as the stopping criterion:

xk+–xk,

where>  is a given small constant. ‘Iter’ denotes the iteration numbers. ‘Obj’ represents the objective function value and ‘Err’ is the -norm error between the recovered signal and the trueK-sparse signal. We divide the experiments into two parts. One task is to recover the sparse signalxfrom noise observation vectorb, and the other is to recover

[image:21.595.116.479.487.733.2]

the sparse signal from noiseless data b. For the noiseless case, the obtained numerical results are reported in Table . To visually view the results, Figure  shows the recovered signal compared with the true signalxwhenK= . We can see from Figure  that the
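As a brief illustration of this setup (not taken from the paper), the VI mapping F(x) = A^T(Ax − b) and its Lipschitz constant can be formed as follows; the solver and projection routines referenced in the commented usage are the sketches given earlier and the ℓ1-ball projection described in the Appendix, and all sizes shown are illustrative assumptions rather than the paper's exact configuration.

```python
import numpy as np

def make_lasso_vi(A, b):
    """F(x) = A^T (A x - b), the VI mapping associated with the LASSO problem;
    its Lipschitz constant is the spectral norm of A^T A, so a fixed step size
    can be taken in (0, 1/L)."""
    def F(x):
        return A.T @ (A @ x - b)
    L = np.linalg.norm(A.T @ A, 2)
    return F, L

# usage sketch (illustrative sizes):
# rng = np.random.default_rng(0)
# A = rng.standard_normal((240, 1024))
# x_true = np.zeros(1024)
# x_true[rng.choice(1024, 20, replace=False)] = rng.standard_normal(20)
# b = A @ x_true
# F, L = make_lasso_vi(A, b)
# proj_C = lambda x: proj_l1_ball(x, t=np.abs(x_true).sum())   # see the Appendix sketch
```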

Table 1 Numerical results obtained by the proposed iterative algorithms when m = 240, n = 1,024 in the noiseless case

K-sparse signal  Methods  |  ε = 10^−4: Iter  Obj  Err  |  ε = 10^−6: Iter  Obj  Err

K= 20 EG 444 9.7346e-4 0.0080 817 9.6625e-8 7.9856e-5

SEG 444 9.7272e-4 0.0080 817 9.6555e-8 7.9827e-5

iEG 374 6.2389e-4 0.0064 675 6.3456e-8 6.4715e-5

iEG1 159 7.0799e-5 0.0021 263 7.4280e-9 2.2041e-5

iEG2 158 8.3897e-5 0.0023 273 1.0889e-8 2.6809e-5

iSEG1 415 8.9563e-4 0.0076 787 2.3571e-7 5.2470e-5

iSEG2 414 9.2167e-4 0.0077 760 9.1586e-8 7.7275e-5

K= 30 EG 1,285 0.0035 0.0281 2,583 3.4535e-7 2.8035e-4

SEG 1,285 0.0035 0.0281 2,583 3.4534e-7 2.8035e-4

iEG 1,091 0.0023 0.0227 2,144 2.2732e-7 2.2745e-4

iEG1 532 3.7493e-4 0.0092 944 3.7522e-8 9.2287e-5

iEG2 535 3.7961e-4 0.0093 956 4.3181e-8 9.3120e-5

iSEG1 1,176 0.0031 0.0266 2,351 3.1038e-7 2.6137e-4

iSEG2 1,176 0.0031 0.0266 2,346 3.1635e-7 2.6784e-4

K= 40 EG 1,729 0.0050 0.0405 3,599 5.0237e-7 4.0488e-4

SEG 1,729 0.0050 0.0405 3,599 5.0228e-7 4.0484e-4

iEG 1,473 0.0033 0.0328 2,990 3.3182e-7 3.2905e-4

iEG1 744 5.4838e-4 0.0134 1,361 5.5456e-8 1.3440e-4

iEG2 745 5.4807e-4 0.0134 1,355 6.4785e-8 1.4191e-4

iSEG1 1,570 0.0045 0.0384 3,246 4.5079e-7 3.8146e-4


Figure 1 Comparison of the different methods for sparse signal recovery. (a1) is the true sparse signal, (a2)-(a8) are the recovered signals vs the true signal by 'EG', 'SEG', 'iEG', 'iEG1', 'iEG2', 'iSEG1' and 'iSEG2', respectively.

Figure 2 Comparison of the objective function value versus the iteration numbers of different methods.

We can see from Figure 1 that the recovered signal is the same as the true signal. Further, Figure 2 presents the objective function value versus the iteration numbers.


Table 2 Numerical results for the proposed iterative algorithms with different noise value β

Variances  Methods  |  ε = 10^−4: Iter  Obj  Err  |  ε = 10^−6: Iter  Obj  Err

β= 0.01 EG 1,264 0.0092 0.0317 2,192 0.0061 0.0131

SEG 1,264 0.0092 0.0317 2,192 0.0061 0.0131

iEG 1,070 0.0081 0.0272 1,812 0.0061 0.0131

iEG1 519 0.0063 0.0164 788 0.0061 0.0130

iEG2 516 0.0063 0.0166 786 0.0061 0.0130

iSEG1 1,156 0.0089 0.0305 1,995 0.0061 0.0131

iSEG2 1,157 0.0089 0.0304 1,990 0.0061 0.0131

β= 0.02 EG 1,274 0.0163 0.0387 2,086 0.0142 0.0272

SEG 1,274 0.0163 0.0387 2,086 0.0142 0.0272

iEG 1,070 0.0154 0.0356 1,728 0.0142 0.0272

iEG1 492 0.0144 0.0300 756 0.0142 0.0272

iEG2 495 0.0143 0.0300 759 0.0142 0.0272

iSEG1 1,163 0.0161 0.0378 1,899 0.0142 0.0272

iSEG2 1,161 0.0161 0.0380 1,895 0.0142 0.0272

β= 0.05 EG 1,190 0.1012 0.0749 1,869 0.0991 0.0651

SEG 1,190 0.1012 0.0749 1,869 0.0991 0.0651

iEG 996 0.1005 0.0727 1,542 0.0991 0.0650

iEG1 460 0.0993 0.0677 670 0.0991 0.0650

iEG2 461 0.0993 0.0675 665 0.0991 0.0650

iSEG1 1,084 0.1010 0.0742 1,704 0.0991 0.0651

iSEG2 1,084 0.1010 0.0742 1,704 0.0991 0.0651

Figure 3 Comparison of the objective function value versus the iteration numbers of different methods in the noise case of β = 0.02.

Example . LetF:RRbe defined by

F(x,y) =x+ y+sin(x), –x+ y+sin(y), ∀x,y∈R. (.)

The authors [] proved that Fis Lipschitz continuous withL=√ and -strongly monotone. Therefore the variational inequality (.) has a unique solution, and (, ) is its solution.

LetC={x∈R|e

≤xe}, wheree= (–, –) ande= (, ). Take the initial

pointx= (–, )∈R. Since (, ) is the unique solution of the variational inequality


Figure 4 Comparison of the different methods for sparse signal recovery. (a1) is the true sparse signal, (a2)-(a8) are the recovered signals vs the true signal by 'EG', 'SEG', 'iEG', 'iEG1', 'iEG2', 'iSEG1' and 'iSEG2' in the noise case of β = 0.02, respectively.

Example . LetF:Rn→Rndefined byF(x) =Ax+b, whereA=ZTZ,Z= (zij)n×nand

b= (bi)∈Rn, wherezij∈(, ) andbi∈(, ) are generated randomly.

It is easy to verify thatFisL-Lipschitz continuous andη-strongly monotone withL= max(eig(A)) andη=min(eig(A)).

LetC:={x∈Rn| xdr}, where the center

d∈(–, –, . . . , –), (, , . . . , )⊂Rn (.)

and radius r∈(, ) are randomly chosen. Take the initial pointx= (ci)∈Rn, where

ci∈[, ] is generated randomly. Setn= . Takeρ= . and other parameters are set the same values as in Example .. Although the variational inequality (.) has a unique solution, it is difficult to get the exact solution. So, denote byDk:=xk+–xk ≤–the stopping criterion.


Figure 5 Comparison of the number of iterations of different methods for Example 6.2.

Figure 6 Comparison of the number of iterations of different methods for Example 6.3.

7 Conclusions


Appendix

In this part, we present the details of computing the projection of a vector y ∈ R^n onto the ℓ_1-norm ball constraint. For convenience, we consider the projection onto the unit ℓ_1-norm ball first. Then we extend it to the general ℓ_1-norm ball constraint.

The projection onto the unit ℓ_1-norm ball is to solve the optimization problem

min_{x∈R^n} (1/2)‖x − y‖²  s.t.  ‖x‖_1 ≤ 1.

The above optimization problem is a typical constrained optimization problem; we consider solving it based on the Lagrangian method. Define the Lagrangian function L(x, λ) as

L(x, λ) = (1/2)‖x − y‖² + λ(‖x‖_1 − 1).

Let (x*, λ*) be the optimal primal and dual pair. It satisfies the KKT conditions

0 ∈ x* − y + λ* ∂‖x*‖_1,
λ*(‖x*‖_1 − 1) = 0,
λ* ≥ 0.

It is easy to check that if ‖y‖_1 ≤ 1, then x* = y and λ* = 0. In the following, we assume ‖y‖_1 > 1. Based on the KKT conditions, we obtain λ* > 0 and ‖x*‖_1 = 1. From the first-order optimality, we have x* = max{|y| − λ*, 0} ⊗ Sign(y), where ⊗ represents element-wise multiplication and Sign(·) denotes the sign function, i.e., Sign(y_i) = 1 if y_i ≥ 0; otherwise Sign(y_i) = −1.

Define a function f(λ) = ‖x(λ)‖_1, where x(λ) = S_λ(y) = max{|y| − λ, 0} ⊗ Sign(y). We prove the following lemma.

Lemma A. For the function f(λ), there must exist λ* > 0 such that f(λ*) = 1.

Proof Since f(0) = ‖S_0(y)‖_1 = ‖y‖_1 > 1. Let λ^+ = max_{1≤i≤n}{|y_i|}; then f(λ^+) = 0 < 1. Notice that f(λ) is decreasing and convex. Therefore, by the intermediate value theorem, there exists λ* > 0 such that f(λ*) = 1. □

To find λ* such that f(λ*) = 1, we follow the steps below.

Step 1. Define a vector ȳ with the same elements as |y|, sorted in descending order. That is, ȳ_1 ≥ ȳ_2 ≥ ··· ≥ ȳ_n ≥ 0.

Step 2. For every k = 1, 2, ..., n, solve the equation Σ_{i=1}^k (ȳ_i − λ) = 1. Stop the search when the solution λ* belongs to the interval [ȳ_{k+1}, ȳ_k].

In conclusion, the optimal x* can be computed by x* = max{|y| − λ*, 0} ⊗ Sign(y). The projection onto the general ℓ_1-norm ball {x | ‖x‖_1 ≤ t}, t > 0, reduces to the unit-ball case by scaling: P_{{‖x‖_1≤t}}(y) = y if ‖y‖_1 ≤ t, and P_{{‖x‖_1≤t}}(y) = t P_{{‖x‖_1≤1}}(y/t) otherwise.
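The following is a minimal sketch (not from the paper) of this projection procedure in NumPy. It follows the sorting-and-thresholding steps above, handling a general radius t directly instead of rescaling to the unit ball; the name proj_l1_ball is an illustrative assumption.

```python
import numpy as np

def proj_l1_ball(y, t=1.0):
    """Euclidean projection onto {x : ||x||_1 <= t}: soft-thresholding
    x* = max(|y| - lam, 0) * sign(y) with lam chosen so that ||x*||_1 = t."""
    y = np.asarray(y, dtype=float)
    if np.abs(y).sum() <= t:
        return y.copy()                       # already feasible: x* = y, lam* = 0
    u = np.sort(np.abs(y))[::-1]              # |y| sorted in descending order
    css = np.cumsum(u)
    k = np.arange(1, y.size + 1)
    lam_cand = (css - t) / k                  # candidate multipliers solving sum_{i<=k}(u_i - lam) = t
    k_star = np.nonzero(u > lam_cand)[0][-1]  # largest k whose candidate lies below u_k
    lam = lam_cand[k_star]
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)
```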

