
RESEARCH  Open Access

A projection descent method for solving variational inequalities

Abdellah Bnouhachem^{1,2}, Qamrul Hasan Ansari^{3,4} and Ching-Feng Wen^{5*}

*Correspondence: cfwen@kmu.edu.tw
^5 Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung, 807, Taiwan. Full list of author information is available at the end of the article.

Abstract

In this paper, we propose a descent direction method for solving variational inequalities. A new iterate is obtained by searching for the optimal step size along a new descent direction, which is given by a linear combination of two descent directions. Under suitable conditions, the global convergence of the proposed method is studied. Two numerical experiments are presented to illustrate the efficiency of the proposed method.

MSC: 49J40; 65N30

Keywords: variational inequalities; self-adaptive rules; pseudomonotone operators; projection method

1 Introduction

The theory of variational inequalities is a well-established subject in nonlinear analysis and optimization. It started in the early sixties with the pioneering work of Fichera [1] and Stampacchia [2] (see also [3]). Since then it has been extended and studied in several directions. One of the most important aspects of the theory of variational inequalities is the solution method. Several solution methods have been proposed and analyzed in the literature (see, for example, [4-10] and the references therein). Fixed point theory plays an important role in developing various kinds of algorithms for solving variational inequalities. By using the projection operator technique, one can easily establish the equivalence between variational inequalities and fixed point problems. This alternative equivalent formulation provides an elegant solution method, known as the projection gradient method, for solving variational inequalities. The convergence of this method requires the strong monotonicity and Lipschitz continuity of the underlying operator. Many applications do not satisfy these strong conditions, and as a result the projection gradient method is not applicable to such problems. Korpelevich [11] modified the projection gradient method to overcome these difficulties and introduced the so-called extragradient method. It generates the iterates according to the following recursion:

u^{k+1} = P_K[u^k − ρT(ū^k)],

where

ū^k = P_K[u^k − ρT(u^k)]

and ρ > 0 is a fixed parameter. This method overcomes the difficulty arising in the projection gradient method by performing an additional forward step and projection at each iteration, that is, a double projection. It was proved in [11] that the extragradient method is globally convergent if T is monotone and Lipschitz continuous on K and 0 < ρ < 1/L, where L is the Lipschitz constant of T. When the operator T is not Lipschitz continuous, or it is Lipschitz continuous but the constant L is not known, the fixed parameter ρ must be replaced by a step size computed through an Armijo-like line search procedure, with a new projection needed for each trial point (see, e.g., [9, 19]), and this can be computationally very expensive. To overcome these difficulties, several modified projection and extragradient-type methods [12-22] have been suggested and developed for solving variational inequalities. It was shown in [14, 27] that three-step methods perform better than two-step and one-step methods for solving variational inequalities.
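To make the recursion concrete, the following is a minimal sketch of the extragradient iteration in Python/NumPy (the paper's own experiments were run in Matlab); the operator T and the projector proj_K are assumed to be supplied by the caller, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def extragradient(T, proj_K, u0, rho, tol=1e-7, max_iter=10_000):
    """Korpelevich's extragradient method for VI(T, K).

    T      : callable returning T(u)
    proj_K : callable, Euclidean projection onto K
    rho    : fixed step size; 0 < rho < 1/L for a monotone L-Lipschitz T
    """
    u = np.asarray(u0, dtype=float)
    for _ in range(max_iter):
        u_bar = proj_K(u - rho * T(u))        # prediction (forward) step
        if np.linalg.norm(u - u_bar) <= tol:  # small residual => near-solution
            return u_bar
        u = proj_K(u - rho * T(u_bar))        # correction step (double projection)
    return u

# For K = R^n_+ one may take proj_K = lambda v: np.maximum(v, 0.0).
```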

The aim of this paper is to develop an algorithm, inspired by Fu [16], for solving variational inequalities. More precisely, in the first method a new iterate is obtained by searching for the optimal step size along an integrated descent direction obtained as a linear combination of two descent directions; in the second method another optimal step length is employed to reach substantial progress in each iteration. It is proved theoretically that the lower bound of the progress obtained by the second method is greater than that obtained by the first one. Under certain conditions, the global convergence of the proposed methods is proved. Our results can be viewed as significant extensions of previously known results.

2 Preliminaries

Let K ⊂ R^n be a nonempty closed convex set and let T : K → R^n be an operator. The classical variational inequality problem, denoted by VI(T,K), is to find a vector u* ∈ K such that

⟨T(u*), v − u*⟩ ≥ 0, ∀v ∈ K. (1)

It is worth mentioning that the solution set S* of VI(T,K) is nonempty if T is continuous and K is compact.

It is well known that if K is a closed convex cone, then VI(T,K) is equivalent to the nonlinear complementarity problem of finding u* ∈ K such that

T(u*) ∈ K* and ⟨T(u*), u*⟩ = 0, (2)

where K* := {y ∈ R^n : ⟨y, x⟩ ≥ 0 for all x ∈ K} is the dual cone of K. For further details on variational inequalities and complementarity problems, we refer to [4-8] and the references therein.

In what follows, we always assume that the underlying operator T is continuous and pseudomonotone, that is,

⟨T(u), v − u⟩ ≥ 0 ⟹ ⟨T(v), v − u⟩ ≥ 0, ∀u, v ∈ K,

and that the solution set of problem (1), denoted by S*, is nonempty. The following results will be used in the sequel.

Lemma [] Let K⊂Rnbe a nonempty closed convex set and P

K(·)denote the projection

on K under the Euclidean norm,that is,PK(z) =arg min{zx:xK}.Then the following

statements hold:

(3)

(b) PK(z) –v≤ zv–zPK(z),∀z∈Rn,vK.

(c) PK(w) –PK(v)≤ wv,PK(w) –PK(v) ,∀v,w∈Rn. Lemma [] uis a solution of problem()if and only if

u∗=PK

u∗–ρTu∗, whereρ> . ()

From Lemma , it is clear thatuis a solution of () if and only ifuis a zero point of the function

e(u,ρ) :=uPK

uρT(u).

The next lemma shows that e(u,ρ) is a non-decreasing function, while e(uρ,ρ) is a non-increasing one with respect toρ.

Lemma [–] For all u∈Rnandρ>ρ> ,

eu,ρe(u,ρ) ()

and

e(u,ρ) ρ

e(u,ρ)

ρ . ()
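As a quick illustration of these facts, the residual e(u, ρ) is cheap to evaluate once P_K is available. The sketch below (Python; the toy affine operator, the projector for K = R^n_+ and all names are our assumptions, not the paper's) computes e(u, ρ) and lets one observe the monotonicity properties (4) and (5) numerically.

```python
import numpy as np

def proj_nonneg(v):
    """Euclidean projection onto K = R^n_+ (componentwise)."""
    return np.maximum(v, 0.0)

def residual(T, proj_K, u, rho):
    """e(u, rho) = u - P_K[u - rho*T(u)]; u solves VI(T, K) iff e(u, rho) = 0."""
    return u - proj_K(u - rho * T(u))

# Toy check of Lemma 3 with T(u) = M u + q, M positive semidefinite (monotone).
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
M, q = A.T @ A, rng.standard_normal(5)
T = lambda u: M @ u + q
u = rng.random(5)
for rho in (0.1, 0.5, 1.0, 2.0):
    e = np.linalg.norm(residual(T, proj_nonneg, u, rho))
    print(f"rho={rho:3.1f}  ||e||={e:8.4f}  ||e||/rho={e / rho:8.4f}")
# ||e|| should be non-decreasing and ||e||/rho non-increasing as rho grows.
```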

3 Algorithm and convergence results

In this section, we suggest and analyze two new methods for solving variational inequalities (1). For given u^k ∈ K and ρ_k > 0, each iteration of the first method consists of three steps: the first step produces the predictor ũ^k, the second step produces the predictor ū^k, and the third step produces the new iterate u^{k+1}.

Algorithm 1
Step 0. Given u^0 ∈ K, ε > 0, ρ_0 = 1, ν > 0, μ ∈ (0, √2), σ ∈ (0, 1), ζ ∈ (0, 1), η₁ ∈ (0, ζ), η₂ ∈ (ζ, ν), and let k = 0.
Step 1. If ‖e(u^k, ρ_k)‖ ≤ ε, then stop. Otherwise, go to Step 2.
Step 2. (1) For the given u^k ∈ K, compute the two predictors

ũ^k = P_K[u^k − ρ_k T(u^k)], (6a)
ū^k = P_K[ũ^k − ρ_k T(ũ^k)]. (6b)

(2) If ‖e(ũ^k, ρ_k)‖ ≤ ε, then stop. Otherwise, continue.
(3) If ρ_k satisfies both

r₁ := |ρ_k(⟨ũ^k − ū^k, T(u^k) − T(ũ^k)⟩ − ⟨u^k − ū^k, T(ũ^k) − T(ū^k)⟩)| / ‖ũ^k − ū^k‖² ≤ μ (7)

and

r₂ := ρ_k‖T(ũ^k) − T(ū^k)‖ / ‖ũ^k − ū^k‖ ≤ ν, (8)

then go to Step 3.
(4) Otherwise, perform an Armijo-like line search by reducing ρ_k,

ρ_k := ρ_k σ / max(r₁, 1), (9)

and go to Step 2(1).
Step 3. Take the new iterate u^{k+1} by setting

u^{k+1}(α_k) = P_K[u^k − α_k d(ũ^k, ū^k)], (10)

where

d(ũ^k, ū^k) = β₁ d₁(ũ^k, ū^k) + β₂ ρ_k T(ū^k) (11)

and

d₁(ũ^k, ū^k) := ũ^k − ū^k − ρ_k(T(ũ^k) − T(ū^k)). (12)

The step size α_k > 0 is specified in the analysis below (see (26) and (28)).
Step 4. Adaptive rule for choosing a suitable ρ_{k+1} as the starting prediction step size for the next iteration.
(1) Prepare a proper ρ_{k+1}:

ρ_{k+1} :=
  ρ_k ζ / r₂  if r₂ ≤ η₁,
  ρ_k ζ / r₂  if r₂ ≥ η₂,
  ρ_k         otherwise.

(2) Return to Step 1, with k replaced by k + 1.

Remark  (a) The proposed method can be viewed as a refinement and improvement of the method of Heet al.[] by performing an additional projection step at each itera-tion and another optimal step length is employed to reach substantial progress in each iteration.

(b) Ifβ=  andβ= , we obtain the method proposed in [].

(c) Ifβ=  andβ= , we obtain a descent method, the new iterate is obtained along a

descent directiond(u˜k,u¯k).

Remark  In (), ifσ>max(r, ), it indicates thatρkwill be too large for the next iteration and will increase the number of Armijo-like line searches. So, we chooseρkfor the next iteration to be only modestly smaller thanρkto avoid expensive implementations in the next iteration.

We now consider the criteria for α_k which ensure that u^{k+1}(α_k) is closer to the solution set than u^k. For this purpose, we define

Θ_k(α_k) := ‖u^k − u*‖² − ‖u^{k+1}(α_k) − u*‖². (13)

Theorem  Let u∗∈S∗.Then we have

(5)

where

Φ(αk) =ukuk+(αk)+ αk

uk+(αk) –u¯k,d

˜

uk,u¯k, ()

and

Ψ(αk) = αk(β+β)

uku¯k,du˜k,u¯kαk(β+β)d

˜

uk,u¯k.

Proof Since u* ∈ K, setting v = u* and z = u^k − α_k d(ũ^k, ū^k) in Lemma 1(b), we have

‖u^{k+1}(α_k) − u*‖² ≤ ‖u^k − u* − α_k d(ũ^k, ū^k)‖² − ‖u^k − u^{k+1}(α_k) − α_k d(ũ^k, ū^k)‖².

Using the definition of Θ_k(α_k), we get

Θ_k(α_k) ≥ ‖u^k − u^{k+1}(α_k)‖² + 2α_k⟨u^k − u*, d(ũ^k, ū^k)⟩ − 2α_k⟨u^k − u^{k+1}(α_k), d(ũ^k, ū^k)⟩
= ‖u^k − u^{k+1}(α_k)‖² + 2α_k⟨u^{k+1}(α_k) − u*, d(ũ^k, ū^k)⟩. (16)

For any solution u* ∈ S* of problem (1), we have ū^k ∈ K and hence

ρ_k⟨T(u*), ū^k − u*⟩ ≥ 0.

By the pseudomonotonicity of T, we obtain

ρ_k⟨T(ū^k), ū^k − u*⟩ ≥ 0. (17)

Substituting z = ũ^k − ρ_k T(ũ^k) and v = u* into Lemma 1(a), we get

⟨ũ^k − ρ_k T(ũ^k) − ū^k, ū^k − u*⟩ ≥ 0. (18)

Adding (17) and (18), and using the definition of d₁(ũ^k, ū^k), we have

⟨d₁(ũ^k, ū^k), ū^k − u*⟩ ≥ 0. (19)

Multiplying () by αkβand () by αkβ, then adding the resultants with (), we obtain

Θ(αk)≥ukuk+(αk)

+ αk

uk+(αk) –u¯k,d

˜

uk,u¯k

=Φ(αk)

=ukuk+(αk)+ αkβ

uk+(αk) –u¯k,d

˜

uk,u¯k

+ αkβ

uk+(αk) –u¯k,ρkT

¯

uk. ()

Note that ū^k = P_K[ũ^k − ρ_k T(ũ^k)]. We can apply Lemma 1(a) with z = ũ^k − ρ_k T(ũ^k) and v = u^{k+1}(α_k) to obtain

⟨u^{k+1}(α_k) − ū^k, ũ^k − ρ_k T(ũ^k) − ū^k⟩ ≤ 0. (21)

Multiplying () by αkβand then adding the resultant to () and using the definition

ofd(u˜k,u¯k), we have

Φ(αk)≥ukuk+(αk)

+ αkβ

uk+(αk) –u¯k,d

˜

uk,u¯k

+ αkβ

uk+(αk) –u¯k,d

˜

uk,u¯k

=ukuk+(αk)

+ αk(β+β)

uk+(αk) –u¯k,d

˜

uk,u¯k

=ukuk+(αk) –αk(β+β)d

˜

uk,u¯k

+ αk(β+β)

uku¯k,du˜k,u¯k

αk(β+β)d

˜

uk,u¯k

≥αk(β+β)

uku¯k,du˜k,u¯kαk(β+β)d

˜

uk,u¯k

=Ψ(αk),

and the theorem is proved.

Proposition  Assume that T is continuously differentiable,then we have

(i) Φ(α) = uk+(α) –u¯k,duk,u¯k) ;

(ii) Φ(α)is a non-increasing function with respect toα≥,and hence,Φ(α)is concave.

Proof For given u^k, ũ^k, ū^k ∈ K, let

h(α, y) := ‖y − u^k + α d(ũ^k, ū^k)‖² − α²‖d(ũ^k, ū^k)‖² − 2α⟨ū^k − u^k, d(ũ^k, ū^k)⟩. (22)

It is easy to see that the solution of the problem

min_y {h(α, y) : y ∈ K}

is y* = P_K[u^k − α d(ũ^k, ū^k)]. By substituting y* into (22) and simplifying, we obtain

Φ(α) = h(α, y)|_{y = P_K[u^k − α d(ũ^k, ū^k)]} = min_y {h(α, y) : y ∈ K}.

Hence Φ(α) is differentiable, and its derivative is given by

Φ′(α) = ∂h(α, y)/∂α |_{y = P_K[u^k − α d(ũ^k, ū^k)]}
= 2⟨u^{k+1}(α) − u^k + α d(ũ^k, ū^k), d(ũ^k, ū^k)⟩ − 2α‖d(ũ^k, ū^k)‖² − 2⟨ū^k − u^k, d(ũ^k, ū^k)⟩
= 2⟨u^{k+1}(α) − ū^k, d(ũ^k, ū^k)⟩,

and hence (i) is proved. We now establish the second assertion. Let ᾱ > α ≥ 0. We show that

Φ′(ᾱ) ≤ Φ′(α),

that is,

⟨u^{k+1}(ᾱ) − u^{k+1}(α), d(ũ^k, ū^k)⟩ ≤ 0. (23)

Setting z := u^k − ᾱ d(ũ^k, ū^k), v := u^{k+1}(α) and z := u^k − α d(ũ^k, ū^k), v := u^{k+1}(ᾱ) in Lemma 1(a), respectively, we get

⟨u^k − ᾱ d(ũ^k, ū^k) − u^{k+1}(ᾱ), u^{k+1}(α) − u^{k+1}(ᾱ)⟩ ≤ 0 (24)

and

⟨u^k − α d(ũ^k, ū^k) − u^{k+1}(α), u^{k+1}(ᾱ) − u^{k+1}(α)⟩ ≤ 0. (25)

By adding (24) and (25), we obtain

⟨u^{k+1}(ᾱ) − u^{k+1}(α), u^{k+1}(ᾱ) − u^{k+1}(α) + (ᾱ − α) d(ũ^k, ū^k)⟩ ≤ 0,

that is,

‖u^{k+1}(ᾱ) − u^{k+1}(α)‖² + (ᾱ − α)⟨u^{k+1}(ᾱ) − u^{k+1}(α), d(ũ^k, ū^k)⟩ ≤ 0.

It follows that

⟨u^{k+1}(ᾱ) − u^{k+1}(α), d(ũ^k, ū^k)⟩ ≤ −(1/(ᾱ − α))‖u^{k+1}(ᾱ) − u^{k+1}(α)‖² ≤ 0.

We obtain inequality (23) and complete the proof. □

Now, for the same kth approximate solution u^k, let

α_k¹ := arg max_α {Ψ(α) : α > 0}

and

α_k² := arg max_α {Φ(α) : α > 0}.

Since Ψ(α) is a quadratic function of α, it reaches its maximum at

α_k¹ = ⟨u^k − ū^k, d₁(ũ^k, ū^k)⟩ / ((β₁ + β₂)‖d₁(ũ^k, ū^k)‖²) (26)

with

Ψ(α_k¹) = α_k¹(β₁ + β₂)⟨u^k − ū^k, d₁(ũ^k, ū^k)⟩. (27)

To make α_k² easier to obtain, we compute it approximately by solving the following simple optimization problem:

α_k² = arg max_α {Φ(α) : 0 < α ≤ m α_k¹}, (28)

where m ≥ 1 is a fixed constant.

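Computationally, α_k¹ is available in closed form, and problem (28) can be approximated by a coarse one-dimensional search. The sketch below (Python; the grid search and all names are ours) evaluates Φ through its definition (14) and maximizes it over (0, mα_k¹].

```python
import numpy as np

def step_sizes(u, u_t, u_b, Tu_t, Tu_b, rho, beta1, beta2, proj_K, m=4, grid=50):
    """Return (alpha1, alpha2): the exact maximizer of Psi from (26) and a
    grid approximation of the maximizer of Phi over (0, m*alpha1], cf. (28)."""
    d1 = u_t - u_b - rho * (Tu_t - Tu_b)          # direction (12)
    d = beta1 * d1 + beta2 * rho * Tu_b           # direction (11)
    alpha1 = np.dot(u - u_b, d1) / ((beta1 + beta2) * np.dot(d1, d1))

    def phi(alpha):
        """Phi(alpha) per (14), with u^{k+1}(alpha) = P_K[u - alpha*d]."""
        u_new = proj_K(u - alpha * d)
        return np.dot(u - u_new, u - u_new) + 2.0 * alpha * np.dot(u_new - u_b, d)

    alphas = np.linspace(m * alpha1 / grid, m * alpha1, grid)
    alpha2 = max(alphas, key=phi)                 # crude search for (28)
    return alpha1, alpha2
```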

Based on Theorem 1 and Proposition 1, the following result can be proved easily.

Proposition  Letαk

andα

kbe defined by()and(),respectively,and let T be pseu-domonotone and continuously differentiable.Then

(i) ukuuk+(α

k) –u

Φ(α

k);

(ii) ukuuk+(α

k) –u

Ψ(α

k);

(iii) Φ(αk

)≥Ψ(α

k);

(iv) ifΦ(αk

) = ,thenu

kuuk+(α

k) –u

ukuk+(α

k) .

Remark  Let

ukI+αk

=PK

ukαk

d

uk,ρk

and

ukII+αk=PK

ukαkduk,ρk

represent the new iterates generated by the proposed method withαk=αkandαk=α

k,

respectively. Let

ΘI

αk=uku∗–ukI+αku∗

and

ΘII

αk

=uku∗–ukII+αk

u∗

measure the progresses made by the new iterates, respectively. By using Proposition , it is easy to show that

ΘI

αk

Ψαk

,

ΘII

αk

Φαk

and

Φαk

Ψαk

.

The above inequalities show that the proposed method withαk=αk∗is expected to make

more progress than the proposed method withαk=αkat each iteration, and so it explains

theoretically that the proposed method withαk=αk∗ outperforms the proposed method

withαk=αk.

In the next theorem, we show that α_k¹ and Ψ(α_k¹) are bounded away from zero; this is one of the keys to proving the global convergence results.

Theorem  Let ube a solution of problem().For given ukK,letu˜k,u¯kbe the predictors

produced by(a)and(b).Then

αk

≥

 –μ (β+β)( +ν)

(9)

and

Ψαk

≥( –μ) ( +ν) u˜

ku¯k. ()

Proof Note that ũ^k = P_K[u^k − ρ_k T(u^k)] and ū^k = P_K[ũ^k − ρ_k T(ũ^k)]. We can apply Lemma 1(c) with v = u^k − ρ_k T(u^k) and w = ũ^k − ρ_k T(ũ^k) to obtain

⟨u^k − ρ_k T(u^k) − (ũ^k − ρ_k T(ũ^k)), ũ^k − ū^k⟩ ≥ ‖ũ^k − ū^k‖².

By some manipulation, we obtain

⟨u^k − ũ^k, ũ^k − ū^k⟩ ≥ ‖ũ^k − ū^k‖² + ρ_k⟨ũ^k − ū^k, T(u^k) − T(ũ^k)⟩.

Then we have

⟨u^k − ũ^k, d₁(ũ^k, ū^k)⟩ = ⟨u^k − ũ^k, ũ^k − ū^k⟩ − ρ_k⟨u^k − ũ^k, T(ũ^k) − T(ū^k)⟩
≥ ‖ũ^k − ū^k‖² + ρ_k⟨ũ^k − ū^k, T(u^k) − T(ũ^k)⟩ − ρ_k⟨u^k − ũ^k, T(ũ^k) − T(ū^k)⟩. (31)

By using (7), (31) and the definition of d₁(ũ^k, ū^k), we get

⟨u^k − ū^k, d₁(ũ^k, ū^k)⟩ = ⟨u^k − ũ^k, d₁(ũ^k, ū^k)⟩ + ⟨ũ^k − ū^k, d₁(ũ^k, ū^k)⟩
≥ ‖ũ^k − ū^k‖² + ρ_k⟨ũ^k − ū^k, T(u^k) − T(ũ^k)⟩ − ρ_k⟨u^k − ũ^k, T(ũ^k) − T(ū^k)⟩ + ‖ũ^k − ū^k‖² − ρ_k⟨ũ^k − ū^k, T(ũ^k) − T(ū^k)⟩
= 2‖ũ^k − ū^k‖² + ρ_k(⟨ũ^k − ū^k, T(u^k) − T(ũ^k)⟩ − ⟨u^k − ū^k, T(ũ^k) − T(ū^k)⟩)
≥ (2 − μ)‖ũ^k − ū^k‖². (32)

Recalling the definition of d₁(ũ^k, ū^k) (see (12)) and applying criterion (8), it is easy to see that

‖d₁(ũ^k, ū^k)‖ ≤ ‖ũ^k − ū^k‖ + ρ_k‖T(ũ^k) − T(ū^k)‖ ≤ (1 + ν)‖ũ^k − ū^k‖. (33)

Moreover, by using (32) together with (33), we get

α_k¹ = ⟨u^k − ū^k, d₁(ũ^k, ū^k)⟩ / ((β₁ + β₂)‖d₁(ũ^k, ū^k)‖²) ≥ (2 − μ) / ((β₁ + β₂)(1 + ν)²) > 0, (34)

since μ ∈ (0, √2). By substituting (32) and (33) into (27), we get assertion (30), and the proof is completed. □

Lemma  Let ube a solution of problem()and let uk+(γ α

k)be the sequence generated

by Algorithmwithαk=αk∗orαk=α

k.Then u

kis bounded and

uk+(γ αk) –u

uku∗–cuku˜k, ()

where

c:=( –μ

)

( +ν) > .

Proof Ifαk=αk∗. It follows from (), () and () that

uk+γ αk

u∗≤uku∗–Φαk

uku∗–Ψαk

uku∗–( –μ

)

( +ν) u˜

ku¯k

.

Ifαk=αk, then we have uk+γ αk

u∗≤uku∗–Ψαk

uku∗–( –μ

)

( +ν) u˜

ku¯k.

Sinceμ∈(,√), we have

uk+(γ αk) –u∗≤uku∗≤ · · · ≤u–u∗.

From the above inequality, it is easy to verify that the sequenceukis bounded.

We now present the convergence result of Algorithm 1.

Theorem  Ifinf∞k=ρk:=ρ> ,then any cluster point of the sequenceuk}generated by

Algorithmis a solution of problem().

Proof It follows from () that

k=

cuku˜k< +∞,

which means that

lim

k→∞

u˜ku¯k= .

Since the sequenceukis bounded,uk}is too, and it has at least a cluster point. Letu∞ be a cluster point of{˜uk}and the subsequence{˜ukj}converges tou. By the continuity of e(u,ρ) and inequality (), it follows that

eu∞,ρ= lim

kj→∞

eu˜kj,ρlim

kj→∞

eu˜kj,ρ

kj= lim

kj→∞

u˜kju¯kj= .

(11)

We now prove that the sequence {u^k} has exactly one cluster point. Assume that û is another cluster point of {u^k}, so that

δ := ‖û − u^∞‖ > 0.

Since u^∞ is a cluster point of the sequence {u^k}, there is a k₀ > 0 such that

‖u^{k₀} − u^∞‖ ≤ δ/2.

On the other hand, since u^∞ ∈ S*, it follows from (35) that

‖u^k − u^∞‖ ≤ ‖u^{k₀} − u^∞‖ for all k ≥ k₀,

and therefore

‖u^k − û‖ ≥ ‖û − u^∞‖ − ‖u^k − u^∞‖ ≥ δ/2 for all k ≥ k₀.

This contradicts the assumption that û is a cluster point of {u^k}. Thus, the sequence {u^k} converges to u^∞ ∈ S*. □

4 Numerical experiments

In order to verify the theoretical assertions, we consider the nonlinear complementarity problem

u ≥ 0, T(u) ≥ 0, ⟨u, T(u)⟩ = 0, (36)

where T(u) = D(u) + Mu + q, and D(u) and Mu + q are the nonlinear and linear parts of T(u), respectively. Problem (36) is equivalent to problem (1) by taking K = R^n_+.

Example  We form the linear part in the test problems as

M=

⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝

   · · ·     · · ·     · · · 

..

. ... ... . .. ...    · · · 

⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠

n×n

and q= ⎛ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎜ ⎝

– – – .. . –

⎞ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎟ ⎠

n×

.

InD(u), the nonlinear part ofT(u), the components are chosen to beDj(u) =dj∗arctan(uj), wheredjis a random variable in (, ).
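A sketch of how this test problem can be assembled (Python; the paper's runs were in Matlab, and the matrix entries follow the reconstruction displayed above):

```python
import numpy as np

def make_example1(n, seed=0):
    """Build T(u) = D(u) + M u + q for Example 1 on K = R^n_+."""
    rng = np.random.default_rng(seed)
    M = (4.0 * np.eye(n)
         - 2.0 * np.eye(n, k=1)      # superdiagonal
         + 1.0 * np.eye(n, k=-1))    # subdiagonal
    q = -np.ones(n)
    d = rng.random(n)                # d_j uniform in (0, 1)
    def T(u):
        return d * np.arctan(u) + M @ u + q
    return T
```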

In all tests we take ρ_0 = 1; the remaining parameters σ, ζ, η₁, η₂, μ, ν, γ, β₁ and β₂ are fixed across all runs at values satisfying the conditions of Step 0. We employ ‖e(u^k, ρ_k)‖ ≤ ε, with a small fixed tolerance ε, as the stopping criterion, and we use the same initial point u^0 in all runs. All codes were written in Matlab. We compare Algorithm 1 with α_k = α_k¹ and Algorithm 1 with α_k = α_k² against the method in [14]. The test results for problem (36) with different dimensions are reported in Tables 1 and 2; there k denotes the number of iterations and l the number of evaluations of T.
[image:12.595.118.479.99.180.2]

Table 1 Numerical results for problem (36) withσ= 0.8 (see (9))

Dimension of the problem

The method in [14] Algorithm 1 withαk1 Algorithm 1 withαk2

k l CPU (sec.) k l CPU (sec.) k l CPU (sec.)

n= 100 228 465 0.024 138 447 0.023 73 240 0.06

n= 300 259 540 0.04 185 580 0.03 103 332 0.09

n= 500 531 1,109 0.15 227 700 0.10 128 403 0.16

n= 600 520 1,160 0.22 225 701 0.13 135 419 0.20

[image:12.595.115.479.216.295.2]

n= 800 568 1,236 0.51 244 779 0.34 157 520 0.40

Table 2 Numerical results for problem (36) withσ= 0.4 (see (9))

Dimension of the problem

The method in [14] Algorithm 1 withαk1 Algorithm 1 withαk2

k l CPU (sec.) k l CPU (sec.) k l CPU (sec.)

n= 100 228 465 0.02 142 439 0.01 99 309 0.06

n= 300 259 540 0.04 180 553 0.03 97 302 0.08

n= 500 531 1,109 0.15 226 699 0.09 189 579 0.23

n= 600 520 1,160 0.23 251 768 0.14 129 400 0.18

[image:12.595.116.478.332.412.2]

n= 800 568 1,236 0.48 246 754 0.31 197 603 0.48

Table 3 Numerical results for problem (36) withσ= 0.8 (see (9))

Dimension of the problem

The method in [14] Algorithm 1 withαk1 Algorithm 1 withαk2

k l CPU (sec.) k l CPU (sec.) k l CPU (sec.)

n= 100 318 676 0.03 93 312 0.01 68 235 0.05

n= 300 435 936 0.07 127 404 0.03 111 356 0.09

n= 500 489 1,035 0.15 146 491 0.07 129 416 0.17

n= 600 406 877 0.18 117 378 0.08 92 299 0.15

[image:12.595.118.478.448.530.2]

n= 800 386 832 0.64 110 359 0.29 76 249 0.28

Table 4 Numerical results for problem (36) withσ= 0.4 (see (9))

Dimension of the problem

The method in [14] Algorithm 1 withα

k1 Algorithm 1 withαk2

k l CPU (sec.) k l CPU (sec.) k l CPU (sec.)

n= 100 318 676 0.02 157 793 0.02 84 298 0.06

n= 300 435 995 0.06 199 936 0.07 164 613 0.16

n= 500 489 1,035 0.14 190 769 0.12 155 550 0.22

n= 600 406 877 0.20 129 402 0.08 89 300 0.14

n= 800 386 832 0.35 169 714 0.32 89 309 0.26

Example  We form the linear part in the test problems similarly as in Harker and Pang []. The matrixM=AA+B, whereAis ann×nmatrix whose entries are randomly generated in the interval (–, +), and a skew-symmetric matrixBis generated in the same way. The vectorqis generated from a uniform distribution in the interval (–, ). In

D(u), the nonlinear part ofT(u), the components are chosen to beDj(u) =dj∗arctan(uj), wheredjis a random variable in (, ). A similar type of problem was tested in [] and []. In all tests we takeρ= ,ζ= .,η= .,η= .,μ= .,ν= .,γ = .,β= .,

β= .. We employe(uk,ρk) ≤– as the stopping criterion and chooseu=  as the initial iterative points. The test results for problem () with different dimensions are reported in Tables  and .

Tables - show that Algorithm  withαk=αk∗is very efficient even for large-scale

(13)

that Algorithm  withαk=αk∗is more effective than the method presented in [] and

Al-gorithm  withαk=αkin the sense that Algorithm  withαk=αk needs fewer iterations and less evaluation numbers ofT, which clearly illustrates its efficiency.

Competing interests

The authors declare that they have no competing interests.

Authors’ contributions

All authors contributed equally. All authors read and approved the final manuscript.

Author details

1. School of Management Science and Engineering, Nanjing University, Nanjing, 210093, P.R. China. 2. ENSA, Ibn Zohr University, BP 1136, Agadir, Morocco. 3. Department of Mathematics, Aligarh Muslim University, Aligarh, 202 002, India. 4. Department of Mathematics and Statistics, King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia. 5. Center for Fundamental Science, Kaohsiung Medical University, Kaohsiung, 807, Taiwan.

Acknowledgements

Part of this research was carried out while the second author was visiting King Fahd University of Petroleum and Minerals, Dhahran, Saudi Arabia. The third author was partially supported by grant MOST 103-2115-M-037-001.

Received: 14 February 2015 Accepted: 14 April 2015

References

1. Fichera, G: Problemi elettrostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno. Atti Accad. Naz. Lincei, Mem. Cl. Sci. Fis. Mat. Nat., Sez. I 7, 91-140 (1964)
2. Stampacchia, G: Formes bilinéaires coercitives sur les ensembles convexes. C. R. Math. Acad. Sci. Paris 258, 4413-4416 (1964)
3. Stampacchia, G: Variational inequalities. In: Ghizzetti, A (ed.) Theory and Applications of Monotone Operators. Proc. NATO Adv. Study Institute, Venice, Oderisi, Gubbio (1968)
4. Ansari, QH, Lalitha, CS, Mehta, M: Generalized Convexity, Nonsmooth Variational Inequalities and Nonsmooth Optimization. CRC Press, Boca Raton (2014)
5. Facchinei, F, Pang, JS: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, Berlin (2003)
6. Glowinski, R, Lions, JL, Trémolières, R: Numerical Analysis of Variational Inequalities. North-Holland, Amsterdam (1981)
7. Kinderlehrer, D, Stampacchia, G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
8. Konnov, IV: Combined Relaxation Methods for Variational Inequalities. Springer, Berlin (2001)
9. Khobotov, EN: Modification of the extragradient method for solving variational inequalities and certain optimization problems. USSR Comput. Math. Math. Phys. 27, 120-127 (1987)
10. Nagurney, A, Zhang, D: Projected Dynamical Systems and Variational Inequalities with Applications. Kluwer Academic, Dordrecht (1996)
11. Korpelevich, GM: The extragradient method for finding saddle points and other problems. Matecon 12, 747-756 (1976)
12. Iusem, AN, Svaiter, BF: A variant of Korpelevich's method for variational inequalities with a new search strategy. Optimization 42, 309-321 (1997)
13. Auslender, A, Teboulle, M: Interior projection-like methods for monotone variational inequalities. Math. Program., Ser. A 104, 39-68 (2005)
14. Bnouhachem, A, Noor, MA, Rassias, TR: Three-steps iterative algorithms for mixed variational inequalities. Appl. Math. Comput. 183, 436-446 (2006)
15. Bnouhachem, A, Fu, X, Xu, MH, Zhaohan, S: Modified extragradient methods for solving variational inequalities. Comput. Math. Appl. 57, 230-239 (2009)
16. Fu, X: A two-stage prediction-correction method for solving variational inequalities. J. Comput. Appl. Math. 214, 345-355 (2008)
17. He, BS, Liao, LZ: Improvement of some projection methods for monotone variational inequalities. J. Optim. Theory Appl. 112, 111-128 (2002)
18. He, BS, Yang, H, Meng, Q, Han, DR: Modified Goldstein-Levitin-Polyak projection method for asymmetric strongly monotone variational inequalities. J. Optim. Theory Appl. 112, 129-143 (2002)
19. Wang, YJ, Xiu, NH, Wang, CY: A new version of extragradient method for variational inequality problems. Comput. Math. Appl. 42, 969-979 (2001)
20. Xiu, NH, Zhang, JZ: Some recent advances in projection-type methods for variational inequalities. J. Comput. Appl. Math. 152, 559-585 (2003)
21. Bnouhachem, A: A self-adaptive method for solving general mixed variational inequalities. J. Math. Anal. Appl. 309, 136-150 (2005)
22. He, BS, Yang, ZH, Yuan, XM: An approximate proximal-extragradient type method for monotone variational inequalities. J. Math. Anal. Appl. 300, 362-374 (2004)
23. Zarantonello, EH: Projections on convex sets in Hilbert space and spectral theory. In: Zarantonello, EH (ed.) Contributions to Nonlinear Functional Analysis. Academic Press, New York (1971)
25. Gafni, EM, Bertsekas, DP: Two-metric projection methods for constrained optimization. SIAM J. Control Optim. 22, 936-964 (1984)
26. Peng, JM, Fukushima, M: A hybrid Newton method for solving the variational inequality problem via the D-gap function. Math. Program. 86, 367-386 (1999)
27. He, BS, Yuan, XM, Zhang, JZ: Comparison of two kinds of prediction-correction methods for monotone variational inequalities. Comput. Optim. Appl. 27, 247-267 (2004)
28. Harker, PT, Pang, JS: A damped-Newton method for the linear complementarity problem. Lect. Appl. Math. 26, 265-284 (1990)
29. Marcotte, P, Dussault, JP: A note on a globally convergent Newton method for solving variational inequalities. Oper. Res. Lett. 6, 35-42 (1987)
