Volume 2009, Article ID 691952, 11 pages, doi:10.1155/2009/691952
Research Article
Relatively Inexact Proximal Point Algorithm and
Linear Convergence Analysis
Ram U. Verma
Department of Mathematical Sciences, Florida Institute of Technology, Melbourne, FL 32901, USA
Correspondence should be addressed to Ram U. Verma, verma99@msn.com
Received 30 July 2009; Accepted 9 November 2009
Recommended by Petru Jebelean
Based on the notion of relatively maximal $m$-relaxed monotonicity, the approximation solvability of a general class of inclusion problems is discussed, generalizing Rockafellar's theorem (1976) on linear convergence using the proximal point algorithm in a real Hilbert space setting. The convergence analysis based on this new model is simpler and more compact than the celebrated technique of Rockafellar, in which the Lipschitz continuity at 0 of the inverse of the set-valued mapping is applied. Furthermore, it can be used to generalize the Yosida approximation, which, in turn, can be applied to first-order evolution equations as well as evolution inclusions.
Copyright © 2009 Ram U. Verma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
1. Introduction
Let $X$ be a real Hilbert space with the inner product $\langle \cdot, \cdot \rangle$ and the norm $\| \cdot \|$ on $X$. We consider the inclusion problem: find a solution to
$$0 \in M(x), \qquad (1.1)$$
where $M : X \to 2^X$ is a set-valued mapping on $X$.
Rockafellar [1, Theorem 2] discussed the general convergence of the proximal point algorithm in the context of solving (1.1) by showing, for $M$ maximal monotone, that the sequence $\{x^k\}$ generated for an initial point $x^0$ by the proximal point algorithm
$$x^{k+1} \approx P_k(x^k) \qquad (1.2)$$
converges strongly to a solution of (1.1), provided that the approximation is made sufficiently accurate as the iteration proceeds, where $P_k = (I + c_k M)^{-1}$ is the resolvent operator for a sequence $\{c_k\}$ of positive real numbers that is bounded away from zero. We observe from (1.2) that $x^{k+1}$ is an approximate solution to the inclusion problem
$$0 \in M(x) + c_k^{-1}(x - x^k). \qquad (1.3)$$
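As a concrete numerical illustration (added here, not part of the original analysis), the inexact iteration (1.2) can be sketched for the hypothetical choice $M(x) = x$ on the real line, where the resolvent has the closed form $P_k(x) = x/(1 + c_k)$ and the per-step errors are kept summable:

```python
# Sketch of the inexact proximal point algorithm (1.2), assuming the
# illustrative operator M(x) = x on the real line, whose resolvent
# P_k = (I + c_k M)^{-1} has the closed form P_k(x) = x / (1 + c_k).
# The term e_k models the per-step approximation error.

def proximal_point(x0, c, errors):
    """Run x^{k+1} ~ P_k(x^k), perturbing each exact step by errors[k]."""
    x = x0
    trajectory = [x]
    for c_k, e_k in zip(c, errors):
        exact = x / (1.0 + c_k)      # exact resolvent step P_k(x^k)
        x = exact + e_k              # inexact update within tolerance e_k
        trajectory.append(x)
    return trajectory

traj = proximal_point(
    x0=10.0,
    c=[1.0] * 50,                                    # c_k bounded away from zero
    errors=[1e-3 * 2.0 ** (-k) for k in range(50)],  # summable errors
)
print(abs(traj[-1]))  # close to the unique solution x* = 0
```

With summable errors the iterates still approach the zero of $M$, in line with the summability requirement on the error sequence in the convergence results below.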
Next, we state the theorem of Rockafellar [1, Theorem 2], in which an approach using the Lipschitz continuity of $M^{-1}$, instead of the strong monotonicity of $M$, is considered; this turned out to be better suited to applications in convex programming. Moreover, it is well known that the resolvent operator $P_k = (I + c_k M)^{-1}$ is nonexpansive, so it does not seem possible to achieve linear convergence without a Lipschitz constant less than one in that setting. This could have been the motivation for requiring the Lipschitz continuity of $M^{-1}$ at zero, which yields the Lipschitz continuity of $P_k$ with a Lipschitz constant less than one instead.
Theorem 1.1. Let $X$ be a real Hilbert space, and let $M : X \to 2^X$ be maximal monotone. For an arbitrarily chosen initial point $x^0$, let the sequence $\{x^k\}$ be generated by the proximal point algorithm (1.2) such that
$$\|x^{k+1} - P_k(x^k)\| \le \epsilon_k, \qquad (1.4)$$
where $P_k = (I + c_k M)^{-1}$, and the scalar sequences $\{\epsilon_k\}$ and $\{c_k\}$, respectively, satisfy $\sum_{k=0}^{\infty} \epsilon_k < \infty$ and $\{c_k\}$ is bounded away from zero.

We further suppose that the sequence $\{x^k\}$ is generated by the proximal point algorithm (1.2) such that
$$\|x^{k+1} - P_k(x^k)\| \le \delta_k \|x^{k+1} - x^k\|, \qquad (1.5)$$
where the scalar sequences $\{\delta_k\}$ and $\{c_k\}$, respectively, satisfy $\sum_{k=0}^{\infty} \delta_k < \infty$ and $c_k \uparrow c \le \infty$. Also, assume that $\{x^k\}$ is bounded in the sense that the solution set to (1.1) is nonempty, and that $M^{-1}$ is $a$-Lipschitz continuous at $0$ for $a > 0$. Let
$$\mu_k = \frac{a}{\sqrt{a^2 + c_k^2}} < 1. \qquad (1.6)$$
Then the sequence $\{x^k\}$ converges strongly to $x^*$, a unique solution to (1.1), with
$$\|x^{k+1} - x^*\| \le \alpha_k \|x^k - x^*\| \quad \forall k \ge \bar{k}, \qquad (1.7)$$
where
$$0 \le \alpha_k = \frac{\mu_k + \delta_k}{1 - \delta_k} < 1 \quad \forall k \ge \bar{k}, \qquad \alpha_k \longrightarrow 0 \text{ as } c_k \longrightarrow \infty. \qquad (1.8)$$
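The rate bound (1.6) can be checked numerically in a simple setting. The operator below is an illustration added here, not from the paper: for $M(x) = x/a$ on the real line, the inverse $M^{-1}(y) = a\,y$ is $a$-Lipschitz continuous at $0$, and with exact steps ($\delta_k = 0$) each iteration of (1.2) contracts by the factor $a/(a + c_k)$, which never exceeds $\mu_k$:

```python
import math

# Check of the rate bound (1.6) of Theorem 1.1, assuming the illustrative
# operator M(x) = x / a on the real line, so that M^{-1}(y) = a * y is
# a-Lipschitz continuous at 0.  The exact resolvent is
# P_k(x) = (I + c_k M)^{-1}(x) = a * x / (a + c_k), giving a per-step
# contraction factor a / (a + c_k) <= mu_k = a / sqrt(a^2 + c_k^2).

a = 4.0
x = 1.0
for k in range(20):
    c_k = 1.0 + 0.5 * k                   # c_k bounded away from zero
    x_next = a * x / (a + c_k)            # exact proximal step
    observed = abs(x_next) / abs(x)       # observed contraction factor
    mu_k = a / math.sqrt(a**2 + c_k**2)   # theoretical bound (1.6)
    assert observed <= mu_k < 1.0
    x = x_next
print(x)  # driven toward the unique solution x* = 0
```

Since $c_k$ increases here, both the observed factor and $\mu_k$ shrink along the run, reflecting $\alpha_k \to 0$ as $c_k \to \infty$ in (1.8).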
Since most variational problems, including minimization and maximization of functionals, variational inequality problems, quasivariational inequality problems, minimax problems, and problems from the decision, management, and engineering sciences, can be unified into the form (1.1), the notion of general maximal monotonicity has played a crucially significant role by providing a powerful framework for developing and applying suitable proximal point algorithms in exploring and studying convex programming and variational inequalities. Algorithms of this type have turned out to be of particular interest because of their role in certain computational methods based on duality, for instance, the Hestenes-Powell method of multipliers in nonlinear programming. For more details, we refer the reader to [1–15].
In this communication, we examine the approximation solvability of the inclusion problem (1.1) by introducing the notion of relatively maximal $m$-relaxed monotone mappings, and derive some auxiliary results involving relatively maximal $m$-relaxed monotone and cocoercive mappings. The notion of relatively maximal $m$-relaxed monotonicity is based on the notion of $A$-maximal $m$-relaxed monotonicity introduced and studied in [9, 10], but it seems more application-oriented. We note that our approach to the solvability of (1.1) differs significantly from that of [1] in the sense that $M$ is considered without the monotonicity assumption, there is no Lipschitz continuity assumption on $M^{-1}$, and the proof turns out to be simple and compact. Note that there exists a huge amount of research in the literature on new developments and applications of proximal point algorithms for approximating solutions of inclusion problems of the form (1.1) in different space settings, especially in Hilbert as well as Banach space settings.
2. Preliminaries
In this section, we first introduce the notion of relatively maximal $m$-relaxed monotonicity, and then we derive some basic properties along with some auxiliary results for the problem on hand.
Let $X$ be a real Hilbert space with the norm $\| \cdot \|$ on $X$, and with the inner product $\langle \cdot, \cdot \rangle$.
Definition 2.1. Let $X$ be a real Hilbert space, let $M : X \to 2^X$ be a multivalued mapping, and let $A : X \to X$ be a single-valued mapping on $X$. The map $M$ is said to be the following.

(i) Monotone if
$$\langle u^* - v^*, u - v \rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.1)$$

(ii) Strictly monotone if $M$ is monotone and equality holds only if $u = v$.

(iii) $r$-strongly monotone if there exists a positive constant $r$ such that
$$\langle u^* - v^*, u - v \rangle \ge r \|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.2)$$

(iv) $r$-expanding if there exists a positive constant $r$ such that
$$\|u^* - v^*\| \ge r \|u - v\| \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.3)$$

(v) Strongly monotone if
$$\langle u^* - v^*, u - v \rangle \ge \|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.4)$$

(vi) Expanding if
$$\|u^* - v^*\| \ge \|u - v\| \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.5)$$

(vii) $m$-relaxed monotone if there exists a positive constant $m$ such that
$$\langle u^* - v^*, u - v \rangle \ge -m \|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.6)$$

(viii) $c$-cocoercive if there exists a positive constant $c$ such that
$$\langle u^* - v^*, u - v \rangle \ge c \|u^* - v^*\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.7)$$

(ix) Monotone with respect to $A$ if
$$\langle u^* - v^*, A(u) - A(v) \rangle \ge 0 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.8)$$

(x) Strictly monotone with respect to $A$ if $M$ is monotone with respect to $A$ and equality holds only if $u = v$.

(xi) $r$-strongly monotone with respect to $A$ if there exists a positive constant $r$ such that
$$\langle u^* - v^*, A(u) - A(v) \rangle \ge r \|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.9)$$

(xii) $m$-relaxed monotone with respect to $A$ if there exists a positive constant $m$ such that
$$\langle u^* - v^*, A(u) - A(v) \rangle \ge -m \|u - v\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.10)$$

(xiii) $h$-hybrid relaxed monotone with respect to $A$ if there exists a positive constant $h$ such that
$$\langle u^* - v^*, A(u) - A(v) \rangle \ge -h \|A(u) - A(v)\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.11)$$

(xiv) $m$-cocoercive with respect to $A$ if there exists a positive constant $m$ such that
$$\langle u^* - v^*, A(u) - A(v) \rangle \ge m \|u^* - v^*\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.12)$$
Definition 2.2. Let $X$ be a real Hilbert space, let $M : X \to 2^X$ be a mapping on $X$, and let $A : X \to X$ be a single-valued mapping on $X$. The map $M$ is said to be the following.

(i) Nonexpansive if
$$\|u^* - v^*\| \le \|u - v\| \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.13)$$

(ii) Cocoercive if
$$\langle u^* - v^*, u - v \rangle \ge \|u^* - v^*\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.14)$$

(iii) Cocoercive with respect to $A$ if
$$\langle u^* - v^*, A(u) - A(v) \rangle \ge \|u^* - v^*\|^2 \quad \forall (u, u^*), (v, v^*) \in \operatorname{graph}(M). \qquad (2.15)$$
Definition 2.3. Let $X$ be a real Hilbert space, and let $A : X \to X$ be a single-valued mapping. The map $M : X \to 2^X$ is said to be relatively maximal $m$-relaxed monotone (with respect to $A$) if

(i) $M$ is $m$-relaxed monotone with respect to $A$ for $m > 0$;

(ii) $R(I + \rho M) = X$ for $\rho > 0$.

Definition 2.4. Let $X$ be a real Hilbert space, and let $A : X \to X$ be a single-valued mapping. The map $M : X \to 2^X$ is said to be relatively maximal monotone (with respect to $A$) if

(i) $M$ is monotone with respect to $A$;

(ii) $R(I + \rho M) = X$ for $\rho > 0$.
Definition 2.5. Let $X$ be a real Hilbert space, let $A : X \to X$ be $r$-strongly monotone, and let $M : X \to 2^X$ be a relatively maximal $m$-relaxed monotone mapping. Then the resolvent operator $J^M_{\rho,m,A} : X \to X$ is defined by
$$J^M_{\rho,m,A}(u) = (I + \rho M)^{-1}(u) \quad \text{for } r - \rho m > 0. \qquad (2.16)$$

Lemma 2.6. Under the assumptions of Definition 2.5, the operator $(I + \rho M)^{-1}$ is single-valued for $r - \rho m > 0$, so the resolvent $J^M_{\rho,m,A}$ is well defined.

Proof. For any $z \in X$, assume $x, y \in (I + \rho M)^{-1}(z)$. Then we have
$$-x + z \in \rho M(x), \qquad -y + z \in \rho M(y). \qquad (2.17)$$
Since $M$ is relatively maximal $m$-relaxed monotone and $A$ is $r$-strongly monotone, it follows that
$$-\rho m \|x - y\|^2 \le \langle -(x - y), A(x) - A(y) \rangle \le -r \|x - y\|^2 \;\Longrightarrow\; (r - \rho m)\|x - y\|^2 \le 0 \;\Longrightarrow\; x = y \quad \text{for } r - \rho m > 0. \qquad (2.18)$$
Definition 2.7. Let $X$ be a real Hilbert space. A map $M : X \to 2^X$ is said to be maximal monotone if

(i) $M$ is monotone;

(ii) $R(I + \rho M) = X$ for $\rho > 0$.
Note that all relatively monotone mappings are relatively $m$-relaxed monotone for $m > 0$. We include one example of relative monotonicity and another of relative $h$-hybrid relaxed monotonicity, a new notion for the problem on hand.
Example 2.8. Let $X = (-\infty, \infty)$, $A(x) = -\frac{1}{2}x$, and $M(x) = -x$ for all $x \in X$. Then $M$ is relatively monotone but not monotone, while $M$ is relatively $m$-relaxed monotone for $m > 0$.
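Example 2.8 can be verified directly on the real line. The following check (an illustration added here, not from the paper) confirms that $M$ fails ordinary monotonicity yet is monotone relative to $A$, since $(M(u)-M(v))(A(u)-A(v)) = \frac{1}{2}(u-v)^2 \ge 0$:

```python
# Check of Example 2.8: A(x) = -x/2 and M(x) = -x on the real line.

def A(x):
    return -0.5 * x

def M(x):
    return -x

for u, v in [(1.0, 3.0), (-2.0, 5.0), (0.25, -0.75)]:
    # ordinary monotonicity fails: <M(u) - M(v), u - v> = -(u - v)^2 < 0
    assert (M(u) - M(v)) * (u - v) < 0
    # relative monotonicity holds: <M(u) - M(v), A(u) - A(v)> >= 0
    assert (M(u) - M(v)) * (A(u) - A(v)) >= 0
print("Example 2.8 verified on the sampled pairs")
```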
Example 2.9. Let $X$ be a real Hilbert space, and let $M : X \to 2^X$ be maximal $m$-relaxed monotone. Then we have the Yosida approximation $M_\rho = \rho^{-1}(I - J^M_\rho)$, where $J^M_\rho = (I + \rho M)^{-1}$ is the resolvent of $M$, which satisfies
$$\langle M_\rho(u) - M_\rho(v), J^M_\rho(u) - J^M_\rho(v) \rangle \ge -m \|J^M_\rho(u) - J^M_\rho(v)\|^2, \qquad (2.19)$$
that is, $M_\rho$ is relatively $m$-hybrid relaxed monotone with respect to $J^M_\rho$.
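Inequality (2.19) can also be checked numerically in a simple setting. The operator below is an illustrative choice, not taken from the paper: $M(x) = -mx$ on the real line is $m$-relaxed monotone, and for $1 - m\rho > 0$ its resolvent and Yosida approximation are available in closed form:

```python
# Check of inequality (2.19), assuming the illustrative m-relaxed
# monotone operator M(x) = -m * x on the real line (m = 0.25).  For
# 1 - m * rho > 0, the resolvent is J(x) = (I + rho M)^{-1}(x)
# = x / (1 - m * rho), and the Yosida approximation is
# M_rho = rho^{-1} (I - J).

m, rho = 0.25, 1.0

def J(x):
    return x / (1.0 - m * rho)       # resolvent (I + rho M)^{-1}

def M_rho(x):
    return (x - J(x)) / rho          # Yosida approximation

for u, v in [(3.0, -1.0), (0.5, 2.5), (-4.0, 4.0)]:
    lhs = (M_rho(u) - M_rho(v)) * (J(u) - J(v))
    rhs = -m * (J(u) - J(v)) ** 2
    assert lhs >= rhs - 1e-12        # inequality (2.19), up to rounding
print("inequality (2.19) holds on the sampled pairs")
```

For this linear $M$ the two sides of (2.19) coincide, so the relaxation constant $m$ is sharp in this case.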
3. Generalization of Rockafellar's Theorem
This section deals with a generalization of Rockafellar's theorem [1, Theorem 2] in light of the new framework of relative maximal $m$-relaxed monotonicity, while solving (1.1).
Theorem 3.1. Let $X$ be a real Hilbert space, let $A : X \to X$ be $r$-strongly monotone, and let $M : X \to 2^X$ be relatively maximal $m$-relaxed monotone. Then the following statements are equivalent:

(i) an element $u \in X$ is a solution to (1.1);

(ii) for $u \in X$, one has
$$u = R^M_{\rho,m,A}(u), \qquad (3.1)$$
where
$$R^M_{\rho,m,A}(u) = (I + \rho M)^{-1}(u) \quad \text{for } r - \rho m > 0. \qquad (3.2)$$

Proof. To show (i) $\Rightarrow$ (ii): if $u \in X$ is a solution to (1.1), then for $\rho > 0$ we have
$$0 \in \rho M(u) \quad \text{or} \quad u \in (I + \rho M)(u) \quad \text{or} \quad R^M_{\rho,m,A}(u) = u. \qquad (3.3)$$
Similarly, to show (ii) $\Rightarrow$ (i), we have
$$u = R^M_{\rho,m,A}(u) \;\Rightarrow\; u \in (I + \rho M)(u) \;\Rightarrow\; 0 \in \rho M(u) \;\Rightarrow\; 0 \in M(u). \qquad (3.4)$$
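The fixed-point characterization above can be illustrated numerically. The affine operator below is a hypothetical example (it is monotone, so with $A = I$, which is $1$-strongly monotone, it fits the relative framework): its unique zero coincides with the fixed point of the resolvent, and iterating the resolvent recovers it:

```python
# Illustration of Theorem 3.1 with the hypothetical affine operator
# M(x) = 3x - 6 on the real line (monotone; with A = I it fits the
# relative framework), whose unique zero is u = 2.  The resolvent
# solves y + rho * (3y - 6) = x, i.e. y = (x + 6 * rho) / (1 + 3 * rho).

rho = 0.5

def resolvent(x):
    return (x + 6.0 * rho) / (1.0 + 3.0 * rho)

# The zero of M is exactly the fixed point of the resolvent (3.1) ...
u = 2.0
assert abs(resolvent(u) - u) < 1e-12

# ... and iterating the resolvent (exact proximal point steps)
# converges to that zero from an arbitrary starting point.
x = 50.0
for _ in range(40):
    x = resolvent(x)
print(x)  # approaches 2.0
```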
Theorem 3.2. Let $X$ be a real Hilbert space, let $A : X \to X$ be $r$-strongly monotone, and let $M : X \to 2^X$ be relatively maximal $m$-relaxed monotone. Furthermore, suppose that $M : X \to 2^X$ is relatively $h$-hybrid relaxed monotone and $A \circ R^M_{\rho_k,m,A}$ is $\gamma$-cocoercive with respect to $R^M_{\rho_k,m,A}$.

(i) For an arbitrarily chosen initial point $x^0$, suppose that the sequence $\{x^k\}$ is generated by the proximal point algorithm (1.2) such that
$$\|x^{k+1} - R^M_{\rho_k,m,A}(x^k)\| \le \epsilon_k, \qquad (3.5)$$
where $\sum_{k=0}^{\infty} \epsilon_k < \infty$, $r - \rho_k m > 0$, $R^M_{\rho_k,m,A} = (I + \rho_k M)^{-1}$, and the scalar sequence $\{\rho_k\}$ satisfies $\rho_k \uparrow \rho \le \infty$. Suppose that the sequence $\{x^k\}$ is bounded in the sense that the solution set of (1.1) is nonempty.

(ii) In addition to the assumptions in (i), we further suppose that, for an arbitrarily chosen initial point $x^0$, the sequence $\{x^k\}$ is generated by the proximal point algorithm (1.2) such that
$$\|x^{k+1} - R^M_{\rho_k,m,A}(x^k)\| \le \delta_k \|x^{k+1} - x^k\|, \qquad (3.6)$$
where $\delta_k \to 0$, $R^M_{\rho_k,m,A} = (I + \rho_k M)^{-1}$, and the scalar sequences $\{\delta_k\}$ and $\{\rho_k\}$, respectively, satisfy $\sum_{k=0}^{\infty} \delta_k < \infty$ and $\rho_k \uparrow \rho \le \infty$. Then the following implications hold:

(iii) the sequence $\{x^k\}$ converges strongly to a solution of (1.1);

(iv) the rate of convergence satisfies
$$0 \le \lim_{k \to \infty} \frac{\delta_k + ((\gamma - h\rho_k)r)^{-1}}{1 - \delta_k} < 1, \qquad (3.7)$$
where $1/((\gamma - h\rho_k)r) < 1$.
Proof. Suppose that $x^*$ is a zero of $M$. We begin with the proof of
$$\|R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*)\| \le \frac{1}{(\gamma - h\rho_k)r} \|x^k - x^*\|, \qquad (3.8)$$
where $\gamma - h\rho_k > 0$. It follows from the definition of the generalized resolvent operator $R^M_{\rho_k,m,A}$, the relative $h$-hybrid relaxed monotonicity of $M$ with respect to $A$, and the $\gamma$-cocoercivity of $A \circ R^M_{\rho_k,m,A}$ with respect to $R^M_{\rho_k,m,A}$ that
$$\langle x^k - x^* - (R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*)),\; A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*)) \rangle \ge -h\rho_k \|A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*))\|^2 \qquad (3.9)$$
or
$$\begin{aligned}
\langle x^k - x^*,\; A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*)) \rangle
&\ge \langle R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*),\; A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*)) \rangle \\
&\quad - h\rho_k \|A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*))\|^2 \\
&\ge \gamma \|A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*))\|^2 - h\rho_k \|A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*))\|^2.
\end{aligned} \qquad (3.10)$$
Applying the Cauchy-Schwarz inequality to the left-hand side of (3.10) gives $\|A(R^M_{\rho_k,m,A}(x^k)) - A(R^M_{\rho_k,m,A}(x^*))\| \le (\gamma - h\rho_k)^{-1} \|x^k - x^*\|$, and since the $r$-strong monotonicity of $A$ implies that $A$ is $r$-expanding, the estimate (3.8) follows.
Next, we move to the estimate
$$\|x^{k+1} - x^*\| \le \|R^M_{\rho_k,m,A}(x^k) - x^*\| + \epsilon_k = \|R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*)\| + \epsilon_k \le \frac{1}{(\gamma - h\rho_k)r} \|x^k - x^*\| + \epsilon_k. \qquad (3.11)$$
For $(\gamma - h\rho_k)r > 1$, combining the previous inequality for all $k$, we have
$$\|x^{k+1} - x^*\| \le \|x^0 - x^*\| + \sum_{i=0}^{k} \epsilon_i \le \|x^0 - x^*\| + \sum_{k=0}^{\infty} \epsilon_k. \qquad (3.12)$$
Hence, $\{x^k\}$ is bounded.
Next we turn our attention to the convergence part of the proof. Since
$$\|x^{k+1} - x^*\| \le \|x^{k+1} - R^M_{\rho_k,m,A}(x^k)\| + \|R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*)\|,$$
$$\|x^{k+1} - R^M_{\rho_k,m,A}(x^k)\| \le \delta_k \|x^{k+1} - x^k\| \le \delta_k (\|x^{k+1} - x^*\| + \|x^k - x^*\|), \qquad (3.13)$$
we get
$$\|x^{k+1} - x^*\| \le \|x^{k+1} - R^M_{\rho_k,m,A}(x^k)\| + \|R^M_{\rho_k,m,A}(x^k) - R^M_{\rho_k,m,A}(x^*)\| \le \delta_k (\|x^{k+1} - x^*\| + \|x^k - x^*\|) + \frac{1}{(\gamma - h\rho_k)r} \|x^k - x^*\|, \qquad (3.14)$$
where $1/((\gamma - h\rho_k)r) < 1$. It follows that
$$\|x^{k+1} - x^*\| \le \frac{((\gamma - h\rho_k)r)^{-1} + \delta_k}{1 - \delta_k} \|x^k - x^*\|. \qquad (3.15)$$
The contraction factor in (3.15) is eventually less than one, since $1/((\gamma - h\rho_k)r) < 1$ and $\delta_k \to 0$. Hence, the sequence $\{x^k\}$ converges strongly to $x^*$.
To conclude the proof, we need to show the uniqueness of the solution to (1.1). Assume that $x^*$ is a zero of $M$. Then, using $\|x^k - x^*\| \le \|x^0 - x^*\| + \sum_{k=0}^{\infty} \epsilon_k$ for all $k$, we have that
$$a^* = \liminf_{k \to \infty} \|x^k - x^*\| \qquad (3.16)$$
is nonnegative and finite, and as a result, $\|x^k - x^*\| \to a^*$. Consider $x_1^*$ and $x_2^*$ to be two limit points of $\{x^k\}$; then
$$\|x^k - x_1^*\| \longrightarrow a_1, \qquad \|x^k - x_2^*\| \longrightarrow a_2, \qquad (3.17)$$
and both limits exist and are finite. If we express
$$\|x^k - x_2^*\|^2 = \|x^k - x_1^*\|^2 + 2\langle x^k - x_1^*, x_1^* - x_2^* \rangle + \|x_1^* - x_2^*\|^2, \qquad (3.18)$$
then it follows that
$$\lim_{k \to \infty} \langle x^k - x_1^*, x_1^* - x_2^* \rangle = \frac{1}{2}\left(a_2^2 - a_1^2 - \|x_1^* - x_2^*\|^2\right). \qquad (3.19)$$
Since $x_1^*$ is a limit point of $\{x^k\}$, the left-hand side limit must tend to zero. Therefore,
$$a_1^2 = a_2^2 - \|x_1^* - x_2^*\|^2. \qquad (3.20)$$
Similarly, we obtain
$$a_2^2 = a_1^2 - \|x_1^* - x_2^*\|^2. \qquad (3.21)$$
This results in $x_1^* = x_2^*$.
Remark 3.3. When $M : X \to 2^{X^*}$ equals $\partial f$, the subdifferential of $f$, where $f : X \to (-\infty, \infty]$ is a functional on a Hilbert space $X$, the above framework can be applied to minimizing $f$. The function $f$ is proper if $f \not\equiv \infty$, and is convex if
$$f((1 - \lambda)x + \lambda y) \le (1 - \lambda)f(x) + \lambda f(y), \qquad (3.22)$$
where $x, y \in X$ and $0 < \lambda < 1$. Furthermore, the function $f$ is lower semicontinuous on $X$ if, for every $\lambda \in \mathbb{R}$, the set
$$\{x \in X : f(x) \le \lambda\} \qquad (3.23)$$
is closed in $X$.
The subdifferential $\partial f : X \to 2^{X^*}$ of $f$ at $x$ is defined by
$$\partial f(x) = \{x^* \in X^* : f(y) - f(x) \ge \langle y - x, x^* \rangle \ \forall y \in X\}. \qquad (3.24)$$
In an earlier work [7], Rockafellar has shown that if $f$ is a lower semicontinuous proper convex functional on $X$, then $\partial f : X \to 2^{X^*}$ is maximal monotone on $X$, where $X$ is any real Banach space. Several other special cases can be derived.
Suppose, for instance, that $\partial f$ is $1$-relaxed monotone with respect to $A$, that is,
$$\langle u^* - v^*, A(u) - A(v) \rangle \ge -\|u - v\|^2 \quad \forall u, v \in X, \qquad (3.25)$$
where $u^* \in \partial f(u)$ and $v^* \in \partial f(v)$. Then $\partial f$ is relatively maximal $m$-relaxed monotone.
Acknowledgment
The author is greatly indebted to Professor Petru Jebelean and the reviewers for their valuable comments and suggestions leading to the revised version.
References
[1] R. T. Rockafellar, “Monotone operators and the proximal point algorithm,” SIAM Journal on Control and Optimization, vol. 14, no. 5, pp. 877–898, 1976.
[2] R. P. Agarwal and R. U. Verma, “The over-relaxed η-proximal point algorithm and nonlinear variational inclusion problems,” Nonlinear Functional Analysis and Applications. In press.
[3] R. P. Agarwal and R. U. Verma, “Role of relative A-maximal monotonicity in overrelaxed proximal point algorithms with applications,” Journal of Optimization Theory and Applications, vol. 143, no. 1, pp. 1–15, 2009.
[4] H.-Y. Lan, Y. J. Cho, and R. U. Verma, “Nonlinear relaxed cocoercive variational inclusions involving (A, η)-accretive mappings in Banach spaces,” Computers & Mathematics with Applications, vol. 51, no. 9-10, pp. 1529–1538, 2006.
[5] A. Moudafi and M. Théra, “Finding a zero of the sum of two maximal monotone operators,” Journal of Optimization Theory and Applications, vol. 94, no. 2, pp. 425–448, 1997.
[6] R. T. Rockafellar, “Augmented Lagrangians and applications of the proximal point algorithm in convex programming,” Mathematics of Operations Research, vol. 1, no. 2, pp. 97–116, 1976.
[7] R. T. Rockafellar, “On the maximal monotonicity of subdifferential mappings,” Pacific Journal of Mathematics, vol. 33, pp. 209–216, 1970.
[8] P. Tossings, “The perturbed proximal point algorithm and some of its applications,” Applied Mathematics and Optimization, vol. 29, no. 2, pp. 125–159, 1994.
[9] R. U. Verma, “A-monotonicity and its role in nonlinear variational inclusions,” Journal of Optimization Theory and Applications, vol. 129, no. 3, pp. 457–467, 2006.
[10] R. U. Verma, “A-monotone nonlinear relaxed cocoercive variational inclusions,” Central European Journal of Mathematics, vol. 5, no. 2, pp. 386–396, 2007.
[11] R. U. Verma, “A generalization to variational convergence for operators,” Advances in Nonlinear Variational Inequalities, vol. 11, no. 2, pp. 97–101, 2008.
[12] R. U. Verma, “Approximation solvability of a class of nonlinear set-valued variational inclusions involving (A, η)-monotone mappings,” Journal of Mathematical Analysis and Applications, vol. 337, no. 2, pp. 969–975, 2008.
[13] E. Zeidler, Nonlinear Functional Analysis and Its Applications. I: Fixed-Point Theorems, Springer, New York, NY, USA, 1986.
[14] E. Zeidler, Nonlinear Functional Analysis and Its Applications. II/B: Nonlinear Monotone Operators, Springer, New York, NY, USA, 1990.