Via Cosimo Ridolfi, 10 – 56124 PISA – Tel. Segr. Amm. 050 945231 Segr. Stud. 050 945317 Fax 050 945375
Università degli Studi di Pisa
Dipartimento di Statistica e Matematica
Applicata all’Economia
Report n. 212
Necessary Optimality Conditions
in Vector Optimization
Riccardo Cambini
Pisa, September 2001
Necessary Optimality Conditions
in Vector Optimization
Riccardo Cambini
Dept. of Statistics and Applied Mathematics, University of Pisa
Via Cosimo Ridolfi 10, 56124 Pisa, ITALY
E-mail: cambric@ec.unipi.it
September 2001
Abstract
This paper deals with vector optimization problems having a vector valued objective function and three kinds of constraints: inequality constraints, equality constraints, and a set constraint (which covers the constraints that cannot be expressed by means of either equalities or inequalities). Necessary and sufficient optimality conditions are given in the image space for the nonsmooth case (when continuity is not required), while necessary conditions in the image and in the decision spaces are given for the nondifferentiable case (when just Hadamard directional differentiability is assumed). The new concept of U-regularity is introduced in order to study necessary optimality conditions in the decision space. Finally, the results are specialized under differentiability hypotheses, thus obtaining conditions generalizing the so called "maximum principle conditions".
Keywords Vector Optimization, Optimality Conditions, Image Space, Maximum Principle Conditions.
AMS - 2000 Math. Subj. Class. 90C29, 90C46, 90C30
JEL - 1999 Class. Syst. C61, C62
1 Introduction
The aim of this paper is to study optimality conditions for vector problems having a vector valued objective function and three kinds of constraints: inequality constraints, equality constraints, and a set constraint (which covers the constraints that cannot be expressed by means of either equalities or inequalities). The partial ordering in the image of the objective function is given by a closed convex pointed cone $C$ with nonempty interior (that is, a solid cone, not necessarily the Paretian one), while the inequality constraints are expressed by means of a partial ordering given by a closed convex pointed cone $V$ with nonempty interior.
Problems of this kind have been studied in the literature in finite dimensional spaces with a scalar objective function and under differentiability hypotheses (1), obtaining (with some additional hypotheses) necessary optimality conditions of the so called "maximum/minimum principle" type (also called "generalized Lagrange multiplier rule") [3, 23, 21]; these optimality conditions are stated in the decision space, that is to say that they are based on the use of derivatives and multipliers.
The aim of this paper is twofold: first, to state some optimality conditions by means of the so called image space approach [5, 6, 7, 8, 9, 10, 11], first suggested in [20]; then, to generalize the minimum/maximum principle conditions to multiobjective problems having nondifferentiable functions. In particular, in Section 3 a characterization of the efficiency of a point is first stated in the image space without any assumptions on the functions of the problem; then some further necessary optimality conditions in the image space are given assuming the Hadamard directional differentiability of the functions.
In Section 4, the existence of necessary optimality conditions in the decision space is studied, still assuming that the functions are Hadamard directionally differentiable; a characterization in the image space of such conditions is provided, thus making possible a comparison with the previously stated conditions in the image space. The conditions in the decision space turn out to be stronger than the image space ones, hence a new regularity concept, called "$U$-regularity", is introduced in order to translate the conditions in the image space into the ones in the decision space.
Finally, in Section 5 the previously obtained results are specialized assuming the differentiability of the functions; it is also pointed out that the given conditions generalize some of the results known in the literature.
2 Statement of the problem
The vector optimization problem studied in this paper has both inequality and equality constraints as well as a set constraint, covering the constraints that cannot be expressed by means of either equalities or inequalities:

$$P: \begin{cases} C\text{-}\max f(x) & \\ g(x) \in V & \text{inequality constraints} \\ h(x) = 0 & \text{equality constraints} \\ x \in X & \text{set constraint} \end{cases}$$
1 Minimum/maximum principle optimality conditions are used also in infinite dimensional spaces, for instance in optimal control theory [15, 19, 24, 26].
where $f: A \to \mathbb{R}^s$, $g: A \to \mathbb{R}^m$ and $h: A \to \mathbb{R}^p$ are vector valued functions, with $A \subseteq \mathbb{R}^n$ an open set, $C \subset \mathbb{R}^s$ and $V \subset \mathbb{R}^m$ closed convex cones with nonempty interior (that is to say, solid cones), and $X \subseteq A$ a set verifying no particular topological properties, that is to say that $X$ is not required to be open or convex or to have nonempty interior. For the sake of convenience, note that problem $P$ can be rewritten in the following form:
$$P: \begin{cases} C\text{-}\max f(x) \\ g(x) \in V \\ x \in (X \cap S) \end{cases} \qquad S = \{x \in A : h(x) = 0\}$$
The aim of this paper is to study optimality conditions for a feasible point $x_0 \in X$ which is assumed, without loss of generality, to bind all the inequality constraints, so that $g(x_0) = 0$. Note that it is not known whether or not $x_0$ belongs to the boundary of $X$. The feasible point $x_0 \in X$ is said to be a local efficient point if there exists a suitable neighbourhood $I_{x_0}$ of $x_0$ such that:

$$\nexists\, y \in I_{x_0} \cap X \text{ such that } f(y) \in f(x_0) + C_0,\ g(y) \in V,\ h(y) = 0 \tag{2.1}$$
where $C_0 = C \setminus \{0\}$. For the sake of simplicity the following function is also used:

$$F: A \to \mathbb{R}^{s+m+p} \quad \text{such that} \quad F(x) = (f(x), g(x), h(x))$$

By means of the function $F$, $x_0 \in X$ is a local efficient point if and only if there exists a suitable neighbourhood $I_{x_0}$ of $x_0$ such that:

$$\nexists\, y \in I_{x_0} \cap X \text{ such that } F(y) \in F(x_0) + (C_0 \times V \times \{0\}) \tag{2.2}$$
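As a purely illustrative aside (not part of the paper), condition (2.2) can be checked numerically on a sampled neighbourhood for a toy instance; the problem data here ($f$, the candidate point, the Pareto cone $C = \mathbb{R}^2_+$, $m = p = 0$, $X = \mathbb{R}^2$) are assumptions chosen only for this sketch:

```python
import itertools
import numpy as np

# Hypothetical toy instance (not from the paper): s = 2, C = R^2_+ (Pareto
# cone), m = p = 0, X = R^2, candidate point x0 = (0, 0).
def f(x):
    return np.array([-x[0] ** 2, -x[1] ** 2])

x0 = np.array([0.0, 0.0])

def dominates(y, tol=1e-12):
    # f(y) in f(x0) + C_0  <=>  f(y) - f(x0) >= 0 componentwise and != 0
    d = f(y) - f(x0)
    return bool(np.all(d >= -tol) and np.linalg.norm(d) > tol)

# sample a grid inside a small neighbourhood I_x0 of x0
grid = np.linspace(-0.1, 0.1, 41)
found = any(dominates(np.array([a, b])) for a, b in itertools.product(grid, grid))
assert not found
print("no dominating point found: x0 passes the sampled test of (2.2)")
```

Since both objectives attain their maximum at the origin, no sampled $y$ satisfies $f(y) \in f(x_0) + C_0$, consistently with $x_0$ being efficient; of course a sampled test can only refute, never prove, efficiency.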
The study of optimality conditions is based on the so called image space approach, originally suggested by Hestenes [20]; with this aim a key tool turns out to be the Bouligand tangent cone to $X$ at $x_0$, denoted by $T(X, x_0)$, which is a closed cone defined as follows:

$$T(X, x_0) = \left\{x \in \mathbb{R}^n : \exists \{x_k\} \subset X,\ x_k \to x_0,\ \exists \{\lambda_k\} \subset \mathbb{R}_{++},\ \lambda_k \to +\infty,\ x = \lim_{k \to +\infty} \lambda_k (x_k - x_0)\right\}.$$
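As an illustration (a hypothetical example added here, not taken from the paper), membership in $T(X, x_0)$ can be exhibited through its defining sequences; take the set $X = \{(t, t^2) : t \in \mathbb{R}\} \subset \mathbb{R}^2$ and $x_0 = (0, 0)$, for which the direction $(1, 0)$ is tangent:

```python
import numpy as np

# Hypothetical set X = {(t, t^2)} in R^2 with x0 = (0, 0): the sequence
# x_k = (1/k, 1/k^2) in X with x_k -> x0 and lambda_k = k -> +inf realizes
# the direction (1, 0) of the Bouligand tangent cone T(X, x0).
x0 = np.array([0.0, 0.0])

def x_k(k):          # sequence in X converging to x0
    return np.array([1.0 / k, 1.0 / k ** 2])

def scaled_step(k):  # lambda_k * (x_k - x0) with lambda_k = k
    return k * (x_k(k) - x0)

limit = scaled_step(10 ** 6)
assert np.allclose(limit, [1.0, 0.0], atol=1e-5)
print("lambda_k (x_k - x0) approaches", limit)
```

The second component $k \cdot 1/k^2 = 1/k$ vanishes, so $\lim_k \lambda_k (x_k - x_0) = (1, 0) \in T(X, x_0)$; the opposite direction $(-1, 0)$ is obtained analogously from $x_k = (-1/k, 1/k^2)$.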
Subcones of $T(X, x_0)$ are also fundamental in this paper; in this regard just recall that particular subcones of $T(X, x_0)$ are the well known cone of feasible directions to $X$ at $x_0$ (2), denoted by $F(X, x_0)$, and the cone of interior directions to $X$ at $x_0$, denoted by $I(X, x_0)$ (see for example [3, 17, 18]).
2 Let $X \subseteq \mathbb{R}^n$ be a nonempty set and let $x_0 \in \mathrm{Cl}(X)$. The cone of feasible directions to $X$ at $x_0$, $F(X, x_0)$, and the cone of interior directions to $X$ at $x_0$, $I(X, x_0)$, are defined as follows:

$$F(X, x_0) = \{x \in \mathbb{R}^n : \exists \delta > 0 \text{ such that } x_0 + \lambda x \in X\ \forall \lambda \in (0, \delta)\},$$
Note finally that the results stated in this paper deal also with problems having no equality and/or no inequality constraints. In this regard, it is important to note that the absence of equality constraints is extremely relevant in the optimality conditions expressed in the decision space; for this reason, when necessary, the absence of equality constraints is specified with the condition $p = 0$ (recall that $h: A \to \mathbb{R}^p$), and in this case $S = A$ is assumed.
3 Optimality conditions in the Image Space
The aim of this section is to state in the image space necessary and/or sufficient optimality conditions for problem $P$.
By means of an approach similar to the one used in [5, 6, 7, 8, 9, 10, 11], the following subset of the Bouligand tangent cone at $F(x_0)$ in the image space is introduced:
$$T_1 = \left\{t \in \mathbb{R}^{s+m+p} : \exists \{x_k\} \subset X,\ x_k \to x_0,\ h(x_k) = 0,\ \exists \{\lambda_k\} \subset \mathbb{R},\ \lambda_k > 0,\ \lambda_k \to +\infty,\ t = \lim_{k \to +\infty} \lambda_k (F(x_k) - F(x_0))\right\}. \tag{3.1}$$
The cone $T_1$ plays a key role in stating optimality conditions in the image space. The forthcoming results extend the ones stated in [6, 7, 8, 10], which can be seen as the particular cases of problem $P$ where $X$ is an open set or where $x_0 \in \mathrm{Int}(X)$.
3.1 The nonsmooth case
The aim of this subsection is to characterize in the image space the efficiency of $x_0$; this also allows us to determine a necessary optimality condition as well as a sufficient one. Note that no hypotheses on the functions $f$, $g$ and $h$ are assumed, that is to say that they may be not only nondifferentiable but even discontinuous.
Theorem 3.1 Consider problem $P$. If $x_0 \in X$ is a local efficient point then:

$$T_1 \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset \tag{3.2}$$
$$I(X, x_0) = \{x \in \mathbb{R}^n : \exists \varepsilon > 0,\ \exists \delta > 0 \text{ such that } \lambda \in (0, \delta),\ \|y - x\| < \varepsilon \text{ imply } x_0 + \lambda y \in X\}.$$

It is very well known that for any set $X$: $I(X, x_0) \subseteq F(X, x_0) \subseteq T(X, x_0)$.
Proof The result is proved by contradiction. Suppose that $\exists t^* \in T_1 \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\})$; then $\exists \{x_k\} \subset X$, $x_k \to x_0$, $h(x_k) = 0$, $\exists \{\lambda_k\} \subset \mathbb{R}$, $\lambda_k > 0$, $\lambda_k \to +\infty$, such that $t^* = \lim_{k \to +\infty} \lambda_k (F(x_k) - F(x_0))$.
Since $t^* \in (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\})$ and $h(x_k) = 0\ \forall k$, by a known limit theorem:

$$\exists \bar{k} > 0 \text{ such that } \lambda_k (F(x_k) - F(x_0)) \in (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) \quad \forall k > \bar{k}$$

so that, being $\lambda_k > 0$, $F(x_k) \in F(x_0) + (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\})\ \forall k > \bar{k}$, and this contradicts the local efficiency of $x_0$.
The next theorem shows that it is possible to characterize in the image space the optimality of $x_0$.
Theorem 3.2 Consider problem $P$. The point $x_0 \in X$ is a local efficient point if and only if the following condition holds: for every $t \in T_1 \cap (C \times V \times \{0\})$, $t \neq 0$, and for every $\{x_k\} \subset X$, $x_k \to x_0$, $h(x_k) = 0$, such that $\exists \{\lambda_k\} \subset \mathbb{R}$, $\lambda_k > 0$, $\lambda_k \to +\infty$, with $t = \lim_{k \to +\infty} \lambda_k (F(x_k) - F(x_0))$, there exists an integer $\bar{k} > 0$ such that:

$$F(x_k) \notin F(x_0) + (C_0 \times V \times \{0\}) \quad \forall k > \bar{k}.$$
Proof ⇒) If $x_0$ is a local efficient point then, by (2.2), $\forall \{x_k\} \subset X$, $x_k \to x_0$, $h(x_k) = 0$, there exists an integer $\bar{k} > 0$ such that $F(x_k) \notin F(x_0) + (C_0 \times V \times \{0\})\ \forall k > \bar{k}$, and this is true also for the particular sequences such that $t = \lim_{k \to +\infty} \lambda_k (F(x_k) - F(x_0))$ with $t \in T_1 \cap (C \times V \times \{0\})$.
⇐) The result is proved by contradiction. Suppose that $x_0 \in X$ is not a local efficient point; then by means of (2.2) $\exists \{x_k\} \subset X$, $x_k \to x_0$, such that $F(x_k) \in F(x_0) + (C_0 \times V \times \{0\})\ \forall k$, so that in particular $h(x_k) = 0\ \forall k$.
Let us now consider the sequence $\{d_k\} \subset \mathbb{R}^{s+m+p}$ with $d_k = \frac{F(x_k) - F(x_0)}{\|F(x_k) - F(x_0)\|}$; since the unit ball is a compact set, we can suppose (substituting $\{d_k\}$ with a suitable subsequence, if necessary) that $\lim_{k \to +\infty} d_k = t^* \neq 0$, $t^* \in T_1$. On the other hand, $d_k = \frac{F(x_k) - F(x_0)}{\|F(x_k) - F(x_0)\|} \in (C_0 \times V \times \{0\})$ so that its limit $t^* \in (C \times V \times \{0\})$. It then follows that $t^* \in T_1 \cap (C \times V \times \{0\})$, $t^* \neq 0$, and this contradicts the assumptions, since $t^* = \lim_{k \to +\infty} \frac{F(x_k) - F(x_0)}{\|F(x_k) - F(x_0)\|}$ and $F(x_k) \in F(x_0) + (C_0 \times V \times \{0\})\ \forall k$.
Directly from Theorem 3.2 we can state the following sufficient optimality condition.
Corollary 3.1 Consider problem $P$. If the following condition holds then $x_0 \in X$ is a local efficient point:
3.2 The nondifferentiable case
The previously stated optimality conditions are extremely general, since no properties are assumed regarding the functions $f$, $g$ and $h$. On the other hand, those conditions are not easy to verify, since the cone $T_1$ is not trivial to determine.
Some more "easy to use" necessary optimality conditions, still based on the image space approach, can be proved under the following assumption.
(H_N) Nondifferentiability Assumptions
• Functions $f$, $g$ and $h$ are Hadamard directionally differentiable at the point $x_0 \in X$ (3).
A complete study of Hadamard directionally differentiable functions can be found for example in [16] (see also [1, 2, 25, 28]). The nondifferentiability hypothesis (H_N) makes it possible to define the following cones, which play a key role in stating further necessary optimality conditions in the image space.
Definition 3.1 Consider problem $P$, suppose (H_N) holds and let $U \subseteq \mathbb{R}^n$ be a cone. The following sets are defined:

$$Ker_{\partial h} = \{0\} \cup \left\{v \in \mathbb{R}^n \setminus \{0\} : \tfrac{\partial h}{\partial v}(x_0) = 0\right\}$$
$$Ker_{\partial h}^C = \mathbb{R}^n \setminus Ker_{\partial h} = \left\{v \in \mathbb{R}^n \setminus \{0\} : \tfrac{\partial h}{\partial v}(x_0) \neq 0\right\}$$
$$Im_{\partial h}(U) = \{0\} \cup \left\{t \in \mathbb{R}^p : t = \tfrac{\partial h}{\partial v}(x_0),\ v \neq 0,\ v \in U\right\}$$
$$L(X, S, x_0) = T(X \cap S, x_0) \cup Ker_{\partial h}^C = \mathbb{R}^n \setminus (Ker_{\partial h} \setminus T(X \cap S, x_0)) = L$$
$$K_L = \left\{t \in \mathbb{R}^{s+m+p} : t = \left(\tfrac{\partial f}{\partial v}(x_0), \tfrac{\partial g}{\partial v}(x_0), \tfrac{\partial h}{\partial v}(x_0)\right),\ v \neq 0,\ v \in L\right\}$$
3 Let $f: A \to \mathbb{R}$, with $A \subseteq \mathbb{R}^n$ an open set. The limit

$$\lim_{\lambda \to 0^+,\ h \to v} \frac{f(x_0 + \lambda h) - f(x_0)}{\lambda}$$

is called the Hadamard directional derivative of $f(x)$ at $x_0 \in A$ in the direction $v$; if this derivative exists and is finite for all $v$ then $f(x)$ is Hadamard directionally differentiable at $x_0 \in A$. In order to verify Hadamard directional differentiability, recall that a function $f(x)$ is Hadamard directionally differentiable at $x_0$ (see [16]) if and only if its derivative $\frac{\partial f}{\partial v}(x_0) \overset{def}{=} \lim_{\lambda \to 0^+} \frac{f(x_0 + \lambda v) - f(x_0)}{\lambda}$ is continuous as a function of the direction and the function itself is Dini uniformly directionally differentiable at $x_0$ (hence directionally differentiable at $x_0$), that is to say that:

$$\lim_{v \to 0} \frac{f(x_0 + v) - f(x_0) - \frac{\partial f}{\partial v}(x_0)}{\|v\|} = 0$$

Recall also that if a function $f(x)$ is Hadamard directionally differentiable at $x_0$ then it is also continuous at $x_0$. A vector valued function $F: A \to \mathbb{R}^m$ is Hadamard directionally differentiable at $x_0$ if and only if all of its components are.
$$K_U = \left\{t \in \mathbb{R}^{s+m+p} : t = \left(\tfrac{\partial f}{\partial v}(x_0), \tfrac{\partial g}{\partial v}(x_0), \tfrac{\partial h}{\partial v}(x_0)\right),\ v \neq 0,\ v \in U\right\}$$
Note that $Ker_{\partial h}$, $Ker_{\partial h}^C$, $Im_{\partial h}(U)$, $K_L$ and $K_U$ are cones, since $\frac{\partial f}{\partial v}(x_0)$, $\frac{\partial g}{\partial v}(x_0)$ and $\frac{\partial h}{\partial v}(x_0)$ are positively homogeneous (of the first degree) as functions of the direction $v$, due to the Hadamard directional differentiability of $f$, $g$ and $h$ (4). In the rest of the paper, cones $U \subseteq L(X, S, x_0)$ will be used extensively; in this regard note that:

$$U \subseteq L(X, S, x_0) \iff U \cap Ker_{\partial h} \subseteq T(X \cap S, x_0)$$
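For a concrete illustration of the derivative in footnote 3 (a hypothetical example added here, not from the paper), the Euclidean norm $f(x) = \|x\|$ is not differentiable at $x_0 = 0$ but is Hadamard directionally differentiable there, with $\frac{\partial f}{\partial v}(0) = \|v\|$; a numerical sketch of the defining double limit:

```python
import numpy as np

# Hypothetical example: f(x) = ||x|| at x0 = 0; its Hadamard directional
# derivative is df/dv(0) = ||v||, here ||(3, 4)|| = 5.
f = np.linalg.norm
x0 = np.zeros(2)
v = np.array([3.0, 4.0])

def hadamard_quotient(lam, h):
    return (f(x0 + lam * h) - f(x0)) / lam

# lambda -> 0+ while the direction h -> v simultaneously, as in the definition
for k in (10, 100, 1000):
    lam = 1.0 / k
    h_k = v + np.array([1.0 / k, -1.0 / k])  # perturbed directions h_k -> v
    print(hadamard_quotient(lam, h_k))       # approaches 5.0

assert abs(hadamard_quotient(1e-8, v) - 5.0) < 1e-6
```

The quotient equals $\|h\|$ for every $\lambda > 0$, so it converges to $\|v\| = 5$ along any $h_k \to v$; this also illustrates that the derivative is continuous and positively homogeneous in $v$, as used above.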
Remark 3.1 Since $L(X, S, x_0) = T(X \cap S, x_0) \cup Ker_{\partial h}^C$ it is worth noticing that if $h$ is Hadamard directionally differentiable at $x_0 \in X$ then (5):

$$T(X \cap S, x_0) \subseteq T(S, x_0) \subseteq Ker_{\partial h}$$

In order to verify this property, first note that $T(X \cap S, x_0) \subseteq T(S, x_0)$, since $X \cap S \subseteq S$. Since $t = 0 \in T(S, x_0) \cap Ker_{\partial h}$, just the case $t \in T(S, x_0)$, $t \neq 0$, has to be considered. By means of the definition of the tangent cone, $\exists \{x_k\} \subset S$, $x_k \to x_0$, $\exists \{\lambda_k\} \subset \mathbb{R}$, $\lambda_k > 0$, $\lambda_k \to +\infty$, such that $t = \lim_{k \to +\infty} \lambda_k (x_k - x_0)$; it can also be supposed (eventually substituting $\{x_k\}$ with a proper subsequence) that $v = \lim_{k \to +\infty} \frac{x_k - x_0}{\|x_k - x_0\|}$ exists. Since $\{x_k\} \subset S$ it yields $h(x_0) = h(x_k) = 0\ \forall k > 0$ so that, by means of the Hadamard directional differentiability of $h(x)$, it is:

$$0 = \lim_{k \to +\infty} \frac{h(x_k) - h(x_0)}{\|x_k - x_0\|} = \lim_{\gamma_k \to 0^+,\ d_k \to v} \frac{h(x_0 + \gamma_k d_k) - h(x_0)}{\gamma_k} = \frac{\partial h}{\partial v}(x_0)$$

where $\gamma_k = \|x_k - x_0\|$ and $d_k = \frac{x_k - x_0}{\|x_k - x_0\|}$, so that $v \in Ker_{\partial h}$.
By means of the definition it results:

$$t = \lim_{k \to +\infty} \lambda_k (x_k - x_0) = \lim_{k \to +\infty} \lambda_k \|x_k - x_0\| \, \frac{x_k - x_0}{\|x_k - x_0\|} = \mu v$$

where $\mu = \lim_{k \to +\infty} \lambda_k \|x_k - x_0\| \geq 0$ and $\|v\| = 1$. Since $Ker_{\partial h}$ is a cone and $v \in Ker_{\partial h}$, it follows that $t \in Ker_{\partial h}$.
By means of these cones the following necessary optimality conditions in the image space can be stated.
4 Note also that the given definition of $K_L$ generalizes the one given in [5, 6, 7, 8, 9, 10, 11] for differentiable problems having no set constraints; in particular these papers consider $K_L = \{t \in \mathbb{R}^{s+m} : t = [J_f(x_0), J_g(x_0)]\,v,\ v \in \mathbb{R}^n\}$, which is nothing but the image of $[J_f(x_0), J_g(x_0)]$.
5 It is also known, see for instance [3], that if $h$ is differentiable at $x_0 \in X$ it is:

$$T(S, x_0) \subseteq \mathrm{Cl}(\mathrm{Co}(T(S, x_0))) \subseteq Ker_{\partial h}$$
Theorem 3.3 Consider problem $P$ and suppose (H_N) holds; if the feasible point $x_0 \in X$ is a local efficient point then the two following equivalent conditions hold:

$$K_L \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset \tag{3.4}$$
$$(K_L - (C \times V \times \{0\})) \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset \tag{3.5}$$

In addition, for any cone $U \subseteq \mathbb{R}^n$ such that $U \cap Ker_{\partial h} \subseteq T(X \cap S, x_0)$ the two following further equivalent conditions hold:

$$K_U \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset \tag{3.6}$$
$$(K_U - (C \times V \times \{0\})) \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset \tag{3.7}$$
Proof Condition (3.4) is proved by contradiction. Suppose that there exists $t = (t_f, t_g, t_h) \in K_L \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\})$, so that $\exists \mu > 0$, $\exists v \in L(X, S, x_0)$, $\|v\| = 1$, such that

$$t = \mu \left(\tfrac{\partial f}{\partial v}(x_0), \tfrac{\partial g}{\partial v}(x_0), \tfrac{\partial h}{\partial v}(x_0)\right) \in (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}).$$

Since $\frac{\partial h}{\partial v}(x_0) = 0$ then $v \in Ker_{\partial h}$, which implies that $v \notin Ker_{\partial h}^C$ and $v \in T(X \cap S, x_0)$. By means of the definition of $T(X \cap S, x_0)$ it yields that $\exists \{x_k\} \subset (X \cap S)$, $x_k \to x_0$, $\exists \{\lambda_k\} \subset \mathbb{R}$, $\lambda_k > 0$, $\lambda_k \to +\infty$, such that $v = \lim_{k \to +\infty} v_k$ where $v_k = \lambda_k (x_k - x_0)$. Being the functions $f$ and $g$ Hadamard directionally differentiable it results:

$$\lim_{k \to +\infty} \frac{f(x_k) - f(x_0)}{1/\lambda_k} = \lim_{k \to +\infty} \frac{f(x_0 + \tfrac{1}{\lambda_k} v_k) - f(x_0)}{1/\lambda_k} = \frac{\partial f}{\partial v}(x_0) \in \mathrm{Int}(C)$$

and, in the same way:

$$\lim_{k \to +\infty} \frac{g(x_k) - g(x_0)}{1/\lambda_k} = \frac{\partial g}{\partial v}(x_0) \in \mathrm{Int}(V)$$

By means of a well known limit theorem there then exists $\bar{k} > 0$ such that $f(x_k) - f(x_0) \in \mathrm{Int}(C)$ and $g(x_k) - g(x_0) \in \mathrm{Int}(V)$ for any $k > \bar{k}$; this means that the sequence $\{x_k\} \subset (X \cap S)$, $x_k \to x_0$, is feasible for $k > \bar{k}$ and that $x_0$ is not a local efficient point, which is a contradiction.
The equivalence of (3.4) and (3.5) can be easily verified; the whole result then follows by noticing that $U \subseteq L(X, S, x_0)$ implies $K_U \subseteq K_L$.
Remark 3.2 For the sake of completeness, note that (3.4) can be stated as a corollary of Theorem 3.1.
Denoting by $B = \{t = (t_f, t_g, t_h) \in \mathbb{R}^{s+m+p} : t_h \neq 0\}$, directly from Theorem 3.1 it follows that the efficiency of $x_0$ implies:

$$(T_1 \cup B) \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset$$

It now just needs to be verified that $K_L \subseteq (T_1 \cup B)$. Let $t = \mu \frac{\partial F}{\partial v}(x_0) \in K_L$, $v \in L(X, S, x_0)$, $\|v\| = 1$, $\mu \geq 0$; if $\mu = 0$ then $t = \mu \frac{\partial F}{\partial v}(x_0) = 0 \in T_1$, while if $\mu \neq 0$ and $v \in Ker_{\partial h}^C$ then $\frac{\partial h}{\partial v}(x_0) \neq 0$ and $t \in B$. Suppose now $\mu \neq 0$ and $v \in T(X \cap S, x_0)$; then $\exists \{x_k\} \subset X$, $x_k \to x_0$, $h(x_k) = 0$, such that $v = \lim_{k \to +\infty} \frac{x_k - x_0}{\|x_k - x_0\|}$; let also $\lambda_k = \|x_k - x_0\|^{-1}$. By means of the Hadamard directional differentiability of $F(x)$ at $x_0$ it is:

$$\frac{\partial F}{\partial v}(x_0) = \lim_{k \to +\infty} \frac{F(x_k) - F(x_0)}{\|x_k - x_0\|} = \lim_{k \to +\infty} \lambda_k (F(x_k) - F(x_0)) \in T_1;$$

being $T_1$ a cone it then follows that $t = \mu \frac{\partial F}{\partial v}(x_0) \in T_1$ too.
4 Optimality conditions in the Decision Space: the nondifferentiable case
In the literature some necessary optimality conditions expressed in the decision space are stated for particular problems $P$ having a scalar objective function and assuming the differentiability of the functions $f$, $g$ and $h$ [3, 21, 23]. These conditions are useful in applications (consider, above all, optimal control theory) and are known as "maximum/minimum principle" conditions.
The aim of this section is to generalize those conditions to Hadamard directionally differentiable functions and to multiobjective problems. In other words, the necessary optimality conditions in the decision space (hence involving the directional derivatives and some multipliers) which are going to be studied in this section are the following:
(C_N) $\exists \alpha_f \in C^+$, $\exists \alpha_g \in V^+$, $\exists \alpha_h \in \mathbb{R}^p$, $(\alpha_f, \alpha_g, \alpha_h) \neq 0$, such that:

$$\alpha_f^T \frac{\partial f}{\partial v}(x_0) + \alpha_g^T \frac{\partial g}{\partial v}(x_0) + \alpha_h^T \frac{\partial h}{\partial v}(x_0) \leq 0 \quad \forall v \in \mathrm{Cl}(U) \setminus \{0\}$$

where $U \subseteq \mathbb{R}^n$ is a cone and (H_N) is assumed.
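As a sanity check (a hypothetical instance, not taken from the paper), (C_N) holds for the unconstrained nondifferentiable problem $\max f(x) = -\|x\|$ with $C = \mathbb{R}_+$, $m = p = 0$, $X = \mathbb{R}^2$ and $U = \mathbb{R}^2$, at its global maximizer $x_0 = 0$, with the single multiplier $\alpha_f = 1$:

```python
import numpy as np

# Hypothetical instance: s = 1, C = R_+, no g/h constraints, X = R^2, so
# the cone U = R^2 may be used. At x0 = 0 the Hadamard directional
# derivative of f(x) = -||x|| is df/dv(0) = -||v||, and (C_N) holds with
# alpha_f = 1:  alpha_f * df/dv(x0) = -||v|| <= 0 for every v != 0.
def df_dv(v):
    return -np.linalg.norm(v)

alpha_f = 1.0
rng = np.random.default_rng(0)
for v in rng.normal(size=(1000, 2)):
    assert alpha_f * df_dv(v) <= 0.0
print("(C_N) verified on 1000 sampled directions with alpha_f = 1")
```

Example 4.1 below shows that, by contrast, such multipliers need not exist for every choice of the cone $U$, even at a global efficient point.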
It is important to note that conditions (C_N), depending on the particular chosen cone $U$, do not hold in general even if $x_0$ is an efficient point. This is shown in the following example, which implicitly points out that condition (3.4) is more general than conditions (C_N).
Example 4.1 Consider the following problem:

where $X = X_1 \cup X_2 \cup X_3$ with:

$$X_1 = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 + x_2 \geq 0,\ 2x_1 + x_2 \leq 0\},$$
$$X_2 = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 \leq 0,\ x_2 \leq 0\},$$
$$X_3 = \{(x_1, x_2) \in \mathbb{R}^2 : x_1 + x_2 \geq 0,\ x_1 + 2x_2 \leq 0\}$$

and $x_0 = (0, 0)$; since the problem has no equality constraints it is $p = 0$ and $S = \mathbb{R}^2$. Note that $(\mathrm{Int}(C) \times \mathrm{Int}(V)) = \mathbb{R}^2_{++}$ and $X = T(X \cap S, x_0) = K_L$ since $[J_f(x_0), J_g(x_0)]$ is equal to the identity matrix. The point $x_0$ is the global efficient point of the problem and the necessary optimality condition (3.4) is verified, being $X \cap \mathbb{R}^2_{++} = \emptyset$; on the other hand the sets $X$, $I(X, x_0)$, $T(X \cap S, x_0)$ and $K_L$ are not convex.
Assume now $U = T(X \cap S, x_0)$; even if $x_0 \in X$ is a global efficient point, it can be easily verified that (C_N) does not hold; this points out that condition (3.4) is more general than condition (C_N).
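The two claims of Example 4.1 can be checked numerically; the sketch below (an illustration added here, with membership tests transcribed from the definitions of $X_1$, $X_2$, $X_3$) verifies $X \cap \mathbb{R}^2_{++} = \emptyset$ on a grid and exhibits two directions of $X = K_L$ whose sum is strictly positive, so that no hyperplane separating $K_L$ from $\mathbb{R}^2_{++}$, and hence no multiplier for (C_N), can exist:

```python
import itertools
import numpy as np

# Membership test for X = X1 U X2 U X3 of Example 4.1, x0 = (0, 0)
def in_X(p):
    x1, x2 = p
    in_X1 = x1 + x2 >= 0 and 2 * x1 + x2 <= 0
    in_X2 = x1 <= 0 and x2 <= 0
    in_X3 = x1 + x2 >= 0 and x1 + 2 * x2 <= 0
    return in_X1 or in_X2 or in_X3

# condition (3.4): K_L and int(R^2_+) are disjoint (here K_L = X), on a grid
grid = np.linspace(-2.0, 2.0, 81)
assert not any(in_X((a, b))
               for a, b in itertools.product(grid, grid) if a > 0 and b > 0)

# yet Co(K_L) = R^2: two elements of X sum to a strictly positive vector
u, w = np.array([-1.0, 2.0]), np.array([2.0, -1.0])
assert in_X(u) and in_X(w)
assert np.all(u + w > 0)  # (1, 1) in int(R^2_+) lies in Co(K_L)
print("(3.4) holds on the sample, but Co(K_L) meets int(R^2_+): (C_N) fails")
```

This matches the discussion in Section 4.1 below: nonconvexity of $K_L$ is exactly what lets (3.4) hold while the separation underlying (C_N) fails.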
In this section it is going to be proved that the additional assumption needed in order to state the necessary optimality conditions in the decision space is the existence of a separating hyperplane between the cone $(\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\})$ and $K_U$ or $K_L$. This result is stated by means of separation theorems and the use of multipliers, hence a key tool of this approach is the positive polar of a cone $K$, denoted by $K^+$.
4.1 Characterization in the image space
The aim of this subsection is to characterize conditions (C_N) in the image space, thus making possible a complete comparison with condition (3.4). With this aim, the following preliminary results are needed.
Lemma 4.1 Consider problem $P$ with $p \geq 1$, suppose (H_N) holds and let $U \subseteq \mathbb{R}^n$ be a cone such that $\mathrm{Co}(Im_{\partial h}(U)) \neq \mathbb{R}^p$. Then $\exists \alpha_h \in \mathbb{R}^p$, $\alpha_h \neq 0$, such that:

$$\alpha_h^T \frac{\partial h}{\partial v}(x_0) \leq 0 \quad \forall v \in \mathrm{Cl}(U) \setminus \{0\}$$

and hence (C_N) is verified.
Proof Since $\mathrm{Co}(Im_{\partial h}(U)) \neq \mathbb{R}^p$, there exists a support hyperplane for the convex cone $\mathrm{Co}(Im_{\partial h}(U))$, so that $\exists \alpha_h \in \mathbb{R}^p$, $\alpha_h \neq 0$, such that $\alpha_h^T t \leq 0\ \forall t \in \mathrm{Co}(Im_{\partial h}(U))$; this implies that $\alpha_h^T \frac{\partial h}{\partial v}(x_0) \leq 0\ \forall v \in U$, $v \neq 0$. Being $\frac{\partial h}{\partial v}(x_0)$ continuous as a function of the direction $v$, due to the Hadamard directional differentiability of $h$, it then follows that $\alpha_h^T \frac{\partial h}{\partial v}(x_0) \leq 0\ \forall v \in \mathrm{Cl}(U)$, $v \neq 0$. The whole result is then proved just assuming $\alpha_f = 0$ and $\alpha_g = 0$.
Note that Lemma 4.1 points out that the case $\mathrm{Co}(Im_{\partial h}(U)) \neq \mathbb{R}^p$ is trivial, since a support hyperplane for $\mathrm{Co}(Im_{\partial h}(U))$ exists without the need of any additional hypothesis, such as convexity assumptions, optimality assumptions on $x_0$, or regularity conditions for the problem.
Lemma 4.2 Consider problem $P$ with $p \geq 1$, suppose (H_N) holds and let $U \subseteq \mathbb{R}^n$ be a cone. If (C_N) is verified and $\mathrm{Co}(Im_{\partial h}(U)) = \mathbb{R}^p$ then $(\alpha_f, \alpha_g) \neq 0$.
Proof Suppose by contradiction that $\alpha_f = 0$ and $\alpha_g = 0$, so that $\alpha_h \neq 0$. Then

$$\alpha_h^T \frac{\partial h}{\partial v}(x_0) \leq 0 \quad \forall v \in \mathrm{Cl}(U) \setminus \{0\},$$

and this yields $\alpha_h^T t \leq 0\ \forall t \in Im_{\partial h}(U)$. Consequently it results that $\alpha_h^T t \leq 0\ \forall t \in \mathrm{Co}(Im_{\partial h}(U)) = \mathbb{R}^p$, which implies $\alpha_h = 0$, and this is a contradiction since $(\alpha_f, \alpha_g, \alpha_h) \neq 0$.
It is now possible to fully characterize condition (C_N) in the image space.
Theorem 4.1 Consider problem $P$, suppose (H_N) holds and let $U \subseteq \mathbb{R}^n$ be a cone. Then condition (C_N) is verified if and only if the following implication holds:

$$p = 0 \ \text{ or } \ \mathrm{Co}(Im_{\partial h}(U)) = \mathbb{R}^p \ \Longrightarrow\ \mathrm{Co}(K_U) \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset$$

In particular, if $p = 0$ or $\mathrm{Co}(Im_{\partial h}(U)) = \mathbb{R}^p$ then $(\alpha_f, \alpha_g) \neq 0$.
Proof ⇒) Suppose (C_N) holds and first consider the case $p \geq 1$ and $\mathrm{Co}(Im_{\partial h}(U)) = \mathbb{R}^p$. By means of Lemma 4.2 it is $(\alpha_f, \alpha_g) \neq 0$. Suppose now by contradiction that $\exists (t_f, t_g, t_h) \in \mathrm{Co}(K_U) \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) \neq \emptyset$; being $\alpha_f \in C^+$, $\alpha_g \in V^+$, $(\alpha_f, \alpha_g) \neq 0$, $t_f \in \mathrm{Int}(C)$, $t_g \in \mathrm{Int}(V)$ and $t_h = 0$ it is:

$$\alpha_f^T t_f + \alpha_g^T t_g + \alpha_h^T t_h > 0 \tag{4.1}$$

Since $(t_f, t_g, t_h) \in \mathrm{Co}(K_U)$, $\exists q \in \mathbb{N}$, $q > 0$, $\exists v_1, \ldots, v_q \in U$, such that

$$(t_f, t_g, t_h) = \sum_{i=1}^{q} \left(\frac{\partial f}{\partial v_i}(x_0), \frac{\partial g}{\partial v_i}(x_0), \frac{\partial h}{\partial v_i}(x_0)\right)$$

hence

$$\alpha_f^T t_f + \alpha_g^T t_g + \alpha_h^T t_h = \sum_{i=1}^{q} \left(\alpha_f^T \frac{\partial f}{\partial v_i}(x_0) + \alpha_g^T \frac{\partial g}{\partial v_i}(x_0) + \alpha_h^T \frac{\partial h}{\partial v_i}(x_0)\right) \leq 0$$

and this contradicts (4.1). The proof for the case $p = 0$ is analogous.
⇐) If $p \geq 1$ and $\mathrm{Co}(Im_{\partial h}(U)) \neq \mathbb{R}^p$, the result follows from Lemma 4.1. Consider now the case $p \geq 1$ and $\mathrm{Co}(Im_{\partial h}(U)) = \mathbb{R}^p$, so that $\mathrm{Co}(K_U) \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset$; by means of a well known separation theorem between convex sets, $\exists (\alpha_f, \alpha_g, \alpha_h) \in (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\})^+$, $(\alpha_f, \alpha_g, \alpha_h) \neq 0$, such that $(\alpha_f, \alpha_g, \alpha_h)^T t \leq 0\ \forall t \in \mathrm{Co}(K_U) \supseteq K_U$. A known result on polar cones (6) implies that $(\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\})^+ = \mathrm{Int}(C)^+ \times \mathrm{Int}(V)^+ \times \mathbb{R}^p$ and hence, being $C$ and $V$ convex cones (7), $\exists \alpha_f \in C^+$, $\exists \alpha_g \in V^+$, $\exists \alpha_h \in \mathbb{R}^p$, $(\alpha_f, \alpha_g, \alpha_h) \neq 0$, such that:

$$\alpha_f^T \frac{\partial f}{\partial v}(x_0) + \alpha_g^T \frac{\partial g}{\partial v}(x_0) + \alpha_h^T \frac{\partial h}{\partial v}(x_0) \leq 0 \quad \forall v \in U,\ v \neq 0.$$

The directional derivatives $\frac{\partial f}{\partial v}(x_0)$, $\frac{\partial g}{\partial v}(x_0)$ and $\frac{\partial h}{\partial v}(x_0)$ are continuous as functions of the direction, since $f$, $g$ and $h$ are Hadamard directionally differentiable at $x_0$, hence (C_N) is verified. In particular, by Lemma 4.2 it is $(\alpha_f, \alpha_g) \neq 0$. The proof for the case $p = 0$ is analogous.
It is now worth making a comparison between conditions (C_N) and condition (3.4). Condition (3.4) states that $K_L \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset$ while, for a given cone $U$, (C_N) implies $\mathrm{Co}(K_U) \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset$. It is then clear that, even when $K_U \subseteq K_L$, condition (C_N) is stronger than (3.4), since it requires the existence of a separating hyperplane between $K_U$ and $(\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\})$, while $K_L$ in (3.4) is not convex in general and hence a separating hyperplane may not exist.
Note finally that in Example 4.1, where $U = T(X \cap S, x_0)$ is assumed and (C_N) does not hold, it results that $\mathrm{Co}(K_U) = \mathbb{R}^2$ and hence no separating hyperplane exists; note also that in Example 4.1 condition (3.4) holds without any convexity assumption regarding the cones $U$, $T(X \cap S, x_0)$, $K_U$ or $K_L$.
4.2 U-regularity conditions
As has been pointed out in the previous subsection, the condition

$$K_L \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset$$
6 Let $C_1, \ldots, C_n$ be cones; then $(C_1 \times \ldots \times C_n)^+ = (C_1^+ \times \ldots \times C_n^+)$.
To prove this property it is sufficient to consider just the case $n = 2$. First verify that $(C_1^+ \times C_2^+) \subseteq (C_1 \times C_2)^+$; assuming $(\alpha_1, \alpha_2) \in (C_1^+ \times C_2^+)$ it yields that $\alpha_1^T c + \alpha_2^T v \geq 0\ \forall c \in C_1$ and $\forall v \in C_2$, so that $(\alpha_1, \alpha_2) \in (C_1 \times C_2)^+$. Verify now that $(C_1 \times C_2)^+ \subseteq (C_1^+ \times C_2^+)$; assume $(\alpha_1, \alpha_2) \in (C_1 \times C_2)^+$ and suppose by contradiction that $\alpha_1 \notin C_1^+$ [$\alpha_2 \notin C_2^+$]; then $\exists \bar{c} \in C_1$ [$\exists \bar{v} \in C_2$] such that $\alpha_1^T \bar{c} < 0$ [$\alpha_2^T \bar{v} < 0$]; since $C_1$ [$C_2$] is a cone then $\lambda \bar{c} \in C_1$ [$\lambda \bar{v} \in C_2$] $\forall \lambda > 0$ so that, given $v \in C_2$ [$c \in C_1$], for $\lambda > 0$ great enough we have $\alpha_1^T (\lambda \bar{c}) + \alpha_2^T v < 0$ [$\alpha_1^T c + \alpha_2^T (\lambda \bar{v}) < 0$], and this contradicts that $(\alpha_1, \alpha_2) \in (C_1 \times C_2)^+$.
7 Let $C$ be a cone; it is known (see for all [27]) that $C^+ = \mathrm{Cl}(C)^+$ so that $\mathrm{Int}(C)^+ = \mathrm{Cl}(\mathrm{Int}(C))^+$ too. If $C$ is a convex cone we also have (see for instance [4]) that $\mathrm{Cl}(\mathrm{Int}(C)) = \mathrm{Cl}(C)$, and hence $\mathrm{Int}(C)^+ = C^+$.
does not guarantee (C_N), since the implication

$$p = 0 \ \text{ or } \ \mathrm{Co}(Im_{\partial h}(U)) = \mathbb{R}^p \ \Longrightarrow\ \mathrm{Co}(K_U) \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset$$

is needed. This behaviour suggests the introduction of the following regularity condition (8).
Definition 4.1 Consider problem $P$ and suppose (H_N) holds. A cone $U \subseteq \mathbb{R}^n$ verifies a U-regularity condition if the following implication holds:

$$K_L \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset \ \text{ and } \ [p = 0 \ \text{ or } \ \mathrm{Co}(Im_{\partial h}(U)) = \mathbb{R}^p] \ \Longrightarrow\ \mathrm{Co}(K_U) \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset \tag{4.2}$$
The use of U-regularity conditions is the focus of the next theorem, which follows directly from (4.2) and Theorem 4.1.
Theorem 4.2 Consider problem $P$ and suppose (H_N) holds; the following properties hold:
i) $U$ verifies a U-regularity condition if and only if

$$K_L \cap (\mathrm{Int}(C) \times \mathrm{Int}(V) \times \{0\}) = \emptyset \ \Longrightarrow\ \text{(C}_N\text{) holds};$$

ii) if $x_0 \in X$ is a feasible local efficient point and $U \subseteq \mathbb{R}^n$ is a cone then: $U$ verifies a U-regularity condition $\iff$ (C_N) holds.
In other words, a U-regularity condition is nothing but the additional hypothesis needed in order to translate condition (3.4) in the image space into condition (C_N) in the decision space. Hence, from now on, the study of the (C_N) optimality conditions can be equivalently done in the image space by means of U-regularity conditions.
Theorem 4.3 Consider problem $P$, suppose (H_N) holds and let $x_0 \in X$ be a feasible local efficient point. Then for every cone $U \subseteq \mathbb{R}^n$ verifying a U-regularity condition $\exists \alpha_f \in C^+$, $\exists \alpha_g \in V^+$, $\exists \alpha_h \in \mathbb{R}^p$, $(\alpha_f, \alpha_g, \alpha_h) \neq 0$, such that:

$$\alpha_f^T \frac{\partial f}{\partial v}(x_0) + \alpha_g^T \frac{\partial g}{\partial v}(x_0) + \alpha_h^T \frac{\partial h}{\partial v}(x_0) \leq 0 \quad \forall v \in \mathrm{Cl}(U) \setminus \{0\}.$$

In particular, if $p = 0$ or $\mathrm{Co}(Im_{\partial h}(U)) = \mathbb{R}^p$ then $(\alpha_f, \alpha_g) \neq 0$.
8 A different definition of U-regularity condition, not characterizing conditions (C_N),