
On the influence of center-Lipschitz conditions in the convergence analysis of multi-point iterative methods

Ioannis K. Argyros¹ and Santhosh George²

Abstract

The aim of this article is to extend the local as well as the semi-local convergence analysis of multi-point iterative methods using center-Lipschitz conditions in combination with our idea of the restricted convergence region. It turns out that in this way a finer convergence analysis for these methods is obtained than in earlier works, and without additional hypotheses. Numerical examples favoring our technique over earlier ones complete this article.

AMS Subject Classification: 65F08, 37F50, 65N12.

Key Words: Multi-point iterative methods; Banach space; local and semi-local convergence analysis.

1 Introduction

Let X, Y be Banach spaces and Ω ⊂ X be a nonempty open set. By L(X, Y) we denote the space of bounded linear operators from X into Y. Let also U(w, d) be the open ball centered at w ∈ X with radius d > 0, and Ū(w, d) its closure.

Many problems from diverse disciplines such as Mathematics, Optimization, Mathematical Programming, Chemistry, Biology, Physics, Economics, Statistics, Engineering and other fields [?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?] can be reduced to finding a solution x* of the equation

H(x) = 0, (1.1)

where H : Ω → Y is a continuous operator. Since a unique solution x* of equation (1.1) in a neighborhood of some initial data x0 can be obtained in closed form only in special cases, researchers construct iterative methods which generate a sequence converging to x*.

The most widely used iterative method is Newton's method, defined for each n = 0, 1, 2, . . . by

x0 ∈ Ω,  x_{n+1} = x_n − H'(x_n)^{-1} H(x_n).   (1.2)

¹Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA, Email: iargyros@cameron.edu

²Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India-575 025, Email: sgeorge@nitk.edu.in


The order of convergence is an important concern when dealing with iterative methods. The computational cost in general increases as the convergence order increases.

That is why researchers and practitioners have developed iterative methods that, on the one hand, avoid the computation of derivatives and, on the other hand, achieve a high order of convergence.

We consider the following multi-step iterative method, defined for each n = 0, 1, 2, . . . by

u_n = v_n^{(0)},
v_n^{(1)} = v_n^{(0)} − H'(v_n^{(0)})^{-1} H(v_n^{(0)}),
v_n^{(2)} = v_n^{(1)} − H'(v_n^{(0)})^{-1} H(v_n^{(1)}),   (1.3)
· · ·
u_{n+1} = v_n^{(k)} = v_n^{(k−1)} − H'(v_n^{(0)})^{-1} H(v_n^{(k−1)}).
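To make the structure of (1.3) concrete, the following minimal sketch (ours, not from the cited works; function names and test data are illustrative) implements the k-step iteration for a system H(x) = 0, computing the derivative H'(v_n^{(0)}) once per outer step and reusing (freezing) it over the k substeps.

```python
# Minimal sketch of the k-step method (1.3); names and test data are illustrative.
import numpy as np

def k_step_method(H, dH, u0, k=3, tol=1e-12, max_outer=50):
    """u_{n+1} = v_n^(k), with H'(v_n^(0)) computed once and frozen over the k substeps."""
    u = np.asarray(u0, dtype=float)
    for _ in range(max_outer):
        v = u.copy()                        # v_n^(0) = u_n
        A = np.asarray(dH(v), dtype=float)  # frozen derivative H'(v_n^(0))
        for _ in range(k):                  # substeps v_n^(1), ..., v_n^(k)
            v = v - np.linalg.solve(A, H(v))
        if np.linalg.norm(v - u) < tol:
            return v
        u = v                               # u_{n+1} = v_n^(k)
    return u

# The map of Example 4.1 below: H(x) = (e^{x1} - 1, ((e-1)/2) x2^2 + x2, x3)^T, with x* = 0.
e = np.e
H  = lambda x: np.array([np.exp(x[0]) - 1.0, 0.5*(e - 1.0)*x[1]**2 + x[1], x[2]])
dH = lambda x: np.diag([np.exp(x[0]), (e - 1.0)*x[1] + 1.0, 1.0])
print(k_step_method(H, dH, u0=[0.05, 0.05, 0.05], k=3))   # approximately (0, 0, 0)
```

Applied to the map of Example 4.1, the iterates converge rapidly to x* = (0, 0, 0)^T from a nearby starting point.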

The semi-local convergence of method (1.3) was given in [?]. It is well known that, in general, the convergence region shrinks as the convergence order increases. To avoid this problem, we introduce a center-Lipschitz-type condition that helps us determine a region, at least as small as the one used before, containing the iterates {u_n}. In this way the resulting Lipschitz constants are at least as small.

A tighter convergence analysis is obtained this way. In earlier works the order of convergence was shown using Taylor expansions and conditions reaching up to the derivative of H of order k + 1, although these derivatives do not appear in the method. As an academic example, let X = Y = ℝ and Ω = [−5/2, 3/2]. Define φ on Ω by

φ(x) = x³ log x² + x⁵ − x⁴ (with φ(0) = 0).

Then

φ'(x) = 3x² log x² + 5x⁴ − 4x³ + 2x²,
φ''(x) = 6x log x² + 20x³ − 12x² + 10x,
φ'''(x) = 6 log x² + 60x² − 24x + 22.

Obviously, φ'''(x) is not bounded on Ω. So, the convergence of method (1.3) is not guaranteed by the analyses in [?,?,?].
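A quick numerical check of this unboundedness claim (ours, for illustration only) evaluates the third derivative written above near x = 0, where the 6 log x² term diverges:

```python
# Evaluate phi'''(x) = 6*log(x^2) + 60*x^2 - 24*x + 22 near x = 0 (illustrative check).
import numpy as np

phi3 = lambda x: 6*np.log(x**2) + 60*x**2 - 24*x + 22
for x in (1e-1, 1e-3, 1e-6, 1e-9):
    print(x, phi3(x))   # the values decrease without bound as x -> 0
```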

The rest of the article is organized as follows: Section 2 contains the local convergence analysis and Section 3 the semi-local convergence analysis. The numerical examples are given in Section 4, and Section 5 concludes the article.

2 Local convergence

Let L0 > 0, L > 0 and L1 ≥ 1 be parameters. Define the scalar quadratic polynomial p by

p(t) = L0(2L0 + L)t² − (4L0 + 4L0L1 + L)t + 2.   (2.1)

The discriminant D of p is given by

D = (4L0 + 4L0L1 + L)² − 8L0(2L0 + L) = 16(L0L1)² + L² + 32L0²L1 + 8L0L1L > 0,

so p has roots s1 and s2 with 0 < s1 < s2, by Descartes' rule of signs. Define also the parameters

γ = [ L / (2(1 − L0 s1)) + 2L0L1 / (1 − L0 s1)² ] s1   (2.2)

and

r_A = 2 / (2L0 + L).   (2.3)
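For readers who want to experiment with these radii, the following sketch (ours; it uses the quadratic p from (2.1) and hypothetical constants chosen only for illustration) computes s1, γ and r_A from given L0, L, L1:

```python
# Compute s1 (smallest positive root of p), gamma of (2.2) and r_A of (2.3); sketch only.
import numpy as np

def local_parameters(L0, L, L1):
    a, b, c = L0*(2*L0 + L), -(4*L0 + 4*L0*L1 + L), 2.0   # p(t) = a*t^2 + b*t + c
    s1, s2 = sorted(np.roots([a, b, c]).real)              # 0 < s1 < s2
    gamma = (L/(2*(1 - L0*s1)) + 2*L0*L1/(1 - L0*s1)**2) * s1
    rA = 2.0/(2*L0 + L)
    return s1, gamma, rA

print(local_parameters(L0=1.0, L=1.5, L1=2.0))   # hypothetical constants
```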

Notice that γ ∈ (0, 1], since p(s1) = 0. The local convergence analysis of method (1.3) uses the conditions (A):

(a1) H : Ω → Y is a differentiable operator in the sense of Fréchet and there exists x* ∈ Ω such that H(x*) = 0 and H'(x*)^{-1} ∈ L(Y, X).

(a2) There exists L0 > 0 such that for each x ∈ Ω

‖H'(x*)^{-1}(H'(x) − H'(x*))‖ ≤ L0‖x − x*‖.

Set Ω0 = Ω ∩ B(x*, 1/L0).

(a3) There exist L = L(L0) > 0 and L1 = L1(L0) ≥ 1 such that for each x, y ∈ Ω0

‖H'(x*)^{-1}(H'(y) − H'(x))‖ ≤ L‖x − y‖ and ‖H'(x*)^{-1}H'(x)‖ ≤ L1.

(a4) B̄(x*, s1) ⊂ Ω.

(a5) There exists s3 ≥ s1 such that s3 < 2/L0. Set Ω1 = Ω ∩ B̄(x*, s3).

Based on the preceding conditions and notation, we can show a local convergence result for method (1.3).

THEOREM 2.1 Under the conditions (A), further assume that u0 ∈ B(x*, s1) − {x*}. Then lim_{n→∞} u_n = x*, and the following estimates hold:

‖v_n^{(1)} − x*‖ ≤ L‖v_n^{(0)} − x*‖² / (2(1 − L0‖v_n^{(0)} − x*‖)),   (2.4)

‖v_n^{(i)} − x*‖ ≤ γ^i ‖v_n^{(0)} − x*‖ for each i = 2, . . . , k,   (2.5)

and

‖u_{n+1} − x*‖ ≤ γ^{k+n} ‖u_0 − x*‖.   (2.6)


Proof. We use an induction-based proof to show estimates (2.4)-(2.6). Let x ∈ B(x*, s1) − {x*}. By (a1) and (a2), we obtain that

‖H'(x*)^{-1}(H'(x) − H'(x*))‖ ≤ L0‖x − x*‖ < L0 s1 < 1.   (2.7)

It follows from the Banach lemma on invertible operators [?] and (2.7) that H'(x)^{-1} ∈ L(Y, X) and

‖H'(x)^{-1} H'(x*)‖ ≤ 1 / (1 − L0‖x − x*‖).   (2.8)

Then u0 = v_0^{(0)}, v_0^{(1)}, . . . , v_0^{(k)} are well defined by method (1.3) for n = 0. We can write

v_0^{(1)} − x* = v_0^{(0)} − x* − H'(v_0^{(0)})^{-1} H(v_0^{(0)}).   (2.9)

Then, by using (a1), (a3), (2.8) and (2.9), we get in turn that

‖v_0^{(1)} − x*‖ = ‖v_0^{(0)} − x* − H'(v_0^{(0)})^{-1} H(v_0^{(0)})‖
  ≤ ‖H'(v_0^{(0)})^{-1} H'(x*)‖ ∫_0^1 ‖H'(x*)^{-1}[H'(x* + θ(v_0^{(0)} − x*)) − H'(v_0^{(0)})] dθ (v_0^{(0)} − x*)‖
  ≤ L‖v_0^{(0)} − x*‖² / (2(1 − L0‖v_0^{(0)} − x*‖))
  < ( L s1 / (2(1 − L0 s1)) ) ‖v_0^{(0)} − x*‖
  < ‖v_0^{(0)} − x*‖ < s1,   (2.10)

which shows (2.4) for n = 0 and v_0^{(1)} ∈ B(x*, s1). Similarly, by the second substep (n = 0, i = 2) we also get

v_0^{(2)} − x* = v_0^{(1)} − x* − H'(v_0^{(0)})^{-1} H(v_0^{(1)})
  = v_0^{(1)} − x* − H'(v_0^{(1)})^{-1} H(v_0^{(1)})
    + H'(v_0^{(1)})^{-1}[(H'(v_0^{(0)}) − H'(x*)) + (H'(x*) − H'(v_0^{(1)}))] H'(v_0^{(0)})^{-1} H(v_0^{(1)}),   (2.11)

so by (a3), the definition of s1 and (2.8) (for x = v_0^{(1)}, v_0^{(0)}), we get in turn that

‖v_0^{(2)} − x*‖ ≤ L‖v_0^{(1)} − x*‖² / (2(1 − L0‖v_0^{(1)} − x*‖))
    + L0(‖v_0^{(1)} − x*‖ + ‖v_0^{(0)} − x*‖) L1‖v_0^{(1)} − x*‖ / ((1 − L0‖v_0^{(1)} − x*‖)(1 − L0‖v_0^{(0)} − x*‖))
  ≤ [ L / (2(1 − L0 s1)) + 2L0L1 / (1 − L0 s1)² ] s1 ‖v_0^{(1)} − x*‖
  = γ‖v_0^{(1)} − x*‖ ≤ γ²‖v_0^{(0)} − x*‖ < s1,   (2.12)

which shows (2.5) for n = 0 and i = 2. Similarly, from

v_0^{(i)} − x* = v_0^{(i−1)} − x* − H'(v_0^{(0)})^{-1} H(v_0^{(i−1)})
  = v_0^{(i−1)} − x* − H'(v_0^{(i−1)})^{-1} H(v_0^{(i−1)})
    + H'(v_0^{(i−1)})^{-1}[(H'(v_0^{(0)}) − H'(x*)) + (H'(x*) − H'(v_0^{(i−1)}))] H'(v_0^{(0)})^{-1} H(v_0^{(i−1)}),   (2.13)

we get

‖v_0^{(i)} − x*‖ ≤ γ‖v_0^{(i−1)} − x*‖ ≤ γ^i ‖v_0^{(0)} − x*‖ < s1   (2.14)

and

‖u_1 − x*‖ ≤ γ^k ‖u_0 − x*‖ < s1,   (2.15)

which show (2.5) and (2.6) for n = 0, i = 2, 3, . . . , k, and v_0^{(i)}, u_1 ∈ B(x*, s1). Similarly, by induction we have in turn (as in (2.10))

‖v_j^{(1)} − x*‖ = ‖v_j^{(0)} − x* − H'(v_j^{(0)})^{-1} H(v_j^{(0)})‖ ≤ L‖v_j^{(0)} − x*‖² / (2(1 − L0‖v_j^{(0)} − x*‖)) < s1,   (2.16)

and

‖v_j^{(i)} − x*‖ = ‖v_j^{(i−1)} − x* − H'(v_j^{(0)})^{-1} H(v_j^{(i−1)})
    + H'(v_j^{(i−1)})^{-1}[(H'(v_j^{(0)}) − H'(x*)) + (H'(x*) − H'(v_j^{(i−1)}))] H'(v_j^{(0)})^{-1} H(v_j^{(i−1)})‖,

leading to (as in (2.14))

‖v_j^{(i)} − x*‖ ≤ γ^i ‖v_j^{(0)} − x*‖ < s1   (2.17)

and (as in (2.15))

‖u_{n+1} − x*‖ = ‖v_n^{(k)} − x*‖ ≤ γ^{k+n} ‖u_0 − x*‖ < s1,   (2.18)

which completes the induction for (2.4)-(2.6) and also shows that u_{n+1} ∈ B(x*, s1). It also follows from (2.18) that lim_{n→∞} u_n = x*, since γ ∈ [0, 1). Let x** ∈ Ω1 with H(x**) = 0. It then follows from the definition of Ω1, (a2) and T = ∫_0^1 H'(x** + τ(x* − x**)) dτ that

‖H'(x*)^{-1}(T − H'(x*))‖ ≤ (L0/2) ∫_0^1 ‖x* − x**‖ dτ ≤ (L0/2) s3 < 1,

so T^{-1} ∈ L(Y, X). Then, from the identity

0 = H(x*) − H(x**) = T(x* − x**),   (2.19)

we get x* = x**.


REMARK 2.2 (a) In view of (a2), we can write

‖H'(x*)^{-1}H'(x)‖ = ‖H'(x*)^{-1}[(H'(x) − H'(x*)) + H'(x*)]‖ ≤ 1 + ‖H'(x*)^{-1}(H'(x) − H'(x*))‖ ≤ 1 + L0‖x − x*‖,   (2.20)

so the second condition in (a3) can be dropped, and we can choose L1 = 2, since ‖x − x*‖ ≤ s1 < 1/L0.

(b) It follows from the definitions of s1 and r_A that s1 < r_A. That is, the radius of convergence s1 cannot be larger than the radius of convergence r_A of Newton's method obtained by us in [?,?,?,?,?].

(c) The local convergence of method (1.3) was not studied in [?]. But if it were, the corresponding radius would be s̄1, the smallest positive solution of p̄(t) = 0, where

p̄(t) = (2L0 + L̄)L0 t² − (4L0 + 4L0L̄1 + L̄)t + 2,   (2.21)

and L̄ and L̄1 are the constants for which the conditions in (a3) hold on Ω. But we have

L ≤ L̄,   (2.22)

L1 ≤ L̄1   (2.23)

and

γ ≤ γ̄,   (2.24)

since Ω0 ⊂ Ω. Hence, we have

s̄1 ≤ s1.   (2.25)

Moreover, if strict inequality holds in (2.22) or (2.23), then s̄1 < s1. Furthermore, by (2.24), our error bounds are more precise than the ones using L0, L̄, L̄1 and γ̄. Hence, we have expanded the applicability of method (1.3) in the local convergence case.

In a similar way, we improve the semi-local convergence analysis of method (1.3) given in [?]. This is done in the next section.

3 Semi-local convergence

We need the following auxiliary result on majorizing sequences for method (1.3).

LEMMA 3.1 Let K0 > 0, K > 0 and r_0^1 > 0 be parameters. Denote by δ the unique root in the interval (0, 1) of the polynomial φ given by

φ(t) = 2K0 t^{k+1} + K(t^k + t^{k−1} − 2).

Define the sequence {q_n} for each n = 0, 1, 2, . . . and i = 1, 2, . . . , k − 1 by

q_0 = r_0^0 = 0, q_n = r_n^0,
r_{n+1}^1 = q_{n+1} + K(q_{n+1} − q_n + r_n^{k−1} − q_n)(q_{n+1} − r_n^{k−1}) / (1 − 2K0 q_{n+1})   (3.1)

and

r_n^k = q_{n+1},  r_n^{i+1} = r_n^i + K(r_n^i − q_n + r_n^{i−1} − q_n)(r_n^i − r_n^{i−1}) / (1 − 2K0 q_n).

Moreover, suppose that

0 < K(q_1 + r_0^{k−1}) / (1 − 2K0 q_1) ≤ δ < 1 − 2K0 r_0^1.   (3.2)

Then the sequence {q_n} is increasing, bounded from above by q** = r_0^1 / (1 − δ), and converges to its unique least upper bound q* satisfying q_1 ≤ q* ≤ q**,

r_n^i − r_n^{i−1} ≤ δ(r_n^{i−1} − r_n^{i−2}) ≤ δ^{kn+i−1} r_0^1,   (3.3)

r_{n+1}^1 − r_n^k ≤ δ(r_n^k − r_n^{k−1}) ≤ δ^{k(n+1)} r_0^1   (3.4)

and

q_n = r_n^0 ≤ r_n^1 ≤ r_n^2 ≤ . . . ≤ r_n^{k−1} ≤ r_n^k = q_{n+1}.   (3.5)

Proof. Replace t_n, s_n^i, L0, L, α in [?] by q_n, r_n^i, K0, K, δ.
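The scalar recurrences of Lemma 3.1 can be tabulated directly. The sketch below is ours and follows one possible reading of the (partly reconstructed) formulas above, with r_0^1 supplied as the parameter r01; it is meant only to illustrate how the sequences {q_n} and {r_n^i} are generated, not as an authoritative implementation.

```python
# Illustrative tabulation of the majorizing sequences {q_n}, {r_n^i} of Lemma 3.1 (sketch only).
def majorizing_sequence(K0, K, k, r01, n_outer=10):
    q, r1 = 0.0, r01                  # q_0 = r_0^0 = 0 and r_0^1 = r01
    history = [q]
    for _ in range(n_outer):
        r = [q, r1]                   # r_n^0, r_n^1
        for i in range(1, k):         # r_n^(i+1) for i = 1, ..., k-1
            r.append(r[i] + K*(r[i] - q + r[i-1] - q)*(r[i] - r[i-1])/(1 - 2*K0*q))
        q_next = r[k]                 # r_n^k = q_{n+1}
        r1 = q_next + K*(q_next - q + r[k-1] - q)*(q_next - r[k-1])/(1 - 2*K0*q_next)
        q = q_next
        history.append(q)
    return history

print(majorizing_sequence(K0=1.0, K=0.8, k=2, r01=0.1))   # hypothetical data
```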

Next, we present the semi-local convergence analysis of method (1.3).

THEOREM 3.2 Let H : Ω → Y be a continuously differentiable operator in the sense of Fréchet and let [·, ·; H] : Ω × Ω → L(X, Y) be a divided difference of order one of H. Suppose there exist x0 ∈ Ω and K0 > 0 such that

H'(x0)^{-1} ∈ L(Y, X),   (3.6)

‖H'(x0)^{-1} H(x0)‖ ≤ r_0^1,   (3.7)

‖H'(x0)^{-1}([x, y; H] − H'(x0))‖ ≤ K0(‖x − x0‖ + ‖y − x0‖).   (3.8)

Set Ω2 = Ω ∩ B(x0, 1/(2K0)). Moreover, suppose that for each x, y, z, w ∈ Ω2

‖H'(x0)^{-1}([x, y; H] − [z, w; H])‖ ≤ K(‖x − z‖ + ‖y − w‖),   (3.9)

and that the hypotheses of Lemma 3.1 hold. Then, starting from u0 = x0, the iterates satisfy {u_n} ⊂ B(x0, q*), lim_{n→∞} u_n = u* ∈ B̄(x0, q*), H(u*) = 0, and

‖u* − u_n‖ ≤ q* − q_n.   (3.10)

Moreover, u* is the unique solution of equation H(x) = 0 in B̄(x0, q*).

Proof. Replace x_n, y_n, s_n^i, t_n, α, L0, L in [?] by u_n, v_n, r_n^i, q_n, δ, K0, K.

REMARK 3.3 The condition

‖H'(x0)^{-1}([x, y; H] − [z, w; H])‖ ≤ L(‖x − z‖ + ‖y − w‖)   (3.11)

for each x, y, z, w ∈ Ω and some L > 0 is used in [?] instead of (3.9). But we have

K ≤ L   (3.13)

and

δ ≤ α,   (3.14)

since Ω2 ⊆ Ω. Denote by q̄_n, r̄_n^i the majorizing sequences used in [?], defined as the sequences q_n, r_n^i but with K0 = L0 and K replaced by L. Then we have, by a simple induction argument, that

q_n ≤ q̄_n,   (3.15)

r_n^i ≤ r̄_n^i,   (3.16)

0 ≤ r_n^i − q_n ≤ r̄_n^i − q̄_n,   (3.17)

0 ≤ r_n^i − r_n^{i−1} ≤ r̄_n^i − r̄_n^{i−1},   (3.18)

0 ≤ q_{n+1} − r_n^{k−1} ≤ q̄_{n+1} − r̄_n^{k−1}   (3.19)

and

q* ≤ q̄*.   (3.20)

Moreover, if K < L, then (3.15)-(3.20) hold as strict inequalities. Let us consider the set Ω3 = Ω ∩ B(x1, 1/(2K) − r_0^1), provided that r_0^1 < 1/(2K). Moreover, suppose that for each x, y, z, w ∈ Ω3

‖H'(x0)^{-1}([x, y; H] − [z, w; H])‖ ≤ λ(‖x − z‖ + ‖y − w‖).   (3.21)

Notice that Ω3 ⊆ Ω2, so λ ≤ K. Then Ω3, (3.21) and λ can replace Ω2, (3.9) and K, respectively, in Theorem 3.2. Clearly, the majorizing sequence corresponding to {q_n}, obtained with λ in place of K, is even tighter than {q_n}. Hence, we have extended the applicability of method (1.3) in the semi-local convergence case too. These improvements are derived under the same conditions as in [?], since the computation of K is included in the computation of L as a special case. Examples where the new constants are smaller than the older ones can be found in the numerical section that follows and in [?,?,?,?,?].

4 Numerical examples

We present the following examples to test the convergence criteria. Define the divided difference by

[x, y; H] = ∫_0^1 H'(τx + (1 − τ)y) dτ.
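As a small illustration (ours; not part of the original examples), this divided difference can be approximated numerically by a quadrature rule along the segment between x and y:

```python
# Approximate [x, y; H] = \int_0^1 H'(tau*x + (1 - tau)*y) dtau by Gauss-Legendre quadrature.
import numpy as np

def divided_difference(dH, x, y, nodes=32):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    t, w = np.polynomial.legendre.leggauss(nodes)   # nodes/weights on [-1, 1]
    t, w = 0.5*(t + 1.0), 0.5*w                     # rescale to [0, 1]
    return sum(wi*np.asarray(dH(ti*x + (1 - ti)*y)) for ti, wi in zip(t, w))

# Sanity check with the derivative of Example 4.1 below: [x, x; H] equals H'(x).
e = np.e
dH = lambda u: np.diag([np.exp(u[0]), (e - 1.0)*u[1] + 1.0, 1.0])
print(divided_difference(dH, [0.1, 0.2, 0.0], [0.1, 0.2, 0.0]))
```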

EXAMPLE 4.1 Let X = Y = ℝ³, Ω = U(0, 1), x* = (0, 0, 0)^T and define H on Ω by

H(x) = H(x1, x2, x3) = ( e^{x1} − 1, ((e − 1)/2) x2² + x2, x3 )^T.

For a point u = (u1, u2, u3)^T, the Fréchet derivative is given by

H'(u) = diag( e^{u1}, (e − 1)u2 + 1, 1 ).

Using the maximum row-sum norm, conditions (a2)-(a3) and the fact that H'(x*) = diag(1, 1, 1), we can define the parameters for method (1.3) by L0 = e − 1, L1 = L = e^{1/(e−1)}, L̄ = L̄1 = e. Then s1 = 0.0997, while the old radius is s̄1 = 0.0727.
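The two radii quoted in Example 4.1 can be reproduced with the quadratic p of (2.1) (a sketch of ours, for verification only):

```python
# Recover s1 ~ 0.0997 (constants on Omega_0) and s1-bar ~ 0.0727 (constants on Omega).
import numpy as np

def smallest_root(L0, L, L1):
    a, b, c = L0*(2*L0 + L), -(4*L0 + 4*L0*L1 + L), 2.0
    return min(np.roots([a, b, c]).real)

e = np.e
print(smallest_root(e - 1.0, e**(1.0/(e - 1.0)), e**(1.0/(e - 1.0))))  # new radius s1
print(smallest_root(e - 1.0, e, e))                                    # old radius s1-bar
```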

EXAMPLE 4.2 Let X = Y = ℝ, Ω = Ū(x0, 1 − ξ), x0 = 1 and ξ ∈ [0, 1/2). Define the function H on Ω by

H(x) = x³ − ξ.

Then we get, by (3.7)-(3.9) and (3.21), that:

(i) For k = 1: q1 = r_0^1 = (1 − ξ)/3, L0 = K0 = (3 − ξ)/2, L = 2 − ξ and K = 1 + 1/(2K0). Notice that L0 < K < L. The conditions of Lemma 3.1 are satisfied for ξ ∈ I1 = [0.434523, 0.5), but the earlier conditions in [?] are satisfied only for ξ ∈ I2 = [0.464523, 0.5), and I2 ⊂ I1. Moreover, if λ = (−2ξ² + 5ξ + 6) / (3(3 − ξ)), then the conditions of Lemma 3.1 are satisfied for ξ ∈ I3 = [0.3720452, 0.5).

(ii) For k = 2, the conditions of Lemma 3.1 are satisfied for ξ ∈ I1 = [0.6161045, 0.7), but the earlier conditions in [?] are satisfied for ξ ∈ I2 = [0.6266523, 0.7), and I2 ⊂ I1. Moreover, if λ = (−2ξ² + 5ξ + 6) / (3(3 − ξ)), then the conditions of Lemma 3.1 are satisfied for ξ ∈ I3 = [0.5966523, 0.7).

5 Conclusion

Our idea of the restricted convergence region in connection with the center-Lipschitz condition was utilized to provide a local as well as a semi-local convergence analysis of method (1.3). Since we locate a region, at least as small as in earlier works [?], containing the iterates, the new Lipschitz parameters are also at least as small. This technique leads to a finer convergence analysis (see also Remark 2.2, Remark 3.3 and the numerical examples). The novelty of the paper lies not only in the introduction of the new idea but also in the fact that the new results are obtained using special cases of the Lipschitz parameters appearing in [?]. Hence, no work additional to [?] is needed to arrive at these developments. This idea can be used to extend the applicability of other iterative methods appearing in [?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?] along the same lines.

References


[2] S. Amat, C. Bermúdez, M. A. Hernández-Verón, and E. Martínez, On an efficient k-step iterative method for nonlinear equations. Journal of Computational and Applied Mathematics, 302:258-271, 2016.

[3] S. Amat, S. Busquier, C. Bermúdez, and S. Plaza, On two families of high order Newton type methods. Applied Mathematics Letters, 25(12):2209-2217, 2012.

[4] S. Amat, S. Busquier, and S. Plaza, Review of some iterative root-finding methods from a dynamical point of view. Scientia, 10(3):35, 2004.

[5] I. K. Argyros, Computational theory of iterative methods, volume 15. Elsevier, 2007.

[6] I. K. Argyros, On the semilocal convergence of a fast two-step Newton method. Revista Colombiana de Matemáticas, 42(1):15-24, 2008.

[7] I. K. Argyros, S. George, N. Thapa, Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume I, Nova Publishers, NY, 2018.

[8] I. K. Argyros, S. George, N. Thapa, Mathematical Modeling for the Solution of Equations and Systems of Equations with Applications, Volume II, Nova Publishers, NY, 2018.

[9] I. K. Argyros, A. Cordero, A. A. Magreñán, and J. R. Torregrosa, Third-degree anomalies of Traub's method. Journal of Computational and Applied Mathematics, 309:511-521, 2017.

[10] I. K. Argyros and S. Hilout, Weaker conditions for the convergence of Newton's method. Journal of Complexity, 28(3):364-387, 2012.

[11] I. K. Argyros and S. Hilout, Computational methods in nonlinear analysis: efficient algorithms, fixed point theory and applications. World Scientific, 2013.

[12] I. K. Argyros, A. A. Magreñán, L. Orcos, and J. A. Sicilia, Local convergence of a relaxed two-step Newton like method with applications. Journal of Mathematical Chemistry, 55(7):1427-1442, 2017.

[13] I. K. Argyros, A. A. Magreñán, A contemporary study of iterative methods, Elsevier (Academic Press), New York, 2018.

[14] I. K. Argyros, A. A. Magreñán, Iterative methods and their dynamics with applications, CRC Press, New York, USA, 2017.


[16] A. Cordero and J. R. Torregrosa, Low-complexity root-finding iteration functions with no derivatives of any order of convergence. Journal of Computational and Applied Mathematics, 275:502-515, 2015.

[17] A. Cordero, J. R. Torregrosa, and P. Vindel, Study of the dynamics of third-order iterative methods on quadratic polynomials. International Journal of Computer Mathematics, 89(13-14):1826-1836, 2012.

[18] J. A. Ezquerro and M. A. Hernández-Verón, How to improve the domain of starting points for Steffensen's method. Studies in Applied Mathematics, 132(4):354-380, 2014.

[19] J. A. Ezquerro and M. A. Hernández-Verón, Majorizing sequences for nonlinear Fredholm-Hammerstein integral equations. Studies in Applied Mathematics.

[20] J. A. Ezquerro, M. Grau-Sánchez, M. A. Hernández-Verón, and M. Noguera, A family of iterative methods that uses divided differences of first and second orders. Numerical Algorithms, 70(3):571-589, 2015.

[21] J. A. Ezquerro, M. A. Hernández-Verón, and A. I. Velasco, An analysis of the semilocal convergence for secant-like methods. Applied Mathematics and Computation, 266:883-892, 2015.

[22] M. A. Hernández-Verón, E. Martínez, and C. Teruel, Semilocal convergence of a k-step iterative process and its application for solving a special kind of conservative problems. Numerical Algorithms, 76(2):309-331, 2017.

[23] L. V. Kantorovich and G. P. Akilov, Functional Analysis, Pergamon Press, 1982.

[24] M. Moccari and T. Lotfi, Using majorizing sequences for the semi-local convergence of a high order and multi-point method along with stability analysis. International Journal of Nonlinear Sciences and Numerical Simulation.

[25] A. A. Magreñán, A. Cordero, J. M. Gutiérrez, and J. R. Torregrosa, Real qualitative behavior of a fourth-order family of iterative methods by using the convergence plane. Mathematics and Computers in Simulation, 105:49-61, 2014.

[26] A. A. Magreñán and I. K. Argyros, Improved convergence analysis for Newton-like methods. Numerical Algorithms, 71(4):811-826, 2016.

[27] A. A. Magreñán and I. K. Argyros, Two-step Newton methods. Journal of Complexity, 30(4):533-553, 2014.


[29] W. C. Rheinboldt, An adaptive continuation process for solving systems of nonlinear equations, Polish Academy of Science, Banach Center Publications, 3 (1978), no. 1, 129-142.

[30] H. Ren and I. K. Argyros, On the convergence of King-Werner-type methods of order 1+√2 free of derivatives. Applied Mathematics and Computation, 256:148-159, 2015.

[31] W. T. Shaw, Complex Analysis with Mathematica. Cambridge University Press, 2006.
