
(1)

Mathematics for Economists: Lecture 7

Juuso Välimäki

Aalto University School of Business

Spring 2021

(2)

This lecture covers

1. Existence of solutions to optimization problems
2. Simplest constrained optimization

3. Optimization subject to an equality constraint: first-order conditions

4. Examples of equality constrained optimization problems

(3)

Existence of optimal choices: why care?

Example (1 is the largest natural number)

Proof.

• Denote the largest natural number (i.e. strictly positive integer) by x.
• Since x is a natural number, x² is also a natural number.
• Since x is the largest natural number, we have:

x ≥ x².

• Dividing both sides by x (a positive number, since it is a natural number), we get:

1 ≥ x.

• Since all natural numbers are larger than or equal to 1, the claim follows.
• Where is the mistake?

(4)

Existence helps the characterization

• Suppose that you know that a problem has a solution.
• Suppose you know that a single point satisfies the first-order necessary conditions.
• Do you need to worry about second-order conditions at that critical point?
• Hence it is important to find general enough conditions that guarantee the existence of a solution.
• Weierstrass' theorem is pretty good at this.

(5)

Existence of optimal choices

• You need to check conditions:

1. on the domain (feasible set) of the problem,
2. on the objective function.

• Feasible set F bounded:

1. Can you fit the feasible set in a large enough box of finite size?
2. If F ⊂ R^n, show that F ⊂ [−M, M]^n for some M < ∞.
3. E.g. F = {x ∈ R^n | p · x ≤ w, x ≥ 0} for a strictly positive price vector p satisfies x_i ≤ w/p_i ≤ w/min_j p_j for all i.

• Feasible set F closed:

1. If x_n → x and x_n ∈ F for all n, then x ∈ F.
2. Constraints given by continuous functions: F = {x | g(x) ≤ 0} for a continuous g : R^n → R^k.
3. F = {x ∈ R^n | p · x ≤ w, x ≥ 0} satisfies this.
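As a quick sanity check of the box bound in point 3 above, the sketch below (Python with made-up prices and wealth; none of it comes from the slides) samples random bundles, keeps the ones in the budget set, and confirms that every coordinate satisfies x_i ≤ w/p_i.

```python
# Sanity check of the box bound on a budget set, with illustrative numbers:
# for strictly positive prices, any feasible x satisfies x_i <= w / p_i,
# so F fits inside a finite box.
import numpy as np

rng = np.random.default_rng(0)
p, w = np.array([2.0, 1.0, 4.0]), 10.0

# Rejection-sample points of F = {x >= 0 : p.x <= w}.
x = rng.uniform(0.0, w, size=(100_000, 3))
feasible = x[x @ p <= w]

print(np.all(feasible <= w / p))   # True: every coordinate obeys x_i <= w / p_i
```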

(6)

Existence of optimal choices

• Objective function continuous:

1. Almost all functions you will encounter in economics are continuous.
2. Utility functions, cost functions, etc.
3. Failure of continuity: demand for price-setting firms when all buyers buy from the firm with the lowest price.

• A set F ⊂ R^n is called compact if it is closed and bounded.
• Once we recall that all nonempty bounded sets in R have a greatest lower bound (infimum) and a least upper bound (supremum), we are ready for the main existence result, Weierstrass' theorem.

(7)

Main existence theorem

Theorem (Weierstrass’ Theorem)

Suppose f is a continuous function on a compact set F, and

M = sup_{x ∈ F} f(x),   m = inf_{x ∈ F} f(x).

Then there exist points x̄, x̲ ∈ F such that f(x̄) = M and f(x̲) = m.

(8)

Weierstrass’ theorem

Remark

To see that F must be closed and bounded and that f has to be continuous, consider the following examples where one property fails in each case:

1. f(x) = x and F = R (F not bounded).
2. f(x) = x and F = {x : 0 < x < 1} (F not closed).
3. f(x) = x for 0 ≤ x < 1, f(1) = 0, and F = {x : 0 ≤ x ≤ 1} (f not continuous).
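A small numerical illustration of case 2 may help. The sketch below (Python, purely illustrative and not part of the slides) shows that on the open interval (0, 1) the values of f(x) = x creep up towards the supremum 1 without any feasible point attaining it, whereas on the compact set [0, 1] the maximum is attained at x = 1.

```python
import numpy as np

# Case 2 of the remark: f(x) = x on the open interval (0, 1).
# The supremum is 1, but no point of (0, 1) attains it: every candidate
# maximizer is beaten by a point even closer to 1.
candidates = 1 - 10.0 ** (-np.arange(1, 8))   # 0.9, 0.99, 0.999, ...
print(candidates)                             # values of f = x approach 1 ...
print(bool(candidates.max() < 1))             # ... but stay strictly below it: True

# On the compact set [0, 1], the maximum is attained at x = 1 with f(1) = 1,
# exactly as Weierstrass' theorem promises.
```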

(9)

Constrained Optimization: Example

We start with the simplest example of constrained optimization to set expectations for the more general case to follow.

Example

Consider finding the maximum of f(x) = 3 + 2x − x² on the feasible set F = {x : −∞ < a ≤ x ≤ b < ∞}.

Since f is continuous and the feasible set F is compact, Weierstrass' theorem guarantees the existence of a maximizer, i.e. an x ∈ F such that for all y ∈ F we have f(x) ≥ f(y).

Notice that f is strictly increasing for x < 1 and strictly decreasing for x > 1.

If a ≤ 1 ≤ b, then the function is maximized at its critical point x = 1.

We say that a direction (x − x_0) is feasible from x_0 ∈ F if, for all small enough Δ > 0, we have x_0 + Δ(x − x_0) ∈ F.

(10)

Quadratic function on an interval

[Figure: graph of y = f(x) = 3 + 2x − x² for x between −2 and 3; the peak is at the critical point x = 1.]

(11)

Constrained Optimization: Example continued

Linear approximation by the derivative gives:

f(x_0 + Δ) − f(x_0) ≈ f'(x_0)Δ.

If we have a maximum at x_0, then f'(x_0)Δ ≤ 0 for all feasible directions Δ.

• If a < x_0 < b, then we must have f'(x_0) = 0, since both directions Δ > 0 and Δ < 0 are feasible.
• If f'(x_0) > 0, then x > x_0 cannot be feasible if x_0 is a maximum. Therefore x_0 = b if x_0 is the optimal choice and f'(x_0) > 0. Similarly, if f'(x_0) < 0 and x_0 is the optimum, then x_0 = a.
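To tie the case analysis to a computation, the sketch below maximizes f(x) = 3 + 2x − x² on a few hypothetical intervals [a, b] with scipy and compares the numerical maximizer with the prediction from the slides: clamp the critical point x = 1 to [a, b]. The interval endpoints are assumptions chosen for illustration.

```python
# Numerical check of the example: maximize f(x) = 3 + 2x - x^2 on [a, b]
# and compare with the case analysis (x* = 1 if a <= 1 <= b, otherwise the
# endpoint closer to 1).
from scipy.optimize import minimize_scalar

def f(x):
    return 3 + 2 * x - x ** 2

for a, b in [(-2.0, 3.0), (2.0, 3.0), (-2.0, 0.5)]:
    # Minimizing -f on [a, b] is the same as maximizing f on [a, b].
    res = minimize_scalar(lambda x: -f(x), bounds=(a, b), method="bounded")
    predicted = min(max(a, 1.0), b)   # clamp the critical point x = 1 to [a, b]
    print(f"[{a}, {b}]: numerical x* = {res.x:.4f}, predicted x* = {predicted}")
```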

(12)

Example: Consumer’s problem

• A consumer chooses how to spend her wealth w on two goods x_1, x_2 whose prices are p_1, p_2.
• Continuously differentiable utility, strictly positive marginal utility for both goods.
• The feasible set (budget set) {(x_1, x_2) | x_1 ≥ 0, x_2 ≥ 0, p_1 x_1 + p_2 x_2 ≤ w} is closed (since it is defined by weak inequalities on continuous constraint functions) and bounded, hence compact.
• By Weierstrass' theorem, a utility-maximizing choice exists. What can we say about it?
• The budget constraint must bind: if p_1 x̂_1 + p_2 x̂_2 < w, then for small enough h > 0 the bundle (x̂_1 + h, x̂_2) is feasible and, by strictly positive marginal utility,

u(x̂_1 + h, x̂_2) > u(x̂_1, x̂_2).

• Therefore, we must have p_1 x̂_1 + p_2 x̂_2 = w.
• Can the indifference curve through the optimum intersect the budget line?
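The slide leaves the utility function unspecified, so the sketch below picks a Cobb-Douglas form u(x_1, x_2) = 0.5 log x_1 + 0.5 log x_2 purely for illustration (the utility, prices and wealth are all assumptions) and solves the consumer's problem numerically. As argued above, the budget constraint binds at the optimum; for this utility the closed-form demand is x_i = 0.5 w / p_i, which the numerical answer reproduces.

```python
# Consumer's problem for one concrete (assumed) utility function.
import numpy as np
from scipy.optimize import minimize

p, w = np.array([2.0, 1.0]), 10.0   # illustrative prices and wealth

def neg_u(x):
    # u(x1, x2) = 0.5*log(x1) + 0.5*log(x2); we minimize -u to maximize u.
    return -(0.5 * np.log(x[0]) + 0.5 * np.log(x[1]))

res = minimize(
    neg_u,
    x0=np.array([1.0, 1.0]),
    bounds=[(1e-6, None), (1e-6, None)],                         # x1, x2 >= 0
    constraints=[{"type": "ineq", "fun": lambda x: w - p @ x}],  # p.x <= w
)
x_hat = res.x
print("optimal bundle :", x_hat)          # ~ [2.5, 5.0] for these numbers
print("unspent wealth :", w - p @ x_hat)  # ~ 0: the budget constraint binds
```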

(13)

Optimization with a single equality constraint

• Local considerations: let f : R^n → R be the objective function to be maximized.
• Suppose the constraint takes the form g(x) = g(x_1, ..., x_n) = 0.
• In other words, F = {x : g(x) = 0}. We often write the maximization problem as:

max_x f(x)
subject to g(x) = 0.

(14)

Optimization with a single equality constraint

• A solution to this problem is a point x̂ such that f(x̂) ≥ f(x) for all x ∈ F.
• What can we say about such an x̂?
• At this point, we do not know whether it exists. If it exists, and f is differentiable, then for small Δ > 0,

f(x̂ + Δ(x − x̂)) − f(x̂) ≈ Df(x̂)(x − x̂)Δ ≤ 0

for all feasible directions (x − x̂).
• But how do we know which directions are feasible?

(15)

Optimization with a single equality constraint

• Assume that the function g defining the constraint is also differentiable.
• To find the feasible directions, we go back to the implicit function theorem.
• If x̂ ∈ F and ∂g/∂x_i (x̂) ≠ 0 for some i ∈ {1, ..., n}, then we can write x_i = h(x_1, ..., x_{i−1}, x_{i+1}, ..., x_n) =: h(x_{−i}) in a neighborhood of x̂_{−i} so that

g(h(x_{−i}), x_{−i}) = 0.

• Notice that it is not possible to use the implicit function theorem at a critical point of the constraint function.
• Therefore, we must assume that Dg(x̂) ≠ 0.
• We call this the constraint qualification, and we will see different versions of it in more complex situations with multiple constraints.

(16)

Optimization with a single equality constraint

• Since the function g is constant (equal to zero) on the feasible set, we have for all feasible directions (x − x̂):

∇g(x̂)(x − x̂) = 0.

• Notice also that if (x − x̂) is feasible, then so is −(x − x̂). From the linear approximation above, this means that for all feasible directions,

Df(x̂)(x − x̂) = 0.

• Since ∇f(x̂) is therefore orthogonal to every direction that is orthogonal to ∇g(x̂), we have shown that at the optimum x̂,

∇f(x̂) = μ∇g(x̂)

for some scalar μ.
• We therefore have the following necessary conditions for a constrained optimum at x̂:

1. the gradient of the objective function must be a scalar multiple of the gradient of the constraint function at the optimum,
2. the choice must be feasible, i.e. g(x̂) = 0,
3. we have assumed the constraint qualification at the optimum.

(17)

Lagrangean function

• The previous discussion motivates the following function, which incorporates the constraint into an augmented objective function called the Lagrangean function.
• For a constrained optimization problem, we define the following function of n + 1 variables:

L(x, μ) = f(x) − μg(x).

• We call the new variable μ the Lagrange multiplier. We will give it a good economic interpretation later in the course.
• We are interested in the critical points of this augmented function. Therefore we look for (x̂, μ̂) such that

∂L/∂x_i (x̂, μ̂) = ∂f/∂x_i (x̂) − μ̂ ∂g/∂x_i (x̂) = 0 for all i,
∂L/∂μ (x̂, μ̂) = −g(x̂) = 0.
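As a purely mechanical illustration of this recipe (the problem below is made up and not from the slides), the sketch builds the Lagrangean for max xy subject to x + y − 2 = 0 symbolically and solves the critical-point conditions with sympy; the unique critical point is x = y = 1 with μ = 1.

```python
# A minimal sympy sketch of the Lagrangean recipe on a made-up problem:
# maximize f(x, y) = x*y subject to g(x, y) = x + y - 2 = 0.
import sympy as sp

x, y, mu = sp.symbols("x y mu", real=True)
f = x * y
g = x + y - 2

L = f - mu * g                                # L(x, mu) = f(x) - mu * g(x)
foc = [sp.diff(L, v) for v in (x, y, mu)]     # dL/dx = dL/dy = dL/dmu = 0
print(sp.solve(foc, [x, y, mu], dict=True))   # [{x: 1, y: 1, mu: 1}]
```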

(18)

Figure: Consumer’s problem on a budget line

[Figure: budget line p_x x + p_y y = w in the (x, y) plane, with the chosen bundle at (2.5, 1.5); the gradient ∇u(2.5, 1.5) is parallel to the price vector (p_x, p_y).]

(19)

Figure: Single equality constraint

[Figure: the constraint curve g(x, y) = 0 together with a level curve of f(x, y); at the constrained optimum (x̂, ŷ), the gradients ∇f(x̂, ŷ) and ∇g(x̂, ŷ) are parallel.]

(20)

Optimization with a single equality constraint

Find the minima and maxima of f(x, z) = x + z² subject to the constraint x² + z² = 1.

Form the Lagrangean:

L(x, z, μ) = x + z² − μ(x² + z² − 1).

Differentiate to get the first-order conditions (FOC):

∂L/∂x = 1 − 2μx = 0   (1)
∂L/∂z = 2z − 2μz = 0   (2)
∂L/∂μ = 1 − x² − z² = 0   (3)

(21)

Optimization with a single equality constraint

• The second FOC (2) gives:

z(2 − 2μ) = 0,

so either z = 0 or μ = 1.
• Consider first the possibility that z = 0. In that case, (3) implies that x = ±1, and (1) then pins down the multiplier. We get two critical points:

(x = 1, z = 0, μ = 1/2) and (x = −1, z = 0, μ = −1/2).

• If μ = 1, (1) implies that x = 1/2. Substituting into (3), we get the critical points:

(x = 1/2, z = √3/2, μ = 1) and (x = 1/2, z = −√3/2, μ = 1).

(22)

Optimization with a single equality constraint

• As a result, we have four critical points for the Lagrangean. Draw the constraint set and the level curves of the objective function to get a guess at the classification of the critical points.
• Can you show the existence of a maximum? Which of the local maxima is the global maximum? (A numerical check is sketched below.)
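One way to answer the last two questions: the constraint set x² + z² = 1 is closed and bounded, hence compact, and f is continuous, so by Weierstrass' theorem a maximum and a minimum exist and must lie among the four critical points. The sketch below (sympy is an assumption, not part of the slides) solves the FOC (1)–(3) and evaluates f at each solution.

```python
# Solve the FOC (1)-(3) symbolically and evaluate f at each critical point.
import sympy as sp

x, z, mu = sp.symbols("x z mu", real=True)
f = x + z ** 2
g = x ** 2 + z ** 2 - 1
L = f - mu * g

solutions = sp.solve([sp.diff(L, v) for v in (x, z, mu)], [x, z, mu], dict=True)
for s in solutions:
    print(s, " f =", f.subs(s))
```

Comparing the values: f = 5/4 at (1/2, ±√3/2), f = 1 at (1, 0) and f = −1 at (−1, 0), so the global maximum is attained at the two points (1/2, ±√3/2) and the global minimum at (−1, 0); the critical point (1, 0) is neither.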

(23)

Optimization with multiple equality constraints

Consider next the case where we have k equality constraints, g(x) = (g_1(x), ..., g_k(x)) : R^n → R^k. In this case, we have the problem:

max_x f(x) subject to g_1(x) = 0,
g_2(x) = 0,
⋮
g_k(x) = 0.

Form the Lagrangean, now with k constraints, as a function of n + k variables:

L(x, μ_1, ..., μ_k) = f(x) − Σ_{j=1}^{k} μ_j g_j(x).

(24)

Optimization with multiple equality constraints

We can proceed exactly as before and use the linear approximations to characterize the feasible directions from x̂ as {(x − x̂) : Dg(x̂)(x − x̂) = 0}.

Since the objective function cannot increase at the optimum in any feasible direction, we have that

Df(x̂)(x − x̂) = 0 whenever Dg(x̂)(x − x̂) = 0.

If Dg(x̂) has full rank, this is equivalent to requiring that Df(x̂) be a linear combination of Dg_1(x̂), ..., Dg_k(x̂). Since we assume that Dg(x̂) has full rank, there must therefore exist (μ_1, ..., μ_k) such that

∇f(x̂) = Σ_{j=1}^{k} μ_j ∇g_j(x̂).

(25)

Optimization with multiple equality constraints

Hence we can summarize the three necessary conditions for a local maximum:

i) Gradient alignment: ∇f(x̂) = Σ_{j=1}^{k} μ_j ∇g_j(x̂),
ii) Constraint holds: g(x̂) = 0,
iii) Constraint qualification: Dg_1(x̂), ..., Dg_k(x̂) are linearly independent.

The first two can be achieved by requiring that (x̂, μ̂_1, ..., μ̂_k) be a critical point of the Lagrangean.
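Conditions (i) and (iii) can be checked numerically at a candidate point: stack the constraint gradients into the Jacobian Dg(x̂), verify full row rank, and solve Dg(x̂)ᵀ μ = ∇f(x̂) by least squares; a (near-)zero residual means the gradient-alignment condition holds. The sketch below uses placeholder gradient vectors, not numbers from the slides.

```python
# Numerical check of gradient alignment and constraint qualification at x_hat.
import numpy as np

grad_f = np.array([1.0, 2.0, 3.0])   # grad f(x_hat), hypothetical numbers
Dg = np.array([[1.0, 0.0, 1.0],      # grad g_1(x_hat), hypothetical
               [0.0, 1.0, 1.0]])     # grad g_2(x_hat), hypothetical

# (iii) Constraint qualification: the constraint gradients are linearly independent.
print(np.linalg.matrix_rank(Dg) == Dg.shape[0])

# (i) Gradient alignment: find multipliers with Dg^T mu = grad f, if they exist.
mu = np.linalg.lstsq(Dg.T, grad_f, rcond=None)[0]
print(mu)                                   # candidate multipliers (here: [1. 2.])
print(np.linalg.norm(Dg.T @ mu - grad_f))   # ~0 iff grad f lies in the span of the grad g_j
```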

(26)

Optimization with multiple equality constraints: an example

• Consider the problem of maximizing

f(x, y, z) = xz + yz

subject to:

g_1(x, y, z) = y² + z² − 1 = 0,
g_2(x, y, z) = xz − 3 = 0.

1. Find the critical points of f subject to the constraints g_1(x, y, z) = 0 and g_2(x, y, z) = 0.
2. How would you determine which of the critical points are local minima and which are local maxima?

(27)

Optimization with multiple equality constraints: an example

1. Find first the critical points of the Lagrangean

L(x, y, z, μ_1, μ_2) = xz + yz − μ_1(y² + z² − 1) − μ_2(xz − 3).

2. First-order conditions:

∂L/∂x = z − μ_2 z = 0   (4)
∂L/∂y = z − 2μ_1 y = 0   (5)
∂L/∂z = x + y − 2μ_1 z − μ_2 x = 0   (6)
∂L/∂μ_1 = −(y² + z² − 1) = 0   (7)
∂L/∂μ_2 = −(xz − 3) = 0   (8)

(28)

Optimization with multiple equality constraints: an example

We need to solve this system of equations to find the critical points. Start with (4), which gives

z(1 − μ_2) = 0 ⇔ z = 0 or μ_2 = 1.

If z = 0, then (8) cannot hold for any x, so we must have z ≠ 0. Therefore μ_2 = 1 is the only candidate. The second FOC (5) gives

z − 2μ_1 y = 0 ⇔ y = z/(2μ_1).

(29)

Optimization with multiple equality constraints: an example

Plug in the solutions for y and μ_2 into (6):

z/(2μ_1) − 2μ_1 z = 0 ⇔ z (1/(2μ_1) − 2μ_1) = 0.

We already know that z ≠ 0, and therefore

1/(2μ_1) − 2μ_1 = 0 ⇔ 4μ_1² = 1 ⇔ μ_1 = ±1/2.

(30)

Optimization with multiple equality constraints: an example

We have now solved for the possible Lagrange multipliers μ_1 and μ_2, i.e. we have:

μ_1 = ±1/2 and μ_2 = 1.

(31)

Optimization with multiple equality constraints: an example

To get the values of the choice variables, plug the values of the multipliers into (6) to get:

y = ±z.

Substituting into (7) and using y² = z², we get:

2z² − 1 = 0 ⇔ z = ±1/√2.

The fifth FOC (8) gives x = 3/z, i.e. x = 3√2 if z = 1/√2 and x = −3√2 if z = −1/√2. We have now found everything we need for the critical points of f subject to the constraints.

(32)

Optimization with multiple equality constraints: an example

If z = 1/√2, then x = 3√2 and y = ±z. This yields two critical points (x, y, z):

1: (3√2, 1/√2, 1/√2)
2: (3√2, −1/√2, 1/√2)

If z = −1/√2, then x = −3√2 and y = ±z. This also gives two critical points (x, y, z):

3: (−3√2, −1/√2, −1/√2)
4: (−3√2, 1/√2, −1/√2)



(33)

Optimization with multiple equality constraints: an example

We know that μ_2 = 1 at all critical points, and we can check the sign of μ_1 from FOC (5). After this, we have all the critical points of the problem:

Critical points of the problem, (x, y, z, μ_1, μ_2):

1: (3√2, 1/√2, 1/√2, 1/2, 1)
2: (3√2, −1/√2, 1/√2, −1/2, 1)
3: (−3√2, −1/√2, −1/√2, 1/2, 1)
4: (−3√2, 1/√2, −1/√2, −1/2, 1)
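The critical points above can be cross-checked symbolically. The sketch below (sympy is an assumption, not part of the slides) solves the FOC system (4)–(8) and evaluates f = xz + yz at each solution; on the constraint set xz = 3, this amounts to comparing 3 + yz across the points.

```python
# Reproduce the critical points of the example and evaluate the objective.
import sympy as sp

x, y, z, m1, m2 = sp.symbols("x y z mu1 mu2", real=True)
f = x * z + y * z
g1 = y ** 2 + z ** 2 - 1
g2 = x * z - 3
L = f - m1 * g1 - m2 * g2

foc = [sp.diff(L, v) for v in (x, y, z, m1, m2)]
for s in sp.solve(foc, [x, y, z, m1, m2], dict=True):
    print(s, " f =", sp.simplify(f.subs(s)))
```

The two points with y = z give f = 3 + 1/2 = 7/2 and the two points with y = −z give f = 3 − 1/2 = 5/2; since f = 3 + yz on the constraint set and |yz| ≤ 1/2 whenever y² + z² = 1, points 1 and 3 attain the maximum and points 2 and 4 the minimum of f subject to the constraints.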



(34)

Next Lecture

• Optimization with inequality constraints
• Economic examples of constrained optimization
