
4.3 Taylor’s theorem

Note: less than a lecture (optional section)

4.3.1 Derivatives of higher orders

When f: I→R is differentiable, we obtain a function f′: I→R. The function f′ is called the first derivative of f. If f′ is differentiable, we denote by f′′: I→R the derivative of f′. The function f′′ is called the second derivative of f. We similarly obtain f′′′, f′′′′, and so on. With a larger number of derivatives the notation would get out of hand; we denote by f^(n) the nth derivative of f.

When f possesses n derivatives, we say f is n times differentiable.
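For readers who want to experiment, here is a minimal sketch (assuming the sympy library is available; the function sin(x)·eˣ is just a sample) that computes the first few higher-order derivatives symbolically:

```python
# A minimal sketch, assuming sympy is available: compute f', f'', ..., f^(4)
# symbolically for a sample infinitely differentiable function.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x) * sp.exp(x)   # sample function; any smooth expression works

for n in range(1, 5):
    # sp.diff(f, x, n) returns the nth derivative f^(n)
    print(f"f^({n})(x) =", sp.simplify(sp.diff(f, x, n)))
```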

4.3.2 Taylor’s theorem

Taylor’s theorem is a generalization of the mean value theorem. The mean value theorem says that up to a small error, f(x) for x near x0 can be approximated by f(x0), that is,

f(x) = f(x0) + f′(c)(x−x0),

where the “error” is measured in terms of the first derivative at some point c between x and x0. Taylor’s theorem generalizes this result to higher derivatives. It tells us that up to a small error, any n times differentiable function can be approximated at a point x0 by a polynomial. The error of this approximation behaves like (x−x0)^n near the point x0. To see why this is a good approximation, notice that for a big n, (x−x0)^n is very small in a small interval around x0.
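As a quick numerical illustration of this statement (a sketch, not part of the text's argument; the function exp and the points 0 and 1 are chosen only as an example), one can solve f′(c) = (f(x) − f(x0))/(x − x0) for c and check that it lies between x0 and x:

```python
# Sketch: for f = exp, find the point c from the mean value theorem,
# f(x) = f(x0) + f'(c)(x - x0), and check x0 < c < x.  Values are illustrative.
import math

x0, x = 0.0, 1.0
slope = (math.exp(x) - math.exp(x0)) / (x - x0)   # (f(x) - f(x0)) / (x - x0)
c = math.log(slope)                               # f'(c) = e^c = slope, so c = ln(slope)
print(f"c = {c:.6f}, lies between x0 and x: {x0 < c < x}")
```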

Definition 4.3.1. For an n times differentiable function f defined near a point x0 ∈ R, define the nth order Taylor polynomial for f at x0 as

P_n^{x0}(x) := ∑_{k=0}^{n} f^(k)(x0)/k! · (x−x0)^k
            = f(x0) + f′(x0)(x−x0) + f′′(x0)/2 · (x−x0)^2 + f^(3)(x0)/6 · (x−x0)^3 + ··· + f^(n)(x0)/n! · (x−x0)^n.

See Figure 4.8 for the odd degree Taylor polynomials for the sin function at x0 = 0. The even degree terms are all zero, as even derivatives of sine are again a sine, which is zero at the origin.
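The definition translates directly into a short computation. The following sketch (illustrative only; the helper name taylor_poly_sin and the test points are not from the text) evaluates P_n^0 for the sine function using the known derivatives of sin at 0 and compares it with sin(x) near 0:

```python
# Sketch of Definition 4.3.1 for f = sin at x0 = 0.  The derivatives of sin
# cycle through sin, cos, -sin, -cos, so f^(k)(0) is 0, 1, 0, -1, 0, 1, ...
import math

def taylor_poly_sin(n, x, x0=0.0):
    """Evaluate the nth order Taylor polynomial of sin at x0 at the point x."""
    cycle = [math.sin, math.cos, lambda t: -math.sin(t), lambda t: -math.cos(t)]
    total = 0.0
    for k in range(n + 1):
        fk_x0 = cycle[k % 4](x0)                        # f^(k)(x0)
        total += fk_x0 / math.factorial(k) * (x - x0) ** k
    return total

for x in (0.1, 0.5, 1.0):
    p7 = taylor_poly_sin(7, x)
    print(f"x={x}: P_7(x)={p7:.10f}  sin(x)={math.sin(x):.10f}  error={abs(p7 - math.sin(x)):.2e}")
```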

Taylor’s theorem says a function behaves like its nth Taylor polynomial. The mean value theorem is really Taylor’s theorem for the first derivative.

Theorem 4.3.2 (Taylor). Suppose f: [a,b]→R is a function with n continuous derivatives on [a,b] and such that f^(n+1) exists on (a,b). Given distinct points x0 and x in [a,b], we can find a point c between x0 and x such that

f(x) = P_n^{x0}(x) + f^(n+1)(c)/(n+1)! · (x−x0)^(n+1).

Named for the English mathematician Brook Taylor (1685–1731). It was first found by the Scottish mathematician James Gregory (1638–1675). The statement we give was proved by Joseph-Louis Lagrange (1736–1813).

Figure 4.8: The odd degree Taylor polynomials P_1^0(x), P_3^0(x), P_5^0(x), and P_7^0(x) for the sine function, plotted together with y = sin(x).

The term R_n^{x0}(x) := f^(n+1)(c)/(n+1)! · (x−x0)^(n+1) is called the remainder term. This form of the remainder term is called the Lagrange form of the remainder. There are other ways to write the remainder term, but we skip those. Note that c depends on both x and x0.
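The Lagrange form gives a computable bound on the error: since |f^(n+1)(c)| is at most the maximum of |f^(n+1)| between x0 and x, we get |f(x) − P_n^{x0}(x)| ≤ max|f^(n+1)| / (n+1)! · |x−x0|^(n+1). The following sketch (a numerical sanity check, not a proof; the function exp, the point x = 1.5, and the degrees are illustrative) verifies this bound for f = exp at x0 = 0, where every derivative is exp and the maximum over [0, x] is e^x:

```python
# Numerical sanity check of the Lagrange remainder bound for f = exp at x0 = 0.
import math

def taylor_poly_exp(n, x):
    # every derivative of exp at 0 equals 1, so P_n(x) = sum_{k<=n} x^k / k!
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

x = 1.5   # illustrative point, x > 0
for n in range(1, 8):
    error = abs(math.exp(x) - taylor_poly_exp(n, x))
    bound = math.exp(x) / math.factorial(n + 1) * x ** (n + 1)   # Lagrange bound
    print(f"n={n}: error={error:.3e}  bound={bound:.3e}  error <= bound: {error <= bound}")
```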

Proof. Find a number M_{x,x0} (depending on x and x0) solving the equation

f(x) = P_n^{x0}(x) + M_{x,x0}(x−x0)^(n+1).

Define a function g(s) by

g(s) := f(s) − P_n^{x0}(s) − M_{x,x0}(s−x0)^(n+1).

We compute the kth derivative at x0 of the Taylor polynomial: (P_n^{x0})^(k)(x0) = f^(k)(x0) for k = 0,1,2,...,n (the zeroth derivative of a function is the function itself). Therefore,

g(x0) = g′(x0) = g′′(x0) = ··· = g^(n)(x0) = 0.

In particular, g(x0) = 0. On the other hand, g(x) = 0. By the mean value theorem there exists an x1 between x0 and x such that g′(x1) = 0. Applying the mean value theorem to g′, we obtain that there exists x2 between x0 and x1 (and therefore between x0 and x) such that g′′(x2) = 0. We repeat the argument n+1 times to obtain a number x_{n+1} between x0 and x_n (and therefore between x0 and x) such that g^(n+1)(x_{n+1}) = 0.

Let c := x_{n+1}. We compute the (n+1)th derivative of g to find

g^(n+1)(s) = f^(n+1)(s) − (n+1)! M_{x,x0}.

Plugging in c for s we obtain M_{x,x0} = f^(n+1)(c)/(n+1)!, and we are done.

In the proof we have computed (P_n^{x0})^(k)(x0) = f^(k)(x0) for k = 0,1,2,...,n. Therefore, the Taylor polynomial has the same derivatives as f at x0 up to the nth derivative. That is why the Taylor polynomial is a good approximation to f. Notice that in Figure 4.8 the Taylor polynomials are reasonably good approximations to the sine near x = 0.
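This remark is easy to check symbolically. Here is a small sketch (assuming sympy is available; the function log(1+x), the point x0 = 1, and the degree n = 4 are only a sample) confirming that the Taylor polynomial and f have the same derivatives at x0 up to order n:

```python
# Sketch: build P_n^{x0} for a sample f and verify (P_n^{x0})^(k)(x0) = f^(k)(x0).
import sympy as sp

x = sp.symbols('x')
f = sp.log(1 + x)      # sample function
x0, n = 1, 4           # sample expansion point and degree

P = sum(sp.diff(f, x, k).subs(x, x0) / sp.factorial(k) * (x - x0) ** k
        for k in range(n + 1))

for k in range(n + 1):
    same = sp.simplify(sp.diff(P, x, k).subs(x, x0) - sp.diff(f, x, k).subs(x, x0)) == 0
    print(k, same)     # expect True for every k = 0, ..., n
```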

We do not necessarily get good approximations by the Taylor polynomial everywhere. For example, if we start expanding the function f(x) = x/(1−x) around 0, we get the graphs in Figure 4.9. The dotted lines are the first, second, and third degree approximations. The dashed line is the 20th degree polynomial, and yet the approximation only seems to get better with the degree for x > −1; for smaller x, it in fact gets worse. The polynomials are the partial sums of the geometric series ∑_{n=1}^{∞} x^n, and the series only converges on (−1,1). See the discussion of power series.

Figure 4.9: The function x/(1−x), and the Taylor polynomials P_1^0, P_2^0, P_3^0 (all dotted), and the polynomial P_20^0 (dashed).
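The behavior in Figure 4.9 is easy to reproduce numerically. The sketch below (the helper names and the sample points 0.5 and −1.25 are illustrative) evaluates the partial sums of the geometric series against x/(1−x): inside (−1,1) the error shrinks with the degree, outside it grows:

```python
# Sketch: partial sums of sum_{n>=1} x^n (the Taylor polynomials of x/(1-x) at 0)
# improve with degree only for |x| < 1.
def partial_sum(x, degree):
    return sum(x ** n for n in range(1, degree + 1))

def f(x):
    return x / (1 - x)

for x in (0.5, -1.25):           # one point inside (-1,1), one outside
    print(f"x = {x}, f(x) = {f(x):.4f}")
    for degree in (3, 10, 20):
        p = partial_sum(x, degree)
        print(f"  degree {degree:2d}: P(x) = {p:10.4f}  error = {abs(p - f(x)):.4f}")
```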

If f is infinitely differentiable, that is, if f can be differentiated any number of times, then we define the Taylor series:

∑_{k=0}^{∞} f^(k)(x0)/k! · (x−x0)^k.

There is no guarantee that this series converges for any x ≠ x0. And even where it does converge, there is no guarantee that it converges to the function f. Functions f whose Taylor series at every point x0 converges to f in some open interval containing x0 are called analytic functions. Most functions one tends to see in practice are analytic. See the exercises later in the book for an example of a non-analytic function.
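For an analytic function such as cosine, the partial sums of the Taylor series do converge to the function at every point, even far from x0. A quick numerical sketch (the point x = 2.0 and the cutoffs are illustrative) of this for f = cos at x0 = 0:

```python
# Sketch: partial sums of the Taylor series of cos at x0 = 0 converge to cos(x).
import math

def cos_taylor_partial_sum(x, N):
    # f^(2j)(0) = (-1)^j and the odd-order derivatives vanish at 0
    return sum((-1) ** j * x ** (2 * j) / math.factorial(2 * j) for j in range(N + 1))

x = 2.0
for N in (1, 3, 5, 8):
    s = cos_taylor_partial_sum(x, N)
    print(f"N={N}: partial sum = {s:.8f}  cos(x) = {math.cos(x):.8f}")
```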

The definition of derivative says that a function is differentiable if it is locally approximated by a line. We mention in passing that there exists a converse to Taylor’s theorem, which we will neither state nor prove, saying that if a function is locally approximated in a certain way by a polynomial of degree d, then it has d derivatives.

Taylor’s theorem gives us a quick proof of a version of the second derivative test. By a strict relative minimum of f at c, we mean that there exists a δ > 0 such that f(x) > f(c) for all x ∈ (c−δ, c+δ) where x ≠ c. A strict relative maximum is defined similarly. Continuity of the second derivative is not needed, but the proof is more difficult and is left as an exercise. The proof also generalizes immediately into the nth derivative test, which is also left as an exercise.

Proposition 4.3.3 (Second derivative test). Suppose f: (a,b)→R is twice continuously differentiable, x0 ∈ (a,b), f′(x0) = 0, and f′′(x0) > 0. Then f has a strict relative minimum at x0.

Proof. As f′′ is continuous, there exists a δ > 0 such that f′′(c) > 0 for all c ∈ (x0−δ, x0+δ); a continuous function that is positive at a point is positive on some interval around that point. Take x ∈ (x0−δ, x0+δ), x ≠ x0. Taylor’s theorem says that for some c between x0 and x,

f(x) = f(x0) + f′(x0)(x−x0) + f′′(c)/2 · (x−x0)^2 = f(x0) + f′′(c)/2 · (x−x0)^2.

As f′′(c) > 0 and (x−x0)^2 > 0, then f(x) > f(x0).
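As a quick numerical illustration of the proposition (not a replacement for the proof; the sample function cos(x) + x², the point x0 = 0, and δ = 0.5 are illustrative), one can check that f(x) > f(x0) on a punctured interval around a point where f′(x0) = 0 and f′′(x0) > 0:

```python
# Sketch: check the conclusion of the second derivative test for a sample function.
# f(x) = cos(x) + x^2 has f'(0) = 0 and f''(0) = 1 > 0, so 0 should be a strict
# relative minimum.
import math

def f(x):
    return math.cos(x) + x ** 2

x0, delta = 0.0, 0.5
points = [x0 + delta * k / 100 for k in range(-100, 101) if k != 0]   # punctured interval
print(all(f(x) > f(x0) for x in points))   # expect True
```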

4.3.3 Exercises

Exercise 4.3.1: Compute the nth Taylor polynomial at 0 for the exponential function.

Exercise 4.3.2: Suppose p is a polynomial of degree d. Given any x0 ∈ R, show that the (d+1)th Taylor polynomial for p at x0 is equal to p.

Exercise 4.3.3: Let f(x) := |x|^3. Compute f′(x) and f′′(x) for all x, but show that f^(3)(0) does not exist.

Exercise 4.3.4: Suppose f: R→R has n continuous derivatives. Show that for any x0 ∈ R, there exist polynomials P and Q of degree n and an ε > 0 such that P(x) ≤ f(x) ≤ Q(x) for all x ∈ [x0−ε, x0+ε] and Q(x) − P(x) = λ(x−x0)^n for some λ ≥ 0.

Exercise 4.3.5: If f: [a,b]→R has n+1 continuous derivatives and x0 ∈ [a,b], prove lim_{x→x0} R_n^{x0}(x)/(x−x0)^n = 0.

Exercise 4.3.6: Suppose f: [a,b]→R has n+1 continuous derivatives and x0 ∈ (a,b). Prove: f^(k)(x0) = 0 for all k = 0,1,2,...,n if and only if g(x) := f(x)/(x−x0)^(n+1) is continuous at x0.

Exercise 4.3.7: Suppose a,b,c ∈ R and f: R→R is differentiable, f′′(x) = a for all x, f′(0) = b, and f(0) = c. Find f and prove that it is the unique differentiable function with this property.

Exercise 4.3.8 (Challenging): Show that a simple converse to Taylor’s theorem does not hold. Find a function f: R→R with no second derivative at x = 0 such that |f(x)| ≤ |x|^3, that is, f goes to zero at 0 faster than x^3, and while f′(0) exists, f′′(0) does not.

Exercise 4.3.9: Suppose f: (0,1)→R is differentiable and f′′ is bounded.

a) Show that there exists a once differentiable function g: [0,1)→R such that f(x) = g(x) for all x ≠ 0. Hint: See an earlier exercise.

b) Find an example where this g is not twice differentiable at x = 0.

Exercise 4.3.10: Prove the nth derivative test. Suppose n ∈ N, x0 ∈ (a,b), and f: (a,b)→R is n times continuously differentiable, with f^(k)(x0) = 0 for k = 1,2,...,n−1, and f^(n)(x0) ≠ 0. Prove:

a) If n is odd, then f has neither a relative minimum nor a relative maximum at x0.

b) If n is even, then f has a strict relative minimum at x0 if f^(n)(x0) > 0 and a strict relative maximum at x0 if f^(n)(x0) < 0.

Exercise4.3.11: Prove the more general version of the second derivative test. Suppose f: (a,b)→Ris

differentiable and x0∈(a,b)is such that, f0(x0) =0, f00(x0)exists, and f00(x0)>0. Prove that f has a strict relative minimum at x0. Hint: Consider the limit definition of f00(x0).