Calculus, Taylor Polynomials

Taylor Polynomials

If a function f has derivatives up through order n in some neighborhood about 0, the Maclaurin polynomial of degree n is the sum of f^(k)(0)×x^k/k! as k runs from 0 to n. In other words, the kth term of the polynomial is the kth derivative of f, evaluated at 0, times x^k over k factorial. If we let k run from 0 to ∞ we have the Maclaurin series.
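Here is a minimal sketch of that sum in code (my own illustration, using the sympy library; the helper name maclaurin_poly is not from the text):

import sympy as sp

x = sp.symbols('x')

def maclaurin_poly(f, n):
    # sum of f^(k)(0) * x^k / k! for k = 0 through n
    return sum(sp.diff(f, x, k).subs(x, 0) * x**k / sp.factorial(k)
               for k in range(n + 1))

print(sp.expand(maclaurin_poly(sp.sin(x), 5)))   # x - x**3/6 + x**5/120
print(float(sp.sin(x).subs(x, 0.3)), float(maclaurin_poly(sp.sin(x), 5).subs(x, 0.3)))

The two printed values agree to several decimal places, since 0.3 is close to 0.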

If the first n derivatives are known at x = a, use those derivatives, and x-a instead of x, in the Maclaurin formula. The result is called the Taylor polynomial, or Taylor series. We have simply shifted our frame of reference, expanding f at x = a instead of x = 0; there is really no difference. For simplicity we will assume a = 0 and use Maclaurin's formula, though we'll still call it a Taylor series, just like most textbooks do. Maclaurin is dead; I don't think he'll mind.
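As a quick illustration of the shift (again my own sketch, with an assumed helper taylor_poly), here is the same sum centered at a point a, applied to log(x) about x = 1:

import sympy as sp

x = sp.symbols('x')

def taylor_poly(f, a, n):
    # sum of f^(k)(a) * (x-a)^k / k! for k = 0 through n
    return sum(sp.diff(f, x, k).subs(x, a) * (x - a)**k / sp.factorial(k)
               for k in range(n + 1))

print(sp.expand(taylor_poly(sp.log(x), 1, 3)))   # the cubic approximation to log(x) near x = 1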

Note, these are sometimes misspelled as Tailor polynomial or Tailor series. The names actually honor the mathematicians Brook Taylor and Colin Maclaurin; no tailors were involved.

The Taylor polynomial of degree n is the only polynomial of degree n or less that agrees with f, at x = 0, in its value and its first n derivatives. In other words, it is a good approximation to f, at least near the origin. But how good? We'll try to figure that out.

If the next derivative, f^(n+1), also exists throughout this neighborhood, we may compute the difference between f and the degree-n Taylor polynomial at some nearby point x. This is the error term. I'm getting ahead of myself, but let's assume you know some integral calculus. If not, come back to this section later.

Consider the error in the linear approximation: f(x) - (f(0)+f′(0)×x). This is equal to the following integral, as t runs from 0 to x.

∫ (f′(t) - f′(0)) dt

Using integration by parts, let u = f′(t)-f′(0) and dv = dt, choosing v = t-x as the antiderivative. The uv term drops to zero, since v vanishes at t = x and u vanishes at t = 0, leaving the following integral.

∫ (x-t)×f′′(t) dt

Thus the error term is equal to an integral that involves the next derivative of f.
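As a sanity check (my own, not from the text), take f = exp and a sample point x = 1/2; the linear error and the integral agree exactly:

import sympy as sp

t = sp.symbols('t')
xv = sp.Rational(1, 2)                          # a sample point near 0
error = sp.exp(xv) - (1 + xv)                   # f(x) - (f(0) + f'(0)*x) for f = exp
integral = sp.integrate((xv - t) * sp.exp(t), (t, 0, xv))
print(sp.simplify(error - integral))            # prints 0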

We can refine the Taylor polynomial by bringing in additional terms. These terms are then subtracted from the error term. In general, the error of the degree-n approximation is the following integral, as t runs from 0 to x.

∫ (x-t)^n × f^(n+1)(t) / n! dt
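The same kind of check works for higher n (again a verification sketch of my own, with f = exp and x = 1):

import sympy as sp

t, x = sp.symbols('t x')
f = sp.exp(x)
xv = sp.Integer(1)

for n in range(1, 5):
    poly = sum(sp.diff(f, x, k).subs(x, 0) * xv**k / sp.factorial(k)
               for k in range(n + 1))
    remainder = sp.integrate((xv - t)**n * sp.diff(f, x, n + 1).subs(x, t) / sp.factorial(n),
                             (t, 0, xv))
    print(n, sp.simplify(f.subs(x, xv) - poly - remainder))   # 0 every time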

We've already shown this is true for n = 1. Proceed by induction on n. To move from n to n+1, take the next term in the Taylor expansion of f, the one with exponent n+1, namely f^(n+1)(0)×x^(n+1)/(n+1)!, and subtract it from the error term, which we have written as an integral (shown above). This gives the next error term. As you might guess, we express the subtracted term as an integral too, as t runs from 0 to x: x^(n+1)/(n+1)! is the integral of t^n/n!, and replacing t with x-t does not change its value, since the integral runs from 0 to x and scanning the curve from right to left instead of left to right sweeps out the same area. So the next term is the integral of f^(n+1)(0)×(x-t)^n/n!. Subtract the two integrals and get the following.

∫ [ (x-t)^n × f^(n+1)(t) - f^(n+1)(0) × (x-t)^n ] / n! dt

This can be simplified as follows.

∫ (x-t)^n × (f^(n+1)(t) - f^(n+1)(0)) / n! dt

Use integration by parts as before, with u = f^(n+1)(t)-f^(n+1)(0) and v = -(x-t)^(n+1)/(n+1)!. Again the uv term drops to 0, since v vanishes at t = x and u vanishes at t = 0, and the remaining integral follows our general formula for the error term, with n replaced by n+1. That completes the inductive step.
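Spelled out in standard notation (my own rendering of the step, not taken from the text), the parts computation is

\int_0^x \frac{(x-t)^n}{n!}\Bigl(f^{(n+1)}(t)-f^{(n+1)}(0)\Bigr)\,dt
= \Bigl[-\frac{(x-t)^{n+1}}{(n+1)!}\Bigl(f^{(n+1)}(t)-f^{(n+1)}(0)\Bigr)\Bigr]_{t=0}^{t=x}
+ \int_0^x \frac{(x-t)^{n+1}}{(n+1)!}\,f^{(n+2)}(t)\,dt

and the bracketed term is zero at both limits, leaving exactly the error formula with n+1 in place of n.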

If b bounds |f^(n+1)| on the interval (0,x), then the integrand is at most b×(x-t)^n/n!, and integrating from 0 to x shows the error of the degree-n approximation is at most b×x^(n+1)/(n+1)!.
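For instance (a sketch of my own, not from the text), every derivative of sin is bounded by b = 1, so the error at x should never exceed x^(n+1)/(n+1)!:

import math

def maclaurin_sin(xv, n):
    # the derivatives of sin at 0 cycle through 0, 1, 0, -1, ...
    cycle = [0.0, 1.0, 0.0, -1.0]
    return sum(cycle[k % 4] * xv**k / math.factorial(k) for k in range(n + 1))

xv = 1.3
for n in range(1, 8):
    actual = abs(math.sin(xv) - maclaurin_sin(xv, n))
    bound = xv**(n + 1) / math.factorial(n + 1)
    print(n, actual <= bound)                    # True every time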

If f^(n+1) is continuous, use the second mean value theorem for integrals to rewrite the error term as f^(n+1)(c) times the integral of (x-t)^n/n!, for some c between 0 and x; this is legal because the factor (x-t)^n does not change sign on the interval. Evaluating the remaining integral gives Lagrange's version of the error term.

f^(n+1)(c)×x^(n+1)/(n+1)!
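One common use of the Lagrange form is to decide how many terms are needed for a given accuracy. For f = exp and x = 1, f^(n+1)(c) = e^c stays below 3 on (0,1), so once 3/(n+1)! drops under the target, the polynomial approximates e to that accuracy (a sketch of my own, not from the text):

import math

target = 1e-6
n = 0
while 3 / math.factorial(n + 1) >= target:       # smallest n whose bound beats the target
    n += 1

approx = sum(1 / math.factorial(k) for k in range(n + 1))
print(n, abs(math.e - approx) < target)          # 9 True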