Linear Transforms, Operating on Sequences

Operating on Sequences

Let a linear operator accept, as its input, a sequence of real or complex numbers. A sequence is a function defined on the nonnegative integers, so it is fair to say that our operator takes a function as its input and produces another function as its output.

You are already familiar with the taylor operator, which takes a sequence of real or complex numbers and transforms it into a continuous, analytic function. Let t() be the taylor operator. If t([1,0,0,0…]) and t([0,1,0,0…]) are established, then t([3,7,0,0…]) = 3×t([1,0,0,0…]) + 7×t([0,1,0,0…]) is determined by linearity. We need only define t() on the basis elements a0, a1, a2, a3, etc., and everything follows from there.

As you probably know, the taylor operator maps ai to the function ai×x^i, and in so doing, any finite sequence becomes the coefficients of the corresponding polynomial. For instance, t([1,0,3,9]) becomes 1 + 3x^2 + 9x^3.
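To make this concrete, here is a minimal Python sketch of t() acting on a finite sequence. The function name taylor and its coefficient-list interface are my own choices for illustration, not notation from the text.

```python
def taylor(coeffs):
    """Apply the taylor operator to a finite sequence of coefficients,
    returning the corresponding polynomial as a callable function."""
    def f(x):
        # sum of coeffs[i] * x^i over the finite sequence
        return sum(a * x**i for i, a in enumerate(coeffs))
    return f

# t([1,0,3,9]) is the polynomial 1 + 3x^2 + 9x^3
p = taylor([1, 0, 3, 9])
print(p(2))  # 1 + 3*4 + 9*8 = 85
```

Representing the polynomial as a closure keeps the sketch faithful to the idea that t() turns a sequence into a function.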

Notice I used the word finite. The individual elements of the sequence don't really form a basis for the vector space, because the span of a basis only includes finite sums. The action of t() on the geometric sequence ai = ½^i has not been defined. We could map this sequence, and every other sequence with infinitely many nonzero terms, to 0, and remain perfectly consistent. This is a well defined linear operator, but it's not the taylor operator, and it's not really what we want.

The most interesting transforms are invertible, and the taylor operator is no exception. Let u() be the untaylor operator. It takes an analytic function, defined in a neighborhood about 0, and cranks out the taylor sequence. The nth term is the nth derivative of the function, evaluated at 0, divided by n factorial. (The details are not important here.) The point is, t and u are inverse operators, at least for polynomials, and if we want them to be inverse operators for all functions, then t has to be linear with respect to infinite sums as well as finite sums. This is called hyperlinear. Therefore t(a) produces the function:
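Here is a sketch of u() restricted to polynomials, where the nth derivative at 0 is easy to compute exactly from a coefficient list. The helper names derivative and untaylor_term are hypothetical. Applying it to the coefficients of 1 + 3x^2 + 9x^3 recovers the original sequence, illustrating that u undoes t on polynomials.

```python
from math import factorial

def derivative(coeffs):
    # coefficient list of the derivative of a polynomial:
    # d/dx of c_i * x^i is i*c_i * x^(i-1)
    return [i * c for i, c in enumerate(coeffs)][1:]

def untaylor_term(coeffs, n):
    # nth term of the taylor sequence:
    # the nth derivative, evaluated at 0, divided by n!
    d = list(coeffs)
    for _ in range(n):
        d = derivative(d)
    at_zero = d[0] if d else 0
    return at_zero / factorial(n)

# u(t([1,0,3,9])) recovers the sequence term by term
print([untaylor_term([1, 0, 3, 9], n) for n in range(4)])
# [1.0, 0.0, 3.0, 9.0]
```

For a polynomial, the nth derivative at 0 is exactly n!·an, so the division by n! hands back the coefficient an.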

a0 + a1x + a2x^2 + a3x^3 + a4x^4 + …

This is an infinite sum; call the resulting function f, defined wherever the sum converges. We know it converges when x = 0, and it may converge inside a circle about the origin, or on the entire complex plane.
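A quick numerical illustration, using the geometric sequence ai = ½^i mentioned earlier: its taylor series is the geometric series for 1/(1 − x/2), which converges inside the circle |x| < 2. The partial-sum routine below is my own sketch, not notation from the text.

```python
def taylor_partial(a, x, n):
    # partial sum of the taylor series:
    # a(0) + a(1)x + a(2)x^2 + ... + a(n-1)x^(n-1)
    return sum(a(i) * x**i for i in range(n))

# geometric sequence a_i = (1/2)^i; its series sums to 1/(1 - x/2)
a = lambda i: 0.5**i

# at x = 1 the series converges to 2
print(taylor_partial(a, 1.0, 60))  # approaches 2.0
```

Try x = 3 and the partial sums grow without bound, matching the claim that the series is only defined where it converges.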

Since t() is linear across infinite sums, the resulting function is called a taylor series, rather than a taylor sequence.

In the previous section, I introduced the fourier transform, which turns a continuous function into a sequence of numbers. There is an inverse fourier transform that turns the sequence back into the function that created it. To work properly, the inverse transform must be linear with respect to infinite sums. The basis functions, trig functions in this case, are multiplied by their respective coefficients and added together, and the resulting function is defined wherever the sum converges. Since the sequence acts as a series, it is called a fourier series.
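As a sketch of how a fourier series is reassembled from its coefficients, here is a partial-sum routine in Python. The square-wave sine coefficients b_k = 4/(πk) for odd k are a standard worked example, chosen here for illustration rather than taken from the text.

```python
import math

def fourier_partial(b, x, n):
    # partial sum of a sine series:
    # b(1)sin(x) + b(2)sin(2x) + ... + b(n)sin(nx)
    return sum(b(k) * math.sin(k * x) for k in range(1, n + 1))

# square wave: b_k = 4/(pi*k) for odd k, 0 for even k
b = lambda k: 4 / (math.pi * k) if k % 2 else 0.0

# at x = pi/2 the square wave equals 1, and the partial sums close in on it
print(fourier_partial(b, math.pi / 2, 2001))  # approaches 1.0
```

Just as with the taylor series, each basis function is scaled by its coefficient and the results are summed, and the reconstruction is only meaningful where that sum converges.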

Whenever a linear operator acts on a sequence of numbers (input), or generates a sequence of numbers (output), the operator, or its inverse, almost always treats the sequence as a series, and so the sequence is referred to as a series, e.g. the taylor series and the fourier series.