## Linear Algebra, Dot Product

### Dot Product


Sometimes an appropriate inner product or dot product is defined. This function, written x.y, maps pairs of vectors back into scalars.

The dot product must be bilinear. If c is a scalar, d is the conjugate of c, and x, y, and z are vectors, we have:

c(x.y) = (cx).y
d(x.y) = x.(cy)
x.y + x.z = x.(y+z)
y.x + z.x = (y+z).x
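These rules can be checked numerically. Here is a minimal sketch using the standard real dot product (defined formally in the next section); over the reals a scalar is its own conjugate, so c and d coincide. The helper names `dot`, `scale`, and `add` are ours, not part of any standard library.

```python
# Numerical check of the bilinearity rules, using the standard real
# dot product: the sum of products of corresponding entries.

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def scale(c, v):
    return [c * a for a in v]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

x, y, z = [1, 3, 5], [2, 4, 6], [7, 0, -1]
c = 3  # over the reals, d = conjugate(c) = c

assert c * dot(x, y) == dot(scale(c, x), y)        # c(x.y) = (cx).y
assert c * dot(x, y) == dot(x, scale(c, y))        # d(x.y) = x.(cy), d = c here
assert dot(x, y) + dot(x, z) == dot(x, add(y, z))  # x.y + x.z = x.(y+z)
assert dot(y, x) + dot(z, x) == dot(add(y, z), x)  # y.x + z.x = (y+z).x
print("bilinearity holds for this example")
```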

Note that x.0 = x.(x-x) = x.x - x.x = 0. Similarly, 0.x = 0.

The dot product need not be commutative. That is, x.y may not equal y.x. It does when real vectors are involved, but this fails for complex vectors, as we shall see below.

When scalars are taken from the field of real numbers, the "standard" dot product of two vectors is the sum of the products of the corresponding entries. Thus [1,3,5].[2,4,6] = 2+12+30 = 44. Notice that v.v becomes a sum of squares. The square root of v.v, or |v|, gives the Euclidean distance from the origin to the point v in n-space, or the length of the vector v. This is true in two dimensions, thanks to the Pythagorean theorem. The same is true in 3 dimensions; just use the theorem twice. This generalizes to n dimensions.
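The worked example above can be reproduced directly; this sketch also computes |v| as the square root of v.v. The function names are ours.

```python
import math

def dot(x, y):
    # standard real dot product: sum of products of corresponding entries
    return sum(a * b for a, b in zip(x, y))

v = [1, 3, 5]
w = [2, 4, 6]

print(dot(v, w))             # 2 + 12 + 30 = 44
print(math.sqrt(dot(v, v)))  # |v| = sqrt(1 + 9 + 25) = sqrt(35)
```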

We may define the angle between two vectors u and v as the arc cosine of u.v over |u|×|v|. We don't have to worry about a zero denominator, unless u or v is 0. Cauchy-Schwarz tells us the square of the quotient is bounded by 1, hence the quotient is between -1 and 1, and the arc cosine is well defined.
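The angle formula translates directly into code. A minimal sketch, with our own helper names; as the text notes, Cauchy-Schwarz keeps the argument of arc cosine in [-1, 1] whenever u and v are nonzero.

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(v):
    return math.sqrt(dot(v, v))

def angle(u, v):
    # arc cosine of u.v over |u| x |v|; requires u and v nonzero
    return math.acos(dot(u, v) / (norm(u) * norm(v)))

print(angle([1, 0], [0, 1]))  # perpendicular axes: pi/2
```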

### Complex Numbers

When scalars are complex numbers, take the conjugate of the second vector, then apply the dot product formula above. There is a method to this madness. Watch what happens as you compute v.v. Each entry in v is multiplied by its conjugate. If an entry is a+bi, the result is a²+b². Thus v.v becomes the sum of the squares of the real and imaginary parts of all the components, just as it was in real space. The square root of v.v is again the distance from the origin to v, or |v|.
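Python's complex type makes this easy to see. A sketch, with `cdot` our own name: conjugating the second argument turns each diagonal term of v.v into a real a²+b².

```python
def cdot(x, y):
    # conjugate the second vector, then sum products of corresponding entries
    return sum(a * b.conjugate() for a, b in zip(x, y))

v = [3 + 4j, 1 - 2j]
# (3+4j)(3-4j) = 9+16 = 25, and (1-2j)(1+2j) = 1+4 = 5
print(cdot(v, v))  # (30+0j): real, a sum of a^2 + b^2 terms
```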

With sqrt(v.v) as a metric, the vector space becomes a metric space, with the usual "open ball" topology. The only tricky part is the triangle inequality, but a triangle is a triangle, even if it is embedded in 23 dimensions. The third side cannot be longer than the sum of the other two sides. This can be proved algebraically, if you have some time on your hands.

Let's take a look at perpendicularity. Let u.v = 0. This means the real part of u.v is 0, and that is just the real dot product of u and v, viewed as vectors in real 2n-space. The vectors are perpendicular in real space.

In addition, the imaginary part of u.v is also 0, and that is the real dot product of u and vi. This means u is perpendicular to vi, in real space. Therefore u.v is 0 iff u is perpendicular to both v and vi.
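This identification can be tested numerically. A sketch, under the assumption that a complex n-vector a+bi flattens to the real 2n-vector (a, b) componentwise; `cdot`, `rdot`, and `realize` are our own names.

```python
def cdot(x, y):
    # complex dot product: conjugate the second vector
    return sum(a * b.conjugate() for a, b in zip(x, y))

def rdot(x, y):
    # standard real dot product
    return sum(a * b for a, b in zip(x, y))

def realize(v):
    # view a complex n-vector as a real 2n-vector
    return [t for z in v for t in (z.real, z.imag)]

u = [1 + 1j, 2 - 1j]
v = [1 - 1j, -1j]
z = cdot(u, v)

# real part of u.v = real dot product of u and v in real 2n-space
assert abs(z.real - rdot(realize(u), realize(v))) < 1e-12
# imaginary part of u.v = real dot product of u and vi
vi = [1j * a for a in v]
assert abs(z.imag - rdot(realize(u), realize(vi))) < 1e-12
print("perpendicularity identities hold for this example")
```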

### The Dot Product of Continuous Functions

Here is one more example, in infinite dimensions. Recall that the continuous functions on [0,1] form a vector space. If f and g are two such functions, then f.g is the integral, over the unit interval, of fg. The product fg is continuous, and integrable, so there's no trouble here.

Defining f.g as ∫fg is really the limit of the previous definition. If we divide the interval into n segments, to approximate the integral, and sample f and g at each segment, then f and g look like vectors in n dimensional space. Their dot product, divided by n, approximates the integral of fg. This approaches the continuous dot product as n approaches infinity.
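This limit can be watched in action. A sketch, sampling at midpoints; `approx_dot` is our own name, and the test functions are arbitrary.

```python
def approx_dot(f, g, n):
    # sample f and g at n points in [0,1]; their vector dot product,
    # divided by n, approximates the integral of fg
    xs = [(k + 0.5) / n for k in range(n)]
    return sum(f(x) * g(x) for x in xs) / n

f = lambda x: x
g = lambda x: x * x
# exact value: integral of x^3 over [0,1] = 1/4
for n in (10, 100, 1000):
    print(n, approx_dot(f, g, n))
```

The printed values approach 0.25 as n grows, as the limit argument predicts.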

The angle between two continuous functions is defined as above, the arc cosine of f.g over |f|×|g|. Is this well defined? The argument to arc cosine is one limit divided by two other limits, as n approaches infinity. Rewrite this as a single limit, the limit of fn.gn over |fn|×|gn|. This expression is between -1 and 1 for every n, hence the limit is bounded by ±1, and is the cosine of some angle between 0 and π.

The norm of f-g gives a specific distance between two functions f and g. This is zero for f = g, positive otherwise. A limit argument, similar to the above, proves the triangle inequality. Hence this vector space is a metric space with the open ball topology.