Linear Algebra, Matrix as Linear Transformation

Matrix as Linear Transformation

An m×n matrix is a convenient way to represent m vectors in n space. Each row corresponds to a vector in n space.

The rank of a matrix A is the dimension of the subspace spanned by the rows of A. The rank of the zero matrix is 0.
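The rank computation can be sketched with NumPy; the matrices here are hypothetical examples.

```python
import numpy as np

# A minimal sketch, assuming NumPy. The third row of A is the sum of the
# first two, so the rows span only a plane in 3 space, and the rank is 2.
A = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 1, 0]])

print(np.linalg.matrix_rank(A))                 # 2
print(np.linalg.matrix_rank(np.zeros((3, 3))))  # 0, the zero matrix
```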

A square matrix A is singular if its rows are dependent, and nonsingular if they are independent. The identity matrix is square and nonsingular, and its rows span every possible vector of length n. In 3 space, x times the first row + y times the second + z times the third yields x,y,z. The rows span the entire vector space, and form a basis for that space. The space has n dimensions. Any other nonsingular matrix also contains n independent rows, and acts as a basis, spanning the entire space. Conversely, a basis has to have n independent rows, and forms a nonsingular matrix.
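These facts can be checked numerically. A minimal sketch assuming NumPy, with S a hypothetical singular example:

```python
import numpy as np

# The identity is nonsingular (nonzero determinant); S is singular because
# its second row is twice its first, so the rows are dependent.
I = np.eye(3)
S = np.array([[1, 2, 3],
              [2, 4, 6],
              [0, 1, 0]])

print(np.linalg.det(I))   # 1.0
print(np.linalg.det(S))   # approximately 0

# The rows of the identity form a basis: x, y, z combine to (x, y, z).
x, y, z = 2.0, -1.0, 5.0
v = x * I[0] + y * I[1] + z * I[2]
print(v)                  # [ 2. -1.  5.]
```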

The m×n matrix A defines a function from m space into n space. If v is a vector in m space, v*A is a vector in n space. We will prove that each matrix defines a linear function, and that each linear function is implemented by a matrix.

Let the standard coordinate axes act as a basis for m space, the domain. Watch what happens when v is one of these coordinate vectors. Set the jth entry equal to 1, while all other entries are 0, so that v points along the jth axis, and represents the jth coordinate. After matrix multiplication, v*A extracts the jth row. The jth coordinate vector in m space is mapped onto the jth row of A, which lives in n space. Since (v+w)*A = v*A+w*A, the function commutes with vector addition. If we multiply v by k on the left, then by A on the right, that's the same as k*(v*A). The function commutes with scalar multiplication, even if the base ring r is noncommutative, and is a valid linear transformation. Thus the matrix A implements a linear transformation from m space into n space.
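The behavior described above can be verified directly; a minimal sketch assuming NumPy (hence a commutative base ring), with a hypothetical 3×4 matrix:

```python
import numpy as np

# A hypothetical 3x4 matrix: 3 vectors in 4 space, one per row.
A = np.array([[1,  0, 2, -1],
              [3,  5, 0,  2],
              [0, -2, 4,  1]])

e1 = np.array([0, 1, 0])     # the coordinate vector along axis j = 1
print(np.array_equal(e1 @ A, A[1]))   # True: v*A extracts row 1

v = np.array([1, 2, 3])
w = np.array([4, 0, -2])
k = 7
print(np.array_equal((v + w) @ A, v @ A + w @ A))  # True: commutes with +
print(np.array_equal((k * v) @ A, k * (v @ A)))    # True: commutes with scaling
```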

Conversely, any linear map from m space into n space is completely determined by its action on basis elements. We may as well use the coordinate vectors as a basis for m space. Each linear function is then an ordered set of m vectors in n space, the images of the coordinate vectors in m space. These m vectors form the rows of a matrix, and matrix multiplication implements the linear function. Thus linear functions and matrices correspond 1-1.
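The correspondence can be sketched as follows, assuming NumPy; the map f below is a hypothetical example:

```python
import numpy as np

def f(v):
    # Some linear map from 3 space into 2 space.
    x, y, z = v
    return np.array([2 * x - y, y + 3 * z])

# Stack the images of the coordinate vectors as rows to recover the matrix.
basis = np.eye(3)
A = np.vstack([f(e) for e in basis])

v = np.array([1.0, 4.0, -2.0])
print(np.allclose(f(v), v @ A))   # True: the matrix implements f
```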

If the vector spaces are right modules, the linear function must commute with scaling on the right. Take the transpose of the matrix A, so that the columns hold the m vectors in n space. The function is now A*v, where v is a column vector on the right. Note that (A*v)*k = A*(v*k).
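A minimal numeric check of the transposed convention, assuming NumPy:

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])   # 2 vectors in 3 space, one per row
v = np.array([1, -1])       # a vector in 2 space

B = A.T                     # the columns of B now hold the two vectors
print(np.array_equal(B @ v, v @ A))   # True: same linear map either way
```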

If r is commutative, we have our choice. We can write row vectors, with f(v) = v*A, or column vectors, with f(v) = A*v.

Function Composition

If two matrices, A and B, represent linear functions that can be applied in sequence, i.e. the range of A is the domain of B, then the composition of these two functions acts on the vector v through v*A*B. (The * operator is matrix multiplication as usual.) We already proved matrix multiplication is associative, thus the action is also given by v*(A*B). The composition of two linear functions is given by the product of their matrices.
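A quick check, assuming NumPy and two hypothetical matrices of compatible shape:

```python
import numpy as np

A = np.array([[1, 0, 2],
              [3, 1, 0]])          # maps 2 space into 3 space (row convention)
B = np.array([[1, 1, 0, 2],
              [0, 2, 1, 0],
              [3, 0, 1, 1]])       # maps 3 space into 4 space
v = np.array([5, -2])

print(np.array_equal((v @ A) @ B, v @ (A @ B)))   # True: associativity
```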

If instead v is a column vector, multiplied on the right, the composition of A then B is implemented as B*A, since B*(A*v) = (B*A)*v.
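With column vectors the order reverses; a minimal sketch assuming NumPy:

```python
import numpy as np

A = np.array([[1, 0],
              [2, 1],
              [0, 3]])     # maps 2 space into 3 space (column convention)
B = np.array([[1, 1, 1],
              [0, 2, 0]])  # maps 3 space into 2 space
v = np.array([4, -1])

# Applying A first, then B, is multiplication by B*A, not A*B.
print(np.array_equal(B @ (A @ v), (B @ A) @ v))   # True
```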