Determinants, Elementary Row Operations

Elementary Row Operations

An elementary row operation on a matrix scales a row, swaps two rows, or subtracts a scaled copy of one row from another. We've already seen what these do to the determinant. Scaling a row scales the determinant, permuting the rows negates the determinant if the permutation is odd, and row subtraction leaves the determinant unchanged. Note that Gaussian elimination uses only elementary row operations.

A matrix e is elementary if e*M performs an elementary row operation on M, or if M*e performs an elementary column operation. The identity matrix is considered elementary; it doesn't change a thing.

Start with the identity matrix and replace ej,j with x. Now e*M multiplies the jth row of M by x, while M*e multiplies the jth column by x. The determinant of e is x, so det(e*M) = det(e)×det(M).

To save time, we can replace all the diagonal elements with scaling factors, and scale all the rows at once. The determinant of M is multiplied by all the scaling factors, whose product is det(e).
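
Here is a small sketch of the scaling case in Python, using sympy (my choice of tool; any exact package would do). The matrices are arbitrary illustrations.

    from sympy import Matrix, diag

    # e scales the first row by 2 and the third row by 5.
    e = diag(2, 1, 5)
    M = Matrix([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 0]])

    assert e.det() == 10                       # product of the scaling factors
    assert (e * M).det() == e.det() * M.det()  # det(e*M) = det(e) x det(M)
    assert (M * e).det() == e.det() * M.det()  # columns scale the same way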

Call e a permutation matrix if it contains exactly one 1 in each row and column, and zeros elsewhere. Write down the column numbers in the order they are used, and you get a permutation on the numbers 1 through n. This permutation is applied to the rows or columns in e*M and M*e respectively. Note that det(e) is -1 for an odd permutation, and 1 for an even permutation. Meanwhile, the determinant of M is multiplied by -1 when an odd permutation is applied to its rows or columns. Thus det(e*M) = det(e)×det(M).
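
A sketch of the permutation case, again with sympy and arbitrary example matrices:

    from sympy import Matrix

    M = Matrix([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 0]])

    # A single swap of the first two rows: an odd permutation, determinant -1.
    swap = Matrix([[0, 1, 0],
                   [1, 0, 0],
                   [0, 0, 1]])
    assert swap.det() == -1
    assert (swap * M).det() == swap.det() * M.det()

    # A 3-cycle of the rows: an even permutation, determinant +1.
    cycle = Matrix([[0, 1, 0],
                    [0, 0, 1],
                    [1, 0, 0]])
    assert cycle.det() == 1
    assert (cycle * M).det() == cycle.det() * M.det()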

Finally, e could have ones down the main diagonal and a single entry x somewhere off the diagonal. Now e*M adds x times one row of M to another, and M*e adds x times one column to another. Note that e is a triangular matrix; its determinant is 1. Thus det(e*M) = det(e)×det(M).
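
And a sketch of the row-addition case, keeping the scale factor x symbolic:

    from sympy import Matrix, symbols, expand

    x = symbols('x')
    M = Matrix([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 0]])

    # Ones down the diagonal and x below it: e*M adds x times row 1 to row 3.
    e = Matrix([[1, 0, 0],
                [0, 1, 0],
                [x, 0, 1]])

    assert e.det() == 1                          # triangular, so det is 1
    assert expand((e * M).det() - M.det()) == 0  # row addition leaves det alone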

The Determinant of the Product of Two Matrices

And now, the reason for all these elementary row operations. We would like to prove that det(AB) = det(A)×det(B), for any two square matrices A and B of the same size.

To start, assume B is a singular matrix, with determinant 0. In other words, the rows of B do not span the entire space. If A is another matrix, each row of A*B is a linear combination of the rows of B, so the space spanned by the rows of A*B lies inside the space spanned by the rows of B. This is a proper subspace, not the entire space, hence A*B is singular and its determinant is 0.
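
A quick sanity check of the singular case with sympy, on small matrices chosen for illustration:

    from sympy import Matrix

    # B is singular: its third row is the sum of the first two.
    B = Matrix([[1, 2, 3],
                [4, 5, 6],
                [5, 7, 9]])
    A = Matrix([[2, 0, 1],
                [1, 3, 0],
                [0, 1, 4]])

    assert B.det() == 0
    assert (A * B).det() == 0   # every row of A*B lies in the row space of B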

Next assume B is nonsingular, hence it has a nonzero determinant. Premultiply B by a series of elementary matrices that perform the elementary operations associated with Gaussian elimination. Thus t = e1e2e3e4…ejB, where t is the upper triangular matrix that results from Gaussian elimination. Since the determinant is nonzero, the main diagonal of t contains no zero entries.

Premultiply t by yet more elementary matrices to perform back substitution. In other words, scaled versions of the bottom row are subtracted from all the others, clearing out the last column. Then the second to last column is cleared, and so on, until a diagonal matrix is left. Write this as d = e1e2e3…ekB, where d is the diagonal matrix and the list of elementary matrices now includes those that perform back substitution. The determinant of d is the determinant of B times the determinants of these elementary matrices. Of course most of these determinants are 1; the exceptions are the matrices that swap two rows, each with determinant -1.
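
Here is a minimal sketch of that reduction with sympy. The concrete B is picked so that no row swaps are needed, and every elementary matrix used is a row subtraction with determinant 1.

    from sympy import Matrix, eye

    # A concrete nonsingular B, chosen so that no row swaps are needed.
    B = Matrix([[2, 1, 1],
                [4, 3, 3],
                [8, 7, 9]])

    elems = []        # the elementary matrices, in the order they are applied
    work = B.copy()

    # Forward elimination: clear out each column below the diagonal.
    for col in range(3):
        for row in range(col + 1, 3):
            e = eye(3)
            e[row, col] = -work[row, col] / work[col, col]
            elems.append(e)
            work = e * work

    # Back substitution: clear out each column above the diagonal.
    for col in range(2, -1, -1):
        for row in range(col):
            e = eye(3)
            e[row, col] = -work[row, col] / work[col, col]
            elems.append(e)
            work = e * work

    d = work
    assert d.is_diagonal()

    # d really is the product of the elementary matrices times B.
    P = eye(3)
    for e in elems:
        P = e * P
    assert P * B == d

    # Every elementary matrix used here has determinant 1, so det(d) = det(B).
    assert all(e.det() == 1 for e in elems)
    assert d.det() == B.det()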

Verify that every elementary matrix has an inverse, and let fi be the inverse of ei. These inverses are also elementary matrices, hence det(fi*M) = det(fi)×det(M). Inverting the product above, B = fk…f2f1d. Replace A*B with A times this string of elementary matrices times d, and fold each elementary matrix into A, one at a time, using the column versions of the rules above. At each step the determinant of the product is the product of the determinants, and the same holds for the diagonal matrix d at the end. The leftover product det(fk)×…×det(f1)×det(d) is just det(B), by the same folding applied to B = fk…f2f1d itself. When everything is multiplied together, det(A*B) = det(A)×det(B).

For any two matrices A and B, the determinant of the product is the product of the determinants. Restrict attention to nonsingular matrices, and the determinant implements a group homomorphism into the multiplicative group of the base ring.
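
The homomorphism properties can be checked directly with sympy on a couple of small invertible matrices, chosen arbitrarily:

    from sympy import Matrix, eye

    A = Matrix([[2, 1], [4, 3]])    # determinant 2
    B = Matrix([[3, 5], [1, 2]])    # determinant 1

    # det respects products, the identity, and inverses.
    assert (A * B).det() == A.det() * B.det()
    assert eye(2).det() == 1
    assert A.inv().det() == 1 / A.det()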

Zero Divisors

The above proof, which is standard in linear algebra textbooks, is somewhat unsatisfying, because the base ring has to be a field, or at least an integral domain that can be embedded in a field. This is necessary for Gaussian elimination, with its elementary row operations, to work properly. What if the base ring has zero divisors? Isn't there a more direct proof?

With some help from algebraic geometry, it is possible to parlay this result, over the complex numbers (an algebraically closed field), to any commutative ring. Consider, for instance, 4 by 4 matrices. Let the first matrix consist of 16 variables, just letters if you will, and let the second matrix consist of 16 more variables. Express det(AB) - det(A)*det(B) as a polynomial in 32 variables. Replace these 32 variables with any 32 complex numbers and the polynomial evaluates to 0. By the nullstellensatz, a polynomial that vanishes everywhere is the zero polynomial, so det(AB) - det(A)*det(B) is identically 0 as a polynomial with integer coefficients. The relationship therefore holds at an algebraic level, whatever values are substituted for the variables, as long as the base ring is commutative.

I checked this with a symbolic math package for n = 1, 2, and 3. The latter entails 18 variables, building two 3×3 matrices. Here s is the product of the determinants, and t is the determinant of the product. Sure enough, the expressions are identical.
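
A sketch of such a check in sympy (any symbolic package would do), with s and t as described above:

    from sympy import Matrix, symbols, expand

    n = 3
    A = Matrix(n, n, lambda r, c: symbols(f'a{r}{c}'))
    B = Matrix(n, n, lambda r, c: symbols(f'b{r}{c}'))

    s = expand(A.det() * B.det())    # product of the determinants
    t = expand((A * B).det())        # determinant of the product

    print(s == t)    # True: the two polynomials in 18 variables agree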

Noncommutative

The product rule fails over a noncommutative ring. Consider the 2 by 2 matrices over the quaternions. Let M = [1,i|k,j]. The determinant is 1*j-i*k, or 2j. Swap the rows, and the determinant is k*i-j*1, or 0. The swap is implemented by the elementary matrix [0,1|1,0] on the left, whose determinant is -1. If the product rule held, the determinant of the swapped matrix would be -1 × 2j, or -2j; but it is actually 0, and -2j is not 0. In fact, the determinant isn't even well defined when the base ring is noncommutative.
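
The quaternion arithmetic can be confirmed with sympy's quaternion type; the determinant here is computed as a*d - b*c, with the order of the factors fixed as written.

    from sympy.algebras.quaternion import Quaternion

    one = Quaternion(1, 0, 0, 0)
    i = Quaternion(0, 1, 0, 0)
    j = Quaternion(0, 0, 1, 0)
    k = Quaternion(0, 0, 0, 1)

    # det of [a, b | c, d], keeping the factors in the written order.
    def det2x2(a, b, c, d):
        return a * d - b * c

    print(det2x2(one, i, k, j))   # 0 + 0*i + 2*j + 0*k, i.e. 2j
    print(det2x2(k, j, one, i))   # 0 + 0*i + 0*j + 0*k, after swapping the rows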