Simple Modules, Characterizing Semisimple Rings

The Artin-Wedderburn Theorem

The Artin-Wedderburn theorem completely characterizes semisimple rings. Such a ring is represented as a finite direct product of simple artinian rings. Then the simple artinian rings are described. Finally the components are reassembled to produce R, the semisimple ring.

If a search engine has sent you here, you probably won't understand anything on this page, because I rely on a number of concepts and theorems that have been developed before. You should probably start back at the introduction.

To start the process, let's prove that the decomposition of R into simple left modules is finite.

Semisimple Rings have a Finite Decomposition

If the ring R is semisimple, it is a semisimple module by definition. Thus R is the direct sum of simple left R modules M1, M2, M3, etc.

But R isn't just a module, it's a ring, and it contains 1. Suppose the projection of 1 onto Mi is 0. Let x be any nonzero element of Mi and consider x*1. Each Mj is a left ideal, so x times the jth component of 1 lies in Mj; in particular, the Mi component of x*1 is x times 0, which is 0. Multiplication by 1 has changed x, at least in the ith component, and that's not supposed to happen. Therefore 1 has a nonzero projection in Mi, for every component Mi.

Since R is a direct sum of modules, each element of R is a finite sum of elements drawn from these modules. In particular, 1 is spanned by finitely many of the modules. Yet every module contributes a nonzero component to 1, so the number of modules in the decomposition is finite.

It's interesting to see what goes wrong with an infinite direct product. Let R be the direct product of infinitely many copies of Z3, or any other field for that matter. Verify that R is a ring, and that the components, the various copies of Z3, are simple R modules. At first it seems like R is semisimple. The submodules that come to mind are the direct products of some, but not all, of the components. If U is the direct product of the odd numbered components, then it has a summand V, the product of the even numbered components.

However, there is a submodule that you might not think of right away. Let U be the direct sum of the component rings, the elements with finitely many nonzero components. Suppose U has a complementary summand V, so that U*V = R. Let x be a nonzero member of V, with a nonzero value in the jth component. Multiply by 1j, the element that is 1 in the jth component and 0 elsewhere, to show that the jth copy of Z3 belongs to V. This simple module also belongs to U, hence U and V are not disjoint after all. U has no summand, R is not a semisimple module, and R is not a semisimple ring.
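
Here is a sketch of the key step in Python, under my own modeling assumption that an element of the infinite product is a default value together with finitely many exceptional components. Multiplying by 1j always lands in the direct sum U:

```python
# An element of the product of infinitely many copies of Z3: a default
# value on all but finitely many components, plus a dict of exceptions.
# U, the direct sum, is the set of elements whose default is 0.
class Elt:
    def __init__(self, default, entries=None):
        self.default = default % 3
        self.entries = {k: v % 3 for k, v in (entries or {}).items()}
    def __getitem__(self, k):
        return self.entries.get(k, self.default)
    def __mul__(self, other):
        keys = set(self.entries) | set(other.entries)
        return Elt(self.default * other.default,
                   {k: self[k] * other[k] for k in keys})
    def in_direct_sum(self):
        return self.default == 0    # finite support

x = Elt(2, {0: 1})     # infinite support, so x lies outside U
e5 = Elt(0, {5: 1})    # the element 1j with j = 5
y = e5 * x
print(y.in_direct_sum(), y[5])   # True 2: the 5th copy of Z3 sits inside U
```

If x belonged to a complementary summand V, then y = 1j*x would too, since V is a submodule; yet y lies in U, the collision described above.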

It is easy to build a semisimple ring, e.g. Zp*Zq, but must every semisimple ring be a direct product of simple rings? Let's take a closer look.

Right Identity Elements

Let Mi be the ith component, the ith left module in a semisimple ring R. Let ei be the ith component of 1. We already showed ei is nonzero.

If x is an element of Mi, then x*1 = x, thus x*ei = x in Mi, and ei is the right multiplicative identity inside Mi.

If y is in some other module Mj, then y*1 = y, and comparing components, y*ei = 0. The "other" components are driven to zero. Thus right multiplication by ei implements a projection operation, extracting the ith component and leaving everything else on the cutting room floor.

If R is commutative then each ei is the (two sided) multiplicative identity for its module. Each component becomes a simple ring. Since simple commutative rings are fields, R is the finite direct product of fields. This was illustrated by our example Zp*Zq. But what about noncommutative rings?
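
A tiny model of this in Python, taking p = 3 and q = 5 (my own example):

```python
p, q = 3, 5
mul = lambda u, v: ((u[0] * v[0]) % p, (u[1] * v[1]) % q)

one = (1, 1)              # the identity of R = Z3 * Z5
e1, e2 = (1, 0), (0, 1)   # projections of 1 onto the two components

x = (2, 4)
print(mul(x, e1), mul(e1, x))   # (2, 0) both ways: e1 is two sided here
print(mul(e1, e2))              # (0, 0): the projections are orthogonal
```

Each ei is a two sided identity for its field, exactly as the commutative case demands.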

If ei fails to be a left identity, then eix is not equal to x for some x in Mi. Let d be the difference x-eix. For any y in Mi, yei = y, hence yx = (yei)x = y(eix), and yd = y(x-eix) = 0. All of Mi kills d.

Since 1*d = d, and d is nonzero, not everything kills d. Note that ei is idempotent, so eid = eix-ei(eix) = 0. Let dj = ejd for each j ≠ i, and note that the dj sum to d. Here dj is the spillover from Mj into Mi.

Write ejdj = ejejd = ejd = dj. If y is in Mj, yd = yejd = ydj.
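
To see spillover in action, here is a sketch in Python (my own illustration): take R to be the 2×2 matrices over Q, decomposed into its two columns, with the ei the diagonal matrix units. Then e1 is a right identity on the first column M1, but not a left identity, and the difference d is killed by all of M1:

```python
import numpy as np

# R = 2x2 matrices, decomposed into left ideals: M1 = first column,
# M2 = second column.  The projections of 1 are the diagonal units.
e1 = np.array([[1, 0], [0, 0]])
e2 = np.array([[0, 0], [0, 1]])

x = np.array([[2, 0], [5, 0]])    # an element of M1

print(np.array_equal(x @ e1, x))  # True: e1 is a right identity on M1
print(np.array_equal(e1 @ x, x))  # False: e1 is not a left identity

d = x - e1 @ x                    # d = x - e1x, still inside M1
y = np.array([[3, 0], [7, 0]])    # any element of M1
print(y @ d)                      # the zero matrix: all of M1 kills d
print(np.array_equal(e2 @ d, d))  # True: d2 = e2d is the spillover, d2 = d
```

The two columns here are isomorphic left modules, consistent with the fence described next.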

Although spillover can occur, we can put a fence around it. In particular, spillover can only occur between isomorphic modules.

Building Blocks

The order of the components isn't important, so group like components together. If M1, M2, and M3 are isomorphic, as left R modules, let B1 be the left R module, the left ideal, spanned by these three simple modules. This is called a block.

Build a block for each cluster of isomorphic modules and note that R is the direct sum of these blocks. Each block is a semisimple R module, and each block is generated by the projection of 1 onto that block. In our example, B1 is generated by e1+e2+e3. We will show that each Bi is a two sided ideal.

Review the ring of endomorphisms of R. An element x in R implements an R endomorphism from R into R via right multiplication by x. Being a module homomorphism, it carries M2 to another submodule inside R. The kernel has to be 0 or M2, since M2 is simple, hence the image is 0, or something isomorphic to M2. The image could be M3, for example. But suppose the image is not in B1. The image is simple, so its intersection with B1 is either everything or 0; since the image lies outside B1, the intersection is 0, and the image is disjoint from B1. Let S be the submodule spanned by M1, M2, M3, and the image of M2. These are all independent modules. Since R is semisimple, S is a summand of R; let T satisfy R = S*T. Now T is semisimple, the direct sum of simple modules. Put these simple modules together with those of S and write R as the product of simple modules, with at least four copies of M1. Yet our original decomposition only had three. This contradicts Jordan-Hölder, hence the image of M2 lies in B1 after all.

Every x drives M1, M2, and M3 into B1, and since B1 is entirely spanned by these submodules, right multiplication by x maps B1 into itself. Thus B1 is a right ideal, hence a two sided ideal.

Each block is a Ring

We already know from the above that x1+x2+x3 times e1+e2+e3 = x1+x2+x3. Now consider e1+e2+e3 times x1+x2+x3. We know that 1 times x1+x2+x3 equals itself. The element e4 is part of another block, say B2, and e4 times x1+x2+x3 lies in B2, a two sided ideal; it also lies in B1, a left ideal, and the two ideals intersect in 0, so it is 0. This holds for each ej beyond 3. Thus only e1+e2+e3 contributes, the product has to be x1+x2+x3, and we have found our multiplicative identity for B1. This makes B1 a ring.

In the same fashion, B2 is a ring, and B3 is a ring, and so on. Each Bj is a left and right R module, and a left and right Bj module. Also, the product of elements from different blocks is 0. After all, xy has to belong to both ideals simultaneously, and the ideals are disjoint. Thus the product of two elements in R can be evaluated block by block, and R is the direct product of the rings Bj.
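
Here is a concrete model in Python, under my own assumption that R is the block diagonal 3×3 matrices, a copy of the 2×2 matrices and a copy of the 1×1 matrices glued along the diagonal:

```python
import numpy as np

# R = block diagonal 3x3 matrices: B1 is the upper 2x2 block,
# B2 the lower 1x1 block.
def embed(b1, b2):
    r = np.zeros((3, 3), dtype=int)
    r[:2, :2] = b1
    r[2, 2] = b2
    return r

x = embed(np.array([[1, 2], [3, 4]]), 0)    # an element of B1
y = embed(np.zeros((2, 2), dtype=int), 5)   # an element of B2
eB1 = embed(np.eye(2, dtype=int), 0)        # projection of 1 onto B1

print(x @ y)                                # zero: products across blocks die
print(np.array_equal(eB1 @ x, x),
      np.array_equal(x @ eB1, x))           # True True: eB1 is B1's identity
```

Multiplication proceeds block by block, and R is the direct product of the two blocks, just as described.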

Direct Product of Simple Rings

If B1 is not a simple ring, it has a proper nonzero ideal H. Using semisimplicity, write B1 as H*G. By Jordan-Hölder, H factors into simple left modules isomorphic to M1, and so does G. In this case there are 3 in total; perhaps H has two and G has one. Now 1 (in R) projects onto H and G, giving eH and eG. Show that eH is an identity element for H, and after some more algebra, which I will leave to you, eG becomes an identity element for G. In other words, G is also an ideal.

Both G and H are rings, and if they aren't simple, we can repeat the process. The process terminates, since R has finite length. Finally R is the finite direct product of simple rings.

Simple Artinian Ring is Semisimple

In the above, we started with a semisimple ring R and built a block B, an ideal in R, which is a ring, a semisimple R module, and a B module. But that doesn't ensure B is a semisimple B module. We don't know that B is a left semisimple ring, only that it is a simple ring. Now we will prove it is semisimple.

In fact the proof is more general, for any simple left artinian ring R is semisimple.

Does this help? Do we know that B is left artinian? R is a finite direct sum of simple modules, hence a left R module of finite length, hence left artinian. Since B is a submodule of R, B is a left artinian R module. Is it a left artinian B module? Suppose it contains an infinite descending chain of B modules. Since B is a summand of R, B is the homomorphic image of R, and every B module is also an R module: let R act on the module by projecting onto B, then applying the action of B. The infinite descending chain of B modules becomes an infinite descending chain of R modules, so B is not a left artinian R module, which is a contradiction. Thus B is indeed a left artinian B module, and a left artinian ring. Now, on with the proof.

Let R be a simple left artinian ring. We're going to call upon a powerful theorem from the world of Jacobson radicals. In particular, jac(R) is a proper two sided ideal, and since R is simple, this ideal is 0. In other words, R is Jacobson semisimple. Now a ring that is both left artinian and Jacobson semisimple is left semisimple. Such a ring is the finite direct sum of simple left modules.

If some of these component modules are not isomorphic, group them into blocks, as we did above. Each block is a proper nonzero ideal, yet R has no proper nonzero ideals, so there is just one block, and R is the finite direct sum of copies of a single simple left R module.

Matrices over a Division Ring

If we can characterize simple left artinian rings, then we have characterized the left semisimple ring R, for R is the finite direct product of simple left artinian rings.

Let's look at an example. Let R be the ring of n×n matrices over a division ring D. We will show that R is simple and left artinian. Such a ring can act as a component of a larger, semisimple ring.

Let x be a nonzero matrix in R. Premultiply by a matrix that is all zeros except for a 1 somewhere in the top row. This extracts one of the rows of x and moves it to the top. Postmultiply by a matrix that is all zeros except for a 1 in the first column. This extracts an entry from the top row and moves it to the upper left. Thus the ideal containing x includes a matrix that is zero except for some nonzero value in the upper left. Scale by the inverse of this element to put 1 in the upper left. Permutation matrices can be used to move this 1 to any position in the grid. The resulting collection of n^2 matrix units spans the entire ring. If an ideal is nonzero it is all of R, hence R is a simple ring.
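
These extractions are easy to play with; here is a sketch in Python with numpy, using 0-based indices and my own choice of x, a single nonzero entry 5 at position (1, 2):

```python
import numpy as np
from fractions import Fraction

def unit(i, j, n=3):
    # The matrix unit Eij: 1 in position (i, j), zeros elsewhere.
    e = np.zeros((n, n), dtype=object)
    e[i, j] = 1
    return e

x = 5 * unit(1, 2)              # a nonzero matrix in R

# Premultiply by E01 to lift row 1 to the top; postmultiply by E20 to
# pull column 2 to the left.  The nonzero entry lands in the upper left.
y = unit(0, 1) @ x @ unit(2, 0)
e00 = Fraction(1, 5) * y        # scale by the inverse to get E00
print(e00)

# Permutation matrices now move this 1 anywhere, so the two sided ideal
# generated by x contains every matrix unit, and hence all of R.
```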

Let's look at the left ideal generated by x. Again, we can extract any row and move it to the top, then scale it by anything in D. If x has three nonzero rows, the top row of our new matrix can be any linear combination of these three rows. And we can do the same for the second row, and the third, and so on. Indeed, premultiplication by a matrix y puts a prescribed linear combination of the rows of x into the kth row of the product, as dictated by the entries in the kth row of y. The matrices generated by x have rows drawn from the row space of x, and all such matrices can be produced. That's what happens when a left ideal is principal, generated by x.
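
A quick numerical check of the row space claim, with sympy (my own example):

```python
from sympy import Matrix

x = Matrix([[1, 2, 0],
            [0, 1, 1],
            [0, 0, 0]])     # rank 2: its rows span a plane
y = Matrix([[3, 1, 4],
            [1, 5, 9],
            [2, 6, 5]])     # an arbitrary ring element

p = y * x                   # a typical member of the left ideal Rx
# Stacking p under x does not raise the rank, so every row of y*x
# lies in the row space of x.
print(x.rank(), x.col_join(p).rank())   # 2 2
```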

Inside a left ideal H, every matrix presents rows in n space. Any of these rows can be extracted, moved to the top, scaled, and combined. Therefore H is characterized by a subspace of an n dimensional vector space over D. Each row of each matrix is drawn from this subspace, and all such matrices are present.

A strictly descending chain of left ideals is a chain of subspaces of decreasing dimension. Such a chain has length at most n, or n+1 if you count 0. Therefore R is left noetherian and left artinian. It is also right noetherian and right artinian, for a right ideal consists of matrices whose columns are drawn from a subspace of n dimensional space.

As it turns out, all simple left artinian rings are isomorphic to matrices over a division ring. This is the characterization of a simple left artinian ring, and the direct product of these rings builds a left semisimple ring.

Analyzing the Simple Artinian Ring

Let R be simple and left artinian. Write R as a finite product of simple left R modules, all of them isomorphic. Let M be one of these R modules, hence R is M^n, the direct sum of n copies of M. The value of n is well defined, established by Jordan-Hölder. It corresponds to the n in the n×n matrices described above.

Let D be the ring of endomorphisms of M. Let e be a nonzero element of D, an endomorphism of M. Since M is simple the kernel must be 0, and the image must be all of M. In other words, e is a module automorphism, and is invertible. Every nonzero element of D is invertible, and D is a division ring. (This is Schur's lemma.) Is this the same D we saw earlier?

In the world of n×n matrices, the simple module M could be the matrices that are zero except for the last column. Let G be the matrix that is zero except for 1 in the upper right. Note that G generates M.

An endomorphism could scale G, and everything in M, by any x in D. This produces a valid endomorphism on M. But what if e does more than scale G by x? What if it creates new entries down the rightmost column? Perhaps e(G) has a 1 in the first two positions. Let C be the matrix with 1 and -1 in the first two entries of the top row, and zeros elsewhere. Then CG = G is nonzero, yet e(CG) = Ce(G) = 0. Thus e maps a nonzero element to 0, which is a contradiction. One can build a similar matrix C for any endomorphism that spreads G down the rightmost column.

Even if e moves the nonzero entry of G down the column, we have a problem. Suppose e(G) moves the 1 down to the second row. Let C have a 1 in the upper left and zeros elsewhere. Thus CG = G is nonzero, while e(CG) = Ce(G) = 0, the same contradiction as before.

The only valid endomorphisms scale by something in D, and the result is the same division ring we used to build our matrices. Both n and D are well defined.
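
Here is a small check of both behaviors in Python, with n = 2 (my own illustration): scaling by an element of D commutes with the left action of R, while a map that shifts the column down does not:

```python
import numpy as np

n = 2
G = np.array([[0, 1], [0, 0]])   # 1 in the upper right; G generates M
C = np.array([[1, 0], [0, 0]])   # 1 in the upper left

scale = lambda m: m * 7          # a legal endomorphism: scale by 7
shift = lambda m: np.vstack([np.zeros((1, n), dtype=int), m[:-1]])
                                 # an illegal map: push the column down

print(np.array_equal(scale(C @ G), C @ scale(G)))  # True: commutes with R
print(np.array_equal(shift(C @ G), C @ shift(G)))  # False: shift(CG) != 0
                                                   # but C @ shift(G) == 0
```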

We will analyze R by looking at the ring of endomorphisms of R. The two structures are isomorphic.

Write R as the direct sum M1*M2*…*Mn, where each module Mj is isomorphic to M1. Now an element x in R determines, and is determined by, its projections xj in Mj. Using this, an endomorphism becomes n component homomorphisms from R into each Mj. Conversely, any collection of homomorphisms into each Mj respects addition and scaling, and builds an R endomorphism on R.

Let ej be the right identity element for Mj, and recall that ei generates Mi. A module homomorphism into Mj determines, and is determined by, the image of each ei in Mj. These restrictions are the module homomorphisms from Mi into Mj.

Remember that these modules are simple, so a module homomorphism from one to another is either trivial, or it is an isomorphism.

Remember that M1 through Mn are all isomorphic. Fix an isomorphism between M1 and each Mj. Given an isomorphism from Mi onto Mj, precede it with the map from M1 onto Mi and follow it with the map from Mj back onto M1; the composite is an endomorphism of M1. This process can be reversed, so a nontrivial endomorphism of M1 implies an isomorphism from Mi onto Mj. Finally, the trivial map from Mi into Mj corresponds to the trivial endomorphism of M1. As a set, the homomorphisms from Mi into Mj correspond to D.

Add homomorphisms from Mi into Mj, and the corresponding endomorphisms on M1 are added. The representation of component homomorphisms as elements of D respects addition.

A complete R endomorphism x can be described by a matrix over D, where xi,j indicates the R module homomorphism from Mi into Mj. As a set, the endomorphisms of R correspond to the n×n matrices over D.

Let x and y represent R endomorphisms and consider the matrix x+y. Each entry adds the two homomorphisms from Mi into Mj. This in turn adds the endomorphisms of R. Matrix addition corresponds to addition in the ring of endomorphisms of R.

Consider the composition of the endomorphisms represented by x and y. Can we describe the resulting homomorphism from Mi into Mj? Let x carry Mi into each possible Mk, then let y carry Mk into Mj. But how do we combine two component homomorphisms?

Let s be the element in D that represents x from Mi into Mk. In other words, s = xi,k. Now the homomorphism is really the isomorphism from Mi onto M1, then s, then the map back to Mk. If t, another element in D, represents the action of y from Mk into Mj, then this map runs from Mk to M1, through t, and back to Mj. The paths between M1 and Mk cancel, and s and t can be combined to produce st. Thus st represents the composite homomorphism from Mi through Mk into Mj. This is added up over k. The ith row of the matrix x is dotted with the jth column of the matrix y to produce a value in D that represents the map from Mi into Mj. This is the definition of matrix multiplication.

Therefore, matrix multiplication corresponds to endomorphism composition. Combine this with addition, and the ring of matrices is isomorphic to the ring of endomorphisms of R, which is isomorphic to R. Our simple ring R is the n×n matrices over D. Since n and D are determined uniquely from R, there is one simple left artinian ring, up to isomorphism, for each division ring D and each positive integer n. The simple left artinian rings have been classified.
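
As a sanity check, here is a quick numerical sketch in Python (my own illustration): model each endomorphism of R as right multiplication by a fixed matrix. Applying x first and then y is right multiplication by the product xy, and each such map commutes with the left action of R, as a module endomorphism must:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
x, y, r = (rng.integers(-5, 5, (n, n)) for _ in range(3))

# An endomorphism of R, as a left module over itself, is right
# multiplication by a fixed matrix.
f = lambda m, a: m @ a

print(np.array_equal(f(f(r, x), y), f(r, x @ y)))   # True: composition
                                                    # is the matrix product
s = rng.integers(-5, 5, (n, n))
print(np.array_equal(s @ f(r, x), f(s @ r, x)))     # True: commutes with
                                                    # the left action
```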

Symmetry

Notice that R is also right artinian, and noetherian. In other words, a simple ring that is artinian from either side is artinian and noetherian from both sides. It is the ring of matrices over D.

A left semisimple ring is the direct product of these rings, and so is a right semisimple ring. Either way the components are matrices over division rings. Therefore R is left semisimple iff it is right semisimple. Given this, you'll understand if I sometimes say R is semisimple, without specifying left or right.

A semisimple ring, which is both left and right semisimple after all, is left and right artinian and noetherian.

Center

If R is a simple ring, and x is a nonzero element in the center of R, then the ideal generated by x is all of R, and in particular spans 1. Since x is central, 1 becomes a multiple of x, making x left and right invertible, i.e. a unit. The center of R is a field.

Assume R is simple and artinian, and write R as the n×n matrices over D. A copy of D exists in R, namely the scalar matrices, the identity matrix scaled by elements of D. The center of D, which is a field, lives in the center of R. Conversely, a matrix with an off diagonal entry, or with unequal diagonal entries, fails to commute with one of the matrix units, and a scalar matrix whose entry is not central in D fails to commute with some other scalar matrix. Thus the center of R is the center of D, a field inside D, which is inside R.
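
For n = 2 over the rationals, sympy can confirm the center directly (my own check):

```python
from sympy import symbols, Matrix, solve

a, b, c, d = symbols('a b c d')
A = Matrix([[a, b], [c, d]])

# Commuting with the off diagonal units E12 and E21 already pins A down.
E12 = Matrix([[0, 1], [0, 0]])
E21 = Matrix([[0, 0], [1, 0]])

eqs = list(A * E12 - E12 * A) + list(A * E21 - E21 * A)
print(solve(eqs, [a, b, c, d]))   # {b: 0, c: 0, d: a}: scalar matrices
```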

Maximal Ideals

Let R be a semisimple ring. An ideal, or left ideal, determines, and is determined by, ideals or left ideals in each of the component rings.

If H is a maximal ideal or maximal left ideal, then it fails to span 1 in at least one of the component rings. If 1 is not spanned in two separate components, adjoin 1 in one of them and find a larger, yet still proper, ideal. Therefore a maximal left ideal in R is a maximal left ideal in one of the component rings, crossed with all the other components.

What is a maximal left ideal in a component ring? We have analyzed this above. The ring is the n×n matrices over D, and a maximal left ideal corresponds to an n-1 dimensional subspace, a hyperplane in D^n; the ideal consists of the matrices whose rows all lie in the hyperplane. The hyperplane can be represented by a perpendicular vector v on the right, the matrices x with xv = 0, hence there is one maximal left ideal for each nonzero vector v, up to scaling, in each of the component rings.
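
A small check in Python with sympy, taking n = 3 over the rationals and v = (1, 2, 3) (my own example): the matrices that kill v on the right form a left ideal:

```python
from sympy import Matrix

v = Matrix([1, 2, 3])            # a nonzero column vector

# H = { x : x*v = 0 }: each row of x lies in the hyperplane
# perpendicular to v.
x = Matrix([[ 2, -1,  0],
            [ 3,  0, -1],
            [-2,  1,  0]])
assert x * v == Matrix([0, 0, 0])

y = Matrix(3, 3, range(9))       # any ring element
assert (y * x) * v == Matrix([0, 0, 0])   # y*x stays in H: a left ideal
print("H is closed under left multiplication")
```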

If R is commutative the component rings are all fields, and there is one maximal ideal for each component; a product of k fields has exactly k maximal ideals.