A ring has two operators designated + and *, and colloquially referred to as addition and multiplication.  Addition forms an abelian group, and multiplication forms a monoid.  What does this mean?  Addition is associative and commutative, 0 is the additive identity, and every x has an opposite -x.  Multiplication is associative, and 1 is the multiplicative identity.  If multiplication is commutative, the ring is commutative.

These operators do not run independently of each other.  Multiplication distributes over addition, just as it does in the integers.  Thus xa + xb = x(a+b), and ax + bx = (a+b)x.

For every x, 0*x = (x-x)*x = x*x - x*x = 0.

0 times anything is 0, as you would expect - even if the ring does not contain 1.

If 1 = 0 then x*1 = x*0 for every x, and the ring contains only 0, which isn't very interesting.  Unless stated otherwise, each ring contains 1, and 1 is different from 0.  The ring has at least two elements.  And there is a ring with precisely two elements, namely the integers mod 2, written Z/2.

Continue adding 1+1+1+… until you reach 0.  If the sum of n 1's is 0, and n is the least such count, then the ring has characteristic n.  If the multiples of 1 go on forever, the ring has characteristic 0.  Z (the integers) has characteristic 0, and Z/n (the integers mod n) has characteristic n.
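This hunt for the characteristic can be carried out mechanically.  Here is a small Python sketch; the function name and the cutoff are my own inventions, and since a ring of characteristic 0 never terminates, the search gives up after a fixed number of steps:

```python
def characteristic(add, zero, one, limit=1000):
    """Add 1 to itself until the running total reaches zero; return that
    count, or 0 if the multiples of 1 go on forever (up to the cutoff)."""
    total = one
    for n in range(1, limit + 1):
        if total == zero:
            return n
        total = add(total, one)
    return 0

# Z/12 has characteristic 12; Z (as far as the cutoff can see) has characteristic 0
assert characteristic(lambda a, b: (a + b) % 12, 0, 1) == 12
assert characteristic(lambda a, b: a + b, 0, 1) == 0
```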

If a ring R has characteristic n, add x to itself n times; by the distributive property this is (1+1+…+1)*x = 0*x = 0.  Thus the characteristic of x, the least number of copies of x that sum to 0, is a factor of the characteristic of R.  If the characteristic of R is p, for p prime, then the characteristic of every nonzero x is p across the board.  However, Z/12, which has characteristic 12, contains some elements, such as 2, that become 0 when added to themselves 6 times.  There is a little wiggle room when the characteristic is composite or 0.

Let R be the direct product of the integers mod p for every prime p.  If x = [1,1,1,0,0,0,0…], then you have to add x to itself 2×3×5 = 30 times to get back to 0.  Elements of R have arbitrarily high, yet still finite, characteristic.  R itself has characteristic 0.

Recall that we happily assigned a characteristic of p, or 0, to a field F.  If the characteristic is p, then it applies to every nonzero x in F, as described above.  If the characteristic of F is 0, then the characteristic of every nonzero x is 0, because every such x is invertible.  For any x, write xy = 1, whence (x+x)y = 2, (x+x+x)y = 3, and so on.  If x had finite characteristic m, then (mx)y = 0, yet (mx)y = m(xy) = m*1, so m copies of 1 sum to 0.  This contradicts characteristic 0.  A field or division ring has characteristic p or 0 everywhere.

A semiring is based on an additive monoid, rather than an additive group.  This is a bit of a misnomer.  You'd think a semiring would be based on a semigroup, but we really want the ring to contain 0.  The classic example is the nonnegative integers or reals under addition and multiplication.  The latter is a division semiring.

There is very little information in this book regarding semirings.  Many of the theorems on rings carry over to semirings; some do not.  You'll just have to step through them and see if the additive inverse is required.

If a semiring has characteristic n, it is automatically a ring.  Use the distributive property to show x + (n-1)x = n*x = 0, so (n-1)x serves as -x.

If the nonzero elements form a group under multiplication, then every nonzero x has an inverse 1/x, and the ring is a division ring.  If multiplication is also commutative then the ring is a field.  If xy = 1, x is the left inverse of y and y is the right inverse of x.  Clearly x and y are nonzero.

If x has a left inverse w and a right inverse y they are equal.  Write y = 1y = wxy = w(xy) = w1 = w.  In this case x is called invertible, or a unit.

The set of units forms a group under multiplication.  If R is the ring, the group of units is denoted R*.
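For a concrete instance, here is the group of units of Z/12, computed by brute force (Z/12 is my choice of example):

```python
from math import gcd

def units(n):
    """Units of Z/n: the residues that have a multiplicative inverse mod n,
    i.e. the residues coprime to n."""
    return {x for x in range(1, n) if gcd(x, n) == 1}

U = units(12)
assert U == {1, 5, 7, 11}
# closed under multiplication mod 12, and every element has an inverse in U
assert all((a * b) % 12 in U for a in U for b in U)
assert all(any((a * b) % 12 == 1 for b in U) for a in U)
```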

Elements x and y are associates if x divides y and y divides x.  Show that "associate" forms an equivalence relation.  Reflexivity holds because x = 1*x, symmetry comes from the definition itself, and transitivity (x divides y and y divides z implies x divides z) is straightforward.  Actually we need to be consistent here.  Divides always means left divides, or right divides, as you prefer.  Thus cx = y and ky = z, hence kcx = z.

We sometimes lump an element and all its associates together.  In the integers, 5 and -5 are associates.  Each one divides the other.  When factoring 45, we don't spend a lot of time worrying about 5 versus -5.  They are essentially the same prime factor; merely associates of each other.  Of course 5 does not equal -5, and sometimes the particular associate does matter.

If xy = 0, and x and y are nonzero, x is a left zero divisor and y is a right zero divisor.

Suppose x has a left inverse w, and x is a left zero divisor, with xy = 0 for some nonzero y.  Write y = 1y = (wx)y = w(xy) = w*0 = 0, which contradicts y being nonzero.  x cannot be invertible on one side and a zero divisor on the other.  A unit cannot be a zero divisor from either side.

Assume every nonzero element is left invertible.  Given a nonzero v, write uv = 1.  Now u is nonzero, hence left invertible, and right invertible via v.  Thus u is a unit, with inverse v on either side, and that makes v a unit with inverse u.  The ring becomes a division ring.

The expression 1-xy is left invertible iff 1-yx is left invertible.  Start with the latter assumption and write u*(1-yx) = 1.  Now construct the left inverse of 1-xy.  Expand (1+xuy) * (1-xy), giving 1 - xy + xuy - xuyxy.  Yet uyx = u - 1, so the expression simplifies to 1.

A similar result holds for the right inverse, using similar algebra.  If the right inverse of 1-yx is v, the right inverse of 1-xy is 1+xvy.  Expand the product and replace yxv with v-1.
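Both identities can be spot checked with matrices, where 1-xy and 1-yx genuinely differ.  A sketch over 2 by 2 matrices mod 7; the particular matrices x and y are arbitrary choices:

```python
P = 7                                   # work with 2x2 matrices over Z/7
I = [[1, 0], [0, 1]]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % P for j in range(2)]
            for i in range(2)]

def add(A, B):
    return [[(A[i][j] + B[i][j]) % P for j in range(2)] for i in range(2)]

def sub(A, B):
    return [[(A[i][j] - B[i][j]) % P for j in range(2)] for i in range(2)]

def inv(A):
    """Inverse of a 2x2 matrix mod P; the determinant must be nonzero mod P."""
    det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % P
    d = pow(det, -1, P)                 # modular inverse, Python 3.8+
    return [[A[1][1] * d % P, -A[0][1] * d % P],
            [-A[1][0] * d % P, A[0][0] * d % P]]

x = [[1, 2], [3, 4]]
y = [[0, 1], [5, 6]]
u = inv(sub(I, mul(y, x)))              # u * (1-yx) = 1
left = add(I, mul(mul(x, u), y))        # the claimed left inverse 1 + xuy
assert mul(left, sub(I, mul(x, y))) == I
```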

Rings contain 1 by default, but for a couple of paragraphs, let's assume R does not contain 1.  A left identity e satisfies xe = x for all x.  If R also has a right identity f then fe equals both e and f, and R contains 1.  If e is also a left zero divisor, with ez = 0, then xz = xez = 0, and z kills the entire ring.  If R is the 2 by 2 matrices that are 0 on the right, put 1 in the upper left to find e, a left identity in R.  Put 1 in the lower left to create z, which kills e, and kills the entire ring on its left.

There is a variation of the two sided inverse rule here.  Assume xu = uw = e.  We can show ux = e, putting x on the other side of u.  Write x(uw) = xe = x.  At the same time, (xu)w = ew, hence ew = x.  Multiply ew = x by u on the left: uew = (ue)w = uw = e, thus e = ux.  If u has a right inverse (using the term inverse loosely), then every left inverse of u is also a right inverse of u.  However, u might not have a right inverse at all.  This is shown by the 2 by 2 matrix ring described above.  Remember that e has 1 in the upper left.  Set x = e, and let u have 1's on the left.  Note that u has no right inverse at all.  As if in confirmation, xu = e, but ux is not e.
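These matrix claims are easy to verify directly.  A sketch of the ring of 2 by 2 matrices with zero right column; the sample element x is an arbitrary choice:

```python
def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

ZERO = [[0, 0], [0, 0]]
e = [[1, 0], [0, 0]]          # 1 in the upper left
z = [[0, 0], [1, 0]]          # 1 in the lower left
u = [[1, 0], [1, 0]]          # 1's on the left
x = [[3, 0], [5, 0]]          # a typical ring element

assert mul(x, e) == x         # e is a left identity: xe = x
assert mul(e, z) == ZERO      # z kills e
assert mul(x, z) == ZERO      # hence z kills the entire ring: xz = xez = 0
assert mul(e, u) == e         # with x = e, xu = e
assert mul(u, e) != e         # yet ux is not e
# u has no right inverse: u times anything in the ring has two equal rows
assert all(mul(u, [[a, 0], [b, 0]]) != e
           for a in range(-5, 6) for b in range(-5, 6))
```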

If u has a right inverse w and two different left inverses, then subtract, and find zu = 0.  Multiply by w on the right and zuw = 0, whence ze = 0 and z = 0.  The left inverse of u is unique, if it exists, and it also becomes a right inverse of u.

Thinking of x as a left and right inverse of u, consider some other right inverse w.  Let z = w-x, and write uz = 0.  Multiply by x on the left, and ez = 0.  Multiply this on the left by anything v in R: vez = (ve)z = vz = 0, so z kills all of R.  Each right inverse associated with the left inverse x is x+z, where Rz = 0.

A domain is a ring with left and right cancellation.  That is, b*x = b*y implies x = y, and x*b = y*b implies x = y, whenever b is nonzero.

An equivalent definition says there are no zero divisors.  If cancellation fails then b*(x-y) = 0, making b and x-y zero divisors.  Conversely, if there are zero divisors, b*x = b*0, and cancellation fails.
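Z/6, which is not a domain, illustrates the equivalence.  Its zero divisors are exactly the elements where cancellation breaks down:

```python
n = 6
# zero divisors of Z/6: nonzero x with x*y = 0 for some nonzero y
zd = {x for x in range(1, n) for y in range(1, n) if (x * y) % n == 0}
assert zd == {2, 3, 4}
# cancellation fails at a zero divisor: 2*1 = 2*4 mod 6, yet 1 != 4
assert (2 * 1) % n == (2 * 4) % n
```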

An integral domain is a commutative domain.  There are no zero divisors, and multiplication is commutative.  It looks somewhat like the integers.

If x and y are associates in an integral domain, write xu = y and yv = x.  Thus xuv = x, and by cancellation, uv = 1, making u and v units.  In an integral domain, each set of associates contains one element for every unit.  Each set has the same cardinality, namely the cardinality of the units.  Zero is of course an exception to the rule.  Its associates are all zero.

The integers have units 1 and -1, and every nonzero nonunit has one other associate, its opposite.

Every finite domain is a division ring, and consequently, a field.  Given a nonzero element c, multiplication by c is a map from the ring into itself.  If x and y are different, cx and cy are different by cancellation, therefore the map is injective.  An injective map from a finite set into itself is onto, so c maps some element d to 1.  This means cd = 1, hence c is right invertible.  Since c was arbitrary, all nonzero elements are right invertible, and as shown in the previous section, the ring is a division ring.  I'll prove later on that every finite division ring is a field.  Combine these results and a finite domain is a field.

R is a subring of S if R and S are rings, R is contained in S, and R contains the 1 of S.  The latter condition is significant.  Z cross 0 is not a subring of Z cross Z, even though both structures are perfectly good rings (the integers within the integers cross the integers).  The multiplicative identity in the smaller ring is [1,0], yet the multiplicative identity in the larger ring is [1,1].

Similarly, a ring extension of R is a larger ring that has the same multiplicative identity as R.

The integers, or integers mod n (if n is the characteristic of R), form a subring of R, and not just any old subring, but a subring in the center of R.  Multiply x by 3, for instance, on either side, and get x+x+x, thus x commutes with the integers in R.  We saw the same thing with fields; the base field, Q or Z/p, was the smallest subfield inside the field F.  Turn this around, and every ring is a ring extension of Z or Z/n, just as every field is a field extension of Q or Z/p.
Let H be a subgroup of R under +.  Thus x and y in H implies x+y is in H, and x in H implies -x is in H.  If x*H is in H for every x in R, H is a left ideal.  Since x might be drawn from H, H is closed under multiplication.  If H contains a left invertible v, with uv = 1, H includes xuv for every x, and H = R.  Of course if H = R then H contains 1, which is invertible.  Thus a left ideal is proper iff it contains no left invertibles.

If H*x is in H for every x in R, H is a right ideal.  If H is a left ideal and a right ideal, H is an ideal.

The intersection of arbitrarily many left ideals is another left ideal.  Let H be such an intersection and note that x*H is in every ideal in the set, hence x*H is in the intersection, and x*H is in H.  Make the same statement about the sum of two elements in H, and H is a left ideal.

Given a set S of ring elements, the "smallest" left ideal containing S is well defined.  Take the intersection of all the left ideals that contain S.  This is the left ideal generated by S, and it consists of all finite sums of all left multiples of members of S.
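In the integers, which are commutative, the ideal generated by a set is the multiples of its gcd.  A quick brute-force check with S = {12, 18}, my choice of example:

```python
from math import gcd
from functools import reduce

S = [12, 18]
g = reduce(gcd, S)                          # 6
# finite sums of left multiples of members of S, with bounded coefficients
combos = {a * 12 + b * 18 for a in range(-20, 21) for b in range(-20, 21)}
assert g == 6
assert all(c % g == 0 for c in combos)      # every combination is a multiple of 6
assert all(6 * k in combos for k in range(-10, 11))   # and small multiples appear
```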

The sum H1+H2, consisting of elements x+y for x in H1 and y in H2, is another left ideal.  It includes both H1 and H2, as in H1+0 and 0+H2.

The product of left ideals H1*H2 is the smallest left ideal containing all elements xy from H1 cross H2.  The product is contained in H2, but not necessarily in H1.

The product operator is usually applied to two sided ideals, whence the product lies in both ideals.  We can characterize the product of two ideals as all finite sums of xy, where x is from the first ideal and y is from the second.  This is indeed the smallest ideal containing the pairs xy.  Ideal multiplication is commutative if R is commutative.  Note that the product is contained in the intersection.  This is illustrated by the tower of ideals below.

H1 + H2
H1   H2
H1 ∩ H2
H1 * H2

Multiply three ideals together and the result is all finite sums of xyz, where variables are taken from their respective ideals.  This characterization shows ideal multiplication is associative.  It is not usually commutative, unless the base ring is commutative.

Review the general definition of a homomorphism.  A ring homomorphism f maps the ring R into or onto a ring S, and respects addition and multiplication.  Since + is a group operator, f is automatically a group homomorphism on addition.  With respect to *, f is a monoid homomorphism, or a group homomorphism on the nonzero elements if R is a division ring or field.

If K is the kernel of a ring homomorphism, i.e. all the elements that map to 0, verify that K is an ideal.  With x and y in K, f(x+y) = f(x) + f(y) = 0+0 = 0, hence x+y is in K; and if x is in K then f(vx) = f(v)f(x) = f(v)*0 = 0, and vx is in K.  The same argument puts xv in K, so K is a two sided ideal.

Conversely, if K is an ideal, the cosets of K in R form a quotient ring, and the coset function implements a ring homomorphism onto this canonical quotient ring.  Take a moment to show addition and multiplication are well defined on these cosets.  Addition is easy, since K is a subgroup of an abelian group, hence a normal subgroup, thus the cosets of K form a quotient group.  Or, if you prefer, let a and b represent two cosets of K, and a+u + b+v is a+b + something in K, for any u and v in K.  Multiply a+u times b+v, and the result is ab + (ub+av+uv), or ab + something in K, giving the same coset ab.

In summary, a ring homomorphism f defines an ideal K, which is the kernel of f, and conversely an ideal K defines a unique quotient ring R/K, based on the cosets of K, which looks just like the image of f.  This is completely analogous to a normal subgroup inside a larger group.  The normal subgroup is the kernel of a group homomorphism f, and it also defines a factor group, the cosets of K, that looks just like the image of f.  This should be familiar territory.

To illustrate, let R be any ring with characteristic 0, and let n be any integer > 1.  Let K be n times the elements of R, and verify that K is an ideal.  The ring homomorphism with kernel K reduces everything mod n.  If the original ring consisted of integer polynomials, for example, the quotient ring contains polynomials with coefficients ranging from 0 to n-1.

If n is composite, with prime factor p, a second homomorphism can be applied.  The kernel contains all multiples of p, and the ring is reduced mod p.

There is a catch.  If n is a unit, as occurs in the rationals, then the ideal generated by n is the entire ring.  The quotient ring is then not technically a ring, because it is entirely 0, and 0 is supposed to be different from 1.  You can take the quotient group G/G and get a group e, the trivial group, but the quotient ring R/R isn't really a ring, or so we say in this book.

After 23 chapters, the notation Z/7 for the integers mod 7 is finally justified.  Let 7 generate an ideal within the integers, that is, all the multiples of 7.  Mod out by this ideal to get a quotient ring.  In other words, Z/(7Z) is the integers mod the multiples of 7, or the integers mod 7.  Some books do write it as Z/7Z, or Z/{7} (the ideal generated by 7), and those are fine, but I shorten it to Z/7.

R is an ideal of R, and 0 is an ideal of R, just as G and e are normal subgroups of G.

R is a simple ring if its only ideals are R and 0, just as G is a simple group if its only normal subgroups are G and e.

A ring monomorphism is injective, with a kernel of 0.  This is also called an embedding of one ring into another.  A ring isomorphism is injective and onto, and is essentially a relabeling of the ring elements.  A ring automorphism maps a ring faithfully onto itself.  All these definitions come from their counterparts in group theory.

If the characteristic of R is p, where p is prime, and R is commutative, the ring homomorphism f(x) = x^p is valid, and is called the Frobenius homomorphism.  Obviously f respects multiplication.  Use the binomial theorem, and the fact that all the intermediate binomial coefficients are divisible by p, to show (x+y)^p = x^p + y^p.  This homomorphism is sometimes an automorphism, whence it is called the Frobenius automorphism.
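On Z/p itself the Frobenius map is easy to test; by Fermat's little theorem it is actually the identity automorphism there, and only becomes interesting on larger rings of characteristic p:

```python
p = 5
# f(x) = x^p respects addition: (x+y)^p = x^p + y^p mod p
assert all(pow(x + y, p, p) == (pow(x, p, p) + pow(y, p, p)) % p
           for x in range(p) for y in range(p))
# and on Z/p, x^p = x (Fermat), so Frobenius is the identity map
assert all(pow(x, p, p) == x for x in range(p))
```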

The correspondence theorems apply to rings, as well as groups.  Left ideals, right ideals, ideals, and subrings carry forward from R to S, where S is the homomorphic image of R, and backward from S to R.  (The correspondence is 1 for 1 for those ideals of R that contain K; each becomes a unique ideal in S.)  Addition is a group, so this correspondence is already established by group theory.  We only need verify closure under multiplication.  Let x lie in the left ideal H, with f(x) = y in S.  Consider vy in S, and select any u that maps to v.  Now ux lies in H, hence f(ux) is in the image of H, hence vy is in the image of H, hence H maps to a left ideal in S.

The reverse map, from S back to R, holds even if f maps R into S, rather than onto S.  Restrict S to f(R) and find a smaller ideal or subring; then pull this back to its preimage in R.  However, the correspondence is no longer one for one.  There may be many ideals in S that pull back to the same ideal in R containing K.  Let R be the integers and let S be the polynomials with integer coefficients mod 7.  Map R into S by reducing the integers mod 7.  Thus R maps onto the constant polynomials of S.  Pick any polynomial with degree > 1 and let it generate an ideal in S.  Intersect this ideal with the constants and find only 0.  The preimage of 0 is K in R, which is the multiples of 7.  Thus many different ideals in S pull back to K in R.

The preimage of the image of an ideal can be larger than the original ideal, if that ideal does not contain all of K.  In the above example, map 0 onto 0, then pull back to the multiples of 7, which is larger.  We saw this before from group theory.

Recall that the product of two ideals consists of all finite sums of xy, where x comes from the first ideal and y comes from the second.  Let H1 and H2 be ideals, with images J1 and J2.  Take x from H1 and y from H2, and their product maps to something in J1 times something in J2.  Conversely, uv, from J1*J2, comes from some xy in H1*H2.  Hence the image of the product of two ideals is the product of their images.  This extends to the product of finitely many ideals.

Run the above in reverse.  Let H1 be the preimage of J1 and let H2 be the preimage of J2.  Let xy map to uv, which lies in the product J1*J2, hence the product of the preimages is contained in the preimage of the product.  However, containment could be proper.  Let f map Z onto Z/7 by reducing the integers mod 7.  The kernel of f is the multiples of 7.  Multiply the kernel by itself and get the multiples of 49.  This is the square of the preimage of 0 in Z/7.  However, the preimage of the square of 0 is all multiples of 7.

The ideals of a ring R form a monoid under multiplication.  Multiplication of ideals is associative: (H1H2)H3 = H1(H2H3), because the same holds for elements xyz drawn from these three ideals.  R is the multiplicative identity: R*H = H.

A ring homomorphism f from R onto S induces a monoid homomorphism from the ideals of R onto the ideals of S.  This is because f respects H1*H2, the product of two ideals, as described above.

This is an example of a covariant functor in category theory.  The category upstairs is rings and ring epimorphisms.  The category downstairs is monoids and monoid epimorphisms.  The functor transforms a ring into a set of ideals that can be meaningfully multiplied together.  The ring homomorphism becomes a homomorphism on ideals that respects multiplication.  The only thing remaining is to verify composition.  If f and g are two successive ring homomorphisms, then fg, acting on ideals, is the same as f acting on ideals, followed by g acting on ideals.  It's all function composition, upstairs and down, so the functor is compatible with composition, as it should be.

Let R be a field or a division ring, and let S be the n by n matrices over R.  Note that R embeds in S, as scalar multiples of the identity matrix.  Thus R is a subring of S.

Let H be a nonzero ideal in S.  Select a matrix in H with some nonzero x in row i column j.  Premultiply by a matrix that is all zeros except for 1 in row 1 column i.  This extracts row i and moves it up to the top.  Postmultiply by a matrix that is all zeros except for 1 in row j column 1.  This extracts column j and moves it to the left.  You now have x alone in position 1,1.  Multiply by 1/x to get 1.  Permute rows and columns to move 1 into any position on the main diagonal.  Add up all these ones to get the identity matrix, which is 1 in S.  Therefore H is all of S, the entire ring.  The only ideals are S and 0.  S is a simple ring.
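The extraction step is mechanical enough to script.  A sketch mod 5, isolating the entry in row 1, column 2 (0-indexed) of a sample matrix:

```python
P = 5                                       # work over Z/5

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) % P for j in range(n)]
            for i in range(n)]

def unit_matrix(n, r, c):
    """All zeros except a single 1 in row r, column c."""
    return [[1 if (i, j) == (r, c) else 0 for j in range(n)] for i in range(n)]

M = [[0, 0, 0], [0, 0, 3], [0, 0, 0]]       # nonzero x = 3 at row 1, column 2
# premultiply to pull row 1 up to the top, postmultiply to pull column 2 left
E = mul(mul(unit_matrix(3, 0, 1), M), unit_matrix(3, 2, 0))
assert E == [[3, 0, 0], [0, 0, 0], [0, 0, 0]]   # x alone in the corner
```

From here, scaling by pow(3, -1, 5) puts 1 in the corner, and permutation matrices move it anywhere on the diagonal, just as the text describes.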

Let's look at the left ideals of S.  If M is a matrix in H, premultiplication by a matrix replaces each row of M with a linear combination of rows of M.  Thus H contains matrices whose rows are any and all rows spanned by the rows of M.  Bring in another matrix and join the two vector spaces together.  This gives a possibly larger vector space, whose vectors can be combined, as you like, to build the matrices of H.  Therefore the left ideals correspond to the subspaces of n dimensional space.  If R is the reals and n = 3, select any subspace of R^3: the origin, a line, a plane, or space itself.  The origin leads to the zero ideal.  All space allows for any vector to be placed in any row of M, thus the entire ring.  For an intermediate ideal, consider the line x = y = z.  This is the vector [1,1,1], and scale multiples thereof.  Let these fill the rows of a matrix, as illustrated below.

1  1  1        4  4  4
3  3  3        0  0  0
7  7  7       -5 -5 -5

Premultiply by any matrix and get a matrix of the same form, hence a left ideal.  Right ideals are also based on subspaces, using columns rather than rows.

A chain of left ideals in S corresponds to a chain of subspaces in n dimensional space.  Example: the origin, inside a line, inside a plane, inside 3-space.  In general, a maximal chain of left ideals has length n+1, and if you set aside the zero ideal, this is the dimension of the vector space that the matrices operate on.

A maximal left ideal corresponds to a subspace of dimension n-1.

Take one step back, and let R be any ring, rather than a field.  Let S be the n by n matrices over R as above.  If H is an ideal of R, verify that the matrices over H form an ideal in S.  Do all the ideals of S come from ideals in R?

Given an ideal in S, let H be the union of all the nonzero entries in all the matrices in this ideal.  For any x in H, find x in some matrix, and pre and post multiply by extraction matrices, as shown above, to isolate x and leave it in row 1 column 1.  Thus everything in H appears in the upper left, all by itself.  Add and scale these matrices to show H is an ideal of R.  Then use permutation matrices to move the elements of H to any location, and add these matrices together to build any matrix over H.  Therefore the ideals of S and the ideals of R correspond 1 for 1.

If R is simple then S is simple, which is exactly what we saw when R was a field or division ring.

We now have the machinery to count the quaternion primes over a prime p.  Since 2+i+j+k times 2-i-j-k = 7, 2+i+j+k is a "prime" (stretching the definition just a bit) that lies over 7.  There are others of course, such as 1-2i-j+k.

There are probably hundreds of primes over 997, but what if there are none?  What if there is no a^2 + b^2 + c^2 + d^2 = 997?  What if 997 is inert?

Let p be prime and let l be a quaternion such that l times its conjugate is p.  Thus l is a quaternion prime lying over p.  Note that l is not a unit, nor is it an associate of p; it is a proper factor of p.  In fact its norm has to be p, the norm being l times its conjugate.

Let H be the left ideal generated by l in the ring of integer quaternions.  Since l is not a unit, H is a proper left ideal that does not contain 1.

Let J be the ideal generated by p.  We could call J a left ideal, but p commutes with every quaternion, so J is also an ideal.  Since l is a factor of p, J is an ideal that is properly contained in H.

Mod out by J, and by correspondence, H/p is a left ideal in the quaternions mod p.  In fact H/p is a proper nonzero left ideal in this ring.  This gives a map from the quaternions lying over p, (up to associates), into the left ideals of the quaternions mod p.

Now reverse this map.  Let H be a proper nonzero left ideal in the quaternions mod p, or equivalently, by correspondence, a proper left ideal that properly contains J.  Extend this to the half quaternions, so that H becomes principal, generated by some l.  Choose an associate of l that has integer coefficients.

Everything in H is a left multiple of l, perhaps by a half quaternion.  Write p = kl and take norms.  Since H is proper, l is not a unit, so the norm of l is not 1.  Since H is larger than J, l is not an associate of p, so the norm of l is not p^2.  The norm of l is p, and l times its conjugate is p.  Therefore k is the conjugate of l, and l becomes a prime lying over p.

If H contains some other prime m over p, then l generates m.  Write m = kl and take norms.  Since |m| and |l| are both p, |k| = 1, k is a unit, and l and m are associates.  The left ideal H/p pulls back to a unique prime l over p, up to associates.  The quaternion primes lying over p correspond to the proper left ideals in the quaternions mod p.  The problem has been transformed into a statement about left ideals in a ring.

What do we know about the quaternions mod p?  For p odd, this ring is isomorphic to the ring of 2 by 2 matrices mod p.  Call this matrix ring R and look for left ideals.

As shown in the previous section, left ideals correspond to the subspaces of 2 dimensional space over Z/p.  The subspace must be proper, and nonzero.  This is a line in the plane.  The slope of the line is 0 through p-1, or infinity (for the y axis).  Therefore there are p+1 such subspaces living in 2 dimensional space, p+1 proper nonzero left ideals in the 2 by 2 matrix ring mod p, and p+1 primes lying over p.
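The count p+1 can be confirmed by brute force.  Each quaternion prime over p accounts for 8 quadruples of norm p, one per unit ±1, ±i, ±j, ±k, so counting integer solutions of a^2 + b^2 + c^2 + d^2 = p should give 8*(p+1).  A sketch:

```python
def count_norms(p):
    """Count integer quadruples (a, b, c, d) with a^2+b^2+c^2+d^2 = p."""
    r = int(p ** 0.5) + 1
    return sum(1 for a in range(-r, r + 1) for b in range(-r, r + 1)
                 for c in range(-r, r + 1) for d in range(-r, r + 1)
                 if a*a + b*b + c*c + d*d == p)

# 8 unit multiples per prime, p+1 primes over each odd prime p
for p in (3, 7, 13, 23):
    assert count_norms(p) == 8 * (p + 1)
```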

Review the example from chapter 1, where p = 7.  There are 8 quaternion primes over p, grouped into 4 pairs by conjugates.  This agrees with our formula: 8 = 7+1.  A computer search confirms 24 primes when p = 23.

If n is divisible by 9, and n/9 is the sum of 4 squares, then write n/9 = a^2 + b^2 + c^2 + d^2, multiply a, b, c, and d by 3, and n is the sum of 4 squares.  It is enough to prove this theorem for n squarefree.

There are many quaternions over a prime p, but we only need one.  With a^2 + b^2 + c^2 + d^2 = p, p has been written as a sum of 4 squares.

This assumes p is odd, but if p = 2 you can use 1+i.

If p and q are two primes, such as 7 and 11, and if u has norm p and v has norm q, then uv has norm pq.  This allows pq to be written as a sum of 4 squares.
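The composition step is just quaternion multiplication.  A sketch using 7 = 2^2+1^2+1^2+1^2 and 11 = 3^2+1^2+1^2+0^2; these particular representations are my choices:

```python
def qmul(q, r):
    """Multiply quaternions represented as (a, b, c, d) = a + bi + cj + dk."""
    a, b, c, d = q
    e, f, g, h = r
    return (a*e - b*f - c*g - d*h,
            a*f + b*e + c*h - d*g,
            a*g - b*h + c*e + d*f,
            a*h + b*g - c*f + d*e)

def norm(q):
    return sum(t * t for t in q)

u = (2, 1, 1, 1)                  # norm 7
v = (3, 1, 1, 0)                  # norm 11
w = qmul(u, v)
assert norm(u) == 7 and norm(v) == 11
assert norm(w) == 77              # w writes 77 as a sum of four squares
```

Here w comes out to (4, 4, 6, 3), so 77 = 16 + 16 + 36 + 9.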

Given an integer n, write n as a product of primes, put a quaternion over each prime, multiply these quaternions together, and find a quaternion over n.  Thus every positive integer is the sum of four squares.

The Chinese remainder theorem was developed for modular arithmetic, but it generalizes to ideals in a commutative ring R.

Let H1 H2 … Hn be a set of pairwise coprime ideals.  Coprime means the sum of any two of these ideals spans the entire ring.

Refer back to the integers.  Let H1 be the multiples of p and let H2 be the multiples of q, for two different primes p and q.  Remember that some linear combination of p and q yields 1.  Therefore these two primes span all of Z.  In the same way, coprime ideals span 1, and thus the entire ring.  Some x+y = 1, for x in H1 and y in H2, and thus H1 + H2 = R.

Let J be the product of all these coprime ideals.  We will prove R/J is isomorphic to the direct product of the quotient rings R/Hi, as i runs from 1 to n.  Once again, this is inspired by modular math, where m is a product of primes, and the integers mod m is the direct product of the integers mod p for the primes p that divide m.
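For the integers the theorem can be checked directly.  With m = 30 = 2*3*5, the map x -> (x mod 2, x mod 3, x mod 5) should be a bijection from Z/30 onto the direct product:

```python
mods = [2, 3, 5]
m = 2 * 3 * 5
images = {tuple(x % n for n in mods) for x in range(m)}
# 30 distinct triples out of 2*3*5 = 30 possible: injective, hence onto
assert len(images) == m
```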

An element in R/J can be mapped to the ith component in the direct product via R/Hi.  This is a well defined ring homomorphism, since each Hi wholly contains J.  It is an example of correspondence.  We need to show this homomorphism is injective and onto.

Focus on H1.  Being coprime, there is some x in H1 and y in H2 with x + y = 1.  Do the same for H1 paired with each other Hi in the set.  Multiply all these equations together, and something in H1 plus something in the product of the other ideals gives 1.  Write this as x1 + y1 = 1.  Reduce mod H1, and y1 = 1.  If y1 is mapped into any other quotient ring, other than R/H1, it maps to 0, as it lives in the product of the other ideals.

Establish y1 through yn as above, then show the map is onto.  Assume z is an element in the direct product, where zi is the ith component.  Let w be the sum of yizi, as i runs from 1 to n.  Reduce w mod Hi and get zi back again.  Our ring homomorphism is onto.

If an element maps to 0 in all components, then it lies in each ideal Hi.  The kernel of the map is the intersection of the ideals.  If this is the same as the product J, we are done.

The product always lies in the intersection, as shown earlier.  Let's prove that the intersection lies in the product.  This is where we need R to be commutative.  However, if you can prove, for a particular noncommutative ring, that the product equals the intersection, using a method specific to that ring, then this theorem applies.

For two ideals, find x+y = 1, and if w is in both ideals, w = (x+y)w = xw + yw, a sum of two products, hence w is contained in the product H1*H2.  The intersection lies in the product, and the intersection and the product coincide.

If H1, H2, and H3 are pairwise coprime, then write x1+y = 1 and x2+z = 1, with x1 and x2 coming from H1, and y and z from H2 and H3 respectively.  Let s = x1x2 + x1z + x2y.  Note that s is in H1.  Let t = yz, and note that t is in H2*H3.  Verify that s+t = 1.  H1 and H2*H3 are coprime.  Thus H1 ∩ (H2*H3) = H1 * (H2*H3). Remember that the product and intersection of H2 and H3 coincide.  Make this substitution and H1 ∩ (H2∩H3) = H1 * (H2*H3). The intersection and product of all three ideals coincide.

An inductive argument extends this result to finitely many ideals.  Build s as the product of (xi+yi), with each xi in H1, and each yi in Hi, but leave out the last term, which lies in the product of all the ideals beyond H1; then reason as above.  Let I be the intersection of H2 through Hn, and let P be their product.  With H1 and P coprime, H1∩P = H1*P.  Since P = I by induction, H1∩I = H1*P, and that completes the proof.  The product is the intersection, the map is injective, and R/J is the same as the direct product of the quotient rings.

If quotient rings are finite, the size of R/J is the product of the sizes of R/Hi over all i.

If H1 and H2 are coprime, then the same is true of any two powers of H1 and H2.  Going back to the integers, 49 and 1331 are coprime, because 7 and 11 are coprime.

Select x from H1 and y from H2, and write x+y = 1.  Raise this to a sufficiently high power, at least the sum of the exponents on H1 and H2.  By the binomial theorem, every term winds up in one of the two exponentiated ideals.  Thus H1^k and H2^l span 1, and are coprime.
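With integers this is concrete.  From 1 = (-3)*7 + 2*11, raise to the 5th power (the sum of the exponents 2 and 3) and split the binomial expansion; Python's math.comb supplies the coefficients:

```python
from math import comb

x, y = -3 * 7, 2 * 11            # x in (7), y in (11), x + y = 1
assert x + y == 1

# (x+y)^5 = 1; terms with x^i, i >= 2 land in (7^2), while the remaining
# terms carry y^(5-i) with 5-i >= 4, and land in (11^3)
s = sum(comb(5, i) * x**i * y**(5 - i) for i in range(2, 6))
t = sum(comb(5, i) * x**i * y**(5 - i) for i in range(0, 2))
assert s + t == 1
assert s % 7**2 == 0 and t % 11**3 == 0
```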

If a finite set of ideals are pairwise coprime, the same is true if each ideal is replaced with a power of that ideal.  The Chinese remainder theorem still applies.

Review the definitions of prime and irreducible in the integers; the same definitions apply in an arbitrary ring.

An element p is prime if it is a nonzero nonunit, and p divides a*b implies p divides a or p divides b.  Note that prime elements are usually restricted to commutative rings, unlike prime ideals, which are an important part of noncommutative ring theory, and will be addressed later.

An element c is irreducible if it is a nonzero nonunit, and c = a*b only when a or b is a unit.

Note that 2 is prime in Z/6, but 2 = 2*4, where neither 2 nor 4 is a unit, so 2 is not irreducible.  At the same time, irreducible elements need not be prime.  We demonstrated this by adjoining the square root of -5 to Z.
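The Z/6 claim is small enough to verify exhaustively; divisibility here means d*c = x for some c in the ring:

```python
n = 6
units = {x for x in range(n) if any((x * y) % n == 1 for y in range(n))}
assert units == {1, 5}

def divides(d, x):
    return any((d * c) % n == x for c in range(n))

# 2 = 2*4 with neither factor a unit, so 2 is not irreducible
assert (2 * 4) % n == 2 and 2 not in units and 4 not in units
# yet 2 is prime: 2 divides a*b forces 2 to divide a or b
assert all(divides(2, a) or divides(2, b)
           for a in range(n) for b in range(n) if divides(2, (a * b) % n))
```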

In an integral domain, p prime implies p irreducible.  Write p = ab, as though p could be reduced.  Now p divides a or b; say pc = a.  Thus p = pcb, and p*(1-cb) = 0.  There are no zero divisors in an integral domain.  With p nonzero, 1-cb = 0, cb = 1, and b is a unit.  This contradicts p = ab for two nonunits.  Thus p is irreducible after all.

Like subgroups, an ideal H is maximal if there is no ideal properly containing H, other than R itself.

A maximum or largest ideal, if it exists, is maximal, and contains all the other proper ideals.

Since 0 is always an ideal, a minimal ideal is understood to be nonzero.  Of course there can be maximal and minimal left/right ideals in a noncommutative ring.

The union of an ascending chain of proper ideals remains a proper ideal, since none of the ideals contains 1.  Use Zorn's lemma to extend any proper ideal H up to a maximal ideal that contains H.  This can be done for left or right ideals.

A principal ideal is generated by a single element x in the ring.  If R is commutative, the principal ideal becomes x*R.  A noncommutative ring can have principal left ideals R*x and principal right ideals x*R.  A principal two-sided ideal contains all the elements R*x*R, and all finite sums thereof.  This is the ideal generated by x in a noncommutative ring.

In an integral domain, the ideal generated by x is maximal in the collection of principal ideals iff x is irreducible.  (This doesn't mean it's a maximal ideal; it is maximal in the collection of principal ideals.)  If x is irreducible and the ideal generated by y contains the ideal generated by x, y divides x, and y or x/y is a unit.  The new ideal generated by y is the old ideal or the entire ring.

Conversely, if x generates a principal ideal, and x = a*b, where a and b are nonunits, then a generates a larger, proper, principal ideal that contains x.  How do we know it's larger?  If xc = a, then xcb = x, cb = 1, and b is a unit.  We need an integral domain here, to cancel x.

A proper ideal P is prime if for any ideals A and B in R, A*B in P implies A is in P or B is in P.

This definition seems backwards relative to the definition of a prime element, but it's not.  Referring to the integers, p divides ab means p divides a or p divides b.  But containment reverses things.  The multiples of ab are contained in the multiples of p, and that means either the multiples of a are contained in the multiples of p, or the multiples of b are contained in the multiples of p.

If P contains a finite product of ideals it contains one of them.  Use induction on the number of ideals in the product.

There is another equivalent definition of prime ideal, which is often easier to verify or refute.  We will show that "xRy in P implies x is in P or y is in P" is equivalent to "P is prime".  Note that xRy means x times anything in R times y, which includes xy by selecting 1 from R; but even if R does not contain 1, it is understood that xRy also includes xy.

Assume P is prime and A and B are principal, generated by x and y respectively.  If xwy is in P for every w in R, i.e. xRy is in P, then P contains RxRyR, and all finite sums thereof, P contains the product ideal A*B, P contains either A or B, and P contains either x or y.  This holds for every x and y in the ring R.  Thus a prime ideal satisfies the xRy criterion.

Conversely, assume xRy in P implies x is in P or y is in P, for every x and y in R.  Let A and B be two left ideals with A*B in P.  If A is in P we are done, at least for left ideals.  Assume A is not entirely in P, and let x be an element in A-P.  Select any y in B.  For every w in R, xwy lies in P.  This is because A*B lies in P.  Now apply the xRy criterion, and either x or y lies in P.  Yet x does not lie in P, so y must.  This holds for all y in B, hence B lies in P.

A similar argument can be made for right ideals.

Since ideals are also left ideals, we are done.  The xRy test is necessary and sufficient for P to be prime.

If R is commutative, the test simplifies to: xy in P implies x is in P or y is in P.  Run through the proof again, with xy instead of xRy, and R commutative.

Let R be commutative and let p generate a principal ideal P.  Thus xy is in P iff p divides xy.  If p is prime then p divides x or y, and x or y is in P, and P is a prime ideal.  Conversely, P a prime ideal and p dividing xy means xy is in P, x or y is in P, p divides x or y, hence p is prime.  The principal ideal generated by p in a commutative ring is prime iff p is a prime element in the ring.

Nothing in the above requires R to contain 1.  In the case of the previous paragraph, an element is not prime if it generates all of R, thus a prime element p generates a proper prime ideal P.  Any ring, with or without 1, noncommutative or commutative, may contain prime ideals, and the xRy (or xy) test applies.

Let P be a prime ideal in R.  If A is a right ideal and B is a left ideal, and A*B is in P, P contains the product of the two-sided ideals generated by A and B, P contains one of these two ideals, and P contains either A or B.  This does not always hold when A is a left ideal and B is a right ideal, as we shall see below.

Realize that 0 need not be a prime ideal, as shown by {0,2,4} * {0,3} in Z/6.  However, 0 is always a prime ideal in a domain.  If xwy is 0 for all w in R, then xy = 0 (set w = 1), and with no zero divisors, x or y = 0.  Or set w = x if R does not contain 1.  This satisfies our criterion for a prime ideal.
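The Z/6 computation is immediate; in Python:

```python
# The ideals {0,2,4} and {0,3} in Z/6 multiply to the zero ideal,
# yet neither ideal is zero, so 0 is not a prime ideal here.
A, B = {0, 2, 4}, {0, 3}
products = {a * b % 6 for a in A for b in B}
assert products == {0}
assert A != {0} and B != {0}
print("0 is not prime in Z/6")
```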

Let's consider another ring, the n by n matrices over any domain, such as the integers.  Let x and y be any two nonzero matrices.  Think of x as a collection of vectors, each column a vector.  Multiplication by a matrix w produces a new set of column vectors, where each column in the product xw is a prescribed linear combination of the columns of x.  All linear combinations are possible; it depends on the entries in w.  Multiplication by y does the same thing.  Choose j so that y has a nonzero entry t in its jth row; thus y scales the jth column of xw by t when building one of its linear combinations.  Construct w so that the jth column of xw, and only the jth column of xw, is nonzero: the jth linear combination copies a nonzero column from x, and ignores the other columns in x.  Thus w contains one 1, and the rest of the matrix is 0.  Now one of the columns in xwy is a scalar multiple of one of the columns of x, where t is the scaling factor.  Since this is a domain, the scaled column is nonzero.  Therefore xwy is nonzero.

For every x and y, there is some matrix w with xwy nonzero.  We don't have to worry about xRy = 0; it never happens.  Thus the zero matrix is a prime ideal.

Now place 1 in the upper left of the zero matrix and let that generate the left ideal A.  Place 1 in the lower right of the zero matrix and let that generate the right ideal B.  The product A*B is always 0, and lies in the zero prime ideal, even though A and B do not.
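Both phenomena can be checked with 2 by 2 integer matrices.  A sketch in Python (the particular x, y, w, r, s are my own choices):

```python
def matmul(p, q):
    n = len(p)
    return [[sum(p[i][k] * q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# For nonzero x and y, some w makes xwy nonzero: here w copies the
# nonzero column of x into the slot that y's nonzero row scales.
x = [[0, 2], [0, 0]]
y = [[0, 0], [3, 0]]
w = [[0, 0], [0, 1]]
xwy = matmul(matmul(x, w), y)
assert xwy == [[6, 0], [0, 0]]          # nonzero, so xRy never lands in 0

# Yet the left ideal A = R*e11 times the right ideal B = e22*R is always 0,
# because e11*e22 = 0.
e11 = [[1, 0], [0, 0]]
e22 = [[0, 0], [0, 1]]
r = [[5, 7], [2, 9]]                    # arbitrary elements of R
s = [[4, 1], [8, 3]]
assert matmul(matmul(r, e11), matmul(e22, s)) == [[0, 0], [0, 0]]
print("0 is prime, yet A*B = 0")
```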

As a foretaste of material to come, there are rings, called Dedekind domains, wherein every proper nonzero ideal is uniquely a product of prime ideals.  Yes, unique factorization rides again.  You're definitely going to enjoy that chapter.

Let R be a (possibly infinite) direct sum of component rings.  Remember that projections are ring homomorphisms.  Project an ideal H in R onto its components, giving ideals Hi in the rings Ri.  Multiply H by ei, the element that is 1 in Ri and 0 elsewhere, and get Hi.  In other words, the projection Hi is actually part of H.  Since H is closed under addition, it includes all finite sums of elements drawn from Hi.  Thus H defines, and is defined by, ideals in the component rings.  Each ideal in the direct sum corresponds to a specific collection of ideals in the components.

This reasoning holds for left ideals or right ideals.

Suppose a maximal ideal, or maximal left ideal, has proper projections Hi and Hj in two components Ri and Rj.  Enlarge it so that Hi becomes all of Ri.  Since Hj is still a proper subset of Rj, the resulting ideal remains proper, contradicting maximality.  Therefore a maximal ideal has Hi = Ri in every component except one, and in that component Hi is maximal.

Consider the product of two ideals G and H in the direct sum.  This product includes Gi*Hi, and all finite sums thereof.  Conversely, let x be a finite sum of elements from Gi and let y be a finite sum of elements from Hi.  The product xy becomes a sum of elements taken from GiHi.  Put this all together and the product GH corresponds to the product GiHi per component.

Let P be a prime ideal in R, where Pi and Pj are proper ideals of Ri and Rj respectively.  Let A be the ideal with Pi in component i and Rj in component j, and 0 elsewhere; let B be the ideal with Ri in component i and Pj in component j, and 0 elsewhere.  Now A*B lies in P, while neither factor lies in P.  This is a contradiction, hence only one component, Pi, can be a proper ideal in its component ring.  This component must be prime, hence we have characterized the prime ideals of R.

An infinite direct sum may leave you feeling a bit uneasy, since R does not contain 1, and rings in this book usually contain 1.  Still, it's a valid ring sans 1, and we've got a handle on its prime and maximal ideals, assuming each Ri contains 1.

If R is the infinite direct product of component rings, it contains 1, but there are ideals, even maximal ideals, that we haven't characterized.  A straightforward maximal ideal is maximal in the first component, crossed with all the rings in all the other components; this projects to M1 in the first component, and all of Ri in each of the others.  But the direct sum of rings is a proper ideal in R, and it ratchets up to a maximal ideal in R whose projection onto each Ri is all of Ri.  It's hard to know what such a maximal ideal might look like; we only know it exists by Zorn's lemma.

An augmented direct sum is a direct sum of rings with 1 thrown into the mix.  An element of R has all but finitely many components set to 0, or all but finitely many components set to 1 - or 2 or 3 or 4 etc, since adjoining 1 brings in all its integer multiples.  The tail of a sequence in R is that portion, beyond some index, where all the components are set to 0 or 1 or 2 etc.  Mapping each sequence to its tail value implements a ring homomorphism.  If all the component rings have characteristic p, the tail homomorphism maps R onto Z/p.  The quotient ring is a field, and the kernel is the direct sum, hence the direct sum is a maximal ideal.  The same thing happens if each component is or contains a field F, and the tail is allowed to take on any value of F.  Again the direct sum is maximal.

If R has an ideal K, which acts as the kernel for a ring homomorphism onto S, then prime ideals in R containing K correspond 1 for 1 with prime ideals in S.  This is another correspondence theorem, in the theme of the correspondence theorems given earlier.  We already have ideal correspondence; we only need verify primality in both directions.  Apply the xRy test for a prime ideal.  It is satisfied for H in R iff it is satisfied for H/K in S.  I'll leave the details to you.

Note that ideal correspondence, and prime ideal correspondence, hold even if R does not contain 1.

When ideals don't contain the kernel, all bets are off.  For instance, the multiples of 6 are not prime in Z, but under the map from Z onto Z/4 they produce the ideal {0,2}, which is maximal, hence prime.

For a prime ideal that maps to a nonprime ideal, let K be a field and let R be K[x,y]/xy.  These are the polynomials in x and y with no mixed terms.  Let P be the ideal generated by y.  Since R/P is K[x], an integral domain, with 0 a prime ideal, P is prime by correspondence.  Now consider S = R mod x2.  These are polynomials in y, with one possible linear term in x.  The ideal P, generated by y, is no longer prime, since it contains x*x (which is 0 in S), while x is not in P.

If rings are commutative, and f maps R into S, prime ideals pull back to prime ideals.  It is enough to show the restriction of a prime ideal in S, to the image of R, is prime within that subring.  From there the prime ideal pulls back to a prime ideal in R by correspondence.

Let H be prime in S, and let G be the intersection of H with f(R).  Let x and y lie in f(R), with xy in G.  Since xy lies in H, either x or y is in H, which puts x or y in G.  This makes G a prime ideal in f(R), which then pulls back to a prime ideal in R.

If rings are not commutative the restriction to a subring, such as f(R), may disrupt primality.  Let K be a field and adjoin the indeterminates x y and z.  Let x and y commute, and mod out by xy, giving polynomials in x or in y with no mixed terms.  This is a commutative subring, and since xy = 0, 0 is not a prime ideal.  Bring in z, which does not commute with x or y.  For any nonzero polynomials p and q, pzq is nonzero.  The condition xRy = 0 never happens for nonzero x and y, hence 0 is a prime ideal, and it contracts to a nonprime ideal in the commutative subring.

If a prime ideal P is the intersection of finitely many ideals, P equals one of the intersecting ideals.

Since the intersection contains the product, P contains the product, and since P is prime it contains one of the ideals.  Yet all of the ideals contain P, so P is one of the ideals.

The intersection of prime ideals need not be prime, as shown by 2Z∩3Z = 6Z in the integers.

The intersection of a descending chain of prime ideals is prime.  Suppose xRy lies in every ideal in the chain, so each ideal contains x or y.  If x drops out of the chain at some point, then y belongs to that ideal and every ideal below it, and hence, by containment, to every ideal above it as well.  Either way, x or y lies in the intersection, and the intersection is prime.

By Zorn's lemma, every prime ideal contains a minimal prime ideal.  Keep taking smaller prime ideals, or take the intersection of descending chains of prime ideals, until you reach a minimal prime ideal.

The minimal prime ideal in a domain is always 0.

Since R contains 1 it contains a maximal ideal.  Start with 0 and build an ascending chain of proper ideals missing 1, culminating in a maximal ideal.  I'm going to prove, about four sections down, that a maximal ideal is prime.  If you accept this for now, then build a descending chain of prime ideals, leading to a minimal prime ideal.  Therefore minimal prime ideals exist.

If you want a minimal prime ideal that lies below a given prime ideal and contains some other ideal H, mod out by H and apply the above to the quotient ring.  This gives a minimal prime ideal that pulls back to a minimal prime ideal in R containing H.

Let P1 P2 … Pn be a set of prime ideals in a commutative ring.  Let H be a set, closed under addition and multiplication, such that H is contained in the union of the prime ideals.  We aren't talking about the ideal generated by the prime ideals, just their union, as sets.  We will show that H is contained in one of the prime ideals.

If there are only two ideals, they don't even have to be prime.  If H lies in neither ideal, it contains some x from P1-P2 and some y from P2-P1.  Then H contains x+y, which lies in neither ideal, yet all the members of H come from one of the two ideals.  This is a contradiction, hence H belongs to one of the two ideals.

For more than two ideals we need primality.  Proceed by induction on the number of prime ideals.

Suppose we have a minimal counterexample: the smallest number of prime ideals that refutes this theorem.  If H is contained in the union of some of the prime ideals, but not all of them, it is contained in one of them by induction.  Thus, for each i, some piece of H lies in Pi and outside all of the others.

Let xi be an element of H that lies in Pi, and in none of the other prime ideals.  For each i, let yi be the product of the xj for j not equal to i.  In other words, yi is the product of the other x values.  If yi were in the prime ideal Pi, then one of the other x values would be in Pi, which is a contradiction.
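The construction can be exhibited in the integers, taking the prime ideals 2Z, 3Z, 5Z.  (No H inside the union actually exists here, which is the point of the theorem; the sketch only exercises the xi, the yi, and their sum.)

```python
# Prime ideals 2Z, 3Z, 5Z in the integers.  x_i lies in P_i and in no other.
primes = [2, 3, 5]
xs = primes[:]
ys = []
for i in range(3):
    y = 1
    for j in range(3):
        if j != i:
            y *= xs[j]              # y_i = product of the other x values
    ys.append(y)                    # ys == [15, 10, 6]
assert all(ys[i] % primes[i] != 0 for i in range(3))   # y_i avoids P_i
z = sum(ys)                         # 15 + 10 + 6 = 31
assert all(z % p != 0 for p in primes)                 # z escapes every P_i
print("z =", z)
```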

Let z be the sum of the yi.  For each prime ideal Pi, every term of z lies in Pi except for yi.  Since yi is not in Pi, z is not in Pi either.  Yet z is in H, and must lie in some Pi.  This is a contradiction, therefore the set H is contained in one of the prime ideals.

An ideal C in R is semiprime if, for any ideal A, A2 in C implies A is in C.  Note that C can equal R.  A prime ideal must be proper, but a semiprime ideal need not be.  Also, every prime ideal is semiprime.

Replace A2 with An, and get an equivalent definition.  Multiply An by A, again and again, until the exponent becomes a power of 2.  If An lies in C then so does An/2, and An/4, and so on down to A.

The xRy test for prime ideals has a counterpart for semiprime ideals.  Review the earlier proof, and apply it to A*A instead of A*B.  The ideal C is semiprime iff xRx in C implies x is in C, for all x in R.  Use x2 instead of xRx for a commutative ring.

The homomorphic image of a semiprime ideal is semiprime.  The proof is the same as that for prime ideal correspondence.

The intersection of semiprime ideals is semiprime; A2 in all of them implies A is in all of them.  This is new; the intersection of prime ideals need not be prime.

The intersection of a descending chain of semiprime ideals is semiprime, hence every semiprime ideal contains a minimal semiprime ideal.  The proof is the same as that for a descending chain of prime ideals.  Given xRx in all of them, x is in all of them.

The minimal semiprime ideal beneath a given semiprime ideal is always 0 in a domain.  After all, 0 is prime in a domain.

A ring R is prime if 0 is a prime ideal.

A ring R is semiprime if 0 is a semiprime ideal.

By correspondence, a kernel is prime/semiprime iff its quotient ring is a prime/semiprime ring.

The direct product of two prime rings fails to be prime.  Multiply [R1,0] by [0,R2] to get [0,0] - whence the zero ideal is not prime.  In contrast, the direct sum or product of semiprime rings is semiprime.  If xRx = 0, then xiRixi = 0 in each component, each xi = 0, and x = 0.

Let R be a ring, and S a nonempty multiplicatively closed subset of R.  If H is any ideal disjoint from S, order the ideals that contain H and miss S by inclusion, and use Zorn's lemma to find a maximal ideal P missing S.

Suppose P is not prime, with A*B as counterexample.  The ideals P+A and P+B properly contain P, and must intersect S.  Select u and v from P, and x from A, and y from B, such that u+x and v+y lie in S.  Since S is closed under multiplication, the product (u+x)*(v+y) lies in S.  Expand the product: uv, uy, and xv lie in P, and xy lies in A*B, which lies in P.  Thus the product lies in P, which contradicts P disjoint from S.


Note, if S were empty, P could be all of R, which is technically not a prime ideal.

Set S = {1}; a maximal ideal missing S is in fact a maximal ideal of R.  A larger ideal would contain 1, and would be all of R.  Conversely, every maximal ideal of R misses 1.  Therefore, as long as R contains 1, every maximal ideal is prime.  (I alluded to this earlier.)

There is at least one maximal / prime ideal; start the chain of ideals at 0 and go up from there.

If R does not contain 1, we don't have the luxury of setting S = {1}, hence there may be no prime ideals.  Let R be generated by a single element x with x2 = 0.  Now R consists of 0, x, 2x, 3x, etc, and their opposites, and that's all.  If an ideal P is prime, then xRx = 0 puts x in P, and x generates all of R.  There are no prime ideals.  However, the ideal generated by 2x is maximal, but not prime.
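A minimal model of this ring in Python, with the integer n standing for nx; since x2 = 0, every product is 0 (the encoding is mine):

```python
# The integer n stands for n*x; addition is ordinary, and every product
# is 0 because x*x = 0.
def add(m, n):
    return m + n

def mul(m, n):
    return 0                 # (m*x)*(n*x) = m*n*x^2 = 0

# The ideal generated by 2x is the even multiples of x: maximal, since the
# only larger ideal is all of R, yet not prime:
assert mul(1, 1) % 2 == 0    # x*x = 0 lies in the ideal
assert 1 % 2 != 0            # but x itself does not
print("(2x) is maximal but not prime")
```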

An element x is nilpotent if some xn = 0.  Yes, if x is a matrix, this is consistent with the definition of a nilpotent transformation.

Best to let n be minimal, since all powers of x after that are 0.  If n = 1 then x = 0.

If x^i = x^j, for i < j < n, then n is not minimal; an exponent of n-(j-i) would do just as well, since x^(n-(j-i)) = x^(n-j)*x^i = x^(n-j)*x^j = x^n = 0.  Thus the powers of x up to x^n are distinct.

Let x be the square root of 2, and adjoin x to the integers mod 4.  Thus x is nilpotent with order 4.  In this ring, x3 = x+x.
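A concrete model in Python, with the pair (a, b) standing for a + b*x and arithmetic mod 4 (the representation is mine, not from the text):

```python
# Pairs (a, b) stand for a + b*x in Z/4 with x*x = 2 adjoined.
def mul(p, q):
    a, b = p
    c, d = q
    return ((a * c + 2 * b * d) % 4, (a * d + b * c) % 4)

x = (0, 1)
x2 = mul(x, x)
x3 = mul(x2, x)
x4 = mul(x3, x)
assert x2 == (2, 0)       # x^2 = 2
assert x3 == (0, 2)       # x^3 = 2x = x + x
assert x4 == (0, 0)       # x^4 = 4 = 0, so x is nilpotent of order 4
print("x^4 = 0 and x^3 = x + x")
```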

If there is any x that is not nilpotent, let S consist of all the powers of x, and push 0 up to a prime ideal missing S.  Thus there is at least one prime ideal.  If x is a unit, that works, because the power of a unit is a unit; but other values of x are possible.

If R is commutative then the converse is also true.  Let P be a prime ideal and suppose every x is nilpotent, hence every x has xn in P.  That means every x is in P, and P is all of R, which is a contradiction.  Thus some x is not nilpotent.

In the noncommutative world you can build a ring (without 1) where everything is nilpotent, yet 0 is a prime ideal.  Let x y and z be three indeterminates that do not commute, and order the letters: x, y, z.  Whenever two instances of the same letter bracket lower letters, the string drops to 0.  Thus xzx survives, but zxz = 0.  Adjacent letters, such as xx, bracket nothing, but since there are no higher letters between them, this too drops to 0.  Take a moment to concatenate three strings, and check that the result is the same whether you concatenate (the first two) and then the third, or the first and then (the second and third).  The structure is a valid ring.

The square of any string is 0.  Let z be the highest letter in the string s, and ss has z followed by lower stuff followed by z, thus 0.

Let R have characteristic 2, so that a polynomial is merely a sum of strings.  Consider an expression like s + t, where s and t are strings.  Raise this to the fourth power and get the following expression.

ssss + ssst + ssts + sstt + stss + stst + stts + sttt + tsss + tsst + tsts + tstt + ttss + ttst + ttts + tttt

Every term has a double block, be it ss, tt, stst, or tsts, hence the expression drops to 0.  Thus a sum of two strings is nilpotent of order 4 or less.
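The dropping rule, and the claim that every term of (s+t)^4 dies, can be machine-checked for the one-letter strings s = x and t = y.  A sketch in Python (the encoding of the rule is mine):

```python
from itertools import product

def is_zero(word, order="xyz"):
    # a string dies if two equal letters bracket only strictly lower letters;
    # adjacent equal letters bracket nothing, and die as well
    for i in range(len(word)):
        for j in range(i + 1, len(word)):
            if word[i] == word[j] and all(order.index(c) < order.index(word[i])
                                          for c in word[i+1:j]):
                return True
    return False

assert not is_zero("xzx")        # xzx survives
assert is_zero("zxz")            # zxz drops to 0
assert is_zero("xx")             # adjacent equal letters drop to 0

# with s = "x" and t = "y", every term of (s+t)^4 drops to 0
assert all(is_zero("".join(term)) for term in product("xy", repeat=4))
print("(s + t)^4 = 0")
```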

Raise an expression like (s+t+u+v) to a high power.  This is a sum of 4 strings, wherein each string could be ridiculously long.  Let z be the highest letter in the expression, and assume z is in v.  Every string with two instances of v drops to 0.  The remaining strings consist of s t or u in each position, except for one slot that could be v.  By induction, s+t+u is nilpotent with some exponent n.  Every string of length n, drawn from s t and u, drops to 0.  Raise s+t+u+v to the power 2n, and every surviving string has a block of length n, drawn from s t and u, to the left or the right of v.  That kills off the remaining strings, and s+t+u+v is nilpotent.  A sum of k different strings is nilpotent with exponent 2^k.  Every polynomial in R is nilpotent.

Let R have an infinite number of indeterminates, not just 3.  Let p and q be two nonzero polynomials in R.  Let z be an indeterminate that is higher than any letter in p or in q.  Show that pzq is nonzero; in fact every string in pzq survives.  The condition xRy = 0 never occurs for nonzero x and y, hence 0 is a prime ideal.

As stated in the previous section, x in R is nilpotent if some integer n satisfies xn = 0.  The integer n represents successive multiplications and need not be in R.

A reduced ring has no nonzero nilpotents.

In a commutative ring, any linear combination of nilpotent elements is nilpotent.  Use the multinomial theorem and make n bigger than the sum of all the exponents that drive the individual nilpotents to 0.

If x is nilpotent, 1-x is a unit.  This is because synthetic division terminates: 1-xn is divisible by 1-x, so with xn = 0, the inverse of 1-x is 1+x+x2+…+xn-1.
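A small numeric instance in Python: take x = 4 in Z/16, where x2 = 0, so the geometric series stops after two terms:

```python
# x = 4 is nilpotent in Z/16: x*x = 16 = 0.  Synthetic division gives
# (1 - x)(1 + x) = 1 - x^2 = 1, so 1 + x inverts 1 - x.
n, x = 16, 4
assert x * x % n == 0
inv = (1 + x) % n
assert (1 - x) * inv % n == 1
print((1 - x) % n, "is a unit in Z/16, with inverse", inv)
```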

An element x is idempotent if x2 = x.  If R is free of zero divisors, write x2-x = x*(x-1) = 0, whence x = 0 or 1.

Note that every idempotent x has an orthogonal counterpart 1-x.  Each squared is itself, their sum is 1, and their product is 0.  This works even in characteristic 2.

Let R be the finite direct product of subrings R1 R2 … Rn.  Let ei be 1 in the ith component and 0 elsewhere.  Thus the sum of all ei is 1 in R, and the product ei*ej is 0 when i does not equal j.  Note also that each ei is idempotent, and these idempotents commute with R.

Conversely, let a finite set of orthogonal idempotents sum to 1, such that these idempotents commute with R.  Let Ri be the principal ideal generated by ei.  Show that each Ri is a subring, with ei acting as 1, and that R is isomorphic to the direct product of these subrings.  The isomorphism is accomplished via x*1, where 1 is replaced with the sum of idempotents.  Then, x*1 times y*1 expands into a large cross product, but all the mixed terms drop out, leaving e1*xy + e2*xy + … + en*xy, which is the same as multiplication per component.
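The smallest interesting instance is Z/6, where e1 = 3 and e2 = 4 are orthogonal idempotents summing to 1.  A sketch in Python:

```python
# e1 = 3 and e2 = 4 are orthogonal idempotents in Z/6 summing to 1.
n, e1, e2 = 6, 3, 4
assert e1 * e1 % n == e1 and e2 * e2 % n == e2     # idempotent
assert e1 * e2 % n == 0                            # orthogonal
assert (e1 + e2) % n == 1                          # sum to 1
# x -> (x*e1, x*e2) splits Z/6 into {0,3} and {0,2,4}, with e1, e2 acting as 1
pairs = {x: (x * e1 % n, x * e2 % n) for x in range(n)}
assert len(set(pairs.values())) == n               # injective
assert all(x * y * e1 % n == pairs[x][0] * pairs[y][0] % n
           for x in range(n) for y in range(n))    # multiplication per component
print("Z/6 splits as {0,3} x {0,2,4}")
```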

The algebra doesn't work when the orthogonal idempotents don't commute with R.  Let a and b be idempotents in a ring with characteristic 2.  Include x and y, such that xa = x, yb = y, ay = y, bx = x, and the other nonidempotent products are all 0.  Verify this is a ring, with a+b = 1, that does not split into a finite product of subrings.  R contains 16 elements: {0,1,a,b}, alone, +x, +y, or +x+y.  In characteristic 2, the square of a sum is the sum of the squares, thus x and y cannot participate in an idempotent.  The only orthogonal idempotents are a and b, so these would have to seed the two subrings.  Only elements that commute with a can live in the "other" subring, but that is just b.  Similarly, a lives alone in its subring, and these two subrings cannot combine to build R.

Let R be a commutative ring with kernel K and quotient ring S.  Let K be a maximal ideal.  If x is not in K, consider the ideal generated by x and K.  Characterize the ideal as p(x)+K, where p is a polynomial with no constant term and coefficients in R.  Verify that these expressions are closed under addition and multiplication, and generated by x and K.  Let x map to y in S, and remember that K maps to 0.  Since x and K span 1, the ideal generated by y also includes 1.  Some polynomial in y, with no constant term, = 1.  This means y is invertible.  The choice of x was arbitrary, so every y in S is invertible, and S is a field.

If R is noncommutative, p(x), and hence p(y), could be a polynomial with coefficients on either side.  Thus p(y) may not be divisible by y on the left or the right, and the quotient need not be a division ring.  For example, let R be the 2x2 matrices over the reals.  We saw earlier that this is a simple ring, hence 0 is the maximal ideal.  The quotient ring S is the same as R, which is not a division ring.  There are plenty of matrices that are not invertible.

Now let's go the other direction, assuming S is a field or division ring.  If K is not maximal, let H properly contain K, and note that the image of H in S does not include 1.  The nonzero elements in this image are not invertible, else it would bring in all of S.  This contradicts S a division ring, hence K is maximal.

If the kernel K is a prime ideal in R, R being commutative again, there are no elements x and y outside of K with xy in K.  Moving to S, there are no zero divisors, hence S is an integral domain.  Conversely, if K is not prime, and xy lies in K, then x and y map to zero divisors and S is not an integral domain.

As a corollary, a maximal ideal in a commutative ring is also prime.  This is because a field has no zero divisors.  (Of course we saw earlier that a maximal ideal in any ring is prime, by separating that maximal ideal from 1.)  In contrast, there are lots of prime ideals that aren't maximal, such as 0 in the integers.

If f is a ring homomorphism from R into S, the extension of an ideal in R is the ideal generated by its image in S, and the contraction of an ideal in S is its preimage in R, which is already an ideal.  If f is onto, this is just the correspondence of ideals under a ring homomorphism.


Applying these transformations in either order need not reproduce the original ideal.  The contraction of the extension always contains the original ideal, while the extension of the contraction is contained in the original ideal.

If the image of a commutative ring R lies in the center of S, the contraction of a prime/semiprime ideal is prime/semiprime as well.  Let P be a prime ideal in S, and pull it back to R.  Since 1 maps to 1, the preimage of P is a proper ideal.  If xy lies in the preimage, then f(x)*f(y) lies in P.  Remember that f(x) and f(y) commute with everything in S, so the product of the principal ideals generated by f(x) and f(y) lies in P, either f(x) or f(y) lies in P, and either x or y lies in the preimage of P.  This is our test for primality in a commutative ring.  Similar reasoning pulls a semiprime ideal back to a semiprime preimage.

This does not hold for noncommutative rings.  Let R be the integer polynomials in x and y, with xy mapped to 0.  That leaves polynomials in x and polynomials in y, and sums thereof.  Since xy = 0, 0 is not a prime ideal.  Build S by adjoining w to R, such that w does not commute with x or y.  Given any two nonzero polynomials p and q in S, p*w*q is nonzero.  Just look at the lowest degree term of the product.  By the xRy test, 0 is a prime ideal in S.  It contracts to 0 in R, which is not prime.  Map x2 to 0, instead of xy, and the same embedding violates semiprime contraction.

If both rings are commutative, the extension of the product is the product of the extensions.  Map an ideal H into another ring S, and the extension is the linear combinations of the elements in the image of H, with coefficients from S.  Do this for H1 and H2 and consider the product ideal in S.  It is spanned by pairwise products from the two extensions.  Each pairwise product is a linear combination of images of elements from H1 times a linear combination of images of elements of H2.  This reduces to a linear combination of images of pairwise products from H1 cross H2, which is precisely the extension of H1*H2 in S.

The contraction of the product contains, but need not equal, the product of the two contractions.  Let R be the integers and map the multiples of 7 to 0 in some other ring, such as Z/7.  The square of 0 is 0, and its contraction is the multiples of 7; but the contraction of 0 times the contraction of 0 is the multiples of 49.

This theorem isn't used very often, and it's rather technical, so if you want to skip it you can move on to the next chapter.

Let P be maximal among the infinitely generated ideals in a commutative ring R.  Thus all larger ideals are finitely generated.  Remember that the entire ring is generated by 1, so an infinitely generated ideal is always proper; P is proper.  We want to prove P is prime.

Suppose xy is in P, but x and y are not.  Now P+Rx and P+Ry properly contain P, and are finitely generated.  Let the generators of P+Ry be ui+vi*y, where u comes from P and v comes from R.

Let J be the ideal formed by the elements of R that, when multiplied by y, lie in P.  For future reference, J is called a conductor ideal, conducting y into P.  Since 1*y does not lie in P, J is a proper ideal.  Now J contains P, and x, and is finitely generated.  Let W be a set of generators for J.

Let t be any element in P, hence in P+Ry, hence equal to some linear combination of the generators ui+vi*y.  Split this combination in two: the linear combination of the ui lies in P, and subtracting it from t shows the corresponding linear combination of the vi*y also lies in P.  Since this combination drives y into P, the same linear combination of the vi is an element of J, and can be produced by a linear combination of the generators in W.

Let G be the generators ui, along with y times the generators in W.  Now G is able to span t, and since t was arbitrary, G spans all of P.  Since all the generators in G lie in P, P is finitely generated, which is a contradiction.  Therefore P is prime.

A similar proof shows that a maximal ideal in the set of nonprincipal ideals is also prime.  Again, R is generated by 1, so P is proper.  Suppose xy lies in P, while x and y do not.

Let d generate P+Ry, which is principal.  Let J be the ideal that drives y into P.  Let c generate J, which is principal.  Given t in P, let t = ud.  Now u maps d into P; u maps all of P+Ry into P; u maps y into P; hence u is in J.  Write u = vc.  Now t = vcd.  Since t was arbitrary, cd spans P.  Since c drives y, and hence P+Ry, into P, cd is in P, making P a principal ideal, which is a contradiction.  Therefore P is prime.