Well, we've put off fractions long enough.  It's time.  This is the very essence of abstract algebra - taking something you know, like 3/7, and extending it to the most general setting.  Fractions are more tedious, and more head-spinning, and more beautiful, than you remember from fifth grade.

Once upon a time an ancient bookkeeper wanted to talk about half an apple, or half a cow, or whatever; and fractions were born.  This is a natural extension of the integers.  If an object is divided into 7 pieces, put the 7 down below and write 1/7.

Now that we've reached the age of abstract algebra, this process can be generalized to various rings.  Most of these rings are commutative, in fact most are integral domains.  However, we will, on occasion, take the fractions of a ring with zero divisors, maybe even a noncommutative ring.

Let R be a ring and let S be a nonempty subset of elements in the center of R, excluding 0, such that S is closed under multiplication.  Formally, if x and y are in S then xy is also in S, and if x is in S then x commutes with everything in R.  S will become the denominators, thus 0 is not allowed.  Assuming R contains 1, we usually toss 1 into S.  It doesn't bring in any other elements, and 1 commutes with all of R, so we're ok.  S could be 1 all by itself, but that doesn't change anything, like writing 5 as 5/1.  You have to have some denominators other than 1 to get anywhere.

The new ring, S inverse of R, is based on the Cartesian product of R and S.  The ordered pairs are written a/b, rather than [a,b] or any of the other standard conventions.  Still, the / is a delimiter separating the two elements of an ordered pair, it is not an arithmetic operator - although the resemblance to fractions and division is unmistakable, and deliberate.

Consider the rationals, where 4/7 is the same as 8/14.  The two fractions look different, but they are equal.  Cross multiply to show a/b and c/d are equivalent iff ad = bc.  In other words, 4*14 = 7*8.  Let's generalize this to arbitrary rings and multiplicative sets.

Define an equivalence relation on the set R cross S as follows.  Let a/b = c/d when u*(ad-bc) = 0 for some u in S.
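
To see why the auxiliary factor u is needed, here is a quick computational sketch in Python.  The ring Z/6 and the set S = {1, 2, 4} are my own illustrative choices, not from the text.

```python
# Fraction equivalence a/b ~ c/d iff u*(a*d - b*c) = 0 for some u in S,
# tested in R = Z/6 with S = {1, 2, 4} (closed under multiplication, 0 excluded).
N = 6
S = {1, 2, 4}          # 2*2=4, 2*4=2, 4*4=4 mod 6, so S is closed

def equivalent(a, b, c, d):
    """Does u*(a*d - b*c) vanish mod N for some u in S?"""
    return any(u * (a * d - b * c) % N == 0 for u in S)

# 1/1 and 4/1 are equivalent even though 1*1 - 1*4 = -3 is nonzero mod 6,
# because u = 2 kills it: 2*(-3) = -6 = 0 mod 6.
print(equivalent(1, 1, 4, 1))        # True
print((1 * 1 - 1 * 4) % N == 0)      # False: without u the test fails
```

With zero divisors in S the factor u genuinely changes the relation; drop it and 1/1 and 4/1 would land in different classes.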

If S contains 0 then all fractions are equivalent via u = 0, and the whole ring collapses to 0, which isn't very interesting.  That's why S excludes 0.  And if S contains a zero divisor x, it had better not contain any y with xy = 0, else S, being closed under multiplication, contains 0.

When S has no zero divisors we can forget about the factor u, and write ad-bc = 0, or ad = bc.  This is the simplified formula when R is an integral domain, having no zero divisors, and it is the formula at work when the integers become the rationals.

Since S is nonempty, select any u in S, and a/b is equal to itself.  That is, u(ab-ba) = 0.  I am using the fact that b comes from S, and is in the center of R, hence b commutes with a, hence ab-ba = 0.

Symmetry is also straightforward: u(ad-bc) = 0 iff u(cb-da) = 0.

Now let's look at transitivity.  As a warmup, assume S has no zero divisors.  Thus a/b = c/d is the same as saying ad = bc.  Similarly, c/d = e/f means cf = ed.  Remember that b d and f are all in the center of R, so we have some latitude here.  Multiply the first equation through by f, giving adf = cfb.  Remember that cf = ed, so by substitution, adf = edb.  Again, d is in the center, so afd = ebd, and since d is not a zero divisor, af = eb, and a/b = e/f.

Next assume S contains zero divisors.  We are given a/b = c/d = e/f.  Apply the definition.

1. uad = ucb

2. vcf = ved

Multiply (1) by f on the right and v on the left.

3. uvadf = uvcbf

Use (2) to substitute for vcf in (3) giving:

4. uvadf = uvedb

5. uvd(af-eb) = 0

Since S is closed under multiplication, uvd lies in S, and (5) says a/b = e/f.  This completes the proof.  The relation is an equivalence relation, and the equivalence classes are well defined.  These classes are referred to as the fractions of R by S, or S inverse of R, or R/S.  Returning to the fractions you know, 4/7 is equivalent to 8/14, hence both represent the same equivalence class, i.e. the same fraction.
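
Transitivity can also be confirmed by brute force on a small example; once more R = Z/6 and S = {1, 2, 4} are illustrative choices of mine.

```python
# Verify that ~ is transitive on all fractions over R = Z/6 with S = {1, 2, 4}.
N = 6
S = {1, 2, 4}

def eq(f, g):
    (a, b), (c, d) = f, g
    return any(u * (a * d - b * c) % N == 0 for u in S)

fracs = [(a, b) for a in range(N) for b in S]
for f in fracs:
    for g in fracs:
        for h in fracs:
            if eq(f, g) and eq(g, h):
                assert eq(f, h)      # never fires: the relation is transitive
print("transitive on", len(fracs), "fractions")
```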

Here is a small exercise; watch what happens when S is not commutative.  Let R be the quaternions, with no zero divisors to worry about, and let S be the eight units (±1, ±i, ±j, ±k), which form a group under multiplication.  First, if we want i/j to be equivalent to itself, we have to change the equivalence formula to ad-cb, rather than ad-bc.  Now ij = ij, and we're ok.  But that only postpones the train wreck.  Using this formula, show that -i/1 = 1/i = k/j.  However, setting -i/1 = k/j forces k = -k, which is a contradiction.  We really need S to commute with R.
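
The computation can be spelled out with a tiny quaternion multiplication table; the (sign, letter) encoding below is my own sketch.

```python
# Quaternion units coded as (sign, letter) pairs; the modified test is
# a/b ~ c/d  iff  a*d - c*b = 0, i.e. a*d = c*b.
BASIS = {('1','1'):(1,'1'), ('1','i'):(1,'i'), ('1','j'):(1,'j'), ('1','k'):(1,'k'),
         ('i','1'):(1,'i'), ('j','1'):(1,'j'), ('k','1'):(1,'k'),
         ('i','i'):(-1,'1'), ('j','j'):(-1,'1'), ('k','k'):(-1,'1'),
         ('i','j'):(1,'k'), ('j','k'):(1,'i'), ('k','i'):(1,'j'),
         ('j','i'):(-1,'k'), ('k','j'):(-1,'i'), ('i','k'):(-1,'j')}

def mul(x, y):
    s, b = BASIS[(x[1], y[1])]
    return (x[0] * y[0] * s, b)

def equiv(a, b, c, d):
    """a/b ~ c/d under the formula a*d - c*b = 0."""
    return mul(a, d) == mul(c, b)

one, i, j, k = (1, '1'), (1, 'i'), (1, 'j'), (1, 'k')
neg_i = (-1, 'i')
print(equiv(neg_i, one, one, i))     # True:  -i * i = 1 = 1 * 1
print(equiv(one, i, k, j))           # True:   1 * j = j = k * i
print(equiv(neg_i, one, k, j))       # False: -i * j = -k, but k * 1 = k
```

The last line is the train wreck: two fractions each equivalent to 1/i fail to be equivalent to each other.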

Ok, R/S forms a well defined set of equivalence classes, but that doesn't make it a ring.  Let's prove that now.

Define a/b + c/d = (ad+bc)/bd, and a/b * c/d = ac/bd.  These are the formulas you learned in elementary school for manipulating fractions, and they still work.

Begin by showing these operations are well defined.  In other words, 2/3 + 4/7 will give the same fraction as 2/3 + 8/14.  It doesn't matter which representative you choose, the result is the same.  Replace c/d with an equivalent fraction e/f and watch what happens.

ucf = ude
ubcf = ubde
uadf + ubcf = uadf + ubde
uf(ad+bc) = ud(af+be)
ubf(ad+bc) = ubd(af+be)
(ad+bc)/bd = (af+be)/bf ( definition of equivalence )

ucf = ude
uabcf = uabde
ac/bd = ae/bf ( definition of equivalence )

It doesn't matter which representatives you select; the sum and product produce the same equivalence class, the same element in the fraction ring.
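
Over the integers, the well-definedness just proved can be spot checked; the helper names below are mine.

```python
# Fractions over Z as raw pairs; in a domain a/b ~ c/d iff a*d == b*c.
def add(f, g):
    (a, b), (c, d) = f, g
    return (a * d + b * c, b * d)

def mulf(f, g):
    (a, b), (c, d) = f, g
    return (a * c, b * d)

def eq(f, g):
    return f[0] * g[1] == f[1] * g[0]

assert eq((4, 7), (8, 14))                         # equivalent representatives
assert eq(add((2, 3), (4, 7)), add((2, 3), (8, 14)))
assert eq(mulf((2, 3), (4, 7)), mulf((2, 3), (8, 14)))
```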

0 over any denominator is equal to 0 over any other denominator.  u*(0-0) = 0.  This means 0/w times any other fraction still has 0 on top, and is still 0.  This is looking very much like 0 in the ring of fractions.

Add 0/w to a/b and get aw/bw, which is equivalent to a/b, thus 0 is the additive identity.

For any v and w in S, v/v = w/w.  This class, represented by w/w, is the multiplicative identity.  The fraction ring contains 1, even if the original ring did not.

The other ring properties, i.e. associative and distributive, and commutative if R is commutative, are inherited from R.  I'll leave these as an exercise for you.  Thus the result is indeed a ring, and is called S inverse of R, or the fraction ring of R by S, or the complete fraction ring of R (if S contains everything other than the zero divisors), or the fraction field of R if the result is a field.

If S has no zero divisors, we can characterize 0 and 1 in the fraction ring.  To find the class that is 0 in the new ring, set 0/w = a/b, whence a = 0.  Set w/w = a/b, whence a = b.

When S has no zero divisors, R embeds in its fraction ring.  Map R to R/1, or Rw/w if R does not contain 1.  Verify this is a ring homomorphism.  If a/1 = b/1, or perhaps aw/w = bw/w, then a = b, hence the map is a monomorphism, and R is a subring of its fractions.  If S contains only 1, a rather trivial case, then R is the same as the fractions R/S.

In the above paragraph, I allowed for the possibility that S does not contain 1, and even the more general case where R does not contain 1, by referring to aw/w rather than a/1.  Rings often contain 1 in this book, but not always, and even if they do, S might not contain 1.  So I may refer to xw/w instead of x/1, from time to time, to cover the case where S does not contain 1.  Most of the theorems carry along, they are just a bit more tedious.  I hope this does not cause too much confusion.

If R is an integral domain, and S contains all the nonzero elements of R, the fractions form a field.  The inverse of a/b is b/a.  Furthermore, R embeds in its fraction field via R/1.  I sort of glossed over this step when proving Gauss' lemma, and the rank of a free module, but now we know that fraction fields are well defined.

What about the fraction field of the fraction field of R?  In this case S contains only units, so step back and ask what happens when S contains only units.  Equate a/b with ab^-1/1, and each fraction is equivalent to something over 1, just another element of R.  If two elements of R become equal in R/S then x/1 = y/1, or x = y.  (Remember S has no zero divisors.)  So the ring has not changed at all.  You need to put some nonunits in the denominator or it doesn't make any difference.

Assume R has no zero divisors, and write ac/bd = 0/w.  This means wac = 0, and since w is nonzero, a or c is 0.  The fractions of a domain form another domain.

The fraction ring is an R module, and a left or right R module if R is not commutative.  Let R act on the numerators.

Of course R/S is an R/S module, in the way that any ring is a module of itself. Let M be a left R module, and let S be a multiplicatively closed set in the center of R, not containing 0.  We can put denominators from S on to the elements of M, just as we did with R.  The result, S inverse of M, or M/S, is a left R module and a left R/S module.  The following is a condensed recapitulation of the previous section.

M/S consists of ordered pairs a/b from M cross S, clumped into equivalence classes, wherein a/b = c/d when u*(da-bc) = 0 for some u in S.  Reflexive, symmetric, transitive; all proved as above.  Just keep R on the left and M on the right throughout, as I did with my definition of equivalence: uda = ubc.

Define a/b + c/d = (da+bc)/bd, and a * c/d = ac/d, and a/b (from R/S) * c/d = ac/bd.  These are well defined on equivalence classes, and they have the associative and distributive properties consistent with an R module or an R/S module.  The proof is as above.

Let w be any member of S, and 0/w becomes 0 in M/S.  If nothing in M has an annihilator in S, then this is the entire class of 0 in M/S.

With no annihilators, M embeds in M/S via M/1.  The map is a homomorphism, and since only 0 maps to 0/1, it is a monomorphism.  If S contains only 1, a rather trivial case, then M is the same as the fractions M/S.

This section contains several small theorems that connect the ideals of R with their images in the fraction ring R/S.  In most cases these ideals could be left ideals in a noncommutative ring, or submodules of a left module M, but remember that S is always in the center of R.

I use the word "image" rather loosely.  The resulting ideal in R/S is not the image of a ring homomorphism applied to H.  It is rather the extension of the homomorphic image H/1 in R/S.  If x is in H, and you start with x/1, multiply by 1/u to get x/u for every denominator u in S.  We are slapping all the denominators of S onto H.  Even if R does not contain 1, let the image of H be H cross S.

Verify the result is a left ideal, or left module.  Given h1/s1 + h2/s2, apply the definition of addition and write (s1h2 + s2h1) / s1s2.  The numerator is in the ideal H, and the common denominator is in the set S, so the result is in fact some element of H with a denominator from S.  Scale h1/s1 by a/s2 and come to the same conclusion.

That's the image, how about the preimage?  An ideal J in R/S is H/S for some ideal H in R.  Select a denominator w and let H be all the numerators that are associated with w, in any representation of any fraction in J.  Is this well defined?  Let J include x/s1, and multiply by s1/s2 to find another element in J.  This is equivalent to x/s2, hence s2 also brings in x, and serves just as well, giving the same set H.  In fact J is all of H cross S.

Is H an ideal?  Add x/w and y/w and get w(x+y)/w^2, which is equivalent to (x+y)/w.  Similarly, multiply x/w by z/w for any z in R and find zx/w^2.  Since the numerators of J appear over every denominator, zx/w is also in J, hence zx belongs to H, and H is an ideal in R, or a submodule of M.

Put the denominators back again to show that the image of the preimage gives the same ideal in R/S.  However, the converse does not hold.  Consider the integers, with the rationals as fraction field.  The even numbers form an ideal in Z.  Apply all denominators and get every rational number, including 2/14, which is equivalent to 1/7.  Now the numerators associated with the denominator 7 include 1, and all the other integers.  The preimage of the image of 2Z is Z, which properly contains the original.
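
Python's fractions module makes the 2Z example concrete.

```python
from fractions import Fraction

# The image of H = 2Z in Q contains 2/14, which is the same fraction as 1/7,
# so the numerator 1 appears over the denominator 7.
assert Fraction(2, 14) == Fraction(1, 7)

# In fact every integer n shows up as a numerator, since n/1 = 2n/2 with 2n in 2Z,
# so the preimage of the image of 2Z is all of Z.
for n in range(-10, 11):
    assert Fraction(2 * n, 2) == Fraction(n, 1)
```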

The sum, product, and intersection of ideals in R produces the sum, product, and intersection of the corresponding ideals in R/S.  Sum and intersection also apply to modules.  Let's tackle intersection first.

If x is in H1 and H2, x/S is in both images, and in their intersection.  Conversely, assume a1/s1 and a2/s2 represent the same fraction, which is part of both images.  Now us1a2 and us2a1 are equal.  This common value, call it x, is present in H1 and H2.  Note that x/(us1s2) is in the image of H1∩H2, and is the same fraction as a1/s1 and a2/s2, hence the intersection of the images comes from H1∩H2.

This works, by induction, for finitely many ideals, but does not hold for an arbitrary intersection, or even a descending chain of ideals.  Push any ideal of the integers into the rationals.  The rationals are a field, so if you have anything nonzero you have everything.  The powers of p define a descending chain of ideals that intersects in 0, and the image of 0 is 0.  However, each image is all of Q, and their intersection is Q.

As for the sum, start with a1+a2 in H1+H2, and its fraction, (a1+a2)/w, is equivalent to a1/w+a2/w, which is in the sum of the images.  Conversely, add a1/s1 and a2/s2, giving something in H1 + something in H2 over s1s2.  Therefore the sum of the images is the image of the sum.

The image of the product ideal is generated by a1a2 over the denominators of S, for every a1 in H1 and a2 in H2.  Each of these generators implies, and is implied by, a1/w times a2/w, which come from H1/S and H2/S.  Hence the image of the product is the product of the images.

When considering the product, H2 could be an R module while H1 is an ideal acting on H2.

If S has no zero divisors, the [left] ideals disjoint from S become proper ideals in R/S.  The ideal H/S is proper iff no a/b from H/S is equivalent to w/w, iff wa ≠ wb, iff a ≠ b, iff H and S are disjoint.

If H is a subring of R, and S is contained in H, H/S is a subring of R/S.  Both H/S and R/S contain 0/w and w/w, for some w in S.  These represent 0 and 1 respectively.  The fractions of H and the fractions of R form rings, we only need show the former embeds in the latter.  If the fractions a/b and c/d are taken from H/S, but become equivalent in R/S, then u(ad-bc) = 0.  Yet this makes them equivalent in H/S as well, hence the map is injective, and H/S is a subring of R/S.

Let R be commutative, with H disjoint from S.  If R has no zero divisors then H/S is a proper ideal, as shown in the previous section.  Let's prove the same thing when H is prime.  The image is the entire ring iff a/b = w/w for some a/b in H/S, iff uwb = uwa.  With a in H, uwa lies in H, hence so does uwb, and since H is prime, u, w, or b is in H.  With S disjoint from H, b has to lie in H.  Yet b is a denominator, drawn from S, which contradicts disjointness.  Thus H/S is proper, even though S may have zero divisors.  (Other ideals may map to all of R/S.)

Let a1/s1 * a2/s2 be a fraction in the ideal H/S.  In other words, there is some c/d from H/S that is equivalent to a1a2/s1s2.  Write ucs1s2 = uda1a2.  Either u, d, a1, or a2 is in H.  H and S are disjoint, hence H contains a1 or a2, H/S contains a1/s1 or a2/s2, and H/S is prime.  The image of a prime ideal disjoint from S is prime.

Now let H1 and H2 be distinct prime ideals, with a1 in H1, but not in H2.  Suppose a1/S is in the image of H2.  That is, a1/s1 = a2/s2, or ua1s2 = ua2s1.  Now ua1s2 is in H2, and if H1 and H2 are disjoint from S, a1 must belong to H2, which is a contradiction.  The prime ideals in R that are disjoint from S map to distinct prime ideals in R/S.

This is a perfect correspondence.  Let J be any prime ideal in R/S.  Build H as before, taking the numerators from any given denominator.  Since J is proper, H and S are disjoint.  Let a1a2 lie in H, hence a1a2/w^2 lies in J.  Either a1/w or a2/w is in J, so either a1 or a2 is in H, and H is prime.

In summary, the prime ideals disjoint from S correspond 1 for 1 with the prime ideals in R/S.

Since 0 is prime in an integral domain, and maps to 0 in the fraction ring, S inverse of an integral domain is an integral domain.

The maximal ideals in R/S (which are also prime) correspond to the maximal prime ideals in R missing S.  There is always at least one maximal ideal in any ring, hence there is a maximal ideal in R/S, and a maximal prime ideal missing S.

On the other side, start with an ideal H disjoint from S, take its image in R/S, raise this to a maximal ideal in R/S, and pull back to a maximal prime ideal in R that contains H and misses S.  We only need show H/S is not all of R/S.  This is assured when S has no zero divisors, or when H is prime.  The proper ideal H becomes proper in R/S, as shown above.

Let R be a commutative ring and let P be a prime ideal in R.  Let S be the elements of R that are not in P, which is sometimes written R-P.  Since P is prime, S is closed under multiplication, so the fractions of R by S, written R/S, are well defined.  In this case there are other words and symbols to describe the fraction ring of R by S.

The notation R_P indicates the ring of fractions with numerators in R and denominators in S = R-P.  If H is an ideal in R, then H_P is the localization of H at P, or the fractions with numerators in H and denominators in S.

Any proper ideal in R_P is the image of an ideal disjoint from S, in other words, an ideal wholly contained in P.  Thus all proper ideals in R_P are contained in the unique maximal ideal P_P.

If a commutative ring has one maximal ideal it is called a local ring.  As shown above, R_P is a local ring, and P_P is its unique maximal ideal.  Examples of local rings include the rationals with odd denominators (the localization of Z about the prime ideal generated by 2), or the fractions of integer polynomials whose denominators have no factor of x^2+1 (the localization of Z[x] about the prime ideal generated by x^2+1).
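
The first example, Z localized about 2, can be probed with Python's Fraction type; the predicate names below are mine.

```python
from fractions import Fraction

def in_Z2(q):
    """Membership in Z localized at (2): denominator odd in lowest terms."""
    return Fraction(q).denominator % 2 == 1

def is_unit(q):
    q = Fraction(q)
    return q != 0 and in_Z2(q) and in_Z2(1 / q)

assert is_unit(Fraction(3, 5))        # odd over odd: a unit
assert not is_unit(Fraction(2, 5))    # inverse 5/2 has an even denominator
# the nonunits are exactly the fractions with even numerator: the maximal ideal
assert all(is_unit(Fraction(n, 3)) != (n % 2 == 0) for n in range(1, 10))
```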

In a local ring R, all the nonunits are sequestered in the maximal ideal M.  If a nonunit x is in R-M then x is contained in a maximal ideal other than M, which is a contradiction.

Assume R is commutative.  An application of Zorn's lemma shows there is a maximal set S, without 0, and closed under multiplication.  You can always start an ascending chain with {1}.

The complement of S contains the ideal 0, and hence a maximal prime ideal P.  The complement of P is multiplicatively closed and contains S, so it must be S.  Thus the complement of S is a prime ideal.  Furthermore, a smaller prime ideal inside P would imply a larger, multiplicatively closed set containing S, hence the complement of S is a minimal prime ideal.

Conversely, if S is the complement of a minimal prime ideal P then S is maximal, for any larger S has some prime ideal in its complement, and hence in P.  Minimal prime ideals and maximal closed sets are in one to one correspondence.

A multiplicatively closed set S is saturated if S contains x and y whenever S contains xy.  For instance, the set of elements that are not zero divisors is saturated.  We will show that S is saturated iff its complement is the union of prime ideals in R.  (Yes, we're still assuming R is commutative.)

Assume S is saturated and let x be in R-S.  Since no multiple xy lies in S, the principal ideal generated by x misses S.  Take the image of this ideal in R/S.  It consists of multiples of x in the numerator, and all elements of S downstairs.  Is this the entire fraction ring R/S?  Only if some xy/v = w/w, the multiplicative identity.  This means uwxy = uwv for some u in S.  Then uwxy, an element of the ideal generated by x, lies in S, and that is a contradiction, so the image remains proper in R/S.  Drive x up to a maximal prime ideal missing S.  This holds for all x, so R-S is the union of prime ideals.

Conversely, assume R-S consists of prime ideals.  If xy is in R-S then x or y is also in R-S.  Turning this around, if x and y are in S, so is xy, hence S is multiplicatively closed.  Now let xy lie in S.  If x is not in S it is in some prime ideal P disjoint from S.  This means xy is in P and not in S, which is a contradiction.  Therefore S is saturated.
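
For instance, take S = {1, 2, 4, 8, ...} inside Z.  Its saturation should be ±(powers of 2), the integers avoiding every odd prime; here is a brute force check over a window, with a helper of my own devising.

```python
# The saturation of the powers of 2 in Z: integers with no odd prime factor.
def in_sat(x):
    x = abs(x)
    if x == 0:
        return False
    while x % 2 == 0:
        x //= 2
    return x == 1                    # only a power of 2 survives

# saturated: whenever a product lies in the set, both factors do
for x in range(-50, 51):
    for y in range(-50, 51):
        if in_sat(x * y):
            assert in_sat(x) and in_sat(y)
print("saturated on the window")
```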

The saturation of a multiplicatively closed set S is a minimal saturated set S′ containing S.  Is this well defined?

Let S′ be the complement of the union of the prime ideals in R that do not hit S.  By the above, S′ is saturated, and it clearly contains S.  If T is saturated and contains S, its complement is a union of prime ideals missing S, thus T contains S′, and S′ is minimal.

If S is one of those multiplicatively closed sets in R, then the preimage of the image of H in R/S is the saturation of H, with respect to S.  This is written sat(H).  Unfortunately this has nothing to do with the saturation of the set S, which was described in the previous section.  The two concepts are unrelated.  Sorry for the ambiguity.

If sat(H) contains x, then x/y = a/b in H/S, and ubx = uya.  Thus something in S carries x into H.  Conversely, if ux = a, for some a in H, then wx/w = a/u, and x is in sat(H).  Since HS lies in H, sat(H) contains H.
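
A concrete instance in Z, assuming H = 12Z and S the powers of 2, so sat(H) should come out to 3Z.

```python
# x is in sat(H) iff some power of 2 times x lands in 12Z.
def in_sat_H(x):
    return any((2 ** k * x) % 12 == 0 for k in range(6))

# sat(12Z) = 3Z: the factor 4 in 12 is absorbed by S, the factor 3 is not
for x in range(-100, 101):
    assert in_sat_H(x) == (x % 3 == 0)
print("sat(12Z) = 3Z on the window")
```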

This may remind you of a conductor ideal.  In fact it is tempting to write sat(H) as [S:H], although S is not an ideal, but rather a multiplicatively closed set, and x is in sat(H) if there is just one u in S with xu in H.  In contrast, a conductor ideal contains x if x drives all of S into H.  So [S:H] definitely abuses the notation.  I find it helpful to think of sat(H) as a conductor ideal, of sorts, but we must remember that it is a different species.

Apply the definition of sat(H) twice.  Let ux = a for a in H, so that x is in sat(H), then let vy = x, whence uvy = a.  Therefore sat(sat(H)) = sat(H).  This makes sense, since the saturation is the preimage of the image in R/S, and if you do that a second time you get the same preimage back again.

If H and J are ideals, show that sat(H∩J) = sat(H)∩sat(J).  If ux = a for a in H∩J, then u drives x into both H and J.  Conversely, assume ux lies in H and vx lies in J.  Now uvx lies in H and J, and we're done.

If H contains J, sat(H) contains sat(J).

sat(H) is the entire ring R, iff 1 times something in S lies in H, iff H and S intersect.  (Here H is an ideal, and not just a submodule.)

Take the saturation of H with respect to S, then saturate this ideal with respect to another closed set T.  Now x is in sat(sat(H)) when vux lies in H, for some v in T and some u in S.  Multiply all the elements of S by all the elements of T, pairwise, to get a multiplicatively closed set Z, which includes uv.  Thus x is in the saturation of H with respect to Z.  By symmetry, saturation with respect to S, then T, is the same as saturation with respect to T, then S.  Both are equal to saturation with respect to Z.

Let T and S be multiplicatively closed sets in a commutative ring R, with T contained in S.

Let R/T map into R/S in the obvious way.  Is this a bijection?  Are the rings essentially the same?

First assume the map is a bijection.  Let w be a fixed element in T, and let c be an arbitrary element of S.  Thus cw/w belongs to R/S and R/T.  The inverse, w/cw, has to have an equivalent fraction in R/T.  Write a/b = w/cw, or uacw = ubw.  Something times c produces an element in T.  If H is an ideal containing c, it intersects T.  This holds for every c in S, so every ideal that intersects S also intersects T.  And if this is true for all ideals then it certainly holds for prime ideals.

Conversely, assume the prime ideals that intersect S always intersect T.  Recall from an earlier section that the saturation of T is the complement of the union of the prime ideals missing T, hence S is contained in the saturation of T.

Let H be an ideal that misses T.  Let a/b be a fraction in H/T.  This is equivalent to 1 only if ua = ub for some u in T.  This causes H and T to intersect, which is a contradiction.  Thus a/b cannot equal 1, and H/T is a proper ideal in R/T.  Drive H up to a prime ideal P that misses T.  (This is where we need R to be commutative.)  Since P misses S, H also misses S.  Every ideal that misses T misses S.  This was true for prime ideals; and now it's true for all ideals.

Return to the ring homomorphism from R/T into R/S.  Let x/y lie in the kernel, hence x/y = 0/w, or uwx = 0.  The elements of R that kill x form an ideal H, and H intersects S in uw, hence H intersects T.  Something in T kills x, and x/y is equivalent to 0 in R/T.  Only 0 maps to 0, and the map is an injection.

To demonstrate a bijection, find a/b in R/T that is equivalent to c/d in R/S.  Let d generate an ideal H, which certainly intersects S.  Since H also intersects T, let d*a = b, for some b in T.  Now ca/b is equivalent to c/d, and the map is onto.

The rings R/T and R/S are isomorphic iff all the prime ideals that intersect S also intersect T.

Like the previous theorem, this theorem places a fraction ring inside another, but this time rings need not be commutative.  However, the multiplicatively closed sets that contain denominators are in the center of R, as usual.

Let S and T be two such closed sets, and let Q be the set of pairwise products of elements in S cross elements in T.  Verify that Q is a multiplicatively closed set, hence R/Q is a valid ring.

Let U be the fractions represented by T in R/S.  U is simply T/1 if R contains 1, or Tw/w for some fixed w in S.  Verify that U is multiplicatively closed.  Start with a and b in T and map them to aw/w and bw/w in R/S.  Their product is abww/ww, which is equivalent to abw/w in U.  Thus U is a closed set in R/S, and we can talk about R/S/U.  This is essentially the same as R/S/T, the fractions of R/S by T.  We will show this is the same ring as R/Q.

You know the drill.  Create a map, show it is well defined, show it is injective, show it is onto, show it is a ring homomorphism.  Here is the map.  If s1 comes from S, and t1 comes from T, and w is a fixed element in S, define the function f on R/Q as follows.

a1/(s1t1) → (a1/s1) / (t1w/w)

Is f well defined?  Suppose a2/(s2t2) maps into the same class.  Leave out the w's (they don't really contribute to our understanding), and cross multiply, so that some factor from U makes a1t2/s1 and a2t1/s2 equal.  Cross multiply again and write an equation in R: va1s2t2 = va2s1t1.  Since v is in Q, this sets a1/(s1t1) and a2/(s2t2) equal in R/Q.  Thus f is well defined.

Surjective is easy; a1/s1t1 covers (a1/s1) / t1.

To show injective, suppose the image is in the class of 0.

v2a1/s1 = 0/1

The 0/1 on the right isn't 0, it's a class in R/S.  If this equation holds, then there is a v1 in S satisfying the following.

v1v2a1 = 0

This means a1 over anything is equal to 0/1 in R/Q, and the kernel is zero after all.

Finally, f preserves addition and multiplication.  When adding two fractions in R/Q, give them a common denominator first, then you can just add numerators, and carry the denominators s1 and t1 along.  Multiplication is also straightforward; I'll leave the details to you.  Therefore f is a ring isomorphism, and the two rings are the same.

As a corollary, R/S/T = R/T/S.
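
In Z, inverting the powers of 2 and then the powers of 3 gives the same subring of Q as inverting the products 2^a*3^b all at once; here is a quick membership check, with a helper of my own.

```python
from fractions import Fraction

def in_RQ(q):
    """Fractions whose reduced denominator is a product of 2's and 3's."""
    d = Fraction(q).denominator
    while d % 2 == 0:
        d //= 2
    while d % 3 == 0:
        d //= 3
    return d == 1

assert in_RQ(Fraction(5, 12))        # 12 = 2*2*3
assert in_RQ(Fraction(7, 1))         # plain integers are included
assert not in_RQ(Fraction(1, 5))     # 5 is not invertible here
# inverting 2 first and 3 second reaches the same elements: (a/2^i)/(3^j) = a/(2^i 3^j)
```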

If S contains T contains 1, then S and ST are the same set.  R/S = R/(ST) = R/S/T.  This makes sense, since T is entirely units in R/S, so taking T inverse of R/S doesn't change anything.

Let R be a commutative ring.  Let D0 be the set of 0 divisors in R, along with 0, and let S0 be its complement.  Verify that S0 is multiplicatively closed and saturated.  Thus D0 is the union of prime ideals.
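
As a small check, take R = Z/6; the computation below, my own sketch, finds D0 and verifies that its complement S0 is closed and saturated.

```python
# In R = Z/6 the zero divisors together with 0 form D0; S0 is the complement.
N = 6
D0 = {x for x in range(N) if x == 0 or any(x * y % N == 0 for y in range(1, N))}
S0 = set(range(N)) - D0
assert S0 == {1, 5}

# multiplicatively closed
assert all(x * y % N in S0 for x in S0 for y in S0)

# saturated: if a product lies in S0, both factors do
for x in range(N):
    for y in range(N):
        if (x * y) % N in S0:
            assert x in S0 and y in S0
print("S0 =", S0)
```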

Remember that S0 includes 1, and all the other units.

Let P be any minimal prime ideal.  Since minimal prime ideals correspond to maximal closed sets, the complement of P is a maximal closed set S.  If S does not contain S0, then the product set S*S0 is multiplicatively closed, avoids 0, and is properly larger than S.  This contradicts the maximality of S, hence S contains S0, and P lies in D0.

In summary, D0 is a union of prime ideals, and contains all the minimal prime ideals.  Furthermore, every minimal prime ideal consists entirely of zero divisors.

With this in mind, S0 is the largest set S for which the map from R to R/S is injective.  The kernel is 0, for there are no zero divisors in S0.  Thus the map x → x/1 embeds R into R/S0.

Conversely, suppose S contains a zero divisor x, such that xy = 0.  Now y/1 = 0/x, and the mapping is not injective.  The set S0 is the largest set, and includes all other sets S that allow an embedding from R into R/S.

Let W be the fraction ring R/S0.  Every nonzero element of W is either a unit or a zero divisor.  Such a ring, units and 0 divisors, always equals its fraction ring over S0.  This is because S0 is nothing but units.

If R is noetherian, then R/S is noetherian.  (R and R/S could be left noetherian if R is noncommutative.)

Let J be an ideal in R/S, and write J as H/S, where H is an ideal in R.  Since R is noetherian, H is finitely generated.  Put a denominator w on these generators and J is finitely generated.  This is pretty clear when w = 1, but follow along when S does not contain 1.  To create x/v, use the generators (with w in the denominator), times other elements of R with w in the denominator, to build x/ww.  Multiply through by ww/v to generate x/v.

This proof generalizes to modules.  If M is noetherian then so is M/S.  But M/S is a noetherian R/S module, not necessarily a noetherian R module.  Let R = Z and localize about 2.  This brings in all fractions with odd denominators.  Let Z be the first Z module in an ascending chain.  Then bring in the fractions whose denominators are powers of 3.  Then let the denominators contain factors of 3 or 5.  Then 3, 5, or 7, and so on through the primes, building an infinite ascending chain of Z modules.

The other direction doesn't work at all.  Let R be any non-noetherian integral domain, such as Z adjoin infinitely many indeterminates.  The fraction field is nevertheless a noetherian ring, as every field is.

If R is a commutative ring with a multiplicatively closed set S, and M is a finitely generated R module, then M/S is 0 iff some u ∈ S kills M.
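
A quick instance of this criterion: M = Z/4 as a Z module and S the powers of 2 are my choice of example.  The element u = 4 kills M, and every fraction collapses to 0.

```python
# u = 4 lies in S = {1, 2, 4, 8, ...} and annihilates M = Z/4.
M = range(4)
u = 4
assert all((u * x) % 4 == 0 for x in M)
# hence x/s ~ 0/s for every x in M and s in S, via u*(s*x - s*0) = 0, and M/S = 0
print("M/S = 0")
```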

If uM = 0, then ux = 0 for every x in M, and all the fractions of M/S are 0.  Conversely, if each generator gi of M has some si satisfying si*gi = 0, then set u to the product of the si, and u kills M.

Let R be an integral domain, and let F be the fraction field of R.  Thus R embeds in the localizations of R, which in turn embed in F.

Let U be an integral domain containing R.  Consider the localizations of U about M, for each maximal ideal M in R.  These are the fractions of U by R-M.  U embeds in these localizations, which in turn embed in the fractions of U by all the nonzero elements of R.

Choose a in U and b nonzero in R, and assume a/b belongs to every fractional subring U_M.  Then a/b belongs to U.

If a/b is already in U, then b divides a in U, and we are done.  Otherwise let J be the conductor ideal in R that drives a/b into U; y is in J iff b divides ya in U.  J does not contain 1, so push J up to a maximal ideal M.  Localize about M, and U_M cannot contain a/b, nor an equivalent fraction c/d, for then d, in the denominator, and not a member of M, satisfies ad = bc, whence b divides da, and d belongs to J, hence to M, after all.  This contradicts our assumption, thus the fraction a/b is in U.

Set U = R in the above, and the intersection of the localizations of R is R.
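
For R = Z this says the rationals lying in every localization Z at (p) are exactly the integers; here is a spot check with Fraction, the predicate being mine.

```python
from fractions import Fraction

def in_localization(q, p):
    """q lies in Z localized at (p) iff p does not divide its reduced denominator."""
    return Fraction(q).denominator % p != 0

# 3/7 survives localization at 2, 3, 5 ... but is thrown out at 7
assert in_localization(Fraction(3, 7), 2)
assert not in_localization(Fraction(3, 7), 7)

# integers survive every localization
assert all(in_localization(Fraction(n), p) for n in range(-5, 6) for p in (2, 3, 5, 7))
```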

Note that this is trivially true if M = 0, and R is a field, wherein the localization about 0 doesn't change a thing.

The term locally is an adverb that describes a property of a ring, rather than the ring itself.  For instance, imagine a ring can be green.  A ring R is locally green if the localization R_P is green for every prime ideal P.  (Sometimes maximal ideals are sufficient, as described in the previous section.)  Furthermore, green is a local property if R is green iff all its localizations are green; in other words, if R is green iff R is locally green.

These concepts apply to ideals, and even elements of the ring.  For instance, an element x is locally green iff all its localizations x/1 in R_P are green.

With this in mind, 0 is a local property.  An element x is 0 iff all its localizations are 0.

Since 0/1 represents the zero element in each R_P, one direction is obvious.  We only need show the converse.  For this we need R to be commutative.

Suppose x is nonzero, and let J be the annihilator of x.  Since 1 does not kill x, J is a proper ideal, and embeds in a maximal ideal M, which happens to be prime.  Localize about M, and suppose x/1 is equivalent to 0/1.  Then ux = 0 for some u outside of M.  Yet all the elements that kill x lie inside M, which is a contradiction.  Therefore x is 0 iff it is locally 0.  More specifically, x = 0 iff its localizations about maximal ideals are all zero.

Let f be an R module homomorphism from U into V.  Let S be a multiplicatively closed set in the center of R, so that fractions make sense.  There is an induced module homomorphism g from U/S into V/S.

If S contains 1, map x/1 to f(x)/1.  If S does not contain 1, map wx/w to wf(x)/w.  Since V is a module, this is the same as f(wx)/w.  In other words, apply f to each numerator and carry the denominator along.  Verify this map, g from U/S into V/S, respects addition, and scaling by R.  For instance, f(x)/b + f(y)/c = (cf(x)+bf(y))/bc = f(cx+by)/bc, and cf(x)/b = f(cx)/b.  You can bring in a denominator before or after; it doesn't matter.  Thus g becomes a homomorphism of R/S modules.

The homomorphism is well defined only if 0 maps to 0.  The numerator x, in the fractions U/S, is 0 only if something in S kills x.  Write dx = 0, for some d in S, and apply f.  Now f(dx) = df(x) = 0, and f(x) becomes 0 in V/S.  Thus g is a valid module homomorphism.
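As a concrete sketch, take U = V = Z as Z-modules, let f be multiplication by 2, and let S be the odd integers.  The induced g carries a/b to 2a/b, and Python's Fraction type handles the equivalence of fractions automatically:

```python
from fractions import Fraction

# f: Z -> Z, multiplication by 2, an honest Z-module homomorphism
def f(x):
    return 2 * x

# induced map g: apply f to the numerator, carry the denominator
def g(q):
    return Fraction(f(q.numerator), q.denominator)

a, b = Fraction(1, 3), Fraction(2, 5)   # odd denominators, so in S
assert g(a + b) == g(a) + g(b)          # respects addition
assert g(7 * a) == 7 * g(a)             # respects scaling by R = Z
assert g(Fraction(0, 1)) == 0           # 0 maps to 0
```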

To see whether g is injective, find the preimage of 0.  Suppose something in S kills y in V, so that y represents 0 in V/S.  If, for each such y and each x with f(x) = y, something in S kills x, then g is injective.

A similar form of reverse engineering establishes a criterion for surjectivity.  Every z in V and every d in S must admit some x in U and b and c in S such that c(df(x)-bz) = 0.  If f is surjective then that does the trick.  Choose x with f(x) = z, whence f(x)/d = z/d.

U ----f----> V
|            |
/S           /S
↓            ↓
U/S ---g---> V/S

If R is commutative, the homomorphisms from U into V, written hom(U,V), form an R module.  Map f to g as above, and build a map from hom(U,V) into hom(U/S,V/S).  Show that this map is an R module homomorphism.  The sum f1+f2 leads to g1+g2, and a scaled version of f leads to a scaled version of g.

Apply this map to the composition of successive functions, from U to V to W, and the result is the same as the composition of the functions downstairs.  Either way the numerator is mapped to f2(f1(x)).  Thus localization by S is a functor from R modules into R/S modules.

As a special case, think of U and V and W as the same module M, and build a ring homomorphism from the endomorphisms of M into the endomorphisms of M/S.  The identity map on M becomes the identity map on M/S, so 1 maps to 1, as it should.

In a special case of the above, let U = R and let V = R/H, where f is the quotient map from R onto V.  Let S be a multiplicatively closed set disjoint from H.  There is then an induced R/S homomorphism g from R/S onto V/S.

Note that g carries 1/1 to 1/1, and the image becomes 0 iff something in S kills 1, iff something in S belongs to H.  Yet S and H are disjoint, hence 1 maps to 1.  That's good, because I'm going to turn this into a ring homomorphism, and 1 should map to 1.

It is convenient, in this case, to apply f to the denominators as well as the numerators, so that both numerators and denominators represent cosets of H in R.  This doesn't change V/S, or the action of g.  Add anything in H to a denominator and the new fraction is equivalent to the original.  That is, a/b = a/(b+z) for z in H.  Nor does it drive a denominator to 0, since S does not contain anything in H.  So g is the same, but now g can act as a ring homomorphism from R/S onto V/S.

The fraction x/y is in the kernel of g if f(x), times some z in S, yields 0.  Since 0 in the quotient ring means H, zx lies in H.  Thus x is in the saturation of H.  If H is already saturated then the kernel is H/S.  This is the case when H is a prime ideal disjoint from S.  The preimage of the image of H, also known as sat(H), is the same prime ideal H by correspondence.
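A small saturation check as a sketch: take R = Z, H = (5), and S = the powers of 2, which is multiplicatively closed and disjoint from H.  Since H is prime and misses S, zx in H with z in S forces x in H, so H is saturated:

```python
# R = Z, H = (5), S = powers of 2 (disjoint from H).
# Saturation: z*x in H for some z in S already forces x in H.
H = 5
S = [2 ** k for k in range(8)]

for x in range(1, 100):
    hit = any((z * x) % H == 0 for z in S)
    assert hit == (x % H == 0)   # x in sat(H) iff x in H
```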

A saturated kernel makes g injective, but it also makes the map from V into V/S injective.  This is the homomorphism shown on the right.

R ----f----> V
|            |
/S           /S
↓            ↓
R/S ---g---> V/S

If c represents a coset of H in R, let d = c/1 represent a coset of H/S in R/S.  Since H maps into H/S, cosets of H map into cosets of H/S, and the function is well defined on V.  Clearly this map respects addition and multiplication, and is a ring homomorphism from V into V/S.

To show injective, suppose c maps into the kernel H/S.  Thus c is a numerator of some fraction in the ideal spanned by H/S.  Yet H is saturated, so c already lives in H.  Thus c represents 0 in the quotient ring V.  The kernel is 0, and V embeds in V/S.  Sometimes this map is surjective, i.e. an isomorphism, as shown in the next section.

Assume R is commutative.  Continuing the above, let R map onto a quotient ring V, and using the same ring homomorphism, let R/S map onto the quotient ring V/S.  Let the kernel H be a maximal ideal, which is prime, and saturated, hence both homomorphisms into V/S are injective.  In this case the embedding from V into V/S is also surjective, hence an isomorphism.  These quotient rings are called residue fields, and they are identical.

Let a/b represent a coset of H/S in V/S.  Since b lies outside of H, b and H generate 1; write 1 = cb + z for some c in R and z in H.  Subtract az/b from a/b, giving another fraction in the same coset as a/b.  The new fraction is a(1-z)/b = acb/b, which is equivalent to ac/1.  This comes from ac in V, hence the map is surjective, and the quotient rings are isomorphic.

All we need for this to work is that every b in S generates a principal ideal that is coprime to H.  H does not have to be maximal, though it often lies in a maximal ideal M with S = R-M.  For instance, let p^2 generate H in the integers, and localize about p.  Every b in S is not divisible by p (by definition), and is coprime to p^2.  Thus b and p^2 span 1, and {b} and H are coprime ideals.  Furthermore, by unique factorization, if bx lies in H, and b is not divisible by p, then x already lies in H, hence H is saturated.  Z/H is the integers mod p^2, and the same is true after localization.  The local ring Zp, mod the ideal Hp, is isomorphic to the integers mod p^2.
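This isomorphism is easy to test numerically (a sketch, with p = 3): every fraction a/b with b prime to p collapses, mod p^2, to the integer a times the modular inverse of b, computed via Python's pow(b, -1, m).

```python
p = 3
m = p ** 2   # H = (p^2), so the quotient is Z/9

# each fraction a/b with p not dividing b equals an integer c
# mod p^2, namely c = a * b^{-1}; verify a = c*b mod p^2
for a in range(m):
    for b in [1, 2, 4, 5, 7, 8]:        # denominators prime to p
        c = (a * pow(b, -1, m)) % m
        assert (c * b - a) % m == 0
```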

Here is an example that is not surjective.  Let K be a field and adjoin the indeterminates x and y to build R = K[x,y].  Let x generate the kernel of the homomorphism.  The quotient ring V is K[y], which is an integral domain, hence the kernel is a prime ideal P.  Let S = R-P.  S is every polynomial that is not divisible by x.  R/S is now a localization about P, and since P is prime and saturated, both maps into V/S, from the left and from above, are injective.  The kernel of g is P/S, which is the maximal ideal in the local ring R/S, and that makes V/S a field, a residue field to be precise.  In fact V/S is the transcendental field extension K(y), quotients of polynomials in y with coefficients in K.

V is K[y], polynomials in y, as mentioned above, and this integral domain is certainly not isomorphic to the field K(y), though it does embed.

The same pattern appears when P is any prime ideal in a larger maximal ideal.  Localize about P, and V, an integral domain, embeds into its fraction field, which is the residue field of RP.