Commutative Algebra/Print version

Objects and morphisms

Basics

Definition 1.1 (categories):

A category $\mathcal C$ is a collection of objects together with morphisms, each going from an object $A$ to an object $B$ (where $A$ is called the domain and $B$ the codomain), such that

  1. Any morphism $f: A \to B$ can be composed with a morphism $g: B \to C$ such that the composition of the two is a morphism $g \circ f: A \to C$; composition is required to be associative.
  2. For each object $A$, there exists a morphism $\operatorname{id}_A: A \to A$ such that for any morphism $f: A \to B$ we have $f \circ \operatorname{id}_A = f$ and for any morphism $g: C \to A$ we have $\operatorname{id}_A \circ g = g$.

Examples 1.2:

  1. The collection of all groups together with group homomorphisms as morphisms is a category.
  2. The collection of all rings together with ring homomorphisms is a category.
  3. Sets together with ordinary functions form the category of sets.

To every category we may associate an opposite category:

Definition 1.3 (opposite categories):

Let $\mathcal C$ be a category. The opposite category $\mathcal C^{\mathrm{op}}$ of $\mathcal C$ is the category consisting of the objects of $\mathcal C$, but with all morphisms formally inverted: the domain of a morphism in $\mathcal C^{\mathrm{op}}$ is defined to be its codomain in $\mathcal C$, and its codomain to be its domain in $\mathcal C$.

For instance, within the opposite category of sets, a function $f: S \to T$ (where $S$, $T$ are sets) is a morphism $T \to S$.

Algebraic objects within category theory

A category is such a general object that some important algebraic structures arise as special cases. For instance, consider a category with one object. Then this category is a monoid with composition as its operation. On the other hand, if we are given an arbitrary monoid, we can define the elements of that monoid to be the morphisms from a single object to itself, and thus have found a representation of that monoid as a category with one object.

If we are given a category with one object, and the morphisms all happen to be invertible, then we have in fact a group structure. And further, just as described for monoids, we can turn every group into a category.

Special types of morphisms

The following notions of category theory are inspired by phenomena within the category of sets and similar categories.

In the category of sets, we have surjective functions and injective functions. We may characterise those as follows:

Theorem 1.4:

Let $M, N$ be sets and $f: M \to N$ be a function. Then:

  • $f$ is surjective if and only if for all sets $T$ and functions $g, h: N \to T$, $g \circ f = h \circ f$ implies $g = h$.
  • $f$ is injective iff for all sets $S$ and functions $g, h: S \to M$, $f \circ g = f \circ h$ implies $g = h$.

Proof:

We begin with the characterisation of surjectivity.

$\Rightarrow$: Let $f$ be surjective, and let $g \circ f = h \circ f$. Let $n \in N$ be arbitrary. Since $f$ is surjective, we may choose $m \in M$ such that $f(m) = n$. Then we have $g(n) = g(f(m)) = h(f(m)) = h(n)$. Since $n$ was arbitrary, $g = h$.

$\Leftarrow$: Assume that for all sets $T$ and functions $g, h: N \to T$, $g \circ f = h \circ f$ implies $g = h$. Assume for contradiction that $f$ isn't surjective. Then there exists $n_0 \in N$ outside the image of $f$. Let $T = \{0, 1\}$. We define $g, h: N \to T$ as follows:

$g(n) = 0$ for all $n \in N$, and $h(n) = \begin{cases} 0 & n \neq n_0 \\ 1 & n = n_0. \end{cases}$

Then $g \circ f = h \circ f$ (since $n_0$, the only place where the second function might be $1$, is never hit by $f$), but $g \neq h$.

Now we prove the characterisation of injectivity.

$\Rightarrow$: Let $f$ be injective, let $S$ be another set and let $g, h: S \to M$ be two functions such that $f \circ g = f \circ h$. Assume that $g(s) \neq h(s)$ for a certain $s \in S$. Then due to the injectivity of $f$, $f(g(s)) \neq f(h(s))$, a contradiction.

$\Leftarrow$: Assume that for all sets $S$ and functions $g, h: S \to M$, $f \circ g = f \circ h$ implies $g = h$. Let $m, m' \in M$ be arbitrary such that $f(m) = f(m')$. Take $S = \{0\}$, $g(0) = m$ and $h(0) = m'$. Then $f \circ g = f \circ h$, hence $g = h$, that is $m = m'$, and hence injectivity.

It is interesting that the passage from surjectivity to injectivity swapped the use of indirect proof from the $\Leftarrow$-direction to the $\Rightarrow$-direction.

Since in the characterisation of injectivity and surjectivity given by the last theorem there is no mention of elements of sets any more, we may generalise those concepts to category theory.

Definition 1.5:

Let $\mathcal C$ be a category, and let $f: A \to B$ be a morphism of $\mathcal C$. We say that

  • $f$ is an epimorphism if and only if for all objects $C$ of $\mathcal C$ and all morphisms $g, h: B \to C$, $g \circ f = h \circ f$ implies $g = h$, and
  • $f$ is a monomorphism if and only if for all objects $C$ of $\mathcal C$ and all morphisms $g, h: C \to A$, $f \circ g = f \circ h$ implies $g = h$.

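For finite sets, the cancellation properties of theorem 1.4 can even be checked by brute force, since (as the proof shows) it suffices to test against maps into a two-element set. The following is a minimal sketch (Python, with ad-hoc function names and dictionaries as functions; not part of the original text):

```python
from itertools import product

def is_epi(f, A, B):
    """Brute-force check that f: A -> B is right-cancellable (an epimorphism).

    In the category of sets it suffices to test against all g, h: B -> {0, 1},
    exactly as in the proof of theorem 1.4.
    """
    C = [0, 1]
    for g_vals in product(C, repeat=len(B)):
        for h_vals in product(C, repeat=len(B)):
            g = dict(zip(B, g_vals))
            h = dict(zip(B, h_vals))
            # g∘f = h∘f but g ≠ h would contradict the epimorphism property
            if all(g[f[a]] == h[f[a]] for a in A) and g != h:
                return False
    return True

A, B = [0, 1], [0, 1, 2]
f = {0: 0, 1: 1}                              # not surjective: 2 is never hit
print(is_epi(f, A, B))                        # False, matching theorem 1.4
print(is_epi({0: 0, 1: 1}, [0, 1], [0, 1]))   # True: a bijection is epi
```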
Exercises

  • Exercise 1.3.1: Come up with a category $\mathcal C$, where the objects are some finitely many sets, such that there exists an epimorphism that is not surjective, and a monomorphism that is not injective (hint: include few morphisms).

Terminal, initial and zero objects and zero morphisms

Within many categories, such as groups, rings, modules,... (but not fields), there exist some sort of "trivial" objects which are the simplest possible; for instance, in the category of groups, there is the trivial group, consisting only of the identity. Indeed, within the category of groups, the trivial group has the following property:

Theorem 1.6:

Let $1 = \{e\}$ be the trivial group and let $G$ be another group. Then there exists exactly one homomorphism $\varphi: 1 \to G$ and exactly one homomorphism $\psi: G \to 1$.

Furthermore, if $H$ is any other group with the property that for every other group $G$, there exists exactly one homomorphism $\varphi: H \to G$ and exactly one homomorphism $\psi: G \to H$, then $H \cong 1$.

Proof: We begin with the first part. Let $\psi: G \to 1$ be a homomorphism. Then $\psi$ must take the value of the one element of $1$ everywhere and is thus uniquely determined. If furthermore $\varphi: 1 \to G$ is a homomorphism, by the homomorphism property we must have $\varphi(e) = e_G$ (otherwise obtain a contradiction by taking a power of $\varphi(e)$).

Assume now that $H \ncong 1$, and let $g$ be an element within $H$ that does not equal the identity. Let $G = H$. We define a homomorphism $H \to G$ by $\operatorname{id}_H$. In addition to that homomorphism, we also have the trivial homomorphism $h \mapsto e_H$. Hence, we don't have uniqueness.

Using the characterisation given by theorem 1.6, we may generalise this concept into the language of category theory.

Definition 1.7:

Let $\mathcal C$ be a category. A zero object of $\mathcal C$ is an object $0$ of $\mathcal C$ such that for all other objects $A$ of $\mathcal C$ there exist unique morphisms $0 \to A$ and $A \to 0$.

Within many usual categories, such as groups (as shown above), but also rings and modules, there exist zero objects. However, not so within the category of sets. Indeed, let $S$ be an arbitrary set. If $|S| \geq 2$, then from any nonempty set there exist at least 2 morphisms with codomain $S$, namely two of the constant functions. If $|S| = 1$, we may pick a set $T$ with $|T| \geq 2$ and obtain two morphisms from $S$ mapping to $T$. If $S = \emptyset$, then there does not exist a function $T \to S$ for any nonempty $T$.

But, if we split definition 1.7 in half, each half can be found within the category of sets.

Definition 1.8:

Let $\mathcal C$ be a category. An object $A$ of $\mathcal C$ is called

  • terminal iff for every other object $B$ of $\mathcal C$ there exists exactly one morphism $B \to A$;
  • initial iff for every other object $B$ of $\mathcal C$ there exists exactly one morphism $A \to B$.

In the category of sets, there exists exactly one initial object and infinitely many terminal objects. The initial object is the empty set; the argument following definition 1.7 shows that this is the only remaining option, and it is a valid one because the only morphism from the empty set to any other set is the empty function. Furthermore, every set with exactly one element is a terminal object, since every morphism mapping to that set is the constant function with value the single element of that set. Hence, by generalizing the concept of a zero object in two different directions, we have obtained a fine description of the symmetry breaking at the level of sets.

Now returning to the category of groups, between any two groups there also exists a particularly trivial homomorphism, namely the zero homomorphism. We shall also elevate this concept to the level of categories. The following theorem is immediate:

Theorem 1.9:

Let $1$ be the trivial group, and let $G$ and $H$ be any two groups. If $\varphi: G \to 1$ and $\psi: 1 \to H$ are homomorphisms, then $\psi \circ \varphi: G \to H$ is the trivial homomorphism.

Now we may proceed to the categorical definition of a zero morphism. It is only defined for categories that have a zero object. (There exists a more general definition, but it shall be of no use to us during the course of this book.)

Definition 1.10:

Let $\mathcal C$ be a category with a zero object $0$, and let $A, B$ be objects of that category. Then the zero morphism from $A$ to $B$ is defined as the composition of the two unique morphisms $A \to 0$ and $0 \to B$.

Functors, natural transformations, universal arrows

Functors

Definitions

There are two types of functors, covariant functors and contravariant functors. Often, a covariant functor is simply called a functor.

Definition 2.1:

Let $\mathcal C, \mathcal D$ be two categories. A covariant functor $F: \mathcal C \to \mathcal D$ associates

  • to each object $A$ of $\mathcal C$ an object $F(A)$ of $\mathcal D$, and
  • to each morphism $f: A \to B$ in $\mathcal C$ a morphism $F(f): F(A) \to F(B)$,

such that the following rules are satisfied:

  1. For all objects $A$ of $\mathcal C$ we have $F(\operatorname{id}_A) = \operatorname{id}_{F(A)}$, and
  2. for all morphisms $f: A \to B$ and $g: B \to C$ of $\mathcal C$ we have $F(g \circ f) = F(g) \circ F(f)$.

Definition 2.2:

Let $\mathcal C, \mathcal D$ be two categories. A contravariant functor $F: \mathcal C \to \mathcal D$ associates

  • to each object $A$ of $\mathcal C$ an object $F(A)$ of $\mathcal D$, and
  • to each morphism $f: A \to B$ in $\mathcal C$ a morphism $F(f): F(B) \to F(A)$,

such that the following rules are satisfied:

  1. For all objects $A$ of $\mathcal C$ we have $F(\operatorname{id}_A) = \operatorname{id}_{F(A)}$, and
  2. for all morphisms $f: A \to B$ and $g: B \to C$ of $\mathcal C$ we have $F(g \circ f) = F(f) \circ F(g)$.

Forgetful functors

There is no completely standard formal definition of a forgetful functor, but in fact, believe it or not, the notion is easily explained in terms of a few examples.

Example 2.3:

Consider the category of groups with homomorphisms as morphisms. We may define a functor sending each group to its underlying set and each homomorphism to itself as a function. This is a functor from the category of groups to the category of sets. Since the target objects of that functor lack the group structure, the group structure has been forgotten, and hence we are dealing with a forgetful functor here.

Example 2.4:

Consider the category of rings. Remember that each ring is an Abelian group with respect to addition. Hence, we may define a functor from the category of rings to the category of groups, sending each ring to the underlying group. This is also a forgetful functor; one which forgets the multiplication of the ring.

Natural transformations

Definition 2.5:

Let $\mathcal C, \mathcal D$ be categories, and let $F, G: \mathcal C \to \mathcal D$ be two functors. A natural transformation $\eta: F \Rightarrow G$ is a family of morphisms $\eta_A: F(A) \to G(A)$ in $\mathcal D$, where $A$ ranges over all objects of $\mathcal C$, that are compatible with the images of morphisms of $\mathcal C$ under the functors $F$ and $G$; that is, for every morphism $f: A \to B$ of $\mathcal C$ the following diagram commutes:

$\eta_B \circ F(f) = G(f) \circ \eta_A$.

Example 2.6:

Let $\mathcal C$ be the category of all fields and $\mathcal D$ the category of all rings. We define a functor

$F: \mathcal C \to \mathcal D$

as follows: Each object $K$ of $\mathcal C$ shall be sent to the ring consisting of addition and multiplication inherited from the field, and whose underlying set are the elements

$\{ n \cdot 1_K \mid n \in \mathbb Z \}$,

where $1_K$ is the unit of the field $K$. Any morphism of fields $\varphi: K \to L$ shall be mapped to the restriction $\varphi|_{F(K)}$; note that this is well-defined (that is, $\varphi|_{F(K)}$ maps $F(K)$ to the object associated to $L$ under the functor $F$), since both

$\varphi(n \cdot 1_K) = n \cdot \varphi(1_K) = n \cdot 1_L$

and

$\varphi(-n \cdot 1_K) = -n \cdot 1_L$ for $n \in \mathbb N$,

where $1_L$ is the unit of the field $L$.

We further define a functor

$G: \mathcal C \to \mathcal D$,

sending each field $K$ to its associated prime field $P(K)$, seen as a ring, and again restricting morphisms, that is sending each morphism $\varphi: K \to L$ to $\varphi|_{P(K)}$ (this is well-defined by the same computations as above and noting that $\varphi$, being a field morphism, maps inverses to inverses).

In this setting, the maps

$\eta_K: F(K) \to P(K)$,

given by inclusion, form a natural transformation from $F$ to $G$; this follows from checking the commutative diagram directly.

Universal arrows

Definition 2.7 (universal arrows):

Let $\mathcal C, \mathcal D$ be categories, let $F: \mathcal D \to \mathcal C$ be a functor, and let $C$ be an object of $\mathcal C$. A universal arrow from $C$ to $F$ is a morphism $u: C \to F(R)$, where $R$ is a fixed object of $\mathcal D$, such that for any other object $D$ of $\mathcal D$ and morphism $f: C \to F(D)$ there exists a unique morphism $g: R \to D$ such that the diagram

$f = F(g) \circ u$

commutes.

Kernels, cokernels, products, coproducts

Kernels

Definition 3.1:

Let $\mathcal C$ be a category with zero objects, and let $f: A \to B$ be a morphism between two objects of $\mathcal C$. A kernel of $f$ is an arrow $k: K \to A$, where $K$ is what we shall call the object associated to the kernel $k$, such that

  1. $f \circ k = 0$, and
  2. for each object $C$ of $\mathcal C$ and each morphism $g: C \to A$ such that $f \circ g = 0$, there exists a unique $h: C \to K$ such that $g = k \circ h$.

The second property is depicted in the following commutative diagram:

Note that here, we don't see kernels as subsets, but rather as an object together with a morphism. This is because in the category of groups, for example, we can take the morphism to be the inclusion of the usual kernel, as the following example shows.

Example 3.2:

In the category of groups, every morphism has a kernel.

Proof:

Let $G, H$ be groups and $f: G \to H$ a morphism (that is, a group homomorphism). We set

$K := \ker f = \{ g \in G \mid f(g) = e_H \}$

and

$k: K \to G$,

the inclusion. This is indeed a kernel in the category of groups. For, if $g: F \to G$ is a group homomorphism such that $f \circ g = 0$, then $g$ maps $F$ wholly to $\ker f$, and we may simply write $g = k \circ g'$, where $g': F \to K$ is $g$ with restricted codomain. This is also clearly a unique factorisation.
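A concrete instance (a standard example, stated here for illustration): the sign homomorphism $\operatorname{sgn}: S_n \to \{1, -1\}$ has kernel the alternating group $A_n$, and the inclusion $A_n \hookrightarrow S_n$ is a kernel in the sense of definition 3.1: any homomorphism $g: F \to S_n$ with $\operatorname{sgn} \circ g$ trivial lands inside $A_n$ and hence factors uniquely through the inclusion.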

For kernels the following theorem holds:

Theorem 3.3:

Let $\mathcal C$ be a category with zero objects, let $f: A \to B$ be a morphism and let $k: K \to A$ be a kernel of $f$. Then $k$ is monic (that is, a monomorphism).

Proof:

Let $k \circ g = k \circ h =: m$ for morphisms $g, h: C \to K$. The situation is depicted in the following picture:

Here, the three lower arrows depict the general property of the kernel. Note that $f \circ m = f \circ k \circ g = 0$. Now the morphisms $g$ and $h$ are both factorisations of the morphism $m$ over $k$. By uniqueness of factorisations, $g = h$.

Kernels are essentially unique:

Theorem 3.4:

Let $\mathcal C$ be a category with zero objects, let $f: A \to B$ be a morphism and let $k: K \to A$, $k': K' \to A$ be two kernels of $f$. Then

$K \cong K'$;

that is, $K$ and $K'$ are isomorphic.

Proof:

From the first property of kernels, we obtain $f \circ k = 0$ and $f \circ k' = 0$. Hence, the second property of kernels implies the commutative diagrams

$k = k' \circ h$ and $k' = k \circ h'$

for unique morphisms $h: K \to K'$ and $h': K' \to K$. We claim that $h$ and $h'$ are inverse to each other. Indeed,

$k \circ (h' \circ h) = k' \circ h = k = k \circ \operatorname{id}_K$ and $k' \circ (h \circ h') = k \circ h' = k' = k' \circ \operatorname{id}_{K'}$.

Since both $k$ and $k'$ are monic by theorem 3.3, we may cancel them to obtain

$h' \circ h = \operatorname{id}_K$ and $h \circ h' = \operatorname{id}_{K'}$,

that is, we have inverse arrows and thus, by definition, isomorphisms.

Cokernels

An analogous notion is that of a cokernel. This notion is actually common in mathematics, but not so much at the undergraduate level.

Definition 3.5:

Let $\mathcal C$ be a category with zero objects, and let $f: A \to B$ be a morphism between two objects of $\mathcal C$. A cokernel of $f$ is an arrow $c: B \to C$, where $C$ is an object of $\mathcal C$ which we may call the object associated to the cokernel $c$, such that

  1. $c \circ f = 0$, and
  2. for each object $D$ of $\mathcal C$ and each morphism $g: B \to D$ such that $g \circ f = 0$, there exists a unique factorisation $g = h \circ c$ for a suitable morphism $h: C \to D$.

The second property is depicted in the following picture:

Again, this notion is just a generalisation of facts observed in "everyday" categories. Our first example of cokernels shall be the existence of cokernels in Abelian groups. Now actually, cokernels exist even in the category of groups, but the construction is a bit tricky since in general, the image need not be a normal subgroup, which is why we may not be able to form the factor group by the image. In Abelian groups though, all subgroups are normal, and hence this is possible.

Example 3.6:

In the category of Abelian groups, every morphism has a cokernel.

Proof:

Let $A, B$ be any two Abelian groups, and let $f: A \to B$ be a group homomorphism. We set

$C := B / f(A)$;

we may form this quotient group because within an Abelian group, all subgroups are normal. Further, we set

$c: B \to C, \quad c(b) := b + f(A)$,

the projection (we adhere to the custom of writing Abelian groups in an additive fashion). Let now $g: B \to D$ be a group homomorphism such that $g \circ f = 0$, where $D$ is another Abelian group. Then the function

$h: C \to D, \quad h(b + f(A)) := g(b)$

is well-defined (because of the rules for group morphisms) and the desired unique factorisation of $g$ is given by $g = h \circ c$.

Theorem 3.7:

Every cokernel is an epi.

Proof:

Let $f: A \to B$ be a morphism and $c: B \to C$ a corresponding cokernel. Assume that $g \circ c = h \circ c$. The situation is depicted in the following picture:

Now again, $(g \circ c) \circ f = 0$, and $g$ and $h$ are by their equality both factorisations of $g \circ c$ over $c$. Hence, by the uniqueness of such factorisations required in the definition of cokernels, $g = h$.

Theorem 3.8:

If a morphism $f: A \to B$ has two cokernels $c$ and $c'$ (let's call the associated objects $C$ and $C'$), then $C \cong C'$; that is, $C$ and $C'$ are isomorphic.

Proof:

Once again, we have $c \circ f = 0$ and $c' \circ f = 0$, and hence we obtain commutative diagrams

$c' = h \circ c$ and $c = h' \circ c'$

for unique morphisms $h: C \to C'$ and $h': C' \to C$. We once again claim that $h$ and $h'$ are inverse to each other. Indeed, we obtain the equations

$(h' \circ h) \circ c = h' \circ c' = c = \operatorname{id}_C \circ c$

and

$(h \circ h') \circ c' = h \circ c = c' = \operatorname{id}_{C'} \circ c'$,

and by cancellation (both $c$ and $c'$ are epis due to theorem 3.7) we obtain

$h' \circ h = \operatorname{id}_C$

and

$h \circ h' = \operatorname{id}_{C'}$

and hence the theorem.

Interplay between kernels and cokernels

Theorem 3.9:

Let $\mathcal C$ be a category with zero objects, and let $f$ be a morphism of $\mathcal C$ such that $f$ is the kernel of some arbitrary morphism $g$ of $\mathcal C$. Then $f$ is also the kernel of any cokernel of itself.

Proof:

$f = \ker g$ means

$g \circ f = 0$, and whenever $g \circ h = 0$, $h$ factors uniquely through $f$.

We set $c := \operatorname{coker} f$, that is,

$c \circ f = 0$, and whenever $h \circ f = 0$, $h$ factors uniquely through $c$.

In particular, since $g \circ f = 0$, there exists a unique $t$ such that $g = t \circ c$. We now want that $f$ is a kernel of $c$, that is,

$c \circ f = 0$, and whenever $c \circ h = 0$, $h$ factors uniquely through $f$.

Hence assume $c \circ h = 0$. Then $g \circ h = t \circ c \circ h = 0$. Hence, by the topmost diagram (in this proof), $h = f \circ u$ for a unique $u$, which is exactly what we want. Further, $c \circ f = 0$ follows from the second diagram of this proof.

Theorem 3.10:

Let $\mathcal C$ be a category with zero objects, and let $f$ be a morphism of $\mathcal C$ such that $f$ is the cokernel of some arbitrary morphism $g$ of $\mathcal C$. Then $f$ is also the cokernel of any kernel of itself.

Proof:

The statement that $f$ is the cokernel of $g$ reads

$f \circ g = 0$, and whenever $h \circ g = 0$, $h$ factors uniquely through $f$.

We set $k := \ker f$, that is

$f \circ k = 0$, and whenever $f \circ h = 0$, $h$ factors uniquely through $k$.

In particular, since $f \circ g = 0$, $g = k \circ t$ for a suitable unique morphism $t$. We now want $f$ to be a cokernel of $k$, that is,

$f \circ k = 0$, and whenever $h \circ k = 0$, $h$ factors uniquely through $f$.

Let thus $h \circ k = 0$. Then also $h \circ g = h \circ k \circ t = 0$ and hence $h$ has a unique factorisation through $f$ by the topmost diagram.

Corollary 3.11:

Let $\mathcal C$ be a category that has a zero object and where all morphisms have kernels and cokernels, and let $f$ be an arbitrary morphism of $\mathcal C$. Then

$\ker f = \ker(\operatorname{coker}(\ker f))$

and

$\operatorname{coker} f = \operatorname{coker}(\ker(\operatorname{coker} f))$.

The equation

$\ker f = \ker(\operatorname{coker}(\ker f))$

is to be read "the kernel of $f$ is a kernel of any cokernel of itself", and the same for the other equation with kernels replaced by cokernels and vice versa.

Proof:

$\ker f$ is a morphism which is some kernel. Hence, by theorem 3.9

$\ker f = \ker(\operatorname{coker}(\ker f))$

(where the equation is to be read "$\ker f$ is a kernel of any cokernel of $\ker f$"). Similarly, from theorem 3.10

$g = \operatorname{coker}(\ker g)$,

where $g = \operatorname{coker} f$.

Products

Definition 3.12:

Let $\mathcal C$ be a category, and let $A, B$ be two objects of $\mathcal C$. A product of $A$ and $B$, denoted $A \times B$, is an object of $\mathcal C$ together with two morphisms

$\pi_A: A \times B \to A$ and $\pi_B: A \times B \to B$,

called the projections of the product, such that for any object $C$ and morphisms $f: C \to A$ and $g: C \to B$ there exists a unique morphism $h: C \to A \times B$ such that the following diagram commutes:

$\pi_A \circ h = f$ and $\pi_B \circ h = g$.

Example 3.13:

In the category of sets, the cartesian product $A \times B$ together with the coordinate projections is a product; likewise, the direct product of two groups or of two modules is a product in the respective category.

Theorem 3.14:

If $\mathcal C$ is a category, $A, B$ are objects of $\mathcal C$ and $P, Q$ are products of $A$ and $B$, then

$P \cong Q$,

that is, $P$ and $Q$ are isomorphic.

Theorem 3.15:

Let $\mathcal C$ be a category with a zero object, $A, B$ objects of $\mathcal C$ and $A \times B$ a product of $A$ and $B$. Then the projection morphisms $\pi_A$ and $\pi_B$ are split epis (for instance, the unique morphism $\langle \operatorname{id}_A, 0 \rangle: A \to A \times B$ is a right inverse of $\pi_A$), and in particular epimorphisms.

Coproducts

Definition 3.16:

Let $\mathcal C$ be a category, and let $A$ and $B$ be objects of $\mathcal C$. Then a coproduct of $A$ and $B$ is another object of $\mathcal C$, denoted $A \amalg B$, together with two morphisms $\iota_A: A \to A \amalg B$ and $\iota_B: B \to A \amalg B$ such that for any object $C$ and morphisms $f: A \to C$ and $g: B \to C$, there exists a unique morphism $h: A \amalg B \to C$ such that $f = h \circ \iota_A$ and $g = h \circ \iota_B$.

Example 3.17:

In the category of sets, the disjoint union $A \sqcup B$ together with the canonical inclusions is a coproduct; in the category of Abelian groups, the direct sum $A \oplus B$ is one.

Theorem 3.18:

If $P$ and $Q$ are coproducts of $A$ and $B$, then $P \cong Q$; the proof is dual to that of theorem 3.14.

Theorem 3.19:

In a category with a zero object, the morphisms $\iota_A$ and $\iota_B$ of a coproduct are split monos, and in particular monomorphisms; this is dual to theorem 3.15.

Biproducts

Definition 3.20:

Let $\mathcal C$ be a category that contains two objects $A$ and $B$. Assume we are given an object $C$ of $\mathcal C$ together with four morphisms $\pi_A: C \to A$, $\pi_B: C \to B$, $\iota_A: A \to C$, $\iota_B: B \to C$ that make it into a product, and simultaneously into a coproduct. Then we call $C$ a biproduct of the two objects $A$ and $B$ and denote it by

$A \oplus B$.

Example 3.21:

Within the category of Abelian groups, a biproduct is given by the product group; if $A, B$ are Abelian groups, set the product group of $A$ and $B$ to be

$A \times B = \{ (a, b) \mid a \in A, b \in B \}$,

the cartesian product, with component-wise group operation.

Proof:

Diagram chasing within Abelian categories

Exact sequences of Abelian groups

Definition 4.1 (sequence):

Given Abelian groups $A_1, \ldots, A_n$ and morphisms (that is, since we are in the category of Abelian groups, group homomorphisms)

$f_1: A_1 \to A_2, \ \ldots, \ f_{n-1}: A_{n-1} \to A_n$,

we may define the whole of those to be a sequence of Abelian groups, and denote it by

$A_1 \xrightarrow{f_1} A_2 \xrightarrow{f_2} \cdots \xrightarrow{f_{n-1}} A_n$.

Note that if one of the objects is the trivial group, we denote it by $0$ and simply leave out the caption of the arrows going to it and emanating from it, since the trivial group is the zero object in the category of Abelian groups.

There are also infinite sequences, indicated by a notation of the form

$\cdots \to A_{n-1} \to A_n \to A_{n+1} \to \cdots$;

it just goes on and on and on. For the sequence to be infinite means that we have a sequence (in the classical sense) of objects and another classical sequence of morphisms between these objects (here, the two have the same cardinality: countably infinite).

Definition 4.2 (exact sequence):

A given sequence

$A_1 \xrightarrow{f_1} A_2 \xrightarrow{f_2} \cdots \xrightarrow{f_{n-1}} A_n$

is called exact iff for all $j \in \{2, \ldots, n-1\}$,

$\ker f_j = \operatorname{im} f_{j-1}$.

There is a fundamental example to this notion.

Example 4.3 (short exact sequence):

A short exact sequence is simply an exact sequence of the form

$0 \to A \xrightarrow{f} B \xrightarrow{g} C \to 0$

for suitable Abelian groups $A, B, C$ and group homomorphisms $f, g$.

The exactness of this sequence means, considering the form of the image and kernel of the zero morphism:

  1. $f$ injective,
  2. $g$ surjective, and
  3. $\ker g = \operatorname{im} f$.

Example 4.4:

Set $A = \mathbb Z/2\mathbb Z$, $B = \mathbb Z/4\mathbb Z$, $C = \mathbb Z/2\mathbb Z$, where we only consider the additive group structure, and define the group homomorphisms

$f: A \to B, \ f(k) := 2k$ and $g: B \to C, \ g(k) := k \bmod 2$.

This gives a short exact sequence

$0 \to \mathbb Z/2\mathbb Z \xrightarrow{f} \mathbb Z/4\mathbb Z \xrightarrow{g} \mathbb Z/2\mathbb Z \to 0$,

as can be easily checked.

A similar construction can be done for any factorisation $n = d \cdot e$ of natural numbers (in our example, $n = 4$, $d = 2$, $e = 2$), yielding $0 \to \mathbb Z/d\mathbb Z \to \mathbb Z/n\mathbb Z \to \mathbb Z/e\mathbb Z \to 0$.
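Exactness of such a concrete finite sequence can be verified mechanically. Here is a minimal sketch (Python, ad-hoc names, using the concrete groups and maps of example 4.4):

```python
# Verify exactness of 0 -> Z/2 -(k -> 2k)-> Z/4 -(k -> k mod 2)-> Z/2 -> 0.
Z2, Z4 = range(2), range(4)

f = {a: (2 * a) % 4 for a in Z2}   # the map on the left
g = {b: b % 2 for b in Z4}         # the map on the right

image_f = {f[a] for a in Z2}
kernel_g = {b for b in Z4 if g[b] == 0}

assert len(image_f) == len(Z2)         # f is injective
assert {g[b] for b in Z4} == set(Z2)   # g is surjective
assert image_f == kernel_g             # exactness in the middle
print("exact")
```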

Diagram chase: The short five lemma

We should now like to briefly exemplify a supremely important method of proof, called diagram chase, in the case of Abelian groups. We shall later generalize this method, and we will see that the classical diagram lemmas hold in huge generality (that includes our example below), namely in the generality of Abelian categories (to be introduced below).

Theorem 4.5 (the short five lemma):

Assume we have a commutative diagram

$\begin{array}{ccccccccc} 0 & \to & A & \xrightarrow{f} & B & \xrightarrow{g} & C & \to & 0 \\ & & \downarrow{\scriptstyle\alpha} & & \downarrow{\scriptstyle\beta} & & \downarrow{\scriptstyle\gamma} & & \\ 0 & \to & A' & \xrightarrow{f'} & B' & \xrightarrow{g'} & C' & \to & 0 \end{array}$,

where the two rows are exact. If $\alpha$ and $\gamma$ are isomorphisms, then so must be $\beta$.

Proof:

We first prove that $\beta$ is injective. Let $\beta(b) = 0$ for a $b \in B$. Since the given diagram is commutative, we have $\gamma(g(b)) = g'(\beta(b)) = 0$ and since $\gamma$ is an isomorphism, $g(b) = 0$. Since the top row is exact, it follows that $b \in \ker g = \operatorname{im} f$, that is, $b = f(a)$ for a suitable $a \in A$. Hence, the commutativity of the given diagram implies $f'(\alpha(a)) = \beta(f(a)) = 0$, and hence $a = 0$ since $f' \circ \alpha$ is injective as the composition of two injective maps. Therefore, $b = f(0) = 0$.

Next, we prove that $\beta$ is surjective. Let thus $b' \in B'$ be given. Set $c' := g'(b')$. Since $\gamma \circ g$ is surjective as the composition of two surjective maps, there exists $b \in B$ such that $\gamma(g(b)) = c'$. The commutativity of the given diagram yields $g'(\beta(b)) = \gamma(g(b)) = g'(b')$. Thus, by linearity, $g'(b' - \beta(b)) = 0$, whence $b' - \beta(b) \in \ker g' = \operatorname{im} f'$, and since $\alpha$ is an isomorphism, we find $a \in A$ such that $f'(\alpha(a)) = b' - \beta(b)$. The commutativity of the diagram yields $\beta(f(a)) = f'(\alpha(a)) = b' - \beta(b)$, and hence $\beta(f(a) + b) = b'$.

Additive categories

Definition 4.6:

An additive category is a category $\mathcal C$ such that the following holds:

  1. $\operatorname{Hom}(A, B)$ is an Abelian group for all objects $A, B$ of $\mathcal C$.
  2. The composition of arrows
$\circ: \operatorname{Hom}(B, C) \times \operatorname{Hom}(A, B) \to \operatorname{Hom}(A, C)$
is bilinear; that is, for $f, g \in \operatorname{Hom}(B, C)$ and $h, k \in \operatorname{Hom}(A, B)$, we have
$(f + g) \circ h = f \circ h + g \circ h$ and $f \circ (h + k) = f \circ h + f \circ k$
(note that, since no scalar multiplication is involved, this definition of bilinearity is less rich than bilinearity in vector spaces).
  3. $\mathcal C$ has a zero object.
  4. Each pair of objects $A, B$ of $\mathcal C$ has a biproduct $A \oplus B$.

Although additive categories are important in their own right, we shall only treat them as an in-between step to the definition of Abelian categories.

Abelian categories

Definition 4.7:

An Abelian category is an additive category $\mathcal C$ such that furthermore:

  1. Every arrow of $\mathcal C$ has a kernel and a cokernel, and
  2. every monic arrow of $\mathcal C$ is the kernel of some arrow, and every epic arrow of $\mathcal C$ is the cokernel of some arrow.

We now embark to obtain a canonical factorisation of arrows within Abelian categories.

Lemma 4.8:

Let $\mathcal C$ be a category with a zero object and kernels and cokernels for all arrows. Then every arrow $f: A \to B$ of $\mathcal C$ admits a factorisation

$f = m \circ q$,

where $m = \ker(\operatorname{coker} f)$.

Proof:

The factorisation comes from the following commutative diagram, where we call $c := \operatorname{coker} f$ and $m := \ker c$:

Indeed, by the property of $m$ as a kernel and since $c \circ f = 0$, $f$ factors uniquely through $m$.

In Abelian categories, $q$ is even an epimorphism:

Lemma 4.9:

Let $\mathcal C$ be an Abelian category. If $m = \ker(\operatorname{coker} f)$ and we have any factorisation $f = m \circ q$, then $q$ is an epimorphism.

Proof:

Theorem 4.10:

Let $\mathcal C$ be an Abelian category. Then every arrow $f$ of $\mathcal C$ has a factorisation

$f = m \circ e$,

where $m$ is a monomorphism and $e$ is an epimorphism.

Exact sequences in Abelian categories

We begin by defining the image of a morphism in a general context.

Definition 4.12:

Let $f$ be a morphism of a (this time arbitrary) category $\mathcal C$. If it exists, a kernel of a cokernel of $f$ is called an image of $f$.

Construction 4.13:

We shall now construct an equivalence relation on the set of all morphisms whose codomain is a certain object $B$, where $B$ is an object of a category $\mathcal C$. We set

$f \leq g :\Leftrightarrow f = g \circ h$ for a suitable $h$ (that is, $f$ factors through $g$).

This relation is transitive and reflexive. Hence, if we define

$f \equiv g :\Leftrightarrow f \leq g \text{ and } g \leq f$,

we have an equivalence relation (in fact, in this way we can always construct an equivalence relation from a transitive and reflexive binary relation, that is, a preorder).

With the image at hand, we may proceed to the definition of sequences, exact sequences and short exact sequences in a general context.

Definition 4.14:

Let $\mathcal A$ be an Abelian category. A sequence in $\mathcal A$ is a family of objects $A_1, \ldots, A_n$ of $\mathcal A$ together with morphisms $f_j: A_j \to A_{j+1}$, written $A_1 \xrightarrow{f_1} A_2 \xrightarrow{f_2} \cdots \xrightarrow{f_{n-1}} A_n$.

Definition 4.15:

Let $\mathcal A$ be an Abelian category. A sequence $A_1 \xrightarrow{f_1} \cdots \xrightarrow{f_{n-1}} A_n$ in $\mathcal A$ is called exact iff for all $j \in \{2, \ldots, n-1\}$, an image of $f_{j-1}$ and a kernel of $f_j$ are equivalent in the sense of construction 4.13.

Definition 4.16:

Let $\mathcal A$ be an Abelian category. A short exact sequence in $\mathcal A$ is an exact sequence of the form $0 \to A \to B \to C \to 0$.

Diagram chase within Abelian categories

Now comes the clincher we have been working towards. In the ordinary diagram chase, we used elements of sets. We will now replace those elements by arrows in a simple way: Instead of looking at "elements" "$a$" of some object $A$ of an Abelian category $\mathcal A$, we look at arrows towards that object; that is, arrows $x: X \to A$ for arbitrary objects $X$ of $\mathcal A$. For "the codomain of an arrow $x$ is $A$", we write

$x \in_m A$,

where the subscript $m$ stands for "member".

We have now replaced the notion of elements of a set by the notion of members in category theory. We also need to replace the notion of equality of two elements. We don't want equality of two arrows, since then we would not obtain the usual rules for chasing diagrams. Instead, we define yet another equivalence relation on arrows with codomain $A$ (that is, on members of $A$). The following lemma will help to that end.

Lemma 4.18 (square completion):

Construction 4.19 (second equivalence relation):

Now we are finally able to prove the proposition that will enable us to do diagram chases using the techniques we also apply to diagram chases for Abelian groups (or modules, or any other Abelian category).

Theorem 4.20 (diagram chase enabling theorem):

Let $\mathcal A$ be an Abelian category and $A$ an object of $\mathcal A$. We have the following rules concerning properties of a morphism $f: A \to B$:

  1. $f$ is monic iff for all $x \in_m A$, $f x \equiv 0$ implies $x \equiv 0$.
  2. $f$ is monic iff for all $x, y \in_m A$, $f x \equiv f y$ implies $x \equiv y$.
  3. $f$ is epic iff for all $z \in_m B$ there exists $x \in_m A$ with $f x \equiv z$.
  4. $f$ is the zero arrow iff for all $x \in_m A$, $f x \equiv 0$.
  5. A sequence $A \xrightarrow{f} B \xrightarrow{g} C$ is exact iff
    1. $g f \equiv 0$ and
    2. for each $y \in_m B$ with $g y \equiv 0$, there exists $x \in_m A$ such that $f x \equiv y$.
  6. If $f$ is a morphism such that $f x \equiv f y$ for members $x, y \in_m A$, there exists a member of $A$, which we shall call $(x - y)$ (the brackets indicate that this is one morphism), such that:
    1. $f(x - y) \equiv 0$,
    2. $g(x - y) \equiv -g y$ for any morphism $g$ with $g x \equiv 0$, and
    3. $h(x - y) \equiv h x$ for any morphism $h$ with $h y \equiv 0$.

We have thus constructed a relatively elaborate machinery in order to elevate our proof technique of diagram chase (which is quite abundant) to the very abstract level of Abelian categories.

Examples of diagram lemmas

Theorem 4.21 (the long five lemma):

Theorem 4.22 (the snake lemma):

Modules, submodules and homomorphisms

Basics

Definition 5.1 (modules):

Let $R$ be a ring. A left $R$-module is an Abelian group $(M, +)$ together with a function

$\cdot: R \times M \to M$

such that

  1. $(r + s) \cdot m = r \cdot m + s \cdot m$,
  2. $r \cdot (m + n) = r \cdot m + r \cdot n$,
  3. $(r s) \cdot m = r \cdot (s \cdot m)$ and
  4. $1 \cdot m = m$.

Analogously, one can define right $R$-modules with an operation $\cdot: M \times R \to M$; the difference is only formal, but it will later help us define bimodules in a user-friendly way.

For the sake of brevity, we will often write module instead of left $R$-module.
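Some standard examples, stated here for concreteness (they are classical and not specific to this text): every Abelian group $G$ is a $\mathbb Z$-module via

$n \cdot g := \underbrace{g + \cdots + g}_{n \text{ times}}, \quad (-n) \cdot g := -(n \cdot g), \quad 0 \cdot g := 0;$

every ring $R$ is a module over itself, with the module operation given by ring multiplication (cf. exercise 5.2.1 below); and every vector space over a field $K$ is a $K$-module.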

  • Exercise 5.1.1: Prove that every Abelian monoid together with an operation as specified in 1.) - 4.) of definition 5.1 is already a module.

Submodules

Definition 5.2 (submodules):

A subgroup $N \subseteq M$ which is closed under the module function (i.e. the left multiplication operation defined above) is called a submodule. In this case we write $N \leq M$.

The following lemma gives a criterion for a subset of a module being a submodule.

Lemma 5.3:

A subset $N \subseteq M$ is a submodule iff

$\forall n, n' \in N, r \in R: \ n + r \cdot n' \in N$.

Proof:

Let $N \leq M$ be a submodule. Then for $n, n' \in N$ and $r \in R$ we have $r \cdot n' \in N$ due to closedness under the module operation, and hence also $n + r \cdot n' \in N$ since we have an Abelian group.

If $N$ is such that $\forall n, n' \in N, r \in R: n + r \cdot n' \in N$, then for any $n, n' \in N$ and $r \in R$ also $n + n' \in N$ (take $r = 1$), $0 = n + (-1) \cdot n \in N$, $-n = 0 + (-1) \cdot n \in N$ and $r \cdot n' = 0 + r \cdot n' \in N$; hence $N$ is a subgroup closed under the module operation.

Definition and theorem 5.4 (factor modules): If $N \leq M$ is a submodule of $M$, the factor module $M/N$ by $N$ is defined as the factor group $M/N$ together with the module operation

$r \cdot (m + N) := r \cdot m + N$.

This operation is well-defined and satisfies 1. - 4. from definition 5.1.

Proof:

Well-definedness: If $m + N = m' + N$, then $m - m' \in N$, hence $r \cdot (m - m') \in N$ and thus $r \cdot m + N = r \cdot m' + N$.

Properties 1. - 4. follow by direct computation from the corresponding properties of the module operation on $M$.

Sum and intersection of submodules

We shall now ask the question: Given a module $M$ and certain submodules $N_i \leq M$, $i \in I$, which module is the smallest submodule containing all the $N_i$? And which module is the largest submodule that is itself contained within all $N_i$? The following definitions and theorems answer those questions.

Definition and theorem 5.5 (sum of submodules):

Let $M$ be a module over a certain ring $R$ and let $N_i \leq M$, $i \in I$, be submodules of $M$. The set

$\sum_{i \in I} N_i := \{ n_{i_1} + \cdots + n_{i_k} \mid k \in \mathbb N, \ i_j \in I, \ n_{i_j} \in N_{i_j} \}$

is a submodule of $M$, which is the smallest submodule of $M$ that contains all the $N_i$. It is called the sum of the $N_i$.

Proof:

1. $\sum_{i \in I} N_i$ is a submodule:

  • It is an Abelian subgroup since if $n_{i_1} + \cdots + n_{i_k}$ and $n_{j_1} + \cdots + n_{j_l}$ lie in $\sum_{i \in I} N_i$, then
$(n_{i_1} + \cdots + n_{i_k}) - (n_{j_1} + \cdots + n_{j_l}) \in \sum_{i \in I} N_i$.
  • It is closed under the module operation, since
$r \cdot (n_{i_1} + \cdots + n_{i_k}) = r \cdot n_{i_1} + \cdots + r \cdot n_{i_k} \in \sum_{i \in I} N_i$.

2. Each $N_i$ is contained in $\sum_{i \in I} N_i$:

This follows since $n \in N_i$ is a one-summand sum for each $i \in I$ and each $n \in N_i$.

3. $\sum_{i \in I} N_i$ is the smallest submodule containing all the $N_i$: If $L \leq M$ is another such submodule, then $L$ must contain all the elements

$n_{i_1} + \cdots + n_{i_k}, \quad n_{i_j} \in N_{i_j},$

due to closedness under addition and the submodule operation.

Definition and theorem 5.6 (intersection of submodules):

Let $M$ be a module over a ring $R$, and let $N_i \leq M$, $i \in I$, be submodules of $M$. Then the set

$\bigcap_{i \in I} N_i$

is a submodule of $M$, which is the largest submodule of $M$ contained in all the $N_i$. It is called the intersection of the $N_i$.

Proof:

1. It's a submodule: Indeed, if $n, n' \in \bigcap_{i \in I} N_i$ and $r \in R$, then $n, n' \in N_i$ for each $i \in I$ and thus $n + r \cdot n' \in N_i$ for each $i \in I$, hence $n + r \cdot n' \in \bigcap_{i \in I} N_i$.

2. It is contained in all $N_i$ by definition of the intersection.

3. Any submodule that is contained in each of the $N_i$ consists of elements lying in every $N_i$ and is hence contained within the intersection.

We have the following rule for computing with intersections and sums:

Theorem 5.7 (modular law; Dedekind):

Let $M$ be a module and $N, P, Q \leq M$ such that $N \leq P$. Then

$(N + Q) \cap P = N + (Q \cap P)$.

Proof:

$\subseteq$: Let $x = n + q \in (N + Q) \cap P$ with $n \in N$, $q \in Q$. Since $N \leq P$, $q = x - n \in P$ and hence $q \in Q \cap P$. Since also $n \in N$ by assumption, $x \in N + (Q \cap P)$.

$\supseteq$: Let $x = n + q$ with $n \in N$, $q \in Q \cap P$. Since $N \leq P$, $n \in P$, and since further $q \in P$, $x \in P$. Hence, $x \in (N + Q) \cap P$.

More abstractly, the properties of the sum and intersection of submodules may be theoretically captured in the following way:

Lattices

Definition 5.8:

A lattice is a set $X$ together with two operations $\vee$ (called the join or least upper bound) and $\wedge$ (called the meet or greatest lower bound) such that the following laws hold:

  1. Commutative laws: $a \vee b = b \vee a$, $a \wedge b = b \wedge a$
  2. Idempotency laws: $a \vee a = a$, $a \wedge a = a$
  3. Absorption laws: $a \vee (a \wedge b) = a$, $a \wedge (a \vee b) = a$
  4. Associative laws: $a \vee (b \vee c) = (a \vee b) \vee c$, $a \wedge (b \wedge c) = (a \wedge b) \wedge c$

There are some special types of lattices:

Definition 5.9:

A modular lattice is a lattice $X$ such that the identity

$a \geq b \Rightarrow a \wedge (x \vee b) = (a \wedge x) \vee b$

holds.

Theorem 5.10 (ordered sets as lattices):

Let $\leq$ be a partial order on the set $X$ such that

  1. every set $S \subseteq X$ has a least upper bound (where a least upper bound $u$ of $S$ satisfies $s \leq u$ for all $s \in S$ (i.e. it is an upper bound) and $u \leq v$ for every other upper bound $v$ of $S$) and
  2. every set $S \subseteq X$ has a greatest lower bound (defined analogously to least upper bound with inequality reversed).

Then $X$, together with the join operation sending $\{a, b\}$ to the least upper bound of that set and the meet operation defined analogously, is a lattice.

In fact, it suffices to require conditions 1. and 2. only for sets with two elements. But as we have shown, in the case that $X$ is the set of all submodules of a given module, we have the "original" conditions satisfied.

Proof:

First, we note that least upper bound and greatest lower bound are unique, since if for example $u, u'$ are least upper bounds of $S$, then $u \leq u'$ and $u' \leq u$ and hence $u = u'$. Thus, the join and meet operations are well-defined.

The commutative laws follow from $\{a, b\} = \{b, a\}$.

The idempotency laws follow from $a$ clearly being the least upper bound, as well as the greatest lower bound, of the set $\{a\} = \{a, a\}$.

The absorption law $a \wedge (a \vee b) = a$ follows as follows: Let $u$ be the least upper bound of $\{a, b\}$. Then in particular, $a \leq u$. Hence, $a$ is a lower bound of $\{a, u\}$, and any lower bound $l$ of $\{a, u\}$ satisfies $l \leq a$, which is why $a$ is the greatest lower bound of $\{a, u\}$. The other absorption law is proven analogously.

The first associative law follows since if $u$ is the least upper bound of $\{b, c\}$ and $v$ is the least upper bound of $\{a, u\}$, then $v$ is an upper bound of $\{a, b, c\}$ (as $u$ is an upper bound for $\{b, c\}$), and if $w$ is any upper bound of $\{a, b, c\}$, then $u \leq w$ since $w$ is an upper bound of $\{b, c\}$, and further $a \leq w$, whence $v \leq w$. Thus $v$ is the least upper bound of $\{a, b, c\}$. The same argument (with the roles of $a$ and $c$ swapped) proves that $(a \vee b) \vee c$ is also the least upper bound of $\{a, b, c\}$. Again, the second associative law is proven similarly.

From theorems 5.5-5.7 and 5.10 we note that the submodules of a module form a modular lattice, where the order is given by set inclusion.

Exercises

  • Exercise 5.2.1: Let $R$ be a ring. Find a suitable module operation such that $R$ together with its own addition and this module operation is an $R$-module. Make sure you define this operation in the simplest possible way. Prove further, that with respect to this module operation, the submodules of $R$ are exactly the ideals of $R$.

Homomorphisms

We shall now get to know the morphisms within the category of modules over a fixed ring $R$.

Definition 5.11 (homomorphisms):

Let $M, N$ be two modules over a ring $R$. A homomorphism from $M$ to $N$, also called an $R$-linear function from $M$ to $N$, is a function

$f: M \to N$

such that

  1. $f(m + m') = f(m) + f(m')$ and
  2. $f(r \cdot m) = r \cdot f(m)$.

The kernel and image of homomorphisms of modules are defined analogously to group homomorphisms.

Since we are cool, we will often simply write morphisms instead of homomorphisms where it's clear from the context in order to indicate that we have a clue about category theory.

We have the following useful lemma:

Lemma 5.12:

$f: M \to N$ is $R$-linear iff

$\forall m, m' \in M, r \in R: \ f(m + r \cdot m') = f(m) + r \cdot f(m')$.

Proof:

Assume first $R$-linearity. Then we have

$f(m + r \cdot m') = f(m) + f(r \cdot m') = f(m) + r \cdot f(m')$.

Assume now the other condition. Then we have for $m, m' \in M$

$f(m + m') = f(m + 1 \cdot m') = f(m) + f(m')$

and

$f(r \cdot m) = f(0 + r \cdot m) = f(0) + r \cdot f(m) = r \cdot f(m)$,

since $f(0) = 0$ due to $f(0) = f(0 + 1 \cdot 0) = f(0) + f(0)$; since $N$ is an abelian group, we may add the inverse of $f(0)$ on both sides.

Lemma 5.13:

If $f: M \to N$ is $R$-linear, then $f(0) = 0$.

Proof:

This follows from the respective theorem for group homomorphisms, since each morphism of modules is also a morphism of Abelian groups.

Definition 5.8 (isomorphisms):

An isomorphism is a homomorphism which is bijective.

Lemma 5.14:

Let $f: M \to N$ be a morphism. The following are equivalent:

  1. $f$ is an isomorphism
  2. $f$ has an inverse $f^{-1}: N \to M$ which is an isomorphism

Proof:

Lemma 5.15:

The kernel and image of morphisms are submodules.

Proof:

1. The kernel: If $m, m' \in \ker f$ and $r \in R$, then $f(m + r \cdot m') = f(m) + r \cdot f(m') = 0$, hence $m + r \cdot m' \in \ker f$, and lemma 5.3 applies.

2. The image: If $f(m), f(m') \in \operatorname{im} f$ and $r \in R$, then $f(m) + r \cdot f(m') = f(m + r \cdot m') \in \operatorname{im} f$.

The following four theorems are in complete analogy to group theory.

Theorem 5.16 (factoring of morphisms):

Let $M, N$ be modules, let $f: M \to N$ be a morphism and let $L \leq \ker f$. Then there exists a unique morphism $\overline f: M/L \to N$ such that $f = \overline f \circ \pi$, where $\pi: M \to M/L$ is the canonical projection. In this situation, $\ker \overline f = \ker f / L$.

Proof:

We define $\overline f(m + L) := f(m)$. This is well-defined since $m + L = m' + L$ implies $m - m' \in L \subseteq \ker f$ and hence $f(m) = f(m')$. Furthermore, this definition is already enforced by $f = \overline f \circ \pi$. Further, $\overline f(m + L) = 0 \Leftrightarrow m \in \ker f$, whence $\ker \overline f = \ker f / L$.

Corollary 5.17 (first isomorphism theorem):

Let $M, N$ be $R$-modules and let $f: M \to N$ be a morphism. Then $M/\ker f \cong \operatorname{im} f$.

Proof:

We set $L = \ker f$ and obtain, by theorem 5.16, a homomorphism $\overline f: M/\ker f \to N$ with kernel $\ker f / \ker f = 0$; it is hence injective, and it maps onto $\operatorname{im} f$. The claim follows.
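A quick worked instance (a standard example, added for concreteness): consider the $\mathbb Z$-linear projection

$f: \mathbb Z \oplus \mathbb Z \to \mathbb Z, \quad f(a, b) := a.$

Then $\operatorname{im} f = \mathbb Z$ and $\ker f = 0 \oplus \mathbb Z$, so corollary 5.17 gives $(\mathbb Z \oplus \mathbb Z)/(0 \oplus \mathbb Z) \cong \mathbb Z$.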

Corollary 5.18 (third isomorphism theorem):

Let $M$ be an $R$-module, let $N \leq M$ and let $L \leq N$. Then

$(M/L) \big/ (N/L) \cong M/N$.

Proof:

Note that $L \leq M$ and $N/L \leq M/L$ by definition. We define the function

$f: M/L \to M/N, \quad f(m + L) := m + N$.

This is well-defined since

$m + L = m' + L \Rightarrow m - m' \in L \subseteq N \Rightarrow m + N = m' + N$.

Furthermore,

$f(m + L) = 0 \Leftrightarrow m \in N \Leftrightarrow m + L \in N/L$,

and hence $\ker f = N/L$. Since $f$ is surjective, by corollary 5.17 our claim is proven.

Theorem 5.19 (second isomorphism theorem):

Let $N, P \leq M$. Then

$(N + P)/P \cong N/(N \cap P)$.

Proof:

Consider the morphism

$f: N \to (N + P)/P, \quad f(n) := n + P$.

It is surjective, since every element of $(N + P)/P$ has the form $n + p + P = n + P$. Further, $f(n) = 0 \Leftrightarrow n \in P$, which is why the kernel of that homomorphism is given by $N \cap P$. Hence, the theorem follows by the first isomorphism theorem.

And now for something completely different:

Theorem 5.20:

Let $f: M \to N$ be a homomorphism of modules over $R$ and let $P \leq N$. Then $f^{-1}(P)$ is a submodule of $M$.

Proof:

Let $m, m' \in f^{-1}(P)$. Then $f(m + m') = f(m) + f(m') \in P$ and hence $m + m' \in f^{-1}(P)$. Let further $r \in R$. Then $f(r \cdot m) = r \cdot f(m) \in P$.

Similarly:

Theorem 5.21:

Let be a homomorphism of modules over and let . Then is a submodule of .

Proof: Let . Then and . Let further . Then .

Exercises

  • Exercise 5.3.1: Let $R, S$ be rings regarded as modules over themselves as in exercise 5.2.1. Prove that the ring homomorphisms $R \to S$ are exactly the module homomorphisms $R \to S$; that is, every ring hom. is a module hom. and vice versa.

The projection morphism

Definition 5.22:

Let $M$ be a module and $N \leq M$. By the mapping $\pi_N$ we mean the canonical projection mapping $M$ to $M/N$; that is,

$\pi_N: M \to M/N, \quad \pi_N(m) := m + N$.

The following two fundamental equations for $\pi_N$ and $\pi_N^{-1}$ shall gain supreme importance in later chapters: $\pi_N(\pi_N^{-1}(T)) = T$ and $\pi_N^{-1}(\pi_N(P)) = P + N$.

Theorem 5.23:

Let $M$ be a module and $N \leq M$. Then for every set $T \subseteq M/N$, $\pi_N(\pi_N^{-1}(T)) = T$. Furthermore, for every other submodule $P \leq M$, $\pi_N^{-1}(\pi_N(P)) = P + N$.

Proof:

Let first $t \in \pi_N(\pi_N^{-1}(T))$. Then $t = \pi_N(m)$ for an $m \in \pi_N^{-1}(T)$, since $\pi_N(m) \in T$ by definition of $\pi_N^{-1}(T)$. Hence, $t \in T$. Let then $t \in T$. Then there exists $m \in M$ such that $\pi_N(m) = t$, that is $m \in \pi_N^{-1}(T)$. Now $m \in \pi_N^{-1}(T)$ means that $\pi_N(m) \in \pi_N(\pi_N^{-1}(T))$. Hence, $t \in \pi_N(\pi_N^{-1}(T))$.

Let first $m \in P + N$, that is, $m = p + n$ for suitable $p \in P$, $n \in N$. Then $\pi_N(m) = p + N = \pi_N(p)$, which is why by definition $m \in \pi_N^{-1}(\pi_N(P))$. Let then $m \in \pi_N^{-1}(\pi_N(P))$. Then $\pi_N(m) \in \pi_N(P)$, that is $m + N = p + N$ with $p \in P$, that is $m - p = n$ for a suitable $n \in N$, that is $m = p + n \in P + N$.

The following lemma from elementary set theory has relevance for the projection morphism, and we will need it several times:

Lemma 5.24:

Let $f: X \to Y$ be a function, where $X, Y$ are completely arbitrary sets. Then $f$ induces a function $f: \mathcal P(X) \to \mathcal P(Y)$ via $S \mapsto f(S)$, the image of $S$, where $S \subseteq X$. This function preserves inclusion. Further, the function $f^{-1}: \mathcal P(Y) \to \mathcal P(X)$, $T \mapsto f^{-1}(T)$, also preserves inclusion.

Proof:

If $S \subseteq S' \subseteq X$, let $y \in f(S)$. Then $y = f(s)$ for an $s \in S \subseteq S'$, hence $y \in f(S')$. Similarly for $f^{-1}$.

Exercises

Generators and chain conditions

Generators

Definition 6.1 (generators of modules):

Let $M$ be a module over the ring $R$. A generating set of $M$ is a subset $S \subseteq M$ such that

$M = \{ r_1 s_1 + \cdots + r_k s_k \mid k \in \mathbb N, \ r_j \in R, \ s_j \in S \}$.

Example 6.2:

For every module $M$, the whole module itself is a generating set.

Definition 6.3:

Let $M$ be a module. $M$ is called finitely generated if there exists a generating set of $M$ which has finite cardinality.

Example 6.4: Every ring $R$ is a finitely generated $R$-module over itself, and a generating set is given by $\{1\}$.

Definition 6.5 (generated submodules): Let $M$ be a module and $S \subseteq M$. The submodule generated by $S$ is the set $\langle S \rangle := \{ r_1 s_1 + \cdots + r_k s_k \mid k \in \mathbb N, r_j \in R, s_j \in S \}$, the smallest submodule of $M$ containing $S$.

Exercises

Noetherian and Artinian modules

Definition 6.6 (Noetherian modules):

Let $M$ be a module over the ring $R$. $M$ is called a Noetherian module iff for every ascending chain of submodules

$M_1 \subseteq M_2 \subseteq M_3 \subseteq \cdots$

of $M$, there exists an $N \in \mathbb N$ such that

$M_N = M_{N+1} = M_{N+2} = \cdots$.

We also say that ascending chains of submodules eventually become stationary.

Definition 6.7 (Artinian modules):

A module $M$ over a ring $R$ is called an Artinian module iff for every descending chain of submodules

$M_1 \supseteq M_2 \supseteq M_3 \supseteq \cdots$

of $M$, there exists an $N \in \mathbb N$ such that

$M_N = M_{N+1} = M_{N+2} = \cdots$.

We also say that descending chains of submodules eventually become stationary.

We see that those definitions are similar, although they define rather different classes of objects.
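For orientation (standard examples, added for concreteness): the $\mathbb Z$-module $\mathbb Z$ is Noetherian, since every submodule has the form $n\mathbb Z$ and is generated by one element; but it is not Artinian, since the descending chain

$2\mathbb Z \supsetneq 4\mathbb Z \supsetneq 8\mathbb Z \supsetneq \cdots$

never becomes stationary. Conversely, the Prüfer group $\mathbb Z[1/p]/\mathbb Z$ is an Artinian $\mathbb Z$-module which is not Noetherian.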

Using the axiom of choice, we have the following characterisation of Noetherian modules:

Theorem 6.8:

Let $M$ be a module over $R$. The following are equivalent:

  1. $M$ is Noetherian.
  2. All the submodules of $M$ are finitely generated.
  3. Every nonempty set of submodules of $M$ has a maximal element.

Proof 1:

We prove 1. $\Rightarrow$ 2. $\Rightarrow$ 3. $\Rightarrow$ 1.

1. $\Rightarrow$ 2.: Assume there is a submodule $N \leq M$ which is not finitely generated. Using the axiom of dependent choice, we choose a sequence $(n_k)_{k \in \mathbb N}$ in $N$ such that

$n_{k+1} \notin \langle n_1, \ldots, n_k \rangle$;

it is possible to find such a sequence since we may just always choose $n_{k+1} \in N \setminus \langle n_1, \ldots, n_k \rangle$, since $N$ is not finitely generated. Thus we have an ascending sequence of submodules

$\langle n_1 \rangle \subsetneq \langle n_1, n_2 \rangle \subsetneq \langle n_1, n_2, n_3 \rangle \subsetneq \cdots$

which does not stabilize.

2. $\Rightarrow$ 3.: Let $\mathcal S$ be a nonempty set of submodules of $M$. Due to Zorn's lemma, it suffices to prove that every chain within $\mathcal S$ has an upper bound (of course, our partial order is set inclusion, i.e. $\subseteq$). Hence, let $(N_i)_{i \in I}$ be a chain within $\mathcal S$. We write

$N := \bigcup_{i \in I} N_i$.

Since every submodule of $M$ is finitely generated, so is

$N = \langle n_1, \ldots, n_k \rangle$

for suitably chosen $n_1, \ldots, n_k \in N$. Now each $n_j$ is contained in some $N_{i_j}$. Since the $N_i$ form a chain with respect to inclusion, we may just choose $i_0$ such that all $n_j$ are contained within $N_{i_0}$. Hence, $N_{i_0} = N$ is the desired upper bound.

3. $\Rightarrow$ 1.: Let

$M_1 \subseteq M_2 \subseteq M_3 \subseteq \cdots$

be an ascending chain of submodules of $M$. The set $\{M_k \mid k \in \mathbb N\}$ has a maximal element $M_N$ and thus this ascending chain becomes stationary at $M_N$.

Proof 2:

We prove 1. $\Rightarrow$ 3. $\Rightarrow$ 2. $\Rightarrow$ 1.

1. $\Rightarrow$ 3.: Let $\mathcal S$ be a set of submodules of $M$ which does not have a maximal element. Then by the axiom of dependent choice, for each $N \in \mathcal S$ we may choose $N' \in \mathcal S$ such that $N \subsetneq N'$ (as otherwise, $N$ would be maximal). Hence, using the axiom of dependent choice and starting with a completely arbitrary $N_1 \in \mathcal S$, we find an ascending sequence

$N_1 \subsetneq N_2 \subsetneq N_3 \subsetneq \cdots$

which does not stabilize.

3. $\Rightarrow$ 2.: Let $N \leq M$ be not finitely generated. Using the axiom of dependent choice, we choose first an arbitrary $n_1 \in N$ and given $n_1, \ldots, n_k$ we choose $n_{k+1}$ in $N \setminus \langle n_1, \ldots, n_k \rangle$. Then the set of submodules

$\{ \langle n_1, \ldots, n_k \rangle \mid k \in \mathbb N \}$

does not have a maximal element, although it is nonempty.

2. $\Rightarrow$ 1.: Let

$M_1 \subseteq M_2 \subseteq M_3 \subseteq \cdots$

be an ascending chain of submodules of $M$. Since the chain is ascending, the union $N := \bigcup_{k \in \mathbb N} M_k$ is a submodule of $M$, and it is finitely generated, say

$N = \langle n_1, \ldots, n_r \rangle$

for suitably chosen $n_1, \ldots, n_r \in N$. Now each $n_j$ is eventually contained in some $M_{k_j}$. Hence, the chain stabilizes at $M_K$, if $K$ is chosen as the maximum of those $k_j$.

The second proof might be advantageous since it does not use Zorn's lemma, which needs the full axiom of choice.

We can characterize Noetherian and Artinian modules in the following way:

Theorem 6.9:

Let $M$ be a module over a ring $R$, and let $N \leq M$. Then the following are equivalent:

  1. $M$ is Noetherian.
  2. $N$ and $M/N$ are Noetherian.

Proof 1:

We prove the theorem directly.

1. $\Rightarrow$ 2.: $N$ is Noetherian since any ascending sequence of submodules of $N$

$N_1 \subseteq N_2 \subseteq \cdots$

is also a sequence of submodules of $M$ (check the submodule properties), and hence eventually becomes stationary.

$M/N$ is Noetherian, since if

$P_1 \subseteq P_2 \subseteq \cdots$

is a sequence of submodules of $M/N$, we may write

$P_k = Q_k / N$,

where $Q_k := \{ m \in M \mid m + N \in P_k \}$. Indeed, "$\subseteq$" follows since every element of $P_k$ has the form $m + N$ with $m + N \in P_k$, and "$\supseteq$" follows from the definition of $Q_k$.

Furthermore, $Q_k$ is a submodule of $M$ as follows:

  • $m + m' \in Q_k$ for $m, m' \in Q_k$, since $(m + m') + N = (m + N) + (m' + N) \in P_k$, and
  • $r \cdot m \in Q_k$ for $m \in Q_k$ and $r \in R$, since $r \cdot m + N = r \cdot (m + N) \in P_k$.

Now further $Q_k \subseteq Q_{k+1}$ for each $k$, as can be read off from the definition of the $Q_k$ by observing that $P_k \subseteq P_{k+1}$. Thus the sequence

$Q_1 \subseteq Q_2 \subseteq \cdots$

becomes stationary at some $K \in \mathbb N$. But if $Q_k = Q_{k+1}$, then also $P_k = P_{k+1}$, since

$P_k = Q_k / N = Q_{k+1} / N = P_{k+1}$.

Hence,

$P_1 \subseteq P_2 \subseteq \cdots$

becomes stationary as well.

2. $\Rightarrow$ 1.: Let

$M_1 \subseteq M_2 \subseteq \cdots$

be an ascending sequence of submodules of $M$. Then

$M_1 \cap N \subseteq M_2 \cap N \subseteq \cdots$

is an ascending sequence of submodules of $N$, and since $N$ is Noetherian, this sequence stabilizes at an $N_0 \in \mathbb N$. Furthermore, the sequence

$(M_1 + N)/N \subseteq (M_2 + N)/N \subseteq \cdots$

is an ascending sequence of submodules of $M/N$, which also stabilizes (at $N_1 \in \mathbb N$, say). Set $K := \max\{N_0, N_1\}$, let $k \geq K$ and let $m \in M_{k+1}$. Then $m + N \in (M_{k+1} + N)/N = (M_k + N)/N$ and thus $m + N = m' + N$, that is $m = m' + n$ for an $m' \in M_k$ and an $n \in N$. Now $n = m - m' \in M_{k+1} \cap N = M_k \cap N$, hence $n \in M_k$. Hence $m = m' + n \in M_k$. Thus,

$M_1 \subseteq M_2 \subseteq \cdots$

is stable after $K$.


Proof 2:

We prove the statement using the projection morphism to the factor module.

1. $\Rightarrow$ 2.: $N$ is Noetherian as in the first proof. Let

$P_1 \subseteq P_2 \subseteq \cdots$

be a sequence of submodules of $M/N$. If $\pi := \pi_N: M \to M/N$ is the projection morphism, then

$\pi^{-1}(P_1) \subseteq \pi^{-1}(P_2) \subseteq \cdots$

defines an ascending sequence of submodules of $M$, as $\pi^{-1}$ preserves inclusion (since $\pi$ is a function). Now since $M$ is Noetherian, this sequence stabilizes. Hence, since $\pi$ also preserves inclusion, the sequence

$\pi(\pi^{-1}(P_1)) \subseteq \pi(\pi^{-1}(P_2)) \subseteq \cdots$

also stabilizes ($\pi(\pi^{-1}(P_k)) = P_k$ since $\pi$ is surjective).

2. $\Rightarrow$ 1.: Let

$M_1 \subseteq M_2 \subseteq \cdots$

be an ascending sequence of submodules of $M$. Then the sequences

$\pi(M_1) \subseteq \pi(M_2) \subseteq \cdots$ and $M_1 \cap N \subseteq M_2 \cap N \subseteq \cdots$

both stabilize, since $M/N$ and $N$ are Noetherian. Now $\pi^{-1}(\pi(M_k)) = M_k + N$ by theorem 5.23. Thus,

$M_1 + N \subseteq M_2 + N \subseteq \cdots$

stabilizes. But since, for $k$ beyond both stabilisation points, $M_{k+1} = M_{k+1} \cap (M_{k+1} + N) = M_{k+1} \cap (M_k + N) = M_k + (N \cap M_{k+1}) = M_k + (N \cap M_k) = M_k$ by the modular law (theorem 5.7), the theorem follows.

Proof 3:

We use the characterisation of Noetherian modules as those with finitely generated submodules.

1. $\Rightarrow$ 2.: Let $P \leq N$. Then $P \leq M$ and hence $P$ is finitely generated. Let $Q \leq M/N$. Then the module $\pi_N^{-1}(Q) \leq M$ is finitely generated, with generators $m_1, \ldots, m_k$, say. Then the set $\{m_1 + N, \ldots, m_k + N\}$ generates $Q$ since $\pi_N$ is surjective and linear.

2. $\Rightarrow$ 1.: Let now $P \leq M$. Then $P \cap N$ is finitely generated, since it is also a submodule of $N$. Furthermore,

$(P + N)/N$

is finitely generated, since it is a submodule of $M/N$. Let $\{p_1 + N, \ldots, p_k + N\}$ be a generating set of $(P + N)/N$, where we may choose the $p_j$ within $P$. Let further $\{q_1, \ldots, q_l\}$ be a finite generating set of $P \cap N$, and set $S := \{p_1, \ldots, p_k, q_1, \ldots, q_l\}$. Let $p \in P$ be arbitrary. Then $p + N \in (P + N)/N$, hence $p + N = r_1 p_1 + \cdots + r_k p_k + N$ (with suitable $r_j \in R$) and thus $p = r_1 p_1 + \cdots + r_k p_k + n$, where $n \in N$; we even have $n \in P \cap N$ due to $n = p - (r_1 p_1 + \cdots + r_k p_k) \in P$, which is why we may write it as a linear combination of elements of $\{q_1, \ldots, q_l\}$. Hence, $S$ is a finite generating set of $P$.

Proof 4:

We use the characterisation of Noetherian modules as those with maximal elements for nonempty sets of submodules.

1. $\Rightarrow$ 2.: If $\mathcal S$ is a nonempty family of submodules of $N$, it is also a family of submodules of $M$ and hence contains a maximal element.

If $\mathcal T$ is a nonempty family of submodules of $M/N$, then $\{\pi_N^{-1}(Q) \mid Q \in \mathcal T\}$ is a family of submodules of $M$, which has a maximal element $\pi_N^{-1}(Q_0)$. Since $\pi_N$ is inclusion-preserving and $\pi_N(\pi_N^{-1}(Q)) = Q$ for all $Q \in \mathcal T$, $Q_0$ is maximal among $\mathcal T$.

2. $\Rightarrow$ 1.: Let $\mathcal S$ be a nonempty family of submodules of $M$. According to the hypothesis, the subfamily $\mathcal S' \subseteq \mathcal S$, consisting of those $P \in \mathcal S$ for which $P \cap N$ is a maximal element of the family $\{P \cap N \mid P \in \mathcal S\}$, is nonempty. Hence, the family $\{(P + N)/N \mid P \in \mathcal S'\}$ has a maximal element $(P_0 + N)/N$ with $P_0 \in \mathcal S'$. We claim that $P_0$ is maximal among $\mathcal S$. Indeed, let $P_0 \subseteq P \in \mathcal S$. Then $P \cap N = P_0 \cap N$ since $P_0 \cap N \subseteq P \cap N$ and $P_0 \cap N$ is maximal; hence $P \in \mathcal S'$ and, by the maximality of $(P_0 + N)/N$, $P + N = P_0 + N$. Furthermore, let $p \in P$. Then $p \in P + N = P_0 + N$, that is $p = p_0 + n$ for suitable $p_0 \in P_0$, $n \in N$. Thus $n = p - p_0 \in P \cap N = P_0 \cap N$, which must be contained within $P_0$, and thus also $p \in P_0$.

We could also have first maximised the $(P + N)/N$ and then the $P \cap N$.

These proofs show that if the axiom of choice turns out to be contradictory to evident principles, then the different types of Noetherian modules still have some properties in common.

The analogous statement also holds for Artinian modules:

Theorem 6.10:

Let $M$ be a module over a ring $R$, and let $N \leq M$. Then the following are equivalent:

  1. $M$ is Artinian.
  2. $N$ and $M/N$ are Artinian.

That statement is proven as in proofs 1 or 2 of the previous theorem.

Lemma 6.11:

Let $M, N$ be modules, and let $f: M \to N$ be a module isomorphism. Then

$M \text{ is Noetherian} \Leftrightarrow N \text{ is Noetherian}$.

Proof:

Since $f^{-1}$ is also a module isomorphism, proving "$\Rightarrow$" suffices.

Let $M$ be Noetherian. Using that $f$ is an inclusion-preserving bijection between submodules which maps generating sets to generating sets (due to linearity), we can use either characterisation of Noetherian modules to prove that $N$ is Noetherian.

Theorem 6.12:

Let $M, N$ be modules and let $f: M \to N$ be a surjective module homomorphism. If $M$ is Noetherian, then so is $N$.

Proof:

Let $K := \ker f$, a submodule of $M$. By the first isomorphism theorem, we have $N = \operatorname{im} f \cong M/K$. By theorem 6.9, $M/K$ is Noetherian. Hence, by lemma 6.11, $N$ is Noetherian.

Exercises

  • Exercise 6.2.1: Is every Noetherian module finitely generated?
  • Exercise 6.2.2: We define the ring $R := \mathbb R[x_1, x_2, x_3, \ldots]$ as the real polynomials in infinitely many variables, i.e. each polynomial involves only finitely many of the variables. Prove that $R$ is a finitely generated $R$-module over itself which is not Noetherian.

The Cayley–Hamilton theorem and Nakayama's lemma

Determinants within a commutative ring

We shall now derive the notion of a determinant in the setting of a commutative ring.

Definition 7.1 (Determinant):

Let $R$ be a commutative ring, and let $n \in \mathbb N$. A determinant is a function $\det: R^{n \times n} \to R$ satisfying the following three axioms:

  1. $\det(I_n) = 1$, where $I_n$ is the identity matrix.
  2. If $A$ is a matrix such that two adjacent columns are equal, then $\det(A) = 0$.
  3. For each $j \in \{1, \ldots, n\}$ we have $\det(a_1, \ldots, a_j + r b, \ldots, a_n) = \det(a_1, \ldots, a_j, \ldots, a_n) + r \det(a_1, \ldots, b, \ldots, a_n)$, where $a_1, \ldots, a_n, b \in R^n$ are columns and $r \in R$.

We shall later see that there exists exactly one determinant.

Theorem 7.2 (Properties of a (the) determinant):

  1. If $A$ has a column consisting entirely of zeroes, then $\det A = 0$.
  2. If $A$ is a matrix, and one adds a multiple of one column to an adjacent column, then $\det A$ does not change.
  3. If two adjacent columns of $A$ are exchanged, then $\det A$ is multiplied by $-1$.
  4. If any two columns of a matrix $A$ are exchanged, then $\det A$ is multiplied by $-1$.
  5. If $A$ is a matrix, and one adds a multiple of one column to any other column, then $\det A$ does not change.
  6. If $A$ is a matrix that has two equal columns, then $\det A = 0$.
  7. Let $\sigma \in S_n$ be a permutation, where $S_n$ is the $n$-th symmetric group. If $B = (a_{\sigma(1)}, \ldots, a_{\sigma(n)})$ arises from $A = (a_1, \ldots, a_n)$ by permuting the columns according to $\sigma$, then $\det B = \operatorname{sgn}(\sigma) \det A$.

Proofs:

1. Let $A = (a_1, \ldots, a_n)$, where the $j$-th column is the zero vector. Then by axiom 3 for the determinant, setting $a_j = 0$, $b = 0$ and $r = 1$,

$\det A = \det A + \det A$,

from which the theorem follows by subtracting $\det A$ from both sides.

This proof corresponds to the proof of $f(0) = 0$ for a linear map $f$ (in whatever context).

2. If we set $b = a_{j-1}$ or $b = a_{j+1}$ (dependent on whether we add the column left or the column right of the current column), then axiom 3 gives us

$\det(a_1, \ldots, a_j + r b, \ldots, a_n) = \det A + r \det(a_1, \ldots, b, \ldots, a_n)$,

where the latter determinant is zero because we have two adjacent equal columns.

3. Consider the two matrices $(a_1, \ldots, a_j, a_{j+1}, \ldots, a_n)$ and $(a_1, \ldots, a_j + a_{j+1}, a_j + a_{j+1}, \ldots, a_n)$. By 7.2 2. and axiom 3 for determinants, we have

$0 = \det(\ldots, a_j + a_{j+1}, a_j + a_{j+1}, \ldots) = \det(\ldots, a_j, a_{j+1}, \ldots) + \det(\ldots, a_{j+1}, a_j, \ldots)$.

4. We exchange the $i$-th and $j$-th column ($i < j$) by first moving the $i$-th column successively to spot $j$ (using $j - i$ swaps of adjacent columns) and then the $j$-th column, which is now one step closer to the $i$-th spot, to spot $i$ using $j - i - 1$ swaps. In total, we used an odd number of swaps, and all the other columns are in the same place since they moved once to the right and once to the left. Hence, 4. follows from applying 3. to each swap.

5. Let's say we want to add $r a_i$ to the $j$-th column. Then we first use 4. to put the $i$-th column adjacent to the $j$-th, then use 2. to do the addition without change to the determinant, and then use 4. again to put the $i$-th column back to its place. In total, the only change our determinant has suffered was twice multiplication by $-1$, which cancels even in a general ring.

6. Let's say that the $i$-th column and the $j$-th column are equal, $i \neq j$. Then we subtract column $i$ from column $j$ (or, indeed, the other way round) without change to the determinant, obtain a matrix with a zero column and apply 1.

7. Split $\sigma$ into transpositions, use 4. repeatedly and use further that $\operatorname{sgn}$ is a group homomorphism.

Note that we have only used axioms 2 & 3 for the preceding proofs.

The following lemma will allow us to prove the uniqueness of the determinant, and also the formula $\det(AB) = \det(A)\det(B)$.

Lemma 7.3:

Let $A = (a_{i,j})$ and $B = (b_{i,j})$ be two $n \times n$ matrices with entries in a commutative ring $R$, and let $\det$ satisfy axioms 2 and 3 of definition 7.1. Then

$\det(AB) = \left( \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \, b_{\sigma(1),1} \cdots b_{\sigma(n),n} \right) \det(A)$.

Proof:

The matrix $AB$ has $j$-th column $\sum_{i=1}^n b_{i,j} a_i$, where $a_1, \ldots, a_n$ are the columns of $A$. Hence, by axiom 3 for determinants and theorem 7.2, 7. and 6., we obtain:

$\det(AB) = \sum_{i_1, \ldots, i_n = 1}^n b_{i_1,1} \cdots b_{i_n,n} \det(a_{i_1}, \ldots, a_{i_n}) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \, b_{\sigma(1),1} \cdots b_{\sigma(n),n} \det(A)$,

since the determinant $\det(a_{i_1}, \ldots, a_{i_n})$ vanishes whenever two of the $i_k$ coincide.

Theorem 7.4 (Uniqueness of the determinant):

For each commutative ring, there is at most one determinant, and if it exists, it equals

$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \, a_{\sigma(1),1} \cdots a_{\sigma(n),n}$.

Proof:

Let $B$ be an arbitrary matrix, and set $A := I_n$ in lemma 7.3, so that $AB = B$. Then we obtain by axiom 1 for determinants (the first time we use that axiom)

$\det(B) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \, b_{\sigma(1),1} \cdots b_{\sigma(n),n}$.
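The Leibniz formula of theorem 7.4 can be evaluated directly. Here is a minimal sketch (Python, ad-hoc helper names, with integer matrices standing in for a general commutative ring):

```python
from itertools import permutations
from math import prod

def sign(p):
    """Sign of a permutation of {0, ..., n-1}, computed via inversion count."""
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(a):
    """Leibniz formula: sum over all permutations of signed entry products."""
    n = len(a)
    return sum(sign(p) * prod(a[p[j]][j] for j in range(n))
               for p in permutations(range(n)))

print(det([[1, 2], [3, 4]]))                    # -2
print(det([[2, 0, 1], [1, 1, 0], [0, 3, 1]]))   # 5
```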

Theorem 7.5 (Multiplicativity of the determinant):

If $\det$ is a determinant, then

$\det(AB) = \det(A)\det(B)$.

Proof:

From lemma 7.3 and theorem 7.4 we may infer

$\det(AB) = \left( \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \, b_{\sigma(1),1} \cdots b_{\sigma(n),n} \right) \det(A) = \det(B)\det(A)$.

Theorem 7.6 (Existence of the determinant):

Let $R$ be a commutative ring. Then

$\det(A) := \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \, a_{\sigma(1),1} \cdots a_{\sigma(n),n}$

is a determinant.

Proof:

First of all, the identity matrix $I_n$ has nonzero entries only on the diagonal. Hence, if $\sigma \neq \operatorname{id}$, the product $a_{\sigma(1),1} \cdots a_{\sigma(n),n}$ vanishes, i.e. only $\sigma = \operatorname{id}$ contributes. Hence $\det(I_n) = 1$.

Let now $A$ be a matrix whose $j$-th and $(j+1)$-st columns are equal. The function

$S_n \to S_n, \quad \sigma \mapsto \sigma \circ \tau$, where $\tau$ is the transposition of $j$ and $j+1$,

is bijective, since the inverse is given by itself. Furthermore, since it amounts to composing with another swap, it is sign reversing. Hence, we have

$\det(A) = \sum_{\substack{\sigma \in S_n \\ \operatorname{sgn}(\sigma) = 1}} \left( a_{\sigma(1),1} \cdots a_{\sigma(n),n} - a_{\sigma\tau(1),1} \cdots a_{\sigma\tau(n),n} \right)$.

Now since the $j$-th and $(j+1)$-st column of $A$ are identical, $a_{\sigma\tau(1),1} \cdots a_{\sigma\tau(n),n} = a_{\sigma(1),1} \cdots a_{\sigma(n),n}$. Hence $\det(A) = 0$.

Linearity follows from the linearity of each summand:

$\sum_\sigma \operatorname{sgn}(\sigma) \, a_{\sigma(1),1} \cdots (a_{\sigma(j),j} + r b_{\sigma(j)}) \cdots a_{\sigma(n),n} = \det(a_1, \ldots, a_n) + r \det(a_1, \ldots, b, \ldots, a_n)$.

Theorem 7.7:

The determinant of any matrix equals the determinant of the transpose of that matrix.

Proof:

Observe that inversion is a bijection on $S_n$ the inverse of which is given by inversion ($(\sigma^{-1})^{-1} = \sigma$). Further observe that $\operatorname{sgn}(\sigma^{-1}) = \operatorname{sgn}(\sigma)$, since we just apply all the transpositions in reverse order. Hence,

$\det(A^T) = \sum_\sigma \operatorname{sgn}(\sigma) \, a_{1,\sigma(1)} \cdots a_{n,\sigma(n)} = \sum_\sigma \operatorname{sgn}(\sigma) \, a_{\sigma^{-1}(1),1} \cdots a_{\sigma^{-1}(n),n} = \sum_\sigma \operatorname{sgn}(\sigma) \, a_{\sigma(1),1} \cdots a_{\sigma(n),n} = \det(A)$.

Theorem 7.8 (column expansion):

Let $A$ be an $n \times n$ matrix over a commutative ring $R$. For $i, j \in \{1, \ldots, n\}$ define $A_{i,j}$ to be the matrix obtained by crossing out the $i$-th row and $j$-th column from $A$. Then for any $j \in \{1, \ldots, n\}$ we have

$\det(A) = \sum_{i=1}^n (-1)^{i+j} a_{i,j} \det(A_{i,j})$.

Proof 1:

We prove the theorem from the formula for the determinant given by theorems 7.4 and 7.6.

Let $j$ be fixed. For each $i \in \{1, \ldots, n\}$, we define

$S_n^{i,j} := \{ \sigma \in S_n \mid \sigma(j) = i \}$.

Then $S_n$ is the disjoint union of the $S_n^{i,j}$, $i = 1, \ldots, n$, and hence

$\det A = \sum_{i=1}^n \sum_{\sigma \in S_n^{i,j}} \operatorname{sgn}(\sigma) \, a_{\sigma(1),1} \cdots a_{\sigma(n),n} = \sum_{i=1}^n a_{i,j} \sum_{\sigma \in S_n^{i,j}} \operatorname{sgn}(\sigma) \prod_{k \neq j} a_{\sigma(k),k} = \sum_{i=1}^n (-1)^{i+j} a_{i,j} \det(A_{i,j})$,

where the last equality follows since the permutations in $S_n^{i,j}$ correspond to permutations of the rows and columns of $A_{i,j}$, at the cost of the sign $(-1)^{i+j}$ needed to move row $i$ and column $j$ into place.

Proof 2:

We note that all of the above derivations could have been done with rows instead of columns (which amounts to nothing more than exchanging $a_{i,j}$ with $a_{j,i}$ each time), and we would have ended up with the same formula for the determinant since

$\det(A) = \det(A^T)$

as argued in theorem 7.7.

Hence, we prove that the function given by the formula

$d(A) := \sum_{i=1}^n (-1)^{i+j} a_{i,j} \det(A_{i,j})$

satisfies axioms 1 - 3 of definition 7.1 with rows instead of columns, and then apply theorem 7.4 with rows instead of columns.

1.

Set $A = I_n$ to obtain

$d(I_n) = (-1)^{j+j} \cdot 1 \cdot \det((I_n)_{j,j}) = \det(I_{n-1}) = 1$,

since all other summands vanish.

2.

Let $A$ have two equal adjacent rows, the $k$-th and $(k+1)$-st, say. Then

$d(A) = (-1)^{k+j} a_{k,j} \det(A_{k,j}) + (-1)^{k+1+j} a_{k+1,j} \det(A_{k+1,j}) = 0$,

since each of the $A_{i,j}$ has two equal adjacent rows except for possibly $A_{k,j}$ and $A_{k+1,j}$, which is why, by theorem 7.6, the determinant is zero in all those cases, and further $A_{k,j} = A_{k+1,j}$ and $a_{k,j} = a_{k+1,j}$, since in both we deleted "the same" row.

3.

Define $B$ as the matrix $A$ with the $k$-th row replaced by $a_k + r b$, and for each $i$ define $B_{i,j}$ as the matrix obtained by crossing out the $i$-th row and the $j$-th column from the matrix $B$. Then by theorem 7.6 and axiom 3 for the determinant, each summand $(-1)^{i+j} b_{i,j} \det(B_{i,j})$ splits into the corresponding summand for $A$ plus $r$ times the corresponding summand for $A$ with the $k$-th row replaced by $b$.

Hence follows linearity by rows.

For the sake of completeness, we also note the following lemma:

Lemma 7.9:

Let $A$ be an invertible matrix. Then $\det A$ is invertible.

Proof:

Indeed, $1 = \det(I_n) = \det(A A^{-1}) = \det(A) \det(A^{-1})$ due to the multiplicativity of the determinant.

The converse is also true and will be proven in the next subsection.

Exercises

  • Exercise 7.1.1: Argue that the determinant, seen as a map from the set of all matrices to itself (where scalars are $1 \times 1$ matrices), is idempotent.

Cramer's rule in the general case

Theorem 7.10 (Cramer's rule, solution of linear equations):

Let $R$ be a commutative ring, let $A$ be an $n \times n$ matrix with entries in $R$ and let $b \in R^n$ be a vector. If $A$ is invertible, the unique solution to the equation $Ax = b$ is given by

$x_j = (\det A)^{-1} \det(A_j^b)$,

where $A_j^b$ is obtained by replacing the $j$-th column of $A$ by $b$.

Proof 1:

Let $j \in \{1, \ldots, n\}$ be arbitrary but fixed. The determinant of $A$ is linear in the $j$-th column, and hence

$L_j: R^n \to R, \quad L_j(v) := \det(a_1, \ldots, a_{j-1}, v, a_{j+1}, \ldots, a_n)$

constitutes a linear map sending any vector $v$ to the determinant of $A$ with the $j$-th column replaced by that vector. If $a_j$ is the $j$-th column of $A$, $L_j(a_j) = \det A$. Furthermore, if we insert a different column $a_i$ ($i \neq j$) of $A$, we obtain zero, since we obtain the determinant of a matrix where the column $a_i$ appears twice. We now consider the system of equations

$x_1 a_1 + \cdots + x_n a_n = b$,

where $x$ is the unique solution of the system $Ax = b$, which exists since it is given by $x = A^{-1} b$ since $A$ is invertible. Since $L_j$ is linear, we find a $1 \times n$ matrix $M_j$ such that for all $v$

$L_j(v) = M_j v$;

in fact, due to theorem 7.8, $M_j = \left( (-1)^{1+j} \det(A_{1,j}), \ldots, (-1)^{n+j} \det(A_{n,j}) \right)$. We now add up the lines of the linear equation system above in the following way: We take $(M_j)_1$ times the first row, add $(M_j)_2$ times the second row and so on. Due to our considerations, this yields the result

$x_j \det A = L_j(b) = \det(A_j^b)$.

Due to lemma 7.9, $\det A$ is invertible. Hence, we get

$x_j = (\det A)^{-1} \det(A_j^b)$

and hence the theorem.

Proof 2:

For all $j$, we define the matrix

$X_j := (e_1, \ldots, e_{j-1}, x, e_{j+1}, \ldots, e_n)$;

this matrix shall represent a unit matrix, where the $j$-th column is replaced by the vector $x$. By expanding the $j$-th column, we find that the determinant of this matrix is given by $x_j$.

We now note that if $Ax = b$, then $A X_j = A_j^b$. Hence

$\det(A_j^b) = \det(A X_j) = \det(A) \det(X_j) = \det(A) \, x_j$,

where the last equality follows as in lemma 7.9.

Theorem 7.11 (Cramer's rule, matrix inversion):

Let $A$ be an $n \times n$ matrix with entries in a commutative ring $R$. We recall that the cofactor matrix of $A$ is the matrix $C$ with $(i,j)$-th entry

$c_{i,j} := (-1)^{i+j} \det(A_{i,j})$,

where $A_{i,j}$ is obtained from $A$ by crossing out the $i$-th row and $j$-th column. We further recall that the adjugate matrix was given by

$\operatorname{adj}(A) := C^T$.

With this definition, we have

$\operatorname{adj}(A) \, A = A \, \operatorname{adj}(A) = \det(A) \, I_n$.

In particular, if $\det(A)$ is a unit within $R$, then $A$ is invertible and

$A^{-1} = \det(A)^{-1} \operatorname{adj}(A)$.

Proof:

For $k \in \{1, \ldots, n\}$, we set $e_k := (0, \ldots, 0, 1, 0, \ldots, 0)^T$, where the $1$ is at the $k$-th place. Further, we set $L_j$ to be the linear function from proof 1 of theorem 7.10, and $M_j$ its matrix. Then the $(j,k)$-th entry of $\operatorname{adj}(A) \, A$ is given by

$M_j a_k = L_j(a_k)$

due to theorem 7.8. Hence,

$(\operatorname{adj}(A) \, A)_{j,k} = L_j(a_k) = \begin{cases} \det A & j = k \\ 0 & j \neq k, \end{cases}$

where we used the properties of $L_j$ established in proof 1 of theorem 7.10.
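To make theorem 7.11 concrete, here is a small sketch (Python, ad-hoc names) inverting a matrix over $\mathbb Z/9\mathbb Z$, where the determinant happens to be $1$ and hence a unit:

```python
from itertools import permutations
from math import prod

def sign(p):
    inv = sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det(a):
    n = len(a)
    return sum(sign(p) * prod(a[p[j]][j] for j in range(n))
               for p in permutations(range(n)))

def minor(a, i, j):
    return [[a[r][c] for c in range(len(a)) if c != j]
            for r in range(len(a)) if r != i]

def adjugate(a):
    # (j, i)-loop order builds the transpose of the cofactor matrix.
    n = len(a)
    return [[(-1) ** (i + j) * det(minor(a, i, j)) for i in range(n)]
            for j in range(n)]

A = [[2, 1], [1, 1]]                   # det(A) = 1, a unit mod 9
inv = [[x % 9 for x in row] for row in adjugate(A)]   # A^{-1} = 1^{-1} * adj(A)
check = [[sum(A[i][k] * inv[k][j] for k in range(2)) % 9 for j in range(2)]
         for i in range(2)]
print(check)   # [[1, 0], [0, 1]]
```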

The theorems

Now we may finally apply the machinery we have set up to prove the following two fundamental theorems.

Theorem 7.12 (the Cayley–Hamilton theorem):

Let $M$ be a finitely generated $R$-module, let $\varphi: M \to M$ be a module morphism and let $I \leq R$ be an ideal of $R$ such that $\varphi(M) \subseteq I M$. Then there exist $n \in \mathbb N$ and $a_{n-1}, \ldots, a_0 \in I$ such that

$\varphi^n + a_{n-1} \varphi^{n-1} + \cdots + a_1 \varphi + a_0 = 0$;

this equation is to be read as

$\forall m \in M: \ \varphi^n(m) + a_{n-1} \varphi^{n-1}(m) + \cdots + a_1 \varphi(m) + a_0 \cdot m = 0$,

where $\varphi^k$ means applying $\varphi$ $k$ times.

Note that the polynomial in $\varphi$ is monic, that is, the leading coefficient is $1$, the unit of the ring in question.

Proof: Assume that $\{m_1, \ldots, m_n\}$ is a generating set for $M$. Since $\varphi(M) \subseteq I M$, we may write

$\varphi(m_i) = \sum_{j=1}^n a_{i,j} m_j$ (*),

where $a_{i,j} \in I$ for each $i, j$. We now define a new commutative ring as follows:

$S := R[\varphi]$,

where we regard each element $r$ of $R$ as the endomorphism $m \mapsto r \cdot m$ on $M$. That is, $S$ is a subring of the endomorphism ring of $M$ (that is, multiplication is given by composition). Since $\varphi$ is $R$-linear, $S$ is commutative.

Now to every matrix $B = (b_{i,j})$ with entries in $S$ we may associate a function

$F_B: M^n \to M^n, \quad F_B(x_1, \ldots, x_n)^T := \left( \sum_{j=1}^n b_{1,j}(x_j), \ldots, \sum_{j=1}^n b_{n,j}(x_j) \right)^T$.

By exploiting the linearities of all functions involved, it is easy to see that for another matrix with entries in $S$ called $C$, the associated function of $BC$ equals the composition of the associated functions of $B$ and $C$; that is, $F_{BC} = F_B \circ F_C$.

Now with this in mind, we may rewrite the system (*) as follows:

$F_A(m_1, \ldots, m_n)^T = 0$,

where $A$ has $(i,j)$-th entry $\delta_{i,j} \varphi - a_{i,j}$. Now define $B := \operatorname{adj}(A)$. From Cramer's rule (theorem 7.11) we obtain that

$B A = \det(A) \, I_n$,

which is why

$F_{BA}(m_1, \ldots, m_n)^T = (\det(A)(m_1), \ldots, \det(A)(m_n))^T = 0$, the zero vector.

Hence, $\det(A) \in S$ is the zero mapping, since it sends all generators to zero. Now further, as can be seen e.g. from the representation given in theorem 7.4, it has the form

$\det(A) = \varphi^n + a_{n-1} \varphi^{n-1} + \cdots + a_1 \varphi + a_0$

for suitable $a_{n-1}, \ldots, a_0 \in I$.
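For $R = \mathbb Z$, $M = \mathbb Z^2$, $I = R$ and $\varphi$ given by an integer matrix, theorem 7.12 specializes to the classical Cayley–Hamilton theorem: a $2 \times 2$ matrix satisfies $A^2 - \operatorname{tr}(A) A + \det(A) I = 0$. A quick numerical check (Python, ad-hoc helper names):

```python
def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_add(a, b):
    return [[a[i][j] + b[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, a):
    return [[c * a[i][j] for j in range(2)] for i in range(2)]

A = [[3, 1], [4, 2]]
tr = A[0][0] + A[1][1]                       # trace = 5
d = A[0][0] * A[1][1] - A[0][1] * A[1][0]    # determinant = 2
I = [[1, 0], [0, 1]]

# A^2 - tr(A)*A + det(A)*I should be the zero matrix
result = mat_add(mat_add(mat_mul(A, A), mat_scale(-tr, A)), mat_scale(d, I))
print(result)   # [[0, 0], [0, 0]]
```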

Theorem 7.13 (Nakayama's lemma):

Let $R$ be a ring, $M$ a finitely generated $R$-module and $I \leq R$ an ideal such that $I M = M$. Then there exists an $r \in R$ with $r \equiv 1 \bmod I$ such that $r M = 0$.

Proof:

Choose $\varphi = \operatorname{id}_M$ in theorem 7.12 to obtain for all $m \in M$ that

$(1 + a_{n-1} + \cdots + a_1 + a_0) \cdot m = 0$

for suitable $a_{n-1}, \ldots, a_0 \in I$, since the identity is idempotent. Set $r := 1 + a_{n-1} + \cdots + a_0$.
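A concrete illustration (a standard example, added for concreteness): take $R = \mathbb Z$, $M = \mathbb Z/5\mathbb Z$ and $I = 2\mathbb Z$. Then $I M = M$, since $2$ is invertible modulo $5$ ($2 \cdot 3 = 6 \equiv 1$). Nakayama's lemma hence promises an $r \equiv 1 \bmod 2$ with $r M = 0$; indeed $r = 5$ works.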

Direct products, direct sums and the tensor product

Direct products and direct sums

Definition 8.1:

Let $(M_i)_{i \in I}$ be modules. The direct product of the $M_i$ is the (possibly infinite) cartesian product

$\prod_{i \in I} M_i$

together with component-wise addition, module operation and thus zero and additive inverses.

Theorem 8.2:

In the category of modules, the direct product constitutes a product.

Proof:

Let $\mathcal I$ be any index category that contains one object for each $i \in I$, no other objects, and only the identity morphisms. Let $N$ be any other object such that

Definition 8.3:

Let $R$ be a commutative ring, and let $(M_i)_{i \in I}$ be modules over $R$. The direct sum

$\bigoplus_{i \in I} M_i$

is defined to be the module consisting of tuples $(m_i)_{i \in I}$ where only finitely many of the $m_i$s are nonzero, together with component-wise addition and component-wise module operation.

Lemma 8.4:

Let $(M_i)_{i \in I}$ be modules. Their direct sum is a submodule of the direct product.

Proof:

The direct sum is a subset of the direct product, and with the same operations it is itself a module (sums and scalar multiples of tuples with finitely many nonzero entries again have finitely many nonzero entries). Therefore we have a submodule.

Lemma 8.5:

For each $j \in I$, there is a canonical morphism

$\iota_j: M_j \to \bigoplus_{i \in I} M_i$.

Proof:

$\iota_j(m) := (m_i)_{i \in I}$ with $m_j := m$ and $m_i := 0$ for $i \neq j$.

Lemma 8.6:

$\operatorname{Hom}\left( \bigoplus_{i \in I} M_i, N \right) \cong \prod_{i \in I} \operatorname{Hom}(M_i, N)$.

Proof:

Consider the morphism

$\Phi: \operatorname{Hom}\left( \bigoplus_{i \in I} M_i, N \right) \to \prod_{i \in I} \operatorname{Hom}(M_i, N), \quad \Phi(f) := (f \circ \iota_i)_{i \in I}$.

We claim that this is an isomorphism, so we check all points.

1. Well-defined:

Both $f$ and $\iota_i$ are morphisms (with suitable domains and images), so $f \circ \iota_i$ is as well.

2. Injective:

Assume $\Phi(f) = \Phi(g)$. Then for any $(m_i)_{i \in I}$ contained in $\bigoplus_{i \in I} M_i$ we have

$f((m_i)_{i \in I}) = \sum_{i \in I} f(\iota_i(m_i)) = \sum_{i \in I} g(\iota_i(m_i)) = g((m_i)_{i \in I})$;

note that the sum is finite, since we are in the direct sum; this is necessary since infinite sums are not defined. Hence $f = g$.

3. Surjective:

Let $(f_i)_{i \in I} \in \prod_{i \in I} \operatorname{Hom}(M_i, N)$. Define

$f((m_i)_{i \in I}) := \sum_{i \in I} f_i(m_i)$.

The latter sum is finite because all but finitely many of the $m_i$ are zero. Thus this is well-defined as a function, and direct computation easily proves that it is $R$-linear. Hence we have a morphism, and further

$\Phi(f) = (f \circ \iota_i)_{i \in I} = (f_i)_{i \in I}$.

Theorem 8.7:

The direct sum is a coproduct in the category of modules.

Quotient spaces

This will then be used to construct the tensor product.

The tensor product

Definition 8.8:

Let $R$ be a ring and $M, N$ modules over that ring. Consider the set of all pairs

$M \times N = \{ (m, n) \mid m \in M, n \in N \}$

and endow this with multiplication and addition by formal linear combinations, producing elements such as

$r_1 (m_1, n_1) + r_2 (m_2, n_2) + \cdots + r_k (m_k, n_k)$,

where the $r_j$ are in $R$. We have obtained the module of formal linear combinations (call it $F$). Set the submodule

$U := \langle (m + m', n) - (m, n) - (m', n), \ (m, n + n') - (m, n) - (m, n'), \ (r m, n) - r (m, n), \ (m, r n) - r (m, n) \rangle$,

the generated submodule (where $m, m' \in M$, $n, n' \in N$, $r \in R$ range over all possibilities). We form the quotient

$M \otimes N := F / U$.

This is called the tensor product. To indicate that $M, N$ are $R$-modules, one often writes

$M \otimes_R N$.
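As a first computation with this definition (a standard example, added for concreteness): in $\mathbb Z/2 \otimes_{\mathbb Z} \mathbb Z/3$ every elementary tensor vanishes, since

$m \otimes n = 3(m \otimes n) - 2(m \otimes n) = m \otimes 3n - 2m \otimes n = m \otimes 0 - 0 \otimes n = 0;$

hence $\mathbb Z/2 \otimes_{\mathbb Z} \mathbb Z/3 = 0$. More generally, $\mathbb Z/a \otimes_{\mathbb Z} \mathbb Z/b \cong \mathbb Z/\gcd(a, b)$.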

The following theorem shows that the tensor product has something to do with bilinear maps:

Theorem 8.9:

Let $M, N, P$ be $R$-modules and let $f: M \times N \to P$ be $R$-bilinear. Then there exists a unique morphism $\tilde f: M \otimes N \to P$ such that the following diagram commutes:

$f = \tilde f \circ \pi$, where $\pi: M \times N \to M \otimes N$, $\pi(m, n) := m \otimes n := [(m, n)]$.

Proof:

Let $f : M \times N \to P$ be any $R$-bilinear map. Define

$\overline{f}\big( [(m, n)] \big) := f(m, n)$,

where the square brackets indicate the equivalence class, and extend linearly.

Once we have proved that this is well-defined, the linearity of $\overline{f}$ easily follows. We thus have to show that $\overline{f}$ maps equivalent vectors to the same element, which after subtracting the right hand side follows from every element of $U$ mapping to zero.

Indeed, let

$u = r_1 u_1 + \cdots + r_k u_k \in U$,

where all $u_j$ are one of the four types of generators of $U$. By distinguishing cases, one obtains that each type of generator of $U$ is mapped to zero by the linear extension of $f$ because of bilinearity. Well-definedness follows, and linearity is clear from the definition and since addition and module operation interchange with equivalence class formation.

Note that from a category theory perspective, theorem 8.9 states that for any two modules $M, N$ over the same ring, the arrow

$\otimes : M \times N \to M \otimes N$

is a universal arrow. Hence, we call the result of theorem 8.9 the universal property of the tensor product.

Lemma 8.10:

Let $R$ be a ring and $M$ be an $R$-module. Recall that using canonical operations, $R$ is an $R$-module over itself. We have

$R \otimes_R M \cong M$.

Proof:

Define the morphism

$R \times M \to M, \quad (r, m) \mapsto rm$,

extend it to all formal linear combinations via summation

and then observe that the induced map on $R \otimes_R M$

is well-defined; again, by subtracting the right hand side, it's enough to show that every element of $U$ is mapped to zero, and this is again done by consideration of each of the four generating types.

This is a morphism as shown by direct computation (using the rules for the module operation), it is clearly surjective (map $1 \otimes m \mapsto m$) and it is injective because if

$r_1 m_1 + \cdots + r_k m_k = 0$, then

$r_1 \otimes m_1 + \cdots + r_k \otimes m_k = 1 \otimes (r_1 m_1 + \cdots + r_k m_k) = 1 \otimes 0 = 0$

since $r \otimes m = 1 \otimes rm$ in $R \otimes_R M$.

Lemma 8.11:

Let $M, N, P$ be $R$-modules. Then

$(M \otimes N) \otimes P \cong M \otimes (N \otimes P)$.

Proof:

For fixed, define the bilinear function

.

Applying theorem 8.9 yields

such that . Then define

.

This function is bilinear (linearity in from

)

and thus theorem 8.9 yields a morphism

such that

.

An analogous process yields a morphism

such that

.

Since addition within tensor products commutes with equivalence class formation, and are inverses.

Lemma 8.12:

Let $(M_i)_{i \in I}$ be $R$-modules, let $N$ be an $R$-module. Then

$\left( \bigoplus_{i \in I} M_i \right) \otimes N \cong \bigoplus_{i \in I} \left( M_i \otimes N \right)$.

Proof:

We define

.

This is bilinear (since formation of equivalence classes commutes with summation and module operation), and hence theorem 8.9 yields a morphism

such that

.

This is obviously surjective. It is injective because

by the linearity of and component-wise addition in the direct sum, and equality for the direct sum is component-wise. We split the argument up into sums where only one component of the right direct sum matters, and observe equality since we divide out isomorphic spaces.

Lemma 8.13:

For any two $R$-modules $M, N$ we have $M \otimes N \cong N \otimes M$.

Proof:

Linear extension of

defines a morphism which is well-defined due to symmetry, linear by definition and bijective because of the obvious inverse.

We have proven:

Theorem 8.14:

Let $R$ be a fixed ring. The collection of all $R$-modules, taken up to isomorphism, forms a commutative semiring, where the addition is given by $\oplus$ (direct sum), the multiplication by $\otimes$ (tensor product), the zero by the trivial module $\{0\}$ and the unit by $R$.

Note that we have more: by lemma 8.12, even infinite direct sums (uncountably many, as many as you like, ...) distribute over the tensor product, although only finite direct sums are identical to the direct product.

Theorem 8.15 ("tensor-hom adjunction"):

Let $M, N, P$ be $R$-modules. Then

$\operatorname{Hom}(M \otimes N, P) \cong \operatorname{Hom}(M, \operatorname{Hom}(N, P))$.

Proof:

Set

.

Due to the equalities holding for elements of the tensor product and the linearity of , this is well-defined. Further, we obviously have linearity in since function addition and module operation are defined point-wise.

Further set

.

By theorem 8.9 and thinking outside the box, we get a map

such that

.

Then and are inverse morphisms, since is determined by what it does on elements of the form .

Theorem 8.16:

Let $M, M'$ be $R$-modules isomorphic to each other (via $f : M \to M'$), and let $N$ be any other $R$-module. Then

$M \otimes N \cong M' \otimes N$

via an isomorphism $\Phi$

such that

$\Phi(m \otimes n) = f(m) \otimes n$

for all $m \in M$, $n \in N$.

Proof:

The map

is bilinear, and hence induces a map

such that

.

Similarly, the map

induces a map

such that

.

These maps are obviously inverse on elements of the type , , and by their linearity and since addition and equivalence classes commute, they are inverse to each other.

Fractions, annihilator[edit | edit source]

Fractions within rings[edit | edit source]

Definition 9.1:

Let $R$ be a commutative ring, and let $S \subseteq R$ be an arbitrary subset. $S$ is called multiplicatively closed iff the following two conditions hold:

  1. $1 \in S$.
  2. $s, t \in S \Rightarrow st \in S$.

Definition 9.2:

Let $R$ be a ring and $S \subseteq R$ a multiplicatively closed subset. Define

$S^{-1}R := (R \times S) / \sim$,

where the equivalence relation $\sim$ is defined as

$(r, s) \sim (r', s') :\Leftrightarrow \exists t \in S : t(rs' - r's) = 0$;

the equivalence class of $(r, s)$ is written $\frac{r}{s}$. Equip this with addition

$\frac{r}{s} + \frac{r'}{s'} := \frac{rs' + r's}{ss'}$

and multiplication

$\frac{r}{s} \cdot \frac{r'}{s'} := \frac{rr'}{ss'}$.

The following two lemmata ensure that everything is correctly defined.

Lemma 9.3:

is an equivalence relation.

Proof:

For reflexivity and symmetry, nothing interesting happens. For transitivity, there is a little twist. Assume

$(r, s) \sim (r', s')$ and $(r', s') \sim (r'', s'')$.

Then there are $t, t' \in S$ such that

$t(rs' - r's) = 0$ and $t'(r's'' - r''s') = 0$.

But in this case, we have

$t t' s' (r s'' - r'' s) = t' s'' \cdot t(r s' - r' s) + t s \cdot t'(r' s'' - r'' s') = 0$;

note $t t' s' \in S$ because $S$ is multiplicatively closed.

Lemma 9.4:

The addition and multiplication given above turn into a ring.

Proof:

We only prove well-definedness; the other rules follow from the definition and direct computation.

Let thus $\frac{r_1}{s_1} = \frac{r_1'}{s_1'}$ and $\frac{r_2}{s_2} = \frac{r_2'}{s_2'}$.

Thus, we have $t_1(r_1 s_1' - r_1' s_1) = 0$ and $t_2(r_2 s_2' - r_2' s_2) = 0$ for suitable $t_1, t_2 \in S$.

We want

$\frac{r_1 s_2 + r_2 s_1}{s_1 s_2} = \frac{r_1' s_2' + r_2' s_1'}{s_1' s_2'}$

and

$\frac{r_1 r_2}{s_1 s_2} = \frac{r_1' r_2'}{s_1' s_2'}$.

These translate to

$t \big( (r_1 s_2 + r_2 s_1) s_1' s_2' - (r_1' s_2' + r_2' s_1') s_1 s_2 \big) = 0$

and

$t \big( r_1 r_2 s_1' s_2' - r_1' r_2' s_1 s_2 \big) = 0$

for suitable $t \in S$. We get the desired result by picking $t = t_1 t_2$ and observing

$t_1 t_2 \big( (r_1 s_2 + r_2 s_1) s_1' s_2' - (r_1' s_2' + r_2' s_1') s_1 s_2 \big) = t_2 s_2 s_2' \cdot t_1 (r_1 s_1' - r_1' s_1) + t_1 s_1 s_1' \cdot t_2 (r_2 s_2' - r_2' s_2) = 0$

and

$t_1 t_2 \big( r_1 r_2 s_1' s_2' - r_1' r_2' s_1 s_2 \big) = t_2 r_2 s_2' \cdot t_1 (r_1 s_1' - r_1' s_1) + t_1 r_1' s_1 \cdot t_2 (r_2 s_2' - r_2' s_2) = 0$.

Note that we were heavily using commutativity here.
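For orientation, two standard examples (added as illustrations; the notation follows definition 9.2). For $R = \mathbb{Z}$ and $S = \{1, 2, 4, 8, \ldots\}$, the ring $S^{-1}R$ is the ring $\mathbb{Z}[\frac{1}{2}]$ of fractions whose denominator is a power of two. The factor $t$ in the equivalence relation matters when zero divisors are present: for $R = \mathbb{Z}/6$ and $S = \{1, 2, 4\}$ one checks that $S^{-1}R \cong \mathbb{Z}/3$; for instance $\frac{3}{1} = \frac{0}{1}$ there, since $2 \cdot (3 \cdot 1 - 0 \cdot 1) = 0$.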

Theorem 9.5 (properties of augmentation):

Let $R$ be a ring and $S \subseteq R$ multiplicatively closed. Set

$\iota : R \to S^{-1}R, \; r \mapsto \frac{r}{1}$,

the projection morphism. Then:

  1. For each $s \in S$, $\iota(s)$ is a unit.
  2. $\iota(r) = 0$ if and only if $rs = 0$ for some $s \in S$.
  3. Every element of $S^{-1}R$ has the form $\iota(r)\iota(s)^{-1}$ for suitable $r \in R$, $s \in S$.
  4. Let $I, J \le R$ be ideals. Then $S^{-1}(IJ) = (S^{-1}I)(S^{-1}J)$, where
$S^{-1}I := \left\{ \frac{i}{s} \;\middle|\; i \in I, s \in S \right\}$.
  5. Let $I \le R$ be an ideal. If $I \cap S \neq \emptyset$, then $S^{-1}I = S^{-1}R$.

We will see further properties like 4. when we go to modules, but we can't phrase it in full generality because in modules, we may not have a product of two module elements.

Proof:

1.:

If $s \in S$, then the rules for multiplication for $S^{-1}R$ indicate that $\frac{1}{s}$ is an inverse for $\iota(s) = \frac{s}{1}$.

2.:

Assume $\iota(r) = \frac{r}{1} = \frac{0}{1}$. Then there exists $s \in S$ such that $sr = 0$; the converse is clear.

3.:

Let $\frac{r}{s}$ be an arbitrary element of $S^{-1}R$. Then $\frac{r}{s} = \frac{r}{1} \cdot \left( \frac{s}{1} \right)^{-1} = \iota(r)\iota(s)^{-1}$.

4.

5.

Let $s \in I \cap S$, that is, $\frac{s}{1} \in S^{-1}I$. Then $\frac{s}{1}$ is a unit in $S^{-1}R$. Further, $S^{-1}I$ is an ideal within $S^{-1}R$, as one checks directly from the definitions of addition and multiplication. Thus, $S^{-1}I = S^{-1}R$.

Theorem 9.6 (universal property):

Let $R$ be a ring, $S \subseteq R$ multiplicatively closed, let $T$ be another ring and let

$\varphi : R \to T$

be a morphism, such that $\varphi(s)$ is a unit for all $s \in S$. Then there exists a unique morphism

$\tilde\varphi : S^{-1}R \to T$

such that

$\tilde\varphi \circ \iota = \varphi$.

Proof:

We first prove uniqueness. Assume there exists another such morphism $\psi$. Then we would have

$\psi\!\left( \frac{r}{s} \right) = \psi\!\left( \frac{r}{1} \right) \psi\!\left( \frac{s}{1} \right)^{-1} = \varphi(r)\varphi(s)^{-1}$.

Then we prove existence; we claim that

$\tilde\varphi\!\left( \frac{r}{s} \right) := \varphi(r)\varphi(s)^{-1}$

defines the desired morphism.

First, we show well-definedness.

Firstly, $\varphi(s)^{-1}$ exists for $s \in S$ by assumption.

Secondly, let $\frac{r}{s} = \frac{r'}{s'}$, that is, $t(rs' - r's) = 0$ for a suitable $t \in S$. Then

$\varphi(t)\big( \varphi(r)\varphi(s') - \varphi(r')\varphi(s) \big) = 0$, and since $\varphi(t)$ is a unit, $\varphi(r)\varphi(s)^{-1} = \varphi(r')\varphi(s')^{-1}$.

The multiplicativity of this morphism is visually obvious (use that $\varphi$ is a morphism and commutativity); additivity is proven as follows:

$\tilde\varphi\!\left( \frac{r}{s} + \frac{r'}{s'} \right) = \varphi(rs' + r's)\varphi(ss')^{-1} = \varphi(r)\varphi(s)^{-1} + \varphi(r')\varphi(s')^{-1}$.

It is obvious that the unit is mapped to the unit.

Theorem 9.7:

Category theory context

Fractions within modules[edit | edit source]

Definition 9.8:

Let $R$ be a ring, $S$ a multiplicative subset of $R$ and $M$ an $R$-module. Set $S^{-1}R$ to be the ring $R$ augmented by inverses of $S$. We define the $S^{-1}R$-module $S^{-1}M$ as follows:

$S^{-1}M := (M \times S) / \sim$ (the formal fractions $\frac{m}{s}$),

where again

$(m, s) \sim (m', s') :\Leftrightarrow \exists t \in S : t(s'm - sm') = 0$,

with addition

$\frac{m}{s} + \frac{m'}{s'} := \frac{s'm + sm'}{ss'}$

and module operation

$\frac{r}{s} \cdot \frac{m}{t} := \frac{rm}{st}$.

Note that applying this construction to a ring $R$ that is canonically an $R$-module over itself, we obtain nothing else but $S^{-1}R$, canonically seen as a module over itself, since multiplication and addition coincide. Thus, we have a generalisation here!

That everything is well-defined is seen exactly as in the last section; the proofs carry over verbatim.

Theorem 9.9 (properties of the augmented module):

Let $M$ be an $R$-module, let $S$ be a multiplicatively closed subset of $R$, and let $N, L \le M$ be submodules. Then

  1. $S^{-1}(N + L) = S^{-1}N + S^{-1}L$,
  2. $S^{-1}(N \cap L) = S^{-1}N \cap S^{-1}L$, and
  3. $S^{-1}(M/N) \cong S^{-1}M / S^{-1}N$;

in the first two equations, all modules are seen as submodules of $S^{-1}M$ (as above with $S^{-1}R$), and in the third isomorphy relation, the modules are seen as independent $S^{-1}R$-modules.

Proof:

1.

note that to get from the third row back to the second, we used that submodules are closed under multiplication by an element of to equalize denominators and thus get a suitable ( is closed under multiplication).

2.

to get from the second to the first row, we note for a suitable , and in particular for example

,

where .

3.

We set

$\varphi : S^{-1}(M/N) \to S^{-1}M / S^{-1}N, \quad \frac{m + N}{s} \mapsto \frac{m}{s} + S^{-1}N$

and prove that this is an isomorphism.

First we prove well-definedness. Indeed, if $\frac{m + N}{s} = \frac{m' + N}{s'}$, then $t(s'm - sm') \in N$ for a suitable $t \in S$, hence $\frac{m}{s} - \frac{m'}{s'} = \frac{t(s'm - sm')}{t s s'} \in S^{-1}N$ and thus $\varphi\!\left( \frac{m + N}{s} \right) = \varphi\!\left( \frac{m' + N}{s'} \right)$.

Then we prove surjectivity. Let $\frac{m}{s} + S^{-1}N$ be given. Then obviously $\frac{m + N}{s}$ is mapped to that element.

Then we prove injectivity. Assume $\frac{m}{s} \in S^{-1}N$. Then $\frac{m}{s} = \frac{n}{u}$, where $n \in N$ and $u \in S$, that is $t(um - sn) = 0$ for a suitable $t \in S$. Then $tum = tsn \in N$ and therefore $\frac{m + N}{s} = \frac{tum + N}{tus} = \frac{0 + N}{1}$.

Theorem 9.10:

functor relating tensor product and fractions

Theorem 9.11:

Let $M, N$ be $R$-modules and $S \subseteq R$ multiplicatively closed. Then

$S^{-1}(M \otimes_R N) \cong S^{-1}M \otimes_{S^{-1}R} S^{-1}N$.

Proof:

Exercises[edit | edit source]

  • Exercise 9.2.1: Let be -modules and an ideal. Prove that is a submodule of and that (this exercise serves the purpose of practising the proof technique employed for theorem 9.11).

The annihilator, faithfulness[edit | edit source]

Definition 9.12:

Let $R$ be a ring, $M$ a module over $R$ and $T \subseteq M$ an arbitrary subset. Then the annihilator of $T$ with respect to $M$ is defined to be the set

$\operatorname{Ann}(T) := \{ r \in R \mid \forall t \in T : rt = 0 \}$.

Theorem 9.13:

Let $R$ be a ring, $M$ a module over $R$ and $T \subseteq M$ an arbitrary subset. Then $\operatorname{Ann}(T)$ is an ideal of $R$.

Proof:

Let $r, r' \in \operatorname{Ann}(T)$ and $a \in R$. Then for all $t \in T$, $(r + ar')t = rt + a(r't) = 0$. Hence the theorem by lemma 5.3.

Definition 9.14:

An $R$-module $M$ is called faithful iff $\operatorname{Ann}(M) = \{0\}$.

Theorem 9.15:

Let $R$ be a ring. Then $R$ regarded as a module over itself is faithful.

Proof: Let $r \in R$ such that $rs = 0$ for all $s \in R$. Then in particular $r = r \cdot 1 = 0$.

Theorem 9.16:

Let $M$ be an $R$-module and $T \subseteq M$ an arbitrary subset. Let $\langle T \rangle$ be the submodule of $M$ generated by $T$. Then $\operatorname{Ann}(T) = \operatorname{Ann}(\langle T \rangle)$.

Proof:

From the definition it is clear that $\operatorname{Ann}(\langle T \rangle) \subseteq \operatorname{Ann}(T)$, since annihilating all elements of $\langle T \rangle$ is a stronger condition than annihilating only those of $T$.

Let now $r \in \operatorname{Ann}(T)$ and $m = a_1 t_1 + \cdots + a_k t_k \in \langle T \rangle$, where $a_j \in R$ and $t_j \in T$. Then $rm = a_1 (r t_1) + \cdots + a_k (r t_k) = 0$.

Local properties[edit | edit source]

Definition 9.17:

Let $M$ be an $R$-module (where $R$ is a ring) and let $p \le R$ be a prime ideal. Then the localisation of $M$ with respect to $p$, denoted by

$M_p$,

is defined to be $S^{-1}M$ with $S := R \setminus p$; note that $R \setminus p$ is multiplicatively closed because $p$ is a prime ideal.

Definition 9.18:

A property (*) which modules can have (such as being equal to zero) is called a local-global property iff for any ring $R$ and $R$-module $M$ the following are equivalent:

  1. $M$ has property (*).
  2. $S^{-1}M$ has property (*) for all multiplicatively closed $S \subseteq R$.
  3. $M_p$ has property (*) for all prime ideals $p \le R$.
  4. $M_m$ has property (*) for all maximal ideals $m \le R$.

Theorem 9.19:

Being equal to zero is a local-global property.

Proof:

We check the equivalence of 1. - 4. from definition 9.18. Since localisations of the zero module are zero, 1. implies 2., 2. implies 3. and 3. implies 4.; hence proving 4. implies 1. suffices.

Assume that $M$ is a nonzero module, that is, we have $m \in M$ such that $m \neq 0$. By theorem 9.13, $\operatorname{Ann}(m)$ is an ideal of $R$; it is proper, since $1 \notin \operatorname{Ann}(m)$. Therefore, it is contained within some maximal ideal of $R$, call it $\mathfrak{m}$ (unfortunately, we have to refer to a later chapter, since we wanted to separate treatments of different algebraic objects. The required theorem is theorem 12.8). Then for all $s \in R \setminus \mathfrak{m}$ we have $sm \neq 0$ and therefore $\frac{m}{1} \neq 0$ in $M_{\mathfrak{m}}$.

The following theorems do not really describe local-global properties, but are certainly similar and perhaps related to those.

Theorem 9.20:

If $f : M \to N$ is a morphism of $R$-modules, then the following are equivalent:

  1. $f$ is surjective.
  2. $S^{-1}f : S^{-1}M \to S^{-1}N$ is surjective for all $S \subseteq R$ multiplicatively closed.
  3. $f_p : M_p \to N_p$ is surjective for all $p \le R$ prime.
  4. $f_m : M_m \to N_m$ is surjective for all $m \le R$ maximal.

Proof:

Sequences of modules[edit | edit source]

Modules in category theory[edit | edit source]

Definition 10.1 ($R$-mod):

For each ring $R$, there exists one category of modules, namely the modules over $R$ with module homomorphisms as the morphisms. This category is called $R$-mod.

We aim now to prove that if $R$ is a ring, $R$-mod is an Abelian category. We do so by verifying that modules have all the properties required for being an Abelian category.

Theorem 10.1:

The category of modules has kernels.

Proof:

For $R$-modules $M, N$ and a morphism $f : M \to N$ we define

$\ker f := \{ m \in M \mid f(m) = 0 \}$.

Sequences of augmented modules[edit | edit source]

Theorem 10.?:

Let $R$ be a ring and let $S \subseteq R$ be multiplicatively closed. Let $M', M, M''$ be $R$-modules. Then

$M' \to M \to M''$ exact implies $S^{-1}M' \to S^{-1}M \to S^{-1}M''$ exact.

-category-theoretic comment

Torsion-free, flat, projective and free modules[edit | edit source]

Free modules[edit | edit source]

The following definitions are straightforward generalisations from linear algebra. We begin by repeating a definition we already saw in chapter 6.

Definition 6.1 (generators of modules):

Let $M$ be a module over the ring $R$. A generating set of $M$ is a subset $G \subseteq M$ such that

$M = \langle G \rangle$,

that is, every element of $M$ is a finite $R$-linear combination of elements of $G$.

We also have:

Definition 11.1:

Let $M$ be an $R$-module. A subset $B$ of $M$ is called linearly independent if and only if, whenever $r_1 b_1 + \cdots + r_k b_k = 0$ with $b_1, \ldots, b_k \in B$ pairwise distinct and $r_1, \ldots, r_k \in R$, we have

$r_1 = \cdots = r_k = 0$.

Definition 11.2:

A free $R$-module is a module $M$ over $R$ where there exists a basis, that is, a subset of $M$ that is a linearly independent generating set.
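Two standard examples for orientation (added as illustrations): $R^n$ is a free $R$-module with basis $e_1, \ldots, e_n$ (the tuples with a single entry $1$), and every vector space over a field is free. By contrast, $\mathbb{Z}/2$ is not free as a $\mathbb{Z}$-module: no nonempty subset is linearly independent, since $2m = 0$ for every element $m$, while the empty set only generates the zero module.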

Theorem 11.3:

Let $(M_i)_{i \in I}$ be free modules. Then the direct sum

$\bigoplus_{i \in I} M_i$

is free.

Proof:

Let bases $B_i$ of the $M_i$ be given. We claim that the union of the $B_i$ (each $B_i$ seen inside the corresponding component of the direct sum)

is a basis of

$\bigoplus_{i \in I} M_i$.

Indeed, let an arbitrary element $(m_i)_{i \in I}$ be given. Then by assumption, each of the $m_i$ has a decomposition

$m_i = r_{i,1} b_{i,1} + \cdots + r_{i,k_i} b_{i,k_i}$

for suitable $r_{i,j} \in R$, $b_{i,j} \in B_i$. By summing this (over the finitely many nonzero components), we get a decomposition of $(m_i)_{i \in I}$ in the aforementioned basis. Furthermore, this decomposition must be unique, for otherwise projecting to a component gives a new decomposition of one of the particular $m_i$.

The converse is not true in general!

Theorem 11.4:

Let $M, N$ be free $R$-modules, with bases $B$ and $C$ respectively. Then

$M \otimes N$

is a free module, with basis

$\{ b \otimes c \mid b \in B, c \in C \}$,

where we wrote for short

$b \otimes c := [(b, c)]$

(note that it is quite customary to use this notation).

Proof:

We first prove that our supposed basis forms a generating system. Clearly, by summation it suffices to show that elements of the form

,

can be written in terms of the . Thus, write

and ,

and obtain by the rules of computing within the tensor product, that

.

On the other hand, if

is a linear combination (i.e. all but finitely many summands are zero), then all the must be zero. The argument is this: Fix and define a bilinear function

,

where , are the coefficients of , in the decomposition of and respectively. According to the universal property of the tensor product, we obtain a linear map

with ,

where is the canonical projection on the quotient space. We have the equations

,

and inserting the given linear combination into this map therefore yields the desired result.

Projective modules[edit | edit source]

The following is a generalisation of free modules:

Definition 11.5:

Let $P$ be an $R$-module. $P$ is called projective if and only if for any fixed module $N$ and fixed surjection $\pi : M \to N$, every other module morphism with codomain $N$ (call it $f : P \to N$) has a factorisation

$f = \pi \circ g$ for some morphism $g : P \to M$.

Theorem 11.6:

Every free module is projective.

Proof:

Pick a basis $B$ of the free module $F$, let $\pi : M \to N$ be surjective and let $f : F \to N$ be some morphism. For each $b \in B$ pick $m_b \in M$ with $\pi(m_b) = f(b)$. Define

$g(x) := r_1 m_{b_1} + \cdots + r_k m_{b_k}$, where $x = r_1 b_1 + \cdots + r_k b_k$.

This is well-defined since the linear combination describing $x$ is unique. Furthermore, it is linear, since we have

$g(x + y) = g(x) + g(y)$ and $g(rx) = r g(x)$,

where the right hand side of the first equation comes from the sum of the linear combinations coinciding with $x$ and $y$ respectively, which is why $g$ is a morphism. By linearity of $\pi$ and definition of the $m_b$, it has the desired property $\pi \circ g = f$.

There are a couple equivalent definitions of projective modules.

Theorem 11.7:

A module $P$ is projective if and only if there exists a module $Q$ such that $P \oplus Q$ is free.

Proof:

: Define the module

(this obviously is a free module) and the function

.

is a surjective morphism, whence we obtain a commutative diagram

;

that is, .

We claim that the map

is an isomorphism. Indeed, if , then and thus also (injectivity) and further , where , which is why

(surjectivity).

: Assume is a free module. Assume is a surjective morphism, and let be any morphism. We extend to via

.

This is still linear as the composition of the linear map and the linear inclusion . Now is projective since it's free. Hence, we get a commutative diagram

where satisfies . Projecting to gives the desired diagram for .
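A standard example showing that "projective" is strictly weaker than "free" (added as an illustration): over the ring $R = \mathbb{Z}/6$, the Chinese remainder theorem gives $\mathbb{Z}/2 \oplus \mathbb{Z}/3 \cong \mathbb{Z}/6 = R$. Hence $P = \mathbb{Z}/2$ is projective over $R$ by theorem 11.7 (take $Q = \mathbb{Z}/3$), but it is not free: a free $\mathbb{Z}/6$-module is either trivial or has at least $6$ elements.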

Definition 11.8:

An exact sequence of modules

$0 \to M' \to M \to M'' \to 0$

is called split exact iff we can augment it by three isomorphisms such that the diagram with second row

$0 \to M' \to M' \oplus M'' \to M'' \to 0$ (canonical inclusion and projection in the middle)

commutes.

Theorem 11.9:

A module $P$ is projective iff every exact sequence

$0 \to M' \to M \to P \to 0$

is split exact.

Proof:

: The morphism is surjective, and thus every other morphism with codomain lifts to . In particular, so does the projection . Thus, we obtain a commutative diagram

where we don't know yet whether is an isomorphism, but we can use to define the function

,

which is an isomorphism due to injectivity:

Let , that is . Then first

and therefore second

.

And surjectivity:

Let . Set . Then

and hence for a suitable , thus

.

We thus obtain the commutative diagram

and have proven what we wanted.

: We prove that is free for a suitable .

We set

,

where is defined as in the proof of theorem 11.7 . We obtain an exact sequence

which by assumption splits as

which is why is isomorphic to the free module and hence itself free.

Theorem 11.10:

Let $P$ and $Q$ be projective $R$-modules. Then $P \otimes Q$ is projective.

Proof:

We choose $R$-modules $P', Q'$ such that $P \oplus P'$ and $Q \oplus Q'$ are free. Since the tensor product of free modules is free, $(P \oplus P') \otimes (Q \oplus Q')$ is free. But

$(P \oplus P') \otimes (Q \oplus Q') \cong (P \otimes Q) \oplus (P \otimes Q') \oplus (P' \otimes Q) \oplus (P' \otimes Q')$,

and thus $P \otimes Q$ occurs as the summand of a free module and is thus projective.

Theorem 11.11:

Let $(P_i)_{i \in I}$ be $R$-modules. Then $\bigoplus_{i \in I} P_i$ is projective if and only if each $P_i$ is projective.

Proof:

Let first each of the $P_i$ be projective. Then each of the $P_i$ occurs as the direct summand of a free module, and summing all these free modules proves that $\bigoplus_{i \in I} P_i$ is the direct summand of a free module.

On the other hand, if $\bigoplus_{i \in I} P_i$ is the summand of a free module, then so are all the $P_i$s.

Flat modules[edit | edit source]

The following is a generalisation of projective modules:

Definition 11.12:

An $R$-module $T$ is called flat if and only if tensoring by it preserves exactness:

$M' \to M \to M''$ exact implies $M' \otimes T \to M \otimes T \to M'' \otimes T$ exact.

The morphisms in the right sequence induced by any morphism $f : M \to N$ are given (via theorem 8.9) by the bilinear map

$(m, t) \mapsto f(m) \otimes t$.
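For orientation (an illustration added here, anticipating theorem 11.13): $\mathbb{Q} = S^{-1}\mathbb{Z}$ with $S = \mathbb{Z} \setminus \{0\}$ is a flat $\mathbb{Z}$-module. On the other hand, $\mathbb{Z}/2$ is not flat over $\mathbb{Z}$: the sequence $0 \to \mathbb{Z} \overset{\cdot 2}{\to} \mathbb{Z}$ is exact, but after tensoring with $\mathbb{Z}/2$ the induced map $\mathbb{Z}/2 \to \mathbb{Z}/2$ is multiplication by $2$, i.e. the zero map, which is not injective.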

Theorem 11.13:

The module $S^{-1}R$ is a flat $R$-module, for every multiplicatively closed $S \subseteq R$.

Proof: This follows from theorems 9.10 and 10.?.

Theorem 11.14:

Flatness is a local property.

Proof: Exactness is a local property. Furthermore, for any multiplicatively closed $S \subseteq R$,

$S^{-1}(M \otimes_R N) \cong S^{-1}M \otimes_{S^{-1}R} S^{-1}N$

by theorem 9.11. Since every $S^{-1}R$-module is the localisation of an $R$-module (for instance of itself, seen as an $R$-module via $r \cdot m := \frac{r}{1} m$), the theorem follows.

Theorem 11.15:

A projective module is flat.

Proof:

We first prove that every free module is flat. This will enable us to prove that every projective module is flat.

Indeed, if $F$ is a free module and $B$ a basis of $F$, we have

$\bigoplus_{b \in B} R \cong F$

via

$(r_b)_{b \in B} \mapsto \sum_{b \in B} r_b b$,

where all but finitely many of the summands on the left are zero. Hence, by distributivity of direct sum over tensor product, if we are given any exact sequence

,

to show that the sequence

is exact, all we have to do is to prove that

is exact, since we may then augment the latter sequence by suitable isomorphisms

Theorem 11.16:

A direct sum of modules is flat if and only if each summand is flat.

Theorem 11.17:

If $M, N$ are flat $R$-modules, then $M \otimes N$ is as well.

Proof:

Let

be an exact sequence of modules.

Torsion-free modules[edit | edit source]

The following is a generalisation of flat modules:

Definition 11.18:

Let $M$ be an $R$-module. The torsion of $M$ is defined to be the set

$T(M) := \{ m \in M \mid \exists r \in R \setminus \{0\} : rm = 0 \}$.

Lemma 11.19:

The torsion of a module is a submodule of that module.

Proof:

Let $m, m' \in T(M)$, say $rm = 0$ and $r'm' = 0$ with $r, r' \neq 0$, and let $a \in R$. Obviously $rr'(m + m') = 0$ (just multiply the two annihilating elements together), and further $r(am) = a(rm) = 0$ (we used commutativity here).

We may now define torsion-free modules. They are exactly what you think they are.

Definition 11.20:

Let $M$ be a module. $M$ is called torsion-free if and only if

$T(M) = \{0\}$.

Theorem 11.21:

A flat module is torsion-free.

To get a feeling for the theory, we define -torsion for a multiplicatively closed subset .

Definition 11.22:

Let $S$ be a multiplicatively closed subset of a ring $R$, and let $M$ be an $R$-module. Then the $S$-torsion of $M$ is defined to be

$T_S(M) := \{ m \in M \mid \exists s \in S : sm = 0 \}$.

Theorem 11.23:

Let $S$ be a multiplicatively closed subset of a ring $R$, and let $M$ be an $R$-module. Then the $S$-torsion of $M$ is precisely the kernel of the canonical map $M \to S^{-1}M$, $m \mapsto \frac{m}{1}$.
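A quick illustration (an added example): for the $\mathbb{Z}$-module $M = \mathbb{Z}/6 \oplus \mathbb{Z}$ we have $T(M) = \mathbb{Z}/6 \oplus \{0\}$, since $6(a, 0) = 0$ for every $a \in \mathbb{Z}/6$, while $r(a, b) = 0$ with $b \neq 0$ forces $r = 0$. Taking $S = \mathbb{Z} \setminus \{0\}$ in theorem 11.23, this torsion is exactly the kernel of the canonical map $M \to S^{-1}M \cong \mathbb{Q}$.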

Basic ideal theory[edit | edit source]

Prime ideals[edit | edit source]

Definition 12.1:

Let $R$ be a ring. A prime ideal is an ideal $p$ of $R$ with $p \neq R$ such that whenever $ab \in p$, either $a \in p$ or $b \in p$.

Lemma 12.2:

Let $R$ be a ring and $I \le R$ an ideal. $I$ is prime if and only if $R/I$ is an integral domain.

Proof:

$I$ prime is equivalent to: $ab \in I$ implies $a \in I$ or $b \in I$. This is equivalent to

$(a + I)(b + I) = 0 + I \Rightarrow a + I = 0 + I$ or $b + I = 0 + I$,

i.e. to $R/I$ having no nontrivial zero divisors.
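For instance (an added illustration), in $R = \mathbb{Z}$ the prime ideals are exactly $(0)$ and the ideals $(p)$ for prime numbers $p$; the ideal $(6)$ is not prime, since $2 \cdot 3 \in (6)$ but $2, 3 \notin (6)$, and correspondingly $\mathbb{Z}/6$ is not an integral domain.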

Theorem 12.3:

Let $S \subseteq R$ be multiplicatively closed with $0 \notin S$. Then there exists a prime ideal of $R$ not intersecting $S$.

Proof:

Order all ideals of $R$ not intersecting $S$ by set inclusion (this set is nonempty, as it contains the zero ideal), and let a chain

$(I_\alpha)_\alpha$

be given. The ideal

$I := \bigcup_\alpha I_\alpha$

(this is an ideal, since for $a, b \in I$ there is an $\alpha$ with $a, b \in I_\alpha$, hence $a + b \in I_\alpha$, $ra \in I_\alpha$) is an upper bound of the chain, since $I$ cannot intersect $S$, for else one of the $I_\alpha$ would intersect $S$. Since the given chain was arbitrary, Zorn's lemma implies the existence of a maximal ideal among all ideals not intersecting $S$. This ideal shall be called $M$; we prove that it is prime.

Let $ab \in M$, and assume for contradiction that $a \notin M$ and $b \notin M$. Then $M + \langle a \rangle$, $M + \langle b \rangle$ are strict superideals of $M$ and hence intersect $S$, that is,

$s_1 = m_1 + r_1 a$,
$s_2 = m_2 + r_2 b$,

$s_1, s_2 \in S$, $m_1, m_2 \in M$, $r_1, r_2 \in R$. Then $s_1 s_2 \in M + \langle ab \rangle \subseteq M$, contradiction.

Projection to the quotient ring[edit | edit source]

In this section, we want to fix a notation. Let $R$ be a ring and $I \le R$ an ideal. Then we may form the quotient ring $R/I$, consisting of the elements of the form $r + I$, $r \in R$. Throughout the book, we shall use the following notation for the canonical projection $R \to R/I$:

Definition 12.4:

Let $I \le R$ be an ideal. The map

$\pi : R \to R/I, \; r \mapsto r + I$

is the canonical projection of $R$ to $R/I$.

Maximal ideals[edit | edit source]

Definition 12.5:

Let $R$ be a ring. A maximal ideal of $R$ is an ideal $m$ that is not the whole ring, and such that there is no proper ideal $I \neq R$ with $m \subsetneq I$.

Lemma 12.6:

An ideal $m \le R$ is maximal iff $R/m$ is a field.

Proof:

A ring is a field if and only if its only proper ideal is the zero ideal. For, in a field, every nonzero ideal contains a unit and hence $1$, and if a ring is not a field, it contains a nonzero non-unit $x$, and then $\langle x \rangle$ does not contain $1$.

By the correspondence given by the correspondence theorem, ideals of $R/m$ correspond to ideals of $R$ containing $m$: $R/m$ corresponds to $R$, the zero ideal of $R/m$ corresponds to $m$, and any ideal strictly in between corresponds to an ideal $I$ such that $m \subsetneq I \subsetneq R$. Hence, $R/m$ is a field if and only if there are no proper ideals strictly containing $m$.

Lemma 12.7:

Any maximal ideal is prime.

Proof 1:

If $R$ is a ring, $m \le R$ maximal, then $R/m$ is a field. Hence $R/m$ is an integral domain, hence $m$ is prime.

Proof 2:

Let $m$ be maximal. Let $ab \in m$. Assume $a \notin m$. Then $m + \langle a \rangle = R$, hence $1 = x + ra$ for suitable $x \in m$, $r \in R$. But then $b = xb + r(ab) \in m$.

Theorem 12.8:

Let $R$ be a ring and $I \le R$ an ideal not equal to all of $R$. Then there exists a maximal ideal $m$ with $I \subseteq m$.

Proof:

We order the set of all ideals $J$ such that $I \subseteq J$ and $J \neq R$ by inclusion. Let

$(J_\alpha)_\alpha$

be a chain of those ideals. Then set

$J := \bigcup_\alpha J_\alpha$.

Clearly, all $J_\alpha$ are contained within $J$. Since $I \subseteq J_\alpha$ for every $\alpha$, $I \subseteq J$. Further, assume $1 \in J$. Then $1 \in J_\alpha$ for some $\alpha$, contradiction. Hence, $J$ is a proper ideal such that $I \subseteq J$, and hence an upper bound for the given chain. Since the given chain was arbitrary, we may apply Zorn's lemma to obtain the existence of a maximal element $m$ with respect to inclusion. This ideal must then be a maximal ideal, for any proper superideal also contains $I$.

Lemma 12.9:

Let $R$ be a ring, $I \le R$. Then via $m \mapsto \pi(m)$, maximal ideals of $R$ containing $I$ correspond to maximal ideals of $R/I$.

Proof: From the correspondence theorem.

Local rings[edit | edit source]

Definition 12.10:

A local ring is a ring that has exactly one maximal ideal.

Theorem 12.11 (characterisation of local rings):

Let $R$ be a ring. The following are equivalent:

  1. $R$ is a local ring.
  2. If $x + y$ is a unit, then either $x$ or $y$ is a unit, where $x, y \in R$ are arbitrary.
  3. The set of all non-units of $R$ forms a maximal ideal.
  4. If $x_1 + \cdots + x_n$ is a unit, then one of the $x_j$ is a unit.
  5. If $x \in R$ is arbitrary, either $x$ or $1 - x$ is a unit.

Proof:

1. $\Rightarrow$ 2.: Assume $x$ and $y$ are both non-units. Then $\langle x \rangle$ and $\langle y \rangle$ are proper ideals of $R$ and hence they are contained in some maximal ideal of $R$ by theorem 12.8. But there is only one maximal ideal $m$ of $R$, and hence $x, y \in m$, thus $x + y \in m$. Maximal ideals can not contain units.

2. $\Rightarrow$ 3.: The sum of two non-units is a non-unit, and if $x$ is a non-unit and $r \in R$, $rx$ is a non-unit (for if $s(rx) = 1$, $sr$ is an inverse of $x$). Hence, all non-units form an ideal. Any proper ideal of $R$ contains only non-units, hence this ideal is maximal.

3. $\Rightarrow$ 4.: Assume the $x_j$ are all non-units. Since the non-units form an ideal, $x_1 + \cdots + x_n$ is contained in that ideal of non-units; a proper ideal contains no units, contradiction.

4. $\Rightarrow$ 5.: Assume $x$, $1 - x$ are non-units. Then $x + (1 - x) = 1$ is a non-unit, contradiction.

5. $\Rightarrow$ 1.: At least one maximal ideal exists by theorem 12.8 applied to the zero ideal. Let $m \neq n$ be two distinct maximal ideals. Then $m + n = R$, hence $x + y = 1$, $x \in m$, $y \in n$, that is, $y = 1 - x$. $x$ is not a unit (it lies in $m$), so $1 - x$ is, contradicting $1 - x = y \in n$.
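Standard examples (added for illustration): every field is local, with maximal ideal $\{0\}$; the ring $\mathbb{Z}_{(p)} = \left\{ \frac{a}{b} \in \mathbb{Q} \;\middle|\; p \nmid b \right\}$ is local with maximal ideal generated by $p$ (a special case of theorem 12.14 below); and the formal power series ring $k[[x]]$ over a field $k$ is local, since a power series is a unit exactly when its constant term is nonzero, so the non-units form the ideal $\langle x \rangle$.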

Localisation at prime ideals[edit | edit source]

In chapter 9, we had seen how to localise a ring $R$ at a multiplicatively closed subset $S \subseteq R$. An important special case is $S = R \setminus p$, where $p \le R$ is a prime ideal.

Lemma 12.12:

Let $p$ be a prime ideal of a ring $R$. Then $R \setminus p$ is multiplicatively closed.

Proof: $1 \in R \setminus p$ since $p$ is proper. Let $a, b \in R \setminus p$. Then $ab$ can't be in $p$ (as $p$ is prime), hence $ab \in R \setminus p$.

Definition 12.13:

Let $p$ be a prime ideal of a ring $R$. Set $S := R \setminus p$. Then

$R_p := S^{-1}R$

is called the localisation of $R$ at $p$.

Theorem 12.14:

Let $R$ be a ring, $p \le R$ be prime. $R_p$ is a local ring.

Proof:

Set $S := R \setminus p$, then $R_p = S^{-1}R$. Set

$m := \left\{ \frac{a}{s} \;\middle|\; a \in p, s \in S \right\}$.

All elements of $m$ are non-units, and all elements of $R_p \setminus m$ are of the form $\frac{s'}{s}$, $s, s' \in S$, and thus are units. Further, $m$ is an ideal since $p$ is, and by definition of addition and multiplication in $S^{-1}R$ and since $S$ is multiplicatively closed. Hence $R_p$ is a local ring (its non-units form an ideal; theorem 12.11).

This finally explains why we speak of localisation.

Nilradical and Jacobson radical[edit | edit source]


Jacobson rings[edit | edit source]

Definition and elementary characterisations[edit | edit source]

Definition 14.1:

A Jacobson ring is a ring such that every prime ideal is the intersection of some maximal ideals.

Before we strive for a characterisation of Jacobson rings, we shall prove a lemma first which will be of great use in one of the proofs in that characterisation.

Lemma 14.2:

Let $R$ be a Jacobson ring and let $I \le R$ be an ideal. Then $R/I$ is a Jacobson ring.

Proof:

Let $q \le R/I$ be prime. Then $p := \pi^{-1}(q)$ is prime. Hence, according to the hypothesis, we may write

$p = \bigcap_\alpha m_\alpha$,

where the $m_\alpha$ are all maximal. As $\pi$ is surjective, we have $\pi(p) = q$. Hence, we have

$q = \pi(p) = \bigcap_\alpha \pi(m_\alpha)$,

where the latter equality follows from $x \in \bigcap_\alpha \pi(m_\alpha)$ implying that for all $\alpha$, $x = \pi(y_\alpha)$, where $y_\alpha \in m_\alpha$; any preimage $y$ of $x$ then satisfies $y - y_\alpha \in I \subseteq m_\alpha$ and thus $y \in \bigcap_\alpha m_\alpha = p$. Since the ideals $\pi(m_\alpha)$ are maximal, the claim follows.

Theorem 14.3:

Let $R$ be a ring. The following are equivalent:

  1. $R$ is a Jacobson ring.
  2. Every radical ideal (see def. 13.1) is an intersection of maximal ideals.
  3. For every prime ideal $p \le R$, the Jacobson radical of $R/p$ equals the zero ideal.
  4. For every ideal $I \le R$, the Jacobson radical of $R/I$ is equal to the nilradical of $R/I$.

Proof 1: We prove 1. 2. 3. 4. 1.

1. 2.: Let be a radical ideal. Due to theorem 13.3,

.

Now we may write each prime ideal containing as the intersection of maximal ideals (we are in a Jacobson ring) and hence obtain 1. 2.

2. 3.: Let be prime. In particular, is radical. Hence, we may write

,

where the are maximal. Now suppose that is contained within the Jacobson radical of . According to theorem 13.7, is a unit within , where is arbitrary. We want to prove . Let thus be such that . Then and thus with and , that is . Let be the inverse of , that is . This means for all , and in particular, . Hence , contradiction.

3. 4.: Let . Assume there exists and a prime ideal such that , but for all maximal . Let be the canonical projection. Since preimages of prime ideals under homomorphism are prime, is prime.

Let be a maximal ideal within . Assume . Let be the canonical projection. As in the first proof of theorem 12.2, is maximal.

We claim that is maximal. Assume , that is for a suitable . Since , , contradiction. Assume is strictly contained within . Let . Then . If , then , contradiction. Hence and thus , that is .

Furthermore, if , then . Now since . Hence, , that is, , a contradiction to .

Thus, is contained within the Jacobson radical of .

4. 1.: Assume is prime not the intersection of maximal ideals. Then

.

Hence, there exists an such that for every maximal ideal of .

The set is multiplicatively closed. Thus, theorem 12.3 gives us a prime ideal such that .

Let be a maximal ideal of that does not contain . Let be the canonical projection. We claim that is a maximal ideal containing . Indeed, the proof runs as in the first proof of theorem 12.2. Furthermore, does not contain , for if it did, then . Thus we obtained a contradiction, which is why every maximal ideal of contains .

Since within , the Jacobson radical equals the Nilradical, is also contained within all prime ideals of , in particular within . Thus we have obtained a contradiction.

Proof 2: We prove 1. 4. 3. 2. 1.

1. 4.: Due to lemma 3.10, is a Jacobson ring. Hence, it follows from the representations of theorem 13.3 and def. 13.6, that Nilradical and Jacobson radical of are equal.

4. $\Rightarrow$ 3.: Since a prime ideal is in particular a radical ideal, $R/p$ has no nilpotent elements and thus its nilradical vanishes. Since the Jacobson radical of that ring equals the nilradical due to the hypothesis, we obtain that the Jacobson radical vanishes as well.

3. 2.: I found no shorter path than to combine 3. 1. with 1. 2.

2. 1.: Every prime ideal is radical.

Remaining arrows:

1. 3.: Let be a prime ideal of . Now suppose that is contained within the Jacobson radical of . According to theorem 13.7, is a unit within , where is arbitrary. Write

,

where the are maximal. We want to prove . Let thus be such that . Then and thus with and , that is . Let be the inverse of , that is . This means for all , and in particular, . Hence , contradiction.

3. 1.: Let be prime. If is maximal, there is nothing to show. If is not maximal, is not a field. In this case, there exists a non-unit within , and hence, by theorem 12.1 or 12.2 (applied to where is a non-unit), contains at least one maximal ideal. Furthermore, the Jacobson radical of is trivial, which is why there are some maximal ideals of such that

.

As in the first proof of theorem 12.2, are maximal ideals of . Furthermore,

.

2. 4.: Let be the nilradical of . We claim that

.

Let first , that is, . Then , that is and . The other inclusion follows similarly, only the order is in reverse (in fact, we just did equivalences).

Due to the assumption, we may write

,

where the are maximal ideals of .

Since is surjective, . Hence,

,

where the last equality follows from implying that for and and hence for all . Furthermore, the are either maximal or equal to , since any ideal of properly containing contains one element not contained within , which is why , hence and thus .

Thus, is the intersection of some maximal ideals of , and thus the Jacobson radical of is contained within it. Since the other inclusion holds in general, we are done.

4. 2.: As before, we have

.

Let now be the Jacobson radical of , that is,

,

where the are the maximal ideals of . Then we have by the assumption:

.

Furthermore, as in the first proof of theorem 12.2, are maximal.

Goldman's criteria[edit | edit source]

Now we shall prove two more characterisations of being a Jacobson ring. These were established by Oscar Goldman.

Theorem 14.4 (Goldman's first criterion):

Let be a ring. is Jacobson if and only if is.

This is the hard one, and we do it right away so that we have it done.

Proof:

One direction () isn't too horrible. Let be a Jacobson ring, and let be a prime ideal of . (We shall denote ideals of with a small zero as opposed to ideals of to avoid confusion.)

We now define

.

This ideal contains exactly the polynomials whose constant term is in . It is prime since

as can be seen by comparing the constant coefficients. Since is Jacobson, for a given that is not contained within , and hence not in , there exists a maximal ideal containing , but not containing . Set . We claim that is maximal. Indeed, we have an isomorphism

via

.

Therefore, is a field if and only if is. Hence, is maximal, and it does not contain . Since thus every element outside can be separated from by a maximal ideal, is a Jacobson ring.

The other direction is a bit longer.

We have given a Jacobson ring and want to prove Jacobson. Hence, let be a prime ideal, and we want to show it to be the intersection of maximal ideals.

We first treat the case where and is an integral domain.

Assume first that does contain a nonzero element (i.e. is not equal the zero ideal).

Assume is contained within all maximal ideals containing , but not within . Let such that is of lowest degree among all nonzero polynomials in . Since , . Since is an integral domain, we can form the quotient field . Then .

Assume that is not irreducible in . Then , , where , are not associated to . Let such that . Then . As is prime, wlog. . Hence . Thus, and are associated, contradiction.

is Euclidean with the degree as absolute value. Uniqueness of prime factorisation gives a definition of the greatest common divisor. Since is irreducible in and , . Applying the Euclidean algorithm, , . Multiplication by an appropriate constant yields , . Thus, . Hence, is contained within every maximal ideal containing . Further, .

Let be any maximal ideal of not containing . Set

.

Assume . Then , . We divide by by applying a polynomial long division algorithm working for elements of a general polynomial ring: We successively eliminate the first coefficient of by subtracting an appropriate multiple of . Should that not be possible, we multiply by the leading coefficient of , that shall be denoted by . Then we cannot eliminate the desired coefficient of , but we can eliminate the desired coefficient of . Repeating this process gives us

,

for . Furthermore, since this equation implies , we must have since the degree of was minimal among polynomials in . Then

with . By moving such coefficients to , we may assume that no coefficient of is in . Further, is nonzero since otherwise . Denote the highest coefficient of by , and the highest coefficient of by . Since the highest coefficients of and must cancel out (as ),

.

Thus, and , but , which is absurd as every maximal ideal is prime. Hence, .

According to theorem 12.2, there exists a maximal ideal containing . Now does not equal all of , since otherwise . Hence, and the maximality of imply . Further, is a maximal ideal containing and thus contains . Hence, .

Thus, every maximal ideal that does not contain contains ; that is, for all maximal ideals of . But according to theorem 12.3, we may choose a prime ideal of not intersecting the (multiplicatively closed) set , and since is a Jacobson ring, there exists a maximal ideal containing and not containing . This is a contradiction.

Let now be the zero ideal (which is prime within an integral domain). Assume that there are only finitely many elements in which are irreducible in , and call them . The element

factors into irreducible elements, but at the same time is not divisible by any of , since otherwise wlog.

,

which is absurd. Thus, there exists at least one further irreducible element not listed in , and multiplying this by an appropriate constant yields a further element of irreducible in .

Let be irreducible in . We form the ideal and define . We claim that is prime. Indeed, if , then and factor in into irreducible components. Since is a unique factorisation domain, occurs in at least one of those two factorisations.

Assume there is a nonzero element contained within all the , where is irreducible over . factors in uniquely into finitely many irreducible components, leading to a contradiction to the infinitude of irreducible elements of . Hence,

,

where each is prime and . Hence, by the previous case, each can be written as the intersection of maximal elements, and thus, so can .

Now for the general case where is an arbitrary Jacobson ring and is a general prime ideal of . Set . is a prime ideal, since if , where , then or , and hence or . We further set . Then we have

via the isomorphism

.

Set

and .

Then is an integral domain and a Jacobson ring (lemma 14.2), and is a prime ideal of with the property that . Hence, by the previous case,

.

Thus, since ,

,

which is an intersection of maximal ideals due to lemma 12.4 and since isomorphisms preserve maximal ideals.

Theorem 14.5 (Goldman's second criterion):

A ring is Jacobson if and only if for every maximal ideal , is maximal in .

Proof:

The reverse direction is once again easier.

Let be a prime ideal within , and let . Set

.

Assume . Then there exist , such that

.

By shifting parts of to , one may assume that does not have any coefficients contained within . Furthermore, if follows . Further, , since if , , , then annihilates all higher coefficients of , which is why equals the constant term of times and thus . Hence and let be the leading coefficient of . Since the nontrivial coefficients of the polynomial must be zero for it being constantly one, , contradicting the primality of .

Thus, let be maximal containing . Assume contains . Then and thus . contracts to a maximal ideal of , which does not contain , but does contain . Hence the claim.

The other direction is more tricky, but not as bad as in the previous theorem.

Let thus be a Jacobson ring. Assume there exists a maximal ideal such that is not maximal within . Define

and . is a prime ideal, since if such that , or and hence or . Further

via the isomorphism

.

According to lemma 12.5, is a maximal ideal within . We set

and .

Then is a Jacobson ring that is not a field, is a maximal ideal within (isomorphisms preserve maximal ideals) and , since if is any element of which is not mapped to zero by , then at least one of must be nonzero, for, if only , then , which is absurd.

Replacing by and by , we lead the assumption to a contradiction where is an integral domain but not a field and .

is nonzero, because else would be a field. Let have minimal degree among the nonzero polynomials of , and let be the leading coefficient of .

Let be an arbitrary maximal ideal of . can not be the zero ideal, for otherwise would be a field. Hence, let be nonzero. Since , . Since is maximal, . Hence, , where and . Applying the general division algorithm that was described above in order to divide by and obtain

for suitable and such that . From the equality holding for we get

.

Hence, , and since the degree of was minimal in , . Since all coefficients of are contained within (since they are multiplied by ), . Thus (maximal ideals are prime).

Hence, is contained in all maximal ideals of . But since was assumed to be an integral domain, this is impossible in view of lemma 12.3 applied to the set , yielding a prime ideal which is separated from by a maximal ideal since is a Jacobson ring. Hence, we have obtained a contradiction.

The spectrum and the Zariski topology[edit | edit source]

Definition 16.1:

Let $R$ be a commutative ring. The spectrum of $R$ is the set

$\operatorname{Spec} R := \{ p \le R \mid p \text{ is a prime ideal} \}$;

i.e. the set of all prime ideals of $R$.

On $\operatorname{Spec} R$, we will define a topology, turning $\operatorname{Spec} R$ into a topological space. This topology will be called the Zariski topology, although only Alexander Grothendieck gave the definition in the above generality.

Closed sets[edit | edit source]

Definition 16.2:

Let $R$ be a ring and $S$ a subset of $R$. Then define

$V(S) := \{ p \in \operatorname{Spec} R \mid S \subseteq p \}$.

The sets $V(S)$, where $S$ ranges over subsets of $R$, satisfy the following equations:

Proposition 16.3:

Let $R$ be a ring, and let $(S_i)_{i \in I}$ be a family of subsets of $R$.

  1. $V(\{0\}) = \operatorname{Spec} R$ and $V(\{1\}) = \emptyset$.
  2. $V\left( \bigcup_{i \in I} S_i \right) = \bigcap_{i \in I} V(S_i)$.
  3. If $I$ is finite, then $V\left( \prod_{i \in I} S_i \right) = \bigcup_{i \in I} V(S_i)$, where the product denotes the set of all products of one element from each $S_i$.

Proof:

The first two items are straightforward. For the third, we use induction on $|I|$. $|I| = 1$ is clear; otherwise, the direction $\supseteq$ is clear, and the other direction follows from lemma 14.20.
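A small example (added for concreteness): for $R = \mathbb{Z}$, $\operatorname{Spec} \mathbb{Z}$ consists of $(0)$ and the ideals $(p)$ for prime numbers $p$, and $V(\{12\}) = \{(2), (3)\}$, since a prime ideal contains $12 = 2^2 \cdot 3$ exactly when it contains $2$ or $3$.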

Definition 16.4:

Principal open sets[edit | edit source]

Topological properties of the spectrum[edit | edit source]

Noetherian rings[edit | edit source]

Rings as modules[edit | edit source]

Theorem 14.1:

We had already observed that a ring $R$ is a module over itself, where the module operation is given by multiplication and the addition by ring addition. In this context, we further have that the submodules of $R$ are exactly the ideals of $R$.

Proof: Being a submodule means being an additive subgroup closed under the module operation. In the above context, this is exactly the definition of ideals.

Transfer of the properties[edit | edit source]

Definition 14.2:

Let $R$ be a (commutative) ring. $R$ is called Noetherian if and only if every ascending chain of ideals of $R$

$I_1 \subseteq I_2 \subseteq I_3 \subseteq \cdots$

eventually becomes stationary.

From theorems 6.7 and 14.1, we obtain the following characterisation of Noetherian rings:

Theorem 14.3:

The following are equivalent:

  1. $R$ is Noetherian.
  2. Every ideal of $R$ is finitely generated.
  3. Every nonempty set of ideals of $R$ has a maximal element with respect to inclusion.

In analogy to theorem 6.11, we further obtain

Theorem 14.4:

If $R$ is Noetherian, $S$ is another ring and $f : R \to S$ is a surjective ring homomorphism, then $S$ is Noetherian.

Proof 1: Proceed in analogy to theorem 6.11, using the isomorphism theorem of rings.

Proof 2: Use theorem 6.11 directly.

New properties in the ring setting[edit | edit source]

When rings are considered, several new properties show themselves in the noetherian case.



Noetherian rings and constructions[edit | edit source]

In this section we will prove theorems involving Noetherian rings and module or localisation-like structures over them.

Theorem 14.4:

Let $R$ be Noetherian and let $M$ be a finitely generated $R$-module. Then $M$ is Noetherian.

Theorem 14.5 (Hilbert's basis theorem):

Let $R$ be a Noetherian ring. Then the polynomial ring over $R$, $R[x]$, is also Noetherian.

Proof 1:

Consider any ideal $I \le R[x]$. We form the ideal $L \le R$ that shall contain all the leading coefficients of any polynomials in $I$; that is

$L := \{ a \in R \mid a \text{ is the leading coefficient of some } f \in I \} \cup \{0\}$.

Since $R$ is Noetherian, $L$ has a finite set of generators; call those generators $a_1, \ldots, a_n$. All $a_j$ belong to a certain polynomial $f_j \in I$ as a leading coefficient; let thus $d_j$ be the degree of that polynomial, for all $j$. Set

$d := \max_{1 \le j \le n} d_j$.

We further form the ideal $J := \langle f_1, \ldots, f_n \rangle$ of $R[x]$ and the $R$-module $M := R + Rx + \cdots + Rx^{d-1}$, and claim that

$I = J + (I \cap M)$.

Indeed, certainly $J \subseteq I$ and thus $J + (I \cap M) \subseteq I$ (see the section on modules). The other direction is seen as thus: If $g \in I$, $\deg g = m \ge d$, then we can set $a$ to be the leading coefficient of $g$, write $a = r_1 a_1 + \cdots + r_n a_n$ for suitable $r_j \in R$ and then subtract

$r_1 x^{m - d_1} f_1 + \cdots + r_n x^{m - d_n} f_n \in J$,

to obtain a polynomial of strictly smaller degree, so long as $\deg g \ge d$. By repetition of this procedure, we subtract a polynomial of $J$ to obtain a polynomial in $I \cap M$, that is, $g \in J + (I \cap M)$.

However, both $J$ and $I \cap M$ are finitely generated ($M$ is finitely generated as an $R$-module and hence Noetherian by the previous theorem, which is why so is $I \cap M$ as a submodule of a Noetherian module). Since the sum of finitely generated modules is clearly finitely generated, $I$ is finitely generated.

Exercises[edit | edit source]

  • Let $R$ be a Noetherian ring, and let $M$ be an $R$-module. Prove that $M$ is Noetherian if and only if it is finitely generated. (Hint: Is there any surjective module homomorphism $R^n \to M$, where $n$ is the number of generators of $M$? If so, what does the first isomorphism theorem say to that?)

Noetherian spaces[edit | edit source]

Primary decomposition[edit | edit source]

The following theory was originally developed by world chess champion Emanuel Lasker in his doctoral thesis under David Hilbert and then greatly simplified (and generalised to noetherian rings) by Emmy Noether.

Primary ideals[edit | edit source]

Definition 19.4:

An ideal $q \le R$ with $q \neq R$ is called a primary ideal if and only if the following holds:

$ab \in q \Rightarrow a \in q \text{ or } b^n \in q \text{ for some } n \in \mathbb{N}$.

Clearly, every prime ideal is primary.

We have the following characterisations:

Theorem 19.5 (characterisations of primary ideals):

Let $q \le R$, with $r(q)$ denoting the radical ideal of $q$. The following are equivalent:

  1. $q$ is primary.
  2. If $ab \in q$, then either $a \in q$ or $b \in q$ or $a, b \in r(q)$.
  3. Every zerodivisor of $R/q$ is nilpotent.

Proof 1:

1. 2.: Let be primary. Assume and neither nor . Since , for a suitable . Since and , for a suitable .

2. 3.: Let be a zerodivisor of , that is, for a certain such that . Hence , that is, for a suitable .

3. 1.: Let . Then either or or is a zerodivisor within , which is why for a suitable .

Proof 2:

1. 3.: Let be primary, and let be a zerodivisor within . Then for a and hence for a suitable .

3. 2.: Let . Assume neither nor . Then both and are zerodivisors in , and hence are nilpotent, which is why for suitable and hence .

2. 1.: Let . Assume not and not . Then in particular , that is, for suitable .

Theorem 19.6:

If $q$ is any primary ideal, then $r(q)$ is prime.

Proof:

Let $ab \in r(q)$. Then $a^n b^n \in q$ for a suitable $n$. Hence either $a^n \in q$ and thus $a \in r(q)$, or $(b^n)^m \in q$ for a suitable $m$ and hence $b \in r(q)$.
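For illustration (an added standard example): in $\mathbb{Z}$, the primary ideals are exactly $(0)$ and the prime powers $(p^n)$. The ideal $(4)$ is primary but not prime: $2 \cdot 2 \in (4)$ with $2 \notin (4)$, but $2^2 \in (4)$; its radical is $r((4)) = (2)$, a prime ideal, as theorem 19.6 predicts.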

Existence[edit | edit source]

Existence in the Noetherian case[edit | edit source]

Following the exposition of Zariski, Samuel and Cohen, we deduce the classical Noetherian existence theorem from two lemmas and a definition.

Definition 19.7:

An ideal is called irreducible if and only if it can not be written as the intersection of finitely many proper superideals.

Lemma 19.8:

In a Noetherian ring, every irreducible ideal is primary.

Proof:

Assume there exists an irreducible ideal $I$ which is not primary. Since $I$ is not primary, there exist $a, b \in R$ such that $ab \in I$, but neither $a \in I$ nor $b^n \in I$ for any $n \in \mathbb{N}$. We form the ascending chain of ideals

$(I : b) \subseteq (I : b^2) \subseteq (I : b^3) \subseteq \cdots$;

this chain is ascending because $x b^k \in I$ implies $x b^{k+1} \in I$. Since we are in a Noetherian ring, this chain eventually stabilizes at some $n$; that is, for $m \ge n$ we have $(I : b^m) = (I : b^n)$. We now claim that

$I = (I + \langle a \rangle) \cap (I + \langle b^n \rangle)$.

Indeed, $\subseteq$ is obvious, and for $\supseteq$ we note that if $x = i + ra = i' + r' b^n$ (where $i, i' \in I$, $r, r' \in R$), then

$xb = ib + r(ab) \in I$,

which is why $i'b + r' b^{n+1} \in I$, hence $r' b^{n+1} \in I$, since thus $r' \in (I : b^{n+1}) = (I : b^n)$, $r' b^n \in I$, hence $x = i' + r' b^n \in I$. Therefore $(I + \langle a \rangle) \cap (I + \langle b^n \rangle) \subseteq I$.

Furthermore, by the choice of $a$ and $b^n$ both $I + \langle a \rangle$ and $I + \langle b^n \rangle$ are proper superideals, contradicting the irreducibility of $I$.

Lemma 19.9:

In a Noetherian ring, every ideal can be written as the finite intersection of irreducible ideals.

Proof:

Assume otherwise. Consider the set of all ideals that are not the finite intersection of irreducible ideals. If we are given an ascending chain within that set

$J_1 \subseteq J_2 \subseteq \cdots$,

this chain has an upper bound, since it stabilizes as we are in a Noetherian ring. We may hence choose a maximal element $J$ among all ideals that are not the finite intersection of irreducible ideals. $J$ itself is thus not irreducible. Hence, it can be written as the intersection of strict superideals; that is

$J = I_1 \cap \cdots \cap I_k$

for appropriate strict superideals $I_1, \ldots, I_k$. Since $J$ is maximal, each $I_j$ is a finite intersection of irreducible ideals, and hence so is $J$, which contradicts the choice of $J$.

Corollary 19.10:

In a Noetherian ring, every ideal can be written as the finite intersection of primary ideals.

Proof:

Combine lemmas 19.8 and 19.9.

Minimal decomposition[edit | edit source]

Definition 19.11:

Let $I$ be an ideal in a ring, and let

$I = q_1 \cap \cdots \cap q_n$

be a primary decomposition of $I$. This decomposition is called minimal if and only if

  1. there does not exist a $j$ with $\bigcap_{k \neq j} q_k \subseteq q_j$, and
  2. for all $j \neq k$, $r(q_j) \neq r(q_k)$ (that is, the radicals of the primary ideals are pairwise distinct).

In fact, once we have a primary decomposition for a given ideal, we can find a minimal primary decomposition of that ideal. But before we prove that, we need a general fact about radicals first.

Lemma 19.12:

Let $I_1, \ldots, I_n \le R$ be ideals. Then

$r(I_1 \cap \cdots \cap I_n) = r(I_1) \cap \cdots \cap r(I_n)$.

One could phrase this lemma as "the radical interchanges with finite intersections".

Proof:

$\subseteq$: If $x^k \in I_1 \cap \cdots \cap I_n$, then $x^k \in I_j$ for every $j$.

$\supseteq$: Let $x \in r(I_1) \cap \cdots \cap r(I_n)$. For each $j$, choose $k_j$ such that $x^{k_j} \in I_j$. Set

$k := \max_{1 \le j \le n} k_j$.

Then $x^k \in I_1 \cap \cdots \cap I_n$, hence $x \in r(I_1 \cap \cdots \cap I_n)$.

Note that for infinite intersections, the lemma need not (!!!) be true.

Theorem 19.13:

Let be an ideal in a ring that has a primary decomposition. Then also has a minimal primary decomposition.

Proof 1:

First of all, we may exclude all primary ideals for which

;

the intersection won't change if we do that, for intersecting with a superset changes nothing in general.

Then assume we are given a decomposition

,

and for a fixed prime ideal set

;

due to theorem 19.6,

.

We claim that is primary, and . For the first claim, note that by the previous lemma

.

For the second claim, let . If there is nothing to prove. Otherwise let . Then there exists such that , and hence for a suitable . Thus , and hence for all and suitable . Pick

.

Then . Hence, is primary.

Uniqueness properties[edit | edit source]

In general, we don't have uniqueness for primary decompositions, but still, any two primary decompositions of the same ideal in a ring look somewhat similar. The classical first and second uniqueness theorems uncover some of these similarities.

Theorem 19.14 (first uniqueness theorem):

Let be an ideal within a ring , and assume we are given a minimal primary decomposition

.

Then the prime (theorem 19.6) ideals are exactly the prime ideals among the ideals and hence are independent of the choice of the particular decomposition. That is, the ideals are uniquely determined by .

Proof:

We begin by deducing an equation. According to theorem 19.2 and lemma 19.12,

.

Now we fix and distinguish a few cases.

  1. If , then obviously .
  2. If (where again ), then if we must have since no power of is contained within .
  3. If , but , we have , since

In conclusion, we find

.

Assume first that is prime. Then the prime avoidance lemma implies that is contained within one of the , , and since , .

Let now for be given. Since the given primary decomposition is minimal, we find such that , but . In this case, by the above equation.

This theorem motivates and enables the following definition:

Definition 19.15:

Let $I$ be any ideal that has a minimal primary decomposition

$I = q_1 \cap \cdots \cap q_n$.

Then the prime ideals $r(q_1), \ldots, r(q_n)$ are called the prime ideals belonging to $I$.

We now prove two lemmas, each of which will below yield a proof of the second uniqueness theorem (see below).

Lemma 19.16:

Let be an ideal which has a primary decomposition

,

and let again for all . If we define

,

then is an ideal of and .

Proof:

Let . There exists such that without , and a similar with an analogous property in regard to . Hence , but not since is prime. Also, . Hence, we have an ideal.

Let . There exists such that

.

In particular, . Since no power of is in , .

Lemma 19.17:

Let be multiplicatively closed, and let

be the canonical morphism. Let be a decomposable ideal, that is

for primary , and number the such that the first have empty intersection with , and the others nonempty intersection. Then

.

Proof:

We have

by theorem 9.?. If now , lemma 9.? yields . Hence,

.

Application of on both sides yields

,

and

since holds for general maps, and means , where and ; thus , that is . This means that

.

Hence , and since no power of is in ( is multiplicatively closed and ), .

Definition 19.18:

Let $I$ be an ideal which admits a primary decomposition, and let $\Sigma$ be a set of prime ideals of $R$ that all belong to $I$. $\Sigma$ is called isolated if and only if for every prime ideal $p \in \Sigma$, if $p'$ is a prime ideal belonging to $I$ such that $p' \subseteq p$, then $p' \in \Sigma$ as well.

Theorem 19.19 (second uniqueness theorem):

Let $I$ be an ideal that has a minimal primary decomposition $I = q_1 \cap \cdots \cap q_n$. If $\Sigma = \{ r(q_j) \mid j \in J \}$ (for some $J \subseteq \{1, \ldots, n\}$) is a subset of the set of the prime ideals belonging to $I$ which is isolated, then

$\bigcap_{j \in J} q_j$

is independent of the particular minimal primary decomposition from which the $q_j$ are coming.

Note that applied to isolated sets consisting of only one prime ideal $p$ (which requires that no other prime ideal belonging to $I$ is contained in $p$), this means that the corresponding primary ideal $q$ is predetermined by $I$.

Proof 1 (using lemma 19.16):

We first reduce the theorem down to the case where is the set of all prime subideals belonging to of a prime ideal that belongs to . Let be any reduced system. For each maximal element of that set (w.r.t. inclusion) define to be the set of all ideals in contained in . Since is finite,

;

this need not be a disjoint union (note that these are not maximal ideals!). Hence

.

Hence, let be an ideal belonging to and let be an isolated system of subideals of . Let be all the primary ideals belonging to not in . For those ideals, we have , and hence we find . For each take large enough so that . Then

,

which is why . From this follows that

,

where is the element in the primary decomposition of to which is associated, since clearly for each element of the left hand side, and thus , but also . But on the other hand, implies . Hence for any such lemma 19.16 implies

,

which in turn implies

.

Proof 2 (using lemma 19.17):

Let be an isolated system of prime ideals belonging to . Pick

,

which is multiplicatively closed since it's the intersection of multiplicatively closed subsets. The primary ideals of the decomposition of which correspond to the are precisely those having empty intersection with , since any other primary ideal in the decomposition of must contain an element outside all , since otherwise its radical would be one of them by isolatedness. Hence, lemma 19.17 gives

and we have independence of the particular decomposition.

Characterisation of prime ideals belonging to an ideal[edit | edit source]

The following are useful further theorems on primary decomposition.

First of all, we give a proposition on general prime ideals.

Proposition 19.20:

Let $R$ be a (commutative) ring, and let $p \le R$ be a prime ideal. If $p$ contains

either the intersection $I_1 \cap \cdots \cap I_n$ or the product $I_1 \cdots I_n$

of certain arbitrary ideals $I_1, \ldots, I_n$, then it contains one of the $I_j$ completely.

Proof:

Since the product is contained in the intersection, it suffices to prove the theorem under the assumption that $I_1 \cdots I_n \subseteq p$.

Indeed, assume none of the $I_j$ is contained in $p$. Choose $x_j \in I_j \setminus p$ for each $j$. Since $p$ is prime, $x_1 \cdots x_n \notin p$. But it's in the product, contradiction.

This proposition has far-reaching consequences for primary decomposition, given in Corollary 19.22. But first, we need a lemma.

Lemma 19.21:

Let $q$ be a primary ideal, and assume $p$ is prime such that $q \subseteq p$. Then $r(q) \subseteq p$.

Proof:

If $x \in r(q)$, then $x^n \in q \subseteq p$ for some $n$, hence $x \in p$ since $p$ is prime.

Corollary 19.22:

Let $I$ be an ideal admitting a primary decomposition

$I = q_1 \cap \cdots \cap q_n$.

If $p$ is any prime ideal that contains $I$, then it also contains a prime ideal belonging to $I$. Further, the prime ideals that are minimal among those containing $I$ are exactly the minimal elements (with respect to the partial order induced by inclusion) of the set of prime ideals belonging to $I$.

Proof:

The first assertion follows from proposition 19.20 and lemma 19.21. The second assertion follows since any prime ideal belonging to $I$ contains $I$.

Artinian rings[edit | edit source]

Definition, first property[edit | edit source]

Definition 19.1:

A ring $R$ is called artinian if and only if each descending chain

$I_1 \supseteq I_2 \supseteq I_3 \supseteq \cdots$

of ideals of $R$ eventually terminates.

Equivalently, $R$ is artinian if and only if it is artinian as an $R$-module over itself.

Proposition 19.2:

Let $R$ be an artinian integral domain. Then $R$ is a field.

Proof:

Let $x \in R \setminus \{0\}$. Consider in $R$ the descending chain

$\langle x \rangle \supseteq \langle x^2 \rangle \supseteq \langle x^3 \rangle \supseteq \cdots$.

Since $R$ is artinian, this chain eventually stabilizes; in particular, there exists an $n$ such that

$\langle x^n \rangle = \langle x^{n+1} \rangle$.

Then write $x^n = y x^{n+1}$, that is, $x^n (1 - yx) = 0$, that is (as we are in an integral domain and $x^n \neq 0$) $yx = 1$ and $x$ has an inverse.

Corollary 19.3:

Let $R$ be an artinian ring. Then each prime ideal of $R$ is maximal.

Proof:

If $p \le R$ is a prime ideal, then $R/p$ is an artinian (theorem 12.9) integral domain, hence a field, hence $p$ is maximal.

Characterisation[edit | edit source]

Theorem 19.4:

Let $R$ be a ring. We have:

$R$ is artinian $\Leftrightarrow$ $R$ is noetherian and every prime ideal of $R$ is maximal.

Proof:

First assume that the zero ideal of $R$ can be written as a product of maximal ideals; i.e.

$(0) = m_1 \cdots m_k$

for certain maximal ideals $m_1, \ldots, m_k$. In this case, if either chain condition is satisfied, one may consider the normal series of $R$ considered as an $R$-module over itself given by

$R \supseteq m_1 \supseteq m_1 m_2 \supseteq \cdots \supseteq m_1 \cdots m_k = (0)$.

Consider the quotient modules $m_1 \cdots m_{j-1} / m_1 \cdots m_j$. Each of these is a vector space over the field $R/m_j$; for, it is an $R$-module, and $m_j$ annihilates it.

Hence, in the presence of either chain condition, we have a finite vector space, and thus has a composition series (use theorem 12.9 and proceed from left to right to get a composition series). We shall now go on to prove that is a product of maximal ideals in cases

  1. is noetherian and every prime ideal is maximal
  2. is artinian.

1.: If is noetherian, every ideal (in particular ) contains a product of prime ideals, hence equals a product of prime ideals. All these are then maximal by assumption.

2.: If is artinian, we use the descending chain condition to show that if (for a contradiction) is not product of prime ideals, the set of ideals of that are product of prime ideals is inductive with respect to the reverse order of inclusion, and hence contains a minimal (w.r.t. inclusion) element . We lead this to a contradiction.

We form . Since as , . Again using that is artinian, we pick minimal subject to the condition . We set and claim that is prime. Let indeed and . We have

, hence, by minimality of ,

and similarly for . Therefore

,

whence . We will soon see that . Indeed, we have , hence and therefore

.

This shows , and contradicts the minimality of .

Krull dimension[edit | edit source]

Definition 17.1:

Let $R$ be a ring. The (Krull) dimension of $R$ is defined to be

$\dim R := \sup \{ n \in \mathbb{N}_0 \mid \text{there is a chain } p_0 \subsetneq p_1 \subsetneq \cdots \subsetneq p_n \text{ of prime ideals of } R \}$.
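For orientation (added examples): a field has Krull dimension $0$, since $(0)$ is its only prime ideal; $\dim \mathbb{Z} = 1$, the longest chains being $(0) \subsetneq (p)$; and for a field $k$, the polynomial ring $k[x_1, \ldots, x_n]$ has dimension $n$, one maximal chain being $(0) \subsetneq (x_1) \subsetneq (x_1, x_2) \subsetneq \cdots \subsetneq (x_1, \ldots, x_n)$ (that no longer chain exists is a deeper theorem of dimension theory).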

Theorem 18.1 (prime avoidance):

Let $I, I_1, \ldots, I_n$ be ideals within a ring such that at most two of the ideals $I_1, \ldots, I_n$ are not prime ideals. If $I \subseteq I_1 \cup \cdots \cup I_n$, then there exists a $j$ such that $I \subseteq I_j$.

Proof 1:

We prove the theorem directly. First consider the case . Let and . Then , and . In case , we have and in case we have . Both are contradictions.

Now consider the case . Without loss of generality, we may assume are not prime and all the other ideals are prime. If , the claim follows by what we already proved. Otherwise, there exists an element . Without loss of generality, we may assume . We claim that . First assume

Assume otherwise. If there exists (or ), then .

INCOMPLETE

Proof 2:

We prove the theorem by induction on . The case we take from the preceding proof. Let . By induction, we have that is not contained within any of , where the hat symbol means that the -th ideal is not counted in the union, for each . Hence, we may choose for each . Since , at least one of the ideals is prime; say is this prime ideal. Consider the element of

.

For , is not contained in because otherwise would be contained within . For , is also not contained within , this time because otherwise , contradicting being prime. Hence, we have a contradiction to the hypothesis.

Valuation rings[edit | edit source]

Augmented ordered Abelian groups[edit | edit source]

In this section, for reasons that will become apparent soon, we write Abelian groups multiplicatively.

Definition 18.1:

An ordered Abelian group is an Abelian group $G$ together with a subset $P \subseteq G$ such that:

  1. $P$ is closed under multiplication (that is, $a, b \in P \Rightarrow ab \in P$).
  2. If $a \in P$, then $a^{-1} \notin P$. (This implies in particular that $1 \notin P$.)
  3. $G = P \cup \{1\} \cup P^{-1}$.

We write ordered Abelian groups as pairs $(G, P)$.

The last two conditions may be summarized as: $G$ is the disjoint union of $P$, $\{1\}$ and $P^{-1} := \{ a^{-1} \mid a \in P \}$.

Theorem 18.2:

Let an ordered group $(G, P)$ be given. Define an order on $G$ by

$a \le b :\Leftrightarrow a = b$ or $b a^{-1} \in P$.

Then $\le$ has the following properties:

  1. $\le$ is a total order of $G$.
  2. $\le$ is compatible with multiplication of $G$ (that is, $a \le b$ and $c \le d$ implies $ac \le bd$).

Proof:

We first prove the first assertion.

is reflexive by definition. It is also transitive: Let and . When or , the claim follows trivially by replacing in either of the given equations. Thus assume and . Then and hence (even ).

Let and . Assume for a contradiction. Then and , and since is closed under multiplication, , contradiction. Hence .

Let such that . Since , (which is not equal ) is either in or in (but not in both, since otherwise and since , , contradiction). Thus either or .

Then we proceed to the second assertion.

Let . If , the claim is trivial. If , then , but . Hence .

Definition 18.3:

Let be an ordered Abelian group. An augmented ordered Abelian group is together with an element (zero) such that the following rules hold:

, .

We write an augmented ordered Abelian group as triple .

Valuations and valuation rings[edit | edit source]

Definition 18.4:

Let be a field, and let be an augmented ordered Abelian group. A valuation of the field is a mapping such that:

  1. .
  2. .
  3. .

Definition 18.5:

A valuation ring is an integral domain , such that there exists an augmented ordered Abelian group and a valuation with .

Theorem 18.6:

Let be a valuation ring, and let be its field of fractions. Then the following are equivalent:

  1. is a valuation ring.
  2. is an integral domain and the ideals of are linearly ordered with respect to set inclusion.
  3. is an integral domain and for each , either or .

Proof:

We begin with 3. 1.; assume that

1. 2.: Let any two ideals. Assume there exists . Let any element be given.

Properties of valuation rings[edit | edit source]

Theorem 18.8:

A valuation ring is a local ring.

Proof:

The ideals of a valuation ring are ordered by inclusion. Set . We claim that is a proper ideal of . Certainly for otherwise for some proper ideal of . Furthermore, .

Theorem 18.9:

Let be a Noetherian ring and a valuation ring. Then is a principal ideal domain.

Proof:

For, let be an ideal; in any Noetherian ring, the ideals are finitely generated. Hence let . Consider the ideals of . In a valuation rings, the ideals are totally ordered, so we may renumber the such that . Then .

Algebras and integral elements[edit | edit source]

Algebras[edit | edit source]

note to self: 21.4 is false when the constant polynomials are allowed!

Definition 21.1:

Let be a ring. An algebra over is an -module together with a multiplication . This multiplication shall be -bilinear.

Within an algebra it is thus true that we have an addition and a multiplication, and many of the usual rules of algebra stay true. Thus the name algebra.

Of course, there are some algebras whose multiplication is not commutative or associative. If the underlying ring is commutative, the ring gives a certain commutativity property in the sense of

.

Definition 21.2:

Let be an algebra, and let be a subset of . is called a subalgebra of iff it is closed with respect to the operations

  • addition
  • multiplication
  • module operation

of .

Note that this means that , together with the operations inherited from , is itself an -algebra; the necessary rules just carry over from .

Example 21.3: Let be a ring, let be another ring, and let be a ring homomorphism. Then is an -algebra, where the module operation is given by

,

and multiplication and addition for this algebra are given by the multiplication and addition of , the ring.

Proof:

The required rules for the module operation follow as thus:

Since in we have all the rules for a ring, the only thing we need to check for the -bilinearity of the multiplication is compatibility with the module operation.

Indeed,

and analogously for the other argument.

We shall note that if we are given an -algebra , then we can take a polynomial and some elements of and evaluate as thus:

  1. Using the algebra multiplication, we form the monomials .
  2. Using the module operation, we multiply each monomial with the respective coefficient: .
  3. Using the algebra addition (=module addition), we add all these together.

The commutativity of multiplication (1.) and addition (3.) ensure that this procedure does not depend on the choices of order, that can be made in regard to addition and multiplication.

Definition 21.4:

Let be an -algebra, and let be any elements of . We then define a new object, , to be the set of all elements of that arise when applying the algebra operations of and the module operation (with arbitrary elements of the underlying ring) to the elements a finite number of times, in an arbitrary fashion (for example the elements , , are all in ). By multiplying everything out (using the rules we are given for an algebra), we find that this is equal to

.

We call the algebra generated by the elements .

Theorem 21.5:

Let an -algebra be given, and let . Then

  • is a subalgebra of .

Furthermore,

and

  • is (with respect to set inclusion) smaller than any other subalgebra of containing each element .

Proof:

The first claim follows from the very definition of subalgebras of : The closedness under the three operations. For, if we are given any elements of , applying any operation to them is just one further step of manipulations with the elements .

We go on to prove the equation

.

For "" we note that since are contained within every occuring on the right hand side. Thus, by the closedness of these , we can infer that all finite manipulations by the three algebra operations (addition, multiplication, module operation) are included in each . From this follows "".

For "" we note that is also a subalgebra of containing , and intersection with more things will only make the set at most smaller.

Now if any other subalgebra of is given that contains , the intersection on the right hand side of our equation must be contained within it, since that subalgebra would be one of the .

Exercises[edit | edit source]

  • Exercise 21.1.1:

Symmetric polynomials[edit | edit source]

Definition 21.6:

Let be a ring. A polynomial is called symmetric if and only if for all ( being the symmetric group), we have

.

That means, we can permute the variables arbitrarily and still get the same result.

This section shall be devoted to proving a very fundamental fact about these polynomials. That is, there are some so-called elementary symmetric polynomials, and every symmetric polynomial can be written as a polynomial in those elementary symmetric polynomials.

Definition 21.7:

Fix an . The elementary symmetric polynomials in variables are the polynomials

Without further ado, we shall proceed to the theorem that we promised:

Theorem 21.8:

Let any symmetric polynomial be given. Then we find another polynomial such that

.

Hence, every symmetric polynomial is a polynomial in the elementary symmetric polynomials.

Proof 1:

We start out by ordering all monomials (remember, those are polynomials of the form ), using the following order:

.

With this order, the largest monomial of is given by ; this is because for all monomials of , the sum of the exponent equals , and the last condition of the order is optimized by monomials which have the first zero exponent as late as possible.

Furthermore, for any given , the largest monomial of

is given by ; this is because the sum of the exponents always equals , further the above monomial does occur (multiply all the maximal monomials from each elementary symmetric factor together) and if one of the factors of a given monomial of coming from an elementary symmetric polynomial is not the largest monomial of that elementary symmetric polynomial, we may replace it by a larger monomial and obtain a strictly larger monomial of the product ; this is because a part of the sum is moved to the front.

Now, let a symmetric polynomial be given. We claim that if is the largest monomial of , then we have .

For assume otherwise, say . Then since is symmetric, we may exchange the exponents of the -th and -th variable respectively and still obtain a monomial of , and the resulting monomial will be strictly larger.

Thus, if we define for

and furthermore , we obtain numbers that are non-negative. Hence, we may form the product

,

and if is the coefficient of the largest monomial of , then the largest monomial of

is strictly smaller than that of ; this is because the largest monomial of is, by our above computation and calculating some telescopic sums, equal to the largest monomial of , and the two thus cancel out.

Since the elementary symmetric polynomials are symmetric and sums, linear combinations and products of symmetric polynomials are symmetric, we may repeat this procedure until we are left with nothing. All the stuff that we subtracted from collected together then forms the polynomial in elementary symmetric polynomials we have been looking for.

Proof 2:

Let be an arbitrary symmetric polynomial, and let be the degree of and be the number of variables of .

In order to prove the theorem, we use induction on the sum of the degree and number of variables of .

If , we must have (since would imply the absurd ). But any polynomial of one variable is already a polynomial of the symmetric polynomial .

Let now . We write

,

where every monomial occuring within lacks at least one variable, that is, is not divisible by .

The polynomial is still symmetric, because any permutation of a monomial that lacks at least one variable, also lacks at least one variable and hence occurs in with same coefficient, since no bit of it could have been sorted to the "" part.

The polynomial has the same number of variables, but the degree of is smaller than the degree of . Furthermore, is symmetric because of

.

Hence, by induction hypothesis, can be written as a polynomial in the symmetric polynomials:

for a suitable .

If , then is a polynomial of the elementary symmetric polynomial anyway. Hence, it is sufficient to only consider the case . In that case, we may define the polynomial

.

Now has one less variable than and at most the same degree, which is why by induction hypothesis, we find a representation

for a suitable .

We observe that for all , we have . This is because the unnecessary monomials just vanish. Hence,

.

We claim that even

.

Indeed, by the symmetry of and and renaming of variables, the above equation holds where we may set an arbitrary of the variables equal to zero. But each monomial of lacks at least one variable. Hence, by successively equating coefficients in where one of the variables is set to zero, we obtain that the coefficients on the right and left of are equal, and thus the polynomials are equal.

Integral dependence[edit | edit source]

Definition 21.9:

If is any ring and a subring, is called integral over iff

for suitable .

A polynomial of the form

(leading coefficient equals )

is called a monic polynomial. Thus, being integral over means that is the root of a monic polynomial with coefficients in .

Whenever we have a subring of a ring , we consider the module structure of as an -module, where the module operation and summation are given by the ring operations of .

Theorem 21.10 (characterisation of integral dependence):

Let be a ring, a subring. The following are equivalent:

  1. is integral over
  2. is a finitely generated -module.
  3. is contained in a subring that is finitely generated as an -module.
  4. There exists a faithful, nonzero -module which is finitely generated as an -module.

Proof:

1. 2.: Let be integral over , that is, . Let be an arbitrary element of . If is larger or equal , then we can express in terms of lower coefficients using the integral relation. Repetition of this process yields that generate over .

2. 3.: .

3. 4.: Set ; is faithful because if annihilates , then in particular .

4. 1.: Let be such a module. We define the morphism of modules

.

We may restrict the module operation of to to obtain an -module. is also a morphism of -modules. Further, set . Then (). The Cayley–Hamilton theorem gives an equation

, ,

where is to be read as the multiplication operator by and as the zero operator, and by the faithfulness of , in the usual sense.

Theorem 21.11:

Let be a field and a subring of . If is integral over , then is a field.

Proof:

Let . Since is a field, we find an inverse ; we don't know yet whether is contained within . Since is integral over , satisfies an equation of the form

for suitable . Multiplying this equation by yields

.

Theorem 21.12:

Let be a subring of . The set of all elements of which are integral over constitutes a subring of .

Proof 1 (from the Atiyah–Macdonald book):

If are integral over , is integral over . By theorem 21.10, is finitely generated as -module and is finitely generated as -module. Hence, is finitely generated as -module. Further, and . Hence, by theorem 21.10, and are integral over .

Proof 2 (Dedekind):

If are integral over , and are finitely generated as -modules. Hence, so is

.

Furthermore, and . Hence, by theorem 21.10, are integral over .

Definition 21.13:

Let be a subring of the ring . The integral closure of over is the ring consisting of all elements of which are integral over .

Definition 21.14:

Let be a subring of the ring . If all elements of are integral over , is called an integral ring extension of .

Irreducibility, algebraic sets and varieties[edit | edit source]

Irreducibility[edit | edit source]

Definition 21.1:

Let be a topological space. is said to be irreducible if and only if no two non-empty open subsets of are disjoint.

Some people (topologists) call irreducible spaces hyperconnected.

Theorem 21.2 (characterisation of irreducible spaces):

Let be a topological space. The following are equivalent:

  1. is irreducible.
  2. can not be written as the union of two proper closed subsets.
  3. Every open subset of is dense in .
  4. The interior of every proper closed subset of is empty.

Proof 1: We prove 1. 2. 3. 4. 1.

1. 2.: Assume that , where , are proper and closed. Define and . Then are open and

by one of deMorgan's rules, contradicting 1.

2. 3.: Assume that is open but not dense. Then is closed and proper in , and so is . Furthermore, , contradicting 2.

3. 4.: Let be closed such that . By definition of the closure, , which is why is a non-dense open set, contradicting 3.

4. 1.: Let be open and non-empty such that . Define . Then is a proper, closed subset of , since . Furthermore, , which is why has non-empty interior.

Proof 2: We prove 1. 4. 3. 2. 1.

1. 4.: Assume we have a proper closed subset of with nonempty interior. Then and are two disjoint nonempty open subsets of .

4. 3.: Let be open. If was not dense in , then would be a proper closed subset of with nonempty interior.

3. 2.: Assume , proper and closed. Set . Then , and hence is not dense within .

2. 1.: Let be open. If they are disjoint, then .

Remaining arrows:

1. 3.: Assume open, not dense. Then is nonempty and disjoint from .

3. 1.: Let be open. If they are disjoint, then and thus is not dense.

2. 4.: Let be proper and closed with nonempty interior. Then .

4. 2.: Let , proper and closed. Then .


We shall go on to prove a couple of properties of irreducible spaces.

Theorem 21.3:

Every irreducible space is connected and locally connected.

Proof:

1. Connectedness: Assume , open, non-empty. This certainly contradicts irreducibility.

2. Local connectedness: Let , where is open. But any open subset of is connected as in 1., which is why we have local connectedness.

Theorem 21.4:

Let be an irreducible space. Then is Hausdorff if and only if .

Proof:

If , then is trivially Hausdorff. Assume that is Hausdorff and contains two distinct points . Then we find open such that , and , contradicting irreducibility.

Theorem 21.5:

Let be topological spaces, where is irreducible, and let be a continuous function (i.e. a morphism in the category of topological spaces). Then is irreducible with the subspace topology induced by .

Proof: Let be two disjoint non-empty open subsets of . Since we are working with the subspace topology, we may write , , where are open. We have

and similarly .

Hence, and are open in by continuity, and since they further are disjoint (since if , then and thus ) and non-empty (since e.g. if , since , for an and hence ), we have a contradiction.

Corollary 21.6:

If is irreducible, is Hausdorff and is continuous, then is constant.

Proof: Follows from theorems 21.4 and 21.5.

We may now connect irreducible spaces with Noetherian spaces.

Theorem 21.7:

Let be a Noetherian topological space, and let be closed. Then there exists a finite decomposition

where each is irreducible, and no is a subset of (or equals) any of the other . Furthermore, this decomposition is unique up to order.

Proof:

First we prove existence. Let be closed. Then either is irreducible, and we are done, or can be written as the union of two proper closed subsets . Now again either and are irreducible, or they can be written as the union of two proper closed subsets again. The process of thus splitting up the sets must eventually terminate with all involved subsets being irreducible, since is Noetherian and otherwise we would have an infinite properly descending chain of closed subsets, contradiction. To get the last condition satisfied, we unite any subset contained within another with the greater subset (this can be done successively since there are only finitely many of them). Hence, we have a decomposition of the desired form.

We proceed to proving uniqueness up to order. Let be two such decompositions. For , we may thus write . Assume that there does not exist such that . Then we may define and then successively

for . Then we set and increase until is a decomposition of into two proper closed subsets (such an exists since it equals the first such that ). Thus, our assumption was false; there does exist such that . Thus, each is contained within a , and by symmetry is contained within some . Since by transitivity of this implies , and . For a fixed , we set , where is thus defined ( is unique since otherwise there exist two equals among the -sets). In a symmetric fashion, we may define , where . Then and are inverse to each other, and hence follows (sets with a bijection between them have equal cardinality) and the definition of , for example, implies that both decompositions are equal except for order.

Exercises[edit | edit source]

  • Exercise 21.1.1: Let be an irreducible topological space, and let be open. Prove that is irreducible.

Algebraic sets and varieties[edit | edit source]

Definition 21.8:

Let be a field. Then the sets of the form

,

where is a subset of the ring of polynomials in variables over (that is ), are called algebraic sets. If for a single , we shall occasionally write

.

The following picture depicts three algebraic sets (apart from the cube lines):

The orange surface is the set , the blue surface is the set , and the green line is the intersection of the two, equal to the set , where

and
.

Three immediate lemmata are apparent.

Lemma 21.9:

.

Proof: Being in is the stronger condition.

Lemma 21.10 (formulas for algebraic sets):

Let be a field and set . Then the following rules hold for algebraic sets of :

  1. ( a set)
  2. and
  3. ( ideals)
  4. ( sets)

Proof:

1. Let . If follows . This proves . The other direction follows from lemma 21.9.

2. follows from the constant functions being contained within , and gives no condition on the points of to be contained within it.

3. follows by

since clearly .

We will first prove for the case . Indeed, let , that is, neither nor . Hence, we find a polynomial such that and a polynomial such that . The polynomial is contained within and , since every field is an integral domain. Thus, .

Assume holds for many sets. Then we have

.

4.

From this lemma we see that the algebraic sets form the closed sets of a topology, much like the Zariski-closed sets we got to know in chapter 14. We shall soon find a name for that topology, but we shall first define it in a different way to justify the name we will give.

Lemma 21.11:

Let be a field and . Then

;

we recall that is the radical of .

Proof: "" follows from lemma 21.9. Let on the other hand and . Then for a suitable . Thus, . Assume . Then , contradiction. Hence, .

From calculus, we all know that there is a natural topology on , namely the one induced by the Euclidean norm. However, there exists also a different topology on , and in fact, on for any field . This topology is called the Zariski topology on . Now the Zariski topology actually is a topology on , for a ring, isn't it? Yes, and if , then is in bijective correspondence with a subset of . Through this correspondence we will define the Zariski topology. So let's establish this correspondence by beginning with the following lemma.

Lemma 21.12:

Let be a field and set . If , then the ideal

is a maximal ideal of .

Proof:

Set

.

This is a surjective ring homomorphism. We claim that its kernel is given by . This is actually not trivial and requires explanation. The relation is trivial. We shall now prove the other direction, which isn't. For a given , we define ; hence,

Furthermore, if and only if . The latter condition is satisfied if and only if has no constant, and this happens if and only if is contained within the ideal . This means we can write as an -linear combination of , and inserting for gives the desired statement.

Hence, by the first isomorphism theorem for rings,

.

Thus, is a field and hence is maximal.

Lemma 21.13:

Let be a field. Define

(according to the previous lemma this is a subset of , as maximal ideals are prime). Then the function

is a bijection.

Proof:

The function is certainly surjective. Let , and assume for a certain . Then , and thus

.

Thus, contains a unit and therefore equals , contradicting its maximality that was established in the last lemma.

Definition 21.14:

Let be a field. Then the Zariski topology on is defined to consist of the open sets

, open

where and are given as in lemma 21.13 (that is, the Zariski topology on is defined to be the initial topology with respect to ).

It is easy to check that the sets , really do form a topology.

There is a very simple different way to characterise the Zariski topology:

Theorem 21.15:

Let be a field. The closed sets of the Zariski topology on are exactly the algebraic sets.

Proof:

Unfortunately, for a set , the notation is now ambiguous; it could refer to the algebraic set associated to , or to the set of prime ideals of satisfying . Hence, we shall write the latter as for the remainder of this wikibook.

Let be closed w.r.t. the Zariski topology; that is, , where is the function from lemma 21.13 and . We claim that . Indeed, for ,

.

Let now be an algebraic set. We claim . Indeed, the above equivalences prove also this identity (with replacing ).

In fact, we could have defined the Zariski topology in this way (that is, just defining the closed sets to be the algebraic sets), but then we would have hidden the connection to the Zariski topology we already knew.

We shall now go on to give the next important definition, which also shows why we dealt with irreducible spaces.

Definition 21.16:

Let be a field and let be an algebraic set. If is irreducible w.r.t. the subspace topology induced by the Zariski topology, is called an algebraic variety.

Often, we shall just write variety for algebraic variety.

We have an easy characterisation of algebraic varieties. But in order to prove it, we need a definition with theorem first.

Theorem and definition 21.17:

Let be an algebraic set. We define

and call the ideal associated to or the ideal of vanishing of . We have

and any set such that is contained within .

Proof:

Let first be any set such that . Then for all and , and hence . Thus .

Therefore, , and hence by lemma 21.9. On the other hand, if , then for all by definition. Hence . This proves .

Theorem 21.18:

Let be a field and let be an algebraic set. Then is an algebraic variety if and only if for a prime ideal .

Proof:

Let first be a prime ideal. Assume that , where are two proper closed subsets of (according to lemma 21.10, all subsets of closed w.r.t. the subspace topology of have this form). Then there exist and . Hence, there is such that and such that . Furthermore, since for all either or , but neither nor .

Let now be an algebraic set, and assume that is not prime. Let such that neither nor . Set and . Then and are strictly larger than . According to 21.17, and , since otherwise or respectively. Hence, both and are proper subsets of . But if , then . Hence, either or , and thus either or . Thus, is the union of two proper closed subsets,

,

and is not irreducible. Hence, if irreducibility is present, then is prime and from 21.17 .

Theorem 21.19:

, equipped with the Zariski topology, is a Noetherian space.

Proof:

Let be an ascending chain of open sets. Let and be given as in lemma 21.13 and definition 21.14. Set for all . Then, since , being a function, preserves inclusion,

.

Since is a Noetherian ring, so is (by repeated application of Hilbert's basis theorem). Hence, the above ascending chain of the eventually stabilizes at some . Since is a bijection, . Hence, the stabilize at as well.

Corollary 21.20:

Every algebraic set has a decomposition

for certain prime ideals such that none of the is a proper subset of the other. This decomposition is unique up to order.

That is, we can decompose algebraic sets into algebraic varieties.

Proof:

Combine theorems 21.19, 21.7 and 21.18.

Exercises[edit | edit source]

  • Exercise 21.2.1: Let . Prove that .

Noether's normalisation lemma[edit | edit source]

Computational preparation[edit | edit source]

Lemma 23.1:

Let be a ring, and let be a polynomial. Let be a number that is strictly larger than the degree of any monomial of (where the degree of an arbitrary monomial of is defined to be ). Then the largest monomial (with respect to degree) of the polynomial

has the form for a suitable .

Proof:

Let be an arbitrary monomial of . Inserting for , for gives

.

This is a polynomial, and moreover, by definition consists of certain coefficients multiplied by polynomials of that form.

We want to find the largest coefficient of . To do so, we first identify the largest monomial of

by multiplying out; it turns out, that always choosing yields a strictly larger monomial than instead preferring the other variable . Hence, the strictly largest monomial of that polynomial under consideration is

.

Now is larger than all the involved here, since it's even larger than the degree of any monomial of . Therefore, for coming from monomials of , the numbers

represent numbers in the number system base . In particular, no two of them are equal for distinct , since numbers of base must have same -cimal places to be equal. Hence, there is a largest of them, call it . The largest monomial of

is then

;

its size dominates certainly all monomials coming from the monomial of with powers , and by choice it also dominates the largest monomial of any polynomials generated by any other monomial of . Hence, it is the largest monomial of measured by degree, and it has the desired form.

Algebraic independence in algebras[edit | edit source]

A notion well-known in the theory of fields extends to algebras.

Theorem 23.2:

Let be a ring and an -algebra. Elements in are called algebraically independent over iff there does not exist a polynomial such that (where the polynomial is evaluated as explained in chapter 21).

Transitivity of localisation[edit | edit source]

The theorem[edit | edit source]

Theorem 23.3 (Noether's normalisation lemma):

Let be an integral domain, and let be a ring extension of that is finitely generated as a -module; in particular, is a -algebra, where the algebra operations are induced by the ring operations. Then we may pick a such that there exist ( denoting the localisation of at ) which are algebraically independent over as a -algebra

Localisation of fields[edit | edit source]

Hilbert's Nullstellensatz[edit | edit source]

Zariski's lemma[edit | edit source]

Definition 24.1 (Finitely generated algebra):

Let be a ring. An -algebra is called finitely generated, iff there are elements such that is already all of ; that is .

being a finitely generated -algebra thus means that we may write any element of as a polynomial for a certain (where polynomials are evaluated as explained in chapter 21).

Lemma 24.2 (Artin–Tate):

Let be ring extensions such that is a Noetherian ring, and is finitely generated as an -module and also finitely generated as an -algebra. Then is finitely generated as an -algebra.

Proof:

Since is finitely generated as an -module, there exist such that as an -module. Further, since is finitely generated as -algebra, we find such that equals . Now by the generating property of the , we may determine suitable coefficients (where ranges in and in ) such that

, .

Furthermore, there exist suitable () such that

.

We define ; this notation shall mean: is the algebra generated by all the elements . Since the algebra operations of are the ones induced by its ring operations, , being a subalgebra, is a subring of . Furthermore, and . Since is a Noetherian ring, is also Noetherian by theorem 16.?.

We claim that is even finitely generated as an -module. Indeed, if any element is given, we may write it as a polynomial in the . Using , multiplying everything out, and then using repeatedly, we can write this polynomial as a linear combination of the with coefficients all in . This proves that indeed, is finitely generated as an -module. Hence, is Noetherian as an -module.

Therefore, , being a submodule of as -module, is finitely generated as an -module. We claim that is finitely generated as an -algebra. To this end, assume we are given a set of generators of as an -module. Any element can be written

, .

Each of the is a polynomial in the generators of (that is, the elements ) with coefficients in . Inserting this, we see that is a polynomial in the elements with coefficients in . But this implies the claim.

Theorem 24.3 (Zariski's lemma):

Let be a field extension of a field . Assume that for some in , is a field. Then every is algebraic over .

Proof 1 (Azarang 2015):


Before giving the proof of the lemma, we recall the following two well-known facts.

Fact 1. If a field is integral over a subdomain , then is a field.

Fact 2. If is any principal ideal domain (or just a UFD) with infinitely many (non-associate) prime elements, then its field of fractions is not a finitely generated -algebra.

Proof of the Lemma: We use induction on for arbitrary fields and . For the assertion is clear. Let us assume that and the lemma is true for less than . Now to show it for , one may assume that one of , say , is not algebraic over and since is a field, by induction hypothesis, we infer are all algebraic over . This implies that there are polynomials such that all 's are integral over the domain . Since is integral over , by Fact 1, is a field. Consequently, , which contradicts Fact 2.



Proof 2 (Artin–Tate):

If all of the generators of over are algebraic over , the last paragraph of the preceding proof shows that is a finite field extension of . Hence, we only have to consider the case where at least one of the generators of over is transcendental over .

Indeed, assume that . By reordering, we may assume that are transcendental over () and are algebraic over . We have , and furthermore since is a field extension of containing all the elements . Hence, .

Since all the are algebraic over , they are also algebraic over . Assume that there exists a polynomial such that . Then is algebraic over ; for, the part of the monomials not being a power of may be seen as coefficients within that field. Hence, we may lower by one and still obtain that are algebraic over . Repetition of this process eventually terminates, or otherwise would be algebraic over , and would be a finite tower of algebraic extensions (, and so on) and thus a finite field extension.

Therefore, we may assume that are algebraically independent over . In this case, the map

is an isomorphism (it is a homomorphism, surjective and injective), and hence, is a unique factorisation domain (since is).

Now set . Then , and is finitely generated as an -algebra and finitely generated as a -module (since it is a finite field extension of ). Therefore, by lemma 24.2, is finitely generated as an -algebra. Let

be generators of as -algebra. Let be all the primes occuring in the (unique) prime factorisations of . Now contains an infinite number of primes. This is seen as follows.

Assume were the only primes of . Since we have prime factorisation, the element is divisible by at least one of , say . This means

for a certain , which is absurd, since applying the inverse of the above isomorphism to , we find that is mapped to , but the right hand side has strictly positive degree.

Hence, we may pick prime. Then can not be written as a polynomial in terms of the generators, but is nonetheless contained within . This is a contradiction.

Proof 3 (using Noether normalisation):

According to Noether's normalisation lemma for fields, we may pick algebraically independent over such that is a finitely generated -module. Let be elements of that generate as an -module. Then according to theorem 21.10 3. 1., the generators are all integral over , and since the integral elements form a ring, is integral over . Hence, is a field by theorem 21.11. But if , then the being algebraically independent means that the homomorphism

is in fact an isomorphism, whence is not a field, contradiction. Thus, , and hence is finitely generated as an -module. This implies that we have a finite field extension; all elements of are finite -linear combinations of certain generators.

Hilbert's Nullstellensatz[edit | edit source]

There are several closely related results bearing the name Hilbert's Nullstellensatz. We shall state and prove the ones commonly found in the literature. These are the "weak form", the "common roots form" and the "strong form". The result that Hilbert originally proved was the strong form.

Weak form[edit | edit source]

The formulation and proof of the weak form of Hilbert's Nullstellensatz are naturally preceded by the following lemma.

Lemma 24.5:

Let be any field. For any maximal ideal , the field is a finite field extension of the field . In particular, if is algebraically closed (and thus has no proper finite field extensions), then .

Proof 1 (using Zariski's lemma):

is a finitely generated -algebra, where all the operations are induced by the ring structure of ; this is because the set constitutes a set of generators, since every element in can be written as polynomials in those elements over . Therefore, Zariski's lemma implies that is a finite field extension of the field .

Proof 2 (using Jacobson rings):

We proceed by induction on .

The case follows by noting that is a principal ideal domain (as an Euclidean domain) and hence, if is a (maximal) ideal, then for a suitable . Now is a field if is maximal; we claim that it is a finite field extension of the field . Indeed, as basis elements we may take , where is the degree of the generating polynomial of the maximal ideal . Any element of can thus be expressed as linear combination of these basis elements, since the relation

(where )

allows us to express monomials of degree in terms of smaller ones.

Assume now the case is proven. Let be a maximal ideal. According to Jacobson's first criterion, is a Jacobson ring (since is, being a field). Now and hence is a maximal ideal of . Thus, Goldman's second criterion asserts that is a maximal ideal of . Thus, is a field, and, by the induction hypothesis, a finite field extension of .

We define the ideal . The following map is manifestly an isomorphism:

This map sends to (and, being an isomorphism, vice versa).

Furthermore, since , the ideal is maximal in . Hence, is maximal in and thus is a field. By the case it is a finite field extension of the field .


In general, any proper ideal of , where is a field, does not contain any constants (apart from zero), for else it would contain a unit and thus be equal to the whole of . This applies, in particular, to all maximal ideals of . Thus, elements of of the form are distinct for pairwise distinct . By definition of addition and multiplication of residue class rings, this implies that we have an isomorphism of rings (and thus, of fields)

.

Hence, in the case that is algebraically closed, the above lemma implies via that isomorphism.

Theorem 24.6 (Hilbert's Nullstellensatz, weak form):

Let be an algebraically closed field. For any , set

;

according to lemma 21.12, is a maximal ideal.

The claim of the weak Hilbert's Nullstellensatz is this: Every maximal ideal has the form for a suitable .

Proof:

Let be any maximal ideal of . According to the preceding lemma, and since is algebraically closed, we have via an isomorphism that sends elements of the type to . Now this isomorphism must send any element of the type to some element of . But further, the element is sent to . Since we have an isomorphism (in particular injectivity), we have . Thus for suitable . Since the ideal is maximal (lemma 21.12), we have equality: .

Common roots form[edit | edit source]

Theorem 24.7 (Hilbert's Nullstellensatz, common root form):

Let be an algebraically closed field and let . If

,

then there exists such that .

Proof:

This follows from the weak form, since is contained within some maximal ideal , which by the weak form has the form for suitable and hence ; in particular, , that is, is a common root of .

Strong form[edit | edit source]

Theorem 24.8 (Hilbert's Nullstellensatz, strong form):

Let be an algebraically closed field. If is an arbitrary ideal, then

;

recall: is the radical of .

In particular, if is a radical ideal (that is, ), then

.

Note that together with the rule

for any algebraic set (that was established in chapter 22), this establishes a bijective correspondence between radical ideals of and algebraic sets in , given by the function

and inverse

.

Proof 1 (using Jacobson rings):

Certainly, a field is a Jacobson ring. Furthermore, from Goldman's first criterion (theorem 14.4) we may infer that is a Jacobson ring as well. Let now be a polynomial vanishing at all of , and let be any maximal ideal of that contains . By the weak Nullstellensatz, has the form for a suitable .

Now we have , since any polynomial in can be written as a -linear combination of the generators . Hence, is not all of ; due to the constant functions, only the empty set has this ideal of vanishing. This, in combination with the fact that and the maximality of implies .

Furthermore, , and hence . Therefore, .

Since was arbitrary, is thus contained in all maximal ideals containing and hence, since is Jacobson, . However, the other direction is easy to see (we will prove this in the first paragraph of the next proof; there is no need to repeat the same argument in two proofs). Thus, .

Proof 2 (Rabinowitsch trick):

First we note : Indeed, if , then for all . Hence also for all since a field does not have nilpotent elements except zero (in fact, not even zero divisors). This implies .

is the longer direction. Note that any field is Noetherian, and thus, by Hilbert's basis theorem, so is . Hence, , being an ideal of , is finitely generated. Write

.

Let . Consider the polynomial ring , which is augmented by an additional variable. In that ring, consider the polynomial . The polynomials have no common zero (where the polynomials are seen as polynomials in the variables by the way of ), since if all the polynomials are zero at (where the variable does not matter for the evaluation of ), then so is . Hence, in this case, .

Now we may apply the common roots form of the Nullstellensatz for the case of variables. The polynomials have no common zero, and therefore, the common roots form Nullstellensatz implies that the ideal must be all of . In particular, we can find such that

.

Passing to the field of rational functions , we may insert for (recall that we assumed ) to obtain

,

where we left out the variables of so that it still fits on the screen. Now , whence

.

Multiplying this equation by an appropriate power of , call it , sufficiently large such that we clear out all denominators, and noting that the last variable does not matter for , yields that equals an -linear combination of and is thus contained within . Hence, .

Note how Yuri Rainich ("Rabinowitsch") may have found this trick. Perhaps he realized that the weak Nullstellensatz is a claim for arbitrary , and for the proof of the strong Nullstellensatz, we can do one at a time, using the infinitude of cases of the common roots form Nullstellensatz. That is, compared to a particular dimensional case in the strong Nullstellensatz, the infinitude of cases for the common roots form Nullstellensatz are not so weak at all, despite the common roots form being a consequence of the weak Nullstellensatz. This could have given Rainich evidence that using more cases, one obtains a stronger tool. And indeed, it worked out.

A diagram depicting the different paths to Hilbert's Nullstellensatz covered in this wikibook.