A category $\mathcal{C}$ is a collection of objects together with morphisms; each morphism $f$ goes from an object $A$ to an object $B$ (written $f : A \to B$, where $A$ is called the domain and $B$ the codomain), such that
Any morphism $f : A \to B$ can be composed with a morphism $g : B \to C$ such that the composition of the two is a morphism $g \circ f : A \to C$, and composition is associative: $h \circ (g \circ f) = (h \circ g) \circ f$.
For each object $A$, there exists a morphism $\operatorname{id}_A : A \to A$ such that for any morphism $f : A \to B$ we have $f \circ \operatorname{id}_A = f$ and for any morphism $g : C \to A$ we have $\operatorname{id}_A \circ g = g$.
The collection of all groups together with group homomorphisms as morphisms is a category.
The collection of all rings together with ring homomorphisms is a category.
Sets together with ordinary functions form the category of sets.
To every category we may associate an opposite category:
Definition 1.3 (opposite categories):
Let $\mathcal{C}$ be a category. The opposite category $\mathcal{C}^{\mathrm{op}}$ of $\mathcal{C}$ is the category with the same objects as $\mathcal{C}$, but with all morphisms reversed: the domain of each morphism of $\mathcal{C}^{\mathrm{op}}$ is the codomain of the corresponding morphism of $\mathcal{C}$, and vice versa.
For instance, within the opposite category of sets, a function $f : S \to T$ (where $S$, $T$ are sets) is a morphism $T \to S$.
A category is such a general object that some important algebraic structures arise as special cases. For instance, consider a category with one object. Then the morphisms of this category form a monoid with composition as its operation. Conversely, given an arbitrary monoid, we can define its elements to be the morphisms from a single object to itself, and thus obtain a representation of that monoid as a category with one object.
If we are given a category with one object, and the morphisms all happen to be invertible, then we have in fact a group structure. And further, just as described for monoids, we can turn every group into a category.
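The correspondence just described can be made concrete. Below is a minimal sketch (an illustration, not from the text) representing the cyclic group $\mathbb{Z}/4\mathbb{Z}$ as a one-object category: the morphisms are the residues, composition is addition mod 4, and the identity morphism is the residue $0$.

```python
# Sketch: Z/4 presented as a category with one (implicit) object.
# Morphisms: the residues 0..3; composition: addition mod 4; identity: 0.

MORPHISMS = [0, 1, 2, 3]
IDENTITY = 0

def compose(g, f):
    """Composition of morphisms = the group operation (addition mod 4)."""
    return (g + f) % 4

# Category axioms: associativity and the identity laws.
assert all(compose(compose(h, g), f) == compose(h, compose(g, f))
           for f in MORPHISMS for g in MORPHISMS for h in MORPHISMS)
assert all(compose(IDENTITY, f) == f == compose(f, IDENTITY) for f in MORPHISMS)

# Every morphism is invertible, so this one-object category is in fact a group.
assert all(any(compose(g, f) == IDENTITY for g in MORPHISMS) for f in MORPHISMS)
```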
Exercise 1.3.1: Come up with a category $\mathcal{C}$, whose objects are some finitely many sets, such that there exists an epimorphism that is not surjective, and a monomorphism that is not injective (Hint: Include few morphisms).
Terminal, initial and zero objects and zero morphisms
Within many categories, such as groups, rings, modules and so on (but not fields), there exist some sort of "trivial" objects which are the simplest possible; for instance, in the category of groups, there is the trivial group, consisting only of the identity. Indeed, within the category of groups, the trivial group has the following property:
Let $1$ be the trivial group and let $G$ be another group. Then there exists exactly one homomorphism $f : G \to 1$ and exactly one homomorphism $g : 1 \to G$.
Furthermore, if $H$ is any other group with the property that for every other group $G$, there exists exactly one homomorphism $G \to H$ and exactly one homomorphism $H \to G$, then $H \cong 1$.
Proof: We begin with the first part. Let $f : G \to 1$ be a homomorphism. Then $f$ must take the value of the one element of $1$ everywhere and is thus uniquely determined. If furthermore $g : 1 \to G$ is a homomorphism, by the homomorphism property we must have $g(e) = e_G$ (otherwise obtain a contradiction by taking a power of $g(e)$).
Assume now that $H \not\cong 1$, and let $h$ be an element within $H$ that does not equal the identity. Let $G = H$. We define a homomorphism $G \to H$ by $\operatorname{id}_H$. In addition to that homomorphism, we also have the trivial homomorphism $G \to H$, $x \mapsto e_H$. Hence, we don't have uniqueness.
Using the characterisation given by theorem 1.6, we may generalise this concept into the language of category theory.
Let $\mathcal{C}$ be a category. A zero object of $\mathcal{C}$ is an object $0$ of $\mathcal{C}$ such that for all other objects $A$ of $\mathcal{C}$ there exist unique morphisms $A \to 0$ and $0 \to A$.
Within many usual categories, such as groups (as shown above), but also rings and modules, there exist zero objects. However, not so within the category of sets. Indeed, let $T$ be an arbitrary set. If $|T| \geq 2$, then from any nonempty set there exist at least 2 morphisms with codomain $T$, namely two distinct constant functions. If $|T| = 1$, we may pick a set $S$ with $|S| \geq 2$ and obtain two morphisms from $T$ mapping to $S$. If $T = \emptyset$, then there does not exist a function $S \to T$ for any nonempty set $S$.
But, if we split definition 1.6 in half, each half can be found within the category of sets.
Let $\mathcal{C}$ be a category. An object $A$ of $\mathcal{C}$ is called
terminal iff for every other object $B$ of $\mathcal{C}$ there exists exactly one morphism $B \to A$;
initial iff for every other object $B$ of $\mathcal{C}$ there exists exactly one morphism $A \to B$.
In the category of sets, there exists one initial object and infinitely many terminal objects. The initial object is the empty set; the argument above definition 1.7 shows that this is the only remaining option, and it is a valid one because the only morphism from the empty set to any other set is the empty function. Furthermore, every set with exactly one element is a terminal object, since every morphism mapping to that set is the constant function whose value is the single element of that set. Hence, by generalising the concept of a zero object in two different directions, we have obtained a fine description of the symmetry breaking at the level of sets.
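These counting arguments can be verified mechanically. The following sketch (an illustration, not part of the text) enumerates all functions between small finite sets and confirms that a singleton is terminal, while the empty set is initial but not terminal:

```python
from itertools import product

def functions(domain, codomain):
    """All functions domain -> codomain, each encoded as a dict."""
    if not domain:
        return [dict()]  # the unique empty function
    return [dict(zip(domain, values))
            for values in product(codomain, repeat=len(domain))]

# A singleton is terminal: exactly one function into it from any set.
assert len(functions([1, 2, 3], ['*'])) == 1

# The empty set is initial: exactly one (empty) function out of it...
assert len(functions([], [1, 2, 3])) == 1
# ...but not terminal: no function from a nonempty set into it.
assert len(functions([1, 2], [])) == 0
```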
Now returning to the category of groups, between any two groups there also exists a particularly trivial homomorphism, namely the zero homomorphism. We shall also elevate this concept to the level of categories. The following theorem is immediate:
Let $1$ be the trivial group, and let $G$ and $H$ be any two groups. If $f : G \to 1$ and $g : 1 \to H$ are homomorphisms, then $g \circ f$ is the trivial homomorphism.
Now we may proceed to the categorical definition of a zero morphism. It is only defined for categories that have a zero object. (There exists a more general definition, but it shall be of no use to us during the course of this book.)
Let $\mathcal{C}$ be a category with a zero object $0$, and let $A, B$ be objects of that category. Then the zero morphism from $A$ to $B$ is defined as the composition of the two unique morphisms $A \to 0$ and $0 \to B$.
There is no universally agreed-upon precise definition of a forgetful functor, but the notion is easily explained in terms of a few examples.
Consider the category of groups with homomorphisms as morphisms. We may define a functor sending each group to its underlying set and each homomorphism to itself, regarded as a function. This is a functor from the category of groups to the category of sets. Since the target objects of that functor lack the group structure, the group structure has been forgotten, and hence we are dealing with a forgetful functor here.
Consider the category of rings. Remember that each ring is an Abelian group with respect to addition. Hence, we may define a functor from the category of rings to the category of groups, sending each ring to the underlying group. This is also a forgetful functor; one which forgets the multiplication of the ring.
Let $\mathcal{C}, \mathcal{D}$ be categories, and let $F, G : \mathcal{C} \to \mathcal{D}$ be two functors. A natural transformation $\eta : F \Rightarrow G$ is a family of morphisms $\eta_A : F(A) \to G(A)$ in $\mathcal{D}$, where $A$ ranges over all objects of $\mathcal{C}$, that are compatible with the images of morphisms of $\mathcal{C}$ by the functors $F$ and $G$; that is, for every morphism $f : A \to B$ of $\mathcal{C}$ the following diagram commutes:
$$\eta_B \circ F(f) = G(f) \circ \eta_A.$$
Let $\mathcal{C}$ be the category of all fields and $\mathcal{D}$ the category of all rings. We define a functor
$$F : \mathcal{C} \to \mathcal{D}$$
as follows: Each object $K$ of $\mathcal{C}$ shall be sent to the ring whose addition and multiplication are inherited from the field, and whose underlying set are the elements
$$\{ n \cdot 1_K \mid n \in \mathbb{Z} \},$$
where $1_K$ is the unit of the field $K$. Any morphism $\varphi : K \to L$ of fields shall be mapped to the restriction $\varphi|_{F(K)}$; note that this is well-defined (that is, $\varphi|_{F(K)}$ maps $F(K)$ to the object $F(L)$ associated to $L$ under the functor $F$), since both
$$\varphi(1_K) = 1_L \quad \text{and} \quad \varphi(n \cdot 1_K) = n \cdot \varphi(1_K) = n \cdot 1_L,$$
where $1_L$ is the unit of the field $L$.
We further define a functor
$$G : \mathcal{C} \to \mathcal{D},$$
sending each field $K$ to its associated prime field, seen as a ring, and again restricting morphisms, that is sending each morphism $\varphi : K \to L$ to $\varphi|_{G(K)}$ (this is well-defined by the same computations as above, together with the observation that $\varphi$, being a field morphism, maps inverses to inverses).
In this setting, the maps
$$\eta_K : F(K) \to G(K),$$
given by inclusion, form a natural transformation from $F$ to $G$; this follows from checking the commutative diagram directly.
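A naturality square can also be checked concretely in the category of sets and functions. The sketch below uses an assumed example (the list functor, with list reversal as the family of maps, rather than the field-theoretic functors above) and verifies the defining equation for one choice of morphism:

```python
# Sketch: F = G = the "list" functor; eta_A = reversal at every object A.
# Naturality states: rev after F(f) equals G(f) after rev.

def lmap(f):
    """The list functor on morphisms: f |-> elementwise application."""
    return lambda xs: [f(x) for x in xs]

def rev(xs):
    """The component of the natural transformation at any object."""
    return list(reversed(xs))

f = lambda n: n * n        # an arbitrary morphism on integers
xs = [1, 2, 3, 4]

# The naturality square: eta_B(F(f)(xs)) == G(f)(eta_A(xs)).
assert rev(lmap(f)(xs)) == lmap(f)(rev(xs))
```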
Let $\mathcal{C}, \mathcal{D}$ be categories, let $F : \mathcal{D} \to \mathcal{C}$ be a functor, and let $C$ be an object of $\mathcal{C}$. A universal arrow from $C$ to $F$ is a morphism $u : C \to F(D)$, where $D$ is a fixed object of $\mathcal{D}$, such that for any other object $D'$ of $\mathcal{D}$ and morphism $f : C \to F(D')$ there exists a unique morphism $g : D \to D'$ such that $F(g) \circ u = f$, that is, such that the corresponding diagram commutes.
Let $\mathcal{C}$ be a category with zero objects, and let $f : A \to B$ be a morphism between two objects of $\mathcal{C}$. A kernel of $f$ is an arrow $k : K \to A$, where $K$ is what we shall call the object associated to the kernel $k$, such that
$f \circ k = 0$, and for each object $C$ of $\mathcal{C}$ and each morphism $g : C \to A$ such that $f \circ g = 0$, there exists a unique $h : C \to K$ such that $g = k \circ h$.
The second property is depicted in the following commutative diagram:
Note that here, we don't see kernels only as subsets, but rather as an object together with a morphism. This is because in the category of groups, for example, we can take the morphism to be the inclusion. Let me explain.
In the category of groups, every morphism has a kernel.
Let $G, H$ be groups and $f : G \to H$ a morphism (that is, a group homomorphism). We set
$$K := \ker f = \{ g \in G \mid f(g) = e_H \} \quad \text{and} \quad k : K \to G$$
the inclusion. This is indeed a kernel in the category of groups. For, if $g : C \to G$ is a group homomorphism such that $f \circ g = 0$, then $g$ maps wholly into $\ker f$, and we may simply write $g = k \circ h$, where $h : C \to K$ is $g$ with its codomain restricted to $K$. This is also clearly a unique factorisation.
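This construction can be carried out explicitly for finite groups. The following sketch (the concrete homomorphisms are assumed examples) computes the kernel of the reduction map $\mathbb{Z}/12\mathbb{Z} \to \mathbb{Z}/4\mathbb{Z}$ and checks that a homomorphism killed by $f$ lands inside that kernel:

```python
# Sketch: the kernel of f : Z/12 -> Z/4, x |-> x mod 4, as a subset of Z/12.

def f(x):
    return x % 4  # well-defined on Z/12 since 4 divides 12

G = range(12)
kernel = [x for x in G if f(x) == 0]
assert kernel == [0, 4, 8]  # the multiples of 4 in Z/12

# Any homomorphism g with f o g = 0 factors through the kernel; for instance
# g : Z/3 -> Z/12, y |-> 4*y mod 12.
g = lambda y: (4 * y) % 12
assert all(f(g(y)) == 0 for y in range(3))   # f o g = 0
assert all(g(y) in kernel for y in range(3)) # so g lands inside ker f
```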
For kernels the following theorem holds:
Let $\mathcal{C}$ be a category with zero objects, let $f : A \to B$ be a morphism and let $k : K \to A$ be a kernel of $f$. Then $k$ is monic (that is, a monomorphism).
Let $k \circ g = k \circ h$. The situation is depicted in the following picture:
Here, the three lower arrows depict the general property of the kernel. Now the morphisms $g$ and $h$ are both factorisations of the morphism $k \circ g = k \circ h$ over $k$. By uniqueness of such factorisations, $g = h$.
Kernels are essentially unique:
Let $\mathcal{C}$ be a category with zero objects, let $f : A \to B$ be a morphism and let $k_1 : K_1 \to A$, $k_2 : K_2 \to A$ be two kernels of $f$. Then
$$K_1 \cong K_2,$$
that is to say, $K_1$ and $K_2$ are isomorphic.
From the first property of kernels, we obtain $f \circ k_1 = 0$ and $f \circ k_2 = 0$. Hence, the second property of kernels implies the commutative diagrams
$$k_1 = k_2 \circ g \quad \text{and} \quad k_2 = k_1 \circ h$$
for unique morphisms $g : K_1 \to K_2$ and $h : K_2 \to K_1$. We claim that $g$ and $h$ are inverse to each other. Indeed,
$$k_1 \circ (h \circ g) = k_2 \circ g = k_1 = k_1 \circ \operatorname{id}_{K_1} \quad \text{and} \quad k_2 \circ (g \circ h) = k_1 \circ h = k_2 = k_2 \circ \operatorname{id}_{K_2}.$$
Since both $k_1$ and $k_2$ are monic by theorem 3.3, we may cancel them to obtain
$$h \circ g = \operatorname{id}_{K_1} \quad \text{and} \quad g \circ h = \operatorname{id}_{K_2},$$
that is, we have inverse arrows and thus, by definition, isomorphisms.
An analogous notion is that of a cokernel. This notion is actually common in mathematics, but not so much at the undergraduate level.
Let $\mathcal{C}$ be a category with zero objects, and let $f : A \to B$ be a morphism between two objects of $\mathcal{C}$. A cokernel of $f$ is an arrow $c : B \to C$, where $C$ is an object of $\mathcal{C}$ which we may call the object associated to the cokernel $c$, such that
$c \circ f = 0$, and for each object $D$ of $\mathcal{C}$ and each morphism $g : B \to D$ such that $g \circ f = 0$, there exists a unique factorisation $g = h \circ c$ for a suitable morphism $h : C \to D$.
The second property is depicted in the following picture:
Again, this notion is just a generalisation of facts observed in "everyday" categories. Our first example shall be the existence of cokernels in the category of Abelian groups. Actually, cokernels exist even in the category of groups, but the construction is a bit tricky since in general the image need not be a normal subgroup, which is why we may not be able to form the factor group by the image. In Abelian groups though, all subgroups are normal, and hence this is possible.
In the category of Abelian groups, every morphism has a cokernel.
Let $A, B$ be any two Abelian groups, and let $f : A \to B$ be a group homomorphism. We set
$$C := B / f(A);$$
we may form this quotient group because within an Abelian group, all subgroups are normal. Further, we set
$$c : B \to C, \quad c(b) = b + f(A),$$
the projection (we adhere to the custom of writing Abelian groups in an additive fashion). Let now $g : B \to D$ be a group homomorphism such that $g \circ f = 0$, where $D$ is another Abelian group. Then the function
$$h : C \to D, \quad h(b + f(A)) := g(b)$$
is well-defined (because $g$ vanishes on $f(A)$) and the desired unique factorisation of $g$ is given by $g = h \circ c$.
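The quotient construction can be computed by hand for finite groups. The sketch below (the concrete homomorphism is an assumed example) forms the cokernel of $f : \mathbb{Z}/3\mathbb{Z} \to \mathbb{Z}/12\mathbb{Z}$, $y \mapsto 4y$, by listing the cosets of the image:

```python
# Sketch: the cokernel of f : Z/3 -> Z/12, y |-> 4*y, is Z/12 / im(f).
# We represent each coset of im(f) as a frozenset of elements of Z/12.

image = sorted({(4 * y) % 12 for y in range(3)})
assert image == [0, 4, 8]

def coset(b):
    """The coset b + im(f) inside Z/12."""
    return frozenset((b + m) % 12 for m in image)

cosets = {coset(b) for b in range(12)}
# The quotient has 12 / 3 = 4 elements, so the cokernel is isomorphic to Z/4.
assert len(cosets) == 4

# The projection c kills f, i.e. c o f = 0 (every f(y) lands in the zero coset).
zero_coset = coset(0)
assert all(coset((4 * y) % 12) == zero_coset for y in range(3))
```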
Every cokernel is an epi.
Let $f : A \to B$ be a morphism and $c : B \to C$ a corresponding cokernel. Assume that $g \circ c = h \circ c$. The situation is depicted in the following picture:
Now again, $(g \circ c) \circ f = 0 = (h \circ c) \circ f$, and $g$ and $h$ are by their equality $g \circ c = h \circ c$ both factorisations of the same morphism over $c$. Hence, by the uniqueness of such factorisations required in the definition of cokernels, $g = h$.
If a morphism $f : A \to B$ has two cokernels $c_1$ and $c_2$ (let's call the associated objects $C_1$ and $C_2$), then $C_1 \cong C_2$; that is, $C_1$ and $C_2$ are isomorphic.
Once again, we have $c_1 \circ f = 0$ and $c_2 \circ f = 0$, and hence we obtain commutative diagrams
$$c_1 = g \circ c_2 \quad \text{and} \quad c_2 = h \circ c_1$$
for unique morphisms $g : C_2 \to C_1$ and $h : C_1 \to C_2$. We once again claim that $g$ and $h$ are inverse to each other. Indeed, we obtain the equations
$$(g \circ h) \circ c_1 = g \circ c_2 = c_1 = \operatorname{id}_{C_1} \circ \, c_1 \quad \text{and} \quad (h \circ g) \circ c_2 = h \circ c_1 = c_2 = \operatorname{id}_{C_2} \circ \, c_2,$$
and by cancellation (both $c_1$ and $c_2$ are epis due to the preceding theorem) we obtain
$$g \circ h = \operatorname{id}_{C_1} \quad \text{and} \quad h \circ g = \operatorname{id}_{C_2}.$$
Let $\mathcal{C}$ be a category, and let $A$ and $B$ be objects of $\mathcal{C}$. Then a coproduct of $A$ and $B$ is another object of $\mathcal{C}$, denoted $A \sqcup B$, together with two morphisms $i_A : A \to A \sqcup B$ and $i_B : B \to A \sqcup B$ such that for any object $C$ and morphisms $f : A \to C$ and $g : B \to C$, there exists a unique morphism $h : A \sqcup B \to C$ such that $f = h \circ i_A$ and $g = h \circ i_B$.
Let $\mathcal{C}$ be a category that contains two objects $A$ and $B$. Assume we are given an object of $\mathcal{C}$ together with four morphisms that make it into a product, and simultaneously into a coproduct, of $A$ and $B$. Then we call that object a biproduct of the two objects $A$ and $B$ and denote it by
$$A \oplus B.$$
Within the category of Abelian groups, a biproduct is given by the product group; if $A, B$ are Abelian groups, set the product group of $A$ and $B$ to be
$$A \times B = \{ (a, b) \mid a \in A, \, b \in B \},$$
the cartesian product, with component-wise group operation.
Given Abelian groups $A_1, A_2, \ldots, A_n$ and morphisms (that is, since we are in the category of Abelian groups, group homomorphisms)
$$f_k : A_k \to A_{k+1},$$
we may define the whole of those to be a sequence of Abelian groups, and denote it by
$$A_1 \xrightarrow{f_1} A_2 \xrightarrow{f_2} \cdots \xrightarrow{f_{n-1}} A_n.$$
Note that if one of the objects is the trivial group, we denote it by $0$ and simply leave out the labels of the arrows going to it and emanating from it, since the trivial group is the zero object in the category of Abelian groups.
There are also infinite exact sequences, indicated by a notation of the form
$$\cdots \to A_{-1} \to A_0 \to A_1 \to \cdots;$$
it just goes on and on and on. For the sequence to be infinite means that we have a sequence (in the classical sense) of objects and another classical sequence of morphisms between these objects (here, the two have the same cardinality: countably infinite).
Definition 4.2 (exact sequence):
A given sequence
$$A_1 \xrightarrow{f_1} A_2 \xrightarrow{f_2} \cdots \xrightarrow{f_{n-1}} A_n$$
is called exact iff for all $1 < k < n$,
$$\operatorname{im} f_{k-1} = \ker f_k.$$
There is a fundamental example to this notion.
Example 4.3 (short exact sequence):
A short exact sequence is simply an exact sequence of the form
$$0 \to A \xrightarrow{f} B \xrightarrow{g} C \to 0$$
for suitable Abelian groups $A, B, C$ and group homomorphisms $f, g$.
The exactness of this sequence means, considering the form of the image and kernel of the zero morphism: $f$ is injective, $g$ is surjective, and $\operatorname{im} f = \ker g$.
Set $A = \mathbb{Z}/2\mathbb{Z}$, $B = \mathbb{Z}/4\mathbb{Z}$, $C = \mathbb{Z}/2\mathbb{Z}$, where we only consider the additive group structure, and define the group homomorphisms
$$f : A \to B, \; f(x) = 2x \quad \text{and} \quad g : B \to C, \; g(y) = y \bmod 2.$$
This gives a short exact sequence
$$0 \to \mathbb{Z}/2\mathbb{Z} \xrightarrow{f} \mathbb{Z}/4\mathbb{Z} \xrightarrow{g} \mathbb{Z}/2\mathbb{Z} \to 0,$$
as can be easily checked.
A similar construction can be done for any factorisation of natural numbers (in our example, $4 = 2 \cdot 2$).
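Exactness of such a sequence is finite and can be checked by brute force. The sketch below verifies the three conditions for the standard sequence $0 \to \mathbb{Z}/2 \to \mathbb{Z}/4 \to \mathbb{Z}/2 \to 0$ (an assumed concrete instance), with $f(x) = 2x$ and $g(y) = y \bmod 2$:

```python
# Sketch: checking exactness of 0 -> Z/2 --f--> Z/4 --g--> Z/2 -> 0.

f = lambda x: (2 * x) % 4  # f : Z/2 -> Z/4
g = lambda y: y % 2        # g : Z/4 -> Z/2

A, B, C = range(2), range(4), range(2)

# Exactness at A: f is injective.
assert len({f(x) for x in A}) == len(list(A))
# Exactness at C: g is surjective.
assert {g(y) for y in B} == set(C)
# Exactness at B: im(f) = ker(g).
assert {f(x) for x in A} == {y for y in B if g(y) == 0}
```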
We should now like to briefly exemplify a supremely important method of proof called diagram chase in the case of Abelian groups. We shall later generalise this method, and we will see that the classical diagram lemmas hold in great generality (including our example below), namely in the generality of Abelian categories (to be introduced below).
Theorem 4.5 (the short five lemma):
Assume we have a commutative diagram
$$\begin{array}{ccccccccc} 0 & \to & A & \xrightarrow{f} & B & \xrightarrow{g} & C & \to & 0 \\ & & \downarrow \alpha & & \downarrow \beta & & \downarrow \gamma & & \\ 0 & \to & A' & \xrightarrow{f'} & B' & \xrightarrow{g'} & C' & \to & 0 \end{array}$$
where the two rows are exact. If $\alpha$ and $\gamma$ are isomorphisms, then so must be $\beta$.
We first prove that $\beta$ is injective. Let $\beta(b) = 0$ for a $b \in B$. Since the given diagram is commutative, we have $\gamma(g(b)) = g'(\beta(b)) = 0$ and since $\gamma$ is an isomorphism, $g(b) = 0$. Since the top row is exact, it follows that $b \in \operatorname{im} f$, that is, $b = f(a)$ for a suitable $a \in A$. Hence, the commutativity of the given diagram implies $f'(\alpha(a)) = \beta(f(a)) = \beta(b) = 0$, and hence $a = 0$ since $f' \circ \alpha$ is injective as the composition of two injective maps. Therefore, $b = f(a) = 0$.
Next, we prove that $\beta$ is surjective. Let thus $b' \in B'$ be given. Set $c' := g'(b')$. Since $\gamma \circ g$ is surjective as the composition of two surjective maps, there exists $b \in B$ such that $\gamma(g(b)) = c'$. The commutativity of the given diagram yields $g'(\beta(b)) = \gamma(g(b)) = g'(b')$. Thus, by linearity, $g'(b' - \beta(b)) = 0$, whence $b' - \beta(b) \in \ker g' = \operatorname{im} f'$, and since $\alpha$ is an isomorphism, we find $a \in A$ such that $f'(\alpha(a)) = b' - \beta(b)$. The commutativity of the diagram yields $\beta(f(a)) = f'(\alpha(a)) = b' - \beta(b)$, and hence $\beta(b + f(a)) = b'$.
Now comes the clincher we have been working towards. In the ordinary diagram chase, we used elements of sets. We will now replace those elements by arrows in a simple way: Instead of looking at "elements" "$a$" of some object $A$ of an abelian category $\mathcal{A}$, we look at arrows towards that object; that is, arrows $x : X \to A$ for arbitrary objects $X$ of $\mathcal{A}$. For "the codomain of an arrow $x$ is $A$", we write
$$x \in_m A,$$
where the subscript $m$ stands for "member".
We have now replaced the notion of elements of a set by the notion of members in category theory. We also need to replace the notion of equality of two elements. We don't want equality of two arrows, since then we would not obtain the usual rules for chasing diagrams. Instead, we define yet another equivalence relation $\equiv$ on arrows with codomain $A$ (that is, on members of $A$). The following lemma will help to that end.
Lemma 4.18 (square completion):
Construction 4.19 (second equivalence relation):
Now we are finally able to prove the proposition that will enable us to do diagram chases using the techniques we also apply to diagram chases for Abelian groups (or modules, or any other Abelian category).
Theorem 4.20 (diagram chase enabling theorem):
Let $\mathcal{A}$ be an Abelian category and let $f : A \to B$ be a morphism of $\mathcal{A}$. We have the following rules concerning properties of the morphism $f$:
$f$ is monic iff for all $x \in_m A$, $f x \equiv 0$ implies $x \equiv 0$.
$f$ is monic iff for all $x, y \in_m A$, $f x \equiv f y$ implies $x \equiv y$.
$f$ is epic iff for each $z \in_m B$ there exists $x \in_m A$ such that $f x \equiv z$.
$f$ is the zero arrow iff for all $x \in_m A$, $f x \equiv 0$.
A sequence $A \xrightarrow{f} B \xrightarrow{g} C$ is exact iff $g \circ f = 0$ and
for each $y \in_m B$ with $g y \equiv 0$, there exists $x \in_m A$ such that $f x \equiv y$.
If $f : A \to B$ is a morphism such that $f x \equiv f y$ for members $x, y \in_m A$, there exists a member of $A$, which we shall call $(x - y)$ (the brackets indicate that this is one morphism), such that: $f(x - y) \equiv 0$; any morphism $g : A \to C$ with $g x \equiv 0$ satisfies $g(x - y) \equiv -g y$; and any morphism $h : A \to D$ with $h y \equiv 0$ satisfies $h(x - y) \equiv h x$.
We have thus constructed a relatively elaborate machinery in order to elevate our proof technique of diagram chase (which is ubiquitous) to the very abstract level of Abelian categories.
We shall now ask the question: Given a module $M$ and certain submodules $(M_i)_{i \in I}$ of $M$, which module is the smallest submodule of $M$ containing all the $M_i$? And which module is the largest module that is itself contained within all the $M_i$? The following definitions and theorems answer those questions.
Definition and theorem 5.5 (sum of submodules):
Let $M$ be a module over a certain ring $R$ and let $(M_i)_{i \in I}$ be submodules of $M$. The set
$$\sum_{i \in I} M_i := \left\{ \sum_{j \in J} m_j \;\middle|\; J \subseteq I \text{ finite}, \; m_j \in M_j \right\}$$
is a submodule of $M$, which is the smallest submodule of $M$ that contains all the $M_i$. It is called the sum of the $M_i$.
1. $\sum_{i \in I} M_i$ is a submodule:
It is an Abelian subgroup since if $\sum_{j \in J} m_j, \sum_{k \in K} n_k \in \sum_{i \in I} M_i$, then
$$\sum_{j \in J} m_j + \sum_{k \in K} n_k = \sum_{l \in J \cup K} p_l, \quad \text{where } p_l := m_l + n_l \text{ (setting } m_l := 0 \text{ for } l \notin J \text{ and } n_l := 0 \text{ for } l \notin K\text{)}.$$
It is closed under the module operation, since
$$r \cdot \sum_{j \in J} m_j = \sum_{j \in J} r m_j.$$
2. Each $M_i$ is contained in $\sum_{i \in I} M_i$:
This follows since $m_i \in \sum_{i \in I} M_i$ for each $i \in I$ and each $m_i \in M_i$.
3. $\sum_{i \in I} M_i$ is the smallest submodule containing all the $M_i$: If $N \leq M$ is another such submodule, then $N$ must contain all the elements
$$\sum_{j \in J} m_j \quad (J \subseteq I \text{ finite}, \; m_j \in M_j)$$
due to closedness under addition and the submodule operations.
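For a finite module this can be checked directly. The sketch below (an assumed example) computes, inside the $\mathbb{Z}$-module $\mathbb{Z}/12\mathbb{Z}$, the sum of the submodules generated by $4$ and by $6$, and confirms it is the submodule generated by $\gcd(4, 6) = 2$:

```python
# Sketch: sums of cyclic submodules of the Z-module Z/12.

def submodule(gen, n=12):
    """The cyclic submodule of Z/n generated by gen."""
    return {(k * gen) % n for k in range(n)}

M4, M6 = submodule(4), submodule(6)          # {0,4,8} and {0,6}
sum_module = {(a + b) % 12 for a in M4 for b in M6}

# The sum is the smallest submodule containing both summands.
assert sum_module == submodule(2)
assert M4 <= sum_module and M6 <= sum_module
```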
Definition and theorem 5.6 (intersection of submodules):
Let $M$ be a module over a ring $R$, and let $(M_i)_{i \in I}$ be submodules of $M$. Then the set
$$\bigcap_{i \in I} M_i$$
is a submodule of $M$, which is the largest submodule of $M$ that is contained in all the $M_i$. It is called the intersection of the $M_i$.
1. It's a submodule: Indeed, if $m, n \in \bigcap_{i \in I} M_i$ and $r \in R$, then $m + n \in M_i$ and $r m \in M_i$ for each $i \in I$, and thus $m + n, r m \in \bigcap_{i \in I} M_i$.
2. It is contained in all the $M_i$ by definition of the intersection.
3. Any submodule that is contained in each of the $M_i$ is contained within the intersection, again by definition of the intersection.
We have the following rule for computing with intersections and sums:
Theorem 5.7 (modular law; Dedekind):
Let $M$ be a module and $N_1, N_2, N_3 \leq M$ such that $N_1 \subseteq N_3$. Then
$$N_1 + (N_2 \cap N_3) = (N_1 + N_2) \cap N_3.$$
$\subseteq$: Let $x = n_1 + m$, where $n_1 \in N_1$ and $m \in N_2 \cap N_3$. Since $N_1 \subseteq N_3$, $n_1 \in N_3$ and hence $x \in N_3$. Since also $x \in N_1 + N_2$ by assumption, $x \in (N_1 + N_2) \cap N_3$.
$\supseteq$: Let $x \in (N_1 + N_2) \cap N_3$, say $x = n_1 + n_2$ with $n_1 \in N_1$, $n_2 \in N_2$. Since $n_1 \in N_1 \subseteq N_3$, and since further $x \in N_3$, $n_2 = x - n_1 \in N_3$. Hence, $n_2 \in N_2 \cap N_3$ and $x \in N_1 + (N_2 \cap N_3)$.
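The modular law lends itself to brute-force verification over a small module. The following sketch (an assumed example) checks the identity for all triples of submodules of the $\mathbb{Z}$-module $\mathbb{Z}/12\mathbb{Z}$, whose submodules are generated by the divisors of $12$:

```python
# Sketch: verifying N1 + (N2 ∩ N3) = (N1 + N2) ∩ N3 whenever N1 ⊆ N3,
# over all submodules of the Z-module Z/12.

def submodule(gen, n=12):
    """The cyclic submodule of Z/n generated by gen."""
    return {(k * gen) % n for k in range(n)}

def add(N, P, n=12):
    """The sum of two submodules of Z/n."""
    return {(a + b) % n for a in N for b in P}

submods = [submodule(d) for d in (1, 2, 3, 4, 6, 12)]

for N1 in submods:
    for N2 in submods:
        for N3 in submods:
            if N1 <= N3:  # the hypothesis of the modular law
                assert add(N1, N2 & N3) == add(N1, N2) & N3
```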
More abstractly, the properties of the sum and intersection of submodules may be theoretically captured in the following way:
A lattice is a set $L$ together with two operations $\vee$ (called the join or least upper bound) and $\wedge$ (called the meet or greatest lower bound) such that the following laws hold:
Commutative laws: $a \vee b = b \vee a$, $a \wedge b = b \wedge a$
Idempotency laws: $a \vee a = a$, $a \wedge a = a$
Absorption laws: $a \wedge (a \vee b) = a$, $a \vee (a \wedge b) = a$
Associative laws: $a \vee (b \vee c) = (a \vee b) \vee c$, $a \wedge (b \wedge c) = (a \wedge b) \wedge c$
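A familiar instance of these axioms is the lattice of subsets of a fixed set, with union as join and intersection as meet. The sketch below (an illustration, not from the text) verifies all four pairs of laws by brute force:

```python
# Sketch: subsets of {1,2,3} form a lattice under union (join) and
# intersection (meet); we verify the lattice axioms exhaustively.

from itertools import chain, combinations

ground = (1, 2, 3)
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(ground, r) for r in range(4))]

for a in subsets:
    for b in subsets:
        assert a | b == b | a and a & b == b & a          # commutativity
        assert a | a == a and a & a == a                  # idempotency
        assert a & (a | b) == a and a | (a & b) == a      # absorption
        for c in subsets:
            assert a | (b | c) == (a | b) | c             # associativity
            assert a & (b & c) == (a & b) & c
```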
There are some special types of lattices:
A modular lattice is a lattice such that the identity
$$a \vee (b \wedge c) = (a \vee b) \wedge c$$
holds whenever $a \leq c$ (where $a \leq c$ means $a \wedge c = a$).
Theorem 5.10 (ordered sets as lattices):
Let $\leq$ be a partial order on the set $L$ such that
1. every set $S \subseteq L$ has a least upper bound (where a least upper bound $u$ of $S$ satisfies $s \leq u$ for all $s \in S$ (i.e. it is an upper bound) and $u \leq v$ for every other upper bound $v$ of $S$) and
2. every set $S \subseteq L$ has a greatest lower bound (defined analogously to least upper bound, with the inequality reversed).
Then $L$, together with the join operation $\vee$ sending $\{a, b\}$ to the least upper bound of that set and the meet operation $\wedge$ defined analogously, is a lattice.
In fact, it suffices to require conditions 1. and 2. only for sets with two elements. But as we have shown, in the case that $L$ is the set of all submodules of a given module, the "original" conditions are satisfied.
First, we note that least upper bound and greatest lower bound are unique, since if for example $u_1, u_2$ are least upper bounds of $S$, then $u_1 \leq u_2$ and $u_2 \leq u_1$, and hence $u_1 = u_2$ by antisymmetry. Thus, the join and meet operations are well-defined.
The commutative laws follow from $\{a, b\} = \{b, a\}$.
The idempotency laws follow from $a$ clearly being the least upper bound, as well as the greatest lower bound, of the set $\{a, a\} = \{a\}$.
The first absorption law follows as follows: Let $u$ be the least upper bound of $\{a, b\}$. Then in particular, $a \leq u$. Hence, $a$ is a lower bound of $\{a, u\}$, and any lower bound $l$ of $\{a, u\}$ satisfies $l \leq a$, which is why $a$ is the greatest lower bound of $\{a, u\}$; that is, $a \wedge (a \vee b) = a$. The second absorption law is proven analogously.
The first associative law follows since if $u$ is the least upper bound of $\{b, c\}$ and $v$ is the least upper bound of $\{a, u\}$, then $v$ is an upper bound of $\{a, b, c\}$, and for any upper bound $w$ of $\{a, b, c\}$ we have $u \leq w$ (as $w$ is an upper bound of $\{b, c\}$) and $a \leq w$, hence $v \leq w$; thus $a \vee (b \vee c)$ is the least upper bound of $\{a, b, c\}$. The same argument (with the roles of $a$ and $c$ swapped) proves that $(a \vee b) \vee c$ is also the least upper bound of $\{a, b, c\}$. Again, the second associative law is proven similarly.
From theorems 5.5-5.7 and 5.10 we note that the submodules of a module form a modular lattice, where the order is given by set inclusion.
Exercise 5.2.1: Let $R$ be a ring. Find a suitable module operation such that $R$ together with its own addition and this module operation is an $R$-module. Make sure you define this operation in the simplest possible way. Prove further that with respect to this module operation, the submodules of $R$ are exactly the ideals of $R$.
Exercise 5.3.1: Let $R, S$ be rings regarded as modules over themselves as in exercise 5.2.1. Prove that the ring homomorphisms $R \to S$ are exactly the module homomorphisms $R \to S$; that is, every ring hom. is a module hom. and vice versa.
Let $M$ be a module over the ring $R$. $M$ is called a Noetherian module iff for every ascending chain of submodules
$$M_1 \subseteq M_2 \subseteq M_3 \subseteq \cdots$$
of $M$, there exists an $N \in \mathbb{N}$ such that
$$M_n = M_N \quad \text{for all } n \geq N.$$
We also say that ascending chains of submodules eventually become stationary.
Definition 6.7 (Artinian modules):
A module $M$ over a ring $R$ is called an Artinian module iff for every descending chain of submodules
$$M_1 \supseteq M_2 \supseteq M_3 \supseteq \cdots$$
of $M$, there exists an $N \in \mathbb{N}$ such that
$$M_n = M_N \quad \text{for all } n \geq N.$$
We also say that descending chains of submodules eventually become stationary.
We see that those definitions are similar, although they define rather different classes of modules.
Using the axiom of choice, we have the following characterisation of Noetherian modules:
Let $M$ be a module over $R$. The following are equivalent:
1. $M$ is Noetherian.
2. All the submodules of $M$ are finitely generated.
3. Every nonempty set of submodules of $M$ has a maximal element (with respect to inclusion).
We prove 1. $\Rightarrow$ 2. $\Rightarrow$ 3. $\Rightarrow$ 1.
1. $\Rightarrow$ 2.: Assume there is a submodule $N$ of $M$ which is not finitely generated. Using the axiom of dependent choice, we choose a sequence $(m_k)_{k \in \mathbb{N}}$ in $N$ such that
$$m_{k+1} \notin \langle m_1, \ldots, m_k \rangle \quad \text{for all } k \in \mathbb{N};$$
it is possible to find such a sequence since we may always choose $m_{k+1} \in N \setminus \langle m_1, \ldots, m_k \rangle$, since $N$ is not finitely generated. Thus we have an ascending sequence of submodules
$$\langle m_1 \rangle \subsetneq \langle m_1, m_2 \rangle \subsetneq \langle m_1, m_2, m_3 \rangle \subsetneq \cdots,$$
which does not stabilise; hence $M$ is not Noetherian.
2. $\Rightarrow$ 3.: Let $\mathcal{S}$ be a nonempty set of submodules of $M$. Due to Zorn's lemma, it suffices to prove that every chain within $\mathcal{S}$ has an upper bound (of course, our partial order is set inclusion, i.e. $\subseteq$). Hence, let $\mathcal{K}$ be a chain within $\mathcal{S}$. We write
$$N := \bigcup_{K \in \mathcal{K}} K;$$
note that $N$ is a submodule of $M$, since $\mathcal{K}$ is a chain. Since every submodule of $M$ is finitely generated, so is $N$; we write
$$N = \langle n_1, \ldots, n_k \rangle$$
for suitably chosen $n_1, \ldots, n_k \in N$. Now each $n_j$ is contained in some $K_j \in \mathcal{K}$. Since $\mathcal{K}$ is totally ordered with respect to inclusion, we may just choose $K \in \mathcal{K}$ among $K_1, \ldots, K_k$ large enough such that all the $n_j$ are contained within $K$. Hence, $N \subseteq K$, and $K$ is the desired upper bound of $\mathcal{K}$.
3. $\Rightarrow$ 1.: Let
$$M_1 \subseteq M_2 \subseteq M_3 \subseteq \cdots$$
be an ascending chain of submodules of $M$. The set $\{ M_n \mid n \in \mathbb{N} \}$ has a maximal element $M_N$, and thus this ascending chain becomes stationary at $M_N$.
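For a finite module, condition 3. can even be verified exhaustively. The sketch below (an assumed example) checks that every nonempty collection of submodules of the $\mathbb{Z}$-module $\mathbb{Z}/12\mathbb{Z}$ has a maximal element:

```python
# Sketch: every nonempty set of submodules of Z/12 has a maximal element,
# checked by brute force over all nonempty collections of submodules.

from itertools import chain, combinations

def submodule(gen, n=12):
    """The cyclic submodule of Z/n generated by gen."""
    return frozenset((k * gen) % n for k in range(n))

submods = [submodule(d) for d in (1, 2, 3, 4, 6, 12)]

def has_maximal(family):
    """True iff some member is contained in no strictly larger member."""
    return any(all(not (N < P) for P in family) for N in family)

collections = chain.from_iterable(
    combinations(submods, r) for r in range(1, len(submods) + 1))
assert all(has_maximal(c) for c in collections)
```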
We prove 1. $\Rightarrow$ 3. $\Rightarrow$ 2. $\Rightarrow$ 1.
1. $\Rightarrow$ 3.: Let $\mathcal{S}$ be a set of submodules of $M$ which does not have a maximal element. Then by the axiom of dependent choice, for each $N \in \mathcal{S}$ we may choose