# Objects and morphisms

## Basics

Definition 1.1 (categories):

A category ${\displaystyle {\mathcal {C}}}$ is a collection of objects together with morphisms, each of which goes from an object ${\displaystyle a\in {\mathcal {C}}}$ to an object ${\displaystyle b\in {\mathcal {C}}}$ (where ${\displaystyle a}$ is called the domain and ${\displaystyle b}$ the codomain), such that

1. Any morphism ${\displaystyle f:a\to b}$ can be composed with a morphism ${\displaystyle g:b\to c}$, the composition of the two being a morphism ${\displaystyle g\circ f:a\to c}$; composition is associative, that is, ${\displaystyle h\circ (g\circ f)=(h\circ g)\circ f}$ whenever the compositions are defined.
2. For each ${\displaystyle a\in {\mathcal {C}}}$, there exists a morphism ${\displaystyle 1_{a}:a\to a}$ such that for any morphism ${\displaystyle f:a\to b}$ we have ${\displaystyle f\circ 1_{a}=f}$ and for any morphism ${\displaystyle g:b\to a}$ we have ${\displaystyle 1_{a}\circ g=g}$.

Examples 1.2:

1. The collection of all groups together with group homomorphisms as morphisms is a category.
2. The collection of all rings together with ring homomorphisms is a category.
3. Sets together with ordinary functions form the category of sets.

To every category we may associate an opposite category:

Definition 1.3 (opposite categories):

Let ${\displaystyle {\mathcal {C}}}$ be a category. The opposite category of ${\displaystyle {\mathcal {C}}}$ is the category consisting of the objects of ${\displaystyle {\mathcal {C}}}$, but with all morphisms reversed: the domain of each morphism is defined to be the codomain of the former morphism, and the codomain to be the former domain.

For instance, within the opposite category of sets, a function ${\displaystyle f:S\to T}$ (where ${\displaystyle S}$, ${\displaystyle T}$ are sets) is a morphism ${\displaystyle T\to S}$.

## Algebraic objects within category theory

A category is such a general object that some important algebraic structures arise as special cases. For instance, consider a category with one object. Then this category is a monoid with composition as its operation. On the other hand, if we are given an arbitrary monoid, we can define the elements of that monoid to be the morphisms from a single object to itself, and thus have found a representation of that monoid as a category with one object.

If we are given a category with one object, and the morphisms all happen to be invertible, then we have in fact a group structure. And further, just as described for monoids, we can turn every group into a category.
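This correspondence is easy to check mechanically. Below is a minimal sketch (in Python, with hypothetical names) that treats the monoid (Z mod 4, +) as the morphisms of a one-object category and verifies the category axioms, plus invertibility:

```python
# A one-object category is a monoid: the morphisms * -> * are the monoid
# elements and composition is the monoid operation. Sketch with (Z mod 4, +).
elements = [0, 1, 2, 3]          # morphisms from the single object to itself
compose = lambda g, f: (g + f) % 4
identity = 0                     # the identity morphism 1_*

# Composition of any two morphisms is again a morphism (closure).
assert all(compose(g, f) in elements for g in elements for f in elements)
# Identity law: f o 1 = f = 1 o f.
assert all(compose(f, identity) == f == compose(identity, f) for f in elements)
# Associativity of composition.
assert all(compose(h, compose(g, f)) == compose(compose(h, g), f)
           for h in elements for g in elements for f in elements)
# Every morphism here is invertible, so this one-object category is a group.
assert all(any(compose(g, f) == identity for g in elements) for f in elements)
```

Replacing the operation by one without inverses (say, max) would still pass the first three checks but fail the last, illustrating the monoid/group distinction.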

## Special types of morphisms

The following notions of category theory were inspired by phenomena occurring in the category of sets and similar categories.

In the category of sets, we have surjective functions and injective functions. We may characterise those as follows:

Theorem 1.4:

Let ${\displaystyle X,Y}$ be sets and ${\displaystyle f:X\to Y}$ be a function. Then:

• ${\displaystyle f}$ is surjective if and only if for all sets ${\displaystyle Z}$ and functions ${\displaystyle g,h:Y\to Z}$, ${\displaystyle g\circ f=h\circ f}$ implies ${\displaystyle g=h}$.
• ${\displaystyle f}$ is injective if and only if for all sets ${\displaystyle W}$ and functions ${\displaystyle g,h:W\to X}$, ${\displaystyle f\circ g=f\circ h}$ implies ${\displaystyle g=h}$.

Proof:

We begin with the characterisation of surjectivity.

${\displaystyle \Rightarrow }$: Let ${\displaystyle f}$ be surjective, and let ${\displaystyle g\circ f=h\circ f}$. Let ${\displaystyle y\in Y}$ be arbitrary. Since ${\displaystyle f}$ is surjective, we may choose ${\displaystyle x\in X}$ such that ${\displaystyle f(x)=y}$. Then we have ${\displaystyle g(y)=g(f(x))=h(f(x))=h(y)}$. Since ${\displaystyle y\in Y}$ was arbitrary, ${\displaystyle g=h}$.

${\displaystyle \Leftarrow }$: Assume that for all sets ${\displaystyle Z}$ and functions ${\displaystyle g,h:Y\to Z}$ ${\displaystyle g\circ f=h\circ f}$ implies ${\displaystyle g=h}$. Assume for contradiction that ${\displaystyle f}$ isn't surjective. Then there exists ${\displaystyle y_{0}\in Y}$ outside the image of ${\displaystyle f}$. Let ${\displaystyle Z=\{1,2\}}$. We define ${\displaystyle g,h:Y\to Z}$ as follows:

${\displaystyle g(y)=1\forall y\in Y}$, ${\displaystyle h(y)={\begin{cases}1&y\neq y_{0}\\2&y=y_{0}\end{cases}}}$.

Then ${\displaystyle g\circ f=h\circ f=1}$ (since ${\displaystyle y_{0}}$, the only place where the second function might be ${\displaystyle 2}$, is never hit by ${\displaystyle f}$), but ${\displaystyle g\neq h}$.

Now we prove the characterisation of injectivity.

${\displaystyle \Rightarrow }$: Let ${\displaystyle f}$ be injective, let ${\displaystyle W}$ be another set and let ${\displaystyle g,h:W\to X}$ be two functions such that ${\displaystyle f\circ g=f\circ h}$. Assume that ${\displaystyle g(w)\neq h(w)}$ for a certain ${\displaystyle w\in W}$. Then ${\displaystyle f(g(w))\neq f(h(w))}$ due to the injectivity of ${\displaystyle f}$, contradiction.

${\displaystyle \Leftarrow }$: Assume that for all sets ${\displaystyle W}$ and functions ${\displaystyle g,h:W\to X}$, ${\displaystyle f\circ g=f\circ h}$ implies ${\displaystyle g=h}$. Let ${\displaystyle x,y\in X}$ be arbitrary such that ${\displaystyle f(x)=f(y)}$. Take ${\displaystyle W=\{1\}}$ and ${\displaystyle g(1)=x,h(1)=y}$. Then ${\displaystyle f\circ g=f\circ h}$, hence ${\displaystyle g=h}$ by assumption, that is, ${\displaystyle x=y}$, and injectivity follows.${\displaystyle \Box }$
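The two cancellation characterisations can be verified by brute force on small finite sets. A sketch (hypothetical helper names); as in the proof, a two-element test set suffices on each side:

```python
from itertools import product

def all_functions(dom, cod):
    """Every function dom -> cod, encoded as a dict."""
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

def right_cancellable(f, Y, Z):
    """g o f = h o f implies g = h, for all g, h : Y -> Z."""
    return all(g == h
               for g in all_functions(Y, Z) for h in all_functions(Y, Z)
               if all(g[f[x]] == h[f[x]] for x in f))

def left_cancellable(f, X, W):
    """f o g = f o h implies g = h, for all g, h : W -> X."""
    return all(g == h
               for g in all_functions(W, X) for h in all_functions(W, X)
               if all(f[g[w]] == f[h[w]] for w in W))

Z, W = ['u', 'v'], [0, 1]
for X, Y in ([[0, 1], ['a', 'b', 'c']], [[0, 1, 2], ['a', 'b']]):
    for f in all_functions(X, Y):
        # surjective <=> right-cancellable, injective <=> left-cancellable
        assert right_cancellable(f, Y, Z) == (set(f.values()) == set(Y))
        assert left_cancellable(f, X, W) == (len(set(f.values())) == len(X))
```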

It is interesting that in passing from surjectivity to injectivity, the use of indirect proof moved from the ${\displaystyle \Leftarrow }$-direction to the ${\displaystyle \Rightarrow }$-direction.

Since the characterisations of injectivity and surjectivity given by the last theorem no longer mention elements of sets, we may generalise those concepts to arbitrary categories.

Definition 1.5:

Let ${\displaystyle {\mathcal {C}}}$ be a category, and let ${\displaystyle f}$ be a morphism of ${\displaystyle {\mathcal {C}}}$. We say that

• ${\displaystyle f:X\to Y}$ is an epimorphism if and only if for all objects ${\displaystyle Z}$ of ${\displaystyle {\mathcal {C}}}$ and all morphisms ${\displaystyle g,h:Y\to Z}$ ${\displaystyle g\circ f=h\circ f\Rightarrow g=h}$, and
• ${\displaystyle f:X\to Y}$ is a monomorphism if and only if for all objects ${\displaystyle W}$ of ${\displaystyle {\mathcal {C}}}$ and all morphisms ${\displaystyle g,h:W\to X}$ ${\displaystyle f\circ g=f\circ h\Rightarrow g=h}$.

### Exercises

• Exercise 1.3.1: Come up with a category ${\displaystyle {\mathcal {C}}}$, where the objects are some finitely many sets, such that there exists an epimorphism that is not surjective, and a monomorphism that is not injective (Hint: Include few morphisms).

## Terminal, initial and zero objects and zero morphisms

Within many categories, such as groups, rings, modules,... (but not fields), there exist some sort of "trivial" objects which are the simplest possible; for instance, in the category of groups, there is the trivial group, consisting only of the identity. Indeed, within the category of groups, the trivial group has the following property:

Theorem 1.6:

Let ${\displaystyle |G|=1}$ and let ${\displaystyle H}$ be another group. Then there exists exactly one homomorphism ${\displaystyle f:H\to G}$ and exactly one homomorphism ${\displaystyle g:G\to H}$.

Furthermore, if ${\displaystyle {\tilde {G}}}$ is any other group with the property that for every other group ${\displaystyle H}$, there exists exactly one homomorphism ${\displaystyle {\tilde {G}}\to H}$ and exactly one homomorphism ${\displaystyle H\to {\tilde {G}}}$, then ${\displaystyle |{\tilde {G}}|=1}$.

Proof: We begin with the first part. Let ${\displaystyle f:H\to G}$ be a homomorphism, where ${\displaystyle |G|=1}$. Then ${\displaystyle f}$ must take the value of the single element of ${\displaystyle G}$ everywhere and is thus uniquely determined. If furthermore ${\displaystyle g:G\to H}$ is a homomorphism with ${\displaystyle G=\{\iota \}}$, then ${\displaystyle g(\iota )=1_{H}}$, since every group homomorphism maps the identity to the identity; hence ${\displaystyle g}$ is uniquely determined as well.

Assume now that ${\displaystyle |{\tilde {G}}|>1}$, and let ${\displaystyle \tau }$ be an element of ${\displaystyle {\tilde {G}}}$ that does not equal the identity. Let ${\displaystyle n:={\text{ord}}\,\tau }$. We define a homomorphism ${\displaystyle f:Z_{n}\to {\tilde {G}}}$ by ${\displaystyle f(k):=\tau ^{k}}$ (if ${\displaystyle \tau }$ has infinite order, use ${\displaystyle \mathbb {Z} }$ in place of ${\displaystyle Z_{n}}$). In addition to that homomorphism, we also have the trivial homomorphism ${\displaystyle Z_{n}\to {\tilde {G}}}$. Hence, we don't have uniqueness.${\displaystyle \Box }$

Using the characterisation given by theorem 1.6, we may generalise this concept into the language of category theory.

Definition 1.7:

Let ${\displaystyle {\mathcal {C}}}$ be a category. A zero object of ${\displaystyle {\mathcal {C}}}$ is an object ${\displaystyle Z}$ of ${\displaystyle {\mathcal {C}}}$ such that for all other objects ${\displaystyle X,Y}$ of ${\displaystyle {\mathcal {C}}}$ there exist unique morphisms ${\displaystyle f:X\to Z}$ and ${\displaystyle g:Z\to Y}$.

Within many usual categories, such as groups (as shown above), but also rings and modules, there exist zero objects. However, not so within the category of sets. Indeed, let ${\displaystyle S}$ be an arbitrary set. If ${\displaystyle |S|\geq 2}$, then from any nonempty set there exist at least two morphisms with codomain ${\displaystyle S}$, namely two distinct constant functions. If ${\displaystyle |S|=1}$, we may pick a set ${\displaystyle T}$ with ${\displaystyle |T|\geq 2}$ and obtain two morphisms from ${\displaystyle S}$ to ${\displaystyle T}$. If ${\displaystyle S=\emptyset }$, then for nonempty ${\displaystyle T}$ there does not exist a function ${\displaystyle T\to S}$.

But if we split definition 1.7 in half, each half can be found within the category of sets.

Definition 1.8:

Let ${\displaystyle {\mathcal {C}}}$ be a category. An object ${\displaystyle X}$ of ${\displaystyle {\mathcal {C}}}$ is called

• terminal iff for every other object ${\displaystyle Y}$ of ${\displaystyle {\mathcal {C}}}$ there exists exactly one morphism ${\displaystyle Y\to X}$;
• initial iff for every other object ${\displaystyle Y}$ of ${\displaystyle {\mathcal {C}}}$ there exists exactly one morphism ${\displaystyle X\to Y}$.

In the category of sets, there exists exactly one initial object and infinitely many terminal objects. The initial object is the empty set; the argument preceding definition 1.8 shows that this is the only remaining option, and it is a valid one because the only morphism from the empty set to any other set is the empty function. Furthermore, every set with exactly one element is a terminal object, since every morphism mapping to that set is the constant function whose value is the single element of that set. Hence, by generalising the concept of a zero object in two different directions, we have obtained a fine description of the symmetry breaking at the level of sets.
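These counting claims can be made concrete: there are exactly ${\displaystyle |Y|^{|X|}}$ functions ${\displaystyle X\to Y}$ between finite sets. A sketch (hypothetical helper name):

```python
from itertools import product

def all_functions(dom, cod):
    # A function dom -> cod chooses an image for each element of dom,
    # so there are |cod| ** |dom| of them.
    return [dict(zip(dom, vals)) for vals in product(cod, repeat=len(dom))]

empty, singleton, big = [], ['*'], [0, 1, 2]

# The empty set is initial: exactly one function (the empty function) out of it.
assert len(all_functions(empty, big)) == 1
assert len(all_functions(empty, singleton)) == 1
# A one-element set is terminal: exactly one (constant) function into it.
assert len(all_functions(big, singleton)) == 1
# There is no function from a nonempty set into the empty set.
assert len(all_functions(big, empty)) == 0
# Neither is a zero object: there are 2 ** 3 = 8 functions big -> {0, 1}.
assert len(all_functions(big, [0, 1])) == 8
```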

Returning now to the category of groups: between any two groups there also exists a particularly trivial homomorphism, namely the trivial (or zero) homomorphism. We shall also elevate this concept to the level of categories. The following theorem is immediate:

Theorem 1.9:

Let ${\displaystyle T}$ be the trivial group, and let ${\displaystyle H}$ and ${\displaystyle G}$ be any two groups. If ${\displaystyle f:H\to T}$ and ${\displaystyle g:T\to G}$ are homomorphisms, then ${\displaystyle g\circ f}$ is the trivial homomorphism.

Now we may proceed to the categorical definition of a zero morphism. It is only defined for categories that have a zero object. (There exists a more general definition, but it shall be of no use to us during the course of this book.)

Definition 1.10:

Let ${\displaystyle {\mathcal {C}}}$ be a category with a zero object ${\displaystyle Z}$, and let ${\displaystyle X,Y}$ be objects of that category. Then the zero morphism from ${\displaystyle X}$ to ${\displaystyle Y}$ is defined as the composition of the two unique morphisms ${\displaystyle X\to Z}$ and ${\displaystyle Z\to Y}$.
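In the category of Abelian groups the trivial group is a zero object, so definition 1.10 can be traced concretely. A minimal sketch (hypothetical names) composing the unique maps ${\displaystyle Z_{4}\to \{0\}\to Z_{6}}$:

```python
# The zero morphism Z4 -> Z6 is the composite of the unique homomorphisms
# Z4 -> {0} and {0} -> Z6, where {0} is the zero object.
Z4, Z6 = range(4), range(6)
to_zero = lambda x: 0          # the unique homomorphism Z4 -> {0}
from_zero = lambda x: 0        # the unique homomorphism {0} -> Z6
zero_morphism = lambda x: from_zero(to_zero(x))

# The composite sends everything to the identity element of Z6.
assert all(zero_morphism(x) == 0 for x in Z4)
# It is a homomorphism: it sends sums to sums.
assert all(zero_morphism((x + y) % 4) == (zero_morphism(x) + zero_morphism(y)) % 6
           for x in Z4 for y in Z4)
```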

# Functors, natural transformations, universal arrows

## Functors

### Definitions

There are two types of functors, covariant functors and contravariant functors. Often, a covariant functor is simply called a functor.

Definition 2.1:

Let ${\displaystyle {\mathcal {C}},{\mathcal {D}}}$ be two categories. A covariant functor ${\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}}$ associates

• to each object ${\displaystyle A}$ of ${\displaystyle {\mathcal {C}}}$ an object ${\displaystyle F(A)}$ of ${\displaystyle {\mathcal {D}}}$, and
• to each morphism ${\displaystyle f:A\to B}$ in ${\displaystyle {\mathcal {C}}}$ a morphism ${\displaystyle F(f):F(A)\to F(B)}$,

such that the following rules are satisfied:

1. For all objects ${\displaystyle A}$ of ${\displaystyle {\mathcal {C}}}$ we have ${\displaystyle F(1_{A})=1_{F(A)}}$, and
2. for all morphisms ${\displaystyle f:A\to B}$ and ${\displaystyle g:B\to C}$ of ${\displaystyle {\mathcal {C}}}$ we have ${\displaystyle F(g\circ f)=F(g)\circ F(f)}$.

Definition 2.2:

Let ${\displaystyle {\mathcal {C}},{\mathcal {D}}}$ be two categories. A contravariant functor ${\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}}$ associates

• to each object ${\displaystyle A}$ of ${\displaystyle {\mathcal {C}}}$ an object ${\displaystyle F(A)}$ of ${\displaystyle {\mathcal {D}}}$, and
• to each morphism ${\displaystyle f:A\to B}$ in ${\displaystyle {\mathcal {C}}}$ a morphism ${\displaystyle F(f):F(B)\to F(A)}$,

such that the following rules are satisfied:

1. For all objects ${\displaystyle A}$ of ${\displaystyle {\mathcal {C}}}$ we have ${\displaystyle F(1_{A})=1_{F(A)}}$, and
2. for all morphisms ${\displaystyle f:A\to B}$ and ${\displaystyle g:B\to C}$ of ${\displaystyle {\mathcal {C}}}$ we have ${\displaystyle F(g\circ f)=F(f)\circ F(g)}$.
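The two sets of functor laws can be illustrated with finite sets as objects and dicts as morphisms. A hedged sketch (all names hypothetical): the direct-image functor on power sets is covariant, while the preimage functor reverses composition:

```python
# Objects: finite sets. Morphisms: dicts. Composition and identities:
compose = lambda g, f: {x: g[f[x]] for x in f}
identity = lambda A: {x: x for x in A}

A = [0, 1, 2]
f = {0: 'a', 1: 'b', 2: 'a'}        # f : A -> B
g = {'a': True, 'b': False}         # g : B -> C

# Covariant example: direct image, F(X) = P(X), F(h)(S) = h[S].
F = lambda h: (lambda S: frozenset(h[x] for x in S))
sample = frozenset([0, 2])
assert F(identity(A))(sample) == sample                   # F(1_A) = 1_F(A)
assert F(compose(g, f))(sample) == F(g)(F(f)(sample))     # F(g o f) = F(g) o F(f)

# Contravariant example: preimage, G(h)(S) = h^{-1}(S),
# which reverses composition: G(g o f) = G(f) o G(g).
G = lambda h: (lambda S: frozenset(x for x in h if h[x] in S))
sample_C = frozenset([True])
assert G(compose(g, f))(sample_C) == G(f)(G(g)(sample_C))
```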

### Forgetful functors

There is arguably no completely precise general definition of a forgetful functor, but the notion is easily explained in terms of a few examples.

Example 2.3:

Consider the category of groups with homomorphisms as morphisms. We may define a functor sending each group to its underlying set and each homomorphism to itself, seen as a function. This is a functor from the category of groups to the category of sets. Since the target objects of that functor lack the group structure, the group structure has been forgotten; hence we are dealing with a forgetful functor.

Example 2.4:

Consider the category of rings. Remember that each ring is an Abelian group with respect to addition. Hence, we may define a functor from the category of rings to the category of groups, sending each ring to the underlying group. This is also a forgetful functor; one which forgets the multiplication of the ring.

## Natural transformations

Definition 2.5:

Let ${\displaystyle {\mathcal {C}},{\mathcal {D}}}$ be categories, and let ${\displaystyle F,G:{\mathcal {C}}\to {\mathcal {D}}}$ be two functors. A natural transformation is a family of morphisms in ${\displaystyle {\mathcal {D}}}$ ${\displaystyle \eta _{X}:F(X)\to G(X)}$, where ${\displaystyle X}$ ranges over all objects of ${\displaystyle {\mathcal {C}}}$, that are compatible with the images of morphisms ${\displaystyle f:X\to Y}$ of ${\displaystyle {\mathcal {C}}}$ under the functors ${\displaystyle F}$ and ${\displaystyle G}$; that is, for every morphism ${\displaystyle f:X\to Y}$ of ${\displaystyle {\mathcal {C}}}$ the naturality square commutes: ${\displaystyle \eta _{Y}\circ F(f)=G(f)\circ \eta _{X}}$.
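A small executable illustration (hypothetical names): take ${\displaystyle F=G}$ to be the list functor, acting on morphisms by mapping, and let each component ${\displaystyle \eta _{X}}$ be list reversal. Naturality then says that reversing before or after mapping gives the same result:

```python
# F = G = the list functor: F(f) maps f over a list.
F = lambda f: (lambda xs: [f(x) for x in xs])
# The component eta_X reverses a list; it is defined uniformly in X.
eta = lambda xs: list(reversed(xs))

f = lambda n: n * n                         # some morphism f : X -> Y
for xs in ([], [1], [1, 2, 3], [3, 1, 2, 1]):
    # The naturality square: eta_Y o F(f) = G(f) o eta_X.
    assert eta(F(f)(xs)) == F(f)(eta(xs))
```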

Example 2.6:

Let ${\displaystyle {\mathcal {C}}}$ be the category of all fields and ${\displaystyle {\mathcal {D}}}$ the category of all rings. We define a functor

${\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}}$

as follows: Each object ${\displaystyle \mathbb {F} }$ of ${\displaystyle {\mathcal {C}}}$ shall be sent to the ring ${\displaystyle R_{\mathbb {F} }}$, with addition and multiplication inherited from the field, whose underlying set is

${\displaystyle S_{\mathbb {F} }:=\{\overbrace {1_{\mathbb {F} }+1_{\mathbb {F} }+\cdots +1_{\mathbb {F} }} ^{n{\text{ times}}}\mid n\in \mathbb {N} _{0}\}\cup \{\overbrace {-1_{\mathbb {F} }-1_{\mathbb {F} }-\cdots -1_{\mathbb {F} }} ^{n{\text{ times}}}\mid n\in \mathbb {N} \}}$,

where ${\displaystyle 1_{\mathbb {F} }}$ is the unit of the field ${\displaystyle \mathbb {F} }$. Any morphism ${\displaystyle f:\mathbb {F} \to \mathbb {G} }$ of fields shall be mapped to the restriction ${\displaystyle f\upharpoonright _{S_{\mathbb {F} }}}$; note that this is well-defined (that is, maps to the object associated to ${\displaystyle \mathbb {G} }$ under the functor ${\displaystyle F}$), since both

${\displaystyle f(1_{\mathbb {F} }+1_{\mathbb {F} }+\cdots +1_{\mathbb {F} })=f(1_{\mathbb {F} })+f(1_{\mathbb {F} })+\cdots +f(1_{\mathbb {F} })=1_{\mathbb {G} }+1_{\mathbb {G} }+\cdots +1_{\mathbb {G} }}$

and

${\displaystyle f(-1_{\mathbb {F} }-1_{\mathbb {F} }-\cdots -1_{\mathbb {F} })=-f(1_{\mathbb {F} })-f(1_{\mathbb {F} })-\cdots -f(1_{\mathbb {F} })=-1_{\mathbb {G} }-1_{\mathbb {G} }-\cdots -1_{\mathbb {G} }}$,

where ${\displaystyle 1_{\mathbb {G} }}$ is the unit of the field ${\displaystyle \mathbb {G} }$.

We further define a functor

${\displaystyle G:{\mathcal {C}}\to {\mathcal {D}}}$,

sending each field ${\displaystyle \mathbb {F} }$ to its associated prime field ${\displaystyle \mathbb {F} _{\text{prime}}}$, seen as a ring, and again restricting morphisms, that is sending each morphism ${\displaystyle f:\mathbb {F} \to \mathbb {G} }$ to ${\displaystyle f\upharpoonright _{\mathbb {F} _{\text{prime}}}}$ (this is well-defined by the same computations as above and noting that ${\displaystyle f}$, being a field morphism, maps inverses to inverses).

In this setting, the maps

${\displaystyle \eta _{\mathbb {F} }:R_{\mathbb {F} }\to \mathbb {F} _{\text{prime}}}$,

given by inclusion, form a natural transformation from ${\displaystyle F}$ to ${\displaystyle G}$; this follows from checking the commutative diagram directly.

## Universal arrows

Definition 2.7 (universal arrows):

Let ${\displaystyle {\mathcal {C}},{\mathcal {D}}}$ be categories, let ${\displaystyle F:{\mathcal {C}}\to {\mathcal {D}}}$ be a functor, and let ${\displaystyle Y}$ be an object of ${\displaystyle {\mathcal {D}}}$. A universal arrow is a morphism ${\displaystyle g:Y\to F(X)}$, where ${\displaystyle X}$ is a fixed object of ${\displaystyle {\mathcal {C}}}$, such that for any other object ${\displaystyle Z}$ of ${\displaystyle {\mathcal {C}}}$ and morphism ${\displaystyle h:Y\to F(Z)}$ there exists a unique morphism ${\displaystyle f:X\to Z}$ such that

${\displaystyle F(f)\circ g=h}$,

that is, such that the corresponding triangle commutes.
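A classical instance is the insertion of generators into a free monoid, universal for the forgetful functor from monoids to sets. A hedged sketch (hypothetical names; words are tuples, the target monoid is the integers under addition):

```python
# g : Y -> F(Y*) inserts each generator as a one-letter word. For any
# map h : Y -> F(M) into a monoid M, the unique monoid homomorphism
# f : Y* -> M with F(f) o g = h sums the images of the letters.
Y = ['a', 'b']
g = lambda y: (y,)                       # insertion of generators

h = {'a': 2, 'b': 5}                     # h : Y -> (integers, +, 0)
f = lambda word: sum(h[y] for y in word)  # the unique extension

assert all(f(g(y)) == h[y] for y in Y)   # the triangle F(f) o g = h commutes
# f is a monoid homomorphism: f(vw) = f(v) + f(w) and f(empty word) = 0.
v, w = ('a', 'b'), ('b', 'b', 'a')
assert f(v + w) == f(v) + f(w) and f(()) == 0
```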

# Kernels, cokernels, products, coproducts

## Kernels

Definition 3.1:

Let ${\displaystyle {\mathcal {C}}}$ be a category with zero objects, and let ${\displaystyle f:a\to b}$ be a morphism between two objects ${\displaystyle a,b}$ of ${\displaystyle {\mathcal {C}}}$. A kernel of ${\displaystyle f}$ is an arrow ${\displaystyle k:o_{k}\to a}$, where ${\displaystyle o_{k}}$ is what we shall call the object associated to the kernel ${\displaystyle k}$, such that

1. ${\displaystyle f\circ k=0_{o_{k},b}}$, and
2. for each object ${\displaystyle z}$ of ${\displaystyle {\mathcal {C}}}$ and each morphism ${\displaystyle g:z\to a}$ such that ${\displaystyle f\circ g=0_{z,b}}$, there exists a unique ${\displaystyle g':z\to o_{k}}$ such that ${\displaystyle g=k\circ g'}$.

In diagram form, the second property says that every morphism ${\displaystyle g}$ with ${\displaystyle f\circ g=0}$ factors uniquely through ${\displaystyle k}$.

Note that here, we don't regard kernels merely as subsets, but rather as an object together with a morphism. In the category of groups, for example, that morphism can be taken to be the inclusion of a subgroup, as the following example shows.

Example 3.2:

In the category of groups, every morphism has a kernel.

Proof:

Let ${\displaystyle G,H}$ be groups and ${\displaystyle \varphi :G\to H}$ a morphism (that is, a group homomorphism). We set

${\displaystyle o_{k}:=\{g\in G:\varphi (g)=1_{H}\}}$

and

${\displaystyle k:o_{k}\to G,k(g)=g}$,

the inclusion. This is indeed a kernel in the category of groups. For, if ${\displaystyle \theta :K\to G}$ is a group homomorphism such that ${\displaystyle \varphi \circ \theta =0}$, then ${\displaystyle \theta }$ maps entirely into ${\displaystyle o_{k}}$, and we may simply write ${\displaystyle \theta =k\circ \theta '}$, where ${\displaystyle \theta ':K\to o_{k}}$ is ${\displaystyle \theta }$ with restricted codomain. Since ${\displaystyle k}$ is injective, this factorisation is unique.${\displaystyle \Box }$
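Example 3.2 made concrete, as a sketch with cyclic groups (hypothetical names): the kernel of ${\displaystyle \varphi :Z_{6}\to Z_{3}}$, ${\displaystyle \varphi (x)=x{\bmod {3}}}$, with its inclusion into ${\displaystyle Z_{6}}$:

```python
Z6 = list(range(6))
phi = lambda x: x % 3

o_k = [x for x in Z6 if phi(x) == 0]     # the underlying set {0, 3}
k = lambda x: x                          # the inclusion o_k -> Z6
assert o_k == [0, 3]
assert all(phi(k(x)) == 0 for x in o_k)  # phi o k = 0

# Universal property: theta : Z2 -> Z6, theta(x) = 3x, satisfies
# phi o theta = 0, so it factors through k.
Z2 = [0, 1]
theta = lambda x: (3 * x) % 6
assert all(phi(theta(x)) == 0 for x in Z2)
theta_prime = lambda x: theta(x)         # theta already lands in o_k
assert all(theta_prime(x) in o_k for x in Z2)
assert all(theta(x) == k(theta_prime(x)) for x in Z2)
```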

For kernels the following theorem holds:

Theorem 3.3:

Let ${\displaystyle {\mathcal {C}}}$ be a category with zero objects, let ${\displaystyle f:a\to b}$ be a morphism and let ${\displaystyle k:o_{k}\to a}$ be a kernel of ${\displaystyle f}$. Then ${\displaystyle k}$ is a monic (that is, a monomorphism).

Proof:

Let ${\displaystyle s,t}$ be morphisms with ${\displaystyle k\circ s=k\circ t}$. Since ${\displaystyle f\circ (k\circ s)=(f\circ k)\circ s=0}$, the defining property of the kernel applies to the morphism ${\displaystyle k\circ s}$: it factors through ${\displaystyle k}$ in a unique way. Now both ${\displaystyle s}$ and ${\displaystyle t}$ are factorisations of ${\displaystyle k\circ s}$ over ${\displaystyle k}$. By uniqueness of such factorisations, ${\displaystyle s=t}$.${\displaystyle \Box }$

Kernels are essentially unique:

Theorem 3.4:

Let ${\displaystyle {\mathcal {C}}}$ be a category with zero objects, let ${\displaystyle f:a\to b}$ be a morphism and let ${\displaystyle k:o_{k}\to a}$, ${\displaystyle {\tilde {k}}:o_{\tilde {k}}\to a}$ be two kernels of ${\displaystyle f}$. Then

${\displaystyle o_{k}\cong o_{\tilde {k}}}$;

that is to say, ${\displaystyle k}$ and ${\displaystyle {\tilde {k}}}$ are isomorphic.

Proof:

From the first property of kernels, we obtain ${\displaystyle f\circ k=0}$ and ${\displaystyle f\circ {\tilde {k}}=0}$. Hence, the second property of kernels yields unique morphisms ${\displaystyle k':o_{k}\to o_{\tilde {k}}}$ and ${\displaystyle {\tilde {k}}':o_{\tilde {k}}\to o_{k}}$ such that

${\displaystyle k={\tilde {k}}\circ k'}$ and ${\displaystyle {\tilde {k}}=k\circ {\tilde {k}}'}$.

We claim that ${\displaystyle k'}$ and ${\displaystyle {\tilde {k}}'}$ are inverse to each other.

${\displaystyle {\tilde {k}}k'{\tilde {k}}'=k{\tilde {k}}'={\tilde {k}}={\tilde {k}}1_{o_{\tilde {k}}}}$ and ${\displaystyle k{\tilde {k}}'k'={\tilde {k}}k'=k=k1_{o_{k}}}$.

Since both ${\displaystyle k}$ and ${\displaystyle {\tilde {k}}}$ are monic by theorem 3.3, we may cancel them to obtain

${\displaystyle k'{\tilde {k}}'=1_{o_{\tilde {k}}}}$ and ${\displaystyle {\tilde {k}}'k'=1_{o_{k}}}$,

that is, we have inverse arrows and thus, by definition, isomorphisms.${\displaystyle \Box }$

## Cokernels

An analogous notion is that of a cokernel. This notion is actually common in mathematics, but not so much at the undergraduate level.

Definition 3.5:

Let ${\displaystyle {\mathcal {C}}}$ be a category with zero objects, and let ${\displaystyle f:a\to b}$ be a morphism between two objects ${\displaystyle a,b}$ of ${\displaystyle {\mathcal {C}}}$. A cokernel of ${\displaystyle f}$ is an arrow ${\displaystyle u:b\to o_{u}}$, where ${\displaystyle o_{u}}$ is an object of ${\displaystyle {\mathcal {C}}}$ which we may call the object associated to the cokernel ${\displaystyle u}$, such that

1. ${\displaystyle u\circ f=0_{a,o_{u}}}$, and
2. for each object ${\displaystyle c}$ of ${\displaystyle {\mathcal {C}}}$ and each morphism ${\displaystyle h:b\to c}$ such that ${\displaystyle h\circ f=0_{a,c}}$, there exists a unique factorisation ${\displaystyle h=h'\circ u}$ for a suitable morphism ${\displaystyle h'}$.

In diagram form, the second property says that every morphism ${\displaystyle h}$ with ${\displaystyle h\circ f=0}$ factors uniquely through ${\displaystyle u}$.

Again, this notion is just a generalisation of facts observed in "everyday" categories. Our first example of cokernels shall be the existence of cokernels in Abelian groups. Now actually, cokernels exist even in the category of groups, but the construction is a bit tricky since in general, the image need not be a normal subgroup, which is why we may not be able to form the factor group by the image. In Abelian groups though, all subgroups are normal, and hence this is possible.

Example 3.6:

In the category of Abelian groups, every morphism has a cokernel.

Proof:

Let ${\displaystyle G,H}$ be any two Abelian groups, and let ${\displaystyle \varphi :G\to H}$ be a group homomorphism. We set

${\displaystyle o_{u}:=H/\operatorname {im} \varphi }$;

we may form this quotient group because within an Abelian group, all subgroups are normal. Further, we set

${\displaystyle u:H\to H/\operatorname {im} \varphi ,u(h)=h+\operatorname {im} \varphi }$,

the projection (we adhere to the custom of writing Abelian groups in an additive fashion). Let now ${\displaystyle \eta :H\to I}$ be a group homomorphism such that ${\displaystyle \eta \circ \varphi =0}$, where ${\displaystyle I}$ is another Abelian group. Then the function

${\displaystyle \eta ':H/\operatorname {im} \varphi \to I,\quad \eta '(h+\operatorname {im} \varphi ):=\eta (h)}$

is well-defined (since ${\displaystyle \eta \circ \varphi =0}$ means that ${\displaystyle \eta }$ vanishes on ${\displaystyle \operatorname {im} \varphi }$), and the desired unique factorisation of ${\displaystyle \eta }$ is given by ${\displaystyle \eta =\eta '\circ u}$.${\displaystyle \Box }$

Theorem 3.7:

Every cokernel is an epi (that is, an epimorphism).

Proof:

Let ${\displaystyle f}$ be a morphism and ${\displaystyle u}$ a corresponding cokernel. Assume that ${\displaystyle t\circ u=s\circ u}$. Since ${\displaystyle (t\circ u)\circ f=t\circ (u\circ f)=0}$, the defining property of the cokernel applies to the morphism ${\displaystyle t\circ u}$: it factors through ${\displaystyle u}$ in a unique way. By their equality, both ${\displaystyle t}$ and ${\displaystyle s}$ are factorisations of ${\displaystyle t\circ u}$ over ${\displaystyle u}$. Hence, by the uniqueness of such factorisations required in the definition of cokernels, ${\displaystyle s=t}$.${\displaystyle \Box }$

Theorem 3.8:

If a morphism ${\displaystyle f}$ has two cokernels ${\displaystyle u}$ and ${\displaystyle {\tilde {u}}}$ (let's call the associated objects ${\displaystyle o_{u}}$ and ${\displaystyle o_{\tilde {u}}}$), then ${\displaystyle u\cong {\tilde {u}}}$; that is, ${\displaystyle u}$ and ${\displaystyle {\tilde {u}}}$ are isomorphic.

Proof:

Once again, we have ${\displaystyle u\circ f=0}$ and ${\displaystyle {\tilde {u}}\circ f=0}$, and hence the defining property of cokernels yields unique morphisms ${\displaystyle u':o_{u}\to o_{\tilde {u}}}$ and ${\displaystyle {\tilde {u}}':o_{\tilde {u}}\to o_{u}}$ such that

${\displaystyle {\tilde {u}}=u'\circ u}$ and ${\displaystyle u={\tilde {u}}'\circ {\tilde {u}}}$.

We once again claim that ${\displaystyle u'}$ and ${\displaystyle {\tilde {u}}'}$ are inverse to each other. Indeed, we obtain the equations

${\displaystyle u'{\tilde {u}}'{\tilde {u}}=u'u={\tilde {u}}=1_{o_{\tilde {u}}}{\tilde {u}}}$ and ${\displaystyle {\tilde {u}}'u'u={\tilde {u}}'{\tilde {u}}=u=1_{o_{u}}u}$

and by cancellation (both ${\displaystyle u}$ and ${\displaystyle {\tilde {u}}}$ are epis due to theorem 3.7) we obtain

${\displaystyle u'{\tilde {u}}'=1_{o_{\tilde {u}}}}$ and ${\displaystyle {\tilde {u}}'u'=1_{o_{u}}}$

and hence the theorem.${\displaystyle \Box }$

## Interplay between kernels and cokernels

Theorem 3.9:

Let ${\displaystyle {\mathcal {C}}}$ be a category with zero objects, and let ${\displaystyle k}$ be a morphism of ${\displaystyle {\mathcal {C}}}$ such that ${\displaystyle k}$ is the kernel of some arbitrary morphism ${\displaystyle f}$ of ${\displaystyle {\mathcal {C}}}$. Then ${\displaystyle k}$ is also the kernel of any cokernel of itself.

Proof:

${\displaystyle k=\ker f}$ means that ${\displaystyle f\circ k=0}$ and that every morphism ${\displaystyle l}$ with ${\displaystyle f\circ l=0}$ factors uniquely as ${\displaystyle l=k\circ l'}$.

We set ${\displaystyle q:=\operatorname {coker} k}$; that is, ${\displaystyle q\circ k=0}$ and every morphism ${\displaystyle m}$ with ${\displaystyle m\circ k=0}$ factors uniquely as ${\displaystyle m=m'\circ q}$.

In particular, since ${\displaystyle f\circ k=0}$, there exists a unique ${\displaystyle f'}$ such that ${\displaystyle f=f'\circ q}$. We now want to show that ${\displaystyle k}$ is a kernel of ${\displaystyle q}$; that is, ${\displaystyle q\circ k=0}$ and every morphism ${\displaystyle l}$ with ${\displaystyle q\circ l=0}$ factors uniquely through ${\displaystyle k}$.

The first condition, ${\displaystyle q\circ k=0}$, holds by the definition of the cokernel. Hence assume ${\displaystyle q\circ l=0}$. Then ${\displaystyle f\circ l=f'\circ q\circ l=0}$. Hence, by the kernel property of ${\displaystyle k}$, ${\displaystyle l=k\circ l'}$ for a unique ${\displaystyle l'}$, which is exactly what we want.${\displaystyle \Box }$

Theorem 3.10:

Let ${\displaystyle {\mathcal {C}}}$ be a category with zero objects, and let ${\displaystyle q}$ be a morphism of ${\displaystyle {\mathcal {C}}}$ such that ${\displaystyle q}$ is the cokernel of some arbitrary morphism ${\displaystyle r}$ of ${\displaystyle {\mathcal {C}}}$. Then ${\displaystyle q}$ is also the cokernel of any kernel of itself.

Proof:

The statement that ${\displaystyle q}$ is the cokernel of ${\displaystyle r}$ reads: ${\displaystyle q\circ r=0}$ and every morphism ${\displaystyle m}$ with ${\displaystyle m\circ r=0}$ factors uniquely as ${\displaystyle m=m'\circ q}$.

We set ${\displaystyle k:=\ker q}$; that is, ${\displaystyle q\circ k=0}$ and every morphism ${\displaystyle l}$ with ${\displaystyle q\circ l=0}$ factors uniquely as ${\displaystyle l=k\circ l'}$.

In particular, since ${\displaystyle q\circ r=0}$, we have ${\displaystyle r=k\circ r'}$ for a suitable unique morphism ${\displaystyle r'}$. We now want ${\displaystyle q}$ to be a cokernel of ${\displaystyle k}$; that is, ${\displaystyle q\circ k=0}$ (which holds by the definition of the kernel) and every morphism ${\displaystyle m}$ with ${\displaystyle m\circ k=0}$ factors uniquely through ${\displaystyle q}$.

Let thus ${\displaystyle m\circ k=0}$. Then also ${\displaystyle m\circ r=m\circ k\circ r'=0}$ and hence ${\displaystyle m}$ has a unique factorisation ${\displaystyle m=m'\circ q}$ by the cokernel property of ${\displaystyle q}$.${\displaystyle \Box }$

Corollary 3.11:

Let ${\displaystyle {\mathcal {C}}}$ be a category that has a zero object and where all morphisms have kernels and cokernels, and let ${\displaystyle f}$ be an arbitrary morphism of ${\displaystyle {\mathcal {C}}}$. Then

${\displaystyle \ker f=\ker(\operatorname {coker} (\ker f))}$

and

${\displaystyle \operatorname {coker} f=\operatorname {coker} (\ker(\operatorname {coker} f))}$.

The equation

${\displaystyle \ker f=\ker(\operatorname {coker} (\ker f))}$

is to be read "the kernel of ${\displaystyle f}$ is a kernel of any cokernel of itself", and the same for the other equation with kernels replaced by cokernels and vice versa.

Proof:

${\displaystyle k:=\ker f}$ is a morphism which is some kernel. Hence, by theorem 3.9

${\displaystyle k=\ker(\operatorname {coker} (k))}$

(where the equation is to be read "${\displaystyle k}$ is a kernel of any cokernel of ${\displaystyle k}$"). Similarly, from theorem 3.10

${\displaystyle q=\operatorname {coker} (\ker(q))}$,

where ${\displaystyle q:=\operatorname {coker} f}$.${\displaystyle \Box }$

## Products

Definition 3.12:

Let ${\displaystyle {\mathcal {C}}}$ be a category, and let ${\displaystyle a,b}$ be two objects of ${\displaystyle {\mathcal {C}}}$. A product of ${\displaystyle a}$ and ${\displaystyle b}$, denoted ${\displaystyle a\times b}$, is an object of ${\displaystyle {\mathcal {C}}}$ together with two morphisms

${\displaystyle \pi _{a}:a\times b\to a}$ and ${\displaystyle \pi _{b}:a\times b\to b}$,

called the projections of ${\displaystyle a\times b}$, such that for any object ${\displaystyle c}$ and morphisms ${\displaystyle f:c\to a}$ and ${\displaystyle g:c\to b}$ there exists a unique morphism ${\displaystyle h:c\to a\times b}$ such that

${\displaystyle \pi _{a}\circ h=f}$ and ${\displaystyle \pi _{b}\circ h=g}$.

Example 3.13:

In the category of sets, the cartesian product ${\displaystyle a\times b}$ of two sets ${\displaystyle a,b}$, together with the coordinate projections, is a product of ${\displaystyle a}$ and ${\displaystyle b}$.
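In the category of sets, the cartesian product with the coordinate projections satisfies the defining property of a product. A sketch (hypothetical names):

```python
# Product in the category of sets: a x b with coordinate projections.
a, b, c = ['x', 'y'], [0, 1, 2], ['p', 'q']
a_times_b = [(x, y) for x in a for y in b]
pi_a = lambda pair: pair[0]
pi_b = lambda pair: pair[1]

f = {'p': 'x', 'q': 'y'}                 # f : c -> a
g = {'p': 2, 'q': 0}                     # g : c -> b
h = lambda z: (f[z], g[z])               # the pairing <f, g> : c -> a x b

assert all(h(z) in a_times_b for z in c)
assert all(pi_a(h(z)) == f[z] for z in c)    # pi_a o h = f
assert all(pi_b(h(z)) == g[z] for z in c)    # pi_b o h = g
```

Uniqueness holds because both coordinates of ${\displaystyle h(z)}$ are forced by the two equations.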

Theorem 3.14:

If ${\displaystyle {\mathcal {C}}}$ is a category, ${\displaystyle a,b}$ are objects of ${\displaystyle {\mathcal {C}}}$ and ${\displaystyle p,q}$ are products of ${\displaystyle a}$ and ${\displaystyle b}$, then

${\displaystyle p\cong q}$,

that is, ${\displaystyle p}$ and ${\displaystyle q}$ are isomorphic.

Theorem 3.15:

Let ${\displaystyle {\mathcal {C}}}$ be a category, ${\displaystyle a,b}$ objects of ${\displaystyle {\mathcal {C}}}$ and ${\displaystyle a\times b}$ a product of ${\displaystyle a}$ and ${\displaystyle b}$. Then the projection morphisms ${\displaystyle \pi _{a}}$ and ${\displaystyle \pi _{b}}$ are monics.

## Coproducts

Definition 3.16:

Let ${\displaystyle {\mathcal {C}}}$ be a category, and let ${\displaystyle a}$ and ${\displaystyle b}$ be objects of ${\displaystyle {\mathcal {C}}}$. Then a coproduct of ${\displaystyle a}$ and ${\displaystyle b}$ is another object of ${\displaystyle {\mathcal {C}}}$, denoted ${\displaystyle a\coprod b}$, together with two morphisms ${\displaystyle i_{a}:a\to a\coprod b}$ and ${\displaystyle i_{b}:b\to a\coprod b}$ such that for any object ${\displaystyle c}$ and morphisms ${\displaystyle f:a\to c}$ and ${\displaystyle g:b\to c}$, there exists a unique morphism ${\displaystyle h:a\coprod b\to c}$ such that ${\displaystyle h\circ i_{a}=f}$ and ${\displaystyle h\circ i_{b}=g}$.

Example 3.17:

In the category of sets, the disjoint union of two sets, together with the two inclusion maps, is a coproduct of those sets.
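In the category of sets, the disjoint union (tagged union) with the two inclusions satisfies the defining property of a coproduct; the induced morphism is "case analysis on the tag". A sketch (hypothetical names):

```python
# Coproduct in the category of sets: the tagged disjoint union.
a, b = ['x', 'y'], [0, 1]
coproduct = [('a', x) for x in a] + [('b', x) for x in b]
i_a = lambda x: ('a', x)
i_b = lambda x: ('b', x)

c = ['p', 'q', 'r']
f = {'x': 'p', 'y': 'r'}             # f : a -> c
g = {0: 'q', 1: 'q'}                 # g : b -> c
# The unique h : a [coproduct] b -> c, defined by cases on the tag.
h = lambda tagged: f[tagged[1]] if tagged[0] == 'a' else g[tagged[1]]

assert all(h(i_a(x)) == f[x] for x in a)     # h o i_a = f
assert all(h(i_b(x)) == g[x] for x in b)     # h o i_b = g
```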

Theorem 3.18:

Theorem 3.19:

## Biproducts

Definition 3.20:

Let ${\displaystyle {\mathcal {C}}}$ be a category that contains two objects ${\displaystyle a}$ and ${\displaystyle b}$. Assume we are given an object ${\displaystyle c}$ of ${\displaystyle {\mathcal {C}}}$ together with four morphisms that make it into a product, and simultaneously into a coproduct. Then we call ${\displaystyle c}$ a biproduct of the two objects ${\displaystyle a}$ and ${\displaystyle b}$ and denote it by

${\displaystyle c=a\oplus b}$.

Example 3.21:

Within the category of Abelian groups, a biproduct is given by the product group; if ${\displaystyle G,H}$ are Abelian groups, set the product group of ${\displaystyle G}$ and ${\displaystyle H}$ to be

${\displaystyle G\times H}$,

the cartesian product, with component-wise group operation.

Proof:
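To make example 3.21 concrete, here is a small Python sketch (our own illustration, not part of the text; the names `p_G`, `p_H`, `i_G`, `i_H` for the four structure morphisms are our choices) of the biproduct ${\displaystyle \mathbb {Z} /2\mathbb {Z} \oplus \mathbb {Z} /3\mathbb {Z} }$, realised as the product group with component-wise addition:

```python
# Hedged sketch: the product group Z/2 x Z/3 with componentwise addition,
# together with the evident projections and injections.

nG, nH = 2, 3  # orders of the cyclic groups G = Z/2, H = Z/3

def add(x, y):
    # componentwise group operation on G x H
    return ((x[0] + y[0]) % nG, (x[1] + y[1]) % nH)

p_G = lambda x: x[0]          # projection onto G (product structure)
p_H = lambda x: x[1]          # projection onto H
i_G = lambda g: (g % nG, 0)   # injection of G (coproduct structure)
i_H = lambda h: (0, h % nH)   # injection of H

elements = [(g, h) for g in range(nG) for h in range(nH)]

# the projections split the injections ...
assert all(p_G(i_G(g)) == g for g in range(nG))
assert all(p_H(i_H(h)) == h for h in range(nH))
# ... and every element is the sum of its two "components"
assert all(add(i_G(p_G(x)), i_H(p_H(x))) == x for x in elements)
```

The projections carry the product structure, the injections the coproduct structure; the last assertion is the decomposition that makes the two structures compatible.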

# Diagram chasing within Abelian categories

## Exact sequences of Abelian groups

Definition 4.1 (sequence):

Given ${\displaystyle n}$ Abelian groups ${\displaystyle A_{1},A_{2},\ldots ,A_{n}}$ and ${\displaystyle n-1}$ morphisms (that is, since we are in the category of Abelian groups, group homomorphisms)

${\displaystyle \varphi _{j}:A_{j}\to A_{j+1}}$,

we may define the whole of those to be a sequence of Abelian groups, and denote it by

${\displaystyle A_{1}{\overset {\varphi _{1}}{\longrightarrow }}A_{2}{\overset {\varphi _{2}}{\longrightarrow }}\cdots {\overset {\varphi _{n-2}}{\longrightarrow }}A_{n-1}{\overset {\varphi _{n-1}}{\longrightarrow }}A_{n}}$.

Note that if one of the objects is the trivial group, we denote it by ${\displaystyle 0}$ and simply leave out the labels of the arrows going to it and emanating from it, since the trivial group is the zero object in the category of Abelian groups.

There are also infinite exact sequences, indicated by a notation of the form

${\displaystyle A_{1}{\overset {\varphi _{1}}{\longrightarrow }}A_{2}{\overset {\varphi _{2}}{\longrightarrow }}\cdots {\overset {\varphi _{n-2}}{\longrightarrow }}A_{n-1}{\overset {\varphi _{n-1}}{\longrightarrow }}A_{n}{\overset {\varphi _{n}}{\longrightarrow }}\cdots }$;

it just goes on and on. That the exact sequence is infinite means that we have a sequence (in the classical sense) of objects and another classical sequence of morphisms between these objects (here, the two have the same cardinality: countably infinite).

Definition 4.2 (exact sequence):

A given sequence

${\displaystyle A_{1}{\overset {\varphi _{1}}{\longrightarrow }}A_{2}{\overset {\varphi _{2}}{\longrightarrow }}\cdots {\overset {\varphi _{n-2}}{\longrightarrow }}A_{n-1}{\overset {\varphi _{n-1}}{\longrightarrow }}A_{n}}$

is called exact iff for all ${\displaystyle i\in \{1,\ldots ,n-2\}}$,

${\displaystyle \operatorname {im} \varphi _{i}=\ker \varphi _{i+1}}$.

There is a fundamental example to this notion.

Example 4.3 (short exact sequence):

A short exact sequence is simply an exact sequence of the form

${\displaystyle 0\longrightarrow A{\overset {f}{\longrightarrow }}B{\overset {g}{\longrightarrow }}C\longrightarrow 0}$

for suitable Abelian groups ${\displaystyle A,B,C}$ and group homomorphisms ${\displaystyle f:A\to B,g:B\to C}$.

The exactness of this sequence means, in view of the image and kernel of the zero morphisms, precisely:

1. ${\displaystyle f}$ injective
2. ${\displaystyle \ker g=\operatorname {im} f}$
3. ${\displaystyle g}$ surjective.

Example 4.4:

Set ${\displaystyle A:=\mathbb {Z} /3\mathbb {Z} }$, ${\displaystyle B:=\mathbb {Z} /15\mathbb {Z} }$, ${\displaystyle C:=\mathbb {Z} /5\mathbb {Z} }$, where we only consider the additive group structure, and define the group homomorphisms

${\displaystyle f:A\to B,f(n+3\mathbb {Z} ):=5n+15\mathbb {Z} }$ and ${\displaystyle g:B\to C,g(n+15\mathbb {Z} ):=n+5\mathbb {Z} }$.

This gives a short exact sequence

${\displaystyle 0\longrightarrow A{\overset {f}{\longrightarrow }}B{\overset {g}{\longrightarrow }}C\longrightarrow 0}$,

as can be easily checked.

A similar construction can be done for any factorisation of natural numbers ${\displaystyle k=m\cdot j}$ (in our example, ${\displaystyle k=15}$, ${\displaystyle m=3}$, ${\displaystyle j=5}$).
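Example 4.4 can be verified mechanically. The following Python sketch (our own check, assuming nothing beyond the definitions above) tests injectivity of ${\displaystyle f}$, surjectivity of ${\displaystyle g}$, and exactness in the middle:

```python
# Computational check of the short exact sequence 0 -> Z/3 -> Z/15 -> Z/5 -> 0
# with f(n) = 5n mod 15 and g(n) = n mod 5.

A = list(range(3))    # Z/3
B = list(range(15))   # Z/15
C = list(range(5))    # Z/5

f = lambda n: (5 * n) % 15
g = lambda n: n % 5

# f is injective: distinct inputs give distinct outputs
assert len({f(n) for n in A}) == len(A)
# g is surjective: every element of Z/5 is hit
assert {g(n) for n in B} == set(C)
# exactness in the middle: im f = ker g
assert {f(n) for n in A} == {n for n in B if g(n) == 0}
```

Here ${\displaystyle \operatorname {im} f=\ker g=\{0,5,10\}}$, the subgroup of ${\displaystyle \mathbb {Z} /15\mathbb {Z} }$ of order ${\displaystyle 3}$.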

## Diagram chase: The short five lemma

We now should like to briefly exemplify a supremely important method of proof called diagram chase in the case of Abelian groups. We shall later like to generalize this method, and we will see that the classical diagram lemmas hold in huge generality (that includes our example below), namely in the generality of Abelian categories (to be introduced below).

Theorem 4.5 (the short five lemma):

Assume we have a commutative diagram

${\displaystyle {\begin{array}{ccccccccc}0&\longrightarrow &A&{\overset {p}{\longrightarrow }}&B&{\overset {q}{\longrightarrow }}&C&\longrightarrow &0\\&&{\scriptstyle g}\downarrow &&{\scriptstyle f}\downarrow &&{\scriptstyle h}\downarrow &&\\0&\longrightarrow &A'&{\overset {s}{\longrightarrow }}&B'&{\overset {t}{\longrightarrow }}&C'&\longrightarrow &0\end{array}}}$,

where the two rows are exact. If ${\displaystyle g}$ and ${\displaystyle h}$ are isomorphisms, then so must be ${\displaystyle f}$.

Proof:

We first prove that ${\displaystyle f}$ is injective. Let ${\displaystyle f(b)=0}$ for a ${\displaystyle b\in B}$. Since the given diagram is commutative, we have ${\displaystyle 0=t(f(b))=h(q(b))}$ and since ${\displaystyle h}$ is an isomorphism, ${\displaystyle q(b)=0}$. Since the top row is exact, it follows that ${\displaystyle b\in \operatorname {im} p}$, that is, ${\displaystyle b=p(a)}$ for a suitable ${\displaystyle a\in A}$. Hence, the commutativity of the given diagram implies ${\displaystyle 0=f(b)=f(p(a))=s(g(a))}$, and hence ${\displaystyle a=0}$ since ${\displaystyle s\circ g}$ is injective as the composition of two injective maps. Therefore, ${\displaystyle b=p(a)=p(0)=0}$.

Next, we prove that ${\displaystyle f}$ is surjective. Let thus ${\displaystyle b'\in B'}$ be given. Set ${\displaystyle c':=t(b')}$. Since ${\displaystyle h\circ q}$ is surjective as the composition of two surjective maps, there exists ${\displaystyle b\in B}$ such that ${\displaystyle h(q(b))=c'}$. The commutativity of the given diagram yields ${\displaystyle t(f(b))=c'}$. Thus, ${\displaystyle t(f(b)-b')=0}$ by linearity, whence ${\displaystyle f(b)-b'\in \ker t=\operatorname {im} s}$, and since ${\displaystyle g}$ is an isomorphism, we find ${\displaystyle a\in A}$ such that ${\displaystyle s(g(a))=f(b)-b'}$. The commutativity of the diagram yields ${\displaystyle f(b)-b'=s(g(a))=f(p(a))}$, and hence ${\displaystyle f(b-p(a))=b'}$.${\displaystyle \Box }$

Definition 4.6:

An additive category is a category ${\displaystyle {\mathcal {C}}}$ such that the following holds:

1. ${\displaystyle \operatorname {Hom} (a,b)}$ is an Abelian group for all objects ${\displaystyle a,b}$ of ${\displaystyle {\mathcal {C}}}$.
2. The composition of arrows
${\displaystyle \circ :\operatorname {Hom} (b,c)\times \operatorname {Hom} (a,b)\to \operatorname {Hom} (a,c)}$
is bilinear; that is, for ${\displaystyle f,f'\in \operatorname {Hom} (b,c)}$ and ${\displaystyle g,g'\in \operatorname {Hom} (a,b)}$, we have
${\displaystyle (f+f')\circ (g+g')=f\circ g+f'\circ g+f\circ g'+f'\circ g'}$
(note that, since no scalar multiplication is involved, this definition of bilinearity is less rich than bilinearity in vector spaces).
3. ${\displaystyle {\mathcal {C}}}$ has a zero object.
4. Each pair of objects ${\displaystyle a,b}$ of ${\displaystyle {\mathcal {C}}}$ has a biproduct ${\displaystyle a\oplus b}$.

Although additive categories are important in their own right, we shall only treat them as an intermediate step towards the definition of Abelian categories.

## Abelian categories

Definition 4.7:

An Abelian category is an additive category ${\displaystyle {\mathcal {C}}}$ such that furthermore:

1. Every arrow of ${\displaystyle {\mathcal {C}}}$ has a kernel and a cokernel, and
2. every monic arrow of ${\displaystyle {\mathcal {C}}}$ is the kernel of some arrow, and every epic arrow of ${\displaystyle {\mathcal {C}}}$ is the cokernel of some arrow.

We now embark to obtain a canonical factorisation of arrows within Abelian categories.

Lemma 4.8:

Let ${\displaystyle {\mathcal {C}}}$ be a category with a zero object and kernels and cokernels for all arrows. Then every arrow ${\displaystyle f}$ of ${\displaystyle {\mathcal {C}}}$ admits a factorisation

${\displaystyle f=kq}$,

where ${\displaystyle k=\ker(\operatorname {coker} f)}$.

Proof:

We set ${\displaystyle u:=\operatorname {coker} f}$ and ${\displaystyle k:=\ker(\operatorname {coker} f)}$. Since ${\displaystyle u\circ f=0}$, the defining property of ${\displaystyle k}$ as a kernel implies that ${\displaystyle f}$ factors uniquely through ${\displaystyle k}$; that is, ${\displaystyle f=kq}$ for a unique morphism ${\displaystyle q}$.${\displaystyle \Box }$

In Abelian categories, ${\displaystyle q}$ is even an epimorphism:

Lemma 4.9:

Let ${\displaystyle {\mathcal {C}}}$ be an Abelian category. If ${\displaystyle k=\ker(\operatorname {coker} f)}$ and we have any factorisation ${\displaystyle f=kq}$, then ${\displaystyle q}$ is an epimorphism.

Proof:

Theorem 4.10:

Let ${\displaystyle {\mathcal {C}}}$ be an Abelian category. Then every arrow ${\displaystyle f}$ of ${\displaystyle {\mathcal {C}}}$ has a factorisation

${\displaystyle f=me}$,

where ${\displaystyle m=\ker(\operatorname {coker} f)}$ and ${\displaystyle e=\operatorname {coker} (\ker f)}$.

## Exact sequences in Abelian categories

We begin by defining the image of a morphism in a general context.

Definition 4.12:

Let ${\displaystyle f}$ be a morphism of a (this time arbitrary) category ${\displaystyle {\mathcal {C}}}$. If it exists, a kernel of a cokernel of ${\displaystyle f}$ is called an image of ${\displaystyle f}$.

Construction 4.13:

We shall now construct an equivalence relation on the set ${\displaystyle P_{c}}$ of all morphisms whose codomain is a certain ${\displaystyle c\in {\mathcal {C}}}$, where ${\displaystyle {\mathcal {C}}}$ is a category. We set

${\displaystyle f\leq g:\Leftrightarrow f=gf'}$ for a suitable ${\displaystyle f'}$ (that is, ${\displaystyle f}$ factors through ${\displaystyle g}$).

This relation is transitive and reflexive. Hence, if we define

${\displaystyle f\sim g:\Leftrightarrow f\leq g\wedge g\leq f}$,

we have an equivalence relation (in fact, in this way we can always construct an equivalence relation from a transitive and reflexive binary relation, that is, a preorder).
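As a concrete instance of the parenthetical remark — symmetrising any preorder yields an equivalence relation — consider the divisibility preorder on the nonzero integers (a toy example of our own, not the relation on morphisms itself):

```python
# Our own illustration: the preorder "a divides b" on nonzero integers,
# symmetrised to an equivalence relation a ~ b iff a | b and b | a,
# which identifies n with -n.

divides = lambda a, b: b % a == 0  # a "<=" b in the preorder

equiv = lambda a, b: divides(a, b) and divides(b, a)

# reflexivity and transitivity of the preorder
assert divides(3, 3)
assert divides(2, 6) and divides(6, 12) and divides(2, 12)
# the induced equivalence relation identifies n and -n
assert equiv(4, -4)
assert not equiv(4, 8)
```

The same recipe applied to the relation ${\displaystyle f\leq g}$ on morphisms with codomain ${\displaystyle c}$ gives the equivalence relation of the construction above.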

With the image at hand, we may proceed to the definition of sequences, exact sequences and short exact sequences in a general context.

Definition 4.14:

Let ${\displaystyle {\mathcal {C}}}$ be an Abelian category.

Definition 4.15:

Let ${\displaystyle {\mathcal {C}}}$ be an Abelian category.

Definition 4.16:

Let ${\displaystyle {\mathcal {C}}}$ be an Abelian category.

## Diagram chase within Abelian categories

Now comes the clincher we have been working towards. In the ordinary diagram chase, we used elements of sets. We will now replace those elements by arrows in a simple way: Instead of looking at "elements" "${\displaystyle x\in a}$" of some object ${\displaystyle a}$ of an Abelian category ${\displaystyle {\mathcal {C}}}$, we look at arrows towards that object; that is, arrows ${\displaystyle x:d\to a}$ for arbitrary objects ${\displaystyle d}$ of ${\displaystyle {\mathcal {C}}}$. For "the codomain of an arrow ${\displaystyle x}$ is ${\displaystyle a}$", we write

${\displaystyle x\in _{m}a}$,

where the subscript ${\displaystyle m}$ stands for "member".

We have now replaced the notion of elements of a set by the notion of members in category theory. We also need to replace the notion of equality of two elements. We don't want equality of two arrows, since then we would not obtain the usual rules for chasing diagrams. Instead, we define yet another equivalence relation on arrows with codomain ${\displaystyle a}$ (that is, on members of ${\displaystyle a}$). The following lemma will help to that end.

Lemma 4.18 (square completion):

Construction 4.19 (second equivalence relation):

Now we are finally able to prove the proposition that will enable us to do diagram chases using the techniques we also apply to diagram chases for Abelian groups (or modules, or any other Abelian category).

Theorem 4.20 (diagram chase enabling theorem):

Let ${\displaystyle {\mathcal {C}}}$ be an Abelian category and ${\displaystyle a}$ an object of ${\displaystyle {\mathcal {C}}}$. We have the following rules concerning properties of a morphism:

1. ${\displaystyle f:a\to b}$ is monic iff ${\displaystyle \forall x\in _{m}a:fx\equiv 0\Rightarrow x\equiv 0}$.
2. ${\displaystyle f:a\to b}$ is monic iff ${\displaystyle \forall x,x'\in _{m}a:fx\equiv fx'\Rightarrow x\equiv x'}$.
3. ${\displaystyle f:a\to b}$ is epic iff ${\displaystyle \forall y\in _{m}b:\exists x\in _{m}a:fx\equiv y}$.
4. ${\displaystyle f:a\to b}$ is the zero arrow iff ${\displaystyle \forall x\in _{m}a:fx\equiv 0}$.
5. A sequence ${\displaystyle a{\overset {f}{\longrightarrow }}b{\overset {g}{\longrightarrow }}c}$ is exact iff
1. ${\displaystyle gf=0}$ and
2. for each ${\displaystyle y\in _{m}b}$ with ${\displaystyle gy\equiv 0}$, there exists ${\displaystyle x\in _{m}a}$ such that ${\displaystyle fx\equiv y}$.
6. If ${\displaystyle f:a\to b}$ is a morphism and ${\displaystyle x,y\in _{m}a}$ are such that ${\displaystyle fx\equiv fy}$, there exists a member of ${\displaystyle a}$, which we shall call ${\displaystyle (x-y)}$ (the brackets indicate that this is one morphism), such that:
1. ${\displaystyle f(x-y)\equiv 0}$
2. ${\displaystyle gx\equiv 0\Rightarrow g(x-y)\equiv -gy}$
3. ${\displaystyle hy\equiv 0\Rightarrow h(x-y)\equiv hx}$

We have thus constructed a relatively elaborate machinery in order to elevate our proof technique of the diagram chase (which is ubiquitous) to the very abstract level of Abelian categories.

## Examples of diagram lemmas

Theorem 4.21 (the long five lemma):

Theorem 4.22 (the snake lemma):

# Modules, submodules and homomorphisms

## Basics

Definition 5.1 (modules):

Let ${\displaystyle R}$ be a ring. A left ${\displaystyle R}$-module is an Abelian group ${\displaystyle M}$ together with a function

${\displaystyle R\times M\to M,(r,m)\mapsto rm}$

such that

1. ${\displaystyle \forall m\in M:1_{R}m=m}$,
2. ${\displaystyle \forall m,n\in M,r\in R:r(m+n)=rm+rn}$,
3. ${\displaystyle \forall m\in M,r,s\in R:(r+s)m=rm+sm}$ and
4. ${\displaystyle \forall m\in M,r,s\in R:r(sm)=(rs)m}$.

Analogously, one can define right ${\displaystyle R}$-modules with an operation ${\displaystyle R\times M\to M,(r,m)\mapsto mr}$; the difference is only formal, but it will later help us define bimodules in a user-friendly way.

For the sake of brevity, we will often write module instead of left ${\displaystyle R}$-module.

• Exercise 5.1.1: Prove that every Abelian monoid ${\displaystyle (M,+)}$ together with an operation as specified in 1.) - 4.) of definition 5.1 is already a module.

## Submodules

Definition 5.2 (submodules):

A subgroup ${\displaystyle N\leq M}$ which is closed under the module operation (i.e. the left multiplication defined above) is called a submodule of ${\displaystyle M}$; in this case we write ${\displaystyle N\leq M}$.

The following lemma gives a criterion for a subset of a module being a submodule.

Lemma 5.3:

A nonempty subset ${\displaystyle N\subseteq M}$ is a submodule iff

${\displaystyle \forall r\in R,n,q\in N:rn-q\in N}$.

Proof:

Let ${\displaystyle N}$ be a submodule. Then ${\displaystyle -q\in N}$ since ${\displaystyle N}$ is a subgroup of the Abelian group ${\displaystyle M}$, and ${\displaystyle rn\in N}$ due to closedness under the module operation; hence also ${\displaystyle rn+(-q)=:rn-q\in N}$.

Conversely, let ${\displaystyle N}$ be nonempty and such that ${\displaystyle \forall r\in R,n,q\in N:rn-q\in N}$. Choosing ${\displaystyle r=1_{R}}$ and ${\displaystyle q=n}$ gives ${\displaystyle 0\in N}$; choosing ${\displaystyle n=0}$ then gives ${\displaystyle -q\in N}$ for every ${\displaystyle q\in N}$; ${\displaystyle rn=rn-0\in N}$ gives closedness under the module operation; and for any ${\displaystyle n,m\in N}$ also ${\displaystyle n+m=1_{R}n-(-m)\in N}$.${\displaystyle \Box }$
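For a concrete check of lemma 5.3, the following Python sketch (our own; it samples finitely many scalars, so it is a test rather than a proof, and the helper name `satisfies_criterion` is our choice) examines subsets of the ${\displaystyle \mathbb {Z} }$-module ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$:

```python
# Testing the submodule criterion r*n - q in N in the Z-module Z/12.
M = 12  # we work in Z/12

def satisfies_criterion(N, scalars=range(-12, 13)):
    # check that r*n - q (mod 12) lies in N for sample scalars r
    return all((r * n - q) % M in N for r in scalars for n in N for q in N)

# {0, 4, 8} is a submodule (the multiples of 4) ...
assert satisfies_criterion({0, 4, 8})
# ... while {0, 4, 5} is not: e.g. 1*4 - 5 = -1 = 11 mod 12 escapes the set
assert not satisfies_criterion({0, 4, 5})
```

Restricting the scalars to a finite sample suffices here because the action of ${\displaystyle r\in \mathbb {Z} }$ on ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$ only depends on ${\displaystyle r}$ modulo ${\displaystyle 12}$.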

Definition and theorem 5.4 (factor modules): If ${\displaystyle N}$ is a submodule of ${\displaystyle M}$, the factor module of ${\displaystyle M}$ by ${\displaystyle N}$ is defined as the group ${\displaystyle M/N}$ together with the module operation

${\displaystyle r(m+N):=rm+N}$.

This operation is well-defined and satisfies 1. - 4. from definition 5.1.

Proof:

Well-definedness: If ${\displaystyle m+N=p+N}$, then ${\displaystyle m-p\in N}$, hence ${\displaystyle r(m-p)=rm-rp\in N}$ and thus ${\displaystyle rm+N=rp+N}$.

1. ${\displaystyle 1_{R}(m+N)=(1_{R}m)+N=m+N}$
2. ${\displaystyle r(m+N+n+N)=r((m+n)+N)=r(m+n)+N=rm+rn+N=rm+N+rn+N}$
3. ${\displaystyle (r+s)(m+N)=(r+s)m+N=rm+sm+N=rm+N+sm+N}$
4. analogous to 3. (replace ${\displaystyle +}$ by ${\displaystyle \cdot }$)${\displaystyle \Box }$

### Sum and intersection of submodules

We shall now ask the question: Given a module ${\displaystyle M}$ and certain submodules ${\displaystyle \{N_{\alpha }\}_{\alpha \in A}}$, which module is the smallest module containing all the ${\displaystyle N_{\alpha }}$? And which module is the largest module that is itself contained within all ${\displaystyle N_{\alpha }}$? The following definitions and theorems answer those questions.

Definition and theorem 5.5 (sum of submodules):

Let ${\displaystyle M}$ be a module over a certain ring ${\displaystyle R}$ and let ${\displaystyle \{N_{\alpha }\}_{\alpha \in A}}$ be submodules of ${\displaystyle M}$. The set

${\displaystyle \sum _{\alpha \in A}N_{\alpha }:=\left\{\sum _{l=1}^{k}r_{l}n_{\alpha _{l}}{\big |}k\in \mathbb {N} ,r_{l}\in R,n_{\alpha _{l}}\in N_{\alpha _{l}}\right\}}$

is a submodule of ${\displaystyle M}$, which is the smallest submodule of ${\displaystyle M}$ that contains all the ${\displaystyle N_{\alpha }}$. It is called the sum of ${\displaystyle \{N_{\alpha }\}_{\alpha \in A}}$.

Proof:

1. ${\displaystyle \sum _{\alpha \in A}N_{\alpha }}$ is a submodule:

• It is an Abelian subgroup since if ${\displaystyle \sum _{l=1}^{k}r_{l}n_{\alpha _{l}},\sum _{j=1}^{m}s_{j}n_{\beta _{j}}\in \sum _{\alpha \in A}N_{\alpha }}$, then
${\displaystyle \sum _{l=1}^{k}r_{l}n_{\alpha _{l}}-\sum _{j=1}^{m}s_{j}n_{\beta _{j}}=\sum _{l=1}^{k}r_{l}n_{\alpha _{l}}+\sum _{j=1}^{m}(-s_{j})n_{\beta _{j}}\in \sum _{\alpha \in A}N_{\alpha }}$.
• It is closed under the module operation, since
${\displaystyle s\left(\sum _{l=1}^{k}r_{l}n_{\alpha _{l}}\right)=\sum _{l=1}^{k}(sr_{l})n_{\alpha _{l}}\in \sum _{\alpha \in A}N_{\alpha }}$.

2. Each ${\displaystyle N_{\alpha }}$ is contained in ${\displaystyle \sum _{\alpha \in A}N_{\alpha }}$:

This follows since ${\displaystyle 1_{R}n_{\alpha }\in \sum _{\alpha \in A}N_{\alpha }}$ for each ${\displaystyle \alpha \in A}$ and each ${\displaystyle n_{\alpha }\in N_{\alpha }}$.

3. ${\displaystyle \sum _{\alpha \in A}N_{\alpha }}$ is the smallest submodule containing all the ${\displaystyle N_{\alpha }}$: If ${\displaystyle K\leq M}$ is another such submodule, then ${\displaystyle K}$ must contain all the elements

${\displaystyle \sum _{l=1}^{k}r_{l}n_{\alpha _{l}},k\in \mathbb {N} ,r_{l}\in R,n_{\alpha _{l}}\in N_{\alpha _{l}}}$

due to closedness under addition and the module operation.${\displaystyle \Box }$
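In a finite module the sum of submodules can be computed by closing the union of the ${\displaystyle N_{\alpha }}$ under addition. Here is a small Python sketch of ours for the ${\displaystyle \mathbb {Z} }$-module ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$ (the helper name `span` is our choice):

```python
# Computing the sum of submodules of Z/12 by additive closure.
M = 12

def span(Ns):
    # all sums of elements drawn from the submodules Ns; in Z/12 the
    # scalar multiples are already inside each N, so additive closure
    # of the union suffices
    S = {0}
    gens = set().union(*Ns)
    changed = True
    while changed:
        bigger = S | {(s + g) % M for s in S for g in gens}
        changed = bigger != S
        S = bigger
    return S

N1, N2 = {0, 6}, {0, 4, 8}
# the sum of the multiples of 6 and of 4 is the set of multiples of 2
assert span([N1, N2]) == {0, 2, 4, 6, 8, 10}
# the sum contains each summand
assert N1 <= span([N1, N2]) and N2 <= span([N1, N2])
```

This matches the familiar fact that in ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$ the sum of the subgroups generated by ${\displaystyle 6}$ and ${\displaystyle 4}$ is generated by ${\displaystyle \gcd(6,4)=2}$.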

Definition and theorem 5.6 (intersection of submodules):

Let ${\displaystyle M}$ be a module over a ring ${\displaystyle R}$, and let ${\displaystyle \{N_{\alpha }\}_{\alpha \in A}}$ be submodules of ${\displaystyle M}$. Then the set

${\displaystyle \bigcap _{\alpha \in A}N_{\alpha }}$

is a submodule of ${\displaystyle M}$, which is the largest submodule of ${\displaystyle M}$ that is contained in all the ${\displaystyle N_{\alpha }}$. It is called the intersection of the ${\displaystyle N_{\alpha }}$.

Proof:

1. It's a submodule: Indeed, if ${\displaystyle r\in R,n,p\in \bigcap _{\alpha \in A}N_{\alpha }}$, then ${\displaystyle n,p\in N_{\alpha }}$ for each ${\displaystyle \alpha }$ and thus ${\displaystyle rn-p\in N_{\alpha }}$ for each ${\displaystyle \alpha }$, hence ${\displaystyle rn-p\in \bigcap _{\alpha \in A}N_{\alpha }}$.

2. It is contained in all ${\displaystyle N_{\alpha }}$ by definition of the intersection.

3. Any submodule that is contained in each of the ${\displaystyle N_{\alpha }}$ is contained within the intersection, by definition of the intersection.${\displaystyle \Box }$

We have the following rule for computing with intersections and sums:

Theorem 5.7 (modular law; Dedekind):

Let ${\displaystyle M}$ be a module and ${\displaystyle K,L,N\leq M}$ such that ${\displaystyle L\subseteq K}$. Then

${\displaystyle K\cap (L+N)=L+(K\cap N)}$.

Proof:

${\displaystyle \subseteq }$: Let ${\displaystyle l+n\in K\cap (L+N)}$ with ${\displaystyle l\in L}$, ${\displaystyle n\in N}$. Since ${\displaystyle L\subseteq K}$, ${\displaystyle l\in K}$ and hence ${\displaystyle n=(l+n)-l\in K}$. Since also ${\displaystyle n\in N}$, ${\displaystyle l+n\in L+(K\cap N)}$.

${\displaystyle \supseteq }$: Let ${\displaystyle l+m\in L+(K\cap N)}$ with ${\displaystyle l\in L}$, ${\displaystyle m\in K\cap N}$. Since ${\displaystyle L\subseteq K}$, ${\displaystyle l\in K}$, and since further ${\displaystyle m\in K}$, ${\displaystyle l+m\in K}$. Moreover, ${\displaystyle l\in L}$ and ${\displaystyle m\in N}$, so ${\displaystyle l+m\in L+N}$. Hence, ${\displaystyle l+m\in K\cap (L+N)}$.${\displaystyle \Box }$
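Theorem 5.7 can be checked numerically for subgroups of ${\displaystyle \mathbb {Z} /12\mathbb {Z} }$, regarded as ${\displaystyle \mathbb {Z} }$-modules. This sketch of ours takes ${\displaystyle L=\{0,6\}\subseteq K=\{0,3,6,9\}}$ and ${\displaystyle N=\{0,2,4,6,8,10\}}$:

```python
# Checking Dedekind's modular law K ∩ (L+N) = L + (K ∩ N) in Z/12.
M = 12
K = {0, 3, 6, 9}        # multiples of 3
L = {0, 6}              # multiples of 6; note L is contained in K
N = {0, 2, 4, 6, 8, 10} # multiples of 2

plus = lambda X, Y: {(x + y) % M for x in X for y in Y}  # sum of subgroups

assert L <= K                              # the hypothesis of the law
assert K & plus(L, N) == plus(L, K & N)    # the modular law itself
```

Both sides evaluate to ${\displaystyle \{0,6\}}$ in this example.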

More abstractly, the properties of the sum and intersection of submodules may be theoretically captured in the following way:

### Lattices

Definition 5.8:

A lattice is a set ${\displaystyle L}$ together with two operations ${\displaystyle \vee :L\times L\to L}$ (called the join or least upper bound) and ${\displaystyle \wedge :L\times L\to L}$ (called the meet or greatest lower bound) such that the following laws hold:

1. Commutative laws: ${\displaystyle a\Box b=b\Box a}$, ${\displaystyle \Box \in \{\vee ,\wedge \}}$
2. Idempotency laws: ${\displaystyle a\Box a=a}$, ${\displaystyle \Box \in \{\vee ,\wedge \}}$
3. Absorption laws: ${\displaystyle a\Box (a\triangledown b)=a}$, ${\displaystyle \{\Box ,\triangledown \}=\{\vee ,\wedge \}}$
4. Associative laws: ${\displaystyle a\Box (b\Box c)=(a\Box b)\Box c}$, ${\displaystyle \Box \in \{\vee ,\wedge \}}$

There are some special types of lattices:

Definition 5.9:

A modular lattice ${\displaystyle L}$ is a lattice such that the identity

${\displaystyle a\geq c\Rightarrow a\wedge (b\vee c)=(a\wedge b)\vee c}$

holds.

Theorem 5.10 (ordered sets as lattices):

Let ${\displaystyle \leq }$ be a partial order on the set ${\displaystyle L}$ such that

1. every set ${\displaystyle S\subseteq L}$ has a least upper bound (where a least upper bound ${\displaystyle u}$ of ${\displaystyle S}$ satisfies ${\displaystyle u\geq s}$ for all ${\displaystyle s\in S}$ (i.e. it is an upper bound) and ${\displaystyle u\leq x}$ for every other upper bound ${\displaystyle x}$ of ${\displaystyle S}$) and
2. every set ${\displaystyle S\subseteq L}$ has a greatest lower bound (defined analogously to least upper bound with inequality reversed).

Then ${\displaystyle L}$, together with the join operation sending ${\displaystyle \{a,b\}}$ to the least upper bound of that set and the meet operation defined analogously, is a lattice.

In fact, it suffices to require conditions 1. and 2. only for sets ${\displaystyle S}$ with two elements. But as we have shown, in the case that ${\displaystyle L}$ is the set of all submodules of a given module, we have the "original" conditions satisfied.

Proof:

First, we note that least upper bound and greatest lower bound are unique, since if for example ${\displaystyle u,u'}$ are least upper bounds of ${\displaystyle S}$, then ${\displaystyle u\leq u'}$ and ${\displaystyle u'\leq u}$ and hence ${\displaystyle u=u'}$. Thus, the join and meet operations are well-defined.

The commutative laws follow from ${\displaystyle \{a,b\}=\{b,a\}}$.

The idempotency laws follow since ${\displaystyle a}$ is clearly both the least upper bound and the greatest lower bound of the set ${\displaystyle \{a,a\}=\{a\}}$.

The first absorption law is proven as follows: Let ${\displaystyle u}$ be the least upper bound of ${\displaystyle \{a,b\}}$. Then in particular, ${\displaystyle u\geq a}$. Hence, ${\displaystyle a}$ is a lower bound of ${\displaystyle \{a,u\}}$, and any lower bound ${\displaystyle l}$ satisfies ${\displaystyle l\leq a}$, which is why ${\displaystyle a}$ is the greatest lower bound of ${\displaystyle \{a,u\}}$. The second absorption law is proven analogously.

For the first associative law, let ${\displaystyle u}$ be the least upper bound of ${\displaystyle \{a,b,c\}}$ and let ${\displaystyle v}$ be the least upper bound of ${\displaystyle \{a,b\}}$. Then ${\displaystyle u\geq v}$ (as ${\displaystyle u}$ is an upper bound for ${\displaystyle \{a,b\}}$), and if ${\displaystyle w}$ is the least upper bound of ${\displaystyle \{v,c\}}$, then ${\displaystyle w=u}$: indeed, ${\displaystyle u}$ is an upper bound of ${\displaystyle \{v,c\}}$, and ${\displaystyle w}$ is an upper bound of ${\displaystyle \{a,b,c\}}$ since ${\displaystyle w\geq v\geq a}$, ${\displaystyle w\geq v\geq b}$ and ${\displaystyle w\geq c}$. The same argument (with ${\displaystyle a}$ and ${\displaystyle c}$ swapped) proves that ${\displaystyle u}$ is also the least upper bound of the l.u.b. of ${\displaystyle \{b,c\}}$ and ${\displaystyle a}$. Again, the second associative law is proven similarly.${\displaystyle \Box }$

From theorems 5.5-5.7 and 5.10 we note that the submodules of a module form a modular lattice, where the order is given by set inclusion.
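A standard concrete lattice fitting theorem 5.10 (our own example, not from the text): the divisors of ${\displaystyle 60}$, partially ordered by divisibility, with join given by the least common multiple and meet by the greatest common divisor. The sketch below verifies all four lattice laws by brute force:

```python
# The divisors of 60 under divisibility form a lattice with
# join = lcm and meet = gcd; we verify the lattice axioms directly.
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

divisors = [d for d in range(1, 61) if 60 % d == 0]

for a in divisors:
    for b in divisors:
        # commutativity and idempotency
        assert lcm(a, b) == lcm(b, a) and gcd(a, b) == gcd(b, a)
        assert lcm(a, a) == a and gcd(a, a) == a
        # absorption laws
        assert lcm(a, gcd(a, b)) == a and gcd(a, lcm(a, b)) == a
        for c in divisors:
            # associative laws
            assert lcm(a, lcm(b, c)) == lcm(lcm(a, b), c)
            assert gcd(a, gcd(b, c)) == gcd(gcd(a, b), c)
```

Here ${\displaystyle a\leq b:\Leftrightarrow a\mid b}$; the least upper bound of ${\displaystyle \{a,b\}}$ is exactly ${\displaystyle \operatorname {lcm} (a,b)}$ and the greatest lower bound is ${\displaystyle \gcd(a,b)}$.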

### Exercises

• Exercise 5.2.1: Let ${\displaystyle R}$ be a ring. Find a suitable module operation such that ${\displaystyle R}$ together with its own addition and this module operation is an ${\displaystyle R}$-module. Make sure you define this operation in the simplest possible way. Prove further, that with respect to this module operation, the submodules of ${\displaystyle R}$ are exactly the ideals of ${\displaystyle R}$.

## Homomorphisms

We shall now get to know the morphisms within the category of modules over a fixed ring ${\displaystyle R}$.

Definition 5.11 (homomorphisms):

Let ${\displaystyle M,N}$ be two modules over a ring ${\displaystyle R}$. A homomorphism from ${\displaystyle M}$ to ${\displaystyle N}$, also called an ${\displaystyle R}$-linear function from ${\displaystyle M}$ to ${\displaystyle N}$, is a function

${\displaystyle f:M\to N}$

such that

1. ${\displaystyle \forall m,p\in M:f(m+p)=f(m)+f(p)}$ and
2. ${\displaystyle \forall r\in R,m\in M:f(rm)=rf(m)}$.

The kernel and image of homomorphisms of modules are defined analogously to group homomorphisms.

Since we are cool, we will often simply write morphisms instead of homomorphisms where it's clear from the context in order to indicate that we have a clue about category theory.

We have the following useful lemma:

Lemma 5.12:

${\displaystyle f:M\to N}$ is ${\displaystyle R}$-linear iff

${\displaystyle \forall r\in R,m,p\in M:f(rm+p)=rf(m)+f(p)}$.

Proof:

Assume first ${\displaystyle R}$-linearity. Then we have

${\displaystyle f(rm+p)=f(rm)+f(p)=rf(m)+f(p)}$.

Assume now the other condition. Then we have for ${\displaystyle m,p\in M}$

${\displaystyle f(m+p)=f(1_{R}m+p)=1_{R}f(m)+f(p)=f(m)+f(p)}$

and

${\displaystyle f(rm)=f(rm+0)=rf(m)+f(0)=rf(m)}$

since ${\displaystyle f(0)=0}$ due to ${\displaystyle f(0)=f(0+0)=f(0)+f(0)}$; since ${\displaystyle M}$ is an abelian group, we may add the inverse of ${\displaystyle f(0)}$ on both sides.${\displaystyle \Box }$

Lemma 5.13:

If ${\displaystyle f:M\to N}$ is ${\displaystyle R}$-linear, then ${\displaystyle \forall m\in M:f(-m)=-f(m)}$.

Proof:

This follows from the respective theorem for group homomorphisms, since each morphism of modules is also a morphism of Abelian groups.${\displaystyle \Box }$

Definition 5.8 (isomorphisms):

An isomorphism ${\displaystyle f:M\to N}$ is a homomorphism which is bijective.

Lemma 5.14:

Let ${\displaystyle f:M\to N}$ be a morphism. The following are equivalent:

1. ${\displaystyle f}$ is an isomorphism
2. ${\displaystyle \ker f=\{0\}}$ and ${\displaystyle f}$ is surjective
3. ${\displaystyle f}$ has an inverse which is itself a morphism

Proof:

Lemma 5.15:

The kernel and image of morphisms are submodules.

Proof:

1. The kernel:

${\displaystyle f(rn-q)=rf(n)+f(-q)=rf(n)-f(q)=0}$

2. The image:

${\displaystyle rf(m)-f(p)=f(rm)+f(-p)=f(rm-p)}$${\displaystyle \Box }$

The following four theorems are in complete analogy to group theory.

Theorem 5.16 (factoring of morphisms):

Let ${\displaystyle M,K}$ be modules, let ${\displaystyle \varphi :M\to K}$ be a morphism and let ${\displaystyle N\leq M}$ such that ${\displaystyle N\subseteq \ker \varphi }$. Then there exists a unique morphism ${\displaystyle {\overline {\varphi }}:M/N\to K}$ such that ${\displaystyle {\overline {\varphi }}\circ \pi =\varphi }$, where ${\displaystyle \pi :M\to M/N,\pi (m)=m+N}$ is the canonical projection. In this situation, ${\displaystyle \ker {\overline {\varphi }}=\ker \varphi /N}$.

Proof:

We define ${\displaystyle {\overline {\varphi }}(m+N):=\varphi (m)}$. This is well-defined since ${\displaystyle N\subseteq \ker \varphi }$. Furthermore, this definition is already enforced by ${\displaystyle {\overline {\varphi }}\circ \pi =\varphi }$. Further, ${\displaystyle {\overline {\varphi }}(m+N)=0\Leftrightarrow m\in \ker \varphi }$.${\displaystyle \Box }$

Corollary 5.17 (first isomorphism theorem):

Let ${\displaystyle M,K}$ be ${\displaystyle R}$-modules and let ${\displaystyle f:M\to K}$ be a morphism. Then ${\displaystyle M/\ker f\cong \operatorname {im} f}$.

Proof:

We set ${\displaystyle N=\ker f}$ and obtain by theorem 5.16 a homomorphism ${\displaystyle {\overline {f}}:M/\ker f\to K}$ with kernel ${\displaystyle N/N=\{0\}}$. Hence ${\displaystyle {\overline {f}}}$ is injective, and since its image equals ${\displaystyle \operatorname {im} f}$, it defines an isomorphism ${\displaystyle M/\ker f\to \operatorname {im} f}$.${\displaystyle \Box }$

Corollary 5.18 (third isomorphism theorem):

Let ${\displaystyle M}$ be an ${\displaystyle R}$-module, let ${\displaystyle N\leq M}$ and let ${\displaystyle L\leq N}$. Then

${\displaystyle M/N\cong (M/L){\big /}(N/L)}$.

Proof:

Since ${\displaystyle L\leq N}$ and ${\displaystyle N\leq M}$ also ${\displaystyle L\leq M}$ by definition. We define the function

${\displaystyle \varphi :M/L\to M/N,m+L\mapsto m+N}$.

This is well-defined since

${\displaystyle m+L=p+L\Leftrightarrow m-p\in L\Rightarrow m-p\in N\Leftrightarrow m+N=p+N}$.

Furthermore,

${\displaystyle m+L\in \ker \varphi \Leftrightarrow m+N=0+N\Leftrightarrow m\in N}$

and hence ${\displaystyle \ker \varphi =N/L}$. Since ${\displaystyle \varphi }$ is surjective, our claim follows by corollary 5.17.${\displaystyle \Box }$

Theorem 5.19 (second isomorphism theorem):

Let ${\displaystyle L,N\leq M}$. Then

${\displaystyle L/(L\cap N)\cong (L+N)/N}$.

Proof:

Consider the homomorphism

${\displaystyle \varphi :L\to (L+N)/N,\varphi (l):=l+N}$.

It is surjective, since every element of ${\displaystyle (L+N)/N}$ has the form ${\displaystyle (l+n)+N=l+N}$ for suitable ${\displaystyle l\in L,n\in N}$. Further, ${\displaystyle \varphi (l)=0\Leftrightarrow l\in N}$, which is why the kernel of ${\displaystyle \varphi }$ is given by ${\displaystyle L\cap N}$. Hence, the theorem follows by the first isomorphism theorem.${\displaystyle \Box }$
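A quick numerical sanity check of the second isomorphism theorem (our own; it compares only the orders of the two factor groups, which Lagrange's theorem lets us compute as quotients of cardinalities):

```python
# Second isomorphism theorem in Z/12: |L / (L ∩ N)| should equal |(L+N) / N|.
M = 12
L = {0, 4, 8}  # multiples of 4
N = {0, 6}     # multiples of 6

LplusN = {(l + n) % M for l in L for n in N}
# orders of the factor groups, via Lagrange: |X/Y| = |X| / |Y|
assert len(L) // len(L & N) == len(LplusN) // len(N)
```

Here ${\displaystyle L\cap N=\{0\}}$ and ${\displaystyle L+N}$ is the set of multiples of ${\displaystyle 2}$, so both factor groups have order ${\displaystyle 3}$.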

And now for something completely different:

Theorem 5.20:

Let ${\displaystyle \varphi :M\to N}$ be a homomorphism of modules over ${\displaystyle R}$ and let ${\displaystyle L\leq N}$. Then ${\displaystyle \varphi ^{-1}(L)}$ is a submodule of ${\displaystyle M}$.

Proof:

Let ${\displaystyle a,b\in \varphi ^{-1}(L)}$. Then ${\displaystyle \varphi (a+b)=\varphi (a)+\varphi (b)\in L}$ and hence ${\displaystyle a+b\in \varphi ^{-1}(L)}$. Let further ${\displaystyle r\in R}$. Then ${\displaystyle \varphi (ra)=r\varphi (a)\in L}$.${\displaystyle \Box }$

Similarly:

Theorem 5.21:

Let ${\displaystyle \varphi :M\to N}$ be a homomorphism of modules over ${\displaystyle R}$ and let ${\displaystyle K\leq M}$. Then ${\displaystyle \varphi (K)}$ is a submodule of ${\displaystyle N}$.

Proof: Let ${\displaystyle a,b\in \varphi (K)}$. Then ${\displaystyle a=\varphi (i),b=\varphi (j)}$ and ${\displaystyle a+b=\varphi (i+j)\in \varphi (K)}$. Let further ${\displaystyle r\in R}$. Then ${\displaystyle ra=\varphi (ri)\in \varphi (K)}$.${\displaystyle \Box }$

### Exercises

• Exercise 5.3.1: Let ${\displaystyle R,S}$ be rings regarded as modules over themselves as in exercise 5.2.1. Prove that the ring homomorphisms ${\displaystyle \varphi :R\to S}$ are exactly the module homomorphisms ${\displaystyle R\to S}$; that is, every ring hom. is a module hom. and vice versa.

## The projection morphism

Definition 5.22:

Let ${\displaystyle M}$ be a module and ${\displaystyle N\leq M}$. By the mapping ${\displaystyle \pi _{N}:M\to M/N}$ we mean the canonical projection mapping ${\displaystyle m\in M}$ to ${\displaystyle m+N}$; that is,

${\displaystyle \pi _{N}:M\to M/N,\pi _{N}(m):=m+N}$.

The following two fundamental equations for ${\displaystyle \pi _{N}(\pi _{N}^{-1}(S))}$ and ${\displaystyle \pi _{N}^{-1}(\pi _{N}(K))}$, where ${\displaystyle S\subseteq M/N}$ and ${\displaystyle K\leq M}$, shall gain supreme importance in later chapters.

Theorem 5.23:

Let ${\displaystyle M}$ be a module and ${\displaystyle N\leq M}$. Then for every set ${\displaystyle S\subseteq M/N}$, ${\displaystyle \pi _{N}(\pi _{N}^{-1}(S))=S}$. Furthermore, for every submodule ${\displaystyle K\leq M}$, ${\displaystyle \pi _{N}^{-1}(\pi _{N}(K))=K+N}$.

Proof:

Let first ${\displaystyle m+N\in S}$. Then ${\displaystyle m\in \pi _{N}^{-1}(S)}$, since ${\displaystyle \pi _{N}(m)=m+N}$. Hence, ${\displaystyle m+N\in \pi _{N}(\pi _{N}^{-1}(S))}$. Let then ${\displaystyle m+N\in \pi _{N}(\pi _{N}^{-1}(S))}$. Then there exists ${\displaystyle m'\in \pi _{N}^{-1}(S)}$ such that ${\displaystyle \pi _{N}(m')=m+N}$, that is ${\displaystyle m'+N=m+N}$. Now ${\displaystyle m'\in \pi _{N}^{-1}(S)}$ means that ${\displaystyle \pi _{N}(m')=m'+N\in S}$. Hence, ${\displaystyle m+N=m'+N\in S}$.

Now let ${\displaystyle m\in K+N}$, that is, ${\displaystyle m=k+n}$ for suitable ${\displaystyle k\in K}$, ${\displaystyle n\in N}$. Then ${\displaystyle \pi _{N}(m)=k+n+N=k+N=\pi _{N}(k)\in \pi _{N}(K)}$, so that by definition ${\displaystyle m\in \pi _{N}^{-1}(\pi _{N}(K))}$. Let conversely ${\displaystyle m\in \pi _{N}^{-1}(\pi _{N}(K))}$. Then ${\displaystyle \pi _{N}(m)=m+N\in \pi _{N}(K)}$, that is, ${\displaystyle m+N=k+N}$ for some ${\displaystyle k\in K}$, that is, ${\displaystyle m=k+n}$ for a suitable ${\displaystyle n\in N}$, that is, ${\displaystyle m\in K+N}$.${\displaystyle \Box }$
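Both identities can be verified concretely. The following sketch (an illustration not taken from the text) uses the hypothetical ${\displaystyle \mathbb {Z} }$-module ${\displaystyle M=\mathbb {Z} /12}$ with ${\displaystyle N=\langle 6\rangle =\{0,6\}}$, representing cosets as frozensets:

```python
# Illustrative sketch (not from the text): check both identities of the
# theorem in M = Z/12 with N = <6> = {0, 6}; cosets m + N are frozensets.

M = set(range(12))
N = {0, 6}

def pi(m):
    # canonical projection m |-> m + N
    return frozenset((m + n) % 12 for n in N)

def pi_image(A):                              # pi_N(A) for A subset of M
    return {pi(a) for a in A}

def pi_preimage(S):                           # pi_N^{-1}(S) for S subset of M/N
    return {m for m in M if pi(m) in S}

S = {pi(1), pi(4)}                            # an arbitrary subset of M/N
assert pi_image(pi_preimage(S)) == S          # first identity

K = {0, 4, 8}                                 # submodule <4> of M
K_plus_N = {(k + n) % 12 for k in K for n in N}
assert pi_preimage(pi_image(K)) == K_plus_N   # second identity
print(sorted(K_plus_N))                       # the even residues mod 12
```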

The following lemma from elementary set theory is relevant to the projection morphism, and we will need it several times:

Lemma 5.24:

Let ${\displaystyle f:S\to T}$ be a function, where ${\displaystyle S,T}$ are arbitrary sets. Then ${\displaystyle f}$ induces a function ${\displaystyle 2^{S}\to 2^{T}}$ via ${\displaystyle A\mapsto f(A)}$, the image of ${\displaystyle A\subseteq S}$ under ${\displaystyle f}$. This function preserves inclusion. Further, the preimage function ${\displaystyle 2^{T}\to 2^{S},B\mapsto f^{-1}(B)}$ also preserves inclusion.

Proof:

If ${\displaystyle A'\subseteq A}$, let ${\displaystyle y'\in f(A')}$. Then ${\displaystyle y'=f(x')}$ for some ${\displaystyle x'\in A'\subseteq A}$, whence ${\displaystyle y'\in f(A)}$. Similarly for ${\displaystyle f^{-1}}$: if ${\displaystyle B'\subseteq B}$ and ${\displaystyle x\in f^{-1}(B')}$, then ${\displaystyle f(x)\in B'\subseteq B}$, whence ${\displaystyle x\in f^{-1}(B)}$.${\displaystyle \Box }$
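As a quick concrete check of the lemma (an illustration not taken from the text), consider the hypothetical function ${\displaystyle f(x)=x{\bmod {3}}}$ on ${\displaystyle S=\{0,\ldots ,8\}}$:

```python
# Illustrative sketch (not from the text): image and preimage preserve
# inclusion, checked for f(x) = x mod 3 on S = {0, ..., 8}.

S = set(range(9))

def f(x):
    return x % 3

def image(A):
    return {f(a) for a in A}

def preimage(B):
    return {x for x in S if f(x) in B}

A_small, A_big = {0, 4}, {0, 1, 4, 7}         # A_small is a subset of A_big
assert image(A_small) <= image(A_big)         # image preserves inclusion

B_small, B_big = {1}, {1, 2}                  # B_small is a subset of B_big
assert preimage(B_small) <= preimage(B_big)   # preimage preserves inclusion
```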

# Generators and chain conditions

## Generators

Definition 6.1 (generators of modules):

Let ${\displaystyle M}$ be a module over the ring ${\displaystyle R}$. A generating set of ${\displaystyle M}$ is a subset ${\displaystyle \{m_{j}\}_{j\in J}\subseteq M}$ such that

${\displaystyle \forall n\in M:\exists j_{1},\ldots ,j_{k}\in J,r_{1},\ldots ,r_{k}\in R:n=\sum _{l=1}^{k}r_{l}m_{j_{l}}}$.

Example 6.2:

For every module ${\displaystyle M}$, the whole module itself is a generating set.

Definition 6.3:

Let ${\displaystyle M}$ be a module. ${\displaystyle M}$ is called finitely generated iff there exists a finite generating set of ${\displaystyle M}$.

Example 6.4: Every ring ${\displaystyle R}$ is a finitely generated ${\displaystyle R}$-module over itself, and a generating set is given by ${\displaystyle \{1_{R}\}}$.
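Example 6.4 can be mirrored computationally. The following sketch (an illustration not taken from the text) works in the hypothetical ${\displaystyle \mathbb {Z} }$-module ${\displaystyle \mathbb {Z} /6}$, where ${\displaystyle \{1\}}$ generates, ${\displaystyle \{5\}}$ generates as well since ${\displaystyle \gcd(5,6)=1}$, but ${\displaystyle \{2\}}$ generates only a proper submodule:

```python
# Illustrative sketch (not from the text): generating sets in the
# Z-module Z/6, computed as all Z-linear combinations of the generators.
from itertools import product

M = set(range(6))

def generated(gens):
    # all sums r_1*g_1 + ... + r_k*g_k mod 6; scalars 0..5 suffice mod 6
    combos = set()
    for scalars in product(range(6), repeat=len(gens)):
        combos.add(sum(r * g for r, g in zip(scalars, gens)) % 6)
    return combos

assert generated([1]) == M        # {1} is a finite generating set of Z/6
assert generated([5]) == M        # so is {5}, since gcd(5, 6) = 1
print(sorted(generated([2])))     # only the proper submodule {0, 2, 4}
```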

Definition 6.5 (generated submodules):

Let ${\displaystyle M}$ be a module over the ring ${\displaystyle R}$ and let ${\displaystyle S\subseteq M}$. The submodule generated by ${\displaystyle S}$, written ${\displaystyle \langle S\rangle }$, is the smallest submodule of ${\displaystyle M}$ containing ${\displaystyle S}$; equivalently, it is the set of all finite ${\displaystyle R}$-linear combinations of elements of ${\displaystyle S}$.

## Noetherian and Artinian modules

Definition 6.6 (Noetherian modules):

Let ${\displaystyle M}$ be a module over the ring ${\displaystyle R}$. ${\displaystyle M}$ is called a Noetherian module iff for every ascending chain of submodules

${\displaystyle N_{1}\subseteq N_{2}\subseteq N_{3}\subseteq \cdots \subseteq N_{k}\subseteq \cdots }$

of ${\displaystyle M}$, there exists an ${\displaystyle l\in \mathbb {N} }$ such that

${\displaystyle \forall k\geq l:N_{k}=N_{l}}$.

We also say that ascending chains of submodules eventually become stationary.

Definition 6.7 (Artinian modules):

A module ${\displaystyle M}$ over a ring ${\displaystyle R}$ is called an Artinian module iff for every descending chain of submodules

${\displaystyle N_{1}\supseteq N_{2}\supseteq N_{3}\supseteq \cdots \supseteq N_{k}\supseteq \cdots }$

of ${\displaystyle M}$, there exists an ${\displaystyle l\in \mathbb {N} }$ such that

${\displaystyle \forall k\geq l:N_{k}=N_{l}}$.

We also say that descending chains of submodules eventually become stationary.

We see that these two definitions are similar, although they define different classes of modules.
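That the two conditions really are different can be seen in ${\displaystyle \mathbb {Z} }$ as a module over itself: it is Noetherian, but the descending chain ${\displaystyle 2\mathbb {Z} \supsetneq 4\mathbb {Z} \supsetneq 8\mathbb {Z} \supsetneq \cdots }$ never becomes stationary, so it is not Artinian. The following sketch (an illustration not taken from the text) checks the strict inclusions on a finite window of ${\displaystyle \mathbb {Z} }$:

```python
# Illustrative sketch (not from the text): the descending chain
# 2Z > 4Z > 8Z > ... of submodules of Z never becomes stationary.

def submodule_2k(k, bound=100):
    # a finite window of the submodule 2^k * Z, for comparison purposes
    return {n for n in range(-bound, bound + 1) if n % (2 ** k) == 0}

chain = [submodule_2k(k) for k in range(1, 6)]   # 2Z, 4Z, 8Z, 16Z, 32Z

# each term strictly contains the next: the chain does not stabilize
assert all(chain[k] > chain[k + 1] for k in range(len(chain) - 1))
```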

Using the axiom of choice, we have the following characterisation of Noetherian modules:

Theorem 6.8:

Let ${\displaystyle M}$ be a module over ${\displaystyle R}$. The following are equivalent:

1. ${\displaystyle M}$ is Noetherian.
2. All the submodules of ${\displaystyle M}$ are finitely generated.
3. Every nonempty set of submodules of ${\displaystyle M}$ has a maximal element.

Proof 1:

We prove 1. ${\displaystyle \Rightarrow }$ 2. ${\displaystyle \Rightarrow }$ 3. ${\displaystyle \Rightarrow }$ 1.

1. ${\displaystyle \Rightarrow }$ 2.: Assume there is a submodule ${\displaystyle N}$ of ${\displaystyle M}$ which is not finitely generated. Using the axiom of dependent choice, we choose a sequence ${\displaystyle (n_{k})_{k\in \mathbb {N} }}$ in ${\displaystyle N}$ such that

${\displaystyle \forall k\in \mathbb {N} :\langle n_{1},\ldots ,n_{k}\rangle \subsetneq \langle n_{1},\ldots ,n_{k+1}\rangle }$;

it is possible to find such a sequence: since ${\displaystyle N}$ is not finitely generated, at each step the set ${\displaystyle N\setminus \langle n_{1},\ldots ,n_{k}\rangle }$ is nonempty, and we may choose ${\displaystyle n_{k+1}}$ from it. Thus we have an ascending sequence of submodules

${\displaystyle \langle n_{1}\rangle \subsetneq \langle n_{1},n_{2}\rangle \subsetneq \cdots \subsetneq \langle n_{1},\ldots ,n_{k}\rangle \subsetneq \langle n_{1},\ldots ,n_{k+1}\rangle \subsetneq \cdots }$

which does not become stationary, contradicting 1.

2. ${\displaystyle \Rightarrow }$ 3.: Let ${\displaystyle {\mathcal {M}}}$ be a nonempty set of submodules of ${\displaystyle M}$. Due to Zorn's lemma, it suffices to prove that every chain within ${\displaystyle {\mathcal {M}}}$ has an upper bound (of course, our partial order is set inclusion, i.e. ${\displaystyle N_{1}\leq N_{2}:\Leftrightarrow N_{1}\subseteq N_{2}}$). Hence, let ${\displaystyle {\mathcal {N}}}$ be a chain within ${\displaystyle {\mathcal {M}}}$. We write

${\displaystyle {\mathcal {N}}=\left(N_{1}\subseteq N_{2}\subseteq \cdots \right)=\left(\langle n_{1},\ldots ,n_{k_{1}}\rangle \subseteq \langle n_{1},\ldots ,n_{k_{1}},n_{k_{1}+1},\ldots ,n_{k_{2}}\rangle \subseteq \cdots \right)}$.

Since every submodule is finitely generated, so is

${\displaystyle \langle n_{1},n_{2},\ldots ,n_{k},n_{k+1},\ldots \rangle =\langle m_{1},\ldots ,m_{l}\rangle }$.

For each ${\displaystyle j\in \{1,\ldots ,l\}}$ we write ${\displaystyle m_{j}=\sum _{u\in \mathbb {N} }r_{u}n_{u}}$, where only finitely many of the ${\displaystyle r_{u}}$ are nonzero. Hence, we have

${\displaystyle \langle n_{1},n_{2},\ldots ,n_{k},n_{k+1},\ldots \rangle =\langle n_{u_{1}},\ldots ,n_{u_{r}}\rangle }$

for suitably chosen ${\displaystyle u_{1},\ldots ,u_{r}}$. Now each ${\displaystyle n_{u_{i}}}$ is contained in some ${\displaystyle N_{j}}$. Since the ${\displaystyle N_{j}}$ form an ascending sequence with respect to inclusion, we may choose ${\displaystyle j}$ large enough that all the ${\displaystyle n_{u_{i}}}$ are contained within ${\displaystyle N_{j}}$. Hence, ${\displaystyle N_{j}}$ is the desired upper bound.

3. ${\displaystyle \Rightarrow }$ 1.: Let

${\displaystyle N_{1}\subseteq N_{2}\subseteq \cdots \subseteq N_{k}\subseteq N_{k+1}\subseteq \cdots }$

be an ascending chain of submodules of ${\displaystyle M}$. The set ${\displaystyle \{N_{j}|j\in \mathbb {N} \}}$ has a maximal element ${\displaystyle N_{l}}$ and thus this ascending chain becomes stationary at ${\displaystyle l}$.${\displaystyle \Box }$
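For a finite module, all three conditions of the theorem can be confirmed by exhaustive search. The following sketch (an illustration not taken from the text) does this for the hypothetical ${\displaystyle \mathbb {Z} }$-module ${\displaystyle \mathbb {Z} /12}$, whose submodules are the cyclic modules ${\displaystyle \langle d\rangle }$ for the divisors ${\displaystyle d}$ of 12:

```python
# Illustrative sketch (not from the text): in the finite Z-module Z/12,
# every submodule is cyclic (hence finitely generated), and every nonempty
# family of submodules has a maximal element with respect to inclusion.
from itertools import combinations

def gen(d):
    # cyclic submodule <d> of Z/12
    return frozenset((r * d) % 12 for r in range(12))

submodules = {gen(d) for d in (1, 2, 3, 4, 6, 12)}   # all submodules of Z/12

for r in range(1, len(submodules) + 1):
    for family in combinations(submodules, r):
        # some member is maximal: no member of the family strictly contains it
        assert any(all(not (A < B) for B in family) for A in family)
print(len(submodules))
```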

Proof 2:

We prove 1. ${\displaystyle \Rightarrow }$ 3. ${\displaystyle \Rightarrow }$ 2. ${\displaystyle \Rightarrow }$ 1.

1. ${\displaystyle \Rightarrow }$ 3.: Let ${\displaystyle {\mathcal {N}}}$ be a nonempty set of submodules of ${\displaystyle M}$ which does not have a maximal element. Then by the axiom of dependent choice, for each ${\displaystyle N\in {\mathcal {N}}}$ we may choose