# Commutative Algebra/Algebras and integral elements

## Algebras

Definition 21.1:

Let ${\displaystyle R}$ be a ring. An algebra ${\displaystyle A}$ over ${\displaystyle R}$ is an ${\displaystyle R}$-module together with a multiplication ${\displaystyle \cdot :A\times A\to A}$. This multiplication shall be ${\displaystyle R}$-bilinear.

An algebra thus carries both an addition and a multiplication, and many of the usual rules of algebra remain valid; hence the name algebra.

Of course, there are algebras whose multiplication is not commutative or associative. If the underlying ring is commutative, however, the module operation still satisfies the commutativity property

${\displaystyle r(sa)=(rs)a=(sr)a=s(ra)}$.

Definition 21.2:

Let ${\displaystyle A}$ be an algebra, and let ${\displaystyle Z\subseteq A}$ be a subset of ${\displaystyle A}$. ${\displaystyle Z}$ is called a subalgebra of ${\displaystyle A}$ iff it is closed with respect to the operations

• addition
• multiplication
• module operation

of ${\displaystyle A}$.

Note that this means that ${\displaystyle Z}$, together with the operations inherited from ${\displaystyle A}$, is itself an ${\displaystyle R}$-algebra; the necessary rules just carry over from ${\displaystyle A}$.

Example 21.3: Let ${\displaystyle R}$ be a ring, let ${\displaystyle S}$ be another ring, and let ${\displaystyle \varphi :R\to S}$ be a ring homomorphism. Then ${\displaystyle S}$ is an ${\displaystyle R}$-algebra, where the module operation is given by

${\displaystyle rs:=\varphi (r)s}$,

and multiplication and addition for this algebra are given by the multiplication and addition of ${\displaystyle S}$, the ring.

Proof:

The required rules for the module operation are verified as follows:

1. ${\displaystyle 1_{R}s=\varphi (1_{R})s=1_{S}s=s}$
2. ${\displaystyle r(s+t)=\varphi (r)(s+t)=\varphi (r)s+\varphi (r)t=rs+rt}$
3. ${\displaystyle (r+r')s=\varphi (r+r')s=(\varphi (r)+\varphi (r'))s=rs+r's}$
4. ${\displaystyle r(r's)=\varphi (r)r's=\varphi (r)\varphi (r')s=\varphi (rr')s=(rr')s}$

Since in ${\displaystyle S}$ we have all the rules for a ring, the only thing we need to check for the ${\displaystyle R}$-bilinearity of the multiplication is compatibility with the module operation.

Indeed,

${\displaystyle (rs)t=\varphi (r)st=r(st)}$

and analogously for the other argument.${\displaystyle \Box }$
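Example 21.3 can be sketched in miniature; as an illustrative assumption (not part of the text), take ${\displaystyle R=\mathbb {Z} }$, ${\displaystyle S=\mathbb {Z} /6\mathbb {Z} }$ and the quotient homomorphism:

```python
# Example 21.3 in miniature: S = Z/6Z as a Z-algebra via the quotient
# homomorphism phi(r) = r mod 6.  (The choice N = 6 is illustrative.)
N = 6

def phi(r):
    """Ring homomorphism Z -> Z/NZ."""
    return r % N

def module_op(r, s):
    """The module operation r.s := phi(r) * s from Example 21.3."""
    return (phi(r) * s) % N

# The module axioms from the proof, spot-checked on sample values:
assert module_op(1, 5) == 5                                   # 1_R acts as identity
assert module_op(3, (4 + 5) % N) == (module_op(3, 4) + module_op(3, 5)) % N
assert module_op(3, module_op(4, 5)) == module_op(3 * 4, 5)   # compatibility
```

Bilinearity of the ring multiplication over this action follows exactly as in the proof above.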

We shall note that if we are given an ${\displaystyle R}$-algebra ${\displaystyle A}$, then we can take a polynomial ${\displaystyle p\in R[x_{1},\ldots ,x_{n}]}$ and some elements ${\displaystyle a_{1},\ldots ,a_{n}}$ of ${\displaystyle A}$ and evaluate ${\displaystyle p(a_{1},\ldots ,a_{n})\in A}$ as follows:

1. Using the algebra multiplication, we form the monomials ${\displaystyle a_{1}^{k_{1}}a_{2}^{k_{2}}\cdots a_{n}^{k_{n}}}$.
2. Using the module operation, we multiply each monomial with the respective coefficient: ${\displaystyle r_{k_{1},\ldots ,k_{n}}a_{1}^{k_{1}}a_{2}^{k_{2}}\cdots a_{n}^{k_{n}}}$.
3. Using the algebra addition (=module addition), we add all these ${\displaystyle r_{k_{1},\ldots ,k_{n}}a_{1}^{k_{1}}a_{2}^{k_{2}}\cdots a_{n}^{k_{n}}}$ together.

The commutativity of multiplication (step 1) and addition (step 3) ensures that this procedure does not depend on the order in which the multiplications and additions are carried out.
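The three evaluation steps can be sketched in code; as an illustrative assumption we take ${\displaystyle A=\mathbb {C} }$ as an algebra over ${\displaystyle R=\mathbb {R} }$ and represent a polynomial by a dictionary mapping exponent tuples to coefficients:

```python
def evaluate(coeffs, elements):
    """Evaluate a polynomial at elements of an algebra.

    coeffs:   dict mapping exponent tuples (k_1, ..., k_n) to coefficients in R
    elements: list [a_1, ..., a_n] of algebra elements
    """
    total = 0
    for exponents, r in coeffs.items():
        monomial = 1
        for a, k in zip(elements, exponents):
            monomial *= a ** k        # step 1: form the monomial a_1^{k_1} ... a_n^{k_n}
        total += r * monomial         # step 2: module operation; step 3: add up
    return total

# p(x1, x2) = x1^2 + 3*x1*x2, evaluated in the R-algebra C at a1 = 1+1j, a2 = 2
p = {(2, 0): 1, (1, 1): 3}
value = evaluate(p, [1 + 1j, 2])      # (1+1j)^2 + 3*(1+1j)*2
```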

Definition 21.4:

Let ${\displaystyle A}$ be an ${\displaystyle R}$-algebra, and let ${\displaystyle a_{1},\ldots ,a_{n}}$ be any elements of ${\displaystyle A}$. We then define a new object, ${\displaystyle R[a_{1},\ldots ,a_{n}]}$, to be the set of all elements of ${\displaystyle A}$ that arise when applying the algebra operations of ${\displaystyle A}$ and the module operation (with arbitrary elements ${\displaystyle r\in R}$ of the underlying ring) to the elements ${\displaystyle a_{1},\ldots ,a_{n}}$ a finite number of times, in an arbitrary fashion; for example, the elements ${\displaystyle a_{1}\cdot a_{2}}$, ${\displaystyle a_{3}+ra_{1}\cdot a_{2}}$ and ${\displaystyle a_{1}\cdot (ra_{2})}$ are all in ${\displaystyle R[a_{1},\ldots ,a_{n}]}$. By multiplying everything out (using the rules we are given for an algebra), we find that this is equal to

${\displaystyle R[a_{1},\ldots ,a_{n}]=\{p(a_{1},\ldots ,a_{n})\mid p\in R[x_{1},\ldots ,x_{n}]{\text{ with vanishing constant term}}\}}$.

(The constant polynomials must be excluded: a constant ${\displaystyle r}$ would evaluate to a multiple of a unit element, which ${\displaystyle A}$ need not possess and which in any case need not arise from operations on ${\displaystyle a_{1},\ldots ,a_{n}}$.)

We call ${\displaystyle R[a_{1},\ldots ,a_{n}]}$ the algebra generated by the elements ${\displaystyle a_{1},\ldots ,a_{n}}$.

Theorem 21.5:

Let an ${\displaystyle R}$-algebra ${\displaystyle A}$ be given, and let ${\displaystyle a_{1},\ldots ,a_{n}\in A}$. Then

• ${\displaystyle R[a_{1},\ldots ,a_{n}]}$ is a subalgebra of ${\displaystyle A}$.

Furthermore,

• ${\displaystyle R[a_{1},\ldots ,a_{n}]=\bigcap _{\{a_{1},\ldots ,a_{n}\}\subseteq Z\subseteq A \atop Z{\text{ subalgebra}}}Z}$

and

• ${\displaystyle R[a_{1},\ldots ,a_{n}]}$ is (with respect to set inclusion) the smallest subalgebra of ${\displaystyle A}$ containing each of the elements ${\displaystyle a_{1},\ldots ,a_{n}}$.

Proof:

The first claim follows from the very definition of subalgebras of ${\displaystyle A}$: The closedness under the three operations. For, if we are given any elements of ${\displaystyle R[a_{1},\ldots ,a_{n}]}$, applying any operation to them is just one further step of manipulations with the elements ${\displaystyle a_{1},\ldots ,a_{n}}$.

We go on to prove the equation

${\displaystyle R[a_{1},\ldots ,a_{n}]=\bigcap _{\{a_{1},\ldots ,a_{n}\}\subseteq Z\subseteq A \atop Z{\text{ subalgebra}}}Z}$.

For "${\displaystyle \subseteq }$" we note that since ${\displaystyle a_{1},\ldots ,a_{n}}$ are contained within every ${\displaystyle Z}$ occuring on the right hand side. Thus, by the closedness of these ${\displaystyle Z}$, we can infer that all finite manipulations by the three algebra operations (addition, multiplication, module operation) are included in each ${\displaystyle Z}$. From this follows "${\displaystyle \subseteq }$".

For "${\displaystyle \supseteq }$" we note that ${\displaystyle R[a_{1},\ldots ,a_{n}]}$ is also a subalgebra of ${\displaystyle A}$ containing ${\displaystyle \{a_{1},\ldots ,a_{n}\}}$, and intersection with more things will only make the set at most smaller.

Now if any other subalgebra of ${\displaystyle A}$ is given that contains ${\displaystyle a_{1},\ldots ,a_{n}}$, the intersection on the right hand side of our equation must be contained within it, since that subalgebra would be one of the ${\displaystyle Z}$.${\displaystyle \Box }$
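The description of ${\displaystyle R[a_{1},\ldots ,a_{n}]}$ as a closure under the three operations can be computed directly in a finite algebra. A minimal sketch, assuming (purely for illustration) the ${\displaystyle \mathbb {Z} }$-algebra ${\displaystyle \mathbb {Z} /6\mathbb {Z} }$:

```python
# Closure computation for the generated subalgebra of Z/6Z as a Z-algebra
# (Definition 21.4 / Theorem 21.5).  N = 6 is an illustrative choice.
N = 6

def generated_subalgebra(gens):
    """Smallest subset of Z/NZ containing gens and closed under the
    three operations: addition, multiplication, and the module operation."""
    Z = set(gens)
    while True:
        new = set()
        for a in Z:
            for b in Z:
                new.add((a + b) % N)   # algebra addition
                new.add((a * b) % N)   # algebra multiplication
            for r in range(N):         # module operation r.a (scalars mod N suffice)
                new.add((r * a) % N)
        if new <= Z:
            return Z
        Z |= new
```

For instance, the subalgebra generated by ${\displaystyle 2}$ consists of the multiples of ${\displaystyle 2}$, while ${\displaystyle 1}$ generates all of ${\displaystyle \mathbb {Z} /6\mathbb {Z} }$.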


## Symmetric polynomials

Definition 21.6:

Let ${\displaystyle R}$ be a ring. A polynomial ${\displaystyle f\in R[x_{1},\ldots ,x_{n}]}$ is called symmetric if and only if for all ${\displaystyle \sigma \in S_{n}}$ (${\displaystyle S_{n}}$ being the symmetric group), we have

${\displaystyle f(x_{1},\ldots ,x_{n})=f(x_{\sigma (1)},\ldots ,x_{\sigma (n)})}$.

That means, we can permute the variables arbitrarily and still get the same result.

This section shall be devoted to proving a very fundamental fact about these polynomials. That is, there are some so-called elementary symmetric polynomials, and every symmetric polynomial can be written as a polynomial in those elementary symmetric polynomials.

Definition 21.7:

Fix an ${\displaystyle n\in \mathbb {N} }$. The elementary symmetric polynomials in ${\displaystyle n}$ variables are the ${\displaystyle n}$ polynomials

{\displaystyle {\begin{aligned}s_{n,1}(x_{1},\ldots ,x_{n})&:=x_{1}+x_{2}+\cdots +x_{n}\\s_{n,2}(x_{1},\ldots ,x_{n})&:=\sum _{1\leq j_{1}<j_{2}\leq n}x_{j_{1}}x_{j_{2}}\\&\;\;\vdots \\s_{n,k}(x_{1},\ldots ,x_{n})&:=\sum _{1\leq j_{1}<\cdots <j_{k}\leq n}x_{j_{1}}x_{j_{2}}\cdots x_{j_{k}}\\&\;\;\vdots \\s_{n,n}(x_{1},\ldots ,x_{n})&:=x_{1}x_{2}\cdots x_{n}.\end{aligned}}}

Without further ado, we shall proceed to the theorem that we promised:

Theorem 21.8:

Let any symmetric polynomial ${\displaystyle f\in R[x_{1},\ldots ,x_{n}]}$ be given. Then we find another polynomial ${\displaystyle p\in R[x_{1},\ldots ,x_{n}]}$ such that

${\displaystyle f(x_{1},\ldots ,x_{n})=p(s_{n,1}(x_{1},\ldots ,x_{n}),s_{n,2}(x_{1},\ldots ,x_{n}),\ldots ,s_{n,n}(x_{1},\ldots ,x_{n}))}$.

Hence, every symmetric polynomial is a polynomial in the elementary symmetric polynomials.

Proof 1:

We start out by ordering all monomials (remember, those are polynomials of the form ${\displaystyle x_{1}^{k_{1}}x_{2}^{k_{2}}\cdots x_{n-1}^{k_{n-1}}x_{n}^{k_{n}}}$), using the following order:

${\displaystyle x_{1}^{k_{1}}\cdots x_{n}^{k_{n}}<x_{1}^{l_{1}}\cdots x_{n}^{l_{n}}\quad :\Leftrightarrow \quad \sum _{j=1}^{n}k_{j}<\sum _{j=1}^{n}l_{j}\quad {\text{or}}\quad \left(\sum _{j=1}^{n}k_{j}=\sum _{j=1}^{n}l_{j}{\text{ and }}k_{i}<l_{i}{\text{ for the smallest }}i{\text{ with }}k_{i}\neq l_{i}\right)}$;

that is, monomials are compared first by total degree and then lexicographically (the graded lexicographic order).

With this order, the largest monomial of ${\displaystyle s_{n,m}}$ is given by ${\displaystyle x_{1}\cdots x_{m}}$; this is because for all monomials of ${\displaystyle s_{n,m}}$, the sum of the exponents equals ${\displaystyle m}$, and the lexicographic condition of the order is optimised by monomials which have their first zero exponent as late as possible.

Furthermore, for any given ${\displaystyle r_{1},\ldots ,r_{n}\in \mathbb {N} _{0}}$, the largest monomial of

${\displaystyle s_{n,1}^{r_{1}}\cdots s_{n,n}^{r_{n}}}$

is given by ${\displaystyle x_{1}^{r_{1}+\cdots +r_{n}}x_{2}^{r_{2}+\cdots +r_{n}}\cdots x_{n-1}^{r_{n-1}+r_{n}}x_{n}^{r_{n}}}$; this is because the sum of the exponents always equals ${\displaystyle r_{1}+2r_{2}+\cdots +(n-1)r_{n-1}+nr_{n}}$, further the above monomial does occur (multiply all the maximal monomials from each elementary symmetric factor together) and if one of the factors of a given monomial of ${\displaystyle s_{n,1}^{r_{1}}\cdots s_{n,n}^{r_{n}}}$ coming from an elementary symmetric polynomial is not the largest monomial of that elementary symmetric polynomial, we may replace it by a larger monomial and obtain a strictly larger monomial of the product ${\displaystyle s_{n,1}^{r_{1}}\cdots s_{n,n}^{r_{n}}}$; this is because a part of the sum ${\displaystyle r_{1}+2r_{2}+\cdots +(n-1)r_{n-1}+nr_{n}}$ is moved to the front.

Now, let a symmetric polynomial ${\displaystyle f\in R[x_{1},\ldots ,x_{n}]}$ be given. We claim that if ${\displaystyle x_{1}^{k_{1}}x_{2}^{k_{2}}\cdots x_{n-1}^{k_{n-1}}x_{n}^{k_{n}}}$ is the largest monomial of ${\displaystyle f}$, then we have ${\displaystyle k_{1}\geq k_{2}\geq \cdots \geq k_{n-1}\geq k_{n}}$.

For assume otherwise, say ${\displaystyle k_{j}<k_{j+1}}$ for some ${\displaystyle j}$. Then since ${\displaystyle f}$ is symmetric, we may exchange the exponents of the ${\displaystyle j}$-th and ${\displaystyle (j+1)}$-th variable and still obtain a monomial of ${\displaystyle f}$, and the resulting monomial will be strictly larger.

Thus, if we define for ${\displaystyle j=1,\ldots ,n-1}$

${\displaystyle d_{j}:=k_{j}-k_{j+1}}$

and furthermore ${\displaystyle d_{n}:=k_{n}}$, we obtain numbers that are non-negative. Hence, we may form the product

${\displaystyle h(x):=s_{n,1}^{d_{1}}\cdots s_{n,n}^{d_{n}}}$,

and if ${\displaystyle c}$ is the coefficient of the largest monomial of ${\displaystyle f}$, then the largest monomial of

${\displaystyle f(x)-ch(x)}$

is strictly smaller than that of ${\displaystyle f}$; this is because the largest monomial of ${\displaystyle h}$ is, by our above computation and calculating some telescopic sums, equal to the largest monomial of ${\displaystyle f}$, and the two thus cancel out.

Since the elementary symmetric polynomials are symmetric, and sums, linear combinations and products of symmetric polynomials are symmetric, we may repeat this procedure until we are left with nothing; the procedure terminates because the largest monomial strictly decreases at each step and only finitely many monomials of bounded degree lie below a given one. The terms that we subtracted from ${\displaystyle f}$, collected together, then form the polynomial in the elementary symmetric polynomials we have been looking for.${\displaystyle \Box }$
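The procedure of Proof 1 is effective. A minimal sketch, assuming exact integer coefficients and representing a polynomial as a dictionary from exponent tuples to coefficients:

```python
from itertools import combinations

def poly_mul(p, q):
    """Multiply polynomials given as {exponent tuple: coefficient}."""
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
    return {e: c for e, c in out.items() if c != 0}

def poly_sub(p, q):
    """p - q."""
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) - c
    return {e: c for e, c in out.items() if c != 0}

def elem_sym(n, m):
    """The elementary symmetric polynomial s_{n,m}."""
    return {tuple(1 if i in idx else 0 for i in range(n)): 1
            for idx in combinations(range(n), m)}

def grlex(e):
    """Sort key for the graded lexicographic order used in the proof."""
    return (sum(e), e)

def symmetric_to_elementary(f, n):
    """Express a symmetric polynomial f in n variables as a polynomial in
    s_{n,1}, ..., s_{n,n}; returns {(d_1,...,d_n): c}, meaning the sum of
    c * s_{n,1}^{d_1} * ... * s_{n,n}^{d_n}."""
    f = {e: c for e, c in f.items() if c != 0}
    result = {}
    while f:
        k = max(f, key=grlex)                      # leading monomial of f
        c = f[k]
        # d_j = k_j - k_{j+1} (and d_n = k_n), exactly as in the proof
        d = tuple(k[j] - k[j + 1] for j in range(n - 1)) + (k[n - 1],)
        h = {(0,) * n: 1}
        for m in range(1, n + 1):                  # h = s_{n,1}^{d_1} ... s_{n,n}^{d_n}
            for _ in range(d[m - 1]):
                h = poly_mul(h, elem_sym(n, m))
        f = poly_sub(f, {e: c * ce for e, ce in h.items()})
        result[d] = result.get(d, 0) + c
    return result
```

For example, ${\displaystyle x^{2}+y^{2}=s_{2,1}^{2}-2s_{2,2}}$ is recovered as `{(2, 0): 1, (0, 1): -2}`.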

Proof 2:

Let ${\displaystyle f\in R[x_{1},\ldots ,x_{n}]}$ be an arbitrary symmetric polynomial, and let ${\displaystyle d}$ be the degree of ${\displaystyle f}$ and ${\displaystyle n}$ be the number of variables of ${\displaystyle f}$.

In order to prove the theorem, we use induction on the sum ${\displaystyle n+d}$ of the degree and number of variables of ${\displaystyle f}$.

If ${\displaystyle n+d=1}$, we must have ${\displaystyle n=1}$ (since ${\displaystyle d=1}$ would imply the absurd ${\displaystyle n=0}$). But any polynomial of one variable is already a polynomial of the symmetric polynomial ${\displaystyle s_{1,1}(x)=x}$.

Now let ${\displaystyle n+d=k>1}$, and assume the theorem holds whenever the sum of the number of variables and the degree is smaller than ${\displaystyle k}$. We write

${\displaystyle f(x_{1},\ldots ,x_{n})=g(x_{1},\ldots ,x_{n})+x_{1}\cdots x_{n}h(x_{1},\ldots ,x_{n})}$,

where every monomial occurring within ${\displaystyle g}$ lacks at least one variable, that is, is not divisible by ${\displaystyle x_{1}\cdots x_{n}}$.

The polynomial ${\displaystyle g}$ is still symmetric: any permutation of a monomial that lacks at least one variable again lacks at least one variable, and hence occurs in ${\displaystyle g}$ with the same coefficient, since no part of it could have been sorted into the "${\displaystyle x_{1}\cdots x_{n}h(x_{1},\ldots ,x_{n})}$" summand.

The polynomial ${\displaystyle h}$ has the same number of variables, but the degree of ${\displaystyle h}$ is smaller than the degree of ${\displaystyle f}$. Furthermore, ${\displaystyle h}$ is symmetric because of

${\displaystyle h(x_{1},\ldots ,x_{n})={\frac {f(x_{1},\ldots ,x_{n})-g(x_{1},\ldots ,x_{n})}{x_{1}\cdots x_{n}}}}$.

Hence, by induction hypothesis, ${\displaystyle h}$ can be written as a polynomial in the symmetric polynomials:

${\displaystyle h(x_{1},\ldots ,x_{n})=p_{1}(s_{n,1}(x_{1},\ldots ,x_{n}),\ldots ,s_{n,n}(x_{1},\ldots ,x_{n}))}$

for a suitable ${\displaystyle p_{1}\in R[x_{1},\ldots ,x_{n}]}$.

If ${\displaystyle n=1}$, then ${\displaystyle f}$ is a polynomial of the elementary symmetric polynomial ${\displaystyle s_{1,1}(x)}$ anyway. Hence, it is sufficient to only consider the case ${\displaystyle n\geq 2}$. In that case, we may define the polynomial

${\displaystyle q(x_{1},\ldots ,x_{n-1}):=g(x_{1},\ldots ,x_{n-1},0)}$.

Now ${\displaystyle q}$ has one less variable than ${\displaystyle f}$ and at most the same degree, which is why by induction hypothesis, we find a representation

${\displaystyle q(x_{1},\ldots ,x_{n-1})=p_{2}(s_{n-1,1}(x_{1},\ldots ,x_{n-1}),\ldots ,s_{n-1,n-1}(x_{1},\ldots ,x_{n-1}))}$

for a suitable ${\displaystyle p_{2}\in R[x_{1},\ldots ,x_{n-1}]}$.

We observe that for all ${\displaystyle j\in \{1,\ldots ,n-1\}}$, we have ${\displaystyle s_{n-1,j}(x_{1},\ldots ,x_{n-1})=s_{n,j}(x_{1},\ldots ,x_{n-1},0)}$; this is because setting ${\displaystyle x_{n}=0}$ kills exactly the monomials involving ${\displaystyle x_{n}}$. Hence,

${\displaystyle g(x_{1},\ldots ,x_{n-1},0)=p_{2}(s_{n,1}(x_{1},\ldots ,x_{n-1},0),\ldots ,s_{n,n-1}(x_{1},\ldots ,x_{n-1},0))}$.

We claim that even

${\displaystyle g(x_{1},\ldots ,x_{n-1},x_{n})=p_{2}(s_{n,1}(x_{1},\ldots ,x_{n-1},x_{n}),\ldots ,s_{n,n-1}(x_{1},\ldots ,x_{n-1},x_{n}))~~~~~~~(*)}$.

Indeed, by the symmetry of ${\displaystyle g}$ and ${\displaystyle s_{n,1},\ldots ,s_{n,n-1}}$ and a renaming of variables, the above equation holds whenever any one of the variables is set to zero. But each monomial of ${\displaystyle g}$ lacks at least one variable. Hence, by successively equating coefficients in ${\displaystyle (*)}$ with one of the variables set to zero, we obtain that the coefficients on the left and right of ${\displaystyle (*)}$ agree, and thus the polynomials are equal.${\displaystyle \Box }$

## Integral dependence

Definition 21.9:

If ${\displaystyle R}$ is any ring and ${\displaystyle S\subseteq R}$ a subring, ${\displaystyle r\in R}$ is called integral over ${\displaystyle S}$ iff

${\displaystyle r^{n}+a_{n-1}r^{n-1}+\cdots +a_{1}r+a_{0}=0}$

for suitable ${\displaystyle a_{n-1},\ldots ,a_{0}\in S}$.

A polynomial of the form

${\displaystyle x^{n}+a_{n-1}x^{n-1}+\cdots +a_{1}x+a_{0}}$ (leading coefficient equals ${\displaystyle 1}$)

is called a monic polynomial. Thus, ${\displaystyle r}$ being integral over ${\displaystyle S}$ means that ${\displaystyle r}$ is the root of a monic polynomial with coefficients in ${\displaystyle S}$.
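For instance, ${\displaystyle {\sqrt {2}}\in \mathbb {R} }$ is integral over the subring ${\displaystyle \mathbb {Z} }$, since

${\displaystyle ({\sqrt {2}})^{2}-2=0}$

is a monic relation with coefficients in ${\displaystyle \mathbb {Z} }$. By contrast, ${\displaystyle {\tfrac {1}{2}}}$ is not integral over ${\displaystyle \mathbb {Z} }$: it is a root of ${\displaystyle 2x-1}$, but of no monic polynomial with integer coefficients.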

Whenever we have a subring ${\displaystyle S\subseteq R}$ of a ring ${\displaystyle R}$, we consider the module structure of ${\displaystyle R}$ as an ${\displaystyle S}$-module, where the module operation and summation are given by the ring operations of ${\displaystyle R}$.

Theorem 21.10 (characterisation of integral dependence):

Let ${\displaystyle R}$ be a ring, ${\displaystyle S\subseteq R}$ a subring and ${\displaystyle r\in R}$. The following are equivalent:

1. ${\displaystyle r}$ is integral over ${\displaystyle S}$
2. ${\displaystyle S[r]}$ is a finitely generated ${\displaystyle S}$-module.
3. ${\displaystyle S[r]}$ is contained in a subring ${\displaystyle T\subseteq R}$ that is finitely generated as an ${\displaystyle S}$-module.
4. There exists a faithful, nonzero ${\displaystyle S[r]}$-module which is finitely generated as an ${\displaystyle S}$-module.

Proof:

1. ${\displaystyle \Rightarrow }$ 2.: Let ${\displaystyle r}$ be integral over ${\displaystyle S}$, that is, ${\displaystyle r^{n}=-(a_{n-1}r^{n-1}+\cdots +a_{1}r+a_{0})}$ for suitable ${\displaystyle a_{n-1},\ldots ,a_{0}\in S}$. Let ${\displaystyle b_{k}r^{k}+b_{k-1}r^{k-1}+\cdots +b_{1}r+b_{0}}$ be an arbitrary element of ${\displaystyle S[r]}$. Whenever an exponent ${\displaystyle j}$ is larger than or equal to ${\displaystyle n}$, we can express ${\displaystyle r^{j}}$ in terms of lower powers of ${\displaystyle r}$ using the integral relation. Repeating this process shows that ${\displaystyle 1,r,r^{2},\ldots ,r^{n-1}}$ generate ${\displaystyle S[r]}$ over ${\displaystyle S}$.

2. ${\displaystyle \Rightarrow }$ 3.: ${\displaystyle T=S[r]}$.

3. ${\displaystyle \Rightarrow }$ 4.: Set ${\displaystyle M=T}$; ${\displaystyle T}$ is faithful because if ${\displaystyle u\in S[r]}$ annihilates ${\displaystyle T}$, then in particular ${\displaystyle u=u\cdot 1=0}$.

4. ${\displaystyle \Rightarrow }$ 1.: Let ${\displaystyle M}$ be such a module. We define the morphism of modules

${\displaystyle \phi :M\to M,m\mapsto rm}$.

We may restrict the module operation of ${\displaystyle M}$ to ${\displaystyle S}$ to obtain an ${\displaystyle S}$-module. ${\displaystyle \phi }$ is also a morphism of ${\displaystyle S}$-modules. Further, set ${\displaystyle I=S}$. Then ${\displaystyle \phi (M)\subseteq M=IM}$ (${\displaystyle 1\in S}$). The Cayley–Hamilton theorem gives an equation

${\displaystyle r^{n}+a_{n-1}r^{n-1}+\cdots +a_{1}r+a_{0}=0}$, ${\displaystyle a_{n-1},\ldots ,a_{0}\in S}$,

where ${\displaystyle r}$ is to be read as the multiplication operator by ${\displaystyle r}$ and ${\displaystyle 0}$ as the zero operator, and by the faithfulness of ${\displaystyle M}$, ${\displaystyle r^{n}+a_{n-1}r^{n-1}+\cdots +a_{1}r+a_{0}=0}$ in the usual sense.${\displaystyle \Box }$
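The power reduction used in 1. ${\displaystyle \Rightarrow }$ 2. can be sketched numerically; this is a sketch assuming the monic relation is handed over as the list of its lower coefficients:

```python
def power_in_basis(k, tail):
    """Coordinates of r^k in the S-module basis 1, r, ..., r^{n-1}, given the
    monic relation r^n + a_{n-1} r^{n-1} + ... + a_1 r + a_0 = 0.
    tail = [a_0, ..., a_{n-1}]."""
    n = len(tail)
    vec = [0] * n
    vec[0] = 1                            # start with r^0 = 1
    for _ in range(k):
        top = vec[n - 1]                  # coefficient that would land on r^n
        vec = [0] + vec[:-1]              # multiply by r (shift exponents up)
        vec = [v - top * a for v, a in zip(vec, tail)]   # fold r^n back down
    return vec

# r = sqrt(2): relation r^2 - 2 = 0, i.e. tail = [-2, 0]
coords = power_in_basis(5, [-2, 0])       # (sqrt 2)^5 = 4 * sqrt 2
```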

Theorem 21.11:

Let ${\displaystyle \mathbb {F} }$ be a field and ${\displaystyle S\subseteq \mathbb {F} }$ a subring of ${\displaystyle \mathbb {F} }$. If ${\displaystyle \mathbb {F} }$ is integral over ${\displaystyle S}$, then ${\displaystyle S}$ is a field.

Proof:

Let ${\displaystyle s\in S}$, ${\displaystyle s\neq 0}$. Since ${\displaystyle \mathbb {F} }$ is a field, we find an inverse ${\displaystyle s^{-1}\in \mathbb {F} }$; we don't know yet whether ${\displaystyle s^{-1}}$ is contained within ${\displaystyle S}$. Since ${\displaystyle \mathbb {F} }$ is integral over ${\displaystyle S}$, ${\displaystyle s^{-1}}$ satisfies an equation of the form

${\displaystyle (s^{-1})^{n}+a_{n-1}(s^{-1})^{n-1}+\cdots +a_{1}s^{-1}+a_{0}=0}$

for suitable ${\displaystyle a_{n-1},\ldots ,a_{1},a_{0}\in S}$. Multiplying this equation by ${\displaystyle s^{n-1}}$ yields

${\displaystyle s^{-1}=-(a_{n-1}+a_{n-2}s+\cdots +a_{1}s^{n-2}+a_{0}s^{n-1})\in S}$.${\displaystyle \Box }$

Theorem 21.12:

Let ${\displaystyle S}$ be a subring of ${\displaystyle R}$. The set of all elements of ${\displaystyle R}$ which are integral over ${\displaystyle S}$ constitutes a subring of ${\displaystyle R}$.

Proof 1 (from the Atiyah–Macdonald book):

If ${\displaystyle x,y\in R}$ are integral over ${\displaystyle S}$, ${\displaystyle y}$ is integral over ${\displaystyle S[x]}$. By theorem 21.10, ${\displaystyle S[x]}$ is finitely generated as ${\displaystyle S}$-module and ${\displaystyle S[x][y]=S[x,y]}$ is finitely generated as ${\displaystyle S[x]}$-module. Hence, ${\displaystyle S[x,y]}$ is finitely generated as ${\displaystyle S}$-module. Further, ${\displaystyle S[x+y]\subseteq S[x,y]}$ and ${\displaystyle S[x\cdot y]\subseteq S[x,y]}$. Hence, by theorem 21.10, ${\displaystyle x+y}$ and ${\displaystyle x\cdot y}$ are integral over ${\displaystyle S}$.${\displaystyle \Box }$

Proof 2 (Dedekind):

If ${\displaystyle x,y}$ are integral over ${\displaystyle S}$, ${\displaystyle S[x]}$ and ${\displaystyle S[y]}$ are finitely generated as ${\displaystyle S}$-modules. Hence, so is

${\displaystyle S[x]\cdot S[y]:=\left\{\sum _{j=1}^{n}a_{j}b_{j}{\big |}n\in \mathbb {N} ,a_{j}\in S[x],b_{j}\in S[y]\right\}}$.

Furthermore, ${\displaystyle S[xy]\subseteq S[x]\cdot S[y]}$ and ${\displaystyle S[x+y]\subseteq S[x]\cdot S[y]}$. Hence, by theorem 21.10, ${\displaystyle x\cdot y,x+y}$ are integral over ${\displaystyle S}$.${\displaystyle \Box }$
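The element ${\displaystyle t={\sqrt {2}}+{\sqrt {3}}}$ illustrates the theorem: both summands are integral over ${\displaystyle \mathbb {Z} }$, and from ${\displaystyle t^{2}=5+2{\sqrt {6}}}$ we get ${\displaystyle (t^{2}-5)^{2}=24}$, i.e.

${\displaystyle t^{4}-10t^{2}+1=0}$,

a monic relation with integer coefficients, so ${\displaystyle t}$ is indeed integral over ${\displaystyle \mathbb {Z} }$.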

Definition 21.13:

Let ${\displaystyle S}$ be a subring of the ring ${\displaystyle R}$. The integral closure of ${\displaystyle S}$ in ${\displaystyle R}$ is the ring consisting of all elements of ${\displaystyle R}$ which are integral over ${\displaystyle S}$.

Definition 21.14:

Let ${\displaystyle S}$ be a subring of the ring ${\displaystyle R}$. If all elements of ${\displaystyle R}$ are integral over ${\displaystyle S}$, ${\displaystyle R}$ is called an integral ring extension of ${\displaystyle S}$.