# Biological Physics/Stirling's Approximation

Calculating entropy directly requires evaluating factorials. Factorials of relatively small numbers pose no problem (at least with a computer). However, for systems with a very large number of particles and energy packets (on the order of a mole, or ${\displaystyle 10^{23}}$), a direct calculation of the factorials is not feasible. Instead, an approximation for the entropy must be developed. Here we look more closely at what is known as Stirling's Approximation.

Recall that the multiplicity Ω for ideal solids is ${\displaystyle \Omega ={\frac {(N+n-1)!}{n!(N-1)!}}}$
and entropy is
${\displaystyle S=k_{B}{\rm {ln}}(\Omega )}$.
${\displaystyle S=k_{B}{\rm {ln}}({\frac {(N+n-1)!}{n!(N-1)!}})}$.
By the rules of logarithms,
${\displaystyle {\frac {S}{k_{B}}}={\rm {ln}}((N+n-1)!)-{\rm {ln}}(n!)-{\rm {ln}}((N-1)!)}$.
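For multiplicities small enough to compute exactly, the entropy can be evaluated directly from this formula. A minimal sketch in Python (the values N = 50 oscillators and n = 100 energy quanta are illustrative choices, not from the text):

```python
import math

# Multiplicity of an ideal (Einstein) solid: Omega = (N + n - 1)! / (n! (N - 1)!)
# N = number of oscillators, n = number of energy quanta (illustrative values).
N, n = 50, 100
omega = math.comb(N + n - 1, n)  # the binomial coefficient equals the multiplicity
S_over_kB = math.log(omega)      # entropy in units of k_B
print(omega, S_over_kB)
```

Even at this modest size `omega` is an integer dozens of digits long, which hints at why direct factorials become hopeless at mole-scale numbers.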

All terms are essentially logarithms of factorials, so let's study the general case.

${\displaystyle {\rm {ln}}(x!)={\rm {ln}}((x)(x-1)(x-2)(x-3)...(2)(1))}$,
which implies by the rules of logarithms ${\displaystyle {\rm {ln}}(x!)={\rm {ln}}(x)+{\rm {ln}}(x-1)+{\rm {ln}}(x-2)+...+{\rm {ln}}(2)+{\rm {ln}}(1)}$, or
${\displaystyle {\rm {ln}}(x!)=\sum \limits _{i=1}^{x}{\rm {ln}}(i)}$.
For very large values (think on the order of magnitude of moles), this can be approximated by
${\displaystyle {\rm {ln}}(x!)\approx \int _{1}^{x}{\rm {ln}}(i)di}$, which has a solution ${\displaystyle \int _{1}^{x}{\rm {ln}}(i)di=i{\rm {ln}}(i)-i{\Big |}_{1}^{x}}$.
Evaluating the antiderivative at the limits (and using ${\displaystyle {\rm {ln}}(1)=0}$) gives ${\displaystyle {\rm {ln}}(x!)\approx x{\rm {ln}}(x)-x+1}$. For very large numbers, the 1 becomes negligible, so ${\displaystyle {\rm {ln}}(x!)\approx x{\rm {ln}}(x)-x}$. This gives us the following approximations:
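The quality of this approximation can be checked numerically: Python's `math.lgamma(x + 1)` returns the exact ${\displaystyle {\rm {ln}}(x!)}$ as a float without overflow. A sketch (the sample values of x are arbitrary):

```python
import math

# Relative error of the approximation ln(x!) ≈ x ln(x) - x.
# math.lgamma(x + 1) returns the exact ln(x!) as a float, avoiding overflow.
for x in (10, 1_000, 1_000_000):
    exact = math.lgamma(x + 1)
    approx = x * math.log(x) - x
    rel_err = abs(exact - approx) / exact
    print(f"x = {x:>9}: relative error = {rel_err:.2e}")
```

The relative error shrinks steadily as x grows, which is why the approximation is trustworthy at mole-scale numbers.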

${\displaystyle {\rm {ln}}((N+n-1)!)\approx (N+n-1){\rm {ln}}(N+n-1)-(N+n-1)}$

${\displaystyle {\rm {ln}}(n!)\approx n{\rm {ln}}(n)-n}$

${\displaystyle {\rm {ln}}((N-1)!)\approx (N-1){\rm {ln}}(N-1)-(N-1)}$

So ${\displaystyle {\frac {S}{k_{B}}}={\rm {ln}}((N+n-1)!)-{\rm {ln}}(n!)-{\rm {ln}}((N-1)!)\approx (N+n-1){\rm {ln}}(N+n-1)-(N+n-1)-n{\rm {ln}}(n)+n-(N-1){\rm {ln}}(N-1)+(N-1)}$.

With some simplification (the linear terms cancel exactly),
${\displaystyle {\frac {S}{k_{B}}}\approx (N+n-1){\rm {ln}}(N+n-1)-n{\rm {ln}}(n)-(N-1){\rm {ln}}(N-1)}$

Once again, let's make the assumption that these numbers are very large, making the 1s negligible. So ${\displaystyle {\frac {S}{k_{B}}}\approx (N+n){\rm {ln}}(N+n)-n{\rm {ln}}(n)-N{\rm {ln}}(N)}$

Therefore, entropy can be approximated with ${\displaystyle S=k_{B}[(N+n){\rm {ln}}(N+n)-n{\rm {ln}}(n)-N{\rm {ln}}(N)]}$. This is Stirling's Approximation. Recalling that a factor in front of a logarithm can be moved inside as an exponent, ${\displaystyle S=k_{B}[{\rm {ln}}((N+n)^{(N+n)})-{\rm {ln}}(n^{n})-{\rm {ln}}(N^{N})]}$, or ${\displaystyle S=k_{B}{\rm {ln}}[{\frac {(N+n)^{(N+n)}}{(n^{n})(N^{N})}}]}$. For numerical work, the first form is better, since it never exponentiates to enormous numbers before taking the logarithm.
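The approximate and exact entropies can be compared directly. A sketch (the values of N and n are arbitrary illustrative choices, large enough for the approximation to be good):

```python
import math

def S_stirling(N, n):
    """Entropy of an ideal solid, in units of k_B, via Stirling's Approximation."""
    return (N + n) * math.log(N + n) - n * math.log(n) - N * math.log(N)

def S_exact(N, n):
    """Exact ln of the multiplicity (N + n - 1)! / (n! (N - 1)!), via lgamma."""
    return math.lgamma(N + n) - math.lgamma(n + 1) - math.lgamma(N)

# Illustrative values: 10,000 oscillators sharing 20,000 energy quanta.
N, n = 10_000, 20_000
print(S_stirling(N, n), S_exact(N, n))
```

Here `math.lgamma(N + n)` equals ${\displaystyle {\rm {ln}}((N+n-1)!)}$, and likewise for the other two terms, so `S_exact` evaluates the original formula without ever forming the gigantic factorials themselves.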

Next Page: Absorption | Previous Page: Probability, Entropy, & the Second Law
Home: Biological Physics