
Computational Physics/Printable version

From Wikibooks, open books for an open world


Computational Physics

The current, editable version of this book is available in Wikibooks, the open-content textbooks collection, at
https://en.wikibooks.org/wiki/Computational_Physics

Permission is granted to copy, distribute, and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 3.0 License.

Why Computational Physics?

Definition


Computational Physics is the study and implementation of numerical algorithms, and of the techniques which make such calculations practical on computers.

Purpose and Philosophy


The purpose of this course is to demonstrate to students how computers can enable us to both broaden and deepen our understanding of physics by vastly increasing the range of mathematical calculations which we can conveniently perform.

Our approach to computational physics is to write self-contained programs in a high-level scientific language--i.e., either FORTRAN or C++. Of course, there are many other possible approaches, each with their own peculiar advantages and disadvantages. It is instructive to briefly examine the available options.

Scientific Programming Methodology


Basically, there are three possible methods by which we could perform the numerical calculations which we are going to encounter during this course.

First, we could use a mathematical software package, such as MATHEMATICA, MAPLE or MATLAB. The main advantage of these packages is that they facilitate the very rapid coding up of numerical problems. The main disadvantage is that they produce executable code which is interpreted, rather than compiled. Compiled code is translated directly from a high-level language into machine code instructions, which, by definition, are platform dependent--after all, an Intel x86 chip has a completely different instruction set to a Power-PC chip. Interpreted code is translated from a high-level language into a set of meta-code instructions which are platform independent. Each meta-code instruction is then translated into a fixed set of machine code instructions which is peculiar to the particular hardware platform on which the code is being run. In general, interpreted code is nowhere near as efficient, in terms of computer resource utilization, as compiled code: i.e., interpreted code runs a lot slower than equivalent compiled code. Thus, although MATHEMATICA, MAPLE, and MATLAB are ideal environments in which to perform relatively small calculations, they are not suitable for full-blown research projects, since the code which they produce generally runs far too slowly.

Second, we could write our own programs in a high-level language, but use calls to pre-written, pre-compiled routines in commonly available subroutine libraries, such as NAG, LINPACK, and ODEPACK, to perform all of the real numerical work. This is the approach used by the majority of research physicists.

Third, we could write our own programs--completely from scratch--in a high-level language. This is the approach used in this course. I have opted not to use pre-written subroutine libraries, simply because I want students to develop the ability to think for themselves about scientific programming and numerical techniques. Students should, however, realize that, in many cases, pre-written library routines offer solutions to numerical problems which are pretty hard to improve upon.

Choice of Scientific Programming Languages


What is the best high-level language to use for scientific programming? This, unfortunately, is a highly contentious question. Over the years, literally hundreds of high-level languages have been developed. However, few have stood the test of time. Many languages (e.g., Algol, Pascal, Haskell) can be dismissed as ephemeral computer science fads. Others (e.g., Cobol, Lisp, Ada) are too specialized to adapt for scientific use. Let us examine the remaining options:

FORTRAN 77


FORTRAN was the first high-level programming language to be developed: in fact, it predates the languages listed below by decades. Before the advent of FORTRAN, programs had to be written in assembly language or machine code. Moreover, FORTRAN was specifically designed for scientific computing. Indeed, in the early days of computers all computing was scientific in nature--i.e., physicists and mathematicians were the original computer scientists! FORTRAN's main advantages are that it is very straightforward, and that it interfaces well with most commonly available, pre-written subroutine libraries (since these libraries generally consist of compiled FORTRAN code). FORTRAN's main disadvantages are all associated with its relative antiquity. For instance, FORTRAN's control statements are fairly rudimentary, whereas its input/output facilities are positively paleolithic.

C

This language was originally developed by computer scientists to write operating systems. Indeed, all UNIX operating systems are written in C. C is, consequently, an extremely flexible and powerful language. Amongst its major advantages are its good control statements and excellent input/output facilities. C's main disadvantage is that, since it was not specifically written to be a scientific language, some important scientific features (e.g., complex arithmetic) are missing. Although C is a high-level language, it incorporates many comparatively low-level features, such as pointers (this is hardly surprising, since C was originally designed for writing operating systems). The low-level features of C--in particular, the rather primitive implementation of arrays--sometimes make scientific programming more complicated than need be the case, and undoubtedly facilitate programming errors. On the other hand, these features allow scientific programmers to write extremely efficient code. Since efficiency is generally the most important concern in scientific computing, the low-level features of C are, on balance, advantageous.

C++

This language is a major extension of C whose main aim is to facilitate object-orientated programming. Object-orientation is a completely different approach to programming than the more traditional procedural approach: it is particularly well suited to large projects involving many people who are each writing different segments of the same code. However, object-orientation represents a large, and somewhat unnecessary, overhead for the type of straightforward, single-person programming tasks considered in this course. Note, however, that C++ incorporates some non-object-orientated extensions to C which are extremely useful.

We can rule out FORTRAN 90, because of the absence of an inexpensive compiler. The remaining options are therefore FORTRAN 77, C, and C++. Although object-orientation is an unnecessary complication (at least, for our purposes), C++ can also be written in a plain procedural style, just like C, so it need not be excluded on that account. I have chosen to use C++ in this course, simply because I find the complex features of FORTRAN 77 too embarrassing to teach students in the 21st century.


Chapter 1

The Spreadsheet Program


This book will describe specifically how to use Microsoft Excel, as this is a widely used program. If you have access to it and are familiar with it, it is suggested that you use it. If you have another preferred spreadsheet program, please feel free to use that; however, be aware that you will have to refer to its help section wherever your spreadsheet program differs from this book.

If you would like a free spreadsheet program that works with nearly any operating system then 'Calc' from OpenOffice.org is highly recommended. Hopefully in the future this book will describe how to use this as well as Microsoft Excel.

Why use a spreadsheet? This chapter aims to show you that you can do things quickly and simply without having to learn and type excessive amounts of code. We will see towards the end that using a spreadsheet has its limitations, but hopefully we will also see that spreadsheets are useful for more than just plotting graphs of your experimental results!

Example Uses of a Spreadsheet Program


Using Spreadsheets to calculate the Numerical Solutions of Ordinary Differential Equations


If x is a function of a real variable t and f is a function of both x and t, then the equation

             x’(t) = f(x(t), t) 		(1.1)

is called a first order differential equation. Solving such an equation involves more than algebraic manipulation; indeed, although the equation itself involves three quantities, x, f, and t, to find a solution we must identify a function x, defined solely in terms of the independent variable t, which satisfies the relationship of (1.1) for all t in some open interval. For many equations an exact solution is not possible, and we have to rely on approximations. We will discuss one technique for finding both approximate and, where possible, exact solutions to differential equations.

We have already seen many examples of differential equations: when we discussed finding the position of an object moving in a straight line given its velocity function and its initial position, when we discussed the motion of a projectile, and when we considered the two-body problem. Indeed, in many ways the study of differential equations is at the heart of calculus. To study the interaction of physical bodies in the world is to study the ramifications of physical laws such as the law of gravitation and Newton's second law of motion, laws which frequently lead to questions involving the solution of differential equations. Newton was the first to realize the power of calculus for solving a vast array of physical problems. The mathematicians that followed him enlarged and refined his techniques until they began to believe that the entire future of the universe, as well as its past, could be discerned from knowledge of the current positions and velocities of all physical bodies and the forces at work between them. Today we know more about the limits to our knowledge, but, nevertheless, the study of differential equations remains a key component of our understanding of the universe.

To begin our study, consider the equation

                   x’(t) = f(x(t), t)		(1.2)

with the initial condition x(t0) = x0. To simplify notation, we will frequently omit the independent variable when referring to x(t) and write simply

                      x’ = f(x, t)		 (1.3)

Now if f(x, t) depends only on the value of t, that is, if f(x, t) = g(t) for all values of x, where g is a function of t alone, then we may solve (1.3) by integration. That is, integrating both sides of (1.3) from t0 to t gives us

            x(t) - x(t0) = integral of g(s) ds from s = t0 to s = t	(1.4)


which gives

            x(t) = x(t0) + integral of g(s) ds from s = t0 to s = t	(1.5)

Assuming we can compute the definite integral, which can at least always be done numerically, we have solved the differential equation.

For example, consider the equation

          x’ = 4 sin(3t)

with initial condition x(0) = 5. Then

              x(t) = 5 + integral of 4 sin(3s) ds from s = 0 to s = t
                   = 5 - (4/3) cos(3t) + 4/3 = (1/3)(19 - 4 cos(3t)).

Knowing the value of t we can calculate the exact value of x at t.

Indeed, knowing the difficulty of evaluating ordinary definite integrals, it is not hard to believe that many, if not most, differential equations may be solved only through numerical approximation. Although the function x in equation (1.3) is unknown, we do have enough information to find its best affine approximation at t0. Namely, the best affine approximation to x at t0 is obtained from Taylor's theorem, applied on a subinterval [t0, t1] of the full interval [t0, t]:

    x(t1) = x0 + f(x0, t0)(t1 - t0) + O((t1 - t0)^2), where t1 = t0 + h 	(1.6)

Equivalently,

                  x(t0 + h) ≈ x0 + hf(x0, t0)		(1.7)

Thus for a small value of h,

                     x1 = x0 + hf(x0, t0)			(1.8)

where

                     h = (t1 – t0)/n

(in the above case n = 1), will provide a good approximation to x(t0 + h). However, we want to do more than this; since x is a function, we want to be able to approximate its values over an entire interval, say [t0, t]. To do so, we choose a small value for h and iterate the process that gave us x1. Specifically, we let tn = t0 + nh, n = 0, 1, 2, 3, …, where n is chosen large enough, and the generalized form of the Euler method is

                xn+1 = xn + hf(xn, tn) 			(1.9)

That is, we compute an approximation for x(tn+1) by applying the best affine approximation to our approximation for x(tn), repeating the process enough times until we have approximated x over the entire interval [t0, t]. This method of approximation is known as Euler’s method.

Euler’s method


To approximate a solution to the equation

                           x’ = f(x, t)

with initial condition x(t0) = x0 on an interval [t0, t], choose a small value for h > 0 and an integer n such that t0 + nh = t. Letting tk = t0 + kh, compute x1, x2, . . . , xn using the difference equation

                xk+1 = xk + hf(xk, tk)		(1.10)

Then xn is an approximation for x(t0 + nh) = x(t).
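
Although this chapter carries out Euler's method in a spreadsheet, the same difference equation translates directly into a few lines of the course's language, C++. The sketch below is not part of the original text: the function name euler, the template parameter F, and the test problem (the earlier example x' = 4 sin(3t) with x(0) = 5, integrated to t = 2) are illustrative choices, and the step size is arbitrary.

  #include <cmath>
  #include <cstdio>

  // Explicit Euler method for x' = f(x, t): starting from x(t0) = x0,
  // repeatedly apply x_{k+1} = x_k + h * f(x_k, t_k) and t_{k+1} = t_k + h.
  template <typename F>
  double euler(F f, double x0, double t0, double h, int n)
  {
      double x = x0, t = t0;
      for (int k = 0; k < n; ++k) {
          x += h * f(x, t);
          t += h;
      }
      return x;   // approximation to x(t0 + n*h)
  }

  int main()
  {
      // Test on the earlier example x' = 4 sin(3t), x(0) = 5, whose exact
      // solution is x(t) = (1/3) * (19 - 4 cos(3t)).
      auto f = [](double /*x*/, double t) { return 4.0 * std::sin(3.0 * t); };

      double approx = euler(f, 5.0, 0.0, 0.001, 2000);            // integrate to t = 2 with h = 0.001
      double exact  = (19.0 - 4.0 * std::cos(3.0 * 2.0)) / 3.0;   // exact value at t = 2
      std::printf("Euler: %f   exact: %f\n", approx, exact);
      return 0;
  }

With a step size this small the two printed numbers should agree to within a few hundredths, which is the same behaviour we will now observe in the spreadsheet example.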

Consider the differential equation x’ = 0.04x with initial condition x(0) = 50. We know that the solution to this equation is

                       x(t) = 50 e^(0.04t).

In particular, x(50) = 50 e^2 = 369.45, rounding our answer to the second decimal place. To approximate x on the interval [0, 50] using Euler's method, we will first take h = 1 and, starting with x0 = 50, compute x1, x2, . . . , x50, where in this case xn will approximate x(n). Using (1.9) with f(x, t) = 0.04x and rounding to two decimal places, we have

n = 1: x1 = x0 + h(0.04x0) = 50 + (1)(0.04)(50) = 50 + 2 = 52.00

n = 2: x2 = x1 + h(0.04x1) = 52 + (1)(0.04)(52) = 52 + 2.08 = 54.08

n = 3: x3 = x2 + h(0.04x2) = 54.08 + (1)(0.04)(54.08) = 54.08 + 2.16 = 56.24

and so on, ending with x50 = 355.33.

Notice that the error in our approximation, that is, the difference between xn and x(n), increases as n increases. For example,

x(5) - x5 = 0.24, whereas

x(50) - x50 = 14.12.

This should be expected since the error of the approximation on each step is compounded by the errors made in each of the preceding steps. The only way we can control the amount of error in our approximations is to decrease the step size. For example, if we reduce the step size to h = 0.1, we obtain

x1 = x0 + h(0.04x0) = 50 + (0.1)(0.04)(50) = 50 + 0.2 = 50.2000,

x2 = x1 + h(0.04x1) = 50.2 + (0.1)(0.04)(50.2) = 50.4008,

x3 = x2 + h(0.04x2) = 50.4008 + (0.1)(0.04)(50.4008) = 50.6024,

and so on, ending with

x500 = 367.98 (This can be easily calculated using computers)

We have to go as far as x500, and take this as our approximation to x(50), because from the definition of the parameter h

h = (t – t0)/n

putting h = 0.1, the value of n, which is the number of iterations to be made, becomes 500 for t0 = 0 and t = 50. [Alay]
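
For readers who would rather check these numbers in a program than in a spreadsheet, here is a self-contained C++ sketch (an illustration, not part of the original text) that repeats the Euler iteration for x' = 0.04x, x(0) = 50 with both step sizes and compares the results with the exact value 50 e^2.

  #include <cmath>
  #include <cstdio>
  #include <initializer_list>

  int main()
  {
      const double x0 = 50.0, t0 = 0.0, t1 = 50.0;

      for (double h : {1.0, 0.1}) {
          int n = static_cast<int>((t1 - t0) / h + 0.5);    // number of Euler steps
          double x = x0;
          for (int k = 0; k < n; ++k)
              x += h * 0.04 * x;                            // x_{k+1} = x_k + h * (0.04 x_k)
          std::printf("h = %.1f : x_%d = %.2f\n", h, n, x); // 355.33 for h = 1, 367.98 for h = 0.1
      }

      std::printf("exact : x(50) = %.2f\n", 50.0 * std::exp(2.0));  // 369.45
      return 0;
  }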

Using Spreadsheets to calculate the Numerical Solutions of Integrals


In calculus we come across very complicated and difficult integrals, which are hard or impossible to evaluate using analytical methods. This problem can be solved by using numerical integration, e.g. the rectangular rule, the limit-of-sums definition, the trapezoidal rule, and Simpson's rule.

The Trapezoidal Rule


Consider y = f[x] between x = a and x = b. Then, by definition, the integral of the function y with respect to x is the area under the curve AB between a and b:

         Area = F[x] = integral of f[x] dx from x = a to x = b

Now, dividing the trapezoid into a rectangle and an approximate triangle (i.e., the number of subdivisions n = 1, so that h = b – a), we can write

F[x] = h f[a] + (1/2) h {f[b] – f[a]}

     = h {f[a] + (1/2) f[b] – (1/2) f[a]}

     = (h/2) {f[a] + f[b]} = (h/2) {f[x0] + f[x1]}

For n = 2 we have two rectangles and two triangles (now with h = (b – a)/2), such that the approximate area under the curve becomes

        F[x] = h/2 {f[a] + f[{a+b}/2]} + h/2 {f[{a+b}/2] + f[b]}

= h/2 {f[a] + 2f[{a+b}/2] + f[b]}

= h/2 {f[x0] + 2f[x1] + f[x2]}

So, generalizing to n subdivisions of the interval, we have

F[x] = h/2 {f[x0] + 2f[x1] + 2f[x2] + … + 2f[xn-2] + 2f[xn-1] + f[xn]},

which is an approximation of the integral of y = f[x], where

h = (b – a)/n

Here n can be even or odd, and the approximation can be improved by using a larger value of n. [Alay]
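
As with Euler's method, the trapezoidal rule can be set up in a spreadsheet, but it is just as short in C++. The following sketch is illustrative only and not part of the original text: the function name trapezoid and the test integral (the integral of sin(x) from 0 to pi, whose exact value is 2) are assumptions made for the example.

  #include <cmath>
  #include <cstdio>
  #include <initializer_list>

  // Composite trapezoidal rule:
  // area = (h/2) { f(x0) + 2 f(x1) + ... + 2 f(x_{n-1}) + f(xn) },  h = (b - a)/n.
  template <typename F>
  double trapezoid(F f, double a, double b, int n)
  {
      double h = (b - a) / n;
      double sum = 0.5 * (f(a) + f(b));   // end points carry weight 1/2
      for (int k = 1; k < n; ++k)
          sum += f(a + k * h);            // interior points carry weight 1
      return h * sum;
  }

  int main()
  {
      const double pi = std::acos(-1.0);
      auto f = [](double x) { return std::sin(x); };

      // The exact value of the integral of sin(x) from 0 to pi is 2;
      // the approximation improves as n grows, as stated above.
      for (int n : {4, 16, 64})
          std::printf("n = %2d : %f\n", n, trapezoid(f, 0.0, pi, n));
      return 0;
  }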

Further reading


The Random Walk

[Figure: An example of a one-dimensional random walk with 100 steps. The y-axis runs from 25 steps South (-25) to 25 steps North (+25).]

This is a book for physicists, so let's get right in and describe the physics we want to investigate. The random walk is a problem that occurs in various physics areas such as Brownian motion, which describes how molecules move about in a fluid (a fluid can either be a liquid or a gas). It also plays a role in quantum field theory, but this is much more complicated. Outside of physics the random walk can be applied to share prices. It is interesting that by investigating one problem we find solutions to entirely different ones, that have no other relationship to each other. This is something that occurs throughout physics, such as the way in which waves can be used to model traffic flows.

So what is a random walk? Imagine a drunk guy (yourself after a good night out, if you like) who has just left his local bar. He is standing outside the bar and, being rather drunk, he randomly takes one step North up the street or one step South down the street. The probability of him going North or South is equal and all his steps are of the same length. If he does this for 100 steps, how far on average has he got away from the bar? You may think on average he ends up very close to the bar, but actually this is not so! (Remember, distance means how many meters he is from the bar, and doesn't take account of which direction he has gone. If we found his average position, treating North as positive and South as negative, then by the symmetry of the situation we know that he would indeed end up outside the bar on average.)

We will investigate this problem and find out how far he actually ends up from the bar. This is known as a one-dimensional random walk, as the drunkard walks only North or South. The same ideas can be applied in two dimensions, say a drunk in a field who staggers North, South, East or West. Three dimensions can similarly be investigated, and even four or more, although this is a bit of an abstract idea!

Can you see now how this links to Brownian motion (where molecules move randomly) or to share prices which have approximately equal chances of rising or falling? [John]

Making a one dimensional random walk


On a blank worksheet in your spreadsheet, double click on cell B4, type =RAND() and press enter. In cell B4 you should now see a decimal number between 0 and 1. Try hitting the F9 key a few times, which recalculates the entire spreadsheet. You should see numbers being randomly generated in cell B4. By the way, cell B4 is specified so there can be some consistency when referring you to specific cells. It will also allow the spreadsheet to be neatly formatted later if you're so inclined! On the subject of formatting, the whole random number may not be showing at the moment; if you click on cell B4 so that it is highlighted, go to the Format menu, select Column and then AutoFit Selection, the entire contents of the cell will be shown. This can be done for the entire sheet by selecting the whole sheet first; you can do this by clicking the grey box at the top left of the sheet.

So, why did we type what we typed? As you may well know if you're familiar with spreadsheets, = specifies that we're typing a mathematical function and not just text. RAND() is a function that the spreadsheet recognises as a request to generate a random number. If this isn't clear right now it will become clear as we work through the rest of the chapter.

How do we transform this into a random walk? We need to transform this random number into an event with a 50:50 probability. What we need to do is generate a step South if the number is less than 0.5 and a step North if the number is more than 0.5. This is where we see how a simple spreadsheet can be transformed into a program.

Double click on cell C4 and type: =IF(B4<0.5,-1,1). This is a bit more complicated than the previous function! What the IF function does is to set the value of a cell to one value if the condition specified is true, or another value if the condition is false. The structure of this function goes like this: IF(Condition, Value if True, Value if False). B4<0.5 is the condition; this means 'if cell B4 is less than 0.5'.

You should now be able to work out what values of the random number will make it take a step South (generating a -1) or a step North (generating a 1). If not, try it and find out! It should work the same way as described a bit earlier.

Taking one step isn't particularly exciting or impressive (or if you think it is, you're going to absolutely love the rest of the book!) so we'll see how we can expand it to 100 steps. It's fairly easy actually: simply click on cell B4 so it is selected and then move the mouse over the little square in the bottom right corner. The mouse cursor will change and you can then drag down to fill the cells below with the same formula as in this cell. Drag down to cell B103 so that we will be taking 100 steps. It is even easier to fill the cells in column C; all you need do now is double click on the little black square in the bottom corner of cell C4, and this will fill it down to the same row as column B is filled down to. Notice that in cell C4 we have a function that relies on the value of the function in B4, but when we fill the other cells they automatically change so that each function depends on the value of the cell to its left. This is usually very handy, but we'll see in a bit that sometimes we need to override it.

Now we can hit F9 and each time generate a random walk! But it's still not very interesting to look at, or very informative: we can't see where the drunkard ends up unless we add up all the cells, nor can we see where he was after any particular step. This can be easily corrected: enter into cell D4 the function =SUM(C$4:C4). Hopefully you can guess what this does: it adds up (sums) all the cells between C4 and, well, um, C4. Not too useful as it is, you might think, and why is that $ sign there? Well, if you fill down into the cells below you'll see that each cell adds up the total of all the steps taken, and gives the current position of the drunkard after this step. If you look at cell D20 you should get an idea of what's happening: this cell sums between C4 and C20. The $ sign acts to fix the starting row when you fill the cells below.

Now you can use the spreadsheet to make a graph of the random walk; if you don't know how to do this, check the help section of your spreadsheet. As a couple of hints to make the graph look good, put a '0' in cell D3 so that the drunkard can start outside the bar, and select a line graph. Press F9 to generate new graphs.

Do the graphs you've created look like what you would expect? Do they actually look random? You'll probably see some strange patterns in some graphs, but they are honestly random! This will hopefully make you think about what a random pattern truly looks like. [John]
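
If you later want to reproduce this random walk outside the spreadsheet, the following C++ sketch does the same thing: a uniform random number plays the role of RAND(), the comparison with 0.5 plays the role of IF(B4<0.5,-1,1), and a running total plays the role of the SUM column. It is an illustration, not part of the original text, and the choice of the Mersenne Twister generator is an assumption.

  #include <cstdio>
  #include <random>

  int main()
  {
      // One-dimensional random walk of 100 unit steps, mirroring the spreadsheet.
      std::mt19937 rng(std::random_device{}());
      std::uniform_real_distribution<double> uniform(0.0, 1.0);

      int position = 0;                               // the drunkard starts outside the bar
      for (int step = 1; step <= 100; ++step) {
          int move = (uniform(rng) < 0.5) ? -1 : +1;  // one step South or one step North
          position += move;                           // running sum of all steps so far
          std::printf("step %3d : position %+d\n", step, position);
      }
      return 0;
  }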

How far does the drunk get?


Try to work this out for yourself while this section is being written!
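
One way to investigate the question numerically (an illustrative sketch, not the book's answer) is to simulate many independent 100-step walks and average the final distance from the bar:

  #include <cstdio>
  #include <cstdlib>
  #include <random>

  int main()
  {
      // Average the drunkard's final distance from the bar over many
      // independent 100-step walks (a Monte Carlo estimate).
      std::mt19937 rng(12345);                        // fixed seed, so the run is reproducible
      std::uniform_int_distribution<int> coin(0, 1);  // 0 = step South, 1 = step North

      const int steps = 100, trials = 100000;
      double total_distance = 0.0;
      for (int trial = 0; trial < trials; ++trial) {
          int position = 0;
          for (int s = 0; s < steps; ++s)
              position += coin(rng) ? +1 : -1;
          total_distance += std::abs(position);
      }
      std::printf("mean distance after %d steps: %.2f\n", steps, total_distance / trials);
      return 0;
  }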


Chapter 2

Computer algebra systems began to appear in the 1960s, and evolved out of two quite different sources - the requirements of theoretical physicists and research into artificial intelligence.

A prime example of the first development was the pioneering work conducted by the later Nobel Prize laureate in physics Martin Veltman, who designed a program for symbolic mathematics, especially high energy physics, called Schoonschip (Dutch for "clean ship") in 1963.

Using LISP as the programming basis, Carl Engelman created MATHLAB in 1964 at MITRE within an artificial intelligence research environment. Later MATHLAB was made available to users on PDP-6 and PDP-10 Systems running TOPS-10 or TENEX in universities. Today it can still be used on SIMH-Emulations of the PDP-10. MATHLAB ("mathematical laboratory") should not be confused with MATLAB ("matrix laboratory") which is a system for numerical computation built 15 years later at the University of New Mexico, accidentally named rather similarly.

The first popular computer algebra systems were muMATH, Reduce, Derive (based on muMATH), and Macsyma; a popular copyleft version of Macsyma called Maxima is actively being maintained. As of today, the most popular commercial systems are Mathematica[1] and Maple, which are commonly used by research mathematicians, scientists, and engineers. Freely available alternatives include Sage (which can act as a front-end to several other free and nonfree CAS).

In 1987 Hewlett-Packard introduced the first hand-held calculator CAS with the HP-28 series; for the first time in a calculator it was possible to rearrange algebraic expressions, perform differentiation, limited symbolic integration and Taylor series construction, and solve algebraic equations.

The Texas Instruments company in 1995 released the TI-92 calculator with an advanced CAS based on the software Derive. This, along with its successors (including the TI-89 series and the newer TI-Nspire CAS released in 2007), featured a reasonably capable and inexpensive hand-held computer algebra system.

CAS-equipped calculators are not permitted on the ACT, the PLAN, and in some classrooms because they may affect the integrity of the test/class,[2] though they may be permitted on all of College Board's calculator-permitted tests, including the SAT, some SAT Subject Tests and the AP Calculus, Chemistry, Physics, and Statistics exams.


muMATH is a computer algebra system, which was developed in the late 1970s and early eighties by Albert D. Rich and David Stoutemyer of the Soft Warehouse in Honolulu, Hawaii. It was implemented in the muSIMP programming language which was built on top of a LISP dialect called muLISP. Platforms supported were CP/M and TRS-DOS (since muMATH-79), Apple II (since muMATH-80) and MS-DOS (in muMATH-83, the last version, which was published by Microsoft). The Soft Warehouse later developed Derive, another computer algebra system. The company was purchased by Texas Instruments in 1999, and development of Derive ended in 2006.

MuPAD is a computer algebra system (CAS). Originally developed by the MuPAD research group at the University of Paderborn, Germany, development was taken over by the company SciFace Software GmbH & Co. KG in cooperation with the MuPAD research group and partners from some other universities starting in 1997.

Until autumn 2005, the version "MuPAD Light" was offered for free for research and education, but as a result of the closure of the home institute of the MuPAD research group, only the version "MuPAD Pro" became available for purchase. The MuPAD kernel is bundled with Scientific Notebook and Scientific Workplace. Former versions of MuPAD Pro were bundled with SciLab. MuPAD was adopted as the CAS engine for version 14 of the MathCAD software package. In September 2008, SciFace was purchased by MathWorks and the MuPAD code was included in the Symbolic Math Toolbox add-on for MATLAB. On 28 September 2008, MuPAD was withdrawn from the market as a software product in its own right.[1] However, it is still available in the Symbolic Math Toolbox in MATLAB and can also be used as a stand-alone program.