Parallel Computing and Computer Clusters/Theory

Typical real-world applications

What do people do with clusters of computers?

  • The web
    • Search engine indexes
    • Databases
  • Scientific and engineering work
    • Weather modeling and prediction
    • Molecular modeling
    • Product design and simulation

Classes of problems

Embarrassingly parallel problems

History of distributed computing

The Eighties and Nineties: PVM and MPI

Today: GFS, MapReduce, Hadoop

Mathematics of parallel processing

Parallel processing is the simultaneous execution of the same task, split up and specially adapted, on multiple processors in order to obtain results faster. The parallelism can come from a single machine with multiple processors or from multiple machines connected together to form a cluster.

Amdahl's law

Amdahl's law is a demonstration of the law of diminishing returns: even if part of a computation could be sped up a hundred-fold or more, when the improvement affects only 12% of the overall task the result can be at most \frac{1}{1 - 0.12} \approx 1.136 times faster.

More technically, the law is concerned with the speedup achievable from an improvement to a computation, where the improvement affects a proportion P of that computation and speeds the affected portion up by a factor S. (For example, if an improvement can speed up 30% of the computation, P will be 0.3; if the improvement makes the affected portion twice as fast, S will be 2.) Amdahl's law states that the overall speedup of applying the improvement will be

\frac{1}{(1 - P) + \frac{P}{S}}.

To see how this formula was derived, assume that the running time of the old computation was 1, for some unit of time. The running time of the new computation will be the length of time the unimproved fraction takes (which is 1 − P) plus the length of time the improved fraction takes. The length of time for the improved part of the computation is the length of the improved part's former running time divided by the speedup, making the length of time of the improved part P/S. The final speedup is computed by dividing the old running time by the new running time, which is what the above formula does.
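As a check on the formula, here is a minimal Python sketch (the function name amdahl_speedup and the test values are illustrative, not from the text) that evaluates the overall speedup for the example above, where 30% of the computation is made twice as fast, and for the 12% example with an arbitrarily large improvement:

 def amdahl_speedup(p, s):
     """Overall speedup when a proportion p of the work is sped up by a factor s."""
     return 1.0 / ((1.0 - p) + p / s)

 # The example from the text: 30% of the computation made twice as fast.
 print(amdahl_speedup(0.3, 2))     # about 1.176

 # The 12% example, with the affected part made arbitrarily fast.
 print(amdahl_speedup(0.12, 1e9))  # about 1.136, i.e. the 1/(1 - 0.12) bound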

Parallelization

In the special case of parallelization, Amdahl's law states that if F is the fraction of a calculation that is sequential (i.e. cannot benefit from parallelization), and (1 − F) is the fraction that can be parallelized, then the maximum speedup that can be achieved by using N processors is

\frac{1}{F + \frac{1-F}{N}}.

In the limit, as N tends to infinity, the maximum speedup tends to 1/F. In practice, the performance gained per unit cost falls rapidly as N is increased once (1 − F)/N becomes small compared to F, since each additional processor then contributes very little extra speedup.

As an example, if F is only 10%, the problem can be sped up by at most a factor of 10, no matter how large a value of N is used. For this reason, parallel computing is only useful either for small numbers of processors or for problems with very low values of F: so-called embarrassingly parallel problems. A great part of the craft of parallel programming consists of attempting to reduce F to the smallest possible value.
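To make the limit concrete, a short Python sketch (the function name parallel_speedup and the processor counts are illustrative, not from the text) tabulates the speedup for F = 0.1 as N grows, showing it approaching but never reaching 1/F = 10:

 def parallel_speedup(f, n):
     """Maximum speedup on n processors when a fraction f of the work is sequential."""
     return 1.0 / (f + (1.0 - f) / n)

 for n in (1, 4, 16, 64, 256, 1024):
     print(n, round(parallel_speedup(0.1, n), 2))

 # Output: 1 1.0, 4 3.08, 16 6.4, 64 8.77, 256 9.66, 1024 9.91;
 # the speedup climbs towards, but never reaches, 1/F = 10.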