High performance computing


High Performance Computing (HPC)

What is HPC

High-Performance Computing (HPC) is the application of "supercomputers" to computational problems that are either too large for standard computers or would take too long to solve on them. A desktop computer generally has a single processing chip, commonly called a CPU. An HPC system, on the other hand, is essentially a network of nodes, each of which contains one or more processing chips as well as its own memory.

Parallel Computing

Programs for HPC systems must be split up into many smaller units of work, such as threads or processes, with one running on each core. To piece the results of the larger program back together, the cores must be able to communicate with each other efficiently, and the system as a whole must be well organized.
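The split-compute-combine pattern described above can be sketched on a single machine with Python's standard thread pool. This is only an illustration of the decomposition idea, not how production HPC codes are written (those typically use MPI or similar libraries to communicate across nodes); the names `partial_sum` and `parallel_sum` are illustrative, not from any particular system.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(bounds):
    """One worker's share of the job: sum the integers in [start, stop)."""
    start, stop = bounds
    return sum(range(start, stop))

def parallel_sum(n, workers=4):
    """Split summing 0..n-1 into one chunk per worker, then combine."""
    step = -(-n // workers)  # ceiling division: size of each worker's chunk
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Each chunk runs as an independent task; the partial results
        # are then combined ("reduced") into the final answer.
        return sum(pool.map(partial_sum, chunks))

print(parallel_sum(1_000_000))  # -> 499999500000, same as sum(range(1_000_000))
```

On a real cluster the "combine" step requires explicit communication between nodes over the network, which is why interconnect speed and program organization matter so much.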

Programs on HPC systems create vast amounts of data, which can be very difficult for standard file systems and storage hardware to handle. Standard file systems, meaning those designed for personal use, might have an upper limit on file size, number of files, or total storage. HPC file systems must be able to grow to contain and quickly transfer large amounts of data. In addition to data in active use, researchers often keep previous data for comparison or as a starting point for future projects. Older data is kept in archival storage systems. Kraken, for example, uses a magnetic tape storage system, which can store several petabytes (millions of gigabytes) of data.