A-level Computing/OCR AS text book/1.1.1 Structure and function of the processor

The specification highlights the following topics which must be covered.

a) The Arithmetic and Logic Unit; ALU, Control Unit and Registers (Program Counter; PC, Accumulator; ACC, Memory Address Register; MAR, Memory Data Register; MDR, Current Instruction Register; CIR). Buses: data, address and control: How this relates to assembly language programs.

b) The Fetch-Decode-Execute Cycle, including its effect on registers.

c) The factors affecting the performance of the CPU: clock speed, number of cores, cache.

d) The use of pipelining in a processor to improve efficiency.

e) Von Neumann, Harvard and contemporary processor architecture.


A) The Arithmetic and Logic Unit; ALU, Control Unit and Registers (Program Counter; PC, Accumulator; ACC, Memory Address Register; MAR, Memory Data Register; MDR, Current Instruction Register; CIR). Buses: data, address and control: How this relates to assembly language programs.

B) The Fetch-Decode-Execute Cycle, including its effect on registers.

C) The factors affecting the performance of the CPU: clock speed, number of cores, cache.

Clock speed: the clock speed is the number of clock cycles the processor completes in one second. Modern processors are measured in gigahertz (GHz), where 1 GHz is one billion cycles per second. As a simplification, if one instruction completes per cycle, a 3.0 GHz CPU would be able to carry out roughly three billion instructions per second.
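This arithmetic can be shown in a short Python sketch; the assumption that one instruction completes per clock cycle is a simplification made here purely for illustration.

# A minimal sketch of the clock speed arithmetic, assuming (as a
# simplification) that one instruction completes per clock cycle.

def seconds_per_cycle(clock_speed_ghz):
    """Duration of one clock cycle, in seconds."""
    cycles_per_second = clock_speed_ghz * 1_000_000_000
    return 1 / cycles_per_second

print(seconds_per_cycle(3.0))    # about 0.33 nanoseconds per cycle
print(3.0 * 1_000_000_000)       # roughly 3 billion cycles (and, simplified, instructions) per second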

Number of cores: each core in a processor is its own processing unit within the CPU. As well as having its own cache, each core shares a higher-level cache with the others. When multitasking, different applications can be assigned to different cores, and it is also possible to use multiple cores together on a single problem. Comparing a 3.0 GHz single-core CPU with a 3.0 GHz dual-core CPU, the dual-core chip should in theory deliver double the performance. In practice the gain is smaller, because the cores must be coordinated and a program's instructions cannot always be divided neatly between them. A common misconception about parallel processing is that four cores make a processor work at four times the speed; they do not. Furthermore, relatively few programs are written to run across multiple cores, so at present the main benefit of multiple cores is running different programs at the same time, as the sketch below illustrates.
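To illustrate the idea of dividing work between cores, here is a rough Python sketch using the standard multiprocessing module; the prime-counting task and the choice of two worker processes are arbitrary examples, and the measured speed-up is normally less than double because of the overhead of coordinating the workers.

# Illustrative sketch: splitting a simple workload across two cores.

from multiprocessing import Pool

def count_primes(bounds):
    """Count the primes between start (inclusive) and end (exclusive)."""
    start, end = bounds
    total = 0
    for n in range(start, end):
        if n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

if __name__ == "__main__":
    # Split the range 0..200,000 into two halves, one per core.
    chunks = [(0, 100_000), (100_000, 200_000)]
    with Pool(processes=2) as pool:
        results = pool.map(count_primes, chunks)
    print(sum(results))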

Cache: RAM, while faster than a hard drive, is still slower than the processor; this mismatch is known as a bottleneck. To reduce it, a small amount of very fast memory is built into the CPU, known as cache memory. Because it sits on the processor itself, data in cache takes less time to travel to and from the CPU. Data and instructions that are likely to be accessed regularly are kept in cache memory, which helps avoid the bottleneck. However, cache is expensive, and a larger cache becomes slower to search, so modern processors have multiple levels of cache (usually three). Data is looked for first in the smallest and fastest level (L1), then in the larger but slower levels (L2, L3...), then in RAM. Sometimes a small allocation of a user's secondary storage is also set aside to act as an overflow for RAM; this is known as virtual memory, and it is much slower than RAM and far slower than the CPU's cache.
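The memory hierarchy described above can be sketched in Python as a lookup that tries each level in turn; the latency figures and memory contents are made up for illustration and are not real measurements.

# Illustrative sketch of a memory hierarchy: each level is checked in order,
# and the further down the hierarchy the data is found, the higher the cost.

MEMORY_LEVELS = [
    ("L1 cache",            1),    # smallest and fastest
    ("L2 cache",            4),
    ("L3 cache",           20),
    ("RAM",               100),
    ("Virtual memory", 100_000),   # on secondary storage, far slower
]

def lookup(address, contents):
    """Return the level an address is found in and the total cost in arbitrary time units."""
    cost = 0
    for level, latency in MEMORY_LEVELS:
        cost += latency
        if address in contents[level]:
            return level, cost
    raise KeyError("address not found in any level")

contents = {
    "L1 cache": {0x10},
    "L2 cache": {0x10, 0x20},
    "L3 cache": {0x10, 0x20, 0x30},
    "RAM": {0x10, 0x20, 0x30, 0x40},
    "Virtual memory": {0x10, 0x20, 0x30, 0x40, 0x50},
}

print(lookup(0x10, contents))  # found in L1: cheap
print(lookup(0x50, contents))  # only in virtual memory: very expensive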

The effect of increasing or decreasing each factor on CPU performance:

Clock speed
Increase: the CPU becomes faster, as more instructions can be completed per second.
Decrease: the CPU becomes slower, as fewer instructions can be completed per second.

Number of cores
Increase: as the number of cores increases (2, 4, 6, 8...), the speed also increases, but not in direct proportion, so performance does not necessarily double every time the core count doubles.
Decrease: the CPU becomes slower, as fewer instructions can be completed in parallel.

Cache
Increase: as the levels (L1, L2, L3...) and/or the size of the cache increase (25 MB, 30 MB, 35 MB...), there is less reliance on data being held in RAM, where access is much slower.
Decrease: as the levels and/or the size of the cache decrease, reliance on RAM, and possibly virtual memory (a small allocation of secondary storage set aside to act as extra RAM), increases. With demanding applications this shows up as longer loading times, or a noticeable delay between an input and its output, because the data has to come from RAM or virtual memory, both of which are much slower than the CPU's cache.

D) The use of pipelining in a processor to improve efficiency.

Pipelining: modern processors use a technique called pipelining. While one instruction is being executed, the next instruction is being decoded, and the instruction after that is being fetched. This keeps the different parts of the CPU busy at the same time, making more efficient use of them.

Pipelining does have its downsides. It is not always possible to predict which instruction needs to be fetched and decoded next: for instance, a program containing IF statements will require 'instruction jumps'. When the wrong instructions have been loaded, the pipeline must be 'flushed' and the correct instruction fetched instead. The more often the pipeline is flushed, the smaller the benefit pipelining provides.
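The effect of pipelining, and of flushing it on a jump, can be sketched with a simple cycle count; the three-stage model, the example program and the flush penalty below are assumptions made purely for illustration.

# Illustrative sketch: a three-stage (fetch, decode, execute) pipeline
# compared with running each instruction to completion before the next starts.

def cycles_without_pipeline(num_instructions, stages=3):
    # Every instruction occupies the processor for all three stages in turn.
    return num_instructions * stages

def cycles_with_pipeline(instructions, stages=3, flush_penalty=2):
    # The first instruction fills the pipeline; after that, one instruction
    # finishes per cycle unless a branch forces the pipeline to be flushed.
    cycles = stages + (len(instructions) - 1)
    cycles += flush_penalty * instructions.count("BRANCH")
    return cycles

program = ["LOAD", "ADD", "BRANCH", "STORE", "ADD", "STORE"]
print(cycles_without_pipeline(len(program)))  # 18 cycles without pipelining
print(cycles_with_pipeline(program))          # 10 cycles with one flush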

E) Von Neumann, Harvard and contemporary processor architecture.

Von Neumann architecture has a single control unit and a single set of address, data and control buses, and it works sequentially through instructions and data held in the same memory. It cannot fetch an instruction and transfer data at the same time, because both must share the single data bus; this is known as the Von Neumann bottleneck. Each instruction therefore takes two clock cycles to complete. This is the architecture used in general-purpose machines such as desktops and laptops.

Harvard architecture is almost identical, except that it stores instructions and data in separate memory units with separate buses. The CPU can therefore read an instruction and perform a data memory access at the same time, even without a cache. Pipelining works well in Harvard architecture, and when it is used an instruction can effectively be completed on every clock cycle. Harvard architecture is typically found in small embedded computers, such as those used for signal processing.
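The difference between the two architectures can be sketched as a rough bus-cycle count; the model below, in which every instruction needs one instruction fetch and one data transfer, is a simplification made for illustration and not a description of any real processor.

# Illustrative sketch: in the Von Neumann model one bus serves both
# instructions and data, so a fetch and a data access must take turns;
# in the Harvard model they overlap because instructions and data have
# separate memories and separate buses.

def von_neumann_bus_cycles(num_instructions):
    # One bus transfer to fetch the instruction, one to move its data.
    return num_instructions * 2

def harvard_bus_cycles(num_instructions):
    # Instruction fetch and data access happen in the same cycle.
    return num_instructions * 1

print(von_neumann_bus_cycles(100))  # 200 bus cycles
print(harvard_bus_cycles(100))      # 100 bus cycles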