Microprocessor Design/Code Density

In early computers, program memory was expensive, so programmers spent a great deal of time minimizing the size of a program to fit it into the limited memory available. Thus the combined size of all the instructions needed to perform a particular task, the code density, was an important characteristic of any instruction set.[1] Even today, it makes a large difference whether the vast majority of executed instructions fit into the instruction cache, which is many orders of magnitude smaller than main memory.
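
As a toy illustration of how code density is measured, the following Python sketch totals the program memory consumed by one task on two hypothetical instruction sets. All instruction counts and widths are invented for illustration and do not describe any real processor.

  # Hypothetical comparison: total code size for the same task on two ISAs.
  def code_size_bytes(instruction_count, bits_per_instruction):
      # Total program memory the task occupies, in bytes.
      return instruction_count * bits_per_instruction // 8

  # A CISC-style ISA (assumed): fewer, longer instructions per task.
  cisc_bytes = code_size_bytes(instruction_count=40, bits_per_instruction=40)

  # A RISC-style ISA (assumed): more instructions, each a fixed 32 bits.
  risc_bytes = code_size_bytes(instruction_count=60, bits_per_instruction=32)

  print(cisc_bytes)  # 200 bytes
  print(risc_bytes)  # 240 bytes
  # Whichever ISA needs fewer total bytes for the same task has the
  # better code density.

The absolute numbers are meaningless; the point is that code density is a property of whole tasks (instruction count times instruction size), not of any single instruction.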

In the early decades of CPU development, designers gradually added more and more "powerful" instructions to the instruction set. Each such instruction reduced the total number of instructions needed to implement common tasks, but increased the average number of bits required to store each instruction; in many cases the net effect was better overall code density. After on-chip instruction caches were developed, the RISC revolution was led by people who deliberately sacrificed code density, eliminating complex "powerful" instructions in order to simplify and speed up the CPU implementation and improve other CPU performance metrics. More recently, some designers have experimented with minimal instruction set computers (MISC), making each instruction much shorter in an attempt to improve code density.

For processors with variable-length instructions, it helps code density if frequently-used instructions are short, even if that forces other (rarely-used) instructions to be long.
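
A minimal back-of-the-envelope sketch of why this trade-off pays off: the average instruction length is a frequency-weighted sum, so shrinking the common case dominates. The instruction mix and encoding widths below are assumed for illustration, not measured from any real program or ISA.

  # Assumed static instruction mix: 90% "common" opcodes, 10% "rare" ones.
  common_fraction, rare_fraction = 0.90, 0.10

  def average_bits(common_bits, rare_bits):
      # Frequency-weighted average instruction length, in bits.
      return common_fraction * common_bits + rare_fraction * rare_bits

  # Fixed-width encoding: every instruction takes 32 bits.
  fixed_width = average_bits(common_bits=32, rare_bits=32)     # 32.0 bits

  # Variable-length encoding (assumed): common instructions shrink to
  # 16 bits, and the rare ones grow to 48 bits to make room.
  variable_width = average_bits(common_bits=16, rare_bits=48)  # 19.2 bits

  print(fixed_width, variable_width)
  # Even though the rare instructions became longer, the weighted average
  # dropped, so the same program occupies less memory overall.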

For processors with fixed-width instructions, choosing 16-bit-wide instructions appears to give slightly better code density than 8-bit or 32-bit-wide instructions.[2]

There is a widespread belief that 32-bit microcontrollers have much bigger code size (worse code density) than 8-bit microcontrollers. However, in many cases 32-bit microcontrollers that use mostly 16-bit-wide instructions have smaller code size (better code density) than 8-bit microcontrollers that use mostly 14-bit-wide instructions or 16-bit microcontrollers that use mostly 16-bit-wide instructions.[3][4]
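
One reason for this counterintuitive result, suggested by the quote in reference [2], is that an 8-bit machine must spend several instructions manipulating 16-bit addresses and data that a 16-bit-wide ISA handles in one. The sketch below tallies the bytes needed for a single 16-bit addition using assumed, illustrative instruction counts and sizes rather than any particular architecture's encodings.

  # Assumed sizes and counts; these are illustrative only, not taken
  # from a real 8-bit or 32-bit instruction set.
  BYTES_PER_INSTRUCTION = 2

  # 8-bit ISA (assumed): a 16-bit add handles each half separately:
  # load lo, load hi, add lo, add-with-carry hi, store lo, store hi.
  instructions_8bit = 6

  # ISA with 16-bit-wide instructions and 16-bit registers (assumed):
  # the whole 16-bit value is loaded, added, and stored in one step each.
  instructions_16bit = 3

  print(instructions_8bit * BYTES_PER_INSTRUCTION)   # 12 bytes
  print(instructions_16bit * BYTES_PER_INSTRUCTION)  # 6 bytes
  # Wider registers and addressing can more than offset any per-instruction
  # size penalty, which is how a 32-bit core running 16-bit-wide
  # instructions can end up with denser code than an 8-bit core.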


As typical RAM sizes increased, as separate instruction caches grew large enough to hold inner loops and common subroutines (so code density had much less effect on CPU performance), and as nested subroutines, threaded code, and other instruction-set-independent executable compression techniques were invented,[5] code density became a less important consideration in instruction set architecture.

To do: say a few words about ARM Thumb instructions here

To do: say a few words about the EEMBC CoreMark code size benchmark here

References

  1. Wikipedia: code density
  2. "It seems that the 16-bit ISA hits somehow the "sweet spot" for the best code density, perhaps because the addresses are also 16-bit wide and are handled in a single instruction. In contrast, 8-bitters need multiple instructions to handle 16-bit addresses." -- "Insects of the computer world" by Miro Samek 2009.
  3. Joseph Yiu, Andrew Frame. DRAFT "32-Bit Microcontroller Code Size Analysis".
  4. "Code Density of LCP11xx".
  5. Data Compression/Executable Compression