Embedded Systems/Floating Point Unit

Floating point numbers are a computer representation of real numbers that stores a sign, a significand, and an exponent, allowing a very wide range of magnitudes at the cost of limited precision.

Like all information, floating point numbers are represented by bits.

Early computers used a variety of floating-point number formats. Each one required slightly different subroutines to add, subtract, and do other operations on them.

Because some computer applications use floating point numbers heavily, Intel standardized on one particular format and designed floating-point hardware that calculated much more quickly than the software subroutines. The 8086 could be paired with a floating-point co-processor, the 8087, which handled the floating point arithmetic functions and whose number format later became the basis of the IEEE 754 standard. In newer processors, the floating point unit (FPU) has been integrated directly into the microprocessor.

Many small embedded systems, however, do not have an FPU (internal or external). They therefore manipulate floating-point numbers, when necessary, the old way: with software subroutines, often called a "floating point emulation library".

However, floating-point numbers are not necessary in many embedded systems. Many embedded system programmers try to eliminate floating point numbers from their programs,[1] instead using fixed-point arithmetic. Such programs use less space (fixed-point subroutine libraries are far smaller than floating-point libraries, especially when just one or two routines are put into the system). On microprocessors without a floating-point unit, the fixed-point version of a program usually runs faster than the floating-point version. However, these embedded system programmers must figure out exactly how much accuracy a particular application needs, and make sure their fixed-point routines maintain at least that much accuracy.

Math Routines

Low-end embedded microcontrollers typically don't even have integer multiply in their instruction set.[2] So on low-end CPUs, you must use routines that synthesize basic math operators (multiply, divide, square root, etc.) from even simpler steps. Such routines have been posted on the internet for practically all microprocessors, by the manufacturer or by other users ("Multiplication and Division Made Easy" by Robert Ashby, "Novel Methods of Integer Multiplication and Division", "efficient bit twiddling methods", etc.).
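
For a concrete picture of what such a routine looks like, here is a minimal C sketch of the classic shift-and-add technique, building an 8x8-to-16-bit unsigned multiply out of nothing but shifts, adds, and comparisons (vendor libraries are usually hand-tuned assembly for the specific part, so treat this as an illustration rather than a drop-in replacement):

    #include <stdint.h>

    /* Shift-and-add multiply: walk the bits of b from least to most
       significant, adding a correspondingly shifted copy of a to the
       product whenever the current bit is set. */
    uint16_t mul8x8(uint8_t a, uint8_t b)
    {
        uint16_t product = 0;
        uint16_t addend  = a;      /* a shifted left as we walk b's bits */

        while (b != 0) {
            if (b & 1)             /* this bit of b contributes ...      */
                product += addend; /* ... one shifted copy of a          */
            addend <<= 1;
            b >>= 1;
        }
        return product;
    }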

Following the advice known as "Make It Work, Make It Right, Make It Fast" and "Make It Work, Make It Small, Make It Fast", many people pick one or two number resolutions that are adequate for the largest and most precise kind of data handled in a program, and use that resolution for everything. For desktop machines, 32-bit integers and 64-bit "double precision floating point" numbers are often more than adequate. For embedded systems, 24-bit integers and 24-bit "fixed point" numbers are often more than adequate. If the software fits in the microcontroller, and is plenty fast enough, it is a waste of valuable human time to try to "optimize" it further.

Alas, sometimes the software does not fit in the microcontroller.

  • If you run out of RAM, sometimes you only need 2 bytes or 1 byte or 4 bits or 1 bit to store a particular variable.
  • If you run out of time, sometimes you can add lower-precision math routines that quickly calculate the results needed for that inner loop, even though other parts of the code may need higher-precision math routines.
  • If you run out of ROM, sometimes you can trade time for ROM space. Rather than a collection of sets of math routines, each one customized to a slightly different width, you can use a single set of math routines that handles the maximum possible width. If some variables are narrower than that (to save RAM), you typically sign-extend them into a full-size register or global buffer, do full-width calculations there, and then truncate and store the result back at the small size, as in the sketch after this list.
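
A minimal C sketch of that last trade-off, assuming 16-bit variables kept in RAM and one shared full-width 32-bit multiply (the body of mul32 here is just a stand-in for the single library routine a real part would use):

    #include <stdint.h>

    /* The one full-width routine shared by every caller (stand-in body). */
    static int32_t mul32(int32_t a, int32_t b)
    {
        return a * b;
    }

    /* Variables stay 16-bit in RAM; calculations run at full width. */
    int16_t scale16(int16_t x, int16_t gain)
    {
        int32_t wide_x    = (int32_t)x;     /* sign-extend to 32 bits   */
        int32_t wide_gain = (int32_t)gain;
        int32_t result    = mul32(wide_x, wide_gain);
        return (int16_t)result;             /* truncate back to 16 bits */
    }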

Fixed-Point Arithmetic

Some embedded microprocessors may have an external unit for performing floating point arithmetic (an FPU), but most low-end embedded systems have no FPU. Most C compilers provide software floating point support, but it is significantly slower than a hardware FPU. As a result, many embedded projects enforce a no-floating-point rule on their programmers.[3][4][5][6][7] This is in strong contrast to PCs, where the FPU has been integrated into all the major microprocessors and programmers take fast floating point calculations for granted. Many DSPs also have no FPU and require fixed-point arithmetic to obtain acceptable performance.[8]

A common technique used to avoid the need for floating point numbers is to change the magnitude of the data stored in your variables so that you can use fixed point mathematics. For example, if you are adding inches and only need to be accurate to the hundredth of an inch, you can store the data as hundredths rather than as inches. This turns the calculation into ordinary integer arithmetic. The technique works as long as you know the magnitude of the data ahead of time, and know the accuracy to which you need to store it.
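
A sketch of the inches example in C (the type name and conversion macro are illustrative, not from any standard library):

    #include <stdint.h>
    #include <stdio.h>

    /* Lengths are stored as integer hundredths of an inch, so 12.34
       inches is stored as the integer 1234. */
    typedef int32_t hundredths_t;

    #define HUNDREDTHS(whole, frac) ((hundredths_t)((whole) * 100 + (frac)))

    int main(void)
    {
        hundredths_t a = HUNDREDTHS(12, 34);   /* 12.34 in */
        hundredths_t b = HUNDREDTHS(0, 66);    /*  0.66 in */

        hundredths_t total = a + b;            /* ordinary integer add */

        /* Split back into whole inches and hundredths for display. */
        printf("%ld.%02ld inches\n",
               (long)(total / 100), (long)(total % 100));  /* 13.00 */
        return 0;
    }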

FFT

People use many tricks and techniques to speed up Fourier transform calculation. The fast Fourier transform (FFT) is the biggest speedup, but several further tricks on top of it can each give another factor-of-two improvement.[9][10]

Many people implement the FFT using fixed-point arithmetic.[11][12][13][14][15][16]
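
Such FFTs usually keep samples and twiddle factors in a fractional format such as Q15, a 16-bit integer understood as a value in [-1, 1) scaled by 2^15; the workhorse operation inside each butterfly is then a fractional multiply. A sketch (the helper name is illustrative, and production code would add saturation):

    #include <stdint.h>

    /* Q15 x Q15 -> Q15 multiply with rounding. The 32-bit product is
       in Q30, so shift right by 15 to return to Q15. Note the one
       overflow case: (-1) * (-1) wraps, which real libraries handle
       by saturating to 0x7FFF. */
    int16_t q15_mul(int16_t a, int16_t b)
    {
        int32_t product = (int32_t)a * (int32_t)b;  /* Q30 intermediate */
        product += 1 << 14;                         /* round to nearest */
        return (int16_t)(product >> 15);            /* back to Q15      */
    }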

The Fast Hartley Transform is an alternative that requires fewer resources than the FFT.[17]

DTMF decoders often use the Goertzel algorithm because, when only a few frequencies need to be analyzed, it is much faster than the FFT algorithm.
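
A sketch of the Goertzel algorithm in C, returning the squared magnitude of one frequency bin; it costs only one multiply and two adds per sample, which is why scanning the handful of DTMF tones this way beats computing a full FFT:

    #include <math.h>

    float goertzel_power(const float *samples, int n,
                         float target_hz, float sample_hz)
    {
        /* Nearest integer bin for the target frequency. */
        int   k     = (int)(0.5f + (n * target_hz) / sample_hz);
        float w     = 6.283185307f * (float)k / (float)n;
        float coeff = 2.0f * cosf(w);
        float s1 = 0.0f, s2 = 0.0f;

        /* Second-order recurrence run over the whole sample block. */
        for (int i = 0; i < n; i++) {
            float s = samples[i] + coeff * s1 - s2;
            s2 = s1;
            s1 = s;
        }
        /* Squared magnitude, without computing the complex output. */
        return s1 * s1 + s2 * s2 - coeff * s1 * s2;
    }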

... more tips and hints here ...

Further reading

  1. Avoiding floating point arithmetic on the iPhone
  2. Robert Ashby. "Simplifying Formulas". 2006. quote: "we still need to find ways to simplify complex problems before we can put them into our 50-cent micros. Hardware multiplication is rarely seen below $1.50 ... That leaves us with the target of reducing everything down to addition, subtraction, shifting bits, and comparisons."
  3. "Stop using floating point!". quote: "Please learn to use fixed-point arithmetic"
  4. "Writing efficient JavaScript (HTML)" quote: "Use integer arithmetic where possible"
  5. Erich Styger. "Adding/Removing Floating Point Format for S08 Projects". quote: "Usually I do *not* use floating point numbers in my projects."
  6. "Avoiding floating point math".
  7. "Avoid floating point in hash table implementation."
  8. Boris Lerner. "Fixed vs. floating point: a surprisingly hard choice".
  9. Douglas L. Jones. "Decimation-in-time (DIT) Radix-2 FFT". OpenStax-CNX. September 15, 2006.
  10. Douglas L. Jones. "Efficient FFT Algorithm and Programming Tricks". OpenStax-CNX. February 24, 2007.
    • Kiss FFT, a library that can use either fixed or floating point data types.
  11. Simon Inns. "Fast Hartley Transformation Library for AVR microcontrollers".