Programmable Logic/Parallel Execution

This page discusses parallel execution, one of the most important concepts to understand when programming in an HDL.

Inline Execution

Inline execution is what normally occurs in a computer program: instructions are performed sequentially, with each instruction completing before the next one begins. This is not the way HDL programming works, although the HDLs studied here do provide some facilities for describing sequential operations. It is typically highly inefficient to force a hardware design to be completely sequential; in such cases it is usually better design to use a microcontroller and perform the task in software instead of hardware.
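
As a sketch of those limited sequential facilities, the Verilog fragment below (module and signal names are purely illustrative, not from this book) uses blocking assignments inside a procedural block; within that block the statements execute in source order.

// A minimal sketch, assuming illustrative names: blocking assignments
// inside a procedural block execute in source order, giving a limited
// form of sequential behavior.
module sequential_sketch (
    input  wire [7:0] a, b,
    output reg  [7:0] result
);
    reg [7:0] temp;

    always @(a or b) begin
        temp   = a + b;      // executes first
        result = temp >> 1;  // then this, using the updated temp
    end
endmodule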


Continuous Execution

Verilog, VHDL, and even SystemC are languages for describing electronic circuits, serving as an alternative to large and complicated schematics. Electronic hardware typically has many components that operate in parallel with one another, simultaneously.

As a result, the statements in an HDL are generally treated as executing in parallel unless specified otherwise. This is a very different paradigm from traditional computer programming, and it can be difficult for accomplished software programmers to translate their skills to an HDL.
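
For contrast with the sequential sketch above, the following Verilog fragment (names are illustrative) uses continuous assignments. Each assign statement describes a separate piece of hardware, all of which operate at the same time; reordering the lines does not change the circuit.

// A minimal sketch, assuming illustrative names: each continuous
// assignment describes a separate piece of hardware. All three operate
// simultaneously, and their textual order does not matter.
module parallel_sketch (
    input  wire a, b, c,
    output wire x, y, z
);
    assign x = a & b;   // an AND gate
    assign y = b | c;   // an OR gate, operating in parallel with the AND
    assign z = x ^ y;   // an XOR gate continuously driven by x and y
endmodule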

Problems and Pitfalls

Many traditional computer algorithms rely on the fact that code is executed sequentially. For this reason, many such algorithms cannot be translated into an HDL easily or reliably.

Forcing hardware operations to be sequential is not only a difficult task that takes longer to design, but also requires additional hardware resources in the form of latches, registers, and timing circuits. On the other hand, abandoning sequential operation entirely can cause its own problems with timing and design complexity.
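
As a rough illustration of those extra resources, the Verilog sketch below (the operations and names are hypothetical) uses a state register and a clock to force three steps to occur one per clock cycle. The state register and the temporary register are hardware that a purely parallel version of the same computation would not need.

// A minimal sketch, assuming hypothetical names and operations: a state
// register and a clock are required to force three steps to occur in
// sequence, one per clock cycle.
module sequenced_steps (
    input  wire       clk, reset,
    input  wire [7:0] in,
    output reg  [7:0] out
);
    reg [1:0] state;
    reg [7:0] temp;

    always @(posedge clk) begin
        if (reset) begin
            state <= 2'd0;
        end else begin
            case (state)
                2'd0: begin temp <= in + 8'd1;  state <= 2'd1; end // step 1
                2'd1: begin temp <= temp << 1;  state <= 2'd2; end // step 2
                2'd2: begin out  <= temp;       state <= 2'd0; end // step 3
            endcase
        end
    end
endmodule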

Relation to Software

Programmers with experience in multi-threaded software may quickly relate to the idea of synchronous parallel execution, although there are some fine points worth discussing. In software, all instructions are executed sequentially, one after another in a predefined order. The illusion of concurrent processes or "threads" arises because the different threads are switched between so quickly that they appear to run in parallel. Even on multi-core processors it is not really possible for all threads to run in parallel, because there are typically more active threads scheduled than there are cores available to process them all simultaneously.

In hardware, separate processes really do run simultaneously, with no illusions. Because multiple processes happen at the same time and their actions are synchronized to a global clock signal, it is easier to get the timing between processes correct and to avoid race conditions or deadlock. In software, the task scheduler may run individual threads in any order, or spread them across separate processing cores in an arbitrary way (the application programmer typically does not know what algorithm the scheduler uses).
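
The Verilog sketch below (names are illustrative) shows this contrast: two clocked processes driven by the same clock both update on every rising edge, and no scheduler decides which one runs first.

// A minimal sketch, assuming illustrative names: both clocked processes
// run on every rising edge of clk. There is no scheduler choosing which
// runs first; the hardware updates both registers simultaneously.
module two_processes (
    input  wire clk,
    input  wire a, b,
    output reg  p, q
);
    always @(posedge clk)
        p <= a & b;   // process 1: updates every clock edge

    always @(posedge clk)
        q <= a | b;   // process 2: runs in parallel with process 1
endmodule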

Another issue is that hardware modules do not load or unload dynamically the way software modules do. All of the hardware to be used must be created at once and is effectively static; additional modules cannot be loaded at runtime. Once a chip is produced, the only way to add functionality to it is to design and produce a new chip and then interface the two chips together. Another facet of this concept is that every hardware module always exists and always operates, even if the programmer does not take that operation into explicit account.

Consider a chip design with three modules: an input module, a processing module, and an output module. Signals are read from an external bus by the input module, processed, and then written back to the bus by the output module. Suppose the signal is currently in the processing module: what are the other two modules doing at that same time? The operation of every module needs to be taken into account at all times, and if some modules should act "dormant" at certain times, that behavior must be designed in explicitly, as in the sketch below.
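
One possible way to structure such a design is sketched below; all module, signal, and operation names are hypothetical, and the "processing" is a placeholder. The three stages are written as three clocked processes that run on every cycle whether or not they hold useful data; the valid flags are the explicit mechanism that lets a stage act "dormant" when it has nothing meaningful to do.

// A minimal sketch, assuming hypothetical names: all three stages are
// always present and always clocked. The *_valid flags are what make a
// stage behave as "dormant" when it has no useful data.
module three_stage_sketch (
    input  wire       clk,
    input  wire [7:0] bus_in,
    input  wire       bus_in_valid,
    output reg  [7:0] bus_out,
    output reg        bus_out_valid
);
    reg [7:0] captured, processed;
    reg       captured_valid, processed_valid;

    // Input stage: captures data from the external bus.
    always @(posedge clk) begin
        captured       <= bus_in;
        captured_valid <= bus_in_valid;
    end

    // Processing stage: still clocked every cycle, even when idle.
    always @(posedge clk) begin
        processed       <= captured + 8'd1;   // placeholder "processing"
        processed_valid <= captured_valid;
    end

    // Output stage: drives the result back to the bus only when valid.
    always @(posedge clk) begin
        bus_out       <= processed;
        bus_out_valid <= processed_valid;
    end
endmodule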