Foundations of Computer Science/Computing Machinery

Computing Machinery

We have studied some fundamental principles of computing and seen their power demonstrated in the technologies that operate on them. At the beginning we imagined that computing could be done purely mechanically and blindly through symbol manipulation, if only we could build a simple device capable of some simple tasks. In this lesson we will study the history of the development of computers and the principles on which all modern computer hardware operates. We will see that, at the physical layer, a computer is nothing but a simple device that follows simple rules to manipulate two symbols.

Computer Hardware

Computer hardware that can carry out computation automatically has been around for a relatively short time. What follows is a brief history of computing machines, from the earliest devices to the modern computers we have today.

Mechanical Computers

Charles Babbage invented the concept of a programmable computer and designed the first mechanical computers in the early 19th century. His difference engine was a mechanical device designed to tabulate polynomial functions, while his analytical engine was a general-purpose design. Programs and data were fed to the analytical engine on punched cards, and the output could be punched onto cards or directed to a printer, a curve plotter, or a bell. The machines used ordinary base-10 fixed-point arithmetic. Babbage's analytical engine was the first design for a general-purpose computer that is Turing-complete.

Analog Computers

In the early 20th century, mechanical analog computers were designed to perform increasingly sophisticated computations, e.g. solving differential equations by integration. Analog computers use continuously variable aspects of physical phenomena, such as electrical, mechanical, or hydraulic quantities, to model the quantities in a computation, but they lack the accuracy of modern digital computers.

Digital Computers

The world's first fully automatic digital computer was the electromechanical programmable computer Z3, built by Konrad Zuse in 1941. The Z3 used electric switches driving mechanical relays to perform computation. It replaced the decimal system with the binary system and pioneered the use of floating-point numbers. Programs and data were stored on punched film. The distinction between a digital device and an analog device is whether the representation of values is discrete or continuous. For example, black and white are two discrete values, but the infinitely many shades of gray between them (a grayscale) form a continuum.

Electronic Digital Computers

The world's first electronic digital programmable computer was Colossus, built with a large number of valves (vacuum tubes) in 1943. The design was all electronic and was used to help break the German Lorenz cipher during World War II.

Transistor Computers

From 1955 vacuum tubes were replaced by transistors in computer designs, resulting in smaller, more reliable, and more power-efficient machines and giving rise to the "second generation" of computers.

Integrated Circuit Computers

The integrated circuit, conceived in 1952 and first built in 1958, ushered in a new era of computing machinery: micro-computers. Microprocessors built from integrated circuits are used in the common computing devices you see today: desktop computers, laptop computers, phones, and even greeting cards.

Principles of Digital Computing

The mathematical foundation of digital computing is Boolean logic, invented by George Boole. In the 1930s Claude Shannon showed that electronic circuits can compute in binary using Boolean logic, which became the fundamental principle behind all modern computing devices.

Boolean Algebra

Boolean algebra has three basic operations, AND, OR, and NOT, defined on two values: true and false. The rules for evaluating the three operations are shown in the figure.

The truth table showing the rules for the three basic Boolean operations.

A Boolean operation operates on Boolean values and always results in a Boolean value. For the AND operation the result is true only when both operands are true. The OR operation, on the other hand, results in false only when both of its operands are false. The NOT operation takes one operand and simply negates it. We will see that if we can build electronic circuits that implement these three operations, we can build circuits that perform all kinds of arithmetic and logic functions.
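
The rules can be checked mechanically. The following short sketch (in Python, chosen here purely for illustration; it is not part of the original text) prints the truth table for the three operations using the language's built-in and, or, and not operators:

  # Enumerate every combination of two Boolean inputs and print the
  # results of AND, OR, and NOT, matching the truth table in the figure.
  for a in (False, True):
      for b in (False, True):
          print(f"a={a!s:5} b={b!s:5}  "
                f"a AND b={(a and b)!s:5}  a OR b={(a or b)!s:5}  NOT a={(not a)!s:5}")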

The three Boolean operations can be implemented physically using transistors. A transistor is fundamentally a tiny switch, as shown in the figure.

A transistor is fundamentally a tiny switch with three pins. When a logical 1 is applied to the control pin, the switch is closed, connecting in to out.

When a high voltage (logical 1) is applied to the control pin, the switch is closed, connecting the in pin directly to the out pin. A transistor operates on two voltages, high and low, which can be used to represent two logical values (true and false) or two binary values (1 and 0). We will use a high voltage to physically represent a logical 1 and a low voltage to represent a logical 0.

Transistors and Logic Gates

Transistors are tiny, simple devices, but they are the fundamental building blocks of electronic circuits. For example, we can build a device called a NOT gate using a single transistor, as shown in the figure.

A NOT gate constructed using a single transistor.

If we treat what is inside the red box as a unit, it behaves like a negator, which is known as a NOT gate in digital logic design. As shown in the truth table for this device (next to the figure), when the input is logical 1 the switch is closed, connecting the output to ground and pulling the output voltage low, which signifies a logical 0. On the other hand, when the input is logical 0 the switch stays open, which results in a high voltage on the output line, because the output is connected to the power supply through a resistor and, with no current flowing, the resistor causes no voltage drop. Once this device is built we can use it as a building block to construct more complicated circuits. We will use the following symbol to represent a NOT gate.

NOT gate
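
As a rough software analogy (a sketch added here for illustration, not part of the original text), the transistor-with-pull-up behavior of the NOT gate can be modeled in a few lines of Python:

  def not_gate(x):
      # Input 1 closes the switch and connects the output to ground (0);
      # input 0 leaves the switch open, so the resistor pulls the output
      # up to the supply voltage (1).
      return 0 if x == 1 else 1

  for x in (0, 1):
      print(f"NOT {x} = {not_gate(x)}")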

With transistors and the NOT gate we can build a device that performs the AND operation.

An AND gate built using two transistors and a NOT gate.

As shown in the figure, the device performs exactly an AND operation. The output is logical 1 (high voltage) only when both inputs are logical 1, which causes both switches to close, pulling the signal before the NOT gate low. The NOT gate then negates that signal, producing a high voltage, i.e. a logical 1.
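
A small sketch of the same idea (illustrative Python, assuming the series-transistor arrangement described above): the two transistors in series compute the inverse of AND, and the NOT gate restores the AND behavior.

  def not_gate(x):
      return 1 - x

  def and_gate(a, b):
      # Two transistors in series: the node before the NOT gate is pulled
      # low only when both switches are closed (a = b = 1).
      node_before_not = 0 if (a == 1 and b == 1) else 1
      return not_gate(node_before_not)

  for a in (0, 1):
      for b in (0, 1):
          print(f"{a} AND {b} = {and_gate(a, b)}")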

Similarly, we can build a device for performing the OR operation.

An OR gate built using two transistors and a NOT gate.

As shown in the figure, this kind of parallel structure guarantees that the output is a high voltage as long as at least one transistor is closed, that is, as long as at least one input is logical 1. In other words, the relationship between the two inputs and the output is the logical OR function.
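
The corresponding sketch for the OR gate (again illustrative Python, assuming the parallel arrangement described above): either closed transistor pulls the intermediate node low, and the NOT gate then produces the OR output.

  def not_gate(x):
      return 1 - x

  def or_gate(a, b):
      # Two transistors in parallel: the node before the NOT gate is pulled
      # low when either switch is closed; it stays high only when a = b = 0.
      node_before_not = 0 if (a == 1 or b == 1) else 1
      return not_gate(node_before_not)

  for a in (0, 1):
      for b in (0, 1):
          print(f"{a} OR {b} = {or_gate(a, b)}")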

Gates to Circuits

With the three basic gates (AND, OR, NOT) we can build any combinational logic circuit. A circuit consists of input wires, gates connected by wires, and output wires. Once a circuit is designed it can be viewed as a black box that implements some logic mapping inputs to outputs. Here is the standard circuit construction algorithm (a small code sketch of the first two steps follows the list):

  1. build a truth table from the desired logic
  2. derive a logic expression in sum-of-products form
  3. convert the expression to a circuit design using gates
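
The first two steps can be carried out mechanically. The following sketch (illustrative Python, with a hypothetical sum_of_products helper that is not part of the original text) derives a sum-of-products expression from a truth table given as a dictionary mapping input combinations to the desired output:

  def sum_of_products(names, table):
      # For every row whose output is 1, build a product (AND) term from the
      # input values, then OR all of the terms together.
      terms = []
      for inputs, output in table.items():
          if output == 1:
              literals = [n if v == 1 else f"(NOT {n})" for n, v in zip(names, inputs)]
              terms.append("(" + " AND ".join(literals) + ")")
      return " OR ".join(terms) if terms else "false"

  # Example: the bit-equality logic discussed next.
  equality = {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}
  print(sum_of_products(["a", "b"], equality))
  # prints: ((NOT a) AND (NOT b)) OR (a AND b)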

We want to construct a circuit that tests the equality of two bits. The two inputs are the two bits, represented physically by a high voltage (logical 1) or a low voltage (logical 0). According to the desired logic of the circuit we can draw the following truth table:

This logic tests whether two bits are the same/equal.

The first two columns enumerate all possible value combinations of the two input lines. The output is logical 1 (true) only when the two inputs are the same. Based on the truth table, we can derive the following logic expression (sum-of-products form):

(a AND b) OR ((NOT a) AND (NOT b))

To derive the sum-of-products form we check the lines in the truth table with a logical 1 for the output. We know the input combinations shown in these lines are supposed to cause the output to become logical 1. We can represent each of these lines using a logic expression; for instance, (a AND b) represents the last line because when both a and b are logical 1 the expression evaluates to logical 1 according to the definition of the AND operation. Similarly ((NOT a) AND (NOT b)) represents the first line. If we combine the two cases, we can represent the desired logic using a single expression: (a AND b) OR ((NOT a) AND (NOT b)). If we plug in the inputs for the cases in the truth table, this expression evaluates to the desired output value for each corresponding case. Because we know how to build the devices (gates) that implement the AND, OR, and NOT operations, we can build a device that compares whether two bits are equal. This device performs the operation purely mechanically (blindly) because it does not know the meaning of the operation.
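
We can confirm this expression mechanically. A short check (illustrative Python, not part of the original text) evaluates the expression for all four input combinations and reproduces the truth table:

  def equal_bit(a, b):
      # Direct translation of (a AND b) OR ((NOT a) AND (NOT b)) using 0/1 values.
      return 1 if (a and b) or ((not a) and (not b)) else 0

  for a in (0, 1):
      for b in (0, 1):
          print(f"a={a} b={b}  output={equal_bit(a, b)}")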

Using the same methodology we can gradually build more and more complicated circuits. For example, we could build a device that adds two binary digits: a one-bit adder. It is just a matter of figuring out the desired logic and constructing the device using the building blocks we already know how to make.

An adder circuit that adds two binary bits.

Once we get the sum-of-products form of the logic expression from the truth table, it is straightforward to construct the circuit, because all we need are the three types of logic gates and wire connections. The following figure shows the design of a one-bit adder (with carry-in) circuit using the three basic logic gates.

The circuit for producing the sum bit of a one bit adder.
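
A sketch of the one-bit adder's logic (illustrative Python using only AND, OR, and NOT, which may be arranged differently from the gates shown in the figure): the sum bit is 1 when an odd number of inputs are 1, and the carry-out is 1 when at least two inputs are 1.

  def not_(x):
      return 1 - x

  def full_adder(a, b, cin):
      # Sum bit in sum-of-products form: exactly one input is 1, or all three are.
      s = ((not_(a) and not_(b) and cin) or (not_(a) and b and not_(cin)) or
           (a and not_(b) and not_(cin)) or (a and b and cin))
      # Carry-out: at least two of the three inputs are 1.
      cout = (a and b) or (a and cin) or (b and cin)
      return (1 if s else 0, 1 if cout else 0)

  for a in (0, 1):
      for b in (0, 1):
          for cin in (0, 1):
              s, cout = full_adder(a, b, cin)
              print(f"{a} + {b} + {cin} -> sum={s} carry={cout}")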

We can construct multi-bit adders by connecting multiple one-bit adders. The following figure shows that a two-bit adder can be formed by connecting the carry-out of the first one-bit adder to the carry-in of the second one-bit adder.

A 2-bit adder circuit constructed from two 1-bit adders.
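
The chaining idea can be sketched in a few lines (illustrative Python; two_bit_adder is a hypothetical helper, and bit 0 denotes the least significant bit):

  def full_adder(a, b, cin):
      # One-bit adder: sum = a XOR b XOR cin, carry-out = majority(a, b, cin).
      s = a ^ b ^ cin
      cout = (a & b) | (a & cin) | (b & cin)
      return s, cout

  def two_bit_adder(a1, a0, b1, b0):
      # Ripple carry: the carry-out of the low-order adder feeds the
      # carry-in of the high-order adder.
      s0, c0 = full_adder(a0, b0, 0)
      s1, c1 = full_adder(a1, b1, c0)
      return c1, s1, s0   # carry-out followed by the two sum bits

  # Example: 3 + 2 = 5, i.e. 11 + 10 = 101 in binary.
  print(two_bit_adder(1, 1, 1, 0))   # prints (1, 0, 1)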

