The Computer Revolution/Hardware/Binary
Computers use binary, or base 2, to store data. The binary system has only two possible digits, 0 and 1. To understand binary, start with something everyone is familiar with: decimal, or base 10. Base 10 has ten digits, 0 through 9. Numbers greater than 9 are represented by the position of digits within the number. The number 10 has a one in the tens place and a zero in the ones place, and is understood to be (1*10) + (0*1). Similarly, 100 has a one in the hundreds place, and is the same as (1*100) + (0*10) + (0*1). Now for a more complex number: 3,687. It's the same as (3*1000) + (6*100) + (8*10) + (7*1), which can also be written (3*10^3) + (6*10^2) + (8*10^1) + (7*10^0). The caret symbol represents an exponent. There's a pattern: moving left through the digits, each position is multiplied by an increasing power of the base (here, 10). The rightmost digit takes 10^0, the next 10^1, and so on.

How about binary? How do you represent a number with only two digits? The number 1, of course, is one, but 10 is two, 100 is four, 1000 is eight, and so on. Since computer scientists often mix number systems, the prefix 0b is placed in front of a binary number. 3687 is 0b111001100111 in binary, which is the same as (1*2^11) + (1*2^10) + (1*2^9) + (0*2^8) + (0*2^7) + (1*2^6) + (1*2^5) + (0*2^4) + (0*2^3) + (1*2^2) + (1*2^1) + (1*2^0).

So what are bits and bytes? The word bit is a shortening of the term binary digit. One bit of memory holds a single binary digit. A byte is a grouping of eight bits, which can conveniently store a single character; there are 256 (2^8) possible combinations of 8 bits.
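The positional expansions above can be checked directly in Python, which happens to use the same 0b prefix for binary numbers. This is an illustrative sketch: the loop multiplies each digit by the matching power of the base, exactly as described in the text.

```python
# Verify the positional expansion of 3,687 in base 10 and base 2.
n = 3687

# Base 10: each digit is multiplied by an increasing power of 10.
decimal_sum = 3 * 10**3 + 6 * 10**2 + 8 * 10**1 + 7 * 10**0
assert decimal_sum == n

# Base 2: each bit is multiplied by an increasing power of 2,
# starting from 2**0 at the rightmost bit.
bits = "111001100111"
binary_sum = sum(int(b) * 2**i for i, b in enumerate(reversed(bits)))
assert binary_sum == n

print(bin(n))  # → 0b111001100111
```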
A kilobyte (KB) is equal to 1024 bytes. The reason a kilobyte is 1024 rather than 1,000 is that 1024 is a power of two (2^10), so it fits binary addressing neatly. A megabyte (MB) is 1024 kilobytes, or 1,048,576 bytes; 1 MB holds about 500 pages of plain text. A gigabyte (GB) is about 1 billion bytes. A terabyte (TB) is about 1 trillion bytes, or roughly 500 million pages of text. A petabyte (PB) is about 1 quadrillion bytes. The largest order of magnitude in common use is the exabyte (EB), approximately 1 quintillion bytes.
Computers and Binary
Nearly all computers use binary. The system fits neatly into the two states of a single bit of memory: on represents one, and off represents zero. The 0s and 1s stand for information that can be represented in a variety of ways, and converting them into a form people can understand is called digital data representation. Because the computer can only follow directions given in binary form, binary is the computer's natural language.
Binary code is the form in which a computer handles data. For example, a computer understands "HI" as 0100100001001001. Text is stored in binary using coding schemes such as ASCII, EBCDIC, and Unicode. These codes represent all the characters that can appear in data: numbers, letters, and special characters and symbols such as the dollar sign, comma, percent sign, and many mathematical characters. Where ASCII and EBCDIC use only the Latin alphabet and are largely limited to English, Unicode can represent every language in binary code. It uses 1 to 4 bytes per character and can represent over a million characters. Unicode is used by the majority of web browsers and web applications, and much software has adopted it, including Microsoft Windows, Mac OS, Microsoft Office, and modern programming languages such as Java and Python. Unicode is constantly updated to add languages that were not originally encoded. Its greatest advantage is that it can be used across the world with consistent results. (Evans, A., Martin, K., & Poatsy, M. (2008). Understanding Computers Today and Tomorrow. The System Unit: Processing and Memory, 2, p. 54)
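The "HI" example can be reproduced by looking up each character's ASCII code and writing it as an 8-bit binary number. A minimal sketch in Python (for plain ASCII letters, the UTF-8 Unicode encoding produces the very same bytes):

```python
# How "HI" becomes 0100100001001001: take each character's ASCII
# code (H = 72, I = 73), write it as 8 bits, and concatenate.
text = "HI"
bits = "".join(format(ord(ch), "08b") for ch in text)
print(bits)  # → 0100100001001001

# The UTF-8 Unicode encoding of ASCII characters gives identical bytes.
assert text.encode("utf-8") == bytes([0b01001000, 0b01001001])
```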
Software programs, too, must be represented by 0s and 1s. Machine language is the binary code into which program instructions are translated before the computer can execute them.
The computer uses a coding system, a computer language, to represent data. Most computers today are binary (digital) computers, which recognize only two states, on and off, represented by the numbers 0 and 1. Information entered into the computer is translated into this form, processed, and then returned to the user in a form we can understand. Each 0 or 1 is a bit, the smallest unit of data a computer can recognize. Bits combine to form larger units of data such as the byte, kilobyte, megabyte, gigabyte, and terabyte (there are many more). The binary numbering system lets the computer represent numbers and arithmetic using only 0s and 1s, much as people use the decimal numbering system. Coding systems for text-based data include ASCII, EBCDIC, and Unicode. ASCII and EBCDIC, which use a fixed set of bits per character, are mostly used on personal computers. Unicode, which can encode any language, is rapidly replacing ASCII and EBCDIC. Like numeric and text-based data, graphics, audio, and video are also represented in binary.