Evolution of Operating Systems Designs/What is an Operating System?
A computer operating system is the software and/or firmware that manages the hardware of the computer and provides those resources, through an API, to application programs. By taking care of the hardware's needs and restrictions, the operating system frees the application program, and the application programmer, from the extra workload and hardware-specific knowledge that would otherwise be required. This makes it easier to write application programs for that computer. It also makes it easier to keep a file system intact and working, since all changes to the file system go through the operating system rather than being made by each application programmer independently. (Tsichritzis and Bernstein)
The first computers had no operating systems and could run only one program at a time. The earliest electronic computing circuits were little more than collections of separate functions and did not really need even a programming language to be tested. Around 1945, computers such as the ENIAC were built that filled a large room yet had roughly the power of the four-function keychain calculator you might carry today. Programming these behemoths was done with plugboards that looked a lot like a telephone switchboard. Since they were experimental machines running on vacuum tubes, maintenance was a constant issue, little productive work was practical, and only a few people knew how to run them. Programming was done in each individual machine's own machine language, so operating systems were not required. By the 1950s, punched-card equipment made it easier to read in a small program, but operating the system still required only the pushing of a few buttons: one to load the cards into memory and another to run the program.
In the mid-1950s, transistors made computer circuits solid state, eliminating the need to constantly patrol the innards of the machine for burned-out tubes and to clean bugs out of the electrical contacts. The first solid-state computers filled large air-conditioned computer rooms with racks of components and were used primarily by governments for tasks such as computing the ballistic tables needed to aim military weapons. Their expense created a need for tighter control of the time spent per operation, which led to the separation of the maintenance and operations group from the programmers. At first, programmers wrote out their programs on paper and punched them onto cards, which the operators would load into the machine one stack at a time. Later, a job control language was designed that allowed the programmer to declare job information as part of the card stack, and it became possible to run a program called a monitor that read in the stack and stored it on a tape called the input tape. The input tape could then be run on a separate computer, the mainframe, which spooled its output to a second tape, the output tape; that tape was later mounted on the printer to produce the program's results. The batch monitor was the closest thing this early form of computing had to an operating system: its job was to read information from the cards and set up the job, automatically freeing an operator. At about the same time, Fortran was invented, and a compiler could be loaded on the mainframe to let programmers write in a human-readable form while the computer still operated in machine language.
By 1965 the first operating-system-like programs were beginning to appear, such as the Fortran Monitoring System, which had to deal with loading three types of program: the Fortran source program, the compiler, and the machine-language program that was the result of compilation. This required that the system be able to tell in some manner which type of file it was reading from the tape. A similar system, IBSYS, was IBM's operating system for the 7094. Starting the operating system required that a tape containing it be loaded first. Early operating systems did not have the bootstrap mechanism that all modern operating systems use to initialize the computer, so starting up a computer was a long and involved process.
By the early 1960s it had become obvious that computer companies were having to support two completely different types of computer: character-based commercial machines like the 1401, and word-oriented scientific machines like the 7094. Maintenance and training on these completely different systems was problematic, so IBM invented System/360 and created a range of products that shared this common architecture. The System/360 was the first major computer line to use small-scale integrated circuits, and thus offered a major cut in price over earlier solid-state machines. This radically expanded both the market and the power of the machines.
OS/360 was written to serve as the operating system on all of the System/360 machines, from the smallest input processor to the largest number-crunching processor. It was made up of millions of lines of code written by thousands of programmers, and maintenance was problematic, since almost every time a bug was fixed a new one was introduced. Despite the impossibility of maintaining such a massive monolithic program, it was a distinct improvement over previous second-generation operating systems and became quite popular.
One of the concepts it popularized was multiprogramming: instead of waiting on a single job, the computer could run multiple jobs simultaneously. This eliminated waiting periods while the operators mounted tapes, since the computer could be processing a different program in the meantime. One problem with combining business operating systems and number-crunching operating systems was that the two workloads required different strategies. Business jobs were simple and required little computation per job, so mounting and unmounting tapes could leave the processor idle 80 to 90 percent of the time. On a large number cruncher, by contrast, small jobs were not economical, so tape handling amounted to more like 10 percent of a job and was not a significant problem. One way of getting around the idle time was to partition memory so that each job had its own partition. This went hand in hand with the multiprogramming concept, because it meant that more than one job could run at the same time on the same computer without interfering with the memory of the other jobs.
Along with this concept came the idea of running the card reader from one partition, the number crunching in another, and the output to the printer from a third, eliminating the satellite 1401-type computers and reducing the wear and tear on tapes from mounting stress. While third-generation machines were good for large runs, they were still essentially batch systems, and as programs became more complex, managing the card stacks for a program became unwieldy. This paved the way for timesharing systems, in which each user had his own terminal and all of them shared a central mainframe.
Timesharing did not become popular until late in the third generation, when the hardware for protection mechanisms became widespread. At that time work began on the idea of a computing utility: a single mainframe that would supply timesharing service to, for example, the whole Boston area. Out of this concept the MULTICS operating system was born. Meanwhile, the progression from small-scale integration to LSI (Large Scale Integration) and later VLSI (Very Large Scale Integration) eventually replaced whole rooms of computers with a small desktop unit. With LSI it became possible to build a whole computer system on a single rack. These minicomputers began to compete with IBM for smaller businesses, and because their financing terms were less onerous they became replacements for the small and mid-sized System/360 models. By the 1980s it was not surprising to find a timesharing installation that used minicomputers as front ends to handle terminal multiplexing, with a mainframe as the main computing engine.
Around 1970, Ken Thompson designed UNIX, a stripped-down single-user version of MULTICS, on a PDP-7 minicomputer. Since the source code was published, it became a popular operating system and was used extensively by universities and small businesses. By the 1990s it was so popular that more software was being written for it by outsiders than by AT&T, the license holder at the time. Legal battles between AT&T and the universities over who owned the rights to the new software in Unix resulted in two things: the eventual sale of the operating system to SCO, and the development by Linus Torvalds of an alternative kernel that could run the by then freely redistributable software produced by the universities, because it met the POSIX standard developed by the IEEE.
It was this defiant rewrite of an operating system kernel that created the open software movement which has powered the Internet revolution. Most operating systems today have gained from the work of that movement, because the sophistication of their Internet software depends on ideas pioneered and spread through free software.
For most purposes today, an operating system is best defined as the software that comes on an operating system's install CD or DVD. This definition works well for users and operating system vendors, and it has even been argued in court.
A definition of an operating system now might look like the following: an operating system is a set of design principles, together with the software that obeys them, which provides programmatic control of the computer's hardware.
By programmatic we mean that the software is used primarily by programs, as opposed to interactively by the user directly.
So C++, for example, is not an operating system, since it fails to provide many critical functions of computation. Without the execution of programs, the scheduling of processes, and many other functions provided to C++ by system software, it would be impossible for C++ to run on bare hardware. Basic, Lisp, Smalltalk, and Java, in contrast, can sometimes be considered operating systems: although routinely hosted on top of other OSes, they have sometimes been run directly on exotic special-purpose hardware.
Moving on to principles, UNIX for example has as its principles "everything is a file, referred to by a descriptor" and "programs act as filters". All of the little programs and utilities that obey these principles are an integral part of the operating system, whereas the X Window System, which does not obey them, is not.
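Both principles can be sketched in a few lines of C. The copy_fd routine below is a hypothetical helper invented for this illustration, not part of any standard library; it moves bytes from one descriptor to another using nothing but read() and write(), so the same code works whether the descriptors refer to a disk file, a pipe, a terminal, or a device:

```c
#include <unistd.h>

/* Illustrative sketch: copy everything readable on descriptor `in`
 * to descriptor `out`.  Because "everything is a file, referred to
 * by a descriptor", the same read()/write() calls work regardless
 * of what the descriptors actually name.
 * Returns the total number of bytes copied, or -1 on error. */
long copy_fd(int in, int out)
{
    char buf[4096];
    long total = 0;
    ssize_t n;

    while ((n = read(in, buf, sizeof buf)) > 0) {
        ssize_t off = 0;
        while (off < n) {               /* handle partial writes */
            ssize_t w = write(out, buf + off, (size_t)(n - off));
            if (w < 0)
                return -1;
            off += w;
        }
        total += n;
    }
    return n < 0 ? -1 : total;
}
```

Calling copy_fd(STDIN_FILENO, STDOUT_FILENO) from main() yields a minimal cat-like filter, the kind of small program that "acts as a filter" in a Unix pipeline.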
In contrast, Plan 9, a derivative of Unix, has as its principle "everything is a filesystem" and many more functions are part of the operating system in Plan 9 than in Unix. In particular, Plan 9's windowing system is an integral part of the operating system.
A note on consistency
While an operating system is a set of principles and the software that obeys them, it is often the case that parts of the operating system disobey the principles. This is the case when the principles aren't expressed clearly and/or they are inadequate.
In the case of Unix and Plan 9, processes are neither filestreams nor filesystems. It is impossible, for example, to duplicate a process by copying it as one would a file. So we see that the principles "everything is a filestream / filesystem" are inadequate. Note that being a filter does not prevent a program from also being a filestream; the concept of filestreams is simply inadequate to cover processes.
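The point can be made concrete: the way Unix duplicates a process is the special fork() system call, not any copy operation on a file. The helper below is an illustrative sketch, with a function name invented for this example:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sketch: duplicating a process requires fork(), a dedicated system
 * call -- there is no file one could cp to obtain a second copy of a
 * running process, which is exactly where "everything is a file"
 * breaks down. */
int spawn_child_and_wait(void)
{
    pid_t pid = fork();          /* duplicate the calling process */

    if (pid < 0)
        return -1;               /* fork failed */
    if (pid == 0)
        _exit(42);               /* child: exit with a known status */

    /* parent: collect the child's exit status */
    int status;
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```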
The result of inadequate principles is that development proceeds at random, following the path of least resistance. The end result of such development cannot be characterized by any design principles at all and has abrogated whatever principles the system began with. This describes both modern Unix and C++. Such systems are perceived as cohesive only because their particular incoherence is sharply distinguished from that of other systems.
How to distinguish different operating systems?
For academic purposes, a good test of the difference between two operating systems is the ease of porting software between them. If porting software from one to the other is difficult because the concepts are different, then the OSes are relatively unalike. At the other extreme, if no effort is required to "port" from one to the other, then the two are relatively similar. Because there are legal reasons why we cannot call two different products produced by different manufacturers the same, we call them compatible.
Usually two systems that are compatible are so because they follow a single protocol called a standard. Organizations like the IEEE and ISO publish these standards, and different companies can build their products to a standard even though they have proprietary elements at a lower level. For instance, Linux and Unix both follow the IEEE POSIX standard, so they are compatible at the interface level that standard defines.
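As a small illustration of interface-level compatibility, the routine below uses only an interface defined by IEEE Std 1003.1 (sysconf() from <unistd.h>), so the same source compiles and behaves the same on Linux, the BSDs, or any other POSIX-conforming system, regardless of the kernel underneath. The function name is invented for this sketch:

```c
#include <unistd.h>

/* Sketch: code written against the POSIX interface alone.
 * sysconf(_SC_PAGESIZE) is specified by IEEE Std 1003.1, so this
 * query is portable across POSIX-conforming systems without
 * modification -- compatibility at the standardized interface,
 * independent of each vendor's proprietary implementation below it. */
long posix_page_size(void)
{
    return sysconf(_SC_PAGESIZE);   /* memory page size in bytes, or -1 */
}
```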
By this definition, many common operating systems can be lumped together and considered similar or compatible. OS/2 and Windows are similar; UNIX and BSD are similar, and Linux is compatible with them.
The minimum distance between two distinct operating systems is probably that between Plan 9 and Unix. Despite their differences, both were created by largely the same people to achieve much the same goals. And despite the time interval between Unix and Plan 9, it is obvious that the designers felt that many of the UNIX design decisions were still good. This all goes to show that Plan 9 and Unix are about as close as two operating systems can get while remaining distinct.
Note that this test of the distinctiveness of operating systems has an analog used in biology to establish the distinctness of species. Biology recognizes species as distinct when they can no longer freely interbreed. For our purposes, operating systems are distinct when they can no longer freely share code.