Operating System Design/Print Version



Cover

Numerous universities offer courses on the design and programming of operating system kernels, and this textbook is designed for the students of such courses. Please note, however, that while this book is aimed at individuals who hope to work on operating systems in the future, whether professionally or as a hobby, it should not be used as an organisational space for creating a new operating system. Rather, this book aims to discuss key issues in the design of operating systems, and existing theories about doing so effectively, from a neutral point of view.

Also, this Wikibook is focused on single-box operating systems. None of the TOP500 supercomputers works this way: most of them distribute their operating system across many boxes in one building, in the manner described in the Building a Beowulf Cluster Wikibook, and even more widely geographically distributed systems are also becoming important.




Introduction

An operating system, often abbreviated OS, is the underlying software that directly interacts with the hardware of the platform and provides an environment for user applications to run.

Tasks

An operating system must provide several capabilities:

Hardware control

See also: Operating System Design/Processes/Scheduling and Operating System Design/Memory Management

Hardware in a modern computer has too much variety for control mechanisms to be hard-coded. In the old days, each application had to provide its own drivers to be able to use the hardware. Because of the diversity of today's hardware, most operating systems abstract the implementation details away from the application using the hardware. The operating system therefore needs a mechanism to reliably juggle the various needs of the underlying platform. This is provided through a driver mechanism, through which the operating system keeps the hardware in check.

In addition, multitasking would be impractical, if not impossible, without an operating system to manage the sharing of resources between competing applications. Without one, each application would have to access and control the resources itself, voluntarily giving up control of the hardware occasionally (an arrangement once common, called cooperative multitasking). This scenario has obvious implications for security and stability, as third-party applications in general cannot be trusted with direct hardware access. Therefore the operating system must schedule application processes' access to the processor (called pre-emptive scheduling) according to an algorithm that may be influenced by many factors.
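To make the idea concrete, here is a minimal sketch of a round-robin pre-emptive scheduler in C. All of the names (struct task, pick_next, context_switch) are illustrative inventions, not any particular kernel's API.

    /* Illustrative round-robin pre-emption: on every timer interrupt the
       kernel picks the next runnable task. All names are hypothetical. */
    #include <stddef.h>

    enum task_state { RUNNABLE, BLOCKED };

    struct task {
        enum task_state state;
        struct task *next;              /* circular run queue */
        /* saved registers, address space, ... */
    };

    /* Hypothetical: saves the current register state and loads the next. */
    void context_switch(struct task **current, struct task *next);

    static struct task *current_task;

    static struct task *pick_next(void)
    {
        struct task *t = current_task->next;
        while (t->state != RUNNABLE)    /* an always-RUNNABLE idle task
                                           guarantees this loop ends */
            t = t->next;
        return t;
    }

    /* Called from the timer interrupt handler. The running program is
       suspended whether it cooperates or not; that involuntary suspension
       is what makes the scheduling pre-emptive rather than cooperative. */
    void timer_tick(void)
    {
        struct task *next = pick_next();
        if (next != current_task)
            context_switch(&current_task, next);
    }

Real schedulers replace pick_next() with priority queues, time-slice accounting and fairness heuristics; those are the "many factors" mentioned above.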

Software environment

Applications need a safe and easy way to access hardware. Whether it be allocating memory, writing a file to permanent storage, playing a sound file, or showing a movie, at some point the application will need to call a function provided by the operating system. This is provided through an API, or Application Programming Interface. A well-rounded API will prevent code duplication and thus leave the application developer free to implement the needed behavior without hassle.
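For instance, on a POSIX system the "write a file" case boils down to a handful of system calls; the application never touches the disk hardware itself. A minimal, self-contained example:

    /* Writing a file through the OS API on a POSIX system. The program
       asks the kernel to do the I/O via open(), write() and close(). */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("hello.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fd < 0)
            return 1;                       /* the kernel refused */
        write(fd, "hello, world\n", 13);    /* system call: the kernel does the I/O */
        close(fd);
        return 0;
    }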

User interface

Most operating systems need some way for the user to operate them on a day-to-day basis. The interface should generally be efficient for power users while still guiding less technically proficient users. The interface can be either graphical or text-based.

Components

Operating systems generally consist of several parts. The principal parts are:

  1. The Kernel, which is the "core" of the OS.
  2. The Libraries, which provide an array of functions to applications.
  3. The Drivers, for interacting with and controlling external hardware.

The operating system will also come with:

  1. The Boot Mechanism, which loads the kernel into memory.
  2. A Command Interpreter, or "shell", that takes input from the user.

An OS might also implement a file system for storing data.

Some OSs allow only one program to run at a time, but many newer OSs allow multiple applications to run effectively simultaneously; such operating systems are called "multitasking operating systems". Some OSs are very large and depend on the user for input, while others are very small and are expected to work without human intervention. The first type are the desktop OSs, and the second type are the "real-time" OSs.


History

The earliest computers were purely mechanical devices which could run through a series of inputs and produce some output. Usually instructions and data were combined, or the instructions were built into the computer. Over time more generalised computers appeared that could be programmed. These early programmable computers did not have any operating system. However, some tasks are common to most programs (reading input and writing output for example) and so standard routines were developed to perform those tasks.

As computers were large and expensive, companies offered computer services to those who could pay for them. Initially this would have been on an ad hoc basis, but quickly developed into time sharing services where many people would run their programs on the same computer (in quick succession) and would be billed for the amount of time that their program took. These time sharing systems were the earliest operating systems.

As hardware has developed so have operating systems, removing inefficiencies and providing more services to the application programmer, and even the end user. Interactive systems have become common, particularly as more modern schedulers allow a single processor to perform a task while another task waits for I/O.

Further reading

A good reference book for computer history is "A History of Modern Computing" by Paul E. Ceruzzi. It outlines the development of the computer from a single operator machine to a multiple operator machine and beyond.




Kernel Architecture

Structure of monolithic, micro, and hybrid kernels.

The kernel is the core of an operating system. It is the software responsible for running programs and providing secure access to the machine's hardware. Since there are many programs, and resources are limited, the kernel also decides when and how long a program should run. This is called scheduling. Accessing the hardware directly can be very complex, since there are many different hardware designs for the same type of component. Kernels usually implement some level of hardware abstraction (a set of instructions universal to all devices of a certain type) to hide the underlying complexity from applications and provide a clean and uniform interface. This helps application programmers to develop programs without having to know how to program for specific devices. The kernel relies upon software drivers that translate the generic command into instructions specific to that device.
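One common way to express that translation layer, modelled loosely on the operations-table pattern used by Unix-like kernels, is a struct of function pointers that each driver fills in with its own device-specific routines. The names below are illustrative, not any real kernel's API:

    /* A generic-to-specific translation layer: the kernel calls the same
       entry points for every block device, and each driver supplies its
       own implementations. All names here are illustrative. */
    #include <stdint.h>

    struct block_device_ops {
        int (*read_sector)(uint64_t lba, void *buf);
        int (*write_sector)(uint64_t lba, const void *buf);
    };

    /* One driver for one kind of hardware... */
    static int ide_read_sector(uint64_t lba, void *buf)
    {
        (void)lba; (void)buf;       /* would program the IDE controller here */
        return 0;
    }

    static int ide_write_sector(uint64_t lba, const void *buf)
    {
        (void)lba; (void)buf;
        return 0;
    }

    static const struct block_device_ops ide_ops = {
        .read_sector  = ide_read_sector,
        .write_sector = ide_write_sector,
    };

    /* ...while the rest of the kernel stays oblivious to the hardware,
       e.g. device_read(&ide_ops, 0, buffer); */
    int device_read(const struct block_device_ops *dev, uint64_t lba, void *buf)
    {
        return dev->read_sector(lba, buf);
    }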

An operating system kernel is not strictly needed to run a computer. Programs can be directly loaded and executed on the "bare metal" machine, provided that the authors of those programs are willing to do without any hardware abstraction or operating system support. This was the normal operating method of many early computers, which were reset and reloaded between the running of different programs. Eventually, small ancillary programs such as program loaders and debuggers were typically left in-core between runs, or loaded from read-only memory. As these were developed, they formed the basis of what became early operating system kernels. The "bare metal" approach is still used today on many video game consoles and embedded systems, but in general, newer systems use kernels and operating systems.

There are four broad categories of kernels:

  • Monolithic kernels provide rich and powerful abstractions of the underlying hardware.
  • Microkernels provide a small set of simple hardware abstractions and use applications called servers to provide more functionality.
  • Exokernels provide minimal abstractions, allowing low-level hardware access. In exokernel systems, library operating systems provide the abstractions typically present in monolithic kernels.
  • Hybrid (modified microkernels) are much like pure microkernels, except that they include some additional code in kernelspace to increase performance.


Monolithic Kernel

The monolithic approach is to define a high-level virtual interface over the hardware, with a set of primitives or system calls to implement operating system services such as process management, concurrency, and memory management in several modules that run in supervisor mode.
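Under the hood, that set of primitives is frequently just a table: a trap instruction enters supervisor mode carrying a call number, and the kernel indexes a handler. A hedged sketch with illustrative names (no real kernel's table looks exactly like this):

    /* A monolithic kernel's system-call dispatch, in miniature. User code
       traps into supervisor mode with a call number; the kernel looks the
       handler up in a table. All names are illustrative. */
    #include <stddef.h>
    #include <stdint.h>

    typedef intptr_t (*syscall_fn)(intptr_t a, intptr_t b, intptr_t c);

    static intptr_t sys_read(intptr_t fd, intptr_t buf, intptr_t n)
    {
        (void)fd; (void)buf; (void)n;   /* real work omitted in this sketch */
        return 0;
    }

    static intptr_t sys_write(intptr_t fd, intptr_t buf, intptr_t n)
    {
        (void)fd; (void)buf; (void)n;
        return 0;
    }

    static const syscall_fn syscall_table[] = {
        [0] = sys_read,
        [1] = sys_write,
        /* process management, memory management, ... */
    };

    /* Entered from the trap stub, already running in supervisor mode. */
    intptr_t syscall_dispatch(size_t num, intptr_t a, intptr_t b, intptr_t c)
    {
        if (num >= sizeof syscall_table / sizeof syscall_table[0])
            return -1;                  /* unknown call: ENOSYS in practice */
        return syscall_table[num](a, b, c);
    }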

Even if every module servicing these operations is separate from the whole, the code integration is very tight and difficult to do correctly, and, since all the modules run in the same address space, a bug in one module can bring down the whole system. However, when the implementation is complete and trustworthy, the tight internal integration of components allows the low-level features of the underlying system to be effectively utilized, making a good monolithic kernel highly efficient. Proponents of the monolithic kernel approach argue that incorrect code does not belong in a kernel anyway, and that when the code is correct, the microkernel approach offers little advantage. More modern monolithic kernels such as Linux, FreeBSD and Solaris can load executable modules at runtime, allowing easy extension of the kernel's capabilities as required while helping to keep the amount of code running in kernel space to a minimum. The entire kernel runs in supervisor mode.
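The runtime-loadable modules mentioned above follow a well-known skeleton on Linux: an init hook, an exit hook, and a licence declaration. Kernel interfaces change between versions, so treat this as a sketch rather than a build recipe:

    /* A minimal Linux loadable kernel module: code added to a running
       monolithic kernel without recompiling or rebooting it. */
    #include <linux/module.h>
    #include <linux/kernel.h>
    #include <linux/init.h>

    static int __init hello_init(void)
    {
        printk(KERN_INFO "hello: module loaded\n");
        return 0;                   /* a nonzero return aborts the load */
    }

    static void __exit hello_exit(void)
    {
        printk(KERN_INFO "hello: module unloaded\n");
    }

    module_init(hello_init);
    module_exit(hello_exit);
    MODULE_LICENSE("GPL");

A module like this is inserted with insmod and removed with rmmod, extending the kernel exactly as the paragraph above describes.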

The monolithic operating system is the earliest and most common operating system architecture. Every component of the operating system is contained in the kernel and can directly communicate with any other (i.e., simply by using function calls). The kernel typically executes with unrestricted access to the computer system. OS/360, VMS and Linux are broadly characterized as monolithic operating systems. Direct intercommunication between components makes monolithic operating systems highly efficient. However, because monolithic kernels group components together, it is difficult to isolate the source of bugs and other errors. Further, because all code executes with unrestricted access to the system, systems with monolithic kernels are particularly susceptible to damage from errant or malicious code.

Case Study: Solaris

Solaris is a UNIX operating system developed by Sun Microsystems. It is most widely used on Sun's SPARC-based hardware, but is also available for x86 systems. The latest version of Solaris, Solaris 10, includes 64-bit support for the x86-64 architecture (i.e., AMD's Opteron and Intel's Xeon processors).

Case Study: Linux

Linux was begun in 1991 by Linus Torvalds, who developed it on an Intel 80386 computer; the initial versions were technically limited and supported only the i386 architecture. With contributions from dozens of other FOSS programmers, support for numerous features and architectures was added.

Linus originally started the project as an updated replacement for Minix, which Andrew S. Tanenbaum wrote as an example for an operating system design and implementation course. Many Linux distributions combine the kernel with many of the GNU utilities.

Case Study: Windows 9x

Windows 9x refers to the family of Windows 95, 98 and Me operating systems. They were set apart from the earlier Windows versions 1.0, 2.0 and 3.0 by their device drivers, their virtual memory management, and their MSDOS.SYS and MS-DOS-based kernel. The reign of Windows 9x ended in 2001, when the Windows NT-based Windows XP was released for both home and office use.

Architecture

The Windows 9x architecture was a step up in many ways from its predecessors. The GUI was redesigned, and the kernel supported virtual memory and a VFAT (virtual file allocation table) filesystem, unlike the FAT16 and FAT12 filesystems that came before it. Windows 95b, the third release of Windows 95, added support for FAT32. All versions of 9x supported FAT16 drive compression through DriveSpace, a program originally from MS-DOS 6.22.

The Kernel

The kernel was essentially a modified version of the MS-DOS kernel, with added virtual memory and memory protection that earlier Windows versions lacked. It was of monolithic architecture, unlike its NT-based successors (NT 3.1 and later).

The Registry

The registry acted as a central and convenient holding place for program and system data. Its top-level keys are conventionally named HKEY_CLASSES_ROOT, HKEY_CURRENT_USER, HKEY_LOCAL_MACHINE, HKEY_USERS, HKEY_CURRENT_CONFIG, and HKEY_PERFORMANCE_DATA. Each of these stores a specific type of data, such as hardware configuration data, application data, performance data, or user data. The system can access the registry at any time, and the current user is also allowed to access and edit it.
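Programs reach the registry through the Win32 API. A small sketch of reading a string value (the key path "SOFTWARE\ExampleApp" and the value name "InstallDir" are made-up examples, not real entries):

    /* Reading a registry value through the Win32 API. The key path and
       value name here are hypothetical. */
    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        HKEY key;
        char buf[256];
        DWORD size = sizeof buf;
        DWORD type;

        if (RegOpenKeyExA(HKEY_LOCAL_MACHINE, "SOFTWARE\\ExampleApp",
                          0, KEY_READ, &key) != ERROR_SUCCESS)
            return 1;
        if (RegQueryValueExA(key, "InstallDir", NULL, &type,
                             (LPBYTE)buf, &size) == ERROR_SUCCESS
                && type == REG_SZ)
            printf("InstallDir = %s\n", buf);   /* assumes the stored string
                                                   is null-terminated */
        RegCloseKey(key);
        return 0;
    }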

File Names

File names in Windows 9x were allowed to have up to 255 characters, a feature provided by the VFAT filesystem. Previous versions of Windows were limited to MS-DOS-style 8.3 filenames.
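The 8.3 limit comes directly from the on-disk format: a classic FAT directory entry is 32 bytes, eleven of which hold the name. VFAT keeps this layout and stores long names in extra entries flagged with a special attribute. A sketch of the entry layout:

    /* The classic 32-byte FAT directory entry. The fixed 8+3-byte name
       field is the source of the MS-DOS filename limit; VFAT stores long
       names in additional entries whose attribute byte is 0x0F. */
    #include <stdint.h>

    #pragma pack(push, 1)
    struct fat_dirent {
        uint8_t  name[11];            /* "FILENAMEEXT", space-padded */
        uint8_t  attr;                /* 0x0F marks a VFAT long-name entry */
        uint8_t  nt_reserved;
        uint8_t  create_time_tenths;
        uint16_t create_time;
        uint16_t create_date;
        uint16_t access_date;
        uint16_t first_cluster_high;  /* zero on FAT12/FAT16 */
        uint16_t write_time;
        uint16_t write_date;
        uint16_t first_cluster_low;
        uint32_t file_size;           /* in bytes */
    };
    #pragma pack(pop)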

Graphical User Interface

The GUI was significantly changed in the Windows 9x series, with the Start button, and the toolbar and taskbar both at the bottom of the screen by default, allowing running programs to be selected. There was also a new widget set (that is, a different standard look and feel for applications), and many new fonts became available.


Microkernel

Structure of monolithic and microkernel-based operating systems, respectively.

The microkernel approach is to define a very simple abstraction over the hardware, with a set of primitives or system calls to implement minimal OS services such as thread management, address spaces and interprocess communication. All other services, those normally provided by the kernel such as networking, are implemented in user-space programs referred to as servers. Servers are programs like any others, allowing the operating system to be modified simply by starting and stopping programs. For a small machine without networking support, for instance, the networking server simply isn't started. Under a traditional system this would require the kernel to be recompiled, something well beyond the capabilities of the average end-user. In theory the system is also more stable, because a failing server simply stops a single program, rather than causing the kernel itself to crash.

However, part of the system state is lost with the failing server, and it is generally difficult to continue execution of applications, or even of other servers, with a fresh copy. For example, if a (theoretical) server responsible for TCP/IP connections is restarted, applications could be told the connection was "lost" and reconnect, going through the new instance of the server. However, other system objects, like files, do not have these convenient semantics: they are supposed to be reliable, not to become unavailable randomly, and to keep all the information previously written to them. So database techniques like transactions, replication and checkpointing need to be used between servers in order to preserve essential state across single server restarts.

Microkernels generally underperform traditional designs, sometimes dramatically. This is due in large part to the overhead of moving in and out of the kernel, a context switch, in order to move data between the various applications and servers. It was originally believed that careful tuning could reduce this overhead dramatically, but by the mid-90s most researchers had given up. In more recent times newer microkernels, designed for performance first, have addressed these problems to a very large degree. Nevertheless, the market for existing operating systems is so entrenched that little work continues on microkernel design.
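Concretely, a server in such a system is just an ordinary program sitting in a receive loop. The sketch below uses invented primitives (ipc_receive() and ipc_reply() stand in for whatever the microkernel actually provides), so it illustrates the structure rather than any real API:

    /* A user-space server in a microkernel system: an ordinary program
       that loops on receive. ipc_receive() and ipc_reply() are
       hypothetical stand-ins for the kernel's real IPC primitives. */
    #include <stdint.h>

    struct message {
        uint32_t op;                 /* which service is requested */
        uint8_t  payload[56];
    };

    /* Hypothetical kernel-provided primitives: */
    int  ipc_receive(struct message *m);                  /* returns sender id */
    void ipc_reply(int sender, const struct message *m);

    int main(void)
    {
        struct message msg;
        for (;;) {
            int sender = ipc_receive(&msg);   /* block until a request arrives */
            switch (msg.op) {
            case 1: /* e.g. open a network connection */ break;
            case 2: /* e.g. transmit a packet */         break;
            default: msg.op = (uint32_t)-1;              break;
            }
            ipc_reply(sender, &msg);          /* unblock the client */
        }
    }

Stopping the networking service then really is just killing this process, and restarting it is just running the program again.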

Examples


Case Study: GNU Hurd

GNU Hurd is a kernel for GNU, a free Unix-like operating system developed by the GNU Project. Hurd was initiated in 1990 to add the only missing feature in the GNU operating system: a kernel. Although it has been under development for over 20 years, there is still no stable release.

It consists of a group of 24 server processes that run on the Mach microkernel. These servers provide most OS features, from authentication to filesystems.

Case Study: MINIX 3


Case Study: QNX

The QNX Operating System is ideal for realtime applications. It provides multitasking, priority-driven preemptive scheduling, and fast context switching ‒ all essential ingredients of a realtime system.

QNX is also remarkably flexible. Developers can easily customize the operating system to meet the needs of their application. From a "bare-bones" configuration of a kernel with a few small modules to a full-blown network-wide system equipped to serve hundreds of users, QNX lets you set up your system to use only those resources you require to tackle the job at hand.

QNX achieves its unique degree of efficiency, modularity, and simplicity through two fundamental principles:

  • microkernel architecture
  • message-based interprocess communication

QNX's microkernel architecture

QNX consists of a small kernel in charge of a group of cooperating processes. The structure is more like a team than a hierarchy, as several players of equal rank interact with each other and with their "quarterback" kernel.

The QNX Microkernel coordinating the system managers.

The kernel is the heart of any operating system. In some systems the "kernel" comprises so many functions that, for all intents and purposes, it is the entire operating system! But the QNX Microkernel is truly a kernel. First of all, like the kernel of a realtime executive, the QNX Microkernel is very small. Secondly, it is dedicated to only two essential functions:

Message passing
the Microkernel handles the routing of all messages among all processes throughout the entire system
Scheduling
the scheduler is a part of the Microkernel and is invoked whenever a process changes state as the result of a message or interrupt

Unlike processes, the Microkernel itself is never scheduled for execution. It is entered only as the direct result of kernel calls, either from a process or from a hardware interrupt.
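In the Neutrino generation of QNX, this send/receive/reply discipline surfaces as the kernel calls MsgSend(), MsgReceive() and MsgReply(). A sketch of the client side, assuming a connection to the server has already been attached (consult the QNX documentation for the exact semantics):

    /* Sketch of QNX-style synchronous message passing (QNX Neutrino
       kernel calls). The connection id coid is assumed to have been
       obtained earlier, e.g. via ConnectAttach(). */
    #include <sys/neutrino.h>
    #include <string.h>

    int ask_server(int coid)
    {
        char request[16] = "ping";
        char reply[16];

        /* MsgSend blocks until the server has received the request and
           replied: the synchronous rendezvous described above. */
        if (MsgSend(coid, request, sizeof request, reply, sizeof reply) == -1)
            return -1;
        return strcmp(reply, "pong") == 0;   /* hypothetical protocol */
    }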


Exokernel

General

An exokernel is a type of operating system kernel that limits itself to allocating and protecting resources, exporting them to sub-operating-systems called libOSes. The result is a very small, fast kernel environment. The theory behind this method is that by providing as few abstractions as possible, programs are able to do exactly what they want in a controlled environment, much as MS-DOS achieved through real mode, except with paging and other modern programming techniques.

LibOS

LibOSes give the programmer of an exokernel system a way to write cross-platform programs using familiar interfaces, instead of having to write his or her own. Moreover, they provide an additional advantage over monolithic kernels: with multiple libOSes running at the same time, one can in theory run programs from Linux, Windows, and Mac (provided that a libOS exists for that system) side by side on the same OS, with very little performance penalty.
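To illustrate how the layering works, here is a hedged sketch of a libOS providing a read()-style file abstraction on top of a raw block interface. exo_disk_read() stands in for a hypothetical exokernel primitive and is stubbed with an in-memory "disk" so the sketch is self-contained:

    /* How a library operating system might layer a file abstraction over
       an exokernel's raw block interface. Everything above the block
       primitive lives in the application's own address space. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /* Stand-in for the exokernel's raw block primitive (hypothetical),
       stubbed with an in-memory disk image for illustration. */
    static uint8_t disk_image[64 * 512];
    static int exo_disk_read(uint64_t block, void *buf)
    {
        if (block >= 64)
            return -1;
        memcpy(buf, disk_image + block * 512, 512);
        return 0;
    }

    struct libos_file {
        uint64_t first_block;   /* placed by the libOS's own on-disk format */
        uint64_t size;          /* file length in bytes */
    };

    /* The "read()" the application links against. Caching and layout
       policy are chosen by the libOS, not dictated by the kernel. */
    long libos_read(struct libos_file *f, void *buf, size_t n, uint64_t off)
    {
        uint8_t block[512];
        if (off >= f->size)
            return 0;                             /* past end of file */
        if (n > f->size - off)
            n = (size_t)(f->size - off);
        size_t in_block = 512 - (size_t)(off % 512);
        if (n > in_block)
            n = in_block;                         /* one block per call */
        if (exo_disk_read(f->first_block + off / 512, block) < 0)
            return -1;
        memcpy(buf, block + (size_t)(off % 512), n);
        return (long)n;
    }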

Case Study: XOK

XOK is the newest research exokernel to date, created by a computer science research team at MIT. Most UNIX programs (perl, gcc, emacs, etc.) compile on XOK with little or no change, and run at equal or greater speed than on FreeBSD when using the standard libOSes. With specialized libOSes, however, XOK can achieve large performance boosts; the greatest so far is the Cheetah web server, which runs at up to eight times its normal speed on FreeBSD or Linux.


Hybrid Kernel

A hybrid kernel is one that combines aspects of both micro and monolithic kernels, but there is no exact definition. Often, "hybrid kernel" means that the kernel is highly modular, but runs entirely in a single address space. This allows the kernel to avoid the overhead of a complicated message-passing system within the kernel, while still retaining some microkernel-like features.

Case Study: Windows NT/XP

Windows NT now refers to a family of Microsoft operating systems, all based on the original Windows NT released in 1993. All Windows operating systems since (XP, Vista, 7, etc.) are based on the NT kernel.

The Windows NT kernel is considered a hybrid kernel, and is also sometimes called a macrokernel. Although most system components run in the same address space, some subsystems run as user-mode server processes, and many of the design objectives are the same as those of Mach.


Case Study: Mac OS X

XNU (X is not Unix), the kernel of the Mac OS X operating system, is a hybrid. It is based on Mach, a microkernel, but also incorporates elements from BSD, which is monolithic. Thus, it has some of the advantages and disadvantages of both.

Case Study: BeOS

BeOS is an operating system for personal computers. It was under development from 1991 until development was discontinued in 2001. Although it has partial compatibility with POSIX and a Bash shell, it is not based on Unix sources and remains closed-source.

The open-source Haiku operating system aims to pick up where BeOS left off. It began development in 2001 and became self-hosting in 2008.



Initialization

When a computer is first started, it is in an unknown state. Static electricity and remnants of previous states can lead to values that are not valid states for the machine. In defense, computer programmers have learned to initialize all variables before using them.

After the initial start-up process, the next step depends on the type of computer.

Mainframes

For large mainframe computers, the Initial Program Load, or IPL, is used to load a bootstrap program, typically very tiny, whose purpose is to load the actual boot loader from disk, tape or other media.

Microcomputers

For typical desktop, server, or rack-mounted blade computers, the power-on self-test (POST) initializes the computer, which then passes execution to the ROM, where the BIOS initializes the bottom page or so of RAM and then passes execution to the boot process.

Bootstrap

One of the first things that the boot process does is load the boot loader, which then loads the operating system.


Bootloader

A boot loader is a small program which is started from the Master Boot Record (MBR) of a hard disk, floppy disk, CD/DVD or other storage device. It is loaded by the computer's BIOS after the BIOS has initialized a small portion of the system's hardware. The role of a boot loader is to load an operating system from a storage device, set up a minimal environment in which the OS can run, and run the operating system's startup procedure.

Because on most systems (most notably IA-32 IBM-compatible systems) the boot loader is only allowed a very small size (510 effective bytes on a floppy disk, 446 bytes on a hard disk), the boot loader is usually split into stages. Stage 1 loads stage 2 from a specific sector on the disk, then stage 2 initializes the system and loads the kernel from a specific file on the disk. This means that the stage 2 boot loader has to be able to interpret the system's file system. Sometimes an extra stage (commonly called stage 1.5) is placed between stage 1 and stage 2; it is also capable of interpreting the file system, and allows the stage 2 boot loader to be moved around the disk, for example after disk defragmentation or editing of the stage 2 boot loader.
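The 446-byte figure comes from the layout of the Master Boot Record itself, which can be described as a C struct:

    /* The 512-byte Master Boot Record. Stage 1 must fit in 446 bytes
       because the rest of the sector holds the partition table and the
       mandatory 0xAA55 boot signature. */
    #include <stdint.h>

    #pragma pack(push, 1)
    struct mbr {
        uint8_t boot_code[446];       /* the stage 1 boot loader lives here */
        struct {
            uint8_t  status;          /* 0x80 = bootable */
            uint8_t  chs_first[3];    /* CHS address of first sector */
            uint8_t  type;            /* partition type id */
            uint8_t  chs_last[3];     /* CHS address of last sector */
            uint32_t lba_first;       /* LBA of first sector */
            uint32_t sector_count;
        } partition[4];               /* 4 entries x 16 bytes = 64 bytes */
        uint16_t signature;           /* must be 0xAA55 */
    };
    #pragma pack(pop)
    /* 446 + 64 + 2 = 512 bytes: exactly one sector. */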

Often, boot loaders allow the user to select between several different operating systems, and choose which one to boot. This feature is called multi booting (or dual-booting). Many boot loaders also support passing parameters to the kernel. These are like command-line arguments, and are generally used to tell the kernel about the configuration of the system. Some even load 'modules' into memory for the OS.

LILO, GRUB and GRUB2

For *nix users, the LILO and GRUB boot loaders are the most common. Apart from booting Linux, they can boot Windows using chain loading. Microsoft Windows has its own proprietary boot loader.

LILO is the LInux LOader.

GRUB is an acronym for the GRand Unified Bootloader. GRUB is popular among operating system developers because it can bring the system to 32-bit protected mode without much effort, after which the kernel can be started as if it were any other application. GRUB supports the Multiboot Specification, which specifies how any compliant kernel can be loaded by GRUB.
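A kernel announces Multiboot compliance by embedding a small header in the first 8 KiB of its image. Under the version 1 specification the header looks like this (the section name .multiboot is a common convention; the kernel's linker script must place it early in the image):

    /* A minimal Multiboot 1 header. GRUB scans the first 8 KiB of the
       kernel image for the magic number; magic + flags + checksum must
       sum to zero. */
    #include <stdint.h>

    #define MULTIBOOT_MAGIC 0x1BADB002u
    #define MULTIBOOT_FLAGS 0x00000003u   /* page-align modules, want memory map */

    struct multiboot_header {
        uint32_t magic;
        uint32_t flags;
        uint32_t checksum;
    };

    __attribute__((section(".multiboot"), used))
    static const struct multiboot_header mb_header = {
        MULTIBOOT_MAGIC,
        MULTIBOOT_FLAGS,
        (uint32_t)(0u - (MULTIBOOT_MAGIC + MULTIBOOT_FLAGS))
    };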

The next version of GRUB, GRUB2, supports 64-bit systems and has a new, extensible implementation of the Multiboot Specification.

ReactOS Bootloader (FreeLoader)

The ReactOS bootloader (FreeLoader) is the bootloader from the ReactOS project. It supports only FAT filesystems, but can load Windows, ReactOS, and Multiboot kernels.



Hardware Initialization

Typically, when a CPU starts up, it does some internal consistency checks and transfers control to a PROM or EPROM device that contains permanent coding meant to survive a power loss. In some computers, this is all the code they need. However, in many general computing devices, this read-only memory defines a BIOS or Basic I/O System, capable of finding a boot sector on a standard secondary memory device. In some cartridge type game machines, the BIOS transfers control to the cartridge after doing some preliminary tests to make sure the machinery is working correctly. On other machines like the PC and Mac, the BIOS calls a utility from read only memory and lets that check the machinery.

Power On Self-Test

The POST, or Power On Self-Test triggers the initialization of peripheral devices and the memory of the computer, and may, if preset parameters are set to allow it, do a preliminary memory check. It also sets the bottom of memory so that the operating system knows how much memory it has to work with.

Once the Power On Self-Test is completed, the computer attempts to pass control to the boot sector of the secondary memory device. In cases like the PC where you may have multiple secondary memory devices, it can sample each device in turn, according to either a standard pattern, or according to parameters set in a battery protected static RAM device. The boot sector is part of the bootstrap system that loads the specific operating system.

Boot Sector

Before boot sectors, the operating system had to be fully loaded before it could be run. The utility of the boot-strap process is that it uses a process analogous to a technique developed for climbing: climber's spare boots were used as weights to sling a light line called the bootstrap up over a promontory, and then by tying the bootstrap to a heavier line and pulling on the light line, eventually allow the climber to sling his heavier climbing rope up and over the promontory.

What the bootstrap program does is allow a small, sector-sized program to load a loader that eventually loads the operating system. The exact number of intermediate loaders needed depends on the operating system. In DOS, the boot sector loads IO.SYS and MSDOS.SYS, which read CONFIG.SYS to configure the computer, then load COMMAND.COM, which runs AUTOEXEC.BAT. The sector-sized program is called the boot sector.

Kernel Initialization

Once the kernel is fully loaded, the next step in initialization is to set the kernel parameters and options, and add any modules that have been selected in the kernel set-up file. Once the kernel is fully initialized it takes over control of the computer and continues initialization with the file-systems and processes.

File System Initialization

The kernel starts up the processes and loads the file systems. The main file system then includes initialization files, which can be used to set up the operating system's environment and initialize all the services, daemons, and applications.

Plug and Play

The idea of Plug and Play is that during initialization the system builds up a database of devices, and afterwards an application can query the database to find out details about any device on the system. Along with this form of initialization comes an extended BIOS on the device, which allows it to detect the operating system and set its parameters to be more compatible with that operating system. This combination is especially useful for secondary initialization, where you might want to initialize an application (like Windows) to accept the drivers for that device.
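At its core, the device database is a matching problem: an identifier reported by the hardware is looked up in a table of known drivers. The sketch below uses PCI-style vendor/device IDs for illustration; the table contents and function names are examples, not a real system's database:

    /* Matching a detected device against a driver table: the essence of
       a Plug and Play database. IDs and driver names are illustrative. */
    #include <stddef.h>
    #include <stdint.h>

    struct device_id {
        uint16_t vendor;
        uint16_t device;
        const char *driver;           /* driver to bind to this device */
    };

    static const struct device_id known_devices[] = {
        { 0x8086, 0x100E, "e1000"   },    /* an Intel NIC, for example */
        { 0x10EC, 0x8139, "rtl8139" },    /* a Realtek NIC */
    };

    const char *find_driver(uint16_t vendor, uint16_t device)
    {
        for (size_t i = 0;
             i < sizeof known_devices / sizeof known_devices[0]; i++)
            if (known_devices[i].vendor == vendor &&
                known_devices[i].device == device)
                return known_devices[i].driver;
        return NULL;                  /* unknown device: nothing to bind */
    }

Hot socketing, described next, reuses exactly this kind of lookup whenever a device appears on the bus.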

Hot Socketing

A further extension of this concept was developed for the Universal Serial Bus, which allows devices to be hot socketed, or installed while the computer is running. The USB bus contacts the new device and learns from it the information necessary to match it to a driver. This information is put into the database, and whenever that device is plugged in again, the same driver is found for it. When the device is unsocketed, the driver shuts itself down and removes itself from the list of active devices. It can do this because it monitors the USB controller to make sure its device is still attached.


Processes

Memory Management

File Systems

Security

Interface

Glossary