Evolution of Operating Systems Designs/Security: capabilities
An actor is something that may perform an action. This commonly will be a user, a process, a thread, or similar.
An actee or object is something that may be manipulated by an actor. It may be a file, registry key, relational database entry, hardware device, or similar. Generally, this is data. Actors can generally serve as objects too; one process (serving as actor) may kill another process (serving as object).
An action or right is something that an actor may do to an object. Example actions include reading and writing.
Security systems generally implement a useful subset of a 3-dimensional matrix with dimensions of actor, object, and action. The full matrix is never implemented, for both performance and usability reasons.
Mainstream operating system security is primarily based on access control lists. For each object, we can maintain a list of actors which may manipulate it.
Capability-based systems, not to be confused with systems that refer to special administrative abilities as capabilities, flip things around the other way. For each actor, they maintain a list of objects upon which the actor may act.
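The two approaches can be pictured as slices of the same access matrix: an ACL is a column (one object, many actors), while a capability list is a row (one actor, many objects). A toy Python sketch, with invented actor and object names, makes the symmetry concrete:

```python
# The access matrix: rows are actors, columns are objects, cells are
# sets of rights. All names here are illustrative.
matrix = {
    ("alice", "payroll.db"): {"read", "write"},
    ("bob",   "payroll.db"): {"read"},
    ("bob",   "report.txt"): {"read", "write"},
}

def acl(obj):
    """ACL view: for one object, which actors hold which rights."""
    return {a: r for (a, o), r in matrix.items() if o == obj}

def capabilities(actor):
    """Capability view: for one actor, which objects it may act on."""
    return {o: r for (a, o), r in matrix.items() if a == actor}

print(acl("payroll.db"))    # alice: read+write; bob: read
print(capabilities("bob"))  # payroll.db: read; report.txt: read+write
```

Either view answers the same authorization question; they differ in what is cheap to enumerate, which is exactly the trade-off the two system designs make.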
Mainstream operating system security is discretionary. That is, an object owner may decide who else has access to the object.
Mandatory access control is non-discretionary. A set of rules acts to enforce security, setting up permissions that users are unable to override. This stops insiders from being effective spies, selling designs to the competition, sharing medical records with the press, running spyware, and making many types of user errors.
None of the above methods excludes any other of the above methods. DG/UX had capabilities, discretionary ACLs, and mandatory access control. Linux provides discretionary ACLs and several choices for mandatory access control. File descriptors, available on all UNIX-like systems, can serve as capabilities.
The first security attempts were protection schemes, which controlled the access of programs to sensitive areas such as page zero, where the software interrupt vectors for operating system calls were usually stored for efficient calling, or the operating system areas, where the operating system code was kept while it was running. This protection was required for time-sharing because neophyte programmers, not yet understanding the addressing system, sometimes overwrote the operating system and shut down the whole computer. CPUs such as the Z80 began to be designed to set aside "system" areas so that they could not be overwritten by mistake.
One way of doing this was to create a separate name space for the system and control access to it. Often the documentation of how to access the protected mode was scant, or missing from popular books, as an attempt at security by obscurity. This turned out to be relatively useless, since it meant that an underground market for information was created, and only the crackers knew for sure how they were breaking into operating systems.
Some CPUs set up complex software interrupts as gateways between the protected-mode memory and the user areas. Each user area had its own software interrupt area that performed a system call into the protected system area. Because the interrupt area was within the user's own space, its interrupt vectors could be overwritten with impunity, without affecting any other program's use of the operating system. This was a useful mechanism because peripheral drivers tended to supply their own interrupt service routines for specific peripherals.
An application like Windows could create a virtualized version of itself and modify the interrupt vectors within that one virtual copy, without affecting the rest of the operating system. This is essentially the difference between 286 and 386 protected mode: the virtual copy of Windows could modify its own interrupt vectors without affecting the rest of the machine. Before 386 protected mode, all programs shared the same interrupt vectors, and one program's mishandling, or malware's sabotage, of the interrupts could trigger the collapse of the whole system.
Access Control Lists
Traditional UNIX access control is a type of discretionary ACL. The many possible actions are grouped into four categories: read, write, execute, and special operations normally reserved for an object's owner. The list of actors associated with each object is quite restricted, simplifying both the implementation and the user experience. There are three actors listed: owner, group, and other. An actor gets the permissions of the first of these categories that it qualifies for. The owner always gets special operations that the others do not. The group is an indirect reference to a list of actors specified elsewhere, and is thus a form of compression. The "other" entry is just that: all other actors. Through the creation of groups, traditional UNIX access control can provide a great deal of power. The creation of groups is normally limited, however, often being an administrative action that requires human approval.
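The first-match rule above can be sketched in a few lines of Python. The field names and numeric ids here are illustrative, not the kernel's actual data structures:

```python
def unix_access(uid, gids, obj, want):
    """First-match rule: classify the actor as owner, group, or other,
    then check the wanted right against ONLY that category's bits."""
    if uid == obj["owner"]:
        cls = "owner"
    elif obj["group"] in gids:
        cls = "group"
    else:
        cls = "other"
    return want in obj["mode"][cls]

# A file owned by uid 1000, group 10, mode rw-r----- (in rwx terms).
f = {"owner": 1000, "group": 10,
     "mode": {"owner": "rw", "group": "r", "other": ""}}

print(unix_access(1000, [10], f, "w"))  # True  (matched as owner)
print(unix_access(1001, [10], f, "w"))  # False (group class lacks write)
print(unix_access(2000, [99], f, "r"))  # False (falls through to other)
```

Note the first-match behavior: the owner gets the owner category's permissions even when the group category would grant more, which occasionally surprises people who set a file's group bits wider than its owner bits.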
The action categories may be more or less fine-grained. Netware uses read, write, create, erase, modify, file scan, access control, and supervisor. A system may split the normal "write" category into overwrite and append. A system may lack an "execute" category, instead simply requiring read access to execute a file.
Many systems allow for somewhat arbitrary lists of actors to be associated with each object. This includes Windows, modern UNIX-like systems including Linux, and Netware. The list might support a dozen entries or a few hundred entries, as determined to be a good compromise between performance and control.
An interesting innovation in ACLs has been hierarchical actors. For example, the VSTa OS used decimal, dot-delimited actor identifiers. In this scheme, a userid is a series of decimal numbers separated by dots (e.g., 1.85.23.888), and if a user possesses a userid which matches the object's userid up to the userid's end, then the user owns the object. So, for example, a user possessing userid 1.85.23 would own any object assigned to 1.85.23.888, whereas the object itself (whether another user or program) would not be able to access other objects owned by 1.85.23. This scheme allows the dynamic creation of a hierarchy of users and subusers. A weaker form of this, compatible with the vast body of POSIX software, can be had by providing a mechanism for users of a UNIX-like system to create and control groups.
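The prefix-matching rule is simple enough to sketch directly; this is a toy illustration of the idea, not VSTa's actual code, and the ids are the examples from the text:

```python
def owns(user_id, object_id):
    """A user owns an object when the user's dotted id is a prefix
    of the object's id on dot boundaries."""
    u, o = user_id.split("."), object_id.split(".")
    return o[:len(u)] == u

print(owns("1.85.23", "1.85.23.888"))  # True: 1.85.23 is a prefix
print(owns("1.85.23", "1.85.2.888"))   # False: third component differs
print(owns("1.85", "1.850.3"))         # False: prefix must end at a dot
```

Splitting on dots before comparing is what keeps 1.85 from spuriously owning 1.850.3; a naive string-prefix check would get that case wrong.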
Capabilities are unforgeable references to objects that let their holder access a well-defined subset of operations defined on that object.
There are many different things which may serve as capabilities. There are human-readable capabilities called "passwords" which people make use of. There are flat, low-level, centralized capabilities implemented by experimental OS kernels. There are tickets granted by Kerberos servers. There are UNIX-style file descriptors, which may be passed from process to process to provide access.
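File-descriptor passing, the last item above, can be demonstrated with Python's standard library (`socket.send_fds`/`socket.recv_fds`, Python 3.9+). In real use the two ends of the UNIX socket pair would belong to different processes; here both live in one process purely to keep the sketch self-contained:

```python
import os
import socket

# A pipe's read end is a kernel object reachable only through its file
# descriptor -- an unforgeable reference, i.e. a capability.
r, w = os.pipe()
os.write(w, b"secret")

# The channel over which the capability is delegated. Normally the two
# ends would be held by a granting and a receiving process.
granter, receiver = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)

# Granting access: pass the read capability as ancillary data.
socket.send_fds(granter, [b"grant"], [r])
msg, fds, _, _ = socket.recv_fds(receiver, 16, 1)

# The receiver can now exercise the capability it was handed.
received = os.read(fds[0], 16)
print(received)  # b'secret'
```

The receiver never names the pipe and could not have opened it itself; possession of the descriptor is the entire authority, which is exactly the capability model.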
Operating systems such as KeyKOS and EROS have used flat, low-level and highly centralized capabilities, leaving the implementation of human-useful capabilities to higher level layers.
Capabilities in such schemes are flat because they can't be interrogated. You can't ask such a capability about, say, when it was created or how many times it's been accessed. Neither do such capabilities possess internal state which can be mutated. Typically, one can create a new downgraded capability from a capability one possesses, but one can't downgrade or change in any way the original capability itself. These capabilities aren't objects but primitive data types.
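The create-a-downgraded-copy pattern might look like the following in Python. This is a toy model of the semantics, not how KeyKOS or EROS represent capabilities internally:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    """A flat capability: an object reference plus a fixed set of
    rights. Frozen, so it carries no mutable internal state."""
    obj: str
    rights: frozenset

    def downgrade(self, keep):
        # Returns a NEW capability with a subset of the rights; the
        # original is untouched, and nothing can be added back.
        return Capability(self.obj, self.rights & frozenset(keep))

rw = Capability("file42", frozenset({"read", "write"}))
ro = rw.downgrade({"read"})

print(ro.rights)  # frozenset({'read'})
print(rw.rights)  # the original still has read and write
```

Intersecting with the requested rights (rather than taking them verbatim) is what makes downgrading monotonic: a holder can only ever shed authority, never mint more.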
Capabilities in such schemes are low-level because they do not offer high-level security abstractions. In KeyKOS, capabilities were of exactly two types, read and read+write, though KeyKOS capabilities could contain additional information to implement service levels and other such features. Capabilities in such schemes do not naturally aggregate other capabilities (a very simple and highly useful security abstraction); one must artificially create an intermediate object whose sole job is to aggregate them. Note that certain security abstractions (like dynamic permission inheritance) are practically unimplementable on top of such a low-level system.
Capabilities in such schemes are centralized because the kernel, and only the kernel, manages all capabilities. Rather than every module exporting its own capabilities, the only capabilities in the system are those in the kernel. This is actually a consequence of using low-level capabilities. If capabilities were sufficiently high-level as to provide message-passing services, then it would be feasible to force every module to use secondary capabilities that all route through a single protected kernel capability. Instead, each module (called a Domain) in KeyKOS had access to a maximum of exactly 16 capabilities, held in special slots protected by the kernel and accessible only through special calls.
Mandatory Access Control
Traditional mandatory access control, or MAC, is modeled on the scheme used by the U.S. government for handling classified information. More specifically, this is mandatory sensitivity control, or MSEN. There are levels of security such as TOP SECRET, SECRET, CLASSIFIED, and UNCLASSIFIED. There are also categories based on need-to-know or special program access. An actor with SECRET access is prohibited from reading TOP SECRET data, but might be permitted to write to it (though an additional discretionary ACL may prevent this). In general, one may read from lower security levels and write to higher security levels. There may be a concept of contamination, allowing an actor with SECRET access to write UNCLASSIFIED data as long as no CLASSIFIED or SECRET data has been read.
Mandatory integrity control (MINT) works the other way, preventing pristine objects from being contaminated with junk. In this case, all actors are permitted to read the most trusted data. An actor that tries to execute something downloaded from an untrusted source will either be stopped or will lose the ability to write to more trusted objects.
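The MSEN rules and their MINT dual reduce to simple level comparisons. A sketch, with the level list taken from the text and the function names invented for illustration:

```python
SENSITIVITY = ["UNCLASSIFIED", "CLASSIFIED", "SECRET", "TOP SECRET"]

def level(name):
    return SENSITIVITY.index(name)

def msen_allows(action, actor_lvl, obj_lvl):
    # Sensitivity: read down (no read up), write up (no write down).
    if action == "read":
        return level(actor_lvl) >= level(obj_lvl)
    return level(actor_lvl) <= level(obj_lvl)

def mint_allows(action, actor_lvl, obj_lvl):
    # Integrity is the dual: swap the read and write rules, so a
    # trusted object can never be contaminated from below.
    return msen_allows("write" if action == "read" else "read",
                       actor_lvl, obj_lvl)

print(msen_allows("read", "SECRET", "TOP SECRET"))   # False: no read up
print(msen_allows("write", "SECRET", "TOP SECRET"))  # True: write up is allowed
print(mint_allows("write", "SECRET", "TOP SECRET"))  # False under integrity
```

Seeing both policies as the same comparison with the inequality flipped is also why real systems can combine them: each check is evaluated independently and access requires passing both.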
More modern systems combine the functions of MSEN and MINT into a state-transition model with roles. As actors do things, their state may change. In a UNIX-like system with role-based access control, the execution of a new executable is a particularly important point at which state transitions happen. Role-based security is particularly useful for enforcing privacy regulations such as HIPAA (the U.S. medical-information law) and the EU privacy laws. Two popular implementations of this are SE Linux and RSBAC, both available for Linux.
Cryptographic Access Control
The advent of international networking spread security concerns beyond the usual realm of the operating system, to the data moving to and from the system over the network. The ability to mimic a valid data packet meant that data could be changed en route simply by rerouting the valid packet and replacing it with an invalid one. As a result, security had to extend not only to the local system, but also to all critical correspondence between systems. At first such mechanisms were implemented by sending digests of the original data as part of the data stream, on the assumption that a changed packet would not match the original digest. However, it was found that simple digests were not cryptographically secure and could be fooled into validating data that had been altered. As well, sending data in the clear meant that someone on an intervening system could read the packet with a packet sniffer and learn what the information was.
Once this was understood, the idea became to cryptographically protect the data, which it was hoped would at least slow down the reading of the mail, and to cryptographically secure the digest so that it could not be fooled as easily.
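The secured-digest idea survives today as the HMAC construction: keying the digest means a forger who alters the packet cannot recompute a matching tag without the shared secret. A sketch using Python's standard library, with the key and message invented for illustration:

```python
import hashlib
import hmac

key = b"shared-secret"                      # known to both endpoints
packet = b"transfer 100 to account 7"

# Sender attaches a keyed digest (HMAC-SHA256) to the packet.
tag = hmac.new(key, packet, hashlib.sha256).hexdigest()

def verify(key, packet, tag):
    expected = hmac.new(key, packet, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking the tag via timing.
    return hmac.compare_digest(expected, tag)

print(verify(key, packet, tag))                        # True
print(verify(key, b"transfer 100 to account 9", tag))  # False: altered en route
```

An unkeyed digest sent alongside the data provides no such guarantee, since anyone who modifies the packet can simply recompute the digest over the new contents.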