Security Architecture and Design/Print Version



Computer Systems Architecture

A computer system consists of different types of components, such as hardware, software (including the operating system), and firmware.

The most important and common components are:

  • Hardware components: CPU, storage devices, I/O devices, communication devices
  • Software components: operating systems, application programs
  • Firmware

Central Processing Unit (CPU)

Overview

  • The CPU is the brain of the computer.
  • It fetches instructions from memory and executes them.
  • Each CPU type has its own instruction set and architecture.

CPU Components

  • Registers: temporary storage locations that hold references to memory locations, the next instruction to be executed, and so on, and that also enable the CPU to keep its status information.
  • Arithmetic Logic Unit (ALU): performs the actual execution of mathematical functions and logical operations on data.
  • Control Unit: manages and synchronizes the system while application code and OS instructions are being executed. It fetches the code, interprets it, and oversees the execution of the different instruction sets. It determines which application instructions get processed, and in what priority and time slice. It controls when instructions are executed, and this execution enables applications to process data (a simplified fetch-decode-execute sketch follows this list).
  • General registers: hold variables and temporary results as the ALU works through its execution steps.
  • Special registers: (dedicated registers) hold information such as the program counter, stack pointer, and program status word (PSW).
    • Program counter (PC): contains the memory address of the next instruction to be fetched.
    • Program status word (PSW): holds different condition bits. One of the bits indicates whether the CPU should be working in user mode (also called problem state) or privileged mode (also called kernel or supervisor mode).
    • Stack pointer: points to the top of the stack, telling the CPU where the next piece of data on the stack is located.
  • Address bus: a hardwired connection to the RAM chips in the system and to the individual I/O devices. It is used by the CPU to indicate the location of the instructions or data that need to be processed.
  • Data bus: used by memory or an I/O device, in response to a CPU request, to send back the data that resides at the requested location.
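
To make the fetch-decode-execute cycle concrete, here is a minimal, purely illustrative Python sketch of a CPU stepping through a tiny invented three-instruction program; the instruction names, register names, and memory layout are assumptions made up for the example, not any real instruction set.

    # Minimal, illustrative fetch-decode-execute loop (hypothetical 3-instruction ISA).
    memory = {
        0: ("LOAD", "R1", 100),   # R1 <- memory[100]
        1: ("LOAD", "R2", 101),   # R2 <- memory[101]
        2: ("ADD",  "R1", "R2"),  # R1 <- R1 + R2 (done by the ALU)
        3: ("STORE", "R1", 102),  # memory[102] <- R1
        4: ("HALT",),
        100: 40, 101: 2,          # data
    }

    registers = {"R1": 0, "R2": 0}  # general registers
    pc = 0                          # program counter: address of the next instruction

    while True:
        instruction = memory[pc]      # fetch (address placed on the address bus,
        pc += 1                       # instruction comes back on the data bus)
        op = instruction[0]           # decode (control unit)
        if op == "HALT":
            break
        elif op == "LOAD":
            _, reg, addr = instruction
            registers[reg] = memory[addr]
        elif op == "ADD":             # execute (ALU)
            _, dst, src = instruction
            registers[dst] = registers[dst] + registers[src]
        elif op == "STORE":
            _, reg, addr = instruction
            memory[addr] = registers[reg]

    print(memory[102])  # 42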

Storage

Overview

  • A storage device is a hardware device capable of storing data.
  • Storage devices can be classified into three categories:
    • Primary storage or memory: directly accessible to the CPU, such as cache memory (L1, L2, L3) and main memory (RAM)
    • Secondary storage: permanent storage devices such as hard disks, floppy disks, CDs, DVDs, flash memory, and ZIP drives
    • Tertiary storage: tapes

Random Access Memory

  • RAM temporarily stores data and program instructions for the operating system and application programs.
  • It is described as volatile because if the computer’s power supply is terminated, then all information within this type of memory is lost.
    • RAM Types
      • Dynamic RAM (DRAM)- uses capacitors, which must be refreshed periodically to hold the data; relatively slow.
        • Synchronous DRAM (SDRAM)- the timing of the CPU and the timing of the memory activities are synchronized; it can access only one block at a time.
        • Extended data out DRAM (EDO DRAM)- can capture the next block of data while the first block is being sent to the CPU for processing; faster than standard DRAM.
        • Burst EDO DRAM (BEDO DRAM)- builds on EDO DRAM; it reads and sends up to four memory addresses in a small number of clock cycles.
        • Double data rate SDRAM (DDR SDRAM)- carries out read operations on both the rising and falling edges of a clock pulse, effectively doubling the transfer rate of SDRAM.
      • Static RAM (SRAM)- uses more transistors and is faster than DRAM, but more expensive; used for cache.

Read Only Memory

  • ROM is non-volatile memory; the instructions stored in it are called firmware.
  • ROM Types
    • Programmable read-only memory (PROM)- can be programmed only once, after manufacturing.
    • Erasable and programmable read-only memory (EPROM)- can be erased, modified, and upgraded.
    • Flash memory- a special type of memory used in digital cameras, BIOS chips, memory cards for laptops, and video game consoles. It is used more as a type of hard drive than as working memory.

Cache Memory

  • The cache memory is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations.
  • Cache Levels
    • L1 Cache- located within the CPU
    • L2 Cache- located between the CPU and the main memory (RAM)
    • L3 Cache- can be an externally managed memory that has faster access time than RAM.
  • Cache Types
    • Prefetch cache (P-cache)- used to store data that has been brought in as a result of a prefetch operation.
    • Write cache (W-cache)- acts as a holding station for stored (written) data.
    • Instruction cache- used to store instructions.
    • Data cache- stores data.
    • Translation lookaside buffer (TLB)- stores recent translations of virtual page addresses to valid physical addresses.
  • Cache Organization- describes the organization of cache lines and the replacement policy (see the sketch after this list).
    • Direct Mapping- the (virtual or physical) memory address of the incoming cache line determines which cache location is used; suffers from thrashing when addresses compete for the same slot.
    • Fully Associative- a line can be placed in any cache slot; replacement is typically based on a least recently used (LRU) policy, where the LRU line is replaced.
    • Set Associative- uses several direct-mapped caches, or sets. A cache controller decides which set a line will go into; within the set, a direct-mapped scheme is used to allocate a slot in the cache.
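
As a rough illustration of direct mapping, the sketch below (in Python) splits a memory address into an index that selects the cache slot and a tag that identifies which block currently occupies it; the slot count and line size are arbitrary choices for the example.

    # Illustrative direct-mapped cache lookup (sizes chosen arbitrarily).
    NUM_SLOTS = 8       # number of cache lines
    LINE_SIZE = 16      # bytes per cache line

    cache = [None] * NUM_SLOTS          # each slot holds (tag, data) or None

    def lookup(address, load_line_from_memory):
        block = address // LINE_SIZE    # which memory block the address falls in
        index = block % NUM_SLOTS       # the address alone decides the slot
        tag = block // NUM_SLOTS        # identifies which block occupies the slot
        slot = cache[index]
        if slot is not None and slot[0] == tag:
            return slot[1]              # cache hit
        data = load_line_from_memory(block)
        cache[index] = (tag, data)      # miss: evict whatever was in this slot
        return data

    # Two blocks that map to the same slot keep evicting each other ("thrashing"):
    # for example, block 0 and block 8 both map to index 0.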

Virtual Memory

  • Virtual memory is a logical memory that gives an application program the impression that it has a contiguous working memory.
  • Virtual memory logically extends the capabilities of RAM by allocating a separate portion of hard disk space called swap space; a simplified paging sketch follows.
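
A simplified paging sketch, assuming an invented page-table layout: each virtual page is either resident in RAM or held in swap space, and a reference to a swapped-out page triggers a "page fault" that brings it back in.

    # Illustrative paging model: a virtual page is either resident in RAM
    # or swapped out to disk ("swap space").
    PAGE_SIZE = 4096

    ram = {}          # frame number -> page contents
    swap = {}         # virtual page number -> page contents
    page_table = {}   # virtual page number -> ("ram", frame) or ("swap",)

    def read(virtual_address):
        page = virtual_address // PAGE_SIZE
        offset = virtual_address % PAGE_SIZE
        location = page_table[page]
        if location[0] == "swap":                 # page fault: bring the page back in
            frame = len(ram)                      # naive frame allocation
            ram[frame] = swap.pop(page)
            page_table[page] = ("ram", frame)
            location = page_table[page]
        frame = location[1]
        return ram[frame][offset]

    swap[0] = bytes(PAGE_SIZE)        # a page currently held only on disk
    page_table[0] = ("swap",)
    print(read(10))                   # triggers a "page fault", then returns 0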

Operating Systems

Overview

  • An operating system provides an environment for applications and users to work within.
  • It is responsible for managing the underlying hardware components, memory, I/O operations, the file system, and processes, and for providing system services.
  • OS Architectures
    • Monolithic operating system architecture- mainly made up of various procedures that can call upon one another in a haphazard manner; provides only a single layer of security. For example, DOS.
    • Layered operating system- separates system functionality into hierarchical layers, provides data hiding, and provides multilayer security. For example, Unix. In the classic layered design:
      • Layer 0 controlled access to the processor and provided multiprogramming functionality
      • Layer 1 carried out memory management
      • Layer 2 provided interprocess communication
      • Layer 3 dealt with I/O devices
      • Layer 4 was where the applications resided
    • Microkernel-based- the OS functions are divided into several different processes that run in user mode instead of kernel mode. The processes communicate using a client/server model. The server processes are called subsystems and provide services to user processes and other OS processes. For example, Windows 2000.

Process Management

  • A process is a program in execution that is loaded and activated by the OS. It contains a set of instructions and the resources assigned to it.
  • When a process is created, the operating system assigns resources to it, such as a memory segment, CPU time slot (interrupt), access to system application programming interfaces (APIs), and files to interact with.
  • Process States: a process can be in the running state (the CPU is executing its instructions and data), the ready state (waiting to send instructions to the CPU), or the blocked state (waiting for input data, such as keystrokes from a user). A small state-transition sketch follows this list.
  • Process Table: a data structure that contains each individual process's state, stack pointer, memory allocation, program counter, and the status of the open files in use.
  • Threads: a thread is a unit of execution within a process; a simple process consists of a single thread of execution. A thread is made up of an individual instruction set and the data that needs to be worked on by the CPU. All the threads of a process share the resources of the process that created them.
  • Process Scheduling: determines how processes share the CPU, and how they communicate (or synchronize) with one another in order to avoid deadlock conditions.
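
The running/ready/blocked states can be modeled as a small state machine. The sketch below is illustrative only; the Process class and the set of allowed transitions simply restate the textbook transitions described above.

    # Illustrative process state transitions (running / ready / blocked).
    ALLOWED = {
        ("ready", "running"),    # scheduler dispatches the process to the CPU
        ("running", "ready"),    # time slice expires (interrupt)
        ("running", "blocked"),  # process waits for I/O or other input
        ("blocked", "ready"),    # awaited event (e.g., a keystroke) arrives
    }

    class Process:
        def __init__(self, pid):
            self.pid = pid           # naming distinction: each process has a PID
            self.state = "ready"

        def transition(self, new_state):
            if (self.state, new_state) not in ALLOWED:
                raise ValueError(f"illegal transition {self.state} -> {new_state}")
            self.state = new_state

    p = Process(pid=42)
    p.transition("running")
    p.transition("blocked")   # e.g., waiting on a keystroke
    p.transition("ready")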

Memory Management

  • The main objectives of memory management are to
    • Provide an abstraction level for programmers
    • Maximize performance with the limited amount of memory available
    • Protect the operating system and applications loaded into memory
  • The memory manager has five basic responsibilities:
    • Relocation
      • Swap contents from RAM to the hard drive as needed
      • Provide pointers for applications if their instructions and memory segment have been moved to a different location in main memory
    • Protection
      • Limit processes to interact only with the memory segments that are assigned to them
      • Provide access control to memory segments
    • Sharing
      • Use complex controls to ensure integrity and confidentiality when processes need to use the same shared memory segments
      • Allow many users with different levels of access to interact with the same application running in one memory segment
    • Logical organization
      • Allow for the sharing of specific software modules, such as dynamic link library (DLL) procedures
    • Physical organization
      • Segment the physical memory space for application and operating system processes
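
One classic way to meet the protection responsibility above is a base/limit (bounds) check on every memory access. The sketch below is a simplified illustration under that assumption, not a description of any particular memory manager.

    # Illustrative bounds check: a process may touch only its own memory segment.
    class Segment:
        def __init__(self, base, limit):
            self.base = base      # first address assigned to the process
            self.limit = limit    # length of the segment

    def check_access(segment, address):
        if segment.base <= address < segment.base + segment.limit:
            return True
        raise MemoryError("access outside the process's assigned segment")

    seg = Segment(base=0x4000, limit=0x1000)
    check_access(seg, 0x4800)     # OK: inside the segment
    # check_access(seg, 0x9000)   # would raise: another process's memory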

I/O Device Management

  • I/O devices are usually considered block or character devices.
  • A block device works with data in fixed-size blocks, each block with its own unique address; for example, a disk drive.
  • A character device, such as a printer, network interface card, or mouse, works with streams of characters, without using any fixed sizes. This type of data is not addressable.
  • There are different ways that operating systems can manage software I/O procedures.
    • Programmed I/O- the CPU sends data to an I/O device and polls the device to see if it is ready to accept more data. If the device is not ready, the CPU wastes time waiting for it to become ready; this is very slow.
    • Interrupt-driven I/O- the CPU sends a character or block to the device and then goes and works on another process's request. When the device is done with its job, it sends an interrupt to the CPU; the CPU then sends the next block or character, and this continues until the entire job has been processed by the device. This involves a lot of interrupt-handling overhead. A small sketch contrasting these two approaches follows this list.
    • I/O using DMA- a DMA controller feeds the characters from memory to the device without bothering the CPU. This method is sometimes referred to as unmapped I/O.
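
The sketch below contrasts programmed (polled) I/O with interrupt-driven I/O; the Device class and its timing are invented purely to show busy-waiting versus being notified, and a thread stands in for the hardware interrupt.

    # Illustrative contrast between programmed (polled) and interrupt-driven I/O.
    import threading, time

    class Device:
        """Toy device that takes a while to consume a block, then signals completion."""
        def __init__(self):
            self.done = threading.Event()

        def start_transfer(self, block):
            self.done.clear()
            threading.Thread(target=self._consume, args=(block,)).start()

        def _consume(self, block):
            time.sleep(0.01)     # device busy processing the block
            self.done.set()      # in hardware this would raise an interrupt

    def programmed_io(device, blocks):
        for block in blocks:
            device.start_transfer(block)
            while not device.done.is_set():   # CPU busy-waits: wasted cycles
                pass

    def interrupt_driven_io(device, blocks, other_work):
        for block in blocks:
            device.start_transfer(block)
            while not device.done.is_set():
                other_work()                  # CPU services another process's
                time.sleep(0.001)             # request until the "interrupt" arrives

    dev = Device()
    programmed_io(dev, ["block1", "block2"])
    interrupt_driven_io(dev, ["block3"], other_work=lambda: None)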

OS Protection Mechanisms

  • CPU Modes
    • CPU modes, also called processor modes or CPU privilege levels, are operating modes for the central processing unit of some computer architectures that place restrictions on the operations that can be performed by the process currently running on the CPU. This design allows the operating system to run at different privilege levels, such as kernel mode, user mode, and master mode.
  • Protection Rings
    • Protection Rings are concentric rings that provide strict boundaries and definitions for what the processes that work within each ring can access and what operations they can successfully execute.
    • The processes that operate within the inner rings have more privileges than the processes operating in the outer rings, because the inner rings only permit the most trusted components and processes to operate within them.
    • Protection rings support the availability, integrity, and confidentiality requirements of multitasking operating systems.
    • The most commonly used architecture provides four protection rings:
      • Ring 0 Operating system kernel
      • Ring 1 Remaining parts of the operating system
      • Ring 2 I/O drivers and utilities
      • Ring 3 Applications and user activity
  • Memory Protection
    • Limit processes to interact only with the memory segments that are assigned to them
    • Provide access control to memory segments
  • Process Isolation
    • Ensures that processes do not "step on each other's toes," negatively affect each other's productivity, or communicate with each other in an insecure manner.
    • Methods for process isolation:
      • Encapsulation of objects- no other process understands or interacts with the internal programming code of a process.
      • Time multiplexing of shared resources- allows processes to use the same resources on a time sharing basis.
      • Naming distinctions- different processes have their own name or identification value, called a process ID (PID).
      • Virtual mapping- Every process has its own virtual memory address space.
  • Domains
    • A domain is defined as a set of objects that a subject is able to access.
    • The domain can be all the resources a user can access, all the files available to a program, the memory segments available to a process, or the services and processes available to an application.
    • A process that resides in a privileged domain needs to be able to execute its instructions and process its data with the assurance that programs in a different domain cannot negatively affect its environment. This is referred to as an execution domain.
    • The higher the level of trust, the larger the number of available resources or objects. A minimal sketch of ring- and domain-based privilege checking follows this list.
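
A minimal sketch of ring- and domain-based privilege checking, assuming invented operation names and ring assignments: an operation may only be requested from a ring that is at least as privileged (numerically lower or equal) as the ring it requires.

    # Illustrative ring check: lower ring number = more privilege.
    REQUIRED_RING = {
        "modify_page_tables": 0,   # kernel only
        "load_io_driver": 2,       # I/O drivers and utilities
        "open_user_file": 3,       # ordinary applications
    }

    def attempt(operation, caller_ring):
        if caller_ring > REQUIRED_RING[operation]:
            raise PermissionError(
                f"ring {caller_ring} process may not perform {operation}")
        return f"{operation} performed from ring {caller_ring}"

    print(attempt("open_user_file", caller_ring=3))       # allowed
    print(attempt("modify_page_tables", caller_ring=0))   # kernel: allowed
    # attempt("modify_page_tables", caller_ring=3)        # would raise PermissionError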

Firmware

Overview

  • Firmware is a computer program that is embedded in a hardware device.
  • Like software, it is a computer program executed by a microprocessor or a microcontroller, but it is also tightly linked to a piece of hardware and has little meaning outside of it.

BIOS

  • BIOS refers to the firmware code run by a personal computer when first powered on.
  • The primary function of the BIOS is to identify and initialize component hardware (such as hard disks, floppy drives, and optical disk drives), preparing the machine so that other software programs stored on various media can load, execute, and assume control of the PC. This process is known as booting, or booting up, which is short for bootstrapping.

ROM image

  • A ROM image, or simply ROM, is a computer file which contains a copy of the data from a read-only memory chip, often from a video game cartridge, a computer's firmware, or from an arcade game's main board.
  • The term is frequently used in the context of emulation, whereby older games or computer firmware are copied to ROM files on modern computers and can, using a piece of software known as an emulator, be run on the newer computer.

Virtual Machines

Overview

  • A virtual machine (VM) is a software implementation of a machine (computer) that executes programs like a real machine.
  • An essential characteristic of a virtual machine is that the software running inside it is limited to the resources and abstractions provided by the virtual machine; it cannot break out of its virtual world.

VM Categories

  • Virtual machines are separated in two major categories, based on their use and degree of correspondence to any real machine.
    • A system virtual machine provides a complete system platform that supports the execution of a complete operating system.
    • A process virtual machine is designed to run a single program, which means that it supports a single process.

Systems Security Architecture

The security architecture is one component of a product’s overall architecture and is developed to provide guidance during the design of the product. It outlines the level of assurance that is required and potential impacts that this level of security could have during the development stages and on the product overall.

Security Design Principles

Security is a system requirement just like performance, capability, and cost. Therefore, it may be necessary to trade off certain security requirements to gain others.

Principles of Secure Design

  • Design security in from the start
  • Allow for future security enhancements
  • Minimize and isolate security controls
  • Employ least privilege
  • Structure the security relevant features
  • Make security friendly
  • Don’t depend on secrecy for security

Principles for Software Security

  • Secure the weakest link
  • Practice defense in depth
  • Fail securely- If your software has to fail, make sure it does it securely
  • Follow the principle of least privilege
  • Compartmentalize- Minimize the amount of damage that can be done by breaking the system into units
  • Keep it simple- Complex design is never easy to understand
  • Promote privacy- Try not to do anything that compromises the privacy of the user
  • Remember that hiding secrets is hard
  • Be reluctant to trust- Instead of making assumptions that need to hold true, you should be reluctant to extend trust
  • Use your community resources- Public scrutiny promotes trust

Design Principles for Protection Mechanisms

  • Least privilege- A subject should have only the rights necessary to complete its task.
  • Economy of mechanism- The design should be sufficiently small and simple that it can be verified and implemented correctly (e.g., a security kernel). Complex mechanisms must be correctly understood, modeled, configured, implemented, and used.
  • Complete mediation- Every access to every object must be checked.
  • Open design- Let the design be open; security through obscurity is a bad idea.
  • The design should be open for scrutiny by the community- It is better to have a friend or colleague find an error than a foe.
  • Separation of privilege- Access to objects should depend on more than one condition being satisfied.
  • Least common mechanism- Minimize the amount of mechanism common to more than one user and depended on by all users.
  • Psychological acceptability- The user interface must be easy to use, so that users routinely and automatically apply the mechanisms correctly; otherwise, they will be bypassed.
  • Fail-safe defaults- The default should be lack of access (deny by default).

Trusted Computing Base

Overview

  • A Trusted Computing Base (TCB) is the whole combination of protection mechanisms within a computer system.
  • The TCB addresses all the security components of the hardware, software, and firmware within the system.
  • It does not address the level of security that a system provides, but rather the level of trust that a system provides, because no computer system can be totally secure.
  • If the TCB is enabled, then the system has a trusted path, a trusted shell, and system integrity–checking capabilities
    • A trusted path is a communication channel between the user, or program, and the kernel. The TCB provides protection resources to ensure that this channel cannot be compromised in any way
    • A trusted shell means that someone who is working in that shell cannot “bust out of it” and other processes cannot “bust into” it.
  • The TCB contains the components that directly enforce the security policy (a security policy is a set of rules and practices that dictates how sensitive information and resources are managed, protected, and distributed).

Basic Functions of a TCB

  • Process Activation- deals with the activities that have to take place when a process is going to have its instructions and data processed by the CPU.
  • Execution Domain Switching- takes place when a process needs to call upon a process in a higher protection ring.
  • Memory protection and
  • I/O operations

Evaluating the TCB

  • Evaluating the trust level of a system includes identifying the architecture, security services, and assurance mechanisms that make up the TCB.
  • During the evaluation process, the tests must show how the TCB is protected from accidental or intentional tampering and compromising activity.
  • For systems to achieve a higher trust level rating, they must meet well-defined TCB requirements, and the details of their operational states, development stages, testing procedures, and documentation will be reviewed with more granularity than for systems attempting to achieve a lower trust rating.

Reference Monitor and Security Kernel

  • The reference monitor is an abstract machine that mediates all access subjects have to objects, both to ensure that the subjects have the necessary access rights and to protect the objects from unauthorized access and destructive modification.
  • The security kernel is made up of hardware, software, and firmware components that fall within the TCB and implements and enforces the reference monitor concept.
  • The security kernel mediates all access and functions between subjects and objects. The security kernel is the core of the TCB and is the most commonly used approach to building trusted computing systems. There are three main requirements of the security kernel:
    • It must provide isolation for the processes carrying out the reference monitor concept, and the processes must be tamper-proof.
    • It must be invoked for every access attempt and must be impossible to circumvent. Thus, the security kernel must be implemented in a complete and foolproof way.
    • It must be small enough to be able to be tested and verified in a complete and comprehensive manner.

Security Perimeter

  • A security perimeter is a boundary that divides the trusted from the untrusted.
  • For the system to stay in a secure and trusted state, precise communication standards must be developed to ensure that when a component within the TCB needs to communicate with a component outside the TCB, the communication cannot expose the system to unexpected security compromises. This type of communication is handled and controlled through interfaces.

The Relation

  • The reference monitor is a concept in which an abstract machine mediates all access to objects by subjects.
  • The security kernel is the hardware, firmware, and software of a TCB that implements this concept.
  • The TCB is the totality of protection mechanisms within a computer system that work together to enforce a security policy. The TCB contains the security kernel and all other security protection mechanisms

Security Modes of Operation

Overview

  • A system can operate in different modes depending on the sensitivity of the data being processed, the clearance level of the users, and what those users are authorized to do.
  • The mode of operation describes the security conditions under which the system actually functions.
  • Trust vs. Assurance
    • Trust is a level of confidence or belief that tells the customer how much protection can be expected from the system.
    • In a trusted system, all protection mechanisms work together to process sensitive data for many types of uses, and they provide the necessary level of protection for each classification level.
    • Assurance is a higher level of confidence that looks at the same issue in more depth and detail, where the system is thoroughly inspected.

Dedicated Security Mode

All users must have…

  • Proper clearance for all information on the system
  • Formal access approval for all information on the system
  • Signed NDA for all information on the system
  • Valid need to know for all information on the system

All users can access all data.

System High-Security Mode

All users must have…

  • Proper clearance for all information on the system
  • Formal access approval for all information on the system
  • Signed NDA for all information on the system
  • Valid need to know for some information on the system

All users can access some data, based on their need to know.

Compartmented Security Mode

All users must have…

  • Proper clearance for the highest level of data classification on the system
  • Formal access approval for all information they will access on the system
  • Signed NDA for all information they will access on the system
  • Valid need to know for some of the information on the system

All users can access some data, based on their need to know and formal access approval.

Multilevel Security Mode

All users must have…

  • Proper clearance for all information they will access on the system
  • Formal access approval for all information they will access on the system
  • Signed NDA for all information they will access on the system
  • Valid need to know for some of the information on the system

All users can access some data, based on their need to know, clearance, and formal access approval.

Security Models

  • A security policy is a document that expresses clearly and concisely what the protection mechanisms are to achieve. It is a statement of the security we expect the system to enforce.
  • A security model is a specification of a security policy:
    • it describes the entities governed by the policy,
    • it states the rules that constitute the policy.
  • There are various types of security models:
    • Models can capture policies for confidentiality (Bell-LaPadula) or for integrity (Biba, Clark-Wilson).
    • Some models apply to environments with static policies (Bell-LaPadula), others consider dynamic changes of access rights (Chinese Wall).
    • Security models can be informal (Clark-Wilson), semi-formal, or formal (Bell-LaPadula, Harrison-Ruzzo-Ullman).
  • Model vs Policy
    • A security model maps the abstract goals of the policy to information system terms by specifying explicit data structures and techniques that are necessary to enforce the security policy. A security model is usually represented in mathematics and analytical ideas, which are then mapped to system specifications, and then developed by programmers through programming code
    • For example, if a security policy states that subjects need to be authorized to access objects, the security model would provide the mathematical relationships and formulas explaining how x can access y only through the outlined specific methods.
    • A security policy outlines goals without regard to how they will be accomplished. A model is a framework that gives the policy form and solves security access problems for particular situations.

Lattice Models

  • A lattice is a mathematical construct that is built upon the notion of a partially ordered set.
  • A lattice is a mathematical construction with:
    • a set of elements
    • a partial ordering relation
    • the property that any two elements must have unique least upper bound and greatest lower bound
  • A security lattice model combines multilevel and multilateral security
  • Lattice elements are security labels that consist of a security level and set of categories
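
A small sketch of a lattice of security labels, where each label is a sensitivity level plus a set of categories; the level names and categories are invented. The least upper bound takes the higher level and the union of categories, and the greatest lower bound takes the lower level and the intersection.

    # Illustrative lattice of security labels: a label is (level, set of categories).
    LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

    def dominates(a, b):
        """a dominates b if a's level is at least b's and a's categories include b's."""
        return LEVELS[a[0]] >= LEVELS[b[0]] and a[1] >= b[1]

    def least_upper_bound(a, b):
        level = a[0] if LEVELS[a[0]] >= LEVELS[b[0]] else b[0]
        return (level, a[1] | b[1])          # highest level, union of categories

    def greatest_lower_bound(a, b):
        level = a[0] if LEVELS[a[0]] <= LEVELS[b[0]] else b[0]
        return (level, a[1] & b[1])          # lowest level, intersection of categories

    x = ("secret", frozenset({"crypto"}))
    y = ("confidential", frozenset({"nuclear"}))
    print(least_upper_bound(x, y))            # ('secret', frozenset({'crypto', 'nuclear'}))
    print(greatest_lower_bound(x, y))         # ('confidential', frozenset())
    print(dominates(least_upper_bound(x, y), x))  # True: the LUB dominates both inputs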

State Machine Models

  • In the state machine model, the state of a machine is captured in order to verify the security of a system.
  • A given state consists of all current permissions and all current instances of subjects accessing objects. If subjects can access objects only by means that are consistent with the security policy, the system is secure.
  • The model is used to describe the behavior of a system to different inputs. It provides mathematical constructs that represent sets (subjects, objects) and sequences. When an object accepts an input, this modifies a state variable thus transiting to a different state.
  • Implementation tips
    • The developer must define what and where the state variables are.
    • The developer must define a secure state for each state variable.
    • Define and identify the allowable state transition functions.
    • The state transition functions should be tested to verify that the overall machine state cannot be compromised and that the integrity of the system is maintained (see the sketch below).
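
Following the implementation tips above, this sketch names a state variable, defines what counts as a secure state, and accepts only transitions that preserve it; the state variables and the is_secure rule are invented for illustration.

    # Illustrative secure state machine: every transition must leave the system
    # in a state that satisfies the security invariant.
    state = {"open_sessions": {}, "audit_enabled": True}   # the state variables

    def is_secure(s):
        # Secure-state definition: auditing stays on and every session carries a label.
        return s["audit_enabled"] and all("label" in v for v in s["open_sessions"].values())

    def transition(change):
        """Apply a state transition only if the resulting state is still secure."""
        global state
        candidate = change(dict(state))
        if not is_secure(candidate):
            raise RuntimeError("transition rejected: would leave an insecure state")
        state = candidate

    def open_session(s):
        s = dict(s)
        s["open_sessions"] = dict(s["open_sessions"], user1={"label": "secret"})
        return s

    transition(open_session)   # accepted: the new state is still secure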

Noninterference Models

  • The model ensures that any actions that take place at a higher security level do not affect, or interfere with, actions that take place at a lower level.
  • It is not concerned with the flow of data, but rather with what a subject knows about the state of the system. So if an entity at a higher security level performs an action, it cannot change the state for an entity at a lower level.
  • The model also addresses the inference attack, which occurs when someone has access to some type of information and can infer (guess) something that he does not have the clearance level or authority to know.


Bell-LaPadula Confidentiality Model

  • It was the first mathematical model of a multilevel security policy; it defines the concept of a secure state machine and modes of access, and outlines rules of access.
  • It is a state machine model that enforces the confidentiality aspects of access control.
  • The model focuses on ensuring that subjects with different clearances (top secret, secret, confidential) are properly authenticated, by having the necessary security clearance, need to know, and formal access approval, before accessing objects that are under different classification levels (top secret, secret, confidential).
  • The rules of the Bell-LaPadula model (a minimal sketch of the two main rules follows this list):
    • Simple security rule (no read up): a subject at a given security level cannot read data that resides at a higher security level.
    • Star property rule (no write down): a subject at a given security level cannot write information to a lower security level.
    • Strong star property rule: a subject that has read and write capabilities can only perform those functions at the same security level, nothing higher and nothing lower.
    • Tranquility principle: subjects and objects cannot change their security levels once they have been instantiated (created).
  • MAC systems are based on the Bell-LaPadula model because of its multilevel security.
  • It was designed for the US government and is mostly adopted by government agencies.
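
A minimal sketch of the two main Bell-LaPadula rules (no read up, no write down), assuming a simple three-level hierarchy and ignoring categories and discretionary controls.

    # Illustrative Bell-LaPadula checks (confidentiality): higher number = higher level.
    LEVELS = {"confidential": 1, "secret": 2, "top_secret": 3}

    def can_read(subject_clearance, object_classification):
        # Simple security rule: no read up.
        return LEVELS[subject_clearance] >= LEVELS[object_classification]

    def can_write(subject_clearance, object_classification):
        # Star property rule: no write down.
        return LEVELS[subject_clearance] <= LEVELS[object_classification]

    print(can_read("secret", "top_secret"))   # False: reading up is blocked
    print(can_write("top_secret", "secret"))  # False: writing down is blocked
    print(can_read("secret", "confidential")) # True: reading down is allowed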

Biba Integrity Model

  • It was developed after the Bell-LaPadula model.
  • It addresses the integrity of data, unlike Bell-LaPadula, which addresses confidentiality.
  • It uses a lattice of integrity levels, unlike Bell-LaPadula, which uses a lattice of security levels.
  • Like Bell-LaPadula, it is an information flow model, because both are most concerned with data flowing from one level to another.
  • The rules of the Biba model
    • Simple integrity rule (no read down): a subject cannot read data from a lower integrity level.
    • Star integrity rule (no write up): a subject cannot write data to an object at a higher integrity level.
    • Invocation property: a subject cannot invoke (call upon) a subject at a higher integrity level.

Clark-Wilson Integrity Model

  • It was developed after Biba and addresses the integrity of information.
  • This model separates data into one subset that needs to be highly protected, referred to as constrained data items (CDIs), and another subset that does not require a high level of protection, referred to as unconstrained data items (UDIs).
  • Components (see the sketch after this list)
    • Subjects (users): the active agents.
    • Transformation procedures (TPs): the software procedures, such as read, write, and modify, that perform the required operations on behalf of the subject (user).
    • Constrained data items (CDIs): data that can be modified only by TPs.
    • Unconstrained data items (UDIs): data that can be manipulated by subjects via primitive read/write operations.
    • Integrity verification procedures (IVPs): programs that run periodically to check the consistency of CDIs with external reality. These integrity rules are usually defined by vendors.
  • Integrity goals of the Clark-Wilson model
    • Prevent unauthorized users from making modifications (also addressed by the Biba model).
    • Prevent authorized users from making improper modifications (separation of duties).
    • Maintain internal and external consistency (well-formed transactions): a well-formed transaction is a series of operations carried out to transfer the data from one consistent state to another.
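
The Clark-Wilson idea that CDIs may be changed only through certified TPs, run by users authorized for that specific (user, TP, CDI) triple, can be sketched as follows; the account data, TP, and certified triples are invented for illustration.

    # Illustrative Clark-Wilson enforcement: users reach CDIs only through TPs,
    # and only (user, TP, CDI) triples that have been certified are allowed.
    cdis = {"account_balance": 1000}                 # constrained data items
    certified_triples = {("alice", "post_payment", "account_balance")}

    def post_payment(data, amount):                  # a transformation procedure (TP)
        data["account_balance"] -= amount            # a real well-formed transaction would
        return data                                  # also write an audit record, etc.

    def run_tp(user, tp_name, tp, cdi_name, *args):
        if (user, tp_name, cdi_name) not in certified_triples:
            raise PermissionError("user may not run this TP on this CDI")
        cdis[cdi_name] = tp({cdi_name: cdis[cdi_name]}, *args)[cdi_name]

    def ivp():
        # Integrity verification procedure: check CDIs against an external rule.
        assert cdis["account_balance"] >= 0, "CDI inconsistent with external reality"

    run_tp("alice", "post_payment", post_payment, "account_balance", 250)
    ivp()
    print(cdis["account_balance"])   # 750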

Access Control Matrix

  • An access control matrix is a table of subjects and objects that indicates what actions individual subjects can take upon individual objects; this model is covered in more detail in the access control domain.
  • It is commonly used in operating systems and applications (see the sketch below).
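
A minimal access control matrix sketch: rows are subjects, columns are objects, and each cell holds the rights the subject has over the object. The subjects, objects, and rights shown are invented.

    # Illustrative access control matrix: rows are subjects, columns are objects,
    # and each cell holds the rights that subject has over that object.
    matrix = {
        "alice": {"payroll.db": {"read", "write"}, "report.doc": {"read"}},
        "bob":   {"report.doc": {"read", "write"}},
    }

    def is_allowed(subject, obj, right):
        return right in matrix.get(subject, {}).get(obj, set())

    print(is_allowed("alice", "payroll.db", "write"))  # True
    print(is_allowed("bob", "payroll.db", "read"))     # False
    # A row of the matrix is a subject's capability list; a column is an object's ACL.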

Information Flow Models

  • In this model, data is thought of as being held in individual discrete compartments.
  • Information is compartmentalized based on two factors.
    • Classification and
    • Need to know
  • The subject's clearance has to dominate the object's classification, and the subject's security profile must contain one of the categories listed in the object's label, which enforces need to know.
  • For example:
    • Bell-LaPadula prevents information from flowing from a higher security level to a lower security level.
    • Biba prevents information from flowing from a lower integrity level to a higher integrity level.

Covert channels

  • A covert channel is a way for an entity to receive information in an unauthorized manner.
  • It is an information flow that is not controlled by a security mechanism.
  • It is an unauthorized communication path that is not protected by the system's normal security mechanisms, because it was not anticipated or uncovered while the system was being developed.
  • Types of covert channels
    • Covert timing channel: one process relays information to another by modulating its use of system resources.
    • Covert storage channel: one process writes data to a storage location and another process directly or indirectly reads it (a minimal sketch follows this list).
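
A classic textbook illustration of a covert storage channel is two cooperating processes signaling one bit per time slot through a shared storage attribute, here simply whether an agreed-upon file exists; the file name is invented, and no data is ever passed through a sanctioned channel.

    # Illustrative covert storage channel: the sender leaks one bit per time slot
    # by creating or deleting an agreed-upon file; the receiver reads the bit by
    # checking whether the file exists. Nothing is written to the file itself.
    import os

    SHARED_PATH = "covert_flag.tmp"      # hypothetical location both processes can touch

    def send_bit(bit):
        if bit:
            open(SHARED_PATH, "w").close()   # presence of the file means "1"
        elif os.path.exists(SHARED_PATH):
            os.remove(SHARED_PATH)           # absence of the file means "0"

    def receive_bit():
        return 1 if os.path.exists(SHARED_PATH) else 0

    send_bit(1)
    print(receive_bit())   # 1: information crossed without any sanctioned channel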

Graham-Denning Model

  • This model defines a set of basic rights in terms of commands that a specific subject can execute on an object.
  • It proposes eight primitive protection rights, or rules, for how these types of functionality should take place securely.
    • How to securely create an object.
    • How to securely create a subject.
    • How to securely delete an object.
    • How to securely delete a subject.
    • How to provide read access rights.
    • How to provide grant access rights.
    • How to provide delete access rights.
    • How to provide transfer access rights.

Harrison-Ruzzo-Ullman Model

  • The HRU security model (Harrison, Ruzzo, Ullman model) is an operating system level computer security model which deals with the integrity of access rights in the system. The system is based around the idea of a finite set of procedures being available to edit the access rights of a subject s on an object o.
  • The model also discussed the possibilities and limitations of proving safety of a system using an algorithm.

Brewer-Nash (Chinese Wall)

  • This model provides access controls that can change dynamically depending upon a user's previous actions.
  • The main goal of this model is to protect against conflicts of interest created by users' access attempts.
  • It is based on the information flow model: no information can flow between subjects and objects in a way that would result in a conflict of interest.
  • The model states that a subject can write to an object if, and only if, the subject cannot read another object that is in a different data set (a minimal sketch follows this list).
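
A minimal sketch of the Chinese Wall read check, assuming invented datasets and conflict-of-interest classes: once a user has read one company's data, other datasets in the same conflict class become inaccessible to that user.

    # Illustrative Chinese Wall check: access rights change with the user's history.
    conflict_classes = {
        "bank_a": "banks", "bank_b": "banks",     # dataset -> conflict-of-interest class
        "oil_x": "oil",
    }
    history = {}                                   # user -> datasets already read

    def can_read(user, dataset):
        wanted_class = conflict_classes[dataset]
        for seen in history.get(user, set()):
            if conflict_classes[seen] == wanted_class and seen != dataset:
                return False                       # conflict of interest
        return True

    def read(user, dataset):
        if not can_read(user, dataset):
            raise PermissionError("blocked by the Chinese Wall")
        history.setdefault(user, set()).add(dataset)

    read("carol", "bank_a")
    read("carol", "oil_x")        # different conflict class: allowed
    # read("carol", "bank_b")     # would raise: same class as bank_a already read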

Security Product Evaluation Methods and Criteria

  • A security evaluation examines the security-relevant parts of a system, meaning the TCB, access control mechanisms, reference monitor, kernel, and protection mechanisms. The relationship and interaction between these components are also evaluated.
  • There are different methods of evaluating and assigning assurance levels to systems. Two reasons explain why more than one type of assurance evaluation process exists:
    • methods and ideologies have evolved over time, and
    • various parts of the world look at computer security differently and rate some aspects of security differently
  • An evaluation program establishes trust between the security product vendor and the customer.

Evaluation Standards

  • Information Technology Security Evaluation Criteria (ITSEC)- Europe
  • Trusted Computer System Evaluation Criteria (TCSEC)- US
  • Common Criteria- international harmonization of TCSEC, ITSEC, and other criteria

Rainbow Series

The Rainbow Series was a series of books covering different areas of security, such as:

  • Red Book- Network Security
  • Orange Book- Operating System Security
  • Yellow Book- Security Risk Management
  • Violet Book- Database Security

Orange Book or TCSEC

Overview

  • TCSEC was developed by the US DoD and was published in a book with an orange cover, hence it is also called the Orange Book.
  • It mainly addresses confidentiality, not integrity, and it mainly addresses government and military requirements.
  • It is used to evaluate whether a product contains the security properties the vendor claims it does and whether the product is appropriate for a specific application or function.
  • It is used to review the functionality, effectiveness, and assurance of a product during its evaluation, and it uses classes that were devised to address typical patterns of security requirements.
  • TCSEC provides a classification system that is divided into different assurance levels with A representing the highest and D the lowest.
    • A: Verified protection
    • B: Mandatory protection: Variants- B1<B2<B3
    • C: Discretionary protection: Variants-C1<C2
    • D: Minimal security

The levels are cumulative, i.e., if a product is rated at level B2, that implies it also meets the requirements of D, C1, C2, and B1.

The Criteria

  • Security policy- The policy must be explicit and well defined and enforced by the mechanisms within the system.
  • Identification- Individual subjects must be uniquely identified.
  • Labels- Access control labels must be associated properly with objects.
  • Documentation- Documentation must be provided, including test, design, and specification documents, user guides, and manuals.
  • Accountability- Audit data must be captured and protected to enforce accountability.
  • Life cycle assurance- Software, hardware, and firmware must be able to be tested individually to ensure that each enforces the security policy in an effective manner throughout their lifetimes.
  • Continuous protection- The security mechanisms and the system as a whole must perform predictably and acceptably in different situations continuously.

The Assurance Levels

  • Division D: Minimal Protection- It is reserved for systems that have been evaluated but fail to meet the criteria and requirements of the higher divisions.
  • Division C: Discretionary Protection- The C rating category has two individual assurance ratings within it.
    • C1: Discretionary Security Protection
      • Based on individuals and/or groups.
      • It requires a separation of users and information, and identification and authentication of individual entities is provided.
      • The type of environment that would require this rating is one in which users are processing information at the same sensitivity level; thus, strict access control and auditing measures are not required.
      • It would be a trusted environment with low security concerns
    • C2: Controlled Access Protection
      • Users need to be identified individually to provide more precise access control and auditing functionality.
      • Logical access control mechanisms are used to enforce authentication and the uniqueness of each individual’s identification
      • The type of environment that would require systems with a C2 rating is one in which users are trusted but a certain level of accountability is required.
      • Overall, C2 is regarded as the most reasonable class for commercial applications, but the level of protection is still relatively weak.
  • Division B: Mandatory Protection- Mandatory access control is enforced by the use of security labels. The architecture is based on the Bell-LaPadula security model, and evidence of reference monitor enforcement must be available
    • B1: Labeled Security
      • Each data object must contain a classification label and each subject must have a clearance label.
      • When a subject attempts to access an object, the system must compare the subject’s and object’s security labels to ensure that the requested actions are acceptable.
      • Data leaving the system must also contain an accurate security label.
      • The security policy is based on an informal statement, and the design specifications are reviewed and verified.
      • This security rating is intended for environments that require systems to handle classified data.
    • B2: Structured Protection
      • The security policy is clearly defined and documented, and the system design and implementation are subjected to more thorough review and testing procedures.
      • This class requires more stringent authentication mechanisms and well-defined interfaces among layers.
      • Subjects and devices require labels, and the system must not allow covert channels.
      • The type of environment that would require B2 systems is one that processes sensitive data requiring a higher degree of security; such systems must be relatively resistant to penetration and compromise.
    • B3: Security Domains
      • In this class, more granularity is provided in each protection mechanism, and the programming code that is not necessary to support the security policy is excluded.
      • The reference monitor components must be small enough to test properly and be tamper-proof
      • The system must be able to recover from failures without its security level being compromised.
      • The type of environment that requires B3 systems is a highly secured environment that processes very sensitive information; such systems must be highly resistant to penetration.
  • Division A: Verified Protection- Formal methods are used to ensure that all subjects and objects are controlled with the necessary discretionary and mandatory access controls. The design, development, implementation, and documentation are looked at in a formal and detailed way
    • A1: Verified Design
      • Formal techniques are used to prove the equivalence between the TCB specifications and the security policy model.
      • More stringent change configuration is put in place with the development of an A1 system, and the overall design can be verified
      • The type of environment that would require A1 systems is the most secure of secured environments. This type of environment deals with top-secret information and cannot adequately trust anyone using the systems without strict authentication, restrictions, and auditing.

TCSEC Limitations

  • It looks specifically at the operating system and not at other issues like networking, databases, and so on.
  • It focuses mainly on one attribute of security, confidentiality, and not at integrity and availability.
  • It works with government classifications and not the protection classifications that commercial industries use.
  • It has a relatively small number of ratings, which means many different aspects of security are not evaluated and rated independently.

Red Book or TNI

Overview

  • The Red Book, also called the Trusted Network Interpretation (TNI), addresses security evaluation topics for networks and network components.
  • Like the Orange Book, the Red Book does not supply specific details about how to implement security mechanisms; instead, it provides a framework for securing different types of networks
  • It rates the confidentiality of data and operations that happen within a network and the network components and products.

Security Items Addressed in the Red Book

  • Communication integrity
    • Authentication- Protects against masquerading and playback attacks. Mechanisms include digital signatures, encryption, timestamps, and passwords.
    • Message integrity- Protects the protocol header, routing information, and packet payload from being modified. Mechanisms include message authentication and encryption.
    • Non-repudiation- Ensures that a sender cannot deny sending a message. Mechanisms include encryption, digital signatures, and notarization.
  • Denial-of-service prevention
    • Continuity of operations- Ensures that the network is available even if attacked. Mechanisms include fault-tolerant and redundant systems and the capability to reconfigure network parameters in case of an emergency.
    • Network management- Monitors network performance and identifies attacks and failures. Mechanisms include components that enable network administrators to monitor and restrict resource access.
  • Compromise protection
    • Data confidentiality- Protects data from being accessed in an unauthorized manner during transmission. Mechanisms include access controls, encryption, and physical protection of cables.
    • Traffic flow confidentiality- Ensures that unauthorized entities are not aware of routing information or frequency of communication via traffic analysis. Mechanisms include padding messages, sending noise, or sending false messages.
    • Selective routing- Routes messages in a way that avoids specific threats. Mechanisms include network configuration and routing tables.

Information Technology Security Evaluation Criteria (ITSEC)

Overview

  • Just as TCSEC was developed in the US, ITSEC was developed in Europe to address security evaluation issues.
  • The ITSEC had two main evaluation attributes
    • Functionality- When the functionality of a system's protection mechanisms is being evaluated, the services that are provided to the subjects (access control mechanisms, auditing, authentication, and so on) are examined and measured.
    • Assurance- Assurance is the degree of confidence in the protection mechanisms and in their effectiveness and capability to perform consistently. Assurance is generally tested by examining development practices, documentation, configuration management, and testing mechanisms.

Evaluation Criteria

ITSEC had 10 classes F1 to F10 to evaluate the functional requirements and 7 classes E0 to E6 to evaluate the assurance requirements.

  • Security functional requirements
    • F00: Identification and authentication
    • F01: Audit
    • F02: Resource utilization
    • F03: Trusted paths/channels
    • F04: User data protection
    • F05: Security management
    • F06: Product access
    • F07: Communications
    • F08: Privacy
    • F09: Protection of the product's security functions
    • F10: Cryptographic support
  • Security assurance requirements
    • E00: Guidance documents and manuals
    • E01: Configuration management
    • E02: Vulnerability assessment
    • E03: Delivery and operation
    • E04: Life-cycle support
    • E05: Assurance maintenance
    • E06: Development
    • Testing

ITSEC Ratings

  • The ratings are based on effectiveness and correctness.
    • Effectiveness means that the TOE (target of evaluation) meets the security claims that the vendor has specified. This analysis looks at construction and operational vulnerabilities and at ease of use, to ensure that the proper security settings do not get in the way of productivity. Effectiveness rates functionality.
    • Correctness deals with how the TOE was built and with implementation issues. This analysis looks at the architectural design, how the security mechanisms enforce the policy, and the operational documentation and environment. Correctness rates assurance.

TCSEC vs ITSEC

  • TCSEC bundles functionality and assurance into one rating, whereas ITSEC evaluates these two attributes separately.
  • ITSEC provides more flexibility than TCSEC.
  • ITSEC addresses integrity, availability, and confidentiality whereas TCSEC addresses only confidentiality.
  • ITSEC also addresses networked systems, whereas TCSEC deals with stand-alone systems.

Common Criteria

Overview

  • The Common Criteria was the result of an ISO-sponsored effort to bring several organizations together to combine and align existing and emerging evaluation criteria: TCSEC, ITSEC, the Canadian Trusted Computer Product Evaluation Criteria (CTCPEC), and the Federal Criteria.
  • It was developed through a collaboration among national security standards organizations within the United States, Canada, France, Germany, the United Kingdom, and the Netherlands.
  • Under the Common Criteria model, an evaluation is carried out on a product and is assigned an Evaluation Assurance Level (EAL)

Benefits of the CC

  • Helps consumers by reducing the complexity of the ratings and eliminating the need to understand the definition and meaning of different ratings within various evaluation schemes.
  • Helps manufacturers because now they can build to one specific set of requirements if they want to sell their products internationally, instead of having to meet several different ratings with varying rules and requirements.

CC Assurance Levels

  • The Common Criteria has seven assurance levels, which range from EAL1 (lowest), where functionality testing takes place, through EAL7 (highest), where thorough testing is performed and the system design is verified.
    • EAL 1 Functionally tested
    • EAL 2 Structurally tested
    • EAL 3 Methodically tested and checked
    • EAL 4 Methodically designed, tested, and reviewed
    • EAL 5 Semi-formally designed and tested
    • EAL 6 Semi-formally verified design and tested
    • EAL 7 Formally verified design and tested

Protection Profile

  • A protection profile is a mechanism that is used by CC in its evaluation process to describe a real-world need of a product that is not currently on the market.
  • Protection Profile Characteristics
    • Contains the set of security requirements, their meaning and reasoning, and the corresponding EAL rating that the intended product will require.
    • Describes the environmental assumptions, the objectives, and the functional and assurance level expectations. Each relevant threat is listed along with how it is to be controlled by specific objectives.
    • Justifies the assurance level and requirements for the strength of each protection mechanism.
    • Provides a means for a consumer, or others, to identify specific security needs; this is the security problem that is to be conquered.
    • Provides the necessary goals and protection mechanisms to achieve the required level of security, and a list of the things that can go wrong during this type of system development.
  • Protection Profile Sections
    • Descriptive elements- Provides the name of the profile and a description of the security problem that is to be solved.
    • Rationale- Justifies the profile and gives a more detailed description of the real-world problem to be solved. The environment, usage assumptions, and threats are illustrated, along with guidance on the security policies that can be supported by products and systems that conform to this profile.
    • Functional requirements- Establishes a protection boundary, meaning the threats or compromises within this boundary that are to be countered. The product or system must enforce the boundary established in this section.
    • Development assurance requirements- Identifies the specific requirements that the product or system must meet during the development phases, from design to implementation.
    • Evaluation assurance requirements- Establishes the type and intensity of the evaluation.

CC Components

  • Protection profile- Description of needed security solution.
  • Target of evaluation- Product proposed to provide needed security solution.
  • Security target- Vendor’s written explanation of the security functionality and assurance mechanisms that meet the needed security solution; in other words, “This is what our product does and how it does it.”
  • Packages (EALs)- Functional and assurance requirements are bundled into packages for reuse. This component describes what must be met to achieve specific EAL ratings.

Certification and Accreditation

Certification

  • Certification is the comprehensive technical evaluation of the security components and their compliance for the purpose of accreditation.
  • A certification process may use safeguard evaluation, risk analysis, verification, testing, and auditing techniques to assess the appropriateness of a specific system.
  • The goal of a certification process is to ensure that a system, product, or network is right for the customer’s purposes.

Accreditation

  • Accreditation is the formal acceptance of the adequacy of a system’s overall security and functionality by management.
  • Certification is a technical review that assesses the security mechanisms and evaluates their effectiveness. Accreditation is management’s official acceptance of the information in the certification process findings.