Section 1.5: Systems Engineering
Engineering applies scientific principles and other forms of knowledge to design, build, and operate systems that perform an intended function. It is a broad discipline, whose parts we will discuss later in Section 1.7 - Engineering Specialties. In a simple project, such as designing a bookcase for home use, a formal engineering process is not needed. One person can calculate the shelf loads and other parameters by themselves, with the help of a calculator and some reference data. Large and complicated projects, however, need the knowledge of multiple specialists, must satisfy multiple desired conditions and functions, and involve large amounts of time and money. A need then exists to coordinate the work and ensure the final product meets the intended goal in the most efficient way. Systems Engineering methods have been developed for this coordination task. They have become their own specialty field and are used in addition to the other engineering specialties. Systems Engineering can be used for any type of complex project, but space systems are usually complicated enough to benefit from it, and systems thinking and methodologies are often used in this field.
Systems Engineering In General
What is a System?
Given an identified need or desire, how does one select the best design to satisfy it out of the infinite number of possible solutions? For a complex project, the concept of a System has proven useful. A System is defined as a functionally, physically, and/or behaviorally related group of regularly interacting or interdependent elements, distinguished from the rest of the Universe by a System Boundary (Figure 1.5-1). A system is not a physical entity, but rather a mental construct, created because of its usefulness, by drawing a line or surface around a collection of elements. The elements have internal relationships to each other and form a comprehensible whole. The rest of the Universe outside the system is referred to as the System Environment, or simply the environment. Flows of many types enter and leave the system as Inputs from and Outputs to the environment by crossing the system boundary. The scope of a given engineering task is then defined by the system boundary, what crosses the boundary, and what is inside. Systems may contain smaller systems within them, which are called Subsystems. These may be nested to any level, but flows into and out of a subsystem must appear in the parent system, or at the top level in the environment. This rule may be called Conservation of Flows - flows do not appear from or vanish into nothing. Following that rule ensures that all the required inputs and outputs are accounted for.
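The Conservation of Flows rule lends itself to a mechanical consistency check. The following sketch is illustrative only (the system and flow names are hypothetical, not from the text): it flags any subsystem boundary flow that the parent system does not account for.

```python
# Hypothetical sketch of a "Conservation of Flows" check: every input/output
# flow of a subsystem must appear in its parent system (or, at the top level,
# in the environment). All names here are made-up examples.

def check_flow_conservation(parent_flows, subsystems):
    """Return subsystem flows that the parent does not account for.

    parent_flows: set of flow names the parent accounts for (boundary
                  crossings plus internal exchanges between subsystems).
    subsystems:   dict mapping subsystem name -> set of its boundary flows.
    """
    unaccounted = {}
    for name, flows in subsystems.items():
        missing = flows - parent_flows
        if missing:
            unaccounted[name] = missing
    return unaccounted

# Example: the power subsystem's "waste heat" flow is missing from the parent,
# so a required output has been overlooked somewhere.
parent = {"electric power", "propellant", "telemetry"}
subs = {"power": {"electric power", "waste heat"},
        "propulsion": {"propellant", "electric power"}}
print(check_flow_conservation(parent, subs))  # {'power': {'waste heat'}}
```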
Systems Engineering Method
A single person may have the time and knowledge to do a preliminary concept or design. A complete space project is usually too complex or would take too long for one person to do. So the Systems Engineering method can be used to help carry out such projects. Aerospace projects, including space systems, are particularly suitable due to their complexity, and were among the main ones for which the method was developed. The method focuses on how projects should be designed and managed over their entire Life Cycle, that being from initial concept to final disposal. Since it applies to the whole project, it is interdisciplinary, connecting tasks performed by Systems Engineering specialists to those of other engineering branches. Key parts of the process include:
- Breaking down a complicated project in such a way that the smallest pieces are simple enough for humans to design.
- Modeling the system so it can be analyzed and optimized, and comparing the actual physical system to the models.
- Controlling and tracking the information and design of the pieces and their relationships, so the total system will do what was intended.
Figure 1.5-2 illustrates in general the steps within the Systems Engineering process. The trend is from top to bottom, but we do not show arrows connecting the steps because it is not a strict linear flow. As results are obtained in any task, they can feed back to earlier steps in an iterative fashion, until a stable design solution is reached. So these tasks can happen in parallel, and be applied across the different stages of the life cycle. The tasks can also be applied at different levels of detail. They are started at a general level. Once a stable configuration is reached at one level, the tasks are then re-applied at lower levels until detailed design can be done on individual elements. At all levels, there is communication with design specialties, and with outside entities such as the customer, suppliers, and other scientific and engineering organizations. The steps are described in more detail in sections 3 and 4 below and on page 2. It should be noted that an organization capable of designing and building complex space systems is itself a complex system. While it is not often done, Systems Engineering methods can be applied to the organization itself to design and optimize how it functions, or to any complex system of any type, not just space hardware.
The systems engineering process is bounded by natural and human-made constraints. Many of the human constraints are not directly related to design in the way physical properties of materials are. These indirect constraints include economics, laws, and safety of life and property. The process is then also outward-looking, beyond the design itself. Other engineering specialties are more focused on the internal details of the design.
Large Scale Systems
It is not required that a single organization do all the Systems Engineering tasks. Some very large deliberately engineered systems, such as the US Interstate highway system, the Internet, and the US program to land humans on the Moon, involved many entities working together. A national system of government, human civilization, or the Earth's biosphere can be considered a very large system in terms of having inputs and outputs, a system boundary, and an external environment. There is a growing understanding that such large entities are systems composed of many smaller systems, whether designed or not. Analyzing such large entities as systems can help with understanding how they function and determining if corrective action is needed. Although some attempts at designing governments have been made, they have yet to be based on scientific and engineering principles. Climate Engineering, the concept of deliberately affecting the Earth's climate, is an example of a biosphere-level engineering project. Doing this deliberately, as opposed to as an inadvertent side effect of civilization, is still in the conceptual stage. More work has been done in the field of Economics in analyzing economic systems, and sometimes attempting to design or influence them.
As a well-developed engineering specialty, there are a number of reference books, standards, and special methods and software used by systems engineers. They are used to understand and manage the interactions, and communicate the current state, of a complex project. The remainder of Part 1 of this book summarizes parts of the systems engineering method. This includes the elements of a system, engineering tools, involvement of other design specialties, and economics. A given program also has to be understood in the context of other existing and future programs. All of these tools and knowledge must be integrated properly for a new project.
For additional detail on Systems Engineering beyond what is in this book, see:
- DAU Press, Systems Engineering Fundamentals, 2001.
- NASA, Systems Engineering Handbook, 2007.
- NASA, Systems Engineering Class Materials - Website developed since approx. 2008.
The System Life Cycle
Complex systems evolve through a Life Cycle much the way living things do, from conception to disposal. The life cycle is divided into a number of stages where different tasks are performed (Figure 1.5-3). The design stages (the first three boxes in the figure) can be organized in different ways depending on the nature of the system. These include linear, parallel, spiral, or closed loop sequences, or some mixture of these. The illustration shows a typical linear sequence. A spiral process repeats stages in increasing detail, while a closed loop repeats at the same level of detail. Beyond the design stages, the process is more typically linear from production, through test, installation, operation, and retirement.
Life cycle stages are used for two important reasons. First, the design process should consider all the later stages, so that the best total solution is found, rather than optimizing for just one part of a system's life. Second, breaking down a system by time is another way to simplify the design work, along with breaking it down by subsystems and components. The stages are further broken down into internal tasks which have inputs and outputs that connect them, and have decision points for when it is time to proceed to the next stage.
A life cycle is a time oriented view of an entire system. Other views of the same system include functional diagrams, which show what tasks it performs and their inputs and outputs, and a work breakdown, which tabulates the elements and sub-elements which make up the system. Which view of the system is used depends on the design task at hand, though all the views need to be kept current or the design process can become disjointed.
Life Cycle Example
The names, and the task contents, of a given project's stages can vary according to the needs of the project. However, a somewhat standard linear flow is often used in aerospace engineering, including space-related projects. The stages and typical major tasks include:
- Conceptual Design
- Identifying the need - what is it you want the system to do? This is embodied in goals and requirements.
- Establish selection criteria - how do you decide one design is better than another?
- Establish a system concept - this includes the main functions, operation, and maintenance of the system.
- Feasibility analysis - can the need be met at acceptable cost, schedule, and other parameters?
- Preliminary Design
- Functional analysis - identify and break down the complex system into smaller functions and their relationships, including alternate arrangements
- Design Allocation - subdivide and assign requirements to lower tier functions
- Formulate alternatives - develop alternate solutions - what are the range of possible options?
- System Modeling - develop mathematical models of the system so variations can be assessed.
- Optimization and selection - making each option as good as it can be, then compare options and choose the best.
- Synthesis and definition - combining the selected options into a total design, and recording the configuration and requirements details
- Detail Design
- Design - Once broken down to a low enough level, individual elements are assigned to engineering specialties or design teams to complete. Design includes physical hardware components and facilities, as well as software, operating procedures, training, and other non-physical elements.
- Integration - Design elements are combined into larger functional units that work together, up to the system as a whole.
- Engineering Models and Prototypes - Physical partial models and complete prototypes built to validate the design.
- Fabrication and Assembly
- Production of components - For physical items, the step where you make the parts.
- Assembly - Putting parts together into complete elements.
- Test and Verification
- Element and System Test - At each assembly level, testing that the assembly functions, then moving up through larger assemblies to the final product.
- Verification - Proving the system meets the stated design requirements, by a combination of test, demonstration, inspection, and analyses.
- Installation and Deployment
- After production, the system elements may need delivery, installation, and activation at the location they will be used.
- Operation and Maintenance
- Operation - Using the system for the purpose it was designed, in the intended environment.
- Support - includes operator training, performance monitoring, and logistic support.
- Maintenance - includes planned maintenance and unplanned repair, and in-place upgrades.
- Retirement and Disposal
- When the system has reached the end of its useful life, the removal, recycling, and disposal of system elements, and return of former sites to their original conditions.
Life Cycle Engineering
As a process that applies across the whole life cycle, Systems Engineering is not just used in the initial design phase. Part of good design practice is to know when to stop designing. A design can always be improved with more work, but at some point additional work does not provide enough added improvement to justify it. At that point the design should stop, and the system progresses to the next stage, which is usually fabrication and assembly. With time, the original design assumptions for a project, such as the available technology level, or launch to orbit traffic levels, will change. The systems engineering process can then be re-applied to see if a design change, upgrade, or even complete replacement of the system is warranted. Even if the system was optimally designed when first created, future events may require changes. If the system was properly modeled and documented, then monitoring of these external changes will reveal when it is time to restart the engineering.
Developing a new system starts with a desire or need which cannot be satisfied by existing systems. The needs and desires are expressed by a Customer. For systems engineering purposes, the direct customer is the person or entity who is paying for a project or can direct the engineering staff. For example, in the Boeing Company, that is the engineering managers and general managers of the company. The ultimate customers, the airline passengers, cannot express their desires directly. So the company management serves as a proxy to express their desires as an input to the engineering process. Other methods, such as surveys, can be used to determine the desires of the ultimate customers.
The initial expression may be in the form of general verbal goals, system properties, levels of technical performance, and similar statements. The customer also will have some value preferences which describe what a better design is from their point of view. These can be things like "minimum cost", "minimum waste output", and "maximum efficiency".
The first major systems engineering step, Requirements Analysis, is the process of converting these general customer desires and preferences to specific measurable features which can be used for design and evaluation. Two main parts of this process are Requirements Definition and Measures of Effectiveness.
Requirements Definition
The highest level general desires are first converted to specific measurable features and values called System Requirements. These are later broken down into more detailed lower level requirements, which are assigned to logical elements of a system called Functions to perform. The assignment ensures that somewhere in the system all the top level goals are met. At the most detailed level a subset of the lower level requirements are assigned to a single function box. This now becomes the detailed design conditions for that function. Assuming the analysis has been carried to a low enough level, the detailed design of the element that performs the function can then be done with a reasonable effort.
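The allocation described above can be recorded as simple structured data and checked automatically. The following sketch is hypothetical (the requirement IDs and function names are invented for illustration): each lower-level requirement records its parent, leaf requirements are assigned to a function, and we verify that every top-level requirement eventually traces down to some function.

```python
# Hypothetical requirements-allocation table. Each entry records its parent
# requirement (None for top-level goals) and, for leaves, the function that
# the requirement is allocated to.
requirements = {
    "SYS-1":     {"parent": None,      "function": None},
    "SYS-1.1":   {"parent": "SYS-1",   "function": None},
    "SYS-1.1.1": {"parent": "SYS-1.1", "function": "Propulsion"},
    "SYS-2":     {"parent": None,      "function": None},  # never allocated
}

def allocated_roots(reqs):
    """Return the top-level requirements that trace to at least one function."""
    covered = set()
    for rid, r in reqs.items():
        if r["function"] is None:
            continue
        # Walk up the parent chain to find the top-level requirement.
        while reqs[rid]["parent"] is not None:
            rid = reqs[rid]["parent"]
        covered.add(rid)
    return covered

roots = {rid for rid, r in requirements.items() if r["parent"] is None}
print(roots - allocated_roots(requirements))  # {'SYS-2'} is unallocated
```

Real projects use dedicated requirements-management tools for this, but the underlying traceability check is essentially the walk up the parent chain shown here.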
The first step in requirements definition is documenting the original desire or need of the customer in as much detail as they can provide. We will use as an example the Apollo program to land humans on the Moon. That was expressed by President Kennedy as a well-known goal with a deadline. That very general statement was not sufficient to design the hardware. The key task is to put all the requirements in forms that can be measured, so that you can tell when you meet them. Experience shows that if a desirable feature or parameter is not expressed and measured, it will not happen as desired. An example of this failure is the Space Shuttle Program. The original goal was to fly 60 times a year. Given a fleet of 4 Orbiters, each one had to fly 15 times a year, or one launch per 24 days. Subtracting 7 days on orbit and one day each before and after flight for launch preparations and post-landing recovery leaves 15 days to complete ground processing. The stated goal was thus 160 hours for ground processing: 2 shifts per day (16 hours/day) x 10 work days, spread over two calendar weeks (14 days). This goal was expressed, but it may not have been included in the system requirements, and it definitely was not allocated to lower tier hardware and tracked at the lower levels the way hardware weight was.
Only after the Shuttle was already flying was it noted that ground processing was taking too long, and efforts started to reduce it. At that point it was too late to make any fundamental changes in the design, and so ground processing never got below about 800 hours, about 5 times the original goal. This was a major contributor to the Shuttle never reaching its intended flight rate. In order to have reached the goal of 160 hours, processing time would have had to be allocated to sub-systems, such as landing gear or maneuvering thrusters, and then each subsystem designed to meet its assigned time. In contrast to processing time, weight has always been a tracked parameter in aerospace systems, since airplanes cannot function if they are too heavy. The Space Shuttle had very detailed weight targets and a tracking system by component, with monthly reports. It more or less reached its design payload, which is the 1.5% of available launch weight remaining after the vehicle hardware and fuel are accounted for.
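The arithmetic behind the Shuttle example above can be reproduced in a few lines:

```python
# Ground-processing arithmetic from the Shuttle example.
days_per_flight = 365 // 15            # ~24 days per launch for each Orbiter
turnaround = days_per_flight - 7 - 2   # minus 7 days on orbit, 1 day each side
goal_hours = 2 * 8 * 10                # 2 shifts x 8 hours x 10 work days
actual_hours = 800                     # best achieved in practice

print(turnaround)                      # 15 days available for processing
print(goal_hours)                      # 160 hour goal
print(actual_hours / goal_hours)       # 5.0 times over the goal
```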
This example emphasizes why desired features must not only be stated quantitatively, but passed down for engineers to meet in detailed design, and tracked so you can tell if you are going to meet them. Measurable parameters can be a simple yes or no, for example "Does this airplane design meet FAA regulations?", or it can be a numerical value, range of values, table, formula, or graph indicating the range of acceptable values for that system characteristic.
Measures of Effectiveness
Desired features are often in opposition. For example, higher performance and reliability often come at the expense of higher cost. There are also alternate designs which have different amounts of each feature. Establishing Measures of Effectiveness is the quantitative method to account for these disparate features at the level of the whole system. Like requirements, they are derived from customer desires. In this case it is what features would be "better" when comparing one design over another. Since different features typically have different units of measurement, they need to be converted to a common measuring scale. This is done by formulas that convert each different value, such as cost or performance, to a score. These scores are given relative weights based on their importance to the customer. The weighted measures can then be used in a single mathematical model or formula to determine "better" as a total score, when the component values vary across different designs.
The value scale is often in a range such as 0 to 100%, or 1 to 10, but this is arbitrary. What is more important is a definite conversion from a measurable feature to a scoring value, and the relative weights and method of combining them into a total score. As an example, a value of 0% might be assigned to a payload of 15 tons, and 100% to a payload of 45 tons, with a linear scale in between, and payload given an importance of 30% in the total score. The total scoring system becomes a mathematical model of the customer's desires for the system. Getting a customer to define "better" in such detailed numerical form is often difficult, because it removes their freedom to choose what they personally prefer in a design in spite of the engineering results. It is necessary, though, if you really want an optimal answer. At the least, this process makes it obvious when the customer is overriding the engineering process.
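A minimal sketch of such a scoring model follows, built around the payload example above (0% at 15 tons, 100% at 45 tons, 30% weight). The cost and reliability measures, their ranges, and their weights are invented placeholders, not values from the text:

```python
# Weighted Measures of Effectiveness: map each measurable feature to a 0-1
# score, then combine with customer-supplied weights into one total score.

def linear_score(value, worst, best):
    """Map a measurable value to a 0-1 score, clipped at both ends.
    Works for 'lower is better' measures too, by swapping worst/best."""
    s = (value - worst) / (best - worst)
    return min(max(s, 0.0), 1.0)

def total_score(design, measures):
    """Weighted sum of per-measure scores; weights should total 1.0."""
    return sum(w * scorer(design[name])
               for name, (w, scorer) in measures.items())

measures = {
    "payload_tons": (0.30, lambda v: linear_score(v, 15, 45)),     # from text
    "cost_musd":    (0.50, lambda v: linear_score(v, 2000, 500)),  # hypothetical
    "reliability":  (0.20, lambda v: linear_score(v, 0.90, 0.999)),# hypothetical
}

design_a = {"payload_tons": 30, "cost_musd": 1000, "reliability": 0.98}
print(round(total_score(design_a, measures), 3))  # 0.645
```

Competing designs are scored with the same `measures` table, and the highest total is "better" in the customer's declared sense.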
It should always be kept in mind that a particular design solution may not be "good enough" in terms of its measures when compared to existing systems. This can be found by including the existing system as one of the alternative designs being scored. In that case the proper answer is to stop development of the new system and stay with the existing ones. Often the cause is not enough performance improvement relative to cost, but other measures can result in a decision to stop.
The following subheadings list major types of system requirements. Not all would be relevant to a given project, and others besides these might be important to the customer, so it is presented as a starting point for consideration. Each type can include more specific requirement values. The types listed here are linked and somewhat overlapping. For example high reliability and high safety generally go together. Requirements set limits on a design, and overlaps between requirements in effect overlap the range of limits they impose. This is acceptable as long as the designer understands the range of overlaps and interactions among the requirements. A particular design parameter will be governed by the strictest requirement when there is such overlap. An example from civil engineering is that earthquake, wind, and snow loads are all requirements to meet in a building design, and they overlap in that all of them affect the required strength of structural elements.
When broken down to lower levels of the system design, the requirement types and values will become more specific and detailed. Care is needed to maintain logical and numerical consistency across system levels. Requirements, or parts thereof, should not be inserted or dropped at lower levels. Traceability is the ability to follow the chain of requirements across the system levels, and is maintained by documenting how they are connected. This is necessary so you can prove satisfying the lowest level details actually meets the top level system goals. Historically the first two requirement types, performance and cost, were the primary ones considered. As systems have grown more complex and their outside interactions and side effects become better understood, the number of desirable features, and thus the number of requirements, have increased. This trend is expected to continue in the future.
Aside from the biblical Ten Commandments, requirements are rarely set in stone. Not all of them will be identified at the start of a project. As a result of interaction with the customer and feedback from the design process, they can end up modified. For example, a launch capacity of 10 launches at 100 tons each might be specified for a rocket, and later analysis shows that 20 launches at 50 tons each yields lower total cost. The requirements would then be modified to reflect that. At any given time, however, the current set of requirements guides the engineering work. Over time, requirements become more firmly fixed, generally from the higher to more detailed levels in sequence. Changing a requirement forces rework of previous designs. So the cost of changing a requirement grows later in the process, and this tends to exceed any benefit from the change.
Performance
Performance requirements are measures of the primary intended function of a system. Every system must have at least one performance measure for what it does, and often there are a number of them. As an example, the design capacity for space transport systems is often expressed as a Mission Model. The mission model quantifies the system performance in terms of multiple parameters like dates, flight rate, payload dimensions and mass, mission duration, destination orbit, type of cargo, and maximum g-level. For a space habitat, performance might be measured in number of crew supported, levels of atmosphere, food supplies, and gravity, and total living volume. An industrial system might have requirements for Throughput, in tons per day of materials processed, and Efficiency in terms of (theoretical energy required)/(actual energy used). The particular performance measures which matter will vary by system.
An example mission model for the Apollo program might have started out as follows, with more detail added as the project progresses. Even in this early version, it lists a number of different performance measures that the design needs to meet:
- Number of crew to the lunar surface: 2/mission
- Maximum Stay time: 4 days/mission
- Additional science equipment: 250 kg/flight
- Lunar samples returned: 100 kg/flight
- First Flight: as early as possible but before Jan 1, 1970
- Flight quantity: 10 to lunar surface (this was the original plan)
- Flight rate: 4 flights/year
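A mission model like the one above is naturally captured as structured data, so its parameters can later be allocated to subsystems and tracked. This is only a sketch of that idea; the field names are invented:

```python
# The early Apollo mission model above, as structured data. Field names
# are hypothetical; values are taken from the list in the text.
apollo_mission_model = {
    "crew_to_surface": 2,           # per mission
    "max_stay_days": 4,             # per mission
    "science_equipment_kg": 250,    # per flight
    "samples_returned_kg": 100,     # per flight
    "first_flight_by": "1970-01-01",
    "surface_flights": 10,
    "flights_per_year": 4,
}

# Derived figure: the planned surface flights span about 2.5 years.
years = (apollo_mission_model["surface_flights"]
         / apollo_mission_model["flights_per_year"])
print(years)  # 2.5
```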
Performance requirements only address what a system does when it is operating as intended. They do not address what happens outside that context, such as:
- In between active operations, such as the 80 days between Lunar missions in the mission model above.
- When the system fails, as did the Apollo 13 mission.
- Before and after the 5 years of manned missions.
- Interactions external to the program, such as the supply of technical personnel for the project, environmental impact of the launches, or return of Lunar germs to Earth. The last turned out to be a needless worry, but the quarantine system for returning astronauts and moon rocks is not something covered by performance measures.
So performance alone does not encompass the entire system over the entire life cycle, and other requirement types are needed.
Cost
Cost represents the net resource inputs to a project from outside the system. Space projects don't use Dollars or Euros directly, but rather use them to pay for the labor, materials, and services they do use. So cost is a measure of flows across the system boundary, rather than an internal property of the system. Every system will consume some resources during its life, but funding sources are not unlimited. So cost limits are almost always considered a requirement, whether implicitly or explicitly. Total cost over the entire life of the project is called Life Cycle Cost. This can be further broken down into development, production, and operations costs, and then accounted in much greater detail across the system elements. In addition to total cost, limits can be placed on spending rates. This is most explicit in government agency budgets, but even private projects have limits on spending per year. Some systems generate revenue to offset costs. When revenues exceed cost, the system as a whole generates a profit in financial terms. Revenue may be delayed until after the design and construction phases, when the system begins to operate. The peak net cost accumulated until revenues exceed expenses is described as Capital or Development Cost. Customers generally want high performance and low cost, so the ratio of performance/cost is often a key measure for a project.
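The distinction between life cycle cost and capital cost can be made concrete with a small cash-flow sketch. The yearly figures below are made-up numbers for illustration only:

```python
# Life cycle cost vs. capital (development) cost, per the definitions above:
# capital cost is the peak accumulated net spending before revenues catch up.
from itertools import accumulate

costs    = [100, 250, 300, 200,  80,  80,  80,  80,  80]  # spending per year
revenues = [  0,   0,   0,  50, 180, 280, 330, 330, 380]  # revenue per year

net = [r - c for c, r in zip(costs, revenues)]
cumulative = list(accumulate(net))

life_cycle_cost = sum(costs)      # total cost over the whole life: 1250
capital_cost = -min(cumulative)   # deepest point of the cash-flow curve: 800
print(life_cycle_cost, capital_cost)
```

Note that the capital cost (800) is much smaller than the life cycle cost (1250), because later spending is covered by operating revenue rather than new funding.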
Compliance
Performance and cost requirements are set by the project's customer with the help of the engineering team. Compliance Requirements are set by external human rules such as laws, regulations, codes, and standards. Human rules often set minimum requirements in areas like safety. This does not prevent a system from adopting stricter levels. Human rules are usually designed to prevent undesirable effects. For example, speed limits on driving are intended to reduce the frequency and severity of accidents. Compliance requirements exist whether or not they are explicitly incorporated into the engineering process. It is better to incorporate them explicitly to avoid later problems. Other requirements are set by nature, such as the minimum altitude for a stable Earth orbit. These do not fall under compliance, but are accounted for elsewhere. In the case of altitude, this might be a performance requirement that a rocket deliver the payload to a 250 km high orbit.
Technical Risk
Especially in the early stages of design, the engineering process may reveal gaps in knowledge, performance uncertainties, resources which are not available, or other issues which prevent selection, optimization, or synthesis of a design. These issues can prevent progress to the next stage of the project, or cause a final design which does not meet desired goals. Measures of these unknowns are given the general name Technical Risks. For example, a new technology which has not been demonstrated yet, such as a fusion rocket, would be rated as high risk, while a chemical rocket, which has decades of operating history, would be comparatively low risk. A mass budget considerably below past experience, or with insufficient margin during preliminary design, would be high risk. New research, modeling, or prototyping can be done to reduce the risks, or the system modified to avoid them. Until these risk reduction efforts are done, the risks will still exist, and it is necessary to account for them. Otherwise you accept the alternate risk of the system not performing as desired, or even at all.
Not every risk will be known at the start of a project, but sound engineering practice is to identify them as early as possible, and to allow for modifying development plans when they appear. Depending on how much new technology is included in a project, sufficient performance, time, and cost margins should be included for unexpected problems caused by technical risks. Technical risk is gradually retired during the design and production of a system. Once a system is operating, a small uncertainty remains for things like operating life or failure rates. These are not eliminated until the end of program operations. Even after disposal of a system, some environmental risk may remain. A prime example is nuclear waste, which is a hazard long after the reactor that created it has been demolished.
Safety
Safety is the state of being protected against adverse consequences to living things, or damage and destruction of inanimate objects. It is an inverse measure of risks other than the technical risks under the previous heading, so a higher safety level is measured by lower risks. Under the principle of "protecting the innocents", hazards to a crew that volunteers to accept a risk can be higher than those allowed to the public at large. A safe system, such as a nuclear power plant or passenger airplane, may have less than one expected accident during the system life. So safety often involves assessing low probability events. Requirements to maintain control of a system despite failures, inherent fail-safe design, design margins, backup systems, and redundancy can improve safety when properly implemented.
Reliability
Reliability is the probability the system will perform its intended function for a specified time period. The inverse is probability of failure. It is related to Resilience, which is the ability to function in the face of internal damage or external failures. It is also related to Robustness, which is the ability to function in the face of external or internal variables, such as line voltage or temperature. A closely related measure is Availability, which is the probability a system can start operating at a random requested time, or the percentage of a total time interval it can be operated. A high reliability system may require multiple units in place, so that at least the minimum required number are available at a given time. An example is passenger airplanes, which require multiple engines for high reliability, in case one stops working.
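The multiple-engine example follows standard redundancy arithmetic: the probability that at least k of n independent units work is a binomial sum. A short sketch (the per-engine reliability value is a made-up illustration, not a real figure):

```python
# Probability that at least k of n independent units function, given each
# works with probability p (standard k-of-n redundancy calculation).
from math import comb

def at_least_k_of_n(p, n, k):
    """Binomial probability that k or more of n units function."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# A twin-engine airplane that can fly on one engine; p = 0.999 is
# a hypothetical per-engine reliability for a given flight.
p = 0.999
print(at_least_k_of_n(p, 2, 1))  # 0.999999 -- both engines failing is rare
```

Redundancy only helps if failures are truly independent; a shared cause (e.g. contaminated fuel) defeats it, which is why fail-safe design and margins are listed alongside redundancy under Safety.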
Durability
Durability is how long a system can perform its intended function. It is typically measured in terms of service life. Service life of components, and of the system as a whole, is related to their maintenance. If not enough maintenance is performed, the life is reduced. Once the life of the item has been reached, it will need repair or replacement. Service life can be measured in terms of number of uses, operating hours, or calendar time. Durability is related to the economics concept of a Durable Good, one that yields utility over time rather than being consumed in one use. A passenger airplane has high durability, because it can operate for decades and tens of thousands of flights. The opposite of durability is consumption. The fuel used in the airplane is consumed (used only once), and is therefore not durable.
Quality is a measure of how well a system meets expectations. One aspect of quality is the lack of variability or defects in the design and manufacturing stages of the life cycle. Variability and defects increase the chance that performance will fall outside required levels. Another way to put it is conformance to initial specifications. Wear or defects caused during normal operation are not a quality problem unless they are unexpectedly large; normal wear would fall under maintenance requirements. Other quality factors include parameters like signal-to-noise ratio and error rates in electronic devices. Noise and quantum effects are natural variations which cannot be eliminated, but a large margin above those variations in effect reduces variability and increases quality.
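If a parameter's natural variation is modeled as a normal distribution (a common but not universal assumption), the fraction of units falling outside the specification limits shrinks rapidly as the design margin grows, which is one way a large margin effectively reduces variability:

```python
from math import erf, sqrt

def out_of_spec_fraction(margin_sigmas: float) -> float:
    """Fraction of units falling outside symmetric spec limits placed
    margin_sigmas standard deviations from the mean, assuming the
    parameter follows a normal distribution."""
    return 1.0 - erf(margin_sigmas / sqrt(2))

for m in (2, 3, 4):
    print(m, out_of_spec_fraction(m))
# 2 sigma -> ~4.6%, 3 sigma -> ~0.27%, 4 sigma -> ~0.006%
```

Real processes often drift or have non-normal tails, so these figures are best read as illustrating the trend rather than predicting actual defect rates.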
Sustainability is the capacity of a system to endure. For example, does a system consume a scarce resource or generate a waste output that limits its long-term use? If so, it is not sustainable over the long term, although it may last a desired system life. A current example is hydrocarbon-fueled rockets. If the fuels are obtained from fossil sources, they are both limited in supply and cause unwanted change to the atmosphere. If they are produced as a biofuel, they can be sustainable.
Community requirements call for a positive, or at least minimally negative, impact on the surrounding human community. This might mean employing local staff, avoiding traffic problems during a shift change, or limiting noise impact from rocket launches. Positive educational impact is another community effect.
Environmental requirements, like community requirements, relate to impact on the surroundings, but in this case the non-human portion. For space projects a key environmental requirement is to avoid contamination, whether biological, chemical, or from radiation, both forward (carried from Earth to the destination) and backward (returned to Earth) at the ends of the mission.
Producibility requirements cover items such as how many sources exist for a given manufacturing method, or how exacting the tolerances are. In sum, they measure how easy or hard the system is to produce.
Testability requirements deal with the types and quantity of tests required for a system. Development tests occur during the design stage, qualification tests occur during approval, and periodic inspection and testing may be needed during the operations stage.
Maintainability requirements include parameters like the hours and costs needed to keep the system in an operating state, the probability of system failure, and the levels of spares required. A system can fail without a safety risk; for example, your car may not start, which is different from the brakes failing to operate. The more often items fail, the more spares must be kept in stock, and the more time and money are needed to repair or replace them, so the various maintenance requirements are linked. Maintenance can be divided into preventive, which happens before something stops working, and corrective, which happens after.
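One standard way to combine failure frequency and repair time into a single figure is the steady-state availability formula, A = MTBF / (MTBF + MTTR), where MTBF is the mean time between failures and MTTR the mean time to repair. The numbers below are hypothetical:

```python
def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability: the fraction of total time the
    system is in an operable state."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical: a failure every 2,000 operating hours on average,
# with 8 hours average repair time.
print(availability(2000.0, 8.0))  # ~0.996, i.e. down ~0.4% of the time
```

The formula makes the linkage explicit: availability can be raised either by failing less often (higher MTBF, via reliability) or by repairing faster (lower MTTR, via maintainability and spares on hand).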
Flexibility is the ability of the system to adapt to tasks, functions, or performance levels other than those it was first designed for. Similar requirements come under names like Extension, which is adding new tasks or functions while keeping the original ones, and Agility, which is concerned with how fast the system can adapt. Reconfiguration is the ability to change the arrangement of a system to perform a different function. An example is the Curiosity rover, which had one physical shape and software load for travel to and landing on Mars, and a different arrangement of wheels, camera mast, and software for surface operations. The change of state was accomplished by a combination of mechanical design, planned sequence, and new software upload.
Scalability is the ability of the system to change in size, either by scaling the size of a unit or by installing more units. There is always a limit to scaling imposed by some physical constraint. If the system can be scaled to meet the full demand for it without reaching those limits, it can be said to be scalable. Modularity is a related parameter, concerned with how separate the elements of the system are, and how easily they can be replaced with other elements of the same or different types. Instances of modularity can be horizontal, at the same level of a system, or vertical, as in the layers of the Internet Protocol stack.
Evolvability considers how well the system can change over time into a different type of system. It is related to flexibility, which is more about changes while staying the same kind of system. Redesign is concerned with the difficulty and cost of making such changes.
Usability requirements deal with the interface between humans and system elements. When a system can be used without great amounts of planning, preparation, physical strain, or training it is said to have high usability.
Interoperability is a measure of how well the system fits with other systems. For example, a new airplane with doors that did not fit existing gates, or a computer network that exclusively uses a protocol nobody else supports, would fail on this parameter. Compatibility is more concerned with the direct interfaces between systems, such as the output of a computer video card matching the input of a monitor. These features are more prominent in the information technology fields because of the sheer number and variety of hardware and software elements which must work together (with varying degrees of success).
Openness is the degree to which a system is composed of proprietary or secret elements compared to open, public, or standard elements.