# Control Systems/Digital Systems/Print version

The Wikibook of Automatic Control Systems and Control Systems Engineering, with Classical and Modern Techniques

# Preface

This book will discuss the topic of Control Systems, which is an interdisciplinary engineering topic. Methods considered here will consist of both "Classical" control methods, and "Modern" control methods. Also, discretely sampled systems (digital/computer systems) will be considered in parallel with the more common analog methods. This book will not focus on any single engineering discipline (electrical, mechanical, chemical, etc.), although readers should have a solid foundation in the fundamentals of at least one discipline.

This book requires prior knowledge of linear algebra, integral and differential calculus, and at least some exposure to ordinary differential equations. In addition, prior knowledge of integral transforms, specifically the Laplace and Z transforms, will be very beneficial. Also, prior knowledge of the Fourier Transform will shed more light on certain subjects. Wikibooks covering the calculus and transform topics required for this book are listed in the Prerequisites section.

# Introduction to Control Systems

What are control systems? Why do we study them? How do we identify them? The chapters in this section should answer these questions and more.

# Introduction

## This Wikibook

This book was written at Wikibooks, a free online community where people write open-content textbooks. Any person with internet access is welcome to participate in the creation and improvement of this book. Because this book is continuously evolving, there are no finite "versions" or "editions" of this book. Permanent links to known good versions of the pages may be provided.

## What are Control Systems?

The study and design of automatic Control Systems, a field known as control engineering, has become important in modern technical society. From devices as simple as a toaster or a toilet, to complex machines like space shuttles and power-steering systems, control engineering is a part of our everyday life. This book introduces the field of control engineering and explores some of the more advanced topics in the field. Note, however, that control engineering is a very large field, and this book serves as a foundation in control engineering and an introduction to selected advanced topics in the field. Topics in this book are added at the discretion of the authors, and represent the available expertise of our contributors.

Control systems are components that are added to other components, to increase functionality, or to meet a set of design criteria. For example:

We have a particular electric motor that is supposed to turn at a rate of 40 RPM. To achieve this speed, we must supply 10 Volts to the motor terminals. However, with 10 volts supplied to the motor at rest, it takes 30 seconds for our motor to get up to speed. This is valuable time lost.

Though simple, this example raises design questions for both users and designers of the motor system. An obvious solution is to start the motor at a higher voltage, so that it accelerates faster, and then to reduce the supply back down to 10 volts once it reaches the desired speed.

This is clearly a simplistic example, but it illustrates an important point: we can add special "Controller units" to preexisting systems, to improve performance and meet new system specifications.
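The boost-then-settle idea above can be sketched in a few lines of simulation. The motor model below is a hypothetical first-order lag; the time constant and gain are illustrative assumptions, not data from a real motor:

```python
# Sketch of the motor example above: a hypothetical first-order motor
# model, comparing a constant 10 V supply against a simple "boost then
# settle" controller. All parameter values are illustrative assumptions.

def simulate(voltage_profile, t_end=30.0, dt=0.01, tau=6.0, gain=4.0):
    """Euler-integrate d(speed)/dt = (gain*V - speed) / tau.

    With gain = 4 RPM/V, a steady 10 V input settles at 40 RPM.
    Returns the time (seconds) at which the motor first reaches
    95% of the 40 RPM target, or None if it never does.
    """
    speed, t = 0.0, 0.0
    while t < t_end:
        v = voltage_profile(t, speed)
        speed += dt * (gain * v - speed) / tau
        t += dt
        if speed >= 0.95 * 40.0:
            return t
    return None

# Open loop: hold 10 V the whole time.
plain = simulate(lambda t, speed: 10.0)

# Boosted start: apply 20 V until the motor nears 40 RPM, then 10 V.
boosted = simulate(lambda t, speed: 20.0 if speed < 38.0 else 10.0)

print(plain, boosted)  # the boosted profile reaches speed sooner
```

The boosted profile reaches the target speed several times faster than the constant supply, which is exactly the kind of improvement a controller unit provides.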

Here are some formal definitions of terms used throughout this book:

Control System
A Control System is a device, or a collection of devices, that manages the behavior of other devices or systems. More formally, a control system is an interconnection of components connected or related in such a manner as to command, direct, or regulate itself or another system. (Note that some systems are not controllable.)
A control system can also be viewed as a conceptual framework for designing systems with regulation and/or tracking capabilities, to give a desired performance. This requires a set of measurable signals that indicate the system's performance, a second set of signals that can be manipulated to influence the evolution of the system in time, and a third set of signals, not measurable, that disturb that evolution.
Controller
A controller is a control system that manages the behavior of another device or system.
Compensator
A Compensator is a control system that regulates another system, usually by conditioning the input or the output to that system. Compensators are typically employed to correct a single design flaw, with the intention of affecting other aspects of the design in a minimal manner.

There are essentially two methods to approach the problem of designing a new control system: the Classical Approach, and the Modern Approach.

## Classical and Modern

The names "Classical" and "Modern" refer to the order in which the techniques were developed: the Classical methods date primarily from the 1930s and 1940s, while the Modern methods emerged in the 1950s. In terms of current practice, Modern methods have been used to great effect more recently, while the Classical methods have been gradually falling out of favor. More recently still, it has been shown that Classical and Modern methods can be combined to highlight their respective strengths and weaknesses.

Classical Methods, which this book will consider first, are methods involving the Laplace Transform domain. Physical systems are modeled in the so-called "time domain", where the response of a given system is a function of the various inputs, the previous system values, and time. As time progresses, the state of the system and its response change. However, time-domain models for systems are frequently modeled using high-order differential equations which can become impossibly difficult for humans to solve and some of which can even become impossible for modern computer systems to solve efficiently. To counteract this problem, integral transforms, such as the Laplace Transform and the Fourier Transform, can be employed to change an Ordinary Differential Equation (ODE) in the time domain into a regular algebraic polynomial in the transform domain. Once a given system has been converted into the transform domain it can be manipulated with greater ease and analyzed quickly by humans and computers alike.
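As a quick illustration of this idea, consider the first-order ODE ${\displaystyle {\frac {dy(t)}{dt}}+y(t)=x(t)}$ with zero initial conditions. Taking the Laplace Transform of both sides turns differentiation into multiplication by s:

${\displaystyle sY(s)+Y(s)=X(s)\quad \Rightarrow \quad Y(s)={\frac {X(s)}{s+1}}}$

The differential equation has become an algebraic relationship between ${\displaystyle X(s)}$ and ${\displaystyle Y(s)}$ that can be solved by simple division.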

Modern Control Methods, instead of changing domains to avoid the complexities of time-domain ODE mathematics, convert the differential equations into a system of lower-order time-domain equations called State Equations, which can then be manipulated using techniques from linear algebra. This book will consider Modern Methods second.

A third distinction that is frequently made in the realm of control systems is to divide analog methods (classical and modern, described above) from digital methods. Digital Control Methods were designed to try and incorporate the emerging power of computer systems into previous control methodologies. A special transform, known as the Z-Transform, was developed that can adequately describe digital systems, but at the same time can be converted (with some effort) into the Laplace domain. Once in the Laplace domain, the digital system can be manipulated and analyzed in a very similar manner to Classical analog systems. For this reason, this book will not make a hard and fast distinction between Analog and Digital systems, and instead will attempt to study both paradigms in parallel.

## Who is This Book For?

This book is intended to accompany a course of study in undergraduate and graduate engineering. As mentioned previously, this book is not focused on any particular discipline within engineering; however, any person who wants to make use of this material should have some basic background in the Laplace transform (if not other transforms), calculus, etc. The material in this book may be used to accompany several semesters of study, depending on the program of your particular college or university. The study of control systems is generally reserved for students in the 3rd or 4th year of a 4-year undergraduate program, because it requires so much prior coursework. Some of the more advanced topics may not be covered until later in a graduate program.

Many colleges and universities only offer one or two classes specifically about control systems at the undergraduate level. Some universities, however, do offer more than that, depending on how the material is broken up and how much depth is to be covered. Also, many institutions will offer a handful of graduate-level courses on the subject. This book will attempt to cover the topic of control systems from both an undergraduate and a graduate level, with the advanced topics built on the basic topics in an intuitive way. As such, students should be able to begin reading this book at any place that seems an appropriate starting point, and should be able to stop reading once further information is no longer needed.

## What are the Prerequisites?

Understanding of the material in this book will require a solid mathematical foundation. This book does not currently explain, nor will it ever try to fully explain most of the necessary mathematical tools used in this text. For that reason, the reader is expected to have read the following wikibooks, or have background knowledge comparable to them:

Algebra
Calculus
The reader should have a good understanding of differentiation and integration. Partial differentiation, multiple integration, and functions of multiple variables will be used occasionally, but the students are not necessarily required to know those subjects well. These advanced calculus topics could better be treated as a co-requisite instead of a pre-requisite.
Linear Algebra
State-space system representation draws heavily on linear algebra techniques. Students should understand basic matrix operations (addition, multiplication, determinant, inverse, transpose). Students would also benefit from a prior understanding of eigenvalues and eigenvectors, but those subjects are covered in this text.
Ordinary Differential Equations
All linear systems can be described by a linear ordinary differential equation. It is beneficial, therefore, for students to understand these equations. Much of this book describes methods to analyze these equations. Students should know what a differential equation is, and they should also know how to find the general solutions of first and second order ODEs.
Engineering Analysis
This book reinforces many of the advanced mathematical concepts used in the Engineering Analysis book, and we will refer to the relevant sections in the aforementioned text for further information on some subjects. This is essentially a math book, but with a focus on various engineering applications. It relies on a previous knowledge of the other math books in this list.
Signals and Systems
The Signals and Systems book will provide a basis in the field of systems theory, of which control systems is a subset. Readers who have not read the Signals and Systems book will be at a severe disadvantage when reading this book.

## How is this Book Organized?

This book is organized following a particular progression. First, it will discuss the basics of system theory and offer a brief refresher on integral transforms. Section 2 will contain a brief primer on digital information, for students who are not already familiar with the topic. This is done so that digital and analog signals can be considered in parallel throughout the rest of the book. Next, this book will introduce the state-space method of system description and control. After Section 3, topics in the book will use state-space and transform methods interchangeably (and occasionally simultaneously). It is important, therefore, that these three sections be well read and understood before venturing into the later parts of the book.

After the "basic" sections of the book, we will delve into specific methods of analyzing and designing control systems. First we will discuss Laplace-domain stability analysis techniques (Routh-Hurwitz, root-locus), and then frequency methods (Nyquist Criteria, Bode Plots). After the classical methods are discussed, this book will then discuss Modern methods of stability analysis. Finally, a number of advanced topics will be touched upon, depending on the knowledge level of the various contributors.

As the subject matter of this book expands, so too will the prerequisites. For instance, when this book is expanded to cover nonlinear systems, a basic background knowledge of nonlinear mathematics will be required.

### Versions

This wikibook has been expanded to include multiple versions of its text, differentiated by the material covered, and the order in which the material is presented. Each different version is composed of the chapters of this book, included in a different order. This book covers a wide range of information, so if you don't need all the information that this book has to offer, perhaps one of the other versions would be right for you and your educational needs.

Each separate version has a table of contents outlining the different chapters that are included in that version. Also, each separate version comes complete with a printable version, and some even come with PDF versions as well.

Take a look at the All Versions Listing Page to find the version of the book that is right for you and your needs.

## Differential Equations Review

Implicit in the study of control systems is the underlying use of differential equations. Even if they aren't visible on the surface, all of the continuous-time systems that we will be looking at are described in the time domain by ordinary differential equations (ODE), some of which are relatively high-order.

Let's review some differential equation basics by considering a simple example: interest accruing in a bank account. The amount of interest accrued on a given principal balance (the amount of money you put into the bank) P is given by:

${\displaystyle {\frac {dP}{dt}}=rP}$

Where ${\displaystyle {\frac {dP}{dt}}}$ is the interest (rate of change of the principal), and r is the interest rate. Notice in this case that P is a function of time (t), and can be rewritten to reflect that:

${\displaystyle {\frac {dP(t)}{dt}}=rP(t)}$

To solve this basic, first-order equation, we can use a technique called "separation of variables", where we move all instances of the letter P to one side, and all instances of t to the other:

${\displaystyle {\frac {dP(t)}{P(t)}}=r\ dt}$

And integrating both sides gives us:

${\displaystyle \ln |P(t)|=rt+C}$

This is all fine and good, but generally, we like to get rid of the logarithm, by raising both sides to a power of e:

${\displaystyle P(t)=e^{rt+C}}$

Where we can separate out the constant as such:

${\displaystyle D=e^{C}}$
${\displaystyle P(t)=De^{rt}}$

D is a constant that represents the initial conditions of the system, in this case the starting principal.
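This solution can be sanity-checked numerically. The sketch below Euler-integrates dP/dt = rP and compares the result against the closed-form P(t) = De^{rt}; the principal of $1000 and interest rate of 5% are arbitrary illustrative values:

```python
import math

# Forward-Euler integration of dP/dt = r*P, compared against the
# closed-form solution P(t) = D * exp(r*t) derived above.
# D = 1000 (initial principal) and r = 0.05 are illustrative values.

def euler_principal(D, r, t_end, dt=1e-4):
    """Numerically integrate dP/dt = r*P from P(0) = D."""
    P, t = D, 0.0
    while t < t_end:
        P += dt * r * P
        t += dt
    return P

D, r, t_end = 1000.0, 0.05, 10.0
numeric = euler_principal(D, r, t_end)
exact = D * math.exp(r * t_end)
print(numeric, exact)  # the two agree to within a small step-size error
```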

Differential equations are particularly difficult to manipulate, especially once we get to higher-orders of equations. Luckily, several methods of abstraction have been created that allow us to work with ODEs, but at the same time, not have to worry about the complexities of them. The classical method, as described above, uses the Laplace, Fourier, and Z Transforms to convert ODEs in the time domain into polynomials in a complex domain. These complex polynomials are significantly easier to solve than the ODE counterparts. The Modern method instead breaks differential equations into systems of low-order equations, and expresses this system in terms of matrices. It is a common precept in ODE theory that an ODE of order N can be broken down into N equations of order 1.
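As a small example of this precept, the second-order equation ${\displaystyle {\ddot {y}}+3{\dot {y}}+2y=u}$ can be rewritten as two first-order equations by defining the state variables ${\displaystyle x_{1}=y}$ and ${\displaystyle x_{2}={\dot {y}}}$:

${\displaystyle {\dot {x}}_{1}=x_{2}}$
${\displaystyle {\dot {x}}_{2}=-2x_{1}-3x_{2}+u}$

The single order-2 equation has become a system of two order-1 equations, which the Modern method then expresses in matrix form.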

Readers who are unfamiliar with differential equations might be able to read and understand the material in this book reasonably well. However, all readers are encouraged to read the related sections in Calculus.

## History

The field of control systems started essentially in the ancient world. Early civilizations, notably the Greeks and the Arabs, were heavily preoccupied with the accurate measurement of time, and the result was the design and implementation of several "water clocks".

However, there was very little in the way of actual progress made in the field of engineering until the beginning of the Renaissance in Europe. Leonhard Euler (for whom Euler's Formula is named) discovered a powerful integral transform, but it was Pierre-Simon Laplace who used the transform (later called the Laplace Transform) to solve complex problems in probability theory.

Joseph Fourier was a court mathematician in France under Napoleon I. He created a special function decomposition called the Fourier Series, that was later generalized into an integral transform, and named in his honor (the Fourier Transform).

[Portraits: Pierre-Simon Laplace (1749-1827) and Joseph Fourier (1768-1840)]

The "golden age" of control engineering occurred between 1910 and 1945, when mass communication methods were being created and two world wars were fought. During this period, some of the most famous names in control engineering were doing their work: Nyquist and Bode.

Hendrik Wade Bode and Harry Nyquist, especially in the 1930s while working at Bell Laboratories, created the bulk of what we now call "Classical Control Methods". These methods were based on the results of the Laplace and Fourier Transforms, which had been previously known, but were made popular by Oliver Heaviside around the turn of the century. Before Heaviside, the transforms were neither widely used nor widely respected mathematical tools.

Bode is credited with the "discovery" of the closed-loop feedback system, and with the logarithmic plotting technique that still bears his name (Bode plots). Harry Nyquist did extensive research in the fields of system stability and information theory. He created a powerful stability criterion that has been named for him (the Nyquist Criterion).

Modern control methods were introduced in the early 1950s, as a way to bypass some of the shortcomings of the classical methods. Rudolf Kalman is famous for his work in modern control theory, and an optimal state estimator called the Kalman Filter was named in his honor. Modern control methods became increasingly popular after 1957 with the advent of digital computers and the start of the space program. Computers created the need for digital control methodologies, and the space program required the creation of some "advanced" control techniques, such as "optimal control", "robust control", and "nonlinear control". These last subjects, and several more, are still active areas of study among research engineers.

## Branches of Control Engineering

Here we are going to give a brief listing of the various different methodologies within the sphere of control engineering. Oftentimes, the lines between these methodologies are blurred, or even erased completely.

Classical Controls
Control methodologies where the ODEs that describe a system are transformed using the Laplace, Fourier, or Z Transforms, and manipulated in the transform domain.
Modern Controls
Methods where high-order differential equations are broken into a system of first-order equations. The input, output, and internal states of the system are described by vectors called "state variables".
Robust Control
Control methodologies where arbitrary outside noise/disturbances are accounted for, as well as internal inaccuracies caused by the heat of the system itself, and the environment.
Optimal Control
In a system, performance metrics are identified and arranged into a "cost function". The cost function is minimized to create an operational system with the lowest cost.
Adaptive Control
In adaptive control, the controller changes its response characteristics over time to better control the system.
Nonlinear Control
The youngest branch of control engineering, nonlinear control encompasses systems that cannot be described by linear equations or linear ODEs, and for which there is often very little supporting theory available.
Game Theory
Game Theory is a close relative of control theory, and especially robust control and optimal control theories. In game theory, the external disturbances are not considered to be random noise processes, but instead are considered to be "opponents". Each player has a cost function that they attempt to minimize, and that their opponents attempt to maximize.

This book will definitely cover the first two branches, and will hopefully be expanded to cover some of the later branches, if time allows.

## MATLAB

Information about using MATLAB for control systems can be found in
the Appendix

MATLAB® is a programming tool that is commonly used in the field of control engineering. We will discuss MATLAB in specific sections of this book devoted to that purpose. MATLAB will not appear in discussions outside these specific sections, although MATLAB may be used in some example problems. An overview of the use of MATLAB in control engineering can be found in the appendix at: Control Systems/MATLAB.

### Resources

Nearly all textbooks on the subject of control systems, linear systems, and system analysis will use MATLAB as an integral part of the text. Students who are learning this subject at an accredited university will certainly have seen this material in their textbooks, and are likely to have had MATLAB work as part of their classes. It is from this perspective that the MATLAB appendix is written.

In the future, this book may be expanded to include information on Simulink®, as well as MATLAB.

There are a number of other software tools that are useful in the analysis and design of control systems. Additional information can be added in the appendix of this book, depending on the experience and prior knowledge of contributors.

## Conventions

This book will use some simple conventions throughout.

### Mathematical Conventions

Mathematical equations will be labeled with the {{eqn}} template, to give them names. Equations that are labeled in such a manner are important, and should be taken special note of. For instance, notice the label to the right of this equation:

[Inverse Laplace Transform]

${\displaystyle f(t)={\mathcal {L}}^{-1}\left\{F(s)\right\}={1 \over {2\pi i}}\int _{c-i\infty }^{c+i\infty }e^{st}F(s)\,ds}$

Equations that are named in this manner will also be copied into the List of Equations Glossary in the end of the book, for an easy reference.

Italics will be used for English variables, functions, and equations that appear in the main text. For example e, j, f(t) and X(s) are all italicized. Wikibooks contains a LaTeX mathematics formatting engine, although an attempt will be made not to employ formatted mathematical equations inline with other text because of the difference in size and font. Greek letters, and other non-English characters will not be italicized in the text unless they appear in the midst of multiple variables which are italicized (as a convenience to the editor).

Scalar time-domain functions and variables will be denoted with lower-case letters, along with a t in parentheses, such as: x(t), y(t), and h(t). Discrete-time functions will be written in a similar manner, except with an [n] instead of a (t).

Fourier, Laplace, Z, and Star transformed functions will be denoted with capital letters followed by the appropriate variable in parentheses. For example: F(s), X(jω), Y(z), and F*(s).

Matrices will be denoted with capital letters. Matrices which are functions of time will be denoted with a capital letter followed by a t in parentheses. For example: A(t) is a matrix, while a(t) is a scalar function of time.

Transforms of time-variant matrices will be displayed in uppercase bold letters, such as H(s).

Math equations rendered using LaTeX will appear on separate lines, and will be indented from the rest of the text.

### Text Conventions

Information which is tangential or auxiliary to the main text will be placed in "sidebox" templates like this one.

Examples will appear in TextBox templates, which show up as large grey boxes filled with text and equations.

Important Definitions
Will appear in TextBox templates as well; this distinct formatting marks them as definitions.

# System Identification

## Systems

Systems, in one sense, are devices that take input and produce an output. A system can be thought to operate on the input to produce the output. The output is related to the input by a certain relationship known as the system response. The system response usually can be modeled with a mathematical relationship between the system input and the system output.

## System Properties

Physical systems can be divided up into a number of different categories, depending on particular properties that the system exhibits. Some of these system classifications are very easy to work with and have a large theory base for analysis. Some system classifications are very complex and have still not been investigated with any degree of success. By properly identifying the properties of a system, certain analysis and design tools can be selected for use with the system.

The early sections of this book will focus primarily on linear time-invariant (LTI) systems. LTI systems are the easiest class of system to work with, and have a number of properties that make them ideal to study. This chapter discusses some properties of systems.

Later chapters in this book will look at time variant systems and nonlinear systems. Both time variant and nonlinear systems are very complex areas of current research, and both can be difficult to analyze properly. Unfortunately, most physical real-world systems are time-variant, nonlinear, or both.


## Initial Time

The initial time of a system is the time before which there is no input. Typically, the initial time of a system is defined to be zero, which simplifies the analysis significantly. Some techniques, such as the Laplace Transform, require that the initial time of the system be zero. The initial time of a system is typically denoted by t0.

The value of any variable at the initial time t0 will be denoted with a 0 subscript. For instance, the value of variable x at time t0 is given by:

${\displaystyle x(t_{0})=x_{0}}$

Likewise, any time t with a positive subscript is a point in time after t0, in ascending order:

${\displaystyle t_{0}\leq t_{1}\leq t_{2}\leq \cdots \leq t_{n}}$

So t1 occurs after t0, and t2 occurs after both earlier points. In a similar fashion to the above, a variable with a positive subscript (unless it is specifying an index into a vector) also occurs at that point in time:

${\displaystyle x(t_{1})=x_{1}}$
${\displaystyle x(t_{2})=x_{2}}$

This is valid for all points in time t.

## Additivity

A system satisfies the property of additivity if a sum of inputs results in a sum of outputs. By definition: an input of ${\displaystyle x_{3}(t)=x_{1}(t)+x_{2}(t)}$ results in an output of ${\displaystyle y_{3}(t)=y_{1}(t)+y_{2}(t)}$. To determine whether a system is additive, use the following test:

Given a system f that takes an input x and outputs a value y, assume two inputs (x1 and x2) produce two outputs:

${\displaystyle y_{1}=f(x_{1})}$
${\displaystyle y_{2}=f(x_{2})}$

Now, create a composite input that is the sum of the previous inputs:

${\displaystyle x_{3}=x_{1}+x_{2}}$

Then the system is additive if the following equation is true:

${\displaystyle y_{3}=f(x_{3})=f(x_{1}+x_{2})=f(x_{1})+f(x_{2})=y_{1}+y_{2}}$

Systems that satisfy this property are called additive. Additive systems are useful because a sum of simple inputs can be used to analyze the system response to a more complex input.

### Example: Sinusoids

Given the following equation:

${\displaystyle y(t)=\sin(3x(t))}$

Create a sum of inputs as:

${\displaystyle x(t)=x_{1}(t)+x_{2}(t)}$

and construct the expected sum of outputs:

${\displaystyle y(t)=y_{1}(t)+y_{2}(t)}$

Now, substituting these values into our equation, test for equality:

${\displaystyle y_{1}(t)+y_{2}(t)=\sin(3[x_{1}(t)+x_{2}(t)])}$

The equality is not satisfied, and therefore the sine operation is not additive.
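This can be confirmed numerically: for most choices of inputs, the sum of the individual outputs does not match the output of the summed input. A small sketch, with arbitrarily chosen sample values:

```python
import math

# Check additivity of y = sin(3*x) at a pair of sample inputs:
# compare f(x1) + f(x2) against f(x1 + x2).

def f(x):
    return math.sin(3.0 * x)

x1, x2 = 0.4, 0.7                # arbitrary test inputs
sum_of_outputs = f(x1) + f(x2)   # y1 + y2
output_of_sum = f(x1 + x2)       # f(x1 + x2)

print(sum_of_outputs, output_of_sum)
# The two values differ, so the sine operation fails the additivity test.
```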

## Homogeneity

A system satisfies the condition of homogeneity if an input scaled by a certain factor produces an output scaled by that same factor. By definition: an input of ${\displaystyle ax_{1}}$ results in an output of ${\displaystyle ay_{1}}$. In other words, to see if function f() is homogeneous, perform the following test:

Stimulate the system f with an arbitrary input x to produce an output y:

${\displaystyle y=f(x)}$

Now, create a second input x1, scale it by a multiplicative factor C (C is an arbitrary constant value), and produce a corresponding output y1:

${\displaystyle y_{1}=f(Cx_{1})}$

Now, assign x to be equal to x1:

${\displaystyle x_{1}=x}$

Then, for the system to be homogeneous, the following equation must be true:

${\displaystyle y_{1}=f(Cx)=Cf(x)=Cy}$

Systems that are homogeneous are useful in many applications, especially applications with gain or amplification.

### Example: Straight-Line

Given the equation for a straight line:

${\displaystyle y=f(x)=2x+3}$
${\displaystyle y_{1}=f(Cx_{1})=2(Cx_{1})+3=C2x_{1}+3}$
${\displaystyle x_{1}=x}$

Comparing the two results, it is easy to see they are not equal:

${\displaystyle y_{1}=C2x+3\neq Cy=C(2x+3)=C2x+C3}$

Therefore, the equation is not homogeneous.
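The same conclusion falls out of a direct numerical check. The input value and scaling constant below are arbitrary choices:

```python
# Check homogeneity of the straight line f(x) = 2x + 3:
# compare f(C*x) against C*f(x) for a sample input and scale factor.

def f(x):
    return 2.0 * x + 3.0

x, C = 1.5, 4.0           # arbitrary input and scaling constant
scaled_input = f(C * x)   # f(Cx)   = 2Cx + 3
scaled_output = C * f(x)  # C*f(x)  = 2Cx + 3C

print(scaled_input, scaled_output)
# f(Cx) = 15.0 but C*f(x) = 24.0: the constant offset breaks homogeneity.
```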

Exercise:

Prove that additivity implies homogeneity for any rational scale factor (real scale factors require an additional continuity assumption), but that homogeneity does not imply additivity.

## Linearity

A system is considered linear if it satisfies the conditions of Additivity and Homogeneity. In short, a system is linear if the following is true:

Take two arbitrary inputs, and produce two arbitrary outputs:

${\displaystyle y_{1}=f(x_{1})}$
${\displaystyle y_{2}=f(x_{2})}$

Now, a linear combination of the inputs should produce a linear combination of the outputs:

${\displaystyle f(Ax_{1}+Bx_{2})=f(Ax_{1})+f(Bx_{2})=Af(x_{1})+Bf(x_{2})=Ay_{1}+By_{2}}$

This condition of additivity and homogeneity is called superposition. A system is linear if it satisfies the condition of superposition.
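The superposition test is mechanical to carry out. A small sketch, using the obviously linear system f(x) = 5x and arbitrary inputs and weights:

```python
# Superposition check: f(A*x1 + B*x2) should equal A*f(x1) + B*f(x2)
# for a linear system. Here f(x) = 5x, a simple linear example.

def f(x):
    return 5.0 * x

x1, x2 = 2.0, -3.0   # arbitrary inputs
A, B = 1.5, 0.5      # arbitrary weights

combined = f(A * x1 + B * x2)      # response to the combined input
separate = A * f(x1) + B * f(x2)   # combination of the responses

print(combined, separate)  # both 7.5: superposition holds
```

Replacing f with the sine system from the additivity example makes the two values disagree, confirming that sine is not linear.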

### Example: Linear Differential Equations

Is the following equation linear?

${\displaystyle {\frac {dy(t)}{dt}}+y(t)=x(t)}$

To determine whether this system is linear, construct a new composite input:

${\displaystyle x(t)=Ax_{1}(t)+Bx_{2}(t)}$

Now, create the expected composite output:

${\displaystyle y(t)=Ay_{1}(t)+By_{2}(t)}$

Substituting the two into our original equation:

${\displaystyle {\frac {d[Ay_{1}(t)+By_{2}(t)]}{dt}}+[Ay_{1}(t)+By_{2}(t)]=Ax_{1}(t)+Bx_{2}(t)}$

Factor out the derivative operator, as such:

${\displaystyle {\frac {d}{dt}}[Ay_{1}(t)+By_{2}(t)]+[Ay_{1}(t)+By_{2}(t)]=Ax_{1}(t)+Bx_{2}(t)}$

Finally, convert the various composite terms into the respective variables, to prove that this system is linear:

${\displaystyle {\frac {dy(t)}{dt}}+y(t)=x(t)}$

For the record, derivatives and integrals are linear operators, and the ordinary differential equations that describe LTI systems are linear equations.

## Memory

A system is said to have memory if the output from the system is dependent on past inputs (or future inputs!) to the system. A system is called memoryless if the output is only dependent on the current input. Memoryless systems are easier to work with, but systems with memory are more common in digital signal processing applications.

Systems that have memory are called dynamic systems, and systems that do not have memory are static systems.

## Causality

Causality is a property that is very similar to memory. A system is called causal if it is only dependent on past and/or current inputs. A system is called anti-causal if the output of the system is dependent only on future inputs. A system is called non-causal if the output depends on past and/or current and future inputs.

## Time-Invariance

A system is called time-invariant if the relationship between the input and output signals does not depend on the passage of time. If the input signal ${\displaystyle x(t)}$ produces an output ${\displaystyle y(t)}$, then any time-shifted input ${\displaystyle x(t+\delta )}$ results in a time-shifted output ${\displaystyle y(t+\delta )}$. This property is satisfied if the transfer function of the system is not an explicit function of time. If a system is time-invariant, then the system block commutes with an arbitrary delay. This facet of time-invariant systems will be discussed later.

To determine if a system f is time-invariant, perform the following test:

Apply an arbitrary input x to a system and produce an arbitrary output y:

${\displaystyle y(t)=f(x(t))}$

Apply a second input x1 to the system, and produce a second output:

${\displaystyle y_{1}(t)=f(x_{1}(t))}$

Now, assign x1 to be equal to the first input x, time-shifted by a given constant value δ:

${\displaystyle x_{1}(t)=x(t-\delta )}$

Finally, a system is time-invariant if y1 is equal to y shifted by the same value δ:

${\displaystyle y_{1}(t)=y(t-\delta )}$
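The test above can be sketched in code. Here is a minimal Python illustration with two hypothetical discrete-time systems: a pure gain (time-invariant) and a system whose gain grows with the sample index (time-varying):

```python
# sys_a is a pure gain (time-invariant);
# sys_b multiplies each sample by its index n (time-varying).
def sys_a(x):
    return [2.0 * v for v in x]

def sys_b(x):
    return [n * v for n, v in enumerate(x)]

def is_time_invariant(sys, x, shift=3):
    n = len(x)
    y = sys(x + [0.0] * shift)          # response to x (zero-padded)
    y_shift = sys([0.0] * shift + x)    # response to x delayed by `shift`
    # Time-invariant: delaying the input delays the output identically.
    return y_shift[shift:shift + n] == y[:n]

print(is_time_invariant(sys_a, [1.0, 2.0, 3.0, 4.0]))  # True
print(is_time_invariant(sys_b, [1.0, 2.0, 3.0, 4.0]))  # False
```

As with the superposition check, a failing case proves time-variance, while a passing case for one input is only suggestive.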

## LTI Systems

A system is considered to be a Linear Time-Invariant (LTI) system if it satisfies the requirements of time-invariance and linearity. LTI systems are one of the most important types of systems, and they will be considered almost exclusively in the beginning chapters of this book.

Systems which are not LTI are more common in practice, but are much more difficult to analyze.

## Lumpedness

A system is said to be lumped if one of the two following conditions is satisfied:

1. There are a finite number of states that the system can be in.
2. There are a finite number of state variables.

The concept of "states" and "state variables" are relatively advanced, and they will be discussed in more detail in the discussion about modern controls.

Systems which are not lumped are called distributed. A simple example of a distributed system is a system with delay, that is, ${\displaystyle A(s)y(t)=B(s)u(t-\tau )}$, which has an infinite number of state variables (here we use ${\displaystyle s}$ to denote the Laplace variable). Although distributed systems are quite common, they are very difficult to analyze in practice, and there are few tools available to work with such systems. Fortunately, in most cases, a delay can be sufficiently modeled with the Padé approximation. This book will not discuss distributed systems much.

## Relaxed

A system is said to be relaxed if the system is causal, and at the initial time t0 the output of the system is zero, i.e., there is no stored energy in the system.

${\displaystyle y(t_{0})=f(x(t_{0}))=0}$

In terms of differential equations, a relaxed system is said to have "zero initial state". Systems without an initial state are easier to work with, but systems that are not relaxed can frequently be modified to approximate relaxed systems.

## Stability

Control Systems engineers will frequently say that an unstable system has "exploded". Some physical systems actually can rupture or explode when they go unstable.

Stability is a very important concept in systems, but it is also one of the hardest system properties to prove. There are several different criteria for system stability, but the most common requirement is that the system must produce a finite output when subjected to a finite input. For instance, if 5 volts is applied to the input terminals of a given circuit, it would be best if the circuit output didn't approach infinity, and the circuit itself didn't melt or explode. This type of stability is often known as "Bounded Input, Bounded Output" stability, or BIBO stability.
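A minimal numerical illustration of BIBO behavior, assuming a simple first-order recursion y[n] = a·y[n−1] + x[n] driven by a bounded unit-step input (a Python sketch; the values of a are chosen for illustration):

```python
# BIBO sketch: y[n] = a*y[n-1] + x[n] with bounded input x[n] = 1.
# |a| < 1 keeps the output bounded; |a| > 1 makes it grow without bound.
def step_response(a, n_steps=50):
    y = 0.0
    out = []
    for _ in range(n_steps):
        y = a * y + 1.0   # bounded input: x[n] = 1
        out.append(y)
    return out

stable = step_response(0.5)     # converges toward 1/(1 - 0.5) = 2
unstable = step_response(1.5)   # grows without bound

print(max(stable) < 2.001)      # True: output stays bounded
print(unstable[-1] > 1e6)       # True: output has "exploded"
```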

There are a number of other types of stability, most of which are based on the concept of BIBO stability. Because stability is such an important and complicated topic, an entire section of this text is devoted to its study.

## Inputs and Outputs

Systems can also be categorized by the number of inputs and the number of outputs the system has. Consider a television as a system, for instance. The system has two inputs: the power wire and the signal cable. It has one output: the video display. A system with one input and one output is called single-input, single-output, or SISO. A system with multiple inputs and multiple outputs is called multi-input, multi-output, or MIMO.

These systems will be discussed in more detail later.

Exercise:

Based on the definitions of SISO and MIMO, above, determine what the acronyms SIMO and MISO mean.

# Digital and Analog

## Digital and Analog

There is a significant distinction between an analog system and a digital system, in the same way that there is a significant difference between analog and digital data. This book is going to consider both analog and digital topics, so it is worth taking some time to discuss the differences, and to display the different notations that will be used with each.

### Continuous Time

This operation can be performed using this MATLAB command:
isct

A signal is called continuous-time if it is defined at every time t.

A system is a continuous-time system if it takes a continuous-time input signal, and outputs a continuous-time output signal. Here is an example of an analog waveform:

### Discrete Time

This operation can be performed using this MATLAB command:
isdt

A signal is called discrete-time if it is only defined for particular points in time. A discrete-time system takes discrete-time input signals, and produces discrete-time output signals. The following image shows the difference between an analog waveform and the sampled discrete time equivalent:

### Quantized

A signal is called quantized if it can only take certain discrete values, and cannot take the values in between. This concept is best illustrated with examples:

1. Students with a strong background in physics will recognize this concept as being the root word in "Quantum Mechanics". In quantum mechanics, it is known that energy comes only in discrete packets. An electron bound to an atom, for example, may occupy one of several discrete energy levels, but not intermediate levels.
2. Another common example is population statistics. For instance, a common statistic is that a household in a particular country may have an average of "3.5 children", or some other fractional number. Actual households may have 3 children, or they may have 4 children, but no household has 3.5 children.
3. People with a computer science background will recognize that integer variables are quantized because they can only hold certain integer values, not fractions or decimal points.

The last example concerning computers is the most relevant, because quantized systems are frequently computer-based. Systems that are implemented with computer software and hardware will typically be quantized.

Here is an example waveform of a quantized signal. Notice how the magnitude of the wave can only take certain values, and that creates a step-like appearance. This image is discrete in magnitude, but is continuous in time:

## Analog

By definition:

Analog
A signal is considered analog if it is defined for all points in time and if it can take any real magnitude value within its range.

An analog system is a system that represents data using a direct conversion from one form to another. In other words, an analog system is a system that is continuous in both time and magnitude.

### Example: Motor

If we have a given motor, we can show that the output of the motor (rotation in units of radians per second, for instance) is a function of the voltage that is input to the motor. We can show the relationship as such:

${\displaystyle \Theta (v)=f(v)}$

Where ${\displaystyle \Theta }$ is the output in terms of rad/sec, and f(v) is the motor's conversion function between the input voltage (v) and the output. For any value of v we can calculate specifically what the rotational speed of the motor should be.

### Example: Analog Clock

Consider a standard analog clock, which represents the passage of time through the angular position of the clock hands. We can denote the angular position of the hands of the clock with the system of equations:

${\displaystyle \phi _{h}=f_{h}(t)}$
${\displaystyle \phi _{m}=f_{m}(t)}$
${\displaystyle \phi _{s}=f_{s}(t)}$

Where φh is the angular position of the hour hand, φm is the angular position of the minute hand, and φs is the angular position of the second hand. The positions of all the different hands of the clock are dependent on functions of time.

Different positions on a clock face correspond directly to different times of the day.

## Digital

Digital data is represented by discrete number values. By definition:

Digital
A signal or system is considered digital if it is both discrete-time and quantized.

Digital data always have a certain granularity, and therefore there will almost always be an error associated with using such data, especially if we want to account for all real numbers. The tradeoff, of course, is that digital data can be processed directly by digital computers, and this benefit more than makes up for the shortcomings of a digital representation.

Discrete systems will be denoted inside square brackets, as is a common notation in texts that deal with discrete values. For instance, we can denote a discrete data set of ascending numbers, starting at 1, with the following notation:

x[n] = [1 2 3 4 5 6 ...]

n, or other letters from the central area of the alphabet (m, i, j, k, and l, for instance) are commonly used to denote discrete time values. Analog, or "non-discrete", values are written with parentheses, as in ordinary function notation. Here is an example of an analog waveform and the digital equivalent. Notice that the digital waveform is discrete in both time and magnitude:

[Image: analog waveform (left) and digital waveform (right)]

### Example: Digital Clock

As a common example, let's consider a digital clock: The digital clock represents time with binary electrical data signals of 1 and 0. The 1's are usually represented by a positive voltage, and a 0 is generally represented by zero voltage. Counting in binary, we can show that any given time can be represented by a base-2 numbering system:

| Minute | Binary Representation |
|---|---|
| 1 | 1 |
| 10 | 1010 |
| 30 | 11110 |
| 59 | 111011 |

But what happens if we want to display a fraction of a minute, or a fraction of a second? A typical digital clock has a certain amount of precision, and it cannot express fractional values smaller than that precision.
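The binary table above can be reproduced with any base-2 conversion routine; for instance, a quick Python sketch using the built-in base-2 format specifier:

```python
# Reproduce the minute-to-binary table using Python's built-in
# base-2 ('b') format specifier.
for minute in (1, 10, 30, 59):
    print(minute, format(minute, 'b'))
# 1 1
# 10 1010
# 30 11110
# 59 111011
```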

## Hybrid Systems

Hybrid systems are systems that have both analog and digital components. Devices called samplers are used to convert analog signals into digital signals, and devices called reconstructors are used to convert digital signals into analog signals. Because of the use of samplers, hybrid systems are frequently called sampled-data systems.

### Example: Automobile Computer

Most modern automobiles have integrated computer systems that monitor certain aspects of the car and help control its performance. The speed of the car and the rotational speed of the transmission are analog values, but a sampler converts them into digital values so the car's computer can monitor them. The digital computer then outputs control signals to other parts of the car, to alter analog systems such as the engine timing, the suspension, and the brakes. Because the car has both digital and analog components, it is a hybrid system.

## Continuous and Discrete

Note:
We are not using the word "continuous" here in the sense of continuously differentiable, as is common in math texts.

A system is considered continuous-time if the signal exists for all time. Frequently, the terms "analog" and "continuous" will be used interchangeably, although they are not strictly the same.

Discrete systems can come in three flavors:

1. Discrete time (sampled)
2. Discrete magnitude (quantized)
3. Discrete time and magnitude (digital)

Discrete-magnitude systems are systems where the signal value can only take certain values. Discrete-time systems are systems where signals are only available (or valid) at particular times. Computer systems are discrete in the sense of (3): data is read only at specific discrete time intervals, and the data can take only a limited number of discrete values.

A discrete-time system has a sampling time value associated with it, such that each discrete value occurs at multiples of the given sampling time. We will denote the sampling time of a system as T. We can equate the square-brackets notation of a system with the continuous definition of the system as follows:

${\displaystyle x[n]=x(nT)}$

Notice that the two notations show the same thing, but the first one is typically easier to write, and it shows that the system in question is a discrete system. This book will use the square brackets to denote discrete systems by the sample number n, and parenthesis to denote continuous time functions.
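The relation x[n] = x(nT) can be sketched directly, here in Python with an assumed continuous signal x(t) = sin(t) and an assumed sampling time T = 0.1 seconds:

```python
# Sampling sketch: x[n] = x(nT) for an assumed signal x(t) = sin(t)
# and an assumed sampling time T = 0.1 s.
import math

T = 0.1

def x_continuous(t):
    return math.sin(t)

def x_discrete(n):
    return x_continuous(n * T)   # x[n] = x(nT)

samples = [x_discrete(n) for n in range(5)]
print([round(s, 4) for s in samples])  # [0.0, 0.0998, 0.1987, 0.2955, 0.3894]
```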

## Sampling and Reconstruction

The process of converting analog information into digital data is called "Sampling". The process of converting digital data into an analog signal is called "Reconstruction". We will talk about both processes in a later chapter. For more information on the topic than is available in this book, see the Analog and Digital Conversion wikibook. Here is an example of a reconstructed waveform. Notice that the reconstructed waveform here is quantized because it is constructed from a digital signal:

# System Metrics

## System Metrics

When a system is being designed and analyzed, it doesn't make any sense to test the system with all manner of strange input functions, or to measure all sorts of arbitrary performance metrics. Instead, it is in everybody's best interest to test the system with a set of standard, simple reference functions. Once the system is tested with the reference functions, there are a number of different metrics that we can use to determine the system performance.

It is worth noting that the metrics presented in this chapter represent only a small number of possible metrics that can be used to evaluate a given system. This wikibook will present other useful metrics along the way, as their need becomes apparent.

## Standard Inputs

Note:
All of the standard inputs are zero before time zero. All the standard inputs are causal.

There are a number of standard inputs that are considered simple enough and universal enough that they are considered when designing a system. These inputs are known as a unit step, a ramp, and a parabolic input.

Unit Step
A unit step function is defined piecewise as such:

[Unit Step Function]

${\displaystyle u(t)=\left\{{\begin{matrix}0,&t<0\\1,&t\geq 0\end{matrix}}\right.}$
The unit step function is a highly important function, not only in control systems engineering, but also in signal processing, systems analysis, and all branches of engineering. If the unit step function is input to a system, the output of the system is known as the step response. The step response of a system is an important tool, and we will study step responses in detail in later chapters.
Ramp
A unit ramp is defined in terms of the unit step function, as such:

[Unit Ramp Function]

${\displaystyle r(t)=tu(t)}$
It is important to note that the unit ramp function is simply the integral of the unit step function (equivalently, the unit step is the derivative of the unit ramp):
${\displaystyle r(t)=\int u(t)dt=tu(t)}$
This definition will come in handy when we learn about the Laplace Transform.
Parabolic
A unit parabolic input is similar to a ramp input:

[Unit Parabolic Function]

${\displaystyle p(t)={\frac {1}{2}}t^{2}u(t)}$
Notice also that the unit parabolic input is equal to the integral of the ramp function:
${\displaystyle p(t)=\int r(t)dt=\int tu(t)dt={\frac {1}{2}}t^{2}u(t)={\frac {1}{2}}tr(t)}$
Again, this result will become important when we learn about the Laplace Transform.

Also, sinusoidal and exponential functions are considered basic, but they are too difficult to use in initial analysis of a system.
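The three standard inputs, and the integral relationships between them, can be sketched directly. The Python fragment below (the step size and time limit are arbitrary choices for illustration) checks numerically that the ramp is the running integral of the step:

```python
# The three standard inputs, plus a numerical check that the ramp
# is the running integral of the step (simple Riemann sum).
def u(t):           # unit step
    return 1.0 if t >= 0 else 0.0

def r(t):           # unit ramp: t*u(t)
    return t * u(t)

def p(t):           # unit parabola: 0.5*t^2*u(t)
    return 0.5 * t * t * u(t)

dt = 1e-4
integral = 0.0
t = 0.0
while t < 2.0:
    integral += dt * u(t)   # accumulate the integral of u up to t = 2
    t += dt

print(abs(integral - r(2.0)) < 1e-2)  # True: integral of u matches r at t = 2
```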

Note:
To be more precise, we should have taken the limit as t approaches infinity. However, as a shorthand notation, we will typically say "t equals infinity", and assume the reader understands the shortcut that is being used.

When a unit-step function is input to a system, the steady-state value of that system is the output value at time ${\displaystyle t=\infty }$. Since it is impractical (if not impossible) to wait until infinity to observe the system, approximations and mathematical calculations are used to determine the steady-state value of the system. Most system responses are asymptotic, that is, the response approaches a particular value. Systems that are asymptotic are typically obvious from viewing the graph of the response.

### Step Response

The step response of a system is most frequently used to analyze systems, and there is a large amount of terminology involved with step responses. When exposed to the step input, the system will initially have an undesirable output period known as the transient response. The transient response occurs because a system is approaching its final output value. The steady-state response of the system is the response after the transient response has ended.

The amount of time it takes for the system output to reach the desired value (before the transient response has ended, typically) is known as the rise time. The amount of time it takes for the transient response to end and the steady-state response to begin is known as the settling time.

It is common for a systems engineer to try to improve the step response of a system. In general, it is desired for the transient response to be reduced, the rise and settling times to be shorter, and the steady-state to approach a particular desired "reference" output.

## Target Value

The target output value is the value that our system attempts to obtain for a given input. This is not the same as the steady-state value, which is the actual value that the system does obtain. The target value is frequently referred to as the reference value, or the "reference function" of the system. In essence, this is the value that we want the system to produce. When we input a "5" into an elevator, we want the output (the final position of the elevator) to be the fifth floor. Pressing the "5" button is the reference input, and is the expected value that we want to obtain. If we press the "5" button, and the elevator goes to the third floor, then our elevator is poorly designed.

## Rise Time

Rise time is the amount of time that it takes for the system response to move from its initial value toward the target value. Many texts define the rise time as the time it takes for the response to rise from 10% to 90% of the target value, because some systems never reach 100% of the target value and would otherwise have an infinite rise time. This book will specify which convention to use for each individual problem. Rise time is typically denoted tr, or trise.

## Percent Overshoot

Underdamped systems frequently overshoot their target value initially. This initial surge is known as the overshoot value. The ratio of the amount of overshoot to the target steady-state value of the system, expressed as a percentage, is known as the percent overshoot. Percent overshoot represents an overcompensation of the system, and can produce dangerously large output signals that can damage a system. Percent overshoot is typically denoted PO.

Example: Refrigerator

Consider an ordinary household refrigerator. The refrigerator has cycles where it is on and when it is off. When the refrigerator is on, the coolant pump is running, and the temperature inside the refrigerator decreases. The temperature decreases to a much lower level than is required, and then the pump turns off.

When the pump is off, the temperature slowly increases again as heat is absorbed into the refrigerator. When the temperature gets high enough, the pump turns back on. Because the pump cools down the refrigerator more than it needs to initially, we can say that it "overshoots" the target value by a certain specified amount.

Example: Refrigerator

Another example concerning a refrigerator concerns the electrical demand of the heat pump when it first turns on. The pump is an inductive mechanical motor, and when the motor first activates, a special counter-acting force known as "back EMF" resists the motion of the motor, and causes the pump to draw more electricity until the motor reaches its final speed. During the startup time for the pump, lights on the same electrical circuit as the refrigerator may dim slightly, as electricity is drawn away from the lamps, and into the pump. This initial draw of electricity is a good example of overshoot.

## Steady-State Error

Usually, the letter e or E will be used to denote error values.

Sometimes a system might never achieve the desired steady-state value, but instead will settle on an output value that is not desired. The difference between the steady-state output value and the reference input value at steady state is called the steady-state error of the system. We will use the variable ess to denote the steady-state error of the system.

## Settling Time

After the initial rise time of the system, some systems will oscillate and vibrate for an amount of time before the system output settles on the final value. The amount of time it takes to reach steady state after the initial rise time is known as the settling time. Notice that damped oscillating systems may never settle completely, so we will define settling time as being the amount of time for the system to reach, and stay in, a certain acceptable range. The acceptable range for settling time is typically determined on a per-problem basis, although common values are 20%, 10%, or 5% of the target value. The settling time will be denoted as ts.
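The step-response metrics above can be extracted from a simulated response. The Python sketch below simulates a hypothetical underdamped second-order system (ζ = 0.3 and ωn = 2 rad/s are assumed values) and measures the rise time (first crossing of the target), percent overshoot, and 5% settling time:

```python
# Measure rise time, percent overshoot, and 5% settling time from a
# simulated step response of y'' + 2*zeta*wn*y' + wn^2*y = wn^2
# (zeta = 0.3, wn = 2.0 are assumed; semi-implicit Euler integration).
zeta, wn, dt = 0.3, 2.0, 1e-4
y, v, t = 0.0, 0.0, 0.0
peak, rise_time, ts_candidate = 0.0, None, None
while t < 20.0:
    a = wn * wn * (1.0 - y) - 2.0 * zeta * wn * v
    v += dt * a
    y += dt * v
    t += dt
    peak = max(peak, y)
    if rise_time is None and y >= 1.0:
        rise_time = t                  # first crossing of the target value
    if abs(y - 1.0) > 0.05:
        ts_candidate = None            # left the 5% band: restart the timer
    elif ts_candidate is None:
        ts_candidate = t               # entered (and so far stayed in) the band

overshoot_pct = 100.0 * (peak - 1.0)   # percent overshoot
print(round(rise_time, 2), round(overshoot_pct, 1), round(ts_candidate, 2))
```

For these assumed values the rise time comes out near 1 second and the overshoot near 37%, consistent with the standard second-order formulas.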

## System Order

The order of a system is defined by the number of independent energy-storage elements in the system, and intuitively by the highest order of the linear differential equation that describes the system. In a transfer function representation, the order is the highest exponent of s in the transfer function. In a proper system, the system order is the degree of the denominator polynomial. In a state-space representation, the system order is the number of state variables used in the system. The order of a system will frequently be denoted with an n or N, although these variables are also used for other purposes. This book will make the distinction clear when these variables are used.

### Proper Systems

A proper system is a system where the degree of the denominator is larger than or equal to the degree of the numerator polynomial. A strictly proper system is a system where the degree of the denominator polynomial is larger than (but never equal to) the degree of the numerator polynomial. A biproper system is a system where the degree of the denominator polynomial equals the degree of the numerator polynomial.

It is important to note that only proper systems can be physically realized. In other words, a system that is not proper cannot be built. It makes no sense to spend a lot of time designing and analyzing imaginary systems.

### Example: System Order

Find the order of this system:

${\displaystyle G(s)={\frac {1+s}{1+s+s^{2}}}}$

The highest exponent in the denominator is s2, so the system is order 2. Also, since the denominator is of higher degree than the numerator, this system is strictly proper.

In the above example, G(s) is a second-order transfer function because the highest power of s in the denominator is 2. Second-order systems are studied extensively and are among the easiest to work with.

## System Type

Let's say that we have a process transfer function (or combination of functions, such as a controller feeding in to a process), all in the forward branch of a unity feedback loop. Say that the overall forward branch transfer function is in the following generalized form (known as pole-zero form):

[Pole-Zero Form]

${\displaystyle G(s)={\frac {K\prod _{i}(s-s_{i})}{s^{M}\prod _{j}(s-s_{j})}}}$
Poles at the origin are called integrators, because they have the effect of performing integration on the input signal.

We call the parameter M the system type. Note that increased system type numbers correspond to larger numbers of poles at s = 0. More poles at the origin generally have a beneficial effect on steady-state error, but they increase the order of the system and make it increasingly difficult to implement physically. System type will generally be denoted with a letter like N, M, or m. Because these variables are typically reused for other purposes, this book will make clear distinctions when they are employed.

Now, we will define a few terms that are commonly used when discussing system type. These new terms are Position Error, Velocity Error, and Acceleration Error. These names are throwbacks to physics terms where acceleration is the derivative of velocity, and velocity is the derivative of position. Note that none of these terms are meant to deal with movement, however.

Position Error
The position error is the amount of steady-state error of the system when stimulated by a unit step input, and is characterized by the position error constant ${\displaystyle K_{p}}$. We define the position error constant as follows:

[Position Error Constant]

${\displaystyle K_{p}=\lim _{s\to 0}G(s)}$
Where G(s) is the transfer function of our system.
Velocity Error
The velocity error is the amount of steady-state error when the system is stimulated with a ramp input. We define the velocity error constant as such:

[Velocity Error Constant]

${\displaystyle K_{v}=\lim _{s\to 0}sG(s)}$
Acceleration Error
The acceleration error is the amount of steady-state error when the system is stimulated with a parabolic input. We define the acceleration error constant to be:

[Acceleration Error Constant]

${\displaystyle K_{a}=\lim _{s\to 0}s^{2}G(s)}$

Now, this table will show briefly the relationship between the system type, the kind of input (step, ramp, parabolic), and the steady-state error of the system:

| Type, M | Au(t) | Ar(t) | Ap(t) |
|---|---|---|---|
| 0 | ${\displaystyle e_{ss}={\frac {A}{1+K_{p}}}}$ | ${\displaystyle e_{ss}=\infty }$ | ${\displaystyle e_{ss}=\infty }$ |
| 1 | ${\displaystyle e_{ss}=0}$ | ${\displaystyle e_{ss}={\frac {A}{K_{v}}}}$ | ${\displaystyle e_{ss}=\infty }$ |
| 2 | ${\displaystyle e_{ss}=0}$ | ${\displaystyle e_{ss}=0}$ | ${\displaystyle e_{ss}={\frac {A}{K_{a}}}}$ |
| > 2 | ${\displaystyle e_{ss}=0}$ | ${\displaystyle e_{ss}=0}$ | ${\displaystyle e_{ss}=0}$ |
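The error-constant limits above can be sketched in code. The Python fragment below is a hypothetical helper: G(s) is supplied as a gain K, numerator and denominator coefficient lists in ascending powers of s (so the first coefficient is the polynomial's value at s = 0), and an integrator count M:

```python
# Error constants for G(s) = K * N(s) / (s^M * D(s)), where N and D
# are coefficient lists in ascending powers of s (N[0] = N(0), D[0] = D(0)).
def error_constants(K, num, den, M):
    g0 = K * num[0] / den[0]          # K * N(0) / D(0)
    inf = float('inf')
    if M == 0:
        return g0, 0.0, 0.0           # Kp finite; Kv = Ka = 0
    if M == 1:
        return inf, g0, 0.0           # Kv = lim s*G(s) = K*N(0)/D(0)
    return inf, inf, g0 if M == 2 else inf

# Type-1 example (assumed for illustration): G(s) = 10*(s + 2) / (s*(s + 5))
Kp, Kv, Ka = error_constants(10.0, [2.0, 1.0], [5.0, 1.0], 1)
print(Kp, Kv, Ka)   # inf 4.0 0.0
```

For this type-1 example, a ramp input Ar(t) leaves a steady-state error of A/Kv = A/4, matching the table.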

### Z-Domain Type

Likewise, we can show that the system type can be found from the following generalized transfer function in the Z domain:

${\displaystyle G(z)={\frac {K\prod _{i}(z-z_{i})}{(z-1)^{M}\prod _{j}(z-z_{j})}}}$

Where the constant M is the type of the digital system. Now, we will show how to find the various error constants in the Z-Domain:

[Z-Domain Error Constants]

| Error Constant | Equation |
|---|---|
| Kp | ${\displaystyle K_{p}=\lim _{z\to 1}G(z)}$ |
| Kv | ${\displaystyle K_{v}=\lim _{z\to 1}(z-1)G(z)}$ |
| Ka | ${\displaystyle K_{a}=\lim _{z\to 1}(z-1)^{2}G(z)}$ |

## Visually

Here is an image of the various system metrics, acting on a system in response to a step input:

The target value is the value of the input step response. The rise time is the time at which the waveform first reaches the target value. The overshoot is the amount by which the waveform exceeds the target value. The settling time is the time it takes for the system to settle into a particular bounded region. This bounded region is denoted with two short dotted lines above and below the target value.

# System Modeling

## The Control Process

It is the job of a control engineer to analyze existing systems, and to design new systems to meet specific needs. Sometimes new systems need to be designed, but more frequently a controller unit needs to be designed to improve the performance of existing systems. When designing a system, or implementing a controller to augment an existing system, we need to follow some basic steps:

1. Model the system mathematically
2. Analyze the mathematical model
3. Design system/controller
4. Implement system/controller and test

The vast majority of this book is going to be focused on (2), the analysis of the mathematical systems. This chapter alone will be devoted to a discussion of the mathematical modeling of the systems.

## External Description

An external description of a system relates the system input to the system output without explicitly taking into account the internal workings of the system. The external description of a system is sometimes also referred to as the Input-Output Description of the system, because it only deals with the inputs and the outputs to the system.

Suppose the system can be represented by a mathematical function h(t, r), where t is the time that the output is observed, and r is the time that the input is applied. We can relate the system function h(t, r) to the input x and the output y through the use of an integral:

[General System Description]

${\displaystyle y(t)=\int _{-\infty }^{\infty }h(t,r)x(r)dr}$

This integral form holds for all linear systems, and every linear system can be described by such an equation.

If a system is causal (i.e. an input at t=r affects system behaviour only for ${\displaystyle t\geq r}$) and there is no input to the system before t=0, we can change the limits of the integration:

${\displaystyle y(t)=\int _{0}^{t}h(t,r)x(r)dr}$

### Time-Invariant Systems

If furthermore a system is time-invariant, we can rewrite the system description equation as follows:

${\displaystyle y(t)=\int _{0}^{t}h(t-r)x(r)dr}$

This equation is known as the convolution integral, and we will discuss it more in the next chapter.

Every Linear Time-Invariant (LTI) system can be used with the Laplace Transform, a powerful tool that allows us to convert an equation from the time domain into the S-Domain, where many calculations are easier. Time-variant systems cannot be used with the Laplace Transform.
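The convolution integral can be approximated with a simple Riemann sum. The Python sketch below assumes a first-order impulse response h(t) = e^(−t) and a unit-step input, for which the exact output is y(t) = 1 − e^(−t):

```python
# Approximate y(t) = integral of h(t - r) * x(r) dr over [0, t]
# with a left Riemann sum, for an assumed h(t) = e^{-t} and x(t) = u(t).
import math

dt = 1e-3

def h(t):
    return math.exp(-t) if t >= 0 else 0.0

def y(t):
    n = int(t / dt)
    # x(r) = 1 for all r in [0, t] (unit step)
    return dt * sum(h(t - k * dt) * 1.0 for k in range(n))

exact = 1.0 - math.exp(-2.0)
print(abs(y(2.0) - exact) < 1e-2)  # True
```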

## Internal Description

If a system is linear and lumped, it can also be described using a system of equations known as state-space equations. In state-space equations, we use the variable x to represent the internal state of the system. We then use u as the system input, and we continue to use y as the system output. We can write the state-space equations as such:

${\displaystyle x'(t)=A(t)x(t)+B(t)u(t)}$
${\displaystyle y(t)=C(t)x(t)+D(t)u(t)}$

We will discuss the state-space equations more when we get to the section on modern controls.
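As a preview, the state-space equations can be simulated directly with Euler integration. The Python sketch below uses a hypothetical two-state, time-invariant system; the matrices are chosen purely for illustration:

```python
# Euler simulation of x' = A x + B u, y = C x + D u for an assumed
# 2-state system with constant (time-invariant) matrices.
A = [[0.0, 1.0], [-2.0, -3.0]]
B = [0.0, 1.0]
C = [1.0, 0.0]
D = 0.0

def simulate(u, t_end=10.0, dt=1e-3):
    x = [0.0, 0.0]
    t = 0.0
    while t < t_end:
        dx = [sum(A[i][j] * x[j] for j in range(2)) + B[i] * u
              for i in range(2)]              # x' = A x + B u
        x = [x[i] + dt * dx[i] for i in range(2)]
        t += dt
    return sum(C[i] * x[i] for i in range(2)) + D * u   # y = C x + D u

# For a constant (unit-step) input, this system settles to a DC gain of 0.5.
print(round(simulate(1.0), 3))  # 0.5
```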

## Complex Descriptions

Systems which are LTI and Lumped can also be described using a combination of the state-space equations, and the Laplace Transform. If we take the Laplace Transform of the state equations that we listed above, we can get a set of functions known as the Transfer Matrix Functions. We will discuss these functions in a later chapter.

## Representations

To recap, we will prepare a table with the various system properties, and the available methods for describing the system:

| Properties | State-Space Equations | Laplace Transform | Transfer Matrix |
|---|---|---|---|
| Linear, Time-Variant, Distributed | no | no | no |
| Linear, Time-Variant, Lumped | yes | no | no |
| Linear, Time-Invariant, Distributed | no | yes | no |
| Linear, Time-Invariant, Lumped | yes | yes | yes |

We will discuss all these different types of system representation later in the book.

## Analysis

Once a system is modeled using one of the representations listed above, the system needs to be analyzed. We can determine the system metrics and then we can compare those metrics to our specification. If our system meets the specifications we are finished with the design process. However if the system does not meet the specifications (as is typically the case), then suitable controllers and compensators need to be designed and added to the system.

Once the controllers and compensators have been designed, the job isn't finished: we need to analyze the new composite system to ensure that the controllers work properly. Also, we need to ensure that the systems are stable: unstable systems can be dangerous.

### Frequency Domain

For proposals, early-stage designs, and quick-turnaround analyses, a frequency domain model is often superior to a time domain model. Frequency domain models take disturbance PSDs (Power Spectral Densities) directly, use transfer functions directly, and produce output or residual PSDs directly. The answer is a steady-state response. Often the controller is aiming for zero, so the steady-state response is also the residual error that becomes the analysis output or reported metric.

Table 1: Frequency Domain Model Inputs and Outputs

| Input | Model | Output |
|-------|-------|--------|
| PSD | Transfer Function | PSD |

#### Brief Overview of the Math

Frequency domain modeling is a matter of determining the response of a system to a random process input.

Figure 1: Frequency Domain System
${\displaystyle S_{YY}\left(\omega \right)=G^{*}\left(\omega \right)G\left(\omega \right)S_{XX}\left(\omega \right)=\left|G\left(\omega \right)\right\vert ^{2}S_{XX}\left(\omega \right)}$[1]

where

${\displaystyle S_{XX}\left(\omega \right)}$ is the one-sided input PSD in ${\displaystyle {\frac {magnitude^{2}}{Hz}}}$
${\displaystyle G\left(\omega \right)}$ is the frequency response function of the system and
${\displaystyle S_{YY}\left(\omega \right)}$ is the one-sided output PSD or auto power spectral density function.

The frequency response function, ${\displaystyle G\left(\omega \right)}$, is related to the impulse response function ${\displaystyle g\left(\tau \right)}$ by

${\displaystyle g\left(\tau \right)={\frac {1}{2\pi }}\int _{-\infty }^{\infty }e^{i\omega \tau }G\left(\omega \right)\,d\omega }$

Note that some texts state this is valid only for stationary random processes; others require stationary and ergodic processes, and still others weakly stationary processes. Some texts do not distinguish between strictly stationary and weakly stationary processes. In practice, the rule of thumb is: if the PSD of the input process is the same from hour to hour and day to day, then the input PSD can be used and the above equation is valid.
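As a minimal numerical sketch of the relation above, the snippet below pushes a flat (white) input PSD through a first-order low-pass frequency response; the filter time constant and PSD level are made-up illustrative values, not from any system in the text.

```python
import numpy as np

# Propagate a flat input PSD through G(w) = 1 / (1 + j*w*tau).
# tau and S0 are assumed illustrative values.
tau = 0.1                                   # time constant, seconds
S0 = 2.0                                    # input PSD, magnitude^2 / Hz

w = np.linspace(0.0, 200.0, 1001)           # frequency grid, rad/s
G = 1.0 / (1.0 + 1j * w * tau)              # frequency response function
S_yy = np.abs(G) ** 2 * S0                  # S_YY(w) = |G(w)|^2 * S_XX(w)

print(S_yy[0])          # at DC the filter passes the full input PSD
print(S_yy[50])         # at w = 1/tau the output PSD is half the input
```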

#### Notes

1. Sun, Jian-Qiao (2006). Stochastic Dynamics and Control, Volume 4. Amsterdam: Elsevier Science. ISBN 0444522301.

## Modeling Examples

Modeling in control systems is oftentimes a matter of judgement. This judgement is developed by creating models and learning from other people's models. ControlTheoryPro.com is a site with many examples. Here are links to a few of them.

## Manufacture

Once the system has been properly designed we can prototype our system and test it. Assuming our analysis was correct and our design is good, the prototype should work as expected. Now we can move on to manufacture and distribute our completed systems.

Classical Controls

The classical method of controls involves analysis and manipulation of systems in the complex frequency domain. This domain, entered into by applying the Laplace or Fourier Transforms, is useful in examining the characteristics of the system, and determining the system response.

# Sampled Data Systems

## Ideal Sampler

In this chapter, we are going to introduce the ideal sampler and the Star Transform. First, we need to introduce (or review) the Geometric Series infinite sum. The results of this sum will be very useful in calculating the Star Transform, later.

Consider a sampler device that operates as follows: every T seconds, the sampler reads the current value of the input signal at that exact moment. The sampler then holds that value on the output for T seconds, before taking the next sample. We have a generic input to this system, f(t), and our sampled output will be denoted f*(t). We can then show the following relationship between the two signals:

${\displaystyle f^{\,*}(t)=f(0){\big (}\mathrm {u} (t\,-\,0)\,-\,\mathrm {u} (t\,-\,T){\big )}\,+\,f(T){\big (}\mathrm {u} (t\,-\,T)\,-\,\mathrm {u} (t\,-\,2T){\big )}\,+\;\cdots \;+\,f(nT){\big (}\mathrm {u} (t\,-\,nT)\,-\,\mathrm {u} (t\,-\,(n\,+\,1)T){\big )}\,+\;\cdots }$

Note that the value of f*(t) at time t = 1.5T is the same as at time t = T, because the sampler holds each sample for a full period. The same is true for any time between two sampling instants.

Taking the Laplace Transform of this infinite sequence yields a special result called the Star Transform. The Star Transform is also occasionally called the "Starred Transform" in some texts.

## Geometric Series

Before we talk about the Star Transform or even the Z-Transform, it is useful for us to review the mathematical background behind solving infinite series. Specifically, because of the nature of these transforms, we are going to look at methods to solve for the sum of a geometric series.

A geometric series is a sum of values with increasing exponents, as such:

${\displaystyle \sum _{k=0}^{n}ar^{k}=ar^{0}+ar^{1}+ar^{2}+ar^{3}+\cdots +ar^{n}\,}$

In the equation above, notice that each term in the series has a coefficient value, a. We can optionally factor out this coefficient, if the resulting equation is easier to work with:

${\displaystyle a\sum _{k=0}^{n}r^{k}=a\left(r^{0}+r^{1}+r^{2}+r^{3}+\cdots +r^{n}\,\right)}$

Once we have a series in either of these formats, we can conveniently solve for its total sum using the following equation:

${\displaystyle a\sum _{k=0}^{n}r^{k}=a{\frac {1-r^{n+1}}{1-r}}}$

Let's say that we start our series at an index other than zero, for instance at k = m:

${\displaystyle \sum _{k=m}^{n}ar^{k}=ar^{m}+ar^{m+1}+ar^{m+2}+ar^{m+3}+\cdots +ar^{n}\,}$

We can generalize the sum of this series as follows:

[Geometric Series]

${\displaystyle \sum _{k=m}^{n}ar^{k}={\frac {a(r^{m}-r^{n+1})}{1-r}}}$
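This closed form is easy to sanity-check numerically; a quick Python comparison of the direct sum against the formula:

```python
# Compare a directly-computed geometric sum against the closed form
#   sum_{k=m}^{n} a r^k = a (r^m - r^(n+1)) / (1 - r)
a, r, m, n = 3.0, 0.5, 2, 10

direct = sum(a * r**k for k in range(m, n + 1))
closed = a * (r**m - r**(n + 1)) / (1 - r)

print(abs(direct - closed) < 1e-12)  # True
```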

With that result out of the way, now we need to worry about making this series converge. In the above sum, we know that n approaches infinity (because this is an infinite sum). Therefore, any term that contains the variable n is a matter of worry when we are trying to make this series converge. If we examine the above equation, we see that there is one term in the entire result with an n in it, and from that term we can set a fundamental inequality to govern the geometric series.

${\displaystyle \lim _{n\to \infty }r^{n+1}<\infty }$

To satisfy this condition, we must satisfy the following:

[Geometric convergence condition]

${\displaystyle |r|<1}$

Therefore, we come to the final result: The geometric series converges if and only if the magnitude of r is less than one.

## The Star Transform

The Star Transform is defined as such:

[Star Transform]

${\displaystyle F^{*}(s)={\mathcal {L}}^{*}[f(t)]=\sum _{k=0}^{\infty }f(kT)e^{-skT}}$

The Star Transform depends on the sampling time T and is different for a single signal depending on the frequency at which the signal is sampled. Since the Star Transform is defined as an infinite series, it is important to note that some inputs to the Star Transform will not converge, and therefore some functions do not have a valid Star Transform. Also, it is important to note that the Star Transform may only be valid under a particular region of convergence. We will cover this topic more when we discuss the Z-transform.
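For a concrete case, take f(t) = e^(-at). The Star Transform series is then itself geometric with ratio e^(-(s+a)T), so it sums to 1/(1 - e^(-(s+a)T)) whenever that ratio has magnitude less than one. The sketch below checks a truncated series against this closed form; the values of a, T, and s are arbitrary.

```python
import math

# Star Transform of f(t) = e^(-a t):
#   F*(s) = sum_k e^(-a k T) e^(-s k T) = 1 / (1 - e^(-(s+a)T))
a, T, s = 1.0, 0.1, 2.0   # illustrative values, all real

truncated = sum(math.exp(-(a + s) * k * T) for k in range(200))
closed = 1.0 / (1.0 - math.exp(-(s + a) * T))

print(abs(truncated - closed) < 1e-9)  # True
```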

### Star ↔ Laplace

Complex Analysis/Residue Theory

The Laplace Transform and the Star Transform are clearly related, because we obtained the Star Transform by using the Laplace Transform on a time-domain signal. However, the method to convert between the two results can be a slightly difficult one. To find the Star Transform of a Laplace function, we must take the residues of the Laplace equation, as such:

${\displaystyle X^{*}(s)=\sum {\bigg [}{\text{residues of }}X(\lambda ){\frac {1}{1-e^{-T(s-\lambda )}}}{\bigg ]}_{{\text{at poles of X}}(\lambda )}}$

This math is advanced for most readers, so we can also use an alternate method, as follows:

${\displaystyle X^{*}(s)={\frac {1}{T}}\sum _{n=-\infty }^{\infty }X(s+jn\omega _{s})+{\frac {x(0)}{2}}}$

Neither of these methods is particularly easy, however, and therefore we will not discuss the relationship between the Laplace transform and the Star Transform any more than is absolutely necessary in this book. Suffice it to say, however, that the Laplace transform and the Star Transform are related mathematically.

### Star + Laplace

In some systems, we may have components that are both continuous and discrete in nature. For instance, our feedback loop might consist of an Analog-to-Digital converter, followed by a computer (for processing), and then a Digital-to-Analog converter. In this case, the computer is acting on a digital signal, but the rest of the system is acting on continuous signals. Star transforms can interact with Laplace transforms in some of the following ways:

Given:

${\displaystyle Y(s)=X^{*}(s)H(s)}$

Then:

${\displaystyle Y^{*}(s)=X^{*}(s)H^{*}(s)}$

Given:

${\displaystyle Y(s)=X(s)H(s)}$

Then:

${\displaystyle Y^{*}(s)={\overline {XH}}^{*}(s)}$
${\displaystyle Y^{*}(s)\neq X^{*}(s)H^{*}(s)}$

Where ${\displaystyle {\overline {XH}}^{*}(s)}$ is the Star Transform of the product of X(s)H(s).

### Convergence of the Star Transform

The Star Transform is defined as an infinite series, so it is critically important that the series converge (not reach infinity), or else the result will be nonsensical. Since the Star Transform is a geometric series (for many input signals), we can use geometric series analysis to show whether the series converges, and even under what particular conditions the series converges. The restrictions on the Star Transform that allow it to converge are known as the region of convergence (ROC) of the transform. Typically a transform must be accompanied by the explicit mention of its ROC.

## The Z-Transform

Let us say now that we have a discrete data set that is sampled at regular intervals. We can call this set x[n]:

x[n] = [ x[0] x[1] x[2] x[3] x[4] ... ]

We can utilize a special transform, called the Z-transform, to make dealing with this set easier:

[Z Transform]

${\displaystyle X(z)={\mathcal {Z}}\left\{x[n]\right\}=\sum _{n=-\infty }^{\infty }x[n]z^{-n}}$

This is also known as the Bilateral Z-Transform. We will only discuss this version of the transform in this book. Z-Transform properties, and a table of common transforms, can be found in the Appendix.

Like the Star Transform, the Z Transform is defined as an infinite series, and therefore we need to worry about convergence. In fact, there are a number of instances that have identical Z-Transforms but different regions of convergence (ROC). Therefore, when talking about the Z transform, you must include the ROC, or you are missing valuable information.

### Z Transfer Functions

Like the Laplace Transform, in the Z-domain we can use the input-output relationship of the system to define a transfer function.

The transfer function in the Z domain operates exactly the same as the transfer function in the S Domain:

${\displaystyle H(z)={\frac {Y(z)}{X(z)}}}$
${\displaystyle {\mathcal {Z}}\{h[n]\}=H(z)}$

Similarly, the value h[n] which represents the response of the digital system is known as the impulse response of the system. It is important to note, however, that the definition of an "impulse" is different in the analog and digital domains.

### Inverse Z Transform

The inverse Z Transform is defined by the following path integral:

[Inverse Z Transform]

${\displaystyle x[n]=Z^{-1}\{X(z)\}={\frac {1}{2\pi j}}\oint _{C}X(z)z^{n-1}dz\ }$

Where C is a counterclockwise closed path encircling the origin and entirely in the region of convergence (ROC). The contour or path, C, must encircle all of the poles of X(z).

This math is relatively advanced compared to some other material in this book, and therefore little or no further attention will be paid to solving the inverse Z-Transform in this manner. Z transform pairs are heavily tabulated in reference texts, so many readers can consider that to be the primary method of solving for inverse Z transforms. There are a number of Z-transform pairs available in table form in The Appendix.

### Final Value Theorem

Like the Laplace Transform, the Z Transform also has an associated final value theorem:

[Final Value Theorem (Z)]

${\displaystyle \lim _{n\to \infty }x[n]=\lim _{z\to 1}(z-1)X(z)}$

This equation can be used to find the steady-state response of a system, and also to calculate the steady-state error of the system.
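A quick numerical check of the theorem, using the hypothetical sequence x[n] = 1 - 0.5^n (which clearly tends to 1) and its Z transform X(z) = z/(z-1) - z/(z-0.5):

```python
# Final Value Theorem check for x[n] = 1 - 0.5**n, whose limit is 1.
def X(z):
    return z / (z - 1.0) - z / (z - 0.5)

z = 1.0 + 1e-8                 # approach z -> 1 from outside the unit circle
fvt = (z - 1.0) * X(z)         # (z - 1) X(z) should tend to the final value

print(abs(fvt - 1.0) < 1e-6)   # True
```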

## Star ↔ Z

The Z transform is related to the Star Transform through the following change of variables:

${\displaystyle z=e^{sT}}$

Notice that in the Z domain, we don't maintain any information about the sampling period, so converting from a Star-transformed signal to the Z domain loses that information. When converting back to the star domain, however, the value of T can be re-inserted into the equation, if it is still available.

Also of some importance is the fact that the Z transform as defined above is bilateral, while the Star Transform is unilateral. This means that we can only convert between the two transforms if the sampled signal is zero for all values of n < 0.

Because the two transforms are so closely related, it can be said that the Z transform is simply a notational convenience for the Star Transform. With that said, this book could easily use the Star Transform for all problems, and ignore the added burden of Z transform notation entirely. A common example of this is Richard Hamming's book "Numerical Methods for Scientists and Engineers", which uses the Fourier Transform for all problems, considering the Laplace, Star, and Z-Transforms to be merely notational conveniences. However, the Control Systems wikibook is under the impression that the correct utilization of different transforms can make problems easier to solve, and we will therefore use a multi-transform approach.

### Z plane

Note:
The lower-case z is the name of the variable, and the upper-case Z is the name of the Transform and the plane.

z is a complex variable with a real part and an imaginary part. In other words, we can define z as such:

${\displaystyle z=\operatorname {Re} (z)+j\operatorname {Im} (z)}$

Since z can be broken down into two independent components, it often makes sense to graph the variable z on the Z-plane. In the Z-plane, the horizontal axis represents the real part of z, and the vertical axis represents the imaginary part of z.

Notice also that if we define z in terms of the star-transform relation:

${\displaystyle z=e^{sT}}$

we can separate out s into real and imaginary parts:

${\displaystyle s=\sigma +j\omega }$

We can plug this into our equation for z:

${\displaystyle z=e^{(\sigma +j\omega )T}=e^{\sigma T}e^{j\omega T}}$

Through Euler's formula, we can separate out the complex exponential as such:

${\displaystyle z=e^{\sigma T}(\cos(\omega T)+j\sin(\omega T))}$

If we define two new variables, M and φ:

${\displaystyle M=e^{\sigma T}}$
${\displaystyle \phi =\omega T}$

We can write z in terms of M and φ, using Euler's formula:

${\displaystyle z=M\cos(\phi )+jM\sin(\phi )}$

This is clearly a polar representation of z, with the magnitude of the polar function (M) based on the real part of s, and the angle of the polar function (φ) based on the imaginary part of s.
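The decomposition above is easy to verify numerically: for any chosen σ, ω, and T, the magnitude of z = e^(sT) should equal e^(σT) and its angle should equal ωT (modulo 2π). A sketch with arbitrary values:

```python
import cmath, math

# Check |z| = e^(sigma*T) and angle(z) = omega*T for z = e^(sT).
sigma, omega, T = -0.5, 3.0, 0.2   # illustrative values; omega*T < pi here

z = cmath.exp(complex(sigma, omega) * T)
M = math.exp(sigma * T)
phi = omega * T

print(abs(abs(z) - M) < 1e-12)            # True
print(abs(cmath.phase(z) - phi) < 1e-12)  # True
```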

### Region of Convergence

To best teach the region of convergence (ROC) for the Z-transform, we will do a quick example.

We have the following discrete series of a decaying exponential:

${\displaystyle x[n]=e^{-2n}u[n]}$

Now, we can plug this function into the Z transform equation:

${\displaystyle X(z)={\mathcal {Z}}[x[n]]=\sum _{n=-\infty }^{\infty }e^{-2n}u[n]z^{-n}}$

Note that we can remove the unit step function, and change the limits of the sum:

${\displaystyle X(z)=\sum _{n=0}^{\infty }e^{-2n}z^{-n}}$

This is because the series is zero for all n < 0. If we try to combine the n terms, we get the following result:

${\displaystyle X(z)=\sum _{n=0}^{\infty }(e^{2}z)^{-n}}$

Once we have our series in this form, we can break it down to look like our geometric series:

${\displaystyle a=1}$
${\displaystyle r=(e^{2}z)^{-1}}$

And finally, we can find our final value, using the geometric series formula:

${\displaystyle a\sum _{k=0}^{n}r^{k}=a{\frac {1-r^{n+1}}{1-r}}=1{\frac {1-((e^{2}z)^{-1})^{n+1}}{1-(e^{2}z)^{-1}}}}$

Again, we know that to make this series converge, we need the magnitude of r to be less than 1:

${\displaystyle |(e^{2}z)^{-1}|=\left|{\frac {1}{e^{2}z}}\right|<1}$
${\displaystyle |e^{2}z|>1}$

And finally we obtain the region of convergence for this Z-transform:

${\displaystyle |z|>{\frac {1}{e^{2}}}}$
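We can spot-check this ROC numerically: for a value of z with |z| > e^(-2), a long partial sum of the series should agree with the closed form. A sketch, with z chosen arbitrarily inside the ROC:

```python
import math

# Partial sum of X(z) = sum_n e^(-2n) z^(-n), n >= 0, versus the
# closed form 1 / (1 - (e^2 z)^-1), for a z inside the ROC.
z = 0.5                         # 0.5 > e^-2 ≈ 0.135, so z is in the ROC

partial = sum(math.exp(-2 * n) * z ** (-n) for n in range(200))
closed = 1.0 / (1.0 - 1.0 / (math.exp(2) * z))

print(abs(partial - closed) < 1e-9)  # True
```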

### Laplace ↔ Z

There are no easy, direct ways to convert between the Laplace transform and the Z transform. Nearly all methods of conversion reproduce some aspects of the original equation faithfully, and incorrectly reproduce other aspects. For some of the main mapping techniques between the two, see the Z Transform Mappings Appendix.

However, there are some topics that we need to discuss. First and foremost, conversions between the Laplace domain and the Z domain are not linear; this leads to some of the following problems:

1. ${\displaystyle {\mathcal {L}}[G(z)H(z)]\neq G(s)H(s)}$
2. ${\displaystyle {\mathcal {Z}}[G(s)H(s)]\neq G(z)H(z)}$

This means that when we combine two functions in one domain multiplicatively, we must find a combined transform in the other domain. Here is how we denote this combined transform:

${\displaystyle {\mathcal {Z}}[G(s)H(s)]={\overline {GH}}(z)}$

Notice that we use a horizontal bar over top of the multiplied functions, to denote that we took the transform of the product, not of the individual pieces. However, if we have a system that incorporates a sampler, we can show a simple result. If we have the following format:

${\displaystyle Y(s)=X^{*}(s)H(s)}$

Then we can put everything in terms of the Star Transform:

${\displaystyle Y^{*}(s)=X^{*}(s)H^{*}(s)}$

and once we are in the star domain, we can do a direct change of variables to reach the Z domain:

${\displaystyle Y^{*}(s)=X^{*}(s)H^{*}(s)\to Y(z)=X(z)H(z)}$

Note that we can only make this equivalence relationship if the system incorporates an ideal sampler, and therefore one of the multiplicative terms is in the star domain.

### Example

Let's say that we have the following equation in the Laplace domain:

${\displaystyle Y(s)=A^{*}(s)B(s)+C(s)D(s)}$

And because we have a discrete sampler in the system, we want to analyze it in the Z domain. We can break up this equation into two separate terms, and transform each:

${\displaystyle {\mathcal {Z}}[A^{*}(s)B(s)]\to {\mathcal {Z}}[A^{*}(s)B^{*}(s)]=A(z)B(z)}$

And

${\displaystyle {\mathcal {Z}}[C(s)D(s)]={\overline {CD}}(z)}$

And when we add them together, we get our result:

${\displaystyle Y(z)=A(z)B(z)+{\overline {CD}}(z)}$

## Z ↔ Fourier

By substituting variables, we can relate the Star transform to the Fourier Transform as well:

${\displaystyle e^{sT}=e^{j\omega }}$
${\displaystyle e^{(\sigma +j\omega )T}=e^{j\omega }}$

If we assume that T = 1, the two sides agree when the real part of s is zero. Notice that the relationship between the Laplace and Fourier transforms is mirrored here: the Fourier transform is the Laplace transform with no real part in the transform variable.

There are a number of discrete-time variants to the Fourier transform as well, which are not discussed in this book. For more information about these variants, see Digital Signal Processing.

## Reconstruction

Some of the easiest reconstruction circuits are called "Holding circuits". Once a signal has been transformed using the Star Transform (passed through an ideal sampler), the signal must be "reconstructed" using one of these hold systems (or an equivalent) before it can be analyzed in a Laplace-domain system.

If we have a sampled signal denoted by the Star Transform ${\displaystyle X^{*}(s)}$, we want to reconstruct that signal into a continuous-time waveform, so that we can manipulate it using Laplace-transform techniques.

Let's say that we have the sampled input signal, a reconstruction circuit denoted G(s), and an output denoted with the Laplace-transform variable Y(s). We can show the relationship as follows:

${\displaystyle Y(s)=X^{*}(s)G(s)}$

Reconstruction circuits then, are physical devices that we can use to convert a digital, sampled signal into a continuous-time domain, so that we can take the Laplace transform of the output signal.

### Zero order Hold

Zero-Order Hold impulse response

A zero-order hold circuit is a circuit that essentially inverts the sampling process: The value of the sampled signal at time t is held on the output for T time. The output waveform of a zero-order hold circuit therefore looks like a staircase approximation to the original waveform.

The transfer function for a zero-order hold circuit, in the Laplace domain, is written as such:

[Zero Order Hold]

${\displaystyle G_{h0}={\frac {1-e^{-Ts}}{s}}}$

The Zero-order hold is the simplest reconstruction circuit, and (like the rest of the circuits on this page) assumes zero processing delay in converting between digital to analog.

A continuous input signal (gray) and the sampled signal with a zero-order hold (red)
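The staircase behavior is simple to reproduce numerically. The sketch below samples a sine wave every T seconds and holds each sample for a full period; the signal and the sampling period are arbitrary choices for illustration.

```python
import numpy as np

# Zero-order hold: hold each sample of sin(2*pi*t) for T seconds.
T = 0.25                                    # sampling period (assumed)
t = np.linspace(0.0, 2.0, 801)              # fine time grid
samples = np.sin(2 * np.pi * np.arange(0.0, 2.0 + T, T))

zoh = samples[np.floor(t / T).astype(int)]  # staircase reconstruction

# Each point of the output equals the most recent sample.
print(zoh[0], zoh[1])
```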

### First Order Hold

Impulse response of a first-order hold.

The zero-order hold creates a step output waveform, but this isn't always the best way to reconstruct the signal. Instead, the first-order hold circuit takes the derivative of the waveform at time t, and uses that derivative to predict where the output waveform will be at time (t + T). The first-order hold circuit then "draws a line" from the current position to the expected future position as the output of the waveform.

[First Order Hold]

${\displaystyle G_{h1}={\frac {1+Ts}{T}}\left[{\frac {1-e^{-Ts}}{s}}\right]^{2}}$

Keep in mind, however, that the next value of the signal will probably not be the same as the expected value of the next data point, and therefore the first-order hold may have a number of discontinuities.

An input signal (grey) and the first-order hold circuit output (red)

### Fractional Order Hold

The zero-order hold outputs the current value and keeps it level throughout the entire sampling period. The first-order hold uses the derivative of the function to predict the next value, and produces a series of ramp outputs, yielding a fluctuating waveform. Sometimes, however, neither of these solutions is desired, and so we have a compromise: the fractional-order hold. The fractional-order hold acts like a mixture of the other two holding circuits, and takes a fractional number k as an argument. Notice that k must be between 0 and 1 for this circuit to work correctly.

[Fractional Order Hold]

${\displaystyle G_{hk}=(1-ke^{-Ts}){\frac {1-e^{-Ts}}{s}}+{\frac {k}{Ts^{2}}}(1-e^{-Ts})^{2}}$

This circuit is more complicated than either of the other hold circuits, but sometimes added complexity is worth it if we get better performance from our reconstruction circuit.

### Other Reconstruction Circuits

Impulse response to a linear-approximation circuit.

Another type of circuit that can be used is a linear approximation circuit.

An input signal (grey) and the output signal through a linear approximation circuit

# System Delays

## Delays

A system can be built with an inherent delay. Delays are units that cause a time-shift in the input signal, but that don't affect the signal characteristics. An ideal delay is a delay system that doesn't affect the signal characteristics at all, and that delays the signal for an exact amount of time. Some delays, like processing delays or transmission delays, are unintentional. Other delays however, such as synchronization delays, are an integral part of a system. This chapter will talk about how delays are utilized and represented in the Laplace Domain. Once we represent a delay in the Laplace domain, it is an easy matter, through change of variables, to express delays in other domains.

### Ideal Delays

An ideal delay shifts the input function later in time by a specified amount. Systems with an ideal delay cause the system output to be delayed by a finite, predetermined amount of time.

## Time Shifts

Let's say that we have a function in time that is time-shifted by a certain constant time period T. For convenience, we will denote this function as x(t - T). Now, we can show that the Laplace transform of x(t - T) is the following:

${\displaystyle {\mathcal {L}}\{x(t-T)\}\Leftrightarrow e^{-sT}X(s)}$

What this demonstrates is that time-shifts in the time-domain become exponentials in the complex Laplace domain.

### Shifts in the Z-Domain

Since we know the following general relationship between the Z Transform and the Star Transform:

${\displaystyle z\Leftrightarrow e^{sT}}$

We can show what a time shift in a discrete time domain becomes in the Z domain:

${\displaystyle x((n-n_{s})\cdot T)\equiv x[n-n_{s}]\Leftrightarrow z^{-n_{s}}X(z)}$
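A numeric sanity check of this shift property, using the made-up sequence x[n] = 0.8^n u[n] and evaluating both sides at a single point z inside the ROC:

```python
# Check x[n - ns] <-> z^(-ns) X(z) for x[n] = 0.8**n u[n].
ns = 3                        # shift, in samples
z = 2.0                       # |z| > 0.8, so z is inside the ROC

def x(n):
    return 0.8 ** n if n >= 0 else 0.0

X = sum(x(n) * z ** (-n) for n in range(200))
X_shifted = sum(x(n - ns) * z ** (-n) for n in range(200))

print(abs(X_shifted - z ** (-ns) * X) < 1e-9)  # True
```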

## Delays and Stability

A time-shift in the time domain becomes an exponential increase in the Laplace domain. This would seem to show that a time shift can have an effect on the stability of a system, and occasionally can cause a system to become unstable. We define a new parameter called the time margin as the amount of time that we can shift an input function before the system becomes unstable. If the system can survive any arbitrary time shift without going unstable, we say that the time margin of the system is infinite.

## Delay Margin

When speaking of sinusoidal signals, it doesn't make sense to talk about "time shifts", so instead we talk about "phase shifts". Therefore, it is also common to refer to the time margin as the phase margin of the system. The phase margin denotes the amount of phase shift that we can apply to the system input before the system goes unstable.

We denote the phase margin for a system with a lowercase Greek letter φ (phi). Phase margin is defined as such for a second-order system:

[Delay Margin]

${\displaystyle \phi _{m}=\tan ^{-1}\left[{\frac {2\zeta }{({\sqrt {4\zeta ^{4}+1}}-2\zeta ^{2})^{1/2}}}\right]}$

Oftentimes, the phase margin is approximated by the following relationship:

[Delay Margin (approx)]

${\displaystyle \phi _{m}\approx 100\zeta }$

The Greek letter zeta (ζ) is a quantity called the damping ratio, and we discuss this quantity in more detail in the next chapter.
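The approximation is easy to compare against the exact formula. The sketch below evaluates both for a few damping ratios, converting the exact result to degrees, which is the convention the 100ζ rule assumes:

```python
import math

# Exact second-order phase margin (degrees) versus the 100*zeta rule of thumb.
def phase_margin_deg(zeta):
    inner = math.sqrt(4 * zeta ** 4 + 1) - 2 * zeta ** 2
    return math.degrees(math.atan2(2 * zeta, math.sqrt(inner)))

for zeta in (0.2, 0.4, 0.6):
    print(round(phase_margin_deg(zeta), 1), 100 * zeta)
```

The rule of thumb tracks the exact curve to within a few degrees for ζ below about 0.7, which is where it is usually applied.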

## Transform-Domain Delays

The ordinary Z-Transform does not account for a system which experiences an arbitrary time delay, or a processing delay. The Z-Transform can, however, be modified to account for an arbitrary delay. This new version of the Z-transform is frequently called the Modified Z-Transform, although in some literature (notably in Wikipedia), it is known as the Advanced Z-Transform.

### Delayed Star Transform

To demonstrate the concept of an ideal delay, we will show how the star transform responds to a time-shifted input with a specified delay of time T. The function :${\displaystyle X^{*}(s,\Delta )}$ is the delayed star transform with a delay parameter Δ. The delayed star transform is defined in terms of the star transform as such:

[Delayed Star Transform]

${\displaystyle X^{*}(s,\Delta )={\mathcal {L}}^{*}\left\{x(t-\Delta )\right\}=X(s)e^{-\Delta Ts}}$

As we can see, in the star transform, a time-delayed signal is multiplied by a decaying exponential value in the transform domain.

### Delayed Z-Transform

Since we know that the Star Transform is related to the Z Transform through the following change of variables:

${\displaystyle z=e^{sT}}$

We can interpret the above result to show how the Z Transform responds to a delay of ${\displaystyle n_{s}}$ samples:

${\displaystyle {\mathcal {Z}}\left\{x[n-n_{s}]\right\}=X(z)z^{-n_{s}}}$

This result is expected, since it matches the time-shift property shown above.

Now that we know how the Z transform responds to time shifts, it is often useful to generalize this behavior into a form known as the Delayed Z-Transform. The Delayed Z-Transform is a function of two variables, z and Δ, and is defined as such:

${\displaystyle X(z,\Delta )={\mathcal {Z}}\left\{x(t-\Delta )\right\}={\mathcal {Z}}\left\{X(s)e^{-\Delta Ts}\right\}}$

And finally:

[Delayed Z Transform]

${\displaystyle {\mathcal {Z}}(x[n],\Delta )=X(z,\Delta )=\sum _{n=-\infty }^{\infty }x[n-\Delta ]z^{-n}}$

## Modified Z-Transform

The Delayed Z-Transform has some uses, but mathematicians and engineers have decided that a more useful version of the transform was needed. The new version of the Z-Transform, which is similar to the Delayed Z-transform with a change of variables, is known as the Modified Z-Transform. The Modified Z-Transform is defined in terms of the delayed Z transform as follows:

${\displaystyle X(z,m)=X(z,\Delta ){\big |}_{\Delta \to 1-m}={\mathcal {Z}}\left\{X(s)e^{-\Delta Ts}\right\}{\big |}_{\Delta \to 1-m}}$

And it is defined explicitly:

[Modified Z Transform]

${\displaystyle X(z,m)={\mathcal {Z}}(x[n],m)=\sum _{n=-\infty }^{\infty }x[n+m-1]z^{-n}}$

# Poles and Zeros

## Poles and Zeros

The poles and zeros of a transfer function are the frequencies for which the denominator and numerator of the transfer function, respectively, become zero. The values of the poles and the zeros of a system determine whether the system is stable, and how well the system performs. Control systems, in the most simple sense, can be designed simply by assigning specific values to the poles and zeros of the system.

Physically realizable control systems must have a number of poles greater than or equal to the number of zeros. Systems that satisfy this relationship are called Proper. We will elaborate on this below.

## Time-Domain Relationships

Let's say that we have a transfer function with 3 poles:

${\displaystyle H(s)={\frac {a}{(s-l)(s-m)(s-n)}}}$

The poles are located at s = l, m, n. Now, we can use partial fraction expansion to separate out the transfer function:

${\displaystyle H(s)={\frac {a}{(s-l)(s-m)(s-n)}}={\frac {A}{s-l}}+{\frac {B}{s-m}}+{\frac {C}{s-n}}}$

Using the inverse transform on each of these component fractions (looking up the transforms in our table), we get the following:

${\displaystyle h(t)=Ae^{lt}u(t)+Be^{mt}u(t)+Ce^{nt}u(t)}$

But, since s is a complex variable, l, m, and n can all potentially be complex numbers, with a real part (σ) and an imaginary part (jω). If we just look at the first term:

${\displaystyle Ae^{lt}u(t)=Ae^{(\sigma _{l}+j\omega _{l})t}u(t)=Ae^{\sigma _{l}t}e^{j\omega _{l}t}u(t)}$

Using Euler's Equation on the imaginary exponent, we get:

${\displaystyle Ae^{\sigma _{l}t}[\cos(\omega _{l}t)+j\sin(\omega _{l}t)]u(t)}$

If a complex pole is present, it is always accompanied by another pole that is its complex conjugate. The imaginary parts of their time-domain representations thus cancel, and we are left with twice the real part. Assuming that the complex conjugate pole of the first term is present, we can take 2 times the real part of this equation, and we are left with our final result:

${\displaystyle 2Ae^{\sigma _{l}t}\cos(\omega _{l}t)u(t)}$

We can see from this equation that every pole will have an exponential part, and a sinusoidal part to its response. We can also go about constructing some rules:

1. if σl = 0, the response of the pole is a perfect sinusoid (an oscillator)
2. if ωl = 0, the response of the pole is a perfect exponential.
3. if σl < 0, the exponential part of the response will decay towards zero.
4. if σl > 0, the exponential part of the response will rise towards infinity.

From the last two rules, we can see that all poles of the system must have negative real parts, and therefore the denominator factors must all have the form (s + l), with l positive, for the system to be stable. We will discuss stability in later chapters.

## What are Poles and Zeros

Let's say we have a transfer function defined as a ratio of two polynomials:

${\displaystyle H(s)={N(s) \over D(s)}}$

Where N(s) and D(s) are simple polynomials. Zeros are the roots of N(s) (the numerator of the transfer function) obtained by setting N(s) = 0 and solving for s.

The polynomial order of a function is the value of the highest exponent in the polynomial.

Poles are the roots of D(s) (the denominator of the transfer function), obtained by setting D(s) = 0 and solving for s. Because of our restriction above, that a transfer function must not have more zeros than poles, we can state that the polynomial order of D(s) must be greater than or equal to the polynomial order of N(s).

### Example

Consider the transfer function:

${\displaystyle H(s)={s+2 \over s^{2}+0.25}}$

We define N(s) and D(s) to be the numerator and denominator polynomials, as such:

${\displaystyle N(s)=s+2}$
${\displaystyle D(s)=s^{2}+0.25}$

We set N(s) to zero, and solve for s:

${\displaystyle N(s)=s+2=0\to s=-2}$

So we have a zero at s → -2. Now, we set D(s) to zero, and solve for s to obtain the poles of the equation:

${\displaystyle D(s)=s^{2}+0.25=0\to s=+i{\sqrt {0.25}},-i{\sqrt {0.25}}}$

And simplifying this gives us poles at: -i/2 , +i/2. Remember, s is a complex variable, and it can therefore take imaginary and real values.
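As a quick numerical check (an illustrative Python sketch, not from the original text), the zero and poles of this example can be computed directly; cmath handles the complex square root:

```python
import cmath

# N(s) = s + 2  ->  zero at s = -2
zero = -2.0

# D(s) = s^2 + 0.25 = 0  ->  s = ±sqrt(-0.25) = ±0.5j, i.e. ±i/2
poles = [cmath.sqrt(-0.25), -cmath.sqrt(-0.25)]
```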

## Effects of Poles and Zeros

As s approaches a zero, the numerator of the transfer function (and therefore the transfer function itself) approaches the value 0. When s approaches a pole, the denominator of the transfer function approaches zero, and the value of the transfer function approaches infinity. An output value of infinity should raise an alarm bell for people who are familiar with BIBO stability. We will discuss this later.

As we have seen above, the locations of the poles, and the values of the real and imaginary parts of the pole determine the response of the system. Real parts correspond to exponentials, and imaginary parts correspond to sinusoidal values. Addition of poles to the transfer function has the effect of pulling the root locus to the right, making the system less stable. Addition of zeros to the transfer function has the effect of pulling the root locus to the left, making the system more stable.

## Second-Order Systems

The canonical form for a second order system is as follows:

[Second-order transfer function]

${\displaystyle H(s)={\frac {K\omega ^{2}}{s^{2}+2\zeta \omega s+\omega ^{2}}}}$

Where K is the system gain, ζ is called the damping ratio of the function, and ω is called the natural frequency of the system. If ζ and ω are known exactly for a second-order system, the time response can be plotted easily and stability can easily be checked. More information on second order systems can be found here.

### Damping Ratio

The damping ratio of a second-order system, denoted with the Greek letter zeta (ζ), is a real number that defines the damping properties of the system. More damping has the effect of less percent overshoot, and a slower transient response. Damping is the inherent ability of the system to oppose the oscillatory nature of the system's transient response. Larger values of the damping coefficient or damping factor produce transient responses that are less oscillatory.
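One standard way to quantify this is the percent-overshoot relation for an underdamped second-order step response, PO = 100·exp(−ζπ/√(1−ζ²)). This relation is standard second-order theory rather than something derived in this section; the sketch below uses it to show that more damping means less overshoot:

```python
import math

def percent_overshoot(zeta):
    """Percent overshoot of an underdamped (0 < zeta < 1) second-order step response."""
    assert 0 < zeta < 1, "formula only applies to underdamped systems"
    return 100.0 * math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
```

For example, ζ = 0.2 gives roughly 53% overshoot, while ζ = 0.7 gives under 5%.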

### Natural Frequency

The natural frequency is occasionally written with a subscript:

${\displaystyle \omega \to \omega _{n}}$

We will omit the subscript when it is clear that we are talking about the natural frequency, but we will include the subscript when we are using other values for the variable ω. Also, the frequency of oscillation of the response equals the natural frequency, ${\displaystyle \omega ~=~\omega _{n}}$, only when ${\displaystyle \zeta ~=0}$.

## Higher-Order Systems

Modern Controls

The modern method of controls uses systems of special state-space equations to model and manipulate systems. The state variable model is broad enough to be useful in describing a wide range of systems, including systems that cannot be adequately described using the Laplace Transform. These chapters will require the reader to have a solid background in linear algebra, and multi-variable calculus.

## Digital Systems

Digital systems, expressed previously as difference equations or Z-Transform transfer functions, can also be used with the state-space representation. Also, all the same techniques for dealing with analog systems can be applied to digital systems with only minor changes.

## Digital Systems

For digital systems, we can write similar equations, using discrete data sets:

${\displaystyle x[k+1]=Ax[k]+Bu[k]}$
${\displaystyle y[k]=Cx[k]+Du[k]}$

### Zero-Order Hold Derivation

If we have a continuous-time state equation:

${\displaystyle x'(t)=Ax(t)+Bu(t)}$

We can derive the digital version of this equation that we discussed above. We take the Laplace transform of our equation:

${\displaystyle X(s)=(sI-A)^{-1}Bu(s)+(sI-A)^{-1}x(0)}$

Now, taking the inverse Laplace transform gives us our time-domain system, keeping in mind that the inverse Laplace transform of the ${\displaystyle (sI-A)^{-1}}$ term is our state-transition matrix, Φ:

${\displaystyle x(t)={\mathcal {L}}^{-1}(X(s))=\Phi (t-t_{0})x(0)+\int _{t_{0}}^{t}\Phi (t-\tau )Bu(\tau )d\tau }$

Now, we apply a zero-order hold on our input, to make the system digital. Notice that we set our start time t0 = kT, because we are only interested in the behavior of our system during a single sample period:

${\displaystyle u(t)=u(kT),\quad kT\leq t<(k+1)T}$
${\displaystyle x(t)=\Phi (t,kT)x(kT)+\int _{kT}^{t}\Phi (t,\tau )Bd\tau u(kT)}$

We were able to remove u(kT) from the integral because it does not depend on τ. We now define a new function, Γ, as follows:

${\displaystyle \Gamma (t,t_{0})=\int _{t_{0}}^{t}\Phi (t,\tau )Bd\tau }$

Inserting this new expression into our equation, and setting t = (k + 1)T gives us:

${\displaystyle x((k+1)T)=\Phi ((k+1)T,kT)x(kT)+\Gamma ((k+1)T,kT)u(kT)}$

Now Φ(T) and Γ(T) are constant matrices, and we can give them new names. The d subscript denotes that they are digital versions of the coefficient matrices:

${\displaystyle A_{d}=\Phi ((k+1)T,kT)}$
${\displaystyle B_{d}=\Gamma ((k+1)T,kT)}$

We can use these values in our state equation, converting to our bracket notation instead:

${\displaystyle x[k+1]=A_{d}x[k]+B_{d}u[k]}$

## Relating Continuous and Discrete Systems

Continuous and discrete systems that perform similarly can be related together through a set of relationships. It should come as no surprise that a discrete system and a continuous system will have different characteristics and different coefficient matrices. If we consider that a discrete system is the same as a continuous system, except that it is sampled with a sampling time T, then the relationships below will hold. The process of converting an analog system for use with digital hardware is called discretization. We've given a basic introduction to discretization already, but we will discuss it in more detail here.

### Discrete Coefficient Matrices

Of primary importance in discretization is the computation of the associated coefficient matrices from the continuous-time counterparts. If we have the continuous system (A, B, C, D), we can use the relationship t = kT to transform the state-space solution into a sampled system:

${\displaystyle x(kT)=e^{AkT}x(0)+\int _{0}^{kT}e^{A(kT-\tau )}Bu(\tau )d\tau }$
${\displaystyle x[k]=e^{AkT}x[0]+\int _{0}^{kT}e^{A(kT-\tau )}Bu(\tau )d\tau }$

Now, if we want to analyze the k+1 term, we can solve the equation again:

${\displaystyle x[k+1]=e^{A(k+1)T}x[0]+\int _{0}^{(k+1)T}e^{A((k+1)T-\tau )}Bu(\tau )d\tau }$

Separating out the variables, and breaking the integral into two parts gives us:

${\displaystyle x[k+1]=e^{AT}e^{AkT}x[0]+\int _{0}^{kT}e^{AT}e^{A(kT-\tau )}Bu(\tau )d\tau +\int _{kT}^{(k+1)T}e^{A(kT+T-\tau )}Bu(\tau )d\tau }$

If we substitute in a new variable α = (k + 1)T − τ in the final integral, and use the following relationship:

${\displaystyle e^{AkT}x[0]=x[k]}$

We get our final result:

${\displaystyle x[k+1]=e^{AT}x[k]+\left(\int _{0}^{T}e^{A\alpha }d\alpha \right)Bu[k]}$

Comparing this equation to our regular solution gives us a set of relationships for converting the continuous-time system into a discrete-time system. Here, we will use "d" subscripts to denote the system matrices of a discrete system, and we will use a "c" subscript to denote the system matrices of a continuous system.

Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q

${\displaystyle A_{d}=e^{A_{c}T}}$
${\displaystyle B_{d}=\left(\int _{0}^{T}e^{A_{c}\tau }d\tau \right)B_{c}}$
${\displaystyle C_{d}=C_{c}}$
${\displaystyle D_{d}=D_{c}}$
This operation can be performed using the MATLAB command c2d.

If the Ac matrix is nonsingular, then we can find its inverse and instead define Bd as:

${\displaystyle B_{d}=A_{c}^{-1}(A_{d}-I)B_{c}}$
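For a scalar (1×1) system, both expressions for Bd reduce to closed forms, so they can be checked against each other. The sketch below is illustrative Python (the function name is ours), assuming a nonzero, and hence invertible, Ac:

```python
import math

def discretize_scalar(a_c, b_c, T):
    """Zero-order-hold discretization of x' = a_c*x + b_c*u (requires a_c != 0)."""
    a_d = math.exp(a_c * T)                               # A_d = e^(A_c T)
    b_d_integral = b_c * (math.exp(a_c * T) - 1.0) / a_c  # integral form of B_d
    b_d_inverse = (1.0 / a_c) * (a_d - 1.0) * b_c         # A_c^-1 (A_d - I) B_c
    return a_d, b_d_integral, b_d_inverse

# Both expressions for B_d agree whenever a_c is nonzero.
a_d, b1, b2 = discretize_scalar(-2.0, 1.0, 0.1)
```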

The differences in the discrete and continuous matrices are due to the fact that the underlying equations that describe our systems are different. Continuous-time systems are represented by linear differential equations, while the digital systems are described by difference equations. High order terms in a difference equation are delayed copies of the signals, while high order terms in the differential equations are derivatives of the analog signal.

If we have a complicated analog system, and we would like to implement that system in a digital computer, we can use the above transformations to make our matrices conform to the new paradigm.

### Notation

Because the coefficient matrices for the discrete systems are computed differently from the continuous-time coefficient matrices, and because the matrices technically represent different things, it is not uncommon in the literature to denote these matrices with different variables. For instance, the following variables are used in place of A and B frequently:

${\displaystyle \Omega =A_{d}}$
${\displaystyle R=B_{d}}$

These substitutions would give us a system defined by the ordered quadruple (Ω, R, C, D) for representing our equations.

As a matter of notational convenience, we will use the letters A and B to represent these matrices throughout the rest of this book.

## Converting Difference Equations

Now, let's say that we have a third-order difference equation that describes a discrete-time system:

${\displaystyle y[n+3]+a_{2}y[n+2]+a_{1}y[n+1]+a_{0}y[n]=u[n]}$

From here, we can define a set of discrete state variables x in the following manner:

${\displaystyle x_{1}[n]=y[n]}$
${\displaystyle x_{2}[n]=y[n+1]}$
${\displaystyle x_{3}[n]=y[n+2]}$

Which in turn gives us 3 first-order difference equations:

${\displaystyle x_{1}[n+1]=y[n+1]=x_{2}[n]}$
${\displaystyle x_{2}[n+1]=y[n+2]=x_{3}[n]}$
${\displaystyle x_{3}[n+1]=y[n+3]}$

Again, we say that x is a column vector of the 3 state variables we have defined, and we can write our state equation in the same form as if it were a continuous-time system:

${\displaystyle x[n+1]={\begin{bmatrix}0&1&0\\0&0&1\\-a_{0}&-a_{1}&-a_{2}\end{bmatrix}}x[n]+{\begin{bmatrix}0\\0\\1\end{bmatrix}}u[n]}$
${\displaystyle y[n]={\begin{bmatrix}1&0&0\end{bmatrix}}x[n]}$
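As a sanity check, the companion-form state equation above can be simulated with plain Python lists (an illustrative sketch; the function names are ours). Each step shifts the state variables up and computes the new highest-order term:

```python
def step_state(x, u, a0, a1, a2):
    """One step of the companion-form state equation derived above."""
    x1, x2, x3 = x
    return [x2, x3, -a0 * x1 - a1 * x2 - a2 * x3 + u]

def simulate(a0, a1, a2, u_seq, x0=(0.0, 0.0, 0.0)):
    """Return the output y[n] = x1[n] at each step of the input sequence."""
    x, ys = list(x0), []
    for u in u_seq:
        ys.append(x[0])                 # y[n] = [1 0 0] x[n]
        x = step_state(x, u, a0, a1, a2)
    return ys
```

With all coefficients zero the equation reduces to y[n+3] = u[n], so an impulse input appears at the output three samples later.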

## Solving for x[n]

We can find a general time-invariant solution for the discrete time difference equations. Let us start working up a pattern. We know the discrete state equation:

${\displaystyle x[n+1]=Ax[n]+Bu[n]}$

Starting from time n = 0, we can start to create a pattern:

${\displaystyle x[1]=Ax[0]+Bu[0]}$
${\displaystyle x[2]=Ax[1]+Bu[1]=A^{2}x[0]+ABu[0]+Bu[1]}$
${\displaystyle x[3]=Ax[2]+Bu[2]=A^{3}x[0]+A^{2}Bu[0]+ABu[1]+Bu[2]}$

With a little algebraic trickery, we can reduce this pattern to a single equation:

[General State Equation Solution]

${\displaystyle x[n]=A^{n}x[n_{0}]+\sum _{m=0}^{n-1}A^{n-1-m}Bu[m]}$

Substituting this result into the output equation gives us:

[General Output Equation Solution]

${\displaystyle y[n]=CA^{n}x[n_{0}]+\sum _{m=0}^{n-1}CA^{n-1-m}Bu[m]+Du[n]}$
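For a scalar system the closed-form solution can be checked against the step-by-step recursion directly (an illustrative Python sketch; in the matrix case the powers and products become matrix powers and products):

```python
def x_recursive(a, b, u_seq, x0):
    """Iterate x[n+1] = a*x[n] + b*u[n] over the whole input sequence."""
    x = x0
    for u in u_seq:
        x = a * x + b * u
    return x

def x_closed_form(a, b, u_seq, x0):
    """x[n] = a^n x[0] + sum_{m=0}^{n-1} a^(n-1-m) b u[m]."""
    n = len(u_seq)
    return a ** n * x0 + sum(a ** (n - 1 - m) * b * u_seq[m] for m in range(n))

u = [1.0, -0.5, 2.0, 0.0]   # an arbitrary test input
```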

## Time Variant Solutions

If the system is time-variant, we have a general solution that is similar to the continuous-time case:

${\displaystyle x[n]=\phi [n,n_{0}]x[n_{0}]+\sum _{m=n_{0}}^{n-1}\phi [n,m+1]B[m]u[m]}$
${\displaystyle y[n]=C[n]\phi [n,n_{0}]x[n_{0}]+C[n]\sum _{m=n_{0}}^{n-1}\phi [n,m+1]B[m]u[m]+D[n]u[n]}$

Where φ, the state transition matrix, is defined in a similar manner to the state-transition matrix in the continuous case. However, some of the properties in the discrete time are different. For instance, the inverse of the state-transition matrix does not need to exist, and in many systems it does not exist.

### State Transition Matrix

The discrete time state transition matrix is the unique solution of the equation:

${\displaystyle \phi [k+1,k_{0}]=A[k]\phi [k,k_{0}]}$

Where the following restriction must hold:

${\displaystyle \phi [k_{0},k_{0}]=I}$

From this definition, an obvious way to calculate this state transition matrix presents itself:

${\displaystyle \phi [k,k_{0}]=A[k-1]A[k-2]A[k-3]\cdots A[k_{0}]}$

Or,

${\displaystyle \phi [k,k_{0}]=\prod _{m=1}^{k-k_{0}}A[k-m]}$
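This product form is straightforward to compute. The sketch below (illustrative Python, scalar case for brevity; the coefficient sequence is a hypothetical example) builds φ[k, k0] and satisfies both the defining recursion and the initial condition φ[k0, k0] = I:

```python
def phi(A, k, k0):
    """Discrete state-transition 'matrix' (scalar case): A[k-1]A[k-2]...A[k0]."""
    result = 1.0          # phi[k0, k0] = I
    for m in range(k0, k):
        result *= A[m]
    return result

A = [0.9, 1.1, 0.8, 1.0, 0.7]   # hypothetical time-varying coefficients
```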

## MATLAB Calculations

MATLAB is a computer program, and therefore calculates all systems using digital methods. The MATLAB function lsim is used to simulate a continuous system with a specified input. This function works by calling c2d, which converts a system (A, B, C, D) into an equivalent discrete system. Once the system model is discretized, the function passes control to the dlsim function, which is used to simulate discrete-time systems with the specified input.

Because of this, simulation programs like MATLAB are subjected to round-off errors associated with the discretization process.

Stability

System stability is an important topic, because unstable systems may not perform correctly, and may actually be harmful to people. There are a number of different methods and tools that can be used to determine system stability, depending on whether you are in the state-space, or the complex domain.

# Stability

## Stability

When a system is unstable, the output of the system may be infinite even though the input to the system was finite. This causes a number of practical problems. For instance, a robot arm controller that is unstable may cause the robot to move dangerously. Also, systems that are unstable often incur a certain amount of physical damage, which can become costly. Nonetheless, many systems are inherently unstable - a fighter jet, for instance, or a rocket at liftoff, are examples of naturally unstable systems. Although we can design controllers that stabilize the system, it is first important to understand what stability is, how it is determined, and why it matters.

The chapters in this section are heavily mathematical, and many require a background in linear differential equations. Readers without a strong mathematical background might want to review the necessary chapters in the Calculus and Ordinary Differential Equations books (or equivalent) before reading this material.

For most of this chapter we will be assuming that the system is linear, and can be represented either by a set of transfer functions or in state space. Linear systems have an associated characteristic polynomial, and this polynomial tells us a great deal about the stability of the system. If any coefficient of the characteristic polynomial is zero or negative, then the system is either unstable or, at best, marginally stable. It is important to note, though, that even if all of the coefficients of the characteristic polynomial are positive the system may still be unstable. We will look into this in more detail below.

## BIBO Stability

A system is defined to be BIBO Stable if every bounded input to the system results in a bounded output over the time interval ${\displaystyle [t_{0},\infty )}$. This must hold for all initial times t0. So long as we don't input infinity to our system, we won't get infinity output.

A system is defined to be uniformly BIBO Stable if there exists a positive constant k, independent of t0, such that for all t0 the conditions:

${\displaystyle \|u(t)\|\leq 1}$
${\displaystyle t\geq t_{0}}$

imply that

${\displaystyle \|y(t)\|\leq k}$

There are a number of different types of stability, and keywords that are used with the topic of stability. Some of the important words that we are going to be discussing in this chapter, and the next few chapters are: BIBO Stable, Marginally Stable, Conditionally Stable, Uniformly Stable, Asymptotically Stable, and Unstable. All of these words mean slightly different things.

## Determining BIBO Stability

We can prove mathematically that a system f is BIBO stable if an arbitrary input x is bounded by two finite but large arbitrary constants M and -M:

${\displaystyle -M<x<M}$

We apply the input x, and the arbitrary boundaries M and -M to the system to produce three outputs:

${\displaystyle y_{x}=f(x)}$
${\displaystyle y_{M}=f(M)}$
${\displaystyle y_{-M}=f(-M)}$

Now, all three outputs should be finite for all possible values of M and x, and they should satisfy the following relationship:

${\displaystyle y_{-M}\leq y_{x}\leq y_{M}}$

If this condition is satisfied, then the system is BIBO stable.

A SISO linear time-invariant (LTI) system is BIBO stable if and only if its impulse response ${\displaystyle g(t)}$ is absolutely integrable over [0,∞), that is:

${\displaystyle \int _{0}^{\infty }|g(t)|\,dt\leq M<{\infty }}$
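This criterion can be checked numerically for a concrete impulse response. The sketch below (illustrative Python; the truncation point and step size are arbitrary choices) approximates the integral of |g(t)| with the trapezoid rule for g(t) = e^(-2t), whose exact integral over [0, ∞) is 1/2:

```python
import math

def abs_integral(g, t_end=50.0, dt=1e-3):
    """Numerically approximate the integral of |g(t)| over [0, t_end]."""
    n = int(t_end / dt)
    total = 0.0
    for i in range(n):
        t = i * dt
        total += 0.5 * (abs(g(t)) + abs(g(t + dt))) * dt   # trapezoid rule
    return total

# A decaying impulse response is absolutely integrable, so this system is BIBO stable.
area = abs_integral(lambda t: math.exp(-2.0 * t))   # exact value is 0.5
```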

### Example

Consider the system:

${\displaystyle h(t)={\frac {2}{t}}}$

We can apply our test, selecting an arbitrarily large finite constant M, and an arbitrary input x such that M>x>-M

As M approaches infinity (but does not reach infinity), we can show that:

${\displaystyle y_{-M}=\lim _{M\to \infty }{\frac {2}{-M}}=0^{-}}$

And:

${\displaystyle y_{M}=\lim _{M\to \infty }{\frac {2}{M}}=0^{+}}$

So now, we can write out our inequality:

${\displaystyle y_{-M}\leq y_{x}\leq y_{M}}$
${\displaystyle 0^{-}\leq y_{x}\leq 0^{+}}$

And this inequality should be satisfied for all possible values of x. However, we can see that when x is zero, we have the following:

${\displaystyle y_{x}=\lim _{x\to 0}{\frac {2}{x}}=\infty }$

Which means that x is between -M and M, but the value yx is not between y-M and yM. Therefore, this system is not stable.

## Poles and Stability

When the poles of the closed-loop transfer function of a given system are located in the right-half of the S-plane (RHP), the system becomes unstable. When the poles of the system are located in the left-half plane (LHP) and the system is not improper, the system is shown to be stable. A number of tests deal with this particular facet of stability: The Routh-Hurwitz Criteria, the Root-Locus, and the Nyquist Stability Criteria all test whether there are poles of the transfer function in the RHP. We will learn about all these tests in the upcoming chapters.

If the system is a multivariable, or a MIMO system, then the system is stable if and only if every pole of every transfer function in the transfer function matrix has a negative real part and every transfer function in the transfer function matrix is not improper. For these systems, it is possible to use the Routh-Hurwitz, Root Locus, and Nyquist methods described later, but these methods must be performed once for each individual transfer function in the transfer function matrix.

## Poles and Eigenvalues

Note:
Every pole of G(s) is an eigenvalue of the system matrix A. However, not every eigenvalue of A is a pole of G(s).

The poles of the transfer function and the eigenvalues of the system matrix A are related. In fact, every pole of the transfer function of the system is an eigenvalue of the system matrix A (though, as noted, not every eigenvalue of A need be a pole). In this way, if we have the eigenvalues of a system in the state-space domain, we can use the Routh-Hurwitz and Root Locus methods as if we had our system represented by a transfer function instead.

On a related note, eigenvalues and all methods and mathematical techniques that use eigenvalues to determine system stability only work with time-invariant systems. In systems which are time-variant, the methods using eigenvalues to determine system stability fail.

## Transfer Functions Revisited

We are going to have a brief refresher here about transfer functions, because several of the later chapters will use transfer functions for analyzing system stability.

Let us remember our generalized feedback-loop transfer function, with a gain element of K, a forward path Gp(s), and a feedback of Gb(s). We write the transfer function for this system as:

${\displaystyle H_{cl}(s)={\frac {KGp(s)}{1+H_{ol}(s)}}}$

Where ${\displaystyle H_{cl}}$ is the closed-loop transfer function, and ${\displaystyle H_{ol}}$ is the open-loop transfer function. Again, we define the open-loop transfer function as the product of the forward path and the feedback elements, as such:

${\displaystyle H_{ol}(s)=KGp(s)Gb(s)}$

Now, we can define F(s) to be the characteristic equation. F(s) is simply the denominator of the closed-loop transfer function, and can be defined as such:

[Characteristic Equation]

${\displaystyle F(s)=1+H_{ol}=D(s)}$

We can say conclusively that the roots of the characteristic equation are the poles of the transfer function. Now, we know a few simple facts:

1. The locations of the poles of the closed-loop transfer function determine if the system is stable or not
2. The zeros of the characteristic equation are the poles of the closed-loop transfer function.
3. The characteristic equation is always a simpler equation than the closed-loop transfer function.

These facts combined show us that we can focus our attention on the characteristic equation, and find the roots of that equation.

## State-Space and Stability

As we have discussed earlier, the system is stable if the eigenvalues of the system matrix A have negative real parts. However, there are other stability issues that we can analyze, such as whether a system is uniformly stable, asymptotically stable, or otherwise. We will discuss all these topics in a later chapter.

## Marginal Stability

When the poles of the system in the complex S-Domain exist on the complex frequency axis (the vertical axis), or when the eigenvalues of the system matrix are imaginary (no real part), the system exhibits oscillatory characteristics, and is said to be marginally stable. A marginally stable system may become unstable under certain circumstances, and may be perfectly stable under other circumstances. It is impossible to tell by inspection whether a marginally stable system will become unstable or not.

We will discuss marginal stability more in the following chapters.

# Discrete Time Stability

## Discrete-Time Stability

The stability analysis of a discrete-time or digital system is similar to the analysis for a continuous time system. However, there are enough differences that it warrants a separate chapter.

## Input-Output Stability

### Uniform Stability

An LTI causal system is uniformly BIBO stable if there exists a positive constant L such that the following conditions:

${\displaystyle x[n_{0}]=0}$
${\displaystyle \|u[n]\|\leq k}$
${\displaystyle k\geq 0}$

imply that

${\displaystyle \|y[n]\|\leq L}$

### Impulse Response Matrix

We can define the impulse response matrix of a discrete-time system as:

[Impulse Response Matrix]

${\displaystyle G[n]=\left\{{\begin{matrix}CA^{n-1}B&{\mbox{ if }}n>0\\0&{\mbox{ if }}n\leq 0\end{matrix}}\right.}$

Or, in the general time-varying case:

${\displaystyle G[n]=\left\{{\begin{matrix}C\phi [n,n_{0}]B&{\mbox{ if }}n>n_{0}\\0&{\mbox{ if }}n\leq n_{0}\end{matrix}}\right.}$

A digital system is BIBO stable if and only if there exists a positive constant L such that for all non-negative k:

${\displaystyle \sum _{n=0}^{k}\|G[n]\|\leq L}$

## Stability of Transfer Function

A MIMO discrete-time system is BIBO stable if and only if every pole of every transfer function in the transfer function matrix has a magnitude less than 1. All poles of all transfer functions must exist inside the unit circle on the Z plane.
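This condition is simple to test in code. The sketch below (an illustrative Python helper, not a library routine) checks whether every pole has magnitude strictly less than 1:

```python
def discrete_poles_stable(poles):
    """BIBO stable iff every pole lies strictly inside the unit circle."""
    return all(abs(p) < 1.0 for p in poles)
```

Poles of magnitude 0.5 are stable; any pole of magnitude greater than 1 makes the system unstable.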

## Lyapunov Stability

There is a discrete version of the Lyapunov stability theorem that applies to digital systems. Given the discrete Lyapunov equation:

[Digital Lypapunov Equation]

${\displaystyle A^{T}MA-M=-N}$

We can use this version of the Lyapunov equation to define a condition for stability in discrete-time systems:

Lyapunov Stability Theorem (Digital Systems)
A digital system with the system matrix A is asymptotically stable if and only if there exists a unique matrix M that satisfies the Lyapunov Equation for every positive definite matrix N.
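In the scalar case the discrete Lyapunov equation a·m·a − m = −n can be solved in closed form, m = n/(1 − a²), which makes the theorem concrete: m is positive for every positive n exactly when |a| < 1. An illustrative sketch (the function name is ours):

```python
def solve_discrete_lyapunov_scalar(a, n):
    """Solve a*m*a - m = -n for m (scalar case of A^T M A - M = -N)."""
    return n / (1.0 - a * a)
```

For a stable system such as a = 0.5, the solution m is positive for every positive n; for an unstable a = 1.5, no positive solution exists.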

## Poles and Eigenvalues

Every pole of G(z) is an eigenvalue of the system matrix A. Not every eigenvalue of A is a pole of G(z). Like the poles of the transfer function, all the eigenvalues of the system matrix must have magnitudes less than 1. Mathematically:

${\displaystyle {\sqrt {\operatorname {Re} (z)^{2}+\operatorname {Im} (z)^{2}}}\leq 1}$

If the magnitude of the eigenvalues of the system matrix A, or the poles of the transfer functions are greater than 1, the system is unstable.

## Finite Wordlengths

Digital computer systems have an inherent problem because implementable computer systems have finite wordlengths to deal with. Some of the issues are:

1. Real numbers can only be represented with a finite precision. Typically, a computer system can only accurately represent a number to a finite number of decimal places.
2. Because of the fact above, computer systems with feedback can compound errors with each program iteration. Small errors in one step of an algorithm can lead to large errors later in the program.
3. Integer numbers in computer systems have finite lengths. Because of this, integer numbers will either roll-over, or saturate, depending on the design of the computer system. Both situations can create inaccurate results.
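The second issue can be demonstrated with a toy feedback loop that rounds its state to a fixed number of decimal digits on every iteration (an illustrative Python sketch; the system x = 0.9x + 0.1 and the digit counts are arbitrary choices). The coarsely quantized loop stalls well short of the true steady state:

```python
def simulate_quantized(a, u, steps, digits):
    """Iterate x = a*x + u, rounding the state to `digits` decimal places.

    Returns the gap between the full-precision and quantized trajectories."""
    x_exact, x_quant = 0.0, 0.0
    for _ in range(steps):
        x_exact = a * x_exact + u
        x_quant = round(a * x_quant + u, digits)   # finite-precision update
    return abs(x_exact - x_quant)

err_coarse = simulate_quantized(0.9, 0.1, 500, 2)   # stalls once the per-step change rounds to zero
err_fine = simulate_quantized(0.9, 0.1, 500, 8)     # tracks the exact loop closely
```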

# Jury's Test

## Routh-Hurwitz in Digital Systems

Because of the differences in the Z and S domains, the Routh-Hurwitz criteria can not be used directly with digital systems. This is because digital systems and continuous-time systems have different regions of stability. However, there are some methods that we can use to analyze the stability of digital systems. Our first option (and arguably not a very good option) is to convert the digital system into a continuous-time representation using the bilinear transform. The bilinear transform converts an equation in the Z domain into an equation in the W domain, that has properties similar to the S domain. Another possibility is to use Jury's Stability Test. Jury's test is a procedure similar to the RH test, except it has been modified to analyze digital systems in the Z domain directly.

### Bilinear Transform

One common, but time-consuming, method of analyzing the stability of a digital system in the z-domain is to use the bilinear transform to convert the transfer function from the z-domain to the w-domain. The w-domain is similar to the s-domain in the following ways:

• Poles in the right-half plane are unstable
• Poles in the left-half plane are stable
• Poles on the imaginary axis are marginally stable

The w-domain is warped with respect to the s domain, however, and except for the relative position of poles to the imaginary axis, they are not in the same places as they would be in the s-domain.

Remember, however, that the Routh-Hurwitz criterion can tell us whether a pole is unstable or not, and nothing else. Therefore, it doesn't matter where exactly the pole is, so long as it is in the correct half-plane. Since we know that stable poles are in the left-half of the w-plane and the s-plane, and that unstable poles are on the right-hand side of both planes, we can use the Routh-Hurwitz test on functions in the w domain exactly like we can use it on functions in the s-domain.
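The sign-preserving property of the mapping is easy to verify for individual poles. The sketch below uses the form w = (z − 1)/(z + 1); note that this particular form is an assumption here, since several variants of the bilinear transform exist:

```python
def z_to_w(z):
    """Map a z-plane point to the w-plane via w = (z - 1)/(z + 1).

    This form (one common variant of the bilinear transform) sends the
    interior of the unit circle to the left-half w-plane."""
    return (z - 1) / (z + 1)

# A stable z-plane pole maps to the left half-plane; an unstable one to the right.
w_stable = z_to_w(0.5)
w_unstable = z_to_w(2.0)
```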

### Other Mappings

There are other methods for mapping an equation in the Z domain into an equation in the S domain, or a similar domain. We will discuss these different methods in the Appendix.

## Jury's Test

Jury's test is a test that is similar to the Routh-Hurwitz criterion, except that it can be used to analyze the stability of an LTI digital system in the Z domain. To use Jury's test to determine if a digital system is stable, we must check our z-domain characteristic equation against a number of specific rules and requirements. If the function fails any requirement, it is not stable. If the function passes all the requirements, it is stable. Jury's test is a necessary and sufficient test for stability in digital systems.

Again, we call D(z) the characteristic polynomial of the system. It is the denominator polynomial of the Z-domain transfer function. Jury's test will focus exclusively on the Characteristic polynomial. To perform Jury's test, we must perform a number of smaller tests on the system. If the system fails any test, it is unstable.

### Jury Tests

Given a characteristic equation in the form:

${\displaystyle D(z)=a_{0}+a_{1}z+a_{2}z^{2}+\cdots +a_{N}z^{N}}$

The following tests determine whether this system has any poles outside the unit circle (the instability region). These tests will use the value N as being the degree of the characteristic polynomial.

The system must pass all of these tests to be considered stable. If the system fails any test, you may stop immediately: you do not need to try any further tests.

Rule 1
If z = 1, the characteristic polynomial must evaluate to a positive number:
${\displaystyle D(1)>0}$
Rule 2
If z is -1, then the following relationship must hold:
${\displaystyle (-1)^{N}D(-1)>0}$
Rule 3
The absolute value of the constant term (a0) must be less than the value of the highest coefficient (aN):
${\displaystyle |a_{0}|<a_{N}}$

If Rules 1, 2, and 3 are satisfied, construct the Jury Array (discussed below).

Rule 4
Once the Jury Array has been formed, all the following relationships must be satisfied until the end of the array:
${\displaystyle |b_{0}|>|b_{N-1}|}$
${\displaystyle |c_{0}|>|c_{N-2}|}$
${\displaystyle |d_{0}|>|d_{N-3}|}$
And so on until the last row of the array. If all these conditions are satisfied, the system is stable.

While you are constructing the Jury Array, you can be making the tests of Rule 4. If the Array fails Rule 4 at any point, you can stop calculating the array: your system is unstable. We will discuss the construction of the Jury Array below.

### The Jury Array

The Jury Array is constructed by first writing out a row of coefficients, and then writing out another row with the same coefficients in reverse order. For an Nth-order polynomial, we can write the first two rows of the Jury Array as follows:

${\displaystyle {\overline {\underline {\begin{matrix}z^{0}&z^{1}&z^{2}&z^{3}&\ldots &z^{N}\\a_{0}&a_{1}&a_{2}&a_{3}&\ldots &a_{N}\\a_{N}&\ldots &a_{3}&a_{2}&a_{1}&a_{0}\end{matrix}}}}}$

Now, once we have the first two rows of coefficients written out, we add more rows of coefficients (we will use b for the next row, and c for the row after that, as per our previous convention), and we will calculate the values of the lower rows from the values of the upper rows. Each new row that we add will have one fewer coefficient than the row before it:

${\displaystyle {\overline {\underline {\begin{matrix}1)&a_{0}&a_{1}&a_{2}&a_{3}&\ldots &a_{N}\\2)&a_{N}&\ldots &a_{3}&a_{2}&a_{1}&a_{0}\\3)&b_{0}&b_{1}&b_{2}&\ldots &b_{N-1}\\4)&b_{N-1}&\ldots &b_{2}&b_{1}&b_{0}\\\vdots &\vdots &\vdots &\vdots \\2N-3)&v_{0}&v_{1}&v_{2}\end{matrix}}}}}$

Note: The last row is row (2N-3), and it always has 3 elements. The test is not meaningful if N = 1, but in that case the single pole can be found directly.

Once we get to a row with 3 members, we can stop constructing the array.

To calculate the values of the odd-number rows, we can use the following formulae. The even number rows are equal to the previous row in reverse order. We will use k as an arbitrary subscript value. These formulae are reusable for all elements in the array:

${\displaystyle b_{k}={\begin{vmatrix}a_{0}&a_{N-k}\\a_{N}&a_{k}\end{vmatrix}}}$
${\displaystyle c_{k}={\begin{vmatrix}b_{0}&b_{N-1-k}\\b_{N-1}&b_{k}\end{vmatrix}}}$
${\displaystyle d_{k}={\begin{vmatrix}c_{0}&c_{N-2-k}\\c_{N-2}&c_{k}\end{vmatrix}}}$

This pattern can be carried on to all lower rows of the array, if needed.
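Although this book's computational examples use MATLAB, the row-by-row construction is easy to sketch in a few lines of Python (the function below is our own illustration, not part of any standard library):

```python
def jury_array(coeffs):
    """Sketch of the Jury array: 'coeffs' holds a_0 ... a_N in ascending
    order. Returns the odd-numbered rows; each even-numbered row is just
    the previous row reversed, so we do not store it."""
    rows = [list(coeffs)]
    row = list(coeffs)
    while len(row) > 3:
        n = len(row) - 1
        # b_k = a_0*a_k - a_N*a_(N-k): the 2x2 determinant rule above,
        # producing one fewer coefficient than the row before it
        row = [row[0] * row[k] - row[n] * row[n - k] for k in range(n)]
        rows.append(row)
    return rows

# For a_0..a_3 = 0.5, 1, 2, 1 the b row works out to [-0.75, -1.5, 0.0]:
print(jury_array([0.5, 1, 2, 1]))
```

The Rule 4 magnitude comparisons (|b0| > |bN-1|, and so on) can then be checked against the first and last entries of each returned row.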

### Example: Calculating e5

Give the equation for member e5 of the jury array (assuming the original polynomial is sufficiently large to require an e5 member).

Going off the pattern we set above, we can have this equation for a member e:

${\displaystyle e_{k}={\begin{vmatrix}d_{0}&d_{N-R-k}\\d_{N-R}&d_{k}\end{vmatrix}}}$

Where we are using R as the subtractive element from the above equations. Since row c had R → 1, and row d had R → 2, we can follow the pattern and for row e set R → 3. Plugging this value of R into our equation above gives us:

${\displaystyle e_{k}={\begin{vmatrix}d_{0}&d_{N-3-k}\\d_{N-3}&d_{k}\end{vmatrix}}}$

And since we want e5 we know that k is 5, so we can substitute that into the equation:

${\displaystyle e_{5}={\begin{vmatrix}d_{0}&d_{N-3-5}\\d_{N-3}&d_{5}\end{vmatrix}}={\begin{vmatrix}d_{0}&d_{N-8}\\d_{N-3}&d_{5}\end{vmatrix}}}$

When we take the determinant, we get the following equation:

${\displaystyle e_{5}=d_{0}d_{5}-d_{N-8}d_{N-3}}$

We will discuss the bilinear transform, and other methods to convert between the Laplace domain and the Z domain, in the appendix.

# Root Locus

## The Problem

Consider a system like a radio. The radio has a "volume" knob that controls the amount of gain of the system. High volume means more power going to the speakers, low volume means less power to the speakers. As the volume value increases, the poles of the transfer function of the radio change, and they might potentially become unstable. We would like to find out whether the radio becomes unstable, and if so, what values of the volume cause it to become unstable. Our current methods would require us to plug in each new value for the volume (gain, "K") and solve the open-loop transfer function for the roots. This process can be a long one. Luckily, there is a method called the root-locus method that allows us to graph the locations of all the poles of the system for all values of gain, K.

## Root-Locus

As we change gain, we notice that the system poles and zeros actually move around in the S-plane. This fact can make life particularly difficult, when we need to solve higher-order equations repeatedly, for each new gain value. The solution to this problem is a technique known as Root-Locus graphs. Root-Locus allows you to graph the locations of the poles and zeros for every value of gain, by following several simple rules.

Let's say we have a closed-loop transfer function for a particular system:

${\displaystyle {\frac {N(s)}{D(s)}}={\frac {KG(s)}{1+KG(s)H(s)}}}$

Where N is the numerator polynomial and D is the denominator polynomial of the transfer functions, respectively. Now, we know that to find the poles of the equation, we must set the denominator to 0, and solve the characteristic equation. In other words, the locations of the poles of a specific equation must satisfy the following relationship:

${\displaystyle D(s)=1+KG(s)H(s)=0}$

from this same equation, we can manipulate the equation as such:

${\displaystyle 1+KG(s)H(s)=0}$
${\displaystyle KG(s)H(s)=-1}$

And finally by converting to polar coordinates:

${\displaystyle \angle KG(s)H(s)=180^{\circ }}$

Now we have 2 equations that govern the locations of the poles of a system for all gain values:

[The Magnitude Equation]

${\displaystyle 1+KG(s)H(s)=0}$

[The Angle Equation]

${\displaystyle \angle KG(s)H(s)=180^{\circ }}$
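To make the two conditions concrete, here is a small Python check on a hypothetical open loop G(s)H(s) = 1/(s(s+2)): with K = 1 the characteristic equation s² + 2s + 1 = 0 places a double pole at s = -1, so both conditions must hold there.

```python
import cmath

def GH(s):
    # hypothetical open-loop transfer function G(s)H(s) = 1/(s(s+2))
    return 1.0 / (s * (s + 2))

K = 1.0
s = -1 + 0j          # closed-loop pole location when K = 1
value = K * GH(s)

angle_deg = (cmath.phase(value) * 180 / cmath.pi) % 360
print(round(angle_deg))      # 180 -- the angle equation is satisfied
print(round(abs(value), 6))  # 1.0 -- the magnitude equation is satisfied
```

Any point that fails either test is simply not on the root locus for any positive gain.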

### Digital Systems

The same basic method can be used for considering digital systems in the Z-domain:

${\displaystyle {\frac {N(z)}{D(z)}}={\frac {KG(z)}{1+K{\overline {GH}}(z)}}}$

Where N is the numerator polynomial in z, D is the denominator polynomial in z, and ${\displaystyle {\overline {GH}}(z)}$ is the open-loop transfer function of the system, in the Z domain.

The denominator D(z), by the definition of the characteristic equation is equal to:

${\displaystyle D(z)=1+K{\overline {GH}}(z)=0}$

We can manipulate this as follows:

${\displaystyle 1+K{\overline {GH}}(z)=0}$
${\displaystyle K{\overline {GH}}(z)=-1}$

We can now convert this to polar coordinates, and take the angle of the polynomial:

${\displaystyle \angle K{\overline {GH}}(z)=180^{\circ }}$

We are now left with two important equations:

[The Magnitude Equation]

${\displaystyle 1+K{\overline {GH}}(z)=0}$

[The Angle Equation]

${\displaystyle \angle K{\overline {GH}}(z)=180^{\circ }}$

If you will compare the two, the Z-domain equations are nearly identical to the S-domain equations, and act exactly the same. For the remainder of the chapter, we will only consider the S-domain equations, with the understanding that digital systems operate in nearly the same manner.

## The Root-Locus Procedure

Note:
In this section, the rules for the S-Plane and the Z-plane are the same, so we won't refer to the differences between them.

In the transform domain (see note at right), when the gain is small, the poles start at the poles of the open-loop transfer function. As the gain approaches infinity, the poles move to overlap the zeros of the system. This means that on a root-locus graph, all the poles move towards a zero. Only one pole may move towards each zero, so there must be the same number of poles as zeros.

If there are fewer zeros than poles in the transfer function, there are a number of implicit zeros located at infinity, that the poles will approach.

First thing, we need to convert the magnitude equation into a slightly more convenient form:

${\displaystyle KG(s)H(s)+1=0\to G(s)H(s)={\frac {-1}{K}}}$
Note:
We generally use capital letters for functions in the frequency domain, but a(s) and b(s) are unimportant enough to be lower-case.

Now, we can assume that G(s)H(s) is a fraction of some sort, with a numerator and a denominator that are both polynomials. We can express this equation using arbitrary functions a(s) and b(s), as such:

${\displaystyle {\frac {a(s)}{b(s)}}={\frac {-1}{K}}}$

We will refer to these functions a(s) and b(s) later in the procedure.

We can start drawing the root-locus by first placing the roots of b(s) on the graph with an 'X'. Next, we place the roots of a(s) on the graph, and mark them with an 'O'.

Next, we examine the real axis. Starting from the right-hand side of the graph and traveling to the left, we draw a root-locus line on the real axis at every point to the left of an odd number of poles or zeros on the real axis. This may sound tricky at first, but it becomes easier with practice.

Now, a root-locus line starts at every pole. Therefore, any place that two poles appear to be connected by a root locus line on the real-axis, the two poles actually move towards each other, and then they "break away", and move off the axis. The point where the poles break off the axis is called the breakaway point. From here, the root locus lines travel towards the nearest zero.

It is important to note that the s-plane is symmetrical about the real axis, so whatever is drawn on the top-half of the S-plane, must be drawn in mirror-image on the bottom-half plane.

Once a pole breaks away from the real axis, it can either travel out towards infinity (to meet an implicit zero), travel to meet an explicit zero, or re-join the real axis to meet a zero that is located on the real axis. If a pole is traveling towards infinity, it always follows an asymptote. The number of asymptotes is equal to the number of implicit zeros at infinity.

## Root Locus Rules

Here is the complete set of rules for drawing the root-locus graph. We will use p and z to denote the number of poles and the number of zeros of the open-loop transfer function, respectively. We will use Pi and Zi to denote the location of the ith pole and the ith zero, respectively. Likewise, we will use ψi and ρi to denote the angle from a given point to the ith pole and zero, respectively. All angles are given in radians (π denotes 180°).

There are 11 rules that, if followed correctly, will allow you to create a correct root-locus graph.

Rule 1
There is one branch of the root-locus for every root of b(s).
Rule 2
The roots of b(s) are the poles of the open-loop transfer function. Mark the roots of b(s) on the graph with an X.
Rule 3
The roots of a(s) are the zeros of the open-loop transfer function. Mark the roots of a(s) on the graph with an O. There should be a number of O's less than or equal to the number of X's. There are p - z zeros located at infinity. These zeros at infinity are called "implicit zeros". All branches of the root-locus will move from a pole to a zero (some branches, therefore, may travel towards infinity).
Rule 4
A point on the real axis is a part of the root-locus if it is to the left of an odd number of poles and zeros.
Rule 5
The gain at any point on the root locus can be determined by the inverse of the absolute value of the magnitude equation.
${\displaystyle \left|{\frac {b(s)}{a(s)}}\right|=|K|}$
Rule 6
The root-locus diagram is symmetric about the real-axis. All complex roots are conjugates.
Rule 7
Two roots that meet on the real-axis will break away from the axis at certain break-away points. If we set s → σ (no imaginary part), we can use the following equation:
${\displaystyle K=-{\frac {b(\sigma )}{a(\sigma )}}}$
And differentiate, setting the derivative to zero to find the break-away points:
${\displaystyle {\frac {dK}{d\sigma }}=-{\frac {d}{d\sigma }}{\frac {b(\sigma )}{a(\sigma )}}=0}$
Rule 8
The breakaway lines of the root locus are separated by angles of ${\displaystyle {\frac {\pi }{\alpha }}}$, where α is the number of poles intersecting at the breakaway point.
Rule 9
The breakaway root-loci follow asymptotes that intersect the real axis at angles φω given by:
${\displaystyle \phi _{\omega }={\frac {\pi +2N\pi }{p-z}},\quad N=0,1,...p-z-1}$
The origin of these asymptotes, OA, is given as the sum of the pole locations, minus the sum of the zero locations, divided by the difference between the number of poles and zeros:
${\displaystyle OA={\frac {\sum _{p}P_{i}-\sum _{z}Z_{i}}{p-z}}}$
The OA point should lie on the real axis.
Rule 10
The branches of the root locus cross the imaginary axis at points where the angle equation value is π (i.e., 180°).
Rule 11
The angle that a root locus branch makes with a complex-conjugate pole or zero is determined by analyzing the angle equation at a point infinitesimally close to that pole or zero. The angle of departure, φd, from a complex pole satisfies (the sums are taken over the remaining poles and over the zeros):
${\displaystyle \sum _{p}\psi _{i}-\sum _{z}\rho _{i}+\phi _{d}=\pi }$
The angle of arrival, φa, at a complex zero satisfies (the sums are taken over the remaining zeros and over the poles):
${\displaystyle \sum _{z}\rho _{i}-\sum _{p}\psi _{i}+\phi _{a}=\pi }$

We will explain these rules in the rest of the chapter.

## Root Locus Equations

Here are the two major equations:

[Root Locus Equations]

| S-Domain Equations | Z-Domain Equations |
|---|---|
| ${\displaystyle 1+KG(s)H(s)=0}$ | ${\displaystyle 1+K{\overline {GH}}(z)=0}$ |
| ${\displaystyle \angle KG(s)H(s)=180^{\circ }}$ | ${\displaystyle \angle K{\overline {GH}}(z)=180^{\circ }}$ |

Note that the sum of the angles to all the poles and zeros must equal 180°.

### Number of Asymptotes

If the number of explicit zeros of the system is denoted by Z (uppercase z), and the number of poles of the system is given by P, then the number of asymptotes (Na) is given by:

[Number of Asymptotes]

${\displaystyle N_{a}=P-Z}$

The angles of the asymptotes are given by:

[Angle of Asymptotes]

${\displaystyle \phi _{k}=(2k+1){\frac {\pi }{P-Z}}}$

for values of ${\displaystyle k=[0,1,...N_{a}-1]}$.

### Asymptote Intersection Point

The asymptotes intersect the real axis at the point:

[Origin of Asymptotes]

${\displaystyle \sigma _{0}={\frac {\sum _{P}-\sum _{Z}}{P-Z}}}$

Where ${\displaystyle \sum _{P}}$ is the sum of all the locations of the poles, and ${\displaystyle \sum _{Z}}$ is the sum of all the locations of the explicit zeros.
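The three formulas above can be bundled into a short Python helper (our own sketch). For a system with poles at -1, -2, and -3 and no explicit zeros (the same configuration as the third-order example later in this chapter), it gives three asymptotes at 60°, 180°, and 300°, originating at -2:

```python
import math

def asymptote_data(poles, zeros):
    """Number of asymptotes, their angles (radians), and the real-axis
    origin sigma_0, per the three formulas above. For conjugate pole
    pairs the imaginary parts of the sums cancel."""
    P, Z = len(poles), len(zeros)
    Na = P - Z
    angles = [(2 * k + 1) * math.pi / Na for k in range(Na)]
    sigma0 = (sum(poles) - sum(zeros)) / Na
    return Na, angles, sigma0

# Poles at -1, -2, -3 with no explicit zeros:
Na, angles, sigma0 = asymptote_data([-1, -2, -3], [])
print(Na)                                        # 3
print([round(math.degrees(a)) for a in angles])  # [60, 180, 300]
print(sigma0)                                    # -2.0
```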

### Breakaway Points

The breakaway points are located at the roots of the following equation:

[Breakaway Point Locations]

${\displaystyle {\frac {dG(s)H(s)}{ds}}=0}$ or ${\displaystyle {\frac {d{\overline {GH}}(z)}{dz}}=0}$

Once you solve for the roots, the real roots give you the breakaway/reentry points. Complex roots indicate that no breakaway or reentry occurs at that location.

The breakaway point equation can be difficult to solve, so many times the actual location is approximated.
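When there are no explicit zeros, K = -b(σ)/a(σ) reduces to a polynomial in σ, so the candidates fall out of a polynomial derivative. A numpy sketch for the hypothetical open loop G(s)H(s) = 1/(s(s+2)):

```python
import numpy as np

# K = -b(sigma)/a(sigma) = -(sigma^2 + 2*sigma) for G(s)H(s) = 1/(s(s+2))
K_of_sigma = np.poly1d([-1, -2, 0])

# breakaway/reentry candidates are the real roots of dK/dsigma = 0
candidates = K_of_sigma.deriv().roots
real_candidates = candidates[np.isreal(candidates)].real
print(real_candidates)  # [-1.] -- the two poles meet and break away at -1
```

This matches the real-axis picture: the poles at 0 and -2 move toward each other and leave the axis at -1.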

## Root Locus and Stability

The root locus procedure should produce a graph of where the poles of the system are for all values of gain K. When any or all of the roots of D are in the unstable region, the system is unstable. When any of the roots are in the marginally stable region, the system is marginally stable (oscillatory). When all of the roots of D are in the stable region, then the system is stable.

It is important to note that a system that is stable for gain K1 may become unstable for a different gain K2. Some systems may have poles that cross over from stable to unstable multiple times, giving multiple gain values for which the system is unstable.

Here is a quick refresher:

| Region | S-Domain | Z-Domain |
|---|---|---|
| Stable Region | Left-Hand S Plane, ${\displaystyle \sigma <0}$ | Inside the Unit Circle, ${\displaystyle \vert z\vert <1}$ |
| Marginally Stable Region | The Vertical Axis, ${\displaystyle \sigma =0}$ | The Unit Circle, ${\displaystyle \vert z\vert =1}$ |
| Unstable Region | Right-Hand S Plane, ${\displaystyle \sigma >0}$ | Outside the Unit Circle, ${\displaystyle \vert z\vert >1}$ |

## Examples

### Example 1: First-Order System

Find the root-locus of the open-loop system:

${\displaystyle T(s)={\frac {1}{1+2s}}}$

If we look at the characteristic equation, we can quickly solve for the single pole of the system:

${\displaystyle D(s)=1+2s=0}$
${\displaystyle s=-{\frac {1}{2}}}$

We plot that point on our root-locus graph, and everything on the real axis to the left of that single point is on the root locus (from the rules, above). Therefore, the root locus of our system looks like this:

From this image, we can see that for all values of gain this system is stable.

### Example 2: Third Order System

We are given a system with three real poles, shown by the transfer function:

${\displaystyle T(s)={\frac {1}{(s+1)(s+2)(s+3)}}}$

Is this system stable?

To answer this question, we can plot the root-locus. First, we draw the poles on the graph at locations -1, -2, and -3. The real-axis between the first and second poles is on the root-locus, as well as the real axis to the left of the third pole. We know also that there is going to be breakaway from the real axis at some point. The origin of asymptotes is located at:

${\displaystyle OA={\frac {-1+-2+-3}{3}}=-2}$,

and the angle of the asymptotes is given by:

${\displaystyle \phi ={\frac {180(2k+1)}{3}}\;\mathrm {for} \;k=0,1,2}$

We know that the breakaway occurs between the first and second poles, so we will estimate the exact breakaway point. Drawing the root-locus gives us the graph below.

We can see that for low values of gain the system is stable, but for higher values of gain, the system becomes unstable.
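We can verify this numerically. The closed-loop characteristic equation is (s+1)(s+2)(s+3) + K = s³ + 6s² + 11s + (6 + K), and a Routh-Hurwitz check predicts stability for 0 < K < 60 (since 6·11 must exceed 6 + K). Sampling the roots with numpy (our own check, not part of the original example):

```python
import numpy as np

def closed_loop_poles(K):
    # (s+1)(s+2)(s+3) + K = s^3 + 6s^2 + 11s + (6 + K)
    return np.roots([1, 6, 11, 6 + K])

# Routh-Hurwitz predicts stability while 6*11 > 6 + K, i.e. K < 60:
print(closed_loop_poles(10).real.max() < 0)   # True  (stable)
print(closed_loop_poles(100).real.max() < 0)  # False (unstable)
```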

### Example: Complex-Conjugate Zeros

Find the root-locus graph for the following system transfer function:

${\displaystyle T(s)=K{\frac {s^{2}+4.5s+5.625}{s(s+1)(s+2)}}}$

If we look at the denominator, we have poles at the origin, -1, and -2. Following Rule 4, we know that the real-axis between the first two poles, and the real axis after the third pole are all on the root-locus. We also know that there is going to be a breakaway point between the first two poles, so that they can approach the complex conjugate zeros. If we use the quadratic equation on the numerator, we can find that the zeros are located at:

${\displaystyle s=(-2.25+j0.75),(-2.25-j0.75)}$

If we draw our graph, we get the following:

We can see from this graph that the system is stable for all values of K.

### Example: Root-Locus Using MATLAB/Octave

Use MATLAB, Octave, or another piece of mathematical simulation software to produce the root-locus graph for the following system:

${\displaystyle T(s)=K{\frac {s^{2}+7s+12}{s^{2}+3s+2}}}$

First, we identify the numerator and denominator polynomials:

${\displaystyle N(s)=s^{2}+7s+12}$
${\displaystyle D(s)=s^{2}+3s+2}$

Now, we can generate the coefficient vectors from the numerator and denominator:

 num = [0 1 7 12];
 den = [0 1 3 2];


Next, we can feed these vectors into the rlocus command:

 rlocus(num, den);


Note:In Octave, we need to create a system structure first, by typing:

 sys = tf(num, den);
 rlocus(sys);


Either way, we generate the following graph:
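If neither MATLAB nor Octave is at hand, the locus can also be traced directly: the closed-loop poles are the roots of D(s) + K·N(s) = 0, so sweeping K over a grid and collecting the roots reproduces the branches (a numpy sketch of the idea, not the rlocus algorithm itself):

```python
import numpy as np

num = np.array([1, 7, 12])   # N(s) = s^2 + 7s + 12
den = np.array([1, 3, 2])    # D(s) = s^2 + 3s + 2

# closed-loop poles are the roots of D(s) + K*N(s) = 0
gains = np.linspace(0, 10, 101)
locus = [np.roots(den + K * num) for K in gains]

# at K = 0 the branches start at the open-loop poles -1 and -2
print(np.sort(locus[0].real).tolist())        # [-2.0, -1.0]
print(all(r.real.max() < 0 for r in locus))   # True: stable over this range
```

Plotting the collected roots in the complex plane (e.g. with matplotlib) yields the same picture that rlocus draws.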

# Nyquist Criterion

## Nyquist Stability Criteria

The Nyquist Stability Criteria is a test for system stability, just like the Routh-Hurwitz test, or the Root-Locus Methodology. However, the Nyquist Criteria can also give us additional information about a system. Routh-Hurwitz and Root-Locus can tell us where the poles of the system are for particular values of gain. By altering the gain of the system, we can determine if any of the poles move into the RHP, and therefore become unstable. The Nyquist Criteria, however, can tell us things about the frequency characteristics of the system. For instance, some systems with constant gain might be stable for low-frequency inputs, but become unstable for high-frequency inputs.

Here is an example of a system responding differently to different frequency input values: Consider an ordinary glass of water. If the water is exposed to ordinary sunlight, it is unlikely to heat up too much. However, if the water is exposed to microwave radiation (from inside your microwave oven, for instance), the water will quickly heat up to a boil.

Also, the Nyquist Criteria can tell us things about the phase of the input signals, the time-shift of the system, and other important information.

## Contours

A contour is a complicated mathematical construct, but luckily we only need to worry ourselves with a few points about them. We will denote contours with the Greek letter Γ (gamma). Contours are lines, drawn on a graph, that follow certain rules:

1. The contour must close (it must form a complete loop)
2. The contour may not cross directly through a pole of the system.
3. Contours must have a direction (clockwise or counterclockwise, generally).
4. A contour is called "simple" if it has no self-intersections. We only consider simple contours here.

Once we have such a contour, we can develop some important theorems about them, and finally use these theorems to derive the Nyquist stability criterion.

## Argument Principle

Here is the argument principle, which we will use to derive the stability criterion. Do not worry if you do not understand all the terminology, we will walk through it:

The Argument Principle
If we have a contour, Γ, drawn in one plane (say the complex Laplace plane, for instance), we can map that contour into another plane, the F(s) plane, by transforming the contour with the function F(s). The resultant contour, ${\displaystyle \Gamma _{F(s)}}$ will circle the origin point of the F(s) plane N times, where N is equal to the difference between Z and P (the number of zeros and poles of the function F(s), respectively).

When we have our contour, Γ, we transform it into ${\displaystyle \Gamma _{F(s)}}$ by plugging every point of the contour into the function F(s), and taking the resultant value to be a point on the transformed contour.

### Example: First Order System

Let's say, for instance, that Γ is a unit square contour in the complex s plane. The vertices of the square are located at points I,J,K,L, as follows:

${\displaystyle I=1+j}$
${\displaystyle J=1-j}$
${\displaystyle K=-1-j}$
${\displaystyle L=-1+j}$

we must also specify the direction of our contour, and we will say (arbitrarily) that it is a clockwise contour (travels from I to J to K to L). We will also define our transform function, F(s), to be the following:

${\displaystyle F(s)=2s+1}$

Setting F(s) = 0, we can show that there is one zero at s → -0.5, and no poles. Plotting this root on the same graph as our contour, we see clearly that it lies within the contour. Since s is a complex variable, defined with real and imaginary parts as:

${\displaystyle s=\sigma +j\omega }$

We know that F(s) must also be complex. We will say, for reasons of simplicity, that the axes in the F(s) plane are u and v, and are related as such:

${\displaystyle F(s)=u+vj=2(\sigma +j\omega )+1}$

From this relationship, we can define u and v in terms of σ and ω:

${\displaystyle u=2\sigma +1}$
${\displaystyle v=2\omega }$

Now, to transform Γ, we will plug every point of the contour into F(s), and the resultant values will be the points of ${\displaystyle \Gamma _{F(s)}}$. We will solve for the real values u and v, and we will start with the vertices, because they are the simplest examples:

${\displaystyle u+vj=F(I)=3+2j}$
${\displaystyle u+vj=F(J)=3-2j}$
${\displaystyle u+vj=F(K)=-1-2j}$
${\displaystyle u+vj=F(L)=-1+2j}$

We can take the lines in between the vertices as a function of s, and plug the entire function into the transform. Luckily, because we are using straight lines, we can simplify very much:

• Line from I to J: ${\displaystyle \sigma =1,u=3,v=2\omega }$
• Line from J to K: ${\displaystyle \omega =-1,u=2\sigma +1,v=-2}$
• Line from K to L: ${\displaystyle \sigma =-1,u=-1,v=2\omega }$
• Line from L to I: ${\displaystyle \omega =1,u=2\sigma +1,v=2}$

And when we graph these functions, from vertex to vertex, we see that the resultant contour in the F(s) plane is a square, but not centered at the origin, and larger in size. Notice how the contour encircles the origin of the F(s) plane one time. This will be important later on.
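We can confirm the single encirclement numerically: sample the square contour, map each sample through F(s) = 2s + 1, and accumulate the change in angle of the image (a numerical check of our own, not part of the original text):

```python
import numpy as np

F = lambda s: 2 * s + 1

# clockwise square contour I -> J -> K -> L -> I
corners = [1 + 1j, 1 - 1j, -1 - 1j, -1 + 1j, 1 + 1j]
samples = np.concatenate(
    [np.linspace(a, b, 200) for a, b in zip(corners, corners[1:])])

mapped = F(samples)
# net winding about the origin = total unwrapped angle change / 2*pi
winding = np.diff(np.unwrap(np.angle(mapped))).sum() / (2 * np.pi)
print(round(abs(winding)))  # 1 -- the image encircles the origin once
```

This agrees with the argument principle: F(s) has one zero (and no poles) inside Γ, so N = Z - P = 1.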

### Example: Second-Order System

Let's say that we have a slightly more complicated mapping function:

${\displaystyle F(s)={\frac {s+0.5}{2s^{2}+2s+1}}}$

We can see clearly that F(s) has a zero at s → -0.5, and a complex conjugate set of poles at s → -0.5 + 0.5j and s → -0.5 - 0.5j. We will use the same unit square contour, Γ, from above:

${\displaystyle I=1+j}$
${\displaystyle J=1-j}$
${\displaystyle K=-1-j}$
${\displaystyle L=-1+j}$

We can see clearly that the poles and the zero of F(s) lie within Γ. Setting F(s) to u + vj and solving, we get the following relationships:

${\displaystyle u+vj=F(\sigma +j\omega )={\frac {(\sigma +0.5)+j(\omega )}{(2\sigma ^{2}-2\omega ^{2}+2\sigma +1)+j(2\sigma \omega +\omega )}}}$

This is a little difficult now, because we need to simplify this whole expression, and separate it out into real and imaginary parts. There are two methods of doing this, neither of which is short or easy enough to demonstrate here in its entirety:

1. We convert the numerator and denominator polynomials into a polar representation in terms of r and θ, then perform the division, and then convert back into rectangular format.
2. We plug each segment of our contour into this equation, and simplify numerically.

## The Nyquist Contour

The Nyquist contour, the contour that makes the entire nyquist criterion work, must encircle the entire unstable region of the complex plane. For analog systems, this is the right half of the complex s plane. For digital systems, this is the entire plane outside the unit circle. Remember that if a pole to the closed-loop transfer function (or equivalently a zero of the characteristic equation) lies in the unstable region of the complex plane, the system is an unstable system.

Analog Systems
The Nyquist contour for analog systems is an infinite semi-circle that encircles the entire right-half of the s plane. The semicircle travels up the imaginary axis from negative infinity to positive infinity. From positive infinity, the contour breaks away from the imaginary axis, in the clock-wise direction, and forms a giant semicircle.
Digital Systems
The Nyquist contour in digital systems is a counter-clockwise encirclement of the unit circle.

## Nyquist Criteria

Let us first introduce the most important equation when dealing with the Nyquist criterion:

${\displaystyle N=Z-P}$

Where:

• N is the number of encirclements of the (-1, 0) point.
• Z is the number of zeros of the characteristic equation enclosed by the Nyquist contour (equivalently, the closed-loop poles in the unstable region).
• P is the number of poles of the open-loop transfer function enclosed by the Nyquist contour.

With this equation stated, we can now state the Nyquist Stability Criterion:

Nyquist Stability Criterion
A feedback control system is stable, if and only if the contour ${\displaystyle \Gamma _{F(s)}}$ in the F(s) plane does not encircle the (-1, 0) point when P is 0.
A feedback control system is stable, if and only if the contour ${\displaystyle \Gamma _{F(s)}}$ in the F(s) plane encircles the (-1, 0) point a number of times equal to the number of poles of F(s) enclosed by Γ.

In other words, if P is zero then N must equal zero. Otherwise, N must equal P. Essentially, we are saying that Z must always equal zero, because Z is the number of zeros of the characteristic equation (and therefore the number of poles of the closed-loop transfer function) that are in the right-half of the s plane.

Keep in mind that we don't necessarily know the locations of all the zeros of the characteristic equation. So if we find, using the nyquist criterion, that the number of poles is not equal to N, then we know that there must be a zero in the right-half plane, and that therefore the system is unstable.

## Nyquist ↔ Bode

A careful inspection of the Nyquist plot will reveal a surprising relationship to the Bode plots of the system. If we use the Bode phase plot as the angle θ, and the Bode magnitude plot as the distance r, then it becomes apparent that the Nyquist plot of a system is simply the polar representation of the Bode plots.

To obtain the Nyquist plot from the Bode plots, we take the phase angle and the magnitude value at each frequency ω. We convert the magnitude value from decibels back into gain ratios. Then, we plot the ordered pairs (r, θ) on a polar graph.
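As a small sketch of that conversion (assuming one Bode reading at a single frequency, magnitude in dB and phase in degrees):

```python
import cmath

def nyquist_point(mag_db, phase_deg):
    """One Bode reading -> one point on the Nyquist plot."""
    r = 10 ** (mag_db / 20)              # decibels back to a gain ratio
    theta = phase_deg * cmath.pi / 180   # degrees to radians
    return cmath.rect(r, theta)          # the polar pair (r, theta)

# e.g. roughly -6.02 dB at -90 degrees is (very nearly) the point -0.5j:
p = nyquist_point(-6.02, -90)
print(round(p.real, 3), round(p.imag, 3))
```

Repeating this for every frequency on the Bode plots traces out the full Nyquist curve.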

## Nyquist in the Z Domain

The Nyquist Criteria can be utilized in the digital domain in a similar manner as it is used with analog systems. The primary difference in using the criteria is that the shape of the Nyquist contour must change to encompass the unstable region of the Z plane. Therefore, instead of an infinite semi-circle, the Nyquist contour for digital systems is a counter-clockwise unit circle. By changing the shape of the contour, the same N = Z - P equation holds true, and the resulting Nyquist graph will typically look identical to one from an analog system, and can be interpreted in the same way.

# State-Space Stability

## State-Space Stability

If a system is represented in the state-space domain, it doesn't make sense to convert that system to a transfer function representation (or even a transfer matrix representation) in an attempt to use any of the previous stability methods. Luckily, there are other analysis methods that can be used with the state-space representation to determine if a system is stable or not. First, let us introduce the notion of instability:

Unstable
A system is said to be unstable if the system response approaches infinity as time approaches infinity. If our system is G(t), then, we can say a system is unstable if:
${\displaystyle \lim _{t\to \infty }\|G(t)\|=\infty }$

Also, a key concept when we are talking about stability of systems is the concept of an equilibrium point:

Equilibrium Point
Given a system f such that:
${\displaystyle x'(t)=f(x(t))}$

A particular state xe is called an equilibrium point if

${\displaystyle f(x_{e})=0}$

for all time t in the interval ${\displaystyle [t_{0},\infty )}$, where t0 is the starting time of the system.

An equilibrium point is also known as a "stationary point", a "critical point", a "singular point", or a "rest state" in other books or literature.

The definitions below typically require that the equilibrium point be zero. If we have an equilibrium point xe = a, then we can use the following change of variables to move the equilibrium point to zero:

${\displaystyle {\bar {x}}=x-a,\qquad {\bar {x}}_{e}=x_{e}-a=0}$

We will also see below that a system's stability is defined in terms of an equilibrium point. Related to the concept of an equilibrium point is the notion of a zero point:

Zero State
A state xz is a zero state if xz = 0. A zero state may or may not be an equilibrium point.

### Stability Definitions

The equilibrium x = 0 of the system is stable if and only if the solutions of the zero-input state equation are bounded. Equivalently, x = 0 is a stable equilibrium if and only if for every initial time t0, there exists an associated finite constant k(t0) such that:

${\displaystyle \operatorname {sup} _{t\geq t_{0}}\|\phi (t,t_{0})\|=k(t_{0})<\infty }$

Where sup is the supremum, or "maximum" value of the equation. The maximum value of this equation must never exceed the arbitrary finite constant k (and therefore it may not be infinite at any point).

Uniform Stability
The system is defined to be uniformly stable if it is stable for all initial values of t0:
${\displaystyle \operatorname {sup} _{t_{0}\geq 0}\left[\operatorname {sup} _{t\geq t_{0}}\|\phi (t,t_{0})\|\right]=k_{0}<\infty }$

Uniform stability is a more general, and more powerful form of stability than was previously provided.

Asymptotic Stability
A system is defined to be asymptotically stable if:
${\displaystyle \lim _{t\to \infty }\|\phi (t,t_{0})\|=0}$

A time-invariant system is asymptotically stable if all the eigenvalues of the system matrix A have negative real parts. If a system is asymptotically stable, it is also BIBO stable. However the inverse is not true: A system that is BIBO stable might not be asymptotically stable.

Uniform Asymptotic Stability
A system is defined to be uniformly asymptotically stable if the system is asymptotically stable for all values of t0.
Exponential Stability
A system is defined to be exponentially stable if the system response decays exponentially towards zero as time approaches infinity.

For linear systems, uniform asymptotic stability is the same as exponential stability. This is not the case with non-linear systems.

### Marginal Stability

Here we will discuss some rules concerning systems that are marginally stable. Because we are discussing eigenvalues and eigenvectors, these theorems only apply to time-invariant systems.

1. A time-invariant system is marginally stable if and only if all the eigenvalues of the system matrix A are zero or have negative real parts, and those with zero real parts are simple roots of the minimal polynomial of A.
2. The equilibrium x = 0 of the state equation is uniformly stable if all eigenvalues of A have non-positive real parts, and there is a complete set of distinct eigenvectors associated with the eigenvalues with zero real parts.
3. The equilibrium x = 0 of the state equation is exponentially stable if and only if all eigenvalues of the system matrix A have negative real parts.

## Eigenvalues and Poles

A Linearly Time Invariant (LTI) system is stable (asymptotically stable, see above) if all the eigenvalues of A have negative real parts. Consider the following state equation:

${\displaystyle x'=Ax(t)+Bu(t)}$

We can take the Laplace Transform of both sides of this equation, using initial conditions of x0 = 0:

${\displaystyle sX(s)=AX(s)+BU(s)}$

Subtract AX(s) from both sides:

${\displaystyle sX(s)-AX(s)=BU(s)}$
${\displaystyle (sI-A)X(s)=BU(s)}$

Assuming (sI - A) is nonsingular, we can multiply both sides by the inverse:

${\displaystyle X(s)=(sI-A)^{-1}BU(s)}$

Now, if we remember our formula for finding the matrix inverse from the adjoint matrix:

${\displaystyle A^{-1}={\frac {\operatorname {adj} (A)}{|A|}}}$

We can use that definition here:

${\displaystyle X(s)={\frac {\operatorname {adj} (sI-A)BU(s)}{|(sI-A)|}}}$

Let's look at the denominator (which we will now call D(s)) more closely. The poles of the system are the values of s that satisfy:

${\displaystyle D(s)=|(sI-A)|=0}$

And if we substitute λ for s, we see that this is actually the characteristic equation of matrix A! This means that the values for s that satisfy the equation (the poles of our transfer function) are precisely the eigenvalues of matrix A. In the S domain, it is required that all the poles of the system be located in the left-half plane, and therefore all the eigenvalues of A must have negative real parts.
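This eigenvalue test is easy to carry out numerically. Here is a minimal sketch (the system matrix is a hypothetical example, not from the book) that checks asymptotic stability of an LTI system x' = Ax by inspecting the eigenvalues of A, which, as shown above, coincide with the poles of the transfer function:

```python
import numpy as np

# Hypothetical 2x2 system matrix; characteristic polynomial s^2 + 3s + 2,
# so the eigenvalues (poles) are s = -1 and s = -2.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])

eigenvalues = np.linalg.eigvals(A)

# Asymptotically stable iff every eigenvalue has a negative real part
stable = all(ev.real < 0 for ev in eigenvalues)
print(stable)  # -> True
```

Since both eigenvalues lie in the left-half plane, the system is asymptotically stable.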

## Impulse Response Matrix

We can define the Impulse response matrix, G(t, τ) in order to define further tests for stability:

[Impulse Response Matrix]

${\displaystyle G(t,\tau )=\left\{{\begin{matrix}C(t)\phi (t,\tau )B(\tau )&{\mbox{ if }}t\geq \tau \\0&{\mbox{ if }}t<\tau \end{matrix}}\right.}$

The system is uniformly stable if and only if there exists a finite positive constant L such that for all time t and all initial conditions t0 with ${\displaystyle t\geq t_{0}}$ the following integral is satisfied:

${\displaystyle \int _{t_{0}}^{t}\|G(t,\tau )\|d\tau \leq L}$

In other words, the above integral must have a finite value, or the system is not uniformly stable.

In the time-invariant case, the impulse response matrix reduces to:

${\displaystyle G(t)=\left\{{\begin{matrix}Ce^{At}B&{\mbox{ if }}t\geq 0\\0&{\mbox{ if }}t<0\end{matrix}}\right.}$

In a time-invariant system, we can use the impulse response matrix to determine if the system is uniformly BIBO stable by taking a similar integral:

${\displaystyle \int _{0}^{\infty }\|G(t)\|dt\leq L}$

Where L is a finite constant.

## Positive Definiteness

These terms are important, and will be used in further discussions on this topic.

• f(x) is positive definite if f(x) > 0 for all x ≠ 0, and f(0) = 0.
• f(x) is positive semi-definite if ${\displaystyle f(x)\geq 0}$ for all x.
• f(x) is negative definite if f(x) < 0 for all x ≠ 0, and f(0) = 0.
• f(x) is negative semi-definite if ${\displaystyle f(x)\leq 0}$ for all x.

A Hermitian matrix X is positive definite if all its leading principal minors are positive. Equivalently, a Hermitian matrix X is positive definite if all its eigenvalues (which are real) are positive. These two tests may be used interchangeably.

Positive definiteness is a very important concept. So much so that the Lyapunov stability test depends on it. The other categorizations are not as important, but are included here for completeness.
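As a quick numerical sketch (the test matrix is a made-up example), positive definiteness of a symmetric matrix can be checked two ways: via its eigenvalues, or via a Cholesky factorization, which succeeds only for positive definite matrices:

```python
import numpy as np

# Hypothetical symmetric matrix with eigenvalues 1 and 3 (both positive)
X = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Test 1: all eigenvalues of a Hermitian matrix must be positive
pd_by_eigs = bool(np.all(np.linalg.eigvalsh(X) > 0))

# Test 2: Cholesky factorization exists only for positive definite matrices
try:
    np.linalg.cholesky(X)
    pd_by_chol = True
except np.linalg.LinAlgError:
    pd_by_chol = False

print(pd_by_eigs, pd_by_chol)  # -> True True
```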

## Lyapunov Stability

### Lyapunov's Equation

For linear systems, we can use the Lyapunov Equation, below, to determine if a system is stable. We will state the Lyapunov Equation first, and then state the Lyapunov Stability Theorem.

[Lyapunov Equation]

${\displaystyle MA+A^{T}M=-N}$

Where A is the system matrix, and M and N are p × p square matrices.

Lyapunov Stability Theorem
An LTI system ${\displaystyle x'=Ax}$ is stable if there exists a matrix M that satisfies the Lyapunov Equation where N is an arbitrary positive definite matrix, and M is a unique positive definite matrix.

Notice that for the Lyapunov Equation to be satisfied, the matrices must be compatible sizes. In fact, matrices A, M, and N must all be square matrices of equal size. Alternatively, we can write:

Lyapunov Stability Theorem (alternate)
If all the eigenvalues of the system matrix A have negative real parts, then the Lyapunov Equation has a unique solution M for every positive definite matrix N, and the solution can be calculated by:
${\displaystyle M=\int _{0}^{\infty }e^{A^{T}t}Ne^{At}dt}$

If the matrix M can be calculated in this manner, the system is asymptotically stable.
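The Lyapunov test can be sketched numerically with SciPy. Note that `scipy.linalg.solve_continuous_lyapunov(a, q)` solves a@x + x@a' = q, so to match the form MA + A'M = -N above we pass A' for a and -N for q. The system matrix below is a hypothetical stable example:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable system matrix (eigenvalues -1, -2) and arbitrary
# positive definite N
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
N = np.eye(2)

# Solve A'M + MA = -N  (same as  MA + A'M = -N)
M = solve_continuous_lyapunov(A.T, -N)

# Verify the solution and test M for positive definiteness
residual = M @ A + A.T @ M + N  # should be numerically zero
M_positive_definite = bool(np.all(np.linalg.eigvalsh(M) > 0))
print(M_positive_definite)  # -> True, so the system is stable
```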

Controllers and Compensators

There are a number of preexisting devices for use in system control, such as lead and lag compensators, and powerful PID controllers. PID controllers are so powerful that many control engineers may use no other method of system control! The chapters in this section will discuss some of the common types of system compensators and controllers.

# Controllability and Observability

## System Interaction

In the world of control engineering, there are a slew of systems available that need to be controlled. The task of a control engineer is to design controller and compensator units to interact with these pre-existing systems. However, some systems simply cannot be controlled (or, more often, cannot be controlled in specific ways). The concept of controllability refers to the ability of a controller to arbitrarily alter the functionality of the system plant.

The state-variable of a system, x, represents the internal workings of the system that can be separate from the regular input-output relationship of the system. This also needs to be measured, or observed. The term observability describes whether the internal state variables of the system can be externally measured.

## Controllability

Complete state controllability (or simply controllability if no other context is given) describes the ability of an external input to move the internal state of a system from any initial state to any other final state in a finite time interval.

We will start off with the definitions of the term controllability, and the related terms reachability and stabilizability.

Controllability
A system with internal state vector x is called controllable if and only if the system states can be changed by changing the system input.
Reachability
A particular state x1 is called reachable if there exists an input that transfers the state of the system from the initial state x0 to x1 in some finite time interval [t0, t).
Stabilizability
A system is Stabilizable if all states that cannot be reached decay to zero asymptotically.

We can also write out the definition of reachability more precisely:

A state x1 is called reachable at time t1 if for some finite initial time t0 there exists an input u(t) that transfers the state x(t) from the origin at t0 to x1.

A system is reachable at time t1 if every state x1 in the state-space is reachable at time t1.

Similarly, we can more precisely define the concept of controllability:

A state x0 is controllable at time t0 if for some finite time t1 there exists an input u(t) that transfers the state x(t) from x0 to the origin at time t1.

A system is called controllable at time t0 if every state x0 in the state-space is controllable.

### Controllability Matrix

For LTI (linear time-invariant) systems, a system is reachable if and only if its controllability matrix, ζ, has a full row rank of p, where p is the dimension of the matrix A, and p × q is the dimension of matrix B.

[Controllability Matrix]

${\displaystyle \zeta ={\begin{bmatrix}B&AB&A^{2}B&\cdots &A^{p-1}B\end{bmatrix}}\in R^{p\times pq}}$

A system is controllable or "Controllable to the origin" when any state x1 can be driven to the zero state x = 0 in a finite number of steps.

A system is controllable when the rank of the system matrix A is p, and the rank of the controllability matrix is equal to:

${\displaystyle Rank(\zeta )=Rank(A^{-1}\zeta )=p}$

If the second equation is not satisfied, the system is not controllable.

MATLAB allows one to easily create the controllability matrix with the ctrb command. To create the controllability matrix ${\displaystyle \zeta }$ simply type

zeta=ctrb(A,B)

where A and B are mentioned above. Then in order to determine if the system is controllable or not one can use the rank command to determine if it has full rank.
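The same computation can be sketched in Python, mirroring MATLAB's `ctrb`: build the controllability matrix [B, AB, A²B, ..., A^(p-1)B] and check that it has full rank p. The A and B below are hypothetical examples:

```python
import numpy as np

# Hypothetical single-input system
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

p = A.shape[0]

# Controllability matrix: horizontally stack B, AB, ..., A^(p-1) B
zeta = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(p)])

controllable = np.linalg.matrix_rank(zeta) == p
print(controllable)  # -> True
```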

If

${\displaystyle Rank(A)<p}$

Then controllability does not imply reachability.

• Reachability always implies controllability.
• Controllability only implies reachability when the state transition matrix is nonsingular.

### Determining Reachability

There are four methods that can be used to determine if a system is reachable or not:

1. If the p rows of ${\displaystyle \phi (t,\tau )B(\tau )}$ are linearly independent over the field of complex numbers. That is, if the rank of the product of those two matrices is equal to p for all values of t and τ.
2. If the rank of the controllability matrix is the same as the rank of the system matrix A.
3. If ${\displaystyle \operatorname {rank} [\lambda I-A,\ B]=p}$ for all eigenvalues λ of the matrix A.
4. If the rank of the reachability gramian (described below) is equal to the rank of the system matrix A.

Each one of these conditions is both necessary and sufficient. If any one test fails, all the tests will fail, and the system is not reachable. If any test is positive, then all the tests will be positive, and the system is reachable.

### Gramians

Gramians are complicated mathematical functions that can be used to determine specific things about a system. For instance, we can use gramians to determine whether a system is controllable or reachable. Gramians, because they are more complicated than other methods, are typically only used when other methods of analyzing a system fail (or are too difficult).

All the gramians presented on this page are matrices with dimension p × p (the same size as the system matrix A).

All the gramians presented here will be described using the general case of linear time-variant systems. To change these into LTI (time-invariant) equations, the following substitutions can be used:

${\displaystyle \phi (t,\tau )\to e^{A(t-\tau )}}$
${\displaystyle \phi '(t,\tau )\to e^{A'(t-\tau )}}$

Where we are using the notation X' to denote the transpose of a matrix X (as opposed to the traditional notation XT).

### Reachability Gramian

We can define the reachability gramian as the following integral:

[Reachability Gramian]

${\displaystyle W_{r}(t_{0},t_{1})=\int _{t_{0}}^{t_{1}}\phi (t_{1},\tau )B(\tau )B'(\tau )\phi '(t_{1},\tau )d\tau }$

The system is reachable if the rank of the reachability gramian is the same as the rank of the system matrix:

${\displaystyle \operatorname {rank} (W_{r})=p}$


### Controllability Gramian

We can define the controllability gramian of a system (A, B) as:

[Controllability Gramian]

${\displaystyle W_{c}(t_{0},t_{1})=\int _{t_{0}}^{t_{1}}\phi (t_{0},\tau )B(\tau )B'(\tau )\phi '(t_{0},\tau )d\tau }$

The system is controllable if the rank of the controllability gramian is the same as the rank of the system matrix:

${\displaystyle \operatorname {rank} (W_{c})=p}$

If the system is time-invariant, there are two important points to be made. First, the reachability gramian and the controllability gramian reduce to be the same equation. Therefore, for LTI systems, if we have found one gramian, then we automatically know both gramians. Second, the controllability gramian can also be found as the solution to the following Lyapunov equation:

${\displaystyle AW_{c}+W_{c}A'=-BB'}$

Many software packages, notably MATLAB, have functions to solve the Lyapunov equation. By using this last relation, we can also solve for the controllability gramian using these existing functions.
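As a sketch of that last relation (using hypothetical A and B matrices), SciPy's Lyapunov solver can compute the controllability gramian directly from AWc + WcA' = -BB', after which the rank test above applies:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Hypothetical stable, controllable pair (A, B)
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
B = np.array([[0.0],
              [1.0]])

# solve_continuous_lyapunov(a, q) solves a@x + x@a' = q,
# so this solves A Wc + Wc A' = -B B'
Wc = solve_continuous_lyapunov(A, -B @ B.T)

p = A.shape[0]
controllable = np.linalg.matrix_rank(Wc) == p
print(controllable)  # -> True
```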

## Observability

The state-variables of a system might not be able to be measured for any of the following reasons:

1. The location of the particular state variable might not be physically accessible (a capacitor or a spring, for instance).
2. There are no appropriate instruments to measure the state variable, or the state-variable might be measured in units for which there does not exist any measurement device.
3. The state-variable is a derived "dummy" variable that has no physical meaning.

If things cannot be directly observed, for any of the reasons above, it can be necessary to calculate or estimate the values of the internal state variables, using only the input/output relation of the system, and the output history of the system from the starting time. In other words, we must ask whether it is possible to determine what the inside of the system (the internal system states) is like, by only observing the outside performance of the system (input and output). We can provide the following formal definition of mathematical observability:

Observability
A system with an initial state, ${\displaystyle x(t_{0})}$ is observable if and only if the value of the initial state can be determined from the system output y(t) that has been observed through the time interval ${\displaystyle t_{0}<t<t_{1}}$. If the initial state cannot be so determined, the system is unobservable.
Complete Observability
A system is said to be completely observable if all the possible initial states of the system can be observed. Systems that fail this criterion are said to be unobservable.
Detectability
A system is Detectable if all states that cannot be observed decay to zero asymptotically.
Constructability
A system is constructable if the present state of the system can be determined from the present and past outputs and inputs to the system. If a system is observable, then it is also constructable. The relationship does not work the other way around.

A system state xi is unobservable at a given time ti if the zero-input response of the system is zero for all time t. If a system is observable, then the only state that produces a zero output for all time is the zero state. We can use this concept to define the term state-observability.

State-Observability
A system is completely state-observable at time t0 or the pair (A, C) is observable at t0 if the only state that is unobservable at t0 is the zero state x = 0.

### Constructability

A state x is unconstructable at a time t1 if for every finite time t < t1 the zero input response of the system is zero for all time t.

A system is completely state constructable at time t1 if the only state x that is unconstructable at t1 is x = 0.

If a system is observable at an initial time t0, then it is constructable at any later time t1 > t0.

### Observability Matrix

The observability of the system is dependent only on the system states and the system output, so we can simplify our state equations to remove the input terms:

Matrix Dimensions:
A: p × p
B: p × q
C: r × p
D: r × q

${\displaystyle x'(t)=Ax(t)}$
${\displaystyle y(t)=Cx(t)}$

Therefore, we can show that the observability of the system is dependent only on the coefficient matrices A and C. We can show precisely how to determine whether a system is observable, using only these two matrices. If we have the observability matrix Q:

[Observability Matrix]

${\displaystyle Q={\begin{bmatrix}C\\CA\\CA^{2}\\\vdots \\CA^{p-1}\end{bmatrix}}}$

we can show that the system is observable if and only if the Q matrix has a rank of p. Notice that the Q matrix has the dimensions pr × p.

MATLAB allows one to easily create the observability matrix with the obsv command. To create the observability matrix ${\displaystyle Q}$ simply type

Q=obsv(A,C)

where A and C are mentioned above. Then in order to determine if the system is observable or not one can use the rank command to determine if it has full rank.
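Mirroring MATLAB's `obsv`, a Python sketch (with hypothetical A and C matrices) stacks C, CA, ..., CA^(p-1) and checks that the observability matrix Q has full rank p:

```python
import numpy as np

# Hypothetical single-output system
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
C = np.array([[1.0, 0.0]])

p = A.shape[0]

# Observability matrix: vertically stack C, CA, ..., C A^(p-1)
Q = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(p)])

observable = np.linalg.matrix_rank(Q) == p
print(observable)  # -> True
```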

### Observability Gramian

We can define an observability gramian as:

[Observability Gramian]

${\displaystyle W_{o}(t_{0},t_{1})=\int _{t_{0}}^{t_{1}}\phi '(\tau ,t_{0})C'(\tau )C(\tau )\phi (\tau ,t_{0})d\tau }$

A system is completely state observable at time t0 < t < t1 if and only if the rank of the observability gramian is equal to the size p of the system matrix A.

If the system (A, B, C, D) is time-invariant, we can construct the observability gramian as the solution to the Lyapunov equation:

${\displaystyle A'W_{o}+W_{o}A=-C'C}$

### Constructability Gramian

We can define a constructability gramian as:

[Constructability Gramian]

${\displaystyle W_{cn}(t_{0},t_{1})=\int _{t_{0}}^{t_{1}}\phi '(\tau ,t_{1})C'(\tau )C(\tau )\phi (\tau ,t_{1})d\tau }$

A system is completely state observable at an initial time t0 if and only if there exists a finite t1 such that:

${\displaystyle \operatorname {rank} (W_{o})=\operatorname {rank} (W_{cn})=p}$

Notice that the constructability and observability gramians are very similar, and typically they can both be calculated at the same time, only substituting in different values into the state-transition matrix.

## Duality Principle

The concepts of controllability and observability are very similar. In fact, there is a concrete relationship between the two. We can say that a system (A, B) is controllable if and only if the system (A', B') is observable. This fact can be proven by substituting A' for A and B' for C in the observability gramian. The resulting equation exactly mirrors the formula for the controllability gramian, implying that the two results are the same.

# System Specifications

## System Specification

There are a number of different specifications that might need to be met by a new system design. In this chapter we will talk about some of the specifications that systems use, and some of the ways that engineers analyze and quantify systems.

## Sensitivity

The sensitivity of a system is a parameter that is specified in terms of a given output and a given input. The sensitivity measures how much change is caused in the output by small changes to the reference input. Sensitive systems have very large changes in output in response to small changes in the input. The sensitivity of system H to input X is denoted as:

${\displaystyle S_{H}^{X}(s)}$

## Disturbance Rejection

All physically-realized systems have to deal with a certain amount of noise and disturbance. The ability of a system to reject the noise is known as the disturbance rejection of the system.

## Control Effort

The control effort is the amount of energy or power necessary for the controller to perform its duty.

# Controllers

## Controllers

There are a number of different standard types of control systems that have been studied extensively. These controllers, specifically the P, PD, PI, and PID controllers are very common in the production of physical systems, but as we will see they each carry several drawbacks.

## Proportional Controllers

A Proportional controller block diagram

Proportional controllers are simply gain values. These are essentially multiplicative coefficients, usually denoted with a K. A P controller can only force the system poles to a spot on the system's root locus. A P controller cannot be used for arbitrary pole placement.

We refer to this kind of controller by a number of different names: proportional controller, gain, and zeroth-order controller.

## Derivative Controllers

A Proportional-Derivative controller block diagram

In the Laplace domain, we can show the derivative of a signal using the following notation:

${\displaystyle D(s)={\mathcal {L}}\left\{f'(t)\right\}=sF(s)-f(0)}$

Since most systems that we are considering have zero initial condition, this simplifies to:

${\displaystyle D(s)={\mathcal {L}}\left\{f'(t)\right\}=sF(s)}$

The derivative controllers are implemented to account for future values, by taking the derivative, and controlling based on where the signal is going to be in the future. Derivative controllers should be used with care, because even a small amount of high-frequency noise can cause very large derivatives, which appear like amplified noise. Also, derivative controllers are difficult to implement perfectly in hardware or software, so frequently solutions involving only integral controllers or proportional controllers are preferred over using derivative controllers.

Notice that derivative controllers are not proper systems, in that the order of the numerator of the system is greater than the order of the denominator of the system. This quality of being a non-proper system also makes certain mathematical analysis of these systems difficult.

### Z-Domain Derivatives

We won't derive this equation here, but suffice it to say that the following equation in the Z-domain performs the same function as the Laplace-domain derivative:

${\displaystyle D(z)={\frac {z-1}{Tz}}}$

Where T is the sampling time of the signal.

## Integral Controllers

A Proportional-Integral Controller block diagram

To implement an integral in a Laplace-domain transfer function, we use the following:

${\displaystyle {\mathcal {L}}\left\{\int _{0}^{t}f(t)\,dt\right\}={1 \over s}F(s)}$

Integral controllers of this type add up the area under the curve for past time. In this manner, a PI controller (and eventually a PID) can take account of the past performance of the controller, and correct based on past errors.

### Z-Domain Integral

The integral controller can be implemented in the Z domain using the following equation:

${\displaystyle D(z)={\frac {z+1}{z-1}}}$

## PID Controllers

A block diagram of a PID controller

PID controllers are combinations of the proportional, derivative, and integral controllers. Because of this, PID controllers have large amounts of flexibility. We will see below that there are definite limits on PID control.

### PID Transfer Function

The transfer function for a standard PID controller is an addition of the Proportional, the Integral, and the Differential controller transfer functions (hence the name, PID). Also, we give each term a gain constant, to control the weight that each factor has on the final output:

[PID]

${\displaystyle D(s)=K_{p}+{K_{i} \over s}+K_{d}s}$

Notice that, by combining the terms over the common denominator s, we can write the transfer function of a PID controller as a ratio of polynomials:

${\displaystyle D(s)={\frac {K_{i}+K_{p}s+K_{d}s^{2}}{s}}}$

This form of the equation will be especially useful to us when we look at polynomial design.

### PID Tuning

The process of selecting the various coefficient values to make a PID controller perform correctly is called PID Tuning. There are a number of different methods for determining these values:[1]

1. Direct Synthesis (DS) method
2. Internal Model Control (IMC) method
3. Controller tuning relations
4. Frequency response techniques
5. Computer simulation
6. On-line tuning after the control system is installed
7. Trial and error

Notes:

1. Seborg, Dale E.; Edgar, Thomas F.; Mellichamp, Duncan A. (2003). Process Dynamics and Control, Second Edition. John Wiley & Sons, Inc. ISBN 0471000779.

### Digital PID

In the Z domain, the PID controller has the following transfer function:

[Digital PID]

${\displaystyle D(z)=K_{p}+K_{i}{\frac {T}{2}}\left[{\frac {z+1}{z-1}}\right]+K_{d}\left[{\frac {z-1}{Tz}}\right]}$

And we can convert this into a canonical equation by manipulating the above equation to obtain:

${\displaystyle D(z)={\frac {a_{0}+a_{1}z^{-1}+a_{2}z^{-2}}{1+b_{1}z^{-1}+b_{2}z^{-2}}}}$

Where:

${\displaystyle a_{0}=K_{p}+{\frac {K_{i}T}{2}}+{\frac {K_{d}}{T}}}$
${\displaystyle a_{1}=-K_{p}+{\frac {K_{i}T}{2}}+{\frac {-2K_{d}}{T}}}$
${\displaystyle a_{2}={\frac {K_{d}}{T}}}$
${\displaystyle b_{1}=-1}$
${\displaystyle b_{2}=0}$

Once we have the Z-domain transfer function of the PID controller, we can convert it into the digital time domain:

${\displaystyle y[n]=x[n]a_{0}+x[n-1]a_{1}+x[n-2]a_{2}-y[n-1]b_{1}-y[n-2]b_{2}}$

And finally, from this difference equation, we can create a digital filter structure to implement the PID.
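Such a filter structure can be sketched directly from the difference equation. The gains and input sequence below are hypothetical; note that with Ki = Kd = 0 the coefficients reduce to a0 = Kp, a1 = -Kp, b1 = -1, so the controller collapses to a pure gain, which makes a handy sanity check:

```python
# Digital PID via the difference equation derived above:
# y[n] = a0*x[n] + a1*x[n-1] + a2*x[n-2] - b1*y[n-1] - b2*y[n-2]
def digital_pid(x, Kp, Ki, Kd, T):
    a0 = Kp + Ki * T / 2 + Kd / T
    a1 = -Kp + Ki * T / 2 - 2 * Kd / T
    a2 = Kd / T
    b1, b2 = -1.0, 0.0

    y = []
    for n in range(len(x)):
        xn1 = x[n - 1] if n >= 1 else 0.0   # x[n-1], zero before start
        xn2 = x[n - 2] if n >= 2 else 0.0   # x[n-2]
        yn1 = y[n - 1] if n >= 1 else 0.0   # y[n-1]
        yn2 = y[n - 2] if n >= 2 else 0.0   # y[n-2]
        y.append(a0 * x[n] + a1 * xn1 + a2 * xn2 - b1 * yn1 - b2 * yn2)
    return y

# Sanity check: pure proportional control is just a gain of Kp
print(digital_pid([1.0, 2.0, 3.0], Kp=2.0, Ki=0.0, Kd=0.0, T=0.1))
# -> [2.0, 4.0, 6.0]
```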

## Bang-Bang Controllers

Despite its low-brow-sounding name, the bang-bang controller is a very useful tool that is only really available using digital methods. A better name for a bang-bang controller is an on/off controller: a digital system makes decisions based on target and threshold values, and decides whether to turn the controller on or off. Bang-bang controllers are a non-linear style of control.

Consider the example of a household furnace. The oil in a furnace burns at a specific temperature—it can't burn hotter or cooler. To control the temperature in your house then, the thermostat control unit decides when to turn the furnace on, and when to turn the furnace off. This on/off control scheme is a bang-bang controller.
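The thermostat logic can be sketched in a few lines. The threshold temperatures are illustrative values, not from the book; the band between them adds hysteresis so the furnace does not rapidly cycle on and off near the setpoint:

```python
# Bang-bang (on/off) thermostat controller with a hysteresis band.
# Thresholds are hypothetical example values in degrees Celsius.
def bang_bang(temperature, furnace_on, low=19.0, high=21.0):
    if temperature < low:
        return True    # too cold: turn the furnace on
    if temperature > high:
        return False   # too hot: turn the furnace off
    return furnace_on  # inside the band: keep the current state

print(bang_bang(18.0, False))  # -> True  (cold, so switch on)
print(bang_bang(22.0, True))   # -> False (hot, so switch off)
print(bang_bang(20.0, True))   # -> True  (in band, no change)
```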

## Compensation

There are a number of different compensation units that can be employed to help fix certain system metrics that are outside of a proper operating range. Most commonly, the phase characteristics are in need of compensation, especially if the magnitude response is to remain constant. There are four major types of compensation:

1. Lead compensation
2. Lag compensation
3. Lead-lag compensation
4. Lag-lead compensation

## Phase Compensation

Occasionally, it is necessary to alter the phase characteristics of a given system, without altering the magnitude characteristics. To do this, we need to alter the frequency response in such a way that the phase response is altered, but the magnitude response is not altered. To do this, we implement a special variety of controllers known as phase compensators. They are called compensators because they help to improve the phase response of the system.

There are two general types of compensators: Lead Compensators and Lag Compensators. If we combine the two types, we can get a special Lag-lead Compensator system (a lead-lag system is not practically realizable).

When designing and implementing a phase compensator, it is important to analyze the effects on the gain and phase margins of the system, to ensure that compensation doesn't cause the system to become unstable. Phase lead compensation is equivalent to adding a zero to the open-loop transfer function: because the zero is nearer to the origin than the pole, the effect of the zero dominates.

The transfer function for a lead compensator is as follows:

[Lead Compensator]

${\displaystyle T_{lead}(s)={\frac {s-z}{s-p}}}$

To make the compensator work correctly, the following property must be satisfied:

${\displaystyle |z|<|p|}$

And both the pole and zero location should be close to the origin, in the LHP. Because there is only one pole and one zero, they both should be located on the real axis.

Phase lead compensators help to shift the poles of the transfer function to the left, which is beneficial for stability purposes.

## Phase Lag

The transfer function for a lag compensator is the same as the lead-compensator, and is as follows:

[Lag Compensator]

${\displaystyle T_{lag}(s)={\frac {s-z}{s-p}}}$

However, in the lag compensator, the location of the pole and zero should be swapped:

${\displaystyle |p|<|z|}$

Both the pole and the zero should be close to the origin, on the real axis.

The Phase lag compensator helps to improve the steady-state error of the system. The poles of the lag compensator should be very close together to help prevent the poles of the system from shifting right, and therefore reducing system stability.

A lag-lead compensator combines the two effects, with the pole and zero magnitudes ordered as follows:

${\displaystyle T_{Lag-lead}(s)={\frac {(s-z_{1})(s-z_{2})}{(s-p_{1})(s-p_{2})}}}$
${\displaystyle |p_{1}|>|z_{1}|>|z_{2}|>|p_{2}|}$