# Control Systems/Sampled Data Systems

## Ideal Sampler

In this chapter, we are going to introduce the ideal sampler and the Star Transform. First, we need to introduce (or review) the Geometric Series infinite sum. The results of this sum will be very useful in calculating the Star Transform later.

Consider a sampler device that operates as follows: every T seconds, the sampler reads the current value of the input signal at that exact moment. The sampler then holds that value on the output for T seconds, before taking the next sample. We have a generic input to this system, f(t), and our sampled output will be denoted f*(t). We can then show the following relationship between the two signals:

${\displaystyle f^{\,*}(t)=f(0){\big (}\mathrm {u} (t\,-\,0)\,-\,\mathrm {u} (t\,-\,T){\big )}\,+\,f(T){\big (}\mathrm {u} (t\,-\,T)\,-\,\mathrm {u} (t\,-\,2T){\big )}\,+\;\cdots \;+\,f(nT){\big (}\mathrm {u} (t\,-\,nT)\,-\,\mathrm {u} (t\,-\,(n\,+\,1)T){\big )}\,+\;\cdots }$

Note that the value of f * at time t = 1.5 T is the same as at time t = T: between sampling instants, the output holds the most recent sample value. This relationship works for any fractional time.
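To make the sample-and-hold relationship concrete, here is a minimal Python sketch (the function name and example signal are ours, for illustration only):

```python
import math

def sample_and_hold(f, t, T):
    """Evaluate f*(t): hold the most recent sample f(kT) until (k+1)T.

    This mirrors the sum of shifted step functions above: for
    kT <= t < (k+1)T, the output equals f(kT).
    """
    k = math.floor(t / T)   # index of the most recent sampling instant
    return f(k * T)

# As noted in the text, f*(1.5 T) equals f(T)
T = 0.5
assert sample_and_hold(math.sin, 1.5 * T, T) == math.sin(T)
```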

Taking the Laplace Transform of this infinite sequence yields a special result called the Star Transform. The Star Transform is also occasionally called the "Starred Transform" in some texts.

## Geometric Series

Before we talk about the Star Transform or even the Z-Transform, it is useful for us to review the mathematical background behind solving infinite series. Specifically, because of the nature of these transforms, we are going to look at methods to solve for the sum of a geometric series.

A geometric series is a sum of terms with increasing exponents, as such:

${\displaystyle \sum _{k=0}^{n}ar^{k}=ar^{0}+ar^{1}+ar^{2}+ar^{3}+\cdots +ar^{n}\,}$

In the equation above, notice that each term in the series has a coefficient value, a. We can optionally factor out this coefficient, if the resulting equation is easier to work with:

${\displaystyle a\sum _{k=0}^{n}r^{k}=a\left(r^{0}+r^{1}+r^{2}+r^{3}+\cdots +r^{n}\,\right)}$

Once we have a series in either of these formats, we can conveniently solve for the total sum of this series using the following equation:

${\displaystyle a\sum _{k=0}^{n}r^{k}=a{\frac {1-r^{n+1}}{1-r}}}$

Let's say that we start our series off at an index other than zero — for instance, at m = 1 or m = 100:

${\displaystyle \sum _{k=m}^{n}ar^{k}=ar^{m}+ar^{m+1}+ar^{m+2}+ar^{m+3}+\cdots +ar^{n}\,}$

We can generalize the sum of this series as follows:

[Geometric Series]

${\displaystyle \sum _{k=m}^{n}ar^{k}={\frac {a(r^{m}-r^{n+1})}{1-r}}}$
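We can sanity-check this closed form numerically; the following Python sketch (illustrative, not part of the original text) compares it against the direct term-by-term sum:

```python
def geometric_sum(a, r, m, n):
    """Closed-form sum of a*r**k for k = m..n, valid for r != 1."""
    return a * (r**m - r**(n + 1)) / (1 - r)

# Compare against the direct term-by-term sum for sample values
a, r, m, n = 2.0, 0.5, 3, 10
direct = sum(a * r**k for k in range(m, n + 1))
assert abs(geometric_sum(a, r, m, n) - direct) < 1e-12
```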

With that result out of the way, we now need to worry about making this series converge when it becomes infinite, that is, as n approaches infinity. Any term that contains the variable n must remain bounded in that limit for the series to converge. If we examine the above equation, we see that there is exactly one term in the entire result with an n in it, and from that, we can set a fundamental inequality to govern the geometric series.

${\displaystyle \lim _{n\to \infty }r^{n+1}<\infty }$

To satisfy this equation, we must satisfy the following condition:

[Geometric convergence condition]

${\displaystyle |r|<1}$

Therefore, we come to the final result: the geometric series converges if and only if the magnitude of r is strictly less than one.
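A quick numerical sketch in Python (names are illustrative) shows the partial sums approaching the closed-form limit a/(1 - r) when |r| < 1, and growing without bound when r = 1:

```python
def partial_sum(a, r, n):
    """Direct partial sum of a*r**k for k = 0..n."""
    return sum(a * r**k for k in range(n + 1))

a, r = 1.0, 0.5
limit = a / (1 - r)               # closed-form infinite sum: 2.0
assert abs(partial_sum(a, r, 50) - limit) < 1e-12

# With r = 1, each partial sum is just (n + 1) copies of a, growing
# without bound, which is why the convergence condition must be strict
assert partial_sum(a, 1.0, 50) == 51.0
```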

## The Star Transform

The Star Transform is defined as such:

[Star Transform]

${\displaystyle F^{*}(s)={\mathcal {L}}^{*}[f(t)]=\sum _{k=0}^{\infty }f(kT)e^{-skT}}$

The Star Transform depends on the sampling time T and is different for a single signal depending on the frequency at which the signal is sampled. Since the Star Transform is defined as an infinite series, it is important to note that some inputs to the Star Transform will not converge, and therefore some functions do not have a valid Star Transform. Also, it is important to note that the Star Transform may only be valid under a particular region of convergence. We will cover this topic more when we discuss the Z-transform.
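As a numerical sketch (assuming a real value of s for simplicity; the function names are ours), we can truncate the series and compare it against a known closed form. For f(t) = e^{-t}, the series is geometric with r = e^{-(s+1)T}, so F*(s) = 1/(1 - e^{-(s+1)T}) wherever it converges:

```python
import math

def star_transform(f, s, T, terms=200):
    """Truncated Star Transform: sum of f(kT) * e^{-s k T}, k = 0..terms-1.

    s is taken real here for simplicity; the true transform allows complex s.
    """
    return sum(f(k * T) * math.exp(-s * k * T) for k in range(terms))

T, s = 0.1, 2.0
numeric = star_transform(lambda t: math.exp(-t), s, T)
closed = 1.0 / (1.0 - math.exp(-(s + 1) * T))
assert abs(numeric - closed) < 1e-9
```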

### Star ↔ Laplace

For the mathematical background on residues, see Complex Analysis/Residue Theory.

The Laplace Transform and the Star Transform are clearly related, because we obtained the Star Transform by using the Laplace Transform on a time-domain signal. However, the method to convert between the two results can be a slightly difficult one. To find the Star Transform of a Laplace function, we must take the residues of the Laplace equation, as such:

${\displaystyle X^{*}(s)=\sum {\bigg [}{\text{residues of }}X(\lambda ){\frac {1}{1-e^{-T(s-\lambda )}}}{\bigg ]}_{{\text{at poles of }}X(\lambda )}}$

This math is advanced for most readers, so we can also use an alternate method, as follows:

${\displaystyle X^{*}(s)={\frac {1}{T}}\sum _{n=-\infty }^{\infty }X(s+jn\omega _{s})+{\frac {x(0)}{2}}}$

Where ${\displaystyle \omega _{s}=2\pi /T}$ is the sampling frequency in radians per second.

Neither of these methods is particularly easy, however, and therefore we will not discuss the relationship between the Laplace transform and the Star Transform any more than is absolutely necessary in this book. Suffice it to say, however, that the Laplace transform and the Star Transform are related mathematically.

### Star + Laplace

In some systems, we may have components that are both continuous and discrete in nature. For instance, our feedback loop may consist of an Analog-To-Digital converter, followed by a computer (for processing), and then a Digital-To-Analog converter. In this case, the computer is acting on a digital signal, but the rest of the system is acting on continuous signals. Star transforms can interact with Laplace transforms in some of the following ways:

Given:

${\displaystyle Y(s)=X^{*}(s)H(s)}$

Then:

${\displaystyle Y^{*}(s)=X^{*}(s)H^{*}(s)}$

Given:

${\displaystyle Y(s)=X(s)H(s)}$

Then:

${\displaystyle Y^{*}(s)={\overline {XH}}^{*}(s)}$
${\displaystyle Y^{*}(s)\neq X^{*}(s)H^{*}(s)}$

Where ${\displaystyle {\overline {XH}}^{*}(s)}$ is the Star Transform of the product of X(s)H(s).

### Convergence of the Star Transform

The Star Transform is defined as being an infinite series, so it is critically important that the series converge (not reach infinity), or else the result will be nonsensical. Since the Star Transform is a geometric series (for many input signals), we can use geometric series analysis to show whether the series converges, and even under what particular conditions the series converges. The restrictions on the Star Transform that allow it to converge are known as the region of convergence (ROC) of the transform. Typically a transform must be accompanied by an explicit statement of its ROC.

## The Z-Transform

Let us say now that we have a discrete data set that is sampled at regular intervals. We can call this set x[n]:

x[n] = [ x[0] x[1] x[2] x[3] x[4] ... ]

We can utilize a special transform, called the Z-transform, to make dealing with this set easier:

[Z Transform]

${\displaystyle X(z)={\mathcal {Z}}\left\{x[n]\right\}=\sum _{n=-\infty }^{\infty }x[n]z^{-n}}$

This is also known as the Bilateral Z-Transform; we will only discuss this version of the transform in this book. Z-Transform properties, and a table of common transforms, can be found in the Appendix.
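For a finite causal sequence, the sum is easy to evaluate directly; here is a minimal Python sketch (illustrative names, finite causal sequence assumed):

```python
def z_transform(x, z):
    """Bilateral Z-transform specialized to a finite causal sequence:
    sum of x[n] * z**(-n) for n = 0..len(x)-1."""
    return sum(x[n] * z**(-n) for n in range(len(x)))

# x[n] = [1, 2, 3] gives X(z) = 1 + 2*z**-1 + 3*z**-2
x = [1.0, 2.0, 3.0]
assert z_transform(x, 2.0) == 1.0 + 2.0 / 2.0 + 3.0 / 4.0
```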

Like the Star Transform, the Z Transform is defined as an infinite series, and therefore we need to worry about convergence. In fact, there are a number of signals that have identical Z-Transform expressions but different regions of convergence (ROC). Therefore, when specifying a Z transform, you must include the ROC, or you are omitting valuable information.

### Z Transfer Functions

Like the Laplace Transform, in the Z-domain we can use the input-output relationship of the system to define a transfer function.

The transfer function in the Z domain operates exactly the same as the transfer function in the S Domain:

${\displaystyle H(z)={\frac {Y(z)}{X(z)}}}$
${\displaystyle {\mathcal {Z}}\{h[n]\}=H(z)}$

Similarly, the value h[n] which represents the response of the digital system is known as the impulse response of the system. It is important to note, however, that the definition of an "impulse" is different in the analog and digital domains.

### Inverse Z Transform

The inverse Z Transform is defined by the following path integral:

[Inverse Z Transform]

${\displaystyle x[n]=Z^{-1}\{X(z)\}={\frac {1}{2\pi j}}\oint _{C}X(z)z^{n-1}dz\ }$

Where C is a counterclockwise closed path encircling the origin and entirely in the region of convergence (ROC). The contour or path, C, must encircle all of the poles of X(z).

This math is relatively advanced compared to some other material in this book, and therefore little or no further attention will be paid to solving the inverse Z-Transform in this manner. Z transform pairs are heavily tabulated in reference texts, so many readers can consider that to be the primary method of solving for inverse Z transforms. There are a number of Z-transform pairs available in table form in The Appendix.

### Final Value Theorem

Like the Laplace Transform, the Z Transform also has an associated final value theorem:

[Final Value Theorem (Z)]

${\displaystyle \lim _{n\to \infty }x[n]=\lim _{z\to 1}(z-1)X(z)}$

This equation can be used to find the steady-state response of a system, and also to calculate the steady-state error of the system. Note that the theorem is only valid if all poles of (z − 1)X(z) lie inside the unit circle.
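As a numerical sketch (the sequence and names are ours): for x[n] = 1 − 0.5^n, whose Z-transform is z/(z−1) − z/(z−0.5), evaluating (z − 1)X(z) just outside z = 1 approximates the steady-state value of 1:

```python
def X(z):
    """Z-transform of x[n] = 1 - 0.5**n for n >= 0: z/(z-1) - z/(z-0.5)."""
    return z / (z - 1.0) - z / (z - 0.5)

# Evaluate (z - 1) X(z) just outside z = 1 to approximate the limit;
# x[n] approaches 1 as n grows, and the estimate should agree
z = 1.0 + 1e-8
estimate = (z - 1.0) * X(z)
```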

## Star ↔ Z

The Z transform is related to the Star transform through the following change of variables:

${\displaystyle z=e^{sT}}$

Notice that in the Z domain, we don't maintain any information on the sampling period, so converting to the Z domain from a Star Transformed signal loses that information. When converting back to the star domain, however, the value for T can be re-inserted into the equation, if it is still available.

Also of some importance is the fact that the Z transform is bilateral, while the Star Transform is unilateral. This means that we can only convert between the two transforms if the sampled signal is zero for all values of n < 0.

Because the two transforms are so closely related, it can be said that the Z transform is simply a notational convenience for the Star Transform. With that said, this book could easily use the Star Transform for all problems, and ignore the added burden of Z transform notation entirely. A common example of this is Richard Hamming's book "Numerical Methods for Scientists and Engineers", which uses the Fourier Transform for all problems, considering the Laplace, Star, and Z-Transforms to be merely notational conveniences. However, the Control Systems wikibook is under the impression that the correct utilization of different transforms can make problems easier to solve, and we will therefore use a multi-transform approach.

### Z plane

Note:
The lower-case z is the name of the variable, and the upper-case Z is the name of the Transform and the plane.

z is a complex variable with a real part and an imaginary part. In other words, we can define z as such:

${\displaystyle z=\operatorname {Re} (z)+j\operatorname {Im} (z)}$

Since z can be broken down into two independent components, it often makes sense to graph the variable z on the Z-plane. In the Z-plane, the horizontal axis represents the real part of z, and the vertical axis represents the imaginary part of z.

Notice also that if we define z in terms of the star-transform relation:

${\displaystyle z=e^{sT}}$

we can separate out s into real and imaginary parts:

${\displaystyle s=\sigma +j\omega }$

We can plug this into our equation for z:

${\displaystyle z=e^{(\sigma +j\omega )T}=e^{\sigma T}e^{j\omega T}}$

Through Euler's formula, we can separate out the complex exponential as such:

${\displaystyle z=e^{\sigma T}(\cos(\omega T)+j\sin(\omega T))}$

If we define two new variables, M and φ:

${\displaystyle M=e^{\sigma T}}$
${\displaystyle \phi =\omega T}$

We can write z in terms of M and φ; notice that the result is simply Euler's formula in polar form:

${\displaystyle z=M\cos(\phi )+jM\sin(\phi )}$

This is clearly a polar representation of z, with the magnitude of the polar function (M) based on the real part of s, and the angle of the polar function (φ) based on the imaginary part of s.
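A short Python sketch (illustrative) of the mapping z = e^{sT} confirms that a left-half-plane value of s lands inside the unit circle, with magnitude M = e^{σT} and angle φ = ωT:

```python
import cmath
import math

def s_to_z(sigma, omega, T):
    """Map a point s = sigma + j*omega in the s-plane to z = e^{sT}."""
    return cmath.exp(complex(sigma, omega) * T)

T = 0.1
z_stable = s_to_z(-2.0, 5.0, T)   # a left-half-plane point (sigma < 0)
# Magnitude M = e^{sigma*T} < 1: the point maps inside the unit circle
assert abs(z_stable) < 1.0
assert abs(abs(z_stable) - math.exp(-2.0 * T)) < 1e-12
# Angle phi = omega*T
assert abs(cmath.phase(z_stable) - 5.0 * T) < 1e-12
```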

### Region of Convergence

To best teach the region of convergence (ROC) for the Z-transform, we will do a quick example.

We have the following discrete series, a decaying exponential:

${\displaystyle x[n]=e^{-2n}u[n]}$

Now, we can plug this function into the Z transform equation:

${\displaystyle X(z)={\mathcal {Z}}[x[n]]=\sum _{n=-\infty }^{\infty }e^{-2n}u[n]z^{-n}}$

Note that we can remove the unit step function, and change the limits of the sum:

${\displaystyle X(z)=\sum _{n=0}^{\infty }e^{-2n}z^{-n}}$

This is because the series is zero for all n < 0. If we try to combine the n terms, we get the following result:

${\displaystyle X(z)=\sum _{n=0}^{\infty }(e^{2}z)^{-n}}$

Once we have our series in this form, we can match it to our geometric series:

${\displaystyle a=1}$
${\displaystyle r=(e^{2}z)^{-1}}$

And finally, we can find our final value, using the geometric series formula:

${\displaystyle a\sum _{k=0}^{n}r^{k}=a{\frac {1-r^{n+1}}{1-r}}=1{\frac {1-((e^{2}z)^{-1})^{n+1}}{1-(e^{2}z)^{-1}}}}$

Again, we know that to make this series converge, the magnitude of the r value must be strictly less than 1:

${\displaystyle |(e^{2}z)^{-1}|=\left|{\frac {1}{e^{2}z}}\right|<1}$
${\displaystyle |e^{2}z|>1}$

And finally we obtain the region of convergence for this Z-transform:

${\displaystyle |z|>{\frac {1}{e^{2}}}}$
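We can verify this ROC numerically; the following sketch (illustrative names) compares a truncated sum against the geometric closed form at a point inside the region of convergence:

```python
import math

def truncated_X(z, terms=200):
    """Partial sum of the Z-transform of x[n] = e^{-2n} u[n]."""
    return sum((math.exp(2) * z)**(-n) for n in range(terms))

# At z = 1 (inside the ROC, since 1 > e^{-2}) the truncated sum
# matches the geometric closed form 1 / (1 - (e^2 z)^{-1})
z = 1.0
closed = 1.0 / (1.0 - 1.0 / (math.exp(2) * z))
assert abs(truncated_X(z) - closed) < 1e-12
```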

### Laplace ↔ Z

There are no easy, direct ways to convert between the Laplace transform and the Z transform. Nearly all methods of conversion reproduce some aspects of the original equation faithfully, and incorrectly reproduce other aspects. For some of the main mapping techniques between the two, see the Z Transform Mappings Appendix.

However, there are some topics that we need to discuss. First and foremost, conversions between the Laplace domain and the Z domain are not linear; this leads to some of the following problems:

1. ${\displaystyle {\mathcal {L}}[G(z)H(z)]\neq G(s)H(s)}$
2. ${\displaystyle {\mathcal {Z}}[G(s)H(s)]\neq G(z)H(z)}$

This means that when we combine two functions in one domain multiplicatively, we must find a combined transform in the other domain. Here is how we denote this combined transform:

${\displaystyle {\mathcal {Z}}[G(s)H(s)]={\overline {GH}}(z)}$

Notice that we use a horizontal bar over top of the multiplied functions, to denote that we took the transform of the product, not of the individual pieces. However, if we have a system that incorporates a sampler, we can show a simple result. If we have the following format:

${\displaystyle Y(s)=X^{*}(s)H(s)}$

Then we can put everything in terms of the Star Transform:

${\displaystyle Y^{*}(s)=X^{*}(s)H^{*}(s)}$

and once we are in the star domain, we can do a direct change of variables to reach the Z domain:

${\displaystyle Y^{*}(s)=X^{*}(s)H^{*}(s)\to Y(z)=X(z)H(z)}$

Note that we can only make this equivalence relationship if the system incorporates an ideal sampler, and therefore one of the multiplicative terms is in the star domain.

### Example

Let's say that we have the following equation in the Laplace domain:

${\displaystyle Y(s)=A^{*}(s)B(s)+C(s)D(s)}$

And because we have a discrete sampler in the system, we want to analyze it in the Z domain. We can break up this equation into two separate terms, and transform each:

${\displaystyle {\mathcal {Z}}[A^{*}(s)B(s)]\to {\mathcal {Z}}[A^{*}(s)B^{*}(s)]=A(z)B(z)}$

And

${\displaystyle {\mathcal {Z}}[C(s)D(s)]={\overline {CD}}(z)}$

And when we add them together, we get our result:

${\displaystyle Y(z)=A(z)B(z)+{\overline {CD}}(z)}$

## Z ↔ Fourier

By substituting variables, we can relate the Star transform to the Fourier Transform as well:

${\displaystyle e^{sT}=e^{j\omega }}$
${\displaystyle e^{(\sigma +j\omega )T}=e^{j\omega }}$

If we assume that T = 1, we can relate the two equations together by setting the real part of s to zero. Notice that the relationship between the Laplace and Fourier transforms is mirrored here, where the Fourier transform is the Laplace transform with no real-part to the transform variable.

There are a number of discrete-time variants to the Fourier transform as well, which are not discussed in this book. For more information about these variants, see Digital Signal Processing.

## Reconstruction

Some of the easiest reconstruction circuits are called "Holding circuits". Once a signal has been transformed using the Star Transform (passed through an ideal sampler), the signal must be "reconstructed" using one of these hold systems (or an equivalent) before it can be analyzed in a Laplace-domain system.

If we have a sampled signal denoted by the Star Transform ${\displaystyle X^{*}(s)}$, we want to reconstruct that signal into a continuous-time waveform, so that we can manipulate it using Laplace-transform techniques.

Let's say that we have the sampled input signal, a reconstruction circuit denoted G(s), and an output denoted with the Laplace-transform variable Y(s). We can show the relationship as follows:

${\displaystyle Y(s)=X^{*}(s)G(s)}$

Reconstruction circuits then, are physical devices that we can use to convert a digital, sampled signal into a continuous-time domain, so that we can take the Laplace transform of the output signal.

### Zero order Hold

[Figure: Zero-Order Hold impulse response]

A zero-order hold circuit is a circuit that essentially inverts the sampling process: The value of the sampled signal at time t is held on the output for T time. The output waveform of a zero-order hold circuit therefore looks like a staircase approximation to the original waveform.

The transfer function for a zero-order hold circuit, in the Laplace domain, is written as such:

[Zero Order Hold]

${\displaystyle G_{h0}={\frac {1-e^{-Ts}}{s}}}$

The Zero-order hold is the simplest reconstruction circuit, and (like the rest of the circuits on this page) assumes zero processing delay in converting from digital to analog.

[Figure: A continuous input signal (gray) and the sampled signal with a zero-order hold (red)]
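A minimal Python sketch of zero-order-hold reconstruction (the function name and sample values are ours, for illustration):

```python
def zero_order_hold(samples, T, t):
    """Reconstruct a staircase waveform from samples x[0], x[1], ...

    Holds samples[k] on the output for kT <= t < (k+1)T.
    """
    k = min(int(t // T), len(samples) - 1)
    return samples[k]

# Samples of x[k] = k**2 taken with T = 1; at t = 2.5 the output is
# still the held value of x[2]
samples = [0.0, 1.0, 4.0, 9.0]
assert zero_order_hold(samples, 1.0, 2.5) == 4.0
```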

### First Order Hold

[Figure: Impulse response of a first-order hold]

The zero-order hold creates a step output waveform, but this isn't always the best way to reconstruct the signal. Instead, the First-Order Hold circuit takes the derivative of the waveform at time t, and uses that derivative to make a guess as to where the output waveform is going to be at time (t + T). The first-order hold circuit then "draws a line" from the current position to the expected future position, as the output of the waveform.

[First Order Hold]

${\displaystyle G_{h1}={\frac {1+Ts}{T}}\left[{\frac {1-e^{-Ts}}{s}}\right]^{2}}$

Keep in mind, however, that the next value of the signal will probably not be the same as the expected value of the next data point, and therefore the first-order hold may have a number of discontinuities.

[Figure: An input signal (grey) and the first-order hold circuit output (red)]
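Here is a Python sketch of the first-order-hold idea (illustrative; the slope is approximated from the two most recent samples rather than a true derivative):

```python
def first_order_hold(samples, T, t):
    """Extrapolate from the two most recent samples with a straight line.

    For kT <= t < (k+1)T the output is samples[k] plus the slope
    (samples[k] - samples[k-1]) / T times the elapsed time since kT.
    """
    k = int(t // T)
    if k == 0:
        return samples[0]           # no earlier sample to take a slope from
    slope = (samples[k] - samples[k - 1]) / T
    return samples[k] + slope * (t - k * T)

# Samples of a ramp x[k] = 2k with T = 1: the linear extrapolation
# happens to be exact for a ramp
samples = [0.0, 2.0, 4.0, 6.0]
assert first_order_hold(samples, 1.0, 2.5) == 5.0
```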

### Fractional Order Hold

The Zero-Order hold outputs the current value onto the output, and keeps it level throughout the entire sampling period. The first-order hold uses the derivative of the function to predict the next value, and produces a series of ramp outputs to produce a fluctuating waveform. Sometimes, however, neither of these solutions is desired, and therefore we have a compromise: the Fractional-Order Hold. The fractional-order hold acts like a mixture of the other two holding circuits, and takes a fractional number k as an argument. Notice that k must be between 0 and 1 for this circuit to work correctly.

[Fractional Order Hold]

${\displaystyle G_{hk}=(1-ke^{-Ts}){\frac {1-e^{-Ts}}{s}}+{\frac {k}{Ts^{2}}}(1-e^{-Ts})^{2}}$

This circuit is more complicated than either of the other hold circuits, but sometimes added complexity is worth it if we get better performance from our reconstruction circuit.

### Other Reconstruction Circuits

[Figure: Impulse response of a linear-approximation circuit]

Another type of circuit that can be used is a linear approximation circuit.

[Figure: An input signal (grey) and the output signal through a linear approximation circuit]