# Signals and Systems/Time Domain Analysis

There are many tools available to analyze a system in the time domain, although many of these tools are very complicated and involved. Nonetheless, these tools are invaluable for use in the study of linear signals and systems, so they will be covered here.

## Linear Time-Invariant (LTI) Systems

This section defines an LTI system; that definition will then be used to motivate convolution as the output of an LTI system in the next section. First, a system must be defined and the LTI properties listed. Then, for a given input, it can be shown (in this section or the following) that the output of an LTI system is the convolution of the input with the system's impulse response, motivating the definition of convolution.

Consider a system for which an input of ${\displaystyle x_{i}(t)}$ results in an output of ${\displaystyle y_{i}(t)}$, for i = 1, 2.

### Linearity

There are 3 requirements for linearity. A function must satisfy all 3 to be called "linear".

1. Additivity: An input of ${\displaystyle x_{3}(t)=x_{1}(t)+x_{2}(t)}$ results in an output of ${\displaystyle y_{3}(t)=y_{1}(t)+y_{2}(t)}$.
2. Homogeneity: An input of ${\displaystyle ax_{1}(t)}$ results in an output of ${\displaystyle ay_{1}(t)}$.
3. If x(t) = 0, y(t) = 0.

"Linear" in this sense is not the same word as is used in conventional algebra or geometry. Specifically, linearity in signals applications has nothing to do with straight lines. Here is a small example:

${\displaystyle y(t)=x(t)+5}$

This function is not linear, because when x(t) = 0, y(t) = 5 (fails requirement 3). This may surprise people, because this equation is the equation for a straight line!

Being linear is also known in the literature as "satisfying the principle of superposition". Superposition is a fancy term for saying that the system is additive and homogeneous. The terms linearity and superposition can be used interchangeably, but in this book we will prefer to use the term linearity exclusively.

We can combine the three requirements into a single equation: In a linear system, an input of ${\displaystyle a_{1}x_{1}(t)+a_{2}x_{2}(t)}$ results in an output of ${\displaystyle a_{1}y_{1}(t)+a_{2}y_{2}(t)}$.

A system is said to be additive if a sum of inputs results in a sum of outputs. To test for additivity, we need to create two arbitrary inputs, x1(t) and x2(t). We then use these inputs to produce two respective outputs:

${\displaystyle y_{1}(t)=f(x_{1}(t))}$
${\displaystyle y_{2}(t)=f(x_{2}(t))}$

Now, we need to take a sum of inputs, and prove that the system output is a sum of the previous outputs:

${\displaystyle y_{1}(t)+y_{2}(t)=f(x_{1}(t)+x_{2}(t))}$

If this final relationship is not satisfied for all possible inputs, then the system is not additive.

### Homogeneity

Similar to additivity, a system is homogeneous if a scaled input (multiplied by a constant) results in a scaled output. If we have two inputs to a system:

${\displaystyle y_{1}(t)=f(x_{1}(t))}$
${\displaystyle y_{2}(t)=f(x_{2}(t))}$

Where

${\displaystyle x_{1}(t)=cx_{2}(t)}$

Here c is an arbitrary constant. If this is the case, the system is homogeneous if

${\displaystyle y_{1}(t)=cy_{2}(t)}$

for any arbitrary c.
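The additivity and homogeneity tests above can be sketched numerically. This is a minimal illustration assuming NumPy; the two systems below (a running integrator, and the y(t) = x(t) + 5 example from earlier) and the test signals are arbitrary choices for demonstration:

```python
import numpy as np

t = np.linspace(0.0, 1.0, 101)
x1 = np.sin(2 * np.pi * t)
x2 = t ** 2

def integrator(x):
    """A linear system: running sum approximating an integral."""
    return np.cumsum(x) * (t[1] - t[0])

def offset_system(x):
    """The 'straight line' y(t) = x(t) + 5, which is not linear."""
    return x + 5.0

# Additivity and homogeneity both hold for the integrator:
assert np.allclose(integrator(x1 + x2), integrator(x1) + integrator(x2))
assert np.allclose(integrator(3.0 * x1), 3.0 * integrator(x1))

# Both fail for y(t) = x(t) + 5 (the offset gets counted twice):
assert not np.allclose(offset_system(x1 + x2),
                       offset_system(x1) + offset_system(x2))
```

Any system that passes both checks for all inputs satisfies superposition; a single failing input is enough to show a system is nonlinear.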

### Time Invariance

If the input signal x(t) produces an output y(t) then any time shifted input, x(t + δ), results in a time-shifted output y(t + δ).

This property is satisfied when the system's defining equation does not depend explicitly on time, except through the input and output signals themselves.

### Example: Simple Time Invariance

To demonstrate how to determine if a system is time-invariant, consider the two systems:

• System A: ${\displaystyle y(t)=t\,x(t)}$
• System B: ${\displaystyle \,\!b(t)=10x(t)}$

Since System A depends explicitly on t outside of x(t) and y(t), it is time-variant. System B, however, does not depend explicitly on t, so it is time-invariant.

### Example: Formal Proof

A more formal proof of why systems A and B above are, respectively, time-variant and time-invariant is now presented. The proof applies the definition of time invariance directly: delay the input, delay the output, and compare the results.

**System A**
Start with a delay of the input ${\displaystyle x_{d}(t)=\,\!x(t+\delta )}$
${\displaystyle y(t)=t\,x(t)}$
${\displaystyle y_{1}(t)=t\,x_{d}(t)=t\,x(t+\delta )}$
Now delay the output by δ
${\displaystyle y(t)=t\,x(t)}$
${\displaystyle y_{2}(t)=\,\!y(t+\delta )=(t+\delta )x(t+\delta )}$
Clearly ${\displaystyle y_{1}(t)\,\!\neq y_{2}(t)}$, therefore the system is not time-invariant.
**System B**
Start with a delay of the input ${\displaystyle x_{d}(t)=\,\!x(t+\delta )}$
${\displaystyle y(t)=10\,x(t)}$
${\displaystyle y_{1}(t)=10\,x_{d}(t)=10\,x(t+\delta )}$
Now delay the output by δ
${\displaystyle y(t)=10\,x(t)}$
${\displaystyle y_{2}(t)=y(t+\delta )=10\,x(t+\delta )}$
Clearly ${\displaystyle y_{1}(t)=\,\!y_{2}(t)}$, therefore the system is time-invariant.
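The same delay-the-input versus delay-the-output comparison can be run numerically. A minimal sketch assuming NumPy; the smooth test input and the shift δ are arbitrary choices:

```python
import numpy as np

t = np.linspace(-5.0, 5.0, 1001)
delta = 1.5                        # the shift δ
x = lambda t: np.exp(-t ** 2)      # an arbitrary smooth test input

sys_a = lambda sig: (lambda t: t * sig(t))    # System A: y(t) = t x(t)
sys_b = lambda sig: (lambda t: 10 * sig(t))   # System B: y(t) = 10 x(t)

x_shifted = lambda t_: x(t_ + delta)          # delayed input x(t + δ)

# System A: response to the shifted input vs. shifted response differ.
y1_a = sys_a(x_shifted)(t)         # y1(t) = t x(t + δ)
y2_a = sys_a(x)(t + delta)         # y2(t) = (t + δ) x(t + δ)
assert not np.allclose(y1_a, y2_a)

# System B: the two agree for every t, so it is time-invariant.
y1_b = sys_b(x_shifted)(t)         # 10 x(t + δ)
y2_b = sys_b(x)(t + delta)         # 10 x(t + δ)
assert np.allclose(y1_b, y2_b)
```

Note that a numeric check over one input can only demonstrate time variance (as it does for System A); the formal proof above is what establishes invariance for all inputs.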

## Linear Time-Invariant (LTI) Systems

A system is linear time-invariant (LTI) if it satisfies both linearity and time invariance. This book will study LTI systems almost exclusively, because they are the easiest systems to work with and are comparatively straightforward to analyze and design.

## Other Function Properties

Besides linearity and time invariance, there are a number of other properties that we can identify in a system:

### Memory

A system is said to have memory if the output from the system is dependent on past inputs (or future inputs) to the system. A system is called memoryless if the output is only dependent on the current input. Memoryless systems are easier to work with, but systems with memory are more common in digital signal processing applications. A memory system is also called a dynamic system whereas a memoryless system is called a static system.

### Causality

Causality is a property very similar to memory. A system is called causal if its output depends only on past or current inputs, and non-causal if the output depends on future inputs. Most practical systems are causal.

### Stability

Stability is a very important concept in systems, but it is also one of the hardest function properties to prove. There are several different criteria for system stability, but the most common requirement is that the system must produce a finite output when subjected to a finite input. For instance, if we apply 5 volts to the input terminals of a given circuit, we would like it if the circuit output didn't approach infinity, and the circuit itself didn't melt or explode. This type of stability is often known as "Bounded Input, Bounded Output" stability, or BIBO.

Studying BIBO stability is a relatively complicated course of study, and later books on the Electrical Engineering bookshelf will attempt to cover the topic.

## Linear Operators

Mathematical operators that satisfy the property of linearity are known as linear operators. Here are some common linear operators:

1. Derivative
2. Integral
3. Fourier Transform

### Example: Linear Functions

Determine if the following two functions are linear or not:

1. ${\displaystyle y(t)=\int _{-\infty }^{\infty }x(t)dt}$
2. ${\displaystyle y(t)={\frac {d}{dt}}x(t)}$
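Both operators are in fact linear: differentiation and integration each distribute over sums and scalar multiples. This can be sketched numerically with discrete stand-ins (a finite-difference derivative and a running Riemann sum, both hypothetical approximations chosen for illustration):

```python
import numpy as np

t = np.linspace(0.0, 2.0 * np.pi, 1001)
x1, x2 = np.sin(t), np.cos(3.0 * t)   # arbitrary test signals
a1, a2 = 2.0, -0.5                    # arbitrary coefficients

def derivative(x):
    """Finite-difference stand-in for d/dt."""
    return np.gradient(x, t)

def integral(x):
    """Running Riemann sum standing in for the integral."""
    return np.cumsum(x) * (t[1] - t[0])

# Superposition holds for both operators:
combo = a1 * x1 + a2 * x2
assert np.allclose(derivative(combo),
                   a1 * derivative(x1) + a2 * derivative(x2))
assert np.allclose(integral(combo),
                   a1 * integral(x1) + a2 * integral(x2))
```

The discrete versions pass exactly, not just approximately, because finite differences and cumulative sums are themselves linear operations on the sample values.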

## Impulse Response

### Zero-Input Response

The zero-input response is the part of the output due only to the system's initial conditions, computed with the input set to zero. For instance, a first-order system might be described by a step input and an exponential impulse response:

${\displaystyle x(t)=u(t)}$
${\displaystyle h(t)=e^{-t}u(t)}$

### Zero-State Response

The zero-state response is the part of the output due entirely to the input, computed with all initial conditions (the system's stored energy) set to zero.

### Second-Order Solution

• Example. Finding the total response of a driven RLC circuit.
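As a hedged sketch of that example: the code below simulates a series RLC circuit with a forward-Euler integration and checks that the total response decomposes into zero-input plus zero-state parts. The component values, step size, and initial conditions are all hypothetical, chosen only for illustration:

```python
import numpy as np

R, L, C = 2.0, 1.0, 0.25      # hypothetical component values (ohms, H, F)
dt, n = 1e-3, 20000           # Euler step size and number of samples

def rlc_response(v_in, v0=0.0, i0=0.0):
    """Capacitor voltage of a series RLC circuit driven by a constant
    input v_in, starting from capacitor voltage v0 and inductor current i0."""
    v, i = v0, i0
    out = np.empty(n)
    for k in range(n):
        out[k] = v
        # State equations: L di/dt = v_in - R i - v  and  C dv/dt = i
        v, i = v + (i / C) * dt, i + ((v_in - R * i - v) / L) * dt
    return out

total      = rlc_response(v_in=1.0, v0=0.5)   # driven, nonzero initial state
zero_input = rlc_response(v_in=0.0, v0=0.5)   # initial conditions only
zero_state = rlc_response(v_in=1.0, v0=0.0)   # input only

# For a linear circuit, total response = zero-input + zero-state:
assert np.allclose(total, zero_input + zero_state)
```

The decomposition holds exactly here because the Euler update is linear in both the state and the input, mirroring the linearity of the underlying circuit equations.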

## Convolution

This operation can be performed using the MATLAB command `conv`.

Convolution (folding together) is a complicated operation involving integrating, multiplying, adding, and time-shifting two signals together. Convolution is a key component to the rest of the material in this book.

The convolution a * b of two functions a and b is defined as the function:

${\displaystyle (a*b)(t)=\int _{-\infty }^{\infty }a(\tau )b(t-\tau )d\tau }$

The Greek letter τ (tau) is used as the integration variable because the letter t is already in use. τ is a "dummy variable": it exists only inside the integral and does not appear in the result.

In the convolution integral, the functions are written in terms of the integration variable τ rather than t; the function b carries the argument t − τ. Function b is first time-inverted, changing b(τ) into b(−τ). Graphically, this reflects the function about the vertical axis: everything to the right of the origin moves to the left and vice versa, turning the function into a mirror image of itself.

Next, function b is time-shifted by the variable t. Remember, once we replace everything with τ, we are now computing in the tau domain, and not in the time domain like we were previously. Because of this, t can be used as a shift parameter.

We multiply the two functions together, time-shifting along the way, and take the area under the resulting product at each shift. Typically the overlap between the two functions grows up to some point and then shrinks again as the shift continues. Wherever the two functions overlap in the τ domain, the product contributes to the convolution; for any shift at which the functions do not overlap at all, the value of the convolution is zero.

After the integration, the dummy variable τ is gone: the definite integral is evaluated over all τ, leaving a function of the shift variable t. It is important to remember that the resulting function will be a combination of the two input functions, and will share some properties of both.
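A minimal numeric sketch of this procedure, assuming NumPy: `np.convolve` computes the sum of shifted products, and scaling by the sample spacing approximates the integral. The two signals are arbitrary choices for illustration:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 5.0, dt)

a = np.exp(-t)                    # a(t): decaying exponential, t >= 0
b = (t <= 1.0).astype(float)      # b(t): rectangular pulse of width 1

# Discrete convolution, scaled by dt to approximate the integral;
# keep only samples that fall on the original time grid.
y = np.convolve(a, b)[: len(t)] * dt

# Analytically, (a * b)(1) = integral of e^(-tau) from 0 to 1
# = 1 - e^(-1), and y[int(1/dt)] lands close to this value.
```

The result y rises while the pulse slides over the exponential, peaks near t = 1 when the overlap is largest, then decays, exactly the overlap behavior described above.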

### Properties of Convolution

The convolution operation satisfies several useful properties:

Commutativity
${\displaystyle f*g=g*f\,}$
Associativity
${\displaystyle f*(g*h)=(f*g)*h\,}$
Distributivity
${\displaystyle f*(g+h)=(f*g)+(f*h)\,}$
Associativity With Scalar Multiplication
${\displaystyle a(f*g)=(af)*g=f*(ag)\,}$

for any real (or complex) number a.

Differentiation Rule
${\displaystyle (f*g)'=f'*g=f*g'\,}$
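The algebraic identities above can be spot-checked on discrete sequences with NumPy. The particular arrays are arbitrary; g carries a trailing zero so that the sum g + h is defined for the distributivity check:

```python
import numpy as np

f = np.array([1.0, -2.0, 3.0])
g = np.array([0.5, 0.25, 0.0])
h = np.array([2.0, 0.0, 1.0])
a = 4.0

# Commutativity: f * g = g * f
assert np.allclose(np.convolve(f, g), np.convolve(g, f))

# Associativity: f * (g * h) = (f * g) * h
assert np.allclose(np.convolve(f, np.convolve(g, h)),
                   np.convolve(np.convolve(f, g), h))

# Distributivity: f * (g + h) = (f * g) + (f * h)
assert np.allclose(np.convolve(f, g + h),
                   np.convolve(f, g) + np.convolve(f, h))

# Associativity with scalar multiplication: a (f * g) = (a f) * g
assert np.allclose(a * np.convolve(f, g), np.convolve(a * f, g))
```

These hold exactly for discrete sequences, not just approximately, because each identity is an algebraic rearrangement of the same sum of products.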

### Example 1

Find the convolution, z(t), of the following two signals, x(t) and y(t), by using (a) the integral representation of the convolution equation and (b) multiplication in the Laplace domain.

The signal y(t) is simply the Heaviside step, u(t).

The signal x(t) is the product of the following infinite sinusoid, x0(t), and windowing function, xw(t):

${\displaystyle x_{0}(t)=\sin(t)\,}$
${\displaystyle x_{w}(t)=u(t)-u(t-2\pi )\,}$

The convolution we wish to perform is therefore:

${\displaystyle z(t)=x(t)*y(t)\,}$
${\displaystyle z(t)=\sin(t)\left[u(t)-u(t-2\pi )\right]*u(t)\,}$
${\displaystyle z(t)=\left[\sin(t)u(t)-\sin(t)u(t-2\pi )\right]*u(t)\,}$

From the distributive law:

${\displaystyle z(t)=\sin(t)u(t)*u(t)-\sin(t)u(t-2\pi )*u(t)\,}$
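Carrying the two remaining integrals through gives z(t) = 1 − cos t on [0, 2π) and zero afterwards, i.e. z(t) = (1 − cos t)[u(t) − u(t − 2π)]. A numeric sketch, assuming NumPy, confirms this closed form:

```python
import numpy as np

dt = 0.001
t = np.arange(0.0, 12.0, dt)

x = np.sin(t) * (t < 2 * np.pi)   # windowed sinusoid (t >= 0 on this grid)
y = np.ones_like(t)               # unit step u(t) sampled for t >= 0

# Numeric convolution, scaled by dt to approximate the integral.
z = np.convolve(x, y)[: len(t)] * dt

# Closed form from finishing the integrals:
# z(t) = (1 - cos t) [u(t) - u(t - 2*pi)]
z_exact = (1.0 - np.cos(t)) * (t < 2 * np.pi)

assert np.max(np.abs(z - z_exact)) < 1e-2
```

Note that the convolution vanishes for t ≥ 2π: the step has by then accumulated one full period of the sinusoid, whose integral is zero.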

## Correlation

This operation can be performed using the MATLAB command `xcorr`.

Akin to convolution is a technique called correlation, which combines two functions in the time domain into a single resultant function in the time domain. Correlation is not as important to our study as convolution, but it has a number of properties that will be useful nonetheless.

The correlation of two functions, g(t) and h(t) is defined as such:

${\displaystyle R_{gh}(t)=\int _{-\infty }^{\infty }g(\tau )h(t+\tau )d\tau }$

Where the capital R is the Correlation Operator, and the subscripts to R are the arguments to the correlation operation.

We notice immediately that correlation is similar to convolution, except that we don't time-invert the second argument before we shift and integrate. Because of this, we can define correlation in terms of convolution, as such:

${\displaystyle R_{gh}(t)=g(-t)*h(t)}$
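This relationship can be checked on discrete sequences: NumPy's correlate routine follows the same shift-without-inversion convention, so it matches convolution with one argument time-reversed. The arrays are arbitrary:

```python
import numpy as np

g = np.array([1.0, -2.0, 3.0, 0.5])
h = np.array([0.0, 1.0, 0.5])

corr = np.correlate(g, h, mode="full")       # correlation: shift, no inversion
conv = np.convolve(g, h[::-1], mode="full")  # convolution with h time-reversed

# The two agree sample for sample:
assert np.allclose(corr, conv)
```

Because convolution already time-inverts its second argument, reversing that argument first cancels the inversion, leaving a pure shift-multiply-integrate, which is exactly correlation.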

### Uses of Correlation

Correlation is used in many places because it demonstrates one important fact: correlation measures how much similarity there is between the two argument functions. The larger the peak of the correlation curve, the greater the similarity between the two signals.

### Autocorrelation

The term "autocorrelation" refers to the correlation of a function with itself. Autocorrelation is denoted by using the same subscript twice on the correlation operator:

${\displaystyle R_{xx}(t)=x(t)*x(-t)}$

While it might seem ridiculous to correlate a function with itself, there are a number of uses for autocorrelation that will be discussed later. Autocorrelation satisfies several important properties:

1. The maximum value of the autocorrelation always occurs at t = 0; for every shift, ${\displaystyle |R_{xx}(t)|\leq R_{xx}(0)}$.
2. Autocorrelation is an even function, symmetric about t = 0: ${\displaystyle R_{xx}(-t)=R_{xx}(t)}$.
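Both properties can be verified numerically on a sampled signal. A sketch assuming NumPy, using a random test signal; the lag-zero sample of the full correlation sequence sits at the center index:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(256)         # an arbitrary real test signal

r = np.correlate(x, x, mode="full")  # autocorrelation over all lags
center = len(x) - 1                  # index corresponding to lag t = 0

# Property 1: the peak is at zero lag, and |R_xx(t)| <= R_xx(0).
assert np.argmax(r) == center
assert np.all(np.abs(r) <= r[center] + 1e-9)

# Property 2: even symmetry, R_xx(-t) = R_xx(t).
assert np.allclose(r, r[::-1])
```

The peak at zero lag is just the signal's total energy, the sum of x(t) squared, which no shifted overlap can exceed.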

### Crosscorrelation

Crosscorrelation is any instance of correlation that is not autocorrelation; in general, it occurs when the two function arguments to the correlation are not equal. Crosscorrelation is used to find the similarity between two different signals:

${\displaystyle R_{gh}(t)=\int _{-\infty }^{\infty }g(\tau )h(t+\tau )d\tau }$