# Engineering Analysis/Linear Transformations

Some sections in this chapter require that the student know how to take the derivative of a function with respect to a particular variable. This is commonly known as partial differentiation, and is covered in Calculus.

## Linear Transformations

A linear transformation is a matrix M that operates on a vector in a space *V*, and results in a vector in a different space *W*. We can define a transformation as such:

$$y = M x$$

In the above equation, we say that *V* is the **domain space** of the transformation, and *W* is the **range space** of the transformation. Also, we can use a "function notation" for the transformation, and write it as:

$$y = T(x)$$

Where x is a vector in *V*, and y is a vector in *W*. To be a linear transformation, the principle of superposition must hold for the transformation:

$$T(a x_1 + b x_2) = a T(x_1) + b T(x_2)$$

Where a and b are arbitrary scalars.
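The superposition requirement can be checked numerically; a minimal sketch in NumPy, using a hypothetical 2 × 3 matrix M mapping vectors from a 3-dimensional domain space into a 2-dimensional range space:

```python
import numpy as np

# Hypothetical 2x3 matrix: maps vectors in R^3 (domain) to R^2 (range).
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

def T(x):
    # The transformation y = M x
    return M @ x

x1 = np.array([1.0, 0.0, 2.0])
x2 = np.array([0.0, 1.0, 1.0])
a, b = 2.0, -3.0

# Superposition: T(a*x1 + b*x2) must equal a*T(x1) + b*T(x2)
lhs = T(a * x1 + b * x2)
rhs = a * T(x1) + b * T(x2)
print(np.allclose(lhs, rhs))  # True: every matrix map satisfies superposition
```

Because matrix multiplication distributes over vector addition and commutes with scalar multiplication, any transformation defined by a matrix automatically satisfies this test.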

## Null Space

The nullspace of a matrix M is the set of all vectors x for which the following relationship holds:

$$M x = 0$$

Where M is a linear transformation matrix. Depending on the size and rank of M, there may be zero or more vectors in the nullspace. Here are a few rules to remember:

- If the matrix M is invertible, the nullspace contains only the zero vector.
- The dimension of the nullspace (N), the number of basis vectors needed to span it, is the difference between the number of columns (C) of the matrix and the rank (R) of the matrix:

$$N = C - R$$

If the matrix is in reduced row-echelon form, the dimension of the nullspace is given by the number of columns without a leading 1. For every column where there is not a leading one, a nullspace basis vector can be obtained by placing a negative one in the leading position for that column and copying the column's remaining entries into the pivot positions.

We denote the nullspace of a matrix A as:

$$N(A)$$
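These rules can be illustrated numerically; a sketch using a hypothetical rank-deficient 3 × 3 matrix, with a nullspace basis recovered from the singular value decomposition (one common numerical approach):

```python
import numpy as np

# Hypothetical rank-deficient 3x3 matrix: column 3 = column 1 + column 2,
# so C = 3, R = 2, and the nullspace has dimension N = C - R = 1.
A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(A)
print(rank)  # 2

# The right singular vectors whose singular values are (numerically)
# zero span N(A).
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:].T           # columns span the nullspace
print(np.allclose(A @ null_basis, 0))  # True: A x = 0 for every nullspace vector
```

Since A is not invertible here, the nullspace contains more than just the zero vector, in agreement with the first rule above.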

## Linear Equations

If we have a set of linear equations in terms of variables x, scalar coefficients a, and a scalar result b, we can write the system in matrix notation as such:

$$A x = b$$

Where x is an m × 1 vector, b is an n × 1 vector, and A is an n × m matrix. Therefore, this is a system of n equations in m unknown variables. There are 3 possibilities:

- If Rank(A) is not equal to Rank([A b]), there is no solution.
- If Rank(A) = Rank([A b]) = m (the number of unknowns), there is exactly one solution.
- If Rank(A) = Rank([A b]) < m, there are infinitely many solutions.
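The three cases can be distinguished by comparing ranks; a sketch using a hypothetical `classify` helper:

```python
import numpy as np

def classify(A, b):
    # Hypothetical helper: compare Rank(A) with Rank([A b]) and with the
    # number of unknowns m to classify the system A x = b.
    m = A.shape[1]
    rA = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rA != rAb:
        return "no solution"
    if rA == m:
        return "unique solution"
    return "infinitely many solutions"

# Full rank, consistent: exactly one solution.
print(classify(np.array([[1.0, 1.0],
                         [0.0, 1.0]]), np.array([2.0, 1.0])))
# Rank-deficient and inconsistent: no solution.
print(classify(np.array([[1.0, 1.0],
                         [2.0, 2.0]]), np.array([1.0, 3.0])))
# Rank-deficient but consistent: infinitely many solutions.
print(classify(np.array([[1.0, 1.0],
                         [2.0, 2.0]]), np.array([1.0, 2.0])))
```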

### Complete Solution

The complete solution of a linear equation is given by the sum of the **homogeneous solution** and the **particular solution**. The homogeneous solution comes from the nullspace of the transformation, and the particular solution is any single vector x that satisfies the equation:

$$x = x_h + x_p$$

Where

- $x_h$ is the homogeneous solution, a vector in the nullspace of A that satisfies $A x_h = 0$
- $x_p$ is the particular solution that satisfies $A x_p = b$
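As a concrete illustration, consider a hypothetical underdetermined system where both a particular solution and a homogeneous solution are easy to find by inspection:

```python
import numpy as np

# Hypothetical underdetermined system: 2 equations, 3 unknowns.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 5.0])

# A particular solution (found by inspection, setting x3 = 0).
x_p = np.array([3.0, 5.0, 0.0])
print(np.allclose(A @ x_p, b))   # True: satisfies A x = b

# A homogeneous solution: A x_h = 0.
x_h = np.array([-1.0, -1.0, 1.0])
print(np.allclose(A @ x_h, 0))   # True: x_h lies in the nullspace

# Any x_p + t * x_h, for scalar t, is also a solution.
t = 7.0
print(np.allclose(A @ (x_p + t * x_h), b))  # True
```

Sweeping t over all scalars traces out the complete solution set.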

### Minimum Norm Solution

If Rank(A) = Rank([A b]) < m, then there are infinitely many solutions to the linear equation. In this situation, a single solution, called the **minimum norm** solution, is typically chosen. This solution represents the "best" solution to the problem. To find the minimum norm solution, we must minimize the norm of x subject to the constraint:

$$A x = b$$

There are a number of methods for minimizing a quantity subject to a given constraint, and we will talk about them later.
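One standard closed form (a known result, though not derived in this section), valid when A has full row rank, is $x = A^T (A A^T)^{-1} b$, which is equivalent to applying the Moore-Penrose pseudoinverse; a sketch with a hypothetical system:

```python
import numpy as np

# Hypothetical underdetermined system with infinitely many solutions.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([3.0, 5.0])

# Minimum norm solution for full-row-rank A: x = A^T (A A^T)^{-1} b.
x_mn = A.T @ np.linalg.solve(A @ A.T, b)

print(np.allclose(A @ x_mn, b))                  # True: constraint still holds
print(np.allclose(x_mn, np.linalg.pinv(A) @ b))  # True: matches the pseudoinverse

# Its norm is no larger than that of any other solution, e.g. (3, 5, 0).
print(np.linalg.norm(x_mn) <= np.linalg.norm(np.array([3.0, 5.0, 0.0])))  # True
```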

## Least-Squares Curve Fit

If Rank(A) does not equal Rank([A b]), then the linear equation has no solution. However, we can find the value of x that comes closest to satisfying it. This "best fit" solution is known as the Least-Squares curve fit.

We define an error quantity E, such that:

$$E = A x - b$$

Our job then is to find the value of x that minimizes the squared norm of E:

$$\|E\|^2 = (A x - b)^T (A x - b)$$

We do this by differentiating with respect to x, and setting the result to zero:

$$\frac{\partial}{\partial x} \|E\|^2 = 2 A^T A x - 2 A^T b = 0$$

Solving, we get our result:

$$x = (A^T A)^{-1} A^T b$$
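This result (the normal equations) can be checked against NumPy's least-squares routine; a sketch fitting a hypothetical straight line to four data points:

```python
import numpy as np

# Hypothetical overdetermined system: fit a line y = c0 + c1*t to 4 points.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 2.0, 4.0])
A = np.column_stack([np.ones_like(t), t])  # n = 4 equations, m = 2 unknowns

# Normal equations: x = (A^T A)^{-1} A^T b, solved without forming the inverse.
x_ne = np.linalg.solve(A.T @ A, A.T @ y)
print(x_ne)  # [0.9 0.9] -> intercept c0 = 0.9, slope c1 = 0.9

# Same answer from NumPy's dedicated least-squares routine.
x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.allclose(x_ne, x_ls))  # True
```

Note that solving the linear system `A.T @ A` directly is preferred over computing the explicit inverse, for both speed and numerical stability.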