# Engineering Analysis/Linear Transformations

## Linear Transformations

A linear transformation is a map T that takes a vector in a space V and produces a vector in a (possibly different) space W; between finite-dimensional spaces, any such transformation can be represented by a matrix M. We can define a transformation as such:

${\displaystyle T:V\to W}$

In the above equation, we say that V is the domain space of the transformation, and W is the range space of the transformation. Also, we can use a "function notation" for the transformation, and write it as:

${\displaystyle M(x)=Mx=y}$

Where x is a vector in V, and y is a vector in W. To be a linear transformation, the principle of superposition must hold for the transformation:

${\displaystyle M(av_{1}+bv_{2})=aM(v_{1})+bM(v_{2})}$

Where a and b are arbitrary scalars.
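The superposition property above can be checked numerically for any matrix map. The matrix, scalars, and vectors below are arbitrary illustrative values, not taken from the text:

```python
# Check superposition for the map T(x) = Mx with a sample 2x2 matrix.
# M, a, b, v1, v2 are arbitrary illustrative choices (assumptions).

def mat_vec(M, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

def vec_add(u, v):
    return [ui + vi for ui, vi in zip(u, v)]

def vec_scale(a, v):
    return [a * vi for vi in v]

M = [[2, 1],
     [0, 3]]
a, b = 4, -2
v1, v2 = [1, 2], [5, -1]

# Left side: M(a*v1 + b*v2)
lhs = mat_vec(M, vec_add(vec_scale(a, v1), vec_scale(b, v2)))
# Right side: a*M(v1) + b*M(v2)
rhs = vec_add(vec_scale(a, mat_vec(M, v1)), vec_scale(b, mat_vec(M, v2)))

print(lhs == rhs)  # True: superposition holds for every matrix map
```

The same check would succeed for any matrix and any scalars, since matrix multiplication distributes over vector addition and commutes with scalar multiplication.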

## Null Space

The Nullspace of an equation is the set of all vectors x for which the following relationship holds:

${\displaystyle Mx=0}$

Where M is a linear transformation matrix. Depending on the size and rank of M, there may be zero or more vectors in the nullspace. Here are a few rules to remember:

1. If the matrix M is invertible, then the nullspace contains only the zero vector.
2. The dimension of the nullspace (N), i.e. the number of basis vectors needed to span it, is the number of columns (C) of the matrix minus the rank (R) of the matrix:
${\displaystyle N=C-R}$

If the matrix is in row-echelon form, the dimension of the nullspace is given by the number of columns without a leading 1 (the free columns). For every such column, a nullspace basis vector can be obtained by copying that column's entries into a vector and placing a negative one in the position corresponding to that column.

We denote the nullspace of a matrix A as:

${\displaystyle {\mathcal {N}}\{A\}}$
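The free-column rule can be verified directly on a small example. The matrix below is an illustrative choice with rank R = 2 and C = 3 columns, so N = C − R = 1:

```python
# Verify a nullspace vector built with the free-column rule.
# M is an illustrative row-echelon matrix (assumption): columns 1 and 2
# carry leading ones, column 3 is free, so the nullspace has dimension 1.

def mat_vec(M, x):
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

M = [[1, 0,  2],
     [0, 1, -3],
     [0, 0,  0]]

# Copy the free column's entries and place -1 in that column's position.
v = [2, -3, -1]

print(mat_vec(M, v))  # [0, 0, 0]: v lies in the nullspace of M
```

Multiplying out each row confirms that M·v = 0, so v is indeed a basis for the one-dimensional nullspace.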

## Linear Equations

If we have a set of linear equations in terms of variables x, scalar coefficients a, and a scalar result b, we can write the system in matrix notation as such:

${\displaystyle Ax=b}$

Where x is an m × 1 vector, b is an n × 1 vector, and A is an n × m matrix. Therefore, this is a system of n equations with m unknown variables. There are 3 possibilities:

1. If Rank(A) is not equal to Rank([A b]), there is no solution.
2. If Rank(A) = Rank([A b]) = m, there is exactly one solution.
3. If Rank(A) = Rank([A b]) < m, there are infinitely many solutions.
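These three rank tests can be sketched in code. The rank routine below uses Gaussian elimination with exact rational arithmetic; the sample systems are illustrative values, not from the text:

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix via Gaussian elimination with exact fractions."""
    A = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        pivot = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if pivot is None:
            continue  # no pivot in this column
        A[r], A[pivot] = A[pivot], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * p for a, p in zip(A[i], A[r])]
        r += 1
    return r

def classify(A, b):
    """Apply the three rank tests to Ax = b (m = number of unknowns)."""
    m = len(A[0])
    aug = [row + [bi] for row, bi in zip(A, b)]   # the matrix [A b]
    rA, rAb = rank(A), rank(aug)
    if rA != rAb:
        return "no solution"
    return "unique solution" if rA == m else "infinitely many solutions"

print(classify([[1, 1], [1, 1]], [1, 2]))   # no solution
print(classify([[1, 1], [1, -1]], [2, 0]))  # unique solution
print(classify([[1, 1], [2, 2]], [1, 2]))   # infinitely many solutions
```

Each sample system exercises one branch: inconsistent rows, an invertible matrix, and a rank-deficient but consistent system.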

### Complete Solution

The complete solution of a linear equation is given by the sum of the homogeneous solution and the particular solution. The homogeneous solution is a vector from the nullspace of the transformation, and the particular solution is any value of x that satisfies the equation:

${\displaystyle A(x)=b}$
${\displaystyle A(x_{h}+x_{p})=b}$

Where

${\displaystyle x_{h}}$ is the homogeneous solution: any vector in the nullspace of A, satisfying the equation ${\displaystyle A(x_{h})=0}$
${\displaystyle x_{p}}$ is the particular solution that satisfies the equation ${\displaystyle A(x_{p})=b}$
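Because A(x_h) = 0, adding any multiple of x_h to x_p still satisfies the equation. A small check, using an illustrative rank-deficient system (all values are assumptions):

```python
# Show that x_p + c*x_h solves Ax = b for any scalar c.
# A, b, x_p, x_h are illustrative values (assumptions).

def mat_vec(M, x):
    return [sum(m * xj for m, xj in zip(row, x)) for row in M]

A = [[1, 2],
     [2, 4]]      # rank 1, so the nullspace is one-dimensional
b = [3, 6]

x_p = [3, 0]      # particular solution: A(x_p) = b
x_h = [-2, 1]     # homogeneous solution: A(x_h) = 0

# Every combination x_p + c*x_h maps to b.
results = [mat_vec(A, [p + c * h for p, h in zip(x_p, x_h)])
           for c in (0, 1, -5)]
print(results)  # every entry equals b = [3, 6]
```

This is why a consistent rank-deficient system has infinitely many solutions: each choice of c gives a different complete solution.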

### Minimum Norm Solution

If Rank(A) = Rank([A b]) < m, then there are infinitely many solutions to the linear equation. In this situation, we typically select the solution called the minimum norm solution, which represents the "best" solution to the problem. To find the minimum norm solution, we must minimize the norm of x subject to the constraint:

${\displaystyle Ax-b=0}$

There are a number of methods to minimize a value according to a given constraint, and we will talk about them later.
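Although the general minimization methods come later, one standard closed form (not derived in the text above) is x = Aᵀ(AAᵀ)⁻¹b, valid when A has full row rank. A minimal sketch for a one-equation, two-unknown system with assumed values:

```python
# Minimum-norm solution x = A^T (A A^T)^{-1} b for an underdetermined
# system. This closed form assumes A has full row rank; it is the
# standard formula for this case, not taken from the text above.

A = [[1, 2]]        # one equation in two unknowns: x1 + 2*x2 = 5
b = [5]

# A A^T is 1x1 here, so its inverse is just a scalar reciprocal.
aat = sum(a * a for a in A[0])
x = [a * (b[0] / aat) for a in A[0]]

print(x)  # [1.0, 2.0]: satisfies the equation with the smallest norm
```

Any other solution of x1 + 2x2 = 5, such as [5, 0], has a strictly larger norm than this one.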

## Least-Squares Curve Fit

If Rank(A) does not equal Rank([A b]), then the linear equation has no solution. However, we can find the solution that comes closest. This "best fit" solution is known as the least-squares curve fit.

We define an error quantity E, such that:

${\displaystyle E=Ax-b\neq 0}$

Our job then is to find the minimum value for the norm of E:

${\displaystyle \|E\|^{2}=\|Ax-b\|^{2}=(Ax-b)^{T}(Ax-b)}$

We do this by differentiating with respect to x, and setting the result to zero:

${\displaystyle {\frac {\partial \|E\|^{2}}{\partial x}}=2A^{T}(Ax-b)=0}$

Solving, we get our result:

${\displaystyle x=(A^{T}A)^{-1}A^{T}b}$
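The formula can be applied by hand to a small overdetermined system. The sketch below fits a line c₀ + c₁t through three assumed data points, using exact fractions so the normal-equation arithmetic is transparent:

```python
from fractions import Fraction

# Least-squares solution x = (A^T A)^{-1} A^T b for a small
# overdetermined system: fit a line c0 + c1*t through three points.
# The data points (0,0), (1,0), (2,2) are illustrative assumptions.

A = [[1, 0],
     [1, 1],
     [1, 2]]                 # each row is [1, t] for t = 0, 1, 2
b = [Fraction(0), Fraction(0), Fraction(2)]

# Form the normal equations: A^T A (2x2) and A^T b (2x1).
ata = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
atb = [sum(A[k][i] * b[k] for k in range(3)) for i in range(2)]

# Invert the 2x2 matrix A^T A directly via the adjugate formula.
det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
x = [(ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det,
     (ata[0][0] * atb[1] - ata[1][0] * atb[0]) / det]

print(x)  # the best-fit line is -1/3 + 1*t
```

The resulting residual E = Ax − b is nonzero (the points are not collinear), but it is orthogonal to the columns of A, which is exactly the condition 2Aᵀ(Ax − b) = 0 derived above.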