Circuit Theory/Systems of Linear Equations

Linear equations

A linear equation is an equation of the form ${\displaystyle a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}=b}$

The numbers ${\displaystyle a_{1},a_{2},\ldots ,a_{n}}$ are called the coefficients of the equation, and ${\displaystyle b}$ is called the constant term. Variables in linear algebra are usually written ${\displaystyle x_{1},x_{2},\ldots ,x_{n}}$ instead of ${\displaystyle x,y,z}$, etc., because real-world problems can have millions of variables. Problems in this text will have no more than 5 or 6.

Every variable appearing on the left side of a linear equation must be raised to the power exactly 1; the right-hand side is a constant, that is, a term of power zero in the variables.

Examples

1. ${\displaystyle 2x_{1}-5x_{2}+x_{3}=9\!}$
a linear equation

2. ${\displaystyle x_{1}+2x_{2}+2{\sqrt {x_{3}}}=1\!}$
NOT a linear equation because of the square root. A square root ${\displaystyle {\sqrt {x_{3}}}}$ is the same as ${\displaystyle x_{3}}$ to the power ${\displaystyle 1/2}$ and not to the power 1.

3. ${\displaystyle -10x_{1}+2x_{2}=0\!}$
a linear equation

4. ${\displaystyle x_{1}x_{2}+2x_{3}=0\!}$
NOT a linear equation because ${\displaystyle x_{1}x_{2}}$ is a term of power 2.

Systems of linear equations

 A system of m equations in n variables has the form ${\displaystyle {\begin{matrix}a_{11}x_{1}&+a_{12}x_{2}&\cdots &+a_{1n}x_{n}&=&b_{1}\\a_{21}x_{1}&+a_{22}x_{2}&\cdots &+a_{2n}x_{n}&=&b_{2}\\\cdots &\cdots &\cdots &\cdots &\cdots &\\a_{m1}x_{1}&+a_{m2}x_{2}&\cdots &+a_{mn}x_{n}&=&b_{m}\end{matrix}}}$

Note that if the coefficient of a variable in a linear equation is zero, that term may be omitted, so not every variable needs to appear in every equation. Below are two systems of linear equations:

1. ${\displaystyle {\begin{matrix}2x_{1}&-2x_{2}&+x_{3}&=&1\\-3x_{1}&+2x_{2}&&=&3\\3x_{1}&+2x_{2}&+x_{3}&=&7\end{matrix}}}$

2. ${\displaystyle {\begin{matrix}2x_{1}&-2x_{2}&-x_{3}&+2x_{4}&=&0\\-3x_{1}&&+2x_{3}&&=&0\\3x_{1}&+2x_{2}&+x_{3}&-x_{4}&=&0\end{matrix}}}$

The second system is called a homogeneous system as all the constant terms are zero.
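Every homogeneous system is satisfied by the trivial solution (all variables equal to zero); when there are more variables than equations, as in the second system, nontrivial solutions exist as well. As a sketch using NumPy (not part of the original text), one such solution can be found from the singular value decomposition of the coefficient matrix:

```python
import numpy as np

# Coefficient matrix of the second (homogeneous) example system:
#   2x1 - 2x2 -  x3 + 2x4 = 0
#  -3x1       + 2x3       = 0
#   3x1 + 2x2 +  x3 -  x4 = 0
A = np.array([[ 2.0, -2.0, -1.0,  2.0],
              [-3.0,  0.0,  2.0,  0.0],
              [ 3.0,  2.0,  1.0, -1.0]])

# With 4 unknowns but only 3 equations, the null space is nontrivial.
# The last row of Vh from the SVD lies in the null space of A.
_, _, vh = np.linalg.svd(A)
v = vh[-1]

print(A @ v)  # approximately the zero vector
```

Any scalar multiple of `v` is also a solution, which is characteristic of homogeneous systems.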

Usually, a system of linear equations consists of two or more linear equations that have the same variables. Theoretically, we may treat a single linear equation as a system also.

Matrix formulation

Arranging the coefficients of a linear system in an m-by-n matrix (i.e. an array with m rows and n columns), we get

${\displaystyle A=\left({\begin{matrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\cdots &\cdots &\cdots &\cdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{matrix}}\right)}$

and let ${\displaystyle b=\left({\begin{matrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{matrix}}\right)}$ and ${\displaystyle x=\left({\begin{matrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{matrix}}\right)}$. The system of linear equations can then be written as

${\displaystyle Ax=b}$

This motivates the study of matrix theory. For further introduction, see Linear Algebra/Matrices.
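As a concrete sketch (using NumPy, which the text does not assume), the first example system above can be put in the form ${\displaystyle Ax=b}$ and solved numerically:

```python
import numpy as np

# First example system:
#   2x1 - 2x2 + x3 = 1
#  -3x1 + 2x2      = 3
#   3x1 + 2x2 + x3 = 7
A = np.array([[ 2.0, -2.0, 1.0],
              [-3.0,  2.0, 0.0],
              [ 3.0,  2.0, 1.0]])
b = np.array([1.0, 3.0, 7.0])

# Solve Ax = b (valid here because A is square and non-singular)
x = np.linalg.solve(A, b)
print(x)  # approximately [0, 1.5, 4]
```

`np.linalg.solve` is preferred over computing an explicit inverse; it factorizes A and is both faster and more numerically stable.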

Solutions of systems of linear equations

A solution of a system of linear equations is an assignment of a number to each variable that makes every equation true. For example, a solution of the first system above is ${\displaystyle (0,1.5,4)}$ because ${\displaystyle 2(0)-2(1.5)+4=1}$, ${\displaystyle -3(0)+2(1.5)=3}$, and ${\displaystyle 3(0)+2(1.5)+4=7}$.
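The substitution check above can be sketched in NumPy (an illustration, not part of the original text): multiplying the coefficient matrix by a candidate solution should reproduce the constant vector.

```python
import numpy as np

# Coefficient matrix and constant vector of the first example system
A = np.array([[ 2.0, -2.0, 1.0],
              [-3.0,  2.0, 0.0],
              [ 3.0,  2.0, 1.0]])
b = np.array([1.0, 3.0, 7.0])
x = np.array([0.0, 1.5, 4.0])  # the candidate solution (0, 1.5, 4)

# x is a solution exactly when A @ x reproduces b
print(A @ x)                  # [1. 3. 7.]
print(np.allclose(A @ x, b))  # True
```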

Applications

Solving systems of linear equations is essential in modern engineering. Physical systems too complex to describe with closed-form formulas are approximated to high accuracy by solving very large sets of linear equations. By breaking the subject at hand into tiny, finite pieces, a solution can be obtained after a great deal of brute-force calculation. Such discretization introduces error, but the error can be reduced by using smaller pieces or more sophisticated algorithms. Specific methods include finite difference analysis, finite element analysis, and boundary element analysis. Specific applications include computational fluid dynamics, heat transfer, fatigue analysis, strain analysis, and stress analysis.

Systems of linear equations are also used in statistical regression. A common algorithm for least squares regression uses a matrix with n rows and m columns, where n is the number of data points and m is the number of basis functions, i.e. the number of coefficients sought. (A polynomial ${\displaystyle ax^{3}+bx^{2}+cx+d}$, for example, uses four basis functions: ${\displaystyle x^{3}}$, ${\displaystyle x^{2}}$, ${\displaystyle x}$, and 1.) A good explanation of this algorithm may be found in Numerical Recipes: The Art of Scientific Computing.
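As a hedged sketch of the regression idea (using NumPy's `lstsq`; the data points here are invented for illustration, generated from ${\displaystyle y=x^{3}+x+1}$ so that the fit recovers the coefficients exactly):

```python
import numpy as np

# Hypothetical data generated from y = x^3 + x + 1 (a = 1, b = 0, c = 1, d = 1)
xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0])
ys = xs**3 + xs + 1.0

# Design matrix: n = 5 rows (data points), m = 4 columns (basis functions)
M = np.column_stack([xs**3, xs**2, xs, np.ones_like(xs)])

# Least-squares solution of M @ coeffs ~= ys
coeffs, residuals, rank, sv = np.linalg.lstsq(M, ys, rcond=None)
print(coeffs)  # approximately [1, 0, 1, 1]
```

With real, noisy data the system is overdetermined (n > m) and has no exact solution; `lstsq` then returns the coefficients minimizing the sum of squared residuals.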