Circuit Theory/Systems of Linear Equations
A linear equation is an equation that has the form

a_1 x_1 + a_2 x_2 + ... + a_n x_n = b

a_1, a_2, ..., a_n are called the coefficients of the equation and b is called the constant term. Variables in linear algebra are usually denoted x_1, x_2, ..., x_n instead of x, y, z, etc., because real-world problems can have millions of variables. Problems in this text will have no more than 5 or 6.
Every term on the left-hand side of a linear equation must contain a variable raised to the power exactly 1. The right-hand side is a constant; its terms have power zero.
3x_1 + 2x_2 - x_3 = 5
a linear equation
x_1 + √x_2 = 1
NOT a linear equation because of the square root. A square root is the same as raising to the power 1/2, not to the power 1.
5x_1 - x_2 + 4x_3 = -2
a linear equation
x_1^2 + 3x_2 = 2
NOT a linear equation because x_1^2 is a term of power 2.
Systems of linear equations
A system of m equations in n variables has the form

a_11 x_1 + a_12 x_2 + ... + a_1n x_n = b_1
a_21 x_1 + a_22 x_2 + ... + a_2n x_n = b_2
...
a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n = b_m

Note that if the coefficient of a variable in a linear equation is zero, we may omit it, so not every variable needs to appear in every equation. Below are two systems of linear equations:

2x_1 - 2x_2 + x_3 = 1
-3x_1 + 2x_2 = 3
3x_1 + 2x_2 + x_3 = 7

2x_1 - 2x_2 + x_3 = 0
-3x_1 + 2x_2 = 0
3x_1 + 2x_2 + x_3 = 0
The second system is called a homogeneous system as all the constant terms are zero.
Usually, a system of linear equations consists of two or more linear equations that have the same variables. Theoretically, we may treat a single linear equation as a system also.
Arranging the coefficients of a linear system in an m-by-n matrix (i.e. an array with m rows and n columns), we get

A =
[ a_11  a_12  ...  a_1n ]
[ a_21  a_22  ...  a_2n ]
[  ...   ...  ...   ... ]
[ a_m1  a_m2  ...  a_mn ]

and let x = (x_1, x_2, ..., x_n)^T and b = (b_1, b_2, ..., b_m)^T. The system of linear equations can then be written as

A x = b
This motivates the study of matrix theory. For further introduction, see Linear Algebra/Matrices.
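As a minimal sketch of this matrix form, the small three-equation system worked through in this chapter can be stored as a coefficient matrix A and a constant vector b, and a candidate vector x checked by matrix-vector multiplication (pure Python; the helper name matvec is our own):

```python
A = [[2, -2, 1],   # coefficients of equation 1
     [-3, 2, 0],   # x_3 is absent from equation 2, so its coefficient is 0
     [3, 2, 1]]    # coefficients of equation 3
b = [1, 3, 7]      # constant terms

def matvec(A, x):
    """Multiply an m-by-n matrix A by an n-vector x, giving an m-vector."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

x = [0, 1.5, 4]
print(matvec(A, x))  # → [1.0, 3.0, 7.0], which equals b, so A x = b holds
```

Storing the system this way is exactly what motivates matrix theory: the entire system is one object A and one equation A x = b.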
Solutions of systems of linear equations
A solution of a system of linear equations is an assignment of a number to each variable that makes every equation true. For example, a solution of the first system given above is (0, 1.5, 4), because 2(0) - 2(1.5) + 1(4) = 1, -3(0) + 2(1.5) = 3, and 3(0) + 2(1.5) + 1(4) = 7.
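The substitution check above can be automated. The sketch below (the function name is_solution is our own) stores each equation as its coefficient list and constant term, substitutes a candidate point into every equation, and allows a small tolerance for floating-point arithmetic:

```python
equations = [           # (coefficients, constant term) for each equation
    ([2, -2, 1], 1),
    ([-3, 2, 0], 3),
    ([3, 2, 1], 7),
]
candidate = (0, 1.5, 4)

def is_solution(equations, x, tol=1e-9):
    """True if x satisfies every equation to within tolerance tol."""
    return all(abs(sum(a * xi for a, xi in zip(coeffs, x)) - b) < tol
               for coeffs, b in equations)

print(is_solution(equations, candidate))  # True
```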
Solving systems of linear equations is essential in modern engineering. Physical systems of high complexity, which no closed-form formula could describe, are approximated with high accuracy by solving a very large set of linear equations. By breaking the subject at hand into tiny, finite pieces, a solution can be obtained after a great deal of brute-force calculation. Such numerical analysis introduces approximation error, but a sufficiently fine model often makes up for the simplifications that less numerical methods must introduce, and the error can be reduced further with smaller pieces or with more sophisticated algorithms. Specific methods include finite difference analysis, finite element analysis, and boundary element analysis. Specific applications include computational fluid dynamics, heat transfer, fatigue analysis, strain analysis, and stress analysis.
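The workhorse behind these methods is a direct solver. The following is a minimal sketch of Gaussian elimination with partial pivoting, the classic textbook algorithm (production codes use optimized libraries, but the idea is the same), applied here to the small system from this chapter:

```python
def solve(A, b):
    """Solve A x = b for a small dense n-by-n system by Gaussian elimination."""
    n = len(A)
    # work on an augmented copy [A | b] so the inputs are not modified
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for k in range(n):
        # partial pivoting: bring the largest remaining pivot into row k
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # eliminate column k from all rows below row k
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= factor * M[k][j]
    # back-substitution on the resulting upper-triangular system
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

print(solve([[2, -2, 1], [-3, 2, 0], [3, 2, 1]], [1, 3, 7]))  # approximately [0, 1.5, 4]
```

The same elimination idea scales to the very large sparse systems produced by finite difference and finite element discretizations, where specialized solvers exploit the sparsity.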
Systems of linear equations are also used in statistical regression. A common algorithm for least-squares regression uses a matrix with n rows and m columns, where n is the number of data points and m is the number of basis functions, i.e. the number of coefficients sought. (A polynomial ax^3 + bx^2 + cx + d, for example, uses four basis functions: x^3, x^2, x, and 1.) A good explanation of this algorithm may be found in Numerical Recipes: The Art of Scientific Computing.
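As an illustrative sketch (the data points below are made up for the example), a straight line c_1 x + c_0 can be fitted by forming the normal equations (A^T A) c = A^T y, where A has one row per data point and one column per basis function, here x and 1, and then solving the resulting 2-by-2 linear system:

```python
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.1, 2.9, 5.2, 6.8]   # made-up data, roughly y = 2x + 1

# design matrix: one row per data point, one column per basis function (x and 1)
A = [[x, 1.0] for x in xs]

# form the normal equations (A^T A) c = A^T y
AtA = [[sum(row[i] * row[j] for row in A) for j in range(2)] for i in range(2)]
Aty = [sum(row[i] * y for row, y in zip(A, ys)) for i in range(2)]

# solve the 2-by-2 system by Cramer's rule
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
c1 = (Aty[0] * AtA[1][1] - AtA[0][1] * Aty[1]) / det
c0 = (AtA[0][0] * Aty[1] - Aty[0] * AtA[1][0]) / det
print(c1, c0)  # slope and intercept of the best-fit line, close to 2 and 1
```

With m basis functions the normal equations are an m-by-m linear system, which is why least-squares fitting reduces to the subject of this chapter.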