# Introduction to Chemical Engineering Processes/Systems of algebraic equations

## What is a System of Equations?

A **system** of equations is any number of equations with more than one total unknown, such that the same unknown must have the same value in every equation. You have probably dealt a great deal, in the past, with *linear systems of equations*, for which many solution methods exist. A linear system is a system of the form:

**Linear Systems**

$$a_{11}x_1 + a_{12}x_2 + \cdots + a_{1n}x_n = b_1$$

$$a_{21}x_1 + a_{22}x_2 + \cdots + a_{2n}x_n = b_2$$

And so on, where the $a$'s and $b$'s are constants.

Any system that is not linear is **nonlinear**. Nonlinear equations are, generally, far more difficult to solve than linear equations but there are techniques by which some special cases can be solved for an exact answer. For other cases, there may not be any solutions (which is even true about linear systems!), or those solutions may only be obtainable using a *numerical method* similar to those for single-variable equations. As you might imagine, these will be considerably more complicated on a multiple-variable system than on a single equation, so it is recommended that you use a computer program if the equations get too nasty.

## Solvability

A system is **solvable** if and only if it has a finite number of solutions. This is, of course, what you usually want, since you want the behavior of whatever you're designing to be predictable.

Here is how you can tell if it will *definitely* be impossible to solve a set of equations, or if it merely *may* be impossible.

**Solvability of systems**:

- If a set of n **independent** equations has n unknowns, then the system has a finite (possibly 0) number of solutions.
- If a set of n **independent** equations has *more than* n unknowns, then the system has an infinite number of solutions.
- If a set of n **independent** equations has *fewer than* n unknowns, then the system has no solutions.
- Any dependent equations in a system do not count towards n.

Note that even if a system is solvable it doesn't mean it has solutions, it just means that there's not an infinite number.
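For instance (an illustrative pair, not from the original text), consider:

```latex
\begin{align*}
x + y &= 2 \\
2x + 2y &= 4
\end{align*}
```

The second equation is just twice the first, so it is dependent and does not count towards n. The system therefore has only n = 1 independent equation in 2 unknowns, and thus an infinite number of solutions (any pair with $y = 2 - x$).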

## Methods to Solve Systems

As you may recall, there are many ways to solve systems of *linear* equations. These include:

- **Linear combination**: Add multiples of one equation to the others in order to get rid of one variable. This is the basis for Gaussian elimination, which is one of the faster techniques to use with a computer.
- **Cramer's rule**: Uses determinants of coefficient matrices.
- **Substitution**: Solve one equation for one variable and then substitute the resulting expression into all other equations, thus eliminating the variable you solved for.

The last one, substitution, is most useful when you have to solve a set of **nonlinear** equations. Linear combination can only be employed if the same type of term appears in all equations (which is unlikely except for a linear system), and no general analogue for Cramer's rule exists for nonlinear systems. However, substitution is still equally valid. Let's look at a simple example.
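Of the linear methods above, linear combination is the easiest to automate. As a rough sketch (pure Python with simple partial pivoting; a real application would use a linear-algebra library), Gaussian elimination looks like:

```python
def solve_linear(a, b):
    """Solve the linear system a x = b by Gaussian elimination.

    a: list of n rows of n coefficients; b: list of n constants.
    Illustrative sketch: assumes the system has a unique solution.
    """
    n = len(a)
    # Work on copies so the caller's data is untouched.
    a = [row[:] for row in a]
    b = b[:]
    for col in range(n):
        # Partial pivoting: swap in the row with the largest entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        b[col], b[pivot] = b[pivot], b[col]
        # Linear combination: subtract multiples of this row from the rows below.
        for row in range(col + 1, n):
            factor = a[row][col] / a[col][col]
            for k in range(col, n):
                a[row][k] -= factor * a[col][k]
            b[row] -= factor * b[col]
    # Back-substitution on the resulting triangular system.
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        s = sum(a[row][k] * x[k] for k in range(row + 1, n))
        x[row] = (b[row] - s) / a[row][row]
    return x
```

For example, `solve_linear([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])` returns `[1.0, 3.0]`, i.e. x = 1, y = 3 for the system 2x + y = 5, x + 3y = 10.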

### Example of the Substitution Method for Nonlinear Systems

**Example**:

Solve the following system of equations for X and Y

Solution: We want to employ substitution, so we should ask: **which variable is easier to solve for?** In this case, X (in the top equation) is easiest to solve for, so we do that and substitute the resulting expression for X into the bottom equation.

This leaves a single polynomial equation in Y alone, which can be solved by the **method of substitution**: let $u = Y^2$, which turns the equation into a quadratic in u.

Solving the quadratic by factoring gives two values of u, and since $Y = \pm\sqrt{u}$, we obtain **four** solutions for Y!
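The equations of this particular system are not reproduced above, so as a stand-in, here is a fully worked hypothetical system of the same shape (all numbers invented for illustration):

```latex
\begin{align*}
X + Y^2 &= 5 \\
X\,Y^2 &= 4
\end{align*}
```

Solving the first equation for $X$ gives $X = 5 - Y^2$; substituting into the second gives $(5 - Y^2)Y^2 = 4$, i.e. $Y^4 - 5Y^2 + 4 = 0$. Letting $u = Y^2$ yields $u^2 - 5u + 4 = (u - 1)(u - 4) = 0$, so $u = 1$ or $u = 4$. Since $Y = \pm\sqrt{u}$, the four solutions are $Y = \pm 1, \pm 2$, with the corresponding $X = 5 - Y^2 = 4$ or $1$.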

Notice, however, that depending on where this system *came* from, the negative solutions may not make sense, so think before you continue!

Let's take into account all of them for now. Since we have Y, we can now solve for X using the expression obtained from the first equation.

Notice that even a small system like this has a large number of solutions, and, indeed, some systems (such as those containing dependent equations) will have an infinite number.

## Numerical Methods to Solve Systems

There are numerical equivalents in multiple variables to *some* of the methods demonstrated in the previous section. Many of them in their purest forms involve the use of calculus (in fact, the Taylor method does as well), but as before, they can be reduced to approximate algebraic forms at the expense of some accuracy.

### Shots in the Dark

If you can solve all of the equations explicitly for the same variable (say, y), you can guess all but one and then compare how different the resulting values of y are in each equation. This method is entirely brute-force, because **if there are more than two equations, it is necessary to guess all of the variables but one**, and there is no way to tell what the next guess should be. Trying to guess multiple variables at once from thin air gets to be a hassle, even with a computer.

Since there are so many problems with this method, it will not be discussed further.

### Fixed-point iteration

Again, the multivariate form of fixed-point iteration is so unstable that it generally can be assumed that it will not work. Weighted iteration is also significantly more difficult.
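As an illustration of that instability, consider the system $e^{-x} = y$, $\ln x = y$, naively rewritten as the simultaneous update $x \leftarrow -\ln y$, $y \leftarrow \ln x$. Even starting from a guess that nearly satisfies the first equation, the iterates wander out of the domain of the logarithm within a few steps (a sketch; the system and starting point are chosen for illustration):

```python
import math

def fixed_point_steps(x, y, max_steps=10):
    """Naive simultaneous fixed-point iteration on x = -ln(y), y = ln(x).

    Returns the number of completed steps before the iteration leaves
    the domain of the logarithm (or max_steps if it survives that long).
    """
    for step in range(max_steps):
        try:
            # Update both variables at once from the previous iterate.
            x, y = -math.log(y), math.log(x)
        except ValueError:
            # log of a non-positive number: the iteration has blown up.
            return step
    return max_steps

steps = fixed_point_steps(2.303, 0.1)
```

Here the iteration fails long before `max_steps`, which is exactly the instability described above.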

### Looping method

This is one method that *does* work, and it is somewhat different from any single-variable method. In the looping method, it is necessary to be able to solve *each equation for a unique variable*. You then go around in a loop: starting with an initial guess on (ideally) a *single* variable, say y, you evaluate each equation in turn until you return to your original variable with a new value y'. If the result is not the same as the guess you started with, you make a new guess based on the *trend in the results*.

More specifically, here is an algorithm you can use:

- Solve each equation for a **different variable**.
- Make a guess on one variable (or as many as necessary to evaluate a second one; if it's more than one, it gets harder, so in that case it is recommended to use another method).
- Go through all of the equations until you end up recalculating the variable (or all of the variables) which you had originally guessed. Note whether the result is higher or lower than your guess.
- Make another guess on the variable(s) and go through the loop again.
- After these two guesses, we know whether increasing our guess will increase or decrease the recalculated value. Therefore, we can deduce whether we need to increase or decrease our guess to get a recalculated value equal to the guess.
- Keep guessing appropriately until the recalculated value equals the guess.

This technique is often necessary in engineering calculations, because they are based on data rather than on explicit equations for quantities. As we'll see, however, it can be difficult to get it to converge, and this method isn't that convenient to do by hand (though it is the most reliable one to do realistically). It is great, however, for inputting guesses into a computer or spreadsheet until it works.

**Example**:

Solve this system:

$$e^{-x} = y$$

$$\ln x = y$$

First we need to solve one of them for x; let's choose the first one:

$$x = -\ln y$$

- To start off, we make a guess: y = 0.1. Then, from the first equation, x = 2.303.
- Plug this back into the second equation and you'll come out with y' = 0.834. The recalculated value is **too high**.
- Now make a new guess on y: say, y = 0.5. This results in x = 0.6931.
- Plugging back into the second equation gives y' = -0.3665. The recalculated value is **too low**.
- Let's now try y = 0.25.
- This results in x = 1.386 from the first equation and y' = 0.326 from the second. **Too high**, so we need to increase our guess.
- Let's guess y = 0.3.
- This yields x = 1.204 and thus y' = 0.185, which is **too low**, indicating the guessed value was too high.
- Guess y = 0.28, hence x = 1.273 and y' = 0.241. The guess is therefore still too high.
- Guess y = 0.27, hence x = 1.309 and y' = 0.269. Therefore, we have now converged: y ≈ 0.27, x ≈ 1.31.
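The hand iteration above can be automated. A sketch, assuming the solved forms x = −ln y and y' = ln x (the forms used in the spreadsheet below), that replaces the manual "too high / too low" bookkeeping with bisection on the mismatch y' − y:

```python
import math

def loop_once(y):
    """One pass around the loop: from a guess y, get x from the first
    equation, then the recalculated y' from the second."""
    x = -math.log(y)      # first equation solved for x
    return math.log(x)    # second equation solved for y (this is y')

def solve_by_looping(lo=0.1, hi=0.5, tol=1e-9):
    """Bisect on the mismatch y' - y, which is positive at y = 0.1
    ('recalculated value too high') and negative at y = 0.5 ('too low')."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if loop_once(mid) - mid > 0:
            lo = mid    # recalculated value too high: increase the guess
        else:
            hi = mid    # recalculated value too low: decrease the guess
    return 0.5 * (lo + hi)

y = solve_by_looping()
x = -math.log(y)    # y converges near 0.27, so x comes out near 1.31
```

Bisection is just one systematic way to update the guess; any scheme that moves the guess toward the recalculated value would do.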

#### Looping Method with Spreadsheets

We can do the guessing procedure more easily by programming it into a spreadsheet. First set up three columns like so:

|   | A       | B        | C       |
|---|---------|----------|---------|
| 1 | y guess | x        | y'      |
| 2 |         | =-ln(A2) | =ln(B2) |

In B2 we put the first function solved for x, and in C2 we have the second function solved for y. Now all we need to do is type guesses into A2 until the value in C2 is the same as our guess (the spreadsheet will automatically calculate B2 and C2 for you). To make things even easier, put the formula =C2-A2 into cell D2. Since we want y' to equal y, just keep guessing until the value in D2 is as close to zero as you like.

As a more in-depth example (which would be significantly more difficult to do by hand), consider the system:

**Example**:

Solve:

In order for this to work, we only need to solve each equation for a unique variable; the expression need not be explicit! The following will work (assuming that X is a positive quantity), and this will be evident shortly:

$$P = 0.1T$$

$$X = \sqrt{T^3 - P}$$

$$T = \frac{2P^2X^2 - 3e^{-X/T}}{T - 2}$$

Now we need to ask: which variable would be the best to guess to start the iteration procedure? In this case the best answer is T because from this guess, we can calculate P from equation 3, then X from equation 2, and finally a new guess on T from equation 1, and use this new value as a gauge of our old guess.

Let's program this into the spreadsheet:

|   | A       | B       | C                | D                                       | E            |
|---|---------|---------|------------------|-----------------------------------------|--------------|
| 1 | T guess | P       | X                | T'                                      | T' - T guess |
| 2 |         | =0.1*A2 | =sqrt(A2^3 - B2) | =(2*B2^2*C2^2 - 3*exp(-C2/A2))/(A2 - 2) | =D2 - A2     |

Once all this is programmed in, you can just input guesses as before, adjusting the guess in A2 until the value in E2 is as close to zero as you like.

### Multivariable Newton Method

There is a multivariate extension to Newton's method which is highly useful. It converges quickly, like the single-variable version, with the downside that, at least by hand, it is tedious. However, a computer can be programmed to do this with little difficulty, and the method is not limited only to systems which can be explicitly solved like the looping method is. In addition, unlike the looping method, the Newton method will actually give you the next set of values to use as a guess.

The method works as follows:

1. Solve all of the equations for 0, i.e. let $F(x_1, x_2, \ldots, x_n) = 0$ for all functions F in the system.

2. Guess a value for all variables, and put them into a matrix (X). Calculate the value of all functions F at this guess, and put them into a matrix (F).

3. We need to find estimates for all the **partial derivatives** of the function at the guessed values, which is described later.

4. Construct a matrix (to become the Jacobian) as follows: make an empty matrix with n rows and n columns, where n is the number of equations or the number of variables (remember, a solvable system generally has the same number of equations as variables). Then label the columns with the names of the variables and the rows with the names of your functions. It should look something like this:

$$\begin{array}{c|ccc} & x_1 & x_2 & \cdots \\ \hline F_1 & & & \\ F_2 & & & \\ \vdots & & & \end{array}$$

5. Put the appropriate partial derivative in the labeled spot. For example, put the partial derivative with respect to x1 from function 1 in the first spot.

6. Once the Jacobian matrix is completely constructed, find the inverse of the matrix. There are multiple computer programs that can do this, or you can do it by hand if you know how.

7. Matrix-multiply the inverse Jacobian with the transpose of the function matrix F (to make it a column matrix), then subtract this from the transpose of X (again, made into a column matrix):

**Multivariable Newton Method Formula**

$$X_{new}^T = X_{old}^T - J^{-1} F^T$$

8. The result is your next guess. Repeat until convergence.

#### Estimating Partial Derivatives

You MUST make sure you carry out quite a few decimal places when doing this, because changing the variables by a very small amount may not change the function values too much, but even small changes are important!

A **partial derivative** is, in its most basic sense, the slope of the tangent line of a function of more than one variable when all variables except one are held constant. The way to calculate it is:

$$\frac{\partial F}{\partial x} \approx \frac{F(x + \Delta x, y, \ldots) - F(x, y, \ldots)}{\Delta x}$$

Now we need to stay organized, so let's introduce some notation: let F be the value of the function at the original guess, and F' its value after the guess is modified.

To calculate it:

- Calculate one function F at your guess.
- Increase *one* variable, x, by a very small amount $\Delta x$. **Leave all other variables constant**.
- Recalculate F at the modified guess to give you F'.

The partial derivative of the function F with respect to x is then $\frac{F' - F}{\Delta x}$.
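The steps above can be sketched in code. As an illustration (using the function $F_1 = e^{-x} - y$ and the guess (2.303, 0.1) from the looping example; the helper name `partial` is ours):

```python
import math

def partial(F, point, var, delta=1e-6):
    """Forward-difference estimate of the partial derivative of F with
    respect to the variable at index `var`, at the given point."""
    base = F(*point)                      # F at the original guess
    bumped = list(point)
    bumped[var] += delta                  # increase ONE variable by a small amount
    return (F(*bumped) - base) / delta    # (F' - F) / delta_x

def F1(x, y):
    return math.exp(-x) - y

dF1_dx = partial(F1, (2.303, 0.1), 0)   # analytically -e^(-2.303), about -0.100
dF1_dy = partial(F1, (2.303, 0.1), 1)   # analytically -1
```

Note that `delta` must be small enough for accuracy but large enough that the function values change by more than rounding error, which is exactly the decimal-places warning above.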

#### Example of Use of Newton Method

Let's go back to our archetypal example:

$$e^{-x} = y$$

$$\ln x = y$$

**Step 1**: We need to solve each for zero:

$$F_1(x, y) = e^{-x} - y = 0$$

$$F_2(x, y) = \ln x - y = 0$$

**Step 2**: Let's guess that x = 2.303 and y = 0.1 (it's a good idea to choose guesses that satisfy one of the equations). Then:

$$X = \begin{pmatrix} 2.303 & 0.1 \end{pmatrix}$$

The values of F at this guess are $F_1 = e^{-2.303} - 0.1 \approx 0$ and $F_2 = \ln(2.303) - 0.1 \approx 0.734$, and hence by definition:

$$F \approx \begin{pmatrix} 0 & 0.734 \end{pmatrix}$$

**Steps 3-5**: Calculate the partial derivatives.
Let's choose a small increment $\Delta$ for each variable. Then:

$$\frac{\partial F_1}{\partial x} \approx -e^{-x} \approx -0.100, \qquad \frac{\partial F_1}{\partial y} = -1$$

The partial derivatives of F2 can be similarly calculated to be $\frac{\partial F_2}{\partial x} \approx \frac{1}{x} \approx 0.434$ and $\frac{\partial F_2}{\partial y} = -1$.

Therefore, the Jacobian of the system is:

$$J \approx \begin{pmatrix} -0.100 & -1 \\ 0.434 & -1 \end{pmatrix}$$

**Step 6**: Using any method you know how to do, you can come up with the inverse of the matrix:

$$J^{-1} \approx \frac{1}{0.534}\begin{pmatrix} -1 & 1 \\ -0.434 & -0.100 \end{pmatrix} \approx \begin{pmatrix} -1.87 & 1.87 \\ -0.81 & -0.19 \end{pmatrix}$$

**Step 7**: The transposition of F is simply:

$$F^T \approx \begin{pmatrix} 0 \\ 0.734 \end{pmatrix}$$

Therefore, by doing matrix multiplication, you can come up with the following *modifying matrix*:

$$J^{-1} F^T \approx \begin{pmatrix} 1.37 \\ -0.14 \end{pmatrix}$$

Therefore, we should subtract 1.37 from x and -0.14 from y (that is, add 0.14 to y) to get the next guess:

$$x \approx 0.93, \qquad y \approx 0.24$$

Notice how much closer this is to the true answer than what we started with. However, this method is generally better suited to a computer due to all of the tedious matrix algebra.
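For instance, a minimal sketch of the whole procedure in code, using the same two-equation system ($F_1 = e^{-x} - y$, $F_2 = \ln x - y$), a finite-difference Jacobian, and a hand-rolled 2×2 matrix inverse (a real implementation would use a linear-algebra library and guard against singular Jacobians):

```python
import math

def newton_2x2(F1, F2, x, y, delta=1e-6, tol=1e-10, max_iter=50):
    """Multivariable Newton's method for two equations in two unknowns,
    with a forward-difference Jacobian. Illustrative sketch only."""
    for _ in range(max_iter):
        f1, f2 = F1(x, y), F2(x, y)
        if abs(f1) < tol and abs(f2) < tol:
            break
        # Steps 3-5: finite-difference partial derivatives (the Jacobian).
        a = (F1(x + delta, y) - f1) / delta   # dF1/dx
        b = (F1(x, y + delta) - f1) / delta   # dF1/dy
        c = (F2(x + delta, y) - f2) / delta   # dF2/dx
        d = (F2(x, y + delta) - f2) / delta   # dF2/dy
        # Step 6: invert the 2x2 Jacobian directly.
        det = a * d - b * c
        # Step 7: subtract J^{-1} F from the current guess.
        x -= (d * f1 - b * f2) / det
        y -= (-c * f1 + a * f2) / det
    return x, y

x, y = newton_2x2(lambda x, y: math.exp(-x) - y,
                  lambda x, y: math.log(x) - y,
                  2.303, 0.1)   # converges near x = 1.31, y = 0.27
```

Starting from the same guess as the worked example, the first iteration reproduces the step computed by hand above, and a few more iterations converge to the solution.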