# Chapter 7 - Systems of Linear Equations

This chapter discusses systems of linear equations. It contains the following sections:
• section 7.1 - In this section we introduce systems of linear equations and their solutions, and describe the graphical method of solving a system of two equations in two unknowns.

• section 7.2 - In this section we explain the substitution method of solving systems of equations.

• section 7.3 - In this section we explain the elimination method of solving systems of equations.

## 7.1 - Introduction to Systems of Linear Equations

### Background

A system has these properties:
• It consists of several parts which interact and affect one another.

• It produces an effect or output as a result of some cause or input.

One example is a business organization. This system’s inputs are its capital, employees, raw materials and factories; its outputs are its products. Management decides what the interactions among the inputs should be to give the maximum output (for example, how many factories should make which products).

A linear system is a system where the output is proportional to the input. For example, if the business organization is a linear system, then if we double the capital, employees, raw materials and factories (the inputs) then we expect to get double the production (the output).

We can describe mathematically how the parts of a linear system relate to one another and to the input using a system of linear equations. If a linear system has n parts (where n is some number), then we can describe it with a system of n linear equations in n unknowns or variables. The unknowns in these equations are the values of the inputs. If we have done the analysis correctly, then there will be a unique solution for the values of the inputs.

Here is an example of a system of two linear equations in the two unknowns x and y:

x + y = 4
2 x − 3 y = 6

From the input-output point of view, the numbers 4 and 6 on the right-hand sides are the output, and the unknowns x and y on the left-hand sides are the input. The numbers 1, 1, 2 and −3 multiplying x and y express the relationships between the parts of the system.

We can verify that {x = 3.6, y = 0.4} is the solution of the system by substituting it into the system and getting the pair of equations 4 = 4 and 6 = 6. Now, suppose that we double the output (replace 4 and 6 by 8 and 12):

x + y = 8
2 x − 3 y = 12

Then we can verify that the input is also doubled; the solution now is {x = 7.2, y = 0.8}. Thus the system is linear. The linearity can be traced back to the fact that the numbers 1, 1, 2 and −3 multiplying x and y are constants. If they were replaced by expressions involving x and y then the equations would be non-linear.
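The linearity check can be replayed in code. This sketch (plain Python; the 2×2 solving formula used here is Cramer's rule, which is mentioned later in this chapter) solves the system with outputs 4 and 6, then with the doubled outputs 8 and 12, and confirms that the inputs double:

```python
def solve_2x2(a11, a12, a21, a22, b1, b2):
    """Solve a11*x + a12*y = b1, a21*x + a22*y = b2 (Cramer's rule)."""
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("system is redundant or inconsistent")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Original outputs 4 and 6:
x, y = solve_2x2(1, 1, 2, -3, 4, 6)      # -> (3.6, 0.4)
# Doubled outputs 8 and 12:
x2, y2 = solve_2x2(1, 1, 2, -3, 8, 12)   # -> (7.2, 0.8), exactly double
print(x, y, x2, y2)
```

Doubling the right-hand sides doubles the solution precisely because the coefficients 1, 1, 2 and −3 stay constant.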

 Definitions: A linear equation in one unknown is an equation of the form a x = b, where a and b are constants and x is an unknown that we wish to solve for. Similarly, a linear equation in n unknowns x1, x2, …, xn is an equation of the form: a1 · x1 + a2 · x2 + … + an · xn = b, where a1, a2, …, an and b are constants. The name linear comes from the fact that such an equation in two unknowns or variables represents a straight line. A set of such equations is called a system of linear equations.

Here is an example of a system of three linear equations in the three unknowns x, y and z:

### Methods of solving systems of linear equations

There are many methods of solving systems of linear equations. Each one has its advantages and disadvantages. The graphical method is useful for introducing concepts such as the uniqueness of the solution or the meaning of inconsistent systems but is useless as a computational tool.

The substitution method is useful because it can be applied to non-linear as well as linear systems, but it bogs down for anything but small systems. The Algebra Coach can solve any system of linear equations using this method.

The elimination method is a good method for systems of medium size containing, say, 3 to 30 equations. It is easy to implement on a computer. The Algebra Coach can solve any system of linear equations using this method. Gauss elimination and Gauss-Jordan elimination are two variations of this method. Other methods such as the LU decomposition method are based on it.

Cramer's rule (also known as the determinant method) is nice for hand-calculation because it avoids fractions. However it is only practical for small systems (3 equations or less). The Algebra Coach does not explain this method.

And there are other methods that are useful under certain circumstances. For example, the problem of “predicting the weather” on a 100 × 100 grid leads to a system of 10,000 linear equations. Such large systems are solved by iterative improvement. In this method you start with any guess whatsoever for the solution. Then you iterate (recycle) this solution, improving it with each iteration. When the accuracy is good enough, you stop.
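As a concrete sketch of iterative improvement, here is the Jacobi method (one standard iterative scheme; the text does not name a particular one). Each unknown is re-estimated from the current values of the others, starting from an arbitrary guess; the worked system is a made-up, diagonally dominant one, which guarantees the iteration converges:

```python
def jacobi(A, b, iterations=50):
    """Jacobi iteration: re-estimate each unknown from the current
    values of the others.  A should be diagonally dominant to converge."""
    n = len(b)
    x = [0.0] * n                 # any starting guess works for this system
    for _ in range(iterations):
        # Build the whole new estimate from the OLD one (that is Jacobi).
        x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
             for i in range(n)]
    return x

# Made-up system: 10x + y = 12,  x + 10y = 21  (exact solution x = 1, y = 2)
sol = jacobi([[10, 1], [1, 10]], [12, 21])
print(sol)   # converges to [1.0, 2.0]
```

Each pass shrinks the error by roughly a factor of ten here, so a few dozen iterations reach machine accuracy.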

Another method, the tri-diagonal matrix method, is useful for systems that can be organized into well-defined stages, and where each stage depends directly only on the previous stage.
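Such staged systems have a tri-diagonal coefficient matrix, and the standard solver for them is the Thomas algorithm, a specialization of Gaussian elimination (the name and the sketch below are standard but not given in the text; the worked system is a made-up 3×3 example):

```python
def thomas(a, b, c, d):
    """Solve a tri-diagonal system by the Thomas algorithm.
    a: sub-diagonal (length n-1), b: diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n)."""
    n = len(d)
    cp = [0.0] * n               # modified super-diagonal
    dp = [0.0] * n               # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i - 1] * cp[i - 1]     # eliminate the sub-diagonal entry
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[n - 1] = dp[n - 1]
    for i in range(n - 2, -1, -1):          # back-substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# Made-up staged system:  2x - y = 1,  -x + 2y - z = 0,  -y + 2z = 1
sol = thomas([-1, -1], [2, 2, 2], [-1, -1], [1, 0, 1])
print([round(v, 6) for v in sol])   # [1.0, 1.0, 1.0]
```

Because each row touches only its neighbours, the work grows linearly with the number of equations instead of cubically.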

### Some lessons to learn from graphing 2 equations in 2 unknowns

The graphical method is not very useful as a computational tool but it is useful for visualizing concepts such as the uniqueness of the solution and the meaning of inconsistent and redundant systems. Consider the following system of two linear equations in two unknowns:

x + y = 4
2 x − 3 y = 6

In this method we simply draw graphs of the equations as we have done to the right. Notice that the graph of each equation is a straight line. (This is characteristic of a linear system. There are no curves, only straight lines.)

Any point on one line is a solution of one equation and any point on the other line is a solution of the other equation. The point where the lines cross {x = 3.6, y = 0.4} is the solution that satisfies both equations simultaneously. Notice that the solution is unique. This is because the lines are straight and there is only one point where they can cross. A system of linear equations with a unique solution is the “normal” situation.

However it is possible to have a system of equations with no solution or an infinite number of solutions. Such systems of equations are called inconsistent and redundant, respectively. They are the result of an inaccurate or incorrect analysis of the physical system being described by the system of equations.

Consider the following system of two equations in two unknowns:

x + y = 2
x + y = 4

This system of equations is inconsistent because there is no way that x + y can equal 2 and 4 at the same time. As shown to the right, the graph of this system consists of two parallel lines that never cross. Thus there is no solution.
Now consider the following system of equations:
This system is redundant because the second equation is equivalent to the first one. The graph consists of two lines that lie on top of one another. They “cross” at an infinite number of points, so there are an infinite number of solutions.
To summarize, a system of linear equations with 2 unknowns must have at least 2 equations to get a unique solution. Having 1 equation is not enough, because 1 equation in 2 unknowns is represented by an entire line. Having 2 equations is exactly enough, as long as they are not redundant or inconsistent. Having 3 (or more) equations is too many. The third equation must be either redundant or inconsistent.

Counting equations and unknowns: These results can be generalized to linear systems of equations with any number of equations and any number of unknowns. A linear system of equations with n unknowns must have at least n equations to get a unique solution. Having any fewer equations is not enough; the solution will not be unique. Having n equations is exactly enough, as long as they are not redundant or inconsistent. Having any more than n equations is too many; the system will be either redundant or inconsistent.
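For 2 equations in 2 unknowns, the three cases can be told apart numerically. A sketch, using the fact that the determinant a11·a22 − a12·a21 is zero exactly when the two lines are parallel or coincident (the redundant example here, where the second equation is twice the first, is my own):

```python
def classify_2x2(a11, a12, b1, a21, a22, b2):
    """Classify the system a11*x + a12*y = b1, a21*x + a22*y = b2:
    the lines cross once, are parallel, or coincide."""
    det = a11 * a22 - a12 * a21     # zero exactly when the lines are parallel
    if det != 0:
        return "unique"
    # Parallel lines: coincident only if one equation is a multiple of the other.
    if a11 * b2 == a21 * b1 and a12 * b2 == a22 * b1:
        return "redundant"
    return "inconsistent"

print(classify_2x2(1, 1, 4, 2, -3, 6))   # lines cross once -> unique
print(classify_2x2(1, 1, 2, 1, 1, 4))    # x + y = 2 and x + y = 4 -> inconsistent
print(classify_2x2(1, 1, 2, 2, 2, 4))    # second is twice the first -> redundant
```

The determinant test generalizes to n unknowns, but for large systems it is cheaper to discover the case during elimination, as described in section 7.3.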

## 7.2 - The Substitution Method

In this section we explain the substitution method. Suppose we have a system of n linear equations in n variables or unknowns. In the substitution method, we pick one equation and solve it for one of the unknowns, say x. Then we substitute this solution for x into the other n − 1 equations wherever the variable x appears, and simplify. The result is that the n − 1 equations contain only n − 1 unknowns (x no longer appears).

We repeat this process until we get 1 equation in 1 unknown, which is then easily solved. The final step is to back-substitute this value into the previous equations, one at a time, to find the values of all the other unknowns.
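For a 2×2 system the whole procedure fits in a few lines. A sketch in plain Python (assuming the coefficient of x in the first equation is nonzero), applied to the 2×2 system x + y = 4, 2 x − 3 y = 6 whose solution {x = 3.6, y = 0.4} was found in section 7.1:

```python
def solve_by_substitution(a11, a12, b1, a21, a22, b2):
    """Solve a11*x + a12*y = b1 and a21*x + a22*y = b2 by substitution."""
    # Step 1: solve the first equation for x:  x = (b1 - a12*y) / a11
    # Step 2: substitute that into the second equation and solve for y:
    #         a21*(b1 - a12*y)/a11 + a22*y = b2
    y = (b2 - a21 * b1 / a11) / (a22 - a21 * a12 / a11)
    # Step 3: back-substitute y into the expression for x.
    x = (b1 - a12 * y) / a11
    return x, y

print(solve_by_substitution(1, 1, 4, 2, -3, 6))   # (3.6, 0.4)
```

Note how the code mirrors the description: one forward substitution pass to shrink the system, then one back-substitution step.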

Example: Solve the system:
Solution: We arbitrarily choose to solve the first equation for x:
x = 4 − y − 2 z.
Substituting this into the second and third equations we get:
Simplifying these equations we get:
Note that we now have 2 equations in 2 unknowns. We repeat the process. We arbitrarily choose to solve the first of these equations for y:
y = −1.2 z + 0.4
Substituting this expression into the second equation gives:
−3(−1.2 z + 0.4) − 5 z = 0.
Simplifying gives:
z = −0.8571
Now that we have the solution for z, we enter the back-substitute phase to find the values of x and y. First substitute the value for z back into either of the previous pair of equations:
Either choice yields y = 1.429. Now substitute the values for both z and y back into any of the original three equations:
Any choice yields x = 4.286. Thus the final solution (to 3 significant figures) is:
{ x = 4.29, y = 1.43, z = −0.857 }.
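The back-substitution chain of this example can be replayed in a few lines of plain Python, starting from the simplified equation −3(−1.2 z + 0.4) − 5 z = 0 derived above:

```python
# From -3*(-1.2*z + 0.4) - 5*z = 0:  3.6*z - 1.2 - 5*z = 0,  so  -1.4*z = 1.2
z = 1.2 / -1.4                 # -0.857142...
y = -1.2 * z + 0.4             # back-substitute z into y = -1.2z + 0.4
x = 4 - y - 2 * z              # back-substitute y and z into x = 4 - y - 2z
print(round(x, 2), round(y, 2), round(z, 3))   # 4.29 1.43 -0.857
```

The three lines run in the reverse of the order in which the unknowns were eliminated, which is exactly what "back-substitution" means.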

### Fewer equations than unknowns

If the number of equations in the system is less than the number of unknowns then the last time you repeat the above process you will have one equation remaining but more than one unknown remaining. You will not be able to get a unique value for any variable; the best you will be able to do is solve for one variable in terms of the others, and back-substitute that expression.

### Recognizing Redundant or Inconsistent Systems

If the number of equations is greater than the number of unknowns then the system is guaranteed to be either redundant or inconsistent. But if the number of equations is equal to or less than the number of unknowns then you will generally not recognize a system as being redundant or inconsistent until the very end of the calculation. This is especially true if the system is large.

If you are solving the system of equations by the substitution method and the system is redundant, then you will end up with a final equation that states 0 = 0. If the system is inconsistent, then you will end up with one that states a contradiction like 0 = 5. In either case back-substitution is impossible so the best you can do is leave the system of equations in its present state.

## 7.3 - The Elimination Method

In this section we explain the elimination method. This method makes use of the fact that the solution of an equation is not changed if we
• multiply both sides of the equation by the same factor.

• subtract equal quantities from both sides of an equation.
This means that we can take one equation and subtract a multiple of another equation from it without changing the solution of the equations.

The elimination method uses this fact to solve a system of linear equations. Suppose we start with a system of n equations in n unknowns. Pick the first equation and subtract suitable multiples of it from the other n − 1 equations. In each case the multiple is chosen so that the subtraction cancels or eliminates the same variable, say x. The result is that the n − 1 equations contain only n − 1 unknowns (x no longer appears).

We repeat this elimination process until we get 1 equation in 1 unknown, which is then easily solved.

The final step is to back-substitute the solution already obtained for the 1 unknown into the previous equations to find the values of all the other unknowns.

Example: Solve this system of equations by elimination:

x + y = 4
2 x − 3 y = 6

Solution: Let’s take twice the first equation, namely:
2 x + 2 y = 8
and subtract it from the second equation, like this:

(2 x − 3 y) − (2 x + 2 y) = 6 − 8,   that is,   −5 y = −2.

The result is one equation in the one unknown, y. The other unknown, x, has been eliminated. Solving this equation yields y = 0.4.

It remains to find x. If we back-substitute y = 0.4 into either of the original equations we get x = 3.6. Thus the solution is:
{ x = 3.6, y = 0.4 }.
(Note that we could have instead found x without back-substitution if we had added 3 times the first equation to the second equation, since this eliminates y.)
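In code, this elimination is a single row subtraction on the augmented rows of the system x + y = 4, 2 x − 3 y = 6 (a sketch):

```python
r1 = [1, 1, 4]                  #  x +  y = 4
r2 = [2, -3, 6]                 # 2x - 3y = 6
# Subtract 2 * r1 from r2 so that x is eliminated:
r2 = [r2[i] - 2 * r1[i] for i in range(3)]   # [0, -5, -2], i.e. -5y = -2
y = r2[2] / r2[1]               # 0.4
x = (r1[2] - r1[1] * y) / r1[0] # back-substitute into x + y = 4 -> 3.6
print(x, y)
```

Working on rows of coefficients rather than whole equations is the idea behind the augmented matrix introduced next.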

### The Augmented Matrix

We have explained the essence of elimination. For larger systems we need a systematic procedure to avoid getting mixed up. Gaussian elimination and Gauss-Jordan elimination are two such procedures.

Before we describe them, we introduce some short-hand. A system of equations such as:
will be represented by a rectangular array of numbers called an augmented matrix:

Definitions:
• The individual numbers in the matrix are called elements.

• Columns run down the matrix. e.g. the 4th column contains the elements 80, 7 and 22.

• Rows run across. The 3rd row contains 3, −1, 2 and 22. Note that the number of columns in an augmented matrix is always 1 more than the number of unknowns.

• The diagonal is the set of elements that starts at the top, left corner of the matrix and runs diagonally down and to the right. The diagonal of the above matrix consists of the numbers 4, 1 and 2.

• Any element in a diagonal position is said to be on the diagonal, any element above those positions is above the diagonal, and any element below them is below the diagonal.
Keep in mind the following:
• The i-th row of the augmented matrix represents the i-th equation.

• The j-th column (to the left of the vertical line) contains the coefficients of the j-th variable or unknown.

• The vertical line represents the equal signs.

• The column on the right of the vertical line represents the right-hand-side of the equations.

### The Elementary Row Operations

We have seen that the solution of a system of equations is not changed if we:
• divide both sides of an equation by a constant, or

• subtract a multiple of one equation from another equation.
These same operations can be applied to the rows of an augmented matrix, since each row just represents an equation. They are then called Elementary Row Operations.

The Elementary Row Operations (E.R.O.’s) are:
• E.R.O. #1: Choose a row of the augmented matrix and divide (every element of) the row by a constant.

• E.R.O. #2: Choose any row of the augmented matrix and subtract a multiple of any other row from it (element by element).

• E.R.O. #3: It is sometimes useful to swap two rows. This is a valid operation because the order of the equations is immaterial.

You may apply these E.R.O.’s to an augmented matrix as often as you like without changing the solution of the equations represented by the matrix.
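The three E.R.O.’s translate directly into short functions on a list-of-lists matrix (a sketch; note that rows are indexed from 0 in the code, while the text counts them from 1):

```python
def ero1(M, i, c):
    """E.R.O. #1: divide every element of row i by the constant c."""
    M[i] = [e / c for e in M[i]]

def ero2(M, i, j, m):
    """E.R.O. #2: subtract m times row j from row i, element by element."""
    M[i] = [e - m * f for e, f in zip(M[i], M[j])]

def ero3(M, i, j):
    """E.R.O. #3: swap rows i and j."""
    M[i], M[j] = M[j], M[i]

# A small made-up augmented matrix to operate on:
M = [[2, 4, 6], [3, 5, 7]]
ero1(M, 0, 2)        # first row becomes [1.0, 2.0, 3.0]
ero2(M, 1, 0, 3)     # second row becomes [0.0, -1.0, -2.0]
ero3(M, 0, 1)        # swap the two rows
print(M)             # [[0.0, -1.0, -2.0], [1.0, 2.0, 3.0]]
```

Every elimination procedure in the rest of this section is just a planned sequence of these three calls.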

Example: This example shows how we apply E.R.O.#1 and the notation we use to indicate it. We will divide the first row of the augmented matrix on the left by 2 to produce the new augmented matrix on the right:
Note:   ← ÷ by 2   means “divide the row being pointed to by 2 to produce the new matrix”.

Example: This example shows how we apply E.R.O.#2 and the notation we use to indicate it. In the augmented matrix on the left we will take the second row and from it subtract 3 times the first row to produce the new augmented matrix on the right:
Note:   ← R 2 − 3 · R 1   means “take the row being pointed to (row 2) and subtract 3 times row 1 from it to produce the new row 2”.

Example: This example shows how we apply E.R.O.#3 and the notation we use to indicate it. In the augmented matrix on the left we swap rows 1 and 2 to produce the new augmented matrix on the right:
Note:   R 1 ↔ R 2   means “swap rows 1 and 2”.

### Gaussian Elimination

The Gaussian elimination procedure is a certain sequence of E.R.O.’s that transforms the augmented matrix into Gauss form (also known as row echelon form). This form is characterized by 1’s on the diagonal, 0’s below the diagonal and any numbers above the diagonal. Here is an example:
This augmented matrix represents the system of equations:
It is solved by back-substitution. Plugging z = 3 into the second equation gives y = 5. Then plugging both z = 3 and y = 5 into the first equation gives x = 7.

Algorithm for Gaussian Elimination: We transform one column at a time into row echelon (or Gauss) form. The column presently being transformed is called the pivot column. We proceed systematically, letting the pivot column be the first column, then the second column, etc., until the last column before the vertical line of the augmented matrix. For each pivot column, we do the following two steps before moving on to the next pivot column:
• Locate the diagonal element in the pivot column. This element is called the pivot, and the row containing it is called the pivot row. Divide every element in the pivot row by the pivot (i.e. use E.R.O. #1) to get a new pivot row with a 1 in the pivot position.

• Get a 0 in each position below the pivot position by subtracting a suitable multiple of the pivot row from each of the rows below it (i.e. by using E.R.O. #2).

When all the columns to the left of the vertical line have been transformed, the augmented matrix will be in Gauss form and we then solve the system by back-substitution.
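The two steps per pivot column can be written as a short routine (plain Python; a sketch that assumes every pivot turns out nonzero, so no row swaps are needed, and the worked system is a made-up one whose solution is {x = 7, y = 5, z = 3}):

```python
def gauss_solve(M):
    """Solve a system given as an augmented matrix (list of rows):
    reduce to row echelon form, then back-substitute."""
    n = len(M)
    for p in range(n):                        # p = pivot column (and row)
        piv = M[p][p]
        M[p] = [e / piv for e in M[p]]        # E.R.O. #1: make the pivot 1
        for r in range(p + 1, n):             # E.R.O. #2: zeros below the pivot
            m = M[r][p]
            M[r] = [e - m * f for e, f in zip(M[r], M[p])]
    x = [0.0] * n                             # back-substitute, last row upward
    for i in range(n - 1, -1, -1):
        x[i] = M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))
    return x

# Made-up system: 2x + y + z = 22,  x + 3y + 2z = 28,  x + z = 10
sol = gauss_solve([[2, 1, 1, 22], [1, 3, 2, 28], [1, 0, 1, 10]])
print([round(v, 6) for v in sol])   # [7.0, 5.0, 3.0]
```

A production solver would also swap rows (E.R.O. #3) when a pivot is zero or very small; that refinement is called partial pivoting.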

Example: Use Gaussian elimination to solve the system of equations:
Solution: Perform this sequence of E.R.O.’s on the augmented matrix:

Set the pivot column to column 1. Get a 1 in the diagonal position (in red):

Next, get 0’s below the pivot (in red):

Now, let pivot column = second column.

First, get a 1 in the diagonal position:

Next, get a 0 in the position below the pivot:

Now, let pivot column = third column. Get a 1 in the diagonal position:

This matrix, which is now in Gauss form, represents this system of 3 equations:

It is solved by back-substitution. Plugging z = 3 into the second equation we get y = 5. And plugging z = 3 and y = 5 into the first equation we get x = 7. Thus the solution is:
{ x = 7, y = 5, z = 3 }.

### Gauss-Jordan Elimination

The Gauss-Jordan elimination procedure is a slightly different sequence of E.R.O.’s that transforms the augmented matrix into Gauss-Jordan form (also known as reduced row echelon form). This form is characterized by 1’s on the diagonal, 0’s above and below the diagonal on the left side of the vertical line, and any numbers on the right side of the vertical line. Here is an example:
This augmented matrix represents the system of equations:
This system is already solved: x = 7, y = 5, z = 3. Back-substitution is not required. However, about twice as many E.R.O.’s are required to produce the Gauss-Jordan form as the Gauss form.

Algorithm for Gauss-Jordan Elimination: We transform one column at a time into reduced row echelon (or Gauss-Jordan) form. The column presently being transformed is called the pivot column. We proceed systematically, letting the pivot column be the first column, then the second column, etc., until the last column before the vertical line of the augmented matrix. For each pivot column, we do the following two steps before moving on to the next pivot column:
• Locate the diagonal element in the pivot column. This element is called the pivot, and the row containing it is called the pivot row. Divide every element in the pivot row by the pivot (i.e. use E.R.O. #1) to get a new pivot row with a 1 in the pivot position.

• Get a 0 in each position above and below the pivot position by subtracting a suitable multiple of the pivot row from each of the rows above and below it (i.e. by using E.R.O. #2).

When all the columns to the left of the vertical line have been transformed, the augmented matrix will be in Gauss-Jordan form and we can read off the solution from the right-hand column.

Notice that the only difference from the Gauss procedure is that in the second step we get 0’s above the diagonal as well as below the diagonal. It is important to get all of these 0’s before moving on to the next pivot column.
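A sketch of the Gauss-Jordan procedure in plain Python (again assuming nonzero pivots, and using a made-up system whose solution is {x = 7, y = 5, z = 3}). The zero-clearing loop now runs over every row other than the pivot row, and no back-substitution is needed:

```python
def gauss_jordan_solve(M):
    """Solve a system given as an augmented matrix by reducing it
    all the way to reduced row echelon (Gauss-Jordan) form."""
    n = len(M)
    for p in range(n):                        # p = pivot column (and row)
        piv = M[p][p]
        M[p] = [e / piv for e in M[p]]        # E.R.O. #1: make the pivot 1
        for r in range(n):                    # E.R.O. #2: zeros above AND below
            if r != p:
                m = M[r][p]
                M[r] = [e - m * f for e, f in zip(M[r], M[p])]
    return [row[n] for row in M]              # read the solution off the right

# Made-up system: 2x + y + z = 22,  x + 3y + 2z = 28,  x + z = 10
sol = gauss_jordan_solve([[2, 1, 1, 22], [1, 3, 2, 28], [1, 0, 1, 10]])
print([round(v, 6) for v in sol])   # [7.0, 5.0, 3.0]
```

The extra row operations buy a matrix from which the answer can be read directly, which is the trade-off the text describes.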

Example: Use Gauss-Jordan elimination to solve the system of equations:
Solution: Perform this sequence of E.R.O.’s on the augmented matrix. Set the pivot column to column 1. Get a 1 in the diagonal position (in red) by using E.R.O. # 1:
Next, get 0’s below the pivot (in red) by using E.R.O. # 2:
Now, let pivot column = second column. First, get a 1 in the diagonal position by using E.R.O. # 1:
Next, get 0’s in the positions above and below the pivot (in red) by using E.R.O. # 2:
Now, let pivot column = third column. Get a 1 in the diagonal position by using E.R.O. # 1:
Next, get 0’s in the positions above the pivot (in red) by using E.R.O. # 2:
This matrix, which is now in Gauss-Jordan or reduced row echelon form, represents the solution:
{x = 49, y = −18, z = 8}.

### Redundant and Inconsistent Systems

If the number of equations is greater than the number of unknowns then the system is guaranteed to be either redundant or inconsistent. But if the number of equations is equal to or less than the number of unknowns then you will generally not recognize a system as being redundant or inconsistent until the very end of the calculation. This is especially true if the system is large.

If you are solving the system of equations by the substitution method and the system is redundant, then you will end up with a final equation that states 0 = 0. Or if the system is inconsistent, then you will end up with one that states a contradiction like 0 = 5. Something similar happens when using Gauss or Gauss-Jordan elimination. If the system is redundant, then at the end of the elimination procedure, when you have the augmented matrix in Gauss or Gauss-Jordan form, the last row of the augmented matrix will be:
This last row represents the equation 0 = 0, a useless piece of information.

If the system is inconsistent then the last row of the augmented matrix will look something like:
The last row represents the equation 0 = 5, a contradiction.
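Mechanically, both cases can be detected by scanning the reduced matrix for rows whose left side is all zeros (a sketch):

```python
def classify_echelon_rows(M):
    """Inspect an augmented matrix already in row echelon form.  A row whose
    left side is all zeros reads either 0 = 0 (redundant) or 0 = nonzero
    (inconsistent)."""
    zero_left = [row for row in M if all(e == 0 for e in row[:-1])]
    if any(row[-1] != 0 for row in zero_left):
        return "inconsistent"
    if zero_left:
        return "redundant"
    return "neither"

print(classify_echelon_rows([[1, 2, 3], [0, 0, 0]]))   # redundant: 0 = 0
print(classify_echelon_rows([[1, 2, 3], [0, 0, 5]]))   # inconsistent: 0 = 5
```

With floating-point arithmetic a real solver would compare against a small tolerance rather than exact zero.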

Example: Use Gaussian elimination to put this system of equations into row echelon form and interpret the result:
Solution: Perform this sequence of E.R.O.’s on the augmented matrix. Set the pivot column to column 1. There is already a 1 in the pivot position, so proceed to get 0’s in the two positions below the pivot:
Now, set the pivot column to the second column. First, get a 1 in the diagonal position:
Next, get a 0 in the position below the pivot:
Now, set the pivot column to the third column. The first thing to do is to get a 1 in the diagonal position, but there is no way to do this. In fact this matrix is already in row echelon form and represents:
This system of equations can’t be solved by back-substitution because we have no value for z. The last equation merely states that 0 = 0. There is no unique solution because z can take on any value.

The cause of this problem is that if we have 3 unknowns then we need 3 independent pieces of information (equations) about them to solve for them. Mathematicians say that the equations must be linearly independent. In a redundant system some of the information just duplicates other information. In this example a bit of experimentation shows that the third equation is just twice the second equation minus the first equation.

In general, an augmented matrix which has been put into row echelon form and which contains one or more rows of zeros at the bottom of the matrix indicates a redundant system of equations.

Example: Use Gaussian elimination to put this system of equations into row echelon form and interpret the result:
Solution: Perform this sequence of E.R.O.’s on the augmented matrix. Set the pivot column to column 1. There is already a 1 in the pivot position, so proceed to get 0’s below the pivot:
Now, set the pivot column to the second column. There is already a 1 in the pivot position, so proceed to get 0’s below the pivot:
Now, set the pivot column to the third column. The first thing to do is to get a 1 in the diagonal position, but there is no way to do this. In fact this matrix is already in row echelon form and represents:
This system of equations is inconsistent and has no solution. The last equation states a contradiction, namely 0 = −50.

In general, an augmented matrix which has been put into row echelon form and which contains one or more bottom rows consisting of all zeros to the left of the vertical line and a non-zero number to the right indicates an inconsistent system of equations with no solution.

### Fewer equations than unknowns

If the number of equations in the system is less than the number of unknowns then you will reach a point in the Gauss or Gauss-Jordan procedure where you cannot transform the pivot column because you have run out of pivot rows. Here is an example:
The third column has no pivot and no pivot row so you have to stop. This augmented matrix represents this system of equations:
In the second form we see that if a value is given for z then x and y can be expressed in terms of it. The next matrix shows that giving a value for z, say z = 5, amounts to having another row: