Solving Systems of Linear Equations

The key ideas are best introduced through an example. We want to develop a systematic method for solving systems of linear equations like the one below.
       3x  +2y  -5z       =  3
      - x  + y       +6w  = 11 
      -2x  - y  +3z  + w  =  0 
        x  + y  -2z  + w  =  3 
We can perform any of these operations on the system:
(1) Interchange two equations;

(2) Multiply each term of an equation by a nonzero constant;

(3) Replace an equation by adding to it a multiple of another equation.
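These three operations are easy to express in code. The sketch below is a minimal Python illustration (the function names are my own, not standard); the system is stored as its augmented matrix, a list of rows, with exact Fraction arithmetic.

```python
from fractions import Fraction

def interchange(M, i, j):
    """Operation (1): interchange equations i and j."""
    M[i], M[j] = M[j], M[i]

def scale(M, i, c):
    """Operation (2): multiply each term of equation i by a nonzero constant c."""
    assert c != 0
    M[i] = [c * a for a in M[i]]

def add_multiple(M, i, j, c):
    """Operation (3): replace equation i by (equation i) + c * (equation j)."""
    M[i] = [a + c * b for a, b in zip(M[i], M[j])]
```

For example, add_multiple(M, 1, 0, Fraction(1)) adds row 0 to row 1, which is the kind of step used to eliminate x below.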

To use the Gauss-Jordan technique (a refinement of Gaussian elimination), choose an equation whose coefficient of x is 1. (It may be necessary to create one first, either by dividing each term of one of the equations by its coefficient of x, or by adding a multiple of one equation to another.) This equation is called the pivot; move it to the top position, and use it to eliminate the x term in the other equations.

Repeat this procedure for each of the columns. The solution given below illustrates Gauss-Jordan elimination.

       3x  +2y  -5z       =  3
      - x  + y       +6w  = 11
      -2x  - y  +3z  + w  =  0
        x  + y  -2z  + w  =  3 
~>
        x  + y  -2z  + w  =  3 
      - x  + y       +6w  = 11 
      -2x  - y  +3z  + w  =  0 
       3x  +2y  -5z       =  3 
~>
        x  + y  -2z  + w  =  3 
            2y  -2z  +7w  = 14 
             y  - z  +3w  =  6 
           - y  + z  -3w  = -6 
~>
        x  + y  -2z  + w  =  3 
             y  - z  +3w  =  6 
            2y  -2z  +7w  = 14 
           - y  + z  -3w  = -6 
~>
        x       - z  -2w  = -3 
             y  - z  +3w  =  6 
                       w  =  2 
~>
        x       - z       =  1 
             y  - z       =  0 
                       w  =  2 

This gives us the final solution: x = z + 1, y = z, w = 2.

We do not have to write down the variables each time, provided we keep careful track of their positions. The solution using matrices to represent the system looks like this.

     _                   _
    |                     |
    |  3   2  -5   0   3  |
    | -1   1   0   6  11  |
    | -2  -1   3   1   0  |
    |  1   1  -2   1   3  |
    |_                   _|
~>
     _                   _
    |                     |
    |  1   1  -2   1   3  |
    | -1   1   0   6  11  |
    | -2  -1   3   1   0  |
    |  3   2  -5   0   3  |
    |_                   _|
~>
     _                   _
    |                     |
    |  1   1  -2   1   3  |
    |  0   2  -2   7  14  |
    |  0   1  -1   3   6  |
    |  0  -1   1  -3  -6  |
    |_                   _|
~>
     _                   _
    |                     |
    |  1   1  -2   1   3  |
    |  0   1  -1   3   6  |
    |  0   2  -2   7  14  |
    |  0  -1   1  -3  -6  |
    |_                   _|
~>
     _                   _
    |                     |
    |  1   0  -1  -2  -3  |
    |  0   1  -1   3   6  |
    |  0   0   0   1   2  |
    |  0   0   0   0   0  |
    |_                   _|
~>
     _                   _
    |                     |
    |  1   0  -1   0   1  |
    |  0   1  -1   0   0  |
    |  0   0   0   1   2  |
    |  0   0   0   0   0  |
    |_                   _|

Finally, we put the variables back in to get the solution: x - z = 1, y - z = 0, w = 2. This can be rewritten in the form x = z + 1, y = z, w = 2.

The answer shows that there are infinitely many solutions. Any value can be chosen for z, and then using the corresponding values for x, y, and w gives a solution.

We can think of z as an "independent variable" and x, y, w as "dependent variables".
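The whole reduction can also be carried out programmatically. Below is a minimal Gauss-Jordan sketch in pure Python (exact arithmetic via Fraction), applied to the augmented matrix of the example. Its intermediate steps differ from those above, since it scales each pivot to 1 as it goes, but the reduced row echelon form is unique, so the final matrix agrees.

```python
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M (a list of rows)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        # Find a row at or below r with a nonzero entry in column c.
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]           # move the pivot row up
        M[r] = [a / M[r][c] for a in M[r]]    # scale so the pivot is 1
        for i in range(rows):                 # clear column c elsewhere
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

# Augmented matrix of the example system.
A = [[ 3,  2, -5,  0,  3],
     [-1,  1,  0,  6, 11],
     [-2, -1,  3,  1,  0],
     [ 1,  1, -2,  1,  3]]

# rref(A) gives the final matrix shown above:
# [1, 0, -1, 0, 1], [0, 1, -1, 0, 0], [0, 0, 0, 1, 2], [0, 0, 0, 0, 0]
```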

Some numerical algorithms in R^n

Algorithm 1. To test whether the vectors v1, v2, ..., vk are linearly independent or linearly dependent:

  1. Solve the system x1 v1 + x2 v2 + ... + xk vk = 0 (row reduce the matrix whose columns are the coordinate vectors for the v's)
  2. If the only solution is all zeros, then the vectors are linearly independent
  3. If there is a nonzero solution, then the vectors are linearly dependent
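A sketch of this test in Python (vectors given as lists of numbers; the helpers are my own, not a standard API). Counting the pivots found during forward elimination gives the rank, and the homogeneous system has only the zero solution exactly when the rank equals k.

```python
from fractions import Fraction

def rank(M):
    """Number of pivots found by forward elimination (exact arithmetic)."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def independent(vectors):
    """True iff the given vectors are linearly independent."""
    # Use the vectors as the columns of a matrix; independence means
    # every column is a pivot column, i.e. rank == number of vectors.
    matrix = [list(col) for col in zip(*vectors)]  # transpose
    return rank(matrix) == len(vectors)
```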

Algorithm 2. To check that the vectors v1, v2, ..., vk span the subspace W:

For each vector w in a set known to span W, check that w is a linear combination of v1, v2, ..., vk: the augmented matrix [ v1 v2 ... vk | w ] must have the same rank as the matrix of coefficients [ v1 v2 ... vk ]. Equal ranks mean the system is consistent, so w lies in the span.
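The rank comparison can be coded directly; here is a minimal Python sketch (helper names are my own). To test spanning, apply in_span to each vector in a set known to span W.

```python
from fractions import Fraction

def rank(M):
    """Number of pivots found by forward elimination (exact arithmetic)."""
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def in_span(vectors, w):
    """True iff w is a linear combination of the given vectors."""
    # Columns are the v's; compare the rank of the coefficient
    # matrix with the rank of the matrix augmented by w.
    coeff = [list(col) for col in zip(*vectors)]
    augmented = [row + [x] for row, x in zip(coeff, w)]
    return rank(augmented) == rank(coeff)
```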

Algorithm 3. To find a basis for the subspace S( v1, v2, ..., vk ) by deleting vectors:

  1. Construct the matrix whose columns are the coordinate vectors for the v's
  2. Row reduce
  3. Keep the vectors whose column contains a leading 1
The advantage of this procedure is that the answer consists of some of the vectors in the original set.
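The three steps above can be sketched as follows (pure Python with exact arithmetic; function names are my own). Forward elimination already reveals which columns will carry the leading 1s after full reduction, so a complete Gauss-Jordan pass is not needed.

```python
from fractions import Fraction

def pivot_columns(M):
    """Indices of the pivot columns, found by forward elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    pivots, r = [], 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
    return pivots

def basis_by_deleting(vectors):
    """Keep the v's whose column gets a leading 1 after row reduction."""
    matrix = [list(col) for col in zip(*vectors)]  # v's as columns
    return [vectors[c] for c in pivot_columns(matrix)]
```

As the text notes, the basis returned is a subset of the original vectors, unchanged.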

Algorithm 4. To find a basis for the solution space of the system A x = 0 :

  1. Row reduce A
  2. Identify the independent variables in the solution
  3. In turn, let one of these variables be 1, and all others be 0
  4. The corresponding solution vectors form a basis
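The four steps can be sketched in Python as follows (a minimal illustration, not a production implementation; names are my own). Each free variable is set to 1 in turn, and the dependent variables are read off the reduced rows.

```python
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M (a list of rows)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [a / M[r][c] for a in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def nullspace_basis(A):
    """Basis for the solution space of A x = 0 (steps 1-4 above)."""
    R = rref(A)
    n = len(R[0])
    # Pivot (dependent) columns: first nonzero entry of each nonzero row.
    pivots = [row.index(next(x for x in row if x != 0)) for row in R if any(row)]
    free = [c for c in range(n) if c not in pivots]
    basis = []
    for f in free:
        # Set this free variable to 1, the other free variables to 0,
        # and read the dependent variables off the reduced rows.
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for row, p in zip(R, pivots):
            v[p] = -row[f]
        basis.append(v)
    return basis
```

For the homogeneous version of the example system (the same coefficients with zero right-hand sides), this returns the single basis vector (1, 1, 1, 0).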

Algorithm 5. To find a simplified basis for the subspace S( v1, v2, ..., vk ) :

  1. Construct the matrix whose rows are the coordinate vectors for the v's
  2. Row reduce
  3. The nonzero rows form a basis
The advantage of this procedure is that the vectors in the basis have many zeros, so they are in a useful form. Note: the nonzero rows of any matrix in row echelon form are linearly independent.
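A minimal Python sketch of this procedure (names are my own): row reduce the matrix whose rows are the v's and keep the nonzero rows.

```python
from fractions import Fraction

def rref(M):
    """Return the reduced row echelon form of M (a list of rows)."""
    M = [[Fraction(x) for x in row] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [a / M[r][c] for a in M[r]]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

def simplified_basis(vectors):
    """Basis for the span of the v's: nonzero rows of the reduced matrix."""
    R = rref(vectors)            # rows of R span the same subspace
    return [row for row in R if any(row)]
```

Using full reduced row echelon form (rather than stopping at row echelon form) puts zeros above the leading 1s as well, which is what gives the basis vectors their many zeros.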