Professor John Beachy,

Department of Mathematical Sciences,

Northern Illinois University

If you have been trained to believe that a calculator will always give you the correct answer, you may be in for a shock if you try to solve a system of equations by just plugging the coefficients into your calculator and pressing a button. Even a system of two equations in two unknowns can present problems for the program used by your calculator. Here is one example.

We will try to solve this system of equations.

    416785x + 415872y = 1
    415872x + 414961y = 0

The coefficients in the problem have six significant digits. Since the TI-85 calculator stores more than twice that many significant digits internally, solving the system would seem to present no problem. Using the equation solver on the TI-85 gives the following "answer".

    Calculator answer:  x = 414843.184536    y = -415753.925885
    Exact solution:     x = 414961           y = -415872

You can "check" the calculator answer by substituting it back into the system. Be sure to use the values stored in the calculator. Then, to the limits of the calculator's accuracy, everything checks out, and, in fact, both solutions appear to be correct.
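The substitution check can be replayed in ordinary IEEE double precision (a sketch, not the TI-85's internal arithmetic). Both residuals are tiny compared with the roughly 1.7e11-sized products involved, so "plugging back in" cannot distinguish the wrong answer from the right one.

```python
def residuals(x, y):
    """Residuals of 416785x + 415872y = 1 and 415872x + 414961y = 0."""
    r1 = 416785.0 * x + 415872.0 * y - 1.0
    r2 = 415872.0 * x + 414961.0 * y - 0.0
    return r1, r2

# Calculator answer: residuals are far below the size of the products.
print(residuals(414843.184536, -415753.925885))
# Exact answer: the integer products are exactly representable,
# so the residuals come out exactly zero.
print(residuals(414961.0, -415872.0))
```

Note that the exact answer gives residuals of exactly zero here only because every product is an integer below 2^53, so double precision represents it exactly.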

To find the exact solution, we can use elementary row operations on the system of equations. The first goal is to reduce the size of the numbers, but retain integer values. Here are the results.

    416785x + 415872y = 1
    415872x + 414961y = 0
~>
    913x + 911y = 1
    415872x + 414961y = 0
~>
    913x + 911y = 1
    457x + 456y = -455
~>
    456x + 455y = 456
    457x + 456y = -455

Now the determinant is (456)(456) - (457)(455), and with n = 456 it has the form

    n^2 - (n+1)(n-1) = n^2 - (n^2 - 1) = 1,

and therefore the original coefficient matrix has determinant 1. The inverse of the original matrix is

     _                  _ -1       _                    _
    |                    |        |                      |
    |  416785   415872   |    =   |   414961   -415872   |
    |  415872   414961   |        |  -415872    416785   |
    |_                  _|        |_                    _|

and multiplying by the inverse gives us the exact solution x = 414961 and y = -415872. It is also possible to continue the row reduction.
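Because the determinant is 1, the inverse is just the adjugate, with integer entries. This can be verified in exact integer arithmetic (a sketch, not anything the calculator does):

```python
# Entries of the coefficient matrix [[a, b], [c, d]].
a, b, c, d = 416785, 415872, 415872, 414961

det = a * d - b * c
assert det == 1

# Inverse of [[a, b], [c, d]] is (1/det) * [[d, -b], [-c, a]].
inv = [[d, -b], [-c, a]]

# Multiply the inverse by the right-hand side (1, 0).
x = inv[0][0] * 1 + inv[0][1] * 0
y = inv[1][0] * 1 + inv[1][1] * 0
print(x, y)   # 414961 -415872
```

Since all the arithmetic is on exact integers, no roundoff can occur, and the answer agrees with the exact solution above.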

    456x + 455y = 456
    457x + 456y = -455
~>
    456x + 455y = 456
    x + y = -911
~>
    x = 414961
    x + y = -911
~>
    x = 414961
    y = -415872
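The whole by-hand reduction, from the original system down to the solution, can be replayed with exact integer row operations (a sketch of the steps above, not a general algorithm):

```python
# Each row is [x-coefficient, y-coefficient, right-hand side].
r1 = [416785, 415872, 1]   # 416785x + 415872y = 1
r2 = [415872, 414961, 0]   # 415872x + 414961y = 0

def sub(row, other, k=1):
    """Return row - k * other, componentwise."""
    return [ri - k * oi for ri, oi in zip(row, other)]

r1 = sub(r1, r2)          # 913x + 911y = 1
r2 = sub(r2, r1, 455)     # 457x + 456y = -455
r1 = sub(r1, r2)          # 456x + 455y = 456
r2 = sub(r2, r1)          # x + y = -911
r1 = sub(r1, r2, 455)     # x = 414961
r2 = sub(r2, r1)          # y = -415872
print(r1, r2)
```

Every step subtracts an integer multiple of one row from the other, so the solution set never changes and no precision is ever lost.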

This is not a problem unique to the TI-85. Using MATLAB on a SUN workstation also gives an answer that differs substantially from the correct one. The difficulties are inherent in the problem. To look at this problem from the geometric point of view, we can compute the slope of each line. To 12 significant digits, we get

    -416785 ÷ 415872 = -1.00219538704
    -415872 ÷ 414961 = -1.00219538704

To most calculators, the lines appear to be parallel, and so there should be no solution at all! Because the angle between the two lines is very small, a small change in the coefficients caused by roundoff error can make a very large difference in the solution. From a geometric point of view, shifting the two lines just a little bit can make a big difference in the point of intersection.
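The two slopes can be compared directly (a sketch in IEEE double precision; a calculator displaying 12 digits may differ in its last shown digit). As exact fractions the slopes are different, so the lines really do intersect, but they differ only by about 6e-12:

```python
from fractions import Fraction

s1 = -416785 / 415872
s2 = -415872 / 414961
print(f"{s1:.12g}", f"{s2:.12g}")

# The exact slopes are unequal, so the lines are not parallel...
assert Fraction(-416785, 415872) != Fraction(-415872, 414961)
# ...but the gap between them is minuscule.
print(abs(s1 - s2))
```

The gap between the slopes is far smaller than the rounding a calculator introduces in the coefficients, which is exactly why the system is so sensitive.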

Recall that the exact inverse is

     _                  _ -1       _                    _
    |                    |        |                      |
    |  416785   415872   |    =   |   414961   -415872   |
    |  415872   414961   |        |  -415872    416785   |
    |_                  _|        |_                    _|

Using the matrix inversion routine on the TI-85 gives the following answer. (Note that the first column contains the same answers we obtained as the solution of the previous system of equations.)

     _                  _ -1       _                                  _
    |                    |        |                                    |
    |  416785   415872   |    =   |   414843.184536   -415753.925885   |
    |  415872   414961   |        |  -415753.925885    416666.666667   |
    |_                  _|        |_                                  _|

To illustrate some of the inherent difficulties in doing Gaussian elimination using floating point arithmetic, we will look at the row reduction of a standard "badly behaved" matrix.
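Even with the roughly 16 digits of IEEE double precision, Gaussian elimination on this system drifts away from the exact answer. The following sketch mimics what a floating point solver does internally (elimination with the larger pivot first, as MATLAB's LAPACK-based routines also do); the names and step-by-step layout here are for illustration only:

```python
a11, a12, b1 = 416785.0, 415872.0, 1.0
a21, a22, b2 = 415872.0, 414961.0, 0.0

m = a21 / a11                 # multiplier for eliminating row 2
a22p = a22 - m * a12          # tiny pivot: its exact value is 1/416785
b2p = b2 - m * b1

y = b2p / a22p                # back substitution
x = (b1 - a12 * y) / a11
print(x, y)  # close to, but in general not exactly, (414961, -415872)
```

The tiny pivot 1/416785 magnifies the rounding error committed in computing it, so the solution can be off by several units even though the residual remains tiny.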

The matrix given below is called a Hilbert matrix. It is a well-known example of a matrix that causes problems for numerical algorithms. To help understand the problems, we will do an exact row reduction side by side with a row reduction done using floating point arithmetic. To see how the error in the approximations can be compounded, we will use a highly simplified example, in which the floating point arithmetic is carried out with accuracy to only three significant digits.

    Exact calculations in this column     Floating point calculations here
     _                       _             _                           _
    |                         |           |                             |
    |  1     1/2   1/3   1/4  |           |  1.00   .500   .333   .250  |
    |  1/2   1/3   1/4   1/5  |           |  .500   .333   .250   .200  |   (1)
    |  1/3   1/4   1/5   1/6  |           |  .333   .250   .200   .167  |
    |  1/4   1/5   1/6   1/7  |           |  .250   .200   .167   .143  |
    |_                       _|           |_                           _|
                                                                            ~>

     _                           _         _                                _
    |                             |       |                                  |
    |  1     1/2    1/3    1/4    |       |  1.00   .500    .333    .250     |
    |  0     1/12   1/12   3/40   |       |  0.00   .0830   .0830   .0750    |   (2)
    |  0     1/12   4/45   1/12   |       |  0.00   .0830   .0890   .0837    |
    |  0     3/40   1/12   9/112  |       |  0.00   .0750   .0837   .0805    |
    |_                           _|       |_                                _|
                                                                                ~>

     _                            _        _                                  _
    |                              |      |                                    |
    |  1     1/2    1/3     1/4    |      |  1.00   .500    .333     .250      |
    |  0     1/12   1/12    3/40   |      |  0.00   .0830   .0830    .0750     |   (3)
    |  0     0      1/180   1/120  |      |  0.00   .000    .00600   .00870    |
    |  0     0      1/120   9/700  |      |  0.00   .000    .00870   .0127     |
    |_                            _|      |_                                  _|
                                                                                  ~>

     _                            _        _                                  _
    |                              |      |                                    |
    |  1     1/2    1/3     1/4    |      |  1.00   .500    .333     .250      |
    |  0     1/12   1/12    3/40   |      |  0.00   .0830   .0830    .0750     |   (4)
    |  0     0      1/120   9/700  |      |  0.00   .000    .00870   .0127     |
    |  0     0      1/180   1/120  |      |  0.00   .000    .00600   .00870    |
    |_                            _|      |_                                  _|
                                                                                  ~>

     _                             _       _                                   _
    |                               |     |                                     |
    |  1     1/2    1/3     1/4     |     |  1.00   .500    .333     .250       |
    |  0     1/12   1/12    3/40    |     |  0.00   .0830   .0830    .0750      |   (5)
    |  0     0      1/120   9/700   |     |  0.00   .000    .00870   .0127      |
    |  0     0      0       -1/4200 |     |  0.00   .000    .000     -.0000600  |
    |_                             _|     |_                                   _|

In the original matrix, labeled (1), some of the decimal entries are already inaccurate. In matrix (3), the three digit computation that produces the 3rd entry in row 4 is this:
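The exact column of the reduction can be replayed with Python's rational arithmetic (a sketch; choosing the larger pivot in column 3 produces the same swap of rows 3 and 4 seen between matrices (3) and (4)):

```python
from fractions import Fraction

# 4-by-4 Hilbert matrix with exact rational entries.
H = [[Fraction(1, i + j + 1) for j in range(4)] for i in range(4)]

for k in range(3):
    # Partial pivoting: bring the largest entry in column k up to row k.
    p = max(range(k, 4), key=lambda r: abs(H[r][k]))
    H[k], H[p] = H[p], H[k]
    for r in range(k + 1, 4):
        m = H[r][k] / H[k][k]
        H[r] = [H[r][j] - m * H[k][j] for j in range(4)]

print(H[3][3])   # -1/4200, matching the last pivot in matrix (5)
```

Because the arithmetic is exact, the final pivot -1/4200 is not an artifact of rounding; the floating point column's -.0000600 is its three-digit approximation.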

.837×10^{-1} - (.904)(.830×10^{-1})
= .837×10^{-1} - .750×10^{-1}
= .870×10^{-2}

In matrix (5), the last entry of row 4 is computed as follows:

.870×10^{-2} - (.690)(.127×10^{-1})
= .870×10^{-2} - .876×10^{-2}
= -.006×10^{-2}
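Both of these three-digit computations can be simulated by rounding every intermediate result to three significant digits (a sketch of the chopped arithmetic in the example; `fl3` is a helper name introduced here, not part of any library). The second result is negative, -.0000600, matching the last entry in matrix (5):

```python
from math import floor, log10

def fl3(x):
    """Round x to three significant digits."""
    if x == 0.0:
        return 0.0
    return round(x, 2 - floor(log10(abs(x))))

# Third entry of row 4 in matrix (3).
m = fl3(0.0750 / 0.0830)             # multiplier .904
e1 = fl3(0.0837 - fl3(m * 0.0830))   # .870e-2
print(m, e1)

# Last entry of row 4 in matrix (5).
m = fl3(0.00600 / 0.00870)           # multiplier .690
e2 = fl3(0.00870 - fl3(m * 0.0127))  # -.006e-2 = -.0000600
print(m, e2)
```

The final subtraction cancels almost all of the significant digits, which is how three-digit arithmetic turns two entries known to three digits into a result with essentially none.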

The method for defining a Hilbert matrix can be extended to larger sizes, and the 10 by 10 Hilbert matrix presents substantial problems for even a very sophisticated numerical algorithm. You can experiment on your calculator by inverting Hilbert matrices of larger and larger sizes.
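One way to see the trouble grow is to compute the exact determinant of the n-by-n Hilbert matrix, which shrinks catastrophically as n increases (a sketch using exact rational elimination, not a calculator routine; `hilbert_det` is a helper name introduced here):

```python
from fractions import Fraction

def hilbert_det(n):
    """Exact determinant of the n-by-n Hilbert matrix via elimination."""
    H = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]
    det = Fraction(1)
    for k in range(n):
        det *= H[k][k]          # product of the pivots
        for r in range(k + 1, n):
            m = H[r][k] / H[k][k]
            H[r] = [H[r][j] - m * H[k][j] for j in range(n)]
    return det

for n in (2, 4, 6, 8, 10):
    print(n, float(hilbert_det(n)))
```

A determinant this close to zero means the matrix is nearly singular, so any inversion routine working with a fixed number of digits is fighting the same cancellation seen in the three-digit example above.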

- Yves Nievergelt, "Numerical Linear Algebra on the HP-28 or How to Lie With Supercalculators," American Mathematical Monthly (1991), 539-544.
- David S. Watkins, Fundamentals of Matrix Computations, John Wiley & Sons, 1991.