Newsgroups: sci.math.num-analysis
From: lpa@netcom.com (Pierre Asselin)
Subject: Re: Proof of bad condition of Hilbert matrix
Date: Sat, 31 Aug 1996 04:56:05 GMT
tmoon@artemis.ece.usu.edu (Todd K. Moon) writes:
>It is well known that the condition number of large Hilbert matrices
>grows very large, and is easy to verify experimentally. However, I
>have been unable to find a proof of this fact. I would appreciate a
>pointer to a reference, if such exists.
Well, let's see... You need estimates of the max and min values
that the Rayleigh quotient R(u) can take, where
R(u) = (u^t H u)/(u^t u) , u= any nonzero vector.
Now, H is the Gram matrix of the monomials 1, x, x^2, ... over the
interval [0,1].
If p(x) = u_0 + u_1*x + u_2*x^2 + ... then
u^t H u = integral[from 0 to 1](p(x)^2 dx).
Take p(x) = Tchebychev polynomial (shifted from the interval [-1,1]
to [0,1]). The coefficients grow exponentially with n, so (u^t u)
grows exponentially with n; yet p(x) oscillates between -1 and +1,
so the integral is less than 1. So Rmin decreases exponentially
with n.
For Rmax, use u= [1,0,0,0...]; Rmax is >= 1.
So Rmax/Rmin grows exponentially. If you want a tight bound you'll
need to find good estimates of the coefficients of Tchebychev
polynomials. Legendre polynomials should give an even tighter bound.
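The argument above can be checked numerically. A minimal sketch (my own
illustration, not part of the original argument), using scipy's hilbert
and numpy's Chebyshev class, with n=8 chosen to stay well within double
precision:

```python
import numpy as np
from numpy.polynomial import Chebyshev, Polynomial
from scipy.linalg import hilbert

n = 8
H = hilbert(n)   # H[i,j] = 1/(i+j+1): Gram matrix of 1, x, ..., x^(n-1) on [0,1]

# Shifted Chebyshev polynomial p(x) = T_{n-1}(2x - 1) on [0,1]
p = Chebyshev.basis(n - 1)(Polynomial([-1.0, 2.0]))
u = p.coef                           # monomial coefficients u_0 ... u_{n-1}

quad_form = u @ H @ u                # equals integral_0^1 p(x)^2 dx
integral = (p**2).integ()(1.0) - (p**2).integ()(0.0)

R = quad_form / (u @ u)              # Rayleigh quotient
print(quad_form, integral)           # both < 1, since p oscillates in [-1,1]
print(R, np.linalg.cond(H))          # R is tiny because u^t u is huge
```

The leading coefficient of T_7(2x-1) is already 64*2^7 = 8192, so u^t u
exceeds 6e7 while the quadratic form stays below 1, giving R below 1e-8
and a condition number above 1e10 at n=8.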
--
--Pierre Asselin, Westminster, Colorado
lpa@netcom.com
pa@verano.sba.ca.us
==============================================================================
From: jacquesg@clic1.qbc.clic.net (Jacques Gelinas)
Newsgroups: sci.math.num-analysis
Subject: Re: Proof of bad condition of Hilbert matrix
Date: 31 Aug 1996 13:08:35 GMT
Todd K. Moon (tmoon@artemis.ece.usu.edu) wrote:
: It is well known that the condition number of large Hilbert matrices
: grows very large, and is easy to verify experimentally. However, I
: have been unable to find a proof of this fact. I would appreciate a
: pointer to a reference, if such exists.
Well, I once knew a CS prof who tested each new system by inverting 9x9,
10x10 ... Hilbert matrices until it crashed. Not as severe as the BLAS
test, but efficient.
In the well known article "Pitfalls in Computation, or Why a Math Book
isn't Enough" (Amer. Math. Month. 77(9), Nov 1970, 931-955), G.E. Forsythe
starts from an 1894 article of David Hilbert and shows that least squares
approximation on [0,1] of a function f(x) by a polynomial of degree n-1
leads to a linear system based on the Hilbert matrix of order n.
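That connection is easy to verify directly: the coefficient matrix of the
normal equations is the Gram matrix of the monomials on [0,1], which is
exactly the Hilbert matrix. A quick check (my own sketch, using
scipy.integrate.quad):

```python
import numpy as np
from scipy.linalg import hilbert
from scipy.integrate import quad

n = 5
# Normal equations for min over c of integral_0^1 (f(x) - sum_j c_j x^j)^2 dx:
#   sum_j c_j * integral_0^1 x^(i+j) dx = integral_0^1 x^i f(x) dx,  i = 0..n-1
G = np.array([[quad(lambda x, i=i, j=j: x**(i + j), 0, 1)[0]
               for j in range(n)] for i in range(n)])
print(np.allclose(G, hilbert(n)))    # True: the coefficient matrix is H
```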
He then shows that the growth of the elements of the inverse matrix
leads to numerical instability because any small variation in the RHS
could be greatly magnified in multiplying by the inverse matrix:
"It cannot be demonstrated here, but if the maximal element of the
inverse matrix is much bigger than b^s, you cannot solve the
system with s-digit arithmetic in base b."
Forsythe gives a reference to his 1967 book on linear algebra (with Moler).
Here is how I would suggest you attack the problem of estimating the
condition number:
0) Adopt ||A|| ||A^-1|| as a measure of ill-conditioning with, say,
the maximal matrix norm.
1) Use a theorem of Cauchy on determinants to estimate the elements
of the inverse matrix.
A. Ralston, A First Course on Numerical Analysis, p. 263
"A theorem of Cauchy states that if a1...an,b1...bn are 2n numbers,
the determinant with elements (ai+bk)^-1, i=1..n, k=1..n, has the
value
\prod^n_{i>k=1} (a_i-a_k)(b_i-b_k) / \prod^n_{i=1}\prod^n_{k=1}(a_i+b_k)"
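Cauchy's formula applies directly to the Hilbert matrix: with a_i = i and
b_k = k - 1, the elements (a_i + b_k)^-1 = 1/(i+k-1) are exactly the
Hilbert entries, and exact rational arithmetic shows how fast the
determinant collapses. A sketch (cauchy_det is my own helper):

```python
from fractions import Fraction
from math import prod

def cauchy_det(a, b):
    """Cauchy's formula for the determinant with elements 1/(a_i + b_k)."""
    n = len(a)
    num = prod((a[i] - a[k]) * (b[i] - b[k])
               for i in range(n) for k in range(i))
    den = prod(a[i] + b[k] for i in range(n) for k in range(n))
    return Fraction(num, den)

# Hilbert matrix of order n: elements 1/(i + k - 1), i.e. a_i = i, b_k = k - 1
n = 5
detH = cauchy_det(range(1, n + 1), range(n))
print(detH)   # 1/266716800000: super-exponentially small already at n=5
```

Since the elements of the inverse carry det H in their denominators (via
cofactors that stay of moderate size), this is the source of the element
growth in H^-1 mentioned above.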