From: israel@math.ubc.ca (Robert Israel)
Subject: Re: Help! Closeness of square roots
Date: 22 Jan 1999 00:52:24 GMT
Newsgroups: sci.math
Keywords: Square-root as a function on (positive-definite) matrices
In article <36A79D73.41C6@ldeo.columbia.edu>, Alexey Kaplan writes:
|> If two symmetric positive-definite matrices are close,
|> how to show that their symmetric (or triangular?) square
|> roots are also close? Any norm will suffice. Is it written somewhere?
I suppose you mean the positive-definite square roots are close.
My apologies if you wanted an elementary proof: my inclination is to
approach this as a functional analyst, using the Spectral Theorem.
This defines a functional calculus: an isomorphism f -> f(A) from continuous
functions on the spectrum sigma(A) of a bounded normal operator A on
a Hilbert space H into the bounded operators on H. In particular, if
A is a positive semidefinite matrix, we can take f(z) = sqrt(z) and obtain
f(A) the positive semidefinite square root of A. Moreover, if ||A|| < R,
we can find a polynomial P(z) such that |P(z) - sqrt(z)| < epsilon for
0 <= z <= R, and then ||P(A) - sqrt(A)|| < epsilon as well
(I'm using the operator norm here).
Now if B is also positive semidefinite and sufficiently close to A, then
||P(A) - P(B)|| < epsilon (P(A) depends continuously on A, since P is a
polynomial) and, provided ||B|| < R, ||P(B) - sqrt(B)|| < epsilon; so by
the triangle inequality ||sqrt(A) - sqrt(B)|| < 3 epsilon.
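As a quick numerical illustration of this continuity (my own sketch, not
part of the argument above; it assumes NumPy, and psd_sqrt is a helper
name chosen here), one can compute the positive semidefinite square roots
via the Spectral Theorem and compare:

```python
import numpy as np

def psd_sqrt(A):
    # Spectral theorem: diagonalize and take sqrt on the eigenvalues.
    w, Q = np.linalg.eigh(A)
    return Q @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ Q.T

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 4))
A = X @ X.T + np.eye(4)       # symmetric positive definite
B = A + 1e-6 * np.eye(4)      # a small perturbation of A

# Spectral (operator) norm of the difference of the square roots:
dist = np.linalg.norm(psd_sqrt(A) - psd_sqrt(B), 2)
assert dist < 1e-5            # close matrices give close roots
```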
Robert Israel israel@math.ubc.ca
Department of Mathematics http://www.math.ubc.ca/~israel
University of British Columbia
Vancouver, BC, Canada V6T 1Z2
==============================================================================
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: Help! Closeness of square roots
Date: 22 Jan 1999 17:18:36 -0500
Newsgroups: sci.math
In article <36A79D73.41C6@ldeo.columbia.edu>,
Alexey Kaplan wrote:
>If two symmetric positive-definite matrices are close,
>how to show that their symmetric (or triangular?) square
>roots are also close? Any norm will suffice. Is it written somewhere?
>
>Thanks a lot,
>Alexey
This approach is also functional analytic, and with square roots we have
the luxury of integral representation:
A^(1/2)
= (1/pi) * integral[0 to infinity] A*(z*I+A)^(-1) z^(-1/2) dz
which can, in case of matrices, be verified in every eigenspace
separately (restricted to an eigenspace, A acts as a constant multiple of
identity).
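The representation can also be checked numerically. The sketch below (my
addition, assuming NumPy) substitutes z = tan(theta)^2, which maps
[0, pi/2) onto [0, infinity) and absorbs the z^(-1/2) singularity, then
applies the midpoint rule:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])    # symmetric positive definite
I = np.eye(2)

# Midpoint rule for (2/pi) * integral[0, pi/2] A (tan(th)^2 I + A)^(-1)
# sec(th)^2 dth, which is the formula after substituting z = tan(th)^2.
m = 10000
h = (np.pi / 2) / m
S = np.zeros_like(A)
for k in range(m):
    th = (k + 0.5) * h
    t = np.tan(th)
    S += A @ np.linalg.inv(t * t * I + A) / np.cos(th) ** 2
S *= 2.0 * h / np.pi

assert np.allclose(S @ S, A, atol=1e-5)   # S is indeed A^(1/2)
```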
We can express a bound on norm(A^(1/2) - B^(1/2)) in terms of norm(A-B)
and lower bounds for A and B, respectively: positive definiteness of A and
B implies A-m*I and B-n*I non-negative semidefinite for some positive m
and n.
Claim: norm(A^(1/2) - B^(1/2)) <= norm(A-B) / (m^(1/2) + n^(1/2)).
Outline of proof:
The "backward resolvents" subtract as follows:
A*(z*I+A)^(-1) - B*(z*I+B)^(-1)
= z * (z*I+A)^(-1) * (A - B) * (z*I+B)^(-1)
and the norm of (z*I+A)^(-1) is not more than 1/(z+m) (just subtract
(z+m)^(-1)*I from (z*I+A)^(-1)); similarly, norm((z*I+B)^(-1)) <= 1/(z+n).
Put together,
norm(A^(1/2) - B^(1/2)) <= norm(A-B) * f
where f = (1/pi) * integral[0 to infinity] z^(1/2) dz / ((z+n)*(z+m))
= (m^(1/2) - n^(1/2))/(m-n) = 1/(m^(1/2) + n^(1/2)) if m <> n,
and f = 1/(2*m^(1/2)) if m = n, by passing to a limit.
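The claim is easy to test numerically. A sketch (my own, assuming NumPy;
m and n are taken as the smallest eigenvalues of A and B, and the norm is
the spectral norm):

```python
import numpy as np

def psd_sqrt(M):
    # Positive definite square root via eigendecomposition.
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(np.sqrt(w)) @ Q.T

rng = np.random.default_rng(1)
for _ in range(100):
    X = rng.standard_normal((5, 5))
    Y = rng.standard_normal((5, 5))
    A = X @ X.T + np.eye(5)           # A - m*I >= 0 with m = lambda_min(A)
    B = Y @ Y.T + np.eye(5)
    m = np.linalg.eigvalsh(A)[0]
    n = np.linalg.eigvalsh(B)[0]
    lhs = np.linalg.norm(psd_sqrt(A) - psd_sqrt(B), 2)
    rhs = np.linalg.norm(A - B, 2) / (np.sqrt(m) + np.sqrt(n))
    assert lhs <= rhs + 1e-10         # the claimed bound holds
```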
A similar estimate can be obtained for A^(-1/2) (just drop the factor A
from the integral representation).
The result is well-known to people working in operator inequalities (I am
just a spectator there:-)=
Hope it helps, ZVK(Slavek).
==============================================================================
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: Square root of a matrix
Date: 13 Apr 1999 19:40:05 -0400
Newsgroups: sci.math
In article <7f03ab$elh$1@nnrp1.dejanews.com>, wrote:
:Does anyone know how to calculate the square root of a matrix in C??????
:
:Or
:
:Does anyone know an algorithm I can use to code and calculate it
:myself????
You might be better off reposting in sci.math.num-analysis .
Also, try DejaNews for a recent thread on this topic.
This being a sci.math newsgroup, here is an algorithm due to Riesz. It is
inefficient in its raw form, and works only for invertible matrices. It
needs a numerical integration routine. It also makes visible the
non-uniqueness of the square root of M (different admissible paths can
give different roots), even if the root is required to double-commute
with M.
Let S be a piecewise smooth path in the complex plane which runs once
around every eigenvalue of the matrix M, but does not separate 0 from
infinity.
Then z^(1/2) has a holomorphic branch in a neighbourhood of S, and
M^(1/2) = (1/(2*pi*i)) * integral [along S] (z*I - M)^(-1) z^(1/2) dz.
You can achieve some amount of uniqueness ("principal value") if you
demand that the path miss a cut along the negative real half-axis,
rotated slightly in the positive direction to avoid negative eigenvalues.
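A naive transcription of the formula (my own sketch, assuming NumPy; the
test matrix and the circular path are choices made here, with the circle
running once around the spectrum while leaving 0 outside):

```python
import numpy as np

M = np.array([[4.0, 1.0],
              [0.0, 9.0]])            # eigenvalues 4 and 9; 0 is not one
I = np.eye(2)

# Circle centered at 6.5 with radius 4: encloses both eigenvalues,
# stays in the right half plane, so the principal branch of sqrt
# is holomorphic in a neighbourhood of the path.
c, r, N = 6.5, 4.0, 4000
S = np.zeros((2, 2), dtype=complex)
for k in range(N):
    z = c + r * np.exp(2j * np.pi * k / N)
    dz = (2j * np.pi / N) * (z - c)   # z'(phi) dphi for this parametrization
    S += np.sqrt(z) * np.linalg.inv(z * I - M) * dz
S /= 2j * np.pi

assert np.allclose(S.imag, 0, atol=1e-8)
assert np.allclose((S @ S).real, M)   # S is a square root of M
```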
Aren't you sorry you asked mathematicians? :-)=
For efficient algorithms, try Golub & Van Loan, Matrix Computations.
Cheers, ZVK(Slavek).
==============================================================================
From: kovarik@mcmail.cis.McMaster.CA (Zdislav V. Kovarik)
Subject: Re: Square root of a Matrix ..
Date: 2 Dec 1999 12:49:19 -0500
Newsgroups: comp.dsp,sci.math.num-analysis
Keywords: Computing square roots of matrices
In article <38465F20.CCEEF496@hrz.uni-kassel.de>,
Gottfried Helms wrote:
[old thread snipped -- djr]
:That leaves me again confused:
:
: is S a square-root of M if
:
: S*S = M
:
: or if
:
: S*S'=M (S' = transposed of S).
:
:The latter is mostly used in statistics, but without the
:labeling as a square-root.
:
And justly so: it is *not* a square root.
:In another thread I saw a derivation of exp(S) as
:
: exp(S) = I + S / 1! + S*S / 2! + S*S*S/3! +...,
:
:which suggests, that S^2 = S*S and hence the square-
:root of M had to be determined using M= S*S .
:
:Could someone refer the definition please?
:
:Gottfried Helms.
Referred or not: There are several levels of restrictions on what is to be
called a square root of a matrix.
No restrictions except the obvious: S is a square root of M if S*S=M.
Remark: You can get a whole family of unrestricted square roots even of
the 2-by-2 identity I: I itself, -I, and every I-2*u*v' where v'*u=1.
Example: A =
[-29 20 ]
[-42 29 ]
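Both remarks above are easy to verify numerically. A sketch assuming
NumPy (the particular u and v below, with v'*u = 1, are my own choices):

```python
import numpy as np

I = np.eye(2)
u = np.array([[2.0], [3.0]])
v = np.array([[0.5], [0.0]])          # v' * u = 1
S = I - 2 * u @ v.T
assert np.allclose(S @ S, I)          # a non-obvious square root of I

A = np.array([[-29.0, 20.0],
              [-42.0, 29.0]])
assert np.allclose(A @ A, I)          # the example matrix squares to I
```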
Another remark: Some matrices don't have square roots at all (try to prove
it for the following):
N =
[ 0 1 ]
[ 0 0 ]
Restriction by commutativity: S*S=M, and S should commute with every X
which commutes with M, that is, M*X=X*M implies S*X=X*S.
Remark: Even then you can have several square roots, if M has more than
one non-zero eigenvalue.
Restriction by confining it to subspace: for an n-by-n matrix M, we
require besides S*S=M that S belong to the linear span of I, M, ...,
M^(n-1) .
Remark: It is the same as the previous restriction (quite an involved
proof).
Principal value: S*S=M, plus commutativity, plus the eigenvalues being in
the open right halfplane, augmented with the ray from 0 to i*infinity.
Still a smaller class, and for positive definite symmetric real matrices
this leads to uniqueness (the conditions can be vastly relaxed).
Cheers, ZVK(Slavek).
==============================================================================
From: "Brett George"
Subject: Re: Square root of a Matrix ..
Date: Wed, 8 Dec 1999 09:45:26 +1100
Newsgroups: comp.dsp,sci.math.num-analysis
It is actually referred to as the Cholesky decomposition;
check out chapter 2.9 in Numerical Recipes:
http://www.ulib.org/webRoot/Books/Numerical_Recipes/bookcpdf.html
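For what it's worth, a small NumPy sketch (my addition) showing that the
Cholesky factor satisfies L*L' = M but is not a square root in the sense
S*S = M:

```python
import numpy as np

M = np.array([[4.0, 2.0],
              [2.0, 3.0]])            # symmetric positive definite
L = np.linalg.cholesky(M)             # lower triangular Cholesky factor
assert np.allclose(L @ L.T, M)        # L * L' = M ...
assert not np.allclose(L @ L, M)      # ... but L * L != M in general

# The unique symmetric positive definite square root, for comparison:
w, Q = np.linalg.eigh(M)
S = Q @ np.diag(np.sqrt(w)) @ Q.T
assert np.allclose(S @ S, M)
```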
Brett.
wrote in message
news:828sj6$crl$1@nnrp1.deja.com...
[above article quoted -- djr]
> Hi Slavek,
>
> I've got your reasons, but then why there are transposition and
> Hermitian conjugation in the definitions of orthogonal and unitary
> matrices?
>
> Your example of identity implies (implicitly!:) that I is an Hermitian;
> look at the Householder square root: it is HH=I just because H^T=H
> or H^*=H by its definition.
>
> --
> Andy
>
>
> Sent via Deja.com http://www.deja.com/
> Before you buy.
==============================================================================
From: trhoffend@mmm.com (Tom Hoffend)
Subject: Re: Square root of a Matrix ..
Date: 29 Nov 1999 21:06:12 GMT
Newsgroups: comp.dsp,sci.math.num-analysis
In sci.math.num-analysis Sheung Hun (Bobby) Cheng wrote:
: For definition and methods, look at the following paper and the reference
: therein:
: Stable Iterations for the Matrix Square Root,
: Nicholas J. Higham,
: Numerical Algorithms, 15(2): 227-242, 1997.
: Approximating the Logarithm of a Matrix to Specified Accuracy.
: Sheung Hun Cheng, Nicholas J. Higham, Charles S. Kenney and Alan J.Laub,
: NA Report 353, November 1999.
: ftp://ftp.ma.man.ac.uk/pub/narep/narep353.ps.gz
For a somewhat older treatment, see also pp. 261-267 in "Functional Analysis"
by F. Riesz and B. Sz.-Nagy (I have the second edition, which is available
from Dover) for the square root of a positive symmetric transformation A on a
Hilbert space H (their definition of positive is positive semidefinite,
i.e. (Af, f) >= 0 for all f in H, or equivalently all characteristic
values of A >= 0).
They give the construction of the square root as the limit of a
sequence of polynomials in A and also prove uniqueness.
For a symmetric positive semi-definite matrix, Golub and Van Loan
(the "biblical" text "Matrix Computations"), give a short discussion
of the matrix square root in terms of the SVD of the Cholesky factorization.
TRH
--
Thomas R. Hoffend Jr., Ph.D. EMAIL: trhoffend@mmm.com
3M Company
3M Center Bldg. 236-2A-06 My opinions are my own and not
St. Paul, MN 55144-1000 those of 3M Company.
==============================================================================
Date: Mon, 12 Nov 2001 14:04:09 +0100
From: Gottfried Helms
To: feedback06@math-atlas.org
Subject: sqrt of matrix
Hi -
in
http://www.math.niu.edu/~rusin/known-math/99/sqrt_mat
I am cited on the topic of sqrt-of-a-matrix. I don't know whether
you are maintaining these pages.
There are obviously different definitions in the articles,
without one of them definitely being authoritative.
There are 3 answers:
the sqrt of a matrix m is s,
[1] if s*s = m
[2] if s*s' = m .
The latter is explicitly denied in one article, but is
implied if someone says
[3] s is the Cholesky decomposition of m ,
since [3] is identical with [2].
I myself later accepted version [1], and in my matrix calculator
I implemented a routine to find it via the Newton(?) iteration:
s1 = (m*s0^(-1) + s0) / 2
I think you should update the articles...
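That iteration can be sketched as follows (my transcription, assuming
NumPy; note that this simple form is known to be numerically unstable
for ill-conditioned matrices -- see the Higham paper cited earlier in
this file for stable variants):

```python
import numpy as np

def sqrt_newton(M, iters=20):
    # s_{k+1} = (m * s_k^(-1) + s_k) / 2, starting from s_0 = I.
    S = np.eye(M.shape[0])
    for _ in range(iters):
        S = (M @ np.linalg.inv(S) + S) / 2
    return S

M = np.array([[4.0, 1.0],
              [1.0, 3.0]])            # symmetric positive definite
S = sqrt_newton(M)
assert np.allclose(S @ S, M)          # converges to a square root of M
```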
Regards -
Gottfried Helms