From: Michael Somos
Subject: Math. Reviews
Date: Thu, 8 Jul 1999 17:06:31 -0400 (EDT)
Newsgroups: [missing]
To: rusin@math.niu.edu
Keywords: Asymptotics of a quadratic recurrence
Dave Rusin,
[deletia -- djr]
Oh, while I am at it, I was looking at the recursion a(n) = a(n-1)-a(n-1)^2
with a(1)=1/2. Do you know of any results more precise than the crude
a(n) ~ 1/(n+log(n))? Shalom, Michael
==============================================================================
From: Dave Rusin
Subject: Re: Math. Reviews
Date: Thu, 8 Jul 1999 17:29:43 -0500 (CDT)
Newsgroups: [missing]
To: somos@grail.cba.csuohio.edu
>I was looking at the recursion a(n) = a(n-1)-a(n-1)^2
>with a(1)=1/2. Do you know of any results more precise than the crude
>a(n) ~ 1/(n+log(n))?
Not so crude, is it? If a(n)=1/(n+log(n)+1+b(n)) then the recursion is
b(n+1) = b(n) - log(1+ 1/n) + 1/(n + log(n) + b(n) ) ; the dominant part is
b(n+1) = b(n) - log(n)/n^2 + O(1/n^2), and sum( log(n)/n^2 ) converges,
so the b's should converge to a constant. Do you need more than that?
b(n)-limit should be O(log(n)/n). You could try to evaluate the limit
carefully (numerically I mean) and then compare to the 'known constants'
at Plouffe's Inverter.
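A quick numerical check of this convergence (a Python sketch; only the recursion above is assumed, and the sample indices 5000/10000/20000 are arbitrary):

```python
from math import log

# Iterate a(n+1) = a(n) - a(n)^2 from a(1) = 1/2 and track
# b(n) = 1/a(n) - (n + log(n) + 1), which should settle to a constant
# at a rate of roughly O(log(n)/n).
a = 0.5
b_vals = {}
for n in range(1, 20001):
    if n in (5000, 10000, 20000):
        b_vals[n] = 1.0 / a - (n + log(n) + 1.0)
    a = a - a * a

# Successive differences shrink, consistent with convergence:
d1 = abs(b_vals[10000] - b_vals[5000])
d2 = abs(b_vals[20000] - b_vals[10000])
print(b_vals[20000], d1, d2)
```

The differences do shrink roughly like log(n)/n, as predicted.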
But of course you're looking at a quadratic recurrence and, as you know,
these behave chaotically in general, so I wouldn't expect
anything particularly nice.
Or did you have some connection to elliptic curves or something? I wasn't
using anything high-powered at all, and I can't claim there isn't some
delightful number theory hiding.
dave
==============================================================================
From: Michael Somos
Subject: a(n) = a(n-1)-a(n-1)^2
Date: Thu, 8 Jul 1999 19:04:07 -0400 (EDT)
Newsgroups: [missing]
To: rusin@math.niu.edu
Dave Rusin,
Thanks for the reply. I have not counted the number of hours I have
spent on that sequence. Using gp/pari I was able to get as far as the
1/(n+log(n)) or 1/n - log(n)/n^2 approximation, and when I tried to get further
terms I got stuck. I could not solve for the coefficients. I tried in
general a(n) = sum ( c(i,j) n^-i * log(n)^j ) but could not get it
to be solvable. I know about the chaos stuff, but this is in the non-chaotic
range. The a(n) decreases smoothly to zero. I originally was
looking at Grossman's constant. Have you heard of that? It is based on the
recursion a(n) = a(n-2)/(1+a(n-1)) ( notice the close resemblance to
a(n)=(1+a(n-1))/a(n-2) which gives the Lyness 5-cycle ). I got as far
as getting a recurrence close to a(n) = a(n-1)-a(n-1)^2 but then was stuck
in figuring out the long term behavior. I will try the equations again
as you suggest, but unless I really messed up, there is another kind
of term which is missing. For example, where does the log(n) come from?
If I knew, I might be able to come up with further terms. Shalom, Michael
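Both recursions are easy to experiment with numerically; a Python sketch (the starting values are arbitrary choices, and the split behavior for a general start is observed here, not proven):

```python
from fractions import Fraction

# Lyness recursion a(n) = (1 + a(n-1))/a(n-2): every orbit has period 5.
# Starting values 1, 1 are an arbitrary illustration.
x, y = Fraction(1), Fraction(1)
orbit = [x, y]
for _ in range(8):
    x, y = y, (1 + y) / x
    orbit.append(y)
print([int(t) for t in orbit])  # -> [1, 1, 2, 3, 2, 1, 1, 2, 3, 2]

# Grossman's recursion a(n) = a(n-2)/(1 + a(n-1)), here from a(1) = a(2) = 1:
# the odd-indexed terms sink to 0 while the even-indexed terms approach a
# positive limit; reportedly only one special starting value (Grossman's
# constant) makes the whole sequence converge.
u, v = 1.0, 1.0
for _ in range(200):
    u, v = v, u / (1.0 + v)
print(u, v)  # u = a(201) is tiny, v = a(202) stays away from 0
```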
==============================================================================
From: Michael Somos
Subject: Okay I have it
Date: Thu, 8 Jul 1999 23:43:29 -0400 (EDT)
Newsgroups: [missing]
To: rusin@math.niu.edu
Dave Rusin,
I don't know where I made the little mistake, but I have it now.
Here is my current result which still needs to be improved and
maybe even proven later. I deal with the following:
let b(n) = b(n-1)^2/(b(n-1)-1) be the recursion. Here if b(n)=1/a(n),
then a(n) = a(n-1)-a(n-1)^2 . Let L = log(n) for brevity. Then
b(n) = n + ( L + c ) + ( L + c -1/2 )/n
- ( 1/2*L^2 + (c-3/2)*L + (1/2*c^2-3/2*c+5/6) )/n^2 + O(1/n^3)
There are higher terms which are polynomials in L . There is a lot
more work to be done now. Thanks for the suggestion which confirmed
what I believed before, but I just made some unexplained algebra error
which prevented me from making progress. Oh, the constant c depends on
the initial value of the sequence. Shalom, Michael
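Without knowing c, the expansion can still be sanity-checked: substituting the truncated series into b(n+1) = b(n)^2/(b(n)-1) should leave only a residual the size of the neglected 1/n^3 term's increment, whatever c is. A Python sketch (c = 0.3 is an arbitrary test value):

```python
from math import log

def B(n, c):
    # Truncated expansion: b(n) ~ n + (L+c) + (L+c-1/2)/n
    #   - ( L^2/2 + (c-3/2)*L + (c^2/2 - 3c/2 + 5/6) )/n^2,   L = log(n)
    L = log(n)
    return (n + (L + c) + (L + c - 0.5) / n
            - (0.5 * L * L + (c - 1.5) * L
               + (0.5 * c * c - 1.5 * c + 5.0 / 6.0)) / (n * n))

def residual(n, c):
    # How badly the truncation violates b(n+1) = b(n)^2/(b(n)-1); this
    # should be roughly the size of the omitted O(1/n^3) term's increment.
    b = B(n, c)
    return abs(B(n + 1, c) - b * b / (b - 1.0))

c = 0.3  # arbitrary: the expansion should work for every c
r100, r200 = residual(100, c), residual(200, c)
print(r100, r200)
```

The residual shrinks much faster than 1/n^3 as n doubles, which is what a correct truncation should do.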
==============================================================================
From: Dave Rusin
Subject: Re: Okay I have it
Date: Fri, 9 Jul 1999 01:17:38 -0500 (CDT)
Newsgroups: [missing]
To: somos@grail.cba.csuohio.edu
We agree. I solidified some details; use as you see fit.
We start with the recursion
a(n+1) = a(n)-a(n)^2 , a(1)=1/2 say.
Seeing the terms get smaller, set a(n)=1/b(n), and deduce
b(n+1) = b(n)^2/(b(n)-1), b(1)=2 or 1/a(1) in general.
Seeking to simplify the denominator, set b(n)=1+c(n) and deduce
c(n+1) = c(n) + 1 + 1/c(n), c(1)=1 or 1/a(1) - 1 in general.
Observing the increment "+1" here, expect c~n, so set c(n)=n+d(n) and get
d(n+1) = d(n) + 1/( n + d(n) ), d(1)=0 or 1/a(1)-2 in general.
Observing the increment "+1/n", expect d(n) ~ log(n). With the benefit of
hindsight I decide next to write d(n) = log(n) - e(n), and deduce
e(n+1) = e(n) + log(1+ 1/n) - 1/(n + log(n) - e(n)).
I pursued this starting with e(1) = 0 but now you appear interested in
the general starting value 2 - 1/a(1). I'd have to adapt the argument
below for the general case I guess, so let me stick to the case a(1)=1/2.
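Unwinding the chain gives e(n) = n + 1 + log(n) - 1/a(n) directly, so the e's can be reproduced in a few lines (a Python sketch in plain floating point; note the one initial dip, e(2) = log(2) - 1 < 0, before the climb starts):

```python
from math import log

# Unwinding the substitutions: e(n) = log(n) - d(n) = n + 1 + log(n) - 1/a(n),
# starting from a(1) = 1/2 (so e(1) = 0).
a = 0.5
e = []
for n in range(1, 10001):
    e.append(n + 1 + log(n) - 1.0 / a)
    a = a - a * a

# e(2) = log(2) - 1 < 0, after which the sequence climbs, staying below 1;
# e[999] and e[9999] reproduce the Maple table below.
print(e[0], e[1], e[999], e[9999])
```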
Prop. e(2) < e(3) < ... < e(n) < e(n+1) < ... < 1. (The first step actually
dips: e(2) = log(2) - 1 < 0 = e(1); the climb starts at n=2.)
Proof by induction (obviously!), the first few cases easily checked by hand.
(The inductive statement must include both e(n) < e(n+1) and e(n) < 1.)
To show e(n+1) > e(n) I need only show
log(1+ 1/n) > 1/(n + log(n) - e(n)).
The left side is at least 1/n - 1/(2n^2) (alternating Taylor series. Gosh,
I love Calc-2 !). On the right we can get an upper bound by minorizing the
denominator; by induction this is at least n + log(n) - 1. So the RHS
is at most (1/n) / (1 + (log(n) - 1)/n ) < (1/n)(1 - (log(n)-1)/n + (ditto)^2)
Thus the desired inequality will follow upon dividing
1 - 1/(2n) > 1 - (log(n)-1)/n + (ditto)^2
by n. Rearranging terms some more, we find we need only show
3/2 < log(n) - (log(n)-1)^2/n
Yadda yadda. This is true for all n >= 5.
The inequality e(n+1) < 1 follows from combining the recursive statements
to get
e(n+1) = e(1) + Sum_{1 <= k <= n} ( log(1+1/k) - 1/(k+log(k)-e(k)) ).
which by induction (and using e(1)=0 ) is at most
e(n+1) <= Sum_{1 <=k <= n} (log(1+1/k) - 1/(k+log(k)) )
Now use Taylor arguments to bound the summands as being, say, at most
(1/k - 1/(2k^2) + 1/(3k^3)) - (1/k)(1 - log(k)/k)
= (log(k) - 1/2)/k^2 + 1/(3k^3).
The sum of these expressions for k=1, 2, ..., n is of course less than the
infinite sums; "standard techniques" (in my case, cheating with Maple :-) )
estimate the two implied sums as -Zeta(1,2) - (1/2)*(Pi^2/6) + (1/3)*Zeta(3)
(here Zeta(1,2) is Maple's notation for zeta'(2)), which evaluates to
.5157668550 < 1.
QED
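The constant checks out by brute-force summation with an integral-test tail correction (a Python sketch; the cutoff N is arbitrary):

```python
from math import log

# Direct check: Sum_{k>=1} [ (log(k) - 1/2)/k^2 + 1/(3*k^3) ] should match
# -zeta'(2) - (1/2)*zeta(2) + (1/3)*zeta(3) ~= 0.5157668550.
N = 200000
s = sum((log(k) - 0.5) / k ** 2 + 1.0 / (3.0 * k ** 3) for k in range(1, N + 1))
# Integral-test estimate for the omitted k > N tail:
tail = (log(N) + 1.0) / N - 0.5 / N + 1.0 / (6.0 * N ** 2)
total = s + tail
print(total)
```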
We can describe the e's a little better. I had Maple compute directly the
floating-point values of e(n), n through 10000. (I did it with 100 digits
carried accuracy, so "this must be right" :-) ). This gives a tighter range
of values which may be inserted into the proof above. For n > 10000
we thus compute e(n+1) as
e(10^4) + Sum_{10^4 <= k <= n} ( log(1+1/k) - 1/(k+log(k)-e(k)) )
and so may obtain lower and upper bounds for e(n+1) by computing this
sum with the expression "e(k)" replaced, respectively, by .515767 and
e(10^4)=.23105873678. We may thus obtain lower and upper bounds for the
limit e(oo) = sup{e(n), n>1} by taking infinite sums. (Here I took a
series expansion and estimated the separate sums with an integral test).
My computations yield .231977767 <= e(oo) <= .23200621
Note that with this improved upper bound on e(oo) (and thus on every e(n) )
we may perform again the same computations with ".515767" replaced by ".232007"
to improve the lower bound on e(oo) ; in this way I conclude
.23200612 < e(oo) < .23200621
I don't really know how to narrow the band any better except by computing e(n)
directly for n past 10^4. (I'll try to get estimates for e(n) below, but
it seems too much trouble to incorporate this into the sums above.)
Tracing back to your earliest sequences, we now have a fairly precise
asymptotic estimate
a(n) = 1/(n + log(n) + (1-e(n))) ~ 1/(n + log(n) + .768 ).
Next we ask if it is possible to do better, that is, to estimate the rate
at which e(n) climbs to e(oo). The answer is "yes", of course, since
e(oo) - e(n) = Sum_{n <= k } ( log(1+1/k) - 1/(k+log(k)-e(k)) )
and we may estimate the sums on the right in terms of n. Not wishing to
repeat the Taylor arguments yet again, let me simply observe that the
dominant term in the kth summand is log(k)/k^2, and Sum( log(k)/k^2, k>= n)
is estimated by the integral test as (log(n)+1)/n. So we have an
approximation e(n) = e(oo) - ( log(n)/n )*(1 + o(1) ). (This is consistent
with the table below if e(oo) = .232 or so.)
My best estimate, then, for the original series, is
a(n) = 1/(n + log(n) + (1-e(oo)) + (log(n)/n)*(1+o(1)) )
where 1 - e(oo) is a constant equal to about 0.767994.
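The claimed rate can be checked against the bracket for e(oo) and the table below (a Python sketch; e(oo) is taken as the midpoint 0.232006165 of the bracket above):

```python
from math import log

# Check e(n) ~ e(oo) - log(n)/n: the normalized gap should be near 1.
e_oo = 0.232006165          # midpoint of the bracket for e(oo) above
table = {5000: 0.2302503760062495, 10000: 0.2310587367800038}
gaps = {n: (e_oo - e_n) * n / log(n) for n, e_n in table.items()}
print(gaps)
```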
dave
Collected data:
e(1000) = .2248491874532859625960457347592
e(2000) = .2280775805130294016059905365088
e(3000) = .2292509971933401405444724441447
e(4000) = .2298674700599966248063131109201
e(5000) = .2302503760062495129272756816559
e(6000) = .2305125020781817003078841351362
e(7000) = .2307037864028038605263636422376
e(8000) = .2308498425982927675763913096597
e(9000) = .2309652012911092031968191804526
e(10000)= .2310587367800037729315546345725
==============================================================================
From: Michael Somos
Subject: It gets better
Date: Fri, 9 Jul 1999 02:25:05 -0400 (EDT)
Newsgroups: [missing]
To: rusin@math.niu.edu
Dave Rusin,
In the last few hours since I made the breakthrough, I have come
across a remarkable new technique for generating numbers. It is
analogous to continued fractions. Recall that I made some minor
algebra error which prevented me from deducing the general form
of the a(n) with the log(n) terms that I was certain was there.
After I got the general term I saw that it could be simplified.
In fact, the simplification process could be iterated. This was a
big surprise. Here is my setup. I know that I will have series
in powers of 1/n so I let x = 1/n. Also, let y = log(n). Now the
idea is to use the transformation f --> 1/(1/x + c - log(f)) .
That is the correct mathematical form, but due to limitations of
my algebra package I have to adjust this slightly to the form
f --> 1/(1/x + c + y + log(x/f)) which is mathematically equivalent.
Here are my calculations:
gp> F(f,c) = 1/(1/x + c + y + log(x/f))
gp> x+O(x^2)
%2 = x + O(x^2)
gp> F(%,c1)
%3 = x + (-y - c1)*x^2 + O(x^3)
gp> F(%,c2)
%4 = x + (-y - c2)*x^2 + (y^2 + (2*c2 - 1)*y + (-c1 + c2^2))*x^3 + O(x^4)
In conventional notation this represents
1/n +(-log(n)-c2)/n^2 +(log(n)^2 +(2*c2-1)*log(n) +(c2^2-c1))/n^3 +O(1/n^4)
In our case the result we want is that a(n) =
x - y*x^2 + (y^2 - y + 1/2)*x^3 + (-y^3 + 5/2*y^2 - 5/2*y + 5/6)*x^4 + O(x^5)
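The gp expansion can be cross-checked numerically: for small x the exact iterates of F agree with the printed series up to the first omitted order. A Python sketch (x = 1e-3 and the constants c1, c2 are arbitrary test values):

```python
from math import log

def F(f, c, x, y):
    # The transformation f -> 1/(1/x + c + y + log(x/f)) from the gp session.
    return 1.0 / (1.0 / x + c + y + log(x / f))

x = 1e-3
y = log(1.0 / x)      # y = log(n) with n = 1/x
c1, c2 = 0.7, -0.4    # arbitrary constants for the check

f1 = F(x, c1, x, y)
f2 = F(f1, c2, x, y)

# Compare f2 with the series from %4:
#   x + (-y - c2)*x^2 + (y^2 + (2*c2 - 1)*y + (c2^2 - c1))*x^3 + O(x^4)
series = x - (y + c2) * x**2 + (y**2 + (2 * c2 - 1) * y + (c2 * c2 - c1)) * x**3
err = abs(f2 - series)
print(err)  # of order x^4, up to log factors
```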
This has been a most interesting problem. I am sure that there is much
more to this than I have discovered so far. Oh, I also did a lot of long
numerical computations along the way. Almost all useless now. It was good
to see how close my ideas were to numerical reality though. Shalom, Michael
--
Michael Somos Cleveland State University
http://grail.cba.csuohio.edu/~somos/ Cleveland, Ohio, USA 44115
==============================================================================
From: Michael Somos
Subject: simply marvelous
Date: Fri, 9 Jul 1999 16:03:34 -0400 (EDT)
Newsgroups: [missing]
To: rusin@math.niu.edu
Dave Rusin,
It is not often that I discover such nice results, so I want to share
a little of it with you. Recall our little recursion:
a(n) = a(n-1)-a(n-1)^2
where a(1) is suitably chosen to simplify the following results.
We notice numerically that
1/a(n) ~ n + c1 + log(n) + ...
The wonderful result is that
1/a(n) = n + c1 + log(n + c2 + log(n + c3 + ... ))
where the c's are rational numbers of a striking form. They are:
c1 = 0, c2 = -1/2, c3 = -17/24, c4 = -919/1152, c5 = -3630361/4423680,
c6 = -93968746997933/117413668454400, ...
where each denominator is more than the square of the previous. In fact,
d(n) = n*d(n-1)^2
almost exactly. I have not proven a thing, but this is really nice.
I wonder if this is known mathematics. Shalom, Michael
P.S. You may publicise this if you wish. I am not sure when I will have
the time to present this properly to the public.
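The near-squaring of the denominators is checkable with exact rational arithmetic (a Python sketch using only the denominators quoted above):

```python
from fractions import Fraction

# Denominators d(n) of c2..c6 as quoted above.
d = {2: 2, 3: 24, 4: 1152, 5: 4423680, 6: 117413668454400}

# Each denominator exceeds the square of the previous one, and the ratios
# d(n)/(n*d(n-1)^2) run 2, 1/2, 2/3, 1 -- exactly 1 already at n = 6.
squares_ok = all(d[n] > d[n - 1] ** 2 for n in range(3, 7))
ratios = {n: Fraction(d[n], n * d[n - 1] ** 2) for n in range(3, 7)}
print(squares_ok, ratios)
```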