From: ikastan@uranus.uucp (Ilias Kastanas)
Subject: Re: true or false? [Proof]
Date: 17 Oct 1999 03:59:33 GMT
Newsgroups: sci.math,sci.logic
Keywords: characterize sine by magnitude of all derivatives
In article <7u4ove$hqi$1@vixen.cso.uiuc.edu>,
Jim Ferry <"jferry"@[delete_this]uiuc.edu> wrote:
>Ilias Kastanas wrote:
>>
>> I posted a short sketch of my proof several months ago, in a
>> prior thread. If there is interest I could give more detail.
>
>I'd be interested in seeing a repost of the proof.
>
This will be, uh, a tad longer than my old posting, so here is an
overview. After some preliminaries (a, b), we focus on f(z)/sin(z) in C.
Keeping clear of the zeros of sin(z), we establish boundedness... a modest
first dent. We parlay that into information about (f/sin)', coaxed out of
contour integrals (d); and then we go back to R and use it.
The problem: f is infinitely differentiable, |f^(n)(x)| <= 1
for all n and all x in R, and f'(0)=1. Show that f(x) = sin(x).
Proof
-----
a) Clearly f's Taylor series has infinite radius of convergence, is = f(x),
and extends f(x) to an entire function f(z), i.e. analytic everywhere
on the complex plane C. From the series we get the estimate
|f(z)| <= Sum |z|^n /n! = e^|z|.
b) We improve this estimate via Phragmen-Lindeloef: consider f(z) e^(iz).
On the ray theta = 0, its modulus is |f(x)| <= 1; on theta = pi/2
it is |f(iy)| e^(-y) <= e^|iy| e^(-y) = 1; and in the sector between
the rays |f(z) e^(iz)| <= e^|z| e^(-y) <= e^|z|. Hence, by P-L, we have
|f(z) e^(iz)| <= 1 throughout the sector. That is, |f(z)| <= e^|y|.
( |e^(iz)| = |e^(ix)| e^(-y) = e^(-y); here, y > 0).
Similarly for the other three sectors (when y < 0, use f(z) e^(-iz) ).
Therefore: |f(z)| <= e^|y| for all z (= x+iy) in C.
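A quick numeric sanity check (mine, not part of the argument): for the
known solution f = sin, sampling a grid in C confirms the bound
|f(z)| <= e^|y| that P-L delivers.  The grid spacing is an arbitrary choice.

```python
import cmath, math

# Sample |sin(z)| / e^|y| over a grid in C; the Phragmen-Lindeloef
# estimate says this ratio never exceeds 1.
worst = 0.0
for j in range(-30, 31):
    for k in range(-30, 31):
        z = complex(j * 0.37, k * 0.41)
        ratio = abs(cmath.sin(z)) / math.exp(abs(z.imag))
        worst = max(worst, ratio)

print(worst)  # <= 1 everywhere on the grid
```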
c) We can now start the proof proper.
Remove from C open disks of radius delta around each zero of sin(z)
(i.e. z_k = k pi), for some small delta, thus obtaining C'. Look at the
function f(z)/sin(z) on C': it is analytic _and bounded_... because
|f(z)/sin(z)| <= e^|y| /|sin(z)|, and the latter is bounded on C' (easy
calculation... via 1/|e^2iz -1|, or otherwise).
So |f(z)/sin(z)| < K on C'.
[Note by the way that e^|z| /|sin(z)| is _not_ bounded on C'... that's
why we needed the improved estimate in (b).
K depends on delta; but we keep delta fixed. No subtlety here... delta = 1
will do.]
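The boundedness on C' is easy to see numerically too (an illustration
with delta = 1; the grid is ad hoc, and the argument only needs *some*
finite K):

```python
import cmath, math

# Sample e^|y| / |sin z| at points of C', i.e. at distance >= 1 from
# every zero n*pi of sin, and record the largest value seen.
vals = []
for j in range(-60, 61):
    for k in range(-40, 41):
        z = complex(j * 0.23, k * 0.29)
        n = round(z.real / math.pi)      # nearest zero of sin is n*pi
        if abs(z - n * math.pi) < 1.0:
            continue                     # inside a removed disk; skip
        vals.append(math.exp(abs(z.imag)) / abs(cmath.sin(z)))

K = max(vals)
print(K)  # finite; empirically a small constant
```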
d) We apply (c) to study the contour integrals
	I_m = Integral f(w) / [ sin(w) (w-z)^2 ] dw
over circles of increasing radius R_m in C' (e.g. R_m = (m + 1/2)pi ).
Thus |integrand| < const/ R_m^2, and |I_m| < 2 pi R_m const/ R_m^2 -> 0
as m -> inf. We calculate the residues (at w = z, w = z_n = n pi),
express I_m as sum of residues, and obtain
(*) d/dz(f(z)/sin(z)) + Sum[n=-inf...inf] (-1)^n f(n pi) /(z- n pi)^2 = 0
The proof hinges on this key fact about f/sin. Note that -1 <= f(n pi) <= 1
while Sum[n=-inf...inf] 1/(z- n pi)^2 = 1/(sin(z))^2.
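The expansion just quoted can be checked by brute force (my check, with
an arbitrary truncation level; the tail is O(1/N)):

```python
import math

def trunc_sum(x, N=20000):
    # symmetric truncation of Sum_{n=-inf..inf} 1/(x - n*pi)^2
    return sum(1.0 / (x - n * math.pi) ** 2 for n in range(-N, N + 1))

for x in (0.5, 1.0, 2.5):
    print(x, trunc_sum(x), 1.0 / math.sin(x) ** 2)  # agree to ~4 places
```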
We haven't used the hypothesis f'(0) = 1 yet.
e) In (*), z has to be != n pi... which is inconvenient when it comes
to f'(0). However, the whole proof up to here goes through just as well
with f(z - pi/2) in place of f(z) (indeed, with f(z + any real) )! Thus
d/dz(f(z- pi/2)/sin(z)) + Sum (-1)^n f(n pi -pi/2) /(z- n pi)^2 = 0
Let S(z) denote -Sum, i.e. Sum (-1)^(n+1) (...) . Writing out (f/sin)' =
f'/sin - f cos/sin^2 we see that, for z = pi/2,
f'(0) - S(pi/2) = 0
i.e. S(pi/2) = 1. On the reals, however,
|S(x)| <= Sum 1/(x - n pi)^2 = 1/(sin(x))^2
Since this <= holds with equality for x = pi/2, ___every single one of the numbers
(-1)^(n+1) f(n pi - pi/2) must be = 1 ___ !
Which means that S(x) = 1/(sin(x))^2 for all x ( != n pi, all right).
The "secret", then, lurking in the background, is the way S(x) is hugged
by the positive expansion of 1/sin^2. Equal for one x, equal for all.
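For f = sin itself one can watch this happen: every coefficient
(-1)^(n+1) f(n pi - pi/2) equals 1, so the truncated S(x) tracks
1/sin(x)^2 (my numeric illustration; truncation level arbitrary):

```python
import math

def S(x, N=20000):
    # S(x) = Sum (-1)^(n+1) f(n*pi - pi/2) / (x - n*pi)^2  with f = sin
    return sum((-1) ** (n + 1) * math.sin(n * math.pi - math.pi / 2)
               / (x - n * math.pi) ** 2 for n in range(-N, N + 1))

x = 1.3
print(S(x), 1.0 / math.sin(x) ** 2)  # equal up to the truncation tail
```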
f) The rest is easy. Integrating d/dz(f(z-pi/2)/sin(z)) = 1/(sin(z))^2 we
get f(z-pi/2)/sin(z) = -cot(z) + C, f(z-pi/2) = -cos(z) + C sin(z), and
so f(z) = sin(z) + C cos(z). If C != 0, let C = cot(d); then f(z) =
cos(z-d)/sin(d) and |f(d)| = |1/sin(d)| > 1. Therefore C = 0, and
f(z) = sin(z).
Ilias
==============================================================================
From: ikastan@uranus.uucp (Ilias Kastanas)
Subject: Re: true or false? [Proof]
Date: 18 Oct 1999 19:43:07 GMT
Newsgroups: sci.math,sci.logic
In article <7ufnt2$jq@edrn.newsguy.com>,
Daryl McCullough wrote:
@ikastan@uranus.uucp says...
@
@>
@> The problem: f is infinitely differentiable, |f^(n)(x)| <= 1
@>for all n and all x in R, and f'(0)=1. Show that f(x) = sin(x).
@>
@>
@>Proof
@>-----
@>
@>a) Clearly f's Taylor series has infinite radius of convergence, is = f(x),
@>and extends f(x) to an entire function f(z), i.e. analytic everywhere
@>on the complex plane C. From the series we get the estimate
@>|f(z)| <= Sum |z|^n /n! = e^|z|.
@
@Why does it follow that f's Taylor series converges to f(x)? What
@about a function such as
@
@ f(x) = A/(x-B) e^{-C/(x-B)^2}
@
@This function has derivatives at every point, but it is not
@analytic at x=B.
@
	Because of the boundedness of all derivatives. The remainder,
f(x) - (first n terms), equals f^(n)(ksi) x^n /n! for some ksi between 0 and x,
so its modulus is <= |x|^n /n!, which -> 0 as n -> inf.
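Illustrated for f = sin at x = 4 (my example; the Lagrange bound
|x|^n / n! dominates the actual error and both tend to 0):

```python
import math

def taylor_sin(x, n):
    # Maclaurin partial sum of sin through degree < n
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n) if 2 * k + 1 < n)

x = 4.0
for n in (5, 10, 20):
    err = abs(math.sin(x) - taylor_sin(x, n))
    bound = x ** n / math.factorial(n)
    print(n, err, bound)  # err <= bound; both -> 0 as n grows
```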
Ilias
==============================================================================
From: David Petry
Subject: Re: true or false? [Proof]
Date: Fri, 22 Oct 1999 14:32:51 -0700
Newsgroups: sci.math,sci.logic
nospam@nospam.mit.edu wrote:
> Nice proof! Now, how tight are the hypotheses? Can we weaken them at all
> and still get the same conclusion? For example, if we replace the bound
> by 1+e, can we get anything other than sin(x) + c with |c| < e?
Note that if f is a function with support on [-1, 1] and L^1 norm <= 1, then
the Fourier transform of f and all its derivatives are bounded in modulus
by 1. That idea gives a way of coming up with functions that
work with slightly weakened hypotheses.
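A concrete instance of this remark (my choice of f; any f >= 0 on
[-1, 1] with integral 1 behaves the same way): take f(t) = (3/4)(1 - t^2)
and F(x) = Integral f(t) cos(xt) dt, a real form of its Fourier
transform.  Each derivative of F is an integral of t^n f against a
cosine or sine, hence bounded by Integral |t|^n f <= 1.

```python
import math

def deriv(n, x, steps=4000):
    # n-th derivative of F(x) = Integral_{-1}^{1} f(t) cos(xt) dt,
    # using d^n/dx^n cos(xt) = t^n cos(xt + n*pi/2); midpoint rule.
    h = 2.0 / steps
    total = 0.0
    for k in range(steps):
        t = -1.0 + (k + 0.5) * h
        f = 0.75 * (1.0 - t * t)             # L^1 norm exactly 1
        total += f * t ** n * math.cos(x * t + n * math.pi / 2) * h
    return total

worst = max(abs(deriv(n, x)) for n in range(6) for x in (0.0, 1.0, 3.0, 10.0))
print(worst)  # every sampled derivative has modulus <= 1
```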
==============================================================================
From: ikastan@uranus.uucp (Ilias Kastanas)
Subject: Re: true or false? [Proof]
Date: 24 Oct 1999 07:40:09 GMT
Newsgroups: sci.math,sci.logic
In article <7uqbq5$8dk@schubert.mit.edu>, wrote:
@
@Nice proof! Now, how tight are the hypotheses? Can we weaken them at all
@and still get the same conclusion? For example, if we replace the bound
@by 1+e, can we get anything other than sin(x) + c with |c| < e?
@--
@Tim Chow tchow-at-alum-dot-mit-dot-edu
We can weaken them, in certain ways. What does the proof need to
go through? For one thing, we can establish f(z) entire and O(e^|z|) from,
say: (existence on R and) boundedness of f, f', f''... at a _single_ real
(e.g. |f^(n)(0)| < B). Then we proceed with O(e^|y|), contour integrals
and the key formula for (f/sin)'.
Now add the hypothesis |f(x)| <= K (all x in R); using the formula
you can then _prove_ |f'(x)| <= K (and hence |f^(n)(x)| <= K, all n).
That is, if a power series with coefficients |a_n| < B/n! has a bound,
its derivatives will have that same bound.
	So there is a wealth of such f(x). Uniqueness comes from requiring
that f' (or some higher f^(n)) actually achieve the value K... forcing f(x)
= K sin(x - b). If we only ask that f' attain 1, rather than the bound 1+e,
there are lots of solutions. E.g. appropriate convex combinations cf + (1-c)g
of (shifted/scaled) sin(x), sin(x)/x, J_0(x), J_1(x) and so on.
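One such perturbation, spelled out (e = 0.1 is my arbitrary choice):
h = sin(x) + e sin(x)/x has h'(0) = 1, since sin(x)/x is even, while
|h^(n)| <= 1 + e because the derivatives of sin(x)/x = Integral_0^1
cos(xy) dy are themselves bounded by 1.

```python
import math

e = 0.1

def h(x):
    sinc = math.sin(x) / x if x != 0.0 else 1.0
    return math.sin(x) + e * sinc

def hprime(x, d=1e-5):
    # central difference, good enough for a spot check
    return (h(x + d) - h(x - d)) / (2 * d)

m = max(abs(h(0.01 * k)) for k in range(-2000, 2001))
print(hprime(0.0), m)  # slope 1 at 0; |h| exceeds 1 but stays <= 1 + e
```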
Ilias
==============================================================================
[The problem had actually arisen much earlier --djr]
==============================================================================
From: Dave Rusin
Subject: puzzle (functional analysis?)
Date: Thu, 12 Nov 1998 11:47:53 -0600 (CST)
Newsgroups: [missing]
To: grubb@math.niu.edu
Puzzle:
Given f in C^\infty(R) such that for all p >= 0 and all x in R,
| f^(p) | <= 1. In particular, |f'(0)| <= 1. This upper bound is attained
when f(x) = sin(x). Is it attained for any other f ?
This appeared on sci.math. Probably the answer is "no" -- that
f = sin is the unique optimal point -- but I don't know how to prove it.
My attempts to do so lead to some functional analysis.
Question:
What is known about the function spaces, metrics, bases, transforms below?
Let V be the subspace of functions in C^\infty(R) such that for all p,
f^(p) is bounded. What does V look like? Some observations:
0. V is a subspace.
1. V contains the constants but no other polynomials.
2. A rational function P/Q is in V iff deg(Q)>=deg(P) and roots(Q)={}.
3. V is closed under the differentiation operator.
4. V is closed under affine operators ( T_{m,b} (f)(x) = f(mx+b) )
5. V includes sin and cos . Note that it also then includes e.g.
(1/2)( sin(x) + sin(x/Pi) ) which is not periodic.
Let B be the subset of V of functions for which all the upper bounds
are at most 1. Remarks 1, 3, 5 above apply to B, too; so does remark 4
but only for |m| <= 1. Instead of remark 0 we make the remark that B is
a convex set containing 0 in V. In contrast to remark 2, I think I can show
B contains _no_ nonconstant rational functions. (The derivatives at 0 are
there in the Taylor series, whose coefficients satisfy a recurrence relation
with constant coefficients.)
I'm not sure what else is in B. Suppose f is in V and let
M_p = sup{ |f^(p) (x)|, x in R }.
If log(M_p) is bounded by a linear function in p, then there is a biscaling
g(x) = a f( b x ) which is in B. But I can't easily construct elements
of V with this growth on the bounds.
Here's where the functional analysis comes in: somehow there ought to be
a metric on V built upon the M_p in such a way that B is the unit ball
in V and E(f) = f'(0) is a continuous linear map E : V -> R. Then we
are doing maximization of a continuous linear function on a convex set.
Moreover, we can then at least make sense of the following conjecture: that
the span of the set {sin(cx), cos(cx) : c in R} is dense in V. I'm not
even quite sure how to say that here; maybe it means one would then expect
a representation of a general f in V as a (Stieltjes) integral transform
f(x) = \int_0^\infty sin( x y ) g(y) dy (+ same with cosine)
for some function g; the finite linear combinations suggested in Remark 5
correspond to discrete measures, but any non-negative g which vanishes
outside [0,1] and for which \int_0^1 g(y) dy = 1 will put f into B.
Taking g(y) = y, for example, leads to the function
f(x) = (sin(x)-x*cos(x))/x^2
all of whose derivatives are bounded, and yet this function is not a
simple linear combination of sines and cosines.
(You can even multiply this f by 2.2926... and keep all the values below 1,
but even so, f'(0) is only 0.7642... -- we haven't found the optimal
function yet.)
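The example is easy to double-check (midpoint quadrature against the
closed form; since f is odd with f(0) = 0, f'(0) is approximated by
f(x)/x for small x):

```python
import math

def f_closed(x):
    return (math.sin(x) - x * math.cos(x)) / x ** 2

def f_quad(x, steps=2000):
    # midpoint rule for Integral_0^1 y sin(xy) dy
    h = 1.0 / steps
    return sum((k + 0.5) * h * math.sin(x * (k + 0.5) * h) * h
               for k in range(steps))

print(f_closed(1.0), f_quad(1.0))  # agree
print(f_quad(1e-6) / 1e-6)         # ~ 1/3, i.e. f'(0)
print(2.2926 / 3)                  # ~ 0.7642, the value quoted above
```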
I don't know very much about extremal points in convex families of functions,
except to say that this line of reasoning shows up when trying to prove
the existence of really useful uniformization functions, so the topic
must be of interest to analysts. The Bieberbach conjecture was also of this
sort, of course, and difficult to prove.
dave
==============================================================================
From: Dave Rusin
Subject: That sin(x) problem
Date: Thu, 12 Nov 1998 12:33:50 -0600 (CST)
Newsgroups: [missing]
To: ouazad@aol.com
Did you say that you _do_, in fact, have a proof that f(x) = sin(x)?
I mentioned the problem to a local functional analyst, who says he can
probably produce a proof, but it doesn't sound at all easy.
How did the question arise?
dave
==============================================================================
From: Dave Rusin
Subject: odds-n-ends
Date: Tue, 17 Nov 1998 17:20:59 -0600 (CST)
Newsgroups: [missing]
To: rusin@math.niu.edu
Problem:
Let B = { f : R -> R ; | f^(p) (x) | <= 1 for all p >= 0, x in R}
Thus |f'(0)| <= 1 for any f in B.
If |f'(0)| = 1, is f = sin ?
Comments:
1) f in B => f is entire; use complex analysis?
2) f in B => F = (f)^2 + (f')^2 <= 2; note F' = 2(f')(f+f"). Show
that optimal f have F'==0? (Thus f=const or f"=-f)
(Note: where f=1 (or local max), have f'=0 and f"<0.
If f"<=-f then ... (induction on p?)
3) Fourier analysis techniques? If f in B is periodic, then
no Fourier summands can have frequency > 1 (i.e. period < 2 pi)
4) Many elements of B:
sin(c x) , c <= 1
(sin x - x cos x)/x^2 [note: f=int_0^1 y sin(xy) dy ]
Si(x)/1.851937... [note: Si'(x) = sinx/x = int_0^1 cos(xy)dy]
[Si(x) = int(sin(xy)/y,c=0..1)]
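The normalizing constant in the last entry can be recovered
numerically: Si increases up to x = pi and oscillates with decreasing
amplitude afterward, so its maximum is Si(pi) = 1.851937... (the
Wilbraham-Gibbs constant).  A sketch via midpoint quadrature:

```python
import math

def Si(x, steps=20000):
    # midpoint rule for Si(x) = Integral_0^x sin(t)/t dt
    h = x / steps
    return sum(math.sin((k + 0.5) * h) / ((k + 0.5) * h) * h
               for k in range(steps))

print(Si(math.pi))  # ~ 1.851937
```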