From: rusin@vesuvius.math.niu.edu (Dave Rusin)
Newsgroups: sci.math, sci.math.research
Subject: Re: Analysis (sin) elementary solution ?
Date: 11 Nov 1998 22:49:08 GMT and 11 Nov 1998 22:55:11 GMT
Ouazad wrote:
> I know a non elementary solution for the following problem:
>
> f: R->R
> f is continuous and all its derivatives are continuous, and f satisfies the
> following property:
>
> for all p in N, for all x in R, |f(p)(x)|<=1
> and f'(0)=1
>
> f(p) is the p-th derivative of f.
> Prove that f(x)=sin x for all x in R.
I didn't see either any obvious counterexamples or an easy proof.
I do observe that the set B of functions f with the property
-1 <= f^(p) (x) <= 1 for all real x, all integer p>=0
forms a convex subset in C^\infty(R), including the zero function of course.
Hence f in B implies c.f in B for all c in [-1, 1]. This set B
includes the constant functions in [-1,1], but no other polynomials.
Also note that B is closed under the differentiation operator and under
the translation operators ( T_t(f)(x) = f(x+t) ) and the reflection
operator ( R(f)(x) = f(-x) ). It's closed under some biscalings: if
f is in B, then g(x) = a f( b x ) is in B if |a|<=1 and |b|<=1.
Since sin and cos both lie in B, it follows that convex combinations
of sin(b_i x) and cos(b_i x) also lie in B as long as the |b_i| <= 1.
(We gain nothing else with translations and reflections.)
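The biscaling closure claimed above follows from one line of computation (a sketch of the chain-rule bound, using nothing beyond what the text states):

```latex
g(x) = a\,f(bx) \implies g^{(p)}(x) = a\,b^{p} f^{(p)}(bx),
\qquad\text{so}\qquad
\bigl|g^{(p)}(x)\bigr| \le |a|\,|b|^{p}\,\bigl|f^{(p)}(bx)\bigr| \le 1
\quad\text{whenever } |a| \le 1,\ |b| \le 1,\ f \in B .
```

Combining this with convexity gives the convex combinations of sin(b_i x) and cos(b_i x) mentioned above.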
I'm not sure what else is in B. Suppose f is in the larger set C of
functions which have the property that for all p there is a supremum
M_p = sup{ |f^(p) (x)|, x in R }.
If log(M_p) is bounded by a linear function in p, then there is a biscaling
g(x) = a f( b x ) which is in B. But I didn't find any rational functions
with this property, and there may indeed be none (of course they would have
to have denominators with degree at least as large as the numerators').
For example, Christian Stoecker wrote:
> I claim that f(x)=atan(x) does the same.
This implies the claim that 1/(1+x^2) is in C (perhaps the poster even
thought f = atan lay in B, which is certainly false as f'''(0)=-2.)
I believe M_p is, for odd p, attained at the origin for this function;
checking the values as far as p=20, it seems log(M_p) increases
super-linearly.
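A quick numeric sanity check of that super-linear growth (a sketch, assuming the odd-order maxima are indeed |f^(p)(0)| = (2k)! for p = 2k+1, as the Maclaurin series of atan gives):

```python
# Sketch: if log(M_p) were bounded by a linear function of p, the ratios
# log(M_p)/p would stay bounded; for f = atan they increase steadily.
# Assumption (Maclaurin series of atan): for odd p = 2k+1 the maximum
# M_p is |f^(p)(0)| = (2k)!.
from math import lgamma

def log_M(p):
    """log(M_p) for f = atan and odd p = 2k+1, i.e. log((2k)!)."""
    k = (p - 1) // 2
    return lgamma(2 * k + 1)   # lgamma(n+1) = log(n!)

ratios = [log_M(p) / p for p in range(3, 21, 2)]
increasing = all(a < b for a, b in zip(ratios, ratios[1:]))
```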
So I don't know what this set B looks like. It's surprising to me that
the only extremal elements I know of in B are periodic, since the definition
of B requires nothing of the sort (and indeed ( sin(x) + sin(x/sqrt(2)) )/2
is a non-periodic element of B ).
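A grid check of that last claim (a numeric sketch; it uses the closed form d^p/dx^p sin(bx) = b^p sin(bx + p*pi/2), so each p-th derivative is bounded by (1 + 2^(-p/2))/2 <= 1):

```python
import numpy as np

SQRT2 = float(np.sqrt(2.0))
x = np.linspace(-200.0, 200.0, 400_001)

def deriv_sup(p):
    """Grid sup of the p-th derivative of (sin(x) + sin(x/sqrt(2)))/2."""
    vals = 0.5 * (np.sin(x + p * np.pi / 2)
                  + SQRT2 ** (-p) * np.sin(x / SQRT2 + p * np.pi / 2))
    return float(np.abs(vals).max())

sups = [deriv_sup(p) for p in range(0, 11)]
in_B = all(s <= 1.0 for s in sups)   # every derivative stays in [-1, 1]
```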
On the other hand, the original question simply asks for the maximum
value of the evaluation function E : B -> R given by E(f) = f'(0)
on B. This E is linear, so we will not attain a maximum on the
interior of B; testing the only candidate extremal points I know of,
sin(b x) and cos(b x), it is clear that we cannot do better
than to take f = sin.
I don't know very much about extremal points in convex families of functions,
except to say that this line of reasoning shows up when trying to prove
the existence of really useful uniformization functions, so the topic
is of interest to analysts. The Bieberbach conjecture was also of this sort,
and difficult to prove.
I hope this is sufficient inducement for those holding proofs or
counterexamples to come forward :-)
dave
==============================================================================
From: Robin Chapman
Newsgroups: sci.math
Subject: Re: Analysis (sin) elementary solution ?
Date: Thu, 12 Nov 1998 08:11:39 GMT
In article <72d494$89e$1@gannett.math.niu.edu>,
rusin@vesuvius.math.niu.edu (Dave Rusin) wrote:
> Ouazad wrote:
>
> > I know a non elementary solution for the following problem:
> >
> > f: R->R
> > f is continuous and all its derivatives are continuous, and f satisfies the
> > following property:
> >
> > for all p in N, for all x in R, |f(p)(x)|<=1
> > and f'(0)=1
> >
> > f(p) is the p-th derivative of f.
> > Prove that f(x)=sin x for all x in R.
>
> I didn't see either any obvious counterexamples or an easy proof.
>
> I do observe that the set B of functions f with the property
> -1 <= f^(p) (x) <= 1 for all real x, all integer p>=0
> forms a convex subset in C^\infty(R), including the zero function of course.
> Hence f in B implies c.f in B for all c in [-1, 1]. This set B
> includes the constant functions in [-1,1], but no other polynomials.
> Also note that B is closed under the differentiation operator and under
> the translation operators ( T_t(f)(x) = f(x+t) ) and the reflection
> operator ( R(f)(x) = f(-x) ). It's closed under some biscalings: if
> f is in B, then g(x) = a f( b x ) is in B if |a|<=1 and |b|<=1.
>
> Since sin and cos both lie in B, it follows that convex combinations
> of sin(b_i x) and cos(b_i x) also lie in B as long as the |b_i| <= 1.
> (We gain nothing else with translations and reflections.)
>
> I'm not sure what else is in B. Suppose f is in the larger set C of
> functions which have the property that for all p there is a supremum
> M_p = sup{ |f^(p) (x)|, x in R }.
> If log(M_p) is bounded by a linear function in p, then there is a biscaling
> g(x) = a f( b x ) which is in B. But I didn't find any rational functions
> with this property, and there may indeed be none (of course they would have
> to have denominators with degree at least as large as the numerators').
All the functions in B extend to entire functions on the complex plane.
I thought that perhaps all the functions in B have Fourier transforms
(as tempered distributions) with support in [-1,1]. Alas my analysis
is not up to proving this.
> For example, Christian Stoecker wrote:
> > I claim that f(x)=atan(x) does the same.
> This implies the claim that 1/(1+x^2) is in C (perhaps the poster even
> thought f = atan lay in B, which is certainly false as f'''(0)=-2.)
> I believe M_p is, for odd p, attained at the origin for this function;
> checking the values as far as p=20, it seems log(M_p) increases
> super-linearly.
That's correct: write 1/(1+x^2) = (1/2i)[1/(x-i) - 1/(x+i)]. Then
the p-th derivative of arctan has terms 1/(x±i)^p, which are maximized
in absolute value at 0, and for p odd they have the same argument.
Of course f^(2k+1)(0) = (-1)^k (2k)! (from the Maclaurin series).
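A symbolic spot-check of that last value (a sketch; sympy differentiates atan directly):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.atan(x)

# Verify f^(2k+1)(0) = (-1)^k (2k)! for the first few k, as read off the
# Maclaurin series atan(x) = x - x^3/3 + x^5/5 - ...
checks = []
for k in range(5):
    p = 2 * k + 1
    val = sp.diff(f, x, p).subs(x, 0)
    checks.append(val == (-1) ** k * sp.factorial(2 * k))
```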
> So I don't know what this set B looks like. It's surprising to me that
> the only extremal elements I know of in B are periodic, since the definition
> of B requires nothing of the sort (and indeed ( sin(x) + sin(x/sqrt(2)) )/2
> is a non-periodic element of B ).
>
> On the other hand, the original question simply asks for the maximum
> value of the evaluation function E : B -> R given by E(f) = f'(0)
> on B. This E is linear, so we will not attain a maximum on the
> interior of B; testing the only candidate extremal points I know of,
> sin(b x) and cos(b x), it is clear that we cannot do better
> than to take f = sin.
>
> I don't know very much about extremal points in convex families of functions,
> except to say that this line of reasoning shows up when trying to prove
> the existence of really useful uniformization functions, so the topic
> is of interest to analysts. The Bieberbach conjecture was also of this sort,
> and difficult to prove.
>
> I hope this is sufficient inducement for those holding proofs or
> counterexamples to come forward :-)
>
It's a good problem!
Robin Chapman + "They did not have proper
SCHOOL OF MATHEMATICal Sciences - palms at home in Exeter."
University of Exeter, EX4 4QE, UK +
rjc@maths.exeter.ac.uk - Peter Carey,
http://www.maths.ex.ac.uk/~rjc/rjc.html + Oscar and Lucinda, chapter 20
==============================================================================
From: Robert Israel
Newsgroups: sci.math.research
Subject: Re: Elementary solution?
Date: Thu, 12 Nov 1998 18:27:22 -0800 (PST)
Dave Rusin wrote:
| I do observe that the set B of functions f with the property
| -1 <= f^(p) (x) <= 1 for all real x, all integer p>=0
| forms a convex subset in C^\infty(R), including the zero function of
| course.
| Hence f in B implies c.f in B for all c in [-1, 1]. This set B
| includes the constant functions in [-1,1], but no other polynomials.
...
| I'm not sure what else is in B. Suppose f is in the larger set C of
| functions which have the property that for all p there is a supremum
| M_p = sup{ |f^(p) (x)|, x in R }.
| If log(M_p) is bounded by a linear function in p, then there is a biscaling
| g(x) = a f( b x ) which is in B. But I didn't find any rational functions
| with this property, and there may indeed be none (of course they would have
| to have denominators with degree at least as large as the numerators').
If f is in B, then f is equal to the sum of its Taylor series about any
point (use Taylor's Theorem with Lagrange remainder), and thus extends
to an entire function. In particular, of course, no non-constant rational
function can be in B (the only entire rational functions are polynomials).
Moreover, by expanding f(z) about Re(z), we have
|f(z)| <= sum_n |f^(n)(Re(z))| |Im(z)|^n/n! <= exp(|Im(z)|)
By a Paley-Wiener type theorem, the Fourier transform of f (as a
distribution) is supported on the interval [-1,1]. I think that
further analysis of the Fourier transform is the way to attack
this problem.
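A quick grid check of that growth bound for the extremal case f = sin (a sketch; |sin(x+iy)|^2 = sin^2 x + sinh^2 y <= cosh^2 y <= e^(2|y|)):

```python
import numpy as np

xs = np.linspace(-10.0, 10.0, 201)
ys = np.linspace(-5.0, 5.0, 201)
X, Y = np.meshgrid(xs, ys)
Z = X + 1j * Y

# |sin(z)| <= exp(|Im z|) at every grid point.
ok = bool(np.all(np.abs(np.sin(Z)) <= np.exp(np.abs(Y))))
```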
Robert Israel israel@math.ubc.ca
Department of Mathematics http://www.math.ubc.ca/~israel
University of British Columbia
Vancouver, BC, Canada V6T 1Z2
==============================================================================
From: John Rickard
Newsgroups: sci.math
Subject: Re: Analysis (sin) elementary solution ?
Date: 12 Nov 1998 12:45:05 +0000 (GMT)
Ron Bloom wrote:
: : > Ouazad wrote
: : > > f: R->R
: : > > for all p in N, for all x in R, |f(p)(x)|<=1
: : > > and f'(0)=1
: : > >
: : > > f(p) is the p-th derivative of f.
: : > > Prove that f(x)=sin x for all x in R.
: The question is, really, is the claim even plausible?
: Why "should" it be true?
It seems plausible.
Fourier analysis says that f should be a combination of sin and cos
functions. Let's suppose, for simplicity, that it's a *finite*
combination:
f(x) = Sum (a_i sin(k_i x)) + Sum (b_i cos (l_i x))
Suppose furthermore that there aren't any cos's. (In fact, it's clear
that if f satisfies the conditions then so does the odd part of f,
f_o(x) = (f(x)-f(-x))/2; moreover, it's not hard to show that if the
odd part of f is sin, then the even part must be zero; thus it would
be sufficient to prove the statement for odd functions.)
We can take all the k_i positive. If any of the k_i are greater than
1, then differentiating n times for large enough n gives a dominating
term with a factor of k^n, where k is the largest k_i; thus f does not
satisfy the condition.
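A concrete instance of this blow-up (a numeric sketch using the closed form d^n/dx^n sin(kx) = k^n sin(kx + n*pi/2), with the illustrative choice k = 2):

```python
import numpy as np

x = np.linspace(0.0, 4 * np.pi, 400_001)

def nth_deriv_sup(n, k=2.0):
    """Grid sup of the n-th derivative of sin(kx), i.e. of k^n sin(kx + n*pi/2)."""
    return float(np.abs(k ** n * np.sin(k * x + n * np.pi / 2)).max())

# sups grows like 2^n: the bound |f^(n)| <= 1 fails, and fails worse
# with each differentiation.
sups = [nth_deriv_sup(n) for n in range(8)]
```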
So 0 < k_i <= 1. Since f'(0) = 1, we have
Sum a_i k_i = 1
so if any of the k_i are less than 1 then Sum |a_i| > 1. If the k_i
were linearly independent over Q, then we could find an x so that f(x)
was arbitrarily close to Sum |a_i|, so f couldn't satisfy the
condition. Playing around a bit, I can't find examples with the k_i
not linearly independent that work. For example, one try was
f(x) = (9 sin(x) + 2 sin(x/2))/10, but that has
f(pi/2) = (9 + sqrt(2))/10 > 1. More generally, if all the a_i are
positive then f(pi/2) > 1; but if any are negative, then Sum |a_i|
must be even bigger, making things harder. It seems plausible to me
that there are no counterexamples.
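The arithmetic behind that trial function, checked numerically (a sketch; sin(pi/4) = sqrt(2)/2 gives the closed form quoted above):

```python
from math import sin, sqrt, pi

# The trial combination (9 sin x + 2 sin(x/2))/10: its coefficients give
# f'(0) = (9 + 1)/10 = 1, but the function pokes above 1 at x = pi/2.
def f(t):
    return (9 * sin(t) + 2 * sin(t / 2)) / 10

value = f(pi / 2)            # sin(pi/4) = sqrt(2)/2, so this is (9 + sqrt(2))/10
exact = (9 + sqrt(2)) / 10   # greater than 1
```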
This might be the basis of a proof, but of course it has very big
holes in it. More specifically, it's missing the generalization to
the case where f is not a finite combination of sin and cos functions
(which might be straightforward to someone skilled in analysis), and
an argument to deal with the case with all the k_i <= 1 (which needs
an idea).
[I see that Robin Chapman has conjectured, in effect, a generalization
to the continuous case of the fact that all k_i <= 1; he has
presumably been thinking along similar lines to the above.]
--
John Rickard
==============================================================================
From: rusin@vesuvius.math.niu.edu (Dave Rusin)
Newsgroups: sci.math
Subject: Re: Analysis (sin) elementary solution ?
Date: 12 Nov 1998 16:49:39 GMT
John Rickard wrote:
>Fourier analysis says that f should be a combination of sin and cos
>functions.
Maybe I just don't know enough, but Fourier analysis, in the context I learnt
it, only suggests such a description of _periodic_ functions. There's no
reason to expect periodicity here, and indeed some close contenders are not
even periodic (e.g. (sin(x)+sin(x/pi) )/2 .) I never seem to remember as much
functional analysis as I need, but I do think there's a metric one can put
on the space of all C^\infty functions with all derivatives bounded, possibly
allowing for the span of the functions sin(c x) and cos(c x) to be dense;
I guess one would then expect a representation of a general function of this
type to be not a sum but a (Stieltjes) integral transform
f(x) = \int_0^\infty sin( x y ) g(y) dy (+ same with cosine)
for some function g; the finite linear combinations suggested above
correspond to discrete measures, but any non-negative g which vanishes
outside [0,1] and for which \int_0^1 g(y) dy = 1 will do. Taking g(y) = y,
for example, leads to the function
f(x) = (sin(x)-x*cos(x))/x^2
all of whose derivatives are bounded, and yet this function is not a
simple linear combination of sines and cosines. (Thus I have a "new" element
in the convex set C which I described in a post yesterday.)
(You can even multiply this f by 2.2926... and keep all the values below 1,
but even so, f'(0) is only 0.7642... -- we haven't found the optimal
function yet.)
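Those two constants can be reproduced numerically (a sketch: f'(0) = 1/3 comes from differentiating under the integral, f'(0) = \int_0^1 y^2 dy, and the sup of |f| is estimated on a grid):

```python
import numpy as np

x = np.linspace(1e-6, 50.0, 500_001)
f = (np.sin(x) - x * np.cos(x)) / x**2   # the transform of g(y) = y; an even function

sup = float(np.abs(f).max())             # attained near x ~ 2.08
scale = 1.0 / sup                        # the 2.2926... quoted above
fprime0 = 1.0 / 3.0                      # f'(0) = int_0^1 y^2 dy = 1/3
scaled_slope = scale * fprime0           # the 0.7642... quoted above
```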
dave
==============================================================================
From: John Rickard
Newsgroups: sci.math
Subject: Re: Analysis (sin) elementary solution ?
Date: 13 Nov 1998 13:53:27 +0000 (GMT)
Dave Rusin wrote:
: John Rickard wrote:
: >Fourier analysis says that f should be a combination of sin and cos
: >functions.
:
: Maybe I just don't know enough, but Fourier analysis, in the context I learnt
: it, only suggests such a description of _periodic_ functions.
Remember that this was just part of some handwaving to explain why I
thought that the statement being asserted was plausible. (Perhaps the
word "plausible" is suggesting different things to us: I meant only
that it seems reasonably likely that the statement is true, and that
it seems to have some motivation rather than just being a claim to
which nobody has happened to find a counterexample. I didn't mean
that I have a near-proof that only needs making rigorous -- though I
now think that I may be on the road to that.)
For non-periodic functions, there does exist the Fourier transform;
dredging my vague memories, I think a function needs to be in L2 (or
something like that) for the Fourier transform to exist as a function,
but the Fourier transform exists as a distribution (or "generalized
function") in more cases -- perhaps the function just needs to be
measurable and bounded by a polynomial?
There was a general principle that the faster f(x) tended to zero as x
tended to infinity, the better behaved the Fourier transform was
locally (and vice versa: the better behaved f locally, the faster the
Fourier transform tended to zero). This would suggest that since f is
bounded, the Fourier transform should not be too nasty a distribution;
I'd guess nothing nastier than delta functions -- this may well be
equivalent to saying that f can be expressed as the Stieltjes integral
that you mention below. ("Nasty"? Well, I think that if f were
linear rather than bounded then the Fourier transform would somewhere
look like the derivative of the delta function, which I consider
nastier.)
(All that is still very handwavy!)
: I guess one would then expect a representation of a general function of this
: type to be not a sum but a (Stieltjes) integral transform
: f(x) = \int_0^\infty sin( x y ) g(y) dy (+ same with cosine)
: for some function g;
(If I remember what a Stieltjes integral is, then g might be a
distribution rather than a function, but g(y)dy = dG(y), where G is a
genuine function; is that right?)
: the finite linear combinations suggested above
: correspond to discrete measures, but any non-negative g which vanishes
: outside [0,1] and for which \int_0^1 g(y) dy = 1 will do. Taking g(y) = y,
: for example, leads to the function
: f(x) = (sin(x)-x*cos(x))/x^2
: all of whose derivatives are bounded, and yet this function is not a
: simple linear combination of sines and cosines.
I was thinking of things like that as being a combination of the sin
functions sin(x y) for 0 <= y <= 1, weighted by g(y). Whether such a
blend of uncountably many sin functions behaves sufficiently like a
finite combination is unclear, but at least it's *plausible*.
My remark about positive coefficients does generalize, I think -- if,
as here, g(y) is everywhere non-negative (and vanishes outside [0,1])
then f(pi/2) > f'(0), so f is not a counterexample.
Since my previous post, I've got further with the argument for the
case where f is a finite linear combination of sin functions, and I
think that it will probably generalize to f(x) of the form above, with
g vanishing outside [0,1]. I think that such f (and certainly the
finite linear combinations of sin(k_i x) with each k_i in [0,1])
satisfy
f'(0) = (8/pi^2)(f(pi/2) - f(3pi/2)/3^2 + f(5pi/2)/5^2 -+ ...)
so if f'(0) = 1 and |f(x)| <= 1 everywhere, we must have f(pi/2) =
f(5pi/2) = ... = 1 and f(3pi/2) = f(7pi/2) = ... = -1. So f(x)-sin(x)
vanishes at odd multiples of pi/2; since f(x)-sin(x) has no components
of frequency > 1 this *ought* to imply (does it?) that f(x)-sin(x) is
a multiple of cos(x); but it's trivial that no non-zero multiple of
cos(x) works, so it would follow that f(x) = sin(x).
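The claimed identity can at least be tested numerically for f(x) = sin(kx) with k in [0,1], where f'(0) = k (a sketch; the series is summed with a finite truncation, so agreement is only up to the tail):

```python
import numpy as np

def series_rhs(f, terms=200_000):
    """Partial sum of (8/pi^2) * sum_n (-1)^n f((2n+1)pi/2) / (2n+1)^2."""
    n = np.arange(terms)
    signs = np.where(n % 2 == 0, 1.0, -1.0)
    pts = (2 * n + 1) * np.pi / 2
    return (8 / np.pi**2) * float(np.sum(signs * f(pts) / (2 * n + 1) ** 2))

# For each k the right-hand side should reproduce f'(0) = k.
errors = [abs(series_rhs(lambda t, k=k: np.sin(k * t)) - k)
          for k in (1.0, 0.7, 1 / np.sqrt(2))]
```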
--
John Rickard
==============================================================================
From: baloglou@panix.com (George Baloglou)
Newsgroups: sci.math.research,sci.math
Subject: Re: Elementary solution ?
Date: 14 Nov 1998 15:12:18 -0500
In article <19981110133841.12671.00002060@ng14.aol.com>
ouazad@aol.com (Ouazad) writes:
>I know a non elementary solution for the following problem:
>
>f: R->R
>f is continuous and all its derivatives are continuous, and f satisfies the
>following
>property:
>
> for all p in N, for all x in R, |f(p)(x)|<=1
>and f'(0)=1
>
>f(p) is the p-th derivative of f.
>Prove that f(x)=sin x for all x in R.
Well, I tried this one out following a simple-minded approach that seems
capable of settling the question but has not so far worked; I am posting
it here with the hope that somebody will be able to complete it ...
**************************************************************************
Let f(x) = a_0 + a_1*x + a_2*x^2 + ... + a_p*x^p+ ... , so that
f'(x) = a_1 + 2*a_2*x + 3*a_3*x^2 + ... + p*a_p*x^(p-1) + ... ,
f''(x) = 2*a_2 + 6*a_3*x + ... + p*(p-1)*a_p*x^(p-2) + ... , etc.
With f'(0) = 1 assumed, a_1 = 1; and, from f(p)(x) = p!*a_p + ... and
|f(p)(x)| <= 1 it follows, for x = 0, that |a_p| <= 1/p! for p > 1.
From f'(x) <= 1 and a_1 = 1 we get 2*a_2*x + 3*a_3*x^2 + ... <= 0
for all x. In view of a_p >= -1/p! it follows, *for x > 0*, that
2*a_2*x <= x^2/2! + ... + x^p/p! + ... = e^x - x - 1, so that
a_2 <= (e^x - x - 1)/(2*x) for x > 0 and, by l' Hospital's Rule, a_2 <= 0.
A slightly more subtle observation rules out a_2 < 0 and yields a_2 = 0.
Indeed, let's assume a_2 < 0. Then f'(x) = 1 + 2*a_2*x + g(x), where
|g(x)| = |3*a_3*x^2 + ... | <= 3*|a_3|*|x|^2 + ... <= |x|^2/2! + ... =
= e^|x| - |x| - 1; but then, *via l' Hospital's Rule as above*, it
follows that |a_2| > |g(x)|/|2*x| for sufficiently small |x|, so that
2*a_2*x + g(x) becomes *strictly positive* for *negative x of very
small absolute value*, thus leading to violation of f'(x) <= 1 at 0-.
(Notice that this argument could also cover the case a_2 > 0 for x ~ 0+.)
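The two limit facts used above can be confirmed symbolically (a sketch with sympy; the second computation just exhibits the quadratic leading term of the remainder bound):

```python
import sympy as sp

x = sp.symbols('x')

# (e^x - x - 1)/(2x) -> 0 as x -> 0+, which is what forces a_2 <= 0.
lim1 = sp.limit((sp.exp(x) - x - 1) / (2 * x), x, 0, '+')

# e^x - x - 1 = x^2/2 + O(x^3): the remainder bound is quadratic near 0,
# so a nonzero linear term 2*a_2*x dominates it for small |x|.
lead_term = sp.series(sp.exp(x) - x - 1, x, 0, 3).removeO()
```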
The next crucial step is to show that a_3 = -1/6: if that is true, then
a_4 = 0 is established through the arguments of the preceding paragraphs
applied to f'''(x) = -1 + 24*a_4*x + ...; and after that a_5 = 1/5! would
likely follow along the lines of a_3 = -1/6, and so on. For a_3 = -1/6
my strategy calls for a_3 <= -1/6, which yields a_3 = -1/6 because of
|a_3| <= 1/6. I tried many different approaches for a_3 <= -1/6, one
of them posted, somewhat inappropriately perhaps, right below: *I KNOW*
that there is an error at the very end of the second paragraph (while
the first one is correct), but posting it here might give you ideas :-)
---------------------- failed attempt at a_3 = -1/6 --------------------
From f'(x) <= 1 and f'(x) = 1 + 3*a_3*x^2 + ... + p*a_p*x^(p-1) + ...
it follows that 3*a_3*x^2 < - 4*a_4*x^3 - ... - p*a_p*x^(p-1) - ...;
arguing as two paragraphs above, we use a_p > -1/p! to conclude that,
for x > 0 again, 3*a_3*x^2 < - x^3/3! - ... - x^(p-1)/(p-1)! - ... =
= - (e^x - x^2/2 - x - 1) hence a_3 < (1 + x + x^2/2 - e^x)/(3*x^2)
and (l' Hospital's Rule again) a_3 <= 0.
Writing f'(x) = 1 + 3*a_3*x^2 + h(x), where |h(x)| <= e^x - x^2/2 - x - 1
for x > 0, |f'(x)| <= 1 implies |1 + 3*a_3*x^2| - |h(x)| <= 1 therefore
1 - |3*a_3*x^2| <= |1+ 3*a_3*x^2| <= |h(x)| + 1 <= e^x - x^2/2 - x; in
view of a_3 <= 0, we obtain (3*a_3 + 1/2)*x^2 <= e^x - x - 1 for x > 0
hence 3*a_3 + 1/2 <= (e^x - x - 1)/(x^2) and (l' Hospital's Rule again!)
3*a_3 + 1/2 <= 0. Now |a_3| <= 1/3! and 3*a_3 + 1/2 <= 0 together imply
a_3 = -1/6.
The error is in l' Hospital's Rule's second application, of course ...
while the whole paragraph has all the features typical of a solver in
despair :-) ... But it is just a sample of my many attempts to isolate
3*a_3 + 1/2 in a productive way: no result so far, but I will return to
this and other approaches if no one settles it and when I have the time!
George Baloglou baloglou@oswego.edu http://www.oswego.edu/~baloglou
"The Mathematics of our brain is not our Mathematics"
==============================================================================
From: John Roe
Newsgroups: sci.math.research
Subject: Re: Elementary solution ?
Date: Sat, 14 Nov 1998 20:48:47 -0500
I wrote an article on a related problem in Math Proc Camb Phil Soc
87(1980) 67-73; the methods may be relevant. (There was a sequel by Ken
Falconer giving a Banach algebra interpretation...)
John Roe
Penn State Math Dept
Ouazad wrote:
>
> I know a non elementary solution for the following problem:
>
> f: R->R
> f is continuous and all its derivatives are continuous, and f satisfies the
> following
> property:
>
> for all p in N, for all x in R, |f(p)(x)|<=1
> and f'(0)=1
>
> f(p) is the p-th derivative of f.
> Prove that f(x)=sin x for all x in R.
==============================================================================
From: ikastan@sol.uucp (ilias kastanas 08-14-90)
Newsgroups: sci.math
Subject: Re: Analysis (sin) elementary solution ?
Date: 20 Nov 1998 13:35:15 GMT
In article <72i0i6$1ru$1@nntp.ucs.ubc.ca>,
Robert Israel wrote:
>If all |f^(p)(x)| <= 1, then f is equal to the sum of its Taylor series
>about any point (use Taylor's Theorem with Lagrange remainder), and thus
>extends to an entire function. Moreover, by expanding f(z) about Re(z), we have
>
>|f(z)| <= sum_n |f^(n)(Re(z))| |Im(z)|^n/n! <= exp(|Im(z)|)
>
>By a Paley-Wiener type theorem (see e.g. Reed and Simon, Methods of Modern
>Mathematical Physics vol. II: Fourier Analysis, Self-Adjointness, Theorem
>IX.12) , the Fourier transform of f (as a distribution) is supported on the
> interval [-1,1]. I think that further analysis of the Fourier transform is
>the way to attack this problem.
This is quite possibly so -- for the moment I don't see how to make
good use of that Fourier transform, but there may well be a way.
In any case, here is another approach: if f(z) is an entire function,
the sequence f(0), f'(0), f''(0),... is bounded, and |f(z)| <= B for real
z, then it follows by a Phragmen-Lindeloef argument that |f'(z)| <= B for real
z and that equality obtains only if f(z) = c_1 sin(z) + c_2 cos(z) (c_1, c_2
being appropriate constants). In our problem f'(0) = 1 = B... and clearly
c_1 = 1, c_2 = 0.
	So f(z) has to be sin(z); whether a more elementary proof exists, I
don't know. (Hmm... f(z) has finite order; if one could argue that
f(z) = sin(g(z)) for g(z) entire, it would follow that g(z) is a
polynomial, since the alternative -- g(z) of finite order but not a
polynomial, with sin(z) of order 0 -- is false; sin(z) has order 1.
One would then proceed to show that g(z) is z...)
Ilias
==============================================================================
[Here are some references from MathSciNet which may be relevant -- djr]
==============================================================================
81a:33001 33A10 (26A12)
Roe, J.
A characterization of the sine function.
Math. Proc. Cambridge Philos. Soc. 87 (1980), no. 1, 69--73.
If f^(k)(x) is a sequence of functions with f^(k+1)(x) = (d/dx)f^(k)(x),
k = 0, ±1, ±2, ..., and if |f^(k)(x)| <= M for all k and x, then
f^(0)(x) = a sin(x + phi) with a and phi real constants. The basic
idea is to use integration by parts to show that the Fourier transform
of f has support at y = ±1. However, the Fourier transform may not
exist, so a summability factor exp(-epsilon |x|) and arguments from
distribution theory are used.
Reviewed by R. A. Askey
Cited in reviews: 93j:42006 93e:35077 89h:47059 89g:33001 83j:46058
81k:26013. Here they are:
_________________________________________________________________
93j:42006 42B10 (35J05 35P05 43A80)
Strichartz, Robert S.(1-CRNL)
Characterization of eigenfunctions of the Laplacian by boundedness
conditions.
Trans. Amer. Math. Soc. 338 (1993), no. 2, 971--979.
J. Roe [Math. Proc. Cambridge Philos. Soc. 87 (1980), no. 1, 69--73;
MR 81a:33001] has shown that if $L=d^2/dx^2$ and
$\{f_k\}_{-\infty}^\infty$ is a sequence of functions such that
$Lf_k=f_{k+1}$ and $\|f_k\|_\infty\leq M<\infty$ for all $k$, then
$Lf_0=-f_0$. In this paper the author shows that this result remains
true when $L$ is the Laplacian or Hermite operator on $\mathbf{R}^n$
or the sub-Laplacian on the Heisenberg group, and true in a slightly
modified form when $L$ is a d'Alembertian on $\mathbf{R}^n$, but false
when $L$ is the Laplace-Beltrami operator on hyperbolic space. Given
some standard results of harmonic analysis on these spaces, the
arguments are rather simple in all cases except the Heisenberg group,
which requires some extra ingenuity.
Reviewed by Gerald B. Folland
_________________________________________________________________
93e:35077 35P05 (35J05 47F05)
Howard, Ralph(1-SC); Reese, Margaret(1-OLAF)
Characterization of eigenfunctions by boundedness conditions.
Canad. Math. Bull. 35 (1992), no. 2, 204--213.
The authors give a characterization of the generalized eigenfunctions
of the Laplacian operator $\Delta$ by boundedness conditions. The main
result obtained is as follows: Let $\langle f_k(x)\rangle_{-\infty}^\infty$
be a sequence of complex-valued functions on $\mathbf{R}^n$ such that
$\Delta f_k=f_{k+1}$ and $|f_k(x)|\leq M_k(1+|x|)^a$ for some $a>0$,
where the coefficients $M_k$ are assumed to have sublinear growth,
$\lim_{k\to\infty}M_k/k=\lim_{k\to\infty}M_{-k}/k=0$. Then
$\Delta f_0=-f_0$. This generalizes the result obtained by J. Roe
[Math. Proc. Cambridge Philos. Soc. 87 (1980), no. 1, 69--73; MR
81a:33001] in the one-dimensional case. The authors also discuss a
similar problem for formally selfadjoint partial differential
operators with constant coefficients.
Reviewed by Hideo Tamura
_________________________________________________________________
89h:47059 47D10 (47B25)
Wallen, Lawrence J.(1-HI)
One parameter groups and the characterization of the sine function.
Proc. Amer. Math. Soc. 102 (1988), no. 1, 59--60.
J. Roe [Math. Proc. Cambridge Philos. Soc. 87 (1980), no. 1, 69--73;
MR 81a:33001] proved that if $\{f_k\colon k\in\mathbf{Z}\}$ is a
uniformly bounded sequence of real functions on $\mathbf{R}^1$ such
that $f'_k=f_{k+1}$, then $f_0(x)=a\sin(x-x_0)$. This result is
generalized as follows. Let $S_t$ be a uniformly bounded, strongly
continuous group of operators on the Banach space $X$. Let $A$ be its
infinitesimal generator, with domain $D$. Suppose
$\{x_k\colon k\in\mathbf{Z}\}\subset D$ is bounded and $Ax_k=x_{k+1}$.
Then $x_0=z_1+z_2$ where $Az_1=iz_1$ and $Az_2=-iz_2$.
Reviewed by S. Kantorovitz
_________________________________________________________________
89g:33001 33A10 (42A38)
Howard, Ralph(1-SC)
A note on Roe's characterization of the sine function.
Proc. Amer. Math. Soc. 105 (1989), no. 3, 658--663.
Let $\{f^{(n)}(x)\}_{n=-\infty}^\infty$ be a sequence of
complex-valued functions on the real line with
$f^{(n+1)}(x)=(d/dx)f^{(n)}(x)$. J. Roe [Math. Proc. Cambridge Philos.
Soc. 87 (1980), no. 1, 69--73; MR 81a:33001] characterized the sine
function through the size of its derivatives and antiderivatives. More
specifically, if $|f^{(n)}(x)|\le M$ for all $n$ and all real $x$ then
$f^{(0)}(x)=a\sin(x+\varphi)$ for some real constants $a$ and
$\varphi$. In this paper the author proves that, under weaker
conditions on $|f^{(n)}|$, $f^{(0)}(x)=p(x)e^{ix}+q(x)e^{-ix}$, where
$p(x)$, $q(x)$ are polynomials.
Reviewed by W. A. Al-Salam
_________________________________________________________________
83j:46058 46H05 (47A12 47B15)
Falconer, K. J.
Growth conditions on powers of Hermitian elements.
Math. Proc. Cambridge Philos. Soc. 92 (1982), no. 1, 115--119.
J. Roe [same journal 87 (1980), no. 1, 69--73; MR 81a:33001] has
characterised the functions a e^(it) + b e^(-it) as the only
functions on R whose derivatives and "integrals" are uniformly
bounded. H. Burkill [ibid. 89 (1981), no. 1, 71--77; MR 81k:26013]
gave a generalization for the case in which the coefficients a, b are
replaced by polynomials. Following a well established method the
author sets out to present some interesting theorems for Banach
algebras which are equivalent to the above theorems, in particular in
relation to Hermitian elements. An error (the assertion in the middle
of page 117 that T is Hermitian) necessitates restatement of all the
main results. The corrected results in fact provide a better link with
the Roe and Burkill theorems. The author intends to publish a
corrected version in due course.
Reviewed by John Duncan
_________________________________________________________________
81k:26013 26D10 (33A10 44A10)
Burkill, H.
Sequences characterizing the sine function.
Math. Proc. Cambridge Philos. Soc. 89 (1981), no. 1, 71--77.
J. Roe (same journal 87 (1980), no. 1, 69--73; MR 81a:33001) used
Fourier transforms of distributions to show that f_0(t) = a cos t +
b sin t is the only function with f'_n(t) = f_{n+1}(t), n = 0, ±1,
..., and |f_n(t)| <= M, -infinity < t < infinity, n = 0, ±1, ....
This theorem is extended by replacing the boundedness condition by
|f_n(t)| <= M(|n|+1)^c (|t|+1)^d for some nonnegative c and d, and
the conclusion becomes f_0(t) = p_1(t) cos t + p_2(t) sin t, where
p_1(t) and p_2(t) are polynomials of degree less than or equal to
min([c], [d]). The proof uses Laplace transforms and a bit of
analytic continuation.
Reviewed by R. A. Askey
© Copyright American Mathematical Society 1998