From: fateman@peoplesparc.cs.berkeley.edu (Richard J. Fateman)
Newsgroups: sci.math.symbolic
Subject: Re: Benchmarking: Factoring 2^101-1
Date: 30 Apr 1998 16:59:31 GMT
In article <35489D75.B566D05@wolfram.com>,
Daniel Lichtblau wrote:
>Dave Rusin wrote:
>>
>> In article <35488375.58219F53@wolfram.com> you write:
>> I am becoming increasingly impressed with Magma's efficiency.
>> [snip]
>>
>"Before any of the general factorization techniques is employed, it is
>checked whether |n| is of the special form b^k+-1, in which case an
>intelligent database look-up is used which is likely to be successful
>if b and k are not too large."
This reminds me of some early benchmarks on polynomial factorization
which included tests of Macsyma. Many of the benchmarks never got
to the interior of Berlekamp's algorithm (small-primes version) that
was supposedly being tested. Many of the "neat" examples were caught
by the pre-processing heuristics and either proved to be irreducible,
or factored by methods like square-free factorization.
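Square-free factorization is the kind of cheap pre-processing that catches these cases: dividing f by gcd(f, f') strips repeated factors before the main algorithm ever runs. A minimal sketch in Python, using dense coefficient lists (lowest degree first) over the rationals; the function names here are mine, not drawn from Macsyma or any other system:

```python
from fractions import Fraction

def deriv(p):
    # Formal derivative of a polynomial given as [c0, c1, c2, ...].
    return [Fraction(i) * c for i, c in enumerate(p)][1:]

def polydivmod(a, b):
    # Quotient and remainder of a by b; b must have a nonzero leading coefficient.
    a = a[:]
    q = [Fraction(0)] * max(len(a) - len(b) + 1, 1)
    while len(a) >= len(b) and any(a):
        while a and a[-1] == 0:          # drop trailing zero coefficients
            a.pop()
        if len(a) < len(b):
            break
        c = a[-1] / b[-1]
        d = len(a) - len(b)
        q[d] = c
        for i, bc in enumerate(b):       # subtract c * x^d * b
            a[d + i] -= c * bc
        a.pop()
    return q, a

def polygcd(a, b):
    # Euclidean algorithm; result is made monic.
    while b and any(c != 0 for c in b):
        _, r = polydivmod(a, b)
        while r and r[-1] == 0:
            r.pop()
        a, b = b, r
    if a and a[-1] != 0:
        lc = a[-1]
        a = [c / lc for c in a]
    return a

# f = (x - 1)^2 (x + 2) = x^3 - 3x + 2; its square-free part is (x - 1)(x + 2).
f = [2, -3, 0, 1]
g = polygcd(f, deriv(f))          # gcd(f, f') = x - 1
squarefree, _ = polydivmod(f, g)  # x^2 + x - 2
```

Real systems do this over Z (or mod p) with content and sign handling, but the idea is the same: the gcd exposes multiplicity without factoring anything.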
If the actual algorithm were being tested, it would have made sense to
test various implementations of the null-space algorithm; one or more
versions in Lisp, and ones in C (in particular by Paul Wang and Bill
Schelter). I'm not sure which algorithms are used in different systems
now; perhaps Cantor-Zassenhaus or Berlekamp's (1967) method. My exploration of the
LLL algorithm (some time ago) suggested it is non-competitive
even though (in theory) better, and that seems to be the conclusion
of others as well.
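The null-space computation at the core of Berlekamp's method is just Gaussian elimination over GF(p), which is why the implementation language (Lisp vs. C) mattered so much in those benchmarks. A toy sketch of that kernel, not any of the implementations mentioned above:

```python
def nullspace_mod_p(M, p):
    # Basis for the null space of matrix M (list of rows) over GF(p), p prime,
    # via Gauss-Jordan elimination to reduced row-echelon form.
    rows = [row[:] for row in M]
    n = len(M[0])
    pivots = {}  # pivot column -> row index holding that pivot
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i][col] % p != 0), None)
        if piv is None:
            continue                      # free column
        rows[r], rows[piv] = rows[piv], rows[r]
        inv = pow(rows[r][col], p - 2, p) # modular inverse by Fermat (p prime)
        rows[r] = [(x * inv) % p for x in rows[r]]
        for i in range(len(rows)):
            if i != r and rows[i][col] % p:
                f = rows[i][col]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[r])]
        pivots[col] = r
        r += 1
    basis = []
    for col in range(n):                  # one basis vector per free column
        if col in pivots:
            continue
        v = [0] * n
        v[col] = 1
        for pc, pr in pivots.items():
            v[pc] = (-rows[pr][col]) % p
        basis.append(v)
    return basis
```

In Berlekamp's algorithm the matrix is Q - I, where row i of Q holds x^(ip) mod f(x); the dimension of its null space equals the number of irreducible factors, so this elimination is run once per candidate prime.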
This is still an area for investigation (e.g. see Kaltofen, ISSAC 94)
though that paper seems to be targeting polynomials of degree 250
or more.
--
Richard J. Fateman
fateman@cs.berkeley.edu http://http.cs.berkeley.edu/~fateman/