From Wikipedia:

"Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi[11] gave a similar algorithm using modular arithmetic in 2008 achieving the same running time. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs."

I would be very interested in the size of such impractically large integers.

Maybe someone has implemented both algorithms and could run some benchmarks to show where this crossover lies?

Thanks

Tibor Vass
  • Fürer's algorithm and its modular equivalent... very deep research topic. Nobody actually knows how big the cross-over point is. And it's likely to be highly sensitive to hardware and implementation details. In any case, that might be completely irrelevant since that cross-over point is likely to be well beyond 64-bit computing limits. – Mysticial Apr 21 '12 at 09:04
  • @Mysticial: Care to explain how this question is relevant to whether one uses 8-bit, 64-bit or 1024-bit? – ypercubeᵀᴹ Apr 21 '12 at 09:26
  • Basically, the cross-over point is so large that it would require more memory than what 64-bit allows. And since 128-bit hardware is virtually non-existent, it's pointless to speculate exactly where that cross-over point is because it will be extremely sensitive to details of the (currently non-existent) hardware. Even a factor of 2 in the big-O constant could mean a several orders of magnitude difference in the cross-over point. – Mysticial Apr 21 '12 at 09:29
  • @Mysticial: I understand your point. However, there must be some "non-mathematical" threshold for a given implementation. I edited my question to specify this. – Tibor Vass Apr 21 '12 at 09:39
  • @TeaBee: "there must be ..." Would you care to justify this assumption? – Niklas B. Apr 21 '12 at 09:39
  • You need to find n such that log(log n) > c·2^(log* n), where c is the quotient of the constants. Assuming that c = 100, you get n > 2^(2^100), a number that will not fit in 64-bit hardware. I speculate the constant will be higher than 100. – sdcvvc Apr 21 '12 at 09:43 (a code sketch of this estimate follows the comment thread)
  • @NiklasB.: From the complexity proof stating that the DSKS algorithm (as I call it) has the same complexity as Fürer's, namely n log(n) 2^(Θ(log*(n))), whereas the Schönhage-Strassen algorithm has the greater complexity Θ(n log(n) log(log(n))). – Tibor Vass Apr 21 '12 at 09:44
  • @TeaBee: That doesn't mean that an implementation of Fürer exists that's actually faster for some testable input. Maybe you have a misunderstanding of what the O-notation means: the two algorithms could well differ by a constant factor that's in the billions or even larger. – Niklas B. Apr 21 '12 at 09:45
  • @NiklasB.: You're correct. My misunderstanding indeed :) – Tibor Vass Apr 21 '12 at 09:49
  • @sdcvvc: So according to your calculation, the multiplication of numbers with billions of digits is still less efficient with the DSKS algorithm than with the Schönhage-Strassen algorithm. Am I correct? – Tibor Vass Apr 21 '12 at 09:49
  • @TeaBee: Yes. If your number has "only" 10^20 digits, then n = 10^(10^20) ≈ 2^(2^68) and the quotient of the constants would need to be less than 2! – sdcvvc Apr 21 '12 at 09:57
  • I'm familiar with both Schönhage-Strassen and Fürer's algorithm. I've implemented Schönhage-Strassen and I understand how Fürer's algorithm works. It's very possible that the cross-over point is so high that a computer capable of holding the parameters will be larger than the size of the observable universe. That's the problem when you have complexities that differ by less than a logarithm. It takes exponentially large input sizes to compensate even for small differences in the Big-O constant. In this case, Fürer's algorithm is known to have a *very very very* large Big-O constant. – Mysticial Apr 21 '12 at 10:03
  • @Mysticial you should post all the above as an answer. – andrew cooke Apr 21 '12 at 10:32
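
For concreteness, here is a minimal Python sketch of sdcvvc's back-of-the-envelope estimate above. The candidate n is far too large to store directly, so it is represented as a power tower n = 2^(2^k); the constant c (the quotient of the hidden Big-O constants) and the base-2 conventions are pure assumptions, so the output is illustrative only.

    import math

    def log_star(x):
        """Iterated base-2 logarithm: how many times log2 must be
        applied before the value drops to 1 or below."""
        count = 0
        while x > 1.0:
            x = math.log2(x)
            count += 1
        return count

    # With n = 2^(2^k): log2(log2 n) = k, and log_star(2^(2^k)) = 2 + log_star(k).
    def furer_wins(k, c):
        """sdcvvc's condition log(log n) > c * 2^(log* n), for n = 2^(2^k)."""
        return k > c * 2.0 ** (2 + log_star(k))

    c = 100.0  # assumed quotient of the hidden constants -- a pure guess
    k = 1
    while not furer_wins(k, c):
        k += 1
    print(f"with c = {c:g}, Fürer only wins around n = 2^(2^{k})")
    # sdcvvc simplified differently and got n > 2^(2^100); either way
    # the answer is a power tower far beyond any physical machine.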

1 Answer

Fürer's algorithm and its modular equivalent (DSKS) are very deep research topics and, for now, remain of academic interest only. Nobody actually knows how big the cross-over point is. In all likelihood it doesn't matter, because that cross-over point is likely to be well beyond 64-bit computing limits.

I've implemented Schönhage-Strassen before and I understand how Fürer's algorithm works, so I'm quite familiar with both of them. I can say it's very possible that the cross-over point between Schönhage-Strassen and Fürer's algorithm is so high that a computer capable of holding the parameters would be larger than the observable universe.

That's the problem when you have complexities that differ by less than a logarithm. It takes exponentially large input sizes to compensate even for small differences in the Big-O constant.
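
To put rough numbers on this, here is a small toy model under the same assumptions as the comment thread above: Schönhage-Strassen's extra factor is taken to be log2(log2 n) and Fürer's to be c·2^(log* n), where c is the unknown quotient of the hidden constants. The conventions are guesses, so only the shape of the result matters.

    import math

    def log_star(x):
        """Iterated base-2 logarithm."""
        count = 0
        while x > 1.0:
            x = math.log2(x)
            count += 1
        return count

    # Toy model for n = 2^(2^k): Schönhage-Strassen's extra factor is k,
    # Fürer's is roughly c * 2^(2 + log_star(k)). For each assumed
    # constant quotient c, find the tower height k where Fürer pulls ahead.
    for c in (1, 2, 10, 100):
        k = 1
        while k <= c * 2 ** (2 + log_star(k)):
            k += 1
        print(f"c = {c:>3}: Fürer overtakes around n = 2^(2^{k})")

Even with the absurdly generous guess c = 1, this model puts the crossover beyond inputs of 2^65 bits, a single operand on the order of the whole 64-bit address space, and every extra factor in c pushes the tower height up multiplicatively.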

In this case, Fürer's algorithm is known to have a very very very large Big-O constant.

Mysticial