
I'm using GMP (with MPIR) for arbitrary-size datatypes. I also use its primality test function, which uses the Miller-Rabin method, but it is not accurate. This is what I want to fix.

I was able to confirm that the number 18446744073709551253 is prime by brute force, using trial division up to the square root.
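For reference, this is roughly what that brute-force check looks like; a minimal sketch in plain C (my reconstruction, not the asker's code), with the loop bound written as d <= n / d so that the comparison cannot overflow 64 bits:

#include <stdint.h>
#include <stdbool.h>

/* Brute-force check: trial division by 2 and by every odd number
   up to sqrt(n). Exact, but takes O(sqrt(n)) divisions, which is
   why a single 64-bit number can take tens of seconds. */
bool is_prime_trial(uint64_t n)
{
    if (n < 2) return false;
    if (n % 2 == 0) return n == 2;
    for (uint64_t d = 3; d <= n / d; d += 2)  /* d*d <= n, overflow-safe */
        if (n % d == 0) return false;
    return true;
}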

Is there any way of checking whether large numbers are prime or not, with 100% accuracy?

  • It should not use too much memory/storage space; a few megabytes is acceptable.

  • It should be faster than the sqrt method I used.

  • It should work for numbers that are at least 64 bits in size, or larger.

  • Finally, it should be 100% accurate, no maybes!

What are my options?

I could live with the brute-force method for 64-bit numbers, but out of interest I want something faster that handles larger numbers too. Also, the 64-bit check above was too slow: 43 seconds in total!

  • For numbers up to `2^64`, the Baillie-PSW test (Baillie, Pomerance, Selfridge, Wagstaff) is reliable. Above that, APRCL, or elliptic curve primality proving. – Daniel Fischer Dec 11 '12 at 22:04
  • What do you need 100% accuracy for? Unless the purpose of your program is searching for big prime numbers (and programs for that already exist), I don't see any application where an infinitesimal chance of being wrong isn't acceptable. – Grizzly Dec 11 '12 at 22:12
  • @Grizzly, where do I need it? In my mind, of course :) If I make a button in a program that's supposed to tell whether a number is prime or not, it is annoying to hear "maybe it's a prime, I don't know, ask someone else!" :p – Rookie Dec 11 '12 at 22:35
  • @Rookie: "maybe" might not be the right term. Using Miller-Rabin you can easily make the probability of a false positive smaller than the probability of your computer doing a calculation incorrectly (bit errors in memory aren't impossible, after all) or randomly catching fire. Since I assume that you don't account for those scenarios, 100% certainty is a somewhat fluid term. – Grizzly Dec 11 '12 at 22:41
  • @Grizzly, it annoys me to know there is a chance of error, and usually when such an error is possible, with my luck I somehow manage to type the one value that falls into that pit of error! (I'm not kidding here!) Furthermore, I expect these kinds of things to be exact. As you said about the word "maybe", I will say that "prime" isn't the right term here if it's not 100% sure to be a prime. – Rookie Dec 11 '12 at 23:34
  • @Rookie: So what do you do about the chance of random bit errors, spontaneous hardware failure and so on? My point is that a probability of, say, 1/2^512 is not just unlikely. Such a probability is simply not worth worrying about, since a) the computation is much more likely to fail due to a hardware defect than because of a false positive, and b) you won't see a false positive in your lifetime (or the estimated remaining lifetime of this universe), even if you let your computer (or even all currently existing computers in the world) do nothing but test primes from now on. – Grizzly Dec 12 '12 at 00:01
  • @Grizzly, as I said, I'm looking for a prime-check function, not a "probably prime" check function. – Rookie Dec 12 '12 at 11:53
  • @Rookie I know what you mean. Correctness has an elegance about it, and although an engineer might be convinced that a "probably prime" function is good enough, to a mathematician it is just imperfect and annoying. – wim Dec 12 '12 at 13:34
  • Most highly trusted RSA implementations that I am aware of generate probable primes using Miller-Rabin. – President James K. Polk Dec 14 '12 at 00:04

4 Answers


For very large numbers, the AKS primality test is a deterministic test that runs in time O(log^7.5 n · log log n), where n is the number of interest. This is exponentially faster than the O(√n) algorithm. However, the algorithm has large constant factors, so it's not practical until your numbers get rather large.
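A rough back-of-the-envelope comparison (mine, ignoring constant factors) shows why: for a 64-bit n, log2 n ≈ 64, so log^7.5 n ≈ 64^7.5 = 2^45 ≈ 3.5·10^13 elementary operations, while √n ≈ 2^32 ≈ 4.3·10^9 trial divisions. At 64 bits, trial division still wins on raw operation count; AKS only pulls ahead for much larger inputs, and its large constants push the practical crossover higher still.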

Hope this helps!

templatetypedef

As a general point, 100% certainty is not possible on a physical computer, since there is a small but finite possibility that some component has failed invisibly and that the answer given at the end is not correct. Given that fact, you can run enough probabilistic Miller-Rabin tests that the probability of the number being composite is far less than the probability that your hardware has failed. It is not difficult to test up to a 1 in 2^256 level of certainty; for example, in C with GMP, using one round of mpz_probab_prime_p per iteration as the Miller-Rabin step:

#include <gmp.h>
#include <stdbool.h>

/* Test n for primality to a certainty of 1 in 2^256. Each
   Miller-Rabin round has a false-positive probability of at
   most 1/4, so each round adds 2 bits of certainty. */
bool isPrime(const mpz_t n)
{
    const int limit = 256;                 /* bits of certainty wanted */
    int certainty = 0;

    while (certainty < limit) {
        int r = mpz_probab_prime_p(n, 1);  /* one Miller-Rabin round */
        if (r == 0)
            return false;                  /* definitely composite */
        if (r == 2)
            return true;                   /* provably prime (small n) */
        certainty += 2;                    /* factor of 4 per round */
    }
    return true;                           /* prime, to 1 in 2^256 */
}

This will test that the number is prime, up to a certainty of 1 in 2^256. Each M-R test adds a factor of four to the certainty. I have seen the resulting primes called "industrial-strength primes": good enough for all practical purposes, but not quite up to theoretical mathematical certainty.
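For example (assuming the isPrime above), testing the 20-digit number from the question:

#include <stdio.h>
#include <gmp.h>

int main(void)
{
    mpz_t n;
    mpz_init_set_str(n, "18446744073709551253", 10);  /* the asker's prime */
    puts(isPrime(n) ? "prime, to 1 part in 2^256" : "composite");
    mpz_clear(n);
    return 0;
}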

rossum

For small n, trial division works; the limit there is probably somewhere around 10^12. For somewhat larger n, there are various studies (see the works of Gerhard Jaeschke and Zhenxiang Zhang) that calculate the smallest pseudoprime for various collections of Miller-Rabin bases; that will take you to about 10^25. One such base set, covering all 64-bit numbers, is sketched below. After that, things get hard.
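To illustrate (a sketch of that approach, not code from the answer): the seven-base set found by Jim Sinclair is reported to make Miller-Rabin deterministic for all n < 2^64, verified against Feitsma's list of base-2 pseudoprimes. The sketch assumes a compiler with the __uint128_t extension (GCC or Clang) for the modular multiplication.

#include <stdint.h>
#include <stdbool.h>

/* Modular multiply via a 128-bit intermediate (GCC/Clang extension). */
static uint64_t mulmod(uint64_t a, uint64_t b, uint64_t m)
{
    return (uint64_t)((__uint128_t)a * b % m);
}

/* Modular exponentiation by repeated squaring. */
static uint64_t powmod(uint64_t b, uint64_t e, uint64_t m)
{
    uint64_t r = 1;
    for (b %= m; e; e >>= 1) {
        if (e & 1) r = mulmod(r, b, m);
        b = mulmod(b, b, m);
    }
    return r;
}

/* Strong-pseudoprime test to base a, where n-1 = d * 2^s with d odd. */
static bool sprp(uint64_t n, uint64_t a, uint64_t d, int s)
{
    uint64_t x = powmod(a, d, n);
    if (x == 1 || x == n - 1) return true;
    while (--s > 0) {
        x = mulmod(x, x, n);
        if (x == n - 1) return true;
    }
    return false;
}

/* Deterministic for all 64-bit n using Sinclair's seven bases. */
bool is_prime_u64(uint64_t n)
{
    static const uint64_t small[] = {2,3,5,7,11,13,17,19,23,29,31,37};
    static const uint64_t bases[] = {2, 325, 9375, 28178, 450775,
                                     9780504, 1795265022};
    if (n < 2) return false;
    for (int i = 0; i < 12; i++) {     /* strip small prime factors */
        if (n == small[i]) return true;
        if (n % small[i] == 0) return false;
    }
    uint64_t d = n - 1;
    int s = 0;
    while ((d & 1) == 0) { d >>= 1; s++; }
    for (int i = 0; i < 7; i++) {
        uint64_t a = bases[i] % n;
        if (a == 0) continue;          /* base is a multiple of n */
        if (!sprp(n, a, d, s)) return false;
    }
    return true;
}

On the question's example, is_prime_u64(18446744073709551253ULL) returns true after a handful of modular exponentiations, in microseconds rather than the 43 seconds the question quotes for trial division.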

The "big guns" of primality proving are the APRCL method (it may be called Jacobi sums or Gaussian sums) and the ECPP method (based on elliptic curves). Both are complex, so you will want to find an implementation, don't write your own. These methods can both handle numbers of several hundred digits.

The AKS method runs in proven polynomial time and is easy to implement, but the constant of proportionality is very high, so it is not useful in practice.

If you can factor n-1, or even partially factor it, Pocklington's method can determine the primality of n. Pocklington's method itself is quick, but the factoring may not be.
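For concreteness, here is a minimal sketch of the Pocklington criterion in C with GMP (my illustration, not the blog's code). It assumes the caller supplies a witness a and the distinct prime factors q[0..k-1] of a fully factored divisor F of n-1 with F > sqrt(n); the full theorem also allows a different witness per factor, so a false return means "not proven prime by this witness", not "composite".

#include <gmp.h>
#include <stdbool.h>

/* Pocklington: if a^(n-1) = 1 (mod n) and, for every prime q
   dividing a fully factored part F of n-1 with F > sqrt(n),
   gcd(a^((n-1)/q) - 1, n) = 1, then n is prime. */
bool pocklington(const mpz_t n, const mpz_t a, mpz_t q[], int k)
{
    mpz_t nm1, e, x, g;
    bool ok = true;

    mpz_inits(nm1, e, x, g, NULL);
    mpz_sub_ui(nm1, n, 1);

    mpz_powm(x, a, nm1, n);           /* Fermat condition: a^(n-1) mod n */
    if (mpz_cmp_ui(x, 1) != 0)
        ok = false;

    for (int i = 0; ok && i < k; i++) {
        mpz_divexact(e, nm1, q[i]);   /* exponent (n-1)/q */
        mpz_powm(x, a, e, n);
        mpz_sub_ui(x, x, 1);
        mpz_gcd(g, x, n);
        if (mpz_cmp_ui(g, 1) != 0)    /* this witness fails for this q */
            ok = false;
    }

    mpz_clears(nm1, e, x, g, NULL);
    return ok;
}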

For all of these, you want to be reasonably certain that a number is prime before you try to prove it. If your number is not prime, all these methods will correctly determine that, but first they will waste much time trying to prove that a composite number is prime.

I have implementations of AKS and Pocklington at my blog.

user448810

The method of proving depends on the type of prime you are trying to prove (for example, Mersenne primes have special proving methods that work only for them) and on the size in decimal digits. If you are looking at hundreds of digits, then there is only one solution, albeit an inadequate one: the AKS algorithm. It is provably faster than other primality-proving algorithms for large enough primes, but by the time it becomes useful, it takes so long that it really isn't worth the trouble.

Primality proving for big numbers is still a problem that is not yet sufficiently solved. If it were, the EFF awards would all have been claimed, and cryptography would have some problems: not because of any list of primes, but because of the methods used to find them.

I believe that, in the near future, a new algorithm for proving primality will arise that doesn't depend on a pre-generated list of primes up to the square root of n, and that doesn't brute-force its way through all the primes (and plenty of non-primes as well) below the square root as witnesses to n's primality. That new algorithm will probably depend on math concepts much simpler than those used by analytic number theory. There are patterns in the primes; that much is certain. Identifying those patterns is a different matter entirely.

Adam