For quite a while now I've successfully used a simple function that reliably (over-)estimates the number of primes up to a given n, for example for allocating space to hold the primes. Now I'm looking for something that does the same for the number of primes in an interval [m, n].
The objective is to get as close to the actual prime count as possible in order to minimise wasted memory, but never, ever to underestimate the actual count, because that would mean costly resizings/reallocations.
Here's the function that I've been using:
double overestimate_prime_count_up_to (double n)
{
    if (n < 137)
        return 32;                                   // pi(136) = 32, so this covers every n below 137
    else
        return 32 + (n / (Math.Log(n) - 1.08513));   // x/(ln x - c) style approximation plus 32 of headroom
}
ATM it is only certified for the 32-bit range but I intend to change that.
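For context, the estimate gets used roughly like this; CollectPrimesUpTo and the trial-division IsPrime below are just illustrative stand-ins, not my actual sieve code:

// Illustration only: size the list from the overestimate so that Add() never reallocates.
List<uint> CollectPrimesUpTo (uint n)
{
    var primes = new List<uint>((int)overestimate_prime_count_up_to(n));
    for (uint i = 2; i <= n; ++i)
        if (IsPrime(i))
            primes.Add(i);
    return primes;
}

// Naive stand-in for the real sieve, for illustration only.
bool IsPrime (uint x)
{
    if (x < 2)
        return false;
    for (uint d = 2; (ulong)d * d <= x; ++d)
        if (x % d == 0)
            return false;
    return true;
}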
Anyway, now I'm looking for a way to get a similar estimate for the number of primes in a given interval [m, n] instead of for all primes up to n.
If a reliable underestimate for the number of primes up to n could be found, then it could be used to construct a reliable overestimate for an interval [m, n], by subtracting an (under-)estimate at the lower boundary from an (over-)estimate at the upper boundary.
Naturally, the underestimate thing is just an idea for a possible solution; the goal remains to get a reliable (over-)estimate of the number of primes in an interval, in order to waste as little space as possible while never paying the price for an underestimate (or at least so rarely that it doesn't matter overall).
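To make the idea concrete, here's a sketch of the construction I have in mind, assuming the classic Rosser-Schoenfeld lower bound pi(x) > x/ln(x) for x >= 17 (exact counts below that). It's only an illustration of the shape of a solution, not something I've verified for the interval problem:

// Underestimate of pi(x): exact values below 17, Rosser-Schoenfeld bound x/ln(x) from 17 on.
double underestimate_prime_count_up_to (double n)
{
    if (n < 2)  return 0;
    if (n < 3)  return 1;   // 2
    if (n < 5)  return 2;   // 2, 3
    if (n < 7)  return 3;   // 2, 3, 5
    if (n < 11) return 4;   // 2, 3, 5, 7
    if (n < 13) return 5;
    if (n < 17) return 6;
    return n / Math.Log(n);
}

// Overestimate for the number of primes in [m, n]:
// overestimated count up to n minus underestimated count below m.
double overestimate_prime_count_in_interval (double m, double n)
{
    return overestimate_prime_count_up_to(n) - underestimate_prime_count_up_to(m - 1);
}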
I've studied the page How Many Primes Are There? at the superb Prime Pages site, followed oodles of links and read umpteen papers. But all it did was make my head swim...
P.S.: comments regarding the extension of usability to the 64-bit range are very welcome, especially as this may well require differentiating more cases or even completely different strategies (like approximating prime density functions).
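For instance, Dusart-style bounds like pi(x) <= x/ln(x) * (1 + 1.2762/ln(x)) for x > 1 and pi(x) >= x/ln(x) * (1 + 1/ln(x)) for x >= 599 might give a certified over/under pair without an empirically tuned constant, at the cost of being looser than the formula above. A rough, unverified sketch of that direction:

// Rough sketch, not certified: Dusart-style bounds as a possible basis for large n.
// Upper bound: pi(x) <= x/ln(x) * (1 + 1.2762/ln(x)) for x > 1.
// Lower bound: pi(x) >= x/ln(x) * (1 + 1/ln(x)) for x >= 599.
double dusart_overestimate_prime_count_up_to (double n)
{
    if (n < 599)
        return 109;                       // pi(598) = 108, so 109 covers everything below 599
    double ln = Math.Log(n);
    return n / ln * (1 + 1.2762 / ln);
}

double dusart_underestimate_prime_count_up_to (double n)
{
    if (n < 599)
        return 0;                         // crude but safe for small n
    double ln = Math.Log(n);
    return n / ln * (1 + 1 / ln);
}

(On top of that, a double only represents integers exactly up to 2^53, and both the argument and the prime counts near the top of the 64-bit range exceed that, so the interface itself would probably need rethinking as well.)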