
Recently, I've been trying to implement the 15 tests for randomness described in NIST SP 800-22. As a check of my implementations, I have been running the examples that the NIST document provides for each of its tests. Some of these tests require very long bit strings (up to a million bits). For example, in one of the examples the input is "the first 100,000 bits of e." That raises the question: how do I generate the bit representation of a real value to more digits than Python's floating-point precision allows?

I have found articles on converting integers to binary strings (the bin() function) and on converting floating-point fractions to binary (repeated multiplication by 2, which is slow and limited by floating-point precision). I've also considered building the string iteratively from the series $e=\sum_{n=0}^{\infty}\frac{2n+2}{(2n+1)!}$: compute the next term, convert it to a binary representation, and somehow add it to the cumulative representation (I'm still thinking through how to do this). However, I've hit the same wall going down this path: the precision of the floating-point values runs out as I go farther out in the sum.
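To make that wall concrete, here is a minimal sketch of the repeated-multiplication conversion I mean (frac_bits is just an illustrative name). Because a Python float carries a 53-bit mantissa, only roughly the first 52 fractional bits it emits are really bits of e:

import math

def frac_bits(x, n):
    # Repeatedly double the fraction; each doubling pushes the next
    # binary digit of x across the binary point.
    out = []
    for _ in range(n):
        x *= 2
        bit = int(x)
        out.append(str(bit))
        x -= bit
    return ''.join(out)

# math.e is a 53-bit double, so bits past ~52 reflect rounding, not e.
print(frac_bits(math.e - 2, 64))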

Does anyone have suggestions on creating arbitrarily long bit strings from arbitrarily precise real values?

PS - Also, is there any way to get my Markdown math equation above to render properly here? :-)

Pat B.

1 Answer


I maintain the gmpy2 library, and it supports arbitrary-precision binary arithmetic. Here is an example that generates the first 100 bits of e.

>>> import gmpy2
>>> gmpy2.get_context().precision = 100
>>> gmpy2.exp(1).digits(2)[0]
'1010110111111000010101000101100010100010101110110100101010011010101011111011100010101100010000000 10'
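Note that digits(2) actually returns a (mantissa, exponent, precision) tuple, and the [0] above takes just the mantissa string, whose length tracks the context precision (100 bits in the example). Assuming that one-digit-per-precision-bit behavior holds at larger sizes, raising the precision should give the NIST-sized input directly; the exponent of 2 just says the binary point sits after the first two digits, since e = 10.1011... in binary:

>>> gmpy2.get_context().precision = 100000
>>> mantissa, exponent, precision = gmpy2.exp(1).digits(2)
>>> len(mantissa)
100000
>>> exponent
2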
casevh