
Python's float data type actually uses double precision (64-bit). However, for my specific implementation (which transmits type-tagged values via OSC) I would like to differentiate between values that can be represented as (32-bit) single-precision floats and (64-bit) double-precision floats.

More precisely, I'd like to do something like this:

if isdouble(value):
    binary = struct.pack('>d', value)
else:
    binary = struct.pack('>f', value)

Is there any feasible way to achieve this?

Ben
umläute
  • You could use NumPy's `float32` and `float64` types. There's no easy way, AFAIK, to check whether a 64-bit float can be converted to 32 bits without loss of precision. – Fred Foo Nov 14 '13 at 13:20
  • How do you define storing a double-precision value as a single-precision value? Do you care about the loss of precision? Just as many real numbers are rounded to the same double-precision value, so are many double-precision values rounded to the same single-precision value. – chepner Nov 14 '13 at 13:36

4 Answers


You could test the range, if you don't mind the loss of a little precision (see Alfe's answer):

def isdouble(value):
    return not (1.18e-38 <= abs(value) <= 3.4e38)

or invert to test for single precision:

def issingle(value):
    return 1.18e-38 <= abs(value) <= 3.4e38

These tests would prevent an OverflowError from being raised; the alternative is to just catch that exception.

Do note that float('-0'), float('+0'), float('inf'), float('-inf') and float('nan') will test as double with these tests; if you want these to be stored in 4 bytes rather than 8, test for these explicitly.
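A sketch combining the range test above with explicit checks for those special values (the helper names `issingle` and `pack_value` are my own, not from the question):

```python
import math
import struct

def issingle(value):
    # Zero, infinities and NaN are all representable in single
    # precision, so treat them explicitly as 4-byte candidates.
    if value == 0.0 or math.isinf(value) or math.isnan(value):
        return True
    # Otherwise fall back to the single-precision magnitude range.
    return 1.18e-38 <= abs(value) <= 3.4e38

def pack_value(value):
    # Pack as 4 bytes when the value fits single precision, else 8.
    if issingle(value):
        return struct.pack('>f', value)
    return struct.pack('>d', value)
```

For example, `pack_value(1.5)` yields 4 bytes while `pack_value(1e300)` yields 8.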

Martijn Pieters

I propose to just try it as a float and, if that fails (due to range overflow), use the double version:

try:
  binary = struct.pack('>f', value)
except OverflowError:
  binary = struct.pack('>d', value)

The range is the only aspect in which your question makes perfect sense.

If it comes to precision, your question no longer makes sense because, as you say, Python always uses doubles internally, and even a simple 3.3 is, packed and unpacked as a float, only 3.299999952316284 afterwards:

>>> struct.unpack('>f', struct.pack('>f', 3.3))
(3.299999952316284,)

So virtually no double can be represented as a float. (Typically none, unless it is an integer or originally came out of a float.)

You could, however, make a check whether the packed-unpacked version of your number equals the original, and if it does, use the float version:

try:
  binary = struct.pack('>f', value)
  if struct.unpack('>f', binary)[0] != value:
    binary = struct.pack('>d', value)
except OverflowError:
  binary = struct.pack('>d', value)
Alfe
  • Either “So literally no double can be represented as a float” is one of these new uses of “literally” that I keep hearing about, or you are quite wrong. There are literally 2^32 doubles that can be represented as a float (some of which could be argued to represent the same value). – Pascal Cuoq Nov 14 '13 at 13:36
  • Don't webster me on the word "literally" (you might be right about that). But of all possible double values, the ones representable as floats without deviation are extremely seldom and, as I wanted to show with the 3.3 example, even ones we could expect to be representable aren't. You probably know enough about floating-point representation to understand why 3.3 is not as simple as it looks, but I got the feeling the OP would have expected otherwise. – Alfe Nov 14 '13 at 13:38
  • Only about 1 in 4 billion doubles can be represented as floats. "Literally no" is an exaggeration, but "virtually no" is true, if the OP cares about maintaining precision. I assume he does, otherwise why not just store all values as single-precision? – chepner Nov 14 '13 at 13:40
  • I changed that to "virtually" (thanks, chepner) and explained a bit. – Alfe Nov 14 '13 at 13:41
  • The statement that the question does not make sense regarding the precision aspect does not make sense. For whatever reason, the OP postulates having values that are representable as `float`, and such values could be packed and unpacked as `float` without change. The example value of 3.3 is impossible since 3.3 is not representable as a `double`. (Arguments about how frequently this is useful based on the density of `float` values in `double` are pointless because the floating-point values used in computers are not uniformly distributed. They are influenced by human behavior.) – Eric Postpischil Nov 14 '13 at 16:43
  • If floating-point values were uniformly distributed, you would be quite lucky to witness just one double-precision zero in your lifetime. Obviously, they are not remotely close to being uniformly distributed, and some of the most frequently occurring double-precision values *are* representable as floats. – Stephen Canon Nov 14 '13 at 17:13
  • @StephenCanon, yes, you are right, sums of lower powers of two (i. e. things like 0.25 or 0.125 or their sum) are perfectly representable. But 0.24 already isn't. Without an understanding of the inner workings of floats one cannot expect this, so I find it careless to give advice which relies on this fact. Since the OP never expressed that the values he's trying to store are somehow restricted to sums of lower powers of two or that their origin lets us assume this, I took the liberty to assume a free distribution. And in this case only testing them makes sense. – Alfe Nov 15 '13 at 09:27

You can check whether a double x is exactly representable as a float by converting x to float, converting that float back to double, and comparing the result to x.

All floats are exactly representable in double, so the back conversion involves no rounding. The result will be equal to the original double if, and only if, the float is equal to that double.
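A minimal sketch of that round trip using `struct` (the function name is mine; note that NaN compares unequal to itself, so it will always report as not exactly representable):

```python
import struct

def is_exact_float(value):
    # Round-trip double -> single -> double; the value is exactly
    # representable iff nothing was lost on the way.
    # struct raises OverflowError when the magnitude exceeds float range.
    try:
        return struct.unpack('>f', struct.pack('>f', value))[0] == value
    except OverflowError:
        return False
```

For example, `is_exact_float(0.25)` is True, while `is_exact_float(3.3)` and `is_exact_float(1e300)` are both False.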

Patricia Shanahan

Single Precision

The IEEE single precision floating point standard representation requires a 32 bit word, which may be represented as numbered from 0 to 31, left to right.

The first bit is the sign bit, S, the next eight bits are the exponent bits, 'E', and the final 23 bits are the fraction, 'F'.


Double Precision

The IEEE double precision floating point standard representation requires a 64 bit word, which may be represented as numbered from 0 to 63, left to right.

The first bit is the sign bit, S, the next eleven bits are the exponent bits, 'E', and the final 52 bits are the fraction, 'F'.
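Those single-precision fields can be inspected from Python by reinterpreting the packed bytes as an unsigned integer (a sketch; the bit masks follow the field widths above):

```python
import struct

# Pack -1.5 as single precision and pull out the three fields.
bits = struct.unpack('>I', struct.pack('>f', -1.5))[0]
sign = bits >> 31                  # 1 bit
exponent = (bits >> 23) & 0xFF     # 8 bits, biased by 127
fraction = bits & 0x7FFFFF         # 23 bits
print(sign, exponent, hex(fraction))  # 1 127 0x400000
```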

user2903316
  • so how do i know whether a `value` that satisfies `type(value) == float` is one or the other? – umläute Nov 14 '13 at 13:33
  • Quoting definitions is *not* helping the OP here. The problem is not understanding the difference, but how to detect if a Python `float` value (which always fits in a double) can safely be encoded to just 4 bytes. – Martijn Pieters Nov 14 '13 at 13:34