
Is it true that the bigger a floating point number gets (either positive or negative), the fewer bits remain to encode its decimal digits?

Can we encode more decimal digits between 2^1 and 2^2 than between 2^16 and 2^32?

Is there the same count of values in these two ranges?

Guillaume Paris

4 Answers


IEEE 754, binary-32 numbers are specified as follows:

[Figure: IEEE 754 binary-32 layout: 1 sign bit, 8 exponent bits, 23 fraction bits]

Essentially it has three parts:

  • 1 bit float32_sign representing the sign
  • 23 bits float32_fraction[] representing the binary fraction coefficients
  • 8 bits float32_exp representing an integer exponent of 2

See wikipedia/Single-precision_floating-point_format for details.

The formula to get the actual number is:

    value = pow(-1, float32_sign)
            * (1 + sum(float32_fraction[i] * pow(2, -i)))  # i = 1..23
            * pow(2, float32_exp - 127)
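As a concrete illustration, here is a small Python sketch (decode_float32 is a name made up for this answer) that pulls the three parts out of a float's bit pattern and reassembles the value; it assumes a normal (non-denormal) number:

import struct

def decode_float32(x):
    # Reinterpret the float's bits as an unsigned 32-bit integer (big-endian).
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31            # 1 bit
    exp = (bits >> 23) & 0xFF    # 8 bits, biased by 127
    frac = bits & 0x7FFFFF       # 23 bits
    # Reassemble per the formula above (normal numbers only):
    value = (-1) ** sign * (1 + frac / 2 ** 23) * 2.0 ** (exp - 127)
    return sign, exp, frac, value

print(decode_float32(6.25))  # (0, 129, 4718592, 6.25)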

Forgetting the exponent, the fraction part can represent pow(2, 23) = 8388608 distinct values. The minimum and maximum values in this range are:

    ( 1 + 0, 1 + sum(pow(2, -i)) )  # All coefficients being 0 and 1 resp. in the above formula
=>  ( 1, 2 - pow(2, -23) )          # By geometric progression
~>  ( 1, 2 )                        # Approximation by upper-bound

So for an exponent of 0 (float32_exp = 127), we have 8388608 numbers in the range (1, 2), and likewise in (-2, -1).

However, for large numbers, such as when the exponent is 126 (float32_exp = 253), we still have only 8388608 numbers to cover the whole interval [2^126, 2^127), and likewise (-2^127, -2^126].
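You can observe these widening gaps directly. A quick sketch, assuming NumPy is available for a true 32-bit type:

import numpy as np

def gap_after(x):
    # Distance from x to the next representable float32 above it.
    x = np.float32(x)
    return float(np.nextafter(x, np.float32(np.inf)) - x)

print(gap_after(1.0))        # ~1.19e-07, i.e. 2^-23
print(gap_after(2.0 ** 126)) # ~1.01e+31, i.e. 2^103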

A distribution graph between 1 and 128 looks like:

[Graph: count of representable float32 values near x, plotted for x from 1 to 128; the curve decays hyperbolically]

The curve is so steep near 0 that including that region would make the plot look like a single spike at 0. Do note that the graph is a hyperbola.

The formula to get the number of floating point numbers between two values is:

import math

def num_floats(begin, end):
    # pow(2, 23) * (log(end, 2) - log(begin, 2)) == pow(2, 23) * log(end/begin, 2)
    return 8388608 * math.log(float(end) / float(begin), 2)
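Applied to the ranges from the question, every power-of-two interval holds the same 2^23 values, so the span from 2^16 to 2^32, which covers sixteen such intervals, holds sixteen times as many:

print(num_floats(2 ** 1, 2 ** 2))    # 8388608.0, one power-of-two interval
print(num_floats(2 ** 16, 2 ** 32))  # 134217728.0 == 16 * 8388608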
tinkerbeast
  • Can you enlighten me how you came up with the distribution graph? I'm not able to interpret it. What does the x axis mean? The y axis seems to be the number of floating-point numbers in the interval [2^x, 2^(x+1)). Any help highly appreciated! – Max Maier Mar 19 '19 at 12:28
  • @MaxMaier I did this post a long time back so I don't remember exactly how I plotted it. However I do understand your confusion, x cannot represent an instantaneous value. The x here represents a starting value. Also a delta is needed for the next value. We will get different graphs for different values of delta. After some experimentation, I think the delta I used for this graph is 0.1. So this is definitely not the ideal graph. My limit and differentiation theory is a bit rusty, so I can't come up with the ideal formula at the moment where delta tends to 0. If someone can, I'll update this. – tinkerbeast Mar 20 '19 at 06:18
  • @MaxMaier I figured out the decay function: the num_floats function over an interval is actually the integral of the continuous density over that interval. Let alpha = 2^23 * log(e, 2); then integral(f(x)) = alpha * ln(x) => f(x) = alpha * 1/x. – tinkerbeast Mar 24 '19 at 09:51
  • Could you add background colors for the images? In 2022, many people use dark mode :) – ynn Aug 20 '22 at 07:50

Yes, the density of numbers that are exactly representable as floating point numbers gets smaller as the numbers get bigger.

Put another way, floating point numbers have only a fixed number of bits for the mantissa, and as the numbers get bigger, fewer of those mantissa digits fall after the decimal point (which is what I think you were asking).

The alternative would be fixed-point numbers, where the number of digits after the decimal point is constant. Not many systems use fixed-point numbers, though, so if that's what you want, you have to roll your own or use a third-party library, as sketched below.
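For illustration, a minimal roll-your-own sketch (the Fixed class and its 4-decimal-place scale are made up for this answer), storing values as scaled integers so the resolution after the decimal point stays constant regardless of magnitude:

class Fixed:
    """Minimal fixed-point number with 4 decimal places, stored as a scaled int."""
    SCALE = 10_000

    def __init__(self, value):
        self.raw = round(value * self.SCALE)

    def __add__(self, other):
        result = Fixed(0)
        result.raw = self.raw + other.raw
        return result

    def __repr__(self):
        return f"{self.raw / self.SCALE:.4f}"

print(Fixed(1_000_000) + Fixed(0.0001))  # 1000000.0001, constant absolute resolution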

john

From What Every Computer Scientist Should Know About Floating-Point Arithmetic:

In general, a floating-point number will be represented as ± d.dd...d × β^e, where d.dd...d is called the significand and has p digits. More precisely, ± d0.d1d2...d(p-1) × β^e represents the number.

Therefore, the answer is yes, because the mantissa (an older word for significand) has a fixed number of digits.
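A quick way to see that fixed-width significand in action, assuming NumPy is available for a true 32-bit float:

import numpy as np

# A float32 significand holds 24 bits, so integers above 2^24 start losing precision:
print(np.float32(2 ** 24) + np.float32(1))  # 16777216.0, the +1 is lost
print(np.float32(2 ** 23) + np.float32(1))  # 8388609.0, still exact below 2^24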

BЈовић
  • Please, the link is useful, but should not be called just "this". Anyone who wants to do some serious calculations should really read it, but it's quite long and dense if you want to read all of it. – Jan Hudec Aug 10 '11 at 06:27

A floating point number is a binary representation of a mantissa and an exponent. For an IEEE 754 short real, the most prevalent 32-bit representation, there is a sign bit, 23+1 bits for the mantissa, and an exponent range of −126 to +127 applied as a power of two.

So, to address your points:

  1. The number of bits to encode the digits is constant: about 7 decimal digits for a 32-bit float, and about 15-16 for a 64-bit double (see the sketch after this list).

  2. See 1.

  3. Yes, there is.
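A short sketch illustrating points 1 and 3 (and the follow-up question in the comments below): math.ulp, available since Python 3.9, returns the gap from a double to the next representable value, and that gap grows with magnitude even though the count of significant digits stays constant:

import math

print(math.ulp(1.0))   # ~2.22e-16
print(math.ulp(1e6))   # ~1.16e-10
print(math.ulp(1e15))  # 0.125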

wallyk
  • By 1 and 2 I wondered: is the gap between two adjacent values larger when the number is big? – Guillaume Paris Aug 10 '11 at 06:18
  • It is 7 decimal digits for a 32-bit float (23 bits + implicit 1 give a range of 2^24, which is approx. 1.6*10^7), but it's only about 15 for a 64-bit one (double), because it has 52 bits + implicit 1, which gives a range of 2^53, approx. 9*10^15. – Jan Hudec Aug 10 '11 at 06:19
  • 1
    @Guillaume07: Floating point is used in cases where the *relative* gap is important and that stays the same. If the absolute gap is important, you should be using integers. – Jan Hudec Aug 10 '11 at 06:21
  • You can see how many decimal digits you get by calculating n * ln(2) / ln(10), where *n* is the number of bits. That is approx. n * 0.3. – Rudy Velthuis Aug 10 '11 at 22:44
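As a quick check of the rule of thumb from the last comment, a two-line sketch:

import math

# bits * log10(2) decimal digits; log10(2) is approx. 0.301
print(24 * math.log(2) / math.log(10))  # ~7.22, float32: 23 fraction bits + implicit 1
print(53 * math.log(2) / math.log(10))  # ~15.95, float64: 52 fraction bits + implicit 1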