3

I'm doing some graphing-type stuff, and I frequently want to take the slope between two points as dy / dx. However, if my dx is exactly zero, I will get a divide-by-zero error.

If dx is zero, I can set it to something small, say 0.001. However, I would like to increase the accuracy of my solution.

There might be better solutions to this given problem, but I'm sure that other problems exist whose possible solutions beg for the same thing: the smallest possible nonzero number.

Also, how expensive is it to obtain? Is there a significant chance that this number cannot be reliably replicated, perhaps due to, say, rounding error?

Colin Hancey
  • 219
  • 2
  • 8
  • 2
    I think this is a https://meta.stackexchange.com/questions/66377/what-is-the-xy-problem I doubt that you can use the smallest non-zero number for the purpose you have in mind. What you really want is the smallest number with which you can still calculate a meaningful slope. But for a delta x of zero, if it exists, there IS a meaningful slope, which is "vertical". So what I think you really should do is use the semantic meaning "vertical" for any 0-deltax, instead of changing the deltax to something that gets you an only seemingly meaningful slope. – Yunnosch Jan 20 '21 at 06:25
  • 1
    How to do that is of course a different problem. Please describe your context in more detail. Then we can look for a solution for what you need, instead of helping you force the ultimately (probably) dead-ended path you currently follow. – Yunnosch Jan 20 '21 at 06:32
  • 2
    Why is this tagged *both* C and Python? Their floating point types are not equivalent. What specific type in what language are you looking for? Are you aware that Python has arbitrary-precision rational types as well? – MisterMiyagi Jan 20 '21 at 06:43
  • MisterMiyagi, No, I did not know that! I would like to know the answer for each of these languages. Yunnosch, I'm not looking for a solution to a specific problem. I'm asking so that I might be better able to solve future problems. There is no dead end. – Colin Hancey Jan 20 '21 at 19:01

3 Answers

3

Could you use the decimal module for this? You can make a very small nonzero Decimal value with it:

>>> import decimal
>>> from decimal import Decimal
>>> almost_zero = Decimal((0, (1,), decimal.getcontext().Emin))
>>> almost_zero
Decimal('1E-999999')

which is a pretty small number that should be suitable for your purposes.

>>> 1 / almost_zero
Decimal('1E+999999')
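For the slope use case specifically, here is a minimal sketch; the `slope` helper and its zero-dx substitution are my own illustration, not code from the question:

```python
from decimal import Decimal, getcontext

def slope(x1, y1, x2, y2):
    """Slope between two points; substitutes a tiny Decimal when dx is zero."""
    dx = Decimal(x2) - Decimal(x1)
    dy = Decimal(y2) - Decimal(y1)
    if dx == 0:
        # hypothetical workaround: smallest positive Decimal in this context
        dx = Decimal((0, (1,), getcontext().Emin))
    return dy / dx

print(slope(0, 0, 2, 1))  # Decimal('0.5')
print(slope(1, 0, 1, 5))  # Decimal('5E+999999') -- huge, but finite
```

Whether a slope of 5E+999999 is actually more useful than signalling "vertical" is a separate question, as the comments point out.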

Or you could dig out some stuff from sys.float_info.

>>> import sys
>>> sys.float_info
sys.float_info(max=1.7976931348623157e+308, max_exp=1024, max_10_exp=308, min=2.2250738585072014e-308, min_exp=-1021, min_10_exp=-307, dig=15, mant_dig=53, epsilon=2.220446049250313e-16, radix=2, rounds=1)
>>> almost_zero = sys.float_info.min
>>> 1 / almost_zero
4.49423283715579e+307
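Note that `sys.float_info.min` is only the smallest *normal* double; IEEE 754 subnormals go further down, to 5e-324 (available as `math.ulp(0.0)` on Python 3.9+). A quick check, with the caveat that the reciprocal of a subnormal overflows:

```python
import math
import sys

smallest_normal = sys.float_info.min  # 2.2250738585072014e-308
tiny = math.ulp(0.0)                  # smallest subnormal, 5e-324 (Python 3.9+)

print(tiny)                  # 5e-324
print(tiny / 2)              # 0.0 -- nothing exists between it and zero
print(1 / tiny)              # inf -- reciprocal overflows, useless as a dx
print(1 / smallest_normal)   # 4.49423283715579e+307 -- still finite
```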
mhawke
  • 84,695
  • 9
  • 117
  • 138
2

I'm doing some graphing-type stuff

Then consider studying (for inspiration) the source code of GNUplot. It is free software.... Also look inside the source code of GraphViz. It is open source.

What is the Smallest Nonzero Number that I can Reliably Generate?

In theory, this could be compiler specific, or implementation specific.

The n1570 draft C standard mentions in §5.2.4.2.2 some DBL_EPSILON macro....

I would suggest coding explicitly if (dx==0.0) return; since on current computers that test is really fast. Actually, a test like if (fabs(dx)<4.0*DBL_EPSILON) return; could be better (but would run slower).
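Since the question is tagged python as well, here is a sketch of the same guard there; `sys.float_info.epsilon` is the Python counterpart of DBL_EPSILON, and the `slope` helper and the 4.0 safety factor are illustrative, not canonical:

```python
import sys

EPS = sys.float_info.epsilon  # Python analogue of DBL_EPSILON

def slope(dy, dx):
    """Return dy/dx, or None to signal a (near-)vertical segment."""
    if abs(dx) < 4.0 * EPS:   # 4.0 is a heuristic safety margin
        return None           # treat as vertical rather than divide
    return dy / dx

print(slope(1.0, 2.0))  # 0.5
print(slope(1.0, 0.0))  # None
```

Returning a sentinel such as None pushes the "vertical" case to the caller, which matches the semantic suggestion in the comments above.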

Regarding rounding errors in your C code (in practice), see the floating point guide and consider getting then using the Fluctuat tool or the CADNA tool or perhaps the ABSINT tool.

If you want to analyze rounding errors in binary executables contact my colleagues working on BinSec.

You could also use some arbitrary-precision arithmetic library like GMPlib.

Is there a significant chance that this number cannot be reliably replicated, perhaps due to, say, rounding error?

Probabilistic static analysis (at compile time) of floating point rounding errors is beyond the state of the art. Consider making your PhD thesis on that topic. Your PhD advisor could be Patrick Cousot in the USA, or Eric Goubault, Sylvie Putot, or maybe Emmanuel Haucourt in France, or some colleagues (e.g. Franck Védrine) from the Frama-C team (near Paris, France). Look also into the proceedings of ACM SIGPLAN conferences.

Perhaps in mid 2021 you might use RefPerSys or Bismon to analyze your C code (or maybe your Python code; then see also this) ? In that case, contact me by email.

Notice that floating point rounding errors have killed dozens of people (and might explain some Boeing 737 MAX crashes). Fixed point overflow is related to the Ariane 501 failure. So your future PhD (on static analysis of floating point errors) could be co-funded by Boeing, Airbus, NASA, ESA, Dassault, or CNES, and probably by the defense (artillery has used computers since the 1940s), robotics (think of cobots in neurosurgery), or automotive industries (since autonomous vehicles use floating point).

Read also the blog of the late Jacques Pitrat. It is relevant to your interests.

In 2021, an interesting application related to floating point is the simulation of the Covid-19 pandemic. So I would imagine that big hospitals could also co-fund your PhD (e.g. for a better estimate of social distancing; in Europe the recommended distance differs between countries).

Of course in the USA the NSF (and perhaps Google or Facebook) could also co-fund your PhD (I guess that the US DoD could also co-fund it, since rounding errors in missiles or weapons did kill several US soldiers in the past).

PS. If you start your PhD on these topics, please email me. I am interested.

Basile Starynkevitch
  • 223,805
  • 18
  • 296
  • 547
0

Below is a program that can help you with the values you want. The values are all constants defined in the file <float.h> and describe constants like the number of digits a float or a double uses, and the maximum and minimum values you can use in your program.

DBL_MIN, DBL_TRUE_MIN and DBL_EPSILON deserve some explanation; you should read the IEEE 754 standard about binary floating point numbers. DBL_MIN is the minimum value that can be represented with full precision. Below it you can continue representing numbers (the subnormals), but they lose more and more precision, until you get down to just one significant bit at DBL_TRUE_MIN, the minimum value representable on your machine. But in my opinion what you are looking for is DBL_EPSILON, the gap between two consecutive floating point numbers. As floating point precision is relative, it is given as the difference between 1.0 and the next representable number after 1.0, and numbers cannot be closer than that at that scale. (You need to multiply *_EPSILON by the number that provides the scale to get the distance from one number to the next at that scale.)
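The relative nature of *_EPSILON is easy to check. A quick demonstration in Python (whose float is a C double): the gap between consecutive numbers near 1024.0 is DBL_EPSILON scaled by 1024.

```python
import sys

eps = sys.float_info.epsilon  # DBL_EPSILON: gap between 1.0 and the next float

print(1.0 + eps > 1.0)                 # True: eps is resolvable next to 1.0
print(1.0 + eps / 2 == 1.0)            # True: anything smaller is absorbed
print(1024.0 + 1024.0 * eps > 1024.0)  # True: the gap scales with magnitude
print(1024.0 + eps == 1024.0)          # True: unscaled eps vanishes at this scale
```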

The program that gives you the values is below:

#include <stdio.h>
#include <float.h>

#define P(_nam,_fmt) printf("%20s = "_fmt"\n", #_nam, _nam)

int main()
{
        P(FLT_DIG,     "%18d");
        P(FLT_MAX,     "%18.6g");
        P(FLT_EPSILON, "%18.6g");
        P(FLT_MIN,     "%18.6g");
#if FLT_HAS_SUBNORM
        P(FLT_TRUE_MIN,"%18.2g");
#endif
        puts("");
        P(DBL_DIG,     "%18d");
        P(DBL_MAX,     "%18.15lg");
        P(DBL_EPSILON, "%18.15lg");
        P(DBL_MIN,     "%18.15lg");
#if DBL_HAS_SUBNORM
        P(DBL_TRUE_MIN,"%18.2lg");
#endif
}

and the results it gives on my system are (float and double values are shown):

             FLT_DIG =                  6
             FLT_MAX =        3.40282e+38
         FLT_EPSILON =        1.19209e-07
             FLT_MIN =        1.17549e-38
        FLT_TRUE_MIN =            1.4e-45

             DBL_DIG =                 15
             DBL_MAX = 1.79769313486232e+308
         DBL_EPSILON = 2.22044604925031e-16
             DBL_MIN = 2.2250738585072e-308
        DBL_TRUE_MIN =           4.9e-324
Luis Colorado
  • 10,974
  • 1
  • 16
  • 31