
I am trying to separate two doubles of equal value by the minimum possible amount. The context is an event simulation: I don't want two events to occur at the same time, so I increment the time at which a new event is set to occur by a minimal amount.

(Collisions happen annoyingly often, since I use a self-implemented RNG, so I actually need to probe until I find a time at which no event is already occurring.)

Currently, my code wants to do something like this (not tested, treat as pseudocode):

typedef struct event
{
    double time;
    struct event *nexteventinqueue;
} event;

//insert newevent into the queue:
void add(event *newevent)
{
    static event *firsteventinqueue = NULL;

    if (firsteventinqueue == NULL)
    {
        firsteventinqueue = newevent;
        return;
    }

    event *currentevent = firsteventinqueue;
    event *temp = NULL; //trails one node behind currentevent

    //find an event in the queue that does not precede the new event
    //(check for NULL so we don't run off the end of the list)
    while (currentevent != NULL && currentevent->time < newevent->time)
    {
        temp = currentevent;
        currentevent = currentevent->nexteventinqueue;
    }

    if (currentevent == NULL) //no such event found, so tag it onto the end
    {
        temp->nexteventinqueue = newevent;
        newevent->nexteventinqueue = NULL;
        return;
    }

    //handle coincidences by delaying the new event
    while (currentevent != NULL && currentevent->time == newevent->time)
    {
        double d = (maximally precise increment of a double precision floating point number);
        while (currentevent->time == newevent->time) //loop I want to get rid of
        {
            newevent->time += d;
            d *= 2;
        }
        temp = currentevent;
        currentevent = currentevent->nexteventinqueue;
    }

    if (temp == NULL) //new event precedes everything already queued
    {
        newevent->nexteventinqueue = firsteventinqueue;
        firsteventinqueue = newevent;
        return;
    }

    temp->nexteventinqueue = newevent;
    newevent->nexteventinqueue = currentevent;
    return;
}

Now, there are quite a few issues with this, but the one I want to deal with is the while loop in the middle. Most of my times come nowhere near the maximum precision that a double-precision float can muster, so it's a waste of time to start by assuming they do, and because my RNG isn't especially random, this loop must execute quite frequently.

Is there a way to either (1) directly increment the fractional part of a double-precision floating-point variable, or (2) figure out how precise a given floating-point variable x is, in less than O(log(x)) time?

  • Normally discrete event engines support events at the same time. The important thing is to ensure that when that happens there is a defined order of execution. Usually that means last scheduled is last executed, which gives almost the same behaviour you have without tinkering with the time. – Keith Nov 13 '14 at 05:58
  • Never use `double` for time. For floating point you get variable precision (larger values are less precise), and it ends up being very wrong for most purposes (e.g. "2_years + 1_second = 2_years"). Instead use an integer where the precision is constant (e.g. maybe `uint64_t` depending on the range and precision you want). – Brendan Nov 13 '14 at 06:22
  • @Keith: What you are saying makes perfect sense, but it violates one of the assumptions of a Poisson process, so some might object out of principle. (There are other, worse violations of those assumptions going on in this case, though.) – morescientistthancoderfornow Nov 13 '14 at 06:35
  • @Brendan: Ah, just have the integer represent some power-of-2 fraction of a second! Brilliant! Thank you. – morescientistthancoderfornow Nov 13 '14 at 06:38
  • @morescientistthancoderfornow: I typically go with something like "64-bit nanoseconds since the epoch" (which is good enough to handle about 584 years, with more precision than most things need) – Brendan Nov 13 '14 at 14:42
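As an illustration (not part of the original discussion), here is a minimal sketch of the integer-time idea from the comments above, assuming event times are stored as 64-bit nanosecond ticks since the simulation epoch; the type and function names are made up for illustration:

#include <stdint.h>
#include <math.h>

/* Hypothetical integer time base: 64-bit count of nanoseconds since the
 * simulation epoch.  Precision is uniform over the whole range, so the
 * minimal increment is always exactly 1 tick. */
typedef uint64_t sim_time_ns;

#define NS_PER_SECOND 1000000000ULL

/* Convert an inter-arrival delay drawn in (floating-point) seconds into
 * integer ticks, rounding up so a positive delay never collapses to zero. */
static sim_time_ns seconds_to_ticks(double seconds)
{
    return (sim_time_ns)ceil(seconds * (double)NS_PER_SECOND);
}

/* Resolving a time collision is then just "+ 1", with no precision
 * questions and no risk of the increment becoming a no-op. */
static sim_time_ns delay_one_tick(sim_time_ns t)
{
    return t + 1;
}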

2 Answers


Use `nextafter` (from `<math.h>`) to get the immediately next largest value after a double-precision value (or `nextafterf` for a single-precision value).

For more information, see `man nextafter`.
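As an illustration (not from the original answer), here is a minimal sketch of how the coincidence handling from the question might look with `nextafter`; it reuses the `event` struct from the question and assumes the queue is sorted by time:

#include <math.h>   /* nextafter, HUGE_VAL */

/* Given the sorted queue and a proposed time, return a time that does not
 * coincide with any event already in the queue.  Sketch only. */
double resolve_collision(const event *queue, double newtime)
{
    for (const event *e = queue; e != NULL; e = e->nexteventinqueue)
    {
        if (e->time == newtime)
        {
            /* Step to the next representable double toward +infinity.
             * Calling nextafter each time avoids the fixed-epsilon trap
             * where adding a precomputed increment becomes a no-op. */
            newtime = nextafter(newtime, HUGE_VAL);
        }
        else if (e->time > newtime)
        {
            break; /* queue is sorted by time, so no later collisions */
        }
    }
    return newtime;
}

The caller would then insert the new event using the returned time in place of the inner while loop from the question.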

rici
  • So the precision would be `nextafter(currentevent->time, ((double)1/0)) - (currentevent->time)`? – morescientistthancoderfornow Nov 13 '14 at 06:03
  • You might have problems with `(double)1/0`, that's going to be a rather large number that will need to be trapped. Any number greater than `currentevent->time` will cause `nextafter` to return the next representable value larger than `currentevent->time`. – David C. Rankin Nov 13 '14 at 06:13
  • @morescientistthancoderfornow: I would call that the epsilon rather than the precision, but yes. – rici Nov 13 '14 at 06:23
  • @DavidC.Rankin Right, thank you for giving words to my concern. Maybe ((currentevent->time)*4) could work in its stead, or the maximum finite double value, though I worry the latter wouldn't be so platform-portable. – morescientistthancoderfornow Nov 13 '14 at 06:25
  • @DavidC.Rankin: 1/0 is positive infinity (which is representable in IEEE 754); as the manpage says, "If x equals y, the functions return y." (y in this case is probably 1/0 or `HUGE_VAL`). Fortunately, the time is rarely infinite. – rici Nov 13 '14 at 06:25
  • Yes, I understand, but why force the trap of `HUGE_VAL` just to tell `nextafter` what direction you want to go? – David C. Rankin Nov 13 '14 at 06:26
  • @DavidC.Rankin: I guess it's too late here for me to understand the objection. The question, as I understand it, is "how do I correct an event time so that it is not the same as another event time, quickly and without modifying the order much", and the answer, it seems to me, is repeatedly call `nextafter` until you find a value that is not the same as any known event time. Of course, if events can happen at an infinite time, that's not going to work, but then you'll never get around to those events, so there is another problem. – rici Nov 13 '14 at 06:30
  • @DavidC.Rankin what do you mean by "trapping"? I did some googling and don't see a straight reference. (Sorry, new to this!) Could I equally well just use HUGE_VAL directly? E: You mean a syscall? Isn't all floating-point arithmetic via syscall? – morescientistthancoderfornow Nov 13 '14 at 06:30
  • Yeah, I was going to ask that, too. You can just use `HUGE_VAL`. – rici Nov 13 '14 at 06:31
  • @rici "epsilon"...that word the mathematicians throw around...yes, fits the context. Thank you. – morescientistthancoderfornow Nov 13 '14 at 06:32
  • @morescientistthancoderfornow: No, these days most CPUs have builtin floating point. And anyway 1./0. is not a trap (or at least, it is not necessarily a trap; you can set the floating point unit to issue a trap, but unix doesn't). http://coliru.stacked-crooked.com/a/45c19701a6f9d2cb – rici Nov 13 '14 at 06:35
  • @morescientistthancoderfornow: By the way, don't compute epsilon and then add it repeatedly. Eventually, you will hit a point where epsilon increases, at which point the addition becomes a no-op. You have to call `nextafter` repeatedly. Fortunately, it is *very cheap*. – rici Nov 13 '14 at 06:38
  • @rici Right, I just wanted to bring that out because I asked it too in the question...maybe someone just wants the current epsilon/"precision" for some reason, and they come to this aptly-titled question. – morescientistthancoderfornow Nov 13 '14 at 06:40

Following @Brendan's comment and @rici's comment, it may be useful to see why the underlying integer representation ultimately determines what the smallest nextafter step is. Floats and doubles are stored in memory in IEEE-754 single/double-precision floating-point format, and for every float or double there is a bit pattern that can also be read as an integer. The next available increment, i.e. the nextafter value, corresponds to incrementing that bit pattern by 1 at bit 0. The following example illustrates the point for the number 123.456 (a float is shown to save space; a double simply involves more bits):

Example of next value for   : 123.456

    The float value entered : 123.456001
    As unsigned integer     : 1123477881
    binary value in memory  : 01000010-11110110-11101001-01111001

    next larger int value   : 1123477882
    binary value in memory  : 01000010-11110110-11101001-01111010
    next larger float value : 123.456009

To find the next possible value, the bit pattern in memory is incremented by 1 (or decremented by 1 if the most significant bit is set, indicating a negative value), and the result is then read back as floating point to give the next possible floating-point value. Brendan's point is well taken: by using an integer value for time you avoid the need for the nextafter calculation entirely. But regardless of whether you use floating point or an integer, the smallest step in memory is still an increment (or decrement) of 1 at bit 0. For the example above, it is the difference between 123.456001 and 123.456009 (in single precision), as shown below:

IEEE-754 Single Precision Floating Point Representations

  0 1 0 0 0 0 1 0 1 1 1 1 0 1 1 0 1 1 1 0 1 0 0 1 0 1 1 1 1 0 0 1
  |- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -|
  |s|      exp      |                  mantissa                   |

  0 1 0 0 0 0 1 0 1 1 1 1 0 1 1 0 1 1 1 0 1 0 0 1 0 1 1 1 1 0 1 0
  |- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -|
  |s|      exp      |                  mantissa                   |

Hopefully the visual, along with the explanation, adds to the understanding. Note: there is no actual 'translation' or 'conversion' involved; it is just looking at the same bits in memory as either a floating-point number or an integer. The only difference to your code is whether you add 1 directly (integer time) or add a few more lines to find out what the next floating-point value is after you add 1. None of it is processor- or time-intensive.
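As a concrete illustration (not from the original answer), here is a minimal sketch of the bit-pattern view described above; it uses `memcpy` to read a float's bits as an unsigned integer, handles only positive finite values, and compares the result against `nextafterf`:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Return the next representable float above f by incrementing its bit
 * pattern.  Sketch only: assumes f is positive, finite and non-zero;
 * negative values would need the decrement case described above. */
static float next_float_up(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   /* view the float's bits as an integer */
    bits += 1;                        /* smallest possible step: bit 0 */
    memcpy(&f, &bits, sizeof f);      /* view the incremented bits as a float */
    return f;
}

int main(void)
{
    float x = 123.456f;
    printf("entered       : %.6f\n", x);
    printf("bit increment : %.6f\n", next_float_up(x));
    printf("nextafterf    : %.6f\n", nextafterf(x, HUGE_VALF)); /* should agree */
    return 0;
}

On an IEEE-754 system the last two lines should agree and reproduce the 123.456001 / 123.456009 pair shown in the example above.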

David C. Rankin