
I have the following function which is supposed to convert a floating point number to int32. The problem is that for negative numbers it just doesn't work (my if statement isn't executing). I've tried a similar program for a conversion from float to int16 and everything works just fine. Sorry if this is too simple, but I just can't figure out what I'm missing and why it doesn't work for negative values.

#define MaxInt32 2147483647
#define MinInt32 -2147483648
…
bool CastFloatToInt32 ( float  fNumber, int32 *ConvertedValue) {
    bool CastStatus = False;

    if ( ( fNumber >= MinInt32 ) && ( fNumber <= MaxInt32 ) ) {
        *ConvertedValue = ( int32 ) ( fNumber );
        CastStatus = True;

    } else {
        if (fNumber < MinInt32) {
            *ConvertedValue = MinInt32;

        } else {
            *ConvertedValue = MaxInt32;
        }
    }

    return CastStatus;
}
Daria Claws
  • Could this be an issue involving the sign bit? – Tim Biegeleisen Jun 02 '15 at 08:17
  • any reason you did not use [`floor()`](http://linux.die.net/man/3/floor) or [`ceil()`](http://linux.die.net/man/3/ceil)? – Sourav Ghosh Jun 02 '15 at 08:19
  • Note that `fNumber <= MaxInt32` is equivalent to `fNumber <= (float)MaxInt32`, where the conversion to `float` of `MaxInt32` rounds **up**. For the value of `fNumber` 2^31, the condition will be true and the later conversion `( int32 ) ( fNumber )` will invoke undefined behavior. – Pascal Cuoq Jun 02 '15 at 08:24
  • @SouravGhosh It is not clear to me in what way you are suggesting that the floating-point to floating-point functions `floor` and `ceil` would be useful. Perhaps you should expand on this remark in an answer. – Pascal Cuoq Jun 02 '15 at 08:26
  • I think the answer is in how `MinInt32` and `MaxInt32` are represented as float or maybe the values are wrong to begin with. How are they defined? – Aaron Digulla Jun 02 '15 at 08:27
  • I can't use library functions like `floor` and `ceil` as they don't seem to be recognized or defined on the FP unit of the circuit board I am working on. @Tim I thought about a sign bit problem too, but it's weird how this works for int16 but not for int32. I've defined them as `#define MaxInt32 2147483647` and `#define MinInt32 -2147483648` – Daria Claws Jun 02 '15 at 08:29

2 Answers


You can see why here: https://stackoverflow.com/a/20910712/1073171

Thus, you can fix this code by changing your defines to either:

#define MaxInt32 (int32)0x7FFFFFFF
#define MinInt32 (int32)0x80000000

Or else:

#define MaxInt32 (int32)2147483647
#define MinInt32 (int32)(-2147483647 - 1)

The reasoning is given in the answer linked above. If you're using GCC, you could always move to `-std=gnu99` or similar too!
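To illustrate, here is a minimal, self-contained sketch built on the second pair of defines. It assumes `int32` is a 32-bit signed type (plain `int` here) and uses plain `int` instead of your `bool`/`True`/`False` typedefs so it compiles on its own; it only addresses the negative-range problem, not the rounding edge case near 2^31 that Pascal Cuoq mentions in the comments.

#include <stdio.h>

typedef int int32;   /* assumption: int is 32 bits on the target */

#define MaxInt32 ((int32)2147483647)
#define MinInt32 ((int32)(-2147483647 - 1))

/* Clamp-and-convert: returns 1 when fNumber was in range, 0 when clamped. */
int CastFloatToInt32(float fNumber, int32 *ConvertedValue)
{
    int CastStatus = 0;

    if ((fNumber >= MinInt32) && (fNumber <= MaxInt32)) {
        *ConvertedValue = (int32)fNumber;   /* now reached for negative values too */
        CastStatus = 1;
    } else if (fNumber < MinInt32) {
        *ConvertedValue = MinInt32;         /* clamp below range */
    } else {
        *ConvertedValue = MaxInt32;         /* clamp above range */
    }

    return CastStatus;
}

int main(void)
{
    int32 v;
    int ok = CastFloatToInt32(-123.9f, &v);
    printf("status=%d value=%d\n", ok, v);  /* truncates toward zero: -123 */
    return 0;
}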

Brian Sidebotham

The compiler parses "-2147483648" in 2 stages: text to number and then negation.

2147483648 is likely an unsigned long/unsigned value. Negating that does not change the type and curiously retains the same unsigned integer value of 2147483648.

Thus in fNumber >= MinInt32, the right-hand side is the positive unsigned value 2147483648; it is converted to floating point for the comparison, so the test is false for every negative fNumber.

Suggest using @Brian Sidebotham's solution or, if that is not acceptable, at least cast MinInt32 to its expected type.

#define MinInt32 ((int32)-2147483648)
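For illustration, a small sketch of the difference (the macro names here are mine, just for contrast). It assumes an implementation where the unsuffixed constant 2147483648 gets an unsigned 32-bit type (e.g. C90 with 32-bit long); a C99 compiler with 64-bit long long parses the literal as signed and both comparisons then behave the same.

#include <stdio.h>

#define MinInt32_bad  (-2147483648)            /* may keep the unsigned value 2147483648 */
#define MinInt32_good ((int)(-2147483647 - 1)) /* always the signed minimum */

int main(void)
{
    float fNumber = -5.0f;

    /* On the assumed implementation the first comparison prints 0 (false),
       because -5.0f is compared against +2147483648.0; the second prints 1. */
    printf("bad : %d\n", fNumber >= MinInt32_bad);
    printf("good: %d\n", fNumber >= MinInt32_good);
    return 0;
}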
chux - Reinstate Monica