
Please consider the following code:

float float_value = x; // x is any valid float value
int int_value = 0;
size_t size = sizeof(int) < sizeof(float) ? sizeof(int) : sizeof(float);
memcpy(&int_value, &float_value, size);

As far as I know, this could result in a trap representation. My questions:

  1. Is that true?
  2. If not, why?
  3. If not, is there another way avoiding a possible trap representation?
Johannes

3 Answers


The sanctioned way, which won't produce any trap representation, is

unsigned char obj[sizeof(float)];
memcpy(obj, &float_value, sizeof(float));

Then you can use the bytes of the object representation to build your desired int.

But using fixed-width integers, as mentioned by Stephen Canon, is better, unless you have a weird float size.

Daniel Fischer

You don't need memcpy if you only want to inspect the values. The easiest way is just to cast:

unsigned char const *p = (unsigned char const *)&float_object;

A pointer cast to any character type is always guaranteed to give something valid with which you can do simple arithmetic. You are safe as long as you only dereference within the bounds given by sizeof float_object.

If you want to treat the bytes as a number, the safest choice is an unsigned integer of fixed width, most probably uint32_t. If you know that the width requirements are fulfilled, this should give you everything you need.

As mentioned, this works well as long as you don't write through that pointer. Otherwise the aliasing rules for pointers can lead the optimizer astray.

Jens Gustedt

Yes, that could result in a trap representation. As for how to avoid it:

  • Assert that sizeof(int32_t) == sizeof(float)
  • Use int32_t instead of int.

The fixed-width integer types may not admit trap representations. The standard requires that they have no padding bits and a two's complement representation (§7.18.1.1), and §7.18.2.1 additionally fixes INTN_MIN to exactly −2^(N−1), so every combination of bits has a value defined by the standard.

Stephen Canon
  • Are you sure about that? I thought only *unsigned char* never traps. Could you please quote C99? Thanks for the answer! – Johannes Dec 15 '11 at 22:28
  • @Johannes: §7.18.1.1 "The typedef name intN_t designates a signed integer type with width N, no padding bits, and a two’s complement representation." Because there are no padding bits, all N bits are *value* bits, thus every combination of bits has a value defined by the standard and cannot be a trap representation. – Stephen Canon Dec 15 '11 at 22:33
  • @Stephen, you could have a trap representation without padding, namely the restrictions that you cite still leave two possible values for `INTXX_MIN`. But the standard closes this loophole by fixing what the value has to be in addition. In the copy I have it even looks that this was added later with one of the updates. – Jens Gustedt Dec 15 '11 at 22:33
  • @JensGustedt: You're overlooking the requirement that a twos-complement representation is used. There is a unique `INT32_MIN`. (as specified in §7.18.2.1) – Stephen Canon Dec 15 '11 at 22:39
  • @Stephen, this was exactly the point in the standard I was referring to. Two's complement alone doesn't guarantee that: the standard explicitly allows "the value with sign bit 1 and all value bits zero" to be a trap representation. You need to fix the min value in addition to that. This information is missing from your answer. – Jens Gustedt Dec 15 '11 at 22:45
  • @JensGustedt: That's addressed in §7.18.2.1, which requires that `INTN_MIN` is exactly -2**(n-1). I assume that's what you are getting at. – Stephen Canon Dec 15 '11 at 22:48
  • @Stephen, yes exactly, this is what I was saying from the start, and what was missing from your answer. – Jens Gustedt Dec 15 '11 at 22:56