
After spending quite a long time working on multiple programs, I have found that, depending on the platform, I sometimes need to lower RAM usage drastically, because resources are highly limited on some platforms. I normally store large maps and matrices in terms of these types, so switching from int32 to int16, or from double to float (in case they are actually of different sizes), easily reduces my usage by almost half. Thus, I have just added typedefs like these:

typedef double Float;
typedef int32_t Int;
typedef uint32_t UInt;

This allows me to quickly adjust all the important primitive types in my program. Note that none of the integers in my program actually exceed what a 2-byte integer can hold, so there is no issue with using anything from int16 to int64.
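For example, all of my containers are declared only in terms of these aliases, so changing a single typedef line resizes every map and matrix at once. A minimal sketch (the Matrix and IndexMap aliases here are just illustrations, not my actual code):

#include <cstdint>
#include <map>
#include <vector>

typedef double Float;
typedef int32_t Int;
typedef uint32_t UInt;

// Only the aliases appear below, so switching e.g. Float to float or
// Int to int16_t above shrinks every container built on them.
typedef std::vector<std::vector<Float>> Matrix;
typedef std::map<UInt, Int> IndexMap;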

Additionally, it seems a bit more readable to just have a nice "Int" there instead of "uint32_t". In some cases I have also observed performance changes, both from reducing the size of the primitive types and from increasing it.

My question is: are there any disadvantages that I am simply missing? I couldn't really find anything about this topic on SO yet, so please point me there if I have missed that as well. The code is primarily for me; others might see it, but in every case it would be handed over by me personally or with proper documentation.

EDIT: Sorry for the earlier mistake, I do indeed use typedefs.

philkark
  • well a "disadvantage" I could see is when sharing the code with others because they are not used to the typedefs. And if you check boundaries you should use the numeric_limits and not hardcode them (which is generally always better ) because when you change the type to a smaller one the boundaries also change – Hayt Sep 02 '16 at 08:42
  • @Hayt Thanks for the comment. Yes, I have thought about using numeric_limits in general. However, the largest integers I reach are only about 2000-3000, and there is no way to exceed that in my program. For floating points it's similar. – philkark Sep 02 '16 at 08:44
  • If you never exceed the bounds of the smaller types, is there any reason not to use them unconditionally? Also, the idea of having a type `Int` on a modern architecture secretly being only 16 bits scares me. – Sebastian Redl Sep 02 '16 at 08:54
  • Qt does a similar thing: `qreal` is `typedef`d `double` on x86 and `float` on arm. Maybe you could also use real instead of Float to avoid confusion. – Karsten Koop Sep 02 '16 at 08:55
  • @SebastianRedl In general yes, but I have observed performance hits for smaller types on some platforms. – philkark Sep 02 '16 at 08:57
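As suggested in Hayt's comment, boundary checks can be written against std::numeric_limits of the alias instead of hard-coded constants, so the bounds follow the typedef automatically. A minimal sketch (the helper name fits_in_Int is just an illustration, not from the original post):

#include <cstdint>
#include <limits>

typedef int16_t Int;

// Returns true if value fits into whatever Int currently is; the bounds
// track the typedef instead of a hard-coded 32767 or 2147483647.
inline bool fits_in_Int(long long value) {
    return value >= std::numeric_limits<Int>::min()
        && value <= std::numeric_limits<Int>::max();
}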

1 Answer


`typedef int32_t Int;` is NOT BAD, but `typedef double Float;` is NOT GOOD, because it's confusing: a Float is, in fact, a double!?

Why not use the preprocessor to define two sets of types, one for large types and one for small types?

#include <cstdint>
#include <iostream>

// Compile with -DLARGE to get the wide types; otherwise the narrow ones are used.
#ifdef LARGE
typedef int32_t Int;
typedef double Real;
#else
typedef int16_t Int;
typedef float Real;
#endif

void f() {
    std::cout << sizeof(Int) << std::endl;
    std::cout << sizeof(Real) << std::endl;
}

To use large types: g++ -o test test.cpp -DLARGE

To use small types: g++ -o test test.cpp

for_stack
  • I like the idea of preprocessor defines. I would have to make it a bit more adjustable, because only small/large won't always cover it, but I will most likely do that. "Real" is additionally a very good name; I have in fact named it FloatPt in the past, to refer to a general floating point, but "Real" is indeed better. – philkark Sep 02 '16 at 08:55
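If two sets are not enough, as the last comment notes, the same preprocessor idea extends to a numeric switch. A sketch, where the PRECISION macro name is just an example and not from the answer:

#include <cstdint>

// Default to the smallest set unless a level is passed on the command line.
#ifndef PRECISION
#define PRECISION 0
#endif

#if PRECISION >= 2
typedef int64_t Int;
typedef long double Real;
#elif PRECISION == 1
typedef int32_t Int;
typedef double Real;
#else
typedef int16_t Int;
typedef float Real;
#endif

To pick the middle set: g++ -o test test.cpp -DPRECISION=1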