
I'm starting to fiddle around a bit with GTK+ for a little project of mine.

GLib defines a series of data types, like gint, gpointer and so on, which are just typedefs of base data types (gint is a typedef for int, gpointer for void*, and so on).
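For reference, these aliases look roughly like this in GLib's headers (simplified; the real definitions are spread across glib/gtypes.h and the generated glibconfig.h):

    /* Simplified from GLib's gtypes.h; the actual definitions are
     * spread across glib/gtypes.h and the generated glibconfig.h. */
    typedef char  gchar;
    typedef int   gint;
    typedef gint  gboolean;
    typedef void* gpointer;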

Now, say I have a function or a class that in no way makes use of GTK. I would be really tempted to use the base data types so that I can reuse the class/function somewhere else, even without including the GTK headers.

On the other hand, I find it quite ugly to have a mix of gint and int in the code, when they are actually the same thing.

In summary, I am wondering whether there is a standard practice of when to use one or the other, or if one should just mix them at will...

– nico
    Havoc Pennington commented here on this matter: http://stackoverflow.com/questions/2800310/converting-an-array-of-characters-to-a-const-gchar/2800318#2800318 It sounds reasonable to me. – ptomato Jan 02 '12 at 13:38

1 Answer


I deal with this issue a lot when working with third-party libraries, where each one wants its own type aliases for integers, floats, longs, shorts, byte aliases instead of chars, and so on.

It's very annoying. This is often done to ensure portability, but it ends up giving each library its own standard.

What I find most displeasing here is the coupling aspect. I might have a general mesh interface that should be decoupled from any rendering concerns, yet some of its data may be passed directly to an OpenGL function which assumes that the size of the integers we pass matches sizeof(GLint).

In some cases this isn't merely aesthetic. It might not even be feasible to include the GL headers in this mesh header, as it may be part of a widely-used software development kit that should not impose such compile-time dependencies on the third-party plugin writers who use it.
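As a hypothetical sketch of that kind of coupling (the Mesh struct and function names here are made up for illustration), the rendering-agnostic header stores plain unsigned int indices, and a single GL-aware translation unit hands them to OpenGL on the assumption that they match GLuint:

    /* mesh.h -- rendering-agnostic; includes no GL headers. */
    #include <stddef.h>

    typedef struct Mesh {
        unsigned int* indices;   /* plain type, not GLuint */
        size_t        index_count;
    } Mesh;

    /* mesh_gl.c -- the one translation unit allowed to see GL. */
    #include <GL/gl.h>

    void draw_mesh(const Mesh* mesh)
    {
        /* OpenGL will read these as GLuint values; this is only
         * safe if unsigned int and GLuint have the same size. */
        glDrawElements(GL_TRIANGLES, (GLsizei)mesh->index_count,
                       GL_UNSIGNED_INT, mesh->indices);
    }

This keeps the SDK header free of GL dependencies while confining the size assumption to one file.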

Yet portability is a real issue. I managed to survive a nightmarish scenario in a very large-scale legacy C codebase where the implicit assumption that sizeof(int) == sizeof(void*) was made throughout. It took years of looking for needles in a haystack to port that codebase to 64-bit.

What I've settled on personally, over the years, is to favor plain old unaliased data types. I've also taken a liking to just using signed integers: I found it a nuisance in the past even to avoid warnings in basic loops through containers, where some code used int, other code unsigned int, and still other size_t to indicate the number of elements contained. At least personally, I found my maintenance time reduced by favoring int in the absence of a very good reason not to.
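A minimal illustration of the nuisance I mean (the container APIs here are hypothetical): when every container reports its size with a different type, mixing counter types invites sign-comparison warnings, whereas settling on int keeps the loops uniform at the cost of one cast at the boundary:

    #include <stddef.h>

    /* Hypothetical container APIs, each reporting a count differently. */
    int          list_count(void);
    unsigned int set_count(void);
    size_t       array_count(void);

    void iterate_all(void)
    {
        /* One counter type everywhere; the casts mark the boundary
         * where each container's own size type is converted. */
        for (int i = 0; i < list_count(); ++i)       { /* ... */ }
        for (int i = 0; i < (int)set_count(); ++i)   { /* ... */ }
        for (int i = 0; i < (int)array_count(); ++i) { /* ... */ }
    }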

To mitigate the potential worst-case scenario on some platform where, say, sizeof(int) != sizeof(GLint), I tend to liberally sprinkle assertions around code that assumes the two are equal: assert(sizeof(int) == sizeof(GLint));. This should significantly reduce the pain of the kind of nightmarish porting scenario I faced before when moving from 32-bit to 64-bit, and it explicitly documents these assumptions.
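A sketch of what that looks like at a GL boundary (upload_indices is a made-up name); with C11 the check can even move to compile time, which is strictly better where available:

    #include <assert.h>
    #include <GL/gl.h>

    void upload_indices(const int* indices, int count)
    {
        /* Document and enforce the size assumption right where the
         * reinterpretation happens. C11 catches it at compile time. */
    #if __STDC_VERSION__ >= 201112L
        _Static_assert(sizeof(int) == sizeof(GLint),
                       "int and GLint must have the same size");
    #else
        assert(sizeof(int) == sizeof(GLint));
    #endif
        /* ...past this point it is safe to treat indices as GLint*... */
        (void)indices; (void)count;
    }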

I've found this to establish a comfortable balance for my case. Of course this is all subjective and can vary considerably based on your use cases. But it is one possible solution that might allow you to favor plain old unaliased data types more and more in spite of all these third-party libraries, without silently facing a worst-case scenario if your assumptions cease to hold on some platform.