// <windef.h>
typedef int BOOL;
Isn't this a waste of memory since an int is 32 bits?
Just in case I was wrong, I tried passing a normal bool* to a function that required a BOOL*, and it didn't work until I used the typedef int.
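A minimal sketch of the mismatch I mean (GetState is a hypothetical stand-in, not a real API call):
#include <windows.h>

void GetState(BOOL* result) { *result = TRUE; }   // stand-in for the API function

int main()
{
    bool b = false;
    // GetState(&b);             // does not compile: bool* does not convert to BOOL* (int*)

    BOOL winBool = FALSE;
    GetState(&winBool);           // OK: the types match
    b = (winBool != FALSE);       // convert at the boundary
    return b ? 0 : 1;
}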
Wow, slow down a little bit there. First of all, I'm pretty sure programmers have been using 4-byte ints for boolean variables since the beginning of programming on x86. (There used to be no such thing as a bool datatype.) And I'd venture to guess that this same typedef is in the Windows 3.1 <Windows.h>.
Second, you need to understand a bit more about the architecture. You have a 32-bit machine, which means that all of the CPU registers are 4-bytes or 32-bits wide. So for most memory accesses, it is more efficient to store and access 4-byte values than it is for a 1-byte value.
If you have four 1-byte boolean variables packed into one 4-byte chunk of memory, three of those are not DWORD (4-byte) aligned. This means the CPU / memory controller actually has to do more work to get the value.
And before you go bashing MS for making that "wasteful" typedef, consider this: under the hood, most compilers (probably) still implement the bool datatype as a 4-byte int for the same reasons I just mentioned. Try it in gcc and take a look at the map file. I bet I am right.
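If you want to check what your compiler actually does, a quick sizeof test is enough (the result is implementation-defined, so treat this as a sketch, not a guarantee):
#include <cstdio>

int main()
{
    std::printf("sizeof(bool) = %zu\n", sizeof(bool));   // implementation-defined
    std::printf("sizeof(int)  = %zu\n", sizeof(int));
    return 0;
}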
Firstly, the type used in the system API has to be as language-independent as possible, because that API will be used by a multitude of programming languages. For this reason, any "conceptual" types that might either not exist in some languages or might be implemented differently in other languages are out of the question. For example, bool fits into that category. On top of that, in a system API it is a very good idea to keep the number of interface types to a minimum. Anything that can be represented by int should be represented by int.
Secondly, your assertion about this being "a waste of memory" makes no sense whatsoever. In order to become "a waste of memory" one would have to build an aggregate data type that involves an extremely large number of BOOL elements. Windows API uses no such data types. If you built such a wasteful data type in your program, it is actually your fault. Meanwhile, the Windows API does not in any way force you to store your boolean values in BOOL type. You can use bytes and even bits for that purpose. In other words, BOOL is a purely interface type. Objects of BOOL type normally don't occupy any long-term memory at all, if you are using it correctly.
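As a sketch of that point, you can keep your own storage packed and only deal with BOOL at the call boundary (IsWindowVisible and IsWindowEnabled are real API functions, used here only as examples of BOOL-returning calls):
#include <windows.h>

struct WindowState
{
    unsigned visible : 1;   // one bit of storage per flag, not four bytes
    unsigned enabled : 1;
};

void capture(HWND hwnd, WindowState& s)
{
    // BOOL is only an interface type here; convert it to compact storage immediately.
    s.visible = (IsWindowVisible(hwnd) != FALSE);
    s.enabled = (IsWindowEnabled(hwnd) != FALSE);
}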
Historically, BOOL was used as an anything-not-0 = TRUE type. For example, a dialog procedure returned a BOOL that could carry a lot of information. The signature below is from Microsoft's own documentation:
BOOL CALLBACK DlgProc(HWND hwndDlg, UINT message, WPARAM wParam, LPARAM lParam)
The signature and function result conflated several issues, so in the modern API it's instead
INT_PTR CALLBACK DialogProc(
_In_ HWND hwndDlg,
_In_ UINT uMsg,
_In_ WPARAM wParam,
_In_ LPARAM lParam
);
This newfangled declaration has to remain compatible with the old one. Which means that INT_PTR and BOOL have to be the same size. Which means that in 32-bit programming, BOOL is 32 bits.
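For illustration, a minimal modern dialog procedure might look like the sketch below; it returns INT_PTR, but BOOL values such as TRUE and FALSE still convert to it, which is exactly what keeps the new declaration compatible with the old one:
#include <windows.h>

INT_PTR CALLBACK DialogProc(HWND hwndDlg, UINT uMsg, WPARAM wParam, LPARAM lParam)
{
    switch (uMsg)
    {
    case WM_COMMAND:
        if (LOWORD(wParam) == IDCANCEL)
        {
            EndDialog(hwndDlg, 0);
            return TRUE;    // a BOOL value, returned through INT_PTR
        }
        return FALSE;
    default:
        return FALSE;       // message not handled
    }
}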
In general, since BOOL can be any value, not just 0 and 1, it's a very ungood idea to compare a BOOL to TRUE. And even though it works to compare it against FALSE, that's generally also bad practice because it can easily give people the impression that comparing against TRUE would be OK. Also, because it's quite unnecessary.
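A short sketch of the safe way to test a BOOL (IsWindowVisible is a real BOOL-returning API call, used here only as an example):
#include <windows.h>

void example(HWND hwnd)
{
    BOOL visible = IsWindowVisible(hwnd);

    if (visible)
    {
        // correct: any non-zero value counts as logically true
    }

    // if (visible == TRUE) ...   // fragile: fails if the API returns e.g. 2
}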
By the way, there are more boolean types in the Windows API, in particular VARIANT_BOOL, which is 16 bits and where logical TRUE is represented as the all-ones bit pattern, i.e. -1 as a signed value…
That's an additional reason why it's not a good idea to compare directly with logical FALSE or TRUE.
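A small sketch of the difference (VARIANT_TRUE is defined as ((VARIANT_BOOL)-1) and VARIANT_FALSE as 0):
#include <windows.h>
#include <cstdio>

int main()
{
    VARIANT_BOOL vb = VARIANT_TRUE;
    std::printf("VARIANT_TRUE  = %d\n", vb);             // prints -1
    std::printf("VARIANT_FALSE = %d\n", VARIANT_FALSE);  // prints 0

    // Comparing vb against the BOOL macro TRUE (which is 1) would be wrong;
    // test against zero / VARIANT_FALSE instead.
    if (vb != VARIANT_FALSE)
    {
        // logically true
    }
    return 0;
}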
The processor is 32-bit, and has a special flag that is set when it operates on a zero integer, making tests of 32-bit boolean values very, very, very fast.
Testing a 1-bit or one-byte boolean value is going to be many times slower.
If you are worried about memory space, then you might worry about 4-byte bool variables.
Most programmers, however, are more worried about performance, and thus the default is to use the faster 32-bit bool.
You might be able to get your compiler to optimize for memory usage if this bothers you.
Most of the answers in here seem to be misinformed. Using 4 bytes for a boolean value is not faster than using 1 byte. x86 architecture can read 1 byte just as fast as it can read 4, but 1 byte is less memory. One of the biggest threats to performance is memory usage. Use too much memory, and you'll have more cache misses, and you'll have a slower program. This stuff doesn't really matter if you're dealing with only a handful (hundreds!) of booleans, but if you have a ton of them, using less memory is key to higher performance. In the case of a massive array, I'd recommend 1 bit instead of 1 byte, as the extra logic to mask that bit is inconsequential if it's saving 87% of memory. You see this practice a lot with flag bitfields.
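For the massive-array case mentioned above, a sketch of the flag-bitfield approach (the flag names are made up for illustration):
#include <cstdint>

enum ItemFlags : std::uint32_t
{
    FLAG_VISIBLE  = 1u << 0,
    FLAG_SELECTED = 1u << 1,
    FLAG_DIRTY    = 1u << 2,
};

struct Item
{
    std::uint32_t flags = 0;   // room for 32 booleans in 4 bytes
};

inline bool hasFlag(const Item& item, std::uint32_t f) { return (item.flags & f) != 0; }
inline void setFlag(Item& item, std::uint32_t f)       { item.flags |= f; }
inline void clearFlag(Item& item, std::uint32_t f)     { item.flags &= ~f; }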
The answer to the question is most definitely just "legacy reasons." That is, "don't touch things that aren't broken." Changing a line of code like that for a minor optimization might introduce hundreds of other problems that nobody wants to deal with.