
I'm new to the Win32 API, and the many new types are starting to confuse me.

Some functions take one or two ints and three UINTs as arguments.

  • Why can't they just use ints? What are UINTs?

Then, there are those other types:

DWORD LPCWSTR LPBOOL 
  • Again, I think the "primitive" C types would be enough - why introduce 100 new types?

This one was a pain: WCHAR*

I had to iterate through it and push_back every character to an std::string as there wasn't another way to convert it to one. Horrible.

  • Why WCHAR? Why reinvent the wheel? Couldn't they have just used char* instead?
  • Not only is it C, but it also has leftovers from the big switch from Win16 (i.e. 16-bit) to Win32. Some of the types (actually #define's) also express aspects of the infamous Hungarian Notation that was prevalent at the time. Also, by hiding the actual types behind the #define's, MS was able to support different C compiler vendors; believe it or not, there were many different C compiler vendors back then. Forgotten names like Lattice C, Watcom C, and many others. Ahh... the memories. – kmontgom Apr 15 '10 at 18:35
  • WCHAR is a *wide* character, UTF-16LE. That takes up 2 bytes per character. With that in mind, you can probably guess what would happen if you tried to use it as a regular char array. – Michael Madsen Apr 15 '10 at 18:47
  • The `stdint` types are much more sane: `uint` and `uint32_t` and the like. All lower-case type names make me happy. – Earlz Apr 15 '10 at 18:50

4 Answers


The Windows API was first created back in the 1980's, and has had to support several different CPU architectures and compilers over the years. They've gone from single-user single-process standalone systems to networked multi-user multi-core security-conscious systems. They had to work around issues with 16-bit vs. 32-bit processors, and now 64-bit processors. They had to work around issues with pre-ANSI C compilers. They had to support C++ compilers in the early unstandardized times. They had to deal with segmented memory. They had to support internationalization before Unicode existed. They had to support some source-level compatibility with MS-DOS, with OS/2, and with Mac OS. They've had to run on several generations of Intel chips, and PowerPC, and MIPS, and Alpha, and ARM. The same basic API is used for desktop, server, mobile, and embedded systems.

Back in the 1980's, C was considered to be a high-level language (yes, really!) and many people considered it good form to use abstract types rather than just specifying everything as a primitive int, char, or void *. Back when we didn't have IntelliSense and infotips and code browsers and online documentation and the like, such usage hints were helpful, and it made it easier to port code between different compilers and different programming languages.

Yes, it looks like a horrible mess now, but that doesn't mean anybody did anything wrong.

Kristopher Johnson
  • One of the more obvious artifacts of the Windows API heritage is the '`LP`' prefix used on many pointer types - that prefix stands for 'long pointer' (also known as a 'far pointer') and was required for many parameters due to Win16's underlying segmented architecture, where a pointer could be 'near' (pointing within an assumed segment) or 'far' (where the segment was specified as part of the pointer). Near and far pointers are long gone with Win32, but the names remain the same. – Michael Burr Apr 15 '10 at 18:44
  • It's kind of funny. The Windows platform headers still have defines for FAR pointers. Strange that they still haven't cleaned up the mess, after 20 or so years. The C Win32 API feels like it's been left and forgotten. – Apr 15 '10 at 18:57
  • @Mads: Forgotten? Hardly. Keeping the old definitions allows older apps to be updated without having to rewrite them. – Adrian McCarthy Apr 15 '10 at 19:14
  • Another obvious artifact is the WPARAM type. The "W" originally stood for "word", meaning a 16-bit value. Now it is a 32-bit value, but they kept the "W" prefix. See http://blogs.msdn.com/oldnewthing/archive/2003/11/25/55850.aspx for other commentary. – Kristopher Johnson Apr 16 '10 at 11:28
  • Another reason to add to the great list of reasons here was interoperability across different languages. For example, you can call native code from, say, VB, and the Windows API needs to make sure everything is using e.g. a "32-bit unsigned integer" instead of hoping that whatever your C compiler treated as `unsigned int` matched your VB `UInteger` or Delphi `Cardinal`, or could be coerced into your VB `Integer` or Pascal `LongInt` or whatever. – Jason C Nov 23 '16 at 19:30
  • @KristopherJohnson: "*Another obvious artifact is the WPARAM type. The "W" originally stood for "word", meaning 16-bit value. Now, it is a 32-bit value, but they kept the "W" prefix*" - and now it is actually a pointer-sized value (same with `LPARAM`). They are 32-bit or 64-bit depending on the OS platform. – Remy Lebeau Feb 20 '18 at 18:18
  • Very well worded, indeed! While you do explain the necessity for an ABI, you don't mention that term. Defining an ABI requires an additional layer of abstraction on top of programming languages, expressed in terms of programming languages, to allow for both sides of the contract to change (programming language implementations and the API implementation). I'm not sure how I would incorporate that into this answer, or whether it adds anything significant at all. – IInspectable Feb 20 '18 at 19:22

Win32 actually has very few primitive types. What you're looking at is decades of built-up #defines and typedefs and Hungarian notation. Because there were so few types and little or no IntelliSense, developers gave themselves "clues" as to what a particular type was actually supposed to represent.

For example, there is no boolean type, but there is an "aliased" integer (BOOL) that tells you a particular variable is supposed to be treated as a boolean. Take a look at the contents of WinDef.h to see what I mean.
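To give a flavour of it, here is roughly what a few of the common aliases boil down to (simplified; the actual definitions in WinDef.h and WinNT.h are wrapped in further macros):

```cpp
// Simplified versions of definitions from WinDef.h / WinNT.h.
typedef int             BOOL;     // the "boolean" -- really just an int
typedef unsigned int    UINT;     // unsigned integer
typedef unsigned long   DWORD;    // 32-bit unsigned value ("double word")
typedef unsigned short  WORD;     // 16-bit unsigned value
typedef wchar_t         WCHAR;    // 16-bit (UTF-16) wide character
typedef const WCHAR    *LPCWSTR;  // "long pointer to a constant wide string"
typedef BOOL           *LPBOOL;   // "long pointer to a BOOL"
```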

You can take a look here: http://msdn.microsoft.com/en-us/library/aa383751(VS.85).aspx for a peek at the veritable tip of the iceberg. For example, notice how HANDLE is the base typedef for every other object that is a "handle" to a Windows object. Of course, HANDLE is defined somewhere else as a primitive type.
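The handle types follow the same pattern. Roughly (again simplified, and assuming STRICT is defined), the headers do something like this, so the compiler can tell one kind of handle from another:

```cpp
// Simplified from WinNT.h / WinDef.h.
typedef void *HANDLE;    // a generic, opaque handle

// Each specific handle kind gets its own dummy struct type, so passing an
// HBITMAP where an HWND is expected is a compile-time error.
#define DECLARE_HANDLE(name) struct name##__ { int unused; }; \
                             typedef struct name##__ *name

DECLARE_HANDLE(HWND);      // handle to a window
DECLARE_HANDLE(HBITMAP);   // handle to a bitmap
```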

Paul Sasik

A coworker of mine would say, "There is no problem that can't be solved (obfuscated?) by a level of indirection." In Win32 you'll be dealing with WCHAR, UINT, etc., and you'll get used to it. When you deploy that DLL, you won't have to worry about which basic type a WCHAR or UINT compiles to; it will "just work".

It is best to read through some of the documentation to get used to it, especially the "wide char" support (WCHAR, etc.). There's a nice definition of WCHAR on MSDN.

AlG
  • "There is no complexity problem in programming that cannot be eased by adding a layer of indirection. And there is no performance problem in programming that cannot be eased by removing a layer of indirection." - Donald Knuth – Simon Buchan Apr 16 '10 at 11:38

UINT is an unsigned integer. If a parameter value will not / cannot be negative, it makes sense to specify unsigned. LPCWSTR is a pointer to a const wide-character string, while WCHAR* is non-const.

You should probably compile your app for UNICODE when working with wide chars, or use a conversion routine to convert from narrow to wide.
http://msdn.microsoft.com/en-us/library/dd319072%28VS.85%29.aspx

http://msdn.microsoft.com/en-us/library/dd374083%28v=VS.85%29.aspx
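For the opposite direction the question describes (getting a WCHAR* into a std::string), the usual tool is WideCharToMultiByte rather than pushing characters back one at a time (MultiByteToWideChar goes the other way). A minimal sketch, with most error handling omitted and the helper name ToUtf8 made up for illustration:

```cpp
#include <windows.h>
#include <string>

// Convert a NUL-terminated wide (UTF-16) string to a UTF-8 std::string.
// Sketch only: real code should check the return values more carefully.
std::string ToUtf8(const WCHAR *wide)
{
    // First call asks how many bytes the UTF-8 result needs, including the NUL.
    int bytes = WideCharToMultiByte(CP_UTF8, 0, wide, -1, nullptr, 0, nullptr, nullptr);
    if (bytes <= 0)
        return std::string();

    std::string narrow(bytes, '\0');
    // Second call performs the actual conversion into the buffer.
    WideCharToMultiByte(CP_UTF8, 0, wide, -1, &narrow[0], bytes, nullptr, nullptr);
    narrow.resize(bytes - 1);   // drop the trailing NUL that was written into the buffer
    return narrow;
}
```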

Kyle Alons