
I see the u8, u16, u32 and u64 data types being used in kernel code, and I am wondering why there is a need to use u8, u16, u32 or u64 rather than unsigned int.


Often when working close to the hardware or when trying to control the size/format of a data structure you need to have precise control of the size of your integers.

As for u8 vs uint8_t, this is simply because Linux predated <stdint.h> being available in C, which is technically a C99-ism, but in my experience is available on most modern compilers even in their ANSI-C / C89 modes.

Brian McFarland
    And `u8` involves less typing :-) – TripeHound Jun 17 '15 at 16:13
  • 18
    True... but it gets super annoying when you need to mix & match libraries and everyone tries to define their own known-width types, so you'll have `U8`, `u8`, `uint8`, `BYTE`, `UINT8` and `uint8_t` all in the same file. You can even have potential conflicts that end up generating warnings, most commonly with the 32- and 64-bit types, which may have multiple valid ways to typedef them on a given platform. For new code, please, please, please just stick with stdint.h types :). – Brian McFarland Jun 17 '15 at 16:19
  • I am using `u8` `u16` etc in kernel space with `linux/types.h`. Is `stdint.h` for user space? –  Jun 17 '15 at 17:09
  • 3
    You can use the standard names (`uint8_t`, etc) in kernel mode too. Typedefs for those are in `linux/types.h` as well. – Gil Hamilton Jun 17 '15 at 17:28
  • 2
    @BrianMcFarland as long as `u8` etc are your *own* typedefs that you can guarantee are identical to the standard ones, you'll be fine though. As Gil Hamilton said, [the kernel does this](http://lxr.free-electrons.com/source/include/linux/types.h) – o11c Jun 17 '15 at 18:50

Adding my 10 cents to this answer:

u64 means an 'unsigned 64-bit' value, so, depending on the architecture the code will run on or be compiled for, it must be defined differently in order to really be 64 bits long.

For instance, on an x86-64 Linux machine, an unsigned long is 64 bits long, so u64 for that machine could be defined as follows:

typedef unsigned long u64;

The same applies for u32. On an x86-64 machine, unsigned int is 32 bits long, so u32 for that machine could be defined as follows:

typedef unsigned int u32;

You'll generally find the typedef declaration for these types on a types.h file which corresponds to the architecture you're compiling your source to.


Sharing what I learned about this question recently.

The reason why we need explicitly sized types such as u32 is that the normal C data types are not the same size on all architectures.

The following table shows how long integers and pointers differ in size (in bits) across common data models:

    Data model                  int   long   pointer
    ILP32 (32-bit Linux)        32    32     32
    LP64  (64-bit Linux/macOS)  32    64     64
    LLP64 (64-bit Windows)      32    32     64

In this way, u32 can guarantee that you get a 4-byte-long integer.

Chris Bao